Alice
09/27/2022, 1:34 PM
2022/09/27 13:20:48.886 INFO [LLRealtimeSegmentDataManager_table__0__224__20220927T0712Z] [telemetry__0__224__20220927T0712Z] Consumed 0 events from (rate:0.0/s), currentOffset=261332557, numRowsConsumedSoFar=633915, numRowsIndexedSoFar=633915
Nagendra Gautham Gondi
09/27/2022, 7:18 PM
Caught exception in state transition from OFFLINE -> ONLINE for resource: caseData_REALTIME,
Controller:
Reading segments debug info from servers: [Server_pinot-server-0.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098] for table: caseData_REALTIME
Server: Server_pinot-server-0.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098 returned error: 404
Mohit Garg4628
09/28/2022, 4:30 AM
Mayank
Edgaras Kryževičius
09/28/2022, 6:52 AM
Caused by: java.lang.ClassNotFoundException: org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner
Here is my spark-submit command:
spark-submit \
--class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
--master local \
--deploy-mode client \
--conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
--conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
--conf "spark.executor.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar -jobSpecFile ${PINOT_DISTRIBUTION_DIR}/spark_job_spec.yaml
Here is my spark_job_spec.yaml file:
executionFrameworkSpec:
  name: 'spark'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentUriPushJobRunner'
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentMetadataPushJobRunner'
  extraConfigs:
    stagingDir: /path/to/staging
jobType: SegmentCreationAndTarPush
inputDirURI: '/path/to/input'
outputDirURI: '/path/to/output'
overwriteOutput: true
pinotFSSpecs:
  - scheme: adl2
    className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
    configs:
      accountName: 'account-name'
      accessKey: 'sharedAccessKey'
      fileSystemName: 'fs-name'
recordReaderSpec:
  dataFormat: 'parquet'
  className: 'org.apache.pinot.plugin.inputformat.parquet.ParquetNativeRecordReader'
tableSpec:
  tableName: 'spire'
pinotClusterSpecs:
  - controllerURI: 'http://50.107.051.240:9000'
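A likely cause of the ClassNotFoundException above: the job spec names the runners under org.apache.pinot.plugin.ingestion.batch.spark (the Spark 2 plugin package), while the classpath ships pinot-batch-ingestion-spark-3.2, whose runners live under batch.spark3. A minimal sketch of the corrected executionFrameworkSpec, assuming the Spark 3 plugin jar from the command above (the follow-up spec later in this thread uses exactly these names):

executionFrameworkSpec:
  name: 'spark'
  # the spark-3.2 plugin packages its runners under batch.spark3, not batch.spark
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentUriPushJobRunner'
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentMetadataPushJobRunner'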
Piyush Chauhan
09/28/2022, 10:52 AM
Edgaras Kryževičius
09/28/2022, 11:47 AM
Caused by: java.lang.IllegalStateException: PinotFS for scheme: abfs has not been initialized
This is the spark-submit command I am running:
spark-submit \
--class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
--master local \
--deploy-mode client \
--conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
--conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
--conf "spark.executor.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar -jobSpecFile ${PINOT_DISTRIBUTION_DIR}/SparkIngestionJob.yaml
SparkIngestionJob.yaml:
executionFrameworkSpec:
  name: 'spark'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentUriPushJobRunner'
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentMetadataPushJobRunner'
  extraConfigs:
    stagingDir: examples/batch/airlineStats/staging
jobType: SegmentCreationAndTarPush
inputDirURI: 'abfs://fs@accountname/...'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 'examples/batch/airlineStats/segments'
overwriteOutput: true
pinotFSSpecs:
  - scheme: adl2
    className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
    configs:
      accountName: '..'
      accessKey: '..'
      fileSystemName: '..'
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'airlineStats'
  schemaURI: 'http://20.207.206.121:9000/tables/airlineStats/schema'
  tableConfigURI: 'http://20.207.206.121:9000/tables/airlineStats'
segmentNameGeneratorSpec:
  type: normalizedDate
  configs:
    segment.name.prefix: 'airlineStats_batch'
    exclude.sequence.id: true
pinotClusterSpecs:
  - controllerURI: 'http://20.207.206.121:9000'
pushJobSpec:
  pushParallelism: 2
  pushAttempts: 2
  pushRetryIntervalMillis: 1000
I am also attaching my values.yml file, which is used to deploy Pinot using Helm.
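The IllegalStateException above is a scheme mismatch: inputDirURI uses abfs://, but the only pinotFSSpec registered is for scheme adl2, so no PinotFS gets initialized for abfs. A minimal sketch of one fix, assuming ADLSGen2PinotFS should serve both schemes, is to register abfs alongside adl2:

pinotFSSpecs:
  - scheme: adl2
    className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
    configs:
      accountName: '..'
      accessKey: '..'
      fileSystemName: '..'
  - scheme: abfs
    className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
    configs:
      accountName: '..'
      accessKey: '..'
      fileSystemName: '..'

Alternatively, rewrite inputDirURI with the adl2:// scheme so it matches the spec that is already registered.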
Tommaso Peresson
09/28/2022, 2:50 PM
Ken Krugler
09/28/2022, 3:25 PM
SELECT sum(metric) AS sumMetric, key
FROM table
WHERE dim1 = 'xx' AND dim2 >= 19144 AND dim2 <= 19173
AND dim3 NOT IN ('yy', 'zz')
GROUP BY key ORDER BY sumMetric DESC LIMIT 3
Thomas Steinholz
09/28/2022, 3:26 PM
I’m having trouble getting the RealtimeToOfflineSegmentsTask running… I’ve been following the guide and added the task config, but the task stays at the status of NOT_STARTED with a {} task config in the task view, giving a 404 error when trying to run. Any idea what is not correctly configured?
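A few hedged checks for a task stuck in NOT_STARTED, based on the standard docs: the task must be declared in the realtime table config under taskTypeConfigsMap, a Minion worker must be running, and the controller has to schedule tasks (for example via controller.task.frequencyPeriod, or the cron scheduler via controller.task.scheduler.enabled=true). A minimal table-config sketch; the two period values here are placeholders:

"task": {
  "taskTypeConfigsMap": {
    "RealtimeToOfflineSegmentsTask": {
      "bucketTimePeriod": "1d",
      "bufferTimePeriod": "2d"
    }
  }
}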
Nizar Hejazi
09/28/2022, 8:26 PM
SELECT id FROM role_with_company WHERE (isPartialAdmin IS NULL)=true AND company='{company_id}'
I get - for some values of company_id - the following java.lang.IndexOutOfBoundsException exception back:
PrestoExternalError(type=EXTERNAL, name=PINOT_EXCEPTION, message="Query SELECT "id" FROM role_with_company WHERE (("company" = {company_id}) AND (("isPartialAdmin" IS NULL) = true)) LIMIT 100000 encountered exception {"message":"QueryExecutionError:\njava.lang.IndexOutOfBoundsException\n\tat java.base/java.nio.Buffer.checkIndex(Buffer.java:687)\n\tat java.base/java.nio.DirectCharBufferU.get(DirectCharBufferU.java:269)\n\tat org.roaringbitmap.buffer.MappeableArrayContainerCharIterator.nextAsInt(MappeableArrayContainer.java:1876)\n\tat org.roaringbitmap.buffer.ImmutableRoaringBitmap$ImmutableRoaringIntIterator.next(ImmutableRoaringBitmap.java:113)","errorCode":200} with pinot query "SELECT "id" FROM role_with_company WHERE (("company" = {company_id}) AND (("isPartialAdmin" IS NULL) = true)) LIMIT 100000"", query_id=20220928_202056_30456_i2zba)
isPartialAdmin is a boolean dimension dictionary-encoded field. The error is happening very frequently.
Tiger Zhao
09/28/2022, 8:30 PM
controller.deleted.segments.retentionInDays=1. Is this expected? And is it safe to manually delete the segments under that folder?
Neeraja Sridharan
09/28/2022, 11:14 PMrobert zych
09/29/2022, 4:03 AM
select d1, d2, max(metric) as max_metric
from t
where datetrunc('DAY', created_at_epoch_ms) = 1660867200000
group by d1, d2
order by max_metric desc
limit 1
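If this query is slow, a hedged equivalent rewrite (assuming created_at_epoch_ms holds epoch milliseconds): replacing datetrunc on the column with a half-open range lets Pinot prune segments and use range indexes instead of evaluating the function per row. 1660867200000 is 2022-08-19T00:00:00Z and 1660953600000 is one day later:

select d1, d2, max(metric) as max_metric
from t
where created_at_epoch_ms >= 1660867200000
  and created_at_epoch_ms < 1660953600000
group by d1, d2
order by max_metric desc
limit 1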
Prakhar Pande
09/29/2022, 8:20 AM
Piyush Chauhan
09/29/2022, 8:29 AM
[
  {
    "tableName": "packages_REALTIME",
    "numSegments": 79,
    "numServers": 2,
    "numBrokers": 2,
    "segmentDebugInfos": [],
    "serverDebugInfos": [],
    "brokerDebugInfos": [
      {
        "brokerName": "Broker_pinot-broker-0.dev-pinot-broker-headless.svc.cluster.local_8099",
        "idealState": "ONLINE",
        "externalView": "ONLINE"
      },
      {
        "brokerName": "Broker_pinot-broker-1.pinot-broker-headless.svc.cluster.local_8099",
        "idealState": "ONLINE",
        "externalView": "ONLINE"
      }
    ],
    "tableSize": {
      "reportedSize": "5 MB",
      "estimatedSize": "5 MB"
    },
    "ingestionStatus": {
      "ingestionState": "UNHEALTHY",
      "errorMessage": "Did not get any response from servers for segment: packages__0__9__20220927T1248Z"
    }
  }
]
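Given the UNHEALTHY status above ("Did not get any response from servers for segment: packages__0__9__20220927T1248Z"), one hedged next step is to ask the controller which server hosts the consuming segment and what state it reports (host and port are placeholders):

curl "http://localhost:9000/tables/packages/consumingSegmentsInfo"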
Alice
09/29/2022, 8:58 AM
Tommaso Peresson
09/29/2022, 4:00 PM
Are there any plans to support distinctcounthll in the merge-rollup task in the near future?
Abhijeet Kushe
09/29/2022, 5:47 PM
Ken Krugler
09/30/2022, 12:18 AM
Edgaras Kryževičius
09/30/2022, 9:10 AM
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.240.0.12 executor 1): com.azure.storage.file.datalake.models.DataLakeStorageException: Status code 409, "{"error":{"code":"PathAlreadyExists","message":"The specified path already exists.\nRequestId:2afa0318-501f-0004-38aa-d4c373000000\nTime:2022-09-30T08:57:09.6163232Z"}}"
In the sparkIngestionJobSpec.yaml file I have outputDirUri set to: outputDirUri='adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data/spireStatsV2/'
In the controller configuration:
controller.data.dir=adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data
I can see that once I started the spark job, after some time it created the segment file spireStatsV2_batch.tar.gz in adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data/spireStatsV2/event_date=2022-08-20/event_type=other/. I imagine the same spark job then tries to make a file with the same name on the same path and fails. How could I fix it?
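One hedged reading of the 409: outputDirUri points inside controller.data.dir, so the segments the Spark job generates land directly in the controller's deep-store path, and a task retry (the stack trace shows the task failed 4 times) then finds the tar file already there. The usual layout keeps the job output separate and lets the push step copy segments into the deep store, for example:

outputDirURI: 'adl2://fs@ac.dfs.core.windows.net/qa/pinot/ingestion-output/spireStatsV2/'

(the ingestion-output path is a placeholder; any location outside controller.data.dir fits the sketch).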
Wojciech Wasik
09/30/2022, 12:37 PM
I am trying to ingest data from a csv file. I have the following configs
Slackbot
09/30/2022, 3:28 PM
Enzo DECHAENE
09/30/2022, 3:43 PM
Ali Atıl
09/30/2022, 8:19 AM
{"code":400,"error":"TableConfigs: mytable already exists. Use PUT to update existing config"}
I use the commands below inside the controller shell to create my tables.
bin/pinot-admin.sh AddTable -schemaFile schema.json -tableConfigFile offline.json -exec
bin/pinot-admin.sh AddTable -schemaFile schema.json -tableConfigFile realtime.json -exec
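A hedged note on the 400: running AddTable twice makes the second invocation try to create the TableConfigs entry for mytable again. As the error message itself suggests, one way to add the second table type is to update the existing entry over REST with PUT (host is a placeholder; tableConfigs.json would hold the combined schema plus offline and realtime configs):

curl -X PUT -H "Content-Type: application/json" \
  -d @tableConfigs.json \
  "http://localhost:9000/tableConfigs/mytable"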
troywinter
10/01/2022, 11:56 AM
{
  "tenantRole": "SERVER",
  "tenantName": "Tracker",
  "offlineInstances": 1,
  "realtimeInstances": 1
}
and the response is:
{
  "_code": 500,
  "_error": "Index 0 out of bounds for length 0"
}
I’m not able to find any logs related to this endpoint in the controller logs.
Prakhar Pande
10/03/2022, 2:46 PM
Luis Fernandez
10/03/2022, 7:40 PM
I’m upgrading to 0.11.0 from 0.10.0, starting with the controller, and I’m getting the following exception:
java.lang.RuntimeException: Caught exception while initializing ControllerFilePathProvider
at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:555) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.BaseControllerStarter.setUpPinotController(BaseControllerStarter.java:374) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.BaseControllerStarter.start(BaseControllerStarter.java:322) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:118) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:87) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:251) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:304) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:250) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:196) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:187) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:165) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:196) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
Caused by: org.apache.pinot.controller.api.resources.InvalidControllerConfigException: Caught exception while initializing file upload path provider
at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:107) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:553) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
... 20 more
Caused by: java.lang.NullPointerException
at org.apache.pinot.shaded.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:770) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.BlobId.of(BlobId.java:114) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.BlobId.fromPb(BlobId.java:118) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.BlobInfo.fromPb(BlobInfo.java:1160) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.Blob.fromPb(Blob.java:958) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.StorageImpl.get(StorageImpl.java:330) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at com.google.cloud.storage.Bucket.get(Bucket.java:827) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.plugin.filesystem.GcsPinotFS.existsDirectory(GcsPinotFS.java:264) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.plugin.filesystem.GcsPinotFS.exists(GcsPinotFS.java:329) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.plugin.filesystem.GcsPinotFS.exists(GcsPinotFS.java:142) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:71) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:553) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
... 20 more
It’s a NullPointerException and I’m not sure why it’s getting this error when it shouldn’t. Maybe I need to give my SA some new permissions that it didn’t have before? Or what else could be causing this? This works properly in 0.10.0.
Tao Hu
10/03/2022, 9:26 PM
Does HISTOGRAM in 0.11.0 support distinct count? From the documentation it seems like it does not.
Eaugene Thomas
10/04/2022, 2:00 PM
When adding a table via /tables, I am getting the response:
{
  "status": "Table test_demo_REALTIME succesfully added"
}
The table is set to ingest from Kafka, but the Controller UI doesn’t show the table name. There is no error trace in the controller / broker logs either. Any help on debugging this?
PS: I have added the schema for test_demo previously (this is shown in the Controller UI).
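Two hedged sanity checks against the standard controller REST API (host and port are placeholders) to confirm whether the table config actually landed:

curl "http://localhost:9000/tables"
curl "http://localhost:9000/tables/test_demo_REALTIME"

If the second call returns the config but the UI still doesn’t list the table, the problem is on the UI side; if it returns a 404, the creation did not persist despite the success response.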