
The following is a summary of the problems encountered while building a Metatron cluster environment. We are sharing the email thread in this FAQ for reference.

From: mohamed.mihoubi@orange.com <mohamed.mihoubi@orange.com>
Sent: Wednesday, April 10, 2019 10:29 PM
To: 이세화님/Metatron개발팀 <sehwa.lee@sk.com>; CHERIFATOU IDRISSA <idrissachrifa@outlook.com>; metatron님/공용 ID <metatron@sk.com>
Cc: morkadomo@gmail.com <morkadomo@gmail.com>
Subject: RE: Clustering Metatron

Hello Metatron Team,

We solved the problem. Thanks for your advice.

We had a problem with our YAML configuration, as you said.

The YAML parser did not take our configuration into account (it fell back to the default Spring configuration).
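A note for readers who hit the same symptom: when Spring Boot silently falls back to its default configuration, the external YAML file is usually either misnamed or not valid YAML. A quick, hedged check, assuming Python with PyYAML is available on the server (any YAML linter works just as well; the file path is the renamed config file from the advice below):

# parse the config file exactly as a YAML library would; a stack trace here
# means the file has a syntax problem and the overrides will be ignored
python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('OK')" ./conf/application-config.yaml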

Thanks again.

Regards,

From: 이세화님 [mailto:sehwa.lee@sk.com]
Sent: Wednesday, April 10, 2019 11:46 AM
To: CHERIFATOU IDRISSA; metatron님
Cc: MIHOUBI Mohamed OBS/OAB; morkadomo@gmail.com
Subject: RE: Clustering Metatron

In addition, check the following:

password: (no carriage return) pem: /home/hadoop/.ssh/bdoc_openstack_adminprojet.pem
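Spelled out, the point seems to be that the pem reference belongs on the password line itself, with no line break in between, matching the team's example further down this thread. A sketch of the host entry in that form (host name, user, and key path are the ones already used in this thread's configuration):

hosts:
  metatron-test-jln-historical-middleman-2.novalocal:
    port: 22
    username: adminprojet
    # the key file is given as the password value ("pem:<path>"), not as a separate key
    password: pem:/home/hadoop/.ssh/bdoc_openstack_adminprojet.pem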

From: 이세화님/Metatron개발팀
Sent: Wednesday, April 10, 2019 10:52 AM
To: ‘CHERIFATOU IDRISSA’ <idrissachrifa@outlook.com>; metatron님/공용 ID <metatron@sk.com>
Cc: mohamed.mihoubi@orange.com; morkadomo@gmail.com <morkadomo@gmail.com>
Subject: RE: Clustering Metatron

Hello,

It seems that the YAML file is not recognized properly.

Try changing the YAML file’s name to ‘application-config.yaml’.

Port 8100 is the starting port used for peon processes.

Please refer to http://druid.io/docs/latest/configuration/index.html
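For reference, the port a peon listens on is not taken from the MiddleManager's own druid.port; peons are allocated ports starting at druid.indexer.runner.startPort, which defaults to 8100. A sketch of the property one could add to the MiddleManager's runtime.properties if a different range is wanted (8200 is only an example value):

# base port from which peon (task) processes are assigned; the default is 8100
druid.indexer.runner.startPort=8200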

Thanks!

From: CHERIFATOU IDRISSA <idrissachrifa@outlook.com>
Sent: Wednesday, April 10, 2019 12:24 AM
To: metatron님/공용 ID <metatron@sk.com>
Cc: mohamed.mihoubi@orange.com; morkadomo@gmail.com <morkadomo@gmail.com>
Subject: RE: Clustering Metatron

Hello,

Thanks for your response. I tried to change my configuration in ./conf/application-config.templete.yaml like this:

server:
  port: 8180

logging:
  config: classpath:logback-console.xml

polaris:
  engine:
    hostname:
      broker: http://192.168.112.10:8082
      overlord: http://192.168.112.18:8090
      coordinator: http://192.168.112.6:8081
  ingestion:
    loader:
      remoteType: SSH
      localBaseDir: /shared/data/raw/explo/metatron
      remoteDir: /shared/data/raw/explo/metatron
      hosts:
        metatron-test-jln-historical-middleman-2.novalocal:
          port: 22
          username: adminprojet
          password:
          pem: /home/hadoop/.ssh/bdoc_openstack_adminprojet.pem
  query:
    loader:
      remoteType: SSH
      localBaseDir: /shared/data/raw/explo/metatron
      remoteDir: /shared/data/raw/explo/metatron
      hosts:
        metatron-test-jln-broker-1.novalocal:
          port: 22
          username: adminprojet
          password:
          pem: /home/hadoop/.ssh/bdoc_openstack_adminprojet.pem
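A quick way to verify this loader configuration is to confirm, from the Metatron Discovery host, that the key-based SSH login works and that the remote directory exists (host, user, key path, and directory are the values from the YAML above):

# manual check of the SSH access the ingestion loader will use
ssh -i /home/hadoop/.ssh/bdoc_openstack_adminprojet.pem \
    adminprojet@metatron-test-jln-historical-middleman-2.novalocal \
    "ls -ld /shared/data/raw/explo/metatron"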

When I try to ingest data into Metatron, I don’t know why it keeps using /tmp as the temporary directory.

This is the error message I get.

I don’t know why it is using port 8100 for the MiddleManager when I already changed the port in runtime.properties:

2019-04-09T14:03:37,327 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils – Task [index_druid_test_16_03_2019-04-09T14:03:27.918Z] location changed to [TaskLocation{host=’metatron-test-jln-historical-middleman-2.novalocal’, port=8100}].

2019-04-09T14:03:37,084 INFO [main] io.druid.indexing.common.actions.RemoteTaskActionClient – Submitting action for task[index_druid_test_16_03_2019-04-09T14:03:27.918Z] to overlord[http://metatron-test-jln-metatron-discovery.novalocal:8090/druid/indexer/v1/action]: LockTryAcquireAction{interval=2018-10-09T14:03:27.913Z/2019-10-09T14:03:27.913Z}
2019-04-09T14:03:37,098 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,160 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,160 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,160 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,160 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,160 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,161 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,161 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,161 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,161 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,161 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,162 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,162 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,162 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,162 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,162 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,163 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,163 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,163 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,163 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-09T14:03:37,309 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner – Running task: index_druid_test_16_03_2019-04-09T14:03:27.918Z
2019-04-09T14:03:37,310 INFO [main] org.eclipse.jetty.server.Server – jetty-9.2.5.v20141112
2019-04-09T14:03:37,327 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils – Task [index_druid_test_16_03_2019-04-09T14:03:27.918Z] location changed to [TaskLocation{host=’metatron-test-jln-historical-middleman-2.novalocal’, port=8100}].
2019-04-09T14:03:37,328 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils – Task [index_druid_test_16_03_2019-04-09T14:03:27.918Z] status changed to [RUNNING].
2019-04-09T14:03:37,331 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient – Submitting action for task[index_druid_test_16_03_2019-04-09T14:03:27.918Z] to overlord[http://metatron-test-jln-metatron-discovery.novalocal:8090/druid/indexer/v1/action]: LockListAction{}
2019-04-09T14:03:37,514 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.LocalFirehoseFactory – Searching for all [druid_test_16_03_1554818607905.csv] in and beneath [/tmp]
2019-04-09T14:03:37,545 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner – Exception while running task[IndexTask{id=index_druid_test_16_03_2019-04-09T14:03:27.918Z, type=index, dataSource=druid_test_16_03}] com.metamx.common.ISE: Found no files to ingest! Check your schema.
at io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:123) ~[druid-server-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:282) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:216) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:458) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:430) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
2019-04-09T14:03:37,549 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils – Task [index_druid_test_16_03_2019-04-09T14:03:27.918Z] status changed to [FAILED].
2019-04-09T14:03:37,593 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle – Task completed with status: {
“id” : “index_druid_test_16_03_2019-04-09T14:03:27.918Z”,
“status” : “FAILED”,
“duration” : 240,
“reason” : “Exception: [io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:123), io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:282), io.druid.indexing.common.task.IndexTask.run(IndexTask.java:216), io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:458), io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:430), java.util.concurrent.FutureTask.run(FutureTask.java:266), java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149), java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624), java.lang.Thread.run(Thread.java:748), … more]”
}
2019-04-09T14:03:37,623 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
2019-04-09T14:03:37,623 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider as a provider class
2019-04-09T14:03:37,624 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering io.druid.server.initialization.jetty.CustomExceptionMapper as a provider class

This is the MiddleManager configuration:

druid.service=druid/middleManager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]

Best regards,

From: CHERIFATOU IDRISSA
Sent: Tuesday, April 9, 2019 9:58:57 AM
To: mohamed.mihoubi@orange.com; morkadomo@gmail.com
Subject: FW: Clustering Metatron

From: 이세화님 <sehwa.lee@sk.com>
Sent: Tuesday, April 9, 2019 5:56:08 AM
To: CHERIFATOU IDRISSA
Cc: metatron님
Subject: RE: Clustering Metatron

Dear Cherifatou Idrissa,

It seems that the MiddleManager was probably not able to find the uploaded file.

Please check the ‘polaris.ingestion’ and ‘polaris.query’ properties in the ./conf/application-config.templete.yaml file.

* polaris.ingestion: settings used for copying files to the MiddleManager.

* polaris.query: settings used when downloading the file and loading the data source.

Please refer to the example below:

polaris:
  ingestion:
    loader:
      remoteType: SSH
      localBaseDir: ${java.io.tmpdir:-/tmp}
      remoteDir: ${java.io.tmpdir:-/tmp}
      hosts:
        middlemanager_hostname01:
          port: 22
          username: metatron
          password: password
        middlemanager_hostname02:
          port: 22
          username: metatron
          password: pem:/tmp/metatron.pem
  query:
    loader:
      remoteType: SSH
      localBaseDir: ${java.io.tmpdir:-/tmp}
      remoteDir: ${java.io.tmpdir:-/tmp}
      hosts:
        broker_hostname01:
          port: 22
          username: metatron
          password: password

* ‘middlemanager_hostname’ must match the hostname in the worker list on the Overlord console.
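Putting the two pieces together: Metatron Discovery copies the uploaded file from localBaseDir on its own host to remoteDir on the MiddleManager over SSH, and the index task then searches for it under that directory (the [/tmp] seen in the logs above). A hedged way to confirm the copy actually happened, using the placeholder host name and the /tmp paths from the example:

# after kicking off an ingestion, check that the CSV reached the MiddleManager
ssh metatron@middlemanager_hostname01 "ls -l /tmp/*.csv"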

Thanks!

From: CHERIFATOU IDRISSA <idrissachrifa@outlook.com>
Sent: Tuesday, April 09, 2019 12:02 AM
To: metatron님/공용 ID <metatron@sk.com>
Subject: RE: Clustering Metatron

Hello,

I installed my Metatron cluster, but I have some issues and I can’t find where they come from. I looked on many forums but couldn’t find anything.

I’m contacting you to ask for help.

This is the message I receive when I try to ingest my CSV file into Metatron:

2019-04-08T14:35:11,633 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner – Exception while running task[IndexTask{id=index_artists_druis_metatron_2019-04-08T14:35:02.236Z, type=index, dataSource=artists_druis_metatron}] com.metamx.common.ISE: Found no files to ingest! Check your schema.
at io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:123) ~[druid-server-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:282) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:216) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:458) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:430) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]

And this is the complete message:

2019-04-08T14:35:10,898 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler – Invoking start method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.start() throws java.lang.InterruptedException] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@7a65c995].
2019-04-08T14:35:10,995 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle – Running with task: {
“type” : “index”,
“id” : “index_artists_druis_metatron_2019-04-08T14:35:02.236Z”,
“resource” : {
“availabilityGroup” : “index_artists_druis_metatron_2019-04-08T14:35:02.236Z”,
“requiredCapacity” : 1
},
“spec” : {
“dataSchema” : {
“dataSource” : “artists_druis_metatron”,
“parser” : {
“type” : “csv.stream”,
“timestampSpec” : {
“column” : “current_datetime”,
“missingValue” : “2019-04-08T14:35:02.232Z”,
“invalidValue” : “2019-04-08T14:35:02.232Z”,
“replaceWrongColumn” : true
},
“dimensionsSpec” : {
“dimensions” : [ “ConstituentID”, “DisplayName”, “ArtistBio”, “Nationality”, “Gender”, “BeginDate”, “EndDate”, “Wiki QID”, “ULAN” ],
“dimensionExclusions” : [ ],
“spatialDimensions” : [ ] },
“columns” : [ “ConstituentID”, “DisplayName”, “ArtistBio”, “Nationality”, “Gender”, “BeginDate”, “EndDate”, “Wiki QID”, “ULAN” ],
“delimiter” : “,”,
“recordSeparator” : “\\n”,
“skipHeaderRecord” : true
},
“metricsSpec” : [ {
“type” : “count”,
“name” : “count”
} ],
“enforceType” : true,
“granularitySpec” : {
“type” : “uniform”,
“segmentGranularity” : “YEAR”,
“queryGranularity” : {
“type” : “none”
},
“rollup” : true,
“append” : false,
“intervals” : [ “2018-10-08T14:35:02.232Z/2019-10-08T14:35:02.232Z” ] }
},
“ioConfig” : {
“type” : “index”,
“firehose” : {
“type” : “local”,
“baseDir” : “/tmp”,
“filter” : “artists_druis_metatron_1554734102221.csv”,
“parser” : null
}
},
“tuningConfig” : {
“type” : “index”,
“targetPartitionSize” : 5000000,
“numShards” : null,
“indexSpec” : {
“bitmap” : {
“type” : “roaring”
},
“dimensionSketches” : {
“type” : “none”
},
“secondaryIndexing” : { }
},
“buildV9Directly” : true,
“ignoreInvalidRows” : true,
“maxRowsInMemory” : 75000,
“maxOccupationInMemory” : -1
}
},
“context” : {
“druid.task.runner.dedicated.host” : “metatron-test-jln-historical-middleman-2.novalocal:8091”
},
“groupId” : “index_artists_druis_metatron_2019-04-08T14:35:02.236Z”,
“dataSource” : “artists_druis_metatron”,
“interval” : “2018-10-08T14:35:02.232Z/2019-10-08T14:35:02.232Z”
}
2019-04-08T14:35:10,997 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle – Attempting to lock file[var/druid/task/index_artists_druis_metatron_2019-04-08T14:35:02.236Z/lock].
2019-04-08T14:35:10,998 INFO [main] io.druid.indexing.worker.executor.ExecutorLifecycle – Acquired lock file[var/druid/task/index_artists_druis_metatron_2019-04-08T14:35:02.236Z/lock] in 1ms.
2019-04-08T14:35:11,015 INFO [main] io.druid.indexing.common.actions.RemoteTaskActionClient – Submitting action for task[index_artists_druis_metatron_2019-04-08T14:35:02.236Z] to overlord[http://metatron-test-jln-metatron-discovery.novalocal:8090/druid/indexer/v1/action]: LockTryAcquireAction{interval=2018-10-08T14:35:02.232Z/2019-10-08T14:35:02.232Z}
2019-04-08T14:35:11,072 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-08T14:35:11,140 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-08T14:35:11,143 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-08T14:35:11,143 INFO [main] com.metamx.http.client.pool.ChannelResourceFactory – Generating: http://metatron-test-jln-metatron-discovery.novalocal:8090
2019-04-08T14:35:11,289 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner – Running task: index_artists_druis_metatron_2019-04-08T14:35:02.236Z
2019-04-08T14:35:11,308 INFO [main] org.eclipse.jetty.server.Server – jetty-9.2.5.v20141112
2019-04-08T14:35:11,601 INFO [task-runner-0-priority-0] io.druid.segment.realtime.firehose.LocalFirehoseFactory – Searching for all [artists_druis_metatron_1554734102221.csv] in and beneath [/tmp]
2019-04-08T14:35:11,633 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner – Exception while running task[IndexTask{id=index_artists_druis_metatron_2019-04-08T14:35:02.236Z, type=index, dataSource=artists_druis_metatron}] com.metamx.common.ISE: Found no files to ingest! Check your schema.
at io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:123) ~[druid-server-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:282) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:216) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:458) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:430) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
2019-04-08T14:35:11,654 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils – Task [index_artists_druis_metatron_2019-04-08T14:35:02.236Z] status changed to [FAILED].
2019-04-08T14:35:11,660 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
2019-04-08T14:35:11,661 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider as a provider class
2019-04-08T14:35:11,661 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering io.druid.server.initialization.jetty.CustomExceptionMapper as a provider class
2019-04-08T14:35:11,661 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Registering io.druid.server.StatusResource as a root resource class
2019-04-08T14:35:11,667 INFO [main] com.sun.jersey.server.impl.application.WebApplicationImpl – Initiating Jersey application, version ‘Jersey: 1.19 02/11/2015 03:25 AM’
2019-04-08T14:35:11,681 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle – Task completed with status: {
“id” : “index_artists_druis_metatron_2019-04-08T14:35:02.236Z”,
“status” : “FAILED”,
“duration” : 365,
“reason” : “Exception: [io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:123), io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:282), io.druid.indexing.common.task.IndexTask.run(IndexTask.java:216), io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:458), io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:430), java.util.concurrent.FutureTask.run(FutureTask.java:266), java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149), java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624), java.lang.Thread.run(Thread.java:748), … more]”
}
2019-04-08T14:35:11,787 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory – Binding io.druid.server.initialization.jetty.CustomExceptionMapper to GuiceManagedComponentProvider with the scope “Singleton”

Best regards,

Cherifatou IDRISSA

From: CHERIFATOU IDRISSA <idrissachrifa@outlook.com>
Sent: Friday, March 8, 2019 11:13:21 AM
To: mohamed.mihoubi@orange.com
Subject: FW: Clustering Metatron

From: 이세화님 <sehwa.lee@sk.com>
Sent: Tuesday, March 5, 2019 7:57:29 PM
To: CHERIFATOU IDRISSA
Cc: metatron님
Subject: RE: Clustering Metatron

Hello,

Metatron Discovery and Druid can be installed on different servers.

I’m sending you an installation guide for distributed Druid.

If you read this guide, you should be able to install the metatron-customized Druid on your cluster.

* We are currently developing a tool to support Docker-based or cloud-based installation, but it is not complete yet.

Please check our Groups regularly.

Thanks for your attention!

From: CHERIFATOU IDRISSA <idrissachrifa@outlook.com>
Sent: Tuesday, March 05, 2019 6:59 PM
To: metatron님/공용 ID <metatron@sk.com>
Subject: Clustering Metatron

Hello,

I want to use Metatron Discovery on my cluster.

I would like to know if I can put the Druid customized for Metatron on another machine, and how to configure it.

What would be your recommendation for the installation? Do I need particular hardware specs? How can I optimize my installation to be more efficient?

Regards,

Cherifatou IDRISSA
