Hi Andrew,
Sorry for the slight delay in responding; I finally had a chance to
download the HDP 2.4.x sandbox today and give things a go. With
your thorough steps, I was able to reproduce the error.
It is worth noting that I stopped at step 6 and searched for details
about the IllegalArgumentException. The Ambari JIRA has this
ticket: https://issues.apache.org/jira/browse/AMBARI-14751 which
suggests that the application calling out to YARN needs to provide
the HDP version like -Dhdp.version=2.4.0.0-169.
My approach for this was to edit the 'geomesa' script and change the
first line specifying GEOMESA_OPTS to be this:
GEOMESA_OPTS="-Dhdp.version=2.4.0.0-169 -Duser.timezone=UTC -DEPSG-HSQL.directory=/tmp/$(whoami)"
With that quick change, I was able to run some small MR ingests of T-Drive data. So that brings up the question: why didn't setting the HDP version in $JAVA_OPTS sort things? Because the final lines of the 'geomesa' script do not call Java with $JAVA_OPTS. (We should update the script to do so!)
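For reference, a minimal sketch of what that update could look like; the script's actual final line and main class vary by release, so the names below are placeholders:

    # final line of the 'geomesa' script today (illustrative):
    #   java ${GEOMESA_OPTS} -cp ${GEOMESA_CLASSPATH} <geomesa-main-class> "$@"
    # updated so a caller's JAVA_OPTS is honored as well:
    java ${JAVA_OPTS} ${GEOMESA_OPTS} -cp ${GEOMESA_CLASSPATH} <geomesa-main-class> "$@"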
Anyhow, as a bit more info: since I know that in /usr/hdp the 'current' directory is a symlink to the actual version, I tried to get fancy and set the HDP version to 'current'. ;) If you do that, you'll get this error:
File does not exist: hdfs://sandbox.hortonworks.com:8020/hdp/apps/current/mapreduce/mapreduce.tar.gz
java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/hdp/apps/current/mapreduce/mapreduce.tar.gz
Incidentally, this makes it clear that to find valid values for hdp.version, one can check /hdp/apps/ in HDFS.
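For example:

    hdfs dfs -ls /hdp/apps
    # each directory listed here is a usable hdp.version value, e.g. 2.4.0.0-169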
Anyhow, hope that helps. Let's keep the conversation going about
what would make sense to better support HDP.
Jim
On 7/31/2016 8:47 AM, Andrew Morgan wrote:
Hi Group,
Some here may have been following my thread working through bug fixes to get geomesa running on an HDP cluster. I'm still not able to ingest data in distributed mode, but I got past the initial errors and am now on to new ones - so making progress, I think.
If anyone can help - please pitch in.
Apologies for such a long email, but I'm including my running install notes, errors found, and fixes that work so far. I'll eventually write these up and stick them up on my github.
## Running Geomesa on a Hortonworks HDP cluster.
NOTE: I have a single node HDP dev cluster. In
larger clusters or with different configurations, your mileage
may vary. I’m running: HDP-2.4.0.0-169
1. Follow the geomesa installation instructions.
Ensure you install the geomesa version that works with
accumulo 1.7+
2. When you are able to run `geomesa env` and get results, you are ready to ingest data.
3. First I suggest you ingest local files. Follow the instructions to download a sample GDELT events file and pop it into a data directory on the local filesystem.
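For example, assuming the standard GDELT download location (the target directory is illustrative; match it to the path used in your load script, and adjust the date as needed):

    mkdir -p ~/data/gdelt && cd ~/data/gdelt
    wget http://data.gdeltproject.org/events/20160601.export.CSV.zip
    unzip 20160601.export.CSV.zip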
4. In $GEOMESA_HOME create a loadscripts directory where you can create test ingestion scripts:

    cd $GEOMESA_HOME; mkdir loadscripts; cd loadscripts
5. Create a test load script for your data as below. Note I'm including parameters that expose passwords, so this is not ideal - be wary in prod - but to hack a fix it was quickest.

    vi load_local_gdelt.sh

Create a base script that includes the hack to set the hdp.version option properly and echo it out to the logs.
##
# this is a test of the geomesa ingestion.
accumulo_instance_id_param="hdp-accumulo-instance"
accumulo_user='your_user_name'
accumulo_pw='your_accumulo_pw'
zookeeper_param='your_zookeeper_host:2181'   # set this to your zookeeper quorum

myHortonVersion=`hadoop version | grep "^This command" | sed "s/^.*hdp.//" | sed "s/.hadoop.hadoop-common.*//" | sed "s/^/-Dhdp.version=/"`

echo "using these java opts:"
echo ${myHortonVersion}
export JAVA_OPTS=${myHortonVersion}
echo "Using this Java Option: "${JAVA_OPTS}

geomesa ingest \
  -u ${accumulo_user} -p ${accumulo_pw} \
  -i ${accumulo_instance_id_param} -z ${zookeeper_param} \
  -c myGeomesa.gdelt -s gdelt-schema \
  -C gdelt-reader \
  /home/andrew/data/geo/geomesa-1.2.4/dist/tools/geomesa-tools-1.2.4/data/gdelt/20160601.export.CSV
##
Then if you run this load script you should be able to ingest local data, as the version setting is passed through properly.
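If you source the script (. load_local_gdelt.sh), a quick sanity check on the derived version string is possible, since /usr/hdp contains one directory per installed version (plus the 'current' symlink):

    ls /usr/hdp
    echo ${myHortonVersion}   # should print -Dhdp.version=<one of those directories>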
6. Now that we have a working ingestion routine to test with, we can try running a load from HDFS in distributed mode. Copy and rename your load script for editing:

    cp load_local_gdelt.sh load_hdfs_gdelt.sh
    vi load_hdfs_gdelt.sh

These are the edits I'm testing:
##
# this is a test of the geomesa ingestion.
accumulo_instance_id_param="hdp-accumulo-instance"
accumulo_user='your_user_name'
accumulo_pw='your_accumulo_pw'
zookeeper_param='your_zookeeper_host:2181'   # set this to your zookeeper quorum

myHortonVersion=`hadoop version | grep "^This command" | sed "s/^.*hdp.//" | sed "s/.hadoop.hadoop-common.*//" | sed "s/^/-Dhdp.version=/"`

echo "using these java opts:"
echo ${myHortonVersion}
export JAVA_OPTS=${myHortonVersion}
echo "Using this Java Option: "${JAVA_OPTS}

geomesa ingest \
  -u ${accumulo_user} -p ${accumulo_pw} \
  -i ${accumulo_instance_id_param} -z ${zookeeper_param} \
  -c myGeomesa.gcam -s gcam-schema \
  -C gcam-reader \
  hdfs:///user/feeds/gdelt/datastore/GcamGeo/GCAM_201606*.csv
##
*Note: I'm ingesting my own dataset here, and am referencing a file glob in HDFS rather than a local file. Adjust as needed for your data / SFTS config.
If you run this you will most likely get the following error on HDP 2.4.0.0-169:
[andrew@gzet loadscripts]$ . load_gcam.sh
using these java opts:
-Dhdp.version=2.4.0.0-169
Using this Java Option: -Dhdp.version=2.4.0.0-169
Using GEOMESA_HOME = /home/andrew/data/geo/geomesa-1.2.4/dist/tools/geomesa-tools-1.2.4
Creating schema gcam-schema
Running ingestion in distributed mode
Submitting job - please wait...
Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
java.lang.IllegalArgumentException: Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
7. Fixing Hortonworks settings to resolve the error: open the Ambari UI and navigate to the MapReduce2 Configs tab. Search the configuration for "mapreduce.application" and you'll see the short list of two properties to correct. These are:
mapreduce.application.classpath
mapreduce.application.framework.path
After some trial and error I discovered you need to append /hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mrframework to the end of your mapreduce.application.classpath, and set mapreduce.application.framework.path to /hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mrframework.
Additionally, you also need to create a new custom property. Do so under the configuration section called "Custom mapred-site" by clicking "add property". In the pop-up window, fill in these values for your version of HDP (matching the one calculated in the load script above). In my case:

key = hdp.version
value = 2.4.0.0-169

and hit save. This will create an hdp.version property that will be included in the configurations held in mapred-site.xml on a restart, and allow your classpaths to remain relative rather than hardcoded. I found this tip on the web, and it seems to work and be better than hardcoding directories.
To finalise the configuration change, click save, jot some version notes, and then follow the on-screen prompts to restart all your affected mapreduce2, yarn, and hive services.
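Once the services are back up, you can confirm the new property landed in the live client config (the path below is the usual HDP location; adjust if yours differs):

    grep -A1 'hdp.version' /etc/hadoop/conf/mapred-site.xml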
8. OK - now we can re-run the distributed map reduce ingestion load script.
[andrew@gzet loadscripts]$ . load_gcam.sh
using these java opts:
-Dhdp.version=2.4.0.0-169
Using this Java Option: -Dhdp.version=2.4.0.0-169
Using GEOMESA_HOME = /home/andrew/data/geo/geomesa-1.2.4/dist/tools/geomesa-tools-1.2.4
Creating schema gcam-schema
Running ingestion in distributed mode
Submitting job - please wait...
[                    ] 0% complete 0 ingested 0 failed in 00:00:43
Job failed with state FAILED due to: Application application_1469962580597_0003 failed 2 times due to AM Container for appattempt_1469962580597_0003_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://gzet.bytesumo.com:8088/cluster/app/application_1469962580597_0003 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e38_1469962580597_0003_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
Distributed ingestion complete in 00:00:43
Ingested 0 features with no failures.
We can see that our environment variable issues went away, that it found the mapreduce framework code, and that it even got as far as trying to launch the job's yarn containers.
When I examine the logs and zoom in on the
actual error, I find the following:
Log Type: stderr
Log Upload Time: Sun Jul 31 11:41:01 +0100 2016
Log Length: 88
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
It seems that the new problem is another classpath issue, but this time in yarn. Searching maven central I see that the un-findable class is delivered in this jar file: hadoop-mapreduce-client-app. To check whether it's being picked up on my yarn paths, I try this:
[andrew@gzet loadscripts]$ ls -lart `yarn classpath | sed 's/:/ /g'` | grep hadoop-mapreduce-client-app
ls: cannot access mysql-connector-java.jar: No such file or directory
-rw-r--r--. 1 root root 514884 Feb 10 06:44 /usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar
lrwxrwxrwx. 1 root root     49 Mar 29 01:29 /usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar -> hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar
lrwxrwxrwx. 1 root root     88 Mar 29 01:31 hadoop-mapreduce-client-app.jar -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar
lrwxrwxrwx. 1 root root     88 Mar 29 01:31 hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar -> /usr/hdp/2.4.0.0-169/hadoop-mapreduce//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar
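As a further check that the jar on the classpath really contains the missing class, something like this should work (using the versioned path from the listing above):

    for j in /usr/hdp/2.4.0.0-169/hadoop-mapreduce/hadoop-mapreduce-client-app*.jar; do
      unzip -l "$j" | grep -q 'MRAppMaster.class' && echo "found in $j"
    done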
9. And this is where I'm currently stuck, and now reading up on yarn. It seems the jar is on the class path - so perhaps there is a permissions issue somewhere?
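For anyone digging in further, the full container logs can also be pulled with the YARN CLI (using the application id from the failed run above):

    yarn logs -applicationId application_1469962580597_0003 | less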
Any help at this point would be very welcome.
Andrew
Andrew J Morgan
CEO, Bytesumo Limited
E-mail: andrew@xxxxxxxxxxxx
Bytesumo Limited - Registered Company in England and Wales. 33 Brodrick Grove, London, SE2 0SR, UK. Company Number: 8505203
Hi Jason,
It's a good idea. I'm going to dig into this and drop you a line when I test out some things and find out more.
A
Sent from my iPhone
On 29 Jul 2016, at 17:26, Jason Brown <jbrown@xxxxxxxx> wrote:
Andrew,
Can you check the value of `mapreduce.application.classpath` in mapred-default.xml? If that's not set, I would use the output of `hadoop classpath` as a first guess.
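A quick way to inspect both, assuming the standard HDP client config location (the site file usually overrides the defaults):

    grep -B1 -A2 'mapreduce.application' /etc/hadoop/conf/mapred-site.xml
    hadoop classpath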
-Jason
On 07/29/2016 12:00 PM, geomesa-users-request@xxxxxxxxxxxxxxxx wrote:
Today's Topics:

   1. Re: Hortonworks/Geomesa distributed ingestion, error. (Andrew Morgan)
   2. Re: Hortonworks/Geomesa distributed ingestion, error. (Andrew Morgan)
   3. Program using Accumulo backed DataStore won't exit (Bryan Moore)
   4. Re: Program using Accumulo backed DataStore won't exit (Jim Hughes)
----------------------------------------------------------------------
Message: 1
Date: Thu, 28 Jul 2016 15:37:36 +0100
From: Andrew Morgan <andrew@xxxxxxxxxxxx>
To: Jason Brown <jbrown@xxxxxxxx>
Cc: geomesa-users@xxxxxxxxxxxxxxxx
Subject: Re: [geomesa-users] Hortonworks/Geomesa distributed ingestion, error.
Message-ID: <A560F774-8850-40B4-82E1-AFACF13FDF5E@xxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"
I did try this, and I thought it would work. In my shell script that launches the load I included these lines:

myHortonVersion=`hadoop version | grep "^This command" | sed "s/^.*hdp.//" | sed "s/.hadoop.hadoop-common.*//" | sed "s/^/-Dhdp.version=/"`
echo "determined this is the local hortonworks version:"
echo ${myHortonVersion}
export JAVA_OPTS=${myHortonVersion}
echo "Using this Java Option: "
echo ${JAVA_OPTS}

geomesa ingest \
  -u ${accumulo_user} -p ${accumulo_pw} \
  -i ${accumulo_instance_id_param} -z ${zookeeper_param} \
  -c myGeomesa.gcam -s gcam-schema \
  -C gcam-reader \
  hdfs:///user/feeds/gdelt/datastore/GcamGeo/GCAM_201606*.csv
When I run it I still get the same error, pasted below:

[andrew@gzet loadscripts]$ . load_gcam.sh
using these java opts:
-Dhdp.version=2.4.0.0-169
Using this Java Option: -Dhdp.version=2.4.0.0-169
Using GEOMESA_HOME = /home/andrew/data/geo/geomesa-1.2.4/dist/tools/geomesa-tools-1.2.4
Creating schema gcam-schema
Running ingestion in distributed mode
Submitting job - please wait...
Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
java.lang.IllegalArgumentException: Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
When I inspect the error further, I find the mapreduce.tar.gz that the code is looking for here:

/usr/hdp/2.4.0.0-169/hadoop/mapreduce.tar.gz
We see that we are looking at the wrong path, albeit with the right version embedded in it. The way the version option is assembled from the JAVA_OPTS into the URI for the file it's searching for needs adjusting. Is there a way you can pass that in as an option too?
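For what it's worth, one can also check whether the framework tarball is actually staged in HDFS at the conventional location:

    hdfs dfs -ls /hdp/apps/2.4.0.0-169/mapreduce/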
Many thanks for your help with this.

Andrew

Andrew J Morgan
CEO, Bytesumo Limited
Tel: +44 (0)7970130767
E-mail: andrew@xxxxxxxxxxxx
Bytesumo Limited - Registered Company in England and Wales. 33 Brodrick Grove, London, SE2 0SR, UK. Company Number: 8505203
On 27 Jul 2016, at 23:17, Andrew Morgan <andrew@xxxxxxxxxxxx> wrote:
James - this is really useful, thanks. I'm now working through creating valid ingestion configuration routines for my files. I had some errors in my configurations and I'm tweaking things to fix them. For now I'm quickly testing using some local test files. When my converter works properly, which should be shortly, I'll re-point it at the HDFS data and try a larger ingest of my back history, and report back in.

thanks again!
Andrew
Andrew J Morgan
CEO, Bytesumo Limited
Tel: +44 (0)7970130767
E-mail: andrew@xxxxxxxxxxxx
Bytesumo Limited - Registered Company in England and Wales. 33 Brodrick Grove, London, SE2 0SR, UK. Company Number: 8505203
On 27 Jul 2016, at 20:09, Jason Brown <jbrown@xxxxxxxx> wrote:
Andrew,
Hi and welcome! We're glad you're up and running and got this far smoothly!

The fix is to set (or append to) an environment variable JAVA_OPTS with the key `-Dhdp.version`. Use `hadoop version` to get the hdp.version. For example:
$ hadoop version
Hadoop 2.7.1.2.4.2.0-258
Subversion git@xxxxxxxxxx:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
Compiled by jenkins on 2016-04-25T05:46Z
Compiled with protoc 2.5.0
From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar
We parse this as Hadoop 2.7.1 and HDP version 2.4.2.0-258. Note the HDP version appears in both the hadoop version and the directory for hadoop-common.jar. So now set (or append) your JAVA_OPTS:

$ JAVA_OPTS="-Dhdp.version=2.4.2.0-258"
And try your ingest again. Let us know if you
run into any additional issues.
-Jason
------------------------------
Message: 2
Date: Thu, 28 Jul 2016 16:19:36 +0100
From: Andrew Morgan <andrew@xxxxxxxxxxxx>
To: geomesa-users@xxxxxxxxxxxxxxxx
Subject: [geomesa-users] Re: Hortonworks/Geomesa distributed ingestion, error.
Message-ID: <8E8F43F6-D42F-43FE-BBCB-4EC8C88EC467@xxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"
I did try this, and I thought it would work, but I still have an issue. In my shell script that launches the load I included these lines:

myHortonVersion=`hadoop version | grep "^This command" | sed "s/^.*hdp.//" | sed "s/.hadoop.hadoop-common.*//" | sed "s/^/-Dhdp.version=/"`
echo "determined this is the local hortonworks version:"
echo ${myHortonVersion}
export JAVA_OPTS=${myHortonVersion}
echo "Using this Java Option: "
echo ${JAVA_OPTS}

geomesa ingest \
  -u ${accumulo_user} -p ${accumulo_pw} \
  -i ${accumulo_instance_id_param} -z ${zookeeper_param} \
  -c myGeomesa.gcam -s gcam-schema \
  -C gcam-reader \
  hdfs:///user/feeds/gdelt/datastore/GcamGeo/GCAM_201606*.csv
When I run it I still get the same error, pasted below:

[andrew@gzet loadscripts]$ . load_gcam.sh
using these java opts:
-Dhdp.version=2.4.0.0-169
Using this Java Option: -Dhdp.version=2.4.0.0-169
Using GEOMESA_HOME = /home/andrew/data/geo/geomesa-1.2.4/dist/tools/geomesa-tools-1.2.4
Creating schema gcam-schema
Running ingestion in distributed mode
Submitting job - please wait...
Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
java.lang.IllegalArgumentException: Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
When I inspect the error further, I find the mapreduce.tar.gz that the code is looking for here:

/usr/hdp/2.4.0.0-169/hadoop/mapreduce.tar.gz
We see that we are looking at the wrong path, albeit with the right version embedded in it. The way the version option is assembled from the JAVA_OPTS into the URI for the file it's searching for needs adjusting. Is there a way to pass the URI path in as an option too?
Many thanks for your help with this.

Andrew

Andrew J Morgan
CEO, Bytesumo Limited
E-mail: andrew@xxxxxxxxxxxx
Bytesumo Limited - Registered Company in England and Wales. 33 Brodrick Grove, London, SE2 0SR, UK. Company Number: 8505203
On 27 Jul 2016, at 23:17, Andrew Morgan <andrew@xxxxxxxxxxxx> wrote:
James - this is really useful, thanks. I'm now working through creating valid ingestion configuration routines for my files. I had some errors in my configurations and I'm tweaking things to fix them. For now I'm quickly testing using some local test files. When my converter works properly, which should be shortly, I'll re-point it at the HDFS data and try a larger ingest of my back history, and report back in.

thanks again!
Andrew
Andrew J Morgan
CEO, Bytesumo Limited
Tel: +44 (0)7970130767
E-mail: andrew@xxxxxxxxxxxx
Bytesumo Limited - Registered Company in England and Wales. 33 Brodrick Grove, London, SE2 0SR, UK. Company Number: 8505203
On 27 Jul 2016, at 20:09, Jason Brown <jbrown@xxxxxxxx> wrote:
Andrew,
Hi and welcome! We're glad you're up and running and got this far smoothly!
The fix is to set (or append to) an environment variable JAVA_OPTS with the key `-Dhdp.version`. Use `hadoop version` to get the hdp.version. For example:
$ hadoop version
Hadoop 2.7.1.2.4.2.0-258
Subversion git@xxxxxxxxxx:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
Compiled by jenkins on 2016-04-25T05:46Z
Compiled with protoc 2.5.0
From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar
We parse this as Hadoop 2.7.1 and HDP version 2.4.2.0-258. Note the HDP version appears in both the hadoop version and the directory for hadoop-common.jar. So now set (or append) your JAVA_OPTS:

$ JAVA_OPTS="-Dhdp.version=2.4.2.0-258"
And try your ingest again. Let us know if
you run into any additional issues.
-Jason
------------------------------
Message: 3
Date: Fri, 29 Jul 2016 11:15:34 -0400
From: Bryan Moore <bryan@xxxxxxxxxxxxxx>
To: geomesa-users@xxxxxxxxxxxxxxxx
Subject: [geomesa-users] Program using Accumulo backed DataStore won't exit
Message-ID: <d052a8a1-5c88-675f-cd35-28bf31e4e77f@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
I've written a program using GeoMesa with an Accumulo backed DataStore that works fine but won't exit. Below is a minimal program that illustrates the problem. It prints the "Start" and "Finish" messages but doesn't exit.

Have I done something wrong, not done something I need to do, or is this a bug?
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.accumulo.core.client.AccumuloException;
import org.apache.accumulo.core.client.AccumuloSecurityException;
import org.geotools.data.DataStoreFinder;
import org.locationtech.geomesa.accumulo.data.AccumuloDataStore;

public class Minimal {
    public static void main(String[] args)
            throws AccumuloException, AccumuloSecurityException, IOException {
        System.out.println("Start");
        Map<String, String> dsConf = new HashMap<>();
        dsConf.put("instanceId", "myinstancename");
        dsConf.put("zookeepers", "localhost:2181");
        dsConf.put("user", "myuserid");
        dsConf.put("password", "mypassword");
        dsConf.put("tableName", "mysearchtable");
        dsConf.put("auths", "");
        AccumuloDataStore dataStore =
            (AccumuloDataStore) DataStoreFinder.getDataStore(dsConf);
        dataStore.dispose();
        System.out.println("Finish");
    }
}
------------------------------
Message: 4
Date: Fri, 29 Jul 2016 11:43:29 -0400
From: Jim Hughes <jnh5y@xxxxxxxx>
To: geomesa-users@xxxxxxxxxxxxxxxx
Subject: Re: [geomesa-users] Program using Accumulo backed DataStore won't exit
Message-ID: <579B79A1.9010806@xxxxxxxx>
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Hi Bryan,
Which version of GeoMesa are you using? There is a known issue with GeoMesa 1.2.3 where a thread for pre-computed stats writing is not shut down. We believe we addressed this in 1.2.4.

In terms of helping diagnose the problem, can you run jstack on the hanging JVM and look for anything notable in the output?
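For example, with the standard JDK tools:

    jps -l                       # find the pid of the hanging Minimal JVM
    jstack <pid> > threads.txt   # dump every thread's stack for inspection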
Thanks,
Jim
------------------------------
End of geomesa-users Digest, Vol 29, Issue 19
*********************************************
_______________________________________________
geomesa-users mailing list
geomesa-users@xxxxxxxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://www.locationtech.org/mailman/listinfo/geomesa-users