Yes, I'm using 1.2.1 on both sides, and the installation is so new that 1.2.1 was the first version installed.
2016-03-27 15:42:45,948 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:42:45,948 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51040;
2016-03-27 15:42:45,948 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51042;
2016-03-27 15:42:45,948 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51040;
2016-03-27 15:42:45,959 [tserver.TabletServer] INFO : Starting split: 1z;E;C
2016-03-27 15:42:45,960 [tserver.TabletServer] INFO : Starting split: 1z;e;c
2016-03-27 15:42:45,960 [tserver.TabletServer] INFO : Tablet split: 1z;E;C size0 0 size1 0 time 12ms
2016-03-27 15:42:45,961 [tserver.TabletServer] INFO : Tablet split: 1z;e;c size0 0 size1 0 time 13ms
2016-03-27 15:42:45,992 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51040;
2016-03-27 15:42:46,002 [tserver.TabletServer] INFO : Starting split: 1z;b;F
2016-03-27 15:42:46,003 [tserver.TabletServer] INFO : Tablet split: 1z;b;F size0 0 size1 0 time 11ms
2016-03-27 15:42:46,985 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:42:46,985 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:42:47,047 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:42:47,059 [tserver.TabletServer] INFO : Starting split: 21<;
2016-03-27 15:42:47,061 [tserver.TabletServer] INFO : Tablet split: 21<; size0 0 size1 0 time 14ms
2016-03-27 15:42:49,109 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:42:49,109 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038; action: scan; targetTable: abc; authorizations: ; range: [~METADATA_v6 visibilities: [] 9223372036854775807 false,~METADATA_v6 visibilities:%00; [] 9223372036854775807 false); columns: [visibilities::]; iterators: []; iteratorOptions: {};
2016-03-27 15:42:49,109 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51038;
2016-03-27 15:43:18,136 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,136 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050; action: scan; targetTable: abc; authorizations: ; range: [~METADATA_v6 bounds: [] 9223372036854775807 false,~METADATA_v6 bounds:%00; [] 9223372036854775807 false); columns: [bounds::]; iterators: []; iteratorOptions: {};
2016-03-27 15:43:18,136 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,204 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,665 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,665 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,792 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,792 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,866 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,866 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:18,991 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:19,047 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:19,047 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:19,047 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:19,059 [Audit ] INFO : operation: permitted; user: root; client: 141.21.70.10:51050;
2016-03-27 15:43:19,059 [tserver.TabletServer] INFO : Adding 1 logs for extent 1z;2;1 as alias 1880
2016-03-27 15:43:19,062 [tserver.TabletServer] INFO : Adding 1 logs for extent 20<< as alias 1611
2016-03-27 15:43:19,063 [tserver.TabletServer] INFO : Adding 1 logs for extent 1z;6;5 as alias 1871
2016-03-27 15:43:19,065 [tserver.TabletServer] INFO : Adding 1 logs for extent 21<; as alias 1898
2016-03-27 15:43:19,066 [tserver.TabletServer] INFO : Adding 1 logs for extent 1z;9;8 as alias 1873
2016-03-27 15:43:19,068 [tserver.TabletServer] INFO : Adding 1 logs for extent 1z;c;b as alias 1875
2016-03-27 15:43:19,070 [tserver.TabletServer] INFO : Adding 1 logs for extent 1z;f;e as alias 1877
When I create the layer, it logs:
[...]
2016-03-27 15:46:01,277 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:01,278 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:01,279 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660; action: scan; targetTable: abc; authorizations: ; range: [~METADATA_v6 id: [] 9223372036854775807 false,~METADATA_v6 id:%00; [] 9223372036854775807 false); columns: [id::]; iterators: []; iteratorOptions: {};
2016-03-27 15:46:01,279 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:01,280 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:01,280 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660; action: scan; targetTable: abc; authorizations: ; range: [~METADATA_v6 table.indexes.enabled: [] 9223372036854775807 false,~METADATA_v6 table.indexes.enabled:%00; [] 9223372036854775807 false); columns: [table.indexes.enabled::]; iterators: []; iteratorOptions: {};
2016-03-27 15:46:01,280 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:03,833 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
2016-03-27 15:46:03,833 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660; action: scan; targetTable: abc; authorizations: ; range: [~METADATA_v6 bounds: [] 9223372036854775807 false,~METADATA_v6 bounds:%00; [] 9223372036854775807 false); columns: [bounds::]; iterators: []; iteratorOptions: {};
2016-03-27 15:46:03,833 [Audit ] INFO : operation: permitted; user: root; client: 172.17.0.4:56660;
I'm not really sure what this means, but it looks a bit like a permissions issue with HDFS?
Hi Nico,
Ok, we are getting closer!
Whenever this error ("org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server master1.gt:9997") comes up, there will be something in the tserver logs on the server 'master1.gt'. If you can grab the stack trace from $ACCUMULO_HOME/logs/tserver*.log on that machine, we'll be able to see what's going wrong on the server side.
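For example, something like this should pull out the relevant trace (grepping for "startMultiScan" is just a guess, based on the client-side exception in your mail below):

    $ grep -B 2 -A 20 "startMultiScan" $ACCUMULO_HOME/logs/tserver*.log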
One possibility is that GeoMesa and GeoTrellis are using
different versions of a common dependency. If that were
the case, then the quickest solution is likely to use
Accumulo's classpath isolation. I've added some quick
instructions at the end of this email.
As a final sanity check, are you using GeoMesa 1.2.1 on
both the reading and writing side of the equation?
Also, if you upgraded to 1.2.1 from 1.2.0, have you
restarted the Accumulo tablet servers? (The Accumulo
classloader would still have 1.2.0 iterators/classes in
memory if they were used previously from jars deployed
to each tablet server in lib/ext.)
Cheers,
Jim
In Accumulo 1.6, one can leverage namespaces to isolate the GeoMesa classpath from the rest of Accumulo. You have to create the namespace ahead of time, using the shell:
> createnamespace myNamespace
> grant Namespace.CREATE_TABLE -ns myNamespace -u myUser
> config -s general.vfs.context.classpath.myNamespace=hdfs://NAME_NODE_FQDN:54310/accumulo/classpath/myNamespace/[^.].*.jar
> config -ns myNamespace -s table.classpath.context=myNamespace
Note that this allows you to upload GeoMesa jars to a
path (here: /accumulo/classpath/myNamespace) in HDFS
rather than pushing them to each tablet server.
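For reference, getting the jars into HDFS might look something like this (a sketch; the exact runtime jar name depends on your GeoMesa distribution, so adjust it accordingly):

    $ # create the classpath directory referenced by the vfs context above
    $ hadoop fs -mkdir -p /accumulo/classpath/myNamespace
    $ # upload the GeoMesa runtime jar(s); the name here is a placeholder
    $ hadoop fs -put geomesa-*-runtime-*.jar /accumulo/classpath/myNamespace/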
On 3/27/2016 7:46 AM, Nico Kreiling wrote:
Hi Jim,
thanks a lot for the long answers:
To output the table created by Spark, are you using the 'save' function in GeoMesaSpark (1)? (I'm wondering if there might be a bug with this function.) Before calling 'save' or using a similar function on an RDD, I'd suggest making sure to create the DataStore and call createSchema in a single-threaded context. (Mainly, I want to make sure that there aren't issues writing the SimpleFeatureType/FeatureSource metadata to Accumulo.)
I actually don't use the save function. I based my code on the tutorial and do the following:
import com.google.common.collect.Lists
import org.geotools.data.DataStoreFinder
import org.geotools.data.collection.ListFeatureCollection
import org.geotools.data.simple.SimpleFeatureStore
import org.geotools.feature.simple.SimpleFeatureBuilder
import org.locationtech.geomesa.accumulo.data.AccumuloDataStore
import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes
import org.opengis.feature.simple.SimpleFeature

val store = DataStoreFinder.getDataStore(paramsInsert).asInstanceOf[AccumuloDataStore]
val attributes = Lists.newArrayList(
  "analysis_id:String:index=full",
  "*position:Point:srid=4326",
  "grade:Integer"
)

// Get the feature type, or create it from the above attributes
var featureType = store.getSchema(storageTypeName)
if (featureType == null) {
  val featureTypeAttr = String.join(", ", attributes)
  val simpleFeatureType = SimpleFeatureTypes.createType(storageTypeName, featureTypeAttr)
  store.createSchema(simpleFeatureType)
  featureType = store.getSchema(storageTypeName)
}

val sfBuilder = new SimpleFeatureBuilder(featureType)
val sfList = Lists.newArrayList[SimpleFeature]()

// Ingest just 5 entries, for testing (the save function makes an
// array from the attributes of the element)
blocks.take(5).foreach(t => sfList.add(sfBuilder.buildFeature(t.analysis_id, t.save())))

val collection = new ListFeatureCollection(featureType, sfList)
val featureStore = store.getFeatureSource(storageTypeName).asInstanceOf[SimpleFeatureStore]
featureStore.addFeatures(collection)
Regarding the other questions:
3) The error still happens with just that one line of data and the three attributes, so there really can't be anything bad in the data itself.
However, I liked your suggestion of exporting and re-importing the table. When I tried exporting, I noticed that after the first time it does not export everything, only one or two records. Then I checked geomesa.log and saw this for each failed attempt:
Error on server master1.gt:9997
org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server master1.gt:9997
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:695)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:349)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.TApplicationException: Internal error processing startMultiScan
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startMultiScan(TabletClientService.java:317)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:297)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:634)
    ... 6 more
I also tried creating an empty table using the geomesa-tools command from the GitHub page. If I create a layer on this empty table, I still see the same error (or might that be a problem of accessing an empty table?). So right now I really think it has to be something in GeoServer or the adapter.
Regards,
Nico
Hi Nico,
First, sorry for the troubles. I do have a
few suggestions, and I'll break them out in
sections.
Versions: Which version of GeoMesa and
GeoServer are you using?
For Spark:
To output the table created by Spark, are you using the 'save' function in GeoMesaSpark (1)? (I'm wondering if there might be a bug with this function.) Before calling 'save' or using a similar function on an RDD, I'd suggest making sure to create the DataStore and call createSchema in a single-threaded context. (Mainly, I want to make sure that there aren't issues writing the SimpleFeatureType/FeatureSource metadata to Accumulo.)
1.
https://github.com/locationtech/geomesa/blob/master/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L136
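For what it's worth, using it would look roughly like the sketch below. I'm writing the signature from memory, so treat it as an assumption and check the source at the link above; 'featureRdd', 'params', and 'typeName' are placeholders for your RDD of features, DataStore connection parameters, and feature type name:

    import org.apache.spark.rdd.RDD
    import org.locationtech.geomesa.compute.spark.GeoMesaSpark
    import org.opengis.feature.simple.SimpleFeature

    // Hypothetical wrapper; assumed signature is
    // save(rdd: RDD[SimpleFeature], params: Map[String, String], typeName: String).
    // createSchema should already have been called once, on the driver,
    // before the executors start writing.
    def writeFeatures(featureRdd: RDD[SimpleFeature],
                      params: Map[String, String],
                      typeName: String): Unit =
      GeoMesaSpark.save(featureRdd, params, typeName)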
For GeoServer:
Since you are seeing that exception in the
web layer, I'd expect to see it in the
GeoServer logs. Which J2EE container are
you using? JBoss, Tomcat? By chance, is
there more of the stack trace in those logs?
Generally:
Are any of the fields in the SimpleFeatures
null or empty? Is there any namespacing or
anything interesting happening with the
SimpleFeatureType? (E.g., are there special
characters in the attribute names? A ':'
would definitely throw things for a loop.)
As a suggestion which is part work-around
and part debugging, could you try exporting
the data using the tools and then
immediately re-importing the data into a new
table? If something breaks during
re-ingest, a particular record may show the
issue. If the entire layer is still having
issues in GeoServer, it may be the
SimpleFeatureType or other general
metadata.
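One quick way to check the stored metadata is to read the schema back and print its encoded spec — a minimal sketch, assuming a DataStore 'store' and the layer's type name 'typeName' as in your ingest code:

    import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes

    // Read back the SimpleFeatureType that GeoMesa stored and print its spec;
    // a mismatch with the spec you passed to createSchema (or odd characters
    // in attribute names) would point at the metadata rather than the data.
    val sft = store.getSchema(typeName)
    println(SimpleFeatureTypes.encodeType(sft))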
Overall, I'm guessing this is a
serialization issue. Hopefully we can track
it down!
Jim
On 3/26/2016 8:39 AM, Nico Kreiling wrote:
Hello,
I have a strange error using the GeoMesa GeoServer plugin. My setup works for some tables; however, for one table I created using some Spark commands, I always get the following error whenever I base a GeoServer layer on it and try to access its data via WFS:

<ServiceException>java.lang.ArrayIndexOutOfBoundsException: 7 7</ServiceException>
Neither the GeoServer logs nor the Accumulo monitor shows me any related errors.
Also, the content of the table looks quite fine; using geomesa-tools, describe gives me:

analysis_id: String (Indexed)
position: Point (ST-Geo-index) (Indexed)
grade: Integer
And export gives:

analysis_id,position,grade
134715,POINT (8.385753609107171 48.99513421696649),1
(Of course, the data I really want to save has more entries and more attributes, but even after breaking it down to this, the error still occurs.)
In the GeoServer layer settings I only set the necessary bounding box and left all other options at their defaults.
Any ideas what I might test? Any ideas where to find more information on the error, or what the "7 7" in the error description might mean?
Thanks for the help,
Nico
_______________________________________________
geomesa-users mailing list
geomesa-users@xxxxxxxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://www.locationtech.org/mailman/listinfo/geomesa-users