Re: [geomesa-users] Geomesa Connection Issue with HBase RO cluster
Hi Amit,
Are you saying that it does work if you provide hbase-site.xml on
the classpath? Possibly there is something missing in your config
that is in the full hbase-site.xml.
Are you using an older version of EMR? Last I saw, EMR HBase was up
to at least 1.4 - possibly some things do not work the same with
HBase 1.3. Another issue might be that you have both the
spark-runtime and the HBase client jars on your classpath - the
spark runtime bundles those jars in a shaded uber-jar, so you will
have duplicates on the classpath, which can often cause subtle
issues.
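One way to check for such duplicates is to ask the class loader for every classpath location that provides a core HBase class; more than one hit suggests the shaded spark-runtime jar and the standalone client jars overlap. This is just a minimal sketch (the class name and the probe resource are illustrative, not a GeoMesa tool):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DuplicateClassCheck {

    // Returns every classpath location that provides the given resource path.
    static List<URL> findCopies(String resourcePath) throws IOException {
        return Collections.list(
                DuplicateClassCheck.class.getClassLoader().getResources(resourcePath));
    }

    public static void main(String[] args) throws IOException {
        // More than one result here means the HBase client class is on the
        // classpath twice (e.g. shaded inside the spark-runtime jar and again
        // in the standalone hbase-client jar).
        for (URL url : findCopies("org/apache/hadoop/hbase/client/Connection.class")) {
            System.out.println(url);
        }
    }
}
```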
Thanks,
Emilio
On 3/30/20 1:12 AM, Amit Srivastava wrote:
Correcting my email above. I added logging to confirm that org.locationtech.geomesa.hbase.data.HBaseConnectionPool is getting the right hbase.config.xml. Per the logs, GeoMesa is setting up the right org.apache.hadoop.conf.Configuration [1], but the HBase client is still not applying it. The values are honored only when hbase-site.xml is added to the classpath.
Thanks Emilio, I updated the EMR HBase jar, but I still see an issue: the parameter "hbase.meta.table.suffix" is not getting overridden via hbase.config.xml [1] when I am using the jars below on the classpath. Per the HBase documentation, "hbase.meta.table.suffix" should be overridden because the metadata table name in the RO cluster is hbase:meta_clusterId.
Classpath:
<classpath path="jars/hbase-common-1.3.1.jar" />
<classpath path="jars/hbase-client-1.3.1.jar" />
<classpath path="jars/hbase-server-1.3.1.jar" />
<classpath path="jars/hbase-site.zip" />
<classpath path="jars/geomesa-hbase-datastore_2.11-2.4.0.jar" />
<classpath path="jars/geomesa-hbase-spark-runtime_2.11-2.4.0.jar" />
Code where I am creating the DataStore:

public DataStore newInstance(final ExecutionContext executionContext, final String catalog)
        throws IOException {
    final Map<String, String> hbaseDataStoreParameters = ImmutableMap.of(
            "hbase.catalog", catalog,
            "hbase.zookeepers", executionContext.getClusterDetails().getMasterPublicDns(),
            "hbase.config.xml", getXml(executionContext.getClusterDetails().getClusterId()),
            "hbase.remote.filtering", "false");
    return DataStoreFinder.getDataStore(hbaseDataStoreParameters);
}
private String getXml(final String clusterId) {
    return String.format("<configuration>\n" +
            "  <property>\n" +
            "    <name>hbase.meta.table.suffix</name>\n" +
            "    <value>%s</value>\n" +
            "    <final>true</final>\n" +
            "  </property>\n" +
            "  <property>\n" +
            "    <name>hbase.global.readonly.enabled</name>\n" +
            "    <value>true</value>\n" +
            "    <final>true</final>\n" +
            "  </property>\n" +
            "  <property>\n" +
            "    <name>hbase.meta.startup.refresh</name>\n" +
            "    <value>true</value>\n" +
            "    <final>true</final>\n" +
            "  </property>\n" +
            "</configuration>", clusterId);
}
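A quick way to sanity-check that the generated XML string actually carries the intended override is to parse it back out and read the property. This is a hypothetical helper for debugging only (ConfigXmlCheck and getProperty are my names, not GeoMesa or HBase API):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConfigXmlCheck {

    // Parses an hbase-site style XML string and returns the value of the
    // named property, or null if the property is absent.
    static String getProperty(String xml, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String n = p.getElementsByTagName("name").item(0).getTextContent();
            if (name.equals(n)) {
                return p.getElementsByTagName("value").item(0).getTextContent();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<configuration><property>"
                + "<name>hbase.meta.table.suffix</name><value>j-EXAMPLE</value>"
                + "</property></configuration>";
        System.out.println(getProperty(xml, "hbase.meta.table.suffix")); // prints j-EXAMPLE
    }
}
```

If the value comes back correctly here but is still ignored by the client, the problem is in how the Configuration reaches the HBase client, not in the XML itself.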
I think so; I haven't actually done it before. Another possibility would be to put the jars in your local .m2 cache. If you get it working, please circle back and we can add it to the docs!
Thanks,
Emilio
On 3/27/20 12:06 PM, Amit Srivastava wrote:
Thanks Emilio,
Hi Amit,
You should just need to use the AWS HBase jars instead of the regular HBase jars everywhere in the install guide: https://www.geomesa.org/documentation/user/hbase/install.html
If you're using the geomesa-hbase-spark-runtime jar, that bundles the HBase jars inside it, so you'll need to rebuild it from source, using the repository Austin linked to get the AWS HBase jars.
Thanks,
Emilio
On 3/27/20 11:05 AM, Amit Srivastava wrote:
Hi Emilio and Austin,
I am facing a connection issue [1] with the HBase RO cluster, and I want to update the HBase jar in GeoMesa. Can you please tell me where I need to make this change in GeoMesa (version 2.4.0)? I see many places where changes might be needed. Can you point me in the right direction?
--
Regards,
Amit Kumar Srivastava