The code below allows me to create an embedded Undertow servlet server. I have a problem setting the 'max-parameters' connector setting; as I understand it, Undertow is normally configured via an XML file.
public static String initCustomServer_(Servlet servlet, int preferedPort, String servletName, String[] resourceList, String... domainName) {
    String contextURL = null;
    int curPort = preferedPort == -1 ? 9001 : preferedPort;
    boolean initServ = false;
    System.out.println("====servlet running in local mode====");
    while (!initServ) {
        try {
            io.undertow.servlet.api.DeploymentInfo servletBuilder = io.undertow.servlet.Servlets.deployment()
                    .setClassLoader(servlet.getClass().getClassLoader())
                    .setContextPath(domainName.length == 0 ? "/" : "/" + domainName[0])
                    .setDeploymentName("test.war")
                    .addServlets(
                            io.undertow.servlet.Servlets.servlet(servletName, servlet.getClass()).addMapping("/" + servletName)
                    )
                    .setResourceManager(new io.undertow.server.handlers.resource.FileResourceManager(new File("src/dss_core/HTML5/webapp"), 1));
            io.undertow.servlet.api.DeploymentManager manager = io.undertow.servlet.Servlets.defaultContainer().addDeployment(servletBuilder);
            manager.deploy();
            io.undertow.server.HttpHandler servletHandler = manager.start();
            io.undertow.server.handlers.PathHandler path = io.undertow.Handlers.path(io.undertow.Handlers.redirect(domainName.length == 0 ? "/" : "/" + domainName[0]))
                    .addPrefixPath(domainName.length == 0 ? "/" : "/" + domainName[0], servletHandler);
            io.undertow.Undertow server = io.undertow.Undertow.builder()
                    .addHttpListener(curPort, "localhost")
                    .setHandler(path)
                    .build();
            server.start();
            initServ = true;
            contextURL = "http://localhost:" + curPort + (domainName.length == 0 ? "" : "/" + domainName[0]) + "/" + servletName;
        } catch (Exception ex) {
            // creation of the server at this port failed, so try again on another port
            System.err.println("server unable to initialize: " + ex.getMessage());
            ex.printStackTrace();
            curPort++;
        }
    }
    return contextURL;
}
Rather than using an XML file like the one below, how do I change configuration such as 'max-parameters' via embedded Java code?
<server name="default-server">
<http-listener name="default" socket-binding="http" max-parameters="5000"/>
Found here is a list of things I can configure via XML; how can I set them via Java code?
UPDATE 1: I found some options in io.undertow.UndertowOptions; however, the following doesn't work because the field is declared final. What now?
io.undertow.UndertowOptions.MAX_PARAMETERS = 10000;
After hours of research and trial and error I finally got it. My first idea was simply to get the Undertow source and compile it myself, but the downside of that is that I'd have to download all the source code and build it; that proved to be troublesome, and I gave up after seeing the endless dependencies and the hours it would take to fetch them. Configuring the server looks like this:
io.undertow.Undertow server = io.undertow.Undertow.builder()
        .addHttpListener(curPort, "localhost")
        .setHandler(path)
        .setServerOption(io.undertow.UndertowOptions.MAX_PARAMETERS, 10000)
        .setServerOption(io.undertow.UndertowOptions.OPTION2, Value2) // placeholder for any other option/value pair
        .build();
The setServerOption method and the io.undertow.UndertowOptions class finally made sense. It's too bad Undertow isn't very popular and there isn't much sample code lying around; I hope this helps anybody wishing to take the embedded road with Undertow.
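For completeness, here is a minimal, self-contained sketch combining the listener with a few of the other options I spotted in io.undertow.UndertowOptions (the values and the trivial handler are just examples I picked, not anything tuned):

import io.undertow.Undertow;
import io.undertow.UndertowOptions;

public class UndertowOptionsExample {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(9001, "localhost")
                .setHandler(exchange -> exchange.getResponseSender().send("ok"))
                // request parameter limit (query string + form data), default is 1000
                .setServerOption(UndertowOptions.MAX_PARAMETERS, 10000)
                // request header count limit, default is 200
                .setServerOption(UndertowOptions.MAX_HEADERS, 400)
                // maximum request body size in bytes
                .setServerOption(UndertowOptions.MAX_ENTITY_SIZE, 10L * 1024 * 1024)
                .build();
        server.start();
    }
}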
I tried connecting to AWS Neptune with the Java code below and got a NoHostAvailableException.
Approach 1:
public static void main(String[] args) throws Exception {
    Cluster.Builder builder = Cluster.build();
    builder.addContactPoint("endpoint");
    builder.port(8182);
    builder.enableSsl(true);
    builder.keyStore("pem-file");
    Cluster cluster = builder.create();
    GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
    System.out.println(g.V().limit(10).toList());
    cluster.close();
}
Approach 2:
Cluster cluster = Cluster.build("endpoint")
        .enableSsl(true)
        .keyStore("pem")
        .handshakeInterceptor(r -> {
            NeptuneNettyHttpSigV4Signer sigV4Signer = null;
            try {
                sigV4Signer = new NeptuneNettyHttpSigV4Signer("us-east-2",
                        new DefaultAWSCredentialsProviderChain());
            } catch (NeptuneSigV4SignerException e) {
                e.printStackTrace();
            }
            try {
                sigV4Signer.signRequest(r);
            } catch (NeptuneSigV4SignerException e) {
                e.printStackTrace();
            }
            return r;
        }).create();
Client client = Cluster.open("src\\conf\\remote-objects.yaml").connect();
client.submit("g.V().limit(10).toList()").all().get();
Whatever I do, I get this error:
Sep 02, 2021 3:18:34 PM io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing:
java.lang.RuntimeException: java.lang.NullPointerException
org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:117)
Caused by: org.apache.tinkerpop.gremlin.driver.exception.NoHostAvailableException: All hosts are considered unavailable due to previous exceptions. Check the error log to find the actual reason.
I need code or documentation to connect my Gremlin code in a .java file to AWS Neptune. I am struggling and have tried a number of approaches:
1. I created an EC2 instance and installed Maven and Apache there; I still got the error, and the code runs on the EC2 server, whereas I want it to run from IntelliJ.
It would be most helpful to get working code either way. What should go in remote-objects.yaml?
If a PEM file is required to access Amazon Neptune, please help me with creating one.
Assuming SSL is enabled but IAM is not, in terms of Java code, this is all you need to create the connection.
Cluster.Builder builder = Cluster.build();
builder.addContactPoint("localhost");
builder.port(8182);
builder.enableSsl(true);
builder.serializer(Serializers.GRAPHBINARY_V1D0);
Cluster cluster = builder.create();
DriverRemoteConnection drc = DriverRemoteConnection.using(cluster);
GraphTraversalSource g = traversal().withRemote(drc);
You may need to add an entry to your /etc/hosts file to get the SSL certs to resolve correctly such as:
127.0.0.1 localhost my-neptune-cluster.us-east-1.neptune.amazonaws.com
If you find that using localhost with SSL enabled does not work then use the actual Neptune cluster DNS name and make the edit to your /etc/hosts file.
The last thing you will need to do is create access to the Neptune VPC from your local machine. One way is to use an SSH tunnel, as explained in this post.
I'm working on a Java-based FTP server that I can embed in another project. I'm using the Apache MINA libraries for the FTP server. I can start the server, but when I try to connect to it I get this error:
Exception in thread "pool-1-thread-1" java.lang.IncompatibleClassChangeError
at org.apache.mina.core.filterchain.DefaultIoFilterChain.register(DefaultIoFilterChain.java:276)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.addLast(DefaultIoFilterChain.java:175)
at org.apache.mina.core.filterchain.DefaultIoFilterChainBuilder.buildFilterChain(DefaultIoFilterChainBuilder.java:452)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.addNow(AbstractPollingIoProcessor.java:430)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.handleNewSessions(AbstractPollingIoProcessor.java:412)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$200(AbstractPollingIoProcessor.java:56)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:885)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I've done some reading on the cause of the error. This site at least implies it's an issue with the Apache MINA code.
I'm using the Apache MINA core libraries v2.0.19 and the Apache FtpServer libraries v1.1.1, both the latest I can find.
Here is my server implementation:
public FTPServer(final String ipaddress, final int port) {
    FtpServerFactory serverFactory = new FtpServerFactory();
    ListenerFactory listenerfactory = new ListenerFactory();
    listenerfactory.setDataConnectionConfiguration(
            new DataConnectionConfigurationFactory().createDataConnectionConfiguration()
    );
    ConnectionConfigFactory connection = new ConnectionConfigFactory();
    connection.setMaxLoginFailures(10);
    connection.setLoginFailureDelay(5);
    connection.setAnonymousLoginEnabled(false);
    // set the ip address of the listener
    listenerfactory.setServerAddress(ipaddress);
    // set the port of the listener
    if (port == 0) {
        listenerfactory.setPort(PORT);
    } else {
        listenerfactory.setPort(port);
        // replace the default listener
        serverFactory.addListener("default", listenerfactory.createListener());
        serverFactory.setConnectionConfig(connection.createConnectionConfig());
    }
    PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
    userManagerFactory.setFile(new File("myusers.properties"));
    userManagerFactory.setPasswordEncryptor(new SaltedPasswordEncryptor());
    UserManager um = userManagerFactory.createUserManager();
    BaseUser user = new BaseUser();
    user.setName("test");
    user.setPassword("test");
    user.setHomeDirectory("");
    try {
        um.save(user);
    } catch (FtpException e1) {
        // TODO Auto-generated catch block
        this.stopServer();
        e1.printStackTrace();
    }
    serverFactory.setUserManager(um);
    server = serverFactory.createServer();
    //this.StartServer();
}
I started at the beginning and used the Apache sample code to create an FTP server. I stuck it in a static method in Main.java and removed the references to my server code. I was unable to reproduce my error. I then copied bits of my code into the new static method until I had the equivalent of my original code, and was still unable to reproduce the original failure. I then restored my code to Main.java and removed the call to the static method. I still couldn't reproduce the error. I'm assuming this was some sort of issue with NetBeans that persisted across several clean-and-build cycles but was fixed when I brought in the new Apache sample code.
I have a standalone zookeeper server running.
client = CuratorFrameworkFactory.newClient(zkHostPorts, retryPolicy);
client.start();
assertThat(client.checkExists().forPath("/")).isNotNull(); // working

listener = new LeaderSelectorListenerAdapter() {
    @Override
    public void takeLeadership(CuratorFramework client) throws Exception {
        System.out.println("This method is never called! :( ");
        Thread.sleep(5000);
    }
};

String path = "/somepath";
leaderSelector = new LeaderSelector(client, path, listener);
leaderSelector.autoRequeue();
leaderSelector.start();
I am connecting to the server successfully, defining a listener, and starting leader election.
Note: there is only one client.
But my client app never takes leadership. I am not able to figure out what I am doing wrong. Also, this is a trivial single-client scenario; shouldn't the client already be the leader?
EDIT:
It works if I use TestingServer from the curator-test library instead of starting my own ZooKeeper server, like below:
TestingServer server = new TestingServer();
client = CuratorFrameworkFactory.newClient(server.getConnectString(), retryPolicy);
...
Does this mean there is something wrong with my ZooKeeper server?
This is my zoo.cfg -
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper/ex1
clientPort=2181
Also, the server appears to be working fine, as I am able to connect to it using the CLI and can create/delete znodes.
Hi, I am trying to write a Java client for secure HBase.
I want to do the kinit from code as well; for that I'm using the UserGroupInformation class.
Can anyone point out where I am going wrong here?
This is the main method that I'm trying to connect to HBase from.
I have to add the configuration to the Configuration object rather than using the XML files, because the client can be located anywhere.
Please see the code below:
public static void main(String[] args) {
    try {
        System.setProperty(CommonConstants.KRB_REALM, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.realm"));
        System.setProperty(CommonConstants.KRB_KDC, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.kdc"));
        System.setProperty(CommonConstants.KRB_DEBUG, "true");

        final Configuration config = HBaseConfiguration.create();

        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, AUTH_KRB);
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, AUTHORIZATION);
        config.set(CommonConfigurationKeysPublic.FS_AUTOMATIC_CLOSE_KEY, AUTO_CLOSE);
        config.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, defaultFS);
        config.set("hbase.zookeeper.quorum", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.host"));
        config.set("hbase.zookeeper.property.clientPort", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.port"));
        config.set("hbase.client.retries.number", Integer.toString(0));
        config.set("zookeeper.session.timeout", Integer.toString(6000));
        config.set("zookeeper.recovery.retry", Integer.toString(0));
        config.set("hbase.master", "gauravt-namenode.pbi.global.pvt:60000");
        config.set("zookeeper.znode.parent", "/hbase-secure");
        config.set("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
        config.set("hbase.security.authentication", AUTH_KRB);
        config.set("hbase.security.authorization", AUTHORIZATION);
        config.set("hbase.master.kerberos.principal", "hbase/gauravt-namenode.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.master.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        config.set("hbase.regionserver.kerberos.principal", "hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.regionserver.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");

        UserGroupInformation.setConfiguration(config);
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        UserGroupInformation.setLoginUser(userGroupInformation);

        User user = User.create(userGroupInformation);
        user.runAs(new PrivilegedExceptionAction<Object>() {
            @Override
            public Object run() throws Exception {
                HBaseAdmin admins = new HBaseAdmin(config);
                if (admins.isTableAvailable("ambarismoketest")) {
                    System.out.println("Table is available");
                }
                HConnection connection = HConnectionManager.createConnection(config);
                HTableInterface table = connection.getTable("ambarismoketest");
                admins.close();
                System.out.println(table.get(new Get(null)));
                return table.get(new Get(null));
            }
        });

        System.out.println(UserGroupInformation.getLoginUser().getUserName());
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
I'm getting the following exception:
Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:110)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:146)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:762)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:354)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:883)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:880)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
... 33 more
Any pointers would be helpful.
The above works nicely, but I've seen a lot of folks struggle with setting all of the right properties in the Configuration object. There's no de-facto list that I've found of exactly what you need and don't need and it is painfully dependent on your cluster configuration.
The surefire way is to have a copy of your HBase configurations in your classpath, since your client can be anywhere as you mentioned. Then you can add the resources to your object without having to specify all properties.
Configuration conf = HBaseConfiguration.create();
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
Here were some sources to back this approach:
IBM,
Scalding (Scala)
Also note that this approach doesn't force you to use the internal ZooKeeper principal and keytab, i.e. you can create keytabs for applications or Active Directory users and leave the internally generated keytabs for the daemons to authenticate amongst themselves.
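As a sketch of that last point, logging in with an application-specific keytab instead of the hbase service keytab looks roughly like this (the principal and keytab path are placeholders; the classes are the same org.apache.hadoop.security.UserGroupInformation and HBase client classes used above):

// hypothetical application principal and keytab, shown only to illustrate the idea
final Configuration conf = HBaseConfiguration.create();
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");

UserGroupInformation.setConfiguration(conf);
UserGroupInformation appUgi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "myapp@EXAMPLE.COM", "/etc/security/keytabs/myapp.keytab");

appUgi.doAs(new PrivilegedExceptionAction<Void>() {
    @Override
    public Void run() throws Exception {
        // any HBase client calls in here run as the application principal
        HBaseAdmin admin = new HBaseAdmin(conf);
        System.out.println(admin.isTableAvailable("ambarismoketest"));
        admin.close();
        return null;
    }
});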
Not sure if you still need help. I think setting the "hadoop.security.authentication" property is missing from your snippet.
I am using the following code snippet to connect to secure HBase (on CDH5). You can give it a try.
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for(TableName table: tables){
System.out.println(table.toString());
}
In JDK 1.8, you also need to set:
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for(TableName table: tables){
System.out.println(table.toString());
}
See also the client troubleshooting section of the HBase book (question 142.9): http://hbase.apache.org/book.html#trouble.client
I think the best resource is https://scalding.io/2015/02/making-your-hbase-client-work-in-a-kerberized-environment/
To make the code work you don't have to change any of the lines written at the top of that post; you just have to make your client able to access the full HBase configuration. This simply means changing your runtime classpath to:
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hbase/conf:target/scala-2.11/hbase-assembly-1.0.jar
This will make everything run smoothly. It is specific to CDH 5.3, but you can adapt it to your cluster configuration.
PS: There is no need for this:
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
Because HBaseConfiguration already has:
public static Configuration addHbaseResources(Configuration conf) {
    conf.addResource("hbase-default.xml");
    conf.addResource("hbase-site.xml");
    // ...
    return conf;
}
I'm looking for a way to manage Tomcat (on localhost) programmatically via Java.
I want to start/stop Tomcat and deploy WARs.
Any help is appreciated.
You can run Tomcat embedded in your app.
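As a rough, minimal sketch of the embedded route (using the org.apache.catalina.startup.Tomcat class from tomcat-embed-core; the port and webapp path are placeholders):

import java.io.File;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedTomcat {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080); // placeholder port
        // on recent Tomcat versions the default connector must be created explicitly
        tomcat.getConnector();
        // deploy an exploded webapp (or a WAR) from a placeholder path
        tomcat.addWebapp("/myapp", new File("path/to/webapp").getAbsolutePath());
        tomcat.start();
        tomcat.getServer().await(); // block until the server is shut down
        // tomcat.stop() and tomcat.destroy() shut it down programmatically
    }
}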
One way to start/stop Tomcat from Java is to execute bootstrap.jar (using the Runtime class) with parameters such as -Dcatalina.home=c:/tomcat/; a sketch of this is shown after the links below.
Sample code showing how Ant starts and stops Tomcat:
http://ptrthomas.wordpress.com/2006/03/25/how-to-start-and-stop-tomcat-from-ant
Sample code showing how to run external programs from Java:
http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
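Here is a sketch of that idea with ProcessBuilder. All paths are examples, and the real startup scripts also set logging properties and put tomcat-juli.jar on the classpath, so treat this as the shape of the call rather than a drop-in replacement:

import java.io.File;

public class TomcatLauncher {
    public static void main(String[] args) throws Exception {
        // example paths; pass "stop" instead of "start" to shut the instance down
        ProcessBuilder pb = new ProcessBuilder(
                "java",
                "-Dcatalina.home=c:/tomcat",
                "-Dcatalina.base=c:/tomcat",
                "-jar", "c:/tomcat/bin/bootstrap.jar",
                "start");
        pb.directory(new File("c:/tomcat/bin"));
        pb.inheritIO(); // forward Tomcat's stdout/stderr to this process
        Process tomcat = pb.start();
        System.out.println("process exited with " + tomcat.waitFor());
    }
}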
You can use the Java Runtime class to call a bat file. Make sure the user running the Java process has rights to start and stop Tomcat.
try {
    Runtime.getRuntime().exec("c:/program files/tomcat/bin/startup.bat");
} catch (IOException e) {
    System.out.println("exception");
}
To manage Tomcat programmatically, you may want to take a look at JMX and the built-in MBean capabilities of Tomcat.
In essence, you can write your own Java-based JMX client to talk to the MBeans via RMI, or you can take advantage of the JMX HTTP Proxy in the Manager App and use plain old HTTP requests to script and manage the Tomcat instance.
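For example, a bare-bones JMX client along those lines could look like the sketch below. It assumes you have enabled remote JMX on the Tomcat JVM (here on a made-up port 9010 with no authentication or SSL); it just lists whatever Tomcat registers under the Catalina domain, and from there you can read attributes or invoke operations on individual MBeans:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatJmxClient {
    public static void main(String[] args) throws Exception {
        // assumes -Dcom.sun.management.jmxremote.port=9010 (no auth/ssl) on the Tomcat JVM
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // list everything Tomcat registers under the Catalina domain
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("Catalina:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
            // from here you can read attributes or invoke operations, e.g.
            // mbsc.invoke(someObjectName, "stop", null, null);
        } finally {
            connector.close();
        }
    }
}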
For a good reference of JMX and Tomcat 6:
http://www.datadisk.co.uk/html_docs/java_app/tomcat6/tomcat6_jmx.htm
A good reference of Manager App and JMX Http Proxy:
http://tomcat.apache.org/tomcat-6.0-doc/manager-howto.html#JMX_Set_command
You should be able to deploy and undeploy WARs fairly easily.
I don't think there is an existing MBean that allows you to shut down Tomcat, but it's fairly easy to implement one yourself and call System.exit();
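A sketch of such a home-grown MBean (the names and ObjectName are mine, not something Tomcat ships):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean naming convention: the interface must be called <ImplementationClass>MBean.
public interface ShutdownMBean {
    void shutdown();
}

class Shutdown implements ShutdownMBean {
    @Override
    public void shutdown() {
        System.exit(0); // ends the whole JVM, which takes Tomcat down with it
    }

    // register from code that runs inside Tomcat, e.g. a ServletContextListener
    public static void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new Shutdown(), new ObjectName("myapp:type=Shutdown"));
    }
}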
You can use the Tomcat Manager app, or look at its sources to learn how the manager processes deploy operations.
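As a small sketch of driving the Manager app from Java over HTTP (the URL path, port, and credentials are placeholders; on Tomcat 7+ the text interface lives under /manager/text and needs a user with the manager-script role, while on Tomcat 6 the commands sit directly under /manager):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ManagerListApps {
    public static void main(String[] args) throws Exception {
        // lists deployed contexts; deploy/undeploy work the same way with other commands
        URL url = new URL("http://localhost:8080/manager/text/list");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // placeholder credentials for a manager user
        String auth = Base64.getEncoder()
                .encodeToString("admin:secret".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // one deployed context per line
            }
        }
    }
}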
You can restart an individual Tomcat connector, i.e. restart just the port (such as 8843) your application is running on. One scenario where this is required is when you obtain a signed certificate through an API, or when you modify your truststore.
Here is the complete code/method that I am using to restart Tomcat connectors after I add/delete certificates.
public void refreshTrustStore() throws Exception
{
    try
    {
        // the following line needs to be replaced based on where you get your port; it may be passed in as an argument
        String httpsPort = configurationManager.getHttpsPort();
        String objectString = "*:type=Connector,port=" + httpsPort + ",*";
        final ObjectName objectNameQuery = new ObjectName(objectString);

        for (final MBeanServer server : MBeanServerFactory.findMBeanServer(null))
        {
            if (server.queryNames(objectNameQuery, null).size() > 0)
            {
                MBeanServer mbeanServer = server;
                ObjectName objectName = (ObjectName) server.queryNames(objectNameQuery, null).toArray()[0];

                mbeanServer.invoke(objectName, "stop", null, null);

                // Polling sleep to reduce delay to safe minimum.
                // Use currentTimeMillis() over nanoTime() to avoid issues
                // with migrating threads across sleep() calls.
                long start = System.currentTimeMillis();
                // Maximum of 6 seconds, 3x time required on an idle system.
                long max_duration = 6000L;
                long duration = 0L;
                do
                {
                    try
                    {
                        Thread.sleep(100);
                    }
                    catch (InterruptedException e)
                    {
                        Thread.currentThread().interrupt();
                    }
                    duration = (System.currentTimeMillis() - start);
                } while (duration < max_duration &&
                        server.queryNames(objectNameQuery, null).size() > 0);

                // Use below to get more accurate metrics.
                String message = "TrustStoreManager TrustStore Stop: took " + duration + " milliseconds";
                logger.information(message);

                mbeanServer.invoke(objectName, "start", null, null);
                break;
            }
        }
    }
    catch (Exception exception)
    {
        // log and rethrow
        throw exception;
    }
}