Hazelcast :: Configure for only one host with Spring - java

Now I am working with Spring MVC and Hazelcast.
Everything is working well with my application.
For better performance, we would like to use caching.
We can't use EhCache because we have two applications on the same server with a single database.
So we switched to Hazelcast; we have already configured it and it is working.
We got what we wanted, but the problem is that Hazelcast looks for other servers, and other servers can join ours in the caching process.
That is the problem.
We would like to use Hazelcast on only one server, with no incoming or outgoing multicast join requests.
We only want to share data within our one server.
But we can't find the right configuration for this.
Our current config is shown below:
In Java Config
Config config = new Config("instance");
NetworkConfig network = config.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(false);
In spring-application-context.xml
<cache:annotation-driven cache-manager="cacheManager" />
<bean id="cacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager">
<constructor-arg ref="instance"/>
</bean>
<hz:hazelcast id="instance">
<hz:config>
<hz:group name="dev" password="password"/>
</hz:config>
</hz:hazelcast>
Caching works well with the above code, but it joins multiple servers.
Please help me configure it for only one server, with no external joins.
Thanks

Multicast is the only discovery mechanism enabled by default, so turning it off should be all that's required to stop this server joining others.
In spring-application-context.xml try
<hz:hazelcast id="instance">
<hz:config>
<hz:group name="dev" password="password"/>
<hz:network port="5701" port-auto-increment="false">
<hz:join>
<hz:multicast enabled="false"/>
</hz:join>
</hz:network>
</hz:config>
</hz:hazelcast>
The Java config looks good too, but I don't see where it's used.
As a sanity check, change the group name to something other than "dev". "dev" is the default, so if you temporarily use a different name and see the new name in the logs, you know your configuration is being picked up.
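If you do want to go the Java Config route end to end, a minimal sketch (assuming Hazelcast 3.x and hazelcast-spring on the classpath; the bean names are illustrative) would create the standalone instance and hand it to Spring's HazelcastCacheManager directly:
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.spring.cache.HazelcastCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class StandaloneHazelcastConfig {

    @Bean(destroyMethod = "shutdown")
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config("instance");
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false); // no multicast discovery
        join.getTcpIpConfig().setEnabled(false);     // no explicit member list either
        return Hazelcast.newHazelcastInstance(config);
    }

    @Bean
    public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
        return new HazelcastCacheManager(hazelcastInstance);
    }
}
This replaces both the <hz:hazelcast> element and the cacheManager bean from the XML, so keep one approach or the other, not both.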

Related

URI encoding on JBoss and Spring application

We are developing a Spring application which runs on JBoss. I've just discovered a problem with character encoding and the @ModelAttribute and @RequestBody annotations when we send requests via the GET method, as described in:
Encoding problem using Spring MVC
http://forum.spring.io/forum/spring-projects/web/74209-responsebody-and-utf-8
and possible solution:
http://wiki.apache.org/tomcat/FAQ/CharacterEncoding#Q2
https://developer.jboss.org/message/643825#643825
https://docs.jboss.org/jbossweb/2.1.x/config/http.html
In general we added:
<system-properties>
<property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/>
<property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/>
</system-properties>
to the standalone.xml file on JBoss, and this solved the issue.
We are just wondering whether there is any way to keep these properties in our application and apply them when needed, instead of changing the JBoss configuration. Or maybe you know of another working solution on the application side rather than the server side?
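The connector-level URI_ENCODING and USE_BODY_ENCODING_FOR_QUERY_STRING properties are read when the connector starts, so they effectively have to stay in the server configuration. The application-side half is usually Spring's CharacterEncodingFilter, which can be registered from application code. A rough sketch, assuming Servlet 3.0 and Spring 3.1+ (the class and filter names are illustrative):
import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.filter.CharacterEncodingFilter;

// Registers a UTF-8 CharacterEncodingFilter from application code (no web.xml change needed).
// On its own this fixes request/response body encoding; query-string decoding still depends on
// USE_BODY_ENCODING_FOR_QUERY_STRING being set on the server.
public class EncodingInitializer implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        CharacterEncodingFilter encodingFilter = new CharacterEncodingFilter();
        encodingFilter.setEncoding("UTF-8");
        encodingFilter.setForceEncoding(true);
        FilterRegistration.Dynamic registration =
                servletContext.addFilter("characterEncodingFilter", encodingFilter);
        registration.addMappingForUrlPatterns(null, false, "/*");
    }
}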

LDAP authentication on Apache against hashed password

I have a setup with an Apache HTTP server fronting a Tomcat server. The Apache server uses LDAP for authentication.
I am using an embedded LDAP server (ApacheDS) and have disabled anonymous binds using:
service.setAllowAnonymousAccess(false); // Disable anonymous access
service.setAccessControlEnabled(true); // Enable basic access control checks (allow only the system admin to log in to the LDAP server)
My application uses Spring LDAP to connect and perform user operations such as adding a user. I have configured it in spring.xml as follows:
<bean id="ldapContextSource" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="url" value="ldap://localhost:389" />
<property name="base" value="dc=test,dc=com" />
<property name="userDn" value="uid=admin,ou=system" />
<property name="password" value="secret" />
</bean>
Apache httpd.conf is configured to use basic auth
AuthLDAPBindDN "uid=admin,ou=system"
AuthLDAPBindPassword "{SHA}<Hash for secret>"
ISSUE 1: When trying to log in to the LDAP server using a client (say JXplorer), I am able to log in using both the hashed password and the plain text "secret". How is that possible?
In this case, if someone gets to know the AuthLDAPBindDN and AuthLDAPBindPassword (which is a hashed one in my case), they will be able to use them to log in to the LDAP server with full access, which is a security threat.
Also, I want to replace the password in spring.xml with a hashed one. Since the admin can change the LDAP password, how do I ensure my application uses the updated hashed password, given that we are hard-coding it in spring.xml?
With regard to your second question: you should never hardcode things like server URLs, user names, and passwords in your XML file. They should be externalized to a properties file and resolved using <context:property-placeholder>. Say, for instance, that you have a properties file with the following contents:
ldap.server.url=ldap://localhost:389
ldap.base=dc=test,dc=com
ldap.userDn=uid=admin,ou=system
ldap.password=secret
You can then refer to these properties in your configuration file, e.g.:
<context:property-placeholder ignore-resource-not-found="true"
location="classpath:/ldap.properties,
file:/etc/mysystem/ldap.properties" />
<bean id="ldapContextSource" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="url" value="${ldap.server.url}" />
<property name="base" value="${ldap.base}" />
<property name="userDn" value="${ldap.userDn}" />
<property name="password" value="${ldap.password}" />
</bean>
Spring will automatically replace the stuff within ${} with the corresponding values from your properties file.
Note that I specified two property file locations in the <context:property-placeholder> element, and that I also included ignore-resource-not-found="true". This is useful because it lets you bundle a properties file with your source for a simple development setup, while in production a properties file placed at /etc/mysystem/ldap.properties will override the values in the bundled one.
This way, if the password is changed by admin in production environment, all you need to do is change the properties file; you don't need to rebuild the application.
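If you prefer Java config over XML, the same externalization looks roughly like this (a sketch assuming Spring 3.1+ and Spring LDAP on the classpath; the class name is illustrative and the property keys follow the example above):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.ldap.core.support.LdapContextSource;

@Configuration
@PropertySource("classpath:/ldap.properties")
public class LdapConfig {

    // Makes ${...} placeholders resolve against the registered property sources.
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public LdapContextSource ldapContextSource(
            @Value("${ldap.server.url}") String url,
            @Value("${ldap.base}") String base,
            @Value("${ldap.userDn}") String userDn,
            @Value("${ldap.password}") String password) {
        LdapContextSource contextSource = new LdapContextSource();
        contextSource.setUrl(url);
        contextSource.setBase(base);
        contextSource.setUserDn(userDn);
        contextSource.setPassword(password);
        return contextSource;
    }
}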
As for why ApacheDS accepts the hashed password: one reason might be that your LDAP server is set up to accept anonymous access for read operations, which means it doesn't actually authenticate at all when you're just reading. It might be something else, though; you'll have to direct that question to ApacheDS support.

How to access multiple remote redis with JedisConnectionFactory?

I made a manager/service-server system.
The manager server collects data from the database and sends that data to multiple service servers.
My code works fine when I have only one server. My configuration is like below (root-context.xml):
<bean id="connectionFactory"
class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name="hostName" value="127.0.0.1"/>
<property name="port" value="6379"/>
</bean>
The problem is that there should be multiple service servers. Is there any way to set up a list of multiple connections in the Spring configuration? Thanks :D
P.S.
I know about the JedisHelper.java approach, which can easily be found on GitHub. However, what I want is to figure this out in the Spring root-context.xml.
You need to use a Jedis pool.
http://docs.spring.io/spring-data/redis/docs/1.0.6.RELEASE/api/org/springframework/data/redis/connection/jedis/JedisConnectionFactory.html
You need to specify a pool and set it up with all your servers.
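One plain-Spring way to reach several Redis servers (whether or not you also pool the connections) is simply to declare one connection factory and one template per server. A sketch in Java config, assuming spring-data-redis with Jedis on the classpath; the host names are placeholders:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory serviceOneConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("service-one.example.com"); // placeholder host
        factory.setPort(6379);
        return factory;
    }

    @Bean
    public JedisConnectionFactory serviceTwoConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("service-two.example.com"); // placeholder host
        factory.setPort(6379);
        return factory;
    }

    @Bean
    public StringRedisTemplate serviceOneTemplate() {
        return new StringRedisTemplate(serviceOneConnectionFactory());
    }

    @Bean
    public StringRedisTemplate serviceTwoTemplate() {
        return new StringRedisTemplate(serviceTwoConnectionFactory());
    }
}
The XML equivalent is the same idea: declare several connectionFactory beans with different hostName values and wire each into its own RedisTemplate bean.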

Spring cache abstraction - distributed environment

I would like to use the Spring cache abstraction in my distributed web application.
My web application runs on 3 different Tomcats behind a load balancer.
Now, my problem is: how exactly can I evict the cache (@CacheEvict) on all Tomcats when one Tomcat performs an update?
Does Spring support this kind of thing?
Thanks!
If it's EHCache that you've told Spring to use, then EHCache supports replication across multiple cache instances on different physical servers. I've had some success with RMI Replicated Caching using multicast discovery. Evicting from one cache will automatically replicate across the other caches, and likewise when adding to a cache.
In terms of the Spring config, you'll need to set up the various config elements and beans:
<cache:annotation-driven />
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
<property name="cacheManager" ref="cacheManager" />
</bean>
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
<property name="configLocation" value="classpath:/ehcache.xml"/>
</bean>
The rest of the configuration is done in the ehcache.xml file. An example replicated cache from ehcache.xml may look something like this:
<cache name="example"
maxElementsInMemory="1000"
eternal="false"
overflowToDisk="false"
timeToIdleSeconds="0"
timeToLiveSeconds="600">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"/>
</cache>
And then you'll need to add the replication settings to the ehcache.xml which may look like this:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.2,
multicastGroupPort=4455, timeToLive=1" />
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=localhost, port=40001, socketTimeoutMillis=2000" />
There are other ways to configure replication in EHCache as described in the documentation but the RMI method described above is relatively simple and has worked well for me.
You can try using clustered caching, like Hazelcast. Ehcache also supports clustered caching through the Terracotta server.
Let's say you have 3 application nodes in a load-balanced environment. If you use Hazelcast, each application node will act as a Hazelcast cache node, and together they give you the abstraction of a single cache server. So whenever you update an entity on one node, the other nodes are notified instantly and update their caches if necessary. This way, you also won't have to evict your cached objects yourself.
Configuring this is also very easy; it looks something like this:
<tcp-ip enabled="true">
<member-list>
<member>machine1</member>
<member>machine2</member>
<member>machine3:5799</member>
</member-list>
</tcp-ip>
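If you configure Hazelcast programmatically instead of via XML, the same TCP/IP member list looks roughly like this (a sketch assuming Hazelcast 3.x; the machine names are the placeholders from the XML above):
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastClusterConfig {
    public static HazelcastInstance newInstance() {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);   // discover peers via the explicit list instead
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("machine1")
            .addMember("machine2")
            .addMember("machine3:5799");
        return Hazelcast.newHazelcastInstance(config);
    }
}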
For further information, try reading this article here.

Spring’s embedded H2 datasource and DB_CLOSE_ON_EXIT

For unit tests (call them integration tests if you want) I have configured an embedded database in my Spring config like so:
<jdbc:embedded-database id="dataSource" type="H2">
<jdbc:script location="classpath:schema_h2.sql" />
</jdbc:embedded-database>
Now, when running the tests from the command line, they work fine, but I get some errors at the end (harmless, but irritating):
WARN 2013-03-25 12:20:22,656 [Thread-9] o.s.j.d.e.H2EmbeddedDatabaseConfigurer 'Could not shutdown embedded database'
org.h2.jdbc.JdbcSQLException: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-170]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329) ~[h2-1.3.170.jar:1.3.170]
...
at org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseFactoryBean.destroy(EmbeddedDatabaseFactoryBean.java:65) [spring-jdbc-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:238) [spring-beans-3.2.1.RELEASE.jar:3.2.1.RELEASE]
Now the tip contained in the exception is fine in general, but how do I add this attribute to the embedded datasource? Do I have to expand it, configure it by hand so to speak, to add such ‘advanced’ features?
Specify the parameter in the JDBC URL: jdbc:h2:~/test;DB_CLOSE_ON_EXIT=FALSE
Also, for an in-memory test database, I suggest you add DB_CLOSE_DELAY=-1, like this:
jdbc:h2:mem:alm;MODE=Oracle;DB_CLOSE_DELAY=-1
To set the JDBC connection URL yourself, replace the embedded-database definition with:
<bean id="dataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
<property name="driverClass" value="org.h2.Driver"/>
<property name="url" value="jdbc:h2:mem:test;MODE=Oracle;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
<jdbc:initialize-database data-source="dataSource" ignore-failures="DROPS">
<jdbc:script location="classpath:schema_h2.sql" />
</jdbc:initialize-database>
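If you'd rather avoid XML, a rough Java-config equivalent of that data source plus script initialization (a sketch; the bean and class names are illustrative) looks like this:
import javax.sql.DataSource;
import org.h2.Driver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;
import org.springframework.jdbc.datasource.init.DataSourceInitializer;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@Configuration
public class TestDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        SimpleDriverDataSource dataSource = new SimpleDriverDataSource();
        dataSource.setDriverClass(Driver.class);
        // DB_CLOSE_ON_EXIT=FALSE disables H2's own shutdown hook, so only Spring closes the database.
        dataSource.setUrl("jdbc:h2:mem:test;MODE=Oracle;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }

    @Bean
    public DataSourceInitializer dataSourceInitializer(DataSource dataSource) {
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource("schema_h2.sql"));
        populator.setIgnoreFailedDrops(true); // mirrors ignore-failures="DROPS"
        DataSourceInitializer initializer = new DataSourceInitializer();
        initializer.setDataSource(dataSource);
        initializer.setDatabasePopulator(populator);
        return initializer;
    }
}
With this in place the JDBC URL carries both DB_CLOSE_DELAY=-1 and DB_CLOSE_ON_EXIT=FALSE, so H2's shutdown hook no longer closes the database before Spring's destroy callback runs.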
I had the same issue as Michael Piefel and tried to implement the solution that Michail Nikolaev explained, but it did not work: somehow Spring Batch then could not find where its JOB_* metadata tables were.
The version of spring-jdbc used by my application is 3.0.5, and upgrading the Spring Framework version conflicts with DWR, which I use in my app (a geolocalization app based on Spring, DWR and the Google Maps API).
So I downloaded the spring-jdbc 4.0.3 release, took its H2EmbeddedDatabaseConfigurer.class (which adds DB_CLOSE_ON_EXIT=FALSE by default), replaced the one from the spring-jdbc 3.0.5 release with it in the WAR file, and it works: shutting down the VM no longer closes the in-memory database.
I hope this unusual workaround helps anyone who, like me, cannot implement the other solution.
I had the same problem, but it was because I forgot to add the @Entity annotation on one of my entities. I added it and it works now!
I hope this helps someone.
