Can't connect to a testcontainer Postgres instance - java

I've created a Postgres instance using testcontainers. The container starts but I cannot access it.
I have tried connecting to the containerized DB using DBeaver.
In the Eclipse console everything seems fine:
01:29:34.662 [main] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: com.github.dockerjava.core.command.CreateContainerCmdImpl#73386d72[name=,hostName=,domainName=,user=,attachStdin=,attachStdout=,attachStderr=,portSpecs=,tty=,stdinOpen=,stdInOnce=,env={POSTGRES_USER=test,POSTGRES_PASSWORD=test,POSTGRES_DB=ASIGDB_TEST}
Here is my code:
public class CustomPostgresContainer extends PostgreSQLContainer<CustomPostgresContainer> {
    private static final String IMAGE_VERSION = "postgres:9.6";
    private static CustomPostgresContainer customPostgresContainer;
    private static final int EXPOSED_PORT = 5555;
    private static final String DB_NAME = "ASIGDB_TEST";
    private static final String DB_USER = "test";
    private static final String DB_PASSWORD = "test";

    public CustomPostgresContainer() {
        super(IMAGE_VERSION);
    }

    public static CustomPostgresContainer getCustomPostgresContainerInstance() {
        if (customPostgresContainer == null) {
            customPostgresContainer = extracted().withExposedPorts(EXPOSED_PORT)
                    .withDatabaseName(DB_NAME)
                    .withUsername(DB_USER)
                    .withPassword(DB_PASSWORD);
        }
        return customPostgresContainer;
    }

    private static CustomPostgresContainer extracted() {
        return new CustomPostgresContainer();
    }

    @Override
    public void start() {
        super.start();
    }

    @Override
    public void stop() {
        // do nothing, JVM handles shutdown
    }
}
I get:
Connection to localhost:5555 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Does anyone know what is going on?

According to the Testcontainers documentation for withExposedPorts(), this exposed port number is from the perspective of the container.
From the host's perspective Testcontainers actually exposes this on a random free port. This is by design, to avoid port collisions that may arise with locally running software or in between parallel test runs.
Because there is this layer of indirection, it is necessary to ask Testcontainers for the actual mapped port at runtime. This can be done using the getMappedPort method, which takes the original (container) port as an argument:
Integer firstMappedPort = container.getMappedPort(yourExposedPort);
Try connecting with DBeaver to that mapped port rather than to 5555.
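A sketch of how that lookup fits the code from the question (assumes Testcontainers is on the classpath; getMappedPort() and getJdbcUrl() come from the container base classes):

```java
public class ConnectExample {
    public static void main(String[] args) {
        CustomPostgresContainer container =
                CustomPostgresContainer.getCustomPostgresContainerInstance();
        container.start();

        // Host-side port: random and different on every run.
        Integer hostPort = container.getMappedPort(5555);
        // JDBC URL that already uses the mapped host port.
        String jdbcUrl = container.getJdbcUrl();

        System.out.println("Point DBeaver at localhost:" + hostPort);
        System.out.println("JDBC URL: " + jdbcUrl);
    }
}
```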


Failing unit test with mockito.mockStatic

So, I'm writing a JUnit test and I can't seem to figure out why it is failing. I'm using Mockito.mockStatic in order to mock InetAddress.class. Running the unit tests all at once fails; running them separately succeeds. I understand static blocks are initialized once. What I can't seem to figure out is why class Host is not reinitialized with every unit test. Any help is appreciated.
Here is my code:
import org.junit.jupiter.api.Test;
import org.mockito.MockedStatic;
import org.mockito.Mockito;
import java.net.InetAddress;
import java.net.UnknownHostException;
import static org.assertj.core.api.Assertions.assertThat;
class HostTest {
@Test
void testLocalhost() {
try (MockedStatic<InetAddress> inetAddressMockedStatic = Mockito.mockStatic(InetAddress.class)) {
InetAddress inetAddress = Mockito.mock(InetAddress.class);
Mockito.when(inetAddress.getHostName()).thenReturn("LOCALHOST");
inetAddressMockedStatic.when(InetAddress::getLocalHost).thenReturn(inetAddress);
assertThat(Host.getLOCALHOST()).isEqualTo("LOCALHOST");
Mockito.reset(inetAddress);
}
}
@Test
void testIP() {
try (MockedStatic<InetAddress> inetAddressMockedStatic = Mockito.mockStatic(InetAddress.class)) {
InetAddress inetAddress = Mockito.mock(InetAddress.class);
Mockito.when(inetAddress.getHostAddress()).thenReturn("127.0.0.1");
inetAddressMockedStatic.when(InetAddress::getLocalHost).thenReturn(inetAddress);
assertThat(Host.getIP()).isEqualTo("127.0.0.1");
}
}
@Test
void testUnkownHostExceptionIP() {
try (MockedStatic<InetAddress> inetAddressMockedStatic = Mockito.mockStatic(InetAddress.class)) {
inetAddressMockedStatic.when(InetAddress::getLocalHost).thenThrow(UnknownHostException.class);
assertThat(Host.getIP()).isEqualTo("Unkown ip");
}
}
@Test
void testUnkownHostExceptionLocalhost() {
try (MockedStatic<InetAddress> inetAddressMockedStatic = Mockito.mockStatic(InetAddress.class)) {
inetAddressMockedStatic.when(InetAddress::getLocalHost).thenThrow(UnknownHostException.class);
assertThat(Host.getLOCALHOST()).isEqualTo("Unkown hostname");
}
}
}
import java.net.InetAddress;
import java.net.UnknownHostException;
public class Host {
private static String LOCALHOST;
private static String IP;
static {
try {
InetAddress localhost = InetAddress.getLocalHost();
LOCALHOST = localhost.getHostName();
IP = localhost.getHostAddress();
} catch (UnknownHostException e) {
LOCALHOST = "Unkown hostname";
IP = "Unkown ip";
}
}
public static String getLOCALHOST() {
return LOCALHOST;
}
public static String getIP() {
return IP;
}
}
The static initializer is only executed once, when the class is loaded. This means it will only run for the first test case using the Host class.
In your example, once testLocalhost is run, the class is used in the line Host.getLOCALHOST(), by which point its initializer has been executed. It never runs again throughout the rest of the unit tests.
If you switch the order of these test cases, you'll get a different result.
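This one-shot behavior is easy to reproduce with plain Java, independent of Mockito (a minimal sketch; the class names are made up for illustration):

```java
// Demonstrates that a static initializer runs exactly once per class load,
// no matter how many times the class is used afterwards.
public class StaticInitDemo {
    static int initCount = 0;

    static class Host {
        static final long LOADED_AT;
        static {
            initCount++;               // runs only on first use of Host
            LOADED_AT = System.nanoTime();
        }
        static long loadedAt() { return LOADED_AT; }
    }

    public static void main(String[] args) {
        long first = Host.loadedAt();  // triggers class initialization
        long second = Host.loadedAt(); // initializer does NOT run again
        System.out.println("initializer ran " + initCount + " time(s)");
        System.out.println("same value: " + (first == second));
    }
}
```

No matter how many callers touch Host afterwards, the initializer never re-runs in the same JVM, which is exactly why test order matters in the question above.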
Judging by your test cases, there are a few things you could do to make the code match your expectations. Since the IP and the host name can change throughout the execution of your program, they shouldn't be static members set in a static initializer block.
Get rid of shared state. Setting aside concurrency and memory visibility, static members are visible to all instances of the class. Omit the static keyword and make these into regular fields:
public class Host {
private final String hostName;
private final String ip;
// Constructor, use this to build new instances
public Host(String hostName, String ip) {
this.hostName = hostName;
this.ip = ip;
}
// No longer static, these are now instance methods
public String getHostName() {
return this.hostName;
}
public String getIp() {
return this.ip;
}
}
Build instances of your class, passing arguments to the constructor to customize its behaviour.
// Host.getIp(); // If IP and host name can vary, don't make them static
InetAddress localhost = InetAddress.getLocalHost();
// build a new instance of Host, provide the relevant data at construction time
Host testedHost = new Host(localhost.getHostName(), localhost.getHostAddress());
// call the instance method, this doesn't affect your other tests
assertThat(testedHost.getIp()).isEqualTo(someIp);
// at this point, the Host instance you created may be garbage-collected to free memory (you don't need to do that yourself)
Now every test case will be independent from the others. Just create a new instance of Host every time you need one.
Get rid of static mocks. Notice how the InetAddress method invocations were moved outside the Host class. By passing the values through the constructor, you make the code easier to test: inversion of control is achieved.
Instead of a public constructor, you could use a factory method. Bottom line is that if you want to have the class change its behaviour, it's usually better to create new instances and encapsulate any state.
Static classes and members are better suited for things like immutable constants that won't change throughout the execution of your program, or utility methods that don't rely on any internal state, i.e. pure functions.
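As a sketch of the factory-method variant mentioned above (names like fromLocalMachine are my own, not from the original post), the InetAddress lookup can live in one static factory while tests construct instances directly:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class Host {
    private final String hostName;
    private final String ip;

    public Host(String hostName, String ip) {
        this.hostName = hostName;
        this.ip = ip;
    }

    // Factory method: the only place that touches InetAddress.
    public static Host fromLocalMachine() {
        try {
            InetAddress localhost = InetAddress.getLocalHost();
            return new Host(localhost.getHostName(), localhost.getHostAddress());
        } catch (UnknownHostException e) {
            return new Host("unknown hostname", "unknown ip");
        }
    }

    public String getHostName() { return hostName; }
    public String getIp() { return ip; }

    public static void main(String[] args) {
        // Tests can bypass the factory entirely and inject fake values,
        // so no static mocking is needed:
        Host fake = new Host("LOCALHOST", "127.0.0.1");
        System.out.println(fake.getHostName() + " / " + fake.getIp());
    }
}
```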

Second main() class does not see variables initialized in first main() class

I'm developing an application that requires two main() classes, the first one for the actual application, and a second one for the JMX connectivity and management. The issue I'm facing is that even after ensuring the first main() has executed and initialized the variables, the second main() does not see those variables and throws a NullPointerException.
Application main():
public class GatewayCore {
private static Logger logger = Logger.getLogger(GatewayCore.class);
private static ThreadedSocketInitiator threadedSocketInitiator;
private static boolean keepAlive = true;
//private static Thread mqConnectionManager;
public static void main(String args[]) throws Exception {
__init_system();
__init_jmx();
__init_mq();
while(keepAlive) {}
}
private static void __init_system() {
try {
logger.debug("__init_system:: loading configuration file 'sessionSettings.txt'");
SessionSettings sessionSettings = new SessionSettings(new FileInputStream("sessionSettings.txt"));
logger.info("\n" + sessionSettings);
MessageStoreFactory messageStoreFactory = new FileStoreFactory(sessionSettings);
LogFactory logFactory = new FileLogFactory(sessionSettings);
MessageFactory messageFactory = new DefaultMessageFactory();
Application sessionManager = new SessionManager();
threadedSocketInitiator = new ThreadedSocketInitiator(sessionManager, messageStoreFactory, sessionSettings, logFactory, messageFactory);
...
public static ThreadedSocketInitiator getThreadedSocketInitiator() {
return threadedSocketInitiator; }
Secondary main() class, meant to be invoked for JMX-Mbean purpose:
public class RemoteCommandLine {
private static Logger logger = Logger.getLogger(RemoteCommandLine.class);
private static final String JMX_SERVICE_URL_PREFIX = "service:jmx:rmi:///jndi/rmi://";
private static final String HOST = "localhost";
private static String PORT = "24365";
private static JMXConnectionInstance jmxConnectionInstance;
private static boolean keepAlive = true;
public static void main(String[] args) throws IOException, MalformedObjectNameException, ConfigError {
logger.debug(GatewayCore.getThreadedSocketInitiator());
...
From command line, I first run:
java -classpath etdfix.jar:slf4j-api-1.7.25.jar:mina-core-2.0.16.jar:quickfixj-all-2.0.0.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=24365 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false com.scb.etdfix.GatewayCore sessionSettings.txt
Wait for the inits to complete, ensuring threadedSocketInitiator has been assigned, then:
java -classpath etdfix.jar:slf4j-api-1.7.25.jar:mina-core-2.0.16.jar:quickfixj-all-2.0.0.jar com.scb.etdfix.JMX.RemoteCommandLine
Which ultimately throws a null pointer exception for the line:
logger.debug(GatewayCore.getThreadedSocketInitiator());
My plan is to have the first main() initialize the object, then pass to the second main() to do further method calls using the same object (it must be the same instance) when it is manually invoked. Both classes are compiled together into the same JAR. Please advise on how I can get around this issue or anything I can do to debug this further.
In fact, I'm thinking that this may not be possible as when the 2nd main() is invoked, from its POV the first main() isn't initialized. Therefore I should approach this by considering that they are two separate entities.
Each process (each java command) is completely separate, whether they run the same main() or not. This is a feature—the alternative would be to have unrelated parts of the system collide whenever they used a common utility.
That said, nothing stops you from calling GatewayCore.main() yourself (with the real command line or whatever other argument list) if you want to reuse its logic. It might be a good idea, though, to factor out the common code as another function: main() has many special responsibilities and programmers do not usually expect it to be called within a program.
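A minimal sketch of that refactoring (the class bodies here are invented for illustration): the second entry point reuses the first one's initialization in the same JVM instead of expecting state from another process:

```java
// Two entry points can share state only inside ONE process.
public class SharedEntryPoints {

    static class GatewayCore {
        // stands in for the real ThreadedSocketInitiator field
        private static String threadedSocketInitiator;

        // shared startup logic factored out of main()
        public static void initialize() {
            threadedSocketInitiator = "initialized";
        }

        public static String getThreadedSocketInitiator() {
            return threadedSocketInitiator;
        }

        public static void main(String[] args) {
            initialize();
        }
    }

    static class RemoteCommandLine {
        public static void main(String[] args) {
            // Reuse the gateway's startup logic; the statics are shared
            // because this runs in the same JVM, not a separate `java` command.
            GatewayCore.initialize();
            System.out.println(GatewayCore.getThreadedSocketInitiator());
        }
    }

    public static void main(String[] args) {
        RemoteCommandLine.main(args);
    }
}
```

If the two programs really must stay separate processes, they need an explicit inter-process channel (for example the JMX remote connection already being set up here) rather than shared static fields.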

BinaryObjectException: Cannot find schema for object with compact footer

This is the scenario: I deploy my web application to two Tomcat servers, and I use Apache Ignite to cluster web sessions. The load balancer is put in the round robin fashion.
The software I use are:
JDK 1.8.0_66
Apache Tomcat 7.0.68
Apache Ignite 1.6.0
Crossroads load balancer version 2.65
Below is the data I put into the session:
import java.io.Serializable;
public class SessionData implements Serializable {
private static final long serialVersionUID = 1L;
private int counter;
public int getCounter() {
return counter;
}
public void setCounter(int counter) {
this.counter = counter;
}
public SessionData() {
}
}
And I can verify that the two applications do share the same data, and everything works perfectly.
Then I update the session data class to:
public class SessionData implements Serializable {
private static final long serialVersionUID = 1L;
private int counter;
private String ip;
public int getCounter() {
return counter;
}
public void setCounter(int counter) {
this.counter = counter;
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public SessionData() {
}
}
And I deploy the new web application to one of the servers. Now when I refresh the web page which will in turn read and update the counter in the session data, I keep getting the following error from both servers, and the page never loads.
ERROR - root - Failed to update web session: null
class org.apache.ignite.binary.BinaryObjectException: Cannot find schema for object with compact footer [typeId=-2056860774, schemaId=1954049593]
at org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:1721)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:278)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:177)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:156)
at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:298)
at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal(BinaryMarshaller.java:109)
at org.apache.ignite.cache.websession.WebSessionV2.unmarshal(WebSessionV2.java:336)
at org.apache.ignite.cache.websession.WebSessionV2.getAttribute(WebSessionV2.java:200)
I believe this is a common scenario. Imagine there are dozens of nodes in the cluster, and we need to redeploy an updated version of the web application to all of the nodes one after another. During the redeployment this issue will surface, and the users will suffer from it.
I wonder if this is a real problem in Apache Ignite, or due to my misconfiguration/misunderstanding? If it is a problem, is there any workaround, or do I have to shut down all the servers in the worst case? And if we use a persistent store, do we need to purge all the data in it?
I'm not sure about the reasons, but this looks like incorrect behavior. Created a ticket: https://issues.apache.org/jira/browse/IGNITE-3194
As a workaround you can try to disable compact footers. To do this, add the following to your Ignite configuration:
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="compactFooter" value="false"/>
</bean>
</property>
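If you configure Ignite programmatically rather than via Spring XML, the equivalent setting should look roughly like this (a sketch against the Ignite 1.6 API; note that every node in the cluster must use the same value):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoCompactFooterStartup {
    public static void main(String[] args) {
        BinaryConfiguration binaryCfg = new BinaryConfiguration();
        // Write the full schema into every serialized object instead of a
        // compact footer; objects get larger, but readers no longer need to
        // resolve the schema from the cluster metadata.
        binaryCfg.setCompactFooter(false);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setBinaryConfiguration(binaryCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}
```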

Cassandra in Java - Same cluster name every time

I was trying to write a basic java program in eclipse that uses Cassandra java driver to connect to a Cassandra node.
I found this repository https://github.com/datastax/java-driver.
When I tried to run using -
package com.example.cassandra;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Host;
public class SampleConnection {
private Cluster cluster;
private Session session;
public void connect(String node){
cluster = Cluster.builder().addContactPoint(node).build();
session = cluster.connect("mykeyspace");
System.out.println(cluster.getClusterName());
}
public void close()
{
cluster.shutdown();
}
public static void main(String args[]) {
SampleConnection client = new SampleConnection();
client.connect("127.0.0.1");
client.close();
}
}
1) I encountered the following output in Eclipse:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
Why is it refusing even to connect, let alone create a table? (Port 9042, configured in cassandra.yaml, is open and the Cassandra service is running.)
2) Why, in my code, does cluster.getClusterName() give "cluster1" as the cluster name every time, regardless of the cluster name in my cassandra.yaml file?
However, when I tried to use the below code, it worked:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
public class CassConnector {
private static Cluster cluster;
private static Session session;
public static Cluster connect(String node) {
return Cluster.builder().addContactPoint(node)
.withRetryPolicy(DefaultRetryPolicy.INSTANCE).build();
}
public static void main(String[] arg) {
cluster = connect("localhost");
session = cluster.connect("mykeyspace");
session.execute("CREATE KEYSPACE myks WITH REPLICATION = "
+ "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1};" );
session.execute("USE mykeyspace");
String query = "CREATE TABLE emp(emp_id int PRIMARY KEY, "
+ "emp_name text, "
+ "emp_city text );";
session.execute(query);
System.out.println("Table created!");
session.close();
cluster.close();
}
}
What's the logical difference between these two approaches?
I assume you're referring to Cluster.getClusterName(). From the javadoc:
Note that this is not the Cassandra cluster name, but rather a name assigned to this Cluster object. Currently, that name is only used for one purpose: to distinguish exposed JMX metrics when multiple Cluster instances live in the same JVM (which should be rare in the first place). That name can be set at Cluster building time (through Cluster.Builder.withClusterName(java.lang.String) for instance) but will default to a name like cluster1 where each Cluster instance in the same JVM will have a different number.
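To read the name defined in cassandra.yaml, go through the cluster metadata instead (a sketch against the DataStax 2.x driver API):

```java
import com.datastax.driver.core.Cluster;

public class ClusterNames {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();

        // Driver-side name of this Cluster object: "cluster1", "cluster2", ...
        System.out.println(cluster.getClusterName());

        // Actual Cassandra cluster name from cassandra.yaml; note that
        // calling getMetadata() connects to the node, so it must be reachable.
        System.out.println(cluster.getMetadata().getClusterName());

        cluster.close();
    }
}
```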

Junit testing an Akka singleton actor: preStart() hook not called

I would like to test a singleton actor using Java in the Scala IDE build of Eclipse SDK (Build id: 3.0.2-vfinal-20131028-1923-Typesafe); Akka is 2.3.1.
public class WorkerTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("ClusterSystem");
}
@AfterClass
public static void teardown() {
JavaTestKit.shutdownActorSystem(system);
system = null;
}
@Test
public void testWorkers() throws Exception {
new JavaTestKit(system) {{
system.actorOf(ClusterSingletonManager.defaultProps(
Props.create(ClassSingleton.class), "class",
PoisonPill.getInstance(),"backend"), "classsingleton");
ActorRef selection = system.actorOf(ClusterSingletonProxy.defaultProps("user/classsingleton/class", "backend"), "proxy");
System.out.println(selection);
}};
}
}
the ClassSingleton.java:
public class ClassSingleton extends UntypedActor {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
public ClassSingleton() {
System.out.println("Constructor is done");
}
public static Props props() {
return Props.create(ClassSingleton.class);
}
@Override
public void preStart() throws Exception {
ActorRef selection = getSelf();
System.out.println("ClassSingleton ActorRef... " + selection);
}
@Override
public void onReceive(Object message) {
}
@Override
public void postStop() throws Exception {
System.out.println("postStop ... ");
}
}
The ClassSingleton actor is doing nothing; the only printout is:
Actor[akka://ClusterSystem/user/proxy#-893814405], which comes from the ClusterSingletonProxy. There is no exception and JUnit finishes with a green flag. When debugging, ClassSingleton is never called (including its constructor and preStart()). Surely the mistake is mine, but what is it? Even more confusing, the same ClassSingleton/ClusterSingletonManager code works fine outside of JavaTestKit and JUnit.
I suspect that the cluster setup might be responsible, so I tried including and excluding the following code (with no effect). However, I would like to understand why we need it, if we need it at all (it is from example code).
Many thanks for your help.
Address clusterAddress = Cluster.get(system).selfAddress();
Cluster.get(system).join(clusterAddress);
The standard behavior of the singleton proxy pattern is to locate the oldest node and deploy the 'real' actor there, while proxy actors are started on all nodes. I suspect that the cluster configuration did not complete, which is why your actor never got started.
The join method makes the node become a member of the cluster. If no node joins the cluster, the actor behind the proxy cannot be created.
The question is: do the configuration files read during the JUnit test have all the information to create a cluster? Seed nodes? Is the port set to the same as the seed node's?
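For a single-node JUnit run, one way to satisfy those requirements is to make the test node its own seed (a sketch with made-up host/port values, using Akka 2.3.x settings):

```java
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class TestClusterSetup {
    public static ActorSystem createSingleNodeCluster() {
        Config config = ConfigFactory.parseString(
                "akka.actor.provider = \"akka.cluster.ClusterActorRefProvider\"\n"
              + "akka.remote.netty.tcp.hostname = \"127.0.0.1\"\n"
              + "akka.remote.netty.tcp.port = 2551\n"
              // the node lists itself as a seed, so the cluster forms
              // immediately without any external members
              + "akka.cluster.seed-nodes = [\"akka.tcp://ClusterSystem@127.0.0.1:2551\"]\n")
                .withFallback(ConfigFactory.load());
        return ActorSystem.create("ClusterSystem", config);
    }
}
```

With a configuration like this in place, the ClusterSingletonManager can elect the oldest (and only) node and actually start ClassSingleton.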
