I am working on a Java project in Eclipse. I have a staging server and a live server, each with its own MongoDB, and the two databases run on a different server on two different ports (29017 and 27017).
Via a JUnit test I want to copy data from the live mongo to the devel mongo.
Weirdest thing: sometimes it works and sometimes I get a socket error. I wonder why mongo sometimes completely refuses to write inserts and on other days works flawlessly. Here is an excerpt of the mongo log file (the one where the data gets inserted) and the JUnit test script:
mongo log:
Thu Mar 14 21:01:04 [initandlisten] connection accepted from xx.xxx.xxx.183:60848 #1 (1 connection now open)
Thu Mar 14 21:01:04 [conn1] run command admin.$cmd { isMaster: 1 }
Thu Mar 14 21:01:04 [conn1] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:90 0ms
Thu Mar 14 21:01:04 [conn1] opening db: repgain
Thu Mar 14 21:01:04 [conn1] query repgain.editorconfigs query: { $and: [ { customer: "nokia" }, { category: "restaurant" } ] } ntoreturn:0 keyUpdates:0 locks(micros) W:5302 r:176 nreturned:0 reslen:20 0ms
Thu Mar 14 21:01:04 [conn1] Socket recv() errno:104 Connection reset by peer xx.xxx.xxx.183:60848
Thu Mar 14 21:01:04 [conn1] SocketException: remote: xx.xxx.xxx.183:60848 error: 9001 socket exception [1] server [xx.xxx.xxx.183:60848]
Thu Mar 14 21:01:04 [conn1] end connection xx.xxx.xxx.183:60848 (0 connections now open)
JUnit test script:
public class CopyEditorConfig {

    protected final Log logger = LogFactory.getLog(getClass());

    private static final String CUSTOMER = "customerx";
    private static final String CATEGORY = "categoryx";

    @Test
    public void test() {
        try {
            ObjectMapper om = new ObjectMapper();
            // script copies the config from m2 to m1.
            Mongo m1 = new Mongo("xxx.xxx.com", 29017); // devel
            Mongo m2 = new Mongo("yyy.yyy.com", 27017); // live
            Assert.assertNotNull(m1);
            Assert.assertNotNull(m2);
            logger.info("try to connect to db \"dbname\"");
            DB db2 = m2.getDB("dbname");
            logger.info("get collection \"config\"");
            DBCollection c2 = db2.getCollection("config");
            JacksonDBCollection<EditorTabConfig, ObjectId> ec2 = JacksonDBCollection.wrap(c2, EditorTabConfig.class, ObjectId.class);
            logger.info("find entry with customer {" + CUSTOMER + "} and category {" + CATEGORY + "}");
            EditorTabConfig config2 = ec2.findOne(DBQuery.and(DBQuery.is("customer", CUSTOMER), DBQuery.is("category", CATEGORY)));
            // config
            if (config2 == null) {
                logger.info("no customer found to copy.");
            } else {
                logger.info("Found config with id: {" + config2.objectId + "}");
                config2.objectId = null;
                logger.info("copy config");
                DB db1 = m1.getDB("dbname");
                DBCollection c1 = db1.getCollection("config");
                JacksonDBCollection<EditorTabConfig, ObjectId> ec1 = JacksonDBCollection.wrap(c1, EditorTabConfig.class, ObjectId.class);
                EditorTabConfig config1 = ec1.findOne(DBQuery.and(DBQuery.is("customer", CUSTOMER), DBQuery.is("category", CATEGORY)));
                if (config1 == null) {
                    WriteResult<EditorTabConfig, ObjectId> result = ec1.insert(config2);
                    ObjectId id = result.getSavedId();
                    logger.info("INSERT config with id: " + id);
                } else {
                    logger.info("UPDATE config with id: " + config1.objectId);
                    ec1.updateById(config1.objectId, config2);
                }
                StringWriter sw = new StringWriter();
                om.writeValue(sw, config2);
                logger.info(sw);
            }
        } catch (Exception e) {
            logger.error("exception occurred: ", e);
        }
    }
}
Running this script looks like a success when I read the log in Eclipse. I get an id for both c1 and c2, and the data is also there. The log even states that it didn't find the config on devel and inserts it. That is also true if I put it there manually; it gets "updated" then. But the mongo log stays the same.
The socket exception occurs, and the data is never written to the db.
I am out of good ideas for debugging this. If you could, I'd be glad to get some tips on how to go on from here. Also, if any information is missing, please tell me; I'd be glad to share.
Regards,
Alex
It seems you have a connection issue with the mongo server. The following may help you better diagnose the mongo servers:
Try to get more information from the log files:
$ less /var/log/mongo/mongod.log
or the customized log file defined in mongod.conf.
Try to use mongostat to monitor the server state:
$ mongostat -u ADMIN_USER -p ADMIN_PASS
Try to use the mongo CLI to check the server's running status:
$ mongo admin -u ADMIN_USER -p ADMIN_PASS
> db.serverStatus()
More useful commands are documented at: http://docs.mongodb.org/manual/reference/method/
Sometimes the problem lies in the Linux system configuration. Tuning Linux for more connections and higher limits may help.
To check current Linux limits, run:
$ ulimit -a
The suggestions below may be helpful:
Each connection is seen by Linux as an open file. The default maximum number of open files is 1024. To increase this limit:
modify /etc/security/limits.conf:
root soft nofile 500000
root hard nofile 512000
root soft nproc 500000
root hard nproc 512000
modify /etc/sysctl.conf:
fs.file-max=360000
net.ipv4.ip_local_port_range=1024 65000
Comment out the line in your mongod.conf that binds the IP to 127.0.0.1.
Usually, it is set to 127.0.0.1 by default.
For Linux, this config file should be located at /etc/mongod.conf. Once you comment that line out, mongod will receive connections from all interfaces. This fixed it for me, as I was getting these socket exceptions as well.
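For reference, a minimal sketch of what that edit could look like (the exact file layout depends on your install):
# /etc/mongod.conf
# bind_ip = 127.0.0.1   <-- commented out so mongod listens on all interfaces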
Related
I'm running a SELECT query on Cassandra in Java. It returns a read timeout exception. The problem is that it does not produce an error in cqlsh, so I guess the problem is in the code. I'm using Flink to connect to Cassandra. I also think the problem may be that in cqlsh I need to hit Enter to get more results; maybe that's what stops the driver from reading through all the results. Also, when I use LIMIT 100 it runs well. I have changed the read configuration in cassandra.yaml, but nothing changed.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.withPort(9042).addContactPoint("127.0.0.1")
                .withSocketOptions(new SocketOptions().setReadTimeoutMillis(60000).setKeepAlive(true).setReuseAddress(true))
                .withLoadBalancingPolicy(new RoundRobinPolicy())
                .withReconnectionPolicy(new ConstantReconnectionPolicy(500L))
                .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ONE))
                .build();
    }
};
CassandraInputFormat<Tuple1<ProfileAlternative>> cassandraInputFormat = new CassandraInputFormat<>("SELECT toJson(profilealternative) FROM profiles.profile where skills contains 'Financial Sector' ", cb);
cassandraInputFormat.configure(null);
cassandraInputFormat.open(null);
Tuple1<ProfileAlternative> testOutputTuple = new Tuple1<>();
while (!cassandraInputFormat.reachedEnd()) {
    cassandraInputFormat.nextRecord(testOutputTuple);
    System.out.println(testOutputTuple.f0);
}
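If the driver is timing out while paging through the large result set (which would match LIMIT 100 working), one thing worth trying — an assumption on my part, not a confirmed fix — is lowering the fetch size so each page stays small. A minimal tweak, replacing the withQueryOptions line in the builder above:
.withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ONE).setFetchSize(1000)) // page size of 1000 is a guess; tune for your row size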
I am using the dependencies below:
<dependency>
<groupId>io.debezium</groupId>
<artifactId>debezium-connector-oracle</artifactId>
<version>${version.debezium}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.debezium/debezium-connector-mysql -->
<dependency>
<groupId>io.debezium</groupId>
<artifactId>debezium-connector-mysql</artifactId>
<version>${version.debezium}</version>
</dependency>
<version.debezium>0.8.3.Final</version.debezium>
Below is my Java method:
public void runMysqlParsser() {
    Configuration config = Configuration.create()
            /* begin engine properties */
            .with("connector.class",
                    "io.debezium.connector.mysql.MySqlConnector")
            .with("offset.storage",
                    "org.apache.kafka.connect.storage.FileOffsetBackingStore")
            .with("offset.storage.file.filename",
                    "/home/mohit/tmp/offset.dat")
            .with("offset.flush.interval.ms", 60000)
            /* begin connector properties */
            .with("name", "my-sql-connector")
            .with("database.hostname", "localhost")
            .with("database.port", 3306)
            .with("database.user", "root")
            .with("database.password", "root")
            .with("server.id", 1)
            .with("database.server.name", "my-app-connector")
            .with("database.history",
                    "io.debezium.relational.history.FileDatabaseHistory")
            .with("database.history.file.filename",
                    "/home/mohit/tmp/dbhistory.dat")
            .with("database.whitelist", "mysql")
            .with("table.whitelist", "mysql.customers")
            .build();
    EmbeddedEngine engine = EmbeddedEngine.create()
            .using(config)
            .notifying(this::handleEvent)
            .build();
    Executor executor = Executors.newSingleThreadExecutor();
    executor.execute(engine);
}

private void handleEvent(SourceRecord sourceRecord) {
    try {
        LOG.info("Got record :" + sourceRecord.toString());
    } catch (Exception ex) {
        LOG.info("exception in handle event:" + ex);
    }
}
My MySQL configuration:
general_log_file = /var/log/mysql/mysql.log
general_log = 1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_format = row
binlog_row_image = full
binlog_rows_query_log_events = on
gtid_mode = on
enforce_gtid_consistency = on
When I run this code, I get the offset for the history logs, and the offset is also added to the mysql.log file. However, when I execute any update statement on the table, it does not give me any logs, i.e. the handleEvent method is not called. Can anyone tell me what is wrong with the code or configuration?
Below are the logs after running the Java code:
$ java -jar debezium-gcp-1.0-SNAPSHOT-jar-with-dependencies.jar
log4j:WARN No appenders could be found for logger (org.apache.kafka.connect.json.JsonConverterConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Nov 28, 2018 1:29:47 PM com.debezium.gcp.SampleMysqlEmbededDebezium handleEvent
INFO: Got record :SourceRecord{sourcePartition={server=my-app-connector}, sourceOffset={file=mysql-bin.000002, pos=980, gtids=31b708c7-ee22-11e8-b8a3-080027fbf50e:1-17, snapshot=true}} ConnectRecord{topic='my-app-connector', kafkaPartition=0, key=Struct{databaseName=}, value=Struct{source=Struct{version=0.8.3.Final,name=my-app-connector,server_id=0,ts_sec=0,file=mysql-bin.000002,pos=980,row=0,snapshot=true},databaseName=,ddl=SET character_set_server=latin1, collation_server=latin1_swedish_ci;}, timestamp=null, headers=ConnectHeaders(headers=)}
Nov 28, 2018 1:29:47 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
INFO: Connected to localhost:3306 at 31b708c7-ee22-11e8-b8a3-080027fbf50e:1-17 (sid:6326, cid:21)
Are you whitelisting the correct database/table?
Could you please look at this demo: https://github.com/debezium/debezium-examples/tree/master/kinesis
Just drop the Kinesis-related code and print only to the console.
Also check the table.ignore.builtin configuration option. IMHO the mysql database belongs among the built-in ones and is filtered out by default.
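For example, a hedged sketch of how that option could be set in the Configuration from the question (whether this is the culprit here is an assumption):
Configuration config = Configuration.create()
        // ... engine and connector properties as in the question ...
        .with("table.ignore.builtin", false) // keep built-in databases such as "mysql"
        .with("database.whitelist", "mysql")
        .with("table.whitelist", "mysql.customers")
        .build();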
I need to obtain the master public DNS value via the Java SDK. The only information that I'll have at the start of the application is the ClusterName, which is static.
Thus far I've been able to pull out all the other information that I need, excluding this one, and this, unfortunately, is vital for the application to be a success.
This is the code that I'm currently working with:
List<ClusterSummary> summaries = clusters.getClusters();
for (ClusterSummary cs : summaries) {
    if (cs.getName().equals("test") && WHITELIST.contains(cs.getStatus().getState())) {
        ListInstancesResult instances = emr.listInstances(new ListInstancesRequest().withClusterId(cs.getId()));
        clusterHostName = instances.getInstances().get(0).toString();
        jobFlowId = cs.getId();
    }
}
I've removed the get for PublicIpAddress, as I wanted the full toString for testing. To be clear: this method does give me the DNS values that I need, but I have no way of differentiating between them.
If my EMR cluster has 4 machines, I don't know at which position in the list a given instance will be. For my basic trial I've only got two machines, one master and one worker; .get(0) has returned the values for both the master and the worker on successive runs.
The information that I'm able to obtain from these is below. My only option at the moment seems to be using the 'ReadyDateTime' as an identifier, as the master 'should' always be ready first, but this feels hacky and I was hoping for a cleaner solution.
{Id: id,
Ec2InstanceId: id,
PublicDnsName: ec2-54--143.compute-1.amazonaws.com,
PublicIpAddress: 54..143,
PrivateDnsName: ip-10--158.ec2.internal,
PrivateIpAddress: 10..158,
Status: {State: RUNNING,StateChangeReason: {},
Timeline: {CreationDateTime: Tue Feb 21 09:18:08 GMT 2017,
ReadyDateTime: Tue Feb 21 09:25:11 GMT 2017,}},
InstanceGroupId: id,
EbsVolumes: []}
{Id: id,
Ec2InstanceId: id,
PublicDnsName: ec2-54--33.compute-1.amazonaws.com,
PublicIpAddress: 54..33,
PrivateDnsName: ip-10--95.ec2.internal,
PrivateIpAddress: 10..95,
Status: {State: RUNNING,StateChangeReason: {},
Timeline: {CreationDateTime: Tue Feb 21 09:18:08 GMT 2017,
ReadyDateTime: Tue Feb 21 09:22:48 GMT 2017,}},
InstanceGroupId: id
EbsVolumes: []}
Don't use ListInstances. Instead, use DescribeCluster, which returns as one of the fields MasterPublicDnsName.
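For instance, a minimal sketch reusing the emr client and the ClusterSummary from the question (adjust to your SDK version):
DescribeClusterResult clusterResult = emr.describeCluster(
        new DescribeClusterRequest().withClusterId(cs.getId()));
String masterDns = clusterResult.getCluster().getMasterPublicDnsName();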
To expand on what was mentioned by Jonathon:
AmazonEC2Client ec2 = new AmazonEC2Client(cred);
DescribeInstancesResult describeInstancesResult = ec2.describeInstances(new DescribeInstancesRequest().withInstanceIds(clusterInstanceIds));
List<Reservation> reservations = describeInstancesResult.getReservations();
for (Reservation res : reservations) {
    for (GroupIdentifier group : res.getGroups()) {
        if (group.getGroupName().equals("ElasticMapReduce-master")) { // the master's security group
            masterDNS = res.getInstances().get(0).getPublicDnsName();
        }
    }
}
Below is working code to get the master public DNS name:
AWSCredentials credentials_profile = new DefaultAWSCredentialsProviderChain().getCredentials();
AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(credentials_profile);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
emr.setRegion(usEast1);
DescribeClusterFunction fun = new DescribeClusterFunction(emr);
DescribeClusterResult res = fun.apply(new DescribeClusterRequest().withClusterId(clusterId));
String publicDNSName = res.getCluster().getMasterPublicDnsName();
I am using the P4Java library in my build.gradle file to sync a large zip file (>200MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am referring to http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients via the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing sleeps, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
    // Generating the file(s) to sync-up
    String[] pathUnderDepot = [
        srcPath + "*"
    ]
    // Increasing timeout from default 30 sec to 60 sec
    Properties defaultProps = new Properties()
    defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
    defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
    defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
    // Instantiating the server
    IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
    p4Server.connect()
    // Authorizing
    p4Server.setUserName("perforceUserName")
    p4Server.login("perforcePassword")
    // Just check if connected successfully
    IServerInfo serverInfo = p4Server.getServerInfo()
    println 'Server info: ' + serverInfo.getServerLicense()
    // Creating new client
    IClient tempClient = new Client()
    // Setting up the name and the root folder
    tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
    tempClient.setRoot(destPath)
    tempClient.setServer(p4Server)
    // Setting the client as the current one for the server
    p4Server.setCurrentClient(tempClient)
    // Creating Client View entry
    ClientViewMapping tempMappingEntry = new ClientViewMapping()
    // Setting up the mapping properties
    tempMappingEntry.setLeft(srcPath + "...")
    tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
    tempMappingEntry.setType(EntryType.INCLUDE)
    // Creating Client view
    ClientView tempClientView = new ClientView()
    // Attaching client view entry to client view
    tempClientView.addEntry(tempMappingEntry)
    tempClient.setClientView(tempClientView)
    // Registering the new client on the server
    println p4Server.createClient(tempClient)
    // Forming the FileSpec collection to be synced-up
    // (declared before the try block so the retry in catch can see it)
    List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
    // Surrounding the underlying block with try as we want some action
    // (namely client removal) to be performed in any case
    try {
        // Syncing up the client
        println "Syncing..."
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    catch (Exception e) {
        println "Sync failed. Trying again..."
        sleep(60 * 1000)
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    finally {
        println "Done syncing."
        try {
            p4Server.connect()
            IServerInfo serverInfo2 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo2.getServerLicense()
            // Removing the temporary client from the server
            println p4Server.deleteClient(tempClient.getName(), false)
        }
        catch (Exception e) {
            println 'Ignoring exception caught while deleting tempClient!'
            /*sleep(60 * 1000)
            p4Server.connect()
            IServerInfo serverInfo3 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo3.getServerLicense()
            sleep(60 * 1000)
            println p4Server.deleteClient(tempClient.getName(), false)*/
        }
    }
}
One unusual thing I observed while deleting tempClient was that it actually deleted the client but still threw "java.net.SocketTimeoutException: Read timed out", which is why I ended up commenting out the second delete attempt in the second catch block.
Which version of P4Java are you using? Have you tried this out with the newest P4Java? There are notable fixes dealing with RPC sockets from the 2013.2 version onward, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations you can try in the part of your code that increases the timeout and instantiates the server:
a] Have you tried passing props as its own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
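For instance, reusing the prop object from a]:
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "0"); // 0 = no socket read timeout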
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server = ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044
My program requires a large number of connections to be open (Mongo). I get the error:
Too many connections open, can't open anymore
after 819 connections. I already know we can increase this limit, but that's not what I have in mind. I'm thinking of closing the MongoClient object and then creating a new one again after 800 connections.
My thinking is that with a new MongoClient object all the connections will be closed, and when I start/create it again, connections will be opened again until 800 is reached, thus avoiding the error. (Let me know if this approach is totally wrong or won't give the required results.)
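Roughly what I have in mind, as a hypothetical sketch (the names are mine):
if (connectionsOpened >= 800) {
    mongoClient.close();                        // closes all pooled connections
    mongoClient = new MongoClient(host, port);  // fresh client, fresh pool
    connectionsOpened = 0;
}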
For this I need to know the number of connections currently open. Is there any way to get this information using Java?
You can get connection information by using the db.serverStatus() command. It has a connections subdocument which contains the total/available connection counts.
For more information:
Documentation of server status
Details of the connections block
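If you want the same from Java, here is a minimal sketch (assuming the com.mongodb.client API; the field names follow the serverStatus output):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class ConnectionCount {
    public static void main(String[] args) {
        // connect to the default server on localhost:27017
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            Document status = client.getDatabase("admin")
                    .runCommand(new Document("serverStatus", 1));
            Document connections = (Document) status.get("connections");
            // "current" = open connections, "available" = remaining before the limit
            System.out.println("current: " + connections.getInteger("current")
                    + ", available: " + connections.getInteger("available"));
        }
    }
}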
Check the number of MongoDB connections using the MongoDB Scala driver:
Create a MongoDB client:
import org.mongodb.scala._
import scala.collection.JavaConverters._
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}
// To directly connect to the default server localhost on port 27017
val mongodbClient: MongoClient = MongoClient()
// Use a Connection String
val mongodbClient: MongoClient = MongoClient("mongodb://localhost")
// or provide custom MongoClientSettings
val settings: MongoClientSettings = MongoClientSettings.builder()
  .applyToClusterSettings(b => b.hosts(List(new ServerAddress("localhost")).asJava))
  .build()
val mongodbClient: MongoClient = MongoClient(settings)
Call getNoOfMongodbConnection by passing mongodbClient:
val result = getNoOfMongodbConnection(mongodbClient)
Method to get the number of connections (current, available, and total):
def getNoOfMongodbConnection(mongodbClient: MongoClient) = {
  val adminDatabase = mongodbClient.getDatabase("admin")
  val serverStatus = adminDatabase.runCommand(Document("serverStatus" -> 1)).toFuture()
  Try {
    Await.result(serverStatus, 10.seconds)
  } match {
    case Success(x) =>
      val connection = x.get("connections")
      logger.info("Number of mongodb connections:--> " + connection)
      connection
    case Failure(ex) =>
      logger.error("Got error while getting the number of MongoDB connections", ex)
      None
  }
}