I need to figure out how many writes MongoDB has performed in the last hour, compared to reads.
Is there an easy way to find these stats? I need them to create an alarm. A command-driven or Java-based solution would be really helpful.
After digging through the source code of the MongoDB Java driver, I was finally able to get what I needed. Below is the code for it:
import com.mongodb.CommandResult;
import com.mongodb.DB;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.MongoException;
import org.springframework.data.mongodb.core.MongoDbUtils;
import org.springframework.jmx.export.annotation.ManagedMetric;
import org.springframework.jmx.support.MetricType;

public class MongoJmxStat {

    private final MongoClient mongo;

    public MongoJmxStat(MongoClient mongo) {
        this.mongo = mongo;
    }

    // Runs the serverStatus command, which exposes the opcounters section.
    public CommandResult getServerStatus() {
        CommandResult result = getDb("mongoDB").command("serverStatus");
        if (!result.ok()) {
            throw new MongoException("could not query for server status. Command Result = " + result);
        }
        return result;
    }

    public DB getDb(String databaseName) {
        return MongoDbUtils.getDB(mongo, databaseName);
    }

    private int getOpCounter(String key) {
        DBObject opCounters = (DBObject) getServerStatus().get("opcounters");
        return (Integer) opCounters.get(key);
    }

    @ManagedMetric(metricType = MetricType.COUNTER, displayName = "Write operation count")
    public int getWriteCount() {
        return getOpCounter("insert") + getOpCounter("update") + getOpCounter("delete");
    }

    @ManagedMetric(metricType = MetricType.COUNTER, displayName = "Read operation count")
    public int getReadCount() {
        return getOpCounter("query") + getOpCounter("getmore");
    }
}
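One caveat worth adding: the opcounters values returned by serverStatus are cumulative totals since the mongod process started, so to answer "how many writes in the last hour" you have to sample the counters twice and take the difference. A minimal sketch using the class above (the sleep is only illustrative; a real alarm would use a scheduler):
public class OpCounterSampler {
    public static void main(String[] args) throws InterruptedException {
        MongoClient mongo = new MongoClient(); // assumes a local mongod
        MongoJmxStat stat = new MongoJmxStat(mongo);
        int writesBefore = stat.getWriteCount();
        int readsBefore = stat.getReadCount();
        Thread.sleep(60 * 60 * 1000L); // one hour; use a scheduler in practice
        int writesLastHour = stat.getWriteCount() - writesBefore;
        int readsLastHour = stat.getReadCount() - readsBefore;
        System.out.println("writes=" + writesLastHour + ", reads=" + readsLastHour);
    }
}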
I plan to use a custom field- and time-based partitioner to partition my data in S3 as follows:
/part_<field_name>=<field_value>/part_date=YYYY-MM-dd/part_hour=HH/....parquet.
My partitioner works fine; everything is as expected in my S3 bucket.
The problem is linked to the performance of the sink:
I have 400kB/s/broker = ~1.2MB/s in my input topic, and the sink works in spikes, committing only a small number of records at a time.
If I use the classic TimeBasedPartitioner instead, the sink keeps up with the input.
So my problem seems to be in my custom partitioner. Here is the code:
package test;
import ...;
public final class FieldAndTimeBasedPartitioner<T> extends TimeBasedPartitioner<T> {

    private static final Logger log = LoggerFactory.getLogger(FieldAndTimeBasedPartitioner.class);
    private static final String FIELD_SUFFIX = "part_";
    private static final String FIELD_SEP = "=";

    private long partitionDurationMs;
    private DateTimeFormatter formatter;
    private TimestampExtractor timestampExtractor;
    private PartitionFieldExtractor partitionFieldExtractor;

    protected void init(long partitionDurationMs, String pathFormat, Locale locale, DateTimeZone timeZone, Map<String, Object> config) {
        // delim and config are protected fields inherited from the base partitioner
        this.delim = (String) config.get("directory.delim");
        this.partitionDurationMs = partitionDurationMs;
        try {
            this.formatter = getDateTimeFormatter(pathFormat, timeZone).withLocale(locale);
            this.timestampExtractor = newTimestampExtractor((String) config.get("timestamp.extractor"));
            this.timestampExtractor.configure(config);
            this.partitionFieldExtractor = new PartitionFieldExtractor((String) config.get("partition.field"));
        } catch (IllegalArgumentException e) {
            ConfigException ce = new ConfigException("path.format", pathFormat, e.getMessage());
            ce.initCause(e);
            throw ce;
        }
    }

    private static DateTimeFormatter getDateTimeFormatter(String str, DateTimeZone timeZone) {
        return DateTimeFormat.forPattern(str).withZone(timeZone);
    }

    public static long getPartition(long timeGranularityMs, long timestamp, DateTimeZone timeZone) {
        long adjustedTimestamp = timeZone.convertUTCToLocal(timestamp);
        long partitionedTime = adjustedTimestamp / timeGranularityMs * timeGranularityMs;
        return timeZone.convertLocalToUTC(partitionedTime, false);
    }

    public String encodePartition(SinkRecord sinkRecord, long nowInMillis) {
        final Long timestamp = this.timestampExtractor.extract(sinkRecord, nowInMillis);
        final String partitionFieldValue = this.partitionFieldExtractor.extract(sinkRecord);
        return encodedPartitionForFieldAndTime(sinkRecord, timestamp, partitionFieldValue);
    }

    public String encodePartition(SinkRecord sinkRecord) {
        final Long timestamp = this.timestampExtractor.extract(sinkRecord);
        final String partitionFieldValue = this.partitionFieldExtractor.extract(sinkRecord);
        return encodedPartitionForFieldAndTime(sinkRecord, timestamp, partitionFieldValue);
    }

    private String encodedPartitionForFieldAndTime(SinkRecord sinkRecord, Long timestamp, String partitionFieldValue) {
        if (timestamp == null) {
            String msg = "Unable to determine timestamp using timestamp.extractor " + this.timestampExtractor.getClass().getName() + " for record: " + sinkRecord;
            log.error(msg);
            throw new ConnectException(msg);
        } else if (partitionFieldValue == null) {
            String msg = "Unable to determine partition field using partition.field for record: " + sinkRecord;
            log.error(msg);
            throw new ConnectException(msg);
        } else {
            DateTime recordTime = new DateTime(getPartition(this.partitionDurationMs, timestamp.longValue(), this.formatter.getZone()));
            return FIELD_SUFFIX
                    + config.get("partition.field")
                    + FIELD_SEP
                    + partitionFieldValue
                    + this.delim
                    + recordTime.toString(this.formatter);
        }
    }

    static class PartitionFieldExtractor {

        private final String fieldName;

        PartitionFieldExtractor(String fieldName) {
            this.fieldName = fieldName;
        }

        String extract(ConnectRecord<?> record) {
            Object value = record.value();
            if (value instanceof Struct) {
                Struct struct = (Struct) value;
                return (String) struct.get(fieldName);
            } else {
                log.error("Value is not a Struct!");
                throw new PartitionException("Error encoding partition.");
            }
        }
    }

    public long getPartitionDurationMs() {
        return partitionDurationMs;
    }

    public TimestampExtractor getTimestampExtractor() {
        return timestampExtractor;
    }
}
It's more or less a merge of FieldPartitioner and TimeBasedPartitioner.
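For completeness, this is roughly how the partitioner is wired into the sink configuration; the property names match what init() reads, but the values here are illustrative:
partitioner.class=test.FieldAndTimeBasedPartitioner
# the record field used for part_<field_name>=<field_value>
partition.field=my_field
partition.duration.ms=3600000
path.format='part_date'=YYYY-MM-dd/'part_hour'=HH
timestamp.extractor=Record
locale=en-US
timezone=UTC
directory.delim=/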
Any clue why I get such bad performance while sinking messages?
Could deserializing the record and extracting the field for partitioning cause this issue?
Also, as I have around 80 distinct field values, could it be a memory issue, since the sink will maintain roughly 80 times more buffers in the heap?
Thanks for your help.
FYI, the problem was the partitioner itself: my partitioner needed to decode the entire message to get the field value, and since I have a lot of messages, handling all these events takes time.
I am trying to write a Spring Boot controller that allows the user to make arbitrary SELECT queries against a Postgres database and see the result. I implemented this using a form like the one in the link. The project is based on this starter app.
Code:
@Controller
@SpringBootApplication
public class Main {

    @Value("${spring.datasource.url}")
    private String dbUrl;

    @Autowired
    private DataSource dataSource;

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Main.class, args);
    }

    @GetMapping("/query")
    public String queryForm(Model model) {
        model.addAttribute("query", new Query());
        return "query";
    }

    @PostMapping("/query")
    public String querySubmit(@ModelAttribute Query query) {
        try (final Connection connection = dataSource.getConnection()) {
            final Statement stmt = connection.createStatement();
            final String rawQueryContent = query.getContent().trim();
            final String queryContent;
            if (!rawQueryContent.toLowerCase().contains("limit")) {
                queryContent = rawQueryContent + " LIMIT 500";
            } else {
                queryContent = rawQueryContent;
            }
            final ResultSet rs = stmt.executeQuery(queryContent);
            final StringBuilder sb = new StringBuilder();
            while (rs.next()) {
                // this line produces the unwanted output shown below
                sb.append("Row #" + rs.getRow() + ": " + rs.toString() + "\n");
            }
            query.setContent(sb.toString());
            rs.close();
            stmt.closeOnCompletion();
        } catch (Exception e) {
            query.setContent(e.getMessage());
        }
        return "queryresult";
    }

    @Bean
    public DataSource dataSource() throws SQLException {
        if (dbUrl == null || dbUrl.isEmpty()) {
            return new HikariDataSource();
        } else {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(dbUrl);
            return new HikariDataSource(config);
        }
    }
}
The form itself renders and submits fine, but the output I am getting looks like this:
Row 1: HikariProxyResultSet@188463256 wrapping org.postgresql.jdbc.PgResultSet@ff61f7d
Row 2: HikariProxyResultSet@188463256 wrapping org.postgresql.jdbc.PgResultSet@ff61f7d
Row 3: HikariProxyResultSet@188463256 wrapping org.postgresql.jdbc.PgResultSet@ff61f7d
Row 4: HikariProxyResultSet@188463256 wrapping org.postgresql.jdbc.PgResultSet@ff61f7d
This is not what I want! I want to see the actual rows in the database, as in:
Row 1: "Dave" | 23 | "Philadelphia"
Row 2: "Anne" | 72 | "New York"
Row 3: "Susie" | 44 | "San Francisco"
Row 4: "Alex" | 22 | "Miami"
Heck, I would rather get the raw string output that I normally get when I hand-type SQL into the database than the address in memory of the ResultSet.
How do I get the actual database output without knowing in advance exactly how many columns there will be in the table or the types of the columns?
For starters, I would suggest simplifying your code by using JdbcTemplate combined with a ResultSetExtractor. You can use the ResultSet's metadata to get the number of columns in a result.
I'm also not sure why you are redefining the DataSource.
All in all, something like the code below should do the trick (I haven't tested it and typed it off the top of my head, so it might need some polishing).
@Controller
@SpringBootApplication
public class Main {

    @Autowired
    private JdbcTemplate jdbc;

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Main.class, args);
    }

    @GetMapping("/query")
    public String queryForm(Model model) {
        model.addAttribute("query", new Query());
        return "query";
    }

    @PostMapping("/query")
    public String querySubmit(@ModelAttribute Query query) {
        final String rawQueryContent = query.getContent().trim();
        final String queryContent;
        if (!rawQueryContent.toLowerCase().contains("limit")) {
            queryContent = rawQueryContent + " LIMIT 500";
        } else {
            queryContent = rawQueryContent;
        }
        String content = jdbc.query(queryContent, new ResultSetExtractor<String>() {
            @Override
            public String extractData(ResultSet rs) throws SQLException {
                StringBuilder sb = new StringBuilder();
                int columns = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    sb.append(rs.getRow()).append('|');
                    for (int i = 1; i <= columns; i++) {
                        sb.append(rs.getObject(i)).append('|');
                    }
                    sb.append('\n');
                }
                return sb.toString();
            }
        });
        query.setContent(content);
        return "queryresult";
    }
}
See also How to get the number of columns from a JDBC ResultSet? on how to get the number of columns.
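If you also want the column names as a header row, the ResultSetMetaData used above for the column count also carries the labels; a small sketch that could go just before the while loop in the extractor:
// Sketch: emit a header row from the column labels before the data rows.
ResultSetMetaData md = rs.getMetaData();
for (int i = 1; i <= md.getColumnCount(); i++) {
    sb.append(md.getColumnLabel(i)).append('|');
}
sb.append('\n');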
I have uploaded a CSV file and already have nodes and relationships defined in Neo4j. I tried to create a program, based on an example, that runs a Cypher query from Spring and generates the output from Neo4j. However, I'm encountering this error:
Exception in thread "main" java.lang.NoSuchMethodError: org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(Ljava/io/File;)Lorg/neo4j/graphdb/GraphDatabaseService;
at org.neo4j.connection.Neo4j.run(Neo4j.java:43)
at org.neo4j.connection.Neo4j.main(Neo4j.java:37)
I'm wondering what could possibly be causing this error?
Here is my code:
public class Neo4j {

    public enum NodeType implements Label {
        Issues, Cost, Reliability, Timeliness;
    }

    public enum RelationType implements RelationshipType {
        APPLIES_TO
    }

    String rows = "";
    String nodeResult;
    String resultString;
    String columnString;

    private static File DB_PATH = new File("/Users/phaml1/Documents/Neo4j/default.graphdb/import/");

    public static void main(String[] args) {
        Neo4j test = new Neo4j();
        test.run();
    }

    void run() {
        clear();
        // this is the line that throws the NoSuchMethodError
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);
        try (Transaction tx1 = db.beginTx();
             Result result = db.execute("MATCH(b:Business)-[:APPLIES_TO]->(e:Time) RETURN b,e")) {
            while (result.hasNext()) {
                Map<String, Object> row = result.next();
                for (Entry<String, Object> column : row.entrySet()) {
                    rows += column.getKey() + ": " + column.getValue() + "; ";
                }
                rows += "\n";
            }
        }
        try (Transaction something = db.beginTx();
             Result result1 = db.execute("MATCH(b:Business)-[:APPLIES_TO]->(e:Time) RETURN b,e")) {
            // the query returns columns "b" and "e", so ask for one of those
            Iterator<Node> n_column = result1.columnAs("b");
            for (Node node : Iterators.asIterable(n_column)) {
                nodeResult = node + ": " + node.getProperties("Description");
            }
            List<String> columns = result1.columns();
            columnString = columns.toString();
            resultString = db.execute("MATCH(b:Business)-[:APPLIES_TO]->(e:Time) RETURN b,e").resultAsString();
        }
        db.shutdown();
    }

    private void clear() {
        try {
            deleteRecursively(DB_PATH);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
It looks like a Neo4j version conflict.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);
takes a String argument in Neo4j 2.x (https://neo4j.com/api_docs/2.0.3/org/neo4j/graphdb/factory/GraphDatabaseFactory.html#newEmbeddedDatabase(java.lang.String))
but a File in Neo4j 3.x (http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/factory/GraphDatabaseFactory.html#newEmbeddedDatabase-java.io.File-).
SDN is probably pulling in Neo4j 2.3.6 as a dependency. Please check your dependency tree and override the Neo4j version.
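If you use Maven, pinning the version in your own pom is usually enough (a sketch; the exact 3.x version is an assumption, match it to your SDN release):
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <!-- illustrative version; pick the 3.x release your setup expects -->
    <version>3.4.1</version>
</dependency>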
I have 15 usernames with me. I need to pull the worklog entries of these users and manipulate them from a Java client.
Below are the jar files I am using to connect to the JIRA API and fetch values.
The code is pasted below:
public class JiraConnector {

    JiraRestClient jira;

    public JiraConnector() throws URISyntaxException {
        String url = prop().getUrl();
        String userName = prop().getUser();
        String password = prop().getpwd();
        JerseyJiraRestClientFactory clientFactory = new JerseyJiraRestClientFactory();
        jira = clientFactory.createWithBasicHttpAuthentication(new URI(url),
                userName, password);
        System.out.println("Connection established to >> " + url);
    }

    public void printIssueDetails(String jiraNumber) {
        System.out.println("JiraNumber is " + jiraNumber);
        Issue issue = jira.getIssueClient().getIssue(jiraNumber, null);
        System.out.println(issue.getSummary());
        System.out.println(issue.getDescription());
    }

    public void printUserWorkLog(String userName) {
        System.out.println("user details invoked ... ");
        User user = jira.getUserClient().getUser(userName, null);
        System.out.println(user.getDisplayName());
        System.out.println(user.getEmailAddress());
    }
}
For any given username, I am able to print the displayName and emailAddress (all the basic info).
But I need to get the list of worklogs for the given user, and I'm not sure how to proceed.
You can find all worklog records for a selected issue:
List<Worklog> worklogByIssue = ComponentAccessor.getWorklogManager().getByIssue(issue);
After that, you can parse all the worklog records to determine which user each record was created for:
for (Worklog worklogByIssueItem : worklogByIssue)
{
    int timeSpent = worklogByIssueItem.getTimeSpent().intValue();
    String worklogAuthorName = worklogByIssueItem.getAuthorObject().getName();
    ...
}
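Since you only care about your 15 users, you can filter on the author name while iterating (a sketch; targetUsers is a hypothetical set holding your 15 usernames):
// Hypothetical set of the 15 usernames to keep.
Set<String> targetUsers = new HashSet<>(Arrays.asList("user1", "user2" /* ... */));
for (Worklog worklogByIssueItem : worklogByIssue)
{
    String worklogAuthorName = worklogByIssueItem.getAuthorObject().getName();
    if (targetUsers.contains(worklogAuthorName))
    {
        // manipulate the entry as needed
    }
}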
The last task is searching for issues by some parameters:
public static List<Issue> searchIssues(SearchParametersAggregator searchParams)
{
    String jqlQuery = searchParams.getJqlQuery();
    String projectId = searchParams.getProjectId();
    String condition = createCondition(jqlQuery, projectId);
    JqlQueryBuilder jqlQueryBuilder = prepareJqlQueryBuilder(condition);
    return searchIssues(jqlQueryBuilder);
}

static List<Issue> searchIssues(JqlQueryBuilder jqlQueryBuilder)
{
    Query query = jqlQueryBuilder.buildQuery();
    SearchService searchService = ComponentAccessor.getComponent(SearchService.class);
    try
    {
        ApplicationUser applicationUser = ComponentAccessor.getJiraAuthenticationContext().getUser();
        User user = applicationUser.getDirectoryUser();
        SearchResults searchResults = searchService.search(user, query, PagerFilter.getUnlimitedFilter());
        List<Issue> issues = searchResults.getIssues();
        return issues;
    }
    catch (SearchException e)
    {
        LOGGER.error("Error occurs during search of issues");
        e.printStackTrace();
    }
    return new ArrayList<Issue>();
}

static JqlQueryBuilder prepareJqlQueryBuilder(String condition)
{
    try
    {
        // jqlQueryParser is assumed to be a field, e.g. obtained via
        // ComponentAccessor.getComponent(JqlQueryParser.class)
        Query query = jqlQueryParser.parseQuery(condition);
        JqlQueryBuilder builder = JqlQueryBuilder.newBuilder(query);
        return builder;
    }
    catch (JqlParseException e)
    {
        throw new RuntimeException("JqlParseException during parsing jqlQuery!");
    }
}
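To tie this back to your 15 usernames: JQL has a worklogAuthor field, so you can build the condition per user and reuse the helpers above (a sketch):
// Sketch: find issues that contain worklogs by a given user via JQL.
String condition = "worklogAuthor = \"" + userName + "\"";
List<Issue> issues = searchIssues(prepareJqlQueryBuilder(condition));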
I use MyBatis to perform SQL queries in my project. I need to intercept the SQL query before execution to apply some changes dynamically. I've read about @Intercepts plugins like this:
@Intercepts({@Signature(type = Executor.class, method = "query", args = {...})})
public class ExamplePlugin implements Interceptor {
    public Object intercept(Invocation invocation) throws Throwable {
        return invocation.proceed();
    }
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
    public void setProperties(Properties properties) {
    }
}
And it really intercepts executions, but there is no way to change the SQL query, since the appropriate field is not writable. Should I build a new instance of the whole object manually just to replace the SQL query? Where is the right place to intercept query execution and change it dynamically? Thanks.
I hope this will help you:
@Intercepts({ @Signature(type = Executor.class, method = "query", args = {
        MappedStatement.class, Object.class, RowBounds.class,
        ResultHandler.class }) })
public class SelectCountSqlInterceptor2 implements Interceptor
{
    public static String COUNT = "_count";
    private static final int MAPPED_STATEMENT_INDEX = 0;
    private static final int PARAMETER_INDEX = 1;

    @Override
    public Object intercept(Invocation invocation) throws Throwable
    {
        processCountSql(invocation.getArgs());
        return invocation.proceed();
    }

    @SuppressWarnings("rawtypes")
    private void processCountSql(final Object[] queryArgs)
    {
        if (queryArgs[PARAMETER_INDEX] instanceof Map)
        {
            Map parameter = (Map) queryArgs[PARAMETER_INDEX];
            if (parameter.containsKey(COUNT))
            {
                MappedStatement ms = (MappedStatement) queryArgs[MAPPED_STATEMENT_INDEX];
                BoundSql boundSql = ms.getBoundSql(parameter);
                String sql = boundSql.getSql().trim();
                // build a new BoundSql with the rewritten SQL, then swap in a
                // copy of the MappedStatement that uses it
                BoundSql newBoundSql = new BoundSql(ms.getConfiguration(),
                        getCountSQL(sql), boundSql.getParameterMappings(),
                        boundSql.getParameterObject());
                MappedStatement newMs = copyFromMappedStatement(ms,
                        new OffsetLimitInterceptor.BoundSqlSqlSource(newBoundSql));
                queryArgs[MAPPED_STATEMENT_INDEX] = newMs;
            }
        }
    }

    // see: MapperBuilderAssistant
    @SuppressWarnings({ "unchecked", "rawtypes" })
    private MappedStatement copyFromMappedStatement(MappedStatement ms,
            SqlSource newSqlSource)
    {
        Builder builder = new MappedStatement.Builder(ms.getConfiguration(),
                ms.getId(), newSqlSource, ms.getSqlCommandType());
        builder.resource(ms.getResource());
        builder.fetchSize(ms.getFetchSize());
        builder.statementType(ms.getStatementType());
        builder.keyGenerator(ms.getKeyGenerator());
        // setStatementTimeout()
        builder.timeout(ms.getTimeout());
        // setParameterMap()
        builder.parameterMap(ms.getParameterMap());
        // setStatementResultMap()
        List<ResultMap> resultMaps = new ArrayList<ResultMap>();
        String id = "-inline";
        if (ms.getResultMaps() != null)
        {
            id = ms.getResultMaps().get(0).getId() + "-inline";
        }
        ResultMap resultMap = new ResultMap.Builder(null, id, Long.class,
                new ArrayList()).build();
        resultMaps.add(resultMap);
        builder.resultMaps(resultMaps);
        builder.resultSetType(ms.getResultSetType());
        // setStatementCache()
        builder.cache(ms.getCache());
        builder.flushCacheRequired(ms.isFlushCacheRequired());
        builder.useCache(ms.isUseCache());
        return builder.build();
    }

    private String getCountSQL(String sql)
    {
        String lowerCaseSQL = sql.toLowerCase().replace("\n", " ").replace("\t", " ");
        int index = lowerCaseSQL.indexOf(" order ");
        if (index != -1)
        {
            sql = sql.substring(0, index);
        }
        return "SELECT COUNT(*) from ( select 1 as col_c " + sql.substring(lowerCaseSQL.indexOf(" from ")) + " ) cnt";
    }

    @Override
    public Object plugin(Object target)
    {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties)
    {
    }
}
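Note that the interceptor still has to be registered with MyBatis; if you build your configuration in code, something like this works (a sketch; sqlSessionFactory is assumed to exist, and with XML configuration you would declare the plugin under <plugins> instead):
// Sketch: programmatic registration of the interceptor.
Configuration configuration = sqlSessionFactory.getConfiguration();
configuration.addInterceptor(new SelectCountSqlInterceptor2());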
You may consider using a string-templating library (e.g. Velocity, Handlebars, Mustache) to help you.
There is even MyBatis-Velocity (http://mybatis.github.io/velocity-scripting/) to help you do scripting for the SQL.
Depending on the changes you want to make, you may want to use the dynamic SQL feature of MyBatis 3, as sketched below.
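For example, with annotation-based mappers a <script> block lets MyBatis adapt the SQL per call without any interceptor (a sketch; the mapper, table, and columns are made up):
public interface UserMapper {
    // Hypothetical mapper: the WHERE clause is assembled at runtime.
    @Select("<script>"
            + "SELECT id, name FROM users"
            + "<where>"
            + "<if test='name != null'> name = #{name}</if>"
            + "</where>"
            + "</script>")
    List<User> findByName(@Param("name") String name);
}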