Execution of stored function using Spring's SimpleJdbcCall not giving output - java

I am trying to call a stored function using Spring's SimpleJdbcCall. I have written a simple function which takes two numbers as input and returns their sum. I am using Oracle 11g Release 2 (11.2) as the database. I am not getting any exceptions, but I am not getting the result either. The function works well when called from an anonymous PL/SQL block through SQL*Plus. The code is as follows:
SimpleJdbcCall _simpleJdbcCall=new SimpleJdbcCall(this.jdbcTemplate);
_simpleJdbcCall.withCatalogName("BROADCASTSHEETMANAGEMENT");
_simpleJdbcCall.withSchemaName("PPV");
_simpleJdbcCall.withFunctionName("TEST");
_simpleJdbcCall.withoutProcedureColumnMetaDataAccess();
_simpleJdbcCall.declareParameters(new SqlParameter("newChangeSequence",java.sql.Types.NUMERIC));
_simpleJdbcCall.declareParameters(new SqlParameter("number1",java.sql.Types.NUMERIC));
_simpleJdbcCall.declareParameters(new SqlParameter("number2",java.sql.Types.NUMERIC));
MapSqlParameterSource mapSqlParameterSource1=new MapSqlParameterSource();
mapSqlParameterSource1.addValue("newChangeSequence", Integer.valueOf(0));
mapSqlParameterSource1.addValue("number1", Integer.valueOf(10));
mapSqlParameterSource1.addValue("number2", Integer.valueOf(20));
newChangeSequence = _simpleJdbcCall.executeFunction(Integer.class,mapSqlParameterSource1);
System.out.println("Returned changeSequence is: " + newChangeSequence);
The debug log shows the following information:
2013/12/19 18:52:53,604 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - Added declared parameter for [TEST]: newChangeSequence
2013/12/19 18:52:53,604 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - Added declared parameter for [TEST]: number1
2013/12/19 18:52:53,604 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - Added declared parameter for [TEST]: number2
2013/12/19 18:52:53,605 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - JdbcCall call not compiled before execution - invoking compile
2013/12/19 18:52:53,608 [main] - [] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
2013/12/19 18:52:53,609 [main] - [] DEBUG org.springframework.jdbc.datasource.DriverManagerDataSource - Creating new JDBC DriverManager Connection to [jdbc:oracle:thin:@localhost:1521:orcl]
2013/12/19 18:52:53,647 [main] - [] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Registering transaction synchronization for JDBC Connection
2013/12/19 18:52:53,649 [main] - [] DEBUG org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory - Using org.springframework.jdbc.core.metadata.OracleCallMetaDataProvider
2013/12/19 18:52:53,649 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - Compiled stored procedure. Call string is [{? = call PPV.BROADCASTSHEETMANAGEMENT.TEST(?, ?)}]
2013/12/19 18:52:53,649 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - SqlCall for function [TEST] compiled
2013/12/19 18:52:53,651 [main] - [] DEBUG org.springframework.jdbc.core.metadata.CallMetaDataContext - Matching [number2, number1, newChangeSequence] with [number2, newChangeSequence, number1]
2013/12/19 18:52:53,651 [main] - [] DEBUG org.springframework.jdbc.core.metadata.CallMetaDataContext - Found match for [number2, number1, newChangeSequence]
2013/12/19 18:52:53,652 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - The following parameters are used for call {? = call PPV.BROADCASTSHEETMANAGEMENT.TEST(?, ?)} with: {number2=20, number1=10, newChangeSequence=0}
2013/12/19 18:52:53,652 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - 1: newChangeSequence SQL Type 2 Type Name null org.springframework.jdbc.core.SqlParameter
2013/12/19 18:52:53,652 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - 2: number1 SQL Type 2 Type Name null org.springframework.jdbc.core.SqlParameter
2013/12/19 18:52:53,652 [main] - [] DEBUG org.springframework.jdbc.core.simple.SimpleJdbcCall - 3: number2 SQL Type 2 Type Name null org.springframework.jdbc.core.SqlParameter
2013/12/19 18:52:53,653 [main] - [] DEBUG org.springframework.jdbc.core.JdbcTemplate - Calling stored procedure [{? = call PPV.BROADCASTSHEETMANAGEMENT.TEST(?, ?)}]
2013/12/19 18:52:53,655 [main] - [] DEBUG org.springframework.jdbc.core.JdbcTemplate - CallableStatement.execute() returned 'false'
2013/12/19 18:52:53,655 [main] - [] DEBUG org.springframework.jdbc.core.JdbcTemplate - CallableStatement.getUpdateCount() returned -1
Returned changeSequence is: null
The stored function code is:
function test(number1 number, number2 number) return number is
newChangeSequence number(4);
begin
newChangeSequence:= number1 + number2;
return newChangeSequence;
end test;

You have to declare the return value with SqlOutParameter, like this:
_simpleJdbcCall.declareParameters(new SqlOutParameter("newChangeSequence", java.sql.Types.NUMERIC));
and please post your stored procedure code too.
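For completeness, here is a minimal sketch of the corrected call setup (assuming the same PPV schema, BROADCASTSHEETMANAGEMENT package, function and parameter names as in the question; the return value is declared first as an out parameter and is not passed in as input):

import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;

// Sketch only: declare the function's return value as an SqlOutParameter.
SimpleJdbcCall call = new SimpleJdbcCall(this.jdbcTemplate)
        .withSchemaName("PPV")
        .withCatalogName("BROADCASTSHEETMANAGEMENT")
        .withFunctionName("TEST")
        .withoutProcedureColumnMetaDataAccess()
        .declareParameters(
                new SqlOutParameter("newChangeSequence", java.sql.Types.NUMERIC),
                new SqlParameter("number1", java.sql.Types.NUMERIC),
                new SqlParameter("number2", java.sql.Types.NUMERIC));

// Only the IN parameters are supplied; the result comes back as the function's return value.
MapSqlParameterSource in = new MapSqlParameterSource()
        .addValue("number1", 10)
        .addValue("number2", 20);

Integer newChangeSequence = call.executeFunction(Integer.class, in);
System.out.println("Returned changeSequence is: " + newChangeSequence);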

Related

How to fix logger randomly logging on the same line, even though it is set to start each log with a new line

I am using log4j to log events in Java code. I have it set to start each log entry on a new line, with a timestamp, thread, log level and the class where the log call runs. So the configuration looks like this:
LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
logger = loggerContext.getLogger("com.asdf");
logger.setAdditive(true);
PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(loggerContext);
encoder.setPattern("%-5level %d [%thread:%M:%caller{1}]: %message%n");
encoder.start();
cucumberAppender = new CucumberAppender();
cucumberAppender.setName("cucumber-appender");
cucumberAppender.setContext(loggerContext);
cucumberAppender.setScenario(scenario);
cucumberAppender.setEncoder(encoder);
cucumberAppender.start();
logger.addAppender(cucumberAppender);
loggerContext.start();
logger().info("*********************************************");
logger().info("* Starting Scenario - {}", scenario.getName());
logger().info("*********************************************\n");
}
@After
public void showScenarioResult(Scenario scenario) throws InterruptedException {
logger().info("**************************************************************");
logger().info("* {} Scenario - {} ", scenario.getStatus(), scenario.getName());
logger().info("**************************************************************\n");
cucumberAppender.writeToScenario();
cucumberAppender.stop();
logger.detachAppender(cucumberAppender);
logger.detachAndStopAllAppenders();
}
which most of the time outputs the log correctly, like so:
15:59:25.448 [main] INFO com.asdf.runner.steps.StepHooks -
*********************************************
15:59:25.449 [main] INFO com.asdf.runner.steps.StepHooks - * Starting Scenario - Check Cache
15:59:25.450 [main] INFO com.asdf.runner.steps.StepHooks -
*********************************************
15:59:25.558 [main] DEBUG org.cache2k.core.util.Log - New instance, using SLF4J logging
15:59:25.575 [main] INFO org.cache2k.core.Cache2kCoreProviderImpl - cache2k starting. version=1.0.1.Final, build=undefined, defaultImplementation=HeapCache
15:59:25.629 [main] DEBUG org.cache2k.CacheManager:default - open name=default, id=wvl973, classloaderId=6us14y
However, sometimes the next log entry is written onto the previous line, without a line break, like below:
15:59:27.353 [main] INFO com.asdf.cache.CacheService - Creating a cache for [Kafka] service with specific types.15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - * PASSED Scenario - Check Cache
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
As you can see, the first StepHooks line goes on the same line as CacheService, which is unaesthetic.
What can I change so that the log always starts on a new line, without exceptions like this?

Netty 4.0.30.Final channel write issue

I have written an encoder that encodes the message I am sending on the wire. In my handler I call the ctx.writeAndFlush() method, but nothing seems to be written to the remote endpoint. These are my code snippets:
Encoder
@ChannelHandler.Sharable
public class FreeSwitchEncoder extends MessageToByteEncoder<BaseCommand> {
/**
* Logger property
*/
private final Logger _log = LoggerFactory.getLogger(this.getClass());
/**
* Character set delimiting the end of each FreeSwitch message parts
*/
private final String MESSAGE_END_STRING = "\n\n";
private final Charset _encoding = StandardCharsets.UTF_8;
@Override
protected void encode(ChannelHandlerContext ctx, BaseCommand msg, ByteBuf out) throws Exception {
// Get the string representation of the BaseCommand
String toSend = msg.toString().trim();
// Let us check whether the command ends with \n\n
if (!StringUtils.isEmpty(toSend)) {
if (!toSend.endsWith(MESSAGE_END_STRING)) toSend = toSend + MESSAGE_END_STRING;
_log.debug("Encoded message sent [{}]", toSend);
ByteBuf encoded = Unpooled.copiedBuffer(toSend.getBytes());
// encoded = encoded.capacity(encoded.readableBytes());
out.writeBytes(encoded);
}
}
}
The code that sends the message
public FreeSwitchMessage sendCommand(ChannelHandlerContext ctx,
final BaseCommand command) {
ManuelResetEvent manuelResetEvent = new ManuelResetEvent();
syncLock.lock();
try {
manuelResetEvents.add(manuelResetEvent);
_log.debug("Command sent to freeSwitch [{}]", command.toString());
ChannelFuture future = ctx.writeAndFlush(command);
future.addListener(new ChannelFutureListener() {
public void operationComplete(ChannelFuture future) throws Exception {
if(!future.isSuccess()){
future.cause().printStackTrace();
}
}
});
} finally {
syncLock.unlock();
}
// Block until the response is available
return manuelResetEvent.get();
}
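The ManuelResetEvent class is not shown in the question; presumably it is a latch-style holder that makes sendCommand block until a response handler completes it. A purely hypothetical sketch of such a class (an assumption for illustration, not the asker's actual code):

import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: blocks the calling thread until complete() supplies the response.
public class ManuelResetEvent {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile FreeSwitchMessage message;

    // Called by the response handler when the reply arrives.
    public void complete(FreeSwitchMessage message) {
        this.message = message;
        latch.countDown();
    }

    // Blocks until complete() has been called.
    public FreeSwitchMessage get() {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return message;
    }
}

If that is roughly how it works, the hang simply means the response (and therefore complete()) never arrives, which points back at the outbound write.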
I do not know what I am doing wrong. The code hangs when writing to the wire. Please assist. This is the log:
06:53:44.417 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
06:53:44.421 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
06:53:44.432 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
06:53:44.432 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\smsgh\AppData\Local\Temp (java.io.tmpdir)
06:53:44.436 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
06:53:44.436 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
06:53:44.458 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
06:53:44.459 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
06:53:44.487 [main] INFO io.freeswitch.OutboundTest - Client connecting ..
06:53:44.517 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xf77d8a3de122380b (took 9 ms)
06:53:44.559 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
06:53:44.559 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
06:53:44.591 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
06:53:44.596 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
06:53:44.599 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchDecoder - read header line [Content-Type: auth/request]
06:53:44.601 [nioEventLoopGroup-2-1] DEBUG i.f.message.FreeSwitchMessage - adding header [CONTENT_TYPE] [auth/request]
06:53:44.601 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchDecoder - read header line []
06:53:44.602 [nioEventLoopGroup-2-1] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
06:53:44.602 [nioEventLoopGroup-2-1] DEBUG i.n.channel.DefaultChannelPipeline - Discarded inbound message FreeSwitchMessage: contentType=[auth/request] headers=1, body=0 lines. that reached at the tail of the pipeline. Please check your pipeline configuration.
06:53:44.603 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Received message: [FreeSwitchMessage: contentType=[auth/request] headers=1, body=0 lines.]
06:53:44.607 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Auth requested, sending [auth *****]
06:53:44.683 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Command sent to freeSwitch [auth ClueCon]
06:53:44.683 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchEncoder - Encoded message sent [auth ClueCon
]

algebraic error when running "aggregate" function on dataset

I'm learning Hadoop/Pig/Hive by running through the tutorials on hortonworks.com.
I have indeed tried to find a link to the tutorial, but unfortunately it only ships with the ISO image that they provide to you. It's not actually hosted on their website.
batting = load 'Batting.csv' using PigStorage(',');
runs = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
grp_data = GROUP runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp,MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
I've copied their code exactly as it was stated in the tutorial and I'm getting this output:
2013-06-14 14:34:37,969 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:34:37,977 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
2013-06-14 14:34:38,412 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:34:38,598 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:34:38,998 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:34:40,819 [main] WARN org.apache.pig.PigServer - Encountered Warning IMPLICIT_CAST_TO_DOUBLE 1 time(s).
2013-06-14 14:34:40,827 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: HASH_JOIN,GROUP_BY
2013-06-14 14:34:41,115 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-06-14 14:34:41,160 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer - Choosing to move algebraic foreach to combiner
2013-06-14 14:34:41,201 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer - Rewrite: POPackage->POForEach to POJoinPackage
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 map-reduce splittees.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 out of total 3 MR operators.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
2013-06-14 14:34:41,488 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-06-14 14:34:41,551 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-06-14 14:34:41,555 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=6398990
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2013-06-14 14:34:44,244 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job5371236206169131677.jar
2013-06-14 14:34:49,495 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job5371236206169131677.jar created
2013-06-14 14:34:49,517 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2013-06-14 14:34:49,529 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2013-06-14 14:34:49,755 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2013-06-14 14:34:50,144 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-06-14 14:34:50,145 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-06-14 14:34:50,256 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-06-14 14:34:50,316 [JobControl] INFO com.hadoop.compression.lzo.GPLNativeCodeLoader - Loaded native gpl library
2013-06-14 14:34:50,444 [JobControl] INFO com.hadoop.compression.lzo.LzoCodec - Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
2013-06-14 14:34:50,665 [JobControl] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library is available
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library loaded
2013-06-14 14:34:50,680 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201306140401_0021
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases batting,grp_data,max_runs,runs
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: batting[1,10],runs[2,7],max_runs[4,11],grp_data[3,11] C: max_runs[4,11],grp_data[3,11] R: max_runs[4,11]
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://sandbox:50030/jobdetails.jsp?jobid=job_201306140401_0021
2013-06-14 14:36:01,993 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2013-06-14 14:36:04,767 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201306140401_0021 has failed! Stop running all dependent jobs
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-06-14 14:36:05,029 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2106: Error executing an algebraic function
2013-06-14 14:36:05,030 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
1.2.0.1.3.0.0-107 0.11.1.1.3.0.0-107 mapred 2013-06-14 14:34:41 2013-06-14 14:36:05 HASH_JOIN,GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_201306140401_0021 batting,grp_data,max_runs,runs MULTI_QUERY,COMBINER Message: Job failed! Error - # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201306140401_0021_m_000000
Input(s):
Failed to read data from "hdfs://sandbox:8020/user/hue/batting.csv"
Output(s):
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201306140401_0021 -> null,
null
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-06-14 14:36:05,043 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias join_data
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
When switching this part, MAX(runs.runs), to avg(runs.runs), I am getting a completely different issue:
2013-06-14 14:38:25,694 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:38:25,695 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
2013-06-14 14:38:26,198 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:38:26,438 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:38:26,824 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:38:28,238 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve avg using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
Anybody know what the issue might be?
I am sure a lot of people have figured this out. I combined Eugene's solution with the original code from Hortonworks so that we get the exact output specified in the tutorial.
The following code works and produces the exact output specified in the tutorial:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
Note: line "runs = FILTER runs_raw BY runs > 0;" is additional than what has been provided by Hortonworks, thanks to Eugene for sharing working code which I used to modify original Hortonworks code to make it work.
UDFs are case sensitive, so at least to answer the second part of your question - you'll need to use AVG(runs.runs) instead of avg(runs.runs)
It's likely that once you correct your syntax you'll get the original error you reported...
I am having the exact same issue with the exact same log output, but this solution doesn't work, because I believe changing MAX to AVG here defeats the whole purpose of this hortonworks.com tutorial - it was to get the MAX runs by playerID for each year.
UPDATE
Finally I got it resolved - you have to either remove the first line in Batting.csv (the column names) or edit your Pig Latin code like this:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
dump max_runs;
After that you should be able to complete the tutorial correctly and get the proper result.
It also looks like this is due to a "bug" in the older version of Pig which was used in the tutorial.
Please specify appropriate data types for playerID, year & runs, like below:
runs = FOREACH batting GENERATE $0 as playerID:int, $1 as year:chararray, $8 as runs:int;
Now, it should work.

How do I get myBatis update statement to commit in a Java Spring application?

I have a Java Spring application running under the jetty-maven-plugin. When I call a myBatis insert statement, the statement is automatically committed. However, when I call an update, the statement is not committed. Per the myBatis documentation (http://www.mybatis.org/spring/transactions.html):
You cannot call SqlSession.commit(), SqlSession.rollback() or SqlSession.close() over a Spring managed SqlSession.
How do I configure my application to auto commit on a myBatis update statement?
I enabled logging. Here is what the log states on updates:
2012-12-12 17:20:31,669 DEBUG [org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
2012-12-12 17:20:31,669 DEBUG [org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@19e86f9] was not registered for synchronization because synchronization is not active
2012-12-12 17:20:31,669 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
2012-12-12 17:20:31,669 DEBUG [org.springframework.jdbc.datasource.DriverManagerDataSource] - Creating new JDBC DriverManager Connection to [jdbc:jtds:sqlserver://test/test]
2012-12-12 17:20:31,684 DEBUG [org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3@af7eaf] will not be managed by Spring
2012-12-12 17:20:31,684 DEBUG [com.persistence.MyMapper.updateMyItem] - ooo Using Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3@af7eaf]
2012-12-12 17:20:31,684 DEBUG [com.persistence.MyMapper.updateMyItem] - ==> Preparing: update myTable set date=? where id=?
2012-12-12 17:20:31,700 DEBUG [com.persistence.MyMapper.updateMyItem] - ==> Parameters: 2012-11-26 00:00:00.0(Timestamp), 0(Integer)
2012-12-12 17:20:31,700 DEBUG [org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@19e86f9]
2012-12-12 17:20:31,700 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
On insert, the log is:
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@22da8f] was not registered for synchronization because synchronization is not active
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DriverManagerDataSource] - Creating new JDBC DriverManager Connection to [jdbc:jtds:sqlserver://test/test]
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3@3af3cb] will not be managed by Spring
2012-12-12 16:35:53,932 DEBUG [com.persistence.MyMapper.insertMyItem] - ooo Using Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3@3af3cb]
2012-12-12 16:35:53,932 DEBUG [com.persistence.MyMapper.insertMyItem] - ==> Preparing: insert into myTable (id,date) values (?, ?)
2012-12-12 16:35:53,932 DEBUG [com.persistence.MyMapper.insertMyItem] - ==> Parameters: 5(Integer), 2012-11-26 00:00:00.0(Timestamp)
2012-12-12 16:35:53,932 DEBUG [org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@22da8f]
2012-12-12 16:35:53,932 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
The insert and update log statements seem to indicate the same basic steps.
After a bit more research, I found that it was a client issue: the client was always passing 0 for the id in the update statement, while the records have ids > 0. Along the way, I configured Spring transaction management. It was at that point that I observed the same behavior and realized it must be something other than a server-side configuration issue. Sorry about not catching that prior to posting.
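For reference, a minimal sketch of the Spring transaction management wiring mentioned above (this assumes a javax.sql.DataSource bean is already defined; it is one common way to set it up, not necessarily the exact configuration used here):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    // MyBatis-Spring defers to Spring's DataSourceTransactionManager, so methods
    // annotated with @Transactional commit (or roll back) the underlying SqlSession.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

With this in place, annotating the service method that calls the mapper's update with @Transactional lets Spring commit the change when the method completes.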

how to remove JDBC debug logs [duplicate]

This question already has an answer here:
Java tomcat - how to remove JDBC debug logs
(1 answer)
Closed 8 years ago.
The following log output is constantly written to the console:
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [0], value class [java.lang.Long], SQL type -5
09:36:53.456 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.JdbcTemplate - Executing prepared SQL query
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.JdbcTemplate - Executing prepared SQL statement..
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [0], value class [java.lang.Integer], SQL type 2
09:36:53.472 [CloseConnectionsTimer] DEBUG o.s.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
How can I stop these logs or change the logging level to INFO or ERROR?
Setting the logger to any level above DEBUG, e.g. INFO, WARN or ERROR, will make those messages disappear.
First of all, those are not JDBC logs, those are Spring logs.
Anyway, this should do the job in your Logback or Log4j 2 configuration file:
<logger name="org.springframework.jdbc" level="OFF"/>
