SocketChannel.write has a limit? [duplicate] - java

This question already has answers here:
SocketChannel.write() writing problem
(2 answers)
Closed 5 years ago.
I'm writing an NIO server that processes some HTTP requests,
and I want to use SocketChannel's write(ByteBuffer[] srcs) method.
The code looks like this:
@Override
public void send(ByteBuffer[] arr) throws IOException {
    long writeBytes = channel.write(arr);
    log.debug("writeBytes " + writeBytes);
}
But if arr is too big, such as 93 KB, it only writes:
DEBUG : 2017-08-25 15:03:41 > writeBytes 16384
And in the browser, of course, the response is incomplete; only part of it arrives.
If I split it, such as:
@Override
public void send(byte[] bytes, int index, int length) throws IOException {
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    try {
        buffer.put(bytes, index, length);
    } catch (BufferOverflowException e) {
        log.error(e.getMessage());
    }
    buffer.flip();
    channel.write(buffer);
}
and call Thread.sleep(2) after every call, sending 93 times in a loop, it works, but I don't think that is a good way to do it.
16384 is 16 KB, so I really think some buffer is 16 KB, but I haven't found which buffer it is.
I saw that channel.socket().getSendBufferSize() is 8192.
I tried channel.socket().setSendBufferSize(4*1024*1024);
but it didn't help.
How can I transfer a large payload (more than 16 KB) to the browser in one go, without sleeping or waiting?

Thanks @Keyaman, you are right, I should read some tutorials.
I fixed it and it works well:
while (arr[arr.length - 1].hasRemaining()) {
    long writeBytes = channel.write(arr);
    log.debug("writeBytes " + writeBytes);
}
but I still don't know whether it is a good way.
The log looks like this:
DEBUG : 2017-08-25 16:26:56 > writeBytes 16384
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
...
DEBUG : 2017-08-25 16:26:56 > writeBytes 16384
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
...
DEBUG : 2017-08-25 16:26:56 > writeBytes 0
DEBUG : 2017-08-25 16:26:56 > writeBytes 12922
There seems to be a 16 KB buffer somewhere; when it's full, it flushes.
What is it, and can I set its size?
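Not an authoritative answer, but the usual non-blocking pattern is to accept that write() may consume only part of the buffers (16384 here is simply how much the socket accepted at that moment) and to register OP_WRITE interest instead of spinning. A minimal sketch, assuming a non-blocking channel already registered with a Selector; the class, queue and method names are made up for illustration:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

class WriteHandler {
    private final Queue<ByteBuffer> pending = new ArrayDeque<>();

    // Called by application code: write what we can, and if data is left over,
    // ask the selector to tell us when the socket becomes writable again.
    void send(SelectionKey key, ByteBuffer[] arr) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(arr);                       // writes as much as the socket accepts
        for (ByteBuffer b : arr) {
            if (b.hasRemaining()) {
                pending.add(b);                   // keep the unwritten remainder
            }
        }
        if (!pending.isEmpty()) {
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        }
    }

    // Called from the selector loop when the key reports it is writable.
    void onWritable(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!pending.isEmpty()) {
            ByteBuffer head = pending.peek();
            channel.write(head);
            if (head.hasRemaining()) {
                return;                           // socket full again, wait for the next OP_WRITE
            }
            pending.remove();
        }
        // Everything flushed: stop asking for OP_WRITE to avoid busy selects.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}

With this, the writeBytes 0 busy loop disappears, because the handler only runs when the socket can accept more data again.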

Related

Apache Flink batch mode FileSink to S3 can't finish in JetBrains IDEA

What we are trying to do: we are evaluating Flink to perform batch processing using DataStream API in BATCH mode.
Minimal application to reproduce the issue:
FileSystem.initialize(GlobalConfiguration.loadConfiguration(System.getenv("FLINK_CONF_DIR")))

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setRuntimeMode(RuntimeExecutionMode.BATCH)

val inputStream = env.fromSource(
    FileSource.forRecordStreamFormat(new TextLineFormat(), new Path("s3://testtest/2022/04/12/")).build(),
    WatermarkStrategy.noWatermarks()
      .withTimestampAssigner(new SerializableTimestampAssigner[String]() {
        override def extractTimestamp(element: String, recordTimestamp: Long): Long = -1
      }),
    "MySourceName"
  )
  .map(str => {
    val jsonNode = JsonUtil.getJSON(str)
    val log = JsonUtil.getJSONString(jsonNode, "log")
    if (StringUtils.isNotBlank(log)) {
      log
    } else {
      ""
    }
  })
  .filter(StringUtils.isNotBlank(_))

val sink: FileSink[BaseLocation] = FileSink
  // .forBulkFormat(new Path("/Users/temp/flinksave"), AvroWriters.forSpecificRecord(classOf[BaseLocation]))
  .forBulkFormat(new Path("s3://testtest/avro"), AvroWriters.forSpecificRecord(classOf[BaseLocation]))
  .withRollingPolicy(OnCheckpointRollingPolicy.build())
  .withOutputFileConfig(config)
  .build()

inputStream.map(data => {
    val baseLocation = new BaseLocation()
    baseLocation.setRegion(data)
    baseLocation
  }).sinkTo(sink)

inputStream.print("input:")
env.execute()
Flink version: 1.14.2
The program executes normally when the path is local.
The program does not give an error when the path is changed to s3://. However, I do not see any files being written to S3 either.
This problem does not exist in standalone mode, only in the local development environment (JetBrains IDEA). Is it because I lack some configuration? I have already configured flink-config.yaml like:
s3.access-key: test
s3.secret-key: test
s3.endpoint: http://127.0.0.1:39000
Log output:
18:42:25,524 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Finished reading split(s) [0000000002]
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Finished reading split(s) [0000000001]
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager [] - Closing splitFetcher 0 because it is idle.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager [] - Closing splitFetcher 0 because it is idle.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Shutting down split fetcher 0
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Shutting down split fetcher 0
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Split fetcher 0 exited.
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Split fetcher 0 exited.
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - Subtask 11 (on host '') is requesting a file source split
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - No more splits available for subtask 11
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - Subtask 8 (on host '') is requesting a file source split
18:42:25,525 INFO org.apache.flink.connector.file.src.impl.StaticFileSplitEnumerator [] - No more splits available for subtask 8
18:42:25,525 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Reader received NoMoreSplits event.
18:42:25,526 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Reader received NoMoreSplits event.
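Not a confirmed fix, but when running from the IDE it can help to rule out configuration loading by passing the S3 settings to Flink's filesystem layer programmatically instead of relying on FLINK_CONF_DIR being picked up. A rough Java sketch (assumptions: flink-s3-fs-hadoop or flink-s3-fs-presto is on the IDE classpath, and the endpoint is a local MinIO):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;

public class LocalS3Setup {
    public static void main(String[] args) throws Exception {
        // Same keys as in flink-conf.yaml, set explicitly so the IDE run
        // does not depend on FLINK_CONF_DIR being resolved correctly.
        Configuration conf = new Configuration();
        conf.setString("s3.access-key", "test");
        conf.setString("s3.secret-key", "test");
        conf.setString("s3.endpoint", "http://127.0.0.1:39000");
        conf.setString("s3.path.style.access", "true"); // usually needed for MinIO

        // Initialize Flink's pluggable filesystems before any s3:// path is used,
        // mirroring the FileSystem.initialize call in the question.
        FileSystem.initialize(conf);

        // ... build and execute the pipeline as in the question ...
    }
}

As far as I understand, in BATCH mode the FileSink only moves its part files into their final committed state when the job finishes, so nothing may appear in S3 until env.execute() returns successfully.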

JDK does not respect system DNS settings in Kubernetes

I customized the k8s CoreDNS config to resolve a custom name, which works fine in pods (checked with ping xx).
But the name is not resolved in the Java application (JDK 14).
The nameserver is OK:
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search xxxx-5-production.svc.cluster.local svc.cluster.local cluster.local
/ # ping xx
PING xx (192.168.65.2): 56 data bytes
64 bytes from 192.168.65.2: seq=0 ttl=37 time=0.787 ms
Edit: I use a CoreDNS rewrite to map the host name xx to host.docker.internal; this is the change to the CoreDNS config:
rewrite name regex (^|(?:\S*\.)*)xx\.?$ {1}host.docker.internal
I added some debug code to the entry point:
static void runCommand(String... commands) {
    try {
        ProcessBuilder cat = new ProcessBuilder(commands);
        Process start = cat.start();
        start.waitFor();
        String output = new BufferedReader(new InputStreamReader(start.getInputStream())).lines().collect(Collectors.joining());
        String err = new BufferedReader(new InputStreamReader(start.getErrorStream())).lines().collect(Collectors.joining());
        log.info("\n{}: stout {}", Arrays.toString(commands), output);
        log.info("\n{}: sterr{}", Arrays.toString(commands), err);
    } catch (IOException | InterruptedException e) {
        log.error(e.getClass().getCanonicalName(), e);
    }
}

public static void main(String[] args) {
    try {
        InetAddress xx = Inet4Address.getByName("xx");
        log.info("{}: {}", "InetAddress xx", xx.getHostAddress());
    } catch (IOException e) {
        log.error(e.getClass().getCanonicalName(), e);
    }
    runCommand("cat", "/etc/resolv.conf");
    runCommand("ping", "xx", "-c", "1");
    runCommand("ping", "host.docker.internal", "-c", "1");
    runCommand("nslookup", "xx");
    runCommand("ifconfig");
    SpringApplication.run(FileServerApp.class, args);
}
Here is the output:
01:01:39.950 [main] ERROR com.j.file_server_app.FileServerApp - java.net.UnknownHostException
java.net.UnknownHostException: xx: Name or service not known
at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:932)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1505)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:851)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1495)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1354)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1288)
at java.base/java.net.InetAddress.getByName(InetAddress.java:1238)
at com.j.file_server_app.FileServerApp.main(FileServerApp.java:43)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
01:01:39.983 [main] INFO com.j.file_server_app.FileServerApp -
[cat, /etc/resolv.conf]: stout nameserver 10.96.0.10search default.svc.cluster.local svc.cluster.local cluster.localoptions ndots:5
01:01:39.985 [main] INFO com.j.file_server_app.FileServerApp -
[cat, /etc/resolv.conf]: sterr
01:01:39.991 [main] INFO com.j.file_server_app.FileServerApp -
[ping, xx, -c, 1]: stout
01:01:39.991 [main] INFO com.j.file_server_app.FileServerApp -
[ping, xx, -c, 1]: sterrping: unknown host
01:01:39.998 [main] INFO com.j.file_server_app.FileServerApp -
[ping, host.docker.internal, -c, 1]: stout PING host.docker.internal (192.168.65.2): 56 data bytes64 bytes from 192.168.65.2: icmp_seq=0 ttl=37 time=0.757 ms--- host.docker.internal ping statistics ---1 packets transmitted, 1 packets received, 0% packet lossround-trip min/avg/max/stddev = 0.757/0.757/0.757/0.000 ms
01:01:39.998 [main] INFO com.j.file_server_app.FileServerApp -
[ping, host.docker.internal, -c, 1]: sterr
01:01:40.045 [main] INFO com.j.file_server_app.FileServerApp -
[nslookup, xx]: stout Server: 10.96.0.10Address: 10.96.0.10#53Non-authoritative answer:Name: host.docker.internalAddress: 192.168.65.2** server can't find xx: NXDOMAIN
01:01:40.045 [main] INFO com.j.file_server_app.FileServerApp -
[nslookup, xx]: sterr
01:01:40.048 [main] INFO com.j.file_server_app.FileServerApp -
[ifconfig]: stout eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.1.3.14 netmask 255.255.0.0 broadcast 0.0.0.0 ether ce:71:60:9a:75:05 txqueuelen 0 (Ethernet) RX packets 35 bytes 3776 (3.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 22 bytes 1650 (1.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 1000 (Local Loopback) RX packets 1 bytes 29 (29.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1 bytes 29 (29.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
01:01:40.048 [main] INFO com.j.file_server_app.FileServerApp -
[ifconfig]: sterr
It looks like CoreDNS is not working for this pod, but in the front-end pod ping works fine. This is the front-end Dockerfile:
FROM library/nginx:stable-alpine
RUN mkdir /app
EXPOSE 80
ADD dist /app
COPY nginx.conf /etc/nginx/nginx.conf
Using docker inspect on the front-end and back-end containers, both network settings are:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
Both the front end and the back end have a service of type LoadBalancer. Now my question is: why does name resolution behave differently in these two pods?
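One thing that might help narrow this down (purely a debugging suggestion, not a confirmed diagnosis): /etc/resolv.conf shows ndots:5 with several search domains, and the JDK resolver can behave differently from shell tools for a bare single-label name like xx, so it is worth testing the search-domain expansions explicitly from inside the JVM. A small hypothetical sketch; the candidate list is made up from the search line in the output above:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsProbe {
    public static void main(String[] args) {
        // The bare name plus the expansions the resolver's search domains would produce.
        String[] candidates = {
            "xx",
            "xx.default.svc.cluster.local",
            "xx.svc.cluster.local",
            "xx.cluster.local",
            "host.docker.internal"
        };
        for (String name : candidates) {
            try {
                InetAddress addr = InetAddress.getByName(name);
                System.out.println(name + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(name + " -> unresolved (" + e.getMessage() + ")");
            }
        }
    }
}

If only the fully qualified forms resolve, the problem is in search-domain/ndots handling on the JVM side rather than in the CoreDNS rewrite itself.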

Iterate over different columns using withColumn in Java Spark

I have to modify a Dataset<Row> according to some rules that are in a List<Row>.
I want to iterate over the Dataset<Row> columns using Dataset.withColumn(...), as shown in the following example:
(import necessary libraries...)
SparkSession spark = SparkSession
    .builder()
    .appName("appname")
    .config("spark.some.config.option", "some-value")
    .getOrCreate();

Dataset<Row> dfToModify = spark.read().table("TableToModify");

List<Row> ListWithInfo = new ArrayList<>(Arrays.asList());
ListWithInfo.add(0, RowFactory.create("field1", "input1", "output1", "conditionAux1"));
ListWithInfo.add(1, RowFactory.create("field1", "input1", "output1", "conditionAux2"));
ListWithInfo.add(2, RowFactory.create("field1", "input2", "output3", "conditionAux3"));
ListWithInfo.add(3, RowFactory.create("field2", "input3", "output4", "conditionAux4"));
...

for (Row row : ListWithInfo) {
    String field = row.getString(0);
    String input = row.getString(1);
    String output = row.getString(2);
    String conditionAux = row.getString(3);

    dfToModify = dfToModify.withColumn(field,
        when(dfToModify.col(field).equalTo(input)
                .and(dfToModify.col("conditionAuxField").equalTo(conditionAux)),
            output)
        .otherwise(dfToModify.col(field)));
}
The code does work as it should, but when there are more than 50 "rules" in the List, the program doesn't finish and this output is shown on the screen:
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1653
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1650
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1635
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1641
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1645
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1646
20/01/27 17:48:18 INFO storage.BlockManagerInfo: Removed broadcast_113_piece0 on **************** in memory (size: 14.5 KB, free: 3.0 GB)
20/01/27 17:48:18 INFO storage.BlockManagerInfo: Removed broadcast_113_piece0 on ***************** in memory (size: 14.5 KB, free: 3.0 GB)
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1639
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1649
20/01/27 17:48:18 INFO spark.ContextCleaner: Cleaned accumulator 1651
20/01/27 17:49:18 INFO spark.ExecutorAllocationManager: Request to remove executorIds: 6
20/01/27 17:49:18 INFO cluster.YarnClientSchedulerBackend: Requesting to kill executor(s) 6
20/01/27 17:49:18 INFO cluster.YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 6
20/01/27 17:49:18 INFO spark.ExecutorAllocationManager: Removing executor 6 because it has been idle for 60 seconds (new desired total will be 0)
20/01/27 17:49:19 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
20/01/27 17:49:19 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 6.
20/01/27 17:49:19 INFO scheduler.DAGScheduler: Executor lost: 6 (epoch 0)
20/01/27 17:49:19 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
20/01/27 17:49:19 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 6 from BlockManagerMaster.
20/01/27 17:49:19 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(6, *********************, 43387, None)
20/01/27 17:49:19 INFO storage.BlockManagerMaster: Removed 6 successfully in removeExecutor
20/01/27 17:49:19 INFO cluster.YarnScheduler: Executor 6 on **************** killed by driver.
20/01/27 17:49:19 INFO spark.ExecutorAllocationManager: Existing executor 6 has been removed (new total is 0)
20/01/27 17:49:20 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
20/01/27 17:49:21 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
20/01/27 17:49:22 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
.
.
.
.
Is there any way to make this more efficient using Java Spark (without using a for loop or something similar)?
Finally I used the withColumns method of the Dataset<Row> object. This method needs two arguments:
.withColumns(Seq<String> columnNames, Seq<Column> columnValues);
and the column names in the Seq<String> cannot be duplicated.
The code is as follows:
SparkSession spark = SparkSession
    .builder()
    .appName("appname")
    .config("spark.some.config.option", "some-value")
    .getOrCreate();

Dataset<Row> dfToModify = spark.read().table("TableToModify");

List<Row> ListWithInfo = new ArrayList<>(Arrays.asList());
ListWithInfo.add(0, RowFactory.create("field1", "input1", "output1", "conditionAux1"));
ListWithInfo.add(1, RowFactory.create("field1", "input1", "output1", "conditionAux2"));
ListWithInfo.add(2, RowFactory.create("field1", "input2", "output3", "conditionAux3"));
ListWithInfo.add(3, RowFactory.create("field2", "input3", "output4", "conditionAux4"));
...

// initialize values for the first field and condition
String field_ant = ListWithInfo.get(0).getString(0).toLowerCase();
String first_input = ListWithInfo.get(0).getString(1);
String first_output = ListWithInfo.get(0).getString(2);
String first_conditionAux = ListWithInfo.get(0).getString(3);

Column whenColumn = when(dfToModify.col(field_ant).equalTo(first_input)
        .and(dfToModify.col("conditionAuxField").equalTo(lit(first_conditionAux))),
    first_output);

// lists with the names of the fields and the conditions
List<Column> whenColumnList = new ArrayList<>(Arrays.asList());
List<String> fieldsNameList = new ArrayList<>(Arrays.asList());

for (Row row : ListWithInfo.subList(1, ListWithInfo.size())) {
    String field = row.getString(0);
    String input = row.getString(1);
    String output = row.getString(2);
    String conditionAux = row.getString(3);

    if (field.equals(field_ant)) {
        // if field equals field_ant, the new condition is added to the previous one
        whenColumn = whenColumn.when(dfToModify.col(field).equalTo(input)
                .and(dfToModify.col("conditionAuxField").equalTo(lit(conditionAux))),
            output);
    } else {
        // if field is different from the previous one:
        // close the conditions for the previous field
        whenColumn = whenColumn.otherwise(dfToModify.col(field_ant));
        // add the field (String) and the conditions (Column) to the lists
        whenColumnList.add(whenColumn);
        fieldsNameList.add(field_ant);
        // and initialize the conditions for the new field
        whenColumn = when(dfToModify.col(field).equalTo(input)
                .and(dfToModify.col("conditionAuxField").equalTo(lit(conditionAux))),
            output);
    }
    field_ant = field;
}

// add the last values
whenColumnList.add(whenColumn);
fieldsNameList.add(field_ant);

// transform the lists to Seq
Seq<Column> whenColumnSeq = JavaConversions.asScalaBuffer(whenColumnList).seq();
Seq<String> fieldsNameSeq = JavaConversions.asScalaBuffer(fieldsNameList).seq();

Dataset<Row> dfModified = dfToModify.withColumns(fieldsNameSeq, whenColumnSeq);
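As a side note (not part of the original answer): JavaConversions is deprecated in newer Scala versions, so the list-to-Seq step could also go through JavaConverters. A small sketch assuming Scala 2.12 on the classpath; the helper class name is made up:

import java.util.List;

import scala.collection.JavaConverters;
import scala.collection.Seq;

public final class SeqUtil {

    private SeqUtil() {
    }

    // Converts a java.util.List into a scala.collection.Seq, as expected by
    // Dataset.withColumns(Seq<String>, Seq<Column>). The returned Buffer is a Seq.
    public static <T> Seq<T> toSeq(List<T> list) {
        return JavaConverters.asScalaBufferConverter(list).asScala();
    }
}

Usage would then be dfToModify.withColumns(SeqUtil.toSeq(fieldsNameList), SeqUtil.toSeq(whenColumnList)).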

How to repeat this on Oracle? (SELECT FOR UPDATE + ORDER BY + LIMIT 1 + SKIP LOCKED)

I have a controller:
@GetMapping("/old")
public Product getOld() {
    Product omeOld = productService.getOneOld();
    log.info(String.valueOf(omeOld.getId()));
    return omeOld;
}
Service:
@Override
@Transactional
public Product getOneOld() {
    Product aNew = productsRepository.findTop1ByStatusOrderByCountAsc("NEW");
    try {
        Thread.sleep(5000L);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return aNew;
}
And the repository:
@Repository
public interface ProductsRepository extends JpaRepository<Product, Long> {
    Product findTop1ByStatusOrderByCountAsc(String status);
}
I start JMeter and send 5 requests in 5 threads. As a result I get all 5 responses after 5 seconds, so the requests were processed in parallel. But in the log I see the following:
2018-09-14 14:04:35.524 INFO 9048 --- [nio-8080-exec-1] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:04:35.525 INFO 9048 --- [nio-8080-exec-2] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:04:35.532 INFO 9048 --- [nio-8080-exec-3] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:04:35.534 INFO 9048 --- [nio-8080-exec-4] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:04:35.534 INFO 9048 --- [nio-8080-exec-6] c.e.l.demo.controller.ProductController : 1
Each thread selects the same row and processes it. I need the first thread to select the first row, the second thread to select the second row, and so on. I tried @Lock(LockModeType.PESSIMISTIC_WRITE):
@Lock(LockModeType.PESSIMISTIC_WRITE)
Product findTop1ByStatusOrderByCountAsc(String status);
Now when I start JMeter I get the following behavior:
the first thread works for 5 seconds, then the second thread works for 5 seconds, and so on: 25 seconds for all 5 threads. And in the log:
2018-09-14 14:11:40.564 INFO 13724 --- [nio-8080-exec-5] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:11:45.566 INFO 13724 --- [nio-8080-exec-4] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:11:50.567 INFO 13724 --- [nio-8080-exec-2] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:11:55.568 INFO 13724 --- [nio-8080-exec-1] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:12:00.570 INFO 13724 --- [nio-8080-exec-3] c.e.l.demo.controller.ProductController : 1
All threads select the same row (if I change this row in the first thread, it will not be selected by the second thread if the conditions no longer match).
I tried this:
@Query(value = "Select * from products where status = ?1 order by count asc LIMIT 1 for update", nativeQuery = true)
Product findTop1ByStatusOrderByCountAsc(String status);
The result is the same.
But I need the first thread to select the first row and lock it, and the second thread to select the next unlocked row and process it. I tried this next:
@Query(value = "Select * from products where status = ?1 order by count asc LIMIT 1 for update of products skip locked", nativeQuery = true)
Product findTop1ByStatusOrderByCountAsc(String status);
And it works fine!:
2018-09-14 14:25:00.355 INFO 7904 --- [io-8080-exec-10] c.e.l.demo.controller.ProductController : 4
2018-09-14 14:25:00.355 INFO 7904 --- [nio-8080-exec-4] c.e.l.demo.controller.ProductController : 3
2018-09-14 14:25:00.355 INFO 7904 --- [nio-8080-exec-9] c.e.l.demo.controller.ProductController : 1
2018-09-14 14:25:00.358 INFO 7904 --- [nio-8080-exec-5] c.e.l.demo.controller.ProductController : 5
2018-09-14 14:25:00.359 INFO 7904 --- [nio-8080-exec-2] c.e.l.demo.controller.ProductController : 6
Each thread's select picks one row from the non-locked rows!
But how can I reproduce this with Oracle? In Oracle I cannot write LIMIT 1, and if I use ROWNUM = 1 each thread always selects the same row.

Eclipse [] is an unknown syslog facility error

I'm trying to execute a program in Eclipse, and when I click Run I see this in the console output:
[] is an unknown syslog facility. Defaulting to [USER].
/
"Failed"
Any ideas?
It looks like that error is coming from org.apache.log4j.net.SyslogAppender, and that you've tried to set a bad facility name. Go take a look at your appenders and how you are setting them up.
public
void setFacility(String facilityName) {
  if(facilityName == null)
    return;

  syslogFacility = getFacility(facilityName);
  if (syslogFacility == -1) {
    System.err.println("["+facilityName +
              "] is an unknown syslog facility. Defaulting to [USER].");
    syslogFacility = LOG_USER;
  }

  this.initSyslogFacilityStr();

  // If there is already a sqw, make it use the new facility.
  if(sqw != null) {
    sqw.setSyslogFacility(this.syslogFacility);
  }
}
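If that is indeed the cause, the fix is to give the appender a facility name that SyslogAppender actually recognizes. As an illustration only, here is a hypothetical programmatic log4j 1.x setup; most projects would set the same values in log4j.properties or log4j.xml instead:

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.net.SyslogAppender;

public class SyslogSetup {
    public static void main(String[] args) {
        SyslogAppender appender = new SyslogAppender();
        appender.setSyslogHost("localhost");
        // Must be one of the names getFacility() knows (KERN, USER, MAIL, DAEMON,
        // LOCAL0..LOCAL7, ...); an empty or unknown value triggers the warning above.
        appender.setFacility("LOCAL0");
        appender.setLayout(new PatternLayout("%d %-5p %c - %m%n"));
        appender.activateOptions();
        Logger.getRootLogger().addAppender(appender);

        Logger.getLogger(SyslogSetup.class).info("syslog appender configured");
    }
}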
