Handle long min value condition - java

When I run my program, Long.MIN_VALUE is being persisted instead of the original value coming from the backend.
I am using the code:
if (columnName.equals(Fields.NOTIONAL)) {
    orderData.notional(getNewValue(data));
}
As the output of this, I am getting Long.MIN_VALUE instead of the original value.
I tried using this method to handle the scenario
public String getNewValue(Object data) {
    return ((Long) data).getLong("0") == Long.MIN_VALUE ? "" : ((Long) data).toString();
}
but it doesn't work. Please suggest a fix.

EDITED: I misread the code in the question; rereading it, I now get what the author is trying to do, and cleaned up the suggestion as a consequence.
((Long) data).getLong("0") does not do what you seem to think it does. getLong is a static method on Long, so the ((Long) data) part is ignored entirely; the call retrieves the system property named '0' and then attempts to parse it as a Long value. As in, if you start your VM with java -D0=1234 com.foo.YourClass, that returns 1234; normally no such property exists and the call simply returns null. I don't even know what you're attempting to accomplish with this call. If the property is absent, the null result will throw a NullPointerException when it is unboxed for the == comparison; if it is somehow present, it won't be equal to Long.MIN_VALUE, so the method returns ((Long) data).toString(). If data is in fact a Long representing MIN_VALUE, you'll get the digits of MIN_VALUE, which is clearly not what you wanted.
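To make that concrete, here is a tiny, purely illustrative demo of what Long.getLong(String) actually does (the class name is mine):
public class GetLongDemo {
    public static void main(String[] args) {
        // Long.getLong(name) reads the *system property* with that name and parses it as a Long.
        System.out.println(Long.getLong("0"));   // null, unless the JVM was started with -D0=<number>
        System.setProperty("0", "1234");
        System.out.println(Long.getLong("0"));   // 1234
    }
}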
Try this:
public String getNewValue(Object data) {
    if (data instanceof Number) {
        long v = ((Number) data).longValue();
        return v == Long.MIN_VALUE ? "" : data.toString();
    }
    // what do you want to return if the input isn't a numeric object at all?
    return "";
}

Related

java.lang.Integer cannot be cast to java.lang.Long

I'm supposed to receive a long integer in my web service.
long ipInt = (long) obj.get("ipInt");
When I test my program and put in the ipInt value 2886872928, it succeeds.
However, when I test my program and put in the ipInt value 167844168, it gives me this error:
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
The error points to the line of code above.
FYI, my data is in JSON format :
{
"uuID": "user001",
"ipInt": 16744168,
"latiTude": 0,
"longiTude": 0,
}
Is there any suggestion so that I can ensure my code is able to receive both Integer and Long values for ipInt?
Both Integer and Long are subclasses of Number, so I suspect you can use:
long ipInt = ((Number) obj.get("ipInt")).longValue();
That should work whether the value returned by obj.get("ipInt") is an Integer reference or a Long reference. It has the downside that it will also silently continue if ipInt has been specified as a floating point number (e.g. "ipInt": 1.5) in the JSON, where you might want to throw an exception instead.
You could use instanceof instead to check for Long and Integer specifically, but it would be pretty ugly.
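If you do want that stricter behaviour, a rough sketch of the instanceof version might look like this (the helper name toLongStrict is mine, not part of any library):
static long toLongStrict(Object value) {
    // Accept only the integral boxed types; reject Float/Double and anything else.
    if (value instanceof Integer || value instanceof Long) {
        return ((Number) value).longValue();
    }
    throw new IllegalArgumentException("Expected an integral value but got: " + value);
}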
We don't know what obj.get() returns so it's hard to say precisely, but when I use such methods that return Number subclasses, I find it safer to cast it to Number and call the appropriate xxxValue(), rather than letting the auto-unboxing throw the ClassCastException:
long ipInt = ((Number)obj.get("ipInt")).longValue();
That way, you're doing explicit unboxing to a long, and are able to cope with data that could include a ., which would return a Float or Double instead.
Long.valueOf(jo.get("ipInt").toString());
is OK as well.
In Kotlin I simply use this:
val myInt: Int = 10
val myLong = myInt.toLong()
You mention the current approach works when you provide a value outside the range of int, but fails when the value is within the int range. That is odd behavior for an API, because it means you need to check the returned type yourself. You can do that. The usual way is with instanceof. Something like:
long ipInt;
Object o = obj.get("ipInt");
if (o instanceof Integer) {
    ipInt = ((Integer) o).intValue();
} else if (o instanceof Long) {
    ipInt = ((Long) o).longValue();
} else {
    // without this branch ipInt might never be assigned, which the compiler rejects
    throw new IllegalArgumentException("ipInt is neither an Integer nor a Long: " + o);
}
public static void main(String[] args) {
    JSONObject jo = JSON.parseObject(
            "{ \"uuID\": \"user001\", \"ipInt\": 16744168, \"latiTude\": 0, \"longiTude\": 0}");
    System.out.println(jo);
    long sellerId1 = Long.valueOf(jo.get("ipInt").toString());
    //Long sellerId1 = (long)jo.get("ipInt");
    System.out.println(sellerId1);
}

Hadoop: MapReduce MinMax result different from original dataset

I am new to Hadoop.
I am trying to use MapReduce to get the min and max Monthly Precipitation values for each year.
Here is what one year of the data set looks like:
Product code,Station number,Year,Month,Monthly Precipitation Total (millimetres),Quality
IDCJAC0001,023000,1839,01,11.5,Y
IDCJAC0001,023000,1839,02,11.4,Y
IDCJAC0001,023000,1839,03,20.8,Y
IDCJAC0001,023000,1839,04,10.5,Y
IDCJAC0001,023000,1839,05,4.8,Y
IDCJAC0001,023000,1839,06,90.4,Y
IDCJAC0001,023000,1839,07,54.2,Y
IDCJAC0001,023000,1839,08,97.4,Y
IDCJAC0001,023000,1839,09,41.4,Y
IDCJAC0001,023000,1839,10,40.8,Y
IDCJAC0001,023000,1839,11,113.2,Y
IDCJAC0001,023000,1839,12,8.9,Y
And this is the result I get for the year 1839:
1839 1.31709005E9 1.3172928E9
Obviously, the result does not match the original data... but I cannot figure out why this happens.
Your code has multiple issues.
(1) In MinMaxExposure, you write doubles but read ints. You also use the Double type (meaning that you care about nulls) but do not handle nulls in serialization/deserialization. If you really need nulls, you should write something like this:
// write
out.writeBoolean(value != null);
if (value != null) {
out.writeDouble(value);
}
// read
if (in.readBoolean()) {
value = in.readDouble();
} else {
value = null;
}
If you do not need to store nulls, replace Double with double.
(2) In the map function you wrap your code in IOException catch blocks. This doesn't make sense. If the input data has records in an incorrect format, then most probably you will get a NullPointerException or NumberFormatException from Double.parseDouble(). However, you do not handle these exceptions.
Checking for nulls after you have called parseDouble also doesn't make sense.
(3) You pass the map key to the reducer as Text. I would recommend passing the year as an IntWritable (and configuring your job with job.setMapOutputKeyClass(IntWritable.class);); see the driver sketch after point (4).
(4) maxExposure must be handled similarly to minExposure in the reducer code. Currently you just return the value from the last record.
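For point (3), a minimal driver sketch (the driver class and job name here are illustrative; MinMaxExposure is the custom Writable from the question):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;

public class MinMaxDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "min-max-precipitation");
        job.setJarByClass(MinMaxDriver.class);
        // The mapper should now emit the year as an IntWritable key instead of Text.
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(MinMaxExposure.class);
        // ... set the mapper, reducer and input/output paths as in the original job ...
    }
}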
Your logic to find the min and max exposure in the Reducer seems off. You set maxExposure twice, and never check whether it is actually the max exposure. I'd go with:
public void reduce(Text key, Iterable<MinMaxExposure> values,
        Context context) throws IOException, InterruptedException {
    Double minExposure = Double.MAX_VALUE;
    // Note: Double.MIN_VALUE is the smallest *positive* double, so it would break for
    // negative readings; -Double.MAX_VALUE is a safe starting maximum.
    Double maxExposure = -Double.MAX_VALUE;
    for (MinMaxExposure val : values) {
        if (val.getMinExposure() < minExposure) {
            minExposure = val.getMinExposure();
        }
        if (val.getMaxExposure() > maxExposure) {
            maxExposure = val.getMaxExposure();
        }
    }
    MinMaxExposure resultRow = new MinMaxExposure();
    resultRow.setMinExposure(minExposure);
    resultRow.setMaxExposure(maxExposure);
    context.write(key, resultRow);
}

Hadoop Custom Partitioner not behaving according to the logic

Based on this example here, this works. I have tried the same on my dataset.
Sample Dataset:
OBSERVATION;2474472;137176;
OBSERVATION;2474473;137176;
OBSERVATION;2474474;137176;
OBSERVATION;2474475;137177;
Considering each line as a string, my Mapper output is:
key -> string[2], value -> string.
My Partitioner code:
@Override
public int getPartition(Text key, Text value, int reducersDefined) {
    String keyStr = key.toString();
    if(keyStr == "137176") {
        return 0;
    } else {
        return 1 % reducersDefined;
    }
}
In my data set most IDs are 137176. Reducers declared: 2. I expect two output files, one for 137176 and a second for the remaining IDs. I'm getting two output files, but the IDs are evenly distributed across both output files. What's going wrong in my program?
Explicitly set in the Driver method that you want to use your custom Partitioner, by using: job.setPartitionerClass(YourPartitioner.class);. If you don't do that, the default HashPartitioner is used.
Change String comparison method from == to .equals(). i.e., change if(keyStr == "137176") { to if(keyStr.equals("137176")) {.
To save some time, perhaps it will be faster to declare a Text variable at the beginning of the partitioner, like this: Text KEY = new Text("137176");, and then, without converting your input key to a String every time, just compare it with the KEY variable (again using the equals() method). But perhaps those are equivalent. So, what I suggest is:
Text KEY = new Text("137176");

@Override
public int getPartition(Text key, Text value, int reducersDefined) {
    return key.equals(KEY) ? 0 : 1 % reducersDefined;
}
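Putting the first suggestion into the driver, a minimal illustrative fragment (MyPartitioner stands in for whatever your partitioner class is actually called):
// In the driver / job setup code:
job.setPartitionerClass(MyPartitioner.class); // otherwise the default HashPartitioner is used
job.setNumReduceTasks(2);                     // one partition for 137176, one for everything else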
Another suggestion, if the network load is heavy, parse the map output key as VIntWritable and change the Partitioner accordingly.

Why isn't it an error?

The following program is a recursive program to find the maximum and minimum of an array. (I think! Please tell me if it is not a valid recursive program. Though there are easier ways to find the maximum and minimum in an array, I'm doing it recursively only as part of an exercise!)
This program works correctly and produces the outputs as expected.
In the comment line where I have marked "Doubt here!", I am unable to understand why an error is not given during compilation. The return type is clearly an integer array (as specified in the method definition), but I have not assigned the returned data to any integer array, yet the program still works. I was expecting a compilation error if I did it this way, but it worked. If someone could help me figure this out, it'd be appreciated! :)
import java.io.*;
class MaxMin_Recursive
{
    static int i=0,max=-999,min=999;
    public static void main(String[] args) throws IOException
    {
        BufferedReader B = new BufferedReader(new InputStreamReader(System.in));
        int[] inp = new int[6];
        System.out.println("Enter a maximum of 6 numbers..");
        for(int i=0;i<6;i++)
            inp[i] = Integer.parseInt(B.readLine());
        int[] numbers_displayed = new int[2];
        numbers_displayed = RecursiveMinMax(inp);
        System.out.println("The maximum of all numbers is "+numbers_displayed[0]);
        System.out.println("The minimum of all numbers is "+numbers_displayed[1]);
    }
    static int[] RecursiveMinMax(int[] inp_arr) //remember to specify that the return type is an integer array
    {
        int[] retArray = new int[2];
        if(i<inp_arr.length)
        {
            if(max<inp_arr[i])
                max = inp_arr[i];
            if(min>inp_arr[i])
                min = inp_arr[i];
            i++;
            RecursiveMinMax(inp_arr); //Doubt here!
        }
        retArray[0] = max;
        retArray[1] = min;
        return retArray;
    }
}
The return type is clearly an integer array (as specified in the method definition), but I have not assigned the returned data to any integer array, but the program still works.
Yes, because it's simply not an error to ignore the return value of a method. Not as far as the compiler is concerned. It may well represent a bug, but it's a perfectly valid use of the language.
For example:
Console.ReadLine(); // User input ignored!
"text".Substring(10); // Result ignored!
Sometimes I wish it could be flagged as a warning - and indeed ReSharper will give warnings when it can detect that "pure" methods (those without any side effects) are called without using the return value. In particular, calls which cause problems in real life:
Methods on string such as Replace and Substring, where users assume that calling the method alters the existing string
Stream.Read, where users assume that all the data they've requested has been read, when actually they should use the return value to see how many bytes have actually been read
There are times where it's entirely appropriate to ignore the return value for a method, even when it normally isn't for that method. For example:
TValue GetValueOrDefault<TKey, TValue>(Dictionary<TKey, TValue> dictionary, TKey key)
{
TValue value;
dictionary.TryGetValue(key, out value);
return value;
}
Normally when you call TryGetValue you want to know whether the key was found or not - but in this case value will be set to default(TValue) even if the key wasn't found, so we're going to return the same thing anyway.
In Java (as in C and C++) it is perfectly legal to discard the return value of a function. The compiler is not obliged to give any diagnostic.
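For a Java flavour of the same point, the snippet below compiles and runs without any complaint even though both return values are thrown away (a purely illustrative example):
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class IgnoredReturnValues {
    public static void main(String[] args) throws Exception {
        "text".substring(1); // result ignored; the original string is untouched
        new BufferedReader(new InputStreamReader(System.in)).readLine(); // user input ignored
    }
}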

How to convert a Session attribute to long in Java?

long orgId = (Long)request.getSession().getAttribute("orgId");
I am not able to convert the object that I am getting from request.getSession().getAttribute("orgId") to a long variable.
So I need to convert it to long.
Could anyone help?
This is not the best way to proceed; it's too error-prone (you are assuming the orgId value is present as a session attribute, and the unboxing, in case orgId is null/not present, will throw an exception).
final long orgId;
Object sessionValue = request.getSession().getAttribute("orgId");
if (sessionValue != null) {
    if (sessionValue instanceof Long) {
        orgId = ((Long) sessionValue).longValue();
    } else if (sessionValue instanceof String) {
        orgId = Long.parseLong((String) sessionValue);
    } else {
        // you can set orgId = 0, throw an exception, or do a custom conversion
        throw new IllegalStateException("orgId has an unexpected type: " + sessionValue.getClass());
    }
} else {
    // manage the missing value
    throw new IllegalStateException("orgId is not present in the session");
}
It depends upon the type of the "orgId" attribute. If it really is a Long, your code should work. If you've for instance added it as a String, you need to convert it to a long with Long.parseLong:
long orgId = Long.parseLong((String)request.getSession().getAttribute("orgId"));
This is the common way to do it:
String strOrgId = (String) request.getSession().getAttribute("orgId");
Then parse this value to Long
long orgId = Long.parseLong(strOrgId);
It depends on how your "orgId" is stored in the session attributes, as a String instance or as a Long instance.
The following code is a little bit redundant but will work for both cases:
Object attribute = request.getSession().getAttribute("orgId");
long orgId = Long.parseLong(String.valueOf(attribute));
I had a similar problem: I stored a long in the session, and when I wanted to get the attribute back it was automatically deserialized to an Integer OR a Long depending on its size. This was really annoying.
So in my case the solution was to convert it to a string and then parse it to a Long:
Object orgIdObject = session.getAttribute("orgId");
Long orgId;
// first, make a null check. you'll never know
if (orgIdObject == null) {
    // if value is null, set to -1 or throw an error..
    orgId = -1L;
} else {
    // convert to string, and then parse to long
    orgId = Long.valueOf(orgIdObject.toString());
}
In this way, it does not matter if the Object is a String, an Integer or a Long. It works with all of those types.
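A quick, purely illustrative sanity check of that claim:
public class ToStringParseDemo {
    public static void main(String[] args) {
        Object[] samples = { "42", Integer.valueOf(42), Long.valueOf(42L) };
        for (Object o : samples) {
            // Works for String, Integer and Long alike, as long as toString() yields plain digits.
            System.out.println(Long.valueOf(o.toString()));
        }
    }
}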
Happy Coding,
Kalasch
