Understanding the output of PrintRelocations in HotSpot - java

I am trying to understand how a method is stored in a CodeBuffer (divided into three different sections) while it is being compiled in the HotSpot VM. I have printed the relocation information, but I don't fully understand it. This is my output:
3918 1 1 Demo::workload (2 bytes)
// Relocation information in 3 sections of CodeBuffer
CodeBuffer:
consts.code = 0x00007fffe11f27a0 : 0x00007fffe11f27a0 : 0x00007fffe11ff470 (0 of 52432)
consts.locs = 0x00007fffac008910 : 0x00007fffac008910 : 0x00007fffac008918 (0 of 4) point=0
#0x00007fffac008910:
insts.code = 0x00007fffe11727a0 : 0x00007fffe11727f2 : 0x00007fffe11f2320 (82 of 523136)
insts.locs = 0x00007fffac0087b0 : 0x00007fffac0087b4 : 0x00007fffac008904 (2 of 170) point=77
#0x00007fffac0087b0: b416
relocInfo#0x00007fffac0087b0 [type=11(poll_return) addr=0x00007fffe11727b6 offset=22 format=1]
#0x00007fffac0087b2: 6437
relocInfo#0x00007fffac0087b2 [type=6(runtime_call) addr=0x00007fffe11727ed offset=55 format=1] | [destination=0x00007fffe116f1e0]
#0x00007fffac0087b4:
stubs.code = 0x00007fffe11f2340 : 0x00007fffe11f23f5 : 0x00007fffe11f2780 (181 of 1088)
stubs.locs = 0x00007fffb4009f90 : 0x00007fffb4009faa : 0x00007fffb4009fcc (13 of 30) point=176
#0x00007fffb4009f90: 6428
relocInfo#0x00007fffb4009f90 [type=6(runtime_call) addr=0x00007fffe11f2368 offset=40 format=1] | [destination=0x00007fffe12037e0]
#0x00007fffb4009f92: f803f6ad80097fff705b
relocInfo#0x00007fffb4009f9a [type=7(external_word) addr=0x00007fffe11f23c3 offset=91 data={f6ad80097fff}] | [target=0x00007ffff6ad8009]
#0x00007fffb4009f9c: f060800a
relocInfo#0x00007fffb4009f9e [type=8(internal_word) addr=0x00007fffe11f23cd offset=10 data=96] | [target=0x00007fffe11f236d]
#0x00007fffb4009fa0: 6411
relocInfo#0x00007fffb4009fa0 [type=6(runtime_call) addr=0x00007fffe11f23de offset=17 format=1] | [destination=0x00007ffff676dc98]
#0x00007fffb4009fa2: f801fd729006
relocInfo#0x00007fffb4009fa6 [type=9(section_word) addr=0x00007fffe11f23e4 offset=6 data=-654] | [target=0x00007fffe11f23e4]
#0x00007fffb4009fa8: 640c
relocInfo#0x00007fffb4009fa8 [type=6(runtime_call) addr=0x00007fffe11f23f0 offset=12 format=1] | [destination=0x00007fffe110f2e0]
#0x00007fffb4009faa:
So the addresses after # are the addresses of each relocation entry. But what does the 4-digit hex value after the : specify? And what is all the information after it?

Related

What is the best way to group timestamps into intervals of given length in Java?

I have a table for metrics in an Oracle DB where one of the columns is the timestamp. I need to read the metrics from the DB and group them into given intervals of any length (2 months, 3 hours, 1 day, 2 years, etc.) between a starting timestamp and an ending timestamp. The timestamp will be of the format
2020-05-24T18:51:10.018-07:00
I know I can read all the entries from the table, sort them, and group them into intervals by converting them all into seconds, but is there a better way to do it?
You may use match_recognize for this.
with t(ts) as (
    select
        current_timestamp
        + interval '3' minute
        * dbms_random.value(0, level)
    from dual
    connect by level < 20
)
select *
from t
match_recognize(
    order by ts asc
    measures
        match_number() as grp
    all rows per match
    pattern(a w5min*)
    define
        /*Rows within 5 minutes after the first row in bucket*/
        w5min as ts - first(ts) < interval '5' minute
)
TS | GRP
:------------------ | --:
2021-11-03 06:20:40 | 1
2021-11-03 06:20:56 | 1
2021-11-03 06:23:27 | 1
2021-11-03 06:23:49 | 1
2021-11-03 06:25:23 | 1
2021-11-03 06:25:36 | 1
2021-11-03 06:32:14 | 2
2021-11-03 06:34:38 | 2
2021-11-03 06:36:29 | 2
2021-11-03 06:36:59 | 2
2021-11-03 06:39:29 | 3
2021-11-03 06:40:17 | 3
2021-11-03 06:41:07 | 3
2021-11-03 06:47:14 | 4
2021-11-03 06:48:31 | 4
2021-11-03 06:52:29 | 5
2021-11-03 06:59:22 | 6
2021-11-03 07:02:05 | 6
2021-11-03 07:04:54 | 7
db<>fiddle here
Whether it’s the best way, who can tell? If you want a Java solution, I suggest this one.
OffsetDateTime[] timestamps = {
    OffsetDateTime.parse("2020-05-24T18:51:10.018-07:00"),
    OffsetDateTime.parse("2020-03-07T23:45:02.399-08:00"),
    OffsetDateTime.parse("2020-05-24T20:01:11.442-07:00"),
    OffsetDateTime.parse("2020-03-08T01:03:05.079-08:00"),
    OffsetDateTime.parse("2020-05-24T19:32:34.461-07:00"),
};
TemporalAmount intervalLength = Duration.ofHours(2);

List<OffsetDateTime> list = new ArrayList<>(Arrays.asList(timestamps));
list.sort(Comparator.naturalOrder());
List<List<OffsetDateTime>> groups = new ArrayList<>();
if (! list.isEmpty()) {
    List<OffsetDateTime> currentGroup = new ArrayList<>();
    Iterator<OffsetDateTime> itr = list.iterator();
    OffsetDateTime first = itr.next();
    currentGroup.add(first);
    OffsetDateTime groupEnd = first.plus(intervalLength);
    while (itr.hasNext()) {
        OffsetDateTime current = itr.next();
        if (current.isBefore(groupEnd)) {
            currentGroup.add(current);
        } else { // new group
            groups.add(currentGroup);
            currentGroup = new ArrayList<>();
            groupEnd = current.plus(intervalLength);
            currentGroup.add(current);
        }
    }
    // remember to add last group
    groups.add(currentGroup);
}
groups.forEach(System.out::println);
Output:
[2020-03-07T23:45:02.399-08:00, 2020-03-08T01:03:05.079-08:00]
[2020-05-24T18:51:10.018-07:00, 2020-05-24T19:32:34.461-07:00, 2020-05-24T20:01:11.442-07:00]
The advantage of declaring intervalLength as a TemporalAmount is that you are free to assign either a time-based Duration to it (as above) or a date-based Period.
TemporalAmount intervalLength = Period.ofMonths(3);
In this case the result is just one group:
[2020-03-07T23:45:02.399-08:00, 2020-03-08T01:03:05.079-08:00,
2020-05-24T18:51:10.018-07:00, 2020-05-24T19:32:34.461-07:00,
2020-05-24T20:01:11.442-07:00]
Link
Oracle tutorial: Date Time explaining how to use java.time.

SearchRequest in RootDSE

I have the following function to query users from an AD server:
public List<LDAPUserDTO> getUsersWithPaging(String filter)
{
    List<LDAPUserDTO> userList = new ArrayList<>();
    try (LDAPConnection connection = new LDAPConnection(config.getHost(), config.getPort(), config.getUsername(), config.getPassword()))
    {
        SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
        ASN1OctetString resumeCookie = null;
        while (true)
        {
            searchRequest.setControls(
                    new SimplePagedResultsControl(100, resumeCookie));
            SearchResult searchResult = connection.search(searchRequest);
            for (SearchResultEntry e : searchResult.getSearchEntries())
            {
                LDAPUserDTO tmp = new LDAPUserDTO();
                tmp.distinguishedName = e.getAttributeValue("distinguishedName");
                tmp.name = e.getAttributeValue("name");
                userList.add(tmp);
            }
            LDAPTestUtils.assertHasControl(searchResult,
                    SimplePagedResultsControl.PAGED_RESULTS_OID);
            SimplePagedResultsControl responseControl =
                    SimplePagedResultsControl.get(searchResult);
            if (responseControl.moreResultsToReturn())
            {
                resumeCookie = responseControl.getCookie();
            }
            else
            {
                break;
            }
        }
        return userList;
    }
    catch (LDAPException e)
    {
        logger.error(e.getExceptionMessage());
        return null;
    }
}
However, this breaks when I try to search on the RootDSE.
What I've tried so far:
baseDN = null
baseDN = "";
baseDN = RootDSE.getRootDSE(connection).getDN()
baseDN = "RootDSE"
All resulting in various exceptions or empty results:
Caused by: LDAPSDKUsageException(message='A null object was provided where a non-null object is required (non-null index 0).
2020-04-01 10:42:22,902 ERROR [de.dbz.service.LDAPService] (default task-1272) LDAPException(resultCode=32 (no such object), numEntries=0, numReferences=0, diagnosticMessage='0000208D: NameErr: DSID-03100213, problem 2001 (NO_OBJECT), data 0, best match of:
''
', ldapSDKVersion=4.0.12, revision=aaefc59e0e6d110bf3a8e8a029adb776f6d2ce28')
So, I really spent a lot of time on this. It is possible to query the RootDSE, but it's not as straightforward as one might think.
I mainly used Wireshark to see what the guys at Softerra are doing with their LDAP Browser.
Turns out I wasn't that far away:
As you can see, the baseObject is empty here.
Also, there is one additional Control with the OID LDAP_SERVER_SEARCH_OPTIONS_OID and the ASN.1 String 308400000003020102.
So what does this 308400000003020102 (more readably: 30 84 00 00 00 03 02 01 02) actually do?
First of all, we decode this into something we can read - in this case, this is the int 2.
In binary, this gives us: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
As we know from the documentation, we have the following notation:
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 |
|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|-------|-------|
| x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | SSFPR | SSFDS |
or we just take the int values from the documentation:
1 = SSFDS -> SERVER_SEARCH_FLAG_DOMAIN_SCOPE
2 = SSFPR -> SERVER_SEARCH_FLAG_PHANTOM_ROOT
So, in my example, we have SSFPR which is defined as follows:
For AD DS, instructs the server to search all NC replicas except
application NC replicas that are subordinate to the search base, even
if the search base is not instantiated on the server. For AD LDS, the
behavior is the same except that it also includes application NC
replicas in the search. For AD DS and AD LDS, this will cause the
search to be executed over all NC replicas (except for application NCs
on AD DS DCs) held on the DC that are subordinate to the search base.
This enables search bases such as the empty string, which would cause
the server to search all of the NC replicas (except for application
NCs on AD DS DCs) that it holds.
NC stands for Naming Context and those are stored as Operational Attribute in the RootDSE with the name namingContexts.
The other value, SSFDS does the following:
Prevents continuation references from being generated when the search
results are returned. This performs the same function as the
LDAP_SERVER_DOMAIN_SCOPE_OID control.
So, someone might ask why I even do this. As it turns out, I have a customer with several sub-DCs under one DC. If I tell the search to follow referrals, the execution time is very high and simply too long - therefore this wasn't really an option for me. But when I turned referrals off, I wasn't getting all the results when I defined the BaseDN to be the group whose members I wanted to retrieve.
Searching via the RootDSE option in Softerra's LDAP Browser was way faster and returned the results in less than one second.
I personally don't have any clue why this is so much faster - but Active Directory without any interface or tool from Microsoft is kind of black magic for me anyway. To be frank, that's not really my area of expertise.
In the end, I ended up with the following Java code:
SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
[...]
Control globalSearch = new Control("1.2.840.113556.1.4.1340", true, new ASN1OctetString(Hex.decode("308400000003020102")));
searchRequest.setControls(new SimplePagedResultsControl(100, resumeCookie, true),globalSearch);
[...]
The used Hex.decode() is the following: org.bouncycastle.util.encoders.Hex.
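If you prefer not to hard-code the hex string, a possible alternative (just a sketch, using the UnboundID SDK's own ASN.1 classes) is to build the control value programmatically; it yields the equivalent short definite-length encoding 30 03 02 01 02 for SEQUENCE { INTEGER 2 }:
import com.unboundid.asn1.ASN1Integer;
import com.unboundid.asn1.ASN1OctetString;
import com.unboundid.asn1.ASN1Sequence;
import com.unboundid.ldap.sdk.Control;

// SEQUENCE { INTEGER 2 } -> flag value 2 = SSFPR (SERVER_SEARCH_FLAG_PHANTOM_ROOT)
ASN1OctetString controlValue =
        new ASN1OctetString(new ASN1Sequence(new ASN1Integer(2)).encode());
Control globalSearch = new Control("1.2.840.113556.1.4.1340", true, controlValue);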
A huge thanks to the guys at Softerra, who more or less put my journey into the abyss of the AD to an end.
You can't query users from the RootDSE.
Use either a domain, or, if you need to query users across domains in a forest, use the global catalog (which runs on different ports, not the default 389/636 used for LDAP(S)).
The RootDSE only contains metadata. This question should probably be asked elsewhere for more information, but first read up on the documentation from Microsoft, e.g.:
https://learn.microsoft.com/en-us/windows/win32/ad/where-to-search
https://learn.microsoft.com/en-us/windows/win32/adschema/rootdse
E.g. the namingContexts attribute can be read to find which other contexts you may want to query for actual users.
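For illustration only, a minimal sketch with the UnboundID SDK already used in the question (assuming an open LDAPConnection named connection), reading those naming contexts from the RootDSE:
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.RootDSE;

// Read the RootDSE metadata and list the naming contexts that can then be
// used as search bases for actual user queries.
RootDSE rootDSE = connection.getRootDSE();
for (String namingContext : rootDSE.getNamingContextDNs())
{
    System.out.println("Naming context: " + namingContext);
}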
Maybe start with this nice article as introduction:
http://cbtgeeks.com/2016/06/02/what-is-rootdse/

Spark and non-denormalized tables

I know Spark works much better with denormalized tables, where all the needed data is in one line. I am wondering, if that is not the case, whether there is a way to retrieve data from previous or next rows.
Example:
Formula:
value = (value from 2 years ago) + (current year value) / (value from 2 years ahead)
Table
+-------+-----+
| YEAR|VALUE|
+-------+-----+
| 2015| 100 |
| 2016| 34 |
| 2017| 32 |
| 2018| 22 |
| 2019| 14 |
| 2020| 42 |
| 2021| 88 |
+-------+-----+
Dataset<Row> dataset = ...
Dataset<Result> results = dataset.map(row -> {
    int currentValue = Integer.valueOf(row.getAs("VALUE")); // 2019
    // nonsense code, just to exemplify
    int twoYearsBackValue = Integer.valueOf(row[???].getAs("VALUE")); // 2016
    int twoYearsAheadValue = Integer.valueOf(row[???].getAs("VALUE")); // 2021
    double resultValue = twoYearsBackValue + currentValue / twoYearsAheadValue;
    return new Result(2019, resultValue);
});
Result[] resultsArray = results.collect();
Is it possible to grab these values (that belongs to other rows) without changing the table format (no denormalization, no pivots ...) and also without collecting the data, or does it go totally against Spark/BigData principles?
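For what it's worth, a minimal sketch of one common way to reach neighbouring rows without collecting or reshaping the table is Spark's window functions lag/lead, assuming a Dataset<Row> named dataset with YEAR and VALUE columns (note that an ordered window without partitioning pulls all rows into a single partition, which may matter for large data):
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.lag;
import static org.apache.spark.sql.functions.lead;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;

// Order the whole table by YEAR; lag/lead then reach 2 rows back/ahead.
WindowSpec byYear = Window.orderBy("YEAR");

Dataset<Row> withNeighbours = dataset
        .withColumn("VALUE_2_BACK", lag(col("VALUE"), 2).over(byYear))
        .withColumn("VALUE_2_AHEAD", lead(col("VALUE"), 2).over(byYear))
        .withColumn("RESULT",
                col("VALUE_2_BACK").plus(col("VALUE").divide(col("VALUE_2_AHEAD"))));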

mySQL - Get last added ID [duplicate]

This question already has answers here:
How to get the insert ID in JDBC?
(14 answers)
Closed 4 years ago.
I'm a new programmer (I just started my programming education).
I'm trying to set my "tempVenueId" attribute to the last ID in my SQL DB (in this case idvenue = 12):
Table name: venue
idvenue | name | address
-------------------------------
1 | Khar | 5
2 | SantaCruz | 3
3 | Sion | 2
4 | VT | 1
5 | newFort | 3
6 | Bandra | 2
7 | Worli | 1
8 | Sanpada | 3
9 | Joe | 2
10 | Sally | 1
11 | Elphiston | 2
12 | Currey Road | 1
My code:
private int tempVenueId;
SqlRowSet rs1 = jdbc.queryForRowSet("SELECT MAX(idvenue) FROM venue;");
while(rs1.next())tempVenueId = rs1.getInt(0);
/*I have also tried with "while(rs1.next())tempVenueId = rs1.getInt(1);"
*/
Still it does not work; I get this when I run debug mode in IntelliJ:
tempVenueId = 0
The index starts from 1 rather than 0, so there are two ways to solve it:
while (rs1.next()) {
    tempVenueId = rs1.getInt(1);
    break;
}
or
SqlRowSet rs1 = Jdbc.queryForRowSet("SELECT MAX(idvenue) as value FROM venue;");
while (rs1.next()) {
    tempVenueId = rs1.getInt("value");
    break;
}
Solved - this solution was a lot easier... (I think).
String sql = "SELECT LAST_INSERT_ID();";
SqlRowSet rs = Jdbc.queryForRowSet(sql);
rs.next();
Venue VN = new Venue(rs.getInt(1));
tempVenueId = VN;
Now I get the last auto-incremented ID from the DB.
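Since the question was closed as a duplicate of "How to get the insert ID in JDBC?", here is a minimal sketch of that approach with plain JDBC (assuming a java.sql.Connection named conn rather than the Spring helper used above, and string-typed name/address columns):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

String sql = "INSERT INTO venue (name, address) VALUES (?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, "Currey Road");
    ps.setString(2, "1");
    ps.executeUpdate();
    // Read the auto-incremented key generated by this INSERT.
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            tempVenueId = keys.getInt(1);
        }
    }
}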

MySQL query to fetch list of data using logical operations

The following is a list of the different kinds of books that customers read in a library. The values are stored as powers of 2 in a column called bookType.
I need to fetch the list of books for the combinations of persons who read
only Novel, or only Fairytale, or only BedTime, or both Novel + Fairytale
from the database with a logical-operation query.
Fetch list for the following combinations :
person who reads only novel(Stored in DB as 1)
person who reads both novel and fairy tale(Stored in DB as 1+2 = 3)
person who reads all the three i.e {novel + fairy tale + bed time} (stored in DB as 1+2+4 = 7)
The count of these is stored in the database in a column called BookType (marked in red in the figure).
How can I fetch the above list using a MySQL query?
From the example, I need to fetch users like novel readers (1,3,5,7).
The heart of this question is the conversion of decimal to binary, and MySQL has a function to do just that: CONV(num, from_base, to_base).
In this case from_base would be 10 and to_base would be 2.
I would wrap this in a UDF
So given
MariaDB [sandbox]> select id,username
-> from users
-> where id < 8;
+----+----------+
| id | username |
+----+----------+
| 1 | John |
| 2 | Jane |
| 3 | Ali |
| 6 | Bruce |
| 7 | Martha |
+----+----------+
5 rows in set (0.00 sec)
MariaDB [sandbox]> select * from t;
+------+------------+
| id | type |
+------+------------+
| 1 | novel |
| 2 | fairy Tale |
| 3 | bedtime |
+------+------------+
3 rows in set (0.00 sec)
This UDF
drop function if exists book_type;
delimiter //
CREATE DEFINER=`root`@`localhost` FUNCTION `book_type`(
    `indec` int
)
RETURNS varchar(255) CHARSET latin1
LANGUAGE SQL
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT ''
begin
    declare tempstring varchar(100);
    declare outstring varchar(100);
    declare book_types varchar(100);
    declare bin_position int;
    declare str_length int;
    declare checkit int;

    set tempstring = reverse(lpad(conv(indec,10,2),4,0));
    set str_length = length(tempstring);
    set checkit = 0;
    set bin_position = 0;
    set book_types = '';

    looper: while bin_position < str_length do
        set bin_position = bin_position + 1;
        set outstring = substr(tempstring,bin_position,1);
        if outstring = 1 then
            set book_types = concat(book_types,(select trim(type) from t where id = bin_position),',');
        end if;
    end while;

    set outstring = book_types;
    return outstring;
end //
delimiter ;
Results in
+----+----------+---------------------------+
| id | username | book_type(id) |
+----+----------+---------------------------+
| 1 | John | novel, |
| 2 | Jane | fairy Tale, |
| 3 | Ali | novel,fairy Tale, |
| 6 | Bruce | fairy Tale,bedtime, |
| 7 | Martha | novel,fairy Tale,bedtime, |
+----+----------+---------------------------+
5 rows in set (0.00 sec)
Note the loop in the UDF that walks through the binary string, and that the positions of the 1's relate to the ids in the lookup table.
I leave it to you to code for errors and tidy up.
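Purely as an illustration of the same bit-position idea (a hypothetical helper, not part of the answer above), the walk through the bitmask can also be written in plain Java:
import java.util.ArrayList;
import java.util.List;

// Decode a power-of-two bookType value (e.g. 7 = 1 + 2 + 4) into type names.
// The array index corresponds to the bit position / lookup-table id.
static List<String> decodeBookType(int bookType) {
    String[] types = {"novel", "fairy Tale", "bedtime"};
    List<String> result = new ArrayList<>();
    for (int bit = 0; bit < types.length; bit++) {
        if ((bookType & (1 << bit)) != 0) {
            result.add(types[bit]);
        }
    }
    return result;
}

// decodeBookType(7) -> [novel, fairy Tale, bedtime]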
