How to delete a full column from a spreadsheet using Java?

I am building an application that uses a Google Spreadsheet to store data.
I am adding and updating columns dynamically, but I can't figure out how to delete a complete column at once.
Could anyone please help me delete a column from the spreadsheet? I can delete a cell, but I want to delete all of the cells under a particular column.
For example, delete column 'A' so that the neighbouring column 'B' takes column 'A''s place, like right-clicking a column and selecting 'Delete column' in the Drive spreadsheet UI.
Can anyone help me do this? Is there an API or a link?

You may want to check the rows and columns operations: the Sheets API allows you to insert, remove, and manipulate rows and columns in sheets.
For deleting rows and columns, you can try the sample spreadsheets.batchUpdate request below, in which the first request deletes rows 1:3 and the second request deletes columns B:D.
The request protocol is shown below:
POST https://sheets.googleapis.com/v4/spreadsheets/spreadsheetId:batchUpdate
{
  "requests": [
    {
      "deleteDimension": {
        "range": {
          "sheetId": sheetId,
          "dimension": "ROWS",
          "startIndex": 0,
          "endIndex": 3
        }
      }
    },
    {
      "deleteDimension": {
        "range": {
          "sheetId": sheetId,
          "dimension": "COLUMNS",
          "startIndex": 1,
          "endIndex": 4
        }
      }
    }
  ]
}
You may also check the Updating Spreadsheets guide for more information on how to implement a batch update in different languages using the Google API client libraries, including Java.
Hope that helps!
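For reference, here is a rough sketch of the same COLUMNS delete issued from Java through the Sheets v4 client library. It assumes you already have an authorized Sheets service; the sheetsService and spreadsheetId parameters and the sheet id 0 are placeholders:
import com.google.api.services.sheets.v4.Sheets;
import com.google.api.services.sheets.v4.model.BatchUpdateSpreadsheetRequest;
import com.google.api.services.sheets.v4.model.DeleteDimensionRequest;
import com.google.api.services.sheets.v4.model.DimensionRange;
import com.google.api.services.sheets.v4.model.Request;
import java.io.IOException;
import java.util.Collections;

// Delete column A; indices are zero-based and the end index is exclusive,
// so columns to the right shift left, like 'Delete column' in the UI.
public void deleteColumnA(Sheets sheetsService, String spreadsheetId) throws IOException {
    Request deleteColumn = new Request().setDeleteDimension(
        new DeleteDimensionRequest().setRange(
            new DimensionRange()
                .setSheetId(0)           // placeholder sheet id
                .setDimension("COLUMNS")
                .setStartIndex(0)        // column A
                .setEndIndex(1)));       // exclusive
    BatchUpdateSpreadsheetRequest body = new BatchUpdateSpreadsheetRequest()
        .setRequests(Collections.singletonList(deleteColumn));
    sheetsService.spreadsheets().batchUpdate(spreadsheetId, body).execute();
}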

Related

Unable to store complete JSON in MySQL table field

I am getting a large JSON payload in an API response. I am trying to store this JSON response in a local MySQL table, but I am unable to store the complete JSON response. Please find the JSON info below.
Sample API JSON response:
{
  "responseCode": 200,
  "date": "2020-06-03",
  "message": "Success",
  "couponDetails": {
    "total": 14949,
    "codes": "35033769,35441136,35803675,34407176,34717909,34950692,35059148,35452352,35688911,35904465,35904658,35904753,35904824,35904942,35905306,35905318,35905434,35905673,35906615,35907029,35907154,35907222,35907345,35907592,35907683,35907951,35908161,35908194,35908206,34664348,34664436,34665057,34665072,34665768,34665950,34666051,34666110,34666879,34667228,34668101,34670133,34670162,34670259,34670661,34670687,34670994,34671179,34671296,34672207,34672276,34672631,34672747,34673619,34673709,34675355,34676588,34677690,34678019,34679260,34679468,34680550,34680694,34680838,34683321,34684752,34684796,34685198,34685826,34686220,34686276,34351922,34352193,34352369,34352553,34353629,34353971,34355064,34355541,34355625,34356802,34357668,34357869,34357922,34360451,34360500,34360764,34361049,34361174,34361315,34362337,34362412,34363370,34364187,34365025,34365188,34365415,34365904,34366777,34366877,34367361,34368025,34368078,35542974,35543013,35543084,35268238,35268397,35268774,35269689,35269933,35270038,35250597,35063719,35064231,35064237,35270577,35270705,35270969,35064514,35064963,35065129,35251645,35251660,35251798,35253022,35253300,35272389,35272446,35272519,35272640,35272641,35273596,35273716,35423127,35423184,35423372,35424244,35425607,35485524,35486647,35486711,35486970,35487111,35470199,35470485,35488099,35488145,35488270,35490204,35534378,35535484,35535520,35535559,35535601,35535818,21979363,21508096,26237385,24734847,22263784,26889428,29292212,20415646,21836743,20300178,21831783,21198543,23739734,29773862,20715551,25488915,28894112,26536357,26695866,27133857,29133336,28763373,21850298,21990790,27757421,2421785723"
  }
}
In my local DB table I am only able to insert the information below, which is not complete:
{
  "responseCode": 200,
  "date": "2020-06-03",
  "message": "Success",
  "couponDetails": {
    "total": 14949,
    "codes": "35033769,35441136,35803675,34407176,34717909,34950692,35059148,35452352,35688911,35904465,35904658,35904753,35904824,35904942,35905306,
MySQL table structure:
CREATE TABLE `bookdata_codeinfo_history` (
  `generated_date` date DEFAULT NULL,
  `book_code` longtext,
  `service` varchar(45) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
I want to store the JSON in the 'book_code' field, but only part of the information is being stored. I am using MySQL 5.7.13.
Please tell me how to resolve this issue.
I think that rather than attempting to store the book codes as a single string, a better way to model the table would be to break the book_code string into individual rows.
This would help with searching the data and with future extensibility of the data model.
That's really strange: a LONGTEXT can hold up to 4,294,967,295 bytes (~4 GB), so kindly recheck your table structure.
Or put these JSONs in files and save the path in the database.
Or split the codes field into multiple rows, something like idrequest and code, where every code row contains one value, as sketched below.
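If you go the one-row-per-code route, a minimal JDBC sketch for splitting the codes string into rows might look like this (the book_code table with its id_request and code columns is hypothetical, as are the connection details):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CodeSplitter {
    public static void main(String[] args) throws Exception {
        // Hypothetical target table:
        // CREATE TABLE book_code (id_request INT, code VARCHAR(16));
        String codes = "35033769,35441136,35803675"; // the "codes" value from the API response
        int idRequest = 1;

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO book_code (id_request, code) VALUES (?, ?)")) {
            for (String code : codes.split(",")) {
                ps.setInt(1, idRequest);
                ps.setString(2, code);
                ps.addBatch();     // queue one row per code
            }
            ps.executeBatch();     // send all rows in one round trip
        }
    }
}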

DynamoDB, method to paginate 100,000 documents with offset/limit?

I have 100,000 documents stored in a table. Currently I'm just using a simple primary key (not a hash/sort combination). There's no good way to split these into useful partitions for reads, because customers will primarily pull the entire database initially and then pull only whatever items have been updated. Additionally, I would like to return results in a paginated fashion using an offset/limit method.
I'm wondering what the best way to do this is. An example item stored in the table looks like this (id is the primary key):
{
  "id": 11299,
  "name": "plugin1",
  "attributes": {
    "plugin_version": "1.30",
    "exploit_available": false,
    "in_the_news": false,
    "exploited_by_malware": false,
    "exploited_by_nessus": false,
    "risk_factor": "Medium",
    "plugin_type": "remote",
    "exploitability_ease": "No known exploits are available",
    "plugin_publication_date": "2003-03-01T00:00:00Z",
    "plugin_modification_date": "2018-07-16T00:00:00Z",
    "vuln_publication_date": "2003-01-23T00:00:00Z",
    "cvss_temporal_vector": {
      "raw": "E:U/RL:OF/RC:C",
      "ReportConfidence": "Confirmed",
      "Exploitability": "Unproven",
      "RemediationLevel": "Official-fix"
    }
  }
}
I also need to filter on plugin_modification_date, so I'm not sure whether it would be helpful to make that a sort key. What has been frustrating while investigating this so far is that everything seems to rely on the partition key somehow, which is basically useless when you have a solitary primary key that is unique for every item.
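In case it helps frame the options: DynamoDB has no native offset/limit; Scan and Query return pages through an opaque cursor called LastEvaluatedKey, which you feed back as ExclusiveStartKey on the next call. A minimal sketch with the AWS SDK for Java v1, assuming a hypothetical table named plugins and a page size of 100:
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import java.util.Map;

public class PageScan {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        Map<String, AttributeValue> startKey = null; // the "cursor" returned by the previous page
        do {
            ScanRequest request = new ScanRequest()
                .withTableName("plugins")          // hypothetical table name
                .withLimit(100)                    // page size
                .withExclusiveStartKey(startKey);  // null on the first page
            ScanResult page = client.scan(request);
            // Print the raw AttributeValue of each item's id
            page.getItems().forEach(item -> System.out.println(item.get("id")));
            startKey = page.getLastEvaluatedKey(); // null once the table is exhausted
        } while (startKey != null);
    }
}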

Load a big json file in Mysql or Oracle database

At work, we supply files to other services. Their sizes are between 5 MB and 500 MB.
We want to use JSON instead of XML, but I'm wondering how our customers could easily load those files into their database, Oracle or MySQL.
I mean, I can't find any API, function, or tool on the web, in MySQL or Oracle, to do that.
I know that it's easy to work item by item to load a small JSON file, decoding each object or array and putting it in the right place in the database.
But is there another way to do this, like SQL*Loader in Oracle?
And if so, aren't our files too big to be produced as JSON, in Java for example?
I guess it might be difficult to do this load job automatically, especially because of arrays like this:
{"employees":[
{"firstName":"John", "lastName":"Doe", "salaryHistory":[1000,2000,3000]},
{"firstName":"Anna", "lastName":"Smith", "salaryHistory":[500,800]},
{"firstName":"Peter", "lastName":"Jones", "salaryHistory":[400]}
]}
where salaryHistory may cause problems because the arrays have different sizes, and the data does not necessarily belong in the same table.
Any ideas or help would be welcome!
Edit
I'm looking for a solution that puts each piece of data into the right column of a table; I don't need to store a JSON structure in a single column of a simple table.
Like this:
table employees: columns are id, firstName, lastName
table salaryHistory: columns are id, order, salary
Each piece of data must go into the right column, like "John" in firstName, "Doe" in lastName, then "1000" in a new row of table salaryHistory, "2000" in another new row of salaryHistory, and so on.
Starting with MySQL 5.7 there is a new data type: JSON.
Take a look here for more details.
Example for Oracle 12c:
create table transactions (
  trans_id   number not null primary key,
  trans_date timestamp,
  trans_msg  clob,
  constraint check_json check (trans_msg is json)
);
regular insert:
insert into transactions
values
(
  1,
  systimestamp,
  '{
    "TransId" : 3,
    "TransDate" : "01-JAN-2015",
    "TransTime" : "10:05:00",
    "TransType" : "Deposit",
    "AccountNumber" : 125,
    "AccountName" : "Smith, Jane",
    "TransAmount" : 300.00,
    "Location" : "website",
    "CashierId" : null,
    "ATMDetails" : null,
    "WebDetails" : {
      "URL" : "www.proligence.com/acme/dep.htm"
    },
    "Source" : "Transfer",
    "TransferDetails" :
    {
      "FromBankRouting" : "012345678",
      "FromAccountNo" : "1234567890",
      "FromAccountType" : "Checking"
    }
  }'
)
/
SQL*Loader control file and data file:
load data into table transactions
fields terminated by ','
(
  trans_id  sequence(max,1),
  fname     filler char(80),
  trans_msg lobfile(fname) terminated by EOF
)
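On the Java side of the original question, one way to shred a large JSON file into normalized tables without holding the whole 500 MB document in memory is a streaming parser. A rough sketch using Jackson's JsonParser against the employees example above; the file name and the commented-out inserts are illustrative:
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.File;

public class EmployeeLoader {
    public static void main(String[] args) throws Exception {
        JsonParser parser = new JsonFactory().createParser(new File("employees.json"));
        // Walk the stream token by token instead of building a full DOM
        while (parser.nextToken() != null) {
            if (parser.getCurrentToken() == JsonToken.FIELD_NAME
                    && "employees".equals(parser.getCurrentName())) {
                parser.nextToken(); // consume START_ARRAY
                while (parser.nextToken() == JsonToken.START_OBJECT) {
                    String firstName = null, lastName = null;
                    while (parser.nextToken() != JsonToken.END_OBJECT) {
                        String field = parser.getCurrentName();
                        parser.nextToken(); // move to the field's value
                        if ("firstName".equals(field)) firstName = parser.getText();
                        else if ("lastName".equals(field)) lastName = parser.getText();
                        else if ("salaryHistory".equals(field)) {
                            while (parser.nextToken() != JsonToken.END_ARRAY) {
                                int salary = parser.getIntValue();
                                // INSERT INTO salaryHistory ... (e.g. via a batched PreparedStatement)
                            }
                        }
                    }
                    // INSERT INTO employees (firstName, lastName) ...
                }
            }
        }
        parser.close();
    }
}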

Get column name (metadata) in Talend

I'm trying to export data and metadata from a MySQL database to JSON.
My JSON output needs to have this structure:
{ "classifier":[
{
"name":"Frequency",
"value":"75 kHz"
},
{
"name":"depth",
"value":"100 m"
} ]}
Frequency here represents a column name, and 75 kHz is the value of that column for a specific row.
I'm using Talend Data Integration to do this, and I can get the data, but I can't figure out how to get the metadata. Do I have to enter it myself, or is there an easier way to do this?
You cannot export the metadata of a JSON file from MySQL, because MySQL provides structured data; you have to create your JSON structure independently, using an existing file or manually. The easiest way is to create a sample file like the one used in your question. See the Talend Help.
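If you do end up assembling the structure in code (for example in a Talend tJavaRow component or a small standalone job), the column names themselves are available from JDBC's ResultSetMetaData. A minimal sketch, assuming a hypothetical measurements table and placeholder connection details:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ColumnNames {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM measurements")) { // hypothetical table
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // Pair each column name with the current row's value,
                // mirroring the {"name": ..., "value": ...} target structure
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.printf("{\"name\":\"%s\",\"value\":\"%s\"}%n",
                            md.getColumnName(i), rs.getString(i));
                }
            }
        }
    }
}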

Delete all columns and their data except for one column using the Astyanax client?

I am working on a project in which I need to delete all of the columns and their data, except for one column and its data, in Cassandra using the Astyanax client.
I have a dynamic column family like the one below, and we already have a couple of million records in that column family.
create column family USER_TEST
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'UTF8Type'
and gc_grace = 86400
and column_metadata = [ {column_name : 'lmd', validation_class : DateType}];
I have user_id as the row key, and the other columns are something like this:
a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,lmd
Now I need to delete all of the columns and their data except for the a15 column. Meaning, I want to keep the a15 column and its data for every user_id (row key) and delete the rest of the columns and their data.
I already know how to delete data from Cassandra using the Astyanax client for a particular row key:
public void deleteRecord(final String rowKey) {
    try {
        MutationBatch m = AstyanaxConnection.getInstance().getKeyspace().prepareMutationBatch();
        m.withRow(AstyanaxConnection.getInstance().getEmp_cf(), rowKey).delete();
        m.execute();
    } catch (ConnectionException e) {
        // some code
    } catch (Exception e) {
        // some code
    }
}
Now how do I delete all of the columns and their data except for one column, for every user id (row key)?
Any thoughts on how this can be done efficiently using the Astyanax client?
It appears that Astyanax does not currently support the slice-delete functionality that is a fairly recent addition to both the storage engine and the Thrift API. If you look at the Thrift API reference (http://wiki.apache.org/cassandra/API10), you will see that the delete operation takes a SlicePredicate, which can hold either a list of columns or a SliceRange. A SliceRange can specify all columns greater than or less than the column you want to keep, so two slice-delete operations would let you delete all but one of the columns in a row.
Unfortunately, Astyanax can only delete an entire row or a defined list of columns; it doesn't wrap the full SlicePredicate functionality. So it looks like you have three options:
1) Send a raw Thrift slice delete, bypassing the Astyanax wrapper, or
2) Do a column read, followed by a row delete, followed by a column write. This is not ideally efficient, but if it isn't done too frequently it shouldn't be prohibitive, or
3) Read the entire row and explicitly delete all of the columns other than the one you want to preserve.
I should note that while the storage engine and the Thrift API both support slice deletes, this is not yet explicitly supported by CQL.
I filed this ticket to address that last limitation:
https://issues.apache.org/jira/browse/CASSANDRA-6292
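For completeness, here is a rough sketch of option 3 using Astyanax, assuming a ColumnFamily<String, String> and a Keyspace reference like the ones behind your deleteRecord method (all names are illustrative):
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.model.ColumnList;

public class KeepOneColumn {
    // Delete every column in the row except 'keep' (e.g. "a15")
    public static void deleteAllBut(Keyspace keyspace,
                                    ColumnFamily<String, String> cf,
                                    String rowKey,
                                    String keep) throws ConnectionException {
        // Read the whole row, then queue a delete for each unwanted column
        ColumnList<String> row = keyspace.prepareQuery(cf)
                .getKey(rowKey)
                .execute()
                .getResult();
        MutationBatch m = keyspace.prepareMutationBatch();
        for (Column<String> col : row) {
            if (!keep.equals(col.getName())) {
                m.withRow(cf, rowKey).deleteColumn(col.getName());
            }
        }
        m.execute();
    }
}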
