Android - String Shortening Approach - java

I'm currently developing a marketing app on Android, which has a feature to send a URL via SMS. Because I'm using SMS, I want to make the text as short as possible, so it won't be split into parts.
The URL is generated dynamically by the app. Each contact results in a different URL, because the app appends some "contact-related information" to it. It is this information that needs to be shortened, not the base URL.
I tried using Base64 to shorten the string, but it doesn't work.
Before
Text: Myself|1234567890
Length: 17
After
Text: TXlzZWxmfDEyMzQ1Njc4OTA=
Length: 25
Then I tried Deflater, and the result is better than Base64, but it still doesn't shorten the string.
Before
Text: Myself|1234567890
Length: 17
After
Text: x��,N�I�1426153��4����3��
Length: 24
I've also tried GZIP, and the result is much worse than the other methods.
Before
Text: Myself|1234567890
Length: 17
After
Text: ����������������,N�I�1426153��4�����w��������
Length: 36
After comparing the test results, I decided to use Base64 as it sometimes works, but I'm not at all satisfied. Can anyone give me a better approach?
EDIT:
I need this string shortening to be executed OFFLINE, without an internet connection. I'm terribly sorry for this sudden change, but our developer team decided so. Any ideas?

Base64 on its own won't work because it increases the length of the string: it emits four output characters for every three input bytes (roughly 33% growth, plus padding).
Deflater and GZIP both add headers (and checksums) whose overhead outweighs any savings on strings this short.
However, you can use Huffman coding or arithmetic coding to take advantage of the fact that some characters are much more common in URLs than others. Generate a frequency table for your strings by generating a thousand or so of them and summing the occurrences of each character, then build a Huffman coding table from those frequencies. You can then use this hard-coded table to encode and decode your strings: don't transmit the table along with the message.
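A minimal sketch of that idea in Java (class and method names are hypothetical; for illustration the frequency table is built from the sample string itself, whereas in practice you would hard-code a table derived from a large sample of real strings):
import java.util.*;

public class HuffmanSketch {
    // A node in the Huffman tree; leaves carry a character.
    static class Node implements Comparable<Node> {
        final int freq; final Character ch; final Node left, right;
        Node(int freq, Character ch, Node left, Node right) {
            this.freq = freq; this.ch = ch; this.left = left; this.right = right;
        }
        public int compareTo(Node o) { return Integer.compare(freq, o.freq); }
    }

    // Combine the two least-frequent nodes until one tree remains.
    static Node buildTree(Map<Character, Integer> freqs) {
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (Map.Entry<Character, Integer> e : freqs.entrySet())
            pq.add(new Node(e.getValue(), e.getKey(), null, null));
        while (pq.size() > 1) {
            Node a = pq.poll(), b = pq.poll();
            pq.add(new Node(a.freq + b.freq, null, a, b));
        }
        return pq.poll();
    }

    // Walk the tree to derive the bit pattern for each character.
    static void buildCodes(Node n, String prefix, Map<Character, String> codes) {
        if (n.ch != null) { codes.put(n.ch, prefix.isEmpty() ? "0" : prefix); return; }
        buildCodes(n.left, prefix + '0', codes);
        buildCodes(n.right, prefix + '1', codes);
    }

    public static void main(String[] args) {
        String sample = "Myself|1234567890";
        Map<Character, Integer> freqs = new HashMap<>();
        for (char c : sample.toCharArray()) freqs.merge(c, 1, Integer::sum);

        Map<Character, String> codes = new HashMap<>();
        buildCodes(buildTree(freqs), "", codes);

        StringBuilder bits = new StringBuilder();
        for (char c : sample.toCharArray()) bits.append(codes.get(c));
        System.out.println(sample.length() * 8 + " bits raw, " + bits.length() + " bits encoded");
    }
}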
There are interactive webpages that let you enter various strings and Huffman-encode them: you can try one with your URLs to get a general idea of what kind of compression rate you can expect, although in practice you will get a slightly lower rate if you use the same table for all your strings. For the sample text "Myself|1234567890", the Huffman-encoded string is 51% of the original size.
After you generate your Huffman-encoded string you might need to do another pass over it to escape any characters that can't be transmitted in an SMS (or just Base64-encode your Huffman-coded bytes), which might eat into the savings from the Huffman encoding somewhat, but hopefully you will still end up with a net saving.
If you get a compression rate of around 50% with Huffman coding and then Base64-encode the result (increasing the size again by about a third), you will still end up around 30% smaller than the original.
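To make the Huffman bit string SMS-safe you can pack the bits into bytes and Base64-encode them. A sketch, assuming java.util.Base64 is available (on Android it requires API level 26; on older versions android.util.Base64 is the usual substitute). Note the decoder also needs to know the original bit count, since the last byte is padded:
import java.util.Base64;

// Pack a string of '0'/'1' characters into bytes, then Base64-encode the bytes.
static String packAndEncode(String bits) {
    byte[] packed = new byte[(bits.length() + 7) / 8];
    for (int i = 0; i < bits.length(); i++)
        if (bits.charAt(i) == '1')
            packed[i / 8] |= (byte) (0x80 >>> (i % 8));
    return Base64.getEncoder().withoutPadding().encodeToString(packed);
}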

Related

Why do all my decoded Strings have '?' at the end? Java String decoding

I am retrieving tweets from Twitter using the Tweepy library (Python) and Kafka. The text is encoded in UTF-8 as this line shows:
self.producer.send('my-topic', data.encode('UTF-8'))
Where "data" is a String. Then, this data is stored into an Oracle NoSQL database in key-value format. For this reason, the tweet itself is encoded. I do this with Java:
Value myValue = Value.createValue(msg.value().getBytes("UTF-8"));
Finally, the tweets are retrieved by a Formatter developed in Java. In order to store them in a relational schema, I have to parse each tweet, so I retrieve it as a String.
String data = new String(value.toByteArray(), StandardCharsets.UTF_8);
As you can see, I maintain UTF-8 encoding at every step. However, when I look at the text of the tweet in my database, it's always cut. For example:
RT #briIIohead: the hardest pill i had to swallow this year was learning that no matter how good you could be to somebody, no matter how mu?
Notice how it ends with a '?' symbol and has clearly been cut. This happens with every long tweet: if the text is around 30 characters long it shows fine, but anything longer than 100 or so is cut.
At first I thought it could be my table definition, but the field "Text" is declared as VARCHAR2(400 CHAR), which covers the maximum number of characters a tweet can have on the social network.
Any ideas on how I can spot what's cutting the text and putting the '?' symbol at the end?
What "data" looks like:
{"created_at":"Tue May 28 09:23:36 +0000 2019","id":1133302792129351681,"id_str":"1133302792129351681","text":"RT #AppleEDU: Learn, create, and do more with iPad in your classroom. Get the free Everyone Can Create curriculum and bring projects to lif\u2026","source":"\u003ca href=\"http:\/\/twitter.com\/download\/iphone\" rel=\"nofollow\"\u003eTwitter for iPhone\u003c\/a\u003e","truncated":false,"in_reply_to_status_id":null,"in_reply_to_status_id_str":null,"in_reply_to_user_id":null,"in_reply_to_user_id_str":null,"in_reply_to_screen_name":null,"user":{"id":1060510851889750022,"id_str":"1060510851889750022","name":"Rem.0112","screen_name":"0112Rem","location":"Mawson Lakes, Adelaide","url":null,"description":null,"translator_type":"none","protected":false,"verified":false,"followers_count":739,"friends_count":1853,"listed_count":10,"favourites_count":33406,"statuses_count":36936,"created_at":"Thu Nov 08 12:34:25 +0000 2018","utc_offset":null,"time_zone":null,"geo_enabled":true,"lang":null,"contributors_enabled":false,"is_translator":false,"profile_background_color":"F5F8FA","profile_background_image_url":"","profile_background_image_url_https":"","profile_background_tile":false,"profile_link_color":"1DA1F2","profile_sidebar_border_color":"C0DEED","profile_sidebar_fill_color":"DDEEF6","profile_text_color":"333333","profile_use_background_image":true,"profile_image_url":"http:\/\/pbs.twimg.com\/profile_images\/1093157842163355649\/6oAdJTCs_normal.jpg","profile_image_url_https":"https:\/\/pbs.twimg.com\/profile_images\/1093157842163355649\/6oAdJTCs_normal.jpg","profile_banner_url":"https:\/\/pbs.twimg.com\/profile_banners\/1060510851889750022\/1546155144","default_profile":true,"default_profile_image":false,"following":null,"follow_request_sent":null,"notifications":null},"geo":null,"coordinates":null,"place":null,"contributors":null,"retweeted_status":{"created_at":"Thu May 23 15:15:16 +0000 2019","id":1131579354964725760,"id_str":"1131579354964725760","text":"Learn, create, and do more with iPad in your classroom. Get the free Everyone Can Create curriculum and bring proje\u2026 https:\/\/t.co\/aeeSPTXtFx","source":"\u003ca href=\"https:\/\/ads-api.twitter.com\" rel=\"nofollow\"\u003eTwitter Ads Composer\u003c\/a\u003e","truncated":true,"in_reply_to_status_id":null,"in_reply_to_status_id_str":null,"in_reply_to_user_id":null,"in_reply_to_user_id_str":null,"in_reply_to_screen_name":null,"user":{"id":468741166,"id_str":"468741166","name":"Apple Education","screen_name":"AppleEDU","location":"Cupertino, CA","url":null,"description":"Spark new ideas, create more aha moments, and teach in ways you\u2019ve always imagined. 
Follow #AppleEDU for tips, updates, and inspiration.","translator_type":"none","protected":false,"verified":true,"followers_count":728781,"friends_count":273,"listed_count":2594,"favourites_count":13189,"statuses_count":2766,"created_at":"Thu Jan 19 21:26:14 +0000 2012","utc_offset":null,"time_zone":null,"geo_enabled":false,"lang":null,"contributors_enabled":false,"is_translator":false,"profile_background_color":"F0F0F0","profile_background_image_url":"http:\/\/abs.twimg.com\/images\/themes\/theme1\/bg.png","profile_background_image_url_https":"https:\/\/abs.twimg.com\/images\/themes\/theme1\/bg.png","profile_background_tile":false,"profile_link_color":"0088CC","profile_sidebar_border_color":"C0DEED","profile_sidebar_fill_color":"DDEEF6","profile_text_color":"333333","profile_use_background_image":false,"profile_image_url":"http:\/\/pbs.twimg.com\/profile_images\/892429342046691328\/2SOlm_09_normal.jpg","profile_image_url_https":"https:\/\/pbs.twimg.com\/profile_images\/892429342046691328\/2SOlm_09_normal.jpg","profile_banner_url":"https:\/\/pbs.twimg.com\/profile_banners\/468741166\/1530123538","default_profile":false,"default_profile_image":false,"following":null,"follow_request_sent":null,"notifications":null},"geo":null,"coordinates":null,"place":null,"contributors":null,"is_quote_status":false,"extended_tweet":{"full_text":"Learn, create, and do more with iPad in your classroom. Get the free Everyone Can Create curriculum and bring projects to life through music, drawing, video and photography.","display_text_range":[0,173],"entities":{"hashtags":[],"urls":[],"user_mentions":[],"symbols":[]}},"quote_count":0,"reply_count":3,"retweet_count":3,"favorite_count":58,"entities":{"hashtags":[],"urls":[{"url":"https:\/\/t.co\/aeeSPTXtFx","expanded_url":"https:\/\/twitter.com\/i\/web\/status\/1131579354964725760","display_url":"twitter.com\/i\/web\/status\/1\u2026","indices":[117,140]}],"user_mentions":[],"symbols":[]},"favorited":false,"retweeted":false,"scopes":{"followers":false},"filter_level":"low","lang":"en"},"is_quote_status":false,"quote_count":0,"reply_count":0,"retweet_count":0,"favorite_count":0,"entities":{"hashtags":[],"urls":[],"user_mentions":[{"screen_name":"AppleEDU","name":"Apple Education","id":468741166,"id_str":"468741166","indices":[3,12]}],"symbols":[]},"favorited":false,"retweeted":false,"filter_level":"low","lang":"en","timestamp_ms":"1559035416048"}
I should also mention that this whole chunk is what gets encoded, then decoded, and finally parsed before being inserted into the database. All fields are correctly decoded and parsed except "text", which is cut.
According to the official documentation, a tweet has no more than 140 characters (that is a broad definition), but they have since raised it to 280.
That same document says:
Twitter counts the length of a Tweet using the Normalization Form C (NFC) version of the text.
So they first normalize the text (I'll let you figure out how this is done in Java). And later they say:
Twitter also counts the number of codepoints in the text rather than UTF-8 bytes.
Thus:
String test = "RT #briIIohead: the hardest pill i had to swallow this year was learning that no matter how good you could be to somebody, no matter how mu";
System.out.println(test.codePoints().count()); // 139
It seems that the original tweet was 280 "characters" and the library you use is not aware of that, so it only takes the first 140. Since it does some chunking, the chunking appears to be wrong too: it leaves some "partial" bytes of a multi-byte character at the end. When you try to print those, Java does not know what those trailing bytes mean (because of the wrong chunking) and simply prints '?', which is its default for input it does not understand.
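For completeness, the NFC normalization mentioned above is available in the JDK through java.text.Normalizer; a minimal sketch of the Twitter-style length computation, reusing the test string from above:
import java.text.Normalizer;

String nfc = Normalizer.normalize(test, Normalizer.Form.NFC);
System.out.println(nfc.codePoints().count()); // counts codepoints, not UTF-8 bytes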

How to inflate in Python some data that was deflated by PeopleSoft (Java)?

DISCLAIMER: PeopleSoft knowledge is not required in order to help me with this one!
How could I extract the data from this PeopleSoft table, specifically the PUBDATALONG column?
The description of the table is here:
http://www.go-faster.co.uk/peopletools/psiblogdata.htm
Currently I am using a program written in Java; below is a piece of the code:
Inflater inflater = new Inflater();
byte[] result = new byte[rs.getInt("UNCOMPDATALEN")];
inflater.setInput(rs.getBytes("PUBDATALONG"));
int length = inflater.inflate(result);
System.out.println(new String(result, 0, length, "UTF-8"));
System.out.println();
System.out.println("-----");
System.out.println();
How could I rewrite this in Python?
This question has appeared in other forms on Stack Overflow but had no real answer.
I have a basic understanding of what the Java code does, but I don't know of any Python library I could use to achieve the same thing.
Some recommended trying zlib, as it is compatible with the algorithm used by Java's Inflater class, but I did not succeed with that.
Considering the following facts from the PeopleSoft manual:
When the message is received by the PeopleSoft database, the XML data is converted to UTF-8 to prevent any UCS2 byte order issues. It is also compressed using the deflate algorithm prior to storage in the database.
I tried something like this:
import zlib
import base64
UNCOMPDATALEN = 362  # this value is taken from the DB and is the size of the data after decompression
PUBDATALONG = '789CB3B1AFC8CD51284B2D2ACECCCFB35532D43350B2B7E3E5B2F130F40C8977770D8977F4710D0A890F0E710C090D8EF70F0D09080DB183C8BAF938BAC707FBBBFB783ADA19DAE86388D904B90687FAC0F4DAD940CD70F67771B533B0D147E6DAE8A3A9D5C76B3F00E2F4355C=='
print zlib.decompress(base64.b64decode(PUBDATALONG), 0, 362)
and I get this:
zlib.error: Error -3 while decompressing data: incorrect header check
For sure I am doing something wrong, but I am not smart enough to figure it out by myself.
That string is not Base64-encoded; it is simply hexadecimal. (I have no idea why it ends in ==, which makes it look a little like a Base64 string.) You can see by inspection that there are no lower-case letters, nor upper-case letters after F, as there would be in a typical Base64 encoding of compressed, i.e. random-appearing, data.
Remove the equal signs at the end and use .decode("hex") in Python 2, or bytes.fromhex() in Python 3, before decompressing; in Python 3 that would be zlib.decompress(bytes.fromhex(PUBDATALONG.rstrip("="))). (The 789C at the start of the string is a standard zlib header, which confirms the data is zlib-deflated.)

Java NIO: Writing File Header - Using SeekableByteChannel

I am manually serializing data objects to a file using a ByteBuffer and its operations such as putInt(), putDouble(), etc.
One of the fields I'd like to write-out is a String. For the sake of example, let's say this contains a currency. Each currency has a three-letter ISO currency code, e.g. GBP for British Pounds Sterling.
Assuming each object I'm serializing just has a double and a currency; you could consider the serialized data to look something like:
100.00|GBP
200.00|USD
300.00|EUR
Obviously, in reality I'm not delimiting the data (no pipe between fields, no line feeds); it's stored in binary. The above is just an illustration.
Encoding the currency with each entry is a bit inefficient, as I keep storing the same three-characters. Instead, I'd like to have a header - which stores a mapping for currencies. The file would look something like:
100
GBP
USD
EUR
~~~
~~~
100.00|1
200.00|2
300.00|3
The first 2 bytes of the file are a short holding the decimal value 100. This tells me there are 100 slots for currencies in the file. Following this are 3-byte chunks, which are the currencies in order (ASCII-only characters).
When I read the file back in, all I have to do is build a 100-element array of the currency codes, and I can cheaply and efficiently look up the relevant currency for each entry.
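A minimal sketch of reading that header back, assuming the exact layout described above (readCurrencyHeader is a hypothetical helper):
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Read the header: a short slot count, then one 3-byte ASCII code per slot.
static String[] readCurrencyHeader(ByteBuffer buf) {
    short slots = buf.getShort();
    String[] currencies = new String[slots];
    byte[] code = new byte[3];
    for (int i = 0; i < slots; i++) {
        buf.get(code);
        currencies[i] = new String(code, StandardCharsets.US_ASCII);
    }
    return currencies;
}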
Reading the file back-in seems simple. But I'm interested to hear thoughts on writing-out the data.
I don't know all the currencies up front, and I'm actually supporting any three-character code, even if it's invalid. Thus I have to build up the table converting currencies to indexes on the fly.
I am intending on using a SeekableByteChannel to address my file, and seeking back to the header every time I find a new currency I've not indexed before.
This has the obvious I/O overhead of moving around the file. But I expect to see all the distinct currencies within the first few data objects written, so it will probably only seek during the first few seconds of execution and then not need another seek for hours.
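A sketch of that seek-back step, assuming the header layout from above (writeHeaderSlot is a hypothetical helper):
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.charset.StandardCharsets;

// Write a newly-seen code into header slot slotIndex, then restore the position.
static void writeHeaderSlot(SeekableByteChannel ch, int slotIndex, String code)
        throws IOException {
    long pos = ch.position();             // remember the current append position
    ch.position(2 + slotIndex * 3L);      // skip the 2-byte count, then 3-byte slots
    ch.write(ByteBuffer.wrap(code.getBytes(StandardCharsets.US_ASCII)));
    ch.position(pos);                     // resume appending data records
}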
The alternative is to wait for the stream of data to finish and then write the header once. However, if my application crashes before the header is written, the data in the file cannot be recovered.
Seeking seems like the right thing to do, but I've not attempted it before, and I was hoping to hear the horror stories up front rather than through trial and error on my end.
The problem with your approach is that you say you do not want to limit the number of currency codes, which implies you don't know how much room to reserve for the header. Seeking in a plain local file can be cheap if not performed too often, but shifting the entire file contents to make more room for the header is expensive.
The other question is how you define efficiency. If you don't limit the number of currency codes, you have to allow for the case where a single byte is not sufficient for your index, so you need either a dynamic, possibly multi-byte encoding, which is more complicated to parse, or a fixed multi-byte encoding, which ends up taking the same number of bytes as the currency code itself.
So unless space efficiency in the typical case matters more to you than decoding efficiency, you can use the fact that these codes are made up of ASCII characters only: encode each currency code in three bytes, and if you accept one padding byte you can use a single putInt/getInt for storing/retrieving a currency code without the need for any header lookup.
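A minimal sketch of that fixed four-byte encoding (packCode/unpackCode are hypothetical helpers):
import java.nio.charset.StandardCharsets;

// Pack a three-letter ASCII code into one int: three bytes plus one padding byte.
static int packCode(String code) {
    byte[] b = code.getBytes(StandardCharsets.US_ASCII); // e.g. "GBP"
    return (b[0] << 24) | (b[1] << 16) | (b[2] << 8);    // low byte is padding
}

static String unpackCode(int packed) {
    return new String(new byte[] {
        (byte) (packed >>> 24), (byte) (packed >>> 16), (byte) (packed >>> 8)
    }, StandardCharsets.US_ASCII);
}

// Usage with a ByteBuffer: buf.putDouble(100.00); buf.putInt(packCode("GBP"));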
I don't believe that optimizing these codes further would improve your storage significantly. The file does not consist of currency codes only; it's very likely the other data will take much more space.

Avoiding line breaks in encrypted and encoded URL string

I am trying to implement a simple string encoder to obfuscate some parts of a URL string (to prevent them from being mucked with by a user). I'm using code nearly identical to the sample in the JCA guide, except:
using DES (assuming it's a little faster than AES, and requires a smaller key) and
Base64 en/decoding the string to make sure it stays safe for a URL.
For reasons I can't understand, the output string ends up with line breaks, which I presume won't work in a URL, and I can't figure out what's causing this. Any suggestions for something similar that's easier, or pointers to other resources? I'm finding all the cryptography references a bit over my head (and overkill), but a simple ROT13 implementation won't work since I want to handle a larger character set (and don't want to waste time implementing something likely to have issues with obscure characters I didn't think of).
Sample input (no line break):
http://maps.google.com/maps?q=kansas&hl=en&sll=42.358431,-71.059773&sspn=0.415552,0.718918&hnear=Kansas&t=m&z=7
Sample Output (line breaks as shown below):
GstikIiULcJSGEU2NWNTpyucSWUFENptYk4m5lD8RJl8l1CuspiuXiE9a07fUEAGM/tC7h0Vzus+
jAH6cT4Wtz2RUlBdGf8WtQxVDKZVOzKwi84eQh2kZT9T3KomlnPOu2owJ/2RAEvG+QuGem5UGw==
my encode snippet:
final Key key = new SecretKeySpec(seed.getBytes(), "DES");
final Cipher c = Cipher.getInstance("DES");
c.init(Cipher.ENCRYPT_MODE, key);
final byte[] encVal = c.doFinal(s.getBytes());
return new BASE64Encoder().encode(encVal);
Simply perform base64Str = base64Str.replaceAll("(?:\\r\\n|\\n\\r|\\n|\\r)", "")
on the encoded string.
It works fine when you then try to decode it back to bytes. I tested it several times with randomly generated byte arrays; the decoding process simply ignores the newlines whether they are present or not.
I tested this "confirmed working" by using com.sun.org.apache.xml.internal.security.utils.Base64
Other encoders not tested.
Base64 encoders usually impose some maximum line (chunk) length and add newlines when necessary. You can normally configure that, but it depends on the particular encoder implementation.
For example, the Base64 class from Apache Commons Codec takes a lineLength parameter; setting it to zero (or a negative value) disables the line separation.
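A sketch with Apache Commons Codec, assuming commons-codec is on the classpath:
import org.apache.commons.codec.binary.Base64;

// A lineLength of 0 disables chunking, so no newlines are inserted.
Base64 codec = new Base64(0);
String encoded = codec.encodeToString(encVal);
// Commons Codec also offers an unchunked, URL-safe variant:
// String urlSafe = Base64.encodeBase64URLSafeString(encVal);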
BTW: I agree with the other answer that DES is hardly advisable today. Furthermore, are you just "obfuscating" or really encrypting? Who has the key? The whole thing does not smell right to me.
import android.util.Base64;
...
return Base64.encodeToString(encVal, Base64.NO_WRAP); // NO_WRAP omits all line terminators
Though it's unrelated to your actual question, DES is generally slower than AES (at least in software), so unless you really need to keep the key small, AES is almost certainly a better choice.
Second, it's perfectly normal for encryption (DES or AES) to produce bytes that correspond to new-line characters. Producing printable output without them is entirely up to the Base64 encoder, so that's where you need to look.
It's not particularly surprising to see a Base64 encoder insert new-line characters at regular intervals in its output, though. The most common use for Base64 encoding is putting raw data into something like the body of an email, where a really long line would cause a problem. To prevent that, the data is broken into pieces, typically no more than 80 columns (and usually a bit less). The new-lines should be ignored on decoding, however, so you should be able to just delete them, if memory serves.

Is it possible to search a file with compressed objects in Java?

I read the following bit from Oracle:
Can I execute methods on compressed versions of my objects, for example isempty(zip(serial(x)))?
This is not really viable for arbitrary objects because of the encoding of objects. For a particular object (such as String) you can compare the resulting bit streams. The encoding is stable, in that every time the same object is encoded it is encoded to the same set of bits.
So I got this idea: say I have a char array somewhere around 4M long. Is it possible for me to compress it to several hundred bytes using GZIPOutputStream, then map the whole file into memory and do a random search on it by comparing bits? Say I am looking for the char sequence "abcd": could I somehow get the bit sequence of the compressed version of "abcd" and just search the file for it? Thanks.
You cannot use GZIP or similar formats for this, because the encoding of each byte changes as the stream is processed; the only way to determine what a byte means is to read all the preceding bytes.
If you want to access the data randomly, you can break the String into smaller sections and compress each section independently. That way you only need to decompress a relatively short section of data.
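A minimal sketch of that sectioning idea, assuming a fixed block size chosen by the caller (compressInBlocks is a hypothetical helper):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPOutputStream;

// Compress a long string in fixed-size blocks so any block can be decompressed
// on its own; to search around character position p, inflate only blocks.get(p / blockChars).
static List<byte[]> compressInBlocks(String text, int blockChars) throws IOException {
    List<byte[]> blocks = new ArrayList<>();
    for (int i = 0; i < text.length(); i += blockChars) {
        String chunk = text.substring(i, Math.min(text.length(), i + blockChars));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(chunk.getBytes(StandardCharsets.UTF_8));
        }
        blocks.add(bos.toByteArray());
    }
    return blocks;
}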
