Breaking Vigenere only knowing key length - java

Problem
I want to decode a message encrypted with classic Vigenere. I know that the key has a length of exactly 6 characters.
The message is:
BYOIZRLAUMYXXPFLPWBZLMLQPBJMSCQOWVOIJPYPALXCWZLKXYVMKXEHLIILLYJMUGBVXBOIRUAVAEZAKBHXBDZQJLELZIKMKOWZPXBKOQALQOWKYIBKGNTCPAAKPWJHKIAPBHKBVTBULWJSOYWKAMLUOPLRQOWZLWRSLEHWABWBVXOLSKOIOFSZLQLYKMZXOBUSPRQVZQTXELOWYHPVXQGDEBWBARBCWZXYFAWAAMISWLPREPKULQLYQKHQBISKRXLOAUOIEHVIZOBKAHGMCZZMSSSLVPPQXUVAOIEHVZLTIPWLPRQOWIMJFYEIAMSLVQKWELDWCIEPEUVVBAZIUXBZKLPHKVKPLLXKJMWPFLVBLWPDGCSHIHQLVAKOWZSMCLXWYLFTSVKWELZMYWBSXKVYIKVWUSJVJMOIQOGCNLQVXBLWPHKAOIEHVIWTBHJMKSKAZMKEVVXBOITLVLPRDOGEOIOLQMZLXKDQUKBYWLBTLUZQTLLDKPLLXKZCUKRWGVOMPDGZKWXZANALBFOMYIXNGLZEKKVCYMKNLPLXBYJQIPBLNMUMKNGDLVQOWPLEOAZEOIKOWZZMJWDMZSRSMVJSSLJMKMQZWTMXLOAAOSTWABPJRSZMYJXJWPHHIVGSLHYFLPLVXFKWMXELXQYIFUZMYMKHTQSMQFLWYIXSAHLXEHLPPWIVNMHRAWJWAIZAAWUGLBDLWSPZAJSCYLOQALAYSEUXEBKNYSJIWQUKELJKYMQPUPLKOLOBVFBOWZHHSVUIAIZFFQJEIAZQUKPOWPHHRALMYIAAGPPQPLDNHFLBLPLVYBLVVQXUUIUFBHDEHCPHUGUM
Question
I tried a brute-force approach, but unfortunately this yields an extreme number of combinations, too many to compute.
Do you have any idea how to go from here or how to approach this problem in general?
Attempt
Here is what I have so far:
public class Main {
// instance variables - replace the example below with your own
private String message;
private String answer;
private String first;
/**
* Constructor for objects of class Main
*/
public Main()
{
// initialise instance variables
message ="BYOIZRLAUMYXXPFLPWBZLMLQPBJMSCQOWVOIJPYPALXCWZLKXYVMKXEHLIILLYJMUGBVXBOIRUAVAEZAKBHXBDZQJLELZIKMKOWZPXBKOQALQOWKYIBKGNTCPAAKPWJHKIAPBHKBVTBULWJSOYWKAMLUOPLRQOWZLWRSLEHWABWBVXOLSKOIOFSZLQLYKMZXOBUSPRQVZQTXELOWYHPVXQGDEBWBARBCWZXYFAWAAMISWLPREPKULQLYQKHQBISKRXLOAUOIEHVIZOBKAHGMCZZMSSSLVPPQXUVAOIEHVZLTIPWLPRQOWIMJFYEIAMSLVQKWELDWCIEPEUVVBAZIUXBZKLPHKVKPLLXKJMWPFLVBLWPDGCSHIHQLVAKOWZSMCLXWYLFTSVKWELZMYWBSXKVYIKVWUSJVJMOIQOGCNLQVXBLWPHKAOIEHVIWTBHJMKSKAZMKEVVXBOITLVLPRDOGEOIOLQMZLXKDQUKBYWLBTLUZQTLLDKPLLXKZCUKRWGVOMPDGZKWXZANALBFOMYIXNGLZEKKVCYMKNLPLXBYJQIPBLNMUMKNGDLVQOWPLEOAZEOIKOWZZMJWDMZSRSMVJSSLJMKMQZWTMXLOAAOSTWABPJRSZMYJXJWPHHIVGSLHYFLPLVXFKWMXELXQYIFUZMYMKHTQSMQFLWYIXSAHLXEHLPPWIVNMHRAWJWAIZAAWUGLBDLWSPZAJSCYLOQALAYSEUXEBKNYSJIWQUKELJKYMQPUPLKOLOBVFBOWZHHSVUIAIZFFQJEIAZQUKPOWPHHRALMYIAAGPPQPLDNHFLBLPLVYBLVVQXUUIUFBHDEHCPHUGUM";
first = "";
for (int x = 0; x < message.length() / 6; x++) {
int index = x * 6;
first = first + message.charAt(index);
}
System.out.println(first);
}
}

Non-text message
In case the raw message is not actual text (like English text that makes sense), or you have no information about its content, you will be out of luck.
That is especially true if the text is hashed or double-encrypted, i.e. effectively random data.
Breaking an encryption scheme requires knowledge about the algorithm and the messages. In your situation in particular, you will need to know the general structure of your messages in order to break the scheme.
Prerequisites
For the rest of this answer, let me assume your message is actually plain English text. Note that you can easily adapt my answer to other languages, or even adapt the techniques to other message formats.
Let me also assume that you are talking about classic Vigenere (see Wikipedia) and not about one of its many variants. That means that your input consists only of the letters A to Z: no case, no punctuation, no spaces. Example:
MYNAMEISJOHN // Instead of: My name is John.
The same applies to your key: it contains only A to Z.
Classic Vigenere then shifts each letter by the key letter's offset in the alphabet, modulo the alphabet size (which is 26).
Example:
(G + L) % 26 = R
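To make the letter arithmetic concrete, here is a tiny runnable sketch (my own illustration, counting A as 0, B as 1, and so on):
public class VigenereShiftDemo {
    public static void main(String[] args) {
        // Encrypting: G shifted by key letter L gives R, since (6 + 11) % 26 = 17 = R.
        char plain = 'G', keyChar = 'L';
        char encrypted = (char) ('A' + (plain - 'A' + keyChar - 'A') % 26);
        // Decrypting reverses the shift; add 26 before the modulo to avoid negative values.
        char decrypted = (char) ('A' + (encrypted - 'A' - (keyChar - 'A') + 26) % 26);
        System.out.println(encrypted + " " + decrypted); // prints: R G
    }
}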
Dictionary
Before we talk about attacks, we need a way to decide, given a candidate key, whether it is actually correct or not.
Since we know that the message consists of English text, we can just take a dictionary (a huge list of all valid English words) and compare our decrypted message against it. If the key was wrong, the resulting message will not contain valid words (or only a few).
This can be a bit tricky since we lack punctuation (in particular, no spaces).
N-grams
Luckily, there is actually a very accurate way of measuring how valid a text is, which also solves the issue of the missing punctuation.
The technique is called N-grams (see Wikipedia). You choose a value for N, for example 3 (then called tri-grams), and split your text into overlapping groups of 3 characters. Example:
MYNAMEISJOHN // results in the trigrams:
$$M, $MY, MYN, YNA, NAM, AME, MEI, EIS, ISJ, SJO, JOH, OHN, HN$, N$$
What you need now is a frequency analysis of the most common tri-grams in English text. There are various sources online (or you can compute it yourself on a big text corpus).
Then you simply compare your tri-gram frequencies to the frequencies for real text. From that, you compute a score of how well your frequencies match the real ones. If your message contains a lot of very uncommon tri-grams, it is highly likely to be garbage data and not real text.
A small note: mono-grams (1-grams) give you plain single-character frequencies (see Wikipedia#Letter frequency). Bi-grams (2-grams) are commonly used for cracking Vigenere and yield good results.
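As a rough illustration of the scoring idea (the map contents and the penalty value are placeholders you would fill from an n-gram statistics file, not something given in the question):
import java.util.HashMap;
import java.util.Map;
public class TrigramScorer {
    // Hypothetical table: trigram -> log10 of its relative frequency in English text.
    // In practice you would load this from an n-gram statistics file.
    private final Map<String, Double> logFrequencies = new HashMap<>();
    private final double unseenPenalty = -7.0; // score for trigrams not in the table
    public double score(String candidatePlaintext) {
        double score = 0;
        for (int i = 0; i + 3 <= candidatePlaintext.length(); i++) {
            String trigram = candidatePlaintext.substring(i, i + 3);
            score += logFrequencies.getOrDefault(trigram, unseenPenalty);
        }
        return score; // higher (less negative) means "looks more like English"
    }
}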
Attacks
Brute-Force
The first and most straightforward attack is always brute force. And, as long as the key and the alphabet are not that big, the number of combinations is manageable.
Your key has length 6 and the alphabet has size 26, so the number of different keys is 26^6, which is
308_915_776
So about 3 * 10^8. Enumerating those keys is cheap; the expensive part is that each candidate key requires decrypting the whole message and scoring the result (with dictionaries or N-grams, as mentioned before). With a message of roughly a thousand characters, that is on the order of 10^11 to 10^12 elementary operations, which a single modern core works through in minutes to a few hours, and which parallelizes trivially across cores.
So a full brute force is slow, but not hopeless. Still, there are simple tricks to cut the work down drastically. You do not have to test the full key at once. Limit yourself to the first 3 characters of the key instead of all 6. You can then only decrypt the positions of the text that those 3 key characters cover, but that is enough to judge whether the outcome looks like valid text (using dictionaries and N-grams, as mentioned before).
This small change reduces the search to 26^3 = 17,576 combinations. You can shrink it further: some characters are extremely rare in English text, for example Z, so you can skip key letters that would decrypt the most common ciphertext letters to such rare plaintext letters. Dropping the 6 least plausible choices per position leaves only 20^3 = 8,000 combinations, which your average laptop handles in a blink.
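A rough sketch of that reduced brute force might look like this (my own illustration for a key length of 6, assuming some scoring function as described above; it is not a finished cracker):
static void tryKeyPrefixes(String message, int keyLength) {
    // Enumerate all 3-letter key prefixes, decrypt only the positions they cover,
    // and hand the partial plaintext to some scoring function (not shown here).
    for (char k0 = 'A'; k0 <= 'Z'; k0++)
        for (char k1 = 'A'; k1 <= 'Z'; k1++)
            for (char k2 = 'A'; k2 <= 'Z'; k2++) {
                char[] prefix = {k0, k1, k2};
                StringBuilder partial = new StringBuilder();
                for (int i = 0; i < message.length(); i++) {
                    int pos = i % keyLength;
                    if (pos < prefix.length) { // only positions covered by the known prefix
                        int shift = prefix[pos] - 'A';
                        partial.append((char) ('A' + (message.charAt(i) - 'A' - shift + 26) % 26));
                    }
                }
                // score partial.toString() with dictionaries or n-grams and keep the best candidates
            }
}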
Frequency Attack
Enough brute force, let us do something clever. A letter-frequency attack is a very common attack against such encryption schemes. It is simple, extremely fast and very successful. In fact, it is so simple that there are quite a few online tools that offer it for free, for example guballa.de/vigenere-solver (it is able to crack your specific example, I just tried it out).
While Vigenere turns the message into unreadable garbage, it does not change the distribution of letters, at least not per digit (position) of the key. If you look at, say, the second digit of your key: starting with the second letter of the message, every sixth letter (the length of the key) is shifted by the exact same offset.
Let us take a look at a simple example. The key is BAC and the message is
CCC CCC CCC CCC CCC // plaintext
DCE DCE DCE DCE DCE // encrypted
As you notice, the letters repeat. Looking at the third letter, it is always E. That means the third, sixth and ninth letters of the ciphertext, which are all E, must come from the exact same original letter, since they were all shifted by the C from the key.
That is a very important observation. It means that letter frequency is preserved among all positions belonging to the same key digit, i.e. positions i, i + key_length, i + 2 * key_length, and so on.
Let us now take a look at the letter distribution in English text (from Wikipedia); E is by far the most common letter, followed by T, A, O, I and N.
All you have to do now is to split your message into its blocks (modulo key-length) and do a frequency analysis per digit of the blocks.
So for your specific input, this yields the blocks
BYOIZR
LAUMYX
XPFLPW
BZLMLQ
PBJMSC
...
Now you analyze the frequency for digit 1 of each block, then digit 2, and so on, up to digit 6. For the first digit, these are the letters
B, L, X, B, P, ...
The result for your specific input is:
[B=0.150, E=0.107, X=0.093, L=0.079, Q=0.079, P=0.071, K=0.064, I=0.050, O=0.050, R=0.043, F=0.036, J=0.036, A=0.029, S=0.029, Y=0.021, Z=0.021, C=0.014, T=0.014, D=0.007, V=0.007]
[L=0.129, O=0.100, H=0.093, A=0.079, V=0.071, Y=0.071, B=0.057, K=0.057, U=0.050, F=0.043, P=0.043, S=0.043, Z=0.043, D=0.029, W=0.029, N=0.021, C=0.014, I=0.014, J=0.007, T=0.007]
[W=0.157, Z=0.093, K=0.079, L=0.079, V=0.079, A=0.071, G=0.071, J=0.064, O=0.050, X=0.050, D=0.043, U=0.043, S=0.036, Q=0.021, E=0.014, F=0.014, N=0.014, M=0.007, T=0.007, Y=0.007]
[M=0.150, P=0.100, Q=0.100, I=0.079, B=0.071, Z=0.071, L=0.064, W=0.064, K=0.057, V=0.043, E=0.036, A=0.029, C=0.029, N=0.029, U=0.021, H=0.014, S=0.014, D=0.007, G=0.007, J=0.007, T=0.007]
[L=0.136, Y=0.100, A=0.086, O=0.086, P=0.086, U=0.086, H=0.064, K=0.057, V=0.050, Z=0.050, S=0.043, J=0.029, M=0.021, T=0.021, W=0.021, G=0.014, I=0.014, B=0.007, C=0.007, N=0.007, R=0.007, X=0.007]
[I=0.129, M=0.107, X=0.100, L=0.086, W=0.079, S=0.064, R=0.057, H=0.050, Q=0.050, K=0.043, E=0.036, C=0.029, T=0.029, V=0.029, F=0.021, J=0.021, P=0.021, G=0.014, Y=0.014, A=0.007, D=0.007, O=0.007]
Look at it. You see that for the first digit the letter B is very common, at 15%, followed by E with about 10%, and so on. There is a high chance that, for the first digit of the key, the ciphertext letter B is an alias for E in the real text (since E is the most common letter in English text), and that the next most common ciphertext letters likewise stand for other frequent plaintext letters such as T or A.
Using that, you can easily reverse-compute the letter of the key used for encryption. It is obtained by
(B - E) mod 26 = X
Note that your message's distribution might not necessarily align with the distribution over all English text, especially if the message is not that long (the longer the message, the more accurate the computed distribution) or if it mainly consists of weird and unusual words.
You can counter that by trying out a few combinations among the highest of your distribution. So for the first digit you could try out whether
B -> E
E -> E
X -> E
L -> E
Or instead of mapping to E only, also try out the second most common character T:
B -> T
E -> T
X -> T
L -> T
The amount of combinations you get with that is very low. Use dictionaries and N-grams (as mentioned before) to validate whether the key is correct or not.
Java Implementation
Your message is actually very interesting: it aligns almost perfectly with the real letter frequency of English text. So for your particular case you do not need to try out any combinations, nor do you need any dictionary or n-gram checks. You can just translate the most common letter in your encrypted message (per digit) to the most common character in English text, E, and obtain the actual key.
Since that is so simple and trivial, here is a full implementation in Java for what I explained before step by step, with some debug outputs (it is a quick prototype, not really nicely structured):
import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public final class CrackViginere {
private static final int ALPHABET_SIZE = 26;
private static final char FIRST_CHAR_IN_ALPHABET = 'A';
public static void main(final String[] args) {
String encrypted =
"BYOIZRLAUMYXXPFLPWBZLMLQPBJMSCQOWVOIJPYPALXCWZLKXYVMKXEHLIILLYJMUGBVXBOIRUAVAEZAKBHXBDZQJLELZIKMKOWZPXBKOQALQOWKYIBKGNTCPAAKPWJHKIAPBHKBVTBULWJSOYWKAMLUOPLRQOWZLWRSLEHWABWBVXOLSKOIOFSZLQLYKMZXOBUSPRQVZQTXELOWYHPVXQGDEBWBARBCWZXYFAWAAMISWLPREPKULQLYQKHQBISKRXLOAUOIEHVIZOBKAHGMCZZMSSSLVPPQXUVAOIEHVZLTIPWLPRQOWIMJFYEIAMSLVQKWELDWCIEPEUVVBAZIUXBZKLPHKVKPLLXKJMWPFLVBLWPDGCSHIHQLVAKOWZSMCLXWYLFTSVKWELZMYWBSXKVYIKVWUSJVJMOIQOGCNLQVXBLWPHKAOIEHVIWTBHJMKSKAZMKEVVXBOITLVLPRDOGEOIOLQMZLXKDQUKBYWLBTLUZQTLLDKPLLXKZCUKRWGVOMPDGZKWXZANALBFOMYIXNGLZEKKVCYMKNLPLXBYJQIPBLNMUMKNGDLVQOWPLEOAZEOIKOWZZMJWDMZSRSMVJSSLJMKMQZWTMXLOAAOSTWABPJRSZMYJXJWPHHIVGSLHYFLPLVXFKWMXELXQYIFUZMYMKHTQSMQFLWYIXSAHLXEHLPPWIVNMHRAWJWAIZAAWUGLBDLWSPZAJSCYLOQALAYSEUXEBKNYSJIWQUKELJKYMQPUPLKOLOBVFBOWZHHSVUIAIZFFQJEIAZQUKPOWPHHRALMYIAAGPPQPLDNHFLBLPLVYBLVVQXUUIUFBHDEHCPHUGUM";
int keyLength = 6;
char mostCommonCharOverall = 'E';
// Blocks
List<String> blocks = new ArrayList<>();
for (int startIndex = 0; startIndex < encrypted.length(); startIndex += keyLength) {
int endIndex = Math.min(startIndex + keyLength, encrypted.length());
String block = encrypted.substring(startIndex, endIndex);
blocks.add(block);
}
System.out.println("Individual blocks are:");
blocks.forEach(System.out::println);
// Frequency
List<Map<Character, Integer>> digitToCounts = Stream.generate(HashMap<Character, Integer>::new)
.limit(keyLength)
.collect(Collectors.toList());
for (String block : blocks) {
for (int i = 0; i < block.length(); i++) {
char c = block.charAt(i);
Map<Character, Integer> counts = digitToCounts.get(i);
counts.compute(c, (character, count) -> count == null ? 1 : count + 1);
}
}
List<List<CharacterFrequency>> digitToFrequencies = new ArrayList<>();
for (Map<Character, Integer> counts : digitToCounts) {
int totalCharacterCount = counts.values()
.stream()
.mapToInt(Integer::intValue)
.sum();
List<CharacterFrequency> frequencies = new ArrayList<>();
for (Map.Entry<Character, Integer> entry : counts.entrySet()) {
double frequency = entry.getValue() / (double) totalCharacterCount;
frequencies.add(new CharacterFrequency(entry.getKey(), frequency));
}
Collections.sort(frequencies);
digitToFrequencies.add(frequencies);
}
System.out.println("Frequency distribution for each digit is:");
digitToFrequencies.forEach(System.out::println);
// Guessing
StringBuilder keyBuilder = new StringBuilder();
for (List<CharacterFrequency> frequencies : digitToFrequencies) {
char mostFrequentChar = frequencies.get(0)
.getCharacter();
int keyInt = mostFrequentChar - mostCommonCharOverall;
keyInt = keyInt >= 0 ? keyInt : keyInt + ALPHABET_SIZE;
char key = (char) (FIRST_CHAR_IN_ALPHABET + keyInt);
keyBuilder.append(key);
}
String key = keyBuilder.toString();
System.out.println("The guessed key is: " + key);
System.out.println("Decrypted message:");
System.out.println(decrypt(encrypted, key));
}
private static String decrypt(String encryptedMessage, String key) {
StringBuilder decryptBuilder = new StringBuilder(encryptedMessage.length());
int digit = 0;
for (char encryptedChar : encryptedMessage.toCharArray())
{
char keyForDigit = key.charAt(digit);
int decryptedCharInt = encryptedChar - keyForDigit;
decryptedCharInt = decryptedCharInt >= 0 ? decryptedCharInt : decryptedCharInt + ALPHABET_SIZE;
char decryptedChar = (char) (decryptedCharInt + FIRST_CHAR_IN_ALPHABET);
decryptBuilder.append(decryptedChar);
digit = (digit + 1) % key.length();
}
return decryptBuilder.toString();
}
private static class CharacterFrequency implements Comparable<CharacterFrequency> {
private final char character;
private final double frequency;
private CharacterFrequency(final char character, final double frequency) {
this.character = character;
this.frequency = frequency;
}
@Override
public int compareTo(final CharacterFrequency o) {
return -1 * Double.compare(frequency, o.frequency);
}
private char getCharacter() {
return character;
}
private double getFrequency() {
return frequency;
}
@Override
public String toString() {
return character + "=" + String.format("%.3f", frequency);
}
}
}
Decrypted
Using above code, the key is:
XHSIHE
And the full decrypted message is:
ERWASNOTCERTAINDISESTEEMSURELYTHENHEMIGHTHAVEREGARDEDTHATABHORRENCEOFTHEUNINTACTSTATEWHICHHEHADINHERITEDWITHTHECREEDOFMYSTICISMASATLEASTOPENTOCORRECTIONWHENTHERESULTWASDUETOTREACHERYAREMORSESTRUCKINTOHIMTHEWORDSOFIZZHUETTNEVERQUITESTILLEDINHISMEMORYCAMEBACKTOHIMHEHADASKEDIZZIFSHELOVEDHIMANDSHEHADREPLIEDINTHEAFFIRMATIVEDIDSHELOVEHIMMORETHANTESSDIDNOSHEHADREPLIEDTESSWOULDLAYDOWNHERLIFEFORHIMANDSHEHERSELFCOULDDONOMOREHETHOUGHTOFTESSASSHEHADAPPEAREDONTHEDAYOFTHEWEDDINGHOWHEREYESHADLINGEREDUPONHIMHOWSHEHADHUNGUPONHISWORDSASIFTHEYWEREAGODSANDDURINGTHETERRIBLEEVENINGOVERTHEHEARTHWHENHERSIMPLESOULUNCOVEREDITSELFTOHISHOWPITIFULHERFACEHADLOOKEDBYTHERAYSOFTHEFIREINHERINABILITYTOREALIZETHATHISLOVEANDPROTECTIONCOULDPOSSIBLYBEWITHDRAWNTHUSFROMBEINGHERCRITICHEGREWTOBEHERADVOCATECYNICALTHINGSHEHADUTTEREDTOHIMSELFABOUTHERBUTNOMANCANBEALWAYSACYNI
Which is more or less valid English text:
er was not certain disesteem surely then he might have regarded that
abhorrence of the unintact state which he had inherited with the creed
of mysticism as at least open to correction when the result was due to
treachery a remorse struck into him the words of izz huett never quite
stilled in his memory came back to him he had asked izz if she loved
him and she had replied in the affirmative did she love him more than
tess did no she had replied tess would lay down her life for him and she
herself could do no more he thought of tess as she had appeared on the day
of the wedding how her eyes had lingered upon him how she had hung upon
his words as if they were a gods and during the terrible evening over
the hearth when her simple soul uncovered itself to his how pitiful her
face had looked by the rays of the fire in her inability to realize that
his love and protection could possibly be withdrawn thus from being her
critic he grew to be her advocate cynical things he had uttered to
himself about her but no man can be always a cyni
Which, by the way, is a quote from the British novel Tess of the d'Urbervilles: A Pure Woman Faithfully Presented, Phase the Sixth: The Convert, Chapter XLIX.

Standard Vigenere interleaves Caesar shift ciphers, selected by the key. If the Vigenere key is six characters long, then letters 1, 7, 13, ... of the ciphertext are on one Caesar shift -- every sixth character uses the first character of the key. Letters 2, 8, 14, ... of the ciphertext use a different (in general) Caesar shift, and so on.
That gives you six different Caesar shift ciphers to solve. The extracted text will not read as English, since it is only every sixth letter, so you will need to solve each one by letter frequency. That will give you a few good options for each position of the key. Try them in order of probability to see which gives the correct decryption.
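For illustration, the splitting step could look something like this sketch (my own illustration, not part of the original answer):
static int[][] letterCountsPerKeyPosition(String ciphertext, int keyLength) {
    // counts[p][c] = how often letter ('A' + c) appears at key position p
    int[][] counts = new int[keyLength][26];
    for (int i = 0; i < ciphertext.length(); i++) {
        counts[i % keyLength][ciphertext.charAt(i) - 'A']++;
    }
    return counts;
}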

Related

NZEC error in Hackerearth problem in java

I'm trying to solve this HackerEarth problem: https://www.hackerearth.com/practice/basic-programming/input-output/basics-of-input-output/practice-problems/algorithm/anagrams-651/description/
I have searched the internet but couldn't find a solution to my problem.
This is my code:
String a = new String();
String b = new String();
a = sc.nextLine();
b = sc.nextLine();
int t = sc.nextInt();
int check = 0;
int againCheck = 0;
for (int k = 0; k < t; k++)
{
    for (int i = 0; i < a.length(); i++)
    {
        char ch = a.charAt(i);
        for (int j = 0; j < b.length(); j++)
        {
            check = 0;
            if (ch != b.charAt(j))
            {
                check = 1;
            }
        }
        againCheck += check;
    }
}
System.out.println(againCheck * againCheck);
I expect the output to be 4, but it is showing the "NZEC" error
Can anyone help me, please?
The requirements state[1] that the input is a number (N) followed by 2 x N lines. Your code is reading two strings followed by a number. It is probably throwing an InputMismatchException when it attempts to parse the 3rd line of input as a number.
Hints:
It pays to read the requirements carefully.
Read this article on CodeChef about how to debug a NZEC: https://discuss.codechef.com/t/tutorial-how-to-debug-an-nzec-error/11221. It explains techniques such as catching exceptions in your code and printing out a Java stacktrace so that you can see what is going wrong.
[1] - Admittedly, the requirements are not crystal clear. But in the sample input the first line is a number.
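For illustration, reading input in the shape the problem statement describes (a count followed by pairs of lines) might look something like this sketch; the per-pair computation is up to you:
import java.util.Scanner;
public class AnagramInput {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = Integer.parseInt(sc.nextLine().trim()); // first line: number of test cases
        for (int t = 0; t < n; t++) {
            String a = sc.nextLine(); // first string of the pair
            String b = sc.nextLine(); // second string of the pair
            // ... compute and print the answer for this pair ...
        }
    }
}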
As I've written in other answers as well, it is best to write your code like this when submitting on sites:
def myFunction():
    try:
        ...  # MY LOGIC HERE
    except Exception as E:
        print("ERROR Occurred : {}".format(E))
This will clearly show you what error you are facing in each test case. For a site like hacker earth, that has several input problems in various test cases, this is a must.
Coming to your question, NZEC stands for: NON-ZERO EXIT CODE.
This could mean anything and everything, from an input error to a server earthquake.
Regardless of hacker-whatsoever.com I am going to give two useful things:
An easier algorithm, so you can code it yourself, because your algorithm will not work as you expect;
A Java 8+ solution with totally a different algorithm, more complex but more efficient.
SIMPLE ALGORITHM
In your solution you have a typical double for loop that you use to check whether every char in a is also in b. That part is good, but the rest is discardable. Try to implement this:
For each character of a, find the first occurrence of that character in b.
If there is a match, remove that character from a and b.
The number of remaining characters in both strings is the number of deletions you have to perform to turn them into anagrams of each other. So, return the sum of the remaining lengths of a and b.
NOTE: It is important that you keep track of what you have already matched: with your approach you would have counted the same character several times! A sketch of this follows below.
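A rough sketch of this naive approach in Java (my own illustration of the steps above) could look like:
static int deletionsToMakeAnagrams(String a, String b) {
    StringBuilder remainingB = new StringBuilder(b);
    int unmatchedInA = 0;
    for (char c : a.toCharArray()) {
        int idx = remainingB.indexOf(String.valueOf(c)); // first occurrence of c still left in b
        if (idx >= 0) {
            remainingB.deleteCharAt(idx); // consume the match so it is not counted twice
        } else {
            unmatchedInA++;               // this character of a has no partner in b
        }
    }
    return unmatchedInA + remainingB.length(); // leftovers in both strings must be deleted
}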
As you can see, those steps describe only a naive algorithm, meant as a hint to help you with your studying. In fact this algorithm has a worst-case complexity of O(n^2) (because of the nested loop), which is generally bad. Now, a better solution.
BETTER SOLUTION
My algorithm is just O(n). It works this way:
I build a map. (If you don't know what that is: to put it simply, it's a data structure that stores "key-value" pairs.) In this case the keys are characters, and the values are integer counters bound to the respective characters.
Every time a character is found in a, its counter increases by 1;
Every time a character is found in b, its counter decreases by 1;
Now every counter represents the difference between the number of times its character appears in a and in b. So, the sum of the absolute values of the counters is the solution!
To implement it, I actually add an entry to the map whenever I find a character for the first time, instead of pre-constructing a map with the whole alphabet. I also made heavy use of lambda expressions, to show you a very different style.
Here's the code:
import java.util.HashMap;
public class HackerEarthProblemSolver {
    private static final String a = "..."; // your first input string
    private static final String b = "..."; // your second input string
    static int sum = 0; // the result; static so the lambda below can update it
    public static void main(String[] args) {
        HashMap<Character, Integer> map = new HashMap<>(); // creating the map
        for (char c : a.toCharArray()) {                   // for each character in a
            map.computeIfPresent(c, (k, i) -> i + 1);      // +1 to its counter
            map.computeIfAbsent(c, k -> 1);                // initialize its counter to 1 (0+1)
        }
        for (char c : b.toCharArray()) {                   // for each character in b
            map.computeIfPresent(c, (k, i) -> i - 1);      // -1 to its counter
            map.computeIfAbsent(c, k -> -1);               // initialize its counter to -1 (0-1)
        }
        map.forEach((k, i) -> sum += Math.abs(i));         // summing the absolute values of the counters
        System.out.println(sum);
    }
}
Basically, both solutions just count how many letters the two strings have in common, but with different approaches.
Hope I helped!

Converting letters to alphabet position with two JLists

I'm trying to replace all letters of the alphabet in JList1 with the number corresponding to their place in the alphabet, showing the result in JList2 at the press of the Run button (e.g. A to 01). If a character is not an English alphabet letter, it should be left as it is. Capitalization doesn't matter (a and A are both 01) and spaces should be kept.
For visual purposes:
"Apple!" should be converted to "0116161205!"
"stack Overflow" to "1920010311 1522051806121523"
"über" to "ü020518"
I have tried a few methods I found on here, but had zero clue how to add the extra 0 in front of the first 9 letters or keep the spaces. Any help is much appreciated.
Here is a solution:
//Create a Map of character and equivalent number
Map<Character, String> lettersToNumber = new HashMap<>();
int i = 1;
for (char c = 'a'; c <= 'z'; c++) {
    lettersToNumber.put(c, String.format("%02d", i++));
}
//Loop over the characters of your input and append the corresponding number
String result = "";
for (char c : "Apple!".toCharArray()) {
    char x = Character.toLowerCase(c);
    result += lettersToNumber.containsKey(x) ? lettersToNumber.get(x) : c;
}
Input, Output
Apple! => 0116161205!
stack Overflow => 1920010311 1522051806121523
über => ü020518
So given...
(ex. A to 01) And if it's not an English alphabet letter then leaving it as it is. Capitalization doesn't matter (a and A is still 01) and spaces should be kept.
This raises some interesting points:
We don't care about non-english characters, so we can dispense with issues around UTF encoding
Capitalization doesn't matter
Spaces should be kept
The reason these points are interesting to me is that they mean we're only interested in a small subset of characters (1-26). This immediately screams "ASCII" to me!
This provides an immediate lookup table which doesn't require us to produce anything up front; it's immediately accessible.
A quick look at any ASCII table provides us with all the information we need: A-Z is in the range 65-90 (and since we don't care about case, we don't need to worry about the lower-case range).
But how does that help us!?
Well, this now means the primary question becomes, "How do we convert a char to an int?", which is amazingly simple! A char can be both a "character" and a "number" at the same time, because of the ASCII encoding support!
So if you were to print out (int)'A', it would print 65! And since all the characters are in order, we just need to subtract 64 from 65 to get 1!
That's basically your entire problem solved right there!
Oh, okay, you need to deal with the edge case of characters not falling between A-Z, but that's just a simple if statement.
A solution based on the above "might" look something like...
public static String convert(String text) {
    int offset = 64; // 'A' is 65 in ASCII, so subtracting 64 maps A to 1, ..., Z to 26
    StringBuilder sb = new StringBuilder(32);
    for (char c : text.toCharArray()) {
        char input = Character.toUpperCase(c);
        int value = ((int) input) - offset;
        if (value < 1 || value > 26) {
            sb.append(c); // not A-Z, keep the character as it is
        } else {
            sb.append(String.format("%02d", value)); // pad to two digits, e.g. 1 -> "01"
        }
    }
    return sb.toString();
}
Now, there are a number of ways you might approach this, I've chosen a path based on my understanding of the problem and my experience.
And based on your example input...
String[] test = {"Apple!", "stack Overflow", "über"};
for (String value : test) {
    System.out.println(value + " = " + convert(value));
}
would produce the following output...
Apple! = 0116161205!
stack Overflow = 1920010311 1522051806121523
über = ü020518

Time Complexity of Code for finding longest word inside dictionary

The problem is as follows: you start with a 2-letter word, and you can append letters to the front and back of the word. You have to return the longest word in a given dictionary that you can form by appending letters to the front and back of the 2-letter word, such that every intermediate word you form is also in the dictionary.
For example:
Start: 'at'
Dict: [hat, chat, chats, rat, rate, orange]
Output: 'chats', because: at -> hat -> chat -> chats
I have the code as follows:
public static String longest(ArrayList<String> input) {
    return helper("at", input);
}

public static String helper(String in, ArrayList<String> dict) {
    ArrayList<String> maxes = new ArrayList<String>();
    for (char a = 'a'; a <= 'z'; a++) {
        String front = Character.toString(a) + in;
        String back = in + Character.toString(a);
        if (dict.contains(front)) {
            maxes.add(helper(front, dict));
        }
        if (dict.contains(back)) {
            maxes.add(helper(back, dict));
        }
    }
    if (maxes.size() == 0) {
        return in;
    }
    String word = "";
    for (String w : maxes) {
        if (w.length() > word.length()) {
            word = w;
        }
    }
    return word;
}
I was wondering what the time complexity for this algorithm would be? I can't for the life of me figure it out.
The answer strongly depends on your dictionary (n words with max reachable length L<=n+1) and on your data structure for storing it. Each call to helper (without its recursive calls) is O(n L) with dict being an ArrayList, whereas with a hash table it's O(L) (absent unlikely collisions). (There can be very long unreachable words in the dictionary, but it still costs only O(L) to compare against them because your trial words can't be longer.)
As for the number of calls to helper: this is just a depth-first search on the tree of words related by prepending/appending a letter. As such, it's O(v), where v is the number of vertices visited. The values of v for various input words depends on your dictionary as well: v<=n, of course, and is often much less. As an example: using the 71813 lines in my /usr/share/dict/words that are all ASCII letters (and ignoring case), the most words ever considered is 593 (for "Ar" as in argon).
The worst-case dictionary will have all its words forming a chain: "ab", "abc", "abcd", and so on. You visit every word for a total cost of O(v n L) = O(n^3) (O(v L) = O(n^2) with the hash table). Realistic dictionaries will be much faster, not only because L is smaller but also because v is; the exact speedup is unfortunately difficult to analyze. It's probably reasonable to assume L is Θ(log(n)); there's no meaningful asymptotic expression for v as a function of n because realistic dictionaries don't have arbitrarily large n.
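As a concrete illustration of the data-structure point (my own variation, not the code from the question): swapping the ArrayList for a HashSet turns each dictionary lookup into an O(L) hash probe instead of an O(nL) scan:
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;
public class LongestWordFaster {
    public static String longest(ArrayList<String> dict) {
        return helper("at", new HashSet<>(dict)); // same search, cheaper membership tests
    }
    private static String helper(String in, Set<String> dict) {
        String best = in;
        for (char a = 'a'; a <= 'z'; a++) {
            for (String candidate : new String[]{a + in, in + a}) {
                if (dict.contains(candidate)) {
                    String result = helper(candidate, dict);
                    if (result.length() > best.length()) {
                        best = result;
                    }
                }
            }
        }
        return best;
    }
}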

Creating a simple encryption program

I'd like to use Java to make a cipher of sorts, but I'm not sure how to go about it.
Basically, I'd want the machine to accept a string of text, say "Abcd"
and then a key, say '4532'
The program should move the characters forward in the alphabet if the number matching the place of the letter is even, and backward if it's odd.
If the key runs out of numbers, it should loop around until there are no characters left in the string to change.
The program would then print the result. Ideally, if I'm pseudocoding this correctly, deciphering the string would be the reverse process, only possible with the key.
I'm guessing I'd use a combination of an array and if/else statements.
I'm not sure where to start.
Example (edit): String: 'hello', Key: '12'
a b c d e f g h i j k l m n o p q r s t u v w x y z
Because the corresponding key value is 1, h will travel backwards that many spaces.
h = g
because e has a 2, it'll move forward that many spaces.
e = g
the first l then becomes k, while the second becomes n. The Key is repeated because the string is out of numbers to compare. o turns into n because it's matched with 1.
hello would become ggknn with the key 12.
Here are possible steps you can take to do this. This is not an exact and working solution, but it will hopefully get you started.
Start by reading input from the console (via Scanner or a BufferedReader for example).
Split your input on spaces perhaps, so that you have a String[] of words.
Loop through the String[] of words, and loop again within each word. You can have a counter that is incremented in each iteration of the inner loop and reset at the end of the inner loop. Use that counter to index into the key (key[counter % lengthOfKey]) in each iteration of the inner loop. If (counter % lengthOfKey) % 2 == 0, you have the even-number case for the key, else the odd-numbered case. Do whatever encryption applies at that point (a simple substitution cipher, for example).
There are many methods of encryption, but if you want to learn about encryption you should start with the study of XOR encryption. XOR encryption uses a key and XORs the binary code of every character with the key. If the key is as long as the message (and random and never reused), it forms a one-time pad that is impossible to decrypt.
XOR - Exclusive OR - unlike OR, the result is true only when exactly one of the two inputs is true.
Simple Explanation:
Pretend you want to encrypt the string "hello world" with the key 'c'.
For every character in the string XOR it with the key c.
Pretend the binary value of h is 1100011 and the binary value of c is 0010110 (these are made up and will not work) then you XOR every corresponding binary value.
1100011
XOR
0010110
-------
1110101
1110101 is the XORed binary value.
You then cast the binary value back into a character, and you do this for every character of the string.
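A minimal sketch of single-character-key XOR in Java (my own illustration; a real implementation would work on bytes and use a longer key):
public class XorDemo {
    public static void main(String[] args) {
        String message = "hello world";
        char key = 'c';
        // XOR every character with the key; XOR-ing again with the same key restores the original.
        StringBuilder encrypted = new StringBuilder();
        for (char c : message.toCharArray()) {
            encrypted.append((char) (c ^ key));
        }
        StringBuilder decrypted = new StringBuilder();
        for (char c : encrypted.toString().toCharArray()) {
            decrypted.append((char) (c ^ key));
        }
        System.out.println(decrypted); // prints: hello world
    }
}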
Problems:
Insecure for short keys, but very powerful for long keys; with a truly random key as long as the message it creates a one-time pad.
Example code:
http://www.ecestudents.ul.ie/Course_Pages/Btech_ITT/Modules/ET4263/More%20Samples/CEncrypt.java.html
Find below a class for encryption:
public class App
{
    public static void main(String arg[])
    {
        String keys = "12";
        String codes = "hello";
        StringBuilder result = new StringBuilder();
        char[] codeList = codes.toCharArray();
        char[] keyList = keys.toCharArray();
        int maxCount = keys.length();
        System.out.println("The length is " + maxCount);
        int i = 0;
        for (Character code : codeList) {
            int key = Character.getNumericValue(keyList[i]);
            if (key % 2 == 0)
            {
                int res = code + key;
                result.append((char) res);
            }
            else
            {
                int res = code - key;
                result.append((char) res);
            }
            i++;
            if (i == maxCount)
            {
                i = 0;
            }
        }
        System.out.println("The result is " + result.toString());
    }
}

Java indexOf function more efficient than Rabin-Karp? Search Efficiency of Text

I posed a question to Stack Overflow a few weeks ago about creating an efficient algorithm to search for a pattern in a large chunk of text. Right now I am using the String function indexOf to do the search. One suggestion was to use Rabin-Karp as an alternative. I wrote a little test program, as follows, to test an implementation of Rabin-Karp.
public static void main(String[] args) {
    String test = "Mary had a little lamb whose fleece was white as snow";
    String p = "was";
    long start = Calendar.getInstance().getTimeInMillis();
    for (int x = 0; x < 200000; x++)
        test.indexOf(p);
    long end = Calendar.getInstance().getTimeInMillis();
    end = end - start;
    System.out.println("Standard Java Time->" + end);
    RabinKarp searcher = new RabinKarp("was");
    start = Calendar.getInstance().getTimeInMillis();
    for (int x = 0; x < 200000; x++)
        searcher.search(test);
    end = Calendar.getInstance().getTimeInMillis();
    end = end - start;
    System.out.println("Rabin Karp time->" + end);
}
And here is the implementation of Rabin-Karp that I am using:
import java.math.BigInteger;
import java.util.Random;
public class RabinKarp {
private String pat; // the pattern // needed only for Las Vegas
private long patHash; // pattern hash value
private int M; // pattern length
private long Q; // a large prime, small enough to avoid long overflow
private int R; // radix
private long RM; // R^(M-1) % Q
static private long dochash = -1L;
public RabinKarp(int R, char[] pattern) {
throw new RuntimeException("Operation not supported yet");
}
public RabinKarp(String pat) {
this.pat = pat; // save pattern (needed only for Las Vegas)
R = 256;
M = pat.length();
Q = longRandomPrime();
// precompute R^(M-1) % Q for use in removing leading digit
RM = 1;
for (int i = 1; i <= M - 1; i++)
RM = (R * RM) % Q;
patHash = hash(pat, M);
}
// Compute hash for key[0..M-1].
private long hash(String key, int M) {
long h = 0;
for (int j = 0; j < M; j++)
h = (R * h + key.charAt(j)) % Q;
return h;
}
// Las Vegas version: does pat[] match txt[i..i-M+1] ?
private boolean check(String txt, int i) {
for (int j = 0; j < M; j++)
if (pat.charAt(j) != txt.charAt(i + j))
return false;
return true;
}
// check for exact match
public int search(String txt) {
int N = txt.length();
if (N < M)
return -1;
long txtHash;
if (dochash == -1L) {
txtHash = hash(txt, M);
dochash = txtHash;
} else
txtHash = dochash;
// check for match at offset 0
if ((patHash == txtHash) && check(txt, 0))
return 0;
// check for hash match; if hash match, check for exact match
for (int i = M; i < N; i++) {
// Remove leading digit, add trailing digit, check for match.
txtHash = (txtHash + Q - RM * txt.charAt(i - M) % Q) % Q;
txtHash = (txtHash * R + txt.charAt(i)) % Q;
// match
int offset = i - M + 1;
if ((patHash == txtHash) && check(txt, offset))
return offset;
}
// no match
return -1; // was N
}
// a random 31-bit prime
private static long longRandomPrime() {
BigInteger prime = new BigInteger(31, new Random());
return prime.longValue();
}
// test client
}
The implementation of Rabin-Karp works, in that it returns the correct offset of the string I am looking for. What is surprising to me, though, are the timing statistics I got when I ran the test program. Here they are:
Standard Java Time->39
Rabin Karp time->409
This was really surprising. Not only is Rabin-Karp (at least as implemented here) not faster than the standard Java indexOf String function, it is slower by an order of magnitude. I don't know what is wrong (if anything). Anyone have thoughts on this?
Thanks,
Elliott
I answered this question earlier and Elliott pointed out I was just plain wrong. I apologise to the community.
There is nothing magical about the String.indexOf code. It is not natively optimised or anything like that. You can copy the indexOf method from the String source code and it runs just as quickly.
What we have here is the difference between O() efficiency and actual efficiency. For a String of length N and a pattern of length M, Rabin-Karp is O(N+M) on average, with a worst case of O(NM). When you look into it, String.indexOf() also has a best case of O(N+M) and a worst case of O(NM).
If the text contains many partial matches to the start of the pattern Rabin-Karp will stay close to its best-case performance, whilst String.indexOf will not. For example I tested the above code (properly this time :-)) on a million '0's followed by a single '1', and the searched for 1000 '0's followed by a single '1'. This forced the String.indexOf to its worst case performance. For this highly degenerate test, the Rabin-Karp algorithm was about 15 times faster than indexOf.
For natural language text, Rabin-Karp will remain close to best-case and indexOf will only deteriorate slightly. The deciding factor is therefore the complexity of operations performed on each step.
In its innermost loop, indexOf scans for a matching first character. At each iteration it has to:
increment the loop counter
perform two logical tests
do one array access
In Rabin-Karp each iteration has to:
increment the loop counter
perform two logical tests
do two array accesses (actually two method invocations)
update a hash, which above requires 9 numerical operations
Therefore at each iteration Rabin-Karp will fall further and further behind. I tried simplifying the hash algorithm to just XOR characters, but I still had an extra array access and two extra numerical operations so it was still slower.
Furthermore, when a hash match is found, Rabin-Karp only knows that the hashes match and must therefore test every character, whereas indexOf already knows the first character matches and therefore has one less test to do.
Having read on Wikipedia that Rabin-Karp is used to detect plagiarism, I took the Bible's Book of Ruth, removed all punctuation and made everything lower case, which left just under 10000 characters. I then searched for "andthewomenherneighboursgaveitaname", which occurs near the very end of the text. String.indexOf was still faster, even with just the XOR hash. However, if I removed String.indexOf's advantage of being able to access String's private internal character array and forced it to copy the character array, then, finally, Rabin-Karp was genuinely faster.
However, I deliberately chose that text as there are 213 "and"s in the Book of Ruth and 28 "andthe"s. If instead I searched just for the last characters "ursgaveitaname", well there are only 3 "urs"s in the text so indexOf returns closer to its best-case and wins the race again.
As a fairer test I chose random 20 character strings from the 2nd half of the text and timed them. Rabin-Karp was about 20% slower than the indexOf algorithm run outside of the String class, and 70% slower than the actual indexOf algorithm. Thus even in the use case it is supposedly appropriate for, it was still not the best choice.
So what good is Rabin-Karp? No matter the length or nature of the text to be searched, at every character compared it will be slower. No matter what hash function we choose we are surely required to make an additional array access and at least two numerical operations. A more complex hash function will give us less false matches, but require more numerical operators. There is simply no way Rabin-Karp can ever keep up.
As demonstrated above, if we need to find a match prefixed by an often repeated block of text, indexOf can be slower, but if we know we are doing that it would look like we would still be better off using indexOf to search for the text without the prefix and then check to see if the prefix was present.
Based on my investigations today, I cannot see any time when the additional complexity of Rabin Karp will pay off.
Here is the source to java.lang.String. indexOf is line 1770.
My suspicion is that since you are using it on such a short input string, the extra overhead of the Rabin-Karp algorithm over the seemingly naive implementation of java.lang.String's indexOf means you aren't seeing the true performance of the algorithm. I would suggest trying it on a much longer input string to compare performance.
From my understanding, Rabin-Karp is best used when searching a block of text for multiple words/phrases.
Think about a bad word search, for flagging abusive language.
If you have a list of 2000 words, including derivations, then you would need to call indexOf 2000 times, one for each word you are trying to find.
RabinKarp helps with this by doing the search the other way around.
Make a 4 character hash of each of the 2000 words, and put that into a dictionary with a fast lookup.
Now, for each 4 characters of the search text, hash and check against the dictionary.
As you can see, the search is now the other way around - we're searching the 2000 words for a possible match instead.
Then we get the string from the dictionary and do an equals to check to be sure.
It's also a faster search this way, because we're searching a dictionary instead of string matching.
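A rough sketch of that idea, using the 4-character prefixes themselves as dictionary keys instead of rolling hashes (my simplification of the approach described above):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
public class MultiWordSearch {
    // Index each word under its first 4 characters; then slide a 4-character window
    // over the text and only compare words whose prefix matches the window.
    static Set<String> findAny(String text, List<String> words) {
        Map<String, List<String>> byPrefix = new HashMap<>();
        for (String w : words) {
            if (w.length() >= 4) {
                byPrefix.computeIfAbsent(w.substring(0, 4), k -> new ArrayList<>()).add(w);
            }
        }
        Set<String> found = new HashSet<>();
        for (int i = 0; i + 4 <= text.length(); i++) {
            List<String> candidates = byPrefix.get(text.substring(i, i + 4));
            if (candidates != null) {
                for (String w : candidates) {
                    if (text.startsWith(w, i)) { // confirm the full word, not just the prefix
                        found.add(w);
                    }
                }
            }
        }
        return found;
    }
}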
Now, imagine the WORST case scenario of doing all those indexOf searches - the very LAST word we check is a match ...
The Wikipedia article on Rabin-Karp even mentions its inferiority in the situation you describe. ;-)
http://en.wikipedia.org/wiki/Rabin-Karp_algorithm
But this is only natural!
Your test input, first of all, is too trivial.
indexOf returns the index of "was" by searching a small buffer (the String's internal char array), while Rabin-Karp has to do preprocessing to set up its data before it can work, which takes extra time.
To see a difference you would have to test on a really large text and search for longer expressions.
Also please note that more sophisticated string-search algorithms can have "expensive" setup/preprocessing in exchange for a really fast search afterwards.
In your case you just search for "was" in a sentence. In any case, you should always take the input into account.
Without looking into details, two reasons come to my mind:
you are very likely to outperform standard API implementations only for very special cases. I do not consider "Mary had a little lamb whose fleece was white as snow" to be such.
microbenchmarking is very difficult and can give quite misleading results. Here is an explanation, here a list of tools you could use
Don't just try a longer static string: try generating long random strings and inserting the search target at a random location each time. Without randomizing it, you will see a fixed result for indexOf.
EDITED:
Random is the wrong concept; most text is not truly random. But you would need a lot of different long strings for the test to be effective, not just the same String tested multiple times. I am sure there are ways to extract "random" large Strings from an even larger text source, or something like that.
For this kind of search, Knuth-Morris-Pratt may perform better. In particular if the sub-string doesn't just repeat characters, then KMP should outperform indexOf(). Worst case (string of all the same characters) it will be the same.
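For reference, a minimal KMP sketch (my own illustration, not taken from the answer above): build the failure table for the pattern, then scan the text once without ever moving backwards.
static int kmpIndexOf(String text, String pattern) {
    int n = text.length(), m = pattern.length();
    if (m == 0) return 0;
    int[] fail = new int[m];                   // fail[i] = length of the longest proper prefix of
    for (int i = 1, k = 0; i < m; i++) {       // pattern[0..i] that is also a suffix of it
        while (k > 0 && pattern.charAt(i) != pattern.charAt(k)) k = fail[k - 1];
        if (pattern.charAt(i) == pattern.charAt(k)) k++;
        fail[i] = k;
    }
    for (int i = 0, k = 0; i < n; i++) {       // scan the text, reusing matched prefix lengths
        while (k > 0 && text.charAt(i) != pattern.charAt(k)) k = fail[k - 1];
        if (text.charAt(i) == pattern.charAt(k)) k++;
        if (k == m) return i - m + 1;          // full match ends at position i
    }
    return -1;                                 // no match
}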
