I've got a live stream of JPEG images coming in on a different thread in my Java app, and I want to continually scan for faces so that I can later output a list of all the different faces that passed the camera while it was running, along with how many times each face was seen. Here's my current code:
void doImageProcessing() {
    // Create face stuff
    FKEFaceDetector faceDetector = new FKEFaceDetector(new HaarCascadeDetector());
    EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(512, new RotateScaleAligner(), 512, DoubleFVComparison.CORRELATION, Float.MAX_VALUE);
    FaceRecognitionEngine<KEDetectedFace, Extractor<KEDetectedFace>, Person> faceEngine = FaceRecognitionEngine.create(faceDetector, faceRecognizer);
    // Start loop
    while (true) {
        // Get next frame
        byte[] imgData = nextProcessingData;
        nextProcessingData = null;
        // Decode image
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));
        // Detect faces
        FImage fimg = ImageUtilities.createFImage(img);
        List<KEDetectedFace> faces = faceEngine.getDetector().detectFaces(fimg);
        // Go through detected faces
        for (KEDetectedFace face : faces) {
            // Find existing person for this face
            Person person = null;
            try {
                List<IndependentPair<KEDetectedFace, ScoredAnnotation<Person>>> rfaces = faceEngine.recogniseBest(face.getFacePatch());
                ScoredAnnotation<Person> score = rfaces.get(0).getSecondObject();
                if (score != null)
                    person = score.annotation;
            } catch (Exception e) {
            }
            // If not found, create
            if (person == null) {
                // Create person
                person = new Person();
                System.out.println("Identified new person: " + person.getIdentifier());
                // Train engine to recognize this new person
                faceEngine.train(person, face.getFacePatch());
            } else {
                // This person has been detected before
                System.out.println("Identified existing person: " + person.getIdentifier());
            }
        }
    }
}
The problem is it always detects a face as new, even if it's the same face that was detected in the previous frame. rfaces is always empty. It can never identify an existing face. What am I doing wrong?
Also, I have no idea what the parameters to the EigenFaceRecogniser.create() function should be. Maybe that's why it's not recognizing anything...
The parameters you've given to the EigenFaceRecogniser.create() function are way off, so that's the likely cause of your problems. The following is more likely to work:
EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.CORRELATION, 0.9f);
Explanation:
The first parameter is the number of principal components in the EigenFace algorithm; the exact value is normally determined experimentally, but something around ~20 is probably fine.
The third parameter is the number of nearest neighbours to use for the KNN classifier. 1 nearest-neighbour should be fine.
The final parameter is a distance threshold for the classifier. The correlation comparison returns a similarity measure (high values mean more similar), so the threshold given is a lower limit that must be exceeded. As we've set 1 nearest neighbour, the similarity between the most similar face and the query face must be greater than 0.9. Note that a value of 0.9 is just a guess; in order to optimise the performance of your recogniser you'll need to experiment with this.
Another minor point - instead of:
BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));
FImage fimg = ImageUtilities.createFImage(img);
It's generally better to let OpenIMAJ read your image as it works around a number of known problems with ImageIO's handling of certain types of JPEG:
FImage fimg = ImageUtilities.readF(new ByteArrayInputStream(imgData));
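Putting both suggestions together, the top of your processing loop would look something like this (a minimal sketch based on your original method; exception handling is omitted, and it still assumes nextProcessingData has been filled by your producer thread):
// ~20 principal components, 1 nearest neighbour, 0.9 correlation threshold
FKEFaceDetector faceDetector = new FKEFaceDetector(new HaarCascadeDetector());
EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer =
        EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.CORRELATION, 0.9f);
FaceRecognitionEngine<KEDetectedFace, Extractor<KEDetectedFace>, Person> faceEngine =
        FaceRecognitionEngine.create(faceDetector, faceRecognizer);

while (true) {
    byte[] imgData = nextProcessingData;
    nextProcessingData = null;
    // Let OpenIMAJ decode the JPEG directly (avoids the ImageIO quirks mentioned above)
    FImage fimg = ImageUtilities.readF(new ByteArrayInputStream(imgData));
    List<KEDetectedFace> faces = faceEngine.getDetector().detectFaces(fimg);
    // ... recognise/train each face as before ...
}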
I recently implemented a simple Deep Q-Learning agent in Processing for a game called Frozen Lake (a game from OpenAI Gym). The agent basically has to find the shortest path between the starting and ending points, avoiding obstacles (holes in the ice) and without going off the map.
This is the code that generates the state passed to the Neural Network:
// Returns an array of doubles containing all 0s, except for the cell the agent is on, which is 1.
private double[] getState()
{
    double[] state = new double[cellNum];
    for (Cell cell : lake.cells)
    {
        if ((x - cellDim/2) == cell.x && (y - cellDim/2) == cell.y)
        {
            state[lake.cells.indexOf(cell)] = 1;
        }
        else
        {
            state[lake.cells.indexOf(cell)] = 0;
        }
    }
    return state;
}
where lake is the environment object, cells is an ArrayList attribute of lake containing all the squares of the map, x and y are the agent's coordinates on the map.
And all of this works well, but the agent only learns the best path for a single game map; if the map changes, the agent must be trained all over again.
I wanted the agent to learn how to play the game and not how to play a single map.
So, instead of setting all the map squares to 0 except the one the agent is on (set to 1), I tried to associate an arbitrary number with every kind of square (Start: 1, Ice: 8, Hole: 0, Goal: 3, Agent: 7) and build the input that way, but it didn't work at all.
So I tried converting the colors of the squares into grayscale values (from 0 to 255), so that the different squares were mapped as (roughly): Start: 45, Ice: 243.37, Hole: 34.57, Goal: 70.8, Agent: 150.
But this didn't work either, so I mapped all the grayscale values to values between 0 and 1.
But no result with this either.
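For concreteness, the normalised-grayscale version of the encoding described above would look something like this (a hypothetical reconstruction, not the repository's actual code; the grayscale() helper on Cell is assumed):
// Hypothetical sketch of the normalised-grayscale state encoding attempt.
private double[] getState()
{
    double[] state = new double[cellNum];
    for (Cell cell : lake.cells)
    {
        if ((x - cellDim/2) == cell.x && (y - cellDim/2) == cell.y)
        {
            // the agent's square gets its own value (150 in the example above)
            state[lake.cells.indexOf(cell)] = 150.0 / 255.0;
        }
        else
        {
            // every other square encodes its type as grayscale, normalised to [0, 1]
            state[lake.cells.indexOf(cell)] = cell.grayscale() / 255.0;
        }
    }
    return state;
}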
By the way, this is the code for the Neural Network to calculate the output:
public Layer[] estimateOutput(double[] input)
{
    Layer[] neurons = new Layer[2]; // Hidden neurons [0] and Output neurons [1].
    neurons[0] = new Layer(input); // To be transformed into Hidden neurons.
    // Hidden neurons values calculation.
    neurons[0] = neurons[0].dotProduct(weightsHiddenNeurons).addition(biasesHiddenNeurons).sigmoid();
    // Output neurons values calculation.
    neurons[1] = neurons[0].dotProduct(weightsOutputNeurons).addition(biasesOutputNeurons);
    if (gameData.trainingGames == gameData.gamesThreshold)
    {
        //this.render(new Layer(input), neurons[0], neurons[1].sigmoid()); // Draw Agent's Neural Network.
    }
    return neurons;
}
and to learn:
public void learn(Layer inputNeurons, Layer[] neurons, Layer desiredOutput)
{
    Layer hiddenNeurons = neurons[0];
    Layer outputNeurons = neurons[1];
    // Backpropagation: gradients w.r.t. the output and hidden biases/weights.
    Layer dBiasO = (outputNeurons.subtraction(desiredOutput)).valueMultiplication(2);
    Layer dBiasH = (dBiasO.dotProduct(weightsOutputNeurons.transpose())).layerMultiplication((inputNeurons.dotProduct(weightsHiddenNeurons).addition(biasesHiddenNeurons)).sigmoidDerivative());
    Layer dWeightO = (hiddenNeurons.transpose()).dotProduct(dBiasO);
    Layer dWeightH = (inputNeurons.transpose()).dotProduct(dBiasH);
    // Set new values for Weights and Biases
    weightsHiddenNeurons = weightsHiddenNeurons.subtraction(dWeightH.valueMultiplication(learningRate));
    biasesHiddenNeurons = biasesHiddenNeurons.subtraction(dBiasH.valueMultiplication(learningRate));
    weightsOutputNeurons = weightsOutputNeurons.subtraction(dWeightO.valueMultiplication(learningRate));
    biasesOutputNeurons = biasesOutputNeurons.subtraction(dBiasO.valueMultiplication(learningRate));
}
Anyway, the whole project is available on GitHub, where the code is better commented: https://github.com/Nyphet/Frozen-Lake-DQL
What am I doing wrong in setting up the input? How can I achieve "learning the game" instead of "learning the map"?
Thanks in advance.
I'm creating a game where you pick a nation and have to manage it, but I can't find a way to load the map without the program crashing from the sheer amount of computation (lack of performance).
I made an algorithm that loops through every pixel of an image containing the provinces (the spatial unit in the game) of the map. Each province has its own color; this way, when I encounter a color I haven't seen yet, I know it's a new province, and I can therefore load a new Province() instance with the information from a file.
Everything described above works just fine and takes almost no time at all, but to update the map when nations attack each other I need a way to render every province individually, so that I can give it its nation's color with a shader.
I've added this piece of code that takes the current pixel position and scales it down to OpenGL coordinates, saving it in an ArrayList (currVertices); this is then put into another ArrayList of float[] (provinceVertices) once a new province is found.
(I know the code is not beautiful and I'm not an expert programmer (also I'm 14), so please try to be kind when telling me what I did wrong.
I've tried storing a vertex only every 4 pixels to make the list smaller, but it still crashes.)
List<Float> currVertices = new ArrayList<Float>(); // the vertices of the current province
for (int y = 0; y < worldImage.getHeight(); y++) {
    for (int x = 0; x < worldImage.getWidth(); x++) {
        if (!currColors.contains(worldImage.getRGB(x, y))) {
            if (!currVertices.isEmpty())
                provinceVertices.add(Utils.toFloatArray(currVertices)); // store the current province's vertices into the total database
            currVertices.clear();
        }
        if (x % 4 == 0)
            currVertices.add((float) (x) / EngineManager.getWindowWidth());
        if (y % 4 == 0)
            currVertices.add((float) (y) / EngineManager.getWindowHeight());
    }
}
I've only included the code for loading the vertices.
public static float[] toFloatArray(List<Float> list) {
    float[] array = new float[list.size()];
    ListIterator<Float> iterator = list.listIterator();
    while (iterator.hasNext()) {
        array[iterator.nextIndex()] = list.get(iterator.nextIndex());
    }
    return array;
}
The goal is for the second ArrayList to hold all the vertices in the right order, but when I try to add the currVertices to provinceVertices the game just crashes with no error message, which is why I'm guessing the problem is performance-related.
(The vertices load fine into the currVertices list.)
Calling nextIndex() doesn't advance the iterator, so hasNext() stays true forever and your while loop never terminates. Use this instead:
while (iterator.hasNext()) {
    array[iterator.nextIndex()] = iterator.next();
}
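Alternatively, a plain indexed loop avoids the iterator bookkeeping entirely; this is an equivalent rewrite of your helper, assuming nothing beyond its original signature:
public static float[] toFloatArray(List<Float> list) {
    float[] array = new float[list.size()];
    for (int i = 0; i < array.length; i++) {
        array[i] = list.get(i); // O(1) random access for ArrayList
    }
    return array;
}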
I work with the camera2 API and I need to get the focal length property.
I found one way to get it from the camera characteristics:
float[] f = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
for (float d : f) {
    Logger.logGeneral("LENS_INFO_AVAILABLE_FOCAL_LENGTHS : " + d);
}
but this approach returns a value like 3.8 or something close to it, depending on the device. The value should be approximately 30-35...
Then I found another approach. When I take a photo and open the photo's properties, I see exactly the value I need.
So I tried to get this property directly from the image:
private JSONObject getJsonProperties() {
    JSONObject properties = new JSONObject();
    final ExifInterface exif = getExifInterface();
    final String[] tagProperties = {TAG_DATETIME, TAG_DATETIME_DIGITIZED, TAG_EXPOSURE_TIME,
            TAG_FLASH, TAG_FOCAL_LENGTH, TAG_GPS_ALTITUDE, TAG_GPS_ALTITUDE_REF, TAG_GPS_DATESTAMP,
            TAG_GPS_LATITUDE, TAG_GPS_LATITUDE_REF, TAG_GPS_LONGITUDE, TAG_GPS_LONGITUDE_REF,
            TAG_GPS_PROCESSING_METHOD, TAG_GPS_TIMESTAMP, TAG_IMAGE_LENGTH, TAG_IMAGE_WIDTH, TAG_ISO, TAG_MAKE,
            TAG_MODEL, TAG_ORIENTATION, TAG_SUBSEC_TIME, TAG_SUBSEC_TIME_DIG, TAG_SUBSEC_TIME_ORIG, TAG_WHITE_BALANCE};
    for (String tag : tagProperties) {
        try {
            properties.put(tag, exif.getAttribute(tag));
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
    return properties;
}

private ExifInterface getExifInterface() {
    ExifInterface exif = null;
    try {
        exif = new ExifInterface(ImageSaver.getImageFilePath());
    } catch (IOException e) {
        e.printStackTrace();
    }
    return exif;
}
I combine the entire set of properties and of course include the one I really need, TAG_FOCAL_LENGTH. But again I get a strange value like 473/100... I don't know what that means... The value should be approximately 30-35...
What am I doing wrong?
Eventually I found the answer.
According to @Aleks G:
The 35mm equivalent would be your obtained focal length multiplied by a certain number, known as crop factor. This factor is very much different for different manufacturers and even models. For example, for many Nikon DSLR cameras it's 1.5; for many Canon DSLRs it's 1.6; for Apple iPhone 5 it's 6.99, for Samsung Galaxy S3 it's 7.6, and so on.
To the best of my knowledge, Android does not provide an API to determine the crop factor of the device's camera.
There's one trick you can try using though. Once you take a photo with the device's camera, some devices will populate FocalLengthIn35mmFilm exif tag - so you can try using this technique. Not all phones do this though, for example, my Huawei doesn't.
Some other devices will populate two fields for Focal Length - the first being the actual focal length used, the second being the 35mm equivalent. Again, not all do. The same Huawei phone doesn't, but my iPhone 5 does.
To make the long story short, there's no guaranteed way of figuring out the 35mm equivalent from the focal length you obtain on the device.
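To connect this to the numbers above: an EXIF focal length of 473/100 is a rational value, i.e. an actual focal length of 4.73 mm. With a phone-typical crop factor of around 7 (compare the iPhone 5 and Galaxy S3 figures quoted above), that works out to roughly 33 mm in 35mm-equivalent terms, which is exactly the 30-35 range I was expecting.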
In the end I just added this tag to my list of properties:
TAG_FOCAL_LENGTH_IN_35MM_FILM
Now, whenever the device populates it, I can read it.
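A minimal sketch of that read, reusing the getExifInterface() helper from above (TAG_FOCAL_LENGTH_IN_35MM_FILM is a standard ExifInterface constant on newer API levels, but not every device writes it):
// Read the 35mm-equivalent focal length, if the device wrote it
final ExifInterface exif = getExifInterface();
final String focal35 = exif.getAttribute(ExifInterface.TAG_FOCAL_LENGTH_IN_35MM_FILM);
if (focal35 != null) {
    Logger.logGeneral("Focal length (35mm equivalent): " + focal35);
} else {
    Logger.logGeneral("35mm-equivalent focal length not available on this device");
}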
I was playing around with OpenCV and decided to test out Tutorialspoint's example of a Robinson mask. I copied the code and used a grayscale JPG.
- Unfortunately the output image was completely black.
- I tried commenting out what appear to be two additional directional filters. The image still came out black.
- I'm using Java 1.8 with OpenCV 3.
try {
    int kernelSize = 9;
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0,0,-1);
            put(0,1,0);
            put(0,2,1);
            put(1,0,-2);
            put(1,1,0);
            put(1,2,2);
            put(2,0,-1);
            put(2,1,0);
            put(2,2,1);
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
The code that you linked to is a bit flawed. It declares the kernel size to be 9 x 9, but the kernel itself is clearly 3 x 3. As such, it puts the kernel coefficients in the top-left corner of the kernel and the rest of the kernel is 0. This is probably the reason why you're not seeing the right results. The put method puts a number at the given row and column of the matrix. As you can see in the code that defines the kernel, it puts things in rows 0, 1, 2 and columns 0, 1, 2 - which is implicitly a 3 x 3 kernel, even though the declared size of the kernel is actually 9 x 9.
As such, please uncomment the lines you commented out, as it's important to define the entire edge detection mask properly. Also, the post is wrong about which edge detection mask it's using: that's actually the Sobel operator. I've never heard of a mask called "Robinson" before, but I have heard of a Roberts-Cross mask, which is a pair of 2 x 2 kernels that look like this:
Gx = [ +1  0 ; 0  -1 ],  Gy = [ 0  +1 ; -1  0 ]
Source: Wikipedia
Therefore, the simplest fix is to change the kernel size so that it's 3. Simply change this:
int kernelSize = 9;
To this:
int kernelSize = 3;
For a broader picture:
try {
    int kernelSize = 3; // Change
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0,0,-1);
            put(0,1,0);
            put(0,2,1);
            put(1,0,-2);
            put(1,1,0);
            put(1,2,2);
            put(2,0,-1);
            put(2,1,0);
            put(2,2,1); // Leave it this way - don't uncomment
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
Moral of this story: let this be a lesson about tutorials you find online. Don't trust all of them, as they sometimes give you wrong information, such as what you experienced just now with the wrong kernel size and the mislabelled edge detector. I'd certainly use them as a good starting point, but when it comes to the nitty-gritty details, always debug posted code to make sure that what the authors intended to write is actually what is produced.
I'm having a horrible time coming up with a good question Title... sorry/please edit if your brain is less shot than mine.
I am having some issues handling my game's maps client side. My game is tile based, using 32x32 pixel tiles. My first game map was 1750 x 1750 tiles. I had a bunch of layers client side, but managed to cut it down to 2 (ground and buildings). I was previously loading each entire map layer into memory (short arrays). When I jumped to 2200 x 2200 tiles I noticed an older PC having out-of-memory issues (1GB+). I wish there were a data type between byte and short (I am aiming for ~1000 different tiles). My game supports multiple resolutions, so the player's visible space may show 23x17 tiles for an 800x600 resolution, all the way up to 45x29 tiles for 1440x1024 (plus) resolutions. I use Tiled to draw my maps and output the 2 layers into separate text files using a format similar to the following (0, 0, 2, 0, 3, 6, 0, 74, 2...), all on one line.
With the help of many SO questions and some research I came up with a map chunking strategy. Using the player's current coordinates as the center point, I load enough tiles for 5 times the size of the visible map (the largest would be 45*5 x 29*5 = 225x145 tiles). The player is always drawn in the center and the ground moves beneath him/her (when you walk east the ground moves west). The minimap shows one screen away in all directions, so it is three times the size of the visible map. Please see the (very scaled down) visual representation below, which explains this better than I likely did.
My issue is this: when the player moves 1/5th of the chunk size away from the original center point (the chunkX/Y coordinates), I have the game rescan the file. The new scan uses the player's current coordinates as its center point. The problem is that the rechunking takes about 0.5s on my PC (which is fairly high spec), so the map does not update for 1-2 tile moves.
To combat this I tried running the file scanning in a new thread (before hitting the 1/5th point) into a temporary array buffer. Once it finished scanning, I would copy the buffer into the real array and call repaint(). Occasionally I saw some skipping issues with this, which was no big deal. Worse, I sometimes saw it draw a random part of the map for 1-2 frames. Code sample below:
private void checkIfWithinAndPossiblyReloadChunkMap() {
    if (Math.abs(MyClient.characterX - MyClient.chunkX) + 10 > (MyClient.chunkWidth / 5)) { // arbitrary number away (10)
        Runnable myRunnable = new Runnable() {
            public void run() {
                logger.info("FillMapChunkBuffer started.");
                short chunkXBuffer = MyClient.characterX;
                short chunkYBuffer = MyClient.characterY;
                int topLeftChunkIndex = MyClient.characterX - (MyClient.chunkWidth / 2) + ((MyClient.characterY - (MyClient.chunkHeight / 2)) * MyClient.mapWidth); // get top left coordinate of chunk
                int topRightChunkIndex = topLeftChunkIndex + MyClient.chunkWidth - 1; // top right coordinate of chunk
                int[] leftChunkSides = new int[MyClient.chunkHeight];
                int[] rightChunkSides = new int[MyClient.chunkHeight];
                for (int i = 0; i < MyClient.chunkHeight; i++) { // figure out the left and right index points for the chunk
                    leftChunkSides[i] = topLeftChunkIndex + (MyClient.mapWidth * i);
                    rightChunkSides[i] = topRightChunkIndex + (MyClient.mapWidth * i);
                }
                MyClient.groundLayerBuffer = MyClient.FillGroundBuffer(leftChunkSides, rightChunkSides);
                MyClient.buildingLayerBuffer = MyClient.FillBuildingBuffer(leftChunkSides, rightChunkSides);
                MyClient.groundLayer = MyClient.groundLayerBuffer;
                MyClient.buildingLayer = MyClient.buildingLayerBuffer;
                MyClient.chunkX = chunkXBuffer;
                MyClient.chunkY = chunkYBuffer;
                MyClient.gamePanel.repaint();
                logger.info("FillMapChunkBuffer done.");
            }
        };
        Thread thread = new Thread(myRunnable);
        thread.start();
    } else if (Math.abs(MyClient.characterY - MyClient.chunkY) + 10 > (MyClient.chunkHeight / 5)) { // arbitrary number away (10)
        // same code as above for Y
    }
}
public static short[] FillGroundBuffer(int[] leftChunkSides, int[] rightChunkSides) {
    try {
        return scanMapFile("res/images/tiles/MyFirstMap-ground-p.json", leftChunkSides, rightChunkSides);
    } catch (FileNotFoundException e) {
        logger.fatal("ReadMapFile(ground)", e);
        JOptionPane.showMessageDialog(theDesktop, getStringChecked("message_file_locks") + "\n\n" + e.getMessage(), getStringChecked("message_error"), JOptionPane.ERROR_MESSAGE);
        System.exit(1);
    }
    return null;
}
private static short[] scanMapFile(String path, int[] leftChunkSides, int[] rightChunkSides) throws FileNotFoundException {
    Scanner scanner = new Scanner(new File(path));
    scanner.useDelimiter(", ");
    int topLeftChunkIndex = leftChunkSides[0];
    int bottomRightChunkIndex = rightChunkSides[rightChunkSides.length - 1];
    short[] tmpMap = new short[chunkWidth * chunkHeight];
    int count = 0;
    int arrayIndex = 0;
    while (scanner.hasNext()) {
        if (count >= topLeftChunkIndex && count <= bottomRightChunkIndex) { // within or outside (east and west) of map chunk
            if (count == bottomRightChunkIndex) { // last entry
                tmpMap[arrayIndex] = scanner.nextShort();
                break;
            } else { // not last entry
                if (isInsideMapChunk(count, leftChunkSides, rightChunkSides)) {
                    tmpMap[arrayIndex] = scanner.nextShort();
                    arrayIndex++;
                } else {
                    scanner.nextShort();
                }
            }
        } else {
            scanner.nextShort();
        }
        count++;
    }
    scanner.close();
    return tmpMap;
}
I really am at my wits' end with this. I want to move past this GUI crap and work on real game mechanics. Any help would be tremendously appreciated. Sorry for the long post, but trust me, a lot of thought and sleepless nights have gone into this. I need the SO experts' ideas. Thanks so much!!
p.s. I came up with some potential optimization ideas (though I'm not sure these would solve the issue):
split the map files into multiple lines so I can call scanner.nextLine() once, rather than scanner.next() 2200 times
come up with a formula that, given the 4 corners of the map chunk, tells whether a given coordinate lies within it (see the sketch after this list); this would allow me to call scanner.nextLine() when at the farthest point of the chunk for a given line, and would require the multiline map file approach above
throw away only 1/5th of the chunk, shift the array, and load the next 1/5th of the chunk
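For idea 2, the containment test is just a couple of comparisons; here is a hedged sketch assuming an axis-aligned chunk rectangle in tile coordinates (the names are illustrative, not from my code):
// True if tile (tileX, tileY) lies inside the chunk whose top-left tile is
// (chunkLeft, chunkTop) and whose size is chunkWidth x chunkHeight.
static boolean isInsideChunk(int tileX, int tileY, int chunkLeft, int chunkTop, int chunkWidth, int chunkHeight) {
    return tileX >= chunkLeft && tileX < chunkLeft + chunkWidth
        && tileY >= chunkTop && tileY < chunkTop + chunkHeight;
}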
Make sure scanning the file has finished before starting a new scan.
Currently you'll start scanning again (possibly on every frame) while your centre is too far away from the previous scan centre. To fix this, record that a scan is already in progress before you even start it, and extend your far-away condition accordingly:
// MyClient.worker represents the currently running worker thread (if any)
if (far away condition && MyClient.worker == null) {
    Runnable myRunnable = new Runnable() {
        public void run() {
            logger.info("FillMapChunkBuffer started.");
            try {
                short chunkXBuffer = MyClient.nextChunkX;
                short chunkYBuffer = MyClient.nextChunkY;
                int topLeftChunkIndex = MyClient.characterX - (MyClient.chunkWidth / 2) + ((MyClient.characterY - (MyClient.chunkHeight / 2)) * MyClient.mapWidth); // get top left coordinate of chunk
                int topRightChunkIndex = topLeftChunkIndex + MyClient.chunkWidth - 1; // top right coordinate of chunk
                int[] leftChunkSides = new int[MyClient.chunkHeight];
                int[] rightChunkSides = new int[MyClient.chunkHeight];
                for (int i = 0; i < MyClient.chunkHeight; i++) { // figure out the left and right index points for the chunk
                    leftChunkSides[i] = topLeftChunkIndex + (MyClient.mapWidth * i);
                    rightChunkSides[i] = topRightChunkIndex + (MyClient.mapWidth * i);
                }
                // no reason for them to be a member of MyClient
                short[] groundLayerBuffer = MyClient.FillGroundBuffer(leftChunkSides, rightChunkSides);
                short[] buildingLayerBuffer = MyClient.FillBuildingBuffer(leftChunkSides, rightChunkSides);
                MyClient.groundLayer = groundLayerBuffer;
                MyClient.buildingLayer = buildingLayerBuffer;
                MyClient.chunkX = chunkXBuffer;
                MyClient.chunkY = chunkYBuffer;
                MyClient.gamePanel.repaint();
                logger.info("FillMapChunkBuffer done.");
            } finally {
                // in any case clear the worker thread
                MyClient.worker = null;
            }
        }
    };
    // remember that we're currently scanning by remembering the worker directly
    MyClient.worker = new Thread(myRunnable);
    // start worker
    MyClient.worker.start();
}
Preventing a rescan before the previous rescan has completed presents another challenge: what to do if you walk diagonally, i.e. you meet the far-away condition in x, start scanning, and during that scan you meet the condition for y being far away as well. Since you choose the next scan centre according to your current position, this problem should not arise as long as your chunk size is large enough.
Remembering the worker directly comes with a bonus: what do you do if you need to teleport the player/camera at some point while you are scanning? You can now simply terminate the worker thread and start scanning at the new location. You'll have to check a termination flag manually in MyClient.FillGroundBuffer and MyClient.FillBuildingBuffer, reject the (partially computed) result in the Runnable, and skip the reset of MyClient.worker in case of an abort.
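A minimal sketch of that cancellation pattern (the names are illustrative, not from your code; the flag would be polled inside your scan loops):
// Cooperative cancellation: the worker polls a volatile flag while scanning.
static volatile boolean scanAborted = false;

static short[] scanChunkCancellable(Scanner scanner, int valuesToRead) {
    short[] tmp = new short[valuesToRead];
    for (int i = 0; i < valuesToRead && scanner.hasNext(); i++) {
        if (scanAborted) {
            return null; // abort: the caller discards the partial result
        }
        tmp[i] = scanner.nextShort();
    }
    return tmp;
}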
If you need to stream more data from the file system in your game, consider implementing a streaming service (extend the worker idea to one that processes arbitrary file-parsing jobs). You should also check whether your hard drive can read from multiple files concurrently faster than it reads a single stream from a single file.
Turning to a binary file format is an option, but won't save much in terms of file size. And since Scanner already uses an internal buffer to do its parsing (parsing integers from a buffer is faster than filling a buffer from a file), you should first focus on getting your worker running optimally.
Try to speed up the reading by using a binary file instead of a CSV file.
Use DataInputStream and readShort() for that. (This will also cut down the size of the map.)
You can also split the map into chunks of 32x32 tiles and save them in several files,
so you don't have to reload tiles that are already loaded.
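A minimal sketch of that binary format (the file layout - raw big-endian shorts in row-major order - and the method names are illustrative assumptions, not your existing code):
// One-time conversion of a tile array to a binary file, plus the matching reader.
static void writeBinaryMap(short[] tiles, String path) throws IOException {
    try (DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(path)))) {
        for (short t : tiles) {
            out.writeShort(t); // 2 bytes per tile, big-endian
        }
    }
}

static short[] readBinaryMap(String path, int tileCount) throws IOException {
    short[] tiles = new short[tileCount];
    try (DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(path)))) {
        for (int i = 0; i < tileCount; i++) {
            tiles[i] = in.readShort();
        }
    }
    return tiles;
}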