I'm new to Spark and the Hadoop ecosystem and have already fallen in love with it.
Right now, I'm trying to port an existing Java application over to Spark.
This Java application is structured the following way:
Read the file(s) one by one with a BufferedReader and a custom Parser class that does some heavy computing on the input data. The input files are between 1 and 2.5 GB each.
Store the data in memory (in a HashMap<String, TreeMap<DateTime, List<DataObjectInterface>>>)
Write out the in-memory datastore as JSON. These JSON files are smaller in size.
I wrote a Scala application that processes my files on a single worker, but that is obviously not the full performance benefit I can get out of Spark.
Now to my problem with porting this over to Spark:
The input files are line-based. I usually have one message per line. However, some messages depend on preceding lines to form an actual valid message in the Parser. For example it could happen that I get data in the following order in an input file:
{timestamp}#0x033#{data_bytes} \n
{timestamp}#0x034#{data_bytes} \n
{timestamp}#0x035#{data_bytes} \n
{timestamp}#0x0FE#{data_bytes}\n
{timestamp}#0x036#{data_bytes} \n
To form an actual message out of the "composition message" 0x036, the parser also needs the lines from messages 0x033, 0x034 and 0x035. Other messages can also appear in between this set of needed messages. Most messages, though, can be parsed by reading a single line.
Now finally my question:
How do I get Spark to split my files correctly for my purposes? The files cannot be split "randomly"; they must be split in a way that ensures all my messages can be parsed and the Parser never waits for input it will never get. This means that each composition message (a message that depends on preceding lines) needs to be contained in one split.
I guess there are several ways to achieve a correct output, but I'll throw some ideas I had into this post as well:
Define a manual split algorithm for the file input that checks that the last few lines of a split do not contain the start of a "big" message [0x033, 0x034, 0x035].
Split the file however Spark wants, but also add a fixed number of lines (let's say 50, which will certainly do the job) from the previous split to the next split. Duplicate data is handled correctly by the Parser class and would not introduce any issues.
The second way might be easier; however, I have no clue how to implement it in Spark. Can someone point me in the right direction?
Thanks in advance!
I saw your comment on my blogpost on http://blog.ae.be/ingesting-data-spark-using-custom-hadoop-fileinputformat/ and decided to give my input here.
First of all, I'm not entirely sure what you're trying to do. Help me out here: your file contains lines with the 0x033, 0x034, 0x035 and 0x036 messages, so Spark will process them separately, while these lines actually need to be processed together?
If this is the case, you shouldn't interpret this as a "corrupt split". As you can read in the blog post, Spark splits files into records that it can process separately. By default it does this by splitting records on newlines. In your case, however, your "record" is actually spread over multiple lines. So yes, you can use a custom FileInputFormat. I'm not sure this will be the easiest solution, however.
You can try to solve this using a custom FileInputFormat that does the following: instead of handing out the file line by line like the default FileInputFormat does, you parse the file and keep track of the records you encounter (0x033, 0x034, etc.). In the meantime you can filter out records like 0x0FE (not sure if you want to use them elsewhere). The result is that Spark gets all of these physical records as one logical record.
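A minimal sketch of what such an input format could look like with the "new" mapreduce API (the class name, and the rule that a 0x036 line closes a logical record, are assumptions taken from your example rather than anything Spark or Hadoop prescribe):

    import java.io.IOException;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    // Groups the physical lines of one composition message into a single logical record.
    public class MessageGroupInputFormat extends FileInputFormat<LongWritable, Text> {

        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            // simplest way to guarantee a group is never cut in half: one split per file
            return false;
        }

        @Override
        public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext ctx) {
            return new RecordReader<LongWritable, Text>() {
                private final LineRecordReader lines = new LineRecordReader();
                private LongWritable key;
                private final Text value = new Text();

                @Override
                public void initialize(InputSplit s, TaskAttemptContext c) throws IOException {
                    lines.initialize(s, c);
                }

                @Override
                public boolean nextKeyValue() throws IOException {
                    StringBuilder group = new StringBuilder();
                    boolean readAnything = false;
                    while (lines.nextKeyValue()) {
                        if (!readAnything) {
                            key = new LongWritable(lines.getCurrentKey().get());
                            readAnything = true;
                        }
                        String line = lines.getCurrentValue().toString();
                        group.append(line).append('\n');
                        if (line.contains("#0x036#")) {   // assumption: 0x036 closes a logical record
                            break;
                        }
                    }
                    if (!readAnything) {
                        return false;
                    }
                    value.set(group.toString());
                    return true;
                }

                @Override public LongWritable getCurrentKey() { return key; }
                @Override public Text getCurrentValue() { return value; }
                @Override public float getProgress() throws IOException { return lines.getProgress(); }
                @Override public void close() throws IOException { lines.close(); }
            };
        }
    }

You would then read the files with something like sc.newAPIHadoopFile(path, MessageGroupInputFormat.class, LongWritable.class, Text.class, new Configuration()) and hand each Text value to your existing Parser. Disabling splitting keeps every group intact; with files of at most ~2.5 GB you still get parallelism across files.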
On the other hand, it might be easier to read the file line by line and map the records using a functional key (e.g. [object 33, 0x033], [object 33, 0x034], ...). This way you can combine these lines using the key you chose.
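For that second route, a rough sketch with the Java Spark API could look like this; extractGroupKey and parseGroup are hypothetical helpers you would build on top of your protocol and your existing Parser:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class KeyedGroupingSketch {

        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext("local[*]", "message-parser");

            JavaRDD<String> lines = sc.textFile("hdfs:///input/messages.log");   // assumed path

            // tag every physical line with the key of the logical message it belongs to,
            // pull each group together, then run the Parser on the whole group
            JavaRDD<String> parsedJson = lines
                    .mapToPair(line -> new Tuple2<>(extractGroupKey(line), line))
                    .groupByKey()
                    .map(group -> parseGroup(group._2()));

            parsedJson.saveAsTextFile("hdfs:///output/json");                     // assumed path
            sc.stop();
        }

        // placeholder only: here lines are grouped by their timestamp prefix;
        // replace this with whatever identifies one logical message in your protocol
        private static String extractGroupKey(String line) {
            return line.substring(0, line.indexOf('#'));
        }

        // hypothetical: hand the collected lines of one group to your existing Parser
        private static String parseGroup(Iterable<String> groupLines) {
            return String.join("\n", groupLines);
        }
    }

Be aware that groupByKey shuffles every line and needs all lines of one group to end up on the same executor, which should be fine for message groups of a handful of lines.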
There are certainly other options. Whichever you choose depends on your use case.
I have an HTTP request in a thread group that reads from a single-column CSV file to get values to populate a parameter in the request URL.
Below is my configuration for these:
There are 30 values in the csv data file.
My goal is to have each thread start at the beginning of the file once it gets to the end, effectively infinitely looping through the data values until the scheduler duration expires.
However, what actually happens is that some requests try to use <EOF> as the value (see screenshot below) and therefore fail.
I have tried this, but that just stops at the 30th iteration, i.e. the end of the CSV data file.
I assume I have some config option(s) wrong, but I can't find anything online to suggest what they might be. Can anyone point me in the right direction (what should I be searching for?) or provide a solution?
Most probably it's a test data issue. Double-check your CSV file and make sure it doesn't contain empty lines; if there are any, remove them and your test should start working as expected.
For small files with only one column you can use the __StringFromFile() function instead - it's much easier to set up and use.
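For example, you could put ${__StringFromFile(values.csv)} directly where the parameter value goes (the file name here is made up); by default the function goes back to the start of the file when it reaches the end, which is the endless looping behaviour you're after.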
I am seeking some guidance, please, on how to structure a Spring Batch application to ingest a bunch of potentially large delimited files, each with a different format.
The requirements are clear:
select the files to ingest from an external source: there can be multiple releases of some files each day so the latest release must be picked
turn each line of each file into JSON by combining the delimited fields with the column names from the first line (which is skipped)
send each line of JSON to a RESTful API
We currently have one step that uses a MultiResourceItemReader, which processes the files in sequence. The files are input streams that time out.
Ideally I think we want to have
a step which identifies the files to ingest
a step which processes files in parallel
Thanks in advance.
This is a fun one. I'd implement a custom line tokenizer that extends DelimitedLineTokenizer and also implements LineCallbackHandler. I'd then configure your FlatFileItemReader to skip the first line (the list of column names) and pass that first line to your handler/tokenizer to set all your token names.
A custom FieldSetMapper would then receive a FieldSet with all your name/value pairs, which I'd just pass to the ItemProcessor. Your processor could then build your JSON strings and pass them off to your writer.
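Roughly, and only as a sketch (class and method names invented here, and comma-delimited files assumed; adjust the delimiter per file format):

    import org.springframework.batch.item.file.FlatFileItemReader;
    import org.springframework.batch.item.file.LineCallbackHandler;
    import org.springframework.batch.item.file.mapping.DefaultLineMapper;
    import org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper;
    import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
    import org.springframework.batch.item.file.transform.FieldSet;

    // the tokenizer doubles as the skipped-line callback, so the header row supplies the token names
    public class HeaderAwareTokenizer extends DelimitedLineTokenizer implements LineCallbackHandler {

        @Override
        public void handleLine(String headerLine) {
            // header row -> token names, so every FieldSet later carries name/value pairs
            setNames(headerLine.split(DELIMITER_COMMA));   // assumption: comma-delimited
        }

        // reader wiring for one file; the resource itself is set per file (e.g. from the partition context)
        public static FlatFileItemReader<FieldSet> buildReader() {
            HeaderAwareTokenizer tokenizer = new HeaderAwareTokenizer();

            DefaultLineMapper<FieldSet> lineMapper = new DefaultLineMapper<>();
            lineMapper.setLineTokenizer(tokenizer);
            lineMapper.setFieldSetMapper(new PassThroughFieldSetMapper());

            FlatFileItemReader<FieldSet> reader = new FlatFileItemReader<>();
            reader.setLinesToSkip(1);                   // skip the header row...
            reader.setSkippedLinesCallback(tokenizer);  // ...but hand it to the tokenizer first
            reader.setLineMapper(lineMapper);
            return reader;
        }
    }

The ItemProcessor then receives FieldSets whose getNames()/getValues() pairs can be turned straight into JSON strings for the writer (or for the REST call itself).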
Obviously, your job falls into the typical reader -> processor -> writer category, with the writer being optional in your case (if you don't wish to persist the JSON before sending it to the RESTful API), or you can make the call that sends the JSON to the REST service act as the writer, if writing is considered done once the response from the service has been received.
Anyway, you don't need a separate step to just know the file name. Make it part of application initialization code.
Strategies to parallelize your application are listed here.
You just said "a bunch of files". If the line counts of those files are similar, I would go with the partitioning approach (i.e. by implementing the Partitioner interface, I would hand each file over to a separate thread, and that thread would execute a step - reader -> processor -> writer). You wouldn't need the MultiResourceItemReader in this case but a simple single-file reader, since each file gets its own reader. See Partitioning.
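Spring Batch even ships a MultiResourcePartitioner that does exactly this per-file hand-off, so a sketch of the master step could look like the following (bean names and resolveLatestReleaseFiles() are invented for illustration):

    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.core.partition.support.MultiResourcePartitioner;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.io.Resource;
    import org.springframework.core.task.SimpleAsyncTaskExecutor;

    @Configuration
    public class PartitionedIngestConfig {

        private final StepBuilderFactory stepBuilderFactory;

        public PartitionedIngestConfig(StepBuilderFactory stepBuilderFactory) {
            this.stepBuilderFactory = stepBuilderFactory;
        }

        // master step: one partition (and hence one worker-step execution) per file
        @Bean
        public Step masterStep(Step workerStep) {
            MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
            partitioner.setResources(resolveLatestReleaseFiles());   // hypothetical helper

            return stepBuilderFactory.get("masterStep")
                    .partitioner("workerStep", partitioner)
                    .step(workerStep)                  // workerStep = reader -> processor -> writer
                    .gridSize(4)                       // hint only; one partition is created per file
                    .taskExecutor(new SimpleAsyncTaskExecutor())   // actual concurrency comes from here
                    .build();
        }

        // hypothetical: pick the latest release of each file from the external source
        private Resource[] resolveLatestReleaseFiles() {
            return new Resource[0];
        }
    }

Each partition's ExecutionContext then carries a fileName entry, which the worker step's reader can pick up via #{stepExecutionContext['fileName']}.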
If the line counts of those files vary a lot, i.e. one file is going to take hours while another finishes in a few minutes, you can continue using the MultiResourceItemReader but use the Multi-threaded Step approach to achieve parallelism. This is chunk-level parallelism, so you might have to make the reader thread-safe.
The Parallel Steps approach doesn't look suitable for your case, since your steps are not independent.
Hope it helps !!
I'm writing custom InputFormat (specifically, a subclass of org.apache.hadoop.mapred.FileInputFormat), OutputFormat, and SerDe for use with binary files to be read in through Apache Hive. Not all records within the binary file have the same size.
I'm finding that Hive's default InputFormat, CombineHiveInputFormat, is not delegating getSplits to my custom InputFormat's implementation, which causes all input files to be split on regular 128MB boundaries. The problem with this is that this split may be in the middle of a record, so all splits but the first are very likely to appear to have corrupt data.
I've already found a few workarounds, but I'm not pleased with any of them.
One workaround is to do:
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
When using HiveInputFormat over CombineHiveInputFormat, the call to getSplits is correctly delegated to my InputFormat and all is well. However, I want to make my InputFormat, OutputFormat, etc. easily available to other users, so I'd prefer not to have to go through this. Additionally, I'd like to be able to take advantage of combining splits if possible.
Yet another workaround is to create a StorageHandler. However, I'd prefer not to do this, since it makes all tables backed by the StorageHandler non-native (all reducers write out to one file, you cannot LOAD DATA into the table, and there are other niceties of native tables I'd like to preserve).
Finally, I could have my InputFormat implement CombineHiveInputFormat.AvoidSplitCombination to bypass most of CombineHiveInputFormat, but this is only available in Hive 1.0, and I'd like my code to work with earlier versions of Hive (at least back to 0.12).
I filed a ticket in the Hive bug tracker here, in case this behavior is unintentional: https://issues.apache.org/jira/browse/HIVE-9771
Has anyone written a custom FileInputFormat that overrides getSplits for use with Hive? Was there ever any trouble getting Hive to delegate the call to getSplits that you had to overcome?
Typically in this situation you leave the splits alone so that you get data locality for the blocks, and have your RecordReader understand how to start reading from the first record in the block (split) and read into the next block when the final record does not end exactly at the end of the split. This requires some remote reads, but that is normal and usually very minimal.
TextInputFormat/LineRecordReader does this - it uses newlines to delimit records, so naturally a record can span two blocks. It will skip ahead to the first full record in the split instead of starting at the first character, and for the last record it will read into the next block if necessary to get the complete data.
In the LineRecordReader source you can see where it starts the split by seeking past the current partial record, and where it ends the split by reading past the end of the current block.
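A rough outline of that pattern for a custom reader, using the old mapred API from your question; seekToNextRecordStart() and readOneRecord() are hypothetical hooks standing in for your record framing:

    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.mapred.FileSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RecordReader;

    public class BinaryRecordReader implements RecordReader<LongWritable, BytesWritable> {

        private final FSDataInputStream in;
        private final long start;
        private final long end;
        private long pos;

        public BinaryRecordReader(FileSplit split, JobConf conf) throws IOException {
            start = split.getStart();
            end = start + split.getLength();
            FileSystem fs = split.getPath().getFileSystem(conf);
            in = fs.open(split.getPath());
            in.seek(start);
            // rule 1: never start mid-record - unless we are at the start of the file,
            // skip forward to the first record that begins inside this split
            pos = (start == 0) ? start : seekToNextRecordStart(in, start);
        }

        @Override
        public boolean next(LongWritable key, BytesWritable value) throws IOException {
            // rule 2: a record that *starts* at or before `end` belongs to this split,
            // even if finishing it means reading into the next block (a small remote read)
            if (pos > end) {
                return false;
            }
            key.set(pos);
            pos = readOneRecord(in, pos, value);   // returns the position just after the record
            return true;
        }

        @Override public LongWritable createKey() { return new LongWritable(); }
        @Override public BytesWritable createValue() { return new BytesWritable(); }
        @Override public long getPos() { return pos; }
        @Override public void close() throws IOException { in.close(); }

        @Override
        public float getProgress() {
            return end == start ? 1.0f : Math.min(1.0f, (pos - start) / (float) (end - start));
        }

        // hypothetical: scan forward for your record's sync marker / length prefix
        private long seekToNextRecordStart(FSDataInputStream in, long from) throws IOException {
            return from;
        }

        // hypothetical: decode one variable-size record into `value`
        private long readOneRecord(FSDataInputStream in, long at, BytesWritable value) throws IOException {
            return at + 1;
        }
    }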
Hope that helps direct the design of your custom code.
I have a set of text files providing information that is parsed and analysed to build a model. Sometimes, the user of this model wants to know which part of a text file was used to generate a given model item.
For that I am thinking of keeping track of the range of line (or byte) ids so that I can read the appropriate part of the text when required.
My question is: does there exist any Java Reader able to read a file using a start and stop line (or byte) id, instead of reading the file from the beginning and counting the lines (bytes)?
Best regards
If you know the exact number of bytes that should be skipped, you can use the seek method of RandomAccessFile.
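For instance (file name and offsets invented for the example):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    public class ReadByteRange {
        public static void main(String[] args) throws IOException {
            long start = 1024;      // assumed: byte offset you stored for the model item
            int length = 256;       // assumed: length of the text part

            try (RandomAccessFile raf = new RandomAccessFile("model-source.txt", "r")) {
                byte[] buffer = new byte[length];
                raf.seek(start);            // jump straight to the stored offset
                raf.readFully(buffer);      // read exactly `length` bytes
                System.out.println(new String(buffer, StandardCharsets.UTF_8));
            }
        }
    }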
To read from a certain byte, use a SeekableByteChannel. Of course, there aren't any Readers able to start from a line id, because the positions of the line separators are unknown.
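The NIO equivalent of the same idea, again with invented offsets:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SeekableByteChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class ReadByteRangeNio {
        public static void main(String[] args) throws IOException {
            try (SeekableByteChannel channel =
                         Files.newByteChannel(Paths.get("model-source.txt"), StandardOpenOption.READ)) {
                ByteBuffer buffer = ByteBuffer.allocate(256);   // assumed length of the text part
                channel.position(1024);                         // assumed stored byte offset
                int read = channel.read(buffer);
                if (read > 0) {
                    System.out.println(new String(buffer.array(), 0, read, StandardCharsets.UTF_8));
                }
            }
        }
    }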
You can use InputStream.mark() and InputStream.skip() to navigate to a concrete position in the file.
But are you sure you really have to implement this yourself? Take a look at Lucene - an indexing library that will probably help you.
So say you have a file that is written in XML or some other markup language of that sort. Is it possible to rewrite just one line, rather than reading the entire file into a string, changing the line, and then having to write the whole string back to the file?
In general, no. File systems don't usually support the idea of inserting or modifying data in the middle of a file.
If your data file is in a fixed-size record format then you can edit a record without overwriting the rest of the file. For something like XML, you could in theory overwrite one value with a shorter one by inserting semantically-irrelevant whitespace, but you wouldn't be able to write a larger value.
In most cases it's simpler to just rewrite the whole file - either by reading and writing in a streaming fashion if you can (and if the file is too large to read into memory in one go) or just by loading the whole file into some in-memory data structure (e.g. XDocument), making the changes, and then saving the file again. (You may want to consider saving to a different file then moving the files around to avoid losing data if the save operation fails for some reason.)
If all of this ends up being too expensive, you should consider using multiple files (so each one is smaller) or a database.
If the line you want to replace is longer than the new line you want to replace it with, then it is possible, as long as it is acceptable to have some kind of padding (for example whitespace characters ' ') that will not affect your application.
If, on the other hand, the new content is larger than the content to be replaced, you will need to shift all the following data downwards, so you have to rewrite the file, or at least everything from the replaced line onwards.
Since you mention XML, it might be that you are approaching your problem in the wrong way. Could it be that what you need is to replace a specific XML node? In that case you might consider using DOM to read the XML into a hierarchy of nodes, adding/updating/removing nodes in there, and then writing the XML tree back to the file.
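A small sketch of that round trip with the standard JAXP/DOM APIs (element and file names made up):

    import java.io.File;

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;

    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class UpdateXmlNode {
        public static void main(String[] args) throws Exception {
            File file = new File("config.xml");                       // assumed file
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(file);

            NodeList nodes = doc.getElementsByTagName("timeout");     // assumed element
            if (nodes.getLength() > 0) {
                nodes.item(0).setTextContent("30");                   // change the value in memory
            }

            // write the whole tree back out - the file is still rewritten, just not by hand
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.transform(new DOMSource(doc), new StreamResult(file));
        }
    }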