I was looking at the wordCount example from Apache Beam
and when I tried to run this example locally, it wrote the counts into multiple files. I created a test project to read and write data from a file, and even that write operation split the output into multiple files. How do I get the result in just a single file? I am using the direct runner.
That is happening for performance reasons. You should be able to force a single file by using TextIO.Write.withoutSharding:
withoutSharding
public TextIO.Write withoutSharding()
Forces a single file as output and empty shard name template. This
option is only compatible with unwindowed writes.
For unwindowed writes, constraining the number of shards is likely to
reduce the performance of a pipeline. Setting this value is not
recommended unless you require a specific number of output files.
This is equivalent to .withNumShards(1).withShardNameTemplate("")
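A minimal sketch of how this looks in a pipeline, assuming a recent Beam 2.x Java SDK (where the chain is spelled TextIO.write().to(...); in older SDKs it was TextIO.Write.to(...)). The input/output paths here are placeholders:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SingleFileWrite {
    public static void main(String[] args) {
        PipelineOptions opts = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(opts);

        p.apply(TextIO.read().from("counts-input.txt"))       // placeholder input
         .apply(TextIO.write().to("counts")                   // placeholder output prefix
                 .withoutSharding());                         // exactly one file, no shard suffix

        p.run().waitUntilFinish();
    }
}
```

With the direct runner and an unwindowed pipeline, this produces a single file named "counts" instead of counts-00000-of-00003 etc.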
Related
I am new to Hadoop and have been given the task of migrating structured data to HDFS using Java code. I know the same can be accomplished with Sqoop, but that is not my task.
Can someone please explain a possible way to do this?
I did attempt it: I copied data from a PostgreSQL server using the JDBC driver and then stored it in CSV format in HDFS. Is this the right way to go about this?
I have read that Hadoop has its own data types for storing structured data. Can you please explain how that works?
Thank you.
Thank you.
The state of the art is using Sqoop (pull ETL) as a regular batch process to fetch the data from the RDBMS. However, this way of doing things is resource-consuming for the RDBMS (Sqoop often runs multiple threads with multiple JDBC connections), takes a long time (you often run sequential fetches against the RDBMS), and can lead to data inconsistencies (the live RDBMS is updated while this long Sqoop process is always running behind).
An alternative paradigm (push ETL) exists and is maturing. The idea is to build change data capture streams that listen to the RDBMS. An example project is Debezium. You can then build a real-time ETL pipeline that synchronizes the RDBMS with the data warehouse on Hadoop or elsewhere.
Sqoop is a simple tool which performs the following:
1) Connects to the RDBMS (PostgreSQL), gets the metadata of the table, and creates a POJO (a Java class) for the table.
2) Uses that Java class to import and export data through a MapReduce program.
If you need to write plain Java code (where you need to control the parallelism yourself for performance),
do the following:
1) Create a Java class which connects to the RDBMS using JDBC.
2) Create a Java class which accepts an input String (taken from the ResultSet) and writes it to a file on HDFS.
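A hypothetical sketch of those two steps: read rows over JDBC and write them out as CSV lines. The JDBC URL, credentials, table name and output path are all placeholders, and this writes to a local file; writing to HDFS instead would go through org.apache.hadoop.fs.FileSystem. The CSV handling is deliberately naive (no quoting or escaping):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcToCsv {
    // Exports every row of `table` as one comma-separated line in `outFile`.
    public static void export(String jdbcUrl, String user, String pass,
                              String table, String outFile) throws Exception {
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, pass);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table);
             BufferedWriter out = new BufferedWriter(new FileWriter(outFile))) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                StringBuilder line = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) line.append(',');
                    line.append(rs.getString(i)); // naive CSV: no quoting/escaping
                }
                out.write(line.toString());
                out.newLine();
            }
        }
    }
}
```

Usage would be something like JdbcToCsv.export("jdbc:postgresql://host/db", "user", "pass", "mytable", "mytable.csv").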
Another way of doing this:
Create a MapReduce job using DBInputFormat, pass the number of input splits, and use TextOutputFormat with an HDFS output directory.
You are better off using Sqoop, because you may end up doing exactly what Sqoop already does if you go down the path of building it yourself.
Either way, conceptually, you will need a custom mapper with a custom input format capable of reading partitioned data from the source. In this case, a table column on which the data can be partitioned is required to exploit parallelism. A partitioned source table would be ideal.
DBInputFormat doesn't optimise the calls on the source database. The complete dataset is sliced into the configured number of splits by the InputFormat.
Each mapper executes the same query and loads only the portion of the data corresponding to its split. This results in each mapper issuing the same query, along with a sort of the dataset, just so it can pick out its own portion of the data.
This class doesn't seem to take advantage of a partitioned source table. You could extend it to handle partitioned tables more efficiently.
Hadoop has structured file formats such as Avro, ORC and Parquet to begin with.
If your data doesn't need to be stored in a columnar format (used primarily for OLAP use cases where only a few columns out of a large set need to be selected), go with Avro.
The way you are trying to do this is not a good one, because you are going to waste a lot of time developing and testing the code. Instead, use Sqoop to import the data from any RDBMS into Hive. The first tool that should come to mind here is Sqoop (SQL-to-Hadoop).
I want to store some Java/Scala objects as records in Parquet format, and I'm currently using parquet-avro and the AvroParquetWriter class for this purpose. This works fine, but it is very coupled to Hadoop and its file system implementation(s). Instead, I would like to somehow get the raw binary data of the files (preferably, but not necessarily, in a streaming fashion) and handle the writing of the files "manually", due to the nature of the framework I'm integrating with. Has anyone been able to achieve something like this?
I have an .exe file (I don't have the source files, so I won't be able to edit the program) that takes as a parameter the path to the file to be processed and, at the end, produces results. For example, in the console I run this program as follows: program.exe -file file_to_process [other_parameters]. I also have an executable jar file which takes two parameters, file_to_process and a second file, plus [other_parameters]. In both cases I would like to split the input file into small parts and run the programs in parallel. Is there any way to do this efficiently with the Apache Spark Java framework? I'm new to parallel computation, and I have read about RDDs and the pipe operator, but I don't know if they would be a good fit in my case, because I have a path to a file.
I will be very grateful for some help or tips.
I ran into similar issues recently, and I have working code with Spark 2.1.0. The basic idea is that you put your exe with its dependencies (such as DLLs) into HDFS or your local filesystem and use addFile to register them from the driver, which also copies them to the worker executors. Then you load your input file as an RDD and use the mapPartitionsWithIndex function to save each partition to a local file and execute the exe against that partition using Process (use SparkFiles.get to get the exe's path on the worker executor).
Hope that helps.
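A hypothetical sketch of that approach with the Spark 2.x Java API. The exe path, HDFS input path and output handling are placeholders, and error handling is omitted:

```java
import java.io.File;
import java.io.PrintWriter;
import java.util.Collections;

import org.apache.spark.SparkFiles;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class RunExePerPartition {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("exe-per-partition").getOrCreate();
        JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());
        sc.addFile("hdfs:///tools/program.exe");           // shipped to every executor

        JavaRDD<String> lines = sc.textFile("hdfs:///data/input.txt");
        lines.mapPartitionsWithIndex((idx, it) -> {
            // 1) dump this partition to a local file on the executor
            File part = File.createTempFile("part-" + idx + "-", ".txt");
            try (PrintWriter w = new PrintWriter(part)) {
                while (it.hasNext()) w.println(it.next());
            }
            // 2) run the external program against that local file
            Process proc = new ProcessBuilder(SparkFiles.get("program.exe"),
                    "-file", part.getAbsolutePath()).inheritIO().start();
            proc.waitFor();
            return Collections.singletonList("partition " + idx + " done").iterator();
        }, false).collect();

        spark.stop();
    }
}
```

Collecting the exe's actual result files back would need additional logic (e.g. writing them to HDFS from inside the lambda), which is omitted here.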
I think the general answer is "no". Spark is a framework, and in general it provides very specific mechanisms for cluster configuration, shuffling its own data, reading big inputs (based on HDFS), monitoring task completion, retrying, and performing efficient computation. It is not well suited to a case where you have a program you can't touch that expects a file from the local filesystem.
I guess you could put your inputs on HDFS, then, since Spark accepts arbitrary Java/Scala code, use whatever language facilities you have to dump to a local file, launch a process, and then build some complex logic to monitor for completion (maybe based on the content of the output). The mapPartitions() Spark method would be the one best suited for this.
That said, I would not recommend it. It will be ugly and complex, it will require you to mess with permissions on the nodes and things like that, and it will not take good advantage of Spark's strengths.
Spark is well suited to your problem, though, especially if each line of your file can be processed independently. I would look to see if there is a way to get the program's code, a library that does the same thing, or whether the algorithm is trivial enough to re-implement.
Probably not the answer you were looking for though :-(
I want to get better performance for data processing using Hadoop MapReduce. So, do I need to use it together with HDFS? Or can MapReduce be used with other types of distributed storage? Show me the way, please.
Hadoop is a framework which includes the MapReduce programming model for computation and HDFS for storage.
HDFS stands for Hadoop Distributed File System, which is inspired by the Google File System. The overall Hadoop project is based on research papers published by Google:
research.google.com/archive/mapreduce-osdi04.pdf
http://research.google.com/archive/mapreduce.html
Using the MapReduce programming model, data is computed in parallel on different nodes across the cluster, which decreases the processing time.
You need to use HDFS or HBase to store your data in the cluster to get high performance. If you choose a normal file system, there will not be much difference. Once the data is in the distributed file system, it is automatically divided into blocks and replicated (3 times by default) to provide fault tolerance. None of this is possible with a normal file system.
Hope this helps!
First, your premise is wrong. The performance of Hadoop MapReduce is not directly related to the performance of HDFS. MapReduce is considered slow because of its architecture:
It processes data with Java. Each separate mapper and reducer is a separate JVM instance, which needs to be started, and that takes some time.
It puts intermediate data on disk many times. At a minimum, mappers write their results (one), reducers read and merge them, writing the result set to disk (two), and reducer results are written back to your filesystem, usually HDFS (three). You can find more details on the process here: http://0x0fff.com/hadoop-mapreduce-comprehensive-description/.
Second, Hadoop is an open framework and it supports many different filesystems. You can read data from FTP, S3, the local filesystem (an NFS share, for instance), MapR-FS, IBM GPFS, GlusterFS by Red Hat, etc. So you are free to choose the one you like. The main idea in MapReduce is to specify an InputFormat and an OutputFormat that can work with your filesystem.
Spark is currently considered a faster replacement for Hadoop MapReduce, as it keeps much of the computation in memory. But whether to use it really depends on your case.
I am facing a problem for which I don't have a clean solution. I am writing a Java application which stores certain data in a limited set of files. We are not using any database, just plain files. Due to some user-triggered action, certain files need to be changed. I need this to be an all-or-nothing operation: either all files are updated, or none of them are. It would be disastrous if, for example, 2 of the 5 files were changed while the other 3 were not, due to some IOException.
What is the best strategy to accomplish this?
Is embedding an in-memory database, such as HSQLDB, a good way to get this kind of atomicity/transactional behavior?
Thanks a lot!
A safe approach IMO is:
Back up the files
Maintain a list of processed files
On exception, restore the ones that have been processed from the backed-up copies.
Whether this is practical depends on how heavy the operation is going to be and on your time limits.
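A minimal sketch of that backup-then-restore strategy, using only java.nio.file (the ".bak" naming convention is an assumption of this sketch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BackupRestore {
    // Updates each file; on any IO failure, restores every file already touched
    // from its ".bak" copy, so the set of files stays consistent.
    public static void updateAll(List<Path> files, Map<Path, byte[]> newContents)
            throws IOException {
        List<Path> processed = new ArrayList<>();
        try {
            for (Path f : files) {
                Files.copy(f, backupOf(f), StandardCopyOption.REPLACE_EXISTING); // backup first
                processed.add(f); // rollback covers this file even if the write fails midway
                Files.write(f, newContents.get(f));
            }
        } catch (IOException e) {
            for (Path f : processed) { // roll back the files we already changed
                Files.copy(backupOf(f), f, StandardCopyOption.REPLACE_EXISTING);
            }
            throw e;
        } finally {
            for (Path f : processed) {
                Files.deleteIfExists(backupOf(f)); // clean up backups either way
            }
        }
    }

    private static Path backupOf(Path f) {
        return f.resolveSibling(f.getFileName() + ".bak");
    }
}
```

Note this is still not crash-safe: if the JVM dies mid-rollback, you must detect leftover ".bak" files on startup and finish the restore then.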
What is the best strategy to accomplish this? Is embedding an in-memory database, such as HSQLDB, a good way to get this kind of atomicity/transactional behavior?
Yes. If you want transactional behavior, use a well-tested system that was designed with that in mind instead of trying to roll your own on top of an unreliable substrate.
File systems do not, in general, support transactions involving multiple files.
Most non-Windows file systems, as well as NTFS, have the property that you can do atomic file replacement, so if you can't use a database and
all of the files are under one reasonably small directory
which your application owns and
which is stored on one physical drive:
then you could do the following:
Copy the directory contents, using hard links as appropriate.
Modify the 5 files.
Atomically swap the modified copy of the directory with the original.
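The per-file atomic replacement primitive this relies on can be sketched with java.nio.file: write the new content to a temporary file in the same directory, then atomically move it over the original. This assumes, as above, that both paths live on the same physical drive/filesystem:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicReplace {
    // Replaces `target` with `newContent` so that readers see either the old
    // content or the new content, never a half-written file.
    public static void replace(Path target, byte[] newContent) throws IOException {
        // Temp file must be in the same directory so the move stays on one filesystem.
        Path tmp = Files.createTempFile(target.getParent(),
                target.getFileName().toString(), ".tmp");
        Files.write(tmp, newContent);
        Files.move(tmp, target,
                StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
    }
}
```

On platforms where ATOMIC_MOVE is not supported for the given paths, Files.move throws AtomicMoveNotSupportedException rather than falling back silently, which is what you want here.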
I've used the Apache Commons Transaction library for atomic file operations with success. It allows you to modify files transactionally and roll back on failures.
Here's a link: http://commons.apache.org/transaction/
My approach would be to use a lock in your Java code, so that only one process can write to a given file at a time. I'm assuming your application is the only one which writes the files.
If some write problem occurs even so, then to "roll back" your files you need to save a copy of them first, as suggested above.
Can't you lock all the files and only write to them once all files have been locked?
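A hypothetical sketch of that lock-all-then-write idea using FileChannel: acquire an exclusive OS-level lock on every file first, and only start writing once all locks are held. Note that FileLock guards against other processes, not other threads in the same JVM, and it does not make the writes themselves atomic on crash:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

public class LockAllThenWrite {
    public static void writeAll(Map<Path, byte[]> updates) throws IOException {
        Map<Path, FileChannel> channels = new LinkedHashMap<>();
        try {
            // Phase 1: open and lock every file before touching any content.
            for (Path p : updates.keySet()) {
                FileChannel ch = FileChannel.open(p, StandardOpenOption.WRITE);
                ch.lock(); // blocks until an exclusive lock is granted; released on close
                channels.put(p, ch);
            }
            // Phase 2: all locks held, perform the writes.
            for (Map.Entry<Path, byte[]> e : updates.entrySet()) {
                FileChannel ch = channels.get(e.getKey());
                ch.truncate(0);
                ch.write(ByteBuffer.wrap(e.getValue()));
            }
        } finally {
            for (FileChannel ch : channels.values()) ch.close(); // releases the locks
        }
    }
}
```

If the files can be locked in different orders by different processes, lock them in a fixed (e.g. sorted) order to avoid deadlock.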