Parsing JSON input in Hadoop (Java)

My input data is in HDFS. I am simply trying to do a word count, but with a slight difference.
The data is in JSON format.
So each line of data looks like:
{"author":"foo", "text": "hello"}
{"author":"foo123", "text": "hello world"}
{"author":"foo234", "text": "hello this world"}
I only want to do a word count of the words in the "text" part.
How do I do this?
I tried the following variant so far:
public static class TokenCounterMapper
        extends Mapper<Object, Text, Text, IntWritable> {

    private static final Log log = LogFactory.getLog(TokenCounterMapper.class);
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        try {
            JSONObject jsn = new JSONObject(value.toString());
            //StringTokenizer itr = new StringTokenizer(value.toString());
            String text = (String) jsn.get("text");
            log.info("Logging data");
            log.info(text);
            StringTokenizer itr = new StringTokenizer(text);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        } catch (JSONException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
But I am getting this error:
Error: java.lang.ClassNotFoundException: org.json.JSONException
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:865)
at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

It seems you forgot to embed the JSON library in your Hadoop job jar.
You can have a look here to see how to build your job jar with the library:
http://tikalk.com/build-your-first-hadoop-project-maven

There are several ways to use external jars with your map reduce code:
1. Include the referenced JAR in the lib subdirectory of the submittable JAR: the job will unpack the JAR from this lib subdirectory into the jobcache on the respective TaskTracker nodes and point your tasks to this directory to make the JAR available to your code. If the JARs are small, change often, and are job-specific, this is the preferred method. This is what @clement suggested in his answer.
2. Install the JAR on the cluster nodes. The easiest way is to place the JAR into the $HADOOP_HOME/lib directory, as everything from this directory is included when a Hadoop daemon starts. Note that a restart will be needed for this to take effect.
3. TaskTrackers will be using the external JAR, so you can provide it by modifying the HADOOP_TASKTRACKER_OPTS option in the hadoop-env.sh configuration file and making it point to the jar. The jar needs to be present at the same path on all the nodes where a TaskTracker runs.
4. Include the JAR in the "-libjars" command line option of the hadoop jar … command. The jar will be placed in the distributed cache and will be made available to all of the job's task attempts. Your map-reduce code must use GenericOptionsParser. For more details read this blog post.
Comparison:
Option 1 is a legacy method and is discouraged because it has a large negative performance cost.
Options 2 and 3 are fine for private clusters, but pretty lame practice in general, as you cannot expect end users to do that.
Option 4 is the most recommended; a minimal driver sketch follows below.
Read the original post from Cloudera for more detail.
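For option 4, the job has to be launched through ToolRunner so that GenericOptionsParser can strip -libjars before your own arguments are read. A minimal driver sketch, not from the question: the class name JsonWordCount and the paths are illustrative, and TokenCounterMapper is assumed to be the mapper from the question, visible to this class.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class JsonWordCount extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects whatever GenericOptionsParser consumed (-libjars, -D, ...)
        Job job = new Job(getConf(), "json wordcount");
        job.setJarByClass(JsonWordCount.class);
        job.setMapperClass(TokenCounterMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner runs GenericOptionsParser for you, which is what makes -libjars work
        System.exit(ToolRunner.run(new Configuration(), new JsonWordCount(), args));
    }
}

It would then be invoked along these lines (jar names and paths are placeholders):

hadoop jar wordcount.jar JsonWordCount -libjars /path/to/json.jar /input /output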

Related

cmu sphinx4 java - Runtime exception caused by FileNotFoundException

I have recently made a Java project with Sphinx4. I found this code online, and I slimmed it down to this to test if Sphinx4 was working:
public class App
{
    private static final String ACOUSTIC_MODEL =
            "resource:/edu/cmu/sphinx/models/en-us/en-us";
    private static final String DICTIONARY_PATH =
            "resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict";

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath(ACOUSTIC_MODEL);
        configuration.setDictionaryPath(DICTIONARY_PATH);
        configuration.setGrammarName("dialog");

        LiveSpeechRecognizer jsgfRecognizer =
                new LiveSpeechRecognizer(configuration);
        jsgfRecognizer.startRecognition(true);
        while (true) {
            String utterance = jsgfRecognizer.getResult().getHypothesis();
            if (utterance.startsWith("hello")) {
                System.out.println("Hello back!");
            }
            else if (utterance.startsWith("exit")) {
                break;
            }
        }
        jsgfRecognizer.stopRecognition();
    }
}
However, it gave me this error:
Exception in thread "main" java.lang.RuntimeException: Allocation of search manager resources failed
at edu.cmu.sphinx.decoder.search.WordPruningBreadthFirstSearchManager.allocate(WordPruningBreadthFirstSearchManager.java:247)
at edu.cmu.sphinx.decoder.AbstractDecoder.allocate(AbstractDecoder.java:103)
at edu.cmu.sphinx.recognizer.Recognizer.allocate(Recognizer.java:164)
at edu.cmu.sphinx.api.LiveSpeechRecognizer.startRecognition(LiveSpeechRecognizer.java:47)
at com.weebly.controllingyourcomputer.bartimaeus.App.main(App.java:27)
Caused by: java.io.FileNotFoundException:
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at java.net.URL.openStream(URL.java:1038)
at edu.cmu.sphinx.linguist.language.ngram.SimpleNGramModel.open(SimpleNGramModel.java:403)
at edu.cmu.sphinx.linguist.language.ngram.SimpleNGramModel.load(SimpleNGramModel.java:277)
at edu.cmu.sphinx.linguist.language.ngram.SimpleNGramModel.allocate(SimpleNGramModel.java:114)
at edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.allocate(LexTreeLinguist.java:334)
at edu.cmu.sphinx.decoder.search.WordPruningBreadthFirstSearchManager.allocate(WordPruningBreadthFirstSearchManager.java:243)
... 4 more
I thought it might be something about it not being able to find the paths for ACOUSTIC_MODEL or DICTIONARY_PATH, so I changed the resource: strings to things like %HOME%\\Downloads\\sphinx4-5prealpha-src\\sphinx4-5prealpha-src\\sphinx4-data\\src\\main\\resources\\edu\\cmu\\sphinx\\models\\en-us or paths with forward slashes or with C:\Users\Username\... but none of the paths worked. I know the paths exist because I copied and pasted them from the properties window of the actual resources.
So my question is: is it some of the code that I deleted from the original source code that is causing this error, is it something wrong with the paths, or is it entirely different?
EDIT
By the way, I am using Maven to build my project. I added the dependencies specified on the Sphinx4 website to my pom.xml, but it didn't work (it didn't recognize imports such as edu.com.sphinx.xxx), so I downloaded the JARs from the website they said to download them from and added them to my project's "Libraries" in my Java Build Path in Eclipse.
is it some of the code that I deleted from the original source code that
is causing this error
Yes, you deleted too much.
To recognize with grammar you need to make three calls:
configuration.setGrammarPath(GRAMMAR_PATH);
configuration.setGrammarName(GRAMMAR_NAME);
configuration.setUseGrammar(true);
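Putting this together with the question's code, a minimal sketch: the grammar path "resource:/grammars" and the file dialog.gram are illustrative, so point setGrammarPath at wherever your grammar actually lives.

Configuration configuration = new Configuration();
configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");

// the three grammar calls from above
configuration.setGrammarPath("resource:/grammars");   // directory that contains dialog.gram (illustrative)
configuration.setGrammarName("dialog");               // loads dialog.gram from that directory
configuration.setUseGrammar(true);

LiveSpeechRecognizer jsgfRecognizer = new LiveSpeechRecognizer(configuration);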

Executing a Sample Flink Program in Local Mode

I am trying to execute a sample program in Apache Flink in local mode.
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCountExample {
    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<String> text = env.fromElements(
                "Who's there?",
                "I think I hear them. Stand, ho! Who's there?");
        //DataSet<String> text1 = env.readTextFile(args[0]);

        DataSet<Tuple2<String, Integer>> wordCounts = text
                .flatMap(new LineSplitter())
                .groupBy(0)
                .sum(1);

        wordCounts.print();

        env.execute();
        env.execute("Word Count Example");
    }

    public static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.split(" ")) {
                out.collect(new Tuple2<String, Integer>(word, 1));
            }
        }
    }
}
It is giving me this exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/InputFormat
at WordCountExample.main(WordCountExample.java:10)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.InputFormat
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 1 more
What am I doing wrong?
I have used the correct jars also.
flink-java-0.9.0-milestone-1.jar
flink-clients-0.9.0-milestone-1.jar
flink-core-0.9.0-milestone-1.jar
Adding the three Flink Jar files as dependencies in your project is not enough because they have other transitive dependencies, for example on Hadoop.
The easiest way to get a working setup to develop (and locally execute) Flink programs is to follow the quickstart guide which uses a Maven archetype to configure a Maven project. This Maven project can be imported into your IDE.
NoClassDefFoundError extends LinkageError
Thrown if the Java Virtual Machine or a ClassLoader instance tries to
load in the definition of a class (as part of a normal method call or
as part of creating a new instance using the new expression) and no
definition of the class could be found. The searched-for class
definition existed when the currently executing class was compiled,
but the definition can no longer be found.
Your code/jar depends on Hadoop. Download a jar that provides org.apache.hadoop.mapreduce.InputFormat and add it to your classpath.
Firstly, the Flink jar files which you have included in your project are not enough; include all the jar files present in the lib folder under Flink's source folder.
Secondly, the lines
env.execute();
env.execute("Word Count Example");
are not required, since you are just printing your dataset to the console rather than writing the output to a file (.txt, .csv etc.). It is better to remove these lines; they sometimes throw errors when included but not needed (see the sketch after this answer).
Thirdly, while exporting the jar file for your Java project from your IDE, don't forget to select your 'Main' class.
Hopefully, after making the above changes, your code works.
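On the second point, a sketch of the trimmed main method, assuming Flink 0.9 as used in the question, where print() on a DataSet acts as a sink and triggers execution itself:

public static void main(String[] args) throws Exception {
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<String> text = env.fromElements(
            "Who's there?",
            "I think I hear them. Stand, ho! Who's there?");

    DataSet<Tuple2<String, Integer>> wordCounts = text
            .flatMap(new LineSplitter())
            .groupBy(0)
            .sum(1);

    // print() already executes the program, so there is no env.execute() afterwards
    wordCounts.print();
}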

Error using sphinx4 jars without Maven

I have a problem with the Sphinx4 API and I can't figure out why it doesn't work.
I am trying to write a small class that captures the user's voice and writes what is said to a file.
1) I created a new Java project in Eclipse.
2) I created the class TranscriberDemo.
3) I created a folder "file".
4) I copied the folder "en-us" and the files "cmudict-en-us.dict", "en-us.lm.dmp" and "10001-90210-01803.wav" into the folder "file".
5) I don't use Maven, so I just included the jar files "sphinx4-core-1.0-SNAPSHOT.jar" and "sphinx4-data-1.0-SNAPSHOT.jar".
you can download them here:
core: https://1fichier.com/?f3y6vqupdr
data: https://1fichier.com/?lpzz8jyerv
I know that the source code is available
here: https://github.com/erka/sphinx-java-api
or here: http://sourceforge.net/projects/cmusphinx/files/sphinx4
But I don't use maven so I can't compile them.
My class:
import java.io.InputStream;

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;
import edu.cmu.sphinx.result.WordResult;

public class TranscriberDemo
{
    public static void main(String[] args) throws Exception
    {
        System.out.println("Loading models...");
        Configuration configuration = new Configuration();

        // Load model from the jar
        configuration.setAcousticModelPath("file:en-us");
        configuration.setDictionaryPath("file:cmudict-en-us.dict");
        configuration.setLanguageModelPath("file:en-us.lm.dmp");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        InputStream stream = TranscriberDemo.class.getResourceAsStream("file:10001-90210-01803.wav");
        stream.skip(44);

        // Simple recognition with generic model
        recognizer.startRecognition(stream);
        SpeechResult result;
        while ((result = recognizer.getResult()) != null)
        {
            System.out.format("Hypothesis: %s\n", result.getHypothesis());
            System.out.println("List of recognized words and their times:");
            for (WordResult r : result.getWords())
            {
                System.out.println(r);
            }
            System.out.println("Best 3 hypothesis:");
            for (String s : result.getNbest(3))
                System.out.println(s);
        }
        recognizer.stopRecognition();
    }
}
My log:
Loading models...
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Function
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at edu.cmu.sphinx.util.props.ConfigurationManager.getPropertySheet(ConfigurationManager.java:91)
at edu.cmu.sphinx.util.props.ConfigurationManagerUtils.listAllsPropNames(ConfigurationManagerUtils.java:556)
at edu.cmu.sphinx.util.props.ConfigurationManagerUtils.setProperty(ConfigurationManagerUtils.java:609)
at edu.cmu.sphinx.api.Context.setLocalProperty(Context.java:198)
at edu.cmu.sphinx.api.Context.setAcousticModel(Context.java:88)
at edu.cmu.sphinx.api.Context.<init>(Context.java:61)
at edu.cmu.sphinx.api.Context.<init>(Context.java:44)
at edu.cmu.sphinx.api.AbstractSpeechRecognizer.<init>(AbstractSpeechRecognizer.java:37)
at edu.cmu.sphinx.api.StreamSpeechRecognizer.<init>(StreamSpeechRecognizer.java:35)
at TranscriberDemo.main(TranscriberDemo.java:27)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.Function
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more
Thanks for your help =)
There are multiple issues with your code and your actions:
3) I created a folder "file".
Not needed.
4) I copied the folder "en-us" and the files "cmudict-en-us.dict", "en-us.lm.dmp" and "10001-90210-01803.wav" into the folder "file".
Not needed; you already have the models as part of the sphinx4-data package.
5) I don't use Maven, so I just included the jar files "sphinx4-core-1.0-SNAPSHOT.jar" and "sphinx4-data-1.0-SNAPSHOT.jar".
This is very wrong, because you took outdated jars from an unauthorized location. The right place to download the jars is listed in the tutorial: http://oss.sonatype.org
https://oss.sonatype.org/service/local/repositories/snapshots/content/edu/cmu/sphinx/sphinx4-core/1.0-SNAPSHOT/sphinx4-core-1.0-20150223.210646-7.jar
https://oss.sonatype.org/service/local/repositories/snapshots/content/edu/cmu/sphinx/sphinx4-data/1.0-SNAPSHOT/sphinx4-data-1.0-20150223.210601-7.jar
You took malicious jars from some random website which might have a virus or rootkit in them.
here: https://github.com/erka/sphinx-java-api
This is a wrong link too. The correct link is http://github.com/cmusphinx/sphinx4
InputStream stream = TranscriberDemo.class.getResourceAsStream("file:10001-90210-01803.wav");
Here you use the file: URL scheme in a context where it does not apply; getResourceAsStream expects a classpath resource name, not a URL. If you want to create an InputStream from a file, do it like this:
InputStream stream = new FileInputStream(new File("10001-90210-01803.wav"));
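If you would rather load the audio from the classpath instead of the file system, getResourceAsStream takes a plain resource path rather than a file: URL. This assumes the wav file has actually been placed on the classpath (for example under src/main/resources):

InputStream stream = TranscriberDemo.class.getResourceAsStream("/10001-90210-01803.wav");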
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Function
This error is caused by the fact that you took jars from another place, and those jars need additional dependencies. When you see a NoClassDefFoundError, it means you need to add an additional jar to your classpath. With the official sphinx4 jars you should not see this error.
Solved.
In fact it was a silly mistake...
Thank you @Nikolay for your answer. I have already accepted your answer, but I will summarize the process here:
1) Download the sphinx4-core and sphinx4-data jars from https://oss.sonatype.org/#nexus-search;quick~sphinx4.
2) Include them in your project.
3) Test your code.
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.LiveSpeechRecognizer;
import edu.cmu.sphinx.api.SpeechResult;

public class SpeechToText
{
    public static void main(String[] args) throws Exception
    {
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.dmp");

        LiveSpeechRecognizer recognizer = new LiveSpeechRecognizer(configuration);
        recognizer.startRecognition(true);
        SpeechResult result;
        while ((result = recognizer.getResult()) != null)
        {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}
And that is all!
If you need the source code of Sphinx4: https://github.com/cmusphinx/sphinx4

Map Reduce: Unable to run the code due to a number of errors

Please have a look at the following code
Map.java
public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}
Reduce.java
public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(
            Text key,
            java.lang.Iterable<IntWritable> values,
            org.apache.hadoop.mapreduce.Reducer<Text, IntWritable, Text, IntWritable>.Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
WordCount.java
public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
This entire code is extracted from this Map Reduce tutorial (http://cloud.dzone.com/articles/how-run-elastic-mapreduce-job). As soon as I copied these classes into Eclipse, it showed a lot of errors like "cannot be resolved to a type". That is understandable, because the classes this code uses are nowhere to be found in the default JDK, and the tutorial gives no instructions to download any library. I ignored it, thinking it had something to do with Elastic MapReduce on the server side.
As soon as I uploaded this to Amazon Elastic MapReduce, created a job flow and ran the program, it gave me the following errors.
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
Configuration cannot be resolved to a type
Configuration cannot be resolved to a type
Job cannot be resolved to a type
Job cannot be resolved to a type
Text cannot be resolved to a type
IntWritable cannot be resolved to a type
TextInputFormat cannot be resolved to a type
TextOutputFormat cannot be resolved to a type
FileInputFormat cannot be resolved
Path cannot be resolved to a type
FileOutputFormat cannot be resolved
Path cannot be resolved to a type
at WordCount.main(WordCount.java:5)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:187)
How can I make this code work? Do I have to download any library for that? How can I make this code run and see the results? This is my very first experience with Amazon and Elastic MapReduce, and yes, my first experience with Big Data as well.
Please help.
So you mean you didn't add any Hadoop jar to your project, ignored the compilation errors, and hoped this could run on the server side where a hadoop-client is installed?
If that is true, it is impossible.
You must add the hadoop-client-XX.jar to your project; any version is OK.
Add all Hadoop jars to the project in Eclipse, and if your code has no errors then you may export it as a jar and run that jar in Hadoop.
To add jars, go to "Build Path", choose "Configure Build Path" and "Add External JARs" (choose all Hadoop jars and add them).
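Once the Hadoop jars are on the build path, note that the tutorial code also omits its import statements; the driver needs roughly the following (assuming the new org.apache.hadoop.mapreduce API the code is written against):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;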
To people encountering this error:
Right-click on the project you created, then Build Path -> Configure Build Path -> Add External JARs inside the Libraries tab.
The Hadoop jars are located in the file system under usr > lib.
Browse to usr > lib > hadoop and add all the jar files starting from hadoop-annotations.jar up to the last jar [parquet-tools.jar].
Then add new external jars again, and this time add all jars present in the client folder (usr > lib > hadoop > client).

problems separating class and source files

In my _Mathematics package, I've separated the class and source files into bin and src folders like so:
_Mathematics ->
Formulas ->
src ->
// source files containing mathematical formulas...
// Factorial.java
bin ->
// Factorial.class
// class files containing mathematical formulas...
Problems ->
src ->
// Permutation.java
// source files containing mathematical problems...
bin ->
// Permutation.class
// class files containing mathematical problems...
But when I run the class containing main(), there is an error like so:
Exception in thread "main" java.lang.NoClassDefFoundError: _Mathematics\Problems
\bin\Permutations (wrong name: _Mathematics/Problems/bin/Permutations)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
Here's the Permutation.java file, where main() is located.
package _Mathematics.Problems.bin;

import _Mathematics.Formulas.bin.Factorial;

public class Permutations {
    public static void main(String args[]) {
        System.out.printf("There are 10 students. Five are to be chosen and seated in a row for a picture.%nHow many linear arrangements are possible?%n" +
                (new Factorial(10).r / new Factorial(5).r) + "%n%n");
        System.out.printf("How many permutations are there in the word 'permutation'?%n" +
                new Factorial(11).r + "%n%n");
    }
}
And here is the other file I have, Factorial.java:
package _Mathematics.Formulas.bin;

public class Factorial {
    public int o;
    public long r;

    public Factorial(int num) {
        long result = 1;
        for (int i = num; i > 0; i--)
            result *= i;
        this.o = num;
        this.r = result;
    }
}
Should I keep the package _Mathematics.Problems.bin;, or should I change it to package _Mathematics.Problems.src;?
What is wrong with my code??
Help would be much appreciated.
Two issues worth mentioning:
bin directories are normally used for executable files. This is because (generally) your OS has an environment setting that points to these directories, so when you try to run a program it knows where to look. When you run a Java program, Java itself is the executable (your OS needs to know where to find it). The OS doesn't need to find your actual Java class files; Java needs to find them, and for that it uses a completely different setting (the classpath). Because of this, if you're putting Java class files in a bin directory, you're probably doing something wrong.
Secondly, your package structure (_Mathematics.Problems.bin) should exactly match the directory structure, but it should also reflect the purpose of the classes. _Mathematics and Problems are reasonable parts of a package structure, but, again, bin or src is not. Normally I would create classes and src directories, and my package structure would then begin under those.
So, as explained above, to fix the issue: make sure the directory and package structures are identical for your src and classes; removing the bin part of your package structure will make this easier (a sketch follows below).
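A sketch of the two files with bin dropped from the packages, so that the package names mirror only the _Mathematics/Formulas and _Mathematics/Problems directories; the bodies are taken from the question:

// _Mathematics/Formulas/Factorial.java
package _Mathematics.Formulas;

public class Factorial {
    public int o;
    public long r;

    public Factorial(int num) {
        long result = 1;
        for (int i = num; i > 0; i--)
            result *= i;
        this.o = num;
        this.r = result;
    }
}

// _Mathematics/Problems/Permutations.java
package _Mathematics.Problems;

import _Mathematics.Formulas.Factorial;

public class Permutations {
    public static void main(String[] args) {
        System.out.printf("How many permutations are there in the word 'permutation'?%n"
                + new Factorial(11).r + "%n%n");
    }
}

You could then compile from the directory above _Mathematics (for example: javac _Mathematics/Formulas/Factorial.java _Mathematics/Problems/Permutations.java) and run with java _Mathematics.Problems.Permutations from that same directory, or use javac -d to send the .class files to a separate directory and put that directory on the classpath.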
For class files, you need to maintain the folder structure that your program is expecting:
_Mathematics\Problems\bin\Permutations
