I have an unbounded stream of complex objects that I want to load into BigQuery. The structure of these objects represents the schema of my destination table in BigQuery.
The problem is that since there are a lot of nested fields in the POJO, it's extremely tedious to convert it to a TableSchema object, and I'm looking for a quick/automated way to convert my POJO to a TableSchema object while writing to BigQuery.
I'm not very familiar with the Apache Beam API, and any help will be appreciated.
In a pipeline, I load a list of schemas from GCS. I keep them in string format because TableSchema is not serializable. However, I parse each one into a TableSchema to validate it.
Then I add them, still in string format, to a map in the Options object.
String schema = new String(blob.getContent());
// Decorate list of fields for allowing a correct parsing
String targetSchema = "{\"fields\":" + schema + "}";
try {
    // Preload schema to ensure validity, but then use string version
    Transport.getJsonFactory().fromString(targetSchema, TableSchema.class);
    String tableName = blob.getName().replace(SCHEMA_FILE_PREFIX, "").replace(SCHEMA_FILE_SUFFIX, "");
    tableSchemaStringMap.put(tableName, targetSchema);
} catch (IOException e) {
    logger.warn("impossible to read schema " + blob.getName() + " in bucket gs://" + options.getSchemaBucket());
}
I didn't find another solution when I developed this.
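For reference, a minimal sketch of the read-back step at write time, reusing the same Transport.getJsonFactory() call shown above (the tableName lookup is an assumption about how the map is consumed downstream):

// Hypothetical lookup of the schema string stored in the options map earlier
String schemaJson = tableSchemaStringMap.get(tableName);
// Parse the JSON string back into a TableSchema just before the BigQuery write;
// fromString throws IOException, so handle it as in the snippet above
TableSchema tableSchema = Transport.getJsonFactory().fromString(schemaJson, TableSchema.class);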
In my company I created a kind of ORM (we called it OBQM) to do exactly this. We are expecting to release it to the public. The code is quite big (especially because I created annotations and so on), but I can share some snippets with you for quick schema generation:
public TableSchema generateTableSchema(@Nonnull final Class<?> cls) {
    final TableSchema tableSchema = new TableSchema();
    tableSchema.setFields(generateFieldsSchema(cls));
    return tableSchema;
}

public List<TableFieldSchema> generateFieldsSchema(@Nonnull final Class<?> cls) {
    final List<TableFieldSchema> schemaFields = new ArrayList<>();
    final Field[] clsFields = cls.getFields();
    for (final Field field : clsFields) {
        schemaFields.add(fromFieldToSchemaField(field));
    }
    return schemaFields;
}
This code takes all the fields from the POJO class and creates a TableSchema object (the one that BigQueryIO uses in Apache Beam). You can see a method that I created called fromFieldToSchemaField. This method identifies each field's type and sets up the field name, mode, description and type. To keep it simple here, I'm going to focus on the type and name:
public static TableFieldSchema fromFieldToSchemaField(@Nonnull final Field field) {
    return fromFieldToSchemaField(field, 0);
}

public static TableFieldSchema fromFieldToSchemaField(
        @Nonnull final Field field,
        final int iteration) {
    final TableFieldSchema schemaField = new TableFieldSchema();
    schemaField.setName(field.getName());
    schemaField.setMode("NULLABLE"); // You can add better logic here, we use annotations to override this value
    schemaField.setType(getFieldTypeString(field));
    schemaField.setDescription("Optional"); // Optional
    if (iteration < MAX_RECURSION
            && (isStruct(schemaField.getType())
            || isRecord(schemaField.getType()))) {
        final List<TableFieldSchema> schemaFields = new ArrayList<>();
        final Field[] fields = getFieldsFromComplexObjectField(field);
        for (final Field subField : fields) {
            schemaFields.add(fromFieldToSchemaField(subField, iteration + 1));
        }
        schemaField.setFields(schemaFields.isEmpty() ? null : schemaFields);
    }
    return schemaField;
}
And now the method that returns the BigQuery field type.
public static String getFieldTypeString(@Nonnull final Field field) {
    // On my side this code is much more complex, but this is a short version of it
    final Class<?> cls = (Class<?>) field.getGenericType();
    if (cls.isAssignableFrom(String.class)) {
        return "STRING";
    } else if (cls.isAssignableFrom(Integer.class) || cls.isAssignableFrom(Short.class)) {
        return "INT64";
    } else if (cls.isAssignableFrom(Double.class)) {
        return "NUMERIC";
    } else if (cls.isAssignableFrom(Float.class)) {
        return "FLOAT64";
    } else if (cls.isAssignableFrom(Boolean.class)) {
        return "BOOLEAN";
    } else if (cls.isAssignableFrom(byte[].class)) {
        return "BYTES";
    } else if (cls.isAssignableFrom(Date.class)
            || cls.isAssignableFrom(DateTime.class)) {
        return "TIMESTAMP";
    } else {
        return "STRUCT";
    }
}
Keep in mind that I'm not showing how to identify primitive types or arrays. But this is a good start for your code :). Please let me know if you need any help.
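To connect this back to the original question, here is a minimal, untested sketch of plugging a generated schema into a BigQueryIO write; MyPojo, rows (a PCollection<TableRow>), and the table spec are placeholders:

// Generate the TableSchema once from the POJO class (MyPojo is a placeholder)
TableSchema schema = generateTableSchema(MyPojo.class);

rows.apply("WriteToBigQuery", BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table") // placeholder table spec
        .withSchema(schema)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));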
If you're using JSON for the message serialization in Pub/Sub, you can make use of one of the provided templates:
PubSub To BigQuery Template
The code for that template is here:
PubSubToBigQuery.java
I want to create JUnit test cases for a method in which we iterate a List<Map<String,Object>> using a forEach loop with a lambda expression. I want to mock the statement objectMapper.writeValueAsString(recordObj.get("value"));, but I don't understand how to work with recordObj.
public String apply(MyRequestWrapper requestWrapper) {
    String resultStr = null;
    final Map<String, List<PubSubEvent>> packagesEventList = AppUtilities.getPackagesEventsMappedList();
    try {
        logger.debug("Received Record:: " + requestWrapper.getBody().toString());
        List<RecordProcessedResult> results = new ArrayList<>();
        List<Map<String, Object>> recordMaps = string2List(objectMapper, requestWrapper.getBody().toString());
        logger.debug("Parsed received payload ::: " + LocalDateTime.now() + " batch size is ::: " + recordMaps.size());
        if (!ObjectUtils.isEmpty(recordMaps) && !recordMaps.isEmpty()) {
            recordMaps.forEach(recordObj -> {
                ConsumerRecord record = objectMapper.convertValue(recordObj, ConsumerRecord.class);
                String topicName = recordObj.get("topic").toString();
                String key = null;
                String value = null;
                String offset = null;
                String xTraceabilityId = ((Map<String, String>) recordObj.get("headers")).get(IdTypeConstants.XTRACEABILITYID);
                String xCorrelationId = ((Map<String, String>) recordObj.get("headers")).get(IdTypeConstants.XCORRELATIONID);
                MDC.put(IdTypeConstants.XTRACEABILITYID, xTraceabilityId);
                MDC.put(IdTypeConstants.XCORRELATIONID, xCorrelationId);
                try {
                    key = objectMapper.writeValueAsString(recordObj.get("key"));
                    value = objectMapper.writeValueAsString(recordObj.get("value"));
                    offset = objectMapper.writeValueAsString(recordObj.get("offset"));
                    MyEvent myEvent = objectMapper.readValue(value, MyEvent.class);
                    subscribedPackageProcessor.setInput(input);
                    subscribedPackageProcessor.setOutput(output);
                    subscribedPackageProcessor.setPackagesEventList(packagesEventList);
                    subscribedPackageProcessor.setRequesterType(requesterType);
                    subscribedPackageProcessor.processSubscribedPackage(myEvent.getPackageId());
                    RecordProcessedResult rpr = new RecordProcessedResult(record, true, null, xTraceabilityId, xCorrelationId, key, System.currentTimeMillis());
                    results.add(rpr);
                } catch (Exception e) {
                    RecordProcessedResult rpr = new RecordProcessedResult(record, false, ExceptionUtils.getStackTrace(e), xTraceabilityId, xCorrelationId, key, System.currentTimeMillis());
                    results.add(rpr);
                    logger.info("Exception occurred while processing fund data :::out ", e);
                }
                MDC.clear();
            });
        }
        resultStr = objectMapper.writeValueAsString(results);
    } catch (Exception e) {
        logger.debug(e.getMessage());
    }
    return resultStr;
}
I have tried the following test case.
@Test
void applyTest() throws Exception {
    MyEvent myEvent = new MyEvent();
    myEvent.setPackageId("test");
    MyRequestWrapper flowRequestWrapper = getMyRequestWrapper();
    List<Map<String, Object>> maps = string2List(objectMapper1, flowRequestWrapper.getBody().toString());
    Map<String, Object> map = new HashMap<String, Object>();
    Mockito.when(objectMapper.readValue(Mockito.anyString(), Mockito.any(TypeReference.class))).thenReturn(maps);
    Mockito.when(objectMapper.writeValueAsString(Mockito.anyString())).thenReturn("test");
    Mockito.when(objectMapper.readValue(Mockito.anyString(), Mockito.eq(MyEvent.class))).thenReturn(myEvent);
    //doNothing().when(subscribedPackageProcessor).processSubscribedPackage("");
    String response = processESignCompletedEventSvcFlow.apply(flowRequestWrapper);
    Assertions.assertNotNull(response);
}
Please help, Thanks
Your method is way too complex to be unit tested. For example, it declares dependencies by calling methods in the same class. You cannot mock those, and that makes testing many times more complicated.
List<Map<String, Object>> recordMaps =
        string2List(objectMapper, requestWrapper.getBody().toString());
You need to extract the string2List method into a standalone class (with its own unit tests) that is injected into your class as a dependency.
Then you can just mock the string2List class and when you do that, you control the creation of recordObj instances from your unit test for this method.
Your second "sin" is abusing lambdas by creating one that is longer than two lines. Lambdas should be short; if one spans more than a few lines, it should be extracted into a standalone class that can be unit tested separately. And again, once you have extracted this lambda into a standalone class and unit tested it, you can't just go new RecordObjConsumer(results) in your method, as that creates a hard-coded dependency that you again cannot mock. You need to design the consumer so that it can be injected into your class as an external dependency. A sketch of both extractions follows.
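As a rough illustration of that advice (RecordParser and RecordObjConsumer are hypothetical names, not from the question's codebase):

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical parser interface extracted from string2List; injected as a dependency
interface RecordParser {
    List<Map<String, Object>> parse(String body);
}

// Hypothetical consumer class extracted from the long lambda; unit-testable on its own
class RecordObjConsumer implements Consumer<Map<String, Object>> {
    private final List<RecordProcessedResult> results;

    RecordObjConsumer(List<RecordProcessedResult> results) {
        this.results = results;
    }

    @Override
    public void accept(Map<String, Object> recordObj) {
        // the body of the original lambda moves here unchanged
    }
}

With RecordParser injected, the test mocks parse(...) to return exactly the recordObj maps it wants to exercise, instead of stubbing objectMapper call by call.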
I have a protobuf message of the form
enum PolicyValidationType {
  Number = 0;
}

message NumberPolicyValidation {
  optional int64 maxValue = 1;
  optional int64 minValue = 2;
}

message PolicyObject {
  required string key = 1;
  optional string value = 2;
  optional string name = 3;
  optional PolicyValidationType validationType = 4;
  optional NumberPolicyValidation numberPolicyValidation = 5;
}
For example
policyObject {
  key: "sessionIdleTimeoutInSecs"
  value: "1800"
  name: "Session Idle Timeout"
  validationType: Number
  numberPolicyValidation {
    maxValue: 3600
    minValue: 5
  }
}
Can someone let me know how I can convert this to a Map like the one below?
{validationType=Number, name=Session Idle Timeout, numberPolicyValidation={maxValue=3600.0, minValue=5.0}, value=1800, key=sessionIdleTimeoutInSecs}
One way I can think of is to convert it to JSON and then convert the JSON to a Map:
PolicyObject policyObject;
...
JsonFormat jsonFormat = new JsonFormat();
final String s = jsonFormat.printToString(policyObject);
Type objectMapType = new TypeToken<HashMap<String, Object>>() {}.getType();
Gson gson = new GsonBuilder().registerTypeAdapter(new TypeToken<HashMap<String,Object>>(){}.getType(), new PrimitiveDeserializer()).create();
Map<String, Object> mappedObject = gson.fromJson(s, objectMapType);
I think there must be a better way. Can someone suggest a better approach?
I created a small dedicated class to generically convert any Google protocol buffer message into a Java Map.
public class ProtoUtil {

    @NotNull
    public Map<String, Object> protoToMap(Message proto) {
        final Map<Descriptors.FieldDescriptor, Object> allFields = proto.getAllFields();
        Map<String, Object> map = new LinkedHashMap<>();
        for (Map.Entry<Descriptors.FieldDescriptor, Object> entry : allFields.entrySet()) {
            final Descriptors.FieldDescriptor fieldDescriptor = entry.getKey();
            final Object requestVal = entry.getValue();
            final Object mapVal = convertVal(proto, fieldDescriptor, requestVal);
            if (mapVal != null) {
                final String fieldName = fieldDescriptor.getName();
                map.put(fieldName, mapVal);
            }
        }
        return map;
    }

    @Nullable
    /*package*/ Object convertVal(@NotNull Message proto, @NotNull Descriptors.FieldDescriptor fieldDescriptor, @Nullable Object protoVal) {
        Object result = null;
        if (protoVal != null) {
            if (fieldDescriptor.isRepeated()) {
                if (proto.getRepeatedFieldCount(fieldDescriptor) > 0) {
                    final List<?> originals = (List<?>) protoVal;
                    final List<Object> copies = new ArrayList<>(originals.size());
                    for (Object original : originals) {
                        copies.add(convertAtomicVal(fieldDescriptor, original));
                    }
                    result = copies;
                }
            } else {
                result = convertAtomicVal(fieldDescriptor, protoVal);
            }
        }
        return result;
    }

    @Nullable
    /*package*/ Object convertAtomicVal(@NotNull Descriptors.FieldDescriptor fieldDescriptor, @Nullable Object protoVal) {
        Object result = null;
        if (protoVal != null) {
            switch (fieldDescriptor.getJavaType()) {
                case INT:
                case LONG:
                case FLOAT:
                case DOUBLE:
                case BOOLEAN:
                case STRING:
                    result = protoVal;
                    break;
                case BYTE_STRING:
                case ENUM:
                    result = protoVal.toString();
                    break;
                case MESSAGE:
                    result = protoToMap((Message) protoVal);
                    break;
            }
        }
        return result;
    }
}
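Usage is then a one-liner; for the PolicyObject from the question, it would look something like this:

PolicyObject policyObject;
...
Map<String, Object> map = new ProtoUtil().protoToMap(policyObject);
// => {key=sessionIdleTimeoutInSecs, value=1800, name=Session Idle Timeout, ...}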
Hope that helps! Share and enjoy.
Be aware that both approaches described above (serialize/deserialize by tuk and custom converter by Zarnuk) will produce different outputs.
With the serialize/deserialize approach:
Field names in snake_case format will be automatically converted into camelCase. JsonFormat.printer() does this.
Numeric values will be converted to floating point; Gson parses every JSON number into a Double for you.
Values of type Duration will be converted into strings with the format durationInSeconds + "s", i.e. "30s" for a duration of 30 seconds and "0.000500s" for a duration of 500,000 nanoseconds. JsonFormat.printer() does this.
With the custom converter approach:
Field names will remain as they are described on the proto file.
Integers and floats will keep their own type.
Values of type Duration will become objects with their corresponding fields.
To show the differences, here is a comparison of the outcomes of both approaches.
Original message (here is the proto file):
method_config {
  name {
    service: "helloworld.Greeter"
    method: "SayHello"
  }
  retry_policy {
    max_attempts: 5
    initial_backoff {
      nanos: 500000
    }
    max_backoff {
      seconds: 30
    }
    backoff_multiplier: 2.0
    retryable_status_codes: UNAVAILABLE
  }
}
With the serialize/deserialize approach:
{
  methodConfig=[ // field name was converted to camelCase
    {
      name=[
        {
          service=helloworld.Greeter,
          method=SayHello
        }
      ],
      retryPolicy={
        maxAttempts=5.0, // was integer originally
        initialBackoff=0.000500s, // was Duration originally
        maxBackoff=30s, // was Duration originally
        backoffMultiplier=2.0,
        retryableStatusCodes=[
          UNAVAILABLE
        ]
      }
    }
  ]
}
With the custom converter approach:
{
  method_config=[ // field names keep their snake_case format
    {
      name=[
        {
          service=helloworld.Greeter,
          method=SayHello
        }
      ],
      retry_policy={
        max_attempts=5, // integers stay the same
        initial_backoff={ // Duration values remain objects
          nanos=500000
        },
        max_backoff={
          seconds=30
        },
        backoff_multiplier=2.0,
        retryable_status_codes=[
          UNAVAILABLE
        ]
      }
    }
  ]
}
Bottom line
So which approach is better?
Well, it depends on what you are trying to do with the Map<String, ?>. In my case, I was configuring a grpc client to be retriable, which is done via ManagedChannelBuilder.defaultServiceConfig API. The API accepts a Map<String, ?> with this format.
After some trial and error, I figured out that the defaultServiceConfig API assumes you are using Gson, hence the serialize/deserialize approach worked for me.
One more advantage of the serialize/deserialize approach is that the Map<String, ?> can be easily converted back to the original protobuf value by serializing it back to json, then using the JsonFormat.parser() to obtain the protobuf object:
ServiceConfig original;
...
String asJson = JsonFormat.printer().print(original);
Map<String, ?> asMap = new Gson().fromJson(asJson, Map.class);
// Convert back to ServiceConfig
String backToJson = new Gson().toJson(asMap);
ServiceConfig.Builder builder = ServiceConfig.newBuilder();
JsonFormat.parser().merge(backToJson, builder);
ServiceConfig backToOriginal = builder.build();
... whereas the custom converter approach doesn't have an easy way to convert back: you would need to write a function that converts the map back to the original proto by navigating the tree.
public class Table {
    private Long id = 1L;
    private String name;
    List<Terms> terms;
    Map<String, Address> addressMap; // hypothetical field name; missing in the original snippet
    // getters and setters
}
What I need to do is link my Java classes with database tables: each element in the above class is a concept in a database table, and I have the whole structure of Java classes (as per my XML) and the related database tables in the DB. What would be the best way?
As per my understanding, the options I can think of so far are:
use reflection to get the field names and apply my business logic
use XPath on my XML and directly link each concept using XPath
each time, get the value from the DB and the XML and link them using some mediator logic.
Please suggest an approach, and provide some dummy code if possible.
You can try the example below:
Iterator<Table> iterator = tableList.iterator();
boolean foundConcept = false;
while (iterator.hasNext()) {
    foundConcept = false;
    Table table = iterator.next();
    String conceptName = table.getConceptDetails().getName();
    Field[] fieldArr = table.getClass().getDeclaredFields();
    List<Field> fields = Arrays.asList(fieldArr);
    Iterator<Field> iterator1 = fields.iterator();
    while (iterator1.hasNext()) {
        Field field = iterator1.next();
        field.setAccessible(true);
        System.out.println(field.getName() + " # " + field.getType());
        if (field.getName().equalsIgnoreCase(conceptName) && String.class.isAssignableFrom(field.getType())) {
            foundConceptMap.put(conceptName, field.get(table).toString());
            foundConcept = true;
            break;
        } else {
            Type type = field.getGenericType();
            if (type instanceof ParameterizedType) {
                ParameterizedType pType = (ParameterizedType) type;
                System.out.print("Raw type: " + pType.getRawType() + " - ");
                System.out.println("Type args: " + pType.getActualTypeArguments()[0]);
                if ("java.util.List".equalsIgnoreCase(pType.getRawType().getTypeName())) {
                    String classWithPackage = pType.getActualTypeArguments()[0].getTypeName();
                    String className = "";
                    if (classWithPackage.contains(".")) {
                        className = classWithPackage.substring(classWithPackage.lastIndexOf(".") + 1);
                    } else {
                        className = classWithPackage;
                    }
                    System.out.println(className);
                    if ("Terms".equalsIgnoreCase(className)) {
                        List<Terms> list = table.getTerms();
                        setTerms(list, foundConceptMap, conceptName);
                    }
                }
            }
        }
    }
}
Working on a pretty printer. Based on my understanding of ANTLR and StringTemplate so far, if I want to match all my grammar rules to templates and apply the template each time the grammar rule is invoked, I can create my templates with names matching my grammar rules.
[Side question: Is this how I should approach it? It seems like ANTLR should be doing the work of matching the parsed text to the output templates. My job will be to make sure the parser rules and templates are complete/correct.]
I think ANTLR 3 allowed directly setting templates inside of the ANTLR grammar, but ANTLR 4 seems to have moved away from that.
Based on the above assumptions, it looks like the MyGrammarBaseListener class that ANTLR generates is going to be doing all the work.
I've been able to collect the names of the rules invoked while parsing the text input by converting this example to ANTLR 4. I ended up with this for my enterEveryRule():
@Override
public void enterEveryRule(ParserRuleContext ctx) {
    if (builder.length() > 0) {
        builder.append(' ');
    }
    if (ctx.getChildCount() > 0) {
        builder.append('(');
    }
    int ruleIndex = ctx.getRuleIndex();
    String ruleName;
    if (ruleIndex >= 0 && ruleIndex < ruleNames.length) {
        ruleName = ruleNames[ruleIndex];
        System.out.println(ruleName); // this part works as intended
    } else {
        ruleName = Integer.toString(ruleIndex);
    }
    builder.append(ruleName);
    // CONFUSION HERE:
    // get template names (looking through the API to figure out how to do this)
    // Set<String> templates = (MyTemplates.stg).getTemplateNames()  <- pseudocode, not valid Java
    // or String[] for return value? Java stuff
    // for each ruleName in ruleNames
    //     if (ruleName == templateName)
    //         run template using rule children as parameters
    //         write pretty-printed version to file
}
The linked example applies the changes to create the text output in exitEveryRule(), so I'm not sure where to actually implement my template-matching algorithm. I'll experiment with both enter and exit to see what works best.
My main question is: How do I access the template names in MyTemplates.stg? What do I have to import, etc.?
(I'll probably be back to ask about matching up rule children to template parameters in a different question...)
The following demonstrates a simple way of dynamically accessing and rendering named StringTemplates. The intent is to build the varMap values in the listener (or visitor) in the corresponding context, keyed by parameter name, and to call the context-dependent named template to incrementally render the template's content.
import org.stringtemplate.v4.ST;
import org.stringtemplate.v4.STGroupFile;

public class Render {
    private static final String templateDir = "some/path/to/templates";
    private STGroupFile blocksGroup;
    private STGroupFile stmtGroup;

    public Render() {
        // Strings.concatAsClassPath and Log below are the author's own utility classes
        blocksGroup = new STGroupFile(Strings.concatAsClassPath(templateDir, "Blocks.stg"));
        stmtGroup = new STGroupFile(Strings.concatAsClassPath(templateDir, "Statements.stg"));
    }

    public String gen(GenType type, String name) {
        return gen(type, name, null);
    }

    /**
     * type is an enum, identifying the group template
     * name is the template name within the group
     * varMap contains the named values to be passed to the template
     */
    public String gen(GenType type, String name, Map<String, Object> varMap) {
        Log.debug(this, name);
        STGroupFile stf = null;
        switch (type) {
            case BLOCK:
                stf = blocksGroup;
                break;
            case STMT:
                stf = stmtGroup;
                break;
        }
        ST st = stf.getInstanceOf(name);
        if (varMap != null) {
            for (String varName : varMap.keySet()) {
                try {
                    st.add(varName, varMap.get(varName));
                } catch (NullPointerException e) {
                    Log.error(this, "Error adding attribute: " + name + ":" + varName + " [" + e.getMessage() + "]");
                }
            }
        }
        return st.render();
    }
}
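On the asker's main question of how to access the template names in MyTemplates.stg: STGroup in StringTemplate 4 exposes getTemplateNames(), so a minimal sketch (the file path and the rule-matching step are assumptions) would be:

import java.util.Set;
import org.stringtemplate.v4.STGroup;
import org.stringtemplate.v4.STGroupFile;

STGroup group = new STGroupFile("MyTemplates.stg"); // resolved via classpath or file path
group.load(); // force the group file to load so all template names are registered
Set<String> templateNames = group.getTemplateNames(); // names come back with a leading "/"
if (templateNames.contains("/" + ruleName)) {
    // a template matching this grammar rule exists; render it with the rule's children
}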
I was writing a toString() for a class in Java the other day, manually writing out each element of the class to a String, and it occurred to me that using reflection it might be possible to create a generic toString() method that could work on ALL classes, i.e. one that would figure out the field names and values and write them out to a String.
Getting the field names is fairly simple; here is what a co-worker came up with:
public static List<String> initFieldArray(String className) throws ClassNotFoundException {
    Class<?> c = Class.forName(className);
    Field[] fields = c.getFields();
    List<String> classFields = new ArrayList<>(fields.length);
    for (Field field : fields) {
        String cf = field.toString();
        classFields.add(cf.substring(cf.lastIndexOf(".") + 1));
    }
    return classFields;
}
Using a factory I could reduce the performance overhead by storing the fields once, the first time the toString() is called. However finding the values could be a lot more expensive.
Due to the performance cost of reflection, this may be more hypothetical than practical. But I am interested in the idea of reflection and how I can use it to improve my everyday programming.
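A minimal sketch of the field-caching idea mentioned above (the cache and helper names are made up):

import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache the reflected fields per class so getFields() runs only once per class
private static final Map<Class<?>, Field[]> FIELD_CACHE = new ConcurrentHashMap<>();

private static Field[] cachedFields(Class<?> cls) {
    return FIELD_CACHE.computeIfAbsent(cls, Class::getFields);
}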
Apache commons-lang ReflectionToStringBuilder does this for you.
import org.apache.commons.lang3.builder.ReflectionToStringBuilder;
// your code goes here
public String toString() {
return ReflectionToStringBuilder.toString(this);
}
Another option, if you are ok with JSON, is Google's GSON library.
public String toString() {
return new GsonBuilder().setPrettyPrinting().create().toJson(this);
}
It's going to do the reflection for you. This produces nice, easy-to-read JSON. Easy-to-read being relative; non-tech folks might find the JSON intimidating.
You could make the GsonBuilder a member variable too, if you don't want to new it up every time.
If you have data that can't be printed (like a stream) or data you just don't want to print, you can add @Expose annotations to the attributes you want to print and then use the following:
new GsonBuilder()
.setPrettyPrinting()
.excludeFieldsWithoutExposeAnnotation()
.create()
.toJson(this);
With reflection, as I hadn't been aware of the Apache library (be aware that if you do this you'll probably need to deal with sub-objects and make sure they print properly; in particular, arrays won't show you anything useful):
@Override
public String toString() {
    StringBuilder b = new StringBuilder("[");
    for (Field f : getClass().getFields()) {
        if (!isStaticField(f)) {
            try {
                b.append(f.getName() + "=" + f.get(this) + " ");
            } catch (IllegalAccessException e) {
                // pass, don't print
            }
        }
    }
    b.append(']');
    return b.toString();
}

private boolean isStaticField(Field f) {
    return Modifier.isStatic(f.getModifiers());
}
If you're using Eclipse, you may also have a look at JUtils toString generator, which does it statically (generating the method in your source code).
You can use already-implemented libraries, such as ReflectionToStringBuilder from Apache commons-lang, as was mentioned.
Or write something similar yourself with the reflection API.
Here is an example:
class UniversalAnalyzer {
    private ArrayList<Object> visited = new ArrayList<Object>();

    /**
     * Converts an object to a string representation that lists all fields.
     * @param obj an object
     * @return a string with the object's class name and all field names and
     * values
     */
    public String toString(Object obj) {
        if (obj == null) return "null";
        if (visited.contains(obj)) return "...";
        visited.add(obj);
        Class<?> cl = obj.getClass();
        if (cl == String.class) return (String) obj;
        if (cl.isArray()) {
            String r = cl.getComponentType() + "[]{";
            for (int i = 0; i < Array.getLength(obj); i++) {
                if (i > 0) r += ",";
                Object val = Array.get(obj, i);
                if (cl.getComponentType().isPrimitive()) r += val;
                else r += toString(val);
            }
            return r + "}";
        }
        String r = cl.getName();
        // inspect the fields of this class and all superclasses
        do {
            r += "[";
            Field[] fields = cl.getDeclaredFields();
            AccessibleObject.setAccessible(fields, true);
            // get the names and values of all fields
            for (Field f : fields) {
                if (!Modifier.isStatic(f.getModifiers())) {
                    if (!r.endsWith("[")) r += ",";
                    r += f.getName() + "=";
                    try {
                        Class<?> t = f.getType();
                        Object val = f.get(obj);
                        if (t.isPrimitive()) r += val;
                        else r += toString(val);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            r += "]";
            cl = cl.getSuperclass();
        } while (cl != null);
        return r;
    }
}
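A quick usage sketch (the Person class is made up for demonstration):

// Hypothetical POJO to dump
class Person {
    private String name = "Ada";
    private int age = 36;
}

System.out.println(new UniversalAnalyzer().toString(new Person()));
// prints something like: Person[name=Ada,age=36][]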
Not reflection, but I had a look at generating the toString method (along with equals/hashCode) as a post-compilation step using bytecode manipulation. The results were mixed.
Here is the NetBeans equivalent of Olivier's answer: the smart-codegen plugin for NetBeans.