We have a gRPC server that inserts data into CockroachDB; the data comes from a Spring Boot microservice.
This is my code to persist in the CRDB database:
@Service
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public class CockroachPersister {
private static final String X_AMZN_REQUESTID = "x-amzn-RequestId";
private static final String X_AMZN_RESPONSE = "x-amzn-Response";
private static final String PUTITEM = "PutItem";
private static final String GETITEM = "GetItem";
private static final String DELETEITEM = "DeleteItem";
private static final String UPDATEITEM = "UpdateItem";
public <T extends Message> T save(final String requestBody, final String action, final String tableName) {
T t = null;
try {
List<GRPCMapper> lGRPCMapper = ServiceMapper.getServices(action,tableName);
for (GRPCMapper grpcMapper : lGRPCMapper) {
System.out.println("grpcMapper.getClassName() ==> "+grpcMapper.getClassName());
Class<?> className = Class.forName(grpcMapper.getClassName());
Class<?> implementedClassType = Class.forName(grpcMapper.getImplementedClass());
Method userMethod = implementedClassType.getDeclaredMethod(grpcMapper.getServiceName(), className);
System.out.println("userMethod\t" + userMethod.getName());
t = (T) userMethod.invoke(null, ProtoUtil.getInstance(requestBody, grpcMapper.getProtoType()));
System.out.printf("Service => %s row(s) Inserted \n", t.getAllFields().toString());
}
} catch (Exception e) {
e.printStackTrace();
}
return t;
}
}
If the initial insertion fails, I would like to retry at least 3 times before logging the error. How do I implement that?
A solution that uses a message queue would also be acceptable.
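One way to implement the retry in Spring Boot is with Spring Retry. The sketch below is only an illustration, not your exact code: it assumes the spring-retry dependency is on the classpath, that @EnableRetry is declared on a configuration class, and that save(...) is changed to rethrow failures instead of swallowing them in its catch block; the wrapper class name and logger are made up for the example.

@Service
public class RetryingCockroachPersister {

    private static final Logger LOGGER = LoggerFactory.getLogger(RetryingCockroachPersister.class);

    @Autowired
    private CockroachPersister persister; // the service shown above, with save(...) rethrowing failures

    // Retry the insert up to 3 times, waiting 1 second between attempts.
    @Retryable(value = Exception.class, maxAttempts = 3, backoff = @Backoff(delay = 1000))
    public <T extends Message> T saveWithRetry(String requestBody, String action, String tableName) {
        return persister.save(requestBody, action, tableName);
    }

    // Invoked only after all 3 attempts have failed: log here, or hand the payload
    // off to a message queue for later replay.
    @Recover
    public <T extends Message> T recover(Exception e, String requestBody, String action, String tableName) {
        LOGGER.error("Insert failed after 3 attempts for action {} on table {}", action, tableName, e);
        return null;
    }
}

If you would rather not add a dependency, a plain loop around save(...) with an attempt counter achieves the same thing, and the @Recover method is the natural place to push the payload to a message queue instead of only logging it.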
We have a message campaign where we send over 100k SMS messages a day, so we are a client of an SMSC server. We have no influence on the SMSC server code. Some time ago we were sending around 80-90 messages per second; now the rate has dropped to about 15 messages per second, according to tcpdumps.
I have little information about this, so I will try to explain as best as I can.
We are using Spring Boot 2.7 and the open-source jsmpp (3.0.0) library for sending SMS messages (PDU commands) to the SMSC.
While reading about the protocol (page 40), I noticed that there is a way to send messages asynchronously by providing a sequence_number. The code example is here. But I am not sure whether that is going to help...
The code:
@Component
public class ClientConfig {
@Autowired
private MessageReceiverListener msgListener;
@Autowired
private SessionStateListener sessionListener;
private SMPPSession session;
public String charset = "ISO-10646-UCS-2";
public long idleReceiveTimeout = 65000;
public long checkBindingTimeout = 12000;
public long timeout = 7000;
public int enquireLinkTimeout = 15000;
public String hostIp = "someIpAddress";
public int port = 5000;
public final String systemId = "someSystemId";
public final String password = "password";
public BindType bindType = BindType.BIND_TRX; //transceiver
public String systemType = null;
public String addressRange = null;
public TypeOfNumber addrTon = TypeOfNumber.UNKNOWN;
public NumberingPlanIndicator addrNpi = NumberingPlanIndicator.UNKNOWN;
protected synchronized void tryToConnectToSmsc() throws Exception {
try {
// Connect to host
BindParameter bp = new BindParameter(bindType, systemId, password, systemType, addrTon, addrNpi, addressRange);
session = new SMPPSession();
session.setEnquireLinkTimer(enquireLinkTimeout);
session.connectAndBind(hostIp, port, bp, timeout);
session.setMessageReceiverListener(msgListener);
session.addSessionStateListener(sessionListener);
}
// Main connection failed.
catch (Exception e) {
//log and re-attempt connection logic here
}
}
}
The listeners:
@Component
public class MySessionListenerImpl implements SessionStateListener {
@Override
public void onStateChange(SessionState newState, SessionState oldState, Session source) {
//TODO
}
}
@Service
public class SmsListenerImpl implements MessageReceiverListener {
@Override
public void onAcceptDeliverSm(DeliverSm deliverSm) throws ProcessRequestException {
//TODO
}
@Override
public void onAcceptAlertNotification(AlertNotification alertNotification) {}
@Override
public DataSmResult onAcceptDataSm(DataSm dataSm, Session session) throws ProcessRequestException {
return null;
}
}
Message sending service:
@Service
public class MessageSendingServiceImpl extends ClientConfig implements MessageSendingService{
private final ESMClass esmClass = new ESMClass();
private final byte protocolId = (byte) 0;
private final byte priorityFlag = (byte) 1;
private final TimeFormatter formatter = new AbsoluteTimeFormatter();
private final byte defaultMsgId = (byte) 0;
public MessageSendingServiceImpl() {
super();
}
@PostConstruct
public synchronized void init() throws Exception {
super.tryToConnectToSmsc();
}
@Override
public String send(DomainObject obj){ //DomainObject -> contains fields: id, to, from, text, delivery, validity;
String serviceType = null;
//source
TypeOfNumber sourceTON = TypeOfNumber.NATIONAL; //there is some logic here which determines if it is INTERNATIONAL, ALPHANUMERIC etc...
NumberingPlanIndicator sourceNpi = NumberingPlanIndicator.ISDN; //constant...
String sourcePhone = obj.getFrom();
//destination
TypeOfNumber destinationTON = TypeOfNumber.NATIONAL; //there is some logic here which determines if it is INTERNATIONAL, ALPHANUMERIC etc...
NumberingPlanIndicator destinationNpi = NumberingPlanIndicator.ISDN; //constant...
String destinationPhone = obj.getTo();
String scheduledDeliveryTime = null;
if (obj.getDelivery() != null) scheduledDeliveryTime = formatter.format(obj.getDelivery());
String validityPeriodTime = null;
if (obj.getValidity() != null) validityPeriodTime = formatter.format(obj.getValidity());
Map<Short, OptionalParameter> optionalParameters = new HashMap<>();
String text = obj.getText();
if ( text.length() > 89 ) { //set text as payload instead of message text
OctetString os = new OctetString(OptionalParameter.Tag.MESSAGE_PAYLOAD.code(), text, "ISO-10646-UCS-2"); //"ISO-10646-UCS-2" - encoding
optionalParameters.put(os.tag, os);
text = "";
}
String msgId =
session.submitShortMessage( serviceType ,
sourceTON ,
sourceNpi ,
sourcePhone ,
destinationTON ,
destinationNpi ,
destinationPhone ,
esmClass ,
protocolId ,
priorityFlag ,
scheduledDeliveryTime ,
validityPeriodTime ,
new RegisteredDelivery() ,
ReplaceIfPresentFlag.DEFAULT.value() ,
new GeneralDataCoding(Alphabet.ALPHA_UCS2) ,
defaultMsgId ,
text.getBytes("ISO-10646-UCS-2") ,
optionalParameters.values().toArray(new OptionalParameter[0]));
return msgId;
}
}
Client code which invokes the service (it is actually a scheduler job):
@Autowired private MessageSendingService messageSendingService;
@Scheduled(cron)
public void execute() {
List<DomainObject> messages = repository.findMessages(pageable, config.getBatch()); //up to several thousand
start(messages);
ThreadPoolExecutor executorService = (ThreadPoolExecutor) Executors.newFixedThreadPool(getSchedulerConfiguration().getPoolSize(), new NamedThreadFactory("Factory"));
List<DomainObject> domainObjects = Collections.synchronizedList(messages);
List<List<DomainObject>> domainObjectsPartitioned = partition(domainObjects.size(), config.getPoolSize()); //pool size is 4
for (List<DomainObject> list: domainObjectsPartitioned ) {
executorService.execute(new Runnable() {
@Override
public void run() {
try {
start(list);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
executorService.shutdown();
}
}
private void start(List<DomainObject> list){
for (DomainObject obj : list) {
String mid = messageSendingService.send(obj);
//do smtg with id...
}
}
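Since the drop shows up in the tcpdumps, one quick check is whether each submit_sm round trip has become slower (an SMSC-side problem) or whether your threads simply submit less often (a client-side problem). A small diagnostic sketch, assuming an SLF4J logger named log is available; the method name and the 100 ms threshold are arbitrary:

private String sendTimed(DomainObject obj) {
    long start = System.nanoTime();
    String msgId = messageSendingService.send(obj);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    if (elapsedMs > 100) {
        // A consistently high round trip points at the SMSC (slow submit_sm_resp),
        // not at the sending threads.
        log.warn("Slow submit_sm round trip: {} ms for message {}", elapsedMs, obj.getId());
    }
    return msgId;
}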
[ISSUE] The repo always comes back null when I call repo methods; while stepping through, it throws a NullPointerException. The front end then receives:
500: Http failure response for http://localhost:4200/api/aiprollout/updatecsv: 500 Internal Server Error
[HAVE TRIED] Adjusting @Autowired, @Component, and @Service annotations.
[QUESTIONS]
1- Does every repo method need its own service and controller method?
2- Is it okay to create a new service that uses an existing controller?
3- If this new service uses SuperCSV and I create custom CellProcessors, can these cell processors also call the repo? Should these cell processors perform logic, or should that be done elsewhere? What class annotations should these cell processor classes have? @Component?
Any advice is greatly appreciated; I feel a little lost at this point and am not even sure what to do.
[CODE]
Controller:
@RestController
@EnableConfigurationProperties({SpoofingConfigurationProperties.class})
@RequestMapping(value = "")
public class AipRolloutController {
private final Logger logger = some logger
private final AipRolloutService AipRolloutService;
private final CsvParserService csvParserService;
@Autowired
public AipRolloutController(AipRolloutService aipRolloutService, CsvParserService csvParserService) {
this.AipRolloutService = aipRolloutService;
this.csvParserService = csvParserService;
}
@PostMapping(value = "/updatecsv", produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public ResponseEntity<?> processCsv(@RequestParam("csvFile") MultipartFile csvFile) throws IOException {
if (csvFile.isEmpty()) return new ResponseEntity(
responceJson("please select a file!"),
HttpStatus.NO_CONTENT
);
csvParserService.parseCsvFile(csvFile);
return new ResponseEntity(
responceJson("Successfully uploaded - " + csvFile.getOriginalFilename()),
new HttpHeaders(),
HttpStatus.CREATED
);
}
Service:
@Service
public class AipRolloutService {
private static final Logger logger = some logger
@Autowired
private AIPRolloutRepository AIPRolloutRepository;
New CSV parser service:
@Service
public class CsvParserService {
@Autowired private AipRolloutService aipRolloutService;
public CsvParserService(AipRolloutService aipRolloutService) {
this.aipRolloutService = aipRolloutService;
}
public void parseCsvFile(MultipartFile csvFile) throws IOException {
CsvMapReader csvMapReader = new CsvMapReader(new InputStreamReader(csvFile.getInputStream()), CsvPreference.STANDARD_PREFERENCE);
parseCsv(csvMapReader);
csvMapReader.close();
}
private void parseCsv(CsvMapReader csvMapReader) throws IOException {
String[] header = csvMapReader.getHeader(true);
List<String> headers = Arrays.asList(header);
verifySourceColumn(headers);
verifyPovColumn(headers);
final CellProcessor[] processors = getProcessors(headers);
Map<String, Object> csvImportMap = null;
while ((csvImportMap = csvMapReader.read(header, processors)) != null) {
CsvImportDTO csvImportDto = new CsvImportDTO(csvImportMap);
if ( activationTypeP(csvImportDto) ){
int mssValue = Integer.parseInt(csvImportDto.getMssValue());
aipRolloutService.updateAipRollout(csvImportDto.getSource(),
csvImportDto.getPov(),
csvImportDto.getActivationType(),
mssValue);
}
}
}
private CellProcessor[] getProcessors(List<String> headers) {
CellProcessor[] processors = new CellProcessor[headers.size()];
int index = 0;
for (String header : headers) {
if (header.contains(SOURCE_ID)) {
processors[index++] = new CsvSourceIdCellParser();
} else if (header.contains(POV)) {
processors[index++] = new CsvPovCellParser();
} else if (header.contains(ACTIVATION_TYPE)) {
processors[index++] = new CsvActivationTypeCellParser();
} else if (header.contains(ACTIVATION_DATE)) {
processors[index++] = new Optional();
} else if (header.contains(DEACTIVATION_DATE)) {
processors[index++] = new Optional();
} else if (header.contains(MSS_VALUE)) {
processors[index++] = new CsvMssValueCellParser();
} else {
processors[index++] = null; // throw exception? wrong header info instead of allowing null?
}
}
return processors;
}
Custom Cell Processor that calls repo and returns null
public class CsvSourceIdCellParser extends CellProcessorAdaptor {
@Autowired AIPRolloutRepository aipRolloutRepository;
public CsvSourceIdCellParser(){ super(); }
// this constructor allows other processors to be chained
public CsvSourceIdCellParser(CellProcessor next){ super(next); }
@Override
public Object execute(Object value, CsvContext csvContext) {
// throws an Exception if the input is null
validateInputNotNull(value, csvContext);
// get rid of description only need first 3 #'s
value = value.toString().substring(0,3);
// check if WH exists
if( aipRolloutRepository.dcExistsInDatabase(value.toString()) )
return value;
else
throw new RuntimeException("Check Warehouse Value, Value Not Found "
+ "Row number: " + csvContext.getRowNumber()
+ " Column number: " + csvContext.getColumnNumber());
}
}
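A likely reason the repository is null inside this processor: getProcessors(...) creates it with new CsvSourceIdCellParser(), so the instance is never managed by Spring and the @Autowired field is never injected. One fix is to drop field injection and pass the repository in through the constructor from CsvParserService, which is a Spring bean. A rough sketch, keeping your class names:

public class CsvSourceIdCellParser extends CellProcessorAdaptor {

    private final AIPRolloutRepository aipRolloutRepository;

    public CsvSourceIdCellParser(AIPRolloutRepository aipRolloutRepository) {
        super();
        this.aipRolloutRepository = aipRolloutRepository;
    }

    @Override
    public Object execute(Object value, CsvContext csvContext) {
        // throws an exception if the input is null
        validateInputNotNull(value, csvContext);
        // keep only the first 3 characters, as before
        String sourceId = value.toString().substring(0, 3);
        if (aipRolloutRepository.dcExistsInDatabase(sourceId)) {
            return sourceId;
        }
        throw new SuperCsvCellProcessorException(
                "Check Warehouse Value, Value Not Found", csvContext, this);
    }
}

In CsvParserService the repository can be injected alongside the service and handed over when the processor array is built, e.g. processors[index++] = new CsvSourceIdCellParser(aipRolloutRepository);. To answer question 3: the processors themselves do not need @Component, because Spring never instantiates them; only the beans that create them do.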
Repository
@Repository
public class AIPRolloutRepository {
private static final Logger logger = LoggerFactory.getLogger(AIPRolloutRepository.class);
@Autowired
JdbcTemplate jdbcTemplate;
public AIPRolloutRepository() {
}
public boolean dcExistsInDatabase(String dc){
// Query for a count saves time and memory, query for distinct saves time and memory on execution
boolean hasRecord =
jdbcTemplate
.query( "select count (distinct '" + dc +"')" +
"from xxcus.XX_AIP_ROLLOUT" +
"where DC = '" + dc + "';",
new Object[] { dc },
(ResultSet rs) -> {
if (rs.next()) {
return true;
}
return false;
}
);
return hasRecord;
}
Is there a way to sort repository query results by querydsl alias?
So far I've managed to filter, but sorting fails with an error:
org.springframework.data.mapping.PropertyReferenceException: No property username found for type User!
request:
GET /users?size=1&sort=username,desc
my rest controller method:
@GetMapping("/users")
public ListResult<User> getUsersInGroup(
@ApiIgnore @QuerydslPredicate(root = User.class) Predicate predicate,
Pageable pageable) {
Page<User> usersInGroup =
userRepository.findByGroup(CurrentUser.getGroup(), predicate, pageable);
return new ListResult<>(usersInGroup);
}
my repository:
@Override
default void customize(QuerydslBindings bindings, QUser root) {
bindings.including(root.account.login, root.account.firstName, root.account.lastName,
root.account.phoneNumber, root.account.email, root.account.postalCode, root.account.city,
root.account.address, root.account.language, root.account.presentationAlias);
bindAlias(bindings, root.account.login, "username");
}
default Page<User> findByGroup(Group group, Predicate predicate, Pageable pageable) {
BooleanExpression byGroup = QUser.user.group.eq(group);
BooleanExpression finalPredicate = byGroup.and(predicate);
return findAll(finalPredicate, pageable);
}
default void bindAlias(QuerydslBindings bindings, StringPath path, String alias) {
bindings.bind(path).as(alias).first(StringExpression::likeIgnoreCase);
}
I've also tried to implement my own PageableArgumentResolver based on QuerydslPredicateArgumentResolver, but some of the methods used there are package-private, so I thought maybe I was going in the wrong direction.
I succeeded by creating a PageableArgumentResolver, annotating the controller parameter with the class type of the query root, and adding an alias registry to my generic repository interface.
This solution seems like a workaround, but at least it works ;)
repository:
public interface UserRepository extends PageableAndFilterableGenericRepository<User, QUser> {
QDSLAliasRegistry aliasRegistry = QDSLAliasRegistry.instance();
@Override
default void customize(QuerydslBindings bindings, QUser root) {
bindAlias(bindings, root.account.login, "username");
}
default void bindAlias(QuerydslBindings bindings, StringPath path, String alias) {
bindings.bind(path).as(alias).first(StringExpression::likeIgnoreCase);
aliasRegistry.register(alias, path);
}
alias registry:
public class QDSLAliasRegistry {
private static QDSLAliasRegistry inst;
public static QDSLAliasRegistry instance() {
inst = inst == null ? new QDSLAliasRegistry() : inst;
return inst;
}
private QDSLAliasRegistry() {
registry = HashBiMap.create();
}
HashBiMap<String, Path<?>> registry;
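// The two methods used elsewhere (register(..) in the repository above and getDotPath(..)
// in the resolver below) are not shown in the original snippet. A possible sketch, assuming
// the alias simply maps to the Querydsl path and the dot path is derived from the path's
// string form ("user.account.login" -> "account.login"):
public void register(String alias, Path<?> path) {
    registry.put(alias, path);
}

public String getDotPath(String alias) {
    Path<?> path = registry.get(alias);
    if (path == null) {
        // not a registered alias: assume it is already a property dot path
        return alias;
    }
    String full = path.toString();
    return full.substring(full.indexOf('.') + 1);
}
}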
resolver:
public class QDSLSafePageResolver implements PageableArgumentResolver {
private static final String DEFAULT_PAGE = "0";
private static final String DEFAULT_PAGE_SIZE = "20";
private static final String PAGE_PARAM = "page";
private static final String SIZE_PARAM = "size";
private static final String SORT_PARAM = "sort";
private final QDSLAliasRegistry aliasRegistry;
public QDSLSafePageResolver(QDSLAliasRegistry aliasRegistry) {
this.aliasRegistry = aliasRegistry;
}
@Override
public boolean supportsParameter(MethodParameter parameter) {
return Pageable.class.equals(parameter.getParameterType())
&& parameter.hasParameterAnnotation(QDSLPageable.class);
}
@Override
public Pageable resolveArgument(MethodParameter parameter,
ModelAndViewContainer mavContainer,
NativeWebRequest webRequest,
WebDataBinderFactory binderFactory) {
MultiValueMap<String, String> parameterMap = getParameterMap(webRequest);
final Class<?> root = parameter.getParameterAnnotation(QDSLPageable.class).root();
final ClassTypeInformation<?> typeInformation = ClassTypeInformation.from(root);
String pageStr = Optional.ofNullable(parameterMap.getFirst(PAGE_PARAM)).orElse(DEFAULT_PAGE);
String sizeStr = Optional.ofNullable(parameterMap.getFirst(SIZE_PARAM)).orElse(DEFAULT_PAGE_SIZE);
int page = Integer.parseInt(pageStr);
int size = Integer.parseInt(sizeStr);
List<String> sortStrings = parameterMap.get(SORT_PARAM);
if(sortStrings != null) {
OrderSpecifier[] specifiers = new OrderSpecifier[sortStrings.size()];
for(int i = 0; i < sortStrings.size(); i++) {
String sort = sortStrings.get(i);
String[] orderArr = sort.split(",");
Order order = orderArr.length == 1 ? Order.ASC : Order.valueOf(orderArr[1].toUpperCase());
specifiers[i] = buildOrderSpecifier(orderArr[0], order, typeInformation);
}
return new QPageRequest(page, size, specifiers);
} else {
return new QPageRequest(page, size);
}
}
private MultiValueMap<String, String> getParameterMap(NativeWebRequest webRequest) {
MultiValueMap<String, String> parameters = new LinkedMultiValueMap<String, String>();
for (Map.Entry<String, String[]> entry : webRequest.getParameterMap().entrySet()) {
parameters.put(entry.getKey(), Arrays.asList(entry.getValue()));
}
return parameters;
}
private OrderSpecifier<?> buildOrderSpecifier(String sort,
Order order,
ClassTypeInformation<?> typeInfo) {
Expression<?> sortPropertyExpression = new PathBuilderFactory().create(typeInfo.getType());
String dotPath = aliasRegistry.getDotPath(sort);
PropertyPath path = PropertyPath.from(dotPath, typeInfo);
sortPropertyExpression = Expressions.path(path.getType(), (Path<?>) sortPropertyExpression, path.toDotPath());
return new OrderSpecifier(order, sortPropertyExpression);
}
}
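For completeness, a resolver like this still has to be registered with Spring MVC. A minimal sketch, assuming a plain WebMvcConfigurer; the custom @QDSLPageable annotation on the controller parameter is what makes supportsParameter(..) pick this resolver instead of the default Pageable handling:

@Configuration
public class QdslWebConfig implements WebMvcConfigurer {

    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> resolvers) {
        resolvers.add(new QDSLSafePageResolver(QDSLAliasRegistry.instance()));
    }
}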
I try to aggregate data from a file in HDFS.
I need to enrich those data with values from a specific table in HBase.
But I get this exception:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1623)
at org.apache.spark.rdd.RDD.map(RDD.scala:286)
at org.apache.spark.api.java.JavaRDDLike$class.mapToPair(JavaRDDLike.scala:113)
at org.apache.spark.api.java.AbstractJavaRDDLike.mapToPair(JavaRDDLike.scala:46)
at ......
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
Serialization stack:
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:38)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
I know that the problem occurs when we try to access HBase during the map function.
My question is: how can I complete my RDDs with the values contained in the HBase table?
For example, the file in HDFS is a CSV:
Name;Number1;Number2
toto;1;2
In HBase we have data associated with the name toto.
I need to retrieve the sum of Number1 and Number2 (that's the easiest part) and aggregate it with the data in the table.
For example, the key for the reducer will be tata, retrieved by getting the row key toto in the HBase table.
Any suggestions?
Finally a colleague did it, thanks to your advice.
This is the code of the map function that makes it possible to aggregate a file with data from the HBase table.
private final Logger LOGGER = LoggerFactory.getLogger(AbtractGetSDMapFunction.class);
/**
* Namespace name
*/
public static final String NAMESPACE = "NameSpace";
private static final String ID = "id";
private Connection connection = null;
private static final String LINEID = "l";
private static final String CHANGE_LINE_ID = "clid";
private static final String CHANGE_LINE_DATE = "cld";
private String constClientPortHBase;
private String constQuorumHBase;
private int constTimeOutHBase;
private String constZnodeHBase;
public void initConnection() {
Configuration conf = HBaseConfiguration.create();
conf.setInt("timeout", constTimeOutHBase);
conf.set("hbase.zookeeper.quorum", constQuorumHBase);
conf.set("hbase.zookeeper.property.clientPort", constClientPortHBase);
conf.set("zookeeper.znode.parent", constZnodeHBase);
try {
connection = HConnectionManager.createConnection(conf);
} catch (Exception e) {
LOGGER.error("Error in the configuration of the connection with HBase.", e);
}
}
public Tuple2<String, myInput> call(String row) throws Exception {
//this is where you need to init the connection for hbase to avoid serialization problem
initConnection();
// ... do your work
State state = getCurrentState(myInput.getKey());
// ... do your work
}
public AbtractGetSDMapFunction( String constClientPortHBase, String constQuorumHBase, String constZnodeHBase, int constTimeOutHBase) {
this.constClientPortHBase = constClientPortHBase;
this.constQuorumHBase = constQuorumHBase;
this.constZnodeHBase = constZnodeHBase;
this.constTimeOutHBase = constTimeOutHBase;
}
/***************************************************************************/
/**
* Table Name
*/
public static final String TABLE_NAME = "Table";
public State getCurrentState(String key) throws TechnicalException {
LOGGER.debug("start key {}", key);
String buildRowKey = buildRowKey(key);
State currentState = new State();
String columnFamily = State.getColumnFamily();
if (!StringUtils.isEmpty(buildRowKey) && null != columnFamily) {
try {
Get scan = new Get(Bytes.toBytes(buildRowKey));
scan.addFamily(Bytes.toBytes(columnFamily));
addColumnsToScan(scan, columnFamily, ID);
Result result = getTable().get(scan);
currentState.setCurrentId(getLong(result, columnFamily, ID));
} catch (IOException ex) {
throw new TechnicalException(ex);
}
LOGGER.debug("end ");
}
return currentState;
}
/***********************************************************/
private Table getTable() throws IOException, TechnicalException {
Connection connection = getConnection();
// Table retrieve
if (connection != null) {
Table table = connection.getTable(TableName.valueOf(NAMESPACE, TABLE_NAME));
return table;
} else {
throw new TechnicalException("Connection to Hbase not available");
}
}
/****************************************************************/
private Long getLong(Result result, String columnFamily, String qualifier) {
Long toLong = null;
if (null != columnFamily && null != qualifier) {
byte[] value = result.getValue(Bytes.toBytes(columnFamily), Bytes.toBytes(qualifier));
toLong = (value != null ? Bytes.toLong(value) : null);
}
return toLong;
}
private String getString(Result result, String columnFamily, String qualifier) {
String toString = null;
if (null != columnFamily && null != qualifier) {
byte[] value = result.getValue(Bytes.toBytes(columnFamily), Bytes.toBytes(qualifier));
toString = (value != null ? Bytes.toString(value) : null);
}
return toString;
}
public Connection getConnection() {
return connection;
}
public void setConnection(Connection connection) {
this.connection = connection;
}
private void addColumnsToScan(Get scan, String family, String qualifier) {
if (org.apache.commons.lang.StringUtils.isNotEmpty(family) && org.apache.commons.lang.StringUtils.isNotEmpty(qualifier)) {
scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier));
}
}
private String buildRowKey(String key) throws TechnicalException {
StringBuilder rowKeyBuilder = new StringBuilder();
rowKeyBuilder.append(HashFunction.makeSHA1Hash(key));
return rowKeyBuilder.toString();
}
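For context, a usage sketch of how a function like this ends up in the driver code (class and host names are illustrative; the concrete subclass of AbtractGetSDMapFunction is assumed to implement Spark's PairFunction and Serializable). Because the HBase connection is opened inside call() via initConnection(), only plain configuration strings are serialized with the closure, which is what avoids the NotSerializableException on HConnectionImplementation:

JavaRDD<String> lines = sparkContext.textFile("hdfs:///path/to/input.csv");
JavaPairRDD<String, MyInput> enriched =
        lines.mapToPair(new GetSDMapFunction("2181", "zk1,zk2,zk3", "/hbase", 30000));

Opening a connection for every record is costly, though; if that becomes a bottleneck, mapPartitionsToPair lets you open one connection per partition instead.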
I am trying to write some integration tests for methods that need to extract data from MongoDB. In detail, I am using the embedded Mongo provided by the Spring Data project. The embedded Mongo is actually provided by Flapdoodle.
I need to import some JSON files into the embedded Mongo. I have looked at the tests provided with Flapdoodle, but I am not able to understand how they integrate with the magic provided by Spring Data + Spring Boot.
Can anyone post some clarifying snippets?
You can create a JUnit rule (ExternalResource) which runs before and after each test. Check the MongoEmbeddedRule class below to get some idea of the implementation details.
Integration test:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = RANDOM_PORT)
public abstract class TestRunner {
@Autowired
protected MongoTemplate mongoTemplate;
@Rule
public MongoEmbeddedRule mongoEmbeddedRule = new MongoEmbeddedRule(this);
ExternalResource Rule:
public class MongoEmbeddedRule extends ExternalResource {
private final Object testClassInstance;
private final Map<String, Path> mongoCollectionDataPaths;
private final String fieldName;
private final String getterName;
public MongoEmbeddedRule(final Object testClassInstance) {
this(testClassInstance, "mongoTemplate", "getMongoTemplate");
}
protected MongoEmbeddedRule(final Object testClassInstance, final String fieldName, final String getterName) {
this.fieldName = fieldName;
this.getterName = getterName;
this.testClassInstance = testClassInstance;
this.mongoCollectionDataPaths = mongoExtendedJsonFilesLookup();
}
@Override
protected void before() {
dropCollections();
createAndPopulateCollections();
}
@Override
protected void after() {
}
protected Set<String> getMongoCollectionNames() {
return mongoCollectionDataPaths.keySet();
}
public void dropCollections() {
getMongoCollectionNames().forEach(collectionName -> getMongoTemplate().dropCollection(collectionName));
}
protected void createAndPopulateCollections() {
mongoCollectionDataPaths.forEach((key, value) -> insertDocumentsFromMongoExtendedJsonFile(value, key));
}
protected MongoTemplate getMongoTemplate() {
try {
Object value = ReflectionTestUtils.getField(testClassInstance, fieldName);
if (value instanceof MongoTemplate) {
return (MongoTemplate) value;
}
value = ReflectionTestUtils.invokeGetterMethod(testClassInstance, getterName);
if (value instanceof MongoTemplate) {
return (MongoTemplate) value;
}
} catch (final IllegalArgumentException e) {
// throw exception with dedicated message at the end
}
throw new IllegalArgumentException(
String.format(
"%s expects either field '%s' or method '%s' in order to access the required MongoTemmplate",
this.getClass().getSimpleName(), fieldName, getterName));
}
private Map<String, Path> mongoExtendedJsonFilesLookup() {
Map<String, Path> collections = new HashMap<>();
try {
Files.walk(Paths.get("src","test","resources","mongo"))
.filter(Files::isRegularFile)
.forEach(filePath -> collections.put(
filePath.getFileName().toString().replace(".json", ""),
filePath));
} catch (IOException e) {
e.printStackTrace();
}
return collections;
}
private void insertDocumentsFromMongoExtendedJsonFile(Path path, String collectionName) {
try {
List<Document> documents = new ArrayList<>();
Files.readAllLines(path).forEach(l -> documents.add(Document.parse(l)));
getMongoTemplate().getCollection(collectionName).insertMany(documents);
System.out.println(documents.size() + " documents loaded for " + collectionName + " collection.");
} catch (IOException e) {
e.printStackTrace();
}
}
}
JSON file (names.json) with MongoDB Extended JSON, where every document is on one line and the collection name is the filename without the extension.
{ "_id" : ObjectId("594d324d5b49b78da8ce2f28"), "someId" : NumberLong(1), "name" : "Some Name 1", "lastModified" : ISODate("1970-01-01T00:00:00Z")}
{ "_id" : ObjectId("594d324d5b49b78da8ce2f29"), "someId" : NumberLong(2), "name" : "Some Name 2", "lastModified" : ISODate("1970-01-01T00:00:00Z")}
You can have a look at the following test class, provided by Flapdoodle. The test shows how to import a JSON file containing the collection dataset:
MongoImportExecutableTest.java
You could theoretically also import a whole dump of a database (using mongorestore):
MongoRestoreExecutableTest.java
You can create an abstract class with setup logic that starts the mongod and mongoimport processes.
AbstractMongoDBTest.java
public abstract class AbstractMongoDBTest {
private MongodProcess mongodProcess;
private MongoImportProcess mongoImportProcess;
private MongoTemplate mongoTemplate;
void setup(String dbName, String collection, String jsonFile) throws Exception {
String ip = "localhost";
int port = 12345;
IMongodConfig mongodConfig = new MongodConfigBuilder().version(Version.Main.PRODUCTION)
.net(new Net(ip, port, Network.localhostIsIPv6()))
.build();
MongodStarter starter = MongodStarter.getDefaultInstance();
MongodExecutable mongodExecutable = starter.prepare(mongodConfig);
File dataFile = new File(Thread.currentThread().getContextClassLoader().getResource(jsonFile).getFile());
MongoImportExecutable mongoImportExecutable = mongoImportExecutable(port, dbName,
collection, dataFile.getAbsolutePath()
, true, true, true);
mongodProcess = mongodExecutable.start();
mongoImportProcess = mongoImportExecutable.start();
mongoTemplate = new MongoTemplate(new MongoClient(ip, port), dbName);
}
private MongoImportExecutable mongoImportExecutable(int port, String dbName, String collection, String jsonFile,
Boolean jsonArray, Boolean upsert, Boolean drop) throws
IOException {
IMongoImportConfig mongoImportConfig = new MongoImportConfigBuilder()
.version(Version.Main.PRODUCTION)
.net(new Net(port, Network.localhostIsIPv6()))
.db(dbName)
.collection(collection)
.upsert(upsert)
.dropCollection(drop)
.jsonArray(jsonArray)
.importFile(jsonFile)
.build();
return MongoImportStarter.getDefaultInstance().prepare(mongoImportConfig);
}
@AfterEach
void clean() {
mongoImportProcess.stop();
mongodProcess.stop();
}
public MongoTemplate getMongoTemplate(){
return mongoTemplate;
}
}
YourTestClass.java
public class YourTestClass extends AbstractMongoDBTest{
@BeforeEach
void setup() throws Exception {
super.setup("db", "collection", "jsonfile");
}
@Test
void test() throws Exception {
}
}