2 questions:
Do you know a "best" way to use JNDI resources in Spring configuration?
How can I transform the static attribute configPath into a "value" I can use in these 2 methods, but also in other beans?
@Configuration
@ComponentScan(basePackages = {"com.fdilogbox.report.serveur"})
@EnableAspectJAutoProxy
public class SpringConfig {

    public static final String JNDI_RESOURCES = "java:comp/env/report/config";
    public static String configPath;

    @PostConstruct
    public void initConfig() throws IllegalArgumentException {
        try {
            final JndiTemplate template = new JndiTemplate();
            configPath = (String) template.lookup(JNDI_RESOURCES);
            Log4jConfigurer.initLogging(configPath + "/log4j.xml");
        } catch (NamingException | FileNotFoundException ex) {
            throw new IllegalArgumentException("Unable to start logging, because non-file resource!", ex);
        }
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer properties() {
        try {
            final JndiTemplate template = new JndiTemplate();
            configPath = (String) template.lookup(JNDI_RESOURCES);
        } catch (NamingException ex) {
            throw new IllegalArgumentException("JNDI resource not found : " + JNDI_RESOURCES, ex);
        }
        PropertySourcesPlaceholderConfigurer pspc = new PropertySourcesPlaceholderConfigurer();
        Resource[] resources = new FileSystemResource[]{new FileSystemResource(configPath + "/application.properties")};
        pspc.setLocations(resources);
        pspc.setIgnoreUnresolvablePlaceholders(true);
        return pspc;
    }
}
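One option worth considering (a sketch, not taken from the question) is to expose the looked-up JNDI value as its own bean, so other beans can inject it via constructor injection or `@Value` instead of reading a mutable static field:

```java
// Sketch, assuming a Spring context with the JNDI name from the question.
// Other beans can then declare a `String configPath` constructor parameter
// (with @Qualifier("configPath") if needed) rather than touching a static field.
@Bean
public String configPath() throws NamingException {
    return (String) new JndiTemplate().lookup("java:comp/env/report/config");
}
```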
I'm trying to use @Transactional in Spring Boot, but after several attempts I cannot get the transaction to work: even though an exception is thrown inside the method to trigger a rollback, nothing is rolled back, so I'm missing something.
If an exception happens in WebSocketHandler.sendNotificationToVenue in the ServiceImpl, I want to roll back the two insert statements.
Here is the ServiceImpl:
@Service
@Slf4j
public class OrderInfoServiceImpl implements OrderInfoService {

    @Resource
    private OrderInfoMapper orderInfoMapper;
    @Resource
    private OrderDetailsMapper orderdetailsMapper;
    @Resource
    private WebSocketHandler webSockethHandler;

    private OrderDetailsVO od = new OrderDetailsVO();

    @Transactional(rollbackFor = Exception.class)
    @Override
    public Map<String, String> insertOrderInfo(OrderInfoVO order) throws Exception {
        Map<String, String> rMap = new HashMap<>();
        rMap.put("result", "false");
        int oiSum = 0;
        try {
            for (int i = 0; i < order.getOdList().size(); i++) {
                if (order.getOdList().get(i) != null) {
                    int price = order.getOdList().get(i).getMiPrice();
                    int qty = order.getOdList().get(i).getOdQuantity();
                    oiSum += price * qty;
                }
            }
            order.setOiSum(oiSum);
            String oiMsg = "";
            for (int i = 0; i < order.getOiMessage().size(); i++) {
                oiMsg += order.getOiMessage().get(i).get(0) + "/ ";
            }
            oiMsg = oiMsg.substring(0, oiMsg.lastIndexOf("/"));
            order.setOiMsg(oiMsg);
            orderInfoMapper.insertOrderInfo(order);
            for (int i = 0; i < order.getOdList().size(); i++) {
                if (order.getOdList().get(i) != null) {
                    od = order.getOdList().get(i);
                    od.setOiNum(order.getOiNum());
                    orderdetailsMapper.insertOrderDetails(od);
                }
            }
            webSockethHandler.sendNotificatonToVenue(order.getViNum(), order.getOiNum());
            rMap.put("result", "true");
        } catch (Exception e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
        return rMap;
    }
}
And here is the transaction AOP configuration class:
@Configuration
@EnableAspectJAutoProxy
@EnableTransactionManagement
@Slf4j
public class TransactionAOP {

    @Resource
    private DataSourceTransactionManager dstm;

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.hikari")
    public DataSource getDS() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DataSourceTransactionManager txManager() {
        return new DataSourceTransactionManager(getDS());
    }

    @Bean
    public TransactionInterceptor txInterceptor() {
        log.debug("transaction starts...");
        TransactionInterceptor txInterceptor = new TransactionInterceptor();
        Properties prop = new Properties();
        List<RollbackRuleAttribute> rollbackRules = new ArrayList<>();
        rollbackRules.add(new RollbackRuleAttribute(Exception.class));
        DefaultTransactionAttribute readOnly = new DefaultTransactionAttribute(TransactionDefinition.PROPAGATION_REQUIRED);
        readOnly.setReadOnly(true);
        readOnly.setTimeout(30);
        RuleBasedTransactionAttribute update = new RuleBasedTransactionAttribute(TransactionDefinition.PROPAGATION_REQUIRED, rollbackRules);
        update.setTimeout(30);
        prop.setProperty("select*", readOnly.toString());
        prop.setProperty("get*", readOnly.toString());
        prop.setProperty("find*", readOnly.toString());
        prop.setProperty("search*", readOnly.toString());
        prop.setProperty("count*", readOnly.toString());
        prop.setProperty("*", update.toString());
        txInterceptor.setTransactionAttributes(prop);
        txInterceptor.setTransactionManager(dstm);
        return txInterceptor;
    }

    @Bean
    public Advisor txAdvisor() {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        pointcut.setExpression("execution(* com.grabit.bdi.service..*ServiceImpl.*(..))");
        return new DefaultPointcutAdvisor(pointcut, txInterceptor());
    }
}
Thanks in advance for your time!
Could you try using the following instead?
@Transactional(rollbackFor = {Exception.class})
I thought I read somewhere that the API requires an array of Exception types to be stipulated in the attribute value. (In Java the array form uses curly braces, and a single class may also be passed without them.)
I want to load a certain number of properties files into the same java.util.Properties object. I achieve this correctly with the following code:
public class GloalPropReader {

    public static final Properties DISPATCHER = new Properties();
    public static final Properties GLOBAL_PROP = new Properties();

    public GloalPropReader() {
        try (InputStream input = GloalPropReader.class.getClassLoader().getResourceAsStream("dispatcher.properties")) {
            DISPATCHER.load(input);
        } catch (IOException ex) {
            throw new RuntimeException("Can't access dispatcher information", ex);
        }
        for (Object nth : DISPATCHER.keySet()) {
            String nthKey = (String) nth;
            String nthPathToOtherProps = (String) DISPATCHER.get(nthKey);
            Path p = Paths.get(nthPathToOtherProps);
            try (InputStream input = new FileInputStream(p.toFile())) {
                GLOBAL_PROP.load(input);
            } catch (IOException ex) {
                throw new RuntimeException("Can't access " + nthPathToOtherProps + " information", ex);
            }
        }
    }
}
And these are the properties files:
dispatcher.properties
path_to_prop_1=C:/Users/U/Desktop/k.properties
path_to_prop_2=C:/Users/U/Desktop/y.properties
k.properties
prop1=BLABLA
y.properties
prop2=BLEBLE
But what I would like to achieve is to throw a RuntimeException if 2 properties files contain the same key. For instance, I would like this class to throw an exception if k.properties and y.properties were:
k.properties
prop1=BLABLA
y.properties
prop1=BLEBLE
EDIT
It's the same as this post: Loading multiple properties files, but I don't want the overriding logic when 2 keys are equal.
public static class GloalPropReader {

    private final Properties K_PROPERTIES = new Properties();
    private final Properties Y_PROPERTIES = new Properties();

    public GloalPropReader() {
        loadProperties("k.properties", K_PROPERTIES);
        loadProperties("y.properties", Y_PROPERTIES);
        Set<Object> intersection = new HashSet<>(K_PROPERTIES.keySet());
        intersection.retainAll(Y_PROPERTIES.keySet());
        if (!intersection.isEmpty()) {
            throw new IllegalStateException("Property intersection detected " + intersection);
        }
    }

    private void loadProperties(String name, Properties properties) {
        try (InputStream input = GloalPropReader.class.getClassLoader().getResourceAsStream(name)) {
            properties.load(input);
        } catch (IOException ex) {
            throw new RuntimeException("Can't access " + name, ex);
        }
    }
}
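The intersection check above compares two fixed Properties objects. If you load an arbitrary number of files into one shared object (as in the original dispatcher-driven loop), another option is to fail at load time. This is a sketch, assuming that detail; the class name StrictProperties is hypothetical. It relies on Properties.load() inserting entries via put(), so a duplicate key is rejected the moment the second file defines it:

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch: a Properties subclass that throws as soon as load() encounters a key
// that is already present, instead of silently overriding it.
public class StrictProperties extends Properties {
    @Override
    public synchronized Object put(Object key, Object value) {
        if (containsKey(key)) {
            throw new IllegalStateException("Duplicate property key: " + key);
        }
        return super.put(key, value);
    }

    public static void main(String[] args) throws Exception {
        StrictProperties props = new StrictProperties();
        // First "file": distinct keys load fine.
        props.load(new StringReader("prop1=BLABLA\nprop2=BLEBLE\n"));
        try {
            // Second "file" redefines prop1 -> rejected.
            props.load(new StringReader("prop1=OTHER\n"));
            System.out.println("no duplicate detected");
        } catch (IllegalStateException e) {
            System.out.println("duplicate detected: " + e.getMessage());
            // prints: duplicate detected: Duplicate property key: prop1
        }
    }
}
```

The same GLOBAL_PROP loop from the question would then work unchanged; only the Properties instance is swapped for the strict variant.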
I have read about partitioning in Spring Batch and found an example which demonstrates it. The example reads persons from CSV files, does some processing and inserts the data into a database. So in this example 1 partition = 1 file, and the partitioner implementation looks like this:
public class MultiResourcePartitioner implements Partitioner {

    private final Logger logger = LoggerFactory.getLogger(MultiResourcePartitioner.class);

    public static final String FILE_PATH = "filePath";
    private static final String PARTITION_KEY = "partition";

    private final Collection<Resource> resources;

    public MultiResourcePartitioner(Collection<Resource> resources) {
        this.resources = resources;
    }

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> map = new HashMap<>(gridSize);
        int i = 0;
        for (Resource resource : resources) {
            ExecutionContext context = new ExecutionContext();
            context.putString(FILE_PATH, getPath(resource)); // depends on what logic you want to use to split
            map.put(PARTITION_KEY + i++, context);
        }
        return map;
    }

    private String getPath(Resource resource) {
        try {
            return resource.getFile().getPath();
        } catch (IOException e) {
            logger.warn("Can't get file from resource {}", resource);
            throw new RuntimeException(e);
        }
    }
}
But what if I have a single 10TB file? Does Spring Batch allow partitioning it in some way?
update:
I tried the following approach to achieve what I want:
make 2 steps - the first step divides the file into pieces, and the second step processes the pieces we got from the first step:
@Configuration
public class SingleFilePartitionedJob {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;
    @Autowired
    private StepBuilderFactory stepBuilderFactory;
    @Autowired
    private ToLowerCasePersonProcessor toLowerCasePersonProcessor;
    @Autowired
    private DbPersonWriter dbPersonWriter;
    @Autowired
    private ResourcePatternResolver resourcePatternResolver;

    @Value("${app.file-to-split}")
    private Resource resource;

    @Bean
    public Job splitFileProcessingJob() throws IOException {
        return jobBuilderFactory.get("splitFileProcessingJob")
                .incrementer(new RunIdIncrementer())
                .flow(splitFileIntoPiecesStep())
                .next(csvToDbLowercaseMasterStep())
                .end()
                .build();
    }

    private Step splitFileIntoPiecesStep() throws IOException {
        return stepBuilderFactory.get("splitFile")
                .tasklet(new FileSplitterTasklet(resource.getFile()))
                .build();
    }

    @Bean
    public Step csvToDbLowercaseMasterStep() throws IOException {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        partitioner.setResources(resourcePatternResolver.getResources("split/*.csv"));
        return stepBuilderFactory.get("csvReaderMasterStep")
                .partitioner("csvReaderMasterStep", partitioner)
                .gridSize(10)
                .step(csvToDataBaseSlaveStep())
                .taskExecutor(jobTaskExecutorSplitted())
                .build();
    }

    @Bean
    public Step csvToDataBaseSlaveStep() throws MalformedURLException {
        return stepBuilderFactory.get("csvToDatabaseStep")
                .<Person, Person>chunk(50)
                .reader(csvPersonReaderSplitted(null))
                .processor(toLowerCasePersonProcessor)
                .writer(dbPersonWriter)
                .build();
    }

    @Bean
    @StepScope
    public FlatFileItemReader csvPersonReaderSplitted(@Value("#{stepExecutionContext[fileName]}") String fileName) throws MalformedURLException {
        return new FlatFileItemReaderBuilder()
                .name("csvPersonReaderSplitted")
                .resource(new UrlResource(fileName))
                .delimited()
                .names(new String[]{"firstName", "lastName"})
                .fieldSetMapper(new BeanWrapperFieldSetMapper<Person>() {{
                    setTargetType(Person.class);
                }})
                .build();
    }

    @Bean
    public TaskExecutor jobTaskExecutorSplitted() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(30);
        taskExecutor.setCorePoolSize(25);
        taskExecutor.setThreadNamePrefix("cust-job-exec2-");
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }
}
tasklet:
public class FileSplitterTasklet implements Tasklet {

    private final Logger logger = LoggerFactory.getLogger(FileSplitterTasklet.class);

    private File file;

    public FileSplitterTasklet(File file) {
        this.file = file;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        int count = FileSplitter.splitTextFiles(file, 100);
        logger.info("File was split on {} files", count);
        return RepeatStatus.FINISHED;
    }
}
logic for splitting file:
public static int splitTextFiles(File bigFile, int maxRows) throws IOException {
    int fileCount = 1;
    try (BufferedReader reader = Files.newBufferedReader(Paths.get(bigFile.getPath()))) {
        String line = null;
        int lineNum = 1;
        // write all pieces to the split/ directory, including the first one
        Path splitFile = Paths.get("split/" + fileCount + "split.txt");
        BufferedWriter writer = Files.newBufferedWriter(splitFile, StandardOpenOption.CREATE);
        while ((line = reader.readLine()) != null) {
            if (lineNum > maxRows) {
                writer.close();
                lineNum = 1;
                fileCount++;
                splitFile = Paths.get("split/" + fileCount + "split.txt");
                writer = Files.newBufferedWriter(splitFile, StandardOpenOption.CREATE);
            }
            writer.append(line);
            writer.newLine();
            lineNum++;
        }
        writer.close();
    }
    return fileCount;
}
So I put all the file pieces into a dedicated directory.
But this doesn't work, because at the moment of context initialization the split/ folder does not exist yet.
update
I've come up with a workaround which works:
public class MultiResourcePartitionerWrapper implements Partitioner {

    private final MultiResourcePartitioner multiResourcePartitioner = new MultiResourcePartitioner();
    private final ResourcePatternResolver resourcePatternResolver;
    private final String pathPattern;

    public MultiResourcePartitionerWrapper(ResourcePatternResolver resourcePatternResolver, String pathPattern) {
        this.resourcePatternResolver = resourcePatternResolver;
        this.pathPattern = pathPattern;
    }

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        try {
            Resource[] resources = resourcePatternResolver.getResources(pathPattern);
            multiResourcePartitioner.setResources(resources);
            return multiResourcePartitioner.partition(gridSize);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
But it looks ugly. Is it a correct solution?
Spring Batch allows you to partition, but it's up to you how to do it.
You can simply split your 10TB file in the partitioner class (by number of pieces or by max rows), and have each partition read one split file. You can find a lot of examples of how to split a large file in Java, e.g.:
split very large text file by max rows
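Alternatively, a partitioner can assign line ranges instead of physically splitting the file, so the split step (and the missing split/ directory problem) disappears entirely. The sketch below is an assumption, not Spring Batch API: it only computes the ranges that each ExecutionContext would carry (the keys "startLine"/"endLine" and the class name are hypothetical), and each slave reader would then skip to its own range:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: divide a file of totalLines lines among gridSize
// workers as half-open line ranges [start, end).
public class LineRangePartitions {

    public static Map<String, long[]> partition(long totalLines, int gridSize) {
        Map<String, long[]> map = new LinkedHashMap<>();
        long chunk = (totalLines + gridSize - 1) / gridSize; // ceiling division
        for (int i = 0; i < gridSize; i++) {
            long start = (long) i * chunk;
            long end = Math.min(start + chunk, totalLines);
            if (start >= end) {
                break; // fewer non-empty partitions than gridSize
            }
            map.put("partition" + i, new long[]{start, end});
        }
        return map;
    }

    public static void main(String[] args) {
        partition(10, 3).forEach((k, v) ->
                System.out.println(k + " -> [" + v[0] + ", " + v[1] + ")"));
        // prints:
        // partition0 -> [0, 4)
        // partition1 -> [4, 8)
        // partition2 -> [8, 10)
    }
}
```

In a real job, each entry would become one ExecutionContext, and the step-scoped reader would read only its `[startLine, endLine)` slice (for example via FlatFileItemReader's setLinesToSkip/setMaxItemCount).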
My purpose: create a test for the DAO layer. My Hibernate config has the datasource specified through JNDI. I also use JBoss 5.1, so the transaction manager lookup is necessary.
<property name="hibernate.connection.datasource">java:jdbc/MysqlDS</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property>
<property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property>
To make the test work I need to bind these things. To do this I created a utility class that performs the binding. Full code below:
public class RegisteringJNDIWithDataSource {

    public static final String JNDI_MY_DS = "java:jdbc/MysqlDS";

    private static final String MYSQL_USER = "user";
    private static final String MYSQL_PASS = "pass";
    private static final String MYSQL_HOST = "localhost";

    public static String CTX_INITIAL_CONTEXT_FACTORY = "org.jnp.interfaces.NamingContextFactory";
    public static String CTX_URL_PKG_RPEFIXES = "org.jboss.naming";

    private void startRegistry() throws NamingException, RemoteException {
        System.out.println(LocateRegistry.getRegistry());
        Registry reg = LocateRegistry.createRegistry(1099);
        NamingServer server = new NamingServer();
        NamingContext.setLocal(server);
        System.out.println("RMI registry started.");
    }

    public InitialContext createInitialContextContext() throws NamingException {
        System.setProperty(Context.URL_PKG_PREFIXES, CTX_URL_PKG_RPEFIXES);
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, CTX_INITIAL_CONTEXT_FACTORY);
        InitialContext initialContextcontext = new InitialContext();
        return initialContextcontext;
    }

    /**
     * Registers the following JNDIs
     *
     * @throws RemoteException
     * @throws NamingException
     */
    public void registrate() throws RemoteException, NamingException {
        startRegistry();
        InitialContext ic;
        ic = createInitialContextContext();
        String[] cxts = JNDI_MY_DS.split("/");
        String inCxt = cxts[0];
        createSubcontext(ic, inCxt);
        for (int i = 1; i < cxts.length - 1; i++) {
            // if the data source name is like java:/comp/mysqldatasource
            // this takes care of creating subcontexts in jndi
            inCxt = inCxt + "/" + cxts[i];
            createSubcontext(ic, inCxt);
        }
        ic.rebind(JNDI_MY_DS, createMysqlDataSource("db_name"));
        // the following requires a JBoss-dependent class; maybe something can be done to generalize this
        TransactionManager tm = new CustomTXNManager();
        ic.bind("java:/TransactionManager", tm);
        UserTransaction ut = new CustomUserTransaction();
        ic.bind("UserTransaction", ut);
    }

    private static Context createSubcontext(Context ctx, String cxtName) throws NamingException {
        System.out.println(" creating subcontext " + cxtName);
        Context subctx = ctx;
        Name name = ctx.getNameParser("").parse(cxtName);
        for (int pos = 0; pos < name.size(); pos++) {
            String ctxName = name.get(pos);
            try {
                subctx = (Context) ctx.lookup(ctxName);
            } catch (NameNotFoundException e) {
                subctx = ctx.createSubcontext(ctxName);
            }
            // The current subctx will be the ctx for the next name component
            ctx = subctx;
        }
        return subctx;
    }

    public void unregistrate() throws NamingException {
        InitialContext context;
        context = createInitialContextContext();
        context.unbind(JNDI_MY_DS);
    }

    private MysqlConnectionPoolDataSource createMysqlDataSource(String database) throws NamingException {
        MysqlConnectionPoolDataSource dataSource;
        dataSource = new MysqlConnectionPoolDataSource();
        dataSource.setUser(MYSQL_USER);
        dataSource.setPassword(MYSQL_PASS);
        dataSource.setServerName(MYSQL_HOST);
        dataSource.setPort(3306);
        dataSource.setDatabaseName(database);
        return dataSource;
    }

    public static void main(String args[]) {
        RegisteringJNDIWithDataSource dataSource = new RegisteringJNDIWithDataSource();
        try {
            dataSource.registrate();
        } catch (RemoteException ex) {
            ex.printStackTrace();
        } catch (NamingException ex) {
            ex.printStackTrace();
        }
    }

    public static class CustomTXNManager extends TransactionManagerImple implements Serializable {
        private static final long serialVersionUID = 1L;

        public CustomTXNManager() {
        }
    }

    public static class CustomUserTransaction extends UserTransactionImple implements Serializable {
        private static final long serialVersionUID = 1L;

        public CustomUserTransaction() {
        }
    }
}
All works fine when called from the console. But when I call this from JUnit I get an exception.
@Test
public void test() throws RemoteException, NamingException {
    RegisteringJNDIWithDataSource j = new RegisteringJNDIWithDataSource();
    j.registrate();
}
javax.naming.OperationNotSupportedException
at com.sun.jndi.rmi.registry.RegistryContext.createSubcontext(RegistryContext.java:230)
at javax.naming.InitialContext.createSubcontext(InitialContext.java:464)
I have an annotation-based bean configuration for a placeholder.
With the help of this placeholder, I can very easily use the property values I want.
@Bean
public static PropertySourcesPlaceholderConfigurer initPlaceholder() {
    PropertySourcesPlaceholderConfigurer placeholder = new PropertySourcesPlaceholderConfigurer();
    placeholder.setLocation(new ClassPathResource("some.properties"));
    placeholder.setIgnoreUnresolvablePlaceholders(true);
    return placeholder;
}
How can I set up this placeholder with ${some.properties} dynamic values?
placeholder.setLocation(new ClassPathResource(ANY_PROPERTIES));
I cannot use initPlaceholder(String property)...
What I have done about this was to create my own PropertyPlaceholderConfigurer (to load an external property file):
public class MyPropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    public static final String ANY_PROPERTY = "ANY_PROPERTY";
    private static final Log LOG = LogFactory.getLog(MyPropertyPlaceholderConfigurer.class);

    @Override
    protected void loadProperties(Properties props) throws IOException {
        String anyProperty = System.getProperty(ANY_PROPERTY);
        if (StringUtils.isEmpty(anyProperty)) {
            LOG.info("Using default configuration");
            super.loadProperties(props);
        } else {
            LOG.info("Setting HDFS LOCATION PATH TO : " + anyProperty);
            try {
                Path pt = new Path(anyProperty);
                Configuration conf = new Configuration();
                conf.set(FileSystem.FS_DEFAULT_NAME_KEY, anyProperty);
                FileSystem fs = FileSystem.get(conf);
                FSDataInputStream fileOpen = fs.open(pt);
                BufferedReader br = new BufferedReader(new InputStreamReader(fileOpen));
                props.load(br);
            } catch (Exception e) {
                LOG.error(e);
            }
        }
    }
}
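The core of that configurer is the fallback decision, independent of the Spring and Hadoop dependencies. This is a minimal sketch of just that logic (the class name PropertySourceChooser is hypothetical; "ANY_PROPERTY" mirrors the constant above): pick the properties source from a system property and fall back to the default when it is absent:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.StringReader;
import java.util.Properties;

// Sketch of the fallback logic: load from the file named by -DANY_PROPERTY=...
// when set, otherwise load the built-in default configuration.
public class PropertySourceChooser {

    public static Properties load(String defaultContent) throws IOException {
        Properties props = new Properties();
        String external = System.getProperty("ANY_PROPERTY");
        if (external == null || external.isEmpty()) {
            props.load(new StringReader(defaultContent)); // default configuration
        } else {
            try (InputStream in = new FileInputStream(external)) { // external file
                props.load(in);
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // prints "default" when ANY_PROPERTY is not set
        System.out.println(load("mode=default").getProperty("mode"));
    }
}
```

Running with `-DANY_PROPERTY=/etc/app/override.properties` would switch the source without touching the bean definition, which is the same idea the HDFS-based configurer implements.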