I have two applications communicating via gRPC.
When I try to send a span context, the extracted context is always null:
Map<String, String> tracingMetadata = new HashMap<>();
tracingMetadata.put("trace-id", traceId);
tracingMetadata.put("span-id", spanId);
SpanContext extracted = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(tracingMetadata));
Therefore, I send it via metadata in a ClientInterceptor, like this:
return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
        next.newCall(method, callOptions)) {
    @Override
    public void start(Listener<RespT> responseListener, Metadata headers) {
        headers.put(Metadata.Key.of("trace-id", Metadata.ASCII_STRING_MARSHALLER), traceId);
        headers.put(Metadata.Key.of("span-id", Metadata.ASCII_STRING_MARSHALLER), spanId);
        ...
When I try to recreate a span context from this data, it either complains about incompatible types, or the context doesn't change: pre and after hold the same context.
private void injectParentContext(Span span) {
    final String traceId = HeadersServerInterceptor.getTraceIdHeader();
    final String spanId = HeadersServerInterceptor.getSpanIdHeader();
    final Map<String, String> contextValues = new HashMap<>();
    contextValues.put("x-b3-trace-id", traceId);
    contextValues.put("x-b3-spanid", spanId);
    final SpanContext pre = span.context();
    tracer.inject(span.context(), Format.Builtin.TEXT_MAP, new TextMapAdapter(contextValues));
    final SpanContext after = span.context();
}
I tried different Format.Builtin formats and adapters; nothing works.
What am I doing wrong?
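For reference, the usual OpenTracing pattern on the receiving side is to extract the incoming values into a SpanContext and then start a new span as a child of it; inject only writes an existing context into a carrier, it does not rebuild one from headers. A minimal sketch, assuming a tracer whose codec expects B3-style header names (the header names and the operation name below are placeholders, not taken from the question):

Map<String, String> carrier = new HashMap<>();
carrier.put("x-b3-traceid", traceId);   // header names are assumptions; they must
carrier.put("x-b3-spanid", spanId);     // match whatever the tracer's codec expects
carrier.put("x-b3-sampled", "1");

// extract() rebuilds a SpanContext from the carrier...
SpanContext parent = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(carrier));

// ...and the new server-side span is linked to it as a child.
Span serverSpan = tracer.buildSpan("handle-request")
        .asChildOf(parent)
        .start();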
I'm using Apache Kafka Streams, where I added a transform step to my stream:
final StreamsBuilder streamsBuilder = new StreamsBuilder();
final StoreBuilder<KeyValueStore<String, byte[]>> correlationStore =
        Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore(STORE_NAME),
                Serdes.String(),
                Serdes.ByteArray());
streamsBuilder.addStateStore(correlationStore);
streamsBuilder.stream(topicName, inputConsumed)
        .peek(InboundPendingMessageStreamer::logEntries)
        .transform(() -> new CleanerTransformer<String, byte[], KeyValue<String, byte[]>>(Duration.ofMillis(5000), STORE_NAME), STORE_NAME)
        .toTable();
I'm having difficulty understanding the CleanerTransformer class I created. In its init method, I set up a schedule with a scanFrequency and a PunctuationType:
@Override
public void init(ProcessorContext context) {
    this.stateStore = context.getStateStore(purgeStoreName);
    context.schedule(scanFrequency, PunctuationType.STREAM_TIME, timestamp -> {
        try (final KeyValueIterator<K, byte[]> all = stateStore.all()) {
            while (all.hasNext()) {
                final var headers = context.headers();
                final KeyValue<K, byte[]> record = all.next();
            }
        }
    });
}
When I add an event to the stream, I get the message in the schedule callback, but it's only executed once.
My understanding was that it should be executed at every interval configured by scanFrequency.
Any idea what I'm doing wrong here?
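For comparison: PunctuationType.STREAM_TIME punctuations only fire when new records arrive and advance stream time, while PunctuationType.WALL_CLOCK_TIME punctuations fire on a real-time timer. A minimal sketch of the same init method using wall-clock punctuation, assuming the intent is to scan the store at a fixed interval regardless of traffic:

@Override
public void init(ProcessorContext context) {
    this.stateStore = context.getStateStore(purgeStoreName);
    // WALL_CLOCK_TIME fires every scanFrequency of real time, even when no
    // new records are flowing through the topology.
    context.schedule(scanFrequency, PunctuationType.WALL_CLOCK_TIME, timestamp -> {
        try (final KeyValueIterator<K, byte[]> all = stateStore.all()) {
            while (all.hasNext()) {
                final KeyValue<K, byte[]> record = all.next();
                // inspect/purge entries here
            }
        }
    });
}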
I'm trying to use M2Doc programmatically. I managed to generate my .docx file without errors in the validation step, but I'm getting the following error in the generated document:
{m:self.Name} Couldn't find the 'aqlFeatureAccess(org.eclipse.emf.common.util.URI.Hierarchical,java.lang.String)' service
The "self.Name" part is what I wrote in my template.
I think I'm lacking some kind of reference to a service but I don't know how to fix it.
The self variable is a reference to a model based on a meta-model I created. But I'm not sure I imported it correctly in my code.
I based my code on the code I found on the M2Doc website plus some code I found on their GitHub, especially concerning how to add a service to the queryEnvironment.
I searched the source code of Acceleo and M2Doc to see which services they add, but it seems they already import all the services I'm using.
As I said, the validation part is going well and doesn't generate a validation file.
public static void parseDocument(String templateName) throws Exception{
final URI templateURI = URI.createFileURI("Template/"+templateName+"."+M2DocUtils.DOCX_EXTENSION_FILE);
final IQueryEnvironment queryEnvironment =
org.eclipse.acceleo.query.runtime.Query.newEnvironmentWithDefaultServices(null);
final Map<String, String> options = new HashMap<>(); // can be empty
M2DocUtils.prepareEnvironmentServices(queryEnvironment, templateURI, options); // delegate to IServicesConfigurator
prepareEnvironmentServicesCustom(queryEnvironment, options);
final IClassProvider classProvider = new ClassProvider(ClassLoader.getSystemClassLoader()); // use M2DocPlugin.getClassProvider() when running inside Eclipse
try (DocumentTemplate template = M2DocUtils.parse(templateURI, queryEnvironment, classProvider)) {
ValidationMessageLevel validationLevel = validateDocument(template, queryEnvironment, templateName);
if(validationLevel == ValidationMessageLevel.OK){
generateDocument(template, queryEnvironment, templateName, "Model/ComplexKaosModel.kaos");
}
}
}
public static void prepareEnvironmentServicesCustom(IQueryEnvironment queryEnvironment, Map<String, String> options){
Set<IService> services = ServiceUtils.getServices(queryEnvironment, FilterService.class);
ServiceUtils.registerServices(queryEnvironment, services);
M2DocUtils.getConfigurators().forEach((configurator) -> {
ServiceUtils.registerServices(queryEnvironment, configurator.getServices(queryEnvironment, options));
});
}
public static void generateDocument(DocumentTemplate template, IQueryEnvironment queryEnvironment,
String templateName, String modelPath)throws Exception{
final Map<String, Object> variable = new HashMap<>();
variable.put("self", URI.createFileURI(modelPath));
final Monitor monitor = new BasicMonitor.Printing(System.out);
final URI outputURI = URI.createFileURI("Generated/"+templateName+".generated."+M2DocUtils.DOCX_EXTENSION_FILE);
M2DocUtils.generate(template, queryEnvironment, variable, outputURI, monitor);
}
The variable "self" contains an URI:
variable.put("self", URI.createFileURI(modelPath));
You have to load your model and set the value of self to an element from your model using something like:
final ResourceSet rs = new ResourceSetImpl();
final Resource r = rs.getResource(uri, true);
final EObject value = r.getContents()...;
variable.put("self", value);
You can get more details on resource loading in the EMF documentation.
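For completeness, a minimal sketch of loading the model file and binding self to its root element, assuming an XMI-serialized .kaos model and that MyPackage stands for the generated EPackage of your metamodel (both are placeholders for your own generated code):

// Register the metamodel and a resource factory for the model file extension.
final ResourceSet rs = new ResourceSetImpl();
rs.getPackageRegistry().put(MyPackage.eNS_URI, MyPackage.eINSTANCE);
rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
        .put("kaos", new XMIResourceFactoryImpl());

// Load the model and use one of its root elements as the value of "self".
final URI modelURI = URI.createFileURI("Model/ComplexKaosModel.kaos");
final Resource r = rs.getResource(modelURI, true);
final EObject value = r.getContents().get(0);
variable.put("self", value);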
I am able to create an order using the Square API (v2/locations/{location_id}/orders) and I get an order ID back, but I am not able to retrieve the order details. Also, how can I see this created order on the Square dashboard? Please help me.
I am using the method below to do it:
public CreateOrderResponse createOrder(String locationId, CreateOrderRequest body) throws ApiException {
Object localVarPostBody = body;
// verify the required parameter 'locationId' is set
if (locationId == null) {
throw new ApiException(400, "Missing the required parameter 'locationId' when calling createOrder");
}
// verify the required parameter 'body' is set
if (body == null) {
throw new ApiException(400, "Missing the required parameter 'body' when calling createOrder");
}
// create path and map variables
String localVarPath = "/v2/locations/{location_id}/orders".replaceAll("\\{" + "location_id" + "\\}",
apiClient.escapeString(locationId.toString()));
// query params
List<Pair> localVarQueryParams = new ArrayList<Pair>();
Map<String, String> localVarHeaderParams = new HashMap<String, String>();
Map<String, Object> localVarFormParams = new HashMap<String, Object>();
final String[] localVarAccepts = { "application/json" };
final String localVarAccept = apiClient.selectHeaderAccept(localVarAccepts);
final String[] localVarContentTypes = { "application/json" };
final String localVarContentType = apiClient.selectHeaderContentType(localVarContentTypes);
String[] localVarAuthNames = new String[] { "oauth2" };
GenericType<CreateOrderResponse> localVarReturnType = new GenericType<CreateOrderResponse>() {
};
CompleteResponse<CreateOrderResponse> completeResponse = (CompleteResponse<CreateOrderResponse>) apiClient
.invokeAPI(localVarPath, "POST", localVarQueryParams, localVarPostBody, localVarHeaderParams,
localVarFormParams, localVarAccept, localVarContentType, localVarAuthNames,
localVarReturnType);
return completeResponse.getData();
}
Thanks
The orders endpoint is only for creating itemized orders for e-commerce transactions. You won't see them anywhere until you charge them, and then you'll see the itemizations for the order in your dashboard with the transaction.
I think I have an interesting question for all of you today. In the code below you will notice I have two SparkContexts: one for Spark Streaming and another, normal SparkContext. According to best practices you should only have one SparkContext per Spark application, even though it's possible to circumvent this via allowMultipleContexts in the configuration.
The problem is, I need to retrieve data from Hive and from a Kafka topic to do some logic, and whenever I submit my application it obviously fails with "Cannot have 2 Spark Contexts Running on JVM".
My question is: is there a more correct way to do this than what I am doing right now?
public class MainApp {
private final String logFile= Properties.getString("SparkLogFileDir");
private static final String KAFKA_GROUPID = Properties.getString("KafkaGroupId");
private static final String ZOOKEEPER_URL = Properties.getString("ZookeeperURL");
private static final String KAFKA_BROKER = Properties.getString("KafkaBroker");
private static final String KAFKA_TOPIC = Properties.getString("KafkaTopic");
private static final String Database = Properties.getString("HiveDatabase");
private static final Integer KAFKA_PARA = Properties.getInt("KafkaParrallel");
public static void main(String[] args){
//set settings
String sql="";
//START APP
System.out.println("Starting NPI_TWITTERAPP...." + new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
System.out.println("Configuring Settings...."+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
SparkConf conf = new SparkConf()
.setAppName(Properties.getString("SparkAppName"))
.setMaster(Properties.getString("SparkMasterUrl"));
//Set Spark/hive/sql Context
JavaSparkContext sc = new JavaSparkContext(conf);
JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(5000));
JavaHiveContext HiveSqlContext = new JavaHiveContext(sc);
//Check if Twitter Hive Table Exists
try {
HiveSqlContext.sql("DROP TABLE IF EXISTS "+Database+"TWITTERSTORE");
HiveSqlContext.sql("CREATE TABLE IF NOT EXISTS "+Database+".TWITTERSTORE "
+" (created_at String, id String, id_str String, text String, source String, truncated String, in_reply_to_user_id String, processed_at String, lon String, lat String)"
+" STORED AS TEXTFILE");
}catch(Exception e){
System.out.println(e);
}
//Check if Ivapp Table Exists
sql ="CREATE TABLE IF NOT EXISTS "+Database+".IVAPPGEO AS SELECT DISTINCT a.LATITUDE, a.LONGITUDE, b.ODNCIRCUIT_OLT_CLLI, b.ODNCIRCUIT_OLT_TID, a.CITY, a.STATE, a.ZIP FROM "
+Database+".T_PONNMS_SERVICE B, "
+Database+".CLLI_LATLON_MSTR A WHERE a.BID_CLLI = substr(b.ODNCIRCUIT_OLT_CLLI,0,8)";
try {
System.out.println(sql + new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
HiveSqlContext.sql(sql);
sql = "SELECT LATITUDE, LONGITUDE, ODNCIRCUIT_OLT_CLLI, ODNCIRCUIT_OLT_TID, CITY, STATE, ZIP FROM "+Database+".IVAPPGEO";
JavaSchemaRDD RDD_IVAPPGEO = HiveSqlContext.sql(sql).cache();
}catch(Exception e){
System.out.println(sql + new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
}
//JavaHiveContext hc = new JavaHiveContext();
System.out.println("Retrieve Data from Kafka Topic: "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
Map<String, Integer> topicMap = new HashMap<String, Integer>();
topicMap.put(KAFKA_TOPIC,KAFKA_PARA);
JavaPairReceiverInputDStream<String, String> messages = KafkaUtils.createStream(
jssc, KAFKA_GROUPID, ZOOKEEPER_URL, topicMap);
JavaDStream<String> json = messages.map(
new Function<Tuple2<String, String>, String>() {
private static final long serialVersionUID = 42l;
@Override
public String call(Tuple2<String, String> message) {
return message._2();
}
}
);
System.out.println("Completed Kafka Messages... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
System.out.println("Filtering Resultset... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
JavaPairDStream<Long, String> tweets = json.mapToPair(
new TwitterFilterFunction());
JavaPairDStream<Long, String> filtered = tweets.filter(
new Function<Tuple2<Long, String>, Boolean>() {
private static final long serialVersionUID = 42l;
@Override
public Boolean call(Tuple2<Long, String> tweet) {
return tweet != null;
}
}
);
JavaDStream<Tuple2<Long, String>> tweetsFiltered = filtered.map(
new TextFilterFunction());
tweetsFiltered = tweetsFiltered.map(
new StemmingFunction());
System.out.println("Finished Filtering Resultset... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
System.out.println("Processing Sentiment Data... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
//calculate positive tweets
JavaPairDStream<Tuple2<Long, String>, Float> positiveTweets =
tweetsFiltered.mapToPair(new PositiveScoreFunction());
//calculate negative tweets
JavaPairDStream<Tuple2<Long, String>, Float> negativeTweets =
tweetsFiltered.mapToPair(new NegativeScoreFunction());
JavaPairDStream<Tuple2<Long, String>, Tuple2<Float, Float>> joined =
positiveTweets.join(negativeTweets);
//Score tweets
JavaDStream<Tuple4<Long, String, Float, Float>> scoredTweets =
joined.map(new Function<Tuple2<Tuple2<Long, String>,
Tuple2<Float, Float>>,
Tuple4<Long, String, Float, Float>>() {
private static final long serialVersionUID = 42l;
@Override
public Tuple4<Long, String, Float, Float> call(
Tuple2<Tuple2<Long, String>, Tuple2<Float, Float>> tweet)
{
return new Tuple4<Long, String, Float, Float>(
tweet._1()._1(),
tweet._1()._2(),
tweet._2()._1(),
tweet._2()._2());
}
});
System.out.println("Finished Processing Sentiment Data... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
System.out.println("Outputting Tweets Data to flat file "+Properties.getString("HdfsOutput")+" ... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
JavaDStream<Tuple5<Long, String, Float, Float, String>> result =
scoredTweets.map(new ScoreTweetsFunction());
result.foreachRDD(new FileWriter());
System.out.println("Outputting Sentiment Data to Hive... "+ new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime()));
jssc.start();
jssc.awaitTermination();
}
}
Creating SparkContext
You can create a SparkContext instance with or without creating a SparkConf object first.
Getting Existing or Creating New SparkContext (getOrCreate methods)
getOrCreate(): SparkContext
getOrCreate(conf: SparkConf): SparkContext
SparkContext.getOrCreate methods allow you to get the existing SparkContext or create a new one.
import org.apache.spark.SparkContext
val sc = SparkContext.getOrCreate()
// Using an explicit SparkConf object
import org.apache.spark.SparkConf
val conf = new SparkConf()
.setMaster("local[*]")
.setAppName("SparkMe App")
val sc = SparkContext.getOrCreate(conf)
Refer here: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-sparkcontext.html
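Since the rest of the question is in Java, a rough Java equivalent of the same idea (getOrCreate returns the already-running SparkContext if there is one, otherwise it creates it):

SparkConf conf = new SparkConf()
        .setMaster("local[*]")
        .setAppName("SparkMe App");

// Reuse or create the single SparkContext, then wrap it for the Java API.
JavaSparkContext jsc = new JavaSparkContext(SparkContext.getOrCreate(conf));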
Apparently, if I use sc.close() to close the original SparkContext before starting the JavaStreamingContext, it works perfectly, with no errors or issues.
You can use a singleton ContextManager object that handles which context to provide.
public class ContextManager {

    private static JavaSparkContext context;
    private static String currentType;

    private ContextManager() {}

    public static JavaSparkContext getContext(String type) {
        if (type.equals(currentType) && context != null) {
            return context;
        } else if ("streaming".equals(type)) {
            // ... clean up the current context ...
            // ... initialize the context to a streaming context ...
            currentType = type;
        } else {
            // ... clean up the current context ...
            // ... initialize the context to a normal context ...
            currentType = type;
        }
        return context;
    }
}
There are some issues with this approach: in projects where you switch contexts quite rapidly, the overhead would be quite large.
You can access the SparkContext from your JavaStreamingContext and use that reference when creating additional contexts.
SparkConf sparkConfig = new SparkConf().setAppName("foo");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConfig, Durations.seconds(30));
SQLContext sqlContext = new SQLContext(jssc.sparkContext());
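Putting it together for the question's setup, a minimal sketch (assuming Spark 1.x, as in the question) that keeps one underlying SparkContext and derives both the streaming and the Hive/SQL contexts from it; HiveContext here stands in for the older JavaHiveContext used in the question:

SparkConf conf = new SparkConf()
        .setAppName("NPI_TWITTERAPP")   // app name and master are placeholders
        .setMaster("local[*]");

// One underlying SparkContext, shared by everything else.
JavaSparkContext sc = new JavaSparkContext(conf);

// The streaming context wraps the existing JavaSparkContext instead of
// creating a second SparkContext from the SparkConf.
JavaStreamingContext jssc = new JavaStreamingContext(sc, Durations.seconds(5));

// The Hive/SQL context is built on the same underlying SparkContext.
HiveContext hiveSqlContext = new HiveContext(sc.sc());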
I am using org.springframework.web.util.UriTemplate and I am trying to match this uri template:
http://{varName1}/path1/path2/{varName2}/{varName3}/{varName4}
with the following uri:
http://hostname/path1/path2/design/99999/product/schema/75016TC806AA/TC806AA.tar
Currently I get the following uri variables:
{varName1=hostname, varName2=design/99999/product/schema, varName3=75016TC806AA, varName4=TC806AA.tar}
But I would like to get the following uri variables:
{varName1=hostname, varName2=design, varName3=99999, varName4=product/schema/75016TC806AA/TC806AA.tar}
I tried to use wildcards such as * or + in my template, but that doesn't seem to work:
http://{varName1}/path1/path2/{varName2}/{varName3}/{varName4*}
http://{varName1}/path1/path2/{varName2}/{varName3}/{+varName4}
Edited
String url = "http://localhost/path1/path2/folder1/folder2/folder3/folder4/folder5";
UriTemplate uriTemplate = new UriTemplate(urlTemplateToMatch);
Map<String, String> uriVariables = uriTemplate.match(url);
String urlTemplateToMatch1 = "http://{varName1}/path1/path2/{varName2}/{varName3}/{varName4}";
uriVariables1 = {varName1=localhost, varName2=folder1/folder2/folder3, varName3=folder4, varName4=folder5}
String urlTemplateToMatch2 = "http://{varName1}/test1/test2/{varName2:.*?}/{varName3:.*?}/{varName4}";
uriVariables2 = {varName1=localhost, varName2:.*?=folder1/folder2/folder3, varName3:.*?=folder4, varName4=folder5}
String urlTemplateToMatch3 = "http://{varName1}/test1/test2/{varName2:\\w*}/{varName3:.\\w*}/{varName4}";
uriVariables3 = {varName1=localhost, varName2:\w*=folder1/folder2/folder3, varName3:\w*=folder4, varName4=folder5}
Try with:
http://{varName1}/path1/path2/{varName2:.*?}/{varName3:.*?}/{varName4}
or maybe
http://{varName1}/path1/path2/{varName2:\\w*}/{varName3:\\w*}/{varName4}
Edit
@RunWith(BlockJUnit4ClassRunner.class)
public class UriTemplateTest {
private String URI = "http://hostname/path1/path2/design/99999/product/schema/75016TC806AA/TC806AA.tar";
private String TEMPLATE_WORD = "http://{varName1}/path1/path2/{varName2:\\w*}/{varName3:\\w*}/{varName4}";
private String TEMPLATE_RELUCTANT = "http://{varName1}/path1/path2/{varName2:.*?}/{varName3:.*?}/{varName4}";
private Map<String, String> expected;
@Before
public void init() {
expected = new HashMap<String, String>();
expected.put("varName1", "hostname");
expected.put("varName2", "design");
expected.put("varName3", "99999");
expected.put("varName4", "product/schema/75016TC806AA/TC806AA.tar");
}
@Test
public void testTemplateWord() {
testTemplate(TEMPLATE_WORD);
}
@Test
public void testTemplateReluctant() {
testTemplate(TEMPLATE_RELUCTANT);
}
private void testTemplate(String template) {
UriTemplate ut = new UriTemplate(template);
Map<String, String> map = ut.match(URI);
Assert.assertEquals(expected, map);
}
}