I am trying to invoke a Lambda function that is triggered by an S3Event. I have created a bucket and added two images to it.
Below are the specifications of the bucket.
Below is my code, written in Java:
public String handleRequest(S3Event event, Context context) {
    context.getLogger().log("Received event: " + event);
    // Get the object from the event and show its content type
    String bucket = event.getRecords().get(0).getS3().getBucket().getName();
    String key = event.getRecords().get(0).getS3().getObject().getKey();
    try {
        S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
        String contentType = response.getObjectMetadata().getContentType();
        context.getLogger().log("CONTENT TYPE: " + contentType);
        return contentType;
    } catch (Exception e) {
        e.printStackTrace();
        // Note: the message names the object first, so the key must come first
        context.getLogger().log(String.format(
                "Error getting object %s from bucket %s. Make sure they exist and"
                + " your bucket is in the same region as this function.", key, bucket));
        throw e;
    }
}
and below is the error I am getting
com.amazonaws.services.lambda.runtime.events.S3Event not present
The code looks fine. Confirm that you have this class imported:
com.amazonaws.services.lambda.runtime.events.S3Event
Also make sure your class implements the RequestHandler interface.
If the issue still persists, follow this tutorial:
AWS Lambda with S3 for real-time data processing
Hope this helps!
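The "S3Event not present" error usually means the events library is missing from the deployment package. Assuming a Maven build (the version numbers below are only examples; use whatever is current), the dependencies would look something like this:

```xml
<!-- Provides com.amazonaws.services.lambda.runtime.events.S3Event -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-events</artifactId>
    <version>2.2.9</version>
</dependency>
<!-- Provides RequestHandler and Context -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.1</version>
</dependency>
```

Build a shaded/uber JAR so these classes are actually packaged with the function, not just present at compile time.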
Apparently, in the move from Spring Boot 1 to Spring Boot 2 (Spring 5), the encoding behavior of URL parameters for RestTemplate changed. It seems unusually difficult to pass a general query parameter to a RestTemplate so that characters with special meanings, such as "+", get properly escaped. Since "+" is a valid URL character, it doesn't get escaped, even though its meaning gets altered (see here). This seems bizarre, counter-intuitive, and against every convention on every other platform. More importantly, I can't figure out how to easily work around it: if I encode the string first, it gets double-encoded, because the "%"s get re-encoded. This seems like something very simple that the framework should do, but I'm not figuring it out.
Here is my code that worked in Spring Boot 1:
String url = "https://base/url/here";
UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(url);
for (Map.Entry<String, String> entry : query.entrySet()) {
    builder.queryParam(entry.getKey(), entry.getValue());
}
HttpEntity<TheResponse> resp = myRestTemplate.exchange(builder.toUriString(), ...);
However, now it won't encode the "+" character, so the other end is interpreting it as a space. What is the correct way to build this URL in Java Spring Boot 2?
Note - I also tried this, but it actually DOUBLE-encodes everything:
try {
    for (Map.Entry<String, String> entry : query.entrySet()) {
        builder.queryParam(entry.getKey(), URLEncoder.encode(entry.getValue(), "UTF-8"));
    }
} catch (Exception e) {
    System.out.println("Encoding error");
}
In the first one, if I put in "q" => "abc+1@efx.com", then the URL contains exactly "abc+1@efx.com" (i.e., not encoded at all). In the second one, the same input produces "abc%252B1%2540efx.com", which is DOUBLE-encoded.
I could hand-write an encoding method, but this seems (a) like overkill, and (b) like exactly the place where security problems and weird bugs creep in when you do encoding yourself. It seems insane to me that you can't just add a query parameter in Spring Boot 2; that seems like a basic task. What am I missing?
Found what I believe to be a decent solution. It turns out that a large part of the problem is actually the exchange function, which accepts a String for the URL but then re-encodes that URL for reasons I cannot fathom. However, exchange can instead be given a java.net.URI; in that case it does not try to encode anything, as the argument is already a URI. I then use java.net.URLEncoder.encode() to encode the individual pieces. I still have no idea why this isn't standard in Spring, but the following works.
private String mapToQueryString(Map<String, String> query) {
    List<String> entries = new LinkedList<String>();
    for (Map.Entry<String, String> entry : query.entrySet()) {
        try {
            entries.add(URLEncoder.encode(entry.getKey(), "UTF-8") + "=" + URLEncoder.encode(entry.getValue(), "UTF-8"));
        } catch (Exception e) {
            log.error("Unable to encode string for URL: " + entry.getKey() + " / " + entry.getValue(), e);
        }
    }
    return String.join("&", entries);
}
/* Later in the code */
String endpoint = "https://baseurl.example.com/blah";
String finalUrl = query.isEmpty() ? endpoint : endpoint + "?" + mapToQueryString(query);
URI uri;
try {
    uri = new URI(finalUrl);
} catch (URISyntaxException e) {
    log.error("Bad URL // " + finalUrl, e);
    return null;
}
/* ... */
HttpEntity<TheResponse> resp = myRestTemplate.exchange(uri, ...)
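As a minimal standalone illustration of the encoding step (a sketch, not the answer's exact code): URLEncoder escapes both "+" and "@", which is exactly what the query string needs, and handing the result to exchange as a java.net.URI keeps Spring from re-encoding the "%" signs.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeDemo {
    public static void main(String[] args) {
        // "+" must become %2B and "@" must become %40, otherwise the
        // receiving server reads the "+" as a space.
        String encoded = URLEncoder.encode("abc+1@efx.com", StandardCharsets.UTF_8);
        System.out.println(encoded); // abc%2B1%40efx.com

        // Build a URI from the already-encoded string; exchange(URI, ...)
        // sends it verbatim instead of encoding it a second time.
        URI uri = URI.create("https://base/url/here?q=" + encoded);
        System.out.println(uri);
    }
}
```

Note that URLEncoder performs application/x-www-form-urlencoded encoding (it also turns spaces into "+"), which is appropriate for query-string values.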
I am trying to load at least 4 CSV files from my S3 bucket into my RDS MySQL database. Every time the files are put in the bucket they will have a different name; the file names have the date appended at the end. I would like them to be loaded into the database automatically when they are put in the S3 bucket. So far all I have is the load function to connect to the database, and at this point I'm just trying to load one file. What would I do to have the file loaded automatically once it's put in the S3 bucket? Thanks for the help!
LambdaFunctionHandler file:
public class LambdaFunctionHandler implements RequestHandler<Service, ResponseClass> {

    public void loadService() {
        Statement stmt = null;
        try {
            Connection conn = DriverManager.getConnection("jdbc:mysql://connection/db", "user", "password");
            log.info("Connected to database.");
            // Load data SQL (stmt must be created from the connection,
            // otherwise executeUpdate throws a NullPointerException)
            stmt = conn.createStatement();
            String query = "LOAD DATA FROM S3 '" + S3_BUCKET_NAME + "' INTO TABLE " + sTablename
                    + " FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
                    + "LINES TERMINATED BY '\r\n' " + "IGNORE " + ignoreLines + " LINES";
            stmt.executeUpdate(query);
            System.out.println("Loaded table.");
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public ResponseClass handleRequest(Service arg0, Context arg1) {
        String path = "";
        return null;
    }
}
If you know the full key that the file you're loading into S3 will have, then the standard AmazonS3 client object has this method: boolean doesObjectExist(String bucketName, String objectName). By the "rules" of S3, uploading a file is atomic: the specified S3 key will not return true for this call until the file is completely uploaded.
So you can trigger the upload of your file, test for completeness with the doesObjectExist call, and once it returns true, run your Lambda function's load.
Alternatively, S3 also has a notification feature (if you want to keep feeding the AWS beast): you can turn on bucket event notifications and have one of them invoke a Lambda function directly. The feature is called S3 Event Notifications.
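For reference, an S3 Event Notification that invokes a Lambda function on every object creation is configured roughly like this (bucket-independent sketch; the Id and function ARN below are placeholders):

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "csv-upload-trigger",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:LoadCsvToRds",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

This can be applied with `aws s3api put-bucket-notification-configuration`, or set up in the S3 console under the bucket's Properties. The Lambda then receives the bucket name and object key in the event, so it works regardless of the date-stamped file name.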
I have a simple data structure:
"Issue" has a pointer to another class, "Status", in the field "status".
From the docs we know that include takes a key name, and the pointed-to data should be available in the result without any further action. But when I try to access the pointed-to data, I get null.
ParseQuery<Issue> query = ParseQuery.getQuery("Issues");
query.include("status");
query.whereEqualTo("owner", user);
query.findInBackground(new FindCallback<Issue>() {
    public void done(List<Issue> issueList, ParseException e) {
        if (e == null) {
            Log.d(TAG, "Retrieved " + issueList.size() + " issues");
            ParseObject status = issueList.get(0).getParseObject("status"); // NULL!
        } else {
            Log.d(TAG, "Error: " + e.getMessage());
        }
    }
});
Manually, from the console, I can see valid data and can jump from Issue to Status via the pointer (I have only one record in each).
Parse lib version: 1.11
What am I doing wrong?
I think it should work. Check your security and ACL settings on the Parse Status class (if you don't have the rights to read the status, you won't get it), make sure issueList.get(0) is not null, and make sure that the Status object you are pointing to really exists in the Parse Status table.
public interface UserService {
    @POST(Constants.Api.URL_REGISTRATION)
    @FormUrlEncoded
    BaseWrapper registerUser(@Field("first_name") String firstname, @Field("last_name") String lastname, @Field("regNumber") String phone, @Field("regRole") int role);
}

// The wrapper that calls it:
public BaseWrapper registerUser(User user) {
    return getUserService().registerUser(user.getFirstName(), user.getLastName(), user.getPhone(), user.getRole());
}
This throws an exception:
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $
Big thanks for any help.
Let's look at the error you are receiving.
Expected BEGIN_OBJECT
Your JSON is an object, and all JSON objects are enclosed in curly braces ({}). BEGIN_OBJECT is therefore {. And it's expecting it somewhere.
but was STRING
But instead it found a string, "Something". Still doesn't tell us where.
at line 1 column 1 path $
Ah, perfect. At line 1 column 1. Which is the start of the JSON. So you have forgotten to enclose the whole thing in {} (or at least you have forgotten the first one, but I bet you've forgotten them both).
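One quick way to confirm this diagnosis before handing the body to Gson is to log the raw response and check its first non-whitespace character; if it isn't {, the server is returning something other than a JSON object. (The helper below is only an illustration, not part of Retrofit or Gson.)

```java
public class JsonDiagnose {
    // Returns true if Gson would find BEGIN_OBJECT at line 1 column 1,
    // i.e. the trimmed body starts with '{'.
    static boolean looksLikeJsonObject(String body) {
        String trimmed = body.trim();
        return !trimmed.isEmpty() && trimmed.charAt(0) == '{';
    }

    public static void main(String[] args) {
        System.out.println(looksLikeJsonObject("{\"ok\":true}"));   // true
        // This is the "Expected BEGIN_OBJECT but was STRING" case:
        System.out.println(looksLikeJsonObject("Something broke")); // false
    }
}
```

If the check fails, the fix belongs on the server side (or in the error handling), not in the Gson model.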
Recently I faced a similar issue and solved it only by adding "Accept: application/json" to the header section. So, if you're using Retrofit 2.0:
1st solution: for the POST method, add a headers annotation like below:
@Headers({"Accept: application/json"})
@POST(Constants.Api.URL_REGISTRATION)
@FormUrlEncoded
BaseWrapper registerUser(@Field("first_name") String firstname,
                         @Field("last_name") String lastname,
                         @Field("regNumber") String phone,
                         @Field("regRole") int role);
2nd solution: add the header in your interceptor class like this (NB: this code is in Kotlin):
private fun getInterceptor(): Interceptor {
    try {
        return Interceptor {
            val request = it.request()
            it.proceed(
                request.newBuilder()
                    .header("Accept", "application/json")
                    .header("Authorization", "$accessTokenType $accessToken")
                    .build()
            )
        }
    } catch (exception: Exception) {
        throw Exception(exception.message)
    }
}
Hope it helps, happy coding :)
Cleaning and rebuilding the project works for me.
If you want to add an ArrayList to a JSON object and parse it with Gson, make sure the ArrayList is declared as ArrayList<JSONObject> instead of ArrayList<String>, and build it like this:
ArrayList<JSONObject> mItem = new ArrayList<JSONObject>();
mItem.add(jsonObject);
// Then use it like this:
JSONArray list = new JSONArray(mItem);
jsonObject.put("Item", list);
If your app was using stored data from a previous version and you changed the data type, you may encounter this error.
For example: I had something stored as a String in my previous version, and later updated the class to store it as an object type. When I ran the app, I got this error. After clearing the app data or reinstalling, the error went away.
Clearing the app data might be an easy fix.
SOLUTION
In Kotlin we can use a Response of ResponseBody and handle the raw response inside it.
viewModelScope.launch(Dispatchers.IO) {
    mainRepository.getAPIData(
        Constants.getRequestBody(requestBody.toString())
    ).let {
        if (it.isSuccessful && it.code() == 200) {
            val responseBody = it.body()
            val res: String? = responseBody?.string()
            try {
                val main = JSONObject(res!!)
                Log.e("TAG", "onSuccess: " + main.toString())
                if (main.getInt("StatusCode") == 200) {
                    val type = object : TypeToken<Response>() {}.type
                    val response: Response = Gson().fromJson(main.toString(), type)
                    Log.e("TAG", "onSuccess: " + response.toString())
                } else {
                    apiResponseData.postValue(Resource.error(main.getString("Message"), null))
                    Log.e("TAG", "onFail: " + main.toString())
                }
            } catch (exception: JSONException) {
                Log.e("TAG", "Exception: " + exception.message)
            }
        }
    }
}
Here, Response comes from Retrofit and ResponseBody comes from OkHttp. Response is your actual model response, e.g. UserResponse; getAPIData() is the API call, which returns a Response of ResponseBody; and apiResponseData is MutableLiveData. Using this you can avoid the JSON type-cast error in the response.
It's a ProGuard problem: in release builds, minifyEnabled true broke the API models. You need to add Serializable to your ResponseModel and RequestModel API classes.
https://i.stack.imgur.com/uHN22.png
In crawler4j we can override the function boolean shouldVisit(WebURL url) and control whether a particular URL is allowed to be crawled by returning true or false.
But can we add URL(s) at runtime? If yes, what are the ways to do that?
Currently I can add URLs at the beginning of the program using the addSeed(String url) function, before start(BasicCrawler.class, numberOfCrawlers) in the CrawlController class, but if I try to add a new URL later using addSeed(String url), it gives an error. Here is the error image.
Any help would be appreciated, and please let me know if any more detail about the project is required to answer the question.
You can do this.
Use public void schedule(WebURL url), a member of the Frontier class, to add URLs to the crawler frontier. For this your URL needs to be of type WebURL. To see how to make a WebURL out of a string, have a look at addSeed() (code below) in the CrawlController class, which converts the string URL into a WebURL.
Also, use the existing Frontier instance.
Hope this helps.
public void addSeed(String pageUrl, int docId) {
    String canonicalUrl = URLCanonicalizer.getCanonicalURL(pageUrl);
    if (canonicalUrl == null) {
        logger.error("Invalid seed URL: " + pageUrl);
        return;
    }
    if (docId < 0) {
        docId = docIdServer.getDocId(canonicalUrl);
        if (docId > 0) {
            // This URL is already seen.
            return;
        }
        docId = docIdServer.getNewDocID(canonicalUrl);
    } else {
        try {
            docIdServer.addUrlAndDocId(canonicalUrl, docId);
        } catch (Exception e) {
            logger.error("Could not add seed: " + e.getMessage());
        }
    }
    WebURL webUrl = new WebURL();
    webUrl.setURL(canonicalUrl);
    webUrl.setDocid(docId);
    webUrl.setDepth((short) 0);
    if (!robotstxtServer.allows(webUrl)) {
        logger.info("Robots.txt does not allow this seed: " + pageUrl);
    } else {
        frontier.schedule(webUrl); // method that adds a URL to the frontier at run time
    }
}
Presumably you can implement this function however you like and have it depend on a list of URLs that should not be crawled. The implementation of shouldVisit then involves asking whether a given URL is in your list of forbidden URLs (or permitted URLs), and returning true or false on that basis.
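A minimal sketch of that idea in plain Java (deliberately independent of crawler4j's WebURL type; the class and field names here are made up for illustration):

```java
import java.util.Set;

public class BlocklistFilter {
    private final Set<String> forbiddenPrefixes;

    public BlocklistFilter(Set<String> forbiddenPrefixes) {
        this.forbiddenPrefixes = forbiddenPrefixes;
    }

    // The same test a crawler4j shouldVisit override would perform:
    // refuse any URL that falls under a forbidden prefix.
    public boolean shouldVisit(String url) {
        for (String prefix : forbiddenPrefixes) {
            if (url.startsWith(prefix)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BlocklistFilter filter =
                new BlocklistFilter(Set.of("https://example.com/private/"));
        System.out.println(filter.shouldVisit("https://example.com/public/page")); // true
        System.out.println(filter.shouldVisit("https://example.com/private/x"));   // false
    }
}
```

In a real crawler4j override, the same check would sit inside shouldVisit, with url.getURL() supplying the string to test.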