I would like to create a social graph of users and followers from Twitter.
I'm using the twitter4j library. I iterate over the graph starting from a selected user, get his followers, and then execute the function recursively for each of those followers. The problem is that I hit the rate limit very quickly.
public void getFollowers(Twitter twitter, String twitterScreenName)
{
    try
    {
        // -1 asks for the first page of follower IDs
        IDs followerIDs = twitter.getFollowersIDs(twitterScreenName, -1);
        long[] followerIdList = followerIDs.getIDs();
        for (long id : followerIdList)
        {
            // one showUser call per follower: this is what exhausts the rate limit so quickly
            twitter4j.User user = twitter.showUser(id);
            String screenName = user.getScreenName();
            System.out.println("Name: " + screenName);
            getFollowers(twitter, screenName);
        }
    }
    catch (TwitterException e)
    {
        System.err.println(e.getMessage());
    }
}
The result is unsatisfactory because I can't get deeper into the followers of the user's followers, and so on. I know I could wait 15 minutes and then resume the program, but that would take too long and the resulting graph is very sparse.
Is there another way around this problem, or are there other tools similar to twitter4j that can solve the problem of building the graph? Thanks for any help.
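One way to stretch the limits, sketched below under assumptions (not a drop-in fix; the method name and depth handling are mine): page through follower IDs with the cursor API, resolve users in batches of up to 100 with lookupUsers instead of calling showUser once per follower, and when twitter4j reports a rate limit, sleep until the window resets instead of aborting. A user with 5000 followers then costs about 51 calls rather than 5000.

// Sketch: cursor paging + batched lookups + waiting out rate-limit windows
public void getFollowersPatiently(Twitter twitter, String screenName, int depth)
        throws TwitterException, InterruptedException
{
    if (depth <= 0) return; // bound the recursion, fan-out grows exponentially
    long cursor = -1;
    do
    {
        IDs followerIDs;
        try
        {
            followerIDs = twitter.getFollowersIDs(screenName, cursor);
        }
        catch (TwitterException e)
        {
            if (e.exceededRateLimitation() && e.getRateLimitStatus() != null)
            {
                // wait out the 15-minute window, then retry the same cursor
                Thread.sleep((e.getRateLimitStatus().getSecondsUntilReset() + 1) * 1000L);
                continue;
            }
            throw e;
        }
        long[] ids = followerIDs.getIDs();
        for (int i = 0; i < ids.length; i += 100)
        {
            long[] batch = java.util.Arrays.copyOfRange(ids, i, Math.min(i + 100, ids.length));
            // lookupUsers resolves up to 100 users in a single API call
            for (twitter4j.User follower : twitter.lookupUsers(batch))
            {
                System.out.println("Name: " + follower.getScreenName());
                getFollowersPatiently(twitter, follower.getScreenName(), depth - 1);
            }
        }
        cursor = followerIDs.getNextCursor();
    } while (cursor != 0);
}

Even so, a deep crawl will take hours; the only real bypass is reducing how many calls you make, not making them faster.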
I am trying to improve the speed at which pages load on my site, and the issue seems to be the TTFB (time to first byte). The backend uses Java with the Spring framework.
Most of the pages on the website have a TTFB of >1000ms; this also fluctuates and will occasionally be as much as 5000ms.
I have added database indexes for all of the commonly used SQL queries in the backend, as well as upgrading the instance class running on Google App Engine. This has had a positive impact, but overall performance is still lacking. (The server is hosted in the location nearest to me.)
Additionally, I observed that the TTFB is roughly 200ms longer than the time it takes for the code to run and return the model in the controller. In some cases a lot of attributes are added to the model, so it's possible that HTML building takes considerable time, but I wonder if there is an issue with how Spring is configured.
From what I have read online, a TTFB of more than 600ms is considered suboptimal, so I think there is room for more improvement.
Are there any known performance issues with Spring on Google App Engine which I should explore further?
Do you have suggestions for anything else I should try?
Below is the code from one of the slower controllers, in case it helps identify any obvious inefficiencies.
Thanks for any help!
public String requestPrivateCompanyProfile(Model model,
        @PathVariable String companyName,
        @RequestParam(required = false) String alertboxMessage,
        @RequestParam(required = false) String openSubmenu) {
    User user = sessionService.getUser();
    Ecosystem ecosystem = sessionService.getEcosystem();
    model.addAttribute("user", user);
    model.addAttribute("ecosystem", ecosystem);
    model.addAttribute("canSeeAdminButton", accessLevelService.isUserHasManagmentAccessLevel(ecosystem, user));
    model.addAttribute("userPic", cloudStorageService.getProfilePic(user));
    model.addAttribute("ecoNavLogo", cloudStorageService.getEcosystemLogo(ecosystem.getName(), true));
    model.addAttribute("defaultCompanyLogo", cloudStorageService.getDefaultCompanyLogo());
    // Pass on the alertboxMessage and openSubmenu parameters, if there are any
    if (alertboxMessage != null) model.addAttribute("alertboxMessage", alertboxMessage);
    if (openSubmenu != null) model.addAttribute("openSubmenu", openSubmenu);
    // If user is not registered in the current ecosystem, redirect to index
    if (ecoUserDataRepository.findByKeyEcosystemAndKeyUser(ecosystem, user) == null) return "redirect:/";
    // If the company doesn't exist, redirect to /companies
    Company company = companyRepository.findByEcosystemAndName(ecosystem, companyName);
    if (company == null) return "redirect:/companies";
    // Check the user has permission to view this page
    if (!accessLevelService.isCanSeePrivateCompanyProfile(ecosystem, company, user)) return "redirect:/companies";
    // Store the company user data along with the profile pics of each employee
    List<CompanyProfileEmployeeDto> employeeDtos = new ArrayList<>();
    List<CompanyUserData> cuds = companyUserDataRepository.findAllByKeyCompany(company);
    for (CompanyUserData cud : cuds) {
        User employee = cud.getKey().getUser();
        String profPic = cloudStorageService.getProfilePic(employee);
        if (employee == company.getOwner()) {
            // the owner goes first in the list
            employeeDtos.add(0, new CompanyProfileEmployeeDto(cud, profPic));
        } else {
            employeeDtos.add(new CompanyProfileEmployeeDto(cud, profPic));
        }
    }
    // Get all the ecosystem users to enable search for new team members
    EcoUserData[] ecoUserData = ecoUserDataRepository.findAllEcoUsers(ecosystem).toArray(new EcoUserData[0]);
    // Get tags used by the company
    String ecoSystemTags = ecosystem.getEcosystemTags() == null ? "" : ecosystem.getEcosystemTags();
    String companyTagsString = company.getTags() == null ? "" : company.getTags();
    String[] companyTags = company.getTags() == null ? null : company.getTags().replace("#", "").split(";");
    List<String> unusedTags = TagsService.getUnusedTags(companyTagsString, ecoSystemTags);
    // Get goals and order them for the private company
    List<CompanyGoal> companyGoalsAchieved = companyGoalRepository.findCompanyGoalByAchievedAndCompany(true, company);
    List<CompanyGoal> companyGoalsNotAchieved = companyGoalRepository.findCompanyGoalByAchievedAndCompany(false, company);
    Collections.reverse(companyGoalsAchieved);
    Collections.reverse(companyGoalsNotAchieved);
    // This makes achieved goals the first in the list
    List<CompanyGoal> companyGoalsOrdered = new ArrayList<>(companyGoalsAchieved);
    companyGoalsOrdered.addAll(companyGoalsNotAchieved);
    // Get the shared docs for a specific private company
    List<CompanySharedDocs> companySharedDocs = companySharedDocsRepository.findCompanySharedDocsByCompanyAndDocType(company, "shared");
    // Get the admin documents for a specific private company
    List<CompanySharedDocs> companyAdminDocs = companySharedDocsRepository.findCompanySharedDocsByCompanyAndDocType(company, "admin");
    // Get the resource links for a specific private company
    List<CompanySharedDocs> resourceLinks = companySharedDocsRepository.findCompanySharedDocsByCompanyAndDocType(company, "resource");
    // Get shared notes for the company and sort the list so the recently edited notes come first
    List<CompanySharedNote> companySharedNotes = companySharedNoteRepository.findCompanySharedNoteByCompany(company);
    companySharedNotes.sort(Comparator.comparing(CompanySharedNote::getLastEdited).reversed());
    // Find all KPIs associated with the company and save them into a list; each KPI then has its kpiEntries sorted by date
    List<KeyPerformanceIndicator> companyKpis = keyPerformanceIndicatorRepository.findAllByCompany(company);
    Collections.reverse(companyKpis);
    for (KeyPerformanceIndicator kpi : companyKpis) {
        kpi.sortKpiEntriesByDate();
    }
    // Check if the user just created a new KPI; if so, add an attribute to change the frontend display
    model.addAttribute("showKpiTab", sessionService.getAttribute("showKpiTab"));
    sessionService.removeAttribute("showKpiTab");
    model.addAttribute("companyKpis", companyKpis);
    model.addAttribute("resourceLinks", resourceLinks);
    model.addAttribute("isOwner", company.getOwner() == user);
    model.addAttribute("canDeleteCompanyProfile", accessLevelService.isDeleteCompanyProfile(ecosystem, company, user));
    model.addAttribute("canSeePrivateCompanyProfile", true);
    model.addAttribute("canEditCompanyProfile", accessLevelService.isCanEditCompanyProfile(ecosystem, company, user));
    model.addAttribute("companyAdminDocs", companyAdminDocs);
    model.addAttribute("companySharedNotes", companySharedNotes);
    model.addAttribute("companyGoals", companyGoalsOrdered);
    model.addAttribute("companySharedDocs", companySharedDocs);
    model.addAttribute("company", company);
    model.addAttribute("empDtos", employeeDtos);
    model.addAttribute("ecoUserData", ecoUserData);
    model.addAttribute("companyTags", companyTags);
    model.addAttribute("unusedTags", unusedTags);
    model.addAttribute("canUploadAdminDocs", ecoUserDataRepository.findByKeyEcosystemAndKeyUser(ecosystem, user).getAccessLevels().isCanUploadAdminDocs());
    model.addAttribute("companyNameList", companyUserDataRepository.findAllCompanyNameByUserAndEcosystem(user, ecosystem));
    return "ecosystem/company-profile/private-company-profile";
}
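Not specific to App Engine, but a cheap way to test the HTML-building theory is to log total server-side time per request and compare it with the time measured inside the controller; the gap is roughly view rendering plus filter overhead. A minimal sketch using Spring's OncePerRequestFilter (the class name is mine; the javax imports assume a pre-Spring 6 stack):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Logs wall-clock time for the whole request, view rendering included.
@Component
public class RequestTimingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(RequestTimingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        long start = System.nanoTime();
        try {
            chain.doFilter(request, response);
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            log.info("{} {} handled in {} ms", request.getMethod(), request.getRequestURI(), elapsedMs);
        }
    }
}

Separately, note that the controller issues one cloudStorageService.getProfilePic call per employee inside the loop; if that is a remote call, its latency multiplies with team size and could dominate the TTFB on its own.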
We have an application that loads all contacts stored in an account using the Microsoft Graph API. The initial call we issue is https://graph.microsoft.com/v1.0/users/{userPrincipalName}/contacts?$count=true&$orderBy=displayName%20ASC&$top=100, though we use the Java SDK to do this. We then iterate over all pages and store all loaded contacts in a Set (a local cache).
We do this every 5 minutes with an account that has over 3000 contacts, and sometimes the count of contacts reported via $count does not match the number of contacts we loaded and stored in the local cache.
Verifying the numbers manually, we can say that the count was always correct, but contacts were missing.
We use the following code to achieve this.
public List<Contact> loadContacts() {
    Set<Contact> contacts = new TreeSet<>((contact1, contact2) -> StringUtils.compare(contact1.id, contact2.id));
    List<QueryOption> requestOptions = List.of(
            new QueryOption("$count", true),
            new QueryOption("$orderBy", "displayName ASC"),
            new QueryOption("$top", 100)
    );
    ContactCollectionRequestBuilder pageRequestBuilder = null;
    ContactCollectionRequest pageRequest;
    boolean hasNextPage = true;
    while (hasNextPage) {
        // initialize page request
        if (pageRequestBuilder == null) {
            pageRequestBuilder = graphClient.users(userId).contacts();
            pageRequest = pageRequestBuilder.buildRequest(requestOptions);
        } else {
            pageRequest = pageRequestBuilder.buildRequest();
        }
        // load
        ContactCollectionPage contactsPage = pageRequest.get();
        if (contactsPage == null) {
            throw new IllegalStateException("request returned a null page");
        } else {
            contacts.addAll(contactsPage.getCurrentPage());
        }
        // handle next page
        hasNextPage = contactsPage.getNextPage() != null;
        if (hasNextPage) {
            pageRequestBuilder = contactsPage.getNextPage();
        } else if (contactsPage.getCount() != null && !Objects.equals(contactsPage.getCount(), (long) contacts.size())) {
            throw new IllegalStateException(String.format("loaded %d contacts but response indicated %d contacts", contacts.size(), contactsPage.getCount()));
        } else {
            // done
        }
    }
    log.info("{} contacts loaded using graph API", contacts.size());
    return new ArrayList<>(contacts);
}
Initially, we did not put the loaded contacts in a Set keyed by ID, but just in a List. With the List we very often got more contacts than $count. My idea was that there is some caching going on and some pages get fetched multiple times. Using the Set we can make sure that we only have unique contacts in our local cache.
But using the Set, we sometimes have fewer contacts than $count, meaning some pages got skipped, and we end up in the condition that throws the IllegalStateException.
Currently we use microsoft-graph 5.8.0 and azure-identity 1.4.2.
Have you experienced similar issues and can you help us solve this problem?
Or do you have any idea what could be causing these inconsistent results?
Your help is very much appreciated!
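One hypothesis I would test (I can't confirm it is your root cause): paging over a collection ordered by a mutable, non-unique property like displayName is not stable. If a contact is created, renamed, or deleted between page requests, items can shift across page boundaries and be skipped or duplicated, which matches both symptoms (too many with the List, too few with the Set). A sketch of the workaround, dropping $orderBy from the request and sorting locally after the full fetch; everything else stays as in your method:

// request options without server-side ordering
List<QueryOption> requestOptions = List.of(
        new QueryOption("$count", true),
        new QueryOption("$top", 100)
);
// ... identical paging loop as in the question ...
// sort in memory once all pages are loaded
List<Contact> sorted = new ArrayList<>(contacts);
sorted.sort(Comparator.comparing((Contact c) -> StringUtils.defaultString(c.displayName),
        String.CASE_INSENSITIVE_ORDER));
return sorted;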
In Keycloak 8.0.1 we have a Realm with a Group and Subgroups like this:
group
  - subgroup1
  - subgroup2
  - ...
We need to insert a batch of subgroups and users into group. Each subgroup should have some attributes.
How can I do this?
I tried:
Using an exported realm-export.json file with newly added subgroups and "Overwrite" on the import. But I don't see how to connect the new users with the subgroups, and I am also not sure that old users won't be removed this way.
Calling the Keycloak REST API. It doesn't seem possible to UPDATE a group and add subgroups. The documentation says:
PUT /{realm}/groups/{id} (Update group, ignores subgroups.)
Now I am looking at using a UI testing tool to add the users programmatically, but this seems needlessly complex.
Is it possible to programmatically add new subgroups with users associated to that subgroup? Am I missing something with the REST API call or the import functionality? Is there maybe another way via for example the Java Admin Client?
You can create groups and subgroups under them. Here is sample code that creates subgroups using the Admin Client; you can also associate users with those groups.
public void addSubgroups() {
    RealmResource realm = keycloak.realm("myrealm");
    GroupRepresentation topGroup = new GroupRepresentation();
    topGroup.setName("group");
    topGroup = createGroup(realm, topGroup);
    createSubGroup(realm, topGroup.getId(), "subgroup1");
    createSubGroup(realm, topGroup.getId(), "subgroup2");
}

private void createSubGroup(RealmResource realm, String parentGroupId, String subGroupName) {
    GroupRepresentation subgroup = new GroupRepresentation();
    subgroup.setName(subGroupName);
    try (Response response = realm.groups().group(parentGroupId).subGroup(subgroup)) {
        if (response.getStatusInfo().getFamily() == Family.SUCCESSFUL) {
            System.out.println("Created Subgroup : " + subGroupName);
        } else {
            logger.severe("Error Creating Subgroup : " + subGroupName + ", Error Message : " + getErrorMessage(response));
        }
    }
}

private GroupRepresentation createGroup(RealmResource realm, GroupRepresentation group) {
    try (Response response = realm.groups().add(group)) {
        String groupId = getCreatedId(response);
        group.setId(groupId);
        return group;
    }
}
getCreatedId(response);
This method in the answer above (by ravthiru) belongs to CreatedResponseUtil from the org.keycloak.admin.client package:
CreatedResponseUtil.getCreatedId(response);
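To cover the other two parts of the question, attributes on the subgroup and users associated with it, GroupRepresentation.setAttributes and UserResource.joinGroup can be combined with the answer above. A sketch, where "someUserId" and the attribute names are placeholders:

private void createSubGroupWithUsers(RealmResource realm, String parentGroupId, String subGroupName) {
    GroupRepresentation subgroup = new GroupRepresentation();
    subgroup.setName(subGroupName);
    // attributes are a map of attribute name to list of values
    subgroup.setAttributes(Map.of("myAttribute", List.of("myValue")));
    try (Response response = realm.groups().group(parentGroupId).subGroup(subgroup)) {
        String subGroupId = CreatedResponseUtil.getCreatedId(response);
        // make an existing user a member of the new subgroup
        realm.users().get("someUserId").joinGroup(subGroupId);
    }
}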
I'm continuing a project that has been under development for a few years at my university. One of the things this project does is collect web pages while identifying itself as the Google bot.
Due to a problem that I cannot understand, the project is not getting through this part. I've already researched a lot about what may be happening and whether some part of the code is outdated.
The code is in Java and uses Maven for project management.
I've tried updating some information in Maven's pom.
I've also tried changing the part of the code that uses the bot, but nothing works.
I'm posting the part of the code that isn't working as it should:
private List<JSONObject> querySearch(int numSeeds, String query) {
    List<JSONObject> result = new ArrayList<>();
    start = 0;
    do {
        String url = SEARCH_URL + query.replaceAll(" ", "+") + FILE_TYPE + "html" + START + start;
        Connection conn = Jsoup.connect(url).userAgent("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)").timeout(5000);
        try {
            Document doc = conn.get();
            result.addAll(formatter(doc));
        } catch (IOException e) {
            System.err.println("Could not search for seed pages in IO.");
            System.err.println(e);
        } catch (ParseException e) {
            System.err.println("Could not search for seed pages in Parse.");
            System.err.println(e);
        }
        start += 10;
    } while (result.size() < numSeeds);
    return result;
}
What some of the variables do:
private static final String SEARCH_URL = "https://www.google.com/search?q=";
private static final String FILE_TYPE = "&fileType=";
private static final String START = "&start=";
private QueryBuilder queryBuilder;
public GoogleAjaxSearch() {
    this.queryBuilder = new QueryBuilder();
}
Up to this part everything is OK: it connects as the bot and can get HTML back from Google. The problem is separating out what it found and taking only the links, which should be under ("h3.r > a").
It does that in this part, with result.addAll(formatter(doc)):
public List<JSONObject> formatter(Document doc) throws ParseException {
    List<JSONObject> entries = new ArrayList<>();
    Elements results = doc.select("h3.r > a");
    for (Element result : results) {
        JSONObject entry = new JSONObject();
        // strip Google's "/url?q=" wrapper and everything after the first "&"
        entry.put("url", (result.attr("href").substring(6, result.attr("href").indexOf("&")).substring(1)));
        entry.put("anchor", result.text());
        entries.add(entry);
    }
    return entries;
}
So when it gets to Elements results = doc.select("h3.r > a"), it probably finds no h3 elements and never enters the for loop, so the results list never grows. It then goes back to the querySearch function and tries again, still without adding to the results list, and ends up in an infinite loop trying to get the requested data and never finding it.
If anyone here can help me: I've been trying for a while and I don't know what else to do. Thanks in advance.
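I can't give you a selector that will keep working: Google changes its result-page markup regularly (h3.r belongs to an older layout), which is exactly why the page parses but nothing matches. Whatever selector you end up with, though, the infinite loop itself is fixable by stopping when a page contributes no new results. A sketch of that guard, keeping the shape of your method:

private List<JSONObject> querySearch(int numSeeds, String query) {
    List<JSONObject> result = new ArrayList<>();
    int sizeBefore;
    start = 0;
    do {
        sizeBefore = result.size();
        String url = SEARCH_URL + query.replaceAll(" ", "+") + FILE_TYPE + "html" + START + start;
        Connection conn = Jsoup.connect(url)
                .userAgent("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
                .timeout(5000);
        try {
            Document doc = conn.get();
            result.addAll(formatter(doc));
        } catch (IOException | ParseException e) {
            System.err.println("Could not search for seed pages: " + e);
            break; // a request that failed once is unlikely to succeed on blind retry
        }
        start += 10;
        // stop when a page yields nothing new, instead of looping forever
    } while (result.size() < numSeeds && result.size() > sizeBefore);
    return result;
}

Longer term, an official endpoint such as Google's Custom Search JSON API is more robust than scraping the HTML results page.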
I want to get tweets from certain user timelines using the Java library twitter4j. I currently have source code that can get ~3200 tweets from a user timeline, but I can't get the full text of each tweet. I have searched various sources on the internet but I can't find a solution to my problem. Can anyone help me, or provide an alternative way to get full tweets from a user timeline with Java?
My source code:
public static void main(String[] args) throws SQLException {
    ConfigurationBuilder cb = new ConfigurationBuilder();
    cb.setDebugEnabled(true)
            .setOAuthConsumerKey("aaa")
            .setOAuthConsumerSecret("aaa")
            .setOAuthAccessToken("aaa")
            .setOAuthAccessTokenSecret("aaa");
    TwitterFactory tf = new TwitterFactory(cb.build());
    Twitter twitter = tf.getInstance();
    int pageno = 1;
    int no = 1; // running tweet counter
    String user = "indtravel";
    List<Status> statuses = new ArrayList<>();
    while (true) {
        try {
            int size = statuses.size();
            Paging page = new Paging(pageno++, 100);
            // fetch the page once and reuse it; repeated getUserTimeline
            // calls for the same page would waste rate limit
            List<Status> pageStatuses = twitter.getUserTimeline(user, page);
            statuses.addAll(pageStatuses);
            System.out.println("***********************************************");
            System.out.println("Gathered " + pageStatuses.size() + " tweets");
            // print status and user details
            for (Status status : pageStatuses) {
                //System.out.println("*********Place Tweets :**************\npalce country :"+status.getPlace().getCountry()+"\nplace full name :"+status.getPlace().getFullName()+"\nplace name :"+status.getPlace().getName()+"\nplace id :"+status.getPlace().getId()+"\nplace tipe :"+status.getPlace().getPlaceType()+"\nplace addres :"+status.getPlace().getStreetAddress());
                System.out.println("[" + (no++) + ".] " + "Status id : " + status.getId());
                System.out.println("id user : " + status.getUser().getId());
                System.out.println("Length status : " + status.getText().length());
                System.out.println("#" + status.getUser().getScreenName() + " . " + status.getCreatedAt() + " : " + status.getUser().getName() + "--------" + status.getText());
                System.out.println("url :" + status.getUser().getURL());
                System.out.println("Lang :" + status.getLang());
            }
            if (statuses.size() == size)
                break;
        } catch (TwitterException e) {
            e.printStackTrace();
        }
    }
    System.out.println("Total: " + statuses.size());
}
Update:
After the answer given by @AndyPiper:
My problem is that every tweet I get is truncated, i.e. incomplete. A tweet gets truncated if it is longer than 140 characters. I found the reference to tweet_mode=extended, but I do not know how to use it. If you know something, please tell me.
Your Configuration should be like this:
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setDebugEnabled(true)
        .setOAuthConsumerKey("aaa")
        .setOAuthConsumerSecret("aaa")
        .setOAuthAccessToken("aaa")
        .setOAuthAccessTokenSecret("aaa")
        .setTweetModeExtended(true);
It is explained well here
The Twitter API limits the timeline history to 3200 Tweets. To get more than that you would need to use the (commercial) premium or enterprise APIs to search for Tweets by a specific user.
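One caveat to add, since the question is about timelines rather than streams: even with setTweetModeExtended(true), a retweet's own text stays truncated and prefixed with "RT @..."; the full text lives on the original status, exactly as the streaming answer below handles it. A sketch for the timeline case (user name and paging taken from the question):

// assumes `twitter` was built from a ConfigurationBuilder with .setTweetModeExtended(true)
for (Status status : twitter.getUserTimeline("indtravel", new Paging(1, 100))) {
    String fullText = status.isRetweet()
            ? status.getRetweetedStatus().getText()  // un-truncated original
            : status.getText();
    System.out.println(fullText);
}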
If you are streaming tweets:
First, you have to add .setTweetModeExtended(true); to your ConfigurationBuilder.
Second, here is the code:
TwitterStream twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
StatusListener listener = new StatusListener() {
    public void onStatus(Status status) {
        System.out.println("-------------------------------");
        if (status.isRetweet()) {
            // retweets are truncated on the retweet object; read the original
            System.out.println(status.getRetweetedStatus().getText());
        } else {
            System.out.println(status.getText());
        }
    }
    // the other StatusListener callbacks need (empty) implementations too
    public void onDeletionNotice(StatusDeletionNotice notice) {}
    public void onTrackLimitationNotice(int numberOfLimitedStatuses) {}
    public void onScrubGeo(long userId, long upToStatusId) {}
    public void onStallWarning(StallWarning warning) {}
    public void onException(Exception ex) {}
};
twitterStream.addListener(listener);
twitterStream.sample(); // start consuming the public sample stream
It totally works for me.
Take care of yourself :)
If you are implementing the Twitter API using a twitter4j.properties file and getting truncated timeline text values, simply add the property below at the end of the file:
tweetModeExtended=TRUE