Get an array of user photos using FQL - java

I'm trying to get a list of the user's photos (ones they've been tagged in) using FQL.
Basically I've got an array object like so: _imageAddressArray.
I can retrieve the user's photos using the Graph API, so I know that part works; the problem with the Graph API is that it's too slow (15+ seconds minimum for 100 photos).
So far I've got:
//New Stuff
FQL fql = new FQL(facebook);
String FQLResult = null;
try
{
_userGallery = graphApi.getPhotosMy(_noOfPhotos);
FQLResult = fql.fqlQuery("SELECT object_id, src_small FROM photo");
}
catch (EasyFacebookError e)
{
e.printStackTrace();
}
System.out.println("FQL Result" + FQLResult);
This returns error 601. Any ideas, anyone?
Of course, ideally FQLResult would be a String[] (string array).

You're getting that error because your FQL statement doesn't have a WHERE clause referencing one of the indexed columns (the ones marked with a "*" in the FQL photo table documentation).
To get the photos using FQL that your user has been tagged in, try this:
SELECT object_id, src_small FROM photo WHERE object_id IN
(SELECT object_id FROM photo_tag WHERE subject = me())
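For reference, here is roughly how that could slot into the code from the question. The org.json parsing step is my assumption about the shape of the returned JSON (fqlQuery appears to return the raw JSON string):
String json = null;
try
{
    // same fql object as in the question; only the query string changes
    json = fql.fqlQuery("SELECT object_id, src_small FROM photo "
            + "WHERE object_id IN (SELECT object_id FROM photo_tag WHERE subject = me())");
}
catch (EasyFacebookError e)
{
    e.printStackTrace();
}
// Assumed parsing step (org.json, bundled with Android): turn the JSON array into the String[] you want
if (json != null)
{
    try
    {
        JSONArray photos = new JSONArray(json);
        String[] srcSmall = new String[photos.length()];
        for (int i = 0; i < photos.length(); i++)
        {
            srcSmall[i] = photos.getJSONObject(i).getString("src_small");
        }
    }
    catch (JSONException e)
    {
        e.printStackTrace();
    }
}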

Related

Java EE + Oracle: how to use a GTT (global temporary table) to avoid a possibly long (1000+) IN clause?

I have a query for an Oracle database, built with Hibernate's CriteriaBuilder. It has an IN clause which already takes about 800+ params.
The team says this may surpass 1000, hitting the hard upper limit of Oracle itself, which only allows 1000 items in an IN clause. We need to optimize that.
select ih from ItemHistory as ih
where ih.number=:param0
and
ih.companyId in (
select c.id from Company as c
where (
( c.organizationId in (:param1) )
or
( c.organizationId like :param2 )
) and (
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param818)
)
)
order by ih.eventDate desc
So, two solutions I can think of:
The easy one: right now the list from :param3 to :param818 is below 1000, but since we may hit 1000 in the future, we can split the list into an additional IN clause whenever its size exceeds 1000, so it becomes:
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param1002) or c.organizationId in (:param1003, ...)
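In the Criteria code, I imagine solution 1 would look roughly like this (untested sketch; builder and orgIdColumn are the same variables as in the full method shown further below):
// Split the id list into chunks of at most 1000 and OR the resulting IN predicates together,
// so no single IN list exceeds Oracle's 1000-expression limit.
List<String> ids = new ArrayList<>(query.getCompanyIds());
List<Predicate> inChunks = new ArrayList<>();
for (int from = 0; from < ids.size(); from += 1000) {
    List<String> chunk = ids.subList(from, Math.min(from + 1000, ids.size()));
    inChunks.add(orgIdColumn.in(chunk));
}
Predicate orgIdIn = builder.or(inChunks.toArray(new Predicate[0]));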
Neither the original code nor solution 1 is very efficient. Although it can fetch 40K records in 25 seconds, we should use a GTT (Global Temporary Table), according to what I can find from professional DBAs on AskTom and other sites. But I can only find SQL examples, not Java code.
What I can imagine is:
createNativeQuery("create global temporary table GTT_COMPANIES if not exist (companyId varchar(32)) ON COMMIT DELETE ROWS;"); and execute(Do we need index here?)
createNativeQuery("insert into GTT_COMPANIES (list)"); query.bind("1", query.getCompanyIds()); and execute(can we bind a list and insert it?)
use CriteriaQuery to select from this table(but I doubt, as CriteriaQueryBuilder will require type safe meta model class to be generated beforehand, and here we don't have the entity; this is an ad-hoc table and no model entity is mapped to it)
and, do we need to create GTT even the list size is < 1000? As often it is big, 700~800.
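Something like this is what I have in mind (untested; the GTT would be created once as DDL, and the table/column names GTT_COMPANIES, ITEM_HISTORY, ITEM_NUMBER, EVENT_DATE are only guesses for illustration):
// The GTT is created once by DDL (not per request), e.g.:
//   CREATE GLOBAL TEMPORARY TABLE GTT_COMPANIES (COMPANY_ID VARCHAR2(32)) ON COMMIT DELETE ROWS;

// 1. Fill the GTT inside the current transaction, one bind per row (or a JDBC batch).
//    javax.persistence.Query is fully qualified to avoid clashing with our own Query type.
javax.persistence.Query insert = entityManager.createNativeQuery(
        "INSERT INTO GTT_COMPANIES (COMPANY_ID) VALUES (:id)");
for (String companyId : query.getCompanyIds()) {
    insert.setParameter("id", companyId).executeUpdate();
}

// 2. Let the main query sub-select from the GTT instead of using the huge IN list.
List<ItemHistory> result = entityManager.createNativeQuery(
        "SELECT ih.* FROM ITEM_HISTORY ih"
        + " WHERE ih.ITEM_NUMBER = :number"
        + " AND ih.COMPANY_ID IN (SELECT COMPANY_ID FROM GTT_COMPANIES)"
        + " ORDER BY ih.EVENT_DATE DESC", ItemHistory.class)
        .setParameter("number", query.getObjectId())
        .getResultList();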
So, any suggestions? Does anyone have a working example of Hibernate CriteriaQuery + an Oracle GTT?
The whole method is like this:
public List<ItemHistory> findByIdTypePermissionAndOrganizationIds(final Query<String> query, final ItemIdType idType) throws DataLookupException {
String id = query.getObjectId();
String type = idType.name();
Set<String> companyIds = query.getCompanyIds();
Set<String> allowedOrgIds = query.getAllowedOrganizationIds();
Set<String> excludedOrgIds = query.getExcludedOrganizationIds();
// if no orgs are allowed, we should return empty list
if (CollectionUtils.isEmpty(allowedOrgIds)) {
return Collections.emptyList();
}
try {
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<ItemHistory> criteriaQuery = builder.createQuery(ItemHistory.class);
Subquery<String> subQueryCompanyIds = criteriaQuery.subquery(String.class);
Root<Company> companies = subQueryCompanyIds.from(Company.class);
companies.alias(COMPANY_ALIAS);
Path<String> orgIdColumn = companies.get(Company_.organizationId);
/* 1. get permission based restrictions */
// select COMPANY_ID where (ORG_ID in ... or like ...) and (ORG_ID not in ... and not like ...)
// actually query.getExcludedOrganizationIds() can also be very long list(1000+), but let's do it later
Predicate permissionPredicate = getCompanyIdRangeByPermission(
builder, query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds(), orgIdColumn
);
/* 2. get org id based restrictions, which was done on top of permission restrictions */
// ... and where (ORG_ID in ... or like ...)
// process companyIds with and without "*" by adding different predicates, like (xxx%, yyy%) vs in (xxx, yyy)
// here, query.getCompanyIds() could be very long, may be 1000+
Predicate orgIdPredicate = groupByWildcardsAndCombine(builder, query.getCompanyIds(), orgIdColumn, false);
/* 3. Join two predicates with AND, because originally filtering is done twice, 2nd is done on basis of 1st */
Predicate subqueryWhere = CriteriaQueryUtils.joinWith(builder, true, permissionPredicate, orgIdPredicate); // join predicates with AND
subQueryCompanyIds.select(companies.get(Company_.id)); // id -> COMPANY_ID
if (subqueryWhere != null) {
subQueryCompanyIds.where(subqueryWhere);
} else {
LOGGER.warn("Cannot build subquery of org id and permission. " +
"Org ids: {}, allowed companies: {}, excluded companies: {}",
query.getCompanyIds(), query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds());
}
Root<ItemHistory> itemHistory = criteriaQuery.from(ItemHistory.class);
itemHistory.alias(ITEM_HISTORY_ALIAS);
criteriaQuery.select(itemHistory)
.where(builder.and(
builder.equal(getColumnByIdType(itemHistory, idType), id),
builder.in(itemHistory.get(ItemHistory_.companyId)).value(subQueryCompanyIds)
))
.orderBy(builder.desc(itemHistory.get(ItemHistory_.eventDate)));
TypedQuery<ItemHistory> finalQuery = entityManager.createQuery(criteriaQuery);
LOGGER.trace(LOG_MESSAGE_FINAL_QUERY, finalQuery.unwrap(org.hibernate.Query.class).getQueryString());
return finalQuery.setMaxResults(MAX_LIST_FETCH_SIZE).getResultList();
} catch (NoResultException e) {
LOGGER.info("No item history events found by permission and org ids with {}={}", type, id);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_NO_RESULT);
} catch (Exception e) {
LOGGER.error("Error when fetching item history events by permission and org ids with {}={}", type, id, e);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_ERROR,
"Error when fetching item history events by permission and org ids with " + type + "=" + id);
}
}

My Customer data is being truncated when added to my List [duplicate]

I am running data.bat file with the following lines:
Rem This batch file will populate tables
cd\program files\Microsoft SQL Server\MSSQL
osql -U sa -P Password -d MyBusiness -i c:\data.sql
The contents of the data.sql file are:
insert Customers
(CustomerID, CompanyName, Phone)
Values('101','Southwinds','19126602729')
There are 8 more similar lines for adding records.
When I run this with start > run > cmd > c:\data.bat, I get this error message:
1>2>3>4>5>....<1 row affected>
Msg 8152, Level 16, State 4, Server SP1001, Line 1
string or binary data would be truncated.
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
Also, I am obviously a newbie, but what do Level # and State # mean, and how do I look up error messages such as the one above (8152)?
From @gmmastros's answer:
Whenever you see the message....
string or binary data would be truncated
Think to yourself... The field is NOT big enough to hold my data.
Check the table structure for the customers table. I think you'll find that the length of one or more fields is NOT big enough to hold the data you are trying to insert. For example, if the Phone field is a varchar(8) field, and you try to put 11 characters in to it, you will get this error.
I had this issue although data length was shorter than the field length.
It turned out that the problem was having another log table (for audit trail), filled by a trigger on the main table, where the column size also had to be changed.
In one of the INSERT statements you are attempting to insert a too long string into a string (varchar or nvarchar) column.
If it's not obvious which INSERT is the offender by a mere look at the script, you could count the <1 row affected> lines that occur before the error message. The obtained number plus one gives you the statement number. In your case it seems to be the second INSERT that produces the error.
Just to contribute additional information: I had the same issue, and it was because the field wasn't big enough for the incoming data; this thread helped me solve it (the top answer clarifies it all).
BUT it is very important to know the possible reasons that may cause it.
In my case I was creating the table with a field like this:
Select '' as Period, * Into #NewTable From Transactions
Therefore the field "Period" had a length of zero, causing the INSERT operations to fail. I changed it to 'XXXXXX', which is the length of the incoming data, and it then worked properly (because the field now had a length of 6).
I hope this helps anyone with the same issue :)
Some of your data cannot fit into your database column because the column is too small. It is not easy to find what is wrong. If you use C# and Linq2Sql, you can list the fields which would be truncated:
First create a helper class:
public class SqlTruncationExceptionWithDetails : ArgumentOutOfRangeException
{
public SqlTruncationExceptionWithDetails(System.Data.SqlClient.SqlException inner, DataContext context)
: base(inner.Message + " " + GetSqlTruncationExceptionWithDetailsString(context))
{
}
/// <summary>
/// Part of the code is from the following link
/// http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
static string GetSqlTruncationExceptionWithDetailsString(DataContext context)
{
StringBuilder sb = new StringBuilder();
foreach (object update in context.GetChangeSet().Updates)
{
FindLongStrings(update, sb);
}
foreach (object insert in context.GetChangeSet().Inserts)
{
FindLongStrings(insert, sb);
}
return sb.ToString();
}
public static void FindLongStrings(object testObject, StringBuilder sb)
{
foreach (var propInfo in testObject.GetType().GetProperties())
{
foreach (System.Data.Linq.Mapping.ColumnAttribute attribute in propInfo.GetCustomAttributes(typeof(System.Data.Linq.Mapping.ColumnAttribute), true))
{
if (attribute.DbType.ToLower().Contains("varchar"))
{
string dbType = attribute.DbType.ToLower();
int numberStartIndex = dbType.IndexOf("varchar(") + 8;
int numberEndIndex = dbType.IndexOf(")", numberStartIndex);
string lengthString = dbType.Substring(numberStartIndex, (numberEndIndex - numberStartIndex));
int maxLength = 0;
int.TryParse(lengthString, out maxLength);
string currentValue = (string)propInfo.GetValue(testObject, null);
if (!string.IsNullOrEmpty(currentValue) && maxLength != 0 && currentValue.Length > maxLength)
{
//string is too long
sb.AppendLine(testObject.GetType().Name + "." + propInfo.Name + " " + currentValue + " Max: " + maxLength);
}
}
}
}
}
}
Then prepare the wrapper for SubmitChanges:
public static class DataContextExtensions
{
public static void SubmitChangesWithDetailException(this DataContext dataContext)
{
//http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
try
{
//this can fail on data truncation
dataContext.SubmitChanges();
}
catch (SqlException sqlException) //when (sqlException.Message == "String or binary data would be truncated.")
{
if (sqlException.Message == "String or binary data would be truncated.") //only for EN windows - if you are running different window language, invoke the sqlException.getMessage on thread with EN culture
throw new SqlTruncationExceptionWithDetails(sqlException, dataContext);
else
throw;
}
}
}
Prepare global exception handler and log truncation details:
protected void Application_Error(object sender, EventArgs e)
{
Exception ex = Server.GetLastError();
string message = ex.Message;
//TODO - log to file
}
Finally use the code:
Datamodel.SubmitChangesWithDetailException();
Another situation in which you can get this error is the following:
I had the same error, and the reason was that in an INSERT statement that received data from a UNION, the order of the columns was different from the original table. If you change the order in #table3 to a, b, c, you will fix the error.
select a, b, c into #table1
from #table0
insert into #table1
select a, b, c from #table2
union
select a, c, b from #table3
On SQL Server you can use SET ANSI_WARNINGS OFF like this:
using (SqlConnection conn = new SqlConnection("Data Source=XRAYGOAT\\SQLEXPRESS;Initial Catalog='Healthy Care';Integrated Security=True"))
{
conn.Open();
using (var trans = conn.BeginTransaction())
{
try
{
using (var cmd = new SqlCommand("", conn, trans))
{
cmd.CommandText = "SET ANSI_WARNINGS OFF";
cmd.ExecuteNonQuery();
cmd.CommandText = "YOUR INSERT HERE";
cmd.ExecuteNonQuery();
cmd.Parameters.Clear();
cmd.CommandText = "SET ANSI_WARNINGS ON";
cmd.ExecuteNonQuery();
trans.Commit();
}
}
catch (Exception)
{
trans.Rollback();
}
}
conn.Close();
}
I had the same issue. The length of my column was too short.
What you can do is either increase the length or shorten the text you want to put in the database.
I also had this problem occurring at the web application level.
Eventually I found out that the same error message was coming from the SQL UPDATE statement on a specific table.
Finally I figured out that the column definitions in the related history table(s) did not match the original table's column lengths for nvarchar types in some specific cases.
I had the same problem, even after increasing the size of the problematic columns in the table.
tl;dr: The length of the matching columns in corresponding Table Types may also need to be increased.
In my case, the error was coming from the Data Export service in Microsoft Dynamics CRM, which allows CRM data to be synced to an SQL Server DB or Azure SQL DB.
After a lengthy investigation, I concluded that the Data Export service must be using Table-Valued Parameters:
You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.
As you can see in the documentation above, Table Types are used to create the data ingestion procedure:
CREATE TYPE LocationTableType AS TABLE (...);
CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP LocationTableType READONLY
Unfortunately, there is no way to alter a Table Type, so it has to be dropped & recreated entirely. Since my table has over 300 fields (😱), I created a query to facilitate the creation of the corresponding Table Type based on the table's columns definition (just replace [table_name] with your table's name):
SELECT 'CREATE TYPE [table_name]Type AS TABLE (' + STRING_AGG(CAST(field AS VARCHAR(max)), ',' + CHAR(10)) + ');' AS create_type
FROM (
SELECT TOP 5000 COLUMN_NAME + ' ' + DATA_TYPE
+ IIF(CHARACTER_MAXIMUM_LENGTH IS NULL, '', CONCAT('(', IIF(CHARACTER_MAXIMUM_LENGTH = -1, 'max', CONCAT(CHARACTER_MAXIMUM_LENGTH,'')), ')'))
+ IIF(DATA_TYPE = 'decimal', CONCAT('(', NUMERIC_PRECISION, ',', NUMERIC_SCALE, ')'), '')
AS field
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '[table_name]'
ORDER BY ORDINAL_POSITION) AS T;
After updating the Table Type, the Data Export service started functioning properly once again! :)
When I tried to execute my stored procedure I had the same problem, because the size of the column I needed to add data to was shorter than the data I wanted to add.
You can increase the size of the column data type or reduce the length of your data.
A 2016/2017 update will show you the bad value and column.
A new trace flag will swap the old error for a new 2628 error and will print out the column and offending value. Traceflag 460 is available in the latest cumulative update for 2016 and 2017:
https://support.microsoft.com/en-sg/help/4468101/optional-replacement-for-string-or-binary-data-would-be-truncated
Just make sure that after you've installed the CU you enable the trace flag, either globally/permanently on the server or with DBCC TRACEON:
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
Another situation in which this error may occur is in SQL Server Management Studio, if you have "text" or "ntext" fields in your table, no matter what kind of field you are actually updating (for example a bit or integer).
It seems that Management Studio does not load entire "ntext" fields and also updates ALL fields instead of just the modified one.
To solve the problem, exclude "text" or "ntext" fields from the query in Management Studio.
This error occurs only when the length of one of your field values is greater than the length specified in the SQL Server table structure.
To overcome this issue you have to either reduce the length of the field value or increase the length of the database table column.
If someone is encountering this error in a C# application, I have created a simple way of finding offending fields by:
Getting the column width of all the columns of a table where we're trying to make this insert/ update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/ update.
Assumptions/ Limitations:
The column names of the table in the database match the C# entity fields. For example, if you have a column named SourceData in the database, you need to have your entity with the same property name:
public class SomeTable
{
// Other fields
public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
var columnSizes = new Dictionary<string, int>();
using (var connection = new SqlConnection(connectionString))
{
// Connect to the database then retrieve the schema information.
connection.Open();
// You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
// You can use four restrictions for Column, so you should create a 4 members array.
String[] columnRestrictions = new String[4];
// For the array, 0-member represents Catalog; 1-member represents Schema;
// 2-member represents Table Name; 3-member represents Column Name.
// Now we specify the Table_Name and Column_Name of the columns for which we want schema information.
columnRestrictions[2] = tableName;
DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);
foreach (DataRow row in allColumnsSchemaTable.Rows)
{
var columnName = row.Field<string>("COLUMN_NAME");
//var dataType = row.Field<string>("DATA_TYPE");
var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");
// I'm only capturing columns whose Datatype is "varchar" or "char", i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
if(characterMaxLength != null)
{
columnSizes.Add(columnName, characterMaxLength.Value);
}
}
connection.Close();
}
return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
var tableName = typeof(T).Name;
Dictionary<string, string> longFields = new Dictionary<string, string>();
var objectProperties = GetProperties(entity);
//var fieldNames = objectProperties.Select(p => p.Name).ToList();
var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);
foreach (var dbColumn in actualDatabaseColumnSizes)
{
var maxLengthOfThisColumn = dbColumn.Value;
var currentValueOfThisField = objectProperties.Where(f => f.Name == dbColumn.Key).First()?.GetValue(entity, null)?.ToString();
if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
{
longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
}
}
return longFields;
}
public static List<PropertyInfo> GetProperties<T>(T entity)
{
//The DeclaredOnly flag makes sure you only get properties of the object, not from the classes it derives from.
var properties = entity.GetType()
.GetProperties(System.Reflection.BindingFlags.Public
| System.Reflection.BindingFlags.Instance
| System.Reflection.BindingFlags.DeclaredOnly)
.ToList();
return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
[Key]
public long TicketID { get; set; }
public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
public DbSet<SomeTable> SomeTables { get; set; }
}
In the database, this table has the SourceData field as varchar(16).
Now we'll try to insert value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
using (var context = new SomeDbContext(connectionString))
{
var someTableEntity = new SomeTable()
{
SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
};
context.SomeTables.Add(someTableEntity);
try
{
context.SaveChanges();
}
catch (Exception ex)
{
if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
{
var badFieldsReport = "";
List<string> badFields = new List<string>();
// YOU GOT YOUR FIELDS RIGHT HERE:
var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
foreach (var longField in longFields)
{
badFields.Add(longField.Key);
badFieldsReport += longField.Value + "\n";
}
}
else
throw;
}
}
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
Kevin Pope's comment under the accepted answer was what I needed.
The problem, in my case, was that I had triggers defined on my table that would insert update/insert transactions into an audit table, but the audit table had a data type mismatch: a column that was VARCHAR(MAX) in the original table was stored as VARCHAR(1) in the audit table. So my triggers were failing whenever I inserted anything longer than one character into the original table column, and I would get this error message.
I used a different tactic: some fields are allocated 8K in places, but here only about 50-100 characters of that are actually used.
declare @NVPN_list as table (
nvpn varchar(50)
,nvpn_revision varchar(5)
,nvpn_iteration INT
,mpn_lifecycle varchar(30)
,mfr varchar(100)
,mpn varchar(50)
,mpn_revision varchar(5)
,mpn_iteration INT
-- ...
) INSERT INTO @NVPN_list
SELECT left(nvpn ,50) as nvpn
,left(nvpn_revision ,10) as nvpn_revision
,nvpn_iteration
,left(mpn_lifecycle ,30)
,left(mfr ,100)
,left(mpn ,50)
,left(mpn_revision ,5)
,mpn_iteration
,left(mfr_order_num ,50)
FROM [DASHBOARD].[dbo].[mpnAttributes] (NOLOCK) mpna
I wanted speed, since I have 1M total records, and load 28K of them.
This error may be due to a field size that is smaller than the data you entered.
For example, if you have the data type nvarchar(7) and your value is 'aaaaddddf', then the error shown is:
string or binary data would be truncated
You simply can't beat SQL Server on this.
You can insert into a new table like this:
select foo, bar
into tmp_new_table_to_dispose_later
from my_table
and compare the table definition with the real table you want to insert the data into.
Sometimes it's helpful, sometimes it's not.
If you try inserting into the final/real table from that temporary table, it may just work (because data conversion can behave differently than in SSMS, for example).
Another alternative is to insert the data in chunks: instead of inserting everything at once, insert with TOP 1000 and repeat the process until you find the chunk with an error. At least you get better visibility into what's not fitting into the table.

Parse Server How to get an object ID by query?

If we wanna get an object ID we should do this:
String objectId = gameScore.getObjectId();
but what if we wanna get an object ID by a query? Like this:
ParseQuery<ParseObject> query = ParseQuery.getQuery("mytable");
query.whereEqualTo("Title", "Adrians Book");
List<ParseObject> results = null;
try {
results = query.find();
if(!results.isEmpty()) {
String objectId = results.getObjectId();
}
} catch (com.parse4cn1.ParseException e) {
Dialog.show("Err", "Something went wrong.", "OK", null);
}
Sounds interesting, don't you think? I wish it were possible. As you can see in this example, the query fetches a value from a specific object in the table, which could be used to track down the object ID and return it as well. The ParseQuery class should implement getObjectId(), because that way applications would always have access to object IDs from a query, even after the application is restarted. In the first example, gameScore, which is actually an instance of ParseObject, loses its reference to the database after a restart. By getting object IDs from a query, applications could obtain object IDs automatically, without doing it manually or depending on instances of ParseObject.
@Shai Almog: Thank you very much for taking your time to look at the ParseQuery documentation.
I accidentally figured out how to get this done!
ParseQuery<ParseObject> query = ParseQuery.getQuery("mytable");
query.whereEqualTo("Title", "Adrians Book");
List<ParseObject> results = null;
try {
results = query.find();
if(!results.isEmpty()) {
String objectId = results.get(0).getObjectId();
System.out.println(objectId);
}
} catch (com.parse4cn1.ParseException e) {
Dialog.show("Err", "Something went wrong.", "OK", null);
}
Yep, adding the .get(index) call lets you access .getObjectId(), since results is a list of ParseObjects; the respective objectId of your query result is then printed in the console! I'm pretty glad it's working, because I won't need to serialize each object for now, which would be a pain.
Also, if you want to set up an instance of ParseObject with an existing objectId, in case you need to update something in your database, you can use this example:
ParseObject po = ParseObject.create("mytable");
po.setObjectId("YOUR_DESIRED_OBJECTID_HERE"); // as long as it exists in the database
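From there, if I'm reading the parse4cn1 API right (the put/save calls below are my assumption), you can update a field and save it back:
po.put("Title", "Adrians Book (Revised)");
try {
    po.save();
} catch (com.parse4cn1.ParseException e) {
    Dialog.show("Err", "Could not save the update.", "OK", null);
}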
As far as I know you need to get the whole object and then query its ID. I don't see a query-by-id method here: https://github.com/sidiabale/parse4cn1/blob/41fe491699e604fc6de46267479f47bc422d8978/src/com/parse4cn1/ParseQuery.java

How to fetch WorkItem links with the TFS Java API

We use the TFS Java API to fetch WorkItems from a TFS server:
TFSTeamProjectCollection collection = TFSTeamProjectCollectionUtils
.openTeamProjectCollection(serverUrl, credentials,
new DefaultConnectionAdvisor(Locale.getDefault(),
TimeZone.getDefault()));
WorkItemClient client = collection.getWorkItemClient();
List<WorkItem> result = new ArrayList<>();
try {
WorkItemCollection workItems = client.query(wiqlQuery, null, false);
for (int i = 0; i < workItems.size(); i++) {
WorkItem item = workItems.getWorkItem(i);
result.add(item);
}
return result;
} catch (TECoreException e) {
throw new ConQATException("Failed to fetch work items from TFS", e);
}
If I run the query select * from workitems I get all workitems on the server with all fields and all links. Since I'm only interested in some of the fields, I would like to restrict the query to only those and save some bandwidth/time: select ID, Title from workitems
This works fine, but now the links of the items are missing (i.e. item.getLinks() always returns an empty collection).
Is there a way to select the links other than select * from workitems?
After some more digging around, I found that you can create a link query and run it like this:
WorkItemLinkInfo[] infos = client.createQuery("select * from workitemlinks").runLinkQuery();
With this, you can get the links as WorkItemLinkInfo objects that contain the IDs of the target and source node and the link type.
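For example, something like the sketch below; the WorkItemLinkInfo getter names are from memory, so double-check them against the SDK:
for (WorkItemLinkInfo info : infos) {
    // getSourceID()/getTargetID()/getLinkTypeID() are assumed names - verify against the API
    System.out.println(info.getSourceID() + " -> " + info.getTargetID()
            + " (link type " + info.getLinkTypeID() + ")");
    // If needed, load the linked work item itself and read its fields:
    WorkItem target = client.getWorkItemByID(info.getTargetID());
}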
The solution using WorkItemLinkInfo is correct.
Just as a remark: with a WIQL query you only receive the attributes you queried for, which can never include the set of links of a work item (hence it is always empty). If you query a single work item using
WorkItemClient client = TFSConnection.getClient();
WorkItem firstWorkItem = client.getWorkItemByID(id);
then you also get the LinkCollection (containing RelatedLinks, ExternalLinks or HyperLinks) using
LinkCollection linkcoll = firstWorkItem.getLinks();

Facebook FQL: uid of friends

I'm using restfb 1.6.5 (the same problem in 1.6.4) and have problems getting the uids of my friends. This works fine (me-query):
User fbUserMe = fbClient.fetchObject("me", com.restfb.types.User.class);
logger.debug(fbUserMe.getId());
The response body contains something like {"id":"1234","name":"ME" ...} and fbUserMe.getId() returns my uid. With the next code snippet I want to get the uids of my friends (friends-query):
StringBuffer sb = new StringBuffer();
sb.append("SELECT uid, first_name, last_name FROM user");
sb.append(" WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())");
List<User> users = fbClient.executeQuery(sb.toString(), User.class);
for(User fbUser : users)
{
logger.debug(fbUser.getId());
}
But this always outputs null, although the response body contains this: [{"uid":5678,"first_name":"Peter" ...} ...].
The obvious difference in the response body is id=1234 for the me-query versus uid=5678 for the friends-query. If I use a custom class FqlUser, as described in the RestFB documentation, I'm able to get the uid. Now I'm uncertain about that uid: do I really get the same id (the one my friend would get in a me-query), or do I get something like the third_party_id described in the Facebook FQL user documentation?
You're getting the uid. The same id your friend would get in a "me query".
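For reference, the custom-class mapping mentioned in the question looks roughly like this with restfb's @Facebook field annotation (field names chosen to match the FQL columns; adjust as needed):
import com.restfb.Facebook;

public class FqlUser
{
    @Facebook
    String uid;

    @Facebook("first_name")
    String firstName;

    @Facebook("last_name")
    String lastName;
}

// then map the FQL result onto it instead of com.restfb.types.User:
List<FqlUser> friends = fbClient.executeQuery(
        "SELECT uid, first_name, last_name FROM user"
        + " WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())", FqlUser.class);
for (FqlUser friend : friends)
{
    logger.debug(friend.uid);
}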
