I don't know why it results in `undefined` when I use tweet_mode='extended' in the Twitter API get() function on Node.js

I have a few questions.
I would like to get the full text of tweets, so I used the get() function, but the returned text is truncated, like this:
'RT #Journeyto100k_: Google was not the first search engine, but quickly became the standard. Internet explorer even came preloaded on every…',
'RT #ApolloCurrency: Check out our latest blog post! In case you missed it. \n\n"Apollonauts Unveil Wiki…',
I used tweet_mode='extended' and retweeted_status to get the full_text property, but it didn't work:
let keyword1 = T.get('search/tweets', {
  q: 'Crypto Currency crypto currency since:2019-04-15',
  count: 100,
  request: tweet_mode='extended'
}, async function (err, data, response) {
  let addr;
  let text = data.statuses.map(retweeted_status => retweeted_status.text);
  console.log(text);
});
I expect the result of get() to be the full text, but the actual output is truncated.
Furthermore, the data object has a full_text property, but even that contains truncated text, like this:
{ created_at: 'Fri Apr 19 04:22:40 +0000 2019',
id: 1119093934167212000,
id_str: '1119093934167212032',
full_text:
'RT #Drife_official: DRIFE presented at Trybe’s first annual Endless Dappathon\n #trybe_social \n#cryptocurrency #drife…',
truncated: false,
display_text_range: [Array],
entities: [Object],
metadata: [Object],
source:
'Twitter Web Client',
in_reply_to_status_id: null,
in_reply_to_status_id_str: null,
in_reply_to_user_id: null,
in_reply_to_user_id_str: null,
in_reply_to_screen_name: null,
user: [Object],
geo: null,
coordinates: null,
place: null,
contributors: null,
retweeted_status: [Object],
is_quote_status: false,
retweet_count: 330,
favorite_count: 0,
favorited: false,
retweeted: false,
possibly_sensitive: false,
lang: 'en' },

I found the solution:
T.get('search/tweets', {
  q, count, tweet_mode: 'extended'
}, async function (err, data, response) {
  let text = data.statuses.map(status => status.full_text);
});
I had mistakenly used text instead of full_text.
Also note that ONLY retweets (RT) are still truncated; the other tweets were not.
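For retweets, the untruncated text lives on the nested retweeted_status object; the top-level full_text of an RT is still cut off. A minimal sketch of a helper (the function name and the mocked response are mine, not part of the Twitter client library) that prefers the nested field when it exists:

```javascript
// Return the untruncated text of a status from a tweet_mode='extended' response.
// For retweets the top-level full_text is truncated ("RT @user: ..."), so we
// rebuild it from retweeted_status.full_text when that field is present.
function getFullText(status) {
  if (status.retweeted_status && status.retweeted_status.full_text) {
    return 'RT @' + status.retweeted_status.user.screen_name + ': ' +
           status.retweeted_status.full_text;
  }
  return status.full_text;
}

// Example with a mocked search response:
const data = {
  statuses: [
    { full_text: 'RT @alice: truncated…',
      retweeted_status: { full_text: 'the whole original tweet body',
                          user: { screen_name: 'alice' } } },
    { full_text: 'a plain, non-retweeted status' }
  ]
};
console.log(data.statuses.map(getFullText));
```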

Related

Laravel where, orWhereHas and whereNotIn

Hello great people of SO!
I hope you're all having a good day and are in good health.
Note: I'm not good at SQL.
Sorry for my bad English, but I will try my best to explain my issue.
I'm using Laravel v8.x for my app, and after setting up model relationships, events, queues, etc., I'm now working on the SQL.
At the moment, I have 2 models:
User
Post
Relationships:
User hasMany Post
User belongsToMany User (Block)
User belongsToMany User (Follow)
Post belongsTo User
Database:
5 records for User
2 records for Block
3 records for Post
Table: (Using faker)
users
[
{ id: 1, name: 'Jonathan Beatrice', username: 'kiana.fay', ... },
{ id: 2, name: 'Lacey Kirlin', username: 'kenna.turner', ... },
{ id: 3, name: 'Alexander Schiller', username: 'cassandra95', ... },
{ id: 4, name: 'Daniel Wickozky', username: 'nkoepp', ... },
{ id: 5, name: 'Maymie Lehner', username: 'frami.felton', ... }
]
block
[
{ id: 1, by_id: 1, to_id: 2 }, // User #1 block user #2
{ id: 2, by_id: 4, to_id: 1 } // User #4 block user #1
]
posts
[
{ id: 1, user_id: 2, body: 'Test post', ... },
{ id: 2, user_id: 5, body: 'Lorem ipsum dolor sit amet ...', ... },
{ id: 3, user_id: 4, body: 'ABCD festival soon! ...', ... },
]
Everything works fine and smoothly.
Now I want to implement a search system, but I have a problem, since I'm not good with SQL.
Here's my code:
SearchController.php
use ...;
use ...;
...
public function posts(Request $request)
{
    // For testing purposes
    $user = User::with(['userBlocks', 'blocksUser'])->find(1);
    // Get the ids of all users that $user blocks
    // returns [2]
    $user_blocks = $user->userBlocks->pluck('pivot')->pluck('to_id')->toArray();
    // Get the ids of all users that block $user
    // returns [4]
    $blocks_user = $user->blocksUser->pluck('pivot')->pluck('by_id')->toArray();
    // Merge all ids above (must be unique())
    // returns [2, 4]
    $blocks = array_merge($user_blocks, $blocks_user);
    // .../search?q=xxx
    $query = $request->query('q');
    $sql = Post::query();
    // Search for posts whose `posts`.`body` is LIKE ? ($query)
    $sql->where('body', 'LIKE', "%$query%");
    // This is where I got confused
    $sql->orWhereHas('user', function ($post_user) use ($blocks, $query) {
        $post_user
            ->whereNotIn('id', $blocks) // Exclude users whose id is in (2, 4, ...; the $blocks variable above)
            ->where('name', 'LIKE', "%$query%") // Find users whose name is LIKE ? ($query)
            ->orWhere('username', 'LIKE', "%$query%"); // or whose username is LIKE ? ($query)
    });
    $sql->orderBy('created_at', 'DESC');
    $sql->with(['user']);
    $posts = $sql->simplePaginate(10, ['*'], 'p');
    return $posts;
}
I run the code with .../search?q=e
Note:
All users have the letter E in their names
All posts also have the letter E in their body
We (as User #1) block User #2, and User #4 blocks us (User #1)
Result: the controller returned all posts
This is the query when I use DB::enableQueryLog() and DB::getQueryLog()
SELECT
*
FROM
`posts`
WHERE `body` LIKE ?
AND EXISTS
(SELECT
*
FROM
`users`
WHERE `posts`.`user_id` = `users`.`id`
AND (
`id` NOT IN (?)
AND `username` LIKE ?
OR `name` LIKE ?
))
ORDER BY `created_at` ASC
LIMIT 11 OFFSET 0
Goal: search all posts whose body is LIKE ?, OR posts whose user has a username LIKE ? or a name LIKE ? (but also exclude the users we block and the users that block us).
Thanks in advance.
If there's any unclear explanation, I will edit it A.S.A.P.
If I run this on my recent Laravel install (version 7.19.1), with my proposed change for one of your issues, I get this query:
SELECT
*
FROM
`posts`
WHERE `body` LIKE ?
OR EXISTS <- line of interest
(SELECT
*
FROM
`users`
WHERE `posts`.`user_id` = `users`.`id`
AND (
`id` NOT IN (?)
AND (`username` LIKE ?
OR `name` LIKE ?) <- extra brackets I've added
))
ORDER BY `created_at` ASC
LIMIT 11 OFFSET 0
Have a look at the line of interest, and compare it with the query your version of Laravel is running. The AND EXISTS line is being incorrectly generated by Laravel: orWhereHas isn't behaving correctly in your version, and I can't find the release number to see where it was fixed.
I'd recommend upgrading to the latest version if possible, but that's not always an option. I've had a dig around, and it looks like the user in this question encountered a similar problem:
WhereHas() / orWhereHas not constraining the query as expected
You can try moving your $sql->with(['user']); to before your orWhereHas clause. I'm not sure if that will change it to OR, but it's worth a try.
Second thing: I've added whereNested to your OR clause to ensure the precedence is correct, which adds the extra brackets in the query above, since you don't want:
(`id` NOT IN (1, 2, 3)
AND `name` LIKE % test %)
OR `username` LIKE % test %
since that would still include posts from blocked users in the EXISTS clause.
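The precedence pitfall here is the usual one: AND binds tighter than OR, in SQL just as in most languages. A quick JavaScript sketch (standing in for the SQL boolean logic, with made-up variable names) of why the extra brackets matter:

```javascript
// notBlocked AND nameMatches OR usernameMatches groups as
// (notBlocked AND nameMatches) OR usernameMatches, so a blocked
// user can still slip through via the username match.
const notBlocked = false;      // the user IS blocked
const nameMatches = false;
const usernameMatches = true;

const unnested = notBlocked && nameMatches || usernameMatches; // blocked user included
const nested = notBlocked && (nameMatches || usernameMatches); // blocked user excluded

console.log(unnested, nested);
```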
So the final changes look like this, which I think fulfills your description:
$sql->with(['user']); // moved here from its original position
$sql->where('body', 'LIKE', "%$query%")->whereNotIn('user_id', $blocks); // additional line: also exclude body-matched posts written by blocked users
$sql->orWhereHas('user', function ($post_user) use ($blocks, $query) {
    $post_user
        ->whereNotIn('id', $blocks);
    $post_user->whereNested(function ($post_user) use ($query) { // new bit
        $post_user->where('name', 'LIKE', "%$query%")
            ->orWhere('username', 'LIKE', "%$query%");
    });
});

TableData.insertAll with templateSuffix - frequent 503 errors

We are using TableData.insertAll with a templateSuffix and are experiencing frequent 503 errors with our usage pattern.
We set the templateSuffix based on two pieces of information: the name of the event being inserted and the date of the event being inserted, e.g. 'NewPlayer20160712'. The table ID is set to 'events'.
In most cases this works as expected, but relatively often it will fail and return an error. Approximately 1 in every 200 inserts will fail, which seems way too often for expected behaviour.
The core of our event ingestion service looks like this:
//Handle all rows in rowsBySuffix
async.mapLimit(Object.keys(rowsBySuffix), 5, function (suffix, cb) {
  //Construct request for suffix
  var request = {
    projectId: "tactile-analytics",
    datasetId: "discoducksdev",
    tableId: "events",
    resource: {
      "kind": "bigquery#tableDataInsertAllRequest",
      "skipInvalidRows": true,
      "ignoreUnknownValues": true,
      "templateSuffix": suffix, // E.g. NewPlayer20160712
      "rows": rowsBySuffix[suffix]
    },
    auth: jwt // valid google.auth.JWT instance
  };
  //Insert all rows into BigQuery
  bigquery.tabledata.insertAll(request, function (err, result) {
    if (err) {
      console.log("Error insertAll. err=" + JSON.stringify(err) +
        ", request.resource=" + JSON.stringify(request.resource));
    }
    cb(err, result);
  });
}, arguments[arguments.length - 1]);
A typical error would look like this:
{
   "code": 503,
   "errors": [
      {
         "domain": "global",
         "reason": "backendError",
         "message": "Error encountered during execution. Retrying may solve the problem."
      }
   ]
}
The resource part for the insertAll that fails looks like this:
{
   "kind": "bigquery#tableDataInsertAllRequest",
   "skipInvalidRows": true,
   "ignoreUnknownValues": true,
   "templateSuffix": "GameStarted20160618",
   "rows": [
      {
         "insertId": "1f4786eaccd1c16d7ce865fea4c7af89",
         "json": {
            "eventName": "gameStarted",
            "eventSchemaHash": "unique-schema-hash-value",
            "eventTimestamp": 1466264556,
            "userId": "f769dc78-3210-4fd5-a2b0-ca4c48447578",
            "sessionId": "821f8f40-ed08-49ff-b6ac-9a1b8194286b",
            "platform": "WEBPLAYER",
            "versionName": "1.0.0",
            "versionCode": 12345,
            "ts_param1": "2016-06-04 00:00",
            "ts_param2": "2014-01-01 00:00",
            "i_param0": 598,
            "i_param1": 491,
            "i_param2": 206,
            "i_param3": 412,
            "i_param4": 590,
            "i_param5": 842,
            "f_param0": 5945.442,
            "f_param1": 1623.4111,
            "f_param2": 147.04747,
            "f_param3": 6448.521,
            "b_param0": true,
            "b_param1": false,
            "b_param2": true,
            "b_param3": true,
            "s_param0": "Im guesior ti asorne usse siorst apedir eamighte rel kin.",
            "s_param1": "Whe autiorne awayst pon, lecurt mun.",
            "eventHash": "1f4786eaccd1c16d7ce865fea4c7af89",
            "collectTimestamp": "1468346812",
            "eventDate": "2016-06-18"
         }
      }
   ]
}
We have noticed that, if we avoid including the name of the event in the suffix (e.g. the NewPlayer part) and instead just have the date as the suffix, then we never experience these errors.
Is there any way that this can be made to work reliably?
Backend errors happen; we usually see about 5 in every 10,000 requests, and we simply retry. Also, since we have a fairly constant rate and can provide a reproducible use case, we filed a ticket on the BigQuery issue tracker, so that if there is something wrong with our project it can be investigated:
https://code.google.com/p/google-bigquery/
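The retry the answer mentions can be sketched as a small callback-style helper with exponential backoff (the helper and all names below are mine, not part of the BigQuery client library):

```javascript
// Retry a callback-style operation up to maxAttempts times, doubling the
// delay after each failure - e.g. for transient 503 backendError responses.
function insertWithRetry(operation, maxAttempts, delayMs, done) {
  operation(function (err, result) {
    if (!err) return done(null, result);
    if (maxAttempts <= 1) return done(err);
    setTimeout(function () {
      insertWithRetry(operation, maxAttempts - 1, delayMs * 2, done);
    }, delayMs);
  });
}

// Example with a fake insertAll that fails twice with a 503, then succeeds:
var attempts = 0;
function fakeInsertAll(cb) {
  attempts++;
  if (attempts < 3) return cb({ code: 503, reason: "backendError" });
  cb(null, { kind: "bigquery#tableDataInsertAllResponse" });
}

insertWithRetry(fakeInsertAll, 5, 10, function (err, result) {
  console.log(err, result, "attempts=" + attempts);
});
```

In production you would also want to retry only on retryable errors (503/backendError) and cap the total delay.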

Keen-io: I can't delete a specific event using an extraction query filter

Using this extraction query (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxx/queries/extraction?api_key=xxxx&event_collection=dispatched-orders&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
returns:
{
result: [
{
mobile: "13185716746",
keen : {
timestamp: "2015-02-10T07:10:07.816Z",
created_at: "2015-02-10T07:10:08.725Z",
id: "54d9aed03bc6964a7d311f9e"
},
data : {
itemId: 2130,
num: 1
},
features: {
communityId: 2000,
dispatcherId: 39,
tradeId: 8581
}
}
]
}
But if I use the same filters in my delete query URL (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
properties: {
data.num: "num",
keen.created_at: "datetime",
mobile: "string",
keen.id: "string",
features.communityId: "num",
features.dispatcherId: "num",
keen.timestamp: "datetime",
features.tradeId: "num",
data.itemId: "num"
}
}
Please help me.
It looks like you are issuing a GET request for the delete command. If you perform a GET on a collection, you get back the schema that Keen has inferred for that collection.
You'll want to issue the above as a DELETE request. Here's the cURL command to do that (single-quoted so the shell doesn't strip the inner double quotes):
curl -X DELETE 'https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800'
Note that you'll probably need to URL-encode that JSON, as you mentioned in your post above!
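For the URL-encoding step, a small sketch of building the DELETE URL with the filters JSON encoded via encodeURIComponent (the helper name and placeholder ids are mine, not from the Keen API):

```javascript
// Build a Keen delete URL with the filters JSON properly URL-encoded.
function buildDeleteUrl(projectId, collection, apiKey, filters, timezone) {
  return 'https://api.keen.io/3.0/projects/' + projectId +
    '/events/' + collection +
    '?api_key=' + apiKey +
    '&filters=' + encodeURIComponent(JSON.stringify(filters)) +
    '&timezone=' + timezone;
}

const url = buildDeleteUrl('xxxxx', 'dispatched-orders', 'xxxxxx',
  [{ property_name: 'features.tradeId', operator: 'eq', property_value: 8581 }],
  28800);
console.log(url);
```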

Yodlee searchSite returning component with multiple values

The Yodlee docs for siteSearch show a componentList array with each entry looking like this:
{
"valueIdentifier": "LOGIN",
"valueMask": "LOGIN_FIELD",
"fieldType": {
"typeName": "TEXT"
},
"size": 20,
"maxlength": 22,
"name": "LOGIN",
"displayName": "User ID",
"isEditable": true,
"isOptional": false,
"isEscaped": false,
"helpText": "101920",
"isOptionalMFA": false,
"isMFA": false
},
However, when the siteSearch matches "baa", we get a response with a componentList array entry that seems to have multiple possible values, like this:
{
  "defaultValues": [
    "6331",
    "5700",
    null,
    null
  ],
  "values": [
    null,
    null,
    null,
    null
  ],
  "validValues": [
    null,
    null,
    null,
    null
  ], ...
I can't find any documentation on this "multiple-value field", and it seems to be a rare case. Can anyone point me to any information on this type?
Thanks.
I would suggest you check this.
Look for the Social Security Number example there. It is a single field which has been divided into 3 sections (i.e. input text boxes). Similarly, you need to consume the login form you receive and show it to the consumer of your application.
Whenever you find that valueIdentifier is an array (i.e. has more than one value), it is a multiple fixed field type; otherwise it's a single field.
Here is an image of how it looks once you render it.
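Following the answer's rule of thumb, a check like this could distinguish the two cases (the helper name is mine, not a Yodlee API):

```javascript
// Classify a siteSearch componentList entry: a multiple fixed field has an
// array valueIdentifier with more than one entry; otherwise it's a single field.
function isMultiValueField(component) {
  return Array.isArray(component.valueIdentifier) &&
         component.valueIdentifier.length > 1;
}

console.log(isMultiValueField({ valueIdentifier: 'LOGIN' }));          // single field
console.log(isMultiValueField({ valueIdentifier: ['SSN1', 'SSN2'] })); // multiple fixed field
```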

Too many options with jEditable datatables plugin?

This is driving me a bit insane; maybe someone has some insight into this. I have a JSP page with an editable datatable for which I would like the following options:
$("#masterTable").dataTable({
  "sDom" : '<"H"flp>t<"F"ip>r',
  "bJQueryUI" : true,
  "bStateSave" : true,
  "bProcessing" : true
}).makeEditable({
  "sAddURL" : "manager/add",
  "sDeleteURL" : "manager/delete",
  "sAddNewRowFormId" : "formAddNewRowId",
  "sAddNewRowButtonId" : "btnAddNewRowId",
  "sAddNewRowOkButtonId" : "btnAddNewRowOkId",
  "sAddNewRowCancelButtonId" : "btnAddNewRowCancelId",
  "sDeleteRowButtonId" : "btnDeleteRowId",
  "fnOnAdded" : function () {
    window.location.replace("/view/master.jsp");
  },
  "fnShowError" : function (message, action) {
    switch (action) {
      case "add":
        window.alert("Add Failed. Please check the error log for the exact problem");
        break;
      case "update":
        window.alert("Update Failed. Please check the error log for the exact problem");
        break;
      case "delete":
        window.alert("Delete Failed. Please check the error log for the exact problem");
        break;
    }
  },
  "fnOnDeleted" : function () {
    window.location.replace("/view/report.jsp");
  },
  "aoColumns" : [null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null]
});
But when I try to load it, I get this error:
java.io.IOException: Error: Attempt to clear a buffer that's already been flushed
The insane part is: if I remove any two options, the page loads fine. It doesn't matter which two, how many lines they are on (one or separate), or whether they belong to .dataTable or .makeEditable, but if I leave all the options in, I get the error. Is there some strange upper limit to the number of options that can be used on a table?