Data inconsistency in the Google+ Hangout App shared data object?

I'm getting inconsistent results when setting and getting values using submitDelta, setValue, clearValue, and getState. It appears that these are asynchronous methods, so my synchronous commands (e.g., console.log) execute using a local data object. Then eventually the shared data object updates and in turn updates the local data object. Is this a correct assessment? Is there a way to run these data commands synchronously, i.e., wait for the shared data object to update before moving on in the program?

Yes, data operations in the Hangouts API are asynchronous. To get synchronous-like behavior you would have to listen for onStateChanged events and only continue whatever you are doing inside those event handlers.
The event is fired for all participants, including the local participant that triggered the change.
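For example, a rough sketch (assuming the gapi.hangout.data namespace and a shared key named 'counter'):

// Register the listener first, then submit the change; read the value only
// inside the callback (from the event or via getState), not right after the call.
gapi.hangout.data.onStateChanged.add(function (event) {
  var counter = event.state['counter']; // event.state carries the updated shared state
  console.log('counter is now ' + counter);
  // ...continue the rest of your logic here...
});

gapi.hangout.data.submitDelta({'counter': '11'});      // asynchronous, returns immediately
console.log(gapi.hangout.data.getState()['counter']);  // may still log the old value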

It doesn't look possible to write synchronous shared data object calls, because if you have more than one onStateChanged handler in your code, they all fire together. In other words, I can't tie one shared data object update to one onStateChanged callback.
It looks like Google+ Hangouts executes the data API calls in this order:
1. Getting and setting values in the local data object (using getState).
2. Getting values from the shared data object using getState.
3. Changing values in the shared data object (setValue, clearValue, submitDelta, etc.).
4. onStateChanged listening for shared data object changes.
The following (pseudo)code
setValue('counter', '11')
getValue('counter')
onStateChanged (getValue('counter'))
submitDelta( {'counter': '22'} )
getValue('counter')
onStateChanged (getValue('counter'))
clearValue('counter')
getValue('counter')
onStateChanged (getValue('counter'))
submitDelta( {'counter': '33'} )
getValue('counter')
onStateChanged (getValue('counter'))
would return
undefined
undefined
undefined
undefined
33
33
33
33
33
33
33
33
33
33
33
33
33
33
33
33
because the four getValue calls execute first, then setValue, submitDelta, clearValue, and the second submitDelta execute, and then the four onStateChanged handlers fire together, four times each, because the shared data object is changed four times.
Does that sound right?
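One way I could imagine tying a specific update to a specific handler is to check which keys changed in each event - a rough sketch, assuming the StateChangedEvent exposes an addedKeys list as in the Hangouts data API:

gapi.hangout.data.onStateChanged.add(function (event) {
  // event.addedKeys lists only the keys changed by this particular update,
  // so each handler can react only when the key it cares about was touched.
  event.addedKeys.forEach(function (added) {
    if (added.key === 'counter') {
      console.log('counter changed to ' + added.value);
    }
  });
});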


Error: ‘Read only object or object without ownership can't be applied to mutable function append!’

I'm using DolphinDB subscribeTable but the above error occurs.
I'd like to know what is wrong with my code. I want to subscribe to stream tables in DolphinDB, and the code is:
csEngine1=createCrossSectionalEngine(name=sTb_Cs + "_eng",
dummyTable=objByName(sTb_join12),
keyColumn=`symbol,
triggeringPattern="perRow",
useSystemTime=false, timeColumn=`TimeStamp)
subscribeTable(tableName=sTb_join12, actionName="do"+sTb_Cs, offset=-1, handler=append!{csEngine1}, msgAsTable=true, hash=5, reconnect=true)
The UDF handler of another subscription to sTb_join12 also queries csEngine1.
def append_plan(csEngine1, candidates2, strategy, msg){
    // body omitted in the question; it queries csEngine1
}
subscribeTable(tableName=sTb_join12, actionName="do"+strategy, offset=-1, handler=append_plan{csEngine1, candidates2, strategy}, msgAsTable=true, hash=6, reconnect=true)
Please tell me why the error occurs and how I can fix it.
The problem is that the parameter of your function (append_plan in this case) is not mutable. With an immutable parameter, the object (csEngine1) is set to read-only when the function is called. If another thread writes to this object (as your first subscription does), this error is reported.
Solution #1
Pass the same hash value in the two subscribeTable calls that subscribe to sTb_join12. With the same hash value, the two subscription tasks are processed in turn by the same thread, which avoids the concurrent access and the error.
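For example, a sketch based on the two calls in the question (choosing hash=5 for both is just an illustration; any value works as long as it is the same in both calls):

subscribeTable(tableName=sTb_join12, actionName="do"+sTb_Cs, offset=-1, handler=append!{csEngine1}, msgAsTable=true, hash=5, reconnect=true)
subscribeTable(tableName=sTb_join12, actionName="do"+strategy, offset=-1, handler=append_plan{csEngine1, candidates2, strategy}, msgAsTable=true, hash=5, reconnect=true)
// Both subscriptions now share hash=5, so they are handled by the same worker
// thread in turn and never touch csEngine1 concurrently.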
Solution #2
Combine the two subscribeTable calls into one. In a single UDF handler, first append! the message to the cross-sectional engine and then run the query, which ensures the append operation and the select query on csEngine1 are processed sequentially. You can refer to the following example:
tradesCrossEngine008=createCrossSectionalEngine(name="tradesCrossEngine008", dummyTable=objByName(sTc_join12_testSTR), keyColumn=`Symbol, triggeringPattern=`perRow, useSystemTime=false, timeColumn=`TimeStamp)
def append_plan(msg){
    getStreamEngine("tradesCrossEngine008").append!(msg)
    select Symbol, rank(left_v1+left_v2) as rmk from getStreamEngine("tradesCrossEngine008")
}
subscribeTable(tableName=sTc_join12_testSTR, actionName="tradesCrossEngine008", offset=0, handler=append_plan, msgAsTable=true, hash=9, reconnect=true)

Kafka streams: groupByKey and reduce not triggering action exactly once when error occurs in stream

I have a simple Kafka Streams scenario where I am doing a groupByKey, then reduce, and then an action. There could be duplicate events in the source topic, hence the groupByKey and reduce.
The action could error, and in that case I need the streams app to reprocess that event. In the example below I'm always throwing an error to demonstrate the point.
It is very important that the action happens at least once and only once.
The problem I'm finding is that when the streams app reprocesses the event, the reduce function is being called and as it returns null the action doesn't get recalled.
As only one event is produced to the source topic TOPIC_NAME I would expect the reduce to not have any values and skip down to the mapValues.
val topologyBuilder = StreamsBuilder()
topologyBuilder.stream(
        TOPIC_NAME,
        Consumed.with(Serdes.String(), EventSerde())
    )
    .groupByKey(Grouped.with(Serdes.String(), EventSerde()))
    .reduce { current, _ ->
        println("reduce hit")
        null
    }
    .mapValues { _, v ->
        println("Id: ${v.correlationId}")
        throw Exception("simulate error")
    }
To cause the issue I run the streams app twice. This is the output:
First run
Id: 90e6aefb-8763-4861-8d82-1304a6b5654e
11:10:52.320 [test-app-dcea4eb1-a58f-4a30-905f-46dad446b31e-StreamThread-1] ERROR org.apache.kafka.streams.KafkaStreams - stream-client [test-app-dcea4eb1-a58f-4a30-905f-46dad446b31e] All stream threads have died. The instance will be in error state and should be closed.
Second run
reduce hit
As you can see, .mapValues doesn't get called on the second run, even though the first run errored and caused the streams app to reprocess the same event again.
Is it possible to have the streams app re-process an event through the reduce step as if it had never seen the event before? Or is there a better approach to what I'm doing?
I was missing a property setting for the streams app:
props["processing.guarantee"] = "exactly_once"
By setting this, it is guaranteed that any state created from the point of picking up the event is rolled back if an exception is thrown and the streams app crashes.
The problem was that the streams app would pick up the event again to re-process it, but the reduce step had state that had already been persisted. Enabling the exactly_once setting ensures that the reducer state is rolled back as well.
It now successfully re-processes the event as if it had never seen it before.
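For reference, a minimal configuration sketch in Kotlin (the application id and broker address are placeholders; on newer Kafka versions the constant is StreamsConfig.EXACTLY_ONCE_V2 instead):

import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsConfig
import java.util.Properties

val props = Properties()
props[StreamsConfig.APPLICATION_ID_CONFIG] = "test-app"            // placeholder application id
props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092"   // placeholder broker address
// Exactly-once processing: state store updates, output records, and consumer offsets
// are committed atomically, so a crash rolls the reduce state back too.
props[StreamsConfig.PROCESSING_GUARANTEE_CONFIG] = StreamsConfig.EXACTLY_ONCE

val streams = KafkaStreams(topologyBuilder.build(), props)
streams.start()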

Flink: How to process rest of finite stream with combination of countWindowAll()

// assume the following logic
val source = arrayOf(1,2,3,4,5,6,7,8,9,10,11,12) // total 12 elements
val env = StreamExecutionEnvironment.createLocalEnvironment(1);
val input = env.fromCollection(source)
.countWindowAll(5)
.aggregate(...) // pack them to List<Int> for bulk upload to DB
.addSink(...) // sends bulk
When I execute it, only the first 10 elements are processed; the remaining 2 elements
are thrown away - Flink shuts down without processing them.
The only workaround I see: since I fully control the source data, I can push some well-known IGNORABLE_VALUES into the source collection to fill the last window and then ignore them in the sink... but I think there must be a more professional way to do this in Flink.
You have a finite stream of 12 elements and a window that triggers for every 5 elements. So the first window gets 5 elements and triggers, the next 5 arrive and it triggers again, but then the last 2 come and the job knows that no more are going to arrive. Since there aren't 5 elements in the window, the trigger doesn't fire, so nothing is done with them.
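Under the hood, countWindowAll(5) is roughly windowAll(GlobalWindows.create()).trigger(PurgingTrigger.of(CountTrigger.of(5))), and CountTrigger never fires for a partially filled window. One option is to swap in a custom trigger that also fires when the bounded input ends. The sketch below is untested and assumes the job runs with event time, so the final Long.MAX_VALUE watermark is emitted when the source finishes; it fires either on the count or on that final watermark:

import org.apache.flink.api.common.functions.ReduceFunction
import org.apache.flink.api.common.state.ReducingStateDescriptor
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.triggers.Trigger
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

class CountOrEndOfStreamTrigger(private val size: Long) : Trigger<Any, GlobalWindow>() {
    // keeps a running element count per window, like Flink's own CountTrigger
    private val countDesc =
        ReducingStateDescriptor("count", ReduceFunction<Long> { a, b -> a + b }, Types.LONG)

    override fun onElement(element: Any, timestamp: Long, window: GlobalWindow, ctx: Trigger.TriggerContext): TriggerResult {
        // GlobalWindow.maxTimestamp() is Long.MAX_VALUE, so this timer fires on the final watermark
        ctx.registerEventTimeTimer(window.maxTimestamp())
        val count = ctx.getPartitionedState(countDesc)
        count.add(1L)
        if (count.get() >= size) {
            count.clear()
            return TriggerResult.FIRE_AND_PURGE
        }
        return TriggerResult.CONTINUE
    }

    override fun onEventTime(time: Long, window: GlobalWindow, ctx: Trigger.TriggerContext): TriggerResult =
        TriggerResult.FIRE_AND_PURGE // flush whatever is left at end of input

    override fun onProcessingTime(time: Long, window: GlobalWindow, ctx: Trigger.TriggerContext): TriggerResult =
        TriggerResult.CONTINUE

    override fun clear(window: GlobalWindow, ctx: Trigger.TriggerContext) {
        ctx.getPartitionedState(countDesc).clear()
    }
}

In the original pipeline this would replace countWindowAll(5) with .windowAll(GlobalWindows.create()).trigger(CountOrEndOfStreamTrigger(5L)) before the aggregate(...).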

Importing data from multi-value D3 database into SQL issues

Trying to use mv.NET by Bluefinity tools. Made some integration packages with it for importing data from a D3 multi-value database into MS SQL 2012, but I seem to be having some trouble with the mapping.
The VOYAGES table has some commentX fields in the D3 application that are acting quite unwieldy, and the INSERT fails after a certain number of rows with the following message:
>Error: 0xC0047062 at INSERT, mvNET Source[354]: System.Exception: Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(The value is too large to fit in the column data area of the buffer.))
at mvNETDataSource.mvNETSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at INSERT, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on mvNET Source returned error code 0x80131500. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
The value is too large to fit in the column data area of the buffer. -> I tried changing the input/output types but can't seem to get it right.
In the SQL table the columns are of type ntext.
In the .dtsx job the data type for the columns is Unicode string [DT_WSTR] with length 4000; I guess these are auto-detected.
The import worked for other D3 files like this, so I'm not sure why it fails for these comment fields.
Running the query in the mv.NET Data Manager (on the D3 server) times out after 240 seconds, so maybe this is the underlying issue?
Any ideas how to proceed? Thank you ~
The most likely reason is that column COMMGROUP does not have the correct data type, or some record in the source does not fit in the output type.
To find the record causing the error, set the failing component's error output to "Redirect row" and write the redirected rows to a .txt/.csv/.tsv file,
then check the data.
The exception is being thrown from mv.NET, so I suggest you call (or ask your reseller to call) Bluefinity support and ask them about this. You're paying for support, might as well use it. Those programs shouldn't be allowed to throw exceptions like that.
D3 doesn't export Unicode, so that might be one issue. But if the Data Manager times out, then I suspect something is wrong in the connectivity into D3. Open a Connection Monitor from the Session Monitor and watch the connection when you make the request. I'm guessing it's either hanging or, more probably, falling into BASIC Debug.
Make sure all D3-side programs related to this are either all Flash-compiled, or all Not Flashed. Your app code will fall into Debug if it's not Flashed but MVNET.BP is.
If it's your program that's in Debug, fix it. If you're not sure which program it is, LIST-RUNTIME-ERRORS in DM.
If it's an MVNET.BP program, again, work with Bluefinity. If you are using MVSP for connectivity then the Connection Monitor may be useless; you'll need to change that to an IP (Telnet) connection to see the raw data exchange.

cocoa-applescript: running handler or command every few seconds

In normal AppleScript, the script is executed from top to bottom, so any code in a loop that runs every 5 seconds only runs while that loop is running - there is no way (that I know of) to have a single function run every few seconds regardless of what the script is currently doing or where it is in the script. In Cocoa-AppleScript, however, is there a way to run a handler every 5 seconds, at all times, no matter what it is currently doing? Here is what it should be doing in my Cocoa-AppleScript app:
on checkInternetStrength()
do shell script "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep 'agrCtlRSSI:'" -- this being the script which returns the line containing the signal strength
set SignalStrength to result
set RSSIcount to (count of characters in SignalStrength)
set SignalStrength to ((characters 18 thru RSSIcount of SignalStrength) as string) as integer -- this to turn SignalStrength into just the number and not the whole output line
set SignalStrength to (100 + SignalStrength) as integer
SignalBar's setIntValue_(SignalStrength) -- SignalBar being the Level Indicator described below
end checkInternetStrength
Summed up, it runs the airport command to check internet connection, turns this into a number from 1 to 100 and uses this on an NSLevelIndicator (100 maximum) to show current signal strength graphically. Now, there is no point having this run once or when you hit a button - that is an option, but it would be nice if it updated itself every, say, 5 seconds with the realtime value. So is there any way to have a process which runs every 5 seconds to do this, while still enabling full functionality of the rest of the script and interface - i.e. as a background process? Comment if you need more extracts from the script.
Example
In Unity-C# scripting, the 'void Update() {code}' will run the code within it every frame while doing everything else simultaneously, so a cocoa-applescript version of this might be an answer, if anyone knows.
I don't believe this is possible, but I had a similar problem before. What I do is have an external AppleScript application that is hidden and repeats the commands. The only problem is that it won't send anything back to your app; you'll have to make the external AppleScript app do the work itself, like
display notification, etc. In the AppleScript app's Info.plist you can add this:
<key>LSUIElement</key>
<string>1</string>
to make the app run invisibly. But sorry, I don't think you can run a repeating handler inside the app itself.
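For example, the hidden helper could be saved as a stay-open application whose idle handler re-runs the check every 5 seconds - a rough sketch based on the shell command from the question (the notification is just one assumed way for the helper to report the value):

on idle
	-- re-run the airport signal check; the helper must report the result itself,
	-- e.g. as a notification, since it can't push the value back into the main app
	set SignalStrength to do shell script "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep 'agrCtlRSSI:'"
	display notification SignalStrength
	return 5 -- seconds until the idle handler runs again
end idle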