I launched an A/B test last Thursday, but after 6 days I'm still not able to see the data in Reporting. Google appears to have stopped recording the Events, and I'm left with the same data as when I started: both Experiment Client and Experiment Event have shown the same numbers for 6 days.
Is there an outage? Why isn't Google recording the Events?
I've considered pausing the test and re-running it. I also checked GA4, but the Event results for the Original and the Variant are not showing up there either.
We use Python to programmatically grant authorized view / routine access for a large number of views across various datasets.
However, since this week we have been receiving the following error:
Dataset time travel window can only be modified once in 1 hours. The previous change happened 0 hours ago
This is blocking our current deployment process.
So far we have not been able to find a workaround for this error. Note that we do not touch the time travel configuration at all as part of our process.
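For context, our grant logic follows roughly the pattern below (a minimal sketch using the google-cloud-bigquery client; the helper name and the view reference are illustrative):

from google.cloud import bigquery

client = bigquery.Client()

def grant_view_access(dataset_id, view_ref):
    """Authorize a view on a dataset without touching any other settings."""
    dataset = client.get_dataset(dataset_id)  # e.g. "my-project.source_dataset"
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role=None,           # authorized views carry no role
            entity_type="view",
            entity_id=view_ref,  # {"projectId": ..., "datasetId": ..., "tableId": ...}
        )
    )
    dataset.access_entries = entries
    # Only the ACL is named in the update mask; time travel is never modified.
    client.update_dataset(dataset, ["access_entries"])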
This seems to be an issue with the BigQuery API.
Google has said that they will roll back the breaking change and restore functionality within the day.
I am working on a Mule API flow testing out the Salesforce event streams. I have my connector set up and subscribed to a streaming channel.
This works just fine: when I create / update / delete contact records, the events come through and I process them by adding them to another database.
I am slightly confused with the replayId functionality. With the current setup, I can shut down the Mule app, create contacts in the org, and then when I bring the app back online, it resumes by adding data from where it left off. Perfect.
However, I am trying to simulate what would happen if the Mule app crashed while processing the events.
I ran some Apex to create 100 random contact records. As soon as I saw the first flow logged in my app, I killed the Mule app. My assumption was that it would know where it left off when I resumed the app, just as it did when it was offline prior to the contact creation in the previous test.
What I have noticed is that it only processes the few contacts that made it through before I shut the app down.
It appears that the events may be coming into the flow input so quickly that the subscription has already reached the last replayId in the stream. However, since these records still haven't been added to my external database, I am losing them. The stream did what it was supposed to do, but because of the batch of work the app is still processing, my 100 records are not committed the way the replayId suggests.
How can I approach this so that I don't end up losing data if there is a large stream of events just before an app crash? I remember that with Kafka you were able to commit the offset once the record was inserted into the database, so it knew the last one you had officially processed. Is there such a concept in Mule, where I can tell it where I have officially left off and committed to the DB?
Reliability at the protocol (CometD) level implies a number of properties. Chief among them is a transactional ACK(nowledgement) of the message having been received by the subscriber. CometD supports ACKs as an extension, but Salesforce's implementation of CometD doesn't support them. Even if it did, you'd still have issues, though the frequency and risk of loss might be lower.
In your case you have to engineer a solution that amounts to finding and replaying events that were not committed to your target database. You do this using custom code or by wiring adapters in Mule. Replay ID values are not guaranteed to be contiguous for consecutive events, but they are ordered: event A with replay ID 100 may be followed by event B with replay ID 200.
You will need to store a replay ID value in your DB. You can then use it on resubscription (after a subscriber failure) to retrieve events from Salesforce that are missing from your DB. This will only work if the failure window is small enough: the Salesforce event retention window is currently 24 hours for the standard platform event license, and higher-level licenses allow for longer retention.
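As a rough illustration of that pattern (a minimal sketch in Python with SQLite so it is self-contained; the table/column names and the CometD payload shape are assumed, and in Mule you would wire the equivalent steps into a transactional scope):

import sqlite3

def process_event(conn, event):
    """Insert the record and advance the replay ID in one transaction."""
    with conn:  # commits on success, rolls back on exception
        conn.execute(
            "INSERT INTO contacts (sf_id, name) VALUES (?, ?)",
            (event["sobject"]["Id"], event["sobject"]["Name"]),
        )
        conn.execute(
            "UPDATE subscription_state SET last_replay_id = ?",
            (event["event"]["replayId"],),
        )

def resume_from(conn):
    """On restart, resubscribe from the last replay ID committed with the data."""
    row = conn.execute("SELECT last_replay_id FROM subscription_state").fetchone()
    return row[0] if row else -1  # -1 = new events only

The key design point is that the replay ID only advances in the same transaction that persists the record, so a crash can cause re-delivery but never loss; make the insert idempotent (e.g. upsert on sf_id) to absorb the duplicates.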
Depending on the volume of data, the frequency of events, and other process parameters, you could get all of this out of the box with Heroku Connect. It does imply a Postgres DB on Heroku plus the licensing and operational costs of HC, but most of our customers in similar circumstances find it worthwhile.
I've created a Power Automate connector which allows a user to create a SQL-triggered refresh sequence that cascades all the way from the Dataflow refresh through to the Dataset refresh, eliminating the need for schedules. It seemed to work well when testing yesterday, until I hit the 8th refresh and it started failing. However, when I looked at it today, the first 2 refreshes of the day had already failed, and I am still getting the error below even though the flow has only fired twice today. I have set up 7 refreshes in Power BI, but it hasn't hit all of them yet, so it shouldn't be returning this message. I tried switching the refresh off on the dataflow, but to no avail. Has anyone encountered this issue before?
{
  "error": {
    "code": "DailyDataflowRefreshLimitExceeded",
    "message": "Skipping dataflow refresh as number of auto refreshes in last 24 hours (8) exceeded allowed limit 8"
  }
}
UPDATE: I've just tried the same flow on a new Power BI workspace for the first time and got the same error.
You have definitely hit the limit of 8 refreshes in 24 hours. You will have to wait a full 24 hours before performing the next set of refreshes.
Short answer: to lift this limitation, you may have to buy a Premium license, which raises the limit to 48 refreshes per day.
A blog post from the Power BI team states the same.
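If you want to verify how many refreshes have actually been counted against you, one option is the Power BI REST API's dataflow transactions endpoint. A hedged sketch in Python, assuming the documented startTime field on each transaction and placeholder IDs and token:

import datetime
import requests

GROUP_ID = "<workspace-id>"    # placeholder
DATAFLOW_ID = "<dataflow-id>"  # placeholder
TOKEN = "<aad-access-token>"   # placeholder

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/dataflows/{DATAFLOW_ID}/transactions")
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Count transactions that started within the rolling 24-hour window.
cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=24)
recent = [t for t in resp.json()["value"]
          if datetime.datetime.fromisoformat(t["startTime"][:19]) > cutoff]
print(f"Refreshes in the last 24 hours: {len(recent)}")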
I'm making a financial app, and I've run into some problems with recurring payments such as fixed payments, salary, and bank savings. I tried to add these payments on a certain day by comparing the current day with the day of payment. The code is something like this:
If Date.Now.Day = GetPayDate(paymentDate) Then
    ' code here
End If
It's in a startup event and it works, but the problem is that if users don't open the app on that day, the payment is skipped and nothing is added.
I'm using ADO.NET with a SQL database. It's a local client app without real-time data.
For it to work correctly, users don't have to log on, but the app must be running, so I tried to fix this by adding an auto-start function. That's not an option either, because users may not use the computer for a few days.
Is there any other way to resolve this? I just need some solutions or ideas, so that even if users don't use the app for 2 or 3 months, it still calculates everything once they log on.
Sounds like you really need a Windows service that runs on startup, or a scheduled task. A Windows service is a type of C# / VB.NET application that's designed to run in the background and has no UI. The Windows Task Scheduler can start a program on a regular basis.
For more info about Windows services, see https://msdn.microsoft.com/en-us/library/zt39148a%28v=vs.110%29.aspx. For more information on scheduled tasks, see http://www.7tutorials.com/task-scheduler. For a discussion about which is better, see Which is better to use for a recurring job: Service or Scheduled Task?
Or, if you don't mind recording a payment a few days late, you could compare the current date with >= the pay date instead of requiring an exact match.
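Whichever host you choose (service, scheduled task, or just the startup event), the catch-up logic itself can be simple: persist the last date you settled payments, then walk forward to today. A minimal sketch of the idea (shown in Python; the names are illustrative):

import datetime

def catch_up(last_settled, today, pay_day, apply_payment):
    """Apply the recurring payment for every due day in (last_settled, today]."""
    day = last_settled + datetime.timedelta(days=1)
    while day <= today:
        if day.day == pay_day:
            apply_payment(day)  # insert the transaction into the database
        day += datetime.timedelta(days=1)
    return today  # persist this as the new last-settled date

Months of absence then just mean a longer walk on the next startup; note that a pay day of 29-31 needs extra handling for shorter months.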
Sorry if my question is a bit ambiguous; I'll explain what I want to do.
I want to run a game on a web server. It's a turn-based game that some of you might have come across: Mafia (http://mafiascum.net/wiki/index.php?title=Newbie_Guide).
I know how it needs to work in terms of a MySQL database, a server-side scripting language, and so on.
What I am not sure about is the best way to get a script to activate when the game starts, and to run a script every 3 minutes to update the game status:
once 10 people join, the game starts
people vote during a 3-minute period (votes would be stored in a database)
after 3 minutes, a script needs to run to calculate the votes and remove a player
then, 1.5 minutes later, the script needs to run again
this cycle of 3 minutes / 1.5 minutes needs to repeat until a certain condition is met, e.g. all players but 2 are dead
when players refresh the page, they need to be updated on the game's status
I've read about sockets and wonder if this might be a good path to take. Would sockets be able to send JSON back to the clients, so that jQuery can then update the client with the game results?
Ideally I would like the front end to be done in jQuery and the backend script processing to be done by PHP or something similar.
How open would this be to abuse, in terms of people trying to cheat with attacks such as tampered POST variables, SQL injection, and so on?
It's quite a broad question, and I am sure there is more than one approach and so more than one correct answer, but I would be interested in people's thoughts on how they would go about developing it.
Thanks for your time :)
I would simply use a cron job or similar on the backend to update the status every x seconds, as you have suggested.
To trigger a game start, simply fire off a PHP command to set your cron job running.
This way the timing is controlled behind the scenes on the server, and you are free to push the updated game status to your actual players with jQuery.
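As a rough sketch of what that cron "tick" could look like (shown here in Python with SQLite so it is self-contained; the games/votes/players tables and columns are assumed, and the same logic ports directly to PHP + MySQL):

import datetime
import sqlite3

# crontab entry (every minute):  * * * * * /usr/bin/python3 tick.py
def tick(conn):
    """Advance any game whose current phase deadline has passed."""
    now = datetime.datetime.utcnow().isoformat()
    due = conn.execute(
        "SELECT id FROM games WHERE phase = 'voting' AND phase_ends_at <= ?",
        (now,),
    ).fetchall()
    for (game_id,) in due:
        # Tally the votes and remove the most-voted player.
        top = conn.execute(
            "SELECT target, COUNT(*) AS n FROM votes WHERE game_id = ? "
            "GROUP BY target ORDER BY n DESC LIMIT 1",
            (game_id,),
        ).fetchone()
        if top:
            conn.execute(
                "UPDATE players SET alive = 0 WHERE game_id = ? AND id = ?",
                (game_id, top[0]),
            )
        # Open the next phase, ending 90 seconds from now.
        ends = (datetime.datetime.utcnow()
                + datetime.timedelta(seconds=90)).isoformat()
        conn.execute(
            "UPDATE games SET phase = 'night', phase_ends_at = ? WHERE id = ?",
            (ends, game_id),
        )
    conn.commit()

Note that cron's granularity is one minute, so a 90-second phase will be processed up to 30 seconds late; storing a phase_ends_at deadline (as above) and letting each tick process whatever has expired keeps the game correct even so.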