Since we deployed MobileFirst 7.1.0.0-MFPF-Server-IF201701200103, we have been facing the "UNKNOWN" issue in Analytics. I have observed that this issue is fixed in the most recent release, and we will be deploying that release soon. My question is: once we deploy the fix, how do we correct the incorrect entries already in Analytics? We have been seeing a lot of these entries for the past three months.
Is there any way to update the data? Is there any kind of client that can show us a table-like structure we can use to view, update, or delete the data?
The existing UNKNOWN entries in the Analytics store cannot be deleted directly. However, with proper TTL (Time To Live) settings configured for Analytics, the old data will be purged over time.
I need some help understanding what has happened in our cloud project: a BigQuery resource has been running every 12 hours without us having configured it. It also seems fairly heavy, because we have been charged, on average, about one dollar per day for the past month.
After checking Logs Explorer, I saw several log entries related to the BigQuery resource.
In those logs I saw the email address of one of our software engineers. Since I removed him from our Firebase project, there have been no more requests.
However, that person did not set up or configure anything related to BigQuery, so we are a bit lost here, which is why we are asking for help to investigate and understand what is going on.
Hope you will be able to help. Let me know if you need more information.
Thanks in advance
NB: I have not tried adding the engineer's email back yet; I want to see how it goes for the rest of the month.
The most likely causes I've observed for this in the wild:
A scheduled query was set up by a user.
A Data Studio dashboard was set up and configured to periodically refresh its data in the background.
Someone set up a workflow that queries BigQuery, such as a Cloud Composer DAG or a Cloud Function.
It's also possible it's just something like a script running in a crontab on someone's machine. The audit log should have relevant details, such as the request origin, for cases where it's something running as part of a bespoke process.
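To rule out the first cause, one way to check for scheduled queries is to list the project's transfer configurations. Below is a minimal sketch, assuming the google-cloud-bigquery-datatransfer Python client and Application Default Credentials are available; the project ID is a placeholder:

```python
# Lists BigQuery Data Transfer Service configs (scheduled queries show up with
# data_source_id == "scheduled_query"), to help find what fires every 12 hours.
from google.cloud import bigquery_datatransfer

project_id = "your-project-id"  # placeholder: replace with your project ID

client = bigquery_datatransfer.DataTransferServiceClient()
parent = f"projects/{project_id}"

for config in client.list_transfer_configs(parent=parent):
    print(config.display_name, config.data_source_id, config.schedule, config.state)
```

If nothing turns up there, the audit log entries themselves (caller email, caller IP, user agent) are usually the quickest way to trace a crontab or other bespoke process.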
I have two legacy servers in GCE, both of which have been flagged as using the deprecated metadata server endpoints. They currently hold hundreds of GB of MySQL and MongoDB data between them, and risking an upgrade on these boxes that has an adverse effect is not an option.
We are currently in the process of migrating away from the data stored here, but for now, we need to keep them running.
Is anyone aware of any implications of either
a) doing nothing, or
b) just setting the disable-legacy-endpoints metadata flag to true?
i.e., will these instances stop working altogether if we leave them as they currently are?
After some more digging into what was actually using the metadata API in the first place, we found that the calls were being sent by stackdriver_agent, which was installed an extremely long time ago while it was free and was simply never removed.
Stopping this agent removes all of the legacy calls made from these two servers.
If you are considering disabling the legacy endpoints with the disable-legacy-endpoints metadata flag, make sure to test it in a contained environment first, i.e. on a new VM created from a snapshot of the affected instance, before applying it to production services.
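As a rough aid for that contained test, the sketch below (standard-library Python, run on the test VM itself; the metadata path is just an example) probes the current v1 endpoint and the legacy v1beta1 endpoint so you can compare their responses before and after setting the flag:

```python
# Probes the v1 and legacy v1beta1 metadata endpoints and prints the HTTP
# status of each. v1 requires the Metadata-Flavor header; the legacy endpoint
# does not, which is part of why it is being deprecated.
import urllib.request
import urllib.error

BASE = "http://metadata.google.internal/computeMetadata"
PATH = "/instance/hostname"  # example path

def probe(url, headers=None):
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as err:
        return err.code, err.reason

print("v1:     ", probe(BASE + "/v1" + PATH, {"Metadata-Flavor": "Google"}))
print("v1beta1:", probe(BASE + "/v1beta1" + PATH))
```

With disable-legacy-endpoints set to true, the v1beta1 probe should start returning an error status while the v1 probe keeps working, which is a quick way to confirm nothing on the box still depends on the legacy path.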
For help identifying the instances making the calls, refer to this article
For help identifying the processes within the instances, refer to this article
Our analytics data is taking up almost 100% of the disk space on the file system. How do we remove the older data and prevent this situation from occurring again?
You can follow https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/server-configuration/#setting-up-jndi-properties-for-mobilefirst-server-web-applications to set up JNDI properties in MobileFirst. You need to set the TTL values based on your business requirements, keeping them as short as possible so that such a large accumulation of data does not occur again. To clean up the existing data, you can perform the following steps:
Set up the Analytics server with the JNDI properties for TTL and any other configuration (see the sketch after these steps).
Stop the Analytics Server
Delete the contents of the /analyticsData directory to discard any initial data, so that no directories remain inside analyticsData. (This has no impact, since no data has accumulated yet.) Note: /analyticsData is the default location; refer to http://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/analytics/configuration/ to verify the actual value in your environment.
Restart the Analytics server. (The index will now be created from scratch with the TTL in effect, so data purging happens properly.)
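For orientation only, here is a hedged sketch of how TTL values can be supplied as JNDI entries in a WebSphere Liberty server.xml; the property names and the value format below are illustrative placeholders rather than confirmed names, so take the exact properties and accepted units from the tutorials linked in the steps above:

```xml
<!-- Hedged sketch: analytics TTL settings as Liberty JNDI entries.       -->
<!-- The jndiName values and "30d" format are placeholders for whatever   -->
<!-- TTL properties the MobileFirst documentation linked above specifies. -->
<server>
    <jndiEntry jndiName="analytics/TTL_AppSession" value="30d"/>
    <jndiEntry jndiName="analytics/TTL_NetworkTransactions" value="30d"/>
</server>
```

The important point is simply that each type of data retained by Analytics gets an explicit, short TTL so that the purge actually runs.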
Project number: 149410454586
We are facing the same issue mentioned below
BigQuery streaming insert data availability delay
We are getting a success response from BigQuery, but the data is not visible when we query it. Strangely, this seemed to happen overnight; until then it was working fine.
We were supposed to deploy to UAT tomorrow and never expected this to happen, so we are rather stuck. Any help with this would be much appreciated.
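For context, here is a minimal sketch of the pattern being described, assuming the google-cloud-bigquery Python client, Application Default Credentials, and placeholder dataset/table names; an empty error list from the insert call is the "success response" mentioned above, and the follow-up query is where the rows are failing to appear:

```python
# Stream rows into a table, then immediately query the table to check that
# they are visible. Table ID and row shape are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table"  # placeholder

rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]
errors = client.insert_rows_json(table_id, rows)
print("insert errors:", errors)  # an empty list is the "success" response

# Streamed rows are normally queryable within a few seconds; in the situation
# described above they were not showing up here at all.
query = f"SELECT COUNT(*) AS n FROM `{table_id}`"
for row in client.query(query).result():
    print("row count:", row.n)
```

Note that streaming inserts land in the streaming buffer first, so some operations (for example copy and export jobs) can lag well behind, even though plain queries normally see the rows almost immediately.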
Facing the same issue as of one hour ago. Any streamed-in data is not available for queries, and it is not a warm-up issue.
Project Number: 272433143339
It seems to be related to https://status.cloud.google.com/incident/bigquery/18010
I'm synchronizing SQL Server 2008 with ~6 SQL Server 2008 Express clients (everything R2 I believe), using the SyncOrchestrator or specifically using http://code.msdn.microsoft.com/windowsdesktop/Database-SyncSQL-Server-e97d1208 as a base with slight modifications. To my knowledge this means all connections are peers or nodes.
I have 2 scopes. One is download only and the other is upload only. The download-only scope is riddled with identity columns, primarily because I didn't know any better and still couldn't wrap my head around introducing GUIDs as the PK on the client side. It doesn't totally matter, as all clients should have exact replicas of about 8 or so tables, and these machines don't touch this data in any way, only read it.
The upload only scope uses Guids as fortunately I can control that portion of the database and there would be no way 10 clients all using the same identity seed could sync back to the server properly. Both scopes use the default provisioning with bulk inserts and the whole 9 yards so there shouldn't be anything I'm doing on the provisioning end to screw this up.
I initially set everything up without using PerformPostRestoreFixup, and the initial database would be manually synchronized with insert statements from the host. This seemed fine, but no updates or deletes ever seemed to be applied. You can safely ignore this (mentioned only for historical accuracy and to prove my ineptness), as I then used VS2010 Database Projects to rebuild the database down to schema only and synchronized. I then used the steps outlined here (http://social.microsoft.com/Forums/br/syncdevdiscussions/thread/9ac6d1a1-1565-4b82-a8d8-3d4a9ff5d07b) (sync, backup, restore, call PerformPostRestoreFixup, sync on x clients), and on my dev box where I'm setting all this up I could see updates and deletes just fine. It's when I deploy this to the x clients that I'm not seeing a mirror of the database as I think I should.
The initial sync will complain and try to synchronize all records again; I believe this is expected. During the ApplyChangeFailed event on the client, I set everything other than DbConflictType.ErrorsOccurred to ApplyAction.RetryWithForceWrite. This may be a source of problems, as I initially thought this should be done to force the change down to the client. I want the server to always win in this scenario, but during tracing I always see the phrase "Local wins" during the bulk insert/update calls. It's possible I'm seeing the error before the re-apply happens, but it's awkward to look at.
The only problem I seem to be having is with the download-only scope. The initial client database is about a week old now, and if I use the PerformPostRestoreFixup steps I don't see any of the updates that have been applied between then and now, as I think I should. It's as if SyncFx almost prefers a blank database on the client side to kick off the initial sync; then all the updates seem to apply just fine, with no ApplyChangeFailed events firing.
If anyone has seen this before or has a clue where to go, I would greatly appreciate it. My brain is fried from trying to determine what's going on. My last-ditch effort will be to deploy blank databases to all the clients and have them start the sync. I've had no issues with this on the dev side, but I can only test one other client to know whether that does anything different. Aside from that, I don't know what to do other than to keep doing manual syncs, which would defeat the purpose entirely. I thought PerformPostRestoreFixup would alleviate the issue entirely, but I seem to be having the same problems with or without it, or perhaps I'm not looking at what I need to be.
Thanks
I wanted to report and close the entry with my findings.
When I would deploy a previously configured client database, I'd often get ApplyChangeFailed events in the form of this log:
"[05:30:41 PM] - ApplyChange Failed: TableName: , Stage: ApplyingInserts, ConflictType: LocalInsertRemoteInsert, Action: RetryWithForceWrite"
This is what I thought would be expected, as it tried to re-insert data that was already there. With RetryWithForceWrite this should have been converted to an update, but I found the data was not being updated with what was being sent down.
Once I started each client with a completely blank database and provisioned it locally, all of these errors went away. It's as if every client expects some unique ID that only it sets. I'm also using x64 builds rather than x86, which may or may not have a bearing on the results. I wish I could determine exactly what happened, but it seems that, when in doubt and whenever possible, starting from absolute zero and letting sync fill in the data is your safest option.