I'm working with a Magento 2 plugin that syncs data to QB Desktop using XML.
A ton of work went into making the plugin work with the current Magento setup, but there's a new error that shows up when a lot of data is being synced (500 products, for example).
It looks like some products have errors during sync, but QB is not clearing those errors out of memory.
The products are being synced in batches - one request contains 50 products/items. Some of the products have errors, some don't.
QB posted about this here:
https://developer.intuit.com/app/developer/qbdesktop/docs/develop/exploring-the-quickbooks-desktop-sdk/error-recovery#using-error-recovery-in-qbxml-based-applications
But, there's no actual example that can be used.
Has anyone experienced this or has an example of how to handle the errors?
The SDK guide is also not very clear about what to do:
https://static.developer.intuit.com/qbSDK-current/doc/pdf/QBSDK_ProGuide.pdf
All I want to do is clear the error state out of QB's memory and let it process the next request without plugging up memory.
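For reference, this is roughly what I gather the request envelope needs to look like based on those docs - the onError attribute and the message set ID come from the error-recovery page, while the qbXML version and the inventory message type are just placeholders for whatever the plugin actually sends:

<?xml version="1.0"?>
<?qbxml version="13.0"?>
<QBXML>
  <!-- continueOnError: a failing item returns an error response, but the rest of the batch still processes -->
  <!-- newMessageSetID: tags the batch so the stored error-recovery state can later be retrieved or replaced -->
  <QBXMLMsgsRq onError="continueOnError" newMessageSetID="batch-0001">
    <ItemInventoryAddRq requestID="1">
      <ItemInventoryAdd>
        <Name>SKU-0001</Name>
      </ItemInventoryAdd>
    </ItemInventoryAddRq>
    <!-- ... up to 50 item requests per batch, each with its own requestID ... -->
  </QBXMLMsgsRq>
</QBXML>

Each response aggregate then carries its own statusCode/statusMessage, so the failed items can be logged on the Magento side while the rest of the batch goes through. What I still don't understand is how to make QB discard the stored error state between batches.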
I am using the Azure DevOps Migration Tools and it hits an exception when I try to migrate a work item with 35,977 revisions.
Here is my configuration:
Here is the SOAP error I am getting:
Wow, that's a lot of revisions. I'm not sure I have ever seen a work item with that many revisions. I assume that you have some sort of tool that auto updates the work items, which is what is causing this.
Since it takes about 200 ms per revision to save work items, I would expect it to take around 7,200 seconds (about 2 hours) to migrate just one work item with that many revisions!
Since the Azure DevOps Migration Tools use the old SOAP API (Object Model), this one is out of our hands. There may be some way to page the revisions, but I am not aware of one. If you do find a way to only load partial work items I'd be interested... although, thinking about it, I think there is a wi.LoadPartial() method... I've never used it.
To move forward you could add AND [System.Rev] < 25000 to your query so that the work items with that many revisions are not loaded.
This would allow you to continue with the closed items that are supported.
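If it helps, the resulting WIQL would look something like this - everything other than the [System.Rev] clause is a placeholder, since I can't see your actual query:

SELECT [System.Id]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan')
  AND [System.Rev] < 25000
ORDER BY [System.ChangedDate] DESC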
We have added a way to migrate only some of the revisions when there are too many, but we never envisaged, in our wildest defensive coding strategies, that a work item would be above 1000 revisions. I can imagine memory and other issues cropping up.
Added to: https://github.com/nkdAgility/azure-devops-migration-tools/issues/1096
I tried setting up the gun.js server code on two machines and set up two browser clients on each machine, which contain code for registering both machines as peers. The server code on both machines has put statements for two different nodes.
Scenario 1
Started both servers: existing data is not syncing.
Scenario 2
Started both servers and both clients: existing data is not syncing.
Scenario 3
Put a new (distinct) data item from each browser console: the newly put data gets synced on both machines.
Please guide me in resolving this issue.
Thanks
@FirstTrial: I did not downvote the question (I am not sure why or who did).
I'm not able to replicate the issue from your explanation; here is a way you can test what is going on:
https://github.com/gundb/panic-server
We created a distributed testing tool to simulate failures and stress test the system.
Here is an extremely well documented example in less than 300 lines: https://github.com/amark/gun/blob/master/test/panic/load.js
There is a PANIC test very similar to what you describe, and it passes without any errors; you can check it out here: https://github.com/amark/gun/blob/master/test/panic/b2s2s2b.js
Let me know!
We recently planned to switch from SQLite to Realm in our macOS and iOS apps due to a DB file corruption issue with SQLite, so we started with the macOS app first. All the code changes were smooth and the app started working fine.
Background about the app and its DB usage: the app uses the DB very heavily and performs a lot of reads and writes every minute, saving big XMLs to it. Each minute it writes/updates around 10-12 records (at most) with XML and reads 25-30 records too. After each read it deletes the data, along with the XML, from the database. My expectation was that once data is deleted it should free up space and reduce the file size, but the file looks like it is growing continuously.
To test the new DB changes we kept the app running for 3-4 days and the DB file size grew to 64.42 GB, and the app started getting slow. Please refer to the attached screenshot.
To debug further, I started the app with a new DB file; the size was 4 KB, but within 5 minutes it grew to 295 KB and never shrank, even though records were continuously added and deleted.
To clarify further, the app uses NSThreads to perform various operations, and those threads write and read data to the DB, but with proper begin/commit transactions. I also read 'Large File Size' at https://realm.io/docs/java/latest/#faq and tried to find compactRealm, but I can't find it in Objective-C.
Can anybody please advise?
Update - I give up on Realm
After 15 days of effort I have finally stopped using Realm and am going back to fixing/working around the DB file corruption issue with SQLite. The huge Realm DB file issue was fixed by making changes to the threading code, but then I started getting a 'Too many open files' error after running the app for 7-8 hours.
I debugged for a whole week and made all possible changes, and at some point everything looked good, as Xcode was not showing any open files. But then I started getting the 'Too many open files' crash again, debugged with Instruments, and found there were very many open files for the Realm database, lock, commit and cv files.
I am sure there are no leaks in the app, and Xcode also does not show those open files under Disk usage. I decided to invoke the lsof command in code before and after Realm calls; most of the time it doesn't increase the open file count, but sometimes in between it does. In my app it went from 120 open files to 550 in around 6 hours. Xcode looks all fine via Disk usage, but Instruments shows the open files.
No good support from the Realm team; I sent them an email and got just one response. I made many changes to the code following their suggestions and it doesn't work at all, so I gave up on it. I think it's good for small apps only.
I have a couple of Composite C1 CMS websites.
Currently, to edit them I use the web-based CMS on the live site.
However, I would like to update the code and content in Visual Studio locally, then sync to the web. The problem is that if my local copy is older than what is online (e.g. a non-techy client has edited something on the live site) and I Web Deploy, it will go over the top of the newer file on the server.
I need a solution that works out which change is the newest. I can't find anything on Google or in the C1 docs.
How can I sync, preferably using Web Deploy? Do I need some kind of version control?
Is there a best practice for this? Editing the live site through the web interface seems a bit dicey and is slow.
The general answer to this type of scenario seems to be to use the Package Creator. With that you can develop locally, add the files you've changed to a package, and install that package on a live site. This solution does not cover all the parts of your question though, and has certain limitations:
You cannot selectively add content to a package. It's all pages or no pages.
Adding datatypes is easy, but updating them later requires you to delete the datatype (and data), and recreate the datatype.
In my experience packages work well for incremental site updates, if you limit the package contents to front-end stuff like CSS, images and such.
You say you need a solution that works out the newest changes - I believe the only solution to this is yourself, with the aid of some tooling. I don't think there's a silver bullet solution here.
Should you use a version control system? Yes! By all means. Even if you are not sharing your code with anyone, a VCS is a great way to get to know Composite C1 from a file system perspective, as you can carefully track which files change on disk as you develop. This knowledge is crucial when you want to continuously add features to a website that is already alive and kicking - you need to know what to deploy, and what not to touch.
Make sure you read the docs on how Composite fits in VCS: http://docs.composite.net/Configuration/C1-and-Version-Control
I assume that your sites are using the XML data storage (if you were using SQL Data Store, your content would not be overridden upon sync).
This means that your entire web application lives in one folder on disk on the web server, which can be an advantage here.
I'll try to outline a solution that could work for you, although I must stress that I've never tried this - I'm making it up as I type.
Let's say you're using git: download the site in its entirety from the production web server, and commit the whole damned thing* to your master branch.
Then you create a new feature branch from that commit, and start making the changes you want to deploy later, and carefully commit your work as you go along, making sure you only commit the changes that are needed for your feature to work, to the feature branch.
Now you are ready to deploy: you switch back to the master branch, and again download the entire site and commit it to master.
You then merge your feature branch into the master branch, and have git do all the hard work of stitching your changes in with the changes from the live site. There are bound to be merge conflicts, and that is where you will have to jump in and decide for yourself what content needs to go live.
After this is done and tested, you can web deploy the site up to the production environment.
Changes to the live site might have occurred while you were merging, so consider closing the site, or parts of it, during this process.
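Sketched out as commands, the whole cycle would look roughly like this - the branch name and commit messages are just examples, and like the rest of this outline it is untested:

# one-time: put the downloaded production site under version control
git init
git add -A
git commit -m "Snapshot of production site"

# develop on a feature branch, committing only what the feature needs
git checkout -b my-feature
git commit -am "My feature work"

# before deploying: refresh master with a fresh download of the live site
git checkout master
# (overwrite the working copy with the newly downloaded site here)
git add -A
git commit -m "Production snapshot before deploy"

# stitch the feature into the latest production state, resolving conflicts by hand
git merge my-feature

# test, then Web Deploy the merged result to production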
If you are using SQL Data Store, I suggest paying for a tool like Red Gate's SQL Compare and SQL Data Compare, or SQL Delta, to compare your dev database to the production database, and hand-pick SQL scripts that can be applied to the production database along with your feature deployment.
* Do consider using a .gitignore file to avoid committing certain files - refer to the docs for more info.
I suppose you should use the Package Creator
Also have a look here: http://docs.composite.net/Configuration/C1-and-Version-Control
We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when the admin site updates something, the customer-facing site is updated.
It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable local cache to get better performance. However, it doesn't seem to be working.
We have enabled it like this...
bool UseLocalCache = true;  // toggle for enabling the local cache
int LocalCacheObjectCount = int.MaxValue;
TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

if (UseLocalCache)
{
    // 'configuration' is our DataCacheFactoryConfiguration
    configuration.LocalCacheProperties =
        new DataCacheLocalCacheProperties(
            LocalCacheObjectCount,
            LocalCacheDefaultTimeout,
            LocalCacheInvalidationPolicy
        );
    // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
}
Initially we tried using a timeout invalidation policy (3 minutes) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated on the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or it is, but always misses).
The cache.GetType().Name returns "LocalCache" - so the factory has made a local cache.
Running "Get-CacheStatistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of request counts. However, on the production environment, the request count increases dramatically.
We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either.
I can't find anything on SO or Google.
Have we done something wrong? Have we got a screwy install of AppFabric? (We installed it via the Web Platform Installer, I think.)
(Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.)
Any insights gratefully received!
Which DataCache methods are you using to read from the cache? Several of the DataCache methods will always make a hit against the server regardless of local cache being configured. You pretty much have to make sure you only use Get if you want the local cache to be leveraged.
This is one of my biggest nits with AppFabric Caching. They don't explain any of this to you, so when you begin to rely on local caching you fall into these little pitfalls: you don't think you're paying a penalty for talking to the service, transferring data over the wire and deserializing objects, but you are.
The worst thing is, I could understand having to talk to the service to make sure the local cache represents the latest data. I can even understand transferring the data back so that multiple calls are not made. What I can not understand for the life of me though is that even if the instance in the local cache is verified to still be the current version that came back from the cache, they still deserialize the object from the wire rather than just returning the instance that's in memory already. If your objects are large/complex this can hurt a lot.
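For illustration, here is roughly the pattern I mean - the cache name and key are made up, and the factory is assumed to be configured for local cache the same way as in the question (server endpoints omitted; they can be supplied in code or in the client configuration):

DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
    10000,                                          // max objects held locally
    TimeSpan.FromMinutes(3),                        // local copy considered fresh for 3 minutes
    DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

DataCacheFactory factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetCache("MyCache");

cache.Put("some-key", "some-value");               // write always goes to the cluster
string first = (string)cache.Get("some-key");      // server round trip; result lands in the local cache
string second = (string)cache.Get("some-key");     // served from the local cache until the timeout expires

// By contrast, calls such as GetIfNewer or GetAndLock have to talk to the
// cluster every time, so they never benefit from the local cache.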
After days and days of looking into why we get so many Local Cache misses we finally solved it:
There is a bug with local cache in AppFabric v 1.1 that is fixed in CU4, see http://support2.microsoft.com/kb/2800726/en-us
Make sure that the Microsoft.ApplicationServer.Caching.Client.dll used by your application is also updated. We had CU4 installed on the machine but got the Client.dll without CU4 from a NuGet package in our application. In our case a simple NuGet package update made everything work.
After installing CU4 and making sure that the Client.dll was also updated we reduced our reads towards the AppFabric Host by a lot, due to Local Cache hits increasing. yay!
Have you tried using an NHibernate profiler? http://nhprof.com/
There is also this:
http://mdcadmintool.codeplex.com/
It's a nice way to manage and view the cache.
Both of these may help in debugging the issue.