GraphDB Frequently Suspended or Inactive

I am using the GraphDB 8 free version for a project. The database frequently becomes inactive, and I don't know the exact reason. Kindly help resolve this.
Thank you

If you are using GraphDB Cloud, you should know that non-paid databases are suspended after an hour of inactivity. This can be resolved by upgrading to a higher tier.
http://cloud-docs.ontotext.com/display/S4docs/FAQ#FAQ-Mydatabasegotsuspended.Why%3F

Related

Is there any way to track login/logout timing on Onepanel?

I've installed Onepanel on my EKS cluster and I want to run the CVAT tool there. I want to keep track of user login/logout activity and timings. Is that even possible?
Onepanel isn't supported anymore as far as I know, and it bundles an outdated version of CVAT. CVAT has analytics functionality that can show working time and intervals of activity: https://opencv.github.io/cvat/v2.2.0/docs/manual/advanced/analytics/.

ColdFusion 11 to 2018 Upgrade -- Server Locking Up, How to Test Better?

We are currently testing an upgrade from CF11 to CF2018 for my company's intranet. To give you an idea how long this site has been running, our first version of CF was 3.1! It is still using application.cfm, and there is code from 1998, when I started writing this thing. Yes, 21 years -- I'm astonished, too. It is a hodgepodge of all kinds of older frameworks, too, including Fusebox.
Anyway, we're running a Windows 2012 VM connected to a SQL Server 2016 farm. Everything looked OK initially, but in the week I've been testing, the server slowed to a crawl once (a page that usually takes 100ms, with no DB involvement, took more than 5 seconds to run), and another time the server came to a grinding halt. The only way I could restart the CF Application service was by connecting from another server via the Services console, because doing it via Remote Desktop was so slow.
Now keep in mind -- it's just me testing. This is a site that doesn't have a ton of users, but still, 5 concurrent connections is normal, and upwards of 200-400 users hit this thing every day.
I have FusionReactor running on this thing now, so the next time a lockup happens I will be able to take a closer look, but what do you think is the best way I can test this? Our site is mostly transactional: users fill out forms to put internal orders through. We connect to XML web services and REST services, and we provide REST services ourselves, too. Obviously there's no way to completely replicate a production server's requests on a test server, but I need to do more thorough testing. Any advice would be hugely appreciated.
I realize your focus for now is trying to recreate the problem on test. That may not be as easy as hoped. Instead, you should be able to understand and resolve it in production. FusionReactor can help, but the answer may well be in the cf logs.
You don't mention reviewing the logs at the time of the hangup. See especially the coldfusion-error log, for out-of-memory conditions.
You mention raising the heap, but the problem may be with the metaspace instead. If so, consider simply removing the MaxMetaspaceSize setting from the JVM args. That may well be the sole and likely cause of such new and unexpected outages.
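To make that concrete, here is a sketch of the relevant line in ColdFusion's jvm.config (in {cf-home}/bin); the flags and values are illustrative of a default CF2018 install, not recommendations:

    # Before (illustrative; your actual args will differ):
    java.args=-server -Xms256m -Xmx1024m -XX:MaxMetaspaceSize=192m -XX:+UseParallelGC
    # After: the same line with the -XX:MaxMetaspaceSize argument removed,
    # so the metaspace can grow as needed:
    java.args=-server -Xms256m -Xmx1024m -XX:+UseParallelGC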
Or if it's not, and there's nothing in the logs at the time, THEN do consider FusionReactor. Does IT show anything happening at the time?
If not, then consider the need to tune the CF/web server connector. I assume you're using IIS. How many sites do you have? And how many connectors (folders in the CF config/wsconfig folder)? What are the settings in their workers.properties files? Are they optimized for the number of sites using that connector?
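For reference, a connector's workers.properties (under {cf-home}/config/wsconfig/<n>/) typically looks something like the sketch below; the property names are the usual tuning knobs, but the values here are purely illustrative:

    # Illustrative workers.properties for a single connector; values are examples only
    worker.list=cfusion
    worker.cfusion.type=ajp13
    worker.cfusion.host=localhost
    worker.cfusion.port=8018
    # how many connections each worker process may reuse against ColdFusion
    worker.cfusion.max_reuse_connections=250
    # size and timeout of the pool between the connector and ColdFusion
    worker.cfusion.connection_pool_size=500
    worker.cfusion.connection_pool_timeout=60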
Also, have you updated CF2018? Are there any errors in the update error log? Did you update the web server connector as well?
Are you running the CF2018 PMT (Performance Monitoring Toolset)? Have you updated it?
There could be still more to consider, but let's see how it goes with those. I have blog posts on these and many more topics that would elaborate on things, both at my site (carehart.org) and the Adobe cf portal (coldfusion.adobe.com).
But let's hear if any of this gets you going.

Build pipeline for a large project

I should start by saying that this is my first time really using Azure DevOps and setting up pipelines, so I apologize if I don't understand things right away and seem a little slow haha
I have a large Kentico CMS project (it's a .NET C# Website project) that I'm trying to set up a build pipeline for. Unfortunately, because it is so big, the 30-minute timeout always cancels the build process, and I'm not sure what to do to speed it up.
Below are my available pools to choose from. I don't think we have any self-hosted pools at the moment.
This is all for my job. Unfortunately, I don't have full access to our Azure DevOps or our Azure Portal, but there are some settings and configurations that I think I should be able to change. If there are settings or adjustments that I don't have access to, I can pass that information along to our IT and Platform Services department.
This is what my build report looks like, and these are the error messages I'm getting:
##[Error 1]
The agent has received a shutdown signal. This can happen when the agent service is stopped, or a manually started agent is canceled.
##[Error 2]
The job exceeded the maximum allowed time of 00:30:00 and was stopped. Please visit for more information.
Please let me know what other information I should provide.
Looks like the solution comes down to the pricing options. Please have a look here:
Free tier:
240 minutes (shared with Build)
30-minute maximum single job duration
Paid tier ($40 per agent):
360-minute maximum single job duration
Refer here for the detailed pricing.
I ended up creating a self-hosted agent, and that got things working. Unfortunately, the size of the repo still makes the build and release very slow, but I guess that will have to do for now.
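For anyone following along, here is a minimal sketch of how a pipeline can target a self-hosted pool and lift the job timeout in azure-pipelines.yml; the pool name and the build step are placeholders, not the asker's actual setup:

    # Illustrative pipeline fragment; pool name and steps are placeholders.
    jobs:
    - job: Build
      # 0 = the maximum the agent allows; effectively unlimited on self-hosted agents.
      # Microsoft-hosted free-tier jobs are still capped at 30 minutes regardless.
      timeoutInMinutes: 0
      pool:
        name: Default   # your self-hosted agent pool
      steps:
      - task: VSBuild@1
        inputs:
          solution: '**/*.sln'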

MSDTC in Always On Availability Groups

The company I work for would like to use the Always On Availability Groups architecture in our SQL Server-backed application. We have three databases straight out of installation, and one of those is partitioned by configuration. We currently use MSDTC to coordinate transactions between the three, i.e. if we commit to databases A and B and the commit on A succeeds, a failure on B means a rollback of both A and B, as opposed to just B.
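For readers unfamiliar with the pattern, this kind of two-database atomic commit is what TransactionScope escalation gives you; a minimal C# sketch (connection strings, table names, and SQL are placeholders) showing where the promotion to MSDTC happens:

    // Minimal sketch: enlisting a second connection in one TransactionScope
    // escalates the transaction to MSDTC, so a failure against B also rolls
    // back the work already done against A.
    using System.Data.SqlClient;
    using System.Transactions;

    public static class CrossDbWrite
    {
        public static void CommitToBoth(string connStringA, string connStringB)
        {
            using (var scope = new TransactionScope())
            {
                using (var connA = new SqlConnection(connStringA))
                {
                    connA.Open();
                    new SqlCommand("UPDATE dbo.Orders SET Status = 1", connA).ExecuteNonQuery();
                }
                using (var connB = new SqlConnection(connStringB))
                {
                    connB.Open(); // second connection: promoted to a distributed transaction here
                    new SqlCommand("INSERT INTO dbo.OrderAudit (Status) VALUES (1)", connB).ExecuteNonQuery();
                }
                scope.Complete(); // if we never reach this, both A and B roll back
            }
        }
    }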
We ran into an issue when we saw this article.
From my understanding, this basically means MSDTC is not supported in Always On Availability Group mode.
I could not find a replacement for this in SQL Server 2012.
So my questions are:
What options do we have (Shelve or open source Product/Code change)?
What, specifically, is the impact of running MSDTC in this setting (complete crash/missing transactions)?
Thanks in advance, your help is greatly appreciated.
Dor
I recently asked a similar question at: https://dba.stackexchange.com/questions/47108/alwayson-ag-dtc-with-failover
> What options do we have (Shelve or open source Product/Code change)?
I think you have two options:
Change your app so that it does not need DTC.
Change your database setup so that it does not use Availability Groups.
In my circumstance, we're using a commercial app, so option 1 is not viable. We are currently using database mirroring, and based on recent research I now understand that it is also not supported.
My takeaway is that it is possible to make it work, but it's not simple to do, and it puts you in an unsupported situation, which is not acceptable for us. Therefore, I plan to look at utilizing log shipping, moving from a hot standby (with mirroring) to a warm standby (with log shipping).
> What, specifically, is the impact of running MSDTC in this setting (complete crash/missing transactions)?
If you do decide to use DTC with Availability Groups or mirroring, you run the risk of corrupted/inconsistent data in a failover scenario. The article you cited gives a good example of how that can happen.
Admittedly, the same issue can occur with log shipping. The argument I plan to make is that with log shipping we'll have the ability to roll to a specific point in time, so we can make sure we only move to a point in time where we know everything is consistent.
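To illustrate that last point, a sketch of restoring the secondary only up to a known-consistent moment; the database name, file paths, and timestamp are hypothetical:

    -- Apply the next log backup, but stop at a point in time known to be consistent.
    -- STANDBY keeps the database readable between restores.
    RESTORE LOG MyAppDb
    FROM DISK = N'D:\LogShip\MyAppDb_0215.trn'
    WITH STANDBY = N'D:\LogShip\MyAppDb_undo.dat',
         STOPAT = N'2013-09-01T02:10:00';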
The commercial app we are using does not support high availability. This is our attempt at making it highly available.

On NServiceBus Profiles

I've been trying to find ways to improve our NServiceBus performance. I searched and stumbled on the profiles that you can set when running/installing the NServiceBus host.
Currently we're running the NServiceBus host as-is, and I read that by default we are using the "Lite" profile. I've also learnt from this link:
http://docs.particular.net/nservicebus/hosting/nservicebus-host/profiles
that there are also Integration and Production profiles. The documentation does not say much -- has anyone tried the Production profile and noticed an improvement in NServiceBus performance, specifically in the speed of consuming messages from the queues?
One major difference between the NSB profiles is how they handle storage of subscriptions.
The Lite, Integration, and Production profiles control how reliable NSB's configuration is. For example, the Lite profile uses in-memory subscription storage for all pub/sub registrations. This is a concern because, in order to register a subscriber under the Lite profile, the publisher has to already be running (so the publisher can hold the subscriber list in memory). This means that if the publisher crashes for any reason (or is taken offline), all the subscription information is lost (until each subscriber is restarted).
So the Lite profile is good if you are running on a developer machine and want to quickly test how your services interact. However, it is just not suitable for other environments.
The Integration profile stores subscription information in a local queue. This can be good for simple environments (like QA, etc.). However, in a highly distributed environment, holding the subscription information in a database is best -- hence the Production profile.
So, to answer your question, I don't think you will see a performance gain by changing profiles. If anything, changing from the Lite profile to one of the other profiles is likely to decrease performance (because you incur the cost of accessing queue or database storage).
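For completeness, per the linked docs the profile is chosen by passing its full type name to the host on the command line, and it is persisted when you install the host as a Windows service:

    NServiceBus.Host.exe NServiceBus.Production
    NServiceBus.Host.exe /install NServiceBus.Production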
Unless you've tuned the logging yourself, we've seen large improvements just from the reduced logging. The performance of reading off the queues is the same all around. Since the queues are local, you won't gain much from the transport. I would take a look at tuning your handlers and the underlying infrastructure. You may want to look into tuning MSMQ and at the disk you are using, etc. Another spot to check is how distributed transactions are behaving, assuming you are using a remote database that requires them.
Another option for increasing throughput is to increase the number of threads consuming the queue. This will require a license. If a license is not an option, you can run multiple instances of a single-threaded endpoint, but this requires you to shard your work based on message type or something else.
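As a sketch of that thread-count knob: in the MSMQ-era versions of NSB it lived in app.config. The section and type names changed across versions (NSB 3.x used MsmqTransportConfig; later versions replaced it with TransportConfig and MaximumConcurrencyLevel), so treat this fragment as illustrative:

    <!-- Illustrative app.config fragment; section/type names vary by NSB version -->
    <configuration>
      <configSections>
        <section name="MsmqTransportConfig"
                 type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
      </configSections>
      <!-- NumberOfWorkerThreads controls how many threads pull from the input queue -->
      <MsmqTransportConfig NumberOfWorkerThreads="4" MaxRetries="5" />
    </configuration>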
Continuing up the scale, you can then get into using the Distributor to load-balance work. Again, this will require a license, but you'll be able to add more nodes as necessary. All of the options above also apply to this topology.