Is there any way to track log in/log out timing on Onepanel? - amazon-eks

I've installed Onepanel on my EKS cluster and I want to run the CVAT tool there. I want to keep track of user log-in/log-out activities and timings. Is that even possible?

Onepanel isn't supported anymore as far as I know. It has an outdated version of CVAT. CVAT has analytics functionality: https://opencv.github.io/cvat/v2.2.0/docs/manual/advanced/analytics/. It can show working time and intervals of activity.

Related

Load URL based on a schedule

I have been hosting a site that requires routine automation, and in the past I took care of this automation with CRON functions, but my host has now killed that functionality and it is no longer available to me.
I have tried to solve this with Scheduled Tasks on a Windows machine that is always on; however, this doesn't appear to be reliable.
I have an AWS account, and it seems like there should be something in that morass that could take care of this, but I have not found it.
Does anyone know the best way to schedule a routine where I can call a URL daily on a schedule?
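On AWS, the usual replacement for a host-side CRON job is an EventBridge scheduled rule (e.g. `cron(0 6 * * ? *)` for 06:00 UTC daily) that triggers a Lambda function. A minimal sketch, where the target URL and handler name are placeholders:

```python
# Sketch of a Lambda function invoked by an EventBridge schedule.
# TARGET_URL is a placeholder; swap in the page you need to hit.
import urllib.request

TARGET_URL = "https://example.com/daily-job"  # hypothetical endpoint

def build_request(url: str) -> urllib.request.Request:
    # A plain GET with an explicit User-Agent, so the origin can
    # distinguish the scheduler from normal traffic.
    return urllib.request.Request(url, headers={"User-Agent": "daily-scheduler"})

def handler(event, context):
    # EventBridge calls this on the schedule; we just load the URL.
    with urllib.request.urlopen(build_request(TARGET_URL), timeout=30) as resp:
        return {"status": resp.status}
```

Point the EventBridge rule at the deployed function and no always-on machine is needed.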

BigQuery resource used every 12h without configuring it

I need some help understanding what happened in our cloud project: a BigQuery resource has been running every 12 hours without us configuring it. It also seems fairly intense, because we were charged, on average, one dollar a day for the past month.
After checking in Logs Explorer, I saw several logs regarding the BigQuery resource.
I saw the email of one of our software guys in them. Since I removed him from our Firebase project, there are no more requests.
However, that person did not configure anything regarding BigQuery, so we are a bit lost here, and this is why we are asking for help to investigate and understand what is going on.
Hope you will be able to help. Let me know if you need more information.
Thanks in advance.
NB: I have not tried re-adding the software guy's email yet. I wanted to see how it goes for the rest of the month.
The most likely causes I've observed for this in the wild:
A scheduled query was set up by a user.
A Data Studio dashboard was set up and configured to periodically refresh data in the background.
Someone set up a workflow that queries BigQuery, such as Cloud Composer, a Cloud Function, etc.
It's also possible it's just something like a script running in a crontab on someone's machine. The audit log should have relevant details, like the request origin, for cases where it's just something running as part of a bespoke process.
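One way to see who or what is issuing the jobs is BigQuery's `INFORMATION_SCHEMA.JOBS_BY_PROJECT` view, which records the principal and timestamp for each job. A sketch (the `region-us` qualifier and 30-day window are assumptions, adjust to your project):

```python
# Builds a query against BigQuery's INFORMATION_SCHEMA.JOBS_BY_PROJECT
# view. Run the SQL in the BigQuery console or via the
# google-cloud-bigquery client.
def audit_jobs_sql(region: str = "region-us", days: int = 30) -> str:
    return f"""
        SELECT user_email, job_type, statement_type, creation_time
        FROM `{region}`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
        WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {days} DAY)
        ORDER BY creation_time DESC
    """

# To execute (requires credentials; not run here):
# from google.cloud import bigquery
# rows = bigquery.Client().query(audit_jobs_sql()).result()
```

If the `user_email` column shows a service account or a single user every 12 hours, that points straight at the scheduled query or workflow responsible.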

Build pipeline for a large project

I should start with saying that this is my first time really using Azure Dev Ops and setting up pipelines, so I apologize if I don’t understand things right away and seem a little slow haha
I have a large Kentico CMS project (it's a .NET C# website project) that I'm trying to set up a build pipeline for, but unfortunately, because it is so big, the 30-minute timeout always cancels the build process, and I'm not too sure what to do to speed it up.
Below are my available pools to choose from. I don’t think we have any self hosted pools at the moment.
This is all for my job. I unfortunately don’t have full access to our Azure Dev Ops or our Azure Portal but there are some settings and configurations that I think I should be able to do. If there are some settings or adjustments that I don’t have access to, I can pass that information along to our IT and Platform Services department.
This is what my build report looks like.
And these are the error messages that I'm getting.
##[Error 1]
The agent has received a shutdown signal. This can happen when the agent service is stopped, or a manually started agent is canceled.
##[Error 2]
The job exceeded the maximum allowed time of 00:30:00 and was stopped. Please visit for more information.
Please let me know what other information I should provide.
It looks like the solution is more a matter of pricing tier.
Please have a look here:
Free Tier
240 minutes (shared with Build)
30-minute maximum single job duration
Paid Tier
$40 / Agent
360 minute maximum single job duration
Refer here for the detailed pricing
I ended up creating a self-hosted agent and that got things working. Unfortunately the size of the repo still makes the build and release very long. But I guess that will have to do for now.
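On a self-hosted agent, the job-level timeout can also be raised (or lifted) in the pipeline YAML. A sketch, assuming a job named `Build` (this is a config fragment, not a full pipeline; on Microsoft-hosted agents the tier limits still apply):

```yaml
jobs:
- job: Build
  # 0 means "no limit" on self-hosted agents; Microsoft-hosted jobs
  # remain capped by the tier's maximum single job duration.
  timeoutInMinutes: 0
  steps:
  - script: echo building
```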

How to proceed with query automation using Import.io

I've successfully created a query with the Extractor tool found in Import.io. It does exactly what I want it to do; however, I now need to run it once or twice a day. Is the purpose of Import.io as an API to let me build logic such as data storage and scheduled tasks (running queries multiple times a day) in my own application, or are there ways to schedule queries and make use of long-term storage of my results entirely within the Import.io service?
I'm happy to create a Laravel or Rails app to make requests to the API and store the information elsewhere, but if I'm reinventing the wheel by doing so and they provide the means to address this, then that is a true time saver.
Thanks for using the new forum! Yes, we have moved this over to Stack Overflow to maximise the community atmosphere.
At the moment, Import does not have the ability to schedule crawls. However, this is something we are going to roll out in the near future.
For the moment, there is the ability to set a Cron job to run when you specify.
Another solution, if you are using the free version, is to use a CI tool like Travis or Jenkins to schedule your API scripts.
You can query the extractors live, so you don't need to run them manually every time. This will consume one of the requests from your limit.
The endpoint you can use is:
https://extraction.import.io/query/extractor/extractor_id?_apikey=apikey&url=url
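For example, the endpoint above can be called from a small script. A sketch in Python, where the extractor ID, API key, and target URL are placeholders:

```python
# Builds the live-query URL for the import.io extraction endpoint shown
# above. Each call counts against your request limit.
from urllib.parse import urlencode, quote

def extraction_url(extractor_id: str, api_key: str, target_url: str) -> str:
    base = f"https://extraction.import.io/query/extractor/{quote(extractor_id)}"
    return base + "?" + urlencode({"_apikey": api_key, "url": target_url})

# Fetching the result (requires a real key; not run here):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(
#     extraction_url("my-extractor", "MY_KEY", "https://example.com/page")))
```

A cron entry or CI job can then call this script on whatever schedule you need.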
Unfortunately, the script will not be a very simple one, since most websites present very different response structures to import.io, and as you may already know, the premium version of the tool now provides scheduling capabilities.

Java application exception monitoring & alarms

We have a few applications running on Windows 2000 and 2008 servers. They are written in Java.
These applications need to perform many automation tasks, and we are having difficulty monitoring them. Sometimes, for one reason or another, an application either hangs or fails to perform its desired job. We only find out about this a few days later, when someone reports that the desired function hasn't been executed.
To get around this, we added emails for each important exception, but then a developer needs to spend time checking those 1000 emails every day, which again is neither a feasible nor an efficient solution.
Now we are looking for an alert, alarm, notification display, and monitoring system. We need a remote application that can receive alarms from these Java applications and then, based on certain information/conditions/configuration, display some red, orange, or green text on the screen. From the red text, users can visually see that there is an issue in the system. If required, users can be notified that there is a serious issue in the application.
Please help us identify any existing mechanism, tool, or package to achieve this goal. Any suggestion would be highly appreciated.
Thanks
There are myriad ways to achieve this, but all of them will require some effort. Which way to proceed depends on your needs and abilities. A couple of options occur to me:
Have your processes log their exceptions to a Syslog daemon running on some central server. Then you could have an admin read through the log file for serious problems, but there are many ways to post-process syslog messages; a web search might give some more hints.
Is there any way, when logged into the server, to observe whether or not the process is running properly? You could install something like Nagios on a server and write a plugin that monitors your particular process on all the servers. The plugin can basically be a shell script that checks "ps", or a log file, or whatever you want.
If you are in an IT department, your organization might already have some system like this (NMS).
I'm not sure why this question is tagged "snmp", but it's technically possible to install an SNMP agent on each server, and have them send traps on certain conditions. I do think it would be slightly overkill because you would also have to get a good SNMP manager to receive the traps and alert a sysadmin.
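The Nagios route above comes down to a small plugin that prints a status line and uses the conventional plugin exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). A minimal sketch in Python, assuming the Java process can be identified by a name in the `ps` listing (the process name is a placeholder):

```python
# Minimal Nagios-style check: looks for a process name in `ps -ef`
# output and returns the conventional plugin status codes.
import subprocess

OK, WARNING, CRITICAL = 0, 1, 2

def count_processes(name: str, ps_output: str) -> int:
    # Count `ps` output lines that mention the process name.
    return sum(1 for line in ps_output.splitlines() if name in line)

def check(name: str) -> int:
    ps = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout
    n = count_processes(name, ps)
    if n == 0:
        print(f"CRITICAL - {name} not running")
        return CRITICAL
    print(f"OK - {n} instance(s) of {name} running")
    return OK

# Wire it up as a Nagios command, e.g.:
#   import sys; sys.exit(check("MyJavaApp.jar"))  # hypothetical name
```

Nagios then handles the display (green/yellow/red) and the notification escalation that the question asks for.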
I would go for a combination of the check_logfiles plugin to parse log exceptions and raise alerts, and check_jmx/jmxquery to check metrics inside the JVM such as heap usage and thread count.
check_logfiles
check_jmx