Using SoftLayer API to configure Evault Backup (configure agent, jobs, and schedule)

I would like to automate, via the SoftLayer API, the configuration of an Evault Backup system -- configure the agent, create a job, set the file selection to backup, create the schedule. I can't find any structures that seem to contain that information to create the configuration (except for creating a schedule). Does anyone know if the items needed are available using the SoftLayer API?
To get a better picture of the underlying structures involved, I went through the GUI, created an agent, jobs, and a schedule, and can see that the backups are running. I can then use the SoftLayer API to query some things -- the job details (job name/description, last run date and result) and the agent status -- but I cannot seem to query the schedules or replication schedule, nor any of the agent configuration information beyond its status.

As far as I know, with the SoftLayer API you are only able to retrieve information about the EVault device; to configure it you need to use the WebCC client.
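For the query side, here is a minimal sketch using the SoftLayer Python client, assuming the EVault accounts are exposed through the SoftLayer_Account service's getEvaultNetworkStorage method (adjust the object mask to the properties you actually need):

# pip install SoftLayer
import SoftLayer

# Credentials can also come from ~/.softlayer or environment variables.
client = SoftLayer.create_client_from_env(username='apiuser', api_key='apikey')

# List EVault storage accounts on the account; the mask below is an example only.
evaults = client.call('Account', 'getEvaultNetworkStorage',
                      mask='mask[id,username,capacityGb,serviceResourceBackendIpAddress]')
for ev in evaults:
    print(ev['id'], ev['username'], ev.get('capacityGb'))

This only reads existing configuration; as noted above, agent/job/file-selection setup still has to be done through WebCC.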

Related

Log Analytics retention policy and querying on logs

I would like to know how to address this scenario in Azure Log Analytics: I need to generate kube-audit logs for different clusters every week and also retain these logs for approximately 400 days. Storing everything in Log Analytics will cost me more, and it's not an optimized architecture since I will not need the data that often. So I would like to know from experts what the best way is to design the architecture, where the kube-audit logs can be retained for 400 days and be available for querying when required without incurring too much cost.
PS: I have also heard in my team that querying 400 days of logs always times out in KQL.
Log Analytics offerings:
Log Analytics now lets you manage several service tiers at table scope. You can set your data to the Archive tier, which has no direct query capability but comes at a much lower cost; archive retention spans up to 7 years.
When needed, you can elevate a subset of your data back into the Analytics tier so that it can be queried. This action is called a "Search job".
Another option is to elevate an entire time range back to the Analytics tier; this is called "Restore logs".
Tables' different service tiers -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-retention-archive?tabs=api-1%2Capi-2
Search job offering -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/search-jobs?tabs=api-1%2Capi-2%2Capi-3
Restore logs -
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/restore?tabs=api-1%2Capi-2
All of these are currently in public preview.
Both offerings - Search jobs and Restore logs - let you bring your data back on demand; I can't comment on the actual cost.
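As a rough sketch of the table-level configuration described in the first link (the api-version, table name, and bearer token below are assumptions taken from the preview docs and may change), setting a short interactive retention plus a long total retention, with the remainder archived, could look like this:

# pip install requests
# Assumes you already have an Azure AD bearer token with rights on the workspace.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WORKSPACE = "<workspace-name>"
TABLE = "AKSAudit_CL"          # hypothetical custom table holding kube-audit logs
TOKEN = "<bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.OperationalInsights"
    f"/workspaces/{WORKSPACE}/tables/{TABLE}?api-version=2021-12-01-preview"
)

body = {
    "properties": {
        "retentionInDays": 30,        # interactive (Analytics) retention
        "totalRetentionInDays": 400,  # total retention; the remainder sits in Archive
    }
}

resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())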
Azure data explorer solution:
Another option is to use Azure Storage to hold your data (as an example). Azure Data Explorer lets you create an external table, which is a logical view on top of your data while the data itself is kept outside the ADX cluster. You can then query the data through ADX, though you should expect some degradation in query performance.
ADX external table offering -
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/schema-entities/externaltables
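If you go that route, querying the external table from Python could look roughly like this (the cluster URL, database, and external table name are placeholders for whatever you define over your storage account):

# pip install azure-kusto-data
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-cluster>.<region>.kusto.windows.net"   # placeholder
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# 'KubeAuditExternal' is a hypothetical external table defined over blobs in Azure Storage.
query = """
external_table('KubeAuditExternal')
| where Timestamp > ago(400d)
| summarize count() by bin(Timestamp, 1d)
"""

response = client.execute("<database>", query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["count_"])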

What is the most performant way to submit a POST to an API

A little background on what I am needing to accomplish. I am a developer of a cloud-based SaaS application. In one instance, my clients use my application to log the receipt of goods as they come across a conveyor line. Directly before the PC where they are logged into my app, there is another Windows PC that is collecting, from instruments, the moisture and weight of each item. I (personally, not my app) have full access to this PC and its database. I know how I am going to grab the latest record from the DB via a stored procedure/SQLCMD.
On the receiving end, I have an API endpoint that needs to receive the ID, Weight, Moisture, and ClientID. This all needs to happen in less than ~2 seconds, since they are waiting to add this record to my software's database.
What is the most performant way for me to stand up a process that retrieves the record from the DB and then calls the API? I also want to update the record to flag success on a 200 response. My thought was to script all of this in a batch file and use cURL to make the API call, then call that batch file from a Windows scheduled task. But I feel like there may be a better way with fewer moving parts.
P.S. I am not looking for code solutions per se, just direction or tools that will help. Also, I am using the AWS stack to host my application.
The most performant way is to use AWS Amplify; it's a ready-made AWS framework and development environment that can connect your existing DB to a REST API easily.
You can check their documentation on how to build it:
https://docs.amplify.aws/lib/restapi/getting-started/q/platform/js
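For reference, here is a minimal sketch of the poll-and-POST approach described in the question (the stored procedure name, table, column names, and endpoint URL are all hypothetical; authentication and error handling are omitted):

# pip install pyodbc requests
import pyodbc
import requests

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=Instruments;Trusted_Connection=yes")
API_URL = "https://api.example.com/receipts"   # hypothetical endpoint

def push_latest_record():
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        # Hypothetical stored procedure returning the newest unsent reading.
        row = cur.execute("EXEC dbo.GetLatestUnsentReading").fetchone()
        if row is None:
            return
        payload = {"Id": row.Id, "Weight": row.Weight,
                   "Moisture": row.Moisture, "ClientID": row.ClientID}
        resp = requests.post(API_URL, json=payload, timeout=2)
        if resp.status_code == 200:
            # Flag the record as successfully sent.
            cur.execute("UPDATE dbo.Readings SET Sent = 1 WHERE Id = ?", row.Id)
            conn.commit()

if __name__ == "__main__":
    push_latest_record()

Running something like this from Task Scheduler, or as a small long-running service that polls in a loop, keeps the moving parts down to one script plus the database.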

Bigquery user statistics from Microstrategy

I am using MicroStrategy to connect to BigQuery using a service account. I want to collect user-level job statistics from MSTR, but since I am using a service account, I need a way to track user-level job statistics in BigQuery for all the jobs executed via MicroStrategy.
Since you are using a service account to make the requests from MicroStrategy, you could list all the jobs in your project and then, using each job ID in the list, get the job's details, which include the email used to run that job.
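A minimal sketch with the google-cloud-bigquery Python client (the project ID is a placeholder):

# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # placeholder project ID

# List recent jobs in the project; all_users=True includes jobs from every
# identity, including the service account used by MicroStrategy.
for job in client.list_jobs(all_users=True, max_results=50):
    print(job.job_id, job.user_email, job.job_type, job.state, job.created)

From there you can filter on user_email to isolate the service account's jobs and read per-job statistics such as total_bytes_processed on query jobs.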
Another workaround would be to use Stackdriver Logging advanced filters to get the jobs made by the service account. For instance:
resource.type="bigquery_resource"
protoPayload.authenticationInfo.principalEmail="<your service account>"
Keep in mind this only shows jobs from the last 30 days due to the log retention period.
Hope it helps.

Dynamic scheduler on GCP

Does GCP have a job scheduling service like Azure Scheduler, where jobs can be scheduled and managed dynamically via API?
The Google cron service is configured in a static file, and it seems like their answer to this is to use it to poke a roll-your-own service backed by Pub/Sub and a data store. I'm looking for Quartz-like functionality, consumable by App Engine, which can be managed and invoked via API, as opposed to managing a cluster, queue, and compute instance/VM deployment of Quartz (or the like) or rolling a custom solution. It should support 50 million jobs per day with retry/recoverability and dynamic per-tenant scheduling capabilities.
As you observed, currently there is no such API/service directly available on GCP; there is an open feature request (on GAE) for it.
But, also as you observed, it is possible to build and use a custom solution, just like the one you proposed - that is probably the cheapest and easiest way to build one today on top of an existing App Engine based project.
Depending on the context, even simpler solutions are possible. For a GAE context check out, for example, How to schedule repeated jobs or tasks from user parameters in Google App Engine?.
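A rough sketch of the roll-your-own idea from the question, as a cron-triggered handler that scans a Datastore kind of per-tenant job definitions and publishes the due ones to Pub/Sub (the kind, property, project, and topic names are all hypothetical):

# pip install google-cloud-datastore google-cloud-pubsub
import json
from datetime import datetime, timedelta, timezone

from google.cloud import datastore, pubsub_v1

ds = datastore.Client()
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "job-triggers")   # hypothetical topic

def run_due_jobs():
    """Called every minute by App Engine cron; fires jobs whose next_run has passed."""
    now = datetime.now(timezone.utc)
    query = ds.query(kind="ScheduledJob")          # hypothetical kind
    query.add_filter("next_run", "<=", now)
    for job in query.fetch(limit=1000):
        publisher.publish(topic_path, data=json.dumps({
            "tenant": job["tenant"],
            "job_id": job.key.id_or_name,
            "payload": job.get("payload"),
        }).encode("utf-8"))
        # Reschedule; interval_minutes is stored per tenant/job.
        job["next_run"] = now + timedelta(minutes=job["interval_minutes"])
        ds.put(job)

Retries and fan-out then come from the Pub/Sub subscribers, and schedules can be created or changed per tenant simply by writing Datastore entities through your own API.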

Using SES to send a personalised weekly digest email?

I am wondering where to start with setting up a weekly personalised digest to go out to my users (over 200k).
It would pull in content specific to each user. We currently use SES for notifications, on a Windows EC2 instance with SQL Server.
Is there a cron-style thing for Windows/IIS?
Probably the easiest way to do this is to develop a console application to send your emails and then use the Windows Task Scheduler to schedule it to run once a week.
Within your console application you'll basically get your users from your database, loop through each user getting whatever personalised data you need to build up an email message, and then pass the message off to Amazon SES.
To use Amazon SES you'll need to request a sending quota increase, because the default quotas are way below what you need: the default sending quota is 10,000 emails per 24-hour period, with a maximum send rate of 5 emails per second.
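The same flow, shown here as a Python sketch with boto3 for brevity (a C# console app with the AWS SDK follows the same shape; the region, sender address, DSN, and query are placeholders):

# pip install boto3 pyodbc
import time
import boto3
import pyodbc

ses = boto3.client("ses", region_name="us-east-1")   # region is an assumption
conn = pyodbc.connect("DSN=MyAppDb")                  # placeholder connection

def send_weekly_digest():
    cur = conn.cursor()
    # Hypothetical query returning each user's address and pre-built digest HTML.
    for email, html in cur.execute("SELECT Email, DigestHtml FROM dbo.WeeklyDigest"):
        ses.send_email(
            Source="digest@example.com",
            Destination={"ToAddresses": [email]},
            Message={
                "Subject": {"Data": "Your weekly digest"},
                "Body": {"Html": {"Data": html}},
            },
        )
        time.sleep(0.2)   # crude throttle to stay under the per-second send rate

if __name__ == "__main__":
    send_weekly_digest()

For 200k+ recipients you would batch and parallelise this against your approved send rate, and handle bounces and complaints via SES notifications.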
To implement this functionality you'll need these components:
1. An application that collates all the bits of information and builds the email message. The same app will probably also hand the message to the email service (SES). If every message is unique to the user, this is essentially the question of how to send an email message programmatically. Options:
- Write your own script, in C# or any other language. It needs to connect to the database, extract either pieces of the email body or the whole message collated by a SQL query / stored procedure, extract the email addresses, and send the email.
- Use SSIS to build the process of gathering the needed bits and sending the emails. It offers a graphical interface for the process map. It may not be as fast as a custom script, but scheduling is very simple with SQL Server Agent, and you can implement different processes depending on run-time calculations.
- Use other software to create and send the emails (except Mail Merge in MS Word, joking).
2. A scheduling tool that will run the application from step 1 on a regular basis:
- The Windows scheduling tool (Task Scheduler).
- SQL Server Agent. You can run SQL scripts and stored procedures; scripts and SPs can contain file-system commands (call .exe files, read data from files, etc.), but you'll need to do some research on syntax, functionality, and the necessary permissions.
- Other scheduling apps are also available.
3. Content control. It may be done by the app itself, but you'll create some tables or use files for settings, common parts of the email message, and so on. You will also want to keep a record of the various rules (logic) used to create the custom messages.
Generic advice: the first time around, go with software you are familiar with. The solution may be cumbersome, but the longest way is taking shortcuts.
There are also many mass-mailing and mail-merge applications around. You can find those easily, compare functionality, and maybe choose one of those.
Disclaimer: I'm the author of the library mentioned below.
You might be interested in the library I'm writing. It's called nvelopy. I created it so that it's easy to embed in a .NET application to send transactional emails and campaigns. That way, it can directly access your data (users) to create a segment and send to it via Amazon while respecting the sending quota.
I developed it for my own web service. I didn't want to set up another server or export/import users, and I wanted it to "talk" to my datastore (RavenDB). A SQL datastore connector is also in the works.
Let me know if you have any questions (via the contact page of the nvelopy web site).