Can you make separate schedules for workflows on staging versus prod? - flyte

I would like to have certain workflows run on a different schedule in staging and in production (e.g. one workflow runs multiple times a day in staging and only once a day in production). Doing so would help with getting quicker feedback on the runs and also save on compute costs. How can I do this with flytekit? Is it recommended?

There is no easy way to do this, as it goes against the main integration/deployment strategy championed by Flyte.
Flyte entities, comprising tasks, workflows, and launch plans, are designed to be iterated on in a user's development domain. After iterating, users are expected to deploy that version to the staging domain and then to production. The more differences there are between those domains, the more confusion we suspect there will be down the road.
That said, it is possible, because the registration step allows the user to specify a different configuration file. One of the entries in the configuration file is the workflow_packages setting, which lets registration scan different folders when registering in staging versus production, for instance.
To make a launch plan exist in only one domain, put it in a folder/module that is not reachable from any of the existing workflow packages, and put the other domain's launch plan in a separate folder of its own.
In the staging file,
[sdk]
workflow_packages=app.workflows,staging_lps
In the production file,
[sdk]
workflow_packages=app.workflows,production_lps
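As a rough sketch of what those extra packages might contain (module and workflow names here are hypothetical, and the exact flytekit API may vary by version), the staging-only launch plan can schedule the workflow every few hours while the production-only one runs daily:

# staging_lps/schedules.py -- registered only via the staging config
from flytekit import CronSchedule, LaunchPlan
from app.workflows.example import my_workflow  # hypothetical workflow

staging_lp = LaunchPlan.get_or_create(
    workflow=my_workflow,
    name="my_workflow_staging",
    schedule=CronSchedule(schedule="0 */4 * * *"),  # every 4 hours for quicker feedback
)

# production_lps/schedules.py -- registered only via the production config
from flytekit import CronSchedule, LaunchPlan
from app.workflows.example import my_workflow  # hypothetical workflow

production_lp = LaunchPlan.get_or_create(
    workflow=my_workflow,
    name="my_workflow_production",
    schedule=CronSchedule(schedule="0 6 * * *"),  # once a day to save compute
)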

Related

How to handle the database backup and restore in different environments?

I am using Directus in prod and multiple non-prod environments. In prod, users will only be updating content in collections. In non-prod, admins will be testing all other updates regarding new collections, user roles, permissions, etc. I plan on backing up tables related to collection content in prod and restoring them to lower environments in order to keep data up to date. Once testing of new collections or other admin changes is done in non-prod environments, I plan on backing them up and then restoring the tables related to these changes up to prod. Right now, the plan is to write shell scripts using mysqldump and mysqlimport to perform these operations.
My question is what tables relating to collection content do I need to backup in prod and move to lower environments? Also, could I use the method described here to not overwrite any activity records and avoid losing data?
Mission critical tables would be:
directus_collections (what collections are there)
directus_fields (what fields are in those collections)
directus_relations (which collections are connected to each other)
I'd personally copy over all the directus_* tables. Even if you don't necessarily have to move over things like activity, I'd play it safe and make sure you don't run into any weird issues later on.
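For the dump step itself, here is a minimal sketch, assuming mysqldump is on the PATH; the host, user, database name, output file, and table list are placeholders for your own environment:

# dump_directus_schema.py -- hypothetical helper around mysqldump
import subprocess

TABLES = ["directus_collections", "directus_fields", "directus_relations"]

with open("directus_schema.sql", "w") as out:
    subprocess.run(
        # -p prompts for the password; automation would normally use an option file instead
        ["mysqldump", "-h", "prod-db.example.com", "-u", "backup_user", "-p",
         "directus"] + TABLES,
        stdout=out,
        check=True,
    )

Restoring into a lower environment would then be the mirror image, feeding the dump back in with the mysql client against the non-prod host.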
Also, could I use the method described here to not overwrite any activity records and avoid losing data?
If you're not going to edit any actual content in the non-prod environments, you should be able to ignore directus_activity and directus_revisions, as those two are directly related to the raw content.

Testing an n-tier web application - should my test project have its own database?

In an n-tier web-app, should I be running integration tests against a different database, one dedicated to testing the code? Is it standard practice to test against the production database as well?
You should never run untested code on production. After all, you don't want to discover that it has a bug that wipes out all data. That's what tests are supposed to find. And you should not have test/staging data in the production system. It is good practice to dump the data out of production and load it into another environment for periodic testing with real-world data.
You should have a test database (not shared with production). It's a good idea to wipe out the data before every test.
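As a minimal sketch of the wipe-per-test idea, using pytest and an in-memory SQLite database purely for illustration (your stack, schema, and fixture names will differ):

# conftest.py -- hypothetical fixture giving each test a clean, disposable database
import sqlite3
import pytest

@pytest.fixture
def test_db():
    conn = sqlite3.connect(":memory:")  # dedicated test database, never production
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()  # everything is discarded after each test

# test_users.py
def test_create_user(test_db):
    test_db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    assert test_db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1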
You can have smoke tests that run in production. They will pretend to be a user (agent) and visit many pages, maybe even create things (with a special tag so you can find them again and delete them).
I'd rather think of a different database user with its own data set. The database schema should be the same. I'd never run tests on the production database with the same database user. Test logic shouldn't even be delivered to the client, as it may lead to severe security issues.
In my opinion you'd need a full production-like data set for testing purposes, to be able to test every single feature of your application. You would also need an empty database (without any business data) for application clients to have as the initial point on delivery. Such a dataset doesn't need testing, as it contains no data to exercise the business logic.

Rails "sub-environment" -- still production (or test, etc.) but different

How should we best handle code that is part of a single Rails app, but is used in several different "modes"?
We have several different cases of an app that is driven from the same data sources (MySQL, MongoDB, SOLR) and shares core logic, assets, etc. across multiple different uses.
Background/details:
HTML vs REST API
A common scenario is that we have HTML and REST interfaces. These differences are handled through routing (e.g. /api/v1/user/new vs /user/new) -- with minor differences they provide the same functions. This seems reasonably clean to me.
Multi-tenant
Another common scenario is that the app is "multi-tenant", determined mainly by the subdomain of the URL, e.g. partner1.example.com and partner2.example.com (or a query-string parameter for API customers) -- each has a number of features or properties that differ. This is handled by a filter in ApplicationController using data largely stored in a set of tenant-specific database tables, with tenant-specific functionality encapsulated by methods. This also seems reasonably clean to me.
Offline Tasks
One scenario is that a great deal of the data is acquired through a very large number of tasks, running pretty much continuously: feed loaders, scrapers, crawlers, and other tasks of this sort ... the kinds of things you would find in a search engine, which is a large part of what we do. These tasks are launched on idle server instances and run periodically ... but are just rake tasks that are part of the app.
These tasks are characteristically different from our front-end code -- they update data, run calculations, do maintenance tasks and so on -- some tasks run for days (e.g. update 30M documents from an external web service). In the end, these tasks create and keep fresh the core data that our front-end app uses.
This one doesn't seem as clean to me. In particular, in some cases these tasks are running and doing data updates at the same time as our application is using that data, so they occasionally need to defer to the front-end app when we're under peak loads.
Major Variants of the App
This last case is clearly wrong -- we have made major customizations of our app -- 15% or 20% different, by making branches and then running as an entirely separate app, sharing some of the core data sometimes, but using some of its own data other times. We have mostly fixed this now, as it was, of course, untenable.
OK, there's a question in here somewhere, right?
So in particular for the offline tasks I feel like the app really needs to be launched in a "mode" or perhaps "sub-environment". But we still have normal development, test, qa, demo, pre_release, production environments that have their own isolated data and other configuration parameters. For each of these, we want to be able to run, develop, test and deploy the various "modes" of the application.
Can anyone suggest an appropriate architecture that is similar to the declarative notions of standard Rails environments?
If the number of modes is ever-increasing:
Perhaps the offline tasks could be separated from the main app, into their own application (or a parent abstract task with actual tasks inheriting from it and deployed individually).
If the number of modes is relatively small and won't be changing often:
You could put the per-mode configuration into a config file, logically separate from the rest of the code. Then during the deployments, you would be able to provide a combination of (environment, mode, set of hosts) and get a good level of control of your environments while using the same codebase.
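A minimal sketch of that idea, shown in Python for brevity (in Rails it would more likely be a YAML file under config/ selected by an environment variable; every name below is made up):

# modes.py -- hypothetical per-mode settings layered on top of the usual environments
import os

MODES = {
    "web":     {"run_feed_loaders": False, "worker_processes": 0},
    "offline": {"run_feed_loaders": True,  "worker_processes": 8},
}

ENVIRONMENTS = {
    "production": {"db_host": "db.prod.internal"},
    "staging":    {"db_host": "db.staging.internal"},
}

def settings():
    env = os.environ.get("APP_ENV", "production")   # decides where the data lives
    mode = os.environ.get("APP_MODE", "web")        # decides what this process does
    return {**ENVIRONMENTS[env], **MODES[mode]}

Deployment then just exports APP_ENV and APP_MODE per host, so the same codebase can run as a front-end web process in production and as an offline task runner in staging.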

What dataset to work on when Azure role is in staging deployment?

AFAIK staging deployments are intended for testing Azure roles which implies that I could deploy a role with errors in code into staging. If that error damages my data I could be screwed.
How do I address that? I can't stage a role without reasonable data (hard to test it) and I can't let an unstable role damage the data.
Do I have to maintain a separate dataset for staging? How is this problem typically solved?
AFAIK staging deployments are intended for testing Azure roles which implies that I could deploy a role with errors in code into staging. If that error damages my data I could be screwed.
Staging is really designed to be a place for deployment - for spinning up new role instances prior to the instant virtual IP address swap. While you can do some testing there - e.g. making some final checks that your deployment is valid - it's not really there to allow you to do lots of testing.
How do I address that? I can't stage a role without reasonable data (hard to test it) and I can't let an unstable role damage the data.
I've generally tested on a development environment with fake data or deployed as a separate Azure service with fake data. However, I admit this has never been in the situation where I've needed huge amounts of data for testing - generally these tests have been test deployments with just 1 or 2 users.
Staging, as an environment, is meant to accurately simulate your production environment, including the data.
We have the following strategy: production is production, and staging is connected to the same DB as production, because of the way updates in Azure work; meaning I want to be able to upgrade my staging deployment, give the client a chance to verify again, and then swap the VIPs for the deployments, thus transitioning the application seamlessly. For those times when there are breaking changes in the database, we decided to either create a new deployment altogether, or turn off the production one, giving users a maintenance notice.
Ultimately it's whatever you decide. But again, bearing in mind what Azure's staging is, I'd suggest keeping the data real and considering it a beta-access "program". Unless of course you have other requirements. But that's beside the point.

Redis databases on a dev machine with multiple projects

How do you manage multiple projects on your development and/or testing machine, when some of those projects use Redis databases?
There are 2 major problems:
Redis doesn't have named databases (only numbers, 0-15 by default)
Tests are likely to execute FLUSHDB on each run
Right now, I think we have three options:
Assign different databases for each project, each dev and test environment
Prefix keys with a project name using something like redis-namespace
Nuke and seed the databases anytime you switch between projects
The first one is problematic if multiple projects assign "0" for the main use, "1" for test, and so on. Even if Project B decided to change to "2" and "3", another member of the project might then have a conflict with yet another project of their own. In other words, that approach is not SCM-friendly.
The second one is a bad idea simply because it adds needless runtime and memory overhead. And no matter what you do, another project might coincidentally already be using the same key by the time you join it.
The third option is rather a product of compromise, but sometimes I want to keep my local data untouched while I deploy small patches for other projects.
I know this could be a feature request for Redis, but I need a solution now.
Any ideas, practices?
If the projects are independent and so do not need to share data, it is much better to use multiple redis instances - each project configuration has a port number rather than a database name/id. Create an appropriately named config file and startup script for each one so that you can get whichever instance you need running with a single click.
Make sure you update the save settings in each config file as well as setting the ports - Multiple instances using the same dump.rdb file will work, but lead to some rather confusing bugs.
I also use separate instances for development and testing so that the test instance never writes anything to disk and can be flushed at the start of each test.
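From the client side, the layout might look like the following minimal sketch (ports, project names, and the redis-py usage are all illustrative):

# redis_instances.py -- hypothetical port map, one Redis instance per project and purpose
import redis

PORTS = {
    ("project_a", "dev"):  6390,
    ("project_a", "test"): 6391,
    ("project_b", "dev"):  6400,
    ("project_b", "test"): 6401,
}

def client(project, purpose):
    return redis.Redis(port=PORTS[(project, purpose)])

# a test run can flush its own instance without touching any other project's data
client("project_a", "test").flushdb()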
Redis is moving away from multiple databases, so I would recommend you start migrating out of that mechanism sooner rather than later. This means one instance per database. Given the very low overhead of running Redis, this isn't a problem from a resources standpoint.
That said, you can specify the number of databases, and adopting a naming standard would work. For example, configure Redis to have, say, 60 databases and add 10 to get the test database: db3 uses db13 for testing.
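As a minimal sketch of that naming standard (the offset of 10 and the redis-py usage are just illustrative):

# db_numbering.py -- hypothetical convention: test database = dev database + fixed offset
# assumes redis.conf has been configured with enough databases, e.g. "databases 60"
import redis

TEST_OFFSET = 10

def dev_client(db):
    return redis.Redis(db=db)

def test_client(db):
    return redis.Redis(db=db + TEST_OFFSET)  # e.g. db3 uses db13 for testing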
It sounds like your dev, test, and prod environments are pretty tied together. If so, I'd suggest moving away from that. Using separate instances is the easiest route to that, and provides protection against cross-purpose contamination. Between this and the future of Redis being single-db per instance, separate instances are the best route.