I'm developing some ETL jobs using Mosaic Decisions. When a job runs, it is submitted to Spark with the default configuration. This default configuration is quite large, and I don't need that much for development (I use a small number of records for unit testing during development).
Is there a way I can instruct Mosaic to use fewer Spark resources for my development, so that I don't unnecessarily block the cluster's resources?
Yes, this is possible. To do so, create a new run configuration with the desired resource settings from the Manager persona (LTI Mosaic Manager), then execute the flow with the newly created run configuration.
Follow the steps below to create a new run configuration:
Log in to Mosaic Decisions, click Projects in the top-right corner, and then click Manager.
In Mosaic Manager, click on the Runconfig tab in the left navigation panel.
Click on Add New Configuration. Provide the desired configurations and click Save.
Go back to Mosaic Decisions and execute the desired flow with the newly created run configuration.
While configuring a particular data pipeline in Mosaic Decisions, I want to try out different operations by using the available process nodes. I would like to keep the first few configured nodes for future reference and continue to add some other nodes.
To do this, I'm currently cloning the flow after each incremental change. But because of this, many flows accumulate and it becomes very difficult to keep track of them.
Is there any alternative way to save the history of these multiple configurations of the flow for future reference without cloning and executing them separately?
You can save the history of changes you have made in the flow by saving it as a version, using the Save As Version option provided in the canvas header.
You can also add a description for each incremental step and edit a particular version later if you want. Each saved version can then be executed separately by publishing it from the Version tab and executing it normally.
My app consists of two containers: the app itself and a database. I'm planning to wrap the app into a chart, paving the way for easy, reproducible deployment.
Apart from setting/reading environment variables (which Helm + Kubernetes seem to handle really well), part of the app's configuration is:
making sure the database is pre-filled with special auxiliary data (e.g. an admin user exists, the user role names required to create new users are present, etc.).
I like the idea of having readable YAML files hold the entire configuration in a human-readable format. However, at a glance, it doesn't seem that Helm would help in any way with this kind of configuration (DB records).
That being said, what is the best place to put code/configuration ensuring that the DB contains certain auxiliary records? A config YAML file? A container init script, written in bash?
You are right: neither Kubernetes nor Helm can help with preparing your pre-filled database records/schema.
You should probably have your application initialize that pre-filled data. If you don't want to put this logic into your application, you can ship an initialization script and configure an init container with Kubernetes.
Kubernetes makes sure the init container runs to completion before your application container starts, every time the pod starts. In the init container, you can execute a bash/python/... script that makes sure the records you want are there.
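For illustration, a minimal sketch of such an init container, assuming a Postgres database reachable as `myapp-db` and a seed SQL script shipped in a ConfigMap (image, host, credential and ConfigMap names are all placeholders):

```yaml
# Sketch only: runs an idempotent seed script before the app starts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: seed-db
          image: postgres:15            # used only for the psql client here
          command: ["psql"]
          args: ["-h", "myapp-db", "-U", "app", "-f", "/seed/seed.sql"]
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-db-secret
                  key: password
          volumeMounts:
            - name: seed
              mountPath: /seed
      containers:
        - name: app
          image: myapp:latest
      volumes:
        - name: seed
          configMap:
            name: myapp-seed-sql
```

Keep the seed script idempotent (e.g. `INSERT ... ON CONFLICT DO NOTHING` in Postgres), since the init container will re-run on every pod start.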
I hope you can help me resolve my case.
When I define many processes, how do I get status tracking data for those processes? In other words, I want to get a process's history. My purpose is to show it to my client for checking.
I have defined a process that communicates with 3 applications and deployed it to a client. But unfortunately, my client would like to add one more application (up to 4 apps) in the future. I wonder how to do that? Perhaps I open the process again and edit it. Is there a way to create a dynamic process?
Thanks very much.
PVA.
You get a very limited "history" in TIBCO Administrator (more or less which process instances completed with success/failure; in case of failure it will also provide the exception and where in the process it failed). However, that doesn't show you any tracking of the individual steps/activities that the process passed through. For this, you'd either have to put lots of logging steps into your process (and build something that parses this information from log files), or you could use BusinessWorks ProcessMonitoring, which gives you a full history trail for each process automatically. However, it is not included with BW and you'll probably need a separate license.
To add the fourth application: change the process in TIBCO Designer, build a new EAR file, and re-deploy the new EAR file in TIBCO Administrator.
I'm using the WebAii library for UI testing. I want to test whether my component displays the same records as there are in the database, so I need to switch my application's connection string to point to the test database just for the duration of the test run. What is the best way to do this? How can I dynamically change the connection string prior to running the app? Thanks
Are you storing the connection string in the Web.config file? If so, I would deploy a new Web.config just before starting the test and then use the command line to send an IISRESET.
FYI, these are the kinds of questions we answer all day long on our public forum dedicated to WebAii.
Cody
Telerik Technical Support
What kind of application is it? First, this is probably an indication of not-well-factored code. Second, it is common to have a separate environment for testing code.
If you are, for example, deploying to ASP.NET with Visual Studio, you can use Web.config file transformations to set a different value when you deploy to e.g. test.contoso.com vs. www.contoso.com. The transformation syntax allows you to define a new connection string, or change an existing one from the base Web.config, when deploying a different configuration.
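For illustration, a Web.Test.config transform that swaps a connection string might look like this (the connection string name and server values are placeholders):

```xml
<!-- Web.Test.config: applied over the base Web.config when publishing the
     "Test" build configuration; the name and server below are placeholders -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="AppDb"
         connectionString="Server=testdb;Database=App_Test;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```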
If you have a single environment, and control over it, you could probably write a couple of (Power)shell scripts to copy a web.config with "test" connection strings to your app root prior to the test, then run a second script to restore the original web.config after the test is run.
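A sketch of that pair of scripts (the app-root paths are placeholders):

```powershell
# swap-in-test-config.ps1 -- run before the test pass
Copy-Item 'C:\inetpub\wwwroot\myapp\Web.config' 'C:\inetpub\wwwroot\myapp\Web.config.orig'
Copy-Item '.\Web.test.config' 'C:\inetpub\wwwroot\myapp\Web.config' -Force
iisreset /noforce   # recycle IIS so the new connection strings take effect

# restore-config.ps1 -- run after the test pass
Copy-Item 'C:\inetpub\wwwroot\myapp\Web.config.orig' 'C:\inetpub\wwwroot\myapp\Web.config' -Force
Remove-Item 'C:\inetpub\wwwroot\myapp\Web.config.orig'
iisreset /noforce
```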
If you have access to your deploy directory from the context in which you will be running your tests, you could even simply have a Web.test.config file included in your unit test project. In [AssemblyInitialize]:
File-copy \\{your app server}\{your app directory}\Web.config to \\{your app server}\{your app directory}\Web.config.orig.
File-copy Web.test.config to \\{your app server}\{your app directory}\Web.config.
Sleep for a few seconds?
Then do the reverse in [AssemblyCleanup].
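As a sketch of that swap using MSTest's assembly-level hooks (the server and directory names are placeholders):

```csharp
using System.IO;
using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestEnvironment
{
    // Placeholder paths: substitute your app server and app directory.
    const string AppConfig  = @"\\appserver\appdir\Web.config";
    const string BackupCopy = @"\\appserver\appdir\Web.config.orig";

    [AssemblyInitialize]
    public static void SwapInTestConfig(TestContext context)
    {
        File.Copy(AppConfig, BackupCopy, true);          // keep the original safe
        File.Copy("Web.test.config", AppConfig, true);   // swap in test connection strings
        Thread.Sleep(5000);                              // let IIS recycle on the config change
    }

    [AssemblyCleanup]
    public static void RestoreOriginalConfig()
    {
        File.Copy(BackupCopy, AppConfig, true);          // put the original back
        File.Delete(BackupCopy);
    }
}
```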
Other strategies exist, too. You could build an override into your application for debug mode that checks various things (a special file, additional config, cookies, an extra query string). Or you could have a Settings manager in your app that you can instrument in test setup when arranging your test (click through the UI to change DB settings).
Very likely, however, you may get the best compounding rewards by factoring your code to reduce dependencies. Then you can write unit tests which stub/mock/fake the database. You can use code coverage tools to verify that you've tested specific scenarios, or to see that additional integration tests would be duplication of coverage at that point.
I need to create a ClearCase label script to run on a UNIX server.
Labels will not always be on the latest build and the script needs to be run via a manual process.
It will label every file on a branch of code at a version (currently selected by a timestamp; the timestamp comes from a Hudson build engine, which will create these scripts and FTP them to the UNIX server).
The build server (Windows) is a different machine from the one the script will be run on (UNIX).
The build server currently populates and then builds from a snapshot view.
Users do have ClearCase access and permissions.
The code is never built from the UNIX machine; it is a central location where multiple people can go to label the code.
Is it necessary to recreate the view on the UNIX server in order to label (i.e. do I need to start the view, label, and then stop the view)? Or could I do something more lightweight?
For this kind of task, I definitely recommend using one dynamic view, combined with a time-based selection rule.
You can:
first create a config spec file with the right time-based selection rule, matching the timestamp used by the build process
set the config spec on your view (cleartool setcs /path/to/config/spec/file, see setcs)
The whole process doesn't require stopping/restarting the view.
And since it uses a dynamic view, there is no 'update' time to wait (no file to load).
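A sketch of what that could look like (the branch name, label type, VOB and timestamp below are placeholders):

```
# config spec: select the versions as of the build timestamp
element * CHECKEDOUT
element * /main/mybranch/LATEST -time 25-Mar-2010.14:30:00
element * /main/LATEST -time 25-Mar-2010.14:30:00

# then, from the UNIX server:
cleartool setcs -tag label_view /path/to/config/spec/file
cleartool mklbtype -c "Hudson build 123" BUILD_123@/vobs/myvob
cd /view/label_view/vobs/myvob
cleartool mklabel -recurse BUILD_123 .
```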
The OP adds in the comments:
What is the benefit of labeling the current dynamic view(set by a time in the config spec) vs labeling the contents of the dynamic view via selecting a version based on timestamp?
(I take this all to mean it is impossible to label without being in a view)
First, yes, you need to be in a view to label.
And ClearCase will label what it sees in the view (i.e. the versions selected by the current config spec).
Now, it is better to have a dedicated dynamic view for that kind of operation, because that avoids messing with any other view you might be using for other work.
This dynamic view can be the only one needed for labeling operations, and by setting the right time-based config spec selection rule, you ensure you label what was actually used at the time of your build.