Are there any solutions to debug CEL for Cumulocity development? - cumulocity

Are there any solutions to debug CEL (Cumulocity Event Language) for Cumulocity analytics development?
Thanks,
ZJP

There are two things you can take a look at to debug CEL.
If you go to the module in the Administration application, you can see on the right all outputs that are generated by your module. That way you can verify your statements. Keep in mind that it only shows something if the statement outputs something; if you don't see anything, loosen the "where" conditions or split your statement into multiple statements (a sketch of this follows below).
If an error occurs in one of your statements or expressions (in the case of a real Java error), an alarm will be created. These alarms are visible in the normal alarm list, and you will also be notified on the home page of the Administration application if there was an error.
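To illustrate the splitting idea, here is a minimal, hypothetical sketch. The stream name MeasurementCreated comes from the CEL documentation, but the fragment path "c8y_TemperatureMeasurement.T.value", the helper getNumber() and the intermediate stream name are assumptions you should check against the CEL reference for your Cumulocity version. Inserting the raw events into an intermediate stream first makes them visible in the module output even when the filter matches nothing; the "where" condition is then applied in a second statement.

@Name("collectMeasurements")
insert into RawMeasurement
select m.measurement as measurement
from MeasurementCreated m;

@Name("filterHighTemperature")
select r.measurement
from RawMeasurement r
where getNumber(r.measurement, "c8y_TemperatureMeasurement.T.value") > 100;

If the first statement produces output in the Administration view but the second does not, the "where" condition (not the input stream) is what needs loosening.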

Related

IntelliJ IDEA. Compound doesn't work as 'Before launch' task

IntelliJ IDEA and other IDEA-based IDEs have Run/Debug Configurations that help users create templates for frequently used tasks. One of the possible run configurations is Compound, which can include multiple run configurations/tasks and run them in parallel.
To control the execution order, IDEA also has a Before launch option that allows us to define tasks or other run configurations that should run before the given task.
The problem is that Compound works great when it is not included in any execution queue. When I try to define a compound as the Before launch task, the compound's tasks get executed, but the run configuration in which the Before launch option is defined does not run.
Here is a reproducible example.
Create 3 Shell Script run configurations: script_1, script_2 & script_3.
Each script should log its name into the console using the provided script text, as shown here (a hypothetical example follows below).
Combine script_1 & script_2 into a new Compound run configuration, as shown here.
Add the created compound to the Before Launch tasks of script_3.
Expected result: script_1 & script_2 are executed in parallel, and after they're done IDEA starts execution of script_3.
Actual result: script_1 & script_2 are executed in parallel, and after they're done nothing happens.
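For reference, the script text of each run configuration can be as trivial as the following hypothetical one-liner (script_1 shown; the other two differ only in the name):

echo "script_1"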
I haven't found any useful information about how exactly Compound works together with other run configurations in the execution queue, but I have also tried the Multirun plugin as a workaround. The plugin's documentation states that it should be a perfect fit instead of compounds in this particular situation; however, the developers also state that official functionality like Compound is still preferable. I've tried the "run tasks A, B before task C" case in many different combinations, and it doesn't work even in the plugin, let alone with official compounds. There is nothing special in the IDEA logs with either compounds or the Multirun plugin.
Question: am I doing something wrong? Or is this perhaps an IDEA bug that should be reported?
Anyway, if compounds shouldn't work like this, why does IDEA offer them among the Before Launch task options? Please tell me what you think.
Tested on IDEA versions 2021.2.3 & 2020.3.4

How to use Variables in Automator

Please bear with me, I have not been using Automator for long.
I have good experience in PHP (totally different) and some small scripting knowledge (AppleScript, shell, etc.).
I am trying to replicate this workflow logic with Automator:
Ask User to insert value (set $variable_a)
Ask User to insert one more value (set $variable_b)
Submit
This triggers a script that uses both values submitted above. A dummy example:
echo $variable_a
echo $variable_b
Seems simple, and it's amazing how fast you can set up this logic with Automator.
The problem is, at stage 2 above, my $variable_a is suddenly a mixed value of $variable_a and $variable_b.
Why does this happen?
They do not seem to behave the way I understand the generic usage of "variables" in any language or programming environment.
In other systems, variables usually keep the value they were assigned (unless you use variable variables or modify them deliberately in the code).
I attached an Automator workflow file that replicates exactly the above-mentioned workflow logic.
It's a ZIP file; unzip it and open it in Automator for a test.
You will see (in the results section of the last step) how the values become (in my opinion) wrong.
Has someone a hint?
The reason this is happening is that the output of one action in the workflow is being fed as input into the next action of the workflow. As inputs are received by actions, they can also aggregate in some cases, such as when setting and getting variables.
The reason it does this is so that you can send multiple variables directly into, say, a Run Shell Script action and reference them using $1, $2, etc. If Automator only ever took the most recent input, you'd never be able to feed more than one variable into a shell script without first combining them into a list yourself.
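As a small, hypothetical illustration: if a Run Shell Script action is set to "Pass input: as arguments" and receives both variables as input, the script can read them as positional parameters:

echo "first value: $1"
echo "second value: $2"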
The solution is simple. Every action has an Options button that you can press, which in turn reveals a checkbox called Ignore this action's input. This needs to be checked for those actions that you want to operate independently of previous results.
Here's a screenshot of your workflow with the appropriate checkboxes ticked against the actions that require it.

Multiple login tests on mobile app with UFT

I am trying to test the login feature of my Android app with multiple user/password entries that I have in an Excel sheet. I have already managed to import that data from the Excel sheet and run the same test with each row (with the "Run on all rows" option), but now I am facing a problem that I have not been able to solve.
After the test runs with one row and starts over with a new row, it does not restart the app but continues at the point where the previous iteration finished. I think this is not the expected behaviour in general, since most GUI testing tools restart the app when testing a feature with parametrization (mostly data from Excel). Anyway, I "fixed" this by logging out in my app.
In this case there was an "easy solution": logging out. But what if I were testing a different feature in which I cannot simply log out? The problem is that in those cases I would have to navigate back or do something else that may fail and has nothing to do with the feature I am testing.
I am not sure if I am not using the right approach. Is there a good general solution for this issue?
I would suggest the following two ways to solve your problem if you cannot simply use logout as the last step (a sketch follows after these two options).
Use the App.Launch function: you can add one line at the top of your script, like Device("iPhone 7").App("myApp").Launch NotInstall, Restart. Here the device and the app can be test objects in the object repository or identified using descriptive programming, like Device("id:=123456").
Check the options in Test Settings: in recent UFT versions (12.53 or later), check whether there are options in Test Settings that let you choose to restart or reinstall the app between iterations.
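A rough, hypothetical sketch of the first option (the object names and data table columns are placeholders; only the Launch call shown above is taken from the answer itself):

' restart the app so every data-driven iteration starts from the login screen
Device("iPhone 7").App("myApp").Launch NotInstall, Restart
' ... then the login steps using DataTable("user", dtGlobalSheet) and
' DataTable("password", dtGlobalSheet) follow as before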
Thanks

A process monitor based on periodic sql selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running an SQL statement against the underlying tables in our Oracle database. The output of these queries is the above-mentioned status code for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you are just displaying on one workstation, another option is SQL Developer custom reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
Create a simple Table report (style = Table) and paste in one of the queries you already use as a starting point.
Or create a PL/SQL block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (style = plsql-dbms_output) and get as creative as you like with formatting, colors, etc. using HTML tags in the output (a sketch of this follows below). I have used this to create bar graphs to show the progress of v$Long_Operations. A full description and screenshots are available in "Creating a User Defined HTML Report in SQL Developer".
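A minimal sketch of such a PL/SQL block, assuming a hypothetical table ETL_PROCESS(NAME, STATUS_CODE) and made-up status codes; adapt the query and the color mapping to your own tables:

BEGIN
  DBMS_OUTPUT.PUT_LINE('<table border="1">');
  -- one row per process, colored by its current status code
  FOR r IN (SELECT name, status_code FROM etl_process ORDER BY name) LOOP
    DBMS_OUTPUT.PUT_LINE('<tr><td>' || r.name || '</td><td style="background-color:'
      || CASE r.status_code WHEN 'OK'      THEN 'lightgreen'
                            WHEN 'RUNNING' THEN 'khaki'
                            ELSE 'salmon' END
      || '">' || r.status_code || '</td></tr>');
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('</table>');
END;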
If you just want to get some output moving, you can forgo SQL Developer entirely: schedule a process that uses your PL/SQL block to write the HTML output to a file, and use a browser to display the generated output on your big screen. Alternatively, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file, and make sure to add a refresh meta tag to the page so browsers will reload it periodically.
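For example, a standard refresh meta tag in the page's head reloads it every 60 seconds:

<meta http-equiv="refresh" content="60">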
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in the future. I would just keep it simple (you don't need middleware, single sign-on, or fancy views/charts).

How to verify lots of events in a reasonable way

I am new to software testing. Currently I need to test a medium-sized web application. We have just refactored our codebase and added a lot of event-logging logic to the existing code. The event-logging code writes to both the Windows Event Log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events to make sure that, when an event occurs, the correct information is passed into your event-logging logic.
This would mean that you only need to trigger one event on the deployed site and verify that the data is written to the database and the event log; you can then have an acceptable level of confidence that the remaining events will be recorded correctly.
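A minimal sketch of such a unit test, assuming a Java codebase with JUnit 4 and Mockito on the classpath; the EventLogger interface, OrderService class, and event names are hypothetical stand-ins for your own logging abstraction and application code (the same idea works in .NET with NUnit/Moq):

import static org.mockito.Mockito.*;
import org.junit.Test;

public class EventLoggingTest {
    // hypothetical interface the code logs through (backed by Event Log + SQL writers in production)
    interface EventLogger { void logEvent(String eventId, String details); }

    // hypothetical application class that reports an event via the interface
    static class OrderService {
        private final EventLogger logger;
        OrderService(EventLogger logger) { this.logger = logger; }
        void completeOrder(int orderId) { logger.logEvent("ORDER_COMPLETED", "order " + orderId); }
    }

    @Test
    public void completingAnOrderPassesTheExpectedEventToTheLogger() {
        EventLogger logger = mock(EventLogger.class);   // mock replaces the real writers
        new OrderService(logger).completeOrder(42);
        verify(logger).logEvent(eq("ORDER_COMPLETED"), contains("42"));
    }
}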
If unit testing isn't an option, then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of this area failing; that way you have 200 tests rather than 400.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know where you may have broken it already :).
You say that it logs to the Event Viewer and the DB. I hope you have exposed the logging feature as an interface so that you can:
Extend it to log to some other device if needed
Mock it much more easily
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
I would search for all the places where my logging interface is used, note all the classes, and start with the critical paths first (that way you at least cover the critical ones).
Or you could start from the end, i.e. note down all the possible combinations of logs you are getting, and maybe point the application at stale data so that you know that if the input is the same, the output should be the same too. Then regression-test every new set of binaries against this data; you should get a similar number/level of logs.
This shouldn't be too difficult.
Pick a free automated web-testing tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application where you expect/need events to fire. Examine the SQL DB after each test to see which events did fire.
If those event streams are correct for the test, add a step to the test to verify that exactly that event stream was created in the DB (a rough sketch of that step follows below).
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
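The DB-verification step can be a plain query against the logging table compared with the expected events. A hypothetical sketch in Java with JDBC (the event_log table, its columns, and the expected event names are made up; the UI-driving part with Watir/WatiN or VS UI Test is omitted):

import java.sql.*;
import java.util.*;

public class EventStreamCheck {
    // collect the event ids written since the test started, oldest first
    static List<String> eventsSince(Connection con, Timestamp testStart) throws SQLException {
        List<String> events = new ArrayList<>();
        String sql = "SELECT event_id FROM event_log WHERE created_at >= ? ORDER BY created_at";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, testStart);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    events.add(rs.getString(1));
                }
            }
        }
        return events;
    }
}

// in the test, after the UI steps have run:
// assertEquals(Arrays.asList("LOGIN_OK", "PROFILE_VIEWED"), eventsSince(con, testStart));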
The efficient & effective part of this approach is that it allows you to create only as many tests as you need to verify the app. Also, you do not need to recreate a unit-test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent & repeatable results, and re-testing will take nearly as long as the initial testing as it uncovers defects that need to be fixed.
Note: while this will be the most effective & efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris