Is there any way for a CAPL test module to detect an error frame in the trace (on the communication mainframe)?
Yes, you can set up an event handler for this:
on errorFrame
{
  // react to the error frame here, e.g. log it to the Write window
  write("Error frame detected");
}
This gives you parallel monitoring of your CAN communication while your CAPL/XML test module is running. You will find plenty of documentation on it in the CANoe help (F1).
How do I handle and return a human-readable error in a Java Azure Function app?
All the examples a Google search turns up are just simple instructions on how to do a try-catch, which is not my question.
More specifically, how do we design the return status code and the response body of the request in a way that provides the most flexibility for a wide array of situations?
Given that we are not integrating Spring Boot in this case, we do not have access to anything Spring.
Given that my API generally returns an object that we will call Pojo1, what is the best way to return an informative message on error?
NOTE: Of course, I do know there are situations where you want security through obscurity, in which case I would probably choose logging errors to Application Insights. This is not my question though.
Well, you can set custom headers while returning the response. This can be done using a setHeader function.
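For instance, with the Azure Functions Java library you can combine a status code, a custom header and a human-readable body on the response builder. A minimal sketch, assuming Pojo1 is your own type; the header name, error text and fetchPojo1 helper are made up for illustration:

import java.util.Optional;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class Pojo1Function {

    @FunctionName("getPojo1")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        try {
            Pojo1 result = fetchPojo1(); // the normal success path
            return request.createResponseBuilder(HttpStatus.OK)
                    .body(result)
                    .build();
        } catch (Exception e) {
            context.getLogger().warning("Request failed: " + e.getMessage());
            // status code for machines, header for error classification,
            // body for the human-readable message
            return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR)
                    .header("X-Error-Code", "POJO1_FETCH_FAILED") // hypothetical header
                    .body("Could not load Pojo1: " + e.getMessage())
                    .build();
        }
    }

    private Pojo1 fetchPojo1() { return new Pojo1(); }
}

class Pojo1 { } // stand-in for the asker's real type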
You can also use Azure Service Bus or Event Grid, which can carry specific messages regarding the errors.
Also, you can use Azure Monitor, which collects all the errors and notifies you when something happens.
Refer to this article by Eugen Paraschiv for an in-depth explanation of how to use setHeader.
Refer to this documentation on Azure Service Bus and this documentation on Event Grid.
Refer to this documentation on Azure Monitor logs.
I am trying to integrate my application with Spring Sleuth.
I was able to do a successful integration and I can see spans getting exported to Zipkin.
I am exporting to Zipkin over HTTP.
Spring boot version - 1.5.10.RELEASE
Sleuth - 1.3.2.RELEASE
Cloud- Edgware.SR2
But now I need to do this in a more controlled way, as the application is already running in production and people are scared of the overhead Sleuth can add through @NewSpan on the methods.
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For example, for actuator endpoints no trace is added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops the exporting but still adds the tracing information. I want something like the skipPattern property, but at runtime.
Always export the trace if the service exceeds a certain threshold or in case of an exception.
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
What about this solution? I guess it will work for sampling specific requests at runtime.
@Bean
public Sampler customSampler() {
    return new Sampler() {
        @Override
        public boolean isSampled(Span span) {
            logger.info("Inside sampling " + span.getTraceId());
            HttpServletRequest httpServletRequest = HttpUtils.getRequest();
            return httpServletRequest != null
                    && httpServletRequest.getServletPath().startsWith("/test");
        }
    };
}
people are scared of the overhead Sleuth can add through @NewSpan on the methods.
Do they have any information about the overhead? Have they turned it on, and did the application start to lag significantly? What are they scared of? Is this a high-frequency trading application where every microsecond counts?
I need to decide at runtime whether the trace should be added or not (not talking about exporting). For example, for actuator endpoints no trace is added at all; I assume this has no overhead on the application. Putting X-B3-Sampled = 0 stops the exporting but still adds the tracing information. I want something like the skipPattern property, but at runtime.
I don't think that's possible. The instrumentation is set up by adding interceptors, aspects etc. They are started upon application initialization.
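For the startup-time variant mentioned in the question, the skip pattern is just a property, e.g. in application.properties (the pattern below is only an illustration):

spring.sleuth.web.skipPattern=/health|/trace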
Always export the trace if the service exceeds a certain threshold or in case of an exception.
With the new Brave tracer instrumentation (Sleuth 2.0.0) you will be able to do this in a much easier way. Prior to that version you would have to implement your own version of a SpanReporter that inspects the tags (checks whether the span contains an error tag) and, if that's the case, sends the span to Zipkin, otherwise drops it.
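A minimal sketch of such a filtering reporter for Sleuth 1.x; the class name is made up, the delegate would typically be the HTTP Zipkin reporter, and "error" follows the usual Zipkin tag convention (a duration threshold could be checked the same way):

import java.util.Map;
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.SpanReporter;

public class ErrorOnlySpanReporter implements SpanReporter {

    private final SpanReporter delegate; // e.g. the HTTP Zipkin span reporter

    public ErrorOnlySpanReporter(SpanReporter delegate) {
        this.delegate = delegate;
    }

    @Override
    public void report(Span span) {
        Map<String, String> tags = span.tags();
        // forward only spans that carry an error tag; drop everything else
        if (tags.containsKey("error")) {
            delegate.report(span);
        }
    }
}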
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
Yes, there is, because the tracing data still has to be passed around. However, the overhead is small.
I have a chipKIT uC32 (PIC32MX340F512H), a chipKIT Basic I/O Shield and a PICkit 3 programmer, all from Microchip.
I'm using the MPLAB X IDE.
Since I'm very new to this I didn't know where to start; I have searched the web and found only tutorials that use MPIDE, which I'm not allowed to use in my project.
I have read the reference manual and the data sheet and made a test project, but the uC32 board refuses to recognize the Basic I/O Shield and I was unable to connect the two together.
Any tips and links would be great. Thanks in advance.
The Basic I/O Shield reference manual states that you should follow these steps to make the chipKIT talk to the Basic I/O Shield (a sketch in C follows the two sequences below).
Power on sequence
Apply power to VDD.
Send Display Off command.
Initialize display to desired operating mode.
Clear screen.
Apply power to VBAT.
Delay 100ms.
Send Display On command.
Power off sequence
Send Display Off command.
Power off VBAT.
Delay 100ms.
Power off VDD.
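That power-on sequence maps directly onto code. A rough sketch in C, where every helper is an illustrative placeholder rather than a function from the Digilent/Microchip libraries (the real command values and SPI transfer code are in Appendix B of the manual; 0xAE/0xAF are the standard display off/on commands of the shield's SSD1306 OLED controller):

/* Hypothetical helpers -- illustrative names only. */
void power_on_vdd(void);
void power_on_vbat(void);
void oled_command(unsigned char cmd); /* sends one command byte over SPI2 */
void oled_init_mode(void);
void oled_clear(void);
void delay_ms(unsigned int ms);

#define CMD_DISPLAY_OFF 0xAE
#define CMD_DISPLAY_ON  0xAF

void oled_power_on(void)
{
    power_on_vdd();                 /* 1. apply power to VDD         */
    oled_command(CMD_DISPLAY_OFF);  /* 2. send Display Off command   */
    oled_init_mode();               /* 3. set desired operating mode */
    oled_clear();                   /* 4. clear screen               */
    power_on_vbat();                /* 5. apply power to VBAT        */
    delay_ms(100);                  /* 6. wait 100 ms                */
    oled_command(CMD_DISPLAY_ON);   /* 7. send Display On command    */
}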
The shield uses SPI2. The following pins are used:
RF4, RF5, RF6 and RG9
Also, you must change the jumpers on the uC32 from LED4 to JP4 and JP8.
In Appendix B of the Basic I/O Shield reference manual there is example code which is useful.
So I have this application that has a process which requires some gen_servers to be alive somewhere else in the cluster.
If they are up, it just works; if they are not, my gen_server fails in init with {error, Reason}, and this propagates through my supervisor into my application's start function.
The problem is that if I return anything other than {ok, Pid} I get a crash report.
My intention here is to somehow signal that the application couldn't start properly, that all its processes are down, and that because of this the application should not be considered active. However, I can only choose between returning {ok, self()} and seeing my application listed as active when it is not, or returning {error, Error} and seeing it crash with:
{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[rtb_sup,<0.134.0>]},
{messages,[]},{links,[<0.135.0>]},{dictionary,[]},{trap_exit,false},{status,running},
{heap_size,377},{stack_size,24},{reductions,255}],[]]:\n"
The problem seems to be bigger than this: basically, there is no way to tell the application framework that the app failed. It may look like one of those things that Erlang handles by letting the process die, but allowing an {error, _} return value from application:start seems like a good tradeoff.
Any hints?
An application can crash at any moment, so the applications' dependency relationships at start time cannot provide helpful dynamic crash information.
Some time ago I read part of the RabbitMQ source code; it is also a cluster-based program.
I think RabbitMQ has faced a question similar to yours, because a cluster needs to collect the related nodes' "is live" information and memory high-watermark information, and then make decisions.
Its solution is to register the first main process of the application locally in the node. The name is "rabbit" in the RabbitMQ system; you can find it in the rabbit.erl file, in the function start/2.
start(normal, []) ->
    case erts_version_check() of
        ok ->
            {ok, SupPid} = rabbit_sup:start_link(),
            true = register(rabbit, self()),
            print_banner(),
            [ok = run_boot_step(Step) || Step <- boot_steps()],
            io:format("~nbroker running~n"),
            {ok, SupPid};
        Error ->
            Error
    end.
The other four modules, rabbit_node_monitor.erl, rabbit_memory_monitor.erl, vm_memory_monitor.erl and rabbit_alarm.erl, use two Erlang techniques: one is a monitor process that receives the 'DOWN' message of the registered process, the other is an alarm handler that collects this information.
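As a minimal sketch of the monitoring half (the module and function names here are illustrative, not RabbitMQ's actual code), any node can monitor the registered main process and react when it dies:

-module(app_watcher).
-export([watch_app/1]).

%% erlang:monitor/2 accepts {RegisteredName, Node} for remote processes.
watch_app(Node) ->
    Ref = erlang:monitor(process, {rabbit, Node}),
    receive
        {'DOWN', Ref, process, _Pid, Reason} ->
            %% The main process is gone, so treat the application
            %% on that node as no longer live.
            {app_down, Node, Reason}
    end.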
I see several other questions about load testing web services, but as far as I can tell those are all synchronous load testing tools (meaning they send a ton of requests, but one at a time).
I am looking for a tool where I can say, "I want 100 requests to be launched at the exact same time".
Now, I am new to the whole load testing thing, so it is possible that those tools are asynchronous and I am just missing it.
Anyway, in short, my question is: is there a good tool for load testing WCF web services asynchronously (i.e. with lots of threads)?
In general, I recommend you look at soapUI for anything to do with testing web services. It does have load testing features in the Professional edition (I haven't used these yet).
In addition, they've just entered beta with a loadUI product. If it's anywhere near as good as the parent product, then it's worth a hard look.
You can use the Visual Studio load testing agent components to run on multiple client machines, which will let you run as asynchronously as you have machines to generate load.
There is a licence requirement for using this feature.
There are no tools that will let you apply load at exactly the same instant (i.e. within milliseconds), but this is not necessary to load test an application correctly.
For most needs a single load test server running Visual Studio Ultimate edition will be more than enough to get an understanding of how your web service performs under load.
Visual Studio, and I imagine most other tools, will apply load in an asynchronous manner, but it sounds like you want to apply a set load all at once. This is not really necessary, as in practice load is not applied to a service in this manner.
The best bet for services expecting high load is to load your service until a given number of "requests per second" is reached. Finding the level your application should expect is a bit trickier; it involves figuring out roughly how many users you expect and how much they will use the service over a given period.
The other test to do is to set up a load test harness and run the load up until either the web service starts to perform badly or the test harness runs out of "oomph" and cannot create any more load.
For development time you can use NLoad (http://nload.github.io) to run load tests on your development machine or in a testing environment.
For example:
public class MyTest : ITest
{
    public void Initialize()
    {
        // Initialize your test, e.g., create a WCF client, load files, etc.
    }

    public void Execute()
    {
        // Send http request, invoke a WCF service or whatever you want to load test.
    }
}
Then create, configure and run a load test:
var loadTest = NLoad.Test<MyTest>()
    .WithNumberOfThreads(100)
    .WithDurationOf(TimeSpan.FromMinutes(5))
    .WithDeleyBetweenThreadStart(TimeSpan.Zero)
    .OnHeartbeat((s, e) => Console.WriteLine(e.Throughput))
    .Build();

var result = loadTest.Run();
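And if you really do want all the requests released as close to simultaneously as possible, you can hand-roll it with a Barrier and dedicated threads. A rough sketch; the endpoint URL is a placeholder:

using System;
using System.Net;
using System.Threading;

class SimultaneousFire
{
    static void Main()
    {
        const int count = 100;
        var barrier = new Barrier(count); // releases all threads together

        var threads = new Thread[count];
        for (int i = 0; i < count; i++)
        {
            threads[i] = new Thread(() =>
            {
                barrier.SignalAndWait(); // block until all 100 threads are ready
                using (var client = new WebClient())
                {
                    client.DownloadString("http://localhost/MyService.svc/ping"); // placeholder
                }
            });
            threads[i].Start();
        }

        foreach (var t in threads)
        {
            t.Join();
        }
    }
}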