How to decipher Sentry warning: "span of type Connection with operation id (null)..." - asp.net-core

Using Sentry.AspNetCore in a .NET Core app, I get warnings like this:
warn: Sentry.ISentryClient[0]
=> SpanId:eed7125b44901343, TraceId:bde9ab9c677f6c45a343a489f37e55c1, ParentId:0000000000000000 =>
ConnectionId:0HMBS5J9RV0II => RequestPath:/ws
RequestId:0HMBS5J9RV0II:00000002
Trying to get a span of type Connection with operation id (null), but it was not found.
What does this message mean? Should I get rid of these warnings and, if so, how?

Update
It was fixed in 3.10.0.
It looks like a bug in the Sentry SDK; see the related issue. According to a maintainer, the span error will be fixed soon:
https://github.com/getsentry/sentry-dotnet/issues/1210#issuecomment-933822947
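Since the fix landed in 3.10.0, upgrading the package should make the warnings disappear; for example, with the dotnet CLI:
dotnet add package Sentry.AspNetCore --version 3.10.0
Until you can upgrade, note that the message is emitted through Microsoft.Extensions.Logging under the Sentry.ISentryClient category shown above, so raising that category's minimum level to Error in your logging configuration would hide it.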

Related

Datastream Troubleshoot: "An unknown error occurred. Please try again. If the error persists, contact Google support"

We are trying to replicate data from AlloyDB to BigQuery using Datastream.
We get "An unknown error occurred. Please try again. If the error persists, contact Google support."
In the Datastream console's objects list, we see all source tables with Object Status "Failed" and Backfill status "Completed".
In BigQuery we see only a subset of the tables (not all the "Completed" objects were synced).
In the Logs Explorer I can see this error on the BigQuery side:
I also see this error:
error: {
  code: 11
  message: "Unsupported primary key column either does not exist or is a pseudocolumn at [1:401]"
}
The column referred to in the error is of type enum.
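A query along these lines (the table name is illustrative) lists the primary-key columns and their types on the AlloyDB/PostgreSQL side, which confirms which keys use an enum type:
-- replace my_table with the affected table
SELECT a.attname AS column_name,
       format_type(a.atttypid, a.atttypmod) AS column_type
FROM pg_index i
JOIN pg_attribute a
  ON a.attrelid = i.indrelid
 AND a.attnum = ANY(i.indkey)
WHERE i.indrelid = 'my_table'::regclass
  AND i.indisprimary;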
The desired situation is having all the AlloyDB tables replicated into BigQuery.
The error message is not very informative...
What does it mean?
What would be the best way to go about troubleshooting this?
We're actively working on making these error messages more informative, and improvements are continuously being rolled out as we identify more edge cases. Assuming you followed all the steps in the documentation, you may need to open a ticket with support for further investigation. If a support ticket isn't an option, you can still report the issue using the public issue tracker.
I just had this same issue, but connecting to PostgreSQL on AWS RDS:
PostgreSQL 10 introduced SCRAM-SHA-256 password encryption, and newer versions use it by default. Google Datastream still expects MD5 password encryption, or it will generate an "unknown error" in the logs and fail the backfills.
You'll need to update your postgresql.conf (or RDS Cluster Parameter Group if you're using AWS like me):
password_encryption = 'MD5'
Restart the database and make sure the parameter has changed with:
SHOW password_encryption;
Reset the password of your users:
ALTER USER "{username}" WITH PASSWORD '{password}';
More info from the PostgreSQL docs: https://www.postgresql.org/docs/current/auth-password.html

Kafka 1.0.0 - Serialized.with() uses default serde instead of the ones provided

We recently updated our Kafka version from 0.10 to 1.0, and I am updating the deprecated code
KTable<Long, myClass> myKTable = this.streamBuilder
.stream(Serdes.Long(), mySerde, sub_topic)
.groupByKey(Serdes.Long(), mySerde)
.reduce(myReducer, my_store);
to this
KTable<Long, myClass> myKTable = this.streamBuilder
.stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
.groupByKey(Serialized.with(Serdes.Long(), mySerde))
.reduce(myReducer, Materialized.as(my_store));
My stream throws an error while serializing in groupByKey. Serialized.with() does not use the key serde provided and falls back to the default byte-array serde. That byte-array serde then encounters my key, which is a Long, and throws a cast error.
Has anyone else encountered this error in version 1.0.0 of Kafka? The first code, with the outdated API, works fine, but updating it to use Serialized.with() does not seem to work. Any help is greatly appreciated.
Can you share the stack trace? I actually think the issue is with reduce() -- you need to specify the Serdes via Materialized again.
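A sketch of that suggestion, reusing the identifiers from the question (mySerde, myReducer, my_store, and sub_topic are assumed to be defined elsewhere); naming the serdes on Materialized keeps reduce() from falling back to the default byte-array serde:
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.KeyValueStore;

KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
    .groupByKey(Serialized.with(Serdes.Long(), mySerde))
    .reduce(myReducer,
        // name the store and declare the serdes again for the state store
        Materialized.<Long, myClass, KeyValueStore<Bytes, byte[]>>as(my_store)
            .withKeySerde(Serdes.Long())
            .withValueSerde(mySerde));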
This is kind of a regression in the new API and was fixed recently in trunk: https://github.com/apache/kafka/pull/4919 Thus, the upcoming 2.0 release will contain the fix.

CloudReports (CloudSim extension) on cloudlet migration gives NullPointerException

I'm kind of new to CloudSim and the CloudReports extension, so I don't know why running the CloudReports simulator gives this error:
NullPointerException at org.cloudbus.cloudsim.power.PowerDatacenter.processCloudletSubmit(PowerDatacenter.java:269)
I was adding a cloudlet scheduling algorithm to the extension.
All I can see is that the error happens during cloudlet migration.
I tried searching a lot for how to fix it but didn't find anything that helped.
The full error is:
java.lang.NullPointerException
at org.cloudbus.cloudsim.Datacenter.processCloudletSubmit(Datacenter.java:761)
at org.cloudbus.cloudsim.power.PowerDatacenter.processCloudletSubmit(PowerDatacenter.java:269)
at org.cloudbus.cloudsim.Datacenter.processEvent(Datacenter.java:159)
at org.cloudbus.cloudsim.core.SimEntity.run(SimEntity.java:406)
at org.cloudbus.cloudsim.core.CloudSim.runClockTick(CloudSim.java:471)
at org.cloudbus.cloudsim.core.CloudSim.run(CloudSim.java:835)
at org.cloudbus.cloudsim.core.CloudSim.startSimulation(CloudSim.java:151)
at cloudreports.simulation.Simulation.runSimulation(Simulation.java:157)
at cloudreports.simulation.Simulation.runAllSimulations(Simulation.java:129)
at cloudreports.simulation.Simulation.run(Simulation.java:98)
at java.lang.Thread.run(Thread.java:748)
Please advise.
Regards.
I ran into the same problem while following the simulation examples provided in the CloudSim project. After debugging, I figured out that the problem was with setting the user ID of the created cloudlet. I was setting a random value as the userId, while the simulation retrieves the cloudlet using the brokerId. Setting the userId to the ID of the broker I had created did the trick for me.
// dataCenterBroker is the instance of the broker which you created
int brokerId = dataCenterBroker.getId();

// setting the userId after creating the cloudlet
cloudLet.setUserId(brokerId);
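For context, a minimal sketch of how that fits into a typical CloudSim setup (the class name and cloudlet parameters are illustrative; the constructor follows the standard CloudSim 3.x API):
import java.util.Calendar;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.core.CloudSim;

public class CloudletUserIdExample {
    public static void main(String[] args) throws Exception {
        // standard CloudSim bootstrap: one user, no trace events
        CloudSim.init(1, Calendar.getInstance(), false);

        DatacenterBroker dataCenterBroker = new DatacenterBroker("Broker");
        int brokerId = dataCenterBroker.getId();

        // illustrative parameters: id, length (MI), PEs, input size, output size
        UtilizationModelFull utilization = new UtilizationModelFull();
        Cloudlet cloudLet = new Cloudlet(0, 40000, 1, 300, 300,
                utilization, utilization, utilization);

        // the broker retrieves cloudlets by userId, so it must match the
        // broker's id rather than an arbitrary value
        cloudLet.setUserId(brokerId);
    }
}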

WSO2 API Manager returning RunTime Error

I have an API in WSO2. When I try to test it in the store with valid parameters via GET, it returns the following error message:
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>101504</am:code>
<am:type>Status report</am:type>
<am:message>Runtime Error</am:message>
<am:description>Send timeout</am:description>
</am:fault>
I have already searched and tried a lot, but with no success; it always returns the same error. I don't know if it helps, but the API I am trying to access points to a PHP file.
Any ideas?
EDIT:
I have APIs identical to this one, differing only in the response, that are working properly. Even if I delete the PHP file that the API points to, the error keeps coming.
EDIT:
I changed the code in the Management Console, under Metadata > List > APIs:
{"production_endpoints":
{"url":"http://site/myapi.php","config":
{"format":"leave-as-is","optimize":"leave-as-
is","actionSelect":"fault","actionDuration":30000}},
"sandbox_endpoints":
{"url":"http://site/myapi.php","config":
{"format":"leave-as-is","optimize":"leave-as-
is","actionSelect":"fault","actionDuration":30000}},
"implementation_status":"managed","endpoint_type":"http"}
to this:
{"production_endpoints":
{"url":"http://10.20.40.189/ConsultaAutorizacaoCadPos.php","config":null},
"sandbox_endpoints":{"url":"http://10.20.40.189/ConsultaAutorizacaoCadPos.php","config":null},
"implementation_status":"managed","endpoint_type":"http"}
And this error message vanishes. But this new one appears:
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>303001</am:code>
<am:type>Status report</am:type>
<am:message>Runtime Error</am:message>
<am:description>Currently , Address endpoint : [ Name : admin--myapi_APIproductionEndpoint_0 ] [ State : SUSPENDED ]</am:description>
</am:fault>
EDIT: this error message appears only sometimes. Most of the time the request takes a long time to run and then returns nothing (no content).
Any ideas how to solve it?
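The 303001 description says the production endpoint is in the SUSPENDED state: after repeated failures such as send timeouts, the gateway suspends an endpoint and fails calls immediately until it recovers. As a sketch (the endpoint name and URL are taken from the error above; where this Synapse XML lives depends on the API Manager version), an address endpoint can be configured with a longer timeout and with suspension effectively disabled:
<endpoint name="admin--myapi_APIproductionEndpoint_0">
   <address uri="http://10.20.40.189/ConsultaAutorizacaoCadPos.php">
      <timeout>
         <duration>60000</duration>
         <responseAction>fault</responseAction>
      </timeout>
      <suspendOnFailure>
         <errorCodes>-1</errorCodes>
         <initialDuration>0</initialDuration>
         <progressionFactor>1.0</progressionFactor>
         <maximumDuration>0</maximumDuration>
      </suspendOnFailure>
   </address>
</endpoint>
Given that the request usually hangs and then returns no content, the backend PHP script itself is probably not answering within the send timeout, so it is worth testing it directly, outside the gateway.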

QuickFix Trouble - Repeating Groups

My FIX engine keeps rejecting messages, and I was hoping someone could help me figure out why. I'm receiving the following sample message:
8=FIXT.1.1 9=518 35=AE 34=4 1128=8 49=XXXXXXX 56=YYYYYYY 52=20130322-17:58:37 552=1 54=1 37=Z00097H4ON 11=NOREF 826=0 78=1 79=NOT SPECIFIED 80=100000.000000 5967=129776.520000 453=5 448=BCART6 452=3 447=D 448=BARX 452=1 447=D 448=BARX 452=16 447=D 448=bcart6 452=11 447=D 448=ABCDEFGHI 452=12 447=D 571=6611540 150=F 17=Z00097H4ON 32=100000.000000 38=100000.000000 15=EUR 1056=129776.520000 31=1.2977652 194=1.298120 195=-3.5480 64=20130409 63=W2 60=20130322-17:26:50 75=20130322 1057=Y 460=4 167=FOR 65=OR 55=EUR/USD 10=121
8=FIXT.1.1 9=124 35=3 34=4 49=XXXXXXX 52=20130322-17:58:37.917 56=YYYYYYY 45=4 58=Tag appears more than once 371=448 372=AE 373=13 10=216
But as you can see, it's being rejected by the QuickFIX engine. I am using the 5.0sp1 data dictionary and have configured it in my config file:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\XXX\Interface\ReutersStore
FileLogPath=D:\XXX\Interface\ReutersLog
[SESSION]
BeginString=FIXT.1.1
SenderCompID=XXXXX
TargetCompID=YYYYY
DefaultApplVerID=FIX.5.0
UseDataDictionary=Y
AppDataDictionary=FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost=A.B.C.D
SocketConnectPort=123
Does anyone have any idea why the engine would be rejecting this message? I know that QuickFIX is normally able to handle messages with repeating groups. Is it a config thing? Any help would be greatly appreciated!
Your message seems to be in order. Try putting this in your config file:
ValidateFieldsOutOfOrder=N
QuickFIX defaults this to Y, and the underlying structure that stores the tag and field values cannot see the count field before the group fields, since the count tag number is higher: 453 > 448.
As a side note, always check these fields; they should point you to the source of the problem:
58=Tag appears more than once
371=448
Maybe it's a shot in the dark, but I had a similar problem when using a 5.0sp2 dictionary.
I resolved it by using an updated version of the QuickFIX library compiled from the library's SVN repository. If I remember correctly, this was the bug.
It seems that the QuickFIX library has not been updated in a long time; for newer versions of FIX I suggest you use the trunk of the repo.
I had the same problem and I resolved it by tweaking my DataDictionary as follows, in the AE (TradeCaptureReport) message definition.
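The snippet itself was not included, but a plausible shape for such a tweak is declaring the NoPartyIDs repeating group (tags 453, 448, 447, 452) inside the AE message definition in FIX50SP1.xml, so the parser treats the repeated 448s as group entries rather than duplicate tags:
<message name="TradeCaptureReport" msgtype="AE" msgcat="app">
    <!-- ...other fields of the message... -->
    <group name="NoPartyIDs" required="N">
        <field name="PartyID" required="N"/>
        <field name="PartyIDSource" required="N"/>
        <field name="PartyRole" required="N"/>
    </group>
</message>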