Full error is: "at least 8 common tracks on both keyframes are needed for reconstruction".
I set up some markers and set frames for keyframes A and B, but when I hit "Solve Camera Motion" I always get this error.
I am following this GitHub example to understand OFDM in GNU Radio Companion. I can execute ofdm_tx individually (64 and 512 FFT points) without any issues, but when I connect the two (ofdm_tx and ofdm_rx) in a single flowgraph, I only get the spectrum from ofdm_tx (no output from ofdm_rx, or just a straight line).
My question: each time I close the output spectrum, the tool hangs, and in the background (inside GNU Radio Companion) I see the following message train (screenshot attached). The same thing happens when I run ofdm_rx individually.
Error message in the console:
packet_headerparser_b :info: Detected an invalid packet at item 1448.
header_payload_demux :info :parser returned #f
Please guide me in this regard.
By selecting "No" for the vector source's "Repeat" option, the hang was sorted out, but I am no longer able to see the spectrum.
I am simulating the passenger changeover process in metros using the AnyLogic Pedestrian Library.
When passengers enter the vehicle, a function called lookForSeat assigns them a seat from those available near the door they entered through (within a given distance). If no free seat is available, their boolean parameter wantToSit is set to false and they stay standing.
The parameter wantToSit is predefined for the Passenger agent, with default value randomTrue(0.8). But even if I set the default value to 1, I get the same error.
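For clarity, this is roughly the logic of lookForSeat as described above, paraphrased as a small stand-alone Python sketch (not the actual AnyLogic/Java function; names like seatPoint and wantToSit mirror the agent parameters):
import math

def look_for_seat(ped, free_seats, door_xy, max_distance):
    # seats within the given distance of the door the passenger entered through
    candidates = [s for s in free_seats if math.dist(s, door_xy) <= max_distance]
    if candidates:
        seat = min(candidates, key=lambda s: math.dist(s, door_xy))
        free_seats.remove(seat)      # the seat is no longer available
        ped["seatPoint"] = seat      # plays the role of the PointNode 'seatPoint'
    else:
        ped["wantToSit"] = False     # no free seat near this door -> stay standing

free_seats = [(1.0, 0.5), (4.0, 0.5)]
ped = {"wantToSit": True, "seatPoint": None}
look_for_seat(ped, free_seats, door_xy=(0.0, 0.0), max_distance=2.0)
print(ped)   # seatPoint assigned, wantToSit unchanged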
Then, passengers are separated using a PedSelectOutput block:
Condition 1: ped.wantToSit == true --> they are sent to their assigned seat coordinates (PointNode 'seatPoint', null by default).
Condition 2: true (thus ped.wantToSit == false) --> they stay in the standing area of the vehicle; no assigned seatPoint is necessary in this case.
Now, it works pretty well, but when the majority of the seats are already occupied, the PedSelectOutput block suddenly directs a passenger with ped.wantToSit = false to its seat point, which is null, and I get a NullPointerException.
Attached you will find the function, the settings of the PedSelectOutput block, and the console log.
As can be seen, the PedSelectOutput sends the passenger through exit 1 (which gives the error because the coordinates of a null seat point are requested), despite ped.wantToSit = false.
Any ideas what is going wrong? To me it really looks like the function is working properly - I spent the whole day changing it until I realized that something goes wrong in the PedSelectOutput block.
Thank you in advance!
Pic 1: pedSelectOutput block and the command with the log
Pic 2: the function lookForSeat assigning the seats from the seat Collection
The problem here is a subtle one, which has cost me many hours of debugging as well. What you need to understand is that the "On exit" code is only executed once the agent already has a path along which it is going to exit. That is, the selectOutput and subsequent blocks are evaluated first, and only once it has been determined that the agent can move on to the next block is the "On exit" code called. The agent will then continue on the path that was chosen before the "On exit" code was executed.
See the small example below:
I have a pedestrian with a variable that is true by default and a select output that checks this value.
If I run the model, all pedestrians exit at the top option, as expected.
If I change the variable to false in the "On exit" code, I might expect that all pedestrians will now exit at the second option.
But they don't - there is no change.
If I add the code to the "On enter" code instead, then it does.
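To make the ordering concrete, here is a tiny stand-alone sketch (plain Python, not AnyLogic code) of why changing the flag in the on-exit code cannot affect the branch that has already been chosen:
class Agent:
    def __init__(self):
        self.want_to_sit = True   # stands in for the wantToSit parameter

def select_output(agent):
    # the branch is evaluated first, using the current value of the flag
    return 1 if agent.want_to_sit else 2

def leave_block(agent, on_exit):
    branch = select_output(agent)   # path already decided here...
    on_exit(agent)                  # ...and only then does the on-exit code run
    return branch

a = Agent()
chosen = leave_block(a, lambda ag: setattr(ag, "want_to_sit", False))
print(chosen)           # still 1, even though want_to_sit is now False
print(a.want_to_sit)    # False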
Hello people of StackOverflow,
I am currently working on a game engine using the Vulkan graphics API. In the past I just set anti-aliasing to the maximum available, but today I tried to turn it off (to improve performance on weaker systems). To do this I set the MSAA sample count in my engine to VK_SAMPLE_COUNT_1_BIT; however, this produced the validation error:
Validation Error: [ VUID-VkSubpassDescription-pResolveAttachments-00848 ] Object 0: handle = 0x55aaa6e32828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xfad6c3cb | ValidateCreateRenderPass(): Subpass 0 requests multisample resolve from attachment 0 which has VK_SAMPLE_COUNT_1_BIT. The Vulkan spec states: If pResolveAttachments is not NULL, for each resolve attachment that is not VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have a sample count of VK_SAMPLE_COUNT_1_BIT (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSubpassDescription-pResolveAttachments-00848)
I can work around this problem relatively easily, so it isn't really an issue for me; however, I was wondering why exactly this limit is in place. If I want to set the MSAA sample count to 1, why can't I?
Thanks,
sckzor
A sample count of 1 means "not a multisampled image", and if you're doing a multisample resolve, resolving from a non-multisampled image doesn't make sense. That is also why you can't use such images for anything else that expects a multisampled image (you can't use an MS-style sampler or texture function on them).
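As a rough illustration of the usual workaround (a toy Python sketch of the decision only, not real Vulkan calls): when the colour attachment is single-sampled, build the subpass with no resolve attachments at all instead of trying to resolve from it.
VK_SAMPLE_COUNT_1_BIT = 1

def build_subpass_description(msaa_samples):
    # toy stand-in for VkSubpassDescription: None plays the role of a NULL pResolveAttachments
    subpass = {"colorAttachments": [0], "resolveAttachments": None}
    if msaa_samples != VK_SAMPLE_COUNT_1_BIT:
        # a resolve only makes sense when the colour attachment really is multisampled
        subpass["resolveAttachments"] = [1]
    return subpass

print(build_subpass_description(VK_SAMPLE_COUNT_1_BIT))  # no resolve step at all
print(build_subpass_description(4))                      # resolve 4x MSAA into attachment 1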
I've created an experiment in PsychoPy Builder in which participants must vocally name pictures presented on screen (for example, if a picture of a chair appears, the participant has to respond by saying "chair"). I've set up a code component to detect each vocal response, which ends the trial and initiates the next one. This part of the experiment works well; however, I'm having trouble integrating EEG recording.
Some important information:
My trial loop reads images and triggerVal values out of a .csv file. I have an image component (called english_naming) that displays images for participants to name out loud. The component's STOP field is defined as $vpvk.event_onset - this forces the trial to end and the next one to begin upon detection of a vocal response.
So, here is my (working) code component at present:
Begin Experiment:
from psychopy import parallel
from psychopy import voicekey as vk  # needed for the voice key below
port = parallel.ParallelPort(address=61432)  # open the parallel port for EEG triggers
Begin Routine:
vpvk = vk.OnsetVoiceKey(sec=10)  # creates the voice key (listens for up to 10 s)
vpvk.start()  # starts recording
port.setData(triggerVal)  # sends this trial's trigger value (read from the .csv file)
End Routine:
vpvk.stop()  # ends the recording
port.setData(0)  # resets the trigger value to 0 for the start of the next trial
My problem is this:
At present, the parallel port events are time-locked to the start of each trial, but I need them to be time-locked to participants' vocal responses. I tried inserting if vpvk.event_onset(): above port.setData(triggerVal), but this fails to generate any trigger codes at all. I've also tried if english_naming==FINISHED, but the same problem occurred. I've tried a bunch of variants on these two lines of code, but nothing I can think of seems to work.
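For concreteness, here is roughly the kind of variant I have been trying in the Each Frame tab (a sketch only: trigger_sent is a flag I added myself and would be initialised to False in Begin Routine, and I am assuming event_onset is populated once the voice key detects speech):
# Each Frame (sketch): send the trigger once the vocal onset is detected, once per trial
if not trigger_sent and vpvk.event_onset:
    port.setData(triggerVal)   # time-lock the EEG trigger to the vocal response
    trigger_sent = True        # ensure the trigger is only sent once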
I would really really appreciate any advice on this problem. Thanks in advance!
When taking a shot using a long shutter speed (more than 15 seconds), an error message is returned instead of the result containing the picture address. The error message is "error": [40403, "Long Shooting"].
The camera is a NEX-6 and the API release version is 1.6.
Please use the getAvailableShutterSpeed method to get the currently possible values.
Best Regards,
Prem, Developer World team
See the special note in the API documentation in the "actTakePicture" section. For very long exposure times it will return error code 40403 in which case you can poll the camera using the "awaitTakePicture" method. The camera will continue to return 40403 when queried with "awaitTakePicture" until it's finished capturing, and then it will return the address of the postview image.
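In other words, a polling loop along these lines should work - a minimal Python sketch, assuming the usual JSON-RPC-over-HTTP interface of the Camera Remote API (the endpoint URL comes from the camera's device description and is only an example here):
import time
import requests

CAMERA_URL = "http://192.168.122.1:8080/sony/camera"   # example endpoint, discovered via SSDP

def call(method, params=None):
    # one JSON-RPC request to the camera service
    body = {"method": method, "params": params or [], "id": 1, "version": "1.0"}
    return requests.post(CAMERA_URL, json=body).json()

reply = call("actTakePicture")
while "error" in reply and reply["error"][0] == 40403:   # "Long Shooting": exposure still running
    time.sleep(1)
    reply = call("awaitTakePicture")

postview_url = reply["result"][0][0]   # assuming the usual [[url]] result shape
print(postview_url)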