I am making a custom USB device that will send sensor values when requested by the host (i.e. a computer) via USB.
I looked at the Communications & CDC Control and CDC-Data classes; they seem to do the job and are enumerated as a Virtual COM Port.
Will the device always be seen as a COM port, or can it be detected as something else?
What other class or method is preferable for such an application?
Someone suggested the CDC class for a similar application.
Note: I am using an STM32 microcontroller.
That is the idea of USB classes: each class is enumerated in its own particular way. If you want a driverless class on any system, I advise HID.
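To make the HID suggestion concrete, below is a minimal sketch of a vendor-defined HID report descriptor in C. It is an illustration only, not tied to any particular STM32 USB stack, and the 64-byte report size and vendor usage values are assumptions to adapt to your device:

/* Minimal vendor-defined HID report descriptor (sketch only).
 * One 64-byte IN report carries sensor values to the host and
 * one 64-byte OUT report carries requests from the host. */
static const unsigned char hid_report_desc[] = {
    0x06, 0x00, 0xFF,  /* Usage Page (Vendor Defined 0xFF00)  */
    0x09, 0x01,        /* Usage (Vendor Usage 1)              */
    0xA1, 0x01,        /* Collection (Application)            */
    0x09, 0x01,        /*   Usage (Vendor Usage 1)            */
    0x15, 0x00,        /*   Logical Minimum (0)               */
    0x26, 0xFF, 0x00,  /*   Logical Maximum (255)             */
    0x75, 0x08,        /*   Report Size (8 bits)              */
    0x95, 0x40,        /*   Report Count (64)                 */
    0x81, 0x02,        /*   Input (Data, Variable, Absolute)  */
    0x09, 0x01,        /*   Usage (Vendor Usage 1)            */
    0x75, 0x08,        /*   Report Size (8 bits)              */
    0x95, 0x40,        /*   Report Count (64)                 */
    0x91, 0x02,        /*   Output (Data, Variable, Absolute) */
    0xC0               /* End Collection                      */
};

With a descriptor like this, every major OS talks to the device through its built-in HID driver, so no custom driver is needed on the host.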
In the 8086, we know the next instruction to execute through CS:IP (the program counter), where IP is the offset in the current code segment (CS).
However, I'm not sure how JVM knows which instruction to execute.
The pc register in the JVM only indicates the offset within the current method, but how does the JVM know which method it is in?
Thanks!
I notice the bytecode offsets for each method start from 0.
So, if there are many methods in a class, how can I know which method the current frame is in?
I'm new to Java, so my question may be silly and my explanation may be wrong. Thanks for bearing with me!
OK, so I assume that you are asking about the JVM in relation to the Java Virtual Machine Specification (JVMS). The most directly relevant part of the spec says this:
2.5.1. The pc Register

The Java Virtual Machine can support many threads of execution at once (JLS §17). Each Java Virtual Machine thread has its own pc (program counter) register. At any point, each Java Virtual Machine thread is executing the code of a single method, namely the current method (§2.6) for that thread. If that method is not native, the pc register contains the address of the Java Virtual Machine instruction currently being executed. If the method currently being executed by the thread is native, the value of the Java Virtual Machine's pc register is undefined. The Java Virtual Machine's pc register is wide enough to hold a returnAddress or a native pointer on the specific platform.
Note the sentence stating that the pc register "contains the address of the Java Virtual Machine instruction currently being executed". It says the address of the instruction being executed. It does not say the instruction's offset from the start of the method's code segment ... as you seem to be saying.
Furthermore, there is no obvious reference to a register holding a pointer to the current method. And the section describing the call stack doesn't mention any pointer to the current method in the stack frame.
Having said all of that, the JVM specification is really a behavioral specification that JVM implementations need to conform to. It doesn't directly mandate that the specified behavior must be implemented in any particular way.
So while it seems to state that the abstract JVM has a register called a PC that contains an "address", it doesn't state categorically what an address means in this context. For instance, it does not preclude the possibility that the interpreter represents the "address" in the PC as a tuple consisting of a method address and a bytecode offset within the method. Or something else. All that really matters is that the JVM implementation can somehow use the PC to get the bytecode instruction to be executed.
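To make that last possibility concrete, here is a toy sketch in C of an interpreter whose "pc" is effectively a (method, offset) pair. This is purely illustrative; it is not how any particular JVM, such as HotSpot, is actually implemented:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical interpreter structures -- purely illustrative. */
struct method {
    const uint8_t *code;     /* this method's bytecode; offsets start at 0 */
    size_t         code_len;
};

struct frame {
    struct method *method;   /* which method this frame is executing */
    size_t         pc;       /* offset of the next instruction in method->code */
};

/* Fetch the instruction the "pc register" designates: because the frame
 * knows the current method, a per-method offset is enough to locate it. */
static int fetch(const struct frame *f) {
    if (f->pc >= f->method->code_len)
        return -1;           /* walked off the end of the method */
    return f->method->code[f->pc];
}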
I have the following code I am trying to run on an ESP-WROOM-32:
from machine import UART

def do_uart_things():
    uart = UART.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)

do_uart_things()
I am attempting to initialize a UART bus according to the documentation: https://docs.micropython.org/en/latest/library/machine.UART.html. The documentation suggests that only baudrate, bits, parity, and stop are required; however, I get a "1 additional positional arguments required" error, and I cannot figure out why.
I am also assuming that the rx and tx parameters are automatically converted to the correct type of pin, as needed by the UART class, rather than me having to manage them manually.
I have managed to get slightly similar code working:
from machine import UART

def do_uart_things():
    uart = UART(1, 9600)
    uart.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)
    # Pin numbers taken from the ESP32 datasheet -- they might not be correctly formatted

do_uart_things()
This has me thinking the documentation is unintentionally misleading: the leading example is not meant as "initialize it this way OR this way," but rather requires both things to be done.
Am I correct in thinking the latter code example is the correct way to use the micropython UART functionalities? I am also open to referrals to any good examples of UART and I2C usage in micropython, since I've found the documentation to be a little shy of great...
"UART objects can be created and initialised using:..." can be a little misleading. They meant that the object can only be created by using the constructor, however it can be initialised either with the constructor, or later, after the object has been created, but using the init method on it.
As you see, the class constructor needs a first parameter id, whereas the method init() does not. So you can use the constructor
uart = UART(1,baudrate=9600, bits=8, parity=None, stop=1, rx=34,tx=35)
but you cannot use UART.init() as this is not a constructor but a method, so it needs to operate on an instance, not a class.
TL;DR: How do you encode and decode an MTLSharedTextureHandle and an MTLSharedEventHandle so that they can be transported across an XPC connection inside an xpc_dictionary?
A macOS application I'm working on makes extensive use of XPC services and was implemented using the C-based API. (i.e.: xpc_main, xpc_connection, xpc_dictionary...) This made sense at the time because certain objects, like IOSurfaces, did not support NSCoding/NSSecureCoding and had to be passed using IOSurfaceCreateXPCObject.
In macOS 10.14, Apple introduced new classes for sharing Metal textures and events between processes: MTLSharedTextureHandle and MTLSharedEventHandle. These classes support NSSecureCoding, but they don't appear to have a counterpart in the C-XPC interface for encoding/decoding them.
I thought I could use something like [NSKeyedArchiver archivedDataWithRootObject:requiringSecureCoding:error:] to convert them to NSData objects, which can then be stored in an xpc_dictionary, but when I try to do that, I get the following exception:
Caught exception during archival:
This object may only be encoded by an NSXPCCoder.
(NSXPCCoder is a private class.)
This happens for both MTLSharedTextureHandle and MTLSharedEventHandle. I could switch over to using the new NSXPCConnection API, but I've already got an extensive amount of code built on the C interface, so I'd rather not have to make the switch.
Is there any way to archive either of those two classes into a payload that can be stored in an xpc_dictionary for transfer between the service and the client?
MTLSharedTextureHandle only works with NSXPCConnection. If you're creating the texture from an IOSurface you can share the surface instead which is effectively the same thing. Make sure you are using the same GPU (same id<MTLDevice>) in both processes.
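For illustration, here is a minimal sketch of the IOSurface route using only public C API; the "surface" dictionary key is an arbitrary choice:

#include <xpc/xpc.h>
#include <IOSurface/IOSurface.h>

/* Sender: wrap an existing IOSurfaceRef for transport inside an
 * xpc_dictionary message. */
static void put_surface(xpc_object_t message, IOSurfaceRef surface) {
    xpc_object_t xsurface = IOSurfaceCreateXPCObject(surface);
    xpc_dictionary_set_value(message, "surface", xsurface);
    xpc_release(xsurface);
}

/* Receiver: look the surface back up. The returned IOSurfaceRef can then
 * be turned into a texture on the same id<MTLDevice> with
 * -newTextureWithDescriptor:iosurface:plane: (on the Objective-C side). */
static IOSurfaceRef get_surface(xpc_object_t message) {
    xpc_object_t xsurface = xpc_dictionary_get_value(message, "surface");
    return xsurface ? IOSurfaceLookupFromXPCObject(xsurface) : NULL;
}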
There is no workaround for MTLSharedEventHandle using public API.
I recommend switching to NSXPCConnection if you can. Unfortunately, there isn't a good story for partially changing over using public API; you'll have to do it all at once, or split your XPC service into two separate services.
I am new to USB, and I am trying to develop a library that can do Device Firmware Upgrade (DFU) in our application.
The DFU Standard http://www.usb.org/developers/docs/devclass_docs/DFU_1.1.pdf talks about Run-Time DFU Functional Descriptor.
I understand what device, configuration, interface, and endpoint descriptors are, but I don't know what functional descriptors are. Therefore my questions are:
1. What is a Functional Descriptor?
2. How do I retrieve information about Functional Descriptor?
I am working with libusb to do this. So if you have any examples, that would be a great help.
1 - Device, configuration, interface, and endpoint descriptors are standard descriptors for defining a device and its interfaces. These descriptors contain generic information and can be read by the USB device driver.
Functional descriptors, on the other hand, are device-class specific and are known only to the class drivers. Every class, such as CDC, DFU, or HID, has its own functional descriptors specific to that class's functionality.
A functional descriptor describes the class-specific content within an interface descriptor. A class-specific interface descriptor can have more than one functional descriptor, and functional descriptors share a common header format.
2 - Since functional descriptors are part of the class interface descriptor, read the interface descriptor using the libusb API and you will get the functional descriptors along with it.
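For example, with libusb the class-specific functional descriptors show up in the extra/extra_length fields of struct libusb_interface_descriptor. A minimal sketch, assuming interface 0, alternate setting 0, with error handling trimmed:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

/* Walk the unparsed class-specific bytes attached to an interface.
 * Each functional descriptor starts with the common header:
 * bFunctionLength, bDescriptorType (0x21 for DFU FUNCTIONAL, 0x24
 * CS_INTERFACE for CDC), then the class-defined contents. */
static void dump_functional_descriptors(libusb_device *dev) {
    struct libusb_config_descriptor *config;
    if (libusb_get_active_config_descriptor(dev, &config) != 0)
        return;

    const struct libusb_interface_descriptor *intf =
        &config->interface[0].altsetting[0];
    const unsigned char *p = intf->extra;
    int remaining = intf->extra_length;

    while (remaining >= 2) {
        unsigned char bLength = p[0];
        unsigned char bDescriptorType = p[1];
        printf("functional descriptor: type 0x%02x, length %u\n",
               bDescriptorType, bLength);
        if (bLength == 0 || bLength > remaining)
            break;               /* malformed descriptor; stop */
        p += bLength;
        remaining -= bLength;
    }
    libusb_free_config_descriptor(config);
}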
I have some custom DirectShow filters with custom property pages. These work fine when the filter is in the same process as the property page.
However when I use the 'connect to remote graph' feature of Graph Edit the property pages don't work.
When the property page does a QueryInterface for my private COM interface on the remote filter, the QueryInterface fails. Property pages of Microsoft filters (e.g. the EVR video renderer) work fine so it must be possible.
Presumably this is happening because my filter's private interfaces only work 'in process' and I need to add extra COM support so that these interfaces will work with an 'out of process' filter. What do I need to do in COM terms to achieve this?
Do the DirectShow base classes support these COM features? Can I reliably detect when the filter is running out of process and gracefully refuse to show the property page?
One option is to build a proxy/stub pair. Another, and far easier, option is to make your private interface Automation-compatible (derive it from IDispatch; type constraints apply) and put it into a type library, which is attached to the DLL and registered the usual way. A proxy/stub pair will then be supplied for such an interface automatically, with no need to bother.
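For illustration, the type-library registration step might look like the following in C, typically invoked from the DLL's DllRegisterServer; the helper name is made up, and the type library is assumed to be embedded as a resource in the DLL:

#include <windows.h>
#include <oleauto.h>

/* Sketch: register the type library embedded in this module so the
 * Automation (type-library) marshaler can marshal the IDispatch-derived
 * interface across process boundaries. */
static HRESULT register_embedded_typelib(HINSTANCE hinst) {
    WCHAR path[MAX_PATH];
    GetModuleFileNameW(hinst, path, MAX_PATH);

    ITypeLib *typelib = NULL;
    HRESULT hr = LoadTypeLibEx(path, REGKIND_REGISTER, &typelib);
    if (SUCCEEDED(hr))
        typelib->lpVtbl->Release(typelib);
    return hr;
}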
The DirectShow base classes do not offer built-in support for this. Stock DirectShow filters provided with Windows may or may not be compatible with passing interfaces over process boundaries; my guess would be that it depended on the team in charge of the respective development years ago. Video renderers, for instance, have interfaces that you can connect through remotely. Audio renderers, on the contrary, have interfaces without such capability in mind, and they just crash one of the processes attempting to make such a connection (the client-side process, if my memory serves me right).