Are hardware locks resistant to port snooping?

While searching the internet, I saw that there are ways to crack hardware locks (dongles): one method is to sniff the connected port, record the information that is exchanged, and build an emulator that lets the program run without the lock.
Do modern hardware locks use a specific method or protocol that secures the data exchanged between the SDK and the lock and is resistant to this type of attack?
It seems that for most companies, crackers have been able to obtain the lock and its software and crack it. Is there a hardware lock on the market that makes this connection safe?
I checked most of the products and found nothing in their documentation explaining whether they provide a secure connection that resists the lock being copied.
For example, locks from the following manufacturers have been copied by individuals or teams who produced working emulators:
Safenet Sentinel SuperPro UltraPro Rainbow, DESKey DK2 DK3, Eleckey
Eutron SmartKey, Feitian Rockey 2/4ND, GV-Series, Guardant, INROKey
KEYLOK 2, Marx CryptoBox, McAMOS SmartLock, MegaLock, Microcosm
Dinkey, Matrix, SENSELOCK Dongle, SG-Lock Dongle, SecuTech UNIKEY
SecureMetric SecureDongle X, Softlok, Wibu Codemeter CmStick Box Key

Polling vs handshaking in hardware

Brookshear & Brylow's Computer Science: An Overview (12th ed.) states the following:
a process such as printing a document involves a constant two-way dialogue, known as handshaking, in which the computer and the peripheral device exchange information about the device’s status and coordinate their activities.
I'm more familiar with "handshaking" as the process of establishing a TCP connection, and "polling" as the technique of repeatedly checking the status of a hardware device.
This ScienceDirect summary complicates things further, mentioning two kinds of handshaking - hardware and software - neither of which has the meaning I'm familiar with.
So what is the exact relationship between "handshaking" and "polling"?
Handshaking is a multistage process where devices establish a connection and confirm that they are listening/talking to each other. It is a way to ensure, to a certain degree, that data will go to the right place and it will be received.
Polling, as you said, is just repeatedly checking the status of a device. As for the relationship: you need a way to get polling data in the first place, and handshaking is often how that connection is established.
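The contrast can be sketched with a toy model (the `Device` class and its methods here are invented for illustration, not any real driver API):

```python
import time

class Device:
    """Toy peripheral: becomes ready a few polls after work is requested."""
    def __init__(self, polls_until_ready=3):
        self._remaining = polls_until_ready

    def handshake(self):
        # Handshaking: both sides confirm they are talking to each other
        # before any data moves (reduced here to a single status exchange).
        return "ACK"

    def ready(self):
        # Polling target: the host reads this status flag repeatedly.
        self._remaining -= 1
        return self._remaining <= 0

def print_document(dev):
    # 1. Handshake once to establish the dialogue.
    if dev.handshake() != "ACK":
        raise IOError("device did not respond")
    # 2. Poll the status flag until the device reports it is ready.
    polls = 0
    while not dev.ready():
        polls += 1
        time.sleep(0)  # a real driver would wait or yield here
    return polls

print(print_document(Device()))
```

So the handshake happens once to set up the dialogue, and polling is what happens inside it.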

Good practices for AES key derivation and storage on STM32

I'm developing a device on STM32L4x6. It is connected through BLE to a smartphone and it exchanges encrypted data with it.
Encryption is AES-GCM and I'm using the reference implementation provided by STMicro.
I have implemented a shared secret exchange mechanism using Diffie-Hellman protocol on Curve25519. Right now I am using this shared secret directly as AES key.
However, I am confused about two points:
I think I have to derive a session key from the shared secret, but I don't really understand how.
Regarding key storage on the STM32, what is the common/best practice? Is it enough to store the key in Flash and set the Flash to Read Protection Level 1?
Thank you
As for deriving a session key - you may want to look into the topic of Key Derivation Functions (KDF). Googling it returns a lot of useful information related to establishing session keys. You may also ask your question on https://crypto.stackexchange.com/.
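As a concrete example, HKDF (RFC 5869) is a standard KDF for turning a raw Diffie-Hellman output into a session key. Here is a minimal HKDF-SHA256 sketch using only the Python standard library; the salt value and `info` label below are illustrative placeholders, not anything mandated by your setup:

```python
import hashlib
import hmac

def hkdf_sha256(shared_secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Derive `length` bytes from a raw shared secret per RFC 5869."""
    # Extract: concentrate the entropy of the DH output into a fixed-size PRK.
    prk = hmac.new(salt, shared_secret, hashlib.sha256).digest()
    # Expand: stretch the PRK into the requested number of output bytes.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Example: turn an X25519 shared secret into a 128-bit AES-GCM session key.
shared = bytes(32)            # placeholder for the real DH output
salt = b"\x00" * 32           # may be a public random value sent in the clear
session_key = hkdf_sha256(shared, salt, b"aes-gcm-session-key", length=16)
```

The `info` label lets you derive independent keys (e.g. one per direction) from the same shared secret simply by changing the label.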
As for storing keys in STM32 - it depends what your requirements are. Do the keys need to persist between sessions or can you generate a new one each time a connection is established? Generating a new key each time a new connection is made will be safer due to two reasons:
It's different for each connection so even if someone manages to get the key for a session from the past, it may only be used to decrypt that session.
If you generate a new key for each session, you don't need to store it anywhere persistent such as Flash memory; you can keep it in RAM only. Powering down the device will wipe the key, and enabling read protection prevents access to RAM as well as to internal Flash.
Regarding the second point however - the STM32 is NOT considered a "secure microcontroller". It lacks hardware countermeasures against physical attacks: power/voltage glitch detection, side-channel protection, a secure mesh, etc. With enough resources and determination, an attacker will be able to obtain the cryptographic keys you use, for example by grinding down the chip package and optically reading your data. That raises the question of how secure the device really needs to be, weighing development time against hardware security cost. With an STM32, all you can do is make the attack harder (keep the keys in RAM only while you need them, then overwrite them with noise) and limit what an attacker gains (change session keys as often as possible, e.g. per session).
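The "keep keys in RAM, then overwrite them with noise" advice can be illustrated as follows. This is a Python sketch of the idea only; on the STM32 itself you would do the same in C, taking care that the compiler cannot optimize the scrubbing away, and Python gives no hard guarantee that no copies linger elsewhere:

```python
import os

class SessionKey:
    """A per-session key kept only in RAM and scrubbed when no longer needed."""

    def __init__(self, nbytes=16):
        # Fresh random key per session; never written to Flash.
        self._buf = bytearray(os.urandom(nbytes))

    def bytes(self):
        return bytes(self._buf)

    def wipe(self):
        # Overwrite with noise first, then zeros, so the plaintext key
        # does not remain readable in RAM after the session ends.
        self._buf[:] = os.urandom(len(self._buf))
        self._buf[:] = b"\x00" * len(self._buf)
```

A `bytearray` is used because, unlike `bytes`, it is mutable and can actually be overwritten in place.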

UDP Broadcast, Multicast, or Unicast for a "Toy Application"

I'm looking to write a toy application for my own personal use (and possibly to share with friends) for peer-to-peer shared status on a local network. For instance, let's say I wanted to implement it for the name of the current building you're in (let's pretend the network topology is weird, and multiple buildings occupy the same LAN). The idea is if you run the application, you can set what building you're in, and you can see the buildings of every other user running the application on the local network.
The question is, what's the best transport/network layer technology to use to implement this?
My initial inclination was to use UDP Multicast, but the more research I do about it, the more I'm scared off by it: while the technology is great and seems easy to use, if the application is not tailored for a particular site deployment, it also seems most likely to get you a visit from an angry network admin.
I'm wondering, therefore, since this is a relatively low bandwidth application — probably max one update every 4–5 minutes or so from each client, with likely no more than 25–50 clients — whether it might be "cheaper" in many ways to use another strategy:
Multicast: find a way to pick a well-known multicast address from 239.255/16 and have interested applications join the group when they start up.
Broadcast: send out a single UDP Broadcast message every time someone's status changes (and one "refresh" broadcast when the app launches, after which every client replies directly to the requesting user with their current status).
Unicast: send a UDP Broadcast at application start to announce interest, and when a client's status changes, it sends a UDP packet directly to every client who has announced. This results in the highest traffic, but might be less likely to annoy other systems with needless broadcast packets. It also introduces potential complications when apps crash (in terms of generating unnecessary traffic).
Multicast is most certainly the best technology for the job, but I'm wondering if the associated hassles are worth avoiding since this is just a "toy application," not a business-critical service intended for professional network admin deployment and configuration.
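For reference, the multicast option boils down to very little code. This sketch uses an arbitrarily chosen group address and port from 239.255/16 and a made-up `user|building` message format; none of these values come from any standard:

```python
import socket
import struct

GROUP, PORT = "239.255.42.99", 54321  # arbitrary ad-hoc group and port

def encode_status(user, building):
    """Serialize one status update into a small datagram payload."""
    return f"{user}|{building}".encode()

def decode_status(data):
    """Parse a status datagram back into (user, building)."""
    user, building = data.decode().split("|", 1)
    return user, building

def make_receiver():
    """Socket that joins the group and receives everyone's status updates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send_status(user, building):
    """Fire one small datagram at the group whenever our status changes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
    sock.sendto(encode_status(user, building), (GROUP, PORT))
    sock.close()
```

The TTL of 1 keeps the datagrams on the local segment, which is part of what keeps the network admin from getting angry.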

Does DDS have a Broker?

I've been trying to read up on the DDS standard, and OpenSplice in particular, and I'm left wondering about the architecture.
Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties?
If I just launch a single process publishing data for a topic, and launch another process subscribing for the same topic, is this sufficient? Is there any reason one might need another process running?
In the alternative, does it use UDP multicasting to have some sort of automated discovery between publishers and subscribers?
In general, I'm trying to contrast this to traditional queue architectures such as MQ Series or EMS.
I'd really appreciate it if anybody could help shed some light on this.
Thanks,
Faheem
DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice uses a model with a service on each node, but that is an implementation detail - RTI DDS, for example, doesn't have one.
DDS specification is designed so that implementations are not required to have any central daemons. But of course, it's a choice of implementation.
Implementations like RTI DDS, MilSOFT DDS and CoreDX DDS have decentralized, peer-to-peer architectures and do not need any daemons (discovery is done with multicast on LANs). This design has many advantages: fault tolerance, low latency and good scalability. It also makes the middleware really easy to use, since there is no need to administer daemons. You just run the publishers and subscribers, and the rest is handled automatically by DDS.
OpenSplice DDS used to require daemon services running on each node, but they have added a new feature in v6 so that you don't need daemons anymore. (They still support the daemon option).
OpenDDS is also peer-to-peer, but it needs a central daemon running for discovery as far as I know.
I think it's indeed good to differentiate between a 'centralized broker' architecture (where that broker could become a single point of failure) and a service/daemon on each machine that manages traffic flows based on DDS QoS policies such as importance (DDS: transport-priority) and urgency (DDS: latency-budget).
It's interesting to notice that most people think it's absolutely necessary to have a (real-time) process scheduler on a machine to manage the CPU as a critical shared resource (based on time-slicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it 'strange' that a 'network scheduler' would come in handy to manage the network interface as a shared resource and schedule traffic (based on QoS-policy-driven 'packing' and the use of multiple traffic-shaped priority lanes).
And this is exactly what OpenSplice does in its (optional) federated-architecture mode: multiple applications running on a single machine share data through a shared-memory segment, and a networking service (daemon) for each physical network interface schedules the in- and out-bound traffic based on the actual QoS policies for urgency and importance. Because such a service has access to all of the node's information, it can also combine samples from different topics and different applications into (potentially large) UDP frames, perhaps even exploiting some of the available latency budget for this 'packing', and thus properly balance efficiency (throughput) against determinism (latency/jitter). End-to-end determinism is further helped by scheduling the traffic over pre-configured traffic-shaped 'priority lanes' with 'private' Rx/Tx threads and DiffServ settings.
So having a network-scheduling daemon per node certainly has some advantages. It also decouples the network from faulty applications that could be either 'over-productive' (flooding the system) or 'under-reactive' (causing system-wide retransmissions) - an aspect that's often forgotten when arguing that a network-scheduling daemon could be viewed as a 'single point of failure'. The other view is that without any arbitration, any 'standalone' application that talks directly to the wire is a potential system-wide threat when it starts misbehaving as described above, for any reason.
Anyhow, it's always a controversial discussion; that's why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called 'standalone' or 'single process').
Hope this is somewhat helpful.

appleevent versus notification

I am looking for a high-performance inter-process communication system on Mac OS X.
What is the best system? AppleEvents or NSNotifications?
Distributed notifications (i.e. notifications sent through NSDistributedNotificationCenter) are most likely not a good option if your goal is high performance and/or reliability. Here is Apple's own take on this subject:
Posting a distributed notification is an expensive operation. The notification gets sent to a system-wide server that distributes it to all the tasks that have objects registered for distributed notifications. The latency between posting the notification and the notification’s arrival in another task is unbounded. In fact, when too many notifications are posted and the server’s queue fills up, notifications may be dropped.
(Source: http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Classes/NSDistributedNotificationCenter_Class/Reference/Reference.html)
Depending on what you mean by "high performance", you might want to look into distributed objects, or plain old Unix IPC mechanisms (sockets, pipes, shared memory etc).
If you control both the sender and the recipient, you can open a socket between the two processes ( man socketpair ), which is quite high performance. You can also open a file in a shared location ( like /tmp ) and write to it from one process and read from the other, which is quite speedy. You can also open two TCP/IP ports on the local machine, one in each process, and then send from one to the other "over the network".
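The socketpair option is particularly simple when one process spawns the other, since the connected pair can be created before forking. A minimal sketch, in portable Python rather than macOS-specific code:

```python
import os
import socket

def pingpong(payload=b"ping"):
    """Round-trip a message between parent and forked child over a socketpair."""
    # A connected pair of local sockets: bytes written on one end
    # appear on the other, with no network stack involved.
    parent_sock, child_sock = socket.socketpair()

    pid = os.fork()
    if pid == 0:                        # child: echo the request in upper case
        parent_sock.close()
        msg = child_sock.recv(1024)
        child_sock.sendall(msg.upper())
        child_sock.close()
        os._exit(0)

    # parent: send the request, wait for the reply, reap the child
    child_sock.close()
    parent_sock.sendall(payload)
    reply = parent_sock.recv(1024)
    parent_sock.close()
    os.waitpid(pid, 0)
    return reply

print(pingpong())  # prints b'PING'
```

The same pattern works with `man socketpair` in C; the key point is that both endpoints exist before the fork, so no address, port, or shared file needs to be agreed on.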
If your only two choices are NSNotifications or AppleEvents, well, AppleEvents will likely perform better.