I want to solve an NP-hard combinatorial optimization problem using quantum optimization. For this I am using the "classiq" Python library, which is a high-level API for building hardware-compatible quantum circuits, with an IBMQ backend.
To use "classiq", you first have to authenticate your machine (according to the official "classiq" documentation: https://docs.classiq.io/latest/getting-started/python-sdk/).
Unfortunately, whenever I run classiq.authenticate(), I get a runtime error, as shown in the attached figure (with the full traceback).
Classiq currently requires a license to use. This could be why the authentication fails.
A license can be acquired by contacting:
https://www.classiq.io/contact-us
I think the Classiq API can definitely help with your use case, but as mentioned above, you should contact Classiq for a license.
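If it helps to see the step in isolation, here is a minimal sketch of the authentication call from the question; it assumes a licensed Classiq account, which is the missing piece pointed out above.

```python
# Minimal sketch, assuming a licensed Classiq account.
# classiq.authenticate() typically opens a browser window to log in and
# register the machine; the runtime error from the question is expected to
# go away once the account behind that login has a valid license.
import classiq

classiq.authenticate()
```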
I am studying the TensorFlow Federated API in order to do federated learning with real, separate machines.
But I found an answer on this site saying that it does not support federated learning across real multiple machines.
Is there really no way to do federated learning with actual separate machines?
Even if I build a network of two client PCs and one server PC, is it impossible to set that system up with the TensorFlow Federated API?
Or, even if I adapt the code, can I not build the system I want?
If the code can be modified to configure it, can you give me a tip? If not, when will there be an example of configuring it on real computers?
In case you are still looking for something: if you're not bound to TensorFlow, you could have a look at PySyft, which uses PyTorch. Here is a practical example of an FL system built with one server and two Raspberry Pis as clients.
TFF is really about expressing the federated computations you wish to execute. In terms of physical deployments, TFF includes two distinct runtimes: one "reference executor" which simply interprets the syntactic artifact that TFF generates, serially, all in Python and without any fancy constructs or optimizations; another still under development, but demonstrated in the tutorials, which uses asyncio and hierarchies of executors to allow for flexible executor architectures. Both of these are really about simulation and FL research, and not about deploying to devices.
In principle, this may address your question (in particular, see tff.framework.RemoteExecutor). But I assume that you are asking more about deployment to "real" FL systems, e.g. data coming from sources that you don't control. This is really out of scope for TFF. From the FAQ:
Although we designed TFF with deployment to real devices in mind, at this stage we do not currently provide any tools for this purpose. The current release is intended for experimentation uses, such as expressing novel federated algorithms, or trying out federated learning with your own datasets, using the included simulation runtime.
We anticipate that over time the open source ecosystem around TFF will evolve to include runtimes targeting physical deployment platforms.
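That said, if you want to experiment with the RemoteExecutor mentioned above, a rough sketch of pointing the runtime at remote worker machines might look like the following. The worker addresses are placeholders, and the exact entry point (here tff.backends.native.set_remote_execution_context) has moved around between TFF releases, so treat the names as assumptions to check against the version you have installed.

```python
import grpc
import tensorflow_federated as tff

# Placeholder addresses for the two client PCs from the question; each one
# would need to run a TFF executor service listening on this port.
channels = [
    grpc.insecure_channel("client-pc-1:8000"),
    grpc.insecure_channel("client-pc-2:8000"),
]

# Route federated computations to the remote workers instead of the local
# simulation runtime (API location is an assumption; it has changed across
# TFF releases).
tff.backends.native.set_remote_execution_context(channels)
```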
Google is offering $300 of free trial credit for registering for Google Cloud. I want to use this opportunity to pursue a few projects using TensorFlow. But unlike with AWS, I am not able to find much information on the web about how to configure a Google Compute Engine instance. Can anyone suggest or point to resources that will help me?
I already looked into the Google Cloud documentation; while it is clear, it really doesn't give any suggestions as to what kind of CPUs to use, and for that matter I cannot see any GPU instances when I try to create a VM instance. I want something along the lines of an AWS g2.2xlarge instance.
GPUs on Google Cloud are in alpha:
https://cloud.google.com/gpu/
The timeline given for public availability is 2017:
https://cloudplatform.googleblog.com/2016/11/announcing-GPUs-for-Google-Cloud-Platform.html
I would suggest that you think carefully about whether you want to "scale up" (getting a single very powerful machine to do your training) or "scale out" (distributing your training). In many cases, scaling out works out better and cheaper, and TensorFlow/Cloud ML are set up to help you do that.
Here are directions on how to get Tensorflow going in a Jupyter notebook on a Google Compute Engine VM:
https://codelabs.developers.google.com/codelabs/cpb102-cloudml/#0
The first few steps are TensorFlow, the last steps are Cloud ML.
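Once the VM is up and TensorFlow is installed, a quick sanity check from Python (using the TF 1.x-era API that matches this answer) shows which devices TensorFlow can actually see; on a GPU instance you should get at least one device of type GPU in the list.

```python
# List the devices TensorFlow can see on the VM.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.device_type, device.name)
```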
After watching this question I decided to give writing a new op for TensorFlow a try.
Since the requirements (C++, Python, and likely a *nix system) are not my primary tools, I would like to avoid reaching a point where I have to back out and make system/tool changes just because I did not ask.
Is there a standard or preferred system and/or set of tools used by those working on TensorFlow?
I know that recommendation questions are not allowed here; I am not asking for a personal recommendation, I am asking for the standard used by the TensorFlow team, or whatever they find works.
Really, anything where you can get Bazel and the required libraries up and running. But since you're starting from scratch: Ubuntu's a very safe bet and (I haven't measured this, but this is a solid estimate) probably gets the most testing and development by the tf team. But there are many options that all work -- you can develop inside a virtualenv on many environments. Things like GPU support get a little more platform-specific, and that's where Ubuntu starts to become the easiest choice if you don't have any other constraints.
The key requirements are outlined in installing Tensorflow from sources.
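For reference, once the op is built, using it from Python looks the same on any of these platforms. The sketch below assumes the zero_out example op from the official "Adding a New Op" guide, compiled to a shared library whose path is hypothetical.

```python
import tensorflow as tf

# Load the compiled custom-op shared library (path is hypothetical; it is
# whatever Bazel or g++ produced when you built the op).
zero_out_module = tf.load_op_library('./zero_out.so')

# The example op zeroes every element except the first (TF 1.x-style session,
# matching the era of this question).
with tf.Session() as sess:
    print(sess.run(zero_out_module.zero_out([1, 2, 3, 4])))
```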
I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
I understand this is an involved topic, but a general outline of the steps and what issues I should keep an eye out for would be appreciated.
I presume the first step is to take the firmware, load it into a decompiler (any recommendations?) and examine the contents. I admit I've never decompiled code before, so this will be a good challenge to get me started. Any advice? Books? Tutorials? What should I expect?
Thanks stack as always!
Note: I know about Magic Lantern and CHDK; I want technical advice on how they were started and came to be.
http://magiclantern.wikia.com/wiki/Decompiling
http://magiclantern.wikia.com/wiki/Struct_Guessing
http://magiclantern.wikia.com/wiki/Firmware_file
http://magiclantern.wikia.com/wiki/GUI_Events/550D
http://magiclantern.wikia.com/wiki/Register_Map/Brute_Force
I wanted to know what steps one would need to take to "hack" a camera's firmware to add/change features, specifically cameras of Canon or Olympus make.
General steps for this hacking/reverse engineering:
Gathering information about the camera system (main CPU, image coprocessor, RAM/flash chips, ...). Challenges: camera makers tend to hide such sensitive information, and datasheets/documentation for proprietary chips are not released to the public at all.
Getting the firmware: either by dumping the flash memory inside the camera or by extracting the firmware from the update packages used for camera firmware updates. Challenges: accessing the readout circuitry of the flash is not a trivial job, especially since camera systems have some of the most densely populated PCBs; also, proprietary firmware is often heavily protected with sophisticated encryption when embedded into update packages.
Disassembly: getting "a bit" more readable instructions out of the firmware opcodes. Challenges: although disassemblers are widely available, they give you the "operational" equivalent assembly code with no guarantee of it being human-readable or meaningful.
Customization: only after understanding most of the code's functionality can you make modifications, which must not harm the normal operation of the camera system. Challenges: not an easy task.
Alternatively, I highly recommend looking at an already open-source camera project (software and also hardware); you can learn a lot about camera systems from one.
Examples of such projects are Elphel and AXIOM.
I am wondering if anyone has any information on development boards where you can utilize ARM TrustZone? I have the BeagleBoard XM, which uses TI's OMAP3530 with a Cortex-A8 processor that supports TrustZone; however, TI confirmed that they have disabled the function on this board as it is a general-purpose device.
Further research led me to the PandaBoard, which uses the OMAP4430, but there has been no response from TI and there is very little information on the internet. How do you learn how to use TrustZone?
Best Regards
Mr Gigu
As far as I know, all the OMAP processors you can get off-the-shelf are GP devices, i.e. with the TrustZone functions disabled (or else they're processors in production devices such as off-the-shelf mobile phones, for which you don't get the keys). The situation is similar with other SoC manufacturers. Apart from ARM's limited publications (which only cover the common ARM features anyway, and not the chip-specific features such as memory management details, booting and loading trusted code), all documentation about TrustZone features comes under NDA. This is a pity because it precludes independent analysis of these security features or leverage by open-source software.
I'm afraid that if you want to program for a TrustZone device, you'll have to contact a representative of TI or one of their competitors, convince them that your application is something they want to happen, and obtain HS devices, the keys to sign code for your development boards, and the documentation without which you'll have a very hard time.
As of today OP-TEE runs on quite a few devices (see OP-TEE platforms supported), and several of them are readily available development boards, to name a few: HiKey, Raspberry Pi 3, ARM Juno board, Freescale i.MX6 variants, etc. You could either pick up one of those or simply try it all using QEMU, which is very well supported in OP-TEE.
You can get a 45-day trial version of ARM Fast Models. The Raspberry Pi is supposed to support TrustZone too. www.openvirtualization.org has a full open-source implementation of an ARM TrustZone secure world. ARM is moving away from its proprietary TrustZone APIs to the GlobalPlatform APIs. GlobalPlatform also defines the APIs for inter-process communication, etc.
There are a few select boards at this time that do allow development with TrustZone. As far as general-purpose boards go, the FriendlyARM board is a good start (http://www.friendlyarm.net). Also, any board with a Cortex-A15 processor must have TrustZone available, because the virtualization extensions can only be utilized from the Normal world. There may still be a question of whether the manufacturer has their own code running in the Secure world, but you can always try. The Arndale is a good development board, but unfortunately Samsung already has code running in the Secure world, so by the time you get access you're running in the Normal world. So if you need Secure-world access, look for non-Samsung Cortex-A15 processors; that'd be your best bet.
It's also worth noting that TI did not technically disable TrustZone. Instead, the boot ROM code transitions the processor into the Normal world before switching execution to U-Boot. So it's actually using TrustZone to move to the Normal world, but then doesn't provide a mechanism for moving back to the Secure world. To prove this, just try to read the SCR and you'll get an undefined-instruction exception, which is what will typically happen from the Normal world. However, if you perform an SMC call, it will execute just as expected (i.e., it switches to the Secure world, but then just switches right back to the Normal world), so it looks like nothing happened.
Regarding OpenVirtualization: it can be ported to ARM development boards like the Samsung Exynos 4xxx series.
You will have access to all of the source code, including the secure OS, if you use OpenVirtualization.
But if you just want to develop programs that use TrustZone, I wonder if that is necessary; maybe there are standard drivers or APIs that let you do it without worrying about compiling your own secure OS?
The best thing you can do is contact parties like Gemalto and the people who brought you MobiCore. Note that they will indeed ask you to sign an NDA.
Secondly, you can buy the ARM DS-5 development suite. This comes with a lot of documentation, including some on TrustZone.
You should really take a look at the USB armory from Inverse Path: http://www.inversepath.com/usbarmory.html
It's built on open hardware and open-source software with full access to TrustZone (you can blow a fuse in the die to enable secure boot): https://github.com/inversepath/usbarmory
They successfully ran Genode within TZ and Linux in the normal world.