Is there any possibility of running Orion ContextBroker on Raspberry Pi with Raspbian OS?
The requirements recommended in the Orion documentation are:
Although we haven't yet done precise profiling of Orion Context
Broker, tests in our development and testing environment show that a
host with 2 CPU cores and 4 GB RAM is fine to run the Context Broker
and MongoDB server. In fact, this is a rather conservative estimate;
Orion Context Broker could also run fine on systems with a lower
resource profile. The critical resource here is RAM, as MongoDB
performance is related to the amount of RAM available to map database
files into memory.
Besides the board's constrained resources, you will have to track down the equivalent required libraries for Raspbian OS.
There is a discussion about it here:
https://github.com/telefonicaid/fiware-orion/issues/15
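If you want to scope the porting effort quickly, you can check from the Pi itself which of Orion's dependencies have Raspbian counterparts. A minimal sketch; the package names below follow the Debian/Raspbian archive and are my assumptions, since Orion's install docs name the CentOS packages:

    # Search the Raspbian archive for counterparts of Orion's main
    # dependencies (the MongoDB server and the libmicrohttpd HTTP library):
    apt-cache search mongodb
    apt-cache search libmicrohttpd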
Related
I want to get cumulative CPU usage for each VM hosted on a VMware ESXi host.
I tried the PowerCLI cmdlet Get-VMHost, but it only gives the overall CPU usage of the ESXi host.
For CPU usage, esxtop is a very powerful ESX command that you run at the CLI. I haven't used PowerCLI, so I'm unsure whether the same data is exposed there, but esxtop is definitely available at the CLI, which VMware tries to discourage you from using (see https://kb.vmware.com/s/article/2004746). Documentation for esxtop in the latest release of vSphere is at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-D89E8267-C74A-496F-B58E-19672CAB5A53.html.
That document is a bit terse; for getting CPU usage per VM, this old esxtop documentation may guide you better: https://www.vmware.com/pdf/esx2_using_esxtop.pdf. In particular, note the different nomenclature of ESXi (and ESX): the primary unit of address space and execution is the "world" rather than the "process". So you want the CPU usage of all worlds associated with each VM. Some VMs have a single world, some have several; it is configurable. esxtop has been around forever, and most likely it still provides the same functionality today that it did over a decade ago with ESX 2.
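If the interactive view is awkward for collecting numbers, esxtop also has a batch mode you can post-process offline; a minimal sketch using the standard batch-mode flags:

    # Dump all counters as CSV every 5 seconds for 12 samples, then pull
    # the per-world CPU columns (e.g. %USED) out of the file offline.
    esxtop -b -d 5 -n 12 > esxtop-stats.csv

Since you asked about PowerCLI: it does have a per-VM statistics cmdlet, Get-Stat (e.g. with -Stat cpu.usage.average), which may also get you there, though I haven't used it myself.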
I hear a lot of statements like "Hypervisors are not emulators. If you need to emulate hardware specifications other than what your computer has, you need to use an emulator, not a hypervisor."
Well, yesterday I saw this video on YouTube - click here - which shows how to install Windows 95 on modern macOS with VMware Fusion.
The strange thing for me is that at 17:39 you can see that the Windows 95 virtual machine is a "Pentium Pro with 64 MB RAM".
Hmm! So Fusion somehow faked the processor and RAM, right? But it is not an emulator, right? So does that mean any hypervisor can fake the processor and RAM?
At the time of its release, Windows 95 only had code to recognize CPUIDs up to the Pentium Pro. Any processor no lower than a Pentium Pro gets "called" a Pentium Pro.
The main difference is that a hypervisor cannot emulate CPU code; all code must run on the original processor.
The hypervisor does emulate the BIOS, which in turn tells the OS the available hardware specs, including RAM, boot order, and attached peripherals.
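To make the CPUID point concrete, here is a minimal sketch of reading the family/model the way an OS would; inside a VM, these values are whatever the hypervisor chooses to report. It assumes GCC or Clang on x86, where __get_cpuid comes from the compiler's cpuid.h:

    /* Print the CPU family/model reported by CPUID leaf 1. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;                            /* leaf 1 unsupported */
        unsigned int family = (eax >> 8) & 0xf;  /* base family */
        unsigned int model  = (eax >> 4) & 0xf;  /* base model  */
        /* A Pentium Pro reports family 6 -- the highest family that
         * Windows 95-era code knew a name for. */
        printf("family %u, model %u\n", family, model);
        return 0;
    }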
When you are talking about VMware Fusion, the way this works depends on how virtualization is achieved. According to Wikipedia, VMware Fusion uses hardware-assisted virtualization, dynamic binary translation, and para-virtualization.
In the hardware-assisted virtualization case, @Strom is correct and guest instructions can be executed directly on the host CPU. In addition to @Strom's answer, you can fake the CPU type by trapping and emulating the cpuid instruction.
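To illustrate trap-and-emulate, here is a hypothetical sketch of the hypervisor side. The vcpu_t structure, its register fields, and handle_cpuid_exit are invented names for illustration, not VMware's or KVM's actual API:

    #include <stdint.h>
    #include <stdio.h>

    /* Invented vCPU state for illustration. */
    typedef struct {
        uint32_t eax, ebx, ecx, edx;
        uint64_t rip;
    } vcpu_t;

    /* Called when the guest executes CPUID and the hardware traps
     * into the hypervisor. */
    static void handle_cpuid_exit(vcpu_t *vcpu) {
        if (vcpu->eax == 1) {
            /* Leaf 1 = family/model/stepping. A real Pentium Pro
             * reports roughly family 6, model 1 (signature ~0x617),
             * so report that instead of the host's real values. */
            vcpu->eax = 0x00000617u;
            vcpu->ebx = 0;
            vcpu->ecx = 0;
            vcpu->edx = 0;       /* feature bits, trimmed for brevity */
        }
        vcpu->rip += 2;          /* CPUID is 2 bytes (0F A2): skip it */
    }

    int main(void) {
        vcpu_t vcpu = { .eax = 1, .rip = 0x1000 };
        handle_cpuid_exit(&vcpu);
        printf("guest sees eax=0x%08x, rip=0x%llx\n",
               vcpu.eax, (unsigned long long)vcpu.rip);
        return 0;
    }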
In the para-virtualization case, you replace critical instructions with calls to the hypervisor, which emulates the instruction on behalf of the guest. So again, you emulate the cpuid instruction to "fake" the CPU type. Keep in mind that this requires a modified, hence para-virtualized, guest operating system.
Finally, dynamic binary translation scans the guest code for critical instructions at runtime and either replaces them with traps into the hypervisor, achieving a kind of "live para-virtualization", or translates blocks of guest code into equivalent blocks of host code that modify the VM state according to the original guest code (this is, for example, how the QEMU full-system emulator works). As a result, you are again able to "fake" the CPU type by emulating the cpuid instruction. Notice that guest and host can be the same architecture in this case, but they need not be.
Of course, a combination of the above techniques is also feasible.
As for virtualization of main memory, the hypervisor is in full control of the hardware, so you can simply configure a VM with just 64 MB of main memory. The VM is not able to "see" more than this, due to the techniques briefly discussed above.
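For example, in VMware products the guest's memory size is a single setting in the VM's .vmx configuration file; I believe the key below is the standard one, and with this value the guest firmware and OS never see more than 64 MB:

    memsize = "64"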
Please keep in mind that this is just a very short overview of virtualization; I tried to keep it short and informative, so I know my explanations are partially not very accurate. If you are really interested in virtualization, I recommend reading "Virtual Machines: Versatile Platforms for Systems and Processes", the papers on the topic by Popek and Goldberg, and "Xen and the Art of Virtualization".
I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for some time (an hour or so), then unexpectedly crashes. I get an ERR_CONNECTION_REFUSED error in Chrome, and when running htop I can see that all of the Tomcat processes have stopped. To get it running again, I just have to restart Tomcat.
I have a message saying "Instance "guac" is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory)" in the compute engine console.
I have tried limiting the memory allocation to tomcat, but that didn't seem to work.
Any suggestions?
I think the reason for the ERR_CONNECTION_REFUSED is likely the VM instance running short on resources; to keep the OS up, the kernel's out-of-memory (OOM) killer shuts down some processes, Tomcat apparently among them, and once you reboot the VM everything resumes full operation.
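You can usually confirm whether the OOM killer is responsible from the guest's own logs. A quick check on CentOS 7 (the log path assumes the stock rsyslog setup):

    # OOM kills leave traces in the kernel ring buffer and in syslog.
    dmesg | grep -iE "out of memory|oom"
    sudo grep -i oom /var/log/messages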
As per the "over-utilization" notification recommending g1-small (1 vCPU, 1.7 GB memory)", please note that, f1-micro is a shared-core micro machine type with 0.2 vCPU, 0.60 GB of memory, backed by a shared physical core and is only ideal for running smaller non-resource intensive applications..
Depending on your Tomcat configuration, also note that:
Connecting to a database is an intensive process.
When creating a Tomcat deployment through the Google Marketplace, the default VM setting is 1 vCPU + 3.75 GB memory (n1-standard-1), so the recommended upgrade to machine type g1-small (1 vCPU, 1.7 GB memory) is still modest by comparison and should be suitable in your case.
Why was the g1-small machine type recommended? Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to determine what recommendations to make. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
As a result, applications with short usage spikes might need to run on a larger machine type than the one recommended by Google, to accommodate those spikes.
In summary, my suggestion would be to upgrade as recommended. Also note that rightsizing warns when a VM is underutilized or overutilized; in this case it is recommending a larger VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
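Since you mentioned that limiting Tomcat's memory allocation didn't seem to work, it may be worth double-checking where the limit is set. Tomcat's startup script (catalina.sh) sources bin/setenv.sh if it exists; a minimal sketch, with heap numbers that are only a guess for an f1-micro:

    # $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh on startup.
    # Cap the JVM heap well below the f1-micro's ~600 MB of RAM so the
    # Java process is a less likely target when memory runs out.
    export CATALINA_OPTS="-Xms64m -Xmx256m"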
I am going to be working with a few outside developers on some ASP.NET projects and wanted to set up a co-located server for version control, testing, and staging client sites until they are ready to deploy.
I already have the ISP and have a 10 megabit connection burstable to 100, so I don't think bandwidth is going to be an issue.
My question is, what specs should the server itself have? I was thinking of getting a Dell server with the following specs:
Dual Core Intel Pentium E2180, 2.0GHz, 1MB Cache, 800MHz FSB
4GB, DDR2, 800MHz, 4x1GB,Dual Ranked DIMM
RAID 1 160GB 7.2K RPM SATA 3Gbps hard drives
Windows Server 2008
Will this suffice?
If the project isn't too big, that looks fine. My experience with version control systems on large projects is that memory tends to be the biggest bottleneck. I'd make sure you can upgrade to 8 GB RAM if the project is going to be large.
What information have you been able to gather about how Amazon Web Services works?
What hardware do they use
What web server
What Operating System
What storage for AWS
What virtualization software for EC2/EBS
What software for their distributed firewall for EC2
Physical location of their data centers.
I like their services very much and use them a ton at work... this is just out of curiosity. If you know/heard/read something and want to tell, or saw something online and want to provide a link, it would be very appreciated.
This might be interesting: http://highscalability.com/amazon-architecture
While this question can't be answered with precision, I'll try to shed some light on the internal workings that Amazon has announced publicly.
Below are some details for the commonly used C and M instance types, as well as the recently released bare metal instances. This can be a starting point for further research, as the specifics are far beyond a single answer on SO.
Compute hardware
If you want to take a deep dive, I suggest going through all previous-generation and current-generation instance types. The underlying hardware can be found on those pages.
Bare metal instances
Bare metal instances became GA in April 2018. One detail: i3.metal instances are powered by 2.3 GHz Intel Xeon processors, offering 36 hyper-threaded cores (72 logical processors), 512 GiB of memory, and 15.2 TB of NVMe SSD-backed instance storage. More info
Compute optimized instances (C)
The latest c5 generation (late 2017) uses 3.0 GHz Intel Xeon Platinum 8000-series processors. More info here
The c4 generation uses the Intel Xeon E5-2666 v3 (code name Haswell), a processor optimized specifically for EC2. More info here
The c3 generation introduced SSD instance storage and used the 2.8 GHz Intel Xeon E5-2680 v2. More info here
General purpose ec2 instances (M)
m5 instances are based on custom Intel Xeon Platinum 8175M processors running at 2.5 GHz, most likely on the Nitro hypervisor mentioned below. More info
m4 instances were released back in 2015 and use a custom Intel Xeon E5-2676 v3 (Haswell) processor optimized specifically for EC2. More info
m3 instances were released in 2012 and, as some may remember, brought a price reduction with them, making AWS more appealing to budget-conscious users. They use the Intel Xeon E5-2670 processor and introduced SSD instance storage.
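If you want to check the hardware details of any instance type yourself, recent versions of the AWS CLI can list them; a sketch, assuming configured credentials (field names follow the EC2 DescribeInstanceTypes API):

    # vCPU count, memory, and sustained clock speed per instance type.
    aws ec2 describe-instance-types \
      --instance-types c5.large m5.large i3.metal \
      --query 'InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB,ProcessorInfo.SustainedClockSpeedInGhz]' \
      --output table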
What web server
I've seen error pages from their web UI (the AWS Console) rendered by Tomcat a couple of times, so I would guess that is what the console runs on.
What virtualization software for EC2/EBS
AWS recently announced (along with the c5 instance type) that they will start using a KVM-based hypervisor. The presentation linked here outlines the hypervisor history very well.
Physical location of data centers
This is not (and, for security reasons, should not be) disclosed publicly. There are always rumors and some sources about it (see the related Quora thread).
You can use Linux or Windows instances in Amazon AWS: first you launch an instance, then you select its operating system. For storage, they have a service called S3; it is a storage service in which you can save files of any format. They have many data center locations; depending on where you live, you should select the nearest data center to work with its services, so that your bill stays lower.
You can go to console.aws.amazon.com and find lots of documentation for each service in the help menu.