I have been trying to use virsh attach-disk to attach a qcow2 file as an additional storage source. The syntax I am using is (found on the internet):
virsh attach-disk --driver file vm2 disk2.qcow2 hdc
If the VM is running or paused, it shows:
error: this function is not supported by the hypervisor: disk bus 'ide' cannot be hotplugged.
If the VM is shut down, it shows:
error: Requested operation is not valid: cannot attach device on inactive domain
I am not sure about the hdc parameter. I have also tried the attach-device function with an XML file:
<disk type="file" device="disk">
<driver name="file"/>
<source file="/gfs1/disk2.qcow2"/>
<target dev="hdc"/>
</disk>
But this also shows:
error: Failed to attach device from /gfs1/disk2tovm2.xml
error: this function is not supported by the hypervisor: disk bus 'ide' cannot be hotplugged.
I looked at many examples, but none of them worked, and all had almost the same syntax. Could someone help me figure out the error?
COMPLETE CONFIGURATION FILE OF VM
root@blade1:/vms# virsh dumpxml vm2
<domain type='kvm' id='33'>
<name>vm2</name>
<uuid>70affd5d-af95-72c5-2d96-c131f46409b6</uuid>
<description>--autostart</description>
<memory>1048576</memory>
<currentMemory>1048576</currentMemory>
<vcpu>2</vcpu>
<os>
<type arch='i686' machine='pc-0.14'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/vms/vm2.qcow2'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<interface type='bridge'>
<mac address='52:54:00:5e:98:e4'/>
<source bridge='br0'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/0'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/0'>
<source path='/dev/pts/0'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='6900' autoport='no' listen='0.0.0.0'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='apparmor'>
<label>libvirt-70affd5d-af95-72c5-2d96-c131f46409b6</label>
<imagelabel>libvirt-70affd5d-af95-72c5-2d96-c131f46409b6</imagelabel>
</seclabel>
</domain>
The reason this doesn't work is that you are trying to attach the disk to a
running domain using the IDE architecture.
Imagine you have a real physical server: can you open it up while it is running
and plug in an IDE drive? No, the architecture does not support it. KVM/QEMU,
which emulates this architecture, must therefore report the error you are
seeing: "disk bus 'ide' cannot be hotplugged."
One solution is to attach the disk using the SCSI architecture, which supports hot plugging. The command you would use is:
virsh attach-disk --driver file vm2 disk2.qcow2 sdc
The only change is "sdc" instead of "hdc". This hints to KVM/QEMU that you want SCSI instead of IDE, and it will attach the disk.
Also, when the domain is stopped, you can't use attach-disk because this function is meant for running domains.
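On newer libvirt versions (a sketch; whether these options exist depends on your libvirt build), you can also attach the image on the hot-pluggable virtio bus, state the image format explicitly, and use --config to write the disk into the stored definition so that it shows up on the next boot:
virsh attach-disk vm2 /gfs1/disk2.qcow2 vdb --driver qemu --subdriver qcow2 --targetbus virtio --live
virsh attach-disk vm2 /gfs1/disk2.qcow2 vdb --driver qemu --subdriver qcow2 --targetbus virtio --config
The first form hot-plugs the disk into the running guest; the second only updates the persistent XML.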
I'm a newbie to this site, but I just wanted to comment that I have succeeded in uploading an ISO / hot-swapping an image on libvirtd.
My command is as follows:
virsh attach-disk $srvkvmname /<datastorename>/tsgboot.iso hdb --driver qemu --type cdrom --mode readonly
I know that omitting the driver, the type, and so on causes the above error.
Related
I'm trying to put in place a kiosk on a Surface Go using the following AssignedAccess.xml file in my provisioning package:
<?xml version="1.0" encoding="utf-8" ?>
<AssignedAccessConfiguration
xmlns="https://schemas.microsoft.com/AssignedAccess/2017/config"
xmlns:r1809="https://schemas.microsoft.com/AssignedAccess/201810/config"
>
<Profiles>
<Profile Id="{f46cfb9f-044f-4d96-bb33-ea1c1c18a354}">
<AllAppsList>
<AllowedApps>
<App AppUserModelId="Microsoft.Windows.Explorer" r1809:AutoLaunch="true" />
<App AppUserModelId="Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" />
<App DesktopAppPath="C:\Program Files\SumatraPDF\SumatraPDF.exe" />
</AllowedApps>
</AllAppsList>
<r1809:FileExplorerNamespaceRestrictions>
<r1809:AllowedNamespace Name="Downloads" />
</r1809:FileExplorerNamespaceRestrictions>
<StartLayout>
<![CDATA[<LayoutModificationTemplate xmlns:defaultlayout="http://schemas.microsoft.com/Start/2014/FullDefaultLayout" xmlns:start="http://schemas.microsoft.com/Start/2014/StartLayout" Version="1" xmlns="http://schemas.microsoft.com/Start/2014/LayoutModification">
<LayoutOptions StartTileGroupCellWidth="6" />
<DefaultLayoutOverride>
<StartLayoutCollection>
<defaultlayout:StartLayout GroupCellWidth="6">
<start:Group Name="Apps">
<start:Tile Size="4x2" Column="0" Row="2" AppUserModelID="Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" />
<start:DesktopApplicationTile Size="2x2" Column="0" Row="0" DesktopApplicationLinkPath="%APPDATA%\Microsoft\Windows\Start Menu\Programs\SumatraPDF.lnk" />
<start:DesktopApplicationTile Size="2x2" Column="2" Row="0" DesktopApplicationLinkPath="%APPDATA%\Microsoft\Windows\Start Menu\Programs\System Tools\File Explorer.lnk" />
</start:Group>
</defaultlayout:StartLayout>
</StartLayoutCollection>
</DefaultLayoutOverride>
</LayoutModificationTemplate>
]]>
</StartLayout>
<Taskbar ShowTaskbar="false" />
</Profile>
</Profiles>
<Configs>
<Config>
<Account>CouncilKiosk</Account>
<DefaultProfile Id="{f46cfb9f-044f-4d96-bb33-ea1c1c18a354}"/>
</Config>
</Configs>
</AssignedAccessConfiguration>
I took a look at the logs, and the recurring error code is '0xC00CE223'. According to my research, this is telling me "Validate failed because the document does not contain exactly one root node" (XML DOM Error Messages doc). I'm not sure where this is going wrong.
The provisioning package is also setting 2 user accounts (local admin and local user), hiding OOBE, enabling tablet mode as default, and running a provisioning command script that installs a single application and sets registry keys necessary for autologin.
UPDATE: I re-imaged the Surface Go with Windows 10 Pro and it still fails. But now I get error '0x8000FFFF', which appears to be related to Windows Update and the Windows Store. I only have one USB port on this device, so it isn't connected to the internet at this time.
UPDATE 2: I re-imaged with a more up-to-date ISO of Windows 10 Pro and I'm back to the original errors listed above. I have updated the XML file and changed the tag prefix as well as the xmlns from rs5 to r1809. I am not seeing any changes, and this continues to be a frustrating problem.
Try changing this:
https://schemas.microsoft.com/AssignedAccess/2017/config
to the following:
http://schemas.microsoft.com/AssignedAccess/2017/config
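In other words, the opening element would become the following (the r1809 namespace declaration presumably needs the same http scheme; the rest of the file is unchanged):
<AssignedAccessConfiguration
xmlns="http://schemas.microsoft.com/AssignedAccess/2017/config"
xmlns:r1809="http://schemas.microsoft.com/AssignedAccess/201810/config"
>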
I have an Oozie workflow which calls a Sqoop and a Hive action. This individual workflow works fine when I run Oozie from the command line.
Since the Sqoop and Hive scripts vary, I pass the values to the workflow.xml using a job.properties file.
sudo oozie job -oozie http://hostname:port/oozie -config job.properties -run
Now I want to configure this Oozie workflow in Falcon. Can you please help me figure out where I can configure or pass the job.properties?
Below is the Falcon process.xml:
<process name="demoProcess" xmlns="uri:falcon:process:0.1">
<tags>pipeline=degIngestDataPipeline,owner=hadoop, externalSystem=svServers</tags>
<clusters>
<cluster name="demoCluster">
<validity start="2015-01-30T00:00Z" end="2016-02-28T00:00Z"/>
</cluster>
</clusters>
<parallel>1</parallel>
<order>FIFO</order>
<frequency>hours(1)</frequency>
<outputs>
<output name="output" feed="demoFeed" instance="now(0,0)" />
</outputs>
<workflow name="dev-wf" version="0.2.07"
engine="oozie" path="/apps/demo/workflow.xml" />
<retry policy="periodic" delay="minutes(15)" attempts="3" />
</process>
I could not find much help on the web or in the Falcon documentation regarding this.
I have done some development in Falcon but have not used vanilla Falcon much; however, from what I understand from the tutorial below:
http://hortonworks.com/hadoop-tutorial/defining-processing-data-end-end-data-pipeline-apache-falcon/
I would try creating the Oozie workflow.xml so that it accepts the job.properties dynamically. Place the properties file in the HDFS folder where the workflow.xml picks it up, and you can change it for every process. Then you can use your Falcon process.xml and submit it from the command line using:
falcon entity -type process -submit -file process.xml
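For illustration, the job.properties placed alongside the workflow might look something like this (the property names below are assumptions; use whatever names your workflow.xml actually references):
nameNode=hdfs://namenode-host:8020
jobTracker=jobtracker-host:8021
queueName=default
oozie.use.system.libpath=true
# hypothetical values consumed by the Sqoop and Hive actions
sqoopTargetDir=/apps/demo/staging
hiveScript=/apps/demo/scripts/load.hql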
Also, in path=/apps/demo/workflow.xml you need not specify the workflow.xml explicitly; you can just give the folder name, for example:
<process name="rawEmailIngestProcess" xmlns="uri:falcon:process:0.1">
<tags>pipeline=churnAnalysisDataPipeline,owner=ETLGroup,externalSystem=USWestEmailServers</tags>
<clusters>
<cluster name="primaryCluster">
<validity start="2014-02-28T00:00Z" end="2016-03-31T00:00Z"/>
</cluster>
</clusters>
<parallel>1</parallel>
<order>FIFO</order>
<frequency>hours(1)</frequency>
<outputs>
<output name="output" feed="rawEmailFeed" instance="now(0,0)" />
</outputs>
<workflow name="emailIngestWorkflow" version="2.0.0"
engine="oozie" path="/user/ambari-qa/falcon/demo/apps/ingest/fs" />
<retry policy="periodic" delay="minutes(15)" attempts="3" />
</process>
On second thought, you could create an Oozie workflow with a shell action that calls sqoop_hive.sh, which has the following line of code in it:
sudo oozie job -oozie http://hostname:port/oozie -config job.properties -run
The workflow.xml looks like:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>sqoop_hive.sh</exec>
<argument>${feedInstancePaths}</argument>
<file>${wf:appPath()}/sqoop_hive.sh#sqoop_hive.sh</file>
<!-- <file>/tmp/ingest.sh#ingest.sh</file> -->
<!-- <capture-output/> -->
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
Call this with a Falcon process submission like:
falcon entity -type process -submit -file process.xml
The job.properties can then be changed locally, because the shell action in Oozie invokes the Oozie command line from within the shell script.
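For completeness, a minimal sqoop_hive.sh along those lines might look like this (the properties path and argument handling are assumptions, not values from the original post):
#!/bin/bash
# Hypothetical wrapper invoked by the Oozie shell action
FEED_INSTANCE_PATH=$1
# Launch the existing workflow with a locally maintained job.properties
oozie job -oozie http://hostname:port/oozie -config /home/hadoop/job.properties -run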
In ROS, how do I start a node from the terminal? For example, I'm looking to start the node /camera/camera_nodelet_manager, but I have no idea how. Do I use rosrun, and if so, in what way?
Generally, rosrun [package_name] [node_name] does the job. However, in many cases a node requires command-line arguments or parameters passed via *.launch (XML) files.
You can start a node with a launch file by executing roslaunch [package_name] [launch_file_name].
Pressing Tab-Tab after roslaunch [package_name] will list all launch files within the package.
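For example, with the turtlesim package installed (an assumption for the sake of illustration), the two commands look like this:
rosrun turtlesim turtlesim_node
roslaunch turtlesim multisim.launch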
For your specific case, it seems like you're working with openni_launch based on the manager name /camera/camera_nodelet_manager. If you just want to get openni_launch going, you can do
roslaunch openni_launch openni.launch
The nodelet manager is the nodelet executable from the nodelet package. You can figure that out by looking at the openni.launch file:
<!-- Start nodelet manager in top-level namespace -->
<arg name="manager" value="$(arg camera)_nodelet_manager" />
<arg name="debug" default="false" /> <!-- Run manager in GDB? -->
<include file="$(find rgbd_launch)/launch/includes/manager.launch.xml">
<arg name="name" value="$(arg manager)" />
<arg name="debug" value="$(arg debug)" />
<arg name="num_worker_threads" value="$(arg num_worker_threads)" />
</include>
It is launching a second launch file in the package rgbd_launch that starts the nodelet executable:
<!-- Nodelet manager -->
<node pkg="nodelet" type="nodelet" name="$(arg name)" args="manager"
output="screen" launch-prefix="$(arg launch_prefix)">
<param name="num_worker_threads" value="$(arg num_worker_threads)" />
</node>
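If you really only want to start the manager by hand, a roughly equivalent rosrun invocation would be the following (the node name and namespace are inferred from the launch files above):
rosrun nodelet nodelet manager __name:=camera_nodelet_manager __ns:=/camera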
But in the general case, the suggestions of @cassinaj are good: roslaunch and rosrun are the command-line tools ROS provides for starting executable code.
I use Sonar 4.1.1, JBoss 6.x, and JaCoCo 0.6.4, and I execute tasks with Ant; I am not allowed to use Maven. In an Eclipse workspace I have two projects: one is the web application, the other is the Selenium test.
I am able to get unit test results and code coverage for the unit tests, but Sonar is not able to read the integration test file created by JaCoCo. I think there might be something wrong with the way I create the jacoco-it.exec file, so Sonar can't read it, because Sonar does read my jacoco-ut.exec file; in fact, I can point both reportPath and itReportPath at jacoco-ut.exec with no problem. It may also be something wrong in my build file. I did a lot of research and tried many different ways to create the jacoco-it.exec file, different JaCoCo settings, and examples from Sonar, JaCoCo, and other blogs, but it still doesn't work. I must be missing something. Help! Thanks!
I have VM arguments for JBoss like this:
-javaagent:/path to jar/jacocoagent.jar=destfile=/path for create/jacoco-it.exec
When I run Selenium, the above agent creates a file with some data, about 1.3 MB in size.
Here is the part of the build file related to this issue:
<property name="sonar.sourceEncoding" value="UTF-8" />
<property name="sonar.java.coveragePlugin" value="jacoco" />
<property name="sonar.core.codeCoveragePlugin" value="jacoco" />
<property name="sonar.dynamicAnalysis" value="reuseReports" />
<property name="sonar.jacoco.reportsPath" value="${reports.dir}/junit" />
<property name="sonar.jacoco.itReportPath" value="${reports.dir}/jacoco-it.exec" />
<property name="sonar.jacoco.reportPath" value="${reports.dir}/jacoco-ut.exec" />
<target name="unitTest" depends="compile">
<taskdef name="junit" classname="org.apache.tools.ant.taskdefs.optional.junit.JUnitTask">
<classpath>
<path refid="classpath"/>
</classpath>
</taskdef>
<!-- Import the JaCoCo Ant Task -->
<taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
<classpath refid="classpath"/>
</taskdef>
<!-- Run your unit tests, adding the JaCoCo agent -->
<jacoco:coverage destfile="reports/jacoco-ut.exec" xmlns:jacoco="antlib:org.jacoco.ant">
<junit printsummary="yes" haltonfailure="yes" forkmode="once" fork="true" dir="${basedir}" failureProperty="test.failed">
<classpath location="${classes.dir}" />
<classpath refid="classpath"/>
<formatter type="plain" />
<formatter type="xml" />
<batchtest fork="true" todir="${reports.junit.xml.dir}">
<fileset dir="src">
<include name="**/*TestAdd.java" />
</fileset>
</batchtest>
</junit>
</jacoco:coverage>
</target>
<target name="coverageTest" depends="compile">
<taskdef name="junit" classname="org.apache.tools.ant.taskdefs.optional.junit.JUnitTask">
<classpath>
<path refid="classpath"/>
</classpath>
</taskdef>
<taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
<classpath refid="classpath"/>
</taskdef>
<!--Run your unit tests, adding the JaCoCo agent-->
<jacoco:coverage xmlns:jacoco="antlib:org.jacoco.ant" dumponexit="true" >
<junit printsummary="yes" haltonfailure="yes" forkmode="once" fork="true" dir="${basedir}" failureProperty="test.failed">
<classpath location="${classes.dir}"/>
<classpath refid="classpath"/>
<formatter type="plain" />
<formatter type="xml" />
<formatter type="plain" usefile="false"/>
<batchtest todir="${reports.junit.xml.dir}">
<fileset dir="../HelloAppTest/src">
<include name="**/answerTest.java"/>
</fileset>
</batchtest>
</junit>
</jacoco:coverage>
</target>
The reason for that is probably that you are NOT attaching the jacocoagent.jar file to the "target" (e.g. JBoss / Tomcat) JVM and then stopping that JVM so that it can flush the final code coverage data to the jacoco-it.exec file.
Once you attach the agent there (instead of to Maven/Ant's JVM), run your non-unit (IT) tests and then STOP the target JVM.
After the target JVM is stopped, you'll get the final JaCoCo .exec file generated for the IT tests. Use that file for the sonar.jacoco.itReportPath property and it'll work.
For example, I pass this variable to Tomcat's startup.sh script, and while starting Tomcat (the target JVM), I use it within Tomcat's actual start command:
PROJ_EXTRA_JVM_OPTS=-javaagent:tomcat/jacocoagent.jar=destfile=build/jacoco/IT/jacocoIT.exec,append=false
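As a sketch (the exact wiring below is an assumption about the customized startup.sh, not taken from the original scripts), the startup script folds that option into the JVM options before starting Catalina:
# inside the customized startup.sh (or setenv.sh)
PROJ_EXTRA_JVM_OPTS=-javaagent:tomcat/jacocoagent.jar=destfile=build/jacoco/IT/jacocoIT.exec,append=false
export JAVA_OPTS="$JAVA_OPTS $PROJ_EXTRA_JVM_OPTS"
# Catalina picks up JAVA_OPTS when the target JVM starts; stopping Tomcat later flushes jacocoIT.exec to disk.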
I have to create a directory in HDFS using the ssh action in Oozie.
My sample workflow is:
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
<action name="testjob">
<ssh>
<host>name@host</host>
<command>mkdir</command>
<args>hdfs://host/user/xyz/</args>
</ssh>
<ok to="end"/>
<error to="fail"/>
</action>
</workflow-app>
I am getting an error during execution.
Can anybody please tell me what I am missing here?
You cannot make a directory in HDFS using the *nix mkdir command. The usage you have shown will try to execute the mkdir command on the local file system, whereas you want to create a directory in HDFS.
Quoting the Oozie documentation at http://oozie.apache.org/docs/3.3.0/DG_SshActionExtension.html, it states:
The shell command is executed in the home directory of the specified user in the remote host.
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
<action name="testjob">
<ssh>
<host>name@host</host>
<command>/usr/bin/hadoop</command>
<args>dfs</args>
<args>-mkdir</args>
<args>NAME_OF_THE_DIRECTORY_YOU_WANT_TO_CREATE</args>
</ssh>
<ok to="end"/>
<error to="fail"/>
</action>
</workflow-app>
The above code depends upon the path to your hadoop binary.
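As a quick sanity check (assuming you can ssh to that remote host manually; the directory name below is just an example), you can confirm the binary path and the command before wiring them into the action:
which hadoop
/usr/bin/hadoop dfs -mkdir /user/xyz/newdir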