How to use the QEMU that I compiled with my libvirt? qxl error

I've been trying to run a QEMU that I compiled myself. It works if I use plain qemu commands, but things go wrong when I substitute my build for the system QEMU in libvirt.
Using virsh edit VM_name, you can edit the domain configuration. I changed
<emulator>/usr/bin/qemu-system-x86_64</emulator>
into
<emulator>/home/user/Documents/qemu/build/x86_64-softmmu/qemu-system-x86_64</emulator>
You end up with something like the following:
<devices>
<emulator>/home/user/Documents/qemu/build/x86_64-softmmu/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' discard='unmap'/>
<source file='/var/lib/libvirt/images/ubuntu20.04.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
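For illustration, the edit is applied with virsh and takes effect the next time the domain starts (the domain name ubuntu20.04 below is only a guess based on the disk image path above):
virsh edit ubuntu20.04       # change the <emulator> path and save
virsh shutdown ubuntu20.04   # if the domain is currently running
virsh start ubuntu20.04      # the new emulator binary is picked up here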
At first I got a permission-denied error, and I followed this question:
https://unix.stackexchange.com/questions/471345/changing-libvirt-emulator-permission-denied
After I changed the AppArmor configuration (adding "/home/user/Documents/qemu/build/x86_64-softmmu/qemu-system-x86_64 PUx,"), I get this:
"error: unsupported configuration: domain configuration does not support video model 'qxl'
Failed. Try again? [y,n,i,f,?]:"
My system is Ubuntu 22.04 LTS, the libvirt version is 8.0.0 (installed via apt-get), and the system QEMU is 6.2.0 (installed via apt-get). The QEMU I compiled is from https://gitlab.com/virtio-fs/qemu.git, branch virtio-fs-dev (version QEMU 7.0.0).
Are there any features that I forgot to enable when compiling QEMU? Or are there any tutorials on using a self-compiled QEMU with libvirt? Thank you!
Update
I found that the qxl problem above goes away if I switch to an older video model, but I still get the permission-denied problem.
This is what I get from "dmesg":
[12227.019203] virbr0: port 2(vnet5) entered blocking state
[12227.019211] virbr0: port 2(vnet5) entered disabled state
[12227.019390] device vnet5 entered promiscuous mode
[12227.019765] virbr0: port 2(vnet5) entered blocking state
[12227.019769] virbr0: port 2(vnet5) entered listening state
[12227.179234] audit: type=1400 audit(1654891381.082:487): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-16b54d8a-be5c-4530-a8d8-ebc1bd02481d" pid=317483 comm="apparmor_parser"
[12227.233741] audit: type=1400 audit(1654891381.134:488): apparmor="DENIED" operation="exec" profile="libvirt-16b54d8a-be5c-4530-a8d8-ebc1bd02481d" name="/home/xlf/Documents/qemu/build/qemu-system-x86_64" pid=317485 comm="rpc-libvirtd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
[12227.256173] virbr0: port 2(vnet5) entered disabled state
[12227.257559] device vnet5 left promiscuous mode
[12227.257569] virbr0: port 2(vnet5) entered disabled state
[12227.482556] audit: type=1400 audit(1654891381.386:489): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="libvirt-16b54d8a-be5c-4530-a8d8-ebc1bd02481d" pid=317503 comm="apparmor_parser"
I've already added "/home/user/Documents/qemu/build/x86_64-softmmu/qemu-system-x86_64 PUx," to /etc/apparmor.d/usr.sbin.libvirtd and executed systemctl reload apparmor.
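For reference, the change described above amounts to something like this (a sketch assuming the stock Ubuntu profile layout; note that the path in the rule has to match the binary path shown in the AppArmor denial exactly):
# added to /etc/apparmor.d/usr.sbin.libvirtd
/home/user/Documents/qemu/build/x86_64-softmmu/qemu-system-x86_64 PUx,
# reload the profile afterwards
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd
sudo systemctl reload apparmor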
Are there any solutions? Thanks.

Related

Is there a way to dynamically define and register new Dgraphs in Endeca

As far as my knowledge of Endeca goes, any time you want to add a new dgraph definition in your Endeca configuration, you have to run initializeServices.sh to set the updated configuration on EAC.
I was wondering if there is any way I can do that without running initializeServices.sh (since it does a lot more than just update the list of Dgraphs registered in EAC, and I want to avoid that).
I found that ./runcommand.sh --update-definition lets you make configuration changes to a Dgraph that has already been registered with EAC, but if I add a new Dgraph to the config and run the command, it fails with the error below:
[11.17.16 16:00:07] INFO: Setting definition for host 'MDEXLiveHost2'.
[11.17.16 16:00:07] SEVERE: Caught an exception while checking provisioning
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host setDefinition - Caught exception while setting host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
I can't find any detailed logs of this error being generated anywhere in PlatformServices logs to further debug.
I could, however, see in the request log that /eac/ProvisioningService returned an HTTP 500, which leads me to believe that the script is trying to find the current configuration of MDEXLiveHost2 and cannot find it.
EDITED TO ADD: configuration for:
New host:
<host id="MDEXLiveHost2" hostName="${mdexLive.host2}" port="${mdexLive.eac.port}" useSsl="false" />
New Dgraph:
<dgraph id="DgraphLive2" host-id="MDEXLiveHost2" port="${dgraphLive1.port}"
post-startup-script="LiveDgraphPostStartup">
<properties>
<property name="restartGroup" value="A" />
<property name="updateGroup" value="a" />
<property name="DgraphContentGroup" value="Live" />
</properties>
<log-dir>./logs/dgraphs/DgraphLive</log-dir>
<input-dir>./data/dgraphs/DgraphLive/dgraph_input</input-dir>
<update-dir>./data/dgraphs/DgraphLive/dgraph_input/updates</update-dir>
</dgraph>
EDITED TO ADD: errors after manually adding the host using eaccmd.sh
Host definition file:
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
The host is added successfully (validated via describe-app)
$./eaccmd.sh describe-app --app myapp | grep MDEXLiveHost2
<host host-name="172.18.0.7" port="9999" host-id="MDEXLiveHost2" useSsl="false">
But, running any command I get this error:
[11.18.16 11:00:58] INFO: Updating provisioning for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] INFO: Host name of host 'MDEXLiveHost2' has changed from 172.18.0.7 to 172.18.0.7 . Components on this host will be re-provisioned.
[11.18.16 11:00:58] INFO: Updating definition for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] SEVERE: Caught an exception while checking provisioning.
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host updateEacDefinition - Caught exception while updating host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
If only this error could be made more verbose, that might give some help.
You do not have to run initializeServices.sh for every configuration change you make. When you execute the other scripts in the control folder, they first check whether there are any configuration changes and apply them.
As far as the error is concerned, I suspect you either didn't specify MDEXLiveHost2 in your LiveDGraphCluster.xml or the host you did specify is not reachable. Verify your configuration.
Lastly, your approach of dynamically adding more Dgraphs to the cluster is not standard practice. When you configure your environment, you should run a load test using ENEPerf to simulate the load and then create as many Dgraphs and hosts as required. If you do add hosts and Dgraphs dynamically, you also need to make sure you add them, dynamically, to your load balancer configuration as well.
My first guess was that maybe MDEXLiveHost2 didn't have Platform Services/MDEX installed and Platform Services running, but it may be that the port you specified is incorrect.
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
Is your EAC port 9999 rather than 8888 (the out-of-the-box value)? If it is 9999 on your ITL server, make sure it is also set to 9999 on your new Dgraph server.
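As a quick sanity check of which port the EAC agent on the new host is actually listening on, plain shell commands like these can help (not Endeca tooling; the IP and ports are the ones from the question):
# on the new MDEX host
netstat -tlnp | grep -E ':8888|:9999'
# from the ITL server, confirm the port is reachable
nc -vz 172.18.0.7 8888
nc -vz 172.18.0.7 9999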

Error running topology in production cluster with Apache Storm 1.0.0, topology does not start

I have a topology that runs well on a local cluster.
But when I try to run it on a production cluster, the following happens:
The nimbus is up
The storm UI is up
The two workers I use are up
Zookeeper is up
I run storm with
storm jar myjar.jar MyClass
Nimbus submits the topology
The topology and the workers appear in the Storm UI
BUT:
The topology does not start despite the fact that its status is ACTIVE
The log file of the topology does not appear on the workers.
I see the following on the worker, in supervisor.log:
2016-04-15 13:18:19.831 o.a.s.d.supervisor [WARN] There was a connection problem with nimbus. #error {
:cause jobs-rec-storm-nimbus
:via
[{:type java.lang.RuntimeException
:message org.apache.storm.thrift.transport.TTransportException: java.net.UnknownHostException: jobs-rec-storm-nimbus
:at [org.apache.storm.security.auth.TBackoffConnect retryNext TBackoffConnect.java 64]}
{:type org.apache.storm.thrift.transport.TTransportException
:message java.net.UnknownHostException: jobs-rec-storm-nimbus
:at [org.apache.storm.thrift.transport.TSocket open TSocket.java 226]}
{:type java.net.UnknownHostException
:message jobs-rec-storm-nimbus
:at [java.net.AbstractPlainSocketImpl connect AbstractPlainSocketImpl.java 184]}]
:trace
[[java.net.AbstractPlainSocketImpl connect AbstractPlainSocketImpl.java 184]
[java.net.SocksSocketImpl connect SocksSocketImpl.java 392]
[java.net.Socket connect Socket.java 589]
[org.apache.storm.thrift.transport.TSocket open TSocket.java 221]
[org.apache.storm.thrift.transport.TFramedTransport open TFramedTransport.java 81]
[org.apache.storm.security.auth.SimpleTransportPlugin connect SimpleTransportPlugin.java 103]
[org.apache.storm.security.auth.TBackoffConnect doConnectWithRetry TBackoffConnect.java 53]
[org.apache.storm.security.auth.ThriftClient reconnect ThriftClient.java 99]
[org.apache.storm.security.auth.ThriftClient <init> ThriftClient.java 69]
[org.apache.storm.utils.NimbusClient <init> NimbusClient.java 106]
[org.apache.storm.utils.NimbusClient getConfiguredClientAs NimbusClient.java 78]
[org.apache.storm.utils.NimbusClient getConfiguredClient NimbusClient.java 41]
[org.apache.storm.blobstore.NimbusBlobStore prepare NimbusBlobStore.java 268]
[org.apache.storm.utils.Utils getClientBlobStoreForSupervisor Utils.java 462]
[org.apache.storm.daemon.supervisor$fn__9590 invoke supervisor.clj 942]
[clojure.lang.MultiFn invoke MultiFn.java 243]
[org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9351$fn__9369 invoke supervisor.clj 582]
[org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9351 invoke supervisor.clj 581]
[org.apache.storm.event$event_manager$fn__8903 invoke event.clj 40]
[clojure.lang.AFn run AFn.java 22]
[java.lang.Thread run Thread.java 745]]}
2016-04-15 13:18:19.831 o.a.s.d.supervisor [INFO] Finished downloading code for storm id jobs-KafkaMigration-topology-3-1460740616
2016-04-15 13:18:19.850 o.a.s.d.supervisor [INFO] Missing topology storm code, so can't launch worker with assignment ...(some more numbers)
So I assume that I have a connection problem with Nimbus, but the storm.yaml on the worker is:
storm.zookeeper.servers:
- "192.168.22.209"
- "192.168.22.216"
- "192.168.22.217"
storm.local.dir: "/app/home/storm"
storm.zookeeper.root: "/storm-prod"
#
nimbus.seeds: ["192.168.120.96"]
And if I ping the Nimbus IP from the workers, it responds OK.
Where is the error, and how can I fix it?
Thanks!
What appears to happen in this context is that the Storm supervisor resolves Nimbus from whatever is configured in the storm.yaml seeds/host the first time, and from then on uses the Nimbus host name to download the topology artifacts.
If that is correct, DNS (or equivalent name resolution) is mandatory for a cluster setup. This is far from ideal, especially when using containers in an orchestrated environment like Kubernetes.
The current workaround I'm using is adding
storm.local.hostname: "<local.ip.value>"
to storm.yaml.
Thanks to #bastien, who provided the tip on the Storm user mailing list.
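Putting that workaround together with the configuration from the question, the supervisor's storm.yaml would look roughly like this (a sketch; the last value is a placeholder for the node's own routable IP):
storm.zookeeper.servers:
- "192.168.22.209"
- "192.168.22.216"
- "192.168.22.217"
storm.zookeeper.root: "/storm-prod"
storm.local.dir: "/app/home/storm"
nimbus.seeds: ["192.168.120.96"]
storm.local.hostname: "<this node's reachable IP>"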
I ran into a similar issue. It turned out my firewall rules were blocking the supervisor ports. Make sure the supervisor and Nimbus are able to talk to each other.
I found that the hostnames of the boxes need to match what I was calling them in the /etc/hosts file.
In the hosts file I had
xxx.xxx.xxx.xxx nimbus
but the hostname on the box was different, and Storm was pulling the hostname from the OS.
Changing the hostname on the OS of the Nimbus server resolved my issue.
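For illustration, the kind of mapping described above would look like this (the hostname jobs-rec-storm-nimbus is taken from the UnknownHostException in the supervisor log, and the IP is the nimbus seed from storm.yaml, so both are assumptions about this particular setup):
# /etc/hosts on the supervisors and on the nimbus box itself
192.168.120.96   jobs-rec-storm-nimbus
# and the OS hostname on the nimbus box should match, e.g.
hostnamectl set-hostname jobs-rec-storm-nimbus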

java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out

I am trying to run a JBoss instance in domain mode. While doing so, I am getting the following issue:
[Host Controller] 12:45:56,535 WARN [org.jboss.as.host.controller] (Controller Boot Thread) JBAS010900: Could not connect to remote domain controller at remote://nnn.nn.nn.88:9999 -- java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out
I ran two JBoss instances in domain mode after configuring them as follows.
First JBoss instance->
./domain.sh -b nnn.nn.nn.88 -Djboss.bind.address.management=nnn.nn.nn.88
Second JBoss Instance ->
./domain.sh -b nnn.nn.nn.89 -Djboss.domain.master.address=nnn.nn.nn.88 --host-config=host-slave.xml
The host.xml configuration on nnn.nn.nn.88 is as follows:
<domain-controller>
<local/>
</domain-controller>
The host-slave.xml configuration on nnn.nn.nn.89 is as follows:
<domain-controller>
<remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
I am able to telnet to port 9999 on host nnn.nn.nn.88 from .89, since I configured the public and management interfaces to bind to the real IPs instead of the loopback address. Or is the issue implied by the fact that <domain-controller> contains <local/>?
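(For reference, binding those interfaces away from loopback is done in host.xml roughly like this; a sketch using the stock interface names, with the addresses coming from the -b and -Djboss.bind.address.management values above:)
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address.management:nnn.nn.nn.88}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address:nnn.nn.nn.88}"/>
</interface>
</interfaces>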
Please help me solve this issue. The JDK version is JDK 7 Update 80, and this is EAP 6.3.
In the HC's host.xml (or, if you use --host-config=host-slave.xml, in that particular file), the configuration has to point to the DC under the <domain-controller> node.
jboss.domain.master.address should be the Domain Controller address, nnn.nn.nn.88:
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
As per the following solution article from Red Hat:
https://access.redhat.com/solutions/218053#
I ran the following commands with the same configuration I had when posting this question, and it succeeded.
DC->
./domain.sh -b my-host-ip1 -bmanagement my-host-ip1
HC->
./domain.sh -Djboss.domain.master.address=my-host-ip1 -b my-host-ip2 -bmanagement my-host-ip2
I am not sure whether configuring things this way gives clustering capability to the DC and HCs. I raised that question with Red Hat on the same solution article; I hope the answer is yes.
https://access.redhat.com/solutions/218053#comment-975683

JBAS010153: Node identifier property is set to the default value. Please make sure it is unique

I am getting the following WARN message when I start my host, which is one of the Host Controllers (HC) attached to the Domain Controller (DC).
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
And my host-slave.xml has the following config...
<server-identities>
<!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
<secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config is the reason (maybe I have misunderstood), but I couldn't find a node identifier property; this is just the default secret value, which I guess could be the cause of the WARN message.
However, I didn't tell the HC to use host-slave.xml; the command I ran to start my HC is:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise what the cause of the WARN message is.
Also, please let me know the implications of this WARN message, and advise on the configuration required to address it and to avoid any consequences that could follow from it.
I'm new to WildFly and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html)
The fix was to add a node-identifier to the core-environment in the subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136
According to WFLY-10541, if you are using WildFly v14.0.0 or newer, you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
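For example, under the assumption above (WildFly 14 or newer), the property can simply be passed when starting the server; node-1 is just an illustrative value:
./bin/standalone.sh -Djboss.tx.node.id=node-1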
Building on #kaptan's answer, I added the following to the bottom of
bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add "-Djboss.tx.node.id=" when starting WildFly by hand.
For this, <server-identities> is not the issue. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, by default there are three servers: server-one, server-two and server-three. When you run one more HC attached to the DC, the default servers that are in auto-start mode will clash when the HC is started and attached to the DC, either with the following command:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by having the host configuration on the HC (the default host.xml, unless a different one is chosen) point to the DC:
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
To solve this, we need to set auto-start to false for the default servers and create a new server group. To that group we add the server created on the DC and the server created on the HC, choosing the same appropriate profile (either full-ha or full) for both servers across the DC and HC.
Then, when we start the group (after configuring the required heap size, including PermGen space), you can start both the DC and the HC, and on the DC you will see both of your created servers started in the new server group.
DC- Domain Controller
HC- Host Controller
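A rough sketch of the changes described above, with illustrative names (my-server-group and the server names are placeholders rather than values from the question):
<!-- domain.xml: a new server group on the full profile -->
<server-groups>
<server-group name="my-server-group" profile="full">
<socket-binding-group ref="full-sockets"/>
</server-group>
</server-groups>
<!-- host.xml on each host: disable the default auto-start servers and add one server to the new group -->
<servers>
<server name="server-one" group="main-server-group" auto-start="false"/>
<server name="my-created-server" group="my-server-group" auto-start="true"/>
</servers>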
To deploy, you need to upload the .ear or web archive in the Application Console. You cannot place it in the deployments folder the way you do in standalone mode with a .dodeploy marker file.
If you upload the next version of the same .ear, use the Replace option instead of the Remove & Add option in the upload process.

RavenDB on a port other than 8080

We are trying to install NServiceBus 4.2.0.0 with RavenDB via the following command:
nserviceBus.host.exe -install serviceName="xxxx.Server" -displayname="xxxx.Server" -username="domainName\serviceAccountName" -password="serviceAccountPassword"
NServiceBus seems to install, but the RavenDB install fails. Note that we are trying to install on a port other than 8080; as a result, we have placed the line
<add name="NServiceBus/Persistence" connectionString="Url = http://localhost:9090" />
...in our config
The error message we receive is:
[1] WARN NServiceBus.ConfigureRavenPersistence [(null)] <(null)> - Raven could not be contacted. We tried to access Raven using the following url: http://localhost:9090.
If I leave it at the default port (8080), everything installs correctly; however, I need to change the port because 8080 is already in use.
Does anyone have any ideas?
RavenDB installation is separate from NServiceBus host installation.
To install RavenDB, either follow the instructions on the RavenDB website or install the Raven server using the NServiceBus PowerShell cmdlets; see http://docs.particular.net/nservicebus/managing-nservicebus-using-powershell for instructions on how to load the cmdlets.
If you choose to use the cmdlets, you need to execute Install-NServiceBusRavenDB -Port 9090
If you just want to change the RavenDB port, you can do the following:
Note: The paths defined here are from the NServiceBus 4.3.2 installer with default paths
To download the installer, you can visit here: https://github.com/Particular/NServiceBus/releases/download/4.3.2/Particular.NServiceBus-4.3.2.exe
Launch the services management window (i.e. Run services.msc)
Stop service RavenDB
Navigate to the following path: "C:\Program Files\NServiceBus.Persistence.v4"
Edit the Raven.Server.exe.config: <add key="Raven/Port" value="<your port here>" />
Save the config
Start the service
Hit localhost on your new port
You should now be able to hit the RavenDB web on the new port!
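After that edit, the relevant entry in Raven.Server.exe.config would read, for example (9090 matching the connection string from the question):
<add key="Raven/Port" value="9090" />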
Maybe try changing local.config in the RavenDB folder:
<?xml version="1.0" encoding="utf-8"?>
<LocalConfig Port="9090" />
Then restart Raven.
To change the port of RavenDB:
IIS
Change the port in IIS :)
Here's where mine is set (under bindings).
Development Console
Find the config file of the RavenDB exe or dll (depending on whether you're running it as an IIS website, a Windows service, or just the console window).
<Root RavenDb folder>\Server\RavenDb.server.exe.config
Set the port manually in the file. Change the following:
<add key="Raven/Port" value="*"/>
to
<add key="Raven/Port" value="6969"/>
or whatever port you need/require.
Windows Service
No idea! I've never used it.
Good luck!