What to make of the output of QSG_RENDERER_DEBUG=render (QML)

When I set the environment variable QSG_RENDERER_DEBUG=render, I get the following output:
Renderer::render() QSGAbstractRenderer(0x383865f8) "rebuild: none"
Rendering:
-> Opaque: 38 nodes in 2 batches...
-> Alpha: 34 nodes in 13 batches...
- 0x3836a830 [retained] [noclip] [opaque] [ merged] Nodes: 14 Vertices: 88 Indices: 124 root: 0x0
- 0x3836a7f0 [ upload] [noclip] [opaque] [ merged] Nodes: 24 Vertices: 96 Indices: 144 root: 0x0
- 0x3836a8f0 [retained] [noclip] [ alpha] [unmerged] Nodes: 1 Vertices: 48 Indices: 74 root: 0x0
- 0x3836a870 [retained] [noclip] [ alpha] [unmerged] Nodes: 3 Vertices: 52 Indices: 78 root: 0x0
- 0x3836a8b0 [retained] [noclip] [ alpha] [unmerged] Nodes: 6 Vertices: 400 Indices: 720 root: 0x0
- 0x3836a530 [retained] [noclip] [ alpha] [unmerged] Nodes: 4 Vertices: 56 Indices: 84 root: 0x0
- 0x3836a570 [retained] [noclip] [ alpha] [ merged] Nodes: 1 Vertices: 4 Indices: 6 root: 0x0 opacity: 1
- 0x3836a5b0 [retained] [noclip] [ alpha] [unmerged] Nodes: 7 Vertices: 720 Indices: 1302 root: 0x0
- 0x3836a630 [retained] [noclip] [ alpha] [unmerged] Nodes: 3 Vertices: 28 Indices: 42 root: 0x0
- 0x3836a5f0 [retained] [noclip] [ alpha] [ merged] Nodes: 3 Vertices: 12 Indices: 18 root: 0x0 opacity: 1
- 0x3836a6f0 [retained] [noclip] [ alpha] [ merged] Nodes: 1 Vertices: 4 Indices: 6 root: 0x0 opacity: 1
- 0x3836a670 [retained] [noclip] [ alpha] [ merged] Nodes: 1 Vertices: 4 Indices: 6 root: 0x0 opacity: 1
- 0x3836a6b0 [retained] [noclip] [ alpha] [ merged] Nodes: 1 Vertices: 4 Indices: 6 root: 0x0 opacity: 1
- 0x3836a330 [retained] [noclip] [ alpha] [unmerged] Nodes: 2 Vertices: 24 Indices: 36 root: 0x0
- 0x3836a370 [retained] [noclip] [ alpha] [ merged] Nodes: 1 Vertices: 4 Indices: 6 root: 0x0 opacity: 1
-> times: build: 0, prepare(opaque/alpha): 0/0, sorting: 0, upload(opaque/alpha): 0/0, render: 1
which I deem bad to catastrophic if I want to run this on a low-performance device such as an old Raspberry Pi.
However, I fail to find answers to the questions: what should I make of this, and how can I improve it to reduce the number of batches, especially the alpha batches?
I have only a few objects (2) with opacity != 0, and at the time of this output all of them had been set to invisible.
I have two SVGs (8 nodes each) that might show up there.
I do not use any colors that would not fit the format '#ff------' (unless some of the SVG colors, of which I use a few, have the alpha channel set otherwise), and I do not use the color 'transparent'.
I do have an object that consists of two "concentric" Rectangles of opaque color, which are marked unmerged when visualizing the overdraw, and I have no clue why. At the very least, I would think each could be merged with its non-intersecting siblings. Why aren't they?
I think it might help if I could identify the objects, but when I output the addresses of the visible objects (Component.onCompleted: console.log(this)), I do not get any of the listed addresses.
How might I achieve a mapping between the render objects and my QML objects?
And what do all those list entries mean, anyway?
EDIT: It seems I used some PNGs with alpha enabled. Replacing them with JPGs reduced the output to the following. But that accounted for only 5 of the nodes/batches; 34 to go...
-> Opaque: 43 nodes in 6 batches...
-> Alpha: 29 nodes in 8 batches...
Greetings & Thanks,
-m-

From doc.qt.io: "[one batch] is created for each unique set of material state and geometry type". Reduce the number of differences, and the default renderer puts compatible OpenGL primitives into the same batch.
For example:
Rectangle elements with different width/height/color/gradient/rotation properties are batched together, but ones with different opacity/radius/antialiasing properties are not.
Rectangle elements with the color property set to an opaque value using hexadecimal triplets (e.g. specifying 'forestgreen' as '#228B22' or '#282') are batched together. But specifying translucent colors (a hex quad that does not start with '#FF', e.g. '#44228B22' -- 25% opaque 'forestgreen') forces a separate batch, even if the translucent color values are identical (verified on Qt 5.9.1). Translucent border colors likewise force separate batches, even if the translucent color values are identical.
Text elements with different font letterSpacing/underline properties are grouped together, but ones with different font bold/italic/pixelSize/hintingPreference properties are separated.
You can determine which objects are which by changing the visible property and seeing what appears or disappears.
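To illustrate the opaque-versus-translucent rule above, here is a minimal QML sketch (the item layout and color values are arbitrary, chosen only for the example): the first two Rectangles are eligible for one opaque batch, while the third is forced into an alpha batch by its translucent hex quad.
import QtQuick 2.9

Item {
    width: 200; height: 200

    // Fully opaque colors: can share one opaque batch,
    // even though size and color differ.
    Rectangle { x: 0;   y: 0; width: 50; height: 50; color: "#228B22" }
    Rectangle { x: 100; y: 0; width: 80; height: 30; color: "#B22222" }

    // Translucent hex quad (alpha 0x44): lands in an alpha batch.
    Rectangle { x: 0; y: 100; width: 50; height: 50; color: "#44228B22" }
}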

SeaweedFS - Added new volume server but not able to add new files

I have one master (x.x.x.61), one volume server (x.x.x.63), and one filer + S3 API (x.x.x.62) set up on 3 separate machines.
I added a new volume server (x.x.x.64) because I maxed out the storage space on the first volume server.
But I'm still not able to add new files through the filer UI (http://x.x.x.62:8888).
In my filer logs, I noticed that it's still trying to connect to the first volume server's IP address, the one that's out of space. Am I missing a configuration for it to connect to the new volume server?
E1221 11:09:48.027930 upload_content.go:351 unmarshal http://x.x.x.63:8080/7,2bafadaa4666: {"error":"failed to write to local disk: write data/chrisDir_7.dat: no space left on device"}{"name":"app_progress4.apk","size":2353734,"eTag":"92b10892"}
W1221 11:09:48.027950 upload_content.go:168 uploading 2 to http://x.x.x.63:8080/7,2bafadaa4666: unmarshal http://x.x.x.63:8080/7,2bafadaa4666: invalid character '{' after top-level value
E1221 11:09:48.027965 filer_server_handlers_write_upload.go:209 upload error: unmarshal http://x.x.x.63:8080/7,2bafadaa4666: invalid character '{' after top-level value
I1221 11:09:48.028022 common.go:70 response method:POST URL:/buckets/chrisDir/ with httpStatus:500 and JSON:{"error":"unmarshal http://x.x.x.63:8080/2,2ba84b2894a7: invalid character '{' after top-level value"}
In the master log, I see that the second volume server was added successfully and the master.toml file was executed to rebalance:
I1221 11:36:09.522690 node.go:225 topo:DefaultDataCenter:DefaultRack adds child x.x.x.64:8080
I1221 11:36:09.522716 node.go:225 topo:DefaultDataCenter:DefaultRack:x.x.x.64:8080 adds child
I1221 11:36:09.522724 master_grpc_server.go:138 added volume server 0: x.x.x.64:8080 [3caad049-38a6-43f6-8192-d1082c5e838b]
I1221 11:36:09.522744 master_grpc_server.go:49 found new uuid:x.x.x.64:8080 [3caad049-38a6-43f6-8192-d1082c5e838b] , map[x.x.x.63:8080:[5005b287-c812-4dba-ba41-9b5a6a022f12] x.x.x.64:8080:[3caad049-38a6-43f6-8192-d1082c5e838b]]
I1221 11:36:09.522866 volume_layout.go:393 Volume 11 becomes writable
I1221 11:36:09.522880 master_grpc_server.go:199 master see new volume 11 from x.x.x.64:8080
I1221 11:38:33.481721 master_server.go:323 executing: lock []
I1221 11:38:33.482821 master_server.go:323 executing: ec.encode [-fullPercent=95 -quietFor=1h]
I1221 11:38:33.483925 master_server.go:323 executing: ec.rebuild [-force]
I1221 11:38:33.484372 master_server.go:323 executing: ec.balance [-force]
I1221 11:38:33.484777 master_server.go:323 executing: volume.balance [-force]
2022/12/21 11:38:48 copying volume 21 from x.x.x.63:8080 to x.x.x.64:8080
I1221 11:38:48.486778 volume_layout.go:407 Volume 21 has 0 replica, less than required 1
I1221 11:38:48.486798 volume_layout.go:380 Volume 21 becomes unwritable
I1221 11:38:48.494998 volume_layout.go:393 Volume 21 becomes writable
2022/12/21 11:38:48 tailing volume 21 from x.x.x.63:8080 to x.x.x.64:8080
2022/12/21 11:38:58 deleting volume 21 from x.x.x.63:8080
....
How I start the master:
./weed master -mdir='.'
How I start the volume server:
./weed volume -max=100 -mserver="x.x.x.61:9333" -dir="$dataDir"
How I start the filer and S3:
./weed filer -master="x.x.x.61:9333" -s3
What's in $HOME/.seaweedfs:
drwxrwxr-x 2 seaweedfs seaweedfs 4096 Dec 20 16:01 .
drwxr-xr-x 20 seaweedfs seaweedfs 4096 Dec 20 16:01 ..
-rw-r--r-- 1 seaweedfs seaweedfs 2234 Dec 20 15:57 master.toml
Content of master.toml file
# Put this file to one of the location, with descending priority
# ./master.toml
# $HOME/.seaweedfs/master.toml
# /etc/seaweedfs/master.toml
# this file is read by master
[master.maintenance]
# periodically run these scripts are the same as running them from 'weed shell'
scripts = """
lock
ec.encode -fullPercent=95 -quietFor=1h
ec.rebuild -force
ec.balance -force
volume.deleteEmpty -quietFor=24h -force
volume.balance -force
volume.fix.replication
s3.clean.uploads -timeAgo=24h
unlock
"""
sleep_minutes = 7 # sleep minutes between each script execution
[master.sequencer]
type = "raft" # Choose [raft|snowflake] type for storing the file id sequence
# when sequencer.type = snowflake, the snowflake id must be different from other masters
sequencer_snowflake_id = 0 # any number between 1~1023
# configurations for tiered cloud storage
# old volumes are transparently moved to cloud for cost efficiency
[storage.backend]
[storage.backend.s3.default]
enabled = false
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-east-2"
bucket = "your_bucket_name" # an existing bucket
endpoint = ""
storage_class = "STANDARD_IA"
# create this number of logical volumes if no more writable volumes
# count_x means how many copies of data.
# e.g.:
# 000 has only one copy, copy_1
# 010 and 001 has two copies, copy_2
# 011 has only 3 copies, copy_3
[master.volume_growth]
copy_1 = 7 # create 1 x 7 = 7 actual volumes
copy_2 = 6 # create 2 x 6 = 12 actual volumes
copy_3 = 3 # create 3 x 3 = 9 actual volumes
copy_other = 1 # create n x 1 = n actual volumes
# configuration flags for replication
[master.replication]
# any replication counts should be considered minimums. If you specify 010 and
# have 3 different racks, that's still considered writable. Writes will still
# try to replicate to all available volumes. You should only use this option
# if you are doing your own replication or periodic sync of volumes.
treat_replication_as_minimums = false
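For reference, the maintenance scripts in the [master.maintenance] block above are the same commands one can run by hand; a minimal sketch (assuming weed shell is pointed at the master on x.x.x.61):
./weed shell -master="x.x.x.61:9333"
> lock
> volume.balance -force
> unlock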
System status
curl http://localhost:9333/dir/assign?pretty=y
{
  "fid": "9,2bb2fd75d706",
  "url": "x.x.x.63:8080",
  "publicUrl": "x.x.x.63:8080",
  "count": 1
}
curl http://x.x.x.61:9333/cluster/status?pretty=y
{
  "IsLeader": true,
  "Leader": "x.x.x.61:9333",
  "MaxVolumeId": 21
}
curl "http://x.x.x.61:9333/dir/status?pretty=y"
{
  "Topology": {
    "Max": 200,
    "Free": 179,
    "DataCenters": [
      {
        "Id": "DefaultDataCenter",
        "Racks": [
          {
            "Id": "DefaultRack",
            "DataNodes": [
              {
                "Url": "x.x.x.63:8080",
                "PublicUrl": "x.x.x.63:8080",
                "Volumes": 20,
                "EcShards": 0,
                "Max": 100,
                "VolumeIds": " 1-10 12-21"
              },
              {
                "Url": "x.x.x.64:8080",
                "PublicUrl": "x.x.x.64:8080",
                "Volumes": 1,
                "EcShards": 0,
                "Max": 100,
                "VolumeIds": " 11"
              }
            ]
          }
        ]
      }
    ],
    "Layouts": [
      {
        "replication": "000",
        "ttl": "",
        "writables": [ 6, 1, 2, 7, 3, 4, 5 ],
        "collection": "chrisDir"
      },
      {
        "replication": "000",
        "ttl": "",
        "writables": [ 16, 19, 17, 21, 15, 18, 20 ],
        "collection": "chrisDir2"
      },
      {
        "replication": "000",
        "ttl": "",
        "writables": [ 8, 12, 13, 9, 14, 10, 11 ],
        "collection": ""
      }
    ]
  },
  "Version": "30GB 3.37 438146249f50bf36b4c46ece02a430f44152777f"
}

Linear Diophantine Equations with Restriction in the GAP System

I am searching for a way to use the GAP System to find a solution of a linear Diophantine equation over the non-negative integers. Explicitly, I have a list L of positive integers for each of which there exists a solution of the linear Diophantine equation s = 11*a + 7*b such that a and b are non-negative integers. I would like to have the GAP System return for each element s of L the ordered pair(s) [a, b] corresponding to the above solution(s).
I am familiar already with the command SolutionIntMat in the GAP System; however, this produces only some solution of the linear Diophantine equation s = 11*a + 7*b. Particularly, it is possible (and far more likely) that one of the coefficients a and b is negative. For instance, I obtain the solution [-375, 600] when I use the aforementioned command on the linear Diophantine equation 75 = 11*a + 7*b.
For additional context, this query arises when working with numerical semigroups generated by generalized arithmetic sequences. Use the command LoadPackage("numericalsgps"); to implement computations with such objects. For instance, if S := NumericalSemigroup(11, 29, 36, 43, 50, 57, 64, 71);, then each of the minimal generators of S other than 11 is of the form 2*11 + 7*i for some integer i in [1..7]. One can ask the GAP System for the SmallElements(S);, and the GAP System will return all elements of S up to FrobeniusNumber(S) + 1. Clearly, every element of S is of the form 11*a + 7*b for some non-negative integers a and b; I would like to investigate what coefficients a and b arise. In fact, the answer is known (cf. Proposition 2.5 of this paper); I am just trying to get an understanding of the intuition behind the proof.
Thank you in advance for your time and consideration.
Dylan, thank you for your query and for using GAP and numericalsgps.
In this setting you can probably use Factorizations from the package numericalsgps. Internally, it rewrites the output of RestrictedPartitions.
For instance, in your example, you can get all possible "factorizations" of the small elements of S, with respect to the generators of S, by typing List(SmallElements(S), x->[x,Factorizations(x,S)]). A particular example:
gap> Factorizations(104,S);
[ [ 1, 0, 0, 1, 1, 0, 0, 0 ], [ 1, 0, 1, 0, 0, 1, 0, 0 ],
[ 1, 1, 0, 0, 0, 0, 1, 0 ], [ 3, 0, 0, 0, 0, 0, 0, 1 ] ]
If you want to see the factorizations of the elements of S in terms of 11 and 7, then you can do the following:
gap> FactorizationsIntegerWRTList(29,[11,7]);
[ [ 2, 1 ] ]
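Applied to the 75 = 11*a + 7*b example from the question, this yields the (here unique) non-negative solution, since 75 = 3*11 + 6*7:
gap> FactorizationsIntegerWRTList(75, [11, 7]);
[ [ 3, 6 ] ]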
So, for all minimal generators of S you would do
gap> List(MinimalGenerators(S), g-> FactorizationsIntegerWRTList(g,[11,7]));
[ [ [ 1, 0 ] ], [ [ 2, 1 ] ], [ [ 2, 2 ] ], [ [ 2, 3 ] ],
[ [ 2, 4 ] ], [ [ 2, 5 ] ], [ [ 2, 6 ] ], [ [ 2, 7 ] ] ]
For the set of small elements of S, try List(SmallElements(S), g-> FactorizationsIntegerWRTList(g,[11,7])). If you only want up to some integer, just replace SmallElements(S) with Intersection([1..200], S); or if you want the first, say 200, elements of S, use S{[1..200]}.
You may want to have a look at Chapter 9 of the manual, and in particular at FactorizationsElementListWRTNumericalSemigroup.
I hope this helps.

Tensorflow, Reshape like a convolution

I have a matrix of shape [3,3,256], and my final output must be [4,2,2,256]. I have to do a reshape that works like a 'convolution' without changing the values (in this case using a 2x2 filter). Is there a method to do this using TensorFlow?
If I understand your question correctly, you want to store the original values redundantly in the new structure, like this (without the last dim of 256):
[ [ 1 2 3 ]      [ [ 1 2 ]    [ [ 2 3 ]    [ [ 4 5 ]    [ [ 5 6 ]
  [ 4 5 6 ]  =>    [ 4 5 ] ],   [ 5 6 ] ],   [ 7 8 ] ],   [ 8 9 ] ]
  [ 7 8 9 ] ]
If yes, you can use indexing, like this, with x being the original tensor, and then stack them:
x2 = []
for i in range(2):
    for j in range(2):
        x2.append(x[i:i + 2, j:j + 2, :])  # the 2x2 patch whose top-left corner is (i, j)
y = tf.stack(x2, axis=0)  # y has shape [4, 2, 2, 256]
Based on your comment, if you really want to avoid using any loops, you might use tf.extract_image_patches, like below (tested code), but you should run some tests because this might actually be much worse than the above in terms of efficiency and performance:
import tensorflow as tf

sess = tf.Session()
x = tf.constant( [ [ [ 1, -1 ], [ 2, -2 ], [ 3, -3 ] ],
                   [ [ 4, -4 ], [ 5, -5 ], [ 6, -6 ] ],
                   [ [ 7, -7 ], [ 8, -8 ], [ 9, -9 ] ] ] )
xT = tf.transpose( x, perm = [ 2, 0, 1 ] )  # have to put channel dim as batch for tf.extract_image_patches
xTE = tf.expand_dims( xT, axis = -1 )       # extend dims to have a fake channel dim
xP = tf.extract_image_patches( xTE, ksizes = [ 1, 2, 2, 1 ],
                               strides = [ 1, 1, 1, 1 ], rates = [ 1, 1, 1, 1 ], padding = "VALID" )
y = tf.transpose( xP, perm = [ 3, 1, 2, 0 ] )  # move dims back to original, with the new patch dim up front
print( sess.run( y ) )
Output (horizontal separator lines added manually for readability):
[[[[ 1 -1]
   [ 2 -2]]
  [[ 4 -4]
   [ 5 -5]]]
----------
 [[[ 2 -2]
   [ 3 -3]]
  [[ 5 -5]
   [ 6 -6]]]
----------
 [[[ 4 -4]
   [ 5 -5]]
  [[ 7 -7]
   [ 8 -8]]]
----------
 [[[ 5 -5]
   [ 6 -6]]
  [[ 8 -8]
   [ 9 -9]]]]
I had a similar problem, and I found that tf.contrib.kfac.utils contains a function called extract_convolution_patches. Suppose you have a tensor X with shape (1, 3, 3, 256), where the initial 1 marks the batch size; you can call
Y = tf.contrib.kfac.utils.extract_convolution_patches(X, (2, 2, 256, 1), padding='VALID')
Y.shape # (1, 2, 2, 2, 2, 256)
The first two 2's will be the number of your output filters (makes up the 4 in your description). The latter two 2's will be the shape of the filters. You can then call
Y = tf.reshape(Y, [4,2,2,256])
to get your final result.
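As a side note, on TensorFlow 2.x (where tf.contrib is gone) the same extraction is available as tf.image.extract_patches. A minimal sketch under the assumption that the input carries a leading batch dimension of 1; the call flattens each patch in (row, col, channel) order, so a single reshape restores the desired layout:
import tensorflow as tf  # TF 2.x assumed

x = tf.random.normal([1, 3, 3, 256])  # stand-in for the original [3, 3, 256] matrix plus a batch dim
p = tf.image.extract_patches(x, sizes=[1, 2, 2, 1], strides=[1, 1, 1, 1],
                             rates=[1, 1, 1, 1], padding='VALID')
# p has shape [1, 2, 2, 1024]; each 1024-vector is one flattened 2x2x256 patch
y = tf.reshape(p, [4, 2, 2, 256])
print(y.shape)  # (4, 2, 2, 256)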

Why does one of these two indexing patterns give me an "index out of bounds" error?

With ncsim, the following code throws the error:
Bit-select or part-select index out of declared bounds.
However, the commented-out code, which does exactly the same thing, doesn't. Am I missing something, or is the compiler mistaken?
module pd__test;
  genvar i, j;

  reg [10-1:0] assign_from_reg;
  reg [256:0]  assign_to_reg;

  generate
    for (i=0; i<2; i=i+1) begin
      for (j=0; j<13; j=j+1) begin
        always @* begin
          if (i+2*j < 25) begin
            // gives me an index out of bounds error
            assign_to_reg[10*(i+2*j)+9 : 10*(i+2*j)] = assign_from_reg;
            // gives me no such error, however the indices are the same
            // assign_to_reg[10*(i+2*j)+9 -: 10] = assign_from_reg;
          end else begin
            // do something else
          end
        end
      end
    end
  endgenerate
endmodule
I ran a Python script to print the indices to double-check:
for i in range(2):
    for j in range(13):
        if (i + (2*j) < 25):
            print("[", 10*(i + (2*j)) + 9, ":", 10*(i + (2*j)), "]")
Prints:
[ 9 : 0 ]
[ 29 : 20 ]
[ 49 : 40 ]
[ 69 : 60 ]
[ 89 : 80 ]
[ 109 : 100 ]
[ 129 : 120 ]
[ 149 : 140 ]
[ 169 : 160 ]
[ 189 : 180 ]
[ 209 : 200 ]
[ 229 : 220 ]
[ 249 : 240 ]
[ 19 : 10 ]
[ 39 : 30 ]
[ 59 : 50 ]
[ 79 : 70 ]
[ 99 : 90 ]
[ 119 : 110 ]
[ 139 : 130 ]
[ 159 : 150 ]
[ 179 : 170 ]
[ 199 : 190 ]
[ 219 : 210 ]
[ 239 : 230 ]
You asked the Verilog compiler to compile this code in the always block:
assign_to_reg[10*(i+2*j)+9 : 10*(i+2*j)]
which, for i == 1 and j == 12, the generate block elaborates as:
assign_to_reg[259 : 250]
That is clearly outside of the declared bounds [256:0]. Note that the runtime if (i+2*j < 25) does not help here: the part-select bounds are constant expressions of the genvars, so they are checked at elaboration even though the branch is never taken.
Moving the 'if' into the generate block, as @toolic suggested, causes Verilog not to generate the last always block at all; therefore it is never compiled and no warning or error is produced.
So the other solution with generate blocks would be to declare your assign_to_reg as [259:0].
And yet the best solution would be to get rid of the generate blocks altogether and move the loops inside a single always block:
always @* begin
    for (int i=0; i<2; i=i+1) begin
        for (int j=0; j<13; j=j+1) begin
            if (i+2*j < 25) begin
                assign_to_reg[10*(i+2*j)+9 -: 10] = assign_from_reg;
            end
        end
    end
end
This should let the compiler calculate the indexes at run time, and it will not cause out-of-bounds access either.
Move the conditional if (i+2*j < 25) outside of the always block:
module pd__test;
  genvar i, j;

  reg [10-1:0] assign_from_reg;
  reg [256:0]  assign_to_reg;

  generate
    for (i=0; i<2; i=i+1) begin
      for (j=0; j<13; j=j+1) begin
        if (i+2*j < 25) begin
          always @* begin
            //assign_to_reg[10*(i+2*j)+9 : 10*(i+2*j)] = assign_from_reg;
            assign_to_reg[10*(i+2*j)+9 -: 10] = assign_from_reg;
          end
        end
      end
    end
  endgenerate
endmodule
Both assignments compile without warnings or errors for me.

Openstack VM instance SHUTOFF after few minute

For learning purposes I have built OpenStack on VirtualBox with 2 vCPUs and 4GB of memory. It installed successfully and I am able to start VM instances, but the guest VM goes to SHUTOFF status after a few minutes. I have googled this issue but didn't get a proper answer. I have checked the logs and didn't find anything suspicious.
How do I check the VM console so I can see what is going on there?
Where should I check for SHUTOFF-specific error logs, I mean in which file?
EDIT:
Following is the output of nova console-log, but it gets stuck there and doesn't go any further; I can't see a login screen either.
openstack@openstack1:~$ nova console-log 970a3722-0fb3-4db6-862b-2aa626cc68a8
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.0.0-12-virtual (buildd@crested) (gcc version 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) ) #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 (Ubuntu 3.0.0-12.20-virtual 3.0.4)
[ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty0 console=ttyS0 console=hvc0
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] Centaur CentaurHauls
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009dc00 (usable)
[ 0.000000] BIOS-e820: 000000000009dc00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 000000001fffd000 (usable)
[ 0.000000] BIOS-e820: 000000001fffd000 - 0000000020000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI 2.4 present.
[ 0.000000] No AGP bridge found
[ 0.000000] last_pfn = 0x1fffd max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] found SMP MP-table at [ffff8800000fdaf0] fdaf0
[ 0.000000] init_memory_mapping: 0000000000000000-000000001fffd000
[ 0.000000] RAMDISK: 1fdf9000 - 1ffed000
[ 0.000000] ACPI: RSDP 00000000000fd990 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 000000001fffd7b0 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 000000001fffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 000000001fffd9b0 02589 (v01 BXPC BXDSDT 00000001 INTL 20100528)
[ 0.000000] ACPI: FACS 000000001fffff40 00040
[ 0.000000] ACPI: SSDT 000000001fffd910 0009E (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: APIC 000000001fffd830 00072 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 000000001fffd7f0 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-000000001fffd000
[ 0.000000] Initmem setup node 0 0000000000000000-000000001fffd000
[ 0.000000] NODE_DATA [000000001fff5000 - 000000001fff9fff]
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA32 0x00001000 -> 0x00100000
[ 0.000000] Normal empty
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[2] active PFN ranges
[ 0.000000] 0: 0x00000010 -> 0x0000009d
[ 0.000000] 0: 0x00000100 -> 0x0001fffd
[ 0.000000] ACPI: PM-Timer IO Port: 0xb008
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: 000000000009d000 - 000000000009e000
[ 0.000000] PM: Registered nosave memory: 000000000009e000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[ 0.000000] Allocating PCI resources starting at 20000000 (gap: 20000000:dffc0000)
[ 0.000000] Booting paravirtualized kernel on bare hardware
[ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 27 pages/cpu @ffff88001fa00000 s79296 r8192 d23104 u2097152
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 129157
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty0 console=ttyS0 console=hvc0
[ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.000000] Checking aperture...
[ 0.000000] No AGP bridge found
[ 0.000000] Memory: 497852k/524276k available (6206k kernel code, 460k absent, 25964k reserved, 6907k data, 900k init)
[ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[ 0.000000] NR_IRQS:4352 nr_irqs:256 16
[ 0.000000] Console: colour VGA+ 80x25
[ 0.000000] console [tty0] enabled
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 4194304 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Fast TSC calibration failed
[ 0.000000] TSC: Unable to calibrate against PIT
[ 0.000000] TSC: using PMTIMER reference calibration
[ 0.000000] Detected 2486.018 MHz processor.
[ 0.024490] Calibrating delay loop (skipped), value calculated using timer frequency.. 4972.03 BogoMIPS (lpj=9944072)
[ 0.025939] pid_max: default: 32768 minimum: 301
[ 0.029903] Security Framework initialized
[ 0.033041] AppArmor: AppArmor initialized
[ 0.033539] Yama: becoming mindful.
[ 0.037514] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
[ 0.039560] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.040693] Mount-cache hash table entries: 256
[ 0.054301] Initializing cgroup subsys cpuacct
[ 0.054957] Initializing cgroup subsys memory
[ 0.056108] Initializing cgroup subsys devices
[ 0.056838] Initializing cgroup subsys freezer
[ 0.057341] Initializing cgroup subsys net_cls
[ 0.057824] Initializing cgroup subsys blkio
[ 0.058338] Initializing cgroup subsys perf_event
[ 0.060182] mce: CPU supports 10 MCE banks
[ 0.062116] SMP alternatives: switching to UP code
[ 0.236105] Freeing SMP alternatives: 24k freed
[ 0.237129] ACPI: Core revision 20110413
[ 0.270578] ftrace: allocating 26075 entries in 103 pages
[ 0.289821] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.332667] CPU0: AMD QEMU Virtual CPU version 1.0 stepping 03
[ 0.336020] APIC calibration not consistent with PM-Timer: 103ms instead of 100ms
[ 0.336020] APIC delta adjusted to PM-Timer: 6249961 (6456813)
[ 0.336020] Performance Events: Broken PMU hardware detected, using software events only.
[ 0.341160] Brought up 1 CPUs
[ 0.341596] Total of 1 processors activated (4972.03 BogoMIPS).
[ 0.348508] devtmpfs: initialized
[ 0.370265] print_constraints: dummy:
[ 0.370818] Time: 22:32:35 Date: 07/31/13
[ 0.373184] NET: Registered protocol family 16
[ 0.377862] ACPI: bus type pci registered
[ 0.379805] PCI: Using configuration type 1 for base access
[ 0.394436] bio: create slab <bio-0> at 0
[ 0.441293] ACPI: Interpreter enabled
[ 0.441749] ACPI: (supports S0 S3 S4 S5)
[ 0.442853] ACPI: Using IOAPIC for interrupt routing
[ 0.504949] ACPI: No dock devices found.
[ 0.505458] HEST: Table not found.
[ 0.505922] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
[ 0.508456] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.514427] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
[ 0.515222] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
[ 0.526520] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e)
[ 0.612644] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.614063] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.615312] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.616918] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.618197] ACPI: PCI Interrupt Link [LNKS] (IRQs 9) *0
[ 0.622888] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.623734] vgaarb: loaded
[ 0.624235] vgaarb: bridge control possible 0000:00:02.0
[ 0.627513] SCSI subsystem initialized
[ 0.629754] usbcore: registered new interface driver usbfs
[ 0.630590] usbcore: registered new interface driver hub
[ 0.632126] usbcore: registered new device driver usb
[ 0.634610] PCI: Using ACPI for IRQ routing
[ 0.640771] NetLabel: Initializing
[ 0.641144] NetLabel: domain hash size = 128
[ 0.641570] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.642769] NetLabel: unlabeled traffic allowed by default
[ 0.744929] AppArmor: AppArmor Filesystem Enabled
[ 0.746522] pnp: PnP ACPI init
[ 0.748377] ACPI: bus type pnp registered
[ 0.761838] pnp: PnP ACPI: found 8 devices
[ 0.762440] ACPI: ACPI bus type pnp unregistered
[ 0.791325] Switching to clocksource acpi_pm
[ 0.791325] NET: Registered protocol family 2
[ 0.792984] Switched to NOHz mode on CPU #0
[ 0.794980] IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.800380] TCP established hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.802008] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[ 0.803089] TCP: Hash tables configured (established 16384 bind 16384)
[ 0.803751] TCP reno registered
[ 0.804373] UDP hash table entries: 256 (order: 1, 8192 bytes)
[ 0.805192] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[ 0.806852] NET: Registered protocol family 1
[ 0.807530] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.808586] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.809327] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.816560] audit: initializing netlink socket (disabled)
[ 0.817591] type=2000 audit(1375309954.816:1): initialized
[ 0.903327] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.928384] VFS: Disk quotas dquot_6.5.2
[ 0.929484] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.938210] fuse init (API version 7.16)
[ 0.940982] msgmni has been set to 972
[ 0.949280] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[ 0.950562] io scheduler noop registered
[ 0.951008] io scheduler deadline registered (default)
[ 0.951941] io scheduler cfq registered
[ 0.955245] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 0.956970] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.960881] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 0.962211] ACPI: Power Button [PWRF]
[ 0.979110] ERST: Table is not found!
[ 0.982891] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 0.983651] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11
[ 0.986746] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 0.987395] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10
[ 0.993533] Trying to unpack rootfs image as initramfs...
[ 1.017633] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 1.018210] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10
[ 1.020389] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 1.052583] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 1.082516] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 1.165489] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 1.244653] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 1.248018] hpet_acpi_add: no address or irqs in _CRS
[ 1.249922] Linux agpgart interface v0.103
[ 1.279474] brd: module loaded
[ 1.287981] loop: module loaded
[ 1.597690] vda: vda1
[ 1.624125] Freeing initrd memory: 2000k freed
[ 1.626790] scsi0 : ata_piix
[ 1.629007] scsi1 : ata_piix
[ 1.629910] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
[ 1.630652] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
[ 1.636489] Fixed MDIO Bus: probed
[ 1.637469] PPP generic driver version 2.4.2
[ 1.638209] tun: Universal TUN/TAP device driver, 1.6
[ 1.638756] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
openstack@openstack1:~$
You can get the guest console on the dashboard or with this command:
nova get-vnc-console <instance id> novnc
If your guest image redirects console messages (like the Ubuntu cloud image does), you can see boot messages on the dashboard or with the command:
nova console-log <instance id>
You may find clues in /var/log/nova/nova-compute.log and in your hypervisor logs (/var/log/libvirt/libvirtd.log for QEMU/KVM).
A possible cause is that your guest can't boot from its primary disk and gets stuck in the boot sequence. Try other images, like the ones proposed in the OpenStack documentation.
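For example (a sketch; the exact log wording differs between OpenStack releases), filtering the compute log around the time of the shutoff can show whether nova or the hypervisor stopped the instance:
grep -iE "lifecycle|shutdown|stopped|error" /var/log/nova/nova-compute.log | tail -n 50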