Adding extra device to namespace using asinfo

From what I saw in the config documentation, it's easy to configure multiple devices within the same namespace using:
namespace <namespace-name> {
    memory-size <SIZE>G             # Maximum memory allocation for primary
                                    # and secondary indexes.
    storage-engine device {         # Configure the storage-engine to use persistence
        device /dev/<device>        # raw device. Maximum size is 2 TiB
        # device /dev/<device>      # (optional) another raw device.
        write-block-size 128K       # adjust block size to make it efficient for SSDs.
    }
}
Is there any way I can do that without restarting the asd service, using the asinfo tool for example?

No, you cannot add devices dynamically.
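The device list is static configuration, so the usual approach is a rolling change: on each node in turn, stop the daemon, add the device to aerospike.conf, restart, and let migrations finish before moving to the next node. A minimal sketch, assuming a systemd service named aerospike and an illustrative device path:

# One node at a time; let migrations complete before touching the next node.
sudo systemctl stop aerospike
sudo vi /etc/aerospike/aerospike.conf    # add "device /dev/sdc" to the namespace's storage-engine block
sudo systemctl start aerospike
asadm -e 'info'                          # confirm the cluster re-formed and migrations are done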
User also posted here:
https://discuss.aerospike.com/t/adding-extra-device-to-namespace-using-asinfo/4525

Related

Wrong balance between Aerospike instances in cluster

I have an application with a high load of batch read operations. My Aerospike cluster (v3.7.2) has 14 servers, each with 7GB RAM and 2 CPUs, in Google Cloud.
Looking at the Google Cloud Monitoring graphs, I noticed a very unbalanced load between servers: some servers have almost 100% CPU load, while others have less than 50%. Even after hours of operation, the cluster's unbalanced pattern doesn't change.
Is there any configuration I could change to make this cluster more homogeneous? How can I optimize node balancing?
Edit 1
All servers in the cluster have an identical aerospike.conf file:
# Aerospike database configuration file.
service {
    user root
    group root
    paxos-single-replica-limit 1    # Number of nodes where the replica count is automatically reduced to 1.
    paxos-recovery-policy auto-reset-master
    pidfile /var/run/aerospike/asd.pid
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 32
    batch-index-threads 32
    proto-fd-max 15000
    batch-max-requests 200000
}

logging {
    # Log file must be an absolute path.
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}
network {
    service {
        #address any
        port 3000
    }
    heartbeat {
        mode mesh
        mesh-seed-address-port 10.240.0.6 3002
        mesh-seed-address-port 10.240.0.5 3002
        port 3002
        interval 150
        timeout 20
    }
    fabric {
        port 3001
    }
    info {
        port 3003
    }
}
namespace test {
    replication-factor 3
    memory-size 5G
    default-ttl 0    # 30 days, use 0 to never expire/evict.
    ldt-enabled true
    storage-engine device {
        file /data/aerospike.dat
        write-block-size 1M
        filesize 180G
    }
}
Edit 2:
$ asinfo
1 : node
BB90600F00A0142
2 : statistics
cluster_size=14;cluster_key=E3C3672DCDD7F51;cluster_integrity=true;objects=3739898;sub-records=0;total-bytes-disk=193273528320;used-bytes-disk=26018492544;free-pct-disk=86;total-bytes-memory=5368709120;used-bytes-memory=239353472;data-used-bytes-memory=0;index-used-bytes-memory=239353472;sindex-used-bytes-memory=0;free-pct-memory=95;stat_read_reqs=2881465329;stat_read_reqs_xdr=0;stat_read_success=2878457632;stat_read_errs_notfound=3007093;stat_read_errs_other=0;stat_write_reqs=551398;stat_write_reqs_xdr=0;stat_write_success=549522;stat_write_errs=90;stat_xdr_pipe_writes=0;stat_xdr_pipe_miss=0;stat_delete_success=4;stat_rw_timeout=1862;udf_read_reqs=0;udf_read_success=0;udf_read_errs_other=0;udf_write_reqs=0;udf_write_success=0;udf_write_err_others=0;udf_delete_reqs=0;udf_delete_success=0;udf_delete_err_others=0;udf_lua_errs=0;udf_scan_rec_reqs=0;udf_query_rec_reqs=0;udf_replica_writes=0;stat_proxy_reqs=7021;stat_proxy_reqs_xdr=0;stat_proxy_success=2121;stat_proxy_errs=4739;stat_ldt_proxy=0;stat_cluster_key_err_ack_dup_trans_reenqueue=607;stat_expired_objects=0;stat_evicted_objects=0;stat_deleted_set_objects=0;stat_evicted_objects_time=0;stat_zero_bin_records=0;stat_nsup_deletes_not_shipped=0;stat_compressed_pkts_received=0;err_tsvc_requests=110;err_tsvc_requests_timeout=0;err_out_of_space=0;err_duplicate_proxy_request=0;err_rw_request_not_found=17;err_rw_pending_limit=19;err_rw_cant_put_unique=0;geo_region_query_count=0;geo_region_query_cells=0;geo_region_query_points=0;geo_region_query_falsepos=0;fabric_msgs_sent=58002818;fabric_msgs_rcvd=57998870;paxos_principal=BB92B00F00A0142;migrate_msgs_sent=55749290;migrate_msgs_recv=55759692;migrate_progress_send=0;migrate_progress_recv=0;migrate_num_incoming_accepted=7228;migrate_num_incoming_refused=0;queue=0;transactions=101978550;reaped_fds=6;scans_active=0;basic_scans_succeeded=0;basic_scans_failed=0;aggr_scans_succeeded=0;aggr_scans_failed=0;udf_bg_scans_succeeded=0;udf_bg_scans_failed=0;batch_index_initiate=40457778;batch_index_queue=0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0;batch_index_complete=40456708;batch_index_timeout=1037;batch_index_errors=33;batch_index_unused_buffers=256;batch_index_huge_buffers=217168717;batch_index_created_buffers=217583519;batch_index_destroyed_buffers=217583263;batch_initiate=0;batch_queue=0;batch_tree_count=0;batch_timeout=0;batch_errors=0;info_queue=0;delete_queue=0;proxy_in_progress=0;proxy_initiate=7021;proxy_action=5519;proxy_retry=0;proxy_retry_q_full=0;proxy_unproxy=0;proxy_retry_same_dest=0;proxy_retry_new_dest=0;write_master=551089;write_prole=1055431;read_dup_prole=14232;rw_err_dup_internal=0;rw_err_dup_cluster_key=1814;rw_err_dup_send=0;rw_err_write_internal=0;rw_err_write_cluster_key=0;rw_err_write_send=0;rw_err_ack_internal=0;rw_err_ack_nomatch=1767;rw_err_ack_badnode=0;client_connections=366;waiting_transactions=0;tree_count=0;record_refs=3739898;record_locks=0;migrate_tx_objs=0;migrate_rx_objs=0;ongoing_write_reqs=0;err_storage_queue_full=0;partition_actual=296;partition_replica=572;partition_desync=0;partition_absent=3228;partition_zombie=0;partition_object_count=3739898;partition_ref_count=4096;system_free_mem_pct=61;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_inactivity_dur=0;sindex_gc_activity_dur=0;sindex_gc_list_creation_time=0;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=0;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;system_swapping=false;err_replica_null_node=0;err_replica_non_null_node=0;err_sync_copy_null_master=0;storage_defrag_corrupt_record=0;err_write_fail_prole_unknown=0;err_write_fail_prole_generation=0;err_write_fail_unknown=0;err_write_fail_key_exists=0;err_write_fail_generation=0;err_write_fail_generation_xdr=0;err_write_fail_bin_exists=0;err_write_fail_parameter=0;err_write_fail_incompatible_type=0;err_write_fail_noxdr=0;err_write_fail_prole_delete=0;err_write_fail_not_found=0;err_write_fail_key_mismatch=0;err_write_fail_record_too_big=90;err_write_fail_bin_name=0;err_write_fail_bin_not_found=0;err_write_fail_forbidden=0;stat_duplicate_operation=53184;uptime=1001388;stat_write_errs_notfound=0;stat_write_errs_other=90;heartbeat_received_self=0;heartbeat_received_foreign=145137042;query_reqs=0;query_success=0;query_fail=0;query_abort=0;query_avg_rec_count=0;query_short_running=0;query_long_running=0;query_short_queue_full=0;query_long_queue_full=0;query_short_reqs=0;query_long_reqs=0;query_agg=0;query_agg_success=0;query_agg_err=0;query_agg_abort=0;query_agg_avg_rec_count=0;query_lookups=0;query_lookup_success=0;query_lookup_err=0;query_lookup_abort=0;query_lookup_avg_rec_count=0
3 : features
cdt-list;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 : cluster-generation
61
5 : partition-generation
11811
6 : edition
Aerospike Community Edition
7 : version
Aerospike Community Edition build 3.7.2
8 : build
3.7.2
9 : services
10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.41:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.23:3000
10 : services-alumni
10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.23:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.41:3000
I have a few comments about your configuration. First, transaction-threads-per-queue should be set to 3 or 4 (don't set it to the number of cores).
The second has to do with your batch-read tuning. You're using the (default) batch-index protocol, and the config params you'll need to tune for batch-read performance are:
You have batch-max-requests set very high. This is probably affecting both your CPU load and your memory consumption. Even a slight imbalance in the number of keys you're accessing per node will show up in the graphs you've shown, so this is quite possibly the issue. It's better to iterate over smaller batches than to try to fetch 200K records per node at a time.
batch-index-threads – by default its value is 4, and you have set it to 32 (of a max of 64). Tune this incrementally: run the same test and benchmark the performance, adjusting higher on each iteration, then back down if performance decreases. For example: test with 32, +8 = 40, +8 = 48, -4 = 44. There's no easy rule of thumb for this setting; you'll need to tune through iterations on the hardware you'll be using and monitor the performance.
batch-max-buffer-per-queue – this is more directly linked to the number of concurrent batch-read operations the node can support. Each batch-read request consumes at least one buffer (more if the data cannot fit in 128K). If you do not have enough of these allocated to support the number of concurrent batch-read requests, you will get exceptions with error code 152 BATCH_QUEUES_FULL. Track and log such events clearly, because they mean you need to raise this value. Note that this is the number of buffers per queue. Each batch response worker thread has its own queue, so you'll have batch-index-threads x batch-max-buffer-per-queue buffers, each taking 128K of RAM. The related batch-max-unused-buffers caps the memory usage of all these buffers combined, destroying unused buffers until their number is reduced. There's an overhead to allocating and destroying these buffers, so you do not want to set it too low compared to the total. Your current cost is 32 x 256 x 128KB = 1GB. These parameters can be adjusted at runtime, as shown below.
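All of these service parameters can be changed with asinfo set-config, so you can tune them without restarting nodes. A sketch, assuming your build supports setting them dynamically; the values are illustrative, not recommendations:

asinfo -v 'set-config:context=service;batch-max-requests=5000'
asinfo -v 'set-config:context=service;batch-index-threads=40'
asinfo -v 'set-config:context=service;batch-max-buffer-per-queue=512'

Once you settle on values, persist them in aerospike.conf so they survive restarts.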
Finally, you're storing your data on a filesystem. That's fine for development instances, but not recommended for production. In GCE you can provision either a SATA SSD or an NVMe SSD for your data storage, and those should be initialized and used as block devices. Take a look at the GCE recommendations for more details. I suspect you have warnings in your log about the device not keeping up.
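A minimal sketch of that change, assuming the SSD is exposed at /dev/nvme0n1 (the device path is illustrative): initialize the device once by overwriting it, then reference it from the namespace instead of a file:

# One-time initialization - this destroys any data on the device.
sudo dd if=/dev/zero of=/dev/nvme0n1 bs=1M

# aerospike.conf: replace the file-backed storage-engine with the raw device.
storage-engine device {
    device /dev/nvme0n1
    write-block-size 128K    # commonly recommended for SSD-backed namespaces
}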
It's likely that one of your nodes is an outlier with regard to the number of partitions it holds (and therefore the number of objects). You can confirm it with asadm -e 'asinfo -v "objects"'. If that's the case, you can terminate that node and bring up a new one. This will force the partitions to be redistributed. It does trigger a migration, which takes quite a bit longer on the CE server than on the EE one.
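For example, running the following from any node prints the object count of every node in the cluster; a clear outlier suggests uneven partition ownership:

asadm -e 'asinfo -v "objects"'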
For anyone interested, Aerospike Enterprise 4.3 introduced uniform-balance, which homogeneously balances data partitions. Read more here: https://www.aerospike.com/blog/aerospike-4-3-all-flash-uniform-balance/
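On Enterprise 4.3+ this is enabled per namespace; a sketch, assuming the parameter name prefer-uniform-balance from that feature's documentation (check the docs for your version, and persist the setting in aerospike.conf as well):

asinfo -v 'set-config:context=namespace;id=test;prefer-uniform-balance=true'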

Aerospike cluster not cleaning available blocks

We use Aerospike in our projects and ran into a strange problem.
We have a 3-node cluster, and after some node restarts it stopped working.
So we made a test to demonstrate our problem.
We set up a test cluster: 3 nodes, replication factor = 2.
Here is our namespace config
namespace test {
    replication-factor 2
    memory-size 100M
    high-water-memory-pct 90
    high-water-disk-pct 90
    stop-writes-pct 95
    single-bin true
    default-ttl 0
    storage-engine device {
        cold-start-empty true
        file /tmp/test.dat
        write-block-size 1M
    }
}
We wrote 100MB of test data, after which we had this situation:
available pct about 66% and disk usage about 34%.
All good :)
But then we stopped one node. After migration we saw available pct = 49% and disk usage = 50%.
We returned the node to the cluster, and after migration disk usage went back down to about 32%, but available pct on the old nodes stayed at 49%.
We stopped a node one more time:
available pct = 31%
Repeating one more time, we got this situation:
available pct = 0%
Our cluster crashed, and clients get AerospikeException: Error Code 8: Server memory error.
So how can we reclaim available pct?
If your defrag-q is empty (and you can see whether it is by grepping the logs) then the issue is likely that your namespace is smaller than your post-write-queue. Blocks on the post-write-queue are not eligible for defragmentation, so you would see avail-pct trending down with no defragmentation to reclaim the space. By default the post-write-queue is 256 blocks, which with your 1M write-block-size equates to 256MB. If your namespace is smaller than that, you will see avail-pct continue to drop until you hit stop-writes. You can reduce the size of the post-write-queue dynamically (i.e. no restart needed) using the following command; here I suggest 8 blocks:
asinfo -v 'set-config:context=namespace;id=<NAMESPACE>;post-write-queue=8'
If you are happy with this value you should amend your aerospike.conf to include it so that it persists after a node restart.
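To confirm the defrag queue really is empty, grep the per-device stats lines in the log; and if you keep the lower value, it belongs in the namespace's storage-engine block. A sketch based on the test config above:

grep 'defrag-q' /var/log/aerospike/aerospike.log | tail

storage-engine device {
    cold-start-empty true
    file /tmp/test.dat
    write-block-size 1M
    post-write-queue 8    # persisted version of the dynamic setting above
}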

Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106)

I get the following error while storing data in Aerospike (client.put). I have enough space on the drive.
Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106).
Here is my Aerospike server namespace configuration
namespace test {
    replication-factor 1
    memory-size 1G
    default-ttl 30d    # 30 days, use 0 to never expire/evict.
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 2G
        data-in-memory true    # Store data in memory in addition to file.
    }
}
By default namespaces have a write-block-size of 1 MiB. This is also the maximum configurable size and will limit the max object size the application is able to write to Aerospike.
If you need to go beyond 1 MiB see Large Data Types as a possible solution.
UPDATE 2019/09/06
Since Aerospike 3.16, the write-block-size limit has been increased from 1 MiB to 8 MiB.
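If your records genuinely need to be larger and you are on 3.16 or later, you can raise the limit by setting write-block-size on the namespace. A sketch based on the config above; this is a static setting, so it needs a node restart:

namespace test {
    replication-factor 1
    memory-size 1G
    default-ttl 30d
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 2G
        data-in-memory true
        write-block-size 8M    # max record size scales with this, up to 8 MiB on 3.16+
    }
}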
Yes, but unfortunately, Aerospike has deprecated LDT (https://www.aerospike.com/blog/aerospike-ldt/). They now recommend using Lists or Maps, but as stated in their post:
"the new implementation does not solve the problem of the 1MB Aerospike database row size limit. A future key feature of the product will be an enhanced implementation that transcends the 1MB limit for a number of types"
In other words, it is still an unsolved problem when storing your data on SSD or HDD. However, you can store larger records in memory namespaces.
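A minimal sketch of such an in-memory namespace (the name and size are illustrative):

namespace bigrecords {
    replication-factor 2
    memory-size 4G
    storage-engine memory    # no device or file, so no write-block-size record limit
}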

AEROSPIKE_ERR_RECORD_NOT_FOUND just after INSERT

I have a strange problem writing data to the aerospike cluster
aql> insert into storebig.Chunks (PK,Data) values ('5cb138284d431abd6a053a56625ec088bfb88912', '1234567890')
OK, 1 record affected.
aql> select * from storebig.Chunks where PK = '5cb138284d431abd6a053a56625ec088bfb88912'
Error: (2) AEROSPIKE_ERR_RECORD_NOT_FOUND
aql> insert into storebig.Chunks (PK,Data) values ('5cb138284d431abd6a053a56625ec088bfb88912', '1234567890')
Error: (1) AEROSPIKE_ERR_SERVER
Same story with the golang client library (of course).
It is very possible the cluster is not healthy - some strange messages appear in the server logs:
May 06 2015 12:17:49 GMT: WARNING (drv_ssd): (drv_ssd.c::1236) read: read wrong key: expecting de6f0bc93bfdf560 got 8ad3dd7fce1ac7ec
May 06 2015 12:17:49 GMT: WARNING (drv_ssd): (drv_ssd.c::1236) read: read wrong key: expecting de6f0bc93bfdf560 got 8ad3dd7fce1ac7ec
May 06 2015 12:17:50 GMT: WARNING (drv_ssd): (drv_ssd.c::1230) read: bad block magic offset 29843600384
May 06 2015 12:17:50 GMT: WARNING (drv_ssd): (drv_ssd.c::1230) read: bad block magic offset 29843600384
My question is: what can I do to investigate the situation, debug and recover? Where to look and what to try?
Thank you.
With best regards,
Daniel Podolsky
UPDATE
Config template (the actual config is generated from this template when the Docker container starts):
service {
    user root
    group root
    paxos-single-replica-limit 1
    pidfile /var/run/aerospike/asd.pid
    service-threads 4
    transaction-queues 4
    transaction-threads-per-queue 4
    proto-fd-max 15000
}

logging {
    file /storage/logs/aerospike.log {
        context any info
    }
    console {
        context any info
    }
}

network {
    service {
        address <%=os.getenv("NODE_EXT_ADDR")%>
        port 3000
    }
    fabric {
        address <%=os.getenv("NODE_INT_ADDR")%>
        port 3001
    }
    heartbeat {
        mode multicast
        address 239.1.99.2
        port 9918
        interface-address <%=os.getenv("NODE_INT_ADDR")%>
        interval 150
        timeout 10
    }
    info {
        address <%=os.getenv("NODE_INT_ADDR")%>
        port 3003
    }
}

namespace storebig {
    replication-factor 3
    memory-size <%=os.getenv("MEM_USE_BIG")%>K
    default-ttl 0
    high-water-disk-pct 98
    high-water-memory-pct 98
    stop-writes-pct 95
    storage-engine device {
        file /storage/data/big.dat
        filesize 3T
        data-in-memory false
    }
}

namespace storefast {
    replication-factor 3
    memory-size <%=os.getenv("MEM_USE_FAST")%>K
    default-ttl 0
    high-water-disk-pct 98
    high-water-memory-pct 98
    stop-writes-pct 95
    storage-engine device {
        file /storage/data/fast.dat
        filesize <%=os.getenv("MEM_USE_FAST")%>K
        data-in-memory true
    }
}

namespace storetest {
    replication-factor 3
    memory-size <%=os.getenv("MEM_USE_FAST")%>K
    default-ttl 0
    high-water-disk-pct 98
    high-water-memory-pct 98
    stop-writes-pct 95
    storage-engine device {
        file /storage/data/test.dat
        filesize 3T
        data-in-memory false
    }
}
After reading over your configuration I believe I have found your problem. Individual devices and files in Aerospike can be no larger than 2 TiB, and yours are configured to 3 TiB. Regrettably there currently isn't a check in the config parser for this limit, and I was unable to find a reference in our docs - both of these issues are being taken care of.
You can instead use multiple files to store your data for each namespace (each file limited to 2 TiB). As discussed elsewhere, you will likely see better performance by using multiple files or devices for a given namespace; for example, see the sketch below.
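Here is a sketch of the storebig namespace split across two files, each under the limit (the file names are illustrative; filesize applies to each file individually):

namespace storebig {
    replication-factor 3
    memory-size <%=os.getenv("MEM_USE_BIG")%>K
    default-ttl 0
    storage-engine device {
        file /storage/data/big-0.dat
        file /storage/data/big-1.dat
        filesize 1536G    # 1.5 TiB per file, 3 TiB total
        data-in-memory false
    }
}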
Reading the Aerospike manual, there is no limitation on device size, only on file size (2TB maximum).
Manual:
Recipe for an SSD Storage Engine
The minimal configuration for an SSD namespace requires setting storage-engine to device and adding a device parameter for each SSD to be used by this namespace. In addition, memory-size may need to be changed from the default of 4GB to a size appropriate for the expected primary index size. For assistance in sizing the primary index, please refer to the Sizing Guide. For performance, we recommend reducing the write-block-size from the default of 1MB to 128KB on SSD-backed namespaces.
Recipe for a HDD Storage Engine with Data in Memory
The minimal configuration for an HDD with Data-in-Memory namespace involves setting storage-engine to device, setting data-in-memory to true, and finally providing a list of file parameters to indicate where data will be persisted. Also, filesize needs to be large enough to support the size of the data on disk (with a maximum allowed value of 2 TiB). Lastly, memory-size may need to be adjusted from the default of 4GB to a size appropriate to handle the expected primary index size and the expected size of the data in memory. For assistance sizing filesize or memory-size, please refer to our Sizing Guide.

Google Cloud - Compute Engine - Error starting instance when the attached disk is READ_ONLY

I have an instance (f1-micro) and a root persistent disk (10GB) on Google Cloud Compute Engine. When I start the instance with the disk attached in READ_WRITE mode, it starts normally, executes my startup script, and I can access it through SSH. However, when I change the disk mode parameter to READ_ONLY, the instance apparently starts normally, but SSH connections time out and my startup script does not run. I suspect that either I need to attach one more root persistent disk with READ_WRITE permission, or I need to set up some configuration on my disk. Could someone give me insight into what is happening? Below are some data and logs:
Body Request:
instance = {
    'name': instance_name,
    'machineType': machine_type_url,
    'disks': [{
        'index': 0,
        'autoDelete': 'false',
        'boot': 'true',
        'type': 'PERSISTENT',
        'mode': 'READ_ONLY',  # READ_ONLY, READ_WRITE
        'deviceName': root_disk_name,
        'source': source_root_disk
    }],
    'networkInterfaces': [{
        'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
        }],
        'network': network_url
    }],
    'serviceAccounts': [{
        'email': service_email,
        'scopes': scopes
    }]
}
Log of started instance on GC-CE:
Changing serial settings was 0/0 now 3/0
Start bios (version 1.7.2-20131007_152402-google)
No Xen hypervisor found.
Unable to unlock ram - bridge not found
Ram Size=0x26600000 (0x0000000000000000 high)
Relocating low data from 0x000e10a0 to 0x000ef780 (size 2161)
Relocating init from 0x000e1911 to 0x265d07a0 (size 63291)
CPU Mhz=2601
=== PCI bus & bridge init ===
PCI: pci_bios_init_bus_rec bus = 0x0
=== PCI device probing ===
Found 4 PCI devices (max PCI bus is 00)
=== PCI new allocation pass #1 ===
PCI: check devices
=== PCI new allocation pass #2 ===
PCI: map device bdf=00:03.0 bar 0, addr 0000c000, size 00000040 [io]
PCI: map device bdf=00:04.0 bar 0, addr 0000c040, size 00000040 [io]
PCI: map device bdf=00:04.0 bar 1, addr febff000, size 00001000 [mem]
PCI: init bdf=00:01.0 id=8086:7110
PIIX3/PIIX4 init: elcr=00 0c
PCI: init bdf=00:01.3 id=8086:7113
Using pmtimer, ioport 0xb008, freq 3579 kHz
PCI: init bdf=00:03.0 id=1af4:1004
PCI: init bdf=00:04.0 id=1af4:1000
Found 1 cpu(s) max supported 1 cpu(s)
MP table addr=0x000fdaf0 MPC table addr=0x000fdb00 size=240
SMBIOS ptr=0x000fdad0 table=0x000fd9c0 size=269
Memory hotplug not enabled. [MHPE=0xffffffff]
ACPI DSDT=0x265fe1f0
ACPI tables: RSDP=0x000fd990 RSDT=0x265fe1c0
Scan for VGA option rom
WARNING - Timeout at i8042_flush:68!
All threads complete.
Found 0 lpt ports
Found 0 serial ports
found virtio-scsi at 0:3
Searching bootorder for: /pci#i0cf8/*#3/*#0/*#0,0
Searching bootorder for: /pci#i0cf8/*#3/*#0/*#1,0
virtio-scsi vendor='Google' product='PersistentDisk' rev='1' type=0 removable=0
virtio-scsi blksize=512 sectors=20971520
Searching bootorder for: /pci#i0cf8/*#3/*#0/*#2,0
...
Searching bootorder for: /pci#i0cf8/*#3/*#0/*#255,0
Scan for option roms
Searching bootorder for: HALT
drive 0x000fd950: PCHS=0/0/0 translation=lba LCHS=1024/255/63 s=20971520
Space available for UMB: 000c0000-000eb800
Returned 122880 bytes of ZoneHigh
e820 map has 6 items:
0: 0000000000000000 - 000000000009fc00 = 1 RAM
1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
3: 0000000000100000 - 00000000265fe000 = 1 RAM
4: 00000000265fe000 - 0000000026600000 = 2 RESERVED
5: 00000000fffbc000 - 0000000100000000 = 2 RESERVED
Unable to lock ram - bridge not found
Changing serial settings was 3/2 now 3/0
enter handle_19:
NULL
Booting from Hard Disk...
Booting from 0000:7c00
[ 1.386940] i8042: No controller found
Loading, please wait...
INIT: version 2.88 booting
[info] Using makefile-style concurrent boot in runlevel S.
[ ok ] Starting the hotplug events dispatcher: udevd.
[ ok ] Synthesizing the initial hotplug events...done.
[....] Waiting for /dev to be fully populated...[ 8.524814] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[ ok ] done.
[ ok ] Activating swap...done.
[....] Checking root file system...fsck from util-linux 2.20.1
fsck.ext4: Operation not permitted while trying to open /dev/sda1
You must have r/w access to the filesystem or be root
fsck died with exit status 8
[FAIL] failed (code 8).
[FAIL] An automatic file system check (fsck) of the root filesystem failed. A manual fsck must be performed, then the system restarted. The fsck should be performed in maintenance mode with the root filesystem mounted in read-only mode. ... failed!
[warn] The root filesystem is currently mounted in read-only mode. A maintenance shell will now be started. After performing system maintenance, press CONTROL-D to terminate the maintenance shell and restart the system. ... (warning).
sulogin: root account is locked, starting shell
root#localhost:~#
Thanks!
You're correct in assuming that the boot disk for a GCE instance needs to be attached in read-write mode. The documentation for root persistent disks says:
To start an instance with an existing root persistent disk in gcutil, provide the boot parameter when you attach the disk. When you create a root persistent disk using a Google-provided image, you must attach it to your instance in read-write mode. If you try to attach it in read-only mode, your instance may be created successfully, but it won't boot up correctly.
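So the fix in the request body above is simply to attach the boot disk read-write:

'disks': [{
    'index': 0,
    'autoDelete': 'false',
    'boot': 'true',
    'type': 'PERSISTENT',
    'mode': 'READ_WRITE',  # the boot disk must be attached read-write
    'deviceName': root_disk_name,
    'source': source_root_disk
}],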