I have a Debian VM deployed at Bluemix, and I want to increase its available disk space by mounting a Block Storage device.
I followed the instructions for the new beta Block Storage service and created a volume, then attached it to the VM as a new device. But it seems that although the volume is attached to the VM, it is not automatically mounted.
I tried several ways to mount it, but I did not find the correct one. In fact, I even tried to clone the line in fstab referring to the mounted root device (I suspected the additional volume should be similar), but it did not work (and even broke the reboot of my machine)... So, can someone please advise me how to mount the Bluemix Block Storage service on the VM?
Thanks!
By attaching a volume you've essentially done the equivalent of plugging a raw, physical hard disk into your system. Before you can mount it, you'll have to format it with a filesystem known to your OS.
After attaching the device you should be able to see the raw block device, for example with the lsblk command:
[mysys]# lsblk
sr0 11:0 1 416K 0 rom
vda 252:0 0 20G 0 disk
└─vda1 252:1 0 20G 0 part /
vdb 252:16 0 25G 0 disk
Typically vda is your root device, so in this example the additional device is vdb with 25GB.
Now you can create a filesystem with the mkfs command, for example:
[mysys]# mkfs.ext4 /dev/vdb
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1638400 inodes, 6553600 blocks
...
mkfs supports different filesystems, so you might want to check the man pages on the system you're using (man mkfs).
Now all that's left is to create a mount point and mount the new filesystem:
[mysys]# mkdir /mnt/test
[mysys]# mount /dev/vdb /mnt/test
The additional space is now available:
[mysys]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 946M 18G 5% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/vdb 25G 172M 24G 1% /mnt/test
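If you want the mount to survive a reboot, you can also add it to /etc/fstab. A minimal sketch, assuming the ext4 filesystem and mount point created above; the nofail option tells the system to keep booting even if the volume is missing, which avoids the broken-boot situation described in the question:
/dev/vdb   /mnt/test   ext4   defaults,nofail   0   2
Using the filesystem UUID (shown by blkid /dev/vdb) instead of /dev/vdb is more robust, since device names can change between reboots.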
Cinder volumes and backups work properly with Ceph as the backend, but when I tried to use NFS instead of Ceph for backups, it is not working.
The NFS server is installed, and I mounted the share path on my controller.
I added the details below to the [DEFAULT] section of cinder.conf:
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
backup_container = None
backup_enable_progress_timer = True
backup_file_size = 1999994880
backup_mount_attempts = 3
backup_mount_options = "vers:3"
backup_mount_point_base = /mnt/nfs
backup_posix_path = /mnt/nfs
backup_sha_block_size_bytes = 32768
backup_share = XX.XX.XX.XX:/mnt/Backup_NFS/Karthigaa_test
I restarted all the Cinder services.
# cinder service-list
shows cinder-backup as up.
====
I manually mounted the NFS share on my controller machine:
root#controller1:/mnt/nfs# mount |grep Karthigaa
10.0.0.13:/mnt/Backup_NFS/Karthigaa_test on /mnt/nfs type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.13,mountvers=3,mountport=939,mountproto=udp,local_lock=none,addr=10.0.0.13)
but the error below is hit in cinder-backup.log:
2022-10-18 11:12:01.500 175352 ERROR oslo.service.loopingcall raise exception.BrickException(_("NFS mount failed for share %(sh)s. "
2022-10-18 11:12:01.500 175352 ERROR oslo.service.loopingcall os_brick.exception.BrickException: NFS mount failed for share 10.0.0.13:/mnt/Backup_NFS/Karthigaa_test. Error - {'nfs': 'Unexpected error while running command.\nCommand: mount -t nfs -o vers:3 10.0.0.13:/mnt/Backup_NFS/Karthigaa_test /mnt/nfs/899cc81bf1128501984b61abf1f49288\nExit code: 32\nStdout: \'\'\nStderr: "mount.nfs: parsing error on \'vers=\' option\\n"'}
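The stderr in that traceback ("parsing error on 'vers=' option") hints that the mount options string itself may be the problem: mount.nfs expects key=value pairs, while the configuration above passes vers:3 with a colon. A guess based on that log line rather than a verified fix, the cinder.conf entry would presumably need the = separator:
backup_mount_options = vers=3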
The NFS share works fine when I create files on it manually, but when I try to take a volume backup it does not work and shows an error in Horizon (the UI),
while nothing is hitting the cinder-backup log file either.
Can someone please help me out?
I run a server node from a Java package (xxx.jar), but an exception like this occurs:
Caused by: java.nio.file.FileSystemException: /home/ranger/EIIP/tools/work/db/ServerNode/cache-TOFTableCache/part-942.bin: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newAsynchronousFileChannel(UnixFileSystemProvider.java:196)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:248)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:301)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.<init>(AsyncFileIO.java:66)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory.create(AsyncFileIOFactory.java:44)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:523)
... 31 more
This only occurs in an Ubuntu VM running on Windows; there is no such exception when running on a native Ubuntu system. I tried the following methods, but the problem remains:
vim /etc/security/limits.conf
root soft nofile 10240
root hard nofile 20480
vim /etc/sysctl.conf
fs.inotify.max_user_watches=524288
ulimit -n 4096
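One thing worth checking is whether those limits actually reach the running JVM, since limits.conf only applies to new login sessions. A quick sketch, assuming the process can be found by matching xxx.jar on its command line:
# find the Ignite JVM and inspect the limits it is actually running with
pgrep -f xxx.jar
grep 'open files' /proc/$(pgrep -f xxx.jar | head -n1)/limits
If the "Max open files" values shown there are still the defaults, the limits.conf changes are not being applied to the session that starts the JVM.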
This is my code:
IgniteConfiguration igniteCfg = new IgniteConfiguration();
igniteCfg.setConsistentId("ServerNode"); //Set Consistent ID
// Ignite Persistence
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("TableCache_Region");
regionCfg.setInitialSize(100L * 1024 * 1024);
regionCfg.setMaxSize(8L * 1024 * 1024 * 1024);
regionCfg.setPersistenceEnabled(true);
storageCfg.setDataRegionConfigurations(regionCfg);
storageCfg.setPageSize(4096); // Changing the page size to 4 KB.
storageCfg.setWriteThrottlingEnabled(true); // Enabling the writes throttling.
igniteCfg.setDataStorageConfiguration(storageCfg);
igniteCfg.setWorkDirectory(System.getProperty("user.dir") + "/work"); // System.getProperty("java.class.path")
Ignite ignite = Ignition.start(igniteCfg);
ignite.cluster().baselineAutoAdjustEnabled(false);
// Activate a cluster automatically once all the nodes of the baseline topology have joined after a cluster restart.
ignite.cluster().active(true);
// Manually setting Baseline Topology
Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();
// Set all server nodes to baseline topology
ignite.cluster().setBaselineTopology(nodes);
Any idea how to resolve this issue?
Thanks.
As far as I know, to persist the ulimit values across reboots you should set them in the configuration file:
/etc/security/limits.conf
It contains "soft" and "hard" options. Hard options for root, soft for others.
Using ulimit command you can overwrite the "soft" values for current user and session. Probably your limits weren't stored or you set "soft" options but start the GridGain using sudo command and your "hard" options were incorrect.
Could you please double-check and provide the following information:
1) Which operating system are you using?
2) Do you have an /etc/security/limits.conf file in your environment?
3) Do you have correct values for the user that will start Ignite? If you started it as root, then check the "hard" options.
In any case, I suggest setting the following options there:
ignite soft nofile 65536
ignite hard nofile 65536
ignite soft nproc 65536
ignite hard nproc 65536
where ignite is the username used to start Ignite.
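To double-check that the new limits are actually picked up, you can open a session as that user and print the effective values (a quick sketch; substitute the real username if it differs):
# soft and hard open-file limits as seen by the ignite user
su - ignite -c 'ulimit -Sn; ulimit -Hn'
Note that already-running processes keep the limits they started with, so Ignite has to be restarted after changing limits.conf.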
UDP send of XXXX bytes failed with error 11
I am running a WebRTC streaming app on Ubuntu 16.04.
It streams video and audio from a Logitech HD Webcam C930e within an Electron desktop app.
It all works fine and smoothly on my other machine, a MacBook Pro. But on my Ubuntu machine I receive errors 10-20 seconds after the peer connection is established:
[2743:0513/193817.691636:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1019 bytes failed with error 11
[2743:0513/193817.691775:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696615:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696777:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.712369:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1029 bytes failed with error 11
[2743:0513/193817.712952:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.713086:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.717713:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
==> By the way, if I do NOT stream audio but video only, I get the same errors, just with "video" in the log lines instead...
Somewhere in between those lines I also get one line that says:
[3441:0513/195919.377887:ERROR:stunport.cc(506)] sendto: [0x0000000b] Resource temporarily unavailable
I also looked into sysctl.conf and increased the values there. My current sysctl.conf looks like this:
fs.file-max=1048576
fs.inotify.max_user_instances=1048576
fs.inotify.max_user_watches=1048576
fs.nr_open=1048576
net.core.netdev_max_backlog=1048576
net.core.rmem_max=16777216
net.core.somaxconn=65535
net.core.wmem_max=16777216
net.ipv4.tcp_congestion_control=htcp
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_max_orphans=1048576
net.ipv4.tcp_max_syn_backlog=20480
net.ipv4.tcp_max_tw_buckets=400000
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_wmem=4096 65535 16777216
vm.max_map_count=1048576
vm.min_free_kbytes=65535
vm.overcommit_memory=1
vm.swappiness=0
vm.vfs_cache_pressure=50
As suggested here: https://gist.github.com/cdgraff/7920db287988463aafd7ea09eef6f9f0
It does not seem to help. I am still getting these errors and I experience lagging on the other side.
Additional info: on Ubuntu, the Electron app connects to a Heroku server (Node.js), and the other side of the peer connection (a Chrome browser) also connects to it. The Heroku server acts as the handshaking (signaling) server to establish the WebRTC connection. Both use this configuration:
{'urls': 'stun:stun1.l.google.com:19302'},
{'urls': 'stun:stun2.l.google.com:19302'},
and also an additional TURN server from numb.viagenie.ca.
Connection is established and within the first 10 seconds the quality is very high and there is no lagging at all. But then after 10-20 seconds there is lagging and on the Ubuntu console I am getting these UDP errors.
The PC that Ubuntu is running on:
PROCESSOR / CHIPSET:
CPU Intel Core i3 (2nd Gen) 2310M / 2.1 GHz
Number of Cores: Dual-Core
Cache: 3 MB
64-bit Computing: Yes
Chipset Type: Mobile Intel HM65 Express
RAM:
Memory Speed: 1333 MHz
Memory Specification Compliance: PC3-10600
Technology: DDR3 SDRAM
Installed Size: 4 GB
Rated Memory Speed: 1333 MHz
Graphics
Graphics Processor Intel HD Graphics 3000
Could anyone please give me some hints or anything that could solve this problem?
Thank you
==============EDIT=============
Somewhere in my very large strace log I found these two lines:
7671 sendmsg(17, {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 11}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 11
7661 <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 12}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_SOCKET, cmsg_type=SCM_CREDENTIALS, {pid=7671, uid=0, gid=0}}], msg_flags=0}, 0) = 11
On top of that, near where the error happens (at the end of the log file, just before I quit the application), I see the following in the log file:
https://gist.github.com/Mcdane/2342d26923e554483237faf02cc7cfad
First, to get an impression of what is happening in the first place, I'd look at it with strace. Start your application with:
strace -e network -o log.strace -f YOUR_APPLICATION
If your application looks for another running process to hand the work to, start it with parameters so that it doesn't. For instance, for Chrome, pass a --user-data-dir value that is different from your default.
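For example, the invocation might look like this (a sketch; the binary name and profile path are placeholders):
# trace a fresh Chrome instance so strace follows the process doing the network I/O
strace -e network -o log.strace -f google-chrome --user-data-dir=/tmp/strace-profile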
Afterwards, look for = 11 in the output file log.strace, and see what happened before and after. This will give you a rough picture of what is happening, and you can rule out silly mistakes like sendto calls to 0.0.0.0 or the like (for this reason, this is also very important information to include in a Stack Overflow question, for instance by uploading the output to a gist).
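A simple way to locate those spots, assuming the log file name from the command above (error 11 is EAGAIN, so the errno name may appear instead of the number):
grep -nE ' = 11|EAGAIN' log.strace | head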
It may also be helpful to use Wireshark or another packet capture program to get a rough overview of what is being sent.
Assuming you can confirm with strace that a valid send call is taking place, you can then further analyze the error conditions.
Error 11 is EAGAIN. The documentation of send says when this error is supposed to happen:
EAGAIN (...) The socket is marked nonblocking and the requested operation would block. (...)
EAGAIN (Internet domain datagram sockets) The socket referred to by sockfd had not previously been bound to an address and, upon attempting to bind it to an ephemeral port, it was determined that all port numbers in the ephemeral port range are currently in use. See the discussion of /proc/sys/net/ipv4/ip_local_port_range in ip(7).
Both conditions could apply.
The first will be obvious from the strace log if you trace the creation of the socket involved.
To exclude the second, you can run netstat -una (or, if you want to know the programs involved, sudo netstat -unap) to see which ports are in use (if you want Stack Overflow users to look into it, post the output on a gist or similar and link to it here). Your port range net.ipv4.ip_local_port_range=1024 65535 is not the standard 32768 60999; it looks like you already attempted to do something about a lack of port numbers. It would help to trace back the reason why you changed that parameter, and the conditions that convinced you to do so.
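To get a quick feel for ephemeral-port pressure, something like this could work (a rough sketch; header lines and thresholds vary by system):
# number of UDP sockets currently open, versus the configured ephemeral range
netstat -una | tail -n +3 | wc -l
cat /proc/sys/net/ipv4/ip_local_port_range
If the socket count is nowhere near the size of the range, port exhaustion is unlikely and the nonblocking-send case is the better suspect.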
How To Mount USB External Storage Drive onto ESXi 5.5 Host for VM Backup
After plugging in the USB drive, "esxcli storage core device list" shows that a USB drive is attached, but I am unable to access it.
"esxcli storage core device list"
mpx.vmhba38:C0:T0:L0
Display Name: Local USB Direct-Access (mpx.vmhba38:C0:T0:L0)
Has Settable Display Name: false
Size: 1907729
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/mpx.vmhba38:C0:T0:L0
Vendor: Seagate
Model: BUP Slim BL
Revision: 0108
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: true
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0000000000766d68626133383a303a30
Is Local SAS Device: false
Is USB: true
Is Boot USB Device: false
No of outstanding IOs with competing worlds: 32
"esxcli storage core path list -d mpx.vmhba38:C0:T0:L0"
usb.vmhba38-usb.0:0-mpx.vmhba38:C0:T0:L0
UID: usb.vmhba38-usb.0:0-mpx.vmhba38:C0:T0:L0
Runtime Name: vmhba38:C0:T0:L0
Device: mpx.vmhba38:C0:T0:L0
Device Display Name: Local USB Direct-Access (mpx.vmhba38:C0:T0:L0)
Adapter: vmhba38
Channel: 0
Target: 0
LUN: 0
Plugin: NMP
State: active
Transport: usb
Adapter Identifier: usb.vmhba38
Target Identifier: usb.0:0
Adapter Transport Details: Unavailable or path is unclaimed
Target Transport Details: Unavailable or path is unclaimed
Maximum IO Size: 122880
Note: I stopped usbarbitrator.
/etc/init.d/usbarbitrator status
usbarbitrator is not running
Please advise.
There are very few USB drives that work directly against an ESXi host, and versions prior to 6.0 had a very strict ruleset against mounting unsupported devices.
I'm assuming you're attempting to mount the USB drive as a VMFS volume, which is unsupported regardless of version. However, there is a list of supported devices that work for passthrough activities, and I would assume these could be used as VMFS mounts as well: https://kb.vmware.com/kb/1021345
While it looks like you're following the proper steps, here's a detailed list that may help: https://kb.vmware.com/s/article/2065934
Alternatively, since USB performance is generally pretty slow on ESXi hosts, you could also perform your backup activities on the VM and then SCP the files off the ESXi host to the USB drive.
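If you go that route, the copy might look roughly like this (a hypothetical example; the datastore path, user, and destination are placeholders, and SSH access to the host must be enabled):
# copy a backup from the ESXi datastore to a machine where the USB drive is mounted
scp /vmfs/volumes/datastore1/backups/myvm-backup.tar.gz user@backup-host:/mnt/usb/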
On an AIX server I've been trying to save my IBM Broker (version 8.0) logs, but they're not getting logged at all. I've configured the syslog file using either a tab as the first separator, or by simply separating the path/file and the facility "user" with space characters.
syslog.conf
user.info /var/mqsi/info.log rotate size 4m files 4
user.err /var/mqsi/err.log rotate size 4m files 4
I also tried:
user.info /var/mqsi/info.log rotate size 4m files 4
user.err /var/mqsi/err.log rotate size 4m files 4
Then I ran:
refresh -s syslogd
After that I waited a couple of hours and the files were still without any logs.
The info.log and err.log files are owned by root and group system, and have mode 640 (owner read/write, group read) configured.
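One quick way to narrow this down is to test whether syslogd itself routes the user facility correctly, independently of the broker (a sketch using the standard logger utility, which AIX provides):
# send a test message at user.info and check whether it lands in the file
logger -p user.info "syslog routing test"
tail /var/mqsi/info.log
If the test message doesn't arrive either, the problem is on the syslogd side (note that AIX syslogd only writes to files that already exist, so the targets must be created with touch first); if it does arrive, the broker configuration is the place to look.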