Errors during downloading metadata for repository 'AppStream' - yum

When I use yum list java*, I get an error like the following:
[root@crucialer ~]# yum list java*
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
CentOS-8 - AppStream 17 kB/s | 2.3 kB 00:00
Errors during downloading metadata for repository 'AppStream':
- Status code: 404 for http://mirrors.cloud.aliyuncs.com/centos/8/AppStream/x86_64/os/repodata/repomd.xml (IP: 100.100.2.148)
Error: Failed to download metadata for repo 'AppStream': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
[root@crucialer ~]# ping 100.100.2.148
PING 100.100.2.148 (100.100.2.148) 56(84) bytes of data.
64 bytes from 100.100.2.148: icmp_seq=1 ttl=102 time=1.94 ms
64 bytes from 100.100.2.148: icmp_seq=2 ttl=102 time=1.88 ms
64 bytes from 100.100.2.148: icmp_seq=3 ttl=102 time=2.08 ms
64 bytes from 100.100.2.148: icmp_seq=4 ttl=102 time=1.94 ms
64 bytes from 100.100.2.148: icmp_seq=5 ttl=102 time=1.93 ms
^C
--- 100.100.2.148 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 1.883/1.953/2.078/0.076 ms
[root@crucialer ~]# ping www.baidu.com
PING www.a.shifen.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=1 ttl=50 time=15.6 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=2 ttl=50 time=15.2 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=3 ttl=50 time=15.2 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=4 ttl=50 time=15.3 ms
^C
--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 15.223/15.331/15.581/0.210 ms
[root@crucialer yum.repos.d]# cat CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-$releasever - Base - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-$releasever - Updates - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - 163.com
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
Any suggestions would be greatly appreciated.

Pierz has rightly pointed this out. I would like to add a few commands that change the repos to vault.centos.org:
# sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
# sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
Now run the command: yum list java*
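The effect of those two sed substitutions can be sketched on a throwaway copy of a repo file rather than the live config (the file contents below are a minimal, hypothetical excerpt):

```shell
# Demonstrate the vault.centos.org rewrite on a scratch file
cat > /tmp/demo.repo <<'EOF'
[base]
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
EOF
sed -i 's/mirrorlist/#mirrorlist/g' /tmp/demo.repo
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /tmp/demo.repo
cat /tmp/demo.repo    # mirrorlist is now commented out; baseurl points at the vault
```

After editing the real files under /etc/yum.repos.d/, refreshing the metadata with dnf clean all && dnf makecache is a reasonable next step.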

This is probably down to the fact that CentOS Linux 8 has reached End Of Life. The linked article explains that if you want to stay with CentOS 8 you'll need to change the repos to use vault.centos.org, BUT there will be no further updates. If you want to keep receiving updates you should migrate to CentOS Stream - one way to do this:
sudo dnf --disablerepo '*' --enablerepo=extras swap centos-linux-repos centos-stream-repos
sudo dnf distro-sync
Also, looking at your config, it seems you have some references to CentOS 7 which might interfere with things, though hopefully the update will deal with them. Note: CentOS 7 is supported until 2024-06-30.
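To spot those CentOS 7 leftovers, a recursive grep over the repo directory is enough. Sketched here on a scratch directory mirroring one line from the question's config; on the real system, point it at /etc/yum.repos.d/ instead:

```shell
# Scan repo definitions for stale CentOS 7 references
mkdir -p /tmp/repos.d
cat > /tmp/repos.d/CentOS-Base.repo <<'EOF'
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
EOF
grep -rn 'CentOS-7' /tmp/repos.d/
```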


UDP send of XXXX bytes failed with error 11
I am running a WebRTC streaming app on Ubuntu 16.04.
It streams video and audio from Logitec HD Webcam c930e within an Electronjs Desktop App.
It all works fine and smooth running on my other machine Macbook Pro. But on my Ubuntu machine I receive errors after 10-20 seconds when the peer connection is established:
[2743:0513/193817.691636:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1019 bytes failed with error 11
[2743:0513/193817.691775:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696615:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696777:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.712369:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1029 bytes failed with error 11
[2743:0513/193817.712952:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.713086:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.717713:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
==> By the way, if I do NOT stream audio but video only, I get the same error, just with "video" instead of "audio" in the log lines...
Somewhere in between those lines I also got one that says:
[3441:0513/195919.377887:ERROR:stunport.cc(506)] sendto: [0x0000000b] Resource temporarily unavailable
I also looked into sysctl.conf and increased the values there. My current sysctl.conf looks like this:
fs.file-max=1048576
fs.inotify.max_user_instances=1048576
fs.inotify.max_user_watches=1048576
fs.nr_open=1048576
net.core.netdev_max_backlog=1048576
net.core.rmem_max=16777216
net.core.somaxconn=65535
net.core.wmem_max=16777216
net.ipv4.tcp_congestion_control=htcp
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_max_orphans=1048576
net.ipv4.tcp_max_syn_backlog=20480
net.ipv4.tcp_max_tw_buckets=400000
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_wmem=4096 65535 16777216
vm.max_map_count=1048576
vm.min_free_kbytes=65535
vm.overcommit_memory=1
vm.swappiness=0
vm.vfs_cache_pressure=50
As suggested here: https://gist.github.com/cdgraff/7920db287988463aafd7ea09eef6f9f0
It does not seem to help. I am still getting these errors and I experience lagging on the other side.
Additional info: on Ubuntu, the Electronjs app connects to a Heroku server (Node.js), and the other side of the peer connection (a Chrome browser) also connects to it. The Heroku server acts as a handshaking server to establish the WebRTC connection. Both sides have this configuration:
{'urls': 'stun:stun1.l.google.com:19302'},
{'urls': 'stun:stun2.l.google.com:19302'},
and also an additional Turn Server from numb.viagenie.ca
Connection is established and within the first 10 seconds the quality is very high and there is no lagging at all. But then after 10-20 seconds there is lagging and on the Ubuntu console I am getting these UDP errors.
The PC that Ubuntu is running on:
PROCESSOR / CHIPSET:
CPU Intel Core i3 (2nd Gen) 2310M / 2.1 GHz
Number of Cores: Dual-Core
Cache: 3 MB
64-bit Computing: Yes
Chipset Type: Mobile Intel HM65 Express
RAM:
Memory Speed: 1333 MHz
Memory Specification Compliance: PC3-10600
Technology: DDR3 SDRAM
Installed Size: 4 GB
Rated Memory Speed: 1333 MHz
Graphics
Graphics Processor Intel HD Graphics 3000
Could anyone please give me some hints or anything that could solve this problem?
Thank you
==============EDIT=============
I found these two lines somewhere in my very large strace log:
7671 sendmsg(17, {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 11}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 11
7661 <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 12}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_SOCKET, cmsg_type=SCM_CREDENTIALS, {pid=7671, uid=0, gid=0}}], msg_flags=0}, 0) = 11
On top of that, somewhere near where the error happens (at the end of the log file, just before I quit the application), I see the following in the log file:
https://gist.github.com/Mcdane/2342d26923e554483237faf02cc7cfad
First, to get an impression of what is happening in the first place, I'd look with strace. Start your application with
strace -e network -o log.strace -f YOUR_APPLICATION
If your application looks for another running process to hand the work to, start it with parameters so it doesn't do that. For instance, for Chrome, pass in a --user-data-dir value that is different from your default.
Look for = 11 in the output file log.strace afterwards, and look at what happened before and after. This will give you a rough picture of what is happening, and you can exclude silly mistakes like sendtos to 0.0.0.0 or so. (For this reason, this is also very important information to include in a Stack Overflow question, for instance by uploading the output to a gist.)
It may also be helpful to use Wireshark or another packet capture program to get a rough overview of what is being sent.
Assuming you can confirm with strace that a valid send call is taking place, you can then further analyze the error conditions.
Error 11 is EAGAIN. The documentation of send says when this error is supposed to happen:
EAGAIN (...) The socket is marked nonblocking and the requested operation would block. (...)
EAGAIN (Internet domain datagram sockets) The socket referred to by sockfd had not previously been bound to an address and, upon attempting to bind it to an ephemeral port, it was determined that all port numbers in the ephemeral port range are currently in use. See the discussion of /proc/sys/net/ipv4/ip_local_port_range in ip(7).
Both conditions could apply.
The first will be obvious from the strace log if you trace the creation of the socket involved.
To exclude the second, you can run netstat -una (or, if you want to know the programs involved, sudo netstat -unap) to see which ports are open (if you want Stack Overflow users to look into it, post the output on gist or similar and link to it here). Your port range net.ipv4.ip_local_port_range=1024 65535 is not the standard 32768 60999; this looks like you attempted to do something about lacking port numbers already. It would help to trace back to the reason of why you changed that parameter, and the conditions that convinced you to do so.
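Both checks can be done from /proc alone, assuming a Linux system:

```shell
# Current ephemeral port range (two numbers: low and high)
cat /proc/sys/net/ipv4/ip_local_port_range
# Rough count of currently bound IPv4 UDP sockets (minus the header line)
echo $(( $(wc -l < /proc/net/udp) - 1 ))
```

If the second number ever approaches the size of the range, ephemeral-port exhaustion becomes a plausible cause for EAGAIN.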

Bacula: Director's connection to SD for this Job was lost

SD 7.4.4 (Ubuntu 16), Director 7.4.4 (Ubuntu 16), FD 5.2.10 (Windows)
I'm having trouble backing up Windows clients with Bacula. I can run a backup just fine when the backup size is around 1-2 MB, but when running a backup of 500 MB, I get the same error every time:
"Director's connection to SD for this Job was lost."
Some things to mention. When I issue status client:
Terminated Jobs: JobId Level Files Bytes Status Finished
======================================================================
81 Full 5,796 514.8 M OK 06-Nov-17 12:50 BackupComputerA
When I issue status dir
06-Nov 17:58 acme-director JobId 81: Error: Director's connection to SD for this Job was lost.
06-Nov 17:58 acme-director JobId 81: Error: Bacula acme-director 7.4.4 (202Sep16):
Build OS: arm-unknown-linux-gnueabihf debian 9.0
JobId: 81
Job: BackupComputerA.2017-11-06_17.41.01_03
Backup Level: Full (upgraded from Incremental)
Client: "Computer-A-fd" 5.2.10 (28Jun12) Microsoft (build 9200), 32-bit,Cross-compile,Win32
FileSet: "Full Set" 2017-11-03 22:12:58
Pool: "RemoteFile" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "File1" (From Job resource)
Scheduled time: 06-Nov-2017 17:40:59
Start time: 06-Nov-2017 17:41:04
End time: 06-Nov-2017 17:58:00
Elapsed time: 16 mins 56 secs
Priority: 10
FD Files Written: 5,796
SD Files Written: 0
FD Bytes Written: 514,883,164 (514.8 MB)
SD Bytes Written: 0 (0 B)
Rate: 506.8 KB/s
Software Compression: 100.0% 1.0:1
Snapshot/VSS: yes
Encryption: yes
Accurate: no
Volume name(s):
Volume Session Id: 1
Volume Session Time: 1509989906
Last Volume Bytes: 8,045,880,119 (8.045 GB)
Non-fatal FD errors: 1
SD Errors: 0
FD termination status: OK
SD termination status: Error
Termination: *** Backup Error ***
About 5 minutes into the backup, I get a message:
Running Jobs:
Console connected at 06-Nov-17 18:08
JobId Type Level Files Bytes Name Status
======================================================================
83 Back Full 0 0 BackupComputerE has terminated
====
The job completes and terminates, but the connection is lost afterwards and I never get an "OK" for the status update.
I have added "Heartbeat Interval = 1 Minute" to all the daemons and still no luck. I'm using MySQL as the database on the Director.
Thanks in advance for any help.
For anyone having the same issue: I was able to fix this problem between the SD and the Director by adding the heartbeat interval to the clients and adjusting the keepalive time with
sysctl -w net.ipv4.tcp_keepalive_time=60
on both the Storage Daemon and the Director. Connecting remotely to the Director with bconsole also interrupted jobs, so I ran bconsole on the same machine as the Director and connected via SSH.
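A sysctl -w change is lost on reboot; to persist it, the same line can go into a sysctl drop-in (the file name below is an arbitrary choice):

```
# /etc/sysctl.d/99-bacula.conf -- applied at boot, or manually via: sysctl -p /etc/sysctl.d/99-bacula.conf
net.ipv4.tcp_keepalive_time = 60
```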

apache2 processes stuck in sending reply - W

I am hosting multiple sites on a server with 7.5 GB of RAM, using apache2 with mpm_prefork.
The following command gives me a value of 200-300 in production:
ps aux|grep -c 'apache2'
Using top I see that only a few hundred megabytes of RAM are free. The error log shows nothing unusual. Are this many apache2 processes normal?
MaxRequestWorkers is set to 512.
Update:
Now I am using mod_status to check Apache activity.
I have a row like this
Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
0-0 29342 2/2/70 W 0.07 5702 0 3.0 0.00 1.67 XXX XXX /someurl
If I check again after some time, the PID does not change and the SS value is greater than the previous time. The M column for this request is 'W', the sending-reply state. Does that mean the apache2 process is stuck on that request?
On my VPS and root servers the situation is partially similar. AFAIK the OS tries to distribute most of the processing power/RAM to running processes and frees resources for other processes as the need arises.
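One way to sanity-check whether a few hundred prefork children even fit in 7.5 GB is simple division. All numbers below are illustrative assumptions; the real per-child figure should be measured, e.g. with ps -ylC apache2 --sort=rss:

```shell
# Back-of-the-envelope capacity check for mpm_prefork
total_mb=7500        # server RAM from the question
reserved_mb=1500     # headroom for PHP/DB/OS (assumption)
per_child_mb=40      # average apache2 child RSS (assumption -- measure yours)
echo $(( (total_mb - reserved_mb) / per_child_mb ))   # prints 150
```

With these placeholder numbers, a MaxRequestWorkers of 512 would overcommit RAM badly, which matches the symptom of only a few hundred MB free at 200-300 processes.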

Cassandra Cluster with Multiple Datacenter fail Authenticaion only for a specific user

I've created a Cassandra cluster as follows:
Datacenter: MG
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 192.168.0.120 128.45 KB 256 13.3% e1c9e29f-b6f4-4e9f-89f2-bd19153e3253 RACK01
UN 192.168.0.121 115.01 KB 256 12.6% a45f35b7-dbcc-4b09-a35f-5836cabfdedb RACK01
Datacenter: SP
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 192.168.0.101 143.25 KB 256 13.1% 3b3bccf9-63bf-4a33-8efb-412efec35f3d RACK01
UN 192.168.0.100 126.63 KB 256 12.4% 1123cc2f-4ae3-4045-bfe5-1395c36692de RACK01
UN 192.168.0.103 151.64 KB 256 11.2% a9baf020-a1af-4b08-825c-b0e49e938802 RACK02
UN 192.168.0.102 150.65 KB 256 12.3% ce96514f-6f23-4c02-b246-86c0be717ca5 RACK02
Datacenter: RJ
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 192.168.0.111 155.03 KB 256 12.9% 2157445b-d455-4e19-8394-0a9f67397f2e RACK01
UN 192.168.0.110 131.14 KB 256 12.3% 9aaa5de2-37fc-4810-9563-4d711a9457e6 RACK01
For every node I've enabled PasswordAuthenticator and created a user "username" with password "password" as a superuser. I've also configured rpc_address to each node's IP address.
When typing ./cqlsh -u username -p password ip_addr on any node from the "SP" datacenter, I can connect. However, if I try to connect to any node from the "MG" or "RJ" datacenters, the response is an AuthenticationException: Cannot achieve consistency level LOCAL_ONE.
Two things are bugging me:
When I try to connect using username "cassandra" and password "cassandra", I can connect to any node of any datacenter.
I double-checked, and every keyspace on every node has a consistency level of ONE.
Does anyone have any clue about what could be wrong?
Thanks
I've just done some research and found out that, despite creating a multi-datacenter cluster, the "system_auth" keyspace is configured by default to use SimpleStrategy; such a strategy should not be used with multiple datacenters.
So I changed the system_auth keyspace using the ALTER KEYSPACE command, setting a replication of 'class':'NetworkTopologyStrategy', 'MG':1, 'SP':2, 'RJ':1.
Now I am able to connect on every node, since all username/password data was replicated to all datacenters.
I'm still learning some tricks about Cassandra, so excuse the lack of deeper information.
Anyway, hope this can help.
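For reference, a sketch of the statement described above, with the replication factors from this answer (run from cqlsh as a superuser against a live cluster):

```sql
-- Replicate system_auth to every datacenter instead of the default SimpleStrategy
ALTER KEYSPACE system_auth WITH replication = {
  'class': 'NetworkTopologyStrategy', 'MG': 1, 'SP': 2, 'RJ': 1
};
```

After the ALTER, running nodetool repair system_auth on each node pushes the existing credentials out to the new replicas.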

Apache configuration fine tuning

I run a very simple website (basically a redirect based on a PHP database) which gets on average 5 visits per second throughout the day, but at peak times (usually 2-3 times a day) this may go up to 300 visits/s or more. I've modified the default Apache settings as follows (based on various info found online, as I'm not an expert):
Start Servers: 5 (default) / 25 (now)
Minimum Spare Servers: 5 (default) / 50 (now)
Maximum Spare Servers: 10 (default) / 100 (now)
Server Limit: 256 (default) / 512 (now)
Max Clients: 150 (default) / 450 (now)
Max Requests Per Child: 10000 (default)
Keep-Alive: On (default) / Off (now)
Timeout: 300 (default)
Server (VPS) specs:
4x 3199.998 MHz, 512 KB Cache, QEMU Virtual CPU version (cpu64-rhel6)
8GB RAM (Memory: 8042676k/8912896k available (5223k kernel code, 524700k absent, 345520k reserved, 7119k data, 1264k init)
70GB SSD disk
CENTOS 6.5 x86_64 kvm – server
During average loads the server handles things just fine. Problems occur almost every day during peak traffic times: HTTP time-outs or extremely long response/load times.
The question is: do I need to get a better server, or can I improve response times during peak traffic by further tuning the Apache config? Any help would be appreciated. Thank you!
Maybe you need to enable mod_cache with mod_mem_cache. Another parameter that I always configure is ulimits:
nofile to get more sockets
nproc to get more processes
http://www.webperformance.com/load-testing/blog/2012/12/setting-apache2-ulimit-for-maximum-prefork-performance/
Finally, TCP and network tuning: check all the net.core and net.ipv4 parameters to reduce latency:
http://fasterdata.es.net/host-tuning/linux/
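The ulimit values mentioned above can be inspected from the shell that launches Apache; persistent changes belong in /etc/security/limits.conf rather than the shell:

```shell
# Current soft limits for this shell; Apache workers inherit similar limits
ulimit -n    # nofile: open file descriptors (each socket counts against this)
ulimit -u    # nproc: maximum user processes
```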