I have installed SAP server 7.51 on an Ubuntu virtual machine (VMware). The installation was successful, but when I start it I get the following error:
ubuntu:npladm 1> startsap all
No instance profiles found
please send the tracefile /home/npladm/startsap.trc to support
I typed ifconfig and got the following:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.234.130 netmask 255.255.255.0 broadcast 192.168.234.255
inet6 fe80::e152:4277:1c5f:3311 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:9f:48:53 txqueuelen 1000 (Ethernet)
RX packets 1739 bytes 1139138 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1406 bytes 145716 (145.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 5009 bytes 1102113 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5009 bytes 1102113 (1.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I checked that the IP address in the hosts file is correct:
cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu
# mapping the Ubuntu loopback IP 127.0.1.1 to ubuntu
192.168.234.130 ubuntu ubuntu.dummy.nodomain
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 localhost
127.0.1.1 ubuntu
# mapping the Ubuntu loopback IP 127.0.1.1 to vhcalnplci
127.0.1.1 vhcalnplci vhcalnplci.dummy.nodomain
As you can see, the IP address should be correct (192.168.234.130) and is pingable, but it is still not working.
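A quick way to double-check what the resolver actually returns for these names (the host names below are the ones in my /etc/hosts; getent and hostname are standard tools):
getent hosts ubuntu vhcalnplci
hostname -f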
Update:
Here is also the trace log file:
Trace of system startup/check of SAP System NPL on Mon Jul 1 01:08:45 PDT 2019
{01:08:45 ## Main() start: #=/usr/sap/NPL/SYS/exe/uc/linuxx86_64/startsap all
#(#) $Id: //bas/749_STACK/src/krn/startscripts/startsap#1 $
BASENAME=startsap
{01:08:45 ## check_user() start: #=npladm
}01:08:45 ## check_user() done
#=1
#=all
{01:08:45 ## setPlatform() start
PLATFORM=linuxx86_64
}01:08:45 ## setPlatform() done
{01:08:45 ## setPing() start
PING=/bin/ping
}01:08:45 ## setPing() done
{01:08:45 ## setIfconfig() start
IFCONFIG=/sbin/ifconfig
}01:08:45 ## setIfconfig() done
{01:08:45 ## setIp() start
IP=/sbin/ip
}01:08:45 ## setIp() done
{01:08:45 ## setRootDir() start: #=
USR_SAP=/usr/sap
USR_SAP_SID=/usr/sap/NPL
PROFILE_DIR=/usr/sap/NPL/SYS/profile
DIR_LIBRARY=/usr/sap/NPL/SYS/exe/run
}01:08:45 ## setRootDir() done
{01:08:45 ## setDbUser() start: #=
}01:08:45 ## setDbUser() done
Argument=all
{01:08:45 ## getarg() start
{01:08:45 ## checkInstance() start: #=all
}01:08:45 ## checkInstance() done: 1
{01:08:45 ## checkTask() start: #=all
_opt=all
}01:08:45 ## checkTask() done: 0
TASK=ALL
}01:08:45 ## getarg() done: 1
{01:08:45 ## setVHostArray() start
_PROFILES=/usr/sap/NPL/SYS/profile/NPL_ASCS01_ubuntu /usr/sap/NPL/SYS/profile/NPL_D00_ubuntu
_nrProfiles=2
{01:08:45 ## pushVHostsFromProfile() start: #=/usr/sap/NPL/SYS/profile/NPL_ASCS01_ubuntu /usr/sap/NPL/SYS/profile/NPL_D00_ubuntu
_DUMMY=NPL_ASCS01_ubuntu
_VHOST=ubuntu
{01:08:45 ## isVHostLocal() start: ubuntu
VHOST=ubuntu
_IS_LOCAL=0
}01:08:46 ## isVHostLocal() done: 0
_DUMMY=NPL_D00_ubuntu
_VHOST=ubuntu
{01:08:46 ## isVHostLocal() start: ubuntu
VHOST=ubuntu
_IS_LOCAL=0
}01:08:46 ## isVHostLocal() done: 0
VHOSTS=
}01:08:46 ## pushVHostsFromProfile() done
VHOSTS=
}01:08:46 ## setVHostArray() done
{01:08:46 ## set_instance() start
NINST=
INSTFOUND=0
NINSTFOUND=0
hasABAP=0
hasJava=0
hasSpecial=0
}01:08:46 ## set_instance() done
No instance profiles found
Exit code 8
Any other ideas about the error and how to resolve it? Thanks.
Updating the IP address of my VM in /etc/hosts helped:
> sudo nano /etc/hosts
..
192.168.1.218 vhcalnplci vhcalnplci.dummy.nodomain
.. then I started the server
> startsap ALL
.. and checked that the instances are running
> sapcontrol -nr 00 -function GetProcessList
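For completeness, a couple of extra checks that may help confirm the profiles are picked up (SID, paths and instance numbers are the ones from this installation; adjust if yours differ):
ls /usr/sap/NPL/SYS/profile/                  # the host name at the end of each profile name must resolve locally
getent hosts vhcalnplci                       # should now resolve to the VM's current IP
sapcontrol -nr 01 -function GetProcessList    # same check for the ASCS01 instance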
We are monitoring several servers with Monit, version 5.25.1.
Some are dedicated Apache servers, and the monitoring works fine.
But the Monit log (/var/log/monit) looks like this:
[CET Mar 18 03:12:03] info : Starting Monit 5.25.1 daemon with http interface at [0.0.0.0]:3353
[CET Mar 18 03:12:03] info : Monit start delay set to 180s
[CET Mar 18 03:15:03] info : 'xxxxx.localhost' Monit 5.25.1 started
[CET Mar 18 03:15:03] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:16:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:17:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:18:08] error : 'apache' error -- unknown resource ID: [5]
The configuration file /etc/monit.conf looks like this:
###############################################################################
## Monit control file
###############################################################################
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
# check services at 2-minute intervals
# with start delay 240 # optional: delay the first check by 4-minutes (by
# # default Monit check immediately after Monit start)
set daemon 60
with start delay 180
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/.monit.id
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver localhost
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/monit # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## Send status and events to M/Monit (for more informations about M/Monit
## see http://mmonit.com/).
#
# set mmonit http://monit:monit@192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
# set mail-format { from: monit@foo.bar }
#
#
## You can set alert recipients whom will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
#
set alert fake@mail.com not on { instance }
# receive all alerts
# set alert manager@foo.bar only on { timeout } # receive just service-
# # timeout alert
#
mail-format {
from: xxxxxxx@monit.localhost
subject: $SERVICE => $EVENT
message:
DESCRIPTION : $DESCRIPTION
ACTION : $ACTION
DATE : $DATE
HOST : $HOST
Sorry for the spam.
Monit
}
## Monit has an embedded web server which can be used to view status of
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server.
#
set httpd port 3353 and
use address 0.0.0.0
allow yyyyy:zzzz
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
check system xxxxxx.localhost
if loadavg (1min) > 8 for 5 cycles then alert
if loadavg (5min) > 4 for 5 cycles then alert
if memory usage > 75% for 5 cycles then alert
if cpu usage (user) > 70% for 5 cycles then alert
if cpu usage (system) > 50% for 5 cycles then alert
if cpu usage (wait) > 50% for 5 cycles then alert
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
# include /etc/monit.d/*
#
#
# Include all files from /etc/monit.d/
include /etc/monit.d/*
In the Monit UI, everything is OK and the monitoring is genuinely useful; we can stop and restart the service as we want.
So I don't understand the message "error : 'apache' error -- unknown resource ID: [5]" that we found in the Monit log.
Does anyone have an idea about it?
Thanks for your help.
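For reference, two commands that can help when chasing a message like this (they assume the same /etc/monit.conf shown above):
monit -t -c /etc/monit.conf    # syntax-check the control file
monit status apache            # show the resource values Monit evaluates for this service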
I had the same problem.
The people at M/Monit said that loadavg is for "check system" only; it used to work for apache checks, but not anymore:
"The loadavg statement can be used in "check system" context only (load average is system property, not process'). Please remove the following statement and reload monit"
So disable this line by adding a # at the start of it:
# if loadavg(5min) greater than 10 for 8 cycles then restart
Then restart Monit:
service monit restart
You will no longer receive the apache error.
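If you prefer to do it in one step, something like this should work with GNU sed (the path is the one from the question; the command just prepends "# " to the offending line, so back up the file first):
sudo sed -i.bak '/loadavg(5min) greater than 10/s/^/# /' /etc/monit.conf
sudo monit -t -c /etc/monit.conf    # verify the configuration still parses
sudo service monit restart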
I am trying to make my localhost:80 available on the internet using pagekite with config at ~/.pagekite.rc:
## NOTE: This file may be rewritten/reordered by pagekite.py.
#
##[ Default kite and account details ]##
kitename = myemail@gmail.com
kitesecret = my_kite_secret
##[ Front-end settings: use pagekite.net defaults ]##
defaults
##[ Back-end service examples ... ]##
#
service_on = https:asldkjdk39090.pagekite.me:localhost:80:my_kite_secret
END
I run pagekite:
# pagekite.py
>>> Hello! This is pagekite.py v0.5.9.3. [CTRL+C = Stop]
Connecting to front-end relay 54.84.55.54:443 ...
- Protocols: http http2 http3 https websocket irc finger httpfinger raw
- Protocols: minecraft
- Ports: 79 80 443 843 2222 3000 4545 5222 5223 5269 5670 6667 8000 8080
- Ports: 8081 8082 8083 9292 25565
- Raw ports: virtual
~<> Flying localhost:80 as https://asldkjdk39090.pagekite.me/
Trying localhost:80 as https://asldkjdk39090.pagekite.me/
<< pagekite.py [flying] DynDNS updates may be incomplete, will retry...
Then I request https://asldkjdk39090.pagekite.me/ and it gives an error:
$ curl https://asldkjdk39090.pagekite.me/
curl: (6) Could not resolve host: asldkjdk39090.pagekite.me
I don't clearly understand why it's not working or how to fix it. I expect PageKite to pass the request to my localhost:80 when I request https://asldkjdk39090.pagekite.me/, but it doesn't.
Update
With this config it's working:
## NOTE: This file may be rewritten/reordered by pagekite.py.
#
##[ Default kite and account details ]##
kitename = my_kite_name
kitesecret = my_kite_secret
##[ Front-end settings: use pagekite.net defaults ]##
defaults
##[ Back-end service examples ... ]##
#
service_on = http:my_kite_name.pagekite.me:localhost:80:my_kite_secret
END
Where my_kite_name is the name I created on the settings page.
Then curl https://my_kite_name.pagekite.me/ is passed through properly to my localhost.
So it works for pre-created names, but not for a random name like asldkjdk39090, which I wanted to use as an on-the-fly subdomain without registering it on the settings page.
On-the-fly subdomains aren't supported by pagekite.net.
You always have to pre-register, either using the website or the built-in registration tool in pagekite.py itself. Unfortunately, on some modern distros the built-in pagekite.py registration is currently broken because our API server is obsolete and modern versions of OpenSSL refuse to connect to it.
We are working on fixing that, obviously, but it will take some time because of dependencies.
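For example, the documented quick-start form below only works once the kite name exists on your account; pagekite.py will offer to register it interactively when it can reach the signup API (my_kite_name is a placeholder):
pagekite.py 80 my_kite_name.pagekite.me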
I'm having trouble authenticating via AD to Windows machines from my Ansible host. I have a valid Kerberos ticket:
klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: ansible@SOMEDOMAIN.LOCAL
Issued Expires Principal
Mar 10 09:15:27 2017 Mar 10 19:15:24 2017 krbtgt/SOMEDOMAIN.LOCAL@SOMEDOMAIN.LOCAL
My Kerberos config looks fine to me:
cat /etc/krb5.conf
[libdefaults]
default_realm = SOMEDOMAIN.LOCAL
# dns_lookup_realm = true
# dns_lookup_kdc = true
# ticket_lifetime = 24h
# renew_lifetime = 7d
# forwardable = true
# The following krb5.conf variables are only for MIT Kerberos.
# kdc_timesync = 1
# forwardable = true
# proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# The only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
# v4_instance_resolve = false
# v4_name_convert = {
# host = {
# rcmd = host
# ftp = ftp
# }
# plain = {
# something = something-else
# }
# }
# fcc-mit-ticketflags = true
[realms]
SOMEDOMAIN.LOCAL = {
kdc = prosperitydc1.somedomain.local
kdc = prosperitydc2.somedomain.local
default_domain = somedomain.local
admin_server = somedomain.local
}
[domain_realm]
.somedomain.local = SOMEDOMAIN.LOCAL
somedomain.local = SOMEDOMAIN.LOCAL
When running a test command, ansible windows -m win_ping -vvvvv, I get 'Server not found in Kerberos database':
ansible windows -m win_ping -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/windows/win_ping.ps1
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 154, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 132, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message)
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/transport.py", line 181, in send_message
prepared_request = self.session.prepare_request(request)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/sessions.py", line 407, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 306, in prepare
self.prepare_auth(auth, url)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 543, in prepare_auth
r = auth(self)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 308, in __call__
auth_header = self.generate_request_header(None, host, is_preemptive=True)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 148, in generate_request_header
raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
KerberosExchangeError: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
kerberostest.somedomain.local | UNREACHABLE! => {
"changed": false,
"msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))",
"unreachable": true
}
I am able to ssh to the target machine
ssh -v1 kerberostest.somedomain.local -p 5986
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to kerberostest.somedomain.local [10.10.20.84] port 5986.
debug1: Connection established.
I can also ping all hosts with their hostname. I'm at a loss :(
Here is the Ansible hosts file:
sudo cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[monitoring-servers]
#nagios
10.10.20.75 ansible_connection=ssh ansible_user=nagios
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
[windows:vars]
#the following works for windows local account authentication
#ansible_ssh_user = prosperity
#ansible_ssh_pass = *********
#ansible_connection = winrm
#ansible_ssh_port = 5986
#ansible_winrm_server_cert_validation = ignore
#vars needed to authenticate on the windows domain using kerberos
ansible_user = ansible@SOMEDOMAIN.LOCAL
ansible_connection = winrm
ansible_winrm_scheme = https
ansible_winrm_transport = kerberos
ansible_winrm_server_cert_validation = ignore
I also tried joining the domain with realmd, which succeeded, but running the Ansible command produced the same result.
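One probe that may narrow this down from the Ansible host: request a service ticket for the SPN that WinRM/Kerberos authentication asks for (HTTP/<fqdn>; the host name is the one from my inventory):
kvno HTTP/kerberostest.somedomain.local    # fails with "Server not found in Kerberos database" if the SPN or realm mapping is wrong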
This looks like a case of a missing SPN.
Here's the relevant error snippet:
<kerberostest.prosperityerp.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@PROSPERITYERP.LOCAL on PORT 5986 TO kerberostest.prosperityerp.local
<kerberostest.prosperityerp.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.prosperityerp.local:5986/wsman
<kerberostest.prosperityerp.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
And that is based on something I noticed in your Ansible configuration file:
[windows]
#fileserver.prosperityerp.local#this machine isnt joined to the domain yet.
kerberostest.PROSPERITYERP.LOCAL
I think the this machine isnt joined to the domain yet line in that file is a good indicator that the SPN HTTP/kerberostest.prosperityerp.local does not exist in Active Directory, which would be causing the "server not found" message. You can SSH to kerberostest.prosperityerp.local, probably because it exists in DNS or in a hosts file on the client machine, but unless and until the SPN HTTP/kerberostest.prosperityerp.local is created in Active Directory, you will continue to get that error message. Adding that SPN properly is a whole other topic of discussion.
You could use a command like this to test if you have that SPN defined:
setspn -Q HTTP/kerberostest.prosperityerp.local
SPNs exist to tell a Kerberos client where to find the instance of a given service on the network.
Also run:
nslookup kerberostest.prosperityerp.local
on at least two client machines to make sure the FQDN of the host where the Kerberized service is running exists in DNS. Working DNS is a requirement for Kerberos to run properly in a network.
Finally, you could use Wireshark on the client for further analysis; use the filter kerberos to show only Kerberos traffic.
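If the SPN really does turn out to be missing, the usual one-liner to add it looks like the sketch below (run with AD admin rights on a domain-joined Windows machine; the computer account name kerberostest is an assumption about the target host, and as noted above, doing this properly is its own topic):
setspn -S HTTP/kerberostest.prosperityerp.local kerberostest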
In my case, the Server not found in Kerberos database error was a result of the target Windows machine's DNS name not being mapped to the right realm, as hinted at in this excerpt from a Microsoft TechNet article:
The error “Server not found in Kerberos database” is common and can be misleading because it often appears when the service principal is not missing. The error can be caused by domain/realm mapping problems or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine what service principal is actually being requested.
I had a playbook whoami.yaml:
- hosts: windows-machine.mydomain.com
tasks:
- name: Run 'whoami' command
win_command: whoami
Hosts file:
[windows]
windows-machine.mydomain.com
[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_user=user@FOO.BAR.MYDOMAIN.COM
ansible_password=<password>
ansible_port=5985
Since the DNS name was windows-machine.mydomain.com but the AD realm was FOO.BAR.MYDOMAIN.COM, I had to fix the mapping in the /etc/krb5.conf file on my Ansible host:
INCORRECT
This won't work for our case since this mapping rule won't apply to windows-machine.mydomain.com:
[domain_realm]
foo.bar.mydomain.com = FOO.BAR.MYDOMAIN.COM
CORRECT
This will correctly map windows-machine.mydomain.com to the realm FOO.BAR.MYDOMAIN.COM:
[domain_realm]
.mydomain.com = FOO.BAR.MYDOMAIN.COM
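A quick way to confirm the new mapping from the Ansible host (host and realm names are the examples above; kinit is only needed if you don't already hold a ticket):
kinit user@FOO.BAR.MYDOMAIN.COM
kvno HTTP/windows-machine.mydomain.com      # should print a key version number instead of "Server not found in Kerberos database"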
My issue is that the retry count is exceeded when I download a kernel image to an Econa processor board (Econa is an ARM-based processor) via TFTP, as shown below:
CNS3000 # tftp 0x4000000 bootpImage.cns3420.uclibc
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
TFTP from server 192.168.0.219; our IP address is 192.168.0.112
Filename 'bootpImage.cns3420.uclibc'.
Load address: 0x4000000
Loading: T T T T T T T T T T
Retry count exceeded; starting again
The following points may help in finding the cause of this error.
The ping response is OK:
CNS3000 # ping 192.168.0.219
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
host 192.168.0.219 is alive
To verify that TFTP is running, I tried the following; the TFTP server seems to be working. I placed a small file in /tftpboot:
# echo "Hello, embedded world" > /tftpboot/hello.txt
Then I did a get from localhost:
# tftp localhost
tftp> get hello.txt
Received 23 bytes in 0.1 seconds
tftp> quit
Please note that there is no firewall or SELinux on my machine.
Please verify that the locations of these files are OK: I have placed the kernel image file bootpImage.cns3420.uclibc in /tftpboot, and the TFTP service file is located in /etc/xinetd.d/tftp.
My TFTP service file is:
service tftp
{
socket_type =dgram
protocol=udp
wait=yes
user=root
server=/usr/sbin/in.tftpd
server_args=-s /tftpboot -b 512
disable=no
per_source=11
cps=100 2
flags=ipv4
}
The printenv output in U-Boot is:
CNS3000 # printenv
bootargs=root=/dev/mtdblock0 mem=256M console=ttyS0
baudrate=38400
ethaddr=00:53:43:4F:54:54
netmask=255.255.0.0
tftp_bsize=512
udp_frag_size=512
mmc_init=mmcinit
loading=fatload mmc 0 0x4000000 bootpimage-82511
running=go 0x4000000
bootcmd=run mmc_init;run loading;run running
serverip=192.168.0.219
ipaddr=192.168.0.112
bootdelay=5
port=1
bootfile=/tftpboot/bootpImage.cns3420.uclibcl
stdin=serial
stdout=serial
stderr=serial
verify=n
Environment size: 437/4092 bytes
Regards
Waqas
Loading: T T T T T T T T T T
This means there is no transfer at all. It can be caused by a wrong interface setting, i.e. U-Boot is configured for 100 Mbit full duplex and you try to connect via half duplex or 10 Mbit (or some mix of the two). Another point is the MTU size, which should be 1500 (U-Boot cannot handle packet fragmentation).
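On the Linux side of the link, a quick way to see what the TFTP server's NIC actually negotiated and which MTU it uses (the interface name eth0 is an assumption; substitute your own):
sudo ethtool eth0 | grep -Ei 'speed|duplex'
ip link show eth0 | grep -o 'mtu [0-9]*'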
Hint for Windows/VMware users:
TFTP timeouts from U-Boot are caused by Windows IP forwarding.
1) If you have a home network: switch it off.
2) If you are running the Routing and Remote Access service: shut down the service.
3) Check the registry for IP forwarding:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
Set the value to 0 (and maybe reboot).
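To check or change that value from an elevated command prompt instead of regedit, something like this should do it (standard reg.exe syntax):
reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 0 /f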