Filebeat not reading all logs from directory - filebeat

I am configuring Filebeat to ship logs located in /var/log/myapp/batch_* to Elasticsearch.
Here is my Filebeat configuration:
# Version
filebeat version 7.11.0 (amd64), libbeat 7.11.0 [84c4d4c4034fcb49c1a318ccdc7311d70adee15b built 2021-02-08 22:42:11 +0000 UTC]
# Filebeat config
logging.metrics.period: 1h
logging.to_files: true
logging.files:
  rotateeverybytes: 16777216
  keepfiles: 7
  permissions: 0600
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*"
output.elasticsearch:
  hosts: ["server:9200"]
  index: "log_test_app-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "log_test_app"
setup.template.pattern: "log_test_app-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
I only see two logs being shipped, even though there are a total of eight log files in the configured directory:
2022-05-24T19:39:55.904Z INFO log/input.go:157 Configured paths: [/var/log/myapp/batch_*]
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:141 Starting input (ID: 3328309751929357009)
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/myapp/batch_emails.log
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/myapp/batch_import.log
Here is a listing of the files in the directory:
ls -l /var/log/myapp/batch_*
-rw-r--r-- 1 batch batch 154112 May 24 03:20 /var/log/myapp/batch_gifts.log
-rw-r--r-- 1 batch batch 112319 May 24 02:30 /var/log/myapp/batch_http.log
-rw-r--r-- 1 batch batch 7575342 May 24 02:30 /var/log/myapp/batch_vouchers.log
-rw-r--r-- 1 batch batch 4847849 May 24 19:30 /var/log/myapp/batch_ftp.log
-rw-r--r-- 1 batch batch 99413 May 24 03:40 /var/log/myapp/batch_category.log
-rw-r--r-- 1 root root 367207 May 24 19:50 /var/log/myapp/batch_emails.log
-rw-r--r-- 1 batch batch 479 Jan 1 23:00 /var/log/myapp/batch_history.log
-rw-r--r-- 1 batch batch 2420916 Jan 1 23:00 /var/log/myapp/batch_lists.php
-rw-r--r-- 1 batch batch 25779499 May 24 19:50 /var/log/myapp/batch_import.log
Is there something wrong with my setup? I also tried the ignore_older parameter (36h), but still only two log files are processed.
Thanks for the help.

Welcome to Stack Overflow, Emanuel :)
I believe you only want to read the actual log files (*.log); note that your pattern batch_* also matches batch_lists.php. You can make that explicit in the glob:
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*.log"
Keep us posted. Thanks!

Related

JMeter not executing script from command line (CLI mode) but works fine in GUI mode

I wrote a script in JMeter that runs fine in GUI mode but fails when executed in CLI (non-GUI) mode.
This is the result from CLI mode:
D:\apache-jmeter-5.4.3\apache-jmeter-5.4.3\bin>jmeter -n -t D:\apache-jmeter-5.4.3\apache-jmeter-5.4.3\bin\homepage.jmx
Creating summariser
Created the tree successfully using D:\apache-jmeter-5.4.3\apache-jmeter-5.4.3\bin\homepage.jmx
Starting standalone test @ Fri Mar 18 09:41:02 EET 2022 (1647589262323)
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
summary +      1 in 00:00:00 =   25.0/s Avg:     0 Min:     0 Max:     0 Err:     1 (100.00%) Active: 1 Started: 1 Finished: 0
summary +     19 in 00:00:05 =    4.0/s Avg:     0 Min:     0 Max:     0 Err:    19 (100.00%) Active: 0 Started: 20 Finished: 20
summary =     20 in 00:00:05 =    4.2/s Avg:     0 Min:     0 Max:     0 Err:    20 (100.00%)
Tidying up ... @ Fri Mar 18 09:41:07 EET 2022 (1647589267321)
... end of run
Why is this happening and how can I fix it?
Thanks
You're running your test without storing the results, and from the output it seems that all 20 of your requests failed.
I would recommend re-running your test providing the path to the .jtl results file as a parameter, like:
jmeter -n -t D:\apache-jmeter-5.4.3\apache-jmeter-5.4.3\bin\homepage.jmx -l result.jtl
Then inspect the result.jtl file using Excel or equivalent, or generate an HTML Reporting Dashboard out of it; you will see at least the response code and message for each failed request.
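As a follow-up, the dashboard can also be generated from an existing results file (the folder name here is just an example; JMeter requires the output folder to be empty or non-existent):
jmeter -g result.jtl -o report-dashboard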

Databricks considering files as directories

We are facing an issue where the Databricks filesystem treats some files as directories, and we are then unable to read those files with Pandas. The files exist in Azure Storage Explorer and show up there as regular files.
We have mounted the storage with OAuth 2.0.
On Databricks,
%sh ls -al '<path_to_files>'
returns the following:
total 1127
drwxrwxrwx 2 root root 4096 Jan 29 09:26 .
drwxrwxrwx 2 root root 4096 Jan 9 13:47 ..
drwxrwxrwx 1 root root 136705 Jan 28 16:35 AAAA_2019-10-01_2019-12-27.csv
drwxrwxrwx 1 root root 183098 Jan 28 16:35 BBBB_2019-10-01_2019-12-27.csv
-rwxrwxrwx 1 root root 313120 Jan 28 16:35 CCCC_2019-10-01_2019-12-27.csv
-rwxrwxrwx 1 root root 212935 Jan 29 09:26 df_cube.csv
-rwxrwxrwx 1 root root 298228 Jan 29 09:26 df_other_cube.csv
The thing is, the first two CSV files are not directories at all. We can download them and read them as CSV, but we cannot load them into a Pandas dataframe.
df = pd.read_csv(rootname_source_test + r'AAAA_2019-10-01_2019-12-27.csv', header=0, sep="|", engine='python')
>>> IsADirectoryError: [Errno 21] Is a directory: '/dbfs/mnt/<path>/AAA_2019-10-01_2019-12-27.csv'
They are generated the same way the third CSV is, and the third one loads into Pandas fine. Sometimes they appear as files, sometimes as directories, and we are having trouble reproducing and solving this consistently.
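To help pin down when this happens, a minimal diagnostic sketch like the following could log how DBFS classifies each entry right before the read (assuming the mount is visible under /dbfs/mnt/<path>/ as in the error above; the path is a placeholder, substitute the real mount):
import os

mount = '/dbfs/mnt/<path>/'  # placeholder: use the real mount path
for name in sorted(os.listdir(mount)):
    full = os.path.join(mount, name)
    # Record whether DBFS currently reports this entry as a file or a directory
    print(name, 'DIR' if os.path.isdir(full) else 'FILE')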
Cluster config: Runtime 6.2 ML (includes Apache Spark 2.4.4, Scala 2.11)
Any help would be much appreciated.

Ambari unable to run custom hook for modifying user hive

Attempting to add a client node to the cluster via Ambari (v2.7.3.0) (HDP 3.1.0.0-78) and seeing an odd error.
stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, in <module>
    BeforeAnyHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 51, in setup_users
    fetch_nonlocal_groups = params.fetch_nonlocal_groups,
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/accounts.py", line 90, in action_create
    shell.checked_call(command, sudo=True)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod -G hadoop -g hadoop hive' returned 6. usermod: user 'hive' does not exist in /etc/passwd
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-632.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-632.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2019-11-25 13:07:58,000 - Reporting component version failed
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
stdout:
2019-11-25 13:07:57,644 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2019-11-25 13:07:57,651 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-11-25 13:07:57,652 - Group['livy'] {}
2019-11-25 13:07:57,654 - Group['spark'] {}
2019-11-25 13:07:57,654 - Group['ranger'] {}
2019-11-25 13:07:57,654 - Group['hdfs'] {}
2019-11-25 13:07:57,654 - Group['zeppelin'] {}
2019-11-25 13:07:57,655 - Group['hadoop'] {}
2019-11-25 13:07:57,655 - Group['users'] {}
2019-11-25 13:07:57,656 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-25 13:07:57,658 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-25 13:07:57,659 - Modifying user hive
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-632.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-632.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2019-11-25 13:07:57,971 - The repository with version 3.1.0.0-78 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2019-11-25 13:07:58,000 - Reporting component version failed
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
Command failed after 1 tries
The problem appears to be
resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod -G hadoop -g hadoop hive' returned 6. usermod: user 'hive' does not exist in /etc/passwd
caused by
2019-11-25 13:07:57,659 - Modifying user hive
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-632.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-632.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
This is further reinforced by the fact that manually adding the ambari-hdp-1.repo and yum-installing hdp-select before adding the host to the cluster produces the same error messages, just truncated up to the parts of stdout/stderr shown here.
When running
[root@HW001 .ssh]# /usr/bin/hdp-select versions
3.1.0.0-78
from the Ambari server node, I can see the command runs.
Looking at what the hook script is trying to run/access, I see
[root@client001~]# ls -lha /var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py
-rw-r--r-- 1 root root 1.2K Nov 25 10:51 /var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py
[root@client001~]# ls -lha /var/lib/ambari-agent/data/command-632.json
-rw------- 1 root root 545K Nov 25 13:07 /var/lib/ambari-agent/data/command-632.json
[root@client001~]# ls -lha /var/lib/ambari-agent/cache/stack-hooks/before-ANY
total 0
drwxr-xr-x 4 root root 34 Nov 25 10:51 .
drwxr-xr-x 8 root root 147 Nov 25 10:51 ..
drwxr-xr-x 2 root root 34 Nov 25 10:51 files
drwxr-xr-x 2 root root 188 Nov 25 10:51 scripts
[root@client001~]# ls -lha /var/lib/ambari-agent/data/structured-out-632.json
ls: cannot access /var/lib/ambari-agent/data/structured-out-632.json: No such file or directory
[root@client001~]# ls -lha /var/lib/ambari-agent/tmp
total 96K
drwxrwxrwt 3 root root 4.0K Nov 25 13:06 .
drwxr-xr-x 10 root root 267 Nov 25 10:50 ..
drwxr-xr-x 6 root root 4.0K Nov 25 13:06 ambari_commons
-rwx------ 1 root root 1.4K Nov 25 13:06 ambari-sudo.sh
-rwxr-xr-x 1 root root 1.6K Nov 25 13:06 create-python-wrap.sh
-rwxr-xr-x 1 root root 1.6K Nov 25 10:50 os_check_type1574715018.py
-rwxr-xr-x 1 root root 1.6K Nov 25 11:12 os_check_type1574716360.py
-rwxr-xr-x 1 root root 1.6K Nov 25 11:29 os_check_type1574717391.py
-rwxr-xr-x 1 root root 1.6K Nov 25 13:06 os_check_type1574723161.py
-rwxr-xr-x 1 root root 16K Nov 25 10:50 setupAgent1574715020.py
-rwxr-xr-x 1 root root 16K Nov 25 11:12 setupAgent1574716361.py
-rwxr-xr-x 1 root root 16K Nov 25 11:29 setupAgent1574717392.py
-rwxr-xr-x 1 root root 16K Nov 25 13:06 setupAgent1574723163.py
Notice the line ls: cannot access /var/lib/ambari-agent/data/structured-out-632.json: No such file or directory. I am not sure if this is normal, though.
Anyone know what could be causing this or any debugging hints from this point?
UPDATE 01:
Adding some log-printing lines near the offending final line in the error trace, i.e. File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages, I print the return code and stdout:
2
ambari-python-wrap: can't open file '/usr/bin/hdp-select': [Errno 2] No such file or directory
So what the heck? It wants hdp-select to already be there, but the Ambari add-host UI complains if I manually install that binary myself beforehand. When I do manually install it (using the same repo file as on the rest of the existing cluster nodes), all I see is...
0
Packages:
accumulo-client
accumulo-gc
accumulo-master
accumulo-monitor
accumulo-tablet
accumulo-tracer
atlas-client
atlas-server
beacon
beacon-client
beacon-server
druid-broker
druid-coordinator
druid-historical
druid-middlemanager
druid-overlord
druid-router
druid-superset
falcon-client
falcon-server
flume-server
hadoop-client
hadoop-hdfs-client
hadoop-hdfs-datanode
hadoop-hdfs-journalnode
hadoop-hdfs-namenode
hadoop-hdfs-nfs3
hadoop-hdfs-portmap
hadoop-hdfs-secondarynamenode
hadoop-hdfs-zkfc
hadoop-httpfs
hadoop-mapreduce-client
hadoop-mapreduce-historyserver
hadoop-yarn-client
hadoop-yarn-nodemanager
hadoop-yarn-registrydns
hadoop-yarn-resourcemanager
hadoop-yarn-timelinereader
hadoop-yarn-timelineserver
hbase-client
hbase-master
hbase-regionserver
hive-client
hive-metastore
hive-server2
hive-server2-hive
hive-server2-hive2
hive-webhcat
hive_warehouse_connector
kafka-broker
knox-server
livy-client
livy-server
livy2-client
livy2-server
mahout-client
oozie-client
oozie-server
phoenix-client
phoenix-server
pig-client
ranger-admin
ranger-kms
ranger-tagsync
ranger-usersync
shc
slider-client
spark-atlas-connector
spark-client
spark-historyserver
spark-schema-registry
spark-thriftserver
spark2-client
spark2-historyserver
spark2-thriftserver
spark_llap
sqoop-client
sqoop-server
storm-client
storm-nimbus
storm-slider-client
storm-supervisor
superset
tez-client
zeppelin-server
zookeeper-client
zookeeper-server
Aliases:
accumulo-server
all
client
hadoop-hdfs-server
hadoop-mapreduce-server
hadoop-yarn-server
hive-server
Command failed after 1 tries
UPDATE 02:
Printing some custom logging from File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 322 (printing the values of err_msg, code, out, err), i.e.
....
312   if throw_on_failure and not code in returns:
313     err_msg = Logger.filter_text("Execution of '{0}' returned {1}. {2}".format(command_alias, code, all_output))
314
315     #TODO remove
316     print("\n----------\nMY LOGS\n----------\n")
317     print(err_msg)
318     print(code)
319     print(out)
320     print(err)
321
322     raise ExecutionFailed(err_msg, code, out, err)
323
324   # if separate stderr is enabled (by default it's redirected to out)
325   if stderr == subprocess32.PIPE:
326     return code, out, err
327
328   return code, out
....
I see
Execution of 'usermod -G hadoop -g hadoop hive' returned 6. usermod: user 'hive' does not exist in /etc/passwd
6
usermod: user 'hive' does not exist in /etc/passwd
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-816.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-816.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2019-11-26 10:25:46,928 - The repository with version 3.1.0.0-78 for this command has been marked as resolved. It will be used to report the version of the component which was installed
So it seems like it is failing to create the hive user (even though it seems to have no problem creating the yarn-ats user just before that).
After just giving in and trying to manually create the hive user myself, I see:
[root@airflowetl ~]# useradd -g hadoop -s /bin/bash hive
useradd: user 'hive' already exists
[root@airflowetl ~]# cat /etc/passwd | grep hive
<nothing>
[root@airflowetl ~]# id hive
uid=379022825(hive) gid=379000513(domain users) groups=379000513(domain users)
The fact that this existing user's uid looks like this, and that the user is not in /etc/passwd, made me think there was an existing Active Directory user (synced to this client node via SSSD) that already had the name hive. Checking our AD users, this turned out to be true.
Temporarily stopping the SSSD service to pause the sync with AD (service sssd stop) before rerunning the client host add in Ambari fixed the problem for me (I am not sure whether you can make a server ignore AD syncs for an individual user).
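For anyone hitting the same symptom, a quick way to tell whether an account is local or supplied by a directory service (a small sketch; getent consults all NSS sources, while /etc/passwd lists only local users):
grep '^hive:' /etc/passwd   # local users only; empty in my case
getent passwd hive          # also queries SSSD/AD; this returned the AD user
If getent finds the user but grep does not, the account is coming from a remote source such as Active Directory via SSSD, which is exactly what usermod trips over.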

Filebeat marking the log file inactive even when there is unread content in it

I am using Filebeat version 5.6.16 on a CentOS server to push logs to Logstash from the path /opt/news-bff/logs/Icis.Genesis*.log.
There are many matching log files
-rw-r--r--. 1 root root 5049 Sep 25 10:30 Icis.Genesis.News.Bff.Api-2019092510.log
-rw-r--r--. 1 root root 1551 Sep 25 12:15 Icis.Genesis.News.Bff.Api-2019092512.log
-rw-r--r--. 1 root root 2650 Sep 25 13:55 Icis.Genesis.News.Bff.Api-2019092513.log
-rw-r--r--. 1 root root 39447 Sep 25 14:50 Icis.Genesis.News.Bff.Api-2019092514.log
-rw-r--r--. 1 root root 6191 Sep 25 15:31 Icis.Genesis.News.Bff.Api-2019092515.log
But the issue is that Filebeat opens a harvester for every file and, after five minutes, marks it as inactive without shipping any logs.
The /var/lib/filebeat/registry file is empty.
I provision multiple CentOS servers and install Filebeat on them using Puppet.
Five out of ten times, Filebeat picks up the log files without any issues; otherwise I get this problem.
If I delete the registry file and restart the service, it works fine.
The Filebeat prospector configuration looks like this:
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /opt/news-bff/logs/Icis.Genesis*.log
      encoding: plain
      fields_under_root: false
      document_type: log
      scan_frequency: 10s
      harvester_buffer_size: 16384
      max_bytes: 10485760
Logs:
2019-09-25T12:16:01+01:00 INFO Harvester started for file: /opt/news-bff/logs/Icis.Genesis.News.Bff.Api-2019092512.log
2019-09-25T12:16:10+01:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.publish.read_bytes=36
2019-09-25T12:16:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:17:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:17:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:18:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:18:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:19:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:19:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=30
2019-09-25T12:20:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:20:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:21:06+01:00 INFO File is inactive: /opt/news-bff/logs/Icis.Genesis.News.Bff.Api-2019092512.log. Closing because close_inactive of 5m0s reached.

How to freeze graphs from checkpoint directory for inception-v3 model?

I am fine-tuning the Inception-v3 model on the flowers dataset using this: https://github.com/tensorflow/models/tree/master/inception
I checkpointed the result in a directory, but in the directory I see files like these:
-rw-r--r-- 1 root root 389908432 Mar 15 21:46 model.ckpt-0.data-00000-of-00001
-rw-r--r-- 1 root root 72680 Mar 15 21:46 model.ckpt-0.index
-rw-r--r-- 1 root root 15189794 Mar 15 21:47 model.ckpt-0.meta
-rw-r--r-- 1 root root 135185788 Mar 15 22:36 events.out.tfevents.1489594533.f7d5defbed64
-rw-r--r-- 1 root root 72680 Mar 15 22:37 model.ckpt-4999.index
-rw-r--r-- 1 root root 389908432 Mar 15 22:37 model.ckpt-4999.data-00000-of-00001
-rw-r--r-- 1 root root 15189794 Mar 15 22:38 model.ckpt-4999.meta
-rw-r--r-- 1 root root 130 Mar 15 22:49 checkpoint
whereas I need outputs in a directory similar to this:
-rw-r----- 1 107456 5000 223 Mar 2 2016 README.txt
-rw-r----- 1 107456 5000 43 Mar 2 2016 checkpoint
-rw-r----- 1 107456 5000 434903494 Mar 15 2016 model.ckpt-157585
For that I need to do something like freezing the graph, but freezing requires output_node_names. Can anyone tell me what the output_node_names would be for Inception-v3?
Also, I need a reliable way to freeze. Is the TensorFlow freezer tool okay for this?
I found the answer eventually.
The one-line answer is to use the freeze_graph.py tool available in the upstream TensorFlow codebase (under tensorflow/python/tools). See its tests for an example of how to use it.
You may check the following link for a sample:
https://gist.githubusercontent.com/morgangiraud/249505f540a5e53a48b0c1a869d370bf/raw/6cb0b4d497925517316a92f935ce5dccb6aafd17/medium-tffreeze-1.py
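For reference, the core of that sample boils down to a TF1-style snippet like the one below. Treat it as a sketch: the checkpoint prefix and the output node name 'InceptionV3/Predictions/Reshape_1' are assumptions (that name comes from the TF-slim variant of the model; print the graph's operations to confirm the real output node for your graph):
import tensorflow as tf  # TensorFlow 1.x API

with tf.Session(graph=tf.Graph()) as sess:
    # Rebuild the graph from the .meta file and restore the weights
    saver = tf.train.import_meta_graph('model.ckpt-4999.meta')
    saver.restore(sess, 'model.ckpt-4999')

    # Bake all variables into constants, keeping only the subgraph
    # needed to compute the named output node
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(),
        ['InceptionV3/Predictions/Reshape_1'])  # assumed output node

    # Write the self-contained graph to a single .pb file
    with tf.gfile.GFile('frozen_inception_v3.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())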