How can I read and save data from my server on PC?
a=io.open(path.."/datafile","wb")
a:write("nonsense")
a:close()
Is it done the same way, or in another way?
I want to read and save this file from my server to my PC, but how can I do that?
I hope someone can help me.
It is not completely clear what you are trying to do. If you want to copy a file from one machine to another, the following is a way to do it. Note that it works by reading the whole file content into memory before copying it to the destination, so it is not suitable for really huge files, say larger than ~100 MB (YMMV).
local SOURCE_PATH = "my/source/path/datafile.txt"
local DESTINATION_PATH = "another/path/datafile.txt"
local fh = assert( io.open( SOURCE_PATH, "rb" ) )
local content = fh:read "*all"
fh:close()
local fh_out = assert( io.open( DESTINATION_PATH, "wb" ) )
fh_out:write( content )
fh_out:close()
EDIT
Following a suggestion by @lhf, here is a version which can cope with huge files. It reads and then writes the file in small chunks:
local SOURCE_PATH = "my/source/path/datafile.txt"
local DESTINATION_PATH = "another/path/datafile.txt"
local BUFFER_SIZE = 4096 -- in bytes
local fh = assert( io.open( SOURCE_PATH, "rb" ) )
local fh_out = assert( io.open( DESTINATION_PATH, "wb" ) )
local data = fh:read( BUFFER_SIZE )
while data do
fh_out:write( data )
data = fh:read( BUFFER_SIZE )
end
fh:close()
fh_out:close()
Is it possible to write to a file directly on an FTP server without first writing the file locally? In other words: writing to a remote file from local memory.
Board: ESP32-CAM
IDE: Thonny
Language: MicroPython (Lemariva's firmware)
from ftplib import FTP
import camera
ftp = FTP('192.168.1.65', '2121')
ftp.login('user', '12345')
ftp.cwd("/Cam/")
filename = "test.jpeg"
camera.init(0, format=camera.JPEG, fb_location=camera.PSRAM)
camera.quality(10)
camera.framesize(camera.FRAME_240X240)
pic = camera.capture()
fh = open(filename, 'rwb')
ftp.storbinary('STOR '+filename, fh)
fh.close()
I'm assuming that I have to convert the camera.capture() object into a byte array? But how do I do that without first saving the captured image to disk?
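In case it helps, here is one possible approach (an untested sketch for this firmware): camera.capture() should already return the JPEG frame as a bytes object, so instead of opening a file on disk you can wrap those bytes in an in-memory stream and hand that to storbinary(), which only needs an object with a read() method. io.BytesIO is assumed to be available (on some MicroPython builds the module is named uio):
import io                                  # may be 'uio' on some MicroPython builds

pic = camera.capture()                     # JPEG frame as a bytes object
buf = io.BytesIO(pic)                      # file-like object backed by RAM, no disk write
ftp.storbinary('STOR ' + filename, buf)    # storbinary() reads from any object with .read()
buf.close()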
I want to release a DM script tied to a specific PC. The GMS license won't work because the free license has a common license ID,
"GATAN_FREE"
How can I insert a secret check that gives an error message when the script runs on a different machine?
I am thinking of using the computer name or username. Is there a way to read Windows system variables? If I use
LaunchExternalProcessAsync(callString)
to launch the DOS command "echo -username", how do I catch the output?
Any solutions or suggestions?
Nice thinking.
The trick with LaunchExternalProcess is to create some useful string which can be 'executed'. You can try various applications with their own command-line parameters.
In the most general situation, you can create a dummy batch file and execute it. (Provided you have read/write access on the computer!)
As LaunchExternalProcess also returns the exit code of the launched process, you can at least pass one integer value back directly. Otherwise, you need to have the batch file write its output to a file and have DM read that file.
// Temporary batch file creation
string batchPath = "C:\\Dummy.bat"
string batchText
string auxFilePath = "C:\\tmp_dummy.txt"
batchText += "dir *.* >> " + auxFilePath + "\n"
batchText += "exit 999" + "\n"
// Ensure no files exist...
if ( DoesFileExist(auxFilePath) )
DeleteFile(auxFilePath)
if ( DoesFileExist(batchPath) )
DeleteFile(batchPath)
// Write the batch file....
number fileID = CreateFileForWriting(batchPath)
WriteFile(fileID,batchText)
CloseFile(fileID)
// Call the batch file and retrieve its exit code
number kTimeOutSec = 5 // Prevent freezing of DM if something in the batch file is wrong
number exitCode = LaunchExternalProcess( batchPath, kTimeOutSec )
// Do something with the results
Result( "\n Exit value of batch was:" + exitCode )
if ( DoesFileExist(auxFilePath) )
{
string line
fileID = OpenFileForReading(auxFilePath)
ReadFileLine( fileID, line )
CloseFile(fileID)
Result("\n First line of auxiliary file:" + line )
}
// Ensure no files exist...
if ( DoesFileExist(auxFilePath) )
DeleteFile(auxFilePath)
if ( DoesFileExist(batchPath) )
DeleteFile(batchPath)
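For the specific user-name example from the question, the same pattern should work; only the batch text changes (a sketch reusing the variables from the script above):
// Alternative batch text: write the Windows user name into the auxiliary file
batchText = ""
batchText += "echo %username% > " + auxFilePath + "\n"
batchText += "exit 0" + "\n"
// Write and launch the batch file exactly as above; ReadFileLine() then
// returns the user name (possibly with a trailing newline to strip).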
This is not a direct answer to your question, but to the overall goal you've mentioned.
An alternative "DM only" solution for restricting script access would be to use the persistent tags of the application itself! (These are stored in the preferences of the application.)
string tagPath = "MyScripts:LicensedComputer"
string installPW = "password"
string mayLoadPassCode = ""
GetPersistentStringNote( tagPath, mayLoadPassCode )
if ( mayLoadPassCode != installPW )
{
string pw
if ( !GetString( "Forbidden.\n Enter password:", pw, pw ) )
exit(0)
if ( pw != installPW )
Throw( "Invalid password." )
SetPersistentStringNote( tagPath, pw )
}
OKDialog( "You may use my script..." )
Obviously this isn't the most secure lock-out, as any user could set the tag manually as well, but as long as the tag path is "secret" and the password remains "secret" (i.e. you don't share the script as source code) it is reasonably 'safe'.
In a similar way, you could make your script write a specific "license" file to the computer and check for that each time, as sketched below. The advantage is that deleting/resetting the DM preference file would not affect this.
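A minimal sketch of that license-file variant, using the same file routines as above (the path and the expected content are placeholders you would pick yourself):
string licensePath = "C:\\ProgramData\\MyScriptLicense.txt"   // placeholder location
string expectedCode = "my-secret-machine-code"                // placeholder content
if ( !DoesFileExist(licensePath) )
    Throw( "No license file found on this computer." )
string line
number fileID = OpenFileForReading(licensePath)
ReadFileLine( fileID, line )  // depending on how the file was written, a trailing newline may need trimming
CloseFile(fileID)
if ( line != expectedCode )
    Throw( "Invalid license file." )
OKDialog( "You may use my script..." )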
I am using a spooldir source to move .gz files from a spool directory to HDFS.
I am using the following config:
==========================
a1.channels = ch-1
a1.sources = src-1
a1.sinks = k1
a1.channels.ch-1.type = memory
a1.channels.ch-1.capacity = 1000
a1.channels.ch-1.transactionCapacity = 100
a1.sources.src-1.type = spooldir
a1.sources.src-1.channels = ch-1
a1.sources.src-1.spoolDir = /path_to/flumeSpool
a1.sources.src-1.deserializer=org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
a1.sources.src-1.basenameHeader=true
a1.sources.src-1.deserializer.maxBlobLength=400000000
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = ch-1
a1.sinks.k1.hdfs.path = hdfs://{namenode}:8020/path_to_hdfs
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.rollInterval =100
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollSize=0
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC=gzip
a1.sinks.k1.hdfs.callTimeout=120000
========================================
So the file does get transferred to HDFS, but a time_in_millis.gz extension is appended to its name.
Also, when I try to gunzip the file from HDFS (after copying it out via the terminal), it shows unknown characters, so I am not sure what is going on.
I would like to keep the same filename after the transfer to HDFS.
I would like to be able to unzip the file and read its content.
Can someone help?
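Two hedged pointers based on the config above, not a full answer: since basenameHeader=true is already set, the original file name is carried in the %{basename} header and can be used as the file prefix (the sink will still append its own counter to keep names unique); and because the spooled files are already gzipped, hdfs.fileType = CompressedStream with codeC=gzip compresses them a second time, which would explain the garbage after a single gunzip. A sketch of the relevant settings:
# keep the original spooled file name (carried in the 'basename' header)
a1.sinks.k1.hdfs.filePrefix = %{basename}
# the input files are already gzipped, so write them through unchanged
# (hdfs.codeC is then no longer needed)
a1.sinks.k1.hdfs.fileType = DataStream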
I am trying Flume for something very simple: I would like to push the content of my log files to S3. I was able to create a Flume agent that reads the content of an Apache access log file and uses a logger sink. Now I am trying to find a solution where I can replace the logger sink with an "S3 sink". (I know this does not exist by default.)
I am looking for some pointers to send me down the right path. Below is the test properties file that I am using currently.
a1.sources=src1
a1.sinks=sink1
a1.channels=ch1
#source configuration
a1.sources.src1.type=exec
a1.sources.src1.command=tail -f /var/log/apache2/access.log
#sink configuration
a1.sinks.sink1.type=logger
#channel configuration
a1.channels.ch1.type=memory
a1.channels.ch1.capacity=1000
a1.channels.ch1.transactionCapacity=100
#links
a1.sources.src1.channels=ch1
a1.sinks.sink1.channel=ch1
Flume's HDFS sink can also write to S3 (Hadoop treats S3 as just another filesystem), so you can keep using the HDFS sink; you just need to replace the hdfs path with your bucket, in this way. Don't forget to replace AWS_ACCESS_KEY and AWS_SECRET_KEY.
agent.sinks.s3hdfs.type = hdfs
agent.sinks.s3hdfs.hdfs.path = s3n://<AWS.ACCESS.KEY>:<AWS.SECRET.KEY>#<bucket.name>/prefix/
agent.sinks.s3hdfs.hdfs.fileType = DataStream
agent.sinks.s3hdfs.hdfs.filePrefix = FilePrefix
agent.sinks.s3hdfs.hdfs.writeFormat = Text
agent.sinks.s3hdfs.hdfs.rollCount = 0
# 64 MB file size
agent.sinks.s3hdfs.hdfs.rollSize = 67108864
agent.sinks.s3hdfs.hdfs.batchSize = 10000
agent.sinks.s3hdfs.hdfs.rollInterval = 0
This makes sense, but can a rollSize of this value be accompanied by the following?
agent_messaging.sinks.AWSS3.hdfs.round = true
agent_messaging.sinks.AWSS3.hdfs.roundValue = 30
agent_messaging.sinks.AWSS3.hdfs.roundUnit = minute
I have a rather large database (5 dbs of about a million keys each), and each key has the environment namespace in it. For example: "datamine::production::crosswalk==foobar"
I need to sync my development environment with this data copied from the production RDB snapshot.
So what I'm trying to do is batch rename every key, changing the namespace from datamine::production to datamine::development. Is there a good way to achieve this?
What I've tried so far
A redis-cli keys "datamine::production*" command, piped into sed, then back into redis-cli. This takes forever, and bombs for some reason on many keys (sporadically combining several on the same line). I'd prefer a better option.
A Perl search/replace on the .rdb file. My local redis-server flat-out refuses to load the modified RDB.
The solution:
OK, here's the script I wrote to solve this problem. It requires the "redis" gem. Hopefully someone else finds this useful...
#!/usr/bin/env ruby
# A script to translate the current redis database into a namespace for another environment
# GWI's Redis keys are namespaced as "datamine::production", "datamine::development", etc.
# This script connects to redis and translates these key names in-place.
#
# This script does not use Rails, but needs the "redis" gem available
require 'benchmark'
require 'redis'
FROM_NAMESPACE = "production"
TO_NAMESPACE = "development"
NAMESPACE_PREFIX = "datamine::"
REDIS_SERVER = "localhost"
REDIS_PORT = "6379"
REDIS_DBS = [0,1,2,3,4,5]
redis = Redis.new(host: REDIS_SERVER, port: REDIS_PORT, timeout: 30)
REDIS_DBS.each do |redis_db|
redis.select(redis_db)
puts "Translating db ##{redis_db}..."
seconds = Benchmark.realtime do
dbsize = redis.dbsize.to_f
inc_threshold = (dbsize/100.0).round
i = 0
old_keys = redis.keys("#{NAMESPACE_PREFIX}#{FROM_NAMESPACE}*")
old_keys.each do |old_key|
new_key = old_key.gsub(FROM_NAMESPACE, TO_NAMESPACE)
redis.rename(old_key, new_key)
print "#{((i/dbsize)*100.0).round}% complete\r" if (i % inc_threshold == 0) # on whole # % only
i += 1
end
end
puts "\nDone. It took #{seconds} seconds"
end
I have a working solution:
EVAL "local old_prefix_len = string.len(ARGV[1])
local keys = redis.call('keys', ARGV[1] .. '*')
for i = 1, #keys do
local old_key = keys[i]
local new_key = ARGV[2] .. string.sub(old_key, old_prefix_len + 1)
redis.call('rename', old_key, new_key)
end" 0 "datamine::production::" "datamine::development::"
The last two parameters are, respectively, the old prefix and the new prefix.