Problem getting permission data (execute / read / write) for a file - Kotlin

I'm writing a program that should output meta-info (size, execute / read / write permissions, time of last modification) for all files in the specified directory.
I managed to get all of that information except the execute / read / write permissions.
I tried to get this info using PosixFilePermissions, but when adding it to the list I get Exception in thread "main" java.lang.UnsupportedOperationException.
Maybe I should use some other library? Or did I make a mistake somewhere? I would be grateful for any advice!
fun long(path: Path): MutableList<String> {
    var listOfFiles = mutableListOf<String>()
    val files = File("$path").listFiles()
    var attr: BasicFileAttributes
    Arrays.sort(files, NameFileComparator.NAME_COMPARATOR)
    files.forEach {
        if (it.isFile) {
            attr = Files.readAttributes<BasicFileAttributes>(it.toPath(), BasicFileAttributes::class.java)
            listOfFiles.add("${it.name} ${attr.size()} ${attr.lastModifiedTime()}" +
                " ${PosixFilePermissions.toString(Files.getPosixFilePermissions(it.toPath()))}")
        } else listOfFiles.add("dir ${it.name}")
    }
    return listOfFiles
}

PosixFilePermissions is only usable on POSIX-compliant file systems (Linux, macOS, etc.).
On a Windows system, the permissions have to be queried directly on the File object:
file.canRead()
file.canWrite()
file.canExecute()
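For illustration only (this sketch is not from the original answer), the two approaches can be combined by trying the POSIX attribute view first and falling back to the java.io.File checks when the file system does not support it, as on Windows:

import java.io.File
import java.nio.file.Files
import java.nio.file.attribute.PosixFilePermissions

// Returns an "rwx"-style string; falls back to File#canRead/canWrite/canExecute
// when the file system has no POSIX view (e.g. NTFS on Windows).
fun permissionString(file: File): String =
    try {
        PosixFilePermissions.toString(Files.getPosixFilePermissions(file.toPath()))
    } catch (e: UnsupportedOperationException) {
        buildString {
            append(if (file.canRead()) "r" else "-")
            append(if (file.canWrite()) "w" else "-")
            append(if (file.canExecute()) "x" else "-")
        }
    }

The helper name permissionString is just for this sketch; in the question's long() function it would take the place of the direct PosixFilePermissions.toString(...) call.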

Related

Gatling feeder/parameter issue - Exception in thread "main" java.lang.UnsupportedOperationException

I have just started a new API-test project for our service using Gatling. At this point I want to run a search query; below is the code:
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  feed(testData)
  exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )
    .doIf(session => session("searchStatus").as[Int] == 200) {
      exec { session =>
        printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
        session
      }
    }
}
I have already declared the feeder in the simulation Scala file:
class GVRERenderEditor_new extends Simulation {
  private val edlToRender = csv("data/render/edl_asset_ids.csv").queue
  private val chnPostRender = components.notifications.notice.JobsPolling_new.chnSendToRender(edlToRender)
  private val scnSendEDLForRender = scenario("Search Post Render")
    .exitBlockOnFail(exec(preSimAuth))
    .exec(chnPostRender)

  setUp(
    scnSendEDLForRender.inject(atOnceUsers(1)).protocols(httpProtocol)
  )
    .maxDuration(sessionDuration.seconds)
    .assertions(global.successfulRequests.percent.is(100))
}
But the Gatling test failed to run, showing this error: Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
If I hardcode the #{edlAssetId} (put the real edlAssetId in that query), I get a result, so I think I am passing the parameter incorrectly. I've tried to print the output to the console log, but no luck. What's wrong with this code? I would appreciate your help. Thanks!
feed(testData)
exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
  .check(status.is(200).saveAs("searchStatus"))
  .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
)
You're missing a . (dot) before the exec, so it is never chained onto the feed — it should read feed(testData).exec(...).
As a result, your method returns only the last instruction, i.e. the exec alone.

Scalding Unit Test - How to Write A Local File?

I work at a place where Scalding writes are augmented with a specific API to track dataset metadata. When converting from normal writes to these special writes, there are some intricacies with respect to Key/Value, TSV/CSV, Thrift ... datasets. I would like to verify that the binary file is the same before and after conversion to the special API.
Since I cannot share the specific API for the metadata-inclusive writes, I will only ask: how can I write a unit test for the .write method on a TypedPipe?
implicit val timeZone: TimeZone = DateOps.UTC
implicit val dateParser: DateParser = DateParser.default
implicit def flowDef: FlowDef = new FlowDef()
implicit def mode: Mode = Local(true)

val fileStrPath = root + "/test"
println("writing data to " + fileStrPath)

TypedPipe
  .from(Seq[Long](1, 2, 3, 4, 5))
  // .map((x: Long) => { println(x.toString); System.out.flush(); x })
  .write(TypedTsv[Long](fileStrPath))
  .forceToDisk
The above doesn't seem to write anything to local (OSX) disk.
So I wonder if I need to use a MiniDFSCluster something like this:
def setUpTempFolder: String = {
  val tempFolder = new TemporaryFolder
  tempFolder.create()
  tempFolder.getRoot.getAbsolutePath
}

val root: String = setUpTempFolder
println(s"root = $root")

val tempDir = Files.createTempDirectory(setUpTempFolder).toFile

val hdfsCluster: MiniDFSCluster = {
  val configuration = new Configuration()
  configuration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, tempDir.getAbsolutePath)
  configuration.set("io.compression.codecs", classOf[LzopCodec].getName)
  new MiniDFSCluster.Builder(configuration)
    .manageNameDfsDirs(true)
    .manageDataDfsDirs(true)
    .format(true)
    .build()
}

hdfsCluster.waitClusterUp()
val fs: DistributedFileSystem = hdfsCluster.getFileSystem
val rootPath = new Path(root)
fs.mkdirs(rootPath)
However, my attempts to get this MiniCluster to work haven't panned out either - somehow I need to link the MiniCluster with the Scalding write.
Note: The Scalding JobTest framework for unit testing isn't going to work, because the actual data written is sometimes wrapped in a bijection codec or set up with case class wrappers before the writes made by the metadata-inclusive write APIs.
Any ideas how I can write a local file (without using the Scalding REPL), with either Scalding alone or a MiniCluster? (If using the latter, I need a hint on how to read the file back.)
Answering ... There is an example of how to use a mini cluster for exactly this: reading from and writing to HDFS. I will be able to cross-read my different writes and compare them. It is in the tests for Scalding's TypedParquet type.
HadoopPlatformJobTest is an extension of JobTest that uses a MiniCluster.
With some hand-waving over the details in the link, the bulk of the code is this:
"TypedParquetTuple" should {
"read and write correctly" in {
import com.twitter.scalding.parquet.tuple.TestValues._
def toMap[T](i: Iterable[T]): Map[T, Int] = i.groupBy(identity).mapValues(_.size)
HadoopPlatformJobTest(new WriteToTypedParquetTupleJob(_), cluster)
.arg("output", "output1")
.sink[SampleClassB](TypedParquet[SampleClassB](Seq("output1"))) {
toMap(_) shouldBe toMap(values)
}
.run()
HadoopPlatformJobTest(new ReadWithFilterPredicateJob(_), cluster)
.arg("input", "output1")
.arg("output", "output2")
.sink[Boolean]("output2")(toMap(_) shouldBe toMap(values.filter(_.string == "B1").map(_.a.bool)))
.run()
}
}

Encrypting large file with PGP (BouncyGPG) and sending it over SFTP (JSch) in Kotlin

I've been trying to encrypt large files with PGP (I'm using BouncyGPG for that) and send them to a remote server over SFTP (using JSch). I'm using PipedOutputStream and PipedInputStream to avoid saving the final encrypted file locally before sending it to the server. I believe BouncyGPG is misbehaving because of that.
Can anyone help me with this?
Here's my code:
BouncyGPG.registerProvider()

private val bufferSize: Int = 8 * 1024

fun sendFileSFTP(myFile: MyFile) {
    PipedInputStream().use { input ->
        GlobalScope.launch {
            PipedOutputStream(input).use { output ->
                encrypt(myFile, output)
            }
        }
        sendSFTP(input, "mysftp-directory/${myFile.filename}")
        log.info("Finished Sending $myFile")
    }
}

fun encrypt(myFile: MyFile, output: OutputStream) {
    log.info("ENCRYPTING - $myFile")
    val pubkey = "/home/myhome/server.gpg"
    val privkey = "/home/myhome/server.secret.gpg"
    val password = "password"
    val keyring = KeyringConfigs.forGpgExportedKeys(KeyringConfigCallbacks.withPassword(password))
    keyring.addPublicKey(FileReader(File(pubkey)).readText().toByteArray(charset("US-ASCII")))
    keyring.addSecretKey(FileReader(File(privkey)).readText().toByteArray(charset("US-ASCII")))

    BouncyGPG.encryptToStream()
        .withConfig(keyring)
        .withStrongAlgorithms()
        .toRecipients("contact@myserver.com")
        .andDoNotSign()
        .binaryOutput()
        .andWriteTo(output)

    val filepath = Paths.get(myFile.path, myFile.filename)
    BufferedInputStream(FileInputStream(filepath.toString())).use { input ->
        input.copyTo(output, bufferSize)
    }
    log.info("FINISHED ENCRYPTING - $myFile")
}

fun sendSFTP(inputStream: InputStream, dest: String) {
    log.info("Sending $dest")
    var jsch = JSch()
    jsch.setKnownHosts(properties.knownHosts)
    val jschSession: Session = jsch.getSession(properties.sFTPUsername, properties.sFTPHost, properties.sFTPPort)
    jschSession.setPassword(properties.sFTPPassword)
    jschSession.connect()
    val channel = jschSession.openChannel("sftp") as ChannelSftp
    channel.connect()
    channel.put(inputStream, dest)
}
I'm assuming that BouncyGPG is not signing my file due to the lack of closing (I believe I've read somewhere that it only signs the file after the output is closed), but as far as I know, Kotlin will close the output for me already. Am I missing something?
Thanks
After wrestling with this issue for a while, I gave it a try and generated my own PGP keys (I had been using a test key our client gave us), and guess what? It worked!
Apparently there was something wrong with them! ¯\_(ツ)_/¯
So, my code is working!
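As an aside on the closing concern raised in the question (this is not part of the accepted fix, only a sketch under assumptions): in BouncyGPG's documented usage pattern, andWriteTo(...) returns the OutputStream that the plaintext should be copied into, and that returned stream has to be closed before the final PGP packets are emitted. The encrypt() above discards that return value and copies the plaintext straight into the piped output, so a hypothetical rework, reusing the question's types, could look roughly like this:

fun encrypt(myFile: MyFile, output: OutputStream) {
    val keyring = KeyringConfigs.forGpgExportedKeys(KeyringConfigCallbacks.withPassword("password"))
    keyring.addPublicKey(File("/home/myhome/server.gpg").readBytes())
    keyring.addSecretKey(File("/home/myhome/server.secret.gpg").readBytes())

    // Assumption: andWriteTo() hands back the stream the plaintext goes into.
    val encryptingStream = BouncyGPG.encryptToStream()
        .withConfig(keyring)
        .withStrongAlgorithms()
        .toRecipients("contact@myserver.com")
        .andDoNotSign()
        .binaryOutput()
        .andWriteTo(output)

    // Closing the encrypting stream lets BouncyGPG flush its trailing packets
    // before the SFTP side sees end-of-stream on the pipe.
    encryptingStream.use { cipherOut ->
        File(myFile.path, myFile.filename).inputStream().buffered().use { plain ->
            plain.copyTo(cipherOut)
        }
    }
}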

Perl web server: How to route

As seen in my code below, I am using Apache to serve my Perl web server. I need Perl to have multiple routes for my client, as seen in my %dispatch. If I figure one out I'm sure the rest will be very similar. Looking at my subroutine resp_index, how can I modify it to link to my index.html file located in my root directory, /var/www/perl?
/var/www/perl/perlServer.pl:
#!/usr/bin/perl
{
    package MyWebServer;

    use HTTP::Server::Simple::CGI;
    use base qw(HTTP::Server::Simple::CGI);

    my %dispatch = (
        '/index.html' => \&resp_index,
        # ...
    );

    sub handle_request {
        my $self = shift;
        my $cgi  = shift;

        my $path    = $cgi->path_info();
        my $handler = $dispatch{$path};

        if (ref($handler) eq "CODE") {
            print "HTTP/1.0 200 OK\r\n";
            $handler->($cgi);
        } else {
            print "HTTP/1.0 404 Not found\r\n";
            print $cgi->header,
                  $cgi->start_html('Not found'),
                  $cgi->h1('Not found'),
                  $cgi->end_html;
        }
    }

    sub resp_index {
        my $cgi = shift;    # CGI.pm object
        return if !ref $cgi;

        my $who = $cgi->param('name');

        print $cgi->header,
              $cgi->start_html("index"),
              $cgi->h1("THIS IS INDEX"),
              $cgi->end_html;
    }
}

my $pid = MyWebServer->new()->background();
print "Use 'kill $pid' to stop server.\n";
I think what you're asking is how do you serve a file from your web server? Open it and print it, like any other file.
use autodie;

sub resp_index {
    my $cgi = shift;
    return if !ref $cgi;

    print $cgi->header;

    open my $fh, "<", "/var/www/perl/index.html";
    print <$fh>;
}
Unless this is an exercise, really, really, REALLY don't write your own web framework. It's going to be slow, buggy, and insecure. Consider a small routing framework like Dancer.
For example, mixing documents like index.html and executable code like perlServer.pl in the same directory invites a security hole. Executable code should be isolated in its own directory so it can be given wholly different permissions and stronger protection.
Let's talk about this line...
return if !ref $cgi;
This line is hiding an error. If your function is passed the wrong argument, or no argument, it will silently return and you (or the person using this) will have no idea why nothing happened. This should be an error...
use Carp;
croak "resp_index() was not given a CGI object" if !ref $cgi;
...but really you should use one of the existing function signature modules such as Method::Signatures.
use Method::Signatures;

func resp_index(CGI $cgi) {
    ...
}

Why can I not pickle my case classes? What should I do to solve this manually next time?

Edit 2: Observations and questions
I am pretty sure, along with the commenter Justin below, that the problem is due to an errant build.sbt configuration. However, this is the first time I have seen an errant build.sbt configuration that literally works for everything else except the picklers. Maybe that is because they use macros, which I avoid as a rule.
Why would it matter whether Flow.merge is used vs Flow.map if the problem is with the sbt configuration?
Suspicious build.sbt extract
lazy val server = project
  .dependsOn(sharedJvm, client)
Suspicious stack trace
So this is the top of the stack: it goes from a method I cannot find to the linking environment to the string encoding utils. Ok.
server java.lang.RuntimeException: stub
Huh? stub?
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
HUH?
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
???
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
Edit 1: My big and beautiful build.sbt might be the problem
What you cannot see is that in my project folder I have organized:
JvmDependencies.scala which has regular Jvm dependencies
SjsDependencies.scala which has Def.settingsKeys of libraryDependencies on JsModuleIDs
WebJarDependencies.scala which has javascripts and css's
build.sbt
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .configure(_.enablePlugins(ScalaJSPlugin))
  .settings(SjsDependencies.pickling.toSettingsDefinition(): _*)
  .settings(SjsDependencies.tagsAndDom.toSettingsDefinition(): _*)
  .settings(SjsDependencies.css.toSettingsDefinition(): _*)

lazy val sharedJvm = shared.jvm
lazy val sharedJs = shared.js

lazy val cmdlne = project
  .dependsOn(sharedJvm)
  .settings(
    libraryDependencies ++= (
      JvmDependencies.commandLine ++
      JvmDependencies.logging ++
      JvmDependencies.akka ++
      JvmDependencies.serialization
    )
  )

lazy val client = project
  .enablePlugins(ScalaJSPlugin, SbtWeb, SbtSass)
  .dependsOn(sharedJs)
  .settings(
    (SjsDependencies.shapeless ++ SjsDependencies.audiovideo ++ SjsDependencies.databind ++ SjsDependencies.functional ++ SjsDependencies.lensing ++ SjsDependencies.logging ++ SjsDependencies.reactive).toSettingsDefinition(),
    jsDependencies ++= WebjarDependencies.js,
    libraryDependencies ++= WebjarDependencies.notJs,
    persistLauncher in Compile := true
  )

lazy val server = project
  .dependsOn(sharedJvm, client)
  .enablePlugins(SbtNativePackager)
  .settings(
    copyWebJarResources := {
      streams.value.log("Copying webjar resources")
      val `Web Modules target directory` = (resourceManaged in Compile).value / "assets"
      val `Web Modules source directory` = (WebKeys.assets in Assets in client).value / "lib"

      final class UsefulFileFilter(acceptable: String*) extends FileFilter {
        // TODO ADJUST TO EXCLUDE JS MAP FILES
        import scala.collection.JavaConversions._
        def accept(file: File) = (file.isDirectory && FileUtils.listFiles(file, acceptable.toArray, true).nonEmpty) || acceptable.contains(file.ext) && !file.name.contains(".js.")
      }

      val `file filter` = new UsefulFileFilter("css", "scss", "sass", "less", "map")
      IO.createDirectory(`Web Modules target directory`)
      IO.copyDirectory(source = `Web Modules source directory`, target = `Web Modules target directory` / "script")
      FileUtils.copyDirectory(`Web Modules source directory`, `Web Modules target directory` / "style", `file filter`)
    },
    // run the copy after compile/assets but before managed resources
    copyWebJarResources <<= copyWebJarResources dependsOn(compile in Compile, WebKeys.assets in Compile in client, fastOptJS in Compile in client),
    managedResources in Compile <<= (managedResources in Compile) dependsOn copyWebJarResources,
    watchSources <++= (watchSources in client),
    resourceGenerators in Compile <+= Def.task {
      val files = ((crossTarget in(client, Compile)).value ** ("*.js" || "*.map")).get
      val mappings: Seq[(File, String)] = files pair rebase((crossTarget in(client, Compile)).value, ((resourceManaged in Compile).value / "assets/").getAbsolutePath)
      val map: Seq[(File, File)] = mappings.map { case (s, t) => (s, file(t)) }
      IO.copy(map).toSeq
    },
    reStart <<= reStart dependsOn (managedResources in Compile),
    libraryDependencies ++= (
      JvmDependencies.akka ++
      JvmDependencies.jarlocating ++
      JvmDependencies.functional ++
      JvmDependencies.serverPickling ++
      JvmDependencies.logging ++
      JvmDependencies.serialization ++
      JvmDependencies.testing
    )
  )
Edit 0: A very obscure chat thread has a guy saying what I am feeling: no, not **** scala, but
Mark Eibes @i-am-the-slime Oct 15 2015 09:37
@ochrons I'm still fighting. I can't seem to pickle anything anymore.
https://gitter.im/scala-js/scala-js/archives/2015/10/15
I have a rather simple requirement - I have one WebSocket route on an akka-http server, defined in AkkaServerLogEventToMessageHandler():
object AkkaServerLogEventToMessageHandler
    extends Directives {

  val sourceOfLogs =
    Source.actorPublisher[AkkaServerLogMessage](AkkaServerLogEventPublisher.props) map { event ⇒
      BinaryMessage(
        ByteString(
          Pickle.intoBytes[AkkaServerLogMessage](event)
        )
      )
    }

  def apply(): server.Route = {
    handleWebSocketMessages(
      Flow[Message].merge(sourceOfLogs)
    )
  }
}
This fits into a tiny set of routes in the most obvious way.
Now why is it that I cannot get boopickle, upickle, or prickle to serialize something as simple as this stupid case class?
sealed case class AkkaServerLogMessage(
  message: String,
  level: Int,
  timestamp: Long
)
No nesting
All primitive types
No generics
Only three of them
These all produced roughly the same error
Using all three of the common picklers to write
Using TextMessage instead of BinaryMessage and the corresponding upickle or prickle writeJs or whatever methods
Varying the case class down to nothing (nothing, as in no members)
Varying the input itself to the case class
Importing various permutations of Implicits and underscore stuff
... specifically, they gave me variations on the same stupid error (not the same error, but considerably similar)
server [ERROR] [04/21/2016 22:04:00.362] [app-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(app)] WebSocket handler failed with stub
server java.lang.RuntimeException: stub
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
server at boopickle.Encoder.writeString(Codecs.scala:338)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:183)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:134)
server at boopickle.PickleState.pickle(Pickler.scala:511)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at boopickle.PickleImpl$.apply(Default.scala:70)
server at boopickle.PickleImpl$.intoBytes(Default.scala:75)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:31)
This worked
Not using Flow.merge (defeats the purpose, I want to keep sending output along with the logs)
Using a static value
Other useless things
Appeal
Please let me know where and why I am stupid... I spent four hours on this problem today in different forms, and it is driving me nuts.
In your build.sbt, you have:
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .configure(_.enablePlugins(ScalaJSPlugin))
Do not do this. You must not enable the Scala.js plugin on a cross-project, ever. This also adds it to the JVM side, which will wreak havoc. Most notably, this will cause %%% to resolve the Scala.js artifacts of your dependencies in the JVM project, and that is really bad. This is what causes your issue.
crossProject already adds the Scala.js plugin to the JS part, and only that one. So simply remove that enablePlugins line.
Mystery solved. Thanks to @Justin du Coeur for pointing me in the right direction.
The reason boopickle wasn't working in particular was because in the dependency chain I was including both the sjs and the jvm version of boopickle in the server project.
I removed the server dependsOn for client and for sharedJs and also removed boopickle from the shared dependencies. Now it works.