I created a service worker that loops through a list of files and adds them to the Cache Storage, but it doesn't save all of the files every time. Also, each time I try to run the app on a different device with the same code, it caches a different subset of the files, even though the list is static.
Does anyone have an idea of what the issue could be?
var cacheName = 'erh-cache-v1';
var filesToCache = [
    '/',
    'index.html',
    '/dragon.js',
    '/app.js',
    '/menu.html',
    '/activeShooter.html',
    '/buildings.json',
    '/demonstrations.html',
    '/dragon.js',
    '/earthquakes.html',
    '/elevatorMalfunction.html',
    '/employees.json',
    '/fireEvacuation.html',
    '/flooding.html',
    '/bombThreat.html',
    '/gas.html',
    '/jquery-3.1.0.js',
    '/jquery.cookie.js',
    '/jquery.popmenu.min.js',
    '/keywords.json',
    '/localEmergencyContacts.html',
    '/manifest.json',
    '/medicalAssistance.html',
    '/menu.html',
    '/sw.js',
    '/powerFailure.html',
    '/script.js',
    '/search.png',
    '/settings.html',
    '/severeWeather.html',
    '/site.js',
    '/styles.css',
    '/suspiciousIndividuals.html',
    '/suspiciouspackage.html',
    '/theft.html',
    '/threateningBehaviour.html',
    '/unknownOrSuspiciousSubstances.html',
    '/violenceAndHarassment.html'
];

self.addEventListener('install', function(e) {
    console.log('[ServiceWorker] Install');
    e.waitUntil(
        caches.open(cacheName).then(function(cache) {
            console.log('[ServiceWorker] Caching app shell');
            return cache.addAll(filesToCache);
        })
    );
});
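One thing worth checking (a hedged note, not a confirmed diagnosis): cache.addAll() is all-or-nothing, so if any single request fails the whole install fails, and some browsers also reject the call when the same URL appears twice in the list ('/dragon.js' and '/menu.html' are both listed twice above). A minimal diagnostic sketch that caches each file individually and logs the ones that fail:

self.addEventListener('install', function(e) {
    e.waitUntil(
        caches.open(cacheName).then(function(cache) {
            // cache.add() each file on its own so one bad URL cannot
            // abort the rest, and log any file that fails to cache.
            return Promise.all(filesToCache.map(function(file) {
                return cache.add(file).catch(function(err) {
                    console.error('[ServiceWorker] Failed to cache', file, err);
                });
            }));
        })
    );
});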
I'm writing a program that should output meta-information (size, execute/read/write permissions, time of last modification) for all files in a specified directory.
I managed to retrieve all of that information except the execute/read/write permissions.
I tried to get this info using PosixFilePermissions, but when adding it to the list I get Exception in thread "main" java.lang.UnsupportedOperationException.
Maybe I should use some other library? Or did I make a mistake somewhere? I would be grateful for any advice!
import org.apache.commons.io.comparator.NameFileComparator
import java.io.File
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.attribute.BasicFileAttributes
import java.nio.file.attribute.PosixFilePermissions
import java.util.Arrays

fun long(path: Path): MutableList<String> {
    val listOfFiles = mutableListOf<String>()
    val files = File("$path").listFiles()
    var attr: BasicFileAttributes
    Arrays.sort(files, NameFileComparator.NAME_COMPARATOR)
    files.forEach {
        if (it.isFile) {
            attr = Files.readAttributes(it.toPath(), BasicFileAttributes::class.java)
            // This line throws UnsupportedOperationException on non-POSIX file systems:
            listOfFiles.add("${it.name} ${attr.size()} ${attr.lastModifiedTime()}" +
                    " ${PosixFilePermissions.toString(Files.getPosixFilePermissions(it.toPath()))}")
        } else {
            listOfFiles.add("dir ${it.name}")
        }
    }
    return listOfFiles
}
PosixFilePermissions is only usable on POSIX-compatible file systems (Linux, macOS, etc.); on other file systems, Files.getPosixFilePermissions throws the UnsupportedOperationException you are seeing.
On a Windows system, the permissions have to be queried directly instead:
file.canRead()
file.canWrite()
file.canExecute()
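If it helps, here is a minimal sketch (the helper name permissionString is made up for illustration) that tries the POSIX call first and falls back to the java.io.File checks when the file system doesn't support POSIX attributes:

import java.io.File
import java.nio.file.Files
import java.nio.file.attribute.PosixFilePermissions

fun permissionString(file: File): String = try {
    // Works on Linux/macOS and other POSIX file systems
    PosixFilePermissions.toString(Files.getPosixFilePermissions(file.toPath()))
} catch (e: UnsupportedOperationException) {
    // Fallback for Windows/NTFS: build an "rwx"-style string by hand
    buildString {
        append(if (file.canRead()) 'r' else '-')
        append(if (file.canWrite()) 'w' else '-')
        append(if (file.canExecute()) 'x' else '-')
    }
}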
Implementing a SaaS (multi-tenant) app, I have a situation where the app needs to connect to different databases depending on which user wants to log in. The databases belong to separate institutions. Let's say MANAGER-A for institution-A and MANAGER-B for institution-B each want to log in to their own institution.
The process I'm implementing is this: there are 3 databases involved: DEFAULT-DB, INSTITUTION-A-DB, and INSTITUTION-B-DB. The DEFAULT-DB contains the login and database credentials of each user. This means that before MANAGER-A can log in to his app, he is first authenticated against the DEFAULT-DB; if successful, his details are fetched and written as the parameters of the config.php file, so the connection becomes dynamic based on the params fetched from the DEFAULT-DB. My questions are these:
How do I write these params dynamically to the config.php file?
Secondly, I'm open to expert advice if my implementation is not the best.
Config.php file
<?php
unset($CFG);
global $CFG;
$CFG = new stdClass();

$CFG->dbtype    = 'mysqli';
$CFG->dblibrary = 'native';
$CFG->dbhost    = 'localhost';
$CFG->dbname    = 'mydb';
$CFG->dbuser    = 'root';
$CFG->dbpass    = '';
$CFG->prefix    = 'my_';
$CFG->dboptions = array(
    'dbpersist'   => 0,
    'dbport'      => '',
    'dbsocket'    => '1',
    'dbcollation' => 'utf8mb4_unicode_ci',
);

$CFG->wwwroot  = 'http://localhost:8888/myapp';
$CFG->dataroot = '/Applications/MAMP/data/myapp_data';
$CFG->admin    = 'admin';
$CFG->directorypermissions = 0777;

require_once(dirname(__FILE__) . '/lib/setup.php');
This is Moodle. I have tried IOMAD, it is a great app but does not address my need.
That is a bad solution, IMHO. If you rewrite the configuration file, what happens when the next request comes in and loads that file? It will load the wrong configuration.
I would create two additional configuration files instead: config_inst_a.php and config_inst_b.php. Then set a session variable when the user logs in that contains the name of the settings file to load. You can then redefine the database variables in the additional settings files. If the session variable has a filename in it, load that file AFTER the default config and the database connection values will be replaced.
Added sample code:
Really, really brief:
// Log the user in here and work out which company they belong to...
session_start();
$_SESSION['User_ConfigFile'] = 'settings' . $userCompany . '.php';

// More code, or a page redirect, or whatever, but the line below goes on
// every page AFTER the default config is included:
session_start();
require_once($_SESSION['User_ConfigFile']);
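For completeness, a minimal sketch (file and credential names are hypothetical) of what one of the per-institution override files could contain; because it is loaded after the default config, its assignments simply replace the defaults in $CFG:

<?php
// settingsACME.php -- hypothetical per-institution override, loaded AFTER
// the default config.php so these values overwrite the defaults in $CFG.
$CFG->dbhost = 'localhost';
$CFG->dbname = 'institution_a_db';
$CFG->dbuser = 'institution_a_user';
$CFG->dbpass = 'secret';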
In case it helps somebody, here is my solution. It's not in production yet, but for now it works like a charm.
if ($_SERVER['SERVER_NAME'] == 'institutionA.mydomain.com') {
    $CFG->dbname   = 'institutionA';        // database name, eg moodle
    $CFG->dbuser   = 'user_institutionA';   // your database username
    $CFG->wwwroot  = 'https://institutionA.mydomain.com';
    $CFG->dataroot = 'dataroot_institutionA';
    //
    $CFG->some_custom_data = 'my_institutiona_data';
} else if ($_SERVER['SERVER_NAME'] == 'institutionB.mydomain.com') {
    $CFG->dbname   = 'institutionB';        // database name, eg moodle
    $CFG->dbuser   = 'user_institutionB';   // your database username
    $CFG->wwwroot  = 'https://institutionB.mydomain.com';
    $CFG->dataroot = 'dataroot_institutionB';
    //
    $CFG->some_custom_data = 'my_institutionB_data';
} else {
    // ... anything you need in this case
}
I have two disks defined in my filesystems.php config file:
'd1' => [
    'driver' => 'local',
    'root' => storage_path('app/d1'),
],
'd2' => [
    'driver' => 'local',
    'root' => storage_path('app/d2'),
],
These disks could also be Amazon S3 buckets, and there could be a combination of an S3 bucket and a local disk.
Let's say I have a file as app/d1/myfile.txt which I want to move to app/d2/myfile.txt.
What I'm doing now is
$f = 'myfile.txt';
$file = Storage::disk('d1')->get($f);
Storage::disk('d2')->put($f, $file);
and leaving the original file on d1 as it doesn't bother me (I periodically delete files from d1).
My questions are:
Is the code below atomic? How would I check whether it is, and if not, how would I make it atomic (for scenarios where the files are around 1 GB in size)?
$f = 'myfile.txt';
$file = Storage::disk('d1')->get($f);
Storage::disk('d2')->put($f, $file);
Storage::disk('d1')->delete($f);
Is there a simple way to move files from one disk to another using the Storage facade? At the moment I need it to work from one local disk to another, but in the future I might need to move them from one S3 bucket to the same one, from one S3 bucket to another, or from a local disk to an S3 bucket.
Thanks
I think this way is cleaner, and because it streams the file contents instead of loading each whole file into memory, it also works for large files and remote paths:
$directories = ['dir1', 'dir2', 'dir3'];
$from = 'public';
$to = 'assets';

foreach ($directories as $directory) {
    $files = Storage::disk($from)->allFiles($directory);

    foreach ($files as $file) {
        Storage::disk($to)->writeStream($file, Storage::disk($from)->readStream($file));

        // If you no longer need the originals
        //Storage::disk($from)->delete($file);
    }

    Storage::disk($from)->deleteDirectory($directory);
}
The move method may be used to rename or move an existing file to a new location.
Storage::move('old/file.jpg', 'new/file.jpg');
However, to do this between disks you need the full path of each file. Note that this only works when both disks are local, since File::move operates on local filesystem paths.
// Convert to full paths
$sourcePath      = Storage::disk($sourceDisk)->getDriver()->getAdapter()->applyPathPrefix($sourceFile);
$destinationPath = Storage::disk($destDisk)->getDriver()->getAdapter()->applyPathPrefix($destFile);

// Make the destination folder if it doesn't exist yet
if (!File::exists(dirname($destinationPath))) {
    File::makeDirectory(dirname($destinationPath), null, true);
}

File::move($sourcePath, $destinationPath);
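To cover the S3 cases as well, here is a minimal sketch using only the Storage facade (the helper name moveAcrossDisks is made up for illustration): stream the file between the two disks and then delete the original, which avoids pulling large files fully into memory:

use Illuminate\Support\Facades\Storage;

// Hypothetical helper: stream-copy a file between any two configured disks
// (local or S3) and then delete the original.
function moveAcrossDisks(string $fromDisk, string $toDisk, string $path): void
{
    $stream = Storage::disk($fromDisk)->readStream($path);
    Storage::disk($toDisk)->writeStream($path, $stream);

    if (is_resource($stream)) {
        fclose($stream);
    }

    Storage::disk($fromDisk)->delete($path);
}

// Usage:
moveAcrossDisks('d1', 'd2', 'myfile.txt');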
We're utilizing the vCloud API to interact with virtual machines (create machines, perform actions, switch media, etc.). One requested function is the ability to upload media (specifically ISOs) to a particular catalog. The API guide (pg 67) is fairly straightforward, and our multi-part requests to the URL that is provided when the upload starts go off without a hitch.
Note: we have to declare the file size before starting the upload.
The only thing that seems amiss during the upload itself is that the "transferred size" ends up being larger than the "file size" at the end of the process. This is somewhat odd because our Content-Range never exceeds the expected file size (we assume that metadata is being included without us having a say). Once the transferred size exceeds the file size, the status of the file upload changes to "Error", but the API still returns a 200 OK:
{
    "name": "J Small 4",
    "description": "",
    "files": [{
        "name": "file",
        "totalSize": 50696192,
        "status": "Error",
        "link": "https://cloud01.cs2cloud.com/transfer/27b8f93c-8319-419e-9e8c-15622097670b/file",
        "transferredSize": 54293177
    }],
    "id": "urn:vcloud:media:1cec68ef-f22e-4ec7-ae5d-dfbc4f7137d9",
    "catalogId": "urn:vcloud:catalogitem:19dbfdd8-ea70-4355-abc7-96e34dccb869"
}
Not sure where to even start debugging this, since all the API calls come back with 200 OK, the .ISO file seems to be fine, our Content-Range headers never go outside the established file size, and the metadata seems to be out of our control in terms of editing or measuring it.
Hoping someone has experienced this issue before and can provide some insight towards a solution.
It turns out the issue wasn't with the vCloud API at all, but with how we were chunking up the media file. We initially used FileReader() to chunk up the file and send the chunks over to the VMware API.
Theoretically, we were choosing the chunk size and could then generate and set the Content-Range accordingly, but in reality the Content-Length of what we sent differed from the chunk size we had chosen, so the Content-Range no longer matched what actually went over the wire. We're still not entirely sure why it happened (maybe extra metadata being added on), but we found a solution.
The fix: we eliminated FileReader() altogether and just put the file slices directly into a blob (as you can see below).
$scope.parseMediaFile = function(url, file, catalogId) {
    $scope.uploadingMediaFile = true;
    var fileSize = file.size;
    var chunkSize = 1024 * 1024 * 5; // bytes
    var offset = 0;
    var self = this; // we need a reference to the current object
    var chunkReaderBlock = null;
    var chunkNum = 0;

    if (fileSize < chunkSize) {
        chunkSize = fileSize;
    }

    chunkReaderBlock = function(_offset, length, _file) {
        // Slice the file directly into a Blob -- no FileReader involved,
        // so blob.size is exactly what gets uploaded.
        var blob = _file.slice(_offset, length + _offset);
        var beginRange = _offset;
        var endRange = _offset + length;
        if (endRange > _file.size) {
            endRange = _file.size;
        }

        var contentRange = beginRange + "-" + endRange;

        vdcServices.uploadMediaFile(url, blob, fileSize, contentRange).then(
            function(resp) {
                vdcServices.getUploadStatus($scope.company, catalogId).then(function(resp) {
                    var uploaded = resp.data.files[0].transferredSize;
                    $scope.mediaPercentLoaded = $scope.trunc((uploaded / fileSize) * 100);

                    if (endRange == _file.size) {
                        $scope.closeModal();
                        return;
                    }

                    // Upload the next chunk
                    chunkReaderBlock(_offset + length, chunkSize, file);
                }, function(err) {
                    // Status check failed: step back and re-send the previous chunk
                    $scope.errorMsg = err;
                    chunkReaderBlock(_offset - length, chunkSize, file);
                });
            },
            function(err) {
                $scope.errorMsg = err;
            }
        );
    };

    // Start the read with the first block
    if (offset < fileSize) {
        chunkReaderBlock(offset, chunkSize, file);
    }
};
Doing so allowed us to actually control the Content-Length, and since we can tell when the number of bytes transferred equals the file size, we can then complete the process.
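As a side note, a small sketch (not part of our production code) of one way to sanity-check this kind of chunking: a Blob's size property is the exact number of bytes that will go over the wire, so it can be asserted against the chunk width used to build the Content-Range.

var blob = file.slice(offset, offset + chunkSize);
var expected = Math.min(chunkSize, file.size - offset);

// blob.size is the actual byte count of the slice; if it ever differs from
// the chunk size used to build Content-Range, the headers will be wrong.
console.assert(blob.size === expected,
    'slice size ' + blob.size + ' does not match expected ' + expected);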
I'm developing a ColdBox application with modules and wanted to use its caching functionality to cache a view for some time.
component {

    property name="moduleConfig" inject="coldbox:moduleConfig:mymodule";

    ...

    function widget(event, rc, prc) {
        var viewData = this.getData();
        return renderView(
            view = "main/widget",
            args = viewData,
            cache = true,
            cacheSuffix = ":" & moduleConfig.entryPoint,
            cacheTimeout = 2
        );
    }

}
I tried to set the default caching config by adding the following info to my Cachebox.cfc and removed the cacheTimeout from the code above:
cacheBox = {
    defaultCache = {
        objectDefaultTimeout = 1, // one minute default
        objectDefaultLastAccessTimeout = 1, // one minute idle time
        useLastAccessTimeouts = false,
        reapFrequency = 5,
        freeMemoryPercentageThreshold = 0,
        evictionPolicy = "LRU",
        evictCount = 1,
        maxObjects = 300,
        objectStore = "ConcurrentStore", // guaranteed objects
        coldboxEnabled = false
    },
    caches = {
        // Named cache for all ColdBox event and view template caching
        template = {
            provider = "coldbox.system.cache.providers.CacheBoxColdBoxProvider",
            properties = {
                objectDefaultTimeout = 1,
                objectDefaultLastAccessTimeout = 1,
                useLastAccessTimeouts = false,
                reapFrequency = 5,
                freeMemoryPercentageThreshold = 0,
                evictionPolicy = "LRU",
                evictCount = 2,
                maxObjects = 300,
                objectStore = "ConcurrentSoftReferenceStore" // memory sensitive
            }
        }
    }
};
However, that didn't have any influence on the caching. I've also tried adding the config above to my Coldbox.cfc.
Even if I create a completely new test app via CommandBox (coldbox create app MyApp), then only set the caching in Cachebox.cfc to one minute, set viewCaching = true in Coldbox.cfc, and set event.setView( view="main/index", cache=true ) in the Main.cfc handler, it doesn't work as expected.
No matter what I do, the view is always cached for at least 5 minutes.
Is there something I am missing?
Make sure you have enabled view caching in your ColdBox configuration. Go to the /config/ColdBox.cfc file and add this key:
coldbox = {
    // Activate view caching
    viewCaching = true
};
Also, did you mistype the name of the CFC you changed for the caching above? Those changes should be in the /config/CacheBox.cfc file, not in /config/ColdBox.cfc.
Also, the reapFrequency in the /config/CacheBox.cfc needs to be set to a smaller value in order for the cache entry to be removed earlier.
Though, as the documentation states:
The delay in minutes to produce a cache reap (Not guaranteed)
It is not guaranteed that the cached item is really removed after exactly that time, so the entry may only disappear after, say, 3 minutes even when reapFrequency is set to 1.