I have an existing FineUploader implementation for small files using the Traditional (upload-to-server) version which is working great. However, I'd like to also allow Direct S3 uploads from a different part of the application which deals with large attachments, without rewriting the existing code for small files.
Is there some way to allow both Direct S3 and Traditional uploads to work alongside each other? This is a single-page application, so I can't just load one or the other Fine Uploader version depending on which page I'm on.
I tried just including both fine-uploader JS files, but it seemed to break my existing code.
Client-side code:
$uploadContainer = this.$('.uploader')
$uploadButton = this.$('.upload-button')
$uploadContainer.fineUploader(
  request:
    endpoint: @uploadUrl
    inputName: @inputName
    params:
      authenticity_token: $('meta[name="csrf-token"]').attr('content')
  button: $uploadButton
).on 'complete', (event, id, fileName, response) =>
  @get('controller').receiveUpload(response)
Good find, @Melinda.
Fine Uploader lives within a custom namespace so that it does not conflict with other potential global variables; this is the qq namespace (historically named). What is happening is that each custom build redeclares this namespace, along with all of its member objects, when you include it via <script> tags on your page.
I've opened an issue on our bug tracker that explains the problem in more technical detail, and we're looking to prioritize a fix to the customize page so that in the future no one will run into this issue.
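In the meantime, both uploader types can coexist as long as the qq namespace is only declared once, i.e. by including a single build that contains both the traditional and the S3 modules rather than two separate builds. A minimal, untested sketch assuming such a combined build (the element IDs and bucket URL are hypothetical):
// With one combined build, qq is declared once and both constructors are available.
var smallFileUploader = new qq.FineUploader({
  element: document.getElementById('small-file-uploader'),   // hypothetical element
  request: {endpoint: '/uploads'}                            // your existing server endpoint
});

var largeFileUploader = new qq.s3.FineUploader({
  element: document.getElementById('large-attachment-uploader'), // hypothetical element
  request: {endpoint: 'https://my-bucket.s3.amazonaws.com'}      // hypothetical bucket; a real S3 setup also needs credential/signature options
});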
I'm using React 17 with cra-template-pwa to create a PWA. One of my UI libraries has several hundred static image resources that all get preloaded in the PWA (and I don't use most of them). This causes a long delay in enabling the PWA and even causes Lighthouse to crash. I'm looking at various approaches to fixing the problem, but as a quick fix just to run Lighthouse, I'd like to simply disable precaching. I haven't been able to find concrete info on how to do this. Any advice?
The cleanest solution would entail using the exclude option in the workbox-webpack-plugin configuration, but that requires ejecting in create-react-app.
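For reference, a minimal sketch of what that ejected configuration might look like (the swSrc path and the .jpg pattern are assumptions, adjust as needed):
// webpack.config.js (only available after ejecting from create-react-app)
const {InjectManifest} = require('workbox-webpack-plugin');

module.exports = {
  // ...the rest of the webpack configuration...
  plugins: [
    new InjectManifest({
      swSrc: './src/service-worker.js',
      // Keep source maps and .jpg images out of the generated precache manifest.
      exclude: [/\.map$/, /\.jpg$/],
    }),
  ],
};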
Something you can do without ejecting, though, is to explicitly filter out entries from the injected self.__WB_MANIFEST array before passing the value to precacheAndRoute().
Your service-worker.js could look something like:
import {precacheAndRoute} from 'workbox-precaching';
// self.__WB_MANIFEST will be replaced with an
// Array<{url: string, revision: string}> during the build process.
// This will filter out all manifest entries with URLs ending in .jpg
// Adjust the criteria as needed.
const filteredManifest = self.__WB_MANIFEST.filter((entry) => {
return !entry.url.endsWith('.jpg');
});
precacheAndRoute(filteredManifest);
The downsides of this approach are that your service-worker.js file will be a bit larger than necessary (since it will include inline {url, revision} entries that aren't needed), and that you'll end up triggering the service worker update flow more often than strictly necessary if the contents of one of your images change. Those unnecessary service worker updates won't actually harm anything or change the behavior of your web app, though.
I am going to make an app, but I am stuck on one issue: I am unable to find which file in Nextcloud contains the code that uploads files.
I want to find the file where the file-uploading code lives.
The app will make a duplicate of an uploaded file and save it in the same directory under a slightly changed name.
The public API for handling files lives in the \OCP\Files namespace; the implementation is in the \OC\Files namespace (https://github.com/nextcloud/server/tree/master/lib/private/Files).
Rather than modifying this code, you should use the hooks functionality (never use classes or functions in the \OC\* namespace!): https://docs.nextcloud.com/server/12/developer_manual/app/hooks.html. This way you can execute your own code when a file is created, updated, etc.
I guess you need the postWrite hook. Some sample code (untested):
// Runs after a file has been written; $node is the file that was just created or updated.
\OC::$server->getRootFolder()->listen('\OC\Files', 'postWrite', function (\OCP\Files\Node $node) {
    // Copy the node; adjust the target path as needed (e.g. same directory, modified name).
    $node->copy('my/path');
});
We have an old Yii application along with a new Symfony one.
The basic idea is simple: I need to check whether the request matches a route in the Symfony application; if it does, great, and if not, bootstrap the Yii application and try to handle the request with it.
The main goal is to avoid instantiating AppKernel (and loading its autoload.php, since each project has its own autoload.php) before I am sure there is a matching route.
Can I do it somehow?
We've done this before with legacy applications.
There are two approaches you can take.
Wrap your old application inside a Symfony project (recommended).
Unfortunately this will indeed load the Symfony front controller and kernel; there is no way around that. You need to make sure that Symfony can't handle the request, and to do that the kernel needs to be booted up.
Use sub-directories and apache virtual hosts to load one application vs the other as needed.
Given option 1,
You can either create your own front controller that loads either Symfony or Yii by reading the routes (from static files if you use YAML or XML; annotations would be more complex), or use an event listener (a RequestListener) that listens for HttpKernelInterface::MASTER_REQUEST and ensures that a route can be returned.
Creating your own front controller is the only way to avoid loading the Symfony kernel, but it will require you to write something that understands the routes of both frameworks (or at least Symfony's) and hands off the request appropriately.
Event listener example:
public function onKernelRequest(GetResponseEvent $event)
{
    if (HttpKernelInterface::MASTER_REQUEST !== $event->getRequestType()) {
        return;
    }

    // ...code to continue normally, or bootstrap Yii and return a custom Response...
    // (can use include + ob_start, make an HTTP request, etc.)
}
public static function getSubscribedEvents()
{
    return [
        KernelEvents::REQUEST => ['onKernelRequest'],
    ];
}
As you can see, the kernel needs to be booted to ensure Symfony can't serve the route, unless you create your own front controller (as stated above).
A third approach would be to create a fallback controller that loads a specified URL if no route is found within Symfony. This approach is generally used for legacy projects that lack a framework and use page scripts instead of proper routes, and it definitely requires the help of output buffering.
The EventListener approach gives you the opportunity to create a proper Request to hand off to Yii, and to turn whatever is returned into a proper Symfony Response object (you can also use output buffering or other options).
Thank you.
This is an alternative to vpassapera's solution: http://stovepipe.systems/post/migrating-your-project-to-symfony
I want to upload files using the FormData object (HTML5) in Dojo, without using XMLHttpRequest.
I am using dojo.xhrPost to upload files.
Please post your ideas/thoughts and experience.
Thanks
Mathirajan S
Based on your comment I am assuming you do want to use XHR (which would make sense given that FormData is part of the XHR2 spec).
dojo/request/xhr (introduced in Dojo 1.8) supports passing a FormData object via the data property of the options object, so that may get you what you want.
request.post(url, {
data: formdataObjectHere
// and potentially other options...
}).then(...);
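For completeness, a fuller (untested) sketch, assuming an AMD setup and a hypothetical <form id="uploadForm"> that contains a file input:
require(['dojo/request/xhr'], function (xhr) {
  // Build a FormData object from an existing form; names here are illustrative.
  var formData = new FormData(document.getElementById('uploadForm'));

  xhr.post('/upload', {
    data: formData,   // handed straight through to the underlying XHR2 request
    handleAs: 'json'
  }).then(function (response) {
    console.log('Upload finished:', response);
  }, function (err) {
    console.error('Upload failed:', err);
  });
});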
The legacy dojo/_base/xhr module does not explicitly support XHR2, but it now delegates to dojo/request/xhr, so it might end up working anyway; no guarantees there.
More information on dojo/request/xhr can be found in the Reference Guide.
Does anybody know whether Direct Update updates everything that lives in the common directory structure? I use the same code base for multiple apps, the only change being certain settings within a JS file that tells the application how to act. Is there a directory I can put that JS file in that would be safe from the Direct Update feature?
I can't seem to find any specific information on IBM's website.
I think you guys need to be careful about which terms you are using, in order not to confuse people who may be looking for similar help.
Environments are specific to the OS you are using: iOS, BlackBerry, Android, etc.
Skins are based on the environment and aren't generic to all platforms. When you create a skin you must choose which environment it runs in.
So, to correct some of the above: Direct Update will update all skin resources in the targeted environment.
For example, say you have an app with Android and iOS versions.
When you create skins, you are essentially creating a responsive type of design driven by your parameters. For instance, if you have Android OS 2.3 vs 4.2, you can set a look and feel for each. However, these utilize a single web resource base: the APK would be the same for both versions of the app (by default) and have two available skins. At runtime, IBM Worklight's 'Runtime Skinning' (hence the name) runs the parameter check for the OS and loads that skin's overriding web code.
You could technically override all of the web code to be completely different for both skins, but that would be bulky and inefficient.
When you direct update, you are updating all the web resources of that particular environment (including both skins), not the common folder/environment.
So an updated Android app (both skins) would have updated web resources (if you deployed the Android wlapp), and the iOS version would stay the same.
If you look at the Android project after a build (native -> assets -> www -> default or skin), you can find the shared web resources generated by the common environment. However, those are only put there each time you do a new build.
In the picture, on the left I have an older version of the Android app built with both skins; on the right is a preview of the newer common resources after deploying only the common.wlapp. So you can see that they are separate.
Sorry if that was long-winded, but I thought I would be thorough.
To answer the original question: have you thought of having all the store-specific parameters loaded from user input or a setup step? If you are trying to connect to 3 different stores, create some form of settings control that accesses the different back ends or specific adapters. You could also create 3 different config.js files that load depending on the parameters you set (sketched below). The other option is to build different versions of your app, one per store.
For example, versions 1.11, 1.12 and 1.13 can be 3 versions of the same app for stores 1, 2 and 3. They can be modified and changed independently and have 3 sets of web resources. When you need to update, jump up to versions 1.21, 1.22 and 1.23. It seems like a bit of a workaround, but it may be your best bet for getting 3 versions of the same app to fall within the single-application category (keep 3 config.js variants to modify for the 3 stores).
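A purely illustrative sketch of the config.js idea (all names, adapters and values below are hypothetical):
// config.js (illustrative): one settings object per store, chosen at runtime.
var STORE_CONFIGS = {
  store1: {adapter: 'Store1Adapter', itemsPerPage: 20},
  store2: {adapter: 'Store2Adapter', itemsPerPage: 50},
  store3: {adapter: 'Store3Adapter', itemsPerPage: 10}
};

// Pick the configuration based on a value captured from user input or a setup screen.
function getStoreConfig(storeId) {
  return STORE_CONFIGS[storeId] || STORE_CONFIGS.store1; // fall back to a default store
}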
To the best of my knowledge, Direct Update will update every web resource of the skin you're using (HTML, CSS, JS). However, I'm no expert on it.
If you're supporting only Android and iOS applications and need a way to store settings, I recommend JSONStore. Otherwise, look into Cordova Storage, Local Storage or IndexedDB.
Using a JSONStore collection called settings will allow you to store data on disk inside the app's directory. It will persist until you call one of the removal methods like destroy or until the application is uninstalled. There are also ways of linking collections to Worklight Adapters to pull/push data from/to a server. The links below will provide further details.
the only change being certain settings within a js
Create a collection for your settings:
var options = {};

options.onSuccess = function () {
  //... what to do after init finished
};

options.onFailure = function () {
  //... what to do if init fails
};

var settings = WL.JSONStore.initCollection('settings',
  {background: 'string', itemsPerPage: 'number'}, options);
You can add new settings after initCollection onSuccess has been called:
settings.add({background: 'red', itemsPerPage: 20}, options);
You can find settings stored after initCollection onSuccess has been called:
settings.findAll({onSuccess: function (results) {
  console.log(JSON.stringify(results));
}});
You can read more about JSONStore in the Getting Started Modules. See Modules: 7.9, 7.10, 7.11, 7.12. There is further information inside the API Documentation in IBM InfoCenter. The methods described above are: initCollection, add and findAll.
Since version 5.0.3, I think, Direct Update will not update all the web resources, only those of the skin you are using.
Say you have skin def and skin skin2.
If you are on def:
make a change to def on the server -> you will get a direct update for def only
make a change to skin2 on the server -> no direct update for you
If you are on skin2:
make a change to skin2 on the server -> direct update for skin2 only
make a change to def JavaScript that also resides in skin2 (so the end result is the def+skin2 concatenation) -> update for skin2 only
make a change to def that only touches a picture (also check the picture extension settings in the application-descriptor) -> no direct update
That's how Direct Update works.
Please also share some more details about the problem. I see you use a JS file; where do you change it? What exactly do you mean? Give a better (simplified) real-life example, because it is unclear what you are trying to do.