How to add new fields to a list in KeystoneJS

I want to extend the Blog in KeystoneJS by adding a boolean "frontPage" field to the Post schema, which I want to use to show selected posts on the homepage.
I came up with this code that I put in the updates folder:
var keystone = require('keystone');
var async = require('async');

exports = module.exports = function (done) {
  var post = keystone.list('Post');
  post.add({
    frontPage: Boolean
  });
  done();
};
It seems to work, but the change does not persist when I restart the server. All the docs describe the process of creating new Lists, but none explains how to modify an existing one. I also tried adding a post.register() at the end, but no luck.
Is there a function to persist the new schema, or should I write a shell script outside Keystone for that?
Thank you

It seems to work, but the change does not persist when I restart the server.
Scripts in the application's updates folder are intended for data import or migration, and they are intentionally applied only once to a given deployment.
All the docs describe the process of creating new Lists, but none explains how to modify an existing one.
To add, remove, or change fields in your model, you should modify the file in your Keystone project (e.g. models/Post.js) and then restart your application to pick up the changes.
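For example, a minimal sketch of the relevant part of models/Post.js (the other field names and options here are from the standard Keystone blog template and may differ in your project):

// models/Post.js -- sketch; only the relevant parts shown
var keystone = require('keystone');
var Types = keystone.Field.Types;

var Post = new keystone.List('Post', {
  map: { name: 'title' },
  autokey: { path: 'slug', from: 'title', unique: true },
});

Post.add({
  title: { type: String, required: true },
  // ...your existing fields...
  frontPage: { type: Types.Boolean, default: false, label: 'Show on homepage' },
});

Post.register();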
There is generally no need to create a corresponding update script unless you want to include associated data changes (for example, setting values for the existing documents).
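If you did want such a data migration, a sketch of an update script that backfills the new field on existing posts might look like this (the file name is hypothetical, and this assumes a Mongoose version that supports updateMany; older versions can use update with { multi: true }):

// updates/0.0.2-backfill-frontpage.js -- hypothetical file name
var keystone = require('keystone');

exports = module.exports = function (done) {
  // Default the new field to false on documents that don't have it yet.
  keystone.list('Post').model.updateMany(
    { frontPage: { $exists: false } },
    { $set: { frontPage: false } },
    done
  );
};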

Related

How to avoid two sequential alerts (one for read and one for edit) when using `window.showDirectoryPicker()`

I'm using the File System Access API in Chrome. I'd like to let the user pick a folder, then write into it. My code works, but two permission prompts are shown sequentially, one for read and one for write:

const dirHandle = await window.showDirectoryPicker();
await dirHandle.requestPermission({ mode: "readwrite" });

The first prompt is unnecessary. How can I avoid it?
Interestingly, if the user uses drag and drop, only the second prompt appears after the folder is dropped, which is the desired behavior. The first prompt seems to come from showDirectoryPicker. In an ideal world, I imagine being able to pass an option like showDirectoryPicker({ permission: 'readwrite' }) that requests the two permissions together.
I agree it feels suboptimal, but it's a one-time thing. When you run the same code again and pick the same folder (or a nested folder), there will be no prompts at all.
This design was chosen because two different things are being asked for here:
First, for the app to read all files (which, recursively for subfolders, can be a lot).
Second, for the app to be allowed to write (anywhere) into the folder.
As of Chrome 105, you can get a writable directory with just one prompt:
const dirHandle = await window.showDirectoryPicker({ mode: "readwrite" });
Or be explicit in asking for a read-only directory (which is the default):
const dirHandle = await window.showDirectoryPicker({ mode: "read" });
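Note also the "no prompts at all" behavior mentioned above: once permission has been granted on a handle, you can check it before asking again. A minimal sketch, assuming you've kept a handle around (for example, one restored from IndexedDB), using the standard queryPermission/requestPermission methods:

async function ensureWritePermission(dirHandle) {
  const opts = { mode: "readwrite" };
  // Already granted? Then no prompt is shown at all.
  if ((await dirHandle.queryPermission(opts)) === "granted") {
    return true;
  }
  // Otherwise this shows a single combined read/write prompt.
  return (await dirHandle.requestPermission(opts)) === "granted";
}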

Workbox/Vue: Create a custom variation on an existing caching strategy handler

Background:
I'm building an SPA (Single Page Application) PWA (Progressive Web App) using Vue.js. I have a remote PostgreSQL database, serving the tables over HTTP with PostgREST, and a working Workbox service worker plus an IndexedDB database, which holds a local copy of the database tables. I've also registered some routes in my service-worker.js; everything is fine so far.
I'm letting Workbox cache GET calls that return tables from the REST service. For example:
https://www.example.com/api/customers will return a JSON object containing the customers.
workbox.routing.registerRoute('https://www.example.com/api/customers', workbox.strategies.staleWhileRevalidate())
At this point, I need Workbox to follow the stale-while-revalidate pattern, but to:
Not use a cache, but instead return the local version of the table, which I have stored in IndexedDB (the "cache" part).
Make the REST call, and update the local version if it has changed (the "network" part).
I'm almost certain there is no configurable option for this in this Workbox strategy, so I would write the code for it myself, which should be fairly simple. The "cache" retrieval is simply returning the contents of the requested table from IndexedDB. For the update part, I'm thinking of adding a data revision number to compare against, and thus deciding whether I need to update the local database.
Anyway, we're now zooming in on the actual question:
Question:
Is this actually a good way to use Workbox Routes/Caching, or am I now misusing the technology because I use IndexedDB as the cache?
and
How can I make my own version of the StaleWhileRevalidate strategy? I would be happy to understand how to simply make a copy of the existing Workbox version and be able to import it and use it in my Vue.js Service Worker. From there I can make my own necessary code changes.
To make this question a bit easier to answer, these are the underlying subquestions:
First of all, StaleWhileRevalidate.ts (see the link below) is a TypeScript file. Can (should) I simply import it as a module? I probably can, but then I get errors:
When I try to import my custom CustomStaleWhileRevalidate.ts in my main.js, I get errors on all of the current import statements because (of course) the workbox-core/_private/ directory doesn't exist.
How should I approach this?
This is the current implementation on Github:
https://github.com/GoogleChrome/workbox/blob/master/packages/workbox-strategies/src/StaleWhileRevalidate.ts
I don't think using the built-in StaleWhileRevalidate strategy is the right approach here. It might be possible to do what you're describing using StaleWhileRevalidate along with a number of custom plugin callbacks to override the default behavior... but honestly, you'd end up changing so much via plugins that starting from scratch would make more sense.
What I'd recommend that you do instead is to write a custom handlerCallback function that implements exactly the logic you want, and returns a Response.
// Your full logic goes here.
async function myCustomHandler({event, request}) {
  // Update the local copy in the background (note the async IIFE,
  // so that await is legal inside it).
  event.waitUntil((async () => {
    const networkResponse = await fetch(...);
    // Some IDB update operations go here.
  })());
  // Read the requested data from IndexedDB and build a Response from it.
  const idbStuff = ...;
  return finalResponse;
}
workbox.routing.registerRoute(
  'https://www.example.com/api/customers',
  myCustomHandler
);
You could do this without Workbox as well, but if you're using Workbox to handle some of your unrelated caching needs, it's probably easiest to also register this logic via a Workbox route.
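For illustration, here is a slightly more fleshed-out sketch of such a handler; readTableFromIDB and updateTableInIDB are hypothetical helpers you would implement yourself on top of IndexedDB:

// Sketch only: readTableFromIDB and updateTableInIDB are hypothetical
// IndexedDB helpers, not part of Workbox.
async function customersHandler({event, request}) {
  // The "revalidate" part: fetch in the background and update IndexedDB.
  event.waitUntil((async () => {
    const networkResponse = await fetch(request);
    await updateTableInIDB('customers', await networkResponse.json());
  })());
  // The "stale" part: answer immediately from the local IndexedDB copy.
  const customers = await readTableFromIDB('customers');
  return new Response(JSON.stringify(customers), {
    headers: {'Content-Type': 'application/json'},
  });
}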

LibGit2Sharp: how to Task-Async

Hi, I have begun to use the package for some very simple tasks, mainly cloning a Git wiki repo and subsequently pulling the changes from the server when needed.
I cannot see any methods corresponding to the Task-based Asynchronous Pattern (TAP), and I could not find anything about it in the documentation either.
Could you please give me some direction on how to wrap the LibGit2Sharp methods in a TAP construct? A link to documentation (if I missed something) or just telling me which callback to hook up to the TaskCompletionSource object would be nice.
It also doesn't really help that I am a newbie with Git; normally I only do basic branching, merging, and pushing with it.
For cloning I use:
Repository.Clone(@"https://MyName@bitbucket.org/MyRepo/MyProject.git/wiki", "repo");
For pulling I use:
using (var repo = new Repository("repo"))
{
    // Credential information to fetch
    LibGit2Sharp.PullOptions options = new LibGit2Sharp.PullOptions();
    options.FetchOptions = new FetchOptions();
    var signature = new LibGit2Sharp.Signature(new Identity("myname", "mymail@google.com"), DateTimeOffset.Now);
    Commands.Pull(repo, signature, options);
}
Thanks in advance
First of all, you should never do sync-over-async or async-over-sync. See this article.
If you're thinking of using Task.Run, don't. That will just trade one thread pool thread for another, with the added cost of two context switches.
But you should reconsider your whole approach. You don't need to clone the repository just to get the contents of a file. Each version of a file has a unique URL, and you can even get the URL of a file for a specific branch.

Change results URL in Alfresco Aikau faceted search page

I'm having some difficulty customizing the Aikau faceted search page in Alfresco, which may be more a matter of my lack of knowledge about Dojo/AMD.
What I want to do is replace the document details page URL with a download URL.
I extended the Search Results widget to include my own custom module:
var searchResultWidget = widgetUtils.findObject(model.jsonModel, "id", "FCTSRCH_SEARCH_RESULT");
if (searchResultWidget) {
  searchResultWidget.name = "mynamespace/search/CustomAlfSearchResult";
}
I understand search result URLs are rendered this way:
AlfSearchResult module => uses SearchResultPropertyLink module => mixes in the _SearchResultLinkMixin renderer => brings in the "generateSearchLinkPayload" function => renders URLs depending on the result type
I want to override this generateSearchLinkPayload function, but I can't figure out the best way to do that.
Thanks in advance for the help!
This answer assumes you're able to use the latest version of Aikau (at the time of writing this is 1.0.61). Older versions might require slightly different overriding...
In order to do this you're going to need to override the createDisplayNameRenderer function of AlfSearchResult in your CustomAlfSearchResult widget. This will allow you to create an extension of alfresco/search/SearchResultPropertyLink.
If you want to take advantage of the download capabilities provided by the alfresco/services/DocumentService for downloading both documents and folders (as a zip), then you're going to want to change both the publishTopic and publishPayload of the SearchResultPropertyLink.
You should extend the getPublishTopic and generateSearchLinkPayload functions. For the getPublishTopic function you'll want to change the return value to be "ALF_SMART_DOWNLOAD" (there are constants available for these strings in the alfresco/core/topics module). This topic tells the DocumentService to take care of figuring out whether the node is a folder or a document, and it will make an XHR request for the full node metadata (in order to get the contentUrl attribute that is not included in the data returned by the Search API).
You should extend the generateSearchLinkPayload function so that for document or folder types the payload contains a nodes attribute: an array whose single entry is the search result object you wish to download.
I would recommend that you call this.inherited first to get the default payload and only update it for documents and folders.
Hopefully that all makes sense - if not, add a comment and I'll try to provide further assistance!
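Putting that together, a rough sketch of what the custom link widget might look like (untested; the exact payload shape and the use of this.currentItem are assumptions based on the description above):

define(["dojo/_base/declare",
        "alfresco/search/SearchResultPropertyLink"],
  function (declare, SearchResultPropertyLink) {
    return declare([SearchResultPropertyLink], {
      // Publish to the DocumentService instead of navigating to the details page.
      getPublishTopic: function () {
        return "ALF_SMART_DOWNLOAD";
      },
      generateSearchLinkPayload: function () {
        // Get the default payload first...
        var payload = this.inherited(arguments);
        // ...and only replace it for documents and folders, using the
        // "nodes" array shape that the DocumentService expects.
        var type = this.currentItem && this.currentItem.type;
        if (type === "document" || type === "folder") {
          payload = { nodes: [this.currentItem] };
        }
        return payload;
      }
    });
  });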
This is the answer for 1.0.25.2 - unfortunately it's not quite so straightforward...
You still need to extend the alfresco/search/AlfSearchResult widget, however this time you need to extend the postCreate function (remembering to call this.inherited(arguments)). It's not possible to stop the original alfresco/search/SearchResultPropertyLink widget from being created... so it will be necessary to find it and destroy it.
The widget is not assigned to a variable, so it will be necessary to find it using dijit/registry. Use the byNode function from dijit/registry to find the widget assigned to this.nameNode and then call destroy on it (be sure to pass the argument true to preserve the DOM). However, you will then need to empty the DOM node so that you can start again...
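A rough sketch of that postCreate override (untested, and only illustrative of the steps just described):

define(["dojo/_base/declare",
        "alfresco/search/AlfSearchResult",
        "dijit/registry",
        "dojo/dom-construct"],
  function (declare, AlfSearchResult, registry, domConstruct) {
    return declare([AlfSearchResult], {
      postCreate: function () {
        this.inherited(arguments);
        // The default SearchResultPropertyLink isn't kept in a variable,
        // so look it up via the DOM node it was attached to...
        var link = registry.byNode(this.nameNode);
        if (link) {
          link.destroy(true); // true preserves the DOM...
          domConstruct.empty(this.nameNode); // ...which is then emptied
        }
        // ...and create your custom link widget on this.nameNode here.
      }
    });
  });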
Now you need to add in your extension of alfresco/search/SearchResultPropertyLink. Unfortunately, because the smart download capability is not available, you'll need to do more work. The difference here is that you'll need to make an XHR request to retrieve the full node metadata in order to obtain the contentUrl. It's possible to publish a request to the DocumentService (via the "ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST" topic). However, you need to be aware that having the XHR step will not allow you to then proceed with the download as is. Instead you'll need to use an iframe download solution; I'd suggest you take a look at the changes in the pull request we recently made to solve this problem and backport them into your own solution.

Get a file from IVirtualImageProvider

I have a custom plugin for serving images through LDAP (an IPlugin and an IVirtualImageProvider).
I'm now working on importing users from LDAP into our own system, and as such I need to import their images. I was wondering if there is any way to reuse the plugin I previously created to import those images, perhaps something like:
ImageResizer.ImageJob i = new ImageResizer.ImageJob(
    "http://host/ad/A68986", "~/uploads/<guid>.<ext>",
    new ImageResizer.ResizeSettings("width=2000;height=2000;format=jpg;mode=max"));
But the first parameter (source) would be "resolved" by my LDAP plugin (see the ImageResizer API).
Edit: I figured out this is possible, since source can be an IVirtualFile; that implies I know in advance which one to create (in my case, my own LDAP one). It would be nice to pass the URL and somehow get the correct IVirtualFile.
Yes, ImageJob resolves any 'app-relative virtual paths' using installed IVirtualImageProviders. Such paths must begin with "~/", and match the path prefix and syntax you've designed, of course.
In your case, this might look like
var i = new ImageResizer.ImageJob("~/ad/A68986", "~/uploads/<guid>.<ext>",
    new ImageResizer.ResizeSettings("width=2000;height=2000;format=jpg;mode=max"));
You can also call Config.Current.Pipeline.GetFile to get an IVirtualFile reference based on a path, if you just want the original data.