I have a specific use case where, from my extension, I want users to be able to type !myamltemplate in a YAML file and load in a designated YAML template for them to start editing. This is similar to the ! Emmet abbreviation that VSCode currently supports.
So is there a way to create my own custom Emmet abbreviations, and if not, what's the best way to implement this use case?
Edit1:
I see we have this doc on how to use custom Emmet snippets, but the example shows an HTML one.
Can this be used for YAML files as well?
Can the YAML template be loaded from a file?
I would also like to have diagnostics and validation on the user-edited values in the YAML file.
Edit2:
I was able to achieve my use case with Snippets
// package.json
"contributes": {
"snippets": [
{
"language": "yaml",
"path": "./snippets.json"
}
]
}
// snippets.json
{
  "yaml": {
    "prefix": "!mytemplate",
    "body": [
      "name: install and configure DB",
      "hosts: testServer",
      "vars:",
      "  oracle_db_port_value: 1521",
      "tasks:",
      "  - name: Install the Oracle DB",
      "    yum: <code to install the DB>",
      "",
      "  - name: Ensure the installed service is enabled and running",
      "    service:",
      "      name: <your service name>"
    ],
    "description": "Creates a yaml definition from template"
  }
}
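One of the Edit1 questions was whether the YAML template can be loaded from a file. A snippet contributed through package.json is static, but an extension command can read a template file and insert it at the cursor. Below is a minimal sketch of that, assuming a hypothetical command id, template path, and file layout (none of these come from the snippet contribution above):

// extension.js -- hypothetical command that reads a YAML template bundled
// with the extension and inserts it into the active editor.
const vscode = require('vscode');

function activate(context) {
  const disposable = vscode.commands.registerCommand(
    'myExtension.insertYamlTemplate', // hypothetical command id
    async () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        return;
      }
      // Template file shipped with the extension (path is an assumption).
      const templateUri = vscode.Uri.joinPath(
        context.extensionUri, 'templates', 'mytemplate.yaml');
      const bytes = await vscode.workspace.fs.readFile(templateUri);
      const text = Buffer.from(bytes).toString('utf8');
      // Inserting as a SnippetString keeps tab stops like $1 working
      // if the template contains them.
      await editor.insertSnippet(new vscode.SnippetString(text));
    });
  context.subscriptions.push(disposable);
}

exports.activate = activate;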
Questions
Is this the best approach for this use case?
Where can I find samples for .tmSnippets, as described here in Using TextMate snippets?
Thank you!
Related
I have an Expo app and I'm trying to get into the app from external links (universal/deep linking). Android is up and running perfectly, but iOS is not working.
the app.json file is configured as described here:
"associatedDomains": [
"applinks:*.<DOMAIN_NAME>.com",
"applinks:<SUB_DOMAIN>.<DOMAIN_NAME>.com"
]
Because my app uses a WebView of my original website, the /.well-known/apple-app-site-association file is located in my client repository under the src folder, like so:
Content-Type: application/pkcs7-mime
{
"applinks": {
"apps": [],
"details": [{
"appID": "<APP_ID>",
"paths": ["/login/*"]
}]
}
}
Now, the AASA validator passes, but external links still do not open the native app.
What should I do?
I found the answer on the Expo forums and want to post the solution here, because I spent hours/days trying to figure it out and kept running into this question here on SO. And because this is only easily testable in production (vs local or release channel deployments), I want to save other users the pain of releasing their app with broken universal deep links.
Expo's documentation isn't clear here. The iOS section of Expo's linking guide is fairly vague, and some of the language is not explicit or clear enough. To make matters worse, everything you find on SO or elsewhere is filled with conflicting and out-of-date info.
While the OP's question is likely solved solely by the last item in this list, I want to cover a couple of things I also had trouble figuring out, for posterity and to clear up what I kept trying to piece together:
appID is basically a combination of the iOS team ID, which can be found in the Apple developer console under Membership, and the bundle ID, found in App Store Connect > App > App Information (it should also be in your app.json).
Some of Apple's documentation can be confusing, but wildcards do match more than one character. So, for example, a path of "/auth/*" will properly route a URL with the path "/auth/login".
You do not need to sign your AASA file.
Do not include https or paths in the associatedDomains configured in app.json; it should just be the domain (e.g. "applinks:google.com"). See the example after this list.
The working MIME type for your AASA is application/pkcs7-mime and not application/json.
Your AASA file can be in your root directory or in .well-known [docs]; I suggest placing it at both paths, as this will help create some redundancy. You can upload one file and configure your server to resolve the same file for both paths (a small server sketch is included after the schema examples below).
You need to enable the Associated Domains service for your identifier. Expo states this in their documentation, but the navigation to where to do this is out of date. On the developer portal, go to Certificates, Identifiers & Profiles > Identifiers > click on the identifier that matches your app's bundle ID > Capabilities > check "Associated Domains". It will tell you that this will invalidate your signing profile and that you'll need a new one to deploy your app. As long as you are in a managed Expo app, Expo will handle this for you when you create a new build.
The main problem you're facing is that the AASA file, or apple-app-site-association, has gone through 3 different iterations with different schemas. Each schema is fairly different, and you need to combine all 3 schemas to support all current + older versions of iOS. The schema in the OP's question is used by iOS 11 & 12 and is not supported by iOS 13+ (which came out in 2019). Expo's documentation is unclear about this and makes it sound like the displayed version will work with all versions and that you can use the new schema if you want the added features; however, the new schema is absolutely necessary for newer iPhones. The Expo documentation has the new schema minified, so you don't even realize it exists if you're not paying attention.
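To make the associatedDomains item above concrete, a minimal app.json fragment might look like this (the domain names are placeholders, not from the original question):

// app.json (fragment) -- domain names are placeholders
{
  "expo": {
    "ios": {
      "associatedDomains": [
        "applinks:example.com",
        "applinks:sub.example.com"
      ]
    }
  }
}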
Another user explained this solution in depth on the Expo forums. I won't go into as much depth, but will include his combination of the different schemas that worked for me, so that it's easier to find (although the new schema is probably sufficient if you don't plan on supporting anything lower than iOS 13, as iOS 15 is the current version as of this writing).
Example of all the schemas combined.
{
"applinks": {
"apps": [],
"details": [
{
"appIDs": [ "TEAMID.bundleidentifier", "ABCDE12345.com.example.app2" ],
"components": [
{
"#": "no_universal_links",
"exclude": true,
"comment": "Matches any URL whose fragment equals no_universal_links and instructs the system not to open it as a universal link"
},
{
"/": "/buy/*",
"comment": "Matches any URL whose path starts with /buy/"
},
{
"/": "/help/website/*",
"exclude": true,
"comment": "Matches any URL whose path starts with /help/website/ and instructs the system not to open it as a universal link"
},
{
"/": "/help/*",
"?": { "articleNumber": "????" },
"comment": "Matches any URL whose path starts with /help/ and which has a query item with name 'articleNumber' and a value of exactly 4 characters"
}
]
},
{
"appID": "TEAMID.bundleidentifier",
"paths": [ "/buy/*", "/help/website/*", "/help/*" ]
},
{
"appID": "OTHTEAMID.otherbundleidentifier",
"paths": [ "/blog", "/blog/post/*" ]
},
{
"appID": "YAOTHTEAMID.yetanotherbundleidentifier",
"paths": [ "*" ]
}
]
},
"activitycontinuation": {
"apps": [
"TEAMID.bundleidentifier",
"OTHTEAMID.otherbundleidentifier"
]
  }
}
Example of the new schema if you don't need to support older versions of iOS
{
"applinks": {
"apps": [],
"details": [
{
"appIDs": ["ABCDE12345.com.example.app", "ABCDE12345.com.example.app2"],
"components": [
{
"#": "no_universal_links",
"exclude": true,
"comment": "Matches any URL whose fragment equals no_universal_links and instructs the system not to open it as a universal link"
},
{
"/": "/buy/*",
"comment": "Matches any URL whose path starts with /buy/"
},
{
"/": "/help/website/*",
"exclude": true,
"comment": "Matches any URL whose path starts with /help/website/ and instructs the system not to open it as a universal link"
},
{
"/": "/help/*",
"?": { "articleNumber": "????" },
"comment": "Matches any URL whose path starts with /help/ and which has a query item with name 'articleNumber' and a value of exactly 4 characters"
}
]
}
]
  }
}
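Following the earlier tip about serving the same AASA file from both the root and the .well-known path with the pkcs7-mime content type, here is a rough Express sketch; the framework choice, file location, and port are my assumptions, not part of the original answer:

// server.js -- serve one apple-app-site-association file at both paths.
const express = require('express');
const path = require('path');

const app = express();
// Assumed location of the AASA file on the server.
const aasaPath = path.join(__dirname, 'apple-app-site-association');

function sendAasa(req, res) {
  // The answer above recommends application/pkcs7-mime rather than application/json.
  res.sendFile(aasaPath, { headers: { 'Content-Type': 'application/pkcs7-mime' } });
}

app.get('/apple-app-site-association', sendAasa);
app.get('/.well-known/apple-app-site-association', sendAasa);

app.listen(3000);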
I want to set up tern-vim in my project and followed this guide: https://github.com/ternjs/tern_for_vim/blob/master/doc/tern.txt.
After installation, I created a .tern-project file under the root directory of my project as below.
{
"libs": [
"browser",
"jquery"
],
"loadEagerly": [
"importantfile.js"
],
"plugins": {
"requirejs": {
"baseURL": "./",
"paths": {}
}
}
}
When I open a js file in Vim, I can't use any of the Tern commands below:
|:TernDoc|...................... Look up Documentation
|:TernDocBrowse|................ Browse the Documentation
|:TernType|..................... Perform a type look up
|:TernDef|...................... Look up definition
|:TernDefPreview|............... Look up definition in preview
|:TernDefSplit|................. Look up definition in new split
|:TernDefTab|................... Look up definition in new tab
|:TernRefs|..................... Look up references
|:TernRename|................... Rename identifier
I get a "Not an editor command" error. Do I need to do any other configuration?
Question: how can I split a Swagger definition across files? What are the possibilities in that area? The question details are described below.
example of what I want - in RAML
I do have experience in RAML and what I do is, for example:
/settings:
  description: |
    This resource defines application & components configuration
  get:
    is: [ includingCustomHeaders ]
    description: |
      Fetch entire configuration
    responses:
      200:
        body:
          example: !include samples/settings.json
          schema: !include schemas/settings.json
The last two lines are important here - the ones with !include <filepath> - in RAML I can split my entire contract into many files that just get included dynamically by the RAML parser (and the RAML parser is used by all tools based on RAML).
My benefit from this is that:
my contract is clearer and easier to maintain, because schemas are not inline
and, really importantly: I can reuse the schema files in other tools to do validation, mock generation, stubs, test generation, etc. In other words, this way I can reuse schema files both within the contract (RAML, in this case) and in other, non-RAML, non-Swagger tools that are just JSON Schema-based.
back to Swagger
As far as I have read, Swagger supports the $ref keyword, which allows loading external files. But are those files fetched through HTTP/AJAX, or can they just be local files?
And is that supported by the whole specification, or just by some tools and not others?
What I found here is that the input for Swagger has to be one file, and this is extremely inconvenient for big projects:
because of size
and because I can't reuse the schema if I want to use something non-swagger
Or, in other words, can I achieve the same thing with Swagger that I can with RAML, in terms of splitting files?
The specification allows for references in multiple locations, but not everywhere. These references are resolved depending on where the specification is being hosted, and on what you're trying to do.
For something like rendering a dynamic user interface, yes, you do need to eventually load the entire definition into "a single object", which may be composed from many files. When performing code generation, the definitions may be loaded directly from the file system. But ultimately there are Swagger parsers doing the resolution, which is much more fine-grained and controllable in Swagger than in other definition formats.
In your case, you would use a JSON pointer to the schema reference:
responses:
  200:
    description: the response
    schema:
      # via local reference
      $ref: '#/definitions/myModel'

      # via absolute reference
      $ref: 'http://path/to/your/resource'

      # via relative reference, which would be 'relative to where this doc is loaded'
      $ref: 'resource.json#/myModel'

      # via inline definition
      type: object
      properties:
        id:
          type: string
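For the relative-reference case above, the referenced file would just contain the model at the pointed-to location. A small sketch of what resource.json might hold (the model fields here are an assumption):

// resource.json -- referenced above as 'resource.json#/myModel'
{
  "myModel": {
    "type": "object",
    "properties": {
      "id": {
        "type": "string"
      }
    }
  }
}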
When I split OpenAPI V3 files using references, I try to avoid the sock drawer anti-pattern and instead use functional groupings for the YAML files.
I also make it so that each YAML file itself is a valid OpenAPI V3 spec.
I start out with the openapi.yaml file.
openapi: 3.0.3
info:
  title: MyAPI
  description: |
    This is the public API for my stuff.
  version: "3"
tags:
  # NOTE: the name is needed as the info block uses `title` rather than name
  - name: Authentication
    $ref: 'authn.yaml#/info'
paths:
  # NOTE: here are the references to the other OpenAPI files
  # from the path. Because OpenAPI requires paths to start with `/`,
  # and that is already used as a separator, replace the `/` with
  # `%2F` in the path reference.
  '/authn/start':
    $ref: 'authn.yaml#/paths/%2Fstart'
Then in the functional group:
openapi: 3.0.3
info:
  title: Authentication
  description: |
    This is the authentication module.
  version: "3"
paths:
  # NOTE: don't include the `/authn` prefix here; that top-level grouping is
  # in the `openapi.yaml` file.
  '/start':
    get:
      responses:
        "200":
          description: OK
By doing this separation you can independently test each file or the whole API as a group.
There may be points where you repeat yourself, but by doing this you limit the chance of breaking changes to other API endpoints when using a "common" library.
However, you should still have a common definition library for some things such as:
errors
security
There is a limitation to this approach, and that is discriminators (it may be a ReDoc issue, though): if you have types with discriminators outside of openapi.yaml, ReDoc fails to render them correctly.
See this answer for details on how to split your Swagger documentation across many files. This is done using JSON, but the same concept can apply to RAML.
EDIT: Adding content of link here
The basic structure of your Swagger JSON should look something like this:
{
"swagger": "2.0",
"info": {
"title": "",
"version": "version number here"
},
"basePath": "/",
"host": "host goes here",
"schemes": [
"http"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {}
}
The paths and definitions are where you need to insert the paths that your API supports and the model definitions describing your response objects. You can populate these objects dynamically. One way of doing this could be to have a separate file for each entity's paths and models.
Let's say one of the objects in your API is a "car".
Path:
{
"paths": {
"/cars": {
"get": {
"tags": [
"Car"
],
"summary": "Get all cars",
"description": "Returns all of the cars.",
"responses": {
"200": {
"description": "An array of cars",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/car"
}
}
},
"404": {
"description": "error fetching cars",
"schema": {
"$ref": "#/definitions/error"
}
}
}
}
}
  }
}
Model:
{
"car": {
"properties": {
"_id": {
"type": "string",
"description": "car unique identifier"
},
"make": {
"type": "string",
"description": "Make of the car"
},
"model":{
"type": "string",
"description": "Model of the car."
}
}
}
}
You could then put each of these in their own files. When you start your server, you could grab these two JSON objects, and append them to the appropriate object in your base swagger object (either paths or definitions) and serve that base object as your Swagger JSON object.
You could further optimize this by doing the appending only once, when the server is started (since the API documentation will not change while the server is running). Then, when the "serve Swagger docs" endpoint is hit, you can just return the cached Swagger JSON object that you created when the server was started.
The "serve Swagger docs" endpoint can be intercepted by catching a request to /api-docs like below:
app.get('/api-docs', function(req, res) {
// return the created Swagger JSON object here
});
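As a rough sketch of the approach described above, assuming one JSON file per entity (the file names and layout are my own, not from the original answer):

// swagger.js -- build the combined Swagger object once at startup.
const baseSwagger = require('./swagger-base.json');   // the base structure shown above
const carPaths = require('./car/paths.json');         // { "paths": { "/cars": { ... } } }
const carModels = require('./car/models.json');       // { "car": { ... } }

Object.assign(baseSwagger.paths, carPaths.paths);
Object.assign(baseSwagger.definitions, carModels);

module.exports = baseSwagger;

// app.js -- the cached object is then returned from the docs endpoint:
// const swaggerDoc = require('./swagger');
// app.get('/api-docs', (req, res) => res.json(swaggerDoc));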
You can use $ref, but it does not give you much flexibility. I suggest processing the YAML with an external tool like 'Yamlinc', which merges multiple files into one using a '$include' tag.
read more: https://github.com/javanile/yamlinc
I'm developing an iPad/iPhone web app. Both versions share some of the resources. Now I want to build a bootstrap js file that looks like this:
requirejs(['app'], function(app) {
app.start();
});
The app resource should be ipadApp.js or iphoneApp.js. So I create the following build file for the optimizer:
{
"appDir": "../develop",
"baseUrl": "./javascripts",
"dir": "../public",
"modules": [
{
"name": "bootstrap",
"out": "bootstrap-ipad.js",
"override": {
"paths": {
"app": "ipadApp"
}
}
},
{
"name": "bootstrap",
"out": "bootstrap-iphone.js",
"override": {
"paths": {
"app": "iphoneApp"
}
}
}
]
}
But this doesn't seem to work. It works with just one module, but not with the same module with different outputs.
The only other solution that came to mind was 4 build files, which seems a bit odd. So is there a solution where I only need one build file?
AFAIK the r.js optimizer can only output a module with a given name once; in your case you are attempting to generate the module named bootstrap twice. The author of require.js, @jrburke, made the following comment on a related issue here:
...right now you would need to generate a separate build command for each script being targeted, since the name property would always be "almond.js" for each one.
He also suggests:
...if you wanted just one build file to run, you could create a node program and drive the optimizer multiple times in one script file. This example shows using requirejs as a module and calling requirejs.optimize().
I took a similar approach in one of my projects: I made my build.js file an ERB template and created a Thor task that ran through my modules and ran r.js once for each one. But @jrburke's solution using node.js is cleaner.
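For reference, a minimal sketch of the node-driven approach @jrburke suggests, running a single-file optimization once per target (the config mirrors the question's build file, but treat the paths and details as untested assumptions):

// build.js -- run with `node build.js`; drives r.js once per bootstrap target.
var requirejs = require('requirejs');

var targets = [
  { out: 'public/javascripts/bootstrap-ipad.js',   app: 'ipadApp' },
  { out: 'public/javascripts/bootstrap-iphone.js', app: 'iphoneApp' }
];

function buildNext(i) {
  if (i >= targets.length) { return; }
  var target = targets[i];
  requirejs.optimize({
    baseUrl: 'develop/javascripts',
    name: 'bootstrap',
    out: target.out,
    paths: { app: target.app }
  }, function (buildResponse) {
    // buildResponse is a text report of the files that were included.
    console.log(buildResponse);
    buildNext(i + 1);
  }, function (err) {
    console.error(err);
  });
}

buildNext(0);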
I'm trying to use msbuild with my Sublime Text project. I created the build file suggested here, and the following is my project file:
{
"folders":
[
{
"path": "/W/MyOrg/MyApp",
"folder_exclude_patterns": ["_ReSharper.*", "bin", "obj"]
}
]
}
I select the msbuild40 build system and hit Build and get the output:
[Error 6] The handle is invalid
[Finished]
I'm not even sure if this is a Python or an MSBuild error. Which is it, how can I fix it, and what's a good way to troubleshoot this sort of thing in the future?
Update
I tried updating my project to the following and using that build system, and still no dice.
{
"folders":
[
{
"path": "/W/MyOrg/MyApp",
"folder_exclude_patterns": ["_ReSharper.*", "bin", "obj"]
}
],
"build_systems":
[
{
"name": "msbuild",
"cmd": ["c:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\MSBuild.exe", "w:\\MyOrg\\MyApp\\MyApp.sln"]
}
]
}
It turns out that this happens whenever you start Sublime from the command line (I was starting it via a PowerShell alias).
You can fix this by using a batch file and the START command. I created sublime_text.bat:
START "Sublime Text 2" "C:\Program Files\Sublime Text 2\sublime_text.exe" %*
and set my PowerShell alias to that bat file. Now everything works.