Fastlane frameit: unsupported screen size when delivering to the App Store - screenshots

Using Swift 5, iOS 12.2, Xcode 10.2 (10E125), and running everything on GitLab CI,
there seems to be an issue with screenshot screen sizes during the App Store release step (using Fastlane's deliver). The screenshots themselves are created nicely (with Fastlane's snapshot and frameit tools).
But updating to the newest iOS, Swift, and Xcode versions suddenly broke my previously working Fastlane setup. I now get the following error:
Unsupported screen size [1446, 2948] for path '/Users/user/Documents/Programieren/iPhone_applications/Learning/Watch/MyApp/builds/aMDc3etB/0/myusername/MyAppName/fastlane/screenshots/de-DE/iPhone 8 Plus-01Screenshot_de_framed.png'
Could there be something wrong with Fastlane,
either at the frameit step (since framed images are larger than what snapshot created)
or at the App Store release step (since maybe Apple changed the accepted screen sizes)?
I am also wondering whether running everything on GitLab CI has an influence (it shouldn't). What could cause this Fastlane failure concerning screenshot screen sizes at the deliver step?
What I noticed for the iPhone 8 Plus, as an example:
--> screenshots after the Fastlane snapshot step are [1242 × 2208] pixels in size
--> framed screenshots after the Fastlane frameit step are [1446 × 2948] pixels in size
The Apple App Store asks for [1242 × 2208] pixel images, so the "framed" ones will never be accepted.
Could there be something wrong with frameit?
Should I choose different iOS devices in my Snapfile (see below)? And if yes, which ones? (It used to be the case that the App Store needed an iPhone 8 Plus sized screenshot [5.5"]. Did that change, maybe?)
Here is my Fastfile:
lane :screenshots do
  snapshot
  frameit(silver: true, path: './fastlane/screenshots')
end
Here is my Snapfile:
workspace "MyApp.xcworkspace"
scheme "MyAppUITests"
devices([
  "iPhone 8 Plus",
  "iPhone SE"
])
languages([
  "en-US",
  "de-DE"
])
localize_simulator true
clear_previous_screenshots true
erase_simulator true
reinstall_app true
Here is my Framefile.json file:
{
  "device_frame_version": "latest",
  "default": {
    "keyword": {
      "fonts": [
        {
          "font": "./fonts/SF-UI-Display-Semibold.otf",
          "supported": ["de-DE", "en-US"]
        },
        {
          "font": "./fonts/Chinese.ttf",
          "supported": ["zcmn-Hans"]
        }
      ]
    },
    "title": {
      "fonts": [
        {
          "font": "./fonts/SF-UI-Display-Regular.otf",
          "supported": ["de-DE", "en-US"]
        },
        {
          "font": "./fonts/Chinese.ttf",
          "supported": ["zcmn-Hans"]
        }
      ],
      "color": "#203943"
    },
    "background": "./background.jpg",
    "padding": 50,
    "stack_title": false,
    "title_below_image": false,
    "show_complete_frame": false
  },
  "data": [
    { "filter": "01", "keyword": { "color": "#4B849B" } },
    { "filter": "02", "keyword": { "color": "#4B849B" } },
    { "filter": "03", "keyword": { "color": "#4B849B" } },
    { "filter": "04", "keyword": { "color": "#4B849B" } },
    { "filter": "05", "keyword": { "color": "#4B849B" } },
    { "filter": "06", "keyword": { "color": "#4B849B" } }
  ]
}

For those who bump into frameit spitting out the "Unsupported screen size" error, here is a scriptable way to resolve the issue.
Check the file in frameit's source that lists the accepted screen sizes for all devices.
Use magick to resize the raw screenshot to the dimensions required for the desired device. An example command to resize a screenshot to fit the iPhone 12 Pro Max looks like this:
magick $file -resize "1284x2778"\! $file
After resizing, run frameit.
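For example, a minimal loop (assuming ImageMagick's magick binary and the fastlane/screenshots/<locale>/ layout from the question; the 1284x2778 size matches the iPhone 12 Pro Max example above) could look like this:
# Sketch: force every raw screenshot to the target device dimensions
# before framing. The glob pattern is an assumption based on the
# directory layout in the question.
for file in fastlane/screenshots/*/*.png; do
  magick "$file" -resize '1284x2778!' "$file"
done
# then run frameit on the resized screenshots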

Related

Updating the manifest.json file doesn't change Lighthouse report errors (React Native)

I'm trying to make the desktop version of a React Native app a PWA.
I keep encountering the same error in the Lighthouse report (screenshot omitted). Here is my manifest.json:
{
  "short_name": "test",
  "name": "test",
  "start_url": "/index.html",
  "icons": [
    {
      "src": "favicon/favicon-512.png",
      "sizes": "512x512",
      "type": "image/png",
      "purpose": "any"
    },
    {
      "src": "favicon/favicon-144.png",
      "sizes": "144x144",
      "type": "image/png",
      "purpose": "any"
    }
  ],
  "app": {
    "urls": ["http://localhost:19006/"],
    "launch": {
      "web_url": "http://localhost:19006/"
    }
  },
  "loading": "loading_screen.gif",
  "display": "standalone",
  "theme_color": "#1E88B5",
  "background_color": "#f2f7fc",
  "scope": "/"
}
This is the root directory (folder structure screenshot omitted).
Any feedback would be appreciated!
I tried changing the value of the "src" property for the icons, but the changes are not reflected in the Lighthouse report. I also cleared the browser cache.
P.S.: the other checks in the Lighthouse report pass, but not the installability one.
Are you re-creating a production build (usually with npm run build) each time you update the manifest.json file?
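As a quick sketch of what that check looks like (assuming a create-react-app style project where npm run build writes the production bundle, including manifest.json, to build/):
npm run build        # regenerate the production bundle so the manifest changes are included
npx serve -s build   # serve the fresh build, then re-run Lighthouse against this URL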

Importing data into Contentful programmatically from a JSON file

I am trying to import some data programmatically into Contentful.
I am following the docs here and running the following command inside my integrated terminal:
contentful space import --config config.json
Where the config file is
{
  "spaceId": "abc123",
  "managementToken": "112323132321adfWWExample",
  "contentFile": "./dataToImport.json"
}
And the dataToImport.json file is
{
  "data": [
    { "address": "11234 New York City" },
    { "address": "1212 New York City" }
  ]
}
The thing is, I don't understand what format my dataToImport.json should be in, or what is missing in this file or in my config file, so that the array of addresses from the .json file gets added as new entries to a content model already created in the Contentful UI (shown in a screenshot below, omitted here).
I am not specifying the content model the data should go into, so I believe that is one issue, but I don't know how to do that. An example or repo would help me out greatly.
The types of data you can import are listed in their documentation.
Your JSON's top level should say "entries", not "data", if new content of a content type is what you would like to import.
This is an example of a blog post following the content model of the tutorial they provide (note that the -- remarks below are annotations and must be removed from the actual JSON).
The only thing I haven't worked out yet is where the user id goes :D so I substituted one of the content type 'person' entries also provided in their tutorial (I think it's called Gatsby Starter).
{"entries": [
{
"sys": {
"space": {
"sys": {
"type": "Link",
"linkType": "Space",
"id": "theSpaceIdToReceiveYourImport"
}
},
"type": "Entry",
"createdAt": "2019-04-17T00:56:24.722Z",
"updatedAt": "2019-04-27T09:11:56.769Z",
"environment": {
"sys": {
"id": "master",
"type": "Link",
"linkType": "Environment"
}
},
"publishedVersion": 149, -- these are not compulsory, you can skip
"publishedAt": "2019-04-27T09:11:56.769Z", -- you can skip
"firstPublishedAt": "2019-04-17T00:56:28.525Z", -- you can skip
"publishedCounter": 3, -- you can skip
"version": 150,
"publishedBy": { -- this is an example of a linked content
"sys": {
"type": "Link",
"linkType": "person",
"id": "personId"
}
},
"contentType": {
"sys": {
"type": "Link",
"linkType": "ContentType",
"id": "blogPost" -- here should be your content type 'RealtorProperties'
}
}
},
"fields": { -- here should go your content type fields, i can't see it in your post
"title": {
"en-US": "Test 1"
},
"slug": {
"en-US": "Test-1"
},
"description": {
"en-US": "some description"
},
"body": {
"en-US": "some body..."
},
"publishDate": {
"en-US": "2016-12-19"
},
"heroImage": { -- another example of a linked content
"en-US": {
"sys": {
"type": "Link",
"linkType": "Asset",
"id": "idOfTHisImage"
}
}
}
}
},
--another entry, ...]}
Have a look at this repo. I am also trying to figure this out. It looks like quite a lot of fields need to be included in the JSON file. I was hoping there'd be a simple solution, but it seems you (me too, actually) will need to create scripts to convert your JSON file into data Contentful can read and import; a sketch of such a conversion follows below.
I'll let you know if I find anything better.
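As a minimal sketch of such a conversion (assuming jq is installed, a content type id of RealtorProperties as hinted in the annotations above, and a single address field localized to en-US; all of these are assumptions, not verified against your space):
# Wrap each address from dataToImport.json in Contentful's "entries" envelope.
jq '{entries: [.data[] | {
  sys: {contentType: {sys: {type: "Link", linkType: "ContentType", id: "RealtorProperties"}}},
  fields: {address: {"en-US": .address}}
}]}' dataToImport.json > contentfulImport.json

# Then point "contentFile" in config.json at contentfulImport.json and re-run:
contentful space import --config config.json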

AWS CodeBuild - cached DOWNLOAD_SOURCE taking too long

I am trying to improve the build speed of one of my projects on CodeBuild. The project uses a GitHub source provider, and source caching of type local is enabled.
The first time I ran the build it took 103 seconds. I ran it again immediately after the first one finished, expecting it to complete in a few seconds thanks to source caching, but it took 60 seconds.
What am I missing here? Is the cache not working? If it is working, why does the second run still take that long?
Thanks
Project Details:
{
  "projectsNotFound": [],
  "projects": [
    {
      "environment": {
        "computeType": "BUILD_GENERAL1_LARGE",
        "imagePullCredentialsType": "SERVICE_ROLE",
        "privilegedMode": true,
        "image": "111669150171.dkr.ecr.us-east-1.amazonaws.com/***********/ep-build-env:latest",
        "environmentVariables": [
          {
            "type": "PLAINTEXT",
            "name": "NEXUS_URI",
            "value": "http://***************"
          },
          {
            "type": "PLAINTEXT",
            "name": "REGISTRY",
            "value": "111669150171.dkr.ecr.us-east-1.amazonaws.com/*********"
          }
        ],
        "type": "LINUX_CONTAINER"
      },
      "timeoutInMinutes": 60,
      "name": "StorefrontApi",
      "serviceRole": "arn:aws:iam::111669150171:role/CodeBuild-ECRReadOnly",
      "tags": [],
      "artifacts": {
        "type": "NO_ARTIFACTS"
      },
      "lastModified": 1571227097.581,
      "cache": {
        "type": "LOCAL",
        "modes": [
          "LOCAL_DOCKER_LAYER_CACHE",
          "LOCAL_SOURCE_CACHE",
          "LOCAL_CUSTOM_CACHE"
        ]
      },
      "vpcConfig": {
        "subnets": ["subnet-fd7f958b"],
        "vpcId": "vpc-71e3f414",
        "securityGroupIds": ["sg-19b65e6c", "sg-9e28e9f9"]
      },
      "created": 1571082681.262,
      "sourceVersion": "refs/heads/ep-mysql",
      "source": {
        "buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - env\n - cd extensions\n - mvn --settings $CODEBUILD_SRC_DIR_DEVOPS_WINE/pipelines/storefront/build-war/settings.xml --projects storefront/ext-storefront-webapp -am -DskipAllTests clean install\n\nartifacts:\n secondary-artifacts:\n storefront-war:\n base-directory: $CODEBUILD_SRC_DIR/extensions/storefront/ext-storefront-webapp/target\n files:\n - \"*.war\"\n\ncache:\n paths:\n - '/root/.m2/**/*'\n - '/root/.npm/**/*'",
        "insecureSsl": false,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "location": "https://github.com/*****************.git",
        "gitCloneDepth": 1,
        "type": "GITHUB",
        "reportBuildStatus": false
      },
      "badge": {
        "badgeEnabled": false
      },
      "queuedTimeoutInMinutes": 480,
      "secondaryArtifacts": [],
      "logsConfig": {
        "s3Logs": {
          "status": "DISABLED",
          "encryptionDisabled": false
        },
        "cloudWatchLogs": {
          "status": "ENABLED"
        }
      },
      "secondarySources": [
        {
          "insecureSsl": false,
          "gitSubmodulesConfig": {
            "fetchSubmodules": false
          },
          "location": "https://github.com/*****************.git",
          "sourceIdentifier": "DEVOPS_WINE",
          "gitCloneDepth": 1,
          "type": "GITHUB",
          "reportBuildStatus": false
        }
      ],
      "encryptionKey": "arn:aws:kms:us-east-1:111669150171:alias/aws/s3",
      "arn": "arn:aws:codebuild:us-east-1:111669150171:project/StorefrontApi",
      "secondarySourceVersions": [
        {
          "sourceVersion": "refs/heads/staging",
          "sourceIdentifier": "DEVOPS_WINE"
        }
      ]
    }
  ]
}
Apparently, at the time of writing, CodeBuild does not use the native git client to fetch the source from GitHub. I understand that the CodeBuild internal teams have an internal feature request to move from whatever they're currently using to the native git client to improve performance.
Does your repository, by chance, have lots of large files in its history? You can use this answer for a command to analyze your repository.
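For reference, the usual pipeline for listing the largest blobs in a repository's history looks roughly like this (a sketch; run it inside a local clone):
# List the 20 largest blobs ever committed, largest first (size in bytes, then path).
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '/^blob/ {print $3, $4}' \
  | sort -rn \
  | head -20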
If you have lots of large files in your history and you're able to remove them, you can then use a tool like BFG Repo Cleaner to rewrite history. That should speed up the DOWNLOAD_SOURCE phase.
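As a hedged example of that cleanup (the 10M threshold and repository URL are placeholders):
# Rewrite history to drop all blobs larger than 10 MB, then expire the old objects.
# Work on a fresh mirror clone and force-push only after verifying the result.
git clone --mirror https://github.com/your-org/your-repo.git
java -jar bfg.jar --strip-blobs-bigger-than 10M your-repo.git
cd your-repo.git
git reflog expire --expire=now --all && git gc --prune=now --aggressive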
Also, if you have a dedicated support plan with AWS, you should reach out to your TAM to upvote the feature request to move to native git for GitHub source downloads.

Creating env vars fails when using a DaemonSet to create processes in Kubernetes

I want to deploy a piece of software to nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to run Docker containers; you can't run non-containerized programs as DaemonSets. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the "full" file, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field.
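For contrast, here is a minimal sketch of how that env wiring would look in a valid DaemonSet, with the processes moved into a containers list (the image name is a placeholder assumption; the secret names come from the question):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: uniagent
spec:
  selector:
    matchLabels:
      app: uniagent
  template:
    metadata:
      labels:
        app: uniagent
    spec:
      containers:
        - name: foundation
          image: your-registry/uniagent:latest   # placeholder: the agent must be packaged as a container image
          env:
            - name: SECRET_USERNAME
              valueFrom:
                secretKeyRef:
                  name: key-secret
                  key: uniagentuser
            - name: SECRET_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: key-secret
                  key: uniagenthash
EOF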

Transloadit: PDF to Video, variable length

/video/merge docs
I need a template that converts a PDF to a video. Unfortunately, the /video/merge robot seems to require both a framerate (slide length) and a video duration. Since I don't know how many pages the PDF will have, I'm unable to supply a duration.
Is there a way around this?
This is a section of my current template:
"pdf_to_images": {
"use": ":original",
"robot": "/document/thumbs",
"format": "png",
"width": 1920,
"height": 1080
},
"encode": {
"use": {
"steps": [
{
"name": "pdf_to_images",
"as": "image"
}
]
},
"robot": "/video/merge",
"preset": "iphone",
"width": 1920,
"height": 1080,
"ffmpeg": {
"b": "8000K"
},
"framerate": "${fields.framerate}",
"duration": "100"
},
//...store...
I need to fill "framerate" from a field in the upload, but can I fill "duration" by dynamically counting the number of results from "pdf_to_images"?
Otherwise I'm stuck creating individual videos from each image result of "pdf_to_images" and then concatenating them, which seems rather excessive in terms of resource usage.
Thoughts?
What you could be looking for is ${file.meta.page_count}, which, as expected, returns the number of pages in the PDF coming from /document/thumbs. For example, if you wanted to make a video where each page of the PDF lasts one second, you'd use a template like this:
{
  "steps": {
    ":original": {
      "robot": "/upload/handle"
    },
    "pdf_to_images": {
      "use": ":original",
      "robot": "/document/thumbs",
      "format": "png",
      "results": true,
      "width": 1920,
      "height": 1080,
      "imagemagick_stack": "v2.0.7"
    },
    "merged": {
      "robot": "/video/merge",
      "use": {
        "steps": [
          {
            "name": "pdf_to_images",
            "as": "image"
          }
        ]
      },
      "result": true,
      "framerate": "1",
      "duration": "${file.meta.page_count}",
      "ffmpeg_stack": "v4.3.1",
      "preset": "iphone",
      "resize_strategy": "fillcrop"
    }
  }
}