TYPO3 tx_news ErrorHandling for non-existing news - typo3-9.x

When I call a news detail page with a non-existing news path segment (www.example.de/news-detail/this-news-does-not-exist/), I get redirected to the 404 page that is configured in the config.yaml. Is it also possible to control non-existing news URLs via the plugin.tx_news.settings.detail.errorHandling configuration (https://docs.typo3.org/p/georgringer/news/master/en-us/AdministratorManual/Configuration/TypoScript/Index.html?highlight=errorhandling#detail-errorhandling)? Or is there another way to bypass the 404 handling, or to configure the 404 error handling specifically for news errors?
Setup:
TYPO3 9.5.19
News 8.3.0
config.yaml News Detail Configuration:
NewsDetail:
  type: Extbase
  extension: News
  plugin: Pi1
  limitToPages:
    - 22
  routes:
    - routePath: '/{news_title}'
      _controller: 'News::detail'
      _arguments:
        news_title: news
  defaultController: 'News::detail'
  defaults:
    page: '0'
  requirements:
    page: \d+
  aspects:
    news_title:
      type: PersistedAliasMapper
      tableName: tx_news_domain_model_news
      routeFieldName: path_segment

This is a known bug in tx_news: https://github.com/georgringer/news/issues/1018
Unfortunately, no fix is available yet.
As a workaround, you can use your own error handling:
Add your own page error handler class to your site package extension.
Then change the error handling in the site configuration to PHP and set "ErrorHandler Class Target" to your class path.
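A minimal sketch of such a handler, assuming a site package extension (vendor, namespace, and the redirect target are placeholders, not part of the original answer):

<?php
// EXT:my_sitepackage/Classes/Error/NewsAwarePageErrorHandler.php
namespace Vendor\MySitepackage\Error;

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use TYPO3\CMS\Core\Error\PageErrorHandler\PageErrorHandlerInterface;
use TYPO3\CMS\Core\Http\HtmlResponse;
use TYPO3\CMS\Core\Http\RedirectResponse;

class NewsAwarePageErrorHandler implements PageErrorHandlerInterface
{
    public function handlePageError(
        ServerRequestInterface $request,
        string $message,
        array $reasons = []
    ): ResponseInterface {
        // Requests below the news detail slug get special treatment,
        // e.g. a redirect to the news list page instead of the generic 404.
        if (strpos($request->getUri()->getPath(), '/news-detail/') === 0) {
            return new RedirectResponse('/news/', 307);
        }
        // Everything else: plain 404 response.
        return new HtmlResponse($message ?: 'Page not found', 404);
    }
}

The matching errorHandling entry in the site configuration (config.yaml) would then look roughly like this:

errorHandling:
  - errorCode: 404
    errorHandler: PHP
    errorPhpClassFQCN: Vendor\MySitepackage\Error\NewsAwarePageErrorHandler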
Not really pretty, but it works.

Related

Swagger file security scheme defined but not in use

I have a Swagger 2.0 file that has an auth mechanism defined but am getting errors that tell me that we aren't using it. The exact error message is “Security scheme was defined but never used”.
How do I make sure my endpoints are protected using the authentication I created? I have tried a bunch of different things but nothing seems to work.
I am not sure if the actual security scheme is defined correctly; I think it is, because we are using it in production.
I would really love to have some help with this as I am worried that our competitor might use this to their advantage and steal some of our data.
swagger: "2.0"
# basic info is basic
info:
  version: 1.0.0
  title: Das ERP
# host config info
# Added by API Auto Mocking Plugin
host: virtserver.swaggerhub.com
basePath: /rossja/whatchamacallit/1.0.0
#host: whatchamacallit.lebonboncroissant.com
#basePath: /v1
# always be schemin'
schemes:
  - https
# we believe in security!
securityDefinitions:
  api_key:
    type: apiKey
    name: api_key
    in: header
    description: API Key
# a maze of twisty passages all alike
paths:
  /dt/invoicestatuses:
    get:
      tags:
        - invoice
      summary: Returns a list of invoice statuses
      produces:
        - application/json
      operationId: listInvoiceStatuses
      responses:
        200:
          description: OK
          schema:
            type: object
            properties:
              code:
                type: integer
              value:
                type: string
securityDefinitions alone is not enough: this section defines the available security schemes but does not apply them.
To actually apply a security scheme to your API, you need to add security requirements at the root level or to individual operations:

security:
  - api_key: []
See the API Keys guide for details.
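If only some operations should be protected, the same requirement can instead be attached per operation. A trimmed sketch using the existing listInvoiceStatuses operation:

paths:
  /dt/invoicestatuses:
    get:
      operationId: listInvoiceStatuses
      security:
        - api_key: []
      responses:
        200:
          description: OK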

How to implement redirect (301 code) mock in serverless framework config (for AWS) without lambda

I'd like the root path of my API to redirect (301) to a completely different site with docs. So I have a lambda at e.g. the /function1 path, and / should return code 301 with another Location header. And I'd like to do it without another lambda.
This is exactly what is described here, but via the aws command line tool. I tried this approach and it works perfectly, but I'd like to configure such an API Gateway mock via the serverless framework config.
Fortunately, the series of CLI commands you linked to can be reproduced in CloudFormation, which can then be dropped into the Resources section of your Serverless template.
In this example, a GET to /function1 will invoke a lambda function, while a GET to / will return a 301 to a well-known search engine.
service: sls-301-mock
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: function1
          method: get
resources:
  Resources:
    Method:
      Type: AWS::ApiGateway::Method
      Properties:
        HttpMethod: GET
        ResourceId:
          !GetAtt ApiGatewayRestApi.RootResourceId
        RestApiId:
          Ref: ApiGatewayRestApi
        AuthorizationType: NONE
        MethodResponses:
          - ResponseModels: {"application/json": "Empty"}
            StatusCode: 301
            ResponseParameters:
              "method.response.header.Location": true
        Integration:
          Type: MOCK
          RequestTemplates:
            "application/json": "{\n \"statusCode\": 301\n}"
          IntegrationResponses:
            - StatusCode: 301
              ResponseParameters:
                "method.response.header.Location": "'https://google.com'"
Tested with:
Framework Core: 1.62.0
Plugin: 3.3.0
SDK: 2.3.0
Components Core: 1.1.2
Components CLI: 1.4.0
Notes
ApiGatewayRestApi is, by convention, the logical name of the AWS::ApiGateway::RestApi resource created by Serverless on account of the http event.
Relevant CloudFormation documentation
ApiGateway::Method
ApiGateway::Method Integration
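Once deployed, the redirect can be checked against the generated endpoint, e.g. (the API ID here is a placeholder):

curl -i https://abc123.execute-api.us-east-1.amazonaws.com/dev/
# expect a 301 status with a Location: https://google.com header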
EDIT
This answer is not as verbose, and uses an http event instead of the Resources section. I haven't tested it, but it may also work for you.
Managed to achieve it by referring to one function twice. But see also the reply from Mike Patrick, which looks more universal.
...
events:
  - http:
      path: main/root/to/the/function
      method: get
      cors: true
  - http:
      path: /
      method: get
      integration: mock
      request:
        template:
          application/json: '{"statusCode": 301}'
      response:
        template: redirect
        headers:
          Location: "'my-redirect-url-note-the-quotes-they-are-important'"
        statusCodes:
          301:
            pattern: ''

Cloudfront distribution fails to load subdirectories in one stack but works in another

I'm using cloudformation (via serverless framework) to deploy static sites to S3 and set up a cloudfront distribution that is aliased from a route53 domain.
I have this working for two domains; each is a new domain that was created in Route53. I am trying the same setup with an older domain that I am transferring to Route53 from an existing registrar.
The CloudFront distribution for this new domain fails to load subdirectories, i.e. https://[mydistid].cloudfront.net/sub/dir/ does not load the resource at https://[mydistid].cloudfront.net/sub/dir/index.html.
There is a common gotcha covered in other SO questions: you must specify the S3 bucket as a custom origin in order for CloudFront to apply the Default Root Object to subdirectories.
I have done this, as can be seen from my serverless.yml CloudFrontDistribution resource:
XxxxCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Aliases:
        - ${self:provider.environment.CUSTOM_DOMAIN}
      Origins:
        - DomainName: ${self:provider.environment.BUCKET_NAME}.s3.amazonaws.com
          Id: Xxxx
          CustomOriginConfig:
            HTTPPort: 80
            HTTPSPort: 443
            OriginProtocolPolicy: https-only
      Enabled: 'true'
      DefaultRootObject: index.html
      CustomErrorResponses:
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /error.html
      DefaultCacheBehavior:
        AllowedMethods:
          - DELETE
          - GET
          - HEAD
          - OPTIONS
          - PATCH
          - POST
          - PUT
        TargetOriginId: Xxxx
        Compress: 'true'
        ForwardedValues:
          QueryString: 'false'
          Cookies:
            Forward: none
        ViewerProtocolPolicy: redirect-to-https
      ViewerCertificate:
        AcmCertificateArn: ${self:provider.environment.ACM_CERT_ARN}
        SslSupportMethod: sni-only
This results in a CF distribution with the S3 bucket as a 'Custom Origin' in AWS.
However, when accessed, the subdirectories route to the error page rather than to the default root object in that directory.
What is extremely odd is that this uses the same config as another stack that is fine. The only diff I can see (so far) is that the working stack has a Route53-created domain, whereas this one uses a domain that originated from another registrar, so I'll see what happens once the name server migration completes. I'm skeptical this will resolve the issue, though, as the CF distribution shouldn't be affected by the Route53 domain status.
I have both stacks working now. The problem was the use of the S3 REST API URL:
${self:provider.environment.BUCKET_NAME}.s3.amazonaws.com
Changing both to the S3 website URL works:
${self:provider.environment.BUCKET_NAME}.s3-website-us-east-1.amazonaws.com
I have no explanation as to why the former URL worked with one stack but not the other. (A plausible explanation: only the S3 website endpoint performs index-document resolution for subdirectory paths; the REST endpoint treats /sub/dir/ as a literal object key.)
The other change that I needed to make was to set the OriginProtocolPolicy of CustomOriginConfig to http-only, because S3 website endpoints don't support HTTPS.
Here is my updated CloudFormation config:
XxxxCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Aliases:
        - ${self:provider.environment.CUSTOM_DOMAIN}
      Origins:
        - DomainName: ${self:provider.environment.BUCKET_NAME}.s3-website-us-east-1.amazonaws.com
          Id: Xxxx
          CustomOriginConfig:
            HTTPPort: 80
            OriginProtocolPolicy: http-only
      Enabled: 'true'
      DefaultRootObject: index.html
      CustomErrorResponses:
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /error.html
      DefaultCacheBehavior:
        AllowedMethods:
          - DELETE
          - GET
          - HEAD
          - OPTIONS
          - PATCH
          - POST
          - PUT
        TargetOriginId: Xxxx
        Compress: 'true'
        ForwardedValues:
          QueryString: 'false'
          Cookies:
            Forward: none
        ViewerProtocolPolicy: redirect-to-https
      ViewerCertificate:
        AcmCertificateArn: ${self:provider.environment.ACM_CERT_ARN}
        SslSupportMethod: sni-only

Wirecloud and IDM server hiccup

I linked WireCloud and IdM recently. When I log into WireCloud and land on my WireCloud instance, I get the following error:
Sorry, but the requested page is unavailable due to a server hiccup.
Our engineers have been notified, so check back later.
My IdM configuration is:
URL
http://151.80.41.166:50002
Callback URL
http://151.80.41.166:50002/complete/fiware/
I can't get more error info:
Exception Type: AuthStateMissing
Exception Value: Session value state missing.
Exception Location: /usr/local/lib/python2.7/site-packages/social_core/backends/oauth.py in validate_state, line 90
Python Executable: /usr/local/bin/python
Python Version: 2.7.14
Python Path:
['/opt/wirecloud_instance',
'/usr/local/lib/python27.zip',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/usr/local/lib/python2.7/lib-old',
'/usr/local/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/site-packages']
The problem was that I had IdM and WireCloud on the same machine and they used the same session cookie, so each application overwrote the other's session state (which is exactly the missing state the AuthStateMissing exception complains about).
I added the following lines to settings.py:
SESSION_COOKIE_NAME = "wcsessionid"
CSRF_COOKIE_NAME = "wccsrftoken"

'Server Not Found' error when using sfDomainRoutePlugin with Symfony

I am trying to create a site with subdomains, using the sfDomainRoutePlugin. I am using Symfony version 1.4.12 on Linux, with Apache as the web server.
I am following the online instructions and have created the following routing file:
homepage:
  url: /
  class: sfDomainRoute
  param: { module: foo, action: index }
  requirements:
    sf_host: [portal.localhost]

# Sample route limited to one subdomain
blog:
  url: /
  class: sfDomainRoute
  param: { module: foo, action: foo1 }
  requirements:
    sf_host: blog.portal.localhost

# Sample route that will capture the subdomain name as a parameter
user_page:
  url: /
  class: sfDomainRoute
  param: { module: foo, action: foo2 }

# Sample route that will not receive a subdomain and will default to www.greenanysite.com
install:
  url: /install
  class: sfDomainRoute
  param: { module: foo, action: foo3 }
My foo module code has the methods foo1, foo2 and foo3 implemented as stub functions, and each has its own template which simply contains text confirming which method was executed (e.g. 'foo::foo1 was called').
The template for the index method (in the foo module) looks like this:
<html>
  <head><title>Test subdomains</title></head>
  <body>
    <ul>
      <li><?php echo link_to('homepage', '#homepage'); ?></li>
      <li><?php echo link_to('blog', '#blog'); ?></li>
      <li><?php echo link_to('zzzrbyte', '#user_page?subdomain=zzzrbyte'); ?></li>
      <li><?php echo link_to('install', '#install'); ?></li>
    </ul>
  </body>
</html>
The URLs are generated correctly (i.e. with subdomains as specified in the routing.yml file); however, when I click on the 'blog' or 'zzzrbyte' link, I get the error message 'Server Not Found'.
For example, I got this message:
Server not found. Firefox can't find the server at blog.portal.localhost.
AFAICT, I am following the online instructions exactly, so I can't see where I am going wrong. Can anyone spot what is likely to be causing this problem?
[[UPDATE]]
I just realized that adding the subdomain to my hosts file seems to get rid of the problem. I am not sure if this is the fix or simply a temporary workaround. If this is the way to do things, I wonder why such a vital piece of information was left out of the notes.
If this is the way to get things to work, it means that subdomains will have to be known beforehand (i.e. not generated dynamically and resolved at run time). Also, I am not sure how such a solution works for a remote server, since I am running multiple websites (as virtual servers) on one physical machine and am not using a hosts file on the server.
Any help will be much appreciated.
Adding the subdomain to the hosts file is the correct way to solve this issue for local development.
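That means one hosts entry per subdomain used by the routes above, for example:

# /etc/hosts
127.0.0.1   portal.localhost blog.portal.localhost zzzrbyte.portal.localhost

For a remote server you don't rely on hosts files at all: a wildcard DNS record (e.g. *.example.com, a placeholder here) pointing at the machine, together with a matching ServerAlias *.example.com in the Apache virtual host, lets arbitrary subdomains resolve at run time.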