Apache Ignite continuous query and cache transactions

We are using a continuous query to transfer data to all client nodes. However, our grid scales dynamically, so we often run into an issue where a data node keeps trying to connect to a client that has already been scaled down in order to deliver continuous query data. This brings the system to a halt because the PME operation cannot acquire a lock, so the topology doesn't get updated.
To resolve this, I want to use the TxTimeoutOnPartitionMapExchange parameter, which should allow PME to proceed.
However, to use this parameter, do I need to change the atomicityMode of my caches to TRANSACTIONAL? If so, will the process of a data node sending continuous query data count as a transaction?
In summary, I am trying to work out whether the TxTimeoutOnPartitionMapExchange parameter will help in my situation with continuous queries, and what the steps are to enable it.
EDIT:
Stack trace of the issue I am trying to solve:
The continuous query keeps trying to reserve the client, and I believe it holds a global lock here which blocks cache updates and checkpointing:
Deadlock: false
Completed: 1999706
Thread [name="sys-stripe-6-#7%pv-ib-valuation%", id=42, state=WAITING, blockCnt=52537, waitCnt=734400]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at o.a.i.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:3229)
at o.a.i.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:3013)
at o.a.i.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2960)
at o.a.i.i.managers.communication.GridIoManager.send(GridIoManager.java:2100)
at o.a.i.i.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:2365)
at o.a.i.i.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1964)
at o.a.i.i.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1935)
at o.a.i.i.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1917)
at o.a.i.i.processors.continuous.GridContinuousProcessor.sendNotification(GridContinuousProcessor.java:1324)
at o.a.i.i.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:1261)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:1059)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.access$600(CacheContinuousQueryHandler.java:90)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler$2.onEntryUpdated(CacheContinuousQueryHandler.java:459)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:447)
at o.a.i.i.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2495)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2657)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2118)
This starts appearing after the reserveClient call gets stuck because it is unable to acquire a lock:
>>> Possible starvation in striped pool.
Thread name: sys-stripe-4-#5%pv-ib-valuation%
Queue: []
Deadlock: false
Completed: 6328076
Thread [name="sys-stripe-4-#5%pv-ib-valuation%", id=40, state=WAITING, blockCnt=111790, waitCnt=2018248]
Lock [object=java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync#66d8e343, ownerName=null, ownerId=-1]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
at o.a.i.i.processors.cache.persistence.GridCacheDatabaseSharedManager.checkpointReadLock(GridCacheDatabaseSharedManager.java:1663)
at o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpiredInternal(GridCacheOffheapManager.java:2715)
at o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:2679)
at o.a.i.i.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:1051)
at o.a.i.i.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:243)
at o.a.i.i.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:873)
at o.a.i.i.processors.cache.GridCacheIoManager.onMessageProcessed(GridCacheIoManager.java:1189)
So overall my analysis so far is that if a client is gone, the continuous query keeps trying to connect to it while holding a lock, which blocks everything else.
Sample page locks dump. It's a similar page locks dump every time; all threads just seem to be waiting, not locked:
Page locks dump:
Thread=[name=checkpoint-runner-#94%pv-ib-valuation%, id=162], state=WAITING
Locked pages = []
Locked pages log: name=checkpoint-runner-#94%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=checkpoint-runner-#95%pv-ib-valuation%, id=163], state=WAITING
Locked pages = []
Locked pages log: name=checkpoint-runner-#95%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=checkpoint-runner-#96%pv-ib-valuation%, id=164], state=WAITING
Locked pages = []
Locked pages log: name=checkpoint-runner-#96%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=checkpoint-runner-#97%pv-ib-valuation%, id=165], state=WAITING
Locked pages = []
Locked pages log: name=checkpoint-runner-#97%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-0-#15%pv-ib-valuation%, id=50], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-0-#15%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-1-#16%pv-ib-valuation%, id=51], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-1-#16%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-10-#25%pv-ib-valuation%, id=60], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-10-#25%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-11-#26%pv-ib-valuation%, id=61], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-11-#26%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-12-#27%pv-ib-valuation%, id=62], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-12-#27%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-13-#28%pv-ib-valuation%, id=63], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-13-#28%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-14-#29%pv-ib-valuation%, id=64], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-14-#29%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-15-#30%pv-ib-valuation%, id=65], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-15-#30%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-2-#17%pv-ib-valuation%, id=52], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-2-#17%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-3-#18%pv-ib-valuation%, id=53], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-3-#18%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-4-#19%pv-ib-valuation%, id=54], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-4-#19%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-5-#20%pv-ib-valuation%, id=55], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-5-#20%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-6-#21%pv-ib-valuation%, id=56], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-6-#21%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-7-#22%pv-ib-valuation%, id=57], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-7-#22%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-8-#23%pv-ib-valuation%, id=58], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-8-#23%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=data-streamer-stripe-9-#24%pv-ib-valuation%, id=59], state=WAITING
Locked pages = []
Locked pages log: name=data-streamer-stripe-9-#24%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=db-checkpoint-thread-#93%pv-ib-valuation%, id=161], state=TIMED_WAITING
Locked pages = []
Locked pages log: name=db-checkpoint-thread-#93%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=dms-writer-thread-#77%pv-ib-valuation%, id=145], state=WAITING
Locked pages = []
Locked pages log: name=dms-writer-thread-#77%pv-ib-valuation% time=(1674196038673, 2023-01-20 06:27:18.673)
Thread=[name=exchange-worker-#71%pv-ib-valuation%, id=139], state=TIMED_WAITING
Locked pages = []
Locked pages log: name=exchange-worker-#71%pv-ib-valuation% time=(1674196038673, 2023-01-20 06:27:18.673)
Thread=[name=lock-cleanup-0, id=278], state=WAITING
Locked pages = []
Locked pages log: name=lock-cleanup-0 time=(1674196038673, 2023-01-20 06:27:18.673)
Thread=[name=lock-cleanup-scheduled-0, id=171], state=WAITING
Locked pages = []
Locked pages log: name=lock-cleanup-scheduled-0 time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=main, id=1], state=WAITING
Locked pages = []
Locked pages log: name=main time=(1674196038673, 2023-01-20 06:27:18.673)
Thread=[name=query-#5729%pv-ib-valuation%, id=6455], state=WAITING
Locked pages = []
Locked pages log: name=query-#5729%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=query-#5730%pv-ib-valuation%, id=6456], state=WAITING
Locked pages = []
Locked pages log: name=query-#5730%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=query-#5735%pv-ib-valuation%, id=6461], state=WAITING
Locked pages = []
Locked pages log: name=query-#5735%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=query-#5736%pv-ib-valuation%, id=6462], state=WAITING
Locked pages = []
Locked pages log: name=query-#5736%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-0-#1%pv-ib-valuation%, id=36], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-0-#1%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-1-#2%pv-ib-valuation%, id=37], state=RUNNABLE
Locked pages = []
Locked pages log: name=sys-stripe-1-#2%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-10-#11%pv-ib-valuation%, id=46], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-10-#11%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-11-#12%pv-ib-valuation%, id=47], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-11-#12%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-12-#13%pv-ib-valuation%, id=48], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-12-#13%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-13-#14%pv-ib-valuation%, id=49], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-13-#14%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-2-#3%pv-ib-valuation%, id=38], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-2-#3%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-3-#4%pv-ib-valuation%, id=39], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-3-#4%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-4-#5%pv-ib-valuation%, id=40], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-4-#5%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-5-#6%pv-ib-valuation%, id=41], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-5-#6%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-6-#7%pv-ib-valuation%, id=42], state=RUNNABLE
Locked pages = []
Locked pages log: name=sys-stripe-6-#7%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-7-#8%pv-ib-valuation%, id=43], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-7-#8%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-8-#9%pv-ib-valuation%, id=44], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-8-#9%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=sys-stripe-9-#10%pv-ib-valuation%, id=45], state=WAITING
Locked pages = []
Locked pages log: name=sys-stripe-9-#10%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)
Thread=[name=ttl-cleanup-worker-#62%pv-ib-valuation%, id=127], state=TIMED_WAITING
Locked pages = []
Locked pages log: name=ttl-cleanup-worker-#62%pv-ib-valuation% time=(1674196038674, 2023-01-20 06:27:18.674)

TxTimeoutOnPartitionMapExchange is about rolling back active transactions to unblock a PME process. It won't magically unblock every PME that is stuck, since PME can hang for other reasons as well.
That said, it's worth having this setting configured in any case. To enable it, adjust your server nodes' configuration and set this property to some value, e.g. 30 seconds. Here is an example of XML changes.
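A minimal sketch of the Spring XML change on the server nodes (this uses the txTimeoutOnPartitionMapExchange property of IgniteConfiguration; the value is in milliseconds, and 30 seconds is just an example):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Roll back transactions that block partition map exchange for longer than 30 seconds. -->
    <property name="txTimeoutOnPartitionMapExchange" value="30000"/>
    <!-- ... the rest of your node configuration ... -->
</bean>
```

The same setting can also be changed at runtime for the whole cluster via IgniteCluster#setTxTimeoutOnPartitionMapExchange(long), so you don't necessarily need a restart to try it out.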
Speaking of the original CQ issue with client disconnects, I'd expect Ignite to handle that automatically without problems. In other words, I don't think the hung PME is caused by the continuous query itself, but rather by something else, such as, yes, active transactions without a timeout.
You don't need to change the atomicityMode of your caches: transactions can't be applied to a non-transactional (ATOMIC) cache in the first place.

Related

Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR - Vue.js deployed on IIS 10

/css//SchoolSystem~9d2cc99c.8ec07234.css:1 Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR
/css/UsersAnnouncements~db460838.0e433876.css:1 Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR
/css/DashboardComponent~10c63b4c.bc5d4d1b.css:1 Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR
chunk-vendors~575de10d.881773f3.js:1 Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR
I get the above errors for all components when doing a hard refresh with Ctrl+F5, and the website does not render; refreshing again from the browser makes it render.
This happens more since I used the following splitChunks configuration to split the vendor chunk into smaller pieces:
optimization: {
  splitChunks: {
    chunks: "all",
    minSize: 30000,
    maxSize: 400000,
  },
},
It does split the chunks, and they are all injected into the index.html file automatically by webpack.
Now I can't work out which direction to search in. Before splitting the vendor .js file the site was loading very slowly; now it's not loading at all.

Failed migration attempt. Errors with TF26176: Could not connect to the specified Azure DevOps Server collection URL

I am attempting my first migration between a local TFS server and Azure DevOps. I've checked the source and target URLs in my config.json and don't see any issues with the URL or project name, but when I execute the migration tool I receive the following errors. It seems to connect to both the TFS and Azure servers but cannot configure the store. Could this be a permissions-related issue?
MigrationClient: Access granted to https://mycompanysiteurl/TFS/ for PaulB (Domain\Paul.B)
[16:14:34 ERR] Unable to configure store
Microsoft.TeamFoundation.WorkItemTracking.Client.ConnectionException: TF26176: Could not connect to the specified Azure DevOps Server collection URL. Please check the URL and try again.
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore.InitializeInternal()
at MigrationTools._EngineV1.Clients.TfsWorkItemMigrationClient.GetWorkItemStore(WorkItemStoreFlags bypassRules) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\_EngineV1\Clients\TfsWorkItemMigrationClient.cs:line 263
[16:14:34 INF] TfsMigrationClient::GetDependantTfsCollection:AuthenticationMode(Prompt)
[16:14:34 INF] TfsMigrationClient::GetDependantTfsCollection: Prompting for credentials
[16:14:34 INF] MigrationClient: Connecting to https://mycompanysite.visualstudio.com/
[16:14:34 INF] MigrationClient: validating security for {"IsContainer": false, "UniqueName": "Paul.B#mycompany.com", "Descriptor": {"Data": null, "Identifier": "01de0b06-87b5-4eae-b09a-fe502ac4fb83\\Paul.B#mycompany.com", "IdentityType": "Microsoft.IdentityModel.Claims.ClaimsIdentity", "$type": "IdentityDescriptor"}, "DisplayName": "Paul B", "IsActive": true, "MemberOf": [], "Members": [], "TeamFoundationId": "865d9407-523f-60ed-af45-f3bf28a87f9b", "UniqueUserId": 0, "$type": "TeamFoundationIdentity"}
[16:14:34 INF] MigrationClient: Access granted to https://companysite.visualstudio.com/ for Paul B (Paul.B#mycompany.com)
[16:14:34 FTL] Error while running WorkItemMigration
Microsoft.TeamFoundation.WorkItemTracking.Client.ConnectionException: TF26176: Could not connect to the specified Azure DevOps Server collection URL. Please check the URL and try again.
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore.InitializeInternal()
at MigrationTools._EngineV1.Clients.TfsWorkItemMigrationClient.GetWorkItemStore(WorkItemStoreFlags bypassRules) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\_EngineV1\Clients\TfsWorkItemMigrationClient.cs:line 279
at MigrationTools._EngineV1.Clients.TfsWorkItemMigrationClient.InnerConfigure(IMigrationClient migrationClient, Boolean bypassRules) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\_EngineV1\Clients\TfsWorkItemMigrationClient.cs:line 202
at MigrationTools._EngineV1.Clients.TfsMigrationClient.Configure(IMigrationClientConfig config, NetworkCredential credentials) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\_EngineV1\Clients\TfsMigrationClient.cs:line 83
at MigrationTools.MigrationEngine.GetSource() in D:\a\1\s\src\MigrationTools\MigrationEngine.cs:line 146
at VstsSyncMigrator.Engine.WorkItemMigrationContext.InternalExecute() in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\MigrationContext\WorkItemMigrationContext.cs:line 83
at MigrationTools._EngineV1.Processors.MigrationProcessorBase.Execute() in D:\a\1\s\src\MigrationTools\_EngineV1\Processors\MigrationProcessorBase.cs:line 47

"Authentication failed" after deployment in a mutualized hosting

I created a web app from the Symfony 4 demo app, including the login system and multi-language support.
Everything worked perfectly with the built-in Apache server on port 8000.
- When I configured a XAMPP Apache, I needed to generate the '.htaccess' file in the 'public' folder to make the website work (composer require symfony/apache-pack), but in the end it worked.
- Now I have deployed the app on a shared hosting server and configured the .env properly; queries to the DB work, but I'm not able to log in to the web app.
Do you know where the problem could come from?
Thanks for your help!
[2019-09-05 11:05:38] request.INFO: Matched route "security_login". {"route":"security_login","route_parameters":{"_route":"security_login","_controller":"App\Controller\SecurityController::login","_locale":"en"},"request_uri":"http://xxx.xxxx.com/en/login","method":"GET"} []
[2019-09-05 11:05:38] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
[2019-09-05 11:05:39] request.INFO: Matched route "_wdt". {"route":"_wdt","route_parameters":{"_route":"_wdt","_controller":"web_profiler.controller.profiler::toolbarAction","token":"984d25"},"request_uri":"http://xxx.xxxx.com/_wdt/984d25","method":"GET"} []
[2019-09-05 11:05:43] request.INFO: Matched route "security_login". {"route":"security_login","route_parameters":{"_route":"security_login","_controller":"App\Controller\SecurityController::login","_locale":"en"},"request_uri":"http://xxx.xxxx.com/en/login","method":"POST"} []
[2019-09-05 11:05:43] doctrine.DEBUG: SELECT t0.id AS id_1, t0.full_name AS full_name_2, t0.username AS username_3, t0.email AS email_4, t0.password AS password_5, t0.roles AS roles_6 FROM xxx_user t0 WHERE t0.username = ? LIMIT 1 ["pierre_admin"] []
[2019-09-05 11:05:43] security.INFO: Authentication request failed. {"exception":"[object] (Symfony\Component\Security\Core\Exception\BadCredentialsException(code: 0): Bad credentials. at /home/xxxxcom/xxxx.com/xxx_xxxx_com/vendor/symfony/security-core/Authentication/Provider/UserAuthenticationProvider.php:85, Symfony\Component\Security\Core\Exception\BadCredentialsException(code: 0): The presented password is invalid. at /home/xxxxcom/xxxx.com/xxx_xxxx_com/vendor/symfony/security-core/Authentication/Provider/DaoAuthenticationProvider.php:58)"} []
[2019-09-05 11:05:43] security.DEBUG: Authentication failure, redirect triggered. {"failure_path":"security_login"} []
[2019-09-05 11:05:43] request.INFO: Matched route "security_login". {"route":"security_login","route_parameters":{"_route":"security_login","_controller":"App\Controller\SecurityController::login","_locale":"en"},"request_uri":"http://xxx.xxxx.com/en/login","method":"GET"} []
[2019-09-05 11:05:43] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
[2019-09-05 11:05:43] request.INFO: Matched route "_wdt". {"route":"_wdt","route_parameters":{"_route":"_wdt","_controller":"web_profiler.controller.profiler::toolbarAction","token":"47fb65"},"request_uri":"http://xxx.xxxx.com/_wdt/47fb65","method":"GET"} []
Simple answer ;)
- PHP 7.1 does not work with Symfony 4
- Moving to PHP 7.2 works fine

Changing the super administrator password in WSO2 IoT Server 3.1.0

I downloaded WSO2 IoT Server 3.1.0-M8. I want to change the super admin password, so I followed the instructions in the documentation under 'Changing the Super Administrator Password', tab 'Changing Password via file configuration'. I changed the admin password to the new one in all configuration files mentioned in the documentation. I also changed [wso2iothome]/conf/api-manager.xml by replacing the ${admin.password} entries with my new password. I can connect to the ApacheDS LDAP server and fetch the super admin user successfully. I also changed the password in the <Property name="ConnectionPassword"> entry in the user-mgt.xml file. I started the server by running start-all.sh on an Ubuntu 14.04 LTS machine. It gives the following error:
TID: [-1234] [] [2017-07-25 09:20:27,259] INFO {org.wso2.carbon.registry.eventing.internal.RegistryEventingServiceComponent} - Successfully Initialized Eventing on Registry {org.wso2.carbon.registry.eventing.internal.RegistryEventingServiceComponent}
TID: [-1234] [] [2017-07-25 09:20:27,349] INFO {org.wso2.carbon.core.init.JMXServerManager} - JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi {org.wso2.carbon.core.init.JMXServerManager}
TID: [-1234] [] [2017-07-25 09:20:27,350] INFO {org.wso2.carbon.device.mgt.url.printer.URLPrinterStartupHandler} - IoT Console URL : https://localhost:9443/devicemgt {org.wso2.carbon.device.mgt.url.printer.URLPrinterStartupHandler}
TID: [-1234] [] [2017-07-25 09:20:27,369] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : WSO2 IoT Server-3.1.0-SNAPSHOT {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [-1234] [] [2017-07-25 09:20:27,370] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 97 sec {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [-1234] [] [2017-07-25 09:20:27,648] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://localhost:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
TID: [-1234] [] [2017-07-25 09:20:31,938] ERROR {org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor} - Authentication failed. Please check your username/password {org.wso2.carbon.apimgt.rest.api.util.interceptors.auth.BasicAuthenticationInterceptor}
TID: [-1] [] [2017-07-25 09:20:32,071] ERROR {org.wso2.carbon.apimgt.webapp.publisher.APIPublisherStartupHandler} - failed to publish api. {org.wso2.carbon.apimgt.webapp.publisher.APIPublisherStartupHandler}
org.wso2.carbon.apimgt.webapp.publisher.exception.APIManagerPublisherException: feign.FeignException: status 401 reading DCRClient#register(ClientProfile)
at org.wso2.carbon.apimgt.webapp.publisher.APIPublisherServiceImpl.publishAPI(APIPublisherServiceImpl.java:75)
at org.wso2.carbon.apimgt.webapp.publisher.APIPublisherStartupHandler.publishAPIs(APIPublisherStartupHandler.java:97)
at org.wso2.carbon.apimgt.webapp.publisher.APIPublisherStartupHandler.access$500(APIPublisherStartupHandler.java:30)
at org.wso2.carbon.apimgt.webapp.publisher.APIPublisherStartupHandler$1.run(APIPublisherStartupHandler.java:69)
at java.lang.Thread.run(Thread.java:745)
Caused by: feign.FeignException: status 401 reading DCRClient#register(ClientProfile)
at feign.FeignException.errorStatus(FeignException.java:62)
at feign.codec.ErrorDecoder$Default.decode(ErrorDecoder.java:91)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:138)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:76)
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:103)
at com.sun.proxy.$Proxy25.register(Unknown Source)
at org.wso2.carbon.apimgt.integration.client.OAuthRequestInterceptor.apply(OAuthRequestInterceptor.java:84)
at feign.SynchronousMethodHandler.targetRequest(SynchronousMethodHandler.java:158)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:88)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:76)
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:103)
at com.sun.proxy.$Proxy39.apisGet(Unknown Source)
at org.wso2.carbon.apimgt.webapp.publisher.APIPublisherServiceImpl.publishAPI(APIPublisherServiceImpl.java:53)
... 4 more
What could be the problem?
WSO2 IoT Server 3.1.0 has now been released. Can you try the steps on the released version, please? It should work there.
[1] http://wso2.com/iot
[2] https://docs.wso2.com/display/IoTS310/Changing+the+Password
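For reference, the file-based password change described in the question touches the user store connection credentials in user-mgt.xml. A sketch of the relevant fragment (the ConnectionPassword property is the one the question mentions; the surrounding UserStoreManager element and the password value here are placeholders):

```xml
<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager">
    <!-- Password used to bind to the ApacheDS LDAP server; must match the new admin password. -->
    <Property name="ConnectionPassword">NewAdminPassword</Property>
    <!-- ... other user store properties ... -->
</UserStoreManager>
```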

RavenDB Replication Issue - database cannot be found

I have followed the documentation, but am unable to make replication work for RavenDB over the WAN.
Scenario:
Using Raven build #2261
Master DB: has a local name of "it23"
Slave DB: has a remote name of "http://184.169.xxx.xxx" (xxx's are for privacy)
On both servers I have created a database called "TonyTest".
On the Master db, I have set up replication using the following document:
{
"Destinations": [
{
"Url": "http://184.169.xxx.xxx:8080",
"Username": null,
"Password": null,
"Domain": null,
"ApiKey": null,
"Database": "TonyTest",
"TransitiveReplicationBehavior": "None",
"IgnoredClient": false,
"Disabled": false,
"ClientVisibleUrl": null
}
]
}
When browsing to the remote server using the same URL (http://184.169.xxx.xxx:8080), RavenDB Studio launches correctly and I can see the TonyTest database. This seems to confirm that the URL is formatted correctly.
However, the master database immediately generates a document showing failures:
{
"Destination": "http://184.169.xxx.xxx:8080/databases/TonyTest",
"FailureCount": 142
}
When we look at the logs for the REMOTE db, we see that there IS communication with the master, but the replication doesn't complete.
Debug 3/9/2013 12:19:44 AM Document with key 'Raven/Replication/Sources/http://it23:8080/databases/TonyTest' was not found Raven.Storage.Esent.StorageActions.DocumentStorageActions
It looks like the remote server is saying that the database 'TonyTest' can't be found, but it IS created.
Can anyone spot my mistake?
Per Ayende's request, here are some log samples from the LOCAL server after attempting to set up replication (again I replaced IPs with xxx for privacy). We do not see any errors in the LOCAL db's log, but we do see errors pop up in the REMOTE db's log. This seems to imply that the LOCAL db is connecting to the REMOTE db, but the replication does not happen. Here are the LOCAL logs:
Debug 3/11/2013 3:17:00 PM No work was found, workerWorkCounter: 17626, for: ReducingExecuter, will wait for additional work Raven.Database.Indexing.WorkContext
Debug 3/11/2013 3:17:00 PM Going to index 1 documents in IndexName: Raven/DocumentsByEntityName, LastIndexedEtag: 00000001-0000-0100-0000-000000002265: (Raven/Replication/Destinations/184.169.xxx.xxx8080databasesTonyTest) Raven.Database.Indexing.AbstractIndexingExecuter
Debug 3/11/2013 3:17:00 PM Document with key 'Raven/Studio/PriorityColumns' was not found Raven.Storage.Esent.StorageActions.DocumentStorageActions
Debug 3/11/2013 3:16:56 PM Going to index 1 documents in IndexName: Raven/DocumentsByEntityName, LastIndexedEtag: 00000001-0000-0100-0000-000000002256: (Raven/Replication/Destinations/184.169.xxx.xxx8080databasesTonyTest) Raven.Database.Indexing.AbstractIndexingExecuter
Update 3/11 8:24p Pacific time
I am now seeing the following errors in the MASTER/Local raven logs:
Failed to close response
System.AggregateException: One or more errors occurred. ---> System.Net.HttpListenerException: An operation was attempted on a nonexistent network connection
at System.Net.HttpResponseStream.Dispose(Boolean disposing)
at System.IO.Stream.Close()
at Raven.Database.Util.Streams.BufferPoolStream.Dispose(Boolean disposing) in c:\Builds\RavenDB-Stable\Raven.Database\Util\Streams\BufferPoolStream.cs:line 144
at System.IO.Stream.Close()
at Raven.Database.Impl.ExceptionAggregator.Execute(Action action) in c:\Builds\RavenDB-Stable\Raven.Database\Impl\ExceptionAggregator.cs:line 23
--- End of inner exception stack trace ---
at Raven.Database.Impl.ExceptionAggregator.ThrowIfNeeded() in c:\Builds\RavenDB-Stable\Raven.Database\Impl\ExceptionAggregator.cs:line 38
at Raven.Database.Server.Abstractions.HttpListenerResponseAdapter.Close() in c:\Builds\RavenDB-Stable\Raven.Database\Server\Abstractions\HttpListenerResponseAdapter.cs:line 94
at Raven.Database.Server.Abstractions.HttpListenerContextAdpater.FinalizeResponse() in c:\Builds\RavenDB-Stable\Raven.Database\Server\Abstractions\HttpListenerContextAdpater.cs:line 92
---> (Inner Exception #0) System.Net.HttpListenerException (0x80004005): An operation was attempted on a nonexistent network connection
at System.Net.HttpResponseStream.Dispose(Boolean disposing)
at System.IO.Stream.Close()
at Raven.Database.Util.Streams.BufferPoolStream.Dispose(Boolean disposing) in c:\Builds\RavenDB-Stable\Raven.Database\Util\Streams\BufferPoolStream.cs:line 144
at System.IO.Stream.Close()
at Raven.Database.Impl.ExceptionAggregator.Execute(Action action) in c:\Builds\RavenDB-Stable\Raven.Database\Impl\ExceptionAggregator.cs:line 23<---
Solved this, although I don't fully understand the reason why.
On the SLAVE server, you must set the following key in the raven.server.exe config file:
<add key="Raven/AnonymousAccess" value="All"/>
The default was
<add key="Raven/AnonymousAccess" value="Get"/>.
The default worked fine when the master and slave were on the same machine, but when they were on separate machines (either on the LAN or across the WAN) replication failed.
I could never find a log entry on the master that pointed toward this problem. The only log entry I could see was on the slave, which said that Raven/Replication/Sources/ was not found. I realized that the master was connecting to the slave, but the slave was unable to create the "Raven/Replication/Sources/" document remotely.
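For completeness, the key from this answer lives in the appSettings section of the slave's Raven.Server.exe.config; a sketch assuming the standard .NET configuration file layout:

```xml
<configuration>
  <appSettings>
    <!-- Allow anonymous writes so a remote master can create the Raven/Replication/Sources/... document. -->
    <add key="Raven/AnonymousAccess" value="All"/>
  </appSettings>
</configuration>
```

Note that a value of All permits writes from anyone who can reach the server, so across a WAN it is worth considering proper authentication instead of leaving this open.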