Jitsi conversation history location

I am customizing Jitsi and want to ask whether Jitsi stores conversation history locally
or on the server.
If it stores it locally, then what is its location? I have searched a lot but am stuck. Please help!

If you have Jitsi, the Java XMPP client, then your message history should be stored locally in an XML file, provided you haven't disabled history logging. The exact location depends on your OS. I would assume the default Java application storage locations, like:
Windows: %AppData%\Jitsi\history_ver1.0\messages
Mac: ~/Library/Application Support/Jitsi/history_ver1.0/messages
Linux: ~/.jitsi/history_ver1.0/messages
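If those default paths don't match your installation, you can also just search for the history folder directly. For example, on Linux or macOS (a simple sketch; the folder name is the one assumed above):
find ~ -type d -name "history_ver1.0" 2>/dev/null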
You can add a log statement in the HistoryImpl source to see the document that is being written and, specifically, which file it is written to.
If you are using Jitsi-Meet, the browser-based client, then your message history is not stored permanently by default, as far as I know.
No matter which client you are using, your messages may or may not be stored on the server. The following assumes you are connecting to an XMPP server. If the XMPP server chooses to store your messages via the Message Archive Management XEP (MAM, XEP-0313), they will be stored in the server's storage backend, most likely an SQLite/MySQL/Postgres database. If you've used the Debian quick install, Jitsi-Meet installs the XMPP server Prosody by default and routes all your chats through it. Prosody supports MAM, but it isn't on as of version 0.9 (it requires version 0.10+ according to their xeplist).
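For instance, if you control the Prosody instance and do want server-side history, enabling MAM is roughly a matter of adding the module to your config. This is only a sketch assuming Prosody 0.10+; the storage backend and retention settings depend on your setup:
-- prosody.cfg.lua (illustrative snippet)
modules_enabled = {
    -- ... your existing modules ...
    "mam";  -- XEP-0313 Message Archive Management
}
archive_expires_after = "1w"  -- how long archived messages are kept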
Note that the network architecture looks as follows. At any point where messages are being logged, you could potentially find/reconstruct your message history.
          Client (Jitsi or Jitsi-Meet browser frontend)
                   |                           |
                   |                           |
                   v                           |
                  443                          |
               +-------+                       |
               |       |                       |
               | NginX |                       |
               |       |                       |
               +--+-+--+                       |
                  | |                          |
+------------+    | |    +--------------+      |
|            |    | |    |              |      |
| jitsi-meet +<---+ +--->+ prosody/xmpp |      |
|            |files 5280 |              |      |
+------------+           +--------------+      v
                   5222,5347^  ^5347         4443
       +--------+           |  |        +-------------+
       |        |           |  |        |             |
       | jicofo +-----------^  ^--------+ videobridge |
       |        |                       |             |
       +--------+                       +-------------+
Diagram taken and slightly modified from the manual-install.md file in jitsi-meet's repo.

Express-Gateway, serve same API path/route, but under different ServiceEndpoints

I have a server in Node.js + Express that exposes some APIs both to the public and to admins. Currently I have two cloned instances running, one for Test and one for Production. They are twins, exposing the same routes (/admin, /public), but they are connected to two different databases and are deployed at two different addresses.
I want to use Express-Gateway to provide APIs to third parties, so I'll first give them access to the Test server. Once everything is verified, I'll also give them Production access.
To do this, my idea is to create just one eg user with multiple eg applications. Each eg application will have eg credentials to access either the Test or the Production server.
                                          http://server_test.com
         |-------------|                    |-------------|
         |  App Prod   |                    | Server Test |
  +----► |  scopes:    |------+      +----► |  /public    |
  |      | [api_prod]  |      |      |      |  /admin     |
  |      |-------------|      ▼      |      |-------------|
  |                     http://gateway.com
|------|                  |------------|
| User |                  |  Express   |
|------|                  |  Gateway   |
  |      |-------------|  |------------|
  |      |  App Test   |      ▲      |  http://server_prod.com
  +----► |  scopes:    |      |      |      |-------------|
         | [api_test]  |------+      +----► | Server Prod |
         |-------------|                    |  /public    |
                                            |  /admin     |
                                            |-------------|
Depending on the credentials provided, the gateway should redirect requests to server_test.com or server_prod.com. My idea was to use eg scopes to route requests to the right endpoint: the Server Test pipeline would require the api_test scope, while Server Prod would require the api_prod scope.
However, this solution doesn't work, because if the first match in apiEndpoints fails the scope check, the request simply results in "Not Found".
Example: I make a request to http://gateway.com/public using App Prod credentials, with the api_prod scope. It should be passed to http://server_prod.com/public, but instead it first matches paths: '/*' of testEndpoint and fails the scopes condition. So it just fails, while the correct apiEndpoint should be prodEndpoint.
How can I solve this problem?
This is my gateway.config.yml
apiEndpoints:
  testEndpoint:
    host: '*'
    paths: '/*'          # <--- match this
    scopes: ["api_test"] # <--- but fails this
  prodEndpoint:
    host: '*'
    paths: '/*'
    scopes: ["api_prod"] # <---- this is right
serviceEndpoints:
  testService:
    url: "http://server_test.com"
  prodService:
    url: "http://server_prod.com"
policies:
  - proxy
pipelines:
  testEndpoint: # Test
    apiEndpoints:
      - testEndpoint
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
  prodEndpoint: # Prod
    apiEndpoints:
      - prodEndpoint
    policies:
      - proxy:
          - action:
              serviceEndpoint: prodService
I solved it in this way, using the rewrite policy:
I prefix my clients' requests with /test or /prod.
The prefix is used to match the correct apiEndpoint by path.
The rewrite policy then rewrites the request, deleting the prefix.
Finally, the pipeline chooses the serviceEndpoint and goes on...
                                                http://server_test.com
|-------------|                                   |-------------|
|  App Prod   | /prod/admin               /admin  | Server Test |
|  scopes:    |---------------+         +-------► |  /public    |
| [api_prod]  |               |         |         |  /admin     |
|-------------|               ▼         |         |-------------|
                      http://gateway.com
                        |------------|
                        |  Express   |
                        |  Gateway   |
|-------------|         |------------|
|  App Test   |               ▲         |   http://server_prod.com
|  scopes:    |               |         |         |-------------|
| [api_test]  |---------------+         +-------► | Server Prod |
|-------------| /test/admin               /admin  |  /public    |
                                                  |  /admin     |
                                                  |-------------|
This is my config file:
apiEndpoints:
  testEndpoint:
    host: '*'
    paths: '/test/*'
    scopes: ["api_test"]
  prodEndpoint:
    host: '*'
    paths: '/prod/*'
    scopes: ["api_prod"]
serviceEndpoints:
  testService:
    url: "http://server_test.com"
  prodService:
    url: "http://server_prod.com"
policies:
  - proxy
  - rewrite # the rewrite policy must be whitelisted here to be usable in the pipelines
pipelines:
  testEndpoint: # Test
    apiEndpoints:
      - testEndpoint
    policies:
      - rewrite: # rewrite - delete '/test'
          - condition:
              name: regexpmatch
              match: ^/test/?(.*)$
            action:
              rewrite: /$1
      - proxy:
          - action:
              serviceEndpoint: testService
  prodEndpoint: # Prod
    apiEndpoints:
      - prodEndpoint
    policies:
      - rewrite: # rewrite - delete '/prod'
          - condition:
              name: regexpmatch
              match: ^/prod/?(.*)$
            action:
              rewrite: /$1
      - proxy:
          - action:
              serviceEndpoint: prodService
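With this config, third-party clients simply add the prefix and the gateway strips it before proxying. For example (the key-auth Authorization header shown here is only illustrative, since it depends on which eg credential type you actually issue):
# routed through the testEndpoint pipeline, reaches http://server_test.com/admin
curl -H "Authorization: apiKey <keyId>:<keySecret>" http://gateway.com/test/admin
# routed through the prodEndpoint pipeline, reaches http://server_prod.com/admin
curl -H "Authorization: apiKey <keyId>:<keySecret>" http://gateway.com/prod/admin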

How to make Orion Context Broker work with HTTPS notifications?

I want to enable HTTPS for notifications. Orion Context Broker version 1.7.0 is installed on Ubuntu 16.04. To start it, the following command is used:
sudo /etc/init.d/contextBroker start -logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt
The output is:
[ ok ] Starting contextBroker (via systemctl): contextBroker.service.
The status is:
sudo systemctl status contextBroker.service
contextBroker.service - LSB: Example initscript
Loaded: loaded (/etc/init.d/contextBroker; bad; vendor preset: enabled)
Active: active (exited) since Tue 2017-04-04 12:56:13 BRT; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 8312 ExecStart=/etc/init.d/contextBroker start (code=exited, status=0/SUCCESS)
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Starting LSB: Example initscript...
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: contextBroker
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: /path/bin/contextBroker
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Started LSB: Example initscript.
Another approach is running Orion as:
sudo /path/bin/contextBroker -logLevel DEBUG -localIp x.y.z.t -https -key /path/to/orion.key -cert /path/to/orion.crt
The log follows:
time=2017-04-04T18:37:58.881Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1705]:main | msg=Orion Context Broker is running
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[205]:mongoConnect | msg=Successful connection to database
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[681]:setWriteConcern | msg=Database Operation Successful (setWriteConcern: 1)
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[724]:getWriteConcern | msg=Database Operation Successful (getWriteConcern)
time=2017-04-04T18:37:58.888Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[626]:runCollectionCommand | msg=Database Operation Successful (command: { buildinfo: 1 })
...
time=2017-04-04T18:37:58.897Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1720]:restStart | msg=Fatal Error (error starting REST interface)
It is not working...
If you run Orion as a service (as recommended), then command-line parameters have to be configured in the /etc/sysconfig/contextBroker file. The file is explained in this piece of documentation.
Note the BROKER_EXTRA_OPS variable at the end of the file. It is used to include CLI parameters that are not set by any other option, such as the HTTPS-related ones you are using. Thus, it should be a matter of setting BROKER_EXTRA_OPS in this way:
BROKER_EXTRA_OPS="-logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt"
Then start the service using:
sudo /etc/init.d/contextBroker start
(Note that no parameter is added after 'start')
You can check that Orion is running with the right parameters using ps ax | grep contextBroker.
Finally, regarding the error Fatal Error (error starting REST interface): it appears when Orion, for some reason, is not able to start the listening server for the REST API. Typically this is because some other process (maybe a forgotten instance of Orion) is already listening on the same port. Use sudo netstat -ntpld | grep 1026 to find out which other process could be listening on that port (assuming that 1026 is the port on which you are trying to run Orion, of course).
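Once Orion starts cleanly with the HTTPS options, a quick sanity check is to hit the /version endpoint over TLS (the -k flag is only needed here because the example certificate is self-signed; adjust host and port to your deployment):
curl -k https://localhost:1026/version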

Red Hat 7 with JBoss A-MQ 6.1: management console is not accessible

We're having the same problem as this OP: JBoss ActiveMQ on Red Hat - Unable to access AMQ Web Console; however, that post doesn't indicate how he solved the problem.
We're trying to evaluate JBoss A-MQ, and have installed RHEL 7 with A-MQ 6.1 according to their installation guide. Everything works locally, e.g. the sample producer/consumer tests succeed in the Verifying the Installation step.
However, we cannot access the management console, even after configuring the remote user. We did have to add a JAVA_OPTIONS entry to setenv to override the default IPv6 sockets, and netstat shows that the service is now listening on the tcp socket instead of tcp6:
File /opt/jboss-a-mq-6.1.0.redhat-379/bin/setenv now contains:
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
export JAVA_OPTS
# netstat -paunt | grep 8181
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN 10698/java
We can ping the box from other systems; however, attempting to open a telnet session on port 8181 times out. The management console URLs we've tried are:
http://172.16.100.110:8181/hawtio
http://172.16.100.110:8181
but neither works. What are we missing?
Here are the entries from amq.log containing hawt, which make it look like hawtio is starting up correctly:
$ grep hawt amq.log
2014-09-09 11:32:35,778 | INFO | FelixStartLevel | HttpServiceFactoryImpl | .internal.HttpServiceFactoryImpl 35 | 98 - org.ops4j.pax.web.pax-web-runtime - 3.0.6 | Binding bundle: [io.hawt.hawtio-web [146]] to http service
2014-09-09 11:32:35,860 | INFO | pool-10-thread-1 | ConfigManager | io.hawt.system.ConfigManager 32 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Configuration will be discovered via system properties
2014-09-09 11:32:35,863 | INFO | pool-10-thread-1 | JmxTreeWatcher | io.hawt.jmx.JmxTreeWatcher 63 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Welcome to hawtio 1.2-redhat-379 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2014-09-09 11:32:35,865 | INFO | pool-10-thread-1 | UploadManager | io.hawt.jmx.UploadManager 40 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Using file upload directory: /opt/jboss-a-mq-6.1.0.redhat-379/data/tmp/uploads
2014-09-09 11:32:35,868 | INFO | pool-10-thread-1 | AuthenticationFilter | io.hawt.web.AuthenticationFilter 84 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Starting hawtio authentication filter, JAAS realm: "karaf" authorized role: "admin" role principal classes: ""
2014-09-09 11:32:35,965 | INFO | FelixStartLevel | HttpServiceFactoryImpl | .internal.HttpServiceFactoryImpl 35 | 98 - org.ops4j.pax.web.pax-web-runtime - 3.0.6 | Binding bundle: [io.hawt.hawtio-karaf-terminal [148]] to http service
2014-09-09 11:32:35,987 | INFO | pool-10-thread-2 | ConfigManager | io.hawt.system.ConfigManager 32 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | Configuration will be discovered via system properties
2014-09-09 11:32:35,988 | INFO | pool-10-thread-2 | AuthenticationFilter | io.hawt.web.AuthenticationFilter 84 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | Starting hawtio authentication filter, JAAS realm: "karaf" authorized role: "admin" role principal classes: ""
2014-09-09 11:32:35,996 | WARN | FelixStartLevel | ConfigFacade | io.hawt.config.ConfigFacade 23 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | No ConfigFacade constructed yet so using default configuration for now
2014-09-09 11:32:36,015 | INFO | pool-10-thread-2 | LoginServlet | io.hawt.web.LoginServlet 55 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | hawtio login is using default HttpSession timeout
2014-09-09 11:32:36,128 | INFO | pool-10-thread-1 | hawtio-web - 146} | lipse.jetty.util.log.JavaUtilLog 75 | 90 - org.eclipse.jetty.aggregate.jetty-all-server - 8.1.14.v20131031 | jolokia-agent: No access restrictor found at classpath:/jolokia-access.xml, access to all MBeans is allowed
2014-09-09 11:32:36,191 | INFO | pool-10-thread-1 | LoginServlet | io.hawt.web.LoginServlet 55 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | hawtio login is using default HttpSession timeout
2014-09-09 11:32:39,227 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 98 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Storing maven index files in local directory: /opt/jboss-a-mq-6.1.0.redhat-379/data/mavenIndexer
2014-09-09 11:32:39,621 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 148 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Updating the maven indices. This may take a while, please be patient...
2014-09-09 11:32:39,852 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 185 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Completed updating 2 maven indices.
OMG, it was the firewall on the local server.
After spending the day scouring the web for ideas, trying to figure out what was wrong with my configuration, in desperation I finally thought I should try disabling the firewall and see what happens. So I shut it down with the command:
sudo service firewalld stop
and suddenly I could access the management console! I tried the default URL, and it worked:
http://172.16.100.110:8181
d'oh!
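A less drastic fix than leaving the firewall down (a sketch assuming the stock firewalld setup on RHEL 7) is to keep it running and just open the console port:
# permanently allow the hawtio/management console port, then reload the rules
sudo firewall-cmd --permanent --add-port=8181/tcp
sudo firewall-cmd --reload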

Master/slave using Oracle

We're using the so-called JDBC Master/Slave architecture with an Oracle DB. We have two nodes, and each one has one broker started. We start Broker1 (on node1) and it becomes the MASTER, obtaining the lock over the tables. Then we start Broker2 on node2, and it starts as SLAVE. We can see in the slave broker's log that it tries to obtain the lock every 10 seconds, but fails:
2013-06-12 19:32:38,714 | INFO | Default failed to acquire lease. Sleeping for 10000 milli(s) before trying again... | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-12 19:32:48,720 | INFO | Default Lease held by Default till Wed Jun 12 19:32:57 UTC 2013 | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
Everything works fine, and then at some point we see in the SLAVE's log that it suddenly becomes the MASTER:
2013-06-13 00:38:11,262 | INFO | Default Lease held by Default till Thu Jun 13 00:38:17 UTC 2013 | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-13 00:38:11,262 | INFO | Default failed to acquire lease. Sleeping for 10000 milli(s) before trying again... | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
...
2013-06-13 00:38:21,314 | INFO | Default, becoming the master on dataSource: org.apache.commons.dbcp.BasicDataSource#9c6a99d | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-13 00:38:21,576 | INFO | Apache ActiveMQ 5.8.0 (Default, ID:corerec3-49774-1371083901328-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,692 | WARN | Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1616/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]. Will restart management to re-create jmx connector, trying to remedy this issue. | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2013-06-13 00:38:21,700 | INFO | Listening for connections at: tcp://corerec3:61617?transport.closeAsync=false | org.apache.activemq.transport.TransportServerThreadSupport | main
2013-06-13 00:38:21,700 | INFO | Connector openwire Started | org.apache.activemq.broker.TransportConnector | main
2013-06-13 00:38:21,701 | INFO | Apache ActiveMQ 5.8.0 (Default, ID:corerec3-49774-1371083901328-0:1) started | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,701 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,701 | ERROR | Memory Usage for the Broker (512 mb) is more than the maximum available for the JVM: 245 mb | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:22,157 | INFO | Web console type: embedded | org.apache.activemq.web.WebConsoleStarter | main
2013-06-13 00:38:22,292 | INFO | ActiveMQ WebConsole initialized. | org.apache.activemq.web.WebConsoleStarter | main
2013-06-13 00:38:22,353 | INFO | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
while the MASTER's log shows no change from what it usually outputs...
So it seems that somehow the SLAVE obtains the lock (for example, due to a connection loss between the master and the DB), but if we don't restart the brokers we start losing messages...
The problem is that in the producers' log we can see that they successfully send messages to QueueX, but we don't see the consumers taking them from the queue...
If we go to the DB and query the ACTIVEMQ_MSGS table, we see that the messages are unconsumed.
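For example, a query like this (using the CONTAINER column of the default ActiveMQ JDBC schema for the destination name) shows them piling up per destination:
SELECT CONTAINER, COUNT(*) AS pending
FROM ACTIVEMQ_MSGS
GROUP BY CONTAINER;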
It looks as if the broker the producers are connected to has the lock and inserts the messages into the DB, while the broker the clients are consuming from doesn't have the lock and can't read the tables...
I don't know if all this makes much sense, but I surely hope someone might shed some light upon this one...
I didn't want to saturate the post with the configuration details, but if you need specific details like failover config, IPs, ports etc. I will post it...
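For context, the lease-based locking shown in the logs above is normally configured on the persistence adapter in activemq.xml. The snippet below is only an illustrative sketch with an assumed Oracle datasource bean named oracle-ds, not the actual configuration from this setup:
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#oracle-ds" lockKeepAlivePeriod="5000">
    <locker>
      <!-- the slave retries every 10 s, matching the 10000 ms sleep in the log above -->
      <lease-database-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </jdbcPersistenceAdapter>
</persistenceAdapter>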

Why does ActiveMQ restart automatically and how do I prevent it?

We've been using AMQ 5.5.1 in production for several months. Occasionally, we observe that the broker decides to refresh itself with no outside trigger. When this happens, our queue senders fail until the broker is back online some 10 minutes later. I cannot find any information or settings that would explain this behavior or let me control it.
Is this normal for the broker to recycle on its own like this? If so, what things would cause it?
2012-12-11 11:02:11,603 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#f6ac0b: startup date [Tue Dec 11 11:02:11 EST 2012]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | WrapperSimpleAppMain
2012-12-11 11:02:13,806 | WARN | destroyApplicationContextOnStop parameter is deprecated, please use shutdown hooks instead | org.apache.activemq.xbean.XBeanBrokerService | WrapperSimpleAppMain
2012-12-11 11:02:13,821 | INFO | PListStore:D:\Tools\ActiveMQ\apache-activemq-5.5.1\bin\win32\..\..\data\localhost\tmp_storage started | org.apache.activemq.store.kahadb.plist.PListStore | WrapperSimpleAppMain
2012-12-11 11:02:13,868 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[D:\Tools\ActiveMQ\apache-activemq-5.5.1\bin\win32\..\..\data\kahadb] | org.apache.activemq.broker.BrokerService | WrapperSimpleAppMain
2012-12-11 11:02:16,618 | INFO | KahaDB is version 3 | org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
2012-12-11 11:02:16,697 | INFO | Recovering from the journal ... | org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
I found that the wrapper exe process was forcing the restart.
I was able to see in the wrapper.log (Windows service) that the process was being restarted because the JVM was not responding. So this is not an issue with the broker auto-restarting per se; it was an issue with the broker JVM somehow becoming hung (a separate problem).
Here are the wrapper log entries for those interested:
ERROR | wrapper | 2012/12/11 11:01:58 | JVM appears hung: Timed out waiting for signal from JVM.
ERROR | wrapper | 2012/12/11 11:01:58 | JVM did not exit on request, terminated
STATUS | wrapper | 2012/12/11 11:02:04 | Launching a JVM...
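If you would rather the wrapper not bounce the broker automatically while you investigate the hang, that behaviour is governed by the Java Service Wrapper's wrapper.conf. The properties below are a hedged sketch; exact names and defaults depend on the wrapper version shipped with your ActiveMQ build:
# give a busy JVM more time to respond to the wrapper's ping, in seconds
wrapper.ping.timeout=300
# or, while debugging the hang, stop the wrapper from restarting the JVM at all
wrapper.disable_restarts=TRUE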