Can anyone help me set up streaming with Telegraf into InfluxDB Cloud? I followed this tutorial: a Python script runs on my local machine and pushes notifications into RabbitMQ. Telegraf is subscribed to RabbitMQ with the following config.
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## For failed writes, telegraf will cache metric_buffer_limit metrics for each
## output, and will flush this buffer on a successful write. Oldest metrics
## are dropped first when this buffer fills.
## This buffer only fills when writes fail to output plugin(s).
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = ""
## Logging configuration:
## Run telegraf with debug log messages.
debug = true
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## urls exp: http://127.0.0.1:9999
urls = ["https://eu-central-1-1.aws.cloud2.influxdata.com"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "some@gmail.com"
## Destination bucket to write into.
bucket = "two"
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states.
report_active = false
[[inputs.disk]]
## By default stats will be gathered for all mount points.
## Set mount_points will restrict the stats to only the specified mount points.
# mount_points = ["/"]
## Ignore mount points by filesystem type.
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
# # Reads metrics from RabbitMQ servers via the Management Plugin
[[inputs.rabbitmq]]
# ## Management Plugin url. (default: http://localhost:15672)
url = "http://localhost:15672"
# ## Tag added to rabbitmq_overview series; deprecated: use tags
# # name = "rmq-server-1"
# ## Credentials
username = "guest"
password = "guest"
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Optional request timeouts
# ##
# ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
# ## for a server's response headers after fully writing the request.
header_timeout = "3s"
# ##
# ## client_timeout specifies a time limit for requests made by this client.
# ## Includes connection time, any redirects, and reading the response body.
client_timeout = "4s"
#
# ## A list of nodes to gather as the rabbitmq_node measurement. If not
# ## specified, metrics for all nodes are gathered.
# # nodes = ["rabbit@node1", "rabbit@node2"]
#
# ## A list of queues to gather as the rabbitmq_queue measurement. If not
# ## specified, metrics for all queues are gathered.
# # queues = ["telegraf"]
#
# ## A list of exchanges to gather as the rabbitmq_exchange measurement. If not
# ## specified, metrics for all exchanges are gathered.
[[inputs.mqtt_consumer]]
name_prefix = "influx"
servers = ["tcp://rabbitmq:1883"]
qos = 0
connection_timeout = "30s"
topics = [
"crypto/btc",
# "crypto/eth",
]
persistent_session = false
client_id = ""
data_format = "json"
json_string_fields
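The local publisher is essentially a small script along these lines (a rough sketch only; the broker host and the payload fields below are placeholders rather than the real ones):
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Minimal sketch of the publisher side (paho-mqtt 1.x style API).
# Host, port and payload fields are placeholders, not the real values.
client = mqtt.Client()
client.connect("localhost", 1883)
client.loop_start()

while True:
    payload = {"symbol": "BTC", "price": 9650.0, "ts": int(time.time())}
    client.publish("crypto/btc", json.dumps(payload), qos=0)
    time.sleep(5)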
The logs show that data is being written to InfluxDB Cloud:
2020-02-25T16:01:53Z I! Starting Telegraf 1.13.3
2020-02-25T16:01:53Z I! Loaded inputs: mqtt_consumer disk diskio net system rabbitmq cpu mem processes swap
2020-02-25T16:01:53Z I! Loaded aggregators:
2020-02-25T16:01:53Z I! Loaded processors:
2020-02-25T16:01:53Z I! Loaded outputs: influxdb_v2
2020-02-25T16:01:53Z I! Tags enabled: host=dos4dev
2020-02-25T16:01:53Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dos4dev", Flush Interval:10s
2020-02-25T16:01:53Z D! [agent] Initializing plugins
2020-02-25T16:01:53Z D! [agent] Connecting outputs
2020-02-25T16:01:53Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-02-25T16:01:53Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-02-25T16:01:53Z D! [agent] Starting service inputs
2020-02-25T16:02:00Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Wrote batch of 78 metrics in 595.462779ms
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:20Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Wrote batch of 83 metrics in 344.265787ms
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:30Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
But I can't find the data in InfluxDB Cloud.
Based on the log messages from Telegraf, it looks like the data is being written. Have you tried following the docs for exploring your data? https://v2.docs.influxdata.com/v2.0/visualize-data/explore-metrics/
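If you would rather verify from code than from the UI, a quick sanity check is to query the bucket with the official Python client (a minimal sketch; the token and org below are placeholders for your own values):
from influxdb_client import InfluxDBClient  # pip install influxdb-client

# Placeholders: use your real token and organization.
client = InfluxDBClient(
    url="https://eu-central-1-1.aws.cloud2.influxdata.com",
    token="<INFLUX_TOKEN>",
    org="some@gmail.com",
)

# Pull a few recent rows from the "two" bucket.
flux = 'from(bucket: "two") |> range(start: -1h) |> limit(n: 10)'
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_measurement(), record.get_field(), record.get_value())
If that query returns rows, the data is there and it is only a matter of finding it in the Data Explorer.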
Related
I have recently set up Hue on my Hadoop cluster and everything seems fine. I was able to open the Hue web UI, i.e. localhost:8888, and I can see HDFS, HBase and MySQL. But I am still facing some issues and could use some help.
The problems I am facing are:
Hive connection: I am using Beeline and I was able to connect to the Hive databases using Beeline in the shell, but in the Hue web UI it shows "error loading databases". The configuration I have used in the hue.ini file is:
hive_server_host=localhost
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
The second issue: even though I was able to connect to the MySQL database, I am facing a problem in the dashboard tab. I can see all the widgets and charting options like pie, bar, etc., but when I drag and drop them on the page, it loads forever and I am not able to see any chart of the table data.
Please help me out, as I have been trying for 10 days and I have not been able to find any pointers yet.
@Ruthikajawar I have a working hue.ini here:
https://github.com/steven-dfheinz/HDP3-Hue-Service/blob/Hue.4.6.0/configuration/live.hue.ini
The specifics for getting Hive working are:
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hdp.cloudera.com
# Binary thrift port for HiveServer2.
#hive_server_port=10000
# Http thrift port for HiveServer2.
#hive_server_http_port=10001
# Host where LLAP is running
## llap_server_host = localhost
# LLAP binary thrift port
## llap_server_port = 10500
# LLAP HTTP Thrift port
## llap_server_thrift_port = 10501
# Alternatively, use Service Discovery for LLAP (Hive Server Interactive) and/or Hiveserver2, this will override server and thrift port
# Whether to use Service Discovery for LLAP
## hive_discovery_llap = true
# is llap (hive server interactive) running in an HA configuration (more than 1)
# important as the zookeeper structure is different
## hive_discovery_llap_ha = false
# Shortcuts to finding LLAP znode Key
# Non-HA - hiveserver-interactive-site - hive.server2.zookeeper.namespace ex hive2 = /hive2
# HA-NonKerberized - (llap_app_name)_llap ex app name llap0 = /llap0_llap
# HA-Kerberized - (llap_app_name)_llap-sasl ex app name llap0 = /llap0_llap-sasl
## hive_discovery_llap_znode = /hiveserver2-hive2
# Whether to use Service Discovery for HiveServer2
hive_discovery_hs2 = true
# Hiveserver2 is hive-site hive.server2.zookeeper.namespace ex hiveserver2 = /hiverserver2
hive_discovery_hiveserver2_znode = /hiveserver2
# Applicable only for LLAP HA
# To keep the load on zookeeper to a minimum
# ---- we cache the LLAP activeEndpoint for the cache_timeout period
# ---- we cache the hiveserver2 endpoint for the length of session
# configurations to set the time between zookeeper checks
## cache_timeout = 60
# Host where Hive Metastore Server (HMS) is running.
# If Kerberos security is enabled, the fully-qualified domain name (FQDN) is required.
#hive_metastore_host=hdp.cloudera.com
# Configure the port the Hive Metastore Server runs on.
#hive_metastore_port=9083
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf
# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120
# Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
# If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
## use_get_log_api=false
# Limit the number of partitions that can be listed.
## list_partitions_limit=10000
# The maximum number of partitions that will be included in the SELECT * LIMIT sample query for partitioned tables.
## query_partitions_limit=10
# A limit to the number of rows that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_row_limit=100000
# A limit to the number of bytes that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_bytes_limit=-1
# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false
# Hue will use at most this many HiveServer2 sessions per user at a time.
# For Tez, increase the number to more if you need more than one query at the time, e.g. 2 or 3 (Tez has a maximum of 1 query by session).
## max_number_of_sessions=1
# Thrift version to use when communicating with HiveServer2.
# Version 11 comes with Hive 3.0. If issues, try 7.
thrift_version=11
# A comma-separated list of white-listed Hive configuration properties that users are authorized to set.
## config_whitelist=hive.map.aggr,hive.exec.compress.output,hive.exec.parallel,hive.execution.engine,mapreduce.job.queuename
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
## auth_username=hive
## auth_password=hive
# Use SASL framework to establish connection to host.
use_sasl=true
For the second part of your question: you should monitor /var/log/hue/error.log while using the UI to capture and resolve any errors.
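Also, as a first step for the "error loading databases" issue, it is worth confirming that the HiveServer2 Thrift port is reachable from the host running Hue (a minimal sketch, assuming the localhost:10000 values from your hue.ini):
import socket

# Reachability check for HiveServer2's binary Thrift port.
# Host and port taken from the hue.ini values in the question.
try:
    socket.create_connection(("localhost", 10000), timeout=5).close()
    print("HiveServer2 port is reachable")
except OSError as exc:
    print("Cannot reach HiveServer2:", exc)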
We are monitoring several servers with Monit, version 5.25.1.
Some are dedicated Apache servers and the monitoring itself is OK.
But the Monit log (/var/log/monit) looks like this:
[CET Mar 18 03:12:03] info : Starting Monit 5.25.1 daemon with http interface at [0.0.0.0]:3353
[CET Mar 18 03:12:03] info : Monit start delay set to 180s
[CET Mar 18 03:15:03] info : 'xxxxx.localhost' Monit 5.25.1 started
[CET Mar 18 03:15:03] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:16:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:17:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:18:08] error : 'apache' error -- unknown resource ID: [5]
The configuration file /etc/monit.conf is like this :
###############################################################################
## Monit control file
###############################################################################
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
# check services at 2-minute intervals
# with start delay 240 # optional: delay the first check by 4-minutes (by
# # default Monit check immediately after Monit start)
set daemon 60
with start delay 180
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/.monit.id
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver localhost
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/monit # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## Send status and events to M/Monit (for more informations about M/Monit
## see http://mmonit.com/).
#
# set mmonit http://monit:monit@192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
# set mail-format { from: monit@foo.bar }
#
#
## You can set alert recipients whom will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
#
set alert fake@mail.com not on { instance }
# receive all alerts
# set alert manager@foo.bar only on { timeout } # receive just service-
# # timeout alert
#
mail-format {
from: xxxxxxx@monit.localhost
subject: $SERVICE => $EVENT
message:
DESCRIPTION : $DESCRIPTION
ACTION : $ACTION
DATE : $DATE
HOST : $HOST
Sorry for the spam.
Monit
}
## Monit has an embedded web server which can be used to view status of
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server.
#
set httpd port 3353 and
use address 0.0.0.0
allow yyyyy:zzzz
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
check system xxxxxx.localhost
if loadavg (1min) > 8 for 5 cycles then alert
if loadavg (5min) > 4 for 5 cycles then alert
if memory usage > 75% for 5 cycles then alert
if cpu usage (user) > 70% for 5 cycles then alert
if cpu usage (system) > 50% for 5 cycles then alert
if cpu usage (wait) > 50% for 5 cycles then alert
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
# include /etc/monit.d/*
#
#
# Include all files from /etc/monit.d/
include /etc/monit.d/*
In the Monit UI everything is OK and the monitoring is 100% functional: we can stop and restart the service as we want.
So I don't understand the message "error : 'apache' error -- unknown resource ID: [5]" that we find in the Monit log.
Does anyone have an idea about it?
Thanks for your help.
I had the same problem.
M/Monit said that loadavg is for "check system" only. It used to work for apache checks but not anymore:
"The loadavg statement can be used in "check system" context only (load average is system property, not process'). Please remove the following statement and reload monit"
So disable this line by adding # at the start of:
# if loadavg(5min) greater than 10 for 8 cycles then restart
Then restart Monit:
service monit restart
You will no longer receive the Apache error.
I am trying to set up deepstream.io. My goal is to have 4 Docker containers:
deepstream
the deepstream search
redis
rethink
Redis as well as RethinkDB are running and accepting connections. Starting deepstream now states that the cache as well as the storage are not ready. I do not understand why, nor what "dependency description provided" is supposed to tell me.
Why does deepstream not accept the connection?
{
"deepstreamVersion": "3.1.0",
"gitRef": "2557412988b128b3331f6079ff1bd26b0b49302d",
"buildTime": "Mon Sep 25 2017 14:42:10 GMT+0000 (UTC)",
"platform": "linux",
"arch": "x64",
"nodeVersion": "v6.11.3",
"libs": [
"deepstream.io-cache-hazelcast:1.0.2",
"deepstream.io-cache-memcached:1.0.0",
"deepstream.io-cache-redis:1.1.0",
"deepstream.io-logger-winston:1.1.0",
"deepstream.io-storage-elasticsearch:1.0.1",
"deepstream.io-storage-mongodb:1.1.0",
"deepstream.io-storage-postgres:1.1.3",
"deepstream.io-storage-rethinkdb:1.0.2"
]
}
Running deepstream start
_ _
__| | ___ ___ _ __ ___| |_ _ __ ___ __ _ _ __ ____
/ _` |/ _ \/ _ \ '_ \/ __| __| '__/ _ \/ _` | '_ ` _ \
| (_| | __/ __/ |_) \__ \ |_| | | __/ (_| | | | | | |
\__,_|\___|\___| .__/|___/\__|_| \___|\__,_|_| |_| |_|
|_|
===================== starting =====================
INFO | State transition (start): Stopped -> LoggerInit
INFO | logger ready: std out/err
INFO | State transition (logger-started): LoggerInit -> PluginInit
INFO | deepstream version: 3.1.0
INFO | configuration file loaded from /etc/deepstream/config.yml
INFO | library directory set to: /var/lib/deepstream
INFO | authenticationHandler ready: none
INFO | permissionHandler ready: valve permissions loaded from /etc/deepstream/permissions.yml
INFO | cache ready: no dependency description provided
INFO | storage ready: no dependency description provided
INFO | State transition (plugins-started): PluginInit -> ServiceInit
INFO | State transition (services-started): ServiceInit -> ConnectionEndpointInit
iconv-lite warning: javascript files use encoding different from utf-8. See https://github.com/ashtuchkin/iconv-lite/wiki/Javascript-source-file-encodings for more info.
INFO | Listening for websocket connections on 0.0.0.0:6020/deepstream
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: WebSocket Connection Endpoint
INFO | Listening for http connections on 0.0.0.0:8080
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: HTTP connection endpoint
INFO | State transition (connection-endpoints-started): ConnectionEndpointInit -> Running
INFO | Deepstream started
The config file:
# General
# Show the deepstream logo on startup (highly recommended)
showLogo: true
# Log messages with this level and above. Valid levels are DEBUG, INFO, WARN, ERROR, OFF
logLevel: DEBUG
# Directory where all plugins reside
libDir: /var/lib/deepstream
# Connectivity
# webfacing URL under which this client is reachable. Used for loadbalancing / failover
externalUrl: null
# SSL Configuration
sslKey: null
sslCert: null
sslCa: null
# Connection Endpoint Configuration
# to disable, replace configuration with null eg. `http: null`
connectionEndpoints:
websocket:
name: uws
options:
# port for the websocket server
port: 6020
# host for the websocket server
host: 0.0.0.0
# url path websocket connections connect to
urlPath: /deepstream
# url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
healthCheckPath: /health-check
# the amount of milliseconds between each ping/heartbeat message
heartbeatInterval: 30000
# the amount of milliseconds that writes to sockets are buffered
outgoingBufferTimeout: 0
# Security
# amount of time a connection can remain open while not being logged in
# or false for no timeout
unauthenticatedClientTimeout: 180000
# invalid login attempts before the connection is cut
maxAuthAttempts: 3
# if true, the logs will contain the cleartext username / password of invalid login attempts
logInvalidAuthData: false
# maximum allowed size of an individual message in bytes
maxMessageSize: 1048576
http:
name: http
options:
# port for the http server
port: 8080
# host for the http server
host: 0.0.0.0
# allow 'authData' parameter in POST requests, if disabled only token and OPEN auth is
# possible
allowAuthData: true
# enable the authentication endpoint for requesting tokens/userData.
# note: a custom authentication handler is required for token generation
enableAuthEndpoint: false
# path for authentication requests
authPath: /auth
# path for POST requests
postPath: /
# path for GET requests
getPath: /
# url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
healthCheckPath: /health-check
# -- CORS --
# if disabled, only requests with an 'Origin' header matching one specified under 'origins'
# below will be permitted and the 'Access-Control-Allow-Credentials' response header will be
# enabled
allowAllOrigins: true
# a list of allowed origins
origins:
- 'https://example.com'
# Logger Configuration
# logger:
# # use either the default logger
# name: default
# options:
# colors: true
# # value of logLevel (line 4) will always overwrite this value
# logLevel: INFO
# # or the winston logger
# name: winston
# options:
# # specify a list of transports (console, file, time)
# -
# type: console
# options:
# # value of logLevel (line 4) will always overwrite this value
# level: info
# colorize: true
# -
# type: time
# options:
# filename: ../var/deepstream
# # or a custom logger
# path: ./my-custom-logger
# Plugin Configuration
plugins:
cache:
name: redis
options:
host: Redis-Redis-1
port: 6379
storage:
name: rethinkdb
options:
host: rethinkdb-rethinkdb-proxy-1
port: 28015
splitChar: /
# Storage options
# a RegExp that matches recordNames. If it matches, the record's data won't be stored in the db
storageExclusion: null
auth:
type: none
# getting permissions from a http webhook
# type: http
# options:
# # a post request will be send to this url on every incoming connection
# endpointUrl: http://localhost:6004
# # any of these will be treated as access granted
# permittedStatusCodes: [ 200 ]
# # if the webhook didn't respond after this amount of milliseconds, the connection will be rejected
# requestTimeout: 2000
# Permissioning
permission:
# Only config or custom permissionHandler at the moment
type: config
options:
# Path to the permissionFile. Can be json, js or yml
path: ./permissions.yml
# Amount of times nested cross-references will be loaded. Avoids endless loops
maxRuleIterations: 3
# PermissionResults are cached to increase performance. Lower number means more loading
cacheEvacuationInterval: 60000
# Timeouts (in milliseconds)
# Timeout for client RPC acknownledgement
rpcAckTimeout: 1000
# Timeout for actual RPC provider response
rpcTimeout: 10000
# Maximum time permitted to fetch from cache
cacheRetrievalTimeout: 1000
# Maximum time permitted to fetch from storage
storageRetrievalTimeout: 2000
# Plugin startup timeout – deepstream init will fail if any plugins fail to emit a 'done' event within this timeout
dependencyInitialisationTimeout: 10000
# The amount of time to wait for a provider to acknowledge or reject a listen request
listenResponseTimeout: 500
# The amount of time a broadcast will wait (to allow broadcast coalescing). -1 means disabled.
broadcastTimeout: 0
# A list of prefixes that, when a record is updated via setData and it matches one of the prefixes
# it will be permissioned and written directly to the cache and storage layers
# storageHotPathPatterns:
# - analytics/
# - metrics/
Redis PING
ping Redis-Redis-1
PING redis-redis-1.rancher.internal (10.42.230.105): 56 data bytes
64 bytes from 10.42.230.105: icmp_seq=0 ttl=62 time=12.676 ms
64 bytes from 10.42.230.105: icmp_seq=1 ttl=62 time=12.751 ms
64 bytes from 10.42.230.105: icmp_seq=2 ttl=62 time=15.441 ms
64 bytes from 10.42.230.105: icmp_seq=3 ttl=62 time=12.838 ms
^C--- redis-redis-1.rancher.internal ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 12.676/13.427/15.441/1.164 ms
The message "no dependency description provided" just means that, under the hood, the connector has no description property.
I'd recommend trying to set some data via a deepstream client and see if it is written to the database.
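For example, after writing a record through a deepstream client, you can peek into the cache directly to confirm that something arrived (a minimal sketch using the redis Python package; the host name below is the one from your config and must resolve from wherever you run this):
import redis  # pip install redis

# Connectivity / sanity check against the cache configured for deepstream.
r = redis.Redis(host="Redis-Redis-1", port=6379)
print("PING:", r.ping())

# After a deepstream client has written a record, its data should show up here.
print("sample keys:", r.keys("*")[:10])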
My postgresql.conf has the following content:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '(none)' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directory = '' # (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
#ssl = off # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:#STRENGTH' # allowed SSL ciphers
# (change requires restart)
#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
# Kerberos and GSSAPI
#krb_server_keyfile = ''
#krb_srvname = 'postgres' # (Kerberos only)
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 32MB # min 128kB
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 1MB # min 64kB
#maintenance_work_mem = 16MB # min 1MB
#max_stack_depth = 2MB # min 100kB
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = minimal # minimal, archive, or hot_standby
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # synchronization level; on, off, or local
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # allows archiving to be done
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Master Server -
# These settings are ignored on a standby server
#max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#wal_sender_delay = 1s # walsender cycle time, 1-10000 milliseconds
#wal_keep_segments = 0 # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
#replication_timeout = 60s # in milliseconds; 0 disables
#synchronous_standby_names = '' # standby servers that provide sync rep
# comma-separated list of application_name
# from standby(s); '*' = all
# - Standby Servers -
# These settings are ignored on a master server
#hot_standby = off # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#effective_cache_size = 128MB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#silent_mode = off # Run server silently.
# DO NOT USE without syslog or
# logging_collector
# (change requires restart)
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'all' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
#log_timezone = '(defaults to server environment setting)'
#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user",public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = '(defaults to server environment setting)'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'English_United States.1252' # locale for system error message
# strings
lc_monetary = 'English_United States.1252' # locale for monetary formatting
lc_numeric = 'English_United States.1252' # locale for number formatting
lc_time = 'English_United States.1252' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Other Defaults -
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
# Note: Each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
#custom_variable_classes = '' # list of custom variable class names
Where can I find the file where I can see all SQL statements which the server executed?
Compared with the default file, I have changed the following row:
#log_statement = 'all'
After that I restarted my PC.
You left log_statement commented out, so it's still at its default.
In postgresql.conf, set:
log_statement = 'all'
(note the lack of the leading '#').
Then restart PostgreSQL. You don't have to restart the whole computer, just the PostgreSQL server. On Windows, that's a service managed by the Services control panel.
Then you need to check the PostgreSQL logs. Their location is dependent on how you installed PostgreSQL and your OS. On Windows they're in the pg_log directory inside your data directory, which you can find with SHOW data_directory; in psql or PgAdmin-III.
On Windows, look in %PROGRAMFILES%\PostgreSQL\9.3\data\pg_log (adjusting for your PostgreSQL version and where you installed PostgreSQL if you didn't use the defaults).
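If you prefer checking those locations from code rather than from psql, the same SHOW commands work over any client connection (a minimal sketch with psycopg2; the connection parameters are placeholders):
import psycopg2  # pip install psycopg2-binary

# Placeholder connection parameters -- adjust to your server.
conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="postgres", password="secret")
cur = conn.cursor()
for setting in ("data_directory", "log_directory", "log_filename", "log_statement"):
    cur.execute("SHOW " + setting)
    print(setting, "=", cur.fetchone()[0])
conn.close()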
Just as a small addition to the accepted answer, this should at least be mentioned so that some readers can avoid searching through and opening postgresql.conf.
You can change the settings globally for every database by logging in as a superuser and then by running:
SET log_statement = 'all';
or (for transaction level)
SET LOCAL log_statement = 'all'
From a comment at: postgresql log queries single database
Mind that for the changes to work, the settings must already be uncommented in postgresql.conf; otherwise they will not change after the server restart. Therefore, you will have to change the config first by uncommenting
#log_statement = 'none'
to
log_statement = 'all'
and
#logging_collector = off
to
logging_collector = on
and
uncomment log_directory = ... and log_filename = ... as well.
From: How to log PostgreSQL queries?
But if the settings are already uncommented, you can use the postgres commands.
Side note: you can also make the log available for a chosen database by logging in as a superuser and then by running:
ALTER DATABASE your_database_name
SET log_statement = 'all';
From: postgresql log queries single database
I am attempting to run my buildbot master server behind a cherokee reverse proxy with the buildbot instance as cherokee's information source in a round robin reverse proxy layout.
This is the buildbot master.cfg configuration file:-
# -*- python -*-
# ex: set syntax=python:
# This is a sample buildmaster config file. It must be installed as
# 'master.cfg' in your buildmaster's base directory.
# This is the dictionary that the buildmaster pays attention to. We also use
# a shorter alias to save typing.
c = BuildmasterConfig = {}
####### BUILDSLAVES
# The 'slaves' list defines the set of recognized buildslaves. Each element is
# a BuildSlave object, specifying a unique slave name and password. The same
# slave name and password must be configured on the slave.
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]
# 'slavePortnum' defines the TCP port to listen on for connections from slaves.
# This must match the value configured into the buildslaves (with their
# --master option)
c['slavePortnum'] = 9989
####### CHANGESOURCES
# the 'change_source' setting tells the buildmaster how it should find out
# about source code changes. Here we point to the buildbot clone of pyflakes.
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = []
c['change_source'].append(GitPoller(
'git://github.com/buildbot/pyflakes.git',
workdir='gitpoller-workdir', branch='master',
pollinterval=300))
####### SCHEDULERS
# Configure the Schedulers, which decide how to react to incoming changes. In this
# case, just kick off a 'runtests' build
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.changes import filter
c['schedulers'] = []
c['schedulers'].append(SingleBranchScheduler(
name="all",
change_filter=filter.ChangeFilter(branch='master'),
treeStableTimer=None,
builderNames=["runtests"]))
c['schedulers'].append(ForceScheduler(
name="force",
builderNames=["runtests"]))
####### BUILDERS
# The 'builders' list defines the Builders, which tell Buildbot how to perform a build:
# what steps, and which slaves can execute them. Note that any particular build will
# only take place on one slave.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git
from buildbot.steps.shell import ShellCommand
factory = BuildFactory()
# check out the source
factory.addStep(Git(repourl='git://github.com/buildbot/pyflakes.git', mode='copy'))
# run the tests (note that this will require that 'trial' is installed)
factory.addStep(ShellCommand(command=["trial", "pyflakes"]))
from buildbot.config import BuilderConfig
c['builders'] = []
c['builders'].append(
BuilderConfig(name="runtests",
slavenames=["example-slave"],
factory=factory))
####### STATUS TARGETS
# 'status' is a list of Status Targets. The results of each build will be
# pushed to these targets. buildbot/status/*.py has a variety to choose from,
# including web pages, email senders, and IRC bots.
c['status'] = []
from buildbot.status import html
from buildbot.status.web import authz, auth
authz_cfg=authz.Authz(
# change any of these to True to enable; see the manual for more
# options
auth=auth.BasicAuth([("pyflakes","pyflakes")]),
gracefulShutdown = False,
forceBuild = 'auth', # use this to test your slave once it is set up
forceAllBuilds = False,
pingBuilder = False,
stopBuild = False,
stopAllBuilds = False,
cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))
####### PROJECT IDENTITY
# the 'title' string will appear at the top of this buildbot
# installation's html.WebStatus home page (linked to the
# 'titleURL') and is embedded in the title of the waterfall HTML page.
c['title'] = "Pyflakes"
c['titleURL'] = "http://divmod.org/trac/wiki/DivmodPyflakes"
# the 'buildbotURL' string should point to the location where the buildbot's
# internal web server (usually the html.WebStatus page) is visible. This
# typically uses the port number set in the Waterfall 'status' entry, but
# with an externally-visible host name which the buildbot cannot figure out
# without some help.
c['buildbotURL'] = "http://localhost:8010/"
####### DB URL
c['db'] = {
# This specifies what database buildbot uses to store its state. You can leave
# this at its default for all but the largest installations.
'db_url' : "sqlite:///state.sqlite",
}
And this is the cherokee configuration:-
Unfortunately, I get a 502 Bad Gateway when I go to my web URL. On the other hand, I know that my buildbot master instance is working correctly, because going to the same web URL with :8010 appended gives me the "Welcome to the Buildbot ..." page.
Is your proxy on the same machine as the buildbot? If not, you will need to adjust the URL in cherokee, to point to the machine running buildbot (localhost points to the machine cherokee is running on).
In any case, c['buildbotURL'] should be changed to point to the public URL that the buildbot is available under (i.e. what cherokee exposes, rather than the URL being proxied).
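For illustration, the relevant master.cfg lines would then look something like this (the public hostname is a placeholder; the web status keeps listening on the internal port that cherokee proxies to):
# master.cfg -- web status stays on the internal port cherokee forwards to,
# while buildbotURL advertises the public, proxied address (placeholder host).
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))
c['buildbotURL'] = "http://buildbot.example.com/"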