netgear.py
"""
===============================================
vidgear library source-code is deployed under the Apache 2.0 License:
Copyright (c) 2019 Abhishek Thakur(@abhiTronix) <abhi.una12@gmail.com>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
===============================================
"""
# import the necessary packages
from threading import Thread
from pkg_resources import parse_version
from .helper import generate_auth_certificates
from collections import deque
import numpy as np
import time
import os
import random
try:
# import OpenCV Binaries
import cv2
# check whether OpenCV Binaries are 3.x+
if parse_version(cv2.__version__) < parse_version('3'):
raise ImportError('[ERROR]: Only OpenCV library versions >= 3.0 are supported by this library')
except ImportError as error:
raise ImportError('[ERROR]: Failed to detect the OpenCV python library, install it with the `pip3 install opencv-python` command.')
class NetGear:
"""
NetGear is exclusively designed to transfer video frames synchronously between interconnecting systems over the network in real-time.
This is achieved by implementing a high-level wrapper around the PyZMQ python library, which contains python bindings for ZeroMQ - a
high-performance asynchronous distributed messaging library that aims to be used in distributed or concurrent applications.
It provides a message queue, but unlike message-oriented middleware, a ZeroMQ system can run without a dedicated message broker.
Furthermore, NetGear currently supports three ZeroMQ messaging patterns: zmq.PAIR, zmq.REQ/zmq.REP, and zmq.PUB/zmq.SUB, whereas the
supported protocols are: 'tcp', 'udp', 'pgm', 'epgm', 'inproc' and 'ipc'.
Multi-Server Mode: This mode enables the NetGear API to robustly handle multiple servers at once through the exclusive Publish/Subscribe (zmq.PUB/zmq.SUB)
messaging pattern for seamless Live Streaming across various devices at the same time. Each new server on the network is
identified by its unique port address. Also, when all the connected servers on the network get disconnected, the client
itself automatically exits too. This mode can easily be activated through the `multiserver_mode` boolean attribute during
NetGear API initialization.
Secure Mode: This mode provides ZeroMQ's Security Layers for the NetGear API, enabling strong encryption on data and (as far as we know) unbreakable
authentication between the Server and the Client with the help of custom auth certificates/keys. Its default value is `Grasslands`: 0 => which means no
security at all.
This mode supports the two most powerful ZMQ security mechanisms:
* `StoneHouse`: 1 => which switches to the "CURVE" security mechanism, giving us strong encryption on data, and unbreakable authentication.
Stonehouse is the minimum you would use over public networks and assures clients that they are speaking to an authentic server while allowing any client
to connect. It is less secure but at the same time faster than IronHouse security mechanism.
* `IronHouse`: 2 => which extends Stonehouse with client public key authentication. This is the strongest security model ZMQ has today,
protecting against every attack we know about, except end-point attacks (where an attacker plants spyware on a machine
to capture data before it's encrypted, or after it's decrypted). IronHouse's enhanced security comes at the price of additional latency.
:param address(string): sets the valid network address of the server/client. Network addresses are unique identifiers across the
network. The default value of this parameter depends on the mode it is working in: 'localhost' for Send Mode
and `*` for Receive Mode.
:param port(string/dict/list): sets the valid network port of the server/client. A network port is a number that identifies one side
of a connection between two devices on a network. It is used to determine to which process or application
a message should be delivered. In Multi-Server Mode a unique port number is required at each server, and a
list/tuple of the port addresses of all connected servers is required at the client's end.
:param protocol(string): sets the valid messaging protocol between server and client. A network protocol is a set of established rules
that dictates how to format, transmit and receive data so computer network devices - from servers and
routers to endpoints - can communicate regardless of the differences in their underlying infrastructures,
designs or standards. Supported protocols are: 'tcp', 'udp', 'pgm', 'epgm', 'inproc' and 'ipc'. Its default value is `tcp`.
:param pattern(int): sets the supported messaging pattern(flow of communication) between server and client. Messaging patterns are network
oriented architectural pattern that describes the flow of communication between interconnecting systems.
vidgear provides access to ZeroMQ's pre-optimized sockets which enables you to take advantage of these patterns.
The supported patterns are:
0: zmq.PAIR -> In this, the communication is bidirectional. There is no specific state stored within the socket.
There can only be one connected peer. The server listens on a certain port and a client connects to it.
1: zmq.REQ/zmq.REP -> In this, ZMQ REQ sockets can connect to many servers. The requests will be
interleaved or distributed among them. A zmq.REQ socket will block
on send unless it has successfully received a reply back, and a zmq.REP socket
will block on recv unless it has received a request.
2: zmq.PUB/zmq.SUB -> Publish/Subscribe is another classic pattern where senders of messages, called publishers,
do not program the messages to be sent directly to specific receivers, called subscribers.
Messages are published without knowledge of whether any subscriber exists.
Its default value is `0` (i.e. zmq.PAIR).
:param (boolean) receive_mode: set this flag to select NetGear's mode of operation. This basically activates `Receive Mode` (if True) and `Send Mode` (if False).
Furthermore, the `recv()` function only works when this flag is enabled (i.e. `Receive Mode`) and the `send()` function only works
when this flag is disabled (i.e. `Send Mode`). Checkout VidGear docs for usage details.
Its default value is False (i.e. Send Mode is activated by default).
:param **options(dict): can be used to pass flexible parameters to NetGear API.
This attribute provides the flexibility to manipulate ZeroMQ internal parameters
directly. Checkout vidgear docs for usage details.
:param (boolean) logging: set this flag to enable/disable error logging essential for debugging. Its default value is False.
"""
def __init__(self, address = None, port = None, protocol = None, pattern = 0, receive_mode = False, logging = False, **options):
try:
#import PyZMQ library
import zmq
#import ZMQError
from zmq.error import ZMQError
#assign values to global variable for further use
self.zmq = zmq
self.ZMQError = ZMQError
except ImportError as error:
#raise error
raise ImportError('[ERROR]: pyzmq python library not installed. Kindly install it with `pip install pyzmq` command.')
#define valid messaging patterns => `0`: zmq.PAIR, `1`:(zmq.REQ,zmq.REP), and `2`:(zmq.PUB,zmq.SUB)
valid_messaging_patterns = {0:(zmq.PAIR,zmq.PAIR), 1:(zmq.REQ,zmq.REP), 2:(zmq.PUB,zmq.SUB)}
# initialize messaging pattern
msg_pattern = None
# check whether user-defined messaging pattern is valid
if isinstance(pattern, int) and pattern in valid_messaging_patterns:
#assign value
msg_pattern = valid_messaging_patterns[pattern]
self.pattern = pattern #add it to global variable for further use
else:
#otherwise default to 0:`zmq.PAIR`
self.pattern = 0
msg_pattern = valid_messaging_patterns[self.pattern]
#log it
if logging: print('[LOG]: Wrong pattern value, Defaulting to `zmq.PAIR`! Kindly refer Docs for more Information.')
#check whether user-defined messaging protocol is valid
if not(protocol in ['tcp', 'udp', 'pgm', 'epgm', 'inproc', 'ipc']):
# else default to `tcp` protocol
protocol = 'tcp'
#log it
if logging: print('[LOG]: Protocol is invalid or not provided. Defaulting to `tcp` protocol!')
#generate random device id
self.id = ''.join(random.choice('0123456789ABCDEF') for i in range(5))
self.msg_flag = 0 #handles connection flags
self.msg_copy = True #handles whether to copy data
self.msg_track = False #handles whether to track packets
self.multiserver_mode = False #handles multiserver_mode state
recv_filter = '' #user-defined filter to allow specific port/servers only in multiserver_mode
#define bi-directional data transmission mode
self.bi_mode = False #handles bi_mode state
self.bi_data = None #handles return data
#define valid ZMQ security mechanisms => `0`: Grasslands, `1`:StoneHouse, and `2`:IronHouse
valid_security_mech = {0:'Grasslands', 1:'StoneHouse', 2:'IronHouse'}
self.secure_mode = 0 #handles ZMQ security layer status
auth_cert_dir = '' #handles valid ZMQ certificates dir
self.auth_publickeys_dir = '' #handles valid ZMQ public certificates dir
self.auth_secretkeys_dir = '' #handles valid ZMQ private certificates dir
overwrite_cert = False #checks if certificates overwriting allowed
custom_cert_location = '' #handles custom ZMQ certificates path
#handle force socket termination if there's latency in network
self.force_close = False
#define stream compression handlers
self.compression = '' #disabled by default
self.compression_params = None
#reformat dict
options = {k.lower().strip(): v for k,v in options.items()}
# assign values to global variables if specified and valid
for key, value in options.items():
if key == 'multiserver_mode' and isinstance(value, bool):
if pattern > 0:
# multi-server mode
self.multiserver_mode = value
else:
self.multiserver_mode = False
print('[ALERT]: Multi-Server is disabled!')
raise ValueError('[ERROR]: `{}` pattern is not valid when Multi-Server Mode is enabled. Kindly refer Docs for more Information.'.format(pattern))
elif key == 'filter' and isinstance(value, str):
#custom filter in multi-server mode
recv_filter = value
elif key == 'secure_mode' and isinstance(value,int) and (value in valid_security_mech):
#secure mode
try:
assert zmq.zmq_version_info() >= (4,0), "[ERROR]: ZMQ Security feature is not supported in libzmq version < 4.0."
self.secure_mode = value
except Exception as e:
print(e)
elif key == 'custom_cert_location' and isinstance(value,str):
# custom auth certificates path
try:
assert os.access(value, os.W_OK), "[ERROR]: Permission Denied!, cannot write ZMQ authentication certificates to '{}' directory!".format(value)
assert not(os.path.isfile(value)), "[ERROR]: `custom_cert_location` value must be the path to a directory and not to a file!"
custom_cert_location = os.path.abspath(value)
except Exception as e:
print(e)
elif key == 'overwrite_cert' and isinstance(value,bool):
# enable/disable auth certificate overwriting
overwrite_cert = value
# handle encoding and decoding if specified
elif key == 'compression_format' and isinstance(value,str) and value.lower().strip() in ['.jpg', '.jpeg', '.bmp', '.png']: #few are supported
# enable encoding
if not(receive_mode): self.compression = value.lower().strip()
elif key == 'compression_param':
# specify encoding/decoding params
if receive_mode and isinstance(value, int):
self.compression_params = value
if logging: print("[LOG]: Decoding flag: {}.".format(value))
elif not(receive_mode) and isinstance(value, (list,tuple)):
if logging: print("[LOG]: Encoding parameters: {}.".format(value))
self.compression_params = list(value)
else:
if logging: print("[WARNING]: Invalid compression parameters: {} skipped!".format(value))
self.compression_params = cv2.IMREAD_COLOR if receive_mode else [] # skip to defaults
# enable bi-directional data transmission if specified
elif key == 'bidirectional_mode' and isinstance(value, bool):
# check if pattern is valid
if pattern < 2:
self.bi_mode = True
else:
self.bi_mode = False
print('[ALERT]: Bi-Directional data transmission is disabled!')
raise ValueError('[ERROR]: `{}` pattern is not valid when Bi-Directional Mode is enabled. Kindly refer Docs for more Information.'.format(pattern))
# enable force socket closing if specified
elif key == 'force_terminate' and isinstance(value, bool):
# check if pattern is valid
if address is None and not(receive_mode):
self.force_close = False
print('[ALERT]: Force termination is disabled for local servers!')
else:
self.force_close = True
if logging: print("[LOG]: Force termination is enabled for this connection!")
# various ZMQ flags
elif key == 'flag' and isinstance(value, int):
self.msg_flag = value
elif key == 'copy' and isinstance(value, bool):
self.msg_copy = value
elif key == 'track' and isinstance(value, bool):
self.msg_track = value
else:
pass
#handle secure mode
if self.secure_mode:
#import important libs
import zmq.auth
from zmq.auth.thread import ThreadAuthenticator
# log if overwriting is enabled
if overwrite_cert:
if not receive_mode:
if logging: print('[WARNING]: Overwriting ZMQ Authentication certificates over previous ones!')
else:
overwrite_cert = False
if logging: print('[ALERT]: Overwriting ZMQ Authentication certificates is disabled for Client-end!')
#generate and validate certificates path
try:
#check if custom certificates path is specified
if custom_cert_location:
if os.path.isdir(custom_cert_location): #custom certificate location must be a directory
(auth_cert_dir, self.auth_secretkeys_dir, self.auth_publickeys_dir) = generate_auth_certificates(custom_cert_location, overwrite = overwrite_cert)
else:
raise ValueError("[ERROR]: Invalid `custom_cert_location` value!")
else:
# otherwise auto-generate suitable path
from os.path import expanduser
(auth_cert_dir, self.auth_secretkeys_dir, self.auth_publickeys_dir) = generate_auth_certificates(os.path.join(expanduser("~"),".vidgear"), overwrite = overwrite_cert)
#log it
if logging: print('[LOG]: `{}` is the default location for storing ZMQ authentication certificates/keys.'.format(auth_cert_dir))
except Exception as e:
# catch if any error occurred
print(e)
# also disable secure mode
self.secure_mode = 0
print('[WARNING]: ZMQ Security Mechanism is disabled for this connection!')
else:
#log if disabled
if logging: print('[LOG]: ZMQ Security Mechanism is disabled for this connection!')
#handle bi_mode
if self.bi_mode:
#disable bi_mode if multi-server is enabled
if self.multiserver_mode:
self.bi_mode = False
print('[ALERT]: Bi-Directional Data Transmission is disabled when Multi-Server Mode is Enabled due to incompatibility!')
else:
#enable force termination by default
self.force_close = True
if logging: print("[LOG]: Force termination is enabled for this connection by default!")
if logging: print('[LOG]: Bi-Directional Data Transmission is enabled for this connection!')
# enable logging if specified
self.logging = logging
# initialize termination flag
self.terminate = False
# initialize exit_loop flag
self.exit_loop = False
# initialize and assign receive mode to global variable
self.receive_mode = receive_mode
# define messaging context instance
self.msg_context = zmq.Context.instance()
#check whether `receive_mode` is enabled by user
if receive_mode:
# if so, then define connection address
if address is None: address = '*' #define address
#check if multiserver_mode is enabled
if self.multiserver_mode:
# check if unique server port address list/tuple is assigned or not in multiserver_mode
if port is None or not isinstance(port, (tuple, list)):
# raise error if not
raise ValueError('[ERROR]: Incorrect port value! Kindly provide a list/tuple of ports while Multi-Server mode is enabled. For more information refer VidGear docs.')
else:
#otherwise log it
print('[LOG]: Enabling Multi-Server Mode at PORTS: {}!'.format(port))
#create port address buffer for keeping track of incoming server's port
self.port_buffer = []
else:
# otherwise assign local port address if None
if port is None: port = '5555'
try:
# initiate and handle secure mode
if self.secure_mode > 0:
# start an authenticator for this context
auth = ThreadAuthenticator(self.msg_context)
auth.start()
auth.allow(str(address)) #allow current address
#check if `IronHouse` is activated
if self.secure_mode == 2:
# tell authenticator to use the certificate from given valid dir
auth.configure_curve(domain='*', location=self.auth_publickeys_dir)
else:
#otherwise tell the authenticator how to handle the CURVE requests, if `StoneHouse` is activated
auth.configure_curve(domain='*', location=zmq.auth.CURVE_ALLOW_ANY)
# initialize and define thread-safe messaging socket
self.msg_socket = self.msg_context.socket(msg_pattern[1])
if self.pattern == 2 and not(self.secure_mode): self.msg_socket.set_hwm(1)
if self.multiserver_mode:
# if multiserver_mode is enabled, then assign port addresses to zmq socket
for pt in port:
# enable specified secure mode for the zmq socket
if self.secure_mode > 0:
# load server key
server_secret_file = os.path.join(self.auth_secretkeys_dir, "server.key_secret")
server_public, server_secret = zmq.auth.load_certificate(server_secret_file)
# load all CURVE keys
self.msg_socket.curve_secretkey = server_secret
self.msg_socket.curve_publickey = server_public
# enable CURVE connection for this socket
self.msg_socket.curve_server = True
# define socket options
if self.pattern == 2: self.msg_socket.setsockopt_string(zmq.SUBSCRIBE, recv_filter)
# bind socket to given server protocol, address and ports
self.msg_socket.bind(protocol+'://' + str(address) + ':' + str(pt))
# define socket optimizer
self.msg_socket.setsockopt(zmq.LINGER, 0)
else:
# enable specified secure mode for the zmq socket
if self.secure_mode > 0:
# load server key
server_secret_file = os.path.join(self.auth_secretkeys_dir, "server.key_secret")
server_public, server_secret = zmq.auth.load_certificate(server_secret_file)
# load all CURVE keys
self.msg_socket.curve_secretkey = server_secret
self.msg_socket.curve_publickey = server_public
# enable CURVE connection for this socket
self.msg_socket.curve_server = True
# define exclusive socket options for patterns
if self.pattern == 2: self.msg_socket.setsockopt_string(zmq.SUBSCRIBE,'')
# bind socket to given protocol, address and port normally
self.msg_socket.bind(protocol+'://' + str(address) + ':' + str(port))
# define socket optimizer
self.msg_socket.setsockopt(zmq.LINGER, 0)
except Exception as e:
# otherwise raise value error if errored
if self.secure_mode: print('Failed to activate ZMQ Security Mechanism: `{}` for this address!'.format(valid_security_mech[self.secure_mode]))
if self.multiserver_mode:
raise ValueError('[ERROR]: Multi-Server Mode, failed to connect to ports: {} with pattern: {}! Kindly recheck all parameters.'.format( str(port), pattern))
else:
raise ValueError('[ERROR]: Failed to bind address: {} and pattern: {}! Kindly recheck all parameters.'.format((protocol+'://' + str(address) + ':' + str(port)), pattern))
#log and enable threaded queue mode
if logging: print('[LOG]: Threaded Queue Mode is enabled by default for NetGear.')
#define deque and assign it to global var
self.queue = deque(maxlen=96) #max len 96 to check overflow
# initialize and start threading instance
self.thread = Thread(target=self.update, args=())
self.thread.daemon = True
self.thread.start()
if logging:
#finally log progress
print('[LOG]: Successfully Bound to address: {} with pattern: {}.'.format((protocol+'://' + str(address) + ':' + str(port)), pattern))
if self.secure_mode: print('[LOG]: Enabled ZMQ Security Mechanism: `{}` for this address, Successfully!'.format(valid_security_mech[self.secure_mode]))
print('[LOG]: Multi-threaded Receive Mode is enabled Successfully!')
print('[LOG]: Device Unique ID is {}.'.format(self.id))
print('[LOG]: Receive Mode is activated successfully!')
else:
#otherwise default to `Send Mode`
if address is None: address = 'localhost'#define address
#check if multiserver_mode is enabled
if self.multiserver_mode:
# check if unique server port address is assigned or not in multiserver_mode
if port is None:
#raise error if not
raise ValueError('[ERROR]: Kindly provide a unique & valid port value at Server-end. For more information refer VidGear docs.')
else:
#otherwise log it
print('[LOG]: Enabling Multi-Server Mode at PORT: {} on this device!'.format(port))
#assign value to global variable
self.port = port
else:
# otherwise assign local port address if None
if port is None: port = '5555'
try:
# initiate and handle secure mode
if self.secure_mode > 0:
# start an authenticator for this context
auth = ThreadAuthenticator(self.msg_context)
auth.start()
auth.allow(str(address)) #allow current address
#check if `IronHouse` is activated
if self.secure_mode == 2:
# tell authenticator to use the certificate from given valid dir
auth.configure_curve(domain='*', location=self.auth_publickeys_dir)
else:
#otherwise tell the authenticator how to handle the CURVE requests, if `StoneHouse` is activated
auth.configure_curve(domain='*', location=zmq.auth.CURVE_ALLOW_ANY)
# initialize and define thread-safe messaging socket
self.msg_socket = self.msg_context.socket(msg_pattern[0])
if self.pattern == 1:
# if pattern is 1, define additional flags
self.msg_socket.REQ_RELAXED = True
self.msg_socket.REQ_CORRELATE = True
if self.pattern == 2 and not(self.secure_mode): self.msg_socket.set_hwm(1) # if pattern is 2, define additional optimizer
# enable specified secure mode for the zmq socket
if self.secure_mode > 0:
# load client key
client_secret_file = os.path.join(self.auth_secretkeys_dir, "client.key_secret")
client_public, client_secret = zmq.auth.load_certificate(client_secret_file)
# load all CURVE keys
self.msg_socket.curve_secretkey = client_secret
self.msg_socket.curve_publickey = client_public
# load server key
server_public_file = os.path.join(self.auth_publickeys_dir, "server.key")
server_public, _ = zmq.auth.load_certificate(server_public_file)
# inject public key to make a CURVE connection.
self.msg_socket.curve_serverkey = server_public
# connect socket to given protocol, address and port
self.msg_socket.connect(protocol+'://' + str(address) + ':' + str(port))
# define socket options
self.msg_socket.setsockopt(zmq.LINGER, 0)
except Exception as e:
#log if errored
if self.secure_mode: print('Failed to activate ZMQ Security Mechanism: `{}` for this address!'.format(valid_security_mech[self.secure_mode]))
# raise value error
raise ValueError('[ERROR]: Failed to connect address: {} and pattern: {}! Kindly recheck all parameters.'.format((protocol+'://' + str(address) + ':' + str(port)), pattern))
if logging:
#finally log progress
print('[LOG]: Successfully connected to address: {} with pattern: {}.'.format((protocol+'://' + str(address) + ':' + str(port)), pattern))
if self.secure_mode: print('[LOG]: Enabled ZMQ Security Mechanism: `{}` for this address, Successfully!'.format(valid_security_mech[self.secure_mode]))
print('[LOG]: This device Unique ID is {}.'.format(self.id))
print('[LOG]: Send Mode is successfully activated and ready to send data!')
def update(self):
"""
Updates the queue with frames recovered from the messaging network
"""
# initialize frame variable
frame = None
# keep looping infinitely until the thread is terminated
while not self.exit_loop:
# check if global termination_flag is enabled
if self.terminate:
# check whether there are still frames in the queue before breaking out
if len(self.queue)>0:
continue
else:
break
#check queue buffer for overflow
if len(self.queue) >= 96:
#stop iterating if overflowing occurs
time.sleep(0.000001)
continue
# extract json data out of socket
msg_json = self.msg_socket.recv_json(flags=self.msg_flag)
# check if `terminate_flag` received
if msg_json['terminate_flag']:
#if multiserver_mode is enabled
if self.multiserver_mode:
# check and remove from which ports signal is received
if msg_json['port'] in self.port_buffer:
# if pattern is 1, then send back server the info about termination
if self.pattern == 1: self.msg_socket.send_string('[INFO]: Termination signal received at client!')
self.port_buffer.remove(msg_json['port'])
if self.logging: print('[ALERT]: Termination signal received from server at port: {}!'.format(msg_json['port']))
#if termination signal received from all servers then exit client.
if not self.port_buffer:
print('[WARNING]: Termination signal received from all Servers!!!')
self.terminate = True #termination
continue
else:
# if pattern is 1, then send back server the info about termination
if self.pattern == 1: self.msg_socket.send_string('[INFO]: Termination signal received at client!')
#termination
self.terminate = True
#notify client
if self.logging: print('[ALERT]: Termination signal received from server!')
continue
#check if pattern is same at both server's and client's end.
if int(msg_json['pattern']) != self.pattern:
raise ValueError("[ERROR]: Messaging patterns on both Server-end & Client-end must a valid pairs! Kindly refer VidGear docs.")
self.terminate = True
continue
# extract array from socket
msg_data = self.msg_socket.recv(flags=self.msg_flag, copy=self.msg_copy, track=self.msg_track)
if self.pattern != 2:
# check if bi-directional mode is enabled
if self.bi_mode:
# handle return data
bi_dict = dict(data = self.bi_data)
self.msg_socket.send_json(bi_dict, self.msg_flag)
else:
# send confirmation message to server
self.msg_socket.send_string('[LOG]: Data received on device: {} !'.format(self.id))
# recover frame from array buffer
frame_buffer = np.frombuffer(msg_data, dtype=msg_json['dtype'])
# reshape frame
frame = frame_buffer.reshape(msg_json['shape'])
#check if encoding was enabled
if msg_json['compression']:
frame = cv2.imdecode(frame, self.compression_params)
#check if valid frame returned
if frame is None:
#otherwise raise error and exit
raise ValueError("[ERROR]: `{}` Frame Decoding failed with Parameter: {}".format(msg_json['compression'], self.compression_params))
self.terminate = True
continue
if self.multiserver_mode:
# check if multiserver_mode
#save the unique port addresses
if not msg_json['port'] in self.port_buffer:
self.port_buffer.append(msg_json['port'])
#extract if any message from server and display it
if msg_json['message']:
self.queue.append((msg_json['port'], msg_json['message'], frame))
else:
# append recovered unique port and frame to queue
self.queue.append((msg_json['port'],frame))
else:
#extract if any message from server if Bi-Directional Mode is enabled
if self.bi_mode and msg_json['message']:
# append grouped frame and data to queue
self.queue.append((msg_json['message'], frame))
else:
# otherwise append recovered frame to queue
self.queue.append(frame)
# finally properly close the socket
self.msg_socket.close()
def recv(self, return_data = None):
"""
return the recovered frame
:param return_data: handles return data for bi-directional mode
"""
# check whether `receive mode` is activated
if not(self.receive_mode):
#raise value error and exit
raise ValueError('[ERROR]: `recv()` function cannot be used while receive_mode is disabled. Kindly refer vidgear docs!')
self.terminate = True
#handle bi-directional return data
if (self.bi_mode and not(return_data is None)): self.bi_data = return_data
# check whether or not termination flag is enabled
while not self.terminate:
# check if queue is empty
if len(self.queue)>0:
return self.queue.popleft()
else:
continue
# otherwise return NoneType
return None
def send(self, frame, message = None):
"""
send the frames over the messaging network
:param frame(ndarray): frame array to send
:param message(string/int): additional message for the client(s)
"""
# check whether `receive_mode` is disabled
if self.receive_mode:
#raise value error and exit
raise ValueError('[ERROR]: `send()` function cannot be used while receive_mode is enabled. Kindly refer vidgear docs!')
self.terminate = True
# define exit_flag and assign value
exit_flag = True if (frame is None or self.terminate) else False
#check whether exit_flag is False
if not(exit_flag) and not(frame.flags['C_CONTIGUOUS']):
#check whether the incoming frame is contiguous
frame = np.ascontiguousarray(frame, dtype=frame.dtype)
#handle encoding
if self.compression:
retval, frame = cv2.imencode(self.compression, frame, self.compression_params)
#check if it works
if not(retval):
#otherwise raise error and exit
raise ValueError("[ERROR]: Frame Encoding failed with format: {} and Parameters: {}".format(self.compression, self.compression_params))
self.terminate = True
#check if multiserver_mode is activated
if self.multiserver_mode:
# prepare the exclusive json dict and assign values with unique port
msg_dict = dict(terminate_flag = exit_flag,
compression=str(self.compression),
port = self.port,
pattern = str(self.pattern),
message = message,
dtype = str(frame.dtype),
shape = frame.shape)
else:
# otherwise prepare normal json dict and assign values
msg_dict = dict(terminate_flag = exit_flag,
compression=str(self.compression),
message = message,
pattern = str(self.pattern),
dtype = str(frame.dtype),
shape = frame.shape)
# send the json dict
self.msg_socket.send_json(msg_dict, self.msg_flag|self.zmq.SNDMORE)
# send the frame array with correct flags
self.msg_socket.send(frame, flags = self.msg_flag, copy=self.msg_copy, track=self.msg_track)
# wait for confirmation
if self.pattern != 2:
#check if bi-directional data transmission is enabled
if self.bi_mode:
#handle return data
return_dict = self.msg_socket.recv_json(flags=self.msg_flag)
return return_dict['data'] if return_dict else None
else:
#otherwise log normally
recv_confirmation = self.msg_socket.recv()
# log confirmation
if self.logging : print(recv_confirmation)
def close(self):
"""
Terminates the NetGear processes safely
"""
if self.logging:
#log it
print(' \n[LOG]: Terminating various {} Processes \n'.format('Receive Mode' if self.receive_mode else 'Send Mode'))
# whether `receive_mode` is enabled or not
if self.receive_mode:
# indicate that process should be terminated
self.terminate = True
# check whether queue mode is empty
if not(self.queue is None):
self.queue.clear()
# call immediate termination
self.exit_loop = True
# wait until stream resources are released (producer thread might be still grabbing frame)
if self.thread is not None:
self.thread.join()
self.thread = None
#properly handle thread exit
# properly close the socket
self.msg_socket.close(linger=0)
else:
# indicate that process should be terminated
self.terminate = True
if self.multiserver_mode:
#check if multiserver_mode
# send termination flag to client with its unique port
term_dict = dict(terminate_flag = True, port = self.port)
else:
# otherwise send termination flag to client
term_dict = dict(terminate_flag = True)
# otherwise inform client(s) that the termination has been reached
if self.force_close:
#overflow socket with termination signal
for _ in range(500): self.msg_socket.send_json(term_dict)
else:
self.msg_socket.send_json(term_dict)
#check for confirmation if available
if self.pattern < 2:
if self.pattern == 1: self.msg_socket.recv()
# properly close the socket
self.msg_socket.close(linger=0)
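# ------------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the NetGear API itself): the two
# helpers below show one hypothetical way to pair a sender and a receiver over
# the default zmq.PAIR pattern. The port value '5454' and the webcam source
# `cv2.VideoCapture(0)` are assumptions for demonstration; run each helper in
# its own process/terminal, never both in the same one.
def _example_sender():
    # any OpenCV-compatible frame source will do here
    stream = cv2.VideoCapture(0)
    # Send Mode (receive_mode=False is the default): connects to the receiver
    server = NetGear(address='localhost', port='5454', protocol='tcp', pattern=0, logging=True)
    while True:
        grabbed, frame = stream.read()
        if not grabbed:
            break
        server.send(frame)
    stream.release()
    server.close()

def _example_receiver():
    # Receive Mode: binds on all interfaces by default and queues incoming frames
    client = NetGear(port='5454', protocol='tcp', pattern=0, receive_mode=True, logging=True)
    while True:
        frame = client.recv()
        if frame is None:
            break
        cv2.imshow('Received Frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    client.close()
    cv2.destroyAllWindows()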
hc_vitaminder.py
import threading
from queue import Queue
from enum import Enum, auto
import serial
import sys
from datetime import datetime, timedelta, date, time
import configparser
# server message definitions (ones sent out from python):
# id=00: heartbeat request
# req bytes:
# 0 - message ID 0x00
# 1-7 - ignored
# rsp bytes:
# send the current settings for LED brightness and colors
# server might ignore it, or make use of it for some future reason
# 0 - 0x01
# 1 - led brightness
# 2-4 - vita LED r, g, b
# 5-7 - sys LED r, g, b
# id=02: set LED
# req bytes:
# 0 - message ID 0x02
# 1 - brightness
# 2 - pixel mask (least significant nibble represents the 4 pixels)
# 3 - red
# 4 - green
# 5 - blue
# 6 - off millis duration (div by 10) (off=0 means constant on)
# 7 - on millis duration (div by 10)
# rsp bytes:
# 0 - 0x03
# 1-7 - ignored
# client message definitions (sent out by Arduino, received by python):
# id=04: device boot
# req bytes:
# 0 - message ID 0x04
# 1-7 - ignored
# rsp bytes:
# none, the host PC will send "set LED" message within 5 secs of responding to this boot message
# id=06: button press
# req bytes:
# 0 - message ID 0x06
# 1 - ok button status (0x00 for unpressed, 0x01 for pressed)
# 2 - snooze status (0x00 for unpressed, 0x01 for pressed)
# 3-7 - ignored
# rsp bytes:
# none, it is likely (but not required) the host will send a "set LED" message soon,
# since a button press typically triggers a state change.
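# Worked example of the framing described above (illustrative only; the
# Vitaminder class below writes and reads the raw bytes directly). The concrete
# brightness/colour values here are assumptions for demonstration.
def _example_set_led_request():
    # id=02 "set LED": all 4 pixels, half brightness, solid red, constant on
    msg_id = 0x02
    brightness = 128
    pixel_mask = 0x0F        # least significant nibble -> all 4 pixels
    r, g, b = 255, 0, 0
    off_div10 = 0            # off=0 means constant on
    on_div10 = 0
    return bytes([msg_id, brightness, pixel_mask, r, g, b, off_div10, on_div10])

def _example_parse_button_press(msg):
    # id=06 "button press": byte 1 is the ok button, byte 2 is the snooze button
    assert msg[0] == 0x06, "not a button-press message"
    return {"ok": msg[1] == 0x01, "snooze": msg[2] == 0x01}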
def rgb_from_config(color_str="0,0,0"):
return [int(c) for c in color_str.split(',')]
class Vitaminder:
def __init__(self, config=None):
self.config = config
self.port_name = None
self.serial_port = None
self.state = VitState.UNMEDICATED
self.current_date = date.today()
self.snooze_expiration = None
self.snooze_delta = timedelta(seconds=int(self.config["snooze_duration_seconds"]))
self.heartbeat_time = None
self.heartbeat_result = None
self.alive = True
self.alive_lock = threading.Condition()
self.msg_lock = threading.Condition()
self.msg_queue = Queue()
def connect(self):
self.port_name = self.config["comm_port"]
print("connecting:", self.port_name)
self.serial_port = serial.Serial(self.port_name, timeout=int(self.config["comm_read_timeout"]))
print("connection status:", self.serial_port.isOpen())
def disconnect(self):
if self.is_connected():
self.serial_port.close()
def is_connected(self):
return self.serial_port is not None and self.serial_port.isOpen()
def __del__(self):
self.disconnect()
def send_set_led_message(self):
# we basically need a brightness (just use default) and a color (based on state)
color_map = {
VitState.UNMEDICATED: rgb_from_config(self.config["color_unmedicated"]),
VitState.NAILED_IT: rgb_from_config(self.config["color_nailed_it"]),
VitState.SOFT_REMINDER: rgb_from_config(self.config["color_soft_reminder"]),
VitState.HARD_REMINDER: rgb_from_config(self.config["color_hard_reminder"]),
VitState.SNOOZE: rgb_from_config(self.config["color_snooze"]),
}
brightness_map = {
VitState.UNMEDICATED: int(self.config["brightness_unmedicated"]),
VitState.NAILED_IT: int(self.config["brightness_nailed_it"]),
VitState.SOFT_REMINDER: int(self.config["brightness_soft_reminder"]),
VitState.HARD_REMINDER: int(self.config["brightness_hard_reminder"]),
VitState.SNOOZE: int(self.config["brightness_snooze"])
}
rgb = color_map.get(self.state, [128, 0, 255])
brightness = brightness_map.get(self.state, 128)
# rgb = [0, 0, 255]
# TODO implement blinking for reminder states
blink_off = 25
blink_on = 75
pixel_mask = 0x0F
# req bytes:
# 0 - message ID 0x02
# 1 - brightness
# 2 - pixel mask (least significant nibble represents the 4 pixels)
# 3 - red
# 4 - green
# 5 - blue
# 6 - off millis duration (div by 10) (off=0 means constant on)
# 7 - on millis duration (div by 10)
self.serial_port.write(bytes([0x02, brightness, pixel_mask, rgb[0], rgb[1], rgb[2], blink_off, blink_on]))
def time_update_thread(self, debug=False):
while self.alive:
if debug:
print("time_update() doing my thing")
self.update_state_by_time()
self.add_event(VitEvent(VitMsg.STATE))
self.alive_lock.acquire()
self.alive_lock.wait(int(self.config["time_update_thread_sleep_sec"]))
self.alive_lock.release()
if debug:
print("time_update() end")
def update_state_by_time(self):
# if it is our first update after 2am of a new day,
# reset a new day, back to UNMEDICATED and clear any snooze flags
# if we NAILED_IT, simply move along, maybe give yourself a hug
# if we are snoozing, check the snooze expiry.
# if still snoozing, no state change, buh bye
# if done snoozing, temporarily set state to UNMEDICATED and continue to next section...
# so we're not snoozing, let's figure out the right state based on time
# 2am - 6pm, UNMEDICATED
# 6pm-7pm, SOFT_REMINDER
# 7pm-2am, HARD_REMINDER
if self.current_date is not None and self.current_date != date.today():
# we have a new day! happy new year!
self.current_date = date.today()
self.snooze_expiration = None
self.state = VitState.UNMEDICATED
return
if self.state == VitState.NAILED_IT:
# congrats brah
return
if self.state == VitState.SNOOZE:
if datetime.now() < self.snooze_expiration:
# goto sleep; goto sleep; goto sleep, little baby
return
else:
# WAKE UP, SLEEPY HEAD!
self.snooze_expiration = None
self.state = VitState.UNMEDICATED
n = datetime.now().time()
unmed_begin = time.fromisoformat(self.config["boundary_unmedicated_begin"])
unmed_end = time.fromisoformat(self.config["boundary_unmedicated_end"])
soft_begin = time.fromisoformat(self.config["boundary_soft_reminder_begin"])
soft_end = time.fromisoformat(self.config["boundary_soft_reminder_end"])
# print("time.now()", n)
# print("unmed_begin", unmed_begin)
# print("unmed_end", unmed_end)
# print("soft_begin", soft_begin)
# print("soft_end", soft_end)
if unmed_begin <= n < unmed_end:
self.state = VitState.UNMEDICATED
print("update_state_by_time() UNMEDICATED")
elif soft_begin <= n < soft_end:
self.state = VitState.SOFT_REMINDER
print("update_state_by_time() SOFT_REMINDER")
else:
self.state = VitState.HARD_REMINDER
print("update_state_by_time() HARD_REMINDER")
def handle_button_press(self, event):
# snooze button is only valid if current state is hard or soft reminder, or snooze (adds more time)
if event.data[2] == 0x01:
# snooze button
if self.state in [VitState.SNOOZE, VitState.SOFT_REMINDER, VitState.HARD_REMINDER]:
self.state = VitState.SNOOZE
self.snooze_expiration = datetime.now() + self.snooze_delta
elif event.data[1] == 0x01:
# ok button
# if currently NAILED_IT, revert to whatever time-based state you should be in
# if not currently NAILED_IT, then NAIL_IT
if self.state != VitState.NAILED_IT:
self.state = VitState.NAILED_IT
self.snooze_expiration = None
else:
self.state = VitState.UNMEDICATED
self.update_state_by_time()
# mapper = {
# VitState.UNMEDICATED: VitState.SOFT_REMINDER,
# VitState.SOFT_REMINDER: VitState.HARD_REMINDER,
# VitState.HARD_REMINDER: VitState.SNOOZE,
# VitState.SNOOZE: VitState.NAILED_IT,
# VitState.NAILED_IT: VitState.UNMEDICATED
# }
# self.state = mapper.get(self.state)
self.add_event(VitEvent(VitMsg.STATE))
def add_event(self, event):
self.msg_lock.acquire()
self.msg_queue.put(event)
self.msg_lock.notify_all()
self.msg_lock.release()
def serial_read_thread(self, debug=False):
while self.alive:
msg = self.serial_port.read(int(self.config["msg_size"]))
if debug:
print("serial_read: looping")
if msg is None or len(msg) < int(self.config["msg_size"]):
if debug:
print("serial_read: simply a timeout, msg is none")
else:
if debug:
print("serial_read: have a message")
# create a message event and push it to the queue
msg_type = None
if msg[0] == 0x01:
msg_type = VitMsg.SERIAL_HEARTBEAT_RSP
elif msg[0] == 0x03:
msg_type = VitMsg.SERIAL_STATE_RSP
elif msg[0] == 0x04:
msg_type = VitMsg.SERIAL_BOOT
elif msg[0] == 0x06:
msg_type = VitMsg.SERIAL_BUTTON
e = VitEvent(msg_type, msg)
self.add_event(e)
if debug:
print("serial_read: end")
def dummy_thread(self, debug=False):
dummy_sleep_sec = int(self.config["dummy_thread_sleep_sec"])
if debug:
print("dummy() sleeping for", dummy_sleep_sec, "seconds")
self.alive_lock.acquire()
self.alive_lock.wait(dummy_sleep_sec)
self.alive_lock.release()
if debug:
print("dummy() adding EXIT msg")
self.add_event(VitEvent(event_id=VitMsg.EXIT))
def heartbeat_thread(self, debug=False):
while self.alive:
if debug:
print("hb() adding heartbeat message")
self.add_event(VitEvent(VitMsg.HEARTBEAT))
self.alive_lock.acquire()
self.alive_lock.wait(int(self.config["heartbeat_thread_sleep_sec"]))
self.alive_lock.release()
if debug:
print("hb() end")
def ctl_thread(self, debug=False, print_msg=True):
# acquire the lock
self.msg_lock.acquire()
# keep reading messages for as long as we're alive
while self.alive:
# process messages
while not self.msg_queue.empty():
e = self.msg_queue.get()
if debug or print_msg:
print("ctl: ", e.event_id)
if e.event_id == VitMsg.EXIT:
if debug:
print("ctl got exit message")
self.alive = False
# wake everybody up so they can exit cleanly
self.alive_lock.acquire()
self.alive_lock.notify_all()
self.alive_lock.release()
else:
if debug:
print("ctl got non-exit message:", str(e.data))
if e.event_id == VitMsg.HEARTBEAT:
self.serial_port.write(bytes([0, 1, 1, 1, 1, 1, 1, 1]))
elif e.event_id == VitMsg.STATE:
self.send_set_led_message()
elif e.event_id == VitMsg.SERIAL_BOOT:
# device booted, send them current state info
self.send_set_led_message()
elif e.event_id == VitMsg.SERIAL_BUTTON:
# they pressed a button, DO SOMETHING!
self.handle_button_press(e)
# wait for new messages (temporarily releases the lock)
if self.alive:
self.msg_lock.wait(int(self.config["ctl_thread_sleep_sec"]))
if debug:
print("ctl done waiting")
# release the lock
self.msg_lock.release()
if debug:
print("ctl end")
class VitEvent:
def __init__(self, event_id, data=None):
self.event_id = event_id
self.data = data
class VitMsg(Enum):
EXIT = auto()
HEARTBEAT = auto()
STATE = auto()
SERIAL_BOOT = auto()
SERIAL_BUTTON = auto()
SERIAL_HEARTBEAT_RSP = auto()
SERIAL_STATE_RSP = auto()
class VitState(Enum):
UNMEDICATED = auto()
SOFT_REMINDER = auto()
HARD_REMINDER = auto()
SNOOZE = auto()
NAILED_IT = auto()
if __name__ == "__main__":
# load configuration
config_file = "hc-vitaminder.ini"
if len(sys.argv) < 2:
print("no config file specified, using default:", config_file)
else:
config_file = sys.argv[1]
print("using config file:", config_file)
configuration = configparser.ConfigParser()
configuration.read(config_file)
configuration = configuration["DEFAULT"]
v = Vitaminder(config=configuration)
v.connect()
ctl_thread = threading.Thread(target=v.ctl_thread)
print("main starting control thread")
ctl_thread.start()
serial_thread = threading.Thread(target=v.serial_read_thread)
print("main starting serial_read")
serial_thread.start()
# dummy_thread = threading.Thread(target=v.dummy_thread)
# print("main starting dummy")
# dummy_thread.start()
heartbeat_thread = threading.Thread(target=v.heartbeat_thread)
print("main starting heartbeat")
heartbeat_thread.start()
time_thread = threading.Thread(target=v.time_update_thread)
print("main starting time_update")
time_thread.start()
ctl_thread.join()
# dummy_thread.join()
heartbeat_thread.join()
serial_thread.join()
time_thread.join()
v.disconnect()
print("main all done")
if __name__ == "__xxxmain__":
port_name = "COM5"
if len(sys.argv) < 2:
print("no port specified, using default:", port_name)
else:
port_name = sys.argv[1]
print("using port:", port_name)
ser = serial.Serial(port_name)
print("Serial name:", ser.name)
# TODO - test for successful connection
# ser.write(b'py\n')
ser.write(bytes([0, 1, 1, 1, 1, 1, 1, 1]))
# TODO handle failed ack
rsp = ser.read(8)
if rsp[0] == 0x00:
print("have heartbeat response")
else:
print("have unknown response")
ser.write(bytes([1, 128, 255, 0, 255, 0, 255, 0]))
# TODO handle failed ack
rsp = ser.read(8)
if rsp[0] == 0x01:
print("have LED response")
else:
print("have unknown response")
ser.write(bytes([2, 98, 99, 100, 101, 102, 103, 104]))
ser.close()
basic.py
#!/usr/bin/python3
import urllib.request, urllib.error, urllib.parse, unittest, json, hashlib, threading, uuid, time
from functools import wraps
try:
import msgpack
except ImportError:
msgpack = None
import os
host = os.getenv('WEBDIS_HOST', '127.0.0.1')
port = int(os.getenv('WEBDIS_PORT', 7379))
class TestWebdis(unittest.TestCase):
def wrap(self,url):
return 'http://%s:%d/%s' % (host, port, url)
def query(self, url, data = None, headers={}):
r = urllib.request.Request(self.wrap(url), data, headers)
return urllib.request.urlopen(r)
class TestBasics(TestWebdis):
def test_crossdomain(self):
f = self.query('crossdomain.xml')
self.assertTrue(f.getheader('Content-Type') == 'application/xml')
self.assertTrue(b"allow-access-from domain" in f.read())
def test_options(self):
pass
# not sure if OPTIONS is supported by urllib2...
# f = self.query('') # TODO: call with OPTIONS.
# self.assertTrue(f.headers.getheader('Content-Type') == 'text/html')
# self.assertTrue(f.headers.getheader('Allow') == 'GET,POST,PUT,OPTIONS')
# self.assertTrue(f.headers.getheader('Content-Length') == '0')
# self.assertTrue(f.headers.getheader('Access-Control-Allow-Origin') == '*')
class TestJSON(TestWebdis):
def test_set(self):
"success type (+OK)"
self.query('DEL/hello')
f = self.query('SET/hello/world')
self.assertTrue(f.getheader('Content-Type') == 'application/json')
self.assertTrue(f.getheader('ETag') == '"0db1124cf79ffeb80aff6d199d5822f8"')
self.assertTrue(f.read() == b'{"SET":[true,"OK"]}')
def test_get(self):
"string type"
self.query('SET/hello/world')
f = self.query('GET/hello')
self.assertTrue(f.getheader('Content-Type') == 'application/json')
self.assertTrue(f.getheader('ETag') == '"8cf38afc245b7a6a88696566483d1390"')
self.assertTrue(f.read() == b'{"GET":"world"}')
def test_incr(self):
"integer type"
self.query('DEL/hello')
f = self.query('INCR/hello')
self.assertTrue(f.getheader('Content-Type') == 'application/json')
self.assertTrue(f.getheader('ETag') == '"500e9bcdcbb1e98f25c1fbb880a96c99"')
self.assertTrue(f.read() == b'{"INCR":1}')
def test_list(self):
"list type"
self.query('DEL/hello')
self.query('RPUSH/hello/abc')
self.query('RPUSH/hello/def')
f = self.query('LRANGE/hello/0/-1')
self.assertTrue(f.getheader('Content-Type') == 'application/json')
self.assertTrue(f.getheader('ETag') == '"622e51f547a480bef7cf5452fb7782db"')
self.assertTrue(f.read() == b'{"LRANGE":["abc","def"]}')
def test_error(self):
"error return type"
f = self.query('UNKNOWN/COMMAND')
self.assertTrue(f.getheader('Content-Type') == 'application/json')
try:
obj = json.loads(f.read().decode('utf-8'))
except:
self.assertTrue(False)
return
self.assertTrue(len(obj) == 1)
self.assertTrue('UNKNOWN' in obj)
self.assertTrue(isinstance(obj['UNKNOWN'], list))
self.assertTrue(obj['UNKNOWN'][0] == False)
self.assertTrue(isinstance(obj['UNKNOWN'][1], str))
class TestCustom(TestWebdis):
def test_list(self):
"List responses with custom format"
self.query('DEL/hello')
self.query('RPUSH/hello/a/b/c')
f = self.query('LRANGE/hello/0/-1.txt')
self.assertTrue(f.getheader('Content-Type') == 'text/plain')
self.assertTrue(f.read() == b"abc")
def test_separator(self):
"Separator in list responses with custom format"
self.query('DEL/hello')
self.query('RPUSH/hello/a/b/c')
f = self.query('LRANGE/hello/0/-1.txt?sep=--')
self.assertTrue(f.getheader('Content-Type') == 'text/plain')
self.assertTrue(f.read() == b"a--b--c")
class TestRaw(TestWebdis):
def test_set(self):
"success type (+OK)"
self.query('DEL/hello')
f = self.query('SET/hello/world.raw')
self.assertTrue(f.getheader('Content-Type') == 'binary/octet-stream')
self.assertTrue(f.read() == b"+OK\r\n")
def test_get(self):
"string type"
self.query('SET/hello/world')
f = self.query('GET/hello.raw')
self.assertTrue(f.read() == b'$5\r\nworld\r\n')
def test_incr(self):
"integer type"
self.query('DEL/hello')
f = self.query('INCR/hello.raw')
self.assertTrue(f.read() == b':1\r\n')
def test_list(self):
"list type"
self.query('DEL/hello')
self.query('RPUSH/hello/abc')
self.query('RPUSH/hello/def')
f = self.query('LRANGE/hello/0/-1.raw')
self.assertTrue(f.read() == b"*2\r\n$3\r\nabc\r\n$3\r\ndef\r\n")
def test_error(self):
"error return type"
f = self.query('UNKNOWN/COMMAND.raw')
self.assertTrue(f.read().startswith(b"-ERR "))
def need_msgpack(fn):
@wraps(fn)
def wrapper(self):
if msgpack:
fn(self)
return wrapper
class TestMsgPack(TestWebdis):
@need_msgpack
def test_set(self):
"success type (+OK)"
self.query('DEL/hello')
f = self.query('SET/hello/world.msg')
self.assertTrue(f.getheader('Content-Type') == 'application/x-msgpack')
obj = msgpack.loads(f.read())
self.assertTrue(obj == {'SET': [True, 'OK']})
@need_msgpack
def test_get(self):
"string type"
self.query('SET/hello/world')
f = self.query('GET/hello.msg')
obj = msgpack.loads(f.read())
self.assertTrue(obj == {'GET': 'world'})
@need_msgpack
def test_incr(self):
"integer type"
self.query('DEL/hello')
f = self.query('INCR/hello.msg')
obj = msgpack.loads(f.read())
self.assertTrue(obj == {'INCR': 1})
@need_msgpack
def test_list(self):
"list type"
self.query('DEL/hello')
self.query('RPUSH/hello/abc')
self.query('RPUSH/hello/def')
f = self.query('LRANGE/hello/0/-1.msg')
obj = msgpack.loads(f.read())
self.assertTrue(obj == {'LRANGE': ['abc', 'def']})
@need_msgpack
def test_error(self):
"error return type"
f = self.query('UNKNOWN/COMMAND.msg')
obj = msgpack.loads(f.read())
self.assertTrue('UNKNOWN' in obj)
self.assertTrue(isinstance(obj, dict))
self.assertTrue(isinstance(obj['UNKNOWN'], list))
self.assertTrue(obj['UNKNOWN'][0] == False)
self.assertTrue(isinstance(obj['UNKNOWN'][1], str))
class TestETag(TestWebdis):
def test_etag_match(self):
self.query('SET/hello/world')
h = hashlib.md5("world".encode()).hexdigest() # match Etag
try:
f = self.query('GET/hello.txt', None, {'If-None-Match': '"'+ h +'"'})
except urllib.error.HTTPError as e:
self.assertTrue(e.code == 304)
return
self.assertTrue(False) # we should have received a 304.
def test_etag_fail(self):
self.query('SET/hello/world')
h = hashlib.md5("nonsense".encode()).hexdigest() # non-matching Etag
f = self.query('GET/hello.txt', None, {'If-None-Match': '"'+ h +'"'})
self.assertTrue(f.read() == b'world')
class TestDbSwitch(TestWebdis):
def test_db(self):
"Test database change"
self.query('0/SET/key/val0')
self.query('1/SET/key/val1')
f = self.query('0/GET/key.txt')
self.assertTrue(f.read() == b"val0")
f = self.query('1/GET/key.txt')
self.assertTrue(f.read() == b"val1")
f = self.query('GET/key.txt')
self.assertTrue(f.read() == b"val0")
@unittest.skip("Fails in GitHub actions")
class TestPubSub(TestWebdis):
def test_pubsub_basic(self):
self.validate_pubsub(1)
def test_pubsub_many_messages(self):
self.validate_pubsub(1000)
def validate_pubsub(self, num_messages):
channel_name = str(uuid.uuid4())
expected_messages = [str(uuid.uuid4()) for i in range(num_messages)]
self.subscribed = False
subscriber = threading.Thread(target=self.subscriber_main, args=(channel_name,expected_messages))
subscriber.start()
# wait for the subscription to be confirmed
while not self.subscribed:
time.sleep(0.1)
for msg in expected_messages:
pub_response = self.query('PUBLISH/' + channel_name + '/' + msg)
self.assertEqual('{"PUBLISH":1}', pub_response.read().decode('utf-8'))
subscriber.join()
def subscriber_main(self, channel_name, expected_messages):
sub_response = self.query('SUBSCRIBE/' + channel_name)
msg_index = 0
buffer = ''
open_braces = 0
while True:
cur = sub_response.read(1).decode('utf-8')
buffer += cur
if cur == '{':
open_braces += 1
elif cur == '}':
open_braces -= 1
if open_braces == 0: # we have a complete JSON message
message = json.loads(buffer)
buffer = ''
if 'SUBSCRIBE' in message:
if message['SUBSCRIBE'] == ['subscribe', channel_name, 1]: # notify of successful subscription
self.subscribed = True
continue
elif message['SUBSCRIBE'] == ['message', channel_name, expected_messages[msg_index]]: # confirm current message
msg_index += 1
if msg_index == len(expected_messages):
sub_response.close()
return
else:
continue
self.fail('Received an unexpected message: ' + buffer)
if __name__ == '__main__':
unittest.main()
FLIR_server_utils.py
# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/92_FLIR_server_utils.ipynb (unless otherwise specified).
__all__ = ['CameraThread', 'GPIOThread', 'register', 'server', 'PORT']
# Cell
# from boxfish_stereo_cam.multipyspin import *
import FLIR_pubsub.multi_pyspin as multi_pyspin
import time
import zmq
import threading
import cv2
import imutils
try:
import mraa
use_mraa = True
except:
use_mraa = False
# Cell
class CameraThread:
'''
Each camera is controlled by a separate thread, allowing it to be started and stopped.
When started, the update loop will wait for a camera image and send the array data through a zmq socket.
'''
def __init__(self, socket_pub, i, yaml_dict, preview=False):
self.stopped = True
self.socket_pub = socket_pub
self.i = i
self.yaml_dict = yaml_dict
self.preview = preview
self.name = yaml_dict['name']
self.serial = str(yaml_dict['serial'])
self.encoding = yaml_dict['encoding']
self.last_access = time.time()
def send_array(self, A, flags=0, framedata=None, copy=True, track=False):
"""send a numpy array with metadata"""
md = dict(
dtype=str(A.dtype),
shape=A.shape,
framedata=framedata,
)
self.socket_pub.send_string(self.name, zmq.SNDMORE)
self.socket_pub.send_json(md, flags | zmq.SNDMORE)
return self.socket_pub.send(A, flags, copy=copy, track=track)
def start(self):
"""
Initialise and start the camera thread and begin acquisition
"""
self.thread = threading.Thread(target=self.update, args=(self.socket_pub, self.i, self.yaml_dict, ))
self.thread.daemon = False
self.stopped = False
self.thread.start()
return self
def stop(self):
"""indicate that the thread should be stopped"""
self.stopped = True
# wait until stream resources are released (producer thread might be still grabbing frame)
self.thread.join()
def update(self, socket, cam_num, yaml_dict):
# Prepare publisher
multi_pyspin.init(self.serial)
multi_pyspin.start_acquisition(self.serial)
print(f'Starting : {self.name}')
i = 0
while True:
i += 1
# cam = multi_pyspin._get_cam(serial)
# image = cam.GetNextImage()
try:
image, image_dict = multi_pyspin.get_image(self.serial)
img = image.GetNDArray()
shape = img.shape
if self.encoding is not None:
img = cv2.imencode(self.encoding, img)[1] # i.e encode into jpeg
md = {'frameid': i, 'encoding': self.encoding, 'size': img.size, 'shape': shape}
self.send_array( img, framedata=md)
except Exception as e:
print(str(e))
if self.preview:
if self.encoding is not None:
_frame = cv2.imdecode(img, cv2.IMREAD_GRAYSCALE)
else:
_frame = img
_frame = cv2.cvtColor(_frame, cv2.COLOR_BAYER_BG2BGR)
_frame = imutils.resize(_frame, width=1000, height=750)
cv2.imshow(self.name, _frame)
cv2.waitKey(10)
if time.time() - self.last_access > 10:
print(f'Stopping {self.name} due to inactivity.')
self.stopped = True
if self.stopped:
break
multi_pyspin.end_acquisition(self.serial)
multi_pyspin.deinit(self.serial)
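# Subscriber-side counterpart to `CameraThread.send_array` above: every image is
# published as three zmq frames -- the camera name (topic string), a JSON
# metadata dict and the raw array buffer. The helper below is an illustrative
# sketch of how a client could rebuild the array; it is not part of the server.
import numpy as np  # assumed available (it ships alongside OpenCV)

def recv_array(socket_sub, flags=0, copy=True, track=False):
    """Receive the (name, metadata, ndarray) triple published by send_array."""
    name = socket_sub.recv_string(flags)
    md = socket_sub.recv_json(flags)
    buf = socket_sub.recv(flags, copy=copy, track=track)
    arr = np.frombuffer(buf, dtype=md['dtype'])
    # note: when md['framedata']['encoding'] is set, `arr` is still a compressed
    # image buffer and needs cv2.imdecode() before display
    return name, md, arr.reshape(md['shape'])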
# Cell
class GPIOThread:
'''
A thread class to control and toggle Upboard GPIO pins, allowing it to be started & stopped.
Pin number and frequency can be set
'''
def __init__(self, pin_no, freq=2.0):
self.stopped = True
# Export the GPIO pin for use
if use_mraa:
self.pin = mraa.Gpio(pin_no)
self.pin.dir(mraa.DIR_OUT)
self.pin.write(0)
self.period = 1 / (2 * freq)
def start(self):
"""Start the pin toggle"""
self.thread = threading.Thread(target=self.update, args=())
self.thread.daemon = False
self.stopped = False
self.thread.start()
return self
def stop(self):
"""Stop the pin toggle"""
self.stopped = True
# wait until stream resources are released (producer thread might be still grabbing frame)
self.thread.join()
def update(self):
# Loop
while True:
if use_mraa:
self.pin.write(1)
# else:
# print('1', end='')
time.sleep(self.period)
if use_mraa:
self.pin.write(0)
# else:
# print('0')
time.sleep(self.period)
if self.stopped:
break
# Cell
PORT = 5555
def register():
"""Run multi_pyspin constructor and register multi_pyspin destructor. Should be called once when first imported"""
multi_pyspin.register()
def server(yaml_dir):
"""
Main loop for the server. Polls and sets up the cameras. Sets up the socket and port numbers and starts threads.
"""
# # Install cameras
pub_threads = []
yaml_dicts = []
for i, serial in enumerate(list(multi_pyspin.SERIAL_DICT)):
print(f'{yaml_dir/serial}.yaml')
yaml_dict = multi_pyspin.setup(f'{yaml_dir/serial}.yaml')
yaml_dicts.append(yaml_dict)
# yaml_dir=Path('common')
# # # Install cameras
# pub_threads = []
# yaml_dicts = []
# for i, serial in enumerate(list(multi_pyspin.SERIAL_DICT)):
# yaml_dict = multi_pyspin.setup(f'{yaml_dir/serial}.yaml')
# yaml_dicts.append(yaml_dict)
context = zmq.Context()
socket_pub = context.socket(zmq.PUB)
socket_pub.setsockopt(zmq.SNDHWM, 20)
socket_pub.setsockopt(zmq.LINGER, 0)
# socket_pub.setsockopt(zmq.SO_REUSEADDR, 1)
socket_pub.bind(f"tcp://*:{PORT}")
socket_rep = context.socket(zmq.REP)
socket_rep.RCVTIMEO = 1000
socket_rep.bind(f"tcp://*:{PORT+1}")
for i, yaml_dict in enumerate(list(yaml_dicts)):
ct = CameraThread(socket_pub, i, yaml_dict)
ct.start()
pub_threads.append(ct)
gpio1 = GPIOThread(29, 2.0).start()
gpio2 = GPIOThread(31, 10.0).start()
while True:
try:
message = socket_rep.recv().decode("utf-8")
socket_rep.send_string("OK")
name = message.split()[1]
pt = [pt for pt in pub_threads if pt.name == name]
if len(pt) == 1:
pt[0].last_access = time.time()
if pt[0].stopped:
pt[0].start()
except zmq.error.Again:
pass
except KeyboardInterrupt:
break
except Exception as e:
print(str(e))
for ct in pub_threads:
print(f"stopping {ct.name}")
ct.stop()
gpio1.stop()
gpio2.stop()
cv2.destroyAllWindows()
socket_pub.close()
socket_rep.close()
context.term()
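# Client-side sketch: the exact frame payload produced by CameraThread.send_array()
# is defined elsewhere, so this only shows the socket plumbing assumed by server()
# above -- subscribe to the PUB socket for frames and periodically ping the REP
# socket so the matching CameraThread stays alive. The host, camera name and the
# "keepalive" verb are placeholders (the server only looks at split()[1]).
def example_client(host='127.0.0.1', camera_name='CAMERA_NAME', pings=3):
    import zmq, time
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt_string(zmq.SUBSCRIBE, '')          # receive all published frames
    sub.connect(f'tcp://{host}:{PORT}')
    req = ctx.socket(zmq.REQ)
    req.connect(f'tcp://{host}:{PORT + 1}')
    for _ in range(pings):
        req.send_string(f'keepalive {camera_name}')   # server reads split()[1] as the camera name
        print('server replied:', req.recv_string())
        if sub.poll(100):                             # drain one pending frame message, if any
            parts = sub.recv_multipart()
            print('received a frame message with', len(parts), 'parts')
        time.sleep(1)
    sub.close()
    req.close()
    ctx.term()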
if __name__ == '__main__':
import sys
from pathlib import Path
sys.path.append(str(Path.cwd()))
register()
yaml_dir = Path.cwd() / '../nbs/common'
print(f'yaml_dir {yaml_dir}')
server(yaml_dir)
|
Responder.py
|
import logging
import zipfile
import requests
from google_drive_downloader import GoogleDriveDownloader as gdd
import threading
import torch
import torch.nn.functional as F
from ChatBotAI.yt_encoder import YTEncoder
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer
FILTER_VALUE = -float('Inf')
URL_ZIP_MODEL = "https://drive.google.com/open?id=1FR72Ib40V0nXxfH__x91NWGsy13hzcs5"
ID_GOOGLE_FILE = "1FR72Ib40V0nXxfH__x91NWGsy13hzcs5"
ZIP_NAME = "./ChatBotAI/model_checkpoint.zip"
DIR_NAME = './ChatBotAI/model_checkpoint'
logger = logging.getLogger(__name__)
class ChatBotAI:
def __init__(self, model_path="", tokenizer_class="YTEncoder",
tokenizer_name="ChatBotAI/bpe/yt.model", device='cpu'):
# assert model_path != "", "model_path is empty."
self.model = None
self.config = None
self.tokenizer = None
if model_path == "":
logger.info("Downloading model...")
# gdd.download_file_from_google_drive(file_id=ID_GOOGLE_FILE,
# dest_path=f'./{ZIP_NAME}')
#
# with zipfile.ZipFile(ZIP_NAME, 'r') as zip_ref:
# zip_ref.extractall(DIR_NAME)
# model_path = DIR_NAME
ThreadingExample(ID_GOOGLE_FILE, ZIP_NAME, DIR_NAME)
logger.info("Download completed!")
self.model_path = DIR_NAME
self.model_class = GPT2LMHeadModel
self.config_class = GPT2Config
# TODO:
# Train own tokenizer
self.tokenizer_class = YTEncoder if tokenizer_class == "YTEncoder" else GPT2Tokenizer
self.tokenizer_name = tokenizer_name if tokenizer_name else self.model_path
self.device = device
self.max_input = 1023
def load_model(self):
self.tokenizer = self.tokenizer_class.from_pretrained(self.tokenizer_name)
self.max_input = min(self.tokenizer.max_len_single_sentence, self.max_input)
self.config = self.config_class.from_pretrained(self.model_path)
self.model = self.model_class.from_pretrained(self.model_path, config=self.config)
self.model.to(self.device)
self.model.eval()
def respond(self, context=""):
self.model.eval()
context_tokens = self.tokenizer.encode(context)
out = sample_sequence(
model=self.model,
context=context_tokens,
length=500,
temperature=1.0,
top_k=50,
top_p=0.9,
device=self.device,
max_input=self.max_input
)
out = out[0, len(context_tokens):].tolist()
out_text = self.tokenizer.decode(out)
return out_text
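# Usage sketch: the constructor above starts the checkpoint download in a
# background thread when model_path is empty, so a real caller must ensure the
# files under DIR_NAME are fully downloaded and extracted before load_model()
# runs. The prompt is only an illustration.
def _example_chat():
    bot = ChatBotAI(device='cpu')   # assumes model_checkpoint/ is already populated
    bot.load_model()
    print(bot.respond("Hello, how are you today?"))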
def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
""" Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
Args:
logits: logits distribution shape (vocabulary size)
top_k > 0: keep only top k tokens with highest probability (top-k filtering).
top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
"""
assert logits.dim() == 1 # batch size 1 for now - could be updated for more but the code would be less clear
top_k = min(top_k, logits.size(-1)) # Safety check
if top_k > 0:
# Remove all tokens with a probability less than the last token of the top-k
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p > 0.0:
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold
sorted_indices_to_remove = cumulative_probs > top_p
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
indices_to_remove = sorted_indices[sorted_indices_to_remove]
logits[indices_to_remove] = filter_value
return logits
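# Self-contained check of the filtering above on toy logits: with top_k=2 only
# the two largest logits survive, everything else becomes -inf and therefore
# gets zero probability after the softmax.
def _demo_top_k_filtering():
    logits = torch.tensor([1.0, 3.0, 0.5, 2.0])
    filtered = top_k_top_p_filtering(logits.clone(), top_k=2)
    # filtered == tensor([-inf, 3.0, -inf, 2.0])
    return F.softmax(filtered, dim=-1)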
def sample_sequence(model, length, context, num_samples=1, temperature=1.0, top_k=0, top_p=0.0,
device='cpu', max_input=1023, filter_single=[], filter_double=[]):
context = torch.tensor(context, dtype=torch.long, device=device)
context = context.unsqueeze(0).repeat(num_samples, 1)
generated = context
with torch.no_grad():
for _ in range(length):
inputs = {'input_ids': generated[:, -max_input:]}
outputs = model(
**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet (cached hidden-states)
next_tokens = torch.zeros(num_samples, dtype=torch.long).to(device)
for isample in range(num_samples):
next_token_logits = outputs[0][isample, -1, :] / temperature
next_token_logits[filter_single] = FILTER_VALUE
# filter blank line = double \n
if generated[isample, -1] in filter_double:
next_token_logits[generated[isample, -1]] = FILTER_VALUE
filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
next_tokens[isample] = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)
generated = torch.cat((generated, next_tokens.unsqueeze(-1)), dim=1)
return generated
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params={'id': id}, stream=True)
token = get_confirm_token(response)
if token:
params = {'id': id, 'confirm': token}
response = session.get(URL, params=params, stream=True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 3276
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
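# Usage sketch for the three helpers above: fetch a shared file by its Drive id
# and unpack it. The id and paths are placeholders, not real resources of this
# project.
def _example_download(file_id="YOUR_DRIVE_FILE_ID", zip_path="./model.zip", out_dir="./model"):
    download_file_from_google_drive(file_id, zip_path)
    with zipfile.ZipFile(zip_path, 'r') as zf:
        zf.extractall(out_dir)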
class ThreadingExample(object):
""" Threading example class
The run() method will be started and it will run in the background
until the application exits.
"""
def __init__(self, id_google="", dest_path_zip="", dest_path=""):
self.id_google = id_google
self.dest_path_zip = dest_path_zip
self.dest_path = dest_path
thread = threading.Thread(target=self.run, args=())
thread.daemon = True # Daemonize thread
thread.start() # Start the execution
def run(self):
""" Method that runs forever """
download_file_from_google_drive(self.id_google, self.dest_path_zip)
with zipfile.ZipFile(self.dest_path_zip, 'r') as zip_ref:
zip_ref.extractall(self.dest_path)
# if __name__=="__main__":
# chatbot = ChatBotAI()
# chatbot.load_model()
|
bgapi.py
|
from __future__ import print_function
# for Python 2/3 compatibility
try:
import queue
except ImportError:
import Queue as queue
import logging
import serial
import time
import threading
from binascii import hexlify, unhexlify
from uuid import UUID
from enum import Enum
from collections import defaultdict
from pygatt.exceptions import NotConnectedError
from pygatt.backends import BLEBackend, Characteristic, BLEAddressType
from pygatt.util import uuid16_to_uuid
from . import bglib, constants
from .exceptions import BGAPIError, ExpectedResponseTimeout
from .device import BGAPIBLEDevice
from .bglib import EventPacketType, ResponsePacketType
from .packets import BGAPICommandPacketBuilder as CommandBuilder
from .error_codes import get_return_message
from .util import find_usb_serial_devices
try:
import termios
except ImportError:
# Running in Windows (not Linux/OS X/Cygwin)
serial_exception = RuntimeError
else:
serial_exception = termios.error
log = logging.getLogger(__name__)
BLED112_VENDOR_ID = 0x2458
BLED112_PRODUCT_ID = 0x0001
MAX_CONNECTION_ATTEMPTS = 10
UUIDType = Enum('UUIDType', ['custom', 'service', 'attribute',
'descriptor', 'characteristic',
'nonstandard'])
def _timed_out(start_time, timeout):
return time.time() - start_time > timeout
def bgapi_address_to_hex(address):
address = hexlify(bytearray(
list(reversed(address)))).upper().decode('ascii')
return ':'.join(''.join(pair) for pair in zip(*[iter(address)] * 2))
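# Worked example: BGAPI transmits addresses least-significant byte first, so the
# packet bytes [0xAB, 0x89, 0x67, 0x45, 0x23, 0x01] map to the familiar
# colon-separated form.
def _demo_address_to_hex():
    assert bgapi_address_to_hex([0xAB, 0x89, 0x67, 0x45, 0x23, 0x01]) == '01:23:45:67:89:AB'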
class AdvertisingAndScanInfo(object):
"""
Holds the advertising and scan response packet data from a device at a given
address.
"""
def __init__(self):
self.name = ""
self.address = ""
self.rssi = None
self.packet_data = {
# scan_response_packet_type[xxx]: data_dictionary,
}
class BGAPIBackend(BLEBackend):
"""
A BLE backend for a BGAPI compatible USB adapter.
"""
def __init__(self, serial_port=None, receive_queue_timeout=0.1):
"""
Initialize the backend, but don't start the USB connection yet. Must
call .start().
serial_port -- The name of the serial port for the BGAPI-compatible
USB interface. If not provided, will attempt to auto-detect.
"""
self._lib = bglib.BGLib()
self._serial_port = serial_port
self._receive_queue_timeout = receive_queue_timeout
self._ser = None
self._receiver = None
self._running = None
self._lock = threading.Lock()
# buffer for packets received
self._receiver_queue = queue.Queue()
# State
self._num_bonds = 0 # number of bonds stored on the adapter
self._stored_bonds = [] # bond handles stored on the adapter
self._devices_discovered = {
# 'address': AdvertisingAndScanInfo,
# Note: address formatted like "01:23:45:67:89:AB"
}
self._characteristics = defaultdict(dict)
self._connections = {}
self._current_characteristic = None # used in char/descriptor discovery
self._packet_handlers = {
ResponsePacketType.sm_get_bonds: self._ble_rsp_sm_get_bonds,
EventPacketType.attclient_attribute_value: (
self._ble_evt_attclient_attribute_value),
EventPacketType.attclient_find_information_found: (
self._ble_evt_attclient_find_information_found),
EventPacketType.connection_status: self._ble_evt_connection_status,
EventPacketType.connection_disconnected: (
self._ble_evt_connection_disconnected),
EventPacketType.gap_scan_response: self._ble_evt_gap_scan_response,
EventPacketType.sm_bond_status: self._ble_evt_sm_bond_status,
}
log.info("Initialized new BGAPI backend")
def _detect_device_port(self):
log.info("Auto-detecting serial port for BLED112")
detected_devices = find_usb_serial_devices(
vendor_id=BLED112_VENDOR_ID,
product_id=BLED112_PRODUCT_ID)
if len(detected_devices) == 0:
raise BGAPIError("Unable to auto-detect BLED112 serial port")
log.info("Found BLED112 on serial port %s",
detected_devices[0].port_name)
return detected_devices[0].port_name
def _open_serial_port(self,
max_connection_attempts=MAX_CONNECTION_ATTEMPTS):
"""
Open a connection to the named serial port, or auto-detect the first
port matching the BLED device. This will wait until data can actually be
read from the connection, so it will not return until the device is
fully booted.
max_connection_attempts -- Max number of times to retry
detecting and connecting to a device.
Raises a NotConnectedError if the device cannot connect after 10
attempts, with a short pause in between each attempt.
"""
for attempt in range(max_connection_attempts):
log.debug("Opening connection to serial port (attempt %d)",
attempt + 1)
try:
serial_port = self._serial_port or self._detect_device_port()
self._ser = None
self._ser = serial.Serial(serial_port, baudrate=115200,
timeout=0.25)
# Wait until we can actually read from the device
self._ser.read()
break
except (BGAPIError, serial.serialutil.SerialException,
serial_exception):
log.debug("Failed to open serial port", exc_info=True)
if self._ser:
self._ser.close()
elif attempt == 0:
raise NotConnectedError(
"No BGAPI compatible device detected")
self._ser = None
time.sleep(0.25)
else:
raise NotConnectedError("Unable to reconnect with USB "
"device after rebooting")
def start(self):
"""
Connect to the USB adapter, reset its state and start a background
receiver thread.
"""
if self._running and self._running.is_set():
self.stop()
# Fail immediately if no device is attached, don't retry waiting for one
# to be plugged in.
self._open_serial_port(max_connection_attempts=1)
log.info("Resetting and reconnecting to device for a clean environment")
# Blow everything away and start anew.
# Only way to be sure is to burn it down and start again.
# (Aka reset remote state machine)
# Note: Could make this a conditional based on parameter if this
# happens to be too slow on some systems.
# The zero param just means we want to do a normal restart instead of
# starting a firmware update restart.
self.send_command(CommandBuilder.system_reset(0))
self._ser.flush()
self._ser.close()
time.sleep(1)
self._open_serial_port()
self._receiver = threading.Thread(target=self._receive)
self._receiver.daemon = True
self._running = threading.Event()
self._running.set()
self._receiver.start()
self.disable_advertising()
self.set_bondable(False)
# Stop any ongoing procedure
log.debug("Stopping any outstanding GAP procedure")
self.send_command(CommandBuilder.gap_end_procedure())
try:
self.expect(ResponsePacketType.gap_end_procedure)
except BGAPIError:
# Ignore any errors if there was no GAP procedure running
pass
def stop(self):
for device in self._connections.values():
try:
device.disconnect()
except NotConnectedError:
pass
if self._running:
if self._running.is_set():
log.info('Stopping')
self._running.clear()
if self._receiver:
self._receiver.join()
self._receiver = None
if self._ser:
self._ser.close()
self._ser = None
def set_bondable(self, bondable):
self.send_command(
CommandBuilder.sm_set_bondable_mode(
constants.bondable['yes' if bondable else 'no']))
self.expect(ResponsePacketType.sm_set_bondable_mode)
def disable_advertising(self):
log.info("Disabling advertising")
self.send_command(
CommandBuilder.gap_set_mode(
constants.gap_discoverable_mode['non_discoverable'],
constants.gap_connectable_mode['non_connectable']))
self.expect(ResponsePacketType.gap_set_mode)
def send_command(self, *args, **kwargs):
with self._lock:
if self._ser is None:
log.warn("Unexpectedly not connected to USB device")
raise NotConnectedError()
return self._lib.send_command(self._ser, *args, **kwargs)
def clear_bond(self, address=None):
"""
Delete the bonds stored on the adapter.
address - the address of the device to unbond. If not provided, will
erase all bonds.
Note: this does not delete the corresponding bond stored on the remote
device.
"""
# Find bonds
log.info("Fetching existing bonds for devices")
self._stored_bonds = []
self.send_command(CommandBuilder.sm_get_bonds())
try:
self.expect(ResponsePacketType.sm_get_bonds)
except NotConnectedError:
pass
if self._num_bonds == 0:
return
while len(self._stored_bonds) < self._num_bonds:
self.expect(EventPacketType.sm_bond_status)
for b in reversed(self._stored_bonds):
log.info("Deleting bond %s", b)
self.send_command(CommandBuilder.sm_delete_bonding(b))
self.expect(ResponsePacketType.sm_delete_bonding)
def scan(self, timeout=10, scan_interval=75, scan_window=50, active=True,
discover_mode=constants.gap_discover_mode['observation'],
**kwargs):
"""
Perform a scan to discover BLE devices.
timeout -- the number of seconds this scan should last.
scan_interval -- the number of milliseconds until scanning is restarted.
scan_window -- the number of milliseconds the scanner will listen on one
frequency for advertisement packets.
active -- True --> ask sender for scan response data. False --> don't.
discover_mode -- one of the gap_discover_mode constants.
"""
parameters = 1 if active else 0
# NOTE: the documentation seems to say that the times are in units of
# 625us but the ranges it gives correspond to units of 1ms....
self.send_command(
CommandBuilder.gap_set_scan_parameters(
scan_interval, scan_window, parameters
))
self.expect(ResponsePacketType.gap_set_scan_parameters)
log.info("Starting an %s scan", "active" if active else "passive")
self.send_command(CommandBuilder.gap_discover(discover_mode))
self.expect(ResponsePacketType.gap_discover)
log.info("Pausing for %ds to allow scan to complete", timeout)
time.sleep(timeout)
log.info("Stopping scan")
self.send_command(CommandBuilder.gap_end_procedure())
self.expect(ResponsePacketType.gap_end_procedure)
devices = []
for address, info in self._devices_discovered.items():
devices.append({
'address': address,
'name': info.name,
'rssi': info.rssi,
'packet_data': info.packet_data
})
log.info("Discovered %d devices: %s", len(devices), devices)
self._devices_discovered = {}
return devices
def _end_procedure(self):
self.send_command(CommandBuilder.gap_end_procedure())
self.expect(ResponsePacketType.gap_end_procedure)
def connect(self, address, timeout=5,
address_type=BLEAddressType.public,
interval_min=60, interval_max=76, supervision_timeout=100,
latency=0):
"""
Connect directly to a device given the BLE address, then discover and
store the characteristic and characteristic descriptor handles.
Requires that the adapter is not connected to a device already.
address -- a bytearray containing the device mac address.
timeout -- number of seconds to wait before returning if not connected.
address_type -- one of BLEAddressType's values, either public or random.
Raises BGAPIError or NotConnectedError on failure.
"""
address_bytes = bytearray(unhexlify(address.replace(":", "")))
for device in self._connections.values():
if device._address == bgapi_address_to_hex(address_bytes):
return device
log.info("Connecting to device at address %s (timeout %ds)",
address, timeout)
self.set_bondable(False)
if address_type == BLEAddressType.public:
addr_type = constants.ble_address_type['gap_address_type_public']
else:
addr_type = constants.ble_address_type['gap_address_type_random']
self.send_command(
CommandBuilder.gap_connect_direct(
address_bytes, addr_type, interval_min, interval_max,
supervision_timeout, latency))
try:
self.expect(ResponsePacketType.gap_connect_direct)
_, packet = self.expect(EventPacketType.connection_status,
timeout=timeout)
# TODO what do we do if the status isn't 'connected'? Retry?
# Raise an exception? We should also check that the address matches the
# expected one. TODO: when reconnecting to the same MAC we sometimes get
# a connection status of "disconnected", but it is picked up here as
# "connected" and then we don't receive anything else.
if self._connection_status_flag(
packet['flags'],
constants.connection_status_flag['connected']):
device = BGAPIBLEDevice(
bgapi_address_to_hex(packet['address']),
packet['connection_handle'],
self)
if self._connection_status_flag(
packet['flags'],
constants.connection_status_flag['encrypted']):
device.encrypted = True
self._connections[packet['connection_handle']] = device
log.info("Connected to %s", address)
return device
except ExpectedResponseTimeout:
# If the connection doesn't occur because the device isn't there
# then you should manually stop the command.
#
# If we never get the connection status it is likely that it
# didn't occur because the device isn't there. If that is true
# then we have to manually stop the command.
self._end_procedure()
exc = NotConnectedError()
exc.__cause__ = None
raise exc
def discover_characteristics(self, connection_handle):
att_handle_start = 0x0001 # first valid handle
att_handle_end = 0xFFFF # last valid handle
log.info("Fetching characteristics for connection %d",
connection_handle)
self.send_command(
CommandBuilder.attclient_find_information(
connection_handle, att_handle_start, att_handle_end))
self.expect(ResponsePacketType.attclient_find_information)
self.expect(EventPacketType.attclient_procedure_completed,
timeout=10)
for char_uuid_str, char_obj in (
self._characteristics[connection_handle].items()):
log.info("Characteristic 0x%s is handle 0x%x",
char_uuid_str, char_obj.handle)
for desc_uuid_str, desc_handle in (
char_obj.descriptors.items()):
log.info("Characteristic descriptor 0x%s is handle 0x%x",
desc_uuid_str, desc_handle)
return self._characteristics[connection_handle]
@staticmethod
def _connection_status_flag(flags, flag_to_find):
"""
Is the given flag in the connection status flags?
flags -- the 'flags' parameter returned by ble_evt_connection_status.
flag_to_find -- the flag to look for in flags.
Returns true if flag_to_find is in flags. Returns false otherwise.
"""
return (flags & flag_to_find) == flag_to_find
@staticmethod
def _get_uuid_type(uuid):
"""
Classify the UUID as a custom 128-bit UUID or one of the known 16-bit GATT
UUID types (service, attribute type, descriptor, characteristic).
uuid -- the UUID as a bytearray.
Return a UUIDType.
"""
if len(uuid) == 16: # 128-bit --> 16 byte
return UUIDType.custom
if uuid in constants.gatt_service_uuid.values():
return UUIDType.service
if uuid in constants.gatt_attribute_type_uuid.values():
return UUIDType.attribute
if uuid in constants.gatt_characteristic_descriptor_uuid.values():
return UUIDType.descriptor
if uuid in constants.gatt_characteristic_type_uuid.values():
return UUIDType.characteristic
log.warn("Unrecognized 4 byte UUID %s", hexlify(uuid))
return UUIDType.nonstandard
def _scan_rsp_data(self, data):
"""
Parse scan response data.
Note: the data will come in a format like the following:
[data_length, data_type, data..., data_length, data_type, data...]
data -- the args['data'] list from _ble_evt_gap_scan_response.
Returns a name and a dictionary containing the parsed data in pairs of
'field_name': value.
"""
# Result stored here
data_dict = {
# 'name': value,
}
bytes_left_in_field = 0
field_name = None
field_value = []
# Iterate over data bytes to put in field
dev_name = ""
for b in data:
if bytes_left_in_field == 0:
# New field
bytes_left_in_field = b
field_value = []
else:
field_value.append(b)
bytes_left_in_field -= 1
if bytes_left_in_field == 0:
# End of field
field_name = (
constants.scan_response_data_type[field_value[0]])
field_value = field_value[1:]
# Field type specific formats
if (field_name == 'complete_local_name' or
field_name == 'shortened_local_name'):
dev_name = bytearray(field_value).decode("utf-8")
data_dict[field_name] = dev_name
elif (field_name ==
'complete_list_128-bit_service_class_uuids'):
if len(field_value) % 16 == 0: # 16 bytes
data_dict[field_name] = []
for i in range(0, int(len(field_value) / 16)):
service_uuid = (
"0x%s" %
bgapi_address_to_hex(
field_value[i * 16:i * 16 + 16]))
data_dict[field_name].append(service_uuid)
else:
log.warning("Expected a service class UUID of 16\
bytes. Instead received %d bytes",
len(field_value))
else:
data_dict[field_name] = bytearray(field_value)
return dev_name, data_dict
def expect(self, expected, *args, **kargs):
return self.expect_any([expected], *args, **kargs)
def expect_any(self, expected_packet_choices, timeout=None,
assert_return_success=True):
"""
Process packets until a packet of one of the expected types is found.
expected_packet_choices -- a list of BGLib.PacketType.xxxxx. Upon
processing a packet of a type contained in
the list, this function will return.
timeout -- maximum time in seconds to process packets.
assert_return_success -- raise an exception if the return code from a
matched message is non-zero.
Raises an ExpectedResponseTimeout if one of the expected responses is
not received within the time limit.
"""
timeout = timeout or 1
log.debug("Expecting a response of one of %s within %fs",
expected_packet_choices, timeout or 0)
start_time = None
if timeout is not None:
start_time = time.time()
while True:
packet = None
try:
packet = self._receiver_queue.get(
timeout=self._receive_queue_timeout)
except queue.Empty:
if timeout is not None:
if _timed_out(start_time, timeout):
exc = ExpectedResponseTimeout(
expected_packet_choices, timeout)
exc.__cause__ = None
raise exc
continue
if packet is None:
raise ExpectedResponseTimeout(expected_packet_choices, timeout)
packet_type, response = self._lib.decode_packet(packet)
return_code = response.get('result', 0)
log.debug("Received a %s packet: %s",
packet_type, get_return_message(return_code))
if packet_type in self._packet_handlers:
self._packet_handlers[packet_type](response)
if packet_type in expected_packet_choices:
return packet_type, response
def _receive(self):
"""
Read bytes from serial, parse them into packets, and enqueue each complete
packet. Stops when the self._running event is cleared.
"""
log.info("Running receiver")
while self._running.is_set():
packet = self._lib.parse_byte(self._ser.read())
if packet is not None:
packet_type, args = self._lib.decode_packet(packet)
if packet_type == EventPacketType.attclient_attribute_value:
device = self._connections[args['connection_handle']]
device.receive_notification(args['atthandle'],
bytearray(args['value']))
self._receiver_queue.put(packet)
log.info("Stopping receiver")
def _ble_evt_attclient_attribute_value(self, args):
"""
Handles the event for values of characteristics.
args -- dictionary containing the attribute handle ('atthandle'),
attribute type ('type'), and attribute value ('value')
"""
log.debug("attribute handle = %x", args['atthandle'])
log.debug("attribute type = %x", args['type'])
log.debug("attribute value = 0x%s", hexlify(bytearray(args['value'])))
def _ble_evt_attclient_find_information_found(self, args):
"""
Handles the event for characteristic discovery.
Adds the characteristic to the dictionary of characteristics or adds
the descriptor to the dictionary of descriptors in the current
characteristic. These events will occur in an order similar to the
following:
1) primary service uuid
2) 0 or more descriptors
3) characteristic uuid
4) 0 or more descriptors
5) repeat steps 3-4
args -- dictionary containing the characteristic handle ('chrhandle'),
and characteristic UUID ('uuid')
"""
raw_uuid = bytearray(reversed(args['uuid']))
# Convert 4-byte UUID shorthand to a full, 16-byte UUID
uuid_type = self._get_uuid_type(raw_uuid)
if uuid_type != UUIDType.custom:
uuid = uuid16_to_uuid(int(
bgapi_address_to_hex(args['uuid']).replace(':', ''), 16))
else:
uuid = UUID(bytes=bytes(raw_uuid))
# TODO is there a way to get the characteristic from the packet instead
# of having to track the "current" characteristic?
if (uuid_type == UUIDType.descriptor and
self._current_characteristic is not None):
self._current_characteristic.add_descriptor(uuid, args['chrhandle'])
elif (uuid_type == UUIDType.custom or
uuid_type == UUIDType.nonstandard or
uuid_type == UUIDType.characteristic):
if uuid_type == UUIDType.custom:
log.info("Found custom characteristic %s" % uuid)
elif uuid_type == UUIDType.characteristic:
log.info("Found approved characteristic %s" % uuid)
elif uuid_type == UUIDType.nonstandard:
log.info("Found nonstandard 4-byte characteristic %s" % uuid)
new_char = Characteristic(uuid, args['chrhandle'])
self._current_characteristic = new_char
self._characteristics[
args['connection_handle']][uuid] = new_char
def _ble_evt_connection_disconnected(self, args):
"""
Handles the event for the termination of a connection.
"""
self._connections.pop(args['connection_handle'], None)
def _ble_evt_connection_status(self, args):
"""
Handles the event for reporting connection status.
args -- dictionary containing the connection status flags ('flags'),
device address ('address'), device address type ('address_type'),
connection interval ('conn_interval'), connection timeout
('timeout'), device latency ('latency'), device bond handle
('bonding')
"""
connection_handle = args['connection_handle']
if not self._connection_status_flag(
args['flags'],
constants.connection_status_flag['connected']):
# Disconnected
self._connections.pop(connection_handle, None)
log.info("Connection status: handle=0x%x, flags=%s, address=0x%s, "
"connection interval=%fms, timeout=%d, "
"latency=%d intervals, bonding=0x%x",
connection_handle,
args['address'],
hexlify(bytearray(args['address'])),
args['conn_interval'] * 1.25,
args['timeout'] * 10,
args['latency'],
args['bonding'])
def _ble_evt_gap_scan_response(self, args):
"""
Handles the event for reporting the contents of an advertising or scan
response packet.
This event will occur during device discovery but not direct connection.
args -- dictionary containing the RSSI value ('rssi'), packet type
('packet_type'), address of packet sender ('sender'), address
type ('address_type'), existing bond handle ('bond'), and
scan response data list ('data')
"""
# Parse packet
packet_type = constants.scan_response_packet_type[args['packet_type']]
address = bgapi_address_to_hex(args['sender'])
name, data_dict = self._scan_rsp_data(args['data'])
# Store device information
if address not in self._devices_discovered:
self._devices_discovered[address] = AdvertisingAndScanInfo()
dev = self._devices_discovered[address]
if dev.name == "":
dev.name = name
if dev.address == "":
dev.address = address
if (packet_type not in dev.packet_data or
len(dev.packet_data[packet_type]) < len(data_dict)):
dev.packet_data[packet_type] = data_dict
dev.rssi = args['rssi']
log.debug("Received a scan response from %s with rssi=%d dBM "
"and data=%s", address, args['rssi'], data_dict)
def _ble_evt_sm_bond_status(self, args):
"""
Handles the event for reporting a stored bond.
Adds the stored bond to the list of bond handles.
args -- dictionary containing the bond handle ('bond'), encryption key
size used in the long-term key ('keysize'), whether man-in-the-middle
protection was used ('mitm'), and the keys stored for bonding ('keys')
"""
# Add to list of stored bonds found or set flag
self._stored_bonds.append(args['bond'])
def _ble_rsp_sm_delete_bonding(self, args):
"""
Handles the response for the deletion of a stored bond.
args -- dictionary containing the return code ('result')
"""
result = args['result']
if result == 0:
self._stored_bonds.pop()
return result
def _ble_rsp_sm_get_bonds(self, args):
"""
Handles the response for the start of stored bond enumeration. Sets
self._num_bonds to the number of stored bonds.
args -- dictionary containing the number of stored bonds ('bonds'),
"""
self._num_bonds = args['bonds']
log.debug("num bonds = %d", args['bonds'])
|
run_squad_ColabTCPTrans_201910161146.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Run BERT on SQuAD 1.1 and SQuAD 2.0."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import json
import math
import os, types
import random
import modeling
import optimization
import tokenization
import six
import copy
import tensorflow as tf
import numpy as np
import scipy.sparse as sp
# do excel
from openpyxl import Workbook
import uuid
# do
import code
import prettytable
from decimal import *
import decimal
getcontext().prec = 50
#Willy Define
example_in_set_eval_examples = 0
example_in_write_predictions = 0
predict_result_index = 0
checkState_in_AtenResult = 0
checkState_in_AtenResult2 = 0
checkState_in_GetAnswer = 0
checkState_add_retriever = 0
willy_check_code = "willy test on 201910161146"
from drqa import retriever
DOC2IDX = None
documents = []
#db_class = retriever.get_class('sqlite')
flags = tf.flags
FLAGS = flags.FLAGS
## Required parameters
flags.DEFINE_string(
"bert_config_file", None,
"The config json file corresponding to the pre-trained BERT model. "
"This specifies the model architecture.")
flags.DEFINE_string("vocab_file", None,
"The vocabulary file that the BERT model was trained on.")
flags.DEFINE_string(
"output_dir", None,
"The output directory where the model checkpoints will be written.")
## Other parameters
flags.DEFINE_string("train_file", None,
"SQuAD json for training. E.g., train-v1.1.json")
flags.DEFINE_string(
"predict_file", None,
"SQuAD json for predictions. E.g., dev-v1.1.json or test-v1.1.json")
flags.DEFINE_string(
"init_checkpoint", None,
"Initial checkpoint (usually from a pre-trained BERT model).")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_integer(
"max_seq_length", 384,
"The maximum total input sequence length after WordPido_interactiveece tokenization. "
"Sequences longer than this will be truncated, and sequences shorter "
"than this will be padded.")
flags.DEFINE_integer(
"doc_stride", 128,
"When splitting up a long document into chunks, how much stride to "
"take between chunks.")
flags.DEFINE_integer(
"max_query_length", 64,
"The maximum number of tokens for the question. Questions longer than "
"this will be truncated to this length.")
flags.DEFINE_bool("do_train", False, "Whether to run training.")
flags.DEFINE_bool("do_predict", False, "Whether to run eval on the dev set.")
flags.DEFINE_integer("train_batch_size", 32, "Total batch size for training.")
flags.DEFINE_integer("predict_batch_size", 8,
"Total batch size for predictions.")
flags.DEFINE_float("learning_rate", 5e-5, "The initial learning rate for Adam.")
flags.DEFINE_float("num_train_epochs", 3.0,
"Total number of training epochs to perform.")
flags.DEFINE_float(
"warmup_proportion", 0.1,
"Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10% of training.")
flags.DEFINE_integer("save_checkpoints_steps", 1000,
"How often to save the model checkpoint.")
flags.DEFINE_integer("iterations_per_loop", 1000,
"How many steps to make in each estimator call.")
flags.DEFINE_integer(
"n_best_size", 20,
"The total number of n-best predictions to generate in the "
"nbest_predictions.json output file.")
flags.DEFINE_integer(
"max_answer_length", 30,
"The maximum length of an answer that can be generated. This is needed "
"because the start and end predictions are not conditioned on one another.")
flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.")
tf.flags.DEFINE_string(
"tpu_name", None,
"The Cloud TPU to use for training. This should be either the name "
"used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 "
"url.")
tf.flags.DEFINE_string(
"tpu_zone", None,
"[Optional] GCE zone where the Cloud TPU is located in. If not "
"specified, we will attempt to automatically detect the GCE project from "
"metadata.")
tf.flags.DEFINE_string(
"gcp_project", None,
"[Optional] Project name for the Cloud TPU-enabled project. If not "
"specified, we will attempt to automatically detect the GCE project from "
"metadata.")
tf.flags.DEFINE_string("master", None, "[Optional] TensorFlow master URL.")
flags.DEFINE_integer(
"num_tpu_cores", 8,
"Only used if `use_tpu` is True. Total number of TPU cores to use.")
flags.DEFINE_bool(
"verbose_logging", False,
"If true, all of the warnings related to data processing will be printed. "
"A number of warnings are expected for a normal SQuAD evaluation.")
flags.DEFINE_bool(
"version_2_with_negative", False,
"If true, the SQuAD examples contain some that do not have an answer.")
flags.DEFINE_float(
"null_score_diff_threshold", 0.0,
"If null_score - best_non_null is greater than the threshold predict null.")
flags.DEFINE_bool(
"do_retriever", False,
"If True, use retriever to help reader to filte good doc - add by willy.")
flags.DEFINE_string(
"retriever_model", None,
"retriever model path - add by willy.")
flags.DEFINE_float(
"retriever_weight", 0.0,
"retriever weight - add by willy.")
flags.DEFINE_integer("retriever_ranker", 1,"Rank with retriever.")
flags.DEFINE_string("document_type","SQuAD", "There are three document types: (1)paragraphs in SQuAD (2)SQlite (DataBase) (3) Text - add by willy." )
flags.DEFINE_string("question_type","SQuAD", "There are three question types: (1) SQuAD (2)one_question (3) interactive." )
flags.DEFINE_string("question", None, "give question to predict - Willy Test.")
flags.DEFINE_string("db_file", None, "give path with data base file to set SQlite State - Willy Test.")
flags.DEFINE_string("question_table", None, "set table path - Willy Test.")
flags.DEFINE_string("excel_name", None ,"set excel name -Willy Test.")
flags.DEFINE_integer("show_all_choice", 0, "show all choice-Willy Test.")
flags.DEFINE_float(
"choice_score", 0.15,
"choice score. - add by willy.")
flags.DEFINE_float(
"threshold_prob_ans_merge", 0.5,
"threshold prob ans_merge - add by willy.")
flags.DEFINE_string("Host_TCPServer", '127.0.0.1' ,"Set TCP Host-Willy Test.")
flags.DEFINE_integer("PORT_TCPServer", 1234, "Set TCP Port-Willy Test.")
ranker = None
class DecimalEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, decimal.Decimal):
return float(obj)
return super(DecimalEncoder, self).default(obj)
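# Example: DecimalEncoder lets json.dumps serialize decimal.Decimal values
# (produced with the high-precision context above) as plain floats, e.g.
# json.dumps({"score": decimal.Decimal("0.75")}, cls=DecimalEncoder) == '{"score": 0.75}'
def _demo_decimal_encoder():
    return json.dumps({"score": decimal.Decimal("0.75")}, cls=DecimalEncoder)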
class SquadExample(object):
"""A single training/test example for simple sequence classification.
For examples without an answer, the start and end position are -1.
"""
def __init__(self,
qas_id,
question_text,
doc_id, #willy add
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None,
is_impossible=False):
self.qas_id = qas_id
self.question_text = question_text
self.doc_id = doc_id #willy add
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
def __str__(self):
return self.__repr__()
def __repr__(self):
s = ""
s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
s += ", question_text: %s" % (
tokenization.printable_text(self.question_text))
s += ", doc_id:[%s]" % (tokenization.printable_text(self.doc_id)) #willy add
s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
if self.start_position:
s += ", start_position: %d" % (self.start_position)
if self.start_position:
s += ", end_position: %d" % (self.end_position)
if self.start_position:
s += ", is_impossible: %r" % (self.is_impossible)
return s
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
unique_id,
example_index,
doc_span_index,
tokens,
token_to_orig_map,
token_is_max_context,
input_ids,
input_mask,
segment_ids,
start_position=None,
end_position=None,
is_impossible=None):
self.unique_id = unique_id
self.example_index = example_index
self.doc_span_index = doc_span_index
self.tokens = tokens
self.token_to_orig_map = token_to_orig_map
self.token_is_max_context = token_is_max_context
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
def TakeThird(val):
return val[2]
def set_squad_examples(input_file,question):
"""Read a SQuAD json file into a list of SquadExample."""
with tf.gfile.Open(input_file, "r") as reader:
input_data = json.load(reader)["data"]
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
examples = []
file = open("Output1.txt", "r")
document = file.read()
file.close()
paragraphs = document.split('\n')
paragraphs = list(filter(None, paragraphs))
#-----------------------------------------------
doc_tokensList = []
for i , paragraph_text in enumerate(paragraphs):
# paragraph_text = paragraph["context"]
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
doc_tokensList.append(doc_tokens)
#-----------------------------------------------
start_position = -1
end_position = -1
orig_answer_text = ""
is_impossible = False
for i, doc_tokens in enumerate(doc_tokensList):
example = SquadExample(
qas_id=str(uuid.uuid1()),
question_text=question,
doc_id=DOC2IDX[i],
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position,
is_impossible=is_impossible)
examples.append(example)
'''
for entry in input_data:
for paragraph in entry["paragraphs"]:
for qa in paragraph["qas"]:
#qas_id = qa["id"]
# uuid reset by willy in 20190313
qas_id = str(uuid.uuid1())
question_text = qa["question"]
start_position = -1
end_position = -1
orig_answer_text = ""
is_impossible = False
for doc_tokens in doc_tokensList:
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position,
is_impossible=is_impossible)
print(example)
examples.append(example)
'''
#-----------------------------------------------
return examples
def read_squad_examples(input_file, is_training):
"""Read a SQuAD json file into a list of SquadExample."""
with tf.gfile.Open(input_file, "r") as reader:
input_data = json.load(reader)["data"]
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
examples = []
for entry in input_data:
for paragraph in entry["paragraphs"]:
paragraph_text = paragraph["context"]
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
for qa in paragraph["qas"]:
qas_id = qa["id"]
question_text = qa["question"]
start_position = None
end_position = None
orig_answer_text = None
is_impossible = False
if is_training:
if FLAGS.version_2_with_negative:
is_impossible = qa["is_impossible"]
if (len(qa["answers"]) != 1) and (not is_impossible):
raise ValueError(
"For training, each question should have exactly 1 answer.")
if not is_impossible:
answer = qa["answers"][0]
orig_answer_text = answer["text"]
answer_offset = answer["answer_start"]
answer_length = len(orig_answer_text)
start_position = char_to_word_offset[answer_offset]
end_position = char_to_word_offset[answer_offset + answer_length -
1]
# Only add answers where the text can be exactly recovered from the
# document. If this CAN'T happen it's likely due to weird Unicode
# stuff so we will just skip the example.
#
# Note that this means for training mode, every example is NOT
# guaranteed to be preserved.
actual_text = " ".join(
doc_tokens[start_position:(end_position + 1)])
cleaned_answer_text = " ".join(
tokenization.whitespace_tokenize(orig_answer_text))
if actual_text.find(cleaned_answer_text) == -1:
tf.logging.warning("Could not find answer: '%s' vs. '%s'",
actual_text, cleaned_answer_text)
continue
else:
start_position = -1
end_position = -1
orig_answer_text = ""
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position,
is_impossible=is_impossible)
examples.append(example)
return examples
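# Standalone illustration of the whitespace tokenization used above: doc_tokens
# collects whitespace-delimited words and char_to_word_offset maps every
# character position back to its word index, which is how answer character
# offsets are converted into start/end word positions.
def _demo_char_to_word_offset(paragraph_text="The cat sat"):
    doc_tokens, char_to_word_offset = [], []
    prev_is_whitespace = True
    for c in paragraph_text:
        if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
            prev_is_whitespace = True
        else:
            if prev_is_whitespace:
                doc_tokens.append(c)
            else:
                doc_tokens[-1] += c
            prev_is_whitespace = False
        char_to_word_offset.append(len(doc_tokens) - 1)
    # doc_tokens == ['The', 'cat', 'sat']
    # char_to_word_offset == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
    return doc_tokens, char_to_word_offset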
def convert_examples_to_features(examples, tokenizer, max_seq_length,
doc_stride, max_query_length, is_training,
output_fn):
"""Loads a data file into a list of `InputBatch`s."""
unique_id = 1000000000
for (example_index, example) in enumerate(examples):
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[0:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if is_training and example.is_impossible:
tok_start_position = -1
tok_end_position = -1
if is_training and not example.is_impossible:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position, tok_end_position) = _improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
example.orig_answer_text)
# The -3 accounts for [CLS], [SEP] and [SEP]
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
# We can have documents that are longer than the maximum sequence length.
# To deal with this we do a sliding window approach, where we take chunks
# of the up to our max length with a stride of `doc_stride`.
_DocSpan = collections.namedtuple( # pylint: disable=invalid-name
"DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in query_tokens:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
is_max_context = _check_is_max_context(doc_spans, doc_span_index,
split_token_index)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
start_position = None
end_position = None
if is_training and not example.is_impossible:
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
out_of_span = False
if not (tok_start_position >= doc_start and
tok_end_position <= doc_end):
out_of_span = True
if out_of_span:
start_position = 0
end_position = 0
else:
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
if is_training and example.is_impossible:
start_position = 0
end_position = 0
'''
if example_index < 10:
tf.compat.v1.logging.info("*** Example ***")
tf.compat.v1.logging.info("unique_id: %s" % (unique_id))
tf.compat.v1.logging.info("example_index: %s" % (example_index))
tf.compat.v1.logging.info("doc_span_index: %s" % (doc_span_index))
tf.compat.v1.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.compat.v1.logging.info("token_to_orig_map: %s" % " ".join(
["%d:%d" % (x, y) for (x, y) in six.iteritems(token_to_orig_map)]))
tf.compat.v1.logging.info("token_is_max_context: %s" % " ".join([
"%d:%s" % (x, y) for (x, y) in six.iteritems(token_is_max_context)
]))
tf.compat.v1.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.compat.v1.logging.info(
"input_mask: %s" % " ".join([str(x) for x in input_mask]))
tf.compat.v1.logging.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
if is_training and example.is_impossible:
tf.compat.v1.logging.info("impossible example")
if is_training and not example.is_impossible:
answer_text = " ".join(tokens[start_position:(end_position + 1)])
tf.compat.v1.logging.info("start_position: %d" % (start_position))
tf.compat.v1.logging.info("end_position: %d" % (end_position))
tf.compat.v1.logging.info(
"answer: %s" % (tokenization.printable_text(answer_text)))
'''
feature = InputFeatures(
unique_id=unique_id,
example_index=example_index,
doc_span_index=doc_span_index,
tokens=tokens,
token_to_orig_map=token_to_orig_map,
token_is_max_context=token_is_max_context,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
start_position=start_position,
end_position=end_position,
is_impossible=example.is_impossible)
# Run callback
output_fn(feature)
unique_id += 1
def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer,
orig_answer_text):
"""Returns tokenized answer spans that better match the annotated answer."""
# The SQuAD annotations are character based. We first project them to
# whitespace-tokenized words. But then after WordPiece tokenization, we can
# often find a "better match". For example:
#
# Question: What year was John Smith born?
# Context: The leader was John Smith (1895-1943).
# Answer: 1895
#
# The original whitespace-tokenized answer will be "(1895-1943).". However
# after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
# the exact answer, 1895.
#
# However, this is not always possible. Consider the following:
#
# Question: What country is the top exporter of electronics?
# Context: The Japanese electronics industry is the largest in the world.
# Answer: Japan
#
# In this case, the annotator chose "Japan" as a character sub-span of
# the word "Japanese". Since our WordPiece tokenizer does not split
# "Japanese", we just use "Japanese" as the annotation. This is fairly rare
# in SQuAD, but does happen.
tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
for new_start in range(input_start, input_end + 1):
for new_end in range(input_end, new_start - 1, -1):
text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
if text_span == tok_answer_text:
return (new_start, new_end)
return (input_start, input_end)
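# Illustration of the span improvement above with a stand-in whitespace
# tokenizer (the real code uses a WordPiece tokenizer); it shows how an
# annotation covering "john smith" is narrowed to the sub-span that exactly
# matches the tokenized answer "Smith".
def _demo_improve_answer_span():
    class _WhitespaceTokenizer(object):
        def tokenize(self, text):
            return text.lower().split()
    doc_tokens = ["the", "leader", "was", "john", "smith"]
    start, end = _improve_answer_span(doc_tokens, 3, 4, _WhitespaceTokenizer(), "Smith")
    # start == end == 4
    return start, end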
def _check_is_max_context(doc_spans, cur_span_index, position):
"""Check if this is the 'max context' doc span for the token."""
# Because of the sliding window approach taken to scoring documents, a single
# token can appear in multiple documents. E.g.
# Doc: the man went to the store and bought a gallon of milk
# Span A: the man went to the
# Span B: to the store and bought
# Span C: and bought a gallon of
# ...
#
# Now the word 'bought' will have two scores from spans B and C. We only
# want to consider the score with "maximum context", which we define as
# the *minimum* of its left and right context (the *sum* of left and
# right context will always be the same, of course).
#
# In the example the maximum context for 'bought' would be span C since
# it has 1 left context and 3 right context, while span B has 4 left context
# and 0 right context.
best_score = None
best_span_index = None
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
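# Numeric check of the 'bought' example from the comment above: span C
# (index 2) wins for token position 7 because its minimum of left/right
# context is the largest.
def _demo_check_is_max_context():
    _DocSpan = collections.namedtuple("DocSpan", ["start", "length"])
    doc_spans = [_DocSpan(0, 5), _DocSpan(3, 5), _DocSpan(6, 5)]  # spans A, B, C
    assert _check_is_max_context(doc_spans, 2, position=7)
    assert not _check_is_max_context(doc_spans, 1, position=7)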
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
use_one_hot_embeddings):
"""Creates a classification model."""
model = modeling.BertModel(
config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
final_hidden = model.get_sequence_output()
final_hidden_shape = modeling.get_shape_list(final_hidden, expected_rank=3)
batch_size = final_hidden_shape[0]
seq_length = final_hidden_shape[1]
hidden_size = final_hidden_shape[2]
output_weights = tf.get_variable(
"cls/squad/output_weights", [2, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"cls/squad/output_bias", [2], initializer=tf.zeros_initializer())
final_hidden_matrix = tf.reshape(final_hidden,
[batch_size * seq_length, hidden_size])
logits = tf.matmul(final_hidden_matrix, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
logits = tf.reshape(logits, [batch_size, seq_length, 2])
logits = tf.transpose(logits, [2, 0, 1])
unstacked_logits = tf.unstack(logits, axis=0)
(start_logits, end_logits) = (unstacked_logits[0], unstacked_logits[1])
return (start_logits, end_logits)
def model_fn_builder(bert_config, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
(start_logits, end_logits) = create_model(
bert_config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
tvars = tf.trainable_variables()
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
(assignment_map, initialized_variable_names
) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
seq_length = modeling.get_shape_list(input_ids)[1]
def compute_loss(logits, positions):
one_hot_positions = tf.one_hot(
positions, depth=seq_length, dtype=tf.float32)
log_probs = tf.nn.log_softmax(logits, axis=-1)
loss = -tf.reduce_mean(
tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
return loss
start_positions = features["start_positions"]
end_positions = features["end_positions"]
start_loss = compute_loss(start_logits, start_positions)
end_loss = compute_loss(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2.0
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op,
scaffold_fn=scaffold_fn)
elif mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
"unique_ids": unique_ids,
"start_logits": start_logits,
"end_logits": end_logits,
}
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
else:
raise ValueError(
"Only TRAIN and PREDICT modes are supported: %s" % (mode))
return output_spec
return model_fn
def input_fn_builder(input_file, seq_length, is_training, drop_remainder):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
name_to_features = {
"unique_ids": tf.io.FixedLenFeature([], tf.int64),
"input_ids": tf.io.FixedLenFeature([seq_length], tf.int64),
"input_mask": tf.io.FixedLenFeature([seq_length], tf.int64),
"segment_ids": tf.io.FixedLenFeature([seq_length], tf.int64),
}
if is_training:
name_to_features["start_positions"] = tf.io.FixedLenFeature([], tf.int64)
name_to_features["end_positions"] = tf.io.FixedLenFeature([], tf.int64)
def _decode_record(record, name_to_features):
"""Decodes a record to a TensorFlow example."""
example = tf.parse_single_example(record, name_to_features)
# tf.Example only supports tf.int64, but the TPU only supports tf.int32.
# So cast all int64 to int32.
for name in list(example.keys()):
t = example[name]
if t.dtype == tf.int64:
t = tf.to_int32(t)
example[name] = t
return example
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
# For training, we want a lot of parallel reading and shuffling.
# For eval, we want no shuffling and parallel reading doesn't matter.
d = tf.data.TFRecordDataset(input_file)
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.apply(
tf.data.experimental.map_and_batch(
lambda record: _decode_record(record, name_to_features),
batch_size=batch_size,
drop_remainder=drop_remainder))
return d
return input_fn
RawResult = collections.namedtuple("RawResult",
["unique_id", "start_logits", "end_logits"])
def write_predictions(all_examples, all_features, all_results, n_best_size,
max_answer_length, do_lower_case, client
):
"""Write final predictions to the json file and log-odds of null if needed."""
global ranker
'''
tf.compat.v1.logging.info("Writing predictions to: %s" % (output_prediction_file))
tf.compat.v1.logging.info("Writing nbest to: %s" % (output_nbest_file))
tf.compat.v1.logging.info("Writing Aten predic to: %s" % (output_Aten_predict_file))
'''
example_index_to_features = collections.defaultdict(list)
for feature in all_features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
tf.compat.v1.logging.info("length of all_results: %d" % (len(all_results)))
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction",
["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
# Willy: added collections for holding results
#-------------------------------------------------------------------------------
_AllPredictions = collections.namedtuple( # pylint: disable=invalid-name
"AllPredictions",
["question", "PredictListOneQues"])
_AllPredictResultsInOneQuestion = collections.namedtuple( # pylint: disable=invalid-name
"AllPredictResultsInOneQuestion",
["doc_text", "doc_id", "doc_score", "PredictListOneDoc"])
_AllPredictResultsInOneDocument = collections.namedtuple( # pylint: disable=invalid-name
"AllPredictResultsInOneDocument",
["answer", "prob", "start", "end"])
_FinalResult = collections.namedtuple( # pylint: disable=invalid-name
"FinalResult",
["question", "text", "text_id", "ans", "prob"])
_FinalResult2 = collections.namedtuple( # pylint: disable=invalid-name
"FinalResult2",
["question", "text", "ans", "prob"])
_FinalResult3 = collections.namedtuple( # pylint: disable=invalid-name
"FinalResult3",
["question", "text", "ans", "ans_prob", "TFIDF", "Score", "choice"])
_FinalResultAll = collections.namedtuple( # pylint: disable=invalid-name
"FinalResultAll",
["question", "text1", "ans1", "ans_prob1", "TFIDF1", "Score1", "text2", "ans2", "ans_prob2", "TFIDF2", "Score2", "choice"])
_TempAllpredict_Layer1 = collections.namedtuple( # pylint: disable=invalid-name
"TempAllpredict_Layer1",
["question" , "TempAllpredictList_Layer2"])
_TempAllpredict_Layer2 = collections.namedtuple( # pylint: disable=invalid-name
"TempAllpredict_Layer2",
["doc_id","doc_text","best_ans","best_prob"])
#-------------------------------------------------------------------------------
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
scores_diff_json = collections.OrderedDict()
all_predicts = []
all_predictsInOneQues = []
quesList = []
Aten_result_list = []
Aten_result3_list = []
TempAllpredictLayer1_list = []
TempAllpredictLayer2_list = []
best_answer=""
best_prob=0.0
ans_is_null = True
#ranker = retriever.get_class('tfidf')(tfidf_path=FLAGS.retriever_model)
for (example_index, example) in enumerate(all_examples):
features = example_index_to_features[example_index]
if example_in_write_predictions == 1:
print ("example idx:%d" %example_index)
print("question in example from predict")
print(example.question_text)
print("doc_tokens in example from predict")
print(example.doc_tokens)
print('-'*60)
print('\n')
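# Retrieve the 10 closest documents for this question; closest_docs returns parallel
# lists of document names and TF-IDF scores, used below to attach a doc_score to each example.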
doc_names, doc_scores = ranker.closest_docs( example.question_text, 10 )
prelim_predictions = []
# keep track of the minimum score of null start+end of position 0
score_null = 1000000 # large and positive
min_null_feature_index = 0 # the paragraph slice with min null score
null_start_logit = 0 # the start logit at the slice with min null score
null_end_logit = 0 # the end logit at the slice with min null score
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
# if we could have irrelevant answers, get the min score of irrelevant
if FLAGS.version_2_with_negative:
feature_null_score = result.start_logits[0] + result.end_logits[0]
if feature_null_score < score_null:
score_null = feature_null_score
min_null_feature_index = feature_index
null_start_logit = result.start_logits[0]
null_end_logit = result.end_logits[0]
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index]))
if FLAGS.version_2_with_negative:
prelim_predictions.append(
_PrelimPrediction(
feature_index=min_null_feature_index,
start_index=0,
end_index=0,
start_logit=null_start_logit,
end_logit=null_end_logit))
prelim_predictions = sorted(
prelim_predictions,
key=lambda x: (x.start_logit + x.end_logit),
reverse=True)
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_logit", "end_logit"])
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
if pred.start_index > 0: # this is a non-null prediction
tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
tok_text = " ".join(tok_tokens)
# De-tokenize WordPieces that have been split off.
tok_text = tok_text.replace(" ##", "")
tok_text = tok_text.replace("##", "")
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, do_lower_case)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
else:
final_text = ""
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text=final_text,
start_logit=pred.start_logit,
end_logit=pred.end_logit))
# if we didn't include the empty option in the n-best, include it
if FLAGS.version_2_with_negative:
if "" not in seen_predictions:
nbest.append(
_NbestPrediction(
text="", start_logit=null_start_logit,
end_logit=null_end_logit))
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(
_NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
assert len(nbest) >= 1
total_scores = []
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
if not best_non_null_entry:
if entry.text:
best_non_null_entry = entry
# reference
probs = _compute_softmax(total_scores)
nbest_json = []
for i, entry in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_logit"] = entry.start_logit
output["end_logit"] = entry.end_logit
nbest_json.append(output)
#----------------------------------------------
# assumption: questions arrive grouped in order
#"question", "PredictResults"
if example.question_text not in quesList :
if len(quesList)!=0 :
#1. Save to all predicts
#print('all_predictsInOneQues-')
#print(all_predictsInOneQues)
temp = copy.deepcopy(all_predictsInOneQues)
#print('temp')
#print(temp)
all_predicts.append(
_AllPredictions(
question=quesList[-1],
PredictListOneQues=temp
)
)
#2.TODO : Find the result (move to outside)
#3. reset all_predictsInOneQues
all_predictsInOneQues.clear()
#. Add to questList
quesList.append(example.question_text)
#----------------------------------------------
# save answer dataset
#----------------------------------------------
all_predictsInOneDoc = []
#print('go to (1)')
for i, entry in enumerate(nbest):
if predict_result_index == 1:
print(entry)
if i==2:
if predict_result_index == 1:
print('In state 2')
break
tp_answer = entry.text
if i==0 :
if tp_answer.isspace() or not tp_answer:
if predict_result_index == 1:
print('In state 0,tp_ans: %s' %tp_answer)
continue
if i == 1 and len(all_predictsInOneDoc)!=0:
if predict_result_index == 1:
print('In state 1,tp_ans: %s' %tp_answer)
break
if predict_result_index == 1:
print('In state set predict. tp_ans: %s' % tp_answer)
all_predictsInOneDoc.append(
_AllPredictResultsInOneDocument(
answer=entry.text,
prob=Decimal(probs[i]),
start = entry.start_logit,
end = entry.end_logit
)
)
#print('go to (2)')
#----------------------------------------------
# End of save answer dataset
if predict_result_index == 1:
for i, entry in enumerate(all_predictsInOneDoc):
print('index:%d' %i)
print("answer: %s" %(entry.answer))
print("prob: %s" %(entry.prob))
print("start: %s" %(entry.start))
print("end: %s" %(entry.end))
print('\n')
print('-'*15)
print('\n')
# append predicts to OneQues
#print('go to (3)')
#----------------------------------------------
tp_docscore = 0.0
if example.doc_id in doc_names :
tp_docindex = doc_names.index(example.doc_id)
tp_docscore = doc_scores [tp_docindex]
#print('go to (4)')
#print('go to (5)')
#print('all_predictsInOneQues-in set')
#print(all_predictsInOneQues)
all_predictsInOneQues.append(
_AllPredictResultsInOneQuestion(
doc_text=example.doc_tokens,
doc_id=example.doc_id,
doc_score=tp_docscore,
PredictListOneDoc=all_predictsInOneDoc
)
)
#print('go to (6)')
#print('all_predictsInOneQues-in set')
#print(all_predictsInOneQues)
#----------------------------------------------
# if example is examples last data
if example == all_examples[-1] :
all_predicts.append(
_AllPredictions(question=example.question_text,PredictListOneQues=all_predictsInOneQues))
#----------------------------------------------
assert len(nbest_json) >= 1
if not FLAGS.version_2_with_negative:
all_predictions[example.qas_id] = nbest_json[0]["text"]
else:
# predict "" iff the null score - the score of best non-null > threshold
if best_non_null_entry is None:
score_diff = FLAGS.null_score_diff_threshold + 1.0
else:
score_diff = score_null - best_non_null_entry.start_logit - (
best_non_null_entry.end_logit)
scores_diff_json[example.qas_id] = score_diff
if score_diff > FLAGS.null_score_diff_threshold:
all_predictions[example.qas_id] = ""
else:
all_predictions[example.qas_id] = best_non_null_entry.text
all_nbest_json[example.qas_id] = nbest_json
#TODO: Find the best answer from Aten collections
#----------------------------------------------
'''
const_AtenQuest_index = [3,3,3,3,3,3,3,3,3,3,
3,3,3,3,3,3,3,3,3,3,
4,3,6,5,5,4,5,5,5,4,
5,5,3,5,4,5,5,5,5,5,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1]
const_AtenIntent_index = [1,1,1,0,1,1,1,1,1,1,
1,1,1,1,1,1,1,0,1,1,
1,1,1,1,1,0,1,1,1,0,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1]
excel_NOtGoodAns_index = [0,0,0,3,0,0,0,0,0,0,
0,0,0,0,0,0,0,3,0,0,
0,0,3,2,0,4,3,0,2,4,
0,0,2,0,3,1,0,2,0,4,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0]
excel_count = 0
excel_index = 1
excel_Answer_count = const_AtenQuest_index[excel_index-1]
excel_Intent_count = const_AtenIntent_index[excel_index-1]
excel_NOtGoodAns_count = excel_NOtGoodAns_index[excel_index-1]
wb = Workbook()
ws = wb.active
'''
retriever_weight = FLAGS.retriever_weight
intent_count = 1
for i, entry_predicts in enumerate(all_predicts):
tp_ques = entry_predicts.question
QuesList = entry_predicts.PredictListOneQues
#print("ques: %s" %(tp_ques))
# set score only with bert , TF-IDF used to be choice doc.
#----------------------------------------------
QuesList.sort(key=TakeThird, reverse=True)
#print('len with QuesList:%d' %len(QuesList))
tp_text1 = QuesList[0].doc_text
text1=""
for word in tp_text1:
text1= text1 + " " + word
ans1=""
ans1_prob = 0.0
TFIDF1 = QuesList[0].doc_score
Score1 = 0.0
entry_OneDoc = QuesList[0].PredictListOneDoc
if len(entry_OneDoc) != 0 :
ans1 = entry_OneDoc[0].answer
ans1_prob = entry_OneDoc[0].prob
for k, entry_OneAns in enumerate(entry_OneDoc):
#print('index:%d' %k)
tp_ans1_prob = Decimal(entry_OneAns.prob)
if tp_ans1_prob > ans1_prob:
ans1_prob = tp_ans1_prob
ans1 = entry_OneAns.answer
#print('Ans_ans:%s' %(entry_OneAns.answer))
#print('Ans_prob:%e , start:%e , end:%e' %(entry_OneAns.prob , entry_OneAns.start , entry_OneAns.end))
Score1 = ans1_prob
#----------------------------------------------
# set score with bert and TF-IDF
#----------------------------------------------
text2=""
ans2=""
ans2_prob = 0.0
TFIDF2 = 0.0
Score2 = 0.0
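# Merged scoring used below: Score2 = retriever_weight * TF-IDF + (1 - retriever_weight) * answer
# probability; the (document, answer) pair maximizing this score is kept.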
for j , entry_OneDoc in enumerate(QuesList):
tp_TFIDF2 = entry_OneDoc.doc_score
tp_text2=""
for word in entry_OneDoc.doc_text:
tp_text2 = tp_text2 + " " + word
DocList = []
DocList = entry_OneDoc.PredictListOneDoc
for k, entry_OneAns in enumerate(DocList):
tp_ans2_prob = Decimal(entry_OneAns.prob)
tp_Score2 = Decimal(retriever_weight)*Decimal(tp_TFIDF2) + Decimal(1.0-retriever_weight)*Decimal(tp_ans2_prob)
if tp_Score2>Score2:
text2=tp_text2
ans2=entry_OneAns.answer
ans2_prob=tp_ans2_prob
TFIDF2=tp_TFIDF2
Score2 =tp_Score2
#----------------------------------------------
fin_text = text1
fin_ans = ans1
fin_ans_prob = ans1_prob
fin_TFIDF = TFIDF1
fin_Score = Score1
choice_value = 0
if TFIDF1<FLAGS.choice_score:
fin_text = text2
fin_ans = ans2
fin_ans_prob = ans2_prob
fin_TFIDF = TFIDF2
fin_Score = Score2
choice_value = 1
elif ans2_prob>ans1_prob*2:
if ans2_prob > FLAGS.threshold_prob_ans_merge:
fin_text = text2
fin_ans = ans2
fin_ans_prob = ans2_prob
fin_TFIDF = TFIDF2
fin_Score = Score2
choice_value = 1
elif not ans1:
fin_text = text2
fin_ans = ans2
fin_ans_prob = ans2_prob
fin_TFIDF = TFIDF2
fin_Score = Score2
choice_value = 1
if FLAGS.show_all_choice == 0:
Aten_result3_list.append(
_FinalResult3(
question = tp_ques,
text = fin_text,
ans = fin_ans,
ans_prob = fin_ans_prob,
TFIDF = fin_TFIDF,
Score = fin_Score,
choice = choice_value
)
)
else :
Aten_result3_list.append(
_FinalResultAll(
question = tp_ques,
text1 = text1,
ans1 = ans1,
ans_prob1 = ans1_prob,
TFIDF1 = TFIDF1,
Score1 = Score1,
text2 = text2,
ans2 = ans2,
ans_prob2 = ans2_prob,
TFIDF2 = TFIDF2,
Score2 = Score2,
choice = choice_value
)
)
print('ques: %s' %tp_ques)
if FLAGS.show_all_choice==1:
print('-'*5)
print('Only Bert (TF-IDF used to be choice document):')
print('text: %s' %text1)
print('ans: %s' %ans1)
print('ans_prob: %s' %ans1_prob)
print('TFIDF: %s' %TFIDF1)
print('Score: %s' %Score1)
print('')
print('-'*5)
print('Merge TF-IDF:')
print('text: %s' %text2)
print('ans: %s' %ans2)
print('ans_prob: %s' %ans2_prob)
print('TFIDF: %s' %TFIDF2)
print('Score: %s' %Score2)
print('-'*5)
print('My Choice ans(%d):' %choice_value)
print('text: %s' %fin_text)
print('ans: %s' %fin_ans)
print('ans_prob: %s' %fin_ans_prob)
print('TFIDF: %s' %fin_TFIDF)
print('Score: %s' %fin_Score)
# ack message to Colab Client
temp_answer = 'Dr_Answer' + fin_ans + 'Dr_QA' + fin_text + '<AtenEnd>'
client.send(temp_answer.encode('utf8'))
'''
print('-'*5)
if excel_Answer_count == excel_count+1 :
print('-'*15)
print('\n')
if excel_Answer_count == excel_count :
ws['C' + str(excel_index)] = excel_Answer_count
ws['D' + str(excel_index)] = excel_NOtGoodAns_count
ws['F' + str(excel_index)] = excel_Intent_count
excel_index = excel_index+1
excel_Answer_count = const_AtenQuest_index[excel_index-1]
excel_NOtGoodAns_count = excel_NOtGoodAns_index[excel_index-1]
excel_Intent_count = const_AtenIntent_index[excel_index-1]
excel_count = 0
if excel_index <= len(const_AtenQuest_index) :
# print('Set my fin_Score with excel: %s' %fin_Score)
index_str = chr(73+excel_count) + str(excel_index)
ws[index_str] = fin_Score
excel_count = excel_count + 1
ws['A60'] = 'All'
ws['A61'] = '40QA'
ws['B59'] = 'Right answer'
ws['B60'] = '=SUM(B1:B40)+SUM(A41:A58)'
ws['B61'] = '=SUM(B1:B40)'
ws['C59'] = 'All answer'
ws['C60'] = '=SUM(C1:C58)-SUM(D1:D40)'
ws['C61'] = '=SUM(C1:C40)-SUM(D1:D40)'
ws['E59'] = 'Right Intent'
ws['E60'] = '=SUM(E1:E40)+SUM(A41:A58)'
ws['E61'] = '=SUM(E1:E40)'
ws['F59'] = 'All intent'
ws['F60'] = '=SUM(F1:F40)+SUM(C41:C58)'
ws['F61'] = '=SUM(F1:F40)'
ws['G59'] = 'answer prob'
ws['G60'] = '=B60/C60'
ws['G61'] = '=B61/C61'
ws['H59'] = 'Intent prob'
ws['H60'] = '=E60/F60'
ws['H61'] = '=E61/F61'
wb.save(FLAGS.excel_name + '.xlsx')
print('\n')
with tf.gfile.GFile(output_Aten_predict_file, "w") as writer:
writer.write(json.dumps(Aten_result3_list, indent=4,cls=DecimalEncoder) + "\n")
with tf.gfile.GFile(output_prediction_file, "w") as writer:
writer.write(json.dumps(all_predictions, indent=4) + "\n")
with tf.gfile.GFile(output_nbest_file, "w") as writer:
writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
if FLAGS.version_2_with_negative:
with tf.gfile.GFile(output_null_log_odds_file, "w") as writer:
writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
'''
def get_final_text(pred_text, orig_text, do_lower_case):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = steve smith
# orig_text = Steve Smith's
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the SQuAD eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "Steve Smith".
#
# Therefore, we have to apply a semi-complicated alignment heuristic between
# `pred_text` and `orig_text` to get a character-to-character alignment. This
# can fail in certain cases in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == " ":
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = "".join(ns_chars)
return (ns_text, ns_to_s_map)
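# Example: _strip_spaces("a b") -> ("ab", {0: 0, 1: 2}), i.e. a map from
# stripped-string index back to the index in the original text.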
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
tok_text = " ".join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
if FLAGS.verbose_logging:
tf.compat.v1.logging.info(
"Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
if FLAGS.verbose_logging:
tf.compat.v1.logging.info("Length not equal after stripping spaces: '%s' vs '%s'",
orig_ns_text, tok_ns_text)
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
if FLAGS.verbose_logging:
tf.compat.v1.logging.info("Couldn't map start position")
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
if FLAGS.verbose_logging:
tf.compat.v1.logging.info("Couldn't map end position")
return orig_text
output_text = orig_text[orig_start_position:(orig_end_position + 1)]
return output_text
def _get_best_indexes(logits, n_best_size):
"""Get the n-best logits from a list."""
index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
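# Example: _get_best_indexes([0.1, 2.3, -1.0, 0.7], 2) -> [1, 3].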
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
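# Example: _compute_softmax([1.0, 2.0]) -> approximately [0.269, 0.731].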
class FeatureWriter(object):
"""Writes InputFeature to TF example file."""
def __init__(self, filename, is_training):
self.filename = filename
self.is_training = is_training
self.num_features = 0
#self._writer = tf.python_io.TFRecordWriter(filename)
self._writer = tf.io.TFRecordWriter(filename)
def process_feature(self, feature):
"""Write a InputFeature to the TFRecordWriter as a tf.train.Example."""
self.num_features += 1
def create_int_feature(values):
feature = tf.train.Feature(
int64_list=tf.train.Int64List(value=list(values)))
return feature
features = collections.OrderedDict()
features["unique_ids"] = create_int_feature([feature.unique_id])
features["input_ids"] = create_int_feature(feature.input_ids)
features["input_mask"] = create_int_feature(feature.input_mask)
features["segment_ids"] = create_int_feature(feature.segment_ids)
if self.is_training:
features["start_positions"] = create_int_feature([feature.start_position])
features["end_positions"] = create_int_feature([feature.end_position])
impossible = 0
if feature.is_impossible:
impossible = 1
features["is_impossible"] = create_int_feature([impossible])
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
#print("tf_example:")
#print(tf_example)
self._writer.write(tf_example.SerializeToString())
def close(self):
self._writer.close()
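# Typical use (as in the TCP handler below): writer = FeatureWriter(filename, is_training=False);
# convert_examples_to_features(..., output_fn=writer.process_feature); writer.close().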
def validate_flags_or_throw(bert_config):
"""Validate the input FLAGS or throw an exception."""
tokenization.validate_case_matches_checkpoint(FLAGS.do_lower_case,
FLAGS.init_checkpoint)
if not FLAGS.do_train and not FLAGS.do_predict:
raise ValueError("At least one of `do_train` or `do_predict` must be True.")
if FLAGS.do_train:
if not FLAGS.train_file:
raise ValueError(
"If `do_train` is True, then `train_file` must be specified.")
if FLAGS.do_predict:
if not FLAGS.predict_file:
raise ValueError(
"If `do_predict` is True, then `predict_file` must be specified.")
if FLAGS.max_seq_length > bert_config.max_position_embeddings:
raise ValueError(
"Cannot use sequence length %d because the BERT model "
"was only trained up to sequence length %d" %
(FLAGS.max_seq_length, bert_config.max_position_embeddings))
if FLAGS.max_seq_length <= FLAGS.max_query_length + 3:
raise ValueError(
"The max_seq_length (%d) must be greater than max_query_length "
"(%d) + 3" % (FLAGS.max_seq_length, FLAGS.max_query_length))
# Retriever - added by Willy
if FLAGS.do_retriever:
if not FLAGS.retriever_model:
raise ValueError("You have to set retriever model(give the path) when you set do_retriever to Yes.")
if FLAGS.document_type != 'Sqlite' or FLAGS.db_file is None:
raise ValueError("You have to set document_type to 'Sqlite' and set db_file when you set do_retriever to True.")
# TODO: think of a mechanism to check these keywords
'''
if FLAGS.document_type == 'Sqlite':
# TODO: set database
elif FLAGS.document_type == 'Text':
# TODO: set text file
elif FLAGS.document_type == 'SQuAD':
# is the original method
else:
raise ValueError(
"You have to set a correct document_type: (1) 'Sqlite' (2) 'Text' (3) 'SQuAD'.")
'''
def read_squad_documents(input_file):
"""Read a SQuAD json file into a list of SquadExample."""
with tf.gfile.Open(input_file, "r") as reader:
input_data = json.load(reader)["data"]
documents = []
for entry in input_data:
for paragraph in entry["paragraphs"]:
documents.append(paragraph["context"])
return documents
def read_sqlite_documents(input_file):
# TODO
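# NOTE: `documents` is not defined locally in this function; it relies on a module-level
# `documents` list (assumed to be declared earlier in the file), which set_eval_examples() also reads.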
db_class = retriever.get_class('sqlite')
with db_class(input_file) as doc_db:
doc_ids = doc_db.get_doc_ids()
for ids in doc_ids:
documents.append(doc_db.get_doc_text(ids))
doc_db.close()
DOC2IDX = {doc_id: i for i, doc_id in enumerate(doc_ids)}
return DOC2IDX, documents
def read_text_documents(input_file):
examples = []
file = open(input_file, "r")
documents = file.read()
file.close()
documents_split = documents.split('\n')
documents_final = list(filter(None, documents_split))
return documents_final
def read_squad_question(input_file):
"""Read a SQuAD json file and return the list of questions."""
questions = []
with tf.gfile.Open(input_file, "r") as reader:
input_data = json.load(reader)["data"]
for entry in input_data:
for paragraph in entry["paragraphs"]:
for qa in paragraph["qas"]:
questions.append(qa["question"])
return questions
def set_eval_examples(questions, DOC2IDX):
def is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
eval_examples = []
temp_list = []
for i, DOCID in enumerate(DOC2IDX) :
temp_list.append(DOCID)
for question in questions:
#-------------------------questions - Start---------------------------#
question_text = question
start_position = -1
end_position = -1
orig_answer_text = ""
is_impossible = False
#-------------documents - Start--------------#
for i, paragraph_text in enumerate(documents):
#-------paragraphs - Start-------#
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
#-------paragraphs - End-------#
qas_id = str(uuid.uuid1())
example = SquadExample(
qas_id=qas_id,
question_text=question_text,
doc_id = temp_list[i],
doc_tokens=doc_tokens,
orig_answer_text=orig_answer_text,
start_position=start_position,
end_position=end_position,
is_impossible=is_impossible)
eval_examples.append(example)
#-------------documents - End--------------#
#-------------------------questions - End-----------------------------#
if example_in_set_eval_examples == 1:
print('len of eval_examples:%d' %len(eval_examples))
for i, example in enumerate(eval_examples):
print(i)
print (example.question_text)
'''
for i, example in enumerate(eval_examples):
print('idx:%d:%s' %(i,example.question_text))
'''
return eval_examples
from socket import *
import sys
import threading
import time
from time import localtime
import imp
BUFSIZ = 4096
if sys.version[0] == '2':
imp.reload(sys)
sys.setdefaultencoding("utf-8")
class TcpServer():
def __init__(self,tokenizer,estimator,DOC2IDX):
self.HOST = FLAGS.Host_TCPServer
self.PORT = FLAGS.PORT_TCPServer
self.tokenizer = tokenizer
self.estimator = estimator
self.ADDR = (self.HOST,self.PORT)
self.DOC2IDX = DOC2IDX
try:
self.STOP_CHAT = False
self.sock = socket(AF_INET, SOCK_STREAM)
print('%d is open' %self.PORT)
self.sock.bind(self.ADDR)
self.sock.listen(5)
# set up the exit condition
# all connected client sockets
self.clients = {}
self.thrs = {}
self.stops = []
except Exception as e:
print("%d is down" %self.PORT)
return None
def listen_client(self):
while not self.STOP_CHAT:
print(u'Waiting for connections, listening on port: %d' % self.PORT)
self.tcpClientSock, self.addr = self.sock.accept()
print(u'Accepted connection from client address:', self.addr)
address = self.addr
# store the accepted client socket in the self.clients dict
self.clients[address] = self.tcpClientSock
# handle each accepted connection in its own thread, receiving and dispatching messages
self.thrs[address] = threading.Thread(target=self.readmsg, args=[address])
self.thrs[address].start()
time.sleep(0.5)
#self.tcpClientSock.send(b'you are connect...')
print(u'Server shut down')
def readmsg(self, address):
# return False if the address is unknown
if address not in self.clients:
return False
# get the client socket that sent the message
client = self.clients[address]
while True:
try:
# receive the message payload into data
data = client.recv(BUFSIZ)
except Exception as error:
print(error)
self.close_client(address)
break
try:
temp = data.decode('utf8')
except:
print('data is not utf8 :%s' %(str(data)) )
continue
# Python 3 sockets carry bytes, so the payload has to be decoded
# s = '%s sent me: [%s] %s' % (addr[0], ctime(), data.decode('utf8'))
# format the timestamp
ISOTIMEFORMAT = '%Y-%m-%d %X'
stime = time.strftime(ISOTIMEFORMAT, localtime())
print([address], '@',[stime],':', data.decode('utf8'))
if len(data)<3:
print('data is not reasonable: %s' %(str(data)))
else:
self.STOP_CHAT = (data.decode('utf8').upper() == "QUIT")
if self.STOP_CHAT:
print("quit")
self.close_client(address)
print("already quit")
break
tokenizer = self.tokenizer
estimator = self.estimator
DOC2IDX = self.DOC2IDX
question = data.decode('utf8')
#print('My question:',question)
if FLAGS.do_predict:
# define
#---------------------------------------------------
def append_feature(feature):
eval_features.append(feature)
eval_writer.process_feature(feature)
# ---------------------------------------------------
# print('WillyTest(1)...do Set question:%s' %(FLAGS.question_type))
# ---------------------set question , changed by willy---------------------#
questions = list()
questions.append(question)
#print('My questions:')
#print(questions)
#-------------------------------------------------------------------------#
#print('WillyTest(2)...do Set eval_examples')
eval_examples=set_eval_examples(questions,DOC2IDX)
#print('WillyTest(2.1)...do FeatureWriter')
eval_writer = FeatureWriter(
filename=os.path.join(FLAGS.output_dir, "eval.tf_record"),
is_training=False
)
eval_features = []
#print('WillyTest(2.2)...do convert_examples_to_features')
convert_examples_to_features(
examples=eval_examples,
tokenizer=tokenizer,
max_seq_length=FLAGS.max_seq_length,
doc_stride=FLAGS.doc_stride,
max_query_length=FLAGS.max_query_length,
is_training=False,
output_fn=append_feature
)
eval_writer.close()
tf.compat.v1.logging.info("***** Running predictions *****")
tf.compat.v1.logging.info(" Num orig examples = %d", len(eval_examples))
tf.compat.v1.logging.info(" Num split examples = %d", len(eval_features))
tf.compat.v1.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
print('WillyTest(5)...before predict_input_fn = input_fn_builder: eval_writer.filename=%s, FLAGS.max_seq_length=%d' %(eval_writer.filename,FLAGS.max_seq_length))
predict_input_fn = input_fn_builder(
input_file=eval_writer.filename,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=False
)
all_results = []
print('WillyTest(6)...before estimator predict')
for result in estimator.predict(predict_input_fn, yield_single_examples=True):
if len(all_results) % 1000 == 0:
tf.compat.v1.logging.info("Processing example: %d" % (len(all_results)))
unique_id = int(result["unique_ids"])
start_logits = [float(x) for x in result["start_logits"].flat]
end_logits = [float(x) for x in result["end_logits"].flat]
all_results.append(RawResult(unique_id=unique_id,start_logits=start_logits,end_logits=end_logits))
print('WillyTest(8)...before write_predictions')
write_predictions(
eval_examples, eval_features, all_results,
FLAGS.n_best_size, FLAGS.max_answer_length,
FLAGS.do_lower_case,client
)
'''
output_prediction_file = os.path.join(FLAGS.output_dir, "predictions.json")
output_nbest_file = os.path.join(FLAGS.output_dir, "nbest_predictions.json")
output_null_log_odds_file = os.path.join(FLAGS.output_dir, "null_odds.json")
output_Aten_predict_file = os.path.join(FLAGS.output_dir, "Aten_predicts.json")
print('WillyTest(8)...before write_predictions')
write_predictions(
eval_examples, eval_features, all_results,
FLAGS.n_best_size, FLAGS.max_answer_length,
FLAGS.do_lower_case, output_prediction_file,
output_nbest_file, output_null_log_odds_file,
output_Aten_predict_file,client
)
'''
def close_client(self, address):
try:
'''
print(u'try leave')
client = self.clients.pop(address)
print(u'try leave1')
self.stops.append(address)
print(u'try leave2')
client.close()
print(u'try leave3')
'''
for k in self.clients:
print(u'try leave')
print(u'try client1:', [self.clients[k]])
print(u'try client2:', [self.clients[address]])
print(u'try client3:', [k])
print(u'try client4:', [address])
client = self.clients.pop(k)
#print(u'try leave1')
#self.stops.append(k)
print(u'try leave2')
client.close()
print(u'try leave3')
'''
print(u'try leave4:client:',[self.clients[k]])
self.clients[k].send(str(address) + u" has left")
'''
except:
print(u'close_client failed')
pass
print(str(address) + u' has disconnected')
def main(_):
global ranker
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
print(willy_check_code)
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
validate_flags_or_throw(bert_config)
tf.io.gfile.makedirs(FLAGS.output_dir)
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
tpu_cluster_resolver = None
if FLAGS.use_tpu and FLAGS.tpu_name:
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=FLAGS.master,
model_dir=FLAGS.output_dir,
save_checkpoints_steps=FLAGS.save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=FLAGS.iterations_per_loop,
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
train_examples = None
num_train_steps = None
num_warmup_steps = None
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=FLAGS.init_checkpoint,
learning_rate=FLAGS.learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_tpu)
if FLAGS.do_retriever:
# Set Document
# ------------------------------------------------------
print('WillyTest...do SQlite')
DOC2IDX, docments = read_sqlite_documents(input_file=FLAGS.db_file)
# ------------------------------------------------------
else:
# Set Document
tf.compat.v1.logging.info("my document_type is %s", FLAGS.document_type)
if FLAGS.document_type == 'Text':
# TODO
print('WillyTest...do Text')
docments = read_text_documents(input_file=FLAGS.predict_file)
elif FLAGS.document_type == 'SQuAD':
# TODO
print('WillyTest...do SQuAD')
docments = read_squad_documents(input_file=FLAGS.predict_file)
# else:
# #raise ValueError("Your document_type: %s is undefined or wrong, please reset it." %(FLAGS.document_type))
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=FLAGS.train_batch_size,
predict_batch_size=FLAGS.predict_batch_size)
print("do tcp server")
ranker = retriever.get_class('tfidf')(tfidf_path=FLAGS.retriever_model)
tserver = None
tserver = TcpServer(tokenizer,estimator,DOC2IDX)
while tserver is None:
tserver = TcpServer(tokenizer, estimator, DOC2IDX)
print("do tcp server-listen")
tserver.listen_client()
if __name__ == "__main__":
flags.mark_flag_as_required("vocab_file")
flags.mark_flag_as_required("bert_config_file")
flags.mark_flag_as_required("output_dir")
tf.compat.v1.app.run()
|
v1_station_b_S14_nukex_200ulinput.py
|
from opentrons.types import Point
import json
import os
import math
import threading
from time import sleep
metadata = {
'protocolName': 'Version 1 S13 Station B NukEx (200µl sample input)',
'author': 'Nick <ndiehl@opentrons.com>',
'apiLevel': '2.3'
}
NUM_SAMPLES = 8 # start with 8 samples, slowly increase to 48, then 94 (max is 94)
ELUTION_VOL = 100
STARTING_VOL = 800
TIP_TRACK = False
PARK = False
# Definitions for deck light flashing
class CancellationToken:
def __init__(self):
self.is_continued = False
def set_true(self):
self.is_continued = True
def set_false(self):
self.is_continued = False
def turn_on_blinking_notification(hardware, pause):
while pause.is_continued:
hardware.set_lights(rails=True)
sleep(1)
hardware.set_lights(rails=False)
sleep(1)
def create_thread(ctx, cancel_token):
t1 = threading.Thread(target=turn_on_blinking_notification, args=(ctx._hw_manager.hardware, cancel_token))
t1.start()
return t1
# Start protocol
def run(ctx):
# Setup for flashing lights notification to empty trash
# cancellationToken = CancellationToken()
# load labware and pipettes
num_cols = math.ceil(NUM_SAMPLES/8)
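# One plate column holds 8 samples, so the 8-channel p300 processes the plate column by column.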
tips300 = [ctx.load_labware('opentrons_96_tiprack_300ul', slot, '200µl filtertiprack')
for slot in ['3', '6', '8', '9', '10']]
if PARK:
parkingrack = ctx.load_labware(
'opentrons_96_tiprack_300ul', '7', 'empty tiprack for parking')
parking_spots = parkingrack.rows()[0][:num_cols]
else:
tips300.insert(0, ctx.load_labware('opentrons_96_tiprack_300ul', '7',
'200µl filtertiprack'))
parking_spots = [None for _ in range(12)]
m300 = ctx.load_instrument(
'p300_multi_gen2', 'left', tip_racks=tips300)
magdeck = ctx.load_module('magdeck', '4')
magdeck.disengage()
magheight = 13.7
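# Engage height (mm) for the NEST 2 mL deep-well plate on the magnetic module
# (assumed to be calibrated for this labware).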
magplate = magdeck.load_labware('nest_96_wellplate_2ml_deep')
# magplate = magdeck.load_labware('biorad_96_wellplate_200ul_pcr')
tempdeck = ctx.load_module('Temperature Module Gen2', '1')
flatplate = tempdeck.load_labware(
'opentrons_96_aluminumblock_nest_wellplate_100ul')
waste = ctx.load_labware('nest_1_reservoir_195ml', '11',
'Liquid Waste').wells()[0].top()
p3 = ctx.load_labware(
'nest_1_reservoir_195ml', '2', 'wash buffer reservoir').wells()[0:]
res1 = ctx.load_labware(
'nest_12_reservoir_15ml', '5', 'reagent reservoir 1')
p2 = res1.wells()[:4]
elution_solution = res1.wells()[-1]
mag_samples_m = magplate.rows()[0][:num_cols]
elution_samples_m = flatplate.rows()[0][:num_cols]
magdeck.disengage() # just in case
tempdeck.set_temperature(4)
m300.flow_rate.aspirate = 50
m300.flow_rate.dispense = 150
m300.flow_rate.blow_out = 300
folder_path = '/data/B'
tip_file_path = folder_path + '/tip_log.json'
tip_log = {'count': {}}
if TIP_TRACK and not ctx.is_simulating():
if os.path.isfile(tip_file_path):
with open(tip_file_path) as json_file:
data = json.load(json_file)
if 'tips300' in data:
tip_log['count'][m300] = data['tips300']
else:
tip_log['count'][m300] = 0
else:
tip_log['count'][m300] = 0
else:
tip_log['count'] = {m300: 0}
tip_log['tips'] = {
m300: [tip for rack in tips300 for tip in rack.rows()[0]]}
tip_log['max'] = {m300: len(tip_log['tips'][m300])}
def pick_up(pip, loc=None):
nonlocal tip_log
if tip_log['count'][pip] == tip_log['max'][pip] and not loc:
ctx.pause('Replace ' + str(pip.max_volume) + 'µl tipracks before \
resuming.')
pip.reset_tipracks()
tip_log['count'][pip] = 0
if loc:
pip.pick_up_tip(loc)
else:
pip.pick_up_tip(tip_log['tips'][pip][tip_log['count'][pip]])
tip_log['count'][pip] += 1
switch = True
drop_count = 0
drop_threshold = 120 # number of tips trash will accommodate before prompting user to empty
def drop(pip):
nonlocal switch
nonlocal drop_count
side = 30 if switch else -18
drop_loc = ctx.loaded_labwares[12].wells()[0].top().move(
Point(x=side))
pip.drop_tip(drop_loc)
switch = not switch
drop_count += 8
if drop_count == drop_threshold:
# Setup for flashing lights notification to empty trash
# if not ctx._hw_manager.hardware.is_simulator:
# cancellationToken.set_true()
# thread = create_thread(ctx, cancellationToken)
m300.home()
ctx.pause('Please empty tips from waste before resuming.')
ctx.home() # home before continuing with protocol
# cancellationToken.set_false() # stop light flashing after home
# thread.join()
drop_count = 0
waste_vol = 0
waste_threshold = 185000
def remove_supernatant(vol, park=False):
def waste_track(vol):
nonlocal waste_vol
if waste_vol + vol >= waste_threshold:
# Setup for flashing lights notification to empty liquid waste
# if not ctx._hw_manager.hardware.is_simulator:
# cancellationToken.set_true()
# thread = create_thread(ctx, cancellationToken)
m300.home()
ctx.pause('Please empty liquid waste (slot 11) before resuming.')
ctx.home() # home before continuing with protocol
# cancellationToken.set_false() # stop light flashing after home
# thread.join()
waste_vol = 0
waste_vol += vol
m300.flow_rate.aspirate = 30
num_trans = math.ceil(vol/200)
vol_per_trans = vol/num_trans
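# Split each removal into passes of at most 200 µl so every aspirate fits the 200 µl filter tips.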
for i, (m, spot) in enumerate(zip(mag_samples_m, parking_spots)):
if park:
pick_up(m300, spot)
else:
pick_up(m300)
side = -1 if i % 2 == 0 else 1
loc = m.bottom(0.5).move(Point(x=side*2))
for _ in range(num_trans):
waste_track(vol_per_trans)
if m300.current_volume > 0:
m300.dispense(m300.current_volume, m.top()) # void air gap if necessary
m300.move_to(m.center())
m300.transfer(vol_per_trans, loc, waste, new_tip='never',
air_gap=20)
m300.blow_out(waste)
m300.air_gap(20)
drop(m300)
m300.flow_rate.aspirate = 150
# def bind(vol, park=True):
# # add bead binding buffer and mix samples
# for i, (well, spot) in enumerate(zip(mag_samples_m, parking_spots)):
# source = binding_buffer[i//(12//len(binding_buffer))]
# if park:
# pick_up(m300, spot)
# else:
# pick_up(m300)
# for _ in range(5):
# m300.aspirate(180, source.bottom(0.5))
# m300.dispense(180, source.bottom(5))
# num_trans = math.ceil(vol/210)
# vol_per_trans = vol/num_trans
# for t in range(num_trans):
# if m300.current_volume > 0:
# m300.dispense(m300.current_volume, source.top()) # void air gap if necessary
# m300.transfer(vol_per_trans, source, well.top(), air_gap=20,
# new_tip='never')
# if t == 0:
# m300.air_gap(20)
# m300.mix(5, 200, well)
# m300.blow_out(well.top(-2))
# m300.air_gap(20)
# if park:
# m300.drop_tip(spot)
# else:
# drop(m300)
#
# magdeck.engage(height=magheight)
# ctx.delay(minutes=2, msg='Incubating on MagDeck for 2 minutes.')
#
# # remove initial supernatant
# remove_supernatant(vol+STARTING_VOL, park=park)
def wash(wash_vol, source, mix_reps, park=True):
magdeck.disengage()
num_trans = math.ceil(wash_vol/200)
vol_per_trans = wash_vol/num_trans
for i, (m, spot) in enumerate(zip(mag_samples_m, parking_spots)):
pick_up(m300)
side = 1 if i % 2 == 0 else -1
loc = m.bottom(0.5).move(Point(x=side*2))
src = source[i//(12//len(source))]
for n in range(num_trans):
if m300.current_volume > 0:
m300.dispense(m300.current_volume, src.top())
m300.transfer(vol_per_trans, src, m.top(), air_gap=20,
new_tip='never')
if n < num_trans - 1: # only air_gap if going back to source
m300.air_gap(20)
m300.mix(mix_reps, 150, loc)
m300.blow_out(m.top())
m300.air_gap(20)
if park:
m300.drop_tip(spot)
else:
drop(m300)
magdeck.engage(height=magheight)
ctx.delay(minutes=5, msg='Incubating on MagDeck for 5 minutes.')
remove_supernatant(wash_vol, park=park)
def elute(vol, park=True):
# resuspend beads in elution
for i, (m, spot) in enumerate(zip(mag_samples_m, parking_spots)):
pick_up(m300)
side = 1 if i % 2 == 0 else -1
loc = m.bottom(0.5).move(Point(x=side*2))
m300.aspirate(vol, elution_solution)
m300.move_to(m.center())
m300.dispense(vol, loc)
m300.mix(10, 0.8*vol, loc)
m300.blow_out(m.bottom(5))
m300.air_gap(20)
if park:
m300.drop_tip(spot)
else:
drop(m300)
ctx.delay(minutes=2, msg='Incubating off magnet at room temperature \
for 2 minutes')
magdeck.engage(height=magheight)
ctx.delay(minutes=2, msg='Incubating on magnet at room temperature \
for 2 minutes')
for i, (m, e, spot) in enumerate(
zip(mag_samples_m, elution_samples_m, parking_spots)):
if park:
pick_up(m300, spot)
else:
pick_up(m300)
side = -1 if i % 2 == 0 else 1
loc = m.bottom(0.5).move(Point(x=side*2))
m300.transfer(40, loc, e.bottom(5), air_gap=20, new_tip='never')
m300.blow_out(e.top(-2))
m300.air_gap(20)
m300.drop_tip()
magdeck.engage(height=magheight)
ctx.delay(minutes=2, msg='Incubating on MagDeck for 2 minutes.')
# remove initial supernatant
remove_supernatant(STARTING_VOL, park=PARK)
wash(500, p2, 15, park=PARK)
wash(450, p3, 15, park=PARK)
wash(450, p3, 15, park=PARK)
magdeck.disengage()
ctx.delay(minutes=5, msg='Airdrying beads at room temperature for 5 \
minutes.')
elute(ELUTION_VOL, park=PARK)
|
server_utils_test.py
|
# Copyright 2020, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import multiprocessing
import signal
import time
from tensorflow_federated.python.common_libs import test
from tensorflow_federated.python.core.impl.executors import eager_tf_executor
from tensorflow_federated.python.simulation import server_utils
class ServerUtilsTest(test.TestCase):
def test_server_runs(self):
ex = eager_tf_executor.EagerTFExecutor()
def noarg_run_server():
server_utils.run_server(ex, 1, 8888)
process = multiprocessing.Process(target=noarg_run_server)
process.start()
time.sleep(1)
process.terminate()
counter = 0
while process.exitcode is None:
time.sleep(1)
counter += 1
if counter > 10:
raise AssertionError('Exitcode not propagated.')
self.assertEqual(process.exitcode, -signal.SIGTERM)
if __name__ == '__main__':
test.main()
|
test_oddball.py
|
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Oddball cases for testing coverage.py"""
import os.path
import sys
from flaky import flaky
import pytest
import coverage
from coverage import env
from coverage.backward import import_local_file
from coverage.files import abs_file
from tests.coveragetest import CoverageTest
from tests import osinfo
class ThreadingTest(CoverageTest):
"""Tests of the threading support."""
def test_threading(self):
self.check_coverage("""\
import threading
def fromMainThread():
return "called from main thread"
def fromOtherThread():
return "called from other thread"
def neverCalled():
return "no one calls me"
other = threading.Thread(target=fromOtherThread)
other.start()
fromMainThread()
other.join()
""",
[1, 3, 4, 6, 7, 9, 10, 12, 13, 14, 15], "10")
def test_thread_run(self):
self.check_coverage("""\
import threading
class TestThread(threading.Thread):
def run(self):
self.a = 5
self.do_work()
self.a = 7
def do_work(self):
self.a = 10
thd = TestThread()
thd.start()
thd.join()
""",
[1, 3, 4, 5, 6, 7, 9, 10, 12, 13, 14], "")
class RecursionTest(CoverageTest):
"""Check what happens when recursive code gets near limits."""
def test_short_recursion(self):
# We can definitely get close to 500 stack frames.
self.check_coverage("""\
def recur(n):
if n == 0:
return 0
else:
return recur(n-1)+1
recur(495) # We can get at least this many stack frames.
i = 8 # and this line will be traced
""",
[1, 2, 3, 5, 7, 8], "")
def test_long_recursion(self):
# We can't finish a very deep recursion, but we don't crash.
with self.assertRaises(RuntimeError):
self.check_coverage("""\
def recur(n):
if n == 0:
return 0
else:
return recur(n-1)+1
recur(100000) # This is definitely too many frames.
""",
[1, 2, 3, 5, 7], ""
)
def test_long_recursion_recovery(self):
# Test the core of bug 93: https://bitbucket.org/ned/coveragepy/issue/93
# When recovering from a stack overflow, the Python trace function is
# disabled, but the C trace function is not. So if we're using a
# Python trace function, we won't trace anything after the stack
# overflow, and there should be a warning about it. If we're using
# the C trace function, only line 3 will be missing, and all else
# will be traced.
self.make_file("recur.py", """\
def recur(n):
if n == 0:
return 0 # never hit
else:
return recur(n-1)+1
try:
recur(100000) # This is definitely too many frames.
except RuntimeError:
i = 10
i = 11
""")
cov = coverage.Coverage()
self.start_import_stop(cov, "recur")
pytrace = (cov._collector.tracer_name() == "PyTracer")
expected_missing = [3]
if pytrace: # pragma: no metacov
expected_missing += [9, 10, 11]
_, statements, missing, _ = cov.analysis("recur.py")
self.assertEqual(statements, [1, 2, 3, 5, 7, 8, 9, 10, 11])
self.assertEqual(expected_missing, missing)
# Get a warning about the stackoverflow effect on the tracing function.
if pytrace: # pragma: no metacov
self.assertEqual(cov._warnings,
["Trace function changed, measurement is likely wrong: None"]
)
else:
self.assertEqual(cov._warnings, [])
class MemoryLeakTest(CoverageTest):
"""Attempt the impossible: test that memory doesn't leak.
Note: this test is truly unusual, and has had a colorful history. See
for example: https://bitbucket.org/ned/coveragepy/issue/186
It may still fail occasionally, especially on PyPy.
"""
@flaky
def test_for_leaks(self):
if env.JYTHON:
self.skipTest("Don't bother on Jython")
# Our original bad memory leak only happened on line numbers > 255, so
# make a code object with more lines than that. Ugly string mumbo
# jumbo to get 300 blank lines at the beginning.
code = """\
# blank line\n""" * 300 + """\
def once(x): # line 301
if x % 100 == 0:
raise Exception("100!")
elif x % 2:
return 10
else: # line 306
return 11
i = 0 # Portable loop without alloc'ing memory.
while i < ITERS:
try:
once(i)
except:
pass
i += 1 # line 315
"""
lines = list(range(301, 315))
lines.remove(306) # Line 306 is the "else".
# This is a non-deterministic test, so try it a few times, and fail it
# only if it predominantly fails.
fails = 0
for _ in range(10):
ram_0 = osinfo.process_ram()
self.check_coverage(code.replace("ITERS", "10"), lines, "")
ram_10 = osinfo.process_ram()
self.check_coverage(code.replace("ITERS", "10000"), lines, "")
ram_10k = osinfo.process_ram()
# Running the code 10k times shouldn't grow the ram much more than
# running it 10 times.
ram_growth = (ram_10k - ram_10) - (ram_10 - ram_0)
if ram_growth > 100000:
fails += 1 # pragma: only failure
if fails > 8:
self.fail("RAM grew by %d" % (ram_growth)) # pragma: only failure
class MemoryFumblingTest(CoverageTest):
"""Test that we properly manage the None refcount."""
def test_dropping_none(self): # pragma: not covered
if not env.C_TRACER:
self.skipTest("Only the C tracer has refcounting issues")
# TODO: Mark this so it will only be run sometimes.
self.skipTest("This is too expensive for now (30s)")
# Start and stop coverage thousands of times to flush out bad
# reference counting, maybe.
self.make_file("the_code.py", """\
import random
def f():
if random.random() > .5:
x = 1
else:
x = 2
""")
self.make_file("main.py", """\
import coverage
import sys
from the_code import f
for i in range(10000):
cov = coverage.Coverage(branch=True)
cov.start()
f()
cov.stop()
cov.erase()
print("Final None refcount: %d" % (sys.getrefcount(None)))
""")
status, out = self.run_command_status("python main.py")
self.assertEqual(status, 0)
self.assertIn("Final None refcount", out)
self.assertNotIn("Fatal", out)
class PyexpatTest(CoverageTest):
"""Pyexpat screws up tracing. Make sure we've counter-defended properly."""
def test_pyexpat(self):
if env.JYTHON:
self.skipTest("Pyexpat isn't a problem on Jython")
# pyexpat calls the trace function explicitly (inexplicably), and does
# it wrong for exceptions. Parsing a DOCTYPE for some reason throws
# an exception internally, and triggers its wrong behavior. This test
# checks that our fake PyTrace_RETURN hack in tracer.c works. It will
# also detect if the pyexpat bug is fixed unbeknownst to us, meaning
# we'd see two RETURNs where there should only be one.
self.make_file("trydom.py", """\
import xml.dom.minidom
XML = '''\\
<!DOCTYPE fooey SYSTEM "http://www.example.com/example.dtd">
<root><child/><child/></root>
'''
def foo():
dom = xml.dom.minidom.parseString(XML)
assert len(dom.getElementsByTagName('child')) == 2
a = 11
foo()
""")
self.make_file("outer.py", "\n"*100 + "import trydom\na = 102\n")
cov = coverage.Coverage()
cov.erase()
# Import the Python file, executing it.
self.start_import_stop(cov, "outer")
_, statements, missing, _ = cov.analysis("trydom.py")
self.assertEqual(statements, [1, 3, 8, 9, 10, 11, 13])
self.assertEqual(missing, [])
_, statements, missing, _ = cov.analysis("outer.py")
self.assertEqual(statements, [101, 102])
self.assertEqual(missing, [])
# Make sure pyexpat isn't recorded as a source file.
# https://bitbucket.org/ned/coveragepy/issues/419/nosource-no-source-for-code-path-to-c
files = cov.get_data().measured_files()
self.assertFalse(
any(f.endswith("pyexpat.c") for f in files),
"Pyexpat.c is in the measured files!: %r:" % (files,)
)
class ExceptionTest(CoverageTest):
"""I suspect different versions of Python deal with exceptions differently
in the trace function.
"""
def test_exception(self):
# Python 2.3's trace function doesn't get called with "return" if the
# scope is exiting due to an exception. This confounds our trace
# function which relies on scope announcements to track which files to
# trace.
#
# This test is designed to sniff this out. Each function in the call
# stack is in a different file, to try to trip up the tracer. Each
# file has active lines in a different range so we'll see if the lines
# get attributed to the wrong file.
self.make_file("oops.py", """\
def oops(args):
a = 2
raise Exception("oops")
a = 4
""")
self.make_file("fly.py", "\n"*100 + """\
def fly(calls):
a = 2
calls[0](calls[1:])
a = 4
""")
self.make_file("catch.py", "\n"*200 + """\
def catch(calls):
try:
a = 3
calls[0](calls[1:])
a = 5
except:
a = 7
""")
self.make_file("doit.py", "\n"*300 + """\
def doit(calls):
try:
calls[0](calls[1:])
except:
a = 5
""")
# Import all the modules before starting coverage, so the def lines
# won't be in all the results.
for mod in "oops fly catch doit".split():
import_local_file(mod)
# Each run nests the functions differently to get different
# combinations of catching exceptions and letting them fly.
runs = [
("doit fly oops", {
'doit.py': [302, 303, 304, 305],
'fly.py': [102, 103],
'oops.py': [2, 3],
}),
("doit catch oops", {
'doit.py': [302, 303],
'catch.py': [202, 203, 204, 206, 207],
'oops.py': [2, 3],
}),
("doit fly catch oops", {
'doit.py': [302, 303],
'fly.py': [102, 103, 104],
'catch.py': [202, 203, 204, 206, 207],
'oops.py': [2, 3],
}),
("doit catch fly oops", {
'doit.py': [302, 303],
'catch.py': [202, 203, 204, 206, 207],
'fly.py': [102, 103],
'oops.py': [2, 3],
}),
]
for callnames, lines_expected in runs:
# Make the list of functions we'll call for this test.
callnames = callnames.split()
calls = [getattr(sys.modules[cn], cn) for cn in callnames]
cov = coverage.Coverage()
cov.start()
# Call our list of functions: invoke the first, with the rest as
# an argument.
calls[0](calls[1:]) # pragma: nested
cov.stop() # pragma: nested
# Clean the line data and compare to expected results.
# The file names are absolute, so keep just the base.
clean_lines = {}
data = cov.get_data()
for callname in callnames:
filename = callname + ".py"
lines = data.lines(abs_file(filename))
clean_lines[filename] = sorted(lines)
if env.JYTHON: # pragma: only jython
# Jython doesn't report on try or except lines, so take those
# out of the expected lines.
invisible = [202, 206, 302, 304]
for lines in lines_expected.values():
lines[:] = [l for l in lines if l not in invisible]
self.assertEqual(clean_lines, lines_expected)
class DoctestTest(CoverageTest):
"""Tests invoked with doctest should measure properly."""
def test_doctest(self):
self.check_coverage('''\
def return_arg_or_void(arg):
"""If <arg> is None, return "Void"; otherwise return <arg>
>>> return_arg_or_void(None)
'Void'
>>> return_arg_or_void("arg")
'arg'
>>> return_arg_or_void("None")
'None'
"""
if arg is None:
return "Void"
else:
return arg
import doctest, sys
doctest.testmod(sys.modules[__name__]) # we're not __main__ :(
''',
[1, 11, 12, 14, 16, 17], "")
class GettraceTest(CoverageTest):
"""Tests that we work properly with `sys.gettrace()`."""
def test_round_trip_in_untraced_function(self):
# https://bitbucket.org/ned/coveragepy/issues/575/running-doctest-prevents-complete-coverage
self.make_file("main.py", """import sample""")
self.make_file("sample.py", """\
from swap import swap_it
def doit():
print(3)
swap_it()
print(5)
def doit_soon():
print(7)
doit()
print(9)
print(10)
doit_soon()
print(12)
""")
self.make_file("swap.py", """\
import sys
def swap_it():
sys.settrace(sys.gettrace())
""")
# Use --source=sample to prevent measurement of swap.py.
cov = coverage.Coverage(source=["sample"])
self.start_import_stop(cov, "main")
self.assertEqual(self.stdout(), "10\n7\n3\n5\n9\n12\n")
_, statements, missing, _ = cov.analysis("sample.py")
self.assertEqual(statements, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
self.assertEqual(missing, [])
def test_setting_new_trace_function(self):
# https://bitbucket.org/ned/coveragepy/issues/436/disabled-coverage-ctracer-may-rise-from
self.check_coverage('''\
import sys
def tracer(frame, event, arg):
print("%s: %s @ %d" % (event, frame.f_code.co_filename, frame.f_lineno))
return tracer
def begin():
sys.settrace(tracer)
def collect():
t = sys.gettrace()
assert t is tracer, t
def test_unsets_trace():
begin()
collect()
old = sys.gettrace()
test_unsets_trace()
sys.settrace(old)
a = 21
b = 22
''',
lines=[1, 3, 4, 5, 7, 8, 10, 11, 12, 14, 15, 16, 18, 19, 20, 21, 22],
missing="4-5, 11-12",
)
out = self.stdout().replace(self.last_module_name, "coverage_test")
self.assertEqual(
out,
(
"call: coverage_test.py @ 10\n"
"line: coverage_test.py @ 11\n"
"line: coverage_test.py @ 12\n"
"return: coverage_test.py @ 12\n"
),
)
@pytest.mark.expensive
def test_atexit_gettrace(self): # pragma: no metacov
# This is not a test of coverage at all, but of our understanding
# of this edge-case behavior in various Pythons.
if env.METACOV:
self.skipTest("Can't set trace functions during meta-coverage")
self.make_file("atexit_gettrace.py", """\
import atexit, sys
def trace_function(frame, event, arg):
return trace_function
sys.settrace(trace_function)
def show_trace_function():
tfn = sys.gettrace()
if tfn is not None:
tfn = tfn.__name__
print(tfn)
atexit.register(show_trace_function)
# This will show what the trace function is at the end of the program.
""")
status, out = self.run_command_status("python atexit_gettrace.py")
self.assertEqual(status, 0)
if env.PYPY and env.PYPYVERSION >= (5, 4):
# Newer PyPy clears the trace function before atexit runs.
self.assertEqual(out, "None\n")
else:
# Other Pythons leave the trace function in place.
self.assertEqual(out, "trace_function\n")
class ExecTest(CoverageTest):
"""Tests of exec."""
def test_correct_filename(self):
# https://bitbucket.org/ned/coveragepy/issues/380/code-executed-by-exec-excluded-from
# Bug was that exec'd files would have their lines attributed to the
# calling file. Make two files, both with ~30 lines, but no lines in
# common. Line 30 in to_exec.py was recorded as line 30 in main.py,
# but now it's fixed. :)
self.make_file("to_exec.py", """\
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
print("var is {}".format(var)) # line 31
""")
self.make_file("main.py", """\
namespace = {'var': 17}
with open("to_exec.py") as to_exec_py:
code = compile(to_exec_py.read(), 'to_exec.py', 'exec')
exec(code, globals(), namespace)
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
print("done") # line 35
""")
cov = coverage.Coverage()
self.start_import_stop(cov, "main")
_, statements, missing, _ = cov.analysis("main.py")
self.assertEqual(statements, [1, 2, 3, 4, 35])
self.assertEqual(missing, [])
_, statements, missing, _ = cov.analysis("to_exec.py")
self.assertEqual(statements, [31])
self.assertEqual(missing, [])
def test_unencodable_filename(self):
# https://github.com/nedbat/coveragepy/issues/891
if env.PYVERSION < (3, 0):
self.skipTest("Python 2 can't seem to compile the file.")
self.make_file("bug891.py", r"""exec(compile("pass", "\udcff.py", "exec"))""")
cov = coverage.Coverage()
self.start_import_stop(cov, "bug891")
# Saving would fail trying to encode \udcff.py
cov.save()
files = [os.path.basename(f) for f in cov.get_data().measured_files()]
assert "bug891.py" in files
class MockingProtectionTest(CoverageTest):
"""Tests about protecting ourselves from aggressive mocking.
https://bitbucket.org/ned/coveragepy/issues/416/coverage-40-is-causing-existing-unit-tests
"""
def test_os_path_exists(self):
# To see if this test still detects the problem, change isolate_module
# in misc.py to simply return its argument. It should fail with a
# StopIteration error.
self.make_file("bug416.py", """\
import os.path
import mock
@mock.patch('os.path.exists')
def test_path_exists(mock_exists):
mock_exists.side_effect = [17]
print("in test")
import bug416a
print(bug416a.foo)
print(os.path.exists("."))
test_path_exists()
""")
self.make_file("bug416a.py", """\
print("bug416a.py")
foo = 23
""")
import py_compile
py_compile.compile("bug416a.py")
out = self.run_command("coverage run bug416.py")
self.assertEqual(out, "in test\nbug416a.py\n23\n17\n")
|
dataset_adapter.py
|
"""Author: Brandon Trabucco
Utility class for serializing ImageMetadata objects into a dataset.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import random
import threading
import sys
import os.path
from datetime import datetime
random.seed(1234567)
np.random.seed(1234567)
from image_annotations.abstract import Abstract
class DatasetAdaptor(Abstract):
"""Utility class to serialize the dataset to tensorflow.
"""
def __init__(self, images, output_dir, num_threads=8):
"""Initiialize the class.
"""
self.images = images
self.output_dir = output_dir
self.num_threads = num_threads
def start(self):
"""Process the images into tensorflow protos.
"""
self._process_dataset("train", self.images, 32)
def _int64_feature(self, value):
"""Wrapper for inserting an int64 Feature into a SequenceExample proto."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _float_feature(self, value):
"""Wrapper for inserting a float Feature into a SequenceExample proto."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _bytes_feature(self, value):
"""Wrapper for inserting a bytes Feature into a SequenceExample proto."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[str(value)]))
def _int64_feature_list(self, values):
"""Wrapper for inserting an int64 FeatureList into a SequenceExample proto."""
return tf.train.FeatureList(feature=[self._int64_feature(v) for v in values])
def _float_feature_list(self, values):
"""Wrapper for inserting a float FeatureList into a SequenceExample proto."""
return tf.train.FeatureList(feature=[self._float_feature(v) for v in values])
def _bytes_feature_list(self, values):
"""Wrapper for inserting a bytes FeatureList into a SequenceExample proto."""
return tf.train.FeatureList(feature=[self._bytes_feature(v) for v in values])
def _to_sequence_example(self, image):
"""Builds a SequenceExample proto for an image.
Args:
image: An ImageMetadata object.
Returns:
A SequenceExample proto.
"""
context = tf.train.Features(feature={
"image/video_id": self._int64_feature(image.video_id),
"image/image_id": self._int64_feature(image.image_id),
})
feature_lists = tf.train.FeatureLists(feature_list={
"image/xs": self._float_feature_list(image.xs),
"image/ys": self._float_feature_list(image.ys),
"image/image": self._float_feature_list(image.image.flatten()),
"image/shape": self._int64_feature_list(image.image.shape)
})
sequence_example = tf.train.SequenceExample(
context=context, feature_lists=feature_lists)
return sequence_example
def _process_images(self, thread_index, ranges, name, images, num_shards):
"""Processes and saves a subset of sentences as TFRecord files in one thread.
Args:
thread_index: Integer thread identifier within [0, len(ranges)].
ranges: A list of pairs of integers specifying the ranges of the dataset to
process in parallel.
name: Unique identifier specifying the dataset.
            images: List of ImageMetadata.
num_shards: Integer number of shards for the output files.
"""
# Each thread produces N shards where N = num_shards / num_threads. For
# instance, if num_shards = 128, and num_threads = 2, then the first thread
# would produce shards [0, 64).
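        # With the values used in this file (num_shards = 32 passed from
        # _process_dataset("train", ..., 32) and at most num_threads = 8),
        # each thread ends up writing 4 shard files.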
num_threads = len(ranges)
assert not num_shards % num_threads
num_shards_per_batch = int(num_shards / num_threads)
shard_ranges = np.linspace(ranges[thread_index][0], ranges[thread_index][1],
num_shards_per_batch + 1).astype(int)
num_images_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
counter = 0
for s in range(num_shards_per_batch):
# Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
shard = thread_index * num_shards_per_batch + s
output_filename = "%s-%.5d-of-%.5d" % (name, shard, num_shards)
output_file = os.path.join(self.output_dir, output_filename)
writer = tf.python_io.TFRecordWriter(output_file)
shard_counter = 0
images_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
for i in images_in_shard:
image = images[i]
sequence_example = self._to_sequence_example(image)
if sequence_example is not None:
writer.write(sequence_example.SerializeToString())
shard_counter += 1
counter += 1
if not counter % 1000:
print("%s [thread %d]: Processed %d of %d items in thread batch." %
(datetime.now(), thread_index, counter, num_images_in_thread))
sys.stdout.flush()
writer.close()
print("%s [thread %d]: Wrote %d images to %s" %
(datetime.now(), thread_index, shard_counter, output_file))
sys.stdout.flush()
shard_counter = 0
print("%s [thread %d]: Wrote %d images to %d shards." %
(datetime.now(), thread_index, counter, num_shards_per_batch))
sys.stdout.flush()
def _process_dataset(self, name, images, num_shards):
"""Processes a complete data set and saves it as a TFRecord.
Args:
name: Unique identifier specifying the dataset.
images: List of ImageMetadata.
num_shards: Integer number of shards for the output files.
"""
# Shuffle the ordering of images. Make the randomization repeatable.
random.seed(12345)
random.shuffle(images)
        # Break the images into num_threads batches. Batch i is defined as
        # images[ranges[i][0]:ranges[i][1]].
num_threads = min(num_shards, self.num_threads)
        spacing = np.linspace(0, len(images), num_threads + 1).astype(int)
ranges = []
threads = []
for i in range(len(spacing) - 1):
ranges.append([spacing[i], spacing[i + 1]])
# Create a mechanism for monitoring when all threads are finished.
coord = tf.train.Coordinator()
# Launch a thread for each batch.
print("Launching %d threads for spacings: %s" % (num_threads, ranges))
for thread_index in range(len(ranges)):
args = (thread_index, ranges, name, images, num_shards)
t = threading.Thread(target=self._process_images, args=args)
t.start()
threads.append(t)
# Wait for all the threads to terminate.
coord.join(threads)
print("%s: Finished processing all %d images in data set '%s'." %
(datetime.now(), len(images), name))
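
# Illustrative sketch (not used by DatasetAdaptor itself): one way the records
# written above could be read back with the TF1 parsing ops. The feature keys
# mirror _to_sequence_example; the helper name and the reshape step are
# assumptions, not part of the original pipeline.
def _parse_sequence_example_sketch(serialized):
    """Decode a single serialized SequenceExample produced by this adaptor."""
    context, sequence = tf.parse_single_sequence_example(
        serialized,
        context_features={
            "image/video_id": tf.FixedLenFeature([], dtype=tf.int64),
            "image/image_id": tf.FixedLenFeature([], dtype=tf.int64),
        },
        sequence_features={
            "image/xs": tf.FixedLenSequenceFeature([], dtype=tf.float32),
            "image/ys": tf.FixedLenSequenceFeature([], dtype=tf.float32),
            "image/image": tf.FixedLenSequenceFeature([], dtype=tf.float32),
            "image/shape": tf.FixedLenSequenceFeature([], dtype=tf.int64),
        })
    # The pixels were flattened before serialization, so restore the shape.
    image = tf.reshape(sequence["image/image"],
                       tf.cast(sequence["image/shape"], tf.int32))
    return context, sequence, image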
|
subprocess_server.py
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pytype: skip-file
import contextlib
import logging
import os
import re
import shutil
import signal
import socket
import subprocess
import tempfile
import threading
import time
from urllib.error import URLError
from urllib.request import urlopen
import grpc
from apache_beam.version import __version__ as beam_version
_LOGGER = logging.getLogger(__name__)
class SubprocessServer(object):
"""An abstract base class for running GRPC Servers as an external process.
This class acts as a context which will start up a server, provides a stub
to connect to it, and then shuts the server down. For example::
with SubprocessServer(GrpcStubClass, [executable, arg, ...]) as stub:
stub.CallService(...)
"""
def __init__(self, stub_class, cmd, port=None):
"""Creates the server object.
:param stub_class: the auto-generated GRPC client stub class used for
connecting to the GRPC service
:param cmd: command (including arguments) for starting up the server,
            suitable for passing to `subprocess.Popen`.
:param port: (optional) the port at which the subprocess will serve its
service. If not given, one will be randomly chosen and the special
string "{{PORT}}" will be substituted in the command line arguments
with the chosen port.
"""
self._process_lock = threading.RLock()
self._process = None
self._stub_class = stub_class
self._cmd = [str(arg) for arg in cmd]
        self._port = port
        self._local_temp_root = None  # default temp root used by local_temp_dir
def __enter__(self):
return self.start()
def __exit__(self, *unused_args):
self.stop()
def start(self):
try:
endpoint = self.start_process()
wait_secs = .1
channel_options = [("grpc.max_receive_message_length", -1),
("grpc.max_send_message_length", -1)]
channel = grpc.insecure_channel(endpoint, options=channel_options)
channel_ready = grpc.channel_ready_future(channel)
while True:
if self._process is not None and self._process.poll() is not None:
_LOGGER.error("Starting job service with %s", self._process.args)
raise RuntimeError(
'Service failed to start up with error %s' % self._process.poll())
try:
channel_ready.result(timeout=wait_secs)
break
except (grpc.FutureTimeoutError, grpc.RpcError):
wait_secs *= 1.2
logging.log(
logging.WARNING if wait_secs > 1 else logging.DEBUG,
'Waiting for grpc channel to be ready at %s.',
endpoint)
return self._stub_class(channel)
except: # pylint: disable=bare-except
_LOGGER.exception("Error bringing up service")
self.stop()
raise
def start_process(self):
with self._process_lock:
if self._process:
self.stop()
if self._port:
port = self._port
cmd = self._cmd
else:
port, = pick_port(None)
cmd = [arg.replace('{{PORT}}', str(port)) for arg in self._cmd]
endpoint = 'localhost:%s' % port
_LOGGER.info("Starting service with %s", str(cmd).replace("',", "'"))
self._process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Emit the output of this command as info level logging.
def log_stdout():
line = self._process.stdout.readline()
while line:
# Remove newline via rstrip() to not print an empty line
_LOGGER.info(line.rstrip())
line = self._process.stdout.readline()
t = threading.Thread(target=log_stdout)
t.daemon = True
t.start()
return endpoint
def stop(self):
self.stop_process()
def stop_process(self):
with self._process_lock:
if not self._process:
return
for _ in range(5):
if self._process.poll() is not None:
break
logging.debug("Sending SIGINT to job_server")
self._process.send_signal(signal.SIGINT)
time.sleep(1)
if self._process.poll() is None:
self._process.kill()
self._process = None
def local_temp_dir(self, **kwargs):
return tempfile.mkdtemp(dir=self._local_temp_root, **kwargs)
class JavaJarServer(SubprocessServer):
APACHE_REPOSITORY = 'https://repo.maven.apache.org/maven2'
BEAM_GROUP_ID = 'org.apache.beam'
JAR_CACHE = os.path.expanduser("~/.apache_beam/cache/jars")
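    # _BEAM_SERVICES is an instance of a threading.local subclass whose
    # per-thread __init__ gives every thread its own 'replacements' dict;
    # beam_services() below swaps entries in and out of that dict, and
    # path_to_beam_jar consults it before resolving a jar.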
_BEAM_SERVICES = type(
'local', (threading.local, ),
dict(__init__=lambda self: setattr(self, 'replacements', {})))()
def __init__(self, stub_class, path_to_jar, java_arguments):
super(JavaJarServer, self).__init__(
stub_class, ['java', '-jar', path_to_jar] + list(java_arguments))
self._existing_service = path_to_jar if _is_service_endpoint(
path_to_jar) else None
def start_process(self):
if self._existing_service:
return self._existing_service
else:
if not shutil.which('java'):
raise RuntimeError(
'Java must be installed on this system to use this '
'transform/runner.')
return super(JavaJarServer, self).start_process()
def stop_process(self):
if self._existing_service:
pass
else:
return super(JavaJarServer, self).stop_process()
@classmethod
def jar_name(cls, artifact_id, version, classifier=None, appendix=None):
return '-'.join(
filter(None, [artifact_id, appendix, version, classifier])) + '.jar'
@classmethod
def path_to_maven_jar(
cls,
artifact_id,
group_id,
version,
repository=APACHE_REPOSITORY,
classifier=None,
appendix=None):
return '/'.join([
repository,
group_id.replace('.', '/'),
artifact_id,
version,
cls.jar_name(artifact_id, version, classifier, appendix)
])
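    # For example (derived from the string-joining above; the values are
    # illustrative only):
    #   path_to_maven_jar('beam-sdks-java-io-expansion-service',
    #                     'org.apache.beam', '2.30.0')
    #   -> 'https://repo.maven.apache.org/maven2/org/apache/beam/'
    #      'beam-sdks-java-io-expansion-service/2.30.0/'
    #      'beam-sdks-java-io-expansion-service-2.30.0.jar'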
@classmethod
def path_to_beam_jar(
cls,
gradle_target,
appendix=None,
version=beam_version,
artifact_id=None):
if gradle_target in cls._BEAM_SERVICES.replacements:
return cls._BEAM_SERVICES.replacements[gradle_target]
gradle_package = gradle_target.strip(':').rsplit(':', 1)[0]
if not artifact_id:
artifact_id = 'beam-' + gradle_package.replace(':', '-')
project_root = os.path.sep.join(
os.path.abspath(__file__).split(os.path.sep)[:-5])
local_path = os.path.join(
project_root,
gradle_package.replace(':', os.path.sep),
'build',
'libs',
cls.jar_name(
artifact_id,
version.replace('.dev', ''),
classifier='SNAPSHOT',
appendix=appendix))
if os.path.exists(local_path):
_LOGGER.info('Using pre-built snapshot at %s', local_path)
return local_path
elif '.dev' in version:
# TODO: Attempt to use nightly snapshots?
raise RuntimeError(
(
'%s not found. '
'Please build the server with \n cd %s; ./gradlew %s') %
(local_path, os.path.abspath(project_root), gradle_target))
else:
return cls.path_to_maven_jar(
artifact_id,
cls.BEAM_GROUP_ID,
version,
cls.APACHE_REPOSITORY,
appendix=appendix)
@classmethod
def local_jar(cls, url, cache_dir=None):
if cache_dir is None:
cache_dir = cls.JAR_CACHE
# TODO: Verify checksum?
if _is_service_endpoint(url):
return url
elif os.path.exists(url):
return url
else:
cached_jar = os.path.join(cache_dir, os.path.basename(url))
if os.path.exists(cached_jar):
_LOGGER.info('Using cached job server jar from %s' % url)
else:
_LOGGER.info('Downloading job server jar from %s' % url)
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
# TODO: Clean up this cache according to some policy.
try:
url_read = urlopen(url)
with open(cached_jar + '.tmp', 'wb') as jar_write:
shutil.copyfileobj(url_read, jar_write, length=1 << 20)
os.rename(cached_jar + '.tmp', cached_jar)
except URLError as e:
raise RuntimeError(
'Unable to fetch remote job server jar at %s: %s' % (url, e))
return cached_jar
@classmethod
@contextlib.contextmanager
def beam_services(cls, replacements):
try:
old = cls._BEAM_SERVICES.replacements
cls._BEAM_SERVICES.replacements = dict(old, **replacements)
yield
finally:
cls._BEAM_SERVICES.replacements = old
def _is_service_endpoint(path):
return re.match(r'^[a-zA-Z0-9.-]+:\d+$', path)
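# _is_service_endpoint matches host:port strings such as 'localhost:8099' or
# '10.0.0.1:50051'; that is how JavaJarServer distinguishes an already-running
# service from a path (or URL) to a jar file.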
def pick_port(*ports):
"""
Returns a list of ports, same length as input ports list, but replaces
all None or 0 ports with a random free port.
"""
sockets = []
def find_free_port(port):
if port:
return port
else:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except OSError as e:
# [Errno 97] Address family not supported by protocol
# Likely indicates we are in an IPv6-only environment (BEAM-10618). Try
# again with AF_INET6.
if e.errno == 97:
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
else:
raise e
sockets.append(s)
s.bind(('localhost', 0))
return s.getsockname()[1]
ports = list(map(find_free_port, ports))
# Close sockets only now to avoid the same port to be chosen twice
for s in sockets:
s.close()
return ports
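
# Minimal usage sketch (hypothetical helper, not called anywhere in this
# module): pick_port keeps explicit ports and fills in free ones.
def _pick_port_sketch():
    explicit, chosen = pick_port(8099, None)
    # 'explicit' is 8099 unchanged; 'chosen' is a free port picked by the OS,
    # the same mechanism SubprocessServer uses to substitute '{{PORT}}'.
    return explicit, chosen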
|
example_rpc_client.py
|
import re
import os
from lxml import etree
import time
from PyWeChatSpy.command import *
from PyWeChatSpy.proto import spy_pb2
from threading import Thread
from google.protobuf.descriptor import FieldDescriptor as FD
from rpc_client_tools import *
import logging
from queue import Queue
import zmq  # ZeroMQ, used below to receive messages pushed by the server
contact_list = []
chatroom_list = []
my_response_queue = Queue()
logger = logging.getLogger(__file__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s [%(threadName)s] %(levelname)s: %(message)s')
sh = logging.StreamHandler()
sh.setFormatter(formatter)
sh.setLevel(logging.INFO)
logger.addHandler(sh)
def handle_response():
while True:
data = my_response_queue.get()
data = dict2pb(spy_pb2.Response, data)
if data.type == PROFESSIONAL_KEY:
if not data.code:
logger.warning(data.message)
        elif data.type == WECHAT_CONNECTED:  # WeChat client connected
            print(f"WeChat client connected, port: {data.port}")
            time.sleep(1)
            # spy.get_login_qrcode()  # fetch the login QR code
        elif data.type == HEART_BEAT:  # heartbeat
            pass
        elif data.type == WECHAT_LOGIN:  # WeChat login
            print("WeChat login")
            spy.get_account_details()  # fetch details of the logged-in account
        elif data.type == WECHAT_LOGOUT:  # WeChat logout
            print("WeChat logout")
        elif data.type == CHAT_MESSAGE:  # WeChat chat message
chat_message = spy_pb2.ChatMessage()
chat_message.ParseFromString(data.bytes)
for message in chat_message.message:
                _type = message.type  # message type: 1 = text | 3 = image ... (explore the rest yourself)
                _from = message.wxidFrom.str  # message sender
                _to = message.wxidTo.str  # message recipient
                content = message.content.str  # message content
                _from_group_member = ""
                if _from.endswith("@chatroom"):  # group chat message
                    _from_group_member = message.content.str.split(':\n', 1)[0]  # speaker inside the group
                    content = message.content.str.split(':\n', 1)[-1]  # group message content
                image_overview_size = message.imageOverview.imageSize  # image thumbnail size
                image_overview_bytes = message.imageOverview.imageBytes  # image thumbnail bytes
# with open("img.jpg", "wb") as wf:
# wf.write(image_overview_bytes)
                overview = message.overview  # message preview text
                timestamp = message.timestamp  # message timestamp
                if _type == 1:  # text message
print(_from, _to, _from_group_member, content)
if _to == "filehelper":
spy.send_text("filehelper", "Hello PyWeChatSpy3.0\n" + content)
                elif _type == 3:  # image message
break
# file_path = message.file
# file_path = os.path.join(WECHAT_PROFILE, file_path)
# time.sleep(10)
# spy.decrypt_image(file_path, "a.jpg")
                elif _type == 43:  # video message
                    pass
                elif _type == 49:  # XML payload message
print(_from, _to, message.file)
xml = etree.XML(content)
xml_type = xml.xpath("/msg/appmsg/type/text()")[0]
if xml_type == "5":
xml_title = xml.xpath("/msg/appmsg/title/text()")[0]
print(xml_title)
if xml_title == "邀请你加入群聊":
url = xml.xpath("/msg/appmsg/url/text()")[0]
print(url)
time.sleep(1)
spy.get_group_enter_url(_from, url)
                elif _type == 37:  # friend request
                    print("New friend request")
                    obj = etree.XML(message.content.str)
                    encryptusername, ticket = obj.xpath("/msg/@encryptusername")[0], obj.xpath("/msg/@ticket")[0]
                    spy.accept_new_contact(encryptusername, ticket)  # accept the friend request
        elif data.type == ACCOUNT_DETAILS:  # details of the logged-in account
if data.code:
account_details = spy_pb2.AccountDetails()
account_details.ParseFromString(data.bytes)
print(account_details)
                spy.get_contacts()  # fetch the contact list
else:
logger.warning(data.message)
        elif data.type == CONTACTS_LIST:  # contact list
if data.code:
contacts_list = spy_pb2.Contacts()
contacts_list.ParseFromString(data.bytes)
                for contact in contacts_list.contactDetails:  # iterate over the contact list
                    wxid = contact.wxid.str  # contact wxid
                    nickname = contact.nickname.str  # contact nickname
                    remark = contact.remark.str  # contact remark
                    print(wxid, nickname, remark)
                    if wxid.endswith("chatroom"):  # group chat
                        chatroom_list.append(wxid)  # collect group-chat wxids
                        # spy.get_contact_details("20646587964@chatroom")  # fetch group chat details
else:
logger.error(data.message)
elif data.type == CONTACT_DETAILS:
if data.code:
contact_details_list = spy_pb2.Contacts()
contact_details_list.ParseFromString(data.bytes)
                for contact_details in contact_details_list.contactDetails:  # iterate over contact details
                    wxid = contact_details.wxid.str  # contact wxid
                    nickname = contact_details.nickname.str  # contact nickname
                    remark = contact_details.remark.str  # contact remark
                    if wxid.endswith("chatroom"):  # check whether this is a group chat
                        group_member_list = contact_details.groupMemberList  # group member list
                        member_count = group_member_list.memberCount  # number of group members
                        for group_member in group_member_list.groupMember:  # iterate over group members
                            member_wxid = group_member.wxid  # member wxid
                            member_nickname = group_member.nickname  # member nickname
# print(member_wxid, member_nickname)
pass
else:
logger.error(data.message)
elif data.type == GET_CONTACTS_LIST and not data.code:
logger.error(data.message)
        elif data.type == CREATE_GROUP_CALLBACK:  # group-creation callback
callback = spy_pb2.CreateGroupCallback()
callback.ParseFromString(data.bytes)
print(callback)
        elif data.type == GROUP_MEMBER_DETAILS:  # group member details
group_member_details = spy_pb2.GroupMemberDetails()
group_member_details.ParseFromString(data.bytes)
# print(group_member_details)
        elif data.type == GROUP_MEMBER_EVENT:  # group member join/leave event
group_member_event = spy_pb2.GroupMemberEvent()
group_member_event.ParseFromString(data.bytes)
# print(group_member_event)
        elif data.type == LOGIN_QRCODE:  # login QR code
qrcode = spy_pb2.LoginQRCode()
qrcode.ParseFromString(data.bytes)
with open("qrcode.png", "wb") as _wf:
_wf.write(qrcode.qrcodeBytes)
        elif data.type == GROUP_ENTER_URL:  # group-join link
            group_enter_url = spy_pb2.GroupEnterUrl()
            group_enter_url.ParseFromString(data.bytes)
            print(group_enter_url)
            # To join the group, just POST the link directly:
            # try:
            #     requests.post(group_enter_url.url)
            # except requests.exceptions.InvalidSchema:
            #     pass
            # except Exception as e:
            #     logger.error(f"Failed to join the group: {e}")
else:
print(data)
# Convert a dict into a protobuf message
def dict2pb(cls, adict, strict=False):
"""
Takes a class representing the ProtoBuf Message and fills it with data from
the dict.
"""
obj = cls()
for field in obj.DESCRIPTOR.fields:
if not field.label == field.LABEL_REQUIRED:
continue
if field.has_default_value:
continue
if not field.name in adict:
# raise ConvertException('Field "%s" missing from descriptor dictionary.' % field.name)
print('Field "%s" missing from descriptor dictionary.' % field.name)
if strict:
field_names = set([field.name for field in obj.DESCRIPTOR.fields])
for key in adict.keys():
if key not in field_names:
# raise ConvertException('Key "%s" can not be mapped to field in %s class.' % (key, type(obj)))
print('Key "%s" can not be mapped to field in %s class.' % (key, type(obj)))
for field in obj.DESCRIPTOR.fields:
if not field.name in adict and not field.has_default_value:
continue
cur_value = adict[field.name] if field.name in adict else field.default_value
msg_type = field.message_type
if field.label == FD.LABEL_REPEATED:
if field.type == FD.TYPE_MESSAGE:
for sub_dict in cur_value:
item = getattr(obj, field.name).add()
item.CopyFrom(dict2pb(msg_type._concrete_class, sub_dict))
else:
                # use extend() so repeated scalar fields are actually filled
                # (a bare map() over append is lazy under Python 3)
                getattr(obj, field.name).extend(cur_value)
else:
if field.type == FD.TYPE_MESSAGE:
value = dict2pb(msg_type._concrete_class, cur_value)
getattr(obj, field.name).CopyFrom(value)
else:
setattr(obj, field.name, cur_value)
return obj
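
# Example (field names taken from the Response handling above, illustrative
# only): dict2pb(spy_pb2.Response, {"type": HEART_BEAT, "code": 1}) rebuilds a
# Response message; nested message fields are converted recursively and
# repeated fields element by element.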
# IP address and port from which the server pushes messages
msg_server_address = "tcp://127.0.0.1:5557"
def accept_data(my_queue):
    # Create a ZeroMQ context and PULL socket to receive messages pushed by the server.
    context = zmq.Context()
    puller = context.socket(zmq.PULL)
    puller.connect(msg_server_address)
    while True:
        # receive a message pushed by the server
        data = puller.recv_pyobj()
        # put it on the queue
        my_queue.put(data)
# Start a thread to receive messages pushed by the server
t = Thread(target=accept_data, args=(my_response_queue, ))
t.start()
# Fetch messages from the queue, handle them, and call the server back via RPC
spy = RPCProxy()
handle_response()
|
test_general.py
|
"""
Collection of tests for unified general functions
"""
# global
import os
import math
import time
import einops
import pytest
import threading
import numpy as np
from numbers import Number
from collections.abc import Sequence
import torch.multiprocessing as multiprocessing
# local
import ivy
import ivy.functional.backends.numpy
import ivy.functional.backends.jax
import ivy.functional.backends.tensorflow
import ivy.functional.backends.torch
import ivy.functional.backends.mxnet
import ivy_tests.test_ivy.helpers as helpers
# Helpers #
# --------#
def _get_shape_of_list(lst, shape=()):
if not lst:
return []
if not isinstance(lst, Sequence):
return shape
if isinstance(lst[0], Sequence):
l = len(lst[0])
if not all(len(item) == l for item in lst):
msg = 'not all lists have the same length'
raise ValueError(msg)
shape += (len(lst),)
shape = _get_shape_of_list(lst[0], shape)
return shape
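# For example, _get_shape_of_list([[1., 2.]]) returns (1, 2) and
# _get_shape_of_list([[1., 2.], [3., 4.]]) returns (2, 2); ragged nested lists
# raise a ValueError.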
# Tests #
# ------#
# set_framework
@pytest.mark.parametrize(
"fw_str", ['numpy', 'jax', 'torch', 'mxnet'])
def test_set_framework(fw_str, dev, call):
ivy.set_framework(fw_str)
ivy.unset_framework()
# use_framework
def test_use_within_use_framework(dev, call):
with ivy.functional.backends.numpy.use:
pass
with ivy.functional.backends.jax.use:
pass
with ivy.functional.backends.tensorflow.use:
pass
with ivy.functional.backends.torch.use:
pass
with ivy.functional.backends.mxnet.use:
pass
@pytest.mark.parametrize(
"allow_duplicates", [True, False])
def test_match_kwargs(allow_duplicates):
def func_a(a, b, c=2):
pass
func_b = lambda a, d, e=5: None
class ClassA:
def __init__(self, c, f, g=3):
pass
kwargs = {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6}
kwfa, kwfb, kwca = ivy.match_kwargs(kwargs, func_a, func_b, ClassA, allow_duplicates=allow_duplicates)
if allow_duplicates:
assert kwfa == {'a': 0, 'b': 1, 'c': 2}
assert kwfb == {'a': 0, 'd': 3, 'e': 4}
assert kwca == {'c': 2, 'f': 5, 'g': 6}
else:
assert kwfa == {'a': 0, 'b': 1, 'c': 2}
assert kwfb == {'d': 3, 'e': 4}
assert kwca == {'f': 5, 'g': 6}
# def test_get_referrers_recursive(dev, call):
#
# class SomeClass:
# def __init__(self):
# self.x = [1, 2]
# self.y = [self.x]
#
# some_obj = SomeClass()
# refs = ivy.get_referrers_recursive(some_obj.x)
# ref_keys = refs.keys()
# assert len(ref_keys) == 3
# assert 'repr' in ref_keys
# assert refs['repr'] == '[1,2]'
# y_id = str(id(some_obj.y))
# y_refs = refs[y_id]
# assert y_refs['repr'] == '[[1,2]]'
# some_obj_dict_id = str(id(some_obj.__dict__))
# assert y_refs[some_obj_dict_id] == 'tracked'
# dict_refs = refs[some_obj_dict_id]
# assert dict_refs['repr'] == "{'x':[1,2],'y':[[1,2]]}"
# some_obj_id = str(id(some_obj))
# some_obj_refs = dict_refs[some_obj_id]
# assert some_obj_refs['repr'] == str(some_obj).replace(' ', '')
# assert len(some_obj_refs) == 1
# array
@pytest.mark.parametrize(
"object_in", [[], [0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"from_numpy", [True, False])
def test_array(object_in, dtype, from_numpy, dev, call):
if call in [helpers.mx_call] and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
# to numpy
if from_numpy:
object_in = np.array(object_in)
# smoke test
ret = ivy.array(object_in, dtype, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == np.array(object_in).shape
# value test
assert np.allclose(call(ivy.array, object_in, dtype, dev), np.array(object_in).astype(dtype))
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support string devices
return
# copy array
@pytest.mark.parametrize(
"x", [[0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
def test_copy_array(x, dtype, dev, call):
if call in [helpers.mx_call] and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
# smoke test
x = ivy.array(x, dtype, dev)
ret = ivy.copy_array(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(ivy.to_numpy(ret), ivy.to_numpy(x))
assert id(x) != id(ret)
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support string devices
return
# array_equal
@pytest.mark.parametrize(
"x0_n_x1_n_res", [([0.], [0.], True), ([0.], [1.], False),
([[0.], [1.]], [[0.], [1.]], True),
([[0.], [1.]], [[1.], [2.]], False)])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
def test_array_equal(x0_n_x1_n_res, dtype, dev, call):
if call in [helpers.mx_call] and dtype in ['int16', 'bool']:
        # mxnet does not support int16, nor bool for the broadcast_equal method used here
pytest.skip()
x0, x1, true_res = x0_n_x1_n_res
# smoke test
x0 = ivy.array(x0, dtype, dev)
x1 = ivy.array(x1, dtype, dev)
res = ivy.array_equal(x0, x1)
# type test
assert ivy.is_array(x0)
assert ivy.is_array(x1)
assert isinstance(res, bool) or ivy.is_array(res)
# value test
assert res == true_res
# arrays_equal
@pytest.mark.parametrize(
"xs_n_res", [([[[0.], [1.]], [[0.], [1.]], [[1.], [2.]]], False)])
@pytest.mark.parametrize(
"dtype", ['float32'])
def test_arrays_equal(xs_n_res, dtype, dev, call):
xs, true_res = xs_n_res
# smoke test
x0 = ivy.array(xs[0], dtype, dev)
x1 = ivy.array(xs[1], dtype, dev)
x2 = ivy.array(xs[2], dtype, dev)
res = ivy.arrays_equal([x0, x1, x2])
# type test
assert ivy.is_array(x0)
assert ivy.is_array(x1)
assert ivy.is_array(x2)
assert isinstance(res, bool) or ivy.is_array(res)
# value test
assert res == true_res
# equal
@pytest.mark.parametrize(
"x0_n_x1_n_x2_em_n_res", [([0.], [0.], [0.], False, True),
([0.], [1.], [0.], False, False),
([0.], [1.], [0.], True, [[True, False, True],
[False, True, False],
[True, False, True]]),
({'a': 0}, {'a': 0}, {'a': 1}, True, [[True, True, False],
[True, True, False],
[False, False, True]])])
@pytest.mark.parametrize(
"to_array", [True, False])
def test_equal(x0_n_x1_n_x2_em_n_res, to_array, dev, call):
x0, x1, x2, equality_matrix, true_res = x0_n_x1_n_x2_em_n_res
# smoke test
if isinstance(x0, list) and to_array:
x0 = ivy.array(x0, dev=dev)
x1 = ivy.array(x1, dev=dev)
x2 = ivy.array(x2, dev=dev)
res = ivy.all_equal(x0, x1, x2, equality_matrix=equality_matrix)
# value test
if equality_matrix:
assert np.array_equal(ivy.to_numpy(res), np.array(true_res))
else:
assert res == true_res
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support variable number of input arguments
return
# to_numpy
@pytest.mark.parametrize(
"object_in", [[], [0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_to_numpy(object_in, dtype, tensor_fn, dev, call):
if call in [helpers.mx_call] and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
if call in [helpers.tf_graph_call]:
# to_numpy() requires eager execution
pytest.skip()
# smoke test
ret = ivy.to_numpy(tensor_fn(object_in, dtype, dev))
# type test
assert isinstance(ret, np.ndarray)
# cardinality test
assert ret.shape == np.array(object_in).shape
# value test
assert np.allclose(ivy.to_numpy(tensor_fn(object_in, dtype, dev)), np.array(object_in).astype(dtype))
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support numpy conversion
return
# to_scalar
@pytest.mark.parametrize(
"object_in", [[0.], [[[1]]], [True], [[1.]]])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_to_scalar(object_in, dtype, tensor_fn, dev, call):
if call in [helpers.mx_call] and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
if call in [helpers.tf_graph_call]:
# to_scalar() requires eager execution
pytest.skip()
# smoke test
ret = ivy.to_scalar(tensor_fn(object_in, dtype, dev))
true_val = ivy.to_numpy(ivy.array(object_in, dtype=dtype)).item()
# type test
assert isinstance(ret, type(true_val))
# value test
assert ivy.to_scalar(tensor_fn(object_in, dtype, dev)) == true_val
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support scalar conversion
return
# to_list
@pytest.mark.parametrize(
"object_in", [[], [0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", [None, 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_to_list(object_in, dtype, tensor_fn, dev, call):
if call in [helpers.mx_call] and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
if call in [helpers.tf_graph_call]:
# to_list() requires eager execution
pytest.skip()
# smoke test
ret = ivy.to_list(tensor_fn(object_in, dtype, dev))
# type test
assert isinstance(ret, list)
# cardinality test
assert _get_shape_of_list(ret) == _get_shape_of_list(object_in)
# value test
assert np.allclose(np.asarray(ivy.to_list(tensor_fn(object_in, dtype, dev))),
np.array(object_in).astype(dtype))
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support list conversion
return
# shape
@pytest.mark.parametrize(
"object_in", [[], [0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"as_tensor", [None, True, False])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_shape(object_in, dtype, as_tensor, tensor_fn, dev, call):
# smoke test
if len(object_in) == 0 and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
ret = ivy.shape(tensor_fn(object_in, dtype, dev), as_tensor)
# type test
if as_tensor:
assert ivy.is_array(ret)
else:
assert isinstance(ret, tuple)
ret = ivy.array(ret)
# cardinality test
assert ret.shape[0] == len(np.asarray(object_in).shape)
# value test
assert np.array_equal(ivy.to_numpy(ret), np.asarray(np.asarray(object_in).shape, np.int32))
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support Union
return
# get_num_dims
@pytest.mark.parametrize(
"object_in", [[], [0.], [1], [True], [[1., 2.]]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"as_tensor", [None, True, False])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_get_num_dims(object_in, dtype, as_tensor, tensor_fn, dev, call):
# smoke test
if len(object_in) == 0 and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
ret = ivy.get_num_dims(tensor_fn(object_in, dtype, dev), as_tensor)
# type test
if as_tensor:
assert ivy.is_array(ret)
else:
assert isinstance(ret, int)
ret = ivy.array(ret)
# cardinality test
assert list(ret.shape) == []
# value test
assert np.array_equal(ivy.to_numpy(ret), np.asarray(len(np.asarray(object_in).shape), np.int32))
# compilation test
if call in [helpers.torch_call]:
# pytorch scripting does not support Union
return
# minimum
@pytest.mark.parametrize(
"xy", [([0.7], [0.5]), ([0.7], 0.5), (0.5, [0.7]), ([[0.8, 1.2], [1.5, 0.2]], [0., 1.])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_minimum(xy, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(xy[0], Number) or isinstance(xy[1], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(xy[0], dtype, dev)
y = tensor_fn(xy[1], dtype, dev)
ret = ivy.minimum(x, y)
# type test
assert ivy.is_array(ret)
# cardinality test
if len(x.shape) > len(y.shape):
assert ret.shape == x.shape
else:
assert ret.shape == y.shape
# value test
assert np.array_equal(call(ivy.minimum, x, y), np.asarray(ivy.functional.backends.numpy.minimum(ivy.to_numpy(x), ivy.to_numpy(y))))
# maximum
@pytest.mark.parametrize(
"xy", [([0.7], [0.5]), ([0.7], 0.5), (0.5, [0.7]), ([[0.8, 1.2], [1.5, 0.2]], [0., 1.])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_maximum(xy, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(xy[0], Number) or isinstance(xy[1], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(xy[0], dtype, dev)
y = tensor_fn(xy[1], dtype, dev)
ret = ivy.maximum(x, y)
# type test
assert ivy.is_array(ret)
# cardinality test
if len(x.shape) > len(y.shape):
assert ret.shape == x.shape
else:
assert ret.shape == y.shape
# value test
assert np.array_equal(call(ivy.maximum, x, y), np.asarray(ivy.functional.backends.numpy.maximum(ivy.to_numpy(x), ivy.to_numpy(y))))
# clip
@pytest.mark.parametrize(
"x_min_n_max", [(-0.5, 0., 1.5), ([1.7], [0.5], [1.1]), ([[0.8, 2.2], [1.5, 0.2]], 0.2, 1.4),
([[0.8, 2.2], [1.5, 0.2]], [[1., 1.], [1., 1.]], [[1.1, 2.], [1.1, 2.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_clip(x_min_n_max, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(x_min_n_max[0], Number) or isinstance(x_min_n_max[1], Number) or isinstance(x_min_n_max[2], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x_min_n_max[0], dtype, dev)
min_val = tensor_fn(x_min_n_max[1], dtype, dev)
max_val = tensor_fn(x_min_n_max[2], dtype, dev)
if ((min_val.shape != [] and min_val.shape != [1]) or (max_val.shape != [] and max_val.shape != [1]))\
and call in [helpers.mx_call]:
# mxnet only supports numbers or 0 or 1 dimensional arrays for min and max while performing clip
pytest.skip()
ret = ivy.clip(x, min_val, max_val)
# type test
assert ivy.is_array(ret)
# cardinality test
max_shape = max([x.shape, min_val.shape, max_val.shape], key=lambda x_: len(x_))
assert ret.shape == max_shape
# value test
assert np.array_equal(call(ivy.clip, x, min_val, max_val),
np.asarray(ivy.functional.backends.numpy.clip(ivy.to_numpy(x), ivy.to_numpy(min_val), ivy.to_numpy(max_val))))
# clip_vector_norm
# @pytest.mark.parametrize(
# "x_max_norm_n_p_val_clipped",
# [(-0.5, 0.4, 2., -0.4), ([1.7], 1.5, 3., [1.5]),
# ([[0.8, 2.2], [1.5, 0.2]], 4., 1., [[0.6808511, 1.8723406], [1.2765958, 0.17021278]]),
# ([[0.8, 2.2], [1.5, 0.2]], 2.5, 2., [[0.71749604, 1.9731141], [1.345305, 0.17937401]])])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_clip_vector_norm(x_max_norm_n_p_val_clipped, dtype, tensor_fn, dev, call):
# # smoke test
# if call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x = tensor_fn(x_max_norm_n_p_val_clipped[0], dtype, dev)
# max_norm = x_max_norm_n_p_val_clipped[1]
# p_val = x_max_norm_n_p_val_clipped[2]
# clipped = x_max_norm_n_p_val_clipped[3]
# ret = ivy.clip_vector_norm(x, max_norm, p_val)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == (x.shape if len(x.shape) else (1,))
# # value test
# assert np.allclose(call(ivy.clip_vector_norm, x, max_norm, p_val), np.array(clipped))
# # compilation test
# if call is helpers.torch_call:
# # pytorch jit cannot compile global variables, in this case MIN_DENOMINATOR
# return
# round
@pytest.mark.parametrize(
"x_n_x_rounded", [(-0.51, -1), ([1.7], [2.]), ([[0.8, 2.2], [1.51, 0.2]], [[1., 2.], [2., 0.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_round(x_n_x_rounded, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(x_n_x_rounded[0], Number) or isinstance(x_n_x_rounded[1], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x_n_x_rounded[0], dtype, dev)
ret = ivy.round(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.array_equal(call(ivy.round, x), np.array(x_n_x_rounded[1]))
# floormod
@pytest.mark.parametrize(
"x_n_divisor_n_x_floormod", [(2.5, 2., 0.5), ([10.7], [5.], [0.7]),
([[0.8, 2.2], [1.7, 0.2]], [[0.3, 0.5], [0.4, 0.11]], [[0.2, 0.2], [0.1, 0.09]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_floormod(x_n_divisor_n_x_floormod, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(x_n_divisor_n_x_floormod[0], Number) or isinstance(x_n_divisor_n_x_floormod[1], Number) or
isinstance(x_n_divisor_n_x_floormod[2], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x_n_divisor_n_x_floormod[0], dtype, dev)
divisor = ivy.array(x_n_divisor_n_x_floormod[1], dtype, dev)
ret = ivy.floormod(x, divisor)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.floormod, x, divisor), np.array(x_n_divisor_n_x_floormod[2]))
# ceil
@pytest.mark.parametrize(
"x_n_x_ceiled", [(2.5, 3.), ([10.7], [11.]), ([[3.8, 2.2], [1.7, 0.2]], [[4., 3.], [2., 1.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_ceil(x_n_x_ceiled, dtype, tensor_fn, dev, call):
# smoke test
if (isinstance(x_n_x_ceiled[0], Number) or isinstance(x_n_x_ceiled[1], Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x_n_x_ceiled[0], dtype, dev)
ret = ivy.ceil(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.ceil, x), np.array(x_n_x_ceiled[1]))
# argmax
# @pytest.mark.parametrize(
# "x_n_axis_x_argmax", [([-0.3, 0.1], None, [1]), ([[1.3, 2.6], [2.3, 2.5]], 0, [1, 0]),
# ([[1.3, 2.6], [2.3, 2.5]], 1, [1, 1])])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_argmax(x_n_axis_x_argmax, dtype, tensor_fn, dev, call):
# # smoke test
# x = ivy.array(x_n_axis_x_argmax[0], dtype, dev)
# axis = x_n_axis_x_argmax[1]
# ret = ivy.argmax(x, axis)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert tuple(ret.shape) == (len(x.shape),)
# # value test
# assert np.allclose(call(ivy.argmax, x, axis), np.array(x_n_axis_x_argmax[2]))
# argsort
# @pytest.mark.parametrize(
# "x_n_axis_x_argsort", [([1, 10, 26.9, 2.8, 166.32, 62.3], -1, [0, 3, 1, 2, 5, 4])])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_argsort(x_n_axis_x_argsort, dtype, tensor_fn, dev, call):
# # smoke test
# x = tensor_fn(x_n_axis_x_argsort[0], dtype, dev)
# axis = x_n_axis_x_argsort[1]
# ret = ivy.argsort(x, axis)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert tuple(ret.shape) == (6,)
# # value test
# assert np.allclose(call(ivy.argsort, x, axis), np.array(x_n_axis_x_argsort[2]))
# arange
@pytest.mark.parametrize(
"stop_n_start_n_step", [[10, None, None], [10, 2, None], [10, 2, 2]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_arange(stop_n_start_n_step, dtype, tensor_fn, dev, call):
# smoke test
stop, start, step = stop_n_start_n_step
if (isinstance(stop, Number) or isinstance(start, Number) or isinstance(step, Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
if tensor_fn == helpers.var_fn and call is helpers.torch_call:
# pytorch does not support arange using variables as input
pytest.skip()
args = list()
if stop:
stop = tensor_fn(stop, dtype, dev)
args.append(stop)
if start:
start = tensor_fn(start, dtype, dev)
args.append(start)
if step:
step = tensor_fn(step, dtype, dev)
args.append(step)
ret = ivy.arange(*args, dtype=dtype, dev=dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == (int((ivy.to_list(stop) -
(ivy.to_list(start) if start else 0))/(ivy.to_list(step) if step else 1)),)
# value test
assert np.array_equal(call(ivy.arange, *args, dtype=dtype, dev=dev),
np.asarray(ivy.functional.backends.numpy.arange(*[ivy.to_numpy(arg) for arg in args], dtype=dtype)))
# linspace
@pytest.mark.parametrize(
"start_n_stop_n_num_n_axis", [[1, 10, 100, None], [[[0., 1., 2.]], [[1., 2., 3.]], 150, -1],
[[[[-0.1471, 0.4477, 0.2214]]], [[[-0.3048, 0.3308, 0.2721]]], 6, -2]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_linspace(start_n_stop_n_num_n_axis, dtype, tensor_fn, dev, call):
# smoke test
start, stop, num, axis = start_n_stop_n_num_n_axis
if (isinstance(start, Number) or isinstance(stop, Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
start = tensor_fn(start, dtype, dev)
stop = tensor_fn(stop, dtype, dev)
ret = ivy.linspace(start, stop, num, axis, dev=dev)
# type test
assert ivy.is_array(ret)
# cardinality test
target_shape = list(start.shape)
target_shape.insert(axis + 1 if (axis and axis != -1) else len(target_shape), num)
assert ret.shape == tuple(target_shape)
# value test
assert np.allclose(call(ivy.linspace, start, stop, num, axis, dev=dev),
np.asarray(ivy.functional.backends.numpy.linspace(ivy.to_numpy(start), ivy.to_numpy(stop), num, axis)))
# logspace
@pytest.mark.parametrize(
"start_n_stop_n_num_n_base_n_axis", [[1, 10, 100, 10., None], [[[0., 1., 2.]], [[1., 2., 3.]], 150, 2., -1],
[[[[-0.1471, 0.4477, 0.2214]]], [[[-0.3048, 0.3308, 0.2721]]], 6, 5., -2]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_logspace(start_n_stop_n_num_n_base_n_axis, dtype, tensor_fn, dev, call):
# smoke test
start, stop, num, base, axis = start_n_stop_n_num_n_base_n_axis
if (isinstance(start, Number) or isinstance(stop, Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
start = tensor_fn(start, dtype, dev)
stop = tensor_fn(stop, dtype, dev)
ret = ivy.logspace(start, stop, num, base, axis, dev=dev)
# type test
assert ivy.is_array(ret)
# cardinality test
target_shape = list(start.shape)
target_shape.insert(axis + 1 if (axis and axis != -1) else len(target_shape), num)
assert ret.shape == tuple(target_shape)
# value test
assert np.allclose(call(ivy.logspace, start, stop, num, base, axis, dev=dev),
ivy.functional.backends.numpy.logspace(ivy.to_numpy(start), ivy.to_numpy(stop), num, base, axis))
# concatenate
@pytest.mark.parametrize(
"x1_n_x2_n_axis", [(1, 10, 0), ([[0., 1., 2.]], [[1., 2., 3.]], 0), ([[0., 1., 2.]], [[1., 2., 3.]], 1),
([[[-0.1471, 0.4477, 0.2214]]], [[[-0.3048, 0.3308, 0.2721]]], -1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_concatenate(x1_n_x2_n_axis, dtype, tensor_fn, dev, call):
# smoke test
x1, x2, axis = x1_n_x2_n_axis
if (isinstance(x1, Number) or isinstance(x2, Number)) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x1 = tensor_fn(x1, dtype, dev)
x2 = tensor_fn(x2, dtype, dev)
ret = ivy.concatenate((x1, x2), axis)
# type test
assert ivy.is_array(ret)
# cardinality test
axis_val = (axis % len(x1.shape) if (axis is not None and len(x1.shape) != 0) else len(x1.shape) - 1)
if x1.shape == ():
expected_shape = (2,)
else:
expected_shape = tuple([item * 2 if i == axis_val else item for i, item in enumerate(x1.shape)])
assert ret.shape == expected_shape
# value test
assert np.allclose(call(ivy.concatenate, [x1, x2], axis),
np.asarray(ivy.functional.backends.numpy.concatenate([ivy.to_numpy(x1), ivy.to_numpy(x2)], axis)))
# flip
# @pytest.mark.parametrize(
# "x_n_axis_n_bs", [(1, 0, None), ([[0., 1., 2.]], None, (1, 3)), ([[0., 1., 2.]], 1, (1, 3)),
# ([[[-0.1471, 0.4477, 0.2214]]], None, None)])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_flip(x_n_axis_n_bs, dtype, tensor_fn, dev, call):
# # smoke test
# x, axis, bs = x_n_axis_n_bs
# if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x = tensor_fn(x, dtype, dev)
# ret = ivy.flip(x, axis, bs)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == x.shape
# # value test
# assert np.allclose(call(ivy.flip, x, axis, bs), np.asarray(ivy.functional.backends.numpy.flip(ivy.to_numpy(x), axis, bs)))
# stack
# @pytest.mark.parametrize(
# "xs_n_axis", [((1, 0), -1), (([[0., 1., 2.]], [[3., 4., 5.]]), 0), (([[0., 1., 2.]], [[3., 4., 5.]]), 1)])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_stack(xs_n_axis, dtype, tensor_fn, dev, call):
# # smoke test
# (x1, x2), axis = xs_n_axis
# if (isinstance(x1, Number) or isinstance(x2, Number)) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x1 = tensor_fn(x1, dtype, dev)
# x2 = tensor_fn(x2, dtype, dev)
# ret = ivy.stack((x1, x2), axis)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# axis_val = (axis % len(x1.shape) if (axis is not None and len(x1.shape) != 0) else len(x1.shape) - 1)
# if x1.shape == ():
# expected_shape = (2,)
# else:
# expected_shape = list(x1.shape)
# expected_shape.insert(axis_val, 2)
# assert ret.shape == tuple(expected_shape)
# # value test
# assert np.allclose(call(ivy.stack, (x1, x2), axis),
# np.asarray(ivy.functional.backends.numpy.stack((ivy.to_numpy(x1), ivy.to_numpy(x2)), axis)))
# unstack
@pytest.mark.parametrize(
"x_n_axis", [(1, -1), ([[0., 1., 2.]], 0), ([[0., 1., 2.]], 1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_unstack(x_n_axis, dtype, tensor_fn, dev, call):
# smoke test
x, axis = x_n_axis
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.unstack(x, axis)
# type test
assert isinstance(ret, list)
# cardinality test
axis_val = (axis % len(x.shape) if (axis is not None and len(x.shape) != 0) else len(x.shape) - 1)
if x.shape == ():
expected_shape = ()
else:
expected_shape = list(x.shape)
expected_shape.pop(axis_val)
assert ret[0].shape == tuple(expected_shape)
# value test
assert np.allclose(call(ivy.unstack, x, axis), np.asarray(ivy.functional.backends.numpy.unstack(ivy.to_numpy(x), axis)))
# split
@pytest.mark.parametrize(
"x_n_noss_n_axis_n_wr", [(1, 1, -1, False),
([[0., 1., 2., 3.]], 2, 1, False),
([[0., 1., 2.], [3., 4., 5.]], 2, 0, False),
([[0., 1., 2.], [3., 4., 5.]], 2, 1, True),
([[0., 1., 2.], [3., 4., 5.]], [2, 1], 1, False)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_split(x_n_noss_n_axis_n_wr, dtype, tensor_fn, dev, call):
# smoke test
x, num_or_size_splits, axis, with_remainder = x_n_noss_n_axis_n_wr
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.split(x, num_or_size_splits, axis, with_remainder)
# type test
assert isinstance(ret, list)
# cardinality test
axis_val = (axis % len(x.shape) if (axis is not None and len(x.shape) != 0) else len(x.shape) - 1)
if x.shape == ():
expected_shape = ()
elif isinstance(num_or_size_splits, int):
expected_shape = tuple([math.ceil(item/num_or_size_splits) if i == axis_val else item
for i, item in enumerate(x.shape)])
else:
expected_shape = tuple([num_or_size_splits[0] if i == axis_val else item for i, item in enumerate(x.shape)])
assert ret[0].shape == expected_shape
# value test
pred_split = call(ivy.split, x, num_or_size_splits, axis, with_remainder)
true_split = ivy.functional.backends.numpy.split(ivy.to_numpy(x), num_or_size_splits, axis, with_remainder)
for pred, true in zip(pred_split, true_split):
assert np.allclose(pred, true)
# compilation test
if call is helpers.torch_call:
# pytorch scripting does not support Union or Numbers for type hinting
return
# repeat
@pytest.mark.parametrize(
"x_n_reps_n_axis", [(1, [1], 0), (1, 2, -1), (1, [2], None), ([[0., 1., 2., 3.]], (2, 1, 0, 3), -1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_repeat(x_n_reps_n_axis, dtype, tensor_fn, dev, call):
# smoke test
x, reps_raw, axis = x_n_reps_n_axis
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
if not isinstance(reps_raw, int) and call is helpers.mx_call:
# mxnet repeat only supports integer repeats
pytest.skip()
x = tensor_fn(x, dtype, dev)
x_shape = list(x.shape)
if call not in [helpers.jnp_call, helpers.torch_call]:
# jax and pytorch repeat do not support repeats specified as lists
ret_from_list = ivy.repeat(x, reps_raw, axis)
reps = ivy.array(reps_raw, 'int32', dev)
if call is helpers.mx_call:
        # mxnet only supports repeats defined as an int
ret = ivy.repeat(x, reps_raw, axis)
else:
ret = ivy.repeat(x, reps, axis)
# type test
assert ivy.is_array(ret)
# cardinality test
if x.shape == ():
expected_shape = [reps_raw] if isinstance(reps_raw, int) else list(reps_raw)
else:
axis_wrapped = axis % len(x_shape)
expected_shape = x_shape[0:axis_wrapped] + [sum(reps_raw)] + x_shape[axis_wrapped+1:]
assert list(ret.shape) == expected_shape
# value test
if call is helpers.mx_call:
        # mxnet only supports repeats defined as an int
assert np.allclose(call(ivy.repeat, x, reps_raw, axis),
np.asarray(ivy.functional.backends.numpy.repeat(ivy.to_numpy(x), ivy.to_numpy(reps), axis)))
else:
assert np.allclose(call(ivy.repeat, x, reps, axis),
np.asarray(ivy.functional.backends.numpy.repeat(ivy.to_numpy(x), ivy.to_numpy(reps), axis)))
# tile
# @pytest.mark.parametrize(
# "x_n_reps", [(1, [1]), (1, 2), (1, [2]), ([[0., 1., 2., 3.]], (2, 1))])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_tile(x_n_reps, dtype, tensor_fn, dev, call):
# # smoke test
# x, reps_raw = x_n_reps
# if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x = tensor_fn(x, dtype, dev)
# ret_from_list = ivy.tile(x, reps_raw)
# reps = ivy.array(reps_raw, 'int32', dev)
# ret = ivy.tile(x, reps)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# if x.shape == ():
# expected_shape = tuple(reps_raw) if isinstance(reps_raw, list) else (reps_raw,)
# else:
# expected_shape = tuple([int(item * rep) for item, rep in zip(x.shape, reps_raw)])
# assert ret.shape == expected_shape
# # value test
# assert np.allclose(call(ivy.tile, x, reps),
# np.asarray(ivy.functional.backends.numpy.tile(ivy.to_numpy(x), ivy.to_numpy(reps))))
# zero_pad
@pytest.mark.parametrize(
"x_n_pw", [(1, [[1, 1]]), (1, [[0, 0]]), ([[0., 1., 2., 3.]], [[0, 1], [1, 2]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_zero_pad(x_n_pw, dtype, tensor_fn, dev, call):
# smoke test
x, pw_raw = x_n_pw
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret_from_list = ivy.zero_pad(x, pw_raw)
pw = ivy.array(pw_raw, 'int32', dev)
ret = ivy.zero_pad(x, pw)
# type test
assert ivy.is_array(ret)
# cardinality test
x_shape = [1] if x.shape == () else x.shape
expected_shape = tuple([int(item + pw_[0] + pw_[1]) for item, pw_ in zip(x_shape, pw_raw)])
assert ret.shape == expected_shape
# value test
assert np.allclose(call(ivy.zero_pad, x, pw), ivy.functional.backends.numpy.zero_pad(ivy.to_numpy(x), ivy.to_numpy(pw)))
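# Worked shape example for the zero_pad case above (my reading of the parametrized
# data, not an additional assertion): x = [[0., 1., 2., 3.]] with pad widths
# [[0, 1], [1, 2]] adds 0 rows before / 1 after and 1 column before / 2 after,
# so the (1, 4) input becomes a (2, 7) output, matching the expected_shape formula.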
# fourier_encode
@pytest.mark.parametrize(
"x_n_mf_n_nb_n_gt", [([2.], 4., 4, [[2.0000000e+00, 1.7484555e-07, 9.9805772e-01,-5.2196848e-01,
3.4969111e-07, 1.0000000e+00, -6.2295943e-02, -8.5296476e-01, 1.0000000e+00]]),
([[1., 2.], [3., 4.], [5., 6.]], [2., 4.], 4,
[[[1.0000000e+00, -8.7422777e-08, -8.7422777e-08, -8.7422777e-08,
-8.7422777e-08, -1.0000000e+00, -1.0000000e+00, -1.0000000e+00,
-1.0000000e+00],
[2.0000000e+00, 1.7484555e-07, 9.9805772e-01, -5.2196848e-01,
-6.0398321e-07, 1.0000000e+00, -6.2295943e-02, -8.5296476e-01,
1.0000000e+00]],
[[3.0000000e+00, -2.3849761e-08, -2.3849761e-08, -2.3849761e-08,
-2.3849761e-08, -1.0000000e+00, -1.0000000e+00, -1.0000000e+00,
-1.0000000e+00],
[4.0000000e+00, 3.4969111e-07, -1.2434989e-01, 8.9044148e-01,
-1.2079664e-06, 1.0000000e+00, -9.9223840e-01, 4.5509776e-01,
1.0000000e+00]],
[[5.0000000e+00, -6.7553248e-07, -6.7553248e-07, -6.7553248e-07,
-6.7553248e-07, -1.0000000e+00, -1.0000000e+00, -1.0000000e+00,
-1.0000000e+00],
[6.0000000e+00, 4.7699523e-08, -9.8256493e-01, -9.9706185e-01,
-3.7192983e-06, 1.0000000e+00, 1.8591987e-01, 7.6601014e-02,
1.0000000e+00]]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_fourier_encode(x_n_mf_n_nb_n_gt, dtype, tensor_fn, dev, call):
# smoke test
x, max_freq, num_bands, ground_truth = x_n_mf_n_nb_n_gt
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
if isinstance(max_freq, list):
max_freq = tensor_fn(max_freq, dtype, dev)
ret = ivy.fourier_encode(x, max_freq, num_bands)
# type test
assert ivy.is_array(ret)
# cardinality test
x_shape = [1] if x.shape == () else list(x.shape)
expected_shape = x_shape + [1 + 2*num_bands]
assert list(ret.shape) == expected_shape
# value test
assert np.allclose(call(ivy.fourier_encode, x, max_freq, num_bands), np.array(ground_truth), atol=1e-5)
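# Note on the cardinality check above (my summary of what the shape formula and the
# ground-truth rows imply, not an extra assertion): the encoding concatenates the raw
# value with num_bands sine features and num_bands cosine features along a new
# trailing axis, which is why the last dimension is 1 + 2 * num_bands (9 for
# num_bands=4 in the parametrization).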
# constant_pad
@pytest.mark.parametrize(
"x_n_pw_n_val", [(1, [[1, 1]], 1.5), (1, [[0, 0]], -2.7), ([[0., 1., 2., 3.]], [[0, 1], [1, 2]], 11.)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_constant_pad(x_n_pw_n_val, dtype, tensor_fn, dev, call):
# smoke test
x, pw_raw, val = x_n_pw_n_val
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret_from_list = ivy.constant_pad(x, pw_raw, val)
pw = ivy.array(pw_raw, 'int32', dev)
ret = ivy.constant_pad(x, pw, val)
# type test
assert ivy.is_array(ret)
# cardinality test
x_shape = [1] if x.shape == () else x.shape
expected_shape = tuple([int(item + pw_[0] + pw_[1]) for item, pw_ in zip(x_shape, pw_raw)])
assert ret.shape == expected_shape
# value test
assert np.allclose(call(ivy.constant_pad, x, pw, val),
np.asarray(ivy.functional.backends.numpy.constant_pad(ivy.to_numpy(x), ivy.to_numpy(pw), val)))
# swapaxes
@pytest.mark.parametrize(
"x_n_ax0_n_ax1", [([[1.]], 0, 1), ([[0., 1., 2., 3.]], 1, 0), ([[[0., 1., 2.], [3., 4., 5.]]], -2, -1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_swapaxes(x_n_ax0_n_ax1, dtype, tensor_fn, dev, call):
# smoke test
x, ax0, ax1 = x_n_ax0_n_ax1
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.swapaxes(x, ax0, ax1)
# type test
assert ivy.is_array(ret)
# cardinality test
expected_shape = list(x.shape)
expected_shape[ax0], expected_shape[ax1] = expected_shape[ax1], expected_shape[ax0]
assert ret.shape == tuple(expected_shape)
# value test
assert np.allclose(call(ivy.swapaxes, x, ax0, ax1),
np.asarray(ivy.functional.backends.numpy.swapaxes(ivy.to_numpy(x), ax0, ax1)))
# transpose
@pytest.mark.parametrize(
"x_n_axes", [([[1.]], [1, 0]), ([[0., 1., 2., 3.]], [1, 0]), ([[[0., 1., 2.], [3., 4., 5.]]], [0, 2, 1])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_transpose(x_n_axes, dtype, tensor_fn, dev, call):
# smoke test
x, axes = x_n_axes
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.transpose(x, axes)
# type test
assert ivy.is_array(ret)
# cardinality test
x_shape = x.shape
assert ret.shape == tuple([x.shape[idx] for idx in axes])
# value test
assert np.allclose(call(ivy.transpose, x, axes), np.asarray(ivy.functional.backends.numpy.transpose(ivy.to_numpy(x), axes)))
# expand_dims
@pytest.mark.parametrize(
"x_n_axis", [(1., 0), (1., -1), ([1.], 0), ([[0., 1., 2., 3.]], -2), ([[[0., 1., 2.], [3., 4., 5.]]], -3)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_expand_dims(x_n_axis, dtype, tensor_fn, dev, call):
# smoke test
x, axis = x_n_axis
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.expand_dims(x, axis)
# type test
assert ivy.is_array(ret)
# cardinality test
expected_shape = list(x.shape)
expected_shape.insert(axis, 1)
assert ret.shape == tuple(expected_shape)
# value test
assert np.allclose(call(ivy.expand_dims, x, axis), np.asarray(ivy.functional.backends.numpy.expand_dims(ivy.to_numpy(x), axis)))
# where
@pytest.mark.parametrize(
"cond_n_x1_n_x2", [(True, 2., 3.), (0., 2., 3.), ([True], [2.], [3.]), ([[0.]], [[2., 3.]], [[4., 5.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_where(cond_n_x1_n_x2, dtype, tensor_fn, dev, call):
# smoke test
cond, x1, x2 = cond_n_x1_n_x2
if (isinstance(cond, Number) or isinstance(x1, Number) or isinstance(x2, Number))\
and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
cond = tensor_fn(cond, dtype, dev)
x1 = tensor_fn(x1, dtype, dev)
x2 = tensor_fn(x2, dtype, dev)
ret = ivy.where(cond, x1, x2)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x1.shape
# value test
assert np.allclose(call(ivy.where, cond, x1, x2),
np.asarray(ivy.functional.backends.numpy.where(ivy.to_numpy(cond), ivy.to_numpy(x1), ivy.to_numpy(x2))))
# indices_where
@pytest.mark.parametrize(
"x", [[True], [[0., 1.], [2., 3.]]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_indices_where(x, dtype, tensor_fn, dev, call):
# smoke test
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.indices_where(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert len(ret.shape) == 2
assert ret.shape[-1] == len(x.shape)
# value test
assert np.allclose(call(ivy.indices_where, x), np.asarray(ivy.functional.backends.numpy.indices_where(ivy.to_numpy(x))))
# isnan
@pytest.mark.parametrize(
"x_n_res", [([True], [False]),
([[0., float('nan')], [float('nan'), 3.]],
[[False, True], [True, False]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_isnan(x_n_res, dtype, tensor_fn, dev, call):
x, res = x_n_res
# smoke test
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.isnan(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.isnan, x), res)
# isinf
@pytest.mark.parametrize(
"x_n_res", [([True], [False]),
([[0., float('inf')], [float('nan'), -float('inf')]],
[[False, True], [False, True]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_isinf(x_n_res, dtype, tensor_fn, dev, call):
x, res = x_n_res
# smoke test
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.isinf(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.isinf, x), res)
# isfinite
@pytest.mark.parametrize(
"x_n_res", [([True], [True]),
([[0., float('inf')], [float('nan'), 3.]],
[[True, False], [False, True]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_isfinite(x_n_res, dtype, tensor_fn, dev, call):
x, res = x_n_res
# smoke test
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.isfinite(x)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.isfinite, x), res)
# reshape
@pytest.mark.parametrize(
"x_n_shp", [(1., (1, 1)), (1., 1), (1., []), ([[1.]], []), ([[0., 1.], [2., 3.]], (1, 4, 1))])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_reshape(x_n_shp, dtype, tensor_fn, dev, call):
# smoke test
x, new_shape = x_n_shp
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.reshape(x, new_shape)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == ((new_shape,) if isinstance(new_shape, int) else tuple(new_shape))
# value test
assert np.allclose(call(ivy.reshape, x, new_shape), np.asarray(ivy.functional.backends.numpy.reshape(ivy.to_numpy(x), new_shape)))
# broadcast_to
@pytest.mark.parametrize(
"x_n_shp", [([1.], (2, 1)), ([[0., 1.], [2., 3.]], (10, 2, 2))])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_broadcast_to(x_n_shp, dtype, tensor_fn, dev, call):
# smoke test
x, new_shape = x_n_shp
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.broadcast_to(x, new_shape)
# type test
assert ivy.is_array(ret)
# cardinality test
assert len(ret.shape) == len(new_shape)
# value test
assert np.allclose(call(ivy.broadcast_to, x, new_shape),
np.asarray(ivy.functional.backends.numpy.broadcast_to(ivy.to_numpy(x), new_shape)))
# squeeze
# @pytest.mark.parametrize(
# "x_n_axis", [(1., 0), (1., -1), ([[1.]], None), ([[[0.], [1.]], [[2.], [3.]]], -1)])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_squeeze(x_n_axis, dtype, tensor_fn, dev, call):
# # smoke test
# x, axis = x_n_axis
# if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x = tensor_fn(x, dtype, dev)
# ret = ivy.squeeze(x, axis)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# if axis is None:
# expected_shape = [item for item in x.shape if item != 1]
# elif x.shape == ():
# expected_shape = []
# else:
# expected_shape = list(x.shape)
# expected_shape.pop(axis)
# assert ret.shape == tuple(expected_shape)
# # value test
# assert np.allclose(call(ivy.squeeze, x, axis), np.asarray(ivy.functional.backends.numpy.squeeze(ivy.to_numpy(x), axis)))
# zeros
# @pytest.mark.parametrize(
# "shape", [(), (1, 2, 3), tuple([1]*10)])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_zeros(shape, dtype, tensor_fn, dev, call):
# # smoke test
# ret = ivy.zeros(shape, dtype, dev)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == tuple(shape)
# # value test
# assert np.allclose(call(ivy.zeros, shape, dtype, dev), np.asarray(ivy.functional.backends.numpy.zeros(shape, dtype)))
# zeros_like
@pytest.mark.parametrize(
"x", [1, [1], [[1], [2], [3]]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_zeros_like(x, dtype, tensor_fn, dev, call):
# smoke test
if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.zeros_like(x, dtype, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.zeros_like, x, dtype, dev),
np.asarray(ivy.functional.backends.numpy.zeros_like(ivy.to_numpy(x), dtype)))
# ones
# @pytest.mark.parametrize(
# "shape", [(), (1, 2, 3), tuple([1]*10)])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_ones(shape, dtype, tensor_fn, dev, call):
# # smoke test
# ret = ivy.ones(shape, dtype, dev)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == tuple(shape)
# # value test
# assert np.allclose(call(ivy.ones, shape, dtype, dev), np.asarray(ivy.functional.backends.numpy.ones(shape, dtype)))
# ones_like
# @pytest.mark.parametrize(
# "x", [1, [1], [[1], [2], [3]]])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_ones_like(x, dtype, tensor_fn, dev, call):
# # smoke test
# if isinstance(x, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# # mxnet does not support 0-dimensional variables
# pytest.skip()
# x = tensor_fn(x, dtype, dev)
# ret = ivy.ones_like(x, dtype, dev)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == x.shape
# # value test
# assert np.allclose(call(ivy.ones_like, x, dtype, dev),
# np.asarray(ivy.functional.backends.numpy.ones_like(ivy.to_numpy(x), dtype)))
# full
# @pytest.mark.parametrize(
# "shape", [(), (1, 2, 3), tuple([1]*10)])
# @pytest.mark.parametrize(
# "fill_val", [2., -7.])
# @pytest.mark.parametrize(
# "dtype", ['float32'])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_full(shape, fill_val, dtype, tensor_fn, dev, call):
# # smoke test
# ret = ivy.full(shape, fill_val, dtype, dev)
# # type test
# assert ivy.is_array(ret)
# # cardinality test
# assert ret.shape == tuple(shape)
# # value test
# assert np.allclose(call(ivy.full, shape, fill_val, dtype, dev),
# np.asarray(ivy.functional.backends.numpy.full(shape, fill_val, dtype)))
# one_hot
@pytest.mark.parametrize(
"ind_n_depth", [([0], 1), ([0, 1, 2], 3), ([[1, 3], [0, 0], [8, 4], [7, 9]], 10)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_one_hot(ind_n_depth, dtype, tensor_fn, dev, call):
# smoke test
ind, depth = ind_n_depth
if isinstance(ind, Number) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
ind = ivy.array(ind, 'int32', dev)
ret = ivy.one_hot(ind, depth, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == ind.shape + (depth,)
# value test
assert np.allclose(call(ivy.one_hot, ind, depth, dev),
np.asarray(ivy.functional.backends.numpy.one_hot(ivy.to_numpy(ind), depth)))
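# Worked example for the one_hot case above (derived from the parametrized data):
# indices [0, 1, 2] with depth 3 produce the 3x3 identity pattern
# [[1, 0, 0], [0, 1, 0], [0, 0, 1]], and the output shape is ind.shape + (depth,).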
# cross
@pytest.mark.parametrize(
"x1_n_x2", [([0., 1., 2.], [3., 4., 5.]), ([[0., 1., 2.], [2., 1., 0.]], [[3., 4., 5.], [5., 4., 3.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_cross(x1_n_x2, dtype, tensor_fn, dev, call):
# smoke test
x1, x2 = x1_n_x2
if (isinstance(x1, Number) or isinstance(x2, Number)) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x1 = ivy.array(x1, dtype, dev)
x2 = ivy.array(x2, dtype, dev)
ret = ivy.cross(x1, x2)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x1.shape
# value test
assert np.allclose(call(ivy.cross, x1, x2), np.asarray(ivy.functional.backends.numpy.cross(ivy.to_numpy(x1), ivy.to_numpy(x2))))
# matmul
@pytest.mark.parametrize(
"x1_n_x2", [([[0., 1., 2.]], [[3.], [4.], [5.]]), ([[0., 1., 2.], [2., 1., 0.]], [[3., 4.], [5., 5.], [4., 3.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_matmul(x1_n_x2, dtype, tensor_fn, dev, call):
# smoke test
x1, x2 = x1_n_x2
if (isinstance(x1, Number) or isinstance(x2, Number)) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x1 = ivy.array(x1, dtype, dev)
x2 = ivy.array(x2, dtype, dev)
ret = ivy.matmul(x1, x2)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x1.shape[:-1] + (x2.shape[-1],)
# value test
assert np.allclose(call(ivy.matmul, x1, x2), np.asarray(ivy.functional.backends.numpy.matmul(ivy.to_numpy(x1), ivy.to_numpy(x2))))
# cumsum
@pytest.mark.parametrize(
"x_n_axis", [([[0., 1., 2.]], -1), ([[0., 1., 2.], [2., 1., 0.]], 0), ([[0., 1., 2.], [2., 1., 0.]], 1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_cumsum(x_n_axis, dtype, tensor_fn, dev, call):
# smoke test
x, axis = x_n_axis
x = ivy.array(x, dtype, dev)
ret = ivy.cumsum(x, axis)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.cumsum, x, axis), np.asarray(ivy.functional.backends.numpy.cumsum(ivy.to_numpy(x), axis)))
# cumprod
@pytest.mark.parametrize(
"x_n_axis", [([[0., 1., 2.]], -1), ([[0., 1., 2.], [2., 1., 0.]], 0), ([[0., 1., 2.], [2., 1., 0.]], 1)])
@pytest.mark.parametrize(
"exclusive", [True, False])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_cumprod(x_n_axis, exclusive, dtype, tensor_fn, dev, call):
# smoke test
x, axis = x_n_axis
x = ivy.array(x, dtype, dev)
ret = ivy.cumprod(x, axis, exclusive)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == x.shape
# value test
assert np.allclose(call(ivy.cumprod, x, axis, exclusive),
np.asarray(ivy.functional.backends.numpy.cumprod(ivy.to_numpy(x), axis, exclusive)))
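# Note on the exclusive flag tested above (this mirrors the usual exclusive-cumprod
# convention; the numpy backend call in the assertion is the reference the test
# relies on): with exclusive=False the running product includes the current element
# ([a, b, c] -> [a, a*b, a*b*c]), while exclusive=True shifts the products by one
# position and starts from 1 ([a, b, c] -> [1, a, a*b]).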
# identity
@pytest.mark.parametrize(
"dim_n_bs", [(3, None), (1, (2, 3)), (5, (1, 2, 3))])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_identity(dim_n_bs, dtype, tensor_fn, dev, call):
# smoke test
dim, bs = dim_n_bs
ret = ivy.identity(dim, dtype, bs, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == (tuple(bs) if bs else ()) + (dim, dim)
# value test
assert np.allclose(call(ivy.identity, dim, dtype, bs, dev),
np.asarray(ivy.functional.backends.numpy.identity(dim, dtype, bs)))
# meshgrid
@pytest.mark.parametrize(
"xs", [([1, 2, 3], [4, 5, 6]), ([1, 2, 3], [4, 5, 6, 7], [8, 9])])
@pytest.mark.parametrize(
"indexing", ['xy', 'ij'])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_meshgrid(xs, indexing, dtype, tensor_fn, dev, call):
# smoke test
xs_as_arrays = [ivy.array(x, 'int32', dev) for x in xs]
rets = ivy.meshgrid(*xs_as_arrays, indexing=indexing)
# type test
for ret in rets:
assert ivy.is_array(ret)
# cardinality test
target_shape = tuple([len(x) for x in xs])
if indexing == 'xy':
target_shape = (target_shape[1], target_shape[0]) + target_shape[2:]
for ret in rets:
assert ret.shape == target_shape
# value test
assert np.allclose(
call(ivy.meshgrid, *xs_as_arrays, indexing=indexing),
[np.asarray(i) for i in ivy.functional.backends.numpy.meshgrid(*[ivy.to_numpy(x) for x in xs_as_arrays], indexing=indexing)])
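# Note on the indexing check above: with 'ij' indexing each output keeps the shape
# (len(x0), len(x1), ...) in input order, while 'xy' swaps the first two dimensions,
# which is why target_shape exchanges target_shape[0] and target_shape[1] for 'xy'.
# This follows the numpy meshgrid convention that the test compares against.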
# scatter_flat
@pytest.mark.parametrize(
"inds_n_upd_n_size_n_tnsr_n_wdup", [([0, 4, 1, 2], [1, 2, 3, 4], 8, None, False),
([0, 4, 1, 2, 0], [1, 2, 3, 4, 5], 8, None, True),
([0, 4, 1, 2, 0], [1, 2, 3, 4, 5], None, [11, 10, 9, 8, 7, 6], True)])
@pytest.mark.parametrize(
"red", ['sum', 'min', 'max', 'replace'])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_scatter_flat(inds_n_upd_n_size_n_tnsr_n_wdup, red, dtype, tensor_fn, dev, call):
# smoke test
if red in ('sum', 'min', 'max') and call is helpers.mx_call:
# mxnet does not support sum, min or max reduction for scattering
pytest.skip()
inds, upd, size, tensor, with_duplicates = inds_n_upd_n_size_n_tnsr_n_wdup
if ivy.exists(tensor) and call is helpers.mx_call:
# mxnet does not support scattering into pre-existing tensors
pytest.skip()
inds = ivy.array(inds, 'int32', dev)
upd = tensor_fn(upd, dtype, dev)
if tensor:
# pytorch variables do not support in-place updates
tensor = ivy.array(tensor, dtype, dev) if ivy.current_framework_str() == 'torch'\
else tensor_fn(tensor, dtype, dev)
ret = ivy.scatter_flat(inds, upd, size, tensor, red, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
if size:
assert ret.shape == (size,)
else:
assert ret.shape == tensor.shape
# value test
if red == 'replace' and with_duplicates:
        # replace with duplicates gives non-deterministic outputs
return
assert np.allclose(call(ivy.scatter_flat, inds, upd, size, tensor, red, dev),
np.asarray(ivy.functional.backends.numpy.scatter_flat(
ivy.to_numpy(inds), ivy.to_numpy(upd), size,
ivy.to_numpy(tensor) if ivy.exists(tensor) else tensor, red)))
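# Worked example for the first scatter_flat case above (my reading of the data, not
# an extra test): indices [0, 4, 1, 2] with updates [1, 2, 3, 4] and size 8 under
# 'sum' reduction produce [1, 3, 4, 0, 2, 0, 0, 0]; each update lands at its flat
# index and untouched positions stay zero.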
# scatter_nd
@pytest.mark.parametrize(
"inds_n_upd_n_shape_tnsr_n_wdup",
[([[4], [3], [1], [7]], [9, 10, 11, 12], [8], None, False), ([[0, 1, 2]], [1], [3, 3, 3], None, False),
([[0], [2]], [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]], [4, 4, 4], None, False),
([[0, 1, 2]], [1], None, [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[4, 5, 6], [7, 8, 9], [1, 2, 3]],
[[7, 8, 9], [1, 2, 3], [4, 5, 6]]], False)])
@pytest.mark.parametrize(
"red", ['sum', 'min', 'max', 'replace'])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_scatter_nd(inds_n_upd_n_shape_tnsr_n_wdup, red, dtype, tensor_fn, dev, call):
# smoke test
if red in ('sum', 'min', 'max') and call is helpers.mx_call:
# mxnet does not support sum, min or max reduction for scattering
pytest.skip()
inds, upd, shape, tensor, with_duplicates = inds_n_upd_n_shape_tnsr_n_wdup
if ivy.exists(tensor) and call is helpers.mx_call:
# mxnet does not support scattering into pre-existing tensors
pytest.skip()
inds = ivy.array(inds, 'int32', dev)
upd = tensor_fn(upd, dtype, dev)
if tensor:
# pytorch variables do not support in-place updates
tensor = ivy.array(tensor, dtype, dev) if ivy.current_framework_str() == 'torch'\
else tensor_fn(tensor, dtype, dev)
ret = ivy.scatter_nd(inds, upd, shape, tensor, red, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
if shape:
assert tuple(ret.shape) == tuple(shape)
else:
assert tuple(ret.shape) == tuple(tensor.shape)
# value test
if red == 'replace' and with_duplicates:
        # replace with duplicates gives non-deterministic outputs
return
ret = call(ivy.scatter_nd, inds, upd, shape, tensor, red, dev)
true = np.asarray(ivy.functional.backends.numpy.scatter_nd(
ivy.to_numpy(inds), ivy.to_numpy(upd), shape,
ivy.to_numpy(tensor) if ivy.exists(tensor) else tensor, red))
assert np.allclose(ret, true)
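# Worked example for the first scatter_nd case above (derived from the parametrized
# data): indices [[4], [3], [1], [7]] with updates [9, 10, 11, 12] and shape [8]
# produce [0, 11, 0, 10, 9, 0, 0, 12] under 'sum' reduction, i.e. each update is
# written at the position named by its index row and the rest stays zero.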
# gather
@pytest.mark.parametrize(
"prms_n_inds_n_axis", [([9, 8, 7, 6, 5, 4, 3, 2, 1, 0], [0, 4, 7], 0),
([[1, 2], [3, 4]], [[0, 0], [1, 0]], 1)])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_gather(prms_n_inds_n_axis, dtype, tensor_fn, dev, call):
# smoke test
prms, inds, axis = prms_n_inds_n_axis
prms = tensor_fn(prms, dtype, dev)
inds = ivy.array(inds, 'int32', dev)
ret = ivy.gather(prms, inds, axis, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == inds.shape
# value test
assert np.allclose(call(ivy.gather, prms, inds, axis, dev),
np.asarray(ivy.functional.backends.numpy.gather(ivy.to_numpy(prms), ivy.to_numpy(inds), axis)))
# gather_nd
@pytest.mark.parametrize(
"prms_n_inds", [([[[0.0, 1.0], [2.0, 3.0]], [[0.1, 1.1], [2.1, 3.1]]], [[0, 1], [1, 0]]),
([[[0.0, 1.0], [2.0, 3.0]], [[0.1, 1.1], [2.1, 3.1]]], [[[0, 1]], [[1, 0]]]),
([[[0.0, 1.0], [2.0, 3.0]], [[0.1, 1.1], [2.1, 3.1]]], [[[0, 1, 0]], [[1, 0, 1]]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_gather_nd(prms_n_inds, dtype, tensor_fn, dev, call):
# smoke test
prms, inds = prms_n_inds
prms = tensor_fn(prms, dtype, dev)
inds = ivy.array(inds, 'int32', dev)
ret = ivy.gather_nd(prms, inds, dev)
# type test
assert ivy.is_array(ret)
# cardinality test
assert ret.shape == inds.shape[:-1] + prms.shape[inds.shape[-1]:]
# value test
assert np.allclose(call(ivy.gather_nd, prms, inds, dev),
np.asarray(ivy.functional.backends.numpy.gather_nd(ivy.to_numpy(prms), ivy.to_numpy(inds))))
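# Worked example for the first gather_nd case above (my reading of the data): with
# params of shape (2, 2, 2) and indices [[0, 1], [1, 0]], each index row selects a
# sub-array along the leading dimensions, giving [[2.0, 3.0], [0.1, 1.1]] with shape
# inds.shape[:-1] + prms.shape[inds.shape[-1]:] = (2, 2).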
# linear_resample
@pytest.mark.parametrize(
"x_n_samples_n_axis_n_y_true", [([[10., 9., 8.]], 9, -1, [[10., 9.75, 9.5, 9.25, 9., 8.75, 8.5, 8.25, 8.]]),
([[[10., 9.], [8., 7.]]], 5, -2,
[[[10., 9.], [9.5, 8.5], [9., 8.], [8.5, 7.5], [8., 7.]]]),
([[[10., 9.], [8., 7.]]], 5, -1,
[[[10., 9.75, 9.5, 9.25, 9.], [8., 7.75, 7.5, 7.25, 7.]]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_linear_resample(x_n_samples_n_axis_n_y_true, dtype, tensor_fn, dev, call):
# smoke test
x, samples, axis, y_true = x_n_samples_n_axis_n_y_true
x = tensor_fn(x, dtype, dev)
ret = ivy.linear_resample(x, samples, axis)
# type test
assert ivy.is_array(ret)
# cardinality test
x_shape = list(x.shape)
num_x_dims = len(x_shape)
axis = axis % num_x_dims
x_pre_shape = x_shape[0:axis]
num_vals = x.shape[axis]
x_post_shape = x_shape[axis+1:]
assert list(ret.shape) == x_pre_shape + [samples] + x_post_shape
# value test
y_true = np.array(y_true)
y = call(ivy.linear_resample, x, samples, axis)
assert np.allclose(y, y_true)
# exists
@pytest.mark.parametrize(
"x", [[1.], None, [[10., 9., 8.]]])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_exists(x, dtype, tensor_fn, dev, call):
# smoke test
x = tensor_fn(x, dtype, dev) if x is not None else None
ret = ivy.exists(x)
# type test
assert isinstance(ret, bool)
# value test
y_true = x is not None
assert ret == y_true
# default
@pytest.mark.parametrize(
"x_n_dv", [([1.], [2.]), (None, [2.]), ([[10., 9., 8.]], [2.])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_default(x_n_dv, dtype, tensor_fn, dev, call):
x, dv = x_n_dv
# smoke test
x = tensor_fn(x, dtype, dev) if x is not None else None
dv = tensor_fn(dv, dtype, dev)
ret = ivy.default(x, dv)
# type test
assert ivy.is_array(ret)
# value test
y_true = ivy.to_numpy(x if x is not None else dv)
assert np.allclose(call(ivy.default, x, dv), y_true)
# dtype bits
@pytest.mark.parametrize(
"x", [1, [], [1], [[0.0, 1.0], [2.0, 3.0]]])
@pytest.mark.parametrize(
"dtype", ivy.all_dtype_strs)
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_dtype_bits(x, dtype, tensor_fn, dev, call):
# smoke test
if ivy.invalid_dtype(dtype):
pytest.skip()
if (isinstance(x, Number) or len(x) == 0) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
ret = ivy.dtype_bits(ivy.dtype(x))
# type test
assert isinstance(ret, int)
assert ret in [1, 8, 16, 32, 64]
# dtype_to_str
@pytest.mark.parametrize(
"x", [1, [], [1], [[0.0, 1.0], [2.0, 3.0]]])
@pytest.mark.parametrize(
"dtype", ['float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_dtype_to_str(x, dtype, tensor_fn, dev, call):
# smoke test
if call is helpers.mx_call and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
if call is helpers.jnp_call and dtype in ['int64', 'float64']:
# jax does not support int64 or float64 arrays
pytest.skip()
if (isinstance(x, Number) or len(x) == 0) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
dtype_as_str = ivy.dtype(x, as_str=True)
dtype_to_str = ivy.dtype_to_str(ivy.dtype(x))
# type test
assert isinstance(dtype_as_str, str)
assert isinstance(dtype_to_str, str)
# value test
assert dtype_to_str == dtype_as_str
# dtype_from_str
@pytest.mark.parametrize(
"x", [1, [], [1], [[0.0, 1.0], [2.0, 3.0]]])
@pytest.mark.parametrize(
"dtype", ['float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'bool'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array])
def test_dtype_from_str(x, dtype, tensor_fn, dev, call):
# smoke test
if call is helpers.mx_call and dtype == 'int16':
# mxnet does not support int16
pytest.skip()
if call is helpers.jnp_call and dtype in ['int64', 'float64']:
# jax does not support int64 or float64 arrays
pytest.skip()
if (isinstance(x, Number) or len(x) == 0) and tensor_fn == helpers.var_fn and call is helpers.mx_call:
# mxnet does not support 0-dimensional variables
pytest.skip()
x = tensor_fn(x, dtype, dev)
dt0 = ivy.dtype_from_str(ivy.dtype(x, as_str=True))
dt1 = ivy.dtype(x)
# value test
assert dt0 is dt1
def test_cache_fn(dev, call):
def func():
return ivy.random_uniform()
# return a single cached_fn and then query this
cached_fn = ivy.cache_fn(func)
ret0 = cached_fn()
ret0_again = cached_fn()
ret1 = func()
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
    # call ivy.cache_fn repeatedly; each new cached function uses the same global dict
ret0 = ivy.cache_fn(func)()
ret0_again = ivy.cache_fn(func)()
ret1 = func()
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
def test_cache_fn_with_args(dev, call):
def func(_):
return ivy.random_uniform()
# return a single cached_fn and then query this
cached_fn = ivy.cache_fn(func)
ret0 = cached_fn(0)
ret0_again = cached_fn(0)
ret1 = cached_fn(1)
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
    # call ivy.cache_fn repeatedly; each new cached function uses the same global dict
ret0 = ivy.cache_fn(func)(0)
ret0_again = ivy.cache_fn(func)(0)
ret1 = ivy.cache_fn(func)(1)
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
# def test_framework_setting_with_threading(dev, call):
#
# if call is helpers.np_call:
# # Numpy is the conflicting framework being tested against
# pytest.skip()
#
# def thread_fn():
# ivy.set_framework('numpy')
# x_ = np.array([0., 1., 2.])
# for _ in range(2000):
# try:
# ivy.reduce_mean(x_)
# except TypeError:
# return False
# ivy.unset_framework()
# return True
#
# # get original framework string and array
# fws = ivy.current_framework_str()
# x = ivy.array([0., 1., 2.])
#
# # start numpy loop thread
# thread = threading.Thread(target=thread_fn)
# thread.start()
#
# # start local original framework loop
# ivy.set_framework(fws)
# for _ in range(2000):
# ivy.reduce_mean(x)
# ivy.unset_framework()
#
# assert not thread.join()
def test_framework_setting_with_multiprocessing(dev, call):
if call is helpers.np_call:
# Numpy is the conflicting framework being tested against
pytest.skip()
def worker_fn(out_queue):
ivy.set_framework('numpy')
x_ = np.array([0., 1., 2.])
for _ in range(1000):
try:
ivy.mean(x_)
except TypeError:
out_queue.put(False)
return
ivy.unset_framework()
out_queue.put(True)
# get original framework string and array
fws = ivy.current_framework_str()
x = ivy.array([0., 1., 2.])
# start numpy loop thread
output_queue = multiprocessing.Queue()
worker = multiprocessing.Process(target=worker_fn, args=(output_queue,))
worker.start()
# start local original framework loop
ivy.set_framework(fws)
for _ in range(1000):
ivy.mean(x)
ivy.unset_framework()
worker.join()
assert output_queue.get_nowait()
# def test_explicit_ivy_framework_handles(dev, call):
#
# if call is helpers.np_call:
# # Numpy is the conflicting framework being tested against
# pytest.skip()
#
# # store original framework string and unset
# fw_str = ivy.current_framework_str()
# ivy.unset_framework()
#
# # set with explicit handle caught
# ivy_exp = ivy.get_framework(fw_str)
# assert ivy_exp.current_framework_str() == fw_str
#
# # assert backend implemented function is accessible
# assert 'array' in ivy_exp.__dict__
# assert callable(ivy_exp.array)
#
# # assert joint implemented function is also accessible
# assert 'cache_fn' in ivy_exp.__dict__
# assert callable(ivy_exp.cache_fn)
#
# # set global ivy to numpy
# ivy.set_framework('numpy')
#
# # assert the explicit handle is still unchanged
# assert ivy.current_framework_str() == 'numpy'
# assert ivy_exp.current_framework_str() == fw_str
#
# # unset global ivy from numpy
# ivy.unset_framework()
# def test_class_ivy_handles(dev, call):
#
# if call is helpers.np_call:
# # Numpy is the conflicting framework being tested against
# pytest.skip()
#
# class ArrayGen:
#
# def __init__(self, ivyh):
# self._ivy = ivyh
#
# def get_array(self):
# return self._ivy.array([0., 1., 2.])
#
# # create instance
# ag = ArrayGen(ivy.get_framework())
#
# # create array from array generator
# x = ag.get_array()
#
# # verify this is not a numpy array
# assert not isinstance(x, np.ndarray)
#
# # change global framework to numpy
# ivy.set_framework('numpy')
#
# # create another array from array generator
# x = ag.get_array()
#
# # verify this is not still a numpy array
# assert not isinstance(x, np.ndarray)
# einops_rearrange
@pytest.mark.parametrize(
"x_n_pattern_n_newx", [([[0., 1., 2., 3.]], 'b n -> n b', [[0.], [1.], [2.], [3.]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_einops_rearrange(x_n_pattern_n_newx, dtype, tensor_fn, dev, call):
# smoke test
x, pattern, new_x = x_n_pattern_n_newx
x = tensor_fn(x, dtype, dev)
ret = ivy.einops_rearrange(x, pattern)
true_ret = einops.rearrange(ivy.to_native(x), pattern)
# type test
assert ivy.is_array(ret)
# cardinality test
assert list(ret.shape) == list(true_ret.shape)
# value test
assert np.allclose(ivy.to_numpy(ret), ivy.to_numpy(true_ret))
# einops_reduce
@pytest.mark.parametrize(
"x_n_pattern_n_red_n_newx", [([[0., 1., 2., 3.]], 'b n -> b', 'mean', [1.5])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_einops_reduce(x_n_pattern_n_red_n_newx, dtype, tensor_fn, dev, call):
# smoke test
x, pattern, reduction, new_x = x_n_pattern_n_red_n_newx
x = tensor_fn(x, dtype, dev)
ret = ivy.einops_reduce(x, pattern, reduction)
true_ret = einops.reduce(ivy.to_native(x), pattern, reduction)
# type test
assert ivy.is_array(ret)
# cardinality test
assert list(ret.shape) == list(true_ret.shape)
# value test
assert np.allclose(ivy.to_numpy(ret), ivy.to_numpy(true_ret))
# einops_repeat
@pytest.mark.parametrize(
"x_n_pattern_n_al_n_newx", [([[0., 1., 2., 3.]], 'b n -> b n c', {'c': 2},
[[[0., 0.], [1., 1.], [2., 2.], [3., 3.]]])])
@pytest.mark.parametrize(
"dtype", ['float32'])
@pytest.mark.parametrize(
"tensor_fn", [ivy.array, helpers.var_fn])
def test_einops_repeat(x_n_pattern_n_al_n_newx, dtype, tensor_fn, dev, call):
# smoke test
x, pattern, axes_lengths, new_x = x_n_pattern_n_al_n_newx
x = tensor_fn(x, dtype, dev)
ret = ivy.einops_repeat(x, pattern, **axes_lengths)
true_ret = einops.repeat(ivy.to_native(x), pattern, **axes_lengths)
# type test
assert ivy.is_array(ret)
# cardinality test
assert list(ret.shape) == list(true_ret.shape)
# value test
assert np.allclose(ivy.to_numpy(ret), ivy.to_numpy(true_ret))
# profiler
# def test_profiler(dev, call):
#
# # ToDo: find way to prevent this test from hanging when run alongside other tests in parallel
#
# # log dir
# this_dir = os.path.dirname(os.path.realpath(__file__))
# log_dir = os.path.join(this_dir, '../log')
#
# # with statement
# with ivy.Profiler(log_dir):
# a = ivy.ones([10])
# b = ivy.zeros([10])
# a + b
# if call is helpers.mx_call:
# time.sleep(1) # required by MXNet for some reason
#
# # start and stop methods
# profiler = ivy.Profiler(log_dir)
# profiler.start()
# a = ivy.ones([10])
# b = ivy.zeros([10])
# a + b
# profiler.stop()
# if call is helpers.mx_call:
# time.sleep(1) # required by MXNet for some reason
# container types
def test_container_types(dev, call):
cont_types = ivy.container_types()
assert isinstance(cont_types, list)
for cont_type in cont_types:
assert hasattr(cont_type, 'keys')
assert hasattr(cont_type, 'values')
assert hasattr(cont_type, 'items')
def test_inplace_arrays_supported(dev, call):
cur_fw = ivy.current_framework_str()
if cur_fw in ['numpy', 'mxnet', 'torch']:
assert ivy.inplace_arrays_supported()
elif cur_fw in ['jax', 'tensorflow']:
assert not ivy.inplace_arrays_supported()
else:
raise Exception('Unrecognized framework')
def test_inplace_variables_supported(dev, call):
cur_fw = ivy.current_framework_str()
if cur_fw in ['numpy', 'mxnet', 'torch', 'tensorflow']:
assert ivy.inplace_variables_supported()
elif cur_fw in ['jax']:
assert not ivy.inplace_variables_supported()
else:
raise Exception('Unrecognized framework')
# @pytest.mark.parametrize(
# "x_n_new", [([0., 1., 2.], [2., 1., 0.]), (0., 1.)])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_inplace_update(x_n_new, tensor_fn, dev, call):
# x_orig, new_val = x_n_new
# if call is helpers.mx_call and isinstance(x_orig, Number):
# # MxNet supports neither 0-dim variables nor 0-dim inplace updates
# pytest.skip()
# x_orig = tensor_fn(x_orig, 'float32', dev)
# new_val = tensor_fn(new_val, 'float32', dev)
# if (tensor_fn is not helpers.var_fn and ivy.inplace_arrays_supported()) or\
# (tensor_fn is helpers.var_fn and ivy.inplace_variables_supported()):
# x = ivy.inplace_update(x_orig, new_val)
# assert id(x) == id(x_orig)
# assert np.allclose(ivy.to_numpy(x), ivy.to_numpy(new_val))
# return
# pytest.skip()
# @pytest.mark.parametrize(
# "x_n_dec", [([0., 1., 2.], [2., 1., 0.]), (0., 1.)])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_inplace_decrement(x_n_dec, tensor_fn, dev, call):
# x_orig, dec = x_n_dec
# if call is helpers.mx_call and isinstance(x_orig, Number):
# # MxNet supports neither 0-dim variables nor 0-dim inplace updates
# pytest.skip()
# x_orig = tensor_fn(x_orig, 'float32', dev)
# dec = tensor_fn(dec, 'float32', dev)
# new_val = x_orig - dec
# if (tensor_fn is not helpers.var_fn and ivy.inplace_arrays_supported()) or\
# (tensor_fn is helpers.var_fn and ivy.inplace_variables_supported()):
# x = ivy.inplace_decrement(x_orig, dec)
# assert id(x) == id(x_orig)
# assert np.allclose(ivy.to_numpy(new_val), ivy.to_numpy(x))
# return
# pytest.skip()
# @pytest.mark.parametrize(
# "x_n_inc", [([0., 1., 2.], [2., 1., 0.]), (0., 1.)])
# @pytest.mark.parametrize(
# "tensor_fn", [ivy.array, helpers.var_fn])
# def test_inplace_increment(x_n_inc, tensor_fn, dev, call):
# x_orig, inc = x_n_inc
# if call is helpers.mx_call and isinstance(x_orig, Number):
# # MxNet supports neither 0-dim variables nor 0-dim inplace updates
# pytest.skip()
# x_orig = tensor_fn(x_orig, 'float32', dev)
# inc = tensor_fn(inc, 'float32', dev)
# new_val = x_orig + inc
# if (tensor_fn is not helpers.var_fn and ivy.inplace_arrays_supported()) or\
# (tensor_fn is helpers.var_fn and ivy.inplace_variables_supported()):
# x = ivy.inplace_increment(x_orig, inc)
# assert id(x) == id(x_orig)
# assert np.allclose(ivy.to_numpy(new_val), ivy.to_numpy(x))
# return
# pytest.skip()
|
chat_client.py
|
#!/usr/bin/env python3
"""Script chat del client utilizzato per lanciare la GUI Tkinter."""
from socket import AF_INET, socket, SOCK_STREAM
from threading import Thread
import tkinter as tkt
''' Function that starts the client with the given input data '''
def client_func(host, port):
""" Funzione che ha il compito di gestire la ricezione dei messaggi."""
def receive():
while True:
try:
                # When receive is called, it listens for messages
                # arriving on the socket
msg = client_socket.recv(BUFSIZ).decode("utf8")
                # Split the received message into strings, where needed, using the
                # "\n" separator. Each string is appended to the end of the chat
for s in msg.split("\n"):
                    if s:  # Check that it is not an empty string
msg_list.insert(tkt.END, s)
                        # If the message is "{quit}", exit
check_quit(s)
            # In case of an error, the client has most likely left the chat.
except OSError:
break
""" Funzione che gestisce l'invio dei messaggi."""
def send(event=None):
        # Events are passed in by the binders.
msg = my_msg.get()
        if msg:
            # Clear the input box.
my_msg.set("")
try:
                # Send the message over the socket
client_socket.send(bytes(msg, "utf8"))
finally:
                # Print the message in my own chat window
                msg_list.insert(tkt.END, "You:" + msg)
                # If the message is "{quit}", exit
check_quit(msg)
    ''' Function that checks whether the input is a quit request '''
def check_quit(msg):
if msg == "{quit}":
client_socket.close()
window.quit()
            raise SystemExit
""" Funzione invocata quando viene chiusa la finestra della chat."""
def on_closing(event=None):
my_msg.set("{quit}")
send()
    #----GUI creation----
    # Create the window and give it a title
    window = tkt.Tk()
    window.title("Chat_Laboratorio")
    # Frame that holds the messages
    messages_frame = tkt.Frame(window)
    # String variable for the messages to send.
    my_msg = tkt.StringVar()
    # Initialize the string
    my_msg.set("Type here")
    # Scrollbar to navigate through previous messages.
    scrollbar = tkt.Scrollbar(messages_frame)
    # Create the listbox that represents the chat
    msg_list = tkt.Listbox(messages_frame, height=15, width=70, yscrollcommand=scrollbar.set)
    scrollbar.pack(side=tkt.RIGHT, fill=tkt.Y)
    msg_list.pack(side=tkt.LEFT, fill=tkt.BOTH)
    # Pack the chat listbox
    msg_list.pack()
    # Pack the frame
    messages_frame.pack()
    # Input field bound to the string variable
    entry_field = tkt.Entry(window, textvariable=my_msg)
    # Bind the send function to the Return (Enter) key
    entry_field.bind("<Return>", send)
    # Pack the entry widget
    entry_field.pack()
    # Create the send button and bind it to the "send" function
    send_button = tkt.Button(window, text="Send", command=send)
    # Pack the button
    send_button.pack()
    # Create the Quit button and bind it to the "on_closing" function
    quit_button = tkt.Button(window, text="Quit", command=on_closing)
    # Pack the button
    quit_button.pack()
    window.protocol("WM_DELETE_WINDOW", on_closing)
    #----Connection to the server----
    HOST = host  # "127.0.0.1"
    PORT = port  # 53000
    BUFSIZ = 1024
    ADDR = (HOST, PORT)
    # Create the socket
    client_socket = socket(AF_INET, SOCK_STREAM)
    client_socket.connect(ADDR)
    # Create the receive thread and start it
    receive_thread = Thread(target=receive)
    receive_thread.start()
    # Start the window's main loop
    tkt.mainloop()
if __name__ == "__main__":
print("File non eseguibile")
|
helpers.py
|
"""
Helper functions file for OCS QE
"""
import base64
import datetime
import hashlib
import json
import logging
import os
import re
import statistics
import tempfile
import threading
import time
import inspect
from concurrent.futures import ThreadPoolExecutor
from subprocess import PIPE, TimeoutExpired, run
from uuid import uuid4
import yaml
from ocs_ci.framework import config
from ocs_ci.ocs.utils import mirror_image
from ocs_ci.ocs import constants, defaults, node, ocp
from ocs_ci.ocs.exceptions import (
CommandFailed,
ResourceWrongStatusException,
TimeoutExpiredError,
UnavailableBuildException,
UnexpectedBehaviour,
)
from ocs_ci.ocs.ocp import OCP
from ocs_ci.ocs.resources import pod, pvc
from ocs_ci.ocs.resources.ocs import OCS
from ocs_ci.utility import templating
from ocs_ci.utility.retry import retry
from ocs_ci.utility.utils import (
TimeoutSampler,
ocsci_log_path,
run_cmd,
update_container_with_mirrored_image,
)
logger = logging.getLogger(__name__)
def create_unique_resource_name(resource_description, resource_type):
"""
    Creates a unique object name by combining the object description,
    the object type and a random uuid (in hex) as a suffix, trimmed to
    respect the kubernetes limit of 63 characters
Args:
resource_description (str): The user provided object description
resource_type (str): The type of object for which the unique name
will be created. For example: project, pvc, etc
Returns:
str: A unique name
"""
name = f"{resource_type}-{resource_description[:23]}-{uuid4().hex}"
return name if len(name) < 40 else name[:40]
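# Illustrative note (example values are hypothetical, not taken from a real run):
# create_unique_resource_name("mytest", "pvc") returns a string of the form
# "pvc-mytest-<32-char hex uuid>", which the return statement above truncates to at
# most 40 characters.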
def create_resource(do_reload=True, **kwargs):
"""
Create a resource
Args:
do_reload (bool): True for reloading the resource following its creation,
False otherwise
kwargs (dict): Dictionary of the OCS resource
Returns:
OCS: An OCS instance
Raises:
AssertionError: In case of any failure
"""
ocs_obj = OCS(**kwargs)
resource_name = kwargs.get("metadata").get("name")
created_resource = ocs_obj.create(do_reload=do_reload)
assert created_resource, f"Failed to create resource {resource_name}"
return ocs_obj
def wait_for_resource_state(resource, state, timeout=60):
"""
Wait for a resource to get to a given status
Args:
resource (OCS obj): The resource object
state (str): The status to wait for
timeout (int): Time in seconds to wait
Raises:
ResourceWrongStatusException: In case the resource hasn't
reached the desired state
"""
if (
resource.name == constants.DEFAULT_STORAGECLASS_CEPHFS
or resource.name == constants.DEFAULT_STORAGECLASS_RBD
):
logger.info("Attempt to default default Secret or StorageClass")
return
try:
resource.ocp.wait_for_resource(
condition=state, resource_name=resource.name, timeout=timeout
)
except TimeoutExpiredError:
logger.error(f"{resource.kind} {resource.name} failed to reach {state}")
resource.reload()
raise ResourceWrongStatusException(resource.name, resource.describe())
logger.info(f"{resource.kind} {resource.name} reached state {state}")
def create_pod(
interface_type=None,
pvc_name=None,
do_reload=True,
namespace=defaults.ROOK_CLUSTER_NAMESPACE,
node_name=None,
pod_dict_path=None,
sa_name=None,
dc_deployment=False,
raw_block_pv=False,
raw_block_device=constants.RAW_BLOCK_DEVICE,
replica_count=1,
pod_name=None,
node_selector=None,
command=None,
command_args=None,
deploy_pod_status=constants.STATUS_COMPLETED,
):
"""
Create a pod
Args:
interface_type (str): The interface type (CephFS, RBD, etc.)
pvc_name (str): The PVC that should be attached to the newly created pod
do_reload (bool): True for reloading the object after creation, False otherwise
namespace (str): The namespace for the new resource creation
node_name (str): The name of specific node to schedule the pod
pod_dict_path (str): YAML path for the pod
sa_name (str): Serviceaccount name
dc_deployment (bool): True if creating pod as deploymentconfig
raw_block_pv (bool): True for creating raw block pv based pod, False otherwise
raw_block_device (str): raw block device for the pod
replica_count (int): Replica count for deployment config
pod_name (str): Name of the pod to create
node_selector (dict): dict of key-value pair to be used for nodeSelector field
eg: {'nodetype': 'app-pod'}
command (list): The command to be executed on the pod
command_args (list): The arguments to be sent to the command running
on the pod
deploy_pod_status (str): Expected status of deploy pod. Applicable
only if dc_deployment is True
Returns:
Pod: A Pod instance
Raises:
AssertionError: In case of any failure
"""
if interface_type == constants.CEPHBLOCKPOOL:
pod_dict = pod_dict_path if pod_dict_path else constants.CSI_RBD_POD_YAML
interface = constants.RBD_INTERFACE
else:
pod_dict = pod_dict_path if pod_dict_path else constants.CSI_CEPHFS_POD_YAML
interface = constants.CEPHFS_INTERFACE
if dc_deployment:
pod_dict = pod_dict_path if pod_dict_path else constants.FEDORA_DC_YAML
pod_data = templating.load_yaml(pod_dict)
if not pod_name:
pod_name = create_unique_resource_name(f"test-{interface}", "pod")
pod_data["metadata"]["name"] = pod_name
pod_data["metadata"]["namespace"] = namespace
if dc_deployment:
pod_data["metadata"]["labels"]["app"] = pod_name
pod_data["spec"]["template"]["metadata"]["labels"]["name"] = pod_name
pod_data["spec"]["replicas"] = replica_count
if pvc_name:
if dc_deployment:
pod_data["spec"]["template"]["spec"]["volumes"][0]["persistentVolumeClaim"][
"claimName"
] = pvc_name
else:
pod_data["spec"]["volumes"][0]["persistentVolumeClaim"][
"claimName"
] = pvc_name
if interface_type == constants.CEPHBLOCKPOOL and raw_block_pv:
if pod_dict_path in [constants.FEDORA_DC_YAML, constants.FIO_DC_YAML]:
temp_dict = [
{
"devicePath": raw_block_device,
"name": pod_data.get("spec")
.get("template")
.get("spec")
.get("volumes")[0]
.get("name"),
}
]
if pod_dict_path == constants.FEDORA_DC_YAML:
del pod_data["spec"]["template"]["spec"]["containers"][0][
"volumeMounts"
]
security_context = {"capabilities": {"add": ["SYS_ADMIN"]}}
pod_data["spec"]["template"]["spec"]["containers"][0][
"securityContext"
] = security_context
pod_data["spec"]["template"]["spec"]["containers"][0][
"volumeDevices"
] = temp_dict
elif pod_dict_path == constants.NGINX_POD_YAML:
temp_dict = [
{
"devicePath": raw_block_device,
"name": pod_data.get("spec")
.get("containers")[0]
.get("volumeMounts")[0]
.get("name"),
}
]
del pod_data["spec"]["containers"][0]["volumeMounts"]
pod_data["spec"]["containers"][0]["volumeDevices"] = temp_dict
else:
pod_data["spec"]["containers"][0]["volumeDevices"][0][
"devicePath"
] = raw_block_device
pod_data["spec"]["containers"][0]["volumeDevices"][0]["name"] = (
pod_data.get("spec").get("volumes")[0].get("name")
)
if command:
if dc_deployment:
pod_data["spec"]["template"]["spec"]["containers"][0]["command"] = command
else:
pod_data["spec"]["containers"][0]["command"] = command
if command_args:
if dc_deployment:
pod_data["spec"]["template"]["spec"]["containers"][0]["args"] = command_args
else:
pod_data["spec"]["containers"][0]["args"] = command_args
if node_name:
if dc_deployment:
pod_data["spec"]["template"]["spec"]["nodeName"] = node_name
else:
pod_data["spec"]["nodeName"] = node_name
if node_selector:
if dc_deployment:
pod_data["spec"]["template"]["spec"]["nodeSelector"] = node_selector
else:
pod_data["spec"]["nodeSelector"] = node_selector
if sa_name and dc_deployment:
pod_data["spec"]["template"]["spec"]["serviceAccountName"] = sa_name
# overwrite used image (required for disconnected installation)
update_container_with_mirrored_image(pod_data)
# configure http[s]_proxy env variable, if required
try:
http_proxy, https_proxy, no_proxy = get_cluster_proxies()
if http_proxy:
if "containers" in pod_data["spec"]:
container = pod_data["spec"]["containers"][0]
else:
container = pod_data["spec"]["template"]["spec"]["containers"][0]
if "env" not in container:
container["env"] = []
container["env"].append({"name": "http_proxy", "value": http_proxy})
container["env"].append({"name": "https_proxy", "value": https_proxy})
container["env"].append({"name": "no_proxy", "value": no_proxy})
except KeyError as err:
logging.warning(
"Http(s)_proxy variable wasn't configured, " "'%s' key not found.", err
)
if dc_deployment:
ocs_obj = create_resource(**pod_data)
logger.info(ocs_obj.name)
assert (ocp.OCP(kind="pod", namespace=namespace)).wait_for_resource(
condition=deploy_pod_status,
resource_name=pod_name + "-1-deploy",
resource_count=0,
timeout=360,
sleep=3,
)
dpod_list = pod.get_all_pods(namespace=namespace)
for dpod in dpod_list:
if "-1-deploy" not in dpod.name:
if pod_name in dpod.name:
return dpod
else:
pod_obj = pod.Pod(**pod_data)
pod_name = pod_data.get("metadata").get("name")
logger.info(f"Creating new Pod {pod_name} for test")
created_resource = pod_obj.create(do_reload=do_reload)
assert created_resource, f"Failed to create Pod {pod_name}"
return pod_obj
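# Minimal usage sketch for create_pod (illustrative only: the PVC name is a
# hypothetical placeholder, and constants.STATUS_RUNNING is assumed to be the usual
# "Running" status constant from ocs_ci.ocs.constants):
#
#     pod_obj = create_pod(
#         interface_type=constants.CEPHBLOCKPOOL,
#         pvc_name="my-test-pvc",
#         namespace=defaults.ROOK_CLUSTER_NAMESPACE,
#     )
#     wait_for_resource_state(pod_obj, constants.STATUS_RUNNING)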
def create_project(project_name=None):
"""
Create a project
Args:
project_name (str): The name for the new project
Returns:
ocs_ci.ocs.ocp.OCP: Project object
"""
namespace = project_name or create_unique_resource_name("test", "namespace")
project_obj = ocp.OCP(kind="Project", namespace=namespace)
assert project_obj.new_project(namespace), f"Failed to create namespace {namespace}"
return project_obj
def create_multilpe_projects(number_of_project):
"""
Create one or more projects
Args:
number_of_project (int): Number of projects to be created
Returns:
list: List of project objects
"""
project_objs = [create_project() for _ in range(number_of_project)]
return project_objs
def create_secret(interface_type):
"""
Create a secret
** This method should not be used anymore **
** This method is for internal testing only **
Args:
interface_type (str): The type of the interface
(e.g. CephBlockPool, CephFileSystem)
Returns:
OCS: An OCS instance for the secret
"""
secret_data = dict()
if interface_type == constants.CEPHBLOCKPOOL:
secret_data = templating.load_yaml(constants.CSI_RBD_SECRET_YAML)
secret_data["stringData"]["userID"] = constants.ADMIN_USER
secret_data["stringData"]["userKey"] = get_admin_key()
interface = constants.RBD_INTERFACE
elif interface_type == constants.CEPHFILESYSTEM:
secret_data = templating.load_yaml(constants.CSI_CEPHFS_SECRET_YAML)
del secret_data["stringData"]["userID"]
del secret_data["stringData"]["userKey"]
secret_data["stringData"]["adminID"] = constants.ADMIN_USER
secret_data["stringData"]["adminKey"] = get_admin_key()
interface = constants.CEPHFS_INTERFACE
secret_data["metadata"]["name"] = create_unique_resource_name(
f"test-{interface}", "secret"
)
secret_data["metadata"]["namespace"] = defaults.ROOK_CLUSTER_NAMESPACE
return create_resource(**secret_data)
def default_ceph_block_pool():
"""
Returns default CephBlockPool
Returns:
default CephBlockPool
"""
sc_obj = default_storage_class(constants.CEPHBLOCKPOOL)
cbp_name = sc_obj.get().get("parameters").get("pool")
return cbp_name if cbp_name else constants.DEFAULT_BLOCKPOOL
def create_ceph_block_pool(
pool_name=None, replica=3, compression=None, failure_domain=None, verify=True
):
"""
Create a Ceph block pool
** This method should not be used anymore **
** This method is for internal testing only **
Args:
pool_name (str): The pool name to create
failure_domain (str): Failure domain name
verify (bool): True to verify the pool exists after creation,
False otherwise
replica (int): The replica size for a pool
compression (str): Compression type for a pool
Returns:
OCS: An OCS instance for the Ceph block pool
"""
cbp_data = templating.load_yaml(constants.CEPHBLOCKPOOL_YAML)
cbp_data["metadata"]["name"] = (
pool_name if pool_name else create_unique_resource_name("test", "cbp")
)
cbp_data["metadata"]["namespace"] = defaults.ROOK_CLUSTER_NAMESPACE
cbp_data["spec"]["replicated"]["size"] = replica
cbp_data["spec"]["failureDomain"] = failure_domain or get_failure_domin()
if compression:
cbp_data["spec"]["parameters"]["compression_mode"] = compression
cbp_obj = create_resource(**cbp_data)
cbp_obj.reload()
if verify:
assert verify_block_pool_exists(
cbp_obj.name
), f"Block pool {cbp_obj.name} does not exist"
return cbp_obj
def create_ceph_file_system(pool_name=None):
"""
Create a Ceph file system
** This method should not be used anymore **
** This method is for internal testing only **
Args:
pool_name (str): The pool name to create
Returns:
OCS: An OCS instance for the Ceph file system
"""
cfs_data = templating.load_yaml(constants.CEPHFILESYSTEM_YAML)
cfs_data["metadata"]["name"] = (
pool_name if pool_name else create_unique_resource_name("test", "cfs")
)
cfs_data["metadata"]["namespace"] = defaults.ROOK_CLUSTER_NAMESPACE
cfs_data = create_resource(**cfs_data)
cfs_data.reload()
assert validate_cephfilesystem(
cfs_data.name
), f"File system {cfs_data.name} does not exist"
return cfs_data
def default_storage_class(
interface_type,
):
"""
Return default storage class based on interface_type
Args:
interface_type (str): The type of the interface
(e.g. CephBlockPool, CephFileSystem)
Returns:
OCS: Existing StorageClass Instance
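Example (illustrative sketch; assumes an RBD-backed internal-mode cluster)::
sc_obj = default_storage_class(constants.CEPHBLOCKPOOL)
logger.info(f"Default RBD storage class is {sc_obj.name}")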
"""
external = config.DEPLOYMENT["external_mode"]
if interface_type == constants.CEPHBLOCKPOOL:
if external:
resource_name = constants.DEFAULT_EXTERNAL_MODE_STORAGECLASS_RBD
else:
resource_name = constants.DEFAULT_STORAGECLASS_RBD
base_sc = OCP(kind="storageclass", resource_name=resource_name)
elif interface_type == constants.CEPHFILESYSTEM:
if external:
resource_name = constants.DEFAULT_EXTERNAL_MODE_STORAGECLASS_CEPHFS
else:
resource_name = constants.DEFAULT_STORAGECLASS_CEPHFS
base_sc = OCP(kind="storageclass", resource_name=resource_name)
sc = OCS(**base_sc.data)
return sc
def create_storage_class(
interface_type,
interface_name,
secret_name,
reclaim_policy=constants.RECLAIM_POLICY_DELETE,
sc_name=None,
provisioner=None,
):
"""
Create a storage class
** This method should not be used anymore **
** This method is for internal testing only **
Args:
interface_type (str): The type of the interface
(e.g. CephBlockPool, CephFileSystem)
interface_name (str): The name of the interface
secret_name (str): The name of the secret
sc_name (str): The name of storage class to create
reclaim_policy (str): Type of reclaim policy. Defaults to 'Delete'
(eg., 'Delete', 'Retain')
provisioner (str): The provisioner to set in the storage class;
defaults to the CSI provisioner of the given interface
Returns:
OCS: An OCS instance for the storage class
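Example (illustrative sketch; the secret and pool come from the helpers in this module, and the overall test flow is an assumption)::
secret = create_secret(constants.CEPHBLOCKPOOL)
sc_obj = create_storage_class(
interface_type=constants.CEPHBLOCKPOOL,
interface_name=default_ceph_block_pool(),
secret_name=secret.name,
)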
"""
sc_data = dict()
if interface_type == constants.CEPHBLOCKPOOL:
sc_data = templating.load_yaml(constants.CSI_RBD_STORAGECLASS_YAML)
sc_data["parameters"]["csi.storage.k8s.io/node-stage-secret-name"] = secret_name
sc_data["parameters"][
"csi.storage.k8s.io/node-stage-secret-namespace"
] = defaults.ROOK_CLUSTER_NAMESPACE
interface = constants.RBD_INTERFACE
sc_data["provisioner"] = (
provisioner if provisioner else defaults.RBD_PROVISIONER
)
elif interface_type == constants.CEPHFILESYSTEM:
sc_data = templating.load_yaml(constants.CSI_CEPHFS_STORAGECLASS_YAML)
sc_data["parameters"]["csi.storage.k8s.io/node-stage-secret-name"] = secret_name
sc_data["parameters"][
"csi.storage.k8s.io/node-stage-secret-namespace"
] = defaults.ROOK_CLUSTER_NAMESPACE
interface = constants.CEPHFS_INTERFACE
sc_data["parameters"]["fsName"] = get_cephfs_name()
sc_data["provisioner"] = (
provisioner if provisioner else defaults.CEPHFS_PROVISIONER
)
sc_data["parameters"]["pool"] = interface_name
sc_data["metadata"]["name"] = (
sc_name
if sc_name
else create_unique_resource_name(f"test-{interface}", "storageclass")
)
sc_data["metadata"]["namespace"] = defaults.ROOK_CLUSTER_NAMESPACE
sc_data["parameters"]["csi.storage.k8s.io/provisioner-secret-name"] = secret_name
sc_data["parameters"][
"csi.storage.k8s.io/provisioner-secret-namespace"
] = defaults.ROOK_CLUSTER_NAMESPACE
sc_data["parameters"][
"csi.storage.k8s.io/controller-expand-secret-name"
] = secret_name
sc_data["parameters"][
"csi.storage.k8s.io/controller-expand-secret-namespace"
] = defaults.ROOK_CLUSTER_NAMESPACE
sc_data["parameters"]["clusterID"] = defaults.ROOK_CLUSTER_NAMESPACE
sc_data["reclaimPolicy"] = reclaim_policy
try:
del sc_data["parameters"]["userid"]
except KeyError:
pass
return create_resource(**sc_data)
def create_pvc(
sc_name,
pvc_name=None,
namespace=defaults.ROOK_CLUSTER_NAMESPACE,
size=None,
do_reload=True,
access_mode=constants.ACCESS_MODE_RWO,
volume_mode=None,
):
"""
Create a PVC
Args:
sc_name (str): The name of the storage class for the PVC to be
associated with
pvc_name (str): The name of the PVC to create
namespace (str): The namespace for the PVC creation
size (str): Size of pvc to create
do_reload (bool): True for wait for reloading PVC after its creation, False otherwise
access_mode (str): The access mode to be used for the PVC
volume_mode (str): Volume mode for rbd RWX pvc i.e. 'Block'
Returns:
PVC: PVC instance
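Example (illustrative sketch; the size and wait timeout are assumptions)::
sc_obj = default_storage_class(constants.CEPHBLOCKPOOL)
pvc_obj = create_pvc(sc_name=sc_obj.name, size="10Gi")
wait_for_resource_state(pvc_obj, "Bound", 90)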
"""
pvc_data = templating.load_yaml(constants.CSI_PVC_YAML)
pvc_data["metadata"]["name"] = (
pvc_name if pvc_name else create_unique_resource_name("test", "pvc")
)
pvc_data["metadata"]["namespace"] = namespace
pvc_data["spec"]["accessModes"] = [access_mode]
pvc_data["spec"]["storageClassName"] = sc_name
if size:
pvc_data["spec"]["resources"]["requests"]["storage"] = size
if volume_mode:
pvc_data["spec"]["volumeMode"] = volume_mode
ocs_obj = pvc.PVC(**pvc_data)
created_pvc = ocs_obj.create(do_reload=do_reload)
assert created_pvc, f"Failed to create resource {pvc_name}"
return ocs_obj
def create_multiple_pvcs(
sc_name,
namespace,
number_of_pvc=1,
size=None,
do_reload=False,
access_mode=constants.ACCESS_MODE_RWO,
burst=False,
):
"""
Create one or more PVC as a bulk or one by one
Args:
sc_name (str): The name of the storage class to provision the PVCs from
namespace (str): The namespace for the PVCs creation
number_of_pvc (int): Number of PVCs to be created
size (str): The size of the PVCs to create
do_reload (bool): True for wait for reloading PVC after its creation,
False otherwise
access_mode (str): The kind of access mode for PVC
burst (bool): True to create the PVCs in bulk via a single 'oc create'
call, False (default) to create them one by one
Returns:
list: List of PVC objects
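Example (illustrative sketch; the project object's `namespace` attribute, the storage class and the sizes are assumptions)::
proj = create_project()
pvc_objs = create_multiple_pvcs(
sc_name=constants.DEFAULT_STORAGECLASS_RBD,
namespace=proj.namespace,
number_of_pvc=5,
size="2Gi",
)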
"""
if not burst:
if access_mode == "ReadWriteMany" and "rbd" in sc_name:
volume_mode = "Block"
else:
volume_mode = None
return [
create_pvc(
sc_name=sc_name,
size=size,
namespace=namespace,
do_reload=do_reload,
access_mode=access_mode,
volume_mode=volume_mode,
)
for _ in range(number_of_pvc)
]
pvc_data = templating.load_yaml(constants.CSI_PVC_YAML)
pvc_data["metadata"]["namespace"] = namespace
pvc_data["spec"]["accessModes"] = [access_mode]
pvc_data["spec"]["storageClassName"] = sc_name
if size:
pvc_data["spec"]["resources"]["requests"]["storage"] = size
if access_mode == "ReadWriteMany" and "rbd" in sc_name:
pvc_data["spec"]["volumeMode"] = "Block"
else:
pvc_data["spec"]["volumeMode"] = None
# Creating temp directory to hold the files for the PVC creation
tmpdir = tempfile.mkdtemp()
logger.info("Creating the PVC yaml files for creation in bulk")
ocs_objs = []
for _ in range(number_of_pvc):
name = create_unique_resource_name("test", "pvc")
logger.info(f"Adding PVC with name {name}")
pvc_data["metadata"]["name"] = name
templating.dump_data_to_temp_yaml(pvc_data, f"{tmpdir}/{name}.yaml")
ocs_objs.append(pvc.PVC(**pvc_data))
logger.info("Creating all PVCs as bulk")
oc = OCP(kind="pod", namespace=defaults.ROOK_CLUSTER_NAMESPACE)
cmd = f"create -f {tmpdir}/"
oc.exec_oc_cmd(command=cmd, out_yaml_format=False)
# Giving the system 1 sec per PVC to complete the creation.
# This will prevent any other command from running in the system in this
# period of time.
logger.info(
f"Going to sleep for {number_of_pvc} sec. "
"before starting to verify that the PVCs were created."
)
time.sleep(number_of_pvc)
return ocs_objs
def verify_block_pool_exists(pool_name):
"""
Verify if a Ceph block pool exist
Args:
pool_name (str): The name of the Ceph block pool
Returns:
bool: True if the Ceph block pool exists, False otherwise
"""
logger.info(f"Verifying that block pool {pool_name} exists")
ct_pod = pod.get_ceph_tools_pod()
try:
for pools in TimeoutSampler(60, 3, ct_pod.exec_ceph_cmd, "ceph osd lspools"):
logger.info(f"POOLS are {pools}")
for pool in pools:
if pool_name in pool.get("poolname"):
return True
except TimeoutExpiredError:
return False
def get_pool_cr(pool_name):
"""
Get the pool CR even if the kind is unknown.
Args:
pool_name (str): The name of the pool to get the CR for.
Returns:
dict: The pool CR if the resource is found, None otherwise.
"""
logger.info(f"Checking if pool {pool_name} is kind of {constants.CEPHBLOCKPOOL}")
ocp_kind_cephblockpool = ocp.OCP(
kind=constants.CEPHBLOCKPOOL, namespace=config.ENV_DATA["cluster_namespace"]
)
pool_cr = ocp_kind_cephblockpool.get(resource_name=pool_name, dont_raise=True)
if pool_cr is not None:
return pool_cr
else:
logger.info(
f"Pool {pool_name} is not kind={constants.CEPHBLOCKPOOL}"
f", checking if it is kind={constants.CEPHFILESYSTEM}"
)
ocp_kind_cephfilesystem = ocp.OCP(
kind="CephFilesystem",
namespace=config.ENV_DATA["cluster_namespace"],
)
pool_cr = ocp_kind_cephfilesystem.get(resource_name=pool_name, dont_raise=True)
return pool_cr
def get_admin_key():
"""
Fetches admin key secret from Ceph
Returns:
str: The admin key
"""
ct_pod = pod.get_ceph_tools_pod()
out = ct_pod.exec_ceph_cmd("ceph auth get-key client.admin")
return out["key"]
def get_cephfs_data_pool_name():
"""
Fetches ceph fs datapool name from Ceph
Returns:
str: fs datapool name
"""
ct_pod = pod.get_ceph_tools_pod()
out = ct_pod.exec_ceph_cmd("ceph fs ls")
return out[0]["data_pools"][0]
def validate_cephfilesystem(fs_name):
"""
Verify CephFileSystem exists at Ceph and OCP
Args:
fs_name (str): The name of the Ceph FileSystem
Returns:
bool: True if the CephFileSystem is created on both the Ceph and OCP
sides, False otherwise
"""
cfs = ocp.OCP(
kind=constants.CEPHFILESYSTEM, namespace=defaults.ROOK_CLUSTER_NAMESPACE
)
ct_pod = pod.get_ceph_tools_pod()
ceph_validate = False
ocp_validate = False
result = cfs.get(resource_name=fs_name)
if result.get("metadata").get("name"):
logger.info("Filesystem %s got created from Openshift Side", fs_name)
ocp_validate = True
else:
logger.info("Filesystem %s was not created on the OpenShift side", fs_name)
return False
try:
for pools in TimeoutSampler(60, 3, ct_pod.exec_ceph_cmd, "ceph fs ls"):
for out in pools:
result = out.get("name")
if result == fs_name:
logger.info("FileSystem %s got created from Ceph Side", fs_name)
ceph_validate = True
break
else:
logger.error("FileSystem %s was not present at Ceph Side", fs_name)
ceph_validate = False
if ceph_validate:
break
except TimeoutExpiredError:
pass
return ceph_validate and ocp_validate
def get_all_storageclass_names():
"""
Function for getting all storageclass
Returns:
list: list of storageclass name
"""
sc_obj = ocp.OCP(
kind=constants.STORAGECLASS, namespace=defaults.ROOK_CLUSTER_NAMESPACE
)
result = sc_obj.get()
sample = result["items"]
storageclass = [
item.get("metadata").get("name")
for item in sample
if (
(item.get("metadata").get("name") not in constants.IGNORE_SC_GP2)
and (item.get("metadata").get("name") not in constants.IGNORE_SC_FLEX)
)
]
return storageclass
def delete_storageclasses(sc_objs):
"""
Function for Deleting storageclasses
Args:
sc_objs (list): List of SC objects for deletion
Returns:
bool: True if deletion is successful
"""
for sc in sc_objs:
logger.info("Deleting StorageClass with name %s", sc.name)
sc.delete()
return True
def get_cephblockpool_names():
"""
Function for getting all CephBlockPool
Returns:
list: list of cephblockpool name
"""
pool_obj = ocp.OCP(
kind=constants.CEPHBLOCKPOOL, namespace=defaults.ROOK_CLUSTER_NAMESPACE
)
result = pool_obj.get()
sample = result["items"]
pool_list = [item.get("metadata").get("name") for item in sample]
return pool_list
def delete_cephblockpools(cbp_objs):
"""
Function for deleting CephBlockPool
Args:
cbp_objs (list): List of CBP objects for deletion
Returns:
bool: True if deletion of CephBlockPool is successful
"""
for cbp in cbp_objs:
logger.info("Deleting CephBlockPool with name %s", cbp.name)
cbp.delete()
return True
def get_cephfs_name():
"""
Function to retrieve the CephFS name
Returns:
str: Name of CFS
"""
ct_pod = pod.get_ceph_tools_pod()
result = ct_pod.exec_ceph_cmd("ceph fs ls")
return result[0]["name"]
def pull_images(image_name):
"""
Function to pull images on all nodes
Args:
image_name (str): Name of the container image to be pulled
Returns: None
"""
node_objs = node.get_node_objs(node.get_worker_nodes())
for node_obj in node_objs:
logging.info(f'pulling image "{image_name}" on node {node_obj.name}')
assert node_obj.ocp.exec_oc_debug_cmd(
node_obj.name, cmd_list=[f"podman pull {image_name}"]
)
def run_io_with_rados_bench(**kw):
"""
A task for radosbench. Runs the radosbench command on the specified pod. If
parameters are not provided, the task assumes a few default parameters. This
task runs the command in synchronous fashion.
Args:
kw (dict): a dictionary of various radosbench parameters.
ex::
pool_name:pool
pg_num:number of pgs for pool
op: type of operation {read, write}
cleanup: True OR False
Returns:
ret: return value of radosbench command
"""
logger.info("Running radosbench task")
ceph_pods = kw.get("ceph_pods") # list of pod objects of ceph cluster
config = kw.get("config")
role = config.get("role", "client")
clients = [cpod for cpod in ceph_pods if role in cpod.roles]
idx = config.get("idx", 0)
client = clients[idx]
op = config.get("op", "write")
cleanup = ["--no-cleanup", "--cleanup"][config.get("cleanup", True)]
pool = config.get("pool")
block = str(config.get("size", 4 << 20))
time = config.get("time", 120)
time = str(time)
rados_bench = (
f"rados --no-log-to-stderr "
f"-b {block} "
f"-p {pool} "
f"bench "
f"{time} "
f"{op} "
f"{cleanup} "
)
try:
ret = client.exec_ceph_cmd(ceph_cmd=rados_bench)
except CommandFailed as ex:
logger.error(f"Rados bench failed\n Error is: {ex}")
return False
logger.info(ret)
logger.info("Finished radosbench")
return ret
def get_all_pvs():
"""
Gets all pv in openshift-storage namespace
Returns:
dict: Dict of all pv in openshift-storage namespace
"""
ocp_pv_obj = ocp.OCP(kind=constants.PV, namespace=defaults.ROOK_CLUSTER_NAMESPACE)
return ocp_pv_obj.get()
# TODO: revert counts of tries and delay,BZ 1726266
@retry(AssertionError, tries=20, delay=10, backoff=1)
def validate_pv_delete(pv_name):
"""
Validates that a PV is deleted after PVC deletion
Args:
pv_name (str): Name of the PV to validate deletion of
Returns:
bool: True if deletion is successful
Raises:
AssertionError: If pv is not deleted
"""
ocp_pv_obj = ocp.OCP(kind=constants.PV, namespace=defaults.ROOK_CLUSTER_NAMESPACE)
try:
if ocp_pv_obj.get(resource_name=pv_name):
msg = f"{constants.PV} {pv_name} is not deleted after PVC deletion"
raise AssertionError(msg)
except CommandFailed:
return True
def create_pods(pvc_objs, pod_factory, interface, pods_for_rwx=1, status=""):
"""
Create pods
Args:
pvc_objs (list): List of ocs_ci.ocs.resources.pvc.PVC instances
pod_factory (function): pod_factory function
interface (str): Interface type
pods_for_rwx (int): Number of pods to be created if access mode of
PVC is RWX
status (str): If provided, wait for desired state of each pod before
creating next one
Returns:
list: list of Pod objects
"""
pod_objs = []
for pvc_obj in pvc_objs:
volume_mode = getattr(
pvc_obj, "volume_mode", pvc_obj.get()["spec"]["volumeMode"]
)
access_mode = getattr(pvc_obj, "access_mode", pvc_obj.get_pvc_access_mode)
if volume_mode == "Block":
pod_dict = constants.CSI_RBD_RAW_BLOCK_POD_YAML
raw_block_pv = True
else:
raw_block_pv = False
pod_dict = ""
if access_mode == constants.ACCESS_MODE_RWX:
pod_obj_rwx = [
pod_factory(
interface=interface,
pvc=pvc_obj,
status=status,
pod_dict_path=pod_dict,
raw_block_pv=raw_block_pv,
)
for _ in range(1, pods_for_rwx)
]
pod_objs.extend(pod_obj_rwx)
pod_obj = pod_factory(
interface=interface,
pvc=pvc_obj,
status=status,
pod_dict_path=pod_dict,
raw_block_pv=raw_block_pv,
)
pod_objs.append(pod_obj)
return pod_objs
def create_build_from_docker_image(
image_name,
install_package,
namespace,
source_image="quay.io/ocsci/fedora",
source_image_label="latest",
):
"""
Allows to create a build config using a Dockerfile specified as an
argument, eg.::
$ oc new-build -D $'FROM centos:7\\nRUN yum install -y httpd'
creates a build with ``httpd`` installed.
Args:
image_name (str): Name of the image to be created
source_image (str): Source image to build docker image from,
defaults to Centos as base image
namespace (str): project where build config should be created
source_image_label (str): Tag to use along with the image name,
defaults to 'latest'
install_package (str): package to install over the base image
Returns:
ocs_ci.ocs.ocp.OCP (obj): The OCP object for the image
Fails on UnavailableBuildException exception if build creation
fails
"""
base_image = source_image + ":" + source_image_label
if config.DEPLOYMENT.get("disconnected"):
base_image = mirror_image(image=base_image)
cmd = f"yum install -y {install_package}"
http_proxy, https_proxy, no_proxy = get_cluster_proxies()
if http_proxy:
cmd = (
f"http_proxy={http_proxy} https_proxy={https_proxy} "
f"no_proxy='{no_proxy}' {cmd}"
)
docker_file = f"FROM {base_image}\n " f" RUN {cmd}\n" f"CMD tail -f /dev/null"
command = f"new-build -D $'{docker_file}' --name={image_name}"
kubeconfig = os.getenv("KUBECONFIG")
oc_cmd = f"oc -n {namespace} "
if kubeconfig:
oc_cmd += f"--kubeconfig {kubeconfig} "
oc_cmd += command
logger.info(f"Running command {oc_cmd}")
result = run(oc_cmd, stdout=PIPE, stderr=PIPE, timeout=15, shell=True)
if result.stderr.decode():
raise UnavailableBuildException(
f"Build creation failed with error: {result.stderr.decode()}"
)
out = result.stdout.decode()
logger.info(out)
if "Success" in out:
# Build becomes ready once build pod goes into Completed state
pod_obj = OCP(kind="Pod", resource_name=image_name)
if pod_obj.wait_for_resource(
condition="Completed",
resource_name=f"{image_name}" + "-1-build",
timeout=300,
sleep=30,
):
logger.info(f"build {image_name} ready")
set_image_lookup(image_name)
logger.info(f"image {image_name} can now be consumed")
image_stream_obj = OCP(kind="ImageStream", resource_name=image_name)
return image_stream_obj
else:
raise UnavailableBuildException("Build creation failed")
def set_image_lookup(image_name):
"""
Function to enable lookup, which allows reference to the image stream tag
in the image field of the object. Example::
$ oc set image-lookup mysql
$ oc run mysql --image=mysql
Args:
image_name (str): Name of the image stream to pull
the image locally
Returns:
str: output of set image-lookup command
"""
ocp_obj = ocp.OCP(kind="ImageStream")
command = f"set image-lookup {image_name}"
logger.info(f'image lookup for image"{image_name}" is set')
status = ocp_obj.exec_oc_cmd(command)
return status
def get_snapshot_time(interface, snap_name, status):
"""
Get the starting/ending creation time of a snapshot based on the
snapshot controller / CSI snapshotter logs
Args:
interface (str): The interface backing the snapshot's parent PVC
snap_name (str / list): Name of the snapshot(s) to look up;
a list will be a list of snapshot objects
status (str): the status that we want to get - Start / End
Returns:
datetime object: Time of snapshot(s) creation, None if not found
"""
def get_pattern_time(log, snapname, pattern):
"""
Get the time of pattern in the log
Args:
log (list): list of all lines in the log file
snapname (str): the name of the snapshot
pattern (str): the pattern that needs to be found in the log (start / bound)
Returns:
str: string of the pattern timestamp in the log, if not found None
"""
for line in log:
if re.search(snapname, line) and re.search(pattern, line):
return line.split(" ")[1]
return None
logs = ""
format = "%H:%M:%S.%f"
# the starting and ending time are taken from different logs,
# the start creation time is taken from the snapshot controller, while
# the end creation time is taken from the csi snapshot driver
if status.lower() == "start":
pattern = "Creating content for snapshot"
# Get the snapshot controller pod
pod_name = pod.get_csi_snapshoter_pod()
logs = pod.get_pod_logs(
pod_name, namespace="openshift-cluster-storage-operator"
)
elif status.lower() == "end":
pattern = "readyToUse true"
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
for log_pod in pod_name:
logs += pod.get_pod_logs(log_pod, "csi-snapshotter")
else:
logger.error(f"the status {status} is invalid.")
return None
logs = logs.split("\n")
stat = None
# Extract the time for the one PVC snapshot provisioning
if isinstance(snap_name, str):
stat = get_pattern_time(logs, snap_name, pattern)
# Extract the time for the list of PVCs snapshot provisioning
if isinstance(snap_name, list):
all_stats = []
for snapname in snap_name:
all_stats.append(get_pattern_time(logs, snapname.name, pattern))
all_stats = sorted(all_stats)
if status.lower() == "end":
stat = all_stats[-1] # return the highest time
elif status.lower() == "start":
stat = all_stats[0] # return the lowest time
if stat:
return datetime.datetime.strptime(stat, format)
else:
return None
def measure_snapshot_creation_time(interface, snap_name, snap_con_name):
"""
Measure Snapshot creation time based on logs
Args:
interface (str): The interface backing the snapshot's parent PVC
snap_name (str): Name of the snapshot, used to find the creation start time
snap_con_name (str): Name of the snapshot content, used to find the creation end time
Returns:
float: Creation time for the snapshot
"""
start = get_snapshot_time(interface, snap_name, status="start")
end = get_snapshot_time(interface, snap_con_name, status="end")
if start and end:
total = end - start
return total.total_seconds()
else:
return None
def get_provision_time(interface, pvc_name, status="start"):
"""
Get the starting/ending creation time of a PVC based on provisioner logs
Args:
interface (str): The interface backed the PVC
pvc_name (str / list): Name of the PVC(s) for creation time
the list will be list of pvc objects
status (str): the status that we want to get - Start / End
Returns:
datetime object: Time of PVC(s) creation
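Example (illustrative sketch; `pvc_obj` is assumed to be an existing, bound PVC object)::
start = get_provision_time(constants.CEPHBLOCKPOOL, pvc_obj.name, status="start")
end = get_provision_time(constants.CEPHBLOCKPOOL, pvc_obj.name, status="end")
logger.info(f"Provisioning took {(end - start).total_seconds()} seconds")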
"""
# Define the status that need to retrieve
operation = "started"
if status.lower() == "end":
operation = "succeeded"
format = "%H:%M:%S.%f"
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
# Extract the time for the one PVC provisioning
if isinstance(pvc_name, str):
stat = [i for i in logs if re.search(f"provision.*{pvc_name}.*{operation}", i)]
stat = stat[0].split(" ")[1]
# Extract the time for the list of PVCs provisioning
if isinstance(pvc_name, list):
all_stats = []
for pv_name in pvc_name:
name = pv_name.name
stat = [i for i in logs if re.search(f"provision.*{name}.*{operation}", i)]
stat = stat[0].split(" ")[1]
all_stats.append(stat)
all_stats = sorted(all_stats)
if status.lower() == "end":
stat = all_stats[-1] # return the highest time
elif status.lower() == "start":
stat = all_stats[0] # return the lowest time
return datetime.datetime.strptime(stat, format)
def get_start_creation_time(interface, pvc_name):
"""
Get the starting creation time of a PVC based on provisioner logs
Args:
interface (str): The interface backed the PVC
pvc_name (str): Name of the PVC for creation time measurement
Returns:
datetime object: Start time of PVC creation
"""
format = "%H:%M:%S.%f"
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
# Extract the starting time for the PVC provisioning
start = [i for i in logs if re.search(f"provision.*{pvc_name}.*started", i)]
start = start[0].split(" ")[1]
return datetime.datetime.strptime(start, format)
def get_end_creation_time(interface, pvc_name):
"""
Get the ending creation time of a PVC based on provisioner logs
Args:
interface (str): The interface backed the PVC
pvc_name (str): Name of the PVC for creation time measurement
Returns:
datetime object: End time of PVC creation
"""
format = "%H:%M:%S.%f"
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
# Extract the end time for the PVC provisioning
end = [i for i in logs if re.search(f"provision.*{pvc_name}.*succeeded", i)]
# End provisioning string may appear in logs several times, take here the latest one
end = end[-1].split(" ")[1]
return datetime.datetime.strptime(end, format)
def measure_pvc_creation_time(interface, pvc_name):
"""
Measure PVC creation time based on logs
Args:
interface (str): The interface backed the PVC
pvc_name (str): Name of the PVC for creation time measurement
Returns:
float: Creation time for the PVC
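Example (illustrative sketch; `pvc_obj` is an assumed, already created PVC object)::
creation_time = measure_pvc_creation_time(constants.CEPHFILESYSTEM, pvc_obj.name)
logger.info(f"PVC {pvc_obj.name} was created in {creation_time} seconds")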
"""
start = get_start_creation_time(interface=interface, pvc_name=pvc_name)
end = get_end_creation_time(interface=interface, pvc_name=pvc_name)
total = end - start
return total.total_seconds()
def measure_pvc_creation_time_bulk(interface, pvc_name_list, wait_time=60):
"""
Measure PVC creation time of bulk PVC based on logs.
Args:
interface (str): The interface backed the PVC
pvc_name_list (list): List of PVC Names for measuring creation time
wait_time (int): Seconds to wait before collecting CSI log
Returns:
pvc_dict (dict): Dictionary of pvc_name with creation time.
"""
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# due to some delay in CSI log generation added wait
time.sleep(wait_time)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
loop_counter = 0
while True:
no_data_list = list()
for name in pvc_name_list:
# check if PV data present in CSI logs
start = [i for i in logs if re.search(f"provision.*{name}.*started", i)]
end = [i for i in logs if re.search(f"provision.*{name}.*succeeded", i)]
if not start or not end:
no_data_list.append(name)
if no_data_list:
# Clear and get CSI logs after 60secs
logging.info(f"PVC count without CSI create log data {len(no_data_list)}")
logs.clear()
time.sleep(wait_time)
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
loop_counter += 1
if loop_counter >= 3:
logging.info("Waited for more than 3mins still no data")
raise UnexpectedBehaviour(
f"There is no pvc creation data in CSI logs for {no_data_list}"
)
continue
else:
break
pvc_dict = dict()
format = "%H:%M:%S.%f"
for pvc_name in pvc_name_list:
# Extract the starting time for the PVC provisioning
start = [i for i in logs if re.search(f"provision.*{pvc_name}.*started", i)]
start = start[0].split(" ")[1]
start_time = datetime.datetime.strptime(start, format)
# Extract the end time for the PVC provisioning
end = [i for i in logs if re.search(f"provision.*{pvc_name}.*succeeded", i)]
end = end[0].split(" ")[1]
end_time = datetime.datetime.strptime(end, format)
total = end_time - start_time
pvc_dict[pvc_name] = total.total_seconds()
return pvc_dict
def measure_pv_deletion_time_bulk(interface, pv_name_list, wait_time=60):
"""
Measure PV deletion time of bulk PV, based on logs.
Args:
interface (str): The interface backed the PV
pv_name_list (list): List of PV Names for measuring deletion time
wait_time (int): Seconds to wait before collecting CSI log
Returns:
pv_dict (dict): Dictionary of pv_name with deletion time.
"""
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# due to some delay in CSI log generation added wait
time.sleep(wait_time)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
loop_counter = 0
while True:
no_data_list = list()
for pv in pv_name_list:
# check if PV data present in CSI logs
start = [i for i in logs if re.search(f'delete "{pv}": started', i)]
end = [i for i in logs if re.search(f'delete "{pv}": succeeded', i)]
if not start or not end:
no_data_list.append(pv)
if no_data_list:
# Clear and get CSI logs after 60secs
logging.info(f"PV count without CSI delete log data {len(no_data_list)}")
logs.clear()
time.sleep(wait_time)
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
loop_counter += 1
if loop_counter >= 3:
logging.info("Waited for more than 3mins still no data")
raise UnexpectedBehaviour(
f"There is no pv deletion data in CSI logs for {no_data_list}"
)
continue
else:
break
pv_dict = dict()
format = "%H:%M:%S.%f"
for pv_name in pv_name_list:
# Extract the deletion start time for the PV
start = [i for i in logs if re.search(f'delete "{pv_name}": started', i)]
start = start[0].split(" ")[1]
start_time = datetime.datetime.strptime(start, format)
# Extract the deletion end time for the PV
end = [i for i in logs if re.search(f'delete "{pv_name}": succeeded', i)]
end = end[0].split(" ")[1]
end_time = datetime.datetime.strptime(end, format)
total = end_time - start_time
pv_dict[pv_name] = total.total_seconds()
return pv_dict
def get_start_deletion_time(interface, pv_name):
"""
Get the starting deletion time of a PVC based on provisioner logs
Args:
interface (str): The interface backed the PVC
pv_name (str): Name of the PV for deletion time measurement
Returns:
datetime object: Start time of PVC deletion
"""
format = "%H:%M:%S.%f"
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
# Extract the starting time for the PVC deletion
start = [i for i in logs if re.search(f'delete "{pv_name}": started', i)]
start = start[0].split(" ")[1]
return datetime.datetime.strptime(start, format)
def get_end_deletion_time(interface, pv_name):
"""
Get the ending deletion time of a PVC based on provisioner logs
Args:
interface (str): The interface backed the PVC
pv_name (str): Name of the PV for deletion time measurement
Returns:
datetime object: End time of PVC deletion
"""
format = "%H:%M:%S.%f"
# Get the correct provisioner pod based on the interface
pod_name = pod.get_csi_provisioner_pod(interface)
# get the logs from the csi-provisioner containers
logs = pod.get_pod_logs(pod_name[0], "csi-provisioner")
logs += pod.get_pod_logs(pod_name[1], "csi-provisioner")
logs = logs.split("\n")
# Extract the end time for the PV deletion
end = [i for i in logs if re.search(f'delete "{pv_name}": succeeded', i)]
end = end[0].split(" ")[1]
return datetime.datetime.strptime(end, format)
def measure_pvc_deletion_time(interface, pv_name):
"""
Measure PVC deletion time based on logs
Args:
interface (str): The interface backed the PVC
pv_name (str): Name of the PV for deletion time measurement
Returns:
float: Deletion time for the PVC
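Example (illustrative sketch; reading the backing PV name via `backed_pv` is an assumption about the PVC object)::
pv_name = pvc_obj.backed_pv
pvc_obj.delete()
deletion_time = measure_pvc_deletion_time(constants.CEPHBLOCKPOOL, pv_name)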
"""
start = get_start_deletion_time(interface=interface, pv_name=pv_name)
end = get_end_deletion_time(interface=interface, pv_name=pv_name)
total = end - start
return total.total_seconds()
def pod_start_time(pod_obj):
"""
Function to measure the time taken for a pod's container(s) to get into the
running state, by measuring the difference between the pod's startTime and
each container's startedAt timestamp (the time when the container actually
went into the running state)
Args:
pod_obj(obj): pod object to measure start time
Returns:
containers_start_time(dict):
Returns the name and start time of container(s) in a pod
"""
time_format = "%Y-%m-%dT%H:%M:%SZ"
containers_start_time = {}
start_time = pod_obj.data["status"]["startTime"]
start_time = datetime.datetime.strptime(start_time, time_format)
for container in range(len(pod_obj.data["status"]["containerStatuses"])):
started_time = pod_obj.data["status"]["containerStatuses"][container]["state"][
"running"
]["startedAt"]
started_time = datetime.datetime.strptime(started_time, time_format)
container_name = pod_obj.data["status"]["containerStatuses"][container]["name"]
container_start_time = (started_time - start_time).seconds
containers_start_time[container_name] = container_start_time
return containers_start_time
def get_default_storage_class():
"""
Get the default StorageClass(es)
Returns:
list: default StorageClass(es) list
"""
default_sc_obj = ocp.OCP(kind="StorageClass")
storage_classes = default_sc_obj.get().get("items")
storage_classes = [
sc for sc in storage_classes if "annotations" in sc.get("metadata")
]
return [
sc.get("metadata").get("name")
for sc in storage_classes
if sc.get("metadata")
.get("annotations")
.get("storageclass.kubernetes.io/is-default-class")
== "true"
]
def change_default_storageclass(scname):
"""
Change the default StorageClass to the given SC name
Args:
scname (str): StorageClass name
Returns:
bool: True on success
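Example (illustrative sketch; assumes the RBD storage class is deployed)::
change_default_storageclass(constants.DEFAULT_STORAGECLASS_RBD)
assert constants.DEFAULT_STORAGECLASS_RBD in get_default_storage_class()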
"""
default_sc = get_default_storage_class()
ocp_obj = ocp.OCP(kind="StorageClass")
if default_sc:
# Change the existing default Storageclass annotation to false
patch = (
' \'{"metadata": {"annotations":'
'{"storageclass.kubernetes.io/is-default-class"'
':"false"}}}\' '
)
patch_cmd = f"patch storageclass {default_sc} -p" + patch
ocp_obj.exec_oc_cmd(command=patch_cmd)
# Change the new storageclass to default
patch = (
' \'{"metadata": {"annotations":'
'{"storageclass.kubernetes.io/is-default-class"'
':"true"}}}\' '
)
patch_cmd = f"patch storageclass {scname} -p" + patch
ocp_obj.exec_oc_cmd(command=patch_cmd)
return True
def is_volume_present_in_backend(interface, image_uuid, pool_name=None):
"""
Check whether Image/Subvolume is present in the backend.
Args:
interface (str): The interface backed the PVC
image_uuid (str): Part of VolID which represents corresponding
image/subvolume in backend, eg:
``oc get pv/<volumeName> -o jsonpath='{.spec.csi.volumeHandle}'``
Output is the CSI generated VolID and looks like:
``0001-000c-rook-cluster-0000000000000001-f301898c-a192-11e9-852a-1eeeb6975c91``
where image_uuid is ``f301898c-a192-11e9-852a-1eeeb6975c91``
pool_name (str): Name of the rbd-pool if interface is CephBlockPool
Returns:
bool: True if volume is present and False if volume is not present
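Example (illustrative sketch; the image UUID is the sample value from above)::
present = is_volume_present_in_backend(
interface=constants.CEPHBLOCKPOOL,
image_uuid="f301898c-a192-11e9-852a-1eeeb6975c91",
pool_name=default_ceph_block_pool(),
)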
"""
cmd = ""
valid_error = []
ct_pod = pod.get_ceph_tools_pod()
if interface == constants.CEPHBLOCKPOOL:
valid_error = [f"error opening image csi-vol-{image_uuid}"]
cmd = f"rbd info -p {pool_name} csi-vol-{image_uuid}"
if interface == constants.CEPHFILESYSTEM:
valid_error = [
f"Subvolume 'csi-vol-{image_uuid}' not found",
f"subvolume 'csi-vol-{image_uuid}' does not exist",
]
cmd = (
f"ceph fs subvolume getpath {get_cephfs_name()}"
f" csi-vol-{image_uuid} csi"
)
try:
ct_pod.exec_ceph_cmd(ceph_cmd=cmd, format="json")
logger.info(
f"Verified: Volume corresponding to uuid {image_uuid} exists " f"in backend"
)
return True
except CommandFailed as ecf:
assert any([error in str(ecf) for error in valid_error]), (
f"Error occurred while verifying volume is present in backend: "
f"{str(ecf)} ImageUUID: {image_uuid}. Interface type: {interface}"
)
logger.info(
f"Volume corresponding to uuid {image_uuid} does not exist " f"in backend"
)
return False
def verify_volume_deleted_in_backend(
interface, image_uuid, pool_name=None, timeout=180
):
"""
Ensure that Image/Subvolume is deleted in the backend.
Args:
interface (str): The interface backed the PVC
image_uuid (str): Part of VolID which represents corresponding
image/subvolume in backend, eg:
``oc get pv/<volumeName> -o jsonpath='{.spec.csi.volumeHandle}'``
Output is the CSI generated VolID and looks like:
``0001-000c-rook-cluster-0000000000000001-f301898c-a192-11e9-852a-1eeeb6975c91``
where image_uuid is ``f301898c-a192-11e9-852a-1eeeb6975c91``
pool_name (str): Name of the rbd-pool if interface is CephBlockPool
timeout (int): Wait time for the volume to be deleted.
Returns:
bool: True if volume is deleted before timeout.
False if volume is not deleted.
"""
try:
for ret in TimeoutSampler(
timeout,
2,
is_volume_present_in_backend,
interface=interface,
image_uuid=image_uuid,
pool_name=pool_name,
):
if not ret:
break
logger.info(
f"Verified: Volume corresponding to uuid {image_uuid} is deleted "
f"in backend"
)
return True
except TimeoutExpiredError:
logger.error(
f"Volume corresponding to uuid {image_uuid} is not deleted " f"in backend"
)
# Log 'ceph progress' and 'ceph rbd task list' for debugging purpose
ct_pod = pod.get_ceph_tools_pod()
ct_pod.exec_ceph_cmd("ceph progress json", format=None)
ct_pod.exec_ceph_cmd("ceph rbd task list")
return False
def delete_volume_in_backend(img_uuid, pool_name=None):
"""
Delete an Image/Subvolume in the backend
Args:
img_uuid (str): Part of VolID which represents corresponding
image/subvolume in backend, eg:
``oc get pv/<volumeName> -o jsonpath='{.spec.csi.volumeHandle}'``
Output is the CSI generated VolID and looks like:
``0001-000c-rook-cluster-0000000000000001-f301898c-a192-11e9-852a-1eeeb6975c91``
where image_uuid is ``f301898c-a192-11e9-852a-1eeeb6975c91``
pool_name (str): The name of the pool
Returns:
bool: True if image deleted successfully
False if:
Pool not found
image not found
image not deleted
"""
cmd = ""
valid_error = []
pool_cr = get_pool_cr(pool_name)
if pool_cr is not None:
if pool_cr["kind"] == "CephFilesystem":
interface = "CephFileSystem"
else:
interface = pool_cr["kind"]
logger.info(f"pool {pool_cr} kind is {interface}")
else:
logger.info(
f"Pool {pool_name} has no kind of "
f"{constants.CEPHBLOCKPOOL} "
f"or {constants.CEPHFILESYSTEM}"
)
return False
# Checking if image is present before trying to delete
image_present_results = is_volume_present_in_backend(
interface=interface, image_uuid=img_uuid, pool_name=pool_name
)
# In case the image is present, delete it
if image_present_results:
if interface == constants.CEPHBLOCKPOOL:
logger.info(
f"Trying to delete image csi-vol-{img_uuid} from pool {pool_name}"
)
valid_error = ["No such file or directory"]
cmd = f"rbd rm -p {pool_name} csi-vol-{img_uuid}"
if interface == constants.CEPHFILESYSTEM:
logger.info(
f"Trying to delete image csi-vol-{img_uuid} from pool {pool_name}"
)
valid_error = [
f"Subvolume 'csi-vol-{img_uuid}' not found",
f"subvolume 'csi-vol-{img_uuid}' does not exist",
]
cmd = f"ceph fs subvolume rm {get_cephfs_name()} csi-vol-{img_uuid} csi"
ct_pod = pod.get_ceph_tools_pod()
try:
ct_pod.exec_ceph_cmd(ceph_cmd=cmd, format=None)
except CommandFailed as ecf:
if any([error in str(ecf) for error in valid_error]):
logger.info(
f"Error occurred while deleting volume in backend: "
f"{str(ecf)} ImageUUID: {img_uuid}. Interface type: {interface}"
)
return False
verify_img_delete_result = is_volume_present_in_backend(
interface=interface, image_uuid=img_uuid, pool_name=pool_name
)
if not verify_img_delete_result:
logger.info(f"Image csi-vol-{img_uuid} deleted successfully")
return True
else:
logger.info(f"Image csi-vol-{img_uuid} not deleted successfully")
return False
return False
def create_serviceaccount(namespace):
"""
Create a Serviceaccount
Args:
namespace (str): The namespace for the serviceaccount creation
Returns:
OCS: An OCS instance for the service_account
"""
service_account_data = templating.load_yaml(constants.SERVICE_ACCOUNT_YAML)
service_account_data["metadata"]["name"] = create_unique_resource_name(
"sa", "serviceaccount"
)
service_account_data["metadata"]["namespace"] = namespace
return create_resource(**service_account_data)
def get_serviceaccount_obj(sa_name, namespace):
"""
Get serviceaccount obj
Args:
sa_name (str): Service Account name
namespace (str): The namespace for the serviceaccount creation
Returns:
OCS: An OCS instance for the service_account
"""
ocp_sa_obj = ocp.OCP(kind=constants.SERVICE_ACCOUNT, namespace=namespace)
try:
sa_dict = ocp_sa_obj.get(resource_name=sa_name)
return OCS(**sa_dict)
except CommandFailed:
logger.error("ServiceAccount not found in specified namespace")
def validate_scc_policy(sa_name, namespace):
"""
Validate that the serviceaccount is added to the privileged SCC
Args:
sa_name (str): Service Account name
namespace (str): The namespace for the serviceaccount creation
Returns:
bool: True if sa_name is present in the privileged SCC, else False
"""
sa_name = f"system:serviceaccount:{namespace}:{sa_name}"
logger.info(sa_name)
ocp_scc_obj = ocp.OCP(kind=constants.SCC, namespace=namespace)
scc_dict = ocp_scc_obj.get(resource_name=constants.PRIVILEGED)
scc_users_list = scc_dict.get("users")
for scc_user in scc_users_list:
if scc_user == sa_name:
return True
return False
def add_scc_policy(sa_name, namespace):
"""
Adding ServiceAccount to scc privileged
Args:
sa_name (str): ServiceAccount name
namespace (str): The namespace for the scc_policy creation
"""
ocp = OCP()
out = ocp.exec_oc_cmd(
command=f"adm policy add-scc-to-user privileged system:serviceaccount:{namespace}:{sa_name}",
out_yaml_format=False,
)
logger.info(out)
def remove_scc_policy(sa_name, namespace):
"""
Removing ServiceAccount from scc privileged
Args:
sa_name (str): ServiceAccount name
namespace (str): The namespace for the scc_policy deletion
"""
ocp = OCP()
out = ocp.exec_oc_cmd(
command=f"adm policy remove-scc-from-user privileged system:serviceaccount:{namespace}:{sa_name}",
out_yaml_format=False,
)
logger.info(out)
def craft_s3_command(cmd, mcg_obj=None, api=False):
"""
Crafts the AWS CLI S3 command, including the
login credentials and the command to be run
Args:
cmd: The AWSCLI command to run
mcg_obj: An MCG object containing the MCG S3 connection credentials
api: True if the call is for s3api, false if s3
Returns:
str: The crafted command, ready to be executed on the pod
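Example (illustrative sketch; `mcg_obj` and `awscli_pod` are assumed to be provided by test fixtures)::
cmd = craft_s3_command("ls", mcg_obj=mcg_obj)
awscli_pod.exec_cmd_on_pod(cmd)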
"""
api = "api" if api else ""
if mcg_obj:
base_command = (
f'sh -c "AWS_CA_BUNDLE={constants.SERVICE_CA_CRT_AWSCLI_PATH} '
f"AWS_ACCESS_KEY_ID={mcg_obj.access_key_id} "
f"AWS_SECRET_ACCESS_KEY={mcg_obj.access_key} "
f"AWS_DEFAULT_REGION={mcg_obj.region} "
f"aws s3{api} "
f"--endpoint={mcg_obj.s3_internal_endpoint} "
)
string_wrapper = '"'
else:
base_command = f"aws s3{api} --no-sign-request "
string_wrapper = ""
return f"{base_command}{cmd}{string_wrapper}"
def wait_for_resource_count_change(
func_to_use,
previous_num,
namespace,
change_type="increase",
min_difference=1,
timeout=20,
interval=2,
**func_kwargs,
):
"""
Wait for a change in total count of PVC or pod
Args:
func_to_use (function): Function to be used to fetch resource info
Supported functions: pod.get_all_pvcs(), pod.get_all_pods()
previous_num (int): Previous number of pods/PVCs for comparison
namespace (str): Name of the namespace
change_type (str): Type of change to check. Accepted values are
'increase' and 'decrease'. Default is 'increase'.
min_difference (int): Minimum required difference in PVC/pod count
timeout (int): Maximum wait time in seconds
interval (int): Time in seconds to wait between consecutive checks
Returns:
bool: True if the difference in count is greater than or equal to
'min_difference'. False in case of timeout.
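Example (illustrative sketch; the previous pod count and timeout are assumptions)::
assert wait_for_resource_count_change(
func_to_use=pod.get_all_pods,
previous_num=3,
namespace=config.ENV_DATA["cluster_namespace"],
change_type="increase",
timeout=120,
)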
"""
try:
for sample in TimeoutSampler(
timeout, interval, func_to_use, namespace, **func_kwargs
):
if func_to_use == pod.get_all_pods:
current_num = len(sample)
else:
current_num = len(sample["items"])
if change_type == "increase":
count_diff = current_num - previous_num
else:
count_diff = previous_num - current_num
if count_diff >= min_difference:
return True
except TimeoutExpiredError:
return False
def verify_pv_mounted_on_node(node_pv_dict):
"""
Check if mount point of a PV exists on a node
Args:
node_pv_dict (dict): Node to PV list mapping
eg: {'node1': ['pv1', 'pv2', 'pv3'], 'node2': ['pv4', 'pv5']}
Returns:
dict: Node to existing PV list mapping
eg: {'node1': ['pv1', 'pv3'], 'node2': ['pv5']}
"""
existing_pvs = {}
for node_name, pvs in node_pv_dict.items():
cmd = f"oc debug nodes/{node_name} -- df"
df_on_node = run_cmd(cmd)
existing_pvs[node_name] = []
for pv_name in pvs:
if f"/pv/{pv_name}/" in df_on_node:
existing_pvs[node_name].append(pv_name)
return existing_pvs
def converge_lists(list_to_converge):
"""
Function to flatten and remove the sublist created during future obj
Args:
list_to_converge (list): arg list of lists, eg: [[1,2],[3,4]]
Returns:
list (list): return converged list eg: [1,2,3,4]
"""
return [item for sublist in list_to_converge for item in sublist]
def create_multiple_pvc_parallel(sc_obj, namespace, number_of_pvc, size, access_modes):
"""
Function to create multiple PVCs in parallel using threads
Function will create PVCs based on the available access modes
Args:
sc_obj (OCS): Storage Class object
namespace (str): The namespace for creating pvc
number_of_pvc (int): Number of PVCs to be created
size (str): size of the pvc eg: '10Gi'
access_modes (list): List of access modes for PVC creation
Returns:
pvc_objs_list (list): List of pvc objs created in function
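Example (illustrative sketch; the storage class object and the namespace name are assumptions)::
pvc_objs = create_multiple_pvc_parallel(
sc_obj=default_storage_class(constants.CEPHFILESYSTEM),
namespace="my-test-namespace",
number_of_pvc=10,
size="5Gi",
access_modes=[constants.ACCESS_MODE_RWO, constants.ACCESS_MODE_RWX],
)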
"""
obj_status_list, result_lists = ([] for i in range(2))
with ThreadPoolExecutor() as executor:
for mode in access_modes:
result_lists.append(
executor.submit(
create_multiple_pvcs,
sc_name=sc_obj.name,
namespace=namespace,
number_of_pvc=number_of_pvc,
access_mode=mode,
size=size,
)
)
result_list = [result.result() for result in result_lists]
pvc_objs_list = converge_lists(result_list)
# Check for all the pvcs in Bound state
with ThreadPoolExecutor() as executor:
for objs in pvc_objs_list:
obj_status_list.append(
executor.submit(wait_for_resource_state, objs, "Bound", 90)
)
if False in [obj.result() for obj in obj_status_list]:
raise TimeoutExpiredError
return pvc_objs_list
def create_pods_parallel(
pvc_list,
namespace,
interface,
pod_dict_path=None,
sa_name=None,
raw_block_pv=False,
dc_deployment=False,
node_selector=None,
):
"""
Function to create pods in parallel
Args:
pvc_list (list): List of pvcs to be attached in pods
namespace (str): The namespace for creating pod
interface (str): The interface backed the PVC
pod_dict_path (str): pod_dict_path for yaml
sa_name (str): sa_name for providing permission
raw_block_pv (bool): Either RAW block or not
dc_deployment (bool): Either DC deployment or not
node_selector (dict): dict of key-value pair to be used for nodeSelector field
eg: {'nodetype': 'app-pod'}
Returns:
pod_objs (list): Returns list of pods created
"""
future_pod_objs = []
# Added 300 sec wait time since in scale test once the setup has more
# PODs time taken for the pod to be up will be based on resource available
wait_time = 300
if raw_block_pv and not pod_dict_path:
pod_dict_path = constants.CSI_RBD_RAW_BLOCK_POD_YAML
with ThreadPoolExecutor() as executor:
for pvc_obj in pvc_list:
future_pod_objs.append(
executor.submit(
create_pod,
interface_type=interface,
pvc_name=pvc_obj.name,
do_reload=False,
namespace=namespace,
raw_block_pv=raw_block_pv,
pod_dict_path=pod_dict_path,
sa_name=sa_name,
dc_deployment=dc_deployment,
node_selector=node_selector,
)
)
pod_objs = [pvc_obj.result() for pvc_obj in future_pod_objs]
# Check for all the pods are in Running state
# The pod creation above does not wait for the pods to be created because threads are used
with ThreadPoolExecutor() as executor:
for obj in pod_objs:
future_pod_objs.append(
executor.submit(
wait_for_resource_state, obj, "Running", timeout=wait_time
)
)
# If pods not up raise exception/failure
if False in [obj.result() for obj in future_pod_objs]:
raise TimeoutExpiredError
return pod_objs
def delete_objs_parallel(obj_list):
"""
Function to delete objs specified in list
Args:
obj_list(list): List can be obj of pod, pvc, etc
Returns:
bool: True once all objects have been deleted
"""
threads = list()
for obj in obj_list:
process = threading.Thread(target=obj.delete)
process.start()
threads.append(process)
for process in threads:
process.join()
return True
def memory_leak_analysis(median_dict):
"""
Function to analyse memory leak after execution of a test case. The memory
leak is analyzed based on the top output "RES" value of the ceph-osd daemon,
i.e. ``list[7]`` in code.
More Detail on Median value: For calculating memory leak require a constant
value, which should not be start or end of test, so calculating it by
getting memory for 180 sec before TC execution and take a median out of it.
Memory value could be different for each nodes, so identify constant value
for each node and update in median_dict
Args:
median_dict (dict): dict of worker nodes and respective median value
eg: median_dict = {'worker_node_1':102400, 'worker_node_2':204800, ...}
Usage::
test_case(.., memory_leak_function):
.....
median_dict = helpers.get_memory_leak_median_value()
.....
TC execution part, memory_leak_fun will capture data
....
helpers.memory_leak_analysis(median_dict)
....
"""
# dict to store memory leak difference for each worker
diff = {}
for worker in node.get_worker_nodes():
memory_leak_data = []
if os.path.exists(f"/tmp/{worker}-top-output.txt"):
with open(f"/tmp/{worker}-top-output.txt", "r") as f:
data = f.readline()
list = data.split(" ")
list = [i for i in list if i]
memory_leak_data.append(list[7])
else:
logging.info(f"worker {worker} memory leak file not found")
raise UnexpectedBehaviour
number_of_lines = len(memory_leak_data) - 1
# Get the start value from the median_dict arg for the respective worker
start_value = median_dict[f"{worker}"]
end_value = memory_leak_data[number_of_lines]
logging.info(f"Median value {start_value}")
logging.info(f"End value {end_value}")
# Convert the values to kb for calculations
if start_value.__contains__("g"):
start_value = float(1024 ** 2 * float(start_value[:-1]))
elif start_value.__contains__("m"):
start_value = float(1024 * float(start_value[:-1]))
else:
start_value = float(start_value)
if end_value.__contains__("g"):
end_value = float(1024 ** 2 * float(end_value[:-1]))
elif end_value.__contains__("m"):
end_value = float(1024 * float(end_value[:-1]))
else:
end_value = float(end_value)
# Calculate the percentage of diff between start and end value
# Based on value decide TC pass or fail
diff[worker] = ((end_value - start_value) / start_value) * 100
logging.info(f"Percentage diff in start and end value {diff[worker]}")
if diff[worker] <= 20:
logging.info(f"No memory leak in worker {worker} passing the test")
else:
logging.info(f"There is a memory leak in worker {worker}")
logging.info(f"Memory median value start of the test {start_value}")
logging.info(f"Memory value end of the test {end_value}")
raise UnexpectedBehaviour
def get_memory_leak_median_value():
"""
Function to calculate memory leak Median value by collecting the data for 180 sec
and find the median value which will be considered as starting point
to evaluate memory leak using "RES" value of ceph-osd daemon i.e. list[7] in code
Returns:
median_dict (dict): dict of worker nodes and respective median value
"""
median_dict = {}
timeout = 180 # wait for 180 sec to evaluate memory leak median data.
logger.info(f"waiting for {timeout} sec to evaluate the median value")
time.sleep(timeout)
for worker in node.get_worker_nodes():
memory_leak_data = []
if os.path.exists(f"/tmp/{worker}-top-output.txt"):
with open(f"/tmp/{worker}-top-output.txt", "r") as f:
data = f.readline()
list = data.split(" ")
list = [i for i in list if i]
memory_leak_data.append(list[7])
else:
logging.info(f"worker {worker} memory leak file not found")
raise UnexpectedBehaviour
median_dict[f"{worker}"] = statistics.median(memory_leak_data)
return median_dict
def refresh_oc_login_connection(user=None, password=None):
"""
Function to refresh oc user login
Default login using kubeadmin user and password
Args:
user (str): Username to login
password (str): Password to login
"""
user = user or config.RUN["username"]
if not password:
filename = os.path.join(
config.ENV_DATA["cluster_path"], config.RUN["password_location"]
)
with open(filename) as f:
password = f.read()
ocs_obj = ocp.OCP()
ocs_obj.login(user=user, password=password)
def rsync_kubeconf_to_node(node_name):
"""
Function to copy kubeconfig to OCP node
Args:
node_name (str): OCP node to copy kubeconfig to if not present
"""
# ocp_obj = ocp.OCP()
filename = os.path.join(
config.ENV_DATA["cluster_path"], config.RUN["kubeconfig_location"]
)
file_path = os.path.dirname(filename)
master_list = node.get_master_nodes()
ocp_obj = ocp.OCP()
check_auth = "auth"
check_conf = "kubeconfig"
node_path = "/home/core/"
if check_auth not in ocp_obj.exec_oc_debug_cmd(
node=master_list[0], cmd_list=[f"ls {node_path}"]
):
ocp.rsync(src=file_path, dst=f"{node_path}", node=node_name, dst_node=True)
elif check_conf not in ocp_obj.exec_oc_debug_cmd(
node=master_list[0], cmd_list=[f"ls {node_path}auth"]
):
ocp.rsync(src=file_path, dst=f"{node_path}", node=node_name, dst_node=True)
def create_dummy_osd(deployment):
"""
Replace one of OSD pods with pod that contains all data from original
OSD but doesn't run osd daemon. This can be used e.g. for direct access
to Ceph Placement Groups.
Args:
deployment (str): Name of deployment to use
Returns:
list: first item is dummy deployment object, second item is dummy pod
object
"""
oc = OCP(
kind=constants.DEPLOYMENT, namespace=config.ENV_DATA.get("cluster_namespace")
)
osd_data = oc.get(deployment)
dummy_deployment = create_unique_resource_name("dummy", "osd")
osd_data["metadata"]["name"] = dummy_deployment
osd_containers = osd_data.get("spec").get("template").get("spec").get("containers")
# get osd container spec
original_osd_args = osd_containers[0].get("args")
osd_data["spec"]["template"]["spec"]["containers"][0]["args"] = []
osd_data["spec"]["template"]["spec"]["containers"][0]["command"] = [
"/bin/bash",
"-c",
"sleep infinity",
]
osd_file = tempfile.NamedTemporaryFile(
mode="w+", prefix=dummy_deployment, delete=False
)
with open(osd_file.name, "w") as temp:
yaml.dump(osd_data, temp)
oc.create(osd_file.name)
# downscale the original deployment and start dummy deployment instead
oc.exec_oc_cmd(f"scale --replicas=0 deployment/{deployment}")
oc.exec_oc_cmd(f"scale --replicas=1 deployment/{dummy_deployment}")
osd_list = pod.get_osd_pods()
dummy_pod = [pod for pod in osd_list if dummy_deployment in pod.name][0]
wait_for_resource_state(
resource=dummy_pod, state=constants.STATUS_RUNNING, timeout=60
)
ceph_init_cmd = "/rook/tini" + " " + " ".join(original_osd_args)
try:
logger.info("Following command should expire after 7 seconds")
dummy_pod.exec_cmd_on_pod(ceph_init_cmd, timeout=7)
except TimeoutExpired:
logger.info("Killing /rook/tini process")
try:
dummy_pod.exec_sh_cmd_on_pod(
"kill $(ps aux | grep '[/]rook/tini' | awk '{print $2}')"
)
except CommandFailed:
pass
return dummy_deployment, dummy_pod
def get_failure_domin():
"""
Function is used to get the failure domain of a pool
Returns:
str: Failure domain from cephblockpool
"""
ct_pod = pod.get_ceph_tools_pod()
out = ct_pod.exec_ceph_cmd(ceph_cmd="ceph osd crush rule dump", format="json")
assert out, "Failed to get cmd output"
for crush_rule in out:
if constants.CEPHBLOCKPOOL.lower() in crush_rule.get("rule_name"):
for steps in crush_rule.get("steps"):
if "type" in steps:
return steps.get("type")
def wait_for_ct_pod_recovery():
"""
In case of node failure scenarios in which the selected node is
running the ceph tools pod, we'll want to wait for the pod recovery
Returns:
bool: True in case the ceph tools pod was recovered, False otherwise
"""
try:
_ = get_admin_key()
except CommandFailed as ex:
logger.info(str(ex))
if "connection timed out" in str(ex):
logger.info(
"Ceph tools box was running on the node that had a failure. "
"Hence, waiting for a new Ceph tools box pod to spin up"
)
wait_for_resource_count_change(
func_to_use=pod.get_all_pods,
previous_num=1,
namespace=config.ENV_DATA["cluster_namespace"],
timeout=120,
selector=constants.TOOL_APP_LABEL,
)
return True
else:
return False
return True
def label_worker_node(node_list, label_key, label_value):
"""
Function to label worker node for running app pods on specific worker nodes.
Args:
node_list (list): List of node name
label_key (str): Label_key to be added in worker
label_value (str): Label_value
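Example (illustrative sketch; the label key/value mirror the nodeSelector example used elsewhere in this module)::
workers = node.get_worker_nodes()
label_worker_node(workers, label_key="nodetype", label_value="app-pod")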
"""
ocp_obj = OCP()
out = ocp_obj.exec_oc_cmd(
command=f"label node {' '.join(node_list)} {label_key}={label_value}",
out_yaml_format=False,
)
logger.info(out)
def remove_label_from_worker_node(node_list, label_key):
"""
Function to remove label from worker node.
Args:
node_list (list): List of node names
label_key (str): Label key to be removed from the worker nodes
"""
ocp_obj = OCP()
out = ocp_obj.exec_oc_cmd(
command=f"label node {' '.join(node_list)} {label_key}-", out_yaml_format=False
)
logger.info(out)
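# Hypothetical usage sketch (not part of the original module): label two worker
# nodes so app pods can be pinned to them, then remove the label afterwards.
# The node names and the label are illustrative assumptions.
def _example_label_and_unlabel_workers():
    nodes = ["worker-0", "worker-1"]  # assumed node names
    label_worker_node(nodes, label_key="app", label_value="fio")
    try:
        pass  # run workloads that use a nodeSelector matching app=fio
    finally:
        remove_label_from_worker_node(nodes, label_key="app")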
def get_pods_nodes_logs():
"""
Get logs from all pods and nodes
Returns:
dict: node/pod name as key, logs content as value (string)
"""
all_logs = {}
all_pods = pod.get_all_pods()
all_nodes = node.get_node_objs()
for node_obj in all_nodes:
node_name = node_obj.name
log_content = node.get_node_logs(node_name)
all_logs.update({node_name: log_content})
for pod_obj in all_pods:
try:
pod_name = pod_obj.name
log_content = pod.get_pod_logs(pod_name)
all_logs.update({pod_name: log_content})
except CommandFailed:
pass
return all_logs
def get_logs_with_errors(errors=None):
"""
From logs of all pods and nodes, get only logs
containing any of specified errors
Args:
errors (list): List of errors to look for
Returns:
dict: node/pod name as key, logs content as value; may be empty
"""
all_logs = get_pods_nodes_logs()
output_logs = {}
errors_list = constants.CRITICAL_ERRORS
if errors:
errors_list = errors_list + errors
for name, log_content in all_logs.items():
for error_msg in errors_list:
if error_msg in log_content:
logger.debug(f"Found '{error_msg}' in log of {name}")
output_logs.update({name: log_content})
log_path = f"{ocsci_log_path()}/{name}.log"
with open(log_path, "w") as fh:
fh.write(log_content)
return output_logs
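# Hypothetical usage sketch (not part of the original module): scan all pod and
# node logs for an extra error string on top of constants.CRITICAL_ERRORS and
# fail if anything matches.
def _example_assert_no_known_errors():
    suspicious = get_logs_with_errors(errors=["Segmentation fault"])
    assert not suspicious, f"Errors found in logs of: {list(suspicious)}"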
def modify_osd_replica_count(resource_name, replica_count):
"""
Function to modify osd replica count to 0 or 1
Args:
resource_name (str): Name of the OSD pod, e.g. 'rook-ceph-osd-0-c9c4bc7c-bkf4b'
replica_count (int): OSD replica count to be changed to
Returns:
bool: True if the changes are applied, False otherwise
"""
ocp_obj = ocp.OCP(
kind=constants.DEPLOYMENT, namespace=defaults.ROOK_CLUSTER_NAMESPACE
)
params = f'{{"spec": {{"replicas": {replica_count}}}}}'
resource_name = "-".join(resource_name.split("-")[0:4])
return ocp_obj.patch(resource_name=resource_name, params=params)
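# Hypothetical usage sketch (not part of the original module): temporarily take
# an OSD down by scaling its deployment to 0 replicas, then bring it back.
# The pod name is illustrative only.
def _example_bounce_osd(osd_pod_name="rook-ceph-osd-0-c9c4bc7c-bkf4b"):
    assert modify_osd_replica_count(osd_pod_name, 0)
    # ... perform the disruption scenario while the OSD is down ...
    assert modify_osd_replica_count(osd_pod_name, 1)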
def collect_performance_stats(dir_name):
"""
Collect performance stats and save them to a file in JSON format.
Args:
dir_name (str): Directory name to store the stats.
Performance stats include:
IOPs and throughput percentage of the cluster
CPU and memory consumption of each node
"""
from ocs_ci.ocs.cluster import CephCluster
log_dir_path = os.path.join(
os.path.expanduser(config.RUN["log_dir"]),
f"failed_testcase_ocs_logs_{config.RUN['run_id']}",
f"{dir_name}_performance_stats",
)
if not os.path.exists(log_dir_path):
logger.info(f"Creating directory {log_dir_path}")
os.makedirs(log_dir_path)
performance_stats = {}
external = config.DEPLOYMENT["external_mode"]
if external:
# Skip collecting performance_stats for external mode RHCS cluster
logger.info("Skipping performance stats collection for external mode")
else:
ceph_obj = CephCluster()
# Get iops and throughput percentage of cluster
iops_percentage = ceph_obj.get_iops_percentage()
throughput_percentage = ceph_obj.get_throughput_percentage()
performance_stats["iops_percentage"] = iops_percentage
performance_stats["throughput_percentage"] = throughput_percentage
# ToDo: Get iops and throughput percentage of each nodes
# Get the cpu and memory of each nodes from adm top
master_node_utilization_from_adm_top = (
node.get_node_resource_utilization_from_adm_top(node_type="master")
)
worker_node_utilization_from_adm_top = (
node.get_node_resource_utilization_from_adm_top(node_type="worker")
)
# Get the cpu and memory from describe of nodes
master_node_utilization_from_oc_describe = (
node.get_node_resource_utilization_from_oc_describe(node_type="master")
)
worker_node_utilization_from_oc_describe = (
node.get_node_resource_utilization_from_oc_describe(node_type="worker")
)
performance_stats["master_node_utilization"] = master_node_utilization_from_adm_top
performance_stats["worker_node_utilization"] = worker_node_utilization_from_adm_top
performance_stats[
"master_node_utilization_from_oc_describe"
] = master_node_utilization_from_oc_describe
performance_stats[
"worker_node_utilization_from_oc_describe"
] = worker_node_utilization_from_oc_describe
file_name = os.path.join(log_dir_path, "performance")
with open(file_name, "w") as outfile:
json.dump(performance_stats, outfile)
def validate_pod_oomkilled(
pod_name, namespace=defaults.ROOK_CLUSTER_NAMESPACE, container=None
):
"""
Validate whether OOMKilled messages are found in the pod log
Args:
pod_name (str): Name of the pod
namespace (str): Namespace of the pod
container (str): Name of the container
Returns:
bool: True if OOMKilled messages are not found in the log,
False otherwise.
Raises:
AssertionError: If it fails to fetch the logs
"""
rc = True
try:
pod_log = pod.get_pod_logs(
pod_name=pod_name, namespace=namespace, container=container, previous=True
)
result = pod_log.find("signal: killed")
if result != -1:
rc = False
except CommandFailed as ecf:
assert (
f'previous terminated container "{container}" in pod "{pod_name}" not found'
in str(ecf)
), "Failed to fetch logs"
return rc
def validate_pods_are_running_and_not_restarted(pod_name, pod_restart_count, namespace):
"""
Validate that the given pod is in Running state and has not been restarted or respun
Args:
pod_name (str): Name of the pod
pod_restart_count (int): Restart count of pod
namespace (str): Namespace of the pod
Returns:
bool: True if the pod is in Running state and the restart
count matches the previous one
"""
ocp_obj = ocp.OCP(kind=constants.POD, namespace=namespace)
pod_obj = ocp_obj.get(resource_name=pod_name)
restart_count = (
pod_obj.get("status").get("containerStatuses")[0].get("restartCount")
)
pod_state = pod_obj.get("status").get("phase")
if pod_state == "Running" and restart_count == pod_restart_count:
logger.info("Pod is running state and restart count matches with previous one")
return True
logger.error(
f"Pod is in {pod_state} state and restart count of pod {restart_count}"
)
logger.info(f"{pod_obj}")
return False
def calc_local_file_md5_sum(path):
"""
Calculate and return the MD5 checksum of a local file
Args:
path (str): The path to the file
Returns:
str: The MD5 checksum
"""
with open(path, "rb") as file_to_hash:
file_as_bytes = file_to_hash.read()
return hashlib.md5(file_as_bytes).hexdigest()
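# Hypothetical usage sketch (not part of the original module): verify that a
# copied file is intact by comparing MD5 checksums. Paths are illustrative only.
def _example_verify_copy(src="/tmp/original.bin", dst="/tmp/copy.bin"):
    assert calc_local_file_md5_sum(src) == calc_local_file_md5_sum(dst), (
        "MD5 mismatch - the copied file is corrupted"
    )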
def retrieve_default_ingress_crt():
"""
Copy the default ingress certificate from the router-ca secret
to the local code runner for usage with boto3.
"""
default_ingress_crt_b64 = (
OCP(
kind="secret",
namespace="openshift-ingress-operator",
resource_name="router-ca",
)
.get()
.get("data")
.get("tls.crt")
)
decoded_crt = base64.b64decode(default_ingress_crt_b64).decode("utf-8")
with open(constants.DEFAULT_INGRESS_CRT_LOCAL_PATH, "w") as crtfile:
crtfile.write(decoded_crt)
def storagecluster_independent_check():
"""
Check whether the storagecluster is running in independent mode
by checking the value of spec.externalStorage.enable
Returns:
bool: True if the storagecluster is running in external mode, False otherwise
"""
storage_cluster = (
OCP(kind="StorageCluster", namespace=config.ENV_DATA["cluster_namespace"])
.get()
.get("items")[0]
)
return bool(
storage_cluster.get("spec", {}).get("externalStorage", {}).get("enable", False)
)
def get_pv_size(storageclass=None):
"""
Get PV sizes for the requested storageclass
Args:
storageclass (str): Name of the storageclass
Returns:
list: List of PV sizes
"""
return_list = []
ocp_obj = ocp.OCP(kind=constants.PV)
pv_objs = ocp_obj.get()["items"]
for pv_obj in pv_objs:
if pv_obj["spec"]["storageClassName"] == storageclass:
return_list.append(pv_obj["spec"]["capacity"]["storage"])
return return_list
def get_pv_names():
"""
Get PV names
Returns:
list: List of PV names
"""
ocp_obj = ocp.OCP(kind=constants.PV)
pv_objs = ocp_obj.get()["items"]
return [pv_obj["metadata"]["name"] for pv_obj in pv_objs]
def get_cluster_proxies():
"""
Get http and https proxy configuration:
* If configuration ``ENV_DATA['http_proxy']`` (and prospectively
``ENV_DATA['https_proxy']``) exists, return the respective values.
(If https_proxy not defined, use value from http_proxy.)
* If configuration ``ENV_DATA['http_proxy']`` doesn't exist, try to gather
cluster wide proxy configuration.
* If no proxy configuration found, return empty string for all http_proxy,
https_proxy and no_proxy.
Returns:
tuple: (http_proxy, https_proxy, no_proxy)
"""
if "http_proxy" in config.ENV_DATA:
http_proxy = config.ENV_DATA["http_proxy"]
https_proxy = config.ENV_DATA.get("https_proxy", config.ENV_DATA["http_proxy"])
no_proxy = config.ENV_DATA.get("no_proxy", "")
else:
ocp_obj = ocp.OCP(kind=constants.PROXY, resource_name="cluster")
proxy_obj = ocp_obj.get()
http_proxy = proxy_obj.get("spec", {}).get("httpProxy", "")
https_proxy = proxy_obj.get("spec", {}).get("httpsProxy", "")
no_proxy = proxy_obj.get("status", {}).get("noProxy", "")
logger.info("Using http_proxy: '%s'", http_proxy)
logger.info("Using https_proxy: '%s'", https_proxy)
logger.info("Using no_proxy: '%s'", no_proxy)
return http_proxy, https_proxy, no_proxy
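# Hypothetical usage sketch (not part of the original module): export the
# detected proxy configuration so that child processes inherit it.
def _example_export_proxy_env():
    http_proxy, https_proxy, no_proxy = get_cluster_proxies()
    if http_proxy:
        os.environ["http_proxy"] = http_proxy
        os.environ["https_proxy"] = https_proxy
        os.environ["no_proxy"] = no_proxy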
def default_volumesnapshotclass(interface_type):
"""
Return default VolumeSnapshotClass based on interface_type
Args:
interface_type (str): The type of the interface
(e.g. CephBlockPool, CephFileSystem)
Returns:
OCS: VolumeSnapshotClass Instance
"""
external = config.DEPLOYMENT["external_mode"]
if interface_type == constants.CEPHBLOCKPOOL:
resource_name = (
constants.DEFAULT_EXTERNAL_MODE_VOLUMESNAPSHOTCLASS_RBD
if external
else constants.DEFAULT_VOLUMESNAPSHOTCLASS_RBD
)
elif interface_type == constants.CEPHFILESYSTEM:
resource_name = (
constants.DEFAULT_EXTERNAL_MODE_VOLUMESNAPSHOTCLASS_CEPHFS
if external
else constants.DEFAULT_VOLUMESNAPSHOTCLASS_CEPHFS
)
base_snapshot_class = OCP(
kind=constants.VOLUMESNAPSHOTCLASS, resource_name=resource_name
)
return OCS(**base_snapshot_class.data)
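# Hypothetical usage sketch (assumption, not from the original source): look up
# the default RBD VolumeSnapshotClass and log its name before creating snapshots.
def _example_log_default_rbd_snapshotclass():
    snap_class = default_volumesnapshotclass(constants.CEPHBLOCKPOOL)
    logger.info(f"Default RBD VolumeSnapshotClass: {snap_class.name}")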
def get_snapshot_content_obj(snap_obj):
"""
Get volume snapshot content of a volume snapshot
Args:
snap_obj (OCS): OCS instance of kind VolumeSnapshot
Returns:
OCS: OCS instance of kind VolumeSnapshotContent
"""
data = dict()
data["api_version"] = snap_obj.api_version
data["kind"] = constants.VOLUMESNAPSHOTCONTENT
snapcontent = snap_obj.ocp.get(resource_name=snap_obj.name, out_yaml_format=True)[
"status"
]["boundVolumeSnapshotContentName"]
data["metadata"] = {"name": snapcontent, "namespace": snap_obj.namespace}
snapcontent_obj = OCS(**data)
snapcontent_obj.reload()
return snapcontent_obj
def wait_for_pv_delete(pv_objs):
"""
Wait for PVs to be deleted. PVs having ReclaimPolicy 'Retain' are deleted explicitly
Args:
pv_objs (list): OCS instances of kind PersistentVolume
"""
for pv_obj in pv_objs:
if (
pv_obj.data.get("spec").get("persistentVolumeReclaimPolicy")
== constants.RECLAIM_POLICY_RETAIN
):
wait_for_resource_state(pv_obj, constants.STATUS_RELEASED)
pv_obj.delete()
pv_obj.ocp.wait_for_delete(resource_name=pv_obj.name, timeout=180)
@retry(UnexpectedBehaviour, tries=20, delay=10, backoff=1)
def fetch_used_size(cbp_name, exp_val=None):
"""
Fetch the used size of the pool
Args:
cbp_name (str): Name of the cephblockpool
exp_val (float): Expected used size in GB
Returns:
float: Used size in GB
"""
ct_pod = pod.get_ceph_tools_pod()
rados_status = ct_pod.exec_ceph_cmd(ceph_cmd=f"rados df -p {cbp_name}")
size_bytes = rados_status["pools"][0]["size_bytes"]
# Convert size to GB
used_in_gb = float(format(size_bytes / constants.GB, ".4f"))
if exp_val and abs(exp_val - used_in_gb) > 1.5:
raise UnexpectedBehaviour(
f"Actual {used_in_gb} and expected size {exp_val} not "
f"matching. Retrying"
)
return used_in_gb
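# Hypothetical usage sketch (assumption, not from the original source): after
# writing roughly 2 GB of data, call fetch_used_size() with the expected value
# and let its retry decorator wait until the pool usage converges. The pool
# name is an illustrative assumption.
def _example_wait_for_pool_growth():
    used_before = fetch_used_size("ocs-storagecluster-cephblockpool")
    # ... write ~2 GB of data to a PVC backed by this pool ...
    fetch_used_size("ocs-storagecluster-cephblockpool", exp_val=used_before + 2)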
def get_full_test_logs_path(cname):
"""
Get the full path of the log file for a particular test.
This function uses the inspect module to find the name of the caller
function, so it needs to be called once from the main test function.
The output is in the form of
ocsci_log_path/<full test file path>/<test filename>/<test class name>/<test function name>
Args:
cname (obj): The class object which was run and called this function
Returns:
str: Full path of the test logs relative to the ocs-ci base logs path
"""
# the module path relative to ocs-ci base path
log_file_name = (inspect.stack()[1][1]).replace(f"{os.getcwd()}/", "")
# The name of the class
mname = type(cname).__name__
# the full log path (relative to ocs-ci base path)
full_log_path = (
f"{ocsci_log_path()}/{log_file_name}/{mname}/{inspect.stack()[1][3]}"
)
return full_log_path
def get_mon_pdb():
"""
Check for Mon PDB
Returns:
disruptions_allowed (int): Count of mon allowed disruption
min_available_mon (int): Count of minimum mon available
"""
pdb_obj = OCP(
kind=constants.POD_DISRUPTION_BUDGET,
resource_name=constants.MON_PDB,
namespace=defaults.ROOK_CLUSTER_NAMESPACE,
)
disruptions_allowed = pdb_obj.get().get("status").get("disruptionsAllowed")
min_available_mon = pdb_obj.get().get("spec").get("minAvailable")
return disruptions_allowed, min_available_mon
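# Hypothetical usage sketch (not part of the original module): assert the mon
# PodDisruptionBudget of a healthy 3-mon cluster. The expected values are
# illustrative assumptions.
def _example_check_mon_pdb():
    disruptions_allowed, min_available_mon = get_mon_pdb()
    assert disruptions_allowed == 1, "Expected exactly one allowed mon disruption"
    assert min_available_mon == 2, "Expected a minimum of two available mons"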
|
wallet_multiwallet.py
|
#!/usr/bin/env python3
# Copyright (c) 2017-2019 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test multiwallet.
Verify that a bitcoind node can load multiple wallet files
"""
import os
import shutil
import time
from decimal import Decimal
from threading import Thread
from test_framework.authproxy import JSONRPCException
from test_framework.test_framework import BitcoinTestFramework
from test_framework.test_node import ErrorMatch
from test_framework.util import (
assert_equal,
assert_raises_rpc_error,
get_rpc_proxy,
)
got_loading_error = False
def test_load_unload(node, name):
global got_loading_error
for _ in range(10):
if got_loading_error:
return
try:
node.loadwallet(name)
node.unloadwallet(name)
except JSONRPCException as e:
if e.error['code'] == -4 and 'Wallet already being loading' in e.error['message']:
got_loading_error = True
return
class MultiWalletTest(BitcoinTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 2
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def add_options(self, parser):
parser.add_argument(
'--data_wallets_dir',
default=os.path.join(
os.path.dirname(
os.path.realpath(__file__)),
'data/wallets/'),
help='Test data with wallet directories (default: %(default)s)',
)
def run_test(self):
node = self.nodes[0]
def data_dir(*p): return os.path.join(node.datadir, self.chain, *p)
def wallet_dir(*p): return data_dir('wallets', *p)
def wallet(name): return node.get_wallet_rpc(name)
def wallet_file(name):
if os.path.isdir(wallet_dir(name)):
return wallet_dir(name, self.wallet_data_filename)
return wallet_dir(name)
assert_equal(self.nodes[0].listwalletdir(),
{'wallets': [{'name': self.default_wallet_name}]})
# check wallet.dat is created
self.stop_nodes()
assert_equal(os.path.isfile(wallet_dir(self.default_wallet_name,
self.wallet_data_filename)),
True)
# create symlink to verify wallet directory path can be referenced
# through symlink
if os.name != 'nt':
os.mkdir(wallet_dir('w7'))
os.symlink('w7', wallet_dir('w7_symlink'))
# rename wallet.dat to make sure plain wallet file paths (as opposed to
# directory paths) can be loaded
os.rename(wallet_dir(self.default_wallet_name, self.wallet_data_filename),
wallet_dir("w8"))
# create another dummy wallet for use in testing backups later
self.start_node(
0, ["-nowallet", "-wallet=" + self.default_wallet_name])
self.stop_nodes()
empty_wallet = os.path.join(self.options.tmpdir, 'empty.dat')
os.rename(wallet_dir(self.default_wallet_name, self.wallet_data_filename),
empty_wallet)
# restart node with a mix of wallet names:
# w1, w2, w3 - to verify new wallets created when non-existing paths specified
# w - to verify wallet name matching works when one wallet path is prefix of another
# sub/w5 - to verify relative wallet path is created correctly
# extern/w6 - to verify absolute wallet path is created correctly
# w7_symlink - to verify symlinked wallet path is initialized correctly
# w8 - to verify existing wallet file is loaded correctly
# '' - to verify default wallet file is created correctly
wallet_names = ['w1', 'w2', 'w3', 'w', 'sub/w5',
os.path.join(self.options.tmpdir, 'extern/w6'),
'w7_symlink', 'w8', self.default_wallet_name]
if os.name == 'nt':
wallet_names.remove('w7_symlink')
extra_args = ['-nowallet'] + \
['-wallet={}'.format(n) for n in wallet_names]
self.start_node(0, extra_args)
assert_equal(
sorted(map(lambda w: w['name'],
self.nodes[0].listwalletdir()['wallets'])),
[self.default_wallet_name, os.path.join('sub', 'w5'), 'w', 'w1',
'w2', 'w3', 'w7', 'w7_symlink', 'w8'])
assert_equal(set(node.listwallets()), set(wallet_names))
# check that all requested wallets were created
self.stop_node(0)
for wallet_name in wallet_names:
assert_equal(os.path.isfile(wallet_file(wallet_name)), True)
# should not initialize if wallet path can't be created
exp_stderr = "boost::filesystem::create_directory:"
self.nodes[0].assert_start_raises_init_error(
['-wallet=w8/bad'], exp_stderr, match=ErrorMatch.PARTIAL_REGEX)
self.nodes[0].assert_start_raises_init_error(
['-walletdir=wallets'], 'Error: Specified -walletdir "wallets" does not exist')
self.nodes[0].assert_start_raises_init_error(
['-walletdir=wallets'], 'Error: Specified -walletdir "wallets" is a relative path', cwd=data_dir())
self.nodes[0].assert_start_raises_init_error(
['-walletdir=debug.log'], 'Error: Specified -walletdir "debug.log" is not a directory', cwd=data_dir())
# should not initialize if there are duplicate wallets
self.nodes[0].assert_start_raises_init_error(
['-wallet=w1', '-wallet=w1'], 'Error: Error loading wallet w1. Duplicate -wallet filename specified.')
# should not initialize if one wallet is a copy of another
shutil.copyfile(wallet_dir('w8'), wallet_dir('w8_copy'))
exp_stderr = r"BerkeleyDatabase: Can't open database w8_copy \(duplicates fileid \w+ from w8\)"
self.nodes[0].assert_start_raises_init_error(
['-wallet=w8', '-wallet=w8_copy'], exp_stderr, match=ErrorMatch.PARTIAL_REGEX)
# should not initialize if wallet file is a symlink
if os.name != 'nt':
os.symlink('w8', wallet_dir('w8_symlink'))
self.nodes[0].assert_start_raises_init_error(
['-wallet=w8_symlink'], r'Error: Invalid -wallet path \'w8_symlink\'\. .*', match=ErrorMatch.FULL_REGEX)
# should not initialize if the specified walletdir does not exist
self.nodes[0].assert_start_raises_init_error(
['-walletdir=bad'], 'Error: Specified -walletdir "bad" does not exist')
# should not initialize if the specified walletdir is not a directory
not_a_dir = wallet_dir('notadir')
open(not_a_dir, 'a', encoding="utf8").close()
self.nodes[0].assert_start_raises_init_error(
['-walletdir=' + not_a_dir], 'Error: Specified -walletdir "' + not_a_dir + '" is not a directory')
# if wallets/ doesn't exist, datadir should be the default wallet dir
wallet_dir2 = data_dir('walletdir')
os.rename(wallet_dir(), wallet_dir2)
self.start_node(0, ['-nowallet', '-wallet=w4', '-wallet=w5'])
assert_equal(set(node.listwallets()), {"w4", "w5"})
w5 = wallet("w5")
node.generatetoaddress(nblocks=1, address=w5.getnewaddress())
# now if wallets/ exists again, but the rootdir is specified as the
# walletdir, w4 and w5 should still be loaded
os.rename(wallet_dir2, wallet_dir())
self.restart_node(0, ['-nowallet', '-wallet=w4', '-wallet=w5',
'-walletdir=' + data_dir()])
assert_equal(set(node.listwallets()), {"w4", "w5"})
w5 = wallet("w5")
w5_info = w5.getwalletinfo()
assert_equal(w5_info['immature_balance'], 50000000)
competing_wallet_dir = os.path.join(
self.options.tmpdir, 'competing_walletdir')
os.mkdir(competing_wallet_dir)
self.restart_node(0, ['-walletdir=' + competing_wallet_dir])
exp_stderr = r"Error: Error initializing wallet database environment \"\S+competing_walletdir\"!"
self.nodes[1].assert_start_raises_init_error(
['-walletdir=' + competing_wallet_dir], exp_stderr, match=ErrorMatch.PARTIAL_REGEX)
self.restart_node(0, extra_args)
assert_equal(sorted(map(lambda w: w['name'],
self.nodes[0].listwalletdir()['wallets'])),
[self.default_wallet_name, os.path.join('sub', 'w5'), 'w',
'w1', 'w2', 'w3', 'w7', 'w7_symlink', 'w8', 'w8_copy'])
wallets = [wallet(w) for w in wallet_names]
wallet_bad = wallet("bad")
# check wallet names and balances
node.generatetoaddress(nblocks=1, address=wallets[0].getnewaddress())
for wallet_name, wallet in zip(wallet_names, wallets):
info = wallet.getwalletinfo()
assert_equal(info['immature_balance'],
50000000 if wallet is wallets[0] else 0)
assert_equal(info['walletname'], wallet_name)
# accessing invalid wallet fails
assert_raises_rpc_error(-18, "Requested wallet does not exist or is not loaded",
wallet_bad.getwalletinfo)
# accessing wallet RPC without using wallet endpoint fails
assert_raises_rpc_error(-19, "Wallet file not specified (must request wallet RPC through /wallet/<filename> uri-path).",
node.getwalletinfo)
w1, w2, w3, w4, *_ = wallets
node.generatetoaddress(nblocks=101, address=w1.getnewaddress())
assert_equal(w1.getbalance(), 100000000)
assert_equal(w2.getbalance(), 0)
assert_equal(w3.getbalance(), 0)
assert_equal(w4.getbalance(), 0)
w1.sendtoaddress(w2.getnewaddress(), 1000000)
w1.sendtoaddress(w3.getnewaddress(), 2000000)
w1.sendtoaddress(w4.getnewaddress(), 3000000)
node.generatetoaddress(nblocks=1, address=w1.getnewaddress())
assert_equal(w2.getbalance(), 1000000)
assert_equal(w3.getbalance(), 2000000)
assert_equal(w4.getbalance(), 3000000)
batch = w1.batch([w1.getblockchaininfo.get_request(),
w1.getwalletinfo.get_request()])
assert_equal(batch[0]["result"]["chain"], self.chain)
assert_equal(batch[1]["result"]["walletname"], "w1")
self.log.info('Check for per-wallet settxfee call')
assert_equal(w1.getwalletinfo()['paytxfee'], 0)
assert_equal(w2.getwalletinfo()['paytxfee'], 0)
w2.settxfee(1000)
assert_equal(w1.getwalletinfo()['paytxfee'], 0)
assert_equal(w2.getwalletinfo()['paytxfee'], Decimal('1000.00'))
self.log.info("Test dynamic wallet loading")
self.restart_node(0, ['-nowallet'])
assert_equal(node.listwallets(), [])
assert_raises_rpc_error(
-18,
"No wallet is loaded. Load a wallet using loadwallet or create a new"
" one with createwallet. (Note: A default wallet is no longer "
"automatically created)",
node.getwalletinfo
)
self.log.info("Load first wallet")
loadwallet_name = node.loadwallet(wallet_names[0])
assert_equal(loadwallet_name['name'], wallet_names[0])
assert_equal(node.listwallets(), wallet_names[0:1])
node.getwalletinfo()
w1 = node.get_wallet_rpc(wallet_names[0])
w1.getwalletinfo()
self.log.info("Load second wallet")
loadwallet_name = node.loadwallet(wallet_names[1])
assert_equal(loadwallet_name['name'], wallet_names[1])
assert_equal(node.listwallets(), wallet_names[0:2])
assert_raises_rpc_error(-19,
"Wallet file not specified", node.getwalletinfo)
w2 = node.get_wallet_rpc(wallet_names[1])
w2.getwalletinfo()
self.log.info("Concurrent wallet loading")
threads = []
for _ in range(3):
n = node.cli if self.options.usecli else get_rpc_proxy(
node.url, 1, timeout=600, coveragedir=node.coverage_dir)
t = Thread(target=test_load_unload, args=(n, wallet_names[2], ))
t.start()
threads.append(t)
for t in threads:
t.join()
global got_loading_error
assert_equal(got_loading_error, True)
self.log.info("Load remaining wallets")
for wallet_name in wallet_names[2:]:
loadwallet_name = self.nodes[0].loadwallet(wallet_name)
assert_equal(loadwallet_name['name'], wallet_name)
assert_equal(set(self.nodes[0].listwallets()), set(wallet_names))
# Fail to load if wallet doesn't exist
path = os.path.join(self.options.tmpdir, "node0", "regtest",
"wallets", "wallets")
assert_raises_rpc_error(
-18,
"Wallet file verification failed. Failed to load database path "
"'{}'. Path does not exist.".format(path),
self.nodes[0].loadwallet, 'wallets')
# Fail to load duplicate wallets
path = os.path.join(
self.options.tmpdir,
"node0",
"regtest",
"wallets",
"w1",
self.wallet_data_filename)
assert_raises_rpc_error(
-4,
"Wallet file verification failed. Refusing to load database. Data file '{}' is already loaded.".format(
path),
self.nodes[0].loadwallet,
wallet_names[0])
# Fail to load duplicate wallets by different ways (directory and
# filepath)
path = os.path.join(
self.options.tmpdir,
"node0",
"regtest",
"wallets",
self.wallet_data_filename)
assert_raises_rpc_error(
-4,
"Wallet file verification failed. Refusing to load database. Data file '{}' is already loaded.".format(
path),
self.nodes[0].loadwallet,
self.wallet_data_filename)
# Fail to load if one wallet is a copy of another
assert_raises_rpc_error(-4, "BerkeleyDatabase: Can't open database w8_copy (duplicates fileid",
self.nodes[0].loadwallet, 'w8_copy')
# Fail to load if one wallet is a copy of another.
# Test this twice to make sure that we don't re-introduce
# https://github.com/bitcoin/bitcoin/issues/14304
assert_raises_rpc_error(-4, "BerkeleyDatabase: Can't open database w8_copy (duplicates fileid",
self.nodes[0].loadwallet, 'w8_copy')
# Fail to load if wallet file is a symlink
if os.name != 'nt':
assert_raises_rpc_error(
-4,
"Wallet file verification failed. Invalid -wallet path 'w8_symlink'",
self.nodes[0].loadwallet,
'w8_symlink')
# Fail to load if a directory is specified that doesn't contain a
# wallet
os.mkdir(wallet_dir('empty_wallet_dir'))
path = os.path.join(self.options.tmpdir, "node0", "regtest",
"wallets", "empty_wallet_dir")
assert_raises_rpc_error(
-18,
"Wallet file verification failed. Failed to load database "
"path '{}'. Data is not in recognized format.".format(path),
self.nodes[0].loadwallet, 'empty_wallet_dir')
self.log.info("Test dynamic wallet creation.")
# Fail to create a wallet if it already exists.
path = os.path.join(self.options.tmpdir, "node0", "regtest",
"wallets", "w2")
assert_raises_rpc_error(
-4,
f"Failed to create database path '{path}'. Database already exists.",
self.nodes[0].createwallet, 'w2')
# Successfully create a wallet with a new name
loadwallet_name = self.nodes[0].createwallet('w9')
assert_equal(loadwallet_name['name'], 'w9')
w9 = node.get_wallet_rpc('w9')
assert_equal(w9.getwalletinfo()['walletname'], 'w9')
assert 'w9' in self.nodes[0].listwallets()
# Successfully create a wallet using a full path
new_wallet_dir = os.path.join(self.options.tmpdir, 'new_walletdir')
new_wallet_name = os.path.join(new_wallet_dir, 'w10')
loadwallet_name = self.nodes[0].createwallet(new_wallet_name)
assert_equal(loadwallet_name['name'], new_wallet_name)
w10 = node.get_wallet_rpc(new_wallet_name)
assert_equal(w10.getwalletinfo()['walletname'], new_wallet_name)
assert new_wallet_name in self.nodes[0].listwallets()
self.log.info("Test dynamic wallet unloading")
# Test `unloadwallet` errors
assert_raises_rpc_error(-1, "JSON value is not a string as expected",
self.nodes[0].unloadwallet)
assert_raises_rpc_error(-18, "Requested wallet does not exist or is not loaded",
self.nodes[0].unloadwallet, "dummy")
assert_raises_rpc_error(-18, "Requested wallet does not exist or is not loaded",
node.get_wallet_rpc("dummy").unloadwallet)
assert_raises_rpc_error(-8, "Cannot unload the requested wallet",
w1.unloadwallet, "w2")
# Successfully unload the specified wallet name
self.nodes[0].unloadwallet("w1")
assert 'w1' not in self.nodes[0].listwallets()
# Successfully unload the wallet referenced by the request endpoint
# Also ensure unload works during walletpassphrase timeout
w2.encryptwallet('test')
w2.walletpassphrase('test', 1)
w2.unloadwallet()
time.sleep(1.1)
assert 'w2' not in self.nodes[0].listwallets()
# Successfully unload all wallets
for wallet_name in self.nodes[0].listwallets():
self.nodes[0].unloadwallet(wallet_name)
assert_equal(self.nodes[0].listwallets(), [])
assert_raises_rpc_error(
-18,
"No wallet is loaded. Load a wallet using loadwallet or create a new"
" one with createwallet. (Note: A default wallet is no longer "
"automatically created)",
self.nodes[0].getwalletinfo
)
# Successfully load a previously unloaded wallet
self.nodes[0].loadwallet('w1')
assert_equal(self.nodes[0].listwallets(), ['w1'])
assert_equal(w1.getwalletinfo()['walletname'], 'w1')
assert_equal(sorted(map(lambda w: w['name'],
self.nodes[0].listwalletdir()['wallets'])),
[self.default_wallet_name, os.path.join('sub', 'w5'), 'w',
'w1', 'w2', 'w3', 'w7', 'w7_symlink', 'w8', 'w8_copy',
'w9'])
# Test backing up and restoring wallets
self.log.info("Test wallet backup")
self.restart_node(0, ['-nowallet'])
for wallet_name in wallet_names:
self.nodes[0].loadwallet(wallet_name)
for wallet_name in wallet_names:
rpc = self.nodes[0].get_wallet_rpc(wallet_name)
addr = rpc.getnewaddress()
backup = os.path.join(self.options.tmpdir, 'backup.dat')
rpc.backupwallet(backup)
self.nodes[0].unloadwallet(wallet_name)
shutil.copyfile(empty_wallet, wallet_file(wallet_name))
self.nodes[0].loadwallet(wallet_name)
assert_equal(rpc.getaddressinfo(addr)['ismine'], False)
self.nodes[0].unloadwallet(wallet_name)
shutil.copyfile(backup, wallet_file(wallet_name))
self.nodes[0].loadwallet(wallet_name)
assert_equal(rpc.getaddressinfo(addr)['ismine'], True)
# Test .walletlock file is closed
self.start_node(1)
wallet = os.path.join(self.options.tmpdir, 'my_wallet')
self.nodes[0].createwallet(wallet)
assert_raises_rpc_error(-4, "Error initializing wallet database environment",
self.nodes[1].loadwallet, wallet)
self.nodes[0].unloadwallet(wallet)
self.nodes[1].loadwallet(wallet)
if __name__ == '__main__':
MultiWalletTest().main()
|
frame_handler.py
|
import threading
import time
import gc
from d3dshot import D3DShot
from typing import Any, Union, List, Tuple, Optional, Callable
from torch import Tensor
class WindowsFrameHandler():
"""
Handles a d3dshot instance to return fresh frames.
"""
def __init__(self, d3dshot: D3DShot,
frame_transforms: Optional[List[Callable[[Any], Any]]] = None,
stack_transforms: Optional[List[Callable[[Any], Any]]] = None):
"""
Creates the frame handler on the d3dshot instance.
Args:
d3dshot (D3DShot): The d3dshot instance to wrap.
frame_transforms (List[Callable[[Any], Any]]): A list of transforms
to apply to each frame on capture.
stack_transforms: (List[Callable[[Any], Any]]): A list of transforms
to apply to an entire stack of frames.
"""
self.d3dshot = d3dshot
# Avoid shared mutable default arguments; fall back to fresh empty lists
self.frame_transforms = frame_transforms if frame_transforms is not None else []
self.stack_transforms = stack_transforms if stack_transforms is not None else []
self.latest_polled = False
# Syncing variables
self.lock = threading.Lock()
self.cond = threading.Condition(self.lock)
def _apply_frame_transforms(self, frame) -> Any:
"""
Applies the transforms to the frame given.
Args:
frame (Any): The frame to apply transforms to.
"""
for transform in self.frame_transforms:
frame = transform(frame)
return frame
def _apply_stack_transforms(self, frame) -> Any:
"""
Applies the transforms to the stack given.
Args:
frame (Any): The stack of frames to apply transforms to.
"""
for transform in self.stack_transforms:
frame = transform(frame)
return frame
def _is_latest_new(self) -> bool:
"""
Returns true if the latest frame has not been polled.
"""
return (
not self.latest_polled
and self.d3dshot.get_latest_frame() is not None
)
def is_capturing(self) -> bool:
"""
Returns true if currently capturing.
"""
return self.d3dshot.is_capturing
def get_new_frame(self) -> Any:
"""
Retrieves a fresh captured frame using d3dshot.
"""
with self.cond:
self.cond.wait_for(self._is_latest_new)
frame = self.d3dshot.get_latest_frame()
self.latest_polled = True
return frame
def get_frame_stack(self, frame_indices: Union[List[int], Tuple[int]],
stack_dimension: Optional[str] = None):
"""
Retrieves the stack of frames at the indices provided.
Args:
frame_indices ([int], (int,)): The indices of the frames to retrieve
stack_dimension (str): The dimension to stack the frames in.
"""
frames = self.d3dshot.get_frame_stack(frame_indices, stack_dimension)
frames = self._apply_stack_transforms(frames)
return frames
def _capture(self, target_fps: int = 60,
region: Optional[Union[List[int], Tuple[int]]] = None) -> None:
"""
The thread for the d3dshot capture.
Args:
target_fps (int): The target fps of the d3dshot capture.
region ([int], (int,)): The region to capture.
"""
self.d3dshot._reset_frame_buffer()
frame_time = 1 / target_fps
while self.d3dshot.is_capturing:
cycle_start = time.time()
frame = self.d3dshot.display.capture(
self.d3dshot.capture_output.process,
region=self.d3dshot._validate_region(region)
)
with self.cond:
if frame is not None:
frame = self._apply_frame_transforms(frame)
self.d3dshot.frame_buffer.appendleft(frame)
else:
if len(self.d3dshot.frame_buffer):
self.d3dshot.frame_buffer.appendleft(
self.d3dshot.frame_buffer[0]
)
self.latest_polled = False
self.cond.notify()
gc.collect()
cycle_end = time.time()
frame_time_left = frame_time - (cycle_end - cycle_start)
if frame_time_left > 0:
time.sleep(frame_time_left)
self.d3dshot._is_capturing = False
def capture(self, target_fps: int = 60,
region: Optional[Union[List[int], Tuple[int]]] = None) -> bool:
"""
Begins the d3dshot capturing thread, with an extra variable to indicate
if the latest frame has been polled.
Args:
target_fps (int): The target fps of the d3dshot capture.
region ([int], (int,)): The region to capture.
"""
target_fps = self.d3dshot._validate_target_fps(target_fps)
if self.d3dshot.is_capturing:
return False
self.d3dshot._is_capturing = True
self.d3dshot._capture_thread = threading.Thread(
target=self._capture, args=(target_fps, region)
)
self.d3dshot._capture_thread.start()
return True
def stop(self) -> bool:
"""
Stops the capturing thread.
"""
if not self.d3dshot.is_capturing:
return False
self.d3dshot._is_capturing = False
with self.cond:
self.latest_polled = False
self.cond.notify_all()
if self.d3dshot._capture_thread is not None:
self.d3dshot._capture_thread.join(timeout=1)
self.d3dshot._capture_thread = None
return True
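if __name__ == "__main__":
    # Hypothetical usage sketch (assumption, not from the original source):
    # d3dshot.create() is assumed as the factory for a D3DShot instance with
    # NumPy output; the handler methods used below are the ones defined above.
    import d3dshot

    shot = d3dshot.create(capture_output="numpy")
    handler = WindowsFrameHandler(shot)
    handler.capture(target_fps=30)
    frame = handler.get_new_frame()  # blocks until a fresh frame is available
    print(getattr(frame, "shape", None))
    handler.stop()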
|
plotting.py
|
"""Pyvista plotting module."""
import pathlib
import collections.abc
from functools import partial
import logging
import os
import textwrap
import time
import warnings
import weakref
from functools import wraps
from threading import Thread
import imageio
import numpy as np
import scooby
import vtk
from vtk.util import numpy_support as VN
from vtk.util.numpy_support import numpy_to_vtk, vtk_to_numpy
import pyvista
from pyvista.utilities import (assert_empty_kwargs, convert_array,
convert_string_array, get_array,
is_pyvista_dataset, abstract_class,
raise_not_matching, try_callback, wrap)
from pyvista.utilities.regression import image_from_window
from .background_renderer import BackgroundRenderer
from .colors import get_cmap_safe
from .export_vtkjs import export_plotter_vtkjs
from .mapper import make_mapper
from .picking import PickingHelper
from .renderer import Renderer
from .theme import (FONT_KEYS, MAX_N_COLOR_BARS, parse_color,
parse_font_family, rcParams)
from .tools import normalize, opacity_transfer_function
from .widgets import WidgetHelper
try:
import matplotlib
has_matplotlib = True
except ImportError:
has_matplotlib = False
_ALL_PLOTTERS = {}
SUPPORTED_FORMATS = [".png", ".jpeg", ".jpg", ".bmp", ".tif", ".tiff"]
def close_all():
"""Close all open/active plotters and clean up memory."""
for key, p in _ALL_PLOTTERS.items():
if not p._closed:
p.close()
p.deep_clean()
_ALL_PLOTTERS.clear()
return True
log = logging.getLogger(__name__)
log.setLevel('CRITICAL')
log.addHandler(logging.StreamHandler())
@abstract_class
class BasePlotter(PickingHelper, WidgetHelper):
"""To be used by the Plotter and pyvistaqt.QtInteractor classes.
Parameters
----------
shape : list or tuple, optional
Number of sub-render windows inside of the main window.
Specify two across with ``shape=(2, 1)`` and a two by two grid
with ``shape=(2, 2)``. By default there is only one renderer.
Can also accept a string descriptor as shape. E.g.:
* ``shape="3|1"`` means 3 plots on the left and 1 on the right,
* ``shape="4/2"`` means 4 plots on top and 2 at the bottom.
border : bool, optional
Draw a border around each render window. Default False.
border_color : string or 3 item list, optional, defaults to white
Either a string, rgb list, or hex color string. For example:
* ``color='white'``
* ``color='w'``
* ``color=[1, 1, 1]``
* ``color='#FFFFFF'``
border_width : float, optional
Width of the border in pixels when enabled.
title : str, optional
Window title.
"""
mouse_position = None
click_position = None
def __init__(self, shape=(1, 1), border=None, border_color='k',
border_width=2.0, title=None, splitting_position=None,
groups=None, row_weights=None, col_weights=None):
"""Initialize base plotter."""
log.debug('BasePlotter init start')
self.image_transparent_background = rcParams['transparent_background']
# optional function to be called prior to closing
self.__before_close_callback = None
self._store_image = False
self.mesh = None
if title is None:
title = rcParams['title']
self.title = str(title)
# by default add border for multiple plots
if border is None:
if shape != (1, 1):
border = True
else:
border = False
# add render windows
self._active_renderer_index = 0
self.renderers = []
self.groups = np.empty((0,4),dtype=int)
if isinstance(shape, str):
if '|' in shape:
n = int(shape.split('|')[0])
m = int(shape.split('|')[1])
rangen = reversed(range(n))
rangem = reversed(range(m))
else:
m = int(shape.split('/')[0])
n = int(shape.split('/')[1])
rangen = range(n)
rangem = range(m)
if splitting_position is None:
splitting_position = rcParams['multi_rendering_splitting_position']
if splitting_position is None:
if n >= m:
xsplit = m/(n+m)
else:
xsplit = 1-n/(n+m)
else:
xsplit = splitting_position
for i in rangen:
arenderer = Renderer(self, border, border_color, border_width)
if '|' in shape:
arenderer.SetViewport(0, i/n, xsplit, (i+1)/n)
else:
arenderer.SetViewport(i/n, 0, (i+1)/n, xsplit)
self.renderers.append(arenderer)
for i in rangem:
arenderer = Renderer(self, border, border_color, border_width)
if '|' in shape:
arenderer.SetViewport(xsplit, i/m, 1, (i+1)/m)
else:
arenderer.SetViewport(i/m, xsplit, (i+1)/m, 1)
self.renderers.append(arenderer)
self.shape = (n+m,)
self._render_idxs = np.arange(n+m)
else:
if not isinstance(shape, (np.ndarray, collections.abc.Sequence)):
raise TypeError('"shape" should be a list, tuple or string descriptor')
if len(shape) != 2:
raise ValueError('"shape" must have length 2.')
shape = np.asarray(shape)
if not np.issubdtype(shape.dtype, np.integer) or (shape <= 0).any():
raise ValueError('"shape" must contain only positive integers.')
# always assign shape as a tuple
self.shape = tuple(shape)
self._render_idxs = np.empty(self.shape,dtype=int)
# Check if row and col weights correspond to given shape, or initialize them to defaults (equally weighted)
# and convert to normalized offsets
if row_weights is None:
row_weights = np.ones(shape[0])
if col_weights is None:
col_weights = np.ones(shape[1])
assert(np.array(row_weights).size==shape[0])
assert(np.array(col_weights).size==shape[1])
row_off = np.cumsum(np.abs(row_weights))/np.sum(np.abs(row_weights))
row_off = 1-np.concatenate(([0],row_off))
col_off = np.cumsum(np.abs(col_weights))/np.sum(np.abs(col_weights))
col_off = np.concatenate(([0],col_off))
# Check and convert groups to internal format (Nx4 matrix where every row contains the row and col index of the top left cell
# together with the row and col index of the bottom right cell)
if groups is not None:
assert isinstance(groups, collections.abc.Sequence), '"groups" should be a list or tuple'
for group in groups:
assert isinstance(group, collections.abc.Sequence) and len(group)==2, 'each group entry should be a list or tuple of 2 elements'
rows = group[0]
if isinstance(rows,slice):
rows = np.arange(self.shape[0],dtype=int)[rows]
cols = group[1]
if isinstance(cols,slice):
cols = np.arange(self.shape[1],dtype=int)[cols]
# Get the normalized group, i.e. extract top left corner and bottom right corner from the given rows and cols
norm_group = [np.min(rows),np.min(cols),np.max(rows),np.max(cols)]
# Check for overlap with already defined groups:
for i in range(norm_group[0],norm_group[2]+1):
for j in range(norm_group[1],norm_group[3]+1):
assert self.loc_to_group((i,j)) is None, 'groups cannot overlap'
self.groups = np.concatenate((self.groups,np.array([norm_group],dtype=int)),axis=0)
# Create subplot renderers
for row in range(shape[0]):
for col in range(shape[1]):
group = self.loc_to_group((row,col))
nb_rows = None
nb_cols = None
if group is not None:
if row==self.groups[group,0] and col==self.groups[group,1]:
# Only add renderer for first location of the group
nb_rows = 1+self.groups[group,2]-self.groups[group,0]
nb_cols = 1+self.groups[group,3]-self.groups[group,1]
else:
nb_rows = 1
nb_cols = 1
if nb_rows is not None:
renderer = Renderer(self, border, border_color, border_width)
x0 = col_off[col]
y0 = row_off[row+nb_rows]
x1 = col_off[col+nb_cols]
y1 = row_off[row]
renderer.SetViewport(x0, y0, x1, y1)
self._render_idxs[row,col] = len(self.renderers)
self.renderers.append(renderer)
else:
self._render_idxs[row,col] = self._render_idxs[self.groups[group,0],self.groups[group,1]]
# each render will also have an associated background renderer
self._background_renderers = [None for _ in range(len(self.renderers))]
# create a shadow renderer that lives on top of all others
self._shadow_renderer = Renderer(
self, border, border_color, border_width)
self._shadow_renderer.SetViewport(0, 0, 1, 1)
self._shadow_renderer.SetDraw(False)
# This keeps track of scalars names already plotted and their ranges
self._scalar_bar_ranges = {}
self._scalar_bar_mappers = {}
self._scalar_bar_actors = {}
self._scalar_bar_widgets = {}
# track if the camera has been setup
# self.camera_set = False
self._first_time = True
# Keep track of the scale
self._labels = []
# Set default style
self._style = 'RubberBandPick'
self._style_class = None
# this helps managing closed plotters
self._closed = False
# Add self to open plotters
self._id_name = f"{hex(id(self))}-{len(_ALL_PLOTTERS)}"
_ALL_PLOTTERS[self._id_name] = self
# lighting style
self.disable_3_lights()
# Key bindings
self.reset_key_events()
log.debug('BasePlotter init stop')
@property
def _before_close_callback(self):
"""Return the cached function (expecting a reference)."""
if self.__before_close_callback is not None:
return self.__before_close_callback()
@_before_close_callback.setter
def _before_close_callback(self, func):
"""Store a weakref.ref of the function being called."""
if func is not None:
self.__before_close_callback = weakref.ref(func)
else:
self.__before_close_callback = None
#### Manage the active Renderer ####
def loc_to_group(self, loc):
"""Return group id of the given location index. Or None if this location is not part of any group."""
group_idxs = np.arange(self.groups.shape[0])
I = (loc[0]>=self.groups[:,0]) & (loc[0]<=self.groups[:,2]) & (loc[1]>=self.groups[:,1]) & (loc[1]<=self.groups[:,3])
group = group_idxs[I]
return None if group.size==0 else group[0]
def loc_to_index(self, loc):
"""Return index of the render window given a location index.
Parameters
----------
loc : int, tuple, or list
Index of the renderer to add the actor to. For example,
``loc=2`` or ``loc=(1, 1)``.
Return
------
idx : int
Index of the render window.
"""
if loc is None:
return self._active_renderer_index
elif isinstance(loc, (int, np.integer)):
return loc
elif isinstance(loc, (np.ndarray, collections.abc.Sequence)):
if not len(loc) == 2:
raise ValueError('"loc" must contain two items')
index_row = loc[0]
index_column = loc[1]
if index_row < 0 or index_row >= self.shape[0]:
raise IndexError(f'Row index is out of range ({self.shape[0]})')
if index_column < 0 or index_column >= self.shape[1]:
raise IndexError(f'Column index is out of range ({self.shape[1]})')
return self._render_idxs[index_row,index_column]
else:
raise TypeError('"loc" must be an integer or a sequence.')
def index_to_loc(self, index):
"""Convert a 1D index location to the 2D location on the plotting grid."""
if not isinstance(index, (int, np.integer)):
raise TypeError('"index" must be a scalar integer.')
if len(self.shape) == 1:
return index
args = np.argwhere(self._render_idxs == index)
if len(args) < 1:
raise IndexError(f'Index ({index}) is out of range.')
return args[0]
@property
def renderer(self):
"""Return the active renderer."""
return self.renderers[self._active_renderer_index]
@property
def store_image(self):
"""Return if an image will be saved on close."""
return self._store_image
@store_image.setter
def store_image(self, value):
"""Store last rendered frame on close."""
self._store_image = bool(value)
def subplot(self, index_row, index_column=None):
"""Set the active subplot.
Parameters
----------
index_row : int
Index of the subplot to activate along the rows.
index_column : int
Index of the subplot to activate along the columns.
"""
if len(self.shape) == 1:
self._active_renderer_index = index_row
return
if index_row < 0 or index_row >= self.shape[0]:
raise IndexError(f'Row index is out of range ({self.shape[0]})')
if index_column < 0 or index_column >= self.shape[1]:
raise IndexError(f'Column index is out of range ({self.shape[1]})')
self._active_renderer_index = self.loc_to_index((index_row, index_column))
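# Hypothetical usage sketch (not part of the original module): with a 2x2
# Plotter, ``subplot`` selects the renderer that subsequent ``add_*`` calls
# target, e.g.:
#
#     import pyvista as pv
#     pl = pv.Plotter(shape=(2, 2))
#     pl.subplot(0, 1)          # activate the top-right renderer
#     pl.add_mesh(pv.Sphere())  # added only to the active renderer
#     pl.show()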
#### Wrap Renderer methods ####
@wraps(Renderer.add_floor)
def add_floor(self, *args, **kwargs):
"""Wrap ``Renderer.add_floor``."""
return self.renderer.add_floor(*args, **kwargs)
@wraps(Renderer.remove_floors)
def remove_floors(self, *args, **kwargs):
"""Wrap ``Renderer.remove_floors``."""
return self.renderer.remove_floors(*args, **kwargs)
def enable_3_lights(self):
"""Enable 3-lights illumination."""
def _to_pos(elevation, azimuth):
theta = azimuth * np.pi / 180.0
phi = (90.0 - elevation) * np.pi / 180.0
x = np.sin(theta) * np.sin(phi)
y = np.cos(phi)
z = np.cos(theta) * np.sin(phi)
return x, y, z
# Inspired from Mayavi's version of Raymond Maple 3-lights illumination
lights = list(self.renderer.GetLights())
headlight = lights.pop(0)
headlight.SetSwitch(False)
for i in range(len(lights)):
if i < 3:
lights[i].SetSwitch(True)
lights[i].SetIntensity(1.0)
lights[i].SetColor(1.0, 1.0, 1.0)
else:
lights[i].SetSwitch(False)
lights[i].SetPosition(_to_pos(0.0, 0.0))
lights[i].SetIntensity(1.0)
lights[i].SetColor(1.0, 1.0, 1.0)
lights[0].SetPosition(_to_pos(45.0, 45.0))
lights[1].SetPosition(_to_pos(-30.0, -60.0))
lights[1].SetIntensity(0.6)
lights[2].SetPosition(_to_pos(-30.0, 60.0))
lights[2].SetIntensity(0.5)
def disable_3_lights(self):
"""Disable 3-lights illumination."""
self.lighting = vtk.vtkLightKit()
# self.lighting.SetHeadLightWarmth(1.0)
# self.lighting.SetHeadLightWarmth(1.0)
for renderer in self.renderers:
renderer.RemoveAllLights()
self.lighting.AddLightsToRenderer(renderer)
renderer.LightFollowCameraOn()
@wraps(Renderer.enable_anti_aliasing)
def enable_anti_aliasing(self, *args, **kwargs):
"""Wrap ``Renderer.enable_anti_aliasing``."""
self.renderer.enable_anti_aliasing(*args, **kwargs)
@wraps(Renderer.disable_anti_aliasing)
def disable_anti_aliasing(self, *args, **kwargs):
"""Wrap ``Renderer.disable_anti_aliasing``."""
self.renderer.disable_anti_aliasing(*args, **kwargs)
@wraps(Renderer.set_focus)
def set_focus(self, *args, **kwargs):
"""Wrap ``Renderer.set_focus``."""
log.debug('set_focus: %s, %s', str(args), str(kwargs))
self.renderer.set_focus(*args, **kwargs)
self.render()
@wraps(Renderer.set_position)
def set_position(self, *args, **kwargs):
"""Wrap ``Renderer.set_position``."""
self.renderer.set_position(*args, **kwargs)
self.render()
@wraps(Renderer.set_viewup)
def set_viewup(self, *args, **kwargs):
"""Wrap ``Renderer.set_viewup``."""
self.renderer.set_viewup(*args, **kwargs)
self.render()
@wraps(Renderer.add_orientation_widget)
def add_orientation_widget(self, *args, **kwargs):
"""Wrap ``Renderer.add_orientation_widget``."""
return self.renderer.add_orientation_widget(*args, **kwargs)
@wraps(Renderer.add_axes)
def add_axes(self, *args, **kwargs):
"""Wrap ``Renderer.add_axes``."""
return self.renderer.add_axes(*args, **kwargs)
@wraps(Renderer.hide_axes)
def hide_axes(self, *args, **kwargs):
"""Wrap ``Renderer.hide_axes``."""
return self.renderer.hide_axes(*args, **kwargs)
@wraps(Renderer.show_axes)
def show_axes(self, *args, **kwargs):
"""Wrap ``Renderer.show_axes``."""
return self.renderer.show_axes(*args, **kwargs)
@wraps(Renderer.update_bounds_axes)
def update_bounds_axes(self, *args, **kwargs):
"""Wrap ``Renderer.update_bounds_axes``."""
return self.renderer.update_bounds_axes(*args, **kwargs)
@wraps(Renderer.add_actor)
def add_actor(self, *args, **kwargs):
"""Wrap ``Renderer.add_actor``."""
return self.renderer.add_actor(*args, **kwargs)
@wraps(Renderer.enable_parallel_projection)
def enable_parallel_projection(self, *args, **kwargs):
"""Wrap ``Renderer.enable_parallel_projection``."""
return self.renderer.enable_parallel_projection(*args, **kwargs)
@wraps(Renderer.disable_parallel_projection)
def disable_parallel_projection(self, *args, **kwargs):
"""Wrap ``Renderer.disable_parallel_projection``."""
return self.renderer.disable_parallel_projection(*args, **kwargs)
@property
def parallel_projection(self):
"""Return parallel projection state of active render window."""
return self.renderer.parallel_projection
@parallel_projection.setter
def parallel_projection(self, state):
"""Set parallel projection state of all active render windows."""
self.renderer.parallel_projection = state
@property
def parallel_scale(self):
"""Return parallel scale of active render window."""
return self.renderer.parallel_scale
@parallel_scale.setter
def parallel_scale(self, value):
"""Set parallel scale of all active render windows."""
self.renderer.parallel_scale = value
@wraps(Renderer.add_axes_at_origin)
def add_axes_at_origin(self, *args, **kwargs):
"""Wrap ``Renderer.add_axes_at_origin``."""
return self.renderer.add_axes_at_origin(*args, **kwargs)
@wraps(Renderer.show_bounds)
def show_bounds(self, *args, **kwargs):
"""Wrap ``Renderer.show_bounds``."""
return self.renderer.show_bounds(*args, **kwargs)
@wraps(Renderer.add_bounding_box)
def add_bounding_box(self, *args, **kwargs):
"""Wrap ``Renderer.add_bounding_box``."""
return self.renderer.add_bounding_box(*args, **kwargs)
@wraps(Renderer.remove_bounding_box)
def remove_bounding_box(self, *args, **kwargs):
"""Wrap ``Renderer.remove_bounding_box``."""
return self.renderer.remove_bounding_box(*args, **kwargs)
@wraps(Renderer.remove_bounds_axes)
def remove_bounds_axes(self, *args, **kwargs):
"""Wrap ``Renderer.remove_bounds_axes``."""
return self.renderer.remove_bounds_axes(*args, **kwargs)
@wraps(Renderer.show_grid)
def show_grid(self, *args, **kwargs):
"""Wrap ``Renderer.show_grid``."""
return self.renderer.show_grid(*args, **kwargs)
@wraps(Renderer.set_scale)
def set_scale(self, *args, **kwargs):
"""Wrap ``Renderer.set_scale``."""
return self.renderer.set_scale(*args, **kwargs)
@wraps(Renderer.enable_eye_dome_lighting)
def enable_eye_dome_lighting(self, *args, **kwargs):
"""Wrap ``Renderer.enable_eye_dome_lighting``."""
return self.renderer.enable_eye_dome_lighting(*args, **kwargs)
@wraps(Renderer.disable_eye_dome_lighting)
def disable_eye_dome_lighting(self, *args, **kwargs):
"""Wrap ``Renderer.disable_eye_dome_lighting``."""
return self.renderer.disable_eye_dome_lighting(*args, **kwargs)
@wraps(Renderer.reset_camera)
def reset_camera(self, *args, **kwargs):
"""Wrap ``Renderer.reset_camera``."""
self.renderer.reset_camera(*args, **kwargs)
self.render()
@wraps(Renderer.isometric_view)
def isometric_view(self, *args, **kwargs):
"""Wrap ``Renderer.isometric_view``."""
return self.renderer.isometric_view(*args, **kwargs)
@wraps(Renderer.view_isometric)
def view_isometric(self, *args, **kwarg):
"""Wrap ``Renderer.view_isometric``."""
return self.renderer.view_isometric(*args, **kwarg)
@wraps(Renderer.view_vector)
def view_vector(self, *args, **kwarg):
"""Wrap ``Renderer.view_vector``."""
return self.renderer.view_vector(*args, **kwarg)
@wraps(Renderer.view_xy)
def view_xy(self, *args, **kwarg):
"""Wrap ``Renderer.view_xy``."""
return self.renderer.view_xy(*args, **kwarg)
@wraps(Renderer.view_yx)
def view_yx(self, *args, **kwarg):
"""Wrap ``Renderer.view_yx``."""
return self.renderer.view_yx(*args, **kwarg)
@wraps(Renderer.view_xz)
def view_xz(self, *args, **kwarg):
"""Wrap ``Renderer.view_xz``."""
return self.renderer.view_xz(*args, **kwarg)
@wraps(Renderer.view_zx)
def view_zx(self, *args, **kwarg):
"""Wrap ``Renderer.view_zx``."""
return self.renderer.view_zx(*args, **kwarg)
@wraps(Renderer.view_yz)
def view_yz(self, *args, **kwarg):
"""Wrap ``Renderer.view_yz``."""
return self.renderer.view_yz(*args, **kwarg)
@wraps(Renderer.view_zy)
def view_zy(self, *args, **kwarg):
"""Wrap ``Renderer.view_zy``."""
return self.renderer.view_zy(*args, **kwarg)
@wraps(Renderer.disable)
def disable(self, *args, **kwarg):
"""Wrap ``Renderer.disable``."""
return self.renderer.disable(*args, **kwarg)
@wraps(Renderer.enable)
def enable(self, *args, **kwarg):
"""Wrap ``Renderer.enable``."""
return self.renderer.enable(*args, **kwarg)
@wraps(Renderer.enable_depth_peeling)
def enable_depth_peeling(self, *args, **kwargs):
"""Wrap ``Renderer.enable_depth_peeling``."""
if hasattr(self, 'ren_win'):
result = self.renderer.enable_depth_peeling(*args, **kwargs)
if result:
self.ren_win.AlphaBitPlanesOn()
return result
@wraps(Renderer.disable_depth_peeling)
def disable_depth_peeling(self):
"""Wrap ``Renderer.disable_depth_peeling``."""
if hasattr(self, 'ren_win'):
self.ren_win.AlphaBitPlanesOff()
return self.renderer.disable_depth_peeling()
@wraps(Renderer.get_default_cam_pos)
def get_default_cam_pos(self, *args, **kwargs):
"""Wrap ``Renderer.get_default_cam_pos``."""
return self.renderer.get_default_cam_pos(*args, **kwargs)
@wraps(Renderer.remove_actor)
def remove_actor(self, *args, **kwargs):
"""Wrap ``Renderer.remove_actor``."""
for renderer in self.renderers:
renderer.remove_actor(*args, **kwargs)
return True
#### Properties from Renderer ####
@property
def camera(self):
"""Return the active camera of the active renderer."""
return self.renderer.camera
@camera.setter
def camera(self, camera):
"""Set the active camera for the rendering scene."""
self.renderer.camera = camera
@property
def camera_set(self):
"""Return if the camera of the active renderer has been set."""
return self.renderer.camera_set
@camera_set.setter
def camera_set(self, is_set):
"""Set if the camera has been set on the active renderer."""
self.renderer.camera_set = is_set
@property
def bounds(self):
"""Return the bounds of the active renderer."""
return self.renderer.bounds
@property
def length(self):
"""Return the length of the diagonal of the bounding box of the scene."""
return self.renderer.length
@property
def center(self):
"""Return the center of the active renderer."""
return self.renderer.center
@property
def _scalar_bar_slots(self):
"""Return the scalar bar slots of the active renderer."""
return self.renderer._scalar_bar_slots
@property
def _scalar_bar_slot_lookup(self):
"""Return the scalar bar slot lookup of the active renderer."""
return self.renderer._scalar_bar_slot_lookup
@_scalar_bar_slots.setter
def _scalar_bar_slots(self, value):
"""Set the scalar bar slots of the active renderer."""
self.renderer._scalar_bar_slots = value
@_scalar_bar_slot_lookup.setter
def _scalar_bar_slot_lookup(self, value):
"""Set the scalar bar slot lookup of the active renderer."""
self.renderer._scalar_bar_slot_lookup = value
@property
def scale(self):
"""Return the scaling of the active renderer."""
return self.renderer.scale
@scale.setter
def scale(self, scale):
"""Set the scaling of the active renderer."""
self.renderer.set_scale(*scale)
@property
def camera_position(self):
"""Return camera position of the active render window."""
return self.renderer.camera_position
@camera_position.setter
def camera_position(self, camera_location):
"""Set camera position of the active render window."""
self.renderer.camera_position = camera_location
@property
def background_color(self):
"""Return the background color of the first render window."""
return self.renderers[0].GetBackground()
@background_color.setter
def background_color(self, color):
"""Set the background color of all the render windows."""
self.set_background(color)
#### Properties of the BasePlotter ####
@property
def window_size(self):
"""Return the render window size."""
return list(self.ren_win.GetSize())
@window_size.setter
def window_size(self, window_size):
"""Set the render window size."""
self.ren_win.SetSize(window_size[0], window_size[1])
@property
def image_depth(self):
"""Return a depth image representing current render window.
Helper attribute for ``get_image_depth``.
"""
return self.get_image_depth()
@property
def image(self):
"""Return an image array of current render window.
To retrieve an image after the render window has been closed,
set: `plotter.store_image = True` before closing the plotter.
"""
if not hasattr(self, 'ren_win') and hasattr(self, 'last_image'):
return self.last_image
data = image_from_window(self.ren_win)
if self.image_transparent_background:
return data
else: # ignore alpha channel
return data[:, :, :-1]
def render(self):
"""Render the main window.
Does nothing until ``show`` has been called.
"""
if hasattr(self, 'ren_win') and not self._first_time:
log.debug('Rendering')
self.ren_win.Render()
def add_key_event(self, key, callback):
"""Add a function to callback when the given key is pressed.
These are non-unique - thus a key could map to many callback
functions. The callback function must not have any arguments.
Parameters
----------
key : str
The key to trigger the event
callback : callable
A callable that takes no arguments
"""
if not hasattr(callback, '__call__'):
raise TypeError('callback must be callable.')
self._key_press_event_callbacks[key].append(callback)
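# Example (illustrative sketch): bind an arbitrary callback to the "k" key;
# the callback takes no arguments, as required above.
#
#     import pyvista
#     pl = pyvista.Plotter()
#     pl.add_mesh(pyvista.Sphere())
#     pl.add_key_event('k', lambda: print('k was pressed'))
#     pl.show()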
def _add_observer(self, event, call):
call = partial(try_callback, call)
self._observers[event] = self.iren.AddObserver(event, call)
def _remove_observer(self, event):
if event in self._observers:
self.iren.RemoveObserver(event)
del self._observers[event]
def clear_events_for_key(self, key):
"""Remove the callbacks associated to the key."""
self._key_press_event_callbacks.pop(key)
def store_mouse_position(self, *args):
"""Store mouse position."""
if not hasattr(self, "iren"):
raise AttributeError("This plotting window is not interactive.")
self.mouse_position = self.iren.GetEventPosition()
def store_click_position(self, *args):
"""Store click position in viewport coordinates."""
if not hasattr(self, "iren"):
raise AttributeError("This plotting window is not interactive.")
self.click_position = self.iren.GetEventPosition()
self.mouse_position = self.click_position
def track_mouse_position(self):
"""Keep track of the mouse position.
This will potentially slow down the interactor. No callbacks supported
here - use :func:`pyvista.BasePlotter.track_click_position` instead.
"""
if hasattr(self, "iren"):
self._add_observer(vtk.vtkCommand.MouseMoveEvent,
self.store_mouse_position)
def untrack_mouse_position(self):
"""Stop tracking the mouse position."""
self._remove_observer(vtk.vtkCommand.MouseMoveEvent)
def track_click_position(self, callback=None, side="right",
viewport=False):
"""Keep track of the click position.
By default, it only tracks right clicks.
Parameters
----------
callback : callable
A callable method that will use the click position. Passes the
click position as a length two tuple.
side : str
The side of the mouse for the button to track (left or right).
Default is right. Also accepts ``'r'`` or ``'l'``.
viewport: bool
If ``True``, uses the normalized viewport coordinate system
(values between 0.0 and 1.0 and support for HiDPI) when passing the
click position to the callback
"""
if not hasattr(self, "iren"):
return
side = str(side).lower()
if side in ["right", "r"]:
event = vtk.vtkCommand.RightButtonPressEvent
elif side in ["left", "l"]:
event = vtk.vtkCommand.LeftButtonPressEvent
else:
raise TypeError(f"Side ({side}) not supported. Try `left` or `right`")
def _click_callback(obj, event):
self.store_click_position()
if hasattr(callback, '__call__'):
if viewport:
callback(self.click_position)
else:
callback(self.pick_click_position())
self._add_observer(event, _click_callback)
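# Example (illustrative sketch): print the picked location on every left
# click; ``on_click`` is an arbitrary callback name.
#
#     import pyvista
#     pl = pyvista.Plotter()
#     pl.add_mesh(pyvista.Sphere())
#     def on_click(position):
#         print('clicked at', position)
#     pl.track_click_position(callback=on_click, side='left')
#     pl.show()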
def untrack_click_position(self):
"""Stop tracking the click position."""
if hasattr(self, "_click_observer"):
self.iren.RemoveObserver(self._click_observer)
del self._click_observer
def _prep_for_close(self):
"""Make sure a screenshot is acquired before closing.
This doesn't actually close anything! It just preps the plotter for
closing.
"""
# Grab screenshot right before renderer closes
self.last_image = self.screenshot(True, return_img=True)
self.last_image_depth = self.get_image_depth()
def increment_point_size_and_line_width(self, increment):
"""Increment point size and line width of all actors.
For every actor in the scene, increment both its point size and
line width by the given value.
"""
for renderer in self.renderers:
for actor in renderer._actors.values():
if hasattr(actor, "GetProperty"):
prop = actor.GetProperty()
if hasattr(prop, "SetPointSize"):
prop.SetPointSize(prop.GetPointSize() + increment)
if hasattr(prop, "SetLineWidth"):
prop.SetLineWidth(prop.GetLineWidth() + increment)
self.render()
return
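# Example (illustrative sketch): thicken all points and lines by two units;
# the default "plus"/"minus" key bindings below call this with +/- 1.
#
#     import pyvista
#     pl = pyvista.Plotter()
#     pl.add_mesh(pyvista.Sphere(), style='wireframe', line_width=1)
#     pl.increment_point_size_and_line_width(2)
#     pl.show()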
def reset_key_events(self):
"""Reset all of the key press events to their defaults."""
self._key_press_event_callbacks = collections.defaultdict(list)
self.add_key_event('q', self._prep_for_close) # Add no matter what
b_left_down_callback = lambda: self._add_observer('LeftButtonPressEvent', self.left_button_down)
self.add_key_event('b', b_left_down_callback)
self.add_key_event('v', lambda: self.isometric_view_interactive())
self.add_key_event('C', lambda: self.enable_cell_picking())
self.add_key_event('Up', lambda: self.camera.Zoom(1.05))
self.add_key_event('Down', lambda: self.camera.Zoom(0.95))
self.add_key_event('plus', lambda: self.increment_point_size_and_line_width(1))
self.add_key_event('minus', lambda: self.increment_point_size_and_line_width(-1))
def key_press_event(self, obj, event):
"""Listen for key press event."""
key = self.iren.GetKeySym()
log.debug(f'Key {key} pressed')
self._last_key = key
if key in self._key_press_event_callbacks.keys():
# Note that defaultdict's will never throw a key error
callbacks = self._key_press_event_callbacks[key]
for func in callbacks:
func()
def left_button_down(self, obj, event_type):
"""Register the event for a left button down click."""
if hasattr(self.ren_win, 'GetOffScreenFramebuffer'):
if not self.ren_win.GetOffScreenFramebuffer().GetFBOIndex():
# must raise an error here as this would otherwise segfault on VTK9
raise ValueError('Invoking helper with no framebuffer')
# Get 2D click location on window
click_pos = self.iren.GetEventPosition()
# Get corresponding click location in the 3D plot
picker = vtk.vtkWorldPointPicker()
picker.Pick(click_pos[0], click_pos[1], 0, self.renderer)
self.pickpoint = np.asarray(picker.GetPickPosition()).reshape((-1, 3))
if np.any(np.isnan(self.pickpoint)):
self.pickpoint[:] = 0
def update_style(self):
"""Update the camera interactor style."""
if self._style_class is None:
# We need an actually custom style to handle button up events
self._style_class = _style_factory(self._style)(self)
return self.iren.SetInteractorStyle(self._style_class)
def enable_trackball_style(self):
"""Set the interactive style to trackball camera.
The trackball camera is the default interactor style.
"""
self._style = 'TrackballCamera'
self._style_class = None
return self.update_style()
def enable_trackball_actor_style(self):
"""Set the interactive style to trackball actor.
This allows to rotate actors around the scene.
"""
self._style = 'TrackballActor'
self._style_class = None
return self.update_style()
def enable_image_style(self):
"""Set the interactive style to image.
Controls:
- Left Mouse button triggers window level events
- CTRL Left Mouse spins the camera around its view plane normal
- SHIFT Left Mouse pans the camera
- CTRL SHIFT Left Mouse dollys (a positional zoom) the camera
- Middle mouse button pans the camera
- Right mouse button dollys the camera.
- SHIFT Right Mouse triggers pick events
"""
self._style = 'Image'
self._style_class = None
return self.update_style()
def enable_joystick_style(self):
"""Set the interactive style to joystick.
It allows the user to move (rotate, pan, etc.) the camera, the point of
view for the scene. The position of the mouse relative to the center of
the scene determines the speed at which the camera moves, and the speed
of the mouse movement determines the acceleration of the camera, so the
camera continues to move even if the mouse is not moving.
For a 3-button mouse, the left button is for rotation, the right button
for zooming, the middle button for panning, and ctrl + left button for
spinning. (With fewer mouse buttons, ctrl + shift + left button is
for zooming, and shift + left button is for panning.)
"""
self._style = 'JoystickCamera'
self._style_class = None
return self.update_style()
def enable_zoom_style(self):
"""Set the interactive style to rubber band zoom.
This interactor style allows the user to draw a rectangle in the render
window using the left mouse button. When the mouse button is released,
the current camera zooms by an amount determined from the shorter side
of the drawn rectangle.
"""
self._style = 'RubberBandZoom'
self._style_class = None
return self.update_style()
def enable_terrain_style(self):
"""Set the interactive style to terrain.
Used to manipulate a camera which is viewing a scene with a natural
view up, e.g., terrain. The camera in such a scene is manipulated by
specifying azimuth (angle around the view up vector) and elevation
(the angle from the horizon).
"""
self._style = 'Terrain'
self._style_class = None
return self.update_style()
def enable_rubber_band_style(self):
"""Set the interactive style to rubber band picking.
This interactor style allows the user to draw a rectangle in the render
window by hitting 'r' and then using the left mouse button.
When the mouse button is released, the attached picker operates on the
pixel in the center of the selection rectangle. If the picker happens to
be a vtkAreaPicker it will operate on the entire selection rectangle.
When the 'p' key is hit the above pick operation occurs on a 1x1
rectangle. In other respects it behaves the same as its parent class.
"""
self._style = 'RubberBandPick'
self._style_class = None
return self.update_style()
def enable_rubber_band_2d_style(self):
"""Set the interactive style to rubber band 2d.
Camera rotation is not allowed with this interactor style. Zooming
affects the camera's parallel scale only, and assumes that the camera
is in parallel projection mode. The style also allows drawing a rubber
band using the left button. All camera changes invoke
StartInteractionEvent when the button is pressed, InteractionEvent
when the mouse (or wheel) is moved, and EndInteractionEvent when the
button is released. The bindings are as follows: Left mouse - Select
(invokes a SelectionChangedEvent). Right mouse - Zoom.
Middle mouse - Pan. Scroll wheel - Zoom.
"""
self._style = 'RubberBand2D'
self._style_class = None
return self.update_style()
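# Example (illustrative sketch): any of the ``enable_*_style`` helpers above
# can be called before or during interaction to swap the interactor style.
#
#     import pyvista
#     pl = pyvista.Plotter()
#     pl.add_mesh(pyvista.Sphere())
#     pl.enable_terrain_style()
#     pl.show()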
def hide_axes_all(self):
"""Hide the axes orientation widget in all renderers."""
for renderer in self.renderers:
renderer.hide_axes()
return
def show_axes_all(self):
"""Show the axes orientation widget in all renderers."""
for renderer in self.renderers:
renderer.show_axes()
return
def isometric_view_interactive(self):
"""Set the current interactive render window to isometric view."""
interactor = self.iren.GetInteractorStyle()
renderer = interactor.GetCurrentRenderer()
if renderer is None:
renderer = self.renderer
renderer.view_isometric()
def update(self, stime=1, force_redraw=True):
"""Update window, redraw, process messages query.
Parameters
----------
stime : int, optional
Duration of timer that interrupt vtkRenderWindowInteractor in
milliseconds.
force_redraw : bool, optional
Call ``render`` immediately.
"""
if self.off_screen:
return
if stime <= 0:
stime = 1
curr_time = time.time()
if Plotter.last_update_time > curr_time:
Plotter.last_update_time = curr_time
update_rate = self.iren.GetDesiredUpdateRate()
if (curr_time - Plotter.last_update_time) > (1.0/update_rate):
self.right_timer_id = self.iren.CreateRepeatingTimer(stime)
self.iren.Start()
self.iren.DestroyTimer(self.right_timer_id)
self.render()
Plotter.last_update_time = curr_time
elif force_redraw:
self.render()
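# Example (illustrative sketch): drive a non-blocking window from a loop. The
# ``auto_close`` and ``interactive_update`` flags of ``show`` are assumed to be
# available as in recent PyVista releases.
#
#     import pyvista
#     pl = pyvista.Plotter()
#     pl.add_mesh(pyvista.Sphere())
#     pl.show(auto_close=False, interactive_update=True)
#     for _ in range(100):
#         pl.update(stime=10)
#     pl.close()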
def add_mesh(self, mesh, color=None, style=None, scalars=None,
clim=None, show_edges=None, edge_color=None,
point_size=5.0, line_width=None, opacity=1.0,
flip_scalars=False, lighting=None, n_colors=256,
interpolate_before_map=True, cmap=None, label=None,
reset_camera=None, scalar_bar_args=None, show_scalar_bar=None,
stitle=None, multi_colors=False, name=None, texture=None,
render_points_as_spheres=None, render_lines_as_tubes=False,
smooth_shading=None, ambient=0.0, diffuse=1.0, specular=0.0,
specular_power=100.0, nan_color=None, nan_opacity=1.0,
culling=None, rgb=False, categories=False,
use_transparency=False, below_color=None, above_color=None,
annotations=None, pickable=True, preference="point",
log_scale=False, render=True, **kwargs):
"""Add any PyVista/VTK mesh or dataset that PyVista can wrap to the scene.
This method is using a mesh representation to view the surfaces
and/or geometry of datasets. For volume rendering, see
:func:`pyvista.BasePlotter.add_volume`.
Parameters
----------
mesh : pyvista.Common or pyvista.MultiBlock
Any PyVista or VTK mesh is supported. Also, any dataset
that :func:`pyvista.wrap` can handle including NumPy arrays of XYZ
points.
color : string or 3 item list, optional, defaults to white
Use to make the entire mesh have a single solid color.
Either a string, RGB list, or hex color string. For example:
``color='white'``, ``color='w'``, ``color=[1, 1, 1]``, or
``color='#FFFFFF'``. Color will be overridden if scalars are
specified.
style : string, optional
Visualization style of the mesh. One of the following:
``style='surface'``, ``style='wireframe'``, ``style='points'``.
Defaults to ``'surface'``. Note that ``'wireframe'`` only shows a
wireframe of the outer geometry.
scalars : str or numpy.ndarray, optional
Scalars used to "color" the mesh. Accepts a string name of an
array that is present on the mesh or an array equal
to the number of cells or the number of points in the
mesh. Array should be sized as a single vector. If both
``color`` and ``scalars`` are ``None``, then the active scalars are
used.
clim : 2 item list, optional
Color bar range for scalars. Defaults to minimum and
maximum of scalars array. Example: ``[-1, 2]``. ``rng``
is also an accepted alias for this.
show_edges : bool, optional
Shows the edges of a mesh. Does not apply to a wireframe
representation.
edge_color : string or 3 item list, optional, defaults to black
The solid color to give the edges when ``show_edges=True``.
Either a string, RGB list, or hex color string.
point_size : float, optional
Point size of any nodes in the dataset plotted. Also applicable
when style='points'. Default ``5.0``
line_width : float, optional
Thickness of lines. Only valid for wireframe and surface
representations. Default None.
opacity : float, str, array-like
Opacity of the mesh. If a single float value is given, it will be
the global opacity of the mesh and uniformly applied everywhere -
should be between 0 and 1. A string can also be specified to map
the scalars range to a predefined opacity transfer function
(options include: 'linear', 'linear_r', 'geom', 'geom_r').
A string could also be used to map a scalars array from the mesh to
the opacity (must have same number of elements as the
``scalars`` argument). Or you can pass a custom made transfer
function that is an array either ``n_colors`` in length or shorter.
flip_scalars : bool, optional
Flip direction of cmap. Most colormaps allow ``*_r`` suffix to do
this as well.
lighting : bool, optional
Enable or disable view direction lighting. Defaults to the
``rcParams['lighting']`` theme setting.
n_colors : int, optional
Number of colors to use when displaying scalars. Defaults to 256.
The scalar bar will also have this many colors.
interpolate_before_map : bool, optional
Enabling makes for a smoother scalars display. Default is True.
When False, OpenGL will interpolate the mapped colors which can
result in showing colors that are not present in the color map.
cmap : str, list, optional
Name of the Matplotlib colormap to use when mapping the ``scalars``.
See available Matplotlib colormaps. Only applicable for when
displaying ``scalars``. Requires Matplotlib to be installed.
``colormap`` is also an accepted alias for this. If ``colorcet`` or
``cmocean`` are installed, their colormaps can be specified by name.
You can also specify a list of colors to override an
existing colormap with a custom one. For example, to
create a three color colormap you might specify
``['green', 'red', 'blue']``
label : str, optional
String label to use when adding a legend to the scene with
:func:`pyvista.BasePlotter.add_legend`
reset_camera : bool, optional
Reset the camera after adding this mesh to the scene
scalar_bar_args : dict, optional
Dictionary of keyword arguments to pass when adding the scalar bar
to the scene. For options, see
:func:`pyvista.BasePlotter.add_scalar_bar`.
show_scalar_bar : bool
If False, a scalar bar will not be added to the scene. Defaults
to ``True``.
stitle : string, optional
Scalar bar title. By default the scalar bar is given a title of the
scalars array used to color the mesh.
To create a bar with no title, use an empty string (i.e. '').
multi_colors : bool, optional
If a ``MultiBlock`` dataset is given this will color each
block by a solid color using matplotlib's color cycler.
name : str, optional
The name for the added mesh/actor so that it can be easily
updated. If an actor of this name already exists in the
rendering window, it will be replaced by the new actor.
texture : vtk.vtkTexture or np.ndarray or boolean, optional
A texture to apply if the input mesh has texture
coordinates. This will not work with MultiBlock
datasets. If set to ``True``, the first available texture
on the object will be used. If a string name is given, it
will pull a texture with that name associated to the input
mesh.
render_points_as_spheres : bool, optional
Render points as spheres rather than flat squares.
render_lines_as_tubes : bool, optional
Render lines as thick tubes rather than flat lines.
smooth_shading : bool, optional
Enable smooth (Phong) shading; surface normals are computed as needed.
ambient : float, optional
When lighting is enabled, this is the amount of ambient light (from
0 to 1) that reaches the actor independent of the direction of the
light source. Default 0.0
diffuse : float, optional
The diffuse lighting coefficient. Default 1.0
specular : float, optional
The specular lighting coefficient. Default 0.0
specular_power : float, optional
The specular power. Between 0.0 and 128.0. Default 100.0
nan_color : string or 3 item list, optional, defaults to gray
The color to use for all ``NaN`` values in the plotted scalar
array.
nan_opacity : float, optional
Opacity of ``NaN`` values. Should be between 0 and 1.
Default 1.0
culling : str, optional
Does not render faces that are culled. Options are ``'front'`` or
``'back'``. This can be helpful for dense surface meshes,
especially when edges are visible, but can cause flat
meshes to be partially displayed. Defaults to ``False``.
rgb : bool, optional
If a 2 dimensional array is passed as the scalars, plot those
values as RGB(A) colors. ``rgba`` is also an accepted alias for this.
Opacity (the A) is optional.
categories : bool, optional
If set to ``True``, then the number of unique values in the scalar
array will be used as the ``n_colors`` argument.
use_transparency : bool, optional
Invert the opacity mappings and make the values correspond to
transparency.
below_color : string or 3 item list, optional
Solid color for values below the scalars range (``clim``). This
will automatically set the scalar bar ``below_label`` to
``'Below'``
above_color : string or 3 item list, optional
Solid color for values above the scalars range (``clim``). This
will automatically set the scalar bar ``above_label`` to
``'Above'``
annotations : dict, optional
Pass a dictionary of annotations. Keys are the float values in the
scalars range to annotate on the scalar bar and the values are the
string annotations.
pickable : bool
Set whether this mesh is pickable
render : bool, optional
Force a render when True. Default ``True``.
Return
------
actor: vtk.vtkActor
VTK actor of the mesh.
"""
# Convert the VTK data object to a pyvista wrapped object if necessary
if not is_pyvista_dataset(mesh):
mesh = wrap(mesh)
if not is_pyvista_dataset(mesh):
raise TypeError(f'Object type ({type(mesh)}) not supported for plotting in PyVista.'
)
##### Parse arguments to be used for all meshes #####
if scalar_bar_args is None:
scalar_bar_args = {'n_colors': n_colors}
if show_edges is None:
show_edges = rcParams['show_edges']
if edge_color is None:
edge_color = rcParams['edge_color']
if show_scalar_bar is None:
show_scalar_bar = rcParams['show_scalar_bar']
if lighting is None:
lighting = rcParams['lighting']
if smooth_shading is None:
smooth_shading = rcParams['smooth_shading']
# supported aliases
clim = kwargs.pop('rng', clim)
cmap = kwargs.pop('colormap', cmap)
culling = kwargs.pop("backface_culling", culling)
if render_points_as_spheres is None:
render_points_as_spheres = rcParams['render_points_as_spheres']
if name is None:
name = f'{type(mesh).__name__}({mesh.memory_address})'
if nan_color is None:
nan_color = rcParams['nan_color']
nan_color = list(parse_color(nan_color))
nan_color.append(nan_opacity)
if color is True:
color = rcParams['color']
if texture is False:
texture = None
if culling is True:
culling = 'backface'
rgb = kwargs.pop('rgba', rgb)
if "scalar" in kwargs:
raise TypeError("`scalar` is an invalid keyword argument for `add_mesh`. Perhaps you mean `scalars` with an s?")
assert_empty_kwargs(**kwargs)
##### Handle composite datasets #####
if isinstance(mesh, pyvista.MultiBlock):
# first check the scalars
if clim is None and scalars is not None:
# Get the data range across the array for all blocks
# if scalars specified
if isinstance(scalars, str):
clim = mesh.get_data_range(scalars)
else:
# TODO: an array was given... how do we deal with
# that? Possibly a 2D arrays or list of
# arrays where first index corresponds to
# the block? This could get complicated real
# quick.
raise TypeError('scalars array must be given as a string name for multiblock datasets.')
the_arguments = locals()
the_arguments.pop('self')
the_arguments.pop('mesh')
the_arguments.pop('kwargs')
if multi_colors:
# Compute unique colors for each index of the block
if has_matplotlib:
from itertools import cycle
cycler = matplotlib.rcParams['axes.prop_cycle']
colors = cycle(cycler)
else:
multi_colors = False
logging.warning('Please install matplotlib for color cycles')
# Now iteratively plot each element of the multiblock dataset
actors = []
for idx in range(mesh.GetNumberOfBlocks()):
if mesh[idx] is None:
continue
# Get a good name to use
next_name = f'{name}-{idx}'
# Get the data object
if not is_pyvista_dataset(mesh[idx]):
data = wrap(mesh.GetBlock(idx))
if not is_pyvista_dataset(data):
continue # move on if we can't plot it
else:
data = mesh.GetBlock(idx)
if data is None or (not isinstance(data, pyvista.MultiBlock) and data.n_points < 1):
# Note that a block can exist but be None type
# or it could have zeros points (be empty) after filtering
continue
# Now check that scalars is available for this dataset
if isinstance(data, vtk.vtkMultiBlockDataSet) or get_array(data, scalars) is None:
ts = None
else:
ts = scalars
if multi_colors:
color = next(colors)['color']
## Add to the scene
the_arguments['color'] = color
the_arguments['scalars'] = ts
the_arguments['name'] = next_name
the_arguments['texture'] = None
a = self.add_mesh(data, **the_arguments)
actors.append(a)
if (reset_camera is None and not self.camera_set) or reset_camera:
cpos = self.get_default_cam_pos()
self.camera_position = cpos
self.camera_set = False
self.reset_camera()
return actors
##### Plot a single PyVista mesh #####
# Compute surface normals if using smooth shading
if smooth_shading:
# extract the outer surface if the mesh is not already PolyData
if not isinstance(mesh, pyvista.PolyData):
grid = mesh
mesh = grid.extract_surface()
ind = mesh.point_arrays['vtkOriginalPointIds']
# remap scalars
if isinstance(scalars, np.ndarray):
scalars = scalars[ind]
if texture:
_tcoords = mesh.t_coords
mesh.compute_normals(cell_normals=False, inplace=True)
if texture:
mesh.t_coords = _tcoords
if mesh.n_points < 1:
raise ValueError('Empty meshes cannot be plotted. Input mesh has zero points.')
# Try to plot something if no preference given
if scalars is None and color is None and texture is None:
# Prefer texture first
if len(list(mesh.textures.keys())) > 0:
texture = True
# If no texture, plot any active scalar
else:
# Make sure scalars components are not vectors/tuples
scalars = mesh.active_scalars_name
# Don't allow plotting of string arrays by default
if scalars is not None:# and np.issubdtype(mesh.active_scalars.dtype, np.number):
if stitle is None:
stitle = scalars
else:
scalars = None
# set main values
self.mesh = mesh
self.mapper = make_mapper(vtk.vtkDataSetMapper)
self.mapper.SetInputData(self.mesh)
self.mapper.GetLookupTable().SetNumberOfTableValues(n_colors)
if interpolate_before_map:
self.mapper.InterpolateScalarsBeforeMappingOn()
actor = vtk.vtkActor()
prop = vtk.vtkProperty()
actor.SetMapper(self.mapper)
actor.SetProperty(prop)
# Make sure scalars is a numpy array after this point
original_scalar_name = None
if isinstance(scalars, str):
self.mapper.SetArrayName(scalars)
original_scalar_name = scalars
scalars = get_array(mesh, scalars,
preference=preference, err=True)
if stitle is None:
stitle = original_scalar_name
if texture is True or isinstance(texture, (str, int)):
texture = mesh._activate_texture(texture)
if texture:
if isinstance(texture, np.ndarray):
texture = numpy_to_texture(texture)
if not isinstance(texture, (vtk.vtkTexture, vtk.vtkOpenGLTexture)):
raise TypeError(f'Invalid texture type ({type(texture)})')
if mesh.GetPointData().GetTCoords() is None:
raise ValueError('Input mesh does not have texture coordinates to support the texture.')
actor.SetTexture(texture)
# Set color to white by default when using a texture
if color is None:
color = 'white'
if scalars is None:
show_scalar_bar = False
self.mapper.SetScalarModeToUsePointFieldData()
# see https://github.com/pyvista/pyvista/issues/950
mesh.set_active_scalars(None)
# Handle making opacity array =========================================
_custom_opac = False
if isinstance(opacity, str):
try:
# Get array from mesh
opacity = get_array(mesh, opacity,
preference=preference, err=True)
if np.any(opacity > 1):
warnings.warn("Opacity scalars contain values over 1")
if np.any(opacity < 0):
warnings.warn("Opacity scalars contain values less than 0")
_custom_opac = True
except:
# Or get opacity transfer function
opacity = opacity_transfer_function(opacity, n_colors)
else:
if scalars.shape[0] != opacity.shape[0]:
raise ValueError('Opacity array and scalars array must have the same number of elements.')
elif isinstance(opacity, (np.ndarray, list, tuple)):
opacity = np.array(opacity)
if scalars.shape[0] == opacity.shape[0]:
# User could pass an array of opacities for every point/cell
_custom_opac = True
else:
opacity = opacity_transfer_function(opacity, n_colors)
if use_transparency and np.max(opacity) <= 1.0:
opacity = 1 - opacity
elif use_transparency and isinstance(opacity, np.ndarray):
opacity = 255 - opacity
# Scalars formatting ==================================================
if cmap is None: # Set default map if matplotlib is available
if has_matplotlib:
cmap = rcParams['cmap']
# Set the array title for when it is added back to the mesh
if _custom_opac:
title = '__custom_rgba'
elif stitle is None:
title = 'Data'
else:
title = stitle
if scalars is not None:
# if scalars is a string, then get the first array found with that name
if not isinstance(scalars, np.ndarray):
scalars = np.asarray(scalars)
_using_labels = False
if not np.issubdtype(scalars.dtype, np.number):
# raise TypeError('Non-numeric scalars are currently not supported for plotting.')
# TODO: If str array, digitize and annotate
cats, scalars = np.unique(scalars.astype('|S'), return_inverse=True)
values = np.unique(scalars)
clim = [np.min(values) - 0.5, np.max(values) + 0.5]
title = f'{title}-digitized'
n_colors = len(cats)
scalar_bar_args.setdefault('n_labels', 0)
_using_labels = True
if rgb:
if scalars.ndim != 2 or scalars.shape[1] < 3 or scalars.shape[1] > 4:
raise ValueError('RGB array must be n_points/n_cells by 3/4 in shape.')
if scalars.ndim != 1:
if rgb:
pass
elif scalars.ndim == 2 and (scalars.shape[0] == mesh.n_points or scalars.shape[0] == mesh.n_cells):
scalars = np.linalg.norm(scalars.copy(), axis=1)
title = f'{title}-normed'
else:
scalars = scalars.ravel()
if scalars.dtype == np.bool_:
scalars = scalars.astype(np.float_)
def prepare_mapper(scalars):
# Scalars interpolation approach
if scalars.shape[0] == mesh.n_points:
self.mesh.point_arrays.append(scalars, title, True)
self.mapper.SetScalarModeToUsePointData()
elif scalars.shape[0] == mesh.n_cells:
self.mesh.cell_arrays.append(scalars, title, True)
self.mapper.SetScalarModeToUseCellData()
else:
raise_not_matching(scalars, mesh)
# Common tasks
self.mapper.GetLookupTable().SetNumberOfTableValues(n_colors)
if interpolate_before_map:
self.mapper.InterpolateScalarsBeforeMappingOn()
if rgb or _custom_opac:
self.mapper.SetColorModeToDirectScalars()
else:
self.mapper.SetColorModeToMapScalars()
return
prepare_mapper(scalars)
table = self.mapper.GetLookupTable()
if log_scale:
table.SetScaleToLog10()
if _using_labels:
table.SetAnnotations(convert_array(values), convert_string_array(cats))
if isinstance(annotations, dict):
for val, anno in annotations.items():
table.SetAnnotation(float(val), str(anno))
# Set scalars range
if clim is None:
clim = [np.nanmin(scalars), np.nanmax(scalars)]
elif isinstance(clim, float) or isinstance(clim, int):
clim = [-clim, clim]
if np.any(clim) and not rgb:
self.mapper.scalar_range = clim[0], clim[1]
table.SetNanColor(nan_color)
if above_color:
table.SetUseAboveRangeColor(True)
table.SetAboveRangeColor(*parse_color(above_color, opacity=1))
scalar_bar_args.setdefault('above_label', 'Above')
if below_color:
table.SetUseBelowRangeColor(True)
table.SetBelowRangeColor(*parse_color(below_color, opacity=1))
scalar_bar_args.setdefault('below_label', 'Below')
if cmap is not None:
if not has_matplotlib:
cmap = None
logging.warning('Please install matplotlib for color maps.')
cmap = get_cmap_safe(cmap)
if categories:
if categories is True:
n_colors = len(np.unique(scalars))
elif isinstance(categories, int):
n_colors = categories
ctable = cmap(np.linspace(0, 1, n_colors))*255
ctable = ctable.astype(np.uint8)
# Set opacities
if isinstance(opacity, np.ndarray) and not _custom_opac:
ctable[:,-1] = opacity
if flip_scalars:
ctable = np.ascontiguousarray(ctable[::-1])
table.SetTable(VN.numpy_to_vtk(ctable))
if _custom_opac:
# need to round the colors here since we're
# directly displaying the colors
hue = normalize(scalars, minimum=clim[0], maximum=clim[1])
scalars = np.round(hue*n_colors)/n_colors
scalars = cmap(scalars)*255
scalars[:, -1] *= opacity
scalars = scalars.astype(np.uint8)
prepare_mapper(scalars)
else: # no cmap specified
if flip_scalars:
table.SetHueRange(0.0, 0.66667)
else:
table.SetHueRange(0.66667, 0.0)
else:
self.mapper.SetScalarModeToUseFieldData()
# Set actor properties ================================================
# select view style
if not style:
style = 'surface'
style = style.lower()
if style == 'wireframe':
prop.SetRepresentationToWireframe()
if color is None:
color = rcParams['outline_color']
elif style == 'points':
prop.SetRepresentationToPoints()
elif style == 'surface':
prop.SetRepresentationToSurface()
else:
raise ValueError('Invalid style. Must be one of the following:\n'
'\t"surface"\n'
'\t"wireframe"\n'
'\t"points"\n')
prop.SetPointSize(point_size)
prop.SetAmbient(ambient)
prop.SetDiffuse(diffuse)
prop.SetSpecular(specular)
prop.SetSpecularPower(specular_power)
if smooth_shading:
prop.SetInterpolationToPhong()
else:
prop.SetInterpolationToFlat()
# edge display style
if show_edges:
prop.EdgeVisibilityOn()
rgb_color = parse_color(color)
prop.SetColor(rgb_color)
if isinstance(opacity, (float, int)):
prop.SetOpacity(opacity)
prop.SetEdgeColor(parse_color(edge_color))
if render_points_as_spheres:
prop.SetRenderPointsAsSpheres(render_points_as_spheres)
if render_lines_as_tubes:
prop.SetRenderLinesAsTubes(render_lines_as_tubes)
# legend label
if label:
if not isinstance(label, str):
raise TypeError('Label must be a string')
geom = pyvista.single_triangle()
if scalars is not None:
geom = pyvista.Box()
rgb_color = parse_color('black')
geom.points -= geom.center
self._labels.append([geom, label, rgb_color])
# lighting display style
if not lighting:
prop.LightingOff()
# set line thickness
if line_width:
prop.SetLineWidth(line_width)
# Add scalar bar if available
if stitle is not None and show_scalar_bar and (not rgb or _custom_opac):
self.add_scalar_bar(stitle, **scalar_bar_args)
self.add_actor(actor,
reset_camera=reset_camera,
name=name, culling=culling,
pickable=pickable,
render=render)
self.renderer.Modified()
return actor
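# Example (illustrative sketch): color a mesh by a point array with an explicit
# colormap and scalar bar title; the array name 'elevation' is arbitrary.
#
#     import pyvista
#     mesh = pyvista.Sphere()
#     mesh['elevation'] = mesh.points[:, 2]
#     pl = pyvista.Plotter()
#     pl.add_mesh(mesh, scalars='elevation', cmap='viridis', show_edges=True,
#                 stitle='Elevation')
#     pl.show()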
def add_volume(self, volume, scalars=None, clim=None, resolution=None,
opacity='linear', n_colors=256, cmap=None, flip_scalars=False,
reset_camera=None, name=None, ambient=0.0, categories=False,
culling=False, multi_colors=False,
blending='composite', mapper=None,
stitle=None, scalar_bar_args=None, show_scalar_bar=None,
annotations=None, pickable=True, preference="point",
opacity_unit_distance=None, shade=False,
diffuse=0.7, specular=0.2, specular_power=10.0,
render=True, **kwargs):
"""Add a volume, rendered using a smart mapper by default.
Requires a 3D :class:`numpy.ndarray` or :class:`pyvista.UniformGrid`.
Parameters
----------
volume : 3D numpy.ndarray or pyvista.UniformGrid
The input volume to visualize. 3D numpy arrays are accepted.
scalars : str or numpy.ndarray, optional
Scalars used to "color" the mesh. Accepts a string name of an
array that is present on the mesh or an array equal
to the number of cells or the number of points in the
mesh. Array should be sized as a single vector. If ``scalars`` is
``None``, then the active scalars are used.
clim : 2 item list, optional
Color bar range for scalars. Defaults to minimum and
maximum of scalars array. Example: ``[-1, 2]``. ``rng``
is also an accepted alias for this.
opacity : string or numpy.ndarray, optional
Opacity mapping for the scalars array.
A string can also be specified to map the scalars range to a
predefined opacity transfer function (options include: 'linear',
'linear_r', 'geom', 'geom_r'). Or you can pass a custom made
transfer function that is an array either ``n_colors`` in length or
shorter.
n_colors : int, optional
Number of colors to use when displaying scalars. Defaults to 256.
The scalar bar will also have this many colors.
cmap : str, optional
Name of the Matplotlib colormap to use when mapping the ``scalars``.
See available Matplotlib colormaps. Only applicable for when
displaying ``scalars``. Requires Matplotlib to be installed.
``colormap`` is also an accepted alias for this. If ``colorcet`` or
``cmocean`` are installed, their colormaps can be specified by name.
flip_scalars : bool, optional
Flip direction of cmap. Most colormaps allow ``*_r`` suffix to do
this as well.
reset_camera : bool, optional
Reset the camera after adding this mesh to the scene
name : str, optional
The name for the added actor so that it can be easily
updated. If an actor of this name already exists in the
rendering window, it will be replaced by the new actor.
ambient : float, optional
When lighting is enabled, this is the amount of ambient light (from
0 to 1) that reaches the actor independent of the direction of the
light source. Default 0.0.
culling : str, optional
Does not render faces that are culled. Options are ``'front'`` or
``'back'``. This can be helpful for dense surface meshes,
especially when edges are visible, but can cause flat
meshes to be partially displayed. Defaults to ``False``.
categories : bool, optional
If set to ``True``, then the number of unique values in the scalar
array will be used as the ``n_colors`` argument.
multi_colors : bool, optional
Whether or not to use multiple colors when plotting MultiBlock
objects. Blocks will be colored sequentially through a cycle of
named colormaps ('Reds', 'Greens', 'Blues', 'Greys', ...).
blending : str, optional
Blending mode for visualisation of the input object(s). Can be
one of 'additive', 'maximum', 'minimum', 'composite', or
'average'. Defaults to ``'composite'``.
mapper : str, optional
Volume mapper to use given by name. Options include:
``'fixed_point'``, ``'gpu'``, ``'open_gl'``, and ``'smart'``.
If ``None`` the ``"volume_mapper"`` in the ``rcParams`` is used.
scalar_bar_args : dict, optional
Dictionary of keyword arguments to pass when adding the scalar bar
to the scene. For options, see
:func:`pyvista.BasePlotter.add_scalar_bar`.
show_scalar_bar : bool
If False, a scalar bar will not be added to the scene. Defaults
to ``True``.
stitle : string, optional
Scalar bar title. By default the scalar bar is given a title of the
scalars array used to color the mesh.
To create a bar with no title, use an empty string (i.e. '').
annotations : dict, optional
Pass a dictionary of annotations. Keys are the float values in the
scalars range to annotate on the scalar bar and the values are the
string annotations.
opacity_unit_distance : float
Set/Get the unit distance on which the scalar opacity transfer
function is defined. Meaning that over that distance, a given
opacity (from the transfer function) is accumulated. This is
adjusted for the actual sampling distance during rendering. By
default, this is the length of the diagonal of the bounding box of
the volume divided by the dimensions.
shade : bool
Default off. If shading is turned on, the mapper may perform
shading calculations - in some cases shading does not apply
(for example, in a maximum intensity projection) and therefore
shading will not be performed even if this flag is on.
diffuse : float, optional
The diffuse lighting coefficient. Default 0.7.
specular : float, optional
The specular lighting coefficient. Default 0.2.
specular_power : float, optional
The specular power. Between 0.0 and 128.0. Default 10.0.
render : bool, optional
Force a render when True. Default ``True``.
Return
------
actor: vtk.vtkVolume
VTK volume of the input data.
"""
# Handle default arguments
# Supported aliases
clim = kwargs.pop('rng', clim)
cmap = kwargs.pop('colormap', cmap)
culling = kwargs.pop("backface_culling", culling)
if "scalar" in kwargs:
raise TypeError("`scalar` is an invalid keyword argument for `add_mesh`. Perhaps you mean `scalars` with an s?")
assert_empty_kwargs(**kwargs)
if scalar_bar_args is None:
scalar_bar_args = {}
if show_scalar_bar is None:
show_scalar_bar = rcParams['show_scalar_bar']
if culling is True:
culling = 'backface'
if mapper is None:
mapper = rcParams["volume_mapper"]
# only render when the plotter has already been shown
if render is None:
render = not self._first_time
# Convert the VTK data object to a pyvista wrapped object if necessary
if not is_pyvista_dataset(volume):
if isinstance(volume, np.ndarray):
volume = wrap(volume)
if resolution is None:
resolution = [1,1,1]
elif len(resolution) != 3:
raise ValueError('Invalid resolution dimensions.')
volume.spacing = resolution
else:
volume = wrap(volume)
if not is_pyvista_dataset(volume):
raise TypeError(f'Object type ({type(volume)}) not supported for plotting in PyVista.')
else:
# HACK: Make a copy so the original object is not altered.
# Also, place all data on the nodes as issues arise when
# volume rendering on the cells.
volume = volume.cell_data_to_point_data()
if name is None:
name = f'{type(volume).__name__}({volume.memory_address})'
if isinstance(volume, pyvista.MultiBlock):
from itertools import cycle
cycler = cycle(['Reds', 'Greens', 'Blues', 'Greys', 'Oranges', 'Purples'])
# Now iteratively plot each element of the multiblock dataset
actors = []
for idx in range(volume.GetNumberOfBlocks()):
if volume[idx] is None:
continue
# Get a good name to use
next_name = f'{name}-{idx}'
# Get the data object
block = wrap(volume.GetBlock(idx))
if resolution is None:
try:
block_resolution = block.GetSpacing()
except AttributeError:
block_resolution = resolution
else:
block_resolution = resolution
if multi_colors:
color = next(cycler)
else:
color = cmap
a = self.add_volume(block, resolution=block_resolution, opacity=opacity,
n_colors=n_colors, cmap=color, flip_scalars=flip_scalars,
reset_camera=reset_camera, name=next_name,
ambient=ambient, categories=categories,
culling=culling, clim=clim,
mapper=mapper, pickable=pickable,
opacity_unit_distance=opacity_unit_distance,
shade=shade, diffuse=diffuse, specular=specular,
specular_power=specular_power, render=render)
actors.append(a)
return actors
if not isinstance(volume, pyvista.UniformGrid):
raise TypeError(f'Type {type(volume)} not supported for volume rendering at this time. Use `pyvista.UniformGrid`.')
if opacity_unit_distance is None:
opacity_unit_distance = volume.length / (np.mean(volume.dimensions) - 1)
if scalars is None:
# Make sure scalars components are not vectors/tuples
scalars = volume.active_scalars
# Don't allow plotting of string arrays by default
if scalars is not None and np.issubdtype(scalars.dtype, np.number):
if stitle is None:
stitle = volume.active_scalars_info[1]
else:
raise ValueError('No scalars to use for volume rendering.')
elif isinstance(scalars, str):
pass
##############
title = 'Data' if stitle is None else stitle
if isinstance(scalars, str):
title = scalars
scalars = get_array(volume, scalars,
preference=preference, err=True)
if stitle is None:
stitle = title
if not isinstance(scalars, np.ndarray):
scalars = np.asarray(scalars)
if not np.issubdtype(scalars.dtype, np.number):
raise TypeError('Non-numeric scalars are currently not supported for volume rendering.')
if scalars.ndim != 1:
scalars = scalars.ravel()
if scalars.dtype == np.bool_ or scalars.dtype == np.uint8:
scalars = scalars.astype(np.float_)
# Define mapper, volume, and add the correct properties
mappers = {
'fixed_point': vtk.vtkFixedPointVolumeRayCastMapper,
'gpu': vtk.vtkGPUVolumeRayCastMapper,
'open_gl': vtk.vtkOpenGLGPUVolumeRayCastMapper,
'smart': vtk.vtkSmartVolumeMapper,
}
if not isinstance(mapper, str) or mapper not in mappers.keys():
raise TypeError(f"Mapper ({mapper}) unknown. Available volume mappers include: {', '.join(mappers.keys())}")
self.mapper = make_mapper(mappers[mapper])
# Scalars interpolation approach
if scalars.shape[0] == volume.n_points:
volume.point_arrays.append(scalars, title, True)
self.mapper.SetScalarModeToUsePointData()
elif scalars.shape[0] == volume.n_cells:
volume.cell_arrays.append(scalars, title, True)
self.mapper.SetScalarModeToUseCellData()
else:
raise_not_matching(scalars, volume)
# Set scalars range
if clim is None:
clim = [np.nanmin(scalars), np.nanmax(scalars)]
elif isinstance(clim, float) or isinstance(clim, int):
clim = [-clim, clim]
###############
scalars = scalars.astype(np.float_)
with np.errstate(invalid='ignore'):
idxs0 = scalars < clim[0]
idxs1 = scalars > clim[1]
scalars[idxs0] = clim[0]
scalars[idxs1] = clim[1]
scalars = ((scalars - np.nanmin(scalars)) / (np.nanmax(scalars) - np.nanmin(scalars))) * 255
# scalars = scalars.astype(np.uint8)
volume[title] = scalars
self.mapper.scalar_range = clim
# Set colormap and build lookup table
table = vtk.vtkLookupTable()
# table.SetNanColor(nan_color) # NaN's are chopped out with current implementation
# above/below colors not supported with volume rendering
if isinstance(annotations, dict):
for val, anno in annotations.items():
table.SetAnnotation(float(val), str(anno))
if cmap is None: # Set default map if matplotlib is available
if has_matplotlib:
cmap = rcParams['cmap']
if cmap is not None:
if not has_matplotlib:
raise ImportError('Please install matplotlib for volume rendering.')
cmap = get_cmap_safe(cmap)
if categories:
if categories is True:
n_colors = len(np.unique(scalars))
elif isinstance(categories, int):
n_colors = categories
if flip_scalars:
cmap = cmap.reversed()
color_tf = vtk.vtkColorTransferFunction()
for ii in range(n_colors):
color_tf.AddRGBPoint(ii, *cmap(ii)[:-1])
# Set opacities
if isinstance(opacity, (float, int)):
opacity_values = [opacity] * n_colors
elif isinstance(opacity, str):
opacity_values = pyvista.opacity_transfer_function(opacity, n_colors)
elif isinstance(opacity, (np.ndarray, list, tuple)):
opacity = np.array(opacity)
opacity_values = opacity_transfer_function(opacity, n_colors)
opacity_tf = vtk.vtkPiecewiseFunction()
for ii in range(n_colors):
opacity_tf.AddPoint(ii, opacity_values[ii] / n_colors)
# Now put color tf and opacity tf into a lookup table for the scalar bar
table.SetNumberOfTableValues(n_colors)
lut = cmap(np.array(range(n_colors))) * 255
lut[:,3] = opacity_values
lut = lut.astype(np.uint8)
table.SetTable(VN.numpy_to_vtk(lut))
table.SetRange(*clim)
self.mapper.lookup_table = table
self.mapper.SetInputData(volume)
blending = blending.lower()
if blending in ['additive', 'add', 'sum']:
self.mapper.SetBlendModeToAdditive()
elif blending in ['average', 'avg', 'average_intensity']:
self.mapper.SetBlendModeToAverageIntensity()
elif blending in ['composite', 'comp']:
self.mapper.SetBlendModeToComposite()
elif blending in ['maximum', 'max', 'maximum_intensity']:
self.mapper.SetBlendModeToMaximumIntensity()
elif blending in ['minimum', 'min', 'minimum_intensity']:
self.mapper.SetBlendModeToMinimumIntensity()
else:
raise ValueError(f"Blending mode '{blending}' invalid. Please choose one "
"of 'additive', 'composite', 'minimum', 'maximum', or 'average'.")
self.mapper.Update()
self.volume = vtk.vtkVolume()
self.volume.SetMapper(self.mapper)
prop = vtk.vtkVolumeProperty()
prop.SetColor(color_tf)
prop.SetScalarOpacity(opacity_tf)
prop.SetAmbient(ambient)
prop.SetScalarOpacityUnitDistance(opacity_unit_distance)
prop.SetShade(shade)
prop.SetDiffuse(diffuse)
prop.SetSpecular(specular)
prop.SetSpecularPower(specular_power)
self.volume.SetProperty(prop)
actor, prop = self.add_actor(self.volume, reset_camera=reset_camera,
name=name, culling=culling,
pickable=pickable, render=render)
# Add scalar bar
if stitle is not None and show_scalar_bar:
self.add_scalar_bar(stitle, **scalar_bar_args)
self.renderer.Modified()
return actor
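# Example (illustrative sketch): volume render a ``UniformGrid``; the bundled
# ``examples.load_uniform`` dataset is assumed to be available.
#
#     import pyvista
#     from pyvista import examples
#     grid = examples.load_uniform()
#     pl = pyvista.Plotter()
#     pl.add_volume(grid, cmap='coolwarm', opacity='linear')
#     pl.show()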
def update_scalar_bar_range(self, clim, name=None):
"""Update the value range of the active or named scalar bar.
Parameters
----------
clim : 2 item list
The new range of scalar bar. Example: ``[-1, 2]``.
name : str, optional
The title of the scalar bar to update
"""
if isinstance(clim, float) or isinstance(clim, int):
clim = [-clim, clim]
if len(clim) != 2:
raise TypeError('clim argument must be a length 2 iterable of values: (min, max).')
if name is None:
if not hasattr(self, 'mapper'):
raise AttributeError('This plotter does not have an active mapper.')
self.mapper.scalar_range = clim
return
# Use the name to find the desired actor
def update_mapper(mapper_helper):
mapper_helper.scalar_range = clim
return
try:
for mh in self._scalar_bar_mappers[name]:
update_mapper(mh)
except KeyError:
raise KeyError(f'Name ({name}) not valid/not found in this plotter.')
return
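# Example (illustrative sketch): tighten the color limits of the scalar bar
# that was created with the title 'Height'.
#
#     import pyvista
#     mesh = pyvista.Sphere()
#     mesh['z'] = mesh.points[:, 2]
#     pl = pyvista.Plotter()
#     pl.add_mesh(mesh, scalars='z', stitle='Height')
#     pl.update_scalar_bar_range([-0.25, 0.25], name='Height')
#     pl.show()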
def clear(self):
"""Clear plot by removing all actors and properties."""
for renderer in self.renderers:
renderer.clear()
self._shadow_renderer.clear()
for renderer in self._background_renderers:
if renderer is not None:
renderer.clear()
self._scalar_bar_slots = set(range(MAX_N_COLOR_BARS))
self._scalar_bar_slot_lookup = {}
self._scalar_bar_ranges = {}
self._scalar_bar_mappers = {}
self._scalar_bar_actors = {}
self._scalar_bar_widgets = {}
self.mesh = None
def link_views(self, views=0):
"""Link the views' cameras.
Parameters
----------
views : int | tuple or list
If ``views`` is int, link the views to the given view
index or if ``views`` is a tuple or a list, link the given
views cameras.
"""
if isinstance(views, (int, np.integer)):
for renderer in self.renderers:
renderer.camera = self.renderers[views].camera
return
views = np.asarray(views)
if np.issubdtype(views.dtype, np.integer):
for view_index in views:
self.renderers[view_index].camera = \
self.renderers[views[0]].camera
else:
raise TypeError('Expected type is int, list or tuple: '
f'{type(views)} is given')
def unlink_views(self, views=None):
"""Unlink the views' cameras.
Parameters
----------
views : None | int | tuple or list
If ``views`` is None unlink all the views, if ``views``
is int unlink the selected view's camera or if ``views``
is a tuple or a list, unlink the given views cameras.
"""
if views is None:
for renderer in self.renderers:
renderer.camera = vtk.vtkCamera()
renderer.reset_camera()
elif isinstance(views, int):
self.renderers[views].camera = vtk.vtkCamera()
self.renderers[views].reset_camera()
elif isinstance(views, collections.abc.Iterable):
for view_index in views:
self.renderers[view_index].camera = vtk.vtkCamera()
self.renderers[view_index].reset_camera()
else:
raise TypeError('Expected type is None, int, list or tuple: '
f'{type(views)} is given')
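# Example (illustrative sketch): share one camera across two subplots, then
# detach them again; assumes the usual ``shape``/``subplot`` plotter API.
#
#     import pyvista
#     pl = pyvista.Plotter(shape=(1, 2))
#     pl.subplot(0, 0)
#     pl.add_mesh(pyvista.Sphere())
#     pl.subplot(0, 1)
#     pl.add_mesh(pyvista.Cube())
#     pl.link_views()    # all renderers follow the camera of view 0
#     pl.unlink_views()  # give each renderer its own camera again
#     pl.show()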
def add_scalar_bar(self, title=None, n_labels=5, italic=False,
bold=False, title_font_size=None,
label_font_size=None, color=None,
font_family=None, shadow=False, mapper=None,
width=None, height=None, position_x=None,
position_y=None, vertical=None,
interactive=None, fmt=None, use_opacity=True,
outline=False, nan_annotation=False,
below_label=None, above_label=None,
background_color=None, n_colors=None, fill=False,
render=True):
"""Create scalar bar using the ranges as set by the last input mesh.
Parameters
----------
title : string, optional
Title of the scalar bar. Default None
n_labels : int, optional
Number of labels to use for the scalar bar.
italic : bool, optional
Italicises title and bar labels. Default False.
bold : bool, optional
Bolds title and bar labels. Default False.
title_font_size : float, optional
Sets the size of the title font. Defaults to None and is sized
automatically.
label_font_size : float, optional
Sets the size of the label font. Defaults to None and is sized
automatically.
color : string or 3 item list, optional, defaults to white
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
font_family : string, optional
Font family. Must be either courier, times, or arial.
shadow : bool, optional
Adds a black shadow to the text. Defaults to False
width : float, optional
The percentage (0 to 1) width of the window for the colorbar
height : float, optional
The percentage (0 to 1) height of the window for the colorbar
position_x : float, optional
The percentage (0 to 1) along the window's horizontal
direction to place the bottom left corner of the colorbar
position_y : float, optional
The percentage (0 to 1) along the window's vertical
direction to place the bottom left corner of the colorbar
interactive : bool, optional
Use a widget to control the size and location of the scalar bar.
use_opacity : bool, optional
Optionally display the opacity mapping on the scalar bar
outline : bool, optional
Optionally outline the scalar bar to make opacity mappings more
obvious.
nan_annotation : bool, optional
Annotate the NaN color
below_label : str, optional
String annotation for values below the scalars range
above_label : str, optional
String annotation for values above the scalars range
background_color : array, optional
The color used for the background in RGB format.
n_colors : int, optional
The maximum number of colors displayed in the scalar bar.
fill : bool
Draw a filled box behind the scalar bar with the
``background_color``
render : bool, optional
Force a render when True. Default ``True``.
Notes
-----
Setting ``title_font_size`` or ``label_font_size`` disables automatic font
sizing for both the title and label.
"""
if interactive is None:
interactive = rcParams['interactive']
if font_family is None:
font_family = rcParams['font']['family']
if label_font_size is None:
label_font_size = rcParams['font']['label_size']
if title_font_size is None:
title_font_size = rcParams['font']['title_size']
if color is None:
color = rcParams['font']['color']
if fmt is None:
fmt = rcParams['font']['fmt']
if vertical is None:
if rcParams['colorbar_orientation'].lower() == 'vertical':
vertical = True
# only render when the plotter has already been shown
if render is None:
render = not self._first_time
# Automatically choose size if not specified
if width is None:
if vertical:
width = rcParams['colorbar_vertical']['width']
else:
width = rcParams['colorbar_horizontal']['width']
if height is None:
if vertical:
height = rcParams['colorbar_vertical']['height']
else:
height = rcParams['colorbar_horizontal']['height']
# check if mapper exists
if mapper is None:
if not hasattr(self, 'mapper') or self.mapper is None:
raise AttributeError('Mapper does not exist. '
'Add a mesh with scalars first.')
mapper = self.mapper
if title:
# Check that this data hasn't already been plotted
if title in list(self._scalar_bar_ranges.keys()):
clim = list(self._scalar_bar_ranges[title])
newrng = mapper.scalar_range
oldmappers = self._scalar_bar_mappers[title]
# get max for range and reset everything
if newrng[0] < clim[0]:
clim[0] = newrng[0]
if newrng[1] > clim[1]:
clim[1] = newrng[1]
for mh in oldmappers:
mh.scalar_range = clim[0], clim[1]
mapper.scalar_range = clim[0], clim[1]
self._scalar_bar_mappers[title].append(mapper)
self._scalar_bar_ranges[title] = clim
# Color bar already present and ready to be used so returning
return
# Automatically choose location if not specified
if position_x is None or position_y is None:
try:
slot = min(self._scalar_bar_slots)
self._scalar_bar_slots.remove(slot)
self._scalar_bar_slot_lookup[title] = slot
except:
raise RuntimeError('Maximum number of color bars reached.')
if position_x is None:
if vertical:
position_x = rcParams['colorbar_vertical']['position_x']
position_x -= slot * (width + 0.2 * width)
else:
position_x = rcParams['colorbar_horizontal']['position_x']
if position_y is None:
if vertical:
position_y = rcParams['colorbar_vertical']['position_y']
else:
position_y = rcParams['colorbar_horizontal']['position_y']
position_y += slot * height
# Adjust to make sure on the screen
if position_x + width > 1:
position_x -= width
if position_y + height > 1:
position_y -= height
# parse color
color = parse_color(color)
# Create scalar bar
self.scalar_bar = vtk.vtkScalarBarActor()
if background_color is not None:
background_color = parse_color(background_color, opacity=1.0)
background_color = np.array(background_color) * 255
self.scalar_bar.GetBackgroundProperty().SetColor(background_color[0:3])
if fill:
self.scalar_bar.DrawBackgroundOn()
lut = vtk.vtkLookupTable()
lut.DeepCopy(mapper.lookup_table)
ctable = vtk_to_numpy(lut.GetTable())
alphas = ctable[:, -1][:, np.newaxis] / 255.
use_table = ctable.copy()
use_table[:, -1] = 255.
ctable = (use_table * alphas) + background_color * (1 - alphas)
lut.SetTable(numpy_to_vtk(ctable, array_type=vtk.VTK_UNSIGNED_CHAR))
else:
lut = mapper.lookup_table
self.scalar_bar.SetLookupTable(lut)
if n_colors is not None:
self.scalar_bar.SetMaximumNumberOfColors(n_colors)
if n_labels < 1:
self.scalar_bar.DrawTickLabelsOff()
else:
self.scalar_bar.DrawTickLabelsOn()
self.scalar_bar.SetNumberOfLabels(n_labels)
if nan_annotation:
self.scalar_bar.DrawNanAnnotationOn()
if above_label:
self.scalar_bar.DrawAboveRangeSwatchOn()
self.scalar_bar.SetAboveRangeAnnotation(above_label)
if below_label:
self.scalar_bar.DrawBelowRangeSwatchOn()
self.scalar_bar.SetBelowRangeAnnotation(below_label)
# edit the size of the colorbar
self.scalar_bar.SetHeight(height)
self.scalar_bar.SetWidth(width)
self.scalar_bar.SetPosition(position_x, position_y)
if fmt is not None:
self.scalar_bar.SetLabelFormat(fmt)
if vertical:
self.scalar_bar.SetOrientationToVertical()
else:
self.scalar_bar.SetOrientationToHorizontal()
if label_font_size is not None or title_font_size is not None:
self.scalar_bar.UnconstrainedFontSizeOn()
self.scalar_bar.AnnotationTextScalingOn()
label_text = self.scalar_bar.GetLabelTextProperty()
anno_text = self.scalar_bar.GetAnnotationTextProperty()
label_text.SetColor(color)
anno_text.SetColor(color)
label_text.SetShadow(shadow)
anno_text.SetShadow(shadow)
# Set font
label_text.SetFontFamily(parse_font_family(font_family))
anno_text.SetFontFamily(parse_font_family(font_family))
label_text.SetItalic(italic)
anno_text.SetItalic(italic)
label_text.SetBold(bold)
anno_text.SetBold(bold)
if label_font_size:
label_text.SetFontSize(label_font_size)
anno_text.SetFontSize(label_font_size)
# Set properties
if title:
clim = mapper.scalar_range
self._scalar_bar_ranges[title] = clim
self._scalar_bar_mappers[title] = [mapper]
self.scalar_bar.SetTitle(title)
title_text = self.scalar_bar.GetTitleTextProperty()
title_text.SetJustificationToCentered()
title_text.SetItalic(italic)
title_text.SetBold(bold)
title_text.SetShadow(shadow)
if title_font_size:
title_text.SetFontSize(title_font_size)
# Set font
title_text.SetFontFamily(parse_font_family(font_family))
# set color
title_text.SetColor(color)
self._scalar_bar_actors[title] = self.scalar_bar
if interactive is None:
interactive = rcParams['interactive']
if self.shape != (1, 1):
interactive = False
elif interactive and self.shape != (1, 1):
raise ValueError('Interactive scalar bars disabled for multi-renderer plots')
if interactive:
self.scalar_widget = vtk.vtkScalarBarWidget()
self.scalar_widget.SetScalarBarActor(self.scalar_bar)
self.scalar_widget.SetInteractor(self.iren)
self.scalar_widget.SetEnabled(1)
rep = self.scalar_widget.GetRepresentation()
# self.scalar_widget.On()
if vertical is True or vertical is None:
rep.SetOrientation(1) # 0 = Horizontal, 1 = Vertical
else:
rep.SetOrientation(0) # 0 = Horizontal, 1 = Vertical
self._scalar_bar_widgets[title] = self.scalar_widget
if use_opacity:
self.scalar_bar.SetUseOpacity(True)
if outline:
self.scalar_bar.SetDrawFrame(True)
frame_prop = self.scalar_bar.GetFrameProperty()
frame_prop.SetColor(color)
else:
self.scalar_bar.SetDrawFrame(False)
self.add_actor(self.scalar_bar, reset_camera=False, pickable=False,
render=render)
return self.scalar_bar # return the actor
def update_scalars(self, scalars, mesh=None, render=True):
"""Update scalars of an object in the plotter.
Parameters
----------
scalars : np.ndarray
Scalars to replace existing scalars.
mesh : vtk.PolyData or vtk.UnstructuredGrid, optional
Object that has already been added to the Plotter. If
None, uses last added mesh.
render : bool, optional
Force a render when True. Default ``True``.
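Examples
--------
A minimal sketch of typical usage; the sphere mesh and the random array
below are illustrative only:
>>> import numpy as np
>>> import pyvista
>>> sphere = pyvista.Sphere()
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(sphere, scalars=np.arange(sphere.n_points))
>>> # replace the active scalars with new (illustrative) values
>>> plotter.update_scalars(np.random.random(sphere.n_points))  # doctest:+SKIP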
"""
if mesh is None:
mesh = self.mesh
if isinstance(mesh, (collections.abc.Iterable, pyvista.MultiBlock)):
# Recursive if need to update scalars on many meshes
for m in mesh:
self.update_scalars(scalars, mesh=m, render=False)
if render:
self.render()
return
if isinstance(scalars, str):
# Grab scalars array if name given
scalars = get_array(mesh, scalars)
if scalars is None:
if render:
self.render()
return
if scalars.shape[0] == mesh.GetNumberOfPoints():
data = mesh.GetPointData()
elif scalars.shape[0] == mesh.GetNumberOfCells():
data = mesh.GetCellData()
else:
raise_not_matching(scalars, mesh)
vtk_scalars = data.GetScalars()
if vtk_scalars is None:
raise ValueError('No active scalars')
s = convert_array(vtk_scalars)
s[:] = scalars
data.Modified()
try:
# Why are the points updated here? Not all datasets have points
# and only the scalars array is modified by this function...
mesh.GetPoints().Modified()
except:
pass
if render:
self.render()
def update_coordinates(self, points, mesh=None, render=True):
"""Update the points of an object in the plotter.
Parameters
----------
points : np.ndarray
Points to replace existing points.
mesh : vtk.PolyData or vtk.UnstructuredGrid, optional
Object that has already been added to the Plotter. If
None, uses last added mesh.
render : bool, optional
Force a render when True. Default ``True``.
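Examples
--------
A minimal sketch; shifting the sphere points is purely illustrative:
>>> import pyvista
>>> sphere = pyvista.Sphere()
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(sphere)
>>> # move every point by an arbitrary offset
>>> plotter.update_coordinates(sphere.points + 0.1)  # doctest:+SKIP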
"""
if mesh is None:
mesh = self.mesh
mesh.points = points
# only render when the plotter has already been shown
if render is None:
render = not self._first_time
if render:
self.render()
def _clear_ren_win(self):
"""Clear the render window."""
if hasattr(self, 'ren_win'):
self.ren_win.Finalize()
del self.ren_win
def close(self, render=False):
"""Close the render window."""
# optionally run just prior to exiting the plotter
if self._before_close_callback is not None:
self._before_close_callback(self)
self._before_close_callback = None
# must close out widgets first
super().close()
# Renderer has an axes widget, so close it
for renderer in self.renderers:
renderer.close()
self._shadow_renderer.close()
# Turn off the lights
for renderer in self.renderers:
renderer.RemoveAllLights()
self.lighting = None
# Clear the scalar bar
self.scalar_bar = None
# Grab screenshots of last render
if self._store_image:
self.last_image = self.screenshot(None, return_img=True)
self.last_image_depth = self.get_image_depth()
if hasattr(self, 'scalar_widget'):
del self.scalar_widget
# reset scalar bar stuff
self.clear()
self._clear_ren_win()
self._style_class = None
if hasattr(self, '_observers'):
for obs in self._observers.values():
self.iren.RemoveObservers(obs)
del self._observers
if self.iren is not None:
self.iren.TerminateApp()
self.iren = None
if hasattr(self, 'textActor'):
del self.textActor
# end movie
if hasattr(self, 'mwriter'):
try:
self.mwriter.close()
except BaseException:
pass
# this helps manage closed plotters
self._closed = True
def deep_clean(self):
"""Clean the plotter of the memory."""
for renderer in self.renderers:
renderer.deep_clean()
self._shadow_renderer.deep_clean()
for renderer in self._background_renderers:
if renderer is not None:
renderer.deep_clean()
# Do not remove the renderers on the clean
if getattr(self, 'mesh', None) is not None:
self.mesh.point_arrays = None
self.mesh.cell_arrays = None
self.mesh = None
if getattr(self, 'mapper', None) is not None:
self.mapper.lookup_table = None
self.mapper = None
self.volume = None
self.textactor = None
def add_text(self, text, position='upper_left', font_size=18, color=None,
font=None, shadow=False, name=None, viewport=False):
"""Add text to plot object in the top left corner by default.
Parameters
----------
text : str
The text to add to the rendering.
position : str, tuple(float)
Position to place the bottom left corner of the text box.
If tuple is used, the position of the text uses the pixel
coordinate system (default). In this case,
it returns a more general `vtkOpenGLTextActor`.
If string name is used, it returns a `vtkCornerAnnotation`
object normally used for fixed labels (like title or xlabel).
Default is to find the top left corner of the rendering window
and place text box up there. Available position: ``'lower_left'``,
``'lower_right'``, ``'upper_left'``, ``'upper_right'``,
``'lower_edge'``, ``'upper_edge'``, ``'right_edge'``, and
``'left_edge'``
font : string, optional
Font name may be courier, times, or arial
shadow : bool, optional
Adds a black shadow to the text. Defaults to False
name : str, optional
The name for the added actor so that it can be easily updated.
If an actor of this name already exists in the rendering window, it
will be replaced by the new actor.
viewport: bool
If True and position is a tuple of float, uses
the normalized viewport coordinate system (values between 0.0
and 1.0 and support for HiDPI).
Return
------
textActor : vtk.vtkTextActor
Text actor added to plot
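Examples
--------
A minimal sketch; the text string and keyword values are illustrative only:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> actor = plotter.add_text('Sample annotation', position='upper_edge', font_size=18)
>>> plotter.show()  # doctest:+SKIP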
"""
if font is None:
font = rcParams['font']['family']
if font_size is None:
font_size = rcParams['font']['size']
if color is None:
color = rcParams['font']['color']
if position is None:
# Set the position of the text to the top left corner
window_size = self.window_size
x = (window_size[0] * 0.02) / self.shape[0]
y = (window_size[1] * 0.85) / self.shape[0]
position = [x, y]
corner_mappings = {
'lower_left': vtk.vtkCornerAnnotation.LowerLeft,
'lower_right': vtk.vtkCornerAnnotation.LowerRight,
'upper_left': vtk.vtkCornerAnnotation.UpperLeft,
'upper_right': vtk.vtkCornerAnnotation.UpperRight,
'lower_edge': vtk.vtkCornerAnnotation.LowerEdge,
'upper_edge': vtk.vtkCornerAnnotation.UpperEdge,
'left_edge': vtk.vtkCornerAnnotation.LeftEdge,
'right_edge': vtk.vtkCornerAnnotation.RightEdge,
}
corner_mappings['ll'] = corner_mappings['lower_left']
corner_mappings['lr'] = corner_mappings['lower_right']
corner_mappings['ul'] = corner_mappings['upper_left']
corner_mappings['ur'] = corner_mappings['upper_right']
corner_mappings['top'] = corner_mappings['upper_edge']
corner_mappings['bottom'] = corner_mappings['lower_edge']
corner_mappings['right'] = corner_mappings['right_edge']
corner_mappings['r'] = corner_mappings['right_edge']
corner_mappings['left'] = corner_mappings['left_edge']
corner_mappings['l'] = corner_mappings['left_edge']
if isinstance(position, (int, str, bool)):
if isinstance(position, str):
position = corner_mappings[position]
elif position is True:
position = corner_mappings['upper_left']
self.textActor = vtk.vtkCornerAnnotation()
# This is how you set the font size with this actor
self.textActor.SetLinearFontScaleFactor(font_size // 2)
self.textActor.SetText(position, text)
else:
self.textActor = vtk.vtkTextActor()
self.textActor.SetInput(text)
self.textActor.SetPosition(position)
if viewport:
self.textActor.GetActualPositionCoordinate().SetCoordinateSystemToNormalizedViewport()
self.textActor.GetActualPosition2Coordinate().SetCoordinateSystemToNormalizedViewport()
self.textActor.GetTextProperty().SetFontSize(int(font_size * 2))
self.textActor.GetTextProperty().SetColor(parse_color(color))
self.textActor.GetTextProperty().SetFontFamily(FONT_KEYS[font])
self.textActor.GetTextProperty().SetShadow(shadow)
self.add_actor(self.textActor, reset_camera=False, name=name, pickable=False)
return self.textActor
def open_movie(self, filename, framerate=24):
"""Establish a connection to the ffmpeg writer.
Parameters
----------
filename : str
Filename of the movie to open. Filename should end in mp4,
but other filetypes may be supported. See "imageio.get_writer".
framerate : int, optional
Frames per second.
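Examples
--------
A minimal sketch; ``movie.mp4`` is a hypothetical output path, and writing
mp4 typically requires the ``imageio-ffmpeg`` plugin:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> plotter.open_movie('movie.mp4', framerate=24)  # doctest:+SKIP
>>> plotter.show(auto_close=False)  # doctest:+SKIP
>>> plotter.write_frame()  # doctest:+SKIP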
"""
if isinstance(pyvista.FIGURE_PATH, str) and not os.path.isabs(filename):
filename = os.path.join(pyvista.FIGURE_PATH, filename)
self.mwriter = imageio.get_writer(filename, fps=framerate)
def open_gif(self, filename):
"""Open a gif file.
Parameters
----------
filename : str
Filename of the gif to open. Filename must end in gif.
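Examples
--------
A minimal sketch; ``orbit.gif`` is a hypothetical filename:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> plotter.open_gif('orbit.gif')  # doctest:+SKIP
>>> plotter.write_frame()  # doctest:+SKIP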
"""
if filename[-3:] != 'gif':
raise ValueError('Unsupported filetype. Must end in .gif')
if isinstance(pyvista.FIGURE_PATH, str) and not os.path.isabs(filename):
filename = os.path.join(pyvista.FIGURE_PATH, filename)
self._gif_filename = os.path.abspath(filename)
self.mwriter = imageio.get_writer(filename, mode='I')
def write_frame(self):
"""Write a single frame to the movie file."""
if not hasattr(self, 'mwriter'):
raise RuntimeError('This plotter has not opened a movie or GIF file.')
self.mwriter.append_data(self.image)
def _run_image_filter(self, ifilter):
# Update filter and grab pixels
ifilter.Modified()
ifilter.Update()
image = pyvista.wrap(ifilter.GetOutput())
img_size = image.dimensions
img_array = pyvista.utilities.point_array(image, 'ImageScalars')
# Reshape and write
tgt_size = (img_size[1], img_size[0], -1)
return img_array.reshape(tgt_size)[::-1]
def get_image_depth(self,
fill_value=np.nan,
reset_camera_clipping_range=True):
"""Return a depth image representing current render window.
Parameters
----------
fill_value : float
Fill value for points in image that don't include objects in scene.
To not use a fill value, pass ``None``.
reset_camera_clipping_range : bool
Reset the camera clipping range to include data in view?
Return
------
image_depth : numpy.ndarray
Image of depth values from camera orthogonal to image plane
Notes
-----
Values in image_depth are negative to adhere to a
right-handed coordinate system.
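Examples
--------
A minimal sketch; depth values are only meaningful after a render:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> plotter.show(auto_close=False)  # doctest:+SKIP
>>> zval = plotter.get_image_depth()  # doctest:+SKIP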
"""
if not hasattr(self, 'ren_win') and hasattr(self, 'last_image_depth'):
zval = self.last_image_depth.copy()
if fill_value is not None:
zval[self._image_depth_null] = fill_value
return zval
# Ensure points in view are within clipping range of renderer?
if reset_camera_clipping_range:
self.renderer.ResetCameraClippingRange()
# Get the z-buffer image
ifilter = vtk.vtkWindowToImageFilter()
ifilter.SetInput(self.ren_win)
ifilter.ReadFrontBufferOff()
ifilter.SetInputBufferTypeToZBuffer()
zbuff = self._run_image_filter(ifilter)[:, :, 0]
# Convert z-buffer values to depth from camera
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
near, far = self.camera.GetClippingRange()
if self.camera.GetParallelProjection():
zval = (zbuff - near) / (far - near)
else:
zval = 2 * near * far / ((zbuff - 0.5) * 2 * (far - near) - near - far)
# Consider image values outside clipping range as nans
args = np.logical_or(zval < -far, np.isclose(zval, -far))
self._image_depth_null = args
if fill_value is not None:
zval[args] = fill_value
return zval
def add_lines(self, lines, color=(1, 1, 1), width=5, label=None, name=None):
"""Add lines to the plotting object.
Parameters
----------
lines : np.ndarray or pyvista.PolyData
Points representing line segments. For example, two line segments
would be represented as:
np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0], [1, 1, 0]])
color : string or 3 item list, optional, defaults to white
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
width : float, optional
Thickness of lines
name : str, optional
The name for the added actor so that it can be easily updated.
If an actor of this name already exists in the rendering window, it
will be replaced by the new actor.
Return
------
actor : vtk.vtkActor
Lines actor.
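Examples
--------
A minimal sketch; the two line segments below are arbitrary sample data:
>>> import numpy as np
>>> import pyvista
>>> segments = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0], [1, 1, 0]], dtype=float)
>>> plotter = pyvista.Plotter(off_screen=True)
>>> actor = plotter.add_lines(segments, color='y', width=3)
>>> plotter.show()  # doctest:+SKIP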
"""
if not isinstance(lines, np.ndarray):
raise TypeError('Input should be an array of point segments')
lines = pyvista.lines_from_points(lines)
# Create mapper and add lines
mapper = vtk.vtkDataSetMapper()
mapper.SetInputData(lines)
rgb_color = parse_color(color)
# legend label
if label:
if not isinstance(label, str):
raise TypeError('Label must be a string')
self._labels.append([lines, label, rgb_color])
# Create actor
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetLineWidth(width)
actor.GetProperty().EdgeVisibilityOn()
actor.GetProperty().SetEdgeColor(rgb_color)
actor.GetProperty().SetColor(rgb_color)
actor.GetProperty().LightingOff()
# Add to renderer
self.add_actor(actor, reset_camera=False, name=name, pickable=False)
return actor
def remove_scalar_bar(self):
"""Remove the scalar bar."""
if hasattr(self, 'scalar_bar'):
self.remove_actor(self.scalar_bar, reset_camera=False)
def add_point_labels(self, points, labels, italic=False, bold=True,
font_size=None, text_color=None,
font_family=None, shadow=False,
show_points=True, point_color=None, point_size=5,
name=None, shape_color='grey', shape='rounded_rect',
fill_shape=True, margin=3, shape_opacity=1.0,
pickable=False, render_points_as_spheres=False,
tolerance=0.001, reset_camera=None, always_visible=False):
"""Create a point actor with one label from list labels assigned to each point.
Parameters
----------
points : np.ndarray or pyvista.Common
n x 3 numpy array of points or pyvista dataset with points
labels : list or str
List of labels. Must be the same length as points. If a string name
is given with a pyvista.Common input for points, then these are fetched.
italic : bool, optional
Italicises title and bar labels. Default False.
bold : bool, optional
Bolds title and bar labels. Default True
font_size : float, optional
Sets the size of the title font. Defaults to 16.
text_color : string or 3 item list, optional
Color of text. Either a string, rgb list, or hex color string.
text_color='white'
text_color='w'
text_color=[1, 1, 1]
text_color='#FFFFFF'
font_family : string, optional
Font family. Must be either courier, times, or arial.
shadow : bool, optional
Adds a black shadow to the text. Defaults to False
show_points : bool, optional
Controls if points are visible. Default True
point_color : string or 3 item list, optional
Color of points (if visible). Either a string, rgb list, or hex
color string. For example:
point_color='white'
point_color='w'
point_color=[1, 1, 1]
point_color='#FFFFFF'
point_size : float, optional
Size of points (if visible)
name : str, optional
The name for the added actor so that it can be easily updated.
If an actor of this name already exists in the rendering window, it
will be replaced by the new actor.
shape_color : string or 3 item list, optional
Color of the label background shape. Either a string, rgb list, or
hex color string.
shape : str, optional
The string name of the shape to use. Options are ``'rect'`` or
``'rounded_rect'``. If you want no shape, pass ``None``
fill_shape : bool, optional
Fill the shape with the ``shape_color``. Outlines if ``False``.
margin : int, optional
The size of the margin on the label background shape. Default is 3.
shape_opacity : float
The opacity of the shape between zero and one.
tolerance : float
a tolerance to use to determine whether a point label is visible.
A tolerance is usually required because the conversion from world
space to display space during rendering introduces numerical
round-off.
reset_camera : bool, optional
Reset the camera after adding the points to the scene.
always_visible : bool, optional
Skip adding the visibility filter. Default False.
Return
------
labelActor : vtk.vtkActor2D
VTK label actor. Can be used to change properties of the labels.
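Examples
--------
A minimal sketch; the points and label strings are arbitrary sample data:
>>> import numpy as np
>>> import pyvista
>>> points = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])
>>> labels = ['Point A', 'Point B', 'Point C']
>>> plotter = pyvista.Plotter(off_screen=True)
>>> actor = plotter.add_point_labels(points, labels, font_size=24, point_size=10)
>>> plotter.show()  # doctest:+SKIP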
"""
if font_family is None:
font_family = rcParams['font']['family']
if font_size is None:
font_size = rcParams['font']['size']
if point_color is None:
point_color = rcParams['color']
if text_color is None:
text_color = rcParams['font']['color']
if isinstance(points, (list, tuple)):
points = np.array(points)
if isinstance(points, np.ndarray):
vtkpoints = pyvista.PolyData(points) # Cast to poly data
elif is_pyvista_dataset(points):
vtkpoints = pyvista.PolyData(points.points)
if isinstance(labels, str):
labels = points.point_arrays[labels].astype(str)
else:
raise TypeError(f'Points type not usable: {type(points)}')
if len(vtkpoints.points) != len(labels):
raise ValueError('There must be one label for each point')
if name is None:
name = f'{type(vtkpoints).__name__}({vtkpoints.memory_address})'
vtklabels = vtk.vtkStringArray()
vtklabels.SetName('labels')
for item in labels:
vtklabels.InsertNextValue(str(item))
vtkpoints.GetPointData().AddArray(vtklabels)
# Create hierarchy
hier = vtk.vtkPointSetToLabelHierarchy()
hier.SetLabelArrayName('labels')
if always_visible:
hier.SetInputData(vtkpoints)
else:
# Only show visible points
vis_points = vtk.vtkSelectVisiblePoints()
vis_points.SetInputData(vtkpoints)
vis_points.SetRenderer(self.renderer)
vis_points.SetTolerance(tolerance)
hier.SetInputConnection(vis_points.GetOutputPort())
# create label mapper
labelMapper = vtk.vtkLabelPlacementMapper()
labelMapper.SetInputConnection(hier.GetOutputPort())
if not isinstance(shape, str):
labelMapper.SetShapeToNone()
elif shape.lower() in 'rect':
labelMapper.SetShapeToRect()
elif shape.lower() in 'rounded_rect':
labelMapper.SetShapeToRoundedRect()
else:
raise ValueError(f'Shape ({shape}) not understood')
if fill_shape:
labelMapper.SetStyleToFilled()
else:
labelMapper.SetStyleToOutline()
labelMapper.SetBackgroundColor(parse_color(shape_color))
labelMapper.SetBackgroundOpacity(shape_opacity)
labelMapper.SetMargin(margin)
textprop = hier.GetTextProperty()
textprop.SetItalic(italic)
textprop.SetBold(bold)
textprop.SetFontSize(font_size)
textprop.SetFontFamily(parse_font_family(font_family))
textprop.SetColor(parse_color(text_color))
textprop.SetShadow(shadow)
self.remove_actor(f'{name}-points', reset_camera=False)
self.remove_actor(f'{name}-labels', reset_camera=False)
# add points
if show_points:
style = 'points'
else:
style = 'surface'
self.add_mesh(vtkpoints, style=style, color=point_color,
point_size=point_size, name=f'{name}-points',
pickable=pickable,
render_points_as_spheres=render_points_as_spheres,
reset_camera=reset_camera)
labelActor = vtk.vtkActor2D()
labelActor.SetMapper(labelMapper)
self.add_actor(labelActor, reset_camera=False,
name=f'{name}-labels', pickable=False)
return labelActor
def add_point_scalar_labels(self, points, labels, fmt=None, preamble='', **kwargs):
"""Label the points from a dataset with the values of their scalars.
Wrapper for :func:`pyvista.BasePlotter.add_point_labels`.
Parameters
----------
points : np.ndarray or pyvista.Common
n x 3 numpy array of points or pyvista dataset with points
labels : str
String name of the point data array to use.
fmt : str
String formatter used to format numerical data
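Examples
--------
A minimal sketch; ``'values'`` is a hypothetical point array added to the mesh:
>>> import numpy as np
>>> import pyvista
>>> mesh = pyvista.Sphere(theta_resolution=8, phi_resolution=8)
>>> mesh.point_arrays['values'] = np.arange(mesh.n_points)
>>> plotter = pyvista.Plotter(off_screen=True)
>>> actor = plotter.add_point_scalar_labels(mesh, 'values', fmt='%.1f')
>>> plotter.show()  # doctest:+SKIP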
"""
if not is_pyvista_dataset(points):
raise TypeError(f'input points must be a pyvista dataset, not: {type(points)}')
if not isinstance(labels, str):
raise TypeError('labels must be a string name of the scalars array to use')
if fmt is None:
fmt = rcParams['font']['fmt']
if fmt is None:
fmt = '%.6e'
scalars = points.point_arrays[labels]
# use the requested (or default) format string for each label
phrase = f'{preamble} {fmt}'
labels = [phrase % val for val in scalars]
return self.add_point_labels(points, labels, **kwargs)
def add_points(self, points, **kwargs):
"""Add points to a mesh."""
kwargs['style'] = 'points'
return self.add_mesh(points, **kwargs)
def add_arrows(self, cent, direction, mag=1, **kwargs):
"""Add arrows to the plotter.
Parameters
----------
cent : np.ndarray
Array of centers.
direction : np.ndarray
Array of direction vectors.
mag : float, optional
Amount to scale the direction vectors.
Examples
--------
Plot a random field of vectors and save a screenshot of it.
>>> import numpy as np
>>> import pyvista
>>> cent = np.random.random((10, 3))
>>> direction = np.random.random((10, 3))
>>> plotter = pyvista.Plotter()
>>> _ = plotter.add_arrows(cent, direction, mag=2)
>>> plotter.show() # doctest:+SKIP
"""
if cent.shape != direction.shape: # pragma: no cover
raise ValueError('center and direction arrays must have the same shape')
direction = direction.copy()
if cent.ndim != 2:
cent = cent.reshape((-1, 3))
if direction.ndim != 2:
direction = direction.reshape((-1, 3))
if mag != 1:
direction = direction*mag
pdata = pyvista.vector_poly_data(cent, direction)
# Create arrow object
arrow = vtk.vtkArrowSource()
arrow.Update()
glyph3D = vtk.vtkGlyph3D()
glyph3D.SetSourceData(arrow.GetOutput())
glyph3D.SetInputData(pdata)
glyph3D.SetVectorModeToUseVector()
glyph3D.Update()
arrows = wrap(glyph3D.GetOutput())
return self.add_mesh(arrows, **kwargs)
@staticmethod
def _save_image(image, filename, return_img=None):
"""Save a NumPy image array.
This is an internal helper.
"""
if not image.size:
raise ValueError('Empty image. Have you run plot() first?')
# write screenshot to file
if isinstance(filename, (str, pathlib.Path)):
from PIL import Image
filename = pathlib.Path(filename)
if isinstance(pyvista.FIGURE_PATH, str) and not filename.is_absolute():
filename = pathlib.Path(os.path.join(pyvista.FIGURE_PATH, filename))
if not filename.suffix:
filename = filename.with_suffix('.png')
elif filename.suffix not in SUPPORTED_FORMATS:
raise ValueError(f'Unsupported extension {filename.suffix}\n' +
f'Must be one of the following: {SUPPORTED_FORMATS}')
image_path = os.path.abspath(os.path.expanduser(str(filename)))
Image.fromarray(image).save(image_path)
# the image array is returned regardless of ``return_img``
return image
def save_graphic(self, filename, title='PyVista Export', raster=True, painter=True):
"""Save a screenshot of the rendering window as a graphic file.
The supported formats are: '.svg', '.eps', '.ps', '.pdf', '.tex'
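Examples
--------
A minimal sketch; ``figure.svg`` is a hypothetical output name, and the
render window must still be open when this is called:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> plotter.show(auto_close=False)  # doctest:+SKIP
>>> plotter.save_graphic('figure.svg')  # doctest:+SKIP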
"""
if not hasattr(self, 'ren_win'):
raise AttributeError('This plotter is closed and unable to save a screenshot.')
if isinstance(pyvista.FIGURE_PATH, str) and not os.path.isabs(filename):
filename = os.path.join(pyvista.FIGURE_PATH, filename)
filename = os.path.abspath(os.path.expanduser(filename))
extension = pyvista.fileio.get_ext(filename)
valid = ['.svg', '.eps', '.ps', '.pdf', '.tex']
if extension not in valid:
raise ValueError(f"Extension ({extension}) is an invalid choice. Valid options include: {', '.join(valid)}")
writer = vtk.vtkGL2PSExporter()
modes = {
'.svg': writer.SetFileFormatToSVG,
'.eps': writer.SetFileFormatToEPS,
'.ps': writer.SetFileFormatToPS,
'.pdf': writer.SetFileFormatToPDF,
'.tex': writer.SetFileFormatToTeX,
}
writer.CompressOff()
writer.SetFilePrefix(filename.replace(extension, ''))
writer.SetInput(self.ren_win)
modes[extension]()
writer.SetTitle(title)
writer.SetWrite3DPropsAsRasterImage(raster)
if painter:
writer.UsePainterSettings()
writer.Update()
return
def screenshot(self, filename=None, transparent_background=None,
return_img=None, window_size=None):
"""Take screenshot at current camera position.
Parameters
----------
filename : str, optional
Location to write image to. If None, no image is written.
transparent_background : bool, optional
Makes the background transparent. Default False.
return_img : bool, optional
If a string filename is given and this is true, a NumPy array of
the image will be returned.
Return
------
img : numpy.ndarray
Array containing pixel RGB and alpha. Sized:
[Window height x Window width x 3] for transparent_background=False
[Window height x Window width x 4] for transparent_background=True
Examples
--------
>>> import pyvista
>>> sphere = pyvista.Sphere()
>>> plotter = pyvista.Plotter(off_screen=True)
>>> actor = plotter.add_mesh(sphere)
>>> plotter.screenshot('screenshot.png') # doctest:+SKIP
"""
if window_size is not None:
self.window_size = window_size
# configure image filter
if transparent_background is None:
transparent_background = rcParams['transparent_background']
self.image_transparent_background = transparent_background
# This if statement allows you to save screenshots of closed plotters
# This is needed for the sphinx-gallery work
if not hasattr(self, 'ren_win'):
# If plotter has been closed...
# check if last_image exists
if hasattr(self, 'last_image'):
# Save last image
return self._save_image(self.last_image, filename, return_img)
# Plotter hasn't been rendered or was improperly closed
raise AttributeError('This plotter is closed and unable to save a screenshot.')
if self._first_time and not self.off_screen:
raise RuntimeError("Nothing to screenshot - call .show first or "
"use the off_screen argument")
# if off screen, show has not been called and we must render
# before extracting an image
if self._first_time:
self._on_first_render_request()
self.render()
return self._save_image(self.image, filename, return_img)
def add_legend(self, labels=None, bcolor=(0.5, 0.5, 0.5), border=False,
size=None, name=None):
"""Add a legend to render window.
Entries must be a list containing one string and color entry for each
item.
Parameters
----------
labels : list, optional
When set to None, uses existing labels as specified by
- add_mesh
- add_lines
- add_points
List containing one entry for each item to be added to the
legend. Each entry must contain two strings, [label,
color], where label is the name of the item to add, and
color is the color of the label to add.
bcolor : list or string, optional
Background color, either a three item 0 to 1 RGB color
list, or a matplotlib color string (e.g. 'w' or 'white'
for a white color). If None, legend background is
disabled.
border : bool, optional
Controls if there will be a border around the legend.
Default False.
size : list, optional
Two float list, each float between 0 and 1. For example
[0.1, 0.1] would make the legend 10% the size of the
entire figure window.
name : str, optional
The name for the added actor so that it can be easily updated.
If an actor of this name already exists in the rendering window, it
will be replaced by the new actor.
Return
------
legend : vtk.vtkLegendBoxActor
Actor for the legend.
Examples
--------
>>> import pyvista
>>> from pyvista import examples
>>> mesh = examples.load_hexbeam()
>>> othermesh = examples.load_uniform()
>>> plotter = pyvista.Plotter()
>>> _ = plotter.add_mesh(mesh, label='My Mesh')
>>> _ = plotter.add_mesh(othermesh, 'k', label='My Other Mesh')
>>> _ = plotter.add_legend()
>>> plotter.show() # doctest:+SKIP
Alternative manual example
>>> import pyvista
>>> from pyvista import examples
>>> mesh = examples.load_hexbeam()
>>> othermesh = examples.load_uniform()
>>> legend_entries = []
>>> legend_entries.append(['My Mesh', 'w'])
>>> legend_entries.append(['My Other Mesh', 'k'])
>>> plotter = pyvista.Plotter()
>>> _ = plotter.add_mesh(mesh)
>>> _ = plotter.add_mesh(othermesh, 'k')
>>> _ = plotter.add_legend(legend_entries)
>>> plotter.show() # doctest:+SKIP
"""
self.legend = vtk.vtkLegendBoxActor()
if labels is None:
# use existing labels
if not self._labels:
raise ValueError('No labels input.\n\n'
'Add labels to individual items when adding them to '
'the plotting object with the "label=" parameter, '
'or enter them as the "labels" parameter.')
self.legend.SetNumberOfEntries(len(self._labels))
for i, (vtk_object, text, color) in enumerate(self._labels):
self.legend.SetEntry(i, vtk_object, text, parse_color(color))
else:
self.legend.SetNumberOfEntries(len(labels))
legendface = pyvista.single_triangle()
for i, (text, color) in enumerate(labels):
self.legend.SetEntry(i, legendface, text, parse_color(color))
if size:
self.legend.SetPosition2(size[0], size[1])
if bcolor is None:
self.legend.UseBackgroundOff()
else:
self.legend.UseBackgroundOn()
self.legend.SetBackgroundColor(bcolor)
if border:
self.legend.BorderOn()
else:
self.legend.BorderOff()
# Add to renderer
self.add_actor(self.legend, reset_camera=False, name=name, pickable=False)
return self.legend
def set_background(self, color, top=None, all_renderers=True):
"""Set the background color.
Parameters
----------
color : string or 3 item list, optional, defaults to white
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
top : string or 3 item list, optional, defaults to None
If given, this will enable a gradient background where the
``color`` argument is at the bottom and the color given in ``top``
will be the color at the top of the renderer.
all_renderers : bool
If True, applies to all renderers in subplots. If False, then
only applies to the active renderer.
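Examples
--------
A minimal sketch setting a gradient background; the colors are illustrative:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> plotter.set_background('black', top='white')
>>> plotter.show()  # doctest:+SKIP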
"""
if all_renderers:
for renderer in self.renderers:
renderer.set_background(color, top=top)
self._shadow_renderer.set_background(color)
else:
self.renderer.set_background(color, top=top)
def remove_legend(self):
"""Remove the legend actor."""
if hasattr(self, 'legend'):
self.remove_actor(self.legend, reset_camera=False)
self.render()
def generate_orbital_path(self, factor=3., n_points=20, viewup=None, shift=0.0):
"""Generate an orbital path around the data scene.
Parameters
----------
factor : float
A scaling factor when building the orbital extent
n_points : int
number of points on the orbital path
viewup : list(float)
the normal to the orbital plane
shift : float, optional
shift the plane up/down from the center of the scene by this amount
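Examples
--------
A minimal sketch; the point count and shift below are arbitrary:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> path = plotter.generate_orbital_path(n_points=36, shift=0.2)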
"""
if viewup is None:
viewup = rcParams['camera']['viewup']
center = np.array(self.center)
bnds = np.array(self.bounds)
radius = (bnds[1] - bnds[0]) * factor
y = (bnds[3] - bnds[2]) * factor
if y > radius:
radius = y
center += np.array(viewup) * shift
return pyvista.Polygon(center=center, radius=radius, normal=viewup, n_sides=n_points)
def fly_to(self, point):
"""Move the current camera's focal point to a position point.
The movement is animated over the number of frames specified in
NumberOfFlyFrames. The LOD desired frame rate is used.
"""
return self.iren.FlyTo(self.renderer, *point)
def orbit_on_path(self, path=None, focus=None, step=0.5, viewup=None,
write_frames=False, threaded=False):
"""Orbit on the given path focusing on the focus point.
Parameters
----------
path : pyvista.PolyData
Path of orbital points. The order in the points is the order of
travel
focus : list(float) of length 3, optional
The point on which to focus the camera.
step : float, optional
The timestep between flying to each camera position
viewup : list(float)
the normal to the orbital plane
write_frames : bool
Assume a file is open and write a frame on each camera view during
the orbit.
threaded : bool, optional
Run this as a background thread. Generally used within a
GUI (i.e. PyQt).
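Examples
--------
A minimal sketch; the generated path and step size are illustrative only:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> path = plotter.generate_orbital_path(n_points=36)
>>> plotter.orbit_on_path(path, step=0.01)  # doctest:+SKIP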
"""
if focus is None:
focus = self.center
if viewup is None:
viewup = rcParams['camera']['viewup']
if path is None:
path = self.generate_orbital_path(viewup=viewup)
if not is_pyvista_dataset(path):
path = pyvista.PolyData(path)
points = path.points
# Make sure the whole scene is visible
self.camera.SetThickness(path.length)
def orbit():
"""Define the internal thread for running the orbit."""
for point in points:
self.set_position(point)
self.set_focus(focus)
self.set_viewup(viewup)
self.renderer.ResetCameraClippingRange()
self.render()
time.sleep(step)
if write_frames:
self.write_frame()
if threaded:
thread = Thread(target=orbit)
thread.start()
else:
orbit()
return
def export_vtkjs(self, filename, compress_arrays=False):
"""Export the current rendering scene as a VTKjs scene.
It can be used for rendering in a web browser.
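Examples
--------
A minimal sketch; ``scene`` is a hypothetical output path prefix:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> plotter.export_vtkjs('scene')  # doctest:+SKIP
>>> plotter.show()  # doctest:+SKIP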
"""
if not hasattr(self, 'ren_win'):
raise RuntimeError('Export must be called before showing/closing the scene.')
if isinstance(pyvista.FIGURE_PATH, str) and not os.path.isabs(filename):
filename = os.path.join(pyvista.FIGURE_PATH, filename)
else:
filename = os.path.abspath(os.path.expanduser(filename))
return export_plotter_vtkjs(self, filename, compress_arrays=compress_arrays)
def export_obj(self, filename):
"""Export scene to OBJ format."""
if not hasattr(self, "ren_win"):
raise RuntimeError("This plotter must still have a render window open.")
if isinstance(pyvista.FIGURE_PATH, str) and not os.path.isabs(filename):
filename = os.path.join(pyvista.FIGURE_PATH, filename)
else:
filename = os.path.abspath(os.path.expanduser(filename))
exporter = vtk.vtkOBJExporter()
exporter.SetFilePrefix(filename)
exporter.SetRenderWindow(self.ren_win)
return exporter.Write()
def __del__(self):
"""Delete the plotter."""
if not self._closed:
self.close()
self.deep_clean()
del self.renderers
del self._shadow_renderer
def add_background_image(self, image_path, scale=1, auto_resize=True,
as_global=True):
"""Add a background image to a plot.
Parameters
----------
image_path : str
Path to an image file.
scale : float, optional
Scale the image larger or smaller relative to the size of
the window. For example, a scale size of 2 will make the
largest dimension of the image twice as large as the
largest dimension of the render window. Defaults to 1.
auto_resize : bool, optional
Resize the background when the render window changes size.
as_global : bool, optional
When multiple render windows are present, setting
``as_global=False`` will cause the background to only
appear in one window.
Examples
--------
>>> import pyvista
>>> from pyvista import examples
>>> plotter = pyvista.Plotter()
>>> actor = plotter.add_mesh(pyvista.Sphere())
>>> plotter.add_background_image(examples.mapfile)
>>> plotter.show() # doctest:+SKIP
"""
# verify no render exists
if self._background_renderers[self._active_renderer_index] is not None:
raise RuntimeError('A background image already exists. '
'Remove it with remove_background_image '
'before adding one')
# Need to change the number of layers to support an additional
# background layer
self.ren_win.SetNumberOfLayers(3)
if as_global:
for renderer in self.renderers:
renderer.SetLayer(2)
view_port = None
else:
self.renderer.SetLayer(2)
view_port = self.renderer.GetViewport()
renderer = BackgroundRenderer(self, image_path, scale, view_port)
renderer.SetLayer(1)
self.ren_win.AddRenderer(renderer)
self._background_renderers[self._active_renderer_index] = renderer
# setup autoscaling of the image
if auto_resize: # pragma: no cover
self._add_observer('ModifiedEvent', renderer.resize)
def remove_background_image(self):
"""Remove the background image from the current subplot."""
renderer = self._background_renderers[self._active_renderer_index]
if renderer is None:
raise RuntimeError('No background image to remove at this subplot')
renderer.deep_clean()
self._background_renderers[self._active_renderer_index] = None
def _on_first_render_request(self, cpos=None):
"""Once an image or render is officially requested, run this routine.
For example on the show call or any screenshot producing code.
"""
# reset the camera before the first render unless a camera position is set
if self._first_time: # and not self.camera_set:
for renderer in self.renderers:
if not renderer.camera_set and cpos is None:
renderer.camera_position = renderer.get_default_cam_pos()
renderer.ResetCamera()
elif cpos is not None:
renderer.camera_position = cpos
self._first_time = False
def reset_camera_clipping_range(self):
"""Reset camera clipping planes."""
self.renderer.ResetCameraClippingRange()
class Plotter(BasePlotter):
"""Plotting object to display vtk meshes or numpy arrays.
Example
-------
>>> import pyvista
>>> from pyvista import examples
>>> mesh = examples.load_hexbeam()
>>> another_mesh = examples.load_uniform()
>>> plotter = pyvista.Plotter()
>>> _ = plotter.add_mesh(mesh, color='red')
>>> _ = plotter.add_mesh(another_mesh, color='blue')
>>> plotter.show() # doctest:+SKIP
Parameters
----------
off_screen : bool, optional
Renders off screen when True. Useful for automated screenshots.
notebook : bool, optional
When True, the resulting plot is placed inline in a jupyter notebook.
Assumes a jupyter console is active. Automatically enables off_screen.
shape : list or tuple, optional
Number of sub-render windows inside of the main window.
Specify two across with ``shape=(2, 1)`` and a two by two grid
with ``shape=(2, 2)``. By default there is only one render window.
Can also accept a string descriptor as shape. E.g.:
* ``shape="3|1"`` means 3 plots on the left and 1 on the right,
* ``shape="4/2"`` means 4 plots on top and 2 at the bottom.
border : bool, optional
Draw a border around each render window. Default False.
border_color : string or 3 item list, optional, defaults to white
Either a string, rgb list, or hex color string. For example:
* ``color='white'``
* ``color='w'``
* ``color=[1, 1, 1]``
* ``color='#FFFFFF'``
window_size : list, optional
Window size in pixels. Defaults to [1024, 768]
multi_samples : int
The number of multi-samples used to mitigate aliasing. 4 is a good
default but 8 will have better results with a potential impact on
performance.
line_smoothing : bool
If True, enable line smoothing
point_smoothing : bool
If True, enable point smoothing
polygon_smoothing : bool
If True, enable polygon smoothing
"""
last_update_time = 0.0
right_timer_id = -1
def __init__(self, off_screen=None, notebook=None, shape=(1, 1),
groups=None, row_weights=None, col_weights=None,
border=None, border_color='k', border_width=2.0,
window_size=None, multi_samples=None, line_smoothing=False,
point_smoothing=False, polygon_smoothing=False,
splitting_position=None, title=None):
"""Initialize a vtk plotting object."""
super().__init__(shape=shape, border=border,
border_color=border_color,
border_width=border_width,
groups=groups, row_weights=row_weights,
col_weights=col_weights,
splitting_position=splitting_position,
title=title)
log.debug('Plotter init start')
def on_timer(iren, event_id):
"""Exit application if interactive renderer stops."""
if event_id == 'TimerEvent':
self.iren.TerminateApp()
if off_screen is None:
off_screen = pyvista.OFF_SCREEN
if notebook is None:
if rcParams['notebook'] is not None:
notebook = rcParams['notebook']
else:
notebook = scooby.in_ipykernel()
self.notebook = notebook
if self.notebook:
off_screen = True
self.off_screen = off_screen
if window_size is None:
window_size = rcParams['window_size']
self.__prior_window_size = window_size
if multi_samples is None:
multi_samples = rcParams['multi_samples']
# initialize render window
self.ren_win = vtk.vtkRenderWindow()
self.ren_win.SetMultiSamples(multi_samples)
self.ren_win.SetBorders(True)
if line_smoothing:
self.ren_win.LineSmoothingOn()
if point_smoothing:
self.ren_win.PointSmoothingOn()
if polygon_smoothing:
self.ren_win.PolygonSmoothingOn()
for renderer in self.renderers:
self.ren_win.AddRenderer(renderer)
# Add the shadow renderer to allow us to capture interactions within
# a given viewport
# https://vtk.org/pipermail/vtkusers/2018-June/102030.html
number_of_layers = self.ren_win.GetNumberOfLayers()
current_layer = self.renderer.GetLayer()
self.ren_win.SetNumberOfLayers(number_of_layers + 1)
self.ren_win.AddRenderer(self._shadow_renderer)
self._shadow_renderer.SetLayer(current_layer + 1)
self._shadow_renderer.SetInteractive(False) # never needs to capture
if self.off_screen:
self.ren_win.SetOffScreenRendering(1)
# Add ren win and interactor no matter what - necessary for ipyvtk_simple
self.iren = vtk.vtkRenderWindowInteractor()
self.iren.LightFollowCameraOff()
self.iren.SetDesiredUpdateRate(30.0)
self.iren.SetRenderWindow(self.ren_win)
self.enable_trackball_style() # internally calls update_style()
self._observers = {} # Map of events to observers of self.iren
self._add_observer("KeyPressEvent", self.key_press_event)
self.update_style()
# Set background
self.set_background(rcParams['background'])
# Set window size
self.window_size = window_size
# add timer event if interactive render exists
self._add_observer(vtk.vtkCommand.TimerEvent, on_timer)
if rcParams["depth_peeling"]["enabled"]:
if self.enable_depth_peeling():
for renderer in self.renderers:
renderer.enable_depth_peeling()
log.debug('Plotter init stop')
def show(self, title=None, window_size=None, interactive=True,
auto_close=None, interactive_update=False, full_screen=None,
screenshot=False, return_img=False, cpos=None, use_ipyvtk=None,
**kwargs):
"""Display the plotting window.
Notes
-----
Please use the ``q``-key to close the plotter as some operating systems
(namely Windows) will experience issues saving a screenshot if the
exit button in the GUI is pressed.
Parameters
----------
title : string, optional
Title of plotting window.
window_size : list, optional
Window size in pixels. Defaults to [1024, 768]
interactive : bool, optional
Enabled by default. Allows user to pan and move figure.
auto_close : bool, optional
Enabled by default. Exits plotting session when user
closes the window when interactive is ``True``.
interactive_update : bool, optional
Disabled by default. Allows the user to draw without blocking;
the user should call ``Update()`` in each iteration.
full_screen : bool, optional
Opens window in full screen. When enabled, ignores
window_size. Default ``False``.
cpos : list(tuple(floats))
The camera position to use
return_img : bool
Returns a numpy array representing the last image along
with the camera position.
use_ipyvtk : bool, optional
Use the ``ipyvtk-simple`` ``ViewInteractiveWidget`` to
visualize the plot within a jupyterlab notebook.
Return
------
cpos : list
List of camera position, focal point, and view up
image : np.ndarray
Numpy array of the last image when either ``return_img=True``
or ``screenshot`` is set.
Examples
--------
Show the plotting window and display it using the
ipyvtk-simple viewer
>>> pl.show(use_ipyvtk=True) # doctest:+SKIP
Take a screenshot interactively. Screenshot will be of the
last image shown.
>>> pl.show(screenshot='my_image.png') # doctest:+SKIP
"""
# developer keyword argument: return notebook viewer
# normally suppressed since it's shown by default
return_viewer = kwargs.pop('return_viewer', False)
# developer keyword argument: runs a function immediately prior to ``close``
self._before_close_callback = kwargs.pop('before_close_callback', None)
assert_empty_kwargs(**kwargs)
if interactive_update and auto_close is None:
auto_close = False
elif interactive_update and auto_close:
warnings.warn(textwrap.dedent("""\
The plotter will close immediately since ``auto_close=True``.
Either do not specify ``auto_close``, or set it to ``False`` if you want to
interact with the plotter interactively.\
""")
)
elif auto_close is None:
auto_close = rcParams['auto_close']
if use_ipyvtk is None:
use_ipyvtk = rcParams['use_ipyvtk']
if not hasattr(self, "ren_win"):
raise RuntimeError("This plotter has been closed and cannot be shown.")
if full_screen is None:
full_screen = rcParams['full_screen']
if full_screen:
self.ren_win.SetFullScreen(True)
self.ren_win.BordersOn() # super buggy when disabled
else:
if window_size is None:
window_size = self.window_size
self.ren_win.SetSize(window_size[0], window_size[1])
# reset the camera before the first render unless a camera position is set
self._on_first_render_request(cpos)
# Render
# For Windows issues. Resolves #186 and #1018
if os.name == 'nt' and not pyvista.VERY_FIRST_RENDER:
if interactive and (not self.off_screen):
self.iren.Start()
pyvista.VERY_FIRST_RENDER = False
# for some reason iren needs to start before rendering on
# Windows (but not after the very first render window)
self.render()
# This has to be after the first render for some reason
if title is None:
title = self.title
if title:
self.ren_win.SetWindowName(title)
self.title = title
# Keep track of image for sphinx-gallery
if pyvista.BUILDING_GALLERY or screenshot:
# always save screenshots for sphinx_gallery
self.last_image = self.screenshot(screenshot, return_img=True)
self.last_image_depth = self.get_image_depth()
disp = None
# See: https://github.com/pyvista/pyvista/issues/186#issuecomment-550993270
if interactive and (not self.off_screen):
try: # interrupts will be caught here
log.debug('Starting iren')
self.update_style()
if not interactive_update:
self.iren.Start()
self.iren.Initialize()
except KeyboardInterrupt:
log.debug('KeyboardInterrupt')
self.close()
raise KeyboardInterrupt
# In the event that the user hits the exit-button on the GUI (on
# Windows OS) then it must be finalized and deleted as accessing it
# will kill the kernel.
# Here we check for that and clean it up before moving on to any of
# the closing routines that might try to still access that
# render window.
if not self.ren_win.IsCurrent():
self._clear_ren_win() # The ren_win is deleted
# proper screenshots cannot be saved if this happens
if not auto_close:
warnings.warn("`auto_close` ignored: by clicking the exit button, you have destroyed the render window and we have to close it out.")
auto_close = True
# NOTE: after this point, nothing from the render window can be accessed
# as if a user pressed the close button, then it destroys the
# render view and a stream of errors will kill the Python
# kernel if code here tries to access that renderer.
# See issues #135 and #186 for insight before editing the
# remainder of this function.
# Get camera position before closing
cpos = self.camera_position
if self.notebook and use_ipyvtk:
# Widgets do not work in spyder
if any('SPYDER' in name for name in os.environ):
warnings.warn('``use_ipyvtk`` is incompatible with Spyder.\n'
'Use notebook=False for interactive '
'plotting within spyder')
try:
from ipyvtk_simple.viewer import ViewInteractiveWidget
except ImportError:
raise ImportError('Please install `ipyvtk_simple` to use this feature:'
'\thttps://github.com/Kitware/ipyvtk-simple')
# Have to leave the Plotter open for the widget to use
auto_close = False
disp = ViewInteractiveWidget(self.ren_win, on_close=self.close,
transparent_background=self.image_transparent_background)
# If notebook is true and ipyvtk_simple display failed:
if self.notebook and (disp is None):
import PIL.Image
# sanity check
try:
import IPython
except ImportError:
raise ImportError('Install IPython to display image in a notebook')
if not hasattr(self, 'last_image'):
self.last_image = self.screenshot(screenshot, return_img=True)
disp = IPython.display.display(PIL.Image.fromarray(self.last_image))
# Cleanup
if auto_close:
self.close()
# Simply display the result: either ipyvtk_simple object or image display
if self.notebook:
if return_viewer: # developer option
return disp
from IPython import display
display.display_html(disp)
# If user asked for screenshot, return as numpy array after camera
# position
if return_img or screenshot is True:
return cpos, self.last_image
# default to returning last used camera position
return cpos
def add_title(self, title, font_size=18, color=None, font=None,
shadow=False):
"""Add text to the top center of the plot.
This is merely a convenience method that calls ``add_text``
with ``position='upper_edge'``.
Parameters
----------
title : str
The text to add to the rendering.
font : string, optional
Font name may be courier, times, or arial.
shadow : bool, optional
Adds a black shadow to the text. Defaults to False
name : str, optional
The name for the added actor so that it can be easily
updated. If an actor of this name already exists in the
rendering window, it will be replaced by the new actor.
Return
------
textActor : vtk.vtkTextActor
Text actor added to plot.
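Examples
--------
A minimal sketch; the title string is illustrative only:
>>> import pyvista
>>> plotter = pyvista.Plotter(off_screen=True)
>>> _ = plotter.add_mesh(pyvista.Sphere())
>>> actor = plotter.add_title('My Plot', font_size=16)
>>> plotter.show()  # doctest:+SKIP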
"""
# add additional spacing from the top of the figure by default
title = '\n' + title
return self.add_text(title, position='upper_edge',
font_size=font_size, color=color, font=font,
shadow=shadow, name='title', viewport=False)
def _style_factory(klass):
"""Create a subclass with capturing ability, return it."""
# We have to use a custom subclass for this because the default ones
# swallow the release events
# http://vtk.1045678.n5.nabble.com/Mouse-button-release-event-is-still-broken-in-VTK-6-0-0-td5724762.html # noqa
class CustomStyle(getattr(vtk, 'vtkInteractorStyle' + klass)):
def __init__(self, parent):
super().__init__()
self._parent = weakref.ref(parent)
self.AddObserver(
"LeftButtonPressEvent",
partial(try_callback, self._press))
self.AddObserver(
"LeftButtonReleaseEvent",
partial(try_callback, self._release))
def _press(self, obj, event):
# Figure out which renderer has the event and disable the
# others
super().OnLeftButtonDown()
parent = self._parent()
if len(parent.renderers) > 1:
click_pos = parent.iren.GetEventPosition()
for renderer in parent.renderers:
interact = renderer.IsInViewport(*click_pos)
renderer.SetInteractive(interact)
def _release(self, obj, event):
super().OnLeftButtonUp()
parent = self._parent()
if len(parent.renderers) > 1:
for renderer in parent.renderers:
renderer.SetInteractive(True)
return CustomStyle
|
test_ftplib.py
|
"""Test script for ftplib module."""
# Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS
# environment
import ftplib
import asyncore
import asynchat
import socket
import io
import errno
import os
import threading
import time
try:
import ssl
except ImportError:
ssl = None
from unittest import TestCase, skipUnless
from test import support
from test.support import socket_helper
from test.support.socket_helper import HOST, HOSTv6
TIMEOUT = support.LOOPBACK_TIMEOUT
DEFAULT_ENCODING = 'utf-8'
# the dummy data returned by server over the data channel when
# RETR, LIST, NLST, MLSD commands are issued
RETR_DATA = 'abcde12345\r\n' * 1000 + 'non-ascii char \xAE\r\n'
LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n'
NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n'
MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n"
"type=pdir;perm=e;unique==keVO1+d?3; ..\r\n"
"type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n"
"type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n"
"type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n"
"type=file;perm=awr;unique==keVO1+8G4; writable\r\n"
"type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n"
"type=dir;perm=;unique==keVO1+1t2; no-exec\r\n"
"type=file;perm=r;unique==keVO1+EG4; two words\r\n"
"type=file;perm=r;unique==keVO1+IH4; leading space\r\n"
"type=file;perm=r;unique==keVO1+1G4; file1\r\n"
"type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n"
"type=file;perm=r;unique==keVO1+1G4; file2\r\n"
"type=file;perm=r;unique==keVO1+1G4; file3\r\n"
"type=file;perm=r;unique==keVO1+1G4; file4\r\n"
"type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n"
"type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n")
class DummyDTPHandler(asynchat.async_chat):
dtp_conn_closed = False
def __init__(self, conn, baseclass):
asynchat.async_chat.__init__(self, conn)
self.baseclass = baseclass
self.baseclass.last_received_data = ''
self.encoding = baseclass.encoding
def handle_read(self):
new_data = self.recv(1024).decode(self.encoding, 'replace')
self.baseclass.last_received_data += new_data
def handle_close(self):
# XXX: this method can be called many times in a row for a single
# connection, including in clear-text (non-TLS) mode.
# (behaviour witnessed with test_data_connection)
if not self.dtp_conn_closed:
self.baseclass.push('226 transfer complete')
self.close()
self.dtp_conn_closed = True
def push(self, what):
if self.baseclass.next_data is not None:
what = self.baseclass.next_data
self.baseclass.next_data = None
if not what:
return self.close_when_done()
super(DummyDTPHandler, self).push(what.encode(self.encoding))
def handle_error(self):
raise Exception
class DummyFTPHandler(asynchat.async_chat):
dtp_handler = DummyDTPHandler
def __init__(self, conn, encoding=DEFAULT_ENCODING):
asynchat.async_chat.__init__(self, conn)
# tells the socket to handle urgent data inline (ABOR command)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1)
self.set_terminator(b"\r\n")
self.in_buffer = []
self.dtp = None
self.last_received_cmd = None
self.last_received_data = ''
self.next_response = ''
self.next_data = None
self.rest = None
self.next_retr_data = RETR_DATA
self.push('220 welcome')
self.encoding = encoding
# We use this as the string IPv4 address to direct the client
# to in response to a PASV command. To test security behavior.
# https://bugs.python.org/issue43285/.
self.fake_pasv_server_ip = '252.253.254.255'
def collect_incoming_data(self, data):
self.in_buffer.append(data)
def found_terminator(self):
line = b''.join(self.in_buffer).decode(self.encoding)
self.in_buffer = []
if self.next_response:
self.push(self.next_response)
self.next_response = ''
cmd = line.split(' ')[0].lower()
self.last_received_cmd = cmd
space = line.find(' ')
if space != -1:
arg = line[space + 1:]
else:
arg = ""
if hasattr(self, 'cmd_' + cmd):
method = getattr(self, 'cmd_' + cmd)
method(arg)
else:
self.push('550 command "%s" not understood.' %cmd)
def handle_error(self):
raise Exception
def push(self, data):
asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n')
def cmd_port(self, arg):
addr = list(map(int, arg.split(',')))
ip = '%d.%d.%d.%d' %tuple(addr[:4])
port = (addr[4] * 256) + addr[5]
s = socket.create_connection((ip, port), timeout=TIMEOUT)
self.dtp = self.dtp_handler(s, baseclass=self)
self.push('200 active data connection established')
def cmd_pasv(self, arg):
with socket.create_server((self.socket.getsockname()[0], 0)) as sock:
sock.settimeout(TIMEOUT)
port = sock.getsockname()[1]
ip = self.fake_pasv_server_ip
ip = ip.replace('.', ','); p1, p2 = divmod(port, 256)
self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2))
conn, addr = sock.accept()
self.dtp = self.dtp_handler(conn, baseclass=self)
def cmd_eprt(self, arg):
af, ip, port = arg.split(arg[0])[1:-1]
port = int(port)
s = socket.create_connection((ip, port), timeout=TIMEOUT)
self.dtp = self.dtp_handler(s, baseclass=self)
self.push('200 active data connection established')
def cmd_epsv(self, arg):
with socket.create_server((self.socket.getsockname()[0], 0),
family=socket.AF_INET6) as sock:
sock.settimeout(TIMEOUT)
port = sock.getsockname()[1]
self.push('229 entering extended passive mode (|||%d|)' %port)
conn, addr = sock.accept()
self.dtp = self.dtp_handler(conn, baseclass=self)
def cmd_echo(self, arg):
# sends back the received string (used by the test suite)
self.push(arg)
def cmd_noop(self, arg):
self.push('200 noop ok')
def cmd_user(self, arg):
self.push('331 username ok')
def cmd_pass(self, arg):
self.push('230 password ok')
def cmd_acct(self, arg):
self.push('230 acct ok')
def cmd_rnfr(self, arg):
self.push('350 rnfr ok')
def cmd_rnto(self, arg):
self.push('250 rnto ok')
def cmd_dele(self, arg):
self.push('250 dele ok')
def cmd_cwd(self, arg):
self.push('250 cwd ok')
def cmd_size(self, arg):
self.push('250 1000')
def cmd_mkd(self, arg):
self.push('257 "%s"' %arg)
def cmd_rmd(self, arg):
self.push('250 rmd ok')
def cmd_pwd(self, arg):
self.push('257 "pwd ok"')
def cmd_type(self, arg):
self.push('200 type ok')
def cmd_quit(self, arg):
self.push('221 quit ok')
self.close()
def cmd_abor(self, arg):
self.push('226 abor ok')
def cmd_stor(self, arg):
self.push('125 stor ok')
def cmd_rest(self, arg):
self.rest = arg
self.push('350 rest ok')
def cmd_retr(self, arg):
self.push('125 retr ok')
if self.rest is not None:
offset = int(self.rest)
else:
offset = 0
self.dtp.push(self.next_retr_data[offset:])
self.dtp.close_when_done()
self.rest = None
def cmd_list(self, arg):
self.push('125 list ok')
self.dtp.push(LIST_DATA)
self.dtp.close_when_done()
def cmd_nlst(self, arg):
self.push('125 nlst ok')
self.dtp.push(NLST_DATA)
self.dtp.close_when_done()
def cmd_opts(self, arg):
self.push('200 opts ok')
def cmd_mlsd(self, arg):
self.push('125 mlsd ok')
self.dtp.push(MLSD_DATA)
self.dtp.close_when_done()
def cmd_setlongretr(self, arg):
# For testing. Next RETR will return long line.
self.next_retr_data = 'x' * int(arg)
self.push('125 setlongretr ok')
class DummyFTPServer(asyncore.dispatcher, threading.Thread):
handler = DummyFTPHandler
def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING):
threading.Thread.__init__(self)
asyncore.dispatcher.__init__(self)
self.daemon = True
self.create_socket(af, socket.SOCK_STREAM)
self.bind(address)
self.listen(5)
self.active = False
self.active_lock = threading.Lock()
self.host, self.port = self.socket.getsockname()[:2]
self.handler_instance = None
self.encoding = encoding
def start(self):
assert not self.active
self.__flag = threading.Event()
threading.Thread.start(self)
self.__flag.wait()
def run(self):
self.active = True
self.__flag.set()
while self.active and asyncore.socket_map:
self.active_lock.acquire()
asyncore.loop(timeout=0.1, count=1)
self.active_lock.release()
asyncore.close_all(ignore_all=True)
def stop(self):
assert self.active
self.active = False
self.join()
def handle_accepted(self, conn, addr):
self.handler_instance = self.handler(conn, encoding=self.encoding)
def handle_connect(self):
self.close()
handle_read = handle_connect
def writable(self):
return 0
def handle_error(self):
raise Exception
if ssl is not None:
CERTFILE = os.path.join(os.path.dirname(__file__), "keycert3.pem")
CAFILE = os.path.join(os.path.dirname(__file__), "pycacert.pem")
class SSLConnection(asyncore.dispatcher):
"""An asyncore.dispatcher subclass supporting TLS/SSL."""
_ssl_accepting = False
_ssl_closing = False
def secure_connection(self):
context = ssl.SSLContext()
context.load_cert_chain(CERTFILE)
socket = context.wrap_socket(self.socket,
suppress_ragged_eofs=False,
server_side=True,
do_handshake_on_connect=False)
self.del_channel()
self.set_socket(socket)
self._ssl_accepting = True
def _do_ssl_handshake(self):
try:
self.socket.do_handshake()
except ssl.SSLError as err:
if err.args[0] in (ssl.SSL_ERROR_WANT_READ,
ssl.SSL_ERROR_WANT_WRITE):
return
elif err.args[0] == ssl.SSL_ERROR_EOF:
return self.handle_close()
# TODO: SSLError does not expose alert information
elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]:
return self.handle_close()
raise
except OSError as err:
if err.args[0] == errno.ECONNABORTED:
return self.handle_close()
else:
self._ssl_accepting = False
def _do_ssl_shutdown(self):
self._ssl_closing = True
try:
self.socket = self.socket.unwrap()
except ssl.SSLError as err:
if err.args[0] in (ssl.SSL_ERROR_WANT_READ,
ssl.SSL_ERROR_WANT_WRITE):
return
except OSError:
# Any "socket error" corresponds to a SSL_ERROR_SYSCALL return
# from OpenSSL's SSL_shutdown(), corresponding to a
# closed socket condition. See also:
# http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html
pass
self._ssl_closing = False
            if getattr(self, '_ccc', False) is False:
                super(SSLConnection, self).close()
def handle_read_event(self):
if self._ssl_accepting:
self._do_ssl_handshake()
elif self._ssl_closing:
self._do_ssl_shutdown()
else:
super(SSLConnection, self).handle_read_event()
def handle_write_event(self):
if self._ssl_accepting:
self._do_ssl_handshake()
elif self._ssl_closing:
self._do_ssl_shutdown()
else:
super(SSLConnection, self).handle_write_event()
def send(self, data):
try:
return super(SSLConnection, self).send(data)
except ssl.SSLError as err:
if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN,
ssl.SSL_ERROR_WANT_READ,
ssl.SSL_ERROR_WANT_WRITE):
return 0
raise
def recv(self, buffer_size):
try:
return super(SSLConnection, self).recv(buffer_size)
except ssl.SSLError as err:
if err.args[0] in (ssl.SSL_ERROR_WANT_READ,
ssl.SSL_ERROR_WANT_WRITE):
return b''
if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN):
self.handle_close()
return b''
raise
def handle_error(self):
raise Exception
def close(self):
if (isinstance(self.socket, ssl.SSLSocket) and
self.socket._sslobj is not None):
self._do_ssl_shutdown()
else:
super(SSLConnection, self).close()
class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler):
"""A DummyDTPHandler subclass supporting TLS/SSL."""
def __init__(self, conn, baseclass):
DummyDTPHandler.__init__(self, conn, baseclass)
if self.baseclass.secure_data_channel:
self.secure_connection()
class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler):
"""A DummyFTPHandler subclass supporting TLS/SSL."""
dtp_handler = DummyTLS_DTPHandler
def __init__(self, conn, encoding=DEFAULT_ENCODING):
DummyFTPHandler.__init__(self, conn, encoding=encoding)
self.secure_data_channel = False
self._ccc = False
def cmd_auth(self, line):
"""Set up secure control channel."""
self.push('234 AUTH TLS successful')
self.secure_connection()
def cmd_ccc(self, line):
self.push('220 Reverting back to clear-text')
self._ccc = True
self._do_ssl_shutdown()
def cmd_pbsz(self, line):
"""Negotiate size of buffer for secure data transfer.
For TLS/SSL the only valid value for the parameter is '0'.
Any other value is accepted but ignored.
"""
self.push('200 PBSZ=0 successful.')
def cmd_prot(self, line):
"""Setup un/secure data channel."""
arg = line.upper()
if arg == 'C':
self.push('200 Protection set to Clear')
self.secure_data_channel = False
elif arg == 'P':
self.push('200 Protection set to Private')
self.secure_data_channel = True
else:
self.push("502 Unrecognized PROT type (use C or P).")
class DummyTLS_FTPServer(DummyFTPServer):
handler = DummyTLS_FTPHandler
class TestFTPClass(TestCase):
def setUp(self, encoding=DEFAULT_ENCODING):
self.server = DummyFTPServer((HOST, 0), encoding=encoding)
self.server.start()
self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding)
self.client.connect(self.server.host, self.server.port)
def tearDown(self):
self.client.close()
self.server.stop()
# Explicitly clear the attribute to prevent dangling thread
self.server = None
asyncore.close_all(ignore_all=True)
def check_data(self, received, expected):
self.assertEqual(len(received), len(expected))
self.assertEqual(received, expected)
def test_getwelcome(self):
self.assertEqual(self.client.getwelcome(), '220 welcome')
def test_sanitize(self):
self.assertEqual(self.client.sanitize('foo'), repr('foo'))
self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****'))
self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****'))
def test_exceptions(self):
self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0')
self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0')
self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0')
self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400')
self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499')
self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500')
self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599')
self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999')
def test_all_errors(self):
exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm,
ftplib.error_proto, ftplib.Error, OSError,
EOFError)
for x in exceptions:
try:
raise x('exception not included in all_errors set')
except ftplib.all_errors:
pass
def test_set_pasv(self):
# passive mode is supposed to be enabled by default
self.assertTrue(self.client.passiveserver)
self.client.set_pasv(True)
self.assertTrue(self.client.passiveserver)
self.client.set_pasv(False)
self.assertFalse(self.client.passiveserver)
def test_voidcmd(self):
self.client.voidcmd('echo 200')
self.client.voidcmd('echo 299')
self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199')
self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300')
def test_login(self):
self.client.login()
def test_acct(self):
self.client.acct('passwd')
def test_rename(self):
self.client.rename('a', 'b')
self.server.handler_instance.next_response = '200'
self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b')
def test_delete(self):
self.client.delete('foo')
self.server.handler_instance.next_response = '199'
self.assertRaises(ftplib.error_reply, self.client.delete, 'foo')
def test_size(self):
self.client.size('foo')
def test_mkd(self):
dir = self.client.mkd('/foo')
self.assertEqual(dir, '/foo')
def test_rmd(self):
self.client.rmd('foo')
def test_cwd(self):
dir = self.client.cwd('/foo')
self.assertEqual(dir, '250 cwd ok')
def test_pwd(self):
dir = self.client.pwd()
self.assertEqual(dir, 'pwd ok')
def test_quit(self):
self.assertEqual(self.client.quit(), '221 quit ok')
# Ensure the connection gets closed; sock attribute should be None
self.assertEqual(self.client.sock, None)
def test_abort(self):
self.client.abort()
def test_retrbinary(self):
def callback(data):
received.append(data.decode(self.client.encoding))
received = []
self.client.retrbinary('retr', callback)
self.check_data(''.join(received), RETR_DATA)
def test_retrbinary_rest(self):
def callback(data):
received.append(data.decode(self.client.encoding))
for rest in (0, 10, 20):
received = []
self.client.retrbinary('retr', callback, rest=rest)
self.check_data(''.join(received), RETR_DATA[rest:])
def test_retrlines(self):
received = []
self.client.retrlines('retr', received.append)
self.check_data(''.join(received), RETR_DATA.replace('\r\n', ''))
def test_storbinary(self):
f = io.BytesIO(RETR_DATA.encode(self.client.encoding))
self.client.storbinary('stor', f)
self.check_data(self.server.handler_instance.last_received_data, RETR_DATA)
# test new callback arg
flag = []
f.seek(0)
self.client.storbinary('stor', f, callback=lambda x: flag.append(None))
self.assertTrue(flag)
def test_storbinary_rest(self):
data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding)
f = io.BytesIO(data)
for r in (30, '30'):
f.seek(0)
self.client.storbinary('stor', f, rest=r)
self.assertEqual(self.server.handler_instance.rest, str(r))
def test_storlines(self):
data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding)
f = io.BytesIO(data)
self.client.storlines('stor', f)
self.check_data(self.server.handler_instance.last_received_data, RETR_DATA)
# test new callback arg
flag = []
f.seek(0)
self.client.storlines('stor foo', f, callback=lambda x: flag.append(None))
self.assertTrue(flag)
f = io.StringIO(RETR_DATA.replace('\r\n', '\n'))
# storlines() expects a binary file, not a text file
with support.check_warnings(('', BytesWarning), quiet=True):
self.assertRaises(TypeError, self.client.storlines, 'stor foo', f)
def test_nlst(self):
self.client.nlst()
self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1])
def test_dir(self):
l = []
self.client.dir(lambda x: l.append(x))
self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', ''))
def test_mlsd(self):
list(self.client.mlsd())
list(self.client.mlsd(path='/'))
list(self.client.mlsd(path='/', facts=['size', 'type']))
ls = list(self.client.mlsd())
for name, facts in ls:
self.assertIsInstance(name, str)
self.assertIsInstance(facts, dict)
self.assertTrue(name)
self.assertIn('type', facts)
self.assertIn('perm', facts)
self.assertIn('unique', facts)
def set_data(data):
self.server.handler_instance.next_data = data
def test_entry(line, type=None, perm=None, unique=None, name=None):
type = 'type' if type is None else type
perm = 'perm' if perm is None else perm
unique = 'unique' if unique is None else unique
name = 'name' if name is None else name
set_data(line)
_name, facts = next(self.client.mlsd())
self.assertEqual(_name, name)
self.assertEqual(facts['type'], type)
self.assertEqual(facts['perm'], perm)
self.assertEqual(facts['unique'], unique)
# plain
test_entry('type=type;perm=perm;unique=unique; name\r\n')
# "=" in fact value
test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe")
test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type")
test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe")
test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====")
# spaces in name
test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me")
test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ")
test_entry('type=type;perm=perm;unique=unique; name\r\n', name=" name")
test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e")
# ";" in name
test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me")
test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name")
test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;")
test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;")
# case sensitiveness
set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n')
_name, facts = next(self.client.mlsd())
for x in facts:
self.assertTrue(x.islower())
# no data (directory empty)
set_data('')
self.assertRaises(StopIteration, next, self.client.mlsd())
set_data('')
for x in self.client.mlsd():
self.fail("unexpected data %s" % x)
def test_makeport(self):
with self.client.makeport():
# IPv4 is in use, just make sure send_eprt has not been used
self.assertEqual(self.server.handler_instance.last_received_cmd,
'port')
def test_makepasv(self):
host, port = self.client.makepasv()
conn = socket.create_connection((host, port), timeout=TIMEOUT)
conn.close()
# IPv4 is in use, just make sure send_epsv has not been used
self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv')
def test_makepasv_issue43285_security_disabled(self):
"""Test the opt-in to the old vulnerable behavior."""
self.client.trust_server_pasv_ipv4_address = True
bad_host, port = self.client.makepasv()
self.assertEqual(
bad_host, self.server.handler_instance.fake_pasv_server_ip)
# Opening and closing a connection keeps the dummy server happy
# instead of timing out on accept.
socket.create_connection((self.client.sock.getpeername()[0], port),
timeout=TIMEOUT).close()
def test_makepasv_issue43285_security_enabled_default(self):
self.assertFalse(self.client.trust_server_pasv_ipv4_address)
trusted_host, port = self.client.makepasv()
self.assertNotEqual(
trusted_host, self.server.handler_instance.fake_pasv_server_ip)
# Opening and closing a connection keeps the dummy server happy
# instead of timing out on accept.
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close()
def test_with_statement(self):
self.client.quit()
def is_client_connected():
if self.client.sock is None:
return False
try:
self.client.sendcmd('noop')
except (OSError, EOFError):
return False
return True
# base test
with ftplib.FTP(timeout=TIMEOUT) as self.client:
self.client.connect(self.server.host, self.server.port)
self.client.sendcmd('noop')
self.assertTrue(is_client_connected())
self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit')
self.assertFalse(is_client_connected())
# QUIT sent inside the with block
with ftplib.FTP(timeout=TIMEOUT) as self.client:
self.client.connect(self.server.host, self.server.port)
self.client.sendcmd('noop')
self.client.quit()
self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit')
self.assertFalse(is_client_connected())
# force a wrong response code to be sent on QUIT: error_perm
# is expected and the connection is supposed to be closed
try:
with ftplib.FTP(timeout=TIMEOUT) as self.client:
self.client.connect(self.server.host, self.server.port)
self.client.sendcmd('noop')
self.server.handler_instance.next_response = '550 error on quit'
except ftplib.error_perm as err:
self.assertEqual(str(err), '550 error on quit')
else:
self.fail('Exception not raised')
# needed to give the threaded server some time to set the attribute
# which otherwise would still be == 'noop'
time.sleep(0.1)
self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit')
self.assertFalse(is_client_connected())
def test_source_address(self):
self.client.quit()
port = socket_helper.find_unused_port()
try:
self.client.connect(self.server.host, self.server.port,
source_address=(HOST, port))
self.assertEqual(self.client.sock.getsockname()[1], port)
self.client.quit()
except OSError as e:
if e.errno == errno.EADDRINUSE:
self.skipTest("couldn't bind to port %d" % port)
raise
def test_source_address_passive_connection(self):
port = socket_helper.find_unused_port()
self.client.source_address = (HOST, port)
try:
with self.client.transfercmd('list') as sock:
self.assertEqual(sock.getsockname()[1], port)
except OSError as e:
if e.errno == errno.EADDRINUSE:
self.skipTest("couldn't bind to port %d" % port)
raise
def test_parse257(self):
self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar')
self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar')
self.assertEqual(ftplib.parse257('257 ""'), '')
self.assertEqual(ftplib.parse257('257 "" created'), '')
self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"')
# The 257 response is supposed to include the directory
# name and in case it contains embedded double-quotes
# they must be doubled (see RFC-959, chapter 7, appendix 2).
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar')
self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar')
def test_line_too_long(self):
self.assertRaises(ftplib.Error, self.client.sendcmd,
'x' * self.client.maxline * 2)
def test_retrlines_too_long(self):
self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2))
received = []
self.assertRaises(ftplib.Error,
self.client.retrlines, 'retr', received.append)
def test_storlines_too_long(self):
f = io.BytesIO(b'x' * self.client.maxline * 2)
self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f)
def test_encoding_param(self):
encodings = ['latin-1', 'utf-8']
for encoding in encodings:
with self.subTest(encoding=encoding):
self.tearDown()
self.setUp(encoding=encoding)
self.assertEqual(encoding, self.client.encoding)
self.test_retrbinary()
self.test_storbinary()
self.test_retrlines()
new_dir = self.client.mkd('/non-ascii dir \xAE')
self.check_data(new_dir, '/non-ascii dir \xAE')
# Check default encoding
client = ftplib.FTP(timeout=TIMEOUT)
self.assertEqual(DEFAULT_ENCODING, client.encoding)
@skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled")
class TestIPv6Environment(TestCase):
def setUp(self):
self.server = DummyFTPServer((HOSTv6, 0),
af=socket.AF_INET6,
encoding=DEFAULT_ENCODING)
self.server.start()
self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING)
self.client.connect(self.server.host, self.server.port)
def tearDown(self):
self.client.close()
self.server.stop()
# Explicitly clear the attribute to prevent dangling thread
self.server = None
asyncore.close_all(ignore_all=True)
def test_af(self):
self.assertEqual(self.client.af, socket.AF_INET6)
def test_makeport(self):
with self.client.makeport():
self.assertEqual(self.server.handler_instance.last_received_cmd,
'eprt')
def test_makepasv(self):
host, port = self.client.makepasv()
conn = socket.create_connection((host, port), timeout=TIMEOUT)
conn.close()
self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv')
def test_transfer(self):
def retr():
def callback(data):
received.append(data.decode(self.client.encoding))
received = []
self.client.retrbinary('retr', callback)
self.assertEqual(len(''.join(received)), len(RETR_DATA))
self.assertEqual(''.join(received), RETR_DATA)
self.client.set_pasv(True)
retr()
self.client.set_pasv(False)
retr()
@skipUnless(ssl, "SSL not available")
class TestTLS_FTPClassMixin(TestFTPClass):
"""Repeat TestFTPClass tests starting the TLS layer for both control
and data connections first.
"""
def setUp(self, encoding=DEFAULT_ENCODING):
self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding)
self.server.start()
self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding)
self.client.connect(self.server.host, self.server.port)
# enable TLS
self.client.auth()
self.client.prot_p()
@skipUnless(ssl, "SSL not available")
class TestTLS_FTPClass(TestCase):
"""Specific TLS_FTP class tests."""
def setUp(self, encoding=DEFAULT_ENCODING):
self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding)
self.server.start()
self.client = ftplib.FTP_TLS(timeout=TIMEOUT)
self.client.connect(self.server.host, self.server.port)
def tearDown(self):
self.client.close()
self.server.stop()
# Explicitly clear the attribute to prevent dangling thread
self.server = None
asyncore.close_all(ignore_all=True)
def test_control_connection(self):
self.assertNotIsInstance(self.client.sock, ssl.SSLSocket)
self.client.auth()
self.assertIsInstance(self.client.sock, ssl.SSLSocket)
def test_data_connection(self):
# clear text
with self.client.transfercmd('list') as sock:
self.assertNotIsInstance(sock, ssl.SSLSocket)
self.assertEqual(sock.recv(1024),
LIST_DATA.encode(self.client.encoding))
self.assertEqual(self.client.voidresp(), "226 transfer complete")
# secured, after PROT P
self.client.prot_p()
with self.client.transfercmd('list') as sock:
self.assertIsInstance(sock, ssl.SSLSocket)
# consume from SSL socket to finalize handshake and avoid
# "SSLError [SSL] shutdown while in init"
self.assertEqual(sock.recv(1024),
LIST_DATA.encode(self.client.encoding))
self.assertEqual(self.client.voidresp(), "226 transfer complete")
# PROT C is issued, the connection must be in cleartext again
self.client.prot_c()
with self.client.transfercmd('list') as sock:
self.assertNotIsInstance(sock, ssl.SSLSocket)
self.assertEqual(sock.recv(1024),
LIST_DATA.encode(self.client.encoding))
self.assertEqual(self.client.voidresp(), "226 transfer complete")
def test_login(self):
# login() is supposed to implicitly secure the control connection
self.assertNotIsInstance(self.client.sock, ssl.SSLSocket)
self.client.login()
self.assertIsInstance(self.client.sock, ssl.SSLSocket)
# make sure that AUTH TLS doesn't get issued again
self.client.login()
def test_auth_issued_twice(self):
self.client.auth()
self.assertRaises(ValueError, self.client.auth)
def test_context(self):
self.client.quit()
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
self.assertRaises(ValueError, ftplib.FTP_TLS, keyfile=CERTFILE,
context=ctx)
self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE,
context=ctx)
self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE,
keyfile=CERTFILE, context=ctx)
self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT)
self.client.connect(self.server.host, self.server.port)
self.assertNotIsInstance(self.client.sock, ssl.SSLSocket)
self.client.auth()
self.assertIs(self.client.sock.context, ctx)
self.assertIsInstance(self.client.sock, ssl.SSLSocket)
self.client.prot_p()
with self.client.transfercmd('list') as sock:
self.assertIs(sock.context, ctx)
self.assertIsInstance(sock, ssl.SSLSocket)
def test_ccc(self):
self.assertRaises(ValueError, self.client.ccc)
self.client.login(secure=True)
self.assertIsInstance(self.client.sock, ssl.SSLSocket)
self.client.ccc()
self.assertRaises(ValueError, self.client.sock.unwrap)
@skipUnless(False, "FIXME: bpo-32706")
def test_check_hostname(self):
self.client.quit()
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED)
self.assertEqual(ctx.check_hostname, True)
ctx.load_verify_locations(CAFILE)
self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT)
# 127.0.0.1 doesn't match SAN
self.client.connect(self.server.host, self.server.port)
with self.assertRaises(ssl.CertificateError):
self.client.auth()
# exception quits connection
self.client.connect(self.server.host, self.server.port)
self.client.prot_p()
with self.assertRaises(ssl.CertificateError):
with self.client.transfercmd("list") as sock:
pass
self.client.quit()
self.client.connect("localhost", self.server.port)
self.client.auth()
self.client.quit()
self.client.connect("localhost", self.server.port)
self.client.prot_p()
with self.client.transfercmd("list") as sock:
pass
class TestTimeouts(TestCase):
def setUp(self):
self.evt = threading.Event()
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.settimeout(20)
self.port = socket_helper.bind_port(self.sock)
self.server_thread = threading.Thread(target=self.server)
self.server_thread.daemon = True
self.server_thread.start()
# Wait for the server to be ready.
self.evt.wait()
self.evt.clear()
self.old_port = ftplib.FTP.port
ftplib.FTP.port = self.port
def tearDown(self):
ftplib.FTP.port = self.old_port
self.server_thread.join()
# Explicitly clear the attribute to prevent dangling thread
self.server_thread = None
def server(self):
# This method sets the evt 3 times:
# 1) when the connection is ready to be accepted.
# 2) when it is safe for the caller to close the connection
# 3) when we have closed the socket
self.sock.listen()
# (1) Signal the caller that we are ready to accept the connection.
self.evt.set()
try:
conn, addr = self.sock.accept()
except socket.timeout:
pass
else:
conn.sendall(b"1 Hola mundo\n")
conn.shutdown(socket.SHUT_WR)
# (2) Signal the caller that it is safe to close the socket.
self.evt.set()
conn.close()
finally:
self.sock.close()
def testTimeoutDefault(self):
# default -- use global socket timeout
self.assertIsNone(socket.getdefaulttimeout())
socket.setdefaulttimeout(30)
try:
ftp = ftplib.FTP(HOST)
finally:
socket.setdefaulttimeout(None)
self.assertEqual(ftp.sock.gettimeout(), 30)
self.evt.wait()
ftp.close()
def testTimeoutNone(self):
# no timeout -- do not use global socket timeout
self.assertIsNone(socket.getdefaulttimeout())
socket.setdefaulttimeout(30)
try:
ftp = ftplib.FTP(HOST, timeout=None)
finally:
socket.setdefaulttimeout(None)
self.assertIsNone(ftp.sock.gettimeout())
self.evt.wait()
ftp.close()
def testTimeoutValue(self):
# a value
ftp = ftplib.FTP(HOST, timeout=30)
self.assertEqual(ftp.sock.gettimeout(), 30)
self.evt.wait()
ftp.close()
# bpo-39259
with self.assertRaises(ValueError):
ftplib.FTP(HOST, timeout=0)
def testTimeoutConnect(self):
ftp = ftplib.FTP()
ftp.connect(HOST, timeout=30)
self.assertEqual(ftp.sock.gettimeout(), 30)
self.evt.wait()
ftp.close()
def testTimeoutDifferentOrder(self):
ftp = ftplib.FTP(timeout=30)
ftp.connect(HOST)
self.assertEqual(ftp.sock.gettimeout(), 30)
self.evt.wait()
ftp.close()
def testTimeoutDirectAccess(self):
ftp = ftplib.FTP()
ftp.timeout = 30
ftp.connect(HOST)
self.assertEqual(ftp.sock.gettimeout(), 30)
self.evt.wait()
ftp.close()
class MiscTestCase(TestCase):
def test__all__(self):
blacklist = {'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF',
'Error', 'parse150', 'parse227', 'parse229', 'parse257',
'print_line', 'ftpcp', 'test'}
support.check__all__(self, ftplib, blacklist=blacklist)
def test_main():
tests = [TestFTPClass, TestTimeouts,
TestIPv6Environment,
TestTLS_FTPClassMixin, TestTLS_FTPClass,
MiscTestCase]
thread_info = support.threading_setup()
try:
support.run_unittest(*tests)
finally:
support.threading_cleanup(*thread_info)
if __name__ == '__main__':
test_main()
|
controller.py
|
import glob
import json
import os
import re
import shutil
import subprocess
import tarfile
import time
import traceback
from datetime import datetime
from math import floor
from pathlib import Path
from threading import Thread
from typing import List, Set, Type, Tuple, Dict, Iterable
import requests
from bauh.api.abstract.controller import SearchResult, SoftwareManager, ApplicationContext, UpgradeRequirements
from bauh.api.abstract.disk import DiskCacheLoader
from bauh.api.abstract.handler import ProcessWatcher, TaskManager
from bauh.api.abstract.model import PackageUpdate, PackageHistory, SoftwarePackage, PackageSuggestion, PackageStatus, \
SuggestionPriority, CustomSoftwareAction
from bauh.api.abstract.view import MessageType, FormComponent, InputOption, SingleSelectComponent, SelectViewType, \
ViewComponent, PanelComponent, MultipleSelectComponent, TextInputComponent, TextComponent
from bauh.api.constants import TEMP_DIR
from bauh.commons import user, internet
from bauh.commons.category import CategoriesDownloader
from bauh.commons.config import save_config
from bauh.commons.html import bold
from bauh.commons.system import SystemProcess, ProcessHandler, new_subprocess, run_cmd, SimpleProcess
from bauh.gems.arch import BUILD_DIR, aur, pacman, makepkg, message, confirmation, disk, git, \
gpg, URL_CATEGORIES_FILE, CATEGORIES_FILE_PATH, CUSTOM_MAKEPKG_FILE, SUGGESTIONS_FILE, \
CONFIG_FILE, get_icon_path, database, mirrors, sorting, cpu_manager, ARCH_CACHE_PATH
from bauh.gems.arch.aur import AURClient
from bauh.gems.arch.config import read_config
from bauh.gems.arch.dependencies import DependenciesAnalyser
from bauh.gems.arch.exceptions import PackageNotFoundException
from bauh.gems.arch.mapper import ArchDataMapper
from bauh.gems.arch.model import ArchPackage
from bauh.gems.arch.output import TransactionStatusHandler
from bauh.gems.arch.updates import UpdatesSummarizer
from bauh.gems.arch.worker import AURIndexUpdater, ArchDiskCacheUpdater, ArchCompilationOptimizer, SyncDatabases, \
RefreshMirrors
URL_GIT = 'https://aur.archlinux.org/{}.git'
URL_PKG_DOWNLOAD = 'https://aur.archlinux.org/cgit/aur.git/snapshot/{}.tar.gz'
URL_SRC_INFO = 'https://aur.archlinux.org/cgit/aur.git/plain/.SRCINFO?h='
RE_SPLIT_VERSION = re.compile(r'([=><]+)')
SOURCE_FIELDS = ('source', 'source_x86_64')
RE_PRE_DOWNLOAD_WL_PROTOCOLS = re.compile(r'^(.+::)?(https?|ftp)://.+')
RE_PRE_DOWNLOAD_BL_EXT = re.compile(r'.+\.(git|gpg)$')
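# Whitelisted source protocols and blacklisted extensions used to decide which
# PKGBUILD sources are eligible for pre-downloading.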
class TransactionContext:
def __init__(self, name: str = None, base: str = None, maintainer: str = None, watcher: ProcessWatcher = None,
handler: ProcessHandler = None, dependency: bool = None, skip_opt_deps: bool = False, root_password: str = None,
build_dir: str = None, project_dir: str = None, change_progress: bool = False, arch_config: dict = None,
install_file: str = None, repository: str = None, pkg: ArchPackage = None,
remote_repo_map: Dict[str, str] = None, provided_map: Dict[str, Set[str]] = None,
remote_provided_map: Dict[str, Set[str]] = None, aur_idx: Set[str] = None,
missing_deps: List[Tuple[str, str]] = None):
self.name = name
self.base = base
self.maintainer = maintainer
self.watcher = watcher
self.handler = handler
self.dependency = dependency
self.skip_opt_deps = skip_opt_deps
self.build_dir = build_dir
self.project_dir = project_dir
self.root_password = root_password
self.change_progress = change_progress
self.repository = repository
self.config = arch_config
self.install_file = install_file
self.pkg = pkg
self.provided_map = provided_map
self.remote_repo_map = remote_repo_map
self.remote_provided_map = remote_provided_map
self.aur_idx = aur_idx
self.missing_deps = missing_deps
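    # The provided/repository/AUR-index maps are filled lazily through the
    # get_* methods below and shared across the whole transaction.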
@classmethod
def gen_context_from(cls, pkg: ArchPackage, arch_config: dict, root_password: str, handler: ProcessHandler) -> "TransactionContext":
return cls(name=pkg.name, base=pkg.get_base_name(), maintainer=pkg.maintainer, repository=pkg.repository,
arch_config=arch_config, watcher=handler.watcher, handler=handler, skip_opt_deps=True,
change_progress=True, root_password=root_password, dependency=False)
def get_base_name(self):
return self.base if self.base else self.name
def get_project_dir(self):
return self.project_dir or '.'
def clone_base(self):
return TransactionContext(watcher=self.watcher, handler=self.handler, root_password=self.root_password,
arch_config=self.config)
def gen_dep_context(self, name: str, repository: str):
dep_context = self.clone_base()
dep_context.name = name
dep_context.repository = repository
dep_context.dependency = True
dep_context.change_progress = False
return dep_context
def has_install_file(self) -> bool:
return self.install_file is not None
def get_package_path(self) -> str:
return self.install_file if self.install_file else self.name
def get_version(self) -> str:
return self.pkg.version if self.pkg else None
def get_aur_idx(self, aur_client: AURClient) -> Set[str]:
if self.aur_idx is None:
if self.config['aur']:
self.aur_idx = aur_client.read_index()
else:
self.aur_idx = set()
return self.aur_idx
def get_provided_map(self) -> Dict[str, Set[str]]:
if self.provided_map is None:
self.provided_map = pacman.map_provided()
return self.provided_map
def get_remote_provided_map(self) -> Dict[str, Set[str]]:
if self.remote_provided_map is None:
self.remote_provided_map = pacman.map_provided(remote=True)
return self.remote_provided_map
def get_remote_repo_map(self) -> Dict[str, str]:
if self.remote_repo_map is None:
self.remote_repo_map = pacman.map_repositories()
return self.remote_repo_map
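# SoftwareManager implementation handling both official repository packages
# (through pacman) and AUR packages (built from source via git/makepkg).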
class ArchManager(SoftwareManager):
def __init__(self, context: ApplicationContext):
super(ArchManager, self).__init__(context=context)
self.aur_cache = context.cache_factory.new()
# context.disk_loader_factory.map(ArchPackage, self.aur_cache) TODO
self.mapper = ArchDataMapper(http_client=context.http_client, i18n=context.i18n)
self.i18n = context.i18n
self.aur_client = AURClient(http_client=context.http_client, logger=context.logger, x86_64=context.is_system_x86_64())
self.dcache_updater = None
self.logger = context.logger
self.enabled = True
self.arch_distro = context.distro == 'arch'
self.categories = {}
self.deps_analyser = DependenciesAnalyser(self.aur_client, self.i18n)
self.http_client = context.http_client
self.custom_actions = {
'sys_up': CustomSoftwareAction(i18_label_key='arch.custom_action.upgrade_system',
i18n_status_key='arch.custom_action.upgrade_system.status',
manager_method='upgrade_system',
icon_path=get_icon_path(),
requires_root=True,
backup=True,
manager=self),
'ref_dbs': CustomSoftwareAction(i18_label_key='arch.custom_action.refresh_dbs',
i18n_status_key='arch.sync_databases.substatus',
manager_method='sync_databases',
icon_path=get_icon_path(),
requires_root=True,
manager=self),
'ref_mirrors': CustomSoftwareAction(i18_label_key='arch.custom_action.refresh_mirrors',
i18n_status_key='arch.task.mirrors',
manager_method='refresh_mirrors',
icon_path=get_icon_path(),
requires_root=True,
manager=self),
'clean_cache': CustomSoftwareAction(i18_label_key='arch.custom_action.clean_cache',
i18n_status_key='arch.custom_action.clean_cache.status',
manager_method='clean_cache',
icon_path=get_icon_path(),
requires_root=True,
refresh=False,
manager=self)
}
self.index_aur = None
self.re_file_conflict = re.compile(r'[\w\d\-_.]+:')
@staticmethod
def get_semantic_search_map() -> Dict[str, str]:
return {'google chrome': 'google-chrome',
'chrome google': 'google-chrome',
'googlechrome': 'google-chrome'}
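    # Mirror refresh flow: ask the user which mirror countries to use, update
    # (and optionally rank) the mirror list, then force a database sync.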
def refresh_mirrors(self, root_password: str, watcher: ProcessWatcher) -> bool:
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
return False
available_countries = pacman.list_mirror_countries()
current_countries = pacman.get_current_mirror_countries()
if not available_countries:
self.logger.warning("No country available")
countries = current_countries
else:
country_opts = [InputOption(label=self.i18n['arch.custom_action.refresh_mirrors.location.all'], value='all',
tooltip=self.i18n['arch.custom_action.refresh_mirrors.location.all.tip'])]
mapped_opts = [InputOption(label=' '.join((w.capitalize() for w in self.i18n[' '.join(c.split('_'))].split(' '))),
value=c) for c in available_countries]
mapped_opts.sort(key=lambda o: o.label)
if len(current_countries) == 1 and current_countries[0] == 'all':
default_opts = {country_opts[0]}
else:
default_opts = {o for o in mapped_opts if o.value in current_countries}
country_opts.extend(default_opts)
country_opts.extend((o for o in mapped_opts if o not in default_opts))
select = MultipleSelectComponent(options=country_opts,
default_options=default_opts,
max_per_line=3,
label=self.i18n['arch.custom_action.refresh_mirrors.select_label'])
if watcher.request_confirmation(title=self.i18n['arch.custom_action.refresh_mirrors'],
body=None,
components=[select],
confirmation_label=self.i18n['continue'].capitalize(),
deny_label=self.i18n["cancel"].capitalize()):
countries = select.get_selected_values()
if 'all' in countries or len(countries) == len(available_countries):
countries = ['all']
else:
watcher.print("Aborted by the user")
return False
watcher.change_substatus(self.i18n['arch.custom_action.refresh_mirrors.status.updating'])
if current_countries == countries:
success, output = handler.handle_simple(pacman.refresh_mirrors(root_password))
else:
success, output = handler.handle_simple(pacman.update_mirrors(root_password, countries))
if not success:
watcher.show_message(title=self.i18n["action.failed"].capitalize(),
body=self.i18n['arch.custom_action.refresh_mirrors.failed'],
type_=MessageType.ERROR)
return False
sort_limit = read_config()['mirrors_sort_limit']
if sort_limit is not None and isinstance(sort_limit, int) and sort_limit >= 0:
watcher.change_substatus(self.i18n['arch.custom_action.refresh_mirrors.status.sorting'])
handler.handle_simple(pacman.sort_fastest_mirrors(root_password, sort_limit))
mirrors.register_sync(self.logger)
watcher.change_substatus(self.i18n['arch.sync_databases.substatus'])
return self.sync_databases(root_password=root_password, watcher=watcher)
def sync_databases(self, root_password: str, watcher: ProcessWatcher) -> bool:
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
return False
success, output = handler.handle_simple(pacman.sync_databases(root_password, force=True))
if not success:
watcher.show_message(title=self.i18n["action.failed"].capitalize(),
body=self.i18n['arch.custom_action.refresh_mirrors.failed'],
type_=MessageType.ERROR)
return False
database.register_sync(self.logger)
return True
def _upgrade_search_result(self, apidata: dict, installed_pkgs: Dict[str, ArchPackage], downgrade_enabled: bool, res: SearchResult, disk_loader: DiskCacheLoader):
pkg = installed_pkgs.get(apidata['Name'])
if not pkg:
pkg = self.mapper.map_api_data(apidata, None, self.categories)
pkg.downgrade_enabled = downgrade_enabled
if pkg.installed:
res.installed.append(pkg)
else:
res.new.append(pkg)
Thread(target=self.mapper.fill_package_build, args=(pkg,), daemon=True).start()
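    # Repository and AUR searches run concurrently and fill the same
    # SearchResult; both wait on the 'read_installed' thread before matching
    # results against what is already installed.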
def _search_in_repos_and_fill(self, words: str, disk_loader: DiskCacheLoader, read_installed: Thread, installed: List[ArchPackage], res: SearchResult):
repo_search = pacman.search(words)
        if not repo_search:  # the package may no longer be mapped in the synced databases
pkgname = words.split(' ')[0].strip()
pkg_found = pacman.get_info_dict(pkgname, remote=False)
if pkg_found and pkg_found['validated by']:
repo_search = {pkgname: {'version': pkg_found.get('version'),
'repository': 'unknown',
'description': pkg_found.get('description')}}
if repo_search:
repo_pkgs = []
for name, data in repo_search.items():
pkg = ArchPackage(name=name, i18n=self.i18n, **data)
pkg.downgrade_enabled = True
repo_pkgs.append(pkg)
if repo_pkgs:
read_installed.join()
repo_installed = {p.name: p for p in installed if p.repository != 'aur'} if installed else {}
for pkg in repo_pkgs:
pkg_installed = repo_installed.get(pkg.name)
if pkg_installed:
res.installed.append(pkg_installed)
else:
pkg.installed = False
res.new.append(pkg)
def _search_in_aur_and_fill(self, words: str, disk_loader: DiskCacheLoader, read_installed: Thread, installed: List[ArchPackage], res: SearchResult):
api_res = self.aur_client.search(words)
if api_res and api_res.get('results'):
read_installed.join()
aur_installed = {p.name: p for p in installed if p.repository == 'aur'}
downgrade_enabled = git.is_enabled()
for pkgdata in api_res['results']:
self._upgrade_search_result(pkgdata, aur_installed, downgrade_enabled, res, disk_loader)
        else:  # if the API returns no results (possibly because there were too many matches), fall back to the local names index:
if self.index_aur:
self.index_aur.join()
aur_index = self.aur_client.read_local_index()
if aur_index:
self.logger.info("Querying through the local AUR index")
to_query = set()
for norm_name, real_name in aur_index.items():
if words in norm_name:
to_query.add(real_name)
if len(to_query) == 25:
break
pkgsinfo = self.aur_client.get_info(to_query)
if pkgsinfo:
read_installed.join()
aur_installed = {p.name: p for p in installed if p.repository == 'aur'}
downgrade_enabled = git.is_enabled()
for pkgdata in pkgsinfo:
self._upgrade_search_result(pkgdata, aur_installed, downgrade_enabled, res, disk_loader)
def search(self, words: str, disk_loader: DiskCacheLoader, limit: int = -1, is_url: bool = False) -> SearchResult:
if is_url:
return SearchResult([], [], 0)
arch_config = read_config()
if not any([arch_config['repositories'], arch_config['aur']]):
return SearchResult([], [], 0)
installed = []
read_installed = Thread(target=lambda: installed.extend(self.read_installed(disk_loader=disk_loader,
only_apps=False,
limit=-1,
internet_available=True).installed), daemon=True)
read_installed.start()
res = SearchResult([], [], 0)
if not any((arch_config['aur'], arch_config['repositories'])):
return res
mapped_words = self.get_semantic_search_map().get(words)
final_words = mapped_words or words
aur_search = None
if arch_config['aur']:
aur_search = Thread(target=self._search_in_aur_and_fill, args=(final_words, disk_loader, read_installed, installed, res), daemon=True)
aur_search.start()
if arch_config['repositories']:
self._search_in_repos_and_fill(final_words, disk_loader, read_installed, installed, res)
if aur_search:
aur_search.join()
res.total = len(res.installed) + len(res.new)
return res
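    # If the AUR API cannot be reached, installed AUR packages are still
    # listed using only the local pacman data (latest_version falls back to
    # the installed version).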
def _fill_aur_pkgs(self, aur_pkgs: dict, output: list, disk_loader: DiskCacheLoader, internet_available: bool):
downgrade_enabled = git.is_enabled()
if internet_available:
try:
pkgsinfo = self.aur_client.get_info(aur_pkgs.keys())
if pkgsinfo:
for pkgdata in pkgsinfo:
pkg = self.mapper.map_api_data(pkgdata, aur_pkgs, self.categories)
pkg.downgrade_enabled = downgrade_enabled
if disk_loader:
disk_loader.fill(pkg)
pkg.status = PackageStatus.READY
output.append(pkg)
return
except requests.exceptions.ConnectionError:
self.logger.warning('Could not retrieve installed AUR packages API data. It seems the internet connection is off.')
self.logger.info("Reading only local AUR packages data")
for name, data in aur_pkgs.items():
pkg = ArchPackage(name=name, version=data.get('version'),
latest_version=data.get('version'), description=data.get('description'),
installed=True, repository='aur', i18n=self.i18n)
pkg.categories = self.categories.get(pkg.name)
pkg.downgrade_enabled = downgrade_enabled
if disk_loader:
disk_loader.fill(pkg)
pkg.status = PackageStatus.READY
output.append(pkg)
def _fill_repo_updates(self, updates: dict):
updates.update(pacman.list_repository_updates())
def _fill_repo_pkgs(self, repo_pkgs: dict, pkgs: list, disk_loader: DiskCacheLoader):
updates = {}
thread_updates = Thread(target=self._fill_repo_updates, args=(updates,), daemon=True)
thread_updates.start()
repo_map = pacman.map_repositories(repo_pkgs)
if len(repo_map) != len(repo_pkgs):
self.logger.warning("Not mapped all signed packages repositories. Mapped: {}. Total: {}".format(len(repo_map), len(repo_pkgs)))
thread_updates.join()
self.logger.info("Repository updates found" if updates else "No repository updates found")
for name, data in repo_pkgs.items():
pkgversion = data.get('version')
pkgrepo = repo_map.get(name)
pkg = ArchPackage(name=name,
version=pkgversion,
latest_version=pkgversion,
description=data.get('description'),
maintainer=pkgrepo,
i18n=self.i18n,
installed=True,
repository=pkgrepo,
categories=self.categories.get(name))
pkg.downgrade_enabled = False
if updates:
update_version = updates.get(pkg.name)
if update_version:
pkg.latest_version = update_version
pkg.update = True
if disk_loader:
disk_loader.fill(pkg)
pkgs.append(pkg)
def read_installed(self, disk_loader: DiskCacheLoader, limit: int = -1, only_apps: bool = False, pkg_types: Set[Type[SoftwarePackage]] = None, internet_available: bool = None) -> SearchResult:
self.aur_client.clean_caches()
arch_config = read_config()
installed = pacman.map_installed()
aur_pkgs, repo_pkgs = None, None
if arch_config['repositories'] and installed['signed']:
repo_pkgs = installed['signed']
if installed['not_signed']:
if self.index_aur:
self.index_aur.join()
aur_index = self.aur_client.read_index()
for pkg in {*installed['not_signed']}:
if pkg not in aur_index:
if repo_pkgs is not None:
repo_pkgs[pkg] = installed['not_signed'][pkg]
if arch_config['aur']:
del installed['not_signed'][pkg]
if arch_config['aur']:
aur_pkgs = installed['not_signed']
pkgs = []
if repo_pkgs or aur_pkgs:
map_threads = []
if aur_pkgs:
t = Thread(target=self._fill_aur_pkgs, args=(aur_pkgs, pkgs, disk_loader, internet_available), daemon=True)
t.start()
map_threads.append(t)
if repo_pkgs:
t = Thread(target=self._fill_repo_pkgs, args=(repo_pkgs, pkgs, disk_loader), daemon=True)
t.start()
map_threads.append(t)
for t in map_threads:
t.join()
return SearchResult(pkgs, None, len(pkgs))
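    # AUR downgrade: clone the package's AUR git repository, walk the commit
    # history back past the currently installed version and build that older
    # revision.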
def _downgrade_aur_pkg(self, context: TransactionContext):
context.build_dir = '{}/build_{}'.format(BUILD_DIR, int(time.time()))
try:
if not os.path.exists(context.build_dir):
build_dir = context.handler.handle(SystemProcess(new_subprocess(['mkdir', '-p', context.build_dir])))
if build_dir:
context.handler.watcher.change_progress(10)
base_name = context.get_base_name()
context.watcher.change_substatus(self.i18n['arch.clone'].format(bold(context.name)))
clone = context.handler.handle(SystemProcess(subproc=new_subprocess(['git', 'clone', URL_GIT.format(base_name)],
cwd=context.build_dir), check_error_output=False))
context.watcher.change_progress(30)
if clone:
context.watcher.change_substatus(self.i18n['arch.downgrade.reading_commits'])
clone_path = '{}/{}'.format(context.build_dir, base_name)
context.project_dir = clone_path
srcinfo_path = '{}/.SRCINFO'.format(clone_path)
commits = run_cmd("git log", cwd=clone_path)
context.watcher.change_progress(40)
if commits:
commit_list = re.findall(r'commit (.+)\n', commits)
if commit_list:
if len(commit_list) > 1:
srcfields = {'pkgver', 'pkgrel'}
commit_found = None
for idx in range(1, len(commit_list)):
commit = commit_list[idx]
with open(srcinfo_path) as f:
pkgsrc = aur.map_srcinfo(f.read(), srcfields)
reset_proc = new_subprocess(['git', 'reset', '--hard', commit], cwd=clone_path)
if not context.handler.handle(SystemProcess(reset_proc, check_error_output=False)):
context.handler.watcher.print('Could not downgrade anymore. Aborting...')
return False
if '{}-{}'.format(pkgsrc.get('pkgver'), pkgsrc.get('pkgrel')) == context.get_version():
# current version found
commit_found = commit
elif commit_found:
context.watcher.change_substatus(self.i18n['arch.downgrade.version_found'])
checkout_proc = new_subprocess(['git', 'checkout', commit_found], cwd=clone_path)
if not context.handler.handle(SystemProcess(checkout_proc, check_error_output=False)):
context.watcher.print("Could not rollback to current version's commit")
return False
reset_proc = new_subprocess(['git', 'reset', '--hard', commit_found], cwd=clone_path)
if not context.handler.handle(SystemProcess(reset_proc, check_error_output=False)):
context.watcher.print("Could not downgrade to previous commit of '{}'. Aborting...".format(commit_found))
return False
break
context.watcher.change_substatus(self.i18n['arch.downgrade.install_older'])
return self._build(context)
else:
context.watcher.show_message(title=self.i18n['arch.downgrade.error'],
body=self.i18n['arch.downgrade.impossible'].format(context.name),
type_=MessageType.ERROR)
return False
context.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.downgrade.no_commits'],
type_=MessageType.ERROR)
return False
finally:
if os.path.exists(context.build_dir):
context.handler.handle(SystemProcess(subproc=new_subprocess(['rm', '-rf', context.build_dir])))
return False
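    # Repository downgrade relies on older package files still present in
    # pacman's local cache (/var/cache/pacman/pkg).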
def _downgrade_repo_pkg(self, context: TransactionContext):
context.watcher.change_substatus(self.i18n['arch.downgrade.searching_stored'])
if not os.path.isdir('/var/cache/pacman/pkg'):
context.watcher.show_message(title=self.i18n['arch.downgrade.error'],
body=self.i18n['arch.downgrade.repo_pkg.no_versions'],
type_=MessageType.ERROR)
return False
available_files = glob.glob("/var/cache/pacman/pkg/{}-*.pkg.tar.*".format(context.name))
if not available_files:
context.watcher.show_message(title=self.i18n['arch.downgrade.error'],
body=self.i18n['arch.downgrade.repo_pkg.no_versions'],
type_=MessageType.ERROR)
return False
reg = re.compile(r'{}-([\w.\-]+)-(x86_64|any|i686).pkg'.format(context.name))
versions, version_files = [], {}
for file_path in available_files:
found = reg.findall(os.path.basename(file_path))
if found:
ver = found[0][0]
if ver not in versions and ver < context.get_version():
versions.append(ver)
version_files[ver] = file_path
context.watcher.change_progress(40)
if not versions:
context.watcher.show_message(title=self.i18n['arch.downgrade.error'],
body=self.i18n['arch.downgrade.repo_pkg.no_versions'],
type_=MessageType.ERROR)
return False
versions.sort(reverse=True)
context.watcher.change_progress(50)
context.install_file = version_files[versions[0]]
if not self._handle_missing_deps(context=context):
return False
context.watcher.change_substatus(self.i18n['arch.downgrade.install_older'])
context.watcher.change_progress(60)
return self._install(context)
def downgrade(self, pkg: ArchPackage, root_password: str, watcher: ProcessWatcher) -> bool:
self.aur_client.clean_caches()
if not self._check_action_allowed(pkg, watcher):
return False
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
return False
context = TransactionContext(name=pkg.name, base=pkg.get_base_name(), skip_opt_deps=True,
change_progress=True, dependency=False, repository=pkg.repository, pkg=pkg,
arch_config=read_config(), watcher=watcher, handler=handler, root_password=root_password)
self._sync_databases(context.config, root_password, handler)
watcher.change_progress(5)
if pkg.repository == 'aur':
return self._downgrade_aur_pkg(context)
else:
return self._downgrade_repo_pkg(context)
def clean_cache_for(self, pkg: ArchPackage):
if os.path.exists(pkg.get_disk_cache_path()):
shutil.rmtree(pkg.get_disk_cache_path())
def _check_action_allowed(self, pkg: ArchPackage, watcher: ProcessWatcher) -> bool:
if user.is_root() and pkg.repository == 'aur':
watcher.show_message(title=self.i18n['arch.install.aur.root_error.title'],
body=self.i18n['arch.install.aur.root_error.body'],
type_=MessageType.ERROR)
return False
return True
def _is_database_locked(self, handler: ProcessHandler, root_password: str) -> bool:
if os.path.exists('/var/lib/pacman/db.lck'):
handler.watcher.print('pacman database is locked')
msg = '<p>{}</p><p>{}</p><br/>'.format(self.i18n['arch.action.db_locked.body.l1'],
self.i18n['arch.action.db_locked.body.l2'])
if handler.watcher.request_confirmation(title=self.i18n['arch.action.db_locked.title'].capitalize(),
body=msg,
confirmation_label=self.i18n['arch.action.db_locked.confirmation'].capitalize(),
deny_label=self.i18n['cancel'].capitalize()):
try:
if not handler.handle_simple(SimpleProcess(['rm', '-rf', '/var/lib/pacman/db.lck'], root_password=root_password)):
handler.watcher.show_message(title=self.i18n['error'].capitalize(),
body=self.i18n['arch.action.db_locked.error'],
type_=MessageType.ERROR)
return True
except:
self.logger.error("An error occurred while removing the pacman database lock")
traceback.print_exc()
handler.watcher.show_message(title=self.i18n['error'].capitalize(),
body=self.i18n['arch.action.db_locked.error'],
type_=MessageType.ERROR)
return True
else:
handler.watcher.print('Action cancelled by the user. Aborting...')
return True
return False
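    # Parses pacman's "conflicting files" error output into the text
    # components shown to the user in the confirmation dialog.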
def _map_conflicting_file(self, output: str) -> List[TextComponent]:
error_idx = None
lines = output.split('\n')
for idx, l in enumerate(lines):
if l and l.strip().lower().startswith('error: failed to commit transaction (conflicting files)'):
error_idx = idx
break
files = []
if error_idx and error_idx + 1 < len(lines):
for idx in range(error_idx + 1, len(lines)):
line = lines[idx].strip()
if line and self.re_file_conflict.match(line):
files.append(TextComponent(' - {}'.format(line)))
return files
def _upgrade_repo_pkgs(self, pkgs: List[str], handler: ProcessHandler, root_password: str, overwrite_files: bool = False,
status_handler: TransactionStatusHandler = None) -> bool:
try:
if status_handler:
output_handler = status_handler
else:
output_handler = TransactionStatusHandler(handler.watcher, self.i18n, len(pkgs), self.logger)
output_handler.start()
success, upgrade_output = handler.handle_simple(pacman.upgrade_several(pkgnames=pkgs,
root_password=root_password,
overwrite_conflicting_files=overwrite_files),
output_handler=output_handler.handle,)
handler.watcher.change_substatus('')
if success:
output_handler.stop_working()
output_handler.join()
handler.watcher.print("Repository packages successfully upgraded")
handler.watcher.change_substatus(self.i18n['arch.upgrade.caching_pkgs_data'])
repo_map = pacman.map_repositories(pkgs)
pkg_map = {}
for name in pkgs:
repo = repo_map.get(name)
pkg_map[name] = ArchPackage(name=name,
repository=repo,
maintainer=repo,
categories=self.categories.get(name))
disk.save_several(pkg_map, overwrite=True, maintainer=None)
return True
elif 'conflicting files' in upgrade_output:
files = self._map_conflicting_file(upgrade_output)
if not handler.watcher.request_confirmation(title=self.i18n['warning'].capitalize(),
body=self.i18n['arch.upgrade.error.conflicting_files'] + ':',
deny_label=self.i18n['arch.upgrade.conflicting_files.proceed'],
confirmation_label=self.i18n['arch.upgrade.conflicting_files.stop'],
components=files):
return self._upgrade_repo_pkgs(pkgs=pkgs,
handler=handler,
root_password=root_password,
overwrite_files=True,
status_handler=output_handler)
else:
output_handler.stop_working()
output_handler.join()
handler.watcher.print("Aborted by the user")
return False
else:
output_handler.stop_working()
output_handler.join()
self.logger.error("'pacman' returned an unexpected response or error phrase after upgrading the repository packages")
return False
except:
handler.watcher.change_substatus('')
handler.watcher.print("An error occurred while upgrading repository packages")
self.logger.error("An error occurred while upgrading repository packages")
traceback.print_exc()
return False
def upgrade(self, requirements: UpgradeRequirements, root_password: str, watcher: ProcessWatcher) -> bool:
self.aur_client.clean_caches()
watcher.change_status("{}...".format(self.i18n['manage_window.status.upgrading']))
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
watcher.change_substatus('')
return False
aur_pkgs, repo_pkgs = [], []
for req in (*requirements.to_install, *requirements.to_upgrade):
if req.pkg.repository == 'aur':
aur_pkgs.append(req.pkg)
else:
repo_pkgs.append(req.pkg)
if aur_pkgs and not self._check_action_allowed(aur_pkgs[0], watcher):
return False
arch_config = read_config()
self._sync_databases(arch_config=arch_config, root_password=root_password, handler=handler)
if requirements.to_remove:
to_remove_names = {r.pkg.name for r in requirements.to_remove}
try:
success = handler.handle(pacman.remove_several(to_remove_names, root_password))
if not success:
self.logger.error("Could not remove packages: {}".format(', '.join(to_remove_names)))
return False
except:
self.logger.error("An error occured while removing packages: {}".format(', '.join(to_remove_names)))
traceback.print_exc()
return False
if repo_pkgs:
repo_pkgs_names = [p.name for p in repo_pkgs]
watcher.change_status('{}...'.format(self.i18n['arch.upgrade.upgrade_repo_pkgs']))
self.logger.info("Upgrading {} repository packages: {}".format(len(repo_pkgs_names),
', '.join(repo_pkgs_names)))
if not self._upgrade_repo_pkgs(pkgs=repo_pkgs_names, handler=handler, root_password=root_password):
return False
watcher.change_status('{}...'.format(self.i18n['arch.upgrade.upgrade_aur_pkgs']))
if aur_pkgs:
for pkg in aur_pkgs:
watcher.change_substatus("{} {} ({})...".format(self.i18n['manage_window.status.upgrading'], pkg.name, pkg.version))
context = TransactionContext.gen_context_from(pkg=pkg, arch_config=arch_config,
root_password=root_password, handler=handler)
context.change_progress = False
try:
if not self.install(pkg=pkg, root_password=root_password, watcher=watcher, context=context):
watcher.print(self.i18n['arch.upgrade.fail'].format('"{}"'.format(pkg.name)))
self.logger.error("Could not upgrade AUR package '{}'".format(pkg.name))
watcher.change_substatus('')
return False
else:
watcher.print(self.i18n['arch.upgrade.success'].format('"{}"'.format(pkg.name)))
except:
watcher.print(self.i18n['arch.upgrade.fail'].format('"{}"'.format(pkg.name)))
watcher.change_substatus('')
self.logger.error("An error occurred when upgrading AUR package '{}'".format(pkg.name))
traceback.print_exc()
return False
watcher.change_substatus('')
return True
def _uninstall_pkgs(self, pkgs: Iterable[str], root_password: str, handler: ProcessHandler) -> bool:
all_uninstalled, _ = handler.handle_simple(SimpleProcess(cmd=['pacman', '-R', *pkgs, '--noconfirm'],
root_password=root_password,
error_phrases={'error: failed to prepare transaction',
'error: failed to commit transaction'}))
installed = pacman.list_installed_names()
for p in pkgs:
if p not in installed:
cache_path = ArchPackage.disk_cache_path(p)
if os.path.exists(cache_path):
shutil.rmtree(cache_path)
return all_uninstalled
def _request_uninstall_confirmation(self, pkgs: Iterable[str], context: TransactionContext) -> bool:
reqs = [InputOption(label=p, value=p, icon_path=get_icon_path(), read_only=True) for p in pkgs]
reqs_select = MultipleSelectComponent(options=reqs, default_options=set(reqs), label="", max_per_line=3)
msg = '<p>{}</p><p>{}</p>'.format(self.i18n['arch.uninstall.required_by'].format(bold(context.name), bold(str(len(reqs)))),
self.i18n['arch.uninstall.required_by.advice'])
if not context.watcher.request_confirmation(title=self.i18n['confirmation'].capitalize(),
body=msg,
components=[reqs_select],
confirmation_label=self.i18n['proceed'].capitalize(),
deny_label=self.i18n['cancel'].capitalize(),
window_cancel=False):
context.watcher.print("Aborted")
return False
return True
def _request_unncessary_uninstall_confirmation(self, pkgs: Iterable[str], context: TransactionContext) -> List[str]:
reqs = [InputOption(label=p, value=p, icon_path=get_icon_path(), read_only=False) for p in pkgs]
reqs_select = MultipleSelectComponent(options=reqs, default_options=set(reqs), label="", max_per_line=3)
msg = '<p>{}</p><p>{}:</p>'.format(self.i18n['arch.uninstall.unnecessary.l1'].format(bold(context.name)),
self.i18n['arch.uninstall.unnecessary.l2'])
if not context.watcher.request_confirmation(title=self.i18n['confirmation'].capitalize(),
body=msg,
components=[reqs_select],
confirmation_label=self.i18n['arch.uninstall.unnecessary.proceed'].capitalize(),
deny_label=self.i18n['arch.uninstall.unnecessary.cancel'].capitalize(),
window_cancel=False):
            return []
return reqs_select.get_selected_values()
def _request_all_unncessary_uninstall_confirmation(self, pkgs: Iterable[str], context: TransactionContext):
reqs = [InputOption(label=p, value=p, icon_path=get_icon_path(), read_only=True) for p in pkgs]
reqs_select = MultipleSelectComponent(options=reqs, default_options=set(reqs), label="", max_per_line=1)
if not context.watcher.request_confirmation(title=self.i18n['confirmation'].capitalize(),
body=self.i18n['arch.uninstall.unnecessary.all'].format(bold(str(len(pkgs)))),
components=[reqs_select],
confirmation_label=self.i18n['proceed'].capitalize(),
deny_label=self.i18n['cancel'].capitalize(),
window_cancel=False):
context.watcher.print("Aborted")
return False
return True
def _uninstall(self, context: TransactionContext, remove_unneeded: bool = False):
self._update_progress(context, 10)
required_by = self.deps_analyser.map_all_required_by({context.name}, set())
self._update_progress(context, 50)
to_uninstall = set()
to_uninstall.add(context.name)
if required_by:
to_uninstall.update(required_by)
if not self._request_uninstall_confirmation(required_by, context):
return False
if remove_unneeded:
all_deps_map = pacman.map_all_deps(names=to_uninstall, only_installed=True) # retrieving the deps to check if they are still necessary
else:
all_deps_map = None
uninstalled = self._uninstall_pkgs(to_uninstall, context.root_password, context.handler)
if uninstalled:
self._update_progress(context, 70)
if all_deps_map:
context.watcher.change_substatus(self.i18n['arch.checking_unnecessary_deps'])
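# Translate every declared dependency of the removed packages into the real installed packages
# that provide it, so the "no longer needed" check below runs against concrete package names.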
all_deps = set()
all_provided = pacman.map_provided(remote=False)
for deps in all_deps_map.values():
for dep in deps:
real_deps = all_provided.get(dep)
if real_deps:
all_deps.update(real_deps)
if all_deps:
self.logger.info("Mapping dependencies required packages of uninstalled packages")
alldeps_reqs = pacman.map_required_by(all_deps)
no_longer_needed = set()
if alldeps_reqs:
for dep, reqs in alldeps_reqs.items():
if not reqs:
no_longer_needed.add(dep)
if no_longer_needed:
self.logger.info("{} packages no longer needed found".format(len(no_longer_needed)))
unnecessary_to_uninstall = self._request_unncessary_uninstall_confirmation(no_longer_needed, context)
if unnecessary_to_uninstall:
unnecessary_to_uninstall_deps = pacman.list_unnecessary_deps(unnecessary_to_uninstall, all_provided)
all_unnecessary_to_uninstall = {*unnecessary_to_uninstall, *unnecessary_to_uninstall_deps}
if not unnecessary_to_uninstall_deps or self._request_all_unncessary_uninstall_confirmation(all_unnecessary_to_uninstall, context):
unneded_uninstalled = self._uninstall_pkgs(all_unnecessary_to_uninstall, context.root_password, context.handler)
if unneded_uninstalled:
to_uninstall.update(all_unnecessary_to_uninstall)
else:
self.logger.error("Could not uninstall some unnecessary packages")
context.watcher.print("Could not uninstall some unnecessary packages")
self._update_progress(context, 90)
if bool(context.config['clean_cached']): # cleaning old versions
context.watcher.change_substatus(self.i18n['arch.uninstall.clean_cached.substatus'])
if os.path.isdir('/var/cache/pacman/pkg'):
for p in to_uninstall:
available_files = glob.glob("/var/cache/pacman/pkg/{}-*.pkg.tar.*".format(p))
if available_files and not context.handler.handle_simple(SimpleProcess(cmd=['rm', '-rf', *available_files],
root_password=context.root_password)):
context.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.uninstall.clean_cached.error'].format(bold(p)),
type_=MessageType.WARNING)
self._update_progress(context, 100)
return uninstalled
def uninstall(self, pkg: ArchPackage, root_password: str, watcher: ProcessWatcher) -> bool:
self.aur_client.clean_caches()
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
return False
return self._uninstall(TransactionContext(name=pkg.name,
change_progress=True,
arch_config=read_config(),
watcher=watcher,
root_password=root_password,
handler=handler),
remove_unneeded=True)
def get_managed_types(self) -> Set["type"]:
return {ArchPackage}
def _get_info_aur_pkg(self, pkg: ArchPackage) -> dict:
if pkg.installed:
t = Thread(target=self.mapper.fill_package_build, args=(pkg,), daemon=True)
t.start()
info = pacman.get_info_dict(pkg.name)
t.join()
if pkg.pkgbuild:
info['13_pkg_build'] = pkg.pkgbuild
info['14_installed_files'] = pacman.list_installed_files(pkg.name)
return info
else:
info = {
'01_id': pkg.id,
'02_name': pkg.name,
'03_description': pkg.description,
'03_version': pkg.version,
'04_popularity': pkg.popularity,
'05_votes': pkg.votes,
'06_package_base': pkg.package_base,
'07_maintainer': pkg.maintainer,
'08_first_submitted': pkg.first_submitted,
'09_last_modified': pkg.last_modified,
'10_url': pkg.url_download
}
srcinfo = self.aur_client.get_src_info(pkg.name)
if srcinfo:
arch_str = 'x86_64' if self.context.is_system_x86_64() else 'i686'
for info_attr, src_attr in {'12_makedepends': 'makedepends',
'13_dependson': 'depends',
'14_optdepends': 'optdepends',
'15_checkdepends': 'checkdepends'}.items():
if srcinfo.get(src_attr):
info[info_attr] = [*srcinfo[src_attr]]
arch_attr = '{}_{}'.format(src_attr, arch_str)
if srcinfo.get(arch_attr):
if not info.get(info_attr):
info[info_attr] = [*srcinfo[arch_attr]]
else:
info[info_attr].extend(srcinfo[arch_attr])
if pkg.pkgbuild:
info['00_pkg_build'] = pkg.pkgbuild
else:
info['11_pkg_build_url'] = pkg.get_pkg_build_url()
return info
def _get_info_repo_pkg(self, pkg: ArchPackage) -> dict:
info = pacman.get_info_dict(pkg.name, remote=not pkg.installed)
if pkg.installed:
info['installed files'] = pacman.list_installed_files(pkg.name)
return info
def get_info(self, pkg: ArchPackage) -> dict:
if pkg.repository == 'aur':
return self._get_info_aur_pkg(pkg)
else:
return self._get_info_repo_pkg(pkg)
def _get_history_aur_pkg(self, pkg: ArchPackage) -> PackageHistory:
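# Builds the version history of an AUR package by cloning its git repository into a temporary
# directory and walking the commit list, re-reading .SRCINFO after each 'git reset --hard' to
# recover the pkgver/pkgrel associated with every commit.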
temp_dir = '{}/build_{}'.format(BUILD_DIR, int(time.time()))
try:
Path(temp_dir).mkdir(parents=True)
base_name = pkg.get_base_name()
run_cmd('git clone ' + URL_GIT.format(base_name), print_error=False, cwd=temp_dir)
clone_path = '{}/{}'.format(temp_dir, base_name)
srcinfo_path = '{}/.SRCINFO'.format(clone_path)
commits = git.list_commits(clone_path)
if commits:
srcfields = {'pkgver', 'pkgrel'}
history, status_idx = [], -1
for idx, commit in enumerate(commits):
with open(srcinfo_path) as f:
pkgsrc = aur.map_srcinfo(f.read(), srcfields)
if status_idx < 0 and '{}-{}'.format(pkgsrc.get('pkgver'), pkgsrc.get('pkgrel')) == pkg.version:
status_idx = idx
history.append({'1_version': pkgsrc['pkgver'], '2_release': pkgsrc['pkgrel'],
'3_date': commit['date']}) # the number prefix is to ensure the rendering order
if idx + 1 < len(commits):
if not run_cmd('git reset --hard ' + commits[idx + 1]['commit'], cwd=clone_path):
break
return PackageHistory(pkg=pkg, history=history, pkg_status_idx=status_idx)
finally:
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
def _get_history_repo_pkg(self, pkg: ArchPackage) -> PackageHistory:
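# Reconstructs the history of a repository package from the local pacman cache: every cached
# tarball matching the package name is partially extracted to read the 'builddate' field of its
# .PKGINFO, while the currently installed version falls back to pacman's build date when no
# cached file exists.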
data = PackageHistory(pkg=pkg, history=[], pkg_status_idx=-1)
versions = [pkg.latest_version]
version_files = {} # maps the version and tar file
if pkg.update:
versions.append(pkg.version)
if os.path.isdir('/var/cache/pacman/pkg'):
available_files = glob.glob("/var/cache/pacman/pkg/{}-*.pkg.tar.*".format(pkg.name))
if available_files:
reg = re.compile(r'{}-([\w.\-]+)-(x86_64|any|i686).pkg'.format(pkg.name))
for file_path in available_files:
found = reg.findall(os.path.basename(file_path))
if found:
ver = found[0][0]
if ver not in versions:
versions.append(ver)
version_files[ver] = file_path
versions.sort(reverse=True)
extract_path = '{}/arch/history'.format(TEMP_DIR)
try:
Path(extract_path).mkdir(parents=True, exist_ok=True)
except:
self.logger.error("Could not create temp dir {} to extract previous versions data".format(extract_path))
traceback.print_exc()
return data
try:
for idx, v in enumerate(versions):
cur_version = v.split('-')
cur_data = {'1_version': ''.join(cur_version[0:-1]),
'2_release': cur_version[-1],
'3_date': ''}
if pkg.version == v:
data.pkg_status_idx = idx
version_file = version_files.get(v)
if not version_file:
if v == pkg.version:
cur_data['3_date'] = pacman.get_build_date(pkg.name)
else:
extracted_dir = '{}/{}'.format(extract_path, v)
Path(extracted_dir).mkdir(parents=True, exist_ok=True)
try:
filext = version_file.split('.')[-1]
run_cmd('tar -C {} -I {} -xvf {} .PKGINFO'.format(extracted_dir, 'zstd' if filext == 'zst' else filext, version_file))
except tarfile.ReadError:
if v == pkg.version:
cur_data['3_date'] = pacman.get_build_date(pkg.name)
else:
self.logger.error("Could not read file {}. Skipping version {}".format(version_file, v))
continue
info_file = '{}/.PKGINFO'.format(extracted_dir)
if os.path.isfile(info_file):
with open(info_file) as f:
for l in f.readlines():
if l and l.startswith('builddate'):
cur_data['3_date'] = datetime.fromtimestamp(int(l.split('=')[1].strip()))
break
data.history.append(cur_data)
return data
finally:
if os.path.exists(extract_path):
try:
self.logger.info("Removing temporary history dir {}".format(extract_path))
shutil.rmtree(extract_path)
except:
self.logger.error("Could not remove temp path '{}'".format(extract_path))
raise
def get_history(self, pkg: ArchPackage) -> PackageHistory:
if pkg.repository == 'aur':
return self._get_history_aur_pkg(pkg)
else:
return self._get_history_repo_pkg(pkg)
def _request_conflict_resolution(self, pkg: str, conflicting_pkg: str, context: TransactionContext) -> bool:
conflict_msg = '{} {} {}'.format(bold(pkg), self.i18n['and'], bold(conflicting_pkg))
if not context.watcher.request_confirmation(title=self.i18n['arch.install.conflict.popup.title'],
body=self.i18n['arch.install.conflict.popup.body'].format(conflict_msg)):
context.watcher.print(self.i18n['action.cancelled'])
return False
else:
context.watcher.change_substatus(self.i18n['arch.uninstalling.conflict'].format(bold(conflicting_pkg)))
conflict_context = context.clone_base()
conflict_context.change_progress = False
conflict_context.name = conflicting_pkg
if not self._uninstall(conflict_context):
context.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.uninstalling.conflict.fail'].format(bold(conflicting_pkg)),
type_=MessageType.ERROR)
return False
return True
def _install_deps(self, context: TransactionContext, deps: List[Tuple[str, str]]) -> Iterable[str]:
"""
:param pkgs_repos:
:param root_password:
:param handler:
:return: not installed dependency
"""
progress_increment = int(100 / len(deps))
progress = 0
self._update_progress(context, 1)
repo_deps, repo_dep_names, aur_deps_context = [], None, []
for dep in deps:
context.watcher.change_substatus(self.i18n['arch.install.dependency.install'].format(bold('{} ({})'.format(dep[0], dep[1]))))
if dep[1] == 'aur':
dep_context = context.gen_dep_context(dep[0], dep[1])
dep_src = self.aur_client.get_src_info(dep[0])
dep_context.base = dep_src['pkgbase']
aur_deps_context.append(dep_context)
else:
repo_deps.append(dep)
if repo_deps:
repo_dep_names = [d[0] for d in repo_deps]
context.watcher.change_substatus(self.i18n['arch.checking.conflicts'].format(bold(context.name)))
all_provided = context.get_provided_map()
for dep, conflicts in pacman.map_conflicts_with(repo_dep_names, remote=True).items():
if conflicts:
for c in conflicts:
source_conflict = all_provided.get(c)
if source_conflict:
conflict_pkg = [*source_conflict][0]
if dep != conflict_pkg:
if not self._request_conflict_resolution(dep, conflict_pkg , context):
return {dep}
status_handler = TransactionStatusHandler(context.watcher, self.i18n, len(repo_dep_names), self.logger, percentage=len(repo_deps) > 1)
status_handler.start()
installed, _ = context.handler.handle_simple(pacman.install_as_process(pkgpaths=repo_dep_names,
root_password=context.root_password,
file=False),
output_handler=status_handler.handle)
if installed:
pkg_map = {d[0]: ArchPackage(name=d[0], repository=d[1], maintainer=d[1],
categories=self.categories.get(d[0])) for d in repo_deps}
disk.save_several(pkg_map, overwrite=True, maintainer=None)
progress += len(repo_deps) * progress_increment
self._update_progress(context, progress)
else:
return repo_dep_names
for aur_context in aur_deps_context:
installed = self._install_from_aur(aur_context)
if not installed:
return {aur_context.name}
else:
progress += progress_increment
self._update_progress(context, progress)
self._update_progress(context, 100)
def _map_repos(self, pkgnames: Iterable[str]) -> dict:
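# Maps each package name to its repository. Names not found in the configured distro
# repositories are looked up through the AUR RPC and, when found there, mapped to 'aur'
# (e.g. something like {'firefox': 'extra', 'yay': 'aur'}).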
pkg_repos = pacman.get_repositories(pkgnames) # getting repositories set
if len(pkgnames) != len(pkg_repos): # checking if any deps not found in the distro repos come from the AUR
norepos = {p for p in pkgnames if p not in pkg_repos}
for pkginfo in self.aur_client.get_info(norepos):
if pkginfo.get('Name') in norepos:
pkg_repos[pkginfo['Name']] = 'aur'
return pkg_repos
def _pre_download_source(self, project_dir: str, watcher: ProcessWatcher) -> bool:
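# When the configured file downloader supports multi-threading, the sources declared in
# .SRCINFO are downloaded in advance (respecting the 'output_name::url' makepkg source syntax)
# so that makepkg itself does not have to fetch them single-threaded.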
if self.context.file_downloader.is_multithreaded():
with open('{}/.SRCINFO'.format(project_dir)) as f:
srcinfo = aur.map_srcinfo(f.read())
pre_download_files = []
for attr in SOURCE_FIELDS:
if srcinfo.get(attr):
if attr == 'source_x86_x64' and not self.context.is_system_x86_64():
continue
else:
for f in srcinfo[attr]:
if RE_PRE_DOWNLOAD_WL_PROTOCOLS.match(f) and not RE_PRE_DOWNLOAD_BL_EXT.match(f):
pre_download_files.append(f)
if pre_download_files:
for f in pre_download_files:
fdata = f.split('::')
args = {'watcher': watcher, 'cwd': project_dir}
if len(fdata) > 1:
args.update({'file_url': fdata[1], 'output_path': fdata[0]})
else:
args.update({'file_url': fdata[0], 'output_path': None})
if not self.context.file_downloader.download(**args):
watcher.print('Could not download source file {}'.format(args['file_url']))
return False
return True
def _build(self, context: TransactionContext) -> bool:
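# Builds an AUR package from its extracted sources: pre-downloads the declared source files,
# resolves missing dependencies and PGP keys, optionally switches the CPU governors to
# 'performance' while makepkg runs, and finally installs the generated .tar.xz/.tar.zst file.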
self._pre_download_source(context.project_dir, context.watcher)
self._update_progress(context, 50)
if not self._handle_aur_package_deps_and_keys(context):
return False
# building main package
context.watcher.change_substatus(self.i18n['arch.building.package'].format(bold(context.name)))
optimize = bool(context.config['optimize']) and cpu_manager.supports_performance_mode() and not cpu_manager.all_in_performance()
cpu_optimized = False
if optimize:
self.logger.info("Setting cpus to performance mode")
cpu_manager.set_mode('performance', context.root_password)
cpu_optimized = True
try:
pkgbuilt, output = makepkg.make(context.project_dir, optimize=optimize, handler=context.handler)
finally:
if cpu_optimized:
self.logger.info("Setting cpus to powersave mode")
cpu_manager.set_mode('powersave', context.root_password)
self._update_progress(context, 65)
if pkgbuilt:
gen_file = [fname for root, dirs, files in os.walk(context.build_dir) for fname in files if re.match(r'^{}-.+\.tar\.(xz|zst)'.format(context.name), fname)]
if not gen_file:
context.watcher.print('Could not find the built package. Aborting...')
return False
context.install_file = '{}/{}'.format(context.project_dir, gen_file[0])
if self._install(context=context):
if context.dependency or context.skip_opt_deps:
return True
context.watcher.change_substatus(self.i18n['arch.optdeps.checking'].format(bold(context.name)))
self._update_progress(context, 100)
if self._install_optdeps(context):
return True
return False
def _ask_and_install_missing_deps(self, context: TransactionContext, missing_deps: List[Tuple[str, str]]) -> bool:
context.watcher.change_substatus(self.i18n['arch.missing_deps_found'].format(bold(context.name)))
if not confirmation.request_install_missing_deps(context.name, missing_deps, context.watcher, self.i18n):
context.watcher.print(self.i18n['action.cancelled'])
return False
old_progress_behavior = context.change_progress
context.change_progress = False
deps_not_installed = self._install_deps(context, missing_deps)
context.change_progress = old_progress_behavior
if deps_not_installed:
message.show_deps_not_installed(context.watcher, context.name, deps_not_installed, self.i18n)
return False
return True
def _list_missing_deps(self, context: TransactionContext) -> List[Tuple[str, str]]:
context.watcher.change_substatus(self.i18n['arch.checking.deps'].format(bold(context.name)))
ti = time.time()
if context.repository == 'aur':
with open('{}/.SRCINFO'.format(context.project_dir)) as f:
srcinfo = aur.map_srcinfo(f.read())
pkgs_data = {context.name: self.aur_client.map_update_data(context.name, context.get_version(), srcinfo)}
else:
file = bool(context.install_file)
pkgs_data = pacman.map_updates_data({context.install_file if file else context.name}, files=file)
deps_data, already_checked_deps = {}, set()
missing_deps = self.deps_analyser.map_missing_deps(pkgs_data=pkgs_data,
provided_map=context.get_provided_map(),
aur_index=context.get_aur_idx(self.aur_client),
deps_checked=already_checked_deps,
deps_data=deps_data,
sort=True,
remote_provided_map=context.get_remote_provided_map(),
remote_repo_map=context.get_remote_repo_map(),
watcher=context.watcher)
tf = time.time()
self.logger.info("Took {0:.2f} seconds to check for missing dependencies".format(tf - ti))
return missing_deps
def _handle_missing_deps(self, context: TransactionContext) -> bool:
missing_deps = self._list_missing_deps(context)
if missing_deps is None:
return False # called off by the user
if not missing_deps:
return True
elif not self._ask_and_install_missing_deps(context=context, missing_deps=missing_deps):
return False # called off by the user or something went wrong
else:
return True
def _handle_aur_package_deps_and_keys(self, context: TransactionContext) -> bool:
handled_deps = self._handle_missing_deps(context)
if not handled_deps:
return False
check_res = makepkg.check(context.project_dir,
optimize=bool(context.config['optimize']),
missing_deps=False,
handler=context.handler)
if check_res:
if check_res.get('gpg_key'):
if context.watcher.request_confirmation(title=self.i18n['arch.install.aur.unknown_key.title'],
body=self.i18n['arch.install.aur.unknown_key.body'].format(bold(context.name), bold(check_res['gpg_key']))):
context.watcher.change_substatus(self.i18n['arch.aur.install.unknown_key.status'].format(bold(check_res['gpg_key'])))
self.logger.info("Importing GPG key {}".format(check_res['gpg_key']))
if not context.handler.handle(gpg.receive_key(check_res['gpg_key'])):
self.logger.error("An error occurred while importing the GPG key {}".format(check_res['gpg_key']))
context.watcher.show_message(title=self.i18n['error'].capitalize(),
body=self.i18n['arch.aur.install.unknown_key.receive_error'].format(bold(check_res['gpg_key'])))
return False
else:
context.watcher.print(self.i18n['action.cancelled'])
return False
if check_res.get('validity_check'):
body = "<p>{}</p><p>{}</p>".format(self.i18n['arch.aur.install.validity_check.body'].format(bold(context.name)),
self.i18n['arch.aur.install.validity_check.proceed'])
return not context.watcher.request_confirmation(title=self.i18n['arch.aur.install.validity_check.title'].format('( checksum )'),
body=body,
confirmation_label=self.i18n['no'].capitalize(),
deny_label=self.i18n['yes'].capitalize())
return True
def _install_optdeps(self, context: TransactionContext) -> bool:
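# Asks the user which optional dependencies should be installed, then resolves their own
# missing dependencies (repository and AUR) and installs everything in a sorted order.
# Returns True even when the user declines, since the main package was already installed.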
odeps = pacman.map_optional_deps({context.name}, remote=False, not_installed=True)[context.name]
if not odeps:
return True
repo_mapping = self._map_repos(odeps.keys())
if repo_mapping:
final_optdeps = {dep: {'desc': odeps.get(dep), 'repository': repository} for dep, repository in repo_mapping.items() if repository}
deps_to_install = confirmation.request_optional_deps(context.name, final_optdeps, context.watcher, self.i18n)
if not deps_to_install:
return True
else:
deps_data = {}
opt_repo_deps, aur_threads = [], []
for dep in deps_to_install:
if repo_mapping[dep] == 'aur':
t = Thread(target=self.aur_client.fill_update_data, args=(deps_data, dep, None, None), daemon=True)
t.start()
aur_threads.append(t)
else:
opt_repo_deps.append(dep)
if opt_repo_deps:
deps_data.update(pacman.map_updates_data(opt_repo_deps))
for t in aur_threads:
t.join()
provided_map = pacman.map_provided()
remote_provided_map = pacman.map_provided(remote=True)
remote_repo_map = pacman.map_repositories()
aur_index = self.aur_client.read_index() if aur_threads else None
subdeps_data = {}
missing_deps = self.deps_analyser.map_missing_deps(pkgs_data=deps_data,
provided_map=provided_map,
aur_index=aur_index,
deps_checked=set(),
deps_data=subdeps_data,
watcher=context.watcher,
remote_provided_map=remote_provided_map,
remote_repo_map=remote_repo_map,
sort=False)
if missing_deps is None:
return False # called off by the user
to_sort = []
if missing_deps:
for dep in missing_deps:
# TODO handle multiple providers for a missing dep
if dep[0] not in deps_to_install and dep[1] != '__several__':
to_sort.append(dep[0])
display_deps_dialog = bool(to_sort) # it means there are subdeps to be installed so a new dialog should be displayed
to_sort.extend(deps_data.keys())
sorted_deps = sorting.sort(to_sort, {**deps_data, **subdeps_data}, provided_map)
if display_deps_dialog and not confirmation.request_install_missing_deps(None, sorted_deps, context.watcher, self.i18n):
context.watcher.print(self.i18n['action.cancelled'])
return True # because the main package installation was successful
old_progress_behavior = context.change_progress
context.change_progress = True
deps_not_installed = self._install_deps(context, sorted_deps)
context.change_progress = old_progress_behavior
if deps_not_installed:
message.show_optdeps_not_installed(deps_not_installed, context.watcher, self.i18n)
return True
def _install(self, context: TransactionContext):
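# Performs the actual pacman transaction. A first 'pacman -U/-S' process is used only to
# inspect its output for conflict messages; confirmed conflicts are uninstalled after user
# approval, and the real installation then runs through pacman.install_as_process, with a
# TransactionStatusHandler translating the output into progress updates when the package is
# not being installed as a dependency.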
check_install_output = []
pkgpath = context.get_package_path()
context.watcher.change_substatus(self.i18n['arch.checking.conflicts'].format(bold(context.name)))
self.logger.info("Checking for possible conflicts with '{}'".format(context.name))
for check_out in SimpleProcess(cmd=['pacman', '-U' if context.install_file else '-S', pkgpath],
root_password=context.root_password,
cwd=context.project_dir or '.').instance.stdout:
check_install_output.append(check_out.decode())
self._update_progress(context, 70)
if check_install_output and 'conflict' in check_install_output[-1]:
self.logger.info("Conflicts detected for '{}'".format(context.name))
conflicting_apps = [w[0] for w in re.findall(r'((\w|\-|\.)+)\s(and|are)', check_install_output[-1])]
conflict_msg = ' {} '.format(self.i18n['and']).join([bold(c) for c in conflicting_apps])
if not context.watcher.request_confirmation(title=self.i18n['arch.install.conflict.popup.title'],
body=self.i18n['arch.install.conflict.popup.body'].format(conflict_msg)):
context.watcher.print(self.i18n['action.cancelled'])
return False
else: # uninstall conflicts
self._update_progress(context, 75)
to_uninstall = [conflict for conflict in conflicting_apps if conflict != context.name]
self.logger.info("Preparing to uninstall conflicting packages: {}".format(to_uninstall))
for conflict in to_uninstall:
context.watcher.change_substatus(self.i18n['arch.uninstalling.conflict'].format(bold(conflict)))
if not self._uninstall_pkgs(pkgs={conflict}, root_password=context.root_password, handler=context.handler):
context.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.uninstalling.conflict.fail'].format(bold(conflict)),
type_=MessageType.ERROR)
return False
else:
self.logger.info("No conflict detected for '{}'".format(context.name))
context.watcher.change_substatus(self.i18n['arch.installing.package'].format(bold(context.name)))
self._update_progress(context, 80)
to_install = []
if context.missing_deps:
to_install.extend((d[0] for d in context.missing_deps))
to_install.append(pkgpath)
if not context.dependency:
status_handler = TransactionStatusHandler(context.watcher, self.i18n, len(to_install), self.logger, percentage=len(to_install) > 1)
status_handler.start()
else:
status_handler = None
installed, _ = context.handler.handle_simple(pacman.install_as_process(pkgpaths=to_install,
root_password=context.root_password,
file=context.has_install_file(),
pkgdir=context.project_dir),
output_handler=status_handler.handle if status_handler else None)
if status_handler:
status_handler.stop_working()
status_handler.join()
self._update_progress(context, 95)
if installed:
context.watcher.change_substatus(self.i18n['status.caching_data'].format(bold(context.name)))
if not context.maintainer:
if context.pkg and context.pkg.maintainer:
pkg_maintainer = context.pkg.maintainer
elif context.repository == 'aur':
aur_infos = self.aur_client.get_info({context.name})
pkg_maintainer = aur_infos[0].get('Maintainer') if aur_infos else None
else:
pkg_maintainer = context.repository
else:
pkg_maintainer = context.maintainer
cache_map = {context.name: ArchPackage(name=context.name,
repository=context.repository,
maintainer=pkg_maintainer,
categories=self.categories.get(context.name))}
if context.missing_deps:
aur_deps = {dep[0] for dep in context.missing_deps if dep[1] == 'aur'}
if aur_deps:
aur_data = self.aur_client.get_info(aur_deps)
if aur_data:
aur_data = {info['Name']: info for info in aur_data}
else:
aur_data = {n: {} for n in aur_deps}
else:
aur_data = None
for dep in context.missing_deps:
cache_map[dep[0]] = ArchPackage(name=dep[0],
repository=dep[1],
maintainer=dep[1] if dep[1] != 'aur' else (aur_data[dep[0]].get('Maintainer') if aur_data else None),
categories=self.categories.get(context.name))
disk.save_several(pkgs=cache_map, maintainer=None, overwrite=True)
self._update_progress(context, 100)
return installed
def _update_progress(self, context: TransactionContext, val: int):
if context.change_progress:
context.watcher.change_progress(val)
def _import_pgp_keys(self, pkgname: str, root_password: str, handler: ProcessHandler):
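# Checks the 'validpgpkeys' entries of the package's .SRCINFO: keys not yet present in the
# pacman keyring are received and locally signed after the user confirms, so that source
# signature verification can succeed later. Returns False when the user cancels or a key
# cannot be imported.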
srcinfo = self.aur_client.get_src_info(pkgname)
if srcinfo.get('validpgpkeys'):
handler.watcher.print(self.i18n['arch.aur.install.verifying_pgp'])
keys_to_download = [key for key in srcinfo['validpgpkeys'] if not pacman.verify_pgp_key(key)]
if keys_to_download:
keys_str = ''.join(
['<br/><span style="font-weight:bold"> - {}</span>'.format(k) for k in keys_to_download])
msg_body = '{}:<br/>{}<br/><br/>{}'.format(self.i18n['arch.aur.install.pgp.body'].format(bold(pkgname)),
keys_str, self.i18n['ask.continue'])
if handler.watcher.request_confirmation(title=self.i18n['arch.aur.install.pgp.title'], body=msg_body):
for key in keys_to_download:
handler.watcher.change_substatus(self.i18n['arch.aur.install.pgp.substatus'].format(bold(key)))
if not handler.handle(pacman.receive_key(key, root_password)):
handler.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.aur.install.pgp.receive_fail'].format(
bold(key)),
type_=MessageType.ERROR)
return False
if not handler.handle(pacman.sign_key(key, root_password)):
handler.watcher.show_message(title=self.i18n['error'],
body=self.i18n['arch.aur.install.pgp.sign_fail'].format(
bold(key)),
type_=MessageType.ERROR)
return False
handler.watcher.change_substatus(self.i18n['arch.aur.install.pgp.success'])
else:
handler.watcher.print(self.i18n['action.cancelled'])
return False
def _install_from_aur(self, context: TransactionContext) -> bool:
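# Downloads the AUR snapshot tarball for the package base into a temporary build directory,
# extracts it and delegates the actual compilation/installation to _build(). The build
# directory is always removed afterwards, whether the build succeeded or not.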
self._optimize_makepkg(context.config, context.watcher)
context.build_dir = '{}/build_{}'.format(BUILD_DIR, int(time.time()))
try:
if not os.path.exists(context.build_dir):
build_dir = context.handler.handle(SystemProcess(new_subprocess(['mkdir', '-p', context.build_dir])))
self._update_progress(context, 10)
if build_dir:
base_name = context.get_base_name()
file_url = URL_PKG_DOWNLOAD.format(base_name)
file_name = file_url.split('/')[-1]
context.watcher.change_substatus('{} {}'.format(self.i18n['arch.downloading.package'], bold(file_name)))
download = context.handler.handle(SystemProcess(new_subprocess(['wget', file_url], cwd=context.build_dir), check_error_output=False))
if download:
self._update_progress(context, 30)
context.watcher.change_substatus('{} {}'.format(self.i18n['arch.uncompressing.package'], bold(base_name)))
uncompress = context.handler.handle(SystemProcess(new_subprocess(['tar', 'xvzf', '{}.tar.gz'.format(base_name)], cwd=context.build_dir)))
self._update_progress(context, 40)
if uncompress:
context.project_dir = '{}/{}'.format(context.build_dir, base_name)
return self._build(context)
finally:
if os.path.exists(context.build_dir):
context.handler.handle(SystemProcess(new_subprocess(['rm', '-rf', context.build_dir])))
return False
def _sync_databases(self, arch_config: dict, root_password: str, handler: ProcessHandler, change_substatus: bool = True):
if bool(arch_config['sync_databases']) and database.should_sync(arch_config, handler, self.logger):
if change_substatus:
handler.watcher.change_substatus(self.i18n['arch.sync_databases.substatus'])
synced, output = handler.handle_simple(pacman.sync_databases(root_password=root_password, force=True))
if synced:
database.register_sync(self.logger)
else:
self.logger.warning("It was not possible to synchronized the package databases")
handler.watcher.change_substatus(self.i18n['arch.sync_databases.substatus.error'])
def _optimize_makepkg(self, arch_config: dict, watcher: ProcessWatcher):
if arch_config['optimize'] and not os.path.exists(CUSTOM_MAKEPKG_FILE):
watcher.change_substatus(self.i18n['arch.makepkg.optimizing'])
ArchCompilationOptimizer(arch_config, self.i18n, self.context.logger).optimize()
def install(self, pkg: ArchPackage, root_password: str, watcher: ProcessWatcher, context: TransactionContext = None) -> bool:
self.aur_client.clean_caches()
if not self._check_action_allowed(pkg, watcher):
return False
handler = ProcessHandler(watcher) if not context else context.handler
if self._is_database_locked(handler, root_password):
return False
if context:
install_context = context
else:
install_context = TransactionContext.gen_context_from(pkg=pkg, handler=handler, arch_config=read_config(),
root_password=root_password)
install_context.skip_opt_deps = False
self._sync_databases(arch_config=install_context.config, root_password=root_password, handler=handler)
if pkg.repository == 'aur':
res = self._install_from_aur(install_context)
else:
res = self._install_from_repository(install_context)
if res and os.path.exists(pkg.get_disk_data_path()):
with open(pkg.get_disk_data_path()) as f:
data = f.read()
if data:
data = json.loads(data)
pkg.fill_cached_data(data)
return res
def _install_from_repository(self, context: TransactionContext) -> bool:
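# Installs a package from the configured repositories: missing dependencies are listed first
# (AUR dependencies are rejected for repository packages), the user confirms them, and the
# optional dependencies flow runs after a successful installation unless skip_opt_deps is set.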
try:
missing_deps = self._list_missing_deps(context)
except PackageNotFoundException:
self.logger.error("Package '{}' was not found")
return False
if missing_deps is None:
return False # called off by the user
if missing_deps:
if any((dep for dep in missing_deps if dep[1] == 'aur')):
context.watcher.show_message(title=self.i18n['error'].capitalize(),
body=self.i18n['arch.install.repo_pkg.error.aur_deps'],
type_=MessageType.ERROR)
return False
context.missing_deps = missing_deps
context.watcher.change_substatus(self.i18n['arch.missing_deps_found'].format(bold(context.name)))
if not confirmation.request_install_missing_deps(context.name, missing_deps, context.watcher, self.i18n):
context.watcher.print(self.i18n['action.cancelled'])
return False
res = self._install(context)
if res and not context.skip_opt_deps:
self._update_progress(context, 100)
return self._install_optdeps(context)
return res
def _is_wget_available(self):
res = run_cmd('which wget')
return res and not res.strip().startswith('which ')
def is_enabled(self) -> bool:
return self.enabled
def set_enabled(self, enabled: bool):
self.enabled = enabled
def can_work(self) -> bool:
try:
return self.arch_distro and pacman.is_available() and self._is_wget_available()
except FileNotFoundError:
return False
def is_downgrade_enabled(self) -> bool:
try:
new_subprocess(['git', '--version'])
return True
except FileNotFoundError:
return False
def cache_to_disk(self, pkg: ArchPackage, icon_bytes: bytes, only_icon: bool):
pass
def requires_root(self, action: str, pkg: ArchPackage):
if action == 'prepare':
arch_config = read_config()
if arch_config['refresh_mirrors_startup'] and mirrors.should_sync(self.logger):
return True
return arch_config['sync_databases_startup'] and database.should_sync(arch_config, None, self.logger)
return action != 'search'
def _start_category_task(self, task_man: TaskManager):
task_man.register_task('arch_aur_cats', self.i18n['task.download_categories'].format('Arch'), get_icon_path())
task_man.update_progress('arch_aur_cats', 50, None)
def _finish_category_task(self, task_man: TaskManager):
task_man.update_progress('arch_aur_cats', 100, None)
task_man.finish_task('arch_aur_cats')
def prepare(self, task_manager: TaskManager, root_password: str, internet_available: bool):
arch_config = read_config(update_file=True)
if arch_config['aur'] or arch_config['repositories']:
ArchDiskCacheUpdater(task_man=task_manager, arch_config=arch_config, i18n=self.i18n, logger=self.context.logger,
controller=self, internet_available=internet_available).start()
if arch_config['aur']:
ArchCompilationOptimizer(arch_config, self.i18n, self.context.logger, task_manager).start()
CategoriesDownloader(id_='Arch', http_client=self.context.http_client, logger=self.context.logger,
manager=self, url_categories_file=URL_CATEGORIES_FILE, disk_cache_dir=ARCH_CACHE_PATH,
categories_path=CATEGORIES_FILE_PATH,
before=lambda: self._start_category_task(task_manager),
after=lambda: self._finish_category_task(task_manager)).start()
if arch_config['aur'] and internet_available:
self.index_aur = AURIndexUpdater(self.context)
self.index_aur.start()
refresh_mirrors = None
if internet_available and arch_config['repositories'] and arch_config['refresh_mirrors_startup'] \
and pacman.is_mirrors_available() and mirrors.should_sync(self.logger):
refresh_mirrors = RefreshMirrors(taskman=task_manager, i18n=self.i18n,
root_password=root_password, logger=self.logger,
sort_limit=arch_config['mirrors_sort_limit'])
refresh_mirrors.start()
if internet_available and (refresh_mirrors or (arch_config['sync_databases_startup'] and database.should_sync(arch_config, None, self.logger))):
SyncDatabases(taskman=task_manager, root_password=root_password, i18n=self.i18n,
logger=self.logger, refresh_mirrors=refresh_mirrors).start()
def list_updates(self, internet_available: bool) -> List[PackageUpdate]:
installed = self.read_installed(disk_loader=None, internet_available=internet_available).installed
aur_type, repo_type = self.i18n['gem.arch.type.aur.label'], self.i18n['gem.arch.type.arch_repo.label']
return [PackageUpdate(p.name, p.latest_version, aur_type if p.repository == 'aur' else repo_type, p.name) for p in installed if p.update]
def list_warnings(self, internet_available: bool) -> List[str]:
warnings = []
if self.arch_distro:
if not pacman.is_available():
warnings.append(self.i18n['arch.warning.disabled'].format(bold('pacman')))
if not self._is_wget_available():
warnings.append(self.i18n['arch.warning.disabled'].format(bold('wget')))
if not git.is_enabled():
warnings.append(self.i18n['arch.warning.git'].format(bold('git')))
return warnings
def list_suggestions(self, limit: int, filter_installed: bool) -> List[PackageSuggestion]:
self.logger.info("Downloading suggestions file {}".format(SUGGESTIONS_FILE))
file = self.http_client.get(SUGGESTIONS_FILE)
if not file or not file.text:
self.logger.warning("No suggestion could be read from {}".format(SUGGESTIONS_FILE))
else:
self.logger.info("Mapping suggestions")
suggestions = {}
for l in file.text.split('\n'):
if l:
if limit <= 0 or len(suggestions) < limit:
lsplit = l.split('=')
name = lsplit[1].strip()
if not filter_installed or not pacman.check_installed(name):
suggestions[name] = SuggestionPriority(int(lsplit[0]))
api_res = self.aur_client.get_info(suggestions.keys())
if api_res:
res = []
for pkg in api_res:
if pkg.get('Name') in suggestions:
res.append(PackageSuggestion(self.mapper.map_api_data(pkg, {}, self.categories), suggestions[pkg['Name']]))
self.logger.info("Mapped {} suggestions".format(len(suggestions)))
return res
def is_default_enabled(self) -> bool:
return True
def launch(self, pkg: ArchPackage):
if pkg.command:
subprocess.Popen(pkg.command.split(' '))
def get_screenshots(self, pkg: SoftwarePackage) -> List[str]:
pass
def _gen_bool_selector(self, id_: str, label_key: str, tooltip_key: str, value: bool, max_width: int, capitalize_label: bool = True) -> SingleSelectComponent:
opts = [InputOption(label=self.i18n['yes'].capitalize(), value=True),
InputOption(label=self.i18n['no'].capitalize(), value=False)]
return SingleSelectComponent(label=self.i18n[label_key],
options=opts,
default_option=[o for o in opts if o.value == value][0],
max_per_line=len(opts),
type_=SelectViewType.RADIO,
tooltip=self.i18n[tooltip_key],
max_width=max_width,
id_=id_,
capitalize_label=capitalize_label)
def get_settings(self, screen_width: int, screen_height: int) -> ViewComponent:
local_config = read_config()
max_width = floor(screen_width * 0.15)
db_sync_start = self._gen_bool_selector(id_='sync_dbs_start',
label_key='arch.config.sync_dbs',
tooltip_key='arch.config.sync_dbs_start.tip',
value=bool(local_config['sync_databases_startup']),
max_width=max_width)
db_sync_start.label += ' ( {} )'.format(self.i18n['initialization'].capitalize())
fields = [
self._gen_bool_selector(id_='repos',
label_key='arch.config.repos',
tooltip_key='arch.config.repos.tip',
value=bool(local_config['repositories']),
max_width=max_width),
self._gen_bool_selector(id_='aur',
label_key='arch.config.aur',
tooltip_key='arch.config.aur.tip',
value=local_config['aur'],
max_width=max_width,
capitalize_label=False),
self._gen_bool_selector(id_='opts',
label_key='arch.config.optimize',
tooltip_key='arch.config.optimize.tip',
value=bool(local_config['optimize']),
max_width=max_width),
self._gen_bool_selector(id_='sync_dbs',
label_key='arch.config.sync_dbs',
tooltip_key='arch.config.sync_dbs.tip',
value=bool(local_config['sync_databases']),
max_width=max_width),
db_sync_start,
self._gen_bool_selector(id_='clean_cached',
label_key='arch.config.clean_cache',
tooltip_key='arch.config.clean_cache.tip',
value=bool(local_config['clean_cached']),
max_width=max_width),
self._gen_bool_selector(id_='ref_mirs',
label_key='arch.config.refresh_mirrors',
tooltip_key='arch.config.refresh_mirrors.tip',
value=bool(local_config['refresh_mirrors_startup']),
max_width=max_width),
TextInputComponent(id_='mirrors_sort_limit',
label=self.i18n['arch.config.mirrors_sort_limit'],
tooltip=self.i18n['arch.config.mirrors_sort_limit.tip'],
only_int=True,
max_width=max_width,
value=local_config['mirrors_sort_limit'] if isinstance(local_config['mirrors_sort_limit'], int) else '')
]
return PanelComponent([FormComponent(fields, spaces=False)])
def save_settings(self, component: PanelComponent) -> Tuple[bool, List[str]]:
config = read_config()
form_install = component.components[0]
config['repositories'] = form_install.get_component('repos').get_selected()
config['aur'] = form_install.get_component('aur').get_selected()
config['optimize'] = form_install.get_component('opts').get_selected()
config['sync_databases'] = form_install.get_component('sync_dbs').get_selected()
config['sync_databases_startup'] = form_install.get_component('sync_dbs_start').get_selected()
config['clean_cached'] = form_install.get_component('clean_cached').get_selected()
config['refresh_mirrors_startup'] = form_install.get_component('ref_mirs').get_selected()
config['mirrors_sort_limit'] = form_install.get_component('mirrors_sort_limit').get_int_value()
try:
save_config(config, CONFIG_FILE)
return True, None
except:
return False, [traceback.format_exc()]
def get_upgrade_requirements(self, pkgs: List[ArchPackage], root_password: str, watcher: ProcessWatcher) -> UpgradeRequirements:
self.aur_client.clean_caches()
arch_config = read_config()
self._sync_databases(arch_config=arch_config, root_password=root_password, handler=ProcessHandler(watcher), change_substatus=False)
self.aur_client.clean_caches()
try:
return UpdatesSummarizer(self.aur_client, self.i18n, self.logger, self.deps_analyser, watcher).summarize(pkgs, root_password, arch_config)
except PackageNotFoundException:
pass # when nothing is returned, the upgrade is called off by the UI
def get_custom_actions(self) -> List[CustomSoftwareAction]:
actions = []
arch_config = read_config()
if pacman.is_mirrors_available():
actions.append(self.custom_actions['ref_mirrors'])
actions.append(self.custom_actions['ref_dbs'])
actions.append(self.custom_actions['clean_cache'])
if bool(arch_config['repositories']):
actions.append(self.custom_actions['sys_up'])
return actions
def fill_sizes(self, pkgs: List[ArchPackage]):
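# Fills the 'size' attribute of repository packages: packages not yet installed get the
# update/download size, while installed packages get the difference between the new version's
# size and the size currently on disk. AUR packages are skipped.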
installed, new, all_names, installed_names = [], [], [], []
for p in pkgs:
if p.repository != 'aur':
all_names.append(p.name)
if p.installed:
installed.append(p)
installed_names.append(p.name)
else:
new.append(p)
new_sizes = pacman.get_update_size(all_names)
if new_sizes:
if new:
for p in new:
p.size = new_sizes.get(p.name)
if installed:
installed_sizes = pacman.get_installed_size(installed_names)
for p in installed:
p.size = installed_sizes.get(p.name)
new_size = new_sizes.get(p.name)
if p.size is None:
p.size = new_size
elif new_size is not None:
p.size = new_size - p.size
def upgrade_system(self, root_password: str, watcher: ProcessWatcher) -> bool:
repo_map = pacman.map_repositories()
installed = self.read_installed(limit=-1, only_apps=False, pkg_types=None, internet_available=internet.is_available(), disk_loader=None).installed
if not installed:
watcher.show_message(title=self.i18n['arch.custom_action.upgrade_system'],
body=self.i18n['arch.custom_action.upgrade_system.no_updates'],
type_=MessageType.INFO)
return False
to_update = [p for p in installed if p.repository != 'aur' and p.update]
if not to_update:
watcher.show_message(title=self.i18n['arch.custom_action.upgrade_system'],
body=self.i18n['arch.custom_action.upgrade_system.no_updates'],
type_=MessageType.INFO)
return False
# icon_path = get_repo_icon_path()
# pkg_opts, size = [], 0
# self.fill_sizes(to_update)
#
# for pkg in to_update:
# lb = '{} ( {} > {} ) - {}: {}'.format(pkg.name,
# pkg.version,
# pkg.latest_version,
# self.i18n['size'].capitalize(),
# '?' if pkg.size is None else get_human_size_str(pkg.size))
# pkg_opts.append(InputOption(label=lb,
# value=pkg.name,
# read_only=True,
# icon_path=icon_path))
#
# if pkg.size is not None:
# size += pkg.size
#
# pkg_opts.sort(key=lambda o: o.label)
# select = MultipleSelectComponent(label='',
# options=pkg_opts,
# default_options=set(pkg_opts))
# if watcher.request_confirmation(title=self.i18n['arch.custom_action.upgrade_system'],
# body="{}. {}: {}".format(self.i18n['arch.custom_action.upgrade_system.pkgs'],
# self.i18n['size'].capitalize(),
# get_human_size_str(size)),
# confirmation_label=self.i18n['proceed'].capitalize(),
# deny_label=self.i18n['cancel'].capitalize(),
# components=[select]):
# watcher.change_substatus(self.i18n['arch.custom_action.upgrade_system.substatus'])
handler = ProcessHandler(watcher)
if self._is_database_locked(handler, root_password):
return False
success, output = handler.handle_simple(pacman.upgrade_system(root_password))
if not success or 'error:' in output:
watcher.show_message(title=self.i18n['arch.custom_action.upgrade_system'],
body="An error occurred during the upgrade process. Check out the {}".format(
bold('Details')),
type_=MessageType.ERROR)
return False
else:
database.register_sync(self.logger)
msg = '<p>{}</p><br/><p>{}</p><p>{}</p>'.format(self.i18n['action.update.success.reboot.line1'],
self.i18n['action.update.success.reboot.line2'],
self.i18n['action.update.success.reboot.line3'])
watcher.request_reboot(msg)
return True
def clean_cache(self, root_password: str, watcher: ProcessWatcher) -> bool:
cache_dir = pacman.get_cache_dir()
if not cache_dir or not os.path.isdir(cache_dir):
watcher.show_message(title=self.i18n['arch.custom_action.clean_cache'].capitalize(),
body=self.i18n['arch.custom_action.clean_cache.no_dir'].format(bold(cache_dir)).capitalize(),
type_=MessageType.WARNING)
return True
text = '<p>{}.</p><p>{}.</p><p>{}.</p>'.format(self.i18n['arch.custom_action.clean_cache.msg1'],
self.i18n['arch.custom_action.clean_cache.msg2'],
self.i18n['arch.custom_action.clean_cache.msg3'])
if watcher.request_confirmation(title=self.i18n['arch.custom_action.clean_cache'].capitalize(),
body=text,
confirmation_label=self.i18n['clean'].capitalize(),
deny_label=self.i18n['cancel'].capitalize()):
handler = ProcessHandler(watcher)
rm = SimpleProcess(cmd=['rm', '-rf', cache_dir], root_password=root_password)
success, _ = handler.handle_simple(rm)
if success:
watcher.show_message(title=self.i18n['arch.custom_action.clean_cache'].capitalize(),
body=self.i18n['arch.custom_action.clean_cache.success'],
type_=MessageType.INFO)
mkcache = SimpleProcess(cmd=['mkdir', '-p', cache_dir], root_password=root_password)
handler.handle_simple(mkcache)
return True
else:
watcher.show_message(title=self.i18n['arch.custom_action.clean_cache'].capitalize(),
body=self.i18n['arch.custom_action.clean_cache.fail'],
type_=MessageType.ERROR)
return False
return True
|
helpers.py
|
from workflow.WF_0_scrape_web.WF_0_scrape_web import run_script_0
from workflow.WF_0_5_extract.WF_0_5_extract import run_script_0_5
from workflow.WF_1_import_demos.WF_1_import_demos import run_script_1
from workflow.WF_2_parse_run_data.WF_2_parse_run_data import run_script_2
from workflow.WF_3_compile_fasta.WF_3_compile_fasta import run_script_3
from workflow.WF_4_parse_nextclade.WF_4_parse_nextclade import run_script_4
from workflow.WF_5_parse_pangolin.WF_5_parse_pangolin import run_script_5
from workflow.WF_6_build_epi_report.WF_6_build_epi_report import run_script_6
from workflow.WF_7_send_epi_report.WF_7_send_epi_report import run_script_7
from workflow.WF_8_sync_network.WF_8_sync_network import run_script_8
from workflow.WF_9_send_fastas.WF_9_send_fastas import run_script_9
from workflow.epi_isl.epi_isl import run_epi_isl
from workflow.gisaid.gisaid import run_gisaid
from workflow.outside_lab.outside_lab import run_outside_lab
from workflow.gisaid_submit.gisaid_submit import run_gisaid_submit
# TODO bokeh import not working
#from workflow.plotter.plotter import run_plotter
import pyodbc
import time
import threading
import datetime
import os
def run_set_1(run_id):
# run_script_1 will read in all hsn's from run_data.json file and fetch patient demographics
# from oracle database, clean the data, and push it to the SQL database
run_script_1(run_id)
# run_script_2 will read in all run_stats from run_data.json file, and push the data to the
# SQL database. Requires run_script_1 to run first
run_script_2(run_id) # needs the run id to open the json file and read the run info
def run_set_2(run_id):
# run_script_0_5 will extract fasta files downloaded in WF_0
run_script_0_5(run_id)
# run_script_3 will take all the FASTA files and combine them into one
run_script_3(run_id)
def run(run_id):
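# Entry point of the daily workflow. When called with a real run id (Linux side), the whole
# pipeline (web scraping, extraction, demographics import, run stats, FASTA compilation,
# Nextclade/Pangolin parsing, epi report and GISAID preparation) runs sequentially under a
# lock file. When called with "windows", an interactive menu exposes only the reporting and
# synchronization steps.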
print("\n ___________________________________________________\n| _______________________________________________ |\n| |\033[4m SARS-CoV-2 daily workflow runner script \033[0m| |\n|___________________________________________________|\n")
ask = True
if run_id != "windows":
try:
# run_script_0 will perform web-scraping and save information in run_data.json file
# run_script_0 will also download the FASTA/Q files for use later. It MUST finish execution
# before anything else starts
lock_file = open("/home/ssh_user/WGS_Sequencing_COVID/lock_file.txt", "w")
run_script_0(run_id)
run_script_0_5(run_id)
run_script_1(run_id)
run_script_2(run_id)
run_script_3(run_id)
run_script_4(run_id)
run_script_5(run_id)
run_script_6(run_id)
run_gisaid(run_id)
#run_gisaid_submit(run_id)
# TODO setup thread pooling to reduce resource
# requirements
# t1 = threading.Thread(target=run_set_1, args=run_id)
# t2 = threading.Thread(target=run_set_2, args=run_id)
# # start multitasking
# t1.start() # WF 1, 2
# t2.start() # WF 0_5, 3
# # WF 2 and 3 must finish before 4 and 5 start
# t1.join()
# t2.join()
# # TODO - access fasta path within run_script 4 and 5 via run_id
# t3 = threading.Thread(target=run_script_4, args=run_id)
# t4 = threading.Thread(target=run_script_5, args=run_id)
# t3.start()
# t4.start()
# t3.join()
# t4.join()
# run_date = datetime.datetime.strptime(run_id[7:17], '%Y-%m-%d').strftime("%m/%d/%Y")
# run_script_9(run_date)
#run_gisaid()
# t5 = threading.Thread(target=run_script_9, args=run_date)
#t6 = threading.Thread(target=run_gisaid)
# we can have the script grab all 64 samples for the day
# since it will just create 2 files, one for each run
# t7 = threading.Thread(target=run_script_6, args=run_id)
# t5.start()
# t6.start()
# t7.start()
# t5.join()
# t6.join()
# t7.join()
# release the results
lock_file.close()
os.remove("/home/ssh_user/WGS_Sequencing_COVID/lock_file.txt")
except pyodbc.IntegrityError as i:
print(i)
print("\nThis usually happens when the run data has already been imported into the database")
lock_file.close()
os.remove("/home/ssh_user/WGS_Sequencing_COVID/lock_file.txt")
time.sleep(5)
except Exception as i:
print(i)
lock_file.close()
os.remove("/home/ssh_user/WGS_Sequencing_COVID/lock_file.txt")
time.sleep(5)
else:
# script is being called by a user on windows
while ask:
u_input = input("\n\nenter '6' to build an epi report\nenter '7' to send an epi report\nenter '8' to pull all results files from Linux\
\n\nOther options:\
\nenter 'plotter' to get an interactive dashboard of the database\
\nenter 'outside lab' to import a data template submitted from an outside lab\
\nenter 'epi isl' to update all isl numbers for samples submitted to gisaid\n\nenter 'q' to quit\n--> ")
try:
if u_input.strip().lower() == '6':
run_script_6(run_id)
elif u_input.strip().lower() == '7':
run_script_7()
elif u_input.strip().lower() == '8':
run_script_8()
elif u_input.strip().lower() == 'outside lab':
# run script
run_outside_lab()
elif u_input.strip().lower() == 'epi isl':
# run script
run_epi_isl()
elif u_input.strip().lower() == 'q':
ask = False
else:
raise ValueError("Invalid input!")
except Exception as i:
print(i, str(type(i)))
time.sleep(2)
|
tasks.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
from collections import OrderedDict, namedtuple, deque
import errno
import functools
import importlib
import json
import logging
import os
import shutil
import stat
import tempfile
import time
import traceback
from distutils.dir_util import copy_tree
from distutils.version import LooseVersion as Version
import yaml
import fcntl
from pathlib import Path
from uuid import uuid4
import urllib.parse as urlparse
import socket
import threading
import concurrent.futures
from base64 import b64encode
import subprocess
import sys
# Django
from django.conf import settings
from django.db import transaction, DatabaseError, IntegrityError
from django.db.models.fields.related import ForeignKey
from django.utils.timezone import now
from django.utils.encoding import smart_str
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _, gettext_noop
from django.core.cache import cache
from django.core.exceptions import ObjectDoesNotExist
from django_guid.middleware import GuidMiddleware
# Django-CRUM
from crum import impersonate
# GitPython
import git
from gitdb.exc import BadName as BadGitName
# Runner
import ansible_runner
# Receptor
from receptorctl.socket_interface import ReceptorControl
# AWX
from awx import __version__ as awx_application_version
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, MINIMAL_EVENTS
from awx.main.access import access_registry
from awx.main.redact import UriCleaner
from awx.main.models import (
Schedule,
TowerScheduleState,
Instance,
InstanceGroup,
UnifiedJob,
Notification,
Inventory,
InventorySource,
SmartInventoryMembership,
Job,
AdHocCommand,
ProjectUpdate,
InventoryUpdate,
SystemJob,
JobEvent,
ProjectUpdateEvent,
InventoryUpdateEvent,
AdHocCommandEvent,
SystemJobEvent,
build_safe_env,
)
from awx.main.constants import ACTIVE_STATES
from awx.main.exceptions import AwxTaskError, PostRunError
from awx.main.queue import CallbackQueueDispatcher
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.utils.common import (
update_scm_url,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
extract_ansible_vars,
schedule_task_manager,
get_awx_version,
deepmerge,
parse_yaml_or_json,
cleanup_new_process,
create_partition,
)
from awx.main.utils.execution_environments import get_default_pod_spec, CONTAINER_ROOT, to_container_path
from awx.main.utils.ansible import read_ansible_config
from awx.main.utils.external_logging import reconfigure_rsyslog
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
from awx.main.utils.handlers import SpecialInventoryHandler
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
from awx.conf import settings_registry
from awx.conf.license import get_license
from awx.main.analytics.subsystem_metrics import Metrics
from rest_framework.exceptions import PermissionDenied
__all__ = [
'RunJob',
'RunSystemJob',
'RunProjectUpdate',
'RunInventoryUpdate',
'RunAdHocCommand',
'handle_work_error',
'handle_work_success',
'apply_cluster_membership_policies',
'update_inventory_computed_fields',
'update_host_smart_inventory_memberships',
'send_notifications',
'purge_old_stdout_files',
]
HIDDEN_PASSWORD = '**********'
OPENSSH_KEY_ERROR = u'''\
It looks like you're trying to use a private key in OpenSSH format, which \
isn't supported by the installed version of OpenSSH on this instance. \
Try upgrading OpenSSH or providing your private key in a different format. \
'''
logger = logging.getLogger('awx.main.tasks')
def dispatch_startup():
startup_logger = logging.getLogger('awx.main.tasks')
startup_logger.debug("Syncing Schedules")
for sch in Schedule.objects.all():
try:
sch.update_computed_fields()
except Exception:
logger.exception("Failed to rebuild schedule {}.".format(sch))
#
# When the dispatcher starts, if the instance cannot be found in the database,
# automatically register it. This is mostly useful for openshift-based
# deployments where:
#
# 2 Instances come online
# Instance B encounters a network blip, Instance A notices, and
# deprovisions it
# Instance B's connectivity is restored, the dispatcher starts, and it
# re-registers itself
#
# In traditional container-less deployments, instances don't get
# deprovisioned when they miss their heartbeat, so this code is mostly a
# no-op.
#
apply_cluster_membership_policies()
cluster_node_heartbeat()
Metrics().clear_values()
# Update Tower's rsyslog.conf file based on the logging settings in the db
reconfigure_rsyslog()
def inform_cluster_of_shutdown():
try:
this_inst = Instance.objects.get(hostname=settings.CLUSTER_HOST_ID)
this_inst.capacity = 0 # No thank you to new jobs while shut down
this_inst.save(update_fields=['capacity', 'modified'])
try:
reaper.reap(this_inst)
except Exception:
logger.exception('failed to reap jobs for {}'.format(this_inst.hostname))
logger.warning('Normal shutdown signal for instance {}, ' 'removed self from capacity pool.'.format(this_inst.hostname))
except Exception:
logger.exception('Encountered problem with normal shutdown signal.')
@task(queue=get_local_queuename)
def apply_cluster_membership_policies():
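# Recomputes instance-group membership under the 'cluster_policy_lock' advisory lock in three
# passes: explicit policy_instance_list entries first, then policy_instance_minimum, then
# policy_instance_percentage. Only the resulting difference against the current memberships is
# applied, inside a single transaction, and container groups are skipped.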
started_waiting = time.time()
with advisory_lock('cluster_policy_lock', wait=True):
lock_time = time.time() - started_waiting
if lock_time > 1.0:
to_log = logger.info
else:
to_log = logger.debug
to_log('Waited {} seconds to obtain lock name: cluster_policy_lock'.format(lock_time))
started_compute = time.time()
all_instances = list(Instance.objects.order_by('id'))
all_groups = list(InstanceGroup.objects.prefetch_related('instances'))
total_instances = len(all_instances)
actual_groups = []
actual_instances = []
Group = namedtuple('Group', ['obj', 'instances', 'prior_instances'])
Node = namedtuple('Instance', ['obj', 'groups'])
# Process policy instance list first, these will represent manually managed memberships
instance_hostnames_map = {inst.hostname: inst for inst in all_instances}
for ig in all_groups:
group_actual = Group(obj=ig, instances=[], prior_instances=[instance.pk for instance in ig.instances.all()]) # obtained in prefetch
for hostname in ig.policy_instance_list:
if hostname not in instance_hostnames_map:
logger.info("Unknown instance {} in {} policy list".format(hostname, ig.name))
continue
inst = instance_hostnames_map[hostname]
group_actual.instances.append(inst.id)
# NOTE: arguable behavior: policy-list-group is not added to
# instance's group count for consideration in minimum-policy rules
if group_actual.instances:
logger.debug("Policy List, adding Instances {} to Group {}".format(group_actual.instances, ig.name))
actual_groups.append(group_actual)
# Process Instance minimum policies next, since it represents a concrete lower bound to the
# number of instances to make available to instance groups
actual_instances = [Node(obj=i, groups=[]) for i in all_instances if i.managed_by_policy]
logger.debug("Total instances: {}, available for policy: {}".format(total_instances, len(actual_instances)))
for g in sorted(actual_groups, key=lambda x: len(x.instances)):
policy_min_added = []
for i in sorted(actual_instances, key=lambda x: len(x.groups)):
if len(g.instances) >= g.obj.policy_instance_minimum:
break
if i.obj.id in g.instances:
# If the instance is already _in_ the group, it was
# applied earlier via the policy list
continue
g.instances.append(i.obj.id)
i.groups.append(g.obj.id)
policy_min_added.append(i.obj.id)
if policy_min_added:
logger.debug("Policy minimum, adding Instances {} to Group {}".format(policy_min_added, g.obj.name))
# Finally, process instance policy percentages
for g in sorted(actual_groups, key=lambda x: len(x.instances)):
policy_per_added = []
for i in sorted(actual_instances, key=lambda x: len(x.groups)):
if i.obj.id in g.instances:
# If the instance is already _in_ the group, it was
# applied earlier via a minimum policy or policy list
continue
if 100 * float(len(g.instances)) / len(actual_instances) >= g.obj.policy_instance_percentage:
break
g.instances.append(i.obj.id)
i.groups.append(g.obj.id)
policy_per_added.append(i.obj.id)
if policy_per_added:
logger.debug("Policy percentage, adding Instances {} to Group {}".format(policy_per_added, g.obj.name))
# Determine if any changes need to be made
needs_change = False
for g in actual_groups:
if set(g.instances) != set(g.prior_instances):
needs_change = True
break
if not needs_change:
logger.debug('Cluster policy no-op finished in {} seconds'.format(time.time() - started_compute))
return
# On a differential basis, apply instances to groups
with transaction.atomic():
for g in actual_groups:
if g.obj.is_container_group:
logger.debug('Skipping containerized group {} for policy calculation'.format(g.obj.name))
continue
instances_to_add = set(g.instances) - set(g.prior_instances)
instances_to_remove = set(g.prior_instances) - set(g.instances)
if instances_to_add:
logger.debug('Adding instances {} to group {}'.format(list(instances_to_add), g.obj.name))
g.obj.instances.add(*instances_to_add)
if instances_to_remove:
logger.debug('Removing instances {} from group {}'.format(list(instances_to_remove), g.obj.name))
g.obj.instances.remove(*instances_to_remove)
logger.debug('Cluster policy computation finished in {} seconds'.format(time.time() - started_compute))
@task(queue='tower_broadcast_all')
def handle_setting_changes(setting_keys):
orig_len = len(setting_keys)
for i in range(orig_len):
for dependent_key in settings_registry.get_dependent_settings(setting_keys[i]):
setting_keys.append(dependent_key)
cache_keys = set(setting_keys)
logger.debug('cache delete_many(%r)', cache_keys)
cache.delete_many(cache_keys)
if any([setting.startswith('LOG_AGGREGATOR') for setting in setting_keys]):
reconfigure_rsyslog()
@task(queue='tower_broadcast_all')
def delete_project_files(project_path):
# TODO: possibly implement some retry logic
lock_file = project_path + '.lock'
if os.path.exists(project_path):
try:
shutil.rmtree(project_path)
logger.debug('Success removing project files {}'.format(project_path))
except Exception:
logger.exception('Could not remove project directory {}'.format(project_path))
if os.path.exists(lock_file):
try:
os.remove(lock_file)
logger.debug('Success removing {}'.format(lock_file))
except Exception:
logger.exception('Could not remove lock file {}'.format(lock_file))
@task(queue='tower_broadcast_all')
def profile_sql(threshold=1, minutes=1):
if threshold <= 0:
cache.delete('awx-profile-sql-threshold')
logger.error('SQL PROFILING DISABLED')
else:
cache.set('awx-profile-sql-threshold', threshold, timeout=minutes * 60)
logger.error('SQL QUERIES >={}s ENABLED FOR {} MINUTE(S)'.format(threshold, minutes))
@task(queue=get_local_queuename)
def send_notifications(notification_list, job_id=None):
if not isinstance(notification_list, list):
raise TypeError("notification_list should be of type list")
if job_id is not None:
job_actual = UnifiedJob.objects.get(id=job_id)
notifications = Notification.objects.filter(id__in=notification_list)
if job_id is not None:
job_actual.notifications.add(*notifications)
for notification in notifications:
update_fields = ['status', 'notifications_sent']
try:
sent = notification.notification_template.send(notification.subject, notification.body)
notification.status = "successful"
notification.notifications_sent = sent
if job_id is not None:
job_actual.log_lifecycle("notifications_sent")
except Exception as e:
logger.exception("Send Notification Failed {}".format(e))
notification.status = "failed"
notification.error = smart_str(e)
update_fields.append('error')
finally:
try:
notification.save(update_fields=update_fields)
except Exception:
logger.exception('Error saving notification {} result.'.format(notification.id))
@task(queue=get_local_queuename)
def gather_analytics():
from awx.conf.models import Setting
from rest_framework.fields import DateTimeField
last_gather = Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first()
last_time = DateTimeField().to_internal_value(last_gather.value) if last_gather and last_gather.value else None
gather_time = now()
if not last_time or ((gather_time - last_time).total_seconds() > settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
analytics.gather()
@task(queue=get_local_queuename)
def purge_old_stdout_files():
nowtime = time.time()
for f in os.listdir(settings.JOBOUTPUT_ROOT):
if os.path.getctime(os.path.join(settings.JOBOUTPUT_ROOT, f)) < nowtime - settings.LOCAL_STDOUT_EXPIRE_TIME:
os.unlink(os.path.join(settings.JOBOUTPUT_ROOT, f))
logger.debug("Removing {}".format(os.path.join(settings.JOBOUTPUT_ROOT, f)))
@task(queue=get_local_queuename)
def cleanup_execution_environment_images():
if settings.IS_K8S:
return
process = subprocess.run('podman images --filter="dangling=true" --format json'.split(" "), capture_output=True)
if process.returncode != 0:
logger.debug("Cleanup execution environment images: could not get list of images")
return
if len(process.stdout) > 0:
images_system = json.loads(process.stdout)
for e in images_system:
image_name = e["Id"]
logger.debug(f"Cleanup execution environment images: deleting {image_name}")
process = subprocess.run(['podman', 'rmi', image_name, '-f'], stdout=subprocess.DEVNULL)
if process.returncode != 0:
logger.debug(f"Failed to delete image {image_name}")
@task(queue=get_local_queuename)
def cluster_node_heartbeat():
logger.debug("Cluster node heartbeat task.")
nowtime = now()
instance_list = list(Instance.objects.all())
this_inst = None
lost_instances = []
(changed, instance) = Instance.objects.get_or_register()
if changed:
logger.info("Registered tower node '{}'".format(instance.hostname))
for inst in list(instance_list):
if inst.hostname == settings.CLUSTER_HOST_ID:
this_inst = inst
instance_list.remove(inst)
elif inst.is_lost(ref_time=nowtime):
lost_instances.append(inst)
instance_list.remove(inst)
if this_inst:
startup_event = this_inst.is_lost(ref_time=nowtime)
this_inst.refresh_capacity()
if startup_event:
logger.warning('Rejoining the cluster as instance {}.'.format(this_inst.hostname))
return
else:
raise RuntimeError("Cluster Host Not Found: {}".format(settings.CLUSTER_HOST_ID))
# If any node has a greater version than we do, then we'll shut down services
for other_inst in instance_list:
if other_inst.version == "":
continue
if Version(other_inst.version.split('-', 1)[0]) > Version(awx_application_version.split('-', 1)[0]) and not settings.DEBUG:
logger.error(
"Host {} reports version {}, but this node {} is at {}, shutting down".format(
other_inst.hostname, other_inst.version, this_inst.hostname, this_inst.version
)
)
# Shutdown signal will set the capacity to zero to ensure no Jobs get added to this instance.
# The heartbeat task will reset the capacity to the system capacity after upgrade.
stop_local_services(communicate=False)
raise RuntimeError("Shutting down.")
for other_inst in lost_instances:
try:
reaper.reap(other_inst)
except Exception:
logger.exception('failed to reap jobs for {}'.format(other_inst.hostname))
try:
# Capacity could already be 0 because:
# * It's a new node and it never had a heartbeat
# * It was set to 0 by another tower node running this method
# * It was set to 0 by this node, but auto deprovisioning is off
#
# If auto deprovisioning is on, don't bother setting the capacity to 0
# since we will delete the node anyway.
if other_inst.capacity != 0 and not settings.AWX_AUTO_DEPROVISION_INSTANCES:
other_inst.capacity = 0
other_inst.save(update_fields=['capacity'])
logger.error("Host {} last checked in at {}, marked as lost.".format(other_inst.hostname, other_inst.modified))
elif settings.AWX_AUTO_DEPROVISION_INSTANCES:
deprovision_hostname = other_inst.hostname
other_inst.delete()
logger.info("Host {} Automatically Deprovisioned.".format(deprovision_hostname))
except DatabaseError as e:
if 'did not affect any rows' in str(e):
logger.debug('Another instance has marked {} as lost'.format(other_inst.hostname))
else:
logger.exception('Error marking {} as lost'.format(other_inst.hostname))
@task(queue=get_local_queuename)
def awx_receptor_workunit_reaper():
"""
When an AWX job is launched via receptor, files such as status, stdin, and stdout are created
in a specific receptor directory. This directory on disk is a random 8 character string, e.g. qLL2JFNT
This is also called the work Unit ID in receptor, and is used in various receptor commands,
e.g. "work results qLL2JFNT"
After an AWX job executes, the receptor work unit directory is cleaned up by
issuing the work release command. In some cases the release process might fail, or
if AWX crashes during a job's execution, the work release command is never issued to begin with.
As such, this periodic task will obtain a list of all receptor work units, and find which ones
belong to AWX jobs that are in a completed state (status is canceled, error, or succeeded).
This task will call "work release" on each of these work units to clean up the files on disk.
"""
if not settings.RECEPTOR_RELEASE_WORK:
return
logger.debug("Checking for unreleased receptor work units")
receptor_ctl = get_receptor_ctl()
receptor_work_list = receptor_ctl.simple_command("work list")
unit_ids = [unit_id for unit_id in receptor_work_list]
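# Note: simple_command("work list") is assumed here to return a mapping keyed
# by work unit ID (e.g. 'qLL2JFNT' as in the docstring above), which is why
# iterating it yields the IDs directly; this is inferred from the comprehension
# above rather than from receptorctl documentation.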
jobs_with_unreleased_receptor_units = UnifiedJob.objects.filter(work_unit_id__in=unit_ids).exclude(status__in=ACTIVE_STATES)
for job in jobs_with_unreleased_receptor_units:
logger.debug(f"{job.log_format} is not active, reaping receptor work unit {job.work_unit_id}")
receptor_ctl.simple_command(f"work release {job.work_unit_id}")
@task(queue=get_local_queuename)
def awx_k8s_reaper():
if not settings.RECEPTOR_RELEASE_WORK:
return
from awx.main.scheduler.kubernetes import PodManager # prevent circular import
for group in InstanceGroup.objects.filter(is_container_group=True).iterator():
logger.debug("Checking for orphaned k8s pods for {}.".format(group))
pods = PodManager.list_active_jobs(group)
for job in UnifiedJob.objects.filter(pk__in=pods.keys()).exclude(status__in=ACTIVE_STATES):
logger.debug('{} is no longer active, reaping orphaned k8s pod'.format(job.log_format))
try:
pm = PodManager(job)
pm.kube_api.delete_namespaced_pod(name=pods[job.id], namespace=pm.namespace, _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
except Exception:
logger.exception("Failed to delete orphaned pod {} from {}".format(job.log_format, group))
@task(queue=get_local_queuename)
def awx_periodic_scheduler():
with advisory_lock('awx_periodic_scheduler_lock', wait=False) as acquired:
if acquired is False:
logger.debug("Not running periodic scheduler, another task holds lock")
return
logger.debug("Starting periodic scheduler")
run_now = now()
state = TowerScheduleState.get_solo()
last_run = state.schedule_last_run
logger.debug("Last scheduler run was: %s", last_run)
state.schedule_last_run = run_now
state.save()
old_schedules = Schedule.objects.enabled().before(last_run)
for schedule in old_schedules:
schedule.update_computed_fields()
schedules = Schedule.objects.enabled().between(last_run, run_now)
invalid_license = False
try:
access_registry[Job](None).check_license(quiet=True)
except PermissionDenied as e:
invalid_license = e
for schedule in schedules:
template = schedule.unified_job_template
schedule.update_computed_fields() # To update next_run timestamp.
if template.cache_timeout_blocked:
logger.warn("Cache timeout is in the future, bypassing schedule for template %s" % str(template.id))
continue
try:
job_kwargs = schedule.get_job_kwargs()
new_unified_job = schedule.unified_job_template.create_unified_job(**job_kwargs)
logger.debug('Spawned {} from schedule {}-{}.'.format(new_unified_job.log_format, schedule.name, schedule.pk))
if invalid_license:
new_unified_job.status = 'failed'
new_unified_job.job_explanation = str(invalid_license)
new_unified_job.save(update_fields=['status', 'job_explanation'])
new_unified_job.websocket_emit_status("failed")
raise invalid_license
can_start = new_unified_job.signal_start()
except Exception:
logger.exception('Error spawning scheduled job.')
continue
if not can_start:
new_unified_job.status = 'failed'
new_unified_job.job_explanation = gettext_noop(
"Scheduled job could not start because it \
was not in the right state or required manual credentials"
)
new_unified_job.save(update_fields=['status', 'job_explanation'])
new_unified_job.websocket_emit_status("failed")
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
state.save()
@task(queue=get_local_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
schedule_task_manager()
@task(queue=get_local_queuename)
def handle_work_error(task_id, *args, **kwargs):
subtasks = kwargs.get('subtasks', None)
logger.debug('Executing error task id %s, subtasks: %s' % (task_id, str(subtasks)))
first_instance = None
first_instance_type = ''
if subtasks is not None:
for each_task in subtasks:
try:
instance = UnifiedJob.get_instance_by_type(each_task['type'], each_task['id'])
if not instance:
# Unknown task type
logger.warn("Unknown task type: {}".format(each_task['type']))
continue
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(each_task['type'], each_task['id']))
continue
if first_instance is None:
first_instance = instance
first_instance_type = each_task['type']
if instance.celery_task_id != task_id and not instance.cancel_flag and not instance.status == 'successful':
instance.status = 'failed'
instance.failed = True
if not instance.job_explanation:
instance.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
first_instance_type,
first_instance.name,
first_instance.id,
)
instance.save()
instance.websocket_emit_status("failed")
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
if first_instance:
schedule_task_manager()
pass
@task(queue=get_local_queuename)
def handle_success_and_failure_notifications(job_id):
uj = UnifiedJob.objects.get(pk=job_id)
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
return
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_id)
logger.warn(f"Failed to even try to send notifications for job '{uj}' due to job not being in finished state.")
@task(queue=get_local_queuename)
def update_inventory_computed_fields(inventory_id):
"""
Signal handler and wrapper around inventory.update_computed_fields to
prevent unnecessary recursive calls.
"""
i = Inventory.objects.filter(id=inventory_id)
if not i.exists():
logger.error("Update Inventory Computed Fields failed due to missing inventory: " + str(inventory_id))
return
i = i[0]
try:
i.update_computed_fields()
except DatabaseError as e:
if 'did not affect any rows' in str(e):
logger.debug('Exiting duplicate update_inventory_computed_fields task.')
return
raise
def update_smart_memberships_for_inventory(smart_inventory):
current = set(SmartInventoryMembership.objects.filter(inventory=smart_inventory).values_list('host_id', flat=True))
new = set(smart_inventory.hosts.values_list('id', flat=True))
additions = new - current
removals = current - new
if additions or removals:
with transaction.atomic():
if removals:
SmartInventoryMembership.objects.filter(inventory=smart_inventory, host_id__in=removals).delete()
if additions:
add_for_inventory = [SmartInventoryMembership(inventory_id=smart_inventory.id, host_id=host_id) for host_id in additions]
SmartInventoryMembership.objects.bulk_create(add_for_inventory, ignore_conflicts=True)
logger.debug(
'Smart host membership cached for {}, {} additions, {} removals, {} total count.'.format(
smart_inventory.pk, len(additions), len(removals), len(new)
)
)
return True # changed
return False
@task(queue=get_local_queuename)
def update_host_smart_inventory_memberships():
smart_inventories = Inventory.objects.filter(kind='smart', host_filter__isnull=False, pending_deletion=False)
changed_inventories = set([])
for smart_inventory in smart_inventories:
try:
changed = update_smart_memberships_for_inventory(smart_inventory)
if changed:
changed_inventories.add(smart_inventory)
except IntegrityError:
logger.exception('Failed to update smart inventory memberships for {}'.format(smart_inventory.pk))
# Update computed fields for changed inventories outside atomic action
for smart_inventory in changed_inventories:
smart_inventory.update_computed_fields()
@task(queue=get_local_queuename)
def delete_inventory(inventory_id, user_id, retries=5):
# Delete inventory as user
if user_id is None:
user = None
else:
try:
user = User.objects.get(id=user_id)
except Exception:
user = None
with ignore_inventory_computed_fields(), ignore_inventory_group_removal(), impersonate(user):
try:
i = Inventory.objects.get(id=inventory_id)
for host in i.hosts.iterator():
host.job_events_as_primary_host.update(host=None)
i.delete()
emit_channel_notification('inventories-status_changed', {'group_name': 'inventories', 'inventory_id': inventory_id, 'status': 'deleted'})
logger.debug('Deleted inventory {} as user {}.'.format(inventory_id, user_id))
except Inventory.DoesNotExist:
logger.exception("Delete Inventory failed due to missing inventory: " + str(inventory_id))
return
except DatabaseError:
logger.exception('Database error deleting inventory {}, but will retry.'.format(inventory_id))
if retries > 0:
time.sleep(10)
delete_inventory(inventory_id, user_id, retries=retries - 1)
def with_path_cleanup(f):
@functools.wraps(f)
def _wrapped(self, *args, **kwargs):
try:
return f(self, *args, **kwargs)
finally:
for p in self.cleanup_paths:
try:
if os.path.isdir(p):
shutil.rmtree(p, ignore_errors=True)
elif os.path.exists(p):
os.remove(p)
except OSError:
logger.exception("Failed to remove tmp file: {}".format(p))
self.cleanup_paths = []
return _wrapped
def get_receptor_ctl():
return ReceptorControl('/var/run/receptor/receptor.sock')
class BaseTask(object):
model = None
event_model = None
abstract = True
def __init__(self):
self.cleanup_paths = []
self.parent_workflow_job_id = None
self.host_map = {}
self.guid = GuidMiddleware.get_guid()
self.job_created = None
self.recent_event_timings = deque(maxlen=settings.MAX_WEBSOCKET_EVENT_RATE)
def update_model(self, pk, _attempt=0, **updates):
"""Reload the model instance from the database and update the
given fields.
"""
try:
with transaction.atomic():
# Retrieve the model instance.
instance = self.model.objects.get(pk=pk)
# Update the appropriate fields and save the model
# instance, then return the new instance.
if updates:
update_fields = ['modified']
for field, value in updates.items():
setattr(instance, field, value)
update_fields.append(field)
if field == 'status':
update_fields.append('failed')
instance.save(update_fields=update_fields)
return instance
except DatabaseError as e:
# Log out the error to the debug logger.
logger.debug('Database error updating %s, retrying in 5 ' 'seconds (retry #%d): %s', self.model._meta.object_name, _attempt + 1, e)
# Attempt to retry the update, assuming we haven't already
# tried too many times.
if _attempt < 5:
time.sleep(5)
return self.update_model(pk, _attempt=_attempt + 1, **updates)
else:
logger.error('Failed to update %s after %d retries.', self.model._meta.object_name, _attempt)
def get_path_to(self, *args):
"""
Return absolute path relative to this file.
"""
return os.path.abspath(os.path.join(os.path.dirname(__file__), *args))
def build_execution_environment_params(self, instance, private_data_dir):
if settings.IS_K8S:
return {}
image = instance.execution_environment.image
params = {
"container_image": image,
"process_isolation": True,
"container_options": ['--user=root'],
}
if instance.execution_environment.credential:
cred = instance.execution_environment.credential
if cred.has_inputs(field_names=('host', 'username', 'password')):
path = os.path.split(private_data_dir)[0]
with open(path + '/auth.json', 'w') as authfile:
os.chmod(authfile.name, stat.S_IRUSR | stat.S_IWUSR)
host = cred.get_input('host')
username = cred.get_input('username')
password = cred.get_input('password')
token = "{}:{}".format(username, password)
auth_data = {'auths': {host: {'auth': b64encode(token.encode('UTF-8')).decode('UTF-8')}}}
authfile.write(json.dumps(auth_data, indent=4))
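# The resulting auth.json follows the usual container auth file shape, e.g.
# (hypothetical registry host shown):
#   {"auths": {"registry.example.com": {"auth": "<base64 of username:password>"}}}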
params["container_options"].append(f'--authfile={authfile.name}')
else:
raise RuntimeError('Please recheck that your host, username, and password fields are all filled.')
pull = instance.execution_environment.pull
if pull:
params['container_options'].append(f'--pull={pull}')
if settings.AWX_ISOLATION_SHOW_PATHS:
params['container_volume_mounts'] = []
for this_path in settings.AWX_ISOLATION_SHOW_PATHS:
# Using z allows the dir to be mounted by multiple containers
# Uppercase Z restricts access (in weird ways) to 1 container at a time
params['container_volume_mounts'].append(f'{this_path}:{this_path}:z')
return params
def build_private_data(self, instance, private_data_dir):
"""
Return SSH private key data (only if stored in DB as ssh_key_data).
Return structure is a dict of the form shown in the subclass docstrings below (e.g. RunJob.build_private_data).
"""
def build_private_data_dir(self, instance):
"""
Create a temporary directory for job-related files.
"""
pdd_wrapper_path = tempfile.mkdtemp(prefix=f'pdd_wrapper_{instance.pk}_', dir=settings.AWX_ISOLATION_BASE_PATH)
os.chmod(pdd_wrapper_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
if settings.AWX_CLEANUP_PATHS:
self.cleanup_paths.append(pdd_wrapper_path)
path = tempfile.mkdtemp(prefix='awx_%s_' % instance.pk, dir=pdd_wrapper_path)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
# Ansible runner requires that project exists,
# and we will write files in the other folders without pre-creating the folder
for subfolder in ('project', 'inventory', 'env'):
runner_subfolder = os.path.join(path, subfolder)
if not os.path.exists(runner_subfolder):
os.mkdir(runner_subfolder)
return path
def build_private_data_files(self, instance, private_data_dir):
"""
Creates temporary files containing the private data.
Returns a dictionary i.e.,
{
'credentials': {
<awx.main.models.Credential>: '/path/to/decrypted/data',
<awx.main.models.Credential>: '/path/to/decrypted/data',
...
},
'certificates': {
<awx.main.models.Credential>: /path/to/signed/ssh/certificate,
<awx.main.models.Credential>: /path/to/signed/ssh/certificate,
...
}
}
"""
private_data = self.build_private_data(instance, private_data_dir)
private_data_files = {'credentials': {}}
if private_data is not None:
for credential, data in private_data.get('credentials', {}).items():
# OpenSSH formatted keys must have a trailing newline to be
# accepted by ssh-add.
if 'OPENSSH PRIVATE KEY' in data and not data.endswith('\n'):
data += '\n'
# For credentials used with ssh-add, write to a named pipe which
# will be read then closed, instead of leaving the SSH key on disk.
if credential and credential.credential_type.namespace in ('ssh', 'scm'):
try:
os.mkdir(os.path.join(private_data_dir, 'env'))
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(private_data_dir, 'env', 'ssh_key')
ansible_runner.utils.open_fifo_write(path, data.encode())
private_data_files['credentials']['ssh'] = path
# Ansible network modules do not yet support ssh-agent.
# Instead, ssh private key file is explicitly passed via an
# env variable.
else:
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
f = os.fdopen(handle, 'w')
f.write(data)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
private_data_files['credentials'][credential] = path
for credential, data in private_data.get('certificates', {}).items():
artifact_dir = os.path.join(private_data_dir, 'artifacts', str(self.instance.id))
if not os.path.exists(artifact_dir):
os.makedirs(artifact_dir, mode=0o700)
path = os.path.join(artifact_dir, 'ssh_key_data-cert.pub')
with open(path, 'w') as f:
f.write(data)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
return private_data_files
def build_passwords(self, instance, runtime_passwords):
"""
Build a dictionary of passwords for responding to prompts.
"""
return {
'yes': 'yes',
'no': 'no',
'': '',
}
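# These defaults presumably just echo back literal yes/no confirmation prompts
# (plus an empty answer for unmatched prompts); subclasses layer credential
# passwords on top of this dict.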
def build_extra_vars_file(self, instance, private_data_dir):
"""
Build ansible yaml file filled with extra vars to be passed via -e@file.yml
"""
def _write_extra_vars_file(self, private_data_dir, vars, safe_dict={}):
env_path = os.path.join(private_data_dir, 'env')
try:
os.mkdir(env_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(env_path, 'extravars')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
if settings.ALLOW_JINJA_IN_EXTRA_VARS == 'always':
f.write(yaml.safe_dump(vars))
else:
f.write(safe_dump(vars, safe_dict))
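# Sketch of the intent, inferred from the branch above: yaml.safe_dump writes
# all vars as plain YAML, while safe_dump(vars, safe_dict) is assumed to mark
# values outside safe_dict so they are not later rendered as Jinja2 templates.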
f.close()
os.chmod(path, stat.S_IRUSR)
return path
def add_awx_venv(self, env):
env['VIRTUAL_ENV'] = settings.AWX_VENV_PATH
if 'PATH' in env:
env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin") + ":" + env['PATH']
else:
env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin")
def build_env(self, instance, private_data_dir, private_data_files=None):
"""
Build environment dictionary for ansible-playbook.
"""
env = {}
# Add ANSIBLE_* settings to the subprocess environment.
for attr in dir(settings):
if attr == attr.upper() and attr.startswith('ANSIBLE_'):
env[attr] = str(getattr(settings, attr))
# Also set environment variables configured in AWX_TASK_ENV setting.
for key, value in settings.AWX_TASK_ENV.items():
env[key] = str(value)
env['AWX_PRIVATE_DATA_DIR'] = private_data_dir
if self.instance.execution_environment is None:
raise RuntimeError('The project could not sync because there is no Execution Environment.')
ee_cred = self.instance.execution_environment.credential
if ee_cred:
verify_ssl = ee_cred.get_input('verify_ssl')
if not verify_ssl:
pdd_wrapper_path = os.path.split(private_data_dir)[0]
registries_conf_path = os.path.join(pdd_wrapper_path, 'registries.conf')
host = ee_cred.get_input('host')
with open(registries_conf_path, 'w') as registries_conf:
os.chmod(registries_conf.name, stat.S_IRUSR | stat.S_IWUSR)
lines = [
'[[registry]]',
'location = "{}"'.format(host),
'insecure = true',
]
registries_conf.write('\n'.join(lines))
# Podman >= 3.1.0
env['CONTAINERS_REGISTRIES_CONF'] = registries_conf_path
# Podman < 3.1.0
env['REGISTRIES_CONFIG_PATH'] = registries_conf_path
return env
def build_inventory(self, instance, private_data_dir):
script_params = dict(hostvars=True, towervars=True)
if hasattr(instance, 'job_slice_number'):
script_params['slice_number'] = instance.job_slice_number
script_params['slice_count'] = instance.job_slice_count
script_data = instance.inventory.get_script_data(**script_params)
# maintain a list of host_name --> host_id
# so we can associate emitted events to Host objects
self.host_map = {hostname: hv.pop('remote_tower_id', '') for hostname, hv in script_data.get('_meta', {}).get('hostvars', {}).items()}
json_data = json.dumps(script_data)
path = os.path.join(private_data_dir, 'inventory')
fn = os.path.join(path, 'hosts')
with open(fn, 'w') as f:
os.chmod(fn, stat.S_IRUSR | stat.S_IXUSR | stat.S_IWUSR)
f.write('#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json_data)
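# The generated hosts file is a tiny executable inventory script; roughly:
#   #! /usr/bin/env python3
#   print('{"all": {...}, "_meta": {"hostvars": {...}}}')
# (illustrative content) which is executed to obtain the inventory JSON.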
return fn
def build_args(self, instance, private_data_dir, passwords):
raise NotImplementedError
def write_args_file(self, private_data_dir, args):
env_path = os.path.join(private_data_dir, 'env')
try:
os.mkdir(env_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(env_path, 'cmdline')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(ansible_runner.utils.args2cmdline(*args))
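# The cmdline file holds a shell-style rendering of the extra arguments, e.g.
# args2cmdline('-u', 'root', '--become') producing roughly "-u root --become"
# (quoting is ansible_runner's concern; the example values are illustrative).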
f.close()
os.chmod(path, stat.S_IRUSR)
return path
def build_credentials_list(self, instance):
return []
def get_instance_timeout(self, instance):
global_timeout_setting_name = instance._global_timeout_setting()
if global_timeout_setting_name:
global_timeout = getattr(settings, global_timeout_setting_name, 0)
local_timeout = getattr(instance, 'timeout', 0)
job_timeout = global_timeout if local_timeout == 0 else local_timeout
job_timeout = 0 if local_timeout < 0 else job_timeout
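# Worked example (illustrative): a global timeout of 3600 with an instance
# timeout of 0 yields 3600; an instance timeout of -1 disables the timeout
# entirely (job_timeout becomes 0).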
else:
job_timeout = 0
return job_timeout
def get_password_prompts(self, passwords={}):
"""
Return a dictionary where keys are strings or regular expressions for
prompts, and values are password lookup keys (keys that are returned
from build_passwords).
"""
return OrderedDict()
def create_expect_passwords_data_struct(self, password_prompts, passwords):
expect_passwords = {}
for k, v in password_prompts.items():
expect_passwords[k] = passwords.get(v, '') or ''
return expect_passwords
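# Example shape (illustrative values): given password_prompts of
# {r'SSH password:\s*?$': 'ssh_password'} and passwords {'ssh_password': 's3cret'},
# this returns {r'SSH password:\s*?$': 's3cret'}.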
def pre_run_hook(self, instance, private_data_dir):
"""
Hook for any steps to run before the job/task starts
"""
instance.log_lifecycle("pre_run")
def post_run_hook(self, instance, status):
"""
Hook for any steps to run before job/task is marked as complete.
"""
instance.log_lifecycle("post_run")
def final_run_hook(self, instance, status, private_data_dir, fact_modification_times):
"""
Hook for any steps to run after job/task is marked as complete.
"""
instance.log_lifecycle("finalize_run")
job_profiling_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling')
awx_profiling_dir = '/var/log/tower/playbook_profiling/'
collections_info = os.path.join(private_data_dir, 'artifacts/', 'collections.json')
ansible_version_file = os.path.join(private_data_dir, 'artifacts/', 'ansible_version.txt')
if not os.path.exists(awx_profiling_dir):
os.mkdir(awx_profiling_dir)
if os.path.isdir(job_profiling_dir):
shutil.copytree(job_profiling_dir, os.path.join(awx_profiling_dir, str(instance.pk)))
if os.path.exists(collections_info):
with open(collections_info) as ee_json_info:
ee_collections_info = json.loads(ee_json_info.read())
instance.installed_collections = ee_collections_info
instance.save(update_fields=['installed_collections'])
if os.path.exists(ansible_version_file):
with open(ansible_version_file) as ee_ansible_info:
ansible_version_info = ee_ansible_info.readline()
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
def event_handler(self, event_data):
#
# ⚠️ D-D-D-DANGER ZONE ⚠️
# This method is called once for *every event* emitted by Ansible
# Runner as a playbook runs. That means that changes to the code in
# this method are _very_ likely to introduce performance regressions.
#
# Even if this function is made on average .05s slower, it can have
# devastating performance implications for playbooks that emit
# tens or hundreds of thousands of events.
#
# Proceed with caution!
#
"""
Ansible runner puts a parent_uuid on each event, no matter what the type.
AWX only saves the parent_uuid if the event is for a Job.
"""
# cache end_line locally for RunInventoryUpdate tasks
# which generate job events from two 'streams':
# ansible-inventory and the awx.main.commands.inventory_import
# logger
if isinstance(self, RunInventoryUpdate):
self.end_line = event_data['end_line']
if event_data.get(self.event_data_key, None):
if self.event_data_key != 'job_id':
event_data.pop('parent_uuid', None)
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
event_data['job_created'] = self.job_created
if self.host_map:
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
if isinstance(self, RunProjectUpdate):
# it's common for Ansible's SCM modules to print
# error messages on failure that contain the plaintext
# basic auth credentials (username + password)
# it's also common for the nested event data itself (['res']['...'])
# to contain unredacted text on failure
# this is a _little_ expensive to filter
# with regex, but project updates don't have many events,
# so it *should* have a negligible performance impact
task = event_data.get('event_data', {}).get('task_action')
try:
if task in ('git', 'svn'):
event_data_json = json.dumps(event_data)
event_data_json = UriCleaner.remove_sensitive(event_data_json)
event_data = json.loads(event_data_json)
except json.JSONDecodeError:
pass
if 'event_data' in event_data:
event_data['event_data']['guid'] = self.guid
# To prevent overwhelming the broadcast queue, skip some websocket messages
if self.recent_event_timings:
cpu_time = time.time()
first_window_time = self.recent_event_timings[0]
last_window_time = self.recent_event_timings[-1]
if event_data.get('event') in MINIMAL_EVENTS:
should_emit = True # always send some types like playbook_on_stats
elif event_data.get('stdout') == '' and event_data['start_line'] == event_data['end_line']:
should_emit = False # exclude events with no output
else:
should_emit = any(
[
# if the oldest of the (up to 30) most recent websocket messages was sent over 1 second ago
cpu_time - first_window_time > 1.0,
# if the very last websocket message came in over 1/30 seconds ago
self.recent_event_timings.maxlen * (cpu_time - last_window_time) > 1.0,
# if the queue is not yet full
len(self.recent_event_timings) != self.recent_event_timings.maxlen,
]
)
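# Rough net effect of the three conditions above (assuming the deque tracks the
# 30 most recent emit times, per the comments): sustained floods are throttled
# to roughly 30 websocket messages per second, while events arriving at a
# slower rate are always emitted.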
if should_emit:
self.recent_event_timings.append(cpu_time)
else:
event_data.setdefault('event_data', {})
event_data['skip_websocket_message'] = True
elif self.recent_event_timings.maxlen:
self.recent_event_timings.append(time.time())
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
self.event_ct += 1
'''
Handle artifacts
'''
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
return False
def cancel_callback(self):
"""
Ansible runner callback to tell the job when/if it is canceled
"""
unified_job_id = self.instance.pk
self.instance = self.update_model(unified_job_id)
if not self.instance:
logger.error('unified job {} was deleted while running, canceling'.format(unified_job_id))
return True
if self.instance.cancel_flag or self.instance.status == 'canceled':
cancel_wait = (now() - self.instance.modified).seconds if self.instance.modified else 0
if cancel_wait > 5:
logger.warn('Request to cancel {} took {} seconds to complete.'.format(self.instance.log_format, cancel_wait))
return True
return False
def finished_callback(self, runner_obj):
"""
Ansible runner callback triggered on finished run
"""
event_data = {
'event': 'EOF',
'final_counter': self.event_ct,
'guid': self.guid,
}
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
def status_handler(self, status_data, runner_config):
"""
Ansible runner callback triggered on status transition
"""
if status_data['status'] == 'starting':
job_env = dict(runner_config.env)
'''
Take the safe environment variables and overwrite
'''
for k, v in self.safe_env.items():
if k in job_env:
job_env[k] = v
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, job_args=json.dumps(runner_config.command), job_cwd=runner_config.cwd, job_env=job_env)
elif status_data['status'] == 'error':
result_traceback = status_data.get('result_traceback', None)
if result_traceback:
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, result_traceback=result_traceback)
@with_path_cleanup
def run(self, pk, **kwargs):
"""
Run the job/task and capture its output.
"""
self.instance = self.model.objects.get(pk=pk)
if self.instance.execution_environment_id is None:
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, execution_environment=self.instance.resolve_execution_environment())
# Use self.instance (via the update_model pattern) because it is also referenced in the callback handlers
self.instance = self.update_model(pk, status='running', start_args='') # blank field to remove encrypted passwords
self.instance.websocket_emit_status("running")
status, rc = 'error', None
extra_update_fields = {}
fact_modification_times = {}
self.event_ct = 0
'''
Needs to be an object property because status_handler uses it in a callback context
'''
self.safe_env = {}
self.safe_cred_env = {}
private_data_dir = None
# store a reference to the parent workflow job (if any) so we can include
# it in event data JSON
if self.instance.spawned_by_workflow:
self.parent_workflow_job_id = self.instance.get_workflow_job().id
self.job_created = str(self.instance.created)
try:
self.instance.send_notification_templates("running")
private_data_dir = self.build_private_data_dir(self.instance)
self.pre_run_hook(self.instance, private_data_dir)
self.instance.log_lifecycle("preparing_playbook")
if self.instance.cancel_flag:
self.instance = self.update_model(self.instance.pk, status='canceled')
if self.instance.status != 'running':
# Stop the task chain and prevent starting the job if it has
# already been canceled.
self.instance = self.update_model(pk)
status = self.instance.status
raise RuntimeError('not starting %s task' % self.instance.status)
if not os.path.exists(settings.AWX_ISOLATION_BASE_PATH):
raise RuntimeError('AWX_ISOLATION_BASE_PATH=%s does not exist' % settings.AWX_ISOLATION_BASE_PATH)
# Fetch "cached" fact data from prior runs and put on the disk
# where ansible expects to find it
if getattr(self.instance, 'use_fact_cache', False):
self.instance.start_job_fact_cache(
os.path.join(private_data_dir, 'artifacts', str(self.instance.id), 'fact_cache'),
fact_modification_times,
)
# May have to serialize the value
private_data_files = self.build_private_data_files(self.instance, private_data_dir)
passwords = self.build_passwords(self.instance, kwargs)
self.build_extra_vars_file(self.instance, private_data_dir)
args = self.build_args(self.instance, private_data_dir, passwords)
env = self.build_env(self.instance, private_data_dir, private_data_files=private_data_files)
self.safe_env = build_safe_env(env)
credentials = self.build_credentials_list(self.instance)
for credential in credentials:
if credential:
credential.credential_type.inject_credential(credential, env, self.safe_cred_env, args, private_data_dir)
self.safe_env.update(self.safe_cred_env)
self.write_args_file(private_data_dir, args)
password_prompts = self.get_password_prompts(passwords)
expect_passwords = self.create_expect_passwords_data_struct(password_prompts, passwords)
params = {
'ident': self.instance.id,
'private_data_dir': private_data_dir,
'playbook': self.build_playbook_path_relative_to_cwd(self.instance, private_data_dir),
'inventory': self.build_inventory(self.instance, private_data_dir),
'passwords': expect_passwords,
'envvars': env,
'settings': {
'job_timeout': self.get_instance_timeout(self.instance),
'suppress_ansible_output': True,
},
}
if isinstance(self.instance, AdHocCommand):
params['module'] = self.build_module_name(self.instance)
params['module_args'] = self.build_module_args(self.instance)
if getattr(self.instance, 'use_fact_cache', False):
# Enable Ansible fact cache.
params['fact_cache_type'] = 'jsonfile'
else:
# Disable Ansible fact cache.
params['fact_cache_type'] = ''
if self.instance.is_container_group_task or settings.IS_K8S:
params['envvars'].pop('HOME', None)
'''
Delete parameters if the values are None or empty array
'''
for v in ['passwords', 'playbook', 'inventory']:
if not params[v]:
del params[v]
self.dispatcher = CallbackQueueDispatcher()
self.instance.log_lifecycle("running_playbook")
if isinstance(self.instance, SystemJob):
res = ansible_runner.interface.run(
project_dir=settings.BASE_DIR,
event_handler=self.event_handler,
finished_callback=self.finished_callback,
status_handler=self.status_handler,
**params,
)
else:
receptor_job = AWXReceptorJob(self, params)
res = receptor_job.run()
self.unit_id = receptor_job.unit_id
if not res:
return
status = res.status
rc = res.rc
if status == 'timeout':
self.instance.job_explanation = "Job terminated due to timeout"
status = 'failed'
extra_update_fields['job_explanation'] = self.instance.job_explanation
# ensure failure notification sends even if playbook_on_stats event is not triggered
handle_success_and_failure_notifications.apply_async([self.instance.job.id])
except Exception:
# this could catch programming or file system errors
extra_update_fields['result_traceback'] = traceback.format_exc()
logger.exception('%s Exception occurred while running task', self.instance.log_format)
finally:
logger.debug('%s finished running, producing %s events.', self.instance.log_format, self.event_ct)
try:
self.post_run_hook(self.instance, status)
except PostRunError as exc:
if status == 'successful':
status = exc.status
extra_update_fields['job_explanation'] = exc.args[0]
if exc.tb:
extra_update_fields['result_traceback'] = exc.tb
except Exception:
logger.exception('{} Post run hook errored.'.format(self.instance.log_format))
self.instance = self.update_model(pk)
self.instance = self.update_model(pk, status=status, emitted_events=self.event_ct, **extra_update_fields)
try:
self.final_run_hook(self.instance, status, private_data_dir, fact_modification_times)
except Exception:
logger.exception('{} Final run hook errored.'.format(self.instance.log_format))
self.instance.websocket_emit_status(status)
if status != 'successful':
if status == 'canceled':
raise AwxTaskError.TaskCancel(self.instance, rc)
else:
raise AwxTaskError.TaskError(self.instance, rc)
@task(queue=get_local_queuename)
class RunJob(BaseTask):
"""
Run a job using ansible-playbook.
"""
model = Job
event_model = JobEvent
event_data_key = 'job_id'
def build_private_data(self, job, private_data_dir):
"""
Returns a dict of the form
{
'credentials': {
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
...
},
'certificates': {
<awx.main.models.Credential>: <signed SSH certificate data>,
<awx.main.models.Credential>: <signed SSH certificate data>,
...
}
}
"""
private_data = {'credentials': {}}
for credential in job.credentials.prefetch_related('input_sources__source_credential').all():
# If we were sent SSH credentials, decrypt them and send them
# back (they will be written to a temporary file).
if credential.has_input('ssh_key_data'):
private_data['credentials'][credential] = credential.get_input('ssh_key_data', default='')
if credential.has_input('ssh_public_key_data'):
private_data.setdefault('certificates', {})[credential] = credential.get_input('ssh_public_key_data', default='')
return private_data
def build_passwords(self, job, runtime_passwords):
"""
Build a dictionary of passwords for SSH private key, SSH user, sudo/su
and ansible-vault.
"""
passwords = super(RunJob, self).build_passwords(job, runtime_passwords)
cred = job.machine_credential
if cred:
for field in ('ssh_key_unlock', 'ssh_password', 'become_password', 'vault_password'):
value = runtime_passwords.get(field, cred.get_input('password' if field == 'ssh_password' else field, default=''))
if value not in ('', 'ASK'):
passwords[field] = value
for cred in job.vault_credentials:
field = 'vault_password'
vault_id = cred.get_input('vault_id', default=None)
if vault_id:
field = 'vault_password.{}'.format(vault_id)
if field in passwords:
raise RuntimeError('multiple vault credentials were specified with --vault-id {}@prompt'.format(vault_id))
value = runtime_passwords.get(field, cred.get_input('vault_password', default=''))
if value not in ('', 'ASK'):
passwords[field] = value
'''
Only 1 value can be provided for a unique prompt string. Prefer ssh
key unlock over network key unlock.
'''
if 'ssh_key_unlock' not in passwords:
for cred in job.network_credentials:
if cred.inputs.get('ssh_key_unlock'):
passwords['ssh_key_unlock'] = runtime_passwords.get('ssh_key_unlock', cred.get_input('ssh_key_unlock', default=''))
break
return passwords
def build_env(self, job, private_data_dir, private_data_files=None):
"""
Build environment dictionary for ansible-playbook.
"""
env = super(RunJob, self).build_env(job, private_data_dir, private_data_files=private_data_files)
if private_data_files is None:
private_data_files = {}
# Set environment variables needed for inventory and job event
# callbacks to work.
env['JOB_ID'] = str(job.pk)
env['INVENTORY_ID'] = str(job.inventory.pk)
if job.project:
env['PROJECT_REVISION'] = job.project.scm_revision
env['ANSIBLE_RETRY_FILES_ENABLED'] = "False"
env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
if hasattr(settings, 'AWX_ANSIBLE_CALLBACK_PLUGINS') and settings.AWX_ANSIBLE_CALLBACK_PLUGINS:
env['ANSIBLE_CALLBACK_PLUGINS'] = ':'.join(settings.AWX_ANSIBLE_CALLBACK_PLUGINS)
env['AWX_HOST'] = settings.TOWER_URL_BASE
# Create a directory for ControlPath sockets that is unique to each job
cp_dir = os.path.join(private_data_dir, 'cp')
if not os.path.exists(cp_dir):
os.mkdir(cp_dir, 0o700)
# FIXME: more elegant way to manage this path in container
env['ANSIBLE_SSH_CONTROL_PATH_DIR'] = '/runner/cp'
# Set environment variables for cloud credentials.
cred_files = private_data_files.get('credentials', {})
for cloud_cred in job.cloud_credentials:
if cloud_cred and cloud_cred.credential_type.namespace == 'openstack' and cred_files.get(cloud_cred, ''):
env['OS_CLIENT_CONFIG_FILE'] = to_container_path(cred_files.get(cloud_cred, ''), private_data_dir)
for network_cred in job.network_credentials:
env['ANSIBLE_NET_USERNAME'] = network_cred.get_input('username', default='')
env['ANSIBLE_NET_PASSWORD'] = network_cred.get_input('password', default='')
ssh_keyfile = cred_files.get(network_cred, '')
if ssh_keyfile:
env['ANSIBLE_NET_SSH_KEYFILE'] = ssh_keyfile
authorize = network_cred.get_input('authorize', default=False)
env['ANSIBLE_NET_AUTHORIZE'] = str(int(authorize))
if authorize:
env['ANSIBLE_NET_AUTH_PASS'] = network_cred.get_input('authorize_password', default='')
path_vars = (
('ANSIBLE_COLLECTIONS_PATHS', 'collections_paths', 'requirements_collections', '~/.ansible/collections:/usr/share/ansible/collections'),
('ANSIBLE_ROLES_PATH', 'roles_path', 'requirements_roles', '~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles'),
)
config_values = read_ansible_config(job.project.get_project_path(), list(map(lambda x: x[1], path_vars)))
for env_key, config_setting, folder, default in path_vars:
paths = default.split(':')
if env_key in env:
for path in env[env_key].split(':'):
if path not in paths:
paths = [env[env_key]] + paths
elif config_setting in config_values:
for path in config_values[config_setting].split(':'):
if path not in paths:
paths = [config_values[config_setting]] + paths
paths = [os.path.join(CONTAINER_ROOT, folder)] + paths
env[env_key] = os.pathsep.join(paths)
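# Illustrative result (CONTAINER_ROOT is assumed to be the runner mount point,
# e.g. '/runner'): ANSIBLE_COLLECTIONS_PATHS could become
# '/runner/requirements_collections:~/.ansible/collections:/usr/share/ansible/collections'.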
return env
def build_args(self, job, private_data_dir, passwords):
"""
Build command line argument list for running ansible-playbook,
optionally using ssh-agent for public/private key authentication.
"""
creds = job.machine_credential
ssh_username, become_username, become_method = '', '', ''
if creds:
ssh_username = creds.get_input('username', default='')
become_method = creds.get_input('become_method', default='')
become_username = creds.get_input('become_username', default='')
else:
become_method = None
become_username = ""
# Always specify the normal SSH user as root by default. Since this
# task is normally running in the background under a service account,
# it doesn't make sense to rely on ansible-playbook's default of using
# the current user.
ssh_username = ssh_username or 'root'
args = []
if job.job_type == 'check':
args.append('--check')
args.extend(['-u', sanitize_jinja(ssh_username)])
if 'ssh_password' in passwords:
args.append('--ask-pass')
if job.become_enabled:
args.append('--become')
if job.diff_mode:
args.append('--diff')
if become_method:
args.extend(['--become-method', sanitize_jinja(become_method)])
if become_username:
args.extend(['--become-user', sanitize_jinja(become_username)])
if 'become_password' in passwords:
args.append('--ask-become-pass')
# Support prompting for multiple vault passwords
for k, v in passwords.items():
if k.startswith('vault_password'):
if k == 'vault_password':
args.append('--ask-vault-pass')
else:
# split only on the first dot in case the vault ID itself contains a dot
vault_id = k.split('.', 1)[1]
args.append('--vault-id')
args.append('{}@prompt'.format(vault_id))
if job.forks:
if settings.MAX_FORKS > 0 and job.forks > settings.MAX_FORKS:
logger.warning(f'Maximum number of forks ({settings.MAX_FORKS}) exceeded.')
args.append('--forks=%d' % settings.MAX_FORKS)
else:
args.append('--forks=%d' % job.forks)
if job.force_handlers:
args.append('--force-handlers')
if job.limit:
args.extend(['-l', job.limit])
if job.verbosity:
args.append('-%s' % ('v' * min(5, job.verbosity)))
if job.job_tags:
args.extend(['-t', job.job_tags])
if job.skip_tags:
args.append('--skip-tags=%s' % job.skip_tags)
if job.start_at_task:
args.append('--start-at-task=%s' % job.start_at_task)
return args
def build_playbook_path_relative_to_cwd(self, job, private_data_dir):
return job.playbook
def build_extra_vars_file(self, job, private_data_dir):
# Define special extra_vars for AWX, combine with job.extra_vars.
extra_vars = job.awx_meta_vars()
if job.extra_vars_dict:
extra_vars.update(json.loads(job.decrypted_extra_vars()))
# By default, all extra vars disallow Jinja2 template usage for
# security reasons; top level key-values defined in JT.extra_vars, however,
# are allowed as "safe" (because they can only be set by users with
# higher levels of privilege - those that have the ability to create and
# edit Job Templates)
safe_dict = {}
if job.job_template and settings.ALLOW_JINJA_IN_EXTRA_VARS == 'template':
safe_dict = job.job_template.extra_vars_dict
return self._write_extra_vars_file(private_data_dir, extra_vars, safe_dict)
def build_credentials_list(self, job):
return job.credentials.prefetch_related('input_sources__source_credential').all()
def get_password_prompts(self, passwords={}):
d = super(RunJob, self).get_password_prompts(passwords)
d[r'Enter passphrase for .*:\s*?$'] = 'ssh_key_unlock'
d[r'Bad passphrase, try again for .*:\s*?$'] = ''
for method in PRIVILEGE_ESCALATION_METHODS:
d[r'%s password.*:\s*?$' % (method[0])] = 'become_password'
d[r'%s password.*:\s*?$' % (method[0].upper())] = 'become_password'
d[r'BECOME password.*:\s*?$'] = 'become_password'
d[r'SSH password:\s*?$'] = 'ssh_password'
d[r'Password:\s*?$'] = 'ssh_password'
d[r'Vault password:\s*?$'] = 'vault_password'
for k, v in passwords.items():
if k.startswith('vault_password.'):
# split only on the first dot in case the vault ID itself contains a dot
vault_id = k.split('.', 1)[1]
d[r'Vault password \({}\):\s*?$'.format(vault_id)] = k
return d
def build_execution_environment_params(self, instance, private_data_dir):
if settings.IS_K8S:
return {}
params = super(RunJob, self).build_execution_environment_params(instance, private_data_dir)
# If this has an insights agent and it is not already mounted then show it
insights_dir = os.path.dirname(settings.INSIGHTS_SYSTEM_ID_FILE)
if instance.use_fact_cache and os.path.exists(insights_dir):
logger.info('Mounting insights directory {} into the execution environment container'.format(insights_dir))
params.setdefault('container_volume_mounts', [])
params['container_volume_mounts'].extend(
[
f"{insights_dir}:{insights_dir}:Z",
]
)
return params
def pre_run_hook(self, job, private_data_dir):
super(RunJob, self).pre_run_hook(job, private_data_dir)
if job.inventory is None:
error = _('Job could not start because it does not have a valid inventory.')
self.update_model(job.pk, status='failed', job_explanation=error)
raise RuntimeError(error)
elif job.project is None:
error = _('Job could not start because it does not have a valid project.')
self.update_model(job.pk, status='failed', job_explanation=error)
raise RuntimeError(error)
elif job.execution_environment is None:
error = _('Job could not start because no Execution Environment could be found.')
self.update_model(job.pk, status='error', job_explanation=error)
raise RuntimeError(error)
elif job.project.status in ('error', 'failed'):
msg = _('The project revision for this job template is unknown due to a failed update.')
job = self.update_model(job.pk, status='failed', job_explanation=msg)
raise RuntimeError(msg)
project_path = job.project.get_project_path(check_if_exists=False)
job_revision = job.project.scm_revision
sync_needs = []
source_update_tag = 'update_{}'.format(job.project.scm_type)
branch_override = bool(job.scm_branch and job.scm_branch != job.project.scm_branch)
if not job.project.scm_type:
pass # manual projects are not synced, user has responsibility for that
elif not os.path.exists(project_path):
logger.debug('Performing fresh clone of {} on this instance.'.format(job.project))
sync_needs.append(source_update_tag)
elif job.project.scm_type == 'git' and job.project.scm_revision and (not branch_override):
try:
git_repo = git.Repo(project_path)
if job_revision == git_repo.head.commit.hexsha:
logger.debug('Skipping project sync for {} because commit is locally available'.format(job.log_format))
else:
sync_needs.append(source_update_tag)
except (ValueError, BadGitName, git.exc.InvalidGitRepositoryError):
logger.debug('Needed commit for {} not in local source tree, will sync with remote'.format(job.log_format))
sync_needs.append(source_update_tag)
else:
logger.debug('Project not available locally, {} will sync with remote'.format(job.log_format))
sync_needs.append(source_update_tag)
has_cache = os.path.exists(os.path.join(job.project.get_cache_path(), job.project.cache_id))
# Galaxy requirements are not supported for manual projects
if job.project.scm_type and ((not has_cache) or branch_override):
sync_needs.extend(['install_roles', 'install_collections'])
if sync_needs:
pu_ig = job.instance_group
pu_en = job.execution_node
sync_metafields = dict(
launch_type="sync",
job_type='run',
job_tags=','.join(sync_needs),
status='running',
instance_group=pu_ig,
execution_node=pu_en,
celery_task_id=job.celery_task_id,
)
if branch_override:
sync_metafields['scm_branch'] = job.scm_branch
sync_metafields['scm_clean'] = True # to accommodate force pushes
if 'update_' not in sync_metafields['job_tags']:
sync_metafields['scm_revision'] = job_revision
local_project_sync = job.project.create_project_update(_eager_fields=sync_metafields)
create_partition(local_project_sync.event_class._meta.db_table, start=local_project_sync.created)
# save the associated job before calling run() so that a
# cancel() call on the job can cancel the project update
job = self.update_model(job.pk, project_update=local_project_sync)
project_update_task = local_project_sync._get_task_class()
try:
# the job private_data_dir is passed so sync can download roles and collections there
sync_task = project_update_task(job_private_data_dir=private_data_dir)
sync_task.run(local_project_sync.id)
local_project_sync.refresh_from_db()
job = self.update_model(job.pk, scm_revision=local_project_sync.scm_revision)
except Exception:
local_project_sync.refresh_from_db()
if local_project_sync.status != 'canceled':
job = self.update_model(
job.pk,
status='failed',
job_explanation=(
'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}'
% ('project_update', local_project_sync.name, local_project_sync.id)
),
)
raise
job.refresh_from_db()
if job.cancel_flag:
return
else:
# Case where a local sync is not needed, meaning that local tree is
# up-to-date with project, job is running project current version
if job_revision:
job = self.update_model(job.pk, scm_revision=job_revision)
# Project update does not copy the folder, so copy here
RunProjectUpdate.make_local_copy(job.project, private_data_dir, scm_revision=job_revision)
if job.inventory.kind == 'smart':
# cache smart inventory memberships so that the host_filter query is not
# run inside the event saving code
update_smart_memberships_for_inventory(job.inventory)
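# final_run_hook: after the playbook run completes, write any gathered facts back to the
# per-host fact cache (if enabled) and queue a recomputation of the inventory's computed fields.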
def final_run_hook(self, job, status, private_data_dir, fact_modification_times):
super(RunJob, self).final_run_hook(job, status, private_data_dir, fact_modification_times)
if not private_data_dir:
# If there's no private data dir, that means we didn't get into the
# actual `run()` call; this _usually_ means something failed in
# the pre_run_hook method
return
if job.use_fact_cache:
job.finish_job_fact_cache(
os.path.join(private_data_dir, 'artifacts', 'fact_cache'),
fact_modification_times,
)
try:
inventory = job.inventory
except Inventory.DoesNotExist:
pass
else:
if inventory is not None:
update_inventory_computed_fields.delay(inventory.id)
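# RunProjectUpdate executes the bundled project_update.yml playbook against localhost to clone
# or update a project's source tree. It serializes updates per project with a file lock, records
# the resulting SCM revision from a set_fact event, and maintains a durable cache of Galaxy
# roles/collections keyed by the project's cache_id.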
@task(queue=get_local_queuename)
class RunProjectUpdate(BaseTask):
model = ProjectUpdate
event_model = ProjectUpdateEvent
event_data_key = 'project_update_id'
def __init__(self, *args, job_private_data_dir=None, **kwargs):
super(RunProjectUpdate, self).__init__(*args, **kwargs)
self.playbook_new_revision = None
self.original_branch = None
self.job_private_data_dir = job_private_data_dir
def event_handler(self, event_data):
super(RunProjectUpdate, self).event_handler(event_data)
returned_data = event_data.get('event_data', {})
if returned_data.get('task_action', '') == 'set_fact':
returned_facts = returned_data.get('res', {}).get('ansible_facts', {})
if 'scm_version' in returned_facts:
self.playbook_new_revision = returned_facts['scm_version']
def build_private_data(self, project_update, private_data_dir):
"""
Return SSH private key data needed for this project update.
Returns a dict of the form
{
'credentials': {
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>
}
}
"""
private_data = {'credentials': {}}
if project_update.credential:
credential = project_update.credential
if credential.has_input('ssh_key_data'):
private_data['credentials'][credential] = credential.get_input('ssh_key_data', default='')
return private_data
def build_passwords(self, project_update, runtime_passwords):
"""
Build a dictionary of passwords for SSH private key unlock and SCM
username/password.
"""
passwords = super(RunProjectUpdate, self).build_passwords(project_update, runtime_passwords)
if project_update.credential:
passwords['scm_key_unlock'] = project_update.credential.get_input('ssh_key_unlock', default='')
passwords['scm_username'] = project_update.credential.get_input('username', default='')
passwords['scm_password'] = project_update.credential.get_input('password', default='')
return passwords
def build_env(self, project_update, private_data_dir, private_data_files=None):
"""
Build environment dictionary for ansible-playbook.
"""
env = super(RunProjectUpdate, self).build_env(project_update, private_data_dir, private_data_files=private_data_files)
env['ANSIBLE_RETRY_FILES_ENABLED'] = str(False)
env['ANSIBLE_ASK_PASS'] = str(False)
env['ANSIBLE_BECOME_ASK_PASS'] = str(False)
env['DISPLAY'] = '' # Prevent stupid password popup when running tests.
# give ansible a hint about the intended tmpdir to work around issues
# like https://github.com/ansible/ansible/issues/30064
env['TMP'] = settings.AWX_ISOLATION_BASE_PATH
env['PROJECT_UPDATE_ID'] = str(project_update.pk)
if settings.GALAXY_IGNORE_CERTS:
env['ANSIBLE_GALAXY_IGNORE'] = True
# build out env vars for Galaxy credentials (in order)
galaxy_server_list = []
if project_update.project.organization:
for i, cred in enumerate(project_update.project.organization.galaxy_credentials.all()):
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_URL'] = cred.get_input('url')
auth_url = cred.get_input('auth_url', default=None)
token = cred.get_input('token', default=None)
if token:
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_TOKEN'] = token
if auth_url:
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_AUTH_URL'] = auth_url
galaxy_server_list.append(f'server{i}')
if galaxy_server_list:
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join(galaxy_server_list)
return env
def _build_scm_url_extra_vars(self, project_update):
"""
Helper method to build SCM url and extra vars with parameters needed
for authentication.
"""
extra_vars = {}
if project_update.credential:
scm_username = project_update.credential.get_input('username', default='')
scm_password = project_update.credential.get_input('password', default='')
else:
scm_username = ''
scm_password = ''
scm_type = project_update.scm_type
scm_url = update_scm_url(scm_type, project_update.scm_url, check_special_cases=False)
scm_url_parts = urlparse.urlsplit(scm_url)
# Prefer the username/password in the URL, if provided.
scm_username = scm_url_parts.username or scm_username
scm_password = scm_url_parts.password or scm_password
if scm_username:
if scm_type == 'svn':
extra_vars['scm_username'] = scm_username
extra_vars['scm_password'] = scm_password
scm_password = False
if scm_url_parts.scheme != 'svn+ssh':
scm_username = False
elif scm_url_parts.scheme.endswith('ssh'):
scm_password = False
elif scm_type in ('insights', 'archive'):
extra_vars['scm_username'] = scm_username
extra_vars['scm_password'] = scm_password
scm_url = update_scm_url(scm_type, scm_url, scm_username, scm_password, scp_format=True)
else:
scm_url = update_scm_url(scm_type, scm_url, scp_format=True)
# Pass the extra accept_hostkey parameter to the git module.
if scm_type == 'git' and scm_url_parts.scheme.endswith('ssh'):
extra_vars['scm_accept_hostkey'] = 'true'
return scm_url, extra_vars
def build_inventory(self, instance, private_data_dir):
return 'localhost,'
def build_args(self, project_update, private_data_dir, passwords):
"""
Build command line argument list for running ansible-playbook,
optionally using ssh-agent for public/private key authentication.
"""
args = []
if getattr(settings, 'PROJECT_UPDATE_VVV', False):
args.append('-vvv')
if project_update.job_tags:
args.extend(['-t', project_update.job_tags])
return args
def build_extra_vars_file(self, project_update, private_data_dir):
extra_vars = {}
scm_url, extra_vars_new = self._build_scm_url_extra_vars(project_update)
extra_vars.update(extra_vars_new)
scm_branch = project_update.scm_branch
if project_update.job_type == 'run' and (not project_update.branch_override):
if project_update.project.scm_revision:
scm_branch = project_update.project.scm_revision
elif not scm_branch:
raise RuntimeError('Could not determine a revision to run from project.')
elif not scm_branch:
scm_branch = 'HEAD'
galaxy_creds_are_defined = project_update.project.organization and project_update.project.organization.galaxy_credentials.exists()
if not galaxy_creds_are_defined and (settings.AWX_ROLES_ENABLED or settings.AWX_COLLECTIONS_ENABLED):
logger.warning('Galaxy role/collection syncing is enabled, but no ' f'credentials are configured for {project_update.project.organization}.')
extra_vars.update(
{
'projects_root': settings.PROJECTS_ROOT.rstrip('/'),
'local_path': os.path.basename(project_update.project.local_path),
'project_path': project_update.get_project_path(check_if_exists=False), # deprecated
'insights_url': settings.INSIGHTS_URL_BASE,
'awx_license_type': get_license().get('license_type', 'UNLICENSED'),
'awx_version': get_awx_version(),
'scm_url': scm_url,
'scm_branch': scm_branch,
'scm_clean': project_update.scm_clean,
'scm_track_submodules': project_update.scm_track_submodules,
'roles_enabled': galaxy_creds_are_defined and settings.AWX_ROLES_ENABLED,
'collections_enabled': galaxy_creds_are_defined and settings.AWX_COLLECTIONS_ENABLED,
}
)
# apply custom refspec from user for PR refs and the like
if project_update.scm_refspec:
extra_vars['scm_refspec'] = project_update.scm_refspec
elif project_update.project.allow_override:
# If branch is override-able, do extra fetch for all branches
extra_vars['scm_refspec'] = 'refs/heads/*:refs/remotes/origin/*'
if project_update.scm_type == 'archive':
# for raw archive, prevent error moving files between volumes
extra_vars['ansible_remote_tmp'] = os.path.join(project_update.get_project_path(check_if_exists=False), '.ansible_awx', 'tmp')
self._write_extra_vars_file(private_data_dir, extra_vars)
def build_playbook_path_relative_to_cwd(self, project_update, private_data_dir):
return os.path.join('project_update.yml')
def get_password_prompts(self, passwords={}):
d = super(RunProjectUpdate, self).get_password_prompts(passwords)
d[r'Username for.*:\s*?$'] = 'scm_username'
d[r'Password for.*:\s*?$'] = 'scm_password'
d[r'Password:\s*?$'] = 'scm_password'
d[r'\S+?@\S+?\'s\s+?password:\s*?$'] = 'scm_password'
d[r'Enter passphrase for .*:\s*?$'] = 'scm_key_unlock'
d[r'Bad passphrase, try again for .*:\s*?$'] = ''
# FIXME: Configure whether we should auto accept host keys?
d[r'^Are you sure you want to continue connecting \(yes/no\)\?\s*?$'] = 'yes'
return d
def _update_dependent_inventories(self, project_update, dependent_inventory_sources):
scm_revision = project_update.project.scm_revision
inv_update_class = InventoryUpdate._get_task_class()
for inv_src in dependent_inventory_sources:
if not inv_src.update_on_project_update:
continue
if inv_src.scm_last_revision == scm_revision:
logger.debug('Skipping SCM inventory update for `{}` because ' 'project has not changed.'.format(inv_src.name))
continue
logger.debug('Local dependent inventory update for `{}`.'.format(inv_src.name))
with transaction.atomic():
if InventoryUpdate.objects.filter(inventory_source=inv_src, status__in=ACTIVE_STATES).exists():
logger.debug('Skipping SCM inventory update for `{}` because ' 'another update is already active.'.format(inv_src.name))
continue
if settings.IS_K8S:
instance_group = InventoryUpdate(inventory_source=inv_src).preferred_instance_groups[0]
else:
instance_group = project_update.instance_group
local_inv_update = inv_src.create_inventory_update(
_eager_fields=dict(
launch_type='scm',
status='running',
instance_group=instance_group,
execution_node=project_update.execution_node,
source_project_update=project_update,
celery_task_id=project_update.celery_task_id,
)
)
try:
create_partition(local_inv_update.event_class._meta.db_table, start=local_inv_update.created)
inv_update_class().run(local_inv_update.id)
except Exception:
logger.exception('{} Unhandled exception updating dependent SCM inventory sources.'.format(project_update.log_format))
try:
project_update.refresh_from_db()
except ProjectUpdate.DoesNotExist:
logger.warning('Project update deleted during updates of dependent SCM inventory sources.')
break
try:
local_inv_update.refresh_from_db()
except InventoryUpdate.DoesNotExist:
logger.warning('%s Dependent inventory update deleted during execution.', project_update.log_format)
continue
if project_update.cancel_flag:
logger.info('Project update {} was canceled while updating dependent inventories.'.format(project_update.log_format))
break
if local_inv_update.cancel_flag:
logger.info('Continuing to process project dependencies after {} was canceled'.format(local_inv_update.log_format))
if local_inv_update.status == 'successful':
inv_src.scm_last_revision = scm_revision
inv_src.save(update_fields=['scm_last_revision'])
def release_lock(self, instance):
try:
fcntl.lockf(self.lock_fd, fcntl.LOCK_UN)
except IOError as e:
logger.error("I/O error({0}) while trying to release lock file [{1}]: {2}".format(e.errno, instance.get_lock_file(), e.strerror))
os.close(self.lock_fd)
raise
os.close(self.lock_fd)
self.lock_fd = None
# Note: blocking=False is not supported.
def acquire_lock(self, instance, blocking=True):
lock_path = instance.get_lock_file()
if lock_path is None:
# If from migration or someone blanked local_path for any other reason, recoverable by save
instance.save()
lock_path = instance.get_lock_file()
if lock_path is None:
raise RuntimeError(u'Invalid lock file path')
try:
self.lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT)
except OSError as e:
logger.error("I/O error({0}) while trying to open lock file [{1}]: {2}".format(e.errno, lock_path, e.strerror))
raise
start_time = time.time()
while True:
try:
instance.refresh_from_db(fields=['cancel_flag'])
if instance.cancel_flag:
logger.debug("ProjectUpdate({0}) was canceled".format(instance.pk))
return
fcntl.lockf(self.lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
break
except IOError as e:
if e.errno not in (errno.EAGAIN, errno.EACCES):
os.close(self.lock_fd)
logger.error("I/O error({0}) while trying to aquire lock on file [{1}]: {2}".format(e.errno, lock_path, e.strerror))
raise
else:
time.sleep(1.0)
waiting_time = time.time() - start_time
if waiting_time > 1.0:
logger.info('{} spent {} waiting to acquire lock for local source tree ' 'for path {}.'.format(instance.log_format, waiting_time, lock_path))
def pre_run_hook(self, instance, private_data_dir):
super(RunProjectUpdate, self).pre_run_hook(instance, private_data_dir)
# re-create root project folder if a natural disaster has destroyed it
if not os.path.exists(settings.PROJECTS_ROOT):
os.mkdir(settings.PROJECTS_ROOT)
project_path = instance.project.get_project_path(check_if_exists=False)
self.acquire_lock(instance)
self.original_branch = None
if instance.scm_type == 'git' and instance.branch_override:
if os.path.exists(project_path):
git_repo = git.Repo(project_path)
if git_repo.head.is_detached:
self.original_branch = git_repo.head.commit
else:
self.original_branch = git_repo.active_branch
if not os.path.exists(project_path):
os.makedirs(project_path) # used as container mount
stage_path = os.path.join(instance.get_cache_path(), 'stage')
if os.path.exists(stage_path):
logger.warning('{0} unexpectedly existed before update'.format(stage_path))
shutil.rmtree(stage_path)
os.makedirs(stage_path) # presence of empty cache indicates lack of roles or collections
# the project update playbook is not in a git repo, but uses a vendoring directory
# to be consistent with the ansible-runner model,
# that is moved into the runner project folder here
awx_playbooks = self.get_path_to('..', 'playbooks')
copy_tree(awx_playbooks, os.path.join(private_data_dir, 'project'))
@staticmethod
def clear_project_cache(cache_dir, keep_value):
if os.path.isdir(cache_dir):
for entry in os.listdir(cache_dir):
old_path = os.path.join(cache_dir, entry)
if entry not in (keep_value, 'stage'):
# invalidate, then delete
new_path = os.path.join(cache_dir, '.~~delete~~' + entry)
try:
os.rename(old_path, new_path)
shutil.rmtree(new_path)
except OSError:
logger.warning(f"Could not remove cache directory {old_path}")
@staticmethod
def make_local_copy(p, job_private_data_dir, scm_revision=None):
"""Copy project content (roles and collections) to a job private_data_dir
:param object p: Either a project or a project update
:param str job_private_data_dir: The root of the target ansible-runner folder
:param str scm_revision: For branch_override cases, the git revision to copy
"""
project_path = p.get_project_path(check_if_exists=False)
destination_folder = os.path.join(job_private_data_dir, 'project')
if not scm_revision:
scm_revision = p.scm_revision
if p.scm_type == 'git':
git_repo = git.Repo(project_path)
if not os.path.exists(destination_folder):
os.mkdir(destination_folder, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
tmp_branch_name = 'awx_internal/{}'.format(uuid4())
# always clone based on specific job revision
if not p.scm_revision:
raise RuntimeError('Unexpectedly could not determine a revision to run from project.')
source_branch = git_repo.create_head(tmp_branch_name, p.scm_revision)
# git clone must take file:// syntax for source repo or else options like depth will be ignored
source_as_uri = Path(project_path).as_uri()
git.Repo.clone_from(
source_as_uri,
destination_folder,
branch=source_branch,
depth=1,
single_branch=True, # shallow, do not copy full history
)
# submodules copied in loop because shallow copies from local HEADs are ideal
# and no git clone submodule options are compatible with minimum requirements
for submodule in git_repo.submodules:
subrepo_path = os.path.abspath(os.path.join(project_path, submodule.path))
subrepo_destination_folder = os.path.abspath(os.path.join(destination_folder, submodule.path))
subrepo_uri = Path(subrepo_path).as_uri()
git.Repo.clone_from(subrepo_uri, subrepo_destination_folder, depth=1, single_branch=True)
# force option is necessary because remote refs are not counted, although no information is lost
git_repo.delete_head(tmp_branch_name, force=True)
else:
copy_tree(project_path, destination_folder, preserve_symlinks=1)
# copy over the roles and collection cache to job folder
cache_path = os.path.join(p.get_cache_path(), p.cache_id)
subfolders = []
if settings.AWX_COLLECTIONS_ENABLED:
subfolders.append('requirements_collections')
if settings.AWX_ROLES_ENABLED:
subfolders.append('requirements_roles')
for subfolder in subfolders:
cache_subpath = os.path.join(cache_path, subfolder)
if os.path.exists(cache_subpath):
dest_subpath = os.path.join(job_private_data_dir, subfolder)
copy_tree(cache_subpath, dest_subpath, preserve_symlinks=1)
logger.debug('{0} {1} prepared {2} from cache'.format(type(p).__name__, p.pk, dest_subpath))
def post_run_hook(self, instance, status):
super(RunProjectUpdate, self).post_run_hook(instance, status)
# To avoid hangs, very important to release lock even if errors happen here
try:
if self.playbook_new_revision:
instance.scm_revision = self.playbook_new_revision
instance.save(update_fields=['scm_revision'])
# Copy roles and collection folders to the durable cache
base_path = instance.get_cache_path()
stage_path = os.path.join(base_path, 'stage')
if status == 'successful' and 'install_' in instance.job_tags:
# Clear other caches before saving this one, and if branch is overridden
# do not clear cache for main branch, but do clear it for other branches
self.clear_project_cache(base_path, keep_value=instance.project.cache_id)
cache_path = os.path.join(base_path, instance.cache_id)
if os.path.exists(stage_path):
if os.path.exists(cache_path):
logger.warning('Rewriting cache at {0}, performance may suffer'.format(cache_path))
shutil.rmtree(cache_path)
os.rename(stage_path, cache_path)
logger.debug('{0} wrote to cache at {1}'.format(instance.log_format, cache_path))
elif os.path.exists(stage_path):
shutil.rmtree(stage_path) # cannot trust content update produced
if self.job_private_data_dir:
if status == 'successful':
# copy project folder before resetting to default branch
# because some git-tree-specific resources (like submodules) might matter
self.make_local_copy(instance, self.job_private_data_dir)
if self.original_branch:
# for git project syncs, non-default branches can cause problems,
# so restore the repo to the branch it was on before this run
try:
self.original_branch.checkout()
except Exception:
# this could have failed due to dirty tree, but difficult to predict all cases
logger.exception('Failed to restore project repo to prior state after {}'.format(instance.log_format))
finally:
self.release_lock(instance)
p = instance.project
if instance.job_type == 'check' and status not in (
'failed',
'canceled',
):
if self.playbook_new_revision:
p.scm_revision = self.playbook_new_revision
else:
if status == 'successful':
logger.error("{} Could not find scm revision in check".format(instance.log_format))
p.playbook_files = p.playbooks
p.inventory_files = p.inventories
p.save(update_fields=['scm_revision', 'playbook_files', 'inventory_files'])
# Update any inventories that depend on this project
dependent_inventory_sources = p.scm_inventory_sources.filter(update_on_project_update=True)
if len(dependent_inventory_sources) > 0:
if status == 'successful' and instance.launch_type != 'sync':
self._update_dependent_inventories(instance, dependent_inventory_sources)
def build_execution_environment_params(self, instance, private_data_dir):
if settings.IS_K8S:
return {}
params = super(RunProjectUpdate, self).build_execution_environment_params(instance, private_data_dir)
project_path = instance.get_project_path(check_if_exists=False)
cache_path = instance.get_cache_path()
params.setdefault('container_volume_mounts', [])
params['container_volume_mounts'].extend(
[
f"{project_path}:{project_path}:Z",
f"{cache_path}:{cache_path}:Z",
]
)
return params
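# RunInventoryUpdate builds and runs an `ansible-inventory --list --export` command for the
# configured source (credentials and environment come from InventorySource injectors),
# optionally syncing the source project first, then imports the resulting JSON into the
# database in post_run_hook.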
@task(queue=get_local_queuename)
class RunInventoryUpdate(BaseTask):
model = InventoryUpdate
event_model = InventoryUpdateEvent
event_data_key = 'inventory_update_id'
def build_private_data(self, inventory_update, private_data_dir):
"""
Return private data needed for inventory update.
Returns a dict of the form
{
'credentials': {
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>
}
}
If no private data is needed, return None.
"""
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[inventory_update.source]()
return injector.build_private_data(inventory_update, private_data_dir)
def build_env(self, inventory_update, private_data_dir, private_data_files=None):
"""Build environment dictionary for ansible-inventory.
Most environment variables related to credentials or configuration
are accomplished by the inventory source injectors (in this method)
or custom credential type injectors (in main run method).
"""
env = super(RunInventoryUpdate, self).build_env(inventory_update, private_data_dir, private_data_files=private_data_files)
if private_data_files is None:
private_data_files = {}
# Pass inventory source ID to inventory script.
env['INVENTORY_SOURCE_ID'] = str(inventory_update.inventory_source_id)
env['INVENTORY_UPDATE_ID'] = str(inventory_update.pk)
env.update(STANDARD_INVENTORY_UPDATE_ENV)
injector = None
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[inventory_update.source]()
if injector is not None:
env = injector.build_env(inventory_update, env, private_data_dir, private_data_files)
# All CLOUD_PROVIDERS sources implement as inventory plugin from collection
env['ANSIBLE_INVENTORY_ENABLED'] = 'auto'
if inventory_update.source == 'scm':
for env_k in inventory_update.source_vars_dict:
if str(env_k) not in env and str(env_k) not in settings.INV_ENV_VARIABLE_BLOCKED:
env[str(env_k)] = str(inventory_update.source_vars_dict[env_k])
elif inventory_update.source == 'file':
raise NotImplementedError('Cannot update file sources through the task system.')
if inventory_update.source == 'scm' and inventory_update.source_project_update:
env_key = 'ANSIBLE_COLLECTIONS_PATHS'
config_setting = 'collections_paths'
folder = 'requirements_collections'
default = '~/.ansible/collections:/usr/share/ansible/collections'
config_values = read_ansible_config(os.path.join(private_data_dir, 'project'), [config_setting])
paths = default.split(':')
if env_key in env:
for path in env[env_key].split(':'):
if path not in paths:
paths = [env[env_key]] + paths
elif config_setting in config_values:
for path in config_values[config_setting].split(':'):
if path not in paths:
paths = [config_values[config_setting]] + paths
paths = [os.path.join(CONTAINER_ROOT, folder)] + paths
env[env_key] = os.pathsep.join(paths)
if 'ANSIBLE_COLLECTIONS_PATHS' in env:
paths = env['ANSIBLE_COLLECTIONS_PATHS'].split(':')
else:
paths = ['~/.ansible/collections', '/usr/share/ansible/collections']
paths.append('/usr/share/automation-controller/collections')
env['ANSIBLE_COLLECTIONS_PATHS'] = os.pathsep.join(paths)
return env
def write_args_file(self, private_data_dir, args):
path = os.path.join(private_data_dir, 'args')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(' '.join(args))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
def build_args(self, inventory_update, private_data_dir, passwords):
"""Build the command line argument list for running an inventory
import.
"""
# Get the inventory source and inventory.
inventory_source = inventory_update.inventory_source
inventory = inventory_source.inventory
if inventory is None:
raise RuntimeError('Inventory Source is not associated with an Inventory.')
args = ['ansible-inventory', '--list', '--export']
# Add arguments for the source inventory file/script/thing
rel_path = self.pseudo_build_inventory(inventory_update, private_data_dir)
container_location = os.path.join(CONTAINER_ROOT, rel_path)
source_location = os.path.join(private_data_dir, rel_path)
args.append('-i')
args.append(container_location)
args.append('--output')
args.append(os.path.join(CONTAINER_ROOT, 'artifacts', str(inventory_update.id), 'output.json'))
if os.path.isdir(source_location):
playbook_dir = container_location
else:
playbook_dir = os.path.dirname(container_location)
args.extend(['--playbook-dir', playbook_dir])
if inventory_update.verbosity:
args.append('-' + 'v' * min(5, inventory_update.verbosity * 2 + 1))
return args
def build_inventory(self, inventory_update, private_data_dir):
return None # what runner expects in order to not deal with inventory
def pseudo_build_inventory(self, inventory_update, private_data_dir):
"""Inventory imports are ran through a management command
we pass the inventory in args to that command, so this is not considered
to be "Ansible" inventory (by runner) even though it is
Eventually, we would like to cut out the management command,
and thus use this as the real inventory
"""
src = inventory_update.source
injector = None
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[src]()
if injector is not None:
content = injector.inventory_contents(inventory_update, private_data_dir)
# must be a statically named file
inventory_path = os.path.join(private_data_dir, 'inventory', injector.filename)
with open(inventory_path, 'w') as f:
f.write(content)
os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
rel_path = os.path.join('inventory', injector.filename)
elif src == 'scm':
rel_path = os.path.join('project', inventory_update.source_path)
return rel_path
def build_playbook_path_relative_to_cwd(self, inventory_update, private_data_dir):
return None
def build_credentials_list(self, inventory_update):
# All credentials not used by inventory source injector
return inventory_update.get_extra_credentials()
def pre_run_hook(self, inventory_update, private_data_dir):
super(RunInventoryUpdate, self).pre_run_hook(inventory_update, private_data_dir)
source_project = None
if inventory_update.inventory_source:
source_project = inventory_update.inventory_source.source_project
if (
inventory_update.source == 'scm' and inventory_update.launch_type != 'scm' and source_project and source_project.scm_type
): # never ever update manual projects
# Check if the content cache exists, so that we do not unnecessarily re-download roles
sync_needs = ['update_{}'.format(source_project.scm_type)]
has_cache = os.path.exists(os.path.join(source_project.get_cache_path(), source_project.cache_id))
# Galaxy requirements are not supported for manual projects
if not has_cache:
sync_needs.extend(['install_roles', 'install_collections'])
local_project_sync = source_project.create_project_update(
_eager_fields=dict(
launch_type="sync",
job_type='run',
job_tags=','.join(sync_needs),
status='running',
execution_node=inventory_update.execution_node,
instance_group=inventory_update.instance_group,
celery_task_id=inventory_update.celery_task_id,
)
)
# associate the inventory update before calling run() so that a
# cancel() call on the inventory update can cancel the project update
local_project_sync.scm_inventory_updates.add(inventory_update)
project_update_task = local_project_sync._get_task_class()
try:
sync_task = project_update_task(job_private_data_dir=private_data_dir)
sync_task.run(local_project_sync.id)
local_project_sync.refresh_from_db()
inventory_update.inventory_source.scm_last_revision = local_project_sync.scm_revision
inventory_update.inventory_source.save(update_fields=['scm_last_revision'])
except Exception:
inventory_update = self.update_model(
inventory_update.pk,
status='failed',
job_explanation=(
'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}'
% ('project_update', local_project_sync.name, local_project_sync.id)
),
)
raise
elif inventory_update.source == 'scm' and inventory_update.launch_type == 'scm' and source_project:
# This follows update, not sync, so make copy here
RunProjectUpdate.make_local_copy(source_project, private_data_dir)
def post_run_hook(self, inventory_update, status):
super(RunInventoryUpdate, self).post_run_hook(inventory_update, status)
if status != 'successful':
return # nothing to save, step out of the way to allow error reporting
private_data_dir = inventory_update.job_env['AWX_PRIVATE_DATA_DIR']
expected_output = os.path.join(private_data_dir, 'artifacts', 'output.json')
with open(expected_output) as f:
data = json.load(f)
# build inventory save options
options = dict(
overwrite=inventory_update.overwrite,
overwrite_vars=inventory_update.overwrite_vars,
)
src = inventory_update.source
if inventory_update.enabled_var:
options['enabled_var'] = inventory_update.enabled_var
options['enabled_value'] = inventory_update.enabled_value
else:
if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
options['enabled_var'] = getattr(settings, '%s_ENABLED_VAR' % src.upper())
if getattr(settings, '%s_ENABLED_VALUE' % src.upper(), False):
options['enabled_value'] = getattr(settings, '%s_ENABLED_VALUE' % src.upper())
if inventory_update.host_filter:
options['host_filter'] = inventory_update.host_filter
if getattr(settings, '%s_EXCLUDE_EMPTY_GROUPS' % src.upper()):
options['exclude_empty_groups'] = True
if getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper(), False):
options['instance_id_var'] = getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper())
# Verbosity is applied to saving process, as well as ansible-inventory CLI option
if inventory_update.verbosity:
options['verbosity'] = inventory_update.verbosity
handler = SpecialInventoryHandler(
self.event_handler,
self.cancel_callback,
verbosity=inventory_update.verbosity,
job_timeout=self.get_instance_timeout(self.instance),
start_time=inventory_update.started,
counter=self.event_ct,
initial_line=self.end_line,
)
inv_logger = logging.getLogger('awx.main.commands.inventory_import')
formatter = inv_logger.handlers[0].formatter
formatter.job_start = inventory_update.started
handler.formatter = formatter
inv_logger.handlers[0] = handler
from awx.main.management.commands.inventory_import import Command as InventoryImportCommand
cmd = InventoryImportCommand()
try:
# save the inventory data to database.
# canceling exceptions will be handled in the global post_run_hook
cmd.perform_update(options, data, inventory_update)
except PermissionDenied as exc:
logger.exception('License error saving {} content'.format(inventory_update.log_format))
raise PostRunError(str(exc), status='error')
except PostRunError:
logger.exception('Error saving {} content, rolling back changes'.format(inventory_update.log_format))
raise
except Exception:
logger.exception('Exception saving {} content, rolling back changes.'.format(inventory_update.log_format))
raise PostRunError('Error occurred while saving inventory data, see traceback or server logs', status='error', tb=traceback.format_exc())
@task(queue=get_local_queuename)
class RunAdHocCommand(BaseTask):
"""
Run an ad hoc command using ansible.
"""
model = AdHocCommand
event_model = AdHocCommandEvent
event_data_key = 'ad_hoc_command_id'
def build_private_data(self, ad_hoc_command, private_data_dir):
"""
Return SSH private key data needed for this ad hoc command (only if
stored in DB as ssh_key_data).
Returns a dict of the form
{
'credentials': {
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
<awx.main.models.Credential>: <credential_decrypted_ssh_key_data>,
...
},
'certificates': {
<awx.main.models.Credential>: <signed SSH certificate data>,
<awx.main.models.Credential>: <signed SSH certificate data>,
...
}
}
"""
# If we were sent SSH credentials, decrypt them and send them
# back (they will be written to a temporary file).
creds = ad_hoc_command.credential
private_data = {'credentials': {}}
if creds and creds.has_input('ssh_key_data'):
private_data['credentials'][creds] = creds.get_input('ssh_key_data', default='')
if creds and creds.has_input('ssh_public_key_data'):
private_data.setdefault('certificates', {})[creds] = creds.get_input('ssh_public_key_data', default='')
return private_data
def build_passwords(self, ad_hoc_command, runtime_passwords):
"""
Build a dictionary of passwords for SSH private key, SSH user and
sudo/su.
"""
passwords = super(RunAdHocCommand, self).build_passwords(ad_hoc_command, runtime_passwords)
cred = ad_hoc_command.credential
if cred:
for field in ('ssh_key_unlock', 'ssh_password', 'become_password'):
value = runtime_passwords.get(field, cred.get_input('password' if field == 'ssh_password' else field, default=''))
if value not in ('', 'ASK'):
passwords[field] = value
return passwords
def build_env(self, ad_hoc_command, private_data_dir, private_data_files=None):
"""
Build environment dictionary for ansible.
"""
env = super(RunAdHocCommand, self).build_env(ad_hoc_command, private_data_dir, private_data_files=private_data_files)
# Set environment variables needed for inventory and ad hoc event
# callbacks to work.
env['AD_HOC_COMMAND_ID'] = str(ad_hoc_command.pk)
env['INVENTORY_ID'] = str(ad_hoc_command.inventory.pk)
env['INVENTORY_HOSTVARS'] = str(True)
env['ANSIBLE_LOAD_CALLBACK_PLUGINS'] = '1'
env['ANSIBLE_SFTP_BATCH_MODE'] = 'False'
return env
def build_args(self, ad_hoc_command, private_data_dir, passwords):
"""
Build command line argument list for running ansible, optionally using
ssh-agent for public/private key authentication.
"""
creds = ad_hoc_command.credential
ssh_username, become_username, become_method = '', '', ''
if creds:
ssh_username = creds.get_input('username', default='')
become_method = creds.get_input('become_method', default='')
become_username = creds.get_input('become_username', default='')
else:
become_method = None
become_username = ""
# Always specify the normal SSH user as root by default. Since this
# task is normally running in the background under a service account,
# it doesn't make sense to rely on ansible's default of using the
# current user.
ssh_username = ssh_username or 'root'
args = []
if ad_hoc_command.job_type == 'check':
args.append('--check')
args.extend(['-u', sanitize_jinja(ssh_username)])
if 'ssh_password' in passwords:
args.append('--ask-pass')
# We only specify sudo/su user and password if explicitly given by the
# credential. Credential should never specify both sudo and su.
if ad_hoc_command.become_enabled:
args.append('--become')
if become_method:
args.extend(['--become-method', sanitize_jinja(become_method)])
if become_username:
args.extend(['--become-user', sanitize_jinja(become_username)])
if 'become_password' in passwords:
args.append('--ask-become-pass')
if ad_hoc_command.forks: # FIXME: Max limit?
args.append('--forks=%d' % ad_hoc_command.forks)
if ad_hoc_command.diff_mode:
args.append('--diff')
if ad_hoc_command.verbosity:
args.append('-%s' % ('v' * min(5, ad_hoc_command.verbosity)))
extra_vars = ad_hoc_command.awx_meta_vars()
if ad_hoc_command.extra_vars_dict:
redacted_extra_vars, removed_vars = extract_ansible_vars(ad_hoc_command.extra_vars_dict)
if removed_vars:
raise ValueError(_("{} are prohibited from use in ad hoc commands.").format(", ".join(removed_vars)))
extra_vars.update(ad_hoc_command.extra_vars_dict)
if ad_hoc_command.limit:
args.append(ad_hoc_command.limit)
else:
args.append('all')
return args
def build_extra_vars_file(self, ad_hoc_command, private_data_dir):
extra_vars = ad_hoc_command.awx_meta_vars()
if ad_hoc_command.extra_vars_dict:
redacted_extra_vars, removed_vars = extract_ansible_vars(ad_hoc_command.extra_vars_dict)
if removed_vars:
raise ValueError(_("{} are prohibited from use in ad hoc commands.").format(", ".join(removed_vars)))
extra_vars.update(ad_hoc_command.extra_vars_dict)
self._write_extra_vars_file(private_data_dir, extra_vars)
def build_module_name(self, ad_hoc_command):
return ad_hoc_command.module_name
def build_module_args(self, ad_hoc_command):
module_args = ad_hoc_command.module_args
if settings.ALLOW_JINJA_IN_EXTRA_VARS != 'always':
module_args = sanitize_jinja(module_args)
return module_args
def build_playbook_path_relative_to_cwd(self, job, private_data_dir):
return None
def get_password_prompts(self, passwords={}):
d = super(RunAdHocCommand, self).get_password_prompts()
d[r'Enter passphrase for .*:\s*?$'] = 'ssh_key_unlock'
d[r'Bad passphrase, try again for .*:\s*?$'] = ''
for method in PRIVILEGE_ESCALATION_METHODS:
d[r'%s password.*:\s*?$' % (method[0])] = 'become_password'
d[r'%s password.*:\s*?$' % (method[0].upper())] = 'become_password'
d[r'BECOME password.*:\s*?$'] = 'become_password'
d[r'SSH password:\s*?$'] = 'ssh_password'
d[r'Password:\s*?$'] = 'ssh_password'
return d
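# RunSystemJob invokes awx-manage maintenance commands (e.g. cleanup_jobs,
# cleanup_activitystream), passing arguments parsed from the system job's extra_vars.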
@task(queue=get_local_queuename)
class RunSystemJob(BaseTask):
model = SystemJob
event_model = SystemJobEvent
event_data_key = 'system_job_id'
def build_execution_environment_params(self, system_job, private_data_dir):
return {}
def build_args(self, system_job, private_data_dir, passwords):
args = ['awx-manage', system_job.job_type]
try:
# System Job extra_vars can be blank, must be JSON if not blank
if system_job.extra_vars == '':
json_vars = {}
else:
json_vars = json.loads(system_job.extra_vars)
if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'):
if 'days' in json_vars:
args.extend(['--days', str(json_vars.get('days', 60))])
if 'dry_run' in json_vars and json_vars['dry_run']:
args.extend(['--dry-run'])
if system_job.job_type == 'cleanup_jobs':
args.extend(
['--jobs', '--project-updates', '--inventory-updates', '--management-jobs', '--ad-hoc-commands', '--workflow-jobs', '--notifications']
)
except Exception:
logger.exception("{} Failed to parse system job".format(system_job.log_format))
return args
def write_args_file(self, private_data_dir, args):
path = os.path.join(private_data_dir, 'args')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(' '.join(args))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
def build_env(self, instance, private_data_dir, private_data_files=None):
base_env = super(RunSystemJob, self).build_env(instance, private_data_dir, private_data_files=private_data_files)
# TODO: this is able to run by turning off isolation
# the goal is to run it in a container instead
env = dict(os.environ.items())
env.update(base_env)
return env
def build_playbook_path_relative_to_cwd(self, job, private_data_dir):
return None
def build_inventory(self, instance, private_data_dir):
return None
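# Helpers for deep-copying model objects: _reconstruct_relationships restores the ForeignKey
# and many-to-many fields listed in FIELDS_TO_PRESERVE_AT_COPY on each copied object,
# remapping related objects to their copies when one exists in copy_mapping.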
def _reconstruct_relationships(copy_mapping):
for old_obj, new_obj in copy_mapping.items():
model = type(old_obj)
for field_name in getattr(model, 'FIELDS_TO_PRESERVE_AT_COPY', []):
field = model._meta.get_field(field_name)
if isinstance(field, ForeignKey):
if getattr(new_obj, field_name, None):
continue
related_obj = getattr(old_obj, field_name)
related_obj = copy_mapping.get(related_obj, related_obj)
setattr(new_obj, field_name, related_obj)
elif field.many_to_many:
for related_obj in getattr(old_obj, field_name).all():
logger.debug('Deep copy: Adding {} to {}({}).{} relationship'.format(related_obj, new_obj, model, field_name))
getattr(new_obj, field_name).add(copy_mapping.get(related_obj, related_obj))
new_obj.save()
@task(queue=get_local_queuename)
def deep_copy_model_obj(model_module, model_name, obj_pk, new_obj_pk, user_pk, uuid, permission_check_func=None):
sub_obj_list = cache.get(uuid)
if sub_obj_list is None:
logger.error('Deep copy {} from {} to {} failed unexpectedly.'.format(model_name, obj_pk, new_obj_pk))
return
logger.debug('Deep copy {} from {} to {}.'.format(model_name, obj_pk, new_obj_pk))
from awx.api.generics import CopyAPIView
from awx.main.signals import disable_activity_stream
model = getattr(importlib.import_module(model_module), model_name, None)
if model is None:
return
try:
obj = model.objects.get(pk=obj_pk)
new_obj = model.objects.get(pk=new_obj_pk)
creater = User.objects.get(pk=user_pk)
except ObjectDoesNotExist:
logger.warning("Object or user no longer exists.")
return
with transaction.atomic(), ignore_inventory_computed_fields(), disable_activity_stream():
copy_mapping = {}
for sub_obj_setup in sub_obj_list:
sub_model = getattr(importlib.import_module(sub_obj_setup[0]), sub_obj_setup[1], None)
if sub_model is None:
continue
try:
sub_obj = sub_model.objects.get(pk=sub_obj_setup[2])
except ObjectDoesNotExist:
continue
copy_mapping.update(CopyAPIView.copy_model_obj(obj, new_obj, sub_model, sub_obj, creater))
_reconstruct_relationships(copy_mapping)
if permission_check_func:
permission_check_func = getattr(getattr(importlib.import_module(permission_check_func[0]), permission_check_func[1]), permission_check_func[2])
permission_check_func(creater, copy_mapping.values())
if isinstance(new_obj, Inventory):
update_inventory_computed_fields.delay(new_obj.id)
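# Thread subclass that captures any exception raised in run() so the parent thread can
# re-raise it after the transmit finishes.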
class TransmitterThread(threading.Thread):
def run(self):
self.exc = None
try:
super().run()
except Exception:
self.exc = sys.exc_info()
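# AWXReceptorJob submits an ansible-runner job to Receptor: the runner payload is streamed
# over a socketpair (streamer='transmit'), a work unit is submitted with the appropriate work
# type (local or kubernetes), results are consumed with streamer='process', and a cancel
# watcher can tear the work unit down if the task is canceled.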
class AWXReceptorJob:
def __init__(self, task, runner_params=None):
self.task = task
self.runner_params = runner_params
self.unit_id = None
if self.task and not self.task.instance.is_container_group_task:
execution_environment_params = self.task.build_execution_environment_params(self.task.instance, runner_params['private_data_dir'])
self.runner_params['settings'].update(execution_environment_params)
def run(self):
# We establish a connection to the Receptor socket
receptor_ctl = get_receptor_ctl()
try:
return self._run_internal(receptor_ctl)
finally:
# Make sure to always release the work unit if we established it
if self.unit_id is not None and settings.RECEPTOR_RELEASE_WORK:
receptor_ctl.simple_command(f"work release {self.unit_id}")
def _run_internal(self, receptor_ctl):
# Create a socketpair: the left side is used for writing our payload
# (private data dir, kwargs), and the right side is passed to Receptor
# for reading.
sockin, sockout = socket.socketpair()
transmitter_thread = TransmitterThread(target=self.transmit, args=[sockin])
transmitter_thread.start()
# submit our work, passing
# in the right side of our socketpair for reading.
result = receptor_ctl.submit_work(worktype=self.work_type, payload=sockout.makefile('rb'), params=self.receptor_params)
self.unit_id = result['unitid']
self.task.update_model(self.task.instance.pk, work_unit_id=result['unitid'])
sockin.close()
sockout.close()
if transmitter_thread.exc:
raise transmitter_thread.exc[1].with_traceback(transmitter_thread.exc[2])
transmitter_thread.join()
resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, return_socket=True, return_sockfile=True)
# Both "processor" and "cancel_watcher" are spawned in separate threads.
# We wait for the first one to return. If cancel_watcher returns first,
# we yank the socket out from underneath the processor, which will cause it
# to exit. A reference to the processor_future is passed into the cancel_watcher_future,
# which exits if the job has finished normally. The context manager ensures we do not
# leave any threads laying around.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
processor_future = executor.submit(self.processor, resultfile)
cancel_watcher_future = executor.submit(self.cancel_watcher, processor_future)
futures = [processor_future, cancel_watcher_future]
first_future = concurrent.futures.wait(futures, return_when=concurrent.futures.FIRST_COMPLETED)
res = list(first_future.done)[0].result()
if res.status == 'canceled':
receptor_ctl.simple_command(f"work cancel {self.unit_id}")
resultsock.shutdown(socket.SHUT_RDWR)
resultfile.close()
elif res.status == 'error':
# TODO: There should be a more efficient way of getting this information
receptor_work_list = receptor_ctl.simple_command("work list")
detail = receptor_work_list[self.unit_id]['Detail']
state_name = receptor_work_list[self.unit_id]['StateName']
if 'exceeded quota' in detail:
logger.warning(detail)
log_name = self.task.instance.log_format
logger.warning(f"Could not launch pod for {log_name}. Exceeded quota.")
self.task.update_model(self.task.instance.pk, status='pending')
return
# If ansible-runner ran but an error occurred at runtime, the traceback information
# is saved via the status_handler passed in to the processor.
if state_name == 'Succeeded':
return res
if not self.task.instance.result_traceback:
raise RuntimeError(detail)
return res
# Spawned in a thread so Receptor can start reading before we finish writing; we
# write our payload to the left side of our socketpair.
@cleanup_new_process
def transmit(self, _socket):
if not settings.IS_K8S and self.work_type == 'local':
self.runner_params['only_transmit_kwargs'] = True
try:
ansible_runner.interface.run(streamer='transmit', _output=_socket.makefile('wb'), **self.runner_params)
finally:
# The socket must be shut down here, or the reader will hang forever.
_socket.shutdown(socket.SHUT_WR)
@cleanup_new_process
def processor(self, resultfile):
return ansible_runner.interface.run(
streamer='process',
quiet=True,
_input=resultfile,
event_handler=self.task.event_handler,
finished_callback=self.task.finished_callback,
status_handler=self.task.status_handler,
**self.runner_params,
)
@property
def receptor_params(self):
if self.task.instance.is_container_group_task:
spec_yaml = yaml.dump(self.pod_definition, explicit_start=True)
receptor_params = {
"secret_kube_pod": spec_yaml,
"pod_pending_timeout": getattr(settings, 'AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT', "5m"),
}
if self.credential:
kubeconfig_yaml = yaml.dump(self.kube_config, explicit_start=True)
receptor_params["secret_kube_config"] = kubeconfig_yaml
else:
private_data_dir = self.runner_params['private_data_dir']
receptor_params = {"params": f"--private-data-dir={private_data_dir}"}
return receptor_params
@property
def work_type(self):
if self.task.instance.is_container_group_task:
if self.credential:
work_type = 'kubernetes-runtime-auth'
else:
work_type = 'kubernetes-incluster-auth'
else:
work_type = 'local'
return work_type
@cleanup_new_process
def cancel_watcher(self, processor_future):
while True:
if processor_future.done():
return processor_future.result()
if self.task.cancel_callback():
result = namedtuple('result', ['status', 'rc'])
return result('canceled', 1)
time.sleep(1)
@property
def pod_definition(self):
ee = self.task.instance.execution_environment
default_pod_spec = get_default_pod_spec()
pod_spec_override = {}
if self.task and self.task.instance.instance_group.pod_spec_override:
pod_spec_override = parse_yaml_or_json(self.task.instance.instance_group.pod_spec_override)
pod_spec = {**default_pod_spec, **pod_spec_override}
pod_spec['spec']['containers'][0]['image'] = ee.image
pod_spec['spec']['containers'][0]['args'] = ['ansible-runner', 'worker', '--private-data-dir=/runner']
# Enforce EE Pull Policy
pull_options = {"always": "Always", "missing": "IfNotPresent", "never": "Never"}
if self.task and self.task.instance.execution_environment:
if self.task.instance.execution_environment.pull:
pod_spec['spec']['containers'][0]['imagePullPolicy'] = pull_options[self.task.instance.execution_environment.pull]
if self.task and self.task.instance.is_container_group_task:
# If EE credential is passed, create an imagePullSecret
if self.task.instance.execution_environment and self.task.instance.execution_environment.credential:
# Create pull secret in k8s cluster based on ee cred
from awx.main.scheduler.kubernetes import PodManager # prevent circular import
pm = PodManager(self.task.instance)
secret_name = pm.create_secret(job=self.task.instance)
# Inject secret name into podspec
pod_spec['spec']['imagePullSecrets'] = [{"name": secret_name}]
if self.task:
pod_spec['metadata'] = deepmerge(
pod_spec.get('metadata', {}),
dict(name=self.pod_name, labels={'ansible-awx': settings.INSTALL_UUID, 'ansible-awx-job-id': str(self.task.instance.id)}),
)
return pod_spec
@property
def pod_name(self):
return f"automation-job-{self.task.instance.id}"
@property
def credential(self):
return self.task.instance.instance_group.credential
@property
def namespace(self):
return self.pod_definition['metadata']['namespace']
@property
def kube_config(self):
host_input = self.credential.get_input('host')
config = {
"apiVersion": "v1",
"kind": "Config",
"preferences": {},
"clusters": [{"name": host_input, "cluster": {"server": host_input}}],
"users": [{"name": host_input, "user": {"token": self.credential.get_input('bearer_token')}}],
"contexts": [{"name": host_input, "context": {"cluster": host_input, "user": host_input, "namespace": self.namespace}}],
"current-context": host_input,
}
if self.credential.get_input('verify_ssl') and 'ssl_ca_cert' in self.credential.inputs:
config["clusters"][0]["cluster"]["certificate-authority-data"] = b64encode(
self.credential.get_input('ssl_ca_cert').encode() # encode to bytes
).decode() # decode the base64 data into a str
else:
config["clusters"][0]["cluster"]["insecure-skip-tls-verify"] = True
return config
|
FileChecker.py
|
from watchdog.events import FileSystemEventHandler
from watchdog.observers.polling import PollingObserver
from PIL import Image
from pystray import Menu, MenuItem, Icon
from tkinter import filedialog
from tkinter import ttk
import tkinter
import datetime as dt
import threading
import pystray
import json
import sys
import os
def resource_path(path):
if hasattr(sys, "_MEIPASS"):
return os.path.join(sys._MEIPASS, path)
else:
return os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), path)
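# Config is a dict persisted to FileCheckerConfig.json next to the script (or the frozen
# executable); a missing or unreadable file falls back to empty options.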
class Config(dict):
def __init__(self):
try:
self.file = resource_path("FileCheckerConfig.json")
with open(self.file, "r") as f:
option = json.load(f)
except Exception: # missing or invalid config file: fall back to empty options
option = {}
super().__init__(**option)
def write(self, **option):
self.update(option)
with open(self.file, "w") as f:
json.dump(self, f)
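# TaskTray wraps a pystray system-tray icon whose menu can hide, show, or quit the Tk main window.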
class TaskTray:
file = resource_path("FileCheckerIcon.ico")
def __init__(self, root):
self.root = root
def start(self):
options_map = {"隠す": self.root.withdraw, "表示": self.root.deiconify, "終了": lambda: self.root.after(1, self.quit)}
items = []
for option, callback in options_map.items():
items.append(MenuItem(option, callback, default=True if option == "表示" else False))
menu = Menu(*items)
image = Image.open(self.file)
self.icon = pystray.Icon("FileChecker", image, "FileChecker Icon", menu)
self.icon.run()
def quit(self):
self.icon.stop()
self.root.quit()
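# FileChecker is a watchdog event handler that appends a timestamped line to the log Text
# widget for create/modify/delete/move events and mirrors each line to the configured log
# file, highlighting deletions.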
class FileChecker(FileSystemEventHandler):
def __init__(self, txt):
self.txt = txt
self.logfile = None
self.txt.tag_config("delete", foreground="red", background="yellow")
def update_log(self, *text, red=False):
txt = "{}: {}\n".format(dt.datetime.now(), " ".join(text))
self.txt.configure(state=tkinter.NORMAL)
if red:
self.txt.insert(tkinter.END, txt, "delete")
else:
self.txt.insert(tkinter.END, txt)
self.txt.configure(state=tkinter.DISABLED)
self.txt.see(tkinter.END)
if config.get("file"):
if not self.logfile:
self.logfile = open(config["file"], "a")
self.logfile.write(txt)
# When a file is created
def on_created(self, event):
self.update_log(event.src_path, "が作成されました")
# When a file is modified
def on_modified(self, event):
self.update_log(event.src_path, "が変更されました")
# When a file is deleted
def on_deleted(self, event):
self.update_log(event.src_path, "が削除されました", red=True)
# When a file is moved
def on_moved(self, event):
self.update_log(event.src_path, "が", event.dest_path, "に移動しました")
def close(self):
if self.logfile:
self.logfile.close()
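# start() attaches a recursive PollingObserver to the chosen directory and turns the button
# into a pause control; pause() stops the observer and restores the start action.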
def start(path, button, string):
if os.path.isdir(path):
observer = PollingObserver()
observer.schedule(event_handler, path, recursive=True)
observer.start()
button["command"] = lambda: pause(observer, button, string)
button["text"] = "一時停止"
def pause(observer, button, string):
observer.stop()
observer.join()
button["command"] = lambda: start(string.get(), button, string)
button["text"] = "スタート"
def select_path(e, directory=False):
if directory:
path = filedialog.askdirectory()
else:
path = filedialog.askopenfilename()
if path:
e.delete(0, tkinter.END)
e.insert(tkinter.END, path)
def save(path, root):
if path:
config.write(file=path)
root.destroy()
def ask_logfile():
subroot = tkinter.Toplevel()
subroot.title("FileCheckerAskLogFile")
subroot.geometry("300x200")
label1 = ttk.Label(subroot, text="ログの出力先ファイルパスを\n指定してください\n(存在しないファイルを指定した場合は\n新しく生成されます)")
label1.grid(column=0, row=0)
label2 = ttk.Label(subroot, text="現在設定されているパスは{}".format(config.get("file") if "file" in config else "ありません"))
label2.grid(column=1, row=2)
path = tkinter.StringVar()
entry = ttk.Entry(subroot, textvariable=path)
entry.grid(column=0, row=1)
button1 = ttk.Button(subroot, text="参照", command=lambda: select_path(entry))
button1.grid(column=1, row=1)
button2 = ttk.Button(subroot, text="決定", command=lambda: save(path.get(), subroot))
button2.grid(column=0, row=2)
def create():
root = tkinter.Tk()
root.title("FileChecker")
root.geometry("800x500")
menu = tkinter.Menu(root)
root.config(menu=menu)
config_menu = tkinter.Menu(root)
menu.add_cascade(label="設定", menu=config_menu)
config_menu.add_command(label="ログ", command=ask_logfile)
label1 = ttk.Label(root, text="監視対象のディレクトリ")
label1.grid(column=0, row=0)
st = tkinter.StringVar()
entry = ttk.Entry(root, textvariable=st)
entry.grid(column=0, row=1)
button1 = ttk.Button(root, text="参照", command=lambda: select_path(entry, directory=True))
button1.grid(column=1, row=1, sticky=tkinter.W)
button2 = ttk.Button(root, text="スタート")
button2["command"] = lambda: start(st.get(), button2, st)
button2.grid(column=0, row=2)
label2 = ttk.Label(root, text="ログ")
label2.grid(column=0, row=3)
log = tkinter.Text(root)
log.grid(column=0, row=4)
scrollbar = ttk.Scrollbar(root, orient=tkinter.VERTICAL, command=log.yview)
log["yscrollcommand"] = scrollbar.set
scrollbar.grid(column=1, row=4, sticky=(tkinter.N, tkinter.S))
log.configure(state=tkinter.DISABLED)
return root, log
def main():
global event_handler
global config
config = Config()
try:
root, log = create()
icon = TaskTray(root)
threading.Thread(target=icon.start).start()
event_handler = FileChecker(log)
root.mainloop()
finally:
event_handler.close()
try:
icon.icon.stop()
except Exception:
pass
if __name__ == "__main__":
main()
|
b.py
|
#encoding: utf-8
from flask import Flask,render_template, request, redirect, url_for, session, g
import config
from models import User, Question, Answer
from exts import db
import csv, operator,os
from sqlalchemy import or_
from decorators import login_required
from werkzeug.utils import secure_filename
from gevent import monkey
monkey.patch_all()
from gevent import pywsgi
import subprocess
import time
app = Flask(__name__)
app.config.from_object(config)
# Without this line, db registration will raise an error
db.init_app(app)
@app.route('/line/')
def line():
return render_template('line.html')
@app.route('/map1/')
def map1():
return render_template('map1.html')
# @app.route('/')
# def index():
# context = {
# 'questions': Question.query.order_by('-create_time').all()
# }
# return render_template('index.html',**context)
import threading
import multiprocessing
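# The index view spawns lease_frequency() in a separate process on every request and, on
# POST, saves the uploaded file under the static directory.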
@app.route('/', methods=['POST', 'GET'])
def index():
context = {
'questions': Question.query.order_by('-create_time').all(),
'is_success':False
}
print("asyn has a request!")
# time.sleep(10)
# lease_frequency()
# basepath = os.path.dirname(__file__) # directory of the current file
# print basepath
# os.system('cd'+basepath)
# os.system('python a.py')
p = multiprocessing.Process(target=lease_frequency)
p.start()
# os.system('python ./a.py')
# processor = "a.py"
# INTERPRETER = "/usr/bin/python"
# pargs = [INTERPRETER, processor]
# pargs.extend(["--input=inputMd5s"])
# subprocess.Popen(pargs)
if request.method == 'POST':
f = request.files['file']
basepath = os.path.dirname(__file__) # directory of the current file
upload_path = os.path.join(basepath, 'static', 'upload', secure_filename(f.filename)) # note: the target folder must already exist, otherwise saving fails with a missing-path error
f.save(upload_path)
context['is_success'] = True
# return redirect(url_for('uploads'))
return render_template('index.html', **context)
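# lease_frequency() reads static/uploads/train.csv and computes the relative frequency of
# each OVERTIME value; the result is currently only printed (the template render is commented out).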
@app.route('/lease_frequency/')
def lease_frequency():
print(11111111111111111111111111111111111)
frequency = {}
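# Count how many rows share each OVERTIME value, then divide by the total row count to get relative frequencies.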
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static/uploads/train.csv')) as csvfile:
reader = [each for each in csv.DictReader(csvfile)]
base = len(reader)
for row in reader:
time = int(row['OVERTIME'])
if time not in frequency:
frequency[time] = 1
else:
frequency[time] += 1
for each in frequency:
frequency[each] = frequency[each] / float(base)
minutes = frequency.keys()
fre = frequency.values()
result = {'min':minutes,
'fre':fre
}
print('======================================')
print(result)
return 'finished!!'
# return render_template('lease_frequency.html', **result)
@app.route('/route/')
def route():
return render_template('route.html')
@app.route('/login/',methods=['GET','POST'])
def login():
if request.method == 'GET':
return render_template('login.html')
else:
telephone = request.form.get('telephone')
password = request.form.get('password')
user = User.query.filter(User.telephone == telephone).first()
if user and user.check_password(password):
session['user_id'] = user.id
session.permanent = True
return redirect(url_for('index'))
else:
return u'手机号或密码错误!'
@app.route('/regist/',methods=['GET','POST'])
def regist():
if request.method == 'GET':
return render_template('regist2.html')
else:
telephone = request.form.get('telephone')
username = request.form.get('username')
password1 = request.form.get('password1')
password2 = request.form.get('password2')
user = User.query.filter(User.telephone == telephone).first()
if user:
return '该手机号已被注册,请更换手机号码'
else:
if password1 != password2:
return '两次密码不相同'
else:
user = User(telephone=telephone, username=username, password=password1)
db.session.add(user)
db.session.commit()
# Registration succeeded; redirect to the login page
return redirect(url_for('login'))
@app.route('/logout/')
def logout():
# session.pop('user_id')
del session['user_id']
# session.clear()
return redirect(url_for('login'))
@app.route('/question/',methods=['GET','POST'])
@login_required
def question():
if request.method == 'GET':
return render_template('question.html')
else:
title = request.form.get('title')
content = request.form.get('content')
question = Question(title=title, content=content)
# user_id = session.get('user_id')
# user = User.query.filter(User.id == user_id).first()
question.author = g.user
db.session.add(question)
db.session.commit()
return redirect(url_for('index'))
# Execution order:
# before_request -> view function -> context_processor
@app.before_request
def my_before_request():
user_id = session.get('user_id')
if user_id:
user = User.query.filter(User.id== user_id).first()
if user:
g.user = user
@app.context_processor
def my_context_processor():
if hasattr(g, 'user'):
return {'user':g.user}
# user_id = session.get('user_id')
# if user_id:
# user = User.query.filter(User.id == user_id).first()
# if user:
# return {'user':user}
return {}
@app.route('/detail/<question_id>/')
def detail(question_id):
question_model = Question.query.filter(Question.id == question_id).first()
return render_template('detail.html',question=question_model)
@app.route('/add_answer/', methods=['POST'])
@login_required
def add_answer():
content = request.form.get('answer_content')
question_id = request.form.get('question_id')
answer = Answer(content=content)
# user_id = session['user_id']
# user = User.query.filter(User.id== user_id).first()
answer.author = g.user
answer.question = Question.query.filter(Question.id == question_id).first()
db.session.add(answer)
db.session.commit()
return redirect(url_for('detail',question_id=question_id))
@app.route('/search/')
def search():
q = request.args.get('q')
# OR condition: match questions whose title or content contains the query
questions = Question.query.filter(or_(Question.title.contains(q),
Question.content.contains(q))).order_by('-create_time')
return render_template('index.html', questions=questions)
if __name__ == '__main__':
app.run('127.0.0.1', 5001)
# server = pywsgi.WSGIServer(('127.0.0.1', 5000), app)
# server.serve_forever()
|
test_scheduler.py
|
import unittest
from datetime import datetime, timedelta
import os
import signal
import time
from dateutil.relativedelta import relativedelta
from threading import Thread
from rq import Queue
from rq.compat import as_text
from rq.job import Job
import warnings
from rq_scheduler import Scheduler
from rq_scheduler.utils import to_unix
from tests import RQTestCase
def say_hello(name=None):
"""A job with a single argument and a return value."""
if name is None:
name = 'Stranger'
return 'Hi there, %s!' % (name,)
def tl(l):
return [as_text(i) for i in l]
def simple_addition(x, y, z):
return x + y + z
class TestScheduler(RQTestCase):
def setUp(self):
super(TestScheduler, self).setUp()
self.scheduler = Scheduler(connection=self.testconn)
def test_birth_and_death_registration(self):
"""
When the scheduler registers its birth, besides creating a key, it should
also set an expiry that's a few seconds longer than its polling
interval so it automatically expires if the scheduler is unexpectedly
terminated.
"""
key = Scheduler.scheduler_key
self.assertNotIn(key, tl(self.testconn.keys('*')))
scheduler = Scheduler(connection=self.testconn, interval=20)
scheduler.register_birth()
self.assertIn(key, tl(self.testconn.keys('*')))
self.assertEqual(self.testconn.ttl(key), 30)
self.assertFalse(self.testconn.hexists(key, 'death'))
self.assertRaises(ValueError, scheduler.register_birth)
scheduler.register_death()
self.assertTrue(self.testconn.hexists(key, 'death'))
def test_create_job(self):
"""
Ensure that jobs are created properly.
"""
job = self.scheduler._create_job(say_hello, args=(), kwargs={})
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(job, job_from_queue)
self.assertEqual(job_from_queue.func, say_hello)
def test_create_job_with_ttl(self):
"""
Ensure that TTL is passed to RQ.
"""
job = self.scheduler._create_job(say_hello, ttl=2, args=(), kwargs={})
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(2, job_from_queue.ttl)
def test_create_job_with_id(self):
"""
Ensure that ID is passed to RQ.
"""
job = self.scheduler._create_job(say_hello, id='id test', args=(), kwargs={})
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual('id test', job_from_queue.id)
def test_create_job_with_description(self):
"""
Ensure that description is passed to RQ.
"""
job = self.scheduler._create_job(say_hello, description='description', args=(), kwargs={})
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual('description', job_from_queue.description)
def test_job_not_persisted_if_commit_false(self):
"""
Ensure jobs are only saved to Redis if commit=True.
"""
job = self.scheduler._create_job(say_hello, commit=False)
self.assertEqual(self.testconn.hgetall(job.key), {})
def test_create_scheduled_job(self):
"""
Ensure that scheduled jobs are put in the scheduler queue with the right score
"""
scheduled_time = datetime.utcnow()
job = self.scheduler.enqueue_at(scheduled_time, say_hello)
self.assertEqual(job, Job.fetch(job.id, connection=self.testconn))
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.assertEqual(self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id),
to_unix(scheduled_time))
def test_enqueue_in(self):
"""
Ensure that jobs have the right scheduled time.
"""
right_now = datetime.utcnow()
time_delta = timedelta(minutes=1)
job = self.scheduler.enqueue_in(time_delta, say_hello)
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.assertEqual(self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id),
to_unix(right_now + time_delta))
time_delta = timedelta(hours=1)
job = self.scheduler.enqueue_in(time_delta, say_hello)
self.assertEqual(self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id),
to_unix(right_now + time_delta))
def test_get_jobs(self):
"""
Ensure get_jobs() returns all jobs until the specified time.
"""
now = datetime.utcnow()
job = self.scheduler.enqueue_at(now, say_hello)
self.assertIn(job, self.scheduler.get_jobs(now))
future_time = now + timedelta(hours=1)
job = self.scheduler.enqueue_at(future_time, say_hello)
self.assertIn(job, self.scheduler.get_jobs(timedelta(hours=1, seconds=1)))
self.assertIn(job, [j[0] for j in self.scheduler.get_jobs(with_times=True)])
self.assertIsInstance(self.scheduler.get_jobs(with_times=True)[0][1], datetime)
self.assertNotIn(job, self.scheduler.get_jobs(timedelta(minutes=59, seconds=59)))
def test_get_jobs_to_queue(self):
"""
Ensure that jobs scheduled in the future are not queued.
"""
now = datetime.utcnow()
job = self.scheduler.enqueue_at(now, say_hello)
self.assertIn(job, self.scheduler.get_jobs_to_queue())
future_time = now + timedelta(hours=1)
job = self.scheduler.enqueue_at(future_time, say_hello)
self.assertNotIn(job, self.scheduler.get_jobs_to_queue())
def test_enqueue_job(self):
"""
When scheduled job is enqueued, make sure:
- Job is removed from the sorted set of scheduled jobs
- "enqueued_at" attribute is properly set
- Job appears in the right queue
- Queue is recognized by rq's Queue.all()
"""
now = datetime.utcnow()
queue_name = 'foo'
scheduler = Scheduler(connection=self.testconn, queue_name=queue_name)
job = scheduler.enqueue_at(now, say_hello)
self.scheduler.enqueue_job(job)
self.assertNotIn(job, tl(self.testconn.zrange(scheduler.scheduled_jobs_key, 0, 10)))
job = Job.fetch(job.id, connection=self.testconn)
self.assertTrue(job.enqueued_at is not None)
queue = scheduler.get_queue_for_job(job)
self.assertIn(job, queue.jobs)
queue = Queue.from_queue_key('rq:queue:{0}'.format(queue_name))
self.assertIn(job, queue.jobs)
self.assertIn(queue, Queue.all())
def test_job_membership(self):
now = datetime.utcnow()
job = self.scheduler.enqueue_at(now, say_hello)
self.assertIn(job, self.scheduler)
self.assertIn(job.id, self.scheduler)
self.assertNotIn("non-existing-job-id", self.scheduler)
def test_cancel_scheduled_job(self):
"""
When scheduled job is canceled, make sure:
- Job is removed from the sorted set of scheduled jobs
"""
# schedule a job to be enqueued one minute from now
time_delta = timedelta(minutes=1)
job = self.scheduler.enqueue_in(time_delta, say_hello)
# cancel the scheduled job and check that it's gone from the set
self.scheduler.cancel(job)
self.assertNotIn(job.id, tl(self.testconn.zrange(
self.scheduler.scheduled_jobs_key, 0, 1)))
def test_change_execution_time(self):
"""
Ensure that calling ``change_execution_time`` updates the job's score.
"""
job = self.scheduler.enqueue_at(datetime.utcnow(), say_hello)
new_date = datetime(2010, 1, 1)
self.scheduler.change_execution_time(job, new_date)
self.assertEqual(to_unix(new_date),
self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id))
self.scheduler.cancel(job)
self.assertRaises(ValueError, self.scheduler.change_execution_time, job, new_date)
def test_args_kwargs_are_passed_correctly(self):
"""
Ensure that arguments and keyword arguments are properly saved to jobs.
"""
job = self.scheduler.enqueue_at(datetime.utcnow(), simple_addition, 1, 1, 1)
self.assertEqual(job.args, (1, 1, 1))
job = self.scheduler.enqueue_at(datetime.utcnow(), simple_addition, z=1, y=1, x=1)
self.assertEqual(job.kwargs, {'x': 1, 'y': 1, 'z': 1})
job = self.scheduler.enqueue_at(datetime.utcnow(), simple_addition, 1, z=1, y=1)
self.assertEqual(job.kwargs, {'y': 1, 'z': 1})
self.assertEqual(job.args, (1,))
time_delta = timedelta(minutes=1)
job = self.scheduler.enqueue_in(time_delta, simple_addition, 1, 1, 1)
self.assertEqual(job.args, (1, 1, 1))
job = self.scheduler.enqueue_in(time_delta, simple_addition, z=1, y=1, x=1)
self.assertEqual(job.kwargs, {'x': 1, 'y': 1, 'z': 1})
job = self.scheduler.enqueue_in(time_delta, simple_addition, 1, z=1, y=1)
self.assertEqual(job.kwargs, {'y': 1, 'z': 1})
self.assertEqual(job.args, (1,))
def test_enqueue_is_deprecated(self):
"""
Ensure .enqueue() throws a DeprecationWarning
"""
with warnings.catch_warnings(record=True) as w:
# Enable all warnings
warnings.simplefilter("always")
job = self.scheduler.enqueue(datetime.utcnow(), say_hello)
self.assertEqual(1, len(w))
self.assertEqual(w[0].category, DeprecationWarning)
def test_enqueue_periodic(self):
"""
Ensure .enqueue_periodic() throws a DeprecationWarning
"""
with warnings.catch_warnings(record=True) as w:
# Enable all warnings
warnings.simplefilter("always")
job = self.scheduler.enqueue_periodic(datetime.utcnow(), 1, None, say_hello)
self.assertEqual(1, len(w))
self.assertEqual(w[0].category, DeprecationWarning)
def test_interval_and_repeat_persisted_correctly(self):
"""
Ensure that interval and repeat attributes get correctly saved in Redis.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello, interval=10, repeat=11)
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(job_from_queue.meta['interval'], 10)
self.assertEqual(job_from_queue.meta['repeat'], 11)
def test_interval_relativedelta_schedule(self):
"""
Ensure that the interval is correctly saved and restored (JSON dump).
"""
_rd = relativedelta(months=1, days=5, seconds=30)
job = self.scheduler.schedule(datetime.now(), say_hello, interval=_rd, repeat=11)
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(job_from_queue.meta['interval'], _rd)
def test_repeat_without_interval_raises_error(self):
# Ensure that an error is raised if repeat is specified without interval
def create_job():
self.scheduler.schedule(datetime.utcnow(), say_hello, repeat=11)
self.assertRaises(ValueError, create_job)
def test_job_with_intervals_get_rescheduled(self):
"""
Ensure jobs with interval attribute are put back in the scheduler
"""
time_now = datetime.utcnow()
interval = 10
job = self.scheduler.schedule(time_now, say_hello, interval=interval)
self.scheduler.enqueue_job(job)
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.assertEqual(self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id),
to_unix(time_now) + interval)
# Now the same thing using enqueue_periodic
job = self.scheduler.enqueue_periodic(time_now, interval, None, say_hello)
self.scheduler.enqueue_job(job)
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.assertEqual(self.testconn.zscore(self.scheduler.scheduled_jobs_key, job.id),
to_unix(time_now) + interval)
def test_job_with_repeat(self):
"""
Ensure jobs with repeat attribute are put back in the scheduler
X (repeat) number of times
"""
time_now = datetime.utcnow()
interval = 10
# If job is repeated once, the job shouldn't be put back in the queue
job = self.scheduler.schedule(time_now, say_hello, interval=interval, repeat=1)
self.scheduler.enqueue_job(job)
self.assertNotIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
# If job is repeated twice, it should only be put back in the queue once
job = self.scheduler.schedule(time_now, say_hello, interval=interval, repeat=2)
self.scheduler.enqueue_job(job)
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.scheduler.enqueue_job(job)
self.assertNotIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
time_now = datetime.utcnow()
# Now the same thing using enqueue_periodic
job = self.scheduler.enqueue_periodic(time_now, interval, 1, say_hello)
self.scheduler.enqueue_job(job)
self.assertNotIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
# If job is repeated twice, it should only be put back in the queue once
job = self.scheduler.enqueue_periodic(time_now, interval, 2, say_hello)
self.scheduler.enqueue_job(job)
self.assertIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
self.scheduler.enqueue_job(job)
self.assertNotIn(job.id,
tl(self.testconn.zrange(self.scheduler.scheduled_jobs_key, 0, 1)))
def test_missing_jobs_removed_from_scheduler(self):
"""
Ensure jobs that don't exist when queued are removed from the scheduler.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello)
job.cancel()
self.scheduler.get_jobs_to_queue()
self.assertNotIn(job.id, tl(self.testconn.zrange(
self.scheduler.scheduled_jobs_key, 0, 1)))
def test_periodic_jobs_sets_result_ttl(self):
"""
Ensure periodic jobs set result_ttl to infinite.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello, interval=5)
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(job.result_ttl, -1)
def test_periodic_jobs_sets_ttl(self):
"""
Ensure periodic jobs set ttl correctly.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello, interval=5, ttl=4)
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual(job.ttl, 4)
def test_periodic_job_sets_id(self):
"""
Ensure that ID is passed to RQ by schedule.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello, interval=5, id='id test')
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual('id test', job.id)
def test_periodic_job_sets_description(self):
"""
Ensure that description is passed to RQ by schedule.
"""
job = self.scheduler.schedule(datetime.utcnow(), say_hello, interval=5, description='description')
job_from_queue = Job.fetch(job.id, connection=self.testconn)
self.assertEqual('description', job.description)
def test_run(self):
"""
Check correct signal handling in Scheduler.run().
"""
def send_stop_signal():
"""
Sleep for 1 second, then send an INT signal to ourselves, so the
signal handler installed by scheduler.run() is called.
"""
time.sleep(1)
os.kill(os.getpid(), signal.SIGINT)
thread = Thread(target=send_stop_signal)
thread.start()
self.assertRaises(SystemExit, self.scheduler.run)
thread.join()
def test_scheduler_w_o_explicit_connection(self):
"""
Ensure instantiating Scheduler w/o explicit connection works.
"""
s = Scheduler()
self.assertEqual(s.connection, self.testconn)
def test_small_float_interval(self):
"""
Test that scheduler accepts 'interval' of type float, less than 1 second.
"""
key = Scheduler.scheduler_key
self.assertNotIn(key, tl(self.testconn.keys('*')))
scheduler = Scheduler(connection=self.testconn, interval=0.1) # testing interval = 0.1 second
self.assertEqual(scheduler._interval, 0.1)
#register birth
scheduler.register_birth()
self.assertIn(key, tl(self.testconn.keys('*')))
self.assertEqual(self.testconn.ttl(key), 10) # int(0.1) + 10 = 10
self.assertFalse(self.testconn.hexists(key, 'death'))
#enqueue a job
now = datetime.utcnow()
job = scheduler.enqueue_at(now, say_hello)
self.assertIn(job, self.scheduler.get_jobs_to_queue())
self.assertEqual(len(self.scheduler.get_jobs()), 1)
#register death
scheduler.register_death()
#test that run works with the small floating-point interval
def send_stop_signal():
"""
Sleep for 1 second, then send an INT signal to ourselves, so the
signal handler installed by scheduler.run() is called.
"""
time.sleep(1)
os.kill(os.getpid(), signal.SIGINT)
thread = Thread(target=send_stop_signal)
thread.start()
self.assertRaises(SystemExit, scheduler.run)
thread.join()
#all jobs must have been scheduled during 1 second
self.assertEqual(len(scheduler.get_jobs()), 0)
|
ComputeNodeTest.py
|
##########################################################################
#
# Copyright (c) 2011-2012, John Haddon. All rights reserved.
# Copyright (c) 2011-2013, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import unittest
import threading
import time
import IECore
import Gaffer
import GafferTest
class ComputeNodeTest( GafferTest.TestCase ) :
def testOperation( self ) :
n1 = GafferTest.AddNode()
n1["sum"].getValue()
dirtiedPlugs = GafferTest.CapturingSlot( n1.plugDirtiedSignal() )
setPlugs = GafferTest.CapturingSlot( n1.plugSetSignal() )
n1["op1"].setValue( 2 )
self.assertEqual( len( setPlugs ), 1 )
self.assertEqual( len( dirtiedPlugs ), 2 )
self.assertEqual( setPlugs[0][0].fullName(), "AddNode.op1" )
self.assertEqual( dirtiedPlugs[0][0].fullName(), "AddNode.op1" )
self.assertEqual( dirtiedPlugs[1][0].fullName(), "AddNode.sum" )
n1["op2"].setValue( 3 )
self.assertEqual( len( setPlugs ), 2 )
self.assertEqual( setPlugs[1][0].fullName(), "AddNode.op2" )
del dirtiedPlugs[:]
del setPlugs[:]
# plug set or dirty signals are not emitted during computation
self.assertEqual( n1.getChild("sum").getValue(), 5 )
self.assertEqual( len( setPlugs ), 0 )
self.assertEqual( len( dirtiedPlugs ), 0 )
# connect another add node onto the output of this one
n2 = GafferTest.AddNode( "Add2" )
dirtiedPlugs2 = GafferTest.CapturingSlot( n2.plugDirtiedSignal() )
setPlugs2 = GafferTest.CapturingSlot( n2.plugSetSignal() )
n2["op1"].setInput( n1["sum"] )
# connecting a plug doesn't set the value of the input plug
# immediately - the value is transferred only upon request.
self.assertEqual( len( setPlugs2 ), 0 )
self.assertEqual( len( dirtiedPlugs2 ), 2 )
self.assertEqual( dirtiedPlugs2[0][0].fullName(), "Add2.op1" )
self.assertEqual( dirtiedPlugs2[1][0].fullName(), "Add2.sum" )
del dirtiedPlugs2[:]
del setPlugs2[:]
self.assertEqual( n2["op1"].getValue(), 5 )
self.assertEqual( n2["sum"].getValue(), 5 )
# plug set or dirty signals are not emitted during computation
self.assertEqual( len( setPlugs2 ), 0 )
self.assertEqual( len( dirtiedPlugs2 ), 0 )
def testDirtyOfInputsWithConnections( self ) :
n1 = GafferTest.AddNode( "n1" )
n2 = GafferTest.AddNode( "n2" )
dirtied = GafferTest.CapturingSlot( n1.plugDirtiedSignal(), n2.plugDirtiedSignal() )
n2["op1"].setInput( n1["sum"] )
self.assertEqual( len( dirtied ), 2 )
self.failUnless( dirtied[0][0].isSame( n2["op1"] ) )
self.failUnless( dirtied[1][0].isSame( n2["sum"] ) )
del dirtied[:]
n1["op1"].setValue( 10 )
self.assertEqual( len( dirtied ), 4 )
self.failUnless( dirtied[0][0].isSame( n1["op1"] ) )
self.failUnless( dirtied[1][0].isSame( n1["sum"] ) )
self.failUnless( dirtied[2][0].isSame( n2["op1"] ) )
self.failUnless( dirtied[3][0].isSame( n2["sum"] ) )
self.assertEqual( n2.getChild( "sum" ).getValue(), 10 )
def testDirtyPlugComputesSameValueAsBefore( self ) :
n1 = GafferTest.AddNode( "N1" )
n2 = GafferTest.AddNode( "N2" )
n2.getChild( "op1" ).setInput( n1.getChild( "sum" ) )
n1.getChild( "op1" ).setValue( 1 )
n1.getChild( "op2" ).setValue( -1 )
self.assertEqual( n2.getChild( "sum" ).getValue(), 0 )
def testOutputsDirtyForNewNodes( self ) :
n = GafferTest.AddNode()
n["op1"].setValue( 1 )
n["op2"].setValue( 2 )
self.assertEqual( n["sum"].getValue(), 3 )
def testComputeInContext( self ) :
n = GafferTest.FrameNode()
self.assertEqual( n["output"].getValue(), 1 )
c = Gaffer.Context()
c.setFrame( 10 )
with c :
self.assertEqual( n["output"].getValue(), 10 )
def testComputeInThreads( self ) :
n = GafferTest.FrameNode()
def f( frame ) :
c = Gaffer.Context()
c.setFrame( frame )
with c :
time.sleep( 0.01 )
self.assertEqual( n["output"].getValue(), frame )
threads = []
for i in range( 0, 1000 ) :
t = threading.Thread( target = f, args = ( i, ) )
t.start()
threads.append( t )
for t in threads :
t.join()
def testDirtyNotPropagatedDuringCompute( self ) :
n1 = GafferTest.AddNode( "n1" )
n2 = GafferTest.AddNode( "n2" )
n1["op1"].setValue( 2 )
n1["op2"].setValue( 3 )
n2["op1"].setInput( n1["sum"] )
dirtyCapturer = GafferTest.CapturingSlot( n2.plugDirtiedSignal() )
self.assertEqual( n2["sum"].getValue(), 5 )
self.assertEqual( len( dirtyCapturer ), 0 )
def testWrongPlugSet( self ) :
n = GafferTest.BadNode()
self.assertRaises( RuntimeError, n["out1"].getValue )
def testPlugNotSet( self ) :
n = GafferTest.BadNode()
self.assertRaises( RuntimeError, n["out3"].getValue )
def testHash( self ) :
n = GafferTest.MultiplyNode()
self.assertHashesValid( n )
def testHashForPythonDerivedClasses( self ) :
n = GafferTest.AddNode()
self.assertHashesValid( n )
def testDisableCaching( self ) :
n = GafferTest.CachingTestNode()
n["in"].setValue( "d" )
v1 = n["out"].getValue( _copy=False )
v2 = n["out"].getValue( _copy=False )
self.assertEqual( v1, v2 )
self.assertEqual( v1, IECore.StringData( "d" ) )
# the objects should be one and the same, as the second computation
# should have shortcut and returned a cached result.
self.failUnless( v1.isSame( v2 ) )
n["out"].setFlags( Gaffer.Plug.Flags.Cacheable, False )
v3 = n["out"].getValue( _copy=False )
self.assertEqual( v3, IECore.StringData( "d" ) )
self.assertEqual( v3, v1 )
# we disabled caching, so the two values should
# be distinct objects, even though they are equal.
self.failIf( v3.isSame( v1 ) )
def testConnectedPlugsShareHashesAndCacheEntries( self ) :
class Out( Gaffer.ComputeNode ) :
def __init__( self, name="Out" ) :
Gaffer.ComputeNode.__init__( self, name )
self.addChild( Gaffer.ObjectPlug( "oOut", Gaffer.Plug.Direction.Out, IECore.NullObject() ) )
self.addChild( Gaffer.FloatPlug( "fOut", Gaffer.Plug.Direction.Out ) )
def hash( self, output, context, h ) :
h.append( context.getFrame() )
def compute( self, plug, context ) :
if plug.getName() == "oOut" :
plug.setValue( IECore.IntData( int( context.getFrame() ) ) )
else :
plug.setValue( context.getFrame() )
IECore.registerRunTimeTyped( Out )
class In( Gaffer.ComputeNode ) :
def __init__( self, name="In" ) :
Gaffer.ComputeNode.__init__( self, name )
self.addChild( Gaffer.ObjectPlug( "oIn", Gaffer.Plug.Direction.In, IECore.NullObject() ) )
self.addChild( Gaffer.IntPlug( "iIn", Gaffer.Plug.Direction.In ) )
IECore.registerRunTimeTyped( In )
nOut = Out()
nIn = In()
nIn["oIn"].setInput( nOut["oOut"] )
nIn["iIn"].setInput( nOut["fOut"] )
for i in range( 0, 1000 ) :
c = Gaffer.Context()
c.setFrame( i )
with c :
# because oIn and oOut are connected, they should
# have the same hash and share the exact same value.
self.assertEqual( nIn["oIn"].getValue(), IECore.IntData( i ) )
self.assertEqual( nOut["oOut"].getValue(), IECore.IntData( i ) )
self.assertEqual( nIn["oIn"].hash(), nOut["oOut"].hash() )
self.failUnless( nIn["oIn"].getValue( _copy=False ).isSame( nOut["oOut"].getValue( _copy=False ) ) )
# even though iIn and fOut are connected, they should have
# different hashes and different values, because type conversion
# (float to int) is performed when connecting them.
self.assertEqual( nIn["iIn"].getValue(), i )
self.assertEqual( nOut["fOut"].getValue(), float( i ) )
self.assertNotEqual( nIn["iIn"].hash(), nOut["fOut"].hash() )
class PassThrough( Gaffer.ComputeNode ) :
def __init__( self, name="PassThrough" ) :
Gaffer.ComputeNode.__init__( self, name )
self.addChild( Gaffer.ObjectPlug( "in", Gaffer.Plug.Direction.In, IECore.NullObject() ) )
self.addChild( Gaffer.ObjectPlug( "out", Gaffer.Plug.Direction.Out, IECore.NullObject() ) )
def affects( self, input ) :
outputs = Gaffer.ComputeNode.affects( self, input )
if input.isSame( self["in"] ) :
outputs.append( self["out"] )
return outputs
def hash( self, output, context, h ) :
assert( output.isSame( self["out"] ) )
# by assigning directly to the hash rather than appending,
# we signify that we'll pass through the value unchanged.
h.copyFrom( self["in"].hash() )
def compute( self, plug, context ) :
assert( plug.isSame( self["out"] ) )
plug.setValue( self["in"].getValue( _copy=False ), _copy=False )
IECore.registerRunTimeTyped( PassThrough )
def testPassThroughSharesHashes( self ) :
n = self.PassThrough()
n["in"].setValue( IECore.IntVectorData( [ 1, 2, 3 ] ) )
self.assertEqual( n["in"].hash(), n["out"].hash() )
self.assertEqual( n["in"].getValue(), n["out"].getValue() )
def testPassThroughSharesCacheEntries( self ) :
n = self.PassThrough()
n["in"].setValue( IECore.IntVectorData( [ 1, 2, 3 ] ) )
self.failUnless( n["in"].getValue( _copy=False ).isSame( n["out"].getValue( _copy=False ) ) )
def testInternalConnections( self ) :
a = GafferTest.AddNode()
a["op1"].setValue( 10 )
n = Gaffer.Node()
n["in"] = Gaffer.IntPlug()
n["out"] = Gaffer.IntPlug( direction = Gaffer.Plug.Direction.Out )
n["out"].setInput( n["in"] )
n["in"].setInput( a["sum"] )
self.assertEqual( n["out"].getValue(), a["sum"].getValue() )
self.assertEqual( n["out"].hash(), a["sum"].hash() )
def testErrorSignal( self ) :
b = GafferTest.BadNode()
a = GafferTest.AddNode()
a["op1"].setInput( b["out3"] )
cs = GafferTest.CapturingSlot( b.errorSignal() )
self.assertRaises( RuntimeError, b["out1"].getValue )
self.assertEqual( len( cs ), 1 )
self.assertTrue( cs[0][0].isSame( b["out1"] ) )
self.assertTrue( cs[0][1].isSame( b["out1"] ) )
self.assertTrue( isinstance( cs[0][2], str ) )
self.assertRaises( RuntimeError, a["sum"].getValue )
self.assertEqual( len( cs ), 2 )
self.assertTrue( cs[1][0].isSame( b["out3"] ) )
self.assertTrue( cs[1][1].isSame( b["out3"] ) )
self.assertTrue( isinstance( cs[1][2], str ) )
def testErrorSignalledOnIntermediateNodes( self ) :
nodes = [ GafferTest.BadNode() ]
for i in range( 0, 10 ) :
nodes.append( GafferTest.AddNode() )
nodes[-1]["op1"].setInput(
nodes[-2]["sum"] if i != 0 else nodes[-2]["out3"]
)
slots = [ GafferTest.CapturingSlot( n.errorSignal() ) for n in nodes ]
self.assertRaises( RuntimeError, nodes[-1]["sum"].getValue )
for i, slot in enumerate( slots ) :
self.assertEqual( len( slot ), 1 )
self.assertTrue( slot[0][0].isSame( nodes[i]["out3"] if i == 0 else nodes[i]["sum"] ) )
self.assertTrue( slot[0][1].isSame( nodes[0]["out3"] ) )
def testErrorSignalledAtScopeTransitions( self ) :
s = Gaffer.ScriptNode()
s["b"] = Gaffer.Box()
s["b"]["b"] = GafferTest.BadNode()
s["b"]["a"] = GafferTest.AddNode()
s["b"]["a"]["op1"].setInput( s["b"]["b"]["out3"] )
css = GafferTest.CapturingSlot( s.errorSignal() )
csb = GafferTest.CapturingSlot( s["b"].errorSignal() )
csbb = GafferTest.CapturingSlot( s["b"]["b"].errorSignal() )
p = Gaffer.PlugAlgo.promote( s["b"]["a"]["sum"] )
self.assertRaises( RuntimeError, p.getValue )
self.assertEqual( len( css ), 0 )
self.assertEqual( len( csb ), 1 )
self.assertTrue( csb[0][0].isSame( p ) )
self.assertTrue( csb[0][1].isSame( s["b"]["b"]["out3"] ) )
self.assertEqual( len( csbb ), 1 )
self.assertTrue( csbb[0][0].isSame( s["b"]["b"]["out3"] ) )
self.assertTrue( csbb[0][1].isSame( s["b"]["b"]["out3"] ) )
def testErrorSlotsDontSeeException( self ) :
self.fRan = False
def f( *unusedArgs ) :
# If there's an active python exception (from
# the error in BadNode below) when we try this
# import, it'll appear (falsely) as if the error
# originated from the import, and throw an exception
# here. This is not the intention - error slots are
# just meant to be informed of the error, without
# ever seeing the exception itself.
import IECore
self.fRan = True
n = GafferTest.BadNode()
c = n.errorSignal().connect( f )
with IECore.IgnoredExceptions( Exception ) :
n["out1"].getValue()
self.assertTrue( self.fRan )
def testPlugDestructionDuringComputation( self ) :
class PlugDestructionNode( GafferTest.AddNode ) :
def __init__( self, name="PlugDestructionNode" ) :
GafferTest.AddNode.__init__( self, name )
def compute( self, plug, context ) :
# It's not particularly smart to create a plug from
# inside a compute, but here we're doing it to emulate
# a situation which can occur when the python
# garbage collector kicks in during computation.
# When that happens, the garbage collector might
# collect and destroy plugs from other graphs, and
# we need the computation framework to be robust to
# that. See #1576 for details of how the original garbage
# collection issue manifested itself.
v = Gaffer.ValuePlug()
del v
GafferTest.AddNode.compute( self, plug, context )
IECore.registerRunTimeTyped( PlugDestructionNode )
n = PlugDestructionNode()
n["op1"].setValue( 1 )
self.assertEqual( n["sum"].getValue(), 1 )
def testThreading( self ) :
GafferTest.testComputeNodeThreading()
if __name__ == "__main__":
unittest.main()
|
ant.py
|
# Ant
#
# Copyright (c) 2012, Gustav Tiger <gustav@tiger.name>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import array
import collections
import struct
import threading
import time
import Queue
import logging
import usb.core
import usb.util
from message import Message
from commons import format_list
_logger = logging.getLogger("garmin.ant.base.ant")
class Ant():
def __init__(self, idVendor, idProduct):
# Find USB device
_logger.debug("USB Find device, vendor %#04x, product %#04x", idVendor, idProduct)
dev = usb.core.find(idVendor=idVendor, idProduct=idProduct)
# was it found?
if dev is None:
raise ValueError('Device not found')
_logger.debug("USB Config values:")
for cfg in dev:
_logger.debug(" Config %s", cfg.bConfigurationValue)
for intf in cfg:
_logger.debug(" Interface %s, Alt %s", str(intf.bInterfaceNumber), str(intf.bAlternateSetting))
for ep in intf:
_logger.debug(" Endpoint %s", str(ep.bEndpointAddress))
# detach an active kernel driver (TODO: should probably reattach it later)
if dev.is_kernel_driver_active(0):
_logger.debug("A kernel driver active, detatching")
dev.detach_kernel_driver(0)
else:
_logger.debug("No kernel driver active")
# set the active configuration. With no arguments, the first
# configuration will be the active one
dev.set_configuration()
dev.reset()
#dev.set_configuration()
# get an endpoint instance
cfg = dev.get_active_configuration()
interface_number = cfg[(0,0)].bInterfaceNumber
alternate_setting = usb.control.get_interface(dev, interface_number)
intf = usb.util.find_descriptor(
cfg, bInterfaceNumber = interface_number,
bAlternateSetting = alternate_setting
)
self._out = usb.util.find_descriptor(
intf,
# match the first OUT endpoint
custom_match = \
lambda e: \
usb.util.endpoint_direction(e.bEndpointAddress) == \
usb.util.ENDPOINT_OUT
)
_logger.debug("UBS Endpoint out: %s, %s", self._out, self._out.bEndpointAddress)
self._in = usb.util.find_descriptor(
intf,
# match the first IN endpoint
custom_match = \
lambda e: \
usb.util.endpoint_direction(e.bEndpointAddress) == \
usb.util.ENDPOINT_IN
)
_logger.debug("UBS Endpoint in: %s, %s", self._in, self._in.bEndpointAddress)
assert self._out is not None and self._in is not None
self._message_queue_cond = threading.Condition()
self._message_queue = collections.deque()
self._events = Queue.Queue()
self._buffer = array.array('B', [])
self._burst_data = array.array('B', [])
self._last_data = array.array('B', [])
self._running = True
self._worker_thread = threading.Thread(target=self._worker, name="ant.base")
self._worker_thread.start()
def start(self):
self._main()
def stop(self):
if self._running:
_logger.debug("Stoping ant.base")
self._running = False
self._worker_thread.join()
def _on_broadcast(self, message):
self._events.put(('event', (message._data[0],
Message.Code.EVENT_RX_BROADCAST, message._data[1:])))
def _on_acknowledge(self, message):
self._events.put(('event', (message._data[0],
Message.Code.EVENT_RX_ACKNOWLEDGED, message._data[1:])))
def _on_burst_data(self, message):
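# Byte 0 of a burst packet holds a 3-bit sequence number in its upper bits and the 5-bit channel number in its lower bits; the remaining bytes are the burst payload.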
sequence = message._data[0] >> 5
channel = message._data[0] & 0b00011111
data = message._data[1:]
# First sequence
if sequence == 0:
self._burst_data = data
# Other
else:
self._burst_data.extend(data)
# Last sequence (indicated by the 0b100 flag)
if sequence & 0b100 != 0:
self._events.put(('event', (channel,
Message.Code.EVENT_RX_BURST_PACKET, self._burst_data)))
def _worker(self):
_logger.debug("Ant runner started")
while self._running:
try:
message = self.read_message()
# TODO: flag and extended for broadcast, acknowledge, and burst
# Only do callbacks for new data. Resent data only indicates
# a new channel timeslot.
if not (message._id == Message.ID.BROADCAST_DATA and
message._data == self._last_data):
# Notifications
if message._id in [Message.ID.STARTUP_MESSAGE, \
Message.ID.SERIAL_ERROR_MESSAGE]:
self._events.put(('response', (None, message._id,
message._data)))
# Response (no channel)
elif message._id in [Message.ID.RESPONSE_VERSION, \
Message.ID.RESPONSE_CAPABILITIES, \
Message.ID.RESPONSE_SERIAL_NUMBER]:
self._events.put(('response', (None, message._id,
message._data)))
# Response (channel)
elif message._id in [Message.ID.RESPONSE_CHANNEL_STATUS, \
Message.ID.RESPONSE_CHANNEL_ID]:
self._events.put(('response', (message._data[0],
message._id, message._data[1:])))
# Response (other)
elif (message._id == Message.ID.RESPONSE_CHANNEL \
and message._data[1] != 0x01):
self._events.put(('response', (message._data[0],
message._data[1], message._data[2:])))
# Channel event
elif message._id == Message.ID.BROADCAST_DATA:
self._on_broadcast(message)
elif message._id == Message.ID.ACKNOWLEDGE_DATA:
self._on_acknowledge(message)
elif message._id == Message.ID.BURST_TRANSFER_DATA:
self._on_burst_data(message)
elif message._id == Message.ID.RESPONSE_CHANNEL:
_logger.debug("Got channel event, %r", message)
self._events.put(('event', (message._data[0],
message._data[1], message._data[2:])))
else:
_logger.warning("Got unknown message, %r", message)
else:
_logger.debug("No new data this period")
# Send messages in queue, on indicated time slot
if message._id == Message.ID.BROADCAST_DATA:
time.sleep(0.1)
_logger.debug("Got broadcast data, examine queue to see if we should send anything back")
if self._message_queue_cond.acquire(blocking=False):
while len(self._message_queue) > 0:
m = self._message_queue.popleft()
self.write_message(m)
_logger.debug(" - sent message from queue, %r", m)
if(m._id != Message.ID.BURST_TRANSFER_DATA or \
m._data[0] & 0b10000000):# or m._data[0] == 0):
break
else:
_logger.debug(" - no messages in queue")
self._message_queue_cond.release()
self._last_data = message._data
except usb.USBError as e:
_logger.warning("%s, %r", type(e), e.args)
_logger.debug("Ant runner stopped")
def _main(self):
while self._running:
try:
(event_type, event) = self._events.get(True, 1.0)
self._events.task_done()
(channel, event, data) = event
if event_type == 'response':
self.response_function(channel, event, data)
elif event_type == 'event':
self.channel_event_function(channel, event, data)
else:
_logger.warning("Unknown message typ '%s': %r", event_type, event)
except Queue.Empty as e:
pass
def write_message_timeslot(self, message):
with self._message_queue_cond:
self._message_queue.append(message)
def write_message(self, message):
data = message.get()
self._out.write(data + array.array('B', [0x00, 0x00]))
_logger.debug("Write data: %s", format_list(data))
def read_message(self):
# If we have a message in buffer already, return it
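# buffer[1] holds the payload length of the pending message; a complete frame (sync byte, length, message id, payload, checksum) therefore spans payload length + 4 bytes.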
if len(self._buffer) >= 5 and len(self._buffer) >= self._buffer[1] + 4:
packet = self._buffer[:self._buffer[1] + 4]
self._buffer = self._buffer[self._buffer[1] + 4:]
return Message.parse(packet)
# Otherwise, read some data and call the function again
else:
data = self._in.read(4096)
self._buffer.extend(data)
_logger.debug("Read data: %s (now have %s in buffer)",
format_list(data), format_list(self._buffer))
return self.read_message()
# Ant functions
def unassign_channel(self, channel):
pass
def assign_channel(self, channel, channelType, networkNumber):
message = Message(Message.ID.ASSIGN_CHANNEL, [channel, channelType, networkNumber])
self.write_message(message)
def open_channel(self, channel):
message = Message(Message.ID.OPEN_CHANNEL, [channel])
self.write_message(message)
def set_channel_id(self, channel, deviceNum, deviceType, transmissionType):
data = array.array('B', struct.pack("<BHBB", channel, deviceNum, deviceType, transmissionType))
message = Message(Message.ID.SET_CHANNEL_ID, data)
self.write_message(message)
def set_channel_period(self, channel, messagePeriod):
data = array.array('B', struct.pack("<BH", channel, messagePeriod))
message = Message(Message.ID.SET_CHANNEL_PERIOD, data)
self.write_message(message)
def set_channel_search_timeout(self, channel, timeout):
message = Message(Message.ID.SET_CHANNEL_SEARCH_TIMEOUT, [channel, timeout])
self.write_message(message)
def set_channel_rf_freq(self, channel, rfFreq):
message = Message(Message.ID.SET_CHANNEL_RF_FREQ, [channel, rfFreq])
self.write_message(message)
def set_network_key(self, network, key):
message = Message(Message.ID.SET_NETWORK_KEY, [network] + key)
self.write_message(message)
# This function is a bit of a mystery. It is mentioned in libgant,
# http://sportwatcher.googlecode.com/svn/trunk/libgant/gant.h and is
# also sent from the official ANT daemon on Windows.
def set_search_waveform(self, channel, waveform):
message = Message(Message.ID.SET_SEARCH_WAVEFORM, [channel] + waveform)
self.write_message(message)
def reset_system(self):
message = Message(Message.ID.RESET_SYSTEM, [0x00])
self.write_message(message)
def request_message(self, channel, messageId):
message = Message(Message.ID.REQUEST_MESSAGE, [channel, messageId])
self.write_message(message)
def send_acknowledged_data(self, channel, data):
assert len(data) == 8
message = Message(Message.ID.ACKNOWLEDGE_DATA,
array.array('B', [channel]) + data)
self.write_message_timeslot(message)
def send_burst_transfer_packet(self, channel_seq, data, first):
assert len(data) == 8
message = Message(Message.ID.BURST_TRANSFER_DATA,
array.array('B', [channel_seq]) + data)
self.write_message_timeslot(message)
def send_burst_transfer(self, channel, data):
assert len(data) % 8 == 0
_logger.debug("Send burst transfer, chan %s, data %s", channel, data)
packets = len(data) / 8
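# The payload is sent as 8-byte packets; channel_seq packs the channel number in the low 5 bits and a rolling 3-bit sequence counter in the high bits, with 0b100 OR'd in to mark the final packet.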
for i in range(packets):
sequence = i % 4
if i == packets - 1:
sequence = sequence | 0b100
channel_seq = channel | sequence << 5
packet_data = data[i * 8:i * 8 + 8]
_logger.debug("Send burst transfer, packet %d, data %s", i, packet_data)
self.send_burst_transfer_packet(channel_seq, packet_data, first=i==0)
def response_function(self, channel, event, data):
pass
def channel_event_function(self, channel, event, data):
pass
|
test_p2p_grpform.py
|
#!/usr/bin/python
#
# P2P group formation test cases
# Copyright (c) 2013, Jouni Malinen <j@w1.fi>
#
# This software may be distributed under the terms of the BSD license.
# See README for more details.
import logging
logger = logging.getLogger(__name__)
import time
import threading
import Queue
import hwsim_utils
def check_grpform_results(i_res, r_res):
if i_res['result'] != 'success' or r_res['result'] != 'success':
raise Exception("Failed group formation")
if i_res['ssid'] != r_res['ssid']:
raise Exception("SSID mismatch")
if i_res['freq'] != r_res['freq']:
raise Exception("SSID mismatch")
if i_res['go_dev_addr'] != r_res['go_dev_addr']:
raise Exception("GO Device Address mismatch")
def go_neg_init(i_dev, r_dev, pin, i_method, i_intent, res):
logger.debug("Initiate GO Negotiation from i_dev")
i_res = i_dev.p2p_go_neg_init(r_dev.p2p_dev_addr(), pin, i_method, timeout=20, go_intent=i_intent)
logger.debug("i_res: " + str(i_res))
res.put(i_res)
def go_neg_pin(i_dev, r_dev, i_intent=None, r_intent=None, i_method='enter', r_method='display'):
r_dev.p2p_listen()
i_dev.p2p_listen()
pin = r_dev.wps_read_pin()
logger.info("Start GO negotiation " + i_dev.ifname + " -> " + r_dev.ifname)
r_dev.dump_monitor()
res = Queue.Queue()
t = threading.Thread(target=go_neg_init, args=(i_dev, r_dev, pin, i_method, i_intent, res))
t.start()
logger.debug("Wait for GO Negotiation Request on r_dev")
ev = r_dev.wait_global_event(["P2P-GO-NEG-REQUEST"], timeout=15)
if ev is None:
raise Exception("GO Negotiation timed out")
r_dev.dump_monitor()
logger.debug("Re-initiate GO Negotiation from r_dev")
r_res = r_dev.p2p_go_neg_init(i_dev.p2p_dev_addr(), pin, r_method, go_intent=r_intent, timeout=20)
logger.debug("r_res: " + str(r_res))
r_dev.dump_monitor()
t.join()
i_res = res.get()
logger.debug("i_res: " + str(i_res))
logger.info("Group formed")
hwsim_utils.test_connectivity_p2p(r_dev, i_dev)
i_dev.dump_monitor()
def go_neg_pin_authorized(i_dev, r_dev, i_intent=None, r_intent=None, expect_failure=False, i_go_neg_status=None, i_method='enter', r_method='display'):
r_dev.p2p_listen()
i_dev.p2p_listen()
pin = r_dev.wps_read_pin()
logger.info("Start GO negotiation " + i_dev.ifname + " -> " + r_dev.ifname)
r_dev.p2p_go_neg_auth(i_dev.p2p_dev_addr(), pin, r_method, go_intent=r_intent)
i_res = i_dev.p2p_go_neg_init(r_dev.p2p_dev_addr(), pin, i_method, timeout=20, go_intent=i_intent, expect_failure=expect_failure)
r_res = r_dev.p2p_go_neg_auth_result(expect_failure=expect_failure)
logger.debug("i_res: " + str(i_res))
logger.debug("r_res: " + str(r_res))
r_dev.dump_monitor()
i_dev.dump_monitor()
if i_go_neg_status:
if i_res['result'] != 'go-neg-failed':
raise Exception("Expected GO Negotiation failure not reported")
if i_res['status'] != i_go_neg_status:
raise Exception("Expected GO Negotiation status not seen")
if expect_failure:
return
logger.info("Group formed")
hwsim_utils.test_connectivity_p2p(r_dev, i_dev)
return [i_res, r_res]
def go_neg_init_pbc(i_dev, r_dev, i_intent, res):
logger.debug("Initiate GO Negotiation from i_dev")
i_res = i_dev.p2p_go_neg_init(r_dev.p2p_dev_addr(), None, "pbc", timeout=20, go_intent=i_intent)
logger.debug("i_res: " + str(i_res))
res.put(i_res)
def go_neg_pbc(i_dev, r_dev, i_intent=None, r_intent=None):
r_dev.p2p_find(social=True)
i_dev.p2p_find(social=True)
logger.info("Start GO negotiation " + i_dev.ifname + " -> " + r_dev.ifname)
r_dev.dump_monitor()
res = Queue.Queue()
t = threading.Thread(target=go_neg_init_pbc, args=(i_dev, r_dev, i_intent, res))
t.start()
logger.debug("Wait for GO Negotiation Request on r_dev")
ev = r_dev.wait_global_event(["P2P-GO-NEG-REQUEST"], timeout=15)
if ev is None:
raise Exception("GO Negotiation timed out")
r_dev.dump_monitor()
logger.debug("Re-initiate GO Negotiation from r_dev")
r_res = r_dev.p2p_go_neg_init(i_dev.p2p_dev_addr(), None, "pbc", go_intent=r_intent, timeout=20)
logger.debug("r_res: " + str(r_res))
r_dev.dump_monitor()
t.join()
i_res = res.get()
logger.debug("i_res: " + str(i_res))
logger.info("Group formed")
hwsim_utils.test_connectivity_p2p(r_dev, i_dev)
i_dev.dump_monitor()
return [i_res, r_res]
def test_grpform(dev):
"""P2P group formation using PIN and authorized connection (init -> GO)"""
go_neg_pin_authorized(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=0)
dev[0].remove_group()
try:
dev[1].remove_group()
except:
pass
def test_grpform2(dev):
"""P2P group formation using PIN and authorized connection (resp -> GO)"""
go_neg_pin_authorized(i_dev=dev[0], i_intent=0, r_dev=dev[1], r_intent=15)
dev[0].remove_group()
try:
dev[1].remove_group()
except:
pass
def test_grpform3(dev):
"""P2P group formation using PIN and re-init GO Negotiation"""
go_neg_pin(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=0)
dev[0].remove_group()
try:
dev[1].remove_group()
except:
pass
def test_grpform_pbc(dev):
"""P2P group formation using PBC and re-init GO Negotiation"""
[i_res, r_res] = go_neg_pbc(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=0)
check_grpform_results(i_res, r_res)
if i_res['role'] != 'GO' or r_res['role'] != 'client':
raise Exception("Unexpected device roles")
dev[0].remove_group()
try:
dev[1].remove_group()
except:
pass
def test_both_go_intent_15(dev):
"""P2P GO Negotiation with both devices using GO intent 15"""
go_neg_pin_authorized(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=15, expect_failure=True, i_go_neg_status=9)
def test_both_go_neg_display(dev):
"""P2P GO Negotiation with both devices trying to display PIN"""
go_neg_pin_authorized(i_dev=dev[0], r_dev=dev[1], expect_failure=True, i_go_neg_status=10, i_method='display', r_method='display')
def test_both_go_neg_enter(dev):
"""P2P GO Negotiation with both devices trying to enter PIN"""
go_neg_pin_authorized(i_dev=dev[0], r_dev=dev[1], expect_failure=True, i_go_neg_status=10, i_method='enter', r_method='enter')
def test_grpform_per_sta_psk(dev):
"""P2P group formation with per-STA PSKs"""
dev[0].request("P2P_SET per_sta_psk 1")
[i_res, r_res] = go_neg_pin_authorized(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=0)
check_grpform_results(i_res, r_res)
pin = dev[2].wps_read_pin()
dev[0].p2p_go_authorize_client(pin)
c_res = dev[2].p2p_connect_group(dev[0].p2p_dev_addr(), pin, timeout=60)
check_grpform_results(i_res, c_res)
if r_res['psk'] == c_res['psk']:
raise Exception("Same PSK assigned for both clients")
hwsim_utils.test_connectivity_p2p(dev[1], dev[2])
dev[0].remove_group()
dev[1].wait_go_ending_session()
dev[2].wait_go_ending_session()
def test_grpform_per_sta_psk_wps(dev):
"""P2P group formation with per-STA PSKs with non-P2P WPS STA"""
dev[0].request("P2P_SET per_sta_psk 1")
[i_res, r_res] = go_neg_pin_authorized(i_dev=dev[0], i_intent=15, r_dev=dev[1], r_intent=0)
check_grpform_results(i_res, r_res)
dev[0].p2p_go_authorize_client_pbc()
dev[2].request("WPS_PBC")
ev = dev[2].wait_event(["CTRL-EVENT-CONNECTED"], timeout=30)
if ev is None:
raise Exception("Association with the GO timed out")
hwsim_utils.test_connectivity_p2p_sta(dev[1], dev[2])
dev[0].remove_group()
dev[2].request("DISCONNECT")
dev[1].wait_go_ending_session()
|
runner.py
|
"""Run Home Assistant."""
import asyncio
from concurrent.futures import ThreadPoolExecutor
import dataclasses
import logging
import sys
import threading
from typing import Any, Dict, Optional
from homeassistant import bootstrap
from homeassistant.core import callback
from homeassistant.helpers.frame import warn_use
#
# Python 3.8 has significantly fewer workers by default
# than Python 3.7. In order to be consistent between
# supported versions, we need to set max_workers.
#
# In most cases the workers are not I/O bound, as they
# are sleeping/blocking while waiting for data from
# integrations to update, so this number should be higher
# than in the default use case.
#
MAX_EXECUTOR_WORKERS = 64
@dataclasses.dataclass
class RuntimeConfig:
"""Class to hold the information for running Home Assistant."""
config_dir: str
skip_pip: bool = False
safe_mode: bool = False
verbose: bool = False
log_rotate_days: Optional[int] = None
log_file: Optional[str] = None
log_no_color: bool = False
debug: bool = False
open_ui: bool = False
# In Python 3.8+ proactor policy is the default on Windows
if sys.platform == "win32" and sys.version_info[:2] < (3, 8):
PolicyBase = asyncio.WindowsProactorEventLoopPolicy
else:
PolicyBase = asyncio.DefaultEventLoopPolicy # pylint: disable=invalid-name
class HassEventLoopPolicy(PolicyBase): # type: ignore
"""Event loop policy for Home Assistant."""
def __init__(self, debug: bool) -> None:
"""Init the event loop policy."""
super().__init__()
self.debug = debug
@property
def loop_name(self) -> str:
"""Return name of the loop."""
return self._loop_factory.__name__ # type: ignore
def new_event_loop(self) -> asyncio.AbstractEventLoop:
"""Get the event loop."""
loop: asyncio.AbstractEventLoop = super().new_event_loop()
loop.set_exception_handler(_async_loop_exception_handler)
if self.debug:
loop.set_debug(True)
executor = ThreadPoolExecutor(
thread_name_prefix="SyncWorker", max_workers=MAX_EXECUTOR_WORKERS
)
loop.set_default_executor(executor)
loop.set_default_executor = warn_use( # type: ignore
loop.set_default_executor, "sets default executor on the event loop"
)
# Python 3.9+
if hasattr(loop, "shutdown_default_executor"):
return loop
# Copied from Python 3.9 source
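# executor.shutdown(wait=True) blocks, so it runs in a helper thread and hands its result back to the event loop through a future.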
def _do_shutdown(future: asyncio.Future) -> None:
try:
executor.shutdown(wait=True)
loop.call_soon_threadsafe(future.set_result, None)
except Exception as ex: # pylint: disable=broad-except
loop.call_soon_threadsafe(future.set_exception, ex)
async def shutdown_default_executor() -> None:
"""Schedule the shutdown of the default executor."""
future = loop.create_future()
thread = threading.Thread(target=_do_shutdown, args=(future,))
thread.start()
try:
await future
finally:
thread.join()
setattr(loop, "shutdown_default_executor", shutdown_default_executor)
return loop
@callback
def _async_loop_exception_handler(_: Any, context: Dict) -> None:
"""Handle all exception inside the core loop."""
kwargs = {}
exception = context.get("exception")
if exception:
kwargs["exc_info"] = (type(exception), exception, exception.__traceback__)
logging.getLogger(__package__).error(
"Error doing job: %s", context["message"], **kwargs # type: ignore
)
async def setup_and_run_hass(runtime_config: RuntimeConfig) -> int:
"""Set up Home Assistant and run."""
hass = await bootstrap.async_setup_hass(runtime_config)
if hass is None:
return 1
return await hass.async_run()
def run(runtime_config: RuntimeConfig) -> int:
"""Run Home Assistant."""
asyncio.set_event_loop_policy(HassEventLoopPolicy(runtime_config.debug))
return asyncio.run(setup_and_run_hass(runtime_config))
|
executor.py
|
import functools
import importlib
import logging
import multiprocessing
import os
import time
from typing import Optional
import celery
import requests
from openff.bespokefit.schema.fitting import BespokeOptimizationSchema
from openff.utilities import temporary_cd
from beflow.services import settings
from beflow.services.coordinator.models import (
CoordinatorGETResponse,
CoordinatorPOSTBody,
CoordinatorPOSTResponse,
)
from beflow.services.gateway import launch as launch_gateway
from beflow.services.gateway import wait_for_gateway
from beflow.utilities.celery import spawn_worker
from beflow.utilities.redis import is_redis_available, launch_redis
_logger = logging.getLogger(__name__)
class BespokeExecutor:
def __init__(
self,
n_fragmenter_workers: int = 0,
n_qc_compute_workers: int = 0,
n_optimizer_workers: int = 0,
directory: str = "beflow-executor",
launch_redis_if_unavailable: bool = True,
):
self._n_fragmenter_workers = n_fragmenter_workers
self._n_qc_compute_workers = n_qc_compute_workers
self._n_optimizer_workers = n_optimizer_workers
self._directory = directory
self._launch_redis_if_unavailable = launch_redis_if_unavailable
self._started = False
self._gateway_process: Optional[multiprocessing.Process] = None
def _launch_redis(self):
if self._launch_redis_if_unavailable and not is_redis_available(
host=settings.BEFLOW_REDIS_ADDRESS, port=settings.BEFLOW_REDIS_PORT
):
redis_log_file = open("redis.log", "w")
launch_redis(settings.BEFLOW_REDIS_PORT, redis_log_file, redis_log_file)
def _launch_workers(self):
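# Import each worker module by its dotted path; every module is expected to expose a `celery_app` Celery instance, and one worker is spawned per entry with the requested concurrency.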
for import_path, n_workers in {
(settings.BEFLOW_FRAGMENTER_WORKER, self._n_fragmenter_workers),
(settings.BEFLOW_QC_COMPUTE_WORKER, self._n_qc_compute_workers),
(settings.BEFLOW_OPTIMIZER_WORKER, self._n_optimizer_workers),
}:
worker_module = importlib.import_module(import_path)
worker_app = getattr(worker_module, "celery_app")
assert isinstance(worker_app, celery.Celery), "workers must be celery based"
spawn_worker(worker_app, concurrency=n_workers)
def start(self, asynchronous=False):
if self._started:
raise RuntimeError("This executor is already running.")
self._started = True
if self._directory is not None and len(self._directory) > 0:
os.makedirs(self._directory, exist_ok=True)
with temporary_cd(self._directory):
self._launch_redis()
self._launch_workers()
if asynchronous:
self._gateway_process = multiprocessing.Process(
target=functools.partial(launch_gateway, self._directory), daemon=True
)
self._gateway_process.start()
wait_for_gateway()
else:
launch_gateway(self._directory)
def stop(self):
if not self._started:
raise ValueError("The executor is not running.")
self._started = False
if self._gateway_process is not None and self._gateway_process.is_alive():
self._gateway_process.terminate()
self._gateway_process.join()
def submit(self, input_schema: BespokeOptimizationSchema) -> str:
assert self._started, "the executor is not running"
request = requests.post(
"http://127.0.0.1:8000/api/v1/optimization",
data=CoordinatorPOSTBody(input_schema=input_schema).json(),
)
return CoordinatorPOSTResponse.parse_raw(request.text).optimization_id
def wait_until_complete(
self, optimization_id: str, frequency: int = 10
) -> CoordinatorGETResponse:
while True:
try:
request = requests.get(
f"http://127.0.0.1:8000/api/v1/optimization/{optimization_id}"
)
request.raise_for_status()
response = CoordinatorGETResponse.parse_raw(request.text)
# TODO: Return the actual result
if all(
stage.stage_status == "success" for stage in response.stages
) or any(stage.stage_status == "errored" for stage in response.stages):
return response
time.sleep(frequency)
except KeyboardInterrupt:
break
def __enter__(self):
self.start(asynchronous=True)
return self
def __exit__(self, *args):
self.stop()
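# A minimal usage sketch (not part of the original module). It assumes the
# caller already has a BespokeOptimizationSchema built elsewhere; only names
# defined or imported above are used.
def _example_run(input_schema: BespokeOptimizationSchema) -> CoordinatorGETResponse:
    # Entering the context starts the executor asynchronously; exiting stops it.
    with BespokeExecutor(
        n_fragmenter_workers=1, n_qc_compute_workers=1, n_optimizer_workers=1
    ) as executor:
        optimization_id = executor.submit(input_schema)
        return executor.wait_until_complete(optimization_id, frequency=30)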
|
tcp.py
|
__copyright__ = """
Copyright 2017 the original author or authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
__author__ = 'David Turanski'
__version__ = '1.1.0'
"""
This module supports the use TCP sockets for communication between local processes.
"""
import sys
import logging
import threading
import os
from springcloudstream.portability import PYTHON3
FORMAT = '%(asctime)s - %(name)s - %(levelname)s : %(message)s'
logging.basicConfig(format=FORMAT, level=logging.INFO)
if PYTHON3:
from socketserver import BaseRequestHandler, TCPServer, ThreadingMixIn
else:
from SocketServer import BaseRequestHandler, TCPServer, ThreadingMixIn
class ThreadedTCPServer(ThreadingMixIn, TCPServer):
pass
def launch_server(message_handler, options):
"""
Launch a message server
:param handler_function: The handler function to execute for each message
:param options: Application options for TCP, etc.
"""
logger = logging.getLogger(__name__)
    if options.debug:
logger.setLevel(logging.DEBUG)
if not options.monitor_port:
logger.warning(
"Monitoring not enabled. No monitor-port option defined.")
else:
threading.Thread(target=launch_monitor_server, args=(options.host, options.monitor_port, logger)).start()
# Create the server, binding to specified host on configured port
logger.info(
'Starting server on host %s port %d Python version %s.%s.%s' % ((options.host, options.port) + sys.version_info[:3]))
server = ThreadedTCPServer((options.host, options.port),
StreamHandler.create_handler(message_handler,
options.buffer_size,
logger))
# Activate the server; this will keep running until you
# interrupt the program with Ctrl-C
try:
server.serve_forever()
except KeyboardInterrupt:
logger.info("Ctrl-C, exiting...")
os._exit(142)
def launch_monitor_server(host, port, logger):
"""
Launch a monitor server
:param port: the monitor port
:param logger: the logger
"""
logger.info('Starting monitor server on host %s port %d' % (host, port))
server = ThreadedTCPServer((host, port), MonitorHandler)
server.serve_forever()
class StreamHandler(BaseRequestHandler):
"""
A RequestHandler that waits for messages over its request socket until the socket is closed
and delegates to a MessageHandler
"""
@classmethod
def create_handler(cls, message_handler, buffer_size, logger):
"""
Class variables used here since the framework creates an instance for each connection
:param message_handler: the MessageHandler used to process each message.
:param buffer_size: the TCP buffer size.
:param logger: the global logger.
:return: this class.
"""
cls.BUFFER_SIZE = buffer_size
cls.message_handler = message_handler
cls.logger = logger
cls.message_handler.logger = logging.getLogger(message_handler.__class__.__name__)
cls.message_handler.logger.setLevel(logger.level)
return cls
def handle(self):
"""
The required handle method.
"""
logger = StreamHandler.logger
logger.debug("handling requests with message handler %s " % StreamHandler.message_handler.__class__.__name__)
message_handler = StreamHandler.message_handler
try:
while True:
logger.debug('waiting for more data')
if not message_handler.handle(self.request, StreamHandler.BUFFER_SIZE):
break
logger.warning("connection closed from %s" % (self.client_address[0]))
self.request.close()
        except Exception:
logger.exception("connection closed from %s" % (self.client_address[0]))
finally:
self.request.close()
class MonitorHandler(BaseRequestHandler):
"""Starts a monitor server to allow external processes to check the status"""
def handle(self):
logger = logging.getLogger(__name__)
try:
while True:
data = self.request.recv(80)
if not data:
break
logger.debug('got ping request %s' % data)
                self.request.sendall('alive\n'.encode('utf-8'))
logger.warning("monitor connection closed %s" % (self.client_address[0]))
except Exception:
logger.exception("connection closed from %s" % (self.client_address[0]))
finally:
self.request.close()
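# A minimal usage sketch (illustrative only; _EchoHandler and _Options are
# stand-ins defined here, not part of springcloudstream). The handler contract
# implied by StreamHandler above is: handle(socket, buffer_size) returns a
# falsy value to close the connection.
class _EchoHandler(object):
    logger = None  # assigned by StreamHandler.create_handler()

    def handle(self, request, buffer_size):
        data = request.recv(buffer_size)
        if not data:
            return False  # peer closed the connection
        request.sendall(data)  # echo the bytes back
        return True

class _Options(object):
    host = '127.0.0.1'
    port = 9999
    monitor_port = None  # monitoring disabled
    buffer_size = 2048
    debug = False

# launch_server(_EchoHandler(), _Options())  # blocks; uncomment to run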
|
master.py
|
#
# Copyright Cloudlab URV 2020
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import copy
import time
import json
import uuid
import flask
import queue
import logging
import multiprocessing as mp
from pathlib import Path
from gevent.pywsgi import WSGIServer
from concurrent.futures import ThreadPoolExecutor
from lithops.constants import LITHOPS_TEMP_DIR, STANDALONE_LOG_FILE, JOBS_DIR,\
STANDALONE_SERVICE_PORT, STANDALONE_CONFIG_FILE, STANDALONE_INSTALL_DIR
from lithops.localhost.localhost import LocalhostHandler
from lithops.utils import verify_runtime_name, iterchunks, setup_lithops_logger
from lithops.standalone.utils import get_worker_setup_script
from lithops.standalone.keeper import BudgetKeeper
setup_lithops_logger(logging.DEBUG, filename=STANDALONE_LOG_FILE)
logger = logging.getLogger('lithops.standalone.master')
app = flask.Flask(__name__)
INSTANCE_START_TIMEOUT = 200
MAX_INSTANCE_CREATE_RETRIES = 3
STANDALONE_CONFIG = None
STANDALONE_HANDLER = None
BUDGET_KEEPER = None
JOB_PROCESSES = {}
WORK_QUEUES = {}
MASTER_IP = None
MP_MANAGER = mp.Manager()
def is_worker_instance_ready(vm):
"""
Checks if the VM instance is ready to receive ssh connections
"""
try:
vm.get_ssh_client().run_remote_command('id')
except Exception as e:
logger.debug('ssh to {} failed: {}'
.format(vm.ip_address, e))
vm.del_ssh_client()
return False
return True
def wait_worker_instance_ready(vm):
"""
Waits until the VM instance is ready to receive ssh connections
"""
logger.info('Waiting {} to become ready'.format(vm))
start = time.time()
while(time.time() - start < INSTANCE_START_TIMEOUT):
if is_worker_instance_ready(vm):
logger.info('{} ready in {} seconds'
.format(vm, round(time.time()-start, 2)))
return True
time.sleep(5)
msg = 'Readiness probe expired on {}'.format(vm)
logger.error(msg)
raise TimeoutError(msg)
def setup_worker(worker_info, work_queue, job_key):
"""
Run worker process
Install all the Lithops dependencies into the worker.
Runs the job
"""
instance_name, ip_address, instance_id = worker_info
logger.info('Starting setup for VM instance {}'.format(instance_name))
vm = STANDALONE_HANDLER.backend.get_vm(instance_name)
vm.ip_address = ip_address
vm.instance_id = instance_id
worker_ready = False
retry = 0
logger.info('Queue empty: {} - Queue size: {}'
.format(work_queue.empty(), work_queue.qsize()))
while(not worker_ready and not work_queue.empty()
and retry < MAX_INSTANCE_CREATE_RETRIES):
try:
wait_worker_instance_ready(vm)
worker_ready = True
        except TimeoutError:  # VM not started in time
            if retry == MAX_INSTANCE_CREATE_RETRIES - 1:
                msg = '{} readiness probe failed after {} retries.'.format(vm, retry + 1)
                logger.debug(msg)
                raise Exception(msg)
logger.info('Recreating VM instance {}'.format(vm.name))
retry += 1
vm.delete()
vm.create()
if work_queue.empty():
logger.info('Work queue is already empty. Skipping {}'.format(vm))
return
# upload zip lithops package
logger.info('Uploading lithops files to {}'.format(vm))
vm.get_ssh_client().upload_local_file('/opt/lithops/lithops_standalone.zip',
'/tmp/lithops_standalone.zip')
logger.info('Executing lithops installation process on {}'.format(vm))
vm_data = {'instance_name': vm.name,
'ip_address': vm.ip_address,
'instance_id': vm.instance_id,
'master_ip': MASTER_IP,
'job_key': job_key}
script = get_worker_setup_script(STANDALONE_CONFIG, vm_data)
vm.get_ssh_client().run_remote_command(script, run_async=True)
vm.del_ssh_client()
logger.info('Installation process finished on {}'.format(vm))
def stop_job_process(job_key):
"""
Stops a job process
"""
global JOB_PROCESSES
done = os.path.join(JOBS_DIR, job_key+'.done')
Path(done).touch()
if job_key in JOB_PROCESSES and JOB_PROCESSES[job_key].is_alive():
JOB_PROCESSES[job_key].terminate()
logger.info('Finished job {} invocation'.format(job_key))
del JOB_PROCESSES[job_key]
def run_job_process(job_payload, work_queue):
"""
Process responsible to wait for workers to become ready, and
submit individual tasks of the job to them
"""
job_key = job_payload['job_key']
call_ids = job_payload['call_ids']
chunksize = job_payload['chunksize']
workers = job_payload['worker_instances']
for call_ids_range in iterchunks(call_ids, chunksize):
task_payload = copy.deepcopy(job_payload)
dbr = task_payload['data_byte_ranges']
task_payload['call_ids'] = call_ids_range
task_payload['data_byte_ranges'] = [dbr[int(call_id)] for call_id in call_ids_range]
work_queue.put(task_payload)
logger.info("Total tasks in {} work queue: {}".format(job_key, work_queue.qsize()))
with ThreadPoolExecutor(len(workers)) as executor:
for worker_info in workers:
executor.submit(setup_worker, worker_info, work_queue, job_key)
logger.info('All workers set up for job {}'.format(job_key))
while not work_queue.empty():
time.sleep(1)
done = os.path.join(JOBS_DIR, job_key+'.done')
Path(done).touch()
logger.info('Finished job {} invocation.'.format(job_key))
def error(msg):
response = flask.jsonify({'error': msg})
response.status_code = 404
return response
@app.route('/get-task/<job_key>', methods=['GET'])
def get_task(job_key):
"""
Returns a task from the work queue
"""
global WORK_QUEUES
global JOB_PROCESSES
try:
task_payload = WORK_QUEUES[job_key].get(timeout=0.1)
response = flask.jsonify(task_payload)
response.status_code = 200
logger.info('Calls {} invoked on {}'
.format(', '.join(task_payload['call_ids']),
flask.request.remote_addr))
except queue.Empty:
stop_job_process(job_key)
response = ('', 204)
return response
@app.route('/clear', methods=['POST'])
def clear():
"""
Stops received job processes
"""
global JOB_PROCESSES
job_key_list = flask.request.get_json(force=True, silent=True)
for job_key in job_key_list:
if job_key in JOB_PROCESSES and JOB_PROCESSES[job_key].is_alive():
logger.info('Received SIGTERM: Stopping job process {}'
.format(job_key))
stop_job_process(job_key)
return ('', 204)
@app.route('/run', methods=['POST'])
def run():
"""
Run a job locally, in consume mode
"""
global BUDGET_KEEPER
global WORK_QUEUES
global JOB_PROCESSES
job_payload = flask.request.get_json(force=True, silent=True)
if job_payload and not isinstance(job_payload, dict):
return error('The action did not receive a dictionary as an argument.')
try:
runtime = job_payload['runtime_name']
verify_runtime_name(runtime)
except Exception as e:
return error(str(e))
job_key = job_payload['job_key']
logger.info('Received job {}'.format(job_key))
BUDGET_KEEPER.last_usage_time = time.time()
BUDGET_KEEPER.update_config(job_payload['config']['standalone'])
BUDGET_KEEPER.jobs[job_key] = 'running'
exec_mode = job_payload['config']['standalone'].get('exec_mode', 'consume')
if exec_mode == 'consume':
# Consume mode runs the job locally
pull_runtime = STANDALONE_CONFIG.get('pull_runtime', False)
localhost_handler = LocalhostHandler({'runtime': runtime, 'pull_runtime': pull_runtime})
localhost_handler.run_job(job_payload)
elif exec_mode == 'create':
# Create mode runs the job in worker VMs
work_queue = MP_MANAGER.Queue()
WORK_QUEUES[job_key] = work_queue
jp = mp.Process(target=run_job_process, args=(job_payload, work_queue))
jp.daemon = True
jp.start()
JOB_PROCESSES[job_key] = jp
act_id = str(uuid.uuid4()).replace('-', '')[:12]
response = flask.jsonify({'activationId': act_id})
response.status_code = 202
return response
@app.route('/ping', methods=['GET'])
def ping():
response = flask.jsonify({'response': 'pong'})
response.status_code = 200
return response
@app.route('/preinstalls', methods=['GET'])
def preinstalls():
payload = flask.request.get_json(force=True, silent=True)
if payload and not isinstance(payload, dict):
return error('The action did not receive a dictionary as an argument.')
try:
runtime = payload['runtime']
verify_runtime_name(runtime)
except Exception as e:
return error(str(e))
pull_runtime = STANDALONE_CONFIG.get('pull_runtime', False)
localhost_handler = LocalhostHandler({'runtime': runtime, 'pull_runtime': pull_runtime})
runtime_meta = localhost_handler.create_runtime(runtime)
response = flask.jsonify(runtime_meta)
response.status_code = 200
return response
def main():
global STANDALONE_CONFIG
global STANDALONE_HANDLER
global BUDGET_KEEPER
global MASTER_IP
os.makedirs(LITHOPS_TEMP_DIR, exist_ok=True)
with open(STANDALONE_CONFIG_FILE, 'r') as cf:
STANDALONE_CONFIG = json.load(cf)
# Delete ssh_key_filename
backend = STANDALONE_CONFIG['backend']
if 'ssh_key_filename' in STANDALONE_CONFIG[backend]:
del STANDALONE_CONFIG[backend]['ssh_key_filename']
vm_data_file = os.path.join(STANDALONE_INSTALL_DIR, 'access.data')
with open(vm_data_file, 'r') as ad:
MASTER_IP = json.load(ad)['ip_address']
BUDGET_KEEPER = BudgetKeeper(STANDALONE_CONFIG)
BUDGET_KEEPER.start()
STANDALONE_HANDLER = BUDGET_KEEPER.sh
server = WSGIServer(('0.0.0.0', STANDALONE_SERVICE_PORT),
app, log=app.logger)
server.serve_forever()
if __name__ == '__main__':
main()
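# A minimal client-side sketch (not part of this service): how a worker could
# poll the /get-task endpoint defined above. master_ip and job_key are
# illustrative placeholders supplied by the caller.
def _example_poll_task(master_ip, job_key):
    import requests  # local import: requests is not otherwise needed here
    url = 'http://{}:{}/get-task/{}'.format(master_ip, STANDALONE_SERVICE_PORT, job_key)
    resp = requests.get(url)
    if resp.status_code == 204:
        return None  # work queue is empty; the job process is being stopped
    return resp.json()  # the task payload assigned to this worker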
|
test_qmock.py
|
from collections import OrderedDict
import signal
import sys
from threading import Thread
import unittest
import qmock
from qmock._python_compat import get_thread_id, mock
# arbitrary targets for qmock.patch() tests
import datetime, json, xml.etree.ElementTree
DATETIME_DATE = "datetime.date"
JSON_LOADS = "json.loads"
XML_ETREE_ELEMENTTREE = "xml.etree.ElementTree"
PY2 = sys.version_info[0] < 3
class QMockErrorsInThreadsTests(unittest.TestCase):
def test_str(self):
error = qmock.QMockErrorsInThreads(
[RuntimeError("foo"), ValueError("bar"), KeyError("baz")]
)
self.assertEqual(
str(error),
"Unhandled QMock errors raised in other threads:"
" ["
+ repr(RuntimeError("foo")) + ", "
+ repr(ValueError("bar")) + ", "
+ repr(KeyError("baz"))
+ "]"
)
class patchTests(unittest.TestCase):
def setUp(self):
self._assert_no_patches()
def tearDown(self):
self._assert_no_patches()
def _assert_no_patches(self):
self._assert_datetime_is_not_patched()
self._assert_json_is_not_patched()
self._assert_xml_etree_is_not_patched()
def _assert_datetime_is_not_patched(self):
self.assertEqual(
str(datetime.date(1, 2, 3)),
"0001-02-03"
)
def _assert_json_is_not_patched(self):
self.assertEqual(
json.loads("[1,2,3]"),
[1, 2, 3]
)
def _assert_xml_etree_is_not_patched(self):
self.assertEqual(
xml.etree.ElementTree.fromstring("<foo />").tag,
"foo"
)
def _force_unexpected_call_in_thread(self, qm):
try:
thread = Thread(target=qm.an_unknown_call)
thread.start()
# we expect the thread to die immediately because of an
# UnexpectedCall. the alarms are an abundance of caution.
signal.alarm(1)
thread.join()
signal.alarm(0)
except BaseException as ex:
self.fail("Thread setup caught: {0!r}".format(ex))
def _assert_thread_qmock_errors(self, errors_in_thread_error):
"""
QMockErrorsInThreads.errors should contain a single
UnexpectedCall raised in a different thread.
"""
qmock_errors_from_threads = errors_in_thread_error.errors
self.assertEqual(len(qmock_errors_from_threads), 1)
qmock_error_tid, qmock_error = qmock_errors_from_threads[0]
self.assertNotEqual(qmock_error_tid, get_thread_id())
self.assertIsInstance(qmock_error, qmock.UnexpectedCall)
def _assert_patched_func_error(self, errors_in_thread_error,
expected_func_error_type):
"""
in Python 3, when multiple exceptions are being handled at once,
each exception has a __context__ which is the last exception
raised before this one (or `None`, if this is the first
exception in the current batch of active exceptions).
so QMockErrorsInThreads.__context__ should be the exception
raised by the function/scope being patched.
"""
if PY2:
# Python 2 has no __context__
return
patched_func_error = errors_in_thread_error.__context__
if expected_func_error_type is None:
self.assertIsNone(patched_func_error)
else:
self.assertIsInstance(patched_func_error, expected_func_error_type)
def test_empty_function_decorator_succeeds(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
qm.call_queue.push(qmock.call.bar(), 5)
self.assertEqual(qm.bar(), 5)
foo() # no raise == success
# a little silly because nothing is being patched, but just in case.
def test_empty_function_decorator_cleans_up_on_func_exception(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
raise RuntimeError("TEST")
self.assertRaises(RuntimeError, foo)
def test_empty_function_decorator_raises_on_exit_if_queue_not_empty(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
qm.call_queue.push(qmock.call.bar(), 5)
self.assertRaises(qmock.CallQueueNotEmpty, foo)
def test_empty_function_decorator_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
# would raise CallQueueNotEmpty if not handling RuntimeError
qm.call_queue.push(qmock.call.bar(), 5)
raise RuntimeError("TEST")
self.assertRaises(RuntimeError, foo)
def test_empty_function_decorator_raises_on_exit_if_errors_in_threads(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
self._force_unexpected_call_in_thread(qm)
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_empty_function_decorator_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
@qmock.patch()
def foo(qm):
self._assert_no_patches()
# raises QMockErrorsInThreads on top of RuntimeError
self._force_unexpected_call_in_thread(qm)
raise RuntimeError("TEST")
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, RuntimeError)
def test_single_patch_function_decorator_succeeds(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
self.assertEqual(datetime.date(1, 2, 3), 7)
self._assert_no_patches()
foo()
def test_single_patch_function_decorator_cleans_up_on_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
raise ValueError("TEST")
self._assert_no_patches()
self.assertRaises(ValueError, foo)
def test_single_patch_function_decorator_cleans_up_on_bad_patch(self):
@qmock.patch(dt="datetime.BAD")
def foo(qm):
self.fail("This test function should not run.")
self._assert_no_patches()
self.assertRaises(AttributeError, foo)
def test_single_patch_function_decorator_raises_on_exit_if_queue_not_empty(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
self._assert_no_patches()
self.assertRaises(qmock.CallQueueNotEmpty, foo)
def test_single_patch_function_decorator_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
# would raise CallQueueNotEmpty if not handling ValueError
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
raise ValueError("TEST")
self._assert_no_patches()
self.assertRaises(ValueError, foo)
def test_single_patch_function_decorator_raises_on_exit_if_errors_in_threads(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
self._force_unexpected_call_in_thread(qm)
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_single_patch_function_decorator_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
# raises QMockErrorsInThreads on top of ValueError
self._force_unexpected_call_in_thread(qm)
raise ValueError("TEST")
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, ValueError)
def test_multi_patch_function_decorator_succeeds(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(xml.etree.ElementTree.fromstring("<foo />"), "b")
self.assertEqual(json.loads("[1,2,3]"), "c")
self._assert_no_patches()
foo()
def test_multi_patch_function_decorator_cleans_up_on_func_exception(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
raise KeyError("TEST")
self._assert_no_patches()
self.assertRaises(KeyError, foo)
def test_multi_patch_function_decorator_cleans_up_on_bad_patch(self):
@qmock.patch(dt=DATETIME_DATE, json="json.BAD", et=XML_ETREE_ELEMENTTREE)
def foo(qm):
self.fail("This test function should not run.")
self._assert_no_patches()
self.assertRaises(AttributeError, foo)
def test_multi_patch_function_decorator_raises_on_exit_if_queue_not_empty(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self._assert_no_patches()
self.assertRaises(qmock.CallQueueNotEmpty, foo)
def test_multi_patch_function_decorator_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
# would raise CallQueueNotEmpty if not handling KeyError
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
raise KeyError("TEST")
self._assert_no_patches()
self.assertRaises(KeyError, foo)
def test_multi_patch_function_decorator_raises_on_exit_if_errors_in_threads(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
self._force_unexpected_call_in_thread(qm)
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_multi_patch_function_decorator_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE)
def foo(qm):
# raises QMockErrorsInThreads on top of KeyError
self._force_unexpected_call_in_thread(qm)
raise KeyError("TEST")
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, KeyError)
def test_stacked_function_decorator_succeeds(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(xml.etree.ElementTree.fromstring("<foo />"), "b")
self.assertEqual(json.loads("[1,2,3]"), "c")
self._assert_no_patches()
foo()
def test_stacked_function_decorator_cleans_up_on_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
raise IndexError("TEST")
self._assert_no_patches()
self.assertRaises(IndexError, foo)
def test_stacked_function_decorator_cleans_up_on_bad_patch(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json="json.BAD")
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
self.fail("This test function should not run.")
self._assert_no_patches()
self.assertRaises(AttributeError, foo)
def test_stacked_function_decorator_raises_on_exit_if_queue_not_empty(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self._assert_no_patches()
self.assertRaises(qmock.CallQueueNotEmpty, foo)
def test_stacked_function_decorator_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
# would raise CallQueueNotEmpty if not handling IndexError
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
raise IndexError("TEST")
self._assert_no_patches()
self.assertRaises(IndexError, foo)
def test_stacked_function_decorator_raises_on_exit_if_errors_in_threads(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
self._force_unexpected_call_in_thread(qm)
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_stacked_function_decorator_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(json=JSON_LOADS)
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def foo(qm):
# raises QMockErrorsInThreads on top of IndexError
self._force_unexpected_call_in_thread(qm)
raise IndexError("TEST")
self._assert_no_patches()
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
foo()
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, IndexError)
def test_class_decorator_only_patches_test_methods(self):
@qmock.patch(dt=DATETIME_DATE)
class Foo(object):
fizz = "a"
test_buzz = "b"
def bar(foo_self):
self._assert_no_patches()
def test_baz(foo_self, qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
self.assertEqual(datetime.date(1, 2, 3), 7)
self._assert_no_patches()
f = Foo()
self._assert_no_patches()
self.assertEqual(f.fizz, "a")
self._assert_no_patches()
self.assertEqual(f.test_buzz, "b")
self._assert_no_patches()
f.bar()
self._assert_no_patches()
f.test_baz()
def test_mixed_decorator_patches(self):
@qmock.patch(dt=DATETIME_DATE, json=JSON_LOADS)
class Foo(object):
@qmock.patch(et=XML_ETREE_ELEMENTTREE)
def test_mixed(foo_self, qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(xml.etree.ElementTree.fromstring("<foo />"), "b")
self.assertEqual(json.loads("[1,2,3]"), "c")
def test_no_cross_mix_between_methods(foo_self, qm):
self._assert_xml_etree_is_not_patched()
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(json.loads("[1,2,3]"), "c")
@qmock.patch(et="xml.etree.BAD")
def test_bad_patch(foo_self, qm):
self.fail("This test function should not run.")
self._assert_no_patches()
f = Foo()
self._assert_no_patches()
f.test_mixed()
self._assert_no_patches()
f.test_no_cross_mix_between_methods()
self._assert_no_patches()
self.assertRaises(AttributeError, f.test_bad_patch)
def test_empty_context_manager_succeeds(self):
with qmock.patch() as qm:
self._assert_no_patches()
qm.call_queue.push(qmock.call.bar(), 5)
self.assertEqual(qm.bar(), 5)
# a little silly because nothing is being patched, but just in case.
def test_empty_context_manager_cleans_up_on_func_exception(self):
with self.assertRaises(RuntimeError):
with qmock.patch() as qm:
self._assert_no_patches()
raise RuntimeError("TEST")
def test_empty_context_manager_raises_on_exit_if_queue_not_empty(self):
with self.assertRaises(qmock.CallQueueNotEmpty):
with qmock.patch() as qm:
self._assert_no_patches()
qm.call_queue.push(qmock.call.bar(), 5)
def test_empty_context_manager_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
with self.assertRaises(RuntimeError):
with qmock.patch() as qm:
self._assert_no_patches()
# would raise CallQueueNotEmpty if not handling RuntimeError
qm.call_queue.push(qmock.call.bar(), 5)
raise RuntimeError("TEST")
def test_empty_context_manager_raises_on_exit_if_errors_in_threads(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch() as qm:
self._assert_no_patches()
self._force_unexpected_call_in_thread(qm)
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_empty_context_manager_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch() as qm:
self._assert_no_patches()
# raises QMockErrorsInThreads on top of RuntimeError
self._force_unexpected_call_in_thread(qm)
raise RuntimeError("TEST")
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, RuntimeError)
def test_single_patch_context_manager_succeeds(self):
with qmock.patch(dt=DATETIME_DATE) as qm:
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
self.assertEqual(datetime.date(1, 2, 3), 7)
def test_single_patch_context_manager_cleans_up_on_func_exception(self):
with self.assertRaises(ValueError):
with qmock.patch(dt=DATETIME_DATE) as qm:
raise ValueError("TEST")
def test_single_patch_context_manager_cleans_up_on_bad_patch(self):
with self.assertRaises(AttributeError):
with qmock.patch(dt="datetime.BAD") as qm:
self.fail("This context should not be entered.")
def test_single_patch_context_manager_raises_on_exit_if_queue_not_empty(self):
with self.assertRaises(qmock.CallQueueNotEmpty):
with qmock.patch(dt=DATETIME_DATE) as qm:
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
def test_single_patch_context_manager_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
with self.assertRaises(ValueError):
with qmock.patch(dt=DATETIME_DATE) as qm:
# would raise CallQueueNotEmpty if not handling ValueError
qm.call_queue.push(qmock.call.dt(1, 2, 3), 7)
raise ValueError("TEST")
def test_single_patch_context_manager_raises_on_exit_if_errors_in_threads(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch(dt=DATETIME_DATE) as qm:
self._force_unexpected_call_in_thread(qm)
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_single_patch_context_manager_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch(dt=DATETIME_DATE) as qm:
# raises QMockErrorsInThreads on top of ValueError
self._force_unexpected_call_in_thread(qm)
raise ValueError("TEST")
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, ValueError)
    def test_multi_patch_context_manager_succeeds(self):
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(xml.etree.ElementTree.fromstring("<foo />"), "b")
self.assertEqual(json.loads("[1,2,3]"), "c")
def test_multi_patch_context_manager_cleans_up_on_func_exception(self):
with self.assertRaises(KeyError):
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
raise KeyError("TEST")
def test_multi_patch_context_manager_cleans_up_on_bad_patch(self):
with self.assertRaises(AttributeError):
with qmock.patch(
dt=DATETIME_DATE, json="json.BAD", et=XML_ETREE_ELEMENTTREE
) as qm:
self.fail("This context should not be entered.")
def test_multi_patch_context_manager_raises_on_exit_if_queue_not_empty(self):
with self.assertRaises(qmock.CallQueueNotEmpty):
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
def test_multi_patch_context_manager_doesnt_raise_on_exit_if_queue_not_empty_and_func_exception(self):
with self.assertRaises(KeyError):
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
# would raise CallQueueNotEmpty if not handling KeyError
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.et.fromstring("<foo />"), "b")
qm.call_queue.push(qmock.call.json("[1,2,3]"), "c")
raise KeyError("TEST")
def test_multi_patch_context_manager_raises_on_exit_if_errors_in_threads(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
self._force_unexpected_call_in_thread(qm)
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, None)
def test_multi_patch_context_manager_still_raises_on_exit_if_errors_in_threads_and_func_exception(self):
with self.assertRaises(qmock.QMockErrorsInThreads) as assertion:
with qmock.patch(
dt=DATETIME_DATE, json=JSON_LOADS, et=XML_ETREE_ELEMENTTREE
) as qm:
# raises QMockErrorsInThreads on top of KeyError
self._force_unexpected_call_in_thread(qm)
raise KeyError("TEST")
self._assert_thread_qmock_errors(assertion.exception)
self._assert_patched_func_error(assertion.exception, KeyError)
#
# degenerate cases
#
def test_duplicate_patch_succeeds(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(dt=DATETIME_DATE)
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
self.assertEqual(datetime.date(1, 2, 3), "a")
self._assert_no_patches()
foo()
# this also indirectly tests that stacked patches are applied strictly
# bottom-up.
def test_same_patch_on_different_attr_is_weird(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(date=DATETIME_DATE)
def foo(qm):
# this is the wrong call to expect because the last patch for
# datetime.date was assigned to the `dt` attr
qm.call_queue.push(qmock.call.date(1, 2, 3), "a")
with self.assertRaises(qmock.UnexpectedCall):
datetime.date(1, 2, 3)
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
self.assertEqual(datetime.date(1, 2, 3), "a")
self._assert_no_patches()
foo()
def test_different_patch_on_same_attr_is_also_weird(self):
@qmock.patch(dt=DATETIME_DATE)
@qmock.patch(dt="datetime.datetime")
def foo(qm):
qm.call_queue.push(qmock.call.dt(1, 2, 3), "a")
qm.call_queue.push(qmock.call.dt(4, 5, 6), "b")
qm.call_queue.push(qmock.call.dt(7, 8, 9), "c")
self.assertEqual(datetime.date(1, 2, 3), "a")
self.assertEqual(datetime.datetime(4, 5, 6), "b")
self.assertEqual(datetime.date(7, 8, 9), "c")
self._assert_no_patches()
foo()
class QMockTests(unittest.TestCase):
def test_root_assigned_attributes(self):
qm = qmock.QMock()
qm.foo = 5
self.assertIs(qm.foo, 5) # retained across accesses
self.assertIsInstance(qm.foo, int)
self.assertRaises(TypeError, qm.foo) # not callable
def test_root_generated_attributes(self):
qm = qmock.QMock()
self.assertIsNot(qm.foo, qm.baz)
self.assertIs(qm.foo, qm.foo) # retained across accesses
self.assertIsInstance(qm.foo, qmock._qmock._CallProxy)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo() # empty CallQueue
qm.call_queue.push(qmock.call.foo(), 5)
self.assertIs(qm.foo(), 5)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo() # empty CallQueue
def test_nested_assigned_attributes(self):
qm = qmock.QMock()
qm.foo.bar = 5
self.assertIs(qm.foo.bar, 5) # retained across accesses
self.assertIsInstance(qm.foo.bar, int)
self.assertRaises(TypeError, qm.foo.bar) # not callable
self.assertIsNot(qm.foo, qm.baz)
self.assertIs(qm.foo, qm.foo) # retained across accesses
self.assertIsInstance(qm.foo, qmock._qmock._CallProxy)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo() # empty CallQueue
def test_nested_generated_attributes(self):
qm = qmock.QMock()
self.assertIsNot(qm.foo.bar, qm.foo.baz)
self.assertIsNot(qm.foo.bar, qm.baz.bar)
self.assertIs(qm.foo.bar, qm.foo.bar) # retained across accesses
self.assertIsInstance(qm.foo.bar, qmock._qmock._CallProxy)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo.bar() # empty CallQueue
self.assertIsNot(qm.foo, qm.baz)
self.assertIs(qm.foo, qm.foo) # retained across accesses
self.assertIsInstance(qm.foo, qmock._qmock._CallProxy)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo() # empty CallQueue
qm.call_queue.push(qmock.call.foo.bar(), 5)
self.assertIs(qm.foo.bar(), 5)
with self.assertRaises(qmock.UnexpectedCall):
qm.foo.bar() # empty CallQueue
def test_assigned_attributes_are_attached(self):
qm = qmock.QMock()
m = mock.Mock()
qm.foo = m
with self.assertRaises(qmock.UnexpectedCall):
m()
def test_root_magic_methods(self):
qm = qmock.QMock()
with self.assertRaises(qmock.UnexpectedCall):
str(qm) # empty CallQueue
qm.call_queue.push(qmock.call.__getattr__("__str__")(qm), "test")
self.assertEqual(str(qm), "test")
with self.assertRaises(qmock.UnexpectedCall):
str(qm) # empty CallQueue
def test_nested_magic_methods(self):
qm = qmock.QMock()
with self.assertRaises(qmock.UnexpectedCall):
qm.foo < 5 # empty CallQueue
qm.call_queue.push(qmock.call.foo.__getattr__("__lt__")(qm.foo, 5), "test")
self.assertEqual((qm.foo < 5), "test")
with self.assertRaises(qmock.UnexpectedCall):
qm.foo < 5 # empty CallQueue
def test_magic_methods_are_always_the_same_object(self):
qm = qmock.QMock()
method = qm.__len__
self.assertIs(method, qm.__len__)
self.assertIs(method, qm.__len__)
self.assertIs(method, qm.__len__)
def test_magic_methods_are_unique(self):
qm = qmock.QMock()
self.assertIsNot(qm.__len__, qm.foo.__len__)
self.assertIsNot(qm.__len__, qm.bar.__len__)
self.assertIsNot(qm.foo.__len__, qm.bar.__len__)
qm.call_queue.assert_empty()
qm.call_queue.push(qmock.call.__getattr__("__len__")(qm), 1)
qm.call_queue.push(qmock.call.foo.__getattr__("__len__")(qm.foo), 2)
qm.call_queue.push(qmock.call.bar.__getattr__("__len__")(qm.bar), 3)
qm.call_queue.push(qmock.call.bar.__getattr__("__len__")(qm.bar), 4)
with self.assertRaises(qmock.UnexpectedCall):
len(qm.foo) # wrong call; expected len(qm)
self.assertEqual(len(qm.foo), 2)
with self.assertRaises(qmock.UnexpectedCall):
len(qm.foo) # wrong call; expected len(qm.bar)
self.assertEqual(len(qm.bar), 4)
qm.call_queue.assert_empty()
def test_can_be_a_context_manager(self):
qm = qmock.QMock()
qm.call_queue.assert_empty()
with self.assertRaises(qmock.UnexpectedCall):
with qm as foo: # empty CallQueue
pass
qm.call_queue.assert_empty()
qm.call_queue.push(qmock.call.__getattr__("__enter__")(qm), qm.foo)
qm.call_queue.push(qmock.call.foo(), 7357)
qm.call_queue.push(qmock.call.__getattr__("__exit__")(qm, None, None, None), None)
with qm as foo:
self.assertEqual(foo(), 7357)
qm.call_queue.assert_empty()
def test_mock_calls_returns_proxy(self):
qm = qmock.QMock()
self.assertIsInstance(qm.mock_calls, qmock._qmock._MockCallsProxy)
def test_eq(self):
alpha = qmock.QMock()
bravo = qmock.QMock()
self.assertTrue(alpha == alpha)
self.assertTrue(bravo == bravo)
self.assertFalse(alpha == bravo)
self.assertFalse(bravo == alpha)
def test_is_callable(self):
qm = qmock.QMock()
with self.assertRaises(qmock.UnexpectedCall):
qm() # empty CallQueue
qm.call_queue.push(qmock.call(), 5)
self.assertIs(qm(), 5)
def test_mock_return_assigned_attributes(self):
qm = qmock.QMock()
qm.foo = 5
self.assertIs(
qm.mock_return(qmock.call.foo),
5
)
qm = qmock.QMock()
qm.foo.return_value = 6
self.assertIs(
qm.mock_return(qmock.call.foo()),
6
)
qm = qmock.QMock()
qm.return_value = 7
self.assertIs(
qm.mock_return(qmock.call()),
7
)
qm = qmock.QMock()
qm.return_value.foo = 8
self.assertIs(
qm.mock_return(qmock.call().foo),
8
)
qm = qmock.QMock()
qm.return_value.foo.return_value.bar.return_value.baz.barf.return_value = 9
self.assertIs(
qm.mock_return(qmock.call(x=1).foo(y=2).bar(5).baz.barf(z={6: 7}, w=8)),
9
)
def test_mock_return_generated_attributes(self):
qm = qmock.QMock()
self.assertIs(
qm.mock_return(qmock.call.foo),
qm.foo
)
self.assertIs(
qm.mock_return(qmock.call.foo()),
qm.foo.return_value
)
self.assertIs(
qm.mock_return(qmock.call()),
qm.return_value
)
self.assertIs(
qm.mock_return(qmock.call().foo),
qm.return_value.foo
)
self.assertIs(
qm.mock_return(qmock.call(x=1).foo(y=2).bar(5).baz.barf(z={6: 7}, w=8)),
qm.return_value.foo.return_value.bar.return_value.baz.barf.return_value
)
def test_mock_return_null_call(self):
qm = qmock.QMock()
self.assertRaises(
AttributeError,
qm.mock_return,
qmock.call
)
class CallQueueTests(unittest.TestCase):
def test_push_attribute_call(self):
qm = qmock.QMock()
cq = qm.call_queue
self.assertRaises(
qmock.BadCall,
cq.push,
qmock.call.foo,
"bar"
)
self.assertEqual(len(cq.pop_errors), 0)
def test_push_function_call(self):
qm = qmock.QMock()
cq = qm.call_queue
self.assertEqual(len(cq._queue), 0)
cq.push(qmock.call.foo(), "bar")
self.assertEqual(
tuple(
(expected_call, self._copy_mock_side_effect(mock_result))
for expected_call, mock_result in cq._queue
),
(
(qmock.call.foo(), ("bar",)),
)
)
self.assertEqual(len(cq._queue), 1)
self.assertEqual(qm.foo(), "bar")
cq.assert_empty()
self.assertEqual(len(cq.pop_errors), 0)
def test_push_all_attribute_call(self):
qm = qmock.QMock()
cq = qm.call_queue
self.assertRaises(
qmock.BadCall,
cq.push_all,
qmock.call(x=1).foo(y=2).bar(5).baz.barf,
10
)
self.assertEqual(len(cq.pop_errors), 0)
def test_push_all_function_call(self):
qm = qmock.QMock()
cq = qm.call_queue
cq.push_all(qmock.call(x=1).foo(y=2).bar(5).baz.barf(z={6: 7}, w=8), 10)
self.assertEqual(
tuple(
(expected_call, self._copy_mock_side_effect(mock_result))
for expected_call, mock_result in cq._queue
),
(
(
qmock.call(x=1),
(qm.return_value,)
),
(
qmock.call(x=1).foo(y=2),
(qm.return_value.foo.return_value,)
),
(
qmock.call(x=1).foo(y=2).bar(5),
(qm.return_value.foo.return_value.bar.return_value,)
),
(
qmock.call(x=1).foo(y=2).bar(5).baz.barf(z={6: 7}, w=8),
(10,)
)
)
)
self.assertEqual(len(cq._queue), 4)
self.assertEqual(qm(x=1).foo(y=2).bar(5).baz.barf(z={6: 7}, w=8), 10)
cq.assert_empty()
self.assertEqual(len(cq.pop_errors), 0)
def test_pop_value_result(self):
qm = qmock.QMock()
cq = qm.call_queue
cq.push(qmock.call.foo(), 7357)
self.assertEqual(cq._pop(qmock.call.foo()), 7357)
cq.assert_empty()
self.assertEqual(len(cq.pop_errors), 0)
def test_pop_exception_result(self):
qm = qmock.QMock()
cq = qm.call_queue
cq.push(qmock.call.foo(), ValueError("test"))
with self.assertRaises(ValueError) as assertion:
cq._pop(qmock.call.foo())
self.assertEqual(str(assertion.exception), "test")
cq.assert_empty()
self.assertEqual(len(cq.pop_errors), 0)
def test_pop_raises_when_empty(self):
qm = qmock.QMock()
cq = qm.call_queue
self.assertRaises(qmock.UnexpectedCall, cq._pop, qmock.call.foo())
self.assertEqual(len(cq.pop_errors), 1)
record = cq.pop_errors[0]
self.assertEqual(record.thread_id, get_thread_id())
self.assertIsInstance(record.error, qmock.UnexpectedCall)
self.assertEqual(
str(record.error),
"Queue is empty. call: call.foo()"
)
def test_pop_raises_when_call_doesnt_match_expectation(self):
qm = qmock.QMock()
cq = qm.call_queue
cq.push(qmock.call.foo(), 7357)
self.assertRaises(qmock.UnexpectedCall, cq._pop, qmock.call.not_foo())
self.assertEqual(len(cq.pop_errors), 1)
record = cq.pop_errors[0]
self.assertEqual(record.thread_id, get_thread_id())
self.assertIsInstance(record.error, qmock.UnexpectedCall)
self.assertEqual(
str(record.error),
"Call does not match expectation. actual: call.not_foo(); expected: call.foo()"
)
def test_assert_empty(self):
qm = qmock.QMock()
cq = qm.call_queue
cq.assert_empty()
cq.push(qmock.call.foo(), "bar")
self.assertRaises(qmock.CallQueueNotEmpty, cq.assert_empty)
cq._pop(qmock.call.foo())
cq.assert_empty()
self.assertEqual(len(cq.pop_errors), 0)
def _copy_mock_side_effect(self, m):
"""
mock.Mock.side_effect is stored as a <tupleiterator>,
so iterating consumes it. so we'll consume it, store a copy,
re-populate it, and return the copy
"""
side_effect = tuple(m.side_effect)
m.side_effect = side_effect
return side_effect
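# Conventional entry point for running this test module directly (an addition
# here; the suite can of course also be run via unittest discovery or pytest).
if __name__ == "__main__":
    unittest.main()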
|
main.py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# filename: main.py
# modified: 2019-09-11
import os
import time
from optparse import OptionParser
from multiprocessing import Process, Manager, Queue
from autoelective import __version__, __date__
from autoelective.config import AutoElectiveConfig
from autoelective.parser import load_course_csv
from autoelective.logger import ConsoleLogger
from autoelective.loop import main as run_main_loop
from autoelective.monitor import main as run_monitor
from autoelective.const import SIGNAL_KILL_ALL_PROCESSES
from autoelective._internal import userInfo as _userInfo # ugly !
def task_run_loop(userInfo):
config = AutoElectiveConfig() # create singleton first
cout = ConsoleLogger("main")
signals = Queue()
p = Process(target=run_main_loop, name="Main", args=(signals, userInfo))
p.daemon = True
p.start()
while True:
try:
signal = signals.get() # block process
except KeyboardInterrupt as e:
cout.info("Process %s is killed" % os.getpid())
return
        time.sleep(0.1) # brief pause before handling the signal
if signal == SIGNAL_KILL_ALL_PROCESSES:
if p.is_alive():
p.terminate()
cout.info("Process %s is killed" % p.name)
break
def task_run_loop_with_monitor(userInfo):
config = AutoElectiveConfig() # create singleton first
cout = ConsoleLogger("main")
signals = Queue()
with Manager() as manager:
# shared objects
goals = manager.list(load_course_csv())
ignored = manager.list()
status = manager.dict()
status["main_loop"] = 0
status["login_loop"] = 0
status["error_count"] = 0
status["errors"] = manager.dict()
args = (signals, userInfo, goals, ignored, status)
pList = [
Process(target=run_main_loop, name="Main", args=args),
Process(target=run_monitor, name="Monitor", args=args),
]
for p in pList:
p.daemon = True
p.start()
while True:
try:
signal = signals.get() # block process
except KeyboardInterrupt as e:
cout.info("Process %s is killed" % os.getpid())
return
            time.sleep(0.1) # brief pause before handling the signal
if signal == SIGNAL_KILL_ALL_PROCESSES:
for p in pList:
if p.is_alive():
p.terminate()
cout.info("Process %s is killed" % p.name)
break
def main():
parser = OptionParser(
description='PKU Auto-Elective Tool v%s (%s)' % (__version__, __date__),
version=__version__,
)
# MARK: custom input files
parser.add_option(
'--config',
dest='CONFIG_INI',
metavar="FILE",
help='custom config file encoded with utf8',
)
parser.add_option(
'--course-csv-utf8',
dest='COURSE_UTF8_CSV',
metavar="FILE",
help='custom course.csv file encoded with utf8',
)
parser.add_option(
'--course-csv-gbk',
dest='COURSE_GBK_CSV',
metavar="FILE",
help='custom course.csv file encoded with gbk',
)
# MARK: boolean (flag) options
parser.add_option(
'--with-monitor',
dest='with_monitor',
action='store_true',
default=False,
help='run the monitor process simultaneously',
)
options, args = parser.parse_args()
run_task = task_run_loop
# MARK: setup userInfo
userInfo = {}
if options.CONFIG_INI is not None:
userInfo["CONFIG_INI"] = options.CONFIG_INI
if options.COURSE_UTF8_CSV is not None:
userInfo["COURSE_UTF8_CSV"] = options.COURSE_UTF8_CSV
if options.COURSE_GBK_CSV is not None:
userInfo["COURSE_GBK_CSV"] = options.COURSE_GBK_CSV
    # MARK: handle boolean (flag) options
if options.with_monitor:
run_task = task_run_loop_with_monitor
_userInfo.update(userInfo) # setup userInfo first
run_task(userInfo)
if __name__ == '__main__':
main()
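# Example invocations (illustrative; the options are the ones defined above):
#
#     python3 main.py
#     python3 main.py --config myconfig.ini --with-monitor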
|
parameter_server.py
|
import argparse
import os
import time
from threading import Lock
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torch.distributed.optim import DistributedOptimizer
from torchvision import datasets, transforms
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
# Move tensor to next device if necessary
next_device = next(self.fc1.parameters()).device
x = x.to(next_device)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
# --------- Helper Methods --------------------
# On the local node, call a method with first arg as the value held by the
# RRef. Other args are passed in as arguments to the function called.
# Useful for calling instance methods. method could be any matching function, including
# class methods.
def call_method(method, rref, *args, **kwargs):
return method(rref.local_value(), *args, **kwargs)
# Given an RRef, return the result of calling the passed in method on the value
# held by the RRef. This call is done on the remote node that owns
# the RRef and passes along the given argument.
# Example: If the value held by the RRef is of type Foo, then
# remote_method(Foo.bar, rref, arg1, arg2) is equivalent to calling
# <foo_instance>.bar(arg1, arg2) on the remote node and getting the result
# back.
def remote_method(method, rref, *args, **kwargs):
args = [method, rref] + list(args)
return rpc.rpc_sync(rref.owner(), call_method, args=args, kwargs=kwargs)
# --------- Parameter Server --------------------
class ParameterServer(nn.Module):
def __init__(self):
super().__init__()
self.model = Net()
def forward(self, inp):
inp = inp.to(self.input_device)
out = self.model(inp)
# This output is forwarded over RPC, which as of 1.5.0 only accepts CPU tensors.
# Tensors must be moved in and out of GPU memory due to this.
out = out.to("cpu")
return out
# Use dist autograd to retrieve gradients accumulated for this model.
# Primarily used for verification.
def get_dist_gradients(self, cid):
grads = dist_autograd.get_gradients(cid)
# This output is forwarded over RPC, which as of 1.5.0 only accepts CPU tensors.
# Tensors must be moved in and out of GPU memory due to this.
cpu_grads = {}
for k, v in grads.items():
k_cpu, v_cpu = k.to("cpu"), v.to("cpu")
cpu_grads[k_cpu] = v_cpu
return cpu_grads
# Wrap local parameters in a RRef. Needed for building the
    # DistributedOptimizer which optimizes parameters remotely.
def get_param_rrefs(self):
param_rrefs = [rpc.RRef(param) for param in self.model.parameters()]
return param_rrefs
# The global parameter server instance.
param_server = None
# A lock to ensure we only have one parameter server.
global_lock = Lock()
def get_parameter_server():
"""
Returns a singleton parameter server to all trainer processes
"""
global param_server
# Ensure that we get only one handle to the ParameterServer.
with global_lock:
if not param_server:
# construct it once
param_server = ParameterServer()
return param_server
def run_parameter_server(rank, world_size):
# The parameter server just acts as a host for the model and responds to
# requests from trainers.
# rpc.shutdown() will wait for all workers to complete by default, which
# in this case means that the parameter server will wait for all trainers
# to complete, and then exit.
print("PS master initializing RPC")
rpc.init_rpc(name="parameter_server", rank=rank, world_size=world_size)
print("RPC initialized! Running parameter server...")
rpc.shutdown()
print("RPC shutdown on parameter server.")
# --------- Trainers --------------------
# nn.Module corresponding to the network trained by this trainer. The
# forward() method simply invokes the network on the given parameter
# server.
class TrainerNet(nn.Module):
def __init__(self):
super().__init__()
self.param_server_rref = rpc.remote("parameter_server", get_parameter_server)
def get_global_param_rrefs(self):
remote_params = remote_method(ParameterServer.get_param_rrefs, self.param_server_rref)
return remote_params
def forward(self, x):
model_output = remote_method(ParameterServer.forward, self.param_server_rref, x)
return model_output
def run_training_loop(rank, train_loader, test_loader):
    # Runs the typical neural network forward + backward + optimizer step, but
# in a distributed fashion.
net = TrainerNet()
# Build DistributedOptimizer.
param_rrefs = net.get_global_param_rrefs()
opt = DistributedOptimizer(optim.SGD, param_rrefs, lr=0.03)
for i, (data, target) in enumerate(train_loader):
with dist_autograd.context() as cid:
model_output = net(data)
target = target.to(model_output.device)
loss = F.nll_loss(model_output, target)
if i % 5 == 0:
print(f"Rank {rank} training batch {i} loss {loss.item()}")
dist_autograd.backward(cid, [loss])
# Ensure that dist autograd ran successfully and gradients were
# returned.
assert remote_method(
ParameterServer.get_dist_gradients,
net.param_server_rref,
cid) != {}
opt.step(cid)
print("Training complete!")
print("Getting accuracy....")
get_accuracy(test_loader, net)
def get_accuracy(test_loader, model):
model.eval()
correct_sum = 0
with torch.no_grad():
for i, (data, target) in enumerate(test_loader):
            out = model(data)
            pred = out.argmax(dim=1, keepdim=True)
correct = pred.eq(target.view_as(pred)).sum().item()
correct_sum += correct
print(f"Accuracy {correct_sum / len(test_loader.dataset)}")
# Main loop for trainers.
def run_worker(rank, world_size, train_loader, test_loader):
print(f"Worker rank {rank} initializing RPC")
rpc.init_rpc(
name=f"trainer_{rank}",
rank=rank,
world_size=world_size)
print(f"Worker {rank} done initializing RPC")
run_training_loop(rank, train_loader, test_loader)
rpc.shutdown()
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Parameter-Server RPC based training")
parser.add_argument(
"--world_size",
type=int,
default=4,
help="""Total number of participating processes. Should be the sum of
master node and all training nodes.""")
parser.add_argument(
"--master_addr",
type=str,
default="localhost",
help="""Address of master, will default to localhost if not provided.
Master must be able to accept network traffic on the address + port.""")
parser.add_argument(
"--master_port",
type=str,
default="29500",
help="""Port that master is listening on, will default to 29500 if not
provided. Master must be able to accept network traffic on the host and port.""")
parser.add_argument(
"rank",
type=int,
default=None,
help="Global rank of this process. Pass in 0 for master.")
args = parser.parse_args()
assert args.rank is not None, "must provide rank argument."
os.environ['MASTER_ADDR'] = args.master_addr
os.environ["MASTER_PORT"] = args.master_port
processes = []
world_size = args.world_size
if args.rank == 0:
p = mp.Process(target=run_parameter_server, args=(0, world_size))
p.start()
processes.append(p)
else:
# Get data to train on
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32, shuffle=True, )
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(
'../data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32,
shuffle=True,
)
# start training worker on this node
p = mp.Process(
target=run_worker,
args=(
args.rank,
world_size,
train_loader,
test_loader))
p.start()
processes.append(p)
for p in processes:
p.join()
|
eurekaclient.py
|
"""
Eureka Client
"""
import json
import logging
import os
import random
import time
from threading import Thread
try:
from urllib.parse import urljoin
except ImportError:
from urlparse import urljoin
import dns.resolver
from .ec2metadata import get_metadata
from .httpclient import HttpClientObject, ApiException
from .hostinfo import HostInfo
logger = logging.getLogger('service.eureka')
logger.setLevel(logging.INFO)
class EurekaClientException(Exception):
pass
class EurekaRegistrationFailedException(EurekaClientException):
pass
class EurekaUpdateFailedException(EurekaClientException):
pass
class EurekaHeartbeatFailedException(EurekaClientException):
pass
class EurekaGetFailedException(EurekaClientException):
pass
class EurekaClient(object):
"""
Eureka Client
"""
EUREKA_SERVICE_URL = 'EUREKA_SERVICE_URL'
EUREKA_INSTANCE_DATACENTER = 'EUREKA_INSTANCE_DATACENTER'
EUREKA_HEARTBEAT_INTERVAL = 'EUREKA_HEARTBEAT_INTERVAL'
EUREKA_SERVICE_PATH = 'EUREKA_SERVICE_PATH'
EUREKA_INSTANCE_HOSTNAME = 'EUREKA_INSTANCE_HOSTNAME'
EUREKA_INSTANCE_PORT = 'EUREKA_INSTANCE_PORT'
def __init__(self,
name,
eureka_url=None,
eureka_domain_name=None,
host_name=None,
data_center=None,
instance_id=None,
vip_address=None,
secure_vip_address=None,
port=None,
use_dns=True,
region=None,
prefer_same_zone=True,
context="eureka/v2",
eureka_port=None,
https_enabled=False,
heartbeat_interval=None,
service_path=None):
self.app_name = name
self.eureka_url = eureka_url or os.environ.get(EurekaClient.EUREKA_SERVICE_URL, None)
self.data_center = data_center or os.environ.get(EurekaClient.EUREKA_INSTANCE_DATACENTER, None)
self.heartbeat_interval = heartbeat_interval or int(os.environ.get(EurekaClient.EUREKA_HEARTBEAT_INTERVAL, 30))
self.service_path = service_path or os.environ.get(EurekaClient.EUREKA_SERVICE_PATH, 'eureka/apps')
self.host_name = host_name or os.environ.get(EurekaClient.EUREKA_INSTANCE_HOSTNAME, None)
self.port = port or os.environ.get(EurekaClient.EUREKA_INSTANCE_PORT, None)
self.secure_port = port
self.use_dns = use_dns
self.region = region
self.prefer_same_zone = prefer_same_zone
self.eureka_domain_name = eureka_domain_name
self.eureka_port = eureka_port
self.heartbeat_task = None
self.instance_id = instance_id
self.app_protocol = 'https://' if https_enabled else 'http://'
host_info = HostInfo().get()
if data_center == "Amazon":
self.host_name = get_metadata("hostname")
elif not host_name:
self.host_name = host_info['host']
self.vip_address = vip_address
if not self.vip_address:
self.vip_address = host_info['IPv4']
self.secure_vip_address = secure_vip_address
if not self.secure_vip_address:
self.secure_vip_address = host_info['IPv4']
# Relative URL to eureka
self.context = context
self.eureka_urls = self.get_eureka_urls()
self.requests = HttpClientObject()
def _get_txt_records_from_dns(self, domain):
records = dns.resolver.query(domain, 'TXT')
for record in records:
for string in record.strings:
yield string
def _get_zone_urls_from_dns(self, domain):
for zone in self._get_txt_records_from_dns(domain):
yield zone
def get_zones_from_dns(self):
return {
zone_url.split(".")[0]:
list(self._get_zone_urls_from_dns("txt.%s" % zone_url))
for zone_url in list(self._get_zone_urls_from_dns('txt.%s.%s' % (self.region, self.eureka_domain_name)))
}
def get_eureka_urls(self):
"""
Get Eureka URLs
"""
if self.eureka_url:
return [self.eureka_url]
elif self.use_dns:
zone_dns_map = self.get_zones_from_dns()
            zones = list(zone_dns_map.keys())
            assert len(zones) > 0, "No availability zones found; please add them explicitly"
            if self.prefer_same_zone:
                if self.get_instance_zone() in zones:
                    # Move our zone to the front so it is tried first
                    zones.insert(0, zones.pop(zones.index(self.get_instance_zone())))
else:
                    logger.warning("No match for the zone %s in the list of available zones %s" % (
self.get_instance_zone(), zones)
)
service_urls = []
for zone in zones:
eureka_instances = zone_dns_map[zone]
random.shuffle(eureka_instances) # Shuffle order for load balancing
for eureka_instance in eureka_instances:
server_uri = "http://%s" % eureka_instance
if self.eureka_port:
server_uri += ":%s" % self.eureka_port
                    eureka_instance_url = urljoin(server_uri, self.context)
if not eureka_instance_url.endswith("/"):
eureka_instance_url = "%s/" % eureka_instance_url
service_urls.append(eureka_instance_url)
primary_server = service_urls.pop(0)
random.shuffle(service_urls)
service_urls.insert(0, primary_server)
logger.info("This client will talk to the following serviceUrls in order: %s" % service_urls)
return service_urls
def get_instance_zone(self):
"""
Get Instance Zone
"""
if self.data_center == "Amazon":
return get_metadata('availability-zone')
else:
raise NotImplementedError("%s does not implement DNS lookups" % self.data_center)
def get_instance_id(self):
"""
Get Instance ID
"""
if self.instance_id:
return self.instance_id
return self.host_name + ':' + self.app_name + ':' + str(self.port)
def get_instance_data(self):
"""
Get Instance Data
"""
data_center_info = {
'name': self.data_center
}
if self.data_center == "Amazon":
data_center_info['metadata'] = {
'ami-launch-index': get_metadata('ami-launch-index'),
'local-hostname': get_metadata('local-hostname'),
'availability-zone': get_metadata('availability-zone'),
'instance-id': get_metadata('instance-id'),
'public-ipv4': get_metadata('local-ipv4'),
'public-hostname': get_metadata('hostname'),
'ami-manifest-path': get_metadata('ami-manifest-path'),
'local-ipv4': get_metadata('local-ipv4'),
'ami-id': get_metadata('ami-id'),
'instance-type': get_metadata('instance-type'),
}
return {
'instance': {
'app': self.app_name,
'instanceId': self.get_instance_id(),
'hostName': self.host_name,
'ipAddr': self.vip_address,
'healthCheckUrl': self.app_protocol + self.host_name + ':' + str(self.port) + '/healthcheck',
'statusPageUrl': self.app_protocol + self.host_name + ':' + str(self.port) + '/healthcheck',
'homePageUrl': self.app_protocol + self.host_name + ':' + str(self.port) + '/healthcheck',
'port': {
'$': self.port,
'@enabled': 'true',
},
'vipAddress': self.vip_address,
'dataCenterInfo': {
'@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
'name': 'MyOwn',
},
},
}
    def start(self):
"""
Start registration process
:return:
"""
logger.info('Starting eureka registration')
self.register()
self.heartbeat_task = Thread(target=self._heartbeat)
self.heartbeat_task.daemon = True
self.heartbeat_task.start()
def _heartbeat(self):
while True:
try:
time.sleep(self.heartbeat_interval)
self.renew()
            except Exception:
                logger.exception("Eureka heartbeat failed; will retry after the next interval")
def register(self, initial_status="UP"):
"""
        Register this instance with Eureka.
:param initial_status: status string
:return:
"""
instance_data = self.get_instance_data()
instance_data['instance']['status'] = initial_status
success = False
for eureka_url in self.eureka_urls:
try:
self.requests.POST(
url=urljoin(eureka_url, self.service_path + "/%s" % self.app_name), body=instance_data,
headers={'Content-Type': 'application/json'})
success = True
except ApiException as ex:
success = False
if not success:
raise EurekaRegistrationFailedException("Did not receive correct reply from any instances")
def renew(self):
"""
Send application instance heartbeat
"""
        logger.info('Updating registration status')
success = False
for eureka_url in self.eureka_urls:
try:
self.requests.PUT(url=urljoin(eureka_url, self.service_path + '/%s/%s' % (
self.app_name,
self.get_instance_id()
)))
success = True
except ApiException as ex:
if ex.status == 404:
self.register()
return
else:
success = False
if not success:
raise EurekaUpdateFailedException("Did not receive correct reply from any instances")
# a generic get request, since most of the get requests for discovery will take a similar form
def _get_from_any_instance(self, endpoint):
for eureka_url in self.eureka_urls:
try:
r = self.requests.GET(urljoin(eureka_url, endpoint), headers={'accept': 'application/json'})
r.raise_for_status()
return json.loads(r.content)
            except Exception:
pass
raise EurekaGetFailedException("Failed to GET %s from all instances" % endpoint)
def get_apps(self):
return self._get_from_any_instance("apps")
def get_app(self, app_id):
return self._get_from_any_instance("apps/%s" % app_id)
def get_vip(self, vip_address):
return self._get_from_any_instance("vips/%s" % vip_address)
def get_svip(self, vip_address):
return self._get_from_any_instance("svips/%s" % vip_address)
def get_instance(self, instance_id):
return self._get_from_any_instance("instances/%s" % instance_id)
def get_app_instance(self, app_id, instance_id):
return self._get_from_any_instance("apps/%s/%s" % (app_id, instance_id))
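# A minimal usage sketch, not part of the client itself: register a service with a
# locally running Eureka server and keep the heartbeat thread alive. The app name,
# URL and port below are illustrative assumptions, not values taken from this module.
def _example_register_and_heartbeat():
    client = EurekaClient(
        name='example-service',               # assumed application name
        eureka_url='http://localhost:8761/',  # assumed Eureka server URL
        host_name='localhost',
        port=8080,
        use_dns=False)
    client.start()                            # registers and spawns the daemon heartbeat thread
    while True:
        time.sleep(60)                        # keep the process alive for the heartbeat thread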
|
alpaca_discovery.py
|
#!/usr/bin/env python3
"""Provides the ability to find IPv4 ASCOM Alpaca servers on the local networks.
Uses the netifaces library to find the broadcast IPs that can be used for
sending the UDP discovery message.
Note that I've chosen to omit support for IPv6 because I don't need it for
testing Tiny Alpaca Server.
TODO(jamessynge): Figure out if I can NOT use netifaces to get the network
interface information.
"""
import argparse
import dataclasses
import json
import pprint
import queue
import socket
import sys
import threading
import time
from typing import Callable, Dict, Generator, List, Optional
import install_advice
try:
import netifaces # pylint: disable=g-import-not-at-top
except ImportError:
install_advice.install_advice('netifaces')
# build_cleaner doesn't find imports that aren't at the top level, so we repeat
# the import here.
import netifaces # pylint: disable=g-import-not-at-top,g-bad-import-order
# Daniel VanNoord selected UDP port 32227 for the Alpaca Discovery protocol, but
# that port is not officially assigned to the protocol, so it may change some
# day. An Alpaca Server can confirm that the packet is intended for it by
# looking for the string 'alpacadiscovery1' as the entire body of the UDP packet
# it receives at that port, and an Alpaca Discovery client can confirm that a
# response is from an Alpaca Server by checking that the response body can be
# parsed as JSON and has a property 'alpacaport' whose value is an integer that
# can be a TCP port number (e.g. 1 to 65535).
ALPACA_DISCOVERY_PORT = 32227
DISCOVERY_REQUEST_BODY = 'alpacadiscovery1'
ALPACA_SERVER_PORT_PROPERTY = 'alpacaport'
DEFAULT_DISCOVERY_SECS = 2.0
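# A minimal sketch (not used by the client code below) of how an Alpaca server could
# validate a received discovery packet and build its reply, following the protocol
# described in the comment above. The TCP port 11111 is an illustrative assumption.
def make_discovery_reply(request_body: bytes,
                         alpaca_tcp_port: int = 11111) -> Optional[bytes]:
  """Return the JSON reply bytes if request_body is a discovery request, else None."""
  if request_body != DISCOVERY_REQUEST_BODY.encode('ascii'):
    return None
  return json.dumps({ALPACA_SERVER_PORT_PROPERTY: alpaca_tcp_port}).encode('ascii')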
@dataclasses.dataclass
class DiscoverySource:
"""Addresses from and to which to send ASCOM Alpaca discovery packets."""
interface_name: str
src_address: str
dst_address: str
dst_is_broadcast: bool
def get_name(self) -> str:
return f'{self.dst_address} via {self.interface_name}'
def create_bound_udp_socket(self) -> socket.socket:
"""Create UDP port for sending to dst_address."""
# --------------------------------
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
if self.dst_is_broadcast:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
try:
sock.bind((self.src_address, 0)) # listen to any on a temporary port
except:
print(f'failure to bind {self}', file=sys.stderr, flush=True)
sock.close()
raise
# sock.setblocking(0)
return sock
def send_discovery_packet(self, sock: socket.socket, verbose=False):
"""Writes an Alpaca Discovery UDP Packet to sock."""
if verbose:
action = 'Broadcasting' if self.dst_is_broadcast else 'Sending'
# Appending a \n explicitly because multiple threads will output strings,
# and I've found that the default end value is output as a separate
# operation that can come after the "Collected..." string from another
# thread.
print(
f'{action} from {self.src_address} to {self.dst_address}\n',
flush=True,
end='')
sock.sendto(
DISCOVERY_REQUEST_BODY.encode(encoding='ascii'),
(self.dst_address, ALPACA_DISCOVERY_PORT))
@dataclasses.dataclass
class DiscoveryResponse:
"""Represents an Alpaca Discovery Protocol Response from an Alpaca Server."""
source: DiscoverySource
data_bytes: bytes
recvfrom_addr: str
recvfrom_port: int # The discovery port.
def get_alpaca_server_addr(self) -> str:
return f'{self.recvfrom_addr}:{self.get_port()}'
def get_port(self) -> int:
data_str = str(self.data_bytes, 'ascii')
jsondata = json.loads(data_str)
return int(jsondata[ALPACA_SERVER_PORT_PROPERTY])
def generate_addresses(address_family) -> Generator[Dict[str, str], None, None]:
"""docstring."""
# netifaces.interfaces returns a list of interface names.
for name in netifaces.interfaces():
# netifaces.ifaddresses(interface_name) returns a dictionary mapping an
# address family (e.g. netifaces.AF_INET for IPv4) to a list of address
# groups (dictionaries) provided by that interface. Note that a single
# interface may have multiple addresses, even for a single address family.
for addr_family, addr_groups in netifaces.ifaddresses(name).items():
if address_family == addr_family:
for address_group in addr_groups:
if 'addr' not in address_group:
            # Skip address entries that do not include an 'addr' value.
continue
result = dict(interface_name=name)
result.update(address_group)
yield result
def generate_discovery_sources() -> Generator[DiscoverySource, None, None]:
"""docstring."""
for address_group in generate_addresses(netifaces.AF_INET):
if 'broadcast' in address_group:
yield DiscoverySource(
interface_name=address_group['interface_name'],
src_address=address_group['addr'],
dst_address=address_group['broadcast'],
dst_is_broadcast=True)
elif 'peer' in address_group:
yield DiscoverySource(
interface_name=address_group['interface_name'],
src_address=address_group['addr'],
dst_address=address_group['peer'],
dst_is_broadcast=False)
def receiver(sock: socket.socket, max_discovery_secs: float,
response_queue: queue.Queue) -> None:
sock.settimeout(max_discovery_secs)
while True:
try:
data_bytes, addr = sock.recvfrom(1024)
except socket.timeout:
return
# For AF_INET sockets, addr is a pair, (host, port).
response_queue.put((data_bytes, addr[0], addr[1]))
class Discoverer(object):
"""Performs Alpaca Discovery for a single DiscoverySource."""
def __init__(self, source: DiscoverySource):
self.source = source
def perform_discovery(self,
response_queue: queue.Queue,
max_discovery_secs: float = DEFAULT_DISCOVERY_SECS,
verbose=False) -> threading.Thread:
"""Returns a thread which writes DiscoveryResponses to response_queue."""
def worker():
for r in self.generate_responses(
max_discovery_secs=max_discovery_secs, verbose=verbose):
response_queue.put(r)
t = threading.Thread(target=worker, name=self.source.get_name())
t.start()
return t
def generate_responses(
self,
max_discovery_secs: float = DEFAULT_DISCOVERY_SECS,
verbose=False) -> Generator[DiscoveryResponse, None, None]:
"""Yields DiscoveryResponses after sending from the source address."""
sock = self.source.create_bound_udp_socket()
q = queue.Queue(maxsize=1000)
t = threading.Thread(target=receiver, args=(sock, max_discovery_secs, q))
t.start()
iota = max(0.001, min(0.05, max_discovery_secs / 100.0))
time.sleep(iota)
self.source.send_discovery_packet(sock, verbose=verbose)
count = 0
while t.is_alive():
try:
data_bytes, addr, port = q.get(block=True, timeout=iota)
except queue.Empty:
continue
yield DiscoveryResponse(
source=self.source,
data_bytes=data_bytes,
recvfrom_addr=addr,
recvfrom_port=port)
count += 1
t.join()
while not q.empty():
data_bytes, addr, port = q.get(block=False)
yield DiscoveryResponse(
source=self.source,
data_bytes=data_bytes,
recvfrom_addr=addr,
recvfrom_port=port)
if verbose:
# Appending a \n explicitly because multiple threads will output strings,
# and I've found that the default end value is output as a separate
# operation that can come after the "Collected..." string from another
# thread.
print(
f'Collected {count} responses for source {self.source.get_name()}\n',
flush=True,
end='')
def perform_discovery(discovery_response_handler: Callable[[DiscoveryResponse],
None],
sources: Optional[List[DiscoverySource]] = None,
max_discovery_secs: float = DEFAULT_DISCOVERY_SECS,
verbose=False) -> None:
"""Sends a discovery packet from all sources, passes results to handler."""
if sources is None:
if verbose:
print('Finding network interfaces to use for discovery.')
sources = list(generate_discovery_sources())
discoverers = [Discoverer(source) for source in sources]
q = queue.Queue(maxsize=1000)
threads = []
for d in discoverers:
threads.append(
d.perform_discovery(
response_queue=q,
max_discovery_secs=max_discovery_secs,
verbose=verbose))
start_secs = time.time()
while threads:
if not threads[0].is_alive():
t = threads.pop(0)
if verbose:
print('Thread %r is done' % t.name, flush=True)
t.join()
while not q.empty():
dr = q.get(block=False)
discovery_response_handler(dr)
time.sleep(0.01)
end_secs = time.time()
if verbose:
elapsed_secs = end_secs - start_secs
print(f'perform_discovery: elapsed_secs={elapsed_secs}')
if elapsed_secs < max_discovery_secs:
print(
f'perform_discovery: ended {max_discovery_secs - elapsed_secs}s early'
)
def find_first_server(max_discovery_secs: float = DEFAULT_DISCOVERY_SECS,
verbose=False) -> Optional[DiscoveryResponse]:
"""Return the first server to respond within max_discovery_secs, else None."""
result = None
def discovery_response_handler(dr: DiscoveryResponse) -> None:
nonlocal result
if result is not None:
return
result = dr
perform_discovery(
discovery_response_handler,
max_discovery_secs=max_discovery_secs,
verbose=verbose)
return result
def make_discovery_parser() -> argparse.ArgumentParser:
"""Returns a parser for discovery operations."""
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument(
'--max_discovery_secs',
metavar='SECONDS',
type=float,
default=DEFAULT_DISCOVERY_SECS,
help='Time to wait (seconds) for Alpaca Discovery responses.')
parser.add_argument(
'--verbose',
'-v',
action='store_true',
help='Print more messages about what the program is doing.')
return parser
def main():
parser = argparse.ArgumentParser(
description='Find Alpaca servers.', parents=[make_discovery_parser()])
cli_args = parser.parse_args()
cli_kwargs = vars(cli_args)
def discovery_response_handler(dr: DiscoveryResponse) -> None:
pprint.pprint(dr)
perform_discovery(discovery_response_handler, **cli_kwargs)
if __name__ == '__main__':
main()
|
plugin.py
|
import asyncio
import logging
import locale
import threading
import zmq
import zmq.asyncio
import tempfile
import argparse
import sys
__author__ = "tigge"
plugin_argparser = argparse.ArgumentParser(description="Start a platinumshrimp plugin")
plugin_argparser.add_argument(
"--socket_path",
type=str,
default=tempfile.gettempdir(),
help="The path to the location where platinumshrimp stores the IPC socket",
dest="socket_path",
)
class Plugin:
def __init__(self, name):
locale.setlocale(locale.LC_ALL, "")
logging.basicConfig(filename=name + ".log", level=logging.DEBUG)
context = zmq.asyncio.Context()
args, _ = plugin_argparser.parse_known_args()
self.socket_base_path = args.socket_path
self._socket_bot = context.socket(zmq.PAIR)
self._socket_bot.connect(
"ipc://" + self.socket_base_path + "/ipc_plugin_" + name
)
self._socket_workers = context.socket(zmq.PULL)
self._socket_workers.bind(
"ipc://" + self.socket_base_path + "/ipc_plugin_" + name + "_workers"
)
self._poller = zmq.asyncio.Poller()
self._poller.register(self._socket_bot, zmq.POLLIN)
self._poller.register(self._socket_workers, zmq.POLLIN)
self.name = name
logging.info(
"Plugin.init %s, %s",
threading.current_thread().ident,
"ipc://ipc_plugin_" + name,
)
self.threading_data = threading.local()
self.threading_data.call_socket = self._socket_bot
    def _receive(self, data):
func_name = data["function"]
if func_name.startswith("on_") or func_name in ["started", "update"]:
try:
func = getattr(self, func_name)
except AttributeError as e:
                pass  # Not all plugins implement every function, so silently ignore missing handlers.
else:
func(*data["params"])
else:
logging.warning(
"Unsupported call to plugin function with name " + func_name
)
def _call(self, function, *args):
logging.info("Plugin.call %s", self.threading_data.__dict__)
socket = self.threading_data.call_socket
socket.send_json({"function": function, "params": args})
def _thread(self, function, *args, **kwargs):
logging.info("Plugin._thread %r", function)
def starter():
context = zmq.Context()
sock = context.socket(zmq.PUSH)
sock.connect(
"ipc://"
+ self.socket_base_path
+ "/ipc_plugin_"
+ self.name
+ "_workers"
)
self.threading_data.call_socket = sock
function(*args, **kwargs)
thread = threading.Thread(target=starter)
thread.start()
async def _run(self):
while True:
socks = dict(await self._poller.poll())
if self._socket_bot in socks:
                self._receive(await self._socket_bot.recv_json())
if self._socket_workers in socks:
self._socket_bot.send(await self._socket_workers.recv())
@classmethod
def run(cls):
loop = asyncio.get_event_loop()
instance = cls()
logging.info("Plugin.run %s, %s", cls, instance)
loop.create_task(instance._run())
try:
loop.run_forever()
except:
logging.exception("Plugin.run aborted")
loop.close()
sys.exit(1)
def __getattr__(self, name):
# List covers available commands to be sent to the IRC server
if name in [
"action",
"admin",
"cap",
"ctcp",
"ctcp_reply",
"globops",
"info",
"invite",
"ison",
"join",
"kick",
"links",
"list",
"lusers",
"mode",
"motd",
"names",
"nick",
"notice",
"oper",
"part",
"pass_",
"ping",
"pong",
"privmsg",
"quit",
"squit",
"stats",
"time",
"topic",
"trace",
"user",
"userhost",
"users",
"version",
"wallops",
"who",
"whois",
"whowas",
"_save_settings",
]:
            def call(*args, **kwargs):
                # Keyword arguments are accepted but not forwarded over the IPC socket.
                self._call(name, *args)
return call
else:
raise AttributeError(
"Unsupported internal function call to function: " + name
)
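# A minimal usage sketch, not shipped with platinumshrimp: a plugin subclass that
# echoes channel messages. The handler names follow the dispatch rules in _receive()
# above; the exact callback signatures are assumptions made for illustration only.
class EchoPlugin(Plugin):
    def __init__(self):
        super().__init__("echo")

    def started(self, settings):
        # Invoked by the bot once the plugin process is up (signature assumed).
        logging.info("EchoPlugin started with settings: %s", settings)

    def on_privmsg(self, server, user, channel, message):
        # Echo every channel message back via the IRC command proxy in __getattr__.
        self.privmsg(channel, "echo: " + message)


if __name__ == "__main__":
    EchoPlugin.run()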
|
regression.py
|
#!/usr/bin/env python3
from argparse import ArgumentParser
import sys
import os
import subprocess
import re
import glob
import threading
import time
DESCRIPTION = """Regressor is a tool to run regression tests in a CI env."""
class PrintDotsThread(object):
"""Prints a dot every "interval" (default is 300) seconds"""
def __init__(self, interval=300):
self.interval = interval
thread = threading.Thread(target=self.run, args=())
thread.daemon = True
thread.start()
def run(self):
""" Runs until the main Python thread exits. """
## Print a newline at the very beginning.
print("")
while True:
# Print dot
print(".")
time.sleep(self.interval)
class regressor():
_re_sanitizer_log = re.compile(r"""ERROR: (libFuzzer|UndefinedBehaviorSanitizer)""")
def __init__(self, description, args):
self._description = description
self._args = self.parseCmdLine(description, args)
self._repo_root = os.path.dirname(sys.path[0])
self._fuzzer_path = os.path.join(self._repo_root,
"build/test/tools/ossfuzz")
self._logpath = os.path.join(self._repo_root, "test_results")
def parseCmdLine(self, description, args):
argParser = ArgumentParser(description)
argParser.add_argument('-o', '--out-dir', required=True, type=str,
help="""Directory where test results will be written""")
return argParser.parse_args(args)
@staticmethod
def run_cmd(command, logfile=None, env=None):
"""
Args:
command (str): command to run
logfile (str): log file name
env (dict): dictionary holding key-value pairs for bash environment
variables
Returns:
int: The exit status of the command. Exit status codes are:
0 -> Success
1-255 -> Failure
"""
if not logfile:
logfile = os.devnull
if not env:
env = os.environ.copy()
logfh = open(logfile, 'w')
proc = subprocess.Popen(command, shell=True, executable='/bin/bash',
env=env, stdout=logfh,
stderr=subprocess.STDOUT)
ret = proc.wait()
logfh.close()
return ret
def process_log(self, logfile):
"""
Args:
logfile (str): log file name
Returns:
bool: Test status.
True -> Success
False -> Failure
"""
## Log may contain non ASCII characters, so we simply stringify them
## since they don't matter for regular expression matching
        with open(logfile, 'rb') as f:
            rawtext = str(f.read())
return not re.search(self._re_sanitizer_log, rawtext)
def run(self):
"""
Returns:
bool: Test status.
True -> All tests succeeded
False -> At least one test failed
"""
testStatus = []
for fuzzer in glob.iglob("{}/*_ossfuzz".format(self._fuzzer_path)):
basename = os.path.basename(fuzzer)
logfile = os.path.join(self._logpath, "{}.log".format(basename))
corpus_dir = "/tmp/solidity-fuzzing-corpus/{0}_seed_corpus" \
.format(basename)
cmd = "find {0} -type f | xargs -n1 sh -c '{1} $0 || exit 255'".format(corpus_dir, fuzzer)
self.run_cmd(cmd, logfile=logfile)
ret = self.process_log(logfile)
if not ret:
print(
"\t[-] libFuzzer reported failure for {0}. "
"Failure logged to test_results".format(
basename))
testStatus.append(False)
else:
print("\t[+] {0} passed regression tests.".format(basename))
testStatus.append(True)
return all(testStatus)
if __name__ == '__main__':
dotprinter = PrintDotsThread()
tool = regressor(DESCRIPTION, sys.argv[1:])
sys.exit(not tool.run())
|
client.py
|
from base64 import b64encode
from engineio.json import JSONDecodeError
import logging
import queue
import signal
import ssl
import threading
import time
import urllib
try:
import requests
except ImportError: # pragma: no cover
requests = None
try:
import websocket
except ImportError: # pragma: no cover
websocket = None
from . import exceptions
from . import packet
from . import payload
default_logger = logging.getLogger('engineio.client')
connected_clients = []
def signal_handler(sig, frame):
"""SIGINT handler.
Disconnect all active clients and then invoke the original signal handler.
"""
for client in connected_clients[:]:
if not client.is_asyncio_based():
client.disconnect()
if callable(original_signal_handler):
return original_signal_handler(sig, frame)
else: # pragma: no cover
# Handle case where no original SIGINT handler was present.
return signal.default_int_handler(sig, frame)
original_signal_handler = None
class Client(object):
"""An Engine.IO client.
This class implements a fully compliant Engine.IO web client with support
for websocket and long-polling transports.
:param logger: To enable logging set to ``True`` or pass a logger object to
use. To disable logging set to ``False``. The default is
``False``. Note that fatal errors are logged even when
``logger`` is ``False``.
:param json: An alternative json module to use for encoding and decoding
packets. Custom json modules must have ``dumps`` and ``loads``
functions that are compatible with the standard library
versions.
:param request_timeout: A timeout in seconds for requests. The default is
5 seconds.
:param http_session: an initialized ``requests.Session`` object to be used
when sending requests to the server. Use it if you
need to add special client options such as proxy
servers, SSL certificates, custom CA bundle, etc.
:param ssl_verify: ``True`` to verify SSL certificates, or ``False`` to
skip SSL certificate verification, allowing
connections to servers with self signed certificates.
The default is ``True``.
"""
event_names = ['connect', 'disconnect', 'message', 'ping', 'pong']
def __init__(self,
logger=False,
json=None,
request_timeout=5,
http_session=None,
ssl_verify=True):
global original_signal_handler
if original_signal_handler is None and \
threading.current_thread() == threading.main_thread():
original_signal_handler = signal.signal(signal.SIGINT,
signal_handler)
self.handlers = {}
self.base_url = None
self.transports = None
self.current_transport = None
self.sid = None
self.upgrades = None
self.ping_interval = None
self.ping_timeout = None
self.http = http_session
self.ws = None
self.read_loop_task = None
self.write_loop_task = None
self.queue = None
self.state = 'disconnected'
self.ssl_verify = ssl_verify
if json is not None:
packet.Packet.json = json
if not isinstance(logger, bool):
self.logger = logger
else:
self.logger = default_logger
if self.logger.level == logging.NOTSET:
if logger:
self.logger.setLevel(logging.INFO)
else:
self.logger.setLevel(logging.ERROR)
self.logger.addHandler(logging.StreamHandler())
self.request_timeout = request_timeout
def is_asyncio_based(self):
return False
def on(self, event, handler=None):
"""Register an event handler.
:param event: The event name. Can be ``'connect'``, ``'message'`` or
``'disconnect'``.
:param handler: The function that should be invoked to handle the
event. When this parameter is not given, the method
acts as a decorator for the handler function.
Example usage::
# as a decorator:
@eio.on('connect')
def connect_handler():
print('Connection request')
# as a method:
def message_handler(msg):
print('Received message: ', msg)
eio.send('response')
eio.on('message', message_handler)
"""
if event not in self.event_names:
raise ValueError('Invalid event')
def set_handler(handler):
self.handlers[event] = handler
return handler
if handler is None:
return set_handler
set_handler(handler)
def connect(self, url, headers=None, transports=None,
engineio_path='engine.io'):
"""Connect to an Engine.IO server.
:param url: The URL of the Engine.IO server. It can include custom
query string parameters if required by the server.
:param headers: A dictionary with custom headers to send with the
connection request.
:param transports: The list of allowed transports. Valid transports
are ``'polling'`` and ``'websocket'``. If not
given, the polling transport is connected first,
then an upgrade to websocket is attempted.
:param engineio_path: The endpoint where the Engine.IO server is
installed. The default value is appropriate for
most cases.
Example usage::
eio = engineio.Client()
eio.connect('http://localhost:5000')
"""
if self.state != 'disconnected':
raise ValueError('Client is not in a disconnected state')
valid_transports = ['polling', 'websocket']
if transports is not None:
if isinstance(transports, str):
transports = [transports]
transports = [transport for transport in transports
if transport in valid_transports]
if not transports:
raise ValueError('No valid transports provided')
self.transports = transports or valid_transports
self.queue = self.create_queue()
return getattr(self, '_connect_' + self.transports[0])(
url, headers or {}, engineio_path)
def wait(self):
"""Wait until the connection with the server ends.
Client applications can use this function to block the main thread
during the life of the connection.
"""
if self.read_loop_task:
self.read_loop_task.join()
def send(self, data):
"""Send a message to a client.
:param data: The data to send to the client. Data can be of type
``str``, ``bytes``, ``list`` or ``dict``. If a ``list``
or ``dict``, the data will be serialized as JSON.
"""
self._send_packet(packet.Packet(packet.MESSAGE, data=data))
def disconnect(self, abort=False):
"""Disconnect from the server.
:param abort: If set to ``True``, do not wait for background tasks
associated with the connection to end.
"""
if self.state == 'connected':
self._send_packet(packet.Packet(packet.CLOSE))
self.queue.put(None)
self.state = 'disconnecting'
self._trigger_event('disconnect', run_async=False)
if self.current_transport == 'websocket':
self.ws.close()
if not abort:
self.read_loop_task.join()
self.state = 'disconnected'
try:
connected_clients.remove(self)
except ValueError: # pragma: no cover
pass
self._reset()
def transport(self):
"""Return the name of the transport currently in use.
The possible values returned by this function are ``'polling'`` and
``'websocket'``.
"""
return self.current_transport
def start_background_task(self, target, *args, **kwargs):
"""Start a background task.
This is a utility function that applications can use to start a
background task.
:param target: the target function to execute.
:param args: arguments to pass to the function.
:param kwargs: keyword arguments to pass to the function.
This function returns an object compatible with the `Thread` class in
the Python standard library. The `start()` method on this object is
already called by this function.
"""
th = threading.Thread(target=target, args=args, kwargs=kwargs)
th.start()
return th
def sleep(self, seconds=0):
"""Sleep for the requested amount of time."""
return time.sleep(seconds)
def create_queue(self, *args, **kwargs):
"""Create a queue object."""
q = queue.Queue(*args, **kwargs)
q.Empty = queue.Empty
return q
def create_event(self, *args, **kwargs):
"""Create an event object."""
return threading.Event(*args, **kwargs)
def _reset(self):
self.state = 'disconnected'
self.sid = None
def _connect_polling(self, url, headers, engineio_path):
"""Establish a long-polling connection to the Engine.IO server."""
if requests is None: # pragma: no cover
# not installed
self.logger.error('requests package is not installed -- cannot '
'send HTTP requests!')
return
self.base_url = self._get_engineio_url(url, engineio_path, 'polling')
self.logger.info('Attempting polling connection to ' + self.base_url)
r = self._send_request(
'GET', self.base_url + self._get_url_timestamp(), headers=headers,
timeout=self.request_timeout)
if r is None:
self._reset()
raise exceptions.ConnectionError(
'Connection refused by the server')
if r.status_code < 200 or r.status_code >= 300:
self._reset()
try:
arg = r.json()
except JSONDecodeError:
arg = None
raise exceptions.ConnectionError(
'Unexpected status code {} in server response'.format(
r.status_code), arg)
try:
p = payload.Payload(encoded_payload=r.content.decode('utf-8'))
except ValueError:
raise exceptions.ConnectionError(
'Unexpected response from server') from None
open_packet = p.packets[0]
if open_packet.packet_type != packet.OPEN:
raise exceptions.ConnectionError(
'OPEN packet not returned by server')
self.logger.info(
'Polling connection accepted with ' + str(open_packet.data))
self.sid = open_packet.data['sid']
self.upgrades = open_packet.data['upgrades']
self.ping_interval = int(open_packet.data['pingInterval']) / 1000.0
self.ping_timeout = int(open_packet.data['pingTimeout']) / 1000.0
self.current_transport = 'polling'
self.base_url += '&sid=' + self.sid
self.state = 'connected'
connected_clients.append(self)
self._trigger_event('connect', run_async=False)
for pkt in p.packets[1:]:
self._receive_packet(pkt)
if 'websocket' in self.upgrades and 'websocket' in self.transports:
# attempt to upgrade to websocket
if self._connect_websocket(url, headers, engineio_path):
# upgrade to websocket succeeded, we're done here
return
# start background tasks associated with this client
self.write_loop_task = self.start_background_task(self._write_loop)
self.read_loop_task = self.start_background_task(
self._read_loop_polling)
def _connect_websocket(self, url, headers, engineio_path):
"""Establish or upgrade to a WebSocket connection with the server."""
if websocket is None: # pragma: no cover
# not installed
self.logger.error('websocket-client package not installed, only '
'polling transport is available')
return False
websocket_url = self._get_engineio_url(url, engineio_path, 'websocket')
if self.sid:
self.logger.info(
'Attempting WebSocket upgrade to ' + websocket_url)
upgrade = True
websocket_url += '&sid=' + self.sid
else:
upgrade = False
self.base_url = websocket_url
self.logger.info(
'Attempting WebSocket connection to ' + websocket_url)
# get cookies and other settings from the long-polling connection
# so that they are preserved when connecting to the WebSocket route
cookies = None
extra_options = {}
if self.http:
# cookies
cookies = '; '.join(["{}={}".format(cookie.name, cookie.value)
for cookie in self.http.cookies])
for header, value in headers.items():
if header.lower() == 'cookie':
if cookies:
cookies += '; '
cookies += value
del headers[header]
break
# auth
if 'Authorization' not in headers and self.http.auth is not None:
if not isinstance(self.http.auth, tuple): # pragma: no cover
raise ValueError('Only basic authentication is supported')
basic_auth = '{}:{}'.format(
self.http.auth[0], self.http.auth[1]).encode('utf-8')
basic_auth = b64encode(basic_auth).decode('utf-8')
headers['Authorization'] = 'Basic ' + basic_auth
# cert
# this can be given as ('certfile', 'keyfile') or just 'certfile'
if isinstance(self.http.cert, tuple):
extra_options['sslopt'] = {
'certfile': self.http.cert[0],
'keyfile': self.http.cert[1]}
elif self.http.cert:
extra_options['sslopt'] = {'certfile': self.http.cert}
# proxies
if self.http.proxies:
proxy_url = None
if websocket_url.startswith('ws://'):
proxy_url = self.http.proxies.get(
'ws', self.http.proxies.get('http'))
else: # wss://
proxy_url = self.http.proxies.get(
'wss', self.http.proxies.get('https'))
if proxy_url:
parsed_url = urllib.parse.urlparse(
proxy_url if '://' in proxy_url
else 'scheme://' + proxy_url)
extra_options['http_proxy_host'] = parsed_url.hostname
extra_options['http_proxy_port'] = parsed_url.port
extra_options['http_proxy_auth'] = (
(parsed_url.username, parsed_url.password)
if parsed_url.username or parsed_url.password
else None)
# verify
if isinstance(self.http.verify, str):
if 'sslopt' in extra_options:
extra_options['sslopt']['ca_certs'] = self.http.verify
else:
extra_options['sslopt'] = {'ca_certs': self.http.verify}
elif not self.http.verify:
self.ssl_verify = False
if not self.ssl_verify:
extra_options['sslopt'] = {"cert_reqs": ssl.CERT_NONE}
try:
ws = websocket.create_connection(
websocket_url + self._get_url_timestamp(), header=headers,
cookie=cookies, enable_multithread=True, **extra_options)
except (ConnectionError, IOError, websocket.WebSocketException):
if upgrade:
self.logger.warning(
'WebSocket upgrade failed: connection error')
return False
else:
raise exceptions.ConnectionError('Connection error')
if upgrade:
p = packet.Packet(packet.PING, data='probe').encode()
try:
ws.send(p)
except Exception as e: # pragma: no cover
self.logger.warning(
'WebSocket upgrade failed: unexpected send exception: %s',
str(e))
return False
try:
p = ws.recv()
except Exception as e: # pragma: no cover
self.logger.warning(
'WebSocket upgrade failed: unexpected recv exception: %s',
str(e))
return False
pkt = packet.Packet(encoded_packet=p)
if pkt.packet_type != packet.PONG or pkt.data != 'probe':
self.logger.warning(
'WebSocket upgrade failed: no PONG packet')
return False
p = packet.Packet(packet.UPGRADE).encode()
try:
ws.send(p)
except Exception as e: # pragma: no cover
self.logger.warning(
'WebSocket upgrade failed: unexpected send exception: %s',
str(e))
return False
self.current_transport = 'websocket'
self.logger.info('WebSocket upgrade was successful')
else:
try:
p = ws.recv()
except Exception as e: # pragma: no cover
raise exceptions.ConnectionError(
'Unexpected recv exception: ' + str(e))
open_packet = packet.Packet(encoded_packet=p)
if open_packet.packet_type != packet.OPEN:
raise exceptions.ConnectionError('no OPEN packet')
self.logger.info(
'WebSocket connection accepted with ' + str(open_packet.data))
self.sid = open_packet.data['sid']
self.upgrades = open_packet.data['upgrades']
self.ping_interval = int(open_packet.data['pingInterval']) / 1000.0
self.ping_timeout = int(open_packet.data['pingTimeout']) / 1000.0
self.current_transport = 'websocket'
self.state = 'connected'
connected_clients.append(self)
self._trigger_event('connect', run_async=False)
self.ws = ws
self.ws.settimeout(self.ping_interval + self.ping_timeout)
# start background tasks associated with this client
self.write_loop_task = self.start_background_task(self._write_loop)
self.read_loop_task = self.start_background_task(
self._read_loop_websocket)
return True
def _receive_packet(self, pkt):
"""Handle incoming packets from the server."""
packet_name = packet.packet_names[pkt.packet_type] \
if pkt.packet_type < len(packet.packet_names) else 'UNKNOWN'
self.logger.info(
'Received packet %s data %s', packet_name,
pkt.data if not isinstance(pkt.data, bytes) else '<binary>')
if pkt.packet_type == packet.MESSAGE:
self._trigger_event('message', pkt.data, run_async=True)
        elif pkt.packet_type == packet.PING:
            self._trigger_event('ping', run_async=True)
            self._send_packet(packet.Packet(packet.PONG, pkt.data))
            self._trigger_event('pong', run_async=True)
elif pkt.packet_type == packet.CLOSE:
self.disconnect(abort=True)
elif pkt.packet_type == packet.NOOP:
pass
else:
self.logger.error('Received unexpected packet of type %s',
pkt.packet_type)
def _send_packet(self, pkt):
"""Queue a packet to be sent to the server."""
if self.state != 'connected':
return
self.queue.put(pkt)
self.logger.info(
'Sending packet %s data %s',
packet.packet_names[pkt.packet_type],
pkt.data if not isinstance(pkt.data, bytes) else '<binary>')
def _send_request(
self, method, url, headers=None, body=None,
timeout=None): # pragma: no cover
if self.http is None:
self.http = requests.Session()
if not self.ssl_verify:
self.http.verify = False
try:
return self.http.request(method, url, headers=headers, data=body,
timeout=timeout)
except requests.exceptions.RequestException as exc:
self.logger.info('HTTP %s request to %s failed with error %s.',
method, url, exc)
def _trigger_event(self, event, *args, **kwargs):
"""Invoke an event handler."""
run_async = kwargs.pop('run_async', False)
if event in self.handlers:
if run_async:
return self.start_background_task(self.handlers[event], *args)
else:
try:
return self.handlers[event](*args)
except:
self.logger.exception(event + ' handler error')
def _get_engineio_url(self, url, engineio_path, transport):
"""Generate the Engine.IO connection URL."""
engineio_path = engineio_path.strip('/')
parsed_url = urllib.parse.urlparse(url)
if transport == 'polling':
scheme = 'http'
elif transport == 'websocket':
scheme = 'ws'
else: # pragma: no cover
raise ValueError('invalid transport')
if parsed_url.scheme in ['https', 'wss']:
scheme += 's'
return ('{scheme}://{netloc}/{path}/?{query}'
'{sep}transport={transport}&EIO=4').format(
scheme=scheme, netloc=parsed_url.netloc,
path=engineio_path, query=parsed_url.query,
sep='&' if parsed_url.query else '',
transport=transport)
def _get_url_timestamp(self):
"""Generate the Engine.IO query string timestamp."""
return '&t=' + str(time.time())
self._trigger_event("ping", run_async=True)
def _read_loop_polling(self):
"""Read packets by polling the Engine.IO server."""
while self.state == 'connected':
self.logger.info(
'Sending polling GET request to ' + self.base_url)
r = self._send_request(
'GET', self.base_url + self._get_url_timestamp(),
timeout=max(self.ping_interval, self.ping_timeout) + 5)
if r is None:
self.logger.warning(
'Connection refused by the server, aborting')
self.queue.put(None)
break
if r.status_code < 200 or r.status_code >= 300:
self.logger.warning('Unexpected status code %s in server '
'response, aborting', r.status_code)
self.queue.put(None)
break
try:
p = payload.Payload(encoded_payload=r.content.decode('utf-8'))
except ValueError:
self.logger.warning(
'Unexpected packet from server, aborting')
self.queue.put(None)
break
for pkt in p.packets:
self._receive_packet(pkt)
self.logger.info('Waiting for write loop task to end')
self.write_loop_task.join()
if self.state == 'connected':
self._trigger_event('disconnect', run_async=False)
try:
connected_clients.remove(self)
except ValueError: # pragma: no cover
pass
self._reset()
self.logger.info('Exiting read loop task')
def _read_loop_websocket(self):
"""Read packets from the Engine.IO WebSocket connection."""
while self.state == 'connected':
p = None
try:
p = self.ws.recv()
except websocket.WebSocketTimeoutException:
self.logger.warning(
'Server has stopped communicating, aborting')
self.queue.put(None)
break
except websocket.WebSocketConnectionClosedException:
self.logger.warning(
'WebSocket connection was closed, aborting')
self.queue.put(None)
break
except Exception as e:
self.logger.info(
'Unexpected error receiving packet: "%s", aborting',
str(e))
self.queue.put(None)
break
try:
pkt = packet.Packet(encoded_packet=p)
except Exception as e: # pragma: no cover
self.logger.info(
'Unexpected error decoding packet: "%s", aborting', str(e))
self.queue.put(None)
break
self._receive_packet(pkt)
self.logger.info('Waiting for write loop task to end')
self.write_loop_task.join()
if self.state == 'connected':
self._trigger_event('disconnect', run_async=False)
try:
connected_clients.remove(self)
except ValueError: # pragma: no cover
pass
self._reset()
self.logger.info('Exiting read loop task')
def _write_loop(self):
"""This background task sends packages to the server as they are
pushed to the send queue.
"""
while self.state == 'connected':
# to simplify the timeout handling, use the maximum of the
# ping interval and ping timeout as timeout, with an extra 5
# seconds grace period
timeout = max(self.ping_interval, self.ping_timeout) + 5
packets = None
try:
packets = [self.queue.get(timeout=timeout)]
except self.queue.Empty:
self.logger.error('packet queue is empty, aborting')
break
if packets == [None]:
self.queue.task_done()
packets = []
else:
while True:
try:
packets.append(self.queue.get(block=False))
except self.queue.Empty:
break
if packets[-1] is None:
packets = packets[:-1]
self.queue.task_done()
break
if not packets:
# empty packet list returned -> connection closed
break
if self.current_transport == 'polling':
p = payload.Payload(packets=packets)
r = self._send_request(
'POST', self.base_url, body=p.encode(),
headers={'Content-Type': 'application/octet-stream'},
timeout=self.request_timeout)
for pkt in packets:
self.queue.task_done()
if r is None:
self.logger.warning(
'Connection refused by the server, aborting')
break
if r.status_code < 200 or r.status_code >= 300:
self.logger.warning('Unexpected status code %s in server '
'response, aborting', r.status_code)
self._reset()
break
else:
# websocket
try:
for pkt in packets:
encoded_packet = pkt.encode()
if pkt.binary:
self.ws.send_binary(encoded_packet)
else:
self.ws.send(encoded_packet)
self.queue.task_done()
except (websocket.WebSocketConnectionClosedException,
BrokenPipeError, OSError):
self.logger.warning(
'WebSocket connection was closed, aborting')
break
self.logger.info('Exiting write loop task')
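# A minimal usage sketch, not part of the library: create a Client, register an event
# handler and block until the connection ends. The URL is an assumption; any
# Engine.IO 4 server address would do.
def _example_client_usage():  # pragma: no cover
    eio = Client(logger=True)

    @eio.on('message')
    def on_message(data):
        print('received:', data)
        eio.send('echo: {}'.format(data))

    eio.connect('http://localhost:5000')
    eio.wait()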
|
Multithreading.py
|
import threading
from threading import *
import time
def sqr(n):
for x in n:
time.sleep(1)
print('Remainder after dividing by 2',x%2)
def cube(n):
for x in n:
time.sleep(1)
print('Remainder after dividing by 3',x%3)
n=[1,2,3,4,5,6,7,8]
start=time.time()
t1=Thread(target=sqr,args=(n,))
t2=Thread(target=cube,args=(n,))
t1.start()
time.sleep(1)
t2.start()
t1.join()
t2.join()
end=time.time()
print(end-start)
|
main.py
|
from face_detection import Detector
from face_verification.OneShotFaceVerification import Verifier
from data_source import CCTV
import cv2
from utils import *
import os
from dotenv import load_dotenv
from menu_pages import *
import json
from threading import Thread
import time
from multiprocessing import Process
detector = Detector()
verifier = Verifier('instagram.json')
def view_camera(camera_config):
# cctvs = list()
# for camera_config in camera_configs:
ip_camera_url = f"{camera_config['protocol']}://{camera_config['ip']}:{camera_config['port']}/{camera_config['path']}"
print(ip_camera_url)
cctv = CCTV(ip_camera_url)
# verifier = Verifier('instagram.json')
for frame in cctv.start_streaming():
for detected_face in detector.detect_faces(frame):
# if real_face.verify(detected_face['face']):
# if True:
# identity = verifier.who_is_it(detector.align(detected_face['face']))
frame = poi(frame, detected_face['box']['start_point'], detected_face['box']['end_point'], text='')
# print(len(frames))
cv2.imshow(f'Camera ({ip_camera_url})', frame)
key = cv2.waitKey(1) & 0xFF
if key == ord("q"):
return
cv2.destroyAllWindows()
with open('config.json') as config_file:
config = json.load(config_file)
while True:
# view_camera(config['cameras'])
camera_processes = list()
for camera in config['cameras']:
camera_processes.append(Process(target=view_camera, args=(camera,)))
camera_processes[-1].start()
for camera_process in camera_processes:
camera_process.join()
command = get_command(first_page_commands)
if command == 'cctv':
command = get_command(cctv_page_commands)
if command == 'add':
new_camera = dict()
new_camera['protocol'] = input('protocol>')
            new_camera['ip'] = input('ip>')
new_camera['port'] = input('port>')
new_camera['path'] = input('path>')
config['cameras'].append(new_camera)
with open('config.json', 'w') as config_file:
json.dump(config, config_file)
elif command == 'view':
pass
|
conftest.py
|
"""Define general test helper attributes and utilities."""
import ast
import contextlib
import functools
import http.server
import importlib
import inspect
import io
import pkgutil
import socketserver
import sys
import tempfile
import threading
import numpy as np
import pytest
from moviepy.video.io.VideoFileClip import VideoFileClip
TMP_DIR = tempfile.gettempdir() # because tempfile.tempdir is sometimes None
# Arbitrary font used in caption testing.
if sys.platform in ("win32", "cygwin"):
FONT = "Arial"
# Even if Windows users install the Liberation fonts, it is called
# LiberationMono on Windows, so it doesn't help.
else:
FONT = (
"Liberation-Mono" # This is available in the fonts-liberation package on Linux
)
@functools.lru_cache(maxsize=None)
def get_video(start_time=0, end_time=1):
return VideoFileClip("media/big_buck_bunny_432_433.webm").subclip(
start_time, end_time
)
@functools.lru_cache(maxsize=None)
def get_stereo_wave(left_freq=440, right_freq=220):
def make_stereo_frame(t):
return np.array(
[np.sin(left_freq * 2 * np.pi * t), np.sin(right_freq * 2 * np.pi * t)]
).T.copy(order="C")
return make_stereo_frame
@functools.lru_cache(maxsize=None)
def get_mono_wave(freq=440):
def make_mono_frame(t):
return np.sin(freq * 2 * np.pi * t)
return make_mono_frame
@contextlib.contextmanager
def get_static_files_server(port=8000):
    """Serve the working directory over HTTP for the lifetime of the context."""
    my_server = socketserver.TCPServer(("", port), http.server.SimpleHTTPRequestHandler)
    thread = threading.Thread(target=my_server.serve_forever, daemon=True)
    thread.start()
    try:
        yield thread
    finally:
        my_server.shutdown()
        my_server.server_close()
@functools.lru_cache(maxsize=None)
def get_moviepy_modules():
"""Get all moviepy module names and if each one is a package."""
response = []
with contextlib.redirect_stdout(io.StringIO()):
moviepy_module = importlib.import_module("moviepy")
modules = pkgutil.walk_packages(
path=moviepy_module.__path__,
prefix=moviepy_module.__name__ + ".",
)
for importer, modname, ispkg in modules:
response.append((modname, ispkg))
return response
def get_functions_with_decorator_defined(code, decorator_name):
"""Get all functions in a code object which have a decorator defined,
along with the arguments of the function and the decorator.
Parameters
----------
code : object
Module or class object from which to retrieve the functions.
decorator_name : str
Name of the decorator defined in the functions to search.
"""
class FunctionsWithDefinedDecoratorExtractor(ast.NodeVisitor):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.functions_with_decorator = []
def generic_visit(self, node):
if isinstance(node, ast.FunctionDef) and node.decorator_list:
for dec in node.decorator_list:
if not isinstance(dec, ast.Call) or dec.func.id != decorator_name:
continue
decorator_argument_names = []
if isinstance(dec.args, ast.List):
for args in dec.args:
decorator_argument_names.extend(
[e.value for e in args.elts]
)
else:
for args in dec.args:
if isinstance(args, (ast.List, ast.Tuple)):
decorator_argument_names.extend(
[e.value for e in args.elts]
)
else:
decorator_argument_names.append(args.value)
function_argument_names = [arg.arg for arg in node.args.args]
for arg in node.args.kwonlyargs:
function_argument_names.append(arg.arg)
self.functions_with_decorator.append(
{
"function_name": node.name,
"function_arguments": function_argument_names,
"decorator_arguments": decorator_argument_names,
}
)
ast.NodeVisitor.generic_visit(self, node)
modtree = ast.parse(inspect.getsource(code))
visitor = FunctionsWithDefinedDecoratorExtractor()
visitor.visit(modtree)
return visitor.functions_with_decorator
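# A hedged usage sketch (not a fixture): write a tiny module to disk, import it and
# extract the functions decorated with a hypothetical decorator named
# "requires_duration". Everything in this helper is illustrative, not moviepy API.
def _example_decorator_extraction():
    import importlib.util
    import os
    import textwrap

    source = textwrap.dedent(
        """
        def requires_duration(*names):
            def wrap(f):
                return f
            return wrap

        @requires_duration("duration")
        def cut(clip, duration):
            return clip
        """
    )
    path = os.path.join(TMP_DIR, "_decorated_example.py")
    with open(path, "w") as module_file:
        module_file.write(source)
    spec = importlib.util.spec_from_file_location("_decorated_example", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Expected result: one entry for "cut" with decorator argument "duration".
    return get_functions_with_decorator_defined(module, "requires_duration")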
@pytest.fixture
def util():
class MoviepyTestUtils:
FONT = FONT
TMP_DIR = TMP_DIR
return MoviepyTestUtils
@pytest.fixture
def video():
return get_video
@pytest.fixture
def stereo_wave():
return get_stereo_wave
@pytest.fixture
def mono_wave():
return get_mono_wave
@pytest.fixture
def static_files_server():
return get_static_files_server
@pytest.fixture
def moviepy_modules():
return get_moviepy_modules
@pytest.fixture
def functions_with_decorator_defined():
return get_functions_with_decorator_defined
|
test_capi.py
|
# Run the _testcapi module tests (tests for the Python/C API): by definition,
# these are all functions _testcapi exports whose name begins with 'test_'.
import os
import pickle
import random
import re
import subprocess
import sys
import sysconfig
import textwrap
import time
import unittest
from test import support
from test.support import MISSING_C_DOCSTRINGS
from test.support.script_helper import assert_python_failure
try:
import _posixsubprocess
except ImportError:
_posixsubprocess = None
try:
import threading
except ImportError:
threading = None
# Skip this test if the _testcapi module isn't available.
_testcapi = support.import_module('_testcapi')
# Were we compiled --with-pydebug or with #define Py_DEBUG?
Py_DEBUG = hasattr(sys, 'gettotalrefcount')
def testfunction(self):
"""some doc"""
return self
class InstanceMethod:
id = _testcapi.instancemethod(id)
testfunction = _testcapi.instancemethod(testfunction)
class CAPITest(unittest.TestCase):
def test_instancemethod(self):
inst = InstanceMethod()
self.assertEqual(id(inst), inst.id())
self.assertTrue(inst.testfunction() is inst)
self.assertEqual(inst.testfunction.__doc__, testfunction.__doc__)
self.assertEqual(InstanceMethod.testfunction.__doc__, testfunction.__doc__)
InstanceMethod.testfunction.attribute = "test"
self.assertEqual(testfunction.attribute, "test")
self.assertRaises(AttributeError, setattr, inst.testfunction, "attribute", "test")
@unittest.skipUnless(threading, 'Threading required for this test.')
def test_no_FatalError_infinite_loop(self):
with support.SuppressCrashReport():
p = subprocess.Popen([sys.executable, "-c",
'import _testcapi;'
'_testcapi.crash_no_current_thread()'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
(out, err) = p.communicate()
self.assertEqual(out, b'')
# This used to cause an infinite loop.
self.assertTrue(err.rstrip().startswith(
b'Fatal Python error:'
b' PyThreadState_Get: no current thread'))
def test_memoryview_from_NULL_pointer(self):
self.assertRaises(ValueError, _testcapi.make_memoryview_from_NULL_pointer)
def test_exc_info(self):
raised_exception = ValueError("5")
new_exc = TypeError("TEST")
try:
raise raised_exception
except ValueError as e:
tb = e.__traceback__
orig_sys_exc_info = sys.exc_info()
orig_exc_info = _testcapi.set_exc_info(new_exc.__class__, new_exc, None)
new_sys_exc_info = sys.exc_info()
new_exc_info = _testcapi.set_exc_info(*orig_exc_info)
reset_sys_exc_info = sys.exc_info()
self.assertEqual(orig_exc_info[1], e)
self.assertSequenceEqual(orig_exc_info, (raised_exception.__class__, raised_exception, tb))
self.assertSequenceEqual(orig_sys_exc_info, orig_exc_info)
self.assertSequenceEqual(reset_sys_exc_info, orig_exc_info)
self.assertSequenceEqual(new_exc_info, (new_exc.__class__, new_exc, None))
self.assertSequenceEqual(new_sys_exc_info, new_exc_info)
else:
self.assertTrue(False)
@unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.')
def test_seq_bytes_to_charp_array(self):
# Issue #15732: crash in _PySequence_BytesToCharpArray()
class Z(object):
def __len__(self):
return 1
self.assertRaises(TypeError, _posixsubprocess.fork_exec,
1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17)
# Issue #15736: overflow in _PySequence_BytesToCharpArray()
class Z(object):
def __len__(self):
return sys.maxsize
def __getitem__(self, i):
return b'x'
self.assertRaises(MemoryError, _posixsubprocess.fork_exec,
1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17)
@unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.')
def test_subprocess_fork_exec(self):
class Z(object):
def __len__(self):
return 1
# Issue #15738: crash in subprocess_fork_exec()
self.assertRaises(TypeError, _posixsubprocess.fork_exec,
Z(),[b'1'],3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17)
@unittest.skipIf(MISSING_C_DOCSTRINGS,
"Signature information for builtins requires docstrings")
def test_docstring_signature_parsing(self):
self.assertEqual(_testcapi.no_docstring.__doc__, None)
self.assertEqual(_testcapi.no_docstring.__text_signature__, None)
self.assertEqual(_testcapi.docstring_empty.__doc__, None)
self.assertEqual(_testcapi.docstring_empty.__text_signature__, None)
self.assertEqual(_testcapi.docstring_no_signature.__doc__,
"This docstring has no signature.")
self.assertEqual(_testcapi.docstring_no_signature.__text_signature__, None)
self.assertEqual(_testcapi.docstring_with_invalid_signature.__doc__,
"docstring_with_invalid_signature($module, /, boo)\n"
"\n"
"This docstring has an invalid signature."
)
self.assertEqual(_testcapi.docstring_with_invalid_signature.__text_signature__, None)
self.assertEqual(_testcapi.docstring_with_invalid_signature2.__doc__,
"docstring_with_invalid_signature2($module, /, boo)\n"
"\n"
"--\n"
"\n"
"This docstring also has an invalid signature."
)
self.assertEqual(_testcapi.docstring_with_invalid_signature2.__text_signature__, None)
self.assertEqual(_testcapi.docstring_with_signature.__doc__,
"This docstring has a valid signature.")
self.assertEqual(_testcapi.docstring_with_signature.__text_signature__, "($module, /, sig)")
self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__doc__, None)
self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__text_signature__,
"($module, /, sig)")
self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__doc__,
"\nThis docstring has a valid signature and some extra newlines.")
self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__text_signature__,
"($module, /, parameter)")
def test_c_type_with_matrix_multiplication(self):
M = _testcapi.matmulType
m1 = M()
m2 = M()
self.assertEqual(m1 @ m2, ("matmul", m1, m2))
self.assertEqual(m1 @ 42, ("matmul", m1, 42))
self.assertEqual(42 @ m1, ("matmul", 42, m1))
o = m1
o @= m2
self.assertEqual(o, ("imatmul", m1, m2))
o = m1
o @= 42
self.assertEqual(o, ("imatmul", m1, 42))
o = 42
o @= m1
self.assertEqual(o, ("matmul", 42, m1))
def test_return_null_without_error(self):
# Issue #23571: A function must not return NULL without setting an
# error
if Py_DEBUG:
code = textwrap.dedent("""
import _testcapi
from test import support
with support.SuppressCrashReport():
_testcapi.return_null_without_error()
""")
rc, out, err = assert_python_failure('-c', code)
self.assertRegex(err.replace(b'\r', b''),
br'Fatal Python error: a function returned NULL '
br'without setting an error\n'
br'SystemError: <built-in function '
br'return_null_without_error> returned NULL '
br'without setting an error\n'
br'\n'
br'Current thread.*:\n'
br' File .*", line 6 in <module>')
else:
with self.assertRaises(SystemError) as cm:
_testcapi.return_null_without_error()
self.assertRegex(str(cm.exception),
'return_null_without_error.* '
'returned NULL without setting an error')
def test_return_result_with_error(self):
# Issue #23571: A function must not return a result with an error set
if Py_DEBUG:
code = textwrap.dedent("""
import _testcapi
from test import support
with support.SuppressCrashReport():
_testcapi.return_result_with_error()
""")
rc, out, err = assert_python_failure('-c', code)
self.assertRegex(err.replace(b'\r', b''),
br'Fatal Python error: a function returned a '
br'result with an error set\n'
br'ValueError\n'
br'\n'
br'The above exception was the direct cause '
br'of the following exception:\n'
br'\n'
br'SystemError: <built-in '
br'function return_result_with_error> '
br'returned a result with an error set\n'
br'\n'
br'Current thread.*:\n'
br' File .*, line 6 in <module>')
else:
with self.assertRaises(SystemError) as cm:
_testcapi.return_result_with_error()
self.assertRegex(str(cm.exception),
'return_result_with_error.* '
'returned a result with an error set')
def test_buildvalue_N(self):
_testcapi.test_buildvalue_N()
@unittest.skipUnless(threading, 'Threading required for this test.')
class TestPendingCalls(unittest.TestCase):
def pendingcalls_submit(self, l, n):
def callback():
#this function can be interrupted by thread switching so let's
#use an atomic operation
l.append(None)
for i in range(n):
time.sleep(random.random()*0.02) #0.01 secs on average
#try submitting callback until successful.
#rely on regular interrupt to flush queue if we are
#unsuccessful.
while True:
if _testcapi._pending_threadfunc(callback):
break;
def pendingcalls_wait(self, l, n, context = None):
#now, stick around until l[0] has grown to 10
count = 0;
while len(l) != n:
#this busy loop is where we expect to be interrupted to
#run our callbacks. Note that callbacks are only run on the
#main thread
if False and support.verbose:
print("(%i)"%(len(l),),)
for i in range(1000):
a = i*i
if context and not context.event.is_set():
continue
count += 1
self.assertTrue(count < 10000,
"timeout waiting for %i callbacks, got %i"%(n, len(l)))
if False and support.verbose:
print("(%i)"%(len(l),))
def test_pendingcalls_threaded(self):
#do every callback on a separate thread
n = 32 #total callbacks
threads = []
class foo(object):pass
context = foo()
context.l = []
context.n = 2 #submits per thread
context.nThreads = n // context.n
context.nFinished = 0
context.lock = threading.Lock()
context.event = threading.Event()
threads = [threading.Thread(target=self.pendingcalls_thread,
args=(context,))
for i in range(context.nThreads)]
with support.start_threads(threads):
self.pendingcalls_wait(context.l, n, context)
def pendingcalls_thread(self, context):
try:
self.pendingcalls_submit(context.l, context.n)
finally:
with context.lock:
context.nFinished += 1
nFinished = context.nFinished
if False and support.verbose:
print("finished threads: ", nFinished)
if nFinished == context.nThreads:
context.event.set()
def test_pendingcalls_non_threaded(self):
#again, just using the main thread, likely they will all be dispatched at
#once. It is ok to ask for too many, because we loop until we find a slot.
#the loop can be interrupted to dispatch.
#there are only 32 dispatch slots, so we go for twice that!
l = []
n = 64
self.pendingcalls_submit(l, n)
self.pendingcalls_wait(l, n)
class SubinterpreterTest(unittest.TestCase):
def test_subinterps(self):
import builtins
r, w = os.pipe()
code = """if 1:
import sys, builtins, pickle
with open({:d}, "wb") as f:
pickle.dump(id(sys.modules), f)
pickle.dump(id(builtins), f)
""".format(w)
with open(r, "rb") as f:
ret = support.run_in_subinterp(code)
self.assertEqual(ret, 0)
self.assertNotEqual(pickle.load(f), id(sys.modules))
self.assertNotEqual(pickle.load(f), id(builtins))
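# --- Illustrative sketch (same helper as SubinterpreterTest above) -----------
# support.run_in_subinterp() runs a code string in a fresh subinterpreter in
# the same process, which gets its own copies of imported modules. A minimal,
# hedged example; the function name is made up and nothing calls it:
def _demo_run_in_subinterp():
    r, w = os.pipe()
    # The subinterpreter shares the process file descriptors, so it can write
    # straight into the pipe created by the parent interpreter.
    code = "import os; os.write({:d}, b'hi')".format(w)
    ret = support.run_in_subinterp(code)
    os.close(w)
    data = os.read(r, 2)
    os.close(r)
    return ret, data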
# Bug #6012
class Test6012(unittest.TestCase):
def test(self):
self.assertEqual(_testcapi.argparsing("Hello", "World"), 1)
class EmbeddingTests(unittest.TestCase):
def setUp(self):
here = os.path.abspath(__file__)
basepath = os.path.dirname(os.path.dirname(os.path.dirname(here)))
exename = "_testembed"
if sys.platform.startswith("win"):
ext = ("_d" if "_d" in sys.executable else "") + ".exe"
exename += ext
exepath = os.path.dirname(sys.executable)
else:
exepath = os.path.join(basepath, "Programs")
self.test_exe = exe = os.path.join(exepath, exename)
if not os.path.exists(exe):
self.skipTest("%r doesn't exist" % exe)
# This is needed otherwise we get a fatal error:
# "Py_Initialize: Unable to get the locale encoding
# LookupError: no codec search functions registered: can't find encoding"
self.oldcwd = os.getcwd()
os.chdir(basepath)
def tearDown(self):
os.chdir(self.oldcwd)
def run_embedded_interpreter(self, *args, env=None):
"""Runs a test in the embedded interpreter"""
cmd = [self.test_exe]
cmd.extend(args)
if env is not None and sys.platform == 'win32':
# Windows requires at least the SYSTEMROOT environment variable to
# start Python.
env = env.copy()
env['SYSTEMROOT'] = os.environ['SYSTEMROOT']
p = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True,
env=env)
(out, err) = p.communicate()
self.assertEqual(p.returncode, 0,
"bad returncode %d, stderr is %r" %
(p.returncode, err))
return out, err
def test_subinterps(self):
# This is just a "don't crash" test
out, err = self.run_embedded_interpreter()
if support.verbose > 1:
print()
print(out)
print(err)
def test_forced_io_encoding(self):
# Checks forced configuration of embedded interpreter IO streams
env = dict(os.environ, PYTHONIOENCODING="utf-8:surrogateescape")
out, err = self.run_embedded_interpreter("forced_io_encoding", env=env)
if support.verbose > 1:
print()
print(out)
print(err)
expected_stream_encoding = "utf-8"
expected_errors = "surrogateescape"
expected_output = '\n'.join([
"--- Use defaults ---",
"Expected encoding: default",
"Expected errors: default",
"stdin: {in_encoding}:{errors}",
"stdout: {out_encoding}:{errors}",
"stderr: {out_encoding}:backslashreplace",
"--- Set errors only ---",
"Expected encoding: default",
"Expected errors: ignore",
"stdin: {in_encoding}:ignore",
"stdout: {out_encoding}:ignore",
"stderr: {out_encoding}:backslashreplace",
"--- Set encoding only ---",
"Expected encoding: latin-1",
"Expected errors: default",
"stdin: latin-1:{errors}",
"stdout: latin-1:{errors}",
"stderr: latin-1:backslashreplace",
"--- Set encoding and errors ---",
"Expected encoding: latin-1",
"Expected errors: replace",
"stdin: latin-1:replace",
"stdout: latin-1:replace",
"stderr: latin-1:backslashreplace"])
expected_output = expected_output.format(
in_encoding=expected_stream_encoding,
out_encoding=expected_stream_encoding,
errors=expected_errors)
# This is useful if we ever trip over odd platform behaviour
self.maxDiff = None
self.assertEqual(out.strip(), expected_output)
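# --- Illustrative sketch (not part of the test suite above) ------------------
# The forced-IO-encoding check drives the embedded interpreter, but the same
# PYTHONIOENCODING="<encoding>:<errors>" convention applies to any child
# interpreter. A minimal, hedged demonstration; the helper name is made up and
# nothing here calls it:
def _demo_forced_io_encoding():
    import os
    import subprocess
    import sys
    env = dict(os.environ, PYTHONIOENCODING="latin-1:replace")
    out = subprocess.check_output(
        [sys.executable, "-c",
         "import sys; print(sys.stdout.encoding, sys.stdout.errors)"],
        env=env, universal_newlines=True)
    # Expected to reflect the forced values, e.g. "latin-1 replace" (the exact
    # spelling of the codec name can vary between versions and platforms).
    return out.strip()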
class SkipitemTest(unittest.TestCase):
def test_skipitem(self):
"""
If this test failed, you probably added a new "format unit"
in Python/getargs.c, but neglected to update our poor friend
skipitem() in the same file. (If so, shame on you!)
With a few exceptions**, this function brute-force tests all
printable ASCII*** characters (32 to 126 inclusive) as format units,
        checking to see that PyArg_ParseTupleAndKeywords() returns consistent
        errors both when the unit is used and when it is
skipped. If the format unit doesn't exist, we'll get one of two
specific error messages (one for used, one for skipped); if it does
exist we *won't* get that error--we'll get either no error or some
other error. If we get the specific "does not exist" error for one
test and not for the other, there's a mismatch, and the test fails.
** Some format units have special funny semantics and it would
be difficult to accommodate them here. Since these are all
well-established and properly skipped in skipitem() we can
get away with not testing them--this test is really intended
to catch *new* format units.
*** Python C source files must be ASCII. Therefore it's impossible
to have non-ASCII format units.
"""
empty_tuple = ()
tuple_1 = (0,)
dict_b = {'b':1}
keywords = ["a", "b"]
for i in range(32, 127):
c = chr(i)
# skip parentheses, the error reporting is inconsistent about them
# skip 'e', it's always a two-character code
# skip '|' and '$', they don't represent arguments anyway
if c in '()e|$':
continue
# test the format unit when not skipped
format = c + "i"
try:
_testcapi.parse_tuple_and_keywords(tuple_1, dict_b,
format, keywords)
when_not_skipped = False
except SystemError as e:
s = "argument 1 (impossible<bad format char>)"
when_not_skipped = (str(e) == s)
except TypeError:
when_not_skipped = False
# test the format unit when skipped
optional_format = "|" + format
try:
_testcapi.parse_tuple_and_keywords(empty_tuple, dict_b,
optional_format, keywords)
when_skipped = False
except SystemError as e:
s = "impossible<bad format char>: '{}'".format(format)
when_skipped = (str(e) == s)
message = ("test_skipitem_parity: "
"detected mismatch between convertsimple and skipitem "
"for format unit '{}' ({}), not skipped {}, skipped {}".format(
c, i, when_skipped, when_not_skipped))
self.assertIs(when_skipped, when_not_skipped, message)
def test_parse_tuple_and_keywords(self):
# Test handling errors in the parse_tuple_and_keywords helper itself
self.assertRaises(TypeError, _testcapi.parse_tuple_and_keywords,
(), {}, 42, [])
self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords,
(), {}, '', 42)
self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords,
(), {}, '', [''] * 42)
self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords,
(), {}, '', [42])
def test_bad_use(self):
# Test handling invalid format and keywords in
# PyArg_ParseTupleAndKeywords()
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(1,), {}, '||O', ['a'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(1, 2), {}, '|O|O', ['a', 'b'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {'a': 1}, '$$O', ['a'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {'a': 1, 'b': 2}, '$O$O', ['a', 'b'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {'a': 1}, '$|O', ['a'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {'a': 1, 'b': 2}, '$O|O', ['a', 'b'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(1,), {}, '|O', ['a', 'b'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(1,), {}, '|OO', ['a'])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {}, '|$O', [''])
self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords,
(), {}, '|OO', ['a', ''])
def test_positional_only(self):
parse = _testcapi.parse_tuple_and_keywords
parse((1, 2, 3), {}, 'OOO', ['', '', 'a'])
parse((1, 2), {'a': 3}, 'OOO', ['', '', 'a'])
with self.assertRaisesRegex(TypeError,
r'Function takes at least 2 positional arguments \(1 given\)'):
parse((1,), {'a': 3}, 'OOO', ['', '', 'a'])
parse((1,), {}, 'O|OO', ['', '', 'a'])
with self.assertRaisesRegex(TypeError,
r'Function takes at least 1 positional arguments \(0 given\)'):
parse((), {}, 'O|OO', ['', '', 'a'])
parse((1, 2), {'a': 3}, 'OO$O', ['', '', 'a'])
with self.assertRaisesRegex(TypeError,
r'Function takes exactly 2 positional arguments \(1 given\)'):
parse((1,), {'a': 3}, 'OO$O', ['', '', 'a'])
parse((1,), {}, 'O|O$O', ['', '', 'a'])
with self.assertRaisesRegex(TypeError,
r'Function takes at least 1 positional arguments \(0 given\)'):
parse((), {}, 'O|O$O', ['', '', 'a'])
with self.assertRaisesRegex(SystemError, r'Empty parameter name after \$'):
parse((1,), {}, 'O|$OO', ['', '', 'a'])
with self.assertRaisesRegex(SystemError, 'Empty keyword'):
parse((1,), {}, 'O|OO', ['', 'a', ''])
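# --- Illustrative sketch (mirrors the conventions exercised above) -----------
# In a PyArg_ParseTupleAndKeywords() format string each unit consumes one
# argument, '|' starts the optional arguments and '$' starts the keyword-only
# ones. A minimal, hedged example using the same _testcapi helper as the tests;
# the function name is made up and nothing calls it:
def _demo_format_units():
    # 'a' is required and positional, 'b' is optional, 'c' is keyword-only.
    _testcapi.parse_tuple_and_keywords(
        (1,), {"c": 3}, "O|O$O", ["a", "b", "c"])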
@unittest.skipUnless(threading, 'Threading required for this test.')
class TestThreadState(unittest.TestCase):
@support.reap_threads
def test_thread_state(self):
# some extra thread-state tests driven via _testcapi
def target():
idents = []
def callback():
idents.append(threading.get_ident())
_testcapi._test_thread_state(callback)
a = b = callback
time.sleep(1)
# Check our main thread is in the list exactly 3 times.
self.assertEqual(idents.count(threading.get_ident()), 3,
"Couldn't find main thread correctly in the list")
target()
t = threading.Thread(target=target)
t.start()
t.join()
class Test_testcapi(unittest.TestCase):
def test__testcapi(self):
for name in dir(_testcapi):
if name.startswith('test_'):
with self.subTest("internal", name=name):
test = getattr(_testcapi, name)
test()
class PyMemDebugTests(unittest.TestCase):
PYTHONMALLOC = 'debug'
# '0x04c06e0' or '04C06E0'
PTR_REGEX = r'(?:0x)?[0-9a-fA-F]+'
def check(self, code):
with support.SuppressCrashReport():
out = assert_python_failure('-c', code,
PYTHONMALLOC=self.PYTHONMALLOC)
stderr = out.err
return stderr.decode('ascii', 'replace')
def test_buffer_overflow(self):
out = self.check('import _testcapi; _testcapi.pymem_buffer_overflow()')
regex = (r"Debug memory block at address p={ptr}: API 'm'\n"
r" 16 bytes originally requested\n"
r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n"
r" The [0-9] pad bytes at tail={ptr} are not all FORBIDDENBYTE \(0x[0-9a-f]{{2}}\):\n"
r" at tail\+0: 0x78 \*\*\* OUCH\n"
r" at tail\+1: 0xfb\n"
r" at tail\+2: 0xfb\n"
r" .*\n"
r" The block was made by call #[0-9]+ to debug malloc/realloc.\n"
r" Data at p: cb cb cb .*\n"
r"\n"
r"Fatal Python error: bad trailing pad byte")
regex = regex.format(ptr=self.PTR_REGEX)
regex = re.compile(regex, flags=re.DOTALL)
self.assertRegex(out, regex)
def test_api_misuse(self):
out = self.check('import _testcapi; _testcapi.pymem_api_misuse()')
regex = (r"Debug memory block at address p={ptr}: API 'm'\n"
r" 16 bytes originally requested\n"
r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n"
r" The [0-9] pad bytes at tail={ptr} are FORBIDDENBYTE, as expected.\n"
r" The block was made by call #[0-9]+ to debug malloc/realloc.\n"
r" Data at p: cb cb cb .*\n"
r"\n"
r"Fatal Python error: bad ID: Allocated using API 'm', verified using API 'r'\n")
regex = regex.format(ptr=self.PTR_REGEX)
self.assertRegex(out, regex)
@unittest.skipUnless(threading, 'Test requires a GIL (multithreading)')
def check_malloc_without_gil(self, code):
out = self.check(code)
expected = ('Fatal Python error: Python memory allocator called '
'without holding the GIL')
self.assertIn(expected, out)
def test_pymem_malloc_without_gil(self):
# Debug hooks must raise an error if PyMem_Malloc() is called
# without holding the GIL
code = 'import _testcapi; _testcapi.pymem_malloc_without_gil()'
self.check_malloc_without_gil(code)
def test_pyobject_malloc_without_gil(self):
# Debug hooks must raise an error if PyObject_Malloc() is called
# without holding the GIL
code = 'import _testcapi; _testcapi.pyobject_malloc_without_gil()'
self.check_malloc_without_gil(code)
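# --- Illustrative note (assumes CPython 3.6+ with PYTHONMALLOC support) ------
# The checks above run the offending code in a child interpreter so that the
# debug hooks installed by PYTHONMALLOC can report the misuse. The same switch
# can be applied to any script; a minimal, hedged sketch of that pattern (the
# helper name is made up and nothing calls it):
def _demo_pythonmalloc_debug():
    import os
    import subprocess
    import sys
    env = dict(os.environ, PYTHONMALLOC="debug")
    proc = subprocess.run([sys.executable, "-c", "print('ok')"],
                          env=env,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # With well-behaved code the debug hooks stay silent and the script runs
    # normally; misuse is reported on stderr as in the tests above.
    return proc.returncode, proc.stdout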
class PyMemMallocDebugTests(PyMemDebugTests):
PYTHONMALLOC = 'malloc_debug'
@unittest.skipUnless(sysconfig.get_config_var('WITH_PYMALLOC') == 1,
'need pymalloc')
class PyMemPymallocDebugTests(PyMemDebugTests):
PYTHONMALLOC = 'pymalloc_debug'
@unittest.skipUnless(Py_DEBUG, 'need Py_DEBUG')
class PyMemDefaultTests(PyMemDebugTests):
# test default allocator of Python compiled in debug mode
PYTHONMALLOC = ''
if __name__ == "__main__":
unittest.main()
|
test_speed.py
|
import time
import asyncio
import threading
import concurrent
import itemdb
# %% More theoretical test, without using an actual ItemDB
executor = concurrent.futures.ThreadPoolExecutor(
max_workers=None, thread_name_prefix="itemdb"
)
async def run_in_thread(func, *args, **kwargs):
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    def thread_func():
        # Resolve the future from the event-loop thread; propagate exceptions
        # so the awaiting coroutine cannot hang if func raises.
        try:
            result = func(*args, **kwargs)
        except BaseException as exc:
            loop.call_soon_threadsafe(future.set_exception, exc)
        else:
            loop.call_soon_threadsafe(future.set_result, result)
    t = threading.Thread(target=thread_func)
    t.start()
    return await future
class FakeItemDB:
def work(self):
pass # time.sleep(0)
async def work_async0(self):
# Not actually async
return self.work()
async def work_async1(self):
# Using a thread
return await run_in_thread(self.work)
async def work_async2(self):
# Using a thread pool
loop = asyncio.get_event_loop()
return await loop.run_in_executor(executor, self.work)
async def do_some_work(method_name):
n = 0
t0 = time.perf_counter()
while time.perf_counter() - t0 < 0.5:
db = FakeItemDB()
await getattr(db, method_name)()
n += 1
etime = time.perf_counter() - t0
return etime / n
def check_speed_of_async():
    # Try out multiple ways to implement async. The threading overhead favors
    # the thread pool, but it's really tiny, on the order of 100 us. Also,
    # starting 40 threads might be overkill in many cases; creating one
    # thread per db might make more sense ...
# Example run:
# work_async0 1.1372491800444893e-06
# work_async1 0.0002502583791895948
# work_async2 0.00015008997599039617
loop = asyncio.get_event_loop()
for i in range(3):
method_name = f"work_async{i}"
t = loop.run_until_complete(do_some_work(method_name))
print(method_name, t)
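# --- Illustrative alternative (assumes Python 3.9+) --------------------------
# asyncio.to_thread() submits the call to the loop's default thread pool, so it
# behaves much like work_async2 above. A minimal, hedged sketch; this variant
# is not wired into do_some_work():
async def work_async3(db):
    return await asyncio.to_thread(db.work)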
# %% Practical test, comparing asyncify vs AsyncItemDB
async def do_work_using_asyncify():
def work():
db = itemdb.ItemDB(":memory:")
db.ensure_table("items")
with db:
db.put_one("items", id=1, name="foo")
db.put_one("items", id=1, name="bar")
await itemdb.asyncify(work)()
async def do_work_using_asyncitemdb():
async def work():
db = await itemdb.AsyncItemDB(":memory:")
await db.ensure_table("items")
async with db:
await db.put_one("items", id=1, name="foo")
await db.put_one("items", id=1, name="bar")
await work()
def check_speed_of_async_itemdb():
co = _check_speed_of_async_itemdb()
asyncio.get_event_loop().run_until_complete(co)
async def _check_speed_of_async_itemdb():
    # The AsyncItemDB approach is less efficient than using asyncify. The gap
    # varies a lot between runs, but is roughly 20-30% with the current
    # measurement. Using a thread pool with AsyncItemDB might reduce this
    # discrepancy. Note that aiosqlite also uses a new thread per connection.
n = 500
time.sleep(0.1)
t0 = time.perf_counter()
for i in range(n):
await do_work_using_asyncify()
t1 = time.perf_counter()
print(f"with asyncify: {(t1 - t0)/n:0.9f} s")
time.sleep(0.1)
t0 = time.perf_counter()
for i in range(n):
await do_work_using_asyncitemdb()
t1 = time.perf_counter()
print(f"with AsyncItemDB: {(t1 - t0)/n:0.9f} s")
if __name__ == "__main__":
check_speed_of_async()
check_speed_of_async_itemdb()
|
worker.py
|
#!/usr/bin/python3
import time
try:
import queue
except ImportError:
import Queue as queue
import multiprocessing
from threading import Event
from threading import Thread
from os import getpid
import mapnik
pool = {}
class TileWorker(multiprocessing.Process):
def __init__(self, mmap, image, job_queue, stop_event):
multiprocessing.Process.__init__(self)
self.mmap = mmap
self.image = image
self.queue = job_queue
self.event = stop_event
# self.job_id = job_id
def run(self):
pid = getpid()
print('{0} starting'.format(pid))
mapnik.render(self.mmap, self.image)
print('{0} finished'.format(pid))
self.queue.put(self.image)
print('{0} done'.format(pid))
self.event.set()
return
def get_tile(mmap, image, result, stop_event):
    print('get_tile begin', getpid())
    mapnik.render(mmap, image)
    print('get_tile end', getpid())
    # Store the rendered tile before signalling, so the waiter in RenderTile()
    # never reads result['tile'] while it is still None.
    result['tile'] = image
    stop_event.set()
#
# def RenderTile(job_id, mmap, image):
# print('RenderTile',getpid())
# thread_return = {'tile': None}
# thread = Thread(target=get_tile, args=(mmap, image, thread_return,))
# pool[job_id] = thread
# pool[job_id].start()
# pool[job_id].join()
# # queue = multiprocessing.Queue()
# # job_queue = queue.Queue()
# # process = TileWorker(mmap, image, job_queue)
# # pool[job_id] = process
# # process.start()
# # process.join()
# # tile = process.get_tile()
# # del pool[job_id]
# # image = job_queue.get()
# print(thread_return['tile'])
# return thread_return['tile']
#
#
def RenderTile(job_id, mmap, image):
    print('RenderTile', getpid())
    # queue = multiprocessing.Queue()
    # Render in a background thread; get_tile() fills `result` and sets the
    # Event once the tile is ready.
    result = {'tile': None}
    stop_event = Event()
    thread = Thread(target=get_tile, args=(mmap, image, result, stop_event))
pool[job_id] = thread
pool[job_id].start()
# pool[job_id].join()
# while not stop_event.is_set():
# time.sleep(0.1)
stop_event.wait()
return result['tile']
# job_queue = queue.Queue()
# stop_event = Event()
# process = TileWorker(mmap, image, job_queue, stop_event)
# pool[job_id] = process
# process.start()
# # process.join()
# while not stop_event.is_set():
# time.sleep(0.1)
# del pool[job_id]
# tile = job_queue.get()
# return tile
def CancelTileRender(job_id):
    try:
        # terminate() only exists on multiprocessing.Process workers; for the
        # Thread started by RenderTile() this raises and the error is printed.
        pool[job_id].terminate()
    except Exception as e:
        print(e)
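# --- Illustrative sketch (not wired into the handlers above) -----------------
# Thread objects cannot be terminated, so CancelTileRender() only has real
# effect on a multiprocessing.Process such as TileWorker. A minimal, hedged
# process-based variant of RenderTile(); note that the mapnik Map/Image objects
# must be picklable to cross the process boundary, which is likely why the
# thread version is used above:
def RenderTileInProcess(job_id, mmap, image):
    job_queue = multiprocessing.Queue()
    stop_event = multiprocessing.Event()
    worker = TileWorker(mmap, image, job_queue, stop_event)
    pool[job_id] = worker          # a Process, so terminate() works
    worker.start()
    stop_event.wait()
    return job_queue.get()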
|
supervisor.py
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Training helper that checkpoints models and computes summaries."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
import os
import time
from tensorflow.core.framework.summary_pb2 import Summary
from tensorflow.core.util.event_pb2 import SessionLog
from tensorflow.python.eager import context
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import meta_graph
from tensorflow.python.framework import ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import lookup_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.summary import summary as _summary
from tensorflow.python.training import coordinator
from tensorflow.python.training import saver as saver_mod
from tensorflow.python.training import session_manager as session_manager_mod
from tensorflow.python.training import training_util
from tensorflow.python.util import deprecation
from tensorflow.python.util.tf_export import tf_export
@tf_export("train.Supervisor")
class Supervisor(object):
"""A training helper that checkpoints models and computes summaries.
This class is deprecated. Please use
@{tf.train.MonitoredTrainingSession} instead.
The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,
and a `SessionManager` that takes care of common needs of TensorFlow
training programs.
#### Use for a single program
```python
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
sv = Supervisor(logdir='/tmp/mydir')
# Get a TensorFlow session managed by the supervisor.
with sv.managed_session(FLAGS.master) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
```
Within the `with sv.managed_session()` block all variables in the graph have
been initialized. In addition, a few services have been started to
checkpoint the model and add summaries to the event log.
If the program crashes and is restarted, the managed session automatically
  reinitializes variables from the most recent checkpoint.
The supervisor is notified of any exception raised by one of the services.
After an exception is raised, `should_stop()` returns `True`. In that case
the training loop should also stop. This is why the training loop has to
check for `sv.should_stop()`.
Exceptions that indicate that the training inputs have been exhausted,
`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`
but are not re-raised from the `with` block: they indicate a normal
termination.
#### Use for multiple replicas
To train with replicas you deploy the same program in a `Cluster`.
One of the tasks must be identified as the *chief*: the task that handles
initialization, checkpoints, summaries, and recovery. The other tasks
depend on the *chief* for these services.
The only change you have to do to the single program code is to indicate
if the program is running as the *chief*.
```python
# Choose a task as the chief. This could be based on server_def.task_index,
# or job_def.name, or job_def.tasks. It's entirely up to the end user.
# But there can be only one *chief*.
is_chief = (server_def.task_index == 0)
server = tf.train.Server(server_def)
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that uses log directory on a shared file system.
# Indicate if you are the 'chief'
sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
# Get a Session in a TensorFlow server on the cluster.
with sv.managed_session(server.target) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
```
In the *chief* task, the `Supervisor` works exactly as in the first example
above. In the other tasks `sv.managed_session()` waits for the Model to have
been initialized before returning a session to the training code. The
non-chief tasks depend on the chief task for initializing the model.
If one of the tasks crashes and restarts, `managed_session()`
checks if the Model is initialized. If yes, it just creates a session and
returns it to the training code that proceeds normally. If the model needs
to be initialized, the chief task takes care of reinitializing it; the other
tasks just wait for the model to have been initialized.
NOTE: This modified program still works fine as a single program.
The single program marks itself as the chief.
#### What `master` string to use
Whether you are running on your machine or in the cluster you can use the
following values for the --master flag:
* Specifying `''` requests an in-process session that does not use RPC.
* Specifying `'local'` requests a session that uses the RPC-based
"Master interface" to run TensorFlow programs. See
@{tf.train.Server.create_local_server} for
details.
* Specifying `'grpc://hostname:port'` requests a session that uses
the RPC interface to a specific host, and also allows the in-process
master to access remote tensorflow workers. Often, it is
appropriate to pass `server.target` (for some `tf.train.Server`
    named `server`).
#### Advanced use
##### Launching additional services
`managed_session()` launches the Checkpoint and Summary services (threads).
If you need more services to run you can simply launch them in the block
controlled by `managed_session()`.
Example: Start a thread to print losses. We want this thread to run
every 60 seconds, so we launch it with `sv.loop()`.
```python
...
sv = Supervisor(logdir='/tmp/mydir')
with sv.managed_session(FLAGS.master) as sess:
sv.loop(60, print_loss, (sess, ))
while not sv.should_stop():
sess.run(my_train_op)
```
##### Launching fewer services
`managed_session()` launches the "summary" and "checkpoint" threads which use
  either the optional `summary_op` and `saver` passed to the constructor, or
default ones created automatically by the supervisor. If you want to run
your own summary and checkpointing logic, disable these services by passing
`None` to the `summary_op` and `saver` parameters.
Example: Create summaries manually every 100 steps in the chief.
```python
# Create a Supervisor with no automatic summaries.
sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
# As summary_op was None, managed_session() does not start the
# summary thread.
with sv.managed_session(FLAGS.master) as sess:
for step in xrange(1000000):
if sv.should_stop():
break
if is_chief and step % 100 == 0:
# Create the summary every 100 chief steps.
sv.summary_computed(sess, sess.run(my_summary_op))
else:
# Train normally
sess.run(my_train_op)
```
##### Custom model initialization
`managed_session()` only supports initializing the model by running an
`init_op` or restoring from the latest checkpoint. If you have special
initialization needs, see how to specify a `local_init_op` when creating the
supervisor. You can also use the `SessionManager` directly to create a
session and check if it could be initialized automatically.
"""
# Value to pass for the 'ready_op', 'init_op', 'summary_op', 'saver',
# and 'global_step' parameters of Supervisor.__init__() to indicate that
# the default behavior should be used.
USE_DEFAULT = 0
@deprecation.deprecated(None,
"Please switch to tf.train.MonitoredTrainingSession")
def __init__(self,
graph=None,
ready_op=USE_DEFAULT,
ready_for_local_init_op=USE_DEFAULT,
is_chief=True,
init_op=USE_DEFAULT,
init_feed_dict=None,
local_init_op=USE_DEFAULT,
logdir=None,
summary_op=USE_DEFAULT,
saver=USE_DEFAULT,
global_step=USE_DEFAULT,
save_summaries_secs=120,
save_model_secs=600,
recovery_wait_secs=30,
stop_grace_secs=120,
checkpoint_basename="model.ckpt",
session_manager=None,
summary_writer=USE_DEFAULT,
init_fn=None,
local_init_run_options=None):
"""Create a `Supervisor`.
Args:
graph: A `Graph`. The graph that the model will use. Defaults to the
default `Graph`. The supervisor may add operations to the graph before
creating a session, but the graph should not be modified by the caller
after passing it to the supervisor.
ready_op: 1-D string `Tensor`. This tensor is evaluated by supervisors in
`prepare_or_wait_for_session()` to check if the model is ready to use.
The model is considered ready if it returns an empty array. Defaults to
        the tensor returned from `tf.report_uninitialized_variables()`. If
`None`, the model is not checked for readiness.
ready_for_local_init_op: 1-D string `Tensor`. This tensor is evaluated by
supervisors in `prepare_or_wait_for_session()` to check if the model is
ready to run the local_init_op.
The model is considered ready if it returns an empty array. Defaults to
the tensor returned from
`tf.report_uninitialized_variables(tf.global_variables())`. If `None`,
the model is not checked for readiness before running local_init_op.
is_chief: If True, create a chief supervisor in charge of initializing
and restoring the model. If False, create a supervisor that relies
on a chief supervisor for inits and restore.
init_op: `Operation`. Used by chief supervisors to initialize the model
when it can not be recovered. Defaults to an `Operation` that
initializes all global variables. If `None`, no initialization is done
automatically unless you pass a value for `init_fn`, see below.
init_feed_dict: A dictionary that maps `Tensor` objects to feed values.
This feed dictionary will be used when `init_op` is evaluated.
local_init_op: `Operation`. Used by all supervisors to run initializations
that should run for every new supervisor instance. By default these
are table initializers and initializers for local variables.
If `None`, no further per supervisor-instance initialization is
done automatically.
logdir: A string. Optional path to a directory where to checkpoint the
model and log events for the visualizer. Used by chief supervisors.
The directory will be created if it does not exist.
summary_op: An `Operation` that returns a Summary for the event logs.
Used by chief supervisors if a `logdir` was specified. Defaults to the
operation returned from summary.merge_all(). If `None`, summaries are
not computed automatically.
saver: A Saver object. Used by chief supervisors if a `logdir` was
        specified. Defaults to the saver returned by `Saver()`.
If `None`, the model is not saved automatically.
global_step: An integer Tensor of size 1 that counts steps. The value
from 'global_step' is used in summaries and checkpoint filenames.
        Defaults to the op named 'global_step' in the graph if it exists, is of
rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global
step is not recorded in summaries and checkpoint files. Used by chief
supervisors if a `logdir` was specified.
save_summaries_secs: Number of seconds between the computation of
summaries for the event log. Defaults to 120 seconds. Pass 0 to
disable summaries.
save_model_secs: Number of seconds between the creation of model
checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints.
recovery_wait_secs: Number of seconds between checks that the model
is ready. Used by supervisors when waiting for a chief supervisor
to initialize or restore the model. Defaults to 30 seconds.
stop_grace_secs: Grace period, in seconds, given to running threads to
stop when `stop()` is called. Defaults to 120 seconds.
checkpoint_basename: The basename for checkpoint saving.
session_manager: `SessionManager`, which manages Session creation and
recovery. If it is `None`, a default `SessionManager` will be created
with the set of arguments passed in for backwards compatibility.
summary_writer: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None`
to indicate that no summaries should be written.
init_fn: Optional callable used to initialize the model. Called
after the optional `init_op` is called. The callable must accept one
argument, the session being initialized.
local_init_run_options: RunOptions to be passed as the SessionManager
local_init_run_options parameter.
Returns:
A `Supervisor`.
Raises:
RuntimeError: If called with eager execution enabled.
@compatibility(eager)
`Supervisor`s are not supported when eager execution is enabled.
@end_compatibility
"""
if context.executing_eagerly():
raise RuntimeError("Supervisors are compatible with eager execution.")
# Set default values of arguments.
if graph is None:
graph = ops.get_default_graph()
with graph.as_default():
self._init_ready_op(
ready_op=ready_op, ready_for_local_init_op=ready_for_local_init_op)
self._init_init_op(init_op=init_op, init_feed_dict=init_feed_dict)
self._init_local_init_op(local_init_op=local_init_op)
self._init_saver(saver=saver)
self._init_summary_op(summary_op=summary_op)
self._init_global_step(global_step=global_step)
self._graph = graph
self._meta_graph_def = meta_graph.create_meta_graph_def(
graph_def=graph.as_graph_def(add_shapes=True),
saver_def=self._saver.saver_def if self._saver else None)
self._is_chief = is_chief
self._coord = coordinator.Coordinator()
self._recovery_wait_secs = recovery_wait_secs
self._stop_grace_secs = stop_grace_secs
self._init_fn = init_fn
self._local_init_run_options = local_init_run_options
# Set all attributes related to checkpointing and writing events to None.
# Afterwards, set them appropriately for chief supervisors, as these are
# the only supervisors that can write checkpoints and events.
self._logdir = None
self._save_summaries_secs = None
self._save_model_secs = None
self._save_path = None
self._summary_writer = None
if self._is_chief:
self._logdir = logdir
self._save_summaries_secs = save_summaries_secs
self._save_model_secs = save_model_secs
if self._logdir:
self._save_path = os.path.join(self._logdir, checkpoint_basename)
if summary_writer is Supervisor.USE_DEFAULT:
if self._logdir:
self._summary_writer = _summary.FileWriter(self._logdir)
else:
self._summary_writer = summary_writer
self._graph_added_to_summary = False
self._init_session_manager(session_manager=session_manager)
self._verify_setup()
# The graph is not allowed to change anymore.
graph.finalize()
def _init_session_manager(self, session_manager=None):
if session_manager is None:
self._session_manager = session_manager_mod.SessionManager(
local_init_op=self._local_init_op,
ready_op=self._ready_op,
ready_for_local_init_op=self._ready_for_local_init_op,
graph=self._graph,
recovery_wait_secs=self._recovery_wait_secs,
local_init_run_options=self._local_init_run_options)
else:
self._session_manager = session_manager
def _get_first_op_from_collection(self, key):
"""Returns the first `Operation` from a collection.
Args:
key: A string collection key.
Returns:
The first Op found in a collection, or `None` if the collection is empty.
"""
try:
op_list = ops.get_collection(key)
if len(op_list) > 1:
logging.info("Found %d %s operations. Returning the first one.",
len(op_list), key)
if op_list:
return op_list[0]
except LookupError:
pass
return None
def _init_ready_op(self,
ready_op=USE_DEFAULT,
ready_for_local_init_op=USE_DEFAULT):
"""Initializes ready_op.
Args:
ready_op: `Tensor` to check if the model is initialized.
If it's set to USE_DEFAULT, creates an op that checks all
the variables are initialized.
ready_for_local_init_op: `Tensor` to check if the model is ready to run
local_init_op.
If it's set to USE_DEFAULT, creates an op that checks all
the global variables are initialized.
"""
if ready_op is Supervisor.USE_DEFAULT:
ready_op = self._get_first_op_from_collection(ops.GraphKeys.READY_OP)
if ready_op is None:
ready_op = variables.report_uninitialized_variables()
ops.add_to_collection(ops.GraphKeys.READY_OP, ready_op)
self._ready_op = ready_op
# ready_for_local_init_op defaults to None for backward compatibility
if ready_for_local_init_op is Supervisor.USE_DEFAULT:
ready_for_local_init_op = self._get_first_op_from_collection(
ops.GraphKeys.READY_FOR_LOCAL_INIT_OP)
self._ready_for_local_init_op = ready_for_local_init_op
def _init_init_op(self, init_op=USE_DEFAULT, init_feed_dict=None):
"""Initializes init_op.
Args:
init_op: `Operation` to initialize the variables. If set to USE_DEFAULT,
create an op that initializes all variables and tables.
init_feed_dict: A dictionary that maps `Tensor` objects to feed values.
This feed dictionary will be used when `init_op` is evaluated.
"""
if init_op is Supervisor.USE_DEFAULT:
init_op = self._get_first_op_from_collection(ops.GraphKeys.INIT_OP)
if init_op is None:
init_op = variables.global_variables_initializer()
ops.add_to_collection(ops.GraphKeys.INIT_OP, init_op)
self._init_op = init_op
self._init_feed_dict = init_feed_dict
def _init_local_init_op(self, local_init_op=USE_DEFAULT):
"""Initializes local_init_op.
Args:
local_init_op: `Operation` run for every new supervisor instance. If set
to USE_DEFAULT, use the first op from the GraphKeys.LOCAL_INIT_OP
collection. If the collection is empty, create an op that initializes
all local variables and all tables.
"""
if local_init_op is Supervisor.USE_DEFAULT:
local_init_op = self._get_first_op_from_collection(
ops.GraphKeys.LOCAL_INIT_OP)
if local_init_op is None:
op_list = [
variables.local_variables_initializer(),
lookup_ops.tables_initializer()
]
if op_list:
local_init_op = control_flow_ops.group(*op_list)
ops.add_to_collection(ops.GraphKeys.LOCAL_INIT_OP, local_init_op)
self._local_init_op = local_init_op
def _init_saver(self, saver=USE_DEFAULT):
"""Initializes saver.
Args:
saver: A `Saver` object. If set to USE_DEFAULT, create one that
saves all the variables.
"""
if saver is Supervisor.USE_DEFAULT:
saver = self._get_first_op_from_collection(ops.GraphKeys.SAVERS)
if saver is None and variables.global_variables():
saver = saver_mod.Saver()
ops.add_to_collection(ops.GraphKeys.SAVERS, saver)
self._saver = saver
def _init_summary_op(self, summary_op=USE_DEFAULT):
"""Initializes summary_op.
Args:
summary_op: An Operation that returns a Summary for the event logs.
If set to USE_DEFAULT, create an op that merges all the summaries.
"""
if summary_op is Supervisor.USE_DEFAULT:
summary_op = self._get_first_op_from_collection(ops.GraphKeys.SUMMARY_OP)
if summary_op is None:
summary_op = _summary.merge_all()
if summary_op is not None:
ops.add_to_collection(ops.GraphKeys.SUMMARY_OP, summary_op)
self._summary_op = summary_op
def _init_global_step(self, global_step=USE_DEFAULT):
"""Initializes global_step.
Args:
global_step: An integer Tensor of size 1 that counts steps. If
set to USE_DEFAULT, creates global_step tensor.
"""
if global_step is Supervisor.USE_DEFAULT:
global_step = self._get_first_op_from_collection(
ops.GraphKeys.GLOBAL_STEP)
if global_step is None:
global_step = self._default_global_step_tensor()
if global_step is not None:
ops.add_to_collection(ops.GraphKeys.GLOBAL_STEP, global_step)
self._global_step = global_step
@property
def is_chief(self):
"""Return True if this is a chief supervisor.
Returns:
A bool.
"""
return self._is_chief
@property
def session_manager(self):
"""Return the SessionManager used by the Supervisor.
Returns:
A SessionManager object.
"""
return self._session_manager
@property
def coord(self):
"""Return the Coordinator used by the Supervisor.
The Coordinator can be useful if you want to run multiple threads
during your training.
Returns:
A Coordinator object.
"""
return self._coord
@property
def init_op(self):
"""Return the Init Op used by the supervisor.
Returns:
An Op or `None`.
"""
return self._init_op
@property
def init_feed_dict(self):
"""Return the feed dictionary used when evaluating the `init_op`.
Returns:
A feed dictionary or `None`.
"""
return self._init_feed_dict
@property
def ready_op(self):
"""Return the Ready Op used by the supervisor.
Returns:
An Op or `None`.
"""
return self._ready_op
@property
def ready_for_local_init_op(self):
return self._ready_for_local_init_op
@property
def summary_writer(self):
"""Return the SummaryWriter used by the chief supervisor.
Returns:
A SummaryWriter.
"""
return self._summary_writer
@property
def summary_op(self):
"""Return the Summary Tensor used by the chief supervisor.
Returns:
A string Tensor for the summary or `None`.
"""
return self._summary_op
@property
def save_summaries_secs(self):
"""Return the delay between summary computations.
Returns:
      The delay, in seconds.
"""
return self._save_summaries_secs
@property
def global_step(self):
"""Return the global_step Tensor used by the supervisor.
Returns:
An integer Tensor for the global_step.
"""
return self._global_step
@property
def saver(self):
"""Return the Saver used by the supervisor.
Returns:
A Saver object.
"""
return self._saver
@property
def save_model_secs(self):
"""Return the delay between checkpoints.
Returns:
      The delay, in seconds.
"""
return self._save_model_secs
@property
def save_path(self):
"""Return the save path used by the supervisor.
Returns:
A string.
"""
return self._save_path
def _write_graph(self):
"""Writes graph_def to `logdir` and adds it to summary if applicable."""
assert self._is_chief
if self._logdir:
training_util.write_graph(self._graph.as_graph_def(add_shapes=True),
self._logdir, "graph.pbtxt")
if self._summary_writer and not self._graph_added_to_summary:
self._summary_writer.add_graph(self._graph)
self._summary_writer.add_meta_graph(self._meta_graph_def)
self._graph_added_to_summary = True
def start_standard_services(self, sess):
"""Start the standard services for 'sess'.
This starts services in the background. The services started depend
on the parameters to the constructor and may include:
- A Summary thread computing summaries every save_summaries_secs.
- A Checkpoint thread saving the model every save_model_secs.
    - A StepCounter thread measuring step time.
Args:
sess: A Session.
Returns:
A list of threads that are running the standard services. You can use
the Supervisor's Coordinator to join these threads with:
        sv.coord.join(<list of threads>)
Raises:
RuntimeError: If called with a non-chief Supervisor.
      ValueError: If no `logdir` was passed to the constructor as the
services need a log directory.
"""
if not self._is_chief:
raise RuntimeError("Only chief supervisor can start standard services. "
"Because only chief supervisors can write events.")
if not self._logdir:
logging.warning("Standard services need a 'logdir' "
"passed to the SessionManager")
return
if self._global_step is not None and self._summary_writer:
# Only add the session log if we keep track of global step.
# TensorBoard cannot use START message for purging expired events
# if there is no step value.
current_step = training_util.global_step(sess, self._global_step)
self._summary_writer.add_session_log(
SessionLog(status=SessionLog.START),
current_step)
threads = []
if self._save_summaries_secs and self._summary_writer:
if self._summary_op is not None:
threads.append(SVSummaryThread(self, sess))
if self._global_step is not None:
threads.append(SVStepCounterThread(self, sess))
if self.saver and self._save_model_secs:
threads.append(SVTimerCheckpointThread(self, sess))
for t in threads:
t.start()
return threads
def prepare_or_wait_for_session(self, master="", config=None,
wait_for_checkpoint=False,
max_wait_secs=7200,
start_standard_services=True):
"""Make sure the model is ready to be used.
Create a session on 'master', recovering or initializing the model as
needed, or wait for a session to be ready. If running as the chief
    and `start_standard_services` is set to True, also call the session
manager to start the standard services.
Args:
master: name of the TensorFlow master to use. See the `tf.Session`
constructor for how this is interpreted.
config: Optional ConfigProto proto used to configure the session,
which is passed as-is to create the session.
wait_for_checkpoint: Whether we should wait for the availability of a
checkpoint before creating Session. Defaults to False.
max_wait_secs: Maximum time to wait for the session to become available.
start_standard_services: Whether to start the standard services and the
queue runners.
Returns:
A Session object that can be used to drive the model.
"""
# For users who recreate the session with prepare_or_wait_for_session(), we
# need to clear the coordinator's stop_event so that threads managed by the
# coordinator can run.
self._coord.clear_stop()
if self._summary_writer:
self._summary_writer.reopen()
if self._is_chief:
sess = self._session_manager.prepare_session(
master, init_op=self.init_op, saver=self.saver,
checkpoint_dir=self._logdir, wait_for_checkpoint=wait_for_checkpoint,
max_wait_secs=max_wait_secs, config=config,
init_feed_dict=self._init_feed_dict, init_fn=self._init_fn)
self._write_graph()
if start_standard_services:
logging.info("Starting standard services.")
self.start_standard_services(sess)
else:
sess = self._session_manager.wait_for_session(master,
config=config,
max_wait_secs=max_wait_secs)
if start_standard_services:
logging.info("Starting queue runners.")
self.start_queue_runners(sess)
return sess
def start_queue_runners(self, sess, queue_runners=None):
"""Start threads for `QueueRunners`.
Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
are already started automatically when you create a session with the
supervisor, so unless you have non-collected queue runners to start
you do not need to call this explicitly.
Args:
sess: A `Session`.
queue_runners: A list of `QueueRunners`. If not specified, we'll use the
list of queue runners gathered in the graph under the key
`GraphKeys.QUEUE_RUNNERS`.
Returns:
The list of threads started for the `QueueRunners`.
Raises:
RuntimeError: If called with eager execution enabled.
@compatibility(eager)
Queues are not compatible with eager execution. To ingest data when eager
execution is enabled, use the `tf.data` API.
@end_compatibility
"""
if context.executing_eagerly():
raise RuntimeError("Queues are not compatible with eager execution.")
if queue_runners is None:
queue_runners = self._graph.get_collection(ops.GraphKeys.QUEUE_RUNNERS)
threads = []
for qr in queue_runners:
threads.extend(qr.create_threads(sess, coord=self._coord, daemon=True,
start=True))
return threads
def loop(self, timer_interval_secs, target, args=None, kwargs=None):
"""Start a LooperThread that calls a function periodically.
If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
repeatedly. Otherwise it calls it every `timer_interval_secs`
seconds. The thread terminates when a stop is requested.
The started thread is added to the list of threads managed by the supervisor
so it does not need to be passed to the `stop()` method.
Args:
timer_interval_secs: Number. Time boundaries at which to call `target`.
target: A callable object.
args: Optional arguments to pass to `target` when calling it.
kwargs: Optional keyword arguments to pass to `target` when calling it.
Returns:
The started thread.
"""
looper = coordinator.LooperThread(self._coord, timer_interval_secs,
target=target, args=args, kwargs=kwargs)
looper.start()
return looper
def stop(self,
threads=None,
close_summary_writer=True,
ignore_live_threads=False):
"""Stop the services and the coordinator.
This does not close the session.
Args:
threads: Optional list of threads to join with the coordinator. If
`None`, defaults to the threads running the standard services, the
threads started for `QueueRunners`, and the threads started by the
`loop()` method. To wait on additional threads, pass the
list in this parameter.
close_summary_writer: Whether to close the `summary_writer`. Defaults to
`True` if the summary writer was created by the supervisor, `False`
otherwise.
ignore_live_threads: If `True` ignores threads that remain running after
a grace period when joining threads via the coordinator, instead of
raising a RuntimeError.
"""
self._coord.request_stop()
try:
# coord.join() re-raises the first reported exception; the "finally"
# block ensures that we clean up whether or not an exception was
# reported.
self._coord.join(
threads,
stop_grace_period_secs=self._stop_grace_secs,
ignore_live_threads=ignore_live_threads)
finally:
# Close the writer last, in case one of the running threads was using it.
if close_summary_writer and self._summary_writer:
# Stop messages are not logged with event.step,
# since the session may have already terminated.
self._summary_writer.add_session_log(SessionLog(status=SessionLog.STOP))
self._summary_writer.close()
self._graph_added_to_summary = False
def request_stop(self, ex=None):
"""Request that the coordinator stop the threads.
See `Coordinator.request_stop()`.
Args:
ex: Optional `Exception`, or Python `exc_info` tuple as returned by
`sys.exc_info()`. If this is the first call to `request_stop()` the
corresponding exception is recorded and re-raised from `join()`.
"""
self._coord.request_stop(ex=ex)
def should_stop(self):
"""Check if the coordinator was told to stop.
See `Coordinator.should_stop()`.
Returns:
True if the coordinator was told to stop, False otherwise.
"""
return self._coord.should_stop()
def stop_on_exception(self):
"""Context handler to stop the supervisor when an exception is raised.
See `Coordinator.stop_on_exception()`.
Returns:
A context handler.
"""
return self._coord.stop_on_exception()
def wait_for_stop(self):
"""Block waiting for the coordinator to stop."""
self._coord.wait_for_stop()
def summary_computed(self, sess, summary, global_step=None):
"""Indicate that a summary was computed.
Args:
sess: A `Session` object.
summary: A Summary proto, or a string holding a serialized summary proto.
global_step: Int. global step this summary is associated with. If `None`,
it will try to fetch the current step.
Raises:
TypeError: if 'summary' is not a Summary proto or a string.
RuntimeError: if the Supervisor was created without a `logdir`.
"""
if not self._summary_writer:
raise RuntimeError("Writing a summary requires a summary writer.")
if global_step is None and self.global_step is not None:
global_step = training_util.global_step(sess, self.global_step)
self._summary_writer.add_summary(summary, global_step)
def _default_global_step_tensor(self):
"""Returns the global_step from the default graph.
Returns:
The global step `Tensor` or `None`.
"""
try:
gs = ops.get_default_graph().get_tensor_by_name("global_step:0")
if gs.dtype.base_dtype in [dtypes.int32, dtypes.int64]:
return gs
else:
logging.warning("Found 'global_step' is not an int type: %s", gs.dtype)
return None
except KeyError:
return None
def _verify_setup(self):
"""Check that all is good.
Raises:
ValueError: If something is not good.
"""
# Not running as chief means that replicas are used.
# In that case all Variables must have their device set.
if not self._is_chief:
for op in self._graph.get_operations():
if op.type in ["Variable", "VariableV2"] and not op.device:
raise ValueError("When using replicas, all Variables must have "
"their device set: %s" % op)
# pylint: disable=g-doc-return-or-yield,broad-except
@contextlib.contextmanager
def managed_session(self, master="", config=None,
start_standard_services=True,
close_summary_writer=True):
"""Returns a context manager for a managed session.
This context manager creates and automatically recovers a session. It
optionally starts the standard services that handle checkpoints and
summaries. It monitors exceptions raised from the `with` block or from the
services and stops the supervisor as needed.
The context manager is typically used as follows:
```python
def train():
sv = tf.train.Supervisor(...)
with sv.managed_session(<master>) as sess:
for step in xrange(..):
if sv.should_stop():
break
sess.run(<my training op>)
...do other things needed at each training step...
```
An exception raised from the `with` block or one of the service threads is
raised again when the block exits. This is done after stopping all threads
and closing the session. For example, an `AbortedError` exception, raised
in case of preemption of one of the workers in a distributed model, is
raised again when the block exits.
If you want to retry the training loop in case of preemption you can do it
as follows:
```python
def main(...):
      while True:
try:
train()
except tf.errors.Aborted:
pass
```
As a special case, exceptions used for control flow, such as
`OutOfRangeError` which reports that input queues are exhausted, are not
raised again from the `with` block: they indicate a clean termination of
the training loop and are considered normal termination.
Args:
master: name of the TensorFlow master to use. See the `tf.Session`
constructor for how this is interpreted.
config: Optional `ConfigProto` proto used to configure the session.
Passed as-is to create the session.
start_standard_services: Whether to start the standard services,
such as checkpoint, summary and step counter.
close_summary_writer: Whether to close the summary writer when
closing the session. Defaults to True.
Returns:
A context manager that yields a `Session` restored from the latest
      checkpoint or initialized from scratch if no checkpoint exists. The
session is closed when the `with` block exits.
"""
try:
sess = self.prepare_or_wait_for_session(
master=master, config=config,
start_standard_services=start_standard_services)
yield sess
except Exception as e:
self.request_stop(e)
finally:
try:
# Request all the threads to stop and wait for them to do so. Any
# exception raised by the threads is raised again from stop().
# Passing stop_grace_period_secs is for blocked enqueue/dequeue
# threads which are not checking for `should_stop()`. They
# will be stopped when we close the session further down.
self.stop(close_summary_writer=close_summary_writer)
finally:
# Close the session to finish up all pending calls. We do not care
# about exceptions raised when closing. This takes care of
# blocked enqueue/dequeue calls.
try:
sess.close()
except Exception:
# Silently ignore exceptions raised by close().
pass
# pylint: enable=g-doc-return-or-yield,broad-except
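# Illustrative sketch (not part of the original module): the retry-on-preemption
# pattern described in the `managed_session` docstring above. The `tf` import,
# the logdir and the toy train_op are placeholder assumptions, not names
# defined in this file.
def _example_managed_session_usage():
  import tensorflow as tf  # assumed to be importable in the caller's environment
  graph = tf.Graph()
  with graph.as_default():
    # A "global_step:0" variable so _default_global_step_tensor() can find it.
    global_step = tf.Variable(0, name="global_step", trainable=False,
                              dtype=tf.int64)
    train_op = tf.assign_add(global_step, 1)  # stand-in for a real training op
    sv = Supervisor(graph=graph, logdir="/tmp/example_logdir")  # placeholder logdir
  while True:
    try:
      with sv.managed_session() as sess:
        for _ in range(100):            # stand-in for the real training loop
          if sv.should_stop():
            break
          sess.run(train_op)
      break                             # clean exit: leave the retry loop
    except tf.errors.AbortedError:
      continue                          # a worker was preempted: retry as above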
class SVSummaryThread(coordinator.LooperThread):
"""A thread to save summaries on a timer."""
def __init__(self, sv, sess):
"""Create a SVSummaryThread.
Args:
sv: A `Supervisor`.
sess: A `Session`.
"""
super(SVSummaryThread, self).__init__(sv.coord, sv.save_summaries_secs)
self._sv = sv
self._sess = sess
def run_loop(self):
if self._sv.global_step is not None:
summary_strs, global_step = self._sess.run([self._sv.summary_op,
self._sv.global_step])
else:
summary_strs = self._sess.run(self._sv.summary_op)
global_step = None
if self._sv.summary_writer:
logging.info("Recording summary at step %s.", global_step)
self._sv.summary_writer.add_summary(summary_strs, global_step)
class SVStepCounterThread(coordinator.LooperThread):
"""Threads to count steps and measure their duration."""
def __init__(self, sv, sess, step_counter=None):
"""Create a `SVStepCounterThread`.
Args:
sv: A `Supervisor`.
sess: A `Session`.
step_counter: A `Tensor` holding the step counter. By default, it uses
sv.global_step.
"""
super(SVStepCounterThread, self).__init__(sv.coord, sv.save_summaries_secs)
self._sv = sv
self._sess = sess
self._last_time = 0.0
self._last_step = 0
step_counter = sv.global_step if step_counter is None else step_counter
self._step_counter = step_counter
self._summary_tag = "%s/sec" % self._step_counter.op.name
def start_loop(self):
self._last_time = time.time()
self._last_step = training_util.global_step(
self._sess, self._step_counter)
def run_loop(self):
# Count the steps.
current_step = training_util.global_step(self._sess, self._step_counter)
added_steps = current_step - self._last_step
self._last_step = current_step
# Measure the elapsed time.
current_time = time.time()
elapsed_time = current_time - self._last_time
self._last_time = current_time
# Reports the number of steps done per second
if elapsed_time > 0.:
steps_per_sec = added_steps / elapsed_time
else:
steps_per_sec = float("inf")
summary = Summary(value=[Summary.Value(tag=self._summary_tag,
simple_value=steps_per_sec)])
if self._sv.summary_writer:
self._sv.summary_writer.add_summary(summary, current_step)
logging.log_first_n(logging.INFO, "%s: %g", 10,
self._summary_tag, steps_per_sec)
class SVTimerCheckpointThread(coordinator.LooperThread):
"""A thread to checkpoint on a timer."""
def __init__(self, sv, sess):
"""Create a `SVTimerCheckpointThread`.
Args:
sv: A `Supervisor`.
sess: A `Session`.
"""
super(SVTimerCheckpointThread, self).__init__(sv.coord, sv.save_model_secs)
self._sv = sv
self._sess = sess
def run_loop(self):
logging.info("Saving checkpoint to path %s", self._sv.save_path)
self._sv.saver.save(self._sess, self._sv.save_path,
global_step=self._sv.global_step)
if self._sv.summary_writer and self._sv.global_step is not None:
current_step = training_util.global_step(self._sess, self._sv.global_step)
self._sv.summary_writer.add_session_log(
SessionLog(status=SessionLog.CHECKPOINT,
checkpoint_path=self._sv.save_path),
current_step)
# TODO(sherrym): All non-PEP8 compliant names will be deprecated shortly.
setattr(Supervisor, "PrepareSession", Supervisor.prepare_or_wait_for_session)
setattr(Supervisor, "StartQueueRunners", Supervisor.start_queue_runners)
setattr(Supervisor, "StartStandardServices", Supervisor.start_standard_services)
setattr(Supervisor, "Stop", Supervisor.stop)
setattr(Supervisor, "RequestStop", Supervisor.request_stop)
setattr(Supervisor, "Loop", Supervisor.loop)
setattr(Supervisor, "ShouldStop", Supervisor.should_stop)
setattr(Supervisor, "StopOnException", Supervisor.stop_on_exception)
setattr(Supervisor, "WaitForStop", Supervisor.wait_for_stop)
setattr(Supervisor, "SummaryComputed", Supervisor.summary_computed)
|
server.py
|
# -*- coding: utf-8 -*-
"""
DNS server framework - intended to simplify creation of custom resolvers.
Comprises the following components:
DNSServer - socketserver wrapper (in most cases you should just
need to pass this an appropriate resolver instance
and start in either foreground/background)
DNSHandler - handler instantiated by DNSServer to handle requests
The 'handle' method deals with the sending/receiving
packets (handling TCP length prefix) and delegates
the protocol handling to 'get_reply'. This decodes the
packet, hands off a DNSRecord to the Resolver instance,
and encodes the returned DNSRecord.
In most cases you don't need to change DNSHandler unless
you need to get hold of the raw protocol data in the
Resolver
DNSLogger - The class provides a default set of logging functions for
the various stages of the request handled by a DNSServer
instance which are enabled/disabled by flags in the 'log'
class variable.
Resolver - Instance implementing a 'resolve' method that receives
the decoded request packet and returns a response.
To implement a custom resolver in most cases all you need
is to implement this interface.
Note that there is only a single instance of the Resolver,
so you need to be careful about thread-safety and blocking
The following examples use the server framework:
fixedresolver.py - Simple resolver which will respond to all
requests with a fixed response
zoneresolver.py - Resolver which will take a standard zone
file input
shellresolver.py - Example of a dynamic resolver
proxy.py - DNS proxy
intercept.py - Intercepting DNS proxy
>>> resolver = BaseResolver()
>>> logger = DNSLogger(prefix=False)
>>> server = DNSServer(resolver,port=8053,address="localhost",logger=logger)
>>> server.start_thread()
>>> q = DNSRecord.question("abc.def")
>>> a = q.send("localhost",8053)
Request: [...] (udp) / 'abc.def.' (A)
Reply: [...] (udp) / 'abc.def.' (A) / RRs:
>>> print(DNSRecord.parse(a))
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: ...
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;abc.def. IN A
>>> server.stop()
>>> class TestResolver:
... def resolve(self,request,handler):
... reply = request.reply()
... reply.add_answer(*RR.fromZone("abc.def. 60 A 1.2.3.4"))
... return reply
>>> resolver = TestResolver()
>>> server = DNSServer(resolver,port=8053,address="localhost",logger=logger,tcp=True)
>>> server.start_thread()
>>> a = q.send("localhost",8053,tcp=True)
Request: [...] (tcp) / 'abc.def.' (A)
Reply: [...] (tcp) / 'abc.def.' (A) / RRs: A
>>> print(DNSRecord.parse(a))
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: ...
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;abc.def. IN A
;; ANSWER SECTION:
abc.def. 60 IN A 1.2.3.4
>>> server.stop()
"""
from __future__ import print_function
import binascii,socket,struct,threading,time
try:
import socketserver
except ImportError:
import SocketServer as socketserver
from dnslib import DNSRecord,DNSError,QTYPE,RCODE,RR
class BaseResolver(object):
"""
Base resolver implementation. Provides a 'resolve' method which is
called by DNSHandler with the decoded request (a DNSRecord instance)
and returns a DNSRecord instance as the reply.
In most cases you should be able to create a custom resolver by
just replacing the resolve method with appropriate resolver code for
your application (see fixedresolver/zoneresolver/shellresolver for
examples, and the sketch after this class definition).
Note that a single instance is shared by all DNSHandler instances, so
you need to consider blocking & thread safety.
"""
def resolve(self,request,handler):
"""
Example resolver - respond to all requests with NXDOMAIN
"""
reply = request.reply()
reply.header.rcode = getattr(RCODE,'NXDOMAIN')
return reply
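# Illustrative sketch (not part of the original module): a custom resolver of
# the kind the BaseResolver docstring describes -- subclass it and replace
# 'resolve'. The domain and address below are invented for the example.
class ExampleFixedResolver(BaseResolver):
    """Answer A queries for example.test with a fixed address, else NXDOMAIN."""
    def resolve(self, request, handler):
        reply = request.reply()
        if request.q.qname.matchSuffix("example.test"):
            reply.add_answer(*RR.fromZone("example.test. 60 A 192.0.2.1"))
        else:
            reply.header.rcode = RCODE.NXDOMAIN
        return reply
# Typical wiring (see DNSServer below):
#   DNSServer(ExampleFixedResolver(), port=8053, address="localhost").start_thread()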
class DNSHandler(socketserver.BaseRequestHandler):
"""
Handler for socketserver. Transparently handles both TCP/UDP requests
(TCP requests have length prepended) and hands off lookup to resolver
instance specified in <SocketServer>.resolver
"""
udplen = 0 # Max udp packet length (0 = ignore)
def handle(self):
if self.server.socket_type == socket.SOCK_STREAM:
self.protocol = 'tcp'
data = self.request.recv(8192)
length = struct.unpack("!H",bytes(data[:2]))[0]
while len(data) - 2 < length:
data += self.request.recv(8192)
data = data[2:]
else:
self.protocol = 'udp'
data,connection = self.request
self.server.logger.log_recv(self,data)
try:
rdata = self.get_reply(data)
self.server.logger.log_send(self,rdata)
if self.protocol == 'tcp':
rdata = struct.pack("!H",len(rdata)) + rdata
self.request.sendall(rdata)
else:
connection.sendto(rdata,self.client_address)
except DNSError as e:
self.server.logger.log_error(self,e)
def get_reply(self,data):
request = DNSRecord.parse(data)
self.server.logger.log_request(self,request)
resolver = self.server.resolver
reply = resolver.resolve(request,self)
self.server.logger.log_reply(self,reply)
if self.protocol == 'udp':
rdata = reply.pack()
if self.udplen and len(rdata) > self.udplen:
truncated_reply = reply.truncate()
rdata = truncated_reply.pack()
self.server.logger.log_truncated(self,truncated_reply)
else:
rdata = reply.pack()
return rdata
class DNSLogger:
"""
The class provides a default set of logging functions for the various
stages of the request handled by a DNSServer instance which are
enabled/disabled by flags in the 'log' class variable.
To customise logging create an object which implements the DNSLogger
interface and pass instance to DNSServer.
The methods which the logger instance must implement are:
log_recv - Raw packet received
log_send - Raw packet sent
log_request - DNS Request
log_reply - DNS Response
log_truncated - Truncated
log_error - Decoding error
log_data - Dump full request/response
"""
def __init__(self,log="",prefix=True):
"""
Selectively enable log hooks depending on log argument
(comma separated list of hooks to enable/disable)
- If empty enable default log hooks
- If entry starts with '+' (eg. +send,+recv) enable hook
- If entry starts with '-' (eg. -data) disable hook
- If entry doesn't start with +/- replace defaults
Prefix argument enables/disables log prefix
"""
default = ["request","reply","truncated","error"]
log = log.split(",") if log else []
enabled = set([ s for s in log if s[0] not in '+-'] or default)
[ enabled.add(l[1:]) for l in log if l.startswith('+') ]
[ enabled.discard(l[1:]) for l in log if l.startswith('-') ]
for l in ['log_recv','log_send','log_request','log_reply',
'log_truncated','log_error','log_data']:
if l[4:] not in enabled:
setattr(self,l,self.log_pass)
self.prefix = prefix
def log_pass(self,*args):
pass
def log_prefix(self,handler):
if self.prefix:
return "%s [%s:%s] " % (time.strftime("%Y-%M-%d %X"),
handler.__class__.__name__,
handler.server.resolver.__class__.__name__)
else:
return ""
def log_recv(self,handler,data):
print("%sReceived: [%s:%d] (%s) <%d> : %s" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
len(data),
binascii.hexlify(data)))
def log_send(self,handler,data):
print("%sSent: [%s:%d] (%s) <%d> : %s" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
len(data),
binascii.hexlify(data)))
def log_request(self,handler,request):
print("%sRequest: [%s:%d] (%s) / '%s' (%s)" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
request.q.qname,
QTYPE[request.q.qtype]))
self.log_data(request)
def log_reply(self,handler,reply):
print("%sReply: [%s:%d] (%s) / '%s' (%s) / RRs: %s" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
reply.q.qname,
QTYPE[reply.q.qtype],
",".join([QTYPE[a.rtype] for a in reply.rr])))
self.log_data(reply)
def log_truncated(self,handler,reply):
print("%sTruncated Reply: [%s:%d] (%s) / '%s' (%s) / RRs: %s" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
reply.q.qname,
QTYPE[reply.q.qtype],
",".join([QTYPE[a.rtype] for a in reply.rr])))
self.log_data(reply)
def log_error(self,handler,e):
print("%sInvalid Request: [%s:%d] (%s) :: %s" % (
self.log_prefix(handler),
handler.client_address[0],
handler.client_address[1],
handler.protocol,
e))
def log_data(self,dnsobj):
print("\n",dnsobj.toZone(" "),"\n",sep="")
class UDPServer(socketserver.UDPServer):
allow_reuse_address = True
class TCPServer(socketserver.TCPServer):
allow_reuse_address = True
class DNSServer(object):
"""
Convenience wrapper for a socketserver instance allowing
either a UDP or TCP server to be started in blocking mode
or as a background thread.
Processing is delegated to custom resolver (instance) and
optionally custom logger (instance), handler (class), and
server (class)
In most cases only a custom resolver instance is required
(and possibly logger)
"""
def __init__(self,resolver,
address="",
port=53,
tcp=False,
logger=None,
handler=DNSHandler,
server=None):
"""
resolver: resolver instance
address: listen address (default: "")
port: listen port (default: 53)
tcp: UDP (false) / TCP (true) (default: False)
logger: logger instance (default: DNSLogger)
handler: handler class (default: DNSHandler)
server: socketserver class (default: UDPServer/TCPServer)
"""
if not server:
if tcp:
server = TCPServer
else:
server = UDPServer
self.server = server((address,port),handler)
self.server.resolver = resolver
self.server.logger = logger or DNSLogger()
def start(self):
self.server.serve_forever()
def start_thread(self):
self.thread = threading.Thread(target=self.server.serve_forever)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.server.shutdown()
def isAlive(self):
return self.thread.is_alive()
if __name__ == "__main__":
import doctest
doctest.testmod(optionflags=doctest.ELLIPSIS)
|
main.py
|
import sublime
import sublime_plugin
import os
import threading
# import datetime
# import sys, xdrlib
# import json
# import random
# from . import xlwt
# from . import requests
from . import util
from . import codecreator
from . import setting
from . import xlsxwriter
from .requests.exceptions import RequestException
from .salesforce import (
SalesforceMoreThanOneRecord,
SalesforceMalformedRequest,
SalesforceExpiredSession,
SalesforceRefusedRequest,
SalesforceResourceNotFound,
SalesforceGeneralError,
SalesforceError
)
AUTO_CODE_DIR = "code-creator"
##########################################################################################
#Sublime main menu
##########################################################################################
# Oauth2
# class OauthCheckCommand(sublime_plugin.TextCommand):
# def run(self, edit):
# util.stop_server()
# util.open_in_default_browser(authorize_url)
# Print the SFDC object list
class ShowSfdcObjectListCommand(sublime_plugin.TextCommand):
def main_handle(self):
try:
sf = util.sf_login()
message = 'label, name, keyPrefix' + "\n"
for x in sf.describe()["sobjects"]:
message += util.xstr(x["label"]) + "," + util.xstr(x["name"]) + "," + util.xstr(x["keyPrefix"]) + "\n"
# util.show_in_new_tab(message)
file_name = sf.settings["default_project"] + '_sobject_lst.csv'
util.save_and_open_in_panel(message, file_name, True )
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
# Save the SFDC object list as an Excel file
# (uses xlsxwriter to write the workbook)
class SaveSfdcObjectAsExcelCommand(sublime_plugin.WindowCommand):
def main_handle(self, savePath = ''):
try:
dirPath = os.path.dirname(savePath)
util.makedir(dirPath)
sf = util.sf_login()
# contact = sf.query("SELECT Id, Email FROM Contact limit 1")
sfdesc = sf.describe()
book = xlsxwriter.Workbook(savePath)
newSheet_1Name = 'オブジェクトリスト'  # "Object list" sheet name
newSheet_1 = book.add_worksheet(newSheet_1Name)
newSheet_1.write(0, 0, 'label')
newSheet_1.write(0, 1, 'name')
newSheet_1.write(0, 2, 'keyPrefix')
index = 1
sheetIndexMap = {}
sheetIndex = 0
sheetIndexMap[0] = newSheet_1Name
sheetNameList = []
headers = ['name','label','type','length','scale','updateable','unique','custom','picklistValues','aggregatable','autoNumber','byteLength','calculated','calculatedFormula','cascadeDelete','caseSensitive','controllerName','createable','defaultValue','defaultValueFormula','defaultedOnCreate','dependentPicklist','deprecatedAndHidden','digits','displayLocationInDecimal','encrypted','externalId','extraTypeInfo','filterable','filteredLookupInfo','groupable','highScaleNumber','htmlFormatted','idLookup','inlineHelpText','mask','maskType','nameField','namePointing','nillable','permissionable','precision','queryByDistance','referenceTargetField','referenceTo','relationshipName','relationshipOrder','restrictedDelete','restrictedPicklist','soapType','sortable','writeRequiresMasterRead']
for x in sf.describe()["sobjects"]:
#write to xls
# book.get_sheet(0)
newSheet_1.write(index, 0, util.xstr(x["label"]))
newSheet_1.write(index, 1, util.xstr(x["name"]))
newSheet_1.write(index, 2, util.xstr(x["keyPrefix"]))
index = index + 1
#print(sf.Kind__c.describe())
#print(x["name"])
#print(x["custom"])
if x["custom"]:
sheetIndex += 1
# sftype = SFType(util.xstr(x["name"]), sf.session_id, sf.sf_instance, sf_version=sf.sf_version,
# proxies=sf.proxies, session=sf.session)
sftype = sf.get_sobject(util.xstr(x["name"]))
#print(x["name"])
#write to xls
# worksheet_name = x["name"]
# if len(worksheet_name) > 31:
# worksheet_name = (x["name"].replace("_",""))[0:31]
worksheet_name = x["label"]
if len(worksheet_name) > 31:
worksheet_name = (x["label"])[0:31]
if worksheet_name in sheetNameList:
worksheet_name = (x["label"])[0:25] + "_" + util.random_str(4)
sheetNameList.append(worksheet_name)
fieldSheet_1 = book.add_worksheet(worksheet_name)
fieldSheet_1.write(0, 0, 'sobject')
fieldSheet_1.write(0, 1, x["name"])
fieldSheet_1.write(1, 0, 'label')
fieldSheet_1.write(1, 1, x["label"])
fieldSheet_1.write(2, 0, 'keyPrefix')
fieldSheet_1.write(2, 1, x["keyPrefix"])
# book.get_sheet(sheetIndex)
rowIndex = 4
headerIndex = 0
for header in headers:
fieldSheet_1.write(rowIndex, headerIndex, header)
headerIndex = headerIndex + 1
sftypedesc = sftype.describe()
for field in sftypedesc["fields"]:
print(field)
#print(field["name"])
#print(field["label"])
#print(field["type"])
#print(field["length"])
#print(field["scale"])
rowIndex += 1
headerIndex = 0
for header in headers:
if header == "picklistValues":
picklistValuesStr = ''
for pv in field[header]:
picklistValuesStr += pv['label'] + ':' + pv['value'] + '\n'
fieldSheet_1.write(rowIndex, headerIndex, picklistValuesStr)
else:
fieldSheet_1.write(rowIndex, headerIndex, util.xstr(field[header]))
headerIndex = headerIndex + 1
#message += x["label"] + "\n"
# book.save( settings["default_project"] + '_sobject.xls')
util.show_in_dialog("Done! Please see the dir below: \n" + dirPath)
# isOpen = sublime.ok_cancel_dialog('Do you want to open the directory?')
# if isOpen:
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def on_input(self, args):
thread = threading.Thread(target=self.main_handle, args=(args, ))
thread.start()
util.handle_thread(thread)
def run(self):
settings = setting.load()
self.fullPath = os.path.join(util.get_default_floder(), settings["default_project"] + '_sobject.xlsx')
# show_input_panel(caption, initial_text, on_done, on_change, on_cancel)
self.window.show_input_panel("Please Input FullPath of fileName: " ,
self.fullPath, self.on_input, None, None)
# Soql Query
class SoqlQueryCommand(sublime_plugin.TextCommand):
def main_handle(self, sel_string = ''):
try:
sf = util.sf_login()
soql_str = soql_format(sf,sel_string)
print('------>soql')
print(soql_str)
soql_result = sf.query(soql_str)
print('----->soql_result')
print(soql_result)
# header = [key for key in soql_result['records'].iterkeys()]
# print(header)
message = util.get_soql_result(soql_str, soql_result)
util.show_in_new_tab(message)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
sel_area = self.view.sel()
if sel_area[0].empty():
util.show_in_dialog("Please select the SOQL !!")
return
else:
soql_string = self.view.substr(sel_area[0])
soql_string = util.del_comment(soql_string)
# TODO
# sobject_name = util.get_soql_sobject(soql_string)
# if not sobject_name:
# util.show_in_dialog("Please select SOQL !")
# return
thread = threading.Thread(target=self.main_handle, args=(soql_string, ))
thread.start()
util.handle_thread(thread)
# ToolingQueryCommand
class ToolingQueryCommand(sublime_plugin.TextCommand):
def main_handle(self, sel_string = ''):
try:
# print("ToolingQueryCommand Start")
sf = util.sf_login()
params = {'q': sel_string}
soql_result = sf.restful('tooling/query', params)
# header = [key for key in soql_result['records'].iterkeys()]
# print(header)
message = util.get_soql_result(sel_string, soql_result)
util.show_in_new_tab(message)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
sel_area = self.view.sel()
if sel_area[0].empty():
util.show_in_dialog("Please select the Tooling SOQL !!")
return
else:
sel_string = self.view.substr(sel_area[0])
sel_string = util.del_comment(sel_string)
thread = threading.Thread(target=self.main_handle, args=(sel_string, ))
thread.start()
util.handle_thread(thread)
# SFDC Coverage
class SfdcCoverageCommand(sublime_plugin.TextCommand):
def main_handle(self):
try:
sf = util.sf_login()
apexClassSoql = "select Id, Name, ApiVersion, LengthWithoutComments from ApexClass where Status = 'Active'"
apexClass = sf.query(apexClassSoql)
apexCodeCoverageSoql = "select Id , ApexClassOrTriggerId , NumLinesCovered , NumLinesUncovered FROM ApexCodeCoverageAggregate"
params = {'q': apexCodeCoverageSoql}
apexCodeCoverage = sf.restful('tooling/query', params)
util.show_in_new_tab(util.xstr(apexClass))
util.show_in_new_tab(util.xstr(apexCodeCoverage))
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
# RunApexScript
class RunApexScriptCommand(sublime_plugin.TextCommand):
def main_handle(self, sel_string = ''):
try:
sel_area = self.view.sel()
sf = util.sf_login()
result = sf.execute_anonymous(sel_string)
# print(result)
util.show_in_new_tab(result["debugLog"])
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
sel_area = self.view.sel()
if sel_area[0].empty():
util.show_in_dialog("Please select the Tooling SOQL !!")
return
else:
sel_string = self.view.substr(sel_area[0])
sel_string = util.del_comment(sel_string)
thread = threading.Thread(target=self.main_handle, args=(sel_string, ))
thread.start()
util.handle_thread(thread)
# Login Salesforce
class LoginSfdcCommand(sublime_plugin.WindowCommand):
def run(self):
# try:
# self.dirs = setting.get_browser_setting()
# dirs = []
# for dir in self.dirs:
# dirs.append(dir[0])
# self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
# except Exception as e:
# util.show_in_panel(e)
# return
# def panel_done(self, picked):
# if 0 > picked < len(self.dirs):
# return
# self.browser = self.dirs[picked][0]
# self.broswer_path = self.dirs[picked][1]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
try:
sf = util.sf_login()
login_sf_home(self, sf)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
return
# Login Project
class LoginProjectCommand(sublime_plugin.WindowCommand):
def run(self):
settings = setting.load()
self.settings = settings
projects = settings["projects"]
dirs = []
for project_key in projects.keys():
project_value = projects[project_key]
dirs.append(project_key)
self.results = dirs
self.window.show_quick_panel(dirs, self.panel_done,
sublime.MONOSPACE_FONT)
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_project = self.results[picked]
self.dirs = setting.get_browser_setting()
dirs = []
for dir in self.dirs:
dirs.append(dir[0])
sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.browser_choose), 10)
def browser_choose(self, picked):
if 0 > picked < len(self.dirs):
return
self.browser = self.dirs[picked][0]
self.broswer_path = self.dirs[picked][1]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
try:
sf = util.sf_login(self.picked_project)
# login_sf_home(self, sf)
login_sf_home(self, sf, self.browser, self.broswer_path)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
return
# sfdc_dataviewer
class SfdcDataviewerCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
self.settings = self.sf.settings
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]) +' : Export to Tab ')
self.results.append(util.xstr(x["name"]))
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
# print(self.picked_name)
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
self.sftype = self.sf.get_sobject(self.picked_name)
soql = 'SELECT '
fields = []
sftypedesc = self.sftype.describe()
for field in sftypedesc["fields"]:
fields.append(util.xstr(field["name"]))
soql += ' , '.join(fields)
soql += ' FROM ' + self.picked_name
if 'soql_select_limit' in self.settings:
soql += ' LIMIT ' + util.xstr(self.settings["soql_select_limit"])
message = 'soql : ' + soql + '\n\n\n\n'
soql_result = self.sf.query(soql)
message += util.get_soql_result(soql, soql_result)
util.show_in_new_tab(message)
# sfdc_online_dataviewer
class SfdcQuickViewerCommand(sublime_plugin.WindowCommand):
def run(self):
self.menu = [
"0. Reload Cache",
"1. Search All",
"2. SObject Data",
"3. SObject Setting",
"4. ApexClass",
"5. Trigger",
"6. VisualForce Page",
"7. VisualForce Components",
"8. Email Template",
"9. Static Resource"
# "10. Workflow Rule",
# "11. Validate Rule"
]
sublime.set_timeout(lambda:self.window.show_quick_panel(self.menu, self.select_menu), 10)
def select_menu(self, picked):
if 0 > picked < len(self.menu):
return
self.sel_menu = picked
self.sel_key = self.menu[picked]
self.tooling_soql = {}
self.tooling_soql[self.menu[3]] = 'SELECT id,developername FROM CustomObject'
self.tooling_soql[self.menu[4]] = 'SELECT id,name FROM ApexClass'
self.tooling_soql[self.menu[5]] = 'SELECT id,name FROM ApexTrigger'
self.tooling_soql[self.menu[6]] = 'SELECT id,name FROM ApexPage'
self.tooling_soql[self.menu[7]] = 'SELECT id,name FROM ApexComponent'
self.tooling_soql[self.menu[8]] = 'SELECT id,name FROM EmailTemplate'
self.tooling_soql[self.menu[9]] = 'SELECT id,name FROM StaticResource'
# self.tooling_soql[self.menu[10]] = 'SELECT id,fullname FROM WorkflowRule'
# self.tooling_soql[self.menu[11]] = 'SELECT Id,FullName FROM ValidationRule'
self.sf = util.sf_login()
self.settings = self.sf.settings
default_project = self.settings['default_project_value']['project_name']
s = sublime.load_settings(setting.SFDC_CACHE_SETTINGS)
project_meta_map = s.get(default_project)
# print('project_meta_map-->')
# print(project_meta_map)
# Reload Cache
if picked == 0:
thread = threading.Thread(target=self.reload_cache)
thread.start()
util.handle_thread(thread)
else:
# reload cache
if (project_meta_map is None):
# thread = threading.Thread(target=self.reload_cache)
# thread.start()
# util.handle_thread(thread)
# print('reload_cache()-->')
self.reload_cache()
s = sublime.load_settings(setting.SFDC_CACHE_SETTINGS)
project_meta_map = s.get(default_project)
self.metadata = []
self.metadata_view = []
sel_key_str = self.menu[picked]
# Search All
if picked == 1:
self.metadata = []
self.metadata_view = []
for key in range(2, 10):
tmp = project_meta_map[self.menu[key]]
if tmp is not None:
self.metadata.extend(tmp)
for obj in tmp:
self.metadata_view.append(obj['name'])
sublime.set_timeout(lambda:self.window.show_quick_panel(self.metadata_view, self.select_metadata), 10)
# Custom Object Data
elif picked >= 2:
self.metadata = project_meta_map[sel_key_str]
for obj in project_meta_map[sel_key_str]:
self.metadata_view.append(obj['name'])
sublime.set_timeout(lambda:self.window.show_quick_panel(self.metadata_view, self.select_metadata), 10)
def select_metadata(self, picked):
if 0 > picked < len(self.metadata):
return
if self.sf is None:
self.sf = util.sf_login()
sel_item = self.metadata[picked]
login_url = ('https://{instance}/{id}'
.format(instance=self.sf.sf_instance,
id=sel_item['id']))
# print(login_url)
util.open_in_default_browser(login_url)
def reload_cache(self):
# try:
sel_type = {}
sel_type[self.menu[2]] = ''
sel_type[self.menu[3]] = '__c Custom SObject Setting'
sel_type[self.menu[4]] = '.cls'
sel_type[self.menu[5]] = '.trigger'
sel_type[self.menu[6]] = '.page'
sel_type[self.menu[7]] = '.component'
sel_type[self.menu[8]] = '-EmailTemplate'
sel_type[self.menu[9]] = '-StaticResource'
# sel_type[self.menu[10]] = '-WorkflowRule'
# sel_type[self.menu[11]] = '-ValidationRule'
default_project = self.settings['default_project_value']['project_name']
# print('default_project -->')
# print(default_project)
s = sublime.load_settings(setting.SFDC_CACHE_SETTINGS)
project_meta_map = s.get(default_project)
if project_meta_map is None:
project_meta_map = {}
for key in self.tooling_soql:
soql = self.tooling_soql[key]
params = {'q': soql}
soql_result = {}
# TODO
try:
soql_result = self.sf.restful('tooling/query', params)
# print(soql_result)
except Exception as e:
print('------>reload_cache Exception')
print(e)
pass
# util.show_in_new_tab(soql_result)
# table = util.search_soql_to_list(soql, soql_result)
if soql_result.get('records') is not None:
result = []
name_key = 'Name'
if self.menu[3] == key:
name_key = 'DeveloperName'
# TODO
# elif self.menu[10] == key:
# name_key = 'FullName'
for obj in soql_result['records']:
tmp = {}
tmp['id'] = obj['Id']
tmp['name'] = obj[name_key] + sel_type[key]
result.append(tmp)
project_meta_map[key] = result
# "2. Custom Object Data"
custom_obj_meta = []
for x in self.sf.describe()["sobjects"]:
if x["keyPrefix"] is not None:
tmp_map = {}
tmp_map['id'] = util.xstr(x["keyPrefix"])
tmp_map['name'] = (util.xstr(x["name"])+' : '+util.xstr(x["label"]) +' : '+ x["keyPrefix"])
custom_obj_meta.append(tmp_map)
# stand sobject setting page
if not x['custom']:
tmp_map = {}
tmp_map['id'] = 'p/setup/layout/LayoutFieldList?type=' + util.xstr(x["name"])
tmp_map['name'] = util.xstr(x["name"]) + ' ' + util.xstr(x["label"]) + ' Standard SObject Setting'
if project_meta_map[self.menu[3]] is None:
project_meta_map[self.menu[3]] = []
project_meta_map[self.menu[3]].append(tmp_map)
project_meta_map[self.menu[2]] = custom_obj_meta
s.set(default_project, project_meta_map)
sublime.save_settings(setting.SFDC_CACHE_SETTINGS)
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# util.re_auth()
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# sfdc_object_desc
class SfdcObjectDescCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
self.results.append(util.xstr(x["name"]))
# print(x)
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
# print(self.picked_name)
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
self.sftype = self.sf.get_sobject(self.picked_name)
message = 'name, label, type, length, scale' + "\n"
sftypedesc = self.sftype.describe()
for field in sftypedesc["fields"]:
# print(field["name"])
# print(field["label"])
# print(field["type"])
# print(field["length"])
# print(field["scale"])
message += util.xstr(field["name"]) + "," + util.xstr(field["label"]) \
+ "," + util.xstr(field["type"]) + "," + util.xstr(field["length"]) + "," + util.xstr(field["scale"]) + "\n"
# util.show_in_new_tab(message)
file_name = self.picked_name + '_sobject_desc.csv'
sub_folder = 'sobject-desc'
util.save_and_open_in_panel(message, file_name, True, sub_folder )
# soql create
class SoqlCreateCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
self.results.append(util.xstr(x["name"]))
# print(x)
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
# print(self.picked_name)
# print(self.picked_name)
dirs = ["1. Custom Fields Only(Exclude Relation)",
"2. Updateable(Exclude Relation)",
"3. All Fields(Exclude Relation)",
"4. Custom Fields Only(Include Relation)",
"5. Updateable(Include Relation)",
"6. All Fields(Include Relation)"]
self.custom_result = dirs
sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.select_panel), 10)
def select_panel(self, picked):
if 0 > picked < len(self.custom_result):
return
self.is_custom_only = ( picked == 0 or picked == 3 )
self.is_updateable = ( picked == 1 or picked == 4)
self.include_relationship = ( picked >= 3 )
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
try:
# sobject = self.picked_name
# fields = get_sobject_fields(self.sf, sobject)
# fields_str = ",".join(fields)
# soql = ("select %s from %s " % (fields_str, sobject))
# util.show_in_new_tab(soql)
sobject = self.picked_name
sftype = self.sf.get_sobject(sobject)
sftypedesc = sftype.describe()
soql = codecreator.get_soql_src(sobject, sftypedesc["fields"],self.sf, condition='', has_comment=True,
is_custom_only=self.is_custom_only,
updateable_only=self.is_updateable,
include_relationship=self.include_relationship)
util.show_in_new_tab(soql)
except Exception as e:
util.show_in_panel(e)
return
# Create test data for all custom objects
class CreateAllTestDataCommand(sublime_plugin.WindowCommand):
def run(self):
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
try:
self.sf = util.sf_login()
message = ''
for x in self.sf.describe()["sobjects"]:
if x["custom"]:
objectApiName = util.xstr(x["name"])
message += createTestDataStr(objectApiName=objectApiName,
sftype=self.sf.get_sobject(objectApiName),
isAllField=False)
util.show_in_new_tab(message)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
# Create test data (required fields only)
class CreateTestDataNeedCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
self.results.append(util.xstr(x["name"]))
# print(x)
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
# print(self.picked_name)
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
self.sftype = self.sf.get_sobject(self.picked_name)
obj_name = util.get_obj_name(self.picked_name)
message = createTestDataStr(objectApiName=self.picked_name,
sftype=self.sftype,
isAllField=False)
util.insert_str(message)
class CreateTestDataFromSoqlCommand(sublime_plugin.TextCommand):
def main_handle(self, sel_string = ''):
try:
sf = util.sf_login()
soql_result = sf.query(sel_string)
object_name = util.get_query_object_name(soql_result)
if object_name:
sftype = sf.get_sobject(object_name)
sftypedesc = sftype.describe()
fields = {}
for field in sftypedesc["fields"]:
name = field['name'].lower()
fields[name] = field
message = util.get_soql_to_apex(fields, sel_string, soql_result)
else :
# print(header)
message = 'Query Error!\n'
util.insert_str(message)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def run(self, edit):
sel_area = self.view.sel()
if sel_area[0].empty():
util.show_in_dialog("Please select the SOQL !!")
return
else:
sel_string = self.view.substr(sel_area[0])
sel_string = util.del_comment(sel_string)
thread = threading.Thread(target=self.main_handle, args=(sel_string, ))
thread.start()
util.handle_thread(thread)
# Create test data (all updateable fields)
class CreateTestDataAllCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
self.results.append(util.xstr(x["name"]))
# print(x)
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
# print(self.picked_name)
self.sftype = self.sf.get_sobject(self.picked_name)
obj_name = util.get_obj_name(self.picked_name)
message = createTestDataStr(objectApiName=self.picked_name,
sftype=self.sftype,
isAllField=True)
# util.show_in_new_tab(message)
util.insert_str(message)
def createTestDataStr(objectApiName, sftype, isAllField):
obj_name = util.get_obj_name(objectApiName)
message = ("List<{T}> {objName}List = new List<{T}>();\n"
.format(T=objectApiName,
objName=obj_name))
message += "for(Integer i=0; i<5; i++){\n"
message += ("\t{T} {objName} = new {T}();\n"
.format(T=objectApiName,
objName=obj_name))
sftypedesc = sftype.describe()
for field in sftypedesc["fields"]:
# util.show_in_panel("defaultValue------->" + util.xstr(field["defaultValue"]))
# util.show_in_panel('\n')
# util.show_in_panel(field)
# util.show_in_panel('\n')
# util.show_in_panel(field["name"])
# util.show_in_panel('\n')
# util.show_in_panel(field["defaultValue"])
# util.show_in_panel('\n')
# util.show_in_panel(field["defaultValue"] is None)
# util.show_in_panel('\n\n\n')
# print(field["name"])
# print(field["label"])
# print(field["type"])
# print(field["length"])
# print(field["scale"])
# print(field)
## https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects_describesobjectresult.htm#topic-title
## obj type
if isAllField:
check = field["updateable"] and field["name"] != 'OwnerId'
else:
check = field["updateable"] \
and not field["nillable"] \
and field["name"] != 'OwnerId'\
and ( field["defaultValue"] is None ) \
and ( field["type"] != 'boolean' )
if check:
# if field["updateable"]:
## do with picklist
picklistValues = []
if field["type"] == 'picklist' or field["type"]== 'multipicklist':
for pv in field['picklistValues']:
picklistValues.append(pv['value'])
# print('------>')
# print(field["name"])
# print(picklistValues)
# print(field['picklistValues'])
if field["type"] == 'int' or field["type"] == 'double' or field["type"] == 'currency':
length = field["length"] if field["length"] < 3 else 3
else:
length = field["length"] if field["length"] < 8 else 8
val = util.random_data(data_type=field["type"],
length=length,
scale=field["scale"],
filed_name=field["name"],
picklistValues=picklistValues)
message += ("\t{objName}.{field} = {value}; //{label}\n"
.format(objName=obj_name,
field=util.xstr(field["name"]),
value=val,
label=field["label"],))
message += ("\t{objName}List.add({objName});\n"
.format(T=objectApiName,
objName=obj_name))
message += "}\n"
message += ("insert {objName}List;\n\n"
.format(objName=obj_name))
return message
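# Illustrative note (not part of the original plugin): given the format strings
# above, for a hypothetical custom object Foo__c the Apex text returned by
# createTestDataStr() has roughly this shape, where <objName> stands for
# whatever util.get_obj_name('Foo__c') returns and the field values come from
# util.random_data() (so the literals differ on every run):
#
#   List<Foo__c> <objName>List = new List<Foo__c>();
#   for(Integer i=0; i<5; i++){
#       Foo__c <objName> = new Foo__c();
#       <objName>.SomeRequiredField__c = <random value>; //<field label>
#       <objName>List.add(<objName>);
#   }
#   insert <objName>List;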
class CreateTestCodeCommand(sublime_plugin.TextCommand):
def run(self, edit):
try:
file_name = self.view.file_name()
if file_name is None:
return
check = os.path.isfile(file_name) and ( file_name.find(".cls") > -1 )
if not check:
util.show_in_dialog('Invalid file type! Please select a .cls file.')
return
sel_string = self.view.substr(sublime.Region(0, self.view.size()))
test_code,sfdc_name_map = codecreator.get_testclass(sel_string)
# util.show_in_new_tab(test_code)
is_open = True
file_name = sfdc_name_map['test_class_file']
sub_folder = AUTO_CODE_DIR
save_path = util.save_and_open_in_panel(test_code, file_name, is_open, sub_folder )
except Exception as e:
util.show_in_panel(e)
return
#Create VisualForce/Controller/DTO/DAO Code
class CreateSfdcCodeCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.sf = util.sf_login()
dirs = []
self.results = []
for x in self.sf.describe()["sobjects"]:
# dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
self.results.append(util.xstr(x["name"]))
# print(x)
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
# util.show_in_dialog('Exception Error!')
return
def panel_done(self, picked):
if 0 > picked < len(self.results):
return
self.picked_name = self.results[picked]
# print(self.picked_name)
dirs = ["1. Custom Fields Only(Exclude Validate)",
"2. All Fields(Exclude Validate)",
"3. Custom Fields Only(Include Validate)",
"4. All Fields(Include Validate)"]
self.custom_result = dirs
sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.select_panel), 10)
def select_panel(self, picked):
if 0 > picked < len(self.custom_result):
return
self.is_custom_only = (picked==0 or picked==2)
self.include_validate = (picked>1)
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
self.sftype = self.sf.get_sobject(self.picked_name)
sftypedesc = self.sftype.describe()
# util.show_in_new_tab(util.get_dto_class(self.picked_name, sftypedesc["fields"], self.is_custom_only))
sub_folder = AUTO_CODE_DIR
sfdc_name_map = codecreator.get_sfdc_namespace(self.picked_name)
is_open = False
save_path_list = []
# dto Code
dto_code, class_name = codecreator.get_dto_class(self.picked_name, sftypedesc["fields"], self.is_custom_only, self.include_validate)
file_name = sfdc_name_map['dto_file']
save_path = util.save_and_open_in_panel(dto_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# dao Code
dao_code = codecreator.get_dao_class(self.picked_name, sftypedesc["fields"], self.sf, self.is_custom_only)
file_name = sfdc_name_map['dao_file']
save_path = util.save_and_open_in_panel(dao_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# controller code
controller_code, class_name = codecreator.get_controller_class(self.picked_name)
file_name = sfdc_name_map['controller_file']
save_path = util.save_and_open_in_panel(controller_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# visualforce code
vf_code, class_name = codecreator.get_vf_class(self.picked_name, sftypedesc["fields"], self.is_custom_only, self.include_validate)
file_name = sfdc_name_map['vf_file']
save_path = util.save_and_open_in_panel(vf_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# list controller code
list_controller_code, class_name = codecreator.get_list_controller_class(self.picked_name)
file_name = sfdc_name_map['list_controller_file']
save_path = util.save_and_open_in_panel(list_controller_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# visualforce code
list_vf_code, class_name = codecreator.get_list_vf_class(self.picked_name, sftypedesc["fields"], self.is_custom_only, self.include_validate)
file_name = sfdc_name_map['list_vf_file']
save_path = util.save_and_open_in_panel(list_vf_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
# SfdcXyController
src_code = codecreator.get_sfdcxycontroller()
file_name = 'SfdcXyController.cls'
save_path = util.save_and_open_in_panel(src_code, file_name, is_open, sub_folder )
save_path_list.append(save_path)
if sublime.ok_cancel_dialog('Source creation finished. Do you want to open the generated sources?'):
for save_path in save_path_list:
util.open_file(save_path)
class OpenControllerCommand(sublime_plugin.TextCommand):
def run(self, edit):
file_name = self.view.file_name()
isExist = False
if file_name and ( file_name.find(".page") > -1 ):
# XyzPage.page -> XyzController.cls
# Xyz.page -> XyzController.cls
page_floder = util.get_slash() + "pages" + util.get_slash()
class_floder = util.get_slash() + "classes" + util.get_slash()
file_name1 = file_name.replace(page_floder, class_floder).replace('.page', 'Controller.cls')
file_name2 = file_name.replace(page_floder, class_floder).replace('Page.page', 'Controller.cls')
if os.path.isfile(file_name1):
self.view.window().open_file(file_name1)
isExist = True
elif os.path.isfile(file_name2):
self.view.window().open_file(file_name2)
isExist = True
elif file_name and ( file_name.find("Test.cls") > -1 ):
# XyzControllerTest.cls -> XyzController.cls
file_name1 = file_name.replace('Test.cls', '.cls')
if os.path.isfile(file_name1):
self.view.window().open_file(file_name1)
isExist = True
if not isExist:
util.show_in_panel('file not found!\n')
def is_enabled(self):
file_name = self.view.file_name()
if file_name is None:
return False
check = os.path.isfile(file_name) and (( file_name.find(".page") > -1 ) or ( file_name.find("Test.cls") > -1 ))
return check
def is_visible(self):
return self.is_enabled()
# Open Test Class
class OpenTestclassCommand(sublime_plugin.TextCommand):
def run(self, edit):
file_name = self.view.file_name()
isExist = False
if file_name:
# XyzController.cls -> XyzControllerTest.cls
file_name1 = file_name.replace('.cls', 'Test.cls')
if os.path.isfile(file_name1):
self.view.window().open_file(file_name1)
isExist = True
if not isExist:
util.show_in_panel('file not found!\n')
def is_enabled(self):
file_name = self.view.file_name()
if file_name is None:
return False
check = os.path.isfile(file_name) and ( file_name.find(".cls") > -1 ) and ( file_name.find("Test.cls") == -1 )
return check
def is_visible(self):
return self.is_enabled()
class OpenVisualpageCommand(sublime_plugin.TextCommand):
def run(self, edit):
file_name = self.view.file_name()
isExist = False
if file_name:
# XyzPage.page -> XyzController.cls
# Xyz.page -> XyzController.cls
page_floder = util.get_slash() + "pages" + util.get_slash()
class_floder = util.get_slash() + "classes" + util.get_slash()
file_name1 = file_name.replace(class_floder, page_floder).replace('Controller.cls', '.page')
file_name2 = file_name.replace(class_floder, page_floder).replace('Controller.cls', 'Page.page')
if os.path.isfile(file_name1):
self.view.window().open_file(file_name1)
isExist = True
elif os.path.isfile(file_name2):
self.view.window().open_file(file_name2)
isExist = True
if not isExist:
util.show_in_panel('file not found!\n')
def is_enabled(self):
file_name = self.view.file_name()
if file_name is None:
return False
check = os.path.isfile(file_name) and ( file_name.find(".cls") > -1 ) and ( file_name.find("Test.cls") == -1 )
return check
def is_visible(self):
return self.is_enabled()
class CopyFilenameCommand(sublime_plugin.TextCommand):
def run(self, edit):
full_path = self.view.file_name()
if full_path != None:
str_list = full_path.split(util.get_slash())
file_name = str(str_list[-1])
sublime.set_clipboard(file_name)
sublime.status_message("Copy File Name : %s " % file_name)
class ChangeAuthTypeCommand(sublime_plugin.WindowCommand):
def run(self):
try:
settings = setting.load()
auth_type = settings["authentication"]
self.dirs = [setting.AUTHENTICATION_OAUTH2, setting.AUTHENTICATION_PASSWORD, setting.AUTHENTICATION_MAVENSMATE]
show_dirs = []
for dirstr in self.dirs:
if auth_type == dirstr:
show_dirs.append('[○]' + dirstr)
else:
show_dirs.append('[X]' + dirstr)
self.window.show_quick_panel(show_dirs, self.panel_done,sublime.MONOSPACE_FONT)
except Exception as e:
util.show_in_panel(e)
return
def panel_done(self, picked):
if 0 > picked < len(self.dirs):
return
self.auth_type = self.dirs[picked]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
# print('self.auth_type-->')
# print(self.auth_type)
setting.update_authentication_setting(self.auth_type)
class SwitchBrowserCommand(sublime_plugin.WindowCommand):
def run(self):
try:
self.dirs = setting.get_browser_setting2()
dirs = []
for dir in self.dirs:
dirs.append(dir[0])
self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
except Exception as e:
util.show_in_panel(e)
return
def panel_done(self, picked):
if 0 > picked < len(self.dirs):
return
self.browser = self.dirs[picked][0]
self.browser_name = self.dirs[picked][1]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
setting.update_default_browser(self.browser_name)
class SwitchXyProjectCommand(sublime_plugin.TextCommand):
def run(self, edit):
settings = setting.load()
self.settings = settings
projects = settings["projects"]
default_project = settings["default_project"]
dirs = []
dirs_result = []
for project_key in projects.keys():
project_value = projects[project_key]
if default_project == project_key:
tmp = '[○]' + project_key
else:
tmp = '[X]' + project_key
dirs.append(tmp)
dirs_result.append(project_key)
self.results = dirs_result
window = sublime.active_window()
window.show_quick_panel(dirs, self.panel_done,
sublime.MONOSPACE_FONT)
def panel_done(self, picked):
if picked < 0 or picked >= len(self.results):
return
self.picked_project = self.results[picked]
thread = threading.Thread(target=self.main_handle)
thread.start()
util.handle_thread(thread)
def main_handle(self):
setting.update_default_project(self.picked_project)
# if self.settings["use_oauth2"]:
# util.sf_oauth2()
# TODO
# open project
# current_project_dir = setting.mm_project_directory()
# project = self.settings["projects"][self.picked_project]
# workspace = project["workspace"]
# project_file = self.picked_project + ".sublime-project"
# project_file_path = os.path.join(workspace, project_file)
# if os.path.normpath(current_project_dir) != os.path.normpath(workspace):
# print('--->open project')
# print(project_file_path)
# if os.path.isfile(project_file_path):
# util.subl(project_file_path)
# def is_enabled(self):
# use_mavensmate_setting = self.settings["use_mavensmate_setting"]
# return (not use_mavensmate_setting)
# def is_visible(self):
# return self.is_enabled()
class AboutHxyCommand(sublime_plugin.ApplicationCommand):
def run(command):
version_info = sublime.load_settings("sfdc.version.sublime-settings")
version_msg = "%s v%s\n\n%s\n\nCopyright © 2016-2017 By %s\n\nEmail: %s\nHomePage: %s\n" % (
version_info.get("name"),
version_info.get("version"),
version_info.get("description"),
version_info.get("author"),
version_info.get("email"),
version_info.get("homepage")
)
util.show_in_dialog(version_msg)
class ReportIssueXyCommand(sublime_plugin.ApplicationCommand):
def run(command):
version_info = sublime.load_settings("sfdc.version.sublime-settings")
util.open_in_default_browser(version_info.get("issue"))
class HomePageCommand(sublime_plugin.ApplicationCommand):
def run(command):
version_info = sublime.load_settings("sfdc.version.sublime-settings")
util.open_in_default_browser(version_info.get("homepage"))
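# Note: AboutHxyCommand, ReportIssueXyCommand and HomePageCommand all read their metadata
# from sfdc.version.sublime-settings, which is expected to define at least the keys
# name, version, description, author, email, homepage and issue.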
##########################################################################################
#Main Util
##########################################################################################
# salesforce_instance is a Salesforce instance from simple-salesforce
def login_sf_home(self, salesforce_instance, browser='default', browser_path=''):
try:
sf = salesforce_instance
sfdesc = sf.describe()
returl = '/home/home.jsp'
login_url = ('https://{instance}/secur/frontdoor.jsp?sid={sid}&retURL={returl}'
.format(instance=sf.sf_instance,
sid=sf.session_id,
returl=returl))
if browser == 'default':
util.open_in_default_browser(login_url)
else:
util.open_in_browser(login_url, browser, browser_path)
except RequestException as e:
util.show_in_panel("Network connection timeout when issuing REST GET request")
return
except SalesforceExpiredSession as e:
util.show_in_dialog('session expired')
util.re_auth()
return
except SalesforceRefusedRequest as e:
util.show_in_panel('The request has been refused.')
return
except SalesforceError as e:
err = 'Error code: %s \nError message:%s' % (e.status,e.content)
util.show_in_panel(err)
return
except Exception as e:
util.show_in_panel(e)
return
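# Usage sketch (illustrative): `self` is not used in the body, so None can be passed when
# calling this as a plain function; `chrome_path` below is just a placeholder for a
# browser executable path.
#   sf = util.sf_login()
#   login_sf_home(None, sf)                         # open the Home tab in the default browser
#   login_sf_home(None, sf, 'chrome', chrome_path)  # or in a specific browser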
# Format a SOQL statement:
# expand "select * from SObject" into an explicit field list
def soql_format(sf_instance,soql_str):
import re
soql = util.del_comment(soql_str)
match = re.match(r"select\s+\*\s+from[\s\t]+(\w+)([\t\s\S]*)", soql, re.I|re.M)
if match:
sobject = match.group(1)
condition = match.group(2)
fields = get_sobject_fields(sf_instance, sobject)
fields_str = ','.join(fields)
soql = ("select %s from %s %s" % (fields_str, sobject, condition))
return soql
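# Illustrative example (assuming Account's describe() lists only Id, Name and Type):
#   soql_format(sf, "select * from Account where Name != null")
#   -> "select Id,Name,Type from Account  where Name != null"
# The captured where-clause is appended unchanged, which is why it keeps its leading space.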
# get all fields from sobject
def get_sobject_fields(sf_instance, sobject):
fields = []
sftype = sf_instance.get_sobject(sobject)
sftypedesc = sftype.describe()
for field in sftypedesc["fields"]:
fields.append(util.xstr(field["name"]))
return fields
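# get_sobject_fields() returns the field API names in describe() order; for a hypothetical
# Contact describe this might be ['Id', 'FirstName', 'LastName', 'Email'].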
##########################################################################################
#Deprecated Or Delete
##########################################################################################
# # Save the SFDC Object As Excel
# # use xlwt to write excel,deprecated
# class SaveSfdcObjectAsExcelCommand(sublime_plugin.WindowCommand):
# def main_handle(self, savePath = ''):
# try:
# dirPath = os.path.dirname(savePath)
# util.makedir(dirPath)
# sf = util.sf_login()
# # contact = sf.query("SELECT Id, Email FROM Contact limit 1")
# sfdesc = sf.describe()
# book = xlwt.Workbook()
# newSheet_1 = book.add_sheet('オブジェクトリスト')
# newSheet_1.write(0, 0, 'label')
# newSheet_1.write(0, 1, 'name')
# newSheet_1.write(0, 2, 'keyPrefix')
# index = 1;
# sheetIndex = 0;
# for x in sf.describe()["sobjects"]:
# #write to xls
# book.get_sheet(0)
# newSheet_1.write(index, 0, util.xstr(x["label"]))
# newSheet_1.write(index, 1, util.xstr(x["name"]))
# newSheet_1.write(index, 2, util.xstr(x["keyPrefix"]))
# index = index + 1
# #print(sf.Kind__c.describe())
# #print(x["name"])
# #print(x["custom"])
# if x["custom"]:
# sheetIndex += 1
# # sftype = SFType(util.xstr(x["name"]), sf.session_id, sf.sf_instance, sf_version=sf.sf_version,
# # proxies=sf.proxies, session=sf.session)
# sftype = sf.get_sobject(util.xstr(x["name"]))
# #print(x["name"])
# #write to xls
# fieldSheet_1 = book.add_sheet(x["name"])
# book.get_sheet(sheetIndex)
# rowIndex = 0;
# fieldSheet_1.write(rowIndex, 0, "name")
# fieldSheet_1.write(rowIndex, 1, "label")
# fieldSheet_1.write(rowIndex, 2, "type")
# fieldSheet_1.write(rowIndex, 3, "length")
# fieldSheet_1.write(rowIndex, 4, "scale")
# sftypedesc = sftype.describe()
# for field in sftypedesc["fields"]:
# #print(field["name"])
# #print(field["label"])
# #print(field["type"])
# #print(field["length"])
# #print(field["scale"])
# rowIndex += 1
# fieldSheet_1.write(rowIndex, 0, field["name"])
# fieldSheet_1.write(rowIndex, 1, field["label"])
# fieldSheet_1.write(rowIndex, 2, field["type"])
# fieldSheet_1.write(rowIndex, 3, field["length"])
# fieldSheet_1.write(rowIndex, 4, field["scale"])
# #message += x["label"] + "\n"
# # book.save( settings["default_project"] + '_sobject.xls')
# book.save(savePath)
# util.show_in_dialog("Done! Please see the dir below: \n" + dirPath)
# # isOpen = sublime.ok_cancel_dialog('Do you want to open the directory?')
# # if isOpen:
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# def on_input(self, args):
# thread = threading.Thread(target=self.main_handle, args=(args, ))
# thread.start()
# util.handle_thread(thread)
# def run(self):
# settings = setting.load()
# self.fullPath = os.path.join(util.get_default_floder(), settings["default_project"] + '_sobject.xls')
# # show_input_panel(caption, initial_text, on_done, on_change, on_cancel)
# self.window.show_input_panel("Please Input FullPath of fileName: " ,
# self.fullPath, self.on_input, None, None)
# class CreateDtoCodeCommand(sublime_plugin.WindowCommand):
# def run(self):
# try:
# self.sf = util.sf_login()
# dirs = []
# self.results = []
# for x in self.sf.describe()["sobjects"]:
# # dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
# dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
# self.results.append(util.xstr(x["name"]))
# # print(x)
# self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# def panel_done(self, picked):
# if 0 > picked < len(self.results):
# return
# self.picked_name = self.results[picked]
# # print(self.picked_name)
# dirs = ["Custom Fields Only-Exclude Validate", "All Fields-Exclude Validate",
# "Custom Fields Only-Include Validate", "All Fields-Include Validate"]
# self.custom_result = [1, 2, 3, 4]
# sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.select_panel), 10)
# def select_panel(self, picked):
# if 0 > picked < len(self.custom_result):
# return
# self.is_custom_only = (self.custom_result[picked]==1 or self.custom_result[picked]==3)
# self.include_validate = (self.custom_result[picked]>2)
# thread = threading.Thread(target=self.main_handle)
# thread.start()
# util.handle_thread(thread)
# def main_handle(self):
# self.sftype = self.sf.get_sobject(self.picked_name)
# sftypedesc = self.sftype.describe()
# # util.show_in_new_tab(util.get_dto_class(self.picked_name, sftypedesc["fields"], self.is_custom_only))
# dto_class, class_name = util.get_dto_class(self.picked_name, sftypedesc["fields"], self.is_custom_only, self.include_validate)
# file_name = class_name + 'Dto.cls'
# sub_folder = AUTO_CODE_DIR
# util.save_and_open_in_panel(dto_class, file_name, sub_folder )
# class CreateVfCodeCommand(sublime_plugin.WindowCommand):
# def run(self):
# try:
# self.sf = util.sf_login()
# dirs = []
# self.results = []
# for x in self.sf.describe()["sobjects"]:
# # dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
# dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
# self.results.append(util.xstr(x["name"]))
# # print(x)
# self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# def panel_done(self, picked):
# if 0 > picked < len(self.results):
# return
# self.picked_name = self.results[picked]
# # print(self.picked_name)
# dirs = ["Custom Fields Only", "All Fields"]
# self.custom_result = [1, 2]
# sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.select_panel), 10)
# def select_panel(self, picked):
# if 0 > picked < len(self.custom_result):
# return
# self.is_custom_only = (self.custom_result[picked]==1 )
# thread = threading.Thread(target=self.main_handle)
# thread.start()
# util.handle_thread(thread)
# def main_handle(self):
# self.sftype = self.sf.get_sobject(self.picked_name)
# sftypedesc = self.sftype.describe()
# # util.show_in_new_tab(util.get_dto_class(self.picked_name, sftypedesc["fields"], self.is_custom_only))
# source_code, class_name = util.get_vf_class(self.picked_name, sftypedesc["fields"], self.is_custom_only)
# file_name = class_name + '.page'
# sub_folder = AUTO_CODE_DIR
# util.save_and_open_in_panel(source_code, file_name, sub_folder )
# class CreateDaoCodeCommand(sublime_plugin.WindowCommand):
# def run(self):
# try:
# self.sf = util.sf_login()
# dirs = []
# self.results = []
# for x in self.sf.describe()["sobjects"]:
# # dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
# dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
# self.results.append(util.xstr(x["name"]))
# # print(x)
# self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# def panel_done(self, picked):
# if 0 > picked < len(self.results):
# return
# self.picked_name = self.results[picked]
# # print(self.picked_name)
# dirs = ["Custom Fields Only", "All Fields"]
# self.custom_result = [True, False]
# sublime.set_timeout(lambda:self.window.show_quick_panel(dirs, self.select_panel), 10)
# def select_panel(self, picked):
# if 0 > picked < len(self.custom_result):
# return
# self.is_custom_only = self.custom_result[picked]
# thread = threading.Thread(target=self.main_handle)
# thread.start()
# util.handle_thread(thread)
# def main_handle(self):
# self.sftype = self.sf.get_sobject(self.picked_name)
# sftypedesc = self.sftype.describe()
# util.show_in_new_tab(util.get_dao_class(self.picked_name, sftypedesc["fields"], self.is_custom_only))
# class CreateControllerCodeCommand(sublime_plugin.WindowCommand):
# def run(self):
# try:
# self.sf = util.sf_login()
# dirs = []
# self.results = []
# for x in self.sf.describe()["sobjects"]:
# # dirs.append([util.xstr(x["name"]), util.xstr(x["label"])])
# dirs.append(util.xstr(x["name"])+' : '+util.xstr(x["label"]))
# self.results.append(util.xstr(x["name"]))
# # print(x)
# self.window.show_quick_panel(dirs, self.panel_done,sublime.MONOSPACE_FONT)
# except RequestException as e:
# util.show_in_panel("Network connection timeout when issuing REST GET request")
# return
# except SalesforceExpiredSession as e:
# util.show_in_dialog('session expired')
# return
# except SalesforceRefusedRequest as e:
# util.show_in_panel('The request has been refused.')
# return
# except SalesforceError as e:
# err = 'Error code: %s \nError message:%s' % (e.status,e.content)
# util.show_in_panel(err)
# return
# except Exception as e:
# util.show_in_panel(e)
# # util.show_in_dialog('Exception Error!')
# return
# def panel_done(self, picked):
# if 0 > picked < len(self.results):
# return
# self.picked_name = self.results[picked]
# source_code, class_name = util.get_controller_class(self.picked_name)
# file_name = class_name + 'Controller.cls'
# sub_folder = AUTO_CODE_DIR
# util.save_and_open_in_panel(source_code, file_name, sub_folder )
core.py
# -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from typing import Optional
import io
import json
import multiprocessing
import os
import pickle # type: ignore
import re
import signal
import subprocess
import tempfile
import unittest
import warnings
from datetime import timedelta
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from tempfile import NamedTemporaryFile
from time import sleep
from unittest import mock
import sqlalchemy
from dateutil.relativedelta import relativedelta
from numpy.testing import assert_array_almost_equal
from pendulum import utcnow
from airflow import configuration, models
from airflow import jobs, DAG, utils, settings, exceptions
from airflow.bin import cli
from airflow.configuration import AirflowConfigException, run_command
from airflow.exceptions import AirflowException
from airflow.executors import SequentialExecutor
from airflow.hooks.base_hook import BaseHook
from airflow.hooks.sqlite_hook import SqliteHook
from airflow.models import (
BaseOperator,
Connection,
TaskFail,
DagBag,
DagRun,
Pool,
DagModel,
TaskInstance,
Variable,
)
from airflow.operators.bash_operator import BashOperator
from airflow.operators.check_operator import CheckOperator, ValueCheckOperator
from airflow.operators.dagrun_operator import TriggerDagRunOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
from airflow.settings import Session
from airflow.utils import timezone
from airflow.utils.dates import (
days_ago, infer_time_unit, round_time,
scale_time_units
)
from airflow.utils.state import State
from airflow.utils.timezone import datetime
from airflow.hooks import hdfs_hook
from tests.test_utils.config import conf_vars
DEV_NULL = '/dev/null'
TEST_DAG_FOLDER = os.path.join(
os.path.dirname(os.path.realpath(__file__)), 'dags')
DEFAULT_DATE = datetime(2015, 1, 1)
DEFAULT_DATE_ISO = DEFAULT_DATE.isoformat()
DEFAULT_DATE_DS = DEFAULT_DATE_ISO[:10]
TEST_DAG_ID = 'unit_tests'
EXAMPLE_DAG_DEFAULT_DATE = days_ago(2)
class OperatorSubclass(BaseOperator):
"""
An operator to test template substitution
"""
template_fields = ['some_templated_field']
def __init__(self, some_templated_field, *args, **kwargs):
super().__init__(*args, **kwargs)
self.some_templated_field = some_templated_field
def execute(self, context):
pass
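# Because OperatorSubclass declares template_fields = ['some_templated_field'], Airflow
# renders that attribute with Jinja before execute() is called; the test_complex_template
# and test_template_with_* cases below rely on this to assert that e.g. '{{ ds }}'
# resolves to the execution date string.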
class TestCore(unittest.TestCase):
TEST_SCHEDULE_WITH_NO_PREVIOUS_RUNS_DAG_ID = TEST_DAG_ID + 'test_schedule_dag_no_previous_runs'
TEST_SCHEDULE_DAG_FAKE_SCHEDULED_PREVIOUS_DAG_ID = \
TEST_DAG_ID + 'test_schedule_dag_fake_scheduled_previous'
TEST_SCHEDULE_DAG_NO_END_DATE_UP_TO_TODAY_ONLY_DAG_ID = \
TEST_DAG_ID + 'test_schedule_dag_no_end_date_up_to_today_only'
TEST_SCHEDULE_ONCE_DAG_ID = TEST_DAG_ID + 'test_schedule_dag_once'
TEST_SCHEDULE_RELATIVEDELTA_DAG_ID = TEST_DAG_ID + 'test_schedule_dag_relativedelta'
TEST_SCHEDULE_START_END_DATES_DAG_ID = TEST_DAG_ID + 'test_schedule_dag_start_end_dates'
default_scheduler_args = {"num_runs": 1}
def setUp(self):
self.dagbag = DagBag(
dag_folder=DEV_NULL, include_examples=True)
self.args = {'owner': 'airflow', 'start_date': DEFAULT_DATE}
self.dag = DAG(TEST_DAG_ID, default_args=self.args)
self.dag_bash = self.dagbag.dags['example_bash_operator']
self.runme_0 = self.dag_bash.get_task('runme_0')
self.run_after_loop = self.dag_bash.get_task('run_after_loop')
self.run_this_last = self.dag_bash.get_task('run_this_last')
def tearDown(self):
if os.environ.get('KUBERNETES_VERSION') is not None:
return
dag_ids_to_clean = [
TEST_DAG_ID,
self.TEST_SCHEDULE_WITH_NO_PREVIOUS_RUNS_DAG_ID,
self.TEST_SCHEDULE_DAG_FAKE_SCHEDULED_PREVIOUS_DAG_ID,
self.TEST_SCHEDULE_DAG_NO_END_DATE_UP_TO_TODAY_ONLY_DAG_ID,
self.TEST_SCHEDULE_ONCE_DAG_ID,
self.TEST_SCHEDULE_RELATIVEDELTA_DAG_ID,
self.TEST_SCHEDULE_START_END_DATES_DAG_ID,
]
session = Session()
session.query(DagRun).filter(
DagRun.dag_id.in_(dag_ids_to_clean)).delete(
synchronize_session=False)
session.query(TaskInstance).filter(
TaskInstance.dag_id.in_(dag_ids_to_clean)).delete(
synchronize_session=False)
session.query(TaskFail).filter(
TaskFail.dag_id.in_(dag_ids_to_clean)).delete(
synchronize_session=False)
session.commit()
session.close()
def test_schedule_dag_no_previous_runs(self):
"""
Tests scheduling a dag with no previous runs
"""
dag = DAG(self.TEST_SCHEDULE_WITH_NO_PREVIOUS_RUNS_DAG_ID)
dag.add_task(BaseOperator(
task_id="faketastic",
owner='Also fake',
start_date=datetime(2015, 1, 2, 0, 0)))
dag_run = jobs.SchedulerJob(**self.default_scheduler_args).create_dag_run(dag)
self.assertIsNotNone(dag_run)
self.assertEqual(dag.dag_id, dag_run.dag_id)
self.assertIsNotNone(dag_run.run_id)
self.assertNotEqual('', dag_run.run_id)
self.assertEqual(
datetime(2015, 1, 2, 0, 0),
dag_run.execution_date,
msg='dag_run.execution_date did not match expectation: {0}'
.format(dag_run.execution_date)
)
self.assertEqual(State.RUNNING, dag_run.state)
self.assertFalse(dag_run.external_trigger)
dag.clear()
def test_schedule_dag_relativedelta(self):
"""
Tests scheduling a dag with a relativedelta schedule_interval
"""
delta = relativedelta(hours=+1)
dag = DAG(self.TEST_SCHEDULE_RELATIVEDELTA_DAG_ID,
schedule_interval=delta)
dag.add_task(BaseOperator(
task_id="faketastic",
owner='Also fake',
start_date=datetime(2015, 1, 2, 0, 0)))
dag_run = jobs.SchedulerJob(**self.default_scheduler_args).create_dag_run(dag)
self.assertIsNotNone(dag_run)
self.assertEqual(dag.dag_id, dag_run.dag_id)
self.assertIsNotNone(dag_run.run_id)
self.assertNotEqual('', dag_run.run_id)
self.assertEqual(
datetime(2015, 1, 2, 0, 0),
dag_run.execution_date,
msg='dag_run.execution_date did not match expectation: {0}'
.format(dag_run.execution_date)
)
self.assertEqual(State.RUNNING, dag_run.state)
self.assertFalse(dag_run.external_trigger)
dag_run2 = jobs.SchedulerJob(**self.default_scheduler_args).create_dag_run(dag)
self.assertIsNotNone(dag_run2)
self.assertEqual(dag.dag_id, dag_run2.dag_id)
self.assertIsNotNone(dag_run2.run_id)
self.assertNotEqual('', dag_run2.run_id)
self.assertEqual(
datetime(2015, 1, 2, 0, 0) + delta,
dag_run2.execution_date,
msg='dag_run2.execution_date did not match expectation: {0}'
.format(dag_run2.execution_date)
)
self.assertEqual(State.RUNNING, dag_run2.state)
self.assertFalse(dag_run2.external_trigger)
dag.clear()
def test_schedule_dag_fake_scheduled_previous(self):
"""
Test scheduling a dag where a prior DagRun already exists
with the same run_id that the next run would use
"""
delta = timedelta(hours=1)
dag = DAG(self.TEST_SCHEDULE_DAG_FAKE_SCHEDULED_PREVIOUS_DAG_ID,
schedule_interval=delta,
start_date=DEFAULT_DATE)
dag.add_task(BaseOperator(
task_id="faketastic",
owner='Also fake',
start_date=DEFAULT_DATE))
scheduler = jobs.SchedulerJob(**self.default_scheduler_args)
dag.create_dagrun(run_id=DagRun.id_for_date(DEFAULT_DATE),
execution_date=DEFAULT_DATE,
state=State.SUCCESS,
external_trigger=True)
dag_run = scheduler.create_dag_run(dag)
self.assertIsNotNone(dag_run)
self.assertEqual(dag.dag_id, dag_run.dag_id)
self.assertIsNotNone(dag_run.run_id)
self.assertNotEqual('', dag_run.run_id)
self.assertEqual(
DEFAULT_DATE + delta,
dag_run.execution_date,
msg='dag_run.execution_date did not match expectation: {0}'
.format(dag_run.execution_date)
)
self.assertEqual(State.RUNNING, dag_run.state)
self.assertFalse(dag_run.external_trigger)
def test_schedule_dag_once(self):
"""
Tests scheduling a dag scheduled for @once - should be scheduled the first time
it is called, and not scheduled the second.
"""
dag = DAG(self.TEST_SCHEDULE_ONCE_DAG_ID)
dag.schedule_interval = '@once'
dag.add_task(BaseOperator(
task_id="faketastic",
owner='Also fake',
start_date=datetime(2015, 1, 2, 0, 0)))
dag_run = jobs.SchedulerJob(**self.default_scheduler_args).create_dag_run(dag)
dag_run2 = jobs.SchedulerJob(**self.default_scheduler_args).create_dag_run(dag)
self.assertIsNotNone(dag_run)
self.assertIsNone(dag_run2)
dag.clear()
def test_fractional_seconds(self):
"""
Tests if fractional seconds are stored in the database
"""
dag = DAG(TEST_DAG_ID + 'test_fractional_seconds')
dag.schedule_interval = '@once'
dag.add_task(BaseOperator(
task_id="faketastic",
owner='Also fake',
start_date=datetime(2015, 1, 2, 0, 0)))
start_date = timezone.utcnow()
run = dag.create_dagrun(
run_id='test_' + start_date.isoformat(),
execution_date=start_date,
start_date=start_date,
state=State.RUNNING,
external_trigger=False
)
run.refresh_from_db()
self.assertEqual(start_date, run.execution_date,
"dag run execution_date loses precision")
self.assertEqual(start_date, run.start_date,
"dag run start_date loses precision ")
def test_schedule_dag_start_end_dates(self):
"""
Tests that an attempt to schedule a task after the Dag's end_date
does not succeed.
"""
delta = timedelta(hours=1)
runs = 3
start_date = DEFAULT_DATE
end_date = start_date + (runs - 1) * delta
dag = DAG(self.TEST_SCHEDULE_START_END_DATES_DAG_ID,
start_date=start_date,
end_date=end_date,
schedule_interval=delta)
dag.add_task(BaseOperator(task_id='faketastic', owner='Also fake'))
# Create and schedule the dag runs
dag_runs = []
scheduler = jobs.SchedulerJob(**self.default_scheduler_args)
for _ in range(runs):
dag_runs.append(scheduler.create_dag_run(dag))
additional_dag_run = scheduler.create_dag_run(dag)
for dag_run in dag_runs:
self.assertIsNotNone(dag_run)
self.assertIsNone(additional_dag_run)
def test_schedule_dag_no_end_date_up_to_today_only(self):
"""
Tests that a Dag created without an end_date can only be scheduled up
to and including the current datetime.
For example, if today is 2016-01-01 and we are scheduling from a
start_date of 2015-01-01, only jobs up to, but not including
2016-01-01 should be scheduled.
"""
session = settings.Session()
delta = timedelta(days=1)
now = utcnow()
start_date = now.subtract(weeks=1)
runs = (now - start_date).days
dag = DAG(self.TEST_SCHEDULE_DAG_NO_END_DATE_UP_TO_TODAY_ONLY_DAG_ID,
start_date=start_date,
schedule_interval=delta)
dag.add_task(BaseOperator(task_id='faketastic', owner='Also fake'))
dag_runs = []
scheduler = jobs.SchedulerJob(**self.default_scheduler_args)
for _ in range(runs):
dag_run = scheduler.create_dag_run(dag)
dag_runs.append(dag_run)
# Mark the DagRun as complete
dag_run.state = State.SUCCESS
session.merge(dag_run)
session.commit()
# Attempt to schedule an additional dag run (for 2016-01-01)
additional_dag_run = scheduler.create_dag_run(dag)
for dag_run in dag_runs:
self.assertIsNotNone(dag_run)
self.assertIsNone(additional_dag_run)
def test_confirm_unittest_mod(self):
self.assertTrue(configuration.conf.get('core', 'unit_test_mode'))
def test_pickling(self):
dp = self.dag.pickle()
self.assertEqual(dp.pickle.dag_id, self.dag.dag_id)
def test_rich_comparison_ops(self):
class DAGsubclass(DAG):
pass
dag_eq = DAG(TEST_DAG_ID, default_args=self.args)
dag_diff_load_time = DAG(TEST_DAG_ID, default_args=self.args)
dag_diff_name = DAG(TEST_DAG_ID + '_neq', default_args=self.args)
dag_subclass = DAGsubclass(TEST_DAG_ID, default_args=self.args)
dag_subclass_diff_name = DAGsubclass(
TEST_DAG_ID + '2', default_args=self.args)
for d in [dag_eq, dag_diff_name, dag_subclass, dag_subclass_diff_name]:
d.last_loaded = self.dag.last_loaded
# test identity equality
self.assertEqual(self.dag, self.dag)
# test dag (in)equality based on _comps
self.assertEqual(dag_eq, self.dag)
self.assertNotEqual(dag_diff_name, self.dag)
self.assertNotEqual(dag_diff_load_time, self.dag)
# test dag inequality based on type even if _comps happen to match
self.assertNotEqual(dag_subclass, self.dag)
# a dag should equal an unpickled version of itself
d = pickle.dumps(self.dag)
self.assertEqual(pickle.loads(d), self.dag)
# dags are ordered based on dag_id no matter what the type is
self.assertLess(self.dag, dag_diff_name)
self.assertGreater(self.dag, dag_diff_load_time)
self.assertLess(self.dag, dag_subclass_diff_name)
# greater than should have been created automatically by functools
self.assertGreater(dag_diff_name, self.dag)
# hashes are non-random and match equality
self.assertEqual(hash(self.dag), hash(self.dag))
self.assertEqual(hash(dag_eq), hash(self.dag))
self.assertNotEqual(hash(dag_diff_name), hash(self.dag))
self.assertNotEqual(hash(dag_subclass), hash(self.dag))
def test_check_operators(self):
conn_id = "sqlite_default"
captainHook = BaseHook.get_hook(conn_id=conn_id)
captainHook.run("CREATE TABLE operator_test_table (a, b)")
captainHook.run("insert into operator_test_table values (1,2)")
t = CheckOperator(
task_id='check',
sql="select count(*) from operator_test_table",
conn_id=conn_id,
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
t = ValueCheckOperator(
task_id='value_check',
pass_value=95,
tolerance=0.1,
conn_id=conn_id,
sql="SELECT 100",
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
captainHook.run("drop table operator_test_table")
def test_clear_api(self):
task = self.dag_bash.tasks[0]
task.clear(
start_date=DEFAULT_DATE, end_date=DEFAULT_DATE,
upstream=True, downstream=True)
ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
ti.are_dependents_done()
def test_illegal_args(self):
"""
Tests that Operators reject illegal arguments
"""
with warnings.catch_warnings(record=True) as w:
BashOperator(
task_id='test_illegal_args',
bash_command='echo success',
dag=self.dag,
illegal_argument_1234='hello?')
self.assertTrue(
issubclass(w[0].category, PendingDeprecationWarning))
self.assertIn(
('Invalid arguments were passed to BashOperator '
'(task_id: test_illegal_args).'),
w[0].message.args[0])
def test_bash_operator(self):
t = BashOperator(
task_id='test_bash_operator',
bash_command="echo success",
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_bash_operator_multi_byte_output(self):
t = BashOperator(
task_id='test_multi_byte_bash_operator',
bash_command="echo \u2600",
dag=self.dag,
output_encoding='utf-8')
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_bash_operator_kill(self):
import psutil
sleep_time = "100%d" % os.getpid()
t = BashOperator(
task_id='test_bash_operator_kill',
execution_timeout=timedelta(seconds=1),
bash_command="/bin/bash -c 'sleep %s'" % sleep_time,
dag=self.dag)
self.assertRaises(
exceptions.AirflowTaskTimeout,
t.run,
start_date=DEFAULT_DATE, end_date=DEFAULT_DATE)
sleep(2)
pid = -1
for proc in psutil.process_iter():
if proc.cmdline() == ['sleep', sleep_time]:
pid = proc.pid
if pid != -1:
os.kill(pid, signal.SIGTERM)
self.fail("BashOperator's subprocess still running after stopping on timeout!")
def test_on_failure_callback(self):
# Annoying workaround for nonlocal not existing in python 2
data = {'called': False}
def check_failure(context, test_case=self):
data['called'] = True
error = context.get('exception')
test_case.assertIsInstance(error, AirflowException)
t = BashOperator(
task_id='check_on_failure_callback',
bash_command="exit 1",
dag=self.dag,
on_failure_callback=check_failure)
self.assertRaises(
exceptions.AirflowException,
t.run,
start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
self.assertTrue(data['called'])
def test_trigger_dagrun(self):
def trigga(_, obj):
if True:
return obj
t = TriggerDagRunOperator(
task_id='test_trigger_dagrun',
trigger_dag_id='example_bash_operator',
python_callable=trigga,
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_dryrun(self):
t = BashOperator(
task_id='test_dryrun',
bash_command="echo success",
dag=self.dag)
t.dry_run()
def test_sqlite(self):
import airflow.operators.sqlite_operator
t = airflow.operators.sqlite_operator.SqliteOperator(
task_id='time_sqlite',
sql="CREATE TABLE IF NOT EXISTS unitest (dummy VARCHAR(20))",
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_timeout(self):
t = PythonOperator(
task_id='test_timeout',
execution_timeout=timedelta(seconds=1),
python_callable=lambda: sleep(5),
dag=self.dag)
self.assertRaises(
exceptions.AirflowTaskTimeout,
t.run,
start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_python_op(self):
def test_py_op(templates_dict, ds, **kwargs):
if not templates_dict['ds'] == ds:
raise Exception("failure")
t = PythonOperator(
task_id='test_py_op',
provide_context=True,
python_callable=test_py_op,
templates_dict={'ds': "{{ ds }}"},
dag=self.dag)
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_complex_template(self):
def verify_templated_field(context):
self.assertEqual(context['ti'].task.some_templated_field['bar'][1],
context['ds'])
t = OperatorSubclass(
task_id='test_complex_template',
some_templated_field={
'foo': '123',
'bar': ['baz', '{{ ds }}']
},
dag=self.dag)
t.execute = verify_templated_field
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_template_with_variable(self):
"""
Test the availability of variables in templates
"""
val = {
'test_value': 'a test value'
}
Variable.set("a_variable", val['test_value'])
def verify_templated_field(context):
self.assertEqual(context['ti'].task.some_templated_field,
val['test_value'])
t = OperatorSubclass(
task_id='test_complex_template',
some_templated_field='{{ var.value.a_variable }}',
dag=self.dag)
t.execute = verify_templated_field
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_template_with_json_variable(self):
"""
Test the availability of variables (serialized as JSON) in templates
"""
val = {
'test_value': {'foo': 'bar', 'obj': {'v1': 'yes', 'v2': 'no'}}
}
Variable.set("a_variable", val['test_value'], serialize_json=True)
def verify_templated_field(context):
self.assertEqual(context['ti'].task.some_templated_field,
val['test_value']['obj']['v2'])
t = OperatorSubclass(
task_id='test_complex_template',
some_templated_field='{{ var.json.a_variable.obj.v2 }}',
dag=self.dag)
t.execute = verify_templated_field
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_template_with_json_variable_as_value(self):
"""
Test the availability of variables (serialized as JSON) in templates, but
accessed as a value
"""
val = {
'test_value': {'foo': 'bar'}
}
Variable.set("a_variable", val['test_value'], serialize_json=True)
def verify_templated_field(context):
self.assertEqual(context['ti'].task.some_templated_field,
'{\n "foo": "bar"\n}')
t = OperatorSubclass(
task_id='test_complex_template',
some_templated_field='{{ var.value.a_variable }}',
dag=self.dag)
t.execute = verify_templated_field
t.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
def test_template_non_bool(self):
"""
Test templates can handle objects with no sense of truthiness
"""
class NonBoolObject:
def __len__(self):
return NotImplemented
def __bool__(self):
return NotImplemented
t = OperatorSubclass(
task_id='test_bad_template_obj',
some_templated_field=NonBoolObject(),
dag=self.dag)
t.resolve_template_files()
def test_task_get_template(self):
TI = TaskInstance
ti = TI(
task=self.runme_0, execution_date=DEFAULT_DATE)
ti.dag = self.dag_bash
ti.run(ignore_ti_state=True)
context = ti.get_template_context()
# DEFAULT DATE is 2015-01-01
self.assertEqual(context['ds'], '2015-01-01')
self.assertEqual(context['ds_nodash'], '20150101')
# next_ds is 2015-01-02 as the dag interval is daily
self.assertEqual(context['next_ds'], '2015-01-02')
self.assertEqual(context['next_ds_nodash'], '20150102')
# prev_ds is 2014-12-31 as the dag interval is daily
self.assertEqual(context['prev_ds'], '2014-12-31')
self.assertEqual(context['prev_ds_nodash'], '20141231')
self.assertEqual(context['ts'], '2015-01-01T00:00:00+00:00')
self.assertEqual(context['ts_nodash'], '20150101T000000')
self.assertEqual(context['ts_nodash_with_tz'], '20150101T000000+0000')
self.assertEqual(context['yesterday_ds'], '2014-12-31')
self.assertEqual(context['yesterday_ds_nodash'], '20141231')
self.assertEqual(context['tomorrow_ds'], '2015-01-02')
self.assertEqual(context['tomorrow_ds_nodash'], '20150102')
def test_local_task_job(self):
TI = TaskInstance
ti = TI(
task=self.runme_0, execution_date=DEFAULT_DATE)
job = jobs.LocalTaskJob(task_instance=ti, ignore_ti_state=True)
job.run()
def test_raw_job(self):
TI = TaskInstance
ti = TI(
task=self.runme_0, execution_date=DEFAULT_DATE)
ti.dag = self.dag_bash
ti.run(ignore_ti_state=True)
def test_variable_set_get_round_trip(self):
Variable.set("tested_var_set_id", "Monday morning breakfast")
self.assertEqual("Monday morning breakfast", Variable.get("tested_var_set_id"))
def test_variable_set_get_round_trip_json(self):
value = {"a": 17, "b": 47}
Variable.set("tested_var_set_id", value, serialize_json=True)
self.assertEqual(value, Variable.get("tested_var_set_id", deserialize_json=True))
def test_get_non_existing_var_should_return_default(self):
default_value = "some default val"
self.assertEqual(default_value, Variable.get("thisIdDoesNotExist",
default_var=default_value))
def test_get_non_existing_var_should_raise_key_error(self):
with self.assertRaises(KeyError):
Variable.get("thisIdDoesNotExist")
def test_get_non_existing_var_with_none_default_should_return_none(self):
self.assertIsNone(Variable.get("thisIdDoesNotExist", default_var=None))
def test_get_non_existing_var_should_not_deserialize_json_default(self):
default_value = "}{ this is a non JSON default }{"
self.assertEqual(default_value, Variable.get("thisIdDoesNotExist",
default_var=default_value,
deserialize_json=True))
def test_variable_setdefault_round_trip(self):
key = "tested_var_setdefault_1_id"
value = "Monday morning breakfast in Paris"
Variable.setdefault(key, value)
self.assertEqual(value, Variable.get(key))
def test_variable_setdefault_round_trip_json(self):
key = "tested_var_setdefault_2_id"
value = {"city": 'Paris', "Happiness": True}
Variable.setdefault(key, value, deserialize_json=True)
self.assertEqual(value, Variable.get(key, deserialize_json=True))
def test_variable_setdefault_existing_json(self):
key = "tested_var_setdefault_2_id"
value = {"city": 'Paris', "Happiness": True}
Variable.set(key, value, serialize_json=True)
val = Variable.setdefault(key, value, deserialize_json=True)
# Check that both the returned value and the stored value are handled correctly.
self.assertEqual(value, val)
self.assertEqual(value, Variable.get(key, deserialize_json=True))
def test_variable_delete(self):
key = "tested_var_delete"
value = "to be deleted"
# No-op if the variable doesn't exist
Variable.delete(key)
with self.assertRaises(KeyError):
Variable.get(key)
# Set the variable
Variable.set(key, value)
self.assertEqual(value, Variable.get(key))
# Delete the variable
Variable.delete(key)
with self.assertRaises(KeyError):
Variable.get(key)
def test_parameterized_config_gen(self):
cfg = configuration.parameterized_config(configuration.DEFAULT_CONFIG)
# making sure some basic building blocks are present:
self.assertIn("[core]", cfg)
self.assertIn("dags_folder", cfg)
self.assertIn("sql_alchemy_conn", cfg)
self.assertIn("fernet_key", cfg)
# making sure replacement actually happened
self.assertNotIn("{AIRFLOW_HOME}", cfg)
self.assertNotIn("{FERNET_KEY}", cfg)
def test_config_use_original_when_original_and_fallback_are_present(self):
self.assertTrue(configuration.conf.has_option("core", "FERNET_KEY"))
self.assertFalse(configuration.conf.has_option("core", "FERNET_KEY_CMD"))
FERNET_KEY = configuration.conf.get('core', 'FERNET_KEY')
with conf_vars({('core', 'FERNET_KEY_CMD'): 'printf HELLO'}):
FALLBACK_FERNET_KEY = configuration.conf.get(
"core",
"FERNET_KEY"
)
self.assertEqual(FERNET_KEY, FALLBACK_FERNET_KEY)
def test_config_throw_error_when_original_and_fallback_is_absent(self):
self.assertTrue(configuration.conf.has_option("core", "FERNET_KEY"))
self.assertFalse(configuration.conf.has_option("core", "FERNET_KEY_CMD"))
with conf_vars({('core', 'fernet_key'): None}):
with self.assertRaises(AirflowConfigException) as cm:
configuration.conf.get("core", "FERNET_KEY")
exception = str(cm.exception)
message = "section/key [core/fernet_key] not found in config"
self.assertEqual(message, exception)
def test_config_override_original_when_non_empty_envvar_is_provided(self):
key = "AIRFLOW__CORE__FERNET_KEY"
value = "some value"
self.assertNotIn(key, os.environ)
os.environ[key] = value
FERNET_KEY = configuration.conf.get('core', 'FERNET_KEY')
self.assertEqual(value, FERNET_KEY)
# restore the envvar back to the original state
del os.environ[key]
def test_config_override_original_when_empty_envvar_is_provided(self):
key = "AIRFLOW__CORE__FERNET_KEY"
value = ""
self.assertNotIn(key, os.environ)
os.environ[key] = value
FERNET_KEY = configuration.conf.get('core', 'FERNET_KEY')
self.assertEqual(value, FERNET_KEY)
# restore the envvar back to the original state
del os.environ[key]
def test_round_time(self):
rt1 = round_time(datetime(2015, 1, 1, 6), timedelta(days=1))
self.assertEqual(datetime(2015, 1, 1, 0, 0), rt1)
rt2 = round_time(datetime(2015, 1, 2), relativedelta(months=1))
self.assertEqual(datetime(2015, 1, 1, 0, 0), rt2)
rt3 = round_time(datetime(2015, 9, 16, 0, 0), timedelta(1), datetime(
2015, 9, 14, 0, 0))
self.assertEqual(datetime(2015, 9, 16, 0, 0), rt3)
rt4 = round_time(datetime(2015, 9, 15, 0, 0), timedelta(1), datetime(
2015, 9, 14, 0, 0))
self.assertEqual(datetime(2015, 9, 15, 0, 0), rt4)
rt5 = round_time(datetime(2015, 9, 14, 0, 0), timedelta(1), datetime(
2015, 9, 14, 0, 0))
self.assertEqual(datetime(2015, 9, 14, 0, 0), rt5)
rt6 = round_time(datetime(2015, 9, 13, 0, 0), timedelta(1), datetime(
2015, 9, 14, 0, 0))
self.assertEqual(datetime(2015, 9, 14, 0, 0), rt6)
def test_infer_time_unit(self):
self.assertEqual('minutes', infer_time_unit([130, 5400, 10]))
self.assertEqual('seconds', infer_time_unit([110, 50, 10, 100]))
self.assertEqual('hours', infer_time_unit([100000, 50000, 10000, 20000]))
self.assertEqual('days', infer_time_unit([200000, 100000]))
def test_scale_time_units(self):
# use assert_almost_equal from numpy.testing since we are comparing
# floating point arrays
arr1 = scale_time_units([130, 5400, 10], 'minutes')
assert_array_almost_equal(arr1, [2.167, 90.0, 0.167], decimal=3)
arr2 = scale_time_units([110, 50, 10, 100], 'seconds')
assert_array_almost_equal(arr2, [110.0, 50.0, 10.0, 100.0], decimal=3)
arr3 = scale_time_units([100000, 50000, 10000, 20000], 'hours')
assert_array_almost_equal(arr3, [27.778, 13.889, 2.778, 5.556],
decimal=3)
arr4 = scale_time_units([200000, 100000], 'days')
assert_array_almost_equal(arr4, [2.315, 1.157], decimal=3)
def test_bad_trigger_rule(self):
with self.assertRaises(AirflowException):
DummyOperator(
task_id='test_bad_trigger',
trigger_rule="non_existent",
dag=self.dag)
def test_terminate_task(self):
"""If a task instance's db state get deleted, it should fail"""
TI = TaskInstance
dag = self.dagbag.dags.get('test_utils')
task = dag.task_dict.get('sleeps_forever')
ti = TI(task=task, execution_date=DEFAULT_DATE)
job = jobs.LocalTaskJob(
task_instance=ti, ignore_ti_state=True, executor=SequentialExecutor())
# Running task instance asynchronously
p = multiprocessing.Process(target=job.run)
p.start()
sleep(5)
settings.engine.dispose()
session = settings.Session()
ti.refresh_from_db(session=session)
# making sure it's actually running
self.assertEqual(State.RUNNING, ti.state)
ti = session.query(TI).filter_by(
dag_id=task.dag_id,
task_id=task.task_id,
execution_date=DEFAULT_DATE
).one()
# deleting the instance should result in a failure
session.delete(ti)
session.commit()
# waiting for the async task to finish
p.join()
# making sure that the task ended up as failed
ti.refresh_from_db(session=session)
self.assertEqual(State.FAILED, ti.state)
session.close()
def test_task_fail_duration(self):
"""If a task fails, the duration should be recorded in TaskFail"""
p = BashOperator(
task_id='pass_sleepy',
bash_command='sleep 3',
dag=self.dag)
f = BashOperator(
task_id='fail_sleepy',
bash_command='sleep 5',
execution_timeout=timedelta(seconds=3),
retry_delay=timedelta(seconds=0),
dag=self.dag)
session = settings.Session()
try:
p.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
except Exception:
pass
try:
f.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
except Exception:
pass
p_fails = session.query(TaskFail).filter_by(
task_id='pass_sleepy',
dag_id=self.dag.dag_id,
execution_date=DEFAULT_DATE).all()
f_fails = session.query(TaskFail).filter_by(
task_id='fail_sleepy',
dag_id=self.dag.dag_id,
execution_date=DEFAULT_DATE).all()
self.assertEqual(0, len(p_fails))
self.assertEqual(1, len(f_fails))
self.assertGreaterEqual(sum([f.duration for f in f_fails]), 3)
def test_run_command(self):
write = r'sys.stdout.buffer.write("\u1000foo".encode("utf8"))'
cmd = 'import sys; {0}; sys.stdout.flush()'.format(write)
self.assertEqual(run_command("python -c '{0}'".format(cmd)), '\u1000foo')
self.assertEqual(run_command('echo "foo bar"'), 'foo bar\n')
self.assertRaises(AirflowConfigException, run_command, 'bash -c "exit 1"')
def test_trigger_dagrun_with_execution_date(self):
utc_now = timezone.utcnow()
run_id = 'trig__' + utc_now.isoformat()
def payload_generator(context, object): # pylint: disable=unused-argument
object.run_id = run_id
return object
task = TriggerDagRunOperator(task_id='test_trigger_dagrun_with_execution_date',
trigger_dag_id='example_bash_operator',
python_callable=payload_generator,
execution_date=utc_now,
dag=self.dag)
task.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
dag_runs = DagRun.find(dag_id='example_bash_operator', run_id=run_id)
self.assertEqual(len(dag_runs), 1)
dag_run = dag_runs[0]
self.assertEqual(dag_run.execution_date, utc_now)
def test_trigger_dagrun_with_str_execution_date(self):
utc_now_str = timezone.utcnow().isoformat()
self.assertIsInstance(utc_now_str, (str,))
run_id = 'trig__' + utc_now_str
def payload_generator(context, object): # pylint: disable=unused-argument
object.run_id = run_id
return object
task = TriggerDagRunOperator(
task_id='test_trigger_dagrun_with_str_execution_date',
trigger_dag_id='example_bash_operator',
python_callable=payload_generator,
execution_date=utc_now_str,
dag=self.dag)
task.run(start_date=DEFAULT_DATE, end_date=DEFAULT_DATE, ignore_ti_state=True)
dag_runs = DagRun.find(dag_id='example_bash_operator', run_id=run_id)
self.assertEqual(len(dag_runs), 1)
dag_run = dag_runs[0]
self.assertEqual(dag_run.execution_date.isoformat(), utc_now_str)
def test_trigger_dagrun_with_templated_execution_date(self):
task = TriggerDagRunOperator(
task_id='test_trigger_dagrun_with_str_execution_date',
trigger_dag_id='example_bash_operator',
execution_date='{{ execution_date }}',
dag=self.dag)
self.assertTrue(isinstance(task.execution_date, str))
self.assertEqual(task.execution_date, '{{ execution_date }}')
ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
ti.render_templates()
self.assertEqual(timezone.parse(task.execution_date), DEFAULT_DATE)
def test_externally_triggered_dagrun(self):
TI = TaskInstance
# Create the dagrun between two "scheduled" execution dates of the DAG
EXECUTION_DATE = DEFAULT_DATE + timedelta(days=2)
EXECUTION_DS = EXECUTION_DATE.strftime('%Y-%m-%d')
EXECUTION_DS_NODASH = EXECUTION_DS.replace('-', '')
dag = DAG(
TEST_DAG_ID,
default_args=self.args,
schedule_interval=timedelta(weeks=1),
start_date=DEFAULT_DATE)
task = DummyOperator(task_id='test_externally_triggered_dag_context',
dag=dag)
dag.create_dagrun(run_id=DagRun.id_for_date(EXECUTION_DATE),
execution_date=EXECUTION_DATE,
state=State.RUNNING,
external_trigger=True)
task.run(
start_date=EXECUTION_DATE, end_date=EXECUTION_DATE)
ti = TI(task=task, execution_date=EXECUTION_DATE)
context = ti.get_template_context()
# next_ds/prev_ds should be the execution date for manually triggered runs
self.assertEqual(context['next_ds'], EXECUTION_DS)
self.assertEqual(context['next_ds_nodash'], EXECUTION_DS_NODASH)
self.assertEqual(context['prev_ds'], EXECUTION_DS)
self.assertEqual(context['prev_ds_nodash'], EXECUTION_DS_NODASH)
class TestCli(unittest.TestCase):
TEST_USER1_EMAIL = 'test-user1@example.com'
TEST_USER2_EMAIL = 'test-user2@example.com'
@classmethod
def setUpClass(cls):
super().setUpClass()
cls._cleanup()
def setUp(self):
super().setUp()
from airflow.www import app as application
self.app, self.appbuilder = application.create_app(session=Session, testing=True)
self.app.config['TESTING'] = True
self.parser = cli.CLIFactory.get_parser()
self.dagbag = DagBag(dag_folder=DEV_NULL, include_examples=True)
settings.configure_orm()
self.session = Session
def tearDown(self):
self._cleanup(session=self.session)
for email in [self.TEST_USER1_EMAIL, self.TEST_USER2_EMAIL]:
test_user = self.appbuilder.sm.find_user(email=email)
if test_user:
self.appbuilder.sm.del_register_user(test_user)
for role_name in ['FakeTeamA', 'FakeTeamB']:
if self.appbuilder.sm.find_role(role_name):
self.appbuilder.sm.delete_role(role_name)
super().tearDown()
@staticmethod
def _cleanup(session=None):
if session is None:
session = Session()
session.query(Pool).delete()
session.query(Variable).delete()
session.commit()
session.close()
def test_cli_list_dags(self):
args = self.parser.parse_args(['dags', 'list', '--report'])
cli.list_dags(args)
def test_cli_list_dag_runs(self):
cli.trigger_dag(self.parser.parse_args([
'dags', 'trigger', 'example_bash_operator', ]))
args = self.parser.parse_args(['dags', 'list_runs',
'example_bash_operator',
'--no_backfill'])
cli.list_dag_runs(args)
def test_cli_create_user_random_password(self):
args = self.parser.parse_args([
'users', 'create', '--username', 'test1', '--lastname', 'doe',
'--firstname', 'jon',
'--email', 'jdoe@foo.com', '--role', 'Viewer', '--use_random_password'
])
cli.users_create(args)
def test_cli_create_user_supplied_password(self):
args = self.parser.parse_args([
'users', 'create', '--username', 'test2', '--lastname', 'doe',
'--firstname', 'jon',
'--email', 'jdoe@apache.org', '--role', 'Viewer', '--password', 'test'
])
cli.users_create(args)
def test_cli_delete_user(self):
args = self.parser.parse_args([
'users', 'create', '--username', 'test3', '--lastname', 'doe',
'--firstname', 'jon',
'--email', 'jdoe@example.com', '--role', 'Viewer', '--use_random_password'
])
cli.users_create(args)
args = self.parser.parse_args([
'users', 'delete', '--username', 'test3',
])
cli.users_delete(args)
def test_cli_list_users(self):
for i in range(0, 3):
args = self.parser.parse_args([
'users', 'create', '--username', 'user{}'.format(i), '--lastname',
'doe', '--firstname', 'jon',
'--email', 'jdoe+{}@gmail.com'.format(i), '--role', 'Viewer',
'--use_random_password'
])
cli.users_create(args)
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.users_list(self.parser.parse_args(['users', 'list']))
stdout = mock_stdout.getvalue()
for i in range(0, 3):
self.assertIn('user{}'.format(i), stdout)
def test_cli_import_users(self):
def assertUserInRoles(email, roles):
for role in roles:
self.assertTrue(self._does_user_belong_to_role(email, role))
def assertUserNotInRoles(email, roles):
for role in roles:
self.assertFalse(self._does_user_belong_to_role(email, role))
assertUserNotInRoles(self.TEST_USER1_EMAIL, ['Admin', 'Op'])
assertUserNotInRoles(self.TEST_USER2_EMAIL, ['Public'])
users = [
{
"username": "imported_user1", "lastname": "doe1",
"firstname": "jon", "email": self.TEST_USER1_EMAIL,
"roles": ["Admin", "Op"]
},
{
"username": "imported_user2", "lastname": "doe2",
"firstname": "jon", "email": self.TEST_USER2_EMAIL,
"roles": ["Public"]
}
]
self._import_users_from_file(users)
assertUserInRoles(self.TEST_USER1_EMAIL, ['Admin', 'Op'])
assertUserInRoles(self.TEST_USER2_EMAIL, ['Public'])
users = [
{
"username": "imported_user1", "lastname": "doe1",
"firstname": "jon", "email": self.TEST_USER1_EMAIL,
"roles": ["Public"]
},
{
"username": "imported_user2", "lastname": "doe2",
"firstname": "jon", "email": self.TEST_USER2_EMAIL,
"roles": ["Admin"]
}
]
self._import_users_from_file(users)
assertUserNotInRoles(self.TEST_USER1_EMAIL, ['Admin', 'Op'])
assertUserInRoles(self.TEST_USER1_EMAIL, ['Public'])
assertUserNotInRoles(self.TEST_USER2_EMAIL, ['Public'])
assertUserInRoles(self.TEST_USER2_EMAIL, ['Admin'])
def test_cli_export_users(self):
user1 = {"username": "imported_user1", "lastname": "doe1",
"firstname": "jon", "email": self.TEST_USER1_EMAIL,
"roles": ["Public"]}
user2 = {"username": "imported_user2", "lastname": "doe2",
"firstname": "jon", "email": self.TEST_USER2_EMAIL,
"roles": ["Admin"]}
self._import_users_from_file([user1, user2])
users_filename = self._export_users_to_file()
with open(users_filename, mode='r') as file:
retrieved_users = json.loads(file.read())
os.remove(users_filename)
# ensure that an export can be imported
self._import_users_from_file(retrieved_users)
def find_by_username(username):
matches = [u for u in retrieved_users
if u['username'] == username]
if not matches:
self.fail("Couldn't find user with username {}".format(username))
else:
matches[0].pop('id') # this key not required for import
return matches[0]
self.assertEqual(find_by_username('imported_user1'), user1)
self.assertEqual(find_by_username('imported_user2'), user2)
def _import_users_from_file(self, user_list):
json_file_content = json.dumps(user_list)
f = NamedTemporaryFile(delete=False)
try:
f.write(json_file_content.encode())
f.flush()
args = self.parser.parse_args([
'users', 'import', f.name
])
cli.users_import(args)
finally:
os.remove(f.name)
def _export_users_to_file(self):
f = NamedTemporaryFile(delete=False)
args = self.parser.parse_args([
'users', 'export', f.name
])
cli.users_export(args)
return f.name
def _does_user_belong_to_role(self, email, rolename):
user = self.appbuilder.sm.find_user(email=email)
role = self.appbuilder.sm.find_role(rolename)
if user and role:
return role in user.roles
return False
def test_cli_add_user_role(self):
args = self.parser.parse_args([
'users', 'create', '--username', 'test4', '--lastname', 'doe',
'--firstname', 'jon',
'--email', self.TEST_USER1_EMAIL, '--role', 'Viewer', '--use_random_password'
])
cli.users_create(args)
self.assertFalse(
self._does_user_belong_to_role(email=self.TEST_USER1_EMAIL,
rolename='Op'),
"User should not yet be a member of role 'Op'"
)
args = self.parser.parse_args([
'users', 'add_role', '--username', 'test4', '--role', 'Op'
])
cli.users_manage_role(args, remove=False)
self.assertTrue(
self._does_user_belong_to_role(email=self.TEST_USER1_EMAIL,
rolename='Op'),
"User should have been added to role 'Op'"
)
def test_cli_remove_user_role(self):
args = self.parser.parse_args([
'users', 'create', '--username', 'test4', '--lastname', 'doe',
'--firstname', 'jon',
'--email', self.TEST_USER1_EMAIL, '--role', 'Viewer', '--use_random_password'
])
cli.users_create(args)
self.assertTrue(
self._does_user_belong_to_role(email=self.TEST_USER1_EMAIL,
rolename='Viewer'),
"User should have been created with role 'Viewer'"
)
args = self.parser.parse_args([
'users', 'remove_role', '--username', 'test4', '--role', 'Viewer'
])
cli.users_manage_role(args, remove=True)
self.assertFalse(
self._does_user_belong_to_role(email=self.TEST_USER1_EMAIL,
rolename='Viewer'),
"User should have been removed from role 'Viewer'"
)
@mock.patch("airflow.bin.cli.DagBag")
def test_cli_sync_perm(self, dagbag_mock):
self.expect_dagbag_contains([
DAG('has_access_control',
access_control={
'Public': {'can_dag_read'}
}),
DAG('no_access_control')
], dagbag_mock)
self.appbuilder.sm = mock.Mock()
args = self.parser.parse_args([
'sync_perm'
])
cli.sync_perm(args)
assert self.appbuilder.sm.sync_roles.call_count == 1
self.assertEqual(2,
len(self.appbuilder.sm.sync_perm_for_dag.mock_calls))
self.appbuilder.sm.sync_perm_for_dag.assert_any_call(
'has_access_control',
{'Public': {'can_dag_read'}}
)
self.appbuilder.sm.sync_perm_for_dag.assert_any_call(
'no_access_control',
None,
)
def expect_dagbag_contains(self, dags, dagbag_mock):
dagbag = mock.Mock()
dagbag.dags = {dag.dag_id: dag for dag in dags}
dagbag_mock.return_value = dagbag
def test_cli_create_roles(self):
self.assertIsNone(self.appbuilder.sm.find_role('FakeTeamA'))
self.assertIsNone(self.appbuilder.sm.find_role('FakeTeamB'))
args = self.parser.parse_args([
'roles', 'create', 'FakeTeamA', 'FakeTeamB'
])
cli.roles_create(args)
self.assertIsNotNone(self.appbuilder.sm.find_role('FakeTeamA'))
self.assertIsNotNone(self.appbuilder.sm.find_role('FakeTeamB'))
def test_cli_create_roles_is_reentrant(self):
self.assertIsNone(self.appbuilder.sm.find_role('FakeTeamA'))
self.assertIsNone(self.appbuilder.sm.find_role('FakeTeamB'))
args = self.parser.parse_args([
'roles', 'create', 'FakeTeamA', 'FakeTeamB'
])
cli.roles_create(args)
self.assertIsNotNone(self.appbuilder.sm.find_role('FakeTeamA'))
self.assertIsNotNone(self.appbuilder.sm.find_role('FakeTeamB'))
def test_cli_list_roles(self):
self.appbuilder.sm.add_role('FakeTeamA')
self.appbuilder.sm.add_role('FakeTeamB')
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.roles_list(self.parser.parse_args(['roles', 'list']))
stdout = mock_stdout.getvalue()
self.assertIn('FakeTeamA', stdout)
self.assertIn('FakeTeamB', stdout)
def test_cli_list_tasks(self):
for dag_id in self.dagbag.dags.keys():
args = self.parser.parse_args(['tasks', 'list', dag_id])
cli.list_tasks(args)
args = self.parser.parse_args([
'tasks', 'list', 'example_bash_operator', '--tree'])
cli.list_tasks(args)
def test_cli_list_jobs(self):
args = self.parser.parse_args(['dags', 'list_jobs'])
cli.list_jobs(args)
def test_cli_list_jobs_with_args(self):
args = self.parser.parse_args(['dags', 'list_jobs', '--dag_id',
'example_bash_operator',
'--state', 'success',
'--limit', '100'])
cli.list_jobs(args)
@mock.patch("airflow.bin.cli.db.initdb")
def test_cli_initdb(self, initdb_mock):
cli.initdb(self.parser.parse_args(['db', 'init']))
initdb_mock.assert_called_once_with()
@mock.patch("airflow.bin.cli.db.resetdb")
def test_cli_resetdb(self, resetdb_mock):
cli.resetdb(self.parser.parse_args(['db', 'reset', '--yes']))
resetdb_mock.assert_called_once_with()
def test_cli_connections_list(self):
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.connections_list(self.parser.parse_args(['connections', 'list']))
stdout = mock_stdout.getvalue()
conns = [[x.strip("'") for x in re.findall(r"'\w+'", line)[:2]]
for ii, line in enumerate(stdout.split('\n'))
if ii % 2 == 1]
conns = [conn for conn in conns if len(conn) > 0]
# Assert that some of the connections are present in the output as
# expected:
self.assertIn(['aws_default', 'aws'], conns)
self.assertIn(['hive_cli_default', 'hive_cli'], conns)
self.assertIn(['emr_default', 'emr'], conns)
self.assertIn(['mssql_default', 'mssql'], conns)
self.assertIn(['mysql_default', 'mysql'], conns)
self.assertIn(['postgres_default', 'postgres'], conns)
self.assertIn(['wasb_default', 'wasb'], conns)
self.assertIn(['segment_default', 'segment'], conns)
def test_cli_connections_list_redirect(self):
cmd = ['airflow', 'connections', 'list']
with tempfile.TemporaryFile() as fp:
p = subprocess.Popen(cmd, stdout=fp)
p.wait()
self.assertEqual(0, p.returncode)
def test_cli_connections_add_delete(self):
# Add connections:
uri = 'postgresql://airflow:airflow@host:5432/airflow'
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new1',
'--conn_uri=%s' % uri]))
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new2',
'--conn_uri=%s' % uri]))
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new3',
'--conn_uri=%s' % uri, '--conn_extra', "{'extra': 'yes'}"]))
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new4',
'--conn_uri=%s' % uri, '--conn_extra', "{'extra': 'yes'}"]))
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new5',
'--conn_type=hive_metastore', '--conn_login=airflow',
'--conn_password=airflow', '--conn_host=host',
'--conn_port=9083', '--conn_schema=airflow']))
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new6',
'--conn_uri', "", '--conn_type=google_cloud_platform', '--conn_extra', "{'extra': 'yes'}"]))
stdout = mock_stdout.getvalue()
# Check addition stdout
lines = [l for l in stdout.split('\n') if len(l) > 0]
self.assertListEqual(lines, [
("\tSuccessfully added `conn_id`=new1 : " +
"postgresql://airflow:airflow@host:5432/airflow"),
("\tSuccessfully added `conn_id`=new2 : " +
"postgresql://airflow:airflow@host:5432/airflow"),
("\tSuccessfully added `conn_id`=new3 : " +
"postgresql://airflow:airflow@host:5432/airflow"),
("\tSuccessfully added `conn_id`=new4 : " +
"postgresql://airflow:airflow@host:5432/airflow"),
("\tSuccessfully added `conn_id`=new5 : " +
"hive_metastore://airflow:airflow@host:9083/airflow"),
("\tSuccessfully added `conn_id`=new6 : " +
"google_cloud_platform://:@:")
])
# Attempt to add duplicate
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new1',
'--conn_uri=%s' % uri]))
stdout = mock_stdout.getvalue()
# Check stdout for addition attempt
lines = [l for l in stdout.split('\n') if len(l) > 0]
self.assertListEqual(lines, [
"\tA connection with `conn_id`=new1 already exists",
])
# Attempt to add without providing conn_uri
with self.assertRaises(SystemExit) as exc:
cli.connections_add(self.parser.parse_args(
['connections', 'add', 'new']))
self.assertEqual(
exc.exception.code,
"The following args are required to add a connection: ['conn_uri or conn_type']"
)
        # Prepare to check the connections that were added
session = settings.Session()
extra = {'new1': None,
'new2': None,
'new3': "{'extra': 'yes'}",
'new4': "{'extra': 'yes'}"}
        # Check the added connections
for index in range(1, 6):
conn_id = 'new%s' % index
result = (session
.query(Connection)
.filter(Connection.conn_id == conn_id)
.first())
result = (result.conn_id, result.conn_type, result.host,
result.port, result.get_extra())
if conn_id in ['new1', 'new2', 'new3', 'new4']:
self.assertEqual(result, (conn_id, 'postgres', 'host', 5432,
extra[conn_id]))
elif conn_id == 'new5':
self.assertEqual(result, (conn_id, 'hive_metastore', 'host',
9083, None))
elif conn_id == 'new6':
self.assertEqual(result, (conn_id, 'google_cloud_platform',
None, None, "{'extra': 'yes'}"))
# Delete connections
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new1']))
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new2']))
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new3']))
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new4']))
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new5']))
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'new6']))
stdout = mock_stdout.getvalue()
# Check deletion stdout
lines = [l for l in stdout.split('\n') if len(l) > 0]
self.assertListEqual(lines, [
"\tSuccessfully deleted `conn_id`=new1",
"\tSuccessfully deleted `conn_id`=new2",
"\tSuccessfully deleted `conn_id`=new3",
"\tSuccessfully deleted `conn_id`=new4",
"\tSuccessfully deleted `conn_id`=new5",
"\tSuccessfully deleted `conn_id`=new6"
])
# Check deletions
for index in range(1, 7):
conn_id = 'new%s' % index
result = (session.query(Connection)
.filter(Connection.conn_id == conn_id)
.first())
self.assertTrue(result is None)
# Attempt to delete a non-existing connection
with mock.patch('sys.stdout', new_callable=io.StringIO) as mock_stdout:
cli.connections_delete(self.parser.parse_args(
['connections', 'delete', 'fake']))
stdout = mock_stdout.getvalue()
# Check deletion attempt stdout
lines = [l for l in stdout.split('\n') if len(l) > 0]
self.assertListEqual(lines, [
"\tDid not find a connection with `conn_id`=fake",
])
session.close()
def test_cli_test(self):
cli.test(self.parser.parse_args([
'tasks', 'test', 'example_bash_operator', 'runme_0',
DEFAULT_DATE.isoformat()]))
cli.test(self.parser.parse_args([
'tasks', 'test', 'example_bash_operator', 'runme_0', '--dry_run',
DEFAULT_DATE.isoformat()]))
def test_cli_test_with_params(self):
cli.test(self.parser.parse_args([
'tasks', 'test', 'example_passing_params_via_test_command', 'run_this',
'-tp', '{"foo":"bar"}', DEFAULT_DATE.isoformat()]))
cli.test(self.parser.parse_args([
'tasks', 'test', 'example_passing_params_via_test_command', 'also_run_this',
'-tp', '{"foo":"bar"}', DEFAULT_DATE.isoformat()]))
def test_cli_run(self):
cli.run(self.parser.parse_args([
'tasks', 'run', 'example_bash_operator', 'runme_0', '-l',
DEFAULT_DATE.isoformat()]))
def test_task_state(self):
cli.task_state(self.parser.parse_args([
'tasks', 'state', 'example_bash_operator', 'runme_0',
DEFAULT_DATE.isoformat()]))
def test_dag_state(self):
self.assertEqual(None, cli.dag_state(self.parser.parse_args([
'dags', 'state', 'example_bash_operator', DEFAULT_DATE.isoformat()])))
def test_pause(self):
args = self.parser.parse_args([
'dags', 'pause', 'example_bash_operator'])
cli.pause(args)
self.assertIn(self.dagbag.dags['example_bash_operator'].is_paused, [True, 1])
args = self.parser.parse_args([
'dags', 'unpause', 'example_bash_operator'])
cli.unpause(args)
self.assertIn(self.dagbag.dags['example_bash_operator'].is_paused, [False, 0])
def test_subdag_clear(self):
args = self.parser.parse_args([
'tasks', 'clear', 'example_subdag_operator', '--no_confirm'])
cli.clear(args)
args = self.parser.parse_args([
'tasks', 'clear', 'example_subdag_operator', '--no_confirm', '--exclude_subdags'])
cli.clear(args)
def test_parentdag_downstream_clear(self):
args = self.parser.parse_args([
'tasks', 'clear', 'example_subdag_operator.section-1', '--no_confirm'])
cli.clear(args)
args = self.parser.parse_args([
'tasks', 'clear', 'example_subdag_operator.section-1', '--no_confirm',
'--exclude_parentdag'])
cli.clear(args)
def test_get_dags(self):
dags = cli.get_dags(self.parser.parse_args(['tasks', 'clear', 'example_subdag_operator',
'-c']))
self.assertEqual(len(dags), 1)
dags = cli.get_dags(self.parser.parse_args(['tasks', 'clear', 'subdag', '-dx', '-c']))
self.assertGreater(len(dags), 1)
with self.assertRaises(AirflowException):
cli.get_dags(self.parser.parse_args(['tasks', 'clear', 'foobar', '-dx', '-c']))
def test_process_subdir_path_with_placeholder(self):
self.assertEqual(os.path.join(settings.DAGS_FOLDER, 'abc'), cli.process_subdir('DAGS_FOLDER/abc'))
def test_trigger_dag(self):
cli.trigger_dag(self.parser.parse_args([
'dags', 'trigger', 'example_bash_operator',
'-c', '{"foo": "bar"}']))
self.assertRaises(
ValueError,
cli.trigger_dag,
self.parser.parse_args([
'dags', 'trigger', 'example_bash_operator',
'--run_id', 'trigger_dag_xxx',
'-c', 'NOT JSON'])
)
def test_delete_dag(self):
DM = DagModel
key = "my_dag_id"
session = settings.Session()
session.add(DM(dag_id=key))
session.commit()
cli.delete_dag(self.parser.parse_args([
'dags', 'delete', key, '--yes']))
self.assertEqual(session.query(DM).filter_by(dag_id=key).count(), 0)
self.assertRaises(
AirflowException,
cli.delete_dag,
self.parser.parse_args([
'dags', 'delete',
'does_not_exist_dag',
'--yes'])
)
def test_pool_create(self):
cli.pool_set(self.parser.parse_args(['pools', 'set', 'foo', '1', 'test']))
self.assertEqual(self.session.query(Pool).count(), 1)
def test_pool_get(self):
cli.pool_set(self.parser.parse_args(['pools', 'set', 'foo', '1', 'test']))
try:
cli.pool_get(self.parser.parse_args(['pools', 'get', 'foo']))
except Exception as e:
self.fail("The 'pool -g foo' command raised unexpectedly: %s" % e)
def test_pool_delete(self):
cli.pool_set(self.parser.parse_args(['pools', 'set', 'foo', '1', 'test']))
cli.pool_delete(self.parser.parse_args(['pools', 'delete', 'foo']))
self.assertEqual(self.session.query(Pool).count(), 0)
def test_pool_import_export(self):
# Create two pools first
pool_config_input = {
"foo": {
"description": "foo_test",
"slots": 1
},
"baz": {
"description": "baz_test",
"slots": 2
}
}
with open('pools_import.json', mode='w') as file:
json.dump(pool_config_input, file)
# Import json
try:
cli.pool_import(self.parser.parse_args(['pools', 'import', 'pools_import.json']))
except Exception as e:
self.fail("The 'pool import pools_import.json' failed: %s" % e)
# Export json
try:
cli.pool_export(self.parser.parse_args(['pools', 'export', 'pools_export.json']))
except Exception as e:
self.fail("The 'pool export pools_export.json' failed: %s" % e)
with open('pools_export.json', mode='r') as file:
pool_config_output = json.load(file)
self.assertEqual(
pool_config_input,
pool_config_output,
"Input and output pool files are not same")
os.remove('pools_import.json')
os.remove('pools_export.json')
def test_variables(self):
# Checks if all subcommands are properly received
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'foo', '{"foo":"bar"}']))
cli.variables_get(self.parser.parse_args([
'variables', 'get', 'foo']))
cli.variables_get(self.parser.parse_args([
'variables', 'get', 'baz', '-d', 'bar']))
cli.variables_list(self.parser.parse_args([
'variables', 'list']))
cli.variables_delete(self.parser.parse_args([
'variables', 'delete', 'bar']))
cli.variables_import(self.parser.parse_args([
'variables', 'import', DEV_NULL]))
cli.variables_export(self.parser.parse_args([
'variables', 'export', DEV_NULL]))
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'bar', 'original']))
# First export
cli.variables_export(self.parser.parse_args([
'variables', 'export', 'variables1.json']))
first_exp = open('variables1.json', 'r')
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'bar', 'updated']))
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'foo', '{"foo":"oops"}']))
cli.variables_delete(self.parser.parse_args([
'variables', 'delete', 'foo']))
# First import
cli.variables_import(self.parser.parse_args([
'variables', 'import', 'variables1.json']))
self.assertEqual('original', Variable.get('bar'))
self.assertEqual('{\n "foo": "bar"\n}', Variable.get('foo'))
# Second export
cli.variables_export(self.parser.parse_args([
'variables', 'export', 'variables2.json']))
second_exp = open('variables2.json', 'r')
self.assertEqual(first_exp.read(), second_exp.read())
second_exp.close()
first_exp.close()
# Second import
cli.variables_import(self.parser.parse_args([
'variables', 'import', 'variables2.json']))
self.assertEqual('original', Variable.get('bar'))
self.assertEqual('{\n "foo": "bar"\n}', Variable.get('foo'))
# Set a dict
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'dict', '{"foo": "oops"}']))
# Set a list
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'list', '["oops"]']))
# Set str
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'str', 'hello string']))
# Set int
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'int', '42']))
# Set float
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'float', '42.0']))
# Set true
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'true', 'true']))
# Set false
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'false', 'false']))
# Set none
cli.variables_set(self.parser.parse_args([
'variables', 'set', 'null', 'null']))
# Export and then import
cli.variables_export(self.parser.parse_args([
'variables', 'export', 'variables3.json']))
cli.variables_import(self.parser.parse_args([
'variables', 'import', 'variables3.json']))
# Assert value
self.assertEqual({'foo': 'oops'}, models.Variable.get('dict', deserialize_json=True))
self.assertEqual(['oops'], models.Variable.get('list', deserialize_json=True))
self.assertEqual('hello string', models.Variable.get('str')) # cannot json.loads(str)
self.assertEqual(42, models.Variable.get('int', deserialize_json=True))
self.assertEqual(42.0, models.Variable.get('float', deserialize_json=True))
self.assertEqual(True, models.Variable.get('true', deserialize_json=True))
self.assertEqual(False, models.Variable.get('false', deserialize_json=True))
self.assertEqual(None, models.Variable.get('null', deserialize_json=True))
os.remove('variables1.json')
os.remove('variables2.json')
os.remove('variables3.json')
def _wait_pidfile(self, pidfile):
while True:
try:
with open(pidfile) as file:
return int(file.read())
except Exception:
sleep(1)
def test_cli_webserver_foreground(self):
# Confirm that webserver hasn't been launched.
# pgrep returns exit status 1 if no process matched.
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "airflow"]).wait())
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "gunicorn"]).wait())
# Run webserver in foreground and terminate it.
p = subprocess.Popen(["airflow", "webserver"])
p.terminate()
p.wait()
# Assert that no process remains.
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "airflow"]).wait())
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "gunicorn"]).wait())
@unittest.skipIf("TRAVIS" in os.environ and bool(os.environ["TRAVIS"]),
"Skipping test due to lack of required file permission")
def test_cli_webserver_foreground_with_pid(self):
# Run webserver in foreground with --pid option
pidfile = tempfile.mkstemp()[1]
p = subprocess.Popen(["airflow", "webserver", "--pid", pidfile])
# Check the file specified by --pid option exists
self._wait_pidfile(pidfile)
# Terminate webserver
p.terminate()
p.wait()
@unittest.skipIf("TRAVIS" in os.environ and bool(os.environ["TRAVIS"]),
"Skipping test due to lack of required file permission")
def test_cli_webserver_background(self):
import psutil
# Confirm that webserver hasn't been launched.
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "airflow"]).wait())
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "gunicorn"]).wait())
# Run webserver in background.
subprocess.Popen(["airflow", "webserver", "-D"])
pidfile = cli.setup_locations("webserver")[0]
self._wait_pidfile(pidfile)
# Assert that gunicorn and its monitor are launched.
self.assertEqual(0, subprocess.Popen(["pgrep", "-c", "airflow"]).wait())
self.assertEqual(0, subprocess.Popen(["pgrep", "-c", "gunicorn"]).wait())
# Terminate monitor process.
pidfile = cli.setup_locations("webserver-monitor")[0]
pid = self._wait_pidfile(pidfile)
p = psutil.Process(pid)
p.terminate()
p.wait()
# Assert that no process remains.
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "airflow"]).wait())
self.assertEqual(1, subprocess.Popen(["pgrep", "-c", "gunicorn"]).wait())
# Patch for causing webserver timeout
@mock.patch("airflow.bin.cli.get_num_workers_running", return_value=0)
def test_cli_webserver_shutdown_when_gunicorn_master_is_killed(self, _):
        # Shorten the timeout so that this test doesn't take too long
args = self.parser.parse_args(['webserver'])
with conf_vars({('webserver', 'web_server_master_timeout'): '10'}):
with self.assertRaises(SystemExit) as e:
cli.webserver(args)
self.assertEqual(e.exception.code, 1)
class FakeWebHDFSHook:
def __init__(self, conn_id):
self.conn_id = conn_id
def get_conn(self):
return self.conn_id
def check_for_path(self, hdfs_path):
return hdfs_path
class FakeSnakeBiteClientException(Exception):
pass
class FakeSnakeBiteClient:
def __init__(self):
self.started = True
def ls(self, path, include_toplevel=False):
"""
the fake snakebite client
:param path: the array of path to test
:param include_toplevel: to return the toplevel directory info
:return: a list for path for the matching queries
"""
if path[0] == '/datadirectory/empty_directory' and not include_toplevel:
return []
elif path[0] == '/datadirectory/datafile':
return [{
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 0,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/datafile'
}]
elif path[0] == '/datadirectory/empty_directory' and include_toplevel:
return [{
'group': 'supergroup',
'permission': 493,
'file_type': 'd',
'access_time': 0,
'block_replication': 0,
'modification_time': 1481132141540,
'length': 0,
'blocksize': 0,
'owner': 'hdfs',
'path': '/datadirectory/empty_directory'
}]
elif path[0] == '/datadirectory/not_empty_directory' and include_toplevel:
return [{
'group': 'supergroup',
'permission': 493,
'file_type': 'd',
'access_time': 0,
'block_replication': 0,
'modification_time': 1481132141540,
'length': 0,
'blocksize': 0,
'owner': 'hdfs',
'path': '/datadirectory/empty_directory'
}, {
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 0,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/not_empty_directory/test_file'
}]
elif path[0] == '/datadirectory/not_empty_directory':
return [{
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 0,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/not_empty_directory/test_file'
}]
elif path[0] == '/datadirectory/not_existing_file_or_directory':
raise FakeSnakeBiteClientException
elif path[0] == '/datadirectory/regex_dir':
return [{
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862, 'length': 12582912,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/regex_dir/test1file'
}, {
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 12582912,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/regex_dir/test2file'
}, {
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 12582912,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/regex_dir/test3file'
}, {
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 12582912,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/regex_dir/copying_file_1.txt._COPYING_'
}, {
'group': 'supergroup',
'permission': 420,
'file_type': 'f',
'access_time': 1481122343796,
'block_replication': 3,
'modification_time': 1481122343862,
'length': 12582912,
'blocksize': 134217728,
'owner': 'hdfs',
'path': '/datadirectory/regex_dir/copying_file_3.txt.sftp'
}]
else:
raise FakeSnakeBiteClientException
class FakeHDFSHook:
def __init__(self, conn_id=None):
self.conn_id = conn_id
def get_conn(self):
client = FakeSnakeBiteClient()
return client
class TestConnection(unittest.TestCase):
def setUp(self):
utils.db.initdb()
os.environ['AIRFLOW_CONN_TEST_URI'] = (
'postgres://username:password@ec2.compute.com:5432/the_database')
os.environ['AIRFLOW_CONN_TEST_URI_NO_CREDS'] = (
'postgres://ec2.compute.com/the_database')
def tearDown(self):
env_vars = ['AIRFLOW_CONN_TEST_URI', 'AIRFLOW_CONN_AIRFLOW_DB']
for ev in env_vars:
if ev in os.environ:
del os.environ[ev]
def test_using_env_var(self):
c = SqliteHook.get_connection(conn_id='test_uri')
self.assertEqual('ec2.compute.com', c.host)
self.assertEqual('the_database', c.schema)
self.assertEqual('username', c.login)
self.assertEqual('password', c.password)
self.assertEqual(5432, c.port)
def test_using_unix_socket_env_var(self):
c = SqliteHook.get_connection(conn_id='test_uri_no_creds')
self.assertEqual('ec2.compute.com', c.host)
self.assertEqual('the_database', c.schema)
self.assertIsNone(c.login)
self.assertIsNone(c.password)
self.assertIsNone(c.port)
def test_param_setup(self):
c = Connection(conn_id='local_mysql', conn_type='mysql',
host='localhost', login='airflow',
password='airflow', schema='airflow')
self.assertEqual('localhost', c.host)
self.assertEqual('airflow', c.schema)
self.assertEqual('airflow', c.login)
self.assertEqual('airflow', c.password)
self.assertIsNone(c.port)
def test_env_var_priority(self):
c = SqliteHook.get_connection(conn_id='airflow_db')
self.assertNotEqual('ec2.compute.com', c.host)
os.environ['AIRFLOW_CONN_AIRFLOW_DB'] = \
'postgres://username:password@ec2.compute.com:5432/the_database'
c = SqliteHook.get_connection(conn_id='airflow_db')
self.assertEqual('ec2.compute.com', c.host)
self.assertEqual('the_database', c.schema)
self.assertEqual('username', c.login)
self.assertEqual('password', c.password)
self.assertEqual(5432, c.port)
del os.environ['AIRFLOW_CONN_AIRFLOW_DB']
def test_dbapi_get_uri(self):
conn = BaseHook.get_connection(conn_id='test_uri')
hook = conn.get_hook()
self.assertEqual('postgres://username:password@ec2.compute.com:5432/the_database', hook.get_uri())
conn2 = BaseHook.get_connection(conn_id='test_uri_no_creds')
hook2 = conn2.get_hook()
self.assertEqual('postgres://ec2.compute.com/the_database', hook2.get_uri())
def test_dbapi_get_sqlalchemy_engine(self):
conn = BaseHook.get_connection(conn_id='test_uri')
hook = conn.get_hook()
engine = hook.get_sqlalchemy_engine()
self.assertIsInstance(engine, sqlalchemy.engine.Engine)
self.assertEqual('postgres://username:password@ec2.compute.com:5432/the_database', str(engine.url))
def test_get_connections_env_var(self):
conns = SqliteHook.get_connections(conn_id='test_uri')
assert len(conns) == 1
assert conns[0].host == 'ec2.compute.com'
assert conns[0].schema == 'the_database'
assert conns[0].login == 'username'
assert conns[0].password == 'password'
assert conns[0].port == 5432
class TestWebHDFSHook(unittest.TestCase):
def test_simple_init(self):
from airflow.hooks.webhdfs_hook import WebHDFSHook
c = WebHDFSHook()
self.assertIsNone(c.proxy_user)
def test_init_proxy_user(self):
from airflow.hooks.webhdfs_hook import WebHDFSHook
c = WebHDFSHook(proxy_user='someone')
self.assertEqual('someone', c.proxy_user)
HDFSHook = None # type: Optional[hdfs_hook.HDFSHook]
snakebite = None # type: None
@unittest.skipIf(HDFSHook is None,
"Skipping test because HDFSHook is not installed")
class TestHDFSHook(unittest.TestCase):
def setUp(self):
os.environ['AIRFLOW_CONN_HDFS_DEFAULT'] = 'hdfs://localhost:8020'
def test_get_client(self):
client = HDFSHook(proxy_user='foo').get_conn()
self.assertIsInstance(client, snakebite.client.Client)
self.assertEqual('localhost', client.host)
self.assertEqual(8020, client.port)
self.assertEqual('foo', client.service.channel.effective_user)
@mock.patch('airflow.hooks.hdfs_hook.AutoConfigClient')
@mock.patch('airflow.hooks.hdfs_hook.HDFSHook.get_connections')
def test_get_autoconfig_client(self, mock_get_connections,
MockAutoConfigClient):
c = Connection(conn_id='hdfs', conn_type='hdfs',
host='localhost', port=8020, login='foo',
extra=json.dumps({'autoconfig': True}))
mock_get_connections.return_value = [c]
HDFSHook(hdfs_conn_id='hdfs').get_conn()
MockAutoConfigClient.assert_called_once_with(effective_user='foo',
use_sasl=False)
@mock.patch('airflow.hooks.hdfs_hook.AutoConfigClient')
def test_get_autoconfig_client_no_conn(self, MockAutoConfigClient):
HDFSHook(hdfs_conn_id='hdfs_missing', autoconfig=True).get_conn()
MockAutoConfigClient.assert_called_once_with(effective_user=None,
use_sasl=False)
@mock.patch('airflow.hooks.hdfs_hook.HDFSHook.get_connections')
def test_get_ha_client(self, mock_get_connections):
c1 = Connection(conn_id='hdfs_default', conn_type='hdfs',
host='localhost', port=8020)
c2 = Connection(conn_id='hdfs_default', conn_type='hdfs',
host='localhost2', port=8020)
mock_get_connections.return_value = [c1, c2]
client = HDFSHook().get_conn()
self.assertIsInstance(client, snakebite.client.HAClient)
send_email_test = mock.Mock()
class TestEmail(unittest.TestCase):
def setUp(self):
configuration.conf.remove_option('email', 'EMAIL_BACKEND')
@mock.patch('airflow.utils.email.send_email')
def test_default_backend(self, mock_send_email):
res = utils.email.send_email('to', 'subject', 'content')
mock_send_email.assert_called_with('to', 'subject', 'content')
self.assertEqual(mock_send_email.return_value, res)
@mock.patch('airflow.utils.email.send_email_smtp')
def test_custom_backend(self, mock_send_email):
with conf_vars({('email', 'email_backend'): 'tests.core.send_email_test'}):
utils.email.send_email('to', 'subject', 'content')
send_email_test.assert_called_with(
'to', 'subject', 'content', files=None, dryrun=False,
cc=None, bcc=None, mime_charset='utf-8', mime_subtype='mixed')
self.assertFalse(mock_send_email.called)
class TestEmailSmtp(unittest.TestCase):
def setUp(self):
configuration.conf.set('smtp', 'SMTP_SSL', 'False')
@mock.patch('airflow.utils.email.send_MIME_email')
def test_send_smtp(self, mock_send_mime):
attachment = tempfile.NamedTemporaryFile()
attachment.write(b'attachment')
attachment.seek(0)
utils.email.send_email_smtp('to', 'subject', 'content', files=[attachment.name])
self.assertTrue(mock_send_mime.called)
call_args = mock_send_mime.call_args[0]
self.assertEqual(configuration.conf.get('smtp', 'SMTP_MAIL_FROM'), call_args[0])
self.assertEqual(['to'], call_args[1])
msg = call_args[2]
self.assertEqual('subject', msg['Subject'])
self.assertEqual(configuration.conf.get('smtp', 'SMTP_MAIL_FROM'), msg['From'])
self.assertEqual(2, len(msg.get_payload()))
filename = 'attachment; filename="' + os.path.basename(attachment.name) + '"'
self.assertEqual(filename, msg.get_payload()[-1].get('Content-Disposition'))
mimeapp = MIMEApplication('attachment')
self.assertEqual(mimeapp.get_payload(), msg.get_payload()[-1].get_payload())
@mock.patch('airflow.utils.email.send_MIME_email')
def test_send_smtp_with_multibyte_content(self, mock_send_mime):
utils.email.send_email_smtp('to', 'subject', '🔥', mime_charset='utf-8')
self.assertTrue(mock_send_mime.called)
call_args = mock_send_mime.call_args[0]
msg = call_args[2]
mimetext = MIMEText('🔥', 'mixed', 'utf-8')
self.assertEqual(mimetext.get_payload(), msg.get_payload()[0].get_payload())
@mock.patch('airflow.utils.email.send_MIME_email')
def test_send_bcc_smtp(self, mock_send_mime):
attachment = tempfile.NamedTemporaryFile()
attachment.write(b'attachment')
attachment.seek(0)
utils.email.send_email_smtp('to', 'subject', 'content', files=[attachment.name], cc='cc', bcc='bcc')
self.assertTrue(mock_send_mime.called)
call_args = mock_send_mime.call_args[0]
self.assertEqual(configuration.conf.get('smtp', 'SMTP_MAIL_FROM'), call_args[0])
self.assertEqual(['to', 'cc', 'bcc'], call_args[1])
msg = call_args[2]
self.assertEqual('subject', msg['Subject'])
self.assertEqual(configuration.conf.get('smtp', 'SMTP_MAIL_FROM'), msg['From'])
self.assertEqual(2, len(msg.get_payload()))
self.assertEqual('attachment; filename="' + os.path.basename(attachment.name) + '"',
msg.get_payload()[-1].get('Content-Disposition'))
mimeapp = MIMEApplication('attachment')
self.assertEqual(mimeapp.get_payload(), msg.get_payload()[-1].get_payload())
@mock.patch('smtplib.SMTP_SSL')
@mock.patch('smtplib.SMTP')
def test_send_mime(self, mock_smtp, mock_smtp_ssl):
mock_smtp.return_value = mock.Mock()
mock_smtp_ssl.return_value = mock.Mock()
msg = MIMEMultipart()
utils.email.send_MIME_email('from', 'to', msg, dryrun=False)
mock_smtp.assert_called_with(
configuration.conf.get('smtp', 'SMTP_HOST'),
configuration.conf.getint('smtp', 'SMTP_PORT'),
)
self.assertTrue(mock_smtp.return_value.starttls.called)
mock_smtp.return_value.login.assert_called_with(
configuration.conf.get('smtp', 'SMTP_USER'),
configuration.conf.get('smtp', 'SMTP_PASSWORD'),
)
mock_smtp.return_value.sendmail.assert_called_with('from', 'to', msg.as_string())
self.assertTrue(mock_smtp.return_value.quit.called)
@mock.patch('smtplib.SMTP_SSL')
@mock.patch('smtplib.SMTP')
def test_send_mime_ssl(self, mock_smtp, mock_smtp_ssl):
mock_smtp.return_value = mock.Mock()
mock_smtp_ssl.return_value = mock.Mock()
with conf_vars({('smtp', 'smtp_ssl'): 'True'}):
utils.email.send_MIME_email('from', 'to', MIMEMultipart(), dryrun=False)
self.assertFalse(mock_smtp.called)
mock_smtp_ssl.assert_called_with(
configuration.conf.get('smtp', 'SMTP_HOST'),
configuration.conf.getint('smtp', 'SMTP_PORT'),
)
@mock.patch('smtplib.SMTP_SSL')
@mock.patch('smtplib.SMTP')
def test_send_mime_noauth(self, mock_smtp, mock_smtp_ssl):
mock_smtp.return_value = mock.Mock()
mock_smtp_ssl.return_value = mock.Mock()
with conf_vars({
('smtp', 'smtp_user'): None,
('smtp', 'smtp_password'): None,
}):
utils.email.send_MIME_email('from', 'to', MIMEMultipart(), dryrun=False)
self.assertFalse(mock_smtp_ssl.called)
mock_smtp.assert_called_with(
configuration.conf.get('smtp', 'SMTP_HOST'),
configuration.conf.getint('smtp', 'SMTP_PORT'),
)
self.assertFalse(mock_smtp.login.called)
@mock.patch('smtplib.SMTP_SSL')
@mock.patch('smtplib.SMTP')
def test_send_mime_dryrun(self, mock_smtp, mock_smtp_ssl):
utils.email.send_MIME_email('from', 'to', MIMEMultipart(), dryrun=True)
self.assertFalse(mock_smtp.called)
self.assertFalse(mock_smtp_ssl.called)
if __name__ == '__main__':
unittest.main()
# ============================ test_io.py ============================
"""Unit tests for the io module."""
# Tests of io are scattered over the test suite:
# * test_bufio - tests file buffering
# * test_memoryio - tests BytesIO and StringIO
# * test_fileio - tests FileIO
# * test_file - tests the file interface
# * test_io - tests everything else in the io module
# * test_univnewlines - tests universal newline support
# * test_largefile - tests operations on a file greater than 2**32 bytes
# (only enabled with -ulargefile)
################################################################################
# ATTENTION TEST WRITERS!!!
################################################################################
# When writing tests for io, it's important to test both the C and Python
# implementations. This is usually done by writing a base test that refers to
# the type it is testing as an attribute. Then it provides custom subclasses to
# test both implementations. This file has lots of examples.
################################################################################
import abc
import array
import errno
import locale
import os
import pickle
import random
import signal
import sys
import threading
import time
import unittest
import warnings
import weakref
from collections import deque, UserList
from itertools import cycle, count
from test import support
from test.support.script_helper import assert_python_ok, run_python_until_end
import codecs
import io # C implementation of io
import _pyio as pyio # Python implementation of io
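# ---------------------------------------------------------------------------
# Illustrative sketch only (added for exposition; these classes are not part
# of the original suite and the names are hypothetical). The banner above
# describes the convention used throughout this file: a base test refers to
# the implementation under test through a class attribute, and a C subclass
# and a pure-Python subclass bind that attribute so the same test body runs
# against both io and _pyio.
class _ExampleDualImplMixin:
    def test_roundtrip(self):
        buf = self.BytesIO()          # resolved from the concrete subclass
        buf.write(b"data")
        self.assertEqual(buf.getvalue(), b"data")


class _CExampleDualImplTest(_ExampleDualImplMixin, unittest.TestCase):
    BytesIO = io.BytesIO              # C implementation


class _PyExampleDualImplTest(_ExampleDualImplMixin, unittest.TestCase):
    BytesIO = pyio.BytesIO            # pure-Python implementation
# ---------------------------------------------------------------------------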
try:
import ctypes
except ImportError:
def byteslike(*pos, **kw):
return array.array("b", bytes(*pos, **kw))
else:
def byteslike(*pos, **kw):
"""Create a bytes-like object having no string or sequence methods"""
data = bytes(*pos, **kw)
obj = EmptyStruct()
ctypes.resize(obj, len(data))
memoryview(obj).cast("B")[:] = data
return obj
class EmptyStruct(ctypes.Structure):
pass
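# Quick illustration of the helper above (added for exposition; this function
# is not part of the original suite and is never called by it): under either
# branch, byteslike() yields an object that supports the buffer protocol
# without being a bytes/bytearray instance, which is what the readinto()-style
# assertions below rely on.
def _byteslike_example():
    sample = byteslike(b"abc")
    assert bytes(memoryview(sample)) == b"abc"          # buffer contents are preserved
    assert not isinstance(sample, (bytes, bytearray))   # but it is not a bytes type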
def _default_chunk_size():
"""Get the default TextIOWrapper chunk size"""
with open(__file__, "r", encoding="latin-1") as f:
return f._CHUNK_SIZE
class MockRawIOWithoutRead:
"""A RawIO implementation without read(), so as to exercise the default
RawIO.read() which calls readinto()."""
def __init__(self, read_stack=()):
self._read_stack = list(read_stack)
self._write_stack = []
self._reads = 0
self._extraneous_reads = 0
def write(self, b):
self._write_stack.append(bytes(b))
return len(b)
def writable(self):
return True
def fileno(self):
return 42
def readable(self):
return True
def seekable(self):
return True
def seek(self, pos, whence):
return 0 # wrong but we gotta return something
def tell(self):
return 0 # same comment as above
def readinto(self, buf):
self._reads += 1
max_len = len(buf)
try:
data = self._read_stack[0]
except IndexError:
self._extraneous_reads += 1
return 0
if data is None:
del self._read_stack[0]
return None
n = len(data)
if len(data) <= max_len:
del self._read_stack[0]
buf[:n] = data
return n
else:
buf[:] = data[:max_len]
self._read_stack[0] = data[max_len:]
return max_len
def truncate(self, pos=None):
return pos
class CMockRawIOWithoutRead(MockRawIOWithoutRead, io.RawIOBase):
pass
class PyMockRawIOWithoutRead(MockRawIOWithoutRead, pyio.RawIOBase):
pass
class MockRawIO(MockRawIOWithoutRead):
def read(self, n=None):
self._reads += 1
try:
return self._read_stack.pop(0)
except:
self._extraneous_reads += 1
return b""
class CMockRawIO(MockRawIO, io.RawIOBase):
pass
class PyMockRawIO(MockRawIO, pyio.RawIOBase):
pass
class MisbehavedRawIO(MockRawIO):
def write(self, b):
return super().write(b) * 2
def read(self, n=None):
return super().read(n) * 2
def seek(self, pos, whence):
return -123
def tell(self):
return -456
def readinto(self, buf):
super().readinto(buf)
return len(buf) * 5
class CMisbehavedRawIO(MisbehavedRawIO, io.RawIOBase):
pass
class PyMisbehavedRawIO(MisbehavedRawIO, pyio.RawIOBase):
pass
class SlowFlushRawIO(MockRawIO):
def __init__(self):
super().__init__()
self.in_flush = threading.Event()
def flush(self):
self.in_flush.set()
time.sleep(0.25)
class CSlowFlushRawIO(SlowFlushRawIO, io.RawIOBase):
pass
class PySlowFlushRawIO(SlowFlushRawIO, pyio.RawIOBase):
pass
class CloseFailureIO(MockRawIO):
closed = 0
def close(self):
if not self.closed:
self.closed = 1
raise OSError
class CCloseFailureIO(CloseFailureIO, io.RawIOBase):
pass
class PyCloseFailureIO(CloseFailureIO, pyio.RawIOBase):
pass
class MockFileIO:
def __init__(self, data):
self.read_history = []
super().__init__(data)
def read(self, n=None):
res = super().read(n)
self.read_history.append(None if res is None else len(res))
return res
def readinto(self, b):
res = super().readinto(b)
self.read_history.append(res)
return res
class CMockFileIO(MockFileIO, io.BytesIO):
pass
class PyMockFileIO(MockFileIO, pyio.BytesIO):
pass
class MockUnseekableIO:
def seekable(self):
return False
def seek(self, *args):
raise self.UnsupportedOperation("not seekable")
def tell(self, *args):
raise self.UnsupportedOperation("not seekable")
def truncate(self, *args):
raise self.UnsupportedOperation("not seekable")
class CMockUnseekableIO(MockUnseekableIO, io.BytesIO):
UnsupportedOperation = io.UnsupportedOperation
class PyMockUnseekableIO(MockUnseekableIO, pyio.BytesIO):
UnsupportedOperation = pyio.UnsupportedOperation
class MockNonBlockWriterIO:
def __init__(self):
self._write_stack = []
self._blocker_char = None
def pop_written(self):
s = b"".join(self._write_stack)
self._write_stack[:] = []
return s
def block_on(self, char):
"""Block when a given char is encountered."""
self._blocker_char = char
def readable(self):
return True
def seekable(self):
return True
def writable(self):
return True
def write(self, b):
b = bytes(b)
n = -1
if self._blocker_char:
try:
n = b.index(self._blocker_char)
except ValueError:
pass
else:
if n > 0:
# write data up to the first blocker
self._write_stack.append(b[:n])
return n
else:
# cancel blocker and indicate would block
self._blocker_char = None
return None
self._write_stack.append(b)
return len(b)
class CMockNonBlockWriterIO(MockNonBlockWriterIO, io.RawIOBase):
BlockingIOError = io.BlockingIOError
class PyMockNonBlockWriterIO(MockNonBlockWriterIO, pyio.RawIOBase):
BlockingIOError = pyio.BlockingIOError
class IOTest(unittest.TestCase):
def setUp(self):
support.unlink(support.TESTFN)
def tearDown(self):
support.unlink(support.TESTFN)
def write_ops(self, f):
self.assertEqual(f.write(b"blah."), 5)
f.truncate(0)
self.assertEqual(f.tell(), 5)
f.seek(0)
self.assertEqual(f.write(b"blah."), 5)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.write(b"Hello."), 6)
self.assertEqual(f.tell(), 6)
self.assertEqual(f.seek(-1, 1), 5)
self.assertEqual(f.tell(), 5)
buffer = bytearray(b" world\n\n\n")
self.assertEqual(f.write(buffer), 9)
buffer[:] = b"*" * 9 # Overwrite our copy of the data
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.write(b"h"), 1)
self.assertEqual(f.seek(-1, 2), 13)
self.assertEqual(f.tell(), 13)
self.assertEqual(f.truncate(12), 12)
self.assertEqual(f.tell(), 13)
self.assertRaises(TypeError, f.seek, 0.0)
def read_ops(self, f, buffered=False):
data = f.read(5)
self.assertEqual(data, b"hello")
data = byteslike(data)
self.assertEqual(f.readinto(data), 5)
self.assertEqual(bytes(data), b" worl")
data = bytearray(5)
self.assertEqual(f.readinto(data), 2)
self.assertEqual(len(data), 5)
self.assertEqual(data[:2], b"d\n")
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.read(20), b"hello world\n")
self.assertEqual(f.read(1), b"")
self.assertEqual(f.readinto(byteslike(b"x")), 0)
self.assertEqual(f.seek(-6, 2), 6)
self.assertEqual(f.read(5), b"world")
self.assertEqual(f.read(0), b"")
self.assertEqual(f.readinto(byteslike()), 0)
self.assertEqual(f.seek(-6, 1), 5)
self.assertEqual(f.read(5), b" worl")
self.assertEqual(f.tell(), 10)
self.assertRaises(TypeError, f.seek, 0.0)
if buffered:
f.seek(0)
self.assertEqual(f.read(), b"hello world\n")
f.seek(6)
self.assertEqual(f.read(), b"world\n")
self.assertEqual(f.read(), b"")
f.seek(0)
data = byteslike(5)
self.assertEqual(f.readinto1(data), 5)
self.assertEqual(bytes(data), b"hello")
LARGE = 2**31
def large_file_ops(self, f):
assert f.readable()
assert f.writable()
try:
self.assertEqual(f.seek(self.LARGE), self.LARGE)
except (OverflowError, ValueError):
self.skipTest("no largefile support")
self.assertEqual(f.tell(), self.LARGE)
self.assertEqual(f.write(b"xxx"), 3)
self.assertEqual(f.tell(), self.LARGE + 3)
self.assertEqual(f.seek(-1, 1), self.LARGE + 2)
self.assertEqual(f.truncate(), self.LARGE + 2)
self.assertEqual(f.tell(), self.LARGE + 2)
self.assertEqual(f.seek(0, 2), self.LARGE + 2)
self.assertEqual(f.truncate(self.LARGE + 1), self.LARGE + 1)
self.assertEqual(f.tell(), self.LARGE + 2)
self.assertEqual(f.seek(0, 2), self.LARGE + 1)
self.assertEqual(f.seek(-1, 2), self.LARGE)
self.assertEqual(f.read(2), b"x")
def test_invalid_operations(self):
# Try writing on a file opened in read mode and vice-versa.
exc = self.UnsupportedOperation
for mode in ("w", "wb"):
with self.open(support.TESTFN, mode) as fp:
self.assertRaises(exc, fp.read)
self.assertRaises(exc, fp.readline)
with self.open(support.TESTFN, "wb", buffering=0) as fp:
self.assertRaises(exc, fp.read)
self.assertRaises(exc, fp.readline)
with self.open(support.TESTFN, "rb", buffering=0) as fp:
self.assertRaises(exc, fp.write, b"blah")
self.assertRaises(exc, fp.writelines, [b"blah\n"])
with self.open(support.TESTFN, "rb") as fp:
self.assertRaises(exc, fp.write, b"blah")
self.assertRaises(exc, fp.writelines, [b"blah\n"])
with self.open(support.TESTFN, "r") as fp:
self.assertRaises(exc, fp.write, "blah")
self.assertRaises(exc, fp.writelines, ["blah\n"])
# Non-zero seeking from current or end pos
self.assertRaises(exc, fp.seek, 1, self.SEEK_CUR)
self.assertRaises(exc, fp.seek, -1, self.SEEK_END)
def test_optional_abilities(self):
# Test for OSError when optional APIs are not supported
# The purpose of this test is to try fileno(), reading, writing and
# seeking operations with various objects that indicate they do not
# support these operations.
def pipe_reader():
[r, w] = os.pipe()
os.close(w) # So that read() is harmless
return self.FileIO(r, "r")
def pipe_writer():
[r, w] = os.pipe()
self.addCleanup(os.close, r)
# Guarantee that we can write into the pipe without blocking
thread = threading.Thread(target=os.read, args=(r, 100))
thread.start()
self.addCleanup(thread.join)
return self.FileIO(w, "w")
def buffered_reader():
return self.BufferedReader(self.MockUnseekableIO())
def buffered_writer():
return self.BufferedWriter(self.MockUnseekableIO())
def buffered_random():
return self.BufferedRandom(self.BytesIO())
def buffered_rw_pair():
return self.BufferedRWPair(self.MockUnseekableIO(),
self.MockUnseekableIO())
def text_reader():
class UnseekableReader(self.MockUnseekableIO):
writable = self.BufferedIOBase.writable
write = self.BufferedIOBase.write
return self.TextIOWrapper(UnseekableReader(), "ascii")
def text_writer():
class UnseekableWriter(self.MockUnseekableIO):
readable = self.BufferedIOBase.readable
read = self.BufferedIOBase.read
return self.TextIOWrapper(UnseekableWriter(), "ascii")
tests = (
(pipe_reader, "fr"), (pipe_writer, "fw"),
(buffered_reader, "r"), (buffered_writer, "w"),
(buffered_random, "rws"), (buffered_rw_pair, "rw"),
(text_reader, "r"), (text_writer, "w"),
(self.BytesIO, "rws"), (self.StringIO, "rws"),
)
for [test, abilities] in tests:
with self.subTest(test), test() as obj:
readable = "r" in abilities
self.assertEqual(obj.readable(), readable)
writable = "w" in abilities
self.assertEqual(obj.writable(), writable)
if isinstance(obj, self.TextIOBase):
data = "3"
elif isinstance(obj, (self.BufferedIOBase, self.RawIOBase)):
data = b"3"
else:
self.fail("Unknown base class")
if "f" in abilities:
obj.fileno()
else:
self.assertRaises(OSError, obj.fileno)
if readable:
obj.read(1)
obj.read()
else:
self.assertRaises(OSError, obj.read, 1)
self.assertRaises(OSError, obj.read)
if writable:
obj.write(data)
else:
self.assertRaises(OSError, obj.write, data)
if sys.platform.startswith("win") and test in (
pipe_reader, pipe_writer):
# Pipes seem to appear as seekable on Windows
continue
seekable = "s" in abilities
self.assertEqual(obj.seekable(), seekable)
if seekable:
obj.tell()
obj.seek(0)
else:
self.assertRaises(OSError, obj.tell)
self.assertRaises(OSError, obj.seek, 0)
if writable and seekable:
obj.truncate()
obj.truncate(0)
else:
self.assertRaises(OSError, obj.truncate)
self.assertRaises(OSError, obj.truncate, 0)
def test_open_handles_NUL_chars(self):
fn_with_NUL = 'foo\0bar'
self.assertRaises(ValueError, self.open, fn_with_NUL, 'w')
bytes_fn = bytes(fn_with_NUL, 'ascii')
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
self.assertRaises(ValueError, self.open, bytes_fn, 'w')
def test_raw_file_io(self):
with self.open(support.TESTFN, "wb", buffering=0) as f:
self.assertEqual(f.readable(), False)
self.assertEqual(f.writable(), True)
self.assertEqual(f.seekable(), True)
self.write_ops(f)
with self.open(support.TESTFN, "rb", buffering=0) as f:
self.assertEqual(f.readable(), True)
self.assertEqual(f.writable(), False)
self.assertEqual(f.seekable(), True)
self.read_ops(f)
def test_buffered_file_io(self):
with self.open(support.TESTFN, "wb") as f:
self.assertEqual(f.readable(), False)
self.assertEqual(f.writable(), True)
self.assertEqual(f.seekable(), True)
self.write_ops(f)
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.readable(), True)
self.assertEqual(f.writable(), False)
self.assertEqual(f.seekable(), True)
self.read_ops(f, True)
def test_readline(self):
with self.open(support.TESTFN, "wb") as f:
f.write(b"abc\ndef\nxyzzy\nfoo\x00bar\nanother line")
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.readline(), b"abc\n")
self.assertEqual(f.readline(10), b"def\n")
self.assertEqual(f.readline(2), b"xy")
self.assertEqual(f.readline(4), b"zzy\n")
self.assertEqual(f.readline(), b"foo\x00bar\n")
self.assertEqual(f.readline(None), b"another line")
self.assertRaises(TypeError, f.readline, 5.3)
with self.open(support.TESTFN, "r") as f:
self.assertRaises(TypeError, f.readline, 5.3)
def test_readline_nonsizeable(self):
# Issue #30061
# Crash when readline() returns an object without __len__
class R(self.IOBase):
def readline(self):
return None
self.assertRaises((TypeError, StopIteration), next, R())
def test_next_nonsizeable(self):
# Issue #30061
# Crash when __next__() returns an object without __len__
class R(self.IOBase):
def __next__(self):
return None
self.assertRaises(TypeError, R().readlines, 1)
def test_raw_bytes_io(self):
f = self.BytesIO()
self.write_ops(f)
data = f.getvalue()
self.assertEqual(data, b"hello world\n")
f = self.BytesIO(data)
self.read_ops(f, True)
def test_large_file_ops(self):
# On Windows and Mac OSX this test comsumes large resources; It takes
# a long time to build the >2 GiB file and takes >2 GiB of disk space
# therefore the resource must be enabled to run this test.
if sys.platform[:3] == 'win' or sys.platform == 'darwin':
support.requires(
'largefile',
'test requires %s bytes and a long time to run' % self.LARGE)
with self.open(support.TESTFN, "w+b", 0) as f:
self.large_file_ops(f)
with self.open(support.TESTFN, "w+b") as f:
self.large_file_ops(f)
def test_with_open(self):
for bufsize in (0, 1, 100):
f = None
with self.open(support.TESTFN, "wb", bufsize) as f:
f.write(b"xxx")
self.assertEqual(f.closed, True)
f = None
try:
with self.open(support.TESTFN, "wb", bufsize) as f:
1/0
except ZeroDivisionError:
self.assertEqual(f.closed, True)
else:
self.fail("1/0 didn't raise an exception")
# issue 5008
def test_append_mode_tell(self):
with self.open(support.TESTFN, "wb") as f:
f.write(b"xxx")
with self.open(support.TESTFN, "ab", buffering=0) as f:
self.assertEqual(f.tell(), 3)
with self.open(support.TESTFN, "ab") as f:
self.assertEqual(f.tell(), 3)
with self.open(support.TESTFN, "a") as f:
self.assertGreater(f.tell(), 0)
def test_destructor(self):
record = []
class MyFileIO(self.FileIO):
def __del__(self):
record.append(1)
try:
f = super().__del__
except AttributeError:
pass
else:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
with support.check_warnings(('', ResourceWarning)):
f = MyFileIO(support.TESTFN, "wb")
f.write(b"xxx")
del f
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"xxx")
def _check_base_destructor(self, base):
record = []
class MyIO(base):
def __init__(self):
# This exercises the availability of attributes on object
# destruction.
# (in the C version, close() is called by the tp_dealloc
# function, not by __del__)
self.on_del = 1
self.on_close = 2
self.on_flush = 3
def __del__(self):
record.append(self.on_del)
try:
f = super().__del__
except AttributeError:
pass
else:
f()
def close(self):
record.append(self.on_close)
super().close()
def flush(self):
record.append(self.on_flush)
super().flush()
f = MyIO()
del f
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_IOBase_destructor(self):
self._check_base_destructor(self.IOBase)
def test_RawIOBase_destructor(self):
self._check_base_destructor(self.RawIOBase)
def test_BufferedIOBase_destructor(self):
self._check_base_destructor(self.BufferedIOBase)
def test_TextIOBase_destructor(self):
self._check_base_destructor(self.TextIOBase)
def test_close_flushes(self):
with self.open(support.TESTFN, "wb") as f:
f.write(b"xxx")
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"xxx")
def test_array_writes(self):
a = array.array('i', range(10))
n = len(a.tobytes())
def check(f):
with f:
self.assertEqual(f.write(a), n)
f.writelines((a,))
check(self.BytesIO())
check(self.FileIO(support.TESTFN, "w"))
check(self.BufferedWriter(self.MockRawIO()))
check(self.BufferedRandom(self.MockRawIO()))
check(self.BufferedRWPair(self.MockRawIO(), self.MockRawIO()))
def test_closefd(self):
self.assertRaises(ValueError, self.open, support.TESTFN, 'w',
closefd=False)
def test_read_closed(self):
with self.open(support.TESTFN, "w") as f:
f.write("egg\n")
with self.open(support.TESTFN, "r") as f:
file = self.open(f.fileno(), "r", closefd=False)
self.assertEqual(file.read(), "egg\n")
file.seek(0)
file.close()
self.assertRaises(ValueError, file.read)
def test_no_closefd_with_filename(self):
# can't use closefd in combination with a file name
self.assertRaises(ValueError, self.open, support.TESTFN, "r", closefd=False)
def test_closefd_attr(self):
with self.open(support.TESTFN, "wb") as f:
f.write(b"egg\n")
with self.open(support.TESTFN, "r") as f:
self.assertEqual(f.buffer.raw.closefd, True)
file = self.open(f.fileno(), "r", closefd=False)
self.assertEqual(file.buffer.raw.closefd, False)
def test_garbage_collection(self):
# FileIO objects are collected, and collecting them flushes
# all data to disk.
with support.check_warnings(('', ResourceWarning)):
f = self.FileIO(support.TESTFN, "wb")
f.write(b"abcxxx")
f.f = f
wr = weakref.ref(f)
del f
support.gc_collect()
self.assertIsNone(wr(), wr)
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"abcxxx")
def test_unbounded_file(self):
# Issue #1174606: reading from an unbounded stream such as /dev/zero.
zero = "/dev/zero"
if not os.path.exists(zero):
self.skipTest("{0} does not exist".format(zero))
if sys.maxsize > 0x7FFFFFFF:
self.skipTest("test can only run in a 32-bit address space")
if support.real_max_memuse < support._2G:
self.skipTest("test requires at least 2 GiB of memory")
with self.open(zero, "rb", buffering=0) as f:
self.assertRaises(OverflowError, f.read)
with self.open(zero, "rb") as f:
self.assertRaises(OverflowError, f.read)
with self.open(zero, "r") as f:
self.assertRaises(OverflowError, f.read)
def check_flush_error_on_close(self, *args, **kwargs):
# Test that the file is closed despite failed flush
# and that flush() is called before file closed.
f = self.open(*args, **kwargs)
closed = []
def bad_flush():
closed[:] = [f.closed]
raise OSError()
f.flush = bad_flush
self.assertRaises(OSError, f.close) # exception not swallowed
self.assertTrue(f.closed)
self.assertTrue(closed) # flush() called
self.assertFalse(closed[0]) # flush() called before file closed
f.flush = lambda: None # break reference loop
def test_flush_error_on_close(self):
# raw file
# Issue #5700: io.FileIO calls flush() after file closed
self.check_flush_error_on_close(support.TESTFN, 'wb', buffering=0)
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', buffering=0)
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', buffering=0, closefd=False)
os.close(fd)
# buffered io
self.check_flush_error_on_close(support.TESTFN, 'wb')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', closefd=False)
os.close(fd)
# text io
self.check_flush_error_on_close(support.TESTFN, 'w')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'w')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'w', closefd=False)
os.close(fd)
def test_multi_close(self):
f = self.open(support.TESTFN, "wb", buffering=0)
f.close()
f.close()
f.close()
self.assertRaises(ValueError, f.flush)
def test_RawIOBase_read(self):
# Exercise the default RawIOBase.read() implementation (which calls
# readinto() internally).
rawio = self.MockRawIOWithoutRead((b"abc", b"d", None, b"efg", None))
self.assertEqual(rawio.read(2), b"ab")
self.assertEqual(rawio.read(2), b"c")
self.assertEqual(rawio.read(2), b"d")
self.assertEqual(rawio.read(2), None)
self.assertEqual(rawio.read(2), b"ef")
self.assertEqual(rawio.read(2), b"g")
self.assertEqual(rawio.read(2), None)
self.assertEqual(rawio.read(2), b"")
def test_types_have_dict(self):
test = (
self.IOBase(),
self.RawIOBase(),
self.TextIOBase(),
self.StringIO(),
self.BytesIO()
)
for obj in test:
self.assertTrue(hasattr(obj, "__dict__"))
def test_opener(self):
with self.open(support.TESTFN, "w") as f:
f.write("egg\n")
fd = os.open(support.TESTFN, os.O_RDONLY)
def opener(path, flags):
return fd
with self.open("non-existent", "r", opener=opener) as f:
self.assertEqual(f.read(), "egg\n")
def test_bad_opener_negative_1(self):
# Issue #27066.
def badopener(fname, flags):
return -1
with self.assertRaises(ValueError) as cm:
open('non-existent', 'r', opener=badopener)
self.assertEqual(str(cm.exception), 'opener returned -1')
def test_bad_opener_other_negative(self):
# Issue #27066.
def badopener(fname, flags):
return -2
with self.assertRaises(ValueError) as cm:
open('non-existent', 'r', opener=badopener)
self.assertEqual(str(cm.exception), 'opener returned -2')
def test_fileio_closefd(self):
# Issue #4841
with self.open(__file__, 'rb') as f1, \
self.open(__file__, 'rb') as f2:
fileio = self.FileIO(f1.fileno(), closefd=False)
# .__init__() must not close f1
fileio.__init__(f2.fileno(), closefd=False)
f1.readline()
# .close() must not close f2
fileio.close()
f2.readline()
def test_nonbuffered_textio(self):
with support.check_no_resource_warning(self):
with self.assertRaises(ValueError):
self.open(support.TESTFN, 'w', buffering=0)
def test_invalid_newline(self):
with support.check_no_resource_warning(self):
with self.assertRaises(ValueError):
self.open(support.TESTFN, 'w', newline='invalid')
def test_buffered_readinto_mixin(self):
# Test the implementation provided by BufferedIOBase
class Stream(self.BufferedIOBase):
def read(self, size):
return b"12345"
read1 = read
stream = Stream()
for method in ("readinto", "readinto1"):
with self.subTest(method):
buffer = byteslike(5)
self.assertEqual(getattr(stream, method)(buffer), 5)
self.assertEqual(bytes(buffer), b"12345")
def test_fspath_support(self):
class PathLike:
def __init__(self, path):
self.path = path
def __fspath__(self):
return self.path
def check_path_succeeds(path):
with self.open(path, "w") as f:
f.write("egg\n")
with self.open(path, "r") as f:
self.assertEqual(f.read(), "egg\n")
check_path_succeeds(PathLike(support.TESTFN))
check_path_succeeds(PathLike(support.TESTFN.encode('utf-8')))
bad_path = PathLike(TypeError)
with self.assertRaises(TypeError):
self.open(bad_path, 'w')
# ensure that refcounting is correct with some error conditions
with self.assertRaisesRegex(ValueError, 'read/write/append mode'):
self.open(PathLike(support.TESTFN), 'rwxa')
class CIOTest(IOTest):
def test_IOBase_finalize(self):
# Issue #12149: segmentation fault on _PyIOBase_finalize when both a
# class which inherits IOBase and an object of this class are caught
# in a reference cycle and close() is already in the method cache.
class MyIO(self.IOBase):
def close(self):
pass
# create an instance to populate the method cache
MyIO()
obj = MyIO()
obj.obj = obj
wr = weakref.ref(obj)
del MyIO
del obj
support.gc_collect()
self.assertIsNone(wr(), wr)
class PyIOTest(IOTest):
pass
@support.cpython_only
class APIMismatchTest(unittest.TestCase):
def test_RawIOBase_io_in_pyio_match(self):
"""Test that pyio RawIOBase class has all c RawIOBase methods"""
mismatch = support.detect_api_mismatch(pyio.RawIOBase, io.RawIOBase,
ignore=('__weakref__',))
self.assertEqual(mismatch, set(), msg='Python RawIOBase does not have all C RawIOBase methods')
def test_RawIOBase_pyio_in_io_match(self):
"""Test that c RawIOBase class has all pyio RawIOBase methods"""
mismatch = support.detect_api_mismatch(io.RawIOBase, pyio.RawIOBase)
self.assertEqual(mismatch, set(), msg='C RawIOBase does not have all Python RawIOBase methods')
class CommonBufferedTests:
# Tests common to BufferedReader, BufferedWriter and BufferedRandom
def test_detach(self):
raw = self.MockRawIO()
buf = self.tp(raw)
self.assertIs(buf.detach(), raw)
self.assertRaises(ValueError, buf.detach)
repr(buf) # Should still work
def test_fileno(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertEqual(42, bufio.fileno())
def test_invalid_args(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
# Invalid whence
self.assertRaises(ValueError, bufio.seek, 0, -1)
self.assertRaises(ValueError, bufio.seek, 0, 9)
def test_override_destructor(self):
tp = self.tp
record = []
class MyBufferedIO(tp):
def __del__(self):
record.append(1)
try:
f = super().__del__
except AttributeError:
pass
else:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
rawio = self.MockRawIO()
bufio = MyBufferedIO(rawio)
del bufio
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_context_manager(self):
# Test usability as a context manager
rawio = self.MockRawIO()
bufio = self.tp(rawio)
def _with():
with bufio:
pass
_with()
# bufio should now be closed, and using it a second time should raise
# a ValueError.
self.assertRaises(ValueError, _with)
def test_error_through_destructor(self):
# Test that the exception state is not modified by a destructor,
# even if close() fails.
rawio = self.CloseFailureIO()
def f():
self.tp(rawio).xyzzy
with support.captured_output("stderr") as s:
self.assertRaises(AttributeError, f)
s = s.getvalue().strip()
if s:
# The destructor *may* have printed an unraisable error, check it
self.assertEqual(len(s.splitlines()), 1)
self.assertTrue(s.startswith("Exception OSError: "), s)
self.assertTrue(s.endswith(" ignored"), s)
def test_repr(self):
raw = self.MockRawIO()
b = self.tp(raw)
clsname = "%s.%s" % (self.tp.__module__, self.tp.__qualname__)
self.assertEqual(repr(b), "<%s>" % clsname)
raw.name = "dummy"
self.assertEqual(repr(b), "<%s name='dummy'>" % clsname)
raw.name = b"dummy"
self.assertEqual(repr(b), "<%s name=b'dummy'>" % clsname)
def test_recursive_repr(self):
# Issue #25455
raw = self.MockRawIO()
b = self.tp(raw)
with support.swap_attr(raw, 'name', b):
try:
repr(b) # Should not crash
except RuntimeError:
pass
def test_flush_error_on_close(self):
# Test that buffered file is closed despite failed flush
# and that flush() is called before file closed.
raw = self.MockRawIO()
closed = []
def bad_flush():
closed[:] = [b.closed, raw.closed]
raise OSError()
raw.flush = bad_flush
b = self.tp(raw)
self.assertRaises(OSError, b.close) # exception not swallowed
self.assertTrue(b.closed)
self.assertTrue(raw.closed)
self.assertTrue(closed) # flush() called
self.assertFalse(closed[0]) # flush() called before file closed
self.assertFalse(closed[1])
raw.flush = lambda: None # break reference loop
def test_close_error_on_close(self):
raw = self.MockRawIO()
def bad_flush():
raise OSError('flush')
def bad_close():
raise OSError('close')
raw.close = bad_close
b = self.tp(raw)
b.flush = bad_flush
with self.assertRaises(OSError) as err: # exception not swallowed
b.close()
self.assertEqual(err.exception.args, ('close',))
self.assertIsInstance(err.exception.__context__, OSError)
self.assertEqual(err.exception.__context__.args, ('flush',))
self.assertFalse(b.closed)
def test_nonnormalized_close_error_on_close(self):
# Issue #21677
raw = self.MockRawIO()
def bad_flush():
raise non_existing_flush
def bad_close():
raise non_existing_close
raw.close = bad_close
b = self.tp(raw)
b.flush = bad_flush
with self.assertRaises(NameError) as err: # exception not swallowed
b.close()
self.assertIn('non_existing_close', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('non_existing_flush', str(err.exception.__context__))
self.assertFalse(b.closed)
def test_multi_close(self):
raw = self.MockRawIO()
b = self.tp(raw)
b.close()
b.close()
b.close()
self.assertRaises(ValueError, b.flush)
def test_unseekable(self):
bufio = self.tp(self.MockUnseekableIO(b"A" * 10))
self.assertRaises(self.UnsupportedOperation, bufio.tell)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
def test_readonly_attributes(self):
raw = self.MockRawIO()
buf = self.tp(raw)
x = self.MockRawIO()
with self.assertRaises(AttributeError):
buf.raw = x
class SizeofTest:
@support.cpython_only
def test_sizeof(self):
bufsize1 = 4096
bufsize2 = 8192
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize1)
size = sys.getsizeof(bufio) - bufsize1
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize2)
self.assertEqual(sys.getsizeof(bufio), size + bufsize2)
@support.cpython_only
def test_buffer_freeing(self):
bufsize = 4096
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize)
size = sys.getsizeof(bufio) - bufsize
bufio.close()
self.assertEqual(sys.getsizeof(bufio), size)
class BufferedReaderTest(unittest.TestCase, CommonBufferedTests):
read_mode = "rb"
def test_constructor(self):
rawio = self.MockRawIO([b"abc"])
bufio = self.tp(rawio)
bufio.__init__(rawio)
bufio.__init__(rawio, buffer_size=1024)
bufio.__init__(rawio, buffer_size=16)
self.assertEqual(b"abc", bufio.read())
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
rawio = self.MockRawIO([b"abc"])
bufio.__init__(rawio)
self.assertEqual(b"abc", bufio.read())
def test_uninitialized(self):
bufio = self.tp.__new__(self.tp)
del bufio
bufio = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
bufio.read, 0)
bufio.__init__(self.MockRawIO())
self.assertEqual(bufio.read(0), b'')
def test_read(self):
for arg in (None, 7):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read(arg))
# Invalid args
self.assertRaises(ValueError, bufio.read, -2)
def test_read1(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"a", bufio.read(1))
self.assertEqual(b"b", bufio.read1(1))
self.assertEqual(rawio._reads, 1)
self.assertEqual(b"", bufio.read1(0))
self.assertEqual(b"c", bufio.read1(100))
self.assertEqual(rawio._reads, 1)
self.assertEqual(b"d", bufio.read1(100))
self.assertEqual(rawio._reads, 2)
self.assertEqual(b"efg", bufio.read1(100))
self.assertEqual(rawio._reads, 3)
self.assertEqual(b"", bufio.read1(100))
self.assertEqual(rawio._reads, 4)
def test_read1_arbitrary(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"a", bufio.read(1))
self.assertEqual(b"bc", bufio.read1())
self.assertEqual(b"d", bufio.read1())
self.assertEqual(b"efg", bufio.read1(-1))
self.assertEqual(rawio._reads, 3)
self.assertEqual(b"", bufio.read1())
self.assertEqual(rawio._reads, 4)
def test_readinto(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
b = bytearray(2)
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"cd")
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ef")
self.assertEqual(bufio.readinto(b), 1)
self.assertEqual(b, b"gf")
self.assertEqual(bufio.readinto(b), 0)
self.assertEqual(b, b"gf")
rawio = self.MockRawIO((b"abc", None))
bufio = self.tp(rawio)
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(bufio.readinto(b), 1)
self.assertEqual(b, b"cb")
def test_readinto1(self):
buffer_size = 10
rawio = self.MockRawIO((b"abc", b"de", b"fgh", b"jkl"))
bufio = self.tp(rawio, buffer_size=buffer_size)
b = bytearray(2)
self.assertEqual(bufio.peek(3), b'abc')
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 1)
self.assertEqual(b[:1], b"c")
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 2)
self.assertEqual(b, b"de")
self.assertEqual(rawio._reads, 2)
b = bytearray(2*buffer_size)
self.assertEqual(bufio.peek(3), b'fgh')
self.assertEqual(rawio._reads, 3)
self.assertEqual(bufio.readinto1(b), 6)
self.assertEqual(b[:6], b"fghjkl")
self.assertEqual(rawio._reads, 4)
def test_readinto_array(self):
buffer_size = 60
data = b"a" * 26
rawio = self.MockRawIO((data,))
bufio = self.tp(rawio, buffer_size=buffer_size)
# Create an array with element size > 1 byte
b = array.array('i', b'x' * 32)
assert len(b) != 16
# Read into it. We should get as many *bytes* as we can fit into b
# (which is more than the number of elements)
n = bufio.readinto(b)
self.assertGreater(n, len(b))
# Check that old contents of b are preserved
bm = memoryview(b).cast('B')
self.assertLess(n, len(bm))
self.assertEqual(bm[:n], data[:n])
self.assertEqual(bm[n:], b'x' * (len(bm[n:])))
def test_readinto1_array(self):
buffer_size = 60
data = b"a" * 26
rawio = self.MockRawIO((data,))
bufio = self.tp(rawio, buffer_size=buffer_size)
# Create an array with element size > 1 byte
b = array.array('i', b'x' * 32)
assert len(b) != 16
# Read into it. We should get as many *bytes* as we can fit into b
# (which is more than the number of elements)
n = bufio.readinto1(b)
self.assertGreater(n, len(b))
# Check that old contents of b are preserved
bm = memoryview(b).cast('B')
self.assertLess(n, len(bm))
self.assertEqual(bm[:n], data[:n])
self.assertEqual(bm[n:], b'x' * (len(bm[n:])))
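# Hedged standalone sketch of the byte-counting rule both tests above rely
# on: readinto() reports how many *bytes* it stored, even when the target is
# an array whose items are wider than one byte.  _demo_readinto_counts_bytes
# is an ad-hoc name.
def _demo_readinto_counts_bytes():
    import array
    import io
    target = array.array('i', [0, 0, 0, 0])           # 4 items, 4*itemsize bytes
    reader = io.BufferedReader(io.BytesIO(b"x" * 64))
    n = reader.readinto(target)
    assert n == 4 * target.itemsize                   # bytes, not items
    assert n == len(memoryview(target).cast('B'))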
def test_readlines(self):
def bufio():
rawio = self.MockRawIO((b"abc\n", b"d\n", b"ef"))
return self.tp(rawio)
self.assertEqual(bufio().readlines(), [b"abc\n", b"d\n", b"ef"])
self.assertEqual(bufio().readlines(5), [b"abc\n", b"d\n"])
self.assertEqual(bufio().readlines(None), [b"abc\n", b"d\n", b"ef"])
def test_buffering(self):
data = b"abcdefghi"
dlen = len(data)
tests = [
[ 100, [ 3, 1, 4, 8 ], [ dlen, 0 ] ],
[ 100, [ 3, 3, 3], [ dlen ] ],
[ 4, [ 1, 2, 4, 2 ], [ 4, 4, 1 ] ],
]
for bufsize, buf_read_sizes, raw_read_sizes in tests:
rawio = self.MockFileIO(data)
bufio = self.tp(rawio, buffer_size=bufsize)
pos = 0
for nbytes in buf_read_sizes:
self.assertEqual(bufio.read(nbytes), data[pos:pos+nbytes])
pos += nbytes
# this is mildly implementation-dependent
self.assertEqual(rawio.read_history, raw_read_sizes)
def test_read_non_blocking(self):
# Inject some None's in there to simulate EWOULDBLOCK
rawio = self.MockRawIO((b"abc", b"d", None, b"efg", None, None, None))
bufio = self.tp(rawio)
self.assertEqual(b"abcd", bufio.read(6))
self.assertEqual(b"e", bufio.read(1))
self.assertEqual(b"fg", bufio.read())
self.assertEqual(b"", bufio.peek(1))
self.assertIsNone(bufio.read())
self.assertEqual(b"", bufio.read())
rawio = self.MockRawIO((b"a", None, None))
self.assertEqual(b"a", rawio.readall())
self.assertIsNone(rawio.readall())
def test_read_past_eof(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read(9000))
def test_read_all(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read())
@support.requires_resource('cpu')
def test_threads(self):
try:
# Write out many bytes with exactly the same number of 0's,
# 1's... 255's. This will help us check that concurrent reading
# doesn't duplicate or forget contents.
N = 1000
l = list(range(256)) * N
random.shuffle(l)
s = bytes(bytearray(l))
with self.open(support.TESTFN, "wb") as f:
f.write(s)
with self.open(support.TESTFN, self.read_mode, buffering=0) as raw:
bufio = self.tp(raw, 8)
errors = []
results = []
def f():
try:
# Intra-buffer read then buffer-flushing read
for n in cycle([1, 19]):
s = bufio.read(n)
if not s:
break
# list.append() is atomic
results.append(s)
except Exception as e:
errors.append(e)
raise
threads = [threading.Thread(target=f) for x in range(20)]
with support.start_threads(threads):
time.sleep(0.02) # yield
self.assertFalse(errors,
"the following exceptions were caught: %r" % errors)
s = b''.join(results)
for i in range(256):
c = bytes(bytearray([i]))
self.assertEqual(s.count(c), N)
finally:
support.unlink(support.TESTFN)
def test_unseekable(self):
bufio = self.tp(self.MockUnseekableIO(b"A" * 10))
self.assertRaises(self.UnsupportedOperation, bufio.tell)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
bufio.read(1)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
self.assertRaises(self.UnsupportedOperation, bufio.tell)
def test_misbehaved_io(self):
rawio = self.MisbehavedRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertRaises(OSError, bufio.seek, 0)
self.assertRaises(OSError, bufio.tell)
def test_no_extraneous_read(self):
# Issue #9550; when the raw IO object has satisfied the read request,
# we should not issue any additional reads, otherwise it may block
# (e.g. socket).
bufsize = 16
for n in (2, bufsize - 1, bufsize, bufsize + 1, bufsize * 2):
rawio = self.MockRawIO([b"x" * n])
bufio = self.tp(rawio, bufsize)
self.assertEqual(bufio.read(n), b"x" * n)
# Simple case: one raw read is enough to satisfy the request.
self.assertEqual(rawio._extraneous_reads, 0,
"failed for {}: {} != 0".format(n, rawio._extraneous_reads))
# A more complex case where two raw reads are needed to satisfy
# the request.
rawio = self.MockRawIO([b"x" * (n - 1), b"x"])
bufio = self.tp(rawio, bufsize)
self.assertEqual(bufio.read(n), b"x" * n)
self.assertEqual(rawio._extraneous_reads, 0,
"failed for {}: {} != 0".format(n, rawio._extraneous_reads))
def test_read_on_closed(self):
# Issue #23796
b = io.BufferedReader(io.BytesIO(b"12"))
b.read(1)
b.close()
self.assertRaises(ValueError, b.peek)
self.assertRaises(ValueError, b.read1, 1)
class CBufferedReaderTest(BufferedReaderTest, SizeofTest):
tp = io.BufferedReader
def test_constructor(self):
BufferedReaderTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_initialization(self):
rawio = self.MockRawIO([b"abc"])
bufio = self.tp(rawio)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.read)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.read)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
self.assertRaises(ValueError, bufio.read)
def test_misbehaved_io_read(self):
rawio = self.MisbehavedRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
# _pyio.BufferedReader seems to implement reading differently, so that
# checking this is not so easy.
self.assertRaises(OSError, bufio.read, 10)
def test_garbage_collection(self):
# C BufferedReader objects are collected.
# The Python version has __del__, so it ends up in gc.garbage instead
with support.check_warnings(('', ResourceWarning)):
rawio = self.FileIO(support.TESTFN, "w+b")
f = self.tp(rawio)
f.f = f
wr = weakref.ref(f)
del f
support.gc_collect()
self.assertIsNone(wr(), wr)
def test_args_error(self):
# Issue #17275
with self.assertRaisesRegex(TypeError, "BufferedReader"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
class PyBufferedReaderTest(BufferedReaderTest):
tp = pyio.BufferedReader
class BufferedWriterTest(unittest.TestCase, CommonBufferedTests):
write_mode = "wb"
def test_constructor(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
bufio.__init__(rawio)
bufio.__init__(rawio, buffer_size=1024)
bufio.__init__(rawio, buffer_size=16)
self.assertEqual(3, bufio.write(b"abc"))
bufio.flush()
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
bufio.__init__(rawio)
self.assertEqual(3, bufio.write(b"ghi"))
bufio.flush()
self.assertEqual(b"".join(rawio._write_stack), b"abcghi")
def test_uninitialized(self):
bufio = self.tp.__new__(self.tp)
del bufio
bufio = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
bufio.write, b'')
bufio.__init__(self.MockRawIO())
self.assertEqual(bufio.write(b''), 0)
def test_detach_flush(self):
raw = self.MockRawIO()
buf = self.tp(raw)
buf.write(b"howdy!")
self.assertFalse(raw._write_stack)
buf.detach()
self.assertEqual(raw._write_stack, [b"howdy!"])
def test_write(self):
# Write to the buffered IO but don't overflow the buffer.
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
self.assertFalse(writer._write_stack)
buffer = bytearray(b"def")
bufio.write(buffer)
buffer[:] = b"***" # Overwrite our copy of the data
bufio.flush()
self.assertEqual(b"".join(writer._write_stack), b"abcdef")
def test_write_overflow(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
contents = b"abcdefghijklmnop"
for n in range(0, len(contents), 3):
bufio.write(contents[n:n+3])
flushed = b"".join(writer._write_stack)
# At least (total - 8) bytes were implicitly flushed, perhaps more
# depending on the implementation.
self.assertTrue(flushed.startswith(contents[:-8]), flushed)
def check_writes(self, intermediate_func):
# Lots of writes, test the flushed output is as expected.
contents = bytes(range(256)) * 1000
n = 0
writer = self.MockRawIO()
bufio = self.tp(writer, 13)
# Generator of write sizes: repeat each N 15 times then proceed to N+1
def gen_sizes():
for size in count(1):
for i in range(15):
yield size
sizes = gen_sizes()
while n < len(contents):
size = min(next(sizes), len(contents) - n)
self.assertEqual(bufio.write(contents[n:n+size]), size)
intermediate_func(bufio)
n += size
bufio.flush()
self.assertEqual(contents, b"".join(writer._write_stack))
def test_writes(self):
self.check_writes(lambda bufio: None)
def test_writes_and_flushes(self):
self.check_writes(lambda bufio: bufio.flush())
def test_writes_and_seeks(self):
def _seekabs(bufio):
pos = bufio.tell()
bufio.seek(pos + 1, 0)
bufio.seek(pos - 1, 0)
bufio.seek(pos, 0)
self.check_writes(_seekabs)
def _seekrel(bufio):
pos = bufio.seek(0, 1)
bufio.seek(+1, 1)
bufio.seek(-1, 1)
bufio.seek(pos, 0)
self.check_writes(_seekrel)
def test_writes_and_truncates(self):
self.check_writes(lambda bufio: bufio.truncate(bufio.tell()))
def test_write_non_blocking(self):
raw = self.MockNonBlockWriterIO()
bufio = self.tp(raw, 8)
self.assertEqual(bufio.write(b"abcd"), 4)
self.assertEqual(bufio.write(b"efghi"), 5)
# 1 byte will be written, the rest will be buffered
raw.block_on(b"k")
self.assertEqual(bufio.write(b"jklmn"), 5)
# 8 bytes will be written, 8 will be buffered and the rest will be lost
raw.block_on(b"0")
try:
bufio.write(b"opqrwxyz0123456789")
except self.BlockingIOError as e:
written = e.characters_written
else:
self.fail("BlockingIOError should have been raised")
self.assertEqual(written, 16)
self.assertEqual(raw.pop_written(),
b"abcdefghijklmnopqrwxyz")
self.assertEqual(bufio.write(b"ABCDEFGHI"), 9)
s = raw.pop_written()
# Previously buffered bytes were flushed
self.assertTrue(s.startswith(b"01234567A"), s)
def test_write_and_rewind(self):
raw = io.BytesIO()
bufio = self.tp(raw, 4)
self.assertEqual(bufio.write(b"abcdef"), 6)
self.assertEqual(bufio.tell(), 6)
bufio.seek(0, 0)
self.assertEqual(bufio.write(b"XY"), 2)
bufio.seek(6, 0)
self.assertEqual(raw.getvalue(), b"XYcdef")
self.assertEqual(bufio.write(b"123456"), 6)
bufio.flush()
self.assertEqual(raw.getvalue(), b"XYcdef123456")
def test_flush(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
bufio.flush()
self.assertEqual(b"abc", writer._write_stack[0])
def test_writelines(self):
l = [b'ab', b'cd', b'ef']
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.writelines(l)
bufio.flush()
self.assertEqual(b''.join(writer._write_stack), b'abcdef')
def test_writelines_userlist(self):
l = UserList([b'ab', b'cd', b'ef'])
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.writelines(l)
bufio.flush()
self.assertEqual(b''.join(writer._write_stack), b'abcdef')
def test_writelines_error(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
self.assertRaises(TypeError, bufio.writelines, [1, 2, 3])
self.assertRaises(TypeError, bufio.writelines, None)
self.assertRaises(TypeError, bufio.writelines, 'abc')
def test_destructor(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
del bufio
support.gc_collect()
self.assertEqual(b"abc", writer._write_stack[0])
def test_truncate(self):
# Truncate implicitly flushes the buffer.
with self.open(support.TESTFN, self.write_mode, buffering=0) as raw:
bufio = self.tp(raw, 8)
bufio.write(b"abcdef")
self.assertEqual(bufio.truncate(3), 3)
self.assertEqual(bufio.tell(), 6)
with self.open(support.TESTFN, "rb", buffering=0) as f:
self.assertEqual(f.read(), b"abc")
@support.requires_resource('cpu')
def test_threads(self):
try:
# Write out many bytes from many threads and test they were
# all flushed.
N = 1000
contents = bytes(range(256)) * N
sizes = cycle([1, 19])
n = 0
queue = deque()
while n < len(contents):
size = next(sizes)
queue.append(contents[n:n+size])
n += size
del contents
# We use a real file object because it allows us to
# exercise situations where the GIL is released before
# writing the buffer to the raw streams. This is in addition
# to concurrency issues due to switching threads in the middle
# of Python code.
with self.open(support.TESTFN, self.write_mode, buffering=0) as raw:
bufio = self.tp(raw, 8)
errors = []
def f():
try:
while True:
try:
s = queue.popleft()
except IndexError:
return
bufio.write(s)
except Exception as e:
errors.append(e)
raise
threads = [threading.Thread(target=f) for x in range(20)]
with support.start_threads(threads):
time.sleep(0.02) # yield
self.assertFalse(errors,
"the following exceptions were caught: %r" % errors)
bufio.close()
with self.open(support.TESTFN, "rb") as f:
s = f.read()
for i in range(256):
self.assertEqual(s.count(bytes([i])), N)
finally:
support.unlink(support.TESTFN)
def test_misbehaved_io(self):
rawio = self.MisbehavedRawIO()
bufio = self.tp(rawio, 5)
self.assertRaises(OSError, bufio.seek, 0)
self.assertRaises(OSError, bufio.tell)
self.assertRaises(OSError, bufio.write, b"abcdef")
def test_max_buffer_size_removal(self):
with self.assertRaises(TypeError):
self.tp(self.MockRawIO(), 8, 12)
def test_write_error_on_close(self):
raw = self.MockRawIO()
def bad_write(b):
raise OSError()
raw.write = bad_write
b = self.tp(raw)
b.write(b'spam')
self.assertRaises(OSError, b.close) # exception not swallowed
self.assertTrue(b.closed)
def test_slow_close_from_thread(self):
# Issue #31976
rawio = self.SlowFlushRawIO()
bufio = self.tp(rawio, 8)
t = threading.Thread(target=bufio.close)
t.start()
rawio.in_flush.wait()
self.assertRaises(ValueError, bufio.write, b'spam')
self.assertTrue(bufio.closed)
t.join()
class CBufferedWriterTest(BufferedWriterTest, SizeofTest):
tp = io.BufferedWriter
def test_constructor(self):
BufferedWriterTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_initialization(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.write, b"def")
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.write, b"def")
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
self.assertRaises(ValueError, bufio.write, b"def")
def test_garbage_collection(self):
# C BufferedWriter objects are collected, and collecting them flushes
# all data to disk.
# The Python version has __del__, so it ends up in gc.garbage instead
with support.check_warnings(('', ResourceWarning)):
rawio = self.FileIO(support.TESTFN, "w+b")
f = self.tp(rawio)
f.write(b"123xxx")
f.x = f
wr = weakref.ref(f)
del f
support.gc_collect()
self.assertIsNone(wr(), wr)
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"123xxx")
def test_args_error(self):
# Issue #17275
with self.assertRaisesRegex(TypeError, "BufferedWriter"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
class PyBufferedWriterTest(BufferedWriterTest):
tp = pyio.BufferedWriter
class BufferedRWPairTest(unittest.TestCase):
def test_constructor(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertFalse(pair.closed)
def test_uninitialized(self):
pair = self.tp.__new__(self.tp)
del pair
pair = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
pair.read, 0)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
pair.write, b'')
pair.__init__(self.MockRawIO(), self.MockRawIO())
self.assertEqual(pair.read(0), b'')
self.assertEqual(pair.write(b''), 0)
def test_detach(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertRaises(self.UnsupportedOperation, pair.detach)
def test_constructor_max_buffer_size_removal(self):
with self.assertRaises(TypeError):
self.tp(self.MockRawIO(), self.MockRawIO(), 8, 12)
def test_constructor_with_not_readable(self):
class NotReadable(MockRawIO):
def readable(self):
return False
self.assertRaises(OSError, self.tp, NotReadable(), self.MockRawIO())
def test_constructor_with_not_writeable(self):
class NotWriteable(MockRawIO):
def writable(self):
return False
self.assertRaises(OSError, self.tp, self.MockRawIO(), NotWriteable())
def test_read(self):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertEqual(pair.read(3), b"abc")
self.assertEqual(pair.read(1), b"d")
self.assertEqual(pair.read(), b"ef")
pair = self.tp(self.BytesIO(b"abc"), self.MockRawIO())
self.assertEqual(pair.read(None), b"abc")
def test_readlines(self):
pair = lambda: self.tp(self.BytesIO(b"abc\ndef\nh"), self.MockRawIO())
self.assertEqual(pair().readlines(), [b"abc\n", b"def\n", b"h"])
self.assertEqual(pair().readlines(), [b"abc\n", b"def\n", b"h"])
self.assertEqual(pair().readlines(5), [b"abc\n", b"def\n"])
def test_read1(self):
# .read1() is delegated to the underlying reader object, so this test
# can be shallow.
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertEqual(pair.read1(3), b"abc")
self.assertEqual(pair.read1(), b"def")
def test_readinto(self):
for method in ("readinto", "readinto1"):
with self.subTest(method):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
data = byteslike(b'\0' * 5)
self.assertEqual(getattr(pair, method)(data), 5)
self.assertEqual(bytes(data), b"abcde")
def test_write(self):
w = self.MockRawIO()
pair = self.tp(self.MockRawIO(), w)
pair.write(b"abc")
pair.flush()
buffer = bytearray(b"def")
pair.write(buffer)
buffer[:] = b"***" # Overwrite our copy of the data
pair.flush()
self.assertEqual(w._write_stack, [b"abc", b"def"])
def test_peek(self):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertTrue(pair.peek(3).startswith(b"abc"))
self.assertEqual(pair.read(3), b"abc")
def test_readable(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertTrue(pair.readable())
def test_writeable(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertTrue(pair.writable())
def test_seekable(self):
# BufferedRWPairs are never seekable, even if their readers and writers
# are.
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertFalse(pair.seekable())
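# Hedged standalone illustration of the rule stated above: even when both
# halves are perfectly seekable BytesIO objects, the pair itself reports
# seekable() == False, since it has no single well-defined stream position.
# _demo_rwpair_never_seekable is an ad-hoc name.
def _demo_rwpair_never_seekable():
    import io
    pair = io.BufferedRWPair(io.BytesIO(b"data"), io.BytesIO())
    assert not pair.seekable()
    # seek()/tell() on the pair consequently raise io.UnsupportedOperation.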
# .flush() is delegated to the underlying writer object and has been
# tested in the test_write method.
def test_close_and_closed(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertFalse(pair.closed)
pair.close()
self.assertTrue(pair.closed)
def test_reader_close_error_on_close(self):
def reader_close():
reader_non_existing
reader = self.MockRawIO()
reader.close = reader_close
writer = self.MockRawIO()
pair = self.tp(reader, writer)
with self.assertRaises(NameError) as err:
pair.close()
self.assertIn('reader_non_existing', str(err.exception))
self.assertTrue(pair.closed)
self.assertFalse(reader.closed)
self.assertTrue(writer.closed)
def test_writer_close_error_on_close(self):
def writer_close():
writer_non_existing
reader = self.MockRawIO()
writer = self.MockRawIO()
writer.close = writer_close
pair = self.tp(reader, writer)
with self.assertRaises(NameError) as err:
pair.close()
self.assertIn('writer_non_existing', str(err.exception))
self.assertFalse(pair.closed)
self.assertTrue(reader.closed)
self.assertFalse(writer.closed)
def test_reader_writer_close_error_on_close(self):
def reader_close():
reader_non_existing
def writer_close():
writer_non_existing
reader = self.MockRawIO()
reader.close = reader_close
writer = self.MockRawIO()
writer.close = writer_close
pair = self.tp(reader, writer)
with self.assertRaises(NameError) as err:
pair.close()
self.assertIn('reader_non_existing', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('writer_non_existing', str(err.exception.__context__))
self.assertFalse(pair.closed)
self.assertFalse(reader.closed)
self.assertFalse(writer.closed)
def test_isatty(self):
class SelectableIsAtty(MockRawIO):
def __init__(self, isatty):
MockRawIO.__init__(self)
self._isatty = isatty
def isatty(self):
return self._isatty
pair = self.tp(SelectableIsAtty(False), SelectableIsAtty(False))
self.assertFalse(pair.isatty())
pair = self.tp(SelectableIsAtty(True), SelectableIsAtty(False))
self.assertTrue(pair.isatty())
pair = self.tp(SelectableIsAtty(False), SelectableIsAtty(True))
self.assertTrue(pair.isatty())
pair = self.tp(SelectableIsAtty(True), SelectableIsAtty(True))
self.assertTrue(pair.isatty())
def test_weakref_clearing(self):
brw = self.tp(self.MockRawIO(), self.MockRawIO())
ref = weakref.ref(brw)
brw = None
ref = None # Shouldn't segfault.
class CBufferedRWPairTest(BufferedRWPairTest):
tp = io.BufferedRWPair
class PyBufferedRWPairTest(BufferedRWPairTest):
tp = pyio.BufferedRWPair
class BufferedRandomTest(BufferedReaderTest, BufferedWriterTest):
read_mode = "rb+"
write_mode = "wb+"
def test_constructor(self):
BufferedReaderTest.test_constructor(self)
BufferedWriterTest.test_constructor(self)
def test_uninitialized(self):
BufferedReaderTest.test_uninitialized(self)
BufferedWriterTest.test_uninitialized(self)
def test_read_and_write(self):
raw = self.MockRawIO((b"asdf", b"ghjk"))
rw = self.tp(raw, 8)
self.assertEqual(b"as", rw.read(2))
rw.write(b"ddd")
rw.write(b"eee")
self.assertFalse(raw._write_stack) # Buffer writes
self.assertEqual(b"ghjk", rw.read())
self.assertEqual(b"dddeee", raw._write_stack[0])
def test_seek_and_tell(self):
raw = self.BytesIO(b"asdfghjkl")
rw = self.tp(raw)
self.assertEqual(b"as", rw.read(2))
self.assertEqual(2, rw.tell())
rw.seek(0, 0)
self.assertEqual(b"asdf", rw.read(4))
rw.write(b"123f")
rw.seek(0, 0)
self.assertEqual(b"asdf123fl", rw.read())
self.assertEqual(9, rw.tell())
rw.seek(-4, 2)
self.assertEqual(5, rw.tell())
rw.seek(2, 1)
self.assertEqual(7, rw.tell())
self.assertEqual(b"fl", rw.read(11))
rw.flush()
self.assertEqual(b"asdf123fl", raw.getvalue())
self.assertRaises(TypeError, rw.seek, 0.0)
def check_flush_and_read(self, read_func):
raw = self.BytesIO(b"abcdefghi")
bufio = self.tp(raw)
self.assertEqual(b"ab", read_func(bufio, 2))
bufio.write(b"12")
self.assertEqual(b"ef", read_func(bufio, 2))
self.assertEqual(6, bufio.tell())
bufio.flush()
self.assertEqual(6, bufio.tell())
self.assertEqual(b"ghi", read_func(bufio))
raw.seek(0, 0)
raw.write(b"XYZ")
# flush() resets the read buffer
bufio.flush()
bufio.seek(0, 0)
self.assertEqual(b"XYZ", read_func(bufio, 3))
def test_flush_and_read(self):
self.check_flush_and_read(lambda bufio, *args: bufio.read(*args))
def test_flush_and_readinto(self):
def _readinto(bufio, n=-1):
b = bytearray(n if n >= 0 else 9999)
n = bufio.readinto(b)
return bytes(b[:n])
self.check_flush_and_read(_readinto)
def test_flush_and_peek(self):
def _peek(bufio, n=-1):
# This relies on the fact that the buffer can contain the whole
# raw stream, otherwise peek() can return less.
b = bufio.peek(n)
if n != -1:
b = b[:n]
bufio.seek(len(b), 1)
return b
self.check_flush_and_read(_peek)
def test_flush_and_write(self):
raw = self.BytesIO(b"abcdefghi")
bufio = self.tp(raw)
bufio.write(b"123")
bufio.flush()
bufio.write(b"45")
bufio.flush()
bufio.seek(0, 0)
self.assertEqual(b"12345fghi", raw.getvalue())
self.assertEqual(b"12345fghi", bufio.read())
def test_threads(self):
BufferedReaderTest.test_threads(self)
BufferedWriterTest.test_threads(self)
def test_writes_and_peek(self):
def _peek(bufio):
bufio.peek(1)
self.check_writes(_peek)
def _peek(bufio):
pos = bufio.tell()
bufio.seek(-1, 1)
bufio.peek(1)
bufio.seek(pos, 0)
self.check_writes(_peek)
def test_writes_and_reads(self):
def _read(bufio):
bufio.seek(-1, 1)
bufio.read(1)
self.check_writes(_read)
def test_writes_and_read1s(self):
def _read1(bufio):
bufio.seek(-1, 1)
bufio.read1(1)
self.check_writes(_read1)
def test_writes_and_readintos(self):
def _read(bufio):
bufio.seek(-1, 1)
bufio.readinto(bytearray(1))
self.check_writes(_read)
def test_write_after_readahead(self):
# Issue #6629: writing after the buffer was filled by readahead should
# first rewind the raw stream.
for overwrite_size in [1, 5]:
raw = self.BytesIO(b"A" * 10)
bufio = self.tp(raw, 4)
# Trigger readahead
self.assertEqual(bufio.read(1), b"A")
self.assertEqual(bufio.tell(), 1)
# Overwriting should rewind the raw stream if necessary
bufio.write(b"B" * overwrite_size)
self.assertEqual(bufio.tell(), overwrite_size + 1)
# If the write size was smaller than the buffer size, flush() and
# check that rewind happens.
bufio.flush()
self.assertEqual(bufio.tell(), overwrite_size + 1)
s = raw.getvalue()
self.assertEqual(s,
b"A" + b"B" * overwrite_size + b"A" * (9 - overwrite_size))
def test_write_rewind_write(self):
# Various combinations of reading / writing / seeking backwards / writing again
def mutate(bufio, pos1, pos2):
assert pos2 >= pos1
# Fill the buffer
bufio.seek(pos1)
bufio.read(pos2 - pos1)
bufio.write(b'\x02')
# This writes earlier than the previous write, but still inside
# the buffer.
bufio.seek(pos1)
bufio.write(b'\x01')
b = b"\x80\x81\x82\x83\x84"
for i in range(0, len(b)):
for j in range(i, len(b)):
raw = self.BytesIO(b)
bufio = self.tp(raw, 100)
mutate(bufio, i, j)
bufio.flush()
expected = bytearray(b)
expected[j] = 2
expected[i] = 1
self.assertEqual(raw.getvalue(), expected,
"failed result for i=%d, j=%d" % (i, j))
def test_truncate_after_read_or_write(self):
raw = self.BytesIO(b"A" * 10)
bufio = self.tp(raw, 100)
self.assertEqual(bufio.read(2), b"AA") # the read buffer gets filled
self.assertEqual(bufio.truncate(), 2)
self.assertEqual(bufio.write(b"BB"), 2) # the write buffer increases
self.assertEqual(bufio.truncate(), 4)
def test_misbehaved_io(self):
BufferedReaderTest.test_misbehaved_io(self)
BufferedWriterTest.test_misbehaved_io(self)
def test_interleaved_read_write(self):
# Test for issue #12213
with self.BytesIO(b'abcdefgh') as raw:
with self.tp(raw, 100) as f:
f.write(b"1")
self.assertEqual(f.read(1), b'b')
f.write(b'2')
self.assertEqual(f.read1(1), b'd')
f.write(b'3')
buf = bytearray(1)
f.readinto(buf)
self.assertEqual(buf, b'f')
f.write(b'4')
self.assertEqual(f.peek(1), b'h')
f.flush()
self.assertEqual(raw.getvalue(), b'1b2d3f4h')
with self.BytesIO(b'abc') as raw:
with self.tp(raw, 100) as f:
self.assertEqual(f.read(1), b'a')
f.write(b"2")
self.assertEqual(f.read(1), b'c')
f.flush()
self.assertEqual(raw.getvalue(), b'a2c')
def test_interleaved_readline_write(self):
with self.BytesIO(b'ab\ncdef\ng\n') as raw:
with self.tp(raw) as f:
f.write(b'1')
self.assertEqual(f.readline(), b'b\n')
f.write(b'2')
self.assertEqual(f.readline(), b'def\n')
f.write(b'3')
self.assertEqual(f.readline(), b'\n')
f.flush()
self.assertEqual(raw.getvalue(), b'1b\n2def\n3\n')
# You can't construct a BufferedRandom over a non-seekable stream.
test_unseekable = None
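# Hedged standalone illustration of the comment above: constructing a
# BufferedRandom over a raw stream whose seekable() is False fails up front,
# because random access needs seek support underneath.  Unseekable and
# _demo_buffered_random_needs_seekable are ad-hoc names.
def _demo_buffered_random_needs_seekable():
    import io

    class Unseekable(io.RawIOBase):
        def readable(self):
            return True
        def writable(self):
            return True
        def seekable(self):
            return False
        def readinto(self, b):
            return 0
        def write(self, b):
            return len(b)

    try:
        io.BufferedRandom(Unseekable())
    except io.UnsupportedOperation:
        pass                        # rejected: the raw stream is not seekable
    else:
        raise AssertionError("BufferedRandom accepted an unseekable stream")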
class CBufferedRandomTest(BufferedRandomTest, SizeofTest):
tp = io.BufferedRandom
def test_constructor(self):
BufferedRandomTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
# than 2 GiB RAM and a 64-bit kernel.
if sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_garbage_collection(self):
CBufferedReaderTest.test_garbage_collection(self)
CBufferedWriterTest.test_garbage_collection(self)
def test_args_error(self):
# Issue #17275
with self.assertRaisesRegex(TypeError, "BufferedRandom"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
class PyBufferedRandomTest(BufferedRandomTest):
tp = pyio.BufferedRandom
# To fully exercise seek/tell, the StatefulIncrementalDecoder has these
# properties:
# - A single output character can correspond to many bytes of input.
# - The number of input bytes to complete the character can be
# undetermined until the last input byte is received.
# - The number of input bytes can vary depending on previous input.
# - A single input byte can correspond to many characters of output.
# - The number of output characters can be undetermined until the
# last input byte is received.
# - The number of output characters can vary depending on previous input.
class StatefulIncrementalDecoder(codecs.IncrementalDecoder):
"""
For testing seek/tell behavior with a stateful, buffering decoder.
Input is a sequence of words. Words may be fixed-length (length set
by input) or variable-length (period-terminated). In variable-length
mode, extra periods are ignored. Possible words are:
- 'i' followed by a number sets the input length, I (maximum 99).
When I is set to 0, words are variable-length (period-terminated).
- 'o' followed by a number sets the output length, O (maximum 99).
- Any other word is converted into a word followed by a period on
the output. The output word consists of the input word truncated
or padded out with hyphens to make its length equal to O. If O
is 0, the word is output verbatim without truncating or padding.
I and O are initially set to 1. When I changes, any buffered input is
re-scanned according to the new I. EOF also terminates the last word.
"""
def __init__(self, errors='strict'):
codecs.IncrementalDecoder.__init__(self, errors)
self.reset()
def __repr__(self):
return '<SID %x>' % id(self)
def reset(self):
self.i = 1
self.o = 1
self.buffer = bytearray()
def getstate(self):
i, o = self.i ^ 1, self.o ^ 1 # so that flags = 0 after reset()
return bytes(self.buffer), i*100 + o
def setstate(self, state):
buffer, io = state
self.buffer = bytearray(buffer)
i, o = divmod(io, 100)
self.i, self.o = i ^ 1, o ^ 1
def decode(self, input, final=False):
output = ''
for b in input:
if self.i == 0: # variable-length, terminated with period
if b == ord('.'):
if self.buffer:
output += self.process_word()
else:
self.buffer.append(b)
else: # fixed-length, terminate after self.i bytes
self.buffer.append(b)
if len(self.buffer) == self.i:
output += self.process_word()
if final and self.buffer: # EOF terminates the last word
output += self.process_word()
return output
def process_word(self):
output = ''
if self.buffer[0] == ord('i'):
self.i = min(99, int(self.buffer[1:] or 0)) # set input length
elif self.buffer[0] == ord('o'):
self.o = min(99, int(self.buffer[1:] or 0)) # set output length
else:
output = self.buffer.decode('ascii')
if len(output) < self.o:
output += '-'*self.o # pad out with hyphens
if self.o:
output = output[:self.o] # truncate to output length
output += '.'
self.buffer = bytearray()
return output
codecEnabled = False
@classmethod
def lookupTestDecoder(cls, name):
if cls.codecEnabled and name == 'test_decoder':
latin1 = codecs.lookup('latin-1')
return codecs.CodecInfo(
name='test_decoder', encode=latin1.encode, decode=None,
incrementalencoder=None,
streamreader=None, streamwriter=None,
incrementaldecoder=cls)
# Register the previous decoder for testing.
# Disabled by default, tests will enable it.
codecs.register(StatefulIncrementalDecoder.lookupTestDecoder)
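# Hedged walk-through of the toy protocol implemented above (nothing here
# runs at import time): with the initial I=1, O=1 every input byte becomes
# its own one-character word followed by a period, while 'i'/'o' words change
# the input/output word lengths.  _demo_stateful_decoder_protocol is an
# ad-hoc name.
def _demo_stateful_decoder_protocol():
    d = StatefulIncrementalDecoder()
    assert d.decode(b"ab") == "a.b."                         # I=1, O=1
    d = StatefulIncrementalDecoder()
    assert d.decode(b"i.o6.xyz.", final=True) == "xyz---."   # pad output to O=6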
class StatefulIncrementalDecoderTest(unittest.TestCase):
"""
Make sure the StatefulIncrementalDecoder actually works.
"""
test_cases = [
# I=1, O=1 (fixed-length input == fixed-length output)
(b'abcd', False, 'a.b.c.d.'),
# I=0, O=0 (variable-length input, variable-length output)
(b'oiabcd', True, 'abcd.'),
# I=0, O=0 (should ignore extra periods)
(b'oi...abcd...', True, 'abcd.'),
# I=0, O=6 (variable-length input, fixed-length output)
(b'i.o6.x.xyz.toolongtofit.', False, 'x-----.xyz---.toolon.'),
# I=2, O=6 (fixed-length input < fixed-length output)
(b'i.i2.o6xyz', True, 'xy----.z-----.'),
# I=6, O=3 (fixed-length input > fixed-length output)
(b'i.o3.i6.abcdefghijklmnop', True, 'abc.ghi.mno.'),
# I=0, then 3; O=29, then 15 (with longer output)
(b'i.o29.a.b.cde.o15.abcdefghijabcdefghij.i3.a.b.c.d.ei00k.l.m', True,
'a----------------------------.' +
'b----------------------------.' +
'cde--------------------------.' +
'abcdefghijabcde.' +
'a.b------------.' +
'.c.------------.' +
'd.e------------.' +
'k--------------.' +
'l--------------.' +
'm--------------.')
]
def test_decoder(self):
# Try a few one-shot test cases.
for input, eof, output in self.test_cases:
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(input, eof), output)
# Also test an unfinished decode, followed by forcing EOF.
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(b'oiabcd'), '')
self.assertEqual(d.decode(b'', 1), 'abcd.')
class TextIOWrapperTest(unittest.TestCase):
def setUp(self):
self.testdata = b"AAA\r\nBBB\rCCC\r\nDDD\nEEE\r\n"
self.normalized = b"AAA\nBBB\nCCC\nDDD\nEEE\n".decode("ascii")
support.unlink(support.TESTFN)
def tearDown(self):
support.unlink(support.TESTFN)
def test_constructor(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b)
t.__init__(b, encoding="latin-1", newline="\r\n")
self.assertEqual(t.encoding, "latin-1")
self.assertEqual(t.line_buffering, False)
t.__init__(b, encoding="utf-8", line_buffering=True)
self.assertEqual(t.encoding, "utf-8")
self.assertEqual(t.line_buffering, True)
self.assertEqual("\xe9\n", t.readline())
self.assertRaises(TypeError, t.__init__, b, newline=42)
self.assertRaises(ValueError, t.__init__, b, newline='xyzzy')
def test_uninitialized(self):
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
del t
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
t.read, 0)
t.__init__(self.MockRawIO())
self.assertEqual(t.read(0), '')
def test_non_text_encoding_codecs_are_rejected(self):
# Ensure the constructor complains if passed a codec that isn't
# marked as a text encoding
# http://bugs.python.org/issue20404
r = self.BytesIO()
b = self.BufferedWriter(r)
with self.assertRaisesRegex(LookupError, "is not a text encoding"):
self.TextIOWrapper(b, encoding="hex")
def test_detach(self):
r = self.BytesIO()
b = self.BufferedWriter(r)
t = self.TextIOWrapper(b)
self.assertIs(t.detach(), b)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("howdy")
self.assertFalse(r.getvalue())
t.detach()
self.assertEqual(r.getvalue(), b"howdy")
self.assertRaises(ValueError, t.detach)
# Operations independent of the detached stream should still work
repr(t)
self.assertEqual(t.encoding, "ascii")
self.assertEqual(t.errors, "strict")
self.assertFalse(t.line_buffering)
self.assertFalse(t.write_through)
def test_repr(self):
raw = self.BytesIO("hello".encode("utf-8"))
b = self.BufferedReader(raw)
t = self.TextIOWrapper(b, encoding="utf-8")
modname = self.TextIOWrapper.__module__
self.assertEqual(repr(t),
"<%s.TextIOWrapper encoding='utf-8'>" % modname)
raw.name = "dummy"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name='dummy' encoding='utf-8'>" % modname)
t.mode = "r"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name='dummy' mode='r' encoding='utf-8'>" % modname)
raw.name = b"dummy"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name=b'dummy' mode='r' encoding='utf-8'>" % modname)
t.buffer.detach()
repr(t) # Should not raise an exception
def test_recursive_repr(self):
# Issue #25455
raw = self.BytesIO()
t = self.TextIOWrapper(raw)
with support.swap_attr(raw, 'name', t):
try:
repr(t) # Should not crash
except RuntimeError:
pass
def test_line_buffering(self):
r = self.BytesIO()
b = self.BufferedWriter(r, 1000)
t = self.TextIOWrapper(b, newline="\n", line_buffering=True)
t.write("X")
self.assertEqual(r.getvalue(), b"") # No flush happened
t.write("Y\nZ")
self.assertEqual(r.getvalue(), b"XY\nZ") # All got flushed
t.write("A\rB")
self.assertEqual(r.getvalue(), b"XY\nZA\rB")
def test_reconfigure_line_buffering(self):
r = self.BytesIO()
b = self.BufferedWriter(r, 1000)
t = self.TextIOWrapper(b, newline="\n", line_buffering=False)
t.write("AB\nC")
self.assertEqual(r.getvalue(), b"")
t.reconfigure(line_buffering=True) # implicit flush
self.assertEqual(r.getvalue(), b"AB\nC")
t.write("DEF\nG")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nG")
t.write("H")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nG")
t.reconfigure(line_buffering=False) # implicit flush
self.assertEqual(r.getvalue(), b"AB\nCDEF\nGH")
t.write("IJ")
self.assertEqual(r.getvalue(), b"AB\nCDEF\nGH")
# Passing no value (or None) keeps the current setting
t.reconfigure()
t.reconfigure(line_buffering=None)
self.assertEqual(t.line_buffering, False)
t.reconfigure(line_buffering=True)
t.reconfigure()
t.reconfigure(line_buffering=None)
self.assertEqual(t.line_buffering, True)
def test_default_encoding(self):
old_environ = dict(os.environ)
try:
# try to get a user preferred encoding different from the current
# locale encoding to check that TextIOWrapper() uses the current
# locale encoding and not the user preferred encoding
for key in ('LC_ALL', 'LANG', 'LC_CTYPE'):
if key in os.environ:
del os.environ[key]
current_locale_encoding = locale.getpreferredencoding(False)
b = self.BytesIO()
t = self.TextIOWrapper(b)
self.assertEqual(t.encoding, current_locale_encoding)
finally:
os.environ.clear()
os.environ.update(old_environ)
@support.cpython_only
def test_device_encoding(self):
# Issue 15989
import _testcapi
b = self.BytesIO()
b.fileno = lambda: _testcapi.INT_MAX + 1
self.assertRaises(OverflowError, self.TextIOWrapper, b)
b.fileno = lambda: _testcapi.UINT_MAX + 1
self.assertRaises(OverflowError, self.TextIOWrapper, b)
def test_encoding(self):
# Check the encoding attribute is always set, and valid
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="utf-8")
self.assertEqual(t.encoding, "utf-8")
t = self.TextIOWrapper(b)
self.assertIsNotNone(t.encoding)
codecs.lookup(t.encoding)
def test_encoding_errors_reading(self):
# (1) default
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.read)
# (2) explicit strict
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.read)
# (3) ignore
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore")
self.assertEqual(t.read(), "abc\n\n")
# (4) replace
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="replace")
self.assertEqual(t.read(), "abc\n\ufffd\n")
def test_encoding_errors_writing(self):
# (1) default
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.write, "\xff")
# (2) explicit strict
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.write, "\xff")
# (3) ignore
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abcdef\n")
# (4) replace
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="replace",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abc?def\n")
def test_newlines(self):
input_lines = [ "unix\n", "windows\r\n", "os9\r", "last\n", "nonl" ]
tests = [
[ None, [ 'unix\n', 'windows\n', 'os9\n', 'last\n', 'nonl' ] ],
[ '', input_lines ],
[ '\n', [ "unix\n", "windows\r\n", "os9\rlast\n", "nonl" ] ],
[ '\r\n', [ "unix\nwindows\r\n", "os9\rlast\nnonl" ] ],
[ '\r', [ "unix\nwindows\r", "\nos9\r", "last\nnonl" ] ],
]
encodings = (
'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
# Try a range of buffer sizes to test the case where \r is the last
# character in TextIOWrapper._pending_line.
for encoding in encodings:
# XXX: str.encode() should return bytes
data = bytes(''.join(input_lines).encode(encoding))
for do_reads in (False, True):
for bufsize in range(1, 10):
for newline, exp_lines in tests:
bufio = self.BufferedReader(self.BytesIO(data), bufsize)
textio = self.TextIOWrapper(bufio, newline=newline,
encoding=encoding)
if do_reads:
got_lines = []
while True:
c2 = textio.read(2)
if c2 == '':
break
self.assertEqual(len(c2), 2)
got_lines.append(c2 + textio.readline())
else:
got_lines = list(textio)
for got_line, exp_line in zip(got_lines, exp_lines):
self.assertEqual(got_line, exp_line)
self.assertEqual(len(got_lines), len(exp_lines))
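# Hedged mini-sketch of the newline matrix exercised above: with newline=None
# every "\r\n" and "\r" is translated to "\n" on input, while newline=""
# passes the original endings through untouched.  _demo_newline_modes is an
# ad-hoc name.
def _demo_newline_modes():
    import io
    data = b"one\r\ntwo\rthree\n"
    universal = io.TextIOWrapper(io.BytesIO(data), encoding="ascii")
    assert universal.read() == "one\ntwo\nthree\n"
    untranslated = io.TextIOWrapper(io.BytesIO(data), encoding="ascii", newline="")
    assert untranslated.read() == "one\r\ntwo\rthree\n"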
def test_newlines_input(self):
testdata = b"AAA\nBB\x00B\nCCC\rDDD\rEEE\r\nFFF\r\nGGG"
normalized = testdata.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
for newline, expected in [
(None, normalized.decode("ascii").splitlines(keepends=True)),
("", testdata.decode("ascii").splitlines(keepends=True)),
("\n", ["AAA\n", "BB\x00B\n", "CCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r\n", ["AAA\nBB\x00B\nCCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r", ["AAA\nBB\x00B\nCCC\r", "DDD\r", "EEE\r", "\nFFF\r", "\nGGG"]),
]:
buf = self.BytesIO(testdata)
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
self.assertEqual(txt.readlines(), expected)
txt.seek(0)
self.assertEqual(txt.read(), "".join(expected))
def test_newlines_output(self):
testdict = {
"": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\n": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\r": b"AAA\rBBB\rCCC\rX\rY\r\rZ",
"\r\n": b"AAA\r\nBBB\r\nCCC\r\nX\rY\r\r\nZ",
}
tests = [(None, testdict[os.linesep])] + sorted(testdict.items())
for newline, expected in tests:
buf = self.BytesIO()
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
txt.write("AAA\nB")
txt.write("BB\nCCC\n")
txt.write("X\rY\r\nZ")
txt.flush()
self.assertEqual(buf.closed, False)
self.assertEqual(buf.getvalue(), expected)
def test_destructor(self):
l = []
base = self.BytesIO
class MyBytesIO(base):
def close(self):
l.append(self.getvalue())
base.close(self)
b = MyBytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
t.write("abc")
del t
support.gc_collect()
self.assertEqual([b"abc"], l)
def test_override_destructor(self):
record = []
class MyTextIO(self.TextIOWrapper):
def __del__(self):
record.append(1)
try:
f = super().__del__
except AttributeError:
pass
else:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
b = self.BytesIO()
t = MyTextIO(b, encoding="ascii")
del t
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_error_through_destructor(self):
# Test that the exception state is not modified by a destructor,
# even if close() fails.
rawio = self.CloseFailureIO()
def f():
self.TextIOWrapper(rawio).xyzzy
with support.captured_output("stderr") as s:
self.assertRaises(AttributeError, f)
s = s.getvalue().strip()
if s:
# The destructor *may* have printed an unraisable error, check it
self.assertEqual(len(s.splitlines()), 1)
self.assertTrue(s.startswith("Exception OSError: "), s)
self.assertTrue(s.endswith(" ignored"), s)
# Systematic tests of the text I/O API
def test_basic_io(self):
for chunksize in (1, 2, 3, 4, 5, 15, 16, 17, 31, 32, 33, 63, 64, 65):
for enc in "ascii", "latin-1", "utf-8" :# , "utf-16-be", "utf-16-le":
f = self.open(support.TESTFN, "w+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.write("abc"), 3)
f.close()
f = self.open(support.TESTFN, "r+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.tell(), 0)
self.assertEqual(f.read(), "abc")
cookie = f.tell()
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.read(None), "abc")
f.seek(0)
self.assertEqual(f.read(2), "ab")
self.assertEqual(f.read(1), "c")
self.assertEqual(f.read(1), "")
self.assertEqual(f.read(), "")
self.assertEqual(f.tell(), cookie)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.seek(0, 2), cookie)
self.assertEqual(f.write("def"), 3)
self.assertEqual(f.seek(cookie), cookie)
self.assertEqual(f.read(), "def")
if enc.startswith("utf"):
self.multi_line_test(f, enc)
f.close()
def multi_line_test(self, f, enc):
f.seek(0)
f.truncate()
sample = "s\xff\u0fff\uffff"
wlines = []
for size in (0, 1, 2, 3, 4, 5, 30, 31, 32, 33, 62, 63, 64, 65, 1000):
chars = []
for i in range(size):
chars.append(sample[i % len(sample)])
line = "".join(chars) + "\n"
wlines.append((f.tell(), line))
f.write(line)
f.seek(0)
rlines = []
while True:
pos = f.tell()
line = f.readline()
if not line:
break
rlines.append((pos, line))
self.assertEqual(rlines, wlines)
def test_telling(self):
f = self.open(support.TESTFN, "w+", encoding="utf-8")
p0 = f.tell()
f.write("\xff\n")
p1 = f.tell()
f.write("\xff\n")
p2 = f.tell()
f.seek(0)
self.assertEqual(f.tell(), p0)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p1)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p2)
f.seek(0)
for line in f:
self.assertEqual(line, "\xff\n")
self.assertRaises(OSError, f.tell)
self.assertEqual(f.tell(), p2)
f.close()
def test_seeking(self):
chunk_size = _default_chunk_size()
prefix_size = chunk_size - 2
u_prefix = "a" * prefix_size
prefix = bytes(u_prefix.encode("utf-8"))
self.assertEqual(len(u_prefix), len(prefix))
u_suffix = "\u8888\n"
suffix = bytes(u_suffix.encode("utf-8"))
line = prefix + suffix
with self.open(support.TESTFN, "wb") as f:
f.write(line*2)
with self.open(support.TESTFN, "r", encoding="utf-8") as f:
s = f.read(prefix_size)
self.assertEqual(s, str(prefix, "ascii"))
self.assertEqual(f.tell(), prefix_size)
self.assertEqual(f.readline(), u_suffix)
def test_seeking_too(self):
# Regression test for a specific bug
data = b'\xe0\xbf\xbf\n'
with self.open(support.TESTFN, "wb") as f:
f.write(data)
with self.open(support.TESTFN, "r", encoding="utf-8") as f:
f._CHUNK_SIZE # Just test that it exists
f._CHUNK_SIZE = 2
f.readline()
f.tell()
def test_seek_and_tell(self):
# Test seek/tell using the StatefulIncrementalDecoder.
# Make test faster by doing smaller seeks
CHUNK_SIZE = 128
def test_seek_and_tell_with_data(data, min_pos=0):
"""Tell/seek to various points within a data stream and ensure
that the decoded data returned by read() is consistent."""
f = self.open(support.TESTFN, 'wb')
f.write(data)
f.close()
f = self.open(support.TESTFN, encoding='test_decoder')
f._CHUNK_SIZE = CHUNK_SIZE
decoded = f.read()
f.close()
for i in range(min_pos, len(decoded) + 1): # seek positions
for j in [1, 5, len(decoded) - i]: # read lengths
f = self.open(support.TESTFN, encoding='test_decoder')
self.assertEqual(f.read(i), decoded[:i])
cookie = f.tell()
self.assertEqual(f.read(j), decoded[i:i + j])
f.seek(cookie)
self.assertEqual(f.read(), decoded[i:])
f.close()
# Enable the test decoder.
StatefulIncrementalDecoder.codecEnabled = 1
# Run the tests.
try:
# Try each test case.
for input, _, _ in StatefulIncrementalDecoderTest.test_cases:
test_seek_and_tell_with_data(input)
# Position each test case so that it crosses a chunk boundary.
for input, _, _ in StatefulIncrementalDecoderTest.test_cases:
offset = CHUNK_SIZE - len(input)//2
prefix = b'.'*offset
# Don't bother seeking into the prefix (takes too long).
min_pos = offset*2
test_seek_and_tell_with_data(prefix + input, min_pos)
# Ensure our test decoder won't interfere with subsequent tests.
finally:
StatefulIncrementalDecoder.codecEnabled = 0
def test_encoded_writes(self):
data = "1234567890"
tests = ("utf-16",
"utf-16-le",
"utf-16-be",
"utf-32",
"utf-32-le",
"utf-32-be")
for encoding in tests:
buf = self.BytesIO()
f = self.TextIOWrapper(buf, encoding=encoding)
# Check if the BOM is written only once (see issue1753).
f.write(data)
f.write(data)
f.seek(0)
self.assertEqual(f.read(), data * 2)
f.seek(0)
self.assertEqual(f.read(), data * 2)
self.assertEqual(buf.getvalue(), (data * 2).encode(encoding))
def test_unreadable(self):
class UnReadable(self.BytesIO):
def readable(self):
return False
txt = self.TextIOWrapper(UnReadable())
self.assertRaises(OSError, txt.read)
def test_read_one_by_one(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\r\nBB"))
reads = ""
while True:
c = txt.read(1)
if not c:
break
reads += c
self.assertEqual(reads, "AA\nBB")
def test_readlines(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\nBB\nCC"))
self.assertEqual(txt.readlines(), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(None), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(5), ["AA\n", "BB\n"])
# read in amounts equal to TextIOWrapper._CHUNK_SIZE which is 128.
def test_read_by_chunk(self):
# make sure "\r\n" straddles 128 char boundary.
txt = self.TextIOWrapper(self.BytesIO(b"A" * 127 + b"\r\nB"))
reads = ""
while True:
c = txt.read(128)
if not c:
break
reads += c
self.assertEqual(reads, "A"*127+"\nB")
def test_writelines(self):
l = ['ab', 'cd', 'ef']
buf = self.BytesIO()
txt = self.TextIOWrapper(buf)
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_userlist(self):
l = UserList(['ab', 'cd', 'ef'])
buf = self.BytesIO()
txt = self.TextIOWrapper(buf)
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_error(self):
txt = self.TextIOWrapper(self.BytesIO())
self.assertRaises(TypeError, txt.writelines, [1, 2, 3])
self.assertRaises(TypeError, txt.writelines, None)
self.assertRaises(TypeError, txt.writelines, b'abc')
def test_issue1395_1(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
# read one char at a time
reads = ""
while True:
c = txt.read(1)
if not c:
break
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_2(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = ""
while True:
c = txt.read(4)
if not c:
break
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_3(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read(4)
reads += txt.readline()
reads += txt.readline()
reads += txt.readline()
self.assertEqual(reads, self.normalized)
def test_issue1395_4(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read()
self.assertEqual(reads, self.normalized)
def test_issue1395_5(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
pos = txt.tell()
txt.seek(0)
txt.seek(pos)
self.assertEqual(txt.read(4), "BBB\n")
def test_issue2282(self):
buffer = self.BytesIO(self.testdata)
txt = self.TextIOWrapper(buffer, encoding="ascii")
self.assertEqual(buffer.seekable(), txt.seekable())
def test_append_bom(self):
# The BOM is not written again when appending to a non-empty file
filename = support.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
pos = f.tell()
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaa'.encode(charset))
with self.open(filename, 'a', encoding=charset) as f:
f.write('xxx')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_seek_bom(self):
# Same test, but when seeking manually
filename = support.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
pos = f.tell()
with self.open(filename, 'r+', encoding=charset) as f:
f.seek(pos)
f.write('zzz')
f.seek(0)
f.write('bbb')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'bbbzzz'.encode(charset))
def test_seek_append_bom(self):
# Same test, but first seek to the start and then to the end
filename = support.TESTFN
for charset in ('utf-8-sig', 'utf-16', 'utf-32'):
with self.open(filename, 'w', encoding=charset) as f:
f.write('aaa')
with self.open(filename, 'a', encoding=charset) as f:
f.seek(0)
f.seek(0, self.SEEK_END)
f.write('xxx')
with self.open(filename, 'rb') as f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_errors_property(self):
with self.open(support.TESTFN, "w") as f:
self.assertEqual(f.errors, "strict")
with self.open(support.TESTFN, "w", errors="replace") as f:
self.assertEqual(f.errors, "replace")
@support.no_tracing
def test_threads_write(self):
# Issue6750: concurrent writes could duplicate data
event = threading.Event()
with self.open(support.TESTFN, "w", buffering=1) as f:
def run(n):
text = "Thread%03d\n" % n
event.wait()
f.write(text)
threads = [threading.Thread(target=run, args=(x,))
for x in range(20)]
with support.start_threads(threads, event.set):
time.sleep(0.02)
with self.open(support.TESTFN) as f:
content = f.read()
for n in range(20):
self.assertEqual(content.count("Thread%03d\n" % n), 1)
def test_flush_error_on_close(self):
# Test that text file is closed despite failed flush
# and that flush() is called before file closed.
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
closed = []
def bad_flush():
closed[:] = [txt.closed, txt.buffer.closed]
raise OSError()
txt.flush = bad_flush
self.assertRaises(OSError, txt.close) # exception not swallowed
self.assertTrue(txt.closed)
self.assertTrue(txt.buffer.closed)
self.assertTrue(closed) # flush() called
self.assertFalse(closed[0]) # flush() called before file closed
self.assertFalse(closed[1])
txt.flush = lambda: None # break reference loop
def test_close_error_on_close(self):
buffer = self.BytesIO(self.testdata)
def bad_flush():
raise OSError('flush')
def bad_close():
raise OSError('close')
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
with self.assertRaises(OSError) as err: # exception not swallowed
txt.close()
self.assertEqual(err.exception.args, ('close',))
self.assertIsInstance(err.exception.__context__, OSError)
self.assertEqual(err.exception.__context__.args, ('flush',))
self.assertFalse(txt.closed)
def test_nonnormalized_close_error_on_close(self):
# Issue #21677
buffer = self.BytesIO(self.testdata)
def bad_flush():
raise non_existing_flush
def bad_close():
raise non_existing_close
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
with self.assertRaises(NameError) as err: # exception not swallowed
txt.close()
self.assertIn('non_existing_close', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('non_existing_flush', str(err.exception.__context__))
self.assertFalse(txt.closed)
def test_multi_close(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt.close()
txt.close()
txt.close()
self.assertRaises(ValueError, txt.flush)
def test_unseekable(self):
txt = self.TextIOWrapper(self.MockUnseekableIO(self.testdata))
self.assertRaises(self.UnsupportedOperation, txt.tell)
self.assertRaises(self.UnsupportedOperation, txt.seek, 0)
def test_readonly_attributes(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
buf = self.BytesIO(self.testdata)
with self.assertRaises(AttributeError):
txt.buffer = buf
def test_rawio(self):
# Issue #12591: TextIOWrapper must work with raw I/O objects, so
# that subprocess.Popen() can have the required unbuffered
# semantics with universal_newlines=True.
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
# Reads
self.assertEqual(txt.read(4), 'abcd')
self.assertEqual(txt.readline(), 'efghi\n')
self.assertEqual(list(txt), ['jkl\n', 'opq\n'])
def test_rawio_write_through(self):
# Issue #12591: with write_through=True, writes don't need a flush
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n',
write_through=True)
txt.write('1')
txt.write('23\n4')
txt.write('5')
self.assertEqual(b''.join(raw._write_stack), b'123\n45')
def test_bufio_write_through(self):
# Issue #21396: write_through=True doesn't force a flush()
# on the underlying binary buffered object.
flush_called, write_called = [], []
class BufferedWriter(self.BufferedWriter):
def flush(self, *args, **kwargs):
flush_called.append(True)
return super().flush(*args, **kwargs)
def write(self, *args, **kwargs):
write_called.append(True)
return super().write(*args, **kwargs)
rawio = self.BytesIO()
data = b"a"
bufio = BufferedWriter(rawio, len(data)*2)
textio = self.TextIOWrapper(bufio, encoding='ascii',
write_through=True)
# write to the buffered io but don't overflow the buffer
text = data.decode('ascii')
textio.write(text)
# buffer.flush is not called with write_through=True
self.assertFalse(flush_called)
# buffer.write *is* called with write_through=True
self.assertTrue(write_called)
self.assertEqual(rawio.getvalue(), b"") # no flush
write_called = [] # reset
textio.write(text * 10) # total content is larger than bufio buffer
self.assertTrue(write_called)
self.assertEqual(rawio.getvalue(), data * 11) # all flushed
def test_reconfigure_write_through(self):
raw = self.MockRawIO([])
t = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
t.write('1')
t.reconfigure(write_through=True) # implied flush
self.assertEqual(t.write_through, True)
self.assertEqual(b''.join(raw._write_stack), b'1')
t.write('23')
self.assertEqual(b''.join(raw._write_stack), b'123')
t.reconfigure(write_through=False)
self.assertEqual(t.write_through, False)
t.write('45')
t.flush()
self.assertEqual(b''.join(raw._write_stack), b'12345')
# Keeping default value
t.reconfigure()
t.reconfigure(write_through=None)
self.assertEqual(t.write_through, False)
t.reconfigure(write_through=True)
t.reconfigure()
t.reconfigure(write_through=None)
self.assertEqual(t.write_through, True)
def test_read_nonbytes(self):
# Issue #17106
# Crash when underlying read() returns non-bytes
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.read, 1)
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.readline)
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.read)
def test_illegal_encoder(self):
# Issue 31271: Calling write() while the return value of encoder's
# encode() is invalid shouldn't cause an assertion failure.
rot13 = codecs.lookup("rot13")
with support.swap_attr(rot13, '_is_text_encoding', True):
t = io.TextIOWrapper(io.BytesIO(b'foo'), encoding="rot13")
self.assertRaises(TypeError, t.write, 'bar')
def test_illegal_decoder(self):
# Issue #17106
# Bypass the early encoding check added in issue 20404
def _make_illegal_wrapper():
quopri = codecs.lookup("quopri")
quopri._is_text_encoding = True
try:
t = self.TextIOWrapper(self.BytesIO(b'aaaaaa'),
newline='\n', encoding="quopri")
finally:
quopri._is_text_encoding = False
return t
# Crash when decoder returns non-string
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read, 1)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.readline)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read)
# Issue 31243: calling read() while the return value of decoder's
# getstate() is invalid should neither crash the interpreter nor
# raise a SystemError.
def _make_very_illegal_wrapper(getstate_ret_val):
class BadDecoder:
def getstate(self):
return getstate_ret_val
def _get_bad_decoder(dummy):
return BadDecoder()
quopri = codecs.lookup("quopri")
with support.swap_attr(quopri, 'incrementaldecoder',
_get_bad_decoder):
return _make_illegal_wrapper()
t = _make_very_illegal_wrapper(42)
self.assertRaises(TypeError, t.read, 42)
t = _make_very_illegal_wrapper(())
self.assertRaises(TypeError, t.read, 42)
t = _make_very_illegal_wrapper((1, 2))
self.assertRaises(TypeError, t.read, 42)
def _check_create_at_shutdown(self, **kwargs):
# Issue #20037: creating a TextIOWrapper at shutdown
# shouldn't crash the interpreter.
iomod = self.io.__name__
code = """if 1:
import codecs
import {iomod} as io
# Avoid looking up codecs at shutdown
codecs.lookup('utf-8')
class C:
def __init__(self):
self.buf = io.BytesIO()
def __del__(self):
io.TextIOWrapper(self.buf, **{kwargs})
print("ok")
c = C()
""".format(iomod=iomod, kwargs=kwargs)
return assert_python_ok("-c", code)
@support.requires_type_collecting
def test_create_at_shutdown_without_encoding(self):
rc, out, err = self._check_create_at_shutdown()
if err:
# Can error out with a RuntimeError if the module state
# isn't found.
self.assertIn(self.shutdown_error, err.decode())
else:
self.assertEqual("ok", out.decode().strip())
@support.requires_type_collecting
def test_create_at_shutdown_with_encoding(self):
rc, out, err = self._check_create_at_shutdown(encoding='utf-8',
errors='strict')
self.assertFalse(err)
self.assertEqual("ok", out.decode().strip())
def test_read_byteslike(self):
r = MemviewBytesIO(b'Just some random string\n')
t = self.TextIOWrapper(r, 'utf-8')
        # TextIOWrapper will not read the full string, because
# we truncate it to a multiple of the native int size
# so that we can construct a more complex memoryview.
bytes_val = _to_memoryview(r.getvalue()).tobytes()
self.assertEqual(t.read(200), bytes_val.decode('utf-8'))
def test_issue22849(self):
class F(object):
def readable(self): return True
def writable(self): return True
def seekable(self): return True
for i in range(10):
try:
self.TextIOWrapper(F(), encoding='utf-8')
except Exception:
pass
F.tell = lambda x: 0
t = self.TextIOWrapper(F(), encoding='utf-8')
class MemviewBytesIO(io.BytesIO):
'''A BytesIO object whose read method returns memoryviews
rather than bytes'''
def read1(self, len_):
return _to_memoryview(super().read1(len_))
def read(self, len_):
return _to_memoryview(super().read(len_))
def _to_memoryview(buf):
'''Convert bytes-object *buf* to a non-trivial memoryview'''
arr = array.array('i')
idx = len(buf) - len(buf) % arr.itemsize
arr.frombytes(buf[:idx])
return memoryview(arr)
class CTextIOWrapperTest(TextIOWrapperTest):
io = io
shutdown_error = "RuntimeError: could not find io module state"
def test_initialization(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b)
self.assertRaises(ValueError, t.__init__, b, newline='xyzzy')
self.assertRaises(ValueError, t.read)
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
def test_garbage_collection(self):
# C TextIOWrapper objects are collected, and collecting them flushes
# all data to disk.
# The Python version has __del__, so it ends in gc.garbage instead.
with support.check_warnings(('', ResourceWarning)):
rawio = io.FileIO(support.TESTFN, "wb")
b = self.BufferedWriter(rawio)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("456def")
t.x = t
wr = weakref.ref(t)
del t
support.gc_collect()
self.assertIsNone(wr(), wr)
with self.open(support.TESTFN, "rb") as f:
self.assertEqual(f.read(), b"456def")
def test_rwpair_cleared_before_textio(self):
# Issue 13070: TextIOWrapper's finalization would crash when called
# after the reference to the underlying BufferedRWPair's writer got
# cleared by the GC.
for i in range(1000):
b1 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t1 = self.TextIOWrapper(b1, encoding="ascii")
b2 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t2 = self.TextIOWrapper(b2, encoding="ascii")
# circular references
t1.buddy = t2
t2.buddy = t1
support.gc_collect()
class PyTextIOWrapperTest(TextIOWrapperTest):
io = pyio
shutdown_error = "LookupError: unknown encoding: ascii"
class IncrementalNewlineDecoderTest(unittest.TestCase):
def check_newline_decoding_utf8(self, decoder):
# UTF-8 specific tests for a newline decoder
def _check_decode(b, s, **kwargs):
# We exercise getstate() / setstate() as well as decode()
state = decoder.getstate()
self.assertEqual(decoder.decode(b, **kwargs), s)
decoder.setstate(state)
self.assertEqual(decoder.decode(b, **kwargs), s)
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
self.assertRaises(UnicodeDecodeError, decoder.decode, b'', final=True)
decoder.reset()
_check_decode(b'\n', "\n")
_check_decode(b'\r', "")
_check_decode(b'', "\n", final=True)
_check_decode(b'\r', "\n", final=True)
_check_decode(b'\r', "")
_check_decode(b'a', "\na")
_check_decode(b'\r\r\n', "\n\n")
_check_decode(b'\r', "")
_check_decode(b'\r', "\n")
_check_decode(b'\na', "\na")
_check_decode(b'\xe8\xa2\x88\r\n', "\u8888\n")
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\n', "\n")
_check_decode(b'\xe8\xa2\x88\r', "\u8888")
_check_decode(b'\n', "\n")
def check_newline_decoding(self, decoder, encoding):
result = []
if encoding is not None:
encoder = codecs.getincrementalencoder(encoding)()
def _decode_bytewise(s):
# Decode one byte at a time
for b in encoder.encode(s):
result.append(decoder.decode(bytes([b])))
else:
encoder = None
def _decode_bytewise(s):
# Decode one char at a time
for c in s:
result.append(decoder.decode(c))
self.assertEqual(decoder.newlines, None)
_decode_bytewise("abc\n\r")
self.assertEqual(decoder.newlines, '\n')
_decode_bytewise("\nabc")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc")
self.assertEqual(decoder.newlines, ('\r', '\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual("".join(result), "abc\n\nabcabc\nabcabc")
decoder.reset()
input = "abc"
if encoder is not None:
encoder.reset()
input = encoder.encode(input)
self.assertEqual(decoder.decode(input), "abc")
self.assertEqual(decoder.newlines, None)
def test_newline_decoder(self):
encodings = (
# None meaning the IncrementalNewlineDecoder takes unicode input
# rather than bytes input
None, 'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
for enc in encodings:
decoder = enc and codecs.getincrementaldecoder(enc)()
decoder = self.IncrementalNewlineDecoder(decoder, translate=True)
self.check_newline_decoding(decoder, enc)
decoder = codecs.getincrementaldecoder("utf-8")()
decoder = self.IncrementalNewlineDecoder(decoder, translate=True)
self.check_newline_decoding_utf8(decoder)
self.assertRaises(TypeError, decoder.setstate, 42)
def test_newline_bytes(self):
# Issue 5433: Excessive optimization in IncrementalNewlineDecoder
def _check(dec):
self.assertEqual(dec.newlines, None)
self.assertEqual(dec.decode("\u0D00"), "\u0D00")
self.assertEqual(dec.newlines, None)
self.assertEqual(dec.decode("\u0A00"), "\u0A00")
self.assertEqual(dec.newlines, None)
dec = self.IncrementalNewlineDecoder(None, translate=False)
_check(dec)
dec = self.IncrementalNewlineDecoder(None, translate=True)
_check(dec)
class CIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
pass
class PyIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
pass
# XXX Tests for open()
class MiscIOTest(unittest.TestCase):
def tearDown(self):
support.unlink(support.TESTFN)
def test___all__(self):
for name in self.io.__all__:
obj = getattr(self.io, name, None)
self.assertIsNotNone(obj, name)
if name == "open":
continue
elif "error" in name.lower() or name == "UnsupportedOperation":
self.assertTrue(issubclass(obj, Exception), name)
elif not name.startswith("SEEK_"):
self.assertTrue(issubclass(obj, self.IOBase))
def test_attributes(self):
f = self.open(support.TESTFN, "wb", buffering=0)
self.assertEqual(f.mode, "wb")
f.close()
with support.check_warnings(('', DeprecationWarning)):
f = self.open(support.TESTFN, "U")
self.assertEqual(f.name, support.TESTFN)
self.assertEqual(f.buffer.name, support.TESTFN)
self.assertEqual(f.buffer.raw.name, support.TESTFN)
self.assertEqual(f.mode, "U")
self.assertEqual(f.buffer.mode, "rb")
self.assertEqual(f.buffer.raw.mode, "rb")
f.close()
f = self.open(support.TESTFN, "w+")
self.assertEqual(f.mode, "w+")
self.assertEqual(f.buffer.mode, "rb+") # Does it really matter?
self.assertEqual(f.buffer.raw.mode, "rb+")
g = self.open(f.fileno(), "wb", closefd=False)
self.assertEqual(g.mode, "wb")
self.assertEqual(g.raw.mode, "wb")
self.assertEqual(g.name, f.fileno())
self.assertEqual(g.raw.name, f.fileno())
f.close()
g.close()
def test_io_after_close(self):
for kwargs in [
{"mode": "w"},
{"mode": "wb"},
{"mode": "w", "buffering": 1},
{"mode": "w", "buffering": 2},
{"mode": "wb", "buffering": 0},
{"mode": "r"},
{"mode": "rb"},
{"mode": "r", "buffering": 1},
{"mode": "r", "buffering": 2},
{"mode": "rb", "buffering": 0},
{"mode": "w+"},
{"mode": "w+b"},
{"mode": "w+", "buffering": 1},
{"mode": "w+", "buffering": 2},
{"mode": "w+b", "buffering": 0},
]:
f = self.open(support.TESTFN, **kwargs)
f.close()
self.assertRaises(ValueError, f.flush)
self.assertRaises(ValueError, f.fileno)
self.assertRaises(ValueError, f.isatty)
self.assertRaises(ValueError, f.__iter__)
if hasattr(f, "peek"):
self.assertRaises(ValueError, f.peek, 1)
self.assertRaises(ValueError, f.read)
if hasattr(f, "read1"):
self.assertRaises(ValueError, f.read1, 1024)
self.assertRaises(ValueError, f.read1)
if hasattr(f, "readall"):
self.assertRaises(ValueError, f.readall)
if hasattr(f, "readinto"):
self.assertRaises(ValueError, f.readinto, bytearray(1024))
if hasattr(f, "readinto1"):
self.assertRaises(ValueError, f.readinto1, bytearray(1024))
self.assertRaises(ValueError, f.readline)
self.assertRaises(ValueError, f.readlines)
self.assertRaises(ValueError, f.readlines, 1)
self.assertRaises(ValueError, f.seek, 0)
self.assertRaises(ValueError, f.tell)
self.assertRaises(ValueError, f.truncate)
self.assertRaises(ValueError, f.write,
b"" if "b" in kwargs['mode'] else "")
self.assertRaises(ValueError, f.writelines, [])
self.assertRaises(ValueError, next, f)
def test_blockingioerror(self):
# Various BlockingIOError issues
class C(str):
pass
c = C("")
b = self.BlockingIOError(1, c)
c.b = b
b.c = c
wr = weakref.ref(c)
del c, b
support.gc_collect()
self.assertIsNone(wr(), wr)
def test_abcs(self):
# Test the visible base classes are ABCs.
self.assertIsInstance(self.IOBase, abc.ABCMeta)
self.assertIsInstance(self.RawIOBase, abc.ABCMeta)
self.assertIsInstance(self.BufferedIOBase, abc.ABCMeta)
self.assertIsInstance(self.TextIOBase, abc.ABCMeta)
def _check_abc_inheritance(self, abcmodule):
with self.open(support.TESTFN, "wb", buffering=0) as f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertIsInstance(f, abcmodule.RawIOBase)
self.assertNotIsInstance(f, abcmodule.BufferedIOBase)
self.assertNotIsInstance(f, abcmodule.TextIOBase)
with self.open(support.TESTFN, "wb") as f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertNotIsInstance(f, abcmodule.RawIOBase)
self.assertIsInstance(f, abcmodule.BufferedIOBase)
self.assertNotIsInstance(f, abcmodule.TextIOBase)
with self.open(support.TESTFN, "w") as f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertNotIsInstance(f, abcmodule.RawIOBase)
self.assertNotIsInstance(f, abcmodule.BufferedIOBase)
self.assertIsInstance(f, abcmodule.TextIOBase)
def test_abc_inheritance(self):
# Test implementations inherit from their respective ABCs
self._check_abc_inheritance(self)
def test_abc_inheritance_official(self):
# Test implementations inherit from the official ABCs of the
# baseline "io" module.
self._check_abc_inheritance(io)
def _check_warn_on_dealloc(self, *args, **kwargs):
f = open(*args, **kwargs)
r = repr(f)
with self.assertWarns(ResourceWarning) as cm:
f = None
support.gc_collect()
self.assertIn(r, str(cm.warning.args[0]))
def test_warn_on_dealloc(self):
self._check_warn_on_dealloc(support.TESTFN, "wb", buffering=0)
self._check_warn_on_dealloc(support.TESTFN, "wb")
self._check_warn_on_dealloc(support.TESTFN, "w")
def _check_warn_on_dealloc_fd(self, *args, **kwargs):
fds = []
def cleanup_fds():
for fd in fds:
try:
os.close(fd)
except OSError as e:
if e.errno != errno.EBADF:
raise
self.addCleanup(cleanup_fds)
r, w = os.pipe()
fds += r, w
self._check_warn_on_dealloc(r, *args, **kwargs)
# When using closefd=False, there's no warning
r, w = os.pipe()
fds += r, w
with support.check_no_resource_warning(self):
open(r, *args, closefd=False, **kwargs)
def test_warn_on_dealloc_fd(self):
self._check_warn_on_dealloc_fd("rb", buffering=0)
self._check_warn_on_dealloc_fd("rb")
self._check_warn_on_dealloc_fd("r")
def test_pickling(self):
# Pickling file objects is forbidden
for kwargs in [
{"mode": "w"},
{"mode": "wb"},
{"mode": "wb", "buffering": 0},
{"mode": "r"},
{"mode": "rb"},
{"mode": "rb", "buffering": 0},
{"mode": "w+"},
{"mode": "w+b"},
{"mode": "w+b", "buffering": 0},
]:
for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
with self.open(support.TESTFN, **kwargs) as f:
self.assertRaises(TypeError, pickle.dumps, f, protocol)
def test_nonblock_pipe_write_bigbuf(self):
self._test_nonblock_pipe_write(16*1024)
def test_nonblock_pipe_write_smallbuf(self):
self._test_nonblock_pipe_write(1024)
@unittest.skipUnless(hasattr(os, 'set_blocking'),
'os.set_blocking() required for this test')
def _test_nonblock_pipe_write(self, bufsize):
sent = []
received = []
r, w = os.pipe()
os.set_blocking(r, False)
os.set_blocking(w, False)
# To exercise all code paths in the C implementation we need
# to play with buffer sizes. For instance, if we choose a
# buffer size less than or equal to _PIPE_BUF (4096 on Linux)
# then we will never get a partial write of the buffer.
rf = self.open(r, mode='rb', closefd=True, buffering=bufsize)
wf = self.open(w, mode='wb', closefd=True, buffering=bufsize)
with rf, wf:
for N in 9999, 73, 7574:
try:
i = 0
while True:
msg = bytes([i % 26 + 97]) * N
sent.append(msg)
wf.write(msg)
i += 1
except self.BlockingIOError as e:
self.assertEqual(e.args[0], errno.EAGAIN)
self.assertEqual(e.args[2], e.characters_written)
sent[-1] = sent[-1][:e.characters_written]
received.append(rf.read())
msg = b'BLOCKED'
wf.write(msg)
sent.append(msg)
while True:
try:
wf.flush()
break
except self.BlockingIOError as e:
self.assertEqual(e.args[0], errno.EAGAIN)
self.assertEqual(e.args[2], e.characters_written)
self.assertEqual(e.characters_written, 0)
received.append(rf.read())
received += iter(rf.read, None)
sent, received = b''.join(sent), b''.join(received)
self.assertEqual(sent, received)
self.assertTrue(wf.closed)
self.assertTrue(rf.closed)
def test_create_fail(self):
# 'x' mode fails if file is existing
with self.open(support.TESTFN, 'w'):
pass
self.assertRaises(FileExistsError, self.open, support.TESTFN, 'x')
def test_create_writes(self):
# 'x' mode opens for writing
with self.open(support.TESTFN, 'xb') as f:
f.write(b"spam")
with self.open(support.TESTFN, 'rb') as f:
self.assertEqual(b"spam", f.read())
def test_open_allargs(self):
# there used to be a buffer overflow in the parser for rawmode
self.assertRaises(ValueError, self.open, support.TESTFN, 'rwax+')
class CMiscIOTest(MiscIOTest):
io = io
def test_readinto_buffer_overflow(self):
# Issue #18025
class BadReader(self.io.BufferedIOBase):
def read(self, n=-1):
return b'x' * 10**6
bufio = BadReader()
b = bytearray(2)
self.assertRaises(ValueError, bufio.readinto, b)
def check_daemon_threads_shutdown_deadlock(self, stream_name):
# Issue #23309: deadlocks at shutdown should be avoided when a
# daemon thread and the main thread both write to a file.
code = """if 1:
import sys
import time
import threading
from test.support import SuppressCrashReport
file = sys.{stream_name}
def run():
while True:
file.write('.')
file.flush()
crash = SuppressCrashReport()
crash.__enter__()
# don't call __exit__(): the crash occurs at Python shutdown
thread = threading.Thread(target=run)
thread.daemon = True
thread.start()
time.sleep(0.5)
file.write('!')
file.flush()
""".format_map(locals())
res, _ = run_python_until_end("-c", code)
err = res.err.decode()
if res.rc != 0:
# Failure: should be a fatal error
self.assertIn("Fatal Python error: could not acquire lock "
"for <_io.BufferedWriter name='<{stream_name}>'> "
"at interpreter shutdown, possibly due to "
"daemon threads".format_map(locals()),
err)
else:
self.assertFalse(err.strip('.!'))
def test_daemon_threads_shutdown_stdout_deadlock(self):
self.check_daemon_threads_shutdown_deadlock('stdout')
def test_daemon_threads_shutdown_stderr_deadlock(self):
self.check_daemon_threads_shutdown_deadlock('stderr')
class PyMiscIOTest(MiscIOTest):
io = pyio
@unittest.skipIf(os.name == 'nt', 'POSIX signals required for this test.')
class SignalsTest(unittest.TestCase):
def setUp(self):
self.oldalrm = signal.signal(signal.SIGALRM, self.alarm_interrupt)
def tearDown(self):
signal.signal(signal.SIGALRM, self.oldalrm)
def alarm_interrupt(self, sig, frame):
1/0
def check_interrupted_write(self, item, bytes, **fdopen_kwargs):
"""Check that a partial write, when it gets interrupted, properly
invokes the signal handler, and bubbles up the exception raised
in the latter."""
read_results = []
def _read():
if hasattr(signal, 'pthread_sigmask'):
signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGALRM])
s = os.read(r, 1)
read_results.append(s)
t = threading.Thread(target=_read)
t.daemon = True
r, w = os.pipe()
fdopen_kwargs["closefd"] = False
large_data = item * (support.PIPE_MAX_SIZE // len(item) + 1)
try:
wio = self.io.open(w, **fdopen_kwargs)
t.start()
# Fill the pipe enough that the write will be blocking.
# It will be interrupted by the timer armed above. Since the
# other thread has read one byte, the low-level write will
# return with a successful (partial) result rather than an EINTR.
# The buffered IO layer must check for pending signal
# handlers, which in this case will invoke alarm_interrupt().
signal.alarm(1)
try:
self.assertRaises(ZeroDivisionError, wio.write, large_data)
finally:
signal.alarm(0)
t.join()
# We got one byte, get another one and check that it isn't a
# repeat of the first one.
read_results.append(os.read(r, 1))
self.assertEqual(read_results, [bytes[0:1], bytes[1:2]])
finally:
os.close(w)
os.close(r)
# This is deliberate. If we didn't close the file descriptor
# before closing wio, wio would try to flush its internal
# buffer, and block again.
try:
wio.close()
except OSError as e:
if e.errno != errno.EBADF:
raise
def test_interrupted_write_unbuffered(self):
self.check_interrupted_write(b"xy", b"xy", mode="wb", buffering=0)
def test_interrupted_write_buffered(self):
self.check_interrupted_write(b"xy", b"xy", mode="wb")
# Issue #22331: The test hangs on FreeBSD 7.2
@support.requires_freebsd_version(8)
def test_interrupted_write_text(self):
self.check_interrupted_write("xy", b"xy", mode="w", encoding="ascii")
@support.no_tracing
def check_reentrant_write(self, data, **fdopen_kwargs):
def on_alarm(*args):
# Will be called reentrantly from the same thread
wio.write(data)
1/0
signal.signal(signal.SIGALRM, on_alarm)
r, w = os.pipe()
wio = self.io.open(w, **fdopen_kwargs)
try:
signal.alarm(1)
# Either the reentrant call to wio.write() fails with RuntimeError,
# or the signal handler raises ZeroDivisionError.
with self.assertRaises((ZeroDivisionError, RuntimeError)) as cm:
while 1:
for i in range(100):
wio.write(data)
wio.flush()
# Make sure the buffer doesn't fill up and block further writes
os.read(r, len(data) * 100)
exc = cm.exception
if isinstance(exc, RuntimeError):
self.assertTrue(str(exc).startswith("reentrant call"), str(exc))
finally:
signal.alarm(0)
wio.close()
os.close(r)
def test_reentrant_write_buffered(self):
self.check_reentrant_write(b"xy", mode="wb")
def test_reentrant_write_text(self):
self.check_reentrant_write("xy", mode="w", encoding="ascii")
def check_interrupted_read_retry(self, decode, **fdopen_kwargs):
"""Check that a buffered read, when it gets interrupted (either
returning a partial result or EINTR), properly invokes the signal
handler and retries if the latter returned successfully."""
r, w = os.pipe()
fdopen_kwargs["closefd"] = False
def alarm_handler(sig, frame):
os.write(w, b"bar")
signal.signal(signal.SIGALRM, alarm_handler)
try:
rio = self.io.open(r, **fdopen_kwargs)
os.write(w, b"foo")
signal.alarm(1)
# Expected behaviour:
# - first raw read() returns partial b"foo"
# - second raw read() returns EINTR
# - third raw read() returns b"bar"
self.assertEqual(decode(rio.read(6)), "foobar")
finally:
signal.alarm(0)
rio.close()
os.close(w)
os.close(r)
def test_interrupted_read_retry_buffered(self):
self.check_interrupted_read_retry(lambda x: x.decode('latin1'),
mode="rb")
def test_interrupted_read_retry_text(self):
self.check_interrupted_read_retry(lambda x: x,
mode="r")
def check_interrupted_write_retry(self, item, **fdopen_kwargs):
"""Check that a buffered write, when it gets interrupted (either
returning a partial result or EINTR), properly invokes the signal
handler and retries if the latter returned successfully."""
select = support.import_module("select")
# A quantity that exceeds the buffer size of an anonymous pipe's
# write end.
N = support.PIPE_MAX_SIZE
r, w = os.pipe()
fdopen_kwargs["closefd"] = False
# We need a separate thread to read from the pipe and allow the
# write() to finish. This thread is started after the SIGALRM is
# received (forcing a first EINTR in write()).
read_results = []
write_finished = False
error = None
def _read():
try:
while not write_finished:
while r in select.select([r], [], [], 1.0)[0]:
s = os.read(r, 1024)
read_results.append(s)
except BaseException as exc:
nonlocal error
error = exc
t = threading.Thread(target=_read)
t.daemon = True
def alarm1(sig, frame):
signal.signal(signal.SIGALRM, alarm2)
signal.alarm(1)
def alarm2(sig, frame):
t.start()
large_data = item * N
signal.signal(signal.SIGALRM, alarm1)
try:
wio = self.io.open(w, **fdopen_kwargs)
signal.alarm(1)
# Expected behaviour:
# - first raw write() is partial (because of the limited pipe buffer
# and the first alarm)
# - second raw write() returns EINTR (because of the second alarm)
# - subsequent write()s are successful (either partial or complete)
written = wio.write(large_data)
self.assertEqual(N, written)
wio.flush()
write_finished = True
t.join()
self.assertIsNone(error)
self.assertEqual(N, sum(len(x) for x in read_results))
finally:
signal.alarm(0)
write_finished = True
os.close(w)
os.close(r)
# This is deliberate. If we didn't close the file descriptor
# before closing wio, wio would try to flush its internal
# buffer, and could block (in case of failure).
try:
wio.close()
except OSError as e:
if e.errno != errno.EBADF:
raise
def test_interrupted_write_retry_buffered(self):
self.check_interrupted_write_retry(b"x", mode="wb")
def test_interrupted_write_retry_text(self):
self.check_interrupted_write_retry("x", mode="w", encoding="latin1")
class CSignalsTest(SignalsTest):
io = io
class PySignalsTest(SignalsTest):
io = pyio
# Handling reentrancy issues would slow down _pyio even more, so the
# tests are disabled.
test_reentrant_write_buffered = None
test_reentrant_write_text = None
def load_tests(*args):
tests = (CIOTest, PyIOTest, APIMismatchTest,
CBufferedReaderTest, PyBufferedReaderTest,
CBufferedWriterTest, PyBufferedWriterTest,
CBufferedRWPairTest, PyBufferedRWPairTest,
CBufferedRandomTest, PyBufferedRandomTest,
StatefulIncrementalDecoderTest,
CIncrementalNewlineDecoderTest, PyIncrementalNewlineDecoderTest,
CTextIOWrapperTest, PyTextIOWrapperTest,
CMiscIOTest, PyMiscIOTest,
CSignalsTest, PySignalsTest,
)
# Put the namespaces of the IO module we are testing and some useful mock
# classes in the __dict__ of each test.
mocks = (MockRawIO, MisbehavedRawIO, MockFileIO, CloseFailureIO,
MockNonBlockWriterIO, MockUnseekableIO, MockRawIOWithoutRead,
SlowFlushRawIO)
all_members = io.__all__ + ["IncrementalNewlineDecoder"]
c_io_ns = {name : getattr(io, name) for name in all_members}
py_io_ns = {name : getattr(pyio, name) for name in all_members}
globs = globals()
c_io_ns.update((x.__name__, globs["C" + x.__name__]) for x in mocks)
py_io_ns.update((x.__name__, globs["Py" + x.__name__]) for x in mocks)
# Avoid turning open into a bound method.
py_io_ns["open"] = pyio.OpenWrapper
for test in tests:
if test.__name__.startswith("C"):
for name, obj in c_io_ns.items():
setattr(test, name, obj)
elif test.__name__.startswith("Py"):
for name, obj in py_io_ns.items():
setattr(test, name, obj)
suite = unittest.TestSuite([unittest.makeSuite(test) for test in tests])
return suite
if __name__ == "__main__":
unittest.main()
|
multiexec.py
|
'''
Redis pseudo-transaction demo:
group several commands together and send them to the server as one batch.
'''
import time,threading
import rediscommon
r = rediscommon.r
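# Non-pipelined version: each INCR/DECR is a separate round trip, so the value
# printed here can be disturbed by the other threads incrementing concurrently.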
def error_show():
print(r.incr('notrans:'))
time.sleep(.1)
r.incr('notrans:',-1)
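# Pipelined version: both INCR commands are buffered client-side and executed as
# one atomic batch by execute(), so the printed result is unaffected by the
# other threads.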
def right_show():
    pipeline = r.pipeline()
    pipeline.incr("trans:")
    time.sleep(.1)
    pipeline.incr("trans:", -1)
    print(pipeline.execute()[0])
def show():
for i in range(3):
threading.Thread(target=right_show).start()
time.sleep(.5)
show()
|
tulip_facebook.py
|
from tulip import *
import facebook
import tempfile
import sys
if sys.version_info[0] == 3:
from urllib.request import urlopen
import queue as Queue
else:
from urllib2 import urlopen
import Queue
import os
import threading
maxThreadsDl = 8
nbThreadsDl = 0
threadJobs = Queue.Queue()
threadsPool = []
runThread = False
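# Worker executed by each download thread: pull (url, fileName) jobs from
# threadJobs and write the fetched avatar image to disk until runThread is cleared.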
def downloadAvatar():
global runThread
while runThread:
job = None
try:
job = threadJobs.get(block = 0)
except Queue.Empty:
pass
if not job:
continue
url = job[0]
fileName = job[1]
if sys.version_info[0] < 3:
fileNameEnc = fileName.decode(sys.getdefaultencoding()).encode(sys.getfilesystemencoding())
else:
fileNameEnc = fileName.encode(sys.getfilesystemencoding())
avatarFile = urlopen(url)
output = open(fileNameEnc,'wb')
output.write(avatarFile.read())
output.close()
threadJobs.task_done()
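# Queue one avatar download and lazily grow the worker pool up to maxThreadsDl
# threads; returns the local file name the avatar will be saved under.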
def launchAvatarDlThread(url, directory, name):
global threadsPool
global threadJobs
global nbThreadsDl
global maxThreadsDl
global runThread
runThread = True
ext = url[len(url)-4:]
fileName = directory+"/"+name+ext
threadJobs.put((url, fileName))
if (nbThreadsDl < maxThreadsDl):
nbThreadsDl = nbThreadsDl + 1
t = threading.Thread(target=downloadAvatar)
threadsPool.append(t)
t.start()
return fileName
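# Block until every queued download has completed, then stop and join the workers.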
def waitAvatarDlThreads():
global threadsPool
global threadJobs
global runThread
threadJobs.join()
runThread = False
for t in threadsPool:
t.join()
threadsPool = []
def getTempDir():
return tempfile.gettempdir()
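# Build a Tulip graph of the authenticated user's Facebook friends: one node per
# person, an edge per (mutual) friendship, optional avatar textures, and an
# FM^3 layout applied at the end.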
def importFacebookGraph(graph, accessToken, pluginProgress, avatarsDlPath):
# Creation of needed properties, don't remove those lines !!!
viewLabel = graph.getStringProperty("viewLabel")
viewTexture = graph.getStringProperty("viewTexture")
viewColor = graph.getColorProperty("viewColor")
viewBorderColor = graph.getColorProperty("viewBorderColor")
viewShape = graph.getIntegerProperty("viewShape")
viewBorderWidth = graph.getDoubleProperty("viewBorderWidth")
viewLayout = graph.getLayoutProperty("viewLayout")
viewSize = graph.getSizeProperty("viewSize")
viewMetric = graph.getDoubleProperty("viewMetric")
nameProp = graph.getStringProperty("name")
if pluginProgress:
pluginProgress.setComment("Connecting to Facebook Graph API")
fbGraph = facebook.GraphAPI(accessToken)
if pluginProgress:
pluginProgress.setComment("Retrieving your profile")
profile = fbGraph.get_object("me")
name = str(profile["name"])
graph.setName("Facebook social network of " + name)
meNode = graph.addNode()
viewLabel[meNode] = name
nameProp[meNode] = name
if len(avatarsDlPath) > 0:
picture = fbGraph.get_object("me/picture")
fileName = launchAvatarDlThread(picture["url"], avatarsDlPath, name)
viewTexture[meNode] = fileName
if pluginProgress:
pluginProgress.setComment("Retrieving your friends")
friends = fbGraph.get_connections("me", "friends")
friendsMap = {}
for friend in friends["data"]:
friendNode = graph.addNode()
name = str(friend["name"])
friendsMap[name] = friendNode
viewLabel[friendNode] = name
        nameProp[friendNode] = name
graph.addEdge(meNode, friendNode)
nbFriends = len(friends["data"])
i = 0
if pluginProgress:
pluginProgress.progress(i, nbFriends)
for friend in friends["data"]:
name = str(friend["name"])
friendNode = friendsMap[name]
if len(avatarsDlPath) > 0:
picture = fbGraph.get_object(friend["id"] + "/picture")
fileName = launchAvatarDlThread(picture["url"], avatarsDlPath, name)
viewTexture[friendNode] = fileName
if pluginProgress:
pluginProgress.setComment("Checking your mutual friends with " + name)
mutualFriends = fbGraph.get_object("me/mutualfriends/"+friend["id"])
for mfriend in mutualFriends["data"]:
mfriendNode = friendsMap[str(mfriend["name"])]
if not graph.existEdge(friendNode, mfriendNode, False).isValid():
graph.addEdge(friendNode, mfriendNode)
i = i + 1
if pluginProgress:
pluginProgress.progress(i, nbFriends)
if len(avatarsDlPath) > 0:
viewShape.setAllNodeValue(tlp.NodeShape.Square)
viewColor.setAllNodeValue(tlp.Color(255, 255, 255))
else:
viewShape.setAllNodeValue(tlp.NodeShape.Circle)
viewColor.setAllNodeValue(tlp.Color(255, 0, 0))
viewBorderColor.setAllNodeValue(tlp.Color(0, 0, 0))
viewColor.setAllEdgeValue(tlp.Color(0,0,0))
viewBorderColor.setAllEdgeValue(tlp.Color(0, 0, 0))
viewBorderWidth.setAllNodeValue(1)
viewBorderWidth.setAllEdgeValue(1)
dataSet = tlp.getDefaultPluginParameters("FM^3 (OGDF)", graph)
graph.applyLayoutAlgorithm("FM^3 (OGDF)", viewLayout, dataSet)
if pluginProgress:
pluginProgress.setComment("Finishing to download avatars")
waitAvatarDlThreads()
|
test_Future.py
|
"""Unit Tests for the Future class"""
import sys
import threading
import time
import pytest
from ginga.misc.Future import Future, TimeoutError
class _TestError(Exception):
pass
class TestFuture(object):
def setup_class(self):
self.future_thread = None
def test_init(self):
test_future = Future()
assert hasattr(test_future, 'cb')
if sys.version_info.major == 2:
assert isinstance(test_future.evt, threading._Event)
elif sys.version_info.major == 3:
assert isinstance(test_future.evt, threading.Event)
assert test_future.evt.is_set() is False
assert test_future.res is None
assert test_future.data is None
assert 'resolved' in test_future.cb
expected = []
actual = test_future.cb['resolved']
assert expected == actual
def test_init_with_data(self):
test_future = Future("TestData")
assert hasattr(test_future, 'cb')
if sys.version_info.major == 2:
assert isinstance(test_future.evt, threading._Event)
elif sys.version_info.major == 3:
assert isinstance(test_future.evt, threading.Event)
assert test_future.evt.is_set() is False
assert test_future.res is None
assert test_future.data == "TestData"
assert 'resolved' in test_future.cb
expected = []
actual = test_future.cb['resolved']
assert expected == actual
def test_get_data_no_data(self):
test_future = Future()
expected = None
actual = test_future.get_data()
assert expected == actual
def test_get_data_some_data(self):
test_future = Future("TestData")
expected = "TestData"
actual = test_future.get_data()
assert expected == actual
def test_freeze(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
pass
test_future.freeze(
test_method, "arg1", "arg2", kwarg1="test", kwarg2="test")
assert test_future.method == test_method
assert test_future.args == ("arg1", "arg2")
assert test_future.kwdargs == {"kwarg1": "test", "kwarg2": "test"}
def test_freeze_empty_args(self):
test_future = Future("TestData")
def test_method():
pass
test_future.freeze(test_method)
assert test_future.method == test_method
assert test_future.args == ()
assert test_future.kwdargs == {}
def test_thaw_suppress_exception_no_exception(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
return True
test_future.freeze(test_method)
expected = True
actual = test_future.thaw()
assert expected == actual
assert test_future.res is True
assert test_future.evt.is_set() is True
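    # test_method takes no parameters, so thawing with frozen positional and
    # keyword arguments raises TypeError; with suppression enabled, thaw()
    # returns the exception instead of propagating it.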
def test_thaw_suppress_exception_exception(self):
test_future = Future("TestData")
def test_method():
return True
test_future.freeze(
test_method, "arg1", "arg2", kwarg1="test", kwarg2="test")
test_result = test_future.thaw()
assert isinstance(test_result, TypeError)
assert isinstance(test_future.res, TypeError)
assert test_future.evt.is_set() is True
def test_thaw_not_suppress_exception_no_exception(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
return True
test_future.freeze(test_method)
expected = True
actual = test_future.thaw(False)
assert expected == actual
assert test_future.res is True
assert test_future.evt.is_set() is True
def test_thaw_not_suppress_exception_raise_exception(self):
test_future = Future("TestData")
def test_method():
return True
test_future.freeze(
test_method, "arg1", "arg2", kwarg1="test", kwarg2="test")
with pytest.raises(TypeError):
test_future.thaw(False)
assert test_future.res is None
assert test_future.evt.is_set() is False
def test_has_value_unset(self):
test_future = Future("TestData")
expected = False
actual = test_future.has_value()
assert expected == actual
def test_has_value_set(self):
test_future = Future("TestData")
test_future.evt.set()
expected = True
actual = test_future.has_value()
assert expected == actual
def test_resolve(self):
test_future = Future("TestData")
test_future.resolve(True)
assert test_future.res is True
assert test_future.evt.is_set() is True
def test_resolve_callback(self):
test_future = Future("TestData")
def test_callback(obj):
try:
obj.res = not obj.res
except Exception:
pass
test_future.add_callback('resolved', test_callback)
test_future.resolve(True)
# Callback reverses the boolean 'res' value
assert test_future.res is False
assert test_future.evt.is_set() is True
def test_wait(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
time.sleep(2)
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
expected = True
actual = test_future.wait()
assert expected == actual
def test_wait_timeout(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
time.sleep(2)
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
with pytest.raises(TimeoutError):
test_future.wait(1)
def test_get_value_block_no_timeout(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
time.sleep(2)
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
expected = True
actual = test_future.get_value()
assert expected == actual
def test_get_value_block_timeout(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
time.sleep(2)
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
with pytest.raises(TimeoutError):
test_future.get_value(True, 1)
def test_get_value_no_block_fail(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
time.sleep(2)
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
with pytest.raises(TimeoutError):
test_future.get_value(False)
def test_get_value_no_block_pass(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
return True
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
# Making the test running thread to sleep for a while
# for the self.future_thread to complete and ensure success
time.sleep(0.5)
expected = True
actual = test_future.get_value()
assert expected == actual
def test_get_value_suppress_exception(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
raise _TestError("Test Error")
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
future_value = test_future.get_value(True, None, True)
assert isinstance(future_value, Exception)
def test_get_value_no_suppress_exception(self):
test_future = Future("TestData")
def test_method(*args, **kwargs):
raise _TestError("Test Error")
test_future.freeze(test_method)
self.future_thread = threading.Thread(target=test_future.thaw)
self.future_thread.start()
with pytest.raises(_TestError):
test_future.get_value()
def teardown_class(self):
if self.future_thread is not None:
self.future_thread.join()
# END
|
magnet.py
|
import shutil
import tempfile
import os.path as pt
import sys
import libtorrent as lt
from time import sleep
import json
import requests
import multiprocessing
from Magnet_To_Torrent2 import magnet2torrent
jobs = []
count = 0
page = 0
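# Walk the nyaa.pantsu.cat listing API page by page; for every torrent whose
# .torrent file is not already cached under torrents/, spawn a process that
# converts its magnet link (with extra trackers appended) into a .torrent file.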
for i in range(0, 10000):
if page == 0:
url = "https://nyaa.pantsu.cat/api"
response = requests.get(url)
data = json.loads(response.text)
for i in data["torrents"]:
path = "torrents/" + i["hash"].lstrip()+".torrent"
if pt.isfile(path):
print("file exists")
else:
count += 1
p = multiprocessing.Process(target=magnet2torrent, args=(i["magnet"]+"&tr=http://nyaa.tracker.wf:7777/announce&tr=http://anidex.moe:6969/announce", path))
jobs.append(p)
p.start()
else:
print(i)
url = "https://nyaa.pantsu.cat/api/" + str(i)
response = requests.get(url)
data = json.loads(response.text)
for i in data["torrents"]:
path = "torrents/" + i["hash"].lstrip()+".torrent"
if pt.isfile(path):
print("file exists")
else:
count += 1
                print("Worker %i" % count)
p = multiprocessing.Process(target=magnet2torrent, args=(i["magnet"]+"&tr=http://nyaa.tracker.wf:7777/announce&tr=http://anidex.moe:6969/announce", path))
jobs.append(p)
p.start()
page += 1
sleep(1)
print("finished")
sys.exit(0)
|
test_pyepics_compat.py
|
#!/usr/bin/env python
# unit-tests for ca interface
# Lifted almost exactly from pyepics
# The epics python module was orignally written by
#
# Matthew Newville <newville@cars.uchicago.edu>
# CARS, University of Chicago
#
# There have been several contributions from many others, notably Angus
# Gratton <angus.gratton@anu.edu.au>. See the Acknowledgements section of
# the documentation for a list of more contributors.
#
# Except where explicitly noted, all files in this distribution are licensed
# under the Epics Open License.:
#
# ------------------------------------------------
#
# Copyright 2010 Matthew Newville, The University of Chicago. All rights reserved.
#
# The epics python module is distributed subject to the following license conditions:
# SOFTWARE LICENSE AGREEMENT
# Software: epics python module
#
# 1. The "Software", below, refers to the epics python module (in either
# source code, or binary form and accompanying documentation). Each
# licensee is addressed as "you" or "Licensee."
#
# 2. The copyright holders shown above and their third-party licensors
# hereby grant Licensee a royalty-free nonexclusive license, subject to
# the limitations stated herein and U.S. Government license rights.
#
# 3. You may modify and make a copy or copies of the Software for use
# within your organization, if you meet the following conditions:
#
# 1. Copies in source code must include the copyright notice and this
# Software License Agreement.
#
# 2. Copies in binary form must include the copyright notice and this
# Software License Agreement in the documentation and/or other
# materials provided with the copy.
#
# 4. You may modify a copy or copies of the Software or any portion of
# it, thus forming a work based on the Software, and distribute copies of
# such work outside your organization, if you meet all of the following
# conditions:
#
# 1. Copies in source code must include the copyright notice and this
# Software License Agreement;
#
# 2. Copies in binary form must include the copyright notice and this
# Software License Agreement in the documentation and/or other
# materials provided with the copy;
#
# 3. Modified copies and works based on the Software must carry
# prominent notices stating that you changed specified portions of
# the Software.
#
# 5. Portions of the Software resulted from work developed under a
# U.S. Government contract and are subject to the following license: the
# Government is granted for itself and others acting on its behalf a
# paid-up, nonexclusive, irrevocable worldwide license in this computer
# software to reproduce, prepare derivative works, and perform publicly
# and display publicly.
#
# 6. WARRANTY DISCLAIMER. THE SOFTWARE IS SUPPLIED "AS IS" WITHOUT
# WARRANTY OF ANY KIND. THE COPYRIGHT HOLDERS, THEIR THIRD PARTY
# LICENSORS, THE UNITED STATES, THE UNITED STATES DEPARTMENT OF ENERGY,
# AND THEIR EMPLOYEES: (1) DISCLAIM ANY WARRANTIES, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT, (2) DO NOT
# ASSUME ANY LEGAL LIABILITY OR RESPONSIBILITY FOR THE ACCURACY,
# COMPLETENESS, OR USEFULNESS OF THE SOFTWARE, (3) DO NOT REPRESENT THAT
# USE OF THE SOFTWARE WOULD NOT INFRINGE PRIVATELY OWNED RIGHTS, (4) DO
# NOT WARRANT THAT THE SOFTWARE WILL FUNCTION UNINTERRUPTED, THAT IT IS
# ERROR-FREE OR THAT ANY ERRORS WILL BE CORRECTED.
#
# 7. LIMITATION OF LIABILITY. IN NO EVENT WILL THE COPYRIGHT HOLDERS,
# THEIR THIRD PARTY LICENSORS, THE UNITED STATES, THE UNITED STATES
# DEPARTMENT OF ENERGY, OR THEIR EMPLOYEES: BE LIABLE FOR ANY INDIRECT,
# INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES OF ANY KIND OR
# NATURE, INCLUDING BUT NOT LIMITED TO LOSS OF PROFITS OR LOSS OF DATA,
# FOR ANY REASON WHATSOEVER, WHETHER SUCH LIABILITY IS ASSERTED ON THE
# BASIS OF CONTRACT, TORT (INCLUDING NEGLIGENCE OR STRICT LIABILITY), OR
# OTHERWISE, EVEN IF ANY OF SAID PARTIES HAS BEEN WARNED OF THE
# POSSIBILITY OF SUCH LOSS OR DAMAGES.
#
# ------------------------------------------------
import pytest
numpy = pytest.importorskip("numpy")
import time
import os
import sys
import threading
from types import SimpleNamespace
from contextlib import contextmanager
from caproto.threading.pyepics_compat import (PV, caput, caget, cainfo,
caget_many, caput_many,
AccessRightsException)
from .conftest import default_setup_module, default_teardown_module
from .test_threading_client import context, shared_broadcaster
def setup_module(module):
default_setup_module(module)
from caproto.benchmarking.util import set_logging_level
set_logging_level('DEBUG')
def teardown_module(module):
default_teardown_module(module)
@pytest.fixture(scope='function')
def pvnames(request, epics_base_ioc, context):
class PVNames:
prefix = epics_base_ioc.prefix
double_pv = prefix + 'ao1'
double_pv_units = 'microns'
double_pv_prec = 4
double_pv2 = prefix + 'ao2'
pause_pv = prefix + 'pause'
str_pv = prefix + 'ao1.DESC'
int_pv = prefix + 'long2'
long_pv = prefix + 'long2'
float_pv = prefix + 'ao3'
enum_pv = prefix + 'mbbo1'
enum_pv_strs = ['Stop', 'Start', 'Pause', 'Resume']
proc_pv = prefix + 'ao1.PROC'
long_arr_pv = prefix + 'long2k'
double_arr_pv = prefix + 'double2k'
string_arr_pv = prefix + 'string128'
char_arr_pv = prefix + 'char128'
char_arrays = [prefix + 'char128',
prefix + 'char2k',
prefix + 'char64k']
long_arrays = [prefix + 'long128',
prefix + 'long2k',
prefix + 'long64k']
double_arrays = [prefix + 'double128',
prefix + 'double2k',
prefix + 'double64k']
updating_pv1 = prefix + 'ao1'
updating_str1 = prefix + 'char256'
updating_pvlist = [prefix + 'ao1',
prefix + 'ai1',
prefix + 'long1',
prefix + 'ao2']
non_updating_pv = prefix + 'ao4'
alarm_pv = prefix + 'long1'
alarm_comp = 'ge'
alarm_trippoint = 7
subarr_driver = prefix + 'wave_test'
subarr1 = prefix + 'subArr1'
subarr2 = prefix + 'subArr2'
subarr3 = prefix + 'subArr3'
subarr4 = prefix + 'subArr4'
zero_len_subarr1 = prefix + 'ZeroLenSubArr1'
# TODO: softIoc does not build with motor
motor1 = 'sim:mtr1'
motor2 = 'sim:mtr2'
def __repr__(self):
return f'<PVNames prefix={epics_base_ioc.prefix}>'
PV._default_context = context
def finalize_context():
print('Cleaning up PV context')
context.disconnect()
assert not context._process_search_results_thread.is_alive()
assert not context._activate_subscriptions_thread.is_alive()
assert not context.selector.thread.is_alive()
sb = context.broadcaster
sb.disconnect()
assert not sb._command_thread.is_alive()
assert not sb.selector.thread.is_alive()
assert not sb._retry_unanswered_searches_thread.is_alive()
print('Done cleaning up PV context')
request.addfinalizer(finalize_context)
return PVNames()
def simulator_main(prefix, ready_event, exit_event):
'simulator.py from pyepics testioc (same license as above)'
import random
from epics import caput as _caput, PV as _PV
class PV(_PV):
def put(self, value, **kw):
rval = repr(value)[:50]
print(f'(simulator: put {self.pvname} {rval})')
return super().put(value, **kw)
def caput(pv, value, **kw):
rval = repr(value)[:50]
print(f'(simulator: caput {pv} {rval})')
return _caput(pv, value, **kw)
NEEDS_INIT = True
SLEEP_TIME = 0.10
def onConnect(pvname=None, conn=None, **kws):
nonlocal NEEDS_INIT
NEEDS_INIT = conn
def make_pvs(*args, **kwds):
# print("Make PVS ' ", prefix, args)
# print( [("%s%s" % (prefix, name)) for name in args])
pvlist = [PV("%s%s" % (prefix, name)) for name in args]
for pv in pvlist:
pv.connect()
pv.connection_callbacks.append(onConnect)
return pvlist
mbbos = make_pvs("mbbo1", "mbbo2")
pause_pv = make_pvs("pause",)[0]
longs = make_pvs("long1", "long2", "long3", "long4")
strs = make_pvs("str1", "str2")
analogs = make_pvs("ao1", "ai1", "ao2", "ao3")
binaries = make_pvs("bo1", "bi1")
char_waves = make_pvs("char128", "char256", "char2k", "char64k")
double_waves = make_pvs("double128", "double2k", "double64k")
long_waves = make_pvs("long128", "long2k", "long64k")
str_waves = make_pvs("string128", "string2k", "string64k")
subarrays = make_pvs("subArr1", "subArr2", "subArr3", "subArr4" )
subarray_driver = make_pvs("wave_test",)[0]
def initialize_data():
subarray_driver.put(numpy.arange(64)/12.0)
for p in mbbos:
p.put(1)
for i, p in enumerate(longs):
p.put((i+1))
for i, p in enumerate(strs):
p.put(("String %s" % (i+1)))
for i, p in enumerate(binaries):
p.put((i+1))
for i, p in enumerate(analogs):
p.put((i+1)*1.7135000 )
caput(f'{prefix}ao1.EGU', 'microns')
caput(f'{prefix}ao1.PREC', 4)
caput(f'{prefix}ai1.PREC', 2)
caput(f'{prefix}ao2.PREC', 3)
char_waves[0].put([60+random.randrange(30) for i in range(128)])
char_waves[1].put([random.randrange(256) for i in range(256)])
char_waves[2].put([random.randrange(256) for i in range(2048)])
char_waves[3].put([random.randrange(256) for i in range(65536)])
long_waves[0].put([i+random.randrange(2) for i in range(128)])
long_waves[1].put([i+random.randrange(128) for i in range(2048)])
long_waves[2].put([i for i in range(65536)])
double_waves[0].put([i+random.randrange(2) for i in range(128)])
double_waves[1].put([random.random() for i in range(2048)])
double_waves[2].put([random.random() for i in range(65536)])
pause_pv.put(0)
str_waves[0].put([(" String %i" % (i+1)) for i in range(128)])
print('Data initialized')
text = '''line 1
this is line 2
and line 3
here is another line
this is the 5th line
line 6
line 7
line 8
line 9
line 10
line 11
'''.split('\n')
start_time = time.time()
count = 0
long_update = 0
lcount = 1
initialized_at = 0
while not exit_event.is_set():
if NEEDS_INIT:
initialize_data()
time.sleep(SLEEP_TIME)
NEEDS_INIT = False
initialized_at = count
time.sleep(SLEEP_TIME)
count = count + 1
if not NEEDS_INIT and count >= initialized_at + 4:
if not ready_event.is_set():
ready_event.set()
print('[Pyepics simulator running!]')
if count > 99999999:
count = 1
t0 = time.time()
if pause_pv.get() == 1:
# pause for up to 120 seconds if pause was selected
t0 = time.time()
while time.time()-t0 < 120:
time.sleep(SLEEP_TIME)
if pause_pv.get() == 0:
break
elif exit_event.is_set():
break
pause_pv.put(0)
if exit_event.is_set():
break
noise = numpy.random.normal
analogs[0].put(100*(random.random()-0.5))
analogs[1].put(76.54321*(time.time()-start_time))
analogs[2].put(0.3*numpy.sin(time.time() / 2.302) + noise(scale=0.4))
char_waves[0].put([45+random.randrange(64)
for i in range(128)])
if count % 3 == 0:
analogs[3].put(
numpy.exp((max(0.001, noise(scale=0.03) +
numpy.sqrt((count/16.0) % 87.)))))
long_waves[1].put([i+random.randrange(128)
for i in range(2048)])
str_waves[0].put([("Str%i_%.3f" % (i+1, 100*random.random()))
for i in range(128)])
        if t0 - long_update >= 1.0:
            long_update = t0
lcount = (lcount + 1) % 10
longs[0].put(lcount)
char_waves[1].put(text[lcount])
double_waves[2].put([random.random()
for i in range(65536)])
double_waves[1].put([random.random()
for i in range(2048)])
print('[Simulator loop exiting]')
@pytest.fixture(scope='function')
def simulator(request, pvnames):
prefix = pvnames.prefix
ready_event = threading.Event()
exit_event = threading.Event()
kwargs = dict(prefix=pvnames.prefix,
ready_event=ready_event,
exit_event=exit_event)
print()
print()
print(f'* Starting up simulator for prefix: {prefix}')
thread = threading.Thread(target=simulator_main, kwargs=kwargs)
thread.start()
def stop_simulator():
print()
print(f'* Joining simulator thread')
exit_event.set()
thread.join(timeout=2)
print()
if thread.is_alive():
print(f'* Dangling simulator thread (prefix={prefix})... :(')
else:
print(f'* Simulator thread exited cleanly (prefix={prefix})')
request.addfinalizer(stop_simulator)
ok = ready_event.wait(15)
if not ok:
raise TimeoutError('Simulator thread failed to start!')
print()
print(f'* Simulator thread started up! (prefix={prefix})')
return thread
@contextmanager
def no_simulator_updates(pvnames):
'''Context manager which pauses and resumes simulator PV updating'''
try:
caput(pvnames.pause_pv, 1)
time.sleep(0.1)
yield
finally:
caput(pvnames.pause_pv, 0)
# Give the simulator some time to start back up
time.sleep(0.5)
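# Hedged usage sketch (not part of the original test suite): the simulator is
# paused so a put/get pair cannot race with the updating thread, mirroring how
# the waveform tests below use this context manager.
def _example_paused_put_get(pvnames):  # hypothetical helper, never collected by pytest
    with no_simulator_updates(pvnames):
        caput(pvnames.double_pv, 1.5, wait=True)
        assert caget(pvnames.double_pv) == 1.5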
@pytest.mark.flaky(reruns=5, reruns_delay=2)
def testA_CreatePV(pvnames):
print('Simple Test: create pv\n')
pv = PV(pvnames.double_pv)
assert pv is not None
@pytest.mark.flaky(reruns=5, reruns_delay=2)
def testA_CreatedWithConn(pvnames):
print('Simple Test: create pv with conn callback\n')
CONN_DAT = {}
def onConnect(pvname=None, conn=None, chid=None, **kws):
nonlocal CONN_DAT
print(' :Connection status changed: %s connected=%s\n' % (pvname, conn))
CONN_DAT[pvname] = conn
print(f'Connecting to {pvnames.int_pv}')
pv = PV(pvnames.int_pv, connection_callback=onConnect)
val = pv.get(timeout=5)
conn = CONN_DAT.get(pvnames.int_pv, None)
assert conn
def test_caget(pvnames):
print('Simple Test of caget() function\n')
pvs = (pvnames.double_pv, pvnames.enum_pv, pvnames.str_pv)
for p in pvs:
val = caget(p)
assert val is not None
sval = caget(pvnames.str_pv)
assert sval == 'ao'
def test_smoke_cainfo(pvnames):
    print('Simple Test of cainfo() function\n')
pvs = (pvnames.double_pv, pvnames.enum_pv, pvnames.str_pv)
for p in pvs:
for print_out in (True, False):
val = cainfo(p, print_out=print_out)
if not print_out:
assert val is not None
def test_caget_many(pvnames):
print('Simple Test of caget_many() function\n')
pvs = [pvnames.double_pv, pvnames.enum_pv, pvnames.str_pv]
vals = caget_many(pvs)
assert len(vals) == len(pvs)
assert isinstance(vals[0], float)
print(type(vals[1]))
assert isinstance(vals[1], (int, numpy.uint16))
assert isinstance(vals[2], str)
def test_caput_many_wait_all(pvnames):
print('Test of caput_many() function, waiting for all.\n')
pvs = [pvnames.double_pv, pvnames.enum_pv, 'ceci nest pas une PV']
vals = [0.5, 0, 23]
t0 = time.time()
success = caput_many(pvs, vals, wait='all', connection_timeout=0.5,
put_timeout=5.0)
t1 = time.time()
assert len(success) == len(pvs)
assert success[0] == 1
assert success[1] == 1
assert success[2] < 0
def test_caput_many_wait_each(pvnames):
print('Simple Test of caput_many() function, waiting for each.\n')
pvs = [pvnames.double_pv, pvnames.enum_pv, 'ceci nest pas une PV']
#pvs = ["MTEST:Val1", "MTEST:Val2", "MTEST:SlowVal"]
vals = [0.5, 0, 23]
success = caput_many(pvs, vals, wait='each', connection_timeout=0.5,
put_timeout=1.0)
assert len(success) == len(pvs)
assert success[0] == 1
assert success[1] == 1
assert success[2] < 0
def test_caput_many_no_wait(pvnames):
print('Simple Test of caput_many() function, without waiting.\n')
pvs = [pvnames.double_pv, pvnames.enum_pv, 'ceci nest pas une PV']
vals = [0.5, 0, 23]
success = caput_many(pvs, vals, wait=None, connection_timeout=0.5)
assert len(success) == len(pvs)
# If you don't wait, ca.put returns 1 as long as the PV connects
# and the put request is valid.
assert success[0] == 1
assert success[1] == 1
assert success[2] < 0
def test_get1(pvnames):
print('Simple Test: test value and char_value on an integer\n')
pv = PV(pvnames.int_pv)
val = pv.get()
cval = pv.get(as_string=True)
assert int(cval) == val
@pytest.mark.xfail(os.environ.get('BASE') in ('R3.16.1', 'R7.0.1.1'),
reason='known issues with simulator on some BASE versions')
def test_get_string_waveform(pvnames, simulator):
print('String Array: \n')
with no_simulator_updates(pvnames):
pv = PV(pvnames.string_arr_pv)
val = pv.get()
assert len(val) > 10
assert isinstance(val[0], str)
assert len(val[0]) > 1
assert isinstance(val[1], str)
assert len(val[1]) > 1
def test_put_string_waveform(pvnames):
print('String Array: \n')
with no_simulator_updates(pvnames):
pv = PV(pvnames.string_arr_pv)
put_value = ['a', 'b', 'c']
pv.put(put_value)
get_value = pv.get(use_monitor=False, count=len(put_value))
numpy.testing.assert_array_equal(get_value, put_value)
@pytest.mark.skipif(os.environ.get("CAPROTO_SKIP_MOTORSIM_TESTS") is not None,
reason='No motorsim IOC')
@pytest.mark.skipif(sys.platform == 'win32',
reason='win32 motorsim IOC')
def test_putcomplete(pvnames):
print('Put with wait and put_complete (using real motor!) \n')
vals = (1.35, 1.50, 1.44, 1.445, 1.45, 1.453, 1.446, 1.447, 1.450,
1.450, 1.490, 1.5, 1.500)
p = PV(pvnames.motor1)
if not p.wait_for_connection():
raise TimeoutError('simulated motor connection failed?')
see_complete = []
for v in vals:
t0 = time.time()
p.put(v, use_complete=True)
count = 0
for i in range(100000):
time.sleep(0.001)
count = count + 1
if p.put_complete:
see_complete.append(True)
print('See completion')
break
# print('made it to value= %.3f, elapsed time= %.4f sec (count=%i)' % (v, time.time()-t0, count))
assert len(see_complete) > (len(vals) - 5)
@pytest.mark.skipif(os.environ.get("CAPROTO_SKIP_MOTORSIM_TESTS") is not None,
reason='No motorsim IOC')
@pytest.mark.skipif(sys.platform == 'win32',
reason='win32 motorsim IOC')
def test_putwait(pvnames):
print('Put with wait (using real motor!) \n')
pv = PV(pvnames.motor1)
if not pv.wait_for_connection():
raise TimeoutError('simulated motor connection failed?')
val = pv.get()
t0 = time.time()
if val < 5:
pv.put(val + 1.0, wait=True)
else:
pv.put(val - 1.0, wait=True)
dt = time.time()-t0
print(' put took %s sec\n' % dt)
assert dt > 0.1
# now with a callback!
put_callback_called = False
def onPutdone(pvname=None, **kws):
print('put done ', pvname, kws)
nonlocal put_callback_called
put_callback_called = True
val = pv.get()
if val < 5:
pv.put(val + 1.0, callback=onPutdone)
else:
pv.put(val - 1.0, callback=onPutdone)
t0 = time.time()
while time.time()-t0 < dt*1.50:
time.sleep(0.02)
print(' put should be done by now? %s \n' % put_callback_called)
assert put_callback_called
# now using pv.put_complete
val = pv.get()
if val < 5:
pv.put(val + 1.0, use_complete=True)
else:
pv.put(val - 1.0, use_complete=True)
t0 = time.time()
count = 0
while time.time()-t0 < dt*1.50:
if pv.put_complete:
break
count = count + 1
time.sleep(0.02)
print(' put_complete=%s (should be True), and count=%i (should be>3)\n' %
(pv.put_complete, count))
assert pv.put_complete
assert count > 3
@pytest.mark.xfail(os.environ.get('BASE') in ('R3.16.1', 'R7.0.1.1'),
reason='known issues with simulator on some BASE versions')
def test_get_callback(pvnames, simulator):
print("Callback test: changing PV must be updated\n")
mypv = PV(pvnames.updating_pv1)
NEWVALS = []
def onChanges(pvname=None, value=None, char_value=None, **kw):
nonlocal NEWVALS
print('PV %s %s, %s Changed!\n' % (pvname, repr(value), char_value))
NEWVALS.append(repr(value))
mypv.add_callback(onChanges)
print('Added a callback. Now wait for changes...\n')
t0 = time.time()
while time.time() - t0 < 3:
time.sleep(1.e-4)
print(' saw %i changes.\n' % len(NEWVALS))
assert len(NEWVALS) > 3
mypv.clear_callbacks()
def test_subarrays(pvnames):
print("Subarray test: dynamic length arrays\n")
driver = PV(pvnames.subarr_driver)
subarr1 = PV(pvnames.subarr1)
subarr1.connect()
len_full = 64
len_sub1 = 16
full_data = numpy.arange(len_full)/1.0
caput("%s.NELM" % pvnames.subarr1, len_sub1)
caput("%s.INDX" % pvnames.subarr1, 0)
driver.put(full_data)
time.sleep(0.1)
subval = subarr1.get()
assert len(subval) == len_sub1
assert numpy.all(subval == full_data[:len_sub1])
print("Subarray test: C\n")
caput("%s.NELM" % pvnames.subarr2, 19)
caput("%s.INDX" % pvnames.subarr2, 3)
subarr2 = PV(pvnames.subarr2)
subarr2.get()
driver.put(full_data)
time.sleep(0.1)
subval = subarr2.get()
assert len(subval) == 19
assert numpy.all(subval == full_data[3:3+19])
caput("%s.NELM" % pvnames.subarr2, 5)
caput("%s.INDX" % pvnames.subarr2, 13)
driver.put(full_data)
time.sleep(0.1)
subval = subarr2.get()
assert len(subval) == 5
assert numpy.all(subval == full_data[13:5+13])
def test_subarray_zerolen(pvnames):
subarr1 = PV(pvnames.zero_len_subarr1)
subarr1.wait_for_connection()
val = subarr1.get(use_monitor=True, as_numpy=True)
assert isinstance(val, numpy.ndarray), 'using monitor'
assert len(val) == 0, 'using monitor'
# caproto returns things in big endian, not native type
# assert val.dtype == numpy.float64, 'using monitor'
val = subarr1.get(use_monitor=False, as_numpy=True)
assert isinstance(val, numpy.ndarray), 'no monitor'
assert len(val) == 0, 'no monitor'
# caproto returns things in big endian, not native type
# assert val.dtype == numpy.float64, 'no monitor'
def test_waveform_get_with_count_arg(pvnames):
wf = PV(pvnames.char_arr_pv, count=32)
    val = wf.get()
    assert len(val) == 32
    val = wf.get(count=wf.nelm)
assert len(val) == wf.nelm
@pytest.mark.xfail(os.environ.get('BASE') in ('R3.16.1', 'R7.0.1.1'),
reason='known issues with simulator on some BASE versions')
def test_waveform_callback_with_count_arg(pvnames, simulator):
values = []
wf = PV(pvnames.char_arr_pv, count=32)
def onChanges(pvname=None, value=None, char_value=None, **kw):
print('PV %s %s, %s Changed!\n' % (pvname, repr(value), char_value))
values.append(value)
wf.add_callback(onChanges)
print('Added a callback. Now wait for changes...\n')
t0 = time.time()
while time.time() - t0 < 3:
time.sleep(1.e-4)
        if len(values) > 0:
break
assert len(values) > 0
assert len(values[0]) == 32
wf.clear_callbacks()
def test_emptyish_char_waveform_no_monitor(pvnames):
'''a test of a char waveform of length 1 (NORD=1): value "\0"
without using auto_monitor
'''
zerostr = PV(pvnames.char_arr_pv, auto_monitor=False)
zerostr.wait_for_connection()
# elem_count = 128, requested count = None, libca returns count = 1
zerostr.put([0], wait=True)
assert zerostr.get(as_string=True) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False), [0])
assert zerostr.get(as_string=True, as_numpy=False) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False, as_numpy=False), [0])
# elem_count = 128, requested count = None, libca returns count = 2
zerostr.put([0, 0], wait=True)
assert zerostr.get(as_string=True) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False), [0, 0])
assert zerostr.get(as_string=True, as_numpy=False) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False,
as_numpy=False), [0, 0])
def test_emptyish_char_waveform_monitor(pvnames):
'''a test of a char waveform of length 1 (NORD=1): value "\0"
with using auto_monitor
'''
zerostr = PV(pvnames.char_arr_pv, auto_monitor=True)
zerostr.wait_for_connection()
zerostr.put([0], wait=True)
time.sleep(0.2)
assert zerostr.get(as_string=True) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False), [0])
assert zerostr.get(as_string=True, as_numpy=False) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False, as_numpy=False), [0])
zerostr.put([0, 0], wait=True)
time.sleep(0.2)
assert zerostr.get(as_string=True) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False), [0, 0])
assert zerostr.get(as_string=True, as_numpy=False) == ''
numpy.testing.assert_array_equal(zerostr.get(as_string=False, as_numpy=False), [0, 0])
zerostr.disconnect()
def testEnumPut(pvnames):
pv = PV(pvnames.enum_pv)
assert pv is not None
pv.put('Stop')
time.sleep(0.1)
val = pv.get()
assert val == 0
assert pv.get(as_string=True) == 'Stop'
@pytest.mark.xfail(os.environ.get('BASE') in ('R3.16.1', 'R7.0.1.1'),
reason='known issues with simulator on some BASE versions')
def test_DoubleVal(pvnames, simulator):
pvn = pvnames.double_pv
pv = PV(pvn)
print('pv', pv)
value = pv.get()
print('pv get', value)
assert pv.connected
print('%s get value %s' % (pvn, value))
cdict = pv.get_ctrlvars()
print('Testing CTRL Values for a Double (%s)\n' % (pvn))
assert 'severity' in cdict
assert len(pv.host) > 1
assert pv.count == 1
assert pv.precision == pvnames.double_pv_prec
assert pv.units == pvnames.double_pv_units
assert pv.access.startswith('read')
def test_waveform_get_1elem(pvnames):
pv = PV(pvnames.double_arr_pv)
val = pv.get(count=1, use_monitor=False)
assert isinstance(val, numpy.ndarray)
assert len(val) == 1
def test_subarray_1elem(pvnames):
# pv = PV(pvnames.zero_len_subarr1)
pv = PV(pvnames.double_arr_pv)
pv.wait_for_connection()
val = pv.get(count=1, use_monitor=False)
print('val is', val, type(val))
assert isinstance(val, numpy.ndarray)
assert len(val) == 1
val = pv.get(count=1, as_numpy=False, use_monitor=False)
print('val is', val, type(val))
assert isinstance(val, list)
assert len(val) == 1
@pytest.mark.skipif(os.environ.get("CAPROTO_SKIP_MOTORSIM_TESTS") is not None,
reason='No motorsim IOC')
@pytest.mark.skipif(sys.platform == 'win32',
reason='win32 motorsim IOC')
def test_pyepics_pv(context):
pv1 = "sim:mtr1"
ctx = context
# Some user function to call when subscriptions receive data.
called = []
def user_callback(*, value, **kwargs):
print()
print('-- user callback', value)
called.append(True)
time_pv = PV(pv1, context=ctx, form='time')
ctrl_pv = PV(pv1, context=ctx, form='ctrl')
time_pv.wait_for_connection()
time_pv.add_callback(user_callback)
print('time read', time_pv.get())
print('ctrl read', ctrl_pv.get())
time_pv.put(3, wait=True)
time_pv.put(6, wait=True)
time.sleep(0.1)
assert time_pv.get() == 6
assert called
print('read', time_pv.get())
print('done')
repr(time_pv)
for k, v in PV.__dict__.items():
if isinstance(v, property):
getattr(time_pv, k)
getattr(ctrl_pv, k)
@pytest.fixture(scope='function')
def access_security_softioc(request, prefix, context):
'From pyepics test_cas.py'
access_rights_db = {
('{}:ao'.format(prefix), 'ao') : {
'ASG': "rps_threshold",
'DRVH': "10",
'DRVL': "0",
},
('{}:bo'.format(prefix), 'bo') : {
'ASG': "rps_lock",
'ZNAM': "OUT",
'ONAM': "IN",
},
('{}:ao2'.format(prefix), 'ao') : {
'DRVH': "5",
'DRVL': "1",
},
('{}:permit'.format(prefix), 'bo') : {
'VAL': "0",
'PINI': "1",
'ZNAM': "DISABLED",
'ONAM': "ENABLED",
},
}
access_rights_asg_rules = '''
ASG(DEFAULT) {
RULE(1,READ)
RULE(1,WRITE,TRAPWRITE)
}
ASG(rps_threshold) {
INPA("$(P):permit")
RULE(1, READ)
RULE(0, WRITE, TRAPWRITE) {
CALC("A=1")
}
RULE(1, WRITE, TRAPWRITE) {
CALC("A=0")
}
}
ASG(rps_lock) {
INPA("$(P):permit")
RULE(1, READ)
RULE(1, WRITE, TRAPWRITE) {
CALC("A=0")
}
}
'''
from .conftest import run_softioc, poll_readiness
handler = run_softioc(request, db=access_rights_db,
access_rules_text=access_rights_asg_rules,
macros={'P': prefix},
)
PV._default_context = context
process = handler.processes[-1]
pvs = {pv[len(prefix) + 1:]: PV(pv)
for pv, rtype in access_rights_db
}
pvs['ao.DRVH'] = PV(prefix + ':ao.DRVH')
poll_readiness(pvs['ao'].pvname)
for pv in pvs.values():
pv.wait_for_connection()
def finalize_context():
print('Cleaning up PV context')
broadcaster = PV._default_context.broadcaster
broadcaster.disconnect()
PV._default_context.disconnect()
PV._default_context = None
print('Done cleaning up PV context')
request.addfinalizer(finalize_context)
return SimpleNamespace(process=process, prefix=prefix,
name='access_rights', pvs=pvs, type='epics-base')
def test_permit_disabled(access_security_softioc):
# with the permit disabled, all test pvs should be readable/writable
pvs = access_security_softioc.pvs
for pv in pvs.values():
assert pv.read_access and pv.write_access
def test_permit_enabled(access_security_softioc):
pvs = access_security_softioc.pvs
# set the run-permit
pvs['permit'].put(1, wait=True)
assert pvs['permit'].get(as_string=True, use_monitor=False) == 'ENABLED'
# rps_lock rule should disable write access
assert pvs['bo'].write_access is False
with pytest.raises(AccessRightsException):
pvs['bo'].put(1, wait=True)
# rps_threshold rule should disable write access to metadata, not VAL
assert pvs['ao'].write_access is True
assert pvs['ao.DRVH'].write_access is False
with pytest.raises(AccessRightsException):
pvs['ao.DRVH'].put(100, wait=True)
def test_pv_access_event_callback(access_security_softioc):
pvs = access_security_softioc.pvs
# clear the run-permit
pvs['permit'].put(0, wait=True)
assert pvs['permit'].get(as_string=True, use_monitor=False) == 'DISABLED'
def lcb(read_access, write_access, pv=None):
assert pv.read_access == read_access
assert pv.write_access == write_access
pv.flag = True
bo = PV(pvs['bo'].pvname, access_callback=lcb)
bo.flag = False
# set the run-permit to trigger an access rights event
pvs['permit'].put(1, wait=True)
assert pvs['permit'].get(as_string=True, use_monitor=False) == 'ENABLED'
# give the callback a bit of time to run
time.sleep(0.2)
assert bo.flag is True
def test_get_with_metadata(pvnames):
with no_simulator_updates(pvnames):
pv = PV(pvnames.int_pv, form='native')
# Request time type
md = pv.get_with_metadata(use_monitor=False, form='time')
assert 'timestamp' in md
assert 'lower_ctrl_limit' not in md
# Request control type
md = pv.get_with_metadata(use_monitor=False, form='ctrl')
assert 'lower_ctrl_limit' in md
assert 'timestamp' not in md
# Use monitor: all metadata should come through
md = pv.get_with_metadata(use_monitor=True)
assert 'timestamp' in md
assert 'lower_ctrl_limit' in md
# Get a namespace
ns = pv.get_with_metadata(use_monitor=True, as_namespace=True)
assert hasattr(ns, 'timestamp')
assert hasattr(ns, 'lower_ctrl_limit')
|
test_advanced.py
|
# coding: utf-8
from concurrent.futures import ThreadPoolExecutor
import json
import logging
import random
import sys
import threading
import time
import numpy as np
import pytest
import ray.cluster_utils
import ray.test_utils
from ray.test_utils import client_test_enabled
from ray.test_utils import RayTestTimeoutException
if client_test_enabled():
from ray.util.client import ray
else:
import ray
logger = logging.getLogger(__name__)
# issue https://github.com/ray-project/ray/issues/7105
@pytest.mark.skipif(client_test_enabled(), reason="internal api")
def test_internal_free(shutdown_only):
ray.init(num_cpus=1)
@ray.remote
class Sampler:
def sample(self):
return [1, 2, 3, 4, 5]
def sample_big(self):
return np.zeros(1024 * 1024)
sampler = Sampler.remote()
# Free deletes from in-memory store.
obj_ref = sampler.sample.remote()
ray.get(obj_ref)
ray.internal.free(obj_ref)
with pytest.raises(Exception):
ray.get(obj_ref)
# Free deletes big objects from plasma store.
big_id = sampler.sample_big.remote()
ray.get(big_id)
ray.internal.free(big_id)
time.sleep(1) # wait for delete RPC to propagate
with pytest.raises(Exception):
ray.get(big_id)
def test_multiple_waits_and_gets(shutdown_only):
# It is important to use three workers here, so that the three tasks
# launched in this experiment can run at the same time.
ray.init(num_cpus=3)
@ray.remote
def f(delay):
time.sleep(delay)
return 1
@ray.remote
def g(input_list):
# The argument input_list should be a list containing one object ref.
ray.wait([input_list[0]])
@ray.remote
def h(input_list):
# The argument input_list should be a list containing one object ref.
ray.get(input_list[0])
# Make sure that multiple wait requests involving the same object ref
# all return.
x = f.remote(1)
ray.get([g.remote([x]), g.remote([x])])
# Make sure that multiple get requests involving the same object ref all
# return.
x = f.remote(1)
ray.get([h.remote([x]), h.remote([x])])
@pytest.mark.skipif(client_test_enabled(), reason="internal api")
def test_caching_functions_to_run(shutdown_only):
# Test that we export functions to run on all workers before the driver
# is connected.
def f(worker_info):
sys.path.append(1)
ray.worker.global_worker.run_function_on_all_workers(f)
def f(worker_info):
sys.path.append(2)
ray.worker.global_worker.run_function_on_all_workers(f)
def g(worker_info):
sys.path.append(3)
ray.worker.global_worker.run_function_on_all_workers(g)
def f(worker_info):
sys.path.append(4)
ray.worker.global_worker.run_function_on_all_workers(f)
ray.init(num_cpus=1)
@ray.remote
def get_state():
time.sleep(1)
return sys.path[-4], sys.path[-3], sys.path[-2], sys.path[-1]
res1 = get_state.remote()
res2 = get_state.remote()
assert ray.get(res1) == (1, 2, 3, 4)
assert ray.get(res2) == (1, 2, 3, 4)
# Clean up the path on the workers.
def f(worker_info):
sys.path.pop()
sys.path.pop()
sys.path.pop()
sys.path.pop()
ray.worker.global_worker.run_function_on_all_workers(f)
@pytest.mark.skipif(client_test_enabled(), reason="internal api")
def test_running_function_on_all_workers(ray_start_regular):
def f(worker_info):
sys.path.append("fake_directory")
ray.worker.global_worker.run_function_on_all_workers(f)
@ray.remote
def get_path1():
return sys.path
assert "fake_directory" == ray.get(get_path1.remote())[-1]
def f(worker_info):
sys.path.pop(-1)
ray.worker.global_worker.run_function_on_all_workers(f)
# Create a second remote function to guarantee that when we call
# get_path2.remote(), the second function to run will have been run on
# the worker.
@ray.remote
def get_path2():
return sys.path
assert "fake_directory" not in ray.get(get_path2.remote())
@pytest.mark.skipif(client_test_enabled(), reason="ray.timeline")
def test_profiling_api(ray_start_2_cpus):
@ray.remote
def f():
with ray.profile("custom_event", extra_data={"name": "custom name"}):
pass
ray.put(1)
object_ref = f.remote()
ray.wait([object_ref])
ray.get(object_ref)
# Wait until all of the profiling information appears in the profile
# table.
timeout_seconds = 20
start_time = time.time()
while True:
profile_data = ray.timeline()
event_types = {event["cat"] for event in profile_data}
expected_types = [
"task",
"task:deserialize_arguments",
"task:execute",
"task:store_outputs",
"wait_for_function",
"ray.get",
"ray.put",
"ray.wait",
"submit_task",
"fetch_and_run_function",
# TODO (Alex) :https://github.com/ray-project/ray/pull/9346
# "register_remote_function",
"custom_event", # This is the custom one from ray.profile.
]
if all(expected_type in event_types
for expected_type in expected_types):
break
if time.time() - start_time > timeout_seconds:
raise RayTestTimeoutException(
"Timed out while waiting for information in "
"profile table. Missing events: {}.".format(
set(expected_types) - set(event_types)))
# The profiling information only flushes once every second.
time.sleep(1.1)
def test_wait_cluster(ray_start_cluster):
cluster = ray_start_cluster
cluster.add_node(num_cpus=1, resources={"RemoteResource": 1})
cluster.add_node(num_cpus=1, resources={"RemoteResource": 1})
ray.init(address=cluster.address)
@ray.remote(resources={"RemoteResource": 1})
def f():
return
# Make sure we have enough workers on the remote nodes to execute some
# tasks.
tasks = [f.remote() for _ in range(10)]
start = time.time()
ray.get(tasks)
end = time.time()
# Submit some more tasks that can only be executed on the remote nodes.
tasks = [f.remote() for _ in range(10)]
# Sleep for a bit to let the tasks finish.
time.sleep((end - start) * 2)
_, unready = ray.wait(tasks, num_returns=len(tasks), timeout=0)
# All remote tasks should have finished.
assert len(unready) == 0
@pytest.mark.skip(reason="TODO(ekl)")
def test_object_transfer_dump(ray_start_cluster):
cluster = ray_start_cluster
num_nodes = 3
for i in range(num_nodes):
cluster.add_node(resources={str(i): 1}, object_store_memory=10**9)
ray.init(address=cluster.address)
@ray.remote
def f(x):
return
# These objects will live on different nodes.
object_refs = [
f._remote(args=[1], resources={str(i): 1}) for i in range(num_nodes)
]
# Broadcast each object from each machine to each other machine.
for object_ref in object_refs:
ray.get([
f._remote(args=[object_ref], resources={str(i): 1})
for i in range(num_nodes)
])
# The profiling information only flushes once every second.
time.sleep(1.1)
transfer_dump = ray.object_transfer_timeline()
# Make sure the transfer dump can be serialized with JSON.
json.loads(json.dumps(transfer_dump))
assert len(transfer_dump) >= num_nodes**2
assert len({
event["pid"]
for event in transfer_dump if event["name"] == "transfer_receive"
}) == num_nodes
assert len({
event["pid"]
for event in transfer_dump if event["name"] == "transfer_send"
}) == num_nodes
def test_identical_function_names(ray_start_regular):
# Define a bunch of remote functions and make sure that we don't
# accidentally call an older version.
num_calls = 200
@ray.remote
def f():
return 1
results1 = [f.remote() for _ in range(num_calls)]
@ray.remote
def f():
return 2
results2 = [f.remote() for _ in range(num_calls)]
@ray.remote
def f():
return 3
results3 = [f.remote() for _ in range(num_calls)]
@ray.remote
def f():
return 4
results4 = [f.remote() for _ in range(num_calls)]
@ray.remote
def f():
return 5
results5 = [f.remote() for _ in range(num_calls)]
assert ray.get(results1) == num_calls * [1]
assert ray.get(results2) == num_calls * [2]
assert ray.get(results3) == num_calls * [3]
assert ray.get(results4) == num_calls * [4]
assert ray.get(results5) == num_calls * [5]
@ray.remote
def g():
return 1
@ray.remote # noqa: F811
def g(): # noqa: F811
return 2
@ray.remote # noqa: F811
def g(): # noqa: F811
return 3
@ray.remote # noqa: F811
def g(): # noqa: F811
return 4
@ray.remote # noqa: F811
def g(): # noqa: F811
return 5
result_values = ray.get([g.remote() for _ in range(num_calls)])
assert result_values == num_calls * [5]
def test_illegal_api_calls(ray_start_regular):
# Verify that we cannot call put on an ObjectRef.
x = ray.put(1)
with pytest.raises(Exception):
ray.put(x)
# Verify that we cannot call get on a regular value.
with pytest.raises(Exception):
ray.get(3)
@pytest.mark.skipif(
client_test_enabled(), reason="grpc interaction with releasing resources")
def test_multithreading(ray_start_2_cpus):
# This test requires at least 2 CPUs to finish since the worker does not
# release resources when joining the threads.
def run_test_in_multi_threads(test_case, num_threads=10, num_repeats=25):
"""A helper function that runs test cases in multiple threads."""
def wrapper():
for _ in range(num_repeats):
test_case()
time.sleep(random.randint(0, 10) / 1000.0)
return "ok"
executor = ThreadPoolExecutor(max_workers=num_threads)
futures = [executor.submit(wrapper) for _ in range(num_threads)]
for future in futures:
assert future.result() == "ok"
@ray.remote
def echo(value, delay_ms=0):
if delay_ms > 0:
time.sleep(delay_ms / 1000.0)
return value
def test_api_in_multi_threads():
"""Test using Ray api in multiple threads."""
@ray.remote
class Echo:
def echo(self, value):
return value
# Test calling remote functions in multiple threads.
def test_remote_call():
value = random.randint(0, 1000000)
result = ray.get(echo.remote(value))
assert value == result
run_test_in_multi_threads(test_remote_call)
# Test multiple threads calling one actor.
actor = Echo.remote()
def test_call_actor():
value = random.randint(0, 1000000)
result = ray.get(actor.echo.remote(value))
assert value == result
run_test_in_multi_threads(test_call_actor)
# Test put and get.
def test_put_and_get():
value = random.randint(0, 1000000)
result = ray.get(ray.put(value))
assert value == result
run_test_in_multi_threads(test_put_and_get)
# Test multiple threads waiting for objects.
num_wait_objects = 10
objects = [
echo.remote(i, delay_ms=10) for i in range(num_wait_objects)
]
def test_wait():
ready, _ = ray.wait(
objects,
num_returns=len(objects),
timeout=1000.0,
)
assert len(ready) == num_wait_objects
assert ray.get(ready) == list(range(num_wait_objects))
run_test_in_multi_threads(test_wait, num_repeats=1)
# Run tests in a driver.
test_api_in_multi_threads()
# Run tests in a worker.
@ray.remote
def run_tests_in_worker():
test_api_in_multi_threads()
return "ok"
assert ray.get(run_tests_in_worker.remote()) == "ok"
# Test actor that runs background threads.
@ray.remote
class MultithreadedActor:
def __init__(self):
self.lock = threading.Lock()
self.thread_results = []
def background_thread(self, wait_objects):
try:
# Test wait
ready, _ = ray.wait(
wait_objects,
num_returns=len(wait_objects),
timeout=1000.0,
)
assert len(ready) == len(wait_objects)
for _ in range(20):
num = 10
# Test remote call
results = [echo.remote(i) for i in range(num)]
assert ray.get(results) == list(range(num))
# Test put and get
objects = [ray.put(i) for i in range(num)]
assert ray.get(objects) == list(range(num))
time.sleep(random.randint(0, 10) / 1000.0)
except Exception as e:
with self.lock:
self.thread_results.append(e)
else:
with self.lock:
self.thread_results.append("ok")
def spawn(self):
wait_objects = [echo.remote(i, delay_ms=10) for i in range(10)]
self.threads = [
threading.Thread(
target=self.background_thread, args=(wait_objects, ))
for _ in range(20)
]
[thread.start() for thread in self.threads]
def join(self):
[thread.join() for thread in self.threads]
assert self.thread_results == ["ok"] * len(self.threads)
return "ok"
actor = MultithreadedActor.remote()
actor.spawn.remote()
    assert ray.get(actor.join.remote()) == "ok"
@pytest.mark.skipif(client_test_enabled(), reason="internal api")
def test_wait_makes_object_local(ray_start_cluster):
cluster = ray_start_cluster
cluster.add_node(num_cpus=0)
cluster.add_node(num_cpus=2)
ray.init(address=cluster.address)
@ray.remote
class Foo:
def method(self):
return np.zeros(1024 * 1024)
a = Foo.remote()
# Test get makes the object local.
x_id = a.method.remote()
assert not ray.worker.global_worker.core_worker.object_exists(x_id)
ray.get(x_id)
assert ray.worker.global_worker.core_worker.object_exists(x_id)
# Test wait makes the object local.
x_id = a.method.remote()
assert not ray.worker.global_worker.core_worker.object_exists(x_id)
ok, _ = ray.wait([x_id])
assert len(ok) == 1
assert ray.worker.global_worker.core_worker.object_exists(x_id)
if __name__ == "__main__":
import pytest
sys.exit(pytest.main(["-v", __file__]))
|
decode.py
|
import logging
import os
import signal
import socket
import threading
from collections import UserDict
from datetime import datetime, timedelta, timezone
from operator import itemgetter
from pathlib import Path
from typing import (
Any,
Callable,
Dict,
Iterable,
Iterator,
List,
Optional,
TextIO,
Tuple,
Union,
)
import pyModeS as pms
from tqdm.autonotebook import tqdm
import pandas as pd
from ...core import Flight, Traffic
from ...data.basic.airports import Airport
def next_msg(chunk_it: Iterator[bytes]) -> Iterator[bytes]:
data = b""
for chunk in chunk_it:
data += chunk
while len(data) >= 23:
it = data.find(0x1A)
if it < 0:
break
data = data[it:]
if len(data) < 23:
break
if data[1] == 0x33:
yield data[:23]
data = data[23:]
continue
elif data[1] == 0x32:
data = data[16:]
continue
elif data[1] == 0x31:
data = data[11:]
continue
elif data[1] == 0x34:
data = data[23:]
continue
else:
data = data[1:]
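# Hedged illustration (not part of the original module): next_msg() reassembles
# Beast-framed input from arbitrary byte chunks. Frames start with 0x1A; only
# type 0x33 (23-byte Mode S long frames) is yielded, the shorter frame types
# are consumed and skipped. A synthetic long frame split into 5-byte chunks is
# still recovered whole:
def _example_next_msg() -> None:  # hypothetical helper, never called here
    frame = b"\x1a\x33" + bytes(21)  # dummy frame: escape, type byte, payload
    chunks = (frame[i:i + 5] for i in range(0, len(frame), 5))
    assert list(next_msg(chunks)) == [frame]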
def decode_time_default(
msg: str, time_0: Optional[datetime] = None
) -> datetime:
return datetime.now(timezone.utc)
def decode_time_radarcape(
msg: str, time_0: Optional[datetime] = None
) -> datetime:
now = datetime.now(timezone.utc)
if time_0 is not None:
now = time_0
timestamp = int(msg[4:16], 16)
nanos = timestamp & 0x00003FFFFFFF
secs = timestamp >> 30
now = now.replace(hour=0, minute=0, second=0, microsecond=0)
now += timedelta(seconds=secs, microseconds=nanos / 1000)
return now
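# Worked illustration (not part of the original module) of the radarcape
# timestamp layout decoded above: a 48-bit field carrying seconds-of-day in the
# upper 18 bits and nanoseconds in the lower 30 bits, preceded by the "1a33"
# header and followed by one signal byte.
def _example_radarcape_time() -> datetime:  # hypothetical helper, never called here
    secs, nanos = 3600, 500_000_000                # 01:00:00.500 after midnight
    field = (secs << 30) | nanos
    msg = "1a33" + format(field, "012x") + "00"    # header + timestamp + signal
    t0 = datetime(2021, 1, 1, tzinfo=timezone.utc)
    return decode_time_radarcape(msg, t0)          # 2021-01-01 01:00:00.500+00:00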
def decode_time_dump1090(
msg: str, time_0: Optional[datetime] = None
) -> datetime:
now = datetime.now(timezone.utc)
if time_0 is not None:
now = time_0
else:
now = now.replace(hour=0, minute=0, second=0, microsecond=0)
timestamp = int(msg[4:16], 16)
# dump1090/net_io.c => time (in 12Mhz ticks)
now += timedelta(seconds=timestamp / 12e6)
return now
decode_time: Dict[str, Callable[[str, Optional[datetime]], datetime]] = {
"radarcape": decode_time_radarcape,
"dump1090": decode_time_dump1090,
"default": decode_time_default,
}
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the to_be_stopped() condition."""
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
        self.daemon = True  # daemon thread, so it never blocks interpreter shutdown
self._stop_event = threading.Event()
def stop(self) -> None:
self._stop_event.set()
def to_be_stopped(self) -> bool:
return self._stop_event.is_set()
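# Hedged usage sketch (not part of the original module): a worker loop polls
# to_be_stopped() and exits promptly once stop() is called, which is how
# Decoder.stop() below shuts down its reader thread.
def _example_stoppable_worker() -> None:  # hypothetical helper, never called here
    finished = threading.Event()

    def loop() -> None:
        while not worker.to_be_stopped():
            pass  # one unit of work per iteration
        finished.set()

    worker = StoppableThread(target=loop)
    worker.start()
    worker.stop()   # request shutdown...
    worker.join()   # ...and wait for the loop to notice
    assert finished.is_set()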
class Aircraft(object):
def __init__(self, icao24: str, lat0: float, lon0: float) -> None:
self.icao24 = icao24
self._callsign: Optional[str] = None
self._flight: Optional[Flight] = None
self.cumul: List[Dict] = []
self.t0: Optional[datetime] = None
self.t1: Optional[datetime] = None
self.tpos: Optional[datetime] = None
self.m0: Optional[str] = None
self.m1: Optional[str] = None
self.lat: Optional[float] = None
self.lon: Optional[float] = None
self.alt: Optional[float] = None
self.trk: Optional[float] = None
self.spd: Optional[float] = None
self.lat0: float = lat0
self.lon0: float = lon0
self.version: Optional[int] = None
self.nic_a: Optional[int] = None
self.nic_bc: Optional[int] = None
self.nic_s: Optional[int] = None
self.lock = threading.Lock()
@property
def flight(self) -> Optional[Flight]:
with self.lock: # access then clear not thread-safe, hence the lock
df = pd.DataFrame.from_records(self.cumul)
self.cumul.clear()
if self._flight is not None:
df = pd.concat([self._flight.data, df], sort=False)
if self.version is not None:
# remove columns added by nuc_p, nuc_r
if "HPL" in df.columns:
df = df.drop(columns=["HPL", "RCu", "RCv"])
if "HVE" in df.columns:
df = df.drop(columns=["HVE", "VVE"])
if len(df) == 0:
return None
self._flight = Flight(
df.assign(
callsign=df.callsign.replace("", None)
.fillna(method="ffill")
.fillna(method="bfill")
)
)
return self._flight
@property
def callsign(self):
return self._callsign
@callsign.setter
def callsign(self, args):
t, msg = args
callsign = pms.adsb.callsign(msg).strip("_")
if callsign == "":
return
self._callsign = callsign
with self.lock:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, callsign=self._callsign)
)
@property
def speed(self):
pass
@speed.setter
def speed(self, args):
t, msg = args
vdata = pms.adsb.velocity(msg)
if vdata is None:
return
spd, trk, roc, tag = vdata
if tag != "GS":
            # airspeed-only ("AS") reports are ignored; only ground speed ("GS") is decoded
return
if (spd is None) or (trk is None):
return
self.spd = spd
self.trk = trk
delta = pms.adsb.altitude_diff(msg)
with self.lock:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
groundspeed=spd,
track=trk,
vertical_rate=roc,
)
)
if delta is not None and self.alt is not None:
self.cumul[-1]["geoaltitude"] = self.alt + delta
@property
def position(self):
pass
@position.setter
def position(self, args):
t, msg = args
oe = pms.adsb.oe_flag(msg)
setattr(self, "m" + str(oe), msg)
setattr(self, "t" + str(oe), t)
if (
self.t0 is not None
and self.t1 is not None
and abs((self.t0 - self.t1).total_seconds()) < 10
):
latlon = pms.adsb.position(
self.m0, self.m1, self.t0, self.t1, self.lat0, self.lon0
)
else:
latlon = None
if latlon is not None:
self.tpos = t
self.lat, self.lon = latlon
self.alt = pms.adsb.altitude(msg)
with self.lock:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
latitude=self.lat,
longitude=self.lon,
altitude=self.alt,
onground=False,
)
)
@property
def surface(self):
pass
@surface.setter
def surface(self, args):
t, msg = args
self.lat, self.lon = pms.adsb.surface_position_with_ref(
msg, self.lat0, self.lon0
)
with self.lock:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
latitude=self.lat,
longitude=self.lon,
onground=True,
)
)
@property
def altcode(self):
pass
@altcode.setter
def altcode(self, args):
t, msg = args
from pyModeS import hex2bin
if set(hex2bin(msg)[19:32]) in [{"0"}, {"1"}]:
return
self.alt = pms.common.altcode(msg)
with self.lock:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, altitude=self.alt)
)
@property
def idcode(self):
pass
@idcode.setter
def idcode(self, args):
t, msg = args
from pyModeS import hex2bin
if set(hex2bin(msg)[19:32]) in [{"0"}, {"1"}]:
return
idcode = pms.common.idcode(msg)
with self.lock:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
squawk=idcode,
)
)
@property
def bds20(self):
pass
@bds20.setter
def bds20(self, args):
t, msg = args
callsign = pms.commb.cs20(msg).strip("_")
if callsign == "":
return
self._callsign = callsign
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = dict(**last_entry, callsign=self._callsign)
else:
self.cumul.append(
dict(
timestamp=t, icao24=self.icao24, callsign=self._callsign
)
)
@property
def bds40(self):
pass
@bds40.setter
def bds40(self, args):
t, msg = args
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {
**last_entry,
**dict(
# FMS selected altitude (ft)
selected_fms=pms.commb.selalt40fms(msg),
# MCP/FCU selected altitude (ft)
selected_mcp=pms.commb.selalt40mcp(msg),
# Barometric pressure (mb)
barometric_setting=pms.commb.p40baro(msg),
),
}
else:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
# FMS selected altitude (ft)
selected_fms=pms.commb.selalt40fms(msg),
# MCP/FCU selected altitude (ft)
selected_mcp=pms.commb.selalt40mcp(msg),
# Barometric pressure (mb)
barometric_setting=pms.commb.p40baro(msg),
)
)
@property
def bds44(self):
pass
@bds44.setter
def bds44(self, args):
t, msg = args
wind = pms.commb.wind44(msg)
wind = wind if wind is not None else (None, None)
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF 5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {
**last_entry,
**dict(
# Humidity (%)
humidity=pms.commb.hum44(msg),
# Average static pressure (hPa)
pressure=pms.commb.p44(msg),
# Static air temperature (C)
temperature=pms.commb.temp44(msg),
turbulence=pms.commb.turb44(msg),
# Wind speed (kt) and direction (true) (deg)
windspeed=wind[0],
winddirection=wind[1],
),
}
else:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
# Humidity (%)
humidity=pms.commb.hum44(msg),
# Average static pressure (hPa)
pressure=pms.commb.p44(msg),
# Static air temperature (C)
temperature=pms.commb.temp44(msg),
turbulence=pms.commb.turb44(msg),
# Wind speed (kt) and direction (true) (deg)
windspeed=wind[0],
winddirection=wind[1],
)
)
@property
def bds45(self):
pass
@bds45.setter
def bds45(self, args):
t, msg = args
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF 5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {
**last_entry,
**dict(
# Turbulence level (0-3)
turbulence=pms.commb.turb45(msg),
# Wind shear level (0-3)
wind_shear=pms.commb.ws45(msg),
# Microburst level (0-3)
microburst=pms.commb.mb45(msg),
# Icing level (0-3)
icing=pms.commb.ic45(msg),
# Wake vortex level (0-3)
wake_vortex=pms.commb.wv45(msg),
# Static air temperature (C)
temperature=pms.commb.temp45(msg),
# Average static pressure (hPa)
pressure=pms.commb.p45(msg),
# Radio height (ft)
radio_height=pms.commb.rh45(msg),
),
}
else:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
# Turbulence level (0-3)
turbulence=pms.commb.turb45(msg),
# Wind shear level (0-3)
wind_shear=pms.commb.ws45(msg),
# Microburst level (0-3)
microburst=pms.commb.mb45(msg),
# Icing level (0-3)
icing=pms.commb.ic45(msg),
# Wake vortex level (0-3)
wake_vortex=pms.commb.wv45(msg),
# Static air temperature (C)
temperature=pms.commb.temp45(msg),
# Average static pressure (hPa)
pressure=pms.commb.p45(msg),
# Radio height (ft)
radio_height=pms.commb.rh45(msg),
)
)
@property
def bds50(self):
pass
@bds50.setter
def bds50(self, args):
t, msg = args
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {
**last_entry,
**dict(
# Ground speed (kt)
groundspeed=pms.commb.gs50(msg),
# Roll angle (deg)
roll=pms.commb.roll50(msg),
# True airspeed (kt)
TAS=pms.commb.tas50(msg),
# True track angle (deg)
track=pms.commb.trk50(msg),
# Track angle rate (deg/sec)
track_rate=pms.commb.rtrk50(msg),
),
}
else:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
# Ground speed (kt)
groundspeed=pms.commb.gs50(msg),
# Roll angle (deg)
roll=pms.commb.roll50(msg),
# True airspeed (kt)
TAS=pms.commb.tas50(msg),
# True track angle (deg)
track=pms.commb.trk50(msg),
# Track angle rate (deg/sec)
track_rate=pms.commb.rtrk50(msg),
)
)
@property
def bds60(self):
pass
@bds60.setter
def bds60(self, args):
t, msg = args
with self.lock:
# in case altitude was already included from altcode (DF 4 or 20)
# or squawk from idcode (DF5 or 21)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {
**last_entry,
**dict(
# Indicated airspeed (kt)
IAS=pms.commb.ias60(msg),
# Magnetic heading (deg)
heading=pms.commb.hdg60(msg),
# Mach number (-)
Mach=pms.commb.mach60(msg),
# Barometric altitude rate (ft/min)
vertical_rate_barometric=pms.commb.vr60baro(msg),
# Inertial vertical speed (ft/min)
vertical_rate_inertial=pms.commb.vr60ins(msg),
),
}
else:
self.cumul.append(
dict(
timestamp=t,
icao24=self.icao24,
# Indicated airspeed (kt)
IAS=pms.commb.ias60(msg),
# Magnetic heading (deg)
heading=pms.commb.hdg60(msg),
# Mach number (-)
Mach=pms.commb.mach60(msg),
# Barometric altitude rate (ft/min)
vertical_rate_barometric=pms.commb.vr60baro(msg),
# Inertial vertical speed (ft/min)
vertical_rate_inertial=pms.commb.vr60ins(msg),
)
)
@property
def nuc_p(self):
pass
@nuc_p.setter
def nuc_p(self, args):
t, msg = args
with self.lock:
hpl, rcu, rcv = pms.adsb.nuc_p(msg)
current = dict(
# Horizontal Protection Limit
HPL=hpl,
# 95% Containment Radius on horizontal position error
RCu=rcu,
# 95% Containment Radius on vertical position error
RCv=rcv,
)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def nic_v1(self):
pass
@nic_v1.setter
def nic_v1(self, args):
t, msg = args
if self.nic_s is None:
return
with self.lock:
hcr, vpl = pms.adsb.nic_v1(msg, self.nic_s)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
# Horizontal Containment Radius
HCR=hcr,
# Vertical Protection Limit
VPL=vpl,
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def nic_v2(self):
pass
@nic_v2.setter
def nic_v2(self, args):
t, msg = args
if self.nic_a is None or self.nic_bc is None:
return
with self.lock:
hcr = pms.adsb.nic_v2(msg, self.nic_a, self.nic_bc)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
# Horizontal Containment Radius
HCR=hcr
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def nuc_r(self):
pass
@nuc_r.setter
def nuc_r(self, args):
t, msg = args
with self.lock:
hve, vve = pms.adsb.nuc_v(msg)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
# Horizontal Velocity Error
HVE=hve,
# Vertical Velocity Error
VVE=vve,
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def nac_v(self):
pass
@nac_v.setter
def nac_v(self, args):
t, msg = args
with self.lock:
hfm, vfm = pms.adsb.nac_v(msg)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
# Horizontal Figure of Merit for rate (GNSS)
HFM=hfm,
# Vertical Figure of Merit for rate (GNSS)
VFM=vfm,
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def nac_p(self):
pass
@nac_p.setter
def nac_p(self, args):
t, msg = args
with self.lock:
epu, vepu = pms.adsb.nac_p(msg)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
# Estimated Position Uncertainty
EPU=epu,
# Vertical Estimated Position Uncertainty
VEPU=vepu,
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
@property
def sil(self):
pass
@sil.setter
def sil(self, args):
t, msg = args
with self.lock:
phcr, pvpl, base = pms.adsb.sil(msg, self.version)
last_entry = self.cumul[-1] if len(self.cumul) > 0 else None
current = dict(
version=self.version,
# Probability exceeding Horizontal Containment Radius
pHCR=phcr,
# Probability exceeding Vertical Protection Limit
pVPL=pvpl,
sil_base=base,
)
if last_entry is not None and last_entry["timestamp"] == t:
self.cumul[-1] = {**last_entry, **current}
else:
self.cumul.append(
dict(timestamp=t, icao24=self.icao24, **current)
)
class AircraftDict(UserDict):
lat0: float
lon0: float
def __missing__(self, key):
self[key] = value = Aircraft(key, self.lat0, self.lon0)
return value
def set_latlon(self, lat0, lon0):
self.lat0 = lat0
self.lon0 = lon0
for ac in self.values():
ac.lat0 = lat0
ac.lon0 = lon0
class Decoder:
thread: Optional[StoppableThread]
def __init__(
self, reference: Union[None, str, Airport, Tuple[float, float]] = None
) -> None:
if isinstance(reference, str):
from ...data import airports
reference = airports[reference]
if reference is None:
logging.warning(
"No valid reference position provided. Fallback to (0, 0)"
)
lat0, lon0 = 0.0, 0.0
elif isinstance(reference, Airport):
lat0, lon0 = reference.latlon
else:
lat0, lon0 = reference # type: ignore
self.acs: AircraftDict = AircraftDict()
self.acs.set_latlon(lat0, lon0)
self.thread = None
@classmethod
def from_file(
cls,
filename: Union[str, Path],
reference: Union[str, Airport, Tuple[float, float]],
uncertainty: bool = False,
) -> "Decoder":
if isinstance(filename, str):
filename = Path(filename)
with filename.open("r") as fh:
all_lines = fh.readlines()
decoder = cls(reference)
decoder.process_msgs(
list(
(
datetime.fromtimestamp(
float(line.strip().split(",")[0]), timezone.utc
),
line.strip().split(",")[1][18:],
)
for line in all_lines
),
uncertainty=uncertainty,
)
return decoder
@classmethod
def from_binary(
cls,
filename: Union[str, Path],
reference: Union[str, Airport, Tuple[float, float]],
*,
uncertainty: bool = False,
time_fmt: str = "dump1090",
time_0: Optional[datetime] = None,
redefine_mag: int = 10,
fh: Optional[TextIO] = None,
):
decoder = cls(reference)
redefine_freq = 2 ** redefine_mag - 1
decode_time_here = decode_time.get(time_fmt, decode_time_default)
def next_in_binary(filename: Union[str, Path]) -> Iterator[bytes]:
with Path(filename).open("rb") as fh:
while True:
get = fh.read()
if len(get) == 0:
return
yield get
for i, bin_msg in tqdm(enumerate(next_msg(next_in_binary(filename)))):
if len(bin_msg) < 23:
continue
msg = "".join(["{:02x}".format(t) for t in bin_msg])
now = decode_time_here(msg, time_0)
if fh is not None:
fh.write("{},{}\n".format(now.timestamp(), msg))
            # redefine_freq == 2**redefine_mag - 1, so this fires once every
            # 2**redefine_mag frames to re-centre the CPR reference position
            if i & redefine_freq == redefine_freq:
                decoder.redefine_reference(now)
decoder.process(now, msg[18:], uncertainty=uncertainty)
return decoder
@classmethod
def from_rtlsdr(
cls,
reference: Union[str, Airport, Tuple[float, float]],
file_pattern: str = "~/ADSB_EHS_RAW_%Y%m%d_dump1090.csv",
uncertainty: bool = False,
) -> "Decoder": # coverage: ignore
from .rtlsdr import MyRtlReader
decoder = cls(reference)
# dump file
now = datetime.now(timezone.utc)
filename = now.strftime(file_pattern)
today = os.path.expanduser(filename)
fh = open(today, "a", 1)
rtlsdr = MyRtlReader(decoder, fh, uncertainty=uncertainty)
decoder.thread = StoppableThread(target=rtlsdr.run)
signal.signal(signal.SIGINT, rtlsdr.stop)
decoder.thread.start()
return decoder
@classmethod
def from_socket(
cls,
socket: socket.socket,
reference: Union[str, Airport, Tuple[float, float]],
*,
uncertainty: bool,
time_fmt: str = "default",
time_0: Optional[datetime] = None,
redefine_mag: int = 7,
fh: Optional[TextIO] = None,
) -> "Decoder": # coverage: ignore
decoder = cls(reference)
redefine_freq = 2 ** redefine_mag - 1
decode_time_here = decode_time.get(time_fmt, decode_time_default)
def next_in_socket() -> Iterator[bytes]:
while True:
if decoder.thread is None or decoder.thread.to_be_stopped():
socket.close()
return
yield socket.recv(2048)
def decode():
for i, bin_msg in enumerate(next_msg(next_in_socket())):
msg = "".join(["{:02x}".format(t) for t in bin_msg])
# Timestamp decoding
now = decode_time_here(msg, time_0)
if fh is not None:
fh.write("{},{}\n".format(now.timestamp(), msg))
if len(bin_msg) < 23:
continue
if (
time_fmt != "radarcape"
and i & redefine_freq == redefine_freq
):
decoder.redefine_reference(now)
decoder.process(now, msg[18:], uncertainty=uncertainty)
decoder.thread = StoppableThread(target=decode)
decoder.thread.start()
return decoder
def stop(self):
if self.thread is not None and self.thread.is_alive():
self.thread.stop()
self.thread.join()
def __del__(self):
self.stop()
@classmethod
def from_dump1090(
cls,
reference: Union[str, Airport, Tuple[float, float]],
file_pattern: str = "~/ADSB_EHS_RAW_%Y%m%d_dump1090.csv",
uncertainty: bool = False,
) -> "Decoder": # coverage: ignore
now = datetime.now(timezone.utc)
filename = now.strftime(file_pattern)
today = os.path.expanduser(filename)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("localhost", 30005))
fh = open(today, "a", 1)
return cls.from_socket(
s,
reference,
uncertainty=uncertainty,
time_fmt="dump1090",
time_0=now,
fh=fh,
)
@classmethod
def from_address(
cls,
host: str,
port: int,
reference: Union[str, Airport, Tuple[float, float]],
file_pattern: str = "~/ADSB_EHS_RAW_%Y%m%d_tcp.csv",
uncertainty: bool = False,
) -> "Decoder": # coverage: ignore
now = datetime.now(timezone.utc)
filename = now.strftime(file_pattern)
today = os.path.expanduser(filename)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
fh = open(today, "a", 1)
return cls.from_socket(
s, reference, uncertainty=uncertainty, time_fmt="radarcape", fh=fh
)
def redefine_reference(self, time: datetime) -> None:
pos = list(
(ac.lat, ac.lon)
for ac in self.acs.values()
if ac.alt is not None
and ac.alt < 5000
and ac.tpos is not None
and (time - ac.tpos).total_seconds() < 20 * 60
)
n = len(pos)
if n > 0:
self.acs.set_latlon(
sum(a[0] for a in pos) / n, sum(a[1] for a in pos) / n
)
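    # Illustration (comments added, not in the original source): with two
    # aircraft recently seen below 5000 ft at (48.0, 2.0) and (49.0, 3.0), the
    # method above moves the CPR reference to their centroid (48.5, 2.5);
    # aircraft without a position in the last 20 minutes are ignored.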
def process_msgs(
self, msgs: Iterable[Tuple[datetime, str]], uncertainty: bool = False
) -> None:
        # msgs is iterated twice (once to compute the progress-bar total),
        # so it must be a materialised sequence, not a one-shot iterator
        for i, (time, msg) in tqdm(enumerate(msgs), total=sum(1 for _ in msgs)):
if i & 127 == 127:
self.redefine_reference(time)
self.process(time, msg, uncertainty=uncertainty)
def process(
self,
time: datetime,
msg: str,
*args,
uncertainty: bool = False,
spd: Optional[float] = None,
trk: Optional[float] = None,
alt: Optional[float] = None,
) -> None:
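        # Dispatch overview (comments added, not in the original source):
        #   DF 4/20  -> Mode S altitude replies (altcode)
        #   DF 5/21  -> Mode S identity replies (squawk via idcode)
        #   DF 17/18 -> ADS-B; the typecode selects callsign, surface or
        #               airborne position, velocity, or operational status
        #   DF 20/21 -> Comm-B, with the BDS register inferred by pyModeS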
ac: Aircraft
if len(msg) != 28:
return
df = pms.df(msg)
if df == 4 or df == 20:
icao = pms.icao(msg)
if isinstance(icao, bytes):
icao = icao.decode()
ac = self.acs[icao.lower()]
ac.altcode = time, msg
if df == 5 or df == 21:
icao = pms.icao(msg)
if isinstance(icao, bytes):
icao = icao.decode()
ac = self.acs[icao.lower()]
ac.idcode = time, msg
if df == 17 or df == 18: # ADS-B
if pms.crc(msg, encode=False) != 0:
return
tc = pms.adsb.typecode(msg)
icao = pms.icao(msg)
# before it's fixed in pyModeS release...
if isinstance(icao, bytes):
icao = icao.decode()
ac = self.acs[icao.lower()]
if 1 <= tc <= 4:
ac.callsign = time, msg
if 5 <= tc <= 8:
ac.surface = time, msg
if tc == 19:
ac.speed = time, msg
if 9 <= tc <= 18:
# This is barometric altitude
ac.position = time, msg
if 20 <= tc <= 22:
# Only GNSS altitude
pass
if not uncertainty:
return
if 9 <= tc <= 18:
ac.nic_bc = pms.adsb.nic_b(msg)
if (5 <= tc <= 8) or (9 <= tc <= 18) or (20 <= tc <= 22):
ac.nuc_p = time, msg
if ac.version == 1:
ac.nic_v1 = time, msg
elif ac.version == 2:
ac.nic_v2 = time, msg
if tc == 19:
ac.nuc_r = time, msg
if ac.version in [1, 2]:
ac.nac_v = time, msg
if tc == 29:
ac.sil = time, msg
ac.nac_p = time, msg
if tc == 31:
ac.version = pms.adsb.version(msg)
ac.sil = time, msg
ac.nac_p = time, msg
if ac.version == 1:
ac.nic_s = pms.adsb.nic_s(msg)
elif ac.version == 2:
ac.nic_a, ac.nic_bc = pms.adsb.nic_a_c(msg)
elif df == 20 or df == 21:
bds = pms.bds.infer(msg)
icao = pms.icao(msg)
if isinstance(icao, bytes):
icao = icao.decode()
ac = self.acs[icao.lower()]
if bds == "BDS20":
ac.bds20 = time, msg
return
if bds == "BDS40":
ac.bds40 = time, msg
return
if bds == "BDS44":
ac.bds44 = time, msg
return
if bds == "BDS45":
ac.bds45 = time, msg
return
if bds == "BDS50,BDS60":
if spd is not None and trk is not None and alt is not None:
bds = pms.bds.is50or60(msg, spd, trk, alt)
elif (
ac.spd is not None
and ac.trk is not None
and ac.alt is not None
):
bds = pms.bds.is50or60(msg, ac.spd, ac.trk, ac.alt)
else:
return
# do not return!
if bds == "BDS50":
ac.bds50 = time, msg
return
if bds == "BDS60":
ac.bds60 = time, msg
return
@property
def aircraft(self) -> List[Dict[str, Any]]:
return sorted(
(
dict(
icao24=key,
callsign=ac.callsign,
length=(
(len(ac.cumul) + len(ac._flight))
if ac._flight is not None
else len(ac.cumul)
),
position=ac.lat is not None,
data=ac,
)
for (key, ac) in self.acs.items()
if ac.callsign is not None
),
key=itemgetter("length"),
reverse=True,
)
@property
def traffic(self) -> Optional[Traffic]:
try:
return Traffic.from_flights(
self[elt["icao24"]] for elt in self.aircraft
)
except ValueError:
return None
def __getitem__(self, icao: str) -> Optional[Flight]:
return self.acs[icao].flight
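# A minimal sketch (not part of the Decoder above; the helper name `classify_frame`
# is hypothetical) of how a raw 112-bit Mode S frame can be classified with pyModeS
# before being handed to Decoder.process:
def classify_frame(msg: str) -> str:
    if len(msg) != 28:  # 112 bits == 28 hex characters
        return "short or invalid frame"
    df = pms.df(msg)
    if df in (17, 18):  # ADS-B extended squitter
        tc = pms.adsb.typecode(msg)
        if 1 <= tc <= 4:
            return "identification"
        if 9 <= tc <= 18:
            return "airborne position (barometric altitude)"
        if tc == 19:
            return "airborne velocity"
        return f"other ADS-B message (typecode {tc})"
    if df in (20, 21):  # Comm-B reply carrying a BDS register
        return f"Comm-B reply ({pms.bds.infer(msg)})"
    return f"DF{df} message"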
|
launch.py
|
import discord.http; discord.http.Route.BASE = "https://discordapp.com/api/v9"
import os
import asyncio
import multiprocessing
import requests
import signal
import logging; log = logging.getLogger(__name__)
from backend import ShardedBotInstance, config
class Supervisor(object):
def __init__(self, loop):
self.cluster_queue = []
self.clusters = []
self.loop = loop
self.future = None
self.alive = True
self.keep_alive = None
def shard_count(self):
d = requests.get(
"https://discordapp.com/api/v9/gateway/bot",
headers={
"Authorization": "Bot " + config["token"],
}
)
d.raise_for_status()
return (d.json())["shards"]
def start(self):
self.future = asyncio.ensure_future(self.boot(), loop=self.loop)
try:
self.loop.run_forever()
except KeyboardInterrupt:
self.loop.run_until_complete(self.kill())
finally:
self.clean()
def clean(self):
self.loop.stop()
self.loop.close()
def task_done(self, t):
        if t.exception():
t.print_stack()
self.keep_alive = self.loop.create_task(self.rebooter())
self.keep_alive.add_done_callback(self.task_done)
async def boot(self):
shards = list(range(self.shard_count()))
size = [shards[x:x+4] for x in range(0, len(shards), 4)]
log.info(f"Starting with {len(size)} cluster{'' if len(size) == 1 else 's'}")
for sids in size:
self.cluster_queue.append(Cluster(self, next(iter(config["cluster_names"])), sids, len(shards)))
await self.boot_cluster()
self.keep_alive = self.loop.create_task(self.rebooter())
self.keep_alive.add_done_callback(self.task_done)
log.info(f"Startup finished")
async def kill(self):
log.info("Shutting down clusters")
self.alive = False
if self.keep_alive:
self.keep_alive.cancel()
for cluster in self.clusters:
cluster.kill()
self.clean()
async def rebooter(self):
while self.alive:
if not self.clusters:
log.warning("Seems like all clusters are down")
asyncio.ensure_future(self.kill())
for cluster in self.clusters:
if not cluster.process.is_alive():
log.info(f"Cluster#{cluster.name} exited with code {cluster.process.exitcode}")
log.info(f"Restarting cluster#{cluster.name}")
await cluster.start()
await asyncio.sleep(5)
async def boot_cluster(self):
for cluster in self.cluster_queue:
self.clusters.append(cluster)
log.info(f"Starting Cluster#{cluster.name}")
self.loop.create_task(cluster.start())
await asyncio.sleep(0.5)
class Cluster(object):
def __init__(self, s, name, sids, max_shards):
self.supervisor = s
self.process = None
self.kwargs = {}
self.name = name
self.log = log
self.log.info(f"Initialized with shard ids {sids}, total shards {max_shards}")
def wait_close(self):
return self.process.join()
async def start(self, *, force=False):
if self.process and self.process.is_alive():
if not force:
return
self.log.info("Terminating process")
self.process.terminate()
self.process.close()
self.process = multiprocessing.Process(target=ShardedBotInstance, kwargs=self.kwargs, daemon=True)
self.process.start()
self.log.info(f"Process started with ID {self.process.pid}")
return True
def kill(self, sign=signal.SIGINT):
self.log.info(f"Shutting down with signal {sign}")
try:
os.kill(self.process.pid, sign)
except ProcessLookupError:
pass
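# Hypothetical helper illustrating the shard-chunking done in Supervisor.boot above:
# e.g. 10 shards with 4 shards per cluster yield [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]].
def chunk_shards(total_shards, per_cluster=4):
    shards = list(range(total_shards))
    return [shards[i:i + per_cluster] for i in range(0, len(shards), per_cluster)]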
if __name__ == "__main__":
loop = asyncio.get_event_loop()
Supervisor(loop).start()
|
sdk_worker.py
|
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""SDK harness for executing Python Fns via the Fn API."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import logging
import Queue as queue
import sys
import threading
import traceback
from concurrent import futures
import grpc
from apache_beam.portability.api import beam_fn_api_pb2
from apache_beam.portability.api import beam_fn_api_pb2_grpc
from apache_beam.runners.worker import bundle_processor
from apache_beam.runners.worker import data_plane
class SdkHarness(object):
REQUEST_METHOD_PREFIX = '_request_'
def __init__(self, control_address, worker_count):
self._worker_count = worker_count
self._worker_index = 0
self._control_channel = grpc.insecure_channel(control_address)
self._data_channel_factory = data_plane.GrpcClientDataChannelFactory()
self.workers = queue.Queue()
    # One thread is enough for fetching progress reports.
    # Assumption: generating a progress report does no I/O and does not wait on
    # other resources, so additional threads would only add complexity without
    # improving performance.
self._progress_thread_pool = futures.ThreadPoolExecutor(max_workers=1)
self._process_thread_pool = futures.ThreadPoolExecutor(
max_workers=self._worker_count)
self._instruction_id_vs_worker = {}
self._fns = {}
self._responses = queue.Queue()
self._process_bundle_queue = queue.Queue()
logging.info('Initializing SDKHarness with %s workers.', self._worker_count)
def run(self):
control_stub = beam_fn_api_pb2_grpc.BeamFnControlStub(self._control_channel)
no_more_work = object()
# Create workers
for _ in range(self._worker_count):
state_handler = GrpcStateHandler(
beam_fn_api_pb2_grpc.BeamFnStateStub(self._control_channel))
state_handler.start()
      # SdkHarness manages function registration and shares self._fns with all
      # the workers. This is needed because registration (register) and
      # execution (process_bundle) are sent as separate requests, and we do not
      # know which worker will process the bundle for a given function until
      # the process_bundle request arrives. Moreover, the same function is
      # reused by different process_bundle calls and may be executed by
      # different workers. Hence we need a centralized function list shared
      # among all the workers.
self.workers.put(
SdkWorker(
state_handler=state_handler,
data_channel_factory=self._data_channel_factory,
fns=self._fns))
def get_responses():
while True:
response = self._responses.get()
if response is no_more_work:
return
yield response
for work_request in control_stub.Control(get_responses()):
logging.info('Got work %s', work_request.instruction_id)
request_type = work_request.WhichOneof('request')
      # The request method is namespaced with '_request_'; e.g. a 'register'
      # request is dispatched to self._request_register(work_request).
getattr(self, SdkHarness.REQUEST_METHOD_PREFIX + request_type)(
work_request)
logging.info('No more requests from control plane')
logging.info('SDK Harness waiting for in-flight requests to complete')
# Wait until existing requests are processed.
self._progress_thread_pool.shutdown()
self._process_thread_pool.shutdown()
# get_responses may be blocked on responses.get(), but we need to return
# control to its caller.
self._responses.put(no_more_work)
self._data_channel_factory.close()
# Stop all the workers and clean all the associated resources
for worker in self.workers.queue:
worker.state_handler.done()
logging.info('Done consuming work.')
def _execute(self, task, request):
try:
response = task()
except Exception as e: # pylint: disable=broad-except
traceback.print_exc(file=sys.stderr)
logging.error(
'Error processing instruction %s. Original traceback is\n%s\n',
request.instruction_id,
          traceback.format_exc(),
exc_info=True)
response = beam_fn_api_pb2.InstructionResponse(
instruction_id=request.instruction_id, error=str(e))
self._responses.put(response)
def _request_register(self, request):
def task():
for process_bundle_descriptor in getattr(
request, request.WhichOneof('request')).process_bundle_descriptor:
self._fns[process_bundle_descriptor.id] = process_bundle_descriptor
return beam_fn_api_pb2.InstructionResponse(
instruction_id=request.instruction_id,
register=beam_fn_api_pb2.RegisterResponse())
self._execute(task, request)
def _request_process_bundle(self, request):
def task():
# Take the free worker. Wait till a worker is free.
worker = self.workers.get()
# Get the first work item in the queue
work = self._process_bundle_queue.get()
      # add the instruction_id -> worker mapping for progress-report lookups
self._instruction_id_vs_worker[work.instruction_id] = worker
try:
self._execute(lambda: worker.do_instruction(work), work)
finally:
# Delete the instruction_id <-> worker mapping
self._instruction_id_vs_worker.pop(work.instruction_id, None)
# Put the worker back in the free worker pool
self.workers.put(worker)
# Create a task for each process_bundle request and schedule it
self._process_bundle_queue.put(request)
self._process_thread_pool.submit(task)
def _request_process_bundle_progress(self, request):
def task():
self._execute(lambda: self._instruction_id_vs_worker[getattr(
request, request.WhichOneof('request')
).instruction_reference].do_instruction(request), request)
self._progress_thread_pool.submit(task)
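# A stripped-down sketch (not part of the Beam SDK itself) of the worker-pool pattern
# used by SdkHarness._request_process_bundle above: borrow a free worker from the
# queue, run the work item, and always hand the worker back for reuse.
def _run_with_pooled_worker(worker_queue, do_work):
  worker = worker_queue.get()  # blocks until a worker is free
  try:
    return do_work(worker)
  finally:
    worker_queue.put(worker)  # return the worker to the free pool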
class SdkWorker(object):
def __init__(self, state_handler, data_channel_factory, fns):
self.fns = fns
self.state_handler = state_handler
self.data_channel_factory = data_channel_factory
self.bundle_processors = {}
def do_instruction(self, request):
request_type = request.WhichOneof('request')
if request_type:
# E.g. if register is set, this will call self.register(request.register))
return getattr(self, request_type)(getattr(request, request_type),
request.instruction_id)
else:
raise NotImplementedError
def register(self, request, instruction_id):
for process_bundle_descriptor in request.process_bundle_descriptor:
self.fns[process_bundle_descriptor.id] = process_bundle_descriptor
return beam_fn_api_pb2.InstructionResponse(
instruction_id=instruction_id,
register=beam_fn_api_pb2.RegisterResponse())
def process_bundle(self, request, instruction_id):
self.bundle_processors[
instruction_id] = processor = bundle_processor.BundleProcessor(
self.fns[request.process_bundle_descriptor_reference],
self.state_handler, self.data_channel_factory)
try:
processor.process_bundle(instruction_id)
finally:
del self.bundle_processors[instruction_id]
return beam_fn_api_pb2.InstructionResponse(
instruction_id=instruction_id,
process_bundle=beam_fn_api_pb2.ProcessBundleResponse(
metrics=processor.metrics()))
def process_bundle_progress(self, request, instruction_id):
# It is an error to get progress for a not-in-flight bundle.
processor = self.bundle_processors.get(request.instruction_reference)
return beam_fn_api_pb2.InstructionResponse(
instruction_id=instruction_id,
process_bundle_progress=beam_fn_api_pb2.ProcessBundleProgressResponse(
metrics=processor.metrics() if processor else None))
class GrpcStateHandler(object):
_DONE = object()
def __init__(self, state_stub):
self._lock = threading.Lock()
self._state_stub = state_stub
self._requests = queue.Queue()
self._responses_by_id = {}
self._last_id = 0
self._exc_info = None
def start(self):
self._done = False
def request_iter():
while True:
request = self._requests.get()
if request is self._DONE or self._done:
break
yield request
responses = self._state_stub.State(request_iter())
def pull_responses():
try:
for response in responses:
self._responses_by_id[response.id].set(response)
if self._done:
break
except: # pylint: disable=bare-except
self._exc_info = sys.exc_info()
raise
reader = threading.Thread(target=pull_responses, name='read_state')
reader.daemon = True
reader.start()
def done(self):
self._done = True
self._requests.put(self._DONE)
def blocking_get(self, state_key, instruction_reference):
response = self._blocking_request(
beam_fn_api_pb2.StateRequest(
instruction_reference=instruction_reference,
state_key=state_key,
get=beam_fn_api_pb2.StateGetRequest()))
if response.get.continuation_token:
      raise NotImplementedError
return response.get.data
def blocking_append(self, state_key, data, instruction_reference):
self._blocking_request(
beam_fn_api_pb2.StateRequest(
instruction_reference=instruction_reference,
state_key=state_key,
append=beam_fn_api_pb2.StateAppendRequest(data=data)))
def blocking_clear(self, state_key, instruction_reference):
self._blocking_request(
beam_fn_api_pb2.StateRequest(
instruction_reference=instruction_reference,
state_key=state_key,
clear=beam_fn_api_pb2.StateClearRequest()))
def _blocking_request(self, request):
request.id = self._next_id()
self._responses_by_id[request.id] = future = _Future()
self._requests.put(request)
while not future.wait(timeout=1):
if self._exc_info:
raise self._exc_info[0], self._exc_info[1], self._exc_info[2]
elif self._done:
raise RuntimeError()
del self._responses_by_id[request.id]
return future.get()
def _next_id(self):
self._last_id += 1
return str(self._last_id)
class _Future(object):
"""A simple future object to implement blocking requests.
"""
def __init__(self):
self._event = threading.Event()
def wait(self, timeout=None):
return self._event.wait(timeout)
def get(self, timeout=None):
if self.wait(timeout):
return self._value
else:
raise LookupError()
def set(self, value):
self._value = value
self._event.set()
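# Minimal illustration (assumed usage, not part of the SDK harness) of how _Future
# pairs a blocking caller with a background responder thread.
def _future_demo():
  f = _Future()
  threading.Thread(target=lambda: f.set("done")).start()
  return f.get(timeout=1)  # returns "done" once the responder thread calls set()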
|
test_cover_type_checking.py
|
"""
The import like
if TYPE_CHECKING:
import bla_bla
Only covered after reloading module with TYPE_CHECKING = True.
"""
from importlib import reload
from multiprocessing.context import Process
import lightweight.cli
import lightweight.content.content_abc
import lightweight.content.copies
import lightweight.content.jinja_page
import lightweight.content.lwmd
import lightweight.content.md_page
import lightweight.content.sass_scss
import lightweight.errors
import lightweight.files
import lightweight.generation.context
import lightweight.generation.path
import lightweight.generation.task
import lightweight.included
import lightweight.lw
import lightweight.server
import lightweight.site
import lightweight.templates
def cover_type_checking():
import typing
typing.TYPE_CHECKING = True
reload(lightweight.content.content_abc)
reload(lightweight.content.copies)
reload(lightweight.content.jinja_page)
reload(lightweight.content.lwmd)
reload(lightweight.content.md_page)
reload(lightweight.content.sass_scss)
reload(lightweight.generation.context)
reload(lightweight.generation.path)
reload(lightweight.generation.task)
reload(lightweight.cli)
reload(lightweight.errors)
reload(lightweight.files)
reload(lightweight.included)
reload(lightweight.lw)
reload(lightweight.server)
reload(lightweight.site)
    reload(lightweight.templates)
def test_cover_type_checking():
"""Doing this in separate process because classes change memory addresses breaking instance checks, etc"""
p = Process(target=cover_type_checking)
p.start()
p.join()
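# Illustrative sketch (hypothetical module, not taken from lightweight) of the pattern
# that the reloads above are meant to cover:
#
#     from typing import TYPE_CHECKING
#     if TYPE_CHECKING:
#         import expensive_module  # imported only while type checking
#
#     def helper(arg: "expensive_module.Thing") -> None: ...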
|
GenerateDatasetBBOX.py
|
import threading
import cv2 as cv
import numpy as np
import pickle
import time
import os
cap = cv.VideoCapture(0)
pastas = sorted(os.listdir("DatasetBBOX"))
pastas = pastas[::-1]
dirr = pastas.pop()
print(dirr)
count = len(os.listdir("DatasetBBOX/"+dirr))
ready = False
MODE = True
#480-250, 640-150
def draw_right_rect(img):
cv.rectangle(img,(490-1,0),(640,250+2),(0,255,0),0)
return 490,1
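# The main loop below calls view_laplacian(), which is not defined in this script;
# a minimal sketch of what it presumably does, using OpenCV's Laplacian filter:
def view_laplacian(img):
    lap = cv.Laplacian(img, cv.CV_64F)  # second-derivative edge response
    return cv.convertScaleAbs(lap)  # scale back to a displayable 8-bit image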
def timeout():
global RUNNING,ready,TIMEOUT,x,y,window
while RUNNING:
if ready:
while True:
time.sleep(1)
TIMEOUT += 1
if TIMEOUT >= 7:
break
ready = False
TIMEOUT = 0
PASS = True
x += 50
if x+window[0] >= 640:
x = 0
y += 50
if y+window[1] >= 480:
x = 0
y = 0
print("Updated coords",(x,y))
RUNNING = True
TIMEOUT = 0
laplacian_mode = False
window = (250,250)
x = y = 0
BBOXS = []
color = (0,0,255)
th = threading.Thread(target=timeout)
th.daemon = True
th.start()
while(cap.isOpened()):
ret, img = cap.read()
if ready:
crop = img.copy()
cv.putText(img,"CAPTURING time: "+str(TIMEOUT+1),(150,80), cv.FONT_HERSHEY_SIMPLEX, 1,(255,255,255),2)
st = "DatasetBBOX/"+dirr+"/img"+str(count)+".png"
print("saving in ",st)
cv.imwrite(st,crop)
count += 1
BBOXS.append((x,y,x+window[0],y+window[1]))
k = cv.waitKey(10)
if ready:
color = (0,255,0)
else:
color = (0,0,255)
cv.rectangle(img,pt1=(x,y),pt2=(x+window[0],y+window[1]),color=color,thickness=2)
if k == 27:
break
elif k == ord("1"):
with open("BBOXS/("+dirr+")bboxs.pkl","wb") as f:
pickle.dump(BBOXS,f)
count = 0
dirr = pastas.pop()
count = len(os.listdir("DatasetBBOX/"+dirr))
print("modified to ",dirr,"/ images:",count)
BBOXS.clear()
elif k == ord("e"):
MODE = not MODE
elif k == ord("l"):
laplacian_mode = not laplacian_mode
elif k == ord("a"):
ready = not ready
if laplacian_mode:
img = view_laplacian(img)
cv.imshow('Gesture', img)
cv.destroyAllWindows()
cap.release()
|
client.py
|
#!/usr/bin/env python
"""
SecureChat client: communicates with server and initializes user interface.
"""
import socket
import threading
import sys
import binascii
import argparse
from server import DEFAULT_PORT
from dhke import DH, DH_MSG_SIZE, LEN_PK
from cipher import Message
from cli import CLI
__author__ = "spec"
__license__ = "MIT"
__version__ = "0.1"
__status__ = "Development"
class Client:
def __init__(self, interface, server_address, port=DEFAULT_PORT):
"""
Initialize a new client.
:param server_address: IP address of the server
:param port: Server port to connect to
"""
self.cli = interface
self.connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.cli.add_msg("Connecting to {}...".format(server_address))
try:
self.connection.connect((server_address, port))
except KeyboardInterrupt:
self.cli.clean_exit()
sys.exit()
self.cli.add_msg("Connected!")
self.key = None
def dh(self):
"""
Perform Diffie-Hellman Key Exchange with the server.
p: prime modulus declared by the server
g: generator declared by the server
server_key: the server's public key
private_key: the client's private key
public_key: the client's public key
:return shared_key: the 256-bit key both the client and
server now share
"""
self.cli.add_msg("Establishing Encryption Key...")
dh_message = self.connection.recv(DH_MSG_SIZE)
# Unpack p, g, and server_key from the server's dh message
p, g, server_key = DH.unpack(dh_message)
# Generate a randomized private key
private_key = DH.gen_private_key()
        # Send the server a public key computed from the previously
        # generated private key together with g and p
public_key = DH.gen_public_key(g, private_key, p)
self.connection.sendall(DH.package(public_key, LEN_PK))
# Calculate shared key
shared_key = DH.get_shared_key(server_key, private_key, p)
# print("Shared Key: {}".format(shared_key))
self.cli.add_msg("Encryption Key: {}".format(binascii.hexlify(shared_key).decode("utf-8")))
return shared_key
def send(self, content):
"""
Send a message to the server.
:param content: string to encrypt and send
"""
if not self.key:
self.cli.add_msg("Error: Key Not Established")
return
msg = Message(key=self.key, plaintext=content)
self.connection.sendall(msg.pack())
def start(self):
"""
Start the client: perform key exchange and start listening
for incoming messages.
"""
try:
self.key = self.dh()
except ConnectionError:
self.cli.add_msg("Unable to Connect")
return
while True:
try:
# Wait for data from server
data = self.connection.recv(1024)
# Disconnect from server if no data received
if not data:
self.connection.close()
self.cli.uninit_client()
break
# Parse data as cipher-text message
msg = Message(key=self.key, ciphertext=data)
if not self.cli:
break
# Add message to the command-line interface
self.cli.add_msg(msg.plaintext)
# Disconnect client if unable to read from connection
except OSError:
self.connection.close()
self.cli.uninit_client()
break
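# A toy illustration (deliberately tiny, insecure numbers) of the Diffie-Hellman
# arithmetic performed in Client.dh() above; the real exchange uses DH.gen_private_key,
# DH.gen_public_key and DH.get_shared_key from dhke with a large prime modulus.
def _dh_toy_example(p=23, g=5):
    a, b = 6, 15  # private keys of client and server
    A, B = pow(g, a, p), pow(g, b, p)  # public keys exchanged in the clear
    assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same shared secret
    return pow(B, a, p)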
if __name__ == '__main__':
# Get host and port arguments from the command-line
aparser = argparse.ArgumentParser()
aparser.add_argument("host", help="IP address of the chat server")
aparser.add_argument("--port", default=DEFAULT_PORT, type=int, help="Port number the chat server is running on")
args = aparser.parse_args()
# Initialize Command-Line Interface
interface = CLI()
try:
c = Client(interface, args.host, port=args.port)
except ConnectionRefusedError:
interface.clean_exit()
print("Connection Refused")
sys.exit()
except OSError:
interface.clean_exit()
print("Connection Failed")
sys.exit()
# Add the client object to the interface
interface.init_client(c)
# Start the client
client_thread = threading.Thread(target=c.start)
client_thread.start()
# Start the main input loop
try:
interface.main()
except KeyboardInterrupt:
interface.clean_exit()
|
test_io.py
|
"""Unit tests dla the io module."""
# Tests of io are scattered over the test suite:
# * test_bufio - tests file buffering
# * test_memoryio - tests BytesIO oraz StringIO
# * test_fileio - tests FileIO
# * test_file - tests the file interface
# * test_io - tests everything inaczej w the io module
# * test_univnewlines - tests universal newline support
# * test_largefile - tests operations on a file greater than 2**32 bytes
# (only enabled przy -ulargefile)
################################################################################
# ATTENTION TEST WRITERS!!!
################################################################################
# When writing tests dla io, it's important to test both the C oraz Python
# implementations. This jest usually done by writing a base test that refers to
# the type it jest testing jako a attribute. Then it provides custom subclasses to
# test both implementations. This file has lots of examples.
################################################################################
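# Illustrative sketch (not an actual test in this file) of the base-test plus
# per-implementation-subclass pattern described above:
#
#     class ExampleTest:
#         def test_roundtrip(self):
#             buf = self.BytesIO(b"data")   # self.BytesIO is bound by the subclass
#             self.assertEqual(buf.read(), b"data")
#
#     class CExampleTest(ExampleTest, unittest.TestCase):
#         BytesIO = io.BytesIO              # C implementation
#
#     class PyExampleTest(ExampleTest, unittest.TestCase):
#         BytesIO = pyio.BytesIO            # pure-Python implementation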
zaimportuj abc
zaimportuj array
zaimportuj errno
zaimportuj locale
zaimportuj os
zaimportuj pickle
zaimportuj random
zaimportuj signal
zaimportuj sys
zaimportuj time
zaimportuj unittest
zaimportuj warnings
zaimportuj weakref
z collections zaimportuj deque, UserList
z itertools zaimportuj cycle, count
z test zaimportuj support
z test.support.script_helper zaimportuj assert_python_ok, run_python_until_end
zaimportuj codecs
zaimportuj io # C implementation of io
zaimportuj _pyio jako pyio # Python implementation of io
spróbuj:
zaimportuj threading
wyjąwszy ImportError:
threading = Nic
def _default_chunk_size():
"""Get the default TextIOWrapper chunk size"""
przy open(__file__, "r", encoding="latin-1") jako f:
zwróć f._CHUNK_SIZE
klasa MockRawIOWithoutRead:
"""A RawIO implementation without read(), so jako to exercise the default
RawIO.read() which calls readinto()."""
def __init__(self, read_stack=()):
self._read_stack = list(read_stack)
self._write_stack = []
self._reads = 0
self._extraneous_reads = 0
def write(self, b):
self._write_stack.append(bytes(b))
zwróć len(b)
def writable(self):
zwróć Prawda
def fileno(self):
zwróć 42
def readable(self):
zwróć Prawda
def seekable(self):
zwróć Prawda
def seek(self, pos, whence):
zwróć 0 # wrong but we gotta zwróć something
def tell(self):
zwróć 0 # same comment jako above
def readinto(self, buf):
self._reads += 1
max_len = len(buf)
spróbuj:
data = self._read_stack[0]
wyjąwszy IndexError:
self._extraneous_reads += 1
zwróć 0
jeżeli data jest Nic:
usuń self._read_stack[0]
zwróć Nic
n = len(data)
jeżeli len(data) <= max_len:
usuń self._read_stack[0]
buf[:n] = data
zwróć n
inaczej:
buf[:] = data[:max_len]
self._read_stack[0] = data[max_len:]
zwróć max_len
def truncate(self, pos=Nic):
zwróć pos
klasa CMockRawIOWithoutRead(MockRawIOWithoutRead, io.RawIOBase):
dalej
klasa PyMockRawIOWithoutRead(MockRawIOWithoutRead, pyio.RawIOBase):
dalej
klasa MockRawIO(MockRawIOWithoutRead):
def read(self, n=Nic):
self._reads += 1
spróbuj:
zwróć self._read_stack.pop(0)
wyjąwszy:
self._extraneous_reads += 1
zwróć b""
klasa CMockRawIO(MockRawIO, io.RawIOBase):
dalej
klasa PyMockRawIO(MockRawIO, pyio.RawIOBase):
dalej
klasa MisbehavedRawIO(MockRawIO):
def write(self, b):
zwróć super().write(b) * 2
def read(self, n=Nic):
zwróć super().read(n) * 2
def seek(self, pos, whence):
zwróć -123
def tell(self):
zwróć -456
def readinto(self, buf):
super().readinto(buf)
zwróć len(buf) * 5
klasa CMisbehavedRawIO(MisbehavedRawIO, io.RawIOBase):
dalej
klasa PyMisbehavedRawIO(MisbehavedRawIO, pyio.RawIOBase):
dalej
klasa CloseFailureIO(MockRawIO):
closed = 0
def close(self):
jeżeli nie self.closed:
self.closed = 1
podnieś OSError
klasa CCloseFailureIO(CloseFailureIO, io.RawIOBase):
dalej
klasa PyCloseFailureIO(CloseFailureIO, pyio.RawIOBase):
dalej
klasa MockFileIO:
def __init__(self, data):
self.read_history = []
super().__init__(data)
def read(self, n=Nic):
res = super().read(n)
self.read_history.append(Nic jeżeli res jest Nic inaczej len(res))
zwróć res
def readinto(self, b):
res = super().readinto(b)
self.read_history.append(res)
zwróć res
klasa CMockFileIO(MockFileIO, io.BytesIO):
dalej
klasa PyMockFileIO(MockFileIO, pyio.BytesIO):
dalej
klasa MockUnseekableIO:
def seekable(self):
zwróć Nieprawda
def seek(self, *args):
podnieś self.UnsupportedOperation("not seekable")
def tell(self, *args):
podnieś self.UnsupportedOperation("not seekable")
klasa CMockUnseekableIO(MockUnseekableIO, io.BytesIO):
UnsupportedOperation = io.UnsupportedOperation
klasa PyMockUnseekableIO(MockUnseekableIO, pyio.BytesIO):
UnsupportedOperation = pyio.UnsupportedOperation
klasa MockNonBlockWriterIO:
def __init__(self):
self._write_stack = []
self._blocker_char = Nic
def pop_written(self):
s = b"".join(self._write_stack)
self._write_stack[:] = []
zwróć s
def block_on(self, char):
"""Block when a given char jest encountered."""
self._blocker_char = char
def readable(self):
zwróć Prawda
def seekable(self):
zwróć Prawda
def writable(self):
zwróć Prawda
def write(self, b):
b = bytes(b)
n = -1
jeżeli self._blocker_char:
spróbuj:
n = b.index(self._blocker_char)
wyjąwszy ValueError:
dalej
inaczej:
jeżeli n > 0:
# write data up to the first blocker
self._write_stack.append(b[:n])
zwróć n
inaczej:
# cancel blocker oraz indicate would block
self._blocker_char = Nic
zwróć Nic
self._write_stack.append(b)
zwróć len(b)
klasa CMockNonBlockWriterIO(MockNonBlockWriterIO, io.RawIOBase):
BlockingIOError = io.BlockingIOError
klasa PyMockNonBlockWriterIO(MockNonBlockWriterIO, pyio.RawIOBase):
BlockingIOError = pyio.BlockingIOError
klasa IOTest(unittest.TestCase):
def setUp(self):
support.unlink(support.TESTFN)
def tearDown(self):
support.unlink(support.TESTFN)
def write_ops(self, f):
self.assertEqual(f.write(b"blah."), 5)
f.truncate(0)
self.assertEqual(f.tell(), 5)
f.seek(0)
self.assertEqual(f.write(b"blah."), 5)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.write(b"Hello."), 6)
self.assertEqual(f.tell(), 6)
self.assertEqual(f.seek(-1, 1), 5)
self.assertEqual(f.tell(), 5)
self.assertEqual(f.write(bytearray(b" world\n\n\n")), 9)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.write(b"h"), 1)
self.assertEqual(f.seek(-1, 2), 13)
self.assertEqual(f.tell(), 13)
self.assertEqual(f.truncate(12), 12)
self.assertEqual(f.tell(), 13)
self.assertRaises(TypeError, f.seek, 0.0)
def read_ops(self, f, buffered=Nieprawda):
data = f.read(5)
self.assertEqual(data, b"hello")
data = bytearray(data)
self.assertEqual(f.readinto(data), 5)
self.assertEqual(data, b" worl")
self.assertEqual(f.readinto(data), 2)
self.assertEqual(len(data), 5)
self.assertEqual(data[:2], b"d\n")
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.read(20), b"hello world\n")
self.assertEqual(f.read(1), b"")
self.assertEqual(f.readinto(bytearray(b"x")), 0)
self.assertEqual(f.seek(-6, 2), 6)
self.assertEqual(f.read(5), b"world")
self.assertEqual(f.read(0), b"")
self.assertEqual(f.readinto(bytearray()), 0)
self.assertEqual(f.seek(-6, 1), 5)
self.assertEqual(f.read(5), b" worl")
self.assertEqual(f.tell(), 10)
self.assertRaises(TypeError, f.seek, 0.0)
jeżeli buffered:
f.seek(0)
self.assertEqual(f.read(), b"hello world\n")
f.seek(6)
self.assertEqual(f.read(), b"world\n")
self.assertEqual(f.read(), b"")
LARGE = 2**31
def large_file_ops(self, f):
assert f.readable()
assert f.writable()
self.assertEqual(f.seek(self.LARGE), self.LARGE)
self.assertEqual(f.tell(), self.LARGE)
self.assertEqual(f.write(b"xxx"), 3)
self.assertEqual(f.tell(), self.LARGE + 3)
self.assertEqual(f.seek(-1, 1), self.LARGE + 2)
self.assertEqual(f.truncate(), self.LARGE + 2)
self.assertEqual(f.tell(), self.LARGE + 2)
self.assertEqual(f.seek(0, 2), self.LARGE + 2)
self.assertEqual(f.truncate(self.LARGE + 1), self.LARGE + 1)
self.assertEqual(f.tell(), self.LARGE + 2)
self.assertEqual(f.seek(0, 2), self.LARGE + 1)
self.assertEqual(f.seek(-1, 2), self.LARGE)
self.assertEqual(f.read(2), b"x")
def test_invalid_operations(self):
# Try writing on a file opened w read mode oraz vice-versa.
exc = self.UnsupportedOperation
dla mode w ("w", "wb"):
przy self.open(support.TESTFN, mode) jako fp:
self.assertRaises(exc, fp.read)
self.assertRaises(exc, fp.readline)
przy self.open(support.TESTFN, "wb", buffering=0) jako fp:
self.assertRaises(exc, fp.read)
self.assertRaises(exc, fp.readline)
przy self.open(support.TESTFN, "rb", buffering=0) jako fp:
self.assertRaises(exc, fp.write, b"blah")
self.assertRaises(exc, fp.writelines, [b"blah\n"])
przy self.open(support.TESTFN, "rb") jako fp:
self.assertRaises(exc, fp.write, b"blah")
self.assertRaises(exc, fp.writelines, [b"blah\n"])
przy self.open(support.TESTFN, "r") jako fp:
self.assertRaises(exc, fp.write, "blah")
self.assertRaises(exc, fp.writelines, ["blah\n"])
# Non-zero seeking z current albo end pos
self.assertRaises(exc, fp.seek, 1, self.SEEK_CUR)
self.assertRaises(exc, fp.seek, -1, self.SEEK_END)
def test_open_handles_NUL_chars(self):
fn_with_NUL = 'foo\0bar'
self.assertRaises(ValueError, self.open, fn_with_NUL, 'w')
self.assertRaises(ValueError, self.open, bytes(fn_with_NUL, 'ascii'), 'w')
def test_raw_file_io(self):
przy self.open(support.TESTFN, "wb", buffering=0) jako f:
self.assertEqual(f.readable(), Nieprawda)
self.assertEqual(f.writable(), Prawda)
self.assertEqual(f.seekable(), Prawda)
self.write_ops(f)
przy self.open(support.TESTFN, "rb", buffering=0) jako f:
self.assertEqual(f.readable(), Prawda)
self.assertEqual(f.writable(), Nieprawda)
self.assertEqual(f.seekable(), Prawda)
self.read_ops(f)
def test_buffered_file_io(self):
przy self.open(support.TESTFN, "wb") jako f:
self.assertEqual(f.readable(), Nieprawda)
self.assertEqual(f.writable(), Prawda)
self.assertEqual(f.seekable(), Prawda)
self.write_ops(f)
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.readable(), Prawda)
self.assertEqual(f.writable(), Nieprawda)
self.assertEqual(f.seekable(), Prawda)
self.read_ops(f, Prawda)
def test_readline(self):
przy self.open(support.TESTFN, "wb") jako f:
f.write(b"abc\ndef\nxyzzy\nfoo\x00bar\nanother line")
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.readline(), b"abc\n")
self.assertEqual(f.readline(10), b"def\n")
self.assertEqual(f.readline(2), b"xy")
self.assertEqual(f.readline(4), b"zzy\n")
self.assertEqual(f.readline(), b"foo\x00bar\n")
self.assertEqual(f.readline(Nic), b"another line")
self.assertRaises(TypeError, f.readline, 5.3)
przy self.open(support.TESTFN, "r") jako f:
self.assertRaises(TypeError, f.readline, 5.3)
def test_raw_bytes_io(self):
f = self.BytesIO()
self.write_ops(f)
data = f.getvalue()
self.assertEqual(data, b"hello world\n")
f = self.BytesIO(data)
self.read_ops(f, Prawda)
def test_large_file_ops(self):
        # On Windows and Mac OS X this test consumes large resources; it takes
        # a long time to build the >2GB file and takes >2GB of disk space,
        # therefore the resource must be enabled to run this test.
jeżeli sys.platform[:3] == 'win' albo sys.platform == 'darwin':
support.requires(
'largefile',
'test requires %s bytes oraz a long time to run' % self.LARGE)
przy self.open(support.TESTFN, "w+b", 0) jako f:
self.large_file_ops(f)
przy self.open(support.TESTFN, "w+b") jako f:
self.large_file_ops(f)
def test_with_open(self):
dla bufsize w (0, 1, 100):
f = Nic
przy self.open(support.TESTFN, "wb", bufsize) jako f:
f.write(b"xxx")
self.assertEqual(f.closed, Prawda)
f = Nic
spróbuj:
przy self.open(support.TESTFN, "wb", bufsize) jako f:
1/0
wyjąwszy ZeroDivisionError:
self.assertEqual(f.closed, Prawda)
inaczej:
self.fail("1/0 didn't podnieś an exception")
# issue 5008
def test_append_mode_tell(self):
przy self.open(support.TESTFN, "wb") jako f:
f.write(b"xxx")
przy self.open(support.TESTFN, "ab", buffering=0) jako f:
self.assertEqual(f.tell(), 3)
przy self.open(support.TESTFN, "ab") jako f:
self.assertEqual(f.tell(), 3)
przy self.open(support.TESTFN, "a") jako f:
self.assertGreater(f.tell(), 0)
def test_destructor(self):
record = []
klasa MyFileIO(self.FileIO):
def __del__(self):
record.append(1)
spróbuj:
f = super().__del__
wyjąwszy AttributeError:
dalej
inaczej:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
przy support.check_warnings(('', ResourceWarning)):
f = MyFileIO(support.TESTFN, "wb")
f.write(b"xxx")
usuń f
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.read(), b"xxx")
def _check_base_destructor(self, base):
record = []
klasa MyIO(base):
def __init__(self):
# This exercises the availability of attributes on object
# destruction.
# (in the C version, close() jest called by the tp_dealloc
# function, nie by __del__)
                self.on_del = 1
self.on_close = 2
self.on_flush = 3
def __del__(self):
record.append(self.on_del)
spróbuj:
f = super().__del__
wyjąwszy AttributeError:
dalej
inaczej:
f()
def close(self):
record.append(self.on_close)
super().close()
def flush(self):
record.append(self.on_flush)
super().flush()
f = MyIO()
usuń f
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_IOBase_destructor(self):
self._check_base_destructor(self.IOBase)
def test_RawIOBase_destructor(self):
self._check_base_destructor(self.RawIOBase)
def test_BufferedIOBase_destructor(self):
self._check_base_destructor(self.BufferedIOBase)
def test_TextIOBase_destructor(self):
self._check_base_destructor(self.TextIOBase)
def test_close_flushes(self):
przy self.open(support.TESTFN, "wb") jako f:
f.write(b"xxx")
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.read(), b"xxx")
def test_array_writes(self):
a = array.array('i', range(10))
n = len(a.tobytes())
przy self.open(support.TESTFN, "wb", 0) jako f:
self.assertEqual(f.write(a), n)
przy self.open(support.TESTFN, "wb") jako f:
self.assertEqual(f.write(a), n)
def test_closefd(self):
self.assertRaises(ValueError, self.open, support.TESTFN, 'w',
closefd=Nieprawda)
def test_read_closed(self):
przy self.open(support.TESTFN, "w") jako f:
f.write("egg\n")
przy self.open(support.TESTFN, "r") jako f:
file = self.open(f.fileno(), "r", closefd=Nieprawda)
self.assertEqual(file.read(), "egg\n")
file.seek(0)
file.close()
self.assertRaises(ValueError, file.read)
def test_no_closefd_with_filename(self):
# can't use closefd w combination przy a file name
self.assertRaises(ValueError, self.open, support.TESTFN, "r", closefd=Nieprawda)
def test_closefd_attr(self):
przy self.open(support.TESTFN, "wb") jako f:
f.write(b"egg\n")
przy self.open(support.TESTFN, "r") jako f:
self.assertEqual(f.buffer.raw.closefd, Prawda)
file = self.open(f.fileno(), "r", closefd=Nieprawda)
self.assertEqual(file.buffer.raw.closefd, Nieprawda)
def test_garbage_collection(self):
# FileIO objects are collected, oraz collecting them flushes
# all data to disk.
przy support.check_warnings(('', ResourceWarning)):
f = self.FileIO(support.TESTFN, "wb")
f.write(b"abcxxx")
f.f = f
wr = weakref.ref(f)
usuń f
support.gc_collect()
self.assertIsNic(wr(), wr)
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.read(), b"abcxxx")
def test_unbounded_file(self):
# Issue #1174606: reading z an unbounded stream such jako /dev/zero.
zero = "/dev/zero"
jeżeli nie os.path.exists(zero):
self.skipTest("{0} does nie exist".format(zero))
jeżeli sys.maxsize > 0x7FFFFFFF:
self.skipTest("test can only run w a 32-bit address space")
jeżeli support.real_max_memuse < support._2G:
self.skipTest("test requires at least 2GB of memory")
przy self.open(zero, "rb", buffering=0) jako f:
self.assertRaises(OverflowError, f.read)
przy self.open(zero, "rb") jako f:
self.assertRaises(OverflowError, f.read)
przy self.open(zero, "r") jako f:
self.assertRaises(OverflowError, f.read)
def check_flush_error_on_close(self, *args, **kwargs):
# Test that the file jest closed despite failed flush
# oraz that flush() jest called before file closed.
f = self.open(*args, **kwargs)
closed = []
def bad_flush():
closed[:] = [f.closed]
podnieś OSError()
f.flush = bad_flush
self.assertRaises(OSError, f.close) # exception nie swallowed
self.assertPrawda(f.closed)
self.assertPrawda(closed) # flush() called
self.assertNieprawda(closed[0]) # flush() called before file closed
f.flush = lambda: Nic # przerwij reference loop
def test_flush_error_on_close(self):
# raw file
# Issue #5700: io.FileIO calls flush() after file closed
self.check_flush_error_on_close(support.TESTFN, 'wb', buffering=0)
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', buffering=0)
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', buffering=0, closefd=Nieprawda)
os.close(fd)
# buffered io
self.check_flush_error_on_close(support.TESTFN, 'wb')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'wb', closefd=Nieprawda)
os.close(fd)
# text io
self.check_flush_error_on_close(support.TESTFN, 'w')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'w')
fd = os.open(support.TESTFN, os.O_WRONLY|os.O_CREAT)
self.check_flush_error_on_close(fd, 'w', closefd=Nieprawda)
os.close(fd)
def test_multi_close(self):
f = self.open(support.TESTFN, "wb", buffering=0)
f.close()
f.close()
f.close()
self.assertRaises(ValueError, f.flush)
def test_RawIOBase_read(self):
# Exercise the default RawIOBase.read() implementation (which calls
# readinto() internally).
rawio = self.MockRawIOWithoutRead((b"abc", b"d", Nic, b"efg", Nic))
self.assertEqual(rawio.read(2), b"ab")
self.assertEqual(rawio.read(2), b"c")
self.assertEqual(rawio.read(2), b"d")
self.assertEqual(rawio.read(2), Nic)
self.assertEqual(rawio.read(2), b"ef")
self.assertEqual(rawio.read(2), b"g")
self.assertEqual(rawio.read(2), Nic)
self.assertEqual(rawio.read(2), b"")
def test_types_have_dict(self):
test = (
self.IOBase(),
self.RawIOBase(),
self.TextIOBase(),
self.StringIO(),
self.BytesIO()
)
dla obj w test:
self.assertPrawda(hasattr(obj, "__dict__"))
def test_opener(self):
przy self.open(support.TESTFN, "w") jako f:
f.write("egg\n")
fd = os.open(support.TESTFN, os.O_RDONLY)
def opener(path, flags):
zwróć fd
przy self.open("non-existent", "r", opener=opener) jako f:
self.assertEqual(f.read(), "egg\n")
def test_fileio_closefd(self):
# Issue #4841
przy self.open(__file__, 'rb') jako f1, \
self.open(__file__, 'rb') jako f2:
fileio = self.FileIO(f1.fileno(), closefd=Nieprawda)
# .__init__() must nie close f1
fileio.__init__(f2.fileno(), closefd=Nieprawda)
f1.readline()
# .close() must nie close f2
fileio.close()
f2.readline()
def test_nonbuffered_textio(self):
przy warnings.catch_warnings(record=Prawda) jako recorded:
przy self.assertRaises(ValueError):
self.open(support.TESTFN, 'w', buffering=0)
support.gc_collect()
self.assertEqual(recorded, [])
def test_invalid_newline(self):
przy warnings.catch_warnings(record=Prawda) jako recorded:
przy self.assertRaises(ValueError):
self.open(support.TESTFN, 'w', newline='invalid')
support.gc_collect()
self.assertEqual(recorded, [])
klasa CIOTest(IOTest):
def test_IOBase_finalize(self):
# Issue #12149: segmentation fault on _PyIOBase_finalize when both a
# klasa which inherits IOBase oraz an object of this klasa are caught
# w a reference cycle oraz close() jest already w the method cache.
klasa MyIO(self.IOBase):
def close(self):
dalej
# create an instance to populate the method cache
MyIO()
obj = MyIO()
obj.obj = obj
wr = weakref.ref(obj)
usuń MyIO
usuń obj
support.gc_collect()
self.assertIsNic(wr(), wr)
klasa PyIOTest(IOTest):
dalej
@support.cpython_only
klasa APIMismatchTest(unittest.TestCase):
def test_RawIOBase_io_in_pyio_match(self):
"""Test that pyio RawIOBase klasa has all c RawIOBase methods"""
mismatch = support.detect_api_mismatch(pyio.RawIOBase, io.RawIOBase,
ignore=('__weakref__',))
        self.assertEqual(mismatch, set(), msg='Python RawIOBase does not have all C RawIOBase methods')
def test_RawIOBase_pyio_in_io_match(self):
"""Test that c RawIOBase klasa has all pyio RawIOBase methods"""
mismatch = support.detect_api_mismatch(io.RawIOBase, pyio.RawIOBase)
        self.assertEqual(mismatch, set(), msg='C RawIOBase does not have all Python RawIOBase methods')
klasa CommonBufferedTests:
# Tests common to BufferedReader, BufferedWriter oraz BufferedRandom
def test_detach(self):
raw = self.MockRawIO()
buf = self.tp(raw)
self.assertIs(buf.detach(), raw)
self.assertRaises(ValueError, buf.detach)
repr(buf) # Should still work
def test_fileno(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertEqual(42, bufio.fileno())
@unittest.skip('test having existential crisis')
def test_no_fileno(self):
# XXX will we always have fileno() function? If so, kill
# this test. Else, write it.
dalej
def test_invalid_args(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
# Invalid whence
self.assertRaises(ValueError, bufio.seek, 0, -1)
self.assertRaises(ValueError, bufio.seek, 0, 9)
def test_override_destructor(self):
tp = self.tp
record = []
klasa MyBufferedIO(tp):
def __del__(self):
record.append(1)
spróbuj:
f = super().__del__
wyjąwszy AttributeError:
dalej
inaczej:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
rawio = self.MockRawIO()
bufio = MyBufferedIO(rawio)
writable = bufio.writable()
usuń bufio
support.gc_collect()
jeżeli writable:
self.assertEqual(record, [1, 2, 3])
inaczej:
self.assertEqual(record, [1, 2])
def test_context_manager(self):
# Test usability jako a context manager
rawio = self.MockRawIO()
bufio = self.tp(rawio)
def _with():
przy bufio:
dalej
_with()
# bufio should now be closed, oraz using it a second time should podnieś
# a ValueError.
self.assertRaises(ValueError, _with)
def test_error_through_destructor(self):
# Test that the exception state jest nie modified by a destructor,
# even jeżeli close() fails.
rawio = self.CloseFailureIO()
def f():
self.tp(rawio).xyzzy
przy support.captured_output("stderr") jako s:
self.assertRaises(AttributeError, f)
s = s.getvalue().strip()
jeżeli s:
# The destructor *may* have printed an unraisable error, check it
self.assertEqual(len(s.splitlines()), 1)
self.assertPrawda(s.startswith("Exception OSError: "), s)
self.assertPrawda(s.endswith(" ignored"), s)
def test_repr(self):
raw = self.MockRawIO()
b = self.tp(raw)
clsname = "%s.%s" % (self.tp.__module__, self.tp.__qualname__)
self.assertEqual(repr(b), "<%s>" % clsname)
raw.name = "dummy"
self.assertEqual(repr(b), "<%s name='dummy'>" % clsname)
raw.name = b"dummy"
self.assertEqual(repr(b), "<%s name=b'dummy'>" % clsname)
def test_flush_error_on_close(self):
# Test that buffered file jest closed despite failed flush
# oraz that flush() jest called before file closed.
raw = self.MockRawIO()
closed = []
def bad_flush():
closed[:] = [b.closed, raw.closed]
podnieś OSError()
raw.flush = bad_flush
b = self.tp(raw)
self.assertRaises(OSError, b.close) # exception nie swallowed
self.assertPrawda(b.closed)
self.assertPrawda(raw.closed)
self.assertPrawda(closed) # flush() called
self.assertNieprawda(closed[0]) # flush() called before file closed
self.assertNieprawda(closed[1])
raw.flush = lambda: Nic # przerwij reference loop
def test_close_error_on_close(self):
raw = self.MockRawIO()
def bad_flush():
podnieś OSError('flush')
def bad_close():
podnieś OSError('close')
raw.close = bad_close
b = self.tp(raw)
b.flush = bad_flush
przy self.assertRaises(OSError) jako err: # exception nie swallowed
b.close()
self.assertEqual(err.exception.args, ('close',))
self.assertIsInstance(err.exception.__context__, OSError)
self.assertEqual(err.exception.__context__.args, ('flush',))
self.assertNieprawda(b.closed)
def test_nonnormalized_close_error_on_close(self):
# Issue #21677
raw = self.MockRawIO()
def bad_flush():
podnieś non_existing_flush
def bad_close():
podnieś non_existing_close
raw.close = bad_close
b = self.tp(raw)
b.flush = bad_flush
przy self.assertRaises(NameError) jako err: # exception nie swallowed
b.close()
self.assertIn('non_existing_close', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('non_existing_flush', str(err.exception.__context__))
self.assertNieprawda(b.closed)
def test_multi_close(self):
raw = self.MockRawIO()
b = self.tp(raw)
b.close()
b.close()
b.close()
self.assertRaises(ValueError, b.flush)
def test_unseekable(self):
bufio = self.tp(self.MockUnseekableIO(b"A" * 10))
self.assertRaises(self.UnsupportedOperation, bufio.tell)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
def test_readonly_attributes(self):
raw = self.MockRawIO()
buf = self.tp(raw)
x = self.MockRawIO()
przy self.assertRaises(AttributeError):
buf.raw = x
klasa SizeofTest:
@support.cpython_only
def test_sizeof(self):
bufsize1 = 4096
bufsize2 = 8192
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize1)
size = sys.getsizeof(bufio) - bufsize1
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize2)
self.assertEqual(sys.getsizeof(bufio), size + bufsize2)
@support.cpython_only
def test_buffer_freeing(self) :
bufsize = 4096
rawio = self.MockRawIO()
bufio = self.tp(rawio, buffer_size=bufsize)
size = sys.getsizeof(bufio) - bufsize
bufio.close()
self.assertEqual(sys.getsizeof(bufio), size)
klasa BufferedReaderTest(unittest.TestCase, CommonBufferedTests):
read_mode = "rb"
def test_constructor(self):
rawio = self.MockRawIO([b"abc"])
bufio = self.tp(rawio)
bufio.__init__(rawio)
bufio.__init__(rawio, buffer_size=1024)
bufio.__init__(rawio, buffer_size=16)
self.assertEqual(b"abc", bufio.read())
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
rawio = self.MockRawIO([b"abc"])
bufio.__init__(rawio)
self.assertEqual(b"abc", bufio.read())
def test_uninitialized(self):
bufio = self.tp.__new__(self.tp)
usuń bufio
bufio = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
bufio.read, 0)
bufio.__init__(self.MockRawIO())
self.assertEqual(bufio.read(0), b'')
def test_read(self):
dla arg w (Nic, 7):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read(arg))
# Invalid args
self.assertRaises(ValueError, bufio.read, -2)
def test_read1(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"a", bufio.read(1))
self.assertEqual(b"b", bufio.read1(1))
self.assertEqual(rawio._reads, 1)
self.assertEqual(b"c", bufio.read1(100))
self.assertEqual(rawio._reads, 1)
self.assertEqual(b"d", bufio.read1(100))
self.assertEqual(rawio._reads, 2)
self.assertEqual(b"efg", bufio.read1(100))
self.assertEqual(rawio._reads, 3)
self.assertEqual(b"", bufio.read1(100))
self.assertEqual(rawio._reads, 4)
# Invalid args
self.assertRaises(ValueError, bufio.read1, -1)
def test_readinto(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
b = bytearray(2)
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"cd")
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ef")
self.assertEqual(bufio.readinto(b), 1)
self.assertEqual(b, b"gf")
self.assertEqual(bufio.readinto(b), 0)
self.assertEqual(b, b"gf")
rawio = self.MockRawIO((b"abc", Nic))
bufio = self.tp(rawio)
self.assertEqual(bufio.readinto(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(bufio.readinto(b), 1)
self.assertEqual(b, b"cb")
def test_readinto1(self):
buffer_size = 10
rawio = self.MockRawIO((b"abc", b"de", b"fgh", b"jkl"))
bufio = self.tp(rawio, buffer_size=buffer_size)
b = bytearray(2)
self.assertEqual(bufio.peek(3), b'abc')
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 2)
self.assertEqual(b, b"ab")
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 1)
self.assertEqual(b[:1], b"c")
self.assertEqual(rawio._reads, 1)
self.assertEqual(bufio.readinto1(b), 2)
self.assertEqual(b, b"de")
self.assertEqual(rawio._reads, 2)
b = bytearray(2*buffer_size)
self.assertEqual(bufio.peek(3), b'fgh')
self.assertEqual(rawio._reads, 3)
self.assertEqual(bufio.readinto1(b), 6)
self.assertEqual(b[:6], b"fghjkl")
self.assertEqual(rawio._reads, 4)
def test_readinto_array(self):
buffer_size = 60
data = b"a" * 26
rawio = self.MockRawIO((data,))
bufio = self.tp(rawio, buffer_size=buffer_size)
# Create an array przy element size > 1 byte
b = array.array('i', b'x' * 32)
assert len(b) != 16
# Read into it. We should get jako many *bytes* jako we can fit into b
# (which jest more than the number of elements)
n = bufio.readinto(b)
self.assertGreater(n, len(b))
# Check that old contents of b are preserved
bm = memoryview(b).cast('B')
self.assertLess(n, len(bm))
self.assertEqual(bm[:n], data[:n])
self.assertEqual(bm[n:], b'x' * (len(bm[n:])))
def test_readinto1_array(self):
buffer_size = 60
data = b"a" * 26
rawio = self.MockRawIO((data,))
bufio = self.tp(rawio, buffer_size=buffer_size)
# Create an array przy element size > 1 byte
b = array.array('i', b'x' * 32)
assert len(b) != 16
# Read into it. We should get jako many *bytes* jako we can fit into b
# (which jest more than the number of elements)
n = bufio.readinto1(b)
self.assertGreater(n, len(b))
# Check that old contents of b are preserved
bm = memoryview(b).cast('B')
self.assertLess(n, len(bm))
self.assertEqual(bm[:n], data[:n])
self.assertEqual(bm[n:], b'x' * (len(bm[n:])))
def test_readlines(self):
def bufio():
rawio = self.MockRawIO((b"abc\n", b"d\n", b"ef"))
zwróć self.tp(rawio)
self.assertEqual(bufio().readlines(), [b"abc\n", b"d\n", b"ef"])
self.assertEqual(bufio().readlines(5), [b"abc\n", b"d\n"])
self.assertEqual(bufio().readlines(Nic), [b"abc\n", b"d\n", b"ef"])
def test_buffering(self):
data = b"abcdefghi"
dlen = len(data)
tests = [
[ 100, [ 3, 1, 4, 8 ], [ dlen, 0 ] ],
[ 100, [ 3, 3, 3], [ dlen ] ],
[ 4, [ 1, 2, 4, 2 ], [ 4, 4, 1 ] ],
]
dla bufsize, buf_read_sizes, raw_read_sizes w tests:
rawio = self.MockFileIO(data)
bufio = self.tp(rawio, buffer_size=bufsize)
pos = 0
dla nbytes w buf_read_sizes:
self.assertEqual(bufio.read(nbytes), data[pos:pos+nbytes])
pos += nbytes
# this jest mildly implementation-dependent
self.assertEqual(rawio.read_history, raw_read_sizes)
def test_read_non_blocking(self):
# Inject some Nic's w there to simulate EWOULDBLOCK
rawio = self.MockRawIO((b"abc", b"d", Nic, b"efg", Nic, Nic, Nic))
bufio = self.tp(rawio)
self.assertEqual(b"abcd", bufio.read(6))
self.assertEqual(b"e", bufio.read(1))
self.assertEqual(b"fg", bufio.read())
self.assertEqual(b"", bufio.peek(1))
self.assertIsNic(bufio.read())
self.assertEqual(b"", bufio.read())
rawio = self.MockRawIO((b"a", Nic, Nic))
self.assertEqual(b"a", rawio.readall())
self.assertIsNic(rawio.readall())
def test_read_past_eof(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read(9000))
def test_read_all(self):
rawio = self.MockRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertEqual(b"abcdefg", bufio.read())
@unittest.skipUnless(threading, 'Threading required dla this test.')
@support.requires_resource('cpu')
def test_threads(self):
spróbuj:
# Write out many bytes przy exactly the same number of 0's,
# 1's... 255's. This will help us check that concurrent reading
# doesn't duplicate albo forget contents.
N = 1000
l = list(range(256)) * N
random.shuffle(l)
s = bytes(bytearray(l))
przy self.open(support.TESTFN, "wb") jako f:
f.write(s)
przy self.open(support.TESTFN, self.read_mode, buffering=0) jako raw:
bufio = self.tp(raw, 8)
errors = []
results = []
def f():
spróbuj:
# Intra-buffer read then buffer-flushing read
dla n w cycle([1, 19]):
s = bufio.read(n)
jeżeli nie s:
przerwij
# list.append() jest atomic
results.append(s)
wyjąwszy Exception jako e:
errors.append(e)
podnieś
threads = [threading.Thread(target=f) dla x w range(20)]
przy support.start_threads(threads):
time.sleep(0.02) # uzyskaj
self.assertNieprawda(errors,
"the following exceptions were caught: %r" % errors)
s = b''.join(results)
dla i w range(256):
c = bytes(bytearray([i]))
self.assertEqual(s.count(c), N)
w_końcu:
support.unlink(support.TESTFN)
def test_unseekable(self):
bufio = self.tp(self.MockUnseekableIO(b"A" * 10))
self.assertRaises(self.UnsupportedOperation, bufio.tell)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
bufio.read(1)
self.assertRaises(self.UnsupportedOperation, bufio.seek, 0)
self.assertRaises(self.UnsupportedOperation, bufio.tell)
def test_misbehaved_io(self):
rawio = self.MisbehavedRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
self.assertRaises(OSError, bufio.seek, 0)
self.assertRaises(OSError, bufio.tell)
def test_no_extraneous_read(self):
# Issue #9550; when the raw IO object has satisfied the read request,
# we should nie issue any additional reads, otherwise it may block
# (e.g. socket).
bufsize = 16
dla n w (2, bufsize - 1, bufsize, bufsize + 1, bufsize * 2):
rawio = self.MockRawIO([b"x" * n])
bufio = self.tp(rawio, bufsize)
self.assertEqual(bufio.read(n), b"x" * n)
# Simple case: one raw read jest enough to satisfy the request.
self.assertEqual(rawio._extraneous_reads, 0,
"failed dla {}: {} != 0".format(n, rawio._extraneous_reads))
# A more complex case where two raw reads are needed to satisfy
# the request.
rawio = self.MockRawIO([b"x" * (n - 1), b"x"])
bufio = self.tp(rawio, bufsize)
self.assertEqual(bufio.read(n), b"x" * n)
self.assertEqual(rawio._extraneous_reads, 0,
"failed dla {}: {} != 0".format(n, rawio._extraneous_reads))
def test_read_on_closed(self):
# Issue #23796
b = io.BufferedReader(io.BytesIO(b"12"))
b.read(1)
b.close()
self.assertRaises(ValueError, b.peek)
self.assertRaises(ValueError, b.read1, 1)
klasa CBufferedReaderTest(BufferedReaderTest, SizeofTest):
tp = io.BufferedReader
def test_constructor(self):
BufferedReaderTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. przy more
# than 2GB RAM oraz a 64-bit kernel.
jeżeli sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_initialization(self):
rawio = self.MockRawIO([b"abc"])
bufio = self.tp(rawio)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.read)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.read)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
self.assertRaises(ValueError, bufio.read)
def test_misbehaved_io_read(self):
rawio = self.MisbehavedRawIO((b"abc", b"d", b"efg"))
bufio = self.tp(rawio)
# _pyio.BufferedReader seems to implement reading different, so that
# checking this jest nie so easy.
self.assertRaises(OSError, bufio.read, 10)
def test_garbage_collection(self):
# C BufferedReader objects are collected.
# The Python version has __del__, so it ends into gc.garbage instead
przy support.check_warnings(('', ResourceWarning)):
rawio = self.FileIO(support.TESTFN, "w+b")
f = self.tp(rawio)
f.f = f
wr = weakref.ref(f)
usuń f
support.gc_collect()
self.assertIsNic(wr(), wr)
def test_args_error(self):
# Issue #17275
przy self.assertRaisesRegex(TypeError, "BufferedReader"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
klasa PyBufferedReaderTest(BufferedReaderTest):
tp = pyio.BufferedReader
klasa BufferedWriterTest(unittest.TestCase, CommonBufferedTests):
write_mode = "wb"
def test_constructor(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
bufio.__init__(rawio)
bufio.__init__(rawio, buffer_size=1024)
bufio.__init__(rawio, buffer_size=16)
self.assertEqual(3, bufio.write(b"abc"))
bufio.flush()
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
bufio.__init__(rawio)
self.assertEqual(3, bufio.write(b"ghi"))
bufio.flush()
self.assertEqual(b"".join(rawio._write_stack), b"abcghi")
def test_uninitialized(self):
bufio = self.tp.__new__(self.tp)
usuń bufio
bufio = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
bufio.write, b'')
bufio.__init__(self.MockRawIO())
self.assertEqual(bufio.write(b''), 0)
def test_detach_flush(self):
raw = self.MockRawIO()
buf = self.tp(raw)
buf.write(b"howdy!")
self.assertNieprawda(raw._write_stack)
buf.detach()
self.assertEqual(raw._write_stack, [b"howdy!"])
def test_write(self):
# Write to the buffered IO but don't overflow the buffer.
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
self.assertNieprawda(writer._write_stack)
def test_write_overflow(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
contents = b"abcdefghijklmnop"
dla n w range(0, len(contents), 3):
bufio.write(contents[n:n+3])
flushed = b"".join(writer._write_stack)
# At least (total - 8) bytes were implicitly flushed, perhaps more
# depending on the implementation.
self.assertPrawda(flushed.startswith(contents[:-8]), flushed)
def check_writes(self, intermediate_func):
# Lots of writes, test the flushed output is as expected.
contents = bytes(range(256)) * 1000
n = 0
writer = self.MockRawIO()
bufio = self.tp(writer, 13)
# Generator of write sizes: repeat each N 15 times then proceed to N+1
def gen_sizes():
dla size w count(1):
dla i w range(15):
uzyskaj size
sizes = gen_sizes()
dopóki n < len(contents):
size = min(next(sizes), len(contents) - n)
self.assertEqual(bufio.write(contents[n:n+size]), size)
intermediate_func(bufio)
n += size
bufio.flush()
self.assertEqual(contents, b"".join(writer._write_stack))
def test_writes(self):
self.check_writes(lambda bufio: Nic)
def test_writes_and_flushes(self):
self.check_writes(lambda bufio: bufio.flush())
def test_writes_and_seeks(self):
def _seekabs(bufio):
pos = bufio.tell()
bufio.seek(pos + 1, 0)
bufio.seek(pos - 1, 0)
bufio.seek(pos, 0)
self.check_writes(_seekabs)
def _seekrel(bufio):
pos = bufio.seek(0, 1)
bufio.seek(+1, 1)
bufio.seek(-1, 1)
bufio.seek(pos, 0)
self.check_writes(_seekrel)
def test_writes_and_truncates(self):
self.check_writes(lambda bufio: bufio.truncate(bufio.tell()))
def test_write_non_blocking(self):
raw = self.MockNonBlockWriterIO()
bufio = self.tp(raw, 8)
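# MockNonBlockWriterIO simulates a non-blocking raw stream: block_on(c) makes
# it report a partial write (or none at all) once byte c is encountered, which
# is what exercises the BlockingIOError paths below.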
self.assertEqual(bufio.write(b"abcd"), 4)
self.assertEqual(bufio.write(b"efghi"), 5)
# 1 byte will be written, the rest will be buffered
raw.block_on(b"k")
self.assertEqual(bufio.write(b"jklmn"), 5)
# 8 bytes will be written, 8 will be buffered and the rest will be lost
raw.block_on(b"0")
spróbuj:
bufio.write(b"opqrwxyz0123456789")
wyjąwszy self.BlockingIOError jako e:
written = e.characters_written
inaczej:
self.fail("BlockingIOError should have been podnieśd")
self.assertEqual(written, 16)
self.assertEqual(raw.pop_written(),
b"abcdefghijklmnopqrwxyz")
self.assertEqual(bufio.write(b"ABCDEFGHI"), 9)
s = raw.pop_written()
# Previously buffered bytes were flushed
self.assertPrawda(s.startswith(b"01234567A"), s)
def test_write_and_rewind(self):
raw = io.BytesIO()
bufio = self.tp(raw, 4)
self.assertEqual(bufio.write(b"abcdef"), 6)
self.assertEqual(bufio.tell(), 6)
bufio.seek(0, 0)
self.assertEqual(bufio.write(b"XY"), 2)
bufio.seek(6, 0)
self.assertEqual(raw.getvalue(), b"XYcdef")
self.assertEqual(bufio.write(b"123456"), 6)
bufio.flush()
self.assertEqual(raw.getvalue(), b"XYcdef123456")
def test_flush(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
bufio.flush()
self.assertEqual(b"abc", writer._write_stack[0])
def test_writelines(self):
l = [b'ab', b'cd', b'ef']
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.writelines(l)
bufio.flush()
self.assertEqual(b''.join(writer._write_stack), b'abcdef')
def test_writelines_userlist(self):
l = UserList([b'ab', b'cd', b'ef'])
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.writelines(l)
bufio.flush()
self.assertEqual(b''.join(writer._write_stack), b'abcdef')
def test_writelines_error(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
self.assertRaises(TypeError, bufio.writelines, [1, 2, 3])
self.assertRaises(TypeError, bufio.writelines, Nic)
self.assertRaises(TypeError, bufio.writelines, 'abc')
def test_destructor(self):
writer = self.MockRawIO()
bufio = self.tp(writer, 8)
bufio.write(b"abc")
usuń bufio
support.gc_collect()
self.assertEqual(b"abc", writer._write_stack[0])
def test_truncate(self):
# Truncate implicitly flushes the buffer.
przy self.open(support.TESTFN, self.write_mode, buffering=0) jako raw:
bufio = self.tp(raw, 8)
bufio.write(b"abcdef")
self.assertEqual(bufio.truncate(3), 3)
self.assertEqual(bufio.tell(), 6)
przy self.open(support.TESTFN, "rb", buffering=0) jako f:
self.assertEqual(f.read(), b"abc")
@unittest.skipUnless(threading, 'Threading required for this test.')
@support.requires_resource('cpu')
def test_threads(self):
spróbuj:
# Write out many bytes from many threads and test they were
# all flushed.
N = 1000
contents = bytes(range(256)) * N
sizes = cycle([1, 19])
n = 0
queue = deque()
dopóki n < len(contents):
size = next(sizes)
queue.append(contents[n:n+size])
n += size
usuń contents
# We use a real file object because it allows us to
# exercise situations where the GIL is released before
# writing the buffer to the raw streams. This is in addition
# to concurrency issues due to switching threads in the middle
# of Python code.
przy self.open(support.TESTFN, self.write_mode, buffering=0) jako raw:
bufio = self.tp(raw, 8)
errors = []
def f():
spróbuj:
dopóki Prawda:
spróbuj:
s = queue.popleft()
wyjąwszy IndexError:
zwróć
bufio.write(s)
wyjąwszy Exception jako e:
errors.append(e)
podnieś
threads = [threading.Thread(target=f) dla x w range(20)]
przy support.start_threads(threads):
time.sleep(0.02) # yield
self.assertNieprawda(errors,
"the following exceptions were caught: %r" % errors)
bufio.close()
przy self.open(support.TESTFN, "rb") jako f:
s = f.read()
dla i w range(256):
self.assertEqual(s.count(bytes([i])), N)
w_końcu:
support.unlink(support.TESTFN)
def test_misbehaved_io(self):
rawio = self.MisbehavedRawIO()
bufio = self.tp(rawio, 5)
self.assertRaises(OSError, bufio.seek, 0)
self.assertRaises(OSError, bufio.tell)
self.assertRaises(OSError, bufio.write, b"abcdef")
def test_max_buffer_size_removal(self):
przy self.assertRaises(TypeError):
self.tp(self.MockRawIO(), 8, 12)
def test_write_error_on_close(self):
raw = self.MockRawIO()
def bad_write(b):
podnieś OSError()
raw.write = bad_write
b = self.tp(raw)
b.write(b'spam')
self.assertRaises(OSError, b.close) # exception not swallowed
self.assertPrawda(b.closed)
klasa CBufferedWriterTest(BufferedWriterTest, SizeofTest):
tp = io.BufferedWriter
def test_constructor(self):
BufferedWriterTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
# than 2GB RAM and a 64-bit kernel.
jeżeli sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_initialization(self):
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=0)
self.assertRaises(ValueError, bufio.write, b"def")
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-16)
self.assertRaises(ValueError, bufio.write, b"def")
self.assertRaises(ValueError, bufio.__init__, rawio, buffer_size=-1)
self.assertRaises(ValueError, bufio.write, b"def")
def test_garbage_collection(self):
# C BufferedWriter objects are collected, and collecting them flushes
# all data to disk.
# The Python version has __del__, so it ends up in gc.garbage instead
przy support.check_warnings(('', ResourceWarning)):
rawio = self.FileIO(support.TESTFN, "w+b")
f = self.tp(rawio)
f.write(b"123xxx")
f.x = f
wr = weakref.ref(f)
usuń f
support.gc_collect()
self.assertIsNic(wr(), wr)
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.read(), b"123xxx")
def test_args_error(self):
# Issue #17275
przy self.assertRaisesRegex(TypeError, "BufferedWriter"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
klasa PyBufferedWriterTest(BufferedWriterTest):
tp = pyio.BufferedWriter
klasa BufferedRWPairTest(unittest.TestCase):
def test_constructor(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertNieprawda(pair.closed)
def test_uninitialized(self):
pair = self.tp.__new__(self.tp)
usuń pair
pair = self.tp.__new__(self.tp)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
pair.read, 0)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
pair.write, b'')
pair.__init__(self.MockRawIO(), self.MockRawIO())
self.assertEqual(pair.read(0), b'')
self.assertEqual(pair.write(b''), 0)
def test_detach(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertRaises(self.UnsupportedOperation, pair.detach)
def test_constructor_max_buffer_size_removal(self):
przy self.assertRaises(TypeError):
self.tp(self.MockRawIO(), self.MockRawIO(), 8, 12)
def test_constructor_with_not_readable(self):
klasa NotReadable(MockRawIO):
def readable(self):
zwróć Nieprawda
self.assertRaises(OSError, self.tp, NotReadable(), self.MockRawIO())
def test_constructor_with_not_writeable(self):
klasa NotWriteable(MockRawIO):
def writable(self):
zwróć Nieprawda
self.assertRaises(OSError, self.tp, self.MockRawIO(), NotWriteable())
def test_read(self):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertEqual(pair.read(3), b"abc")
self.assertEqual(pair.read(1), b"d")
self.assertEqual(pair.read(), b"ef")
pair = self.tp(self.BytesIO(b"abc"), self.MockRawIO())
self.assertEqual(pair.read(Nic), b"abc")
def test_readlines(self):
pair = lambda: self.tp(self.BytesIO(b"abc\ndef\nh"), self.MockRawIO())
self.assertEqual(pair().readlines(), [b"abc\n", b"def\n", b"h"])
self.assertEqual(pair().readlines(), [b"abc\n", b"def\n", b"h"])
self.assertEqual(pair().readlines(5), [b"abc\n", b"def\n"])
def test_read1(self):
# .read1() is delegated to the underlying reader object, so this test
# can be shallow.
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertEqual(pair.read1(3), b"abc")
def test_readinto(self):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
data = bytearray(5)
self.assertEqual(pair.readinto(data), 5)
self.assertEqual(data, b"abcde")
def test_write(self):
w = self.MockRawIO()
pair = self.tp(self.MockRawIO(), w)
pair.write(b"abc")
pair.flush()
pair.write(b"def")
pair.flush()
self.assertEqual(w._write_stack, [b"abc", b"def"])
def test_peek(self):
pair = self.tp(self.BytesIO(b"abcdef"), self.MockRawIO())
self.assertPrawda(pair.peek(3).startswith(b"abc"))
self.assertEqual(pair.read(3), b"abc")
def test_readable(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertPrawda(pair.readable())
def test_writeable(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertPrawda(pair.writable())
def test_seekable(self):
# BufferedRWPairs are never seekable, even if their readers and writers
# are.
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertNieprawda(pair.seekable())
# .flush() is delegated to the underlying writer object and has been
# tested in the test_write method.
def test_close_and_closed(self):
pair = self.tp(self.MockRawIO(), self.MockRawIO())
self.assertNieprawda(pair.closed)
pair.close()
self.assertPrawda(pair.closed)
def test_reader_close_error_on_close(self):
def reader_close():
reader_non_existing
reader = self.MockRawIO()
reader.close = reader_close
writer = self.MockRawIO()
pair = self.tp(reader, writer)
przy self.assertRaises(NameError) jako err:
pair.close()
self.assertIn('reader_non_existing', str(err.exception))
self.assertPrawda(pair.closed)
self.assertNieprawda(reader.closed)
self.assertPrawda(writer.closed)
def test_writer_close_error_on_close(self):
def writer_close():
writer_non_existing
reader = self.MockRawIO()
writer = self.MockRawIO()
writer.close = writer_close
pair = self.tp(reader, writer)
przy self.assertRaises(NameError) jako err:
pair.close()
self.assertIn('writer_non_existing', str(err.exception))
self.assertNieprawda(pair.closed)
self.assertPrawda(reader.closed)
self.assertNieprawda(writer.closed)
def test_reader_writer_close_error_on_close(self):
def reader_close():
reader_non_existing
def writer_close():
writer_non_existing
reader = self.MockRawIO()
reader.close = reader_close
writer = self.MockRawIO()
writer.close = writer_close
pair = self.tp(reader, writer)
przy self.assertRaises(NameError) jako err:
pair.close()
self.assertIn('reader_non_existing', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('writer_non_existing', str(err.exception.__context__))
self.assertNieprawda(pair.closed)
self.assertNieprawda(reader.closed)
self.assertNieprawda(writer.closed)
def test_isatty(self):
klasa SelectableIsAtty(MockRawIO):
def __init__(self, isatty):
MockRawIO.__init__(self)
self._isatty = isatty
def isatty(self):
zwróć self._isatty
pair = self.tp(SelectableIsAtty(Nieprawda), SelectableIsAtty(Nieprawda))
self.assertNieprawda(pair.isatty())
pair = self.tp(SelectableIsAtty(Prawda), SelectableIsAtty(Nieprawda))
self.assertPrawda(pair.isatty())
pair = self.tp(SelectableIsAtty(Nieprawda), SelectableIsAtty(Prawda))
self.assertPrawda(pair.isatty())
pair = self.tp(SelectableIsAtty(Prawda), SelectableIsAtty(Prawda))
self.assertPrawda(pair.isatty())
def test_weakref_clearing(self):
brw = self.tp(self.MockRawIO(), self.MockRawIO())
ref = weakref.ref(brw)
brw = Nic
ref = Nic # Shouldn't segfault.
klasa CBufferedRWPairTest(BufferedRWPairTest):
tp = io.BufferedRWPair
klasa PyBufferedRWPairTest(BufferedRWPairTest):
tp = pyio.BufferedRWPair
klasa BufferedRandomTest(BufferedReaderTest, BufferedWriterTest):
read_mode = "rb+"
write_mode = "wb+"
def test_constructor(self):
BufferedReaderTest.test_constructor(self)
BufferedWriterTest.test_constructor(self)
def test_uninitialized(self):
BufferedReaderTest.test_uninitialized(self)
BufferedWriterTest.test_uninitialized(self)
def test_read_and_write(self):
raw = self.MockRawIO((b"asdf", b"ghjk"))
rw = self.tp(raw, 8)
self.assertEqual(b"as", rw.read(2))
rw.write(b"ddd")
rw.write(b"eee")
self.assertNieprawda(raw._write_stack) # Buffer writes
self.assertEqual(b"ghjk", rw.read())
self.assertEqual(b"dddeee", raw._write_stack[0])
def test_seek_and_tell(self):
raw = self.BytesIO(b"asdfghjkl")
rw = self.tp(raw)
self.assertEqual(b"as", rw.read(2))
self.assertEqual(2, rw.tell())
rw.seek(0, 0)
self.assertEqual(b"asdf", rw.read(4))
rw.write(b"123f")
rw.seek(0, 0)
self.assertEqual(b"asdf123fl", rw.read())
self.assertEqual(9, rw.tell())
rw.seek(-4, 2)
self.assertEqual(5, rw.tell())
rw.seek(2, 1)
self.assertEqual(7, rw.tell())
self.assertEqual(b"fl", rw.read(11))
rw.flush()
self.assertEqual(b"asdf123fl", raw.getvalue())
self.assertRaises(TypeError, rw.seek, 0.0)
def check_flush_and_read(self, read_func):
raw = self.BytesIO(b"abcdefghi")
bufio = self.tp(raw)
self.assertEqual(b"ab", read_func(bufio, 2))
bufio.write(b"12")
self.assertEqual(b"ef", read_func(bufio, 2))
self.assertEqual(6, bufio.tell())
bufio.flush()
self.assertEqual(6, bufio.tell())
self.assertEqual(b"ghi", read_func(bufio))
raw.seek(0, 0)
raw.write(b"XYZ")
# flush() resets the read buffer
bufio.flush()
bufio.seek(0, 0)
self.assertEqual(b"XYZ", read_func(bufio, 3))
def test_flush_and_read(self):
self.check_flush_and_read(lambda bufio, *args: bufio.read(*args))
def test_flush_and_readinto(self):
def _readinto(bufio, n=-1):
b = bytearray(n jeżeli n >= 0 inaczej 9999)
n = bufio.readinto(b)
zwróć bytes(b[:n])
self.check_flush_and_read(_readinto)
def test_flush_and_peek(self):
def _peek(bufio, n=-1):
# This relies on the fact that the buffer can contain the whole
# raw stream, otherwise peek() can return less.
b = bufio.peek(n)
jeżeli n != -1:
b = b[:n]
bufio.seek(len(b), 1)
zwróć b
self.check_flush_and_read(_peek)
def test_flush_and_write(self):
raw = self.BytesIO(b"abcdefghi")
bufio = self.tp(raw)
bufio.write(b"123")
bufio.flush()
bufio.write(b"45")
bufio.flush()
bufio.seek(0, 0)
self.assertEqual(b"12345fghi", raw.getvalue())
self.assertEqual(b"12345fghi", bufio.read())
def test_threads(self):
BufferedReaderTest.test_threads(self)
BufferedWriterTest.test_threads(self)
def test_writes_and_peek(self):
def _peek(bufio):
bufio.peek(1)
self.check_writes(_peek)
def _peek(bufio):
pos = bufio.tell()
bufio.seek(-1, 1)
bufio.peek(1)
bufio.seek(pos, 0)
self.check_writes(_peek)
def test_writes_and_reads(self):
def _read(bufio):
bufio.seek(-1, 1)
bufio.read(1)
self.check_writes(_read)
def test_writes_and_read1s(self):
def _read1(bufio):
bufio.seek(-1, 1)
bufio.read1(1)
self.check_writes(_read1)
def test_writes_and_readintos(self):
def _read(bufio):
bufio.seek(-1, 1)
bufio.readinto(bytearray(1))
self.check_writes(_read)
def test_write_after_readahead(self):
# Issue #6629: writing after the buffer was filled by readahead should
# first rewind the raw stream.
dla overwrite_size w [1, 5]:
raw = self.BytesIO(b"A" * 10)
bufio = self.tp(raw, 4)
# Trigger readahead
self.assertEqual(bufio.read(1), b"A")
self.assertEqual(bufio.tell(), 1)
# Overwriting should rewind the raw stream if it needs to
bufio.write(b"B" * overwrite_size)
self.assertEqual(bufio.tell(), overwrite_size + 1)
# If the write size was smaller than the buffer size, flush() and
# check that rewind happens.
bufio.flush()
self.assertEqual(bufio.tell(), overwrite_size + 1)
s = raw.getvalue()
self.assertEqual(s,
b"A" + b"B" * overwrite_size + b"A" * (9 - overwrite_size))
def test_write_rewind_write(self):
# Various combinations of reading / writing / seeking backwards / writing again
def mutate(bufio, pos1, pos2):
assert pos2 >= pos1
# Fill the buffer
bufio.seek(pos1)
bufio.read(pos2 - pos1)
bufio.write(b'\x02')
# This writes earlier than the previous write, but still inside
# the buffer.
bufio.seek(pos1)
bufio.write(b'\x01')
b = b"\x80\x81\x82\x83\x84"
dla i w range(0, len(b)):
dla j w range(i, len(b)):
raw = self.BytesIO(b)
bufio = self.tp(raw, 100)
mutate(bufio, i, j)
bufio.flush()
expected = bytearray(b)
expected[j] = 2
expected[i] = 1
self.assertEqual(raw.getvalue(), expected,
"failed result dla i=%d, j=%d" % (i, j))
def test_truncate_after_read_or_write(self):
raw = self.BytesIO(b"A" * 10)
bufio = self.tp(raw, 100)
self.assertEqual(bufio.read(2), b"AA") # the read buffer gets filled
self.assertEqual(bufio.truncate(), 2)
self.assertEqual(bufio.write(b"BB"), 2) # the write buffer increases
self.assertEqual(bufio.truncate(), 4)
def test_misbehaved_io(self):
BufferedReaderTest.test_misbehaved_io(self)
BufferedWriterTest.test_misbehaved_io(self)
def test_interleaved_read_write(self):
# Test for issue #12213
przy self.BytesIO(b'abcdefgh') jako raw:
przy self.tp(raw, 100) jako f:
f.write(b"1")
self.assertEqual(f.read(1), b'b')
f.write(b'2')
self.assertEqual(f.read1(1), b'd')
f.write(b'3')
buf = bytearray(1)
f.readinto(buf)
self.assertEqual(buf, b'f')
f.write(b'4')
self.assertEqual(f.peek(1), b'h')
f.flush()
self.assertEqual(raw.getvalue(), b'1b2d3f4h')
przy self.BytesIO(b'abc') jako raw:
przy self.tp(raw, 100) jako f:
self.assertEqual(f.read(1), b'a')
f.write(b"2")
self.assertEqual(f.read(1), b'c')
f.flush()
self.assertEqual(raw.getvalue(), b'a2c')
def test_interleaved_readline_write(self):
przy self.BytesIO(b'ab\ncdef\ng\n') jako raw:
przy self.tp(raw) jako f:
f.write(b'1')
self.assertEqual(f.readline(), b'b\n')
f.write(b'2')
self.assertEqual(f.readline(), b'def\n')
f.write(b'3')
self.assertEqual(f.readline(), b'\n')
f.flush()
self.assertEqual(raw.getvalue(), b'1b\n2def\n3\n')
# You can't construct a BufferedRandom over a non-seekable stream.
test_unseekable = Nic
klasa CBufferedRandomTest(BufferedRandomTest, SizeofTest):
tp = io.BufferedRandom
def test_constructor(self):
BufferedRandomTest.test_constructor(self)
# The allocation can succeed on 32-bit builds, e.g. with more
# than 2GB RAM and a 64-bit kernel.
jeżeli sys.maxsize > 0x7FFFFFFF:
rawio = self.MockRawIO()
bufio = self.tp(rawio)
self.assertRaises((OverflowError, MemoryError, ValueError),
bufio.__init__, rawio, sys.maxsize)
def test_garbage_collection(self):
CBufferedReaderTest.test_garbage_collection(self)
CBufferedWriterTest.test_garbage_collection(self)
def test_args_error(self):
# Issue #17275
przy self.assertRaisesRegex(TypeError, "BufferedRandom"):
self.tp(io.BytesIO(), 1024, 1024, 1024)
klasa PyBufferedRandomTest(BufferedRandomTest):
tp = pyio.BufferedRandom
# To fully exercise seek/tell, the StatefulIncrementalDecoder has these
# properties:
# - A single output character can correspond to many bytes of input.
# - The number of input bytes to complete the character can be
# undetermined until the last input byte is received.
# - The number of input bytes can vary depending on previous input.
# - A single input byte can correspond to many characters of output.
# - The number of output characters can be undetermined until the
# last input byte is received.
# - The number of output characters can vary depending on previous input.
klasa StatefulIncrementalDecoder(codecs.IncrementalDecoder):
"""
For testing seek/tell behavior with a stateful, buffering decoder.
Input is a sequence of words. Words may be fixed-length (length set
by input) or variable-length (period-terminated). In variable-length
mode, extra periods are ignored. Possible words are:
- 'i' followed by a number sets the input length, I (maximum 99).
When I is set to 0, words are period-terminated (variable-length mode).
- 'o' followed by a number sets the output length, O (maximum 99).
- Any other word is converted into a word followed by a period on
the output. The output word consists of the input word truncated
or padded out with hyphens to make its length equal to O. If O
is 0, the word is output verbatim without truncating or padding.
I and O are initially set to 1. When I changes, any buffered input is
re-scanned according to the new I. EOF also terminates the last word.
"""
def __init__(self, errors='strict'):
codecs.IncrementalDecoder.__init__(self, errors)
self.reset()
def __repr__(self):
zwróć '<SID %x>' % id(self)
def reset(self):
self.i = 1
self.o = 1
self.buffer = bytearray()
def getstate(self):
i, o = self.i ^ 1, self.o ^ 1 # so that flags = 0 after reset()
zwróć bytes(self.buffer), i*100 + o
def setstate(self, state):
buffer, io = state
self.buffer = bytearray(buffer)
i, o = divmod(io, 100)
self.i, self.o = i ^ 1, o ^ 1
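# State round-trip sketch: getstate() packs the pending bytes together with
# i*100 + o (both XOR'ed with 1 so that a freshly reset decoder encodes as
# flag 0); e.g. with i=2, o=6 the integer part is 3*100 + 7 == 307, and
# setstate() reverses exactly that packing.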
def decode(self, input, final=Nieprawda):
output = ''
dla b w input:
jeżeli self.i == 0: # variable-length, terminated with period
jeżeli b == ord('.'):
jeżeli self.buffer:
output += self.process_word()
inaczej:
self.buffer.append(b)
inaczej: # fixed-length, terminate after self.i bytes
self.buffer.append(b)
jeżeli len(self.buffer) == self.i:
output += self.process_word()
jeżeli final oraz self.buffer: # EOF terminates the last word
output += self.process_word()
zwróć output
def process_word(self):
output = ''
jeżeli self.buffer[0] == ord('i'):
self.i = min(99, int(self.buffer[1:] albo 0)) # set input length
albo_inaczej self.buffer[0] == ord('o'):
self.o = min(99, int(self.buffer[1:] albo 0)) # set output length
inaczej:
output = self.buffer.decode('ascii')
jeżeli len(output) < self.o:
output += '-'*self.o # pad out with hyphens
jeżeli self.o:
output = output[:self.o] # truncate to output length
output += '.'
self.buffer = bytearray()
zwróć output
codecEnabled = Nieprawda
@classmethod
def lookupTestDecoder(cls, name):
jeżeli cls.codecEnabled oraz name == 'test_decoder':
latin1 = codecs.lookup('latin-1')
zwróć codecs.CodecInfo(
name='test_decoder', encode=latin1.encode, decode=Nic,
incrementalencoder=Nic,
streamreader=Nic, streamwriter=Nic,
incrementaldecoder=cls)
# Register the previous decoder for testing.
# Disabled by default, tests will enable it.
codecs.register(StatefulIncrementalDecoder.lookupTestDecoder)
klasa StatefulIncrementalDecoderTest(unittest.TestCase):
"""
Make sure the StatefulIncrementalDecoder actually works.
"""
test_cases = [
# I=1, O=1 (fixed-length input == fixed-length output)
(b'abcd', Nieprawda, 'a.b.c.d.'),
# I=0, O=0 (variable-length input, variable-length output)
(b'oiabcd', Prawda, 'abcd.'),
# I=0, O=0 (should ignore extra periods)
(b'oi...abcd...', Prawda, 'abcd.'),
# I=0, O=6 (variable-length input, fixed-length output)
(b'i.o6.x.xyz.toolongtofit.', Nieprawda, 'x-----.xyz---.toolon.'),
# I=2, O=6 (fixed-length input < fixed-length output)
(b'i.i2.o6xyz', Prawda, 'xy----.z-----.'),
# I=6, O=3 (fixed-length input > fixed-length output)
(b'i.o3.i6.abcdefghijklmnop', Prawda, 'abc.ghi.mno.'),
# I=0, then 3; O=29, then 15 (with longer output)
(b'i.o29.a.b.cde.o15.abcdefghijabcdefghij.i3.a.b.c.d.ei00k.l.m', Prawda,
'a----------------------------.' +
'b----------------------------.' +
'cde--------------------------.' +
'abcdefghijabcde.' +
'a.b------------.' +
'.c.------------.' +
'd.e------------.' +
'k--------------.' +
'l--------------.' +
'm--------------.')
]
def test_decoder(self):
# Try a few one-shot test cases.
dla input, eof, output w self.test_cases:
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(input, eof), output)
# Also test an unfinished decode, followed by forcing EOF.
d = StatefulIncrementalDecoder()
self.assertEqual(d.decode(b'oiabcd'), '')
self.assertEqual(d.decode(b'', 1), 'abcd.')
klasa TextIOWrapperTest(unittest.TestCase):
def setUp(self):
self.testdata = b"AAA\r\nBBB\rCCC\r\nDDD\nEEE\r\n"
self.normalized = b"AAA\nBBB\nCCC\nDDD\nEEE\n".decode("ascii")
support.unlink(support.TESTFN)
def tearDown(self):
support.unlink(support.TESTFN)
def test_constructor(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b)
t.__init__(b, encoding="latin-1", newline="\r\n")
self.assertEqual(t.encoding, "latin-1")
self.assertEqual(t.line_buffering, Nieprawda)
t.__init__(b, encoding="utf-8", line_buffering=Prawda)
self.assertEqual(t.encoding, "utf-8")
self.assertEqual(t.line_buffering, Prawda)
self.assertEqual("\xe9\n", t.readline())
self.assertRaises(TypeError, t.__init__, b, newline=42)
self.assertRaises(ValueError, t.__init__, b, newline='xyzzy')
def test_uninitialized(self):
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
usuń t
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
self.assertRaisesRegex((ValueError, AttributeError),
'uninitialized|has no attribute',
t.read, 0)
t.__init__(self.MockRawIO())
self.assertEqual(t.read(0), '')
def test_non_text_encoding_codecs_are_rejected(self):
# Ensure the constructor complains if passed a codec that isn't
# marked as a text encoding
# http://bugs.python.org/issue20404
r = self.BytesIO()
b = self.BufferedWriter(r)
przy self.assertRaisesRegex(LookupError, "is nie a text encoding"):
self.TextIOWrapper(b, encoding="hex")
def test_detach(self):
r = self.BytesIO()
b = self.BufferedWriter(r)
t = self.TextIOWrapper(b)
self.assertIs(t.detach(), b)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("howdy")
self.assertNieprawda(r.getvalue())
t.detach()
self.assertEqual(r.getvalue(), b"howdy")
self.assertRaises(ValueError, t.detach)
# Operations independent of the detached stream should still work
repr(t)
self.assertEqual(t.encoding, "ascii")
self.assertEqual(t.errors, "strict")
self.assertNieprawda(t.line_buffering)
def test_repr(self):
raw = self.BytesIO("hello".encode("utf-8"))
b = self.BufferedReader(raw)
t = self.TextIOWrapper(b, encoding="utf-8")
modname = self.TextIOWrapper.__module__
self.assertEqual(repr(t),
"<%s.TextIOWrapper encoding='utf-8'>" % modname)
raw.name = "dummy"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name='dummy' encoding='utf-8'>" % modname)
t.mode = "r"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name='dummy' mode='r' encoding='utf-8'>" % modname)
raw.name = b"dummy"
self.assertEqual(repr(t),
"<%s.TextIOWrapper name=b'dummy' mode='r' encoding='utf-8'>" % modname)
t.buffer.detach()
repr(t) # Should not raise an exception
def test_line_buffering(self):
r = self.BytesIO()
b = self.BufferedWriter(r, 1000)
t = self.TextIOWrapper(b, newline="\n", line_buffering=Prawda)
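# With line_buffering enabled the wrapper flushes as soon as the written text
# contains a newline ('\n' or '\r'), which is what the checks below verify.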
t.write("X")
self.assertEqual(r.getvalue(), b"") # No flush happened
t.write("Y\nZ")
self.assertEqual(r.getvalue(), b"XY\nZ") # All got flushed
t.write("A\rB")
self.assertEqual(r.getvalue(), b"XY\nZA\rB")
def test_default_encoding(self):
old_environ = dict(os.environ)
spróbuj:
# try to get a user preferred encoding different from the current
# locale encoding to check that TextIOWrapper() uses the current
# locale encoding and not the user preferred encoding
dla key w ('LC_ALL', 'LANG', 'LC_CTYPE'):
jeżeli key w os.environ:
usuń os.environ[key]
current_locale_encoding = locale.getpreferredencoding(Nieprawda)
b = self.BytesIO()
t = self.TextIOWrapper(b)
self.assertEqual(t.encoding, current_locale_encoding)
w_końcu:
os.environ.clear()
os.environ.update(old_environ)
@support.cpython_only
def test_device_encoding(self):
# Issue 15989
zaimportuj _testcapi
b = self.BytesIO()
b.fileno = lambda: _testcapi.INT_MAX + 1
self.assertRaises(OverflowError, self.TextIOWrapper, b)
b.fileno = lambda: _testcapi.UINT_MAX + 1
self.assertRaises(OverflowError, self.TextIOWrapper, b)
def test_encoding(self):
# Check the encoding attribute is always set, and valid
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="utf-8")
self.assertEqual(t.encoding, "utf-8")
t = self.TextIOWrapper(b)
self.assertIsNotNic(t.encoding)
codecs.lookup(t.encoding)
def test_encoding_errors_reading(self):
# (1) default
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.read)
# (2) explicit strict
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.read)
# (3) ignore
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore")
self.assertEqual(t.read(), "abc\n\n")
# (4) replace
b = self.BytesIO(b"abc\n\xff\n")
t = self.TextIOWrapper(b, encoding="ascii", errors="replace")
self.assertEqual(t.read(), "abc\n\ufffd\n")
def test_encoding_errors_writing(self):
# (1) default
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
self.assertRaises(UnicodeError, t.write, "\xff")
# (2) explicit strict
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="strict")
self.assertRaises(UnicodeError, t.write, "\xff")
# (3) ignore
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="ignore",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abcdef\n")
# (4) replace
b = self.BytesIO()
t = self.TextIOWrapper(b, encoding="ascii", errors="replace",
newline="\n")
t.write("abc\xffdef\n")
t.flush()
self.assertEqual(b.getvalue(), b"abc?def\n")
def test_newlines(self):
input_lines = [ "unix\n", "windows\r\n", "os9\r", "last\n", "nonl" ]
tests = [
[ Nic, [ 'unix\n', 'windows\n', 'os9\n', 'last\n', 'nonl' ] ],
[ '', input_lines ],
[ '\n', [ "unix\n", "windows\r\n", "os9\rlast\n", "nonl" ] ],
[ '\r\n', [ "unix\nwindows\r\n", "os9\rlast\nnonl" ] ],
[ '\r', [ "unix\nwindows\r", "\nos9\r", "last\nnonl" ] ],
]
encodings = (
'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
# Try a range of buffer sizes to test the case where \r is the last
# character in TextIOWrapper._pending_line.
dla encoding w encodings:
# XXX: str.encode() should return bytes
data = bytes(''.join(input_lines).encode(encoding))
dla do_reads w (Nieprawda, Prawda):
dla bufsize w range(1, 10):
dla newline, exp_lines w tests:
bufio = self.BufferedReader(self.BytesIO(data), bufsize)
textio = self.TextIOWrapper(bufio, newline=newline,
encoding=encoding)
jeżeli do_reads:
got_lines = []
dopóki Prawda:
c2 = textio.read(2)
jeżeli c2 == '':
przerwij
self.assertEqual(len(c2), 2)
got_lines.append(c2 + textio.readline())
inaczej:
got_lines = list(textio)
dla got_line, exp_line w zip(got_lines, exp_lines):
self.assertEqual(got_line, exp_line)
self.assertEqual(len(got_lines), len(exp_lines))
def test_newlines_input(self):
testdata = b"AAA\nBB\x00B\nCCC\rDDD\rEEE\r\nFFF\r\nGGG"
normalized = testdata.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
dla newline, expected w [
(Nic, normalized.decode("ascii").splitlines(keepends=Prawda)),
("", testdata.decode("ascii").splitlines(keepends=Prawda)),
("\n", ["AAA\n", "BB\x00B\n", "CCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r\n", ["AAA\nBB\x00B\nCCC\rDDD\rEEE\r\n", "FFF\r\n", "GGG"]),
("\r", ["AAA\nBB\x00B\nCCC\r", "DDD\r", "EEE\r", "\nFFF\r", "\nGGG"]),
]:
buf = self.BytesIO(testdata)
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
self.assertEqual(txt.readlines(), expected)
txt.seek(0)
self.assertEqual(txt.read(), "".join(expected))
def test_newlines_output(self):
testdict = {
"": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\n": b"AAA\nBBB\nCCC\nX\rY\r\nZ",
"\r": b"AAA\rBBB\rCCC\rX\rY\r\rZ",
"\r\n": b"AAA\r\nBBB\r\nCCC\r\nX\rY\r\r\nZ",
}
tests = [(Nic, testdict[os.linesep])] + sorted(testdict.items())
dla newline, expected w tests:
buf = self.BytesIO()
txt = self.TextIOWrapper(buf, encoding="ascii", newline=newline)
txt.write("AAA\nB")
txt.write("BB\nCCC\n")
txt.write("X\rY\r\nZ")
txt.flush()
self.assertEqual(buf.closed, Nieprawda)
self.assertEqual(buf.getvalue(), expected)
def test_destructor(self):
l = []
base = self.BytesIO
klasa MyBytesIO(base):
def close(self):
l.append(self.getvalue())
base.close(self)
b = MyBytesIO()
t = self.TextIOWrapper(b, encoding="ascii")
t.write("abc")
usuń t
support.gc_collect()
self.assertEqual([b"abc"], l)
def test_override_destructor(self):
record = []
klasa MyTextIO(self.TextIOWrapper):
def __del__(self):
record.append(1)
spróbuj:
f = super().__del__
wyjąwszy AttributeError:
dalej
inaczej:
f()
def close(self):
record.append(2)
super().close()
def flush(self):
record.append(3)
super().flush()
b = self.BytesIO()
t = MyTextIO(b, encoding="ascii")
usuń t
support.gc_collect()
self.assertEqual(record, [1, 2, 3])
def test_error_through_destructor(self):
# Test that the exception state is not modified by a destructor,
# even if close() fails.
rawio = self.CloseFailureIO()
def f():
self.TextIOWrapper(rawio).xyzzy
przy support.captured_output("stderr") jako s:
self.assertRaises(AttributeError, f)
s = s.getvalue().strip()
jeżeli s:
# The destructor *may* have printed an unraisable error, check it
self.assertEqual(len(s.splitlines()), 1)
self.assertPrawda(s.startswith("Exception OSError: "), s)
self.assertPrawda(s.endswith(" ignored"), s)
# Systematic tests of the text I/O API
def test_basic_io(self):
dla chunksize w (1, 2, 3, 4, 5, 15, 16, 17, 31, 32, 33, 63, 64, 65):
dla enc w "ascii", "latin-1", "utf-8" :# , "utf-16-be", "utf-16-le":
f = self.open(support.TESTFN, "w+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.write("abc"), 3)
f.close()
f = self.open(support.TESTFN, "r+", encoding=enc)
f._CHUNK_SIZE = chunksize
self.assertEqual(f.tell(), 0)
self.assertEqual(f.read(), "abc")
cookie = f.tell()
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.read(Nic), "abc")
f.seek(0)
self.assertEqual(f.read(2), "ab")
self.assertEqual(f.read(1), "c")
self.assertEqual(f.read(1), "")
self.assertEqual(f.read(), "")
self.assertEqual(f.tell(), cookie)
self.assertEqual(f.seek(0), 0)
self.assertEqual(f.seek(0, 2), cookie)
self.assertEqual(f.write("def"), 3)
self.assertEqual(f.seek(cookie), cookie)
self.assertEqual(f.read(), "def")
jeżeli enc.startswith("utf"):
self.multi_line_test(f, enc)
f.close()
def multi_line_test(self, f, enc):
f.seek(0)
f.truncate()
sample = "s\xff\u0fff\uffff"
wlines = []
dla size w (0, 1, 2, 3, 4, 5, 30, 31, 32, 33, 62, 63, 64, 65, 1000):
chars = []
dla i w range(size):
chars.append(sample[i % len(sample)])
line = "".join(chars) + "\n"
wlines.append((f.tell(), line))
f.write(line)
f.seek(0)
rlines = []
dopóki Prawda:
pos = f.tell()
line = f.readline()
jeżeli nie line:
przerwij
rlines.append((pos, line))
self.assertEqual(rlines, wlines)
def test_telling(self):
f = self.open(support.TESTFN, "w+", encoding="utf-8")
p0 = f.tell()
f.write("\xff\n")
p1 = f.tell()
f.write("\xff\n")
p2 = f.tell()
f.seek(0)
self.assertEqual(f.tell(), p0)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p1)
self.assertEqual(f.readline(), "\xff\n")
self.assertEqual(f.tell(), p2)
f.seek(0)
dla line w f:
self.assertEqual(line, "\xff\n")
self.assertRaises(OSError, f.tell)
self.assertEqual(f.tell(), p2)
f.close()
def test_seeking(self):
chunk_size = _default_chunk_size()
prefix_size = chunk_size - 2
u_prefix = "a" * prefix_size
prefix = bytes(u_prefix.encode("utf-8"))
self.assertEqual(len(u_prefix), len(prefix))
u_suffix = "\u8888\n"
suffix = bytes(u_suffix.encode("utf-8"))
line = prefix + suffix
przy self.open(support.TESTFN, "wb") jako f:
f.write(line*2)
przy self.open(support.TESTFN, "r", encoding="utf-8") jako f:
s = f.read(prefix_size)
self.assertEqual(s, str(prefix, "ascii"))
self.assertEqual(f.tell(), prefix_size)
self.assertEqual(f.readline(), u_suffix)
def test_seeking_too(self):
# Regression test for a specific bug
data = b'\xe0\xbf\xbf\n'
przy self.open(support.TESTFN, "wb") jako f:
f.write(data)
przy self.open(support.TESTFN, "r", encoding="utf-8") jako f:
f._CHUNK_SIZE # Just test that it exists
f._CHUNK_SIZE = 2
f.readline()
f.tell()
def test_seek_and_tell(self):
# Test seek/tell using the StatefulIncrementalDecoder.
# Make test faster by doing smaller seeks
CHUNK_SIZE = 128
def test_seek_and_tell_with_data(data, min_pos=0):
"""Tell/seek to various points within a data stream oraz ensure
that the decoded data returned by read() jest consistent."""
f = self.open(support.TESTFN, 'wb')
f.write(data)
f.close()
f = self.open(support.TESTFN, encoding='test_decoder')
f._CHUNK_SIZE = CHUNK_SIZE
decoded = f.read()
f.close()
dla i w range(min_pos, len(decoded) + 1): # seek positions
dla j w [1, 5, len(decoded) - i]: # read lengths
f = self.open(support.TESTFN, encoding='test_decoder')
self.assertEqual(f.read(i), decoded[:i])
cookie = f.tell()
self.assertEqual(f.read(j), decoded[i:i + j])
f.seek(cookie)
self.assertEqual(f.read(), decoded[i:])
f.close()
# Enable the test decoder.
StatefulIncrementalDecoder.codecEnabled = 1
# Run the tests.
spróbuj:
# Try each test case.
dla input, _, _ w StatefulIncrementalDecoderTest.test_cases:
test_seek_and_tell_with_data(input)
# Position each test case so that it crosses a chunk boundary.
dla input, _, _ w StatefulIncrementalDecoderTest.test_cases:
offset = CHUNK_SIZE - len(input)//2
prefix = b'.'*offset
# Don't bother seeking into the prefix (takes too long).
min_pos = offset*2
test_seek_and_tell_with_data(prefix + input, min_pos)
# Ensure our test decoder won't interfere with subsequent tests.
w_końcu:
StatefulIncrementalDecoder.codecEnabled = 0
def test_encoded_writes(self):
data = "1234567890"
tests = ("utf-16",
"utf-16-le",
"utf-16-be",
"utf-32",
"utf-32-le",
"utf-32-be")
dla encoding w tests:
buf = self.BytesIO()
f = self.TextIOWrapper(buf, encoding=encoding)
# Check if the BOM is written only once (see issue1753).
f.write(data)
f.write(data)
f.seek(0)
self.assertEqual(f.read(), data * 2)
f.seek(0)
self.assertEqual(f.read(), data * 2)
self.assertEqual(buf.getvalue(), (data * 2).encode(encoding))
def test_unreadable(self):
klasa UnReadable(self.BytesIO):
def readable(self):
zwróć Nieprawda
txt = self.TextIOWrapper(UnReadable())
self.assertRaises(OSError, txt.read)
def test_read_one_by_one(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\r\nBB"))
reads = ""
dopóki Prawda:
c = txt.read(1)
jeżeli nie c:
przerwij
reads += c
self.assertEqual(reads, "AA\nBB")
def test_readlines(self):
txt = self.TextIOWrapper(self.BytesIO(b"AA\nBB\nCC"))
self.assertEqual(txt.readlines(), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(Nic), ["AA\n", "BB\n", "CC"])
txt.seek(0)
self.assertEqual(txt.readlines(5), ["AA\n", "BB\n"])
# read in amounts equal to TextIOWrapper._CHUNK_SIZE, which is 128.
def test_read_by_chunk(self):
# make sure "\r\n" straddles 128 char boundary.
txt = self.TextIOWrapper(self.BytesIO(b"A" * 127 + b"\r\nB"))
reads = ""
dopóki Prawda:
c = txt.read(128)
jeżeli nie c:
przerwij
reads += c
self.assertEqual(reads, "A"*127+"\nB")
def test_writelines(self):
l = ['ab', 'cd', 'ef']
buf = self.BytesIO()
txt = self.TextIOWrapper(buf)
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_userlist(self):
l = UserList(['ab', 'cd', 'ef'])
buf = self.BytesIO()
txt = self.TextIOWrapper(buf)
txt.writelines(l)
txt.flush()
self.assertEqual(buf.getvalue(), b'abcdef')
def test_writelines_error(self):
txt = self.TextIOWrapper(self.BytesIO())
self.assertRaises(TypeError, txt.writelines, [1, 2, 3])
self.assertRaises(TypeError, txt.writelines, Nic)
self.assertRaises(TypeError, txt.writelines, b'abc')
def test_issue1395_1(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
# read one char at a time
reads = ""
dopóki Prawda:
c = txt.read(1)
jeżeli nie c:
przerwij
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_2(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = ""
dopóki Prawda:
c = txt.read(4)
jeżeli nie c:
przerwij
reads += c
self.assertEqual(reads, self.normalized)
def test_issue1395_3(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read(4)
reads += txt.readline()
reads += txt.readline()
reads += txt.readline()
self.assertEqual(reads, self.normalized)
def test_issue1395_4(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
reads += txt.read()
self.assertEqual(reads, self.normalized)
def test_issue1395_5(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt._CHUNK_SIZE = 4
reads = txt.read(4)
pos = txt.tell()
txt.seek(0)
txt.seek(pos)
self.assertEqual(txt.read(4), "BBB\n")
def test_issue2282(self):
buffer = self.BytesIO(self.testdata)
txt = self.TextIOWrapper(buffer, encoding="ascii")
self.assertEqual(buffer.seekable(), txt.seekable())
def test_append_bom(self):
# The BOM is not written again when appending to a non-empty file
filename = support.TESTFN
dla charset w ('utf-8-sig', 'utf-16', 'utf-32'):
przy self.open(filename, 'w', encoding=charset) jako f:
f.write('aaa')
pos = f.tell()
przy self.open(filename, 'rb') jako f:
self.assertEqual(f.read(), 'aaa'.encode(charset))
przy self.open(filename, 'a', encoding=charset) jako f:
f.write('xxx')
przy self.open(filename, 'rb') jako f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_seek_bom(self):
# Same test, but when seeking manually
filename = support.TESTFN
dla charset w ('utf-8-sig', 'utf-16', 'utf-32'):
przy self.open(filename, 'w', encoding=charset) jako f:
f.write('aaa')
pos = f.tell()
przy self.open(filename, 'r+', encoding=charset) jako f:
f.seek(pos)
f.write('zzz')
f.seek(0)
f.write('bbb')
przy self.open(filename, 'rb') jako f:
self.assertEqual(f.read(), 'bbbzzz'.encode(charset))
def test_seek_append_bom(self):
# Same test, but first seek to the start oraz then to the end
filename = support.TESTFN
dla charset w ('utf-8-sig', 'utf-16', 'utf-32'):
przy self.open(filename, 'w', encoding=charset) jako f:
f.write('aaa')
przy self.open(filename, 'a', encoding=charset) jako f:
f.seek(0)
f.seek(0, self.SEEK_END)
f.write('xxx')
przy self.open(filename, 'rb') jako f:
self.assertEqual(f.read(), 'aaaxxx'.encode(charset))
def test_errors_property(self):
przy self.open(support.TESTFN, "w") jako f:
self.assertEqual(f.errors, "strict")
przy self.open(support.TESTFN, "w", errors="replace") jako f:
self.assertEqual(f.errors, "replace")
@support.no_tracing
@unittest.skipUnless(threading, 'Threading required for this test.')
def test_threads_write(self):
# Issue6750: concurrent writes could duplicate data
event = threading.Event()
przy self.open(support.TESTFN, "w", buffering=1) jako f:
def run(n):
text = "Thread%03d\n" % n
event.wait()
f.write(text)
threads = [threading.Thread(target=run, args=(x,))
dla x w range(20)]
przy support.start_threads(threads, event.set):
time.sleep(0.02)
przy self.open(support.TESTFN) jako f:
content = f.read()
dla n w range(20):
self.assertEqual(content.count("Thread%03d\n" % n), 1)
def test_flush_error_on_close(self):
# Test that the text file is closed despite a failed flush
# and that flush() is called before the file is closed.
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
closed = []
def bad_flush():
closed[:] = [txt.closed, txt.buffer.closed]
podnieś OSError()
txt.flush = bad_flush
self.assertRaises(OSError, txt.close) # exception not swallowed
self.assertPrawda(txt.closed)
self.assertPrawda(txt.buffer.closed)
self.assertPrawda(closed) # flush() called
self.assertNieprawda(closed[0]) # flush() called before file closed
self.assertNieprawda(closed[1])
txt.flush = lambda: Nic # break reference loop
def test_close_error_on_close(self):
buffer = self.BytesIO(self.testdata)
def bad_flush():
podnieś OSError('flush')
def bad_close():
podnieś OSError('close')
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
przy self.assertRaises(OSError) jako err: # exception not swallowed
txt.close()
self.assertEqual(err.exception.args, ('close',))
self.assertIsInstance(err.exception.__context__, OSError)
self.assertEqual(err.exception.__context__.args, ('flush',))
self.assertNieprawda(txt.closed)
def test_nonnormalized_close_error_on_close(self):
# Issue #21677
buffer = self.BytesIO(self.testdata)
def bad_flush():
podnieś non_existing_flush
def bad_close():
podnieś non_existing_close
buffer.close = bad_close
txt = self.TextIOWrapper(buffer, encoding="ascii")
txt.flush = bad_flush
przy self.assertRaises(NameError) jako err: # exception not swallowed
txt.close()
self.assertIn('non_existing_close', str(err.exception))
self.assertIsInstance(err.exception.__context__, NameError)
self.assertIn('non_existing_flush', str(err.exception.__context__))
self.assertNieprawda(txt.closed)
def test_multi_close(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
txt.close()
txt.close()
txt.close()
self.assertRaises(ValueError, txt.flush)
def test_unseekable(self):
txt = self.TextIOWrapper(self.MockUnseekableIO(self.testdata))
self.assertRaises(self.UnsupportedOperation, txt.tell)
self.assertRaises(self.UnsupportedOperation, txt.seek, 0)
def test_readonly_attributes(self):
txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii")
buf = self.BytesIO(self.testdata)
przy self.assertRaises(AttributeError):
txt.buffer = buf
def test_rawio(self):
# Issue #12591: TextIOWrapper must work with raw I/O objects, so
# that subprocess.Popen() can have the required unbuffered
# semantics with universal_newlines=Prawda.
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n')
# Reads
self.assertEqual(txt.read(4), 'abcd')
self.assertEqual(txt.readline(), 'efghi\n')
self.assertEqual(list(txt), ['jkl\n', 'opq\n'])
def test_rawio_write_through(self):
# Issue #12591: with write_through=Prawda, writes don't need a flush
raw = self.MockRawIO([b'abc', b'def', b'ghi\njkl\nopq\n'])
txt = self.TextIOWrapper(raw, encoding='ascii', newline='\n',
write_through=Prawda)
txt.write('1')
txt.write('23\n4')
txt.write('5')
self.assertEqual(b''.join(raw._write_stack), b'123\n45')
def test_bufio_write_through(self):
# Issue #21396: write_through=Prawda doesn't force a flush()
# on the underlying binary buffered object.
flush_called, write_called = [], []
klasa BufferedWriter(self.BufferedWriter):
def flush(self, *args, **kwargs):
flush_called.append(Prawda)
zwróć super().flush(*args, **kwargs)
def write(self, *args, **kwargs):
write_called.append(Prawda)
zwróć super().write(*args, **kwargs)
rawio = self.BytesIO()
data = b"a"
bufio = BufferedWriter(rawio, len(data)*2)
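# bufio's buffer is len(data)*2 == 2 bytes here, so the single-character
# write below stays inside bufio and only the larger write later overflows it.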
textio = self.TextIOWrapper(bufio, encoding='ascii',
write_through=Prawda)
# write to the buffered io but don't overflow the buffer
text = data.decode('ascii')
textio.write(text)
# buffer.flush is not called with write_through=Prawda
self.assertNieprawda(flush_called)
# buffer.write *is* called with write_through=Prawda
self.assertPrawda(write_called)
self.assertEqual(rawio.getvalue(), b"") # no flush
write_called = [] # reset
textio.write(text * 10) # total content is larger than bufio buffer
self.assertPrawda(write_called)
self.assertEqual(rawio.getvalue(), data * 11) # all flushed
def test_read_nonbytes(self):
# Issue #17106
# Crash when underlying read() returns non-bytes
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.read, 1)
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.readline)
t = self.TextIOWrapper(self.StringIO('a'))
self.assertRaises(TypeError, t.read)
def test_illegal_decoder(self):
# Issue #17106
# Bypass the early encoding check added in issue 20404
def _make_illegal_wrapper():
quopri = codecs.lookup("quopri")
quopri._is_text_encoding = Prawda
spróbuj:
t = self.TextIOWrapper(self.BytesIO(b'aaaaaa'),
newline='\n', encoding="quopri")
w_końcu:
quopri._is_text_encoding = Nieprawda
zwróć t
# Crash when decoder returns non-string
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read, 1)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.readline)
t = _make_illegal_wrapper()
self.assertRaises(TypeError, t.read)
def _check_create_at_shutdown(self, **kwargs):
# Issue #20037: creating a TextIOWrapper at shutdown
# shouldn't crash the interpreter.
iomod = self.io.__name__
code = """jeżeli 1:
zaimportuj codecs
zaimportuj {iomod} jako io
# Avoid looking up codecs at shutdown
codecs.lookup('utf-8')
klasa C:
def __init__(self):
self.buf = io.BytesIO()
def __del__(self):
io.TextIOWrapper(self.buf, **{kwargs})
print("ok")
c = C()
""".format(iomod=iomod, kwargs=kwargs)
zwróć assert_python_ok("-c", code)
def test_create_at_shutdown_without_encoding(self):
rc, out, err = self._check_create_at_shutdown()
jeżeli err:
# Can error out with a RuntimeError if the module state
# isn't found.
self.assertIn(self.shutdown_error, err.decode())
inaczej:
self.assertEqual("ok", out.decode().strip())
def test_create_at_shutdown_with_encoding(self):
rc, out, err = self._check_create_at_shutdown(encoding='utf-8',
errors='strict')
self.assertNieprawda(err)
self.assertEqual("ok", out.decode().strip())
def test_read_byteslike(self):
r = MemviewBytesIO(b'Just some random string\n')
t = self.TextIOWrapper(r, 'utf-8')
# TextIOWrapper will not read the full string, because
# we truncate it to a multiple of the native int size
# so that we can construct a more complex memoryview.
bytes_val = _to_memoryview(r.getvalue()).tobytes()
self.assertEqual(t.read(200), bytes_val.decode('utf-8'))
def test_issue22849(self):
klasa F(object):
def readable(self): zwróć Prawda
def writable(self): zwróć Prawda
def seekable(self): zwróć Prawda
dla i w range(10):
spróbuj:
self.TextIOWrapper(F(), encoding='utf-8')
wyjąwszy Exception:
dalej
F.tell = lambda x: 0
t = self.TextIOWrapper(F(), encoding='utf-8')
klasa MemviewBytesIO(io.BytesIO):
'''A BytesIO object whose read method returns memoryviews
rather than bytes'''
def read1(self, len_):
zwróć _to_memoryview(super().read1(len_))
def read(self, len_):
zwróć _to_memoryview(super().read(len_))
def _to_memoryview(buf):
'''Convert bytes-object *buf* to a non-trivial memoryview'''
arr = array.array('i')
idx = len(buf) - len(buf) % arr.itemsize
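# idx rounds len(buf) down to a multiple of arr.itemsize (platform-dependent,
# typically 4 bytes for 'i'), so any trailing remainder bytes are dropped.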
arr.frombytes(buf[:idx])
zwróć memoryview(arr)
klasa CTextIOWrapperTest(TextIOWrapperTest):
io = io
shutdown_error = "RuntimeError: could nie find io module state"
def test_initialization(self):
r = self.BytesIO(b"\xc3\xa9\n\n")
b = self.BufferedReader(r, 1000)
t = self.TextIOWrapper(b)
self.assertRaises(ValueError, t.__init__, b, newline='xyzzy')
self.assertRaises(ValueError, t.read)
t = self.TextIOWrapper.__new__(self.TextIOWrapper)
self.assertRaises(Exception, repr, t)
def test_garbage_collection(self):
# C TextIOWrapper objects are collected, and collecting them flushes
# all data to disk.
# The Python version has __del__, so it ends up in gc.garbage instead.
przy support.check_warnings(('', ResourceWarning)):
rawio = io.FileIO(support.TESTFN, "wb")
b = self.BufferedWriter(rawio)
t = self.TextIOWrapper(b, encoding="ascii")
t.write("456def")
t.x = t
wr = weakref.ref(t)
usuń t
support.gc_collect()
self.assertIsNic(wr(), wr)
przy self.open(support.TESTFN, "rb") jako f:
self.assertEqual(f.read(), b"456def")
def test_rwpair_cleared_before_textio(self):
# Issue 13070: TextIOWrapper's finalization would crash when called
# after the reference to the underlying BufferedRWPair's writer got
# cleared by the GC.
dla i w range(1000):
b1 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t1 = self.TextIOWrapper(b1, encoding="ascii")
b2 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
t2 = self.TextIOWrapper(b2, encoding="ascii")
# circular references
t1.buddy = t2
t2.buddy = t1
support.gc_collect()
klasa PyTextIOWrapperTest(TextIOWrapperTest):
io = pyio
#shutdown_error = "LookupError: unknown encoding: ascii"
shutdown_error = "TypeError: 'NicType' object jest nie iterable"
klasa IncrementalNewlineDecoderTest(unittest.TestCase):
def check_newline_decoding_utf8(self, decoder):
# UTF-8 specific tests for a newline decoder
def _check_decode(b, s, **kwargs):
# We exercise getstate() / setstate() as well as decode()
state = decoder.getstate()
self.assertEqual(decoder.decode(b, **kwargs), s)
decoder.setstate(state)
self.assertEqual(decoder.decode(b, **kwargs), s)
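# Note how the calls below feed a multi-byte UTF-8 sequence one byte at a
# time (nothing is produced until the final byte arrives) and how a lone
# '\r' is held back until the next input shows whether it starts '\r\n'.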
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
_check_decode(b'\xa2', "")
_check_decode(b'\x88', "\u8888")
_check_decode(b'\xe8', "")
self.assertRaises(UnicodeDecodeError, decoder.decode, b'', final=Prawda)
decoder.reset()
_check_decode(b'\n', "\n")
_check_decode(b'\r', "")
_check_decode(b'', "\n", final=Prawda)
_check_decode(b'\r', "\n", final=Prawda)
_check_decode(b'\r', "")
_check_decode(b'a', "\na")
_check_decode(b'\r\r\n', "\n\n")
_check_decode(b'\r', "")
_check_decode(b'\r', "\n")
_check_decode(b'\na', "\na")
_check_decode(b'\xe8\xa2\x88\r\n', "\u8888\n")
_check_decode(b'\xe8\xa2\x88', "\u8888")
_check_decode(b'\n', "\n")
_check_decode(b'\xe8\xa2\x88\r', "\u8888")
_check_decode(b'\n', "\n")
def check_newline_decoding(self, decoder, encoding):
result = []
jeżeli encoding jest nie Nic:
encoder = codecs.getincrementalencoder(encoding)()
def _decode_bytewise(s):
# Decode one byte at a time
dla b w encoder.encode(s):
result.append(decoder.decode(bytes([b])))
inaczej:
encoder = Nic
def _decode_bytewise(s):
# Decode one char at a time
dla c w s:
result.append(decoder.decode(c))
self.assertEqual(decoder.newlines, Nic)
_decode_bytewise("abc\n\r")
self.assertEqual(decoder.newlines, '\n')
_decode_bytewise("\nabc")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual(decoder.newlines, ('\n', '\r\n'))
_decode_bytewise("abc")
self.assertEqual(decoder.newlines, ('\r', '\n', '\r\n'))
_decode_bytewise("abc\r")
self.assertEqual("".join(result), "abc\n\nabcabc\nabcabc")
decoder.reset()
input = "abc"
jeżeli encoder jest nie Nic:
encoder.reset()
input = encoder.encode(input)
self.assertEqual(decoder.decode(input), "abc")
self.assertEqual(decoder.newlines, Nic)
def test_newline_decoder(self):
encodings = (
# Nic meaning the IncrementalNewlineDecoder takes unicode input
# rather than bytes input
Nic, 'utf-8', 'latin-1',
'utf-16', 'utf-16-le', 'utf-16-be',
'utf-32', 'utf-32-le', 'utf-32-be',
)
dla enc w encodings:
decoder = enc oraz codecs.getincrementaldecoder(enc)()
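# When enc is Nic the short-circuit above leaves decoder as Nic, so the
# IncrementalNewlineDecoder is fed str objects directly instead of bytes.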
decoder = self.IncrementalNewlineDecoder(decoder, translate=Prawda)
self.check_newline_decoding(decoder, enc)
decoder = codecs.getincrementaldecoder("utf-8")()
decoder = self.IncrementalNewlineDecoder(decoder, translate=Prawda)
self.check_newline_decoding_utf8(decoder)
def test_newline_bytes(self):
# Issue 5433: Excessive optimization in IncrementalNewlineDecoder
def _check(dec):
self.assertEqual(dec.newlines, Nic)
self.assertEqual(dec.decode("\u0D00"), "\u0D00")
self.assertEqual(dec.newlines, Nic)
self.assertEqual(dec.decode("\u0A00"), "\u0A00")
self.assertEqual(dec.newlines, Nic)
dec = self.IncrementalNewlineDecoder(Nic, translate=Nieprawda)
_check(dec)
dec = self.IncrementalNewlineDecoder(Nic, translate=Prawda)
_check(dec)
klasa CIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
dalej
klasa PyIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
dalej
# XXX Tests dla open()
klasa MiscIOTest(unittest.TestCase):
def tearDown(self):
support.unlink(support.TESTFN)
def test___all__(self):
dla name w self.io.__all__:
obj = getattr(self.io, name, Nic)
self.assertIsNotNic(obj, name)
jeżeli name == "open":
kontynuuj
albo_inaczej "error" w name.lower() albo name == "UnsupportedOperation":
self.assertPrawda(issubclass(obj, Exception), name)
albo_inaczej nie name.startswith("SEEK_"):
self.assertPrawda(issubclass(obj, self.IOBase))
def test_attributes(self):
f = self.open(support.TESTFN, "wb", buffering=0)
self.assertEqual(f.mode, "wb")
f.close()
przy support.check_warnings(('', DeprecationWarning)):
f = self.open(support.TESTFN, "U")
self.assertEqual(f.name, support.TESTFN)
self.assertEqual(f.buffer.name, support.TESTFN)
self.assertEqual(f.buffer.raw.name, support.TESTFN)
self.assertEqual(f.mode, "U")
self.assertEqual(f.buffer.mode, "rb")
self.assertEqual(f.buffer.raw.mode, "rb")
f.close()
f = self.open(support.TESTFN, "w+")
self.assertEqual(f.mode, "w+")
self.assertEqual(f.buffer.mode, "rb+") # Does it really matter?
self.assertEqual(f.buffer.raw.mode, "rb+")
g = self.open(f.fileno(), "wb", closefd=Nieprawda)
self.assertEqual(g.mode, "wb")
self.assertEqual(g.raw.mode, "wb")
self.assertEqual(g.name, f.fileno())
self.assertEqual(g.raw.name, f.fileno())
f.close()
g.close()
def test_io_after_close(self):
dla kwargs w [
{"mode": "w"},
{"mode": "wb"},
{"mode": "w", "buffering": 1},
{"mode": "w", "buffering": 2},
{"mode": "wb", "buffering": 0},
{"mode": "r"},
{"mode": "rb"},
{"mode": "r", "buffering": 1},
{"mode": "r", "buffering": 2},
{"mode": "rb", "buffering": 0},
{"mode": "w+"},
{"mode": "w+b"},
{"mode": "w+", "buffering": 1},
{"mode": "w+", "buffering": 2},
{"mode": "w+b", "buffering": 0},
]:
f = self.open(support.TESTFN, **kwargs)
f.close()
self.assertRaises(ValueError, f.flush)
self.assertRaises(ValueError, f.fileno)
self.assertRaises(ValueError, f.isatty)
self.assertRaises(ValueError, f.__iter__)
jeżeli hasattr(f, "peek"):
self.assertRaises(ValueError, f.peek, 1)
self.assertRaises(ValueError, f.read)
jeżeli hasattr(f, "read1"):
self.assertRaises(ValueError, f.read1, 1024)
jeżeli hasattr(f, "readall"):
self.assertRaises(ValueError, f.readall)
jeżeli hasattr(f, "readinto"):
self.assertRaises(ValueError, f.readinto, bytearray(1024))
jeżeli hasattr(f, "readinto1"):
self.assertRaises(ValueError, f.readinto1, bytearray(1024))
self.assertRaises(ValueError, f.readline)
self.assertRaises(ValueError, f.readlines)
self.assertRaises(ValueError, f.seek, 0)
self.assertRaises(ValueError, f.tell)
self.assertRaises(ValueError, f.truncate)
self.assertRaises(ValueError, f.write,
b"" jeżeli "b" w kwargs['mode'] inaczej "")
self.assertRaises(ValueError, f.writelines, [])
self.assertRaises(ValueError, next, f)
def test_blockingioerror(self):
# Various BlockingIOError issues
klasa C(str):
dalej
c = C("")
b = self.BlockingIOError(1, c)
c.b = b
b.c = c
wr = weakref.ref(c)
usuń c, b
support.gc_collect()
self.assertIsNic(wr(), wr)
def test_abcs(self):
# Test the visible base classes are ABCs.
self.assertIsInstance(self.IOBase, abc.ABCMeta)
self.assertIsInstance(self.RawIOBase, abc.ABCMeta)
self.assertIsInstance(self.BufferedIOBase, abc.ABCMeta)
self.assertIsInstance(self.TextIOBase, abc.ABCMeta)
def _check_abc_inheritance(self, abcmodule):
przy self.open(support.TESTFN, "wb", buffering=0) jako f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertIsInstance(f, abcmodule.RawIOBase)
self.assertNotIsInstance(f, abcmodule.BufferedIOBase)
self.assertNotIsInstance(f, abcmodule.TextIOBase)
przy self.open(support.TESTFN, "wb") jako f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertNotIsInstance(f, abcmodule.RawIOBase)
self.assertIsInstance(f, abcmodule.BufferedIOBase)
self.assertNotIsInstance(f, abcmodule.TextIOBase)
przy self.open(support.TESTFN, "w") jako f:
self.assertIsInstance(f, abcmodule.IOBase)
self.assertNotIsInstance(f, abcmodule.RawIOBase)
self.assertNotIsInstance(f, abcmodule.BufferedIOBase)
self.assertIsInstance(f, abcmodule.TextIOBase)
def test_abc_inheritance(self):
# Test implementations inherit z their respective ABCs
self._check_abc_inheritance(self)
def test_abc_inheritance_official(self):
# Test implementations inherit z the official ABCs of the
# baseline "io" module.
self._check_abc_inheritance(io)
def _check_warn_on_dealloc(self, *args, **kwargs):
f = open(*args, **kwargs)
r = repr(f)
przy self.assertWarns(ResourceWarning) jako cm:
f = Nic
support.gc_collect()
self.assertIn(r, str(cm.warning.args[0]))
def test_warn_on_dealloc(self):
self._check_warn_on_dealloc(support.TESTFN, "wb", buffering=0)
self._check_warn_on_dealloc(support.TESTFN, "wb")
self._check_warn_on_dealloc(support.TESTFN, "w")
def _check_warn_on_dealloc_fd(self, *args, **kwargs):
fds = []
def cleanup_fds():
dla fd w fds:
spróbuj:
os.close(fd)
wyjąwszy OSError jako e:
jeżeli e.errno != errno.EBADF:
podnieś
self.addCleanup(cleanup_fds)
r, w = os.pipe()
fds += r, w
self._check_warn_on_dealloc(r, *args, **kwargs)
# When using closefd=Nieprawda, there's no warning
r, w = os.pipe()
fds += r, w
przy warnings.catch_warnings(record=Prawda) jako recorded:
open(r, *args, closefd=Nieprawda, **kwargs)
support.gc_collect()
self.assertEqual(recorded, [])
def test_warn_on_dealloc_fd(self):
self._check_warn_on_dealloc_fd("rb", buffering=0)
self._check_warn_on_dealloc_fd("rb")
self._check_warn_on_dealloc_fd("r")
def test_pickling(self):
# Pickling file objects jest forbidden
dla kwargs w [
{"mode": "w"},
{"mode": "wb"},
{"mode": "wb", "buffering": 0},
{"mode": "r"},
{"mode": "rb"},
{"mode": "rb", "buffering": 0},
{"mode": "w+"},
{"mode": "w+b"},
{"mode": "w+b", "buffering": 0},
]:
dla protocol w range(pickle.HIGHEST_PROTOCOL + 1):
przy self.open(support.TESTFN, **kwargs) jako f:
self.assertRaises(TypeError, pickle.dumps, f, protocol)
def test_nonblock_pipe_write_bigbuf(self):
self._test_nonblock_pipe_write(16*1024)
def test_nonblock_pipe_write_smallbuf(self):
self._test_nonblock_pipe_write(1024)
@unittest.skipUnless(hasattr(os, 'set_blocking'),
'os.set_blocking() required dla this test')
def _test_nonblock_pipe_write(self, bufsize):
sent = []
received = []
r, w = os.pipe()
os.set_blocking(r, Nieprawda)
os.set_blocking(w, Nieprawda)
# To exercise all code paths w the C implementation we need
# to play przy buffer sizes. For instance, jeżeli we choose a
# buffer size less than albo equal to _PIPE_BUF (4096 on Linux)
# then we will never get a partial write of the buffer.
rf = self.open(r, mode='rb', closefd=Prawda, buffering=bufsize)
wf = self.open(w, mode='wb', closefd=Prawda, buffering=bufsize)
przy rf, wf:
dla N w 9999, 73, 7574:
spróbuj:
i = 0
dopóki Prawda:
msg = bytes([i % 26 + 97]) * N
sent.append(msg)
wf.write(msg)
i += 1
wyjąwszy self.BlockingIOError jako e:
self.assertEqual(e.args[0], errno.EAGAIN)
self.assertEqual(e.args[2], e.characters_written)
sent[-1] = sent[-1][:e.characters_written]
received.append(rf.read())
msg = b'BLOCKED'
wf.write(msg)
sent.append(msg)
dopóki Prawda:
spróbuj:
wf.flush()
przerwij
wyjąwszy self.BlockingIOError jako e:
self.assertEqual(e.args[0], errno.EAGAIN)
self.assertEqual(e.args[2], e.characters_written)
self.assertEqual(e.characters_written, 0)
received.append(rf.read())
received += iter(rf.read, Nic)
sent, received = b''.join(sent), b''.join(received)
self.assertEqual(sent, received)
self.assertPrawda(wf.closed)
self.assertPrawda(rf.closed)
def test_create_fail(self):
# 'x' mode fails jeżeli file jest existing
przy self.open(support.TESTFN, 'w'):
dalej
self.assertRaises(FileExistsError, self.open, support.TESTFN, 'x')
def test_create_writes(self):
# 'x' mode opens dla writing
przy self.open(support.TESTFN, 'xb') jako f:
f.write(b"spam")
przy self.open(support.TESTFN, 'rb') jako f:
self.assertEqual(b"spam", f.read())
def test_open_allargs(self):
# there used to be a buffer overflow w the parser dla rawmode
self.assertRaises(ValueError, self.open, support.TESTFN, 'rwax+')
klasa CMiscIOTest(MiscIOTest):
io = io
def test_readinto_buffer_overflow(self):
# Issue #18025
klasa BadReader(self.io.BufferedIOBase):
def read(self, n=-1):
zwróć b'x' * 10**6
bufio = BadReader()
b = bytearray(2)
self.assertRaises(ValueError, bufio.readinto, b)
@unittest.skipUnless(threading, 'Threading required dla this test.')
def check_daemon_threads_shutdown_deadlock(self, stream_name):
# Issue #23309: deadlocks at shutdown should be avoided when a
# daemon thread oraz the main thread both write to a file.
code = """jeżeli 1:
zaimportuj sys
zaimportuj time
zaimportuj threading
file = sys.{stream_name}
def run():
dopóki Prawda:
file.write('.')
file.flush()
thread = threading.Thread(target=run)
thread.daemon = Prawda
thread.start()
time.sleep(0.5)
file.write('!')
file.flush()
""".format_map(locals())
res, _ = run_python_until_end("-c", code)
err = res.err.decode()
jeżeli res.rc != 0:
# Failure: should be a fatal error
self.assertIn("Fatal Python error: could nie acquire lock "
"dla <_io.BufferedWriter name='<{stream_name}>'> "
"at interpreter shutdown, possibly due to "
"daemon threads".format_map(locals()),
err)
inaczej:
self.assertNieprawda(err.strip('.!'))
def test_daemon_threads_shutdown_stdout_deadlock(self):
self.check_daemon_threads_shutdown_deadlock('stdout')
def test_daemon_threads_shutdown_stderr_deadlock(self):
self.check_daemon_threads_shutdown_deadlock('stderr')
klasa PyMiscIOTest(MiscIOTest):
io = pyio
@unittest.skipIf(os.name == 'nt', 'POSIX signals required dla this test.')
klasa SignalsTest(unittest.TestCase):
def setUp(self):
self.oldalrm = signal.signal(signal.SIGALRM, self.alarm_interrupt)
def tearDown(self):
signal.signal(signal.SIGALRM, self.oldalrm)
def alarm_interrupt(self, sig, frame):
1/0
@unittest.skipUnless(threading, 'Threading required dla this test.')
def check_interrupted_write(self, item, bytes, **fdopen_kwargs):
"""Check that a partial write, when it gets interrupted, properly
invokes the signal handler, oraz bubbles up the exception podnieśd
w the latter."""
read_results = []
def _read():
jeżeli hasattr(signal, 'pthread_sigmask'):
signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGALRM])
s = os.read(r, 1)
read_results.append(s)
t = threading.Thread(target=_read)
t.daemon = Prawda
r, w = os.pipe()
fdopen_kwargs["closefd"] = Nieprawda
large_data = item * (support.PIPE_MAX_SIZE // len(item) + 1)
spróbuj:
wio = self.io.open(w, **fdopen_kwargs)
t.start()
# Fill the pipe enough that the write will be blocking.
# It will be interrupted by the timer armed above. Since the
# other thread has read one byte, the low-level write will
# zwróć przy a successful (partial) result rather than an EINTR.
# The buffered IO layer must check dla pending signal
# handlers, which w this case will invoke alarm_interrupt().
signal.alarm(1)
spróbuj:
self.assertRaises(ZeroDivisionError, wio.write, large_data)
w_końcu:
signal.alarm(0)
t.join()
# We got one byte, get another one oraz check that it isn't a
# repeat of the first one.
read_results.append(os.read(r, 1))
self.assertEqual(read_results, [bytes[0:1], bytes[1:2]])
w_końcu:
os.close(w)
os.close(r)
# This jest deliberate. If we didn't close the file descriptor
# before closing wio, wio would try to flush its internal
# buffer, oraz block again.
spróbuj:
wio.close()
wyjąwszy OSError jako e:
jeżeli e.errno != errno.EBADF:
podnieś
def test_interrupted_write_unbuffered(self):
self.check_interrupted_write(b"xy", b"xy", mode="wb", buffering=0)
def test_interrupted_write_buffered(self):
self.check_interrupted_write(b"xy", b"xy", mode="wb")
# Issue #22331: The test hangs on FreeBSD 7.2
@support.requires_freebsd_version(8)
def test_interrupted_write_text(self):
self.check_interrupted_write("xy", b"xy", mode="w", encoding="ascii")
@support.no_tracing
def check_reentrant_write(self, data, **fdopen_kwargs):
def on_alarm(*args):
# Will be called reentrantly z the same thread
wio.write(data)
1/0
signal.signal(signal.SIGALRM, on_alarm)
r, w = os.pipe()
wio = self.io.open(w, **fdopen_kwargs)
spróbuj:
signal.alarm(1)
# Either the reentrant call to wio.write() fails przy RuntimeError,
# albo the signal handler podnieśs ZeroDivisionError.
przy self.assertRaises((ZeroDivisionError, RuntimeError)) jako cm:
dopóki 1:
dla i w range(100):
wio.write(data)
wio.flush()
# Make sure the buffer doesn't fill up oraz block further writes
os.read(r, len(data) * 100)
exc = cm.exception
jeżeli isinstance(exc, RuntimeError):
self.assertPrawda(str(exc).startswith("reentrant call"), str(exc))
w_końcu:
wio.close()
os.close(r)
def test_reentrant_write_buffered(self):
self.check_reentrant_write(b"xy", mode="wb")
def test_reentrant_write_text(self):
self.check_reentrant_write("xy", mode="w", encoding="ascii")
def check_interrupted_read_retry(self, decode, **fdopen_kwargs):
"""Check that a buffered read, when it gets interrupted (either
returning a partial result albo EINTR), properly invokes the signal
handler oraz retries jeżeli the latter returned successfully."""
r, w = os.pipe()
fdopen_kwargs["closefd"] = Nieprawda
def alarm_handler(sig, frame):
os.write(w, b"bar")
signal.signal(signal.SIGALRM, alarm_handler)
spróbuj:
rio = self.io.open(r, **fdopen_kwargs)
os.write(w, b"foo")
signal.alarm(1)
# Expected behaviour:
# - first raw read() returns partial b"foo"
# - second raw read() returns EINTR
# - third raw read() returns b"bar"
self.assertEqual(decode(rio.read(6)), "foobar")
w_końcu:
rio.close()
os.close(w)
os.close(r)
def test_interrupted_read_retry_buffered(self):
self.check_interrupted_read_retry(lambda x: x.decode('latin1'),
mode="rb")
def test_interrupted_read_retry_text(self):
self.check_interrupted_read_retry(lambda x: x,
mode="r")
@unittest.skipUnless(threading, 'Threading required dla this test.')
def check_interrupted_write_retry(self, item, **fdopen_kwargs):
"""Check that a buffered write, when it gets interrupted (either
returning a partial result albo EINTR), properly invokes the signal
handler oraz retries jeżeli the latter returned successfully."""
select = support.import_module("select")
# A quantity that exceeds the buffer size of an anonymous pipe's
# write end.
N = support.PIPE_MAX_SIZE
r, w = os.pipe()
fdopen_kwargs["closefd"] = Nieprawda
# We need a separate thread to read z the pipe oraz allow the
# write() to finish. This thread jest started after the SIGALRM jest
# received (forcing a first EINTR w write()).
read_results = []
write_finished = Nieprawda
error = Nic
def _read():
spróbuj:
dopóki nie write_finished:
dopóki r w select.select([r], [], [], 1.0)[0]:
s = os.read(r, 1024)
read_results.append(s)
wyjąwszy BaseException jako exc:
nonlocal error
error = exc
t = threading.Thread(target=_read)
t.daemon = Prawda
def alarm1(sig, frame):
signal.signal(signal.SIGALRM, alarm2)
signal.alarm(1)
def alarm2(sig, frame):
t.start()
large_data = item * N
signal.signal(signal.SIGALRM, alarm1)
spróbuj:
wio = self.io.open(w, **fdopen_kwargs)
signal.alarm(1)
# Expected behaviour:
# - first raw write() jest partial (because of the limited pipe buffer
# oraz the first alarm)
# - second raw write() returns EINTR (because of the second alarm)
# - subsequent write()s are successful (either partial albo complete)
written = wio.write(large_data)
self.assertEqual(N, written)
wio.flush()
write_finished = Prawda
t.join()
self.assertIsNic(error)
self.assertEqual(N, sum(len(x) dla x w read_results))
w_końcu:
write_finished = Prawda
os.close(w)
os.close(r)
# This jest deliberate. If we didn't close the file descriptor
# before closing wio, wio would try to flush its internal
# buffer, oraz could block (in case of failure).
spróbuj:
wio.close()
wyjąwszy OSError jako e:
jeżeli e.errno != errno.EBADF:
podnieś
def test_interrupted_write_retry_buffered(self):
self.check_interrupted_write_retry(b"x", mode="wb")
def test_interrupted_write_retry_text(self):
self.check_interrupted_write_retry("x", mode="w", encoding="latin1")
klasa CSignalsTest(SignalsTest):
io = io
klasa PySignalsTest(SignalsTest):
io = pyio
# Handling reentrancy issues would slow down _pyio even more, so the
# tests are disabled.
test_reentrant_write_buffered = Nic
test_reentrant_write_text = Nic
def load_tests(*args):
tests = (CIOTest, PyIOTest, APIMismatchTest,
CBufferedReaderTest, PyBufferedReaderTest,
CBufferedWriterTest, PyBufferedWriterTest,
CBufferedRWPairTest, PyBufferedRWPairTest,
CBufferedRandomTest, PyBufferedRandomTest,
StatefulIncrementalDecoderTest,
CIncrementalNewlineDecoderTest, PyIncrementalNewlineDecoderTest,
CTextIOWrapperTest, PyTextIOWrapperTest,
CMiscIOTest, PyMiscIOTest,
CSignalsTest, PySignalsTest,
)
# Put the namespaces of the IO module we are testing oraz some useful mock
# classes w the __dict__ of each test.
mocks = (MockRawIO, MisbehavedRawIO, MockFileIO, CloseFailureIO,
MockNonBlockWriterIO, MockUnseekableIO, MockRawIOWithoutRead)
all_members = io.__all__ + ["IncrementalNewlineDecoder"]
c_io_ns = {name : getattr(io, name) dla name w all_members}
py_io_ns = {name : getattr(pyio, name) dla name w all_members}
globs = globals()
c_io_ns.update((x.__name__, globs["C" + x.__name__]) dla x w mocks)
py_io_ns.update((x.__name__, globs["Py" + x.__name__]) dla x w mocks)
# Avoid turning open into a bound method.
py_io_ns["open"] = pyio.OpenWrapper
dla test w tests:
jeżeli test.__name__.startswith("C"):
dla name, obj w c_io_ns.items():
setattr(test, name, obj)
albo_inaczej test.__name__.startswith("Py"):
dla name, obj w py_io_ns.items():
setattr(test, name, obj)
suite = unittest.TestSuite([unittest.makeSuite(test) dla test w tests])
zwróć suite
jeżeli __name__ == "__main__":
unittest.main()
|
python_ls.py
|
# Copyright 2017 Palantir Technologies, Inc.
from functools import partial
import logging
import os
import socketserver
import threading
from pyls_jsonrpc.dispatchers import MethodDispatcher
from pyls_jsonrpc.endpoint import Endpoint
from pyls_jsonrpc.streams import JsonRpcStreamReader, JsonRpcStreamWriter
from . import lsp, _utils, uris
from .config import config
from .workspace import Workspace
log = logging.getLogger(__name__)
LINT_DEBOUNCE_S = 0.5 # 500 ms
PARENT_PROCESS_WATCH_INTERVAL = 10 # 10 s
MAX_WORKERS = 64
PYTHON_FILE_EXTENSIONS = ('.py', '.pyi')
CONFIG_FILEs = ('pycodestyle.cfg', 'setup.cfg', 'tox.ini', '.flake8')
class _StreamHandlerWrapper(socketserver.StreamRequestHandler, object):
"""A wrapper class that is used to construct a custom handler class."""
delegate = None
def setup(self):
super(_StreamHandlerWrapper, self).setup()
# pylint: disable=no-member
self.delegate = self.DELEGATE_CLASS(self.rfile, self.wfile)
def handle(self):
try:
self.delegate.start()
except OSError as e:
if os.name == 'nt':
# Catch and pass on ConnectionResetError when parent process
# dies
# pylint: disable=no-member, undefined-variable
if isinstance(e, WindowsError) and e.winerror == 10054:
pass
# pylint: disable=no-member
self.SHUTDOWN_CALL()
def start_tcp_lang_server(bind_addr, port, check_parent_process, handler_class):
if not issubclass(handler_class, PythonLanguageServer):
raise ValueError('Handler class must be an instance of PythonLanguageServer')
def shutdown_server(check_parent_process, *args):
# pylint: disable=unused-argument
if check_parent_process:
log.debug('Shutting down server')
# Shutdown call must be done on a thread, to prevent deadlocks
stop_thread = threading.Thread(target=server.shutdown)
stop_thread.start()
# Construct a custom wrapper class around the user's handler_class
wrapper_class = type(
handler_class.__name__ + 'Handler',
(_StreamHandlerWrapper,),
{'DELEGATE_CLASS': partial(handler_class,
check_parent_process=check_parent_process),
'SHUTDOWN_CALL': partial(shutdown_server, check_parent_process)}
)
server = socketserver.TCPServer((bind_addr, port), wrapper_class, bind_and_activate=False)
server.allow_reuse_address = True
try:
server.server_bind()
server.server_activate()
log.info('Serving %s on (%s, %s)', handler_class.__name__, bind_addr, port)
server.serve_forever()
finally:
log.info('Shutting down')
server.server_close()
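# --- Illustrative sketch (not part of the original module) ---
# start_tcp_lang_server() above builds its request-handler class at runtime with
# type(), injecting DELEGATE_CLASS and SHUTDOWN_CALL as class attributes. The
# guarded snippet below is a minimal stand-alone rerun of that pattern; the
# _Delegate class and the 'DemoHandler' name are made up for illustration.
if __name__ == "__main__":
    from functools import partial as _partial

    class _Delegate(object):  # hypothetical stand-in for a language server class
        def __init__(self, rfile, wfile):
            self.rfile, self.wfile = rfile, wfile

    demo_handler_cls = type(
        'DemoHandler',
        (_StreamHandlerWrapper,),
        {'DELEGATE_CLASS': _partial(_Delegate),
         'SHUTDOWN_CALL': lambda: None},
    )
    assert issubclass(demo_handler_cls, _StreamHandlerWrapper)
    assert demo_handler_cls.__name__ == 'DemoHandler'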
def start_io_lang_server(rfile, wfile, check_parent_process, handler_class):
if not issubclass(handler_class, PythonLanguageServer):
raise ValueError('Handler class must be an instance of PythonLanguageServer')
log.info('Starting %s IO language server', handler_class.__name__)
server = handler_class(rfile, wfile, check_parent_process)
server.start()
class PythonLanguageServer(MethodDispatcher):
""" Implementation of the Microsoft VSCode Language Server Protocol
https://github.com/Microsoft/language-server-protocol/blob/master/versions/protocol-1-x.md
"""
# pylint: disable=too-many-public-methods,redefined-builtin
def __init__(self, rx, tx, check_parent_process=False):
self.workspace = None
self.config = None
self.root_uri = None
self.watching_thread = None
self.workspaces = {}
self.uri_workspace_mapper = {}
self._jsonrpc_stream_reader = JsonRpcStreamReader(rx)
self._jsonrpc_stream_writer = JsonRpcStreamWriter(tx)
self._check_parent_process = check_parent_process
self._endpoint = Endpoint(self, self._jsonrpc_stream_writer.write, max_workers=MAX_WORKERS)
self._dispatchers = []
self._shutdown = False
def start(self):
"""Entry point for the server."""
self._jsonrpc_stream_reader.listen(self._endpoint.consume)
def __getitem__(self, item):
"""Override getitem to fallback through multiple dispatchers."""
if self._shutdown and item != 'exit':
# exit is the only allowed method during shutdown
log.debug("Ignoring non-exit method during shutdown: %s", item)
raise KeyError
try:
return super(PythonLanguageServer, self).__getitem__(item)
except KeyError:
# Fallback through extra dispatchers
for dispatcher in self._dispatchers:
try:
return dispatcher[item]
except KeyError:
continue
raise KeyError()
def m_shutdown(self, **_kwargs):
self._shutdown = True
return None
def m_exit(self, **_kwargs):
self._endpoint.shutdown()
self._jsonrpc_stream_reader.close()
self._jsonrpc_stream_writer.close()
def _match_uri_to_workspace(self, uri):
workspace_uri = _utils.match_uri_to_workspace(uri, self.workspaces)
return self.workspaces.get(workspace_uri, self.workspace)
def _hook(self, hook_name, doc_uri=None, **kwargs):
"""Calls hook_name and returns a list of results from all registered handlers"""
workspace = self._match_uri_to_workspace(doc_uri)
doc = workspace.get_document(doc_uri) if doc_uri else None
hook_handlers = self.config.plugin_manager.subset_hook_caller(hook_name, self.config.disabled_plugins)
return hook_handlers(config=self.config, workspace=workspace, document=doc, **kwargs)
def capabilities(self):
server_capabilities = {
'codeActionProvider': True,
'codeLensProvider': {
'resolveProvider': False, # We may need to make this configurable
},
'completionProvider': {
'resolveProvider': False, # We know everything ahead of time
'triggerCharacters': ['.']
},
'documentFormattingProvider': True,
'documentHighlightProvider': True,
'documentRangeFormattingProvider': True,
'documentSymbolProvider': True,
'definitionProvider': True,
'executeCommandProvider': {
'commands': flatten(self._hook('pyls_commands'))
},
'hoverProvider': True,
'referencesProvider': True,
'renameProvider': True,
'foldingRangeProvider': True,
'signatureHelpProvider': {
'triggerCharacters': ['(', ',', '=']
},
'textDocumentSync': {
'change': lsp.TextDocumentSyncKind.INCREMENTAL,
'save': {
'includeText': True,
},
'openClose': True,
},
'workspace': {
'workspaceFolders': {
'supported': True,
'changeNotifications': True
}
},
'experimental': merge(self._hook('pyls_experimental_capabilities'))
}
log.info('Server capabilities: %s', server_capabilities)
return server_capabilities
def m_initialize(self, processId=None, rootUri=None, rootPath=None, initializationOptions=None, **_kwargs):
log.debug('Language server initialized with %s %s %s %s', processId, rootUri, rootPath, initializationOptions)
if rootUri is None:
rootUri = uris.from_fs_path(rootPath) if rootPath is not None else ''
self.workspaces.pop(self.root_uri, None)
self.root_uri = rootUri
self.config = config.Config(rootUri, initializationOptions or {},
processId, _kwargs.get('capabilities', {}))
self.workspace = Workspace(rootUri, self._endpoint, self.config)
self.workspaces[rootUri] = self.workspace
self._dispatchers = self._hook('pyls_dispatchers')
self._hook('pyls_initialize')
if self._check_parent_process and processId is not None and self.watching_thread is None:
def watch_parent_process(pid):
# exit when the given pid is not alive
if not _utils.is_process_alive(pid):
log.info("parent process %s is not alive, exiting!", pid)
self.m_exit()
else:
threading.Timer(PARENT_PROCESS_WATCH_INTERVAL, watch_parent_process, args=[pid]).start()
self.watching_thread = threading.Thread(target=watch_parent_process, args=(processId,))
self.watching_thread.daemon = True
self.watching_thread.start()
# Get our capabilities
return {'capabilities': self.capabilities()}
def m_initialized(self, **_kwargs):
self._hook('pyls_initialized')
def code_actions(self, doc_uri, range, context):
return flatten(self._hook('pyls_code_actions', doc_uri, range=range, context=context))
def code_lens(self, doc_uri):
return flatten(self._hook('pyls_code_lens', doc_uri))
def completions(self, doc_uri, position):
completions = self._hook('pyls_completions', doc_uri, position=position)
return {
'isIncomplete': False,
'items': flatten(completions)
}
def definitions(self, doc_uri, position):
return flatten(self._hook('pyls_definitions', doc_uri, position=position))
def document_symbols(self, doc_uri):
return flatten(self._hook('pyls_document_symbols', doc_uri))
def execute_command(self, command, arguments):
return self._hook('pyls_execute_command', command=command, arguments=arguments)
def format_document(self, doc_uri):
return self._hook('pyls_format_document', doc_uri)
def format_range(self, doc_uri, range):
return self._hook('pyls_format_range', doc_uri, range=range)
def highlight(self, doc_uri, position):
return flatten(self._hook('pyls_document_highlight', doc_uri, position=position)) or None
def hover(self, doc_uri, position):
return self._hook('pyls_hover', doc_uri, position=position) or {'contents': ''}
@_utils.debounce(LINT_DEBOUNCE_S, keyed_by='doc_uri')
def lint(self, doc_uri, is_saved):
# Since we're debounced, the document may no longer be open
workspace = self._match_uri_to_workspace(doc_uri)
if doc_uri in workspace.documents:
workspace.publish_diagnostics(
doc_uri,
flatten(self._hook('pyls_lint', doc_uri, is_saved=is_saved))
)
def references(self, doc_uri, position, exclude_declaration):
return flatten(self._hook(
'pyls_references', doc_uri, position=position,
exclude_declaration=exclude_declaration
))
def rename(self, doc_uri, position, new_name):
return self._hook('pyls_rename', doc_uri, position=position, new_name=new_name)
def signature_help(self, doc_uri, position):
return self._hook('pyls_signature_help', doc_uri, position=position)
def folding(self, doc_uri):
return self._hook('pyls_folding_range', doc_uri)
def m_text_document__did_close(self, textDocument=None, **_kwargs):
workspace = self._match_uri_to_workspace(textDocument['uri'])
workspace.rm_document(textDocument['uri'])
def m_text_document__did_open(self, textDocument=None, **_kwargs):
workspace = self._match_uri_to_workspace(textDocument['uri'])
workspace.put_document(textDocument['uri'], textDocument['text'], version=textDocument.get('version'))
self._hook('pyls_document_did_open', textDocument['uri'])
self.lint(textDocument['uri'], is_saved=True)
def m_text_document__did_change(self, contentChanges=None, textDocument=None, **_kwargs):
workspace = self._match_uri_to_workspace(textDocument['uri'])
for change in contentChanges:
workspace.update_document(
textDocument['uri'],
change,
version=textDocument.get('version')
)
self.lint(textDocument['uri'], is_saved=False)
def m_text_document__did_save(self, textDocument=None, **_kwargs):
self.lint(textDocument['uri'], is_saved=True)
def m_text_document__code_action(self, textDocument=None, range=None, context=None, **_kwargs):
return self.code_actions(textDocument['uri'], range, context)
def m_text_document__code_lens(self, textDocument=None, **_kwargs):
return self.code_lens(textDocument['uri'])
def m_text_document__completion(self, textDocument=None, position=None, **_kwargs):
return self.completions(textDocument['uri'], position)
def m_text_document__definition(self, textDocument=None, position=None, **_kwargs):
return self.definitions(textDocument['uri'], position)
def m_text_document__document_highlight(self, textDocument=None, position=None, **_kwargs):
return self.highlight(textDocument['uri'], position)
def m_text_document__hover(self, textDocument=None, position=None, **_kwargs):
return self.hover(textDocument['uri'], position)
def m_text_document__document_symbol(self, textDocument=None, **_kwargs):
return self.document_symbols(textDocument['uri'])
def m_text_document__formatting(self, textDocument=None, _options=None, **_kwargs):
# For now we're ignoring formatting options.
return self.format_document(textDocument['uri'])
def m_text_document__rename(self, textDocument=None, position=None, newName=None, **_kwargs):
return self.rename(textDocument['uri'], position, newName)
def m_text_document__folding_range(self, textDocument=None, **_kwargs):
return self.folding(textDocument['uri'])
def m_text_document__range_formatting(self, textDocument=None, range=None, _options=None, **_kwargs):
# Again, we'll ignore formatting options for now.
return self.format_range(textDocument['uri'], range)
def m_text_document__references(self, textDocument=None, position=None, context=None, **_kwargs):
exclude_declaration = not context['includeDeclaration']
return self.references(textDocument['uri'], position, exclude_declaration)
def m_text_document__signature_help(self, textDocument=None, position=None, **_kwargs):
return self.signature_help(textDocument['uri'], position)
def m_workspace__did_change_configuration(self, settings=None):
self.config.update((settings or {}).get('pyls', {}))
for workspace_uri in self.workspaces:
workspace = self.workspaces[workspace_uri]
workspace.update_config(settings)
for doc_uri in workspace.documents:
self.lint(doc_uri, is_saved=False)
def m_workspace__did_change_workspace_folders(self, event=None, **_kwargs): # pylint: disable=too-many-locals
if event is None:
return
added = event.get('added', [])
removed = event.get('removed', [])
for removed_info in removed:
if 'uri' in removed_info:
removed_uri = removed_info['uri']
self.workspaces.pop(removed_uri, None)
for added_info in added:
if 'uri' in added_info:
added_uri = added_info['uri']
workspace_config = config.Config(
added_uri, self.config._init_opts,
self.config._process_id, self.config._capabilities)
self.workspaces[added_uri] = Workspace(
added_uri, self._endpoint, workspace_config)
root_workspace_removed = any(removed_info['uri'] == self.root_uri for removed_info in removed)
workspace_added = len(added) > 0 and 'uri' in added[0]
if root_workspace_removed and workspace_added:
added_uri = added[0]['uri']
self.root_uri = added_uri
new_root_workspace = self.workspaces[added_uri]
self.config = new_root_workspace._config
self.workspace = new_root_workspace
elif root_workspace_removed:
# NOTE: Removing the root workspace can only happen when the server
# is closed, thus the else condition of this if can never happen.
if self.workspaces:
log.debug('Root workspace deleted!')
available_workspaces = sorted(self.workspaces)
first_workspace = available_workspaces[0]
new_root_workspace = self.workspaces[first_workspace]
self.root_uri = first_workspace
self.config = new_root_workspace._config
self.workspace = new_root_workspace
# Migrate documents that are on the root workspace and have a better
# match now
doc_uris = list(self.workspace._docs.keys())
for uri in doc_uris:
doc = self.workspace._docs.pop(uri)
new_workspace = self._match_uri_to_workspace(uri)
new_workspace._docs[uri] = doc
def m_workspace__did_change_watched_files(self, changes=None, **_kwargs):
changed_py_files = set()
config_changed = False
for d in (changes or []):
if d['uri'].endswith(PYTHON_FILE_EXTENSIONS):
changed_py_files.add(d['uri'])
elif d['uri'].endswith(CONFIG_FILEs):
config_changed = True
if config_changed:
self.config.settings.cache_clear()
elif not changed_py_files:
# Only externally changed python files and lint configs may result in changed diagnostics.
return
for workspace_uri in self.workspaces:
workspace = self.workspaces[workspace_uri]
for doc_uri in workspace.documents:
# Changes in doc_uri are already handled by m_text_document__did_save
if doc_uri not in changed_py_files:
self.lint(doc_uri, is_saved=False)
def m_workspace__execute_command(self, command=None, arguments=None):
return self.execute_command(command, arguments)
def flatten(list_of_lists):
return [item for lst in list_of_lists for item in lst]
def merge(list_of_dicts):
return {k: v for dictionary in list_of_dicts for k, v in dictionary.items()}
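# --- Illustrative sketch (not part of the original module) ---
# The lint() method above is wrapped by a debounce decorator from _utils, which
# is not shown in this file. The guarded snippet below is only an assumption
# about how such a keyed debounce can be built with threading.Timer: calls that
# arrive within the interval cancel the pending one, so only the last call per
# key actually runs. simple_debounce and lint_demo are made-up names.
if __name__ == "__main__":
    import threading
    import time

    def simple_debounce(interval_s):
        timers = {}

        def decorator(func):
            def debounced(key, *args, **kwargs):
                pending = timers.get(key)
                if pending is not None:
                    pending.cancel()  # drop the earlier pending call for this key
                timer = threading.Timer(interval_s, func, args=(key,) + args, kwargs=kwargs)
                timers[key] = timer
                timer.start()
            return debounced
        return decorator

    @simple_debounce(0.2)
    def lint_demo(doc_uri):
        print("linting", doc_uri)

    for _ in range(5):
        lint_demo("file:///tmp/example.py")  # hypothetical URI; only the last call fires
    time.sleep(0.5)  # give the timer a chance to run before the demo exits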
|
function_mod.py
|
from threading import Thread
from datetime import datetime
import flask
import requests
import feedparser
import re
from webapp import app, db_log_mod, db_rss_channel_mod, db_rss_item_mod
def requester(url: str) -> tuple:
"""
Для проверки доступности URL и RSS-контента на нём.
И получения некоторых параметров с данного URL (в случае доступности).
:param url: адрес по которому посылается запрос.
:return: кортеж с параметрами (url, status_code, length_content, title_channel, url_site, result)
"""
    # Check that the URL is reachable.
try:
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:45.0) Gecko/20100101 Firefox/45.0'}
        # so the request is recognised as coming from a browser
page = requests.get(url, headers=headers)
status_code = page.status_code
length_content = len(page.content)
except Exception as err:
result = 'Запрашиваемый сайт недоступен.'
# result = 'Failed to open url.'
# app.logger.info('requester failed with this error: Failed to open url. ' + str(err))
return url, 'Error', 'Error', 'Error', 'Error', result
    # Check that the RSS content can be read.
try:
feed = feedparser.parse(url)
title_channel = feed.channel.title
url_site = feed.channel.link
except Exception as err:
result = 'Ошибка при попытке считывания rss.'
# result = 'Failed to open rss.'
# app.logger.info('requester failed with this error: Failed to open rss. ' + str(err))
return url, status_code, length_content, 'Error', 'Error', result
    # result = 'completed' marks a successful request.
return url, status_code, length_content, title_channel, url_site, 'completed'
def checker(list_from_request: tuple, contents: list) -> bool:
"""
Сравнивает размер страницы из запроса и channel_tab.
:param list_from_request: данные из запроса.
:param contents: данные из channel_tab.
:return: Возвращает True если размер совпадает.
Вызывает add_rss и возращает False если размер не совпадает.
"""
try:
if list_from_request[2] != contents[0][4]:
add_rss(list_from_request[0])
return False
else:
return True
except Exception as err:
app.logger.info('checker failed with this error: ' + str(err))
def do_search(url: str) -> None:
"""
Посылает запрос (requester), ивлекает данные из запроса, выполняет поиск по БД возвращает результаты.
:param url: адрес по которому посылается запрос
:return: None.
"""
try:
        # Data for the log entry
list_for_log = (url,
flask.request.remote_addr,
flask.request.user_agent.browser,
flask.request.user_agent.platform,)
list_from_request = requester(url)
if list_from_request[5] == 'completed' and 100 <= int(list_from_request[1]) <= 299:
contents = db_rss_channel_mod.search_channel(list_from_request, list_for_log)
if not contents:
                db_rss_channel_mod.add_channel(list_from_request, list_for_log)  # Add the new channel.
                add_rss(list_from_request[0])  # Add its RSS content.
flask.flash('Результаты проверки: <{}>'.format(url))
flask.flash('Канал добавлен.')
            # Check whether anything has changed
elif not checker(list_from_request=list_from_request, contents=contents):
                db_rss_channel_mod.update_channel(list_from_request, list_for_log)  # Update the data in the DB
flask.flash('Результаты проверки: <{}>'.format(url))
flask.flash('Информация на канале была изменена со времени последней проверки.')
elif checker(list_from_request=list_from_request, contents=contents):
                db_rss_channel_mod.update_timestamp(list_from_request, list_for_log)  # Update the timestamp for this element.
flask.flash('Результаты проверки: <{}>'.format(url))
flask.flash('Информация на канале со времени последней проверки не изменялась.')
else:
flask.flash('Результаты проверки: <{}>'.format(url))
flask.flash('{}'.format(list_from_request[5]))
try:
t = Thread(target=db_log_mod.add_log, args=(list_for_log, 'failed to open url'))
t.start()
except Exception as err:
app.logger.info('Thread in do_search failed with this error:' + str(err))
except Exception as err:
app.logger.info('do_search failed with this error: ' + str(err))
def remove_channel(id: int) -> None:  # TODO: rename
"""
Получает URL канала. Удаляет RSS-контент связанный с каналом, затем удаляет сам канал.
:param id: id удаляемого элемента в channel_tab.
:return: None.
"""
try:
if db_rss_channel_mod.search_channel_by_id(id):
url_channel = db_rss_channel_mod.get_url_channel_by_id(id)
db_rss_item_mod.remove_rss_by_url_channel(url_channel)
db_rss_channel_mod.remove_channel_by_id(id)
if db_rss_channel_mod.search_channel_by_id(id):
flask.flash('Элемент с ID {} не был удалён.'.format(id))
else:
flask.flash('Элемент с ID {} был удалён.'.format(id))
else:
flask.flash('Элемента с ID {} нет в БД.'.format(id))
except Exception as err:
app.logger.info('function_mod.remove_channel_by_id failed with this error: ' + str(err))
def remove_log(id: int) -> None:  # TODO: rename
"""
Удаление выбранного элемента из log_tab по id.
:param id: id удаляемого элемента в channel_tab.
:return: None.
"""
try:
if db_log_mod.search_log_by_id(id):
db_log_mod.remove_log(id)
if db_log_mod.search_log_by_id(id):
flask.flash('Элемент с ID {} не был удалён.'.format(id))
else:
flask.flash('Элемент с ID {} был удалён.'.format(id))
else:
flask.flash('Элемента с ID {} нет в БД.'.format(id))
except Exception as err:
app.logger.info('function_mod.remove_log_by_id failed with this error: ' + str(err))
def remove_rss(id: int) -> None:  # TODO: rename
"""
Удаление выбранного элемента из rss_tab по id.
:param id: id удаляемого элемента в rss_tab.
:return: None.
"""
try:
if db_rss_item_mod.search_rss_by_id(id):
db_rss_item_mod.remove_rss_by_id(id)
if db_rss_item_mod.search_rss_by_id(id):
flask.flash('Элемент с ID {} не был удалён.'.format(id))
else:
flask.flash('Элемент с ID {} был удалён.'.format(id))
else:
flask.flash('Элемента с ID {} нет в БД.'.format(id))
except Exception as err:
app.logger.info('function_mod.remove_rss_by_id failed with this error: ' + str(err))
def add_rss(url: str) -> None:
"""
Посылает запрос по URL. Проверяет наличие элемента в rss_tab по url_item.
При отсутствии элементта в БД добавляет его.
:param url: адрес по которому посылается запрос.
:return: None.
"""
def clean_html(raw_html: str) -> str:
"""
Для удаления html тегов из текста.
Вспомогательная функция для add_rss
:param raw_html: строка которая может содержать теги.
:return: строка без тегов
"""
cleanr = re.compile('<.*?>')
clean_text = re.sub(cleanr, '', raw_html)
return clean_text
def time_convert(input_time: str) -> str:
"""
Для преобразования времени в формат удобный для сортировки в БД.
Вспомогательная функция для add_rss.
:param input_time: исходная строка даты-времени.
:return: преобразованная строка даты-времени.
"""
dt = datetime.strptime(input_time, '%a, %d %b %Y %X %z')
output_time = str(dt.strftime('%Y-%m-%d %X %z'))
return output_time
try:
feed = feedparser.parse(url)
for item in feed['items']:
if not db_rss_item_mod.search_rss_by_url_item(item.link):
db_rss_item_mod.add_rss((url, item.title, clean_html(item.summary),
item.link, time_convert(item.published)))
except Exception as err:
app.logger.info('add_rss failed with this error: ' + str(err))
def channels_to_listdict() -> list:
"""
Преобразует список кортежей каналов в список словарей, для отображения на entry.html.
:return: список словарей.
"""
try:
contents = db_rss_channel_mod.show_channel_restrict()
channel_list = []
for content in contents:
channel_dict = {}
channel_dict['id'] = content[0]
channel_dict['url'] = content[1]
channel_dict['title'] = content[2]
channel_dict['time'] = content[3]
channel_list.append(channel_dict)
return channel_list
except Exception as err:
app.logger.info('channels_to_listdict failed with this error: ' + str(err))
def show_content(url_channel: str) -> list:
"""
Отоброжаем содержимое определённого канала из rss_tab.
Добавляет данные в log_tab, при отображении содержимого канала.
:param url_channel: url канала, содержимое которого будет отображаться.
:return: список словарей с данными из таблицы соответствущие каналу.
"""
try:
list_for_log = (url_channel,
flask.request.remote_addr,
flask.request.user_agent.browser,
flask.request.user_agent.platform,)
contents = db_rss_item_mod.show_channel_content(url_channel, list_for_log)
rss_list = []
for content in contents:
rss_dict = {}
rss_dict['title'] = content[0]
rss_dict['summary'] = content[1]
rss_dict['published'] = content[2]
rss_dict['url'] = content[3]
rss_list.append(rss_dict)
return rss_list
except Exception as err:
app.logger.info('show_content failed with this error: ' + str(err))
def text_search(text: str) -> list:  # TODO: add data for the log
"""
Для поиска по тексту в поле summary_item таблицы rss_tab.
Добавляет данные в log_tab, при поиске.
:param text: подстрока, вхождение которой в summary_item необходимо найти.
:return: список словарей с результатами поиска из rss_tab.
"""
try:
        # Data for the log entry
list_for_log = ('None',
flask.request.remote_addr,
flask.request.user_agent.browser,
flask.request.user_agent.platform,)
contents = db_rss_item_mod.search_rss_by_text(text, list_for_log)
search_list = []
for content in contents:
search_dict = {}
search_dict['url_channel'] = content[0]
search_dict['title'] = content[1]
search_dict['summary'] = content[2]
search_dict['published'] = content[3]
search_dict['url'] = content[4]
search_list.append(search_dict)
return search_list
except Exception as err:
app.logger.info('text_search failed with this error: ' + str(err))
def date_search(date: str) -> list:
"""
Для поиска по дате в поле published_item таблицы rss_tab.
Добавляет данные в log_tab, при поиске.
:param date: подстрока, вхождение которой в published_item необходимо найти.
:return: список словарей с результатами поиска из rss_tab.
"""
try:
        # Data for the log entry
list_for_log = ('None',
flask.request.remote_addr,
flask.request.user_agent.browser,
flask.request.user_agent.platform,)
contents = db_rss_item_mod.search_rss_by_date(date, list_for_log)
search_list = []
for content in contents:
search_dict = {}
search_dict['url_channel'] = content[0]
search_dict['title'] = content[1]
search_dict['summary'] = content[2]
search_dict['published'] = content[3]
search_dict['url'] = content[4]
search_list.append(search_dict)
return search_list
except Exception as err:
app.logger.info('date_search failed with this error: ' + str(err))
def complex_search(text: str, date: str) -> list:
"""
Для комплексного поиска по тексту в поле summary_item, и дате в поле published_item таблицы rss_tab.
Добавляет данные в log_tab, при поиске.
:param text: подстрока, вхождение которой в summary_item необходимо найти.
:param date: подстрока, вхождение которой в published_item необходимо найти.
:return: список словарей с результатами поиска из rss_tab.
"""
try:
        # Data for the log entry
list_for_log = ('None',
flask.request.remote_addr,
flask.request.user_agent.browser,
flask.request.user_agent.platform,)
contents = db_rss_item_mod.search_rss_complex(text, date, list_for_log)
search_list = []
for content in contents:
search_dict = {}
search_dict['url_channel'] = content[0]
search_dict['title'] = content[1]
search_dict['summary'] = content[2]
search_dict['published'] = content[3]
search_dict['url'] = content[4]
search_list.append(search_dict)
return search_list
except Exception as err:
app.logger.info('complex_search failed with this error: ' + str(err))
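# --- Illustrative sketch (not part of the original module) ---
# A self-contained rerun of the two helpers nested inside add_rss() above:
# stripping HTML tags with a non-greedy regex and converting the RFC 822 style
# dates that RSS feeds publish into a sortable timestamp string. The sample
# values are made up, and %X is locale-dependent (it is %H:%M:%S in the C locale).
if __name__ == "__main__":
    import re
    from datetime import datetime

    raw_html = "<p>Hello <b>RSS</b> world</p>"
    clean_text = re.sub(re.compile('<.*?>'), '', raw_html)
    assert clean_text == "Hello RSS world"

    published = 'Mon, 06 Sep 2021 12:00:00 +0000'
    dt = datetime.strptime(published, '%a, %d %b %Y %X %z')
    print(dt.strftime('%Y-%m-%d %X %z'))  # -> 2021-09-06 12:00:00 +0000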
|
detect_simple2.py
|
#-*- coding:utf-8 -*-
from detect_simple import detect
from PIL import Image
import paho.mqtt.client as mqtt
import cv2
import argparse
import time
import threading
import queue
q = queue.Queue()
i = 0
def on_connect(client, userdata, flags, rc):
if rc == 0:
print("connected OK")
else:
print("Bad connection Returned code=", rc)
def on_disconnect(client, userdata, flags, rc=0):
print(str(rc))
def on_subscribe(client, userdata, mid, granted_qos):
print("subscribed: " + str(mid) + " " + str(granted_qos))
def on_message(client, userdata, msg):
message = str(msg.payload.decode("utf-8")).split('-')
camera, location, In_out = message
global i
#print(f"{camera}-{location}-{In_out}")
print("=============================")
print("q에 detect 넣기 : ", i)
print("=============================")
print()
q.put(("detected", i))
i += 1
return location, In_out
def detect_Snapshot(loc):
url = 'rtsp://ID:password@Ip/'
cap = cv2.VideoCapture(url)
ret, image = cap.read()
if not ret:
print("Snapshot Error!")
return
    # TODO: remove this later
if loc == "B1":
cv2.imwrite('./camera/camera1.jpg', image)
img_path = './camera/camera1.jpg'
elif loc == "B2":
cv2.imwrite('./camera/camera2.jpg', image)
img_path = './camera/camera2.jpg'
return img_path
def start(location, In_out):
detect_Snapshot(1)
#Snapshot(2)
img_path = ['camera1.jpg','camera2.jpg']
detect(img_path)
def subscribing():
client.on_message = on_message
client.loop_forever()
def Timesnap():
#print("Hello!")
    # put an item into the queue
global i
print("=============================")
print("q에 snapshot 넣기 :", i)
print("=============================")
print()
q.put(("snapshot",i))
i += 1
threading.Timer(10, Timesnap).start()
def event_queue():
while True:
item, num = q.get()
print(f"------detect start : {num} - {item}------")
time.sleep(20)
print(f"--------------detect {num} end-----------")
print()
q.task_done()
if __name__ == '__main__':
client = mqtt.Client()
    # Set the callbacks: on_connect (connected to the broker), on_disconnect (disconnected from the broker),
    # on_subscribe (subscribed to a topic), on_message (a published message arrived)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.on_subscribe = on_subscribe
    # connect to address: localhost, port: 1883
client.connect('Ip', "port")
    # subscribe to the common topic ('test')
client.subscribe('test', 1)
sub = threading.Thread(target = subscribing)
    event_thread = threading.Thread(target=event_queue, daemon=True)  # renamed so the event_queue function is not shadowed
sub.start()
Timesnap()
    event_thread.start()
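# --- Illustrative sketch (not part of the original script) ---
# The script above serialises work by pushing ("detected", i) and ("snapshot", i)
# tuples into a single queue.Queue and draining them from one worker thread.
# Below is a minimal, self-contained rerun of that producer/consumer pattern;
# _queue_pattern_demo is a made-up name and is never called automatically.
def _queue_pattern_demo():
    import queue
    import threading

    jobs = queue.Queue()

    def worker():
        while True:
            label, num = jobs.get()
            print("processing", label, num)  # stand-in for the detect() call
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()
    for n in range(3):
        jobs.put(("snapshot", n))
    jobs.join()  # block until every queued item has been processed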
|
webhook.py
|
from flask import Flask, request
import sys
import threading
import datetime
import logging
class WebHook:
class __WebHook:
def __init__(self, port):
self.app = Flask(__name__)
self.port = port
cli = sys.modules['flask.cli']
cli.show_server_banner = lambda *x: None
self.app.logger.disabled = True
log = logging.getLogger('werkzeug')
log.disabled = True
def __str__(self):
return repr(self)
def add(self, name, route, job):
self.app.add_url_rule(route, route, self.__wrap_job(name, job), methods=["GET", "POST"])
def __wrap_job(self, name, job):
def wrapped_job():
print(f"Webhook handler is executing task {name} at {datetime.datetime.now()}")
try:
if hasattr(request, "json") and request.json:
value = job(context=dict(request.json))
else:
value = job(context=dict(request.values))
if isinstance(value, str) or isinstance(value, tuple):
return value
else:
return "OK"
except Exception as e:
print(f"Error processing webhook trigger: {e}")
return "There was an error processing the request. Please see server logs for details.", 500
return wrapped_job
def listener(self):
self.app.run(port=self.port, host="0.0.0.0", debug=False, use_reloader=False)
def listen(self):
listener = threading.Thread(name='Webhook Listener', target=self.listener)
            listener.daemon = True  # setDaemon() is deprecated; set the attribute directly
listener.start()
instance = None
def __init__(self, port=5000):
if not WebHook.instance:
WebHook.instance = WebHook.__WebHook(port)
else:
WebHook.instance.port = port
def __getattr__(self, name):
return getattr(self.instance, name)
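# --- Illustrative usage sketch (not part of the original module) ---
# WebHook above is a singleton-style wrapper: the first WebHook() call creates
# the inner Flask app, later calls only update the port, and attribute access is
# proxied to the shared instance. The guarded example below registers a route
# and serves it from the daemon listener thread; the port, route and handler
# are made up for demonstration.
if __name__ == "__main__":
    import time

    hook = WebHook(port=8080)                          # hypothetical port
    hook.add("ping", "/ping", lambda context: "pong")  # the job receives the request payload as `context`
    hook.listen()                                      # Flask runs in a daemon thread
    time.sleep(5)                                      # keep the process alive long enough to try it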
|
runtests.py
|
#!/usr/bin/env python
from __future__ import print_function
import atexit
import os
import sys
import re
import gc
import heapq
import locale
import shutil
import time
import unittest
import doctest
import operator
import subprocess
import tempfile
import traceback
import warnings
import zlib
import glob
from contextlib import contextmanager
try:
import platform
IS_PYPY = platform.python_implementation() == 'PyPy'
IS_CPYTHON = platform.python_implementation() == 'CPython'
except (ImportError, AttributeError):
IS_CPYTHON = True
IS_PYPY = False
from io import open as io_open
try:
from StringIO import StringIO
except ImportError:
from io import StringIO # doesn't accept 'str' in Py2
try:
import cPickle as pickle
except ImportError:
import pickle
try:
import threading
except ImportError: # No threads, no problems
threading = None
try:
from collections import defaultdict
except ImportError:
class defaultdict(object):
def __init__(self, default_factory=lambda : None):
self._dict = {}
self.default_factory = default_factory
def __getitem__(self, key):
if key not in self._dict:
self._dict[key] = self.default_factory()
return self._dict[key]
def __setitem__(self, key, value):
self._dict[key] = value
def __contains__(self, key):
return key in self._dict
def __repr__(self):
return repr(self._dict)
def __nonzero__(self):
return bool(self._dict)
try:
from unittest import SkipTest
except ImportError:
class SkipTest(Exception): # don't raise, only provided to allow except-ing it!
pass
def skip_test(reason):
sys.stderr.write("Skipping test: %s\n" % reason)
else:
def skip_test(reason):
raise SkipTest(reason)
try:
basestring
except NameError:
basestring = str
WITH_CYTHON = True
CY3_DIR = None
from distutils.command.build_ext import build_ext as _build_ext
from distutils import sysconfig
from distutils import ccompiler
_to_clean = []
@atexit.register
def _cleanup_files():
"""
This is only used on Cygwin to clean up shared libraries that are unsafe
to delete while the test suite is running.
"""
for filename in _to_clean:
if os.path.isdir(filename):
shutil.rmtree(filename, ignore_errors=True)
else:
try:
os.remove(filename)
except OSError:
pass
def get_distutils_distro(_cache=[]):
if _cache:
return _cache[0]
# late import to accommodate for setuptools override
from distutils.dist import Distribution
distutils_distro = Distribution()
if sys.platform == 'win32':
# TODO: Figure out why this hackery (see http://thread.gmane.org/gmane.comp.python.cython.devel/8280/).
config_files = distutils_distro.find_config_files()
try:
config_files.remove('setup.cfg')
except ValueError:
pass
distutils_distro.parse_config_files(config_files)
cfgfiles = distutils_distro.find_config_files()
try:
cfgfiles.remove('setup.cfg')
except ValueError:
pass
distutils_distro.parse_config_files(cfgfiles)
_cache.append(distutils_distro)
return distutils_distro
EXT_DEP_MODULES = {
'tag:numpy': 'numpy',
'tag:numpy_old': 'numpy',
'tag:pythran': 'pythran',
'tag:setuptools': 'setuptools.sandbox',
'tag:asyncio': 'asyncio',
'tag:pstats': 'pstats',
'tag:posix': 'posix',
'tag:array': 'array',
'tag:coverage': 'Cython.Coverage',
'Coverage': 'Cython.Coverage',
'tag:ipython': 'IPython.testing.globalipapp',
'tag:jedi': 'jedi_BROKEN_AND_DISABLED',
'tag:test.support': 'test.support', # support module for CPython unit tests
}
def patch_inspect_isfunction():
import inspect
orig_isfunction = inspect.isfunction
def isfunction(obj):
return orig_isfunction(obj) or type(obj).__name__ == 'cython_function_or_method'
isfunction._orig_isfunction = orig_isfunction
inspect.isfunction = isfunction
def unpatch_inspect_isfunction():
import inspect
try:
orig_isfunction = inspect.isfunction._orig_isfunction
except AttributeError:
pass
else:
inspect.isfunction = orig_isfunction
def def_to_cdef(source):
'''
Converts the module-level def methods into cdef methods, i.e.
@decorator
def foo([args]):
"""
[tests]
"""
[body]
becomes
def foo([args]):
"""
[tests]
"""
return foo_c([args])
cdef foo_c([args]):
[body]
'''
output = []
skip = False
def_node = re.compile(r'def (\w+)\(([^()*]*)\):').match
lines = iter(source.split('\n'))
for line in lines:
if not line.strip():
output.append(line)
continue
if skip:
if line[0] != ' ':
skip = False
else:
continue
if line[0] == '@':
skip = True
continue
m = def_node(line)
if m:
name = m.group(1)
args = m.group(2)
if args:
args_no_types = ", ".join(arg.split()[-1] for arg in args.split(','))
else:
args_no_types = ""
output.append("def %s(%s):" % (name, args_no_types))
line = next(lines)
if '"""' in line:
has_docstring = True
output.append(line)
for line in lines:
output.append(line)
if '"""' in line:
break
else:
has_docstring = False
output.append(" return %s_c(%s)" % (name, args_no_types))
output.append('')
output.append("cdef %s_c(%s):" % (name, args))
if not has_docstring:
output.append(line)
else:
output.append(line)
return '\n'.join(output)
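# --- Illustrative sketch (not part of the original script) ---
# def_to_cdef() above rewrites a module-level def into a thin def wrapper that
# keeps the docstring plus a cdef implementation holding the body. The helper
# below (a made-up name, never called automatically) prints the rewrite for a
# tiny sample function so the transformation is easy to inspect.
def _def_to_cdef_example():
    src = (
        'def add(int a, int b):\n'
        '    """\n'
        '    >>> add(1, 2)\n'
        '    3\n'
        '    """\n'
        '    return a + b\n'
    )
    print(def_to_cdef(src))
    # Expected shape of the output:
    #   def add(a, b):
    #       """
    #       >>> add(1, 2)
    #       3
    #       """
    #       return add_c(a, b)
    #
    #   cdef add_c(int a, int b):
    #       return a + b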
def exclude_extension_in_pyver(*versions):
def check(ext):
return EXCLUDE_EXT if sys.version_info[:2] in versions else ext
return check
def exclude_extension_on_platform(*platforms):
def check(ext):
return EXCLUDE_EXT if sys.platform in platforms else ext
return check
def update_linetrace_extension(ext):
ext.define_macros.append(('CYTHON_TRACE', 1))
return ext
def update_old_numpy_extension(ext):
update_numpy_extension(ext, set_api17_macro=False)
def update_numpy_extension(ext, set_api17_macro=True):
import numpy
from numpy.distutils.misc_util import get_info
ext.include_dirs.append(numpy.get_include())
if set_api17_macro:
ext.define_macros.append(('NPY_NO_DEPRECATED_API', 'NPY_1_7_API_VERSION'))
# We need the npymath library for numpy.math.
# This is typically a static-only library.
for attr, value in get_info('npymath').items():
getattr(ext, attr).extend(value)
def update_openmp_extension(ext):
ext.openmp = True
language = ext.language
if sys.platform == 'win32' and sys.version_info[:2] == (3,4):
# OpenMP tests fail in appveyor in Py3.4 -> just ignore them, EoL of Py3.4 is early 2019...
return EXCLUDE_EXT
if language == 'cpp':
flags = OPENMP_CPP_COMPILER_FLAGS
else:
flags = OPENMP_C_COMPILER_FLAGS
if flags:
compile_flags, link_flags = flags
ext.extra_compile_args.extend(compile_flags.split())
ext.extra_link_args.extend(link_flags.split())
return ext
elif sys.platform == 'win32':
return ext
return EXCLUDE_EXT
def update_cpp11_extension(ext):
"""
update cpp11 extensions that will run on versions of gcc >4.8
"""
gcc_version = get_gcc_version(ext.language)
if gcc_version:
compiler_version = gcc_version.group(1)
if float(compiler_version) > 4.8:
ext.extra_compile_args.append("-std=c++11")
return ext
clang_version = get_clang_version(ext.language)
if clang_version:
ext.extra_compile_args.append("-std=c++11")
if sys.platform == "darwin":
ext.extra_compile_args.append("-stdlib=libc++")
ext.extra_compile_args.append("-mmacosx-version-min=10.7")
return ext
return EXCLUDE_EXT
def get_cc_version(language):
"""
finds gcc version using Popen
"""
if language == 'cpp':
cc = sysconfig.get_config_var('CXX')
else:
cc = sysconfig.get_config_var('CC')
if not cc:
cc = ccompiler.get_default_compiler()
if not cc:
return ''
# For some reason, cc can be e.g. 'gcc -pthread'
cc = cc.split()[0]
# Force english output
env = os.environ.copy()
env['LC_MESSAGES'] = 'C'
try:
p = subprocess.Popen([cc, "-v"], stderr=subprocess.PIPE, env=env)
except EnvironmentError:
# Be compatible with Python 3
warnings.warn("Unable to find the %s compiler: %s: %s" %
(language, os.strerror(sys.exc_info()[1].errno), cc))
return ''
_, output = p.communicate()
return output.decode(locale.getpreferredencoding() or 'ASCII', 'replace')
def get_gcc_version(language):
matcher = re.compile(r"gcc version (\d+\.\d+)").search
return matcher(get_cc_version(language))
def get_clang_version(language):
matcher = re.compile(r"clang(?:-|\s+version\s+)(\d+\.\d+)").search
return matcher(get_cc_version(language))
def get_openmp_compiler_flags(language):
"""
As of gcc 4.2, it supports OpenMP 2.5. Gcc 4.4 implements 3.0. We don't
(currently) check for other compilers.
returns a two-tuple of (CFLAGS, LDFLAGS) to build the OpenMP extension
"""
gcc_version = get_gcc_version(language)
if not gcc_version:
if sys.platform == 'win32':
return '/openmp', ''
else:
return None # not gcc - FIXME: do something about other compilers
# gcc defines "__int128_t", assume that at least all 64 bit architectures have it
global COMPILER_HAS_INT128
COMPILER_HAS_INT128 = getattr(sys, 'maxsize', getattr(sys, 'maxint', 0)) > 2**60
    compiler_version = gcc_version.group(1)
    # Compare version components numerically - a plain string comparison would
    # sort e.g. '10.1' before '4.2'.
    if compiler_version and [int(x) for x in compiler_version.split('.')] >= [4, 2]:
        return '-fopenmp', '-fopenmp'
try:
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
pass
COMPILER = None
COMPILER_HAS_INT128 = False
OPENMP_C_COMPILER_FLAGS = get_openmp_compiler_flags('c')
OPENMP_CPP_COMPILER_FLAGS = get_openmp_compiler_flags('cpp')
# Return this from the EXT_EXTRAS matcher callback to exclude the extension
EXCLUDE_EXT = object()
EXT_EXTRAS = {
'tag:numpy' : update_numpy_extension,
'tag:numpy_old' : update_old_numpy_extension,
'tag:openmp': update_openmp_extension,
'tag:cpp11': update_cpp11_extension,
'tag:trace' : update_linetrace_extension,
'tag:bytesformat': exclude_extension_in_pyver((3, 3), (3, 4)), # no %-bytes formatting
'tag:no-macos': exclude_extension_on_platform('darwin'),
}
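# Note (informational): the string keys above are converted lazily into selector
# objects via string_selector() in run_distutils(), so e.g. 'tag:numpy' matches
# any test whose header contains "# tag: numpy".  If the mapped fixer returns
# EXCLUDE_EXT, the test is skipped instead of being built.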
# TODO: use tags
VER_DEP_MODULES = {
# tests are excluded if 'CurrentPythonVersion OP VersionTuple', i.e.
# (2,4) : (operator.lt, ...) excludes ... when PyVer < 2.4.x
# The next line should start (3,); but this is a dictionary, so
# we can only have one (3,) key. Since 2.7 is supposed to be the
# last 2.x release, things would have to change drastically for this
# to be unsafe...
(2,999): (operator.lt, lambda x: x in ['run.special_methods_T561_py3',
'run.test_raisefrom',
]),
(3,): (operator.ge, lambda x: x in ['run.non_future_division',
'compile.extsetslice',
'compile.extdelslice',
'run.special_methods_T561_py2'
]),
(3,3) : (operator.lt, lambda x: x in ['build.package_compilation',
'run.yield_from_py33',
'pyximport.pyximport_namespace',
]),
(3,4): (operator.lt, lambda x: x in ['run.py34_signature',
'run.test_unicode', # taken from Py3.7, difficult to backport
]),
(3,4,999): (operator.gt, lambda x: x in ['run.initial_file_path',
]),
(3,5): (operator.lt, lambda x: x in ['run.py35_pep492_interop',
'run.py35_asyncio_async_def',
'run.mod__spec__',
'run.pep526_variable_annotations', # typing module
'run.test_exceptions', # copied from Py3.7+
]),
}
INCLUDE_DIRS = [ d for d in os.getenv('INCLUDE', '').split(os.pathsep) if d ]
CFLAGS = os.getenv('CFLAGS', '').split()
CCACHE = os.getenv('CYTHON_RUNTESTS_CCACHE', '').split()
TEST_SUPPORT_DIR = 'testsupport'
BACKENDS = ['c', 'cpp']
UTF8_BOM_BYTES = r'\xef\xbb\xbf'.encode('ISO-8859-1').decode('unicode_escape')
def memoize(f):
uncomputed = object()
f._cache = {}
def func(*args):
res = f._cache.get(args, uncomputed)
if res is uncomputed:
res = f._cache[args] = f(*args)
return res
return func
@memoize
def parse_tags(filepath):
tags = defaultdict(list)
parse_tag = re.compile(r'#\s*(\w+)\s*:(.*)$').match
with io_open(filepath, encoding='ISO-8859-1', errors='ignore') as f:
for line in f:
# ignore BOM-like bytes and whitespace
line = line.lstrip(UTF8_BOM_BYTES).strip()
if not line:
if tags:
break # assume all tags are in one block
else:
continue
if line[0] != '#':
break
parsed = parse_tag(line)
if parsed:
tag, values = parsed.groups()
if tag in ('coding', 'encoding'):
continue
if tag == 'tags':
tag = 'tag'
print("WARNING: test tags use the 'tag' directive, not 'tags' (%s)" % filepath)
if tag not in ('mode', 'tag', 'ticket', 'cython', 'distutils', 'preparse'):
print("WARNING: unknown test directive '%s' found (%s)" % (tag, filepath))
values = values.split(',')
tags[tag].extend(filter(None, [value.strip() for value in values]))
elif tags:
break # assume all tags are in one block
return tags
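# Illustrative example (hypothetical test file): a leading comment block such as
#
#     # mode: run
#     # tag: numpy, openmp
#     # ticket: 1234
#
# is parsed by parse_tags() into (roughly)
# {'mode': ['run'], 'tag': ['numpy', 'openmp'], 'ticket': ['1234']}.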
list_unchanging_dir = memoize(lambda x: os.listdir(x))
@memoize
def _list_pyregr_data_files(test_directory):
is_data_file = re.compile('(?:[.](txt|pem|db|html)|^bad.*[.]py)$').search
return ['__init__.py'] + [
filename for filename in list_unchanging_dir(test_directory)
if is_data_file(filename)]
def import_ext(module_name, file_path=None):
if file_path:
import imp
return imp.load_dynamic(module_name, file_path)
else:
try:
from importlib import invalidate_caches
except ImportError:
pass
else:
invalidate_caches()
return __import__(module_name, globals(), locals(), ['*'])
class build_ext(_build_ext):
def build_extension(self, ext):
try:
try: # Py2.7+ & Py3.2+
compiler_obj = self.compiler_obj
except AttributeError:
compiler_obj = self.compiler
if ext.language == 'c++':
compiler_obj.compiler_so.remove('-Wstrict-prototypes')
if CCACHE:
compiler_obj.compiler_so = CCACHE + compiler_obj.compiler_so
if getattr(ext, 'openmp', None) and compiler_obj.compiler_type == 'msvc':
ext.extra_compile_args.append('/openmp')
except Exception:
pass
_build_ext.build_extension(self, ext)
class ErrorWriter(object):
match_error = re.compile(r'(warning:)?(?:.*:)?\s*([-0-9]+)\s*:\s*([-0-9]+)\s*:\s*(.*)').match
def __init__(self):
self.output = []
self.write = self.output.append
def _collect(self):
s = ''.join(self.output)
results = {'errors': [], 'warnings': []}
for line in s.splitlines():
match = self.match_error(line)
if match:
is_warning, line, column, message = match.groups()
results['warnings' if is_warning else 'errors'].append((int(line), int(column), message.strip()))
return [["%d:%d: %s" % values for values in sorted(results[key])] for key in ('errors', 'warnings')]
def geterrors(self):
return self._collect()[0]
def getwarnings(self):
return self._collect()[1]
def getall(self):
return self._collect()
def close(self):
pass # ignore, only to match file-like interface
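# Note (informational): ErrorWriter is swapped in for sys.stderr while Cython
# compiles a test, and its regex recognises diagnostics of the form
#     <file>:<line>:<col>: <message>            (errors)
#     warning: <file>:<line>:<col>: <message>   (warnings)
# geterrors()/getwarnings() then return them as sorted "line:col: message" strings.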
class Stats(object):
def __init__(self, top_n=8):
self.top_n = top_n
self.test_counts = defaultdict(int)
self.test_times = defaultdict(float)
self.top_tests = defaultdict(list)
def add_time(self, name, language, metric, t):
self.test_counts[metric] += 1
self.test_times[metric] += t
top = self.top_tests[metric]
push = heapq.heappushpop if len(top) >= self.top_n else heapq.heappush
# min-heap => pop smallest/shortest until longest times remain
push(top, (t, name, language))
@contextmanager
def time(self, name, language, metric):
t = time.time()
yield
t = time.time() - t
self.add_time(name, language, metric, t)
def update(self, stats):
# type: (Stats) -> None
for metric, t in stats.test_times.items():
self.test_times[metric] += t
self.test_counts[metric] += stats.test_counts[metric]
top = self.top_tests[metric]
for entry in stats.top_tests[metric]:
push = heapq.heappushpop if len(top) >= self.top_n else heapq.heappush
push(top, entry)
def print_stats(self, out=sys.stderr):
if not self.test_times:
return
lines = ['Times:\n']
for metric, t in sorted(self.test_times.items()):
count = self.test_counts[metric]
top = self.top_tests[metric]
lines.append("%-12s: %8.2f sec (%4d, %6.3f / run) - slowest: %s\n" % (
metric, t, count, t / count,
', '.join("'{2}:{1}' ({0:.2f}s)".format(*item) for item in heapq.nlargest(self.top_n, top))))
out.write(''.join(lines))
class TestBuilder(object):
def __init__(self, rootdir, workdir, selectors, exclude_selectors, options,
with_pyregr, languages, test_bugs, language_level,
common_utility_dir, pythran_dir=None,
default_mode='run', stats=None,
add_embedded_test=False):
self.rootdir = rootdir
self.workdir = workdir
self.selectors = selectors
self.exclude_selectors = exclude_selectors
self.annotate = options.annotate_source
self.cleanup_workdir = options.cleanup_workdir
self.cleanup_sharedlibs = options.cleanup_sharedlibs
self.cleanup_failures = options.cleanup_failures
self.with_pyregr = with_pyregr
self.cython_only = options.cython_only
self.languages = languages
self.test_bugs = test_bugs
self.fork = options.fork
self.language_level = language_level
self.test_determinism = options.test_determinism
self.common_utility_dir = common_utility_dir
self.pythran_dir = pythran_dir
self.default_mode = default_mode
self.stats = stats
self.add_embedded_test = add_embedded_test
def build_suite(self):
suite = unittest.TestSuite()
filenames = os.listdir(self.rootdir)
filenames.sort()
for filename in filenames:
path = os.path.join(self.rootdir, filename)
if os.path.isdir(path) and filename != TEST_SUPPORT_DIR:
if filename == 'pyregr' and not self.with_pyregr:
continue
if filename == 'broken' and not self.test_bugs:
continue
suite.addTest(
self.handle_directory(path, filename))
if sys.platform not in ['win32'] and self.add_embedded_test:
# Non-Windows makefile.
if [1 for selector in self.selectors if selector("embedded")] \
and not [1 for selector in self.exclude_selectors if selector("embedded")]:
suite.addTest(unittest.makeSuite(EmbedTest))
return suite
def handle_directory(self, path, context):
workdir = os.path.join(self.workdir, context)
if not os.path.exists(workdir):
os.makedirs(workdir)
suite = unittest.TestSuite()
filenames = list_unchanging_dir(path)
filenames.sort()
for filename in filenames:
filepath = os.path.join(path, filename)
module, ext = os.path.splitext(filename)
if ext not in ('.py', '.pyx', '.srctree'):
continue
if filename.startswith('.'):
continue # certain emacs backup files
if context == 'pyregr':
tags = defaultdict(list)
else:
tags = parse_tags(filepath)
fqmodule = "%s.%s" % (context, module)
if not [ 1 for match in self.selectors
if match(fqmodule, tags) ]:
continue
if self.exclude_selectors:
if [1 for match in self.exclude_selectors
if match(fqmodule, tags)]:
continue
mode = self.default_mode
if tags['mode']:
mode = tags['mode'][0]
elif context == 'pyregr':
mode = 'pyregr'
if ext == '.srctree':
if 'cpp' not in tags['tag'] or 'cpp' in self.languages:
suite.addTest(EndToEndTest(filepath, workdir, self.cleanup_workdir, stats=self.stats))
continue
# Choose the test suite.
if mode == 'pyregr':
if not filename.startswith('test_'):
continue
test_class = CythonPyregrTestCase
elif mode == 'run':
if module.startswith("test_"):
test_class = CythonUnitTestCase
else:
test_class = CythonRunTestCase
elif mode in ['compile', 'error']:
test_class = CythonCompileTestCase
else:
raise KeyError('Invalid test mode: ' + mode)
for test in self.build_tests(test_class, path, workdir,
module, mode == 'error', tags):
suite.addTest(test)
if mode == 'run' and ext == '.py' and not self.cython_only and not filename.startswith('test_'):
# additionally test file in real Python
min_py_ver = [
(int(pyver.group(1)), int(pyver.group(2)))
for pyver in map(re.compile(r'pure([0-9]+)[.]([0-9]+)').match, tags['tag'])
if pyver
]
if not min_py_ver or any(sys.version_info >= min_ver for min_ver in min_py_ver):
suite.addTest(PureDoctestTestCase(module, os.path.join(path, filename), tags, stats=self.stats))
return suite
def build_tests(self, test_class, path, workdir, module, expect_errors, tags):
warning_errors = 'werror' in tags['tag']
expect_warnings = 'warnings' in tags['tag']
if expect_errors:
if skip_c(tags) and 'cpp' in self.languages:
languages = ['cpp']
else:
languages = self.languages[:1]
else:
languages = self.languages
if skip_c(tags) and 'c' in languages:
languages = list(languages)
languages.remove('c')
elif 'no-cpp' in tags['tag'] and 'cpp' in self.languages:
languages = list(languages)
languages.remove('cpp')
pythran_dir = self.pythran_dir
if 'pythran' in tags['tag'] and not pythran_dir and 'cpp' in languages:
import pythran.config
from pythran import __version__ as pythran_version
pythran_ext = (
pythran.config.make_extension(python=True)
if pythran_version >= '0.9' or pythran_version >= '0.8.7'
else pythran.config.make_extension()
)
pythran_dir = pythran_ext['include_dirs'][0]
preparse_list = tags.get('preparse', ['id'])
tests = [ self.build_test(test_class, path, workdir, module, tags, language,
expect_errors, expect_warnings, warning_errors, preparse,
pythran_dir if language == "cpp" else None)
for language in languages
for preparse in preparse_list ]
return tests
def build_test(self, test_class, path, workdir, module, tags, language,
expect_errors, expect_warnings, warning_errors, preparse, pythran_dir):
language_workdir = os.path.join(workdir, language)
if not os.path.exists(language_workdir):
os.makedirs(language_workdir)
workdir = os.path.join(language_workdir, module)
if preparse != 'id':
workdir += '_%s' % str(preparse)
return test_class(path, workdir, module, tags,
language=language,
preparse=preparse,
expect_errors=expect_errors,
expect_warnings=expect_warnings,
annotate=self.annotate,
cleanup_workdir=self.cleanup_workdir,
cleanup_sharedlibs=self.cleanup_sharedlibs,
cleanup_failures=self.cleanup_failures,
cython_only=self.cython_only,
fork=self.fork,
language_level=self.language_level,
warning_errors=warning_errors,
test_determinism=self.test_determinism,
common_utility_dir=self.common_utility_dir,
pythran_dir=pythran_dir,
stats=self.stats)
def skip_c(tags):
if 'cpp' in tags['tag']:
return True
# We don't want to create a distutils key in the
# dictionary so we check before looping.
if 'distutils' in tags:
for option in tags['distutils']:
splitted = option.split('=')
if len(splitted) == 2:
argument, value = splitted
if argument.strip() == 'language' and value.strip() == 'c++':
return True
return False
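# Illustrative example: a test carrying either a "# tag: cpp" directive or a
# "# distutils: language = c++" directive makes skip_c() return True, so the
# plain C backend is dropped for that test in build_tests() above.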
def filter_stderr(stderr_bytes):
"""
Filter annoying warnings from output.
"""
if b"Command line warning D9025" in stderr_bytes:
        # MSVC: cl : Command line warning D9025 : overriding '/Ox' with '/Od'
stderr_bytes = b'\n'.join(
line for line in stderr_bytes.splitlines()
if b"Command line warning D9025" not in line)
return stderr_bytes
class CythonCompileTestCase(unittest.TestCase):
def __init__(self, test_directory, workdir, module, tags, language='c', preparse='id',
expect_errors=False, expect_warnings=False, annotate=False, cleanup_workdir=True,
cleanup_sharedlibs=True, cleanup_failures=True, cython_only=False,
fork=True, language_level=2, warning_errors=False,
test_determinism=False,
common_utility_dir=None, pythran_dir=None, stats=None):
self.test_directory = test_directory
self.tags = tags
self.workdir = workdir
self.module = module
self.language = language
self.preparse = preparse
self.name = module if self.preparse == "id" else "%s_%s" % (module, preparse)
self.expect_errors = expect_errors
self.expect_warnings = expect_warnings
self.annotate = annotate
self.cleanup_workdir = cleanup_workdir
self.cleanup_sharedlibs = cleanup_sharedlibs
self.cleanup_failures = cleanup_failures
self.cython_only = cython_only
self.fork = fork
self.language_level = language_level
self.warning_errors = warning_errors
self.test_determinism = test_determinism
self.common_utility_dir = common_utility_dir
self.pythran_dir = pythran_dir
self.stats = stats
unittest.TestCase.__init__(self)
def shortDescription(self):
return "compiling (%s%s) %s" % (self.language, "/pythran" if self.pythran_dir is not None else "", self.name)
def setUp(self):
from Cython.Compiler import Options
self._saved_options = [
(name, getattr(Options, name))
for name in ('warning_errors', 'clear_to_none', 'error_on_unknown_names', 'error_on_uninitialized')
]
self._saved_default_directives = list(Options.get_directive_defaults().items())
Options.warning_errors = self.warning_errors
if sys.version_info >= (3, 4):
Options._directive_defaults['autotestdict'] = False
if not os.path.exists(self.workdir):
os.makedirs(self.workdir)
if self.workdir not in sys.path:
sys.path.insert(0, self.workdir)
def tearDown(self):
from Cython.Compiler import Options
for name, value in self._saved_options:
setattr(Options, name, value)
Options._directive_defaults = dict(self._saved_default_directives)
unpatch_inspect_isfunction()
try:
sys.path.remove(self.workdir)
except ValueError:
pass
try:
del sys.modules[self.module]
except KeyError:
pass
cleanup = self.cleanup_failures or self.success
cleanup_c_files = WITH_CYTHON and self.cleanup_workdir and cleanup
cleanup_lib_files = self.cleanup_sharedlibs and cleanup
is_cygwin = sys.platform == 'cygwin'
if os.path.exists(self.workdir):
if cleanup_c_files and cleanup_lib_files and not is_cygwin:
shutil.rmtree(self.workdir, ignore_errors=True)
else:
for rmfile in os.listdir(self.workdir):
if not cleanup_c_files:
if (rmfile[-2:] in (".c", ".h") or
rmfile[-4:] == ".cpp" or
rmfile.endswith(".html") and rmfile.startswith(self.module)):
continue
is_shared_obj = rmfile.endswith(".so") or rmfile.endswith(".dll")
if not cleanup_lib_files and is_shared_obj:
continue
try:
rmfile = os.path.join(self.workdir, rmfile)
if os.path.isdir(rmfile):
shutil.rmtree(rmfile, ignore_errors=True)
elif is_cygwin and is_shared_obj:
# Delete later
_to_clean.append(rmfile)
else:
os.remove(rmfile)
except IOError:
pass
if cleanup_c_files and cleanup_lib_files and is_cygwin:
# Finally, remove the work dir itself
_to_clean.append(self.workdir)
if cleanup_c_files and os.path.exists(self.workdir + '-again'):
shutil.rmtree(self.workdir + '-again', ignore_errors=True)
def runTest(self):
self.success = False
self.runCompileTest()
self.success = True
def runCompileTest(self):
return self.compile(
self.test_directory, self.module, self.workdir,
self.test_directory, self.expect_errors, self.expect_warnings, self.annotate)
def find_module_source_file(self, source_file):
if not os.path.exists(source_file):
source_file = source_file[:-1]
return source_file
def build_target_filename(self, module_name):
target = '%s.%s' % (module_name, self.language)
return target
def related_files(self, test_directory, module_name):
is_related = re.compile('%s_.*[.].*' % module_name).match
return [filename for filename in list_unchanging_dir(test_directory)
if is_related(filename)]
def copy_files(self, test_directory, target_directory, file_list):
if self.preparse and self.preparse != 'id':
preparse_func = globals()[self.preparse]
def copy(src, dest):
with open(src) as fin:
with open(dest, 'w') as fout:
fout.write(preparse_func(fin.read()))
else:
# use symlink on Unix, copy on Windows
try:
copy = os.symlink
except AttributeError:
copy = shutil.copy
join = os.path.join
for filename in file_list:
file_path = join(test_directory, filename)
if os.path.exists(file_path):
copy(file_path, join(target_directory, filename))
def source_files(self, workdir, module_name, file_list):
return ([self.build_target_filename(module_name)] +
[filename for filename in file_list
if not os.path.isfile(os.path.join(workdir, filename))])
def split_source_and_output(self, test_directory, module, workdir):
source_file = self.find_module_source_file(os.path.join(test_directory, module) + '.pyx')
with io_open(source_file, 'r', encoding='ISO-8859-1') as source_and_output:
error_writer = warnings_writer = None
out = io_open(os.path.join(workdir, module + os.path.splitext(source_file)[1]),
'w', encoding='ISO-8859-1')
try:
for line in source_and_output:
if line.startswith("_ERRORS"):
out.close()
out = error_writer = ErrorWriter()
elif line.startswith("_WARNINGS"):
out.close()
out = warnings_writer = ErrorWriter()
else:
out.write(line)
finally:
out.close()
return (error_writer.geterrors() if error_writer else [],
warnings_writer.geterrors() if warnings_writer else [])
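    # Illustrative example (hypothetical error-mode test): the module source is
    # copied to the workdir until a line starting with "_ERRORS" is reached; the
    # remaining lines, e.g.
    #     10:5: 'foo' is not declared
    # are collected through ErrorWriter and returned as the expected errors
    # (and analogously for a trailing "_WARNINGS" block).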
def run_cython(self, test_directory, module, targetdir, incdir, annotate,
extra_compile_options=None):
include_dirs = INCLUDE_DIRS + [os.path.join(test_directory, '..', TEST_SUPPORT_DIR)]
if incdir:
include_dirs.append(incdir)
if self.preparse == 'id':
source = self.find_module_source_file(
os.path.join(test_directory, module + '.pyx'))
else:
self.copy_files(test_directory, targetdir, [module + '.pyx'])
source = os.path.join(targetdir, module + '.pyx')
target = os.path.join(targetdir, self.build_target_filename(module))
if extra_compile_options is None:
extra_compile_options = {}
if 'allow_unknown_names' in self.tags['tag']:
from Cython.Compiler import Options
Options.error_on_unknown_names = False
try:
CompilationOptions
except NameError:
from Cython.Compiler.Main import CompilationOptions
from Cython.Compiler.Main import compile as cython_compile
from Cython.Compiler.Main import default_options
common_utility_include_dir = self.common_utility_dir
options = CompilationOptions(
default_options,
include_path = include_dirs,
output_file = target,
annotate = annotate,
use_listing_file = False,
cplus = self.language == 'cpp',
np_pythran = self.pythran_dir is not None,
language_level = self.language_level,
generate_pxi = False,
evaluate_tree_assertions = True,
common_utility_include_dir = common_utility_include_dir,
**extra_compile_options
)
cython_compile(source, options=options,
full_module_name=module)
def run_distutils(self, test_directory, module, workdir, incdir,
extra_extension_args=None):
cwd = os.getcwd()
os.chdir(workdir)
try:
build_extension = build_ext(get_distutils_distro())
build_extension.include_dirs = INCLUDE_DIRS[:]
if incdir:
build_extension.include_dirs.append(incdir)
build_extension.finalize_options()
if COMPILER:
build_extension.compiler = COMPILER
ext_compile_flags = CFLAGS[:]
if build_extension.compiler == 'mingw32':
ext_compile_flags.append('-Wno-format')
if extra_extension_args is None:
extra_extension_args = {}
related_files = self.related_files(test_directory, module)
self.copy_files(test_directory, workdir, related_files)
from distutils.core import Extension
extension = Extension(
module,
sources=self.source_files(workdir, module, related_files),
extra_compile_args=ext_compile_flags,
**extra_extension_args
)
if self.language == 'cpp':
# Set the language now as the fixer might need it
extension.language = 'c++'
if 'distutils' in self.tags:
from Cython.Build.Dependencies import DistutilsInfo
from Cython.Utils import open_source_file
pyx_path = os.path.join(self.test_directory, self.module + ".pyx")
with open_source_file(pyx_path) as f:
DistutilsInfo(f).apply(extension)
if self.pythran_dir:
from Cython.Build.Dependencies import update_pythran_extension
update_pythran_extension(extension)
for matcher, fixer in list(EXT_EXTRAS.items()):
if isinstance(matcher, str):
# lazy init
del EXT_EXTRAS[matcher]
matcher = string_selector(matcher)
EXT_EXTRAS[matcher] = fixer
if matcher(module, self.tags):
newext = fixer(extension)
if newext is EXCLUDE_EXT:
return skip_test("Test '%s' excluded due to tags '%s'" % (
self.name, ', '.join(self.tags.get('tag', ''))))
extension = newext or extension
if self.language == 'cpp':
extension.language = 'c++'
build_extension.extensions = [extension]
build_extension.build_temp = workdir
build_extension.build_lib = workdir
build_extension.run()
finally:
os.chdir(cwd)
try:
get_ext_fullpath = build_extension.get_ext_fullpath
except AttributeError:
def get_ext_fullpath(ext_name, self=build_extension):
# copied from distutils.command.build_ext (missing in Py2.[45])
fullname = self.get_ext_fullname(ext_name)
modpath = fullname.split('.')
filename = self.get_ext_filename(modpath[-1])
if not self.inplace:
filename = os.path.join(*modpath[:-1]+[filename])
return os.path.join(self.build_lib, filename)
package = '.'.join(modpath[0:-1])
build_py = self.get_finalized_command('build_py')
package_dir = os.path.abspath(build_py.get_package_dir(package))
return os.path.join(package_dir, filename)
return get_ext_fullpath(module)
def compile(self, test_directory, module, workdir, incdir,
expect_errors, expect_warnings, annotate):
expected_errors = expected_warnings = errors = warnings = ()
if expect_errors or expect_warnings:
expected_errors, expected_warnings = self.split_source_and_output(
test_directory, module, workdir)
test_directory = workdir
if WITH_CYTHON:
old_stderr = sys.stderr
try:
sys.stderr = ErrorWriter()
with self.stats.time(self.name, self.language, 'cython'):
self.run_cython(test_directory, module, workdir, incdir, annotate)
errors, warnings = sys.stderr.getall()
finally:
sys.stderr = old_stderr
if self.test_determinism and not expect_errors:
workdir2 = workdir + '-again'
os.mkdir(workdir2)
self.run_cython(test_directory, module, workdir2, incdir, annotate)
diffs = []
for file in os.listdir(workdir2):
if (open(os.path.join(workdir, file)).read()
!= open(os.path.join(workdir2, file)).read()):
diffs.append(file)
os.system('diff -u %s/%s %s/%s > %s/%s.diff' % (
workdir, file,
workdir2, file,
workdir2, file))
if diffs:
self.fail('Nondeterministic file generation: %s' % ', '.join(diffs))
tostderr = sys.__stderr__.write
if expected_warnings or (expect_warnings and warnings):
self._match_output(expected_warnings, warnings, tostderr)
if 'cerror' in self.tags['tag']:
if errors:
tostderr("\n=== Expected C compile error ===\n")
tostderr("\n=== Got Cython errors: ===\n")
tostderr('\n'.join(errors))
tostderr('\n\n')
raise RuntimeError('should have generated extension code')
elif errors or expected_errors:
self._match_output(expected_errors, errors, tostderr)
return None
so_path = None
if not self.cython_only:
from Cython.Utils import captured_fd, print_bytes
from distutils.errors import CompileError, LinkError
show_output = True
get_stderr = get_stdout = None
try:
with captured_fd(1) as get_stdout:
with captured_fd(2) as get_stderr:
with self.stats.time(self.name, self.language, 'compile-%s' % self.language):
so_path = self.run_distutils(test_directory, module, workdir, incdir)
except Exception as exc:
if ('cerror' in self.tags['tag'] and
((get_stderr and get_stderr()) or
isinstance(exc, (CompileError, LinkError)))):
show_output = False # expected C compiler failure
else:
raise
else:
if 'cerror' in self.tags['tag']:
raise RuntimeError('should have failed C compile')
finally:
if show_output:
stdout = get_stdout and get_stdout().strip()
if stdout:
print_bytes(
stdout, header_text="\n=== C/C++ compiler output: =========\n",
end=None, file=sys.__stderr__)
stderr = get_stderr and filter_stderr(get_stderr()).strip()
if stderr:
print_bytes(
stderr, header_text="\n=== C/C++ compiler error output: ===\n",
end=None, file=sys.__stderr__)
if stdout or stderr:
tostderr("\n====================================\n")
return so_path
def _match_output(self, expected_output, actual_output, write):
try:
for expected, actual in zip(expected_output, actual_output):
self.assertEqual(expected, actual)
if len(actual_output) < len(expected_output):
expected = expected_output[len(actual_output)]
self.assertEqual(expected, None)
elif len(actual_output) > len(expected_output):
unexpected = actual_output[len(expected_output)]
self.assertEqual(None, unexpected)
except AssertionError:
write("\n=== Expected: ===\n")
write('\n'.join(expected_output))
write("\n\n=== Got: ===\n")
write('\n'.join(actual_output))
write('\n\n')
raise
class CythonRunTestCase(CythonCompileTestCase):
def setUp(self):
CythonCompileTestCase.setUp(self)
from Cython.Compiler import Options
Options.clear_to_none = False
def shortDescription(self):
if self.cython_only:
return CythonCompileTestCase.shortDescription(self)
else:
return "compiling (%s%s) and running %s" % (self.language, "/pythran" if self.pythran_dir is not None else "", self.name)
def run(self, result=None):
if result is None:
result = self.defaultTestResult()
result.startTest(self)
try:
self.setUp()
try:
self.success = False
ext_so_path = self.runCompileTest()
failures, errors, skipped = len(result.failures), len(result.errors), len(result.skipped)
if not self.cython_only and ext_so_path is not None:
self.run_tests(result, ext_so_path)
if failures == len(result.failures) and errors == len(result.errors):
# No new errors...
self.success = True
finally:
check_thread_termination()
except SkipTest as exc:
result.addSkip(self, str(exc))
result.stopTest(self)
except Exception:
result.addError(self, sys.exc_info())
result.stopTest(self)
try:
self.tearDown()
except Exception:
pass
def run_tests(self, result, ext_so_path):
self.run_doctests(self.module, result, ext_so_path)
def run_doctests(self, module_or_name, result, ext_so_path):
def run_test(result):
if isinstance(module_or_name, basestring):
with self.stats.time(self.name, self.language, 'import'):
module = import_ext(module_or_name, ext_so_path)
else:
module = module_or_name
tests = doctest.DocTestSuite(module)
with self.stats.time(self.name, self.language, 'run'):
tests.run(result)
run_forked_test(result, run_test, self.shortDescription(), self.fork)
def run_forked_test(result, run_func, test_name, fork=True):
if not fork or sys.version_info[0] >= 3 or not hasattr(os, 'fork'):
run_func(result)
sys.stdout.flush()
sys.stderr.flush()
gc.collect()
return
# fork to make sure we do not keep the tested module loaded
result_handle, result_file = tempfile.mkstemp()
os.close(result_handle)
child_id = os.fork()
if not child_id:
result_code = 0
output = None
try:
try:
tests = partial_result = None
try:
partial_result = PartialTestResult(result)
run_func(partial_result)
sys.stdout.flush()
sys.stderr.flush()
gc.collect()
except Exception:
result_code = 1
if partial_result is not None:
if tests is None:
# importing failed, try to fake a test class
tests = _FakeClass(
failureException=sys.exc_info()[1],
_shortDescription=test_name,
module_name=None)
partial_result.addError(tests, sys.exc_info())
output = open(result_file, 'wb')
pickle.dump(partial_result.data(), output)
except:
traceback.print_exc()
finally:
try: sys.stderr.flush()
except: pass
try: sys.stdout.flush()
except: pass
try:
if output is not None:
output.close()
except:
pass
os._exit(result_code)
try:
cid, result_code = os.waitpid(child_id, 0)
module_name = test_name.split()[-1]
# os.waitpid returns the child's result code in the
# upper byte of result_code, and the signal it was
# killed by in the lower byte
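        # e.g. a status of 0x0100 decodes to exit code 1 with no signal, while
        # 0x000b means the child was killed by SIGSEGV (signal 11).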
if result_code & 255:
raise Exception("Tests in module '%s' were unexpectedly killed by signal %d"%
(module_name, result_code & 255))
result_code >>= 8
if result_code in (0,1):
input = open(result_file, 'rb')
try:
PartialTestResult.join_results(result, pickle.load(input))
finally:
input.close()
if result_code:
raise Exception("Tests in module '%s' exited with status %d" %
(module_name, result_code))
finally:
try:
os.unlink(result_file)
except:
pass
class PureDoctestTestCase(unittest.TestCase):
def __init__(self, module_name, module_path, tags, stats=None):
self.tags = tags
self.module_name = self.name = module_name
self.module_path = module_path
self.stats = stats
unittest.TestCase.__init__(self, 'run')
def shortDescription(self):
return "running pure doctests in %s" % self.module_name
def run(self, result=None):
if result is None:
result = self.defaultTestResult()
loaded_module_name = 'pure_doctest__' + self.module_name
result.startTest(self)
try:
self.setUp()
import imp
with self.stats.time(self.name, 'py', 'pyimport'):
m = imp.load_source(loaded_module_name, self.module_path)
try:
with self.stats.time(self.name, 'py', 'pyrun'):
doctest.DocTestSuite(m).run(result)
finally:
del m
if loaded_module_name in sys.modules:
del sys.modules[loaded_module_name]
check_thread_termination()
except Exception:
result.addError(self, sys.exc_info())
result.stopTest(self)
try:
self.tearDown()
except Exception:
pass
if 'mypy' in self.tags['tag']:
try:
from mypy import api as mypy_api
except ImportError:
pass
else:
with self.stats.time(self.name, 'py', 'mypy'):
mypy_result = mypy_api.run((
self.module_path,
'--ignore-missing-imports',
'--follow-imports', 'skip',
))
if mypy_result[2]:
self.fail(mypy_result[0])
is_private_field = re.compile('^_[^_]').match
class _FakeClass(object):
def __init__(self, **kwargs):
self._shortDescription = kwargs.get('module_name')
self.__dict__.update(kwargs)
def shortDescription(self):
return self._shortDescription
try: # Py2.7+ and Py3.2+
from unittest.runner import _TextTestResult
except ImportError:
from unittest import _TextTestResult
class PartialTestResult(_TextTestResult):
def __init__(self, base_result):
_TextTestResult.__init__(
self, self._StringIO(), True,
base_result.dots + base_result.showAll*2)
def strip_error_results(self, results):
for test_case, error in results:
for attr_name in filter(is_private_field, dir(test_case)):
if attr_name == '_dt_test':
test_case._dt_test = _FakeClass(
name=test_case._dt_test.name)
elif attr_name != '_shortDescription':
setattr(test_case, attr_name, None)
def data(self):
self.strip_error_results(self.failures)
self.strip_error_results(self.errors)
return (self.failures, self.errors, self.skipped, self.testsRun,
self.stream.getvalue())
def join_results(result, data):
"""Static method for merging the result back into the main
result object.
"""
failures, errors, skipped, tests_run, output = data
if output:
result.stream.write(output)
result.errors.extend(errors)
result.skipped.extend(skipped)
result.failures.extend(failures)
result.testsRun += tests_run
join_results = staticmethod(join_results)
class _StringIO(StringIO):
def writeln(self, line):
self.write("%s\n" % line)
class CythonUnitTestCase(CythonRunTestCase):
def shortDescription(self):
return "compiling (%s) tests in %s" % (self.language, self.name)
def run_tests(self, result, ext_so_path):
with self.stats.time(self.name, self.language, 'import'):
module = import_ext(self.module, ext_so_path)
tests = unittest.defaultTestLoader.loadTestsFromModule(module)
with self.stats.time(self.name, self.language, 'run'):
tests.run(result)
class CythonPyregrTestCase(CythonRunTestCase):
def setUp(self):
CythonRunTestCase.setUp(self)
from Cython.Compiler import Options
Options.error_on_unknown_names = False
Options.error_on_uninitialized = False
Options._directive_defaults.update(dict(
binding=True, always_allow_keywords=True,
set_initial_path="SOURCEFILE"))
patch_inspect_isfunction()
def related_files(self, test_directory, module_name):
return _list_pyregr_data_files(test_directory)
def _run_unittest(self, result, *classes):
"""Run tests from unittest.TestCase-derived classes."""
valid_types = (unittest.TestSuite, unittest.TestCase)
suite = unittest.TestSuite()
for cls in classes:
if isinstance(cls, str):
if cls in sys.modules:
suite.addTest(unittest.findTestCases(sys.modules[cls]))
else:
raise ValueError("str arguments must be keys in sys.modules")
elif isinstance(cls, valid_types):
suite.addTest(cls)
else:
suite.addTest(unittest.makeSuite(cls))
with self.stats.time(self.name, self.language, 'run'):
suite.run(result)
def _run_doctest(self, result, module):
self.run_doctests(module, result, None)
def run_tests(self, result, ext_so_path):
try:
from test import support
except ImportError: # Python2.x
from test import test_support as support
def run_test(result):
def run_unittest(*classes):
return self._run_unittest(result, *classes)
def run_doctest(module, verbosity=None):
return self._run_doctest(result, module)
backup = (support.run_unittest, support.run_doctest)
support.run_unittest = run_unittest
support.run_doctest = run_doctest
try:
try:
sys.stdout.flush() # helps in case of crashes
with self.stats.time(self.name, self.language, 'import'):
module = import_ext(self.module, ext_so_path)
sys.stdout.flush() # helps in case of crashes
if hasattr(module, 'test_main'):
# help 'doctest.DocFileTest' find the module path through frame inspection
fake_caller_module_globals = {
'module': module,
'__name__': module.__name__,
}
call_tests = eval(
'lambda: module.test_main()',
fake_caller_module_globals, fake_caller_module_globals)
call_tests()
sys.stdout.flush() # helps in case of crashes
except (unittest.SkipTest, support.ResourceDenied):
result.addSkip(self, 'ok')
finally:
support.run_unittest, support.run_doctest = backup
run_forked_test(result, run_test, self.shortDescription(), self.fork)
class TestCodeFormat(unittest.TestCase):
def __init__(self, cython_dir):
self.cython_dir = cython_dir
unittest.TestCase.__init__(self)
def runTest(self):
import pycodestyle
config_file = os.path.join(self.cython_dir, "tox.ini")
paths = glob.glob(os.path.join(self.cython_dir, "**/*.py"), recursive=True)
style = pycodestyle.StyleGuide(config_file=config_file)
print("") # Fix the first line of the report.
result = style.check_files(paths)
self.assertEqual(result.total_errors, 0, "Found code style errors.")
include_debugger = IS_CPYTHON
def collect_unittests(path, module_prefix, suite, selectors, exclude_selectors):
def file_matches(filename):
return filename.startswith("Test") and filename.endswith(".py")
def package_matches(dirname):
return dirname == "Tests"
loader = unittest.TestLoader()
if include_debugger:
skipped_dirs = []
else:
skipped_dirs = ['Cython' + os.path.sep + 'Debugger' + os.path.sep]
for dirpath, dirnames, filenames in os.walk(path):
if dirpath != path and "__init__.py" not in filenames:
skipped_dirs.append(dirpath + os.path.sep)
continue
skip = False
for dir in skipped_dirs:
if dirpath.startswith(dir):
skip = True
if skip:
continue
parentname = os.path.split(dirpath)[-1]
if package_matches(parentname):
for f in filenames:
if file_matches(f):
filepath = os.path.join(dirpath, f)[:-len(".py")]
modulename = module_prefix + filepath[len(path)+1:].replace(os.path.sep, '.')
if not any(1 for match in selectors if match(modulename)):
continue
if any(1 for match in exclude_selectors if match(modulename)):
continue
module = __import__(modulename)
for x in modulename.split('.')[1:]:
module = getattr(module, x)
suite.addTests([loader.loadTestsFromModule(module)])
def collect_doctests(path, module_prefix, suite, selectors, exclude_selectors):
def package_matches(dirname):
if dirname == 'Debugger' and not include_debugger:
return False
return dirname not in ("Mac", "Distutils", "Plex", "Tempita")
def file_matches(filename):
filename, ext = os.path.splitext(filename)
blacklist = ['libcython', 'libpython', 'test_libcython_in_gdb',
'TestLibCython']
return (ext == '.py' and not
'~' in filename and not
'#' in filename and not
filename.startswith('.') and not
filename in blacklist)
import doctest
for dirpath, dirnames, filenames in os.walk(path):
for dir in list(dirnames):
if not package_matches(dir):
dirnames.remove(dir)
for f in filenames:
if file_matches(f):
if not f.endswith('.py'): continue
filepath = os.path.join(dirpath, f)
if os.path.getsize(filepath) == 0: continue
filepath = filepath[:-len(".py")]
modulename = module_prefix + filepath[len(path)+1:].replace(os.path.sep, '.')
if not [ 1 for match in selectors if match(modulename) ]:
continue
if [ 1 for match in exclude_selectors if match(modulename) ]:
continue
if 'in_gdb' in modulename:
# These should only be imported from gdb.
continue
module = __import__(modulename)
for x in modulename.split('.')[1:]:
module = getattr(module, x)
if hasattr(module, "__doc__") or hasattr(module, "__test__"):
try:
suite.addTest(doctest.DocTestSuite(module))
except ValueError: # no tests
pass
class EndToEndTest(unittest.TestCase):
"""
This is a test of build/*.srctree files, where srctree defines a full
directory structure and its header gives a list of commands to run.
"""
cython_root = os.path.dirname(os.path.abspath(__file__))
def __init__(self, treefile, workdir, cleanup_workdir=True, stats=None):
self.name = os.path.splitext(os.path.basename(treefile))[0]
self.treefile = treefile
self.workdir = os.path.join(workdir, self.name)
self.cleanup_workdir = cleanup_workdir
self.stats = stats
cython_syspath = [self.cython_root]
for path in sys.path:
if path.startswith(self.cython_root) and path not in cython_syspath:
# Py3 installation and refnanny build prepend their
# fixed paths to sys.path => prefer that over the
# generic one (cython_root itself goes last)
cython_syspath.append(path)
self.cython_syspath = os.pathsep.join(cython_syspath[::-1])
unittest.TestCase.__init__(self)
def shortDescription(self):
return "End-to-end %s" % self.name
def setUp(self):
from Cython.TestUtils import unpack_source_tree
_, self.commands = unpack_source_tree(self.treefile, self.workdir)
self.old_dir = os.getcwd()
os.chdir(self.workdir)
if self.workdir not in sys.path:
sys.path.insert(0, self.workdir)
def tearDown(self):
if self.cleanup_workdir:
for trial in range(5):
try:
shutil.rmtree(self.workdir)
except OSError:
time.sleep(0.1)
else:
break
os.chdir(self.old_dir)
def _try_decode(self, content):
try:
return content.decode()
except UnicodeDecodeError:
return content.decode('iso-8859-1')
def runTest(self):
self.success = False
commands = (self.commands
.replace("CYTHON", "PYTHON %s" % os.path.join(self.cython_root, 'cython.py'))
.replace("PYTHON", sys.executable))
old_path = os.environ.get('PYTHONPATH')
env = dict(os.environ)
new_path = self.cython_syspath
if old_path:
new_path = new_path + os.pathsep + old_path
env['PYTHONPATH'] = new_path
cmd = []
out = []
err = []
for command_no, command in enumerate(filter(None, commands.splitlines()), 1):
with self.stats.time('%s(%d)' % (self.name, command_no), 'c',
'etoe-build' if ' setup.py ' in command else 'etoe-run'):
p = subprocess.Popen(command,
stderr=subprocess.PIPE,
stdout=subprocess.PIPE,
shell=True,
env=env)
_out, _err = p.communicate()
cmd.append(command)
out.append(_out)
err.append(_err)
res = p.returncode
if res != 0:
for c, o, e in zip(cmd, out, err):
sys.stderr.write("%s\n%s\n%s\n\n" % (
c, self._try_decode(o), self._try_decode(e)))
self.assertEqual(0, res, "non-zero exit status")
self.success = True
# TODO: Support cython_freeze needed here as well.
# TODO: Windows support.
class EmbedTest(unittest.TestCase):
working_dir = "Demos/embed"
def setUp(self):
self.old_dir = os.getcwd()
os.chdir(self.working_dir)
os.system(
"make PYTHON='%s' clean > /dev/null" % sys.executable)
def tearDown(self):
try:
os.system(
"make PYTHON='%s' clean > /dev/null" % sys.executable)
except:
pass
os.chdir(self.old_dir)
def test_embed(self):
libname = sysconfig.get_config_var('LIBRARY')
libdir = sysconfig.get_config_var('LIBDIR')
if not os.path.isdir(libdir) or libname not in os.listdir(libdir):
libdir = os.path.join(os.path.dirname(sys.executable), '..', 'lib')
if not os.path.isdir(libdir) or libname not in os.listdir(libdir):
libdir = os.path.join(libdir, 'python%d.%d' % sys.version_info[:2], 'config')
if not os.path.isdir(libdir) or libname not in os.listdir(libdir):
# report the error for the original directory
libdir = sysconfig.get_config_var('LIBDIR')
cython = 'cython.py'
if sys.version_info[0] >=3 and CY3_DIR:
cython = os.path.join(CY3_DIR, cython)
cython = os.path.abspath(os.path.join('..', '..', cython))
self.assertEqual(0, os.system(
"make PYTHON='%s' CYTHON='%s' LIBDIR1='%s' test > make.output" % (sys.executable, cython, libdir)))
try:
os.remove('make.output')
except OSError:
pass
class MissingDependencyExcluder(object):
def __init__(self, deps):
# deps: { matcher func : module name }
self.exclude_matchers = []
for matcher, mod in deps.items():
try:
__import__(mod)
except ImportError:
self.exclude_matchers.append(string_selector(matcher))
self.tests_missing_deps = []
def __call__(self, testname, tags=None):
for matcher in self.exclude_matchers:
if matcher(testname, tags):
self.tests_missing_deps.append(testname)
return True
return False
class VersionDependencyExcluder(object):
def __init__(self, deps):
# deps: { version : matcher func }
from sys import version_info
self.exclude_matchers = []
for ver, (compare, matcher) in deps.items():
if compare(version_info, ver):
self.exclude_matchers.append(matcher)
self.tests_missing_deps = []
def __call__(self, testname, tags=None):
for matcher in self.exclude_matchers:
if matcher(testname):
self.tests_missing_deps.append(testname)
return True
return False
class FileListExcluder(object):
def __init__(self, list_file, verbose=False):
self.verbose = verbose
self.excludes = {}
self._list_file = os.path.relpath(list_file)
with open(list_file) as f:
for line in f:
line = line.strip()
if line and line[0] != '#':
self.excludes[line.split()[0]] = True
def __call__(self, testname, tags=None):
exclude = (testname in self.excludes
or testname.split('.')[-1] in self.excludes)
if exclude and self.verbose:
print("Excluding %s because it's listed in %s"
% (testname, self._list_file))
return exclude
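# Illustrative example (hypothetical bugs.txt contents): one failing test name
# per line; blank lines and lines starting with '#' are ignored, and only the
# first whitespace-separated token is used, so trailing notes are allowed:
#
#     run.broken_feature   notes about why it is broken
#     memoryview_crash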
class TagsSelector(object):
def __init__(self, tag, value):
self.tag = tag
self.value = value
def __call__(self, testname, tags=None):
if tags is None:
return False
else:
return self.value in tags[self.tag]
class RegExSelector(object):
def __init__(self, pattern_string):
try:
self.regex_matches = re.compile(pattern_string, re.I|re.U).search
except re.error:
print('Invalid pattern: %r' % pattern_string)
raise
def __call__(self, testname, tags=None):
return self.regex_matches(testname)
def string_selector(s):
if ':' in s:
return TagsSelector(*s.split(':', 1))
else:
return RegExSelector(s)
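# Illustrative examples: string_selector('tag:numpy') builds a TagsSelector that
# matches tests tagged "# tag: numpy", while string_selector('bugs') builds a
# case-insensitive RegExSelector searched against the test name.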
class ShardExcludeSelector(object):
# This is an exclude selector so it can override the (include) selectors.
# It may not provide uniform distribution (in time or count), but is a
    # deterministic partition of the tests, which is important.
def __init__(self, shard_num, shard_count):
self.shard_num = shard_num
self.shard_count = shard_count
def __call__(self, testname, tags=None, _hash=zlib.crc32, _is_py2=sys.version_info[0] < 3):
# Cannot use simple hash() here as shard processes might use different hash seeds.
# CRC32 is fast and simple, but might return negative values in Py2.
hashval = _hash(testname) & 0x7fffffff if _is_py2 else _hash(testname.encode())
return hashval % self.shard_count != self.shard_num
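# Illustrative example: with shard_count=3, a (hypothetical) test named
# "run.example" is kept only by the shard whose shard_num equals
# crc32(b"run.example") % 3; every other shard excludes it, giving a stable
# (though not perfectly even) partition.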
class PendingThreadsError(RuntimeError):
pass
threads_seen = []
def check_thread_termination(ignore_seen=True):
if threading is None: # no threading enabled in CPython
return
current = threading.currentThread()
blocking_threads = []
for t in threading.enumerate():
if not t.isAlive() or t == current or t.name == 'time_stamper':
continue
t.join(timeout=2)
if t.isAlive():
if not ignore_seen:
blocking_threads.append(t)
continue
for seen in threads_seen:
if t is seen:
break
else:
threads_seen.append(t)
blocking_threads.append(t)
if not blocking_threads:
return
sys.stderr.write("warning: left-over threads found after running test:\n")
for t in blocking_threads:
sys.stderr.write('...%s\n' % repr(t))
raise PendingThreadsError("left-over threads found after running test")
def subprocess_output(cmd):
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
return p.communicate()[0].decode('UTF-8')
except OSError:
return ''
def get_version():
from Cython.Compiler.Version import version as cython_version
full_version = cython_version
top = os.path.dirname(os.path.abspath(__file__))
if os.path.exists(os.path.join(top, '.git')):
old_dir = os.getcwd()
try:
os.chdir(top)
head_commit = subprocess_output(['git', 'rev-parse', 'HEAD']).strip()
version_commit = subprocess_output(['git', 'rev-parse', cython_version]).strip()
diff = subprocess_output(['git', 'diff', '--stat']).strip()
if head_commit != version_commit:
full_version += " " + head_commit
if diff:
full_version += ' + uncommitted changes'
finally:
os.chdir(old_dir)
return full_version
_orig_stdout, _orig_stderr = sys.stdout, sys.stderr
def flush_and_terminate(status):
try:
_orig_stdout.flush()
_orig_stderr.flush()
finally:
os._exit(status)
def main():
global DISTDIR, WITH_CYTHON
DISTDIR = os.path.join(os.getcwd(), os.path.dirname(sys.argv[0]))
from Cython.Compiler import DebugFlags
args = []
for arg in sys.argv[1:]:
if arg.startswith('--debug') and arg[2:].replace('-', '_') in dir(DebugFlags):
setattr(DebugFlags, arg[2:].replace('-', '_'), True)
else:
args.append(arg)
from optparse import OptionParser
parser = OptionParser()
parser.add_option("--no-cleanup", dest="cleanup_workdir",
action="store_false", default=True,
help="do not delete the generated C files (allows passing --no-cython on next run)")
parser.add_option("--no-cleanup-sharedlibs", dest="cleanup_sharedlibs",
action="store_false", default=True,
help="do not delete the generated shared library files (allows manual module experimentation)")
parser.add_option("--no-cleanup-failures", dest="cleanup_failures",
action="store_false", default=True,
help="enable --no-cleanup and --no-cleanup-sharedlibs for failed tests only")
parser.add_option("--no-cython", dest="with_cython",
action="store_false", default=True,
help="do not run the Cython compiler, only the C compiler")
parser.add_option("--compiler", dest="compiler", default=None,
help="C compiler type")
backend_list = ','.join(BACKENDS)
parser.add_option("--backends", dest="backends", default=backend_list,
help="select backends to test (default: %s)" % backend_list)
parser.add_option("--no-c", dest="use_c",
action="store_false", default=True,
help="do not test C compilation backend")
parser.add_option("--no-cpp", dest="use_cpp",
action="store_false", default=True,
help="do not test C++ compilation backend")
parser.add_option("--no-unit", dest="unittests",
action="store_false", default=True,
help="do not run the unit tests")
parser.add_option("--no-doctest", dest="doctests",
action="store_false", default=True,
help="do not run the doctests")
parser.add_option("--no-file", dest="filetests",
action="store_false", default=True,
help="do not run the file based tests")
parser.add_option("--no-pyregr", dest="pyregr",
action="store_false", default=True,
help="do not run the regression tests of CPython in tests/pyregr/")
parser.add_option("--no-examples", dest="examples",
action="store_false", default=True,
help="Do not run the documentation tests in the examples directory.")
parser.add_option("--no-code-style", dest="code_style",
action="store_false", default=True,
help="Do not run the code style (PEP8) checks.")
parser.add_option("--cython-only", dest="cython_only",
action="store_true", default=False,
help="only compile pyx to c, do not run C compiler or run the tests")
parser.add_option("--no-refnanny", dest="with_refnanny",
action="store_false", default=True,
help="do not regression test reference counting")
parser.add_option("--no-fork", dest="fork",
action="store_false", default=True,
help="do not fork to run tests")
parser.add_option("--sys-pyregr", dest="system_pyregr",
action="store_true", default=False,
help="run the regression tests of the CPython installation")
parser.add_option("-x", "--exclude", dest="exclude",
action="append", metavar="PATTERN",
help="exclude tests matching the PATTERN")
parser.add_option("-j", "--shard_count", dest="shard_count", metavar="N",
type=int, default=1,
help="shard this run into several parallel runs")
parser.add_option("--shard_num", dest="shard_num", metavar="K",
type=int, default=-1,
help="test only this single shard")
parser.add_option("--profile", dest="profile",
action="store_true", default=False,
help="enable profiling of the tests")
parser.add_option("-C", "--coverage", dest="coverage",
action="store_true", default=False,
help="collect source coverage data for the Compiler")
parser.add_option("--coverage-xml", dest="coverage_xml",
action="store_true", default=False,
help="collect source coverage data for the Compiler in XML format")
parser.add_option("--coverage-html", dest="coverage_html",
action="store_true", default=False,
help="collect source coverage data for the Compiler in HTML format")
parser.add_option("-A", "--annotate", dest="annotate_source",
action="store_true", default=True,
help="generate annotated HTML versions of the test source files")
parser.add_option("--no-annotate", dest="annotate_source",
action="store_false",
help="do not generate annotated HTML versions of the test source files")
parser.add_option("-v", "--verbose", dest="verbosity",
action="count", default=0,
help="display test progress, pass twice to print test names")
parser.add_option("-T", "--ticket", dest="tickets",
action="append",
help="a bug ticket number to run the respective test in 'tests/*'")
parser.add_option("-3", dest="language_level",
action="store_const", const=3, default=2,
help="set language level to Python 3 (useful for running the CPython regression tests)'")
parser.add_option("--xml-output", dest="xml_output_dir", metavar="DIR",
help="write test results in XML to directory DIR")
parser.add_option("--exit-ok", dest="exit_ok", default=False,
action="store_true",
help="exit without error code even on test failures")
parser.add_option("--failfast", dest="failfast", default=False,
action="store_true",
help="stop on first failure or error")
parser.add_option("--root-dir", dest="root_dir", default=os.path.join(DISTDIR, 'tests'),
help=("Directory to look for the file based "
"tests (the ones which are deactivated with '--no-file'."))
parser.add_option("--examples-dir", dest="examples_dir",
default=os.path.join(DISTDIR, 'docs', 'examples'),
help="Directory to look for documentation example tests")
parser.add_option("--work-dir", dest="work_dir", default=os.path.join(os.getcwd(), 'TEST_TMP'),
help="working directory")
parser.add_option("--cython-dir", dest="cython_dir", default=os.getcwd(),
help="Cython installation directory (default: use local source version)")
parser.add_option("--debug", dest="for_debugging", default=False, action="store_true",
help="configure for easier use with a debugger (e.g. gdb)")
parser.add_option("--pyximport-py", dest="pyximport_py", default=False, action="store_true",
help="use pyximport to automatically compile imported .pyx and .py files")
parser.add_option("--watermark", dest="watermark", default=None,
help="deterministic generated by string")
parser.add_option("--use_common_utility_dir", default=False, action="store_true")
parser.add_option("--use_formal_grammar", default=False, action="store_true")
parser.add_option("--test_determinism", default=False, action="store_true",
help="test whether Cython's output is deterministic")
parser.add_option("--pythran-dir", dest="pythran_dir", default=None,
help="specify Pythran include directory. This will run the C++ tests using Pythran backend for Numpy")
options, cmd_args = parser.parse_args(args)
if options.with_cython and sys.version_info[0] >= 3:
sys.path.insert(0, options.cython_dir)
    # The code style check needs recursive glob() (the '**' wildcard), i.e. Python 3.5+.
if sys.version_info < (3, 5) or cmd_args:
options.code_style = False
WITH_CYTHON = options.with_cython
coverage = None
if options.coverage or options.coverage_xml or options.coverage_html:
if not WITH_CYTHON:
options.coverage = options.coverage_xml = options.coverage_html = False
elif options.shard_num == -1:
print("Enabling coverage analysis")
from coverage import coverage as _coverage
coverage = _coverage(branch=True)
coverage.erase()
coverage.start()
if options.xml_output_dir:
shutil.rmtree(options.xml_output_dir, ignore_errors=True)
if options.shard_count > 1 and options.shard_num == -1:
import multiprocessing
pool = multiprocessing.Pool(options.shard_count)
tasks = [(options, cmd_args, shard_num) for shard_num in range(options.shard_count)]
errors = []
# NOTE: create process pool before time stamper thread to avoid forking issues.
total_time = time.time()
stats = Stats()
with time_stamper_thread():
for shard_num, shard_stats, return_code in pool.imap_unordered(runtests_callback, tasks):
if return_code != 0:
errors.append(shard_num)
sys.stderr.write("FAILED (%s/%s)\n" % (shard_num, options.shard_count))
sys.stderr.write("ALL DONE (%s/%s)\n" % (shard_num, options.shard_count))
stats.update(shard_stats)
pool.close()
pool.join()
total_time = time.time() - total_time
sys.stderr.write("Sharded tests run in %d seconds (%.1f minutes)\n" % (round(total_time), total_time / 60.))
if errors:
sys.stderr.write("Errors for shards %s\n" % ", ".join([str(e) for e in errors]))
return_code = 1
else:
return_code = 0
else:
with time_stamper_thread():
_, stats, return_code = runtests(options, cmd_args, coverage)
if coverage:
if options.shard_count > 1 and options.shard_num == -1:
coverage.combine()
coverage.stop()
stats.print_stats(sys.stderr)
if coverage:
save_coverage(coverage, options)
sys.stderr.write("ALL DONE\n")
sys.stderr.flush()
try:
check_thread_termination(ignore_seen=False)
except PendingThreadsError:
# normal program exit won't kill the threads, do it the hard way here
flush_and_terminate(return_code)
else:
sys.exit(return_code)
@contextmanager
def time_stamper_thread(interval=10):
"""
Print regular time stamps into the build logs to find slow tests.
@param interval: time interval in seconds
"""
try:
_xrange = xrange
except NameError:
_xrange = range
import threading
from datetime import datetime
from time import sleep
interval = _xrange(interval * 4)
now = datetime.now
write = sys.__stderr__.write
stop = False
def time_stamper():
while True:
for _ in interval:
if stop:
return
sleep(1./4)
write('\n#### %s\n' % now())
thread = threading.Thread(target=time_stamper, name='time_stamper')
thread.setDaemon(True) # Py2 ...
thread.start()
try:
yield
finally:
stop = True
thread.join()
def configure_cython(options):
global CompilationOptions, pyrex_default_options, cython_compile
from Cython.Compiler.Main import \
CompilationOptions, \
default_options as pyrex_default_options
from Cython.Compiler.Options import _directive_defaults as directive_defaults
from Cython.Compiler import Errors
Errors.LEVEL = 0 # show all warnings
from Cython.Compiler import Options
Options.generate_cleanup_code = 3 # complete cleanup code
from Cython.Compiler import DebugFlags
DebugFlags.debug_temp_code_comments = 1
pyrex_default_options['formal_grammar'] = options.use_formal_grammar
if options.profile:
directive_defaults['profile'] = True
if options.watermark:
import Cython.Compiler.Version
Cython.Compiler.Version.watermark = options.watermark
def save_coverage(coverage, options):
if options.coverage:
coverage.report(show_missing=0)
if options.coverage_xml:
coverage.xml_report(outfile="coverage-report.xml")
if options.coverage_html:
coverage.html_report(directory="coverage-report-html")
def runtests_callback(args):
options, cmd_args, shard_num = args
options.shard_num = shard_num
return runtests(options, cmd_args)
def runtests(options, cmd_args, coverage=None):
WITH_CYTHON = options.with_cython
ROOTDIR = os.path.abspath(options.root_dir)
WORKDIR = os.path.abspath(options.work_dir)
if WITH_CYTHON:
configure_cython(options)
xml_output_dir = options.xml_output_dir
if options.shard_num > -1:
WORKDIR = os.path.join(WORKDIR, str(options.shard_num))
if xml_output_dir:
xml_output_dir = os.path.join(xml_output_dir, 'shard-%03d' % options.shard_num)
# RUN ALL TESTS!
UNITTEST_MODULE = "Cython"
UNITTEST_ROOT = os.path.join(os.path.dirname(__file__), UNITTEST_MODULE)
if WITH_CYTHON:
if os.path.exists(WORKDIR):
for path in os.listdir(WORKDIR):
if path in ("support", "Cy3"): continue
shutil.rmtree(os.path.join(WORKDIR, path), ignore_errors=True)
if not os.path.exists(WORKDIR):
os.makedirs(WORKDIR)
if options.shard_num <= 0:
sys.stderr.write("Python %s\n" % sys.version)
sys.stderr.write("\n")
if WITH_CYTHON:
sys.stderr.write("Running tests against Cython %s\n" % get_version())
else:
sys.stderr.write("Running tests without Cython.\n")
if options.for_debugging:
options.cleanup_workdir = False
options.cleanup_sharedlibs = False
options.fork = False
if WITH_CYTHON and include_debugger:
from Cython.Compiler.Main import default_options as compiler_default_options
compiler_default_options['gdb_debug'] = True
compiler_default_options['output_dir'] = os.getcwd()
if IS_PYPY:
if options.with_refnanny:
sys.stderr.write("Disabling refnanny in PyPy\n")
options.with_refnanny = False
if options.with_refnanny:
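# Build the refnanny extension on the fly, make it importable, and define CYTHON_REFNANNY so compiled test modules run Cython's reference-count checker.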
from pyximport.pyxbuild import pyx_to_dll
libpath = pyx_to_dll(os.path.join("Cython", "Runtime", "refnanny.pyx"),
build_in_temp=True,
pyxbuild_dir=os.path.join(WORKDIR, "support"))
sys.path.insert(0, os.path.split(libpath)[0])
CFLAGS.append("-DCYTHON_REFNANNY=1")
if xml_output_dir and options.fork:
# doesn't currently work together
sys.stderr.write("Disabling forked testing to support XML test output\n")
options.fork = False
if WITH_CYTHON:
sys.stderr.write("Using Cython language level %d.\n" % options.language_level)
test_bugs = False
if options.tickets:
for ticket_number in options.tickets:
test_bugs = True
cmd_args.append('ticket:%s' % ticket_number)
if not test_bugs:
for selector in cmd_args:
if selector.startswith('bugs'):
test_bugs = True
selectors = [ string_selector(r) for r in cmd_args ]
verbose_excludes = selectors or options.verbosity >= 2
if not selectors:
selectors = [ lambda x, tags=None: True ]
# Check which external modules are not present and exclude tests
# which depend on them (by prefix)
missing_dep_excluder = MissingDependencyExcluder(EXT_DEP_MODULES)
version_dep_excluder = VersionDependencyExcluder(VER_DEP_MODULES)
exclude_selectors = [missing_dep_excluder, version_dep_excluder] # want to print msg at exit
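# Skip IPython-related tests unless a sufficiently recent IPython (>= 1.0) is importable.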
try:
import IPython.core.release
if list(IPython.core.release._ver) < [1, 0, 0]:
raise ImportError
except (ImportError, AttributeError, TypeError):
exclude_selectors.append(RegExSelector('IPython'))
try:
raise ImportError("Jedi typer is currently broken, see GH#1845")
import jedi
if not ([0, 9] <= list(map(int, re.findall('[0-9]+', jedi.__version__ or '0')))):
raise ImportError
except (ImportError, AttributeError, TypeError):
exclude_selectors.append(RegExSelector('Jedi'))
if options.exclude:
exclude_selectors += [ string_selector(r) for r in options.exclude ]
if not COMPILER_HAS_INT128 or not IS_CPYTHON:
exclude_selectors += [RegExSelector('int128')]
if options.shard_num > -1:
exclude_selectors.append(ShardExcludeSelector(options.shard_num, options.shard_count))
if not test_bugs:
bug_files = [
('bugs.txt', True),
('pypy_bugs.txt', IS_PYPY),
('windows_bugs.txt', sys.platform == 'win32'),
('cygwin_bugs.txt', sys.platform == 'cygwin')
]
exclude_selectors += [
FileListExcluder(os.path.join(ROOTDIR, bugs_file_name),
verbose=verbose_excludes)
for bugs_file_name, condition in bug_files if condition
]
global COMPILER
if options.compiler:
COMPILER = options.compiler
selected_backends = [ name.strip() for name in options.backends.split(',') if name.strip() ]
backends = []
for backend in selected_backends:
if backend == 'c' and not options.use_c:
continue
elif backend == 'cpp' and not options.use_cpp:
continue
elif backend not in BACKENDS:
sys.stderr.write("Unknown backend requested: '%s' not one of [%s]\n" % (
backend, ','.join(BACKENDS)))
sys.exit(1)
backends.append(backend)
if options.shard_num <= 0:
sys.stderr.write("Backends: %s\n" % ','.join(backends))
languages = backends
if 'TRAVIS' in os.environ and sys.platform == 'darwin' and 'cpp' in languages:
bugs_file_name = 'travis_macos_cpp_bugs.txt'
exclude_selectors += [
FileListExcluder(os.path.join(ROOTDIR, bugs_file_name),
verbose=verbose_excludes)
]
if options.use_common_utility_dir:
common_utility_dir = os.path.join(WORKDIR, 'utility_code')
if not os.path.exists(common_utility_dir):
os.makedirs(common_utility_dir)
else:
common_utility_dir = None
sys.stderr.write("\n")
test_suite = unittest.TestSuite()
stats = Stats()
if options.unittests:
collect_unittests(UNITTEST_ROOT, UNITTEST_MODULE + ".", test_suite, selectors, exclude_selectors)
if options.doctests:
collect_doctests(UNITTEST_ROOT, UNITTEST_MODULE + ".", test_suite, selectors, exclude_selectors)
if options.filetests and languages:
filetests = TestBuilder(ROOTDIR, WORKDIR, selectors, exclude_selectors,
options, options.pyregr, languages, test_bugs,
options.language_level, common_utility_dir,
options.pythran_dir, add_embedded_test=True, stats=stats)
test_suite.addTest(filetests.build_suite())
if options.examples and languages:
for subdirectory in glob.glob(os.path.join(options.examples_dir, "*/")):
filetests = TestBuilder(subdirectory, WORKDIR, selectors, exclude_selectors,
options, options.pyregr, languages, test_bugs,
options.language_level, common_utility_dir,
options.pythran_dir,
default_mode='compile', stats=stats)
test_suite.addTest(filetests.build_suite())
if options.system_pyregr and languages:
sys_pyregr_dir = os.path.join(sys.prefix, 'lib', 'python'+sys.version[:3], 'test')
if not os.path.isdir(sys_pyregr_dir):
sys_pyregr_dir = os.path.join(os.path.dirname(sys.executable), 'Lib', 'test') # source build
if os.path.isdir(sys_pyregr_dir):
filetests = TestBuilder(ROOTDIR, WORKDIR, selectors, exclude_selectors,
options, True, languages, test_bugs,
sys.version_info[0], common_utility_dir, stats=stats)
sys.stderr.write("Including CPython regression tests in %s\n" % sys_pyregr_dir)
test_suite.addTest(filetests.handle_directory(sys_pyregr_dir, 'pyregr'))
if options.code_style and options.shard_num <= 0:
try:
import pycodestyle
except ImportError:
# Hack to make the exclusion visible.
missing_dep_excluder.tests_missing_deps.append('TestCodeFormat')
else:
test_suite.addTest(TestCodeFormat(options.cython_dir))
if xml_output_dir:
from Cython.Tests.xmlrunner import XMLTestRunner
if not os.path.exists(xml_output_dir):
try:
os.makedirs(xml_output_dir)
except OSError:
pass # concurrency issue?
test_runner = XMLTestRunner(output=xml_output_dir,
verbose=options.verbosity > 0)
if options.failfast:
sys.stderr.write("--failfast not supported with XML runner\n")
else:
text_runner_options = {}
if options.failfast:
if sys.version_info < (2, 7):
sys.stderr.write("--failfast not supported with Python < 2.7\n")
else:
text_runner_options['failfast'] = True
test_runner = unittest.TextTestRunner(verbosity=options.verbosity, **text_runner_options)
if options.pyximport_py:
from pyximport import pyximport
pyximport.install(pyimport=True, build_dir=os.path.join(WORKDIR, '_pyximport'),
load_py_module_on_import_failure=True, inplace=True)
try:
gc.set_debug(gc.DEBUG_UNCOLLECTABLE)
except AttributeError:
pass # not available on PyPy
result = test_runner.run(test_suite)
if common_utility_dir and options.shard_num < 0 and options.cleanup_workdir:
shutil.rmtree(common_utility_dir)
if missing_dep_excluder.tests_missing_deps:
sys.stderr.write("Following tests excluded because of missing dependencies on your system:\n")
for test in missing_dep_excluder.tests_missing_deps:
sys.stderr.write(" %s\n" % test)
if options.with_refnanny:
import refnanny
sys.stderr.write("\n".join([repr(x) for x in refnanny.reflog]))
if options.exit_ok:
return options.shard_num, stats, 0
else:
return options.shard_num, stats, not result.wasSuccessful()
if __name__ == '__main__':
try:
main()
except Exception:
traceback.print_exc()
try:
check_thread_termination(ignore_seen=False)
except PendingThreadsError:
# normal program exit won't kill the threads, do it the hard way here
flush_and_terminate(1)
sys.exit(1)
|
main-checkpoint.py
|
#!/usr/bin/env python3
import argparse
from collections import Counter
from multiprocessing import set_start_method
import pdb
import re
import sys
import time
import copy
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
import torch.multiprocessing as mp
import data_producer
from word2atoms import skipgram_atoms, trivial_atoms, morpheme_split
parser = argparse.ArgumentParser()
parser.add_argument("--train", type=str, default="", help="training file")
parser.add_argument("--output", type=str, default="vectors.txt", help="output word embedding file")
parser.add_argument("--atomoutput", type=str, default="atomvectors.txt", help="output atom embedding file")
parser.add_argument("--ctxoutput", type=str, default="atomvectors.txt", help="context vectors file")
parser.add_argument("--ctxatomoutput", type=str, default="atomvectors.txt", help="context atom vectors file")
parser.add_argument("--losslog", type=str, default="all_losses.txt", help="log of training losses")
parser.add_argument("--opt", type=str, default="SGD", help="optimiser to use")
parser.add_argument("--size", type=int, default=300, help="word embedding dimension")
parser.add_argument("--cbow", type=int, default=1, help="1 for cbow, 0 for skipgram")
parser.add_argument("--window", type=int, default=5, help="context window size")
parser.add_argument("--sample", type=float, default=1e-4, help="subsample threshold")
parser.add_argument("--negative", type=int, default=10, help="number of negative samples")
parser.add_argument("--min_count", type=int, default=5, help="minimum frequency of a word")
parser.add_argument("--processes", type=int, default=4, help="number of processes")
parser.add_argument("--num_workers", type=int, default=6, help="number of workers for data processsing")
parser.add_argument("--iter", type=int, default=5, help="number of iterations")
parser.add_argument("--lr", type=float, default=-1.0, help="initial learning rate")
parser.add_argument("--momentum", type=float, default=0.0, help="momentum")
parser.add_argument("--batch_size", type=int, default=100, help="(max) batch size")
parser.add_argument("--megabatch_size", type=int, default=100000, help="(max) megabatch size")
parser.add_argument("--cuda", action='store_true', default=False, help="enable cuda")
parser.add_argument("--output_ctx", action='store_true', default=False, help="output context embeddings")
parser.add_argument("--anneal", action='store_true', default=False, help="anneal the learning rate linearly to 0")
parser.add_argument("--shuffle", action='store_true', default=False, help="shuffle the training data points")
parser.add_argument("--atomizer", type=str, choices=['fasttext', 'morphoseg', 'word2vec'], default="word2vec", help="atomizer to use (vanilla word2vec by default, can also choose fasttext or morphoseg)")
parser.add_argument("--minL", type=int, default=5, help="minimum possible length of n-grams to take when atomizing")
parser.add_argument("--maxL", type=int, default=5, help="maximum possible length of n-grams to take when atomizing")
parser.add_argument("--halfletters", action='store_true', default=False, help="whether to use half-letters/raw Unicode characters when taking n-grams, or whole Tamil letters")
MAX_SENT_LEN = 1000
# Build the vocabulary.
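# Stream whitespace-delimited tokens from a file in fixed-size chunks, carrying a token that straddles a chunk boundary over into the next read.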
def file_split(f, delim=' \t\n', bufsize=1024):
prev = ''
while True:
s = f.read(bufsize)
if not s:
break
tokens = re.split('['+delim+']{1,}', s)
if len(tokens) > 1:
yield prev + tokens[0]
prev = tokens[-1]
for x in tokens[1:-1]:
yield x
else:
prev += s
if prev:
yield prev
def build_vocab(args):
vocab = Counter()
word_count = 0
for word in file_split(open(args.train)):
vocab[word] += 1
word_count += 1
if word_count % 10000 == 0:
sys.stdout.write('%d\r' % len(vocab))
freq = {k:v for k,v in vocab.items() if v >= args.min_count}
word_count = sum([freq[k] for k in freq])
word_list = sorted(freq, key=freq.get, reverse=True)
word2idx = {}
for i,w in enumerate(word_list):
word2idx[w] = i
print("Vocab size: %ld" % len(word2idx))
print("Words in train file: %ld" % word_count)
vars(args)['vocab_size'] = len(word2idx)
vars(args)['train_words'] = word_count
""" for i, word in enumerate(word_list):
if word[-1] == '0':
print(i, word)
if i == 1761:
print(word) """
#print("Initial mattrum count")
#print(freq['மற்றும்'])
return word2idx, word_list, freq
def build_morph(args, word2idx, word_list, freq, atomiser):
morph_list = []
morph2idx = {}
word2morph = []
#total_morphperword = 0
max_morphperword = 0
#total_freq = 0
for i, word in enumerate(word_list):
idxs = []
cnt = 0
for morph in atomiser(word):
if morph in morph2idx:
idx = morph2idx[morph]
else:
idx = len(morph_list)
morph_list.append(morph)
morph2idx[morph] = idx
idxs.append(idx)
cnt += 1
#total_morphperword += freq[word] * cnt
#total_freq += freq[word]
if cnt > max_morphperword:
max_morphperword = cnt
word2morph.append(torch.LongTensor(idxs))
if i % 100 == 0:
sys.stdout.write('%d\r' % i)
# Append one extra row (index len(morph2idx)) as the padding/unknown-word entry; unused slots below are filled with len(morph2idx) + 1 and masked out during training.
word2morph.append(torch.LongTensor([len(morph2idx)]))
word2morphfinal = torch.zeros((len(word2morph), max_morphperword), dtype=torch.long)
word2morphfinal = word2morphfinal.fill_(len(morph2idx) + 1)
for i in range(len(word2morph)):
row = word2morph[i]
word2morphfinal[i, :row.shape[0]] = row
#print(word2morphfinal.shape)
#print(freq[word_list[0]])
""" indices = [0]
for index in indices:
print(word_list[index])
print(' '.join([morph_list[j] for j in word2morph[index]]))
print(word2morph[816])
print(word2morph[0])
print(word2morph[10520]) """
print("Morpheme size: %ld" % len(morph2idx))
#print("Average morphemes per word: %f" % (float(total_morphperword)/total_freq))
print("Max morphemes per word: %d" % max_morphperword)
#vars(args)['morph_size'] = len(morph2idx)
#vars(args)['word2morph'] = word2morph
return morph2idx, morph_list, word2morphfinal
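# Custom autograd op that averages the context embeddings over each sample's true (unpadded) context length.
# Note that backward() simply broadcasts the incoming gradient to every context slot without re-applying the 1/length factor.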
class CBOWMean(torch.autograd.Function):
@staticmethod
def forward(ctx, x, lens):
ctx.save_for_backward(x)
x = torch.sum(x, 1, keepdim=True)
x = x.permute(1,2,0) / lens
return x.permute(2,0,1)
@staticmethod
def backward(ctx, g):
x, = ctx.saved_variables
return g.expand_as(x), None
class CBOW(nn.Module):
def __init__(self, args):
super(CBOW, self).__init__()
self.emb0_lookup = nn.Embedding(args.vocab_size+1, args.size, padding_idx=args.vocab_size, sparse=True)
self.emb1_lookup = nn.Embedding(args.vocab_size, args.size, sparse=True)
self.emb0_lookup.weight.data.uniform_(-0.5/args.size, 0.5/args.size)
self.emb0_lookup.weight.data[args.vocab_size].fill_(0)
self.emb1_lookup.weight.data.uniform_(-0.5/args.size, 0.5/args.size)
self.window = args.window
self.negative = args.negative
self.pad_idx = args.vocab_size
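# Expected row layout of `data` (as consumed below): [0:2*window] context indices (padded with vocab_size),
# [2*window] true context length, [2*window+1] target word index, [2*window+2:2*window+2+negative] negative
# sample indices, remaining columns the negative-sample mask.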
def forward(self, data):
ctx_indices = data[:, 0:2*self.window]
ctx_lens = data[:, 2*self.window].float()
word_idx = data[:, 2*self.window+1]
neg_indices = data[:, 2*self.window+2:2*self.window+2+self.negative]
neg_mask = data[:, 2*self.window+2+self.negative:].float()
c_embs = self.emb0_lookup(ctx_indices)
w_embs = self.emb1_lookup(word_idx)
n_embs = self.emb1_lookup(neg_indices)
c_embs = CBOWMean.apply(c_embs, ctx_lens)
pos_ips = torch.sum(c_embs[:,0,:] * w_embs, 1)
neg_ips = torch.bmm(n_embs, c_embs.permute(0,2,1))[:,:,0]
# Neg Log Likelihood
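# loss = -log sigmoid(v_ctx . u_word) - sum_k mask_k * log sigmoid(-v_ctx . u_neg_k); dot products are clamped to [-10, 10] for numerical stability.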
pos_loss = torch.sum( -F.logsigmoid(torch.clamp(pos_ips,max=10,min=-10)) )
neg_loss = torch.sum( -F.logsigmoid(torch.clamp(-neg_ips,max=10,min=-10)) * neg_mask )
return pos_loss + neg_loss
class SG(nn.Module):
def __init__(self, args):
super(SG, self).__init__()
self.emb0morph_lookup = nn.Embedding(args.morph_size+2, args.size, padding_idx=args.morph_size, sparse=True)
self.emb1morph_lookup = nn.Embedding(args.ctxmorph_size+2, args.size, padding_idx=args.ctxmorph_size, sparse=True)
#self.emb0_lookup = nn.Embedding(args.vocab_size+1, args.size, padding_idx=args.vocab_size, sparse=True)
#self.emb1_lookup = nn.Embedding(args.vocab_size, args.size, sparse=True)
self.emb0morph_lookup.weight.data.uniform_(-0.5/args.size, 0.5/args.size)
self.emb0morph_lookup.weight.data[args.morph_size+1].fill_(0)
# randomly initialise context vectors
#self.emb1morph_lookup.weight.data.uniform_(-0.5/args.size, 0.5/args.size)
#self.emb1morph_lookup.weight.data[args.ctxmorph_size+1].fill_(0)
# OR zero initialise them as usual
self.emb1morph_lookup.weight.data.zero_()
#self.emb0_lookup.weight.data.uniform_(-0.5/args.size, 0.5/args.size)
#self.emb1_lookup.weight.data.zero_()
#self.emb1morph_lookup.weight.data.zero_()
#self.emb0morph_lookup.weight.data[args.morph_size+1].requires_grad = False
self.window = args.window
self.negative = args.negative
self.pad_idx = args.vocab_size
self.morph_size = args.morph_size
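# Expected row layout of `data`: [0] center word index, [1] context word index, [2:2+negative] negative sample
# indices, remaining columns the negative-sample mask. Word and context vectors are formed by summing their
# atom (morpheme / n-gram) embeddings under the corresponding padding masks.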
def forward(self, data, word2morph, word2morph_mask, ctx2morph, ctx2morph_mask):
word_idx = data[:, 0]
ctx_idx = data[:, 1]
neg_indices = data[:, 2:2+self.negative]
neg_mask = data[:, 2+self.negative:].float()
#print(word2morph.shape)
#print(word2morph_mask.shape)
#print(torch.max(word2morph))
#print("MORPHEMES")
#print(word2morph[3])
#print(torch.norm(self.emb0morph_lookup.weight.data[-1]))
#print(torch.norm(torch.squeeze(word2morph_mask), dim=0)) # should start nonzero, then eventually be 0
#print((self.emb0morph_lookup(word2morph[:, 0]) * word2morph_mask[:, 0]).shape)
#t = self.emb0morph_lookup(word2morph) * word2morph_mask
#print(t.shape)
#print(torch.sum(t, dim=1).shape)
w_embs = torch.sum(self.emb0morph_lookup(word2morph) * word2morph_mask, dim=1)
#w_embs = self.emb0_lookup(word_idx)
#t = self.emb1morph_lookup(ctx2morph[:, 0]) * ctx2morph_mask[:, 0]
#print(t.shape)
#print(torch.sum(t, dim=1).shape)
#print((self.emb0morph_lookup(word2morph[:, 1]) * word2morph_mask[:, 1]).shape)
c_embs = torch.sum(self.emb1morph_lookup(ctx2morph[:, 0]) * ctx2morph_mask[:, 0], dim=1)
#c_embs = self.emb1_lookup(ctx_idx)
#t = self.emb1morph_lookup(ctx2morph[:, 1:1+self.negative]) * ctx2morph_mask[:, 1:1+self.negative]
#print(t.shape)
#print(torch.sum(t, dim=2).shape)
#print((self.emb0morph_lookup(word2morph[:, 2:2+self.negative]) * word2morph_mask[:, 2:2+self.negative]).shape)
n_embs = torch.sum(self.emb1morph_lookup(ctx2morph[:, 1:1+self.negative]) * ctx2morph_mask[:, 1:1+self.negative], dim=2)
#n_embs = self.emb1_lookup(neg_indices)
pos_ips = torch.sum(w_embs * c_embs, 1)
neg_ips = torch.bmm(n_embs, torch.unsqueeze(w_embs,1).permute(0,2,1))[:,:,0]
# Neg Log Likelihood
pos_loss = torch.sum( -F.logsigmoid(torch.clamp(pos_ips,max=10,min=-10)) )
neg_loss = torch.sum( -F.logsigmoid(torch.clamp(-neg_ips,max=10,min=-10)) * neg_mask )
return pos_loss + neg_loss
# Initialize model.
def init_net(args):
if args.cbow == 1:
if args.lr == -1.0:
vars(args)['lr'] = 0.05
return CBOW(args)
elif args.cbow == 0:
if args.lr == -1.0:
vars(args)['lr'] = 0.025
return SG(args)
# Training
def train_process_sent_producer(p_id, data_queue, word_count_actual, word2idx, word_list, freq, args):
if args.negative > 0:
table_ptr_val = data_producer.init_unigram_table(word_list, freq, args.train_words)
train_file = open(args.train)
file_pos = args.file_size * p_id // args.processes
train_file.seek(file_pos, 0)
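# The seek may have landed inside a multi-byte UTF-8 character; step back one byte at a time until a clean read succeeds.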
while True:
try:
train_file.read(1)
except UnicodeDecodeError:
file_pos -= 1
train_file.seek(file_pos, 0)
else:
train_file.seek(file_pos, 0)
break
batch_count = 0
if args.cbow == 1:
batch_placeholder = np.zeros((args.megabatch_size, 2*args.window+2+2*args.negative), 'int64')
else:
batch_placeholder = np.zeros((args.megabatch_size, 2+2*args.negative), 'int64')
#mattrum_cnt = 0
for it in range(args.iter):
train_file.seek(file_pos, 0)
last_word_cnt = 0
word_cnt = 0
sentence = []
prev = ''
eof = False
while True:
if eof or train_file.tell() > file_pos + args.file_size / args.processes:
break
while True:
s = train_file.read(1)
if not s:
eof = True
break
elif s == ' ' or s == '\t':
if prev in word2idx:
sentence.append(prev)
prev = ''
if len(sentence) >= MAX_SENT_LEN:
break
elif s == '\n':
if prev in word2idx:
sentence.append(prev)
prev = ''
break
else:
prev += s
if len(sentence) > 0:
#print("Full sentence")
#print(' '.join(sentence))
# subsampling
sent_id = []
trimmed = []
if args.sample != 0:
sent_len = len(sentence)
i = 0
while i < sent_len:
word = sentence[i]
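# word2vec-style subsampling: keep a word with probability (sqrt(f/t) + 1) * t / f, where f is the word's relative frequency and t is the --sample threshold.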
f = freq[word] / args.train_words
pb = (np.sqrt(f / args.sample) + 1) * args.sample / f
if pb > np.random.random_sample():
sent_id.append( word2idx[word] )
""" if word2idx[word] == 'மற்றும்' and mattrum_cnt % 1000 == 0:
print("Hit another 1000 mattrums")
mattrum_cnt += 1
else:
trimmed.append(word) """
i += 1
if len(sent_id) < 2:
word_cnt += len(sentence)
sentence.clear()
continue
#print("Killed words")
#print(' '.join(trimmed))
#print("Trimmed sentence")
#print(' '.join([word_list[index] for index in sent_id]))
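# Combine two 24-bit draws into a ~48-bit value passed to data_producer as its sampling seed.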
next_random = (2**24) * np.random.randint(0, 2**24) + np.random.randint(0, 2**24)
if args.cbow == 1: # train CBOW
chunk = data_producer.cbow_producer(sent_id, len(sent_id), table_ptr_val,
args.window, args.negative, args.vocab_size, args.batch_size, next_random)
elif args.cbow == 0: # train skipgram
chunk = data_producer.sg_producer(sent_id, len(sent_id), table_ptr_val,
args.window, args.negative, args.vocab_size, args.batch_size, next_random)
#print("Data points")
#print(chunk)
chunk_pos = 0
while chunk_pos < chunk.shape[0]:
remain_space = args.megabatch_size - batch_count
remain_chunk = chunk.shape[0] - chunk_pos
if remain_chunk < remain_space:
take_from_chunk = remain_chunk
else:
take_from_chunk = remain_space
batch_placeholder[batch_count:batch_count+take_from_chunk, :] = chunk[chunk_pos:chunk_pos+take_from_chunk, :]
batch_count += take_from_chunk
if batch_count == args.megabatch_size:
if args.shuffle:
p = torch.randperm(batch_count)
batch_placeholder = batch_placeholder[p]
start = 0
while start < batch_count:
data_queue.put(batch_placeholder[start : min(start + args.batch_size, batch_count)])
start += args.batch_size
#print("Batch placeholder")
#print(batch_placeholder)
batch_count = 0
chunk_pos += take_from_chunk
word_cnt += len(sentence)
if word_cnt - last_word_cnt > 10000:
with word_count_actual.get_lock():
word_count_actual.value += word_cnt - last_word_cnt
last_word_cnt = word_cnt
sentence.clear()
with word_count_actual.get_lock():
word_count_actual.value += word_cnt - last_word_cnt
#print("Total occurrences of mattrum: " + str(mattrum_cnt))
#print("Total non-occurrences of mattrum: " + str(non_mattrum_cnt))
if batch_count > 0:
if args.shuffle:
p = torch.randperm(batch_count)
batch_placeholder[:batch_count] = batch_placeholder[p]
start = 0
while start < batch_count:
data_queue.put(batch_placeholder[start : min(start + args.batch_size, batch_count)])
start += args.batch_size
#print("Batch placeholder")
#print(batch_placeholder)
batch_count = 0
data_queue.put(None)
def train_process(p_id, word_count_actual, word2idx, word_list, freq, args, model, word2morph, word2morph_mask, ctx2morph, ctx2morph_mask):
data_queue = mp.SimpleQueue()
if args.opt == "Adagrad":
optimizer = optim.Adagrad(model.parameters(), lr=args.lr)
elif args.opt == "SGD":
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
elif args.opt == 'SparseAdam':
optimizer = optim.SparseAdam(model.parameters(), lr=args.lr)
t = mp.Process(target=train_process_sent_producer, args=(p_id, data_queue, word_count_actual, word2idx, word_list, freq, args))
t.start()
# get from data_queue and feed to model
prev_word_cnt = 0
losses_cnt = 0
total_loss = 0.0
losses_file = open(args.losslog, 'w')
lr = args.lr
#mattrum_cnt = 0
#non_mattrum_cnt = 0
while True:
d = data_queue.get()
if d is None:
break
else:
# lr anneal
if args.anneal:
if word_count_actual.value - prev_word_cnt > 10000:
lr = args.lr * (1 - word_count_actual.value / (args.iter * args.train_words))
if lr < 0.0001 * args.lr:
lr = 0.0001 * args.lr
for param_group in optimizer.param_groups:
param_group['lr'] = lr
else:
lr = args.lr
if args.cuda:
data = Variable(torch.LongTensor(d).cuda(), requires_grad=False)
else:
data = Variable(torch.LongTensor(d), requires_grad=False)
if args.cbow == 1:
optimizer.zero_grad()
loss = model(data)
loss.backward()
optimizer.step()
model.emb0_lookup.weight.data[args.vocab_size].fill_(0)
elif args.cbow == 0:
optimizer.zero_grad()
#print("WORD")
#print(data[3][0])
loss = model(data, word2morph[data[:, 0]], word2morph_mask[data[:, 0]], ctx2morph[data[:, 1:2+args.negative]], ctx2morph_mask[data[:, 1:2+args.negative]])
loss.backward()
#model.emb0morph_lookup.weight.data.grad[args.morph_size+1].fill_(0)
optimizer.step()
#model.emb0morph_lookup.weight.data[args.morph_size+1].zero_()
losses_cnt += data.shape[0]
total_loss += loss
# output
if word_count_actual.value - prev_word_cnt > 10000:
avg_loss = total_loss/losses_cnt
sys.stdout.write("\rAlpha: %0.8f, Loss: %0.8f, Progress: %0.2f, Words/sec: %f" % (lr, avg_loss, word_count_actual.value / (args.iter * args.train_words) * 100, word_count_actual.value / (time.monotonic() - args.t_start)))
sys.stdout.flush()
prev_word_cnt = word_count_actual.value
losses_cnt = 0
total_loss = 0.0
losses_file.write(str(avg_loss.item()) + '\n')
losses_file.close()
t.join()
if __name__ == '__main__':
set_start_method('forkserver')
args = parser.parse_args()
print("Starting training using file %s" % args.train)
train_file = open(args.train)
train_file.seek(0, 2)
vars(args)['file_size'] = train_file.tell()
word2idx, word_list, freq = build_vocab(args)
# constructing and applying atomizer to all words
minL = args.minL
maxL = args.maxL
use_listify = not args.halfletters
if args.atomizer == 'word2vec':
atomizer = lambda w: trivial_atoms(w)
elif args.atomizer == 'fasttext':
atomizer = lambda w: skipgram_atoms(w, minL=minL, maxL=maxL)
elif args.atomizer == 'morphoseg':
atomizer = lambda w: morpheme_split(w, minL=minL, maxL=maxL, use_listify=use_listify, to_stem=True)
morph2idx, morph_list, word2morph = build_morph(args, word2idx, word_list, freq, atomizer)
vars(args)['morph_size'] = len(morph2idx)
ctxmorph2idx, ctxmorph_list, ctx2morph = build_morph(args, word2idx, word_list, freq, trivial_atoms)
vars(args)['ctxmorph_size'] = len(ctxmorph2idx)
#print(word2morph.shape)
#print(ctx2morph.shape)
""" idx = 200
word = word_list[idx]
print("Word: " + word)
print("Index should be: " + str(word2idx[word]))
all_morphs = word2morph[idx]
for midx in all_morphs:
print("Morpheme: " + morph_list[midx])
print("Index used for lookup: " + str(midx))
print("Index retrieved: " + str(morph2idx[morph_list[midx]])) """
word_count_actual = mp.Value('L', 0)
model = init_net(args)
model.share_memory()
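# Build per-slot masks: entries equal to the fill value (morph_size + 1 / ctxmorph_size + 1) are zeroed so the
# per-word sum of atom embeddings only covers real atoms; index morph_size itself is the Embedding padding row and stays zero.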
word2morph_mask = torch.unsqueeze((word2morph <= args.morph_size), dim=2).type(model.emb0morph_lookup.weight.data.dtype)
ctx2morph_mask = torch.unsqueeze((ctx2morph <= args.ctxmorph_size), dim=2).type(model.emb1morph_lookup.weight.data.dtype)
if args.cuda:
model.cuda()
word2morph = word2morph.cuda()
word2morph_mask = word2morph_mask.cuda()
ctx2morph = ctx2morph.cuda()
ctx2morph_mask = ctx2morph_mask.cuda()
#print(word2morph_mask.shape)
#print(ctx2morph_mask.shape)
vars(args)['t_start'] = time.monotonic()
processes = []
for p_id in range(args.processes):
p = mp.Process(target=train_process, args=(p_id, word_count_actual, word2idx, word_list, freq, args, model, word2morph, word2morph_mask, ctx2morph, ctx2morph_mask))
p.start()
processes.append(p)
for p in processes:
p.join()
torch.cuda.empty_cache()
# print out the atom input vectors
if args.cuda:
embs = model.emb0morph_lookup.weight.data.cpu().numpy()
else:
embs = model.emb0morph_lookup.weight.data.numpy()
#print(embs.shape)
data_producer.write_embs(args.atomoutput, morph_list, embs, args.morph_size, args.size)
del embs
torch.cuda.empty_cache()
wordembs = np.zeros((word2morph.shape[0], args.size), dtype=np.float32)
print(wordembs.shape)
# word input vectors
if args.cuda:
numpieces = 5
cnt = word2morph.shape[0] // numpieces
print("Need to handle " + str(word2morph.shape[0]) + " words")
for j in range(numpieces):
start = j * cnt
end = start + cnt
if j == numpieces - 1:
end = word2morph.shape[0]
print("Handling all words from " + str(start) + " to " + str(end))
wordembs[start:end] = torch.sum(model.emb0morph_lookup(word2morph[start:end]) * word2morph_mask[start:end], dim=1).detach().cpu().numpy()
else:
wordembs = torch.sum(model.emb0morph_lookup(word2morph) * word2morph_mask, dim=1).detach().numpy()
#print(wordembs.shape)
data_producer.write_embs(args.output, word_list, wordembs, args.vocab_size, args.size)
del wordembs
torch.cuda.empty_cache()
# atom context vectors
if args.cuda:
atembs = model.emb1morph_lookup.weight.data.cpu().numpy()
else:
atembs = model.emb1morph_lookup.weight.data.numpy()
#print(atembs.shape)
data_producer.write_embs(args.ctxatomoutput, ctxmorph_list, atembs, args.ctxmorph_size, args.size)
del atembs
torch.cuda.empty_cache()
# word context vectors
ctxembs = np.zeros((ctx2morph.shape[0], args.size), dtype=np.float32)
print(ctxembs.shape)
if args.cuda:
numpieces = 5
cnt = ctx2morph.shape[0] // numpieces
print("Need to handle " + str(ctx2morph.shape[0]) + " words")
for j in range(numpieces):
start = j * cnt
end = start + cnt
if j == numpieces - 1:
end = ctx2morph.shape[0]
print("Handling all words from " + str(start) + " to " + str(end))
ctxembs[start:end] = torch.sum(model.emb1morph_lookup(ctx2morph[start:end]) * ctx2morph_mask[start:end], dim=1).detach().cpu().numpy()
else:
ctxembs = torch.sum(model.emb1morph_lookup(ctx2morph) * ctx2morph_mask, dim=1).detach().numpy()
#print(ctxembs.shape)
data_producer.write_embs(args.ctxoutput, word_list, ctxembs, args.vocab_size, args.size)
del ctxembs
torch.cuda.empty_cache()
print("")
|
map_dataset_op_test.py
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the experimental input pipeline ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import namedtuple
import threading
import time
import warnings
from absl.testing import parameterized
import numpy as np
from tensorflow.core.framework import attr_value_pb2
from tensorflow.python.client import session
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import functional_ops
from tensorflow.python.ops import lookup_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import script_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.ops import string_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.platform import test
class MapDatasetTest(test.TestCase, parameterized.TestCase):
def _buildMapDataset(self, components, count):
def _map_fn(x, y, z):
return math_ops.square(x), math_ops.square(y), math_ops.square(z)
return (dataset_ops.Dataset.from_tensor_slices(components).map(_map_fn)
.repeat(count))
def testMapDataset(self):
"""Test an dataset that maps a TF function across its input elements."""
# The pipeline is TensorSliceDataset -> MapDataset(square_3) ->
# RepeatDataset(count).
components = (np.arange(7),
np.array([[1, 2, 3]]) * np.arange(7)[:, np.newaxis],
np.array(37.0) * np.arange(7))
count = array_ops.placeholder(dtypes.int64, shape=[])
dataset = self._buildMapDataset(components, count)
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
self.assertEqual([c.shape[1:] for c in components],
[t.shape for t in get_next])
with self.test_session() as sess:
# Test single-threaded access to the iterator.
sess.run(init_op, feed_dict={count: 14})
for _ in range(14):
for i in range(7):
result = sess.run(get_next)
for component, result_component in zip(components, result):
self.assertAllEqual(component[i]**2, result_component)
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
# Test multi-threaded access to the same iterator.
sess.run(init_op, feed_dict={count: 18})
results = []
def iterator_thread():
while True:
try:
results.append(sess.run(get_next))
except errors.OutOfRangeError:
return
threads = [self.checkedThread(target=iterator_thread) for _ in range(8)]
for t in threads:
t.start()
for t in threads:
t.join()
# `results` will contain the same elements components**2
# repeated 18 times, but in a non-deterministic order. Sort the
# results, and assert that each element of components**2 is
# produced 18 times.
results.sort(key=lambda x: x[0])
for i in range(7):
for j in range(18):
for component, result_component in zip(components,
results[i * 18 + j]):
self.assertAllEqual(component[i]**2, result_component)
def _buildParallelMapDataset(self, components, count, num_parallel_calls,
output_buffer_size):
def _map_fn(x, y, z):
return math_ops.square(x), math_ops.square(y), math_ops.square(z)
return (dataset_ops.Dataset.from_tensor_slices(components)
.map(_map_fn, num_parallel_calls=num_parallel_calls)
.prefetch(output_buffer_size)
.repeat(count))
def testParallelMapDataset(self):
"""Test an dataset that maps a TF function across its input elements."""
# The pipeline is TensorSliceDataset -> ParallelMapDataset(square_3) ->
# RepeatDataset(count).
components = (np.arange(7),
np.array([[1, 2, 3]]) * np.arange(7)[:, np.newaxis],
np.array(37.0) * np.arange(7))
count = array_ops.placeholder(dtypes.int64, shape=[])
num_parallel_calls = array_ops.placeholder(dtypes.int32, shape=[])
output_buffer_size = array_ops.placeholder(dtypes.int64, shape=[])
dataset = self._buildParallelMapDataset(
components, count, num_parallel_calls, output_buffer_size)
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
self.assertEqual([c.shape[1:] for c in components],
[t.shape for t in get_next])
with self.test_session() as sess:
def do_test(num_parallel_calls_val, output_buffer_size_val):
# Test single-threaded access to the iterator.
sess.run(init_op, feed_dict={
count: 14,
num_parallel_calls: num_parallel_calls_val,
output_buffer_size: output_buffer_size_val})
for _ in range(14):
for i in range(7):
result = sess.run(get_next)
for component, result_component in zip(components, result):
self.assertAllEqual(component[i]**2, result_component)
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
# Test multi-threaded access to the same iterator.
sess.run(init_op, feed_dict={
count: 18,
num_parallel_calls: num_parallel_calls_val,
output_buffer_size: output_buffer_size_val})
results = []
def iterator_thread():
while True:
try:
results.append(sess.run(get_next))
except errors.OutOfRangeError:
return
threads = [self.checkedThread(target=iterator_thread)
for _ in range(64)]
for t in threads:
t.start()
for t in threads:
t.join()
# `results` will contain the same elements components**2
# repeated 18 times, but in a non-deterministic order. Sort the
# results, and assert that each element of components**2 is
# produced 18 times.
results.sort(key=lambda x: x[0])
for i in range(7):
for j in range(18):
for component, result_component in zip(components,
results[i * 18 + j]):
self.assertAllEqual(component[i]**2, result_component)
for num_parallel_calls_val, output_buffer_size_val in [
(1, 1), (1, 2), (2, 2), (2, 4), (8, 8), (8, 16)]:
do_test(num_parallel_calls_val, output_buffer_size_val)
def testImplicitDisposeParallelMapDataset(self):
# Tests whether a parallel map dataset will be cleaned up correctly when
# the pipeline does not run it until exhaustion.
# The pipeline is TensorSliceDataset -> MapDataset(square_3) ->
# RepeatDataset(1000).
components = (np.arange(1000),
np.array([[1, 2, 3]]) * np.arange(1000)[:, np.newaxis],
np.array(37.0) * np.arange(1000))
dataset = self._buildParallelMapDataset(components, 1000, 100, 100)
# NOTE(mrry): Also test that the prefetching thread is cancelled correctly.
dataset = dataset.prefetch(100)
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for _ in range(3):
sess.run(get_next)
def testParallelMapUnspecifiedOutputSize(self):
components = np.array([1., 2., 3., np.nan, 5.]).astype(np.float32)
dataset = (dataset_ops.Dataset.from_tensor_slices(components)
.map(lambda x: array_ops.check_numerics(x, "message"),
num_parallel_calls=2))
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for _ in range(3):
sess.run(get_next)
def testParallelMapError(self):
components = np.array([1., 2., 3., np.nan, 5.]).astype(np.float32)
dataset = (dataset_ops.Dataset.from_tensor_slices(components)
.map(lambda x: array_ops.check_numerics(x, "message"),
num_parallel_calls=2))
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for _ in range(3):
sess.run(get_next)
# The 4th element is NaN, so `array_ops.check_numerics()` should fail.
with self.assertRaises(errors.InvalidArgumentError):
sess.run(get_next)
sess.run(get_next)
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testPrefetchError(self):
components = np.array([1., 2., 3., np.nan, 5.]).astype(np.float32)
dataset = (dataset_ops.Dataset.from_tensor_slices(components)
.map(lambda x: array_ops.check_numerics(x, "message"))
.prefetch(2))
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for _ in range(3):
sess.run(get_next)
# The 4th element is NaN, so `array_ops.check_numerics()` should fail.
with self.assertRaises(errors.InvalidArgumentError):
sess.run(get_next)
sess.run(get_next)
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testCaptureHashTable(self):
# NOTE(mrry): We must use the V2 variants of `HashTable`
# etc. because these produce a `tf.resource`-typed output that is
# compatible with the in-graph function implementation.
default_val = -1
keys = constant_op.constant(["brain", "salad", "surgery"])
values = constant_op.constant([0, 1, 2], dtypes.int64)
table = lookup_ops.HashTable(
lookup_ops.KeyValueTensorInitializer(keys, values), default_val)
input_sentences = dataset_ops.Dataset.from_tensor_slices(
["brain brain tank salad surgery", "surgery brain"])
iterator = (input_sentences
.map(lambda x: string_ops.string_split([x]).values)
.map(table.lookup)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(table.init)
sess.run(init_op)
sess.run(get_next)
sess.run(get_next)
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testCaptureQueue(self):
elements = np.random.randint(100, size=[200])
queue = data_flow_ops.FIFOQueue(200, dtypes.int64, shapes=[])
enqueue_op = queue.enqueue_many(elements)
close_op = queue.close()
iterator = (dataset_ops.Dataset.from_tensors(0).repeat(-1)
.map(lambda _: queue.dequeue()).make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(enqueue_op)
sess.run(close_op)
sess.run(init_op)
for element in elements:
self.assertEqual(element, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testCaptureSameResourceMultipleTimes(self):
elements = np.random.randint(100, size=[200])
queue = data_flow_ops.FIFOQueue(
200, dtypes.int64, shapes=[], shared_name="shared_queue")
queue_2 = data_flow_ops.FIFOQueue(
200, dtypes.int64, shapes=[], shared_name="shared_queue")
enqueue_op = queue.enqueue_many(elements)
close_op = queue.close()
iterator = (dataset_ops.Dataset.from_tensors(0).repeat(-1)
.map(lambda _: (queue.dequeue(), queue_2.dequeue()))
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(enqueue_op)
sess.run(close_op)
sess.run(init_op)
for i in range(100):
self.assertEqual(sorted([elements[i * 2], elements[i * 2 + 1]]),
sorted(sess.run(get_next)))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testCaptureVariable(self):
counter_var = variable_scope.get_variable(
"counter", (), dtypes.int32, use_resource=True)
iterator = (dataset_ops.Dataset.from_tensors(0).repeat(10)
.map(lambda _: counter_var.assign_add(1))
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(counter_var.initializer)
sess.run(init_op)
for i in range(10):
self.assertEqual(i, sess.run(counter_var))
self.assertEqual(i + 1, sess.run(get_next))
self.assertEqual(10, sess.run(counter_var))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
self.assertEqual(10, sess.run(counter_var))
def testCaptureUninitializedVariableError(self):
counter_var = variable_scope.get_variable(
"counter", (), dtypes.int32, use_resource=True)
iterator = (dataset_ops.Dataset.from_tensors(0).repeat(10)
.map(lambda _: counter_var.assign_add(1))
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
with self.assertRaises(errors.NotFoundError):
sess.run(get_next)
def testSeededStatefulOperatorIsProperlyStateful(self):
iterator = (dataset_ops.Dataset.from_tensors(0).repeat(10)
.map(lambda _: random_ops.random_uniform((), seed=11)).batch(2)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
random_values = []
with self.assertRaises(errors.OutOfRangeError):
while True:
random_values.extend(sess.run(get_next))
self.assertEqual(10, len(random_values))
self.assertGreater(np.abs(np.diff(random_values)).max(), 1e-6)
sess.run(init_op)
random_values_2 = []
with self.assertRaises(errors.OutOfRangeError):
while True:
random_values_2.extend(sess.run(get_next))
# Randomness is repeatable given same seed
self.assertAllClose(random_values, random_values_2)
def testMapDict(self):
iterator = (dataset_ops.Dataset.range(10)
.map(lambda x: {"foo": x * 2, "bar": x ** 2})
.map(lambda d: d["foo"] + d["bar"])
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
self.assertEqual(i * 2 + i ** 2, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testMapNamedtuple(self, count=10):
# construct dataset of tuples
labels = dataset_ops.Dataset.range(count)
images = labels.map(lambda l: -l)
dataset_tuple = dataset_ops.Dataset.zip((labels, images))
# convert dataset of tuples to dataset of namedtuples
example = namedtuple("Example", ["label", "image"])
dataset_namedtuple = dataset_tuple.map(example)
def preprocess_tuple(label, image):
image = 2 * image
return label, image
def preprocess_namedtuple(example):
return example._replace(image=2 * example.image)
# preprocess both datasets
dataset_tuple = dataset_tuple.map(preprocess_tuple)
dataset_namedtuple = dataset_namedtuple.map(preprocess_namedtuple)
next_tuple = dataset_tuple.make_one_shot_iterator().get_next()
next_namedtuple = dataset_namedtuple.make_one_shot_iterator().get_next()
# make sure both datasets contain the same data
with self.test_session() as sess:
for i in range(count):
tuple_, namedtuple_ = sess.run([next_tuple, next_namedtuple])
self.assertEqual(tuple_, namedtuple_)
self.assertEqual(tuple_, (i, -2 * i))
with self.assertRaises(errors.OutOfRangeError):
sess.run(next_namedtuple)
def testUseStepContainerInMap(self):
row = np.arange(6)
iterator = (
dataset_ops.Dataset.from_tensors(row)
.map(lambda elems: functional_ops.map_fn(lambda x: x * x, elems))
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
self.assertAllEqual(row ** 2, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testPrefetch(self):
# We will use this event to test that `_map_py_func()` has been
# invoked a certain number of times (6 times, to be exact) after
# consuming fewer elements from the iterator.
ev = threading.Event()
set_event_during_invocation = 5
def _map_py_func(x):
if x == set_event_during_invocation:
ev.set()
return x * x
def _map_fn(x):
return script_ops.py_func(_map_py_func, [x], x.dtype)
buffer_size_placeholder = array_ops.placeholder(dtypes.int64, shape=[])
iterator = (
dataset_ops.Dataset.range(100)
.map(_map_fn)
.prefetch(buffer_size_placeholder)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
# Simple test that prefetch yields the expected values in the
# expected order.
for buffer_size in [1, 10, 100, 1000]:
sess.run(init_op, feed_dict={buffer_size_placeholder: buffer_size})
for i in range(100):
self.assertEqual(i * i, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
# We can indirectly observe that varying the buffer size has the
# intended effect by observing when `ev` is set (on the 6th
# invocation of `_map_py_func()`).
# NOTE(mrry): We do not test with `buffer_size ==
# set_event_during_invocation`, because we must consume at least
# one element to start the prefetching.
for buffer_size in range(1, set_event_during_invocation):
event_will_be_set_after_consuming = (
set_event_during_invocation - buffer_size + 1)
ev.clear()
sess.run(init_op, feed_dict={buffer_size_placeholder: buffer_size})
for i in range(event_will_be_set_after_consuming):
self.assertFalse(ev.is_set())
self.assertEqual(i * i, sess.run(get_next))
ev.wait()
for i in range(event_will_be_set_after_consuming, 100):
self.assertEqual(i * i, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testReturnList(self):
iterator = (dataset_ops.Dataset.range(10)
.map(lambda x: [x, constant_op.constant(37.0)])
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
self.assertEqual((i, 37.0), sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testMultiOutputPyFunc(self):
# The `tf.py_func()` op returns a list of tensors for its outputs.
def _map_fn(x_tensor):
def _map_py_func(x):
return x, np.array(37.0, dtype=np.float64)
return script_ops.py_func(
_map_py_func, [x_tensor], [dtypes.int64, dtypes.float64])
iterator = (dataset_ops.Dataset.range(10)
.map(_map_fn)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
self.assertEqual((i, 37.0), sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def assertSparseValuesEqual(self, a, b):
self.assertAllEqual(a.indices, b.indices)
self.assertAllEqual(a.values, b.values)
self.assertAllEqual(a.dense_shape, b.dense_shape)
def testSparse(self):
def _sparse(i):
return sparse_tensor.SparseTensorValue(
indices=np.array([[0, 0]]),
values=(i * np.array([1])),
dense_shape=np.array([1, 1]))
iterator = (dataset_ops.Dataset.range(10)
.map(_sparse)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
actual = sess.run(get_next)
self.assertTrue(isinstance(actual, sparse_tensor.SparseTensorValue))
self.assertSparseValuesEqual(actual, _sparse(i))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testSparseChain(self):
def _sparse(i):
return sparse_tensor.SparseTensorValue(
indices=np.array([[0, 0]]),
values=(i * np.array([1])),
dense_shape=np.array([1, 1]))
def _check(i):
self.assertTrue(sparse_tensor.is_sparse(i))
return sparse_ops.sparse_concat(0, [i, i])
iterator = (
dataset_ops.Dataset.range(10).map(_sparse).map(_check)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
actual = sess.run(get_next)
self.assertTrue(isinstance(actual, sparse_tensor.SparseTensorValue))
self.assertSparseValuesEqual(actual, _check(_sparse(i)).eval())
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testParallelMapOutOfRangeError(self):
def raising_py_func(i):
if i == 100:
raise StopIteration()
else:
return i
iterator = (
dataset_ops.Dataset.range(105)
.map(lambda x: script_ops.py_func(raising_py_func, [x], dtypes.int64),
num_parallel_calls=2)
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(100):
self.assertEqual(i, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testConstantOutput(self):
iterator = (
dataset_ops.Dataset.range(10).map(lambda x: [x, "hello", 10])
.make_initializable_iterator())
init_op = iterator.initializer
get_next = iterator.get_next()
with self.test_session() as sess:
sess.run(init_op)
for i in range(10):
self.assertEqual((i, b"hello", 10), sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testWarnOnLookupTable(self):
def collecting_function(x):
_ = lookup_ops.HashTable(
lookup_ops.KeyValueTensorInitializer([], []), 0.0, name="t1")
return x
warnings.simplefilter("always")
with warnings.catch_warnings(record=True) as w:
_ = dataset_ops.Dataset.range(10).map(collecting_function)
# NOTE(mrry): Python 3 prints other warnings in addition to the one we are
# testing, so we search for the expected warning.
self.assertGreaterEqual(len(w), 1)
found_warning = False
for warning in w:
if ("Creating lookup tables inside a function passed to Dataset.map() is "
"not supported." in str(warning)):
found_warning = True
break
self.assertTrue(found_warning)
def testNestedDatasetError(self):
dataset = dataset_ops.Dataset.from_tensors([1.0, 2.0, 3.0])
with self.assertRaisesRegexp(
NotImplementedError, r"The Dataset.map\(\) transformation does not "
"currently support nested datasets as outputs."):
_ = dataset.map(dataset_ops.Dataset.from_tensor_slices)
def testReturnValueError(self):
dataset = dataset_ops.Dataset.from_tensors([1.0, 2.0, 3.0])
with self.assertRaisesRegexp(
TypeError, r"Unsupported return value from function passed to "
r"Dataset.map\(\): None."):
_ = dataset.map(lambda x: None)
def testBrokenFunctionErrorOnInitialization(self):
dataset = dataset_ops.Dataset.from_tensor_slices([1.0, 2.0, 3.0])
def broken_function(_):
"""A function deliberately designed to fail on instantiation."""
value = []
tensor_value = attr_value_pb2.AttrValue()
tensor_value.tensor.CopyFrom(
tensor_util.make_tensor_proto(
value, dtype=dtypes.float32, shape=[0], verify_shape=False))
dtype_value = attr_value_pb2.AttrValue(type=dtypes.int32.as_datatype_enum)
# Create a "Const" op with a `tf.float32` value and a `tf.int32` type
# attr.
const_tensor = ops.get_default_graph().create_op(
"Const", [], [dtypes.int32],
attrs={
"value": tensor_value,
"dtype": dtype_value
},
name="BrokenConst").outputs[0]
return const_tensor
dataset = dataset.map(broken_function)
iterator = dataset.make_initializable_iterator()
with self.test_session() as sess:
with self.assertRaisesRegexp(errors.InvalidArgumentError, "BrokenConst"):
sess.run(iterator.initializer)
# pylint: disable=g-long-lambda
@parameterized.named_parameters(
("Map", lambda dataset, func:
dataset_ops.MapDataset(dataset, func, use_inter_op_parallelism=False)),
("ParallelMap", lambda dataset, func:
dataset_ops.ParallelMapDataset(dataset, func, num_parallel_calls=1,
use_inter_op_parallelism=False)),
)
def testNoInterOpParallelism(self, make_dataset_fn):
dataset = dataset_ops.Dataset.from_tensors(0)
def _get_tid():
return np.int64(threading.current_thread().ident)
def _map_fn(_):
tids = []
for _ in range(10):
tids.append(script_ops.py_func(_get_tid, [], dtypes.int64))
return tids
dataset = make_dataset_fn(dataset, _map_fn)
iterator = dataset.make_one_shot_iterator()
get_next = iterator.get_next()
with self.test_session() as sess:
tids = sess.run(get_next)
self.assertTrue(all(tids[0] == tid for tid in tids))
# pylint: enable=g-long-lambda
class MapDatasetBenchmark(test.Benchmark):
def benchmarkChainOfMaps(self):
chain_lengths = [0, 1, 2, 5, 10, 20, 50]
for chain_length in chain_lengths:
for use_inter_op_parallelism in [False, True]:
with ops.Graph().as_default():
dataset = dataset_ops.Dataset.from_tensors(0).repeat(None)
for _ in range(chain_length):
dataset = dataset_ops.MapDataset(
dataset,
lambda x: x,
use_inter_op_parallelism=use_inter_op_parallelism)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
with session.Session() as sess:
for _ in range(5):
sess.run(next_element.op)
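# Each timed delta covers 100 iterator steps; dividing the median delta by 100 gives per-element wall time.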
deltas = []
for _ in range(100):
start = time.time()
for _ in range(100):
sess.run(next_element.op)
end = time.time()
deltas.append(end - start)
median_wall_time = np.median(deltas) / 100
print("Map dataset chain length%s: %d Median wall time: %f" %
(" (single threaded mode)" if not use_inter_op_parallelism
else "", chain_length, median_wall_time))
self.report_benchmark(
iters=1000,
wall_time=median_wall_time,
name="benchmark_map_dataset_chain_latency_%d%s" %
(chain_length, "_single_threaded"
if not use_inter_op_parallelism else ""))
def benchmarkMapFanOut(self):
fan_outs = [1, 2, 5, 10, 20, 50, 100]
for fan_out in fan_outs:
for use_inter_op_parallelism in [False, True]:
with ops.Graph().as_default():
dataset = dataset_ops.Dataset.from_tensors(
tuple(0 for _ in range(fan_out))).repeat(None)
dataset = dataset_ops.MapDataset(
dataset,
lambda *xs: xs,
use_inter_op_parallelism=use_inter_op_parallelism)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
with session.Session() as sess:
for _ in range(5):
sess.run(next_element[0].op)
deltas = []
for _ in range(100):
start = time.time()
for _ in range(100):
sess.run(next_element[0].op)
end = time.time()
deltas.append(end - start)
median_wall_time = np.median(deltas) / 100
print("Map dataset fan out%s: %d Median wall time: %f" %
(" (single threaded mode)" if not use_inter_op_parallelism
else "", fan_out, median_wall_time))
self.report_benchmark(
iters=1000,
wall_time=median_wall_time,
name="benchmark_map_dataset_fan_out_%d%s" %
(fan_out, "_single_threaded"
if not use_inter_op_parallelism else ""))
if __name__ == "__main__":
test.main()
|
workflows_scaling.py
|
import functools
import json
import os
import random
import sys
from threading import Thread
from uuid import uuid4
galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir, os.path.pardir))
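# Make Galaxy's lib/ and test/ directories importable when this script is run directly.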
sys.path[1:1] = [ os.path.join( galaxy_root, "lib" ), os.path.join( galaxy_root, "test" ) ]
try:
from argparse import ArgumentParser
except ImportError:
ArgumentParser = None
import requests
from bioblend import galaxy
from api import helpers, yaml_to_workflow
LONG_TIMEOUT = 1000000000
DESCRIPTION = "Script to exercise the workflow engine."
def main(argv=None):
if ArgumentParser is None:
raise Exception("Test requires Python 2.7")
arg_parser = ArgumentParser(description=DESCRIPTION)
arg_parser.add_argument("--api_key", default="testmasterapikey")
arg_parser.add_argument("--host", default="http://localhost:8080/")
arg_parser.add_argument("--collection_size", type=int, default=20)
arg_parser.add_argument("--workflow_depth", type=int, default=10)
arg_parser.add_argument("--two_outputs", default=False, action="store_true")
arg_parser.add_argument("--workflow_count", type=int, default=1)
args = arg_parser.parse_args(argv)
uuid = str(uuid4())
workflow_struct = _workflow_struct(args, uuid)
gi = _gi(args)
workflow = yaml_to_workflow.python_to_workflow(workflow_struct)
workflow_info = gi.workflows.import_workflow_json(workflow)
workflow_id = workflow_info["id"]
target = functools.partial(_run, args, gi, workflow_id, uuid)
threads = []
for i in range(args.workflow_count):
t = Thread(target=target)
t.daemon = True
t.start()
threads.append(t)
for t in threads:
t.join()
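# _run drives a single workflow invocation end to end: it creates a fresh
# history, uploads --collection_size dummy datasets as a list collection,
# maps that collection onto the workflow input identified by uuid, posts the
# invocation, and then blocks until the invocation finishes (or LONG_TIMEOUT
# expires).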
def _run(args, gi, workflow_id, uuid):
dataset_populator = GiDatasetPopulator(gi)
dataset_collection_populator = GiDatasetCollectionPopulator(gi)
history_id = dataset_populator.new_history()
contents = []
for i in range(args.collection_size):
contents.append("random dataset number #%d" % i)
hdca = dataset_collection_populator.create_list_in_history( history_id, contents=contents ).json()
label_map = {
uuid: {"src": "hdca", "id": hdca["id"]},
}
workflow_request = dict(
history="hist_id=%s" % history_id,
)
workflow_request[ "inputs" ] = json.dumps( label_map )
url = "workflows/%s/usage" % ( workflow_id )
invoke_response = dataset_populator._post( url, data=workflow_request ).json()
invocation_id = invoke_response["id"]
workflow_populator = GiWorkflowPopulator(gi)
workflow_populator.wait_for_workflow( workflow_id, invocation_id, history_id, timeout=LONG_TIMEOUT )
class GiPostGetMixin:
def _get(self, route):
return self._gi.make_get_request(self.__url(route))
def _post(self, route, data={}):
data = data.copy()
data['key'] = self._gi.key
return requests.post(self.__url(route), data=data)
def __url(self, route):
return self._gi.url + "/" + route
class GiDatasetPopulator(helpers.BaseDatasetPopulator, GiPostGetMixin):
def __init__(self, gi):
self._gi = gi
class GiDatasetCollectionPopulator(helpers.BaseDatasetCollectionPopulator, GiPostGetMixin):
def __init__(self, gi):
self._gi = gi
self.dataset_populator = GiDatasetPopulator(gi)
def _create_collection(self, payload):
create_response = self._post( "dataset_collections", data=payload )
return create_response
class GiWorkflowPopulator(helpers.BaseWorkflowPopulator, GiPostGetMixin):
def __init__(self, gi):
self._gi = gi
self.dataset_populator = GiDatasetPopulator(gi)
def _workflow_struct(args, input_uuid):
if args.two_outputs:
return _workflow_struct_two_outputs(args, input_uuid)
else:
return _workflow_struct_simple(args, input_uuid)
def _workflow_struct_simple(args, input_uuid):
workflow_struct = [
{"type": "input_collection", "uuid": input_uuid},
{"tool_id": "cat1", "state": {"input1": _link(0)}}
]
workflow_depth = args.workflow_depth
for i in range(workflow_depth):
link = str(i + 1) + "#out_file1"
workflow_struct.append(
{"tool_id": "cat1", "state": {"input1": _link(link)}}
)
return workflow_struct
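# Illustration (hypothetical values): with --workflow_depth 2 the simple
# struct produced above looks roughly like
#   [{"type": "input_collection", "uuid": "<uuid>"},
#    {"tool_id": "cat1", "state": {"input1": {"$link": 0}}},
#    {"tool_id": "cat1", "state": {"input1": {"$link": "1#out_file1"}}},
#    {"tool_id": "cat1", "state": {"input1": {"$link": "2#out_file1"}}}]
# i.e. a linear chain of cat1 steps, each consuming the previous step's
# out_file1, which yaml_to_workflow then converts into a Galaxy workflow.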
def _workflow_struct_two_outputs(args, input_uuid):
workflow_struct = [
{"type": "input_collection", "uuid": input_uuid},
{"tool_id": "cat1", "state": {"input1": _link(0), "input2": _link(0)}}
]
workflow_depth = args.workflow_depth
for i in range(workflow_depth):
link1 = str(i + 1) + "#out_file1"
link2 = str(i + 1) + "#out_file2"
workflow_struct.append(
{"tool_id": "cat1", "state": {"input1": _link(link1), "input2": _link(link2)}}
)
return workflow_struct
def _link(link):
return {"$link": link}
def _gi(args):
gi = galaxy.GalaxyInstance(args.host, key=args.api_key)
name = "wftest-user-%d" % random.randint(0, 1000000)
user = gi.users.create_local_user(name, "%s@galaxytesting.dev" % name, "pass123")
user_id = user["id"]
api_key = gi.users.create_user_apikey(user_id)
user_gi = galaxy.GalaxyInstance(args.host, api_key)
return user_gi
if __name__ == "__main__":
main()
|
darknet_websocket_demo.py
|
from ctypes import *
#from multiprocessing import Process, Queue
import queue
import time
from threading import Lock,Thread
from fastapi import FastAPI
from fastapi import Request
from fastapi import WebSocket, WebSocketDisconnect
import uvicorn
#from yolo_service import *
import socket
import random
from typing import List
import darknet
import cv2
import time
import io
import struct
import os
import numpy as np
import base64
import json
from jtracer.tracing import init_tracer
import pynng
from PIL import Image
from opentracing.propagation import Format
def convert2relative(bbox,darknet_height,darknet_width):
"""
    YOLO format uses relative coordinates for annotations
"""
x, y, w, h = bbox
_height = darknet_height
_width = darknet_width
return x/_width, y/_height, w/_width, h/_height
def convert2original(image, bbox,darknet_height,darknet_width):
x, y, w, h = convert2relative(bbox,darknet_height,darknet_width)
image_h, image_w, __ = image.shape
orig_x = int(x * image_w)
orig_y = int(y * image_h)
orig_width = int(w * image_w)
orig_height = int(h * image_h)
bbox_converted = (orig_x, orig_y, orig_width, orig_height)
return bbox_converted
class SuperbFrame:
def __init__(self,darknet_height,darknet_width):
self.image = None
self.results = None
self.darknet_image = darknet.make_image(darknet_width,darknet_height,3)
self.recv_timestamp = 0
self.send_timestamp = 0
self.inference_time = 0
self.final_image = None
self.bytes = None
self.span = None
def port_is_used(port,ip="0.0.0.0"):
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
try:
s.connect((ip,port))
s.shutdown(2)
return True
except Exception as e:
return False
app = FastAPI()
class ConnectionManager:
def __init__(self):
        # store the active websocket connection objects
self.active_connections: List[WebSocket] = []
self.ports = set()
self.port_lock = Lock()
async def connect(self, ws: WebSocket):
        # wait for the connection to be accepted
await ws.accept()
        # keep the websocket connection object
self.active_connections.append(ws)
def disconnect(self, ws: WebSocket):
        # remove the websocket object on close
self.active_connections.remove(ws)
manager = ConnectionManager()
@app.get("/get_port")
def get_port(request:Request):
while True:
manager.port_lock.acquire()
port_tmp = random.randint(int(os.getenv("SUPB_MIN_PORT")),int(os.getenv("SUPB_MAX_PORT")))
if port_tmp in manager.ports or port_is_used(port_tmp):
manager.port_lock.release()
continue
else:
manager.ports.add(port_tmp)
manager.port_lock.release()
return port_tmp # port_tmp is the key for a client
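# Port handout flow (as implemented above): a client first calls GET
# /get_port, which reserves a random unused port from the
# SUPB_MIN_PORT..SUPB_MAX_PORT range under port_lock; the client is then
# expected to open the websocket at /ws/{port} and push frames to the pynng
# Pair1 socket that the handler binds on that same port.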
def parse_data(data,tracer):
head_length, msg_length = struct.unpack("ii", data[0:8])
head_length, msg_length, msg_head, msg = struct.unpack("ii"+ str(head_length) + "s" + str(msg_length) + "s", data)
if head_length > 2:
span_dict = json.loads(msg_head)
span_ctx = tracer.extract(Format.TEXT_MAP, span_dict)
return span_ctx, msg
else:
return None, msg
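# Wire format expected by parse_data (a sketch of how a sender might build
# it; the variable names below are illustrative, not part of this file):
#   msg_head = json.dumps(span_context_dict).encode()          # tracing headers
#   msg      = struct.pack('iiid', h, w, c, send_ts) + frame_bytes
#   data     = struct.pack('ii', len(msg_head), len(msg)) + msg_head + msg
# With native alignment on typical platforms, 'iiid' packs to 24 bytes
# (4 padding bytes before the double), which is why the handler slices
# msg_content[0:24] as the header before unpacking the raw frame bytes.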
def send_index(send_queue, sock,keep_alive):
while keep_alive:
try:
span_reply = send_queue.get(block=False,timeout=20)
sock.send(span_reply)
except pynng.Timeout:
print("sock.send timeout")
except:
pass # no msg to send
def send_then_recv(input_address,send_queue,input_queue,tracer,darknet_width,darknet_height,sock,keep_alive):
#sock = pynng.Pair1(recv_timeout=100,send_timeout=100)
#sock.listen(input_address)
while keep_alive:
#try:
# span_reply = send_queue.get(block=False,timeout=20)
# sock.send(span_reply)
#except pynng.Timeout:
# print("sock.send timeout")
#except:
# pass # no msg to send
try:
msg = sock.recv()
except pynng.Timeout:
continue
recv_time = time.time()
newFrame = SuperbFrame(darknet_height,darknet_width)
newFrame.recv_timestamp = int(recv_time*1000.0) # in ms
# msg handling
span_ctx, msg_content = parse_data(msg,tracer)
if span_ctx is not None:
            newFrame.span = tracer.start_span('image_process',child_of=span_ctx)
header = msg_content[0:24]
hh,ww,cc,tt = struct.unpack('iiid',header)
newFrame.send_timestamp = int(tt*1000.0)
hh,ww,cc,tt,ss = struct.unpack('iiid'+str(hh*ww*cc)+'s',msg_content)
newFrame.image = cv2.cvtColor((np.frombuffer(ss,dtype=np.uint8)).reshape(hh,ww,cc), cv2.COLOR_BGR2RGB)
darknet.copy_image_from_bytes(newFrame.darknet_image,cv2.resize(newFrame.image,(darknet_width,darknet_height),interpolation=cv2.INTER_LINEAR).tobytes())
#if span_ctx is not None:
# newFrame.span.finish()
try:
input_queue.put(newFrame,block=False,timeout=100)
except:
print("input_queue is full, discard current msg")
continue
def keep_inference(send_queue,input_queue,result_queue,network,class_names,keep_alive):
while keep_alive:
try:
#print("get newFrame")
newFrame = input_queue.get(block=False,timeout=100)
except:
#print("inference get fail")
continue
prev_time = time.time()
newFrame.results = darknet.detect_image(network, class_names, newFrame.darknet_image, thresh=0.2)
newFrame.inference_time = int((time.time()-prev_time)*1000.0) # s -> ms
darknet.free_image(newFrame.darknet_image)
if newFrame.span is not None:
index = newFrame.span.get_baggage_item('index')
newFrame.span.finish()
try:
send_queue.put(index.encode())
#sock.send(index.encode())
except:
print("send_queue is full, discard current msg")
try:
result_queue.put(newFrame,block=False,timeout=10)
except:
print("result_queue is full, discard current msg")
continue
def generate_output(result_queue,need_bytes,keep_alive,class_colors,darknet_height,darknet_width,resizew=960,resizeh=480):
while keep_alive:
try:
newFrame = result_queue.get(block=False,timeout=30)
except:
continue
detections_adjusted = []
if newFrame is not None:
for label, confidence, bbox in newFrame.results:
bbox_adjusted = convert2original(newFrame.image, bbox,darknet_height,darknet_width)
detections_adjusted.append((str(label), confidence, bbox_adjusted))
image = darknet.draw_boxes(detections_adjusted, newFrame.image, class_colors)
cv2.cvtColor(image,cv2.COLOR_BGR2RGB)
newFrame.final_image = image
if need_bytes:
img = Image.fromarray(image).resize((resizew,resizeh))
img_byte_arr = io.BytesIO()
img.save(img_byte_arr, format='PNG')
img_byte_arr.seek(0)
newFrame.bytes = base64.b64encode(img_byte_arr.read()).decode()
return newFrame
else:
continue
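# Per-connection pipeline (as wired up in stream_handler below): one thread
# receives packed frames over the pynng Pair1 socket and queues them
# (send_then_recv), one thread runs darknet detection on queued frames
# (keep_inference), one thread echoes frame indices back to the sender
# (send_index), and the websocket coroutine drains result_queue through
# generate_output and pushes base64-encoded PNGs plus timing info to the
# browser.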
@app.websocket("/ws/{port}")# user is the received port_tmp
async def stream_handler(websocket: WebSocket, port: str):
print("a new websocket connected")
await manager.connect(websocket)
network,class_names,class_colors = darknet.load_network(
"./cfg/yolov4.cfg",
"./cfg/coco.data",
"./yolov4.weights",
batch_size=1
)
darknet_width = darknet.network_width(network)
darknet_height = darknet.network_height(network)
tracer = init_tracer("image-process")
input_queue = queue.Queue(maxsize=5)
result_queue = queue.Queue(maxsize=5)
send_queue = queue.Queue(maxsize=5)
input_address = "tcp://0.0.0.0:"+port
sock = pynng.Pair1(recv_timeout=100,send_timeout=100)
sock.listen(input_address)
keep_alive = True
p0 = Thread(target=send_then_recv,args=(input_address,send_queue,input_queue,tracer,darknet_width,darknet_height,sock,keep_alive))
p1 = Thread(target=keep_inference,args=(send_queue,input_queue,result_queue,network,class_names,keep_alive))
p2 = Thread(target=send_index,args=(send_queue,sock,keep_alive))
p0.start()
p1.start()
p2.start()
try:
while keep_alive:
            superbFrame = generate_output(result_queue,True,keep_alive,class_colors,darknet_height,darknet_width)
send1_time = int(time.time()*1000.0)
payload = {"img": "data:image/png;base64,%s"%(superbFrame.bytes),"send0_time":superbFrame.send_timestamp,"recv_time":superbFrame.recv_timestamp,"send1_time":send1_time}
await websocket.send_json(payload)
except WebSocketDisconnect:
keep_alive = False
p0.join()
p1.join()
p2.join()
sock.close()
manager.disconnect(websocket)
manager.ports.discard(port)
if __name__ == "__main__":
uvicorn.run("darknet_websocket_demo:app",host="0.0.0.0",port=int(os.getenv("SUPB_SERVICE_PORT")),log_level="info")
|
stride-ios-relay.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# stride-ios-relay.py - Stride TCP connection relay for iOS devices to Windows developer host (using usbmuxd)
#
# Copyright (c) .NET Foundation and Contributors (https://dotnetfoundation.org/ & https://stride3d.net) and Silicon Studio Corp. (https://www.siliconstudio.co.jp)
# Copyright (C) 2009 Hector Martin "marcan" <hector@marcansoft.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 or version 3.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import usbmux
import SocketServer
import select
from optparse import OptionParser
import sys
import threading
import time
import traceback
import socket
class SocketRelay(object):
def __init__(self, a, b, maxbuf=65535):
self.a = a
self.b = b
self.atob = ""
self.btoa = ""
self.maxbuf = maxbuf
def handle(self):
while True:
rlist = []
wlist = []
xlist = [self.a, self.b]
if self.atob:
wlist.append(self.b)
if self.btoa:
wlist.append(self.a)
if len(self.atob) < self.maxbuf:
rlist.append(self.a)
if len(self.btoa) < self.maxbuf:
rlist.append(self.b)
rlo, wlo, xlo = select.select(rlist, wlist, xlist)
if xlo:
return
if self.a in wlo:
n = self.a.send(self.btoa)
self.btoa = self.btoa[n:]
if self.b in wlo:
n = self.b.send(self.atob)
self.atob = self.atob[n:]
if self.a in rlo:
s = self.a.recv(self.maxbuf - len(self.atob))
if not s:
return
self.atob += s
if self.b in rlo:
s = self.b.recv(self.maxbuf - len(self.btoa))
if not s:
return
self.btoa += s
#print "Relay iter: %8d atob, %8d btoa, lists: %r %r %r"%(len(self.atob), len(self.btoa), rlo, wlo, xlo)
parser = OptionParser(usage="usage: %prog [OPTIONS] RemoteHost")
parser.add_option("-b", "--bufsize", dest='bufsize', action='store', metavar='KILOBYTES', type='int', default=16, help="specify buffer size for socket forwarding")
parser.add_option("-s", "--socket", dest='sockpath', action='store', metavar='PATH', type='str', default=None, help="specify the path of the usbmuxd socket")
options, args = parser.parse_args()
if len(args) != 1:
parser.print_help()
sys.exit(1)
alive = True
remotehost = args[0]
mux = usbmux.USBMux(options.sockpath)
class DeviceConnectionHelper():
def __init__(self, device):
self.device = device
def start_connection(self, device_sock):
try:
print "Connection opened with device, establishing connection to router (%s)"%(remotehost)
# Connect to router
router_sock = socket.socket()
router_sock.connect((remotehost, 31254))
print "Starting relay between iOS device and router"
# Forward connection between router and iOS device
fwd = SocketRelay(device_sock, router_sock, options.bufsize * 1024)
fwd.handle()
except:
traceback.print_exc(file=sys.stdout)
pass
finally:
print "Connection between iOS device and router has been interrupted"
device_sock.close()
router_sock.close()
def start_device(self):
self.device.alive = True
while self.device.alive and alive:
try:
device_sock = mux.connect(self.device, 31255)
# Start a thread for this connection
thread = threading.Thread(target = lambda: self.start_connection(device_sock))
thread.start()
except:
# Silently ignore exceptions (since we try to continuously connect to device)
pass
time.sleep(0.2)
def start_device_threaded(self):
thread = threading.Thread(target = self.start_device)
thread.start()
deviceNames = {
0x1290: 'iPhone',
0x1292: 'iPhone 3G',
0x1294: 'iPhone 3GS',
0x1297: 'iPhone 4 GSM',
0x129c: 'iPhone 4 CDMA',
0x12a0: 'iPhone 4S',
0x12a8: 'iPhone 5/6',
0x1291: 'iPod touch',
0x1293: 'iPod touch 2G',
0x1299: 'iPod touch 3G',
0x129e: 'iPod touch 4G',
0x129a: 'iPad',
0x129f: 'iPad 2 Wi-Fi',
0x12a2: 'iPad 2 GSM',
0x12a3: 'iPad 2 CDMA',
0x12a9: 'iPad 2 R2',
0x12a4: 'iPad 3 Wi-Fi',
0x12a5: 'iPad 3 CDMA',
0x12a6: 'iPad 3 Global',
0x129d: 'Apple TV 2G',
0x12a7: 'Apple TV 3G'
}
def device_name(device):
return deviceNames.get(device.usbprod, "Unknown(0x%04x)"%(device.usbprod))
def device_added(device):
# Try to connect to establish connection to device
print "Device connected: ID %d, Type %s (Serial %s)"%(device.devid, device_name(device), device.serial)
deviceConnectionHelper = DeviceConnectionHelper(device)
deviceConnectionHelper.start_device_threaded()
def device_removed(device):
print "Device removed: ID %d, Type %s (Serial %s)"%(device.devid, device_name(device), device.serial)
device.alive = False
print "Listening for iOS devices..."
mux.listener.callback_device_added = device_added
mux.listener.callback_device_removed = device_removed
alive = True
while alive:
try:
mux.process()
except:
alive = False
|
enhanced_csi.py
|
"""
Advanced CSI video services for NVIDIA Jetson products.
A set of utility services that can be used for capturing and gathering
statistics on CSI video streams. Services include real-time FPS statistics
on inbound frames from the camera and FPS on displaying frames for the
application. Other utilities include camera management and data display
functions.
"""
import threading
from collections import deque
from enum import Enum
import cv2
class CameraFailedToStart(RuntimeError):
"""Error when camera initialization fails."""
def __init__(self, cmd, backend, append=''):
"""Failure to start camera RuntimeError."""
self._cmd = cmd
self._backend = backend
self._append = append
def __str__(self):
"""Failure camera start up message."""
return '''Failed to open video capture for {} using backend {}. {}
Note: Check to make sure your camera is plugged in correctly.
Common errors during installation of CSI cameras include plugging the
strip in backwards and plugging the camera into the wrong port.
'''.format(self._cmd, self._backend, self._append)
class CameraConnectionLost(RuntimeError):
"""
Camera connection failure.
    Raised when the camera connection is lost after the connection has
    been established.
"""
def __init__(self):
"""Error message for camera connection lost."""
super().__init__(
"""Failed to read a frame from video capture.Connection to
the camera has been lost."""
)
class RepeatTimer(threading.Timer):
"""Timer to reset statistics at a specified interval."""
def run(self):
"""Run RepeatTimer method."""
while not self.finished.wait(self.interval):
self.function(*self.args, **self.kwargs)
class _BaseVideoStream:
def __init__(self, cmd, backend):
self._cmd = cmd
self._backend = backend
self.fps_timer = None
self._running = False
self.frames_read = 0
self.frames_displayed = 0
self.last_frames_read = 0
self.last_frames_displayed = 0
self._stream = None
self._thread = None
self._queue_depth = 2
self._frame_queue = deque(maxlen=self._queue_depth)
self._stopped = False
def start(self):
"""
Start reading frames from the video stream.
:returns: `self`
:raises: :class:`~edgeiq.edge_tools.CameraFailedToStart`
if video stream fails to open
"""
# initialize the video stream and read the first frame
# from the stream
try:
self._stream = cv2.VideoCapture(self._cmd, self._backend)
except RuntimeError:
raise CameraFailedToStart(self._cmd, self._backend)
if self._stream is None or not self._stream.isOpened():
raise CameraFailedToStart(self._cmd, self._backend,
'Stream not open.')
(grabbed, frame) = self._stream.read() # Attempt to grab a frame
if grabbed is False or frame is None:
raise CameraFailedToStart(self._cmd, self._backend,
'Failed to grab frame')
self._update_failure = threading.Event()
self._thread = threading.Thread(target=self._update, args=())
self._thread.start()
return self
@property
def fps(self):
"""
        The frame rate reported by the underlying video stream.
:type: float
:raises: `RuntimeError` if FPS cannot be queried
"""
fps = self._stream.get(cv2.CAP_PROP_FPS)
if fps == -1.0:
raise RuntimeError('Failed to get camera FPS!')
return fps
def update_fps_stats(self):
"""Update fps stats."""
self.last_frames_read = self.frames_read
self.last_frames_displayed = self.frames_displayed
self.frames_read = 0
self.frames_displayed = 0
def start_counting_fps(self):
"""Start fps counter to get camera input and dispaly stats."""
self.fps_timer = RepeatTimer(1.0, self.update_fps_stats)
self.fps_timer.start()
def _update(self):
"""Read frames as they're available."""
while True:
# if the thread indicator variable is set, stop the thread
if self._stopped:
return
# otherwise read the next frame from the stream
(grabbed, frame) = self._stream.read()
if grabbed is False or frame is None:
self._update_failure.set()
return
self.frames_read += 1
self._frame_queue.appendleft(frame)
def read(self):
"""
Return the most recent frame from the camera.
This function blocks on waiting for a new frame.
:returns: numpy array -- The frame that was read from the camera
"""
while True:
if self._update_failure.is_set():
raise CameraConnectionLost()
if len(self._frame_queue) > 0:
break
return self._frame_queue.pop()
def stop(self):
"""Stop and clean up the camera connection."""
self._stopped = True
if self._thread:
self._thread.join()
self._stream.release()
def release_fps_stats(self):
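        """Stop the FPS statistics timer started by start_counting_fps()."""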
self.fps_timer.cancel()
self.fps_timer.join()
def __enter__(self):
return self.start()
def __exit__(self, type, value, traceback):
self.stop()
class FrameRotation(Enum):
"""Amount of rotation applied to each frame in degrees."""
ROTATE_NONE = 0
ROTATE_90 = 90
ROTATE_180 = 180
class JetsonCameraMode(Enum):
"""
Sensor mode for Jetson CSI camera.
    Sensor mode applied to the CSI camera, which determines the input width,
    height, and framerate. The first part of each name identifies the Sony
    sensor (IMX219 or IMX477), the next two numbers are the input width and
    height, the third number is the framerate, and the fourth number is the
    camera sensor mode.
"""
IMX219_3264x2468_21_0 = 0
IMX219_3264x1848_28_1 = 1
IMX219_1920x1080_30_2 = 2
IMX219_1640x1232_30_3 = 3
IMX477_4032x3040_30_0 = 4
IMX477_1920x1080_60_1 = 5
IMX477_2560x1440_40_3 = 7
class JetsonVideoStream(_BaseVideoStream):
"""
Capture video frames from a CSI ribbon camera on NVIDIA Jetson.
`JetsonVideoStream` can be instantiated as a context manager::
with edgeiq.JetsonVideoStream() as video_stream:
...
To use `JetsonVideoStream` directly, use the
:func:`~edgeiq.edge_tools.JetsonVideoStream.start()` and
:func:`~edgeiq.edge_tools.JetsonVideoStream.stop()` functions::
video_stream = edgeiq.JetsonVideoStream().start()
...
video_stream.stop()
Typical usage::
with edgeiq.JetsonVideoStream() as video_stream:
while True:
frame = video_stream.read()
:type cam: integer
:param cam: The integer identifier of the camera.
:type rotation: :class:`~FrameRotation`
:param rotation: The rotation applied to each frame
:type camera_mode: :class:`~JetsonCameraMode`
:param camera_mode: The sensor mode for csi camera
:type display_width: integer
:param display_width: The output image width in pixels.
:type display_height: integer
:param display_height: The output image height in pixels.
"""
def __init__(
self, cam=0, rotation=FrameRotation.ROTATE_NONE,
camera_mode=JetsonCameraMode.IMX219_1920x1080_30_2,
display_width=640, display_height=480):
"""Initialize CSI camera."""
self._sensor_id = cam
self._rotation = rotation
self._sensor_mode = camera_mode
self._display_width = display_width
self._display_height = display_height
if self._rotation == FrameRotation.ROTATE_NONE:
flip = 0
elif self._rotation == FrameRotation.ROTATE_90:
flip = 1
elif self._rotation == FrameRotation.ROTATE_180:
flip = 2
else:
raise ValueError(
'Invalid input for rotation: {}'.format(self._rotation))
if self._sensor_mode == JetsonCameraMode.IMX219_3264x2468_21_0:
self._mode = 0
self._capture_width = 3264
self._capture_height = 2468
self._framerate = 21
elif self._sensor_mode == JetsonCameraMode.IMX219_3264x1848_28_1:
self._mode = 1
self._capture_width = 3264
self._capture_height = 1848
self._framerate = 28
elif self._sensor_mode == JetsonCameraMode.IMX219_1920x1080_30_2:
self._mode = 2
self._capture_width = 1920
self._capture_height = 1080
self._framerate = 30
elif self._sensor_mode == JetsonCameraMode.IMX219_1640x1232_30_3:
self._mode = 3
self._capture_width = 1640
self._capture_height = 1232
self._framerate = 30
elif self._sensor_mode == JetsonCameraMode.IMX477_4032x3040_30_0:
self._mode = 0
self._capture_width = 4032
self._capture_height = 3040
self._framerate = 30
elif self._sensor_mode == JetsonCameraMode.IMX477_1920x1080_60_1:
self._mode = 1
self._capture_width = 1920
self._capture_height = 1080
self._framerate = 60
elif self._sensor_mode == JetsonCameraMode.IMX477_2560x1440_40_3:
self._mode = 3
self._capture_width = 2560
self._capture_height = 1440
self._framerate = 40
else:
raise ValueError(
'Invalid input for camera_mode: {}'.format(
self._sensor_mode))
cmd = (
'nvarguscamerasrc sensor-id=%d sensor-mode=%d !'
'video/x-raw(memory:NVMM), '
'width=(int)%d, height=(int)%d, '
'format=(string)NV12, framerate=(fraction)%d/1 ! '
'nvvidconv flip-method=%d ! '
'video/x-raw, width=(int)%d, height=(int)%d,format=(string)BGRx ! '
'videoconvert ! '
'video/x-raw, format=(string)BGR ! appsink '
'wait-on-eos=false drop=true max-buffers=60' % (
self._sensor_id, self._mode,
self._capture_width, self._capture_height,
self._framerate, flip, self._display_width,
self._display_height))
backend = cv2.CAP_GSTREAMER
super(JetsonVideoStream, self).__init__(cmd=cmd, backend=backend)
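# For reference, with the default arguments the GStreamer pipeline assembled
# above is roughly:
#   nvarguscamerasrc sensor-id=0 sensor-mode=2 !
#   video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 !
#   nvvidconv flip-method=0 !
#   video/x-raw, width=640, height=480, format=BGRx ! videoconvert !
#   video/x-raw, format=BGR ! appsink wait-on-eos=false drop=true max-buffers=60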
def read_camera(camera_stream, monitor_fps):
"""
Read camera video stream and monitor fps in real time.
    This function reads the camera stream and overlays FPS information when
    monitor_fps is True.
    :type camera_stream: :class:`WebcamVideoStream`, :class:`JetsonVideoStream`,
        or :class:`GStreamerVideoStream`
    :param camera_stream: The video stream to read from.
    :type monitor_fps: boolean
    :param monitor_fps: True enables FPS statistics to be visible on the
        image
:returns: image -- Numpy array of image in BGR format
"""
camera_image = camera_stream.read()
if monitor_fps:
draw_label(camera_image, "Frames Displayed (PS): "+str(
camera_stream.last_frames_displayed), (10, 20))
draw_label(camera_image, "Frames Read (PS): "+str(
camera_stream.last_frames_read), (10, 40))
return camera_image
def draw_label(image, label_text, label_position):
"""
    Draw a label on an image.
    This function will place a label on the image at a specified position.
:type image: numpy array of image in BGR format
:param image: The image for label to be placed on.
:type label_text: string
:param label_text: Text string to be drawn on the image.
:type label_position: tuples of two values i.e. (X coordinate value,
Y coordinate value).
:param label_position: The coordinates of the bottom-left corner of
the text string in the image.
:returns: image -- numpy array of image in BGR format with label on it
"""
font_face = cv2.FONT_HERSHEY_SIMPLEX
scale = 0.5
color = (255, 255, 255)
# You can get the size of the string with cv2.getTextSize here
cv2.putText(image, label_text, label_position, font_face, scale, color, 1,
cv2.LINE_AA)
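# A minimal usage sketch, assuming a Jetson board with a CSI camera attached
# and a desktop session available for cv2.imshow; the window title below is
# illustrative.
if __name__ == "__main__":
    with JetsonVideoStream() as video_stream:
        video_stream.start_counting_fps()
        try:
            while True:
                frame = read_camera(video_stream, monitor_fps=True)
                video_stream.frames_displayed += 1
                cv2.imshow("CSI camera", frame)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break
        finally:
            video_stream.release_fps_stats()
            cv2.destroyAllWindows()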
|
graphql_client.py
|
import websocket
import threading
import random
import string
import json
import time
from six.moves import urllib
class GraphQLClient:
def __init__(self, endpoint, session=None):
self.endpoint = endpoint
self.session = session
self.token = None
self.headername = None
def execute(self, query, variables=None):
return self._send(query, variables)
def inject_token(self, token, headername='Authorization'):
self.token = token
self.headername = headername
def _send(self, query, variables):
data = {'query': query,
'variables': variables}
headers = {'Accept': 'application/json',
'Content-Type': 'application/json'}
if self.token is not None:
headers[self.headername] = '{}'.format(self.token)
if self.session is not None:
            req = None
            try:
number_of_trials = 3
for n in range(number_of_trials):
req = self.session.post(self.endpoint, json.dumps(
data).encode('utf-8'), headers=headers)
if req.status_code == 200:
break
time.sleep(1)
return req.json()
except Exception as e:
print('Request failed with error:\n')
if req:
print(req.content)
print('')
raise e
req = urllib.request.Request(
self.endpoint, json.dumps(data).encode('utf-8'), headers)
try:
response = urllib.request.urlopen(req)
str_json = response.read().decode('utf-8')
print('>>>>', str_json)
return json.loads(str_json)
except urllib.error.HTTPError as e:
print((e.read()))
print('')
raise e
GQL_WS_SUBPROTOCOL = "graphql-ws"
class SubscriptionGraphQLClient:
"""
A simple GraphQL client that works over Websocket as the transport
protocol, instead of HTTP.
This follows the Apollo protocol.
https://github.com/apollographql/subscriptions-transport-ws/blob/master/PROTOCOL.md
"""
def __init__(self, url):
self.ws_url = url
self._conn = websocket.create_connection(self.ws_url,
on_message=self._on_message,
subprotocols=[GQL_WS_SUBPROTOCOL])
self._conn.on_message = self._on_message
self._subscription_running = False
self._st_id = None
def _on_message(self, message):
data = json.loads(message)
# skip keepalive messages
if data['type'] != 'ka':
print(message)
def _conn_init(self, headers=None):
payload = {
'type': 'connection_init',
'payload': {'headers': headers}
}
self._conn.send(json.dumps(payload))
self._conn.recv()
def _start(self, payload):
_id = gen_id()
frame = {'id': _id, 'type': 'start', 'payload': payload}
self._conn.send(json.dumps(frame))
return _id
def _stop(self, _id):
payload = {'id': _id, 'type': 'stop'}
self._conn.send(json.dumps(payload))
return self._conn.recv()
def query(self, query, variables=None, headers=None):
self._conn_init(headers)
payload = {'headers': headers, 'query': query, 'variables': variables}
_id = self._start(payload)
res = self._conn.recv()
self._stop(_id)
return res
def subscribe(self, query, variables=None, headers=None, callback=None):
self._conn_init(headers)
payload = {'headers': headers, 'query': query, 'variables': variables}
_cc = self._on_message if not callback else callback
_id = self._start(payload)
def subs(_cc):
self._subscription_running = True
while self._subscription_running:
r = json.loads(self._conn.recv())
if r['type'] == 'error' or r['type'] == 'complete':
print(r)
self.stop_subscribe(_id)
break
elif r['type'] != 'ka':
_cc(_id, r)
time.sleep(1)
self._st_id = threading.Thread(target=subs, args=(_cc,))
self._st_id.start()
return _id
def stop_subscribe(self, _id):
self._subscription_running = False
self._st_id.join()
self._stop(_id)
def close(self):
self._conn.close()
# generate random alphanumeric id
def gen_id(size=6, chars=string.ascii_letters + string.digits):
return ''.join(random.choice(chars) for _ in range(size))
|
bridge.py
|
#!/usr/bin/env python3
import argparse
import atexit
import carla # pylint: disable=import-error
import math
import numpy as np
import time
import threading
from cereal import log
from typing import Any
import cereal.messaging as messaging
from common.params import Params
from common.realtime import Ratekeeper, DT_DMON
from lib.can import can_function
from selfdrive.car.honda.values import CruiseButtons
from selfdrive.test.helpers import set_params_enabled
parser = argparse.ArgumentParser(description='Bridge between CARLA and openpilot.')
parser.add_argument('--joystick', action='store_true')
parser.add_argument('--low_quality', action='store_true')
parser.add_argument('--town', type=str, default='Town04_Opt')
parser.add_argument('--spawn_point', dest='num_selected_spawn_point',
type=int, default=16)
args = parser.parse_args()
W, H = 1164, 874
REPEAT_COUNTER = 5
PRINT_DECIMATION = 100
STEER_RATIO = 15.
pm = messaging.PubMaster(['roadCameraState', 'sensorEvents', 'can', "gpsLocationExternal"])
sm = messaging.SubMaster(['carControl','controlsState'])
class VehicleState:
def __init__(self):
self.speed = 0
self.angle = 0
self.bearing_deg = 0.0
self.vel = carla.Vector3D()
self.cruise_button= 0
self.is_engaged=False
def steer_rate_limit(old, new):
# Rate limiting to 0.5 degrees per step
limit = 0.5
if new > old + limit:
return old + limit
elif new < old - limit:
return old - limit
else:
return new
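# Worked example: with old=10.0 the limiter maps new=11.2 to 10.5 (clamped to
# +0.5 deg per step), new=9.8 to 9.8 (within the limit), and new=9.3 to 9.5.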
frame_id = 0
def cam_callback(image):
global frame_id
img = np.frombuffer(image.raw_data, dtype=np.dtype("uint8"))
img = np.reshape(img, (H, W, 4))
img = img[:, :, [0, 1, 2]].copy()
dat = messaging.new_message('roadCameraState')
dat.roadCameraState = {
"frameId": image.frame,
"image": img.tobytes(),
"transform": [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
}
pm.send('roadCameraState', dat)
frame_id += 1
def imu_callback(imu, vehicle_state):
vehicle_state.bearing_deg = math.degrees(imu.compass)
dat = messaging.new_message('sensorEvents', 2)
dat.sensorEvents[0].sensor = 4
dat.sensorEvents[0].type = 0x10
dat.sensorEvents[0].init('acceleration')
dat.sensorEvents[0].acceleration.v = [imu.accelerometer.x, imu.accelerometer.y, imu.accelerometer.z]
# copied these numbers from locationd
dat.sensorEvents[1].sensor = 5
dat.sensorEvents[1].type = 0x10
dat.sensorEvents[1].init('gyroUncalibrated')
dat.sensorEvents[1].gyroUncalibrated.v = [imu.gyroscope.x, imu.gyroscope.y, imu.gyroscope.z]
pm.send('sensorEvents', dat)
def panda_state_function():
pm = messaging.PubMaster(['pandaState'])
while 1:
dat = messaging.new_message('pandaState')
dat.valid = True
dat.pandaState = {
'ignitionLine': True,
'pandaType': "blackPanda",
'controlsAllowed': True,
'safetyModel': 'hondaNidec'
}
pm.send('pandaState', dat)
time.sleep(0.5)
def gps_callback(gps, vehicle_state):
dat = messaging.new_message('gpsLocationExternal')
# transform vel from carla to NED
# north is -Y in CARLA
velNED = [
-vehicle_state.vel.y, # north/south component of NED is negative when moving south
vehicle_state.vel.x, # positive when moving east, which is x in carla
vehicle_state.vel.z,
]
dat.gpsLocationExternal = {
"flags": 1, # valid fix
"verticalAccuracy": 1.0,
"speedAccuracy": 0.1,
"vNED": velNED,
"bearingDeg": vehicle_state.bearing_deg,
"latitude": gps.latitude,
"longitude": gps.longitude,
"altitude": gps.altitude,
"source": log.GpsLocationData.SensorSource.ublox,
}
pm.send('gpsLocationExternal', dat)
def fake_driver_monitoring():
pm = messaging.PubMaster(['driverState','driverMonitoringState'])
while 1:
# dmonitoringmodeld output
dat = messaging.new_message('driverState')
dat.driverState.faceProb = 1.0
pm.send('driverState', dat)
# dmonitoringd output
dat = messaging.new_message('driverMonitoringState')
dat.driverMonitoringState = {
"faceDetected": True,
"isDistracted": False,
"awarenessStatus": 1.,
}
pm.send('driverMonitoringState', dat)
time.sleep(DT_DMON)
def can_function_runner(vs):
i = 0
while 1:
can_function(pm, vs.speed, vs.angle, i, vs.cruise_button, vs.is_engaged)
time.sleep(0.01)
i+=1
def bridge(q):
# setup CARLA
client = carla.Client("127.0.0.1", 2000)
client.set_timeout(10.0)
world = client.load_world(args.town)
if args.low_quality:
world.unload_map_layer(carla.MapLayer.Foliage)
world.unload_map_layer(carla.MapLayer.Buildings)
world.unload_map_layer(carla.MapLayer.ParkedVehicles)
world.unload_map_layer(carla.MapLayer.Particles)
world.unload_map_layer(carla.MapLayer.Props)
world.unload_map_layer(carla.MapLayer.StreetLights)
blueprint_library = world.get_blueprint_library()
world_map = world.get_map()
vehicle_bp = blueprint_library.filter('vehicle.tesla.*')[1]
spawn_points = world_map.get_spawn_points()
assert len(spawn_points) > args.num_selected_spawn_point, \
f'''No spawn point {args.num_selected_spawn_point}, try a value between 0 and
{len(spawn_points)} for this town.'''
spawn_point = spawn_points[args.num_selected_spawn_point]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
max_steer_angle = vehicle.get_physics_control().wheels[0].max_steer_angle
# make tires less slippery
# wheel_control = carla.WheelPhysicsControl(tire_friction=5)
physics_control = vehicle.get_physics_control()
physics_control.mass = 2326
# physics_control.wheels = [wheel_control]*4
physics_control.torque_curve = [[20.0, 500.0], [5000.0, 500.0]]
physics_control.gear_switch_time = 0.0
vehicle.apply_physics_control(physics_control)
blueprint = blueprint_library.find('sensor.camera.rgb')
blueprint.set_attribute('image_size_x', str(W))
blueprint.set_attribute('image_size_y', str(H))
blueprint.set_attribute('fov', '70')
blueprint.set_attribute('sensor_tick', '0.05')
transform = carla.Transform(carla.Location(x=0.8, z=1.13))
camera = world.spawn_actor(blueprint, transform, attach_to=vehicle)
camera.listen(cam_callback)
vehicle_state = VehicleState()
# reenable IMU
imu_bp = blueprint_library.find('sensor.other.imu')
imu = world.spawn_actor(imu_bp, transform, attach_to=vehicle)
imu.listen(lambda imu: imu_callback(imu, vehicle_state))
gps_bp = blueprint_library.find('sensor.other.gnss')
gps = world.spawn_actor(gps_bp, transform, attach_to=vehicle)
gps.listen(lambda gps: gps_callback(gps, vehicle_state))
def destroy():
print("clean exit")
imu.destroy()
camera.destroy()
vehicle.destroy()
print("done")
atexit.register(destroy)
# launch fake car threads
threading.Thread(target=panda_state_function).start()
threading.Thread(target=fake_driver_monitoring).start()
threading.Thread(target=can_function_runner, args=(vehicle_state,)).start()
# can loop
rk = Ratekeeper(100, print_delay_threshold=0.05)
# init
throttle_ease_out_counter = REPEAT_COUNTER
brake_ease_out_counter = REPEAT_COUNTER
steer_ease_out_counter = REPEAT_COUNTER
vc = carla.VehicleControl(throttle=0, steer=0, brake=0, reverse=False)
is_openpilot_engaged = False
throttle_out = steer_out = brake_out = 0
throttle_op = steer_op = brake_op = 0
throttle_manual = steer_manual = brake_manual = 0
old_steer = old_brake = old_throttle = 0
throttle_manual_multiplier = 0.7 #keyboard signal is always 1
brake_manual_multiplier = 0.7 #keyboard signal is always 1
steer_manual_multiplier = 45 * STEER_RATIO #keyboard signal is always 1
while 1:
# 1. Read the throttle, steer and brake from op or manual controls
# 2. Set instructions in Carla
# 3. Send current carstate to op via can
cruise_button = 0
throttle_out = steer_out = brake_out = 0
throttle_op = steer_op = brake_op = 0
throttle_manual = steer_manual = brake_manual = 0
# --------------Step 1-------------------------------
if not q.empty():
message = q.get()
m = message.split('_')
if m[0] == "steer":
steer_manual = float(m[1])
is_openpilot_engaged = False
if m[0] == "throttle":
throttle_manual = float(m[1])
is_openpilot_engaged = False
if m[0] == "brake":
brake_manual = float(m[1])
is_openpilot_engaged = False
if m[0] == "reverse":
#in_reverse = not in_reverse
cruise_button = CruiseButtons.CANCEL
is_openpilot_engaged = False
if m[0] == "cruise":
if m[1] == "down":
cruise_button = CruiseButtons.DECEL_SET
is_openpilot_engaged = True
if m[1] == "up":
cruise_button = CruiseButtons.RES_ACCEL
is_openpilot_engaged = True
if m[1] == "cancel":
cruise_button = CruiseButtons.CANCEL
is_openpilot_engaged = False
throttle_out = throttle_manual * throttle_manual_multiplier
steer_out = steer_manual * steer_manual_multiplier
brake_out = brake_manual * brake_manual_multiplier
#steer_out = steer_out
# steer_out = steer_rate_limit(old_steer, steer_out)
old_steer = steer_out
old_throttle = throttle_out
old_brake = brake_out
# print('message',old_throttle, old_steer, old_brake)
if is_openpilot_engaged:
sm.update(0)
throttle_op = sm['carControl'].actuators.gas #[0,1]
brake_op = sm['carControl'].actuators.brake #[0,1]
steer_op = sm['controlsState'].steeringAngleDesiredDeg # degrees [-180,180]
throttle_out = throttle_op
steer_out = steer_op
brake_out = brake_op
steer_out = steer_rate_limit(old_steer, steer_out)
old_steer = steer_out
else:
if throttle_out==0 and old_throttle>0:
if throttle_ease_out_counter>0:
throttle_out = old_throttle
throttle_ease_out_counter += -1
else:
throttle_ease_out_counter = REPEAT_COUNTER
old_throttle = 0
if brake_out==0 and old_brake>0:
if brake_ease_out_counter>0:
brake_out = old_brake
brake_ease_out_counter += -1
else:
brake_ease_out_counter = REPEAT_COUNTER
old_brake = 0
if steer_out==0 and old_steer!=0:
if steer_ease_out_counter>0:
steer_out = old_steer
steer_ease_out_counter += -1
else:
steer_ease_out_counter = REPEAT_COUNTER
old_steer = 0
# --------------Step 2-------------------------------
steer_carla = steer_out / (max_steer_angle * STEER_RATIO * -1)
steer_carla = np.clip(steer_carla, -1,1)
steer_out = steer_carla * (max_steer_angle * STEER_RATIO * -1)
old_steer = steer_carla * (max_steer_angle * STEER_RATIO * -1)
vc.throttle = throttle_out/0.6
vc.steer = steer_carla
vc.brake = brake_out
vehicle.apply_control(vc)
# --------------Step 3-------------------------------
vel = vehicle.get_velocity()
speed = math.sqrt(vel.x**2 + vel.y**2 + vel.z**2) # in m/s
vehicle_state.speed = speed
vehicle_state.vel = vel
vehicle_state.angle = steer_out
vehicle_state.cruise_button = cruise_button
vehicle_state.is_engaged = is_openpilot_engaged
if rk.frame%PRINT_DECIMATION == 0:
print("frame: ", "engaged:", is_openpilot_engaged, "; throttle: ", round(vc.throttle, 3), "; steer(c/deg): ", round(vc.steer, 3), round(steer_out, 3), "; brake: ", round(vc.brake, 3))
rk.keep_time()
def go(q: Any):
while 1:
try:
bridge(q)
except RuntimeError:
print("Restarting bridge...")
if __name__ == "__main__":
# make sure params are in a good state
set_params_enabled()
params = Params()
params.delete("Offroad_ConnectivityNeeded")
params.put("CalibrationParams", '{"calib_radians": [0,0,0], "valid_blocks": 20}')
from multiprocessing import Process, Queue
q: Any = Queue()
p = Process(target=go, args=(q,))
p.daemon = True
p.start()
if args.joystick:
# start input poll for joystick
from lib.manual_ctrl import wheel_poll_thread
wheel_poll_thread(q)
else:
# start input poll for keyboard
from lib.keyboard_ctrl import keyboard_poll_thread
keyboard_poll_thread(q)
|
test_collection.py
|
import numpy
import pandas as pd
import pytest
from pymilvus import DataType
from base.client_base import TestcaseBase
from utils.util_log import test_log as log
from common import common_func as cf
from common import common_type as ct
from common.common_type import CaseLabel, CheckTasks
from utils.utils import *
from common import constants as cons
prefix = "collection"
exp_name = "name"
exp_schema = "schema"
exp_num = "num_entities"
exp_primary = "primary"
exp_shards_num = "shards_num"
default_schema = cf.gen_default_collection_schema()
default_binary_schema = cf.gen_default_binary_collection_schema()
default_shards_num = 2
uid_count = "collection_count"
tag = "collection_count_tag"
uid_stats = "get_collection_stats"
uid_create = "create_collection"
uid_describe = "describe_collection"
uid_drop = "drop_collection"
uid_has = "has_collection"
uid_list = "list_collections"
uid_load = "load_collection"
field_name = default_float_vec_field_name
default_single_query = {
"data": gen_vectors(1, default_dim),
"anns_field": default_float_vec_field_name,
"param": {"metric_type": "L2", "params": {"nprobe": 10}},
"limit": default_top_k,
}
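# default_single_query describes one ANN search request: a single random
# vector searched against the default float-vector field with the L2 metric,
# nprobe=10, and the default top-k limit.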
class TestCollectionParams(TestcaseBase):
""" Test case of collection interface """
@pytest.fixture(scope="function", params=ct.get_invalid_strs)
def get_none_removed_invalid_strings(self, request):
if request.param is None:
pytest.skip("None schema is valid")
yield request.param
@pytest.fixture(scope="function", params=ct.get_invalid_strs)
def get_invalid_type_fields(self, request):
if isinstance(request.param, list):
pytest.skip("list is valid fields")
yield request.param
@pytest.fixture(scope="function", params=cf.gen_all_type_fields())
def get_unsupported_primary_field(self, request):
if request.param.dtype == DataType.INT64:
pytest.skip("int64 type is valid primary key")
yield request.param
@pytest.fixture(scope="function", params=ct.get_invalid_strs)
def get_invalid_dim(self, request):
if request.param == 1:
pytest.skip("1 is valid dim")
yield request.param
@pytest.mark.tags(CaseLabel.L0)
def test_collection(self):
"""
target: test collection with default schema
method: create collection with default schema
expected: assert collection property
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=default_schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema, exp_num: 0,
exp_primary: ct.default_int64_field_name})
assert c_name in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_empty_name(self):
"""
target: test collection with empty name
method: create collection with an empty name
expected: raise exception
"""
self._connect()
c_name = ""
error = {ct.err_code: 1, ct.err_msg: f'`collection_name` value is illegal'}
self.collection_wrap.init_collection(c_name, schema=default_schema, check_task=CheckTasks.err_res,
check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
@pytest.mark.parametrize("name", [[], 1, [1, "2", 3], (1,), {1: 1}, None])
def test_collection_illegal_name(self, name):
"""
target: test collection with illegal name
method: create collection with illegal name
expected: raise exception
"""
self._connect()
error = {ct.err_code: 1, ct.err_msg: "`collection_name` value {} is illegal".format(name)}
self.collection_wrap.init_collection(name, schema=default_schema, check_task=CheckTasks.err_res,
check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("name", ["12-s", "12 s", "(mn)", "中文", "%$#", "a".join("a" for i in range(256))])
def test_collection_invalid_name(self, name):
"""
target: test collection with invalid name
method: create collection with invalid name
expected: raise exception
"""
self._connect()
error = {ct.err_code: 1, ct.err_msg: "Invalid collection name: {}".format(name)}
self.collection_wrap.init_collection(name, schema=default_schema, check_task=CheckTasks.err_res,
check_items=error)
@pytest.mark.tags(CaseLabel.L0)
def test_collection_dup_name(self):
"""
target: test collection with dup name
        method: create collection with dup name and no schema or data
expected: collection properties consistent
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
self.collection_wrap.init_collection(collection_w.name)
assert collection_w.name == self.collection_wrap.name
assert collection_w.schema == self.collection_wrap.schema
assert collection_w.num_entities == self.collection_wrap.num_entities
assert collection_w.name in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L2)
def test_collection_dup_name_with_desc(self):
"""
target: test collection with dup name
method: 1. default schema with desc 2. dup name collection
expected: desc consistent
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(description=ct.collection_desc)
collection_w = self.init_collection_wrap(name=c_name, schema=schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
self.collection_wrap.init_collection(c_name,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
assert collection_w.description == self.collection_wrap.description
@pytest.mark.tags(CaseLabel.L1)
def test_collection_dup_name_new_schema(self):
"""
target: test collection with dup name and new schema
method: 1.create collection with default schema
2. collection with dup name and new schema
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
fields = [cf.gen_int64_field(is_primary=True)]
schema = cf.gen_collection_schema(fields=fields)
error = {ct.err_code: 0, ct.err_msg: "The collection already exist, but the schema is not the same as the "
"schema passed in."}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_dup_name_new_primary(self):
"""
target: test collection with dup name and new primary_field schema
method: 1.collection with default schema
2. collection with same fields and new primary_field schema
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
int_field_one = cf.gen_int64_field()
int_field_two = cf.gen_int64_field(name="int2")
fields = [int_field_one, int_field_two, cf.gen_float_vec_field()]
schema = cf.gen_collection_schema(fields, primary_field=int_field_one.name)
collection_w = self.init_collection_wrap(name=c_name, schema=schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema,
exp_primary: int_field_one.name})
new_schema = cf.gen_collection_schema(fields, primary_field=int_field_two.name)
error = {ct.err_code: 0, ct.err_msg: "The collection already exist, but the schema is not the same as the "
"schema passed in."}
self.collection_wrap.init_collection(c_name, schema=new_schema, check_task=CheckTasks.err_res,
check_items=error)
assert collection_w.primary_field.name == int_field_one.name
@pytest.mark.tags(CaseLabel.L1)
def test_collection_dup_name_new_dim(self):
"""
target: test collection with dup name and new dim schema
method: 1. default schema 2. schema with new dim
expected: raise exception
"""
self._connect()
new_dim = 120
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
schema = cf.gen_default_collection_schema()
new_fields = cf.gen_float_vec_field(dim=new_dim)
schema.fields[-1] = new_fields
error = {ct.err_code: 0, ct.err_msg: "The collection already exist, but the schema is not the same as the "
"schema passed in."}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
dim = collection_w.schema.fields[-1].params['dim']
assert dim == ct.default_dim
@pytest.mark.tags(CaseLabel.L2)
def test_collection_dup_name_invalid_schema_type(self, get_none_removed_invalid_strings):
"""
target: test collection with dup name and invalid schema
method: 1. default schema 2. invalid schema
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
error = {ct.err_code: 0, ct.err_msg: "Schema type must be schema.CollectionSchema"}
schema = get_none_removed_invalid_strings
self.collection_wrap.init_collection(collection_w.name, schema=schema,
check_task=CheckTasks.err_res, check_items=error)
assert collection_w.name == c_name
@pytest.mark.tags(CaseLabel.L1)
def test_collection_dup_name_same_schema(self):
"""
target: test collection with dup name and same schema
method: dup name and same schema
        expected: two collection objects are available
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, schema=default_schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
self.collection_wrap.init_collection(name=c_name, schema=default_schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
assert collection_w.name == self.collection_wrap.name
@pytest.mark.tags(CaseLabel.L2)
def test_collection_none_schema(self):
"""
target: test collection with none schema
method: create collection with none schema
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
error = {ct.err_code: 0, ct.err_msg: "Should be passed into the schema"}
self.collection_wrap.init_collection(c_name, schema=None, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_invalid_type_schema(self, get_none_removed_invalid_strings):
"""
target: test collection with invalid schema
method: create collection with non-CollectionSchema type schema
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
error = {ct.err_code: 0, ct.err_msg: "Schema type must be schema.CollectionSchema"}
self.collection_wrap.init_collection(c_name, schema=get_none_removed_invalid_strings,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_invalid_type_fields(self, get_invalid_type_fields):
"""
target: test collection with invalid fields type, non-list
method: create collection schema with non-list invalid fields
expected: exception
"""
self._connect()
fields = get_invalid_type_fields
error = {ct.err_code: 0, ct.err_msg: "The fields of schema must be type list"}
self.collection_schema_wrap.init_collection_schema(fields=fields,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_with_unknown_type(self):
"""
target: test collection with unknown type
method: create with DataType.UNKNOWN
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "Field dtype must be of DataType"}
self.field_schema_wrap.init_field_schema(name="unknown", dtype=DataType.UNKNOWN,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
@pytest.mark.parametrize("name", [[], 1, (1,), {1: 1}, "12-s"])
def test_collection_invalid_type_field(self, name):
"""
target: test collection with invalid field name
method: invalid string name
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
field, _ = self.field_schema_wrap.init_field_schema(name=name, dtype=5, is_primary=True)
vec_field = cf.gen_float_vec_field()
schema = cf.gen_collection_schema(fields=[field, vec_field])
error = {ct.err_code: 1, ct.err_msg: "expected one of: bytes, unicode"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("name", ["12-s", "12 s", "(mn)", "中文", "%$#", "a".join("a" for i in range(256))])
def test_collection_invalid_field_name(self, name):
"""
target: test collection with invalid field name
method: invalid string name
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
field, _ = self.field_schema_wrap.init_field_schema(name=name, dtype=DataType.INT64, is_primary=True)
vec_field = cf.gen_float_vec_field()
schema = cf.gen_collection_schema(fields=[field, vec_field])
error = {ct.err_code: 1, ct.err_msg: "Invalid field name"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_none_field_name(self):
"""
target: test field schema with None name
method: None field name
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
field, _ = self.field_schema_wrap.init_field_schema(name=None, dtype=DataType.INT64, is_primary=True)
schema = cf.gen_collection_schema(fields=[field, cf.gen_float_vec_field()])
error = {ct.err_code: 1, ct.err_msg: "You should specify the name of field"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("dtype", [6, [[]], {}, (), "", "a"])
def test_collection_invalid_field_type(self, dtype):
"""
target: test collection with invalid field type
method: invalid DataType
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "Field dtype must be of DataType"}
self.field_schema_wrap.init_field_schema(name="test", dtype=dtype, is_primary=True,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_field_dtype_float_value(self):
"""
target: test collection with float type
method: create field with float type
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
field, _ = self.field_schema_wrap.init_field_schema(name=ct.default_int64_field_name, dtype=5.0,
is_primary=True)
schema = cf.gen_collection_schema(fields=[field, cf.gen_float_vec_field()])
error = {ct.err_code: 0, ct.err_msg: "Field type must be of DataType!"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_empty_fields(self):
"""
target: test collection with empty fields
method: create collection with fields = []
expected: exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema(fields=[], primary_field=ct.default_int64_field_name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_dup_field(self):
"""
target: test collection with dup field name
method: Two FieldSchema have same name
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
field_one = cf.gen_int64_field(is_primary=True)
field_two = cf.gen_int64_field()
schema = cf.gen_collection_schema(fields=[field_one, field_two, cf.gen_float_vec_field()])
error = {ct.err_code: 1, ct.err_msg: "duplicated field name"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
assert not self.utility_wrap.has_collection(c_name)[0]
@pytest.mark.tags(CaseLabel.L0)
@pytest.mark.parametrize("field", [cf.gen_float_vec_field(), cf.gen_binary_vec_field()])
def test_collection_only_vector_field(self, field):
"""
target: test collection just with vec field
method: create schema with only a vector field (float or binary)
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe"}
self.collection_schema_wrap.init_collection_schema([field], check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_multi_float_vectors(self):
"""
target: test collection with multi float vectors
method: create collection with two float-vec fields
expected: raise exception (not supported yet)
"""
# 1. connect
self._connect()
# 2. create collection with multiple vectors
c_name = cf.gen_unique_str(prefix)
fields = [cf.gen_int64_field(is_primary=True), cf.gen_float_field(),
cf.gen_float_vec_field(dim=default_dim), cf.gen_float_vec_field(name="tmp", dim=default_dim)]
schema = cf.gen_collection_schema(fields=fields)
err_msg = "multiple vector fields is not supported"
self.collection_wrap.init_collection(c_name, schema=schema,
check_task=CheckTasks.err_res,
check_items={"err_code": 1, "err_msg": err_msg})[0]
@pytest.mark.tags(CaseLabel.L1)
@pytest.mark.skip("https://github.com/milvus-io/milvus/issues/12680")
def test_collection_mix_vectors(self):
"""
target: test collection with mix vectors
method: create with float and binary vec
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
fields = [cf.gen_int64_field(is_primary=True), cf.gen_float_vec_field(), cf.gen_binary_vec_field()]
schema = cf.gen_collection_schema(fields=fields, auto_id=True)
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L0)
def test_collection_without_vectors(self):
"""
target: test collection without vectors
method: create collection only with int field
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_collection_schema([cf.gen_int64_field(is_primary=True)])
error = {ct.err_code: 0, ct.err_msg: "No vector field is found."}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_without_primary_field(self):
"""
target: test collection without primary field
method: no primary field specified in collection schema and fields
expected: raise exception
"""
self._connect()
int_fields, _ = self.field_schema_wrap.init_field_schema(name=ct.default_int64_field_name, dtype=DataType.INT64)
vec_fields, _ = self.field_schema_wrap.init_field_schema(name=ct.default_float_vec_field_name,
dtype=DataType.FLOAT_VECTOR, dim=ct.default_dim)
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema([int_fields, vec_fields],
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_is_primary_false(self):
"""
target: test collection with all is_primary false
method: set all fields is_primary=False
expected: raise exception
"""
self._connect()
fields = [cf.gen_int64_field(is_primary=False), cf.gen_float_field(is_primary=False),
cf.gen_float_vec_field(is_primary=False)]
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema(fields, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("is_primary", ct.get_invalid_strs)
def test_collection_invalid_is_primary(self, is_primary):
"""
target: test collection with invalid primary
method: define field with is_primary=non-bool
expected: raise exception
"""
self._connect()
name = cf.gen_unique_str(prefix)
error = {ct.err_code: 0, ct.err_msg: "Param is_primary must be bool type"}
self.field_schema_wrap.init_field_schema(name=name, dtype=DataType.INT64, is_primary=is_primary,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("primary_field", ["12-s", "12 s", "(mn)", "中文", "%$#", "a".join("a" for i in range(256))])
def test_collection_invalid_primary_field(self, primary_field):
"""
target: test collection with invalid primary_field
method: specify invalid string primary_field in collection schema
expected: raise exception
"""
self._connect()
fields = [cf.gen_int64_field(), cf.gen_float_vec_field()]
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema(fields=fields, primary_field=primary_field,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("primary_field", [[], 1, [1, "2", 3], (1,), {1: 1}, None])
def test_collection_non_string_primary_field(self, primary_field):
"""
target: test collection with non-string primary_field
method: primary_field type is not string
expected: raise exception
"""
self._connect()
fields = [cf.gen_int64_field(), cf.gen_float_vec_field()]
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema(fields, primary_field=primary_field,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_not_existed_primary_field(self):
"""
target: test collection with not exist primary field
method: specify not existed field as primary_field
expected: raise exception
"""
self._connect()
fake_field = cf.gen_unique_str()
fields = [cf.gen_int64_field(), cf.gen_float_vec_field()]
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_schema_wrap.init_collection_schema(fields, primary_field=fake_field,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L0)
def test_collection_primary_in_schema(self):
"""
target: test collection with primary field
method: specify primary field in CollectionSchema
expected: collection.primary_field
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(primary_field=ct.default_int64_field_name)
self.collection_wrap.init_collection(c_name, schema=schema)
assert self.collection_wrap.primary_field.name == ct.default_int64_field_name
@pytest.mark.tags(CaseLabel.L0)
def test_collection_primary_in_field(self):
"""
target: test collection with primary field
method: specify primary field in FieldSchema
expected: collection.primary_field
"""
self._connect()
fields = [cf.gen_int64_field(is_primary=True), cf.gen_float_field(), cf.gen_float_vec_field()]
schema, _ = self.collection_schema_wrap.init_collection_schema(fields)
self.collection_wrap.init_collection(cf.gen_unique_str(prefix), schema=schema)
assert self.collection_wrap.primary_field.name == ct.default_int64_field_name
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_unsupported_primary_field(self, get_unsupported_primary_field):
"""
target: test collection with unsupported primary field type
method: specify non-int64 as primary field
expected: raise exception
"""
self._connect()
field = get_unsupported_primary_field
vec_field = cf.gen_float_vec_field(name="vec")
error = {ct.err_code: 1, ct.err_msg: "Primary key type must be DataType.INT64."}
self.collection_schema_wrap.init_collection_schema(fields=[field, vec_field], primary_field=field.name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_multi_primary_fields(self):
"""
target: test collection with multi primary
method: collection with two primary fields
expected: raise exception
"""
self._connect()
int_field_one = cf.gen_int64_field(is_primary=True)
int_field_two = cf.gen_int64_field(name="int2", is_primary=True)
error = {ct.err_code: 0, ct.err_msg: "Primary key field can only be one."}
self.collection_schema_wrap.init_collection_schema(
fields=[int_field_one, int_field_two, cf.gen_float_vec_field()],
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_primary_inconsistent(self):
"""
target: test collection with different primary field setting
method: 1. set A field is_primary 2. set primary_field is B
expected: raise exception
"""
self._connect()
int_field_one = cf.gen_int64_field(is_primary=True)
int_field_two = cf.gen_int64_field(name="int2")
fields = [int_field_one, int_field_two, cf.gen_float_vec_field()]
error = {ct.err_code: 0, ct.err_msg: "Primary key field can only be one"}
self.collection_schema_wrap.init_collection_schema(fields, primary_field=int_field_two.name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_primary_consistent(self):
"""
target: test collection with both collection schema and field schema
method: 1. set A field is_primary 2. set primary_field is A
expected: verify primary field
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
int_field_one = cf.gen_int64_field(is_primary=True)
schema = cf.gen_collection_schema(fields=[int_field_one, cf.gen_float_vec_field()],
primary_field=int_field_one.name)
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L0)
@pytest.mark.parametrize("auto_id", [True, False])
def test_collection_auto_id_in_field_schema(self, auto_id):
"""
target: test collection with auto_id in field schema
method: specify auto_id True in field schema
expected: verify schema's auto_id
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
int_field = cf.gen_int64_field(is_primary=True, auto_id=auto_id)
vec_field = cf.gen_float_vec_field(name='vec')
schema, _ = self.collection_schema_wrap.init_collection_schema([int_field, vec_field])
assert schema.auto_id == auto_id
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L0)
@pytest.mark.parametrize("auto_id", [True, False])
def test_collection_auto_id_in_collection_schema(self, auto_id):
"""
target: test collection with auto_id in collection schema
method: specify auto_id True in collection schema
expected: verify schema auto_id and collection schema
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
int_field = cf.gen_int64_field(is_primary=True)
vec_field = cf.gen_float_vec_field(name='vec')
schema, _ = self.collection_schema_wrap.init_collection_schema([int_field, vec_field], auto_id=auto_id)
assert schema.auto_id == auto_id
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L2)
def test_collection_auto_id_non_primary_field(self):
"""
target: test collection set auto_id in non-primary field
method: set auto_id=True in non-primary field
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "auto_id can only be specified on the primary key field"}
self.field_schema_wrap.init_field_schema(name=ct.default_int64_field_name, dtype=DataType.INT64, auto_id=True,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_auto_id_false_non_primary(self):
"""
target: test collection set auto_id in non-primary field
method: set auto_id=False in non-primary field
expected: verify schema auto_id is False
"""
self._connect()
int_field_one = cf.gen_int64_field(is_primary=True)
int_field_two = cf.gen_int64_field(name='int2', auto_id=False)
fields = [int_field_one, int_field_two, cf.gen_float_vec_field()]
schema, _ = self.collection_schema_wrap.init_collection_schema(fields)
assert not schema.auto_id
@pytest.mark.tags(CaseLabel.L2)
def test_collection_auto_id_inconsistent(self):
"""
target: test collection auto_id with both collection schema and field schema
method: 1.set primary field auto_id=True in field schema 2.set auto_id=False in collection schema
expected: raise exception
"""
self._connect()
int_field = cf.gen_int64_field(is_primary=True, auto_id=True)
vec_field = cf.gen_float_vec_field(name='vec')
error = {ct.err_code: 0, ct.err_msg: "The auto_id of the collection is inconsistent with "
"the auto_id of the primary key field"}
self.collection_schema_wrap.init_collection_schema([int_field, vec_field], auto_id=False,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("auto_id", [True, False])
def test_collection_auto_id_consistent(self, auto_id):
"""
target: test collection auto_id with both collection schema and field schema
method: set auto_id=True/False both field and schema
expected: verify auto_id
"""
self._connect()
int_field = cf.gen_int64_field(is_primary=True, auto_id=auto_id)
vec_field = cf.gen_float_vec_field(name='vec')
schema, _ = self.collection_schema_wrap.init_collection_schema([int_field, vec_field], auto_id=auto_id)
assert schema.auto_id == auto_id
@pytest.mark.tags(CaseLabel.L2)
def test_collection_auto_id_none_in_field(self):
"""
target: test collection with auto_id is None
method: set auto_id=None
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "Param auto_id must be bool type"}
self.field_schema_wrap.init_field_schema(name=ct.default_int64_field_name, dtype=DataType.INT64,
is_primary=True,
auto_id=None, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("auto_id", ct.get_invalid_strs)
def test_collection_invalid_auto_id(self, auto_id):
"""
target: test collection with invalid auto_id
method: define field with auto_id=non-bool
expected: raise exception
"""
self._connect()
int_field = cf.gen_int64_field(is_primary=True)
vec_field = cf.gen_float_vec_field(name='vec')
error = {ct.err_code: 0, ct.err_msg: "Param auto_id must be bool type"}
self.collection_schema_wrap.init_collection_schema([int_field, vec_field], auto_id=auto_id,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_multi_fields_auto_id(self):
"""
target: test collection auto_id with multi fields
method: specify auto_id=True for multi int64 fields
expected: raise exception
"""
self._connect()
error = {ct.err_code: 0, ct.err_msg: "auto_id can only be specified on the primary key field"}
cf.gen_int64_field(is_primary=True, auto_id=True)
self.field_schema_wrap.init_field_schema(name="int", dtype=DataType.INT64, auto_id=True,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("dtype", [DataType.FLOAT_VECTOR, DataType.BINARY_VECTOR])
def test_collection_vector_without_dim(self, dtype):
"""
target: test collection without dimension
method: define vector field without dim
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
float_vec_field, _ = self.field_schema_wrap.init_field_schema(name="vec", dtype=dtype)
schema = cf.gen_collection_schema(fields=[cf.gen_int64_field(is_primary=True), float_vec_field])
error = {ct.err_code: 1, ct.err_msg: "dimension is not defined in field type params"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_vector_invalid_dim(self, get_invalid_dim):
"""
target: test collection with invalid dimension
method: define float-vec field with invalid dimension
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
float_vec_field = cf.gen_float_vec_field(dim=get_invalid_dim)
schema = cf.gen_collection_schema(fields=[cf.gen_int64_field(is_primary=True), float_vec_field])
error = {ct.err_code: 1, ct.err_msg: f'invalid dim: {get_invalid_dim}'}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("dim", [-1, 0, 32769])
def test_collection_vector_out_bounds_dim(self, dim):
"""
target: test collection with out of bounds dim
method: invalid dim -1, 0 and 32769
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
float_vec_field = cf.gen_float_vec_field(dim=dim)
schema = cf.gen_collection_schema(fields=[cf.gen_int64_field(is_primary=True), float_vec_field])
error = {ct.err_code: 1, ct.err_msg: "invalid dimension: {}. should be in range 1 ~ 32768".format(dim)}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_non_vector_field_dim(self):
"""
target: test collection with dim for non-vector field
method: define int64 field with dim
expected: no exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
int_field, _ = self.field_schema_wrap.init_field_schema(name=ct.default_int64_field_name, dtype=DataType.INT64,
dim=ct.default_dim)
float_vec_field = cf.gen_float_vec_field()
schema = cf.gen_collection_schema(fields=[int_field, float_vec_field],
primary_field=ct.default_int64_field_name)
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L1)
def test_collection_desc(self):
"""
target: test collection with description
method: create with description
expected: assert default description
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(description=ct.collection_desc)
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.xfail(reason="exception not Milvus Exception")
def test_collection_none_desc(self):
"""
target: test collection with none description
method: create with none description
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
schema = cf.gen_default_collection_schema(description=None)
error = {ct.err_code: 1, ct.err_msg: "None has type NoneType, but expected one of: bytes, unicode"}
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_long_desc(self):
"""
target: test collection with long desc
method: create with long desc
expected: create collection successfully
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
desc = "a".join("a" for _ in range(256))
schema = cf.gen_default_collection_schema(description=desc)
self.collection_wrap.init_collection(c_name, schema=schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
@pytest.mark.tags(CaseLabel.L0)
def test_collection_binary(self):
"""
target: test collection with binary-vec
method: create collection with binary field
expected: assert binary field
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=default_binary_schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_binary_schema})
assert c_name in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L0)
def test_collection_shards_num_with_default_value(self):
"""
target: test collection with shards_num
method: create collection with default shards_num
expected: no exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=default_schema, shards_num=default_shards_num,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_shards_num: default_shards_num})
assert c_name in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L0)
@pytest.mark.parametrize("shards_num", [-256, 0, 10, 256])
def test_collection_shards_num_with_not_default_value(self, shards_num):
"""
target: test collection with shards_num
method: create collection with a non-default shards_num
expected: no exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=default_schema, shards_num=shards_num,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_shards_num: shards_num})
assert c_name in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L2)
def test_collection_shards_num_with_error_type(self):
"""
target: test collection with a wrong-type shards_num
method: create collection with a string shards_num
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
error_type_shards_num = "2" # suppose to be int rather than str
error = {ct.err_code: -1, ct.err_msg: f"expected one of: int, long"}
self.collection_wrap.init_collection(c_name, schema=default_schema, shards_num=error_type_shards_num,
check_task=CheckTasks.err_res,
check_items=error)
class TestCollectionOperation(TestcaseBase):
"""
******************************************************************
The following cases are used to test collection interface operations
******************************************************************
"""
# def teardown_method(self):
#     if self.collection_wrap is not None and self.collection_wrap.collection is not None:
#         self.collection_wrap.drop()
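# Note on the wrapper convention used in the cases below (informational sketch,
# based on how the wrappers are used in this module): a call such as
#     self.collection_wrap.init_collection(c_name, schema=default_schema,
#                                           check_task=CheckTasks.err_res, check_items=error)
# is assumed to forward the request to pymilvus, run the named check against
# `check_items` (e.g. the expected error code/message), and return a 2-tuple
# whose first element is the raw result; that is why several tests unpack
# calls as `res, _ = ...`.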
@pytest.mark.tags(CaseLabel.L2)
def test_collection_without_connection(self):
"""
target: test collection without connection
method: 1. create collection after the connection is removed
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
self.connection_wrap.remove_connection(ct.default_alias)
res_list, _ = self.connection_wrap.list_connections()
assert ct.default_alias not in res_list
error = {ct.err_code: 0, ct.err_msg: 'should create connect first'}
self.collection_wrap.init_collection(c_name, schema=default_schema,
check_task=CheckTasks.err_res, check_items=error)
assert self.collection_wrap.collection is None
@pytest.mark.tags(CaseLabel.L1)
def test_collection_multi_create_drop(self):
"""
target: test cycle creation and deletion of multiple collections
method: in a loop, collections are created and deleted sequentially
expected: no exception
"""
self._connect()
c_num = 20
for _ in range(c_num):
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=default_schema,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
self.collection_wrap.drop()
assert c_name not in self.utility_wrap.list_collections()[0]
@pytest.mark.tags(CaseLabel.L1)
def test_collection_dup_name_drop(self):
"""
target: test collection with dup name, and drop
method: 1. two dup name collection object
2. one object drop collection
expected: collection dropped
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
self.collection_wrap.init_collection(c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
self.collection_wrap.drop()
assert not self.utility_wrap.has_collection(c_name)[0]
error = {ct.err_code: 1, ct.err_msg: f'HasPartition failed: can\'t find collection: {c_name}'}
collection_w.has_partition("p", check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_collection_after_drop(self):
"""
target: test create collection after create and drop
method: 1. create collection a 2. drop a 3. re-create a
expected: no exception
"""
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
collection_w.drop()
assert not self.utility_wrap.has_collection(collection_w.name)[0]
self.init_collection_wrap(name=c_name, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
assert self.utility_wrap.has_collection(c_name)[0]
@pytest.mark.tags(CaseLabel.L1)
def test_collection_all_datatype_fields(self):
"""
target: test create collection with all dataType fields
method: create collection with all dataType schema
expected: create successfully
"""
self._connect()
fields = []
for k, v in DataType.__members__.items():
if v and v != DataType.UNKNOWN and v != DataType.FLOAT_VECTOR and v != DataType.BINARY_VECTOR:
field, _ = self.field_schema_wrap.init_field_schema(name=k.lower(), dtype=v)
fields.append(field)
fields.append(cf.gen_float_vec_field())
schema, _ = self.collection_schema_wrap.init_collection_schema(fields,
primary_field=ct.default_int64_field_name)
c_name = cf.gen_unique_str(prefix)
self.collection_wrap.init_collection(c_name, schema=schema, check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: schema})
class TestCollectionDataframe(TestcaseBase):
"""
******************************************************************
The following cases are used to test construct_from_dataframe
******************************************************************
"""
@pytest.fixture(scope="function", params=ct.get_invalid_strs)
def get_non_df(self, request):
if request.param is None:
pytest.skip("skip None")
yield request.param
@pytest.mark.tags(CaseLabel.L0)
def test_construct_from_dataframe(self):
"""
target: test collection with dataframe data
method: create collection and insert with dataframe
expected: collection num entities equal to nb
"""
conn = self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(ct.default_nb)
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
conn.flush([c_name])
assert self.collection_wrap.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_construct_from_binary_dataframe(self):
"""
target: test binary collection with dataframe
method: create binary collection with dataframe
expected: collection num entities equal to nb
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df, _ = cf.gen_default_binary_dataframe_data(nb=ct.default_nb)
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_binary_schema})
assert self.collection_wrap.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_none_dataframe(self):
"""
target: test create collection from a None dataframe
method: pass None instead of a dataframe
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
error = {ct.err_code: 0, ct.err_msg: "Dataframe can not be None."}
self.collection_wrap.construct_from_dataframe(c_name, None, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_dataframe_only_column(self):
"""
target: test collection with a dataframe that has only columns
method: dataframe has column names but no rows
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = pd.DataFrame(columns=[ct.default_int64_field_name, ct.default_float_vec_field_name])
error = {ct.err_code: 0, ct.err_msg: "Cannot infer schema from empty dataframe"}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_inconsistent_dataframe(self):
"""
target: test collection with data inconsistent
method: create and insert with inconsistent data
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
# one field different type df
mix_data = [(1, 2., [0.1, 0.2]), (2, 3., 4)]
df = pd.DataFrame(data=mix_data, columns=list("ABC"))
error = {ct.err_code: 0, ct.err_msg: "The data in the same column must be of the same type"}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field='A', check_task=CheckTasks.err_res,
check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_non_dataframe(self, get_non_df):
"""
target: test create collection from an invalid (non-dataframe) object
method: pass a non-dataframe object to construct the collection
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
error = {ct.err_code: 0, ct.err_msg: "Data type must be pandas.DataFrame."}
df = get_non_df
self.collection_wrap.construct_from_dataframe(c_name, df, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_data_type_dataframe(self):
"""
target: test collection with a dataframe containing an unsupported data type
method: create with a dataframe that has a datetime column
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = pd.DataFrame({"date": pd.date_range('20210101', periods=3), ct.default_int64_field_name: [1, 2, 3]})
error = {ct.err_code: 0, ct.err_msg: "Cannot infer schema from empty dataframe."}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_from_invalid_field_name(self):
"""
target: test collection with invalid field name
method: create with invalid field name dataframe
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = pd.DataFrame({'%$#': cf.gen_vectors(3, 2), ct.default_int64_field_name: [1, 2, 3]})
error = {ct.err_code: 1, ct.err_msg: "Invalid field name"}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_none_primary_field(self):
"""
target: test collection with none primary field
method: primary_field is none
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(ct.default_nb)
error = {ct.err_code: 0, ct.err_msg: "Schema must have a primary key field."}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=None,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_not_existed_primary_field(self):
"""
target: test collection with not existed primary field
method: primary field not existed
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(ct.default_nb)
error = {ct.err_code: 0, ct.err_msg: "Primary field must in dataframe."}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=c_name,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L2)
def test_construct_with_none_auto_id(self):
"""
target: test construct with auto_id=None
method: pass auto_id=None when constructing from dataframe
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(ct.default_nb)
error = {ct.err_code: 0, ct.err_msg: "Param auto_id must be bool type"}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
auto_id=None, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_construct_auto_id_true_insert(self):
"""
target: test construct with true auto_id
method: auto_id=True and insert values
expected: raise exception
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(nb=100)
error = {ct.err_code: 0, ct.err_msg: "Auto_id is True, primary field should not have data."}
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
auto_id=True, check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_construct_auto_id_true_no_insert(self):
"""
target: test construct with true auto_id
method: auto_id=True and no ids inserted (primary field values are all None)
expected: verify num entities
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data()
# df.drop(ct.default_int64_field_name, axis=1, inplace=True)
df[ct.default_int64_field_name] = None
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
auto_id=True)
assert self.collection_wrap.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_construct_none_value_auto_id_true(self):
"""
target: test construct with none value, auto_id
method: df primary field with none value, auto_id=true
expected: verify auto-generated primary keys and num entities
"""
self._connect()
nb = 100
df = cf.gen_default_dataframe_data(nb)
df.iloc[:, 0] = numpy.NaN
res, _ = self.collection_wrap.construct_from_dataframe(cf.gen_unique_str(prefix), df,
primary_field=ct.default_int64_field_name, auto_id=True)
mutation_res = res[1]
assert cf._check_primary_keys(mutation_res.primary_keys, 100)
assert self.collection_wrap.num_entities == nb
@pytest.mark.tags(CaseLabel.L1)
def test_construct_auto_id_false(self):
"""
target: test construct with false auto_id
method: auto_id=False, primary_field correct
expected: verify auto_id
"""
self._connect()
c_name = cf.gen_unique_str(prefix)
df = cf.gen_default_dataframe_data(ct.default_nb)
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
auto_id=False)
assert not self.collection_wrap.schema.auto_id
assert self.collection_wrap.num_entities == ct.default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_construct_none_value_auto_id_false(self):
"""
target: test construct with none value, auto_id
method: df primary field with none value, auto_id=false
expected: raise exception
"""
self._connect()
nb = 100
df = cf.gen_default_dataframe_data(nb)
df.iloc[:, 0] = numpy.NaN
error = {ct.err_code: 0, ct.err_msg: "Primary key type must be DataType.INT64"}
self.collection_wrap.construct_from_dataframe(cf.gen_unique_str(prefix), df,
primary_field=ct.default_int64_field_name, auto_id=False,
check_task=CheckTasks.err_res, check_items=error)
@pytest.mark.tags(CaseLabel.L1)
def test_construct_auto_id_false_same_values(self):
"""
target: test construct with false auto_id and same value
method: auto_id=False, primary field same values
expected: verify num entities
"""
self._connect()
nb = 100
df = cf.gen_default_dataframe_data(nb)
df.iloc[1:, 0] = 1
res, _ = self.collection_wrap.construct_from_dataframe(cf.gen_unique_str(prefix), df,
primary_field=ct.default_int64_field_name, auto_id=False)
collection_w = res[0]
assert collection_w.num_entities == nb
mutation_res = res[1]
assert mutation_res.primary_keys == df[ct.default_int64_field_name].values.tolist()
@pytest.mark.tags(CaseLabel.L1)
def test_construct_auto_id_false_negative_values(self):
"""
target: test construct with negative values
method: auto_id=False, primary field values are negative
expected: verify num entities
"""
self._connect()
nb = 100
df = cf.gen_default_dataframe_data(nb)
new_values = pd.Series(data=[i for i in range(0, -nb, -1)])
df[ct.default_int64_field_name] = new_values
self.collection_wrap.construct_from_dataframe(cf.gen_unique_str(prefix), df,
primary_field=ct.default_int64_field_name, auto_id=False)
assert self.collection_wrap.num_entities == nb
@pytest.mark.tags(CaseLabel.L1)
def test_construct_from_dataframe_dup_name(self):
"""
target: test collection with dup name and insert dataframe
method: create collection with dup name, none schema, dataframe
expected: both collection objects are correct
"""
conn = self._connect()
c_name = cf.gen_unique_str(prefix)
collection_w = self.init_collection_wrap(name=c_name, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
df = cf.gen_default_dataframe_data(ct.default_nb)
self.collection_wrap.construct_from_dataframe(c_name, df, primary_field=ct.default_int64_field_name,
check_task=CheckTasks.check_collection_property,
check_items={exp_name: c_name, exp_schema: default_schema})
conn.flush([collection_w.name])
assert collection_w.num_entities == ct.default_nb
assert collection_w.num_entities == self.collection_wrap.num_entities
class TestCollectionCount:
"""
params are different nb values; a given nb may or may not trigger a merge
"""
@pytest.fixture(
scope="function",
params=[
1,
1000,
2001
],
)
def insert_count(self, request):
yield request.param
"""
generate valid create_index params
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
return request.param
@pytest.mark.tags(CaseLabel.L2)
def test_count_without_connection(self, collection, dis_connect):
"""
target: test count_entities, without connection
method: call count_entities with correct params on a disconnected instance
expected: count_entities raises an exception
"""
with pytest.raises(Exception) as e:
dis_connect.count_entities(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_collection_count_no_vectors(self, connect, collection):
"""
target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0
"""
stats = connect.get_collection_stats(collection)
assert stats[row_count] == 0
class TestCollectionCountIP:
"""
params are different nb values; a given nb may or may not trigger a merge
"""
@pytest.fixture(
scope="function",
params=[
1,
1000,
2001
],
)
def insert_count(self, request):
yield request.param
"""
generate valid create_index params
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
request.param.update({"metric_type": "IP"})
return request.param
@pytest.mark.tags(CaseLabel.L1)
def test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
"""
target: test count_entities, after index has been created
method: add vectors in db, create index, then call count_entities with correct params
expected: count_entities equals the number of entities just inserted
"""
entities = gen_entities(insert_count)
connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
stats = connect.get_collection_stats(collection)
assert stats[row_count] == insert_count
class TestCollectionCountBinary:
"""
params are different nb values; a given nb may or may not trigger a merge
"""
@pytest.fixture(
scope="function",
params=[
1,
1000,
2001
],
)
def insert_count(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_binary_index()
)
def get_jaccard_index(self, request, connect):
request.param["metric_type"] = "JACCARD"
return request.param
@pytest.fixture(
scope="function",
params=gen_binary_index()
)
def get_hamming_index(self, request, connect):
request.param["metric_type"] = "HAMMING"
return request.param
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_substructure_index(self, request, connect):
request.param["metric_type"] = "SUBSTRUCTURE"
return request.param
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_superstructure_index(self, request, connect):
request.param["metric_type"] = "SUPERSTRUCTURE"
return request.param
# TODO: need to update and enable
@pytest.mark.tags(CaseLabel.L1)
def test_collection_count_after_index_created_A(self, connect, binary_collection, get_hamming_index, insert_count):
"""
target: test count_entities, after index has been created
method: add vectors in db, create index, then call count_entities with correct params
expected: count_entities equals entities count just inserted
"""
raw_vectors, entities = gen_binary_entities(insert_count)
connect.insert(binary_collection, entities)
connect.flush([binary_collection])
# connect.load_collection(binary_collection)
connect.create_index(binary_collection, default_binary_vec_field_name, get_hamming_index)
stats = connect.get_collection_stats(binary_collection)
assert stats[row_count] == insert_count
@pytest.mark.tags(CaseLabel.L2)
def test_collection_count_no_entities(self, connect, binary_collection):
"""
target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0
"""
stats = connect.get_collection_stats(binary_collection)
assert stats[row_count] == 0
class TestCollectionMultiCollections:
"""
params are different nb values; a given nb may or may not trigger a merge
"""
@pytest.fixture(
scope="function",
params=[
1,
1000,
2001
],
)
def insert_count(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L0)
def test_collection_count_multi_collections_l2(self, connect, insert_count):
"""
target: test collection rows_count is correct or not with multiple collections of L2
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities
"""
entities = gen_entities(insert_count)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid_count)
collection_list.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
stats = connect.get_collection_stats(collection_list[i])
assert stats[row_count] == insert_count
connect.drop_collection(collection_list[i])
@pytest.mark.tags(CaseLabel.L2)
def test_collection_count_multi_collections_binary(self, connect, binary_collection, insert_count):
"""
target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities
"""
raw_vectors, entities = gen_binary_entities(insert_count)
connect.insert(binary_collection, entities)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid_count)
collection_list.append(collection_name)
connect.create_collection(collection_name, cons.default_binary_fields)
connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
stats = connect.get_collection_stats(collection_list[i])
assert stats[row_count] == insert_count
connect.drop_collection(collection_list[i])
@pytest.mark.tags(CaseLabel.L2)
def test_collection_count_multi_collections_mix(self, connect):
"""
target: test collection rows_count is correct or not with a mix of float (L2) and binary (JACCARD) collections
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities
"""
collection_list = []
collection_num = 20
for i in range(0, int(collection_num / 2)):
collection_name = gen_unique_str(uid_count)
collection_list.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
connect.insert(collection_name, cons.default_entities)
for i in range(int(collection_num / 2), collection_num):
collection_name = gen_unique_str(uid_count)
collection_list.append(collection_name)
connect.create_collection(collection_name, cons.default_binary_fields)
res = connect.insert(collection_name, cons.default_binary_entities)
connect.flush(collection_list)
for i in range(collection_num):
stats = connect.get_collection_stats(collection_list[i])
assert stats[row_count] == default_nb
connect.drop_collection(collection_list[i])
class TestGetCollectionStats:
"""
******************************************************************
The following cases are used to test `collection_stats` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_invalid_collection_name(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
# if str(connect._cmd("mode")) == "CPU":
# if request.param["index_type"] in index_cpu_not_support():
# pytest.skip("CPU not support index_type: ivf_sq8h")
return request.param
@pytest.fixture(
scope="function",
params=gen_binary_index()
)
def get_jaccard_index(self, request, connect):
logging.getLogger().info(request.param)
if request.param["index_type"] in binary_support():
request.param["metric_type"] = "JACCARD"
return request.param
else:
pytest.skip("Skip index Temporary")
@pytest.fixture(
scope="function",
params=[
1,
1000,
2001
],
)
def insert_count(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_name_not_existed(self, connect, collection):
"""
target: get collection stats where collection name does not exist
method: call collection_stats with a random collection_name, which is not in db
expected: status not ok
"""
collection_name = gen_unique_str(uid_stats)
with pytest.raises(Exception) as e:
connect.get_collection_stats(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_name_invalid(self, connect, get_invalid_collection_name):
"""
target: get collection stats where collection name is invalid
method: call collection_stats with invalid collection_name
expected: status not ok
"""
collection_name = get_invalid_collection_name
with pytest.raises(Exception) as e:
connect.get_collection_stats(collection_name)
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_empty(self, connect, collection):
"""
target: get collection stats where no entity in collection
method: call collection_stats in empty collection
expected: row count is 0
"""
stats = connect.get_collection_stats(collection)
connect.flush([collection])
assert stats[row_count] == 0
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_without_connection(self, collection, dis_connect):
"""
target: test get_collection_stats without connection
method: call get_collection_stats with a disconnected instance
expected: get_collection_stats raises an exception
"""
with pytest.raises(Exception) as e:
dis_connect.get_collection_stats(collection)
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_batch(self, connect, collection):
"""
target: get row count with collection_stats
method: add entities, check count in collection info
expected: count as expected
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert int(stats[row_count]) == default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_single(self, connect, collection):
"""
target: get row count with collection_stats
method: add entity one by one, check count in collection info
expected: count as expected
"""
nb = 10
for i in range(nb):
connect.insert(collection, cons.default_entity)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == nb
@pytest.mark.tags(CaseLabel.L2)
def _test_get_collection_stats_after_delete(self, connect, collection):
"""
target: get row count with collection_stats
method: add and delete entities, check count in collection info
expected: status ok, count as expected
"""
ids = connect.insert(collection, cons.default_entities)
status = connect.flush([collection])
delete_ids = [ids[0], ids[-1]]
connect.delete_entity_by_id(collection, delete_ids)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats["row_count"] == default_nb - 2
assert stats["partitions"][0]["row_count"] == default_nb - 2
assert stats["partitions"][0]["segments"][0]["data_size"] > 0
# TODO: enable
@pytest.mark.tags(CaseLabel.L2)
def _test_get_collection_stats_after_compact_parts(self, connect, collection):
"""
target: get row count with collection_stats
method: add and delete entities, and compact collection, check count in collection info
expected: status ok, count as expected
"""
delete_length = 1000
ids = connect.insert(collection, cons.default_entities)
status = connect.flush([collection])
delete_ids = ids[:delete_length]
connect.delete_entity_by_id(collection, delete_ids)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
logging.getLogger().info(stats)
assert stats["row_count"] == default_nb - delete_length
compact_before = stats["partitions"][0]["segments"][0]["data_size"]
connect.compact(collection)
stats = connect.get_collection_stats(collection)
logging.getLogger().info(stats)
compact_after = stats["partitions"][0]["segments"][0]["data_size"]
assert compact_before == compact_after
@pytest.mark.tags(CaseLabel.L2)
def _test_get_collection_stats_after_compact_delete_one(self, connect, collection):
"""
target: get row count with collection_stats
method: add and delete one entity, and compact collection, check count in collection info
expected: status ok, count as expected
"""
ids = connect.insert(collection, cons.default_entities)
status = connect.flush([collection])
delete_ids = ids[:1]
connect.delete_entity_by_id(collection, delete_ids)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
logging.getLogger().info(stats)
compact_before = stats["partitions"][0]["row_count"]
connect.compact(collection)
stats = connect.get_collection_stats(collection)
logging.getLogger().info(stats)
compact_after = stats["partitions"][0]["row_count"]
# pdb.set_trace()
assert compact_before == compact_after
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_partition(self, connect, collection):
"""
target: get partition info in a collection
method: call collection_stats after partition created and check partition_stats
expected: status ok, vectors added to partition
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_partitions(self, connect, collection):
"""
target: get partition info in a collection
method: create two partitions, add vectors into each partition and then into the collection, call collection_stats after each insert
expected: status ok, row count grows by default_nb after each insert
"""
new_tag = "new_tag"
connect.create_partition(collection, default_tag)
connect.create_partition(collection, new_tag)
connect.insert(collection, cons.default_entities, partition_name=default_tag)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb
connect.insert(collection, cons.default_entities, partition_name=new_tag)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb * 2
connect.insert(collection, cons.default_entities)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb * 3
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_partitions_A(self, connect, collection, insert_count):
"""
target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities
"""
new_tag = "new_tag"
entities = gen_entities(insert_count)
connect.create_partition(collection, default_tag)
connect.create_partition(collection, new_tag)
connect.insert(collection, entities)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == insert_count
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_partitions_B(self, connect, collection, insert_count):
"""
target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities
"""
new_tag = "new_tag"
entities = gen_entities(insert_count)
connect.create_partition(collection, default_tag)
connect.create_partition(collection, new_tag)
connect.insert(collection, entities, partition_name=default_tag)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == insert_count
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_partitions_C(self, connect, collection, insert_count):
"""
target: test collection rows_count is correct or not
method: create collection, create partitions, add entities into the collection and into one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to twice the length of entities
"""
new_tag = "new_tag"
entities = gen_entities(insert_count)
connect.create_partition(collection, default_tag)
connect.create_partition(collection, new_tag)
connect.insert(collection, entities)
connect.insert(collection, entities, partition_name=default_tag)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == insert_count * 2
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_partitions_D(self, connect, collection, insert_count):
"""
target: test collection rows_count is correct or not
method: create collection, create two partitions and add entities into both partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the collection count is equal to twice the length of entities
"""
new_tag = "new_tag"
entities = gen_entities(insert_count)
connect.create_partition(collection, default_tag)
connect.create_partition(collection, new_tag)
connect.insert(collection, entities, partition_name=default_tag)
connect.insert(collection, entities, partition_name=new_tag)
connect.flush([collection])
stats = connect.get_collection_stats(collection)
assert stats[row_count] == insert_count * 2
# TODO: assert metric type in stats response
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_after_index_created(self, connect, collection, get_simple_index):
"""
target: test collection info after index created
method: create collection, add vectors, create index and call collection_stats
expected: status ok, index created and shown in segments
"""
connect.insert(collection, cons.default_entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb
# TODO: assert metric type in stats response
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_after_index_created_ip(self, connect, collection, get_simple_index):
"""
target: test collection info after index created
method: create collection, add vectors, create index and call collection_stats
expected: status ok, index created and shown in segments
"""
get_simple_index["metric_type"] = "IP"
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
get_simple_index.update({"metric_type": "IP"})
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb
# TODO: assert metric type in stats response
@pytest.mark.tags(CaseLabel.L2)
def test_get_collection_stats_after_index_created_jac(self, connect, binary_collection, get_jaccard_index):
"""
target: test collection info after index created
method: create collection, add binary entities, create index and call collection_stats
expected: status ok, index created and shown in segments
"""
ids = connect.insert(binary_collection, cons.default_binary_entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, default_binary_vec_field_name, get_jaccard_index)
stats = connect.get_collection_stats(binary_collection)
assert stats[row_count] == default_nb
@pytest.mark.tags(CaseLabel.L0)
def test_get_collection_stats_after_create_different_index(self, connect, collection):
"""
target: test collection info after index created repeatedly
method: create collection, add vectors, create index and call collection_stats multiple times
expected: status ok, index info shown in segments
"""
result = connect.insert(collection, cons.default_entities)
connect.flush([collection])
for index_type in ["IVF_FLAT", "IVF_SQ8"]:
connect.create_index(collection, default_float_vec_field_name,
{"index_type": index_type, "params": {"nlist": 1024}, "metric_type": "L2"})
stats = connect.get_collection_stats(collection)
assert stats[row_count] == default_nb
@pytest.mark.tags(CaseLabel.L2)
def test_collection_count_multi_collections_indexed(self, connect):
"""
target: test that rows_count is correct across multiple collections indexed with the L2 metric
method: create multiple collections, insert entities into each and build different indexes,
assert the row count returned by get_collection_stats and the described index for each collection
expected: each collection reports row count equal to default_nb and the expected index info
"""
collection_list = []
collection_num = 10
for i in range(collection_num):
collection_name = gen_unique_str(uid_stats)
collection_list.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
res = connect.insert(collection_name, cons.default_entities)
connect.flush(collection_list)
index_1 = {"index_type": "IVF_SQ8", "params": {"nlist": 1024}, "metric_type": "L2"}
index_2 = {"index_type": "IVF_FLAT", "params": {"nlist": 1024}, "metric_type": "L2"}
if i % 2:
connect.create_index(collection_name, default_float_vec_field_name, index_1)
else:
connect.create_index(collection_name, default_float_vec_field_name, index_2)
for i in range(collection_num):
stats = connect.get_collection_stats(collection_list[i])
assert stats[row_count] == default_nb
index = connect.describe_index(collection_list[i], "")
if i % 2:
create_target_index(index_1, default_float_vec_field_name)
assert index == index_1
else:
create_target_index(index_2, default_float_vec_field_name)
assert index == index_2
# break
connect.drop_collection(collection_list[i])
class TestCreateCollection:
"""
******************************************************************
The following cases are used to test `create_collection` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_single_filter_fields()
)
def get_filter_field(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_single_vector_fields()
)
def get_vector_field(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_segment_row_limits()
)
def get_segment_row_limit(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def _test_create_collection_segment_row_limit(self, connect, get_segment_row_limit):
"""
target: test create normal collection with different fields
method: create collection with diff segment_row_limit
expected: no exception raised
"""
collection_name = gen_unique_str(uid_create)
fields = copy.deepcopy(cons.default_fields)
# fields["segment_row_limit"] = get_segment_row_limit
connect.create_collection(collection_name, fields)
assert connect.has_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_create_collection_after_insert(self, connect, collection):
"""
target: test insert vector, then create collection again
method: insert vector and create collection
expected: error raised
"""
# pdb.set_trace()
connect.insert(collection, cons.default_entity)
try:
connect.create_collection(collection, cons.default_fields)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "CreateCollection failed: meta table add collection failed," \
"error = collection %s exist" % collection
@pytest.mark.tags(CaseLabel.L2)
def test_create_collection_after_insert_flush(self, connect, collection):
"""
target: test insert vector, then create collection again
method: insert vector and create collection
expected: error raised
"""
connect.insert(collection, cons.default_entity)
connect.flush([collection])
try:
connect.create_collection(collection, cons.default_fields)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "CreateCollection failed: meta table add collection failed," \
"error = collection %s exist" % collection
@pytest.mark.tags(CaseLabel.L1)
def test_create_collection_multithread(self, connect):
"""
target: test create collection with multi-thread
method: create collection using multi-thread,
expected: collections are created
"""
threads_num = 8
threads = []
collection_names = []
def create():
collection_name = gen_unique_str(uid_create)
collection_names.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
for i in range(threads_num):
t = MyThread(target=create, args=())
threads.append(t)
t.start()
time.sleep(0.2)
for t in threads:
t.join()
for item in collection_names:
assert item in connect.list_collections()
connect.drop_collection(item)
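# The try/except blocks in the tests above repeat the same getattr-based checks on
# the error's `code` and `message` attributes. A hypothetical helper such as the
# sketch below (not part of the original suite) could deduplicate that pattern; it
# mirrors the inline assertions exactly and is shown for illustration only.
def assert_milvus_error(exc, expected_code, expected_message):
    # Fall back to descriptive placeholders when the attributes are missing,
    # exactly as the inline checks do.
    code = getattr(exc, 'code', "The exception does not contain the field of code.")
    message = getattr(exc, 'message', "The exception does not contain the field of message.")
    assert code == expected_code
    assert message == expected_message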
class TestCreateCollectionInvalid(object):
"""
Test creating collections with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_metric_types()
)
def get_metric_type(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_ints()
)
def get_segment_row_limit(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_ints()
)
def get_dim(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_invalid_string(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_field_types()
)
def get_field_type(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def _test_create_collection_with_invalid_segment_row_limit(self, connect, get_segment_row_limit):
collection_name = gen_unique_str()
fields = copy.deepcopy(cons.default_fields)
fields["segment_row_limit"] = get_segment_row_limit
with pytest.raises(Exception) as e:
connect.create_collection(collection_name, fields)
@pytest.mark.tags(CaseLabel.L2)
def _test_create_collection_no_segment_row_limit(self, connect):
"""
target: test create collection with no segment_row_limit params
method: create collection with correct params
expected: use default default_segment_row_limit
"""
collection_name = gen_unique_str(uid_create)
fields = copy.deepcopy(cons.default_fields)
fields.pop("segment_row_limit")
connect.create_collection(collection_name, fields)
res = connect.get_collection_info(collection_name)
logging.getLogger().info(res)
assert res["segment_row_limit"] == default_server_segment_row_limit
# TODO: assert exception
@pytest.mark.tags(CaseLabel.L2)
def test_create_collection_limit_fields(self, connect):
"""
target: test create collection with maximum fields
method: create collection with maximum field number
expected: raise exception
"""
collection_name = gen_unique_str(uid_create)
limit_num = 64
fields = copy.deepcopy(cons.default_fields)
for i in range(limit_num):
field_name = gen_unique_str("field_name")
field = {"name": field_name, "type": DataType.INT64}
fields["fields"].append(field)
try:
connect.create_collection(collection_name, fields)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "maximum field's number should be limited to 64"
class TestDescribeCollection:
@pytest.fixture(
scope="function",
params=gen_single_filter_fields()
)
def get_filter_field(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_single_vector_fields()
)
def get_vector_field(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
logging.getLogger().info(request.param)
# if str(connect._cmd("mode")) == "CPU":
# if request.param["index_type"] in index_cpu_not_support():
# pytest.skip("sq8h not support in CPU mode")
return request.param
"""
******************************************************************
The following cases are used to test `describe_collection` function, no data in collection
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L0)
def test_collection_fields(self, connect, get_filter_field, get_vector_field):
"""
target: test create normal collection with different fields, check info returned
method: create collection with diff fields: metric/field_type/..., calling `describe_collection`
expected: no exception raised, and value returned correct
"""
filter_field = get_filter_field
vector_field = get_vector_field
collection_name = gen_unique_str(uid_describe)
fields = {
"fields": [gen_primary_field(), filter_field, vector_field],
# "segment_row_limit": default_segment_row_limit
}
connect.create_collection(collection_name, fields)
res = connect.describe_collection(collection_name)
# assert res['segment_row_limit'] == default_segment_row_limit
assert len(res["fields"]) == len(fields.get("fields"))
for field in res["fields"]:
if field["type"] == filter_field:
assert field["name"] == filter_field["name"]
elif field["type"] == vector_field:
assert field["name"] == vector_field["name"]
assert field["params"] == vector_field["params"]
@pytest.mark.tags(CaseLabel.L0)
def test_describe_collection_after_index_created(self, connect, collection, get_simple_index):
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
if get_simple_index["index_type"] != "FLAT":
index = connect.describe_index(collection, "")
assert index["index_type"] == get_simple_index["index_type"]
assert index["metric_type"] == get_simple_index["metric_type"]
assert index["params"] == get_simple_index["params"]
@pytest.mark.tags(CaseLabel.L2)
def test_describe_collection_without_connection(self, collection, dis_connect):
"""
target: test get collection info, without connection
method: calling get collection info with correct params, with a disconnected instance
expected: get collection info raise exception
"""
with pytest.raises(Exception) as e:
dis_connect.describe_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_describe_collection_not_existed(self, connect):
"""
target: test describe_collection on a collection that has been dropped
method: generate a random collection name, create the collection, describe and drop it,
then call describe_collection again on the dropped collection
expected: raise exception reporting the collection cannot be found
"""
collection_name = gen_unique_str(uid_describe)
connect.create_collection(collection_name, cons.default_fields)
connect.describe_collection(collection_name)
connect.drop_collection(collection_name)
try:
connect.describe_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
@pytest.mark.tags(CaseLabel.L1)
def test_describe_collection_multithread(self, connect):
"""
target: test describe collection with multi-thread
method: describe the same collection concurrently from multiple threads
expected: no exception raised
"""
threads_num = 4
threads = []
collection_name = gen_unique_str(uid_describe)
connect.create_collection(collection_name, cons.default_fields)
def get_info():
connect.describe_collection(collection_name)
for i in range(threads_num):
t = MyThread(target=get_info)
threads.append(t)
t.start()
time.sleep(0.2)
for t in threads:
t.join()
"""
******************************************************************
The following cases are used to test `describe_collection` function, and insert data in collection
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L0)
def test_describe_collection_fields_after_insert(self, connect, get_filter_field, get_vector_field):
"""
target: test create normal collection with different fields, check info returned
method: create collection with diff fields: metric/field_type/..., calling `describe_collection`
expected: no exception raised, and value returned correct
"""
filter_field = get_filter_field
vector_field = get_vector_field
collection_name = gen_unique_str(uid_describe)
fields = {
"fields": [gen_primary_field(), filter_field, vector_field],
# "segment_row_limit": default_segment_row_limit
}
connect.create_collection(collection_name, fields)
entities = gen_entities_by_fields(fields["fields"], default_nb, vector_field["params"]["dim"])
res_ids = connect.insert(collection_name, entities)
connect.flush([collection_name])
res = connect.describe_collection(collection_name)
# assert res['segment_row_limit'] == default_segment_row_limit
assert len(res["fields"]) == len(fields.get("fields"))
for field in res["fields"]:
if field["type"] == filter_field:
assert field["name"] == filter_field["name"]
elif field["type"] == vector_field:
assert field["name"] == vector_field["name"]
assert field["params"] == vector_field["params"]
class TestDescribeCollectionInvalid(object):
"""
Test describe collection with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_collection_name(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_describe_collection_with_invalid_collection_name(self, connect, get_collection_name):
"""
target: test describe collection which name invalid
method: call describe_collection with invalid names
expected: raise exception
"""
collection_name = get_collection_name
with pytest.raises(Exception) as e:
connect.describe_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("collection_name", ('', None))
def test_describe_collection_with_empty_or_None_collection_name(self, connect, collection_name):
"""
target: test describe collection which name is empty or None
method: call describe_collection with '' or None name
expected: raise exception
"""
with pytest.raises(Exception) as e:
connect.describe_collection(collection_name)
class TestDropCollection:
"""
******************************************************************
The following cases are used to test `drop_collection` function
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L0)
def test_drop_collection_A(self, connect, collection):
"""
target: test delete collection created with correct params
method: create collection and then delete,
assert the value returned by delete method
expected: status ok, and no collection in collections
"""
connect.drop_collection(collection)
time.sleep(2)
assert not connect.has_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_drop_collection_without_connection(self, collection, dis_connect):
"""
target: test drop collection, without connection
method: drop collection with correct params, with a disconnected instance
expected: drop raise exception
"""
with pytest.raises(Exception) as e:
dis_connect.drop_collection(collection)
@pytest.mark.tags(CaseLabel.L1)
def test_drop_collection_not_existed(self, connect):
"""
target: test drop_collection on a collection that was never created
method: generate a random collection name that does not exist in the db,
assert the exception raised by the drop_collection method
expected: raise exception reporting the collection cannot be found
"""
collection_name = gen_unique_str(uid_drop)
try:
connect.drop_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
@pytest.mark.tags(CaseLabel.L1)
def test_create_drop_collection_multithread(self, connect):
"""
target: test create and drop collection with multi-thread
method: create and drop collection using multi-thread,
expected: collections are created, and dropped
"""
threads_num = 8
threads = []
collection_names = []
def create():
collection_name = gen_unique_str(uid_drop)
collection_names.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
connect.drop_collection(collection_name)
for i in range(threads_num):
t = MyThread(target=create, args=())
threads.append(t)
t.start()
time.sleep(0.2)
for t in threads:
t.join()
for item in collection_names:
assert not connect.has_collection(item)
class TestDropCollectionInvalid(object):
"""
Test drop collection with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_collection_name(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_drop_collection_with_invalid_collection_name(self, connect, get_collection_name):
"""
target: test drop invalid collection
method: drop collection with invalid collection name
expected: raise exception
"""
collection_name = get_collection_name
with pytest.raises(Exception) as e:
connect.drop_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
@pytest.mark.parametrize("collection_name", ('', None))
def test_drop_collection_with_empty_or_None_collection_name(self, connect, collection_name):
"""
target: test drop invalid collection
method: drop collection with empty or None collection name
expected: raise exception
"""
with pytest.raises(Exception) as e:
connect.drop_collection(collection_name)
class TestHasCollection:
"""
******************************************************************
The following cases are used to test `has_collection` function
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_without_connection(self, collection, dis_connect):
"""
target: test has collection, without connection
method: calling has collection with correct params, with a disconnected instance
expected: has collection raise exception
"""
with pytest.raises(Exception) as e:
assert dis_connect.has_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_not_existed(self, connect):
"""
target: test has_collection on a collection that has been dropped
method: generate a random collection name, create the collection, then drop it,
and check the value returned by the has_collection method
expected: False
"""
collection_name = gen_unique_str(uid_has)
connect.create_collection(collection_name, cons.default_fields)
assert connect.has_collection(collection_name)
connect.drop_collection(collection_name)
assert not connect.has_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_multithread(self, connect):
"""
target: test has collection with multi-thread
method: call has_collection on the same collection concurrently from multiple threads
expected: has_collection returns True in every thread
"""
threads_num = 4
threads = []
collection_name = gen_unique_str(uid_has)
connect.create_collection(collection_name, cons.default_fields)
def has():
assert connect.has_collection(collection_name)
# assert not assert_collection(connect, collection_name)
for i in range(threads_num):
t = MyThread(target=has, args=())
threads.append(t)
t.start()
time.sleep(0.2)
for t in threads:
t.join()
class TestHasCollectionInvalid(object):
"""
Test has collection with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_collection_name(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_with_invalid_collection_name(self, connect, get_collection_name):
"""
target: test has collection with invalid scenario
method: call has_collection with an invalid collection name
expected: raise exception
"""
collection_name = get_collection_name
with pytest.raises(Exception) as e:
connect.has_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_with_empty_collection_name(self, connect):
"""
target: test has collection with invalid scenario
method: call has_collection with an empty collection name
expected: raise exception
"""
collection_name = ''
with pytest.raises(Exception) as e:
connect.has_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_has_collection_with_none_collection_name(self, connect):
"""
target: test has collection with invalid scenario
method: call has_collection with None as the collection name
expected: raise exception
"""
collection_name = None
with pytest.raises(Exception) as e:
connect.has_collection(collection_name)
class TestListCollections:
"""
******************************************************************
The following cases are used to test `list_collections` function
******************************************************************
"""
@pytest.mark.tags(CaseLabel.L0)
def test_list_collections_multi_collections(self, connect):
"""
target: test list collections
method: create collection, assert the value returned by list_collections method
expected: True
"""
collection_num = 50
collection_names = []
for i in range(collection_num):
collection_name = gen_unique_str(uid_list)
collection_names.append(collection_name)
connect.create_collection(collection_name, cons.default_fields)
assert collection_name in connect.list_collections()
for i in range(collection_num):
connect.drop_collection(collection_names[i])
@pytest.mark.tags(CaseLabel.L2)
def test_list_collections_without_connection(self, dis_connect):
"""
target: test list collections, without connection
method: calling list collections with correct params, with a disconnected instance
expected: list collections raise exception
"""
with pytest.raises(Exception) as e:
dis_connect.list_collections()
# TODO: make sure to run this case in the end
@pytest.mark.skip("r0.3-test")
@pytest.mark.tags(CaseLabel.L2)
def test_list_collections_no_collection(self, connect):
"""
target: test show collections is correct or not, if no collection in db
method: delete all collections,
assert the value returned by list_collections method is equal to []
expected: the status is ok, and the result is equal to []
"""
result = connect.list_collections()
if result:
for collection_name in result:
assert connect.has_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_list_collections_multithread(self, connect):
"""
target: test list collection with multi-threads
method: list collection using multi-threads
expected: list collections correctly
"""
threads_num = 10
threads = []
collection_name = gen_unique_str(uid_list)
connect.create_collection(collection_name, cons.default_fields)
def _list():
assert collection_name in connect.list_collections()
for i in range(threads_num):
t = MyThread(target=_list)
threads.append(t)
t.start()
time.sleep(0.2)
for t in threads:
t.join()
class TestLoadCollection:
"""
******************************************************************
The following cases are used to test `load_collection` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
return request.param
@pytest.fixture(
scope="function",
params=gen_binary_index()
)
def get_binary_index(self, request, connect):
return request.param
@pytest.mark.tags(CaseLabel.L0)
def test_load_collection_after_index(self, connect, collection, get_simple_index):
"""
target: test load collection, after index created
method: insert and create index, load collection with correct params
expected: no error raised
"""
connect.insert(collection, cons.default_entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
connect.load_collection(collection)
connect.release_collection(collection)
@pytest.mark.tags(CaseLabel.L1)
def test_load_collection_after_index_binary(self, connect, binary_collection, get_binary_index):
"""
target: test load binary_collection, after index created
method: insert and create index, load binary_collection with correct params
expected: no error raised
"""
result = connect.insert(binary_collection, cons.default_binary_entities)
assert len(result.primary_keys) == default_nb
connect.flush([binary_collection])
for metric_type in binary_metrics():
get_binary_index["metric_type"] = metric_type
connect.drop_index(binary_collection, default_binary_vec_field_name)
if get_binary_index["index_type"] == "BIN_IVF_FLAT" and metric_type in structure_metrics():
with pytest.raises(Exception) as e:
connect.create_index(binary_collection, default_binary_vec_field_name, get_binary_index)
else:
connect.create_index(binary_collection, default_binary_vec_field_name, get_binary_index)
index = connect.describe_index(binary_collection, "")
create_target_index(get_binary_index, default_binary_vec_field_name)
assert index == get_binary_index
connect.load_collection(binary_collection)
connect.release_collection(binary_collection)
@pytest.mark.tags(CaseLabel.L2)
def test_load_empty_collection(self, connect, collection):
"""
target: test load an empty collection with no data inserted
method: no entities in collection, load and release the collection
expected: load and release successfully
"""
connect.load_collection(collection)
connect.release_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_dis_connect(self, dis_connect, collection):
"""
target: test load collection, without connection
method: load collection with correct params, with a disconnected instance
expected: load raise exception
"""
with pytest.raises(Exception) as e:
dis_connect.load_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_release_collection_dis_connect(self, dis_connect, collection):
"""
target: test release collection, without connection
method: release collection with correct params, with a disconnected instance
expected: release raise exception
"""
with pytest.raises(Exception) as e:
dis_connect.release_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_not_existed(self, connect, collection):
"""
target: test load invalid collection
method: load not existed collection
expected: raise exception
"""
collection_name = gen_unique_str(uid_load)
try:
connect.load_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
@pytest.mark.tags(CaseLabel.L2)
def test_release_collection_not_existed(self, connect, collection):
"""
target: test release a not existed collection
method: release with a not existed collection name
expected: raise exception
"""
collection_name = gen_unique_str(uid_load)
try:
connect.release_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
@pytest.mark.tags(CaseLabel.L2)
def test_release_collection_not_load(self, connect, collection):
"""
target: test release collection without load
method: release collection without load
expected: release successfully
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.release_collection(collection)
@pytest.mark.tags(CaseLabel.L0)
def test_load_collection_after_load_release(self, connect, collection):
"""
target: test load collection after load and release
method: 1.load and release collection after entities flushed
2.re-load collection
expected: No exception
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_collection(collection)
connect.release_collection(collection)
connect.load_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_repeatedly(self, connect, collection):
"""
target: test load collection repeatedly
method: load collection twice
expected: No exception
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_collection(collection)
connect.load_collection(collection)
@pytest.mark.tags(CaseLabel.L2)
def test_load_release_collection(self, connect, collection):
"""
target: test load, release non-exist collection
method: 1. load, release and drop collection
2. load and release dropped collection
expected: raise exception
"""
collection_name = gen_unique_str(uid_load)
connect.create_collection(collection_name, cons.default_fields)
connect.insert(collection_name, cons.default_entities)
connect.flush([collection_name])
connect.load_collection(collection_name)
connect.release_collection(collection_name)
connect.drop_collection(collection_name)
try:
connect.load_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
try:
connect.release_collection(collection_name)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection_name
@pytest.mark.tags(CaseLabel.L0)
def test_release_collection_after_drop(self, connect, collection):
"""
target: test release collection after drop
method: insert and flush, then release collection after load and drop
expected: raise exception
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_collection(collection)
connect.drop_collection(collection)
try:
connect.release_collection(collection)
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection
@pytest.mark.tags(CaseLabel.L0)
def test_load_collection_without_flush(self, connect, collection):
"""
target: test load collection without flush
method: insert entities without flush, then load collection
expected: No exception and data can be queried
"""
result = connect.insert(collection, gen_entities(100))
assert len(result.primary_keys) == 100
connect.load_collection(collection)
int_field_name = "int64"
term_expr = f'{int_field_name} in {result.primary_keys[:1]}'
res = connect.query(collection, term_expr)
assert res == [{int_field_name: result.primary_keys[0]}]
# TODO
@pytest.mark.tags(CaseLabel.L2)
def _test_load_collection_larger_than_memory(self):
"""
target: test load collection when memory less than collection size
method: not yet implemented (requires constructing a collection larger than the available memory)
expected: raise exception
"""
@pytest.mark.tags(CaseLabel.L0)
def test_load_collection_release_part_partitions(self, connect, collection):
"""
target: test release part partitions after load collection
method: load collection and release part partitions
expected: released partitions search empty
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_collection(collection)
connect.release_partitions(collection, [default_tag])
with pytest.raises(Exception) as e:
connect.search(collection, **default_single_query, partition_names=[default_tag])
res = connect.search(collection, **default_single_query, partition_names=[default_partition_name])
assert len(res[0]) == default_top_k
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_release_all_partitions(self, connect, collection):
"""
target: test release all partitions after load collection
method: load collection and release all partitions
expected: search empty
"""
result = connect.insert(collection, cons.default_entities)
assert len(result.primary_keys) == default_nb
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_collection(collection)
connect.release_partitions(collection, [default_partition_name, default_tag])
res = connect.search(collection, **default_single_query)
assert len(res[0]) == 0
@pytest.mark.tags(CaseLabel.L0)
def test_load_partitions_release_collection(self, connect, collection):
"""
target: test release collection after load partitions
method: insert entities into partitions, search empty after load partitions and release collection
expected: search result empty
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
connect.release_collection(collection)
with pytest.raises(Exception):
connect.search(collection, **default_single_query)
class TestReleaseAdvanced:
@pytest.mark.tags(CaseLabel.L0)
def test_release_collection_during_searching(self, connect, collection):
"""
target: test release collection during searching
method: insert entities into collection, flush and load collection, release collection during searching
expected: raise exception
"""
nq = 1000
top_k = 1
connect.insert(collection, cons.default_entities)
connect.flush([collection])
connect.load_collection(collection)
params, _ = gen_search_vectors_params(field_name, cons.default_entities, top_k, nq)
future = connect.search(collection, **params, _async=True)
connect.release_collection(collection)
with pytest.raises(Exception):
connect.search(collection, **default_single_query)
@pytest.mark.tags(CaseLabel.L2)
def test_release_partition_during_searching(self, connect, collection):
"""
target: test release partition during searching
method: insert entities into partition, flush and load partition, release partition during searching
expected: raise exception
"""
nq = 1000
top_k = 1
connect.create_partition(collection, default_tag)
query, _ = gen_search_vectors_params(field_name, cons.default_entities, top_k, nq)
connect.insert(collection, cons.default_entities, partition_name=default_tag)
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
res = connect.search(collection, **query, _async=True)
connect.release_partitions(collection, [default_tag])
with pytest.raises(Exception):
res = connect.search(collection, **default_single_query)
@pytest.mark.tags(CaseLabel.L0)
def test_release_collection_during_searching_A(self, connect, collection):
"""
target: test release collection during searching
method: insert entities into partition, flush and load partition, release collection during searching
expected: raise exception
"""
nq = 1000
top_k = 1
connect.create_partition(collection, default_tag)
query, _ = gen_search_vectors_params(field_name, cons.default_entities, top_k, nq)
connect.insert(collection, cons.default_entities, partition_name=default_tag)
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
res = connect.search(collection, **query, _async=True)
connect.release_collection(collection)
with pytest.raises(Exception):
connect.search(collection, **default_single_query)
def _test_release_collection_during_loading(self, connect, collection):
"""
target: test release collection during loading
method: insert entities into collection, flush, release collection during loading
expected: raise exception
"""
connect.insert(collection, cons.default_entities)
connect.flush([collection])
def load():
connect.load_collection(collection)
t = threading.Thread(target=load, args=())
t.start()
connect.release_collection(collection)
with pytest.raises(Exception):
connect.search(collection, **default_single_query)
def _test_release_partition_during_loading(self, connect, collection):
"""
target: test release partition during loading
method: insert entities into partition, flush, release partition during loading
expected: search returns an empty result after the partition is released
"""
connect.create_partition(collection, default_tag)
connect.insert(collection, cons.default_entities, partition_name=default_tag)
connect.flush([collection])
def load():
connect.load_collection(collection)
t = threading.Thread(target=load, args=())
t.start()
connect.release_partitions(collection, [default_tag])
res = connect.search(collection, **default_single_query)
assert len(res[0]) == 0
def _test_release_collection_during_inserting(self, connect, collection):
"""
target: test release collection during inserting
method: load collection, do release collection during inserting
expected: raise exception
"""
connect.insert(collection, cons.default_entities)
connect.flush([collection])
connect.load_collection(collection)
def insert():
connect.insert(collection, cons.default_entities)
t = threading.Thread(target=insert, args=())
t.start()
connect.release_collection(collection)
with pytest.raises(Exception):
res = connect.search(collection, **default_single_query)
def _test_release_collection_during_indexing(self, connect, collection):
"""
target: test release collection during building index
method: insert and flush, load collection, do release collection during creating index
expected:
"""
pass
def _test_release_collection_during_droping_index(self, connect, collection):
"""
target: test release collection during dropping index
method: insert, create index and flush, load collection, do release collection during dropping index
expected:
"""
pass
class TestLoadCollectionInvalid(object):
"""
Test load collection with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_collection_name(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_with_invalid_collection_name(self, connect, get_collection_name):
"""
target: test load invalid collection
method: load collection with invalid name
expected: raise exception
"""
collection_name = get_collection_name
with pytest.raises(Exception) as e:
connect.load_collection(collection_name)
@pytest.mark.tags(CaseLabel.L2)
def test_release_collection_with_invalid_collection_name(self, connect, get_collection_name):
"""
target: test release invalid collection
method: release collection with invalid name
expected: raise exception
"""
collection_name = get_collection_name
with pytest.raises(Exception) as e:
connect.release_collection(collection_name)
class TestLoadPartition:
"""
******************************************************************
The following cases are used to test `load_partitions` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
# if str(connect._cmd("mode")) == "CPU":
# if request.param["index_type"] in index_cpu_not_support():
# pytest.skip("sq8h not support in cpu mode")
return request.param
@pytest.fixture(
scope="function",
params=gen_binary_index()
)
def get_binary_index(self, request, connect):
logging.getLogger().info(request.param)
if request.param["index_type"] in binary_support():
return request.param
else:
pytest.skip("Skip index Temporary")
@pytest.mark.tags(CaseLabel.L0)
def test_load_partition_after_index_binary(self, connect, binary_collection, get_binary_index):
"""
target: test load partition of binary_collection, after index created
method: create partition, insert and create index, then load the partition with correct params
expected: no error raised
"""
connect.create_partition(binary_collection, default_tag)
result = connect.insert(binary_collection, cons.default_binary_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([binary_collection])
for metric_type in binary_metrics():
logging.getLogger().info(metric_type)
get_binary_index["metric_type"] = metric_type
if get_binary_index["index_type"] == "BIN_IVF_FLAT" and metric_type in structure_metrics():
with pytest.raises(Exception) as e:
connect.create_index(binary_collection, default_binary_vec_field_name, get_binary_index)
else:
connect.create_index(binary_collection, default_binary_vec_field_name, get_binary_index)
connect.load_partitions(binary_collection, [default_tag])
@pytest.mark.tags(CaseLabel.L2)
def test_load_collection_dis_connect(self, connect, dis_connect, collection):
"""
target: test load partitions, without connection
method: load partitions with correct params, with a disconnected instance
expected: load raise exception
"""
connect.create_partition(collection, default_tag)
with pytest.raises(Exception) as e:
dis_connect.load_partitions(collection, [default_tag])
@pytest.mark.tags(CaseLabel.L2)
def test_release_partition_dis_connect(self, connect, dis_connect, collection):
"""
target: test release partition, without connection
method: release partition with correct params, with a disconnected instance
expected: release raise exception
"""
connect.create_partition(collection, default_tag)
connect.load_partitions(collection, [default_tag])
with pytest.raises(Exception) as e:
dis_connect.release_partitions(collection, [default_tag])
@pytest.mark.tags(CaseLabel.L2)
def test_load_partition_not_existed(self, connect, collection):
"""
target: test load partition for invalid scenario
method: load not existed partition
expected: raise exception and report the error
"""
partition_name = gen_unique_str(uid_load)
try:
connect.load_partitions(collection, [partition_name])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "partitionID of partitionName:%s can not be find" % partition_name
@pytest.mark.tags(CaseLabel.L0)
def test_release_partition_not_load(self, connect, collection):
"""
target: test release partition without load
method: release partition without load
expected: raise exception
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.release_partitions(collection, [default_tag])
@pytest.mark.tags(CaseLabel.L2)
def test_load_release_after_drop(self, connect, collection):
"""
target: test load and release partition after drop
method: drop partition and then load and release it
expected: raise exception
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
connect.release_partitions(collection, [default_tag])
connect.drop_partition(collection, default_tag)
try:
connect.load_partitions(collection, [default_tag])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "partitionID of partitionName:%s can not be find" % default_tag
try:
connect.release_partitions(collection, [default_tag])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "partitionID of partitionName:%s can not be find" % default_tag
@pytest.mark.tags(CaseLabel.L0)
def test_release_partition_after_drop(self, connect, collection):
"""
target: test load partition after the partition is dropped
method: create partition, insert and flush, load the partition, drop it, then load it again
expected: raise exception
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
connect.drop_partition(collection, default_tag)
try:
connect.load_partitions(collection, [default_tag])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "partitionID of partitionName:%s can not be find" % default_tag
@pytest.mark.tags(CaseLabel.L0)
def test_load_release_after_collection_drop(self, connect, collection):
"""
target: test load and release partition after the collection is dropped
method: create partition, insert and flush, load and release the partition, drop the collection, then load and release the partition again
expected: raise exception
"""
connect.create_partition(collection, default_tag)
result = connect.insert(collection, cons.default_entities, partition_name=default_tag)
assert len(result.primary_keys) == default_nb
connect.flush([collection])
connect.load_partitions(collection, [default_tag])
connect.release_partitions(collection, [default_tag])
connect.drop_collection(collection)
try:
connect.load_partitions(collection, [default_tag])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection
try:
connect.release_partitions(collection, [default_tag])
except Exception as e:
code = getattr(e, 'code', "The exception does not contain the field of code.")
assert code == 1
message = getattr(e, 'message', "The exception does not contain the field of message.")
assert message == "DescribeCollection failed: can't find collection: %s" % collection
class TestLoadPartitionInvalid(object):
"""
Test load partition with invalid params
"""
@pytest.fixture(
scope="function",
params=gen_invalid_strs()
)
def get_partition_name(self, request):
yield request.param
@pytest.mark.tags(CaseLabel.L2)
def test_load_partition_with_invalid_partition_name(self, connect, collection, get_partition_name):
"""
target: test load invalid partition
method: load partition with invalid partition name
expected: raise exception
"""
partition_name = get_partition_name
with pytest.raises(Exception) as e:
connect.load_partitions(collection, [partition_name])
@pytest.mark.tags(CaseLabel.L2)
def test_release_partition_with_invalid_partition_name(self, connect, collection, get_partition_name):
"""
target: test release invalid partition
method: release partition with invalid partition name
expected: raise exception
"""
partition_name = get_partition_name
with pytest.raises(Exception) as e:
connect.release_partitions(collection, [partition_name])
|
animationcontroller.py
|
# led-control WS2812B LED Controller Server
# Copyright 2019 jackw01. Released under the MIT License (see LICENSE for details).
import math
import random
import time
import traceback
import RestrictedPython
import copy
from threading import Event, Thread
import ledcontrol.animationpatterns as patterns
import ledcontrol.rpi_ws281x as rpi_ws281x
import ledcontrol.utils as utils
class RepeatedTimer:
"""
Repeat function call at a regular interval.
"""
def __init__(self, interval, function, *args, **kwargs):
self.interval = interval
self.function = function
self.args = args
self.kwargs = kwargs
self.count = 0
self.wait_time = 0
self.last_frame = time.perf_counter()
self.last_render_start = time.perf_counter()
self.last_render_end = time.perf_counter()
self.delta_t = interval
self.event = Event()
self.thread = Thread(target=self.target, daemon=True)
self.thread.start()
def target(self):
"""
Waits until ready and executes target function.
"""
while not self.event.wait(self.wait_time):
self.last_render_start = time.perf_counter() # get start time
if self.count % 100 == 0:
print('FPS: {}'.format(1.0 / self.delta_t)) # print fps
self.delta_t = self.last_render_start - self.last_frame # recalculate frame delta_t
self.last_frame = self.last_render_start
self.function(*self.args, **self.kwargs) # call target function
self.count += 1
self.last_render_end = time.perf_counter() # get end time
# Calculate wait for next iteration
self.wait_time = self.interval - (self.last_render_end - self.last_render_start)
if (self.wait_time < 0):
self.wait_time = 0
def stop(self):
"""
Stops the timer thread.
"""
self.event.set()
self.thread.join()
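# Usage sketch (illustrative, not executed here): RepeatedTimer calls `function`
# roughly every `interval` seconds on a daemon thread until stop() is called, e.g.
#
#     timer = RepeatedTimer(1.0 / 60, render_frame)   # render_frame is hypothetical
#     ...
#     timer.stop()
#
# AnimationController.begin_animation_thread() below wires it up the same way,
# using 1.0 / refresh_rate and update_leds as the target.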
class AnimationController:
def __init__(self, led_controller, refresh_rate, led_count, mapping_func, led_color_correction):
self.led_controller = led_controller
self.refresh_rate = refresh_rate
self.led_count = led_count
self.mapping_func = mapping_func
# Initialize prev state arrays
self.reset_prev_states()
# Map led indices to normalized position vectors
self.mapped = [self.mapping_func(i) for i in range(self.led_count)]
# Check mapping dimensions to simplify loop if possible
self.mapping_uses_x_only = True
for point in self.mapped:
if point.y != 0:
self.mapping_uses_x_only = False
# Create lists used to cache current mapping
# so it doesn't have to be recalculated every frame
self.primary_mapping = []
self.secondary_mapping = []
# Used to render main slider/select list
self.params = {
'master_brightness': 0.15,
'master_color_temp': 6500,
'master_gamma': 1.0,
'master_saturation': 1.0,
'primary_pattern': 0,
'primary_speed': 0.2,
'primary_scale': 1.0,
'secondary_pattern': 0,
'secondary_speed': 0.2,
'secondary_scale': 1.0,
}
# Source code for patterns
self.pattern_sources = {}
# Lookup dictionary for pattern functions - keys are used to generate select menu
self.pattern_functions = {}
# Initialize primary patterns
for k, v in patterns.default.items():
self.set_pattern_function(k, v)
# Lookup dictionary for secondary pattern functions
self.secondary_pattern_functions = patterns.default_secondary
# Color palette used for animations
self.colors = [(0, 0, 1)]
# Set default color temp
self.correction_original = led_color_correction
self.calculate_color_correction()
# Set default mapping
self.calculate_mappings()
# Prepare to start
self.start = time.perf_counter()
self.time = 0
self.render_perf_avg = 0
def compile_pattern(self, source):
"""
Compiles source string to a pattern function using restricted globals.
"""
def getitem(obj, index):
if obj is not None and type(obj) in (list, tuple, dict):
return obj[index]
raise Exception()
def getiter(obj):
return obj
restricted_globals = {
'__builtins__': RestrictedPython.Guards.safe_builtins,
'_print_': RestrictedPython.PrintCollector,
'_getattr_': RestrictedPython.Guards.safer_getattr,
'_getitem_': getitem,
'_getiter_': getiter,
'_write_': RestrictedPython.Guards.full_write_guard,
'math': math,
'random': random,
'hsv': patterns.ColorMode.hsv,
'rgb': patterns.ColorMode.rgb,
'clamp': utils.clamp,
'wave_pulse': rpi_ws281x.wave_pulse,
'wave_triangle': rpi_ws281x.wave_triangle,
'wave_sine': rpi_ws281x.wave_sine,
'wave_cubic': rpi_ws281x.wave_cubic,
'plasma_sines': rpi_ws281x.plasma_sines,
'plasma_sines_octave': rpi_ws281x.plasma_sines_octave,
'perlin_noise_3d': rpi_ws281x.perlin_noise_3d,
'impulse_exp': utils.impulse_exp,
'fract': utils.fract,
'blackbody_to_rgb': rpi_ws281x.blackbody_to_rgb,
'blackbody_correction_rgb': rpi_ws281x.blackbody_correction_rgb,
}
restricted_locals = {}
arg_names = ['t', 'dt', 'x', 'y', 'prev_state']
name_error = False
results = RestrictedPython.compile_restricted_exec(source, filename='<inline code>')
print(results)
warnings = list(results.warnings)
for name in results.used_names:
if name not in restricted_globals and name not in arg_names:
name_error = True
warnings.append('NameError: name \'{}\' is not defined'.format(name))
if results.code:
exec(results.code, restricted_globals, restricted_locals)
return results.errors, warnings, restricted_locals['pattern']
else:
return results.errors, warnings, None
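# Illustrative example (an assumption, not taken from the bundled patterns): a
# source string passed to compile_pattern() must define a callable named `pattern`.
# Based on arg_names above it receives (t, dt, x, y, prev_state), and judging by
# update_leds() below it returns a (color, mode) pair, e.g.:
#
#     def pattern(t, dt, x, y, prev_state):
#         return (fract(t + x), 1, 1), hsv   # hue cycles along x over time
#
# `fract` and `hsv` come from the restricted globals defined above; the bundled
# default patterns may use a different signature (they are also passed the
# palette colors).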
def reset_prev_states(self):
"""
Reset previous animation state lists.
"""
self.primary_prev_state = [((0, 0, 0), 0) for i in range(self.led_count)]
self.secondary_prev_state = [((0, 0, 0), 0) for i in range(self.led_count)]
def calculate_color_correction(self):
"""
Calculate and store color temperature correction.
"""
rgb = [int(x * 255) for x in rpi_ws281x.blackbody_to_rgb(self.params['master_color_temp'])]
c = [self.correction_original[0] * rgb[0] // 255,
self.correction_original[1] * rgb[1] // 255,
self.correction_original[2] * rgb[2] // 255]
self.correction = (c[0] << 16) | (c[1] << 8) | c[2]
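# For example (illustrative numbers): with an identity correction_original of
# (255, 255, 255) and a blackbody rgb of (255, 200, 150), the packed value is
# 0xFFC896 (red in bits 23-16, green in bits 15-8, blue in bits 7-0).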
def calculate_mappings(self):
"""
Calculate and store spatial mapping values with current scale
"""
p = []
s = []
if self.params['primary_scale'] != 0:
for i in range(self.led_count):
# Calculate scale components to determine animation position
# scale component = position (max size) / scale (pattern length in units)
# One cycle is a normalized input value's transition from 0 to 1
p.append((
(self.mapped[i][0] / self.params['primary_scale']) % 1,
(self.mapped[i][1] / self.params['primary_scale']) % 1
))
else:
p = [(0, 0) for i in range(self.led_count)]
if self.params['secondary_scale'] != 0:
for i in range(self.led_count):
# Calculate scale components to determine animation position
# scale component = position (max size) / scale (pattern length in units)
# One cycle is a normalized input value's transition from 0 to 1
s.append((
(self.mapped[i][0] / self.params['secondary_scale']) % 1,
(self.mapped[i][1] / self.params['secondary_scale']) % 1
))
else:
s = [(0, 0) for i in range(self.led_count)]
self.primary_mapping = p
self.secondary_mapping = s
def set_param(self, key, value):
"""
Set an animation parameter.
"""
self.params[key] = value
if key == 'master_color_temp':
self.calculate_color_correction()
elif key == 'primary_scale' or key == 'secondary_scale':
self.calculate_mappings()
def set_pattern_function(self, key, source):
"""
Update the source code and recompile a pattern function.
"""
errors, warnings, pattern = self.compile_pattern(source)
if len(errors) == 0:
self.pattern_sources[key] = source
self.pattern_functions[key] = pattern
return errors, warnings
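# Usage sketch (illustrative): callers can inspect the returned errors/warnings, e.g.
#
#     errors, warnings = controller.set_pattern_function(key, source_string)
#     if errors:
#         ...  # reject the submitted pattern and report the errors
#
# The source and compiled function are only stored when compilation produced
# no errors.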
def set_color(self, index, value):
self.colors[index] = value
def set_color_component(self, index, component, value):
self.colors[index][component] = value
def begin_animation_thread(self):
"""
Start animating.
"""
self.timer = RepeatedTimer(1.0 / self.refresh_rate, self.update_leds)
def update_leds(self):
"""
Determine time, render frame, and display.
"""
self.time = self.timer.last_frame - self.start
t0 = time.perf_counter()
# Begin render
# Create local references to patterns
pattern_1 = self.pattern_functions[self.params['primary_pattern']]
pattern_2 = self.secondary_pattern_functions[self.params['secondary_pattern']]
# Calculate times
# time component = time (s) * speed (cycle/s)
primary_time = self.time * self.params['primary_speed']
primary_delta_t = self.timer.delta_t * self.params['primary_speed']
secondary_time = self.time * self.params['secondary_speed']
secondary_delta_t = self.timer.delta_t * self.params['secondary_speed']
# Determine current pattern mode
c, mode = pattern_1(0, 0.1, 0, 0, (0, 0, 0), [(0, 0, 0)])
try:
# Run primary pattern to determine initial color
# State is an array of (color, secondary_value) pairs
state_1 = [(pattern_1(primary_time,
primary_delta_t,
self.primary_mapping[i][0],
self.primary_mapping[i][1],
self.primary_prev_state[i][0],
self.colors)[0], 1) for i in range(self.led_count)]
self.primary_prev_state = state_1
# Run secondary pattern to determine new brightness and possibly modify color
if pattern_2 is None:
state_2 = state_1
else:
state_2 = [pattern_2(secondary_time,
secondary_delta_t,
self.secondary_mapping[i][0],
self.secondary_mapping[i][1],
self.secondary_prev_state[i],
state_1[i][0]) for i in range(self.led_count)]
self.secondary_prev_state = state_2
except Exception as e:
print('Pattern execution: {}'.format(''.join(traceback.format_exception(type(e), e, e.__traceback__))))
state_2 = [((0, 0, 0), 0) for i in range(self.led_count)]
# Write colors to LEDs
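# Each entry c of state_2 is a ((h_or_r, s_or_g, v_or_b), brightness) pair; in
# HSV mode the brightness scales only the V channel, in RGB mode all three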
if mode == patterns.ColorMode.hsv:
self.led_controller.leds.set_all_pixels_hsv_float(
[(c[0][0], c[0][1], c[0][2] * c[1]) for c in state_2],
self.correction,
self.params['master_saturation'],
self.params['master_brightness'],
self.params['master_gamma']
)
elif mode == patterns.ColorMode.rgb:
self.led_controller.leds.set_all_pixels_rgb_float(
[(c[0][0] * c[1], c[0][1] * c[1], c[0][2] * c[1]) for c in state_2],
self.correction,
self.params['master_saturation'],
self.params['master_brightness'],
self.params['master_gamma']
)
# End render
t1 = time.perf_counter()
self.render_perf_avg += (t1 - t0)
if self.timer.count % 100 == 0:
print('Render time (s): {}'.format(self.render_perf_avg / 100))
self.render_perf_avg = 0
def end_animation_thread(self):
"""
Stop rendering in the animation thread.
"""
self.timer.stop()
|
old_install.py
|
#!/usr/bin/env python
# TODO: add installer support for membasez
import Queue
import copy
import getopt
import os
import re
import socket
import sys
from datetime import datetime
from threading import Thread
from Cb_constants import CbServer
sys.path = [".", "lib", "pytests", "pysystests", "couchbase_utils",
"platform_utils", "connections"] + sys.path
import testconstants
import time
from builds.build_query import BuildQuery
import logging.config
from custom_exceptions.exception import ServerUnavailableException
from membase.api.rest_client import RestConnection, RestHelper
from platform_utils.remote.remote_util import \
RemoteMachineShellConnection, \
RemoteUtilHelper
from membase.helper.cluster_helper import ClusterOperationHelper
from testconstants import MV_LATESTBUILD_REPO
from testconstants import CB_REPO, CB_DOWNLOAD_SERVER, \
CB_DOWNLOAD_SERVER_FQDN
from testconstants import COUCHBASE_VERSION_2
from testconstants import COUCHBASE_VERSION_3, COUCHBASE_FROM_SPOCK
from testconstants import CB_VERSION_NAME, COUCHBASE_FROM_VERSION_4, \
CB_RELEASE_BUILDS, COUCHBASE_VERSIONS
from testconstants import MIN_KV_QUOTA, INDEX_QUOTA, FTS_QUOTA, \
CBAS_QUOTA
from testconstants import LINUX_COUCHBASE_PORT_CONFIG_PATH, \
LINUX_COUCHBASE_OLD_CONFIG_PATH
from testconstants import WIN_COUCHBASE_PORT_CONFIG_PATH, \
WIN_COUCHBASE_OLD_CONFIG_PATH
import TestInput
def usage(err=None):
print """\
Syntax: install.py [options]
Options:
-p <key=val,...> Comma-separated key=value info.
-i <file> Path to .ini file containing cluster information.
Available keys:
product=cb|mb Used to specify couchbase or membase.
version=SHORT_VERSION Example: "2.0.0r-71".
parallel=false Useful when you're installing a cluster.
toy= Install a toy build
init_nodes=False Initialize nodes
vbuckets= The number of vbuckets in the server
installation.
sync_threads=True Sync or async threads (+S or +A)
erlang_threads= Number of erlang threads (default=16:16
for +S type)
upr=True Enable UPR replication
xdcr_upr= Enable UPR for XDCR (temporary param
until XDCR with UPR is stable), values: None | True | False
fts_query_limit=1000000 Set a limit for the max results to be
returned by fts for any query
change_indexer_ports=false Sets indexer ports values to non-default
ports
storage_mode=plasma Sets indexer storage mode
enable_ipv6=False Enable ipv6 mode in ns_server
Examples:
install.py -i /tmp/ubuntu.ini -p product=cb,version=2.2.0-792
install.py -i /tmp/ubuntu.ini -p product=cb,version=2.2.0-792,
url=http://builds.hq.northscale.net/latestbuilds....
install.py -i /tmp/ubuntu.ini -p product=mb,version=1.7.1r-38,
parallel=true,toy=keith
install.py -i /tmp/ubuntu.ini -p product=mongo,version=2.0.2
install.py -i /tmp/ubuntu.ini -p product=cb,version=0.0.0-704,
toy=couchstore,parallel=true,vbuckets=1024
# to run with a build that requires openssl version 1.0.0
install.py -i /tmp/ubuntu.ini -p product=cb,version=2.2.0-792,openssl=1
# to install latest release of couchbase server via repo (apt-get
and yum)
install.py -i /tmp/ubuntu.ini -p product=cb,linux_repo=true
# to install non-root non default path, add nr_install_dir params
install.py -i /tmp/ubuntu.ini -p product=cb,version=5.0.0-1900,
nr_install_dir=testnow1
"""
sys.exit(err)
def installer_factory(params):
if params.get("product", None) is None:
sys.exit("ERROR: don't know what product you want installed")
mb_alias = ["membase", "membase-server", "mbs", "mb"]
cb_alias = ["couchbase", "couchbase-server", "cb", "cbas"]
sdk_alias = ["python-sdk", "pysdk"]
es_alias = ["elasticsearch"]
css_alias = ["couchbase-single", "couchbase-single-server", "css"]
mongo_alias = ["mongo"]
if params["product"] in mb_alias:
return MembaseServerInstaller()
elif params["product"] in cb_alias:
return CouchbaseServerInstaller()
elif params["product"] in mongo_alias:
return MongoInstaller()
elif params["product"] in sdk_alias:
return SDKInstaller()
elif params["product"] in es_alias:
return ESInstaller()
sys.exit("ERROR: don't know about product " + params["product"])
class Installer(object):
def install(self, params):
pass
def initialize(self, params):
pass
def uninstall(self, params):
remote_client = RemoteMachineShellConnection(params["server"])
# remote_client.membase_uninstall()
self.msi = 'msi' in params and params['msi'].lower() == 'true'
remote_client.couchbase_uninstall(windows_msi=self.msi,
product=params['product'])
remote_client.disconnect()
def build_url(self, params):
_errors = []
version = ''
server = ''
openssl = ''
names = []
url = ''
direct_build_url = None
# replace "v" with version
# replace p with product
tmp = {}
for k in params:
value = params[k]
if k == "v":
tmp["version"] = value
elif k == "p":
tmp["version"] = value
else:
tmp[k] = value
params = tmp
ok = True
if not "version" in params and len(params["version"]) < 5:
_errors.append(errors["INVALID-PARAMS"])
ok = False
else:
version = params["version"]
if ok:
if not "product" in params:
_errors.append(errors["INVALID-PARAMS"])
ok = False
if ok:
if not "server" in params:
_errors.append(errors["INVALID-PARAMS"])
ok = False
else:
server = params["server"]
if ok:
if "toy" in params:
toy = params["toy"]
else:
toy = ""
if ok:
if "openssl" in params:
openssl = params["openssl"]
if ok:
if "url" in params and params["url"] != "" \
and isinstance(params["url"], str):
direct_build_url = params["url"]
if ok:
if "linux_repo" in params \
and params["linux_repo"].lower() == "true":
linux_repo = True
else:
linux_repo = False
if ok:
if "msi" in params and params["msi"].lower() == "true":
msi = True
else:
msi = False
if ok:
mb_alias = ["membase", "membase-server", "mbs", "mb"]
cb_alias = ["couchbase", "couchbase-server", "cb"]
css_alias = ["couchbase-single", "couchbase-single-server",
"css"]
cbas_alias = ["cbas", "server-analytics"]
if params["product"] in cbas_alias:
names = ['couchbase-server-analytics',
'server-analytics']
elif params["product"] in mb_alias:
names = ['membase-server-enterprise',
'membase-server-community']
elif params["product"] in cb_alias:
if "type" in params and params[
"type"].lower() in "couchbase-server-community":
names = ['couchbase-server-community']
elif "type" in params and params[
"type"].lower() in "couchbase-server-enterprise":
names = ['couchbase-server-enterprise']
elif "type" in params and params[
"type"].lower() in "no-jre":
names = ['couchbase-server-enterprise-no-jre']
else:
names = ['couchbase-server-enterprise',
'couchbase-server-community']
elif params["product"] in css_alias:
names = ['couchbase-single-server-enterprise',
'couchbase-single-server-community']
else:
ok = False
_errors.append(errors["INVALID-PARAMS"])
if "1" in openssl:
names = ['couchbase-server-enterprise_centos6',
'couchbase-server-community_centos6',
'couchbase-server-enterprise_ubuntu_1204',
'couchbase-server-community_ubuntu_1204']
if "toy" in params:
names = ['couchbase-server-enterprise']
remote_client = RemoteMachineShellConnection(server)
info = remote_client.extract_remote_info()
print "*** OS version of this server %s is %s ***" % (
remote_client.ip,
info.distribution_version)
if info.distribution_version.lower() == "suse 12":
if version[:5] not in COUCHBASE_FROM_SPOCK:
mesg = "%s does not support cb version %s" % \
(info.distribution_version, version[:5])
remote_client.stop_current_python_running(mesg)
remote_client.disconnect()
if info.type.lower() == "windows":
if "-" in version:
msi_build = version.split("-")
"""
In spock from build 2924 and later release,
we only support
msi installation method on windows
"""
if "2k8" in info.windows_name:
info.windows_name = 2008
if msi_build[0] in COUCHBASE_FROM_SPOCK:
info.deliverable_type = "msi"
elif "5" > msi_build[0] and info.windows_name == 2016:
log.info("\n========\n"
" Build version %s does not "
"support on\n"
" Windows Server 2016\n"
"========" % msi_build[0])
os.system(
"ps aux | grep python | grep %d " % os.getpid())
time.sleep(5)
os.system('kill %d' % os.getpid())
else:
print "Incorrect version format"
sys.exit()
if ok and not linux_repo:
timeout = 300
if "timeout" in params:
timeout = int(params["timeout"])
releases_version = ["1.6.5.4", "1.7.0", "1.7.1", "1.7.1.1",
"1.8.0"]
cb_releases_version = ["1.8.1", "2.0.0", "2.0.1", "2.1.0",
"2.1.1", "2.2.0",
"2.5.0", "2.5.1", "2.5.2", "3.0.0",
"3.0.1", "3.0.2",
"3.0.3", "3.1.0", "3.1.1", "3.1.2",
"3.1.3", "3.1.5", "3.1.6",
"4.0.0", "4.0.1", "4.1.0", "4.1.1",
"4.1.2", "4.5.0"]
build_repo = MV_LATESTBUILD_REPO
if toy != "":
build_repo = CB_REPO
elif "server-analytics" in names:
build_repo = CB_REPO.replace("couchbase-server",
"server-analytics") + \
CB_VERSION_NAME[version[:3]] + "/"
elif version[:5] not in COUCHBASE_VERSION_2 and \
version[:5] not in COUCHBASE_VERSION_3:
if version[:3] in CB_VERSION_NAME:
build_repo = CB_REPO + CB_VERSION_NAME[
version[:3]] + "/"
else:
sys.exit("version is not support yet")
if 'enable_ipv6' in params and params['enable_ipv6']:
build_repo = build_repo.replace(CB_DOWNLOAD_SERVER,
CB_DOWNLOAD_SERVER_FQDN)
for name in names:
if version[:5] in releases_version:
build = BuildQuery().find_membase_release_build(
deliverable_type=info.deliverable_type,
os_architecture=info.architecture_type,
build_version=version,
product='membase-server-enterprise')
elif len(version) > 6 and version[6:].replace("-rel",
"") == \
CB_RELEASE_BUILDS[version[:5]]:
build = BuildQuery().find_couchbase_release_build(
deliverable_type=info.deliverable_type,
os_architecture=info.architecture_type,
build_version=version,
product=name,
os_version=info.distribution_version,
direct_build_url=direct_build_url)
else:
builds, changes = BuildQuery().get_all_builds(
version=version,
timeout=timeout,
direct_build_url=direct_build_url,
deliverable_type=info.deliverable_type,
architecture_type=info.architecture_type,
edition_type=name,
repo=build_repo, toy=toy,
distribution_version=info.distribution_version.lower(),
distribution_type=info.distribution_type
.lower())
build = BuildQuery().find_build(builds, name,
info.deliverable_type,
info.architecture_type,
version, toy=toy,
openssl=openssl,
direct_build_url=direct_build_url,
distribution_version=info.distribution_version.lower(),
distribution_type=info.distribution_type.lower())
if build:
if 'amazon' in params:
type = info.type.lower()
if type == 'windows' and version in \
releases_version:
build.url = build.url.replace(
"http://builds.hq.northscale.net",
"https://s3.amazonaws.com/packages.couchbase")
build.url = build.url.replace("enterprise",
"community")
build.name = build.name.replace(
"enterprise", "community")
else:
""" since url in S3 insert version into
it, we need to put version
in like ..latestbuilds/3.0.0/... """
cb_version = version[:5]
build.url = build.url.replace(
"http://builds.hq.northscale.net/latestbuilds",
"http://packages.northscale.com/latestbuilds/{0}"
.format(cb_version))
""" test enterprise version """
# build.url = build.url.replace(
# "enterprise", "community")
# build.name = build.name.replace(
# "enterprise", "community")
""" check if URL is live """
remote_client = RemoteMachineShellConnection(server)
url_live = remote_client.is_url_live(build.url)
remote_client.disconnect()
if url_live:
return build
else:
sys.exit(
"ERROR: URL is not good. Check URL again")
_errors.append(errors["BUILD-NOT-FOUND"])
if not linux_repo:
msg = "unable to find a build for product {0} version {1} " \
"" \
"" \
"for package_type {2}"
raise Exception(
msg.format(names, version, info.deliverable_type))
def is_socket_active(self, host, port, timeout=300):
""" Check if remote socket is open and active
Keyword arguments:
host -- remote address
port -- remote port
timeout -- check timeout (in seconds)
Returns:
True -- socket is active
False -- otherwise
"""
start_time = time.time()
while time.time() - start_time < timeout:
# A socket cannot be reliably reused after a failed connect,
# so create a fresh one for every attempt
sckt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
sckt.connect((host, port))
sckt.shutdown(2)
sckt.close()
return True
except socket.error:
sckt.close()
time.sleep(10)
return False
class MembaseServerInstaller(Installer):
def __init__(self):
Installer.__init__(self)
def initialize(self, params):
start_time = time.time()
cluster_initialized = False
server = params["server"]
while time.time() < (start_time + (5 * 60)):
rest = RestConnection(server)
try:
if server.data_path:
remote_client = RemoteMachineShellConnection(server)
remote_client.execute_command(
'rm -rf {0}/*'.format(server.data_path))
# Make sure that data_path is writable by membase user
remote_client.execute_command(
"chown -R membase.membase {0}".format(
server.data_path))
remote_client.disconnect()
rest.set_data_path(data_path=server.data_path,
index_path=server.index_path,
cbas_path=server.cbas_path)
rest.init_cluster(username=server.rest_username,
password=server.rest_password)
rest.set_service_mem_quota(
{CbServer.Settings.KV_MEM_QUOTA: rest.get_nodes_self().mcdMemoryReserved})
cluster_initialized = True
break
except ServerUnavailableException:
log.error(
"error happened while initializing the cluster @ "
"{0}".format(
server.ip))
log.info('sleep for 5 seconds before trying again ...')
time.sleep(5)
if not cluster_initialized:
raise Exception("unable to initialize membase node")
def install(self, params, queue=None):
try:
build = self.build_url(params)
except Exception, e:
if queue:
queue.put(False)
raise e
remote_client = RemoteMachineShellConnection(params["server"])
info = remote_client.extract_remote_info()
type = info.type.lower()
server = params["server"]
if "vbuckets" in params:
vbuckets = int(params["vbuckets"][0])
else:
vbuckets = None
if "swappiness" in params:
swappiness = int(params["swappiness"])
else:
swappiness = 0
if "openssl" in params:
openssl = params["openssl"]
else:
openssl = ""
success = True
if type == "windows":
build = self.build_url(params)
remote_client.download_binary_in_win(build.url,
params["version"])
success = remote_client.install_server_win(
build, params["version"],
vbuckets=vbuckets)
else:
downloaded = remote_client.download_build(build)
if not downloaded:
log.error('Server {1} unable to download binaries: {0}'
.format(build.url, params["server"].ip))
return False
success &= remote_client.install_server(build,
vbuckets=vbuckets,
swappiness=swappiness,
openssl=openssl)
ready = RestHelper(RestConnection(params["server"]))\
.is_ns_server_running(60)
if not ready:
log.error("membase-server did not start...")
log.info('wait 5 seconds for Membase server to start')
time.sleep(5)
remote_client.disconnect()
if queue:
queue.put(success)
return success
class CouchbaseServerInstaller(Installer):
def __init__(self):
Installer.__init__(self)
def initialize(self, params):
# log.info('*****CouchbaseServerInstaller initialize the
# application ****')
start_time = time.time()
cluster_initialized = False
server = params["server"]
remote_client = RemoteMachineShellConnection(params["server"])
success = True
success &= remote_client.is_couchbase_installed()
if not success:
mesg = "\nServer {0} failed to install".format(
params["server"].ip)
sys.exit(mesg)
while time.time() < start_time + 5 * 60:
try:
rest = RestConnection(server)
# Optionally change node name and restart server
if params.get('use_domain_names', 0):
RemoteUtilHelper.use_hostname_for_server_settings(
server)
if params.get('enable_ipv6', 0):
status, content = RestConnection(
server).rename_node(
hostname=server.ip.replace('[', '').replace(']',
''))
if status:
log.info(
"Node {0} renamed to {1}".format(server.ip,
server.ip.replace(
'[',
'').
replace(
']',
'')))
else:
log.error("Error renaming node {0} to {1}: {2}".
format(server.ip,
server.ip.replace('[',
'').replace(
']', ''),
content))
# Make sure that data_path and index_path are
# writable by couchbase user
for path in set(filter(None, [server.data_path,
server.index_path])):
time.sleep(3)
for cmd in ("rm -rf {0}/*".format(path),
"chown -R couchbase:couchbase {0}".format(
path)):
remote_client.execute_command(cmd)
rest.set_data_path(data_path=server.data_path,
index_path=server.index_path,
cbas_path=server.cbas_path)
time.sleep(3)
# Initialize cluster
if "init_nodes" in params:
init_nodes = params["init_nodes"]
else:
init_nodes = "True"
if (isinstance(init_nodes, bool) and init_nodes) or \
(isinstance(init_nodes,
str) and init_nodes.lower() ==
"true"):
if not server.services:
set_services = ["kv"]
elif server.services:
set_services = server.services.split(',')
kv_quota = 0
while kv_quota == 0:
time.sleep(1)
kv_quota = int(
rest.get_nodes_self().mcdMemoryReserved)
info = rest.get_nodes_self()
cb_version = info.version[:5]
kv_quota = int(info.mcdMemoryReserved * 2 / 3)
""" for fts, we need to grep quota from ns_server
but need to make it works even RAM of
vm is
smaller than 2 GB """
if cb_version in COUCHBASE_FROM_VERSION_4:
if "index" in set_services:
log.info("quota for index service will be %s MB"
% INDEX_QUOTA)
kv_quota -= INDEX_QUOTA
log.info(
"set index quota to node %s " %
server.ip)
rest.set_service_mem_quota(
{CbServer.Settings.INDEX_MEM_QUOTA: INDEX_QUOTA})
if "fts" in set_services:
log.info(
"quota for fts service will be %s MB"
% FTS_QUOTA)
kv_quota -= FTS_QUOTA
log.info(
"set both index and fts quota at node "
"%s " % server.ip)
rest.set_service_mem_quota(
{CbServer.Settings.FTS_MEM_QUOTA: FTS_QUOTA})
if "cbas" in set_services:
log.info(
"quota for cbas service will be %s "
"MB" % (
CBAS_QUOTA))
kv_quota -= CBAS_QUOTA
rest.set_service_mem_quota(
{CbServer.Settings.CBAS_MEM_QUOTA: CBAS_QUOTA})
if kv_quota < MIN_KV_QUOTA:
raise Exception(
"KV RAM needs to be more than %s MB"
" at node %s" % (
MIN_KV_QUOTA, server.ip))
""" set kv quota smaller than 1 MB so that it
will satify
the condition smaller than allow quota """
kv_quota -= 1
log.info("quota for kv: %s MB" % kv_quota)
rest.set_service_mem_quota(
{CbServer.Settings.KV_MEM_QUOTA: kv_quota})
if params["version"][
:5] in COUCHBASE_FROM_VERSION_4:
rest.init_node_services(
username=server.rest_username,
password=server.rest_password,
services=set_services)
if "index" in set_services:
if "storage_mode" in params:
storageMode = params["storage_mode"]
else:
storageMode = "plasma"
rest.set_indexer_storage_mode(
storageMode=storageMode)
rest.init_cluster(username=server.rest_username,
password=server.rest_password)
# Optionally disable consistency check
if params.get('disable_consistency', 0):
rest.set_couchdb_option(section='couchdb',
option='consistency_check_ratio',
value='0.0')
# memcached env variable
mem_req_tap_env = params.get('MEMCACHED_REQS_TAP_EVENT',
0)
if mem_req_tap_env:
remote_client.set_environment_variable(
'MEMCACHED_REQS_TAP_EVENT',
mem_req_tap_env)
""" set cbauth environment variables from Watson version
it is checked version inside method """
remote_client.set_cbauth_env(server)
remote_client.check_man_page()
""" add unzip command on server if it is not
available """
remote_client.check_cmd("unzip")
remote_client.is_ntp_installed()
# TODO: Make it work with windows
if "erlang_threads" in params:
num_threads = params.get('erlang_threads',
testconstants.NUM_ERLANG_THREADS)
# Stop couchbase-server
ClusterOperationHelper.stop_cluster([server])
if "sync_threads" in params or ':' in num_threads:
sync_threads = params.get('sync_threads', True)
else:
sync_threads = False
# Change type of threads(sync/async) and num
# erlang threads
ClusterOperationHelper.change_erlang_threads_values(
[server], sync_threads, num_threads)
# Start couchbase-server
ClusterOperationHelper.start_cluster([server])
if "erlang_gc_level" in params:
erlang_gc_level = params.get('erlang_gc_level',
None)
if erlang_gc_level is None:
# Don't change the value
break
# Stop couchbase-server
ClusterOperationHelper.stop_cluster([server])
# Change num erlang threads
ClusterOperationHelper.change_erlang_gc([server],
erlang_gc_level)
# Start couchbase-server
ClusterOperationHelper.start_cluster([server])
cluster_initialized = True
break
except ServerUnavailableException:
log.error(
"error happened while initializing the cluster @ "
"{0}".format(
server.ip))
log.info('sleep for 5 seconds before trying again ...')
time.sleep(5)
remote_client.disconnect()
if not cluster_initialized:
sys.exit("unable to initialize couchbase node")
def install(self, params, queue=None):
log.info('********CouchbaseServerInstaller:install')
self.msi = 'msi' in params and params['msi'].lower() == 'true'
start_server = True
try:
if "linux_repo" not in params:
build = self.build_url(params)
except Exception, e:
if queue:
queue.put(False)
raise e
remote_client = RemoteMachineShellConnection(params["server"])
info = remote_client.extract_remote_info()
type = info.type.lower()
server = params["server"]
force_upgrade = False
self.nonroot = False
if info.deliverable_type in ["rpm", "deb"]:
if server.ssh_username != "root":
self.nonroot = True
if "swappiness" in params:
swappiness = int(params["swappiness"])
else:
swappiness = 0
if "openssl" in params:
openssl = params["openssl"]
else:
openssl = ""
if "vbuckets" in params:
vbuckets = int(params["vbuckets"][0])
else:
vbuckets = None
if "upr" in params and params["upr"].lower() != "none":
upr = params["upr"].lower() == 'true'
else:
upr = None
if "xdcr_upr" not in params:
xdcr_upr = None
else:
xdcr_upr = eval(params["xdcr_upr"].capitalize())
if "fts_query_limit" in params:
fts_query_limit = params["fts_query_limit"]
start_server = False
else:
fts_query_limit = None
if "enable_ipv6" in params:
enable_ipv6 = params["enable_ipv6"]
start_server = False
else:
enable_ipv6 = None
if "linux_repo" in params and params[
"linux_repo"].lower() == "true":
linux_repo = True
else:
linux_repo = False
if "force_upgrade" in params:
force_upgrade = params["force_upgrade"]
if not linux_repo:
if type == "windows":
log.info('***** Download Windows binary*****')
"""
In spock from build 2924 and later release,
we only support
msi installation method on windows
"""
if "-" in params["version"] and \
params["version"].split("-")[
0] in COUCHBASE_FROM_SPOCK:
self.msi = True
os_type = "msi"
remote_client.download_binary_in_win(build.url,
params["version"],
msi_install=self.msi)
success = remote_client.install_server_win(build,
params[
"version"].replace(
"-rel",
""),
vbuckets=vbuckets,
fts_query_limit=fts_query_limit,
enable_ipv6=enable_ipv6,
windows_msi=self.msi)
else:
downloaded = remote_client.download_build(build)
if not downloaded:
sys.exit('server {1} unable to download binaries : {0}'
.format(build.url, params["server"].ip))
# TODO: need separate methods in remote_util for
# couchbase and membase install
path = server.data_path or '/tmp'
try:
success = remote_client.install_server(
build,
path=path,
startserver=start_server,
vbuckets=vbuckets,
swappiness=swappiness,
openssl=openssl,
upr=upr,
xdcr_upr=xdcr_upr,
fts_query_limit=fts_query_limit,
enable_ipv6=enable_ipv6,
force=force_upgrade)
ready = RestHelper(RestConnection(params["server"]))\
.is_ns_server_running(60)
if not ready:
log.error("membase-server did not start...")
if "rest_vbuckets" in params:
rest_vbuckets = int(params["rest_vbuckets"])
ClusterOperationHelper.set_vbuckets(server,
rest_vbuckets)
except BaseException, e:
success = False
log.error("installation failed: {0}".format(e))
remote_client.disconnect()
if queue:
queue.put(success)
return success
elif linux_repo:
cb_edition = ""
if "type" in params and params["type"] == "community":
cb_edition = "community"
try:
success = remote_client.install_server_via_repo(
info.deliverable_type, \
cb_edition, remote_client)
log.info('wait 5 seconds for Couchbase server to start')
time.sleep(5)
except BaseException, e:
success = False
log.error("installation failed: {0}".format(e))
remote_client.disconnect()
if queue:
queue.put(success)
return success
class MongoInstaller(Installer):
def get_server(self, params):
version = params["version"]
server = params["server"]
server.product_name = "mongodb-linux-x86_64-" + version
server.product_tgz = server.product_name + ".tgz"
server.product_url = "http://fastdl.mongodb.org/linux/" + \
server.product_tgz
return server
def mk_remote_client(self, server):
remote_client = RemoteMachineShellConnection(server)
info = remote_client.extract_remote_info()
type = info.type.lower()
if type == "windows":
sys.exit("ERROR: please teach me about windows one day.")
return remote_client
def uninstall(self, params):
server = self.get_server(params)
remote_client = self.mk_remote_client(server)
remote_client.execute_command("killall mongod mongos")
remote_client.execute_command("killall -9 mongod mongos")
remote_client.execute_command(
"rm -rf ./{0}".format(server.product_name))
remote_client.disconnect()
def install(self, params):
server = self.get_server(params)
remote_client = self.mk_remote_client(server)
downloaded = remote_client.download_binary(server.product_url,
"tgz",
server.product_tgz)
if not downloaded:
log.error('server {1} unable to download binaries : {0}'
.format(server.product_url, server.ip))
remote_client.execute_command(
"tar -xzvf /tmp/{0}".format(server.product_tgz))
remote_client.disconnect()
def initialize(self, params):
server = self.get_server(params)
remote_client = self.mk_remote_client(server)
remote_client.execute_command(
"mkdir -p {0}/data/data-27019 {0}/data/data-27018 {"
"0}/log". \
format(server.product_name))
remote_client.execute_command(
"./{0}/bin/mongod --port 27019 --fork --rest --configsvr" \
" --logpath ./{0}/log/mongod-27019.out" \
" --dbpath ./{0}/data/data-27019". \
format(server.product_name))
remote_client.execute_command(
"./{0}/bin/mongod --port 27018 --fork --rest --shardsvr" \
" --logpath ./{0}/log/mongod-27018.out" \
" --dbpath ./{0}/data/data-27018". \
format(server.product_name))
log.info(
"check that config server started before launching mongos")
if self.is_socket_active(host=server.ip, port=27019):
remote_client.execute_command(
("./{0}/bin/mongos --port 27017 --fork" \
" --logpath ./{0}/log/mongos-27017.out" \
" --configdb " + server.ip + ":27019"). \
format(server.product_name))
remote_client.disconnect()
else:
log.error(
"Connection with MongoDB config server was not "
"established.")
remote_client.disconnect()
sys.exit()
class SDKInstaller(Installer):
def __init__(self):
pass
def initialize(self, params):
log.info('There is no initialize phase for sdk installation')
def uninstall(self):
pass
def install(self, params):
remote_client = RemoteMachineShellConnection(params["server"])
info = remote_client.extract_remote_info()
os = info.type.lower()
type = info.deliverable_type.lower()
version = info.distribution_version.lower()
if params['subdoc'] == 'True':
sdk_url = 'git+git://github.com/mnunberg/couchbase-python' \
'-client.git@subdoc'
else:
sdk_url = 'git+git://github.com/couchbase/couchbase' \
'-python-client.git'
if os == 'linux':
if (type == 'rpm' and params['subdoc'] == 'False'):
repo_file = '/etc/yum.repos.d/couchbase.repo'
baseurl = ''
if (version.find('centos') != -1 and version.find(
'6.2') != -1):
baseurl = \
'http://packages.couchbase.com/rpm/6.2/x86-64'
elif (version.find('centos') != -1 and version.find(
'6.4') != -1):
baseurl = \
'http://packages.couchbase.com/rpm/6.4/x86-64'
elif (version.find('centos') != -1 and version.find(
'7') != -1):
baseurl = \
'http://packages.couchbase.com/rpm/7/x86_64'
else:
log.info(
"os version {0} not supported".format(version))
exit(1)
remote_client.execute_command(
"rm -rf {0}".format(repo_file))
remote_client.execute_command(
"touch {0}".format(repo_file))
remote_client.execute_command(
"echo [couchbase] >> {0}".format(repo_file))
remote_client.execute_command(
"echo enabled=1 >> {0}".format(repo_file))
remote_client.execute_command("echo name = Couchbase "
"package repository \
>> {0}".format(repo_file))
remote_client.execute_command("echo baseurl = {0} >> \
{1}".format(baseurl, repo_file))
remote_client.execute_command("yum -n update")
remote_client.execute_command("yum -y install \
libcouchbase2-libevent libcouchbase-devel "
"libcouchbase2-bin")
remote_client.execute_command(
"yum -y install python-pip")
remote_client.execute_command(
"pip -y uninstall couchbase")
remote_client.execute_command(
"pip -y install {0}".format(sdk_url))
elif (type == 'rpm' and params['subdoc'] == 'True'):
package_url = ''
lcb_core = ''
lcb_libevent = ''
lcb_devel = ''
lcb_bin = ''
if (version.find('centos') != -1 and version.find(
'6') != -1):
package_url = 'http://172.23.105.153/228/DIST/el6/'
lcb_core = \
'libcouchbase2-core-2.5.4-11.r10ga37efd8.SP' \
'.el6.x86_64.rpm'
lcb_libevent = \
'libcouchbase2-libevent-2.5.4-11.r10ga37efd8' \
'.SP.el6.x86_64.rpm'
lcb_devel = \
'libcouchbase-devel-2.5.4-11.r10ga37efd8.SP' \
'.el6.x86_64.rpm'
lcb_bin = \
'libcouchbase2-bin-2.5.4-11.r10ga37efd8.SP' \
'.el6.x86_64.rpm'
remote_client.execute_command(
'rpm -ivh '
'http://dl.fedoraproject.org/pub/epel/6'
'/x86_64/epel-release-6-8.noarch.rpm')
elif (version.find('centos') != -1 and version.find(
'7') != -1):
package_url = 'http://172.23.105.153/228/DIST/el7/'
lcb_core = \
'libcouchbase2-core-2.5.4-11.r10ga37efd8.SP' \
'.el7.centos.x86_64.rpm'
lcb_libevent = \
'libcouchbase2-libevent-2.5.4-11.r10ga37efd8' \
'.SP.el7.centos.x86_64.rpm'
lcb_devel = \
'libcouchbase-devel-2.5.4-11.r10ga37efd8.SP' \
'.el7.centos.x86_64.rpm'
lcb_bin = \
'libcouchbase2-bin-2.5.4-11.r10ga37efd8.SP' \
'.el7.centos.x86_64.rpm'
remote_client.execute_command(
'yum -y install epel-release')
remote_client.execute_command(
'yum -y remove "libcouchbase*"')
remote_client.execute_command(
'rm -rf {0} {1} {2} {3}'.format(lcb_core,
lcb_libevent,
lcb_devel, lcb_bin))
remote_client.execute_command(
'wget {0}{1}'.format(package_url, lcb_core))
remote_client.execute_command(
'wget {0}{1}'.format(package_url,
lcb_libevent))
remote_client.execute_command(
'wget {0}{1}'.format(package_url, lcb_devel))
remote_client.execute_command(
'wget {0}{1}'.format(package_url, lcb_bin))
remote_client.execute_command(
'rpm -ivh {0} {1} {2}'.format(lcb_core,
lcb_libevent,
lcb_devel, lcb_bin))
remote_client.execute_command(
'yum -y install python-pip')
remote_client.execute_command(
'pip uninstall -y couchbase')
remote_client.execute_command(
'pip install {0}'.format(sdk_url))
elif (type == "deb" and params['subdoc'] == 'False'):
repo_file = "/etc/sources.list.d/couchbase.list"
entry = ""
if (version.find("ubuntu") != -1 and version.find(
"12.04") != -1):
entry = "http://packages.couchbase.com/ubuntu " \
"precise precise/main"
elif (version.find("ubuntu") != -1 and version.find(
"14.04") != -1):
entry = "http://packages.couchbase.com/ubuntu " \
"trusty trusty/main"
elif (version.find("debian") != -1 and version.find(
"7") != -1):
entry = "http://packages.couchbase.com/ubuntu " \
"wheezy wheezy/main"
else:
log.info(
"os version {0} not supported".format(version))
exit(1)
remote_client.execute_command(
"rm -rf {0}".format(repo_file))
remote_client.execute_command(
"touch {0}".format(repo_file))
remote_client.execute_command(
"echo deb {0} >> {1}".format(entry, repo_file))
remote_client.execute_command("apt-get update")
remote_client.execute_command(
"apt-get -y install libcouchbase2-libevent "
"libcouchbase-devel libcouchbase2-bin")
remote_client.execute_command(
"apt-get -y install python-pip")
remote_client.execute_command(
"pip uninstall -y couchbase")
remote_client.execute_command(
"pip install {0}".format(sdk_url))
if os == "mac":
remote_client.execute_command("brew install libcouchbase;\
brew link libcouchbase")
remote_client.execute_command(
"brew install pip; brew link pip")
remote_client.execute_command(
"pip install {0}".format(sdk_url))
if os == "windows":
log.info('Currently not supported')
remote_client.disconnect()
return True
class ESInstaller(object):
def __init__(self):
self.remote_client = None
pass
def initialize(self, params):
self.remote_client.execute_command(
"~/elasticsearch/bin/elasticsearch > es.log 2>&1 &")
def install(self, params):
self.remote_client = RemoteMachineShellConnection(
params["server"])
self.remote_client.execute_command("pkill -f elasticsearch")
self.remote_client.execute_command("rm -rf ~/elasticsearch")
self.remote_client.execute_command(
"rm -rf ~/elasticsearch-*.tar.gz*")
download_url = "https://download.elasticsearch.org" \
"/elasticsearch/elasticsearch/elasticsearch-{" \
"0}.tar.gz".format(
params["version"])
self.remote_client.execute_command(
"wget {0}".format(download_url))
self.remote_client.execute_command(
"tar xvzf elasticsearch-{0}.tar.gz; mv elasticsearch-{0} "
"elasticsearch".format(
params["version"]))
self.remote_client.execute_command(
"echo couchbase.password: password >> "
"~/elasticsearch/config/elasticsearch.yml")
self.remote_client.execute_command(
"echo network.bind_host: _eth0:ipv4_ >> "
"~/elasticsearch/config/elasticsearch.yml")
self.remote_client.execute_command(
"echo couchbase.port: 9091 >> "
"~/elasticsearch/config/elasticsearch.yml")
self.remote_client.execute_command(
"~/elasticsearch/bin/plugin -u {0} -i "
"transport-couchbase".format(
params["plugin-url"]))
self.remote_client.execute_command(
"~/elasticsearch/bin/plugin -u "
"https://github.com/mobz/elasticsearch-head/archive"
"/master.zip -i mobz/elasticsearch-head")
self.remote_client.disconnect()
return True
def __exit__(self):
self.remote_client.disconnect()
class InstallerJob(object):
def sequential_install(self, servers, params):
installers = []
for server in servers:
_params = copy.deepcopy(params)
_params["server"] = server
installers.append((installer_factory(_params), _params))
for installer, _params in installers:
try:
installer.uninstall(_params)
if "product" in params and params["product"] in [
"couchbase", "couchbase-server", "cb"]:
success = True
for server in servers:
shell = RemoteMachineShellConnection(server)
success &= not shell.is_couchbase_installed()
shell.disconnect()
if not success:
print "Server:{0}.Couchbase is still" + \
" installed after uninstall".format(
server)
return success
print "uninstall succeeded"
except Exception as ex:
print "unable to complete the uninstallation: ", ex
success = True
for installer, _params in installers:
try:
success &= installer.install(_params)
try:
installer.initialize(_params)
except Exception as ex:
print "unable to initialize the server after " \
"successful installation", ex
except Exception as ex:
print "unable to complete the installation: ", ex
return success
def parallel_install(self, servers, params):
uninstall_threads = []
install_threads = []
initializer_threads = []
queue = Queue.Queue()
success = True
for server in servers:
if params.get('enable_ipv6', 0):
if re.match('\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}',
server.ip):
sys.exit(
"****************************** ERROR: You are "
"trying to enable IPv6 on an IPv4 machine, "
"run without enable_ipv6=True "
"******************")
_params = copy.deepcopy(params)
_params["server"] = server
u_t = Thread(target=installer_factory(params).uninstall,
name="uninstaller-thread-{0}".format(
server.ip),
args=(_params,))
i_t = Thread(target=installer_factory(params).install,
name="installer-thread-{0}".format(server.ip),
args=(_params, queue))
init_t = Thread(target=installer_factory(params).initialize,
name="initializer-thread-{0}".format(
server.ip),
args=(_params,))
uninstall_threads.append(u_t)
install_threads.append(i_t)
initializer_threads.append(init_t)
for t in uninstall_threads:
t.start()
for t in uninstall_threads:
t.join()
print "thread {0} finished".format(t.name)
if "product" in params and params["product"] in ["couchbase",
"couchbase-server",
"cb"]:
success = True
for server in servers:
shell = RemoteMachineShellConnection(server)
success &= not shell.is_couchbase_installed()
shell.disconnect()
if not success:
print "Server:{0}.Couchbase is still installed after " \
"uninstall".format(
server)
return success
for t in install_threads:
t.start()
for t in install_threads:
t.join()
print "thread {0} finished".format(t.name)
while not queue.empty():
success &= queue.get()
if not success:
print "installation failed. initializer threads were " \
"skipped"
return success
for t in initializer_threads:
t.start()
for t in initializer_threads:
t.join()
print "thread {0} finished".format(t.name)
""" remove any capture files left after install windows """
remote_client = RemoteMachineShellConnection(servers[0])
type = remote_client.extract_remote_info().distribution_type
remote_client.disconnect()
if type.lower() == 'windows':
for server in servers:
shell = RemoteMachineShellConnection(server)
shell.execute_command(
"rm -f /cygdrive/c/automation/*_172.23*")
shell.execute_command(
"rm -f /cygdrive/c/automation/*_10.17*")
os.system(
"rm -f resources/windows/automation/*_172.23*")
os.system("rm -f resources/windows/automation/*_10.17*")
shell.disconnect()
return success
def check_build(input):
_params = copy.deepcopy(input.test_params)
_params["server"] = input.servers[0]
installer = installer_factory(_params)
try:
build = installer.build_url(_params)
log.info("Found build: {0}".format(build))
except Exception:
log.error("Cannot find build {0}".format(_params))
exit(1)
def change_couchbase_indexer_ports(input):
params = {"indexer_admin_port": 9110,
"indexer_scan_port": 9111,
"indexer_http_port": 9112,
"indexer_stream_init_port": 9113,
"indexer_stream_catchup_port": 9114,
"indexer_stream_maint_port": 9115}
remote_client = RemoteMachineShellConnection(input.servers[0])
info = remote_client.extract_remote_info()
remote_client.disconnect()
type = info.type.lower()
if type == "windows":
port_config_path = WIN_COUCHBASE_PORT_CONFIG_PATH
old_config_path = WIN_COUCHBASE_OLD_CONFIG_PATH
else:
port_config_path = LINUX_COUCHBASE_PORT_CONFIG_PATH
old_config_path = LINUX_COUCHBASE_OLD_CONFIG_PATH
filename = "static_config"
for node in input.servers:
output_lines = ''
remote = RemoteMachineShellConnection(node)
remote.stop_server()
lines = remote.read_remote_file(port_config_path, filename)
for line in lines:
for key in params.keys():
if key in line:
line = ""
break
output_lines += "{0}".format(line)
for key in params.keys():
line = "{" + str(key) + ", " + str(params[key]) + "}."
output_lines += "{0}\n".format(line)
output_lines = output_lines.replace(r'"', r'\"')
remote.write_remote_file(port_config_path, filename,
output_lines)
remote.delete_file(old_config_path, "/config.dat")
remote.disconnect()
for node in input.servers:
remote = RemoteMachineShellConnection(node)
remote.start_server()
remote.disconnect()
def main():
log.info('*****Starting the complete install process ****')
log_install_failed = "some nodes were not install successfully!"
try:
(opts, args) = getopt.getopt(sys.argv[1:], 'hi:p:', [])
for o, a in opts:
if o == "-h":
usage()
if len(sys.argv) <= 1:
usage()
input = TestInput.TestInputParser.get_test_input(sys.argv)
"""
Terminate the installation process instantly if user put in
incorrect build pattern. Correct pattern should be
x.x.x-xxx
x.x.x-xxxx
xx.x.x-xxx
xx.x.x-xxxx
where x is a number from 0 to 9
"""
correct_build_format = False
if "version" in input.test_params:
build_version = input.test_params["version"]
build_pattern = re.compile("\d\d?\.\d\.\d-\d{3,4}$")
if input.test_params["version"][
:5] in COUCHBASE_VERSIONS and \
bool(build_pattern.match(build_version)):
correct_build_format = True
use_direct_url = False
if "url" in input.test_params and input.test_params[
"url"].startswith("http"):
use_direct_url = True
if not correct_build_format and not use_direct_url:
log.info("\n========\n"
" Incorrect build pattern.\n"
" It should be 0.0.0-111 or 0.0.0-1111 format\n"
" Or \n"
" Build version %s does not support yet\n"
" Or \n"
" There is No build %s in build repo\n"
"========"
% (build_version[:5],
build_version.split("-")[
1] if "-" in build_version else "Need "
"build "
"number"))
os.system("ps aux | grep python | grep %d " % os.getpid())
os.system('kill %d' % os.getpid())
if not input.servers:
usage(
"ERROR: no servers specified. Please use the -i parameter.")
except IndexError:
usage()
except getopt.GetoptError, err:
usage("ERROR: " + str(err))
# TODO: This is not broken, but could be something better
# like a validator, to check SSH, input params etc
# check_build(input)
if "parallel" in input.test_params and input.test_params[
'parallel'].lower() != 'false':
# workaround for a python2.6 bug of using strptime with threads
datetime.strptime("30 Nov 00", "%d %b %y")
log.info('Doing parallel install****')
success = InstallerJob().parallel_install(input.servers,
input.test_params)
else:
log.info('Doing serial install****')
success = InstallerJob().sequential_install(input.servers,
input.test_params)
if "product" in input.test_params and input.test_params[
"product"] in ["couchbase", "couchbase-server", "cb"]:
print "verify installation..."
success = True
for server in input.servers:
shell = RemoteMachineShellConnection(server)
success &= shell.is_couchbase_installed()
shell.disconnect()
if not success:
sys.exit(log_install_failed)
if "change_indexer_ports" in input.test_params and \
input.test_params["change_indexer_ports"].lower() == \
'true' \
and input.test_params["product"] in ["couchbase",
"couchbase-server",
"cb"]:
change_couchbase_indexer_ports(input)
log = logging.getLogger()
product = "membase-server(ms),couchbase-single-server(css)," \
"couchbase-server(cs),zynga(z)"
errors = {"UNREACHABLE": "",
"UNINSTALL-FAILED": "unable to uninstall the product",
"INSTALL-FAILED": "unable to install",
"BUILD-NOT-FOUND": "unable to find build",
"INVALID-PARAMS": "invalid params given"}
params = {"ini": "resources/jenkins/fusion.ini",
"product": "ms", "version": "1.7.1r-31", "amazon": "false"}
if __name__ == "__main__":
logging.config.fileConfig("scripts.logging.conf")
log = logging.getLogger()
main()
|
rate_limited.py
|
"""
by oPromessa, 2017
Published on https://gist.github.com/gregburek/1441055#gistcomment-2369461
Inspired by: https://gist.github.com/gregburek/1441055
Helper class and functions to rate limiting function calls with Python Decorators
"""
# ----------------------------------------------------------------------------
# Import section for Python 2 and 3 compatible code
# from __future__ import absolute_import, division, print_function, unicode_literals
from __future__ import division # This way: 3 / 2 == 1.5; 3 // 2 == 1
# ----------------------------------------------------------------------------
# Import section
#
import sys
import logging
import multiprocessing
import time
from functools import wraps
# -----------------------------------------------------------------------------
# class LastTime to be used with rate_limited
#
class LastTime:
"""
>>> import rate_limited as rt
>>> a = rt.LastTime()
>>> a.add_cnt()
>>> a.get_cnt()
1
>>> a.add_cnt()
>>> a.get_cnt()
2
"""
def __init__(self, name='LT'):
# Init variables to None
self.name = name
self.ratelock = None
self.cnt = None
self.last_time_called = None
# Instantiate control variables
self.ratelock = multiprocessing.Lock()
self.cnt = multiprocessing.Value('i', 0)
self.last_time_called = multiprocessing.Value('d', 0.0)
logging.debug('\t__init__: name=[{!s}]'.format(self.name))
def acquire(self):
self.ratelock.acquire()
def release(self):
self.ratelock.release()
def set_last_time_called(self):
self.last_time_called.value = time.time()
# self.debug('set_last_time_called')
def get_last_time_called(self):
return self.last_time_called.value
def add_cnt(self):
self.cnt.value += 1
def get_cnt(self):
return self.cnt.value
def debug(self, debugname='LT'):
now=time.time()
logging.debug('___Rate name:[{!s}] '
'debug=[{!s}] '
'\n\t cnt:[{!s}] '
'\n\tlast_called:{!s} '
'\n\t timenow():{!s} '
.format(self.name,
debugname,
self.cnt.value,
time.strftime(
'%T.{}'
.format(str(self.last_time_called.value -
int(self.last_time_called.value))
.split('.')[1][:3]),
time.localtime(self.last_time_called.value)),
time.strftime(
'%T.{}'
.format(str(now -
int(now))
.split('.')[1][:3]),
time.localtime(now))))
# -----------------------------------------------------------------------------
# rate_limited
#
# limits execution of a function to at most max_per_second calls per second
def rate_limited(max_per_second):
min_interval = 1.0 / max_per_second
LT = LastTime('rate_limited')
def decorate(func):
LT.acquire()
if LT.get_last_time_called() == 0:
LT.set_last_time_called()
LT.debug('DECORATE')
LT.release()
@wraps(func)
def rate_limited_function(*args, **kwargs):
logging.debug('___Rate_limited f():[{!s}]: '
'Max_per_Second:[{!s}]'
.format(func.__name__, max_per_second))
try:
LT.acquire()
LT.add_cnt()
xfrom = time.time()
elapsed = xfrom - LT.get_last_time_called()
left_to_wait = min_interval - elapsed
logging.debug('___Rate f():[{!s}] '
'cnt:[{!s}] '
'\n\tlast_called:{!s} '
'\n\t time now():{!s} '
'elapsed:{:6.2f} '
'min:{!s} '
'to_wait:{:6.2f}'
.format(func.__name__,
LT.get_cnt(),
time.strftime(
'%T',
time.localtime(
LT.get_last_time_called())),
time.strftime('%T',
time.localtime(xfrom)),
elapsed,
min_interval,
left_to_wait))
if left_to_wait > 0:
time.sleep(left_to_wait)
ret = func(*args, **kwargs)
LT.debug('OVER')
LT.set_last_time_called()
LT.debug('NEXT')
except Exception as ex:
# CODING: To be changed once reportError is on a module
sys.stderr.write('+++000 '
'Exception on rate_limited_function: [{!s}]\n'
.format(ex))
sys.stderr.flush()
# reportError(Caught=True,
# CaughtPrefix='+++',
# CaughtCode='000',
# CaughtMsg='Exception on rate_limited_function',
# exceptUse=True,
# # exceptCode=ex.code,
# exceptMsg=ex,
# NicePrint=False,
# exceptSysInfo=True)
raise
finally:
LT.release()
return ret
return rate_limited_function
return decorate
# -----------------------------------------------------------------------------
# Samples
# @rate_limited(5)  # 5 calls per second
# def print_num(num):
#     print(num)
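# A minimal usage sketch, assuming this module is importable as rate_limited;
# 'poll_endpoint' and 'fetch' below are illustrative placeholders, not part of
# this module:
#
# from rate_limited import rate_limited
#
# @rate_limited(2)           # allow at most 2 calls per second
# def poll_endpoint(url):
#     return fetch(url)      # placeholder for the real rate-limited work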
# -----------------------------------------------------------------------------
# If called directly run doctests
#
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG,
format='[%(asctime)s]:[%(processName)-11s]' +
'[%(levelname)-8s]:[%(name)s] %(message)s')
import doctest
doctest.testmod()
# Comment following line to allow further debugging/testing
# sys.exit(0)
# n for n calls per second (ex. 3 means 3 calls per second)
# 1/n for n seconds per call (ex. 0.5 means 2 seconds in between calls)
@rate_limited(1)
def print_num(prc, num):
"""
"""
print('\t\t***prc:[{!s}] num:[{!s}] '
'rate_limit timestamp:[{!s}]'
.format(prc, num, time.strftime('%T')))
print('-------------------------------------------------Single Processing')
for process in range(1, 3):
for j in range(1, 2):
print_num(process, j)
print('-------------------------------------------------Multi Processing')
def fmulti(x, prc):
import random
for i in range(1,x):
r = random.randrange(6)
print('\t\t[prc:{!s}] [{!s}]'
'->- WORKing {!s}s----[{!s}]'
.format(prc, i, r, time.strftime('%T')))
time.sleep(r)
print('\t\t[prc:{!s}] [{!s}]--> Before:---[{!s}]'
.format(prc, i, time.strftime('%T')))
print_num(prc, i)
print('\t\t[prc:{!s}] [{!s}]<-- After----[{!s}]'
.format(prc, i, time.strftime('%T')))
TaskPool = []
for j in range(1,4):
Task = multiprocessing.Process(target=fmulti, args=(5,j))
TaskPool.append(Task)
Task.start()
for j in TaskPool:
print('{!s}.is_alive = {!s}'.format(j.name, j.is_alive()))
while (True):
if not (any(multiprocessing.active_children())):
print('===No active children Processes.')
break
for p in multiprocessing.active_children():
print('==={!s}.is_alive = {!s}'.format(p.name, p.is_alive()))
uploadTaskActive = p
print('===Will wait for 60 on {!s}.is_alive = {!s}'
.format(uploadTaskActive.name,
uploadTaskActive.is_alive()))
uploadTaskActive.join(timeout=60)
print('===Waited for 60s on {!s}.is_alive = {!s}'
.format(uploadTaskActive.name,
uploadTaskActive.is_alive()))
# Wait for join all jobs/tasks in the Process Pool
# All should be done by now!
for j in TaskPool:
j.join()
print('==={!s} (is alive: {!s}).exitcode = {!s}'
.format(j.name, j.is_alive(), j.exitcode))
|
screen.py
|
import ui
import numpy
import matplotlib.image as im
import io
import Image
import threading
class Screen(ui.View):
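"""
Minimal RGB framebuffer view for the (Pythonista-style) ui module: pixel data
is kept in a numpy array, re-encoded as a JPEG into an in-memory buffer via
matplotlib's imsave on each update, and drawn from that buffer.
"""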
def __init__(self,w,h):
self.width=w
self.height=h
self.S=numpy.zeros((w,h,3))
self.B=io.BytesIO()
# ui.Image.from_data(self.B.getvalue())  # dead call on an empty buffer; kept for reference
im.imsave(self.B,self.S,format='jpg')
self.B.seek(0)
self.set_needs_display()
self.updates_pending=0
self.skip=0
def set_data(self,S):
self.S=S
#ui.Image.from_data(self.B.getvalue())
im.imsave(self.B,self.S.transpose([1,0,2])/255.,format='jpg')
self.B.seek(0)
self.set_needs_display()
def blit(self,data,corner):
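# Copy a sub-array into the framebuffer at the given corner and, if no
# re-encode is already pending, schedule a background JPEG update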
#print(data[1,1,2])
try:
self.S[corner[0]:(corner[0]+data.shape[0]), corner[1]:(corner[1]+data.shape[1]), :]=data
if not self.updates_pending:
self.updates_pending+=1
self.update()
except:
pass
@ui.in_background
def update(self):
self.updates_pending = 0
im.imsave(self.B,self.S.transpose([1,0,2])/255.,format='jpg')
self.B.seek(0)
self.set_needs_display()
def draw(self):
if not self.on_screen:
raise KeyboardInterrupt()
ui.Image.from_data(self.B.getvalue()).draw()
if __name__=='__main__':
import time
w=300
h=w
s=Screen(w,h)
d=numpy.zeros((w*h,3))
d[:,0]=128+128*numpy.sin(w*7*numpy.linspace(0,6.28,w*h))
d[:,1]=128+128*numpy.sin(3/4+w*4.1*numpy.linspace(0,6.28,w*h))
d[:,2]=128+128*numpy.sin(3+1.7*numpy.linspace(0,6.28,w*h))
#d=d.reshape(3,w*h).transpose()
s.set_data(d.reshape(w,h,3))
s.present('sheet')
time.sleep(1)
pixel=255*numpy.ones((5,5,3))
def runloop():
for r in range(0,h,5):
for c in range(0,w,5):
s.blit(pixel,[c,r])
#time.sleep(0.1)
time.sleep(2)
s.set_data( numpy.zeros((w,h,3)))
#s.update()
import threading
threading.Thread(target=runloop).start()
|
chatter.py
|
import re
import os
import sys
import copy
import json
import time
import emoji
import pyDes
import queue
import random
import signal
import socket
import threading
import netifaces
from datetime import datetime
from termcolor import colored
ip_dict = {}
chat_dict = {}
msg_formats = {}
new_msg_dict = {}
profile_dict = {}
imposter_dict = {}
recipient_key_dict = {}
msg_queue = queue.Queue()
profile_struct = {"tag": "", "name": "", "ip": "", "isActiveStatus": False}
imposter_struct = {"NAME": "", "CL_IP": "", "TYPE": "", "PAYLOAD": "", "A_IP": "", "A_PORT": ""}
recipient_key_struct = {"ground": 0, "power": 0, "secret": 0, "evolving_key": 0}
prime_number_list = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
isSelfOnline = True
isExit = False
isNotified = True
isLocal = True
isDeception = False
listenPid = 5
PORT = 12345
def entry():
print("Welcome to SecureChat-In-Python!")
print("It is a local area chatting application on the commandline.")
print("P.S. : Environment set 'local' by default.")
print("For further help, type 'help' and Enter.")
print("To exit, type 'exit' and Enter.")
def helpy():
print("- This application is developed for LAN chatting.")
print("- When you become online, a discovery signal will be sent")
print("for all possible LAN IP addresses to receive a response")
print("- After receiving a response, the user that has responded")
print("to you will become online for you to start chatting.")
print("- Users inside the program are stored with their <user-tag>")
print("that is a composition of their names and their last part")
print("of their LAN IP address.")
print("e.g. {} ozgurcan-25: ozgurcan@192.168.1.25".format(emoji.emojize(":bust_in_silhouette:")))
print("-" * 56)
print(" - GENERAL COMMAND LIST - ")
print("help:\t\t\t\t Opens general command.")
print("switch <env>:\t\t\t Changes into any environment.")
print("\t\t\t\t Either to 'hamachi' or 'local'.")
print("discover:\t\t\t Discovers the clients in the network.")
print("profile:\t\t\t Shows your profile.")
print("refresh:\t\t\t Refreshes and syncs with the current state.")
print("whoall:\t\t\t\t Shows all users' profiles.")
print("impall:\t\t\t\t Shows all imposters' profiles.")
print("imptags:\t\t\t Shows all imposters' tags.")
print("whoshow <user_tag>:\t\t Shows the profile of user with")
print("\t\t\t\t tag <user_tag>.")
print("impshow <imp_tag>:\t\t Shows the profile of imposter with")
print("\t\t\t\t tag <imp_tag>.")
print("whoisonline:\t\t\t Shows all available users in LAN.")
print("sendmsg <user_tag> <your_msg>:\t Sends message to the user with")
print("\t\t\t\t tag <user_tag>")
print("respond <imp_tag>:\t\t Sends respond to a known impostor as <imp_tag>")
print("validate <imp_tag>:\t\t Validate a known impostor known as <imp_tag>")
print("\t\t\t\t to a user known as <user_tag>")
print("chathist <user_tag>:\t\t Opens chat history with <user_tag>")
print("dumphist <user_tag>:\t\t Dumps chat history in a .txt file")
def ip_extractor():
ip_list = []
for if_name in netifaces.interfaces():
for ip in netifaces.ifaddresses(if_name).get(netifaces.AF_INET, ()):
ip_list.append(ip["addr"])
return ip_list
def ip_regex(ip_list):
global ip_dict
local_pat = re.compile("^192.168.[0-9]{1,3}.[0-9]{1,3}$")
hamachi_pat = re.compile("^25.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$")
for ip in ip_list:
hamachi_match = hamachi_pat.match(ip)
local_match = local_pat.match(ip)
if hamachi_match is not None:
ip_dict["hamachi"] = hamachi_match.group()
if local_match is not None:
ip_dict["local"] = local_match.group()
def switch(env):
global isLocal
if env == "local":
isLocal = True
env_params("local")
elif env == "hamachi":
isLocal = False
env_params("hamachi")
else:
print("Invalid environment. Set to 'local' by default.")
isLocal = True
env_params("local")
def hamachi_ip_extract():
hamachi_ip_file = open("./hamachi_ip_list.txt", "r")
hamachi_ipl = [hip.strip() for hip in hamachi_ip_file.readlines()]
hamachi_ip_file.close()
return hamachi_ipl
def discover():
if isLocal:
local_discover()
else:
hamachi_discover()
def env_params(env):
global profile_dict, ip_dict
ip_regex(ip_extractor())
user_name = os.getenv("USER") or os.getenv("USERNAME")
user_ip = ip_dict[env]
if env == "hamachi":
user_identifier = ".".join(user_ip.split(".")[1:])
elif env == "local":
user_identifier = user_ip.split(".")[-1]
else:
raise ValueError("There's no suitable IP address in this platform. Aborted.")
user_tag = str(user_name).upper() + "-" + str(user_identifier)
profile_dict["self"] = {"tag": user_tag, "isActiveStatus": False, "name": user_name, "ip": user_ip}
msg_formatting("self")
def msg_formatting(user_tag):
global msg_formats
profile = profile_dict[user_tag]
init = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "INIT", "PAYLOAD": ""}
exchange = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "EXCHANGE", "PAYLOAD": ""}
discovery = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "DISCOVER", "PAYLOAD": ""}
response = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "RESPOND", "PAYLOAD": ""}
message = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "MESSAGE", "PAYLOAD": ""}
goodbye = {"NAME": profile["name"], "MY_IP": profile["ip"], "TYPE": "GOODBYE", "PAYLOAD": ""}
msg_formats["init"] = init
msg_formats["exchange"] = exchange
msg_formats["discovery"] = discovery
msg_formats["response"] = response
msg_formats["message"] = message
msg_formats["goodbye"] = goodbye
def generate_public_value(recipient_tag):
current_primes = set(prime_number_list)
ground = random.choice(list(current_primes))
current_primes.discard(ground)
power = random.choice(list(current_primes))
recipient_key_dict[recipient_tag] = copy.deepcopy(recipient_key_struct)
recipient_key_dict[recipient_tag]["ground"] = ground
recipient_key_dict[recipient_tag]["power"] = power
return ground, power
def generate_private_value(recipient_tag):
current_primes = set(prime_number_list)
ground = recipient_key_dict[recipient_tag]["ground"]
power = recipient_key_dict[recipient_tag]["power"]
current_primes.discard(ground)
current_primes.discard(power)
secret = random.choice(list(current_primes))
recipient_key_dict[recipient_tag]["secret"] = secret
return secret
def cmd_input_analysis(cmd_input):
input_tokens = str(cmd_input).split()
if len(input_tokens) == 0:
return
input_tokens[0] = input_tokens[0].lower()
if input_tokens[0] == "profile":
profile_disp()
elif input_tokens[0] == "refresh":
print("{} Checking for new messages."
.format(emoji.emojize(":tropical_drink:")))
elif input_tokens[0] == "help":
helpy()
elif input_tokens[0] == "switch":
switch(input_tokens[1].lower())
elif input_tokens[0] == "discover":
discover()
elif input_tokens[0] == "whoisonline":
who_is_online()
elif input_tokens[0] == "whoall":
who_all()
elif input_tokens[0] == "impall":
imposter_display_all()
elif input_tokens[0] == "imptags":
list_imposter_tags()
elif input_tokens[0] == "impshow":
imposter_display(input_tokens[1].lower())
elif input_tokens[0] == "whoshow":
show_profile(input_tokens[1].lower())
elif input_tokens[0] == "respond":
response_at_will(input_tokens[1].lower())
elif input_tokens[0] == "validate":
validate_imposter(input_tokens[1].lower())
elif input_tokens[0] == "sendmsg":
send_message(" ".join(input_tokens[1:]))
elif input_tokens[0] == "chathist":
chat_history(input_tokens[1].lower())
elif input_tokens[0] == "dumphist":
dump_history(input_tokens[1].lower())
else:
print("Invalid input. Try again!")
def tag_generator(name, ip):
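    # Derive a peer tag from its name and IP: "<name>-<last octet>" for LAN
    # addresses, "<name>-<last three octets>" for Hamachi addresses.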
    hamachi_pat = re.compile(r"^25\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$")
    local_pat = re.compile(r"^192\.168\.[0-9]{1,3}\.[0-9]{1,3}$")
if local_pat.match(ip) is not None:
ip_identifier = str(ip).split(".")[-1]
return "-".join([str(name).lower(), ip_identifier])
else:
if hamachi_pat.match(ip) is not None:
ip_identifier = ".".join(str(ip).split(".")[1:])
return "-".join([str(name).lower(), ip_identifier])
def profile_disp():
if isSelfOnline:
print(emoji.emojize("Status Information: Online :green_apple:"))
print("Number of Active Chats: {}".format(count_active_chat()))
else:
print(emoji.emojize("Status Information: Offline :red_circle:"))
print(emoji.emojize(" - PROFILE INFO :eyes: -"))
print("{} User Tag:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), str(profile_dict["self"]["tag"]).upper()))
print("{} User Name:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), profile_dict["self"]["name"]))
print("{} User IP:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), profile_dict["self"]["ip"]))
def show_profile(user_tag):
print(emoji.emojize(" - PROFILE INFO :eyes: -"))
print("{} User Tag:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), str(profile_dict[user_tag]["tag"]).upper()))
print("{} User Name:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), profile_dict[user_tag]["name"]))
print("{} User IP:\t\t\t {}".format(emoji.emojize(":thumbs_up:"), profile_dict[user_tag]["ip"]))
def who_all():
for tag in profile_dict.keys():
if tag != "self":
p = profile_dict[tag]
if p["isActiveStatus"]:
status = "Online"
else:
status = "Offline"
print("{} User Tag:{}\t User Name:{}\t User IP:{}\t Status:{}"
.format(emoji.emojize(":eyes:"), p["tag"], p["name"], p["ip"], status))
def count_active_chat():
active_tags = [tag for tag, chat in chat_dict.items() if len(chat) != 0 and profile_dict[tag]["isActiveStatus"]]
return len(active_tags)
def who_is_online():
active_users = [user["tag"] for tag, user in profile_dict.items() if tag != "self" and user["isActiveStatus"]]
for user in active_users:
print(emoji.emojize(":bust_in_silhouette: {}".format(user)))
def send_message(msg_info):
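    # Expected input: "<recipient_tag> <message text>". The payload is
    # 3DES-encrypted with a key derived from the shared "evolving_key", and
    # both sides advance that key by the message length after every message.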
global chat_dict, recipient_key_dict
msg_info_tokens = msg_info.split()
recipient_tag = msg_info_tokens[0].lower()
if recipient_tag not in profile_dict.keys():
print("The recipient is not recognized. Check the user tag.")
return
recipient_profile = profile_dict[recipient_tag]
msg = " ".join(msg_info_tokens[1:])
if len(msg) > 140:
print("Message can be composed with at most 140 characters.")
return
msg_obj = copy.deepcopy(msg_formats["message"])
if recipient_key_dict[recipient_tag]["evolving_key"] != 0:
cipher = recipient_key_dict[recipient_tag]["evolving_key"]
encrypted_msg = pyDes.triple_des(str(cipher).ljust(24)).encrypt(msg, padmode=2)
msg_obj["PAYLOAD"] = list(encrypted_msg)
msg_obj_srl = json.dumps(msg_obj)
isMessageSent = tcp_socket_send(msg_obj_srl, recipient_profile["ip"])
if not isMessageSent:
print("{} User with tag {} is not online right now. Try again!"
.format(emoji.emojize(":no_entry:"), recipient_tag))
return
else:
print("{} Your message is sent!".format(emoji.emojize(":fire:")))
cipher = cipher + len(msg)
recipient_key_dict[recipient_tag]["evolving_key"] = cipher
chat_dict[recipient_tag].append("+ " + str(msg))
else:
print(f"Starting end-to-end encryption with {recipient_tag} ...")
end_to_end_encrypt(recipient_tag)
print(f"Completed end-to-end encryption with {recipient_tag} .")
cipher = recipient_key_dict[recipient_tag]["evolving_key"]
encrypted_msg = pyDes.triple_des(str(cipher).ljust(24)).encrypt(msg, padmode=2)
msg_obj["PAYLOAD"] = list(encrypted_msg)
msg_obj_srl = json.dumps(msg_obj)
isMessageSent = tcp_socket_send(msg_obj_srl, recipient_profile["ip"])
if not isMessageSent:
print("{} User with tag {} is not online right now. Try again!"
.format(emoji.emojize(":no_entry:"), recipient_tag))
return
else:
print("{} Your message is sent!".format(emoji.emojize(":fire:")))
cipher = cipher + len(msg)
recipient_key_dict[recipient_tag]["evolving_key"] = cipher
chat_dict[recipient_tag].append("+ " + str(msg))
def end_to_end_encrypt(recipient_tag):
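    # Send our half of the key exchange (ground ** secret % power) to the
    # recipient. Their EXCHANGE reply is handled by exchange_actions, which
    # completes the shared "evolving_key" on this side. On a failed send the
    # previous key state is restored.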
global profile_dict
recipient_profile = profile_dict[recipient_tag]
secret_obj = recipient_key_dict[recipient_tag]
ground = secret_obj["ground"]
power = secret_obj["power"]
secret = generate_private_value(recipient_tag)
first_part_of_cipher = (int(ground) ** int(secret)) % int(power)
exchange_obj = copy.deepcopy(msg_formats["exchange"])
exchange_obj["PAYLOAD"] = first_part_of_cipher
exch_obj_srl = json.dumps(exchange_obj)
isMessage = tcp_socket_send(exch_obj_srl, recipient_profile["ip"])
if not isMessage:
recipient_key_dict[recipient_tag] = secret_obj
def local_discover():
disc_obj = copy.deepcopy(msg_formats["discovery"])
disc_obj_srl = json.dumps(disc_obj)
for i in range(0, 3):
udp_socket_broadcast(disc_obj_srl)
print("{} Local client discovery completed!".format(emoji.emojize(":globe_with_meridians:")))
def hamachi_discover():
hamachi_obj = copy.deepcopy(msg_formats["discovery"])
hamachi_obj_srl = json.dumps(hamachi_obj)
for i in range(0, 3):
udp_socket_broadcast(hamachi_obj_srl)
print("{} Hamachi client discovery completed!".format(emoji.emojize(":mushroom:")))
def extract_profile(msg):
tag = tag_generator(msg["NAME"], msg["MY_IP"])
new_profile = copy.deepcopy(profile_struct)
new_profile["tag"] = tag
new_profile["name"] = str(msg["NAME"]).lower()
new_profile["ip"] = msg["MY_IP"]
new_profile["isActiveStatus"] = True
return new_profile, tag
def discovery_actions(msg):
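    # Handle an incoming DISCOVER: reply with RESPOND, then send an INIT
    # carrying fresh public key-exchange parameters ("ground|power"), and
    # register (or re-activate) the sender's profile locally.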
global profile_dict, chat_dict, recipient_key_dict
new_profile, tag = extract_profile(msg)
rsp_obj = msg_formats["response"]
rsp_obj_srl = json.dumps(rsp_obj)
if tag not in profile_dict.keys():
ground, power = generate_public_value(tag)
gnd_pow_obj = copy.deepcopy(msg_formats["init"])
gnd_pow_obj["PAYLOAD"] = str(ground) + "|" + str(power)
gnd_pow_srl = json.dumps(gnd_pow_obj)
# Send response
isRespond = tcp_socket_send(rsp_obj_srl, msg["MY_IP"])
if not isRespond:
return
# Send ground and power
isGndPow = tcp_socket_send(gnd_pow_srl, msg["MY_IP"])
if not isGndPow:
return
profile_dict[tag] = new_profile
chat_dict[tag] = []
recipient_secrets = copy.deepcopy(recipient_key_struct)
recipient_secrets["ground"] = ground
recipient_secrets["power"] = power
recipient_key_dict[tag] = recipient_secrets
else:
if not profile_dict[tag]["isActiveStatus"]:
ground, power = generate_public_value(tag)
gnd_pow_obj = copy.deepcopy(msg_formats["init"])
gnd_pow_obj["PAYLOAD"] = str(ground) + "|" + str(power)
gnd_pow_srl = json.dumps(gnd_pow_obj)
# Send response
isRespond = tcp_socket_send(rsp_obj_srl, msg["MY_IP"])
if not isRespond:
return
# Send ground and power
isGndPow = tcp_socket_send(gnd_pow_srl, msg["MY_IP"])
if not isGndPow:
return
profile_dict[tag]["isActiveStatus"] = True
recipient_secrets = copy.deepcopy(recipient_key_struct)
recipient_secrets["ground"] = ground
recipient_secrets["power"] = power
recipient_key_dict[tag] = recipient_secrets
def response_actions(msg):
new_profile, tag = extract_profile(msg)
if tag not in profile_dict.keys():
profile_dict[tag] = new_profile
profile_dict[tag]["isActiveStatus"] = True
chat_dict[tag] = []
else:
profile_dict[tag]["isActiveStatus"] = True
def response_at_will(imp_tag):
global imposter_dict
imp_actual_ip = imposter_dict[imp_tag]["A_IP"]
rsp_obj = msg_formats["response"]
rsp_obj_srl = json.dumps(rsp_obj)
isResponded = tcp_socket_send(rsp_obj_srl, imp_actual_ip)
if not isResponded:
print("{} User with ip {} is not online right now. Try again!"
.format(emoji.emojize(":no_entry:"), imp_actual_ip))
return
def message_actions(msg):
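    # Decrypt an incoming MESSAGE with the current evolving key, advance the
    # key by the plaintext length (mirroring the sender), and record the text
    # in the chat history and the unread-message counter.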
global chat_dict, new_msg_dict, recipient_key_dict
tag = tag_generator(msg["NAME"], msg["MY_IP"])
cipher = recipient_key_dict[tag]["evolving_key"]
    decrypted_msg = pyDes.triple_des(str(cipher).ljust(24)).decrypt(msg["PAYLOAD"], padmode=2)
    cipher = cipher + len(decrypted_msg)
    recipient_key_dict[tag]["evolving_key"] = cipher
    message = "- " + decrypted_msg.decode("utf-8")
if tag not in chat_dict.keys():
chat_dict[tag] = []
chat_dict[tag].append(message)
if tag not in new_msg_dict.keys():
new_msg_dict[tag] = 0
new_msg_dict[tag] += 1
def goodbye_actions(msg):
global profile_dict
tag = tag_generator(msg["NAME"], msg["MY_IP"])
del recipient_key_dict[tag]
if tag in profile_dict.keys():
if profile_dict[tag]["isActiveStatus"]:
profile_dict[tag]["isActiveStatus"] = False
def init_actions(msg):
global recipient_key_dict, recipient_key_struct
tag = tag_generator(msg["NAME"], msg["MY_IP"])
tokens = str(msg["PAYLOAD"]).split("|")
ground = tokens[0]
power = tokens[1]
conv_secrets = copy.deepcopy(recipient_key_struct)
conv_secrets["ground"] = ground
conv_secrets["power"] = power
recipient_key_dict[tag] = conv_secrets
def exchange_actions(msg):
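    # Complete the key exchange: combine the peer's public value with our
    # secret to obtain the shared "evolving_key", then send back our own
    # public value so the peer can derive the same key.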
global recipient_key_dict
tag = tag_generator(msg["NAME"], msg["MY_IP"])
if recipient_key_dict[tag]["evolving_key"] == 0:
secret_obj = recipient_key_dict[tag]
ground = secret_obj["ground"]
power = secret_obj["power"]
if secret_obj["secret"] == 0:
secret = generate_private_value(tag)
else:
secret = secret_obj["secret"]
incoming_cipher_part = int(msg["PAYLOAD"])
final_cipher = (int(incoming_cipher_part) ** int(secret)) % int(power)
recipient_key_dict[tag]["evolving_key"] = final_cipher
recipient_key_dict[tag]["secret"] = secret
exchange_obj = copy.deepcopy(msg_formats["exchange"])
outgoing_cipher = (int(ground) ** int(secret)) % int(power)
exchange_obj["PAYLOAD"] = outgoing_cipher
exc_obj_srl = json.dumps(exchange_obj)
isOutCipher = tcp_socket_send(exc_obj_srl, msg["MY_IP"])
if not isOutCipher:
recipient_key_dict[tag] = secret_obj
return
def broadcast_goodbye():
goodbye_msg = copy.deepcopy(msg_formats["goodbye"])
goodbye_msg_srl = json.dumps(goodbye_msg)
for i in range(0, 3):
udp_socket_broadcast(goodbye_msg_srl)
def validate_imposter(imp_tag):
global imposter_dict, profile_dict, chat_dict
if imp_tag in imposter_dict.keys():
imp_obj = imposter_dict[imp_tag]
valid_imp_tag = tag_generator(imp_obj["NAME"], imp_obj["A_IP"])
if valid_imp_tag not in profile_dict.keys():
valid_profile = copy.deepcopy(profile_struct)
valid_profile["tag"] = valid_imp_tag
valid_profile["name"] = imp_obj["NAME"]
valid_profile["ip"] = imp_obj["A_IP"]
valid_profile["isActiveStatus"] = True
profile_dict[valid_imp_tag] = valid_profile
chat_dict[valid_imp_tag] = []
payload_msgs = ["- {}".format(msg["PAYLOAD"]) for msg in imp_obj["MSG"] if msg["TYPE"] == "MESSAGE"]
if len(payload_msgs) > 0:
chat_dict[valid_imp_tag] += payload_msgs
del imposter_dict[imp_tag]
else:
print("Invalid imposter tag. Try again!")
def new_msg_count():
global isNotified
if not isNotified:
print("{} You have new messages!".format(emoji.emojize(":bell:")))
for tag, nmc in new_msg_dict.items():
print("{} {} message(s) from {}".format(emoji.emojize(":paperclip:"), nmc, profile_dict[tag]["name"]))
def chat_history(user_tag):
    global chat_dict, isNotified, new_msg_dict
    isNotified = True
    if user_tag in chat_dict.keys():
        # Reset the unread counter only for known tags so that new_msg_count()
        # never looks up a tag without a matching profile.
        new_msg_dict[user_tag] = 0
        history = chat_dict[user_tag]
        for msg_line in history:
            print(msg_line)
    else:
        print("Invalid <user_tag>. Try again!")
def dump_history(user_tag):
    global chat_dict
    history = chat_dict[user_tag]
    # Use '-' instead of ':' in the timestamp so the file name is also valid on Windows.
    da_ti = datetime.utcnow().strftime("%Y-%m-%d_%H-%M-%S")
    f_name = user_tag + "-" + da_ti + ".txt"
    with open(f_name, "w") as dump:
        for msg in history:
            dump.write(msg)
            dump.write("\n")
def tcp_listener_func():
global isExit
while True:
if isExit:
break
tcp_socket_listen()
def udp_listener_func():
global isExit
while True:
if isExit:
break
udp_socket_listen()
def env_check(ip):
    hamachi_pat = re.compile(r"^25\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$")
    local_pat = re.compile(r"^192\.168\.[0-9]{1,3}\.[0-9]{1,3}$")
if local_pat.match(ip) is not None:
return True
else:
if hamachi_pat.match(ip) is not None:
return True
else:
return False
def message_parse():
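    # Pop one raw packet from msg_queue, decode and JSON-parse it, ignore our
    # own broadcasts and packets whose addresses match neither the LAN nor the
    # Hamachi range, log senders whose claimed IP differs from the socket's
    # source address as imposters, and otherwise dispatch on the message TYPE.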
global isNotified, imposter_dict, isDeception
try:
(raw_message, conn_info) = msg_queue.get()
dcd_msg = raw_message.decode("utf-8")
if dcd_msg != "\n" and dcd_msg != "\r":
message = dcd_msg.strip("\n")
else:
return
msg = json.loads(message)
if self_check(msg["MY_IP"]):
return
        if conn_info is not None:
if not env_check(conn_info[0]) and not env_check(msg["MY_IP"]):
return
if not imposter_check(msg, conn_info):
if msg["TYPE"] == "DISCOVER":
discovery_actions(msg)
elif msg["TYPE"] == "INIT":
init_actions(msg)
elif msg["TYPE"] == "EXCHANGE":
exchange_actions(msg)
elif msg["TYPE"] == "RESPOND":
response_actions(msg)
elif msg["TYPE"] == "MESSAGE":
isNotified = False
message_actions(msg)
elif msg["TYPE"] == "GOODBYE":
goodbye_actions(msg)
else:
return
else:
isDeception = True
imposter_tag = "imp-" + tag_generator(msg["NAME"], msg["MY_IP"])
if imposter_tag not in imposter_dict.keys():
imposter_dict[imposter_tag] = {}
impostor_msg_obj = imposter_prep_info(msg, conn_info)
imposter_dict[imposter_tag] = impostor_msg_obj
else:
imp_msg = {
"TYPE": msg["TYPE"],
"PAYLOAD": msg["PAYLOAD"]
}
imposter_dict[imposter_tag]["MSG"].append(imp_msg)
    except Exception:
        # Malformed, undecodable, or otherwise unexpected packets are silently dropped.
        return
def imposter_check(message, conn):
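    # In local mode a sender is treated as an imposter when the IP claimed in
    # its payload ("MY_IP") differs from the actual source address of the
    # connection; in Hamachi mode this check is skipped.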
    if conn is not None:
if isLocal:
return message["MY_IP"] != conn[0]
else:
return False
def self_check(msg_ip):
global ip_dict
if isLocal:
return str(msg_ip) == str(ip_dict["local"])
else:
return str(msg_ip) == str(ip_dict["hamachi"])
def imposter_prep_info(msg, conn_info):
imp_obj = {"CL_IP": msg["MY_IP"], "NAME": msg["NAME"],
"A_IP": conn_info[0], "A_PORT": conn_info[1],
"MSG":
[
{"TYPE": msg["TYPE"],
"PAYLOAD": msg["PAYLOAD"]
}
]
}
return imp_obj
def imposter_display_info(imposter_msg_obj):
print(" Claimed Name: {}".format(imposter_msg_obj["NAME"]))
print(" Claimed IP: {}".format(colored(imposter_msg_obj["CL_IP"], 'red')))
print(" Actual IP: {}".format(colored(imposter_msg_obj["A_IP"], 'cyan')))
print(" Actual Port: {}".format(imposter_msg_obj["A_PORT"]))
print(" Actions:")
hail_msg = [msg for msg in imposter_msg_obj["MSG"] if msg["TYPE"] == "DISCOVER"]
payloads = [msg for msg in imposter_msg_obj["MSG"] if msg["TYPE"] == "MESSAGE"]
if len(hail_msg) == 1:
print(" {} You've been hailed for once.".format(emoji.emojize(":e-mail:")))
if len(hail_msg) > 1:
print(" {} You've been hailed for {} times.".format(emoji.emojize(":e-mail:"), len(hail_msg)))
if len(payloads) > 0:
for msg in payloads:
print(" Message: {}".format(msg["PAYLOAD"]))
def imposter_display_all():
global imposter_dict
for imp_tag, imp_log in imposter_dict.items():
print("{} Possible suspect tagged as {}:".format(emoji.emojize(":briefcase:"), colored(imp_tag, 'yellow')))
imposter_display_info(imp_log)
def imposter_display(imp_tag):
global imposter_dict
print("{} Possible suspect tagged as {}:".format(emoji.emojize(":briefcase:"), colored(imp_tag, 'yellow')))
imp_log = imposter_dict[imp_tag]
imposter_display_info(imp_log)
def list_imposter_tags():
global imposter_dict
print("{} Current Imposters:".format(emoji.emojize(":briefcase:")))
for imp_tag in imposter_dict.keys():
print("{} {}".format(emoji.emojize(":ghost:"), colored(imp_tag, 'yellow')))
def export_all():
    global profile_dict, chat_dict
    f_name = "all" + ".txt"
    with open(f_name, "w") as all_export:
        for tag, profile in profile_dict.items():
            if tag != "self":
                profile["chat_hist"] = chat_dict[tag]
                profile["isActiveStatus"] = False
                profile_srl = json.dumps(profile)
                all_export.write(profile_srl + "\n")
def import_all():
global profile_dict, chat_dict
try:
all_import = open("./all.txt", "r")
info_lines = all_import.readlines()
all_import.close()
for info in info_lines:
info_dsrl = json.loads(info)
info_tag = info_dsrl["tag"]
chat_dict[info_tag] = info_dsrl["chat_hist"]
del info_dsrl["chat_hist"]
profile_dict[info_tag] = info_dsrl
except FileNotFoundError:
print("No previous chat history has been found.")
def messenger_func():
while True:
if isExit:
break
message_parse()
def tcp_socket_listen():
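    # Accept a single TCP connection on port 12345 and push every received
    # chunk onto msg_queue; the surrounding listener loop calls this again
    # for the next connection.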
if isLocal:
host_ip = ip_dict["local"]
else:
host_ip = ip_dict["hamachi"]
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((host_ip, 12345))
sock.listen()
in_conn, addr = sock.accept()
with in_conn:
while True:
in_data = in_conn.recv(1024)
if in_data:
msg_queue.put_nowait((in_data, addr))
else:
break
def tcp_socket_send(msg, recp_ip):
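    # Send one newline-terminated message to the recipient on port 12345 using
    # a 1-second connect timeout, then pause briefly before returning True.
    # Returns False when the connection times out.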
try:
raw_msg = str(msg) + "\n"
enc_msg = raw_msg.encode("utf-8")
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.settimeout(1)
sock.connect((recp_ip, 12345))
sock.sendall(enc_msg)
sock.close()
time.sleep(1)
return True
except socket.timeout:
return False
def udp_socket_listen():
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', 12345))
try:
msg, ip_addr = sock.recvfrom(1000)
if msg:
msg_queue.put_nowait((msg, ip_addr))
except BlockingIOError:
pass
def udp_socket_broadcast(msg):
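    # Broadcast the message on UDP port 12345: to the local /24 broadcast
    # address (x.y.z.255) in local mode, or to 25.255.255.255 for Hamachi.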
if isLocal:
host_ip = ip_dict["local"]
broadcast_ip = host_ip.split(".")[:3]
broadcast_ip.append("255")
broadcast_ip = ".".join(broadcast_ip)
else:
broadcast_ip = "25.255.255.255"
raw_msg = str(msg) + "\n"
enc_msg = raw_msg.encode("utf-8")
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.setblocking(False)
sock.bind(('', 0))
sock.sendto(enc_msg, (broadcast_ip, 12345))
def signal_handler(sgn, frame):
terminate()
sys.exit(0)
def terminate():
broadcast_goodbye()
export_all()
for i in range(0, 2):
print("\r{} | Prepare to shutdown gracefully...".format(emoji.emojize(":floppy_disk:")), end=" ")
time.sleep(.3125)
print("\r{} / Prepare to shutdown gracefully...".format(emoji.emojize(":floppy_disk:")), end=" ")
time.sleep(.3125)
print("\r{} - Prepare to shutdown gracefully...".format(emoji.emojize(":floppy_disk:")), end=" ")
time.sleep(.3125)
print("\r{} \\ Prepare to shutdown gracefully...".format(emoji.emojize(":floppy_disk:")), end=" ")
time.sleep(.3125)
print("\r\n", end=" ")
print("\r{} You are exiting in 5 seconds...".format(emoji.emojize(":koala:")), end=" ")
time.sleep(1)
print("\r{} You are exiting in 4 seconds...".format(emoji.emojize(":penguin:")), end=" ")
time.sleep(1)
print("\r{} You are exiting in 3 seconds...".format(emoji.emojize(":boar:")), end=" ")
time.sleep(1)
print("\r{} You are exiting in 2 seconds...".format(emoji.emojize(":camel:")), end=" ")
time.sleep(1)
print("\r{} You are exiting in 1 seconds...".format(emoji.emojize(":bird:")), end=" ")
time.sleep(1)
print("\r\n", end=" ")
print("\r{} Till next time!".format(emoji.emojize(":monkey:")), end=" ")
if __name__ == "__main__":
entry()
hamachi_ip_list = hamachi_ip_extract()
tcp_listen_func = threading.Thread(target=tcp_listener_func)
tcp_listen_func.daemon = True
tcp_message_func = threading.Thread(target=messenger_func)
tcp_message_func.daemon = True
udp_listen_func = threading.Thread(target=udp_listener_func)
udp_listen_func.daemon = True
print("Select your environment to work: type 'local' or 'hamachi'.")
while True:
usr_env_input = input(">>> ")
if usr_env_input.strip().lower() == "local":
switch("local")
break
elif usr_env_input.strip().lower() == "hamachi":
switch("hamachi")
break
else:
print("{} Enter a valid environment.".format(emoji.emojize(":no_entry:")))
import_all()
tcp_listen_func.start()
tcp_message_func.start()
udp_listen_func.start()
signal.signal(signal.SIGINT, signal_handler)
while True:
usr_input = input(">>> ")
if usr_input == "exit":
isExit = True
break
else:
cmd_input_analysis(usr_input)
if isDeception:
print("{} A packet is received from an imposter.".format(emoji.emojize(":collision:")))
imposter_display_all()
isDeception = False
new_msg_count()
tcp_message_func.join(.4)
tcp_listen_func.join(.4)
udp_listen_func.join(.4)
terminate()
|
test_fakeredis.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from time import sleep, time
from redis.exceptions import ResponseError
import inspect
from functools import wraps
import os
import sys
import threading
from nose.plugins.skip import SkipTest
from nose.plugins.attrib import attr
import redis
import redis.client
import fakeredis
from datetime import datetime, timedelta
try:
    # Python 2.6, 2.7
    from Queue import Queue
except ImportError:
    # Python 3
    from queue import Queue
PY2 = sys.version_info[0] == 2
if not PY2:
long = int
if sys.version_info[:2] == (2, 6):
import unittest2 as unittest
else:
import unittest
# Try importlib.reload (Python 3.4+), then imp.reload, then fall back to the
# builtin reload from Python 2.
try:
    from importlib import reload
except ImportError:
    try:
        from imp import reload
    except ImportError:
        pass
DEFAULT_ENCODING = fakeredis.DEFAULT_ENCODING
def redis_must_be_running(cls):
    # This can probably be improved: it determines at import time whether
    # the tests should be run, but we probably want that decision to happen
    # when the tests are actually run.
try:
r = redis.StrictRedis('localhost', port=6379)
r.ping()
except redis.ConnectionError:
redis_running = False
else:
redis_running = True
if not redis_running:
for name, attribute in inspect.getmembers(cls):
if name.startswith('test_'):
@wraps(attribute)
def skip_test(*args, **kwargs):
raise SkipTest("Redis is not running.")
setattr(cls, name, skip_test)
cls.setUp = lambda x: None
cls.tearDown = lambda x: None
return cls
def key_val_dict(size=100):
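    # Build `size` distinct bytes key/value pairs; used by the scan tests below.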
return dict([(b'key:' + bytes([i]), b'val:' + bytes([i]))
for i in range(size)])
class TestFakeStrictRedis(unittest.TestCase):
decode_responses = False
def setUp(self):
self.redis = self.create_redis()
def tearDown(self):
self.redis.flushall()
del self.redis
if sys.version_info >= (3,):
def assertItemsEqual(self, a, b):
return self.assertCountEqual(a, b)
def create_redis(self, db=0):
return fakeredis.FakeStrictRedis(db=db)
def test_flushdb(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.keys(), [b'foo'])
self.assertEqual(self.redis.flushdb(), True)
self.assertEqual(self.redis.keys(), [])
def test_set_then_get(self):
self.assertEqual(self.redis.set('foo', 'bar'), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_set_None_value(self):
self.assertEqual(self.redis.set('foo', None), True)
self.assertEqual(self.redis.get('foo'), b'None')
def test_saving_non_ascii_chars_as_value(self):
self.assertEqual(self.redis.set('foo', 'Ñandu'), True)
self.assertEqual(self.redis.get('foo'),
u'Ñandu'.encode(DEFAULT_ENCODING))
def test_saving_unicode_type_as_value(self):
self.assertEqual(self.redis.set('foo', u'Ñandu'), True)
self.assertEqual(self.redis.get('foo'),
u'Ñandu'.encode(DEFAULT_ENCODING))
def test_saving_non_ascii_chars_as_key(self):
self.assertEqual(self.redis.set('Ñandu', 'foo'), True)
self.assertEqual(self.redis.get('Ñandu'), b'foo')
def test_saving_unicode_type_as_key(self):
self.assertEqual(self.redis.set(u'Ñandu', 'foo'), True)
self.assertEqual(self.redis.get(u'Ñandu'), b'foo')
def test_get_does_not_exist(self):
self.assertEqual(self.redis.get('foo'), None)
def test_get_with_non_str_keys(self):
self.assertEqual(self.redis.set('2', 'bar'), True)
self.assertEqual(self.redis.get(2), b'bar')
def test_get_invalid_type(self):
self.assertEqual(self.redis.hset('foo', 'key', 'value'), 1)
with self.assertRaises(redis.ResponseError):
self.redis.get('foo')
def test_set_non_str_keys(self):
self.assertEqual(self.redis.set(2, 'bar'), True)
self.assertEqual(self.redis.get(2), b'bar')
self.assertEqual(self.redis.get('2'), b'bar')
def test_getbit(self):
self.redis.setbit('foo', 3, 1)
self.assertEqual(self.redis.getbit('foo', 0), 0)
self.assertEqual(self.redis.getbit('foo', 1), 0)
self.assertEqual(self.redis.getbit('foo', 2), 0)
self.assertEqual(self.redis.getbit('foo', 3), 1)
self.assertEqual(self.redis.getbit('foo', 4), 0)
self.assertEqual(self.redis.getbit('foo', 100), 0)
def test_multiple_bits_set(self):
self.redis.setbit('foo', 1, 1)
self.redis.setbit('foo', 3, 1)
self.redis.setbit('foo', 5, 1)
self.assertEqual(self.redis.getbit('foo', 0), 0)
self.assertEqual(self.redis.getbit('foo', 1), 1)
self.assertEqual(self.redis.getbit('foo', 2), 0)
self.assertEqual(self.redis.getbit('foo', 3), 1)
self.assertEqual(self.redis.getbit('foo', 4), 0)
self.assertEqual(self.redis.getbit('foo', 5), 1)
self.assertEqual(self.redis.getbit('foo', 6), 0)
def test_unset_bits(self):
self.redis.setbit('foo', 1, 1)
self.redis.setbit('foo', 2, 0)
self.redis.setbit('foo', 3, 1)
self.assertEqual(self.redis.getbit('foo', 1), 1)
self.redis.setbit('foo', 1, 0)
self.assertEqual(self.redis.getbit('foo', 1), 0)
self.redis.setbit('foo', 3, 0)
self.assertEqual(self.redis.getbit('foo', 3), 0)
def test_setbits_and_getkeys(self):
# The bit operations and the get commands
# should play nicely with each other.
self.redis.setbit('foo', 1, 1)
self.assertEqual(self.redis.get('foo'), b'@')
self.redis.setbit('foo', 2, 1)
self.assertEqual(self.redis.get('foo'), b'`')
self.redis.setbit('foo', 3, 1)
self.assertEqual(self.redis.get('foo'), b'p')
self.redis.setbit('foo', 9, 1)
self.assertEqual(self.redis.get('foo'), b'p@')
self.redis.setbit('foo', 54, 1)
self.assertEqual(self.redis.get('foo'), b'p@\x00\x00\x00\x00\x02')
def test_bitcount(self):
self.redis.delete('foo')
self.assertEqual(self.redis.bitcount('foo'), 0)
self.redis.setbit('foo', 1, 1)
self.assertEqual(self.redis.bitcount('foo'), 1)
self.redis.setbit('foo', 8, 1)
self.assertEqual(self.redis.bitcount('foo'), 2)
self.assertEqual(self.redis.bitcount('foo', 1, 1), 1)
self.redis.setbit('foo', 57, 1)
self.assertEqual(self.redis.bitcount('foo'), 3)
self.redis.set('foo', ' ')
self.assertEqual(self.redis.bitcount('foo'), 1)
def test_getset_not_exist(self):
val = self.redis.getset('foo', 'bar')
self.assertEqual(val, None)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_getset_exists(self):
self.redis.set('foo', 'bar')
val = self.redis.getset('foo', b'baz')
self.assertEqual(val, b'bar')
val = self.redis.getset('foo', b'baz2')
self.assertEqual(val, b'baz')
def test_setitem_getitem(self):
self.assertEqual(self.redis.keys(), [])
self.redis['foo'] = 'bar'
self.assertEqual(self.redis['foo'], b'bar')
def test_strlen(self):
self.redis['foo'] = 'bar'
self.assertEqual(self.redis.strlen('foo'), 3)
self.assertEqual(self.redis.strlen('noexists'), 0)
def test_substr(self):
self.redis['foo'] = 'one_two_three'
self.assertEqual(self.redis.substr('foo', 0), b'one_two_three')
self.assertEqual(self.redis.substr('foo', 0, 2), b'one')
self.assertEqual(self.redis.substr('foo', 4, 6), b'two')
self.assertEqual(self.redis.substr('foo', -5), b'three')
def test_substr_noexist_key(self):
self.assertEqual(self.redis.substr('foo', 0), b'')
self.assertEqual(self.redis.substr('foo', 10), b'')
self.assertEqual(self.redis.substr('foo', -5, -1), b'')
def test_append(self):
self.assertTrue(self.redis.set('foo', 'bar'))
self.assertEqual(self.redis.append('foo', 'baz'), 6)
self.assertEqual(self.redis.get('foo'), b'barbaz')
def test_append_with_no_preexisting_key(self):
self.assertEqual(self.redis.append('foo', 'bar'), 3)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_incr_with_no_preexisting_key(self):
self.assertEqual(self.redis.incr('foo'), 1)
self.assertEqual(self.redis.incr('bar', 2), 2)
def test_incr_by(self):
self.assertEqual(self.redis.incrby('foo'), 1)
self.assertEqual(self.redis.incrby('bar', 2), 2)
def test_incr_preexisting_key(self):
self.redis.set('foo', 15)
self.assertEqual(self.redis.incr('foo', 5), 20)
self.assertEqual(self.redis.get('foo'), b'20')
def test_incr_bad_type(self):
self.redis.set('foo', 'bar')
with self.assertRaises(redis.ResponseError):
self.redis.incr('foo', 15)
def test_incr_with_float(self):
with self.assertRaises(redis.ResponseError):
self.redis.incr('foo', 2.0)
def test_incr_followed_by_mget(self):
self.redis.set('foo', 15)
self.assertEqual(self.redis.incr('foo', 5), 20)
self.assertEqual(self.redis.get('foo'), b'20')
def test_incr_followed_by_mget_returns_strings(self):
self.redis.incr('foo', 1)
self.assertEqual(self.redis.mget(['foo']), [b'1'])
def test_incrbyfloat(self):
self.redis.set('foo', 0)
self.assertEqual(self.redis.incrbyfloat('foo', 1.0), 1.0)
self.assertEqual(self.redis.incrbyfloat('foo', 1.0), 2.0)
def test_incrbyfloat_with_noexist(self):
self.assertEqual(self.redis.incrbyfloat('foo', 1.0), 1.0)
self.assertEqual(self.redis.incrbyfloat('foo', 1.0), 2.0)
def test_incrbyfloat_bad_type(self):
self.redis.set('foo', 'bar')
with self.assertRaisesRegexp(redis.ResponseError, 'not a valid float'):
self.redis.incrbyfloat('foo', 1.0)
def test_decr(self):
self.redis.set('foo', 10)
self.assertEqual(self.redis.decr('foo'), 9)
self.assertEqual(self.redis.get('foo'), b'9')
def test_decr_newkey(self):
self.redis.decr('foo')
self.assertEqual(self.redis.get('foo'), b'-1')
def test_decr_badtype(self):
self.redis.set('foo', 'bar')
with self.assertRaises(redis.ResponseError):
self.redis.decr('foo', 15)
def test_exists(self):
self.assertFalse('foo' in self.redis)
self.redis.set('foo', 'bar')
self.assertTrue('foo' in self.redis)
def test_contains(self):
self.assertFalse(self.redis.exists('foo'))
self.redis.set('foo', 'bar')
self.assertTrue(self.redis.exists('foo'))
def test_rename(self):
self.redis.set('foo', 'unique value')
self.assertTrue(self.redis.rename('foo', 'bar'))
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.get('bar'), b'unique value')
def test_rename_nonexistent_key(self):
with self.assertRaises(redis.ResponseError):
self.redis.rename('foo', 'bar')
def test_renamenx_doesnt_exist(self):
self.redis.set('foo', 'unique value')
self.assertTrue(self.redis.renamenx('foo', 'bar'))
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.get('bar'), b'unique value')
def test_rename_does_exist(self):
self.redis.set('foo', 'unique value')
self.redis.set('bar', 'unique value2')
self.assertFalse(self.redis.renamenx('foo', 'bar'))
self.assertEqual(self.redis.get('foo'), b'unique value')
self.assertEqual(self.redis.get('bar'), b'unique value2')
def test_mget(self):
self.redis.set('foo', 'one')
self.redis.set('bar', 'two')
self.assertEqual(self.redis.mget(['foo', 'bar']), [b'one', b'two'])
self.assertEqual(self.redis.mget(['foo', 'bar', 'baz']),
[b'one', b'two', None])
self.assertEqual(self.redis.mget('foo', 'bar'), [b'one', b'two'])
self.assertEqual(self.redis.mget('foo', 'bar', None),
[b'one', b'two', None])
    def test_mget_with_no_keys_raises_error(self):
with self.assertRaisesRegexp(
redis.ResponseError, 'wrong number of arguments'):
self.redis.mget([])
def test_mset_with_no_keys_raises_error(self):
with self.assertRaisesRegexp(
redis.RedisError, 'MSET requires'):
self.redis.mset([])
def test_mset(self):
self.assertEqual(self.redis.mset({'foo': 'one', 'bar': 'two'}), True)
self.assertEqual(self.redis.mset({'foo': 'one', 'bar': 'two'}), True)
self.assertEqual(self.redis.mget('foo', 'bar'), [b'one', b'two'])
def test_mset_accepts_kwargs(self):
self.assertEqual(
self.redis.mset(foo='one', bar='two'), True)
self.assertEqual(
self.redis.mset(foo='one', baz='three'), True)
self.assertEqual(self.redis.mget('foo', 'bar', 'baz'),
[b'one', b'two', b'three'])
def test_msetnx(self):
self.assertEqual(self.redis.msetnx({'foo': 'one', 'bar': 'two'}),
True)
self.assertEqual(self.redis.msetnx({'bar': 'two', 'baz': 'three'}),
False)
self.assertEqual(self.redis.mget('foo', 'bar', 'baz'),
[b'one', b'two', None])
def test_setex(self):
self.assertEqual(self.redis.setex('foo', 100, 'bar'), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_setex_using_timedelta(self):
self.assertEqual(
self.redis.setex('foo', timedelta(seconds=100), 'bar'), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_setex_using_float(self):
self.assertRaisesRegexp(
redis.ResponseError, 'integer', self.redis.setex, 'foo', 1.2,
'bar')
def test_set_ex(self):
self.assertEqual(self.redis.set('foo', 'bar', ex=100), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_set_ex_using_timedelta(self):
self.assertEqual(
self.redis.set('foo', 'bar', ex=timedelta(seconds=100)), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_set_px(self):
self.assertEqual(self.redis.set('foo', 'bar', px=100), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_set_px_using_timedelta(self):
self.assertEqual(
self.redis.set('foo', 'bar', px=timedelta(milliseconds=100)), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_set_raises_wrong_ex(self):
with self.assertRaises(ResponseError):
self.redis.set('foo', 'bar', ex=-100)
def test_set_using_timedelta_raises_wrong_ex(self):
with self.assertRaises(ResponseError):
self.redis.set('foo', 'bar', ex=timedelta(seconds=-100))
def test_set_raises_wrong_px(self):
with self.assertRaises(ResponseError):
self.redis.set('foo', 'bar', px=-100)
def test_set_using_timedelta_raises_wrong_px(self):
with self.assertRaises(ResponseError):
self.redis.set('foo', 'bar', px=timedelta(milliseconds=-100))
def test_setex_raises_wrong_ex(self):
with self.assertRaises(ResponseError):
self.redis.setex('foo', -100, 'bar')
def test_setex_using_timedelta_raises_wrong_ex(self):
with self.assertRaises(ResponseError):
self.redis.setex('foo', timedelta(seconds=-100), 'bar')
def test_setnx(self):
self.assertEqual(self.redis.setnx('foo', 'bar'), True)
self.assertEqual(self.redis.get('foo'), b'bar')
self.assertEqual(self.redis.setnx('foo', 'baz'), False)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_delete(self):
self.redis['foo'] = 'bar'
self.assertEqual(self.redis.delete('foo'), True)
self.assertEqual(self.redis.get('foo'), None)
def test_echo(self):
self.assertEqual(self.redis.echo(b'hello'), b'hello')
self.assertEqual(self.redis.echo('hello'), b'hello')
@attr('slow')
def test_delete_expire(self):
self.redis.set("foo", "bar", ex=1)
self.redis.delete("foo")
self.redis.set("foo", "bar")
sleep(2)
self.assertEqual(self.redis.get("foo"), b'bar')
def test_delete_multiple(self):
self.redis['one'] = 'one'
self.redis['two'] = 'two'
self.redis['three'] = 'three'
# Since redis>=2.7.6 returns number of deleted items.
self.assertEqual(self.redis.delete('one', 'two'), 2)
self.assertEqual(self.redis.get('one'), None)
self.assertEqual(self.redis.get('two'), None)
self.assertEqual(self.redis.get('three'), b'three')
self.assertEqual(self.redis.delete('one', 'two'), False)
# If any keys are deleted, True is returned.
self.assertEqual(self.redis.delete('two', 'three'), True)
self.assertEqual(self.redis.get('three'), None)
def test_delete_nonexistent_key(self):
self.assertEqual(self.redis.delete('foo'), False)
# Tests for the list type.
def test_rpush_then_lrange_with_nested_list1(self):
self.assertEqual(self.redis.rpush('foo', [long(12345), long(6789)]), 1)
self.assertEqual(self.redis.rpush('foo', [long(54321), long(9876)]), 2)
self.assertEqual(self.redis.lrange(
'foo', 0, -1), ['[12345L, 6789L]', '[54321L, 9876L]'] if PY2 else
[b'[12345, 6789]', b'[54321, 9876]'])
self.redis.flushall()
def test_rpush_then_lrange_with_nested_list2(self):
self.assertEqual(self.redis.rpush('foo', [long(12345), 'banana']), 1)
self.assertEqual(self.redis.rpush('foo', [long(54321), 'elephant']), 2)
self.assertEqual(self.redis.lrange(
'foo', 0, -1),
['[12345L, \'banana\']', '[54321L, \'elephant\']'] if PY2 else
[b'[12345, \'banana\']', b'[54321, \'elephant\']'])
self.redis.flushall()
def test_rpush_then_lrange_with_nested_list3(self):
self.assertEqual(self.redis.rpush('foo', [long(12345), []]), 1)
self.assertEqual(self.redis.rpush('foo', [long(54321), []]), 2)
self.assertEqual(self.redis.lrange(
'foo', 0, -1), ['[12345L, []]', '[54321L, []]'] if PY2 else
[b'[12345, []]', b'[54321, []]'])
self.redis.flushall()
def test_lpush_then_lrange_all(self):
self.assertEqual(self.redis.lpush('foo', 'bar'), 1)
self.assertEqual(self.redis.lpush('foo', 'baz'), 2)
self.assertEqual(self.redis.lpush('foo', 'bam', 'buzz'), 4)
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'buzz', b'bam', b'baz', b'bar'])
def test_lpush_then_lrange_portion(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'two')
self.redis.lpush('foo', 'three')
self.redis.lpush('foo', 'four')
self.assertEqual(self.redis.lrange('foo', 0, 2),
[b'four', b'three', b'two'])
self.assertEqual(self.redis.lrange('foo', 0, 3),
[b'four', b'three', b'two', b'one'])
def test_lpush_key_does_not_exist(self):
self.assertEqual(self.redis.lrange('foo', 0, -1), [])
def test_lpush_with_nonstr_key(self):
self.redis.lpush(1, 'one')
self.redis.lpush(1, 'two')
self.redis.lpush(1, 'three')
self.assertEqual(self.redis.lrange(1, 0, 2),
[b'three', b'two', b'one'])
self.assertEqual(self.redis.lrange('1', 0, 2),
[b'three', b'two', b'one'])
def test_llen(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'two')
self.redis.lpush('foo', 'three')
self.assertEqual(self.redis.llen('foo'), 3)
def test_llen_no_exist(self):
self.assertEqual(self.redis.llen('foo'), 0)
    def test_lrem_positive_count(self):
self.redis.lpush('foo', 'same')
self.redis.lpush('foo', 'same')
self.redis.lpush('foo', 'different')
self.redis.lrem('foo', 2, 'same')
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'different'])
def test_lrem_negative_count(self):
self.redis.lpush('foo', 'removeme')
self.redis.lpush('foo', 'three')
self.redis.lpush('foo', 'two')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'removeme')
self.redis.lrem('foo', -1, 'removeme')
# Should remove it from the end of the list,
# leaving the 'removeme' from the front of the list alone.
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'removeme', b'one', b'two', b'three'])
def test_lrem_zero_count(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 0, 'one')
self.assertEqual(self.redis.lrange('foo', 0, -1), [])
def test_lrem_default_value(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 0, 'one')
self.assertEqual(self.redis.lrange('foo', 0, -1), [])
def test_lrem_does_not_exist(self):
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 0, 'one')
# These should be noops.
self.redis.lrem('foo', -2, 'one')
self.redis.lrem('foo', 2, 'one')
def test_lrem_return_value(self):
self.redis.lpush('foo', 'one')
count = self.redis.lrem('foo', 0, 'one')
self.assertEqual(count, 1)
self.assertEqual(self.redis.lrem('foo', 0, 'one'), 0)
def test_rpush(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.redis.rpush('foo', 'three')
self.redis.rpush('foo', 'four', 'five')
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'one', b'two', b'three', b'four', b'five'])
def test_lpop(self):
self.assertEqual(self.redis.rpush('foo', 'one'), 1)
self.assertEqual(self.redis.rpush('foo', 'two'), 2)
self.assertEqual(self.redis.rpush('foo', 'three'), 3)
self.assertEqual(self.redis.lpop('foo'), b'one')
self.assertEqual(self.redis.lpop('foo'), b'two')
self.assertEqual(self.redis.lpop('foo'), b'three')
def test_lpop_empty_list(self):
self.redis.rpush('foo', 'one')
self.redis.lpop('foo')
self.assertEqual(self.redis.lpop('foo'), None)
# Verify what happens if we try to pop from a key
# we've never seen before.
self.assertEqual(self.redis.lpop('noexists'), None)
def test_lset(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.redis.rpush('foo', 'three')
self.redis.lset('foo', 0, 'four')
self.redis.lset('foo', -2, 'five')
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'four', b'five', b'three'])
def test_lset_index_out_of_range(self):
self.redis.rpush('foo', 'one')
with self.assertRaises(redis.ResponseError):
self.redis.lset('foo', 3, 'three')
def test_rpushx(self):
self.redis.rpush('foo', 'one')
self.redis.rpushx('foo', 'two')
self.redis.rpushx('bar', 'three')
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'one', b'two'])
self.assertEqual(self.redis.lrange('bar', 0, -1), [])
def test_ltrim(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.redis.rpush('foo', 'three')
self.redis.rpush('foo', 'four')
self.assertTrue(self.redis.ltrim('foo', 1, 3))
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'two', b'three',
b'four'])
self.assertTrue(self.redis.ltrim('foo', 1, -1))
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'three', b'four'])
def test_ltrim_with_non_existent_key(self):
self.assertTrue(self.redis.ltrim('foo', 0, -1))
def test_lindex(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.assertEqual(self.redis.lindex('foo', 0), b'one')
self.assertEqual(self.redis.lindex('foo', 4), None)
self.assertEqual(self.redis.lindex('bar', 4), None)
def test_lpushx(self):
self.redis.lpush('foo', 'two')
self.redis.lpushx('foo', 'one')
self.redis.lpushx('bar', 'one')
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'one', b'two'])
self.assertEqual(self.redis.lrange('bar', 0, -1), [])
def test_rpop(self):
self.assertEqual(self.redis.rpop('foo'), None)
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.assertEqual(self.redis.rpop('foo'), b'two')
self.assertEqual(self.redis.rpop('foo'), b'one')
self.assertEqual(self.redis.rpop('foo'), None)
def test_linsert(self):
self.redis.rpush('foo', 'hello')
self.redis.rpush('foo', 'world')
self.redis.linsert('foo', 'before', 'world', 'there')
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'hello', b'there', b'world'])
def test_rpoplpush(self):
self.assertEqual(self.redis.rpoplpush('foo', 'bar'), None)
self.assertEqual(self.redis.lpop('bar'), None)
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.redis.rpush('bar', 'one')
self.assertEqual(self.redis.rpoplpush('foo', 'bar'), b'two')
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'one'])
self.assertEqual(self.redis.lrange('bar', 0, -1), [b'two', b'one'])
# Catch instances where we store bytes and strings inconsistently
# and thus bar = ['two', b'one']
self.assertEqual(self.redis.lrem('bar', -1, 'two'), 1)
def test_rpoplpush_to_nonexistent_destination(self):
self.redis.rpush('foo', 'one')
self.assertEqual(self.redis.rpoplpush('foo', 'bar'), b'one')
self.assertEqual(self.redis.rpop('bar'), b'one')
def test_blpop_single_list(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.redis.rpush('foo', 'three')
self.assertEqual(self.redis.blpop(['foo'], timeout=1),
(b'foo', b'one'))
def test_blpop_test_multiple_lists(self):
self.redis.rpush('baz', 'zero')
self.assertEqual(self.redis.blpop(['foo', 'baz'], timeout=1),
(b'baz', b'zero'))
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
# bar has nothing, so the returned value should come
# from foo.
self.assertEqual(self.redis.blpop(['bar', 'foo'], timeout=1),
(b'foo', b'one'))
self.redis.rpush('bar', 'three')
# bar now has something, so the returned value should come
# from bar.
self.assertEqual(self.redis.blpop(['bar', 'foo'], timeout=1),
(b'bar', b'three'))
self.assertEqual(self.redis.blpop(['bar', 'foo'], timeout=1),
(b'foo', b'two'))
def test_blpop_allow_single_key(self):
# blpop converts single key arguments to a one element list.
self.redis.rpush('foo', 'one')
self.assertEqual(self.redis.blpop('foo', timeout=1), (b'foo', b'one'))
def test_brpop_test_multiple_lists(self):
self.redis.rpush('baz', 'zero')
self.assertEqual(self.redis.brpop(['foo', 'baz'], timeout=1),
(b'baz', b'zero'))
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.assertEqual(self.redis.brpop(['bar', 'foo'], timeout=1),
(b'foo', b'two'))
def test_brpop_single_key(self):
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.assertEqual(self.redis.brpop('foo', timeout=1),
(b'foo', b'two'))
def test_brpoplpush_multi_keys(self):
self.assertEqual(self.redis.lpop('bar'), None)
self.redis.rpush('foo', 'one')
self.redis.rpush('foo', 'two')
self.assertEqual(self.redis.brpoplpush('foo', 'bar', timeout=1),
b'two')
self.assertEqual(self.redis.lrange('bar', 0, -1), [b'two'])
# Catch instances where we store bytes and strings inconsistently
# and thus bar = ['two']
self.assertEqual(self.redis.lrem('bar', -1, 'two'), 1)
@attr('slow')
def test_blocking_operations_when_empty(self):
self.assertEqual(self.redis.blpop(['foo'], timeout=1),
None)
self.assertEqual(self.redis.blpop(['bar', 'foo'], timeout=1),
None)
self.assertEqual(self.redis.brpop('foo', timeout=1),
None)
self.assertEqual(self.redis.brpoplpush('foo', 'bar', timeout=1),
None)
# Tests for the hash type.
def test_hset_then_hget(self):
self.assertEqual(self.redis.hset('foo', 'key', 'value'), 1)
self.assertEqual(self.redis.hget('foo', 'key'), b'value')
def test_hset_update(self):
self.assertEqual(self.redis.hset('foo', 'key', 'value'), 1)
self.assertEqual(self.redis.hset('foo', 'key', 'value'), 0)
def test_hgetall(self):
self.assertEqual(self.redis.hset('foo', 'k1', 'v1'), 1)
self.assertEqual(self.redis.hset('foo', 'k2', 'v2'), 1)
self.assertEqual(self.redis.hset('foo', 'k3', 'v3'), 1)
self.assertEqual(self.redis.hgetall('foo'), {b'k1': b'v1',
b'k2': b'v2',
b'k3': b'v3'})
def test_hgetall_with_tuples(self):
self.assertEqual(self.redis.hset('foo', (1, 2), (1, 2, 3)), 1)
self.assertEqual(self.redis.hgetall('foo'), {b'(1, 2)': b'(1, 2, 3)'})
def test_hgetall_empty_key(self):
self.assertEqual(self.redis.hgetall('foo'), {})
def test_hexists(self):
self.redis.hset('foo', 'bar', 'v1')
self.assertEqual(self.redis.hexists('foo', 'bar'), 1)
self.assertEqual(self.redis.hexists('foo', 'baz'), 0)
self.assertEqual(self.redis.hexists('bar', 'bar'), 0)
def test_hkeys(self):
self.redis.hset('foo', 'k1', 'v1')
self.redis.hset('foo', 'k2', 'v2')
self.assertEqual(set(self.redis.hkeys('foo')), set([b'k1', b'k2']))
self.assertEqual(set(self.redis.hkeys('bar')), set([]))
def test_hlen(self):
self.redis.hset('foo', 'k1', 'v1')
self.redis.hset('foo', 'k2', 'v2')
self.assertEqual(self.redis.hlen('foo'), 2)
def test_hvals(self):
self.redis.hset('foo', 'k1', 'v1')
self.redis.hset('foo', 'k2', 'v2')
self.assertEqual(set(self.redis.hvals('foo')), set([b'v1', b'v2']))
self.assertEqual(set(self.redis.hvals('bar')), set([]))
def test_hmget(self):
self.redis.hset('foo', 'k1', 'v1')
self.redis.hset('foo', 'k2', 'v2')
self.redis.hset('foo', 'k3', 'v3')
# Normal case.
self.assertEqual(self.redis.hmget('foo', ['k1', 'k3']), [b'v1', b'v3'])
self.assertEqual(self.redis.hmget('foo', 'k1', 'k3'), [b'v1', b'v3'])
# Key does not exist.
self.assertEqual(self.redis.hmget('bar', ['k1', 'k3']), [None, None])
self.assertEqual(self.redis.hmget('bar', 'k1', 'k3'), [None, None])
# Some keys in the hash do not exist.
self.assertEqual(self.redis.hmget('foo', ['k1', 'k500']),
[b'v1', None])
self.assertEqual(self.redis.hmget('foo', 'k1', 'k500'),
[b'v1', None])
def test_hdel(self):
self.redis.hset('foo', 'k1', 'v1')
self.redis.hset('foo', 'k2', 'v2')
self.redis.hset('foo', 'k3', 'v3')
self.assertEqual(self.redis.hget('foo', 'k1'), b'v1')
self.assertEqual(self.redis.hdel('foo', 'k1'), True)
self.assertEqual(self.redis.hget('foo', 'k1'), None)
self.assertEqual(self.redis.hdel('foo', 'k1'), False)
# Since redis>=2.7.6 returns number of deleted items.
self.assertEqual(self.redis.hdel('foo', 'k2', 'k3'), 2)
self.assertEqual(self.redis.hget('foo', 'k2'), None)
self.assertEqual(self.redis.hget('foo', 'k3'), None)
self.assertEqual(self.redis.hdel('foo', 'k2', 'k3'), False)
def test_hincrby(self):
self.redis.hset('foo', 'counter', 0)
self.assertEqual(self.redis.hincrby('foo', 'counter'), 1)
self.assertEqual(self.redis.hincrby('foo', 'counter'), 2)
self.assertEqual(self.redis.hincrby('foo', 'counter'), 3)
def test_hincrby_with_no_starting_value(self):
self.assertEqual(self.redis.hincrby('foo', 'counter'), 1)
self.assertEqual(self.redis.hincrby('foo', 'counter'), 2)
self.assertEqual(self.redis.hincrby('foo', 'counter'), 3)
def test_hincrby_with_range_param(self):
self.assertEqual(self.redis.hincrby('foo', 'counter', 2), 2)
self.assertEqual(self.redis.hincrby('foo', 'counter', 2), 4)
self.assertEqual(self.redis.hincrby('foo', 'counter', 2), 6)
def test_hincrbyfloat(self):
self.redis.hset('foo', 'counter', 0.0)
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 1.0)
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 2.0)
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 3.0)
def test_hincrbyfloat_with_no_starting_value(self):
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 1.0)
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 2.0)
self.assertEqual(self.redis.hincrbyfloat('foo', 'counter'), 3.0)
def test_hincrbyfloat_with_range_param(self):
self.assertAlmostEqual(
self.redis.hincrbyfloat('foo', 'counter', 0.1), 0.1)
self.assertAlmostEqual(
self.redis.hincrbyfloat('foo', 'counter', 0.1), 0.2)
self.assertAlmostEqual(
self.redis.hincrbyfloat('foo', 'counter', 0.1), 0.3)
def test_hincrbyfloat_on_non_float_value_raises_error(self):
self.redis.hset('foo', 'counter', 'cat')
with self.assertRaises(redis.ResponseError):
self.redis.hincrbyfloat('foo', 'counter')
def test_hincrbyfloat_with_non_float_amount_raises_error(self):
with self.assertRaises(redis.ResponseError):
self.redis.hincrbyfloat('foo', 'counter', 'cat')
def test_hsetnx(self):
self.assertEqual(self.redis.hsetnx('foo', 'newkey', 'v1'), True)
self.assertEqual(self.redis.hsetnx('foo', 'newkey', 'v1'), False)
self.assertEqual(self.redis.hget('foo', 'newkey'), b'v1')
def test_hmsetset_empty_raises_error(self):
with self.assertRaises(redis.DataError):
self.redis.hmset('foo', {})
def test_hmsetset(self):
self.redis.hset('foo', 'k1', 'v1')
self.assertEqual(self.redis.hmset('foo', {'k2': 'v2', 'k3': 'v3'}),
True)
def test_hmset_convert_values(self):
self.redis.hmset('foo', {'k1': True, 'k2': 1})
self.assertEqual(
self.redis.hgetall('foo'), {b'k1': b'True', b'k2': b'1'})
def test_hmset_does_not_mutate_input_params(self):
original = {'key': [123, 456]}
self.redis.hmset('foo', original)
self.assertEqual(original, {'key': [123, 456]})
def test_sadd(self):
self.assertEqual(self.redis.sadd('foo', 'member1'), 1)
self.assertEqual(self.redis.sadd('foo', 'member1'), 0)
self.assertEqual(self.redis.smembers('foo'), set([b'member1']))
self.assertEqual(self.redis.sadd('foo', 'member2', 'member3'), 2)
self.assertEqual(self.redis.smembers('foo'),
set([b'member1', b'member2', b'member3']))
self.assertEqual(self.redis.sadd('foo', 'member3', 'member4'), 1)
self.assertEqual(self.redis.smembers('foo'),
set([b'member1', b'member2', b'member3', b'member4']))
def test_sadd_as_str_type(self):
self.assertEqual(self.redis.sadd('foo', *range(3)), 3)
self.assertEqual(self.redis.smembers('foo'), set([b'0', b'1', b'2']))
def test_scan_single(self):
self.redis.set('foo1', 'bar1')
self.assertEqual(self.redis.scan(match="foo*"), (0, [b'foo1']))
def test_scan_iter_single_page(self):
self.redis.set('foo1', 'bar1')
self.redis.set('foo2', 'bar2')
self.assertEqual(set(self.redis.scan_iter(match="foo*")),
set([b'foo1', b'foo2']))
def test_scan_iter_multiple_pages(self):
all_keys = key_val_dict(size=100)
self.assertTrue(
all(self.redis.set(k, v) for k, v in all_keys.items()))
self.assertEqual(
set(self.redis.scan_iter()),
set(all_keys))
def test_scan_iter_multiple_pages_with_match(self):
all_keys = key_val_dict(size=100)
self.assertTrue(
all(self.redis.set(k, v) for k, v in all_keys.items()))
# Now add a few keys that don't match the key:<number> pattern.
self.redis.set('otherkey', 'foo')
self.redis.set('andanother', 'bar')
actual = set(self.redis.scan_iter(match='key:*'))
self.assertEqual(actual, set(all_keys))
def test_scan_multiple_pages_with_count_arg(self):
all_keys = key_val_dict(size=100)
self.assertTrue(
all(self.redis.set(k, v) for k, v in all_keys.items()))
self.assertEqual(
set(self.redis.scan_iter(count=1000)),
set(all_keys))
def test_scan_all_in_single_call(self):
all_keys = key_val_dict(size=100)
self.assertTrue(
all(self.redis.set(k, v) for k, v in all_keys.items()))
# Specify way more than the 100 keys we've added.
actual = self.redis.scan(count=1000)
self.assertEqual(set(actual[1]), set(all_keys))
self.assertEqual(actual[0], 0)
def test_scard(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('foo', 'member2')
self.assertEqual(self.redis.scard('foo'), 2)
def test_sdiff(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sdiff('foo', 'bar'), set([b'member1']))
# Original sets shouldn't be modified.
self.assertEqual(self.redis.smembers('foo'),
set([b'member1', b'member2']))
self.assertEqual(self.redis.smembers('bar'),
set([b'member2', b'member3']))
def test_sdiff_one_key(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.assertEqual(self.redis.sdiff('foo'),
set([b'member1', b'member2']))
def test_sdiff_empty(self):
self.assertEqual(self.redis.sdiff('foo'), set())
def test_sdiffstore(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sdiffstore('baz', 'foo', 'bar'), 1)
# Catch instances where we store bytes and strings inconsistently
# and thus baz = {'member1', b'member1'}
self.redis.sadd('baz', 'member1')
self.assertEqual(self.redis.scard('baz'), 1)
def test_setrange(self):
self.redis.set('foo', 'test')
self.assertEqual(self.redis.setrange('foo', 1, 'aste'), 5)
self.assertEqual(self.redis.get('foo'), b'taste')
self.redis.set('foo', 'test')
self.assertEqual(self.redis.setrange('foo', 1, 'a'), 4)
self.assertEqual(self.redis.get('foo'), b'tast')
self.assertEqual(self.redis.setrange('bar', 2, 'test'), 6)
self.assertEqual(self.redis.get('bar'), b'\x00\x00test')
def test_sinter(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sinter('foo', 'bar'), set([b'member2']))
self.assertEqual(self.redis.sinter('foo'),
set([b'member1', b'member2']))
def test_sinter_bytes_keys(self):
foo = os.urandom(10)
bar = os.urandom(10)
self.redis.sadd(foo, 'member1')
self.redis.sadd(foo, 'member2')
self.redis.sadd(bar, 'member2')
self.redis.sadd(bar, 'member3')
self.assertEqual(self.redis.sinter(foo, bar), set([b'member2']))
self.assertEqual(self.redis.sinter(foo), set([b'member1', b'member2']))
def test_sinterstore(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sinterstore('baz', 'foo', 'bar'), 1)
# Catch instances where we store bytes and strings inconsistently
# and thus baz = {'member2', b'member2'}
self.redis.sadd('baz', 'member2')
self.assertEqual(self.redis.scard('baz'), 1)
def test_sismember(self):
self.assertEqual(self.redis.sismember('foo', 'member1'), False)
self.redis.sadd('foo', 'member1')
self.assertEqual(self.redis.sismember('foo', 'member1'), True)
def test_smembers(self):
self.assertEqual(self.redis.smembers('foo'), set())
def test_smove(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.assertEqual(self.redis.smove('foo', 'bar', 'member1'), True)
self.assertEqual(self.redis.smembers('bar'), set([b'member1']))
def test_smove_non_existent_key(self):
self.assertEqual(self.redis.smove('foo', 'bar', 'member1'), False)
def test_spop(self):
# This is tricky because it pops a random element.
self.redis.sadd('foo', 'member1')
self.assertEqual(self.redis.spop('foo'), b'member1')
self.assertEqual(self.redis.spop('foo'), None)
def test_srandmember(self):
self.redis.sadd('foo', 'member1')
self.assertEqual(self.redis.srandmember('foo'), b'member1')
# Shouldn't be removed from the set.
self.assertEqual(self.redis.srandmember('foo'), b'member1')
def test_srandmember_number(self):
"""srandmember works with the number argument."""
self.assertEqual(self.redis.srandmember('foo', 2), [])
self.redis.sadd('foo', b'member1')
self.assertEqual(self.redis.srandmember('foo', 2), [b'member1'])
self.redis.sadd('foo', b'member2')
self.assertEqual(set(self.redis.srandmember('foo', 2)),
set([b'member1', b'member2']))
self.redis.sadd('foo', b'member3')
res = self.redis.srandmember('foo', 2)
self.assertEqual(len(res), 2)
if self.decode_responses:
superset = set(['member1', 'member2', 'member3'])
else:
superset = set([b'member1', b'member2', b'member3'])
for e in res:
self.assertIn(e, superset)
def test_srem(self):
self.redis.sadd('foo', 'member1', 'member2', 'member3', 'member4')
self.assertEqual(self.redis.smembers('foo'),
set([b'member1', b'member2', b'member3', b'member4']))
self.assertEqual(self.redis.srem('foo', 'member1'), True)
self.assertEqual(self.redis.smembers('foo'),
set([b'member2', b'member3', b'member4']))
self.assertEqual(self.redis.srem('foo', 'member1'), False)
# Since redis>=2.7.6, srem returns the number of deleted items.
self.assertEqual(self.redis.srem('foo', 'member2', 'member3'), 2)
self.assertEqual(self.redis.smembers('foo'), set([b'member4']))
self.assertEqual(self.redis.srem('foo', 'member3', 'member4'), True)
self.assertEqual(self.redis.smembers('foo'), set([]))
self.assertEqual(self.redis.srem('foo', 'member3', 'member4'), False)
def test_sunion(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sunion('foo', 'bar'),
set([b'member1', b'member2', b'member3']))
def test_sunionstore(self):
self.redis.sadd('foo', 'member1')
self.redis.sadd('foo', 'member2')
self.redis.sadd('bar', 'member2')
self.redis.sadd('bar', 'member3')
self.assertEqual(self.redis.sunionstore('baz', 'foo', 'bar'), 3)
self.assertEqual(self.redis.smembers('baz'),
set([b'member1', b'member2', b'member3']))
# Catch instances where we store bytes and strings inconsistently
# and thus baz = {b'member1', b'member2', b'member3', 'member3'}
self.redis.sadd('baz', 'member3')
self.assertEqual(self.redis.scard('baz'), 3)
def test_zadd(self):
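# Note: these tests assume the redis-py 2.x zadd calling convention,
# where positional arguments are (score, member) pairs and keyword
# arguments are member=score pairs.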
self.redis.zadd('foo', four=4)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zadd('foo', 2, 'two', 1, 'one', zero=0), 3)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'zero', b'one', b'two', b'three', b'four'])
self.assertEqual(self.redis.zadd('foo', 7, 'zero', one=1, five=5), 1)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'one', b'two', b'three', b'four', b'five', b'zero'])
def test_zadd_uses_str(self):
self.redis.zadd('foo', 12345, (1, 2, 3))
self.assertEqual(self.redis.zrange('foo', 0, 0), [b'(1, 2, 3)'])
def test_zadd_errors(self):
# The args are backwards, it should be 2, "two", so we
# expect an exception to be raised.
with self.assertRaises(redis.ResponseError):
self.redis.zadd('foo', 'two', 2)
with self.assertRaises(redis.ResponseError):
self.redis.zadd('foo', two='two')
# An equal number of values and scores is expected
with self.assertRaises(redis.RedisError):
self.redis.zadd('foo', 'two')
def test_zadd_multiple(self):
self.redis.zadd('foo', 1, 'one', 2, 'two')
self.assertEqual(self.redis.zrange('foo', 0, 0),
[b'one'])
self.assertEqual(self.redis.zrange('foo', 1, 1),
[b'two'])
def test_zrange_same_score(self):
self.redis.zadd('foo', two_a=2)
self.redis.zadd('foo', two_b=2)
self.redis.zadd('foo', two_c=2)
self.redis.zadd('foo', two_d=2)
self.redis.zadd('foo', two_e=2)
self.assertEqual(self.redis.zrange('foo', 2, 3),
[b'two_c', b'two_d'])
def test_zcard(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.assertEqual(self.redis.zcard('foo'), 2)
def test_zcard_non_existent_key(self):
self.assertEqual(self.redis.zcard('foo'), 0)
def test_zcount(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', three=2)
self.redis.zadd('foo', five=5)
self.assertEqual(self.redis.zcount('foo', 2, 4), 1)
self.assertEqual(self.redis.zcount('foo', 1, 4), 2)
self.assertEqual(self.redis.zcount('foo', 0, 5), 3)
self.assertEqual(self.redis.zcount('foo', 4, '+inf'), 1)
self.assertEqual(self.redis.zcount('foo', '-inf', 4), 2)
self.assertEqual(self.redis.zcount('foo', '-inf', '+inf'), 3)
def test_zcount_exclusive(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', three=2)
self.redis.zadd('foo', five=5)
self.assertEqual(self.redis.zcount('foo', '-inf', '(2'), 1)
self.assertEqual(self.redis.zcount('foo', '-inf', 2), 2)
self.assertEqual(self.redis.zcount('foo', '(5', '+inf'), 0)
self.assertEqual(self.redis.zcount('foo', '(1', 5), 2)
self.assertEqual(self.redis.zcount('foo', '(2', '(5'), 0)
self.assertEqual(self.redis.zcount('foo', '(1', '(5'), 1)
self.assertEqual(self.redis.zcount('foo', 2, '(5'), 1)
def test_zincrby(self):
self.redis.zadd('foo', one=1)
self.assertEqual(self.redis.zincrby('foo', 'one', 10), 11)
self.assertEqual(self.redis.zrange('foo', 0, -1, withscores=True),
[(b'one', 11)])
def test_zrange_descending(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrange('foo', 0, -1, desc=True),
[b'three', b'two', b'one'])
def test_zrange_descending_with_scores(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrange('foo', 0, -1, desc=True,
withscores=True),
[(b'three', 3), (b'two', 2), (b'one', 1)])
def test_zrange_with_positive_indices(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrange('foo', 0, 1), [b'one', b'two'])
def test_zrank(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrank('foo', 'one'), 0)
self.assertEqual(self.redis.zrank('foo', 'two'), 1)
self.assertEqual(self.redis.zrank('foo', 'three'), 2)
def test_zrank_non_existent_member(self):
self.assertEqual(self.redis.zrank('foo', 'one'), None)
def test_zrem(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.redis.zadd('foo', four=4)
self.assertEqual(self.redis.zrem('foo', 'one'), True)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'two', b'three', b'four'])
# Since redis>=2.7.6, zrem returns the number of deleted items.
self.assertEqual(self.redis.zrem('foo', 'two', 'three'), 2)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'four'])
self.assertEqual(self.redis.zrem('foo', 'three', 'four'), True)
self.assertEqual(self.redis.zrange('foo', 0, -1), [])
self.assertEqual(self.redis.zrem('foo', 'three', 'four'), False)
def test_zrem_non_existent_member(self):
self.assertFalse(self.redis.zrem('foo', 'one'))
def test_zrem_numeric_member(self):
self.redis.zadd('foo', **{'128': 13.0, '129': 12.0})
self.assertEqual(self.redis.zrem('foo', 128), True)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'129'])
def test_zscore(self):
self.redis.zadd('foo', one=54)
self.assertEqual(self.redis.zscore('foo', 'one'), 54)
def test_zscore_non_existent_member(self):
self.assertIsNone(self.redis.zscore('foo', 'one'))
def test_zrevrank(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrevrank('foo', 'one'), 2)
self.assertEqual(self.redis.zrevrank('foo', 'two'), 1)
self.assertEqual(self.redis.zrevrank('foo', 'three'), 0)
def test_zrevrank_non_existent_member(self):
self.assertEqual(self.redis.zrevrank('foo', 'one'), None)
def test_zrevrange(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrevrange('foo', 0, 1), [b'three', b'two'])
self.assertEqual(self.redis.zrevrange('foo', 0, -1),
[b'three', b'two', b'one'])
def test_zrevrange_sorted_keys(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', 2, 'two_b')
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrevrange('foo', 0, 2),
[b'three', b'two_b', b'two'])
self.assertEqual(self.redis.zrevrange('foo', 0, -1),
[b'three', b'two_b', b'two', b'one'])
def test_zrangebyscore(self):
self.redis.zadd('foo', zero=0)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', two_a_also=2)
self.redis.zadd('foo', two_b_also=2)
self.redis.zadd('foo', four=4)
self.assertEqual(self.redis.zrangebyscore('foo', 1, 3),
[b'two', b'two_a_also', b'two_b_also'])
self.assertEqual(self.redis.zrangebyscore('foo', 2, 3),
[b'two', b'two_a_also', b'two_b_also'])
self.assertEqual(self.redis.zrangebyscore('foo', 0, 4),
[b'zero', b'two', b'two_a_also', b'two_b_also',
b'four'])
self.assertEqual(self.redis.zrangebyscore('foo', '-inf', 1),
[b'zero'])
self.assertEqual(self.redis.zrangebyscore('foo', 2, '+inf'),
[b'two', b'two_a_also', b'two_b_also', b'four'])
self.assertEqual(self.redis.zrangebyscore('foo', '-inf', '+inf'),
[b'zero', b'two', b'two_a_also', b'two_b_also',
b'four'])
def test_zrangebyscore_exclusive(self):
self.redis.zadd('foo', zero=0)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', four=4)
self.redis.zadd('foo', five=5)
self.assertEqual(self.redis.zrangebyscore('foo', '(0', 6),
[b'two', b'four', b'five'])
self.assertEqual(self.redis.zrangebyscore('foo', '(2', '(5'),
[b'four'])
self.assertEqual(self.redis.zrangebyscore('foo', 0, '(4'),
[b'zero', b'two'])
def test_zrangebyscore_raises_error(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
with self.assertRaises(redis.ResponseError):
self.redis.zrangebyscore('foo', 'one', 2)
with self.assertRaises(redis.ResponseError):
self.redis.zrangebyscore('foo', 2, 'three')
with self.assertRaises(redis.ResponseError):
self.redis.zrangebyscore('foo', 2, '3)')
with self.assertRaises(redis.RedisError):
self.redis.zrangebyscore('foo', 2, '3)', 0, None)
def test_zrangebyscore_slice(self):
self.redis.zadd('foo', two_a=2)
self.redis.zadd('foo', two_b=2)
self.redis.zadd('foo', two_c=2)
self.redis.zadd('foo', two_d=2)
self.assertEqual(self.redis.zrangebyscore('foo', 0, 4, 0, 2),
[b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebyscore('foo', 0, 4, 1, 3),
[b'two_b', b'two_c', b'two_d'])
def test_zrangebyscore_withscores(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrangebyscore('foo', 1, 3, 0, 2, True),
[(b'one', 1), (b'two', 2)])
def test_zrevrangebyscore(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrevrangebyscore('foo', 3, 1),
[b'three', b'two', b'one'])
self.assertEqual(self.redis.zrevrangebyscore('foo', 3, 2),
[b'three', b'two'])
self.assertEqual(self.redis.zrevrangebyscore('foo', 3, 1, 0, 1),
[b'three'])
self.assertEqual(self.redis.zrevrangebyscore('foo', 3, 1, 1, 2),
[b'two', b'one'])
def test_zrevrangebyscore_exclusive(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zrevrangebyscore('foo', '(3', 1),
[b'two', b'one'])
self.assertEqual(self.redis.zrevrangebyscore('foo', 3, '(2'),
[b'three'])
self.assertEqual(self.redis.zrevrangebyscore('foo', '(3', '(1'),
[b'two'])
self.assertEqual(self.redis.zrevrangebyscore('foo', '(2', 1, 0, 1),
[b'one'])
self.assertEqual(self.redis.zrevrangebyscore('foo', '(2', '(1', 0, 1),
[])
self.assertEqual(self.redis.zrevrangebyscore('foo', '(3', '(0', 1, 2),
[b'one'])
def test_zrevrangebyscore_raises_error(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebyscore('foo', 'three', 1)
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebyscore('foo', 3, 'one')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebyscore('foo', 3, '1)')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebyscore('foo', '((3', '1)')
def test_zrangebylex(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zrangebylex('foo', b'(t', b'+'),
[b'three_a', b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'(t', b'[two_b'),
[b'three_a', b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'(t', b'(two_b'),
[b'three_a', b'two_a'])
self.assertEqual(self.redis.zrangebylex('foo', b'[three_a', b'[two_b'),
[b'three_a', b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'(three_a', b'[two_b'),
[b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'-', b'(two_b'),
[b'one_a', b'three_a', b'two_a'])
self.assertEqual(self.redis.zrangebylex('foo', b'[two_b', b'(two_b'),
[])
# Reversed max '+' and min '-' boundaries:
# these will always be empty, but redis allows them.
self.assertEqual(self.redis.zrangebylex('foo', b'+', b'-'),
[])
self.assertEqual(self.redis.zrangebylex('foo', b'+', b'[three_a'),
[])
self.assertEqual(self.redis.zrangebylex('foo', b'[o', b'-'),
[])
def test_zlexcount(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zlexcount('foo', b'(t', b'+'),
3)
self.assertEqual(self.redis.zlexcount('foo', b'(t', b'[two_b'),
3)
self.assertEqual(self.redis.zlexcount('foo', b'(t', b'(two_b'),
2)
self.assertEqual(self.redis.zlexcount('foo', b'[three_a', b'[two_b'),
3)
self.assertEqual(self.redis.zlexcount('foo', b'(three_a', b'[two_b'),
2)
self.assertEqual(self.redis.zlexcount('foo', b'-', b'(two_b'),
3)
self.assertEqual(self.redis.zlexcount('foo', b'[two_b', b'(two_b'),
0)
# Reversed max '+' and min '-' boundaries:
# these will always be empty, but redis allows them.
self.assertEqual(self.redis.zlexcount('foo', b'+', b'-'),
0)
self.assertEqual(self.redis.zlexcount('foo', b'+', b'[three_a'),
0)
self.assertEqual(self.redis.zlexcount('foo', b'[o', b'-'),
0)
def test_zrangebylex_with_limit(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zrangebylex('foo', b'-', b'+', 1, 2),
[b'three_a', b'two_a'])
# A negative offset yields no results.
self.assertEqual(self.redis.zrangebylex('foo', b'-', b'+', -1, 3),
[])
# A negative limit means no limit (all remaining elements are returned).
self.assertEqual(self.redis.zrangebylex('foo', b'-', b'+', 0, -2),
[b'one_a', b'three_a', b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'-', b'+', 1, -2),
[b'three_a', b'two_a', b'two_b'])
self.assertEqual(self.redis.zrangebylex('foo', b'+', b'-', 1, 1),
[])
def test_zrangebylex_raises_error(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
with self.assertRaises(redis.ResponseError):
self.redis.zrangebylex('foo', b'', b'[two_b')
with self.assertRaises(redis.ResponseError):
self.redis.zrangebylex('foo', b'-', b'two_b')
with self.assertRaises(redis.ResponseError):
self.redis.zrangebylex('foo', b'(t', b'two_b')
with self.assertRaises(redis.ResponseError):
self.redis.zrangebylex('foo', b't', b'+')
with self.assertRaises(redis.ResponseError):
self.redis.zrangebylex('foo', b'[two_a', b'')
with self.assertRaises(redis.RedisError):
self.redis.zrangebylex('foo', b'(two_a', b'[two_b', 1)
def test_zrevrangebylex(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zrevrangebylex('foo', b'+', b'(t'),
[b'two_b', b'two_a', b'three_a'])
self.assertEqual(self.redis.zrevrangebylex('foo', b'[two_b', b'(t'),
[b'two_b', b'two_a', b'three_a'])
self.assertEqual(self.redis.zrevrangebylex('foo', b'(two_b', b'(t'),
[b'two_a', b'three_a'])
self.assertEqual(self.redis.zrevrangebylex('foo', b'[two_b', b'[three_a'),
[b'two_b', b'two_a', b'three_a'])
self.assertEqual(self.redis.zrevrangebylex('foo', b'[two_b', b'(three_a'),
[b'two_b', b'two_a'])
self.assertEqual(self.redis.zrevrangebylex('foo', b'(two_b', b'-'),
[b'two_a', b'three_a', b'one_a'])
self.assertEqual(self.redis.zrangebylex('foo', b'(two_b', b'[two_b'),
[])
# Reversed max '+' and min '-' boundaries:
# these will always be empty, but redis allows them.
self.assertEqual(self.redis.zrevrangebylex('foo', b'-', b'+'),
[])
self.assertEqual(self.redis.zrevrangebylex('foo', b'[three_a', b'+'),
[])
self.assertEqual(self.redis.zrevrangebylex('foo', b'-', b'[o'),
[])
def test_zrevrangebylex_with_limit(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zrevrangebylex('foo', b'+', b'-', 1, 2),
[b'two_a', b'three_a'])
def test_zrevrangebylex_raises_error(self):
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', three_a=0)
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebylex('foo', b'[two_b', b'')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebylex('foo', b'two_b', b'-')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebylex('foo', b'two_b', b'(t')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebylex('foo', b'+', b't')
with self.assertRaises(redis.ResponseError):
self.redis.zrevrangebylex('foo', b'', b'[two_a')
with self.assertRaises(redis.RedisError):
self.redis.zrevrangebylex('foo', b'[two_a', b'(two_b', 1)
def test_zremrangebyrank(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zremrangebyrank('foo', 0, 1), 2)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'three'])
def test_zremrangebyrank_negative_indices(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', three=3)
self.assertEqual(self.redis.zremrangebyrank('foo', -2, -1), 2)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'one'])
def test_zremrangebyrank_out_of_bounds(self):
self.redis.zadd('foo', one=1)
self.assertEqual(self.redis.zremrangebyrank('foo', 1, 3), 0)
def test_zremrangebyscore(self):
self.redis.zadd('foo', zero=0)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', four=4)
# Outside of range.
self.assertEqual(self.redis.zremrangebyscore('foo', 5, 10), 0)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'zero', b'two', b'four'])
# Middle of range.
self.assertEqual(self.redis.zremrangebyscore('foo', 1, 3), 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'zero', b'four'])
self.assertEqual(self.redis.zremrangebyscore('foo', 1, 3), 0)
# Entire range.
self.assertEqual(self.redis.zremrangebyscore('foo', 0, 4), 2)
self.assertEqual(self.redis.zrange('foo', 0, -1), [])
def test_zremrangebyscore_exclusive(self):
self.redis.zadd('foo', zero=0)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', four=4)
self.assertEqual(self.redis.zremrangebyscore('foo', '(0', 1), 0)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'zero', b'two', b'four'])
self.assertEqual(self.redis.zremrangebyscore('foo', '-inf', '(0'), 0)
self.assertEqual(self.redis.zrange('foo', 0, -1),
[b'zero', b'two', b'four'])
self.assertEqual(self.redis.zremrangebyscore('foo', '(2', 5), 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'zero', b'two'])
self.assertEqual(self.redis.zremrangebyscore('foo', 0, '(2'), 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'two'])
self.assertEqual(self.redis.zremrangebyscore('foo', '(1', '(3'), 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [])
def test_zremrangebyscore_raises_error(self):
self.redis.zadd('foo', zero=0)
self.redis.zadd('foo', two=2)
self.redis.zadd('foo', four=4)
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebyscore('foo', 'three', 1)
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebyscore('foo', 3, 'one')
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebyscore('foo', 3, '1)')
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebyscore('foo', '((3', '1)')
def test_zremrangebyscore_badkey(self):
self.assertEqual(self.redis.zremrangebyscore('foo', 0, 2), 0)
def test_zremrangebylex(self):
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', three_a=0)
self.assertEqual(self.redis.zremrangebylex('foo', b'(three_a', b'[two_b'), 2)
self.assertEqual(self.redis.zremrangebylex('foo', b'(three_a', b'[two_b'), 0)
self.assertEqual(self.redis.zremrangebylex('foo', b'-', b'(o'), 0)
self.assertEqual(self.redis.zremrangebylex('foo', b'-', b'[one_a'), 1)
self.assertEqual(self.redis.zremrangebylex('foo', b'[tw', b'+'), 0)
self.assertEqual(self.redis.zremrangebylex('foo', b'[t', b'+'), 1)
self.assertEqual(self.redis.zremrangebylex('foo', b'[t', b'+'), 0)
def test_zremrangebylex_error(self):
self.redis.zadd('foo', two_a=0)
self.redis.zadd('foo', two_b=0)
self.redis.zadd('foo', one_a=0)
self.redis.zadd('foo', three_a=0)
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebylex('foo', b'(t', b'two_b')
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebylex('foo', b't', b'+')
with self.assertRaises(redis.ResponseError):
self.redis.zremrangebylex('foo', b'[two_a', b'')
def test_zremrangebylex_badkey(self):
self.assertEqual(self.redis.zremrangebylex('foo', b'(three_a', b'[two_b'), 0)
def test_zunionstore(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zunionstore('baz', ['foo', 'bar'])
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 2), (b'three', 3), (b'two', 4)])
def test_zunionstore_sum(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 2), (b'three', 3), (b'two', 4)])
def test_zunionstore_max(self):
self.redis.zadd('foo', one=0)
self.redis.zadd('foo', two=0)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zunionstore('baz', ['foo', 'bar'], aggregate='MAX')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 1), (b'two', 2), (b'three', 3)])
def test_zunionstore_min(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('bar', one=0)
self.redis.zadd('bar', two=0)
self.redis.zadd('bar', three=3)
self.redis.zunionstore('baz', ['foo', 'bar'], aggregate='MIN')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 0), (b'two', 0), (b'three', 3)])
def test_zunionstore_weights(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', four=4)
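# Passing a dict maps each source key to a weight that is multiplied
# into its scores before aggregation (foo*1 + bar*2 in the assertion below).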
self.redis.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 3), (b'two', 6), (b'four', 8)])
def test_zunionstore_mixed_set_types(self):
# 'foo' is a plain set with no scores, so redis uses 1.0 for its members.
self.redis.sadd('foo', 'one')
self.redis.sadd('foo', 'two')
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 2), (b'three', 3), (b'two', 3)])
def test_zunionstore_badkey(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zunionstore('baz', ['foo', 'bar'], aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 1), (b'two', 2)])
self.redis.zunionstore('baz', {'foo': 1, 'bar': 2}, aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 1), (b'two', 2)])
def test_zinterstore(self):
self.redis.zadd('foo', one=1)
self.redis.zadd('foo', two=2)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zinterstore('baz', ['foo', 'bar'])
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 2), (b'two', 4)])
def test_zinterstore_mixed_set_types(self):
self.redis.sadd('foo', 'one')
self.redis.sadd('foo', 'two')
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zinterstore('baz', ['foo', 'bar'], aggregate='SUM')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 2), (b'two', 3)])
def test_zinterstore_max(self):
self.redis.zadd('foo', one=0)
self.redis.zadd('foo', two=0)
self.redis.zadd('bar', one=1)
self.redis.zadd('bar', two=2)
self.redis.zadd('bar', three=3)
self.redis.zinterstore('baz', ['foo', 'bar'], aggregate='MAX')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 1), (b'two', 2)])
def test_zinterstore_onekey(self):
self.redis.zadd('foo', one=1)
self.redis.zinterstore('baz', ['foo'], aggregate='MAX')
self.assertEqual(self.redis.zrange('baz', 0, -1, withscores=True),
[(b'one', 1)])
def test_zinterstore_nokey(self):
with self.assertRaises(redis.ResponseError):
self.redis.zinterstore('baz', [], aggregate='MAX')
def test_zunionstore_nokey(self):
with self.assertRaises(redis.ResponseError):
self.redis.zunionstore('baz', [], aggregate='MAX')
def test_multidb(self):
r1 = self.create_redis(db=0)
r2 = self.create_redis(db=1)
r1['r1'] = 'r1'
r2['r2'] = 'r2'
self.assertTrue('r2' not in r1)
self.assertTrue('r1' not in r2)
self.assertEqual(r1['r1'], b'r1')
self.assertEqual(r2['r2'], b'r2')
r1.flushall()
self.assertTrue('r1' not in r1)
self.assertTrue('r2' not in r2)
def test_basic_sort(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '3')
self.assertEqual(self.redis.sort('foo'), [b'1', b'2', b'3'])
def test_empty_sort(self):
self.assertEqual(self.redis.sort('foo'), [])
def test_sort_range_offset_range(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '4')
self.redis.rpush('foo', '3')
self.assertEqual(self.redis.sort('foo', start=0, num=2), [b'1', b'2'])
def test_sort_range_offset_range_and_desc(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '4')
self.redis.rpush('foo', '3')
self.assertEqual(self.redis.sort("foo", start=0, num=1, desc=True),
[b"4"])
def test_sort_range_offset_norange(self):
with self.assertRaises(redis.RedisError):
self.redis.sort('foo', start=1)
def test_sort_range_with_large_range(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '4')
self.redis.rpush('foo', '3')
# num=20 even though len(foo) is 4.
self.assertEqual(self.redis.sort('foo', start=1, num=20),
[b'2', b'3', b'4'])
def test_sort_descending(self):
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '3')
self.assertEqual(self.redis.sort('foo', desc=True), [b'3', b'2', b'1'])
def test_sort_alpha(self):
self.redis.rpush('foo', '2a')
self.redis.rpush('foo', '1b')
self.redis.rpush('foo', '2b')
self.redis.rpush('foo', '1a')
self.assertEqual(self.redis.sort('foo', alpha=True),
[b'1a', b'1b', b'2a', b'2b'])
def test_sort_non_numeric_without_alpha_raises_error(self):
self.redis.rpush('foo', '2a')
self.redis.rpush('foo', '1b')
self.redis.rpush('foo', '2b')
self.redis.rpush('foo', '1a')
with self.assertRaises(redis.ResponseError):
self.redis.sort('foo', alpha=False)
def test_sort_with_store_option(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '4')
self.redis.rpush('foo', '3')
self.assertEqual(self.redis.sort('foo', store='bar'), 4)
self.assertEqual(self.redis.lrange('bar', 0, -1),
[b'1', b'2', b'3', b'4'])
def test_sort_with_by_and_get_option(self):
self.redis.rpush('foo', '2')
self.redis.rpush('foo', '1')
self.redis.rpush('foo', '4')
self.redis.rpush('foo', '3')
self.redis['weight_1'] = '4'
self.redis['weight_2'] = '3'
self.redis['weight_3'] = '2'
self.redis['weight_4'] = '1'
self.redis['data_1'] = 'one'
self.redis['data_2'] = 'two'
self.redis['data_3'] = 'three'
self.redis['data_4'] = 'four'
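# BY weight_* sorts the elements by the external weight keys; GET
# fetches data_* for each element, '#' returns the element itself, and,
# as asserted below, a GET pattern without '*' yields None per element.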
self.assertEqual(self.redis.sort('foo', by='weight_*', get='data_*'),
[b'four', b'three', b'two', b'one'])
self.assertEqual(self.redis.sort('foo', by='weight_*', get='#'),
[b'4', b'3', b'2', b'1'])
self.assertEqual(
self.redis.sort('foo', by='weight_*', get=('data_*', '#')),
[b'four', b'4', b'three', b'3', b'two', b'2', b'one', b'1'])
self.assertEqual(self.redis.sort('foo', by='weight_*', get='data_1'),
[None, None, None, None])
def test_sort_with_hash(self):
self.redis.rpush('foo', 'middle')
self.redis.rpush('foo', 'eldest')
self.redis.rpush('foo', 'youngest')
self.redis.hset('record_youngest', 'age', 1)
self.redis.hset('record_youngest', 'name', 'baby')
self.redis.hset('record_middle', 'age', 10)
self.redis.hset('record_middle', 'name', 'teen')
self.redis.hset('record_eldest', 'age', 20)
self.redis.hset('record_eldest', 'name', 'adult')
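# The '->' syntax in BY/GET patterns dereferences a field of the
# matched hash key (record_<element>) instead of a plain string key.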
self.assertEqual(self.redis.sort('foo', by='record_*->age'),
[b'youngest', b'middle', b'eldest'])
self.assertEqual(
self.redis.sort('foo', by='record_*->age', get='record_*->name'),
[b'baby', b'teen', b'adult'])
def test_sort_with_set(self):
self.redis.sadd('foo', '3')
self.redis.sadd('foo', '1')
self.redis.sadd('foo', '2')
self.assertEqual(self.redis.sort('foo'), [b'1', b'2', b'3'])
def test_pipeline(self):
# The pipeline method returns an object for
# issuing multiple commands in a batch.
p = self.redis.pipeline()
p.watch('bam')
p.multi()
p.set('foo', 'bar').get('foo')
p.lpush('baz', 'quux')
p.lpush('baz', 'quux2').lrange('baz', 0, -1)
res = p.execute()
# Check that the return values come back as a list.
self.assertEqual(res, [True, b'bar', 1, 2, [b'quux2', b'quux']])
# Check side effects happened as expected.
self.assertEqual(self.redis.lrange('baz', 0, -1), [b'quux2', b'quux'])
# Check that the command buffer has been emptied.
self.assertEqual(p.execute(), [])
def test_pipeline_ignore_errors(self):
"""Test the pipeline ignoring errors when asked."""
with self.redis.pipeline() as p:
p.set('foo', 'bar')
p.rename('baz', 'bats')
with self.assertRaises(redis.exceptions.ResponseError):
p.execute()
self.assertEqual([], p.execute())
with self.redis.pipeline() as p:
p.set('foo', 'bar')
p.rename('baz', 'bats')
res = p.execute(raise_on_error=False)
self.assertEqual([], p.execute())
self.assertEqual(len(res), 2)
self.assertIsInstance(res[1], redis.exceptions.ResponseError)
def test_multiple_successful_watch_calls(self):
p = self.redis.pipeline()
p.watch('bam')
p.multi()
p.set('foo', 'bar')
# Check that the watched keys buffer has been emptied.
p.execute()
# bam is no longer being watched, so it's ok to modify
# it now.
p.watch('foo')
self.redis.set('bam', 'boo')
p.multi()
p.set('foo', 'bats')
self.assertEqual(p.execute(), [True])
def test_pipeline_non_transactional(self):
# For our simple-minded model I don't think
# there is any observable difference.
p = self.redis.pipeline(transaction=False)
res = p.set('baz', 'quux').get('baz').execute()
self.assertEqual(res, [True, b'quux'])
def test_pipeline_raises_when_watched_key_changed(self):
self.redis.set('foo', 'bar')
self.redis.rpush('greet', 'hello')
p = self.redis.pipeline()
self.addCleanup(p.reset)
p.watch('greet', 'foo')
nextf = fakeredis.to_bytes(p.get('foo')) + b'baz'
# Simulate change happening on another thread.
self.redis.rpush('greet', 'world')
# Begin pipelining.
p.multi()
p.set('foo', nextf)
with self.assertRaises(redis.WatchError):
p.execute()
def test_pipeline_succeeds_despite_unwatched_key_changed(self):
# Same setup as before except for the params to the WATCH command.
self.redis.set('foo', 'bar')
self.redis.rpush('greet', 'hello')
p = self.redis.pipeline()
try:
# Only watch one of the 2 keys.
p.watch('foo')
nextf = fakeredis.to_bytes(p.get('foo')) + b'baz'
# Simulate change happening on another thread.
self.redis.rpush('greet', 'world')
p.multi()
p.set('foo', nextf)
p.execute()
# Check the commands were executed.
self.assertEqual(self.redis.get('foo'), b'barbaz')
finally:
p.reset()
def test_pipeline_succeeds_when_watching_nonexistent_key(self):
self.redis.set('foo', 'bar')
self.redis.rpush('greet', 'hello')
p = self.redis.pipeline()
try:
# Also watch a nonexistent key.
p.watch('foo', 'bam')
nextf = fakeredis.to_bytes(p.get('foo')) + b'baz'
# Simulate change happening on another thread.
self.redis.rpush('greet', 'world')
p.multi()
p.set('foo', nextf)
p.execute()
# Check the commands were executed.
self.assertEqual(self.redis.get('foo'), b'barbaz')
finally:
p.reset()
def test_watch_state_is_cleared_across_multiple_watches(self):
self.redis.set('foo', 'one')
self.redis.set('bar', 'baz')
p = self.redis.pipeline()
self.addCleanup(p.reset)
p.watch('foo')
# Simulate change happening on another thread.
self.redis.set('foo', 'three')
p.multi()
p.set('foo', 'three')
with self.assertRaises(redis.WatchError):
p.execute()
# Now watch another key. It should be ok to change
# foo as we're no longer watching it.
p.watch('bar')
self.redis.set('foo', 'four')
p.multi()
p.set('bar', 'five')
self.assertEqual(p.execute(), [True])
def test_pipeline_proxies_to_redis_object(self):
p = self.redis.pipeline()
self.assertTrue(hasattr(p, 'zadd'))
with self.assertRaises(AttributeError):
p.non_existent_attribute
def test_pipeline_as_context_manager(self):
self.redis.set('foo', 'bar')
with self.redis.pipeline() as p:
p.watch('foo')
self.assertTrue(isinstance(p, redis.client.BasePipeline)
or p.need_reset)
p.multi()
p.set('foo', 'baz')
p.execute()
# Usually you would consider the pipeline to have been destroyed
# after the with statement, but we need to check it was reset properly:
self.assertTrue(isinstance(p, redis.client.BasePipeline)
or not p.need_reset)
def test_pipeline_transaction_shortcut(self):
# This example is taken pretty much from the redis-py documentation.
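# transaction() runs the callable inside a watched pipeline and retries
# it on WatchError; the first two calls simulate interference from
# another thread, so the increment only commits on the third attempt
# (13 -> 16, calls == 3).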
self.redis.set('OUR-SEQUENCE-KEY', 13)
calls = []
def client_side_incr(pipe):
calls.append((pipe,))
current_value = pipe.get('OUR-SEQUENCE-KEY')
next_value = int(current_value) + 1
if len(calls) < 3:
# Simulate a change from another thread.
self.redis.set('OUR-SEQUENCE-KEY', next_value)
pipe.multi()
pipe.set('OUR-SEQUENCE-KEY', next_value)
res = self.redis.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
self.assertEqual([True], res)
self.assertEqual(16, int(self.redis.get('OUR-SEQUENCE-KEY')))
self.assertEqual(3, len(calls))
def test_key_patterns(self):
self.redis.mset({'one': 1, 'two': 2, 'three': 3, 'four': 4})
self.assertItemsEqual(self.redis.keys('*o*'),
[b'four', b'one', b'two'])
self.assertItemsEqual(self.redis.keys('t??'), [b'two'])
self.assertItemsEqual(self.redis.keys('*'),
[b'four', b'one', b'two', b'three'])
self.assertItemsEqual(self.redis.keys(),
[b'four', b'one', b'two', b'three'])
def test_ping(self):
self.assertTrue(self.redis.ping())
def test_type(self):
self.redis.set('string_key', "value")
self.redis.lpush("list_key", "value")
self.redis.sadd("set_key", "value")
self.redis.zadd("zset_key", 1, "value")
self.redis.hset('hset_key', 'key', 'value')
self.assertEqual(self.redis.type('string_key'), b'string')
self.assertEqual(self.redis.type('list_key'), b'list')
self.assertEqual(self.redis.type('set_key'), b'set')
self.assertEqual(self.redis.type('zset_key'), b'zset')
self.assertEqual(self.redis.type('hset_key'), b'hash')
self.assertEqual(self.redis.type('none_key'), b'none')
@attr('slow')
def test_pubsub_subscribe(self):
pubsub = self.redis.pubsub()
pubsub.subscribe("channel")
sleep(1)
expected_message = {'type': 'subscribe', 'pattern': None,
'channel': b'channel', 'data': 1}
message = pubsub.get_message()
keys = list(pubsub.channels.keys())
key = keys[0]
if not self.decode_responses:
key = (key if type(key) == bytes
else bytes(key, encoding='utf-8'))
self.assertEqual(len(keys), 1)
self.assertEqual(key, b'channel')
self.assertEqual(message, expected_message)
@attr('slow')
def test_pubsub_psubscribe(self):
pubsub = self.redis.pubsub()
pubsub.psubscribe("channel.*")
sleep(1)
expected_message = {'type': 'psubscribe', 'pattern': None,
'channel': b'channel.*', 'data': 1}
message = pubsub.get_message()
keys = list(pubsub.patterns.keys())
self.assertEqual(len(keys), 1)
self.assertEqual(message, expected_message)
@attr('slow')
def test_pubsub_unsubscribe(self):
pubsub = self.redis.pubsub()
pubsub.subscribe('channel-1', 'channel-2', 'channel-3')
sleep(1)
expected_message = {'type': 'unsubscribe', 'pattern': None,
'channel': b'channel-1', 'data': 2}
pubsub.get_message()
pubsub.get_message()
pubsub.get_message()
# unsubscribe from one
pubsub.unsubscribe('channel-1')
sleep(1)
message = pubsub.get_message()
keys = list(pubsub.channels.keys())
self.assertEqual(message, expected_message)
self.assertEqual(len(keys), 2)
# unsubscribe from all remaining channels
pubsub.unsubscribe()
sleep(1)
pubsub.get_message()
pubsub.get_message()
keys = list(pubsub.channels.keys())
self.assertEqual(message, expected_message)
self.assertEqual(len(keys), 0)
@attr('slow')
def test_pubsub_punsubscribe(self):
pubsub = self.redis.pubsub()
pubsub.psubscribe('channel-1.*', 'channel-2.*', 'channel-3.*')
sleep(1)
expected_message = {'type': 'punsubscribe', 'pattern': None,
'channel': b'channel-1.*', 'data': 2}
pubsub.get_message()
pubsub.get_message()
pubsub.get_message()
# unsubscribe from one
pubsub.punsubscribe('channel-1.*')
sleep(1)
message = pubsub.get_message()
keys = list(pubsub.patterns.keys())
self.assertEqual(message, expected_message)
self.assertEqual(len(keys), 2)
# unsubscribe from all remaining patterns
pubsub.punsubscribe()
sleep(1)
pubsub.get_message()
pubsub.get_message()
keys = list(pubsub.patterns.keys())
self.assertEqual(len(keys), 0)
@attr('slow')
def test_pubsub_listen(self):
def _listen(pubsub, q):
count = 0
for message in pubsub.listen():
q.put(message)
count += 1
if count == 4:
pubsub.close()
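# A single publish to 'ch1' is delivered once for the channel
# subscription and once for each of the three matching patterns, so the
# listener closes the pubsub after four messages.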
channel = 'ch1'
patterns = ['ch1*', 'ch[1]', 'ch?']
pubsub = self.redis.pubsub()
pubsub.subscribe(channel)
pubsub.psubscribe(*patterns)
sleep(1)
msg1 = pubsub.get_message()
msg2 = pubsub.get_message()
msg3 = pubsub.get_message()
msg4 = pubsub.get_message()
self.assertEqual(msg1['type'], 'subscribe')
self.assertEqual(msg2['type'], 'psubscribe')
self.assertEqual(msg3['type'], 'psubscribe')
self.assertEqual(msg4['type'], 'psubscribe')
q = Queue()
t = threading.Thread(target=_listen, args=(pubsub, q))
t.start()
msg = 'hello world'
self.redis.publish(channel, msg)
t.join()
msg1 = q.get()
msg2 = q.get()
msg3 = q.get()
msg4 = q.get()
if self.decode_responses:
bpatterns = patterns + [channel]
else:
bpatterns = [pattern.encode() for pattern in patterns]
bpatterns.append(channel.encode())
msg = msg.encode()
self.assertEqual(msg1['data'], msg)
self.assertIn(msg1['channel'], bpatterns)
self.assertEqual(msg2['data'], msg)
self.assertIn(msg2['channel'], bpatterns)
self.assertEqual(msg3['data'], msg)
self.assertIn(msg3['channel'], bpatterns)
self.assertEqual(msg4['data'], msg)
self.assertIn(msg4['channel'], bpatterns)
@attr('slow')
def test_pubsub_ignore_sub_messages_listen(self):
def _listen(pubsub, q):
count = 0
for message in pubsub.listen():
q.put(message)
count += 1
if count == 4:
pubsub.close()
channel = 'ch1'
patterns = ['ch1*', 'ch[1]', 'ch?']
pubsub = self.redis.pubsub(ignore_subscribe_messages=True)
pubsub.subscribe(channel)
pubsub.psubscribe(*patterns)
sleep(1)
q = Queue()
t = threading.Thread(target=_listen, args=(pubsub, q))
t.start()
msg = 'hello world'
self.redis.publish(channel, msg)
t.join()
msg1 = q.get()
msg2 = q.get()
msg3 = q.get()
msg4 = q.get()
if self.decode_responses:
bpatterns = patterns + [channel]
else:
bpatterns = [pattern.encode() for pattern in patterns]
bpatterns.append(channel.encode())
msg = msg.encode()
self.assertEqual(msg1['data'], msg)
self.assertIn(msg1['channel'], bpatterns)
self.assertEqual(msg2['data'], msg)
self.assertIn(msg2['channel'], bpatterns)
self.assertEqual(msg3['data'], msg)
self.assertIn(msg3['channel'], bpatterns)
self.assertEqual(msg4['data'], msg)
self.assertIn(msg4['channel'], bpatterns)
def test_pfadd(self):
key = "hll-pfadd"
self.assertEqual(
1, self.redis.pfadd(key, "a", "b", "c", "d", "e", "f", "g"))
self.assertEqual(7, self.redis.pfcount(key))
def test_pfcount(self):
key1 = "hll-pfcount01"
key2 = "hll-pfcount02"
key3 = "hll-pfcount03"
self.assertEqual(1, self.redis.pfadd(key1, "foo", "bar", "zap"))
self.assertEqual(0, self.redis.pfadd(key1, "zap", "zap", "zap"))
self.assertEqual(0, self.redis.pfadd(key1, "foo", "bar"))
self.assertEqual(3, self.redis.pfcount(key1))
self.assertEqual(1, self.redis.pfadd(key2, "1", "2", "3"))
self.assertEqual(3, self.redis.pfcount(key2))
self.assertEqual(6, self.redis.pfcount(key1, key2))
self.assertEqual(1, self.redis.pfadd(key3, "foo", "bar", "zip"))
self.assertEqual(3, self.redis.pfcount(key3))
self.assertEqual(4, self.redis.pfcount(key1, key3))
self.assertEqual(7, self.redis.pfcount(key1, key2, key3))
def test_pfmerge(self):
key1 = "hll-pfmerge01"
key2 = "hll-pfmerge02"
key3 = "hll-pfmerge03"
self.assertEqual(1, self.redis.pfadd(key1, "foo", "bar", "zap", "a"))
self.assertEqual(1, self.redis.pfadd(key2, "a", "b", "c", "foo"))
self.assertTrue(self.redis.pfmerge(key3, key1, key2))
self.assertEqual(6, self.redis.pfcount(key3))
def test_scan(self):
# Set up the data
for ix in range(20):
k = 'scan-test:%s' % ix
v = 'result:%s' % ix
self.redis.set(k, v)
expected = self.redis.keys()
self.assertEqual(20, len(expected)) # Ensure we know what we're testing
# Test that we page through the results and get everything out
results = []
cursor = '0'
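# The cursor starts as the string '0' so the loop runs at least once;
# scan() returns a numeric cursor, and iteration is complete when it
# comes back as 0.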
while cursor != 0:
cursor, data = self.redis.scan(cursor, count=6)
results.extend(data)
self.assertSetEqual(set(expected), set(results))
# Now test that the MATCH functionality works
results = []
cursor = '0'
while cursor != 0:
cursor, data = self.redis.scan(cursor, match='*7', count=100)
results.extend(data)
self.assertIn(b'scan-test:7', results)
self.assertIn(b'scan-test:17', results)
self.assertEqual(2, len(results))
# Test the match on iterator
results = [r for r in self.redis.scan_iter(match='*7')]
self.assertIn(b'scan-test:7', results)
self.assertIn(b'scan-test:17', results)
self.assertEqual(2, len(results))
def test_sscan(self):
# Set up the data
name = 'sscan-test'
for ix in range(20):
k = 'sscan-test:%s' % ix
self.redis.sadd(name, k)
expected = self.redis.smembers(name)
self.assertEqual(20, len(expected)) # Ensure we know what we're testing
# Test that we page through the results and get everything out
results = []
cursor = '0'
while cursor != 0:
cursor, data = self.redis.sscan(name, cursor, count=6)
results.extend(data)
self.assertSetEqual(set(expected), set(results))
# Test the iterator version
results = [r for r in self.redis.sscan_iter(name, count=6)]
self.assertSetEqual(set(expected), set(results))
# Now test that the MATCH functionality works
results = []
cursor = '0'
while cursor != 0:
cursor, data = self.redis.sscan(name, cursor, match='*7', count=100)
results.extend(data)
self.assertIn(b'sscan-test:7', results)
self.assertIn(b'sscan-test:17', results)
self.assertEqual(2, len(results))
# Test the match on iterator
results = [r for r in self.redis.sscan_iter(name, match='*7')]
self.assertIn(b'sscan-test:7', results)
self.assertIn(b'sscan-test:17', results)
self.assertEqual(2, len(results))
def test_hscan(self):
# Set up the data
name = 'hscan-test'
for ix in range(20):
k = 'key:%s' % ix
v = 'result:%s' % ix
self.redis.hset(name, k, v)
expected = self.redis.hgetall(name)
self.assertEqual(20, len(expected)) # Ensure we know what we're testing
# Test that we page through the results and get everything out
results = {}
cursor = '0'
while cursor != 0:
cursor, data = self.redis.hscan(name, cursor, count=6)
results.update(data)
self.assertDictEqual(expected, results)
# Test the iterator version
results = {}
for key, val in self.redis.hscan_iter(name, count=6):
results[key] = val
self.assertDictEqual(expected, results)
# Now test that the MATCH functionality works
results = {}
cursor = '0'
while cursor != 0:
cursor, data = self.redis.hscan(name, cursor, match='*7', count=100)
results.update(data)
self.assertIn(b'key:7', results)
self.assertIn(b'key:17', results)
self.assertEqual(2, len(results))
# Test the match on iterator
results = {}
for key, val in self.redis.hscan_iter(name, match='*7'):
results[key] = val
self.assertIn(b'key:7', results)
self.assertIn(b'key:17', results)
self.assertEqual(2, len(results))
def test_ttl_should_return_minus_one_for_non_expiring_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.assertEqual(self.redis.ttl('foo'), -1)
def test_ttl_should_return_minus_two_for_non_existent_key(self):
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.ttl('foo'), -2)
def test_pttl_should_return_minus_one_for_non_expiring_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.assertEqual(self.redis.pttl('foo'), -1)
def test_pttl_should_return_minus_two_for_non_existent_key(self):
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.pttl('foo'), -2)
def test_persist(self):
self.redis.set('foo', 'bar', ex=20)
self.redis.persist('foo')
self.assertEqual(self.redis.ttl('foo'), -1)
def test_set_existing_key_persists(self):
self.redis.set('foo', 'bar', ex=20)
self.redis.set('foo', 'foo')
self.assertEqual(self.redis.ttl('foo'), -1)
class TestFakeRedis(unittest.TestCase):
decode_responses = False
def setUp(self):
self.redis = self.create_redis()
def tearDown(self):
self.redis.flushall()
del self.redis
def assertInRange(self, value, start, end, msg=None):
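# Inclusive bounds check, used by the pttl tests below to allow a
# small timing tolerance.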
self.assertGreaterEqual(value, start, msg)
self.assertLessEqual(value, end, msg)
def create_redis(self, db=0):
return fakeredis.FakeRedis(db=db)
def test_setex(self):
self.assertEqual(self.redis.setex('foo', 'bar', 100), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_setex_using_timedelta(self):
self.assertEqual(
self.redis.setex('foo', 'bar', timedelta(seconds=100)), True)
self.assertEqual(self.redis.get('foo'), b'bar')
def test_lrem_positive_count(self):
self.redis.lpush('foo', 'same')
self.redis.lpush('foo', 'same')
self.redis.lpush('foo', 'different')
self.redis.lrem('foo', 'same', 2)
self.assertEqual(self.redis.lrange('foo', 0, -1), [b'different'])
def test_lrem_negative_count(self):
self.redis.lpush('foo', 'removeme')
self.redis.lpush('foo', 'three')
self.redis.lpush('foo', 'two')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'removeme')
self.redis.lrem('foo', 'removeme', -1)
# Should remove 'removeme' from the end of the list,
# leaving the 'removeme' at the front of the list alone.
self.assertEqual(self.redis.lrange('foo', 0, -1),
[b'removeme', b'one', b'two', b'three'])
def test_lrem_zero_count(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 'one')
self.assertEqual(self.redis.lrange('foo', 0, -1), [])
def test_lrem_default_value(self):
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 'one')
self.assertEqual(self.redis.lrange('foo', 0, -1), [])
def test_lrem_does_not_exist(self):
self.redis.lpush('foo', 'one')
self.redis.lrem('foo', 'one')
# These should be no-ops.
self.redis.lrem('foo', 'one', -2)
self.redis.lrem('foo', 'one', 2)
def test_lrem_return_value(self):
self.redis.lpush('foo', 'one')
count = self.redis.lrem('foo', 'one', 0)
self.assertEqual(count, 1)
self.assertEqual(self.redis.lrem('foo', 'one'), 0)
def test_zadd_deprecated(self):
result = self.redis.zadd('foo', 'one', 1)
self.assertEqual(result, 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'one'])
def test_zadd_missing_required_params(self):
with self.assertRaises(redis.RedisError):
# Missing the 'score' param.
self.redis.zadd('foo', 'one')
with self.assertRaises(redis.RedisError):
# Missing the 'value' param.
self.redis.zadd('foo', None, score=1)
with self.assertRaises(redis.RedisError):
self.redis.zadd('foo')
def test_zadd_with_single_keypair(self):
result = self.redis.zadd('foo', bar=1)
self.assertEqual(result, 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'bar'])
def test_zadd_with_multiple_keypairs(self):
result = self.redis.zadd('foo', bar=1, baz=9)
self.assertEqual(result, 2)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'bar', b'baz'])
def test_zadd_with_name_is_non_string(self):
result = self.redis.zadd('foo', 1, 9)
self.assertEqual(result, 1)
self.assertEqual(self.redis.zrange('foo', 0, -1), [b'1'])
def test_set_nx_doesnt_set_value_twice(self):
self.assertEqual(self.redis.set('foo', 'bar', nx=True), True)
self.assertEqual(self.redis.set('foo', 'bar', nx=True), None)
def test_set_xx_set_value_when_exists(self):
self.assertEqual(self.redis.set('foo', 'bar', xx=True), None)
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.set('foo', 'bar', xx=True), True)
@attr('slow')
def test_set_ex_should_expire_value(self):
self.redis.set('foo', 'bar', ex=0)
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.set('foo', 'bar', ex=1)
sleep(2)
self.assertEqual(self.redis.get('foo'), None)
@attr('slow')
def test_set_px_should_expire_value(self):
self.redis.set('foo', 'bar', px=500)
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
@attr('slow')
def test_psetex_expire_value(self):
with self.assertRaises(ResponseError):
self.redis.psetex('foo', 0, 'bar')
self.redis.psetex('foo', 500, 'bar')
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
@attr('slow')
def test_psetex_expire_value_using_timedelta(self):
with self.assertRaises(ResponseError):
self.redis.psetex('foo', timedelta(seconds=0), 'bar')
self.redis.psetex('foo', timedelta(seconds=0.5), 'bar')
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
@attr('slow')
def test_expire_should_expire_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.expire('foo', 1)
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expire('bar', 1), False)
def test_expire_should_return_true_for_existing_key(self):
self.redis.set('foo', 'bar')
rv = self.redis.expire('foo', 1)
self.assertIs(rv, True)
def test_expire_should_return_false_for_missing_key(self):
rv = self.redis.expire('missing', 1)
self.assertIs(rv, False)
@attr('slow')
def test_expire_should_expire_key_using_timedelta(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.expire('foo', timedelta(seconds=1))
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expire('bar', 1), False)
@attr('slow')
def test_expire_should_expire_immediately_with_millisecond_timedelta(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.expire('foo', timedelta(milliseconds=750))
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expire('bar', 1), False)
@attr('slow')
def test_pexpire_should_expire_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.pexpire('foo', 150)
sleep(0.2)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.pexpire('bar', 1), False)
def test_pexpire_should_return_truthy_for_existing_key(self):
self.redis.set('foo', 'bar')
rv = self.redis.pexpire('foo', 1)
self.assertIs(bool(rv), True)
def test_pexpire_should_return_falsey_for_missing_key(self):
rv = self.redis.pexpire('missing', 1)
self.assertIs(bool(rv), False)
@attr('slow')
def test_pexpire_should_expire_key_using_timedelta(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.pexpire('foo', timedelta(milliseconds=750))
sleep(0.5)
self.assertEqual(self.redis.get('foo'), b'bar')
sleep(0.5)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.pexpire('bar', 1), False)
@attr('slow')
def test_expireat_should_expire_key_by_datetime(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.expireat('foo', datetime.now() + timedelta(seconds=1))
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expireat('bar', datetime.now()), False)
@attr('slow')
def test_expireat_should_expire_key_by_timestamp(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.expireat('foo', int(time() + 1))
sleep(1.5)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expire('bar', 1), False)
def test_expireat_should_return_true_for_existing_key(self):
self.redis.set('foo', 'bar')
rv = self.redis.expireat('foo', int(time() + 1))
self.assertIs(rv, True)
def test_expireat_should_return_false_for_missing_key(self):
rv = self.redis.expireat('missing', int(time() + 1))
self.assertIs(rv, False)
@attr('slow')
def test_pexpireat_should_expire_key_by_datetime(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.pexpireat('foo', datetime.now() + timedelta(milliseconds=150))
sleep(0.2)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.pexpireat('bar', datetime.now()), False)
@attr('slow')
def test_pexpireat_should_expire_key_by_timestamp(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.redis.pexpireat('foo', int(time() * 1000 + 150))
sleep(0.2)
self.assertEqual(self.redis.get('foo'), None)
self.assertEqual(self.redis.expire('bar', 1), False)
def test_pexpireat_should_return_true_for_existing_key(self):
self.redis.set('foo', 'bar')
rv = self.redis.pexpireat('foo', int(time() * 1000 + 150))
self.assertIs(bool(rv), True)
def test_pexpireat_should_return_false_for_missing_key(self):
rv = self.redis.pexpireat('missing', int(time() * 1000 + 150))
self.assertIs(bool(rv), False)
def test_ttl_should_return_none_for_non_expiring_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.assertEqual(self.redis.ttl('foo'), None)
def test_ttl_should_return_value_for_expiring_key(self):
self.redis.set('foo', 'bar')
self.redis.expire('foo', 1)
self.assertEqual(self.redis.ttl('foo'), 1)
self.redis.expire('foo', 2)
self.assertEqual(self.redis.ttl('foo'), 2)
# See https://github.com/antirez/redis/blob/unstable/src/db.c#L632
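# Very large expiration times should be stored and reported verbatim
# rather than being capped.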
ttl = 1000000000
self.redis.expire('foo', ttl)
self.assertEqual(self.redis.ttl('foo'), ttl)
def test_pttl_should_return_none_for_non_expiring_key(self):
self.redis.set('foo', 'bar')
self.assertEqual(self.redis.get('foo'), b'bar')
self.assertEqual(self.redis.pttl('foo'), None)
def test_pttl_should_return_value_for_expiring_key(self):
d = 100
self.redis.set('foo', 'bar')
self.redis.expire('foo', 1)
self.assertInRange(self.redis.pttl('foo'), 1000 - d, 1000)
self.redis.expire('foo', 2)
self.assertInRange(self.redis.pttl('foo'), 2000 - d, 2000)
ttl = 1000000000
# See https://github.com/antirez/redis/blob/unstable/src/db.c#L632
self.redis.expire('foo', ttl)
self.assertInRange(self.redis.pttl('foo'),
ttl * 1000 - d,
ttl * 1000)
def test_ttls_should_always_be_long(self):
self.redis.set('foo', 'bar')
self.redis.expire('foo', 1)
self.assertTrue(type(self.redis.ttl('foo')) is long)
self.assertTrue(type(self.redis.pttl('foo')) is long)
def test_expire_should_not_handle_floating_point_values(self):
self.redis.set('foo', 'bar')
with self.assertRaisesRegexp(
redis.ResponseError, 'value is not an integer or out of range'):
self.redis.expire('something_new', 1.2)
self.redis.pexpire('something_new', 1000.2)
self.redis.expire('some_unused_key', 1.2)
self.redis.pexpire('some_unused_key', 1000.2)
class DecodeMixin(object):
decode_responses = True
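# The shared tests assert against byte literals; decoding the expected
# values here lets the same assertions pass when decode_responses=True.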
def assertEqual(self, a, b, msg=None):
super(DecodeMixin, self).assertEqual(a, fakeredis._decode(b), msg)
def assertIn(self, member, container, msg=None):
super(DecodeMixin, self).assertIn(fakeredis._decode(member), container)
def assertItemsEqual(self, a, b):
super(DecodeMixin, self).assertItemsEqual(a, fakeredis._decode(b))
class TestFakeStrictRedisDecodeResponses(DecodeMixin, TestFakeStrictRedis):
def create_redis(self, db=0):
return fakeredis.FakeStrictRedis(db=db, decode_responses=True)
class TestFakeRedisDecodeResponses(DecodeMixin, TestFakeRedis):
def create_redis(self, db=0):
return fakeredis.FakeRedis(db=db, decode_responses=True)
@redis_must_be_running
class TestRealRedis(TestFakeRedis):
def create_redis(self, db=0):
return redis.Redis('localhost', port=6379, db=db)
@redis_must_be_running
class TestRealStrictRedis(TestFakeStrictRedis):
def create_redis(self, db=0):
return redis.StrictRedis('localhost', port=6379, db=db)
@redis_must_be_running
class TestRealRedisDecodeResponses(TestFakeRedisDecodeResponses):
def create_redis(self, db=0):
return redis.Redis('localhost', port=6379, db=db, decode_responses=True)
@redis_must_be_running
class TestRealStrictRedisDecodeResponses(TestFakeStrictRedisDecodeResponses):
def create_redis(self, db=0):
return redis.StrictRedis('localhost', port=6379, db=db, decode_responses=True)
class TestInitArgs(unittest.TestCase):
def test_can_accept_any_kwargs(self):
fakeredis.FakeRedis(foo='bar', bar='baz')
fakeredis.FakeStrictRedis(foo='bar', bar='baz')
def test_from_url(self):
db = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/0')
db.set('foo', 'bar')
self.assertEqual(db.get('foo'), b'bar')
def test_from_url_with_db_arg(self):
db = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/0')
db1 = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/1')
db2 = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/',
db=2)
db.set('foo', 'foo0')
db1.set('foo', 'foo1')
db2.set('foo', 'foo2')
self.assertEqual(db.get('foo'), b'foo0')
self.assertEqual(db1.get('foo'), b'foo1')
self.assertEqual(db2.get('foo'), b'foo2')
def test_from_url_db_value_error(self):
# On ValueError, the db number should default to 0
db = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/a')
self.assertEqual(db._db_num, 0)
def test_can_pass_through_extra_args(self):
db = fakeredis.FakeStrictRedis.from_url(
'redis://username:password@localhost:6379/0',
decode_responses=True)
db.set('foo', 'bar')
self.assertEqual(db.get('foo'), 'bar')
class TestImportation(unittest.TestCase):
def test_searches_for_c_stdlib_and_raises_if_missing(self):
"""
Verifies that fakeredis checks for both libc and msvcrt when looking for a strtod implementation and that it
fails fast when neither is found.
"""
import ctypes.util
# Patch manually since unittest.mock.patch is not available in old Python versions
old_find_library = ctypes.util.find_library
searched_libraries = set()
try:
ctypes.util.find_library = lambda library: searched_libraries.add(library)
with self.assertRaises(ImportError):
reload(fakeredis)
self.assertEqual(set(['c', 'msvcrt']), searched_libraries)
finally:
ctypes.util.find_library = old_find_library
reload(fakeredis)
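# Illustrative sketch (not part of the original test suite): TTL is reported in
# whole seconds while PTTL is reported in milliseconds, which is why the pttl
# assertions above multiply the expected TTL by 1000. A minimal standalone
# check that mirrors those tests:
def _example_ttl_vs_pttl():
    r = fakeredis.FakeStrictRedis()
    r.set('demo', 'value')
    r.expire('demo', 5)
    assert r.ttl('demo') == 5          # seconds
    assert r.pttl('demo') <= 5 * 1000  # milliseconds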
if __name__ == '__main__':
unittest.main()
|
trezor.py
|
import traceback
import sys
from typing import NamedTuple, Any, Optional, Dict, Union, List, Tuple, TYPE_CHECKING
from electrum_ltc.util import bfh, bh2u, versiontuple, UserCancelled, UserFacingException
from electrum_ltc.bip32 import BIP32Node, convert_bip32_path_to_list_of_uint32 as parse_path
from electrum_ltc import constants
from electrum_ltc.i18n import _
from electrum_ltc.plugin import Device
from electrum_ltc.transaction import Transaction, PartialTransaction, PartialTxInput, PartialTxOutput
from electrum_ltc.keystore import Hardware_KeyStore
from electrum_ltc.base_wizard import ScriptTypeNotSupported, HWD_SETUP_NEW_WALLET
from electrum_ltc.logging import get_logger
from ..hw_wallet import HW_PluginBase
from ..hw_wallet.plugin import (is_any_tx_output_on_change_branch, trezor_validate_op_return_output_and_get_data,
LibraryFoundButUnusable, OutdatedHwFirmwareException,
get_xpubs_and_der_suffixes_from_txinout)
_logger = get_logger(__name__)
try:
import trezorlib
import trezorlib.transport
from trezorlib.transport.bridge import BridgeTransport, call_bridge
from .clientbase import TrezorClientBase
from trezorlib.messages import (
RecoveryDeviceType, HDNodeType, HDNodePathType,
InputScriptType, OutputScriptType, MultisigRedeemScriptType,
TxInputType, TxOutputType, TxOutputBinType, TransactionType, SignTx)
RECOVERY_TYPE_SCRAMBLED_WORDS = RecoveryDeviceType.ScrambledWords
RECOVERY_TYPE_MATRIX = RecoveryDeviceType.Matrix
TREZORLIB = True
except Exception as e:
_logger.exception('error importing trezorlib')
TREZORLIB = False
RECOVERY_TYPE_SCRAMBLED_WORDS, RECOVERY_TYPE_MATRIX = range(2)
# Trezor initialization methods
TIM_NEW, TIM_RECOVER = range(2)
TREZOR_PRODUCT_KEY = 'Trezor'
class TrezorKeyStore(Hardware_KeyStore):
hw_type = 'trezor'
device = TREZOR_PRODUCT_KEY
plugin: 'TrezorPlugin'
def get_client(self, force_pair=True):
return self.plugin.get_client(self, force_pair)
def decrypt_message(self, sequence, message, password):
raise UserFacingException(_('Encryption and decryption are not implemented by {}').format(self.device))
def sign_message(self, sequence, message, password):
client = self.get_client()
address_path = self.get_derivation_prefix() + "/%d/%d"%sequence
msg_sig = client.sign_message(address_path, message)
return msg_sig.signature
def sign_transaction(self, tx, password):
if tx.is_complete():
return
# previous transactions used as inputs
prev_tx = {}
for txin in tx.inputs():
tx_hash = txin.prevout.txid.hex()
if txin.utxo is None and not Transaction.is_segwit_input(txin):
raise UserFacingException(_('Missing previous tx for legacy input.'))
prev_tx[tx_hash] = txin.utxo
self.plugin.sign_transaction(self, tx, prev_tx)
class TrezorInitSettings(NamedTuple):
word_count: int
label: str
pin_enabled: bool
passphrase_enabled: bool
recovery_type: Any = None
no_backup: bool = False
class TrezorPlugin(HW_PluginBase):
# Derived classes provide:
#
# class-static variables: client_class, firmware_URL, handler_class,
# libraries_available, libraries_URL, minimum_firmware,
# wallet_class, types
firmware_URL = 'https://wallet.trezor.io'
libraries_URL = 'https://github.com/trezor/python-trezor'
minimum_firmware = (1, 5, 2)
keystore_class = TrezorKeyStore
minimum_library = (0, 11, 0)
maximum_library = (0, 12)
SUPPORTED_XTYPES = ('standard', 'p2wpkh-p2sh', 'p2wpkh', 'p2wsh-p2sh', 'p2wsh')
DEVICE_IDS = (TREZOR_PRODUCT_KEY,)
MAX_LABEL_LEN = 32
def __init__(self, parent, config, name):
super().__init__(parent, config, name)
self.libraries_available = self.check_libraries_available()
if not self.libraries_available:
return
self.device_manager().register_enumerate_func(self.enumerate)
def get_library_version(self):
import trezorlib
try:
version = trezorlib.__version__
except Exception:
version = 'unknown'
if TREZORLIB:
return version
else:
raise LibraryFoundButUnusable(library_version=version)
def enumerate(self):
# If there is a bridge, prefer that.
# On Windows, the bridge runs as Admin (and Electrum usually does not),
# so the bridge has better chances of finding devices. see #5420
# This also avoids duplicate entries.
try:
call_bridge("enumerate")
except Exception:
devices = trezorlib.transport.enumerate_devices()
else:
devices = BridgeTransport.enumerate()
return [Device(path=d.get_path(),
interface_number=-1,
id_=d.get_path(),
product_key=TREZOR_PRODUCT_KEY,
usage_page=0,
transport_ui_string=d.get_path())
for d in devices]
def create_client(self, device, handler):
try:
self.logger.info(f"connecting to device at {device.path}")
transport = trezorlib.transport.get_transport(device.path)
except BaseException as e:
self.logger.info(f"cannot connect at {device.path} {e}")
return None
if not transport:
self.logger.info(f"cannot connect at {device.path}")
return
self.logger.info(f"connected to device at {device.path}")
# note that this call can still raise!
return TrezorClientBase(transport, handler, self)
def get_client(self, keystore, force_pair=True) -> Optional['TrezorClientBase']:
devmgr = self.device_manager()
handler = keystore.handler
with devmgr.hid_lock:
client = devmgr.client_for_keystore(self, handler, keystore, force_pair)
# returns the client for a given keystore. can use xpub
if client:
client.used()
return client
def get_coin_name(self):
return "Testnet" if constants.net.TESTNET else "Litecoin"
def initialize_device(self, device_id, wizard, handler):
# Initialization method
msg = _("Choose how you want to initialize your {}.").format(self.device, self.device)
choices = [
# Must be short as QT doesn't word-wrap radio button text
(TIM_NEW, _("Let the device generate a completely new seed randomly")),
(TIM_RECOVER, _("Recover from a seed you have previously written down")),
]
def f(method):
import threading
settings = self.request_trezor_init_settings(wizard, method, device_id)
t = threading.Thread(target=self._initialize_device_safe, args=(settings, method, device_id, wizard, handler))
t.setDaemon(True)
t.start()
exit_code = wizard.loop.exec_()
if exit_code != 0:
# this method (initialize_device) was called with the expectation
# of leaving the device in an initialized state when finishing.
# signal that this is not the case:
raise UserCancelled()
wizard.choice_dialog(title=_('Initialize Device'), message=msg, choices=choices, run_next=f)
def _initialize_device_safe(self, settings, method, device_id, wizard, handler):
exit_code = 0
try:
self._initialize_device(settings, method, device_id, wizard, handler)
except UserCancelled:
exit_code = 1
except BaseException as e:
self.logger.exception('')
handler.show_error(repr(e))
exit_code = 1
finally:
wizard.loop.exit(exit_code)
def _initialize_device(self, settings: TrezorInitSettings, method, device_id, wizard, handler):
if method == TIM_RECOVER and settings.recovery_type == RECOVERY_TYPE_SCRAMBLED_WORDS:
handler.show_error(_(
"You will be asked to enter 24 words regardless of your "
"seed's actual length. If you enter a word incorrectly or "
"misspell it, you cannot change it or go back - you will need "
"to start again from the beginning.\n\nSo please enter "
"the words carefully!"),
blocking=True)
devmgr = self.device_manager()
client = devmgr.client_by_id(device_id)
if not client:
raise Exception(_("The device was disconnected."))
if method == TIM_NEW:
strength_from_word_count = {12: 128, 18: 192, 24: 256}
client.reset_device(
strength=strength_from_word_count[settings.word_count],
passphrase_protection=settings.passphrase_enabled,
pin_protection=settings.pin_enabled,
label=settings.label,
no_backup=settings.no_backup)
elif method == TIM_RECOVER:
client.recover_device(
recovery_type=settings.recovery_type,
word_count=settings.word_count,
passphrase_protection=settings.passphrase_enabled,
pin_protection=settings.pin_enabled,
label=settings.label)
if settings.recovery_type == RECOVERY_TYPE_MATRIX:
handler.close_matrix_dialog()
else:
raise RuntimeError("Unsupported recovery method")
def _make_node_path(self, xpub, address_n):
bip32node = BIP32Node.from_xkey(xpub)
node = HDNodeType(
depth=bip32node.depth,
fingerprint=int.from_bytes(bip32node.fingerprint, 'big'),
child_num=int.from_bytes(bip32node.child_number, 'big'),
chain_code=bip32node.chaincode,
public_key=bip32node.eckey.get_public_key_bytes(compressed=True),
)
return HDNodePathType(node=node, address_n=address_n)
def setup_device(self, device_info, wizard, purpose):
devmgr = self.device_manager()
device_id = device_info.device.id_
client = devmgr.client_by_id(device_id)
if client is None:
raise UserFacingException(_('Failed to create a client for this device.') + '\n' +
_('Make sure it is in the correct state.'))
if not client.is_uptodate():
msg = (_('Outdated {} firmware for device labelled {}. Please '
'download the updated firmware from {}')
.format(self.device, client.label(), self.firmware_URL))
raise OutdatedHwFirmwareException(msg)
# fixme: we should use: client.handler = wizard
client.handler = self.create_handler(wizard)
if not device_info.initialized:
self.initialize_device(device_id, wizard, client.handler)
is_creating_wallet = purpose == HWD_SETUP_NEW_WALLET
client.get_xpub('m', 'standard', creating=is_creating_wallet)
client.used()
def get_xpub(self, device_id, derivation, xtype, wizard):
if xtype not in self.SUPPORTED_XTYPES:
raise ScriptTypeNotSupported(_('This type of script is not supported with {}.').format(self.device))
devmgr = self.device_manager()
client = devmgr.client_by_id(device_id)
client.handler = wizard
xpub = client.get_xpub(derivation, xtype)
client.used()
return xpub
def get_trezor_input_script_type(self, electrum_txin_type: str):
if electrum_txin_type in ('p2wpkh', 'p2wsh'):
return InputScriptType.SPENDWITNESS
if electrum_txin_type in ('p2wpkh-p2sh', 'p2wsh-p2sh'):
return InputScriptType.SPENDP2SHWITNESS
if electrum_txin_type in ('p2pkh', ):
return InputScriptType.SPENDADDRESS
if electrum_txin_type in ('p2sh', ):
return InputScriptType.SPENDMULTISIG
raise ValueError('unexpected txin type: {}'.format(electrum_txin_type))
def get_trezor_output_script_type(self, electrum_txin_type: str):
if electrum_txin_type in ('p2wpkh', 'p2wsh'):
return OutputScriptType.PAYTOWITNESS
if electrum_txin_type in ('p2wpkh-p2sh', 'p2wsh-p2sh'):
return OutputScriptType.PAYTOP2SHWITNESS
if electrum_txin_type in ('p2pkh', ):
return OutputScriptType.PAYTOADDRESS
if electrum_txin_type in ('p2sh', ):
return OutputScriptType.PAYTOMULTISIG
raise ValueError('unexpected txin type: {}'.format(electrum_txin_type))
def sign_transaction(self, keystore, tx: PartialTransaction, prev_tx):
prev_tx = { bfh(txhash): self.electrum_tx_to_txtype(tx) for txhash, tx in prev_tx.items() }
client = self.get_client(keystore)
inputs = self.tx_inputs(tx, for_sig=True, keystore=keystore)
outputs = self.tx_outputs(tx, keystore=keystore)
details = SignTx(lock_time=tx.locktime, version=tx.version)
signatures, _ = client.sign_tx(self.get_coin_name(), inputs, outputs, details=details, prev_txes=prev_tx)
signatures = [(bh2u(x) + '01') for x in signatures]
tx.update_signatures(signatures)
def show_address(self, wallet, address, keystore=None):
if keystore is None:
keystore = wallet.get_keystore()
if not self.show_address_helper(wallet, address, keystore):
return
deriv_suffix = wallet.get_address_index(address)
derivation = keystore.get_derivation_prefix()
address_path = "%s/%d/%d"%(derivation, *deriv_suffix)
script_type = self.get_trezor_input_script_type(wallet.txin_type)
# prepare multisig, if available:
xpubs = wallet.get_master_public_keys()
if len(xpubs) > 1:
pubkeys = wallet.get_public_keys(address)
# sort xpubs using the order of pubkeys
sorted_pairs = sorted(zip(pubkeys, xpubs))
multisig = self._make_multisig(
wallet.m,
[(xpub, deriv_suffix) for pubkey, xpub in sorted_pairs])
else:
multisig = None
client = self.get_client(keystore)
client.show_address(address_path, script_type, multisig)
def tx_inputs(self, tx: Transaction, *, for_sig=False, keystore: 'TrezorKeyStore' = None):
inputs = []
for txin in tx.inputs():
txinputtype = TxInputType()
if txin.is_coinbase():
prev_hash = b"\x00"*32
prev_index = 0xffffffff # signed int -1
else:
if for_sig:
assert isinstance(tx, PartialTransaction)
assert isinstance(txin, PartialTxInput)
assert keystore
if len(txin.pubkeys) > 1:
xpubs_and_deriv_suffixes = get_xpubs_and_der_suffixes_from_txinout(tx, txin)
multisig = self._make_multisig(txin.num_sig, xpubs_and_deriv_suffixes)
else:
multisig = None
script_type = self.get_trezor_input_script_type(txin.script_type)
txinputtype = TxInputType(
script_type=script_type,
multisig=multisig)
my_pubkey, full_path = keystore.find_my_pubkey_in_txinout(txin)
if full_path:
txinputtype.address_n = full_path
prev_hash = txin.prevout.txid
prev_index = txin.prevout.out_idx
if txin.value_sats() is not None:
txinputtype.amount = txin.value_sats()
txinputtype.prev_hash = prev_hash
txinputtype.prev_index = prev_index
if txin.script_sig is not None:
txinputtype.script_sig = txin.script_sig
txinputtype.sequence = txin.nsequence
inputs.append(txinputtype)
return inputs
def _make_multisig(self, m, xpubs):
if len(xpubs) == 1:
return None
pubkeys = [self._make_node_path(xpub, deriv) for xpub, deriv in xpubs]
return MultisigRedeemScriptType(
pubkeys=pubkeys,
signatures=[b''] * len(pubkeys),
m=m)
def tx_outputs(self, tx: PartialTransaction, *, keystore: 'TrezorKeyStore'):
def create_output_by_derivation():
script_type = self.get_trezor_output_script_type(txout.script_type)
if len(txout.pubkeys) > 1:
xpubs_and_deriv_suffixes = get_xpubs_and_der_suffixes_from_txinout(tx, txout)
multisig = self._make_multisig(txout.num_sig, xpubs_and_deriv_suffixes)
else:
multisig = None
my_pubkey, full_path = keystore.find_my_pubkey_in_txinout(txout)
assert full_path
txoutputtype = TxOutputType(
multisig=multisig,
amount=txout.value,
address_n=full_path,
script_type=script_type)
return txoutputtype
def create_output_by_address():
txoutputtype = TxOutputType()
txoutputtype.amount = txout.value
if address:
txoutputtype.script_type = OutputScriptType.PAYTOADDRESS
txoutputtype.address = address
else:
txoutputtype.script_type = OutputScriptType.PAYTOOPRETURN
txoutputtype.op_return_data = trezor_validate_op_return_output_and_get_data(txout)
return txoutputtype
outputs = []
has_change = False
any_output_on_change_branch = is_any_tx_output_on_change_branch(tx)
for txout in tx.outputs():
address = txout.address
use_create_by_derivation = False
if txout.is_mine and not has_change:
# prioritise hiding outputs on the 'change' branch from user
# because no more than one change address is allowed
# note: ^ restriction can be removed once we require fw
# that has https://github.com/trezor/trezor-mcu/pull/306
if txout.is_change == any_output_on_change_branch:
use_create_by_derivation = True
has_change = True
if use_create_by_derivation:
txoutputtype = create_output_by_derivation()
else:
txoutputtype = create_output_by_address()
outputs.append(txoutputtype)
return outputs
def electrum_tx_to_txtype(self, tx: Optional[Transaction]):
t = TransactionType()
if tx is None:
# probably for segwit input and we don't need this prev txn
return t
tx.deserialize()
t.version = tx.version
t.lock_time = tx.locktime
t.inputs = self.tx_inputs(tx)
t.bin_outputs = [
TxOutputBinType(amount=o.value, script_pubkey=o.scriptpubkey)
for o in tx.outputs()
]
return t
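# Illustrative sketch (not part of the plugin): the change-output heuristic from
# tx_outputs() above, isolated as a plain function. At most one of our own
# outputs is sent "by derivation" so the device can treat it as change; if any
# output sits on the change branch the heuristic insists on a change output,
# otherwise the first output that is ours is used. The _Output tuple is only a
# stand-in for electrum's PartialTxOutput.
from collections import namedtuple
_Output = namedtuple('_Output', ['is_mine', 'is_change'])
def _select_change_like_output(outputs):
    any_on_change_branch = any(o.is_change for o in outputs)
    for idx, o in enumerate(outputs):
        if o.is_mine and o.is_change == any_on_change_branch:
            return idx  # this output is hidden from the user as "change"
    return None
# Example: with one change output present, it is the one selected.
assert _select_change_like_output(
    [_Output(True, False), _Output(True, True), _Output(False, False)]) == 1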
|
autoreload.py
|
# -*- coding: utf-8 -*-
#
# Copyright (C)2006-2009 Edgewall Software
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://trac.edgewall.org/wiki/TracLicense.
#
# This software consists of voluntary contributions made by many
# individuals. For the exact contribution history, see the revision
# history and logs, available at http://trac.edgewall.org/log/.
import os
import sys
import threading
import time
import traceback
_SLEEP_TIME = 1
def _reloader_thread(modification_callback, loop_callback):
"""When this function is run from the main thread, it will force other
threads to exit when any modules currently loaded change.
@param modification_callback: a function taking a single argument, the
modified file, which is called every time a modification is detected
@param loop_callback: a function taking no arguments, which is called
after every modification check
"""
mtimes = {}
while True:
for filename in filter(None, [getattr(module, '__file__', None)
for module in sys.modules.values()]):
while not os.path.isfile(filename): # Probably in an egg or zip file
filename = os.path.dirname(filename)
if not filename:
break
if not filename: # Couldn't map to physical file, so just ignore
continue
if filename.endswith(('.pyc', '.pyo')):
filename = filename[:-1]
if not os.path.isfile(filename):
# Compiled file for non-existent source
continue
mtime = os.stat(filename).st_mtime
if filename not in mtimes:
mtimes[filename] = mtime
continue
if mtime > mtimes[filename]:
modification_callback(filename)
sys.exit(3)
loop_callback()
time.sleep(_SLEEP_TIME)
def _restart_with_reloader():
while True:
args = [sys.executable] + sys.argv
if sys.platform == 'win32':
args = ['"%s"' % arg for arg in args]
new_environ = os.environ.copy()
new_environ['RUN_MAIN'] = 'true'
# This call reinvokes ourself and goes into the other branch of main as
# a new process.
exit_code = os.spawnve(os.P_WAIT, sys.executable,
args, new_environ)
if exit_code != 3:
return exit_code
def main(func, modification_callback, *args, **kwargs):
"""Run the given function and restart any time modules are changed."""
if os.environ.get('RUN_MAIN'):
exit_code = []
def main_thread():
try:
func(*args, **kwargs)
exit_code.append(None)
except SystemExit, e:
exit_code.append(e.code)
except:
traceback.print_exception(*sys.exc_info())
exit_code.append(1)
def check_exit():
if exit_code:
sys.exit(exit_code[0])
# Launch the actual program as a child thread
thread = threading.Thread(target=main_thread, name='Main thread')
thread.setDaemon(True)
thread.start()
try:
# Now wait for a file modification and quit
_reloader_thread(modification_callback, check_exit)
except KeyboardInterrupt:
pass
else:
# Initial invocation just waits around restarting this executable
try:
sys.exit(_restart_with_reloader())
except KeyboardInterrupt:
pass
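# Illustrative sketch (not part of the original module): typical usage of
# main() above. The parent process loops in _restart_with_reloader(); the child
# copy (RUN_MAIN=true) runs the worker in a daemon thread and exits with code 3
# as soon as _reloader_thread() notices a modified module, so the parent spawns
# a fresh copy. _example_worker is a hypothetical stand-in for a real serve loop.
def _example_worker():
    while True:
        time.sleep(60)  # hypothetical work loop
def _example_on_change(filename):
    sys.stderr.write('Detected change in %s, restarting...\n' % filename)
if __name__ == '__main__':
    main(_example_worker, _example_on_change)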
|
main.py
|
import sys
import os
import ode
import logging
import threading
from time import sleep, time
from genie_python.genie_startup import *
import pv_server
import render
from configurations import config_zoom as config
from collide import collide, CollisionDetector
from geometry import GeometryBox
from move import move_all
sys.path.insert(0, os.path.abspath(os.environ["MYDIRCD"]))
from monitor import Monitor
from server_common.loggers.isis_logger import IsisLogger
logging.basicConfig(level=logging.INFO,
format='%(asctime)s (%(threadName)-2s) %(message)s',
)
def auto_seek(start_step_size, start_values, end_value, geometries, moves, axis_index, ignore, fine_step=None):
limit = end_value
current_value = start_values[axis_index]
if current_value == end_value:
return end_value
values = start_values[:]
last_value = None
old_points = None
step_checked = False
if current_value < end_value:
# Going up
def comp(a, b):
return a < b
step_size = abs(start_step_size)
else:
# Going down
def comp(a, b):
return a > b
step_size = -abs(start_step_size)
while last_value is None or comp(last_value, end_value):
# Move if we need to
if last_value is not None:
current_value += step_size
# print "Using step size of %f" % step_size
else:
current_value = start_values[axis_index]
if not comp(current_value, end_value):
current_value = end_value
values[axis_index] = current_value
move_all(geometries, moves, values=values[:])
# Check nothing moved too far
if step_checked is False:
new_points = [g.get_vertices() for g in geometries]
if old_points is not None:
delta = max_delta(geometries, new_points, old_points)
if delta > start_step_size:
# Work out a new step size
step_size *= start_step_size/delta
last_value = None
continue
step_checked = True
# Check for collisions
collisions = collide(geometries, ignore)
if any(collisions):
if current_value == start_values[axis_index]:
# There was already a collision
limit = current_value
break
elif fine_step and fine_step < step_size:
start_values[axis_index] = last_value
limit = auto_seek(fine_step, start_values, current_value, geometries, moves, axis_index, ignore)
else:
limit = last_value
break
old_points = new_points[:]
last_value = current_value
# print "Found limits for axis %d using step size of %f" % (axis_index, step_size)
if limit is None:
raise ValueError("Null limit")
return limit
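# Illustrative sketch (not part of the original script): the step-size
# adaptation used inside auto_seek() above, isolated. If a trial move displaced
# any geometry vertex further than the requested step size, the step is scaled
# down proportionally and the move is retried, so a collision cannot be jumped
# over between two checks.
def _rescale_step(step_size, requested_step, observed_delta):
    if observed_delta > requested_step:
        return step_size * requested_step / observed_delta
    return step_size
# Example: requesting 1.0-unit steps but observing a 4.0-unit vertex move
# shrinks the next step to a quarter of its previous size.
assert _rescale_step(1.0, 1.0, 4.0) == 0.25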
def max_delta(geometries, new_points, old_points):
# Calculate the greatest position deltas
delta = 0
for j in range(len(geometries)):
old = old_points[j]
new = new_points[j]
deltas = [map(float, n - o) for n, o in zip(new, old)]
for i, (x, y, z) in enumerate(deltas):
mag = float(x) ** 2 + float(y) ** 2 + float(z) ** 2
if mag > delta:
delta = mag
# print "New max delta of %f (%f, %f, %f) for body %d at %s from %s" % \
# (mag ** 0.5, x, y, z, j, new[i], old[i])
delta = float(delta) ** 0.5
return delta
def compare(sign):
if sign > 0:
return lambda a, b: a > b
else:
return lambda a, b: a < b
def auto_seek_limits(geometries, ignore, moves, values, limits, coarse=1.0, fine=0.1):
dynamic_limits = []
for i in range(len(values)):
logging.debug("Seeking for axis %d" % i)
lower_limit = auto_seek(coarse, values[:], min(limits[i]), geometries, moves, i, ignore, fine)
upper_limit = auto_seek(coarse, values[:], max(limits[i]), geometries, moves, i, ignore, fine)
dynamic_limits.append([lower_limit, upper_limit])
logging.debug("Found limits for axis %d at %s, %s" % (i, upper_limit, lower_limit))
return dynamic_limits
def look_ahead(start_values, pvs, is_moving, geometries, moves, ignore, max_movement=1.0, max_time=10., time_step=0.1):
# Get the indices of the axes currently moving
moving = [i for i, m in enumerate(is_moving) if m == 0] # DMOV = 0 while a motor is still moving
msg = "No collisions predicted in the next %fs" % max_time
safe_time = max_time
safe = True
# Only worth calculating if more than one axis is moving
if len(moving) > 1:
set_points = [None] * len(pvs)
speeds = [None] * len(pvs)
directions = [None] * len(pvs)
# Assume everything has finished moving
move_complete = [True] * len(pvs)
# Get some settings:
for i in moving:
pv = pvs[i]
set_point = get_pv(pv + '.DVAL')
speed = get_pv(pv + '.VELO')
direction = 0.
move = set_point - start_values[i]
if move > 0:
direction = 1.
if move < 0:
direction = -1.
set_points[i] = set_point
speeds[i] = speed
directions[i] = direction
# This axis has not finished moving!
move_complete[i] = False
current_time = 0.
values = start_values[:]
old_points = None
step_checked = False
last_time = None
while current_time < max_time:
if last_time is None:
values = start_values[:]
current_time = 0.
old_points = None
else:
current_time += time_step
for i in moving:
if move_complete[i] is False:
values[i] = start_values[i] + (directions[i] * speeds[i] * current_time)
comp = compare(directions[i])(values[i], set_points[i])
if comp:
values[i] = set_points[i]
# Move the bodies
move_all(geometries, moves, values=values)
if step_checked is False:
new_points = [g.get_vertices() for g in geometries]
if old_points is not None:
delta = max_delta(geometries, new_points, old_points)
if delta > max_movement:
# Reduce the size of the time step
time_step *= max_movement/delta
# Reset to starting point
last_time = None
old_points = None
continue
step_checked = True
# Check for collisions
collisions = collide(geometries, ignore)
if any(collisions):
if last_time is None:
msg = "There is already a collision"
safe_time = 0.
else:
msg = "Collision expected in %.1fs - %.1fs" % (last_time, current_time)
safe_time = last_time
safe = False
break
old_points = new_points[:]
last_time = current_time
return msg, safe_time, safe
# Set the high and low dial limits for each motor
def set_limits(limits, pvs):
for limit, pv in zip(limits, pvs):
set_pv(pv + '.DLLM', limit[0])
set_pv(pv + '.DHLM', limit[1])
# Contains operating mode events
class OperatingMode(object):
def __init__(self):
# Close event to be triggered by the render thread
self.close = threading.Event()
# Set dynamic limits automatically
self.set_limits = threading.Event()
# Stop the motors on a collision
self.auto_stop = threading.Event()
# Re-calculate limits on demand
self.calc_limits = threading.Event()
def get_operation_mode(self):
return self.auto_stop.is_set(), self.set_limits.is_set(), self.close.is_set()
def set_operation_mode(self, auto_stop, set_limits, close):
if auto_stop:
self.auto_stop.set()
else:
self.auto_stop.clear()
if set_limits:
self.set_limits.set()
else:
self.set_limits.clear()
if close:
self.close.set()
else:
self.close.clear()
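# Illustrative sketch (not part of the original script): OperatingMode bundles
# three threading.Event flags into one shared object so the PV server, the
# renderer and the main loop can all read or switch modes without extra locks.
# A minimal round-trip using only the methods defined above:
def _example_operating_mode_roundtrip():
    op_mode = OperatingMode()
    op_mode.set_operation_mode(auto_stop=True, set_limits=False, close=False)
    assert op_mode.get_operation_mode() == (True, False, False)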
# The main routine to execute
def main():
# Load config:
colors = config.colors
moves = config.moves
ignore = config.ignore
pvs = config.pvs
config_limits = config.hardlimits
old_limits = config_limits[:]
# Create space objects for the live and rendered world
space = ode.Space()
render_space = ode.Space()
collision_space = ode.Space()
# Create and populate lists of geometries
geometries = []
render_geometries = []
collision_geometries = []
for i, geometry in enumerate(config.geometries):
geometries.append(GeometryBox(space, oversize=config.oversize, **geometry))
render_geometries.append(GeometryBox(render_space, **geometry))
collision_geometries.append(GeometryBox(collision_space, oversize=config.oversize, **geometry))
# Create and populate two lists of monitors
monitors = []
is_moving = []
for pv in pvs:
m = Monitor(pv + ".DRBV")
m.start()
monitors.append(m)
any_moving = Monitor(pv + ".DMOV")
any_moving.start()
is_moving.append(any_moving)
# Create a shared operating mode object to control the main thread
op_mode = OperatingMode()
# Set the default behaviour to set_limits as calculated, and auto_stop on collision
op_mode.set_limits.set()
op_mode.auto_stop.set()
# Start a logger
logger = IsisLogger()
# Create a shared render parameter object to update the render thread
parameters = render.RenderParams()
if 'blind' not in sys.argv:
# Initialise the render thread, and set it to daemon - won't prevent the main thread from exiting
renderer = render.Renderer(parameters, render_geometries, colors, monitors, pvs, moves, op_mode)
renderer.daemon = True
# Need to know if this is the first execution of the main loop
op_mode.calc_limits.set()
# Initialise the pv server
# Loop over the pvdb and update the counts based on the number of axes/bodies
for pv in pv_server.pvdb:
for key, val in pv_server.pvdb[pv].items():
if key == 'count':
if val is pv_server.axis_count:
pv_server.pvdb[pv]['count'] = len(config.pvs)
if val is pv_server.body_count:
pv_server.pvdb[pv]['count'] = len(config.geometries)
driver = pv_server.start_thread(config.control_pv, op_mode)
driver.setParam('OVERSIZE', config.oversize)
driver.setParam('COARSE', config.coarse)
driver.setParam('FINE', config.fine)
driver.setParam('NAMES', [g['name'] for g in config.geometries])
# Only report for new collisions
collision_detector = CollisionDetector(driver, collision_geometries, config.moves, monitors, config.ignore,
is_moving, logger, op_mode, config.pvs)
collision_detector.start()
# Main loop
while True:
# Freeze the positions of our current monitors by creating some dummies
# This stops the threads from trying to read each monitor sequentially and holding each other up
frozen = [m.value() for m in monitors]
# Execute the move
move_all(geometries, moves, values=frozen)
# Check if the oversize has been changed, ahead of any collision calcs
if driver.new_data.isSet():
for geometry, collision_geometry in zip(geometries, collision_geometries):
geometry.set_size(oversize=driver.getParam('OVERSIZE'))
collision_geometry.set_size(oversize=driver.getParam('OVERSIZE'))
driver.new_data.clear()
op_mode.calc_limits.set()
if driver.getParam("CALC") != 0:
op_mode.calc_limits.set()
collisions = collision_detector.collisions[:]
collision_message = collision_detector.message[:]
# Check if there have been any changes to the .DMOV monitors
fresh = any([m.fresh() for m in is_moving])
# Check if any of the motors are currently moving
moving = [not m.value() for m in is_moving] # Invert because DMOV is inverted from MOVN
any_moving = any(moving)
new_limits = []
if fresh or any_moving or op_mode.calc_limits.isSet():
# Look ahead some time to see if any collisions are going to happen in the future
msg, safe_time, safe = look_ahead(frozen, config.pvs, moving, geometries, moves, ignore,
max_movement=driver.getParam('COARSE'))
if not safe and not any(collisions):
logger.write_to_log(msg, "MAJOR", "COLLIDE")
driver.setParam('MSG', msg)
else:
driver.setParam('MSG', collision_message)
logging.info(msg)
# Start timing for diagnostics
time_passed = time()
# Seek the correct limit values
dynamic_limits = auto_seek_limits(geometries, ignore, moves, frozen, config_limits,
coarse=driver.getParam('COARSE'), fine=driver.getParam('FINE'))
# Calculate and log the time taken to calculate
time_passed = (time() - time_passed) * 1000
# Log the new limits
logging.info("New limits calculated in %dms, are %s" % (time_passed, dynamic_limits))
# Set the limits according to the set_limits operating mode
if op_mode.set_limits.is_set():
# Apply the calculated limits
new_limits = dynamic_limits[:]
else:
# Restore the configuration limits
new_limits = config_limits[:]
# Update the render thread parameters
parameters.update_params(dynamic_limits, collisions, time_passed)
# Update the PVs
driver.setParam('TIME', time_passed)
driver.setParam('HI_LIM', [l[1] for l in dynamic_limits])
driver.setParam('LO_LIM', [l[0] for l in dynamic_limits])
driver.setParam('TRAVEL', [min([l[0] - m, l[1] - m], key=abs)
for l, m in zip(dynamic_limits, frozen)])
driver.setParam('TRAV_F', [l[1] - m for l, m in zip(dynamic_limits, frozen)])
driver.setParam('TRAV_R', [l[0] - m for l, m in zip(dynamic_limits, frozen)])
driver.updatePVs()
if 'blind' not in sys.argv:
# On the first run, start the renderer
if renderer.is_alive() is False:
renderer.start()
op_mode.calc_limits.clear()
driver.setParam("CALC", False)
else:
# Restore the configuration limits
if op_mode.set_limits.is_set() is False:
new_limits = config_limits[:]
# Stop us overloading the limits
if not new_limits == old_limits:
threading.Thread(target=set_limits, args=(new_limits, pvs)).start()
old_limits = new_limits[:]
# Exit the program
if op_mode.close.is_set():
# Restore the configuration limits
set_limits(config_limits, pvs)
return
# Give the CPU a break
sleep(0.01)
if 'return' in sys.argv:
return
# Execute main
main()
|
_wood.py
|
from __future__ import annotations
from typing import Union
import sys
import inspect
import json
from threading import Thread
import math
import select
import struct
from interfaces import ASerial, AMessage, AMedium
from utils import util, dbgprint, errprint
from . import encoder
AnsProtocol = {
'dump': '10',
'locals': '11',
'receivesession': '22',
'rmvbp': '07',
'run': 'GO!\n',
'step': 'STEP!\n',
'pause': 'PAUSE!\n'
}
Interrupts = {
'addbp': '06',
'dump': '10',
'offset': '0x23',
'locals': '11',
'receivesession': '22',
'recvproxies': '25',
'rmvbp': '07',
'run': '01',
'step': '04',
'until': '05',
'updateModule' : '24',
'pause': '03'
}
proxy_config = None
class WOODManager(ASerial):
def __init__(self):
super().__init__()
self.__offset = None
self.__max_bytes = 1024  # FIXME: at 20 bytes we get an issue due to pc_serialization
self.__medium = None
self.__redirectdbg = None
self.__connected = False
self.__eventlistener = None
self.__event_handler = None
#properties
@property
def medium(self) -> AMedium:
return self.__medium
@medium.setter
def medium(self, m):
self.__medium = m
self.__medium.serializer = self
@property
def max_bytes(self):
return self.__max_bytes
@max_bytes.setter
def max_bytes(self, v):
self.__max_bytes = v
@property
def connected(self) -> bool:
return self.__connected
@connected.setter
def connected(self, c: bool) -> None:
self.__connected = c
@property
def uses_socket(self) -> bool:
return self.medium.is_socket
@property
def debugger(self):
return self.__debugger
def set_debugger(self, d):
self.__debugger = d
def redirect_dbg(self, dbg_med):
self.__redirectdbg = dbg_med
def set_event_handler(self, eh: callable) -> None:
self.__event_handler = eh
#API
def connect(self, event_handler: Union[None, callable] = None ) -> bool:
# dbgprint("connecting..."
if not self.medium.connected:
self.connected = self.medium.start_connection(self)
if self.offset is None:
self.offset = self.__ask_for_offset()
# dbgprint(f"offset device {self.offset}")
if self.uses_socket and self.__eventlistener is None:
if event_handler is not None:
self.__event_handler = event_handler
if self.__event_handler is None:
raise ValueError('configure an event_handler')
self.__eventlistener = Thread(target=receive_events, args=(self, self.medium, self.__event_handler), daemon=True)
self.__eventlistener.start()
return self.connected
def run(self) -> bool:
run_msg = AMessage(Interrupts['run'] + '\n', receive_run_ack)
return self.medium.send(run_msg)
def pause(self):
pause_msg = AMessage(Interrupts['pause'] + '\n', receive_ack_pause)
return self.medium.send(pause_msg)
def step(self, amount = 1):
msgs = []
for _ in range(amount):
step_msg = AMessage(Interrupts['step'] + '\n', receive_step_ack)
msgs.append(step_msg)
return self.medium.send(msgs)
def remove_breakpoint(self, addr: int) -> bool:
(size_hex, addr_hex) = bp_addr_helper(self.offset, addr)
content = Interrupts['rmvbp'] + size_hex[2:] + addr_hex[2:] + '\n'
rmbp_msg = AMessage(content.upper(), receive_rmvbp)
return self.medium.send(rmbp_msg)
def add_breakpoint(self, addr: int) -> bool:
# dbgprint(f'addr {addr} offset {self.offset}')
(size_hex, addr_hex) = bp_addr_helper(self.offset, addr)
content = Interrupts['addbp'] + size_hex[2:] + addr_hex[2:] + '\n'
addbp_msg = AMessage(content.upper(), receive_addbp)
return self.medium.send(addbp_msg)
def halt(self, state):
raise NotImplementedError
def upload(self, wasm: bytes, config: dict) -> None:
global proxy_config
proxy_config = config
interrupt = Interrupts['updateModule']
ask4commit = AMessage(interrupt + '\n', receive_ack)
sers = encoder.serialize_wasm(interrupt, wasm, self.max_bytes)
l = len(sers)
msgs = [ask4commit]
for idx, content in enumerate(sers):
rpl = receive_uploaddone if (idx + 1) == l else receive_ack
dbgprint(f'#{len(content) + 1} Content {content}')
msgs.append(
AMessage(content + '\n', rpl))
for m in msgs:
self.medium.send(m)
def send_proxies(self, config: dict) -> None:
lst = config['proxy']
h = config['host']
p = config['port']
sers = encoder.serialize_proxies(Interrupts['recvproxies'], h, p, lst, self.max_bytes)
msgs = []
for i, s in enumerate(sers):
rpl = receive_done if (i + 1) == len(sers) else receive_ack
m = AMessage(s + '\n', rpl)
msgs.append(m)
for m in msgs:
self.medium.send(m)
def commit(self, wasm) -> bool:
interrupt = Interrupts['updateModule']
ask4commit = AMessage(interrupt + '\n', receive_ack)
sers = encoder.serialize_wasm(interrupt, wasm, self.max_bytes)
l = len(sers)
msgs = [ask4commit]
for idx, content in enumerate(sers):
rpl = receive_commitdone if (idx + 1) == l else receive_ack
dbgprint(f'#{len(content) + 1} Content {content}')
msgs.append(
AMessage(content + '\n', rpl))
replies = self.medium.send(msgs)
succ = replies[-1]
if succ:
self.offset = self.__ask_for_offset()
dbgprint(f"new offset post commit {self.offset}")
return succ
def get_execution_state(self):
dump_msg = AMessage(Interrupts['dump'] + '\n', receive_dump)
_dumpjson = self.medium.send(dump_msg)
dbgprint(f'the dumpjson {_dumpjson}')
if self.offset != _dumpjson['start'][0]:
dbgprint('new offset')
self.offset = _dumpjson['start'][0]
return wood_state_to_wa_state(_dumpjson)
# Helper methods
def has_offset(self):
return self.offset is not None
@property
def offset(self):
return self.__offset
@offset.setter
def offset(self, off):
self.__offset = off
def clear_offset(self):
self.offset = None
def stopEventThread(self):
self.__eventlistener = None
def receive_session(self, session: dict) -> bool:
recv_int = Interrupts['receivesession']
wood_state = wa_state_to_wood_state(session, self.offset)
dbgprint(f"State to send {wood_state}")
sers = encoder.serialize_session(wood_state, recv_int, self.max_bytes)
msgs = []
l = len(sers)
assert l >= 2, f'expected at least two messages, got {l}'
for idx, content in enumerate(sers):
rpl = receive_done_session if (idx + 1) == l else receive_ack
msgs.append(AMessage(content + '\n', rpl))
dbgprint(f"about to send #{len(msgs)}")
replies = self.medium.send(msgs)
return replies[-1]
def step_until(self, addr: str) -> None:
dbgprint(f'stepping until addr {addr} offset {self.offset}')
(size_hex, addr_hex) = bp_addr_helper(self.offset, addr)
content = Interrupts['until'] + size_hex[2:] + addr_hex[2:] + '\n'
msg = AMessage(content.upper(), receive_until_ack)
return self.medium.send(msg)
#private
def __ask_for_offset(self) -> str:
offmsg = AMessage(Interrupts['offset'] + '\n', receive_offset)
off = self.medium.send(offmsg)
return off
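# Illustrative sketch (not part of the original module): a typical debugging
# session driven through WOODManager, using only methods defined above. The
# medium and the event handler are hypothetical; any connected AMedium
# implementation (serial or socket based) would do.
def _example_debug_session(medium, on_event):
    wood = WOODManager()
    wood.medium = medium                  # also wires this serializer into the medium
    wood.connect(event_handler=on_event)  # fetches the code offset on first connect
    wood.add_breakpoint(0x2A)             # address relative to the module start
    wood.run()                            # resume execution until the breakpoint hits
    return wood.get_execution_state()     # WA-format state, offset already removed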
# receive socket content functions
def receive_events(wood: WOODManager, aMedium: AMedium, callback: callable) -> None:
import time #TODO remove
at_start = b'AT '
at_end = b'!\n'
err_start = b'{"error":'
err_end = b'}\n'
timeout = float(0.1)
while True:
if not aMedium.has_event(timeout):
continue
#input has been received
_start = aMedium.recv_until([at_start, err_start], event=True)
_end = aMedium.recv_until([at_end, err_end], event = True)
if not aMedium.connected:
wood.connected = False
callback({'event': 'disconnection'})
break
if _start.find(at_start) >= 0:
# print("at bp ")
_bp = _end[:-len(at_end)].decode()
bp = hex(int(_bp , 16) - int(wood.offset, 16))
callback({'event': 'at bp', 'breakpoint': bp})
else:
start = time.monotonic()
_dump = receive_dump(wood, aMedium, ignore_prev_hash = False)
if _dump is None:
continue
_bytes = err_start + _end[:-len(b'\n')]
_obj = json.loads(_bytes.decode())
_event = {
'event': 'error',
'msg': _obj['error'],
'start_time': start,
'time': time.monotonic()
}
_dump['session_size'] = _dump['session_size'] + len(_bytes) # TODO remove
_event['execution_state'] = wood_state_to_wa_state(_dump)
callback(_event)
dbgprint("stopping event thread")
wood.stopEventThread()
def receive_offset(wood, aMedium) -> str:
end = b'"}"\n'
_noise = aMedium.recv_until(b'"offset":"')
byts = aMedium.recv_until(end)[:-len(end)]
return byts.decode()
def receive_initstep_run(_, sock):
sock.recv_until(AnsProtocol['run'].encode())
return True
def receive_run_ack(_, sock):
sock.recv_until(AnsProtocol['run'].encode())
return True
def receive_ack_pause(_, sock):
sock.recv_until(AnsProtocol['pause'].encode())
return True
def receive_step_ack(wood: WOODManager, medium: AMedium) -> bool:
medium.recv_until(AnsProtocol['step'].encode())
medium.recv_until(b'STEP DONE!\n', wait = False, timeout=True)
dbgprint("step done")
return True
def receive_dump(wood: WOODManager, aMedium: AMedium, ignore_prev_hash = True):
dump_json = receive_dump_helper(aMedium, ignore_prev_hash)
return dump_json
# def receive_locals(wood: wood, aMedium: AMedium):
# loc_json = receive_locals_helper(aMedium)
# return loc_json
# def receive_locals_helper(aMedium: AMedium):
# loc_end = b'\n'
# _noise = aMedium.recv_until(b'STACK')
# byts = aMedium.recv_until(loc_end)[:-len(loc_end)]
# parsed = json.loads(byts)
# return parsed
def receive_rmvbp(wood, aMedium) -> bool:
dbgprint("receive rmvbp")
bp_end = b'!\n'
_ = aMedium.recv_until(b'BP ')
bp_bytes = aMedium.recv_until(bp_end)[:-len(bp_end)]
dbgprint(f"removed bp {bp_bytes.decode()}")
return True
def receive_addbp(wood, aMedium) -> bool:
# dbgprint("receive addbp")
bp_end = b'!\n'
_ = aMedium.recv_until(b'BP ')
bp_bytes = aMedium.recv_until(bp_end)[:-len(bp_end)]
return True
def receive_until_ack(wood, aMedium) -> bool:
dbgprint("receive until pc")
bp_end = b'!\n'
_ = aMedium.recv_until(b'Until ')
bp_bytes = aMedium.recv_until(bp_end)[:-len(bp_end)]
dbgprint(f"ack until pc {bp_bytes.decode()}")
aMedium.recv_until(b'STEP DONE!\n')
return True
def receive_commitdone(wood, aSocket) -> bool:
aSocket.recv_until(until=b'restart done!\n')
dbgprint("received commit done")
return True
def receive_ack(wood, aMedium) -> bool:
aMedium.recv_until(until=b'ack!\n')
return True
def receive_done(wood, aMedium) -> bool:
aMedium.recv_until(until=b'done!\n')
return True
def receive_done_session(wood, aMedium) -> bool:
aMedium.recv_until(until=b'done!\n')
dbgprint("done receiving sessions")
return True
def receive_uploaddone(wood, aMedium):
global proxy_config
aMedium.recv_until(until=b'done!\n')
wood.send_proxies(proxy_config)
proxy_config = None
def bytes2int(data):
ints = []
for i in range(0, len(data), 4):
x = int.from_bytes(data[i:i+4], 'little', signed=False)
ints.append(x)
return ints
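# Illustrative check (not part of the original module): bytes2int() groups the
# payload into 4-byte little-endian words, e.g. for the table elements and the
# br_table labels received in the dump below.
assert bytes2int(b'\x01\x00\x00\x00\xff\x00\x00\x00') == [1, 255]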
# receive helper functions
prev_h = 3
def receive_dump_helper(sock, ignore_prev_hash = True):
global prev_h
_noise = sock.recv_until(b'DUMP!\n')
raw_end = b']}'
re_len = len(raw_end)
json_bytes = b''
json_bytes += sock.recv_until(b'"elements":[') + raw_end
elements = sock.recv_until(raw_end)[:-re_len]
json_bytes += sock.recv_until(b'"bytes":[') + raw_end
membytes = sock.recv_until(raw_end)[:-re_len]
json_bytes += sock.recv_until(b'"labels":[') + raw_end
labels = sock.recv_until(raw_end)[:-re_len]
json_bytes += sock.recv_until(b'\n')[:-len(b'\n')]
dec=None
try:
dec = json_bytes.decode()
except UnicodeDecodeError:
    print(f"failed to decode raw bytes {json_bytes}")
    raise ValueError("received dump bytes could not be decoded as UTF-8")
if not ignore_prev_hash:
h = hash(json_bytes)
if prev_h == h:
dbgprint("Ignoring Received session")
return None
prev_h = h
dbgprint(f'bytes {dec}')
parsed = json.loads(dec)
parsed['memory']['bytes'] = membytes
parsed['table']['elements'] = bytes2int(elements)
br_tbl = parsed['br_table']
br_tbl['size'] = int(br_tbl['size'], 16)
br_tbl['labels'] = bytes2int(labels)
parsed['session_size'] = len(json_bytes) # TODO remove
return parsed
def bp_addr_helper(offset, code_addr):
all_bp_addr = util.sum_hexs([offset, code_addr]) # remove '0x'
bp_addr = all_bp_addr
if len(all_bp_addr[2:]) % 2 != 0:
missing_chars = len(all_bp_addr[2:]) % 2
bp_addr = "0x" + ( missing_chars * '0') + all_bp_addr[2:]
amount_bytes = int(len(bp_addr[2:]) / 2)
_hex = hex(amount_bytes)
if int(_hex[2:], 16) < 16:
_hex = '0x0' + _hex[2:]
return (_hex, bp_addr)
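# Illustrative sketch (not part of the original module), assuming
# util.sum_hexs(['0x100', '0x2a']) returns the hex sum '0x12a': bp_addr_helper
# pads the rebased address to a whole number of bytes and returns its byte
# count, so add_breakpoint() above would build the payload '06' + '02' + '012a'
# (interrupt + size + address) before appending the newline.
#
#   bp_addr_helper('0x100', '0x2a')  ->  ('0x02', '0x012a')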
def old_bp_addr_helper(offset, code_addr):
bp_addr = util.sum_hexs([offset, code_addr]) # remove '0x'
amount_chars = math.floor(len(bp_addr[2:]) / 2)
if amount_chars % 2 != 0:
dbgprint("WARNING: breakpoint address is not even addr")
dbgprint(
f"offset {offset} code_addr {code_addr} chars {amount_chars} calculated addr: {bp_addr}")
else:
pass
_hex = hex(amount_chars)
if int(_hex[2:], 16) < 16:
_hex = '0x0' + _hex[2:]
return (_hex, bp_addr)
def wood_state_to_wa_state(dump_json: dict) -> dict:
offset = int(dump_json['start'][0], 16)
state = {}
state['pc'] = hex( int(dump_json['pc'], 16) - offset)
if dump_json.get('pc_error', None) is not None:
state['pc_error'] = hex( int(dump_json['pc_error']))
state['breakpoints'] = [ hex( int(b, 16) - offset) for b in dump_json['breakpoints']]
state['table'] = {
'max': dump_json['table']['max'],
'min': dump_json['table']['init'],
'elements': dump_json['table']['elements']
}
state['memory'] = {
'pages': dump_json['memory']['pages'],
'min': dump_json['memory']['init'],
'max': dump_json['memory']['max'],
'bytes': dump_json['memory']['bytes'],
}
state['br_table'] = dump_json['br_table']['labels']
state['globals'] = dump_json['globals']
state['stack'] = [s for s in dump_json['stack']]
state['session_size'] = dump_json['session_size'] #TODO remove
_frame_types = {
0: 'fun',
1: 'init_exp',
2: 'block',
3: 'loop',
4: 'if'
}
_cs = []
for frame in dump_json['callstack']:
cleaned_frame = {
'block_type': _frame_types[frame['type']],
'sp': frame['sp'],
'fp': frame['fp'],
'ret_addr': hex(int(frame['ra'], 16) - offset),
}
if cleaned_frame['block_type'] == 'fun':
cleaned_frame['fidx'] = int(frame['fidx'], 16)
else:
cleaned_frame['block_key'] = hex( int(frame['block_key'], 16) - offset)
_cs.append(cleaned_frame)
state['callstack'] = _cs
return state
def wa_state_to_wood_state(_json: dict, offset: str) -> dict:
_offset = int(offset, 16)
rebase = lambda addr : hex( int(addr, 16) + _offset)
state = {
'pc' : rebase(_json['pc']),
'breakpoints': [rebase(bp) for bp in _json['breakpoints']],
'br_table': {
'size': len(_json['br_table']),
'labels': _json['br_table'],
},
'globals': _json['globals'],
'table': {
'init': _json['table'].get('min', 0),
'max': _json['table'].get('max', 0),
'elements' : _json['table']['elements']
},
'memory': {
'init': _json['memory'].get('min', 0),
'max': _json['memory'].get('max', 0),
'pages': _json['memory']['pages'],
'bytes': _json['memory']['bytes'],
},
'stack': _json['stack'],
}
_frame_types = {
'fun': 0,
'init_exp': 1,
'block': 2,
'loop': 3,
'if': 4
}
callstack = []
for frame in _json['callstack']:
_f = {
'idx' : frame['idx'],
'type': _frame_types[frame['block_type']],
'fidx': hex(frame.get('fidx', 0)),
'sp': frame['sp'],
'fp': frame['fp'],
'ra': frame.get('ret_addr', ''),
'block_key': frame.get('block_key', '')
}
if frame.get('ret_addr', False):
_f['ra'] = rebase(frame['ret_addr'])
if frame.get('block_key', False):
_f['block_key'] = rebase(frame['block_key'])
callstack.append(_f)
callstack.sort(key = lambda f: f['idx'], reverse= False)
state['callstack'] = callstack
return state
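# Illustrative check (not part of the original module): the two converters above
# are inverse with respect to addresses. WOOD states carry absolute addresses
# (module offset added), while WA states carry module-relative ones:
#
#   offset '0x100':  wa pc '0x2a'  ->  wood pc '0x12a'  ->  wa pc '0x2a'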
|
run.py
|
import logging
import threading,sys,os,webbrowser,time
import serial,serial.tools.list_ports
import SimpleHTTPServer, BaseHTTPServer, SocketServer
from websocket_server import WebsocketServer
from Tkinter import *
import ttk,tkMessageBox
class ThreadingSimpleServer(SocketServer.ThreadingMixIn,BaseHTTPServer.HTTPServer):
pass
httpServerRunning = False
serialServerRunning = False
def serialServerLoop():
global serialServerRunning
try:
ser = serial.Serial(serialbox.get())
ser.baudrate = SerialSpinBoxVar.get()
ser.write('TS 1\n')
ctime = time.time()
while serialServerRunning:
server.handle_request()
line = ser.readline()
server.handle_request()
server.send_message_to_all(line)
if (time.time() - ctime) > 4:
ctime = time.time()
server.send_message_to_all("pong")
ser.write('TS 0\n')
ser.close()
server.server_close()
except Exception as e:
print "Failed to open"
print e
serialToggle()
return
def serialToggle():
global serialServerRunning,server
if serialServerRunning:
SerialButton.configure(bg='#F00')
serialServerRunning = False
else:
server = WebsocketServer(WebSpinBoxVar.get(), host='127.0.0.1', loglevel=logging.INFO)
server.set_fn_new_client(new_client)
server.timeout = 0
serialServerRunning = True
serialthread = threading.Thread(target = serialServerLoop)
serialthread.daemon = True
serialthread.start()
SerialButton.configure(bg='#0F0')
def httpToggle():
global httpServerRunning
global httpserver,httpthread
if httpServerRunning:
HttpButton.configure(bg='#F00')
httpserver.shutdown()
httpServerRunning = False
os.chdir( sys.path[0] )
else:
HttpButton.configure(bg='#0F0')
httpserver = ThreadingSimpleServer(('', HttpSpinBoxVar.get()), SimpleHTTPServer.SimpleHTTPRequestHandler)
#os.chdir('theme\\' + themebox.get(themebox.curselection()))
httpthread = threading.Thread(target = httpserver.serve_forever)
httpthread.daemon = True
httpthread.start()
httpServerRunning = True
def new_client(client,server):
print "New client gotten ", client
def genUrl():
if not len(WebSockUrlEntryVar.get()):
tkMessageBox.showerror("Url Error","WebSocketServer Url can not be blank!")
return
try:
val = "http://localhost:{}/?theme={}&websockport={}&websocketserver={}".format(HttpSpinBoxVar.get(),themebox.get(),WebSpinBoxVar.get(),WebSockUrlEntryVar.get())
except TclError:
tkMessageBox.showerror("Input Error","Bad input. Ensure all items are selected")
return
root.clipboard_clear()
root.clipboard_append(val)
def authorLink(event):
webbrowser.open_new(r"http://keepdream.in")
def homeLink(event):
webbrowser.open_new(r"https://github.com/geekbozu/NintendoWiSpy")
def aboutWindow():
window = Toplevel(mainframe)
window.title("About NintendoWiSpy")
Label(window,text='NintendoWiSpy').pack(anchor=N)
Label(window,text='A WiFi enabled Live input viewer for GCN/N64').pack(anchor=N)
a = Label(window,text="Timothy 'Geekboy1011' Keller",fg="blue", cursor="hand2")
h = Label(window,text="https://github.com/geekbozu/NintendoWiSpy",fg="blue", cursor="hand2")
a.bind("<Button-1>", authorLink)
h.bind("<Button-1>", homeLink)
a.pack()
h.pack()
licframe = LabelFrame(window, text="MIT License:")
licframe.pack(expand="yes",fill="both")
scrollbar = Scrollbar(licframe)
scrollbar.pack(side=RIGHT, fill=Y)
t = Text(licframe,wrap="word",width=70,height=10,yscrollcommand=scrollbar.set)
scrollbar.configure(command=t.yview)
t.pack(side=TOP)
t.insert(END,"""Copyright 2018 Timothy 'Geekboy1011' Keller
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.""")
def setWifiCreds():
ser = serial.Serial(serialbox.get())
ser.baudrate = SerialSpinBoxVar.get()
ser.reset_input_buffer()
ser.reset_output_buffer()
ser.write('WW {} {}\n'.format(SSIDVar.get(),PASSVar.get()))
ser.close()
def setWinCredMan():
ser = serial.Serial(serialbox.get())
ser.baudrate = SerialSpinBoxVar.get()
window = Toplevel(mainframe)
window.title("Wifi Manager")
ser.reset_output_buffer()
ser.reset_input_buffer()
ser.write('RW\n')
SSIDVar.set(ser.readline().strip())
PASSVar.set(ser.readline().strip())
ser.close()
Label(window,text="SSID").pack()
Entry(window,textvariable=SSIDVar).pack()
Label(window,text="PASSWORD").pack()
Entry(window,textvariable=PASSVar).pack()
Button(window,text="SAVE",command=setWifiCreds).pack()
window.wait_window(window)
if __name__ == "__main__":
root = Tk()
root.title("NintendoWiSpy")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
menubar = Menu(root)
menubar.add_command(label="Wifi Config",command=setWinCredMan)
menubar.add_command(label="About", command=aboutWindow)
root.config(menu=menubar)
SerialSpinBoxVar = IntVar(mainframe)
SerialSpinBoxVar.set("921600")
WebSpinBoxVar = IntVar(mainframe)
WebSpinBoxVar.set("18881")
HttpSpinBoxVar = IntVar(mainframe)
HttpSpinBoxVar.set("8888")
WebSockUrlEntryVar = StringVar(mainframe)
WebSockUrlEntryVar.set("NintendoSpy.local")
SSIDVar = StringVar(mainframe)
PASSVar = StringVar(mainframe)
Label(mainframe,text='Serial BaudRate').pack(anchor=N)
Spinbox(mainframe,textvariable=SerialSpinBoxVar).pack(anchor=N)
Label(mainframe,text='HTTPSERVER port').pack(anchor=N)
Spinbox(mainframe,from_ = 1, to = 65535,textvariable=HttpSpinBoxVar).pack(anchor=N)
Label(mainframe,text='WebSocket Host').pack(anchor=N)
Entry(mainframe,textvariable=WebSockUrlEntryVar).pack(anchor=N)
Label(mainframe,text='Display Socket Port').pack(anchor=N)
Spinbox(mainframe,from_ = 1, to = 65535,textvariable=WebSpinBoxVar).pack(anchor=N)
lists = ttk.Frame(mainframe)
Label(lists,text='Serial Port').grid(column=0,row=0)
Label(lists,text='Theme').grid(column=1,row=0)
serialbox = ttk.Combobox(lists,exportselection=False,height=5)
serialbox.config(value=[i.device for i in serial.tools.list_ports.comports()])
#for i in serial.tools.list_ports.comports():
# print dir(serialbox.insert)
# serialbox.values.insert(END,i.device)
serialbox.grid(column=0,row=1)
themebox = ttk.Combobox(lists,exportselection=False,height=5)
themebox.config(value=[i for i in os.listdir('theme')])
#for i in os.listdir('theme'):
# themebox.values.insert(END,i)
themebox.grid(column=1,row=1)
lists.pack()
HttpButton = Button(mainframe,text='Http Server',command=httpToggle)
HttpButton.pack(side = LEFT)
SerialButton = Button(mainframe,text='Serial Forwarder',command=serialToggle)
SerialButton.pack(side = LEFT)
Button(mainframe,text='Generate Url',command=genUrl).pack(side = LEFT)
root.mainloop()
|
test_etcd_client.py
|
# Copyright 2020 The FedLearner Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# coding: utf-8
import os
import unittest
import threading
import time
from fedlearner.common import etcd_client
class TestEtcdClient(unittest.TestCase):
def test_etcd_op(self):
cli = etcd_client.EtcdClient('test_cluster', 'localhost:2379',
'data_source_a', True)
cli.delete('fl_key')
cli.set_data('fl_key', 'fl_value')
self.assertEqual(cli.get_data('fl_key'), b'fl_value')
self.assertFalse(cli.cas('fl_key', 'fl_value1', 'fl_value2'))
self.assertTrue(cli.cas('fl_key', 'fl_value', 'fl_value1'))
self.assertEqual(cli.get_data('fl_key'), b'fl_value1')
goahead = False
def thread_routine():
cli.set_data('fl_key', 'fl_value2')
self.assertEqual(cli.get_data('fl_key'), b'fl_value2')
eiter, cancel = cli.watch_key('fl_key')
other = threading.Thread(target=thread_routine)
other.start()
for e in eiter:
self.assertEqual(e.key, b'fl_key')
self.assertEqual(e.value, b'fl_value2')
cancel()
other.join()
cli.set_data('fl_key/a', '1')
cli.set_data('fl_key/b', '2')
cli.set_data('fl_key/c', '3')
expected_kvs = [(b'fl_key', b'fl_value2'), (b'fl_key/a', b'1'),
(b'fl_key/b', b'2'), (b'fl_key/c', b'3')]
for idx, kv in enumerate(cli.get_prefix_kvs('fl_key')):
self.assertEqual(kv[0], expected_kvs[idx][0])
self.assertEqual(kv[1], expected_kvs[idx][1])
for idx, kv in enumerate(cli.get_prefix_kvs('fl_key', True)):
self.assertEqual(kv[0], expected_kvs[idx+1][0])
self.assertEqual(kv[1], expected_kvs[idx+1][1])
cli.destroy_client_pool()
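# Illustrative sketch (not part of the original test): the cas() call exercised
# above is the usual building block for an optimistic update loop. Only the
# EtcdClient methods already used in the test (get_data, cas) are assumed here.
def _example_cas_increment(cli, key):
    while True:
        old = cli.get_data(key)            # returned as bytes, e.g. b'41'
        new = str(int(old.decode()) + 1)
        if cli.cas(key, old, new):         # succeeds only if the stored value is still `old`
            return new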
if __name__ == '__main__':
unittest.main()
|
unicorn_binance_websocket_api_manager.py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# File: unicorn_binance_websocket_api/unicorn_binance_websocket_api_manager.py
#
# Part of ‘UNICORN Binance WebSocket API’
# Project website: https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api
# Documentation: https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api
# PyPI: https://pypi.org/project/unicorn-binance-websocket-api/
#
# Author: Oliver Zehentleitner
# https://about.me/oliver-zehentleitner
#
# Copyright (c) 2019-2021, Oliver Zehentleitner
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from .unicorn_binance_websocket_api_exceptions import StreamRecoveryError, UnknownExchange
from .unicorn_binance_websocket_api_socket import BinanceWebSocketApiSocket
from .unicorn_binance_websocket_api_restclient import BinanceWebSocketApiRestclient
from .unicorn_binance_websocket_api_restserver import BinanceWebSocketApiRestServer
from cheroot import wsgi
from collections import deque
from datetime import datetime
from flask import Flask, redirect
from flask_restful import Api
import asyncio
import colorama
import copy
import logging
import os
import platform
import psutil
import re
import requests
import sys
import threading
import time
import uuid
import ujson as json
class BinanceWebSocketApiManager(threading.Thread):
"""
An unofficial Python API to use the Binance WebSocket APIs (com+testnet, com-margin+testnet,
com-isolated_margin+testnet, com-futures+testnet, us, jex, dex/chain+testnet) in an easy, fast, flexible,
robust and fully-featured way.
This library supports two different kinds of websocket endpoints:
- CEX (Centralized exchange): binance.com, binance.vision, binance.je, binance.us, trbinance.com, jex.com
- DEX (Decentralized exchange): binance.org
Binance.com websocket API documentation:
- https://github.com/binance-exchange/binance-official-api-docs/blob/master/web-socket-streams.md
- https://binance-docs.github.io/apidocs/futures/en/#user-data-streams
- https://binance-docs.github.io/apidocs/spot/en/#user-data-streams
Binance.vision (Testnet) websocket API documentation:
- https://testnet.binance.vision/
Binance.us websocket API documentation:
- https://github.com/binance-us/binance-official-api-docs/blob/master/web-socket-streams.md
- https://github.com/binance-us/binance-official-api-docs/blob/master/user-data-stream.md
TRBinance.com websocket API documentation:
- https://www.trbinance.com/apidocs/#general-wss-information
Jex.com websocket API documentation:
- https://jexapi.github.io/api-doc/option.html#web-socket-streams
- https://jexapi.github.io/api-doc/option.html#user-data-streams
Binance.org websocket API documentation:
- https://docs.binance.org/api-reference/dex-api/ws-connection.html
:param process_stream_data: Provide a function/method to process the received webstream data. The function
will be called instead of
`add_to_stream_buffer() <unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.add_to_stream_buffer>`_
like `process_stream_data(stream_data, stream_buffer_name)` where
`stream_data` contains the raw_stream_data. If not provided, the raw stream_data will
get stored in the stream_buffer! `How to read from stream_buffer!
<https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/README.html#and-4-more-lines-to-print-the-receives>`_
:type process_stream_data: function
:param exchange: Select binance.com, binance.com-testnet, binance.com-margin, binance.com-margin-testnet,
binance.com-isolated_margin, binance.com-isolated_margin-testnet, binance.com-futures,
binance.com-futures-testnet, binance.com-coin-futures, binance.us, trbinance.com,
jex.com, binance.org or binance.org-testnet (default: binance.com)
:type exchange: str
:param warn_on_update: set to `False` to disable the update warning
:type warn_on_update: bool
:param throw_exception_if_unrepairable: set to `True` to activate exceptions if a crashed stream is unrepairable
(invalid API key, exceeded subscription limit) or an unknown exchange is
used
:type throw_exception_if_unrepairable: bool
:param restart_timeout: A stream restart must be successful within this time, otherwise a new restart will be
initialized. Default is 6 seconds.
:type restart_timeout: int
:param show_secrets_in_logs: set to True to show secrets like listen_key, api_key or api_secret in log file
(default=False)
:type show_secrets_in_logs: bool
:param output_default: set to "dict" to convert the received raw data to a python dict, set to "UnicornFy" to
convert with `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_ - otherwise
with the default setting "raw_data" the output remains unchanged and gets delivered as
received from the endpoints. Change this for a specific stream with the `output` parameter
of `create_stream()` and `replace_stream()`
:type output_default: str
:param enable_stream_signal_buffer: set to True to enable the
`stream_signal_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_signal_buffer%60>`_
and receive information about
disconnects and reconnects to manage a restore of the lost data during the
interruption or to recognize your bot got blind.
:type enable_stream_signal_buffer: bool
:param disable_colorama: set to True to disable the use of `colorama <https://pypi.org/project/colorama/>`_
:type disable_colorama: bool
:param stream_buffer_maxlen: Set a max len for the generic `stream_buffer`. This parameter can also be used within
`create_stream()` for a specific `stream_buffer`.
:type stream_buffer_maxlen: int or None
"""
def __init__(self,
process_stream_data=False,
exchange="binance.com",
warn_on_update=True,
throw_exception_if_unrepairable=False,
restart_timeout=6,
show_secrets_in_logs=False,
output_default="raw_data",
enable_stream_signal_buffer=False,
disable_colorama=False,
stream_buffer_maxlen=None):
threading.Thread.__init__(self)
self.name = "unicorn-binance-websocket-api"
self.version = "1.32.0"
logging.info(f"New instance of {self.get_user_agent()} on {str(platform.system())} {str(platform.release())} "
f"for exchange {exchange} started ...")
if disable_colorama is not True:
colorama.init()
if process_stream_data is False:
# no special method to process stream data provided, so we use add_to_stream_buffer:
self.process_stream_data = self.add_to_stream_buffer
logging.info(f"Using `stream_buffer`")
else:
# use the provided method to process stream data:
self.process_stream_data = process_stream_data
logging.info(f"Using `process_stream_data`")
self.exchange = exchange
if self.exchange == "binance.com":
self.websocket_base_uri = "wss://stream.binance.com:9443/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-testnet":
self.websocket_base_uri = "wss://testnet.binance.vision/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-margin":
self.websocket_base_uri = "wss://stream.binance.com:9443/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-margin-testnet":
self.websocket_base_uri = "wss://testnet.binance.vision/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-isolated_margin":
self.websocket_base_uri = "wss://stream.binance.com:9443/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-isolated_margin-testnet":
self.websocket_base_uri = "wss://testnet.binance.vision/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.com-futures":
self.websocket_base_uri = "wss://fstream.binance.com/"
self.max_subscriptions_per_stream = 200
elif self.exchange == "binance.com-coin-futures":
self.websocket_base_uri = "wss://dstream.binance.com/"
self.max_subscriptions_per_stream = 200
elif self.exchange == "binance.com-futures-testnet":
self.websocket_base_uri = "wss://stream.binancefuture.com/"
self.max_subscriptions_per_stream = 200
elif self.exchange == "binance.us":
self.websocket_base_uri = "wss://stream.binance.us:9443/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "trbinance.com":
self.websocket_base_uri = "wss://stream.binance.cc/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "jex.com":
self.websocket_base_uri = "wss://ws.jex.com/"
self.max_subscriptions_per_stream = 10
elif self.exchange == "binance.org":
self.websocket_base_uri = "wss://dex.binance.org/api/"
self.max_subscriptions_per_stream = 1024
elif self.exchange == "binance.org-testnet":
self.websocket_base_uri = "wss://testnet-dex.binance.org/api/"
self.max_subscriptions_per_stream = 1024
else:
# Unknown Exchange
error_msg = f"Unknown exchange '{str(self.exchange)}'! Read the docs to see a list of supported " \
"exchanges: https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_" \
"binance_websocket_api.html#module-unicorn_binance_websocket_api.unicorn_binance_websocket_" \
"api_manager"
logging.critical(error_msg)
raise UnknownExchange(error_msg)
self.stop_manager_request = None
self.all_subscriptions_number = 0
self.binance_api_status = {'weight': None,
'timestamp': 0,
'status_code': None}
self.dex_user_address = False
self.enable_stream_signal_buffer = enable_stream_signal_buffer
self.event_loops = {}
self.frequent_checks_list = {}
self.frequent_checks_list_lock = threading.Lock()
self.receiving_speed_average = 0
self.receiving_speed_peak = {'value': 0,
'timestamp': time.time()}
self.keep_max_received_last_second_entries = 5
self.keepalive_streams_list = {}
self.last_entry_added_to_stream_buffer = 0
self.last_monitoring_check = time.time()
self.last_update_check_github = {'timestamp': time.time(),
'status': None}
self.last_update_check_github_check_command = {'timestamp': time.time(),
'status': None}
self.max_send_messages_per_second = 5
self.max_send_messages_per_second_reserve = 2
self.most_receives_per_second = 0
self.monitoring_api_server = False
self.monitoring_total_received_bytes = 0
self.monitoring_total_receives = 0
self.output_default = output_default
self.reconnects = 0
self.reconnects_lock = threading.Lock()
self.request_id = 0
self.request_id_lock = threading.Lock()
self.restart_requests = {}
self.restart_timeout = restart_timeout
self.ringbuffer_error = []
self.ringbuffer_error_max_size = 500
self.ringbuffer_result = []
self.ringbuffer_result_max_size = 500
self.show_secrets_in_logs = show_secrets_in_logs
self.start_time = time.time()
self.stream_buffer_maxlen = stream_buffer_maxlen
self.stream_buffer = deque(maxlen=self.stream_buffer_maxlen)
self.stream_buffer_lock = threading.Lock()
self.stream_buffer_locks = {}
self.stream_buffers = {}
self.stream_list = {}
self.stream_list_lock = threading.Lock()
self.stream_signal_buffer = []
self.stream_signal_buffer_lock = threading.Lock()
self.stream_threading_lock = {}
self.throw_exception_if_unrepairable = throw_exception_if_unrepairable
self.total_received_bytes = 0
self.total_received_bytes_lock = threading.Lock()
self.total_receives = 0
self.total_receives_lock = threading.Lock()
self.total_transmitted = 0
self.total_transmitted_lock = threading.Lock()
self.websocket_list = {}
self.start()
self.replaced_secrets_text = "***SECRET_REMOVED***"
self.restclient = BinanceWebSocketApiRestclient(self)
if warn_on_update and self.is_update_availabe():
update_msg = f"Release {self.name}_" + self.get_latest_version() + " is available, " \
"please consider updating! (Changelog: https://github.com/oliver-zehentleitner/unicorn-" \
"binance-websocket-api/blob/master/CHANGELOG.md)"
print(update_msg)
logging.warning(update_msg)
def _add_stream_to_stream_list(self,
stream_id,
channels,
markets,
stream_label=None,
stream_buffer_name=False,
api_key=False,
api_secret=False,
symbols=False,
output=False,
ping_interval=False,
ping_timeout=False,
close_timeout=False,
stream_buffer_maxlen=None):
"""
Create a list entry for new streams
:param stream_id: provide a stream_id (only needed for userData streams to acquire a listenKey)
:type stream_id: uuid
:param channels: provide the channels to create the URI
:type channels: str, tuple, list, set
:param markets: provide the markets to create the URI
:type markets: str, tuple, list, set
:param stream_label: provide a stream_label for the stream
:type stream_label: str
:param stream_buffer_name: If `False` the data is going to get written to the default stream_buffer,
set to `True` to read the data via `pop_stream_data_from_stream_buffer(stream_id)` or
provide a string to create and use a shared stream_buffer and read it via
`pop_stream_data_from_stream_buffer('string')`.
:type stream_buffer_name: bool or str
:param api_key: provide a valid Binance API key
:type api_key: str
:param api_secret: provide a valid Binance API secret
:type api_secret: str
:param symbols: provide the symbols for isolated_margin user_data streams
:type symbols: str
:param output: the default setting `raw_data` can be globally overwritten with the parameter
`output_default <https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html?highlight=output_default#module-unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager>`_
of BinanceWebSocketApiManager. To overrule the `output_default` value for this specific stream,
set `output` to "dict" to convert the received raw data to a python dict, set to "UnicornFy" to
convert with `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_ - otherwise with
the default setting "raw_data" the output remains unchanged and gets delivered as received from
the endpoints
:type output: str
:param ping_interval: Once the connection is open, a `Ping frame` is sent every
`ping_interval` seconds. This serves as a keepalive. It helps keeping
the connection open, especially in the presence of proxies with short
timeouts on inactive connections. Set `ping_interval` to `None` to
disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type ping_interval: int or None
:param ping_timeout: If the corresponding `Pong frame` isn't received within
`ping_timeout` seconds, the connection is considered unusable and is closed with
code 1011. This ensures that the remote endpoint remains responsive. Set
`ping_timeout` to `None` to disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type ping_timeout: int or None
:param close_timeout: The `close_timeout` parameter defines a maximum wait time in seconds for
completing the closing handshake and terminating the TCP connection. (default: 10)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type close_timeout: int or None
:param stream_buffer_maxlen: Set a max len for the `stream_buffer`. Only used in combination with a non-generic
`stream_buffer`. The generic `stream_buffer` always uses the value of
`BinanceWebSocketApiManager()`.
:type stream_buffer_maxlen: int or None
"""
if output is False:
output = self.output_default
self.stream_threading_lock[stream_id] = {'full_lock': threading.Lock(),
'receives_statistic_last_second_lock': threading.Lock()}
self.stream_list[stream_id] = {'exchange': self.exchange,
'stream_id': copy.deepcopy(stream_id),
'recent_socket_id': None,
'channels': copy.deepcopy(channels),
'markets': copy.deepcopy(markets),
'stream_label': copy.deepcopy(stream_label),
'stream_buffer_name': copy.deepcopy(stream_buffer_name),
'stream_buffer_maxlen': copy.deepcopy(stream_buffer_maxlen),
'symbols': copy.deepcopy(symbols),
'output': copy.deepcopy(output),
'subscriptions': 0,
'payload': [],
'api_key': copy.deepcopy(api_key),
'api_secret': copy.deepcopy(api_secret),
'dex_user_address': copy.deepcopy(self.dex_user_address),
'ping_interval': copy.deepcopy(ping_interval),
'ping_timeout': copy.deepcopy(ping_timeout),
'close_timeout': copy.deepcopy(close_timeout),
'status': 'starting',
'start_time': time.time(),
'processed_receives_total': 0,
'receives_statistic_last_second': {'most_receives_per_second': 0, 'entries': {}},
'seconds_to_last_heartbeat': None,
'last_heartbeat': None,
'kill_request': None,
'stop_request': None,
'crash_request': None,
'seconds_since_has_stopped': None,
'has_stopped': False,
'reconnects': 0,
'logged_reconnects': [],
'processed_transmitted_total': 0,
'last_static_ping_listen_key': 0,
'listen_key': False,
'listen_key_cache_time': 30 * 60,
'last_received_data_record': None,
'processed_receives_statistic': {},
'transfer_rate_per_second': {'bytes': {}, 'speed': 0}}
logging.info("BinanceWebSocketApiManager._add_stream_to_stream_list(" +
str(stream_id) + ", " + str(channels) + ", " + str(markets) + ", " + str(stream_label) + ", "
+ str(stream_buffer_name) + ", " + str(stream_buffer_maxlen) + ", " + str(symbols) + ")")
def _create_stream_thread(self,
loop,
stream_id,
channels,
markets,
stream_buffer_name=False,
stream_buffer_maxlen=None,
restart=False):
"""
Companion function of `self.create_stream()` to create a thread for the socket and to manage the coroutine
:param loop: provide an asyncio loop
:type loop: asyncio loop
:param stream_id: provide a stream_id (only needed for userData streams to acquire a listenKey)
:type stream_id: uuid
:param channels: provide the channels to create the URI
:type channels: str, tuple, list, set
:param markets: provide the markets to create the URI
:type markets: str, tuple, list, set
:param stream_buffer_name: If `False` the data is going to get written to the default stream_buffer,
set to `True` to read the data via `pop_stream_data_from_stream_buffer(stream_id)` or
provide a string to create and use a shared stream_buffer and read it via
`pop_stream_data_from_stream_buffer('string')`.
:type stream_buffer_name: bool or str
:param stream_buffer_maxlen: Set a max len for the `stream_buffer`. Only used in combination with a non-generic
`stream_buffer`. The generic `stream_buffer` always uses the value of
`BinanceWebSocketApiManager()`.
:type stream_buffer_maxlen: int or None
:param restart: set to `True` if it's a restart!
:type restart: bool
:return:
"""
if self.is_stop_request(stream_id):
return False
if restart is False:
if stream_buffer_name is not False:
self.stream_buffer_locks[stream_buffer_name] = threading.Lock()
try:
# Not resetting the stream_buffer during a restart:
if self.stream_buffers[stream_buffer_name]:
pass
except KeyError:
self.stream_buffers[stream_buffer_name] = deque(maxlen=stream_buffer_maxlen)
asyncio.set_event_loop(loop)
socket = BinanceWebSocketApiSocket(self, stream_id, channels, markets)
try:
loop.run_until_complete(socket.start_socket())
except RuntimeError as error_msg:
if "cannot schedule new futures after interpreter shutdown" in str(error_msg):
logging.critical(f"BinanceWebSocketApiManager._create_stream_thread() stream_id={str(stream_id)} "
f" - RuntimeError error_msg: - {str(error_msg)} - stopping and shutting down - read "
f"https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/issues/131"
f" for further information!")
self.stop_manager_with_all_streams()
sys.exit(1)
logging.critical(f"BinanceWebSocketApiManager._create_stream_thread() stream_id={str(stream_id)} "
f"error: 7 - {str(error_msg)} - if this stream did not restart after this error, please "
f"create an issue: "
f"https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/issues/new/choose")
loop.close()
finally:
self.add_to_stream_signal_buffer("DISCONNECT", stream_id)
loop.close()
def _frequent_checks(self):
"""
This method gets started as a thread and is doing the frequent checks
"""
frequent_checks_id = time.time()
cpu_usage_time = False
with self.frequent_checks_list_lock:
self.frequent_checks_list[frequent_checks_id] = {'last_heartbeat': 0,
'stop_request': None,
'has_stopped': False}
logging.info("BinanceWebSocketApiManager._frequent_checks() new instance created with frequent_checks_id=" +
str(frequent_checks_id))
# threaded loop for min 1 check per second
while self.stop_manager_request is None and self.frequent_checks_list[frequent_checks_id]['stop_request'] \
is None:
with self.frequent_checks_list_lock:
self.frequent_checks_list[frequent_checks_id]['last_heartbeat'] = time.time()
time.sleep(0.3)
current_timestamp = int(time.time())
last_timestamp = current_timestamp - 1
next_to_last_timestamp = current_timestamp - 2
total_most_stream_receives_last_timestamp = 0
total_most_stream_receives_next_to_last_timestamp = 0
active_stream_list = self.get_active_stream_list()
# check CPU stats
cpu = self.get_process_usage_cpu()
if cpu >= 95:
time_of_waiting = 5
if cpu_usage_time is False:
cpu_usage_time = time.time()
elif (time.time() - cpu_usage_time) > time_of_waiting:
logging.warning(f"BinanceWebSocketApiManager._frequent_checks() - High CPU usage since "
f"{str(time_of_waiting)} seconds: {str(cpu)}")
cpu_usage_time = False
else:
cpu_usage_time = False
# count most_receives_per_second total last second
if active_stream_list:
for stream_id in active_stream_list:
# set the streams `most_receives_per_second` value
try:
if self.stream_list[stream_id]['receives_statistic_last_second']['entries'][last_timestamp] > \
self.stream_list[stream_id]['receives_statistic_last_second']['most_receives_per_second']:
self.stream_list[stream_id]['receives_statistic_last_second']['most_receives_per_second'] = \
self.stream_list[stream_id]['receives_statistic_last_second']['entries'][last_timestamp]
except KeyError:
pass
try:
total_most_stream_receives_last_timestamp += self.stream_list[stream_id]['receives_statistic_last_second']['entries'][last_timestamp]
except KeyError:
pass
try:
total_most_stream_receives_next_to_last_timestamp += self.stream_list[stream_id]['receives_statistic_last_second']['entries'][next_to_last_timestamp]
except KeyError:
pass
# delete list entries older than `keep_max_received_last_second_entries`
# receives_statistic_last_second
delete_index = []
if len(self.stream_list[stream_id]['receives_statistic_last_second']['entries']) > \
self.keep_max_received_last_second_entries:
with self.stream_threading_lock[stream_id]['receives_statistic_last_second_lock']:
temp_entries = copy.deepcopy(self.stream_list[stream_id]['receives_statistic_last_second']['entries'])
for timestamp_key in temp_entries:
try:
if timestamp_key < current_timestamp - self.keep_max_received_last_second_entries:
delete_index.append(timestamp_key)
except ValueError as error_msg:
logging.error(
"BinanceWebSocketApiManager._frequent_checks() timestamp_key=" + str(timestamp_key) +
" current_timestamp=" + str(current_timestamp) + " keep_max_received_last_second_"
"entries=" + str(self.keep_max_received_last_second_entries) + " error_msg=" +
str(error_msg))
for timestamp_key in delete_index:
with self.stream_threading_lock[stream_id]['receives_statistic_last_second_lock']:
self.stream_list[stream_id]['receives_statistic_last_second']['entries'].pop(timestamp_key,
None)
# transfer_rate_per_second
delete_index = []
if len(self.stream_list[stream_id]['transfer_rate_per_second']['bytes']) > \
self.keep_max_received_last_second_entries:
try:
temp_bytes = self.stream_list[stream_id]['transfer_rate_per_second']['bytes']
for timestamp_key in temp_bytes:
try:
if timestamp_key < current_timestamp - self.keep_max_received_last_second_entries:
delete_index.append(timestamp_key)
except ValueError as error_msg:
logging.error(
"BinanceWebSocketApiManager._frequent_checks() timestamp_key="
+ str(timestamp_key) +
" current_timestamp=" + str(current_timestamp) +
" keep_max_received_last_second_"
"entries=" + str(self.keep_max_received_last_second_entries) + " error_msg=" +
str(error_msg))
except RuntimeError as error_msg:
logging.info("BinanceWebSocketApiManager._frequent_checks() - "
"Catched RuntimeError: " + str(error_msg))
for timestamp_key in delete_index:
self.stream_list[stream_id]['transfer_rate_per_second']['bytes'].pop(timestamp_key, None)
# set most_receives_per_second
try:
if int(self.most_receives_per_second) < int(total_most_stream_receives_last_timestamp):
self.most_receives_per_second = int(total_most_stream_receives_last_timestamp)
except ValueError as error_msg:
logging.error("BinanceWebSocketApiManager._frequent_checks() self.most_receives_per_second"
"=" + str(self.most_receives_per_second) + " total_most_stream_receives_last_timestamp"
"=" + str(total_most_stream_receives_last_timestamp) + " total_most_stream_receives_next_"
"to_last_timestamp=" + str(total_most_stream_receives_next_to_last_timestamp) + " error_"
"msg=" + str(error_msg))
# check receiving_speed_peak
last_second_receiving_speed = self.get_current_receiving_speed_global()
try:
if last_second_receiving_speed > self.receiving_speed_peak['value']:
self.receiving_speed_peak['value'] = last_second_receiving_speed
self.receiving_speed_peak['timestamp'] = time.time()
logging.info(f"BinanceWebSocketApiManager._frequent_checks() - reached new "
f"`highest_receiving_speed` "
f"{str(self.get_human_bytesize(self.receiving_speed_peak['value'], '/s'))} at "
f"{self.get_date_of_timestamp(self.receiving_speed_peak['timestamp'])}")
except TypeError as error_msg:
pass
# send keepalive for `!userData` streams every 30 minutes
if active_stream_list:
for stream_id in active_stream_list:
if isinstance(active_stream_list[stream_id]['markets'], str):
active_stream_list[stream_id]['markets'] = [active_stream_list[stream_id]['markets'], ]
if isinstance(active_stream_list[stream_id]['channels'], str):
active_stream_list[stream_id]['channels'] = [active_stream_list[stream_id]['channels'], ]
if "!userData" in active_stream_list[stream_id]['markets'] or \
"!userData" in active_stream_list[stream_id]['channels']:
if (active_stream_list[stream_id]['start_time'] + active_stream_list[stream_id]['listen_key_cache_time']) \
< time.time() and (active_stream_list[stream_id]['last_static_ping_listen_key'] +
active_stream_list[stream_id]['listen_key_cache_time']) < time.time():
# keep-alive the listenKey
self.restclient.keepalive_listen_key(stream_id)
# set last_static_ping_listen_key
self.stream_list[stream_id]['last_static_ping_listen_key'] = time.time()
self.set_heartbeat(stream_id)
logging.info("BinanceWebSocketApiManager._frequent_checks() - sent listen_key keepalive "
"ping for stream_id=" + str(stream_id))
sys.exit(0)
def _keepalive_streams(self):
"""
This method is started as a thread and observes the streams; if necessary it restarts a dead stream
"""
keepalive_streams_id = time.time()
self.keepalive_streams_list[keepalive_streams_id] = {'last_heartbeat': 0,
'stop_request': None,
'has_stopped': False}
logging.info(
"BinanceWebSocketApiManager._keepalive_streams() new instance created with keepalive_streams_id=" +
str(keepalive_streams_id))
# threaded loop to restart crashed streams:
while self.stop_manager_request is None and \
self.keepalive_streams_list[keepalive_streams_id]['stop_request'] is None:
time.sleep(1)
self.keepalive_streams_list[keepalive_streams_id]['last_heartbeat'] = time.time()
# restart streams with a restart_request (status == new)
temp_restart_requests = copy.deepcopy(self.restart_requests)
for stream_id in temp_restart_requests:
try:
# find restarts that didn't work
if self.restart_requests[stream_id]['status'] == "restarted" and \
self.restart_requests[stream_id]['last_restart_time']+self.restart_timeout < time.time():
self.restart_requests[stream_id]['status'] = "new"
# restart streams with requests
if self.restart_requests[stream_id]['status'] == "new" or \
self.stream_list[stream_id]['kill_request'] is True:
self.kill_stream(stream_id)
thread = threading.Thread(target=self._restart_stream_thread, args=(stream_id,))
thread.start()
except KeyError:
pass
sys.exit(0)
def _restart_stream(self, stream_id):
"""
This is NOT stop/start! Its purpose is to start a dead stream again! Use `set_restart_request()` for stop/start!
:param stream_id: id of a stream
:type stream_id: uuid
:return: stream_id or False
"""
try:
if self.restart_requests[stream_id]['status'] != "new":
logging.warning("BinanceWebSocketApiManager._restart_stream() please use `set_restart_request()` "
"instead!")
return False
except KeyError:
# no restart_request entry for this stream_id:
logging.warning("BinanceWebSocketApiManager._restart_stream() please use `set_restart_request() instead!")
return False
logging.info("BinanceWebSocketApiManager._restart_stream(" + str(stream_id) + ", " +
str(self.stream_list[stream_id]['channels']) +
", " + str(self.stream_list[stream_id]['markets']) + ")")
self.restart_requests[stream_id] = {'status': "restarted"}
self.restart_requests[stream_id]['last_restart_time'] = time.time()
self.stream_list[stream_id]['status'] = "restarting"
self.stream_list[stream_id]['kill_request'] = None
self.stream_list[stream_id]['payload'] = []
try:
loop = asyncio.new_event_loop()
except OSError as error_msg:
logging.critical(f"BinanceWebSocketApiManager.create_stream({str(stream_id)}) - OSError - "
f"error_msg: {str(error_msg)}")
return False
self.event_loops[stream_id] = loop
thread = threading.Thread(target=self._create_stream_thread,
args=(loop,
stream_id,
self.stream_list[stream_id]['channels'],
self.stream_list[stream_id]['markets'],
self.stream_list[stream_id]['stream_buffer_name'],
self.stream_list[stream_id]['stream_buffer_maxlen'],
True))
thread.start()
return stream_id
def _restart_stream_thread(self, stream_id):
"""
Wait till the old socket has closed and then start it again
:param stream_id: id of a stream
:type stream_id: uuid
"""
self._restart_stream(stream_id)
def _start_monitoring_api_thread(self, host, port, warn_on_update):
"""
Threaded method that serves the monitoring API
:param host: IP or hostname to use
:type host: str
:param port: Port to use
:type port: int
:param warn_on_update: Should the monitoring system report available updates?
:type warn_on_update: bool
"""
logging.info("BinanceWebSocketApiManager._start_monitoring_api_thread() - Starting monitoring API service ...")
app = Flask(__name__)
@app.route('/')
@app.route('/status/')
def redirect_to_wiki():
logging.info("BinanceWebSocketApiManager._start_monitoring_api_thread() 200 - "
"Visit https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/UNICORN-"
"Monitoring-API-Service for further information!")
return redirect("https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/"
"UNICORN-Monitoring-API-Service", code=302)
api = Api(app)
api.add_resource(BinanceWebSocketApiRestServer,
"/status/<string:statusformat>/",
"/status/<string:statusformat>/<string:checkcommandversion>",
resource_class_kwargs={'handler_binance_websocket_api_manager': self,
'warn_on_update': warn_on_update})
try:
dispatcher = wsgi.PathInfoDispatcher({'/': app})
self.monitoring_api_server = wsgi.WSGIServer((host, port), dispatcher)
self.monitoring_api_server.start()
except RuntimeError as error_msg:
logging.critical("BinanceWebSocketApiManager._start_monitoring_api_thread() - Monitoring API service is "
"going down! - Info: " + str(error_msg))
except OSError as error_msg:
logging.critical("BinanceWebSocketApiManager._start_monitoring_api_thread() - Monitoring API service is "
"going down! - Info: " + str(error_msg))
def add_to_ringbuffer_error(self, error):
"""
Add received error messages from websocket endpoints to the error ringbuffer
:param error: The data to add.
:type error: string
:return: bool
"""
while len(self.ringbuffer_error) >= self.get_ringbuffer_error_max_size():
self.ringbuffer_error.pop(0)
self.ringbuffer_error.append(str(error))
return True
def add_to_ringbuffer_result(self, result):
"""
Add received result messages from websocket endpoints to the result ringbuffer
:param result: The data to add.
:type result: string
:return: bool
"""
while len(self.ringbuffer_result) >= self.get_ringbuffer_result_max_size():
self.ringbuffer_result.pop(0)
self.ringbuffer_result.append(str(result))
return True
def add_to_stream_buffer(self, stream_data, stream_buffer_name=False):
"""
Kick back data to the
`stream_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_buffer%60>`_
If it is not possible to process received stream data (for example, the database is restarting, so it's not
possible to save the data), you can return the data back into the stream_buffer. A few seconds after you stop
writing data back to the stream_buffer, the BinanceWebSocketApiManager starts flushing the data back to normal
processing.
:param stream_data: the data you want to write back to the buffer
:type stream_data: raw stream_data or unicorn_fied stream data
:param stream_buffer_name: If `False` the data is going to get written to the default stream_buffer,
set to `True` to read the data via `pop_stream_data_from_stream_buffer(stream_id)` or
provide a string to create and use a shared stream_buffer and read it via
`pop_stream_data_from_stream_buffer('string')`.
:type stream_buffer_name: bool or str
:return: bool
"""
if stream_buffer_name is False:
with self.stream_buffer_lock:
self.stream_buffer.append(stream_data)
else:
with self.stream_buffer_locks[stream_buffer_name]:
self.stream_buffers[stream_buffer_name].append(stream_data)
self.last_entry_added_to_stream_buffer = time.time()
return True
def add_to_stream_signal_buffer(self, signal_type=False, stream_id=False, data_record=False):
"""
Add signals about a stream to the
`stream_signal_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_signal_buffer%60>`_
:param signal_type: the type of the signal: `CONNECT`, `DISCONNECT` or `FIRST_RECEIVED_DATA`
:type signal_type: str
:param stream_id: id of a stream
:type stream_id: uuid
:param data_record: The last or first received data record
:type data_record: str or dict
:return: bool
"""
if self.enable_stream_signal_buffer:
stream_signal = {'type': signal_type,
'stream_id': stream_id,
'timestamp': time.time()}
if signal_type == "CONNECT":
# nothing to add ...
pass
elif signal_type == "DISCONNECT":
try:
stream_signal['last_received_data_record'] = self.stream_list[stream_id]['last_received_data_record']
except KeyError as error_msg:
logging.critical(f"BinanceWebSocketApiManager.add_to_stream_signal_buffer({signal_type}) - "
f"Cant determine last_received_data_record! - error_msg: {error_msg}")
stream_signal['last_received_data_record'] = None
elif signal_type == "FIRST_RECEIVED_DATA":
stream_signal['first_received_data_record'] = data_record
else:
logging.error(f"BinanceWebSocketApiManager.add_to_stream_signal_buffer({signal_type}) - "
f"Received invalid `signal_type`!")
return False
with self.stream_signal_buffer_lock:
self.stream_signal_buffer.append(stream_signal)
logging.info(f"BinanceWebSocketApiManager.add_to_stream_signal_buffer({stream_signal})")
return True
else:
return False
def add_total_received_bytes(self, size):
"""
Add received bytes to the total received bytes statistic
:param size: int value of added bytes
:type size: int
"""
with self.total_received_bytes_lock:
self.total_received_bytes += int(size)
def clear_stream_buffer(self, stream_buffer_name=False):
"""
Clear the
`stream_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_buffer%60>`_
:param stream_buffer_name: `False` to read from generic stream_buffer, the stream_id if you used True in
create_stream() or the string name of a shared stream_buffer.
:type stream_buffer_name: bool or str
:return: bool
"""
if stream_buffer_name is False:
try:
self.stream_buffer.clear()
return True
except IndexError:
return False
else:
try:
with self.stream_buffer_locks[stream_buffer_name]:
self.stream_buffers[stream_buffer_name].clear()
return True
except IndexError:
return False
except KeyError:
return False
def create_payload(self, stream_id, method, channels=False, markets=False):
"""
Create the payload for subscriptions
:param stream_id: provide a stream_id
:type stream_id: uuid
:param method: `SUBSCRIBE` or `UNSUBSCRIBE`
:type method: str
:param channels: provide the channels to create the URI
:type channels: str, tuple, list, set
:param markets: provide the markets to create the URI
:type markets: str, tuple, list, set
:return: payload (list) or False
"""
logging.info("BinanceWebSocketApiManager.create_payload(" + str(stream_id) + ", " + str(channels) + ", " +
str(markets) + ") started ...")
if type(channels) is str:
channels = [channels]
if type(markets) is str:
markets = [markets]
payload = []
if self.is_exchange_type("dex"):
if method == "subscribe" and channels is not False:
for channel in channels:
add_payload = {"method": method,
"topic": channel}
symbols = []
if channel == "allMiniTickers" or \
channel == "allTickers" or \
channel == "blockheight":
add_payload["symbols"] = ["$all"]
payload.append(add_payload)
continue
if markets:
for market in markets:
if market == "allMiniTickers" or \
market == "allTickers" or \
market == "blockheight":
add_payload_from_market = {"method": method,
"topic": market,
"symbols": ["$all"]}
payload.append(add_payload_from_market)
continue
elif re.match(r'[a-zA-Z0-9]{41,43}', market) is not None:
if self.stream_list[stream_id]['dex_user_address'] is False:
self.stream_list[stream_id]['dex_user_address'] = market
else:
symbols.append(market)
try:
if self.stream_list[stream_id]["dex_user_address"] is not False:
add_payload["address"] = self.stream_list[stream_id]["dex_user_address"]
payload.append(add_payload)
except KeyError:
pass
if len(symbols) > 0:
add_payload["symbols"] = symbols
payload.append(add_payload)
elif method == "unsubscribe":
if markets:
add_payload = {"method": method}
for market in markets:
if re.match(r'[a-zA-Z0-9]{41,43}', market) is not None:
if self.stream_list[stream_id]['dex_user_address'] is False:
self.stream_list[stream_id]['dex_user_address'] = market
markets.remove(market)
if len(markets) > 0:
add_payload["symbols"] = markets
payload.append(add_payload)
if channels:
for channel in channels:
add_payload = {"method": method,
"topic": channel}
payload.append(add_payload)
else:
logging.critical("BinanceWebSocketApiManager.create_payload(" + str(stream_id) + ", "
+ str(channels) + ", " + str(markets) + ") - Allowed values for `method`: `subscribe` "
"or `unsubscribe`!")
return False
elif self.is_exchange_type("cex"):
final_market = "@arr"
if markets:
for market in markets:
if "arr@" in market:
final_market = "@" + market
final_channel = "@arr"
if channels:
for channel in channels:
if "arr@" in channel:
final_channel = "@" + channel
if method == "subscribe":
params = []
for channel in channels:
if "!" in channel:
params.append(channel + final_market)
continue
else:
for market in markets:
if "!" in market:
params.append(market + final_channel)
else:
params.append(market.lower() + "@" + channel)
if len(params) > 0:
params = list(set(params))
payload = self.split_payload(params, "SUBSCRIBE")
elif method == "unsubscribe":
if markets:
params = []
try:
for channel in self.stream_list[stream_id]['channels']:
if "!" in channel:
params.append(channel + final_market)
else:
for market in markets:
params.append(market.lower() + "@" + channel)
if len(params) > 0:
payload = self.split_payload(params, "UNSUBSCRIBE")
except KeyError:
pass
if channels:
params = []
for market in self.stream_list[stream_id]['markets']:
if "!" in market:
params.append(market + final_channel)
else:
for channel in channels:
params.append(market.lower() + "@" + channel)
if len(params) > 0:
payload = self.split_payload(params, "UNSUBSCRIBE")
else:
logging.critical("BinanceWebSocketApiManager.create_payload(" + str(stream_id) + ", "
+ str(channels) + ", " + str(markets) + ") - Allowed values for `method`: `subscribe` "
"or `unsubscribe`!")
return False
logging.info("BinanceWebSocketApiManager.create_payload(" + str(stream_id) + ", "
+ str(channels) + ", " + str(markets) + ") - Payload: " + str(payload))
logging.info("BinanceWebSocketApiManager.create_payload(" + str(stream_id) + ", " + str(channels) + ", " +
str(markets) + ") finished ...")
return payload
def create_stream(self,
channels,
markets,
stream_label=None,
stream_buffer_name=False,
api_key=False,
api_secret=False,
symbols=False,
output=False,
ping_interval=20,
ping_timeout=20,
close_timeout=10,
stream_buffer_maxlen=None):
"""
Create a websocket stream
If you provide 2 markets and 2 channels, then you are going to create 4 subscriptions (markets * channels).
Example:
channels = ['trade', 'kline_1']
markets = ['bnbbtc', 'ethbtc']
Finally: bnbbtc@trade, ethbtc@trade, bnbbtc@kline_1, ethbtc@kline_1
`There is a subscriptions limit per stream!
<https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/Binance-websocket-endpoint-configuration-overview>`_
Create `!userData` streams as single streams, because they use a different endpoint and cannot be combined
with other streams in a multiplexed stream!
Example CEX:
``binance_websocket_api_manager.create_stream(["arr"], ["!userData"], api_key="aaa", api_secret="bbb")``
Isolated Margin:
``binance_websocket_api_manager.create_stream(["arr"], ["!userData"], api_key="aaa", api_secret="bbb", symbols="ankrbtc")``
Example DEX:
``binance_websocket_api_manager.create_stream(['orders', 'transfers', 'accounts'], binance_dex_user_address)``
To create a multiplexed stream which also includes `!miniTicker@arr`, `!ticker@arr`, `!forceOrder@arr` or
`!bookTicker@arr` you just need to add `!bookTicker` to the channels list - don't add `arr` (cex) or `$all`
(dex) to the markets list.
Example:
``binance_websocket_api_manager.create_stream(['kline_5m', 'marketDepth', '!miniTicker'], ['bnbbtc'])``
But you have to add `arr` or `$all` if you want to start it as a single stream!
Example:
``binance_websocket_api_manager.create_stream(["arr"], ["!miniTicker"])``
:param channels: provide the channels you wish to stream
:type channels: str, tuple, list, set
:param markets: provide the markets you wish to stream
:type markets: str, tuple, list, set
:param stream_label: provide a stream_label to identify the stream
:type stream_label: str
:param stream_buffer_name: If `False` the data is going to get written to the default stream_buffer,
set to `True` to read the data via `pop_stream_data_from_stream_buffer(stream_id)` or
provide a string to create and use a shared stream_buffer and read it via
`pop_stream_data_from_stream_buffer('string')`.
:type stream_buffer_name: bool or str
:param api_key: provide a valid Binance API key
:type api_key: str
:param api_secret: provide a valid Binance API secret
:type api_secret: str
:param symbols: provide the symbols for isolated_margin user_data streams
:type symbols: str
:param output: the default setting `raw_data` can be globally overwritten with the parameter
`output_default <https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html?highlight=output_default#module-unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager>`_
of BinanceWebSocketApiManager. To overrule the `output_default` value for this specific stream,
set `output` to "dict" to convert the received raw data to a python dict, set to "UnicornFy" to
convert with `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_ - otherwise with
the default setting "raw_data" the output remains unchanged and gets delivered as received from
the endpoints
:type output: str
:param ping_interval: Once the connection is open, a `Ping frame` is sent every
`ping_interval` seconds. This serves as a keepalive. It helps keeping
the connection open, especially in the presence of proxies with short
timeouts on inactive connections. Set `ping_interval` to `None` to
disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type ping_interval: int or None
:param ping_timeout: If the corresponding `Pong frame` isn't received within
`ping_timeout` seconds, the connection is considered unusable and is closed with
code 1011. This ensures that the remote endpoint remains responsive. Set
`ping_timeout` to `None` to disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type ping_timeout: int or None
:param close_timeout: The `close_timeout` parameter defines a maximum wait time in seconds for
completing the closing handshake and terminating the TCP connection. (default: 10)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type close_timeout: int or None
:param stream_buffer_maxlen: Set a max len for the `stream_buffer`. Only used in combination with a non-generic
`stream_buffer`. The generic `stream_buffer` always uses the value of
`BinanceWebSocketApiManager()`.
:type stream_buffer_maxlen: int or None
:return: stream_id or 'False'
"""
# create a stream
if isinstance(channels, bool):
logging.error(f"BinanceWebSocketApiManager.create_stream(" + str(channels) + ", " + str(markets) + ", "
+ str(stream_label) + ", " + str(stream_buffer_name) + ", " + str(symbols) + ", " +
str(stream_buffer_maxlen) + ") - Parameter "
f"`channels` must be str, tuple, list or a set!")
return False
elif isinstance(markets, bool):
logging.error(f"BinanceWebSocketApiManager.create_stream(" + str(channels) + ", " + str(markets) + ", "
+ str(stream_label) + ", " + str(stream_buffer_name) + ", " + str(symbols) + ", " +
str(stream_buffer_maxlen) + ") - Parameter "
f"`markets` must be str, tuple, list or a set!")
return False
if type(channels) is str:
channels = [channels]
if type(markets) is str:
markets = [markets]
if output is False:
output = self.output_default
stream_id = uuid.uuid4()
markets_new = []
if stream_buffer_name is True:
stream_buffer_name = stream_id
for market in markets:
if "!" in market \
or market == "allMiniTickers" \
or market == "allTickers" \
or market == "blockheight" \
or market == "$all":
markets_new.append(market)
else:
if self.is_exchange_type('dex'):
if re.match(r'[a-zA-Z0-9]{41,43}', market) is None:
markets_new.append(str(market).upper())
else:
markets_new.append(str(market))
elif self.is_exchange_type('cex'):
markets_new.append(str(market).lower())
logging.info("BinanceWebSocketApiManager.create_stream(" + str(channels) + ", " + str(markets_new) + ", "
+ str(stream_label) + ", " + str(stream_buffer_name) + ", " + str(symbols) + ") with stream_id="
+ str(stream_id))
self._add_stream_to_stream_list(stream_id,
channels,
markets_new,
stream_label,
stream_buffer_name,
symbols=symbols,
api_key=api_key,
api_secret=api_secret,
output=output,
ping_interval=ping_interval,
ping_timeout=ping_timeout,
close_timeout=close_timeout,
stream_buffer_maxlen=stream_buffer_maxlen)
try:
loop = asyncio.new_event_loop()
except OSError as error_msg:
logging.critical(f"BinanceWebSocketApiManager.create_stream({str(channels)}, {str(markets_new)}, "
f"{str(stream_label)}, {str(stream_buffer_name)}, {str(symbols)}), {stream_buffer_maxlen} "
f"with stream_id="
f"{str(stream_id)} - OSError - can not create stream - error_msg: {str(error_msg)}")
return False
self.event_loops[stream_id] = loop
thread = threading.Thread(target=self._create_stream_thread, args=(loop,
stream_id,
channels,
markets_new,
stream_buffer_name,
stream_buffer_maxlen,
False))
thread.start()
return stream_id
def create_websocket_uri(self, channels, markets, stream_id=False, api_key=False, api_secret=False, symbols=False):
"""
Create a websocket URI
:param channels: provide the channels to create the URI
:type channels: str, tuple, list, set
:param markets: provide the markets to create the URI
:type markets: str, tuple, list, set
:param stream_id: provide a stream_id (only needed for userData streams to acquire a listenKey)
:type stream_id: uuid
:param api_key: provide a valid Binance API key
:type api_key: str
:param api_secret: provide a valid Binance API secret
:type api_secret: str
:param symbols: provide the symbols for isolated_margin user_data streams
:type symbols: str
:return: str or False
"""
if isinstance(channels, bool):
logging.error(f"BinanceWebSocketApiManager.create_websocket_uri({str(channels)}, {str(markets)}"
f", {str(symbols)}) - error_msg: Parameter `channels` must be str, tuple, list "
f"or a set!")
return False
elif isinstance(markets, bool):
logging.error(f"BinanceWebSocketApiManager.create_websocket_uri({str(channels)}, {str(markets)}"
f", {str(symbols)}) - error_msg: Parameter `markets` must be str, tuple, list "
f"or a set!")
return False
payload = []
if type(channels) is str:
channels = [channels]
if type(markets) is str:
markets = [markets]
if len(channels) == 1 and len(markets) == 1:
if "!userData" in channels or "!userData" in markets:
if stream_id is not False:
response = self.get_listen_key_from_restclient(stream_id, api_key, api_secret, symbols=symbols)
try:
if response['code'] == -1102 or \
response['code'] == -2008 or \
response['code'] == -2014 or \
response['code'] == -2015 or \
response['code'] == -11001:
# -1102 = Mandatory parameter 'symbol' was not sent, was empty/null, or malformed.
# -2008 = Invalid Api-Key ID
# -2014 = API-key format invalid
# -2015 = Invalid API-key, IP, or permissions for action
# -11001 = Isolated margin account does not exist.
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) +
", " + str(markets) + ", " + ", " + str(symbols) + ") - Received known "
"error code from rest client: " + str(response))
return response
else:
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) +
", " + str(markets) + ", " + ", " + str(symbols) + ") - Received unknown "
"error code from rest client: " + str(response))
return response
except KeyError:
pass
except TypeError:
pass
if response:
try:
uri = self.websocket_base_uri + "ws/" + str(response['listenKey'])
uri_hidden_secret = self.websocket_base_uri + "ws/" + self.replaced_secrets_text
if self.show_secrets_in_logs is True:
logging.info("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) +
", " + str(markets) + ", " + str(symbols) + ") - result: " + uri)
else:
logging.info("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) +
", " + str(markets) + ", " + str(symbols) + ") - result: " +
uri_hidden_secret)
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return uri
except KeyError:
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", "
+ str(markets) + ", " + ", " + str(symbols) + ") - error_msg: can not "
"create URI!!")
return False
except TypeError:
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", "
+ str(markets) + ", " + ", " + str(symbols) + ") - error_msg: can not "
"create URI!!")
return False
else:
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - error_msg: can not create "
"URI!!")
return False
else:
logging.critical("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - error_msg: can not create URI!!")
return False
elif "!bookTicker" in channels or "!bookTicker" in markets:
if stream_id:
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return self.websocket_base_uri + "ws/!bookTicker"
elif "arr" in channels or "$all" in markets:
if stream_id:
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return self.websocket_base_uri + "ws/" + markets[0] + "@" + channels[0]
elif "arr" in markets or "$all" in channels:
if stream_id:
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return self.websocket_base_uri + "ws/" + channels[0] + "@" + markets[0]
elif self.is_exchange_type("dex"):
if re.match(r'[a-zA-Z0-9]{41,43}', markets[0]) is not None:
try:
if self.stream_list[stream_id]['dex_user_address'] is False:
self.stream_list[stream_id]['dex_user_address'] = markets[0]
if self.stream_list[stream_id]['dex_user_address'] != markets[0]:
logging.error("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - Error: once set, the "
"dex_user_address is not allowed to get changed anymore!")
return False
except KeyError:
pass
add_payload = {"method": "subscribe",
"topic": channels[0],
"address": markets[0]}
payload.append(add_payload)
if stream_id:
self.stream_list[stream_id]['payload'] = payload
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return self.websocket_base_uri + "ws/" + markets[0]
elif markets[0] != "" and channels[0] != "":
return self.websocket_base_uri + "ws/" + markets[0] + "@" + channels[0]
else:
logging.error("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - Error: not able to create websocket "
"URI for DEX")
return False
if self.is_exchange_type("dex"):
query = "ws"
if stream_id:
payload = self.create_payload(stream_id, "subscribe", channels=channels, markets=markets)
self.stream_list[stream_id]['payload'] = payload
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
return self.websocket_base_uri + str(query)
else:
query = "stream?streams="
final_market = "@arr"
market = ""
channel = ""
for market in markets:
if "arr@" in market:
final_market = "@" + market
final_channel = "@arr"
for channel in channels:
if "arr@" in channel:
final_channel = "@" + channel
for channel in channels:
if channel == "!userData":
logging.error("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - Can not create "
"'outboundAccountInfo' in a multi channel socket! "
"Unfortunately Binance only stream it in a single stream socket! ./"
"Use binance_websocket_api_manager.create_stream([\"arr\"], [\"!userData\"]) to "
"initiate an extra connection.")
return False
for market in markets:
if market == "!userData":
logging.error("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - Can not create "
"'outboundAccountInfo' in a multi channel socket! "
"Unfortunatly Binance only stream it in a single stream socket! ./"
"Use binance_websocket_api_manager.create_stream([\"arr\"], [\"!userData\"]) to "
"initiate an extra connection.")
return False
if "!" in channel:
query += channel + final_market
elif "!" in market:
query += market + final_channel
else:
query += market.lower() + "@" + channel
try:
if self.subscribe_to_stream(stream_id, markets=markets, channels=channels) is False:
sys.exit(1)
except KeyError:
pass
logging.info("BinanceWebSocketApiManager.create_websocket_uri(" + str(channels) + ", " +
str(markets) + ", " + ", " + str(symbols) + ") - Created websocket URI for stream_id=" +
str(stream_id) + " is " + self.websocket_base_uri + str(query))
return self.websocket_base_uri + str(query)
def delete_listen_key_by_stream_id(self, stream_id):
"""
Delete a binance listen_key from a specific !userData stream
:param stream_id: id of a !userData stream
:type stream_id: uuid
"""
try:
if self.stream_list[stream_id]['listen_key'] is not False:
logging.info("BinanceWebSocketApiManager.delete_listen_key_by_stream_id(" + str(stream_id) + ")")
self.restclient.delete_listen_key(stream_id)
except KeyError:
return False
def delete_stream_from_stream_list(self, stream_id):
"""
Delete a stream from the stream_list
Even if a stream crashes or gets stopped, its data remains in the BinanceWebSocketApiManager till you stop the
BinanceWebSocketApiManager itself. If you want to tidy up the stream_list you can use this method.
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
logging.info("BinanceWebSocketApiManager.delete_stream_from_stream_list(" + str(stream_id) + ")")
return self.stream_list.pop(stream_id, False)
def fill_up_space_left(self, demand_of_chars, string, filling=" "):
"""
Add whitespaces to `string` to a length of `demand_of_chars` on the left side
:param demand_of_chars: how many chars does the string need to have?
:type demand_of_chars: int
:param string: the string that has to get filled up with spaces
:type string: str
:param filling: filling char (default: blank space)
:type filling: str
:return: the filled up string
"""
blanks_pre = ""
blanks_post = ""
demand_of_blanks = demand_of_chars - len(str(string)) - 1
while len(blanks_pre) < demand_of_blanks:
blanks_pre += filling
blanks_post = filling
return blanks_pre + str(string) + blanks_post
def fill_up_space_centered(self, demand_of_chars, string, filling=" "):
"""
Fill up `string` with `filling` chars to a length of `demand_of_chars`, centered
:param demand_of_chars: how many chars the string should have
:type demand_of_chars: int
:param string: the string that has to get filled up with spaces
:type string: str
:param filling: filling char (default: blank space)
:type filling: str
:return: the filled up string
"""
blanks_pre = ""
blanks_post = ""
demand_of_blanks = demand_of_chars - len(str(string)) - 1
while (len(blanks_pre)+len(blanks_post)) < demand_of_blanks:
blanks_pre += filling
if (len(blanks_pre) + len(blanks_post)) < demand_of_blanks:
blanks_post += filling
return blanks_pre + str(string) + blanks_post
def fill_up_space_right(self, demand_of_chars, string, filling=" "):
"""
Fill up `string` with `filling` chars to a length of `demand_of_chars` on the right side
:param demand_of_chars: how many chars the string should have
:type demand_of_chars: int
:param string: the string that has to get filled up with spaces
:type string: str
:param filling: filling char (default: blank space)
:type filling: str
:return: the filled up string
"""
blanks_pre = " "
blanks_post = ""
demand_of_blanks = demand_of_chars - len(str(string))
while len(blanks_post) < demand_of_blanks-1:
blanks_pre = filling
blanks_post += filling
string = blanks_pre + str(string) + blanks_post
return string[0:demand_of_chars]
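# Usage sketch (illustrative only, not part of the original API docs; `ubwa` stands for a
# hypothetical BinanceWebSocketApiManager instance):
#     ubwa.fill_up_space_left(10, "abc", ".")       # -> "......abc."
#     ubwa.fill_up_space_centered(11, "abc", "=")   # -> "====abc==="
#     ubwa.fill_up_space_right(10, "abc", ".")      # -> ".abc......"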
def get_active_stream_list(self):
"""
Get a list of all active streams
:return: dict or False
"""
# get the stream_list without stopped and crashed streams
stream_list_with_active_streams = {}
for stream_id in self.stream_list:
if self.stream_list[stream_id]['status'] == "running":
stream_list_with_active_streams[stream_id] = self.stream_list[stream_id]
try:
if len(stream_list_with_active_streams) > 0:
return stream_list_with_active_streams
else:
return False
except KeyError:
return False
except UnboundLocalError:
return False
def get_all_receives_last_second(self):
"""
Get the number of all receives of the last second
:return: int
"""
all_receives_last_second = 0
last_second_timestamp = int(time.time()) - 1
for stream_id in self.stream_list:
try:
all_receives_last_second += self.stream_list[stream_id]['receives_statistic_last_second']['entries'][
last_second_timestamp]
except KeyError:
pass
return all_receives_last_second
def get_binance_api_status(self):
"""
`get_binance_api_status()` is obsolete and will be removed in future releases, please use `get_used_weight()`
instead!
:return: dict
"""
logging.warning("`get_binance_api_status()` is obsolete and will be removed in future releases, please use"
"`get_used_weight()` instead!")
return self.binance_api_status
def get_used_weight(self):
"""
Get used_weight, last status_code and the timestamp of the last status update
:return: dict
"""
return self.binance_api_status
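# Usage sketch (illustrative; `ubwa` is a hypothetical manager instance with a running
# !userData stream, so the REST client has already populated the status dict):
#     weight_info = ubwa.get_used_weight()
#     print(f"used_weight={weight_info['weight']}, status_code={weight_info['status_code']}")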
def get_current_receiving_speed(self, stream_id):
"""
Get the receiving speed of the last second in Bytes
:param stream_id: id of a stream
:type stream_id: uuid
:return: int
"""
current_timestamp = int(time.time())
last_timestamp = current_timestamp - 1
try:
if self.stream_list[stream_id]['transfer_rate_per_second']['bytes'][last_timestamp] > 0:
self.stream_list[stream_id]['transfer_rate_per_second']['speed'] = \
self.stream_list[stream_id]['transfer_rate_per_second']['bytes'][last_timestamp]
except TypeError:
return 0
except KeyError:
return 0
try:
current_receiving_speed = self.stream_list[stream_id]['transfer_rate_per_second']['speed']
except KeyError:
current_receiving_speed = 0
return current_receiving_speed
def get_current_receiving_speed_global(self):
"""
Get the receiving speed of the last second in Bytes from all streams!
:return: int
"""
current_receiving_speed = 0
try:
temp_stream_list = copy.deepcopy(self.stream_list)
except RuntimeError as error_msg:
logging.debug(f"BinanceWebSocketApiManager.get_current_receiving_speed_global() - RuntimeError: "
f"{str(error_msg)}")
return 0
except TypeError as error_msg:
logging.debug(f"BinanceWebSocketApiManager.get_current_receiving_speed_global() - RuntimeError: "
f"{str(error_msg)}")
return 0
for stream_id in temp_stream_list:
current_receiving_speed += self.get_current_receiving_speed(stream_id)
return current_receiving_speed
@staticmethod
def get_date_of_timestamp(timestamp):
"""
Convert a timestamp into a readable date/time format for humans
:param timestamp: provide the timestamp you want to convert into a date
:type timestamp: timestamp
:return: str
"""
date = str(datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d, %H:%M:%S UTC'))
return date
def get_errors_from_endpoints(self):
"""
Get all the stored error messages from the ringbuffer sent by the endpoints.
:return: list
"""
return self.ringbuffer_error
def get_event_loop_by_stream_id(self, stream_id=False):
"""
Get the asyncio event loop used by a specific stream.
:return: asyncio event loop or False
"""
if stream_id is False:
return False
else:
return self.event_loops[stream_id]
def get_exchange(self):
"""
Get the name of the used exchange like "binance.com" or "binance.org-testnet"
:return: str
"""
return self.exchange
@staticmethod
def get_human_bytesize(bytes, suffix=""):
"""
Convert the bytes to something readable
:param bytes: amount of bytes
:type bytes: int
:param suffix: a string to append to the result
:type suffix: str
:return: str
"""
if bytes > 1024 * 1024 * 1024 * 1024:
bytes = str(round(bytes / (1024 * 1024 * 1024 * 1024), 3)) + " tB" + suffix
elif bytes > 1024 * 1024 * 1024:
bytes = str(round(bytes / (1024 * 1024 * 1024), 2)) + " gB" + suffix
elif bytes > 1024 * 1024:
bytes = str(round(bytes / (1024 * 1024), 2)) + " mB" + suffix
elif bytes > 1024:
bytes = str(round(bytes / 1024, 2)) + " kB" + suffix
else:
bytes = str(bytes) + " B" + suffix
return bytes
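# Usage sketch (illustrative; static method, so it can be called on the class directly):
#     BinanceWebSocketApiManager.get_human_bytesize(2048)                  # -> "2.0 kB"
#     BinanceWebSocketApiManager.get_human_bytesize(5 * 1024 * 1024, "/s") # -> "5.0 mB/s"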
@staticmethod
def get_human_uptime(uptime):
"""
Convert a timespan of seconds into hours, days, ...
:param uptime: Uptime in seconds
:type uptime: int
:return: str
"""
if uptime > (60 * 60 * 24):
uptime_days = int(uptime / (60 * 60 * 24))
uptime_hours = int(((uptime - (uptime_days * (60 * 60 * 24))) / (60 * 60)))
uptime_minutes = int((uptime - ((uptime_days * (60 * 60 * 24)) + (uptime_hours * 60 * 60))) / 60)
uptime_seconds = int(
uptime - ((uptime_days * (60 * 60 * 24)) + ((uptime_hours * (60 * 60)) + (uptime_minutes * 60))))
uptime = str(uptime_days) + "d:" + str(uptime_hours) + "h:" + str(int(uptime_minutes)) + "m:" + str(
int(uptime_seconds)) + "s"
elif uptime > (60 * 60):
uptime_hours = int(uptime / (60 * 60))
uptime_minutes = int((uptime - (uptime_hours * (60 * 60))) / 60)
uptime_seconds = int(uptime - ((uptime_hours * (60 * 60)) + (uptime_minutes * 60)))
uptime = str(uptime_hours) + "h:" + str(int(uptime_minutes)) + "m:" + str(int(uptime_seconds)) + "s"
elif uptime > 60:
uptime_minutes = int(uptime / 60)
uptime_seconds = uptime - uptime_minutes * 60
uptime = str(uptime_minutes) + "m:" + str(int(uptime_seconds)) + "s"
else:
uptime = str(int(uptime)) + " seconds"
return uptime
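# Usage sketch (illustrative):
#     BinanceWebSocketApiManager.get_human_uptime(125)    # -> "2m:5s"
#     BinanceWebSocketApiManager.get_human_uptime(90061)  # -> "1d:1h:1m:1s"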
@staticmethod
def get_latest_release_info():
"""
Get info about the latest available release
:return: dict or False
"""
try:
respond = requests.get('https://api.github.com/repos/oliver-zehentleitner/unicorn-binance-websocket-api/'
'releases/latest')
latest_release_info = respond.json()
return latest_release_info
except Exception:
return False
@staticmethod
def get_latest_release_info_check_command():
"""
Get info about the latest available `check_lucit_collector` release
:return: dict or False
"""
try:
respond = requests.get('https://api.github.com/repos/LUCIT-Development/check_lucit_collector.py/'
'releases/latest')
return respond.json()
except Exception:
return False
def get_latest_version(self):
"""
Get the version of the latest available release (cache time 1 hour)
:return: str or False
"""
# Do a fresh request if status is None or the last timestamp is older than 1 hour
if self.last_update_check_github['status'] is None or \
(self.last_update_check_github['timestamp']+(60*60) < time.time()):
self.last_update_check_github['status'] = self.get_latest_release_info()
if self.last_update_check_github['status']:
try:
return self.last_update_check_github['status']["tag_name"]
except KeyError:
return "unknown"
else:
return "unknown"
def get_latest_version_check_command(self):
"""
Get the version of the latest available `check_lucit_collector.py` release (cache time 1 hour)
:return: str or False
"""
# Do a fresh request if status is None or the last timestamp is older than 1 hour
if self.last_update_check_github_check_command['status'] is None or \
(self.last_update_check_github_check_command['timestamp'] + (60 * 60) < time.time()):
self.last_update_check_github_check_command['status'] = self.get_latest_release_info_check_command()
if self.last_update_check_github_check_command['status']:
try:
return self.last_update_check_github_check_command['status']["tag_name"]
except KeyError:
return "unknown"
else:
return "unknown"
def get_limit_of_subscriptions_per_stream(self):
"""
Get the number of allowed active subscriptions per stream (limit of binance API)
:return: int
"""
return self.max_subscriptions_per_stream
def get_number_of_all_subscriptions(self):
"""
Get the total number of subscriptions of all streams
:return: int
"""
subscriptions = 0
try:
active_stream_list = copy.deepcopy(self.get_active_stream_list())
if active_stream_list:
for stream_id in active_stream_list:
subscriptions += active_stream_list[stream_id]['subscriptions']
self.all_subscriptions_number = subscriptions
except TypeError:
return self.all_subscriptions_number
except RuntimeError:
return self.all_subscriptions_number
return subscriptions
def get_number_of_free_subscription_slots(self, stream_id):
"""
Get the number of free subscription slots (max allowed subscriptions - subscriptions) of a specific stream
:return: int
"""
free_slots = self.max_subscriptions_per_stream - self.stream_list[stream_id]['subscriptions']
return free_slots
def get_listen_key_from_restclient(self, stream_id, api_key, api_secret, symbols=False):
"""
Get a new or cached (<30m) listen_key
:param stream_id: provide a stream_id
:type stream_id: uuid
:param api_key: provide a valid Binance API key
:type api_key: str
:param api_secret: provide a valid Binance API secret
:type api_secret: str
:param symbols: provide the symbols for isolated_margin user_data streams
:type symbols: str
:return: str or False
"""
if (self.stream_list[stream_id]['start_time'] + self.stream_list[stream_id]['listen_key_cache_time']) > \
time.time() or (self.stream_list[stream_id]['last_static_ping_listen_key'] +
self.stream_list[stream_id]['listen_key_cache_time']) > time.time():
# listen_key is not older than 30 min
if self.stream_list[stream_id]['listen_key'] is not False:
response = {'listenKey': self.stream_list[stream_id]['listen_key']}
return response
# no cached listen_key or listen_key is older than 30 min
# acquire a new listen_key:
response = self.restclient.get_listen_key(stream_id)
if response:
# save and return the valid listen_key
try:
self.stream_list[stream_id]['listen_key'] = str(response['listenKey'])
return response
except KeyError:
# no valid listen_key, but a response from endpoint
return response
except TypeError:
return response
else:
# no valid listen_key
return False
def get_most_receives_per_second(self):
"""
Get the highest total receives per second value
:return: int
"""
return self.most_receives_per_second
def get_number_of_streams_in_stream_list(self):
"""
Get the number of streams that are stored in the stream_list
:return: int
"""
return len(self.stream_list)
def get_number_of_subscriptions(self, stream_id):
"""
Get the number of subscriptions of a specific stream
:return: int
"""
count_subscriptions = 0
for channel in self.stream_list[stream_id]['channels']:
if "!" in channel \
or channel == "orders" \
or channel == "accounts" \
or channel == "transfers" \
or channel == "allTickers" \
or channel == "allMiniTickers" \
or channel == "blockheight":
count_subscriptions += 1
continue
else:
for market in self.stream_list[stream_id]['markets']:
if "!" in market \
or market == "orders" \
or market == "accounts" \
or market == "transfers" \
or market == "allTickers" \
or market == "allMiniTickers" \
or market == "blockheight":
count_subscriptions += 1
else:
count_subscriptions += 1
return count_subscriptions
def get_keep_max_received_last_second_entries(self):
"""
Get the number of received_last_second entries that are stored until they get deleted
:return: int
"""
return self.keep_max_received_last_second_entries
def get_monitoring_status_icinga(self, check_command_version=False, warn_on_update=True):
"""
Get status and perfdata to monitor and collect metrics with ICINGA/Nagios
status: OK, WARNING, CRITICAL
- WARNING: on restarts, available updates
- CRITICAL: crashed streams
perfdata:
- average receives per second since last status check
- average speed per second since last status check
- total received bytes since start
- total received length since start
- stream_buffer size
- stream_buffer length
- reconnects
- uptime
:param check_command_version: the version of the calling `check_command <https://github.com/LUCIT-Development/check_lucit_collector.py>`_
:type check_command_version: str
:param warn_on_update: set to `False` to disable the update warning
:type warn_on_update: bool
:return: dict (text, time, return_code)
"""
result = self.get_monitoring_status_plain(check_command_version=check_command_version,
warn_on_update=warn_on_update)
if len(result['update_msg']) > 0 or len(result['status_msg']) > 0:
text_msg = " -" + str(result['status_msg']) + str(result['update_msg'])
else:
text_msg = ""
check_message = "BINANCE WEBSOCKETS (" + self.exchange + ") - " + result['status_text'] + ": O:" + \
str(result['active_streams']) + \
"/R:" + str(result['restarting_streams']) + "/C:" + str(result['crashed_streams']) + "/S:" + \
str(result['stopped_streams']) + text_msg + " | " + \
"active streams=" + str(result['active_streams']) + ";;;0 " + \
"average_receives_per_second=" + str(result['average_receives_per_second']) + \
";;;0 current_receiving_speed_per_second=" + str(result['average_speed_per_second']) + \
"KB;;;0 total_received_length=" + str(result['total_received_length']) + "c;;;0 total_" \
"received_size=" + str(result['total_received_mb']) + "MB;;;0 stream_buffer_size=" + \
str(result['stream_buffer_mb']) + "MB;;;0 stream_buffer_length=" + \
str(result['stream_buffer_items']) + ";;;0 reconnects=" + str(result['reconnects']) + "c;;;0 " \
"uptime_days=" + str(result['uptime']) + "c;;;0"
status = {'text': check_message,
'time': int(result['timestamp']),
'return_code': result['return_code']}
return status
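# Usage sketch (illustrative; a hypothetical minimal Nagios/ICINGA style check wrapper):
#     status = ubwa.get_monitoring_status_icinga(warn_on_update=False)
#     print(status['text'])
#     sys.exit(status['return_code'])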
def get_monitoring_status_plain(self, check_command_version=False, warn_on_update=True):
"""
Get plain monitoring status data:
active_streams, crashed_streams, restarting_streams, stopped_streams, return_code, status_text,
timestamp, update_msg, average_receives_per_second, average_speed_per_second, total_received_mb,
stream_buffer_items, stream_buffer_mb, reconnects, uptime
:param check_command_version: the version of the calling `check_command <https://github.com/LUCIT-Development/check_lucit_collector.py>`_
:type check_command_version: False or str
:param warn_on_update: set to `False` to disable the update warning
:type warn_on_update: bool
:return: dict
"""
result = {}
result['active_streams'] = 0
result['crashed_streams'] = 0
result['restarting_streams'] = 0
result['highest_restart_per_stream_last_hour'] = 0
result['return_code'] = 0
result['status_text'] = "OK"
result['status_msg'] = ""
result['stopped_streams'] = 0
result['timestamp'] = time.time()
result['update_msg'] = ""
time_period = result['timestamp'] - self.last_monitoring_check
timestamp_last_hour = time.time() - (60*60)
try:
from unicorn_fy.unicorn_fy import UnicornFy
unicorn_fy = UnicornFy()
is_update_available_unicorn_fy = unicorn_fy.is_update_availabe()
except ModuleNotFoundError:
logging.critical("BinanceWebSocketApiManager.get_monitoring_status_plain() - UnicornFy not installed!")
is_update_available_unicorn_fy = False
except AttributeError:
logging.error("BinanceWebSocketApiManager.get_monitoring_status_plain() - UnicornFy outdated!")
is_update_available_unicorn_fy = True
if check_command_version:
is_update_available_check_command = self.is_update_availabe_check_command(
check_command_version=check_command_version)
else:
is_update_available_check_command = True
for stream_id in self.stream_list:
stream_restarts_last_hour = 0
for reconnect in self.stream_list[stream_id]['logged_reconnects']:
if reconnect > timestamp_last_hour:
stream_restarts_last_hour += 1
if stream_restarts_last_hour > result['highest_restart_per_stream_last_hour']:
result['highest_restart_per_stream_last_hour'] = stream_restarts_last_hour
for stream_id in self.stream_list:
if self.stream_list[stream_id]['status'] == "running":
result['active_streams'] += 1
elif self.stream_list[stream_id]['status'] == "stopped":
result['stopped_streams'] += 1
elif self.stream_list[stream_id]['status'] == "restarting":
result['restarting_streams'] += 1
elif "crashed" in self.stream_list[stream_id]['status']:
result['crashed_streams'] += 1
if self.is_update_availabe() and is_update_available_unicorn_fy and is_update_available_check_command:
result['update_msg'] = " Update available: UNICORN Binance WebSocket API, UnicornFy and " \
"check_lucit_collector.py!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif self.is_update_availabe() and is_update_available_unicorn_fy:
result['update_msg'] = " Update available: UNICORN Binance WebSocket API and UnicornFy"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif self.is_update_availabe() and is_update_available_check_command:
result['update_msg'] = " Update available: UNICORN Binance WebSocket API and check_lucit_collector.py!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif is_update_available_unicorn_fy and is_update_available_check_command:
result['update_msg'] = " Update available: UnicornFy and check_lucit_collector.py!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif self.is_update_availabe():
result['update_msg'] = " Update " + str(self.get_latest_version()) + " available!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif is_update_available_unicorn_fy:
result['update_msg'] = " Update UnicornFy " + str(unicorn_fy.get_latest_version()) + " available!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
elif is_update_available_check_command:
result['update_msg'] = " Update `check_lucit_collector.py` " + \
str(self.get_latest_version_check_command()) + " available!"
if warn_on_update is True:
result['status_text'] = "WARNING"
result['return_code'] = 1
if result['highest_restart_per_stream_last_hour'] >= 10:
result['status_text'] = "CRITICAL"
result['return_code'] = 2
result['status_msg'] = " Restart rate per stream last hour: " + \
str(result['highest_restart_per_stream_last_hour'])
elif result['crashed_streams'] > 0:
result['status_text'] = "CRITICAL"
result['return_code'] = 2
elif result['highest_restart_per_stream_last_hour'] >= 3:
result['status_text'] = "WARNING"
result['return_code'] = 1
result['status_msg'] = " Restart rate per stream last hour: " + \
str(result['highest_restart_per_stream_last_hour'])
result['average_receives_per_second'] = ((self.total_receives - self.monitoring_total_receives) /
time_period).__round__(2)
result['average_speed_per_second'] = (((self.total_received_bytes - self.monitoring_total_received_bytes) /
time_period) / 1024).__round__(2)
result['total_received_mb'] = (self.get_total_received_bytes() / (1024 * 1024)).__round__(2)
result['total_received_length'] = self.total_receives
result['stream_buffer_items'] = str(self.get_stream_buffer_length())
result['stream_buffer_mb'] = (self.get_stream_buffer_byte_size() / (1024 * 1024)).__round__(4)
result['reconnects'] = self.get_reconnects()
self.monitoring_total_receives = self.get_total_receives()
self.monitoring_total_received_bytes = self.get_total_received_bytes()
self.last_monitoring_check = result['timestamp']
result['uptime'] = ((result['timestamp'] - self.start_time) / (60*60*24)).__round__(3)
return result
def get_process_usage_memory(self):
"""
Get the used memory of this process
:return: str
"""
process = psutil.Process(os.getpid())
memory = self.get_human_bytesize(process.memory_info()[0])
return memory
def get_process_usage_cpu(self):
"""
Get the used cpu power of this process
:return: int
"""
try:
cpu = psutil.cpu_percent(interval=None)
except OSError as error_msg:
logging.error(f"BinanceWebSocketApiManager.get_process_usage_cpu() - OSError - error_msg: {str(error_msg)}")
return False
return cpu
def get_process_usage_threads(self):
"""
Get the number of threads that this process is using
:return: int
"""
threads = threading.active_count()
return threads
def get_reconnects(self):
"""
Get the number of total reconnects
:return: int
"""
return self.reconnects
def get_request_id(self):
"""
Get a unique `request_id`
:return: int
"""
with self.request_id_lock:
self.request_id += 1
return self.request_id
def get_result_by_request_id(self, request_id=False, timeout=10):
"""
Get the result related to the provided `request_id`
:param request_id: if you run `get_stream_subscriptions()
<https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.get_stream_subscriptions>`_
it returns a unique `request_id` - provide it to this method to receive the result.
:type request_id: int
:param timeout: seconds to wait to receive the result - if it does not arrive in time, `False` is returned
:type timeout: int
:return: `result` or False
"""
if request_id is False:
return False
wait_till_timestamp = time.time() + timeout
while wait_till_timestamp >= time.time():
for result in self.ringbuffer_result:
result_dict = json.loads(result)
if result_dict['id'] == request_id:
return result
return False
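# Usage sketch (illustrative; `stream_id` is assumed to belong to a running CEX stream,
# e.g. returned by an earlier create_stream() call):
#     request_id = ubwa.get_stream_subscriptions(stream_id)
#     result = ubwa.get_result_by_request_id(request_id, timeout=10)
#     if result is False:
#         print("no result received within the timeout")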
def get_results_from_endpoints(self):
"""
Get all the stored result messages from the ringbuffer sent by the endpoints.
:return: list
"""
return self.ringbuffer_result
def get_ringbuffer_error_max_size(self):
"""
Get the maximum number of entries to be stored in the error ringbuffer
:return: int
"""
return self.ringbuffer_error_max_size
def get_ringbuffer_result_max_size(self):
"""
Get the maximum number of entries to be stored in the result ringbuffer
:return: int
"""
return self.ringbuffer_result_max_size
def get_start_time(self):
"""
Get the start_time of the BinanceWebSocketApiManager instance
:return: timestamp
"""
return self.start_time
def get_stream_buffer_byte_size(self):
"""
Get the current byte size estimation of the stream_buffer
:return: int
"""
total_received_bytes = self.get_total_received_bytes()
total_receives = self.get_total_receives()
stream_buffer_length = self.get_stream_buffer_length()
return round(total_received_bytes / total_receives * stream_buffer_length)
def get_stream_buffer_length(self):
"""
Get the current number of items in all stream_buffer
:return: int
"""
number = 0
number += len(self.stream_buffer)
for stream_buffer_name in self.stream_buffers:
number += len(self.stream_buffers[stream_buffer_name])
return number
def get_stream_id_by_label(self, stream_label=False):
"""
Get the stream_id of a specific stream by stream label
:param stream_label: stream_label of the stream you search
:type stream_label: str
:return: stream_id or False
"""
if stream_label:
for stream_id in self.stream_list:
if self.stream_list[stream_id]['stream_label'] == stream_label:
return stream_id
return False
def get_stream_info(self, stream_id):
"""
Get all infos about a specific stream
:param stream_id: id of a stream
:type stream_id: uuid
:return: set
"""
current_timestamp = time.time()
try:
temp_stream_list = copy.deepcopy(self.stream_list[stream_id])
except RuntimeError:
logging.error("BinanceWebSocketApiManager.get_stream_info(" + str(stream_id) + ") Info: RuntimeError")
return self.get_stream_info(stream_id)
except KeyError:
logging.error("BinanceWebSocketApiManager.get_stream_info(" + str(stream_id) + ") Info: KeyError")
return False
if temp_stream_list['last_heartbeat'] is not None:
temp_stream_list['seconds_to_last_heartbeat'] = \
current_timestamp - self.stream_list[stream_id]['last_heartbeat']
if temp_stream_list['has_stopped'] is not False:
temp_stream_list['seconds_since_has_stopped'] = \
int(current_timestamp) - int(self.stream_list[stream_id]['has_stopped'])
try:
self.stream_list[stream_id]['processed_receives_statistic'] = self.get_stream_statistic(stream_id)
except ZeroDivisionError:
pass
self.stream_list[stream_id]['transfer_rate_per_second']['speed'] = self.get_current_receiving_speed(stream_id)
return temp_stream_list
def get_stream_label(self, stream_id=False):
"""
Get the stream_label of a specific stream
:param stream_id: id of a stream
:type stream_id: uuid
:return: str or False
"""
if stream_id:
return self.stream_list[stream_id]['stream_label']
else:
return False
def get_stream_subscriptions(self, stream_id, request_id=False):
"""
Get a list of subscriptions of a specific stream from Binance endpoints - the result can be received via
the `stream_buffer` and is also added to the results ringbuffer - `get_results_from_endpoints()
<https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.get_results_from_endpoints>`_
to get all results or use `get_result_by_request_id(request_id)
<https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.get_result_by_request_id>`_
to get a specific one!
This function is supported by CEX endpoints only!
Info: https://github.com/binance-exchange/binance-official-api-docs/blob/master/web-socket-streams.md#listing-subscriptions
:param stream_id: id of a stream
:type stream_id: uuid
:param request_id: id to use for the request - use `get_request_id()` to create a unique id. If not provided or
`False`, then this method is using `get_request_id()
<https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.get_request_id>`_
automatically.
:type request_id: int
:return: request_id (int)
"""
if request_id is False:
request_id = self.get_request_id()
if self.is_exchange_type('dex'):
logging.error("BinanceWebSocketApiManager.get_stream_subscriptions(" + str(stream_id) + ", " +
str(request_id) + ") DEX websockets dont support the listing of subscriptions! Request not "
"sent!")
return False
elif self.is_exchange_type('cex'):
payload = {"method": "LIST_SUBSCRIPTIONS",
"id": request_id}
self.stream_list[stream_id]['payload'].append(payload)
logging.info("BinanceWebSocketApiManager.get_stream_subscriptions(" + str(stream_id) + ", " +
str(request_id) + ") payload added!")
return request_id
else:
return False
def get_stream_list(self):
"""
Get a list of all streams
:return: set
"""
# get the stream list
temp_stream_list = {}
for stream_id in self.stream_list:
temp_stream_list[stream_id] = self.get_stream_info(stream_id)
return temp_stream_list
def get_stream_buffer_maxlen(self, stream_buffer_name=False):
"""
Get the maxlen value of the
`stream_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_buffer%60>`_
If maxlen is not specified or is None, `stream_buffer` may grow to an arbitrary length. Otherwise, the
`stream_buffer` is bounded to the specified maximum length. Once a bounded length `stream_buffer` is full, when
new items are added, a corresponding number of items are discarded from the opposite end.
:param stream_buffer_name: `False` to read from generic stream_buffer, the stream_id if you used True in
create_stream() or the string name of a shared stream_buffer.
:type stream_buffer_name: bool or str
:return: int or False
"""
if stream_buffer_name is False:
try:
return self.stream_buffer.maxlen
except IndexError:
return False
else:
try:
return self.stream_buffers[stream_buffer_name].maxlen
except IndexError:
return False
except KeyError:
return False
def get_stream_receives_last_second(self, stream_id):
"""
Get the number of receives of a specific stream from the last second
:param stream_id: id of a stream
:type stream_id: uuid
:return: int
"""
last_second_timestamp = int(time.time()) - 1
try:
return self.stream_list[stream_id]['receives_statistic_last_second']['entries'][last_second_timestamp]
except KeyError:
return 0
def get_stream_statistic(self, stream_id):
"""
Get the statistic of a specific stream
:param stream_id: id of a stream
:type stream_id: uuid
:return: set
"""
stream_statistic = {'stream_receives_per_second': 0,
'stream_receives_per_minute': 0,
'stream_receives_per_hour': 0,
'stream_receives_per_day': 0,
'stream_receives_per_month': 0,
'stream_receives_per_year': 0}
if self.stream_list[stream_id]['status'] == "running":
stream_statistic['uptime'] = time.time() - self.stream_list[stream_id]['start_time']
elif self.stream_list[stream_id]['status'] == "stopped":
stream_statistic['uptime'] = self.stream_list[stream_id]['has_stopped'] - self.stream_list[stream_id]['start_time']
elif "crashed" in self.stream_list[stream_id]['status']:
stream_statistic['uptime'] = self.stream_list[stream_id]['has_stopped'] - self.stream_list[stream_id]['start_time']
elif self.stream_list[stream_id]['status'] == "restarting":
stream_statistic['uptime'] = time.time() - self.stream_list[stream_id]['start_time']
else:
stream_statistic['uptime'] = time.time() - self.stream_list[stream_id]['start_time']
try:
stream_receives_per_second = self.stream_list[stream_id]['processed_receives_total'] / stream_statistic['uptime']
except ZeroDivisionError:
stream_receives_per_second = 0
stream_statistic['stream_receives_per_second'] = stream_receives_per_second
if stream_statistic['uptime'] > 60:
stream_statistic['stream_receives_per_minute'] = stream_receives_per_second * 60
if stream_statistic['uptime'] > 60 * 60:
stream_statistic['stream_receives_per_hour'] = stream_receives_per_second * 60 * 60
if stream_statistic['uptime'] > 60 * 60 * 24:
stream_statistic['stream_receives_per_day'] = stream_receives_per_second * 60 * 60 * 24
if stream_statistic['uptime'] > 60 * 60 * 24 * 30:
stream_statistic['stream_receives_per_month'] = stream_receives_per_second * 60 * 60 * 24 * 30
if stream_statistic['uptime'] > 60 * 60 * 24 * 30 * 12:
stream_statistic['stream_receives_per_year'] = stream_receives_per_second * 60 * 60 * 24 * 30 * 12
return stream_statistic
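# Usage sketch (illustrative; `stream_id` is assumed to belong to a running stream):
#     statistic = ubwa.get_stream_statistic(stream_id)
#     print(round(statistic['stream_receives_per_second'], 2), "receives per second")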
def get_total_received_bytes(self):
"""
Get number of total received bytes
:return: int
"""
# how many bytes have we received till now?
return self.total_received_bytes
def get_total_receives(self):
"""
Get the number of total receives
:return: int
"""
return self.total_receives
def get_user_agent(self):
"""
Get the user_agent string "lib name + lib version + python version"
:return: str
"""
user_agent = f"{self.name}_{str(self.get_version())}-python_{str(platform.python_version())}"
return user_agent
def get_version(self):
"""
Get the package/module version
:return: str
"""
return self.version
def get_version_unicorn_fy(self):
"""
Get the package/module version of `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_
:return: str
"""
from unicorn_fy.unicorn_fy import UnicornFy
unicorn_fy = UnicornFy()
return unicorn_fy.get_version()
@staticmethod
def help():
"""
Help in iPython
"""
print("Ctrl+D to close")
def increase_received_bytes_per_second(self, stream_id, size):
"""
Add the amount of received bytes per second
:param stream_id: id of a stream
:type stream_id: uuid
:param size: amount of bytes to add
:type size: int
"""
current_timestamp = int(time.time())
try:
if self.stream_list[stream_id]['transfer_rate_per_second']['bytes'][current_timestamp]:
pass
except KeyError:
self.stream_list[stream_id]['transfer_rate_per_second']['bytes'][current_timestamp] = 0
try:
self.stream_list[stream_id]['transfer_rate_per_second']['bytes'][current_timestamp] += size
except KeyError:
pass
def increase_processed_receives_statistic(self, stream_id):
"""
Add the number of processed receives
:param stream_id: id of a stream
:type stream_id: uuid
"""
current_timestamp = int(time.time())
try:
self.stream_list[stream_id]['processed_receives_total'] += 1
except KeyError:
return False
try:
with self.stream_threading_lock[stream_id]['receives_statistic_last_second_lock']:
self.stream_list[stream_id]['receives_statistic_last_second']['entries'][current_timestamp] += 1
except KeyError:
with self.stream_threading_lock[stream_id]['receives_statistic_last_second_lock']:
self.stream_list[stream_id]['receives_statistic_last_second']['entries'][current_timestamp] = 1
with self.total_receives_lock:
self.total_receives += 1
def increase_reconnect_counter(self, stream_id):
"""
Increase reconnect counter
:param stream_id: id of a stream
:type stream_id: uuid
"""
self.stream_list[stream_id]['logged_reconnects'].append(time.time())
self.stream_list[stream_id]['reconnects'] += 1
with self.reconnects_lock:
self.reconnects += 1
def increase_transmitted_counter(self, stream_id):
"""
Increase the counter of transmitted payloads
:param stream_id: id of a stream
:type stream_id: uuid
"""
self.stream_list[stream_id]['processed_transmitted_total'] += 1
with self.total_transmitted_lock:
self.total_transmitted += 1
def is_manager_stopping(self):
"""
Returns `True` if the manager has a stop request, `False` if not.
:return: bool
"""
if self.stop_manager_request is None:
return False
else:
return True
def is_exchange_type(self, exchange_type=False):
"""
Check the exchange type!
:param exchange_type: Valid types are `dex` and `cex`!
:type exchange_type: str
:return: bool
"""
if exchange_type is False:
return False
if self.exchange == "binance.org" or \
self.exchange == "binance.org-testnet":
is_type = "dex"
elif self.exchange == "binance.com" or \
self.exchange == "binance.com-testnet" or \
self.exchange == "binance.com-margin" or \
self.exchange == "binance.com-margin-testnet" or \
self.exchange == "binance.com-isolated_margin" or \
self.exchange == "binance.com-isolated_margin-testnet" or \
self.exchange == "binance.com-futures" or \
self.exchange == "binance.com-futures-testnet" or \
self.exchange == "binance.com-coin-futures" or \
self.exchange == "binance.je" or \
self.exchange == "binance.us" or \
self.exchange == "trbinance.com" or \
self.exchange == "jex.com":
is_type = "cex"
else:
logging.critical(f"BinanceWebSocketApiManager.is_exchange_type() - Can not determine exchange type for"
f"exchange={str(self.exchange)}")
return False
if is_type == exchange_type:
return True
else:
return False
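# Usage sketch (illustrative): branch on the exchange type before using CEX-only features.
#     if ubwa.is_exchange_type("cex"):
#         ubwa.get_stream_subscriptions(stream_id)
#     elif ubwa.is_exchange_type("dex"):
#         pass  # DEX endpoints do not support LIST_SUBSCRIPTIONS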
def is_stop_request(self, stream_id, exclude_kill_requests=False):
"""
Does a specific stream have a stop_request?
:param stream_id: id of a stream
:type stream_id: uuid
:param exclude_kill_requests: if `True` this method returns `False` on kill_requests
:type exclude_kill_requests: bool
:return: bool
"""
logging.debug("BinanceWebSocketApiManager.is_stop_request(" + str(stream_id) + ")")
try:
if self.stream_list[stream_id]['stop_request'] is True:
return True
elif self.is_manager_stopping():
return True
elif self.stream_list[stream_id]['kill_request'] is True and exclude_kill_requests is False:
return True
else:
return False
except KeyError:
return False
def is_stop_as_crash_request(self, stream_id):
"""
Does a specific stream have a stop_as_crash_request?
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
logging.debug("BinanceWebSocketApiManager.is_stop_as_crash_request(" + str(stream_id) + ")")
try:
if self.stream_list[stream_id]['crash_request'] is True:
return True
except KeyError:
pass
if self.is_manager_stopping():
return True
else:
return False
def is_update_availabe(self):
"""
Is a new release of this package available?
:return: bool
"""
installed_version = self.get_version()
if ".dev" in installed_version:
installed_version = installed_version[:-4]
if self.get_latest_version() == installed_version:
return False
elif self.get_latest_version() == "unknown":
return False
else:
return True
def is_update_availabe_unicorn_fy(self):
"""
Is a new release of `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_ available?
:return: bool
"""
from unicorn_fy.unicorn_fy import UnicornFy
unicorn_fy = UnicornFy()
return unicorn_fy.is_update_availabe()
def is_update_availabe_check_command(self, check_command_version=False):
"""
Is a new release of `check_lucit_collector.py` available?
:return: bool
"""
installed_version = check_command_version
latest_version = self.get_latest_version_check_command()
if ".dev" in str(installed_version):
installed_version = installed_version[:-4]
if latest_version == installed_version:
return False
elif latest_version == "unknown":
return False
else:
return True
def kill_stream(self, stream_id):
"""
Kill a specific stream
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
# stop a specific stream by stream_id
logging.info("BinanceWebSocketApiManager.kill_stream(" + str(stream_id) + ")")
self.stream_list[stream_id]['kill_request'] = True
def pop_stream_data_from_stream_buffer(self, stream_buffer_name=False, mode="FIFO"):
"""
Get oldest or latest entry from
`stream_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_buffer%60>`_
and remove from FIFO/LIFO stack.
:param stream_buffer_name: `False` to read from generic stream_buffer, the stream_id if you used True in
create_stream() or the string name of a shared stream_buffer.
:type stream_buffer_name: bool or str
:param mode: How to read from the `stream_buffer` - "FIFO" (default) or "LIFO".
:type mode: str
:return: stream_data - str, dict or False
"""
if stream_buffer_name is False:
try:
with self.stream_buffer_lock:
if mode.upper() == "FIFO":
stream_data = self.stream_buffer.popleft()
elif mode.upper() == "LIFO":
stream_data = self.stream_buffer.pop()
else:
return False
return stream_data
except IndexError:
return False
else:
try:
with self.stream_buffer_locks[stream_buffer_name]:
if mode.upper() == "FIFO":
stream_data = self.stream_buffers[stream_buffer_name].popleft()
elif mode.upper() == "LIFO":
stream_data = self.stream_buffers[stream_buffer_name].pop()
else:
return False
return stream_data
except IndexError:
return False
except KeyError:
return False
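# Usage sketch (illustrative): a typical consumer loop over the generic stream_buffer,
# usually run in its own worker thread (`ubwa` and `process()` are hypothetical names):
#     while True:
#         data = ubwa.pop_stream_data_from_stream_buffer()
#         if data is False:
#             time.sleep(0.01)
#         else:
#             process(data)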
def pop_stream_signal_from_stream_signal_buffer(self):
"""
Get oldest entry from
`stream_signal_buffer <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/%60stream_signal_buffer%60>`_
and remove from stack (FIFO stack)
:return: stream_signal - dict or False
"""
try:
with self.stream_signal_buffer_lock:
stream_signal = self.stream_signal_buffer.pop(0)
return stream_signal
except IndexError:
return False
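# Usage sketch (illustrative): poll the stream_signal_buffer the same way as the
# stream_buffer (assuming stream signals were enabled when the manager was created):
#     while True:
#         signal = ubwa.pop_stream_signal_from_stream_signal_buffer()
#         if signal is False:
#             time.sleep(0.1)
#         else:
#             print(signal)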
def print_stream_info(self, stream_id, add_string=""):
"""
Print all info about a specific stream; helps debugging :)
:param stream_id: id of a stream
:type stream_id: uuid
:param add_string: text to add to the output
:type add_string: str
:return: bool
"""
restart_requests_row = ""
binance_api_status_row = ""
stream_label_row = ""
status_row = ""
payload_row = ""
symbol_row = ""
dex_user_address_row = ""
last_static_ping_listen_key = ""
stream_info = self.get_stream_info(stream_id)
stream_row_color_prefix = ""
stream_row_color_suffix = ""
if len(add_string) > 0:
add_string = " " + str(add_string) + "\r\n"
try:
if len(self.stream_list[stream_id]['logged_reconnects']) > 0:
logged_reconnects_row = "\r\n logged_reconnects: "
row_prefix = ""
for timestamp in self.stream_list[stream_id]['logged_reconnects']:
logged_reconnects_row += row_prefix + \
datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d, %H:%M:%S UTC')
row_prefix = ", "
else:
logged_reconnects_row = ""
except KeyError:
return False
if "running" in stream_info['status']:
stream_row_color_prefix = "\033[1m\033[32m"
stream_row_color_suffix = "\033[0m\r\n"
for reconnect_timestamp in self.stream_list[stream_id]['logged_reconnects']:
if (time.time() - reconnect_timestamp) < 2:
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m\r\n"
status_row = stream_row_color_prefix + " status: " + str(stream_info['status']) + stream_row_color_suffix
elif "crashed" in stream_info['status']:
stream_row_color_prefix = "\033[1m\033[31m"
stream_row_color_suffix = "\033[0m\r\n"
status_row = stream_row_color_prefix + " status: " + str(stream_info['status']) + stream_row_color_suffix
elif "restarting" in stream_info['status']:
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m\r\n"
status_row = stream_row_color_prefix + " status: " + str(stream_info['status']) + stream_row_color_suffix
elif "stopped" in stream_info['status']:
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m\r\n"
status_row = stream_row_color_prefix + " status: " + str(stream_info['status']) + stream_row_color_suffix
try:
if self.restart_requests[stream_id]['status']:
restart_requests_row = " restart_request: " + self.restart_requests[stream_id]['status'] + "\r\n"
except KeyError:
pass
if self.stream_list[stream_id]['markets'] == "!userData":
last_static_ping_listen_key = " last_static_ping_listen_key: " + \
str(self.stream_list[stream_id]['last_static_ping_listen_key']) + "\r\n"
if self.binance_api_status['status_code'] == 200:
binance_api_status_code = str(self.binance_api_status['status_code'])
elif self.binance_api_status['status_code'] == 418:
binance_api_status_code = "\033[1m\033[31m" + str(self.binance_api_status['status_code']) + "\033[0m"
else:
binance_api_status_code = "\033[1m\033[33m" + str(self.binance_api_status['status_code']) + "\033[0m"
binance_api_status_row = " binance_api_status: used_weight=" + str(self.binance_api_status['weight']) + \
", status_code=" + str(binance_api_status_code) + " (last update " + \
str(datetime.utcfromtimestamp(
self.binance_api_status['timestamp']).strftime('%Y-%m-%d, %H:%M:%S UTC')) + \
")\r\n"
current_receiving_speed = str(self.get_human_bytesize(self.get_current_receiving_speed(stream_id), "/s"))
if self.stream_list[stream_id]['symbols'] is not False:
symbol_row = " symbols:" + str(stream_info['symbols']) + "\r\n"
if self.stream_list[stream_id]["payload"]:
payload_row = " payload: " + str(self.stream_list[stream_id]["payload"]) + "\r\n"
if self.stream_list[stream_id]["dex_user_address"] is not False:
dex_user_address_row = " user_address: " + str(self.stream_list[stream_id]["dex_user_address"]) + "\r\n"
if self.stream_list[stream_id]["stream_label"] is not None:
stream_label_row = " stream_label: " + self.stream_list[stream_id]["stream_label"] + "\r\n"
if isinstance(stream_info['ping_interval'], int):
ping_interval = f"{stream_info['ping_interval']} seconds"
else:
ping_interval = stream_info['ping_interval']
if isinstance(stream_info['ping_timeout'], int):
ping_timeout = f"{stream_info['ping_timeout']} seconds"
else:
ping_timeout = stream_info['ping_timeout']
if isinstance(stream_info['close_timeout'], int):
close_timeout = f"{stream_info['close_timeout']} seconds"
else:
close_timeout = stream_info['close_timeout']
try:
uptime = self.get_human_uptime(stream_info['processed_receives_statistic']['uptime'])
print(str(self.fill_up_space_centered(96, f" {self.get_user_agent()} ", "=")) + "\r\n" +
" exchange:", str(self.stream_list[stream_id]['exchange']), "\r\n" +
str(add_string) +
" stream_id:", str(stream_id), "\r\n" +
str(stream_label_row) +
" stream_buffer_maxlen:", str(stream_info['stream_buffer_maxlen']), "\r\n" +
" channels (" + str(len(stream_info['channels'])) + "):", str(stream_info['channels']), "\r\n" +
" markets (" + str(len(stream_info['markets'])) + "):", str(stream_info['markets']), "\r\n" +
str(symbol_row) +
" subscriptions: " + str(self.stream_list[stream_id]['subscriptions']) + "\r\n" +
str(payload_row) +
str(status_row) +
str(dex_user_address_row) +
f" ping_interval: {ping_interval}\r\n"
f" ping_timeout: {ping_timeout}\r\n"
f" close_timeout: {close_timeout}\r\n"
" start_time:", str(stream_info['start_time']), "\r\n"
" uptime:", str(uptime),
"since " + str(
datetime.utcfromtimestamp(stream_info['start_time']).strftime('%Y-%m-%d, %H:%M:%S UTC')) +
"\r\n" +
" reconnects:", str(stream_info['reconnects']), logged_reconnects_row, "\r\n" +
str(restart_requests_row) +
str(binance_api_status_row) +
str(last_static_ping_listen_key) +
" last_heartbeat:", str(stream_info['last_heartbeat']), "\r\n"
" seconds_to_last_heartbeat:", str(stream_info['seconds_to_last_heartbeat']), "\r\n"
" kill_request:", str(stream_info['kill_request']), "\r\n"
" stop_request:", str(stream_info['stop_request']), "\r\n"
" has_stopped:", str(stream_info['has_stopped']), "\r\n"
" seconds_since_has_stopped:",
str(stream_info['seconds_since_has_stopped']), "\r\n"
" current_receiving_speed:", str(current_receiving_speed), "\r\n" +
" processed_receives:", str(stream_info['processed_receives_total']), "\r\n" +
" transmitted_payloads:", str(self.stream_list[stream_id]['processed_transmitted_total']), "\r\n" +
" stream_most_receives_per_second:",
str(stream_info['receives_statistic_last_second']['most_receives_per_second']), "\r\n"
" stream_receives_per_second:",
str(stream_info['processed_receives_statistic']['stream_receives_per_second'].__round__(3)), "\r\n"
" stream_receives_per_minute:",
str(stream_info['processed_receives_statistic']['stream_receives_per_minute'].__round__(3)), "\r\n"
" stream_receives_per_hour:",
str(stream_info['processed_receives_statistic']['stream_receives_per_hour'].__round__(3)), "\r\n"
" stream_receives_per_day:",
str(stream_info['processed_receives_statistic']['stream_receives_per_day'].__round__(3)), "\r\n"
"===============================================================================================\r\n")
except KeyError:
self.print_stream_info(stream_id)
def print_summary(self, add_string="", disable_print=False):
"""
Print an overview of all streams
:param add_string: text to add to the output
:type add_string: str
:param disable_print: set to `True` to use curses instead of print()
:type disable_print: bool
"""
streams = len(self.stream_list)
active_streams = 0
crashed_streams = 0
restarting_streams = 0
stopped_streams = 0
active_streams_row = ""
restarting_streams_row = ""
stopped_streams_row = ""
all_receives_per_second = 0.0
current_receiving_speed = 0
streams_with_stop_request = 0
stream_rows = ""
crashed_streams_row = ""
binance_api_status_row = ""
received_bytes_per_x_row = ""
streams_with_stop_request_row = ""
stream_buffer_row = ""
highest_receiving_speed_row = f"{str(self.get_human_bytesize(self.receiving_speed_peak['value'], '/s'))} " \
f"(reached at " \
f"{self.get_date_of_timestamp(self.receiving_speed_peak['timestamp'])})"
if len(add_string) > 0:
add_string = " " + str(add_string) + "\r\n"
try:
temp_stream_list = copy.deepcopy(self.stream_list)
except RuntimeError:
return ""
for stream_id in temp_stream_list:
stream_row_color_prefix = ""
stream_row_color_suffix = ""
current_receiving_speed += self.get_current_receiving_speed(stream_id)
stream_statistic = self.get_stream_statistic(stream_id)
if self.stream_list[stream_id]['status'] == "running":
active_streams += 1
all_receives_per_second += stream_statistic['stream_receives_per_second']
try:
if self.restart_requests[stream_id]['status'] == "restarted":
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m"
except KeyError:
pass
try:
for reconnect_timestamp in self.stream_list[stream_id]['logged_reconnects']:
if (time.time() - reconnect_timestamp) < 1:
stream_row_color_prefix = "\033[1m\033[31m"
stream_row_color_suffix = "\033[0m"
elif (time.time() - reconnect_timestamp) < 2:
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m"
elif (time.time() - reconnect_timestamp) < 4:
stream_row_color_prefix = "\033[1m\033[32m"
stream_row_color_suffix = "\033[0m"
except KeyError:
pass
elif self.stream_list[stream_id]['status'] == "stopped":
stopped_streams += 1
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m"
elif self.stream_list[stream_id]['status'] == "restarting":
restarting_streams += 1
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m"
elif "crashed" in self.stream_list[stream_id]['status']:
crashed_streams += 1
stream_row_color_prefix = "\033[1m\033[31m"
stream_row_color_suffix = "\033[0m"
if self.stream_list[stream_id]['stream_label'] is not None:
if len(self.stream_list[stream_id]['stream_label']) > 18:
stream_label = str(self.stream_list[stream_id]['stream_label'])[:13] + "..."
else:
stream_label = str(self.stream_list[stream_id]['stream_label'])
else:
stream_label = str(self.stream_list[stream_id]['stream_label'])
stream_rows += stream_row_color_prefix + str(stream_id) + stream_row_color_suffix + " |" + \
self.fill_up_space_right(17, stream_label) + "|" + \
self.fill_up_space_left(8, self.get_stream_receives_last_second(stream_id)) + "|" + \
self.fill_up_space_left(11, stream_statistic['stream_receives_per_second'].__round__(2)) + "|" + \
self.fill_up_space_left(8, self.stream_list[stream_id]['receives_statistic_last_second']['most_receives_per_second']) \
+ "|" + stream_row_color_prefix + \
self.fill_up_space_left(8, len(self.stream_list[stream_id]['logged_reconnects'])) + \
stream_row_color_suffix + "\r\n "
if self.is_stop_request(stream_id, exclude_kill_requests=True) is True and \
self.stream_list[stream_id]['status'] == "running":
streams_with_stop_request += 1
if streams_with_stop_request >= 1:
stream_row_color_prefix = "\033[1m\033[33m"
stream_row_color_suffix = "\033[0m"
streams_with_stop_request_row = stream_row_color_prefix + " streams_with_stop_request: " + \
str(streams_with_stop_request) + stream_row_color_suffix + "\r\n"
if crashed_streams >= 1:
stream_row_color_prefix = "\033[1m\033[31m"
stream_row_color_suffix = "\033[0m"
crashed_streams_row = stream_row_color_prefix + " crashed_streams: " + str(crashed_streams) \
+ stream_row_color_suffix + "\r\n"
total_received_bytes = str(self.get_total_received_bytes()) + " (" + str(
self.get_human_bytesize(self.get_total_received_bytes())) + ")"
try:
received_bytes_per_second = self.get_total_received_bytes() / (time.time() - self.start_time)
received_bytes_per_x_row += str(self.get_human_bytesize(received_bytes_per_second, '/s')) + " (per day " + \
str(((received_bytes_per_second / 1024 / 1024 / 1024) * 60 * 60 * 24).__round__(2))\
+ " gB)"
if self.get_stream_buffer_length() > 50:
stream_row_color_prefix = "\033[1m\033[34m"
stream_row_color_suffix = "\033[0m"
stream_buffer_row += stream_row_color_prefix + " stream_buffer_stored_items: " + \
str(self.get_stream_buffer_length()) + "\r\n"
stream_buffer_row += " stream_buffer_byte_size: " + str(self.get_stream_buffer_byte_size()) + \
" (" + str(self.get_human_bytesize(self.get_stream_buffer_byte_size())) + ")" + \
stream_row_color_suffix + "\r\n"
if active_streams > 0:
active_streams_row = " \033[1m\033[32mactive_streams: " + str(active_streams) + "\033[0m\r\n"
if restarting_streams > 0:
restarting_streams_row = " \033[1m\033[33mrestarting_streams: " + str(restarting_streams) + "\033[0m\r\n"
if stopped_streams > 0:
stopped_streams_row = " \033[1m\033[33mstopped_streams: " + str(stopped_streams) + "\033[0m\r\n"
if self.binance_api_status['weight'] is not None:
if self.binance_api_status['status_code'] == 200:
binance_api_status_code = str(self.binance_api_status['status_code'])
elif self.binance_api_status['status_code'] == 418:
binance_api_status_code = "\033[1m\033[31m" + str(self.binance_api_status['status_code']) + \
"\033[0m"
else:
binance_api_status_code = "\033[1m\033[33m" + str(self.binance_api_status['status_code']) + \
"\033[0m"
binance_api_status_row = " binance_api_status: used_weight=" + \
str(self.binance_api_status['weight']) + \
", status_code=" + str(binance_api_status_code) + " (last update " + \
str(datetime.utcfromtimestamp(
self.binance_api_status['timestamp']).strftime('%Y-%m-%d, %H:%M:%S UTC')) + \
")\r\n"
try:
print_text = (
str(self.fill_up_space_centered(96, f" {self.get_user_agent()} ", "=")) + "\r\n" +
" exchange: " + str(self.stream_list[stream_id]['exchange']) + "\r\n" +
" uptime: " + str(self.get_human_uptime(time.time() - self.start_time)) + " since " +
str(self.get_date_of_timestamp(self.start_time)) + "\r\n" +
" streams: " + str(streams) + "\r\n" +
str(active_streams_row) +
str(crashed_streams_row) +
str(restarting_streams_row) +
str(stopped_streams_row) +
str(streams_with_stop_request_row) +
" subscriptions: " + str(self.get_number_of_all_subscriptions()) + "\r\n" +
str(stream_buffer_row) +
" current_receiving_speed: " + str(self.get_human_bytesize(current_receiving_speed, "/s")) + "\r\n" +
" average_receiving_speed: " + str(received_bytes_per_x_row) + "\r\n" +
" highest_receiving_speed: " + str(highest_receiving_speed_row) + "\r\n" +
" total_receives: " + str(self.total_receives) + "\r\n"
" total_received_bytes: " + str(total_received_bytes) + "\r\n"
" total_transmitted_payloads: " + str(self.total_transmitted) + "\r\n" +
" stream_buffer_maxlen: " + str(self.stream_buffer_maxlen) + "\r\n" +
str(binance_api_status_row) +
" process_ressource_usage: cpu=" + str(self.get_process_usage_cpu()) + "%, memory=" +
str(self.get_process_usage_memory()) + ", threads=" + str(self.get_process_usage_threads()) +
"\r\n" + str(add_string) +
" ---------------------------------------------------------------------------------------------\r\n"
" stream_id | stream_label | last | average | most | recon\r\n"
" ---------------------------------------------------------------------------------------------\r\n"
" " + str(stream_rows) +
"---------------------------------------------------------------------------------------------\r\n"
" all_streams |" +
self.fill_up_space_left(8, self.get_all_receives_last_second()) + "|" +
self.fill_up_space_left(11, all_receives_per_second.__round__(2)) + "|" +
self.fill_up_space_left(8, self.most_receives_per_second) + "|" +
self.fill_up_space_left(8, self.reconnects) + "\r\n" +
"===============================================================================================\r\n"
)
if disable_print:
if sys.platform.startswith('win'):  # sys.platform is e.g. 'win32' on Windows
print_text = self.remove_ansi_escape_codes(print_text)
return print_text
else:
print(print_text)
except UnboundLocalError:
pass
except ZeroDivisionError:
pass
def print_summary_to_png(self, print_summary_export_path, hight_per_row=12.5):
"""
Create a PNG image file with the console output of `print_summary()`
*LINUX ONLY* It should not be hard to make it OS independent:
https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/issues/61
:param print_summary_export_path: If you want to export the output of print_summary() to an image,
please provide a path like "/var/www/html/". `View the Wiki!
<https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/How-to-export-print_summary()-stdout-to-PNG%3F>`_
:type print_summary_export_path: str
:param hight_per_row: set the height per row for the image height calculation
:type hight_per_row: int
:return: bool
"""
print_text = self.print_summary(disable_print=True)
# Todo:
# 1. Handle paths right
# 2. Use PythonMagick instead of Linux ImageMagick
with open(print_summary_export_path + "print_summary.txt", 'w') as text_file:
print(self.remove_ansi_escape_codes(print_text), file=text_file)
try:
image_hight = print_text.count("\n") * hight_per_row + 15
except AttributeError:
return False
os.system('convert -size 720x' + str(image_hight) + ' xc:black -font "FreeMono" -pointsize 12 -fill white -annotate '
'+30+30 "@' + print_summary_export_path + 'print_summary.txt' + '" ' +
print_summary_export_path + 'print_summary_plain.png')
os.system('convert ' + print_summary_export_path + 'print_summary_plain.png -font "FreeMono" '
'-pointsize 12 -fill red -undercolor \'#00000080\' -gravity North -annotate +0+5 '
'"$(date)" ' + print_summary_export_path + 'print_summary.png')
return True
@staticmethod
def remove_ansi_escape_codes(text):
"""
Remove ANSI escape codes from the text string!
:param text: str
:return: str
"""
text = str(text)
text = text.replace("\033[1m\033[31m", "")
text = text.replace("\033[1m\033[32m", "")
text = text.replace("\033[1m\033[33m", "")
text = text.replace("\033[1m\033[34m", "")
text = text.replace("\033[0m", "")
return text
def replace_stream(self,
stream_id,
new_channels,
new_markets,
new_stream_label=None,
new_stream_buffer_name=False,
new_api_key=False,
new_api_secret=False,
new_symbols=False,
new_output="raw_data",
new_ping_interval=20,
new_ping_timeout=20,
new_close_timeout=10,
new_stream_buffer_maxlen=None):
"""
Replace a stream
        If you want to start a stream with a new config, it's recommended to first start a new stream with the new
        settings and to close the old stream only after the new stream has received its first data. This way your
        data stays consistent.
:param stream_id: id of the old stream
:type stream_id: uuid
:param new_channels: the new channel list for the stream
:type new_channels: str, tuple, list, set
:param new_markets: the new markets list for the stream
:type new_markets: str, tuple, list, set
:param new_stream_label: provide a stream_label to identify the stream
:type new_stream_label: str
:param new_stream_buffer_name: If `False` the data is going to get written to the default stream_buffer,
set to `True` to read the data via `pop_stream_data_from_stream_buffer(stream_id)` or
provide a string to create and use a shared stream_buffer and read it via
`pop_stream_data_from_stream_buffer('string')`.
:type new_stream_buffer_name: bool or str
:param new_api_key: provide a valid Binance API key
:type new_api_key: str
:param new_api_secret: provide a valid Binance API secret
:type new_api_secret: str
:param new_symbols: provide the symbols for isolated_margin user_data streams
:type new_symbols: str
:param new_output: set to "dict" to convert the received raw data to a python dict, set to "UnicornFy" to convert
with `UnicornFy <https://github.com/oliver-zehentleitner/unicorn-fy>`_ - otherwise the output
remains unchanged and gets delivered as received from the endpoints
:type new_output: str
:param new_ping_interval: Once the connection is open, a `Ping frame` is sent every
`ping_interval` seconds. This serves as a keepalive. It helps keeping
the connection open, especially in the presence of proxies with short
timeouts on inactive connections. Set `ping_interval` to `None` to
disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type new_ping_interval: int or None
:param new_ping_timeout: If the corresponding `Pong frame` isn't received within
`ping_timeout` seconds, the connection is considered unusable and is closed with
code 1011. This ensures that the remote endpoint remains responsive. Set
`ping_timeout` to `None` to disable this behavior. (default: 20)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type new_ping_timeout: int or None
:param new_close_timeout: The `close_timeout` parameter defines a maximum wait time in seconds for
completing the closing handshake and terminating the TCP connection. (default: 10)
This parameter is passed through to the `websockets.client.connect()
<https://websockets.readthedocs.io/en/stable/api.html?highlight=ping_interval#websockets.client.connect>`_
:type new_close_timeout: int or None
:param new_stream_buffer_maxlen: Set a max len for the `stream_buffer`. Only used in combination with a non generic
`stream_buffer`. The generic `stream_buffer` uses always the value of
`BinanceWebSocketApiManager()`.
:type new_stream_buffer_maxlen: int or None
:return: new_stream_id or 'False'
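        Example (illustrative sketch, not from the original docs - `ubwa` is assumed to be a running
        BinanceWebSocketApiManager instance)::

            old_stream_id = ubwa.create_stream(['trade'], ['btcusdt'])
            # later: move to an extended market list without losing data
            new_stream_id = ubwa.replace_stream(old_stream_id,
                                                ['trade'],
                                                ['btcusdt', 'ethusdt'],
                                                new_stream_label="trades_v2")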
"""
        # start a new socket and stop the old stream only after the new stream has received its first record
new_stream_id = self.create_stream(new_channels,
new_markets,
new_stream_label,
new_stream_buffer_name,
new_api_key,
new_api_secret,
new_symbols,
new_output,
new_ping_interval,
new_ping_timeout,
new_close_timeout,
new_stream_buffer_maxlen)
if self.wait_till_stream_has_started(new_stream_id):
self.stop_stream(stream_id)
return new_stream_id
def run(self):
"""
This method overloads `threading.run()` and starts management threads
"""
thread_frequent_checks = threading.Thread(target=self._frequent_checks)
thread_frequent_checks.start()
thread_keepalive_streams = threading.Thread(target=self._keepalive_streams)
thread_keepalive_streams.start()
def set_private_dex_config(self, binance_dex_user_address):
"""
Set binance_dex_user_address
        This is going to be the default user_address. Once the websocket is created with this default value, it's not
        possible to change it. If you plan to use different user_addresses, it's recommended not to use this method! Just provide the
user_address with create_stream() in the market parameter.
:param binance_dex_user_address: Binance DEX user address
:type binance_dex_user_address: str
"""
self.dex_user_address = binance_dex_user_address
def set_heartbeat(self, stream_id):
"""
Set heartbeat for a specific thread (should only be done by the stream itself)
"""
logging.debug("BinanceWebSocketApiManager.set_heartbeat(" + str(stream_id) + ")")
try:
self.stream_list[stream_id]['last_heartbeat'] = time.time()
self.stream_list[stream_id]['status'] = "running"
except KeyError:
pass
def set_ringbuffer_error_max_size(self, max_size):
"""
How many error messages should be kept in the ringbuffer?
:param max_size: Max entries of error messages in the ringbuffer.
:type max_size: int
:return: bool
"""
self.ringbuffer_error_max_size = int(max_size)
def set_ringbuffer_result_max_size(self, max_size):
"""
How many result messages should be kept in the ringbuffer?
:param max_size: Max entries of result messages in the ringbuffer.
:type max_size: int
:return: bool
"""
self.ringbuffer_result_max_size = int(max_size)
def set_stream_label(self, stream_id, stream_label=None):
"""
Set a stream_label by stream_id
:param stream_id: id of the stream
:type stream_id: uuid
:param stream_label: stream_label to set
:type stream_label: str
"""
self.stream_list[stream_id]['stream_label'] = stream_label
def set_keep_max_received_last_second_entries(self, number_of_max_entries):
"""
        Set how many received_last_second entries are stored before they get deleted!
:param number_of_max_entries: number of entries to keep in list
:type number_of_max_entries: int
"""
self.keep_max_received_last_second_entries = number_of_max_entries
def set_restart_request(self, stream_id):
"""
Set a restart request for a specific stream
:param stream_id: id of the old stream
:type stream_id: uuid
"""
self.restart_requests[stream_id] = {'status': "new"}
return True
def split_payload(self, params, method, max_items_per_request=350):
"""
        Sending more than 8000 chars via websocket.send() leads to a connection loss; 350 list elements is a good limit
        to keep the payload length below 8000 chars and to avoid reconnects.
:param params: params of subscribe payload
:type params: list
:param method: SUBSCRIBE or UNSUBSCRIBE
:type method: str
        :param max_items_per_request: maximum number of params per request; longer lists get split into multiple payloads
:return: list or False
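        Example (illustrative sketch - `ubwa` is assumed to be a BinanceWebSocketApiManager instance
        connected to a CEX endpoint)::

            params = ["btcusdt@trade", "ethusdt@trade"]  # typically a much longer list
            payload_chunks = ubwa.split_payload(params, "SUBSCRIBE")
            # returns roughly len(params) / max_items_per_request SUBSCRIBE payloads, each small
            # enough to keep websocket.send() below the ~8000 char limit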
"""
if self.is_exchange_type('cex'):
count_items = 0
add_params = []
payload = []
for param in params:
add_params.append(param)
count_items += 1
if count_items > max_items_per_request:
add_payload = {"method": method,
"params": add_params,
"id": self.get_request_id()}
payload.append(add_payload)
count_items = 0
add_params = []
            if len(add_params) > 0:
                add_payload = {"method": method,
                               "params": add_params,
                               "id": self.get_request_id()}
                payload.append(add_payload)
            # return every collected chunk; `False` only if no payload could be built at all
            return payload if payload else False
elif self.is_exchange_type('dex'):
pass
else:
return False
def start_monitoring_api(self, host='127.0.0.1', port=64201, warn_on_update=True):
"""
Start the monitoring API server
Take a look into the
`Wiki <https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/wiki/UNICORN-Monitoring-API-Service>`_
to see how this works!
:param host: listening ip address, use 0.0.0.0 or a specific address (default: 127.0.0.1)
:type host: str
:param port: listening port number (default: 64201)
:type port: int
:param warn_on_update: set to `False` to disable the update warning
:type warn_on_update: bool
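        Example (illustrative sketch - `ubwa` is assumed to be a running BinanceWebSocketApiManager
        instance)::

            ubwa.start_monitoring_api(host="127.0.0.1", port=64201)
            # the endpoint can then be polled, e.g. by the UNICORN Monitoring API Service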
"""
thread = threading.Thread(target=self._start_monitoring_api_thread, args=(host, port, warn_on_update))
thread.start()
return True
def stop_manager_with_all_streams(self):
"""
Stop the BinanceWebSocketApiManager with all streams and management threads
"""
logging.info("BinanceWebSocketApiManager.stop_manager_with_all_streams() - Stopping "
"unicorn_binance_websocket_api_manager " + self.version + " ...")
# send signal to all threads
self.stop_manager_request = True
# delete listenKeys
for stream_id in self.stream_list:
self.stop_stream(stream_id)
# stop monitoring API services
self.stop_monitoring_api()
def stop_monitoring_api(self):
"""
Stop the monitoring API service
:return: bool
"""
try:
if not isinstance(self.monitoring_api_server, bool):
self.monitoring_api_server.stop()
return True
except AttributeError as error_msg:
logging.info("BinanceWebSocketApiManager.stop_monitoring_api() - can not execute "
"self.monitoring_api_server.stop() - info: " + str(error_msg))
return False
def stop_stream(self, stream_id):
"""
Stop a specific stream
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
# stop a specific stream by stream_id
logging.info("BinanceWebSocketApiManager.stop_stream(" + str(stream_id) + ")")
try:
del self.restart_requests[stream_id]
except KeyError:
pass
self.delete_listen_key_by_stream_id(stream_id)
try:
self.stream_list[stream_id]['stop_request'] = True
except KeyError:
return False
return True
def stop_stream_as_crash(self, stream_id):
"""
Stop a specific stream with 'crashed' status
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
# stop a specific stream by stream_id
logging.critical("BinanceWebSocketApiManager.stop_stream_as_crash(" + str(stream_id) + ")")
try:
del self.restart_requests[stream_id]
except KeyError:
pass
try:
self.stream_list[stream_id]['crash_request'] = True
except KeyError:
return False
def stream_is_crashing(self, stream_id, error_msg=False):
"""
        If a stream can not heal itself because of a wrong parameter (wrong market, channel type), it calls this method
:param stream_id: id of a stream
:type stream_id: uuid
:param error_msg: Error msg to add to the stream status!
:type error_msg: str
"""
logging.critical("BinanceWebSocketApiManager.stream_is_crashing(" + str(stream_id) + ")")
self.stream_list[stream_id]['has_stopped'] = time.time()
self.stream_list[stream_id]['status'] = "crashed"
if error_msg:
self.stream_list[stream_id]['status'] += " - " + str(error_msg)
def stream_is_stopping(self, stream_id):
"""
        Streams report their shutdown with this call
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
logging.info("BinanceWebSocketApiManager.stream_is_stopping(" + str(stream_id) + ")")
try:
self.stream_list[stream_id]['has_stopped'] = time.time()
self.stream_list[stream_id]['status'] = "stopped"
return True
except KeyError:
return False
def subscribe_to_stream(self, stream_id, channels=[], markets=[]):
"""
Subscribe channels and/or markets to an existing stream
If you provide one channel and one market, then every subscribed market is going to get added to the new channel
and all subscribed channels are going to get added to the new market!
        `How are the parameters 'channels' and 'markets' used with subscriptions?
        <https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.create_stream>`_
:param stream_id: id of a stream
:type stream_id: uuid
:param channels: provide the channels you wish to stream
:type channels: str, tuple, list, set
:param markets: provide the markets you wish to stream
:type markets: str, tuple, list, set
:return: bool
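        Example (illustrative sketch - `ubwa` and `stream_id` are assumed to exist)::

            ubwa.subscribe_to_stream(stream_id,
                                     channels=['kline_1m'],
                                     markets=['btcusdt', 'ethusdt'])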
"""
logging.info("BinanceWebSocketApiManager.subscribe_to_stream(" + str(stream_id) + ", " + str(channels) +
", " + str(markets) + ") started ...")
try:
if type(channels) is str:
channels = [channels]
if type(markets) is str:
markets = [markets]
if type(channels) is set:
channels = list(channels)
if type(markets) is set:
markets = list(markets)
except KeyError:
logging.error("BinanceWebSocketApiManager.subscribe_to_stream(" + str(stream_id) + ", " + str(channels) +
", " + str(markets) + ") KeyError: setting a restart request for this stream ...")
self.stream_is_stopping(stream_id)
self.set_restart_request(stream_id)
return False
if type(self.stream_list[stream_id]['channels']) is str:
self.stream_list[stream_id]['channels'] = [self.stream_list[stream_id]['channels']]
if type(self.stream_list[stream_id]['markets']) is str:
self.stream_list[stream_id]['markets'] = [self.stream_list[stream_id]['markets']]
if type(self.stream_list[stream_id]['channels']) is set:
self.stream_list[stream_id]['channels'] = list(self.stream_list[stream_id]['channels'])
if type(self.stream_list[stream_id]['markets']) is set:
self.stream_list[stream_id]['markets'] = list(self.stream_list[stream_id]['markets'])
self.stream_list[stream_id]['channels'] = list(set(self.stream_list[stream_id]['channels'] + channels))
markets_new = []
for market in markets:
if "!" in market \
or market == "allMiniTickers" \
or market == "allTickers" \
or market == "blockheight" \
or market == "$all":
markets_new.append(market)
else:
if self.is_exchange_type('dex'):
markets_new.append(str(market).upper())
elif self.is_exchange_type('cex'):
markets_new.append(str(market).lower())
self.stream_list[stream_id]['markets'] = list(set(self.stream_list[stream_id]['markets'] + markets_new))
payload = self.create_payload(stream_id, "subscribe",
channels=self.stream_list[stream_id]['channels'],
markets=self.stream_list[stream_id]['markets'])
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
# control subscription limit:
# https://github.com/binance-exchange/binance-official-api-docs/blob/5fccfd572db2f530e25e302c02be5dec12759cf9/CHANGELOG.md#2020-04-23
if self.stream_list[stream_id]['subscriptions'] > self.max_subscriptions_per_stream:
self.stop_stream_as_crash(stream_id)
error_msg = "The limit of " + str(self.max_subscriptions_per_stream) + " subscriptions per stream has " \
"been exceeded!"
logging.critical(f"BinanceWebSocketApiManager.subscribe_to_stream({str(stream_id)}) "
f"Info: {str(error_msg)}")
self.stream_is_crashing(stream_id, error_msg)
if self.throw_exception_if_unrepairable:
raise StreamRecoveryError("stream_id " + str(stream_id) + ": " + str(error_msg))
return False
for item in payload:
self.stream_list[stream_id]['payload'].append(item)
logging.info("BinanceWebSocketApiManager.subscribe_to_stream(" + str(stream_id) + ", " + str(channels) +
", " + str(markets) + ") finished ...")
return True
def unsubscribe_from_stream(self, stream_id, channels=[], markets=[]):
"""
        Unsubscribe channels and/or markets from an existing stream
If you provide one channel and one market, then all subscribed markets from the specific channel and all
subscribed channels from the specific markets are going to be removed!
        `How are the parameters 'channels' and 'markets' used with subscriptions?
        <https://oliver-zehentleitner.github.io/unicorn-binance-websocket-api/unicorn_binance_websocket_api.html#unicorn_binance_websocket_api.unicorn_binance_websocket_api_manager.BinanceWebSocketApiManager.create_stream>`_
:param stream_id: id of a stream
:type stream_id: uuid
:param channels: provide the channels you wish to stream
:type channels: str, tuple, list, set
:param markets: provide the markets you wish to stream
:type markets: str, tuple, list, set
:return: bool
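        Example (illustrative sketch - `ubwa` and `stream_id` are assumed to exist)::

            ubwa.unsubscribe_from_stream(stream_id, markets=['ethusdt'])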
"""
        logging.info("BinanceWebSocketApiManager.unsubscribe_from_stream(" + str(stream_id) + ", " + str(channels) +
                     ", " + str(markets) + ") started ...")
if type(channels) is str:
channels = [channels]
if type(markets) is str:
markets = [markets]
if type(self.stream_list[stream_id]['channels']) is str:
self.stream_list[stream_id]['channels'] = [self.stream_list[stream_id]['channels']]
if type(self.stream_list[stream_id]['markets']) is str:
self.stream_list[stream_id]['markets'] = [self.stream_list[stream_id]['markets']]
for channel in channels:
try:
self.stream_list[stream_id]['channels'].remove(channel)
except ValueError:
pass
for i in range(len(markets)):
markets[i] = markets[i].lower()
for market in markets:
if re.match(r'[a-zA-Z0-9]{41,43}', market) is None:
try:
self.stream_list[stream_id]['markets'].remove(market)
except ValueError:
pass
payload = self.create_payload(stream_id, "unsubscribe",
channels=channels, markets=markets)
for item in payload:
self.stream_list[stream_id]['payload'].append(item)
self.stream_list[stream_id]['subscriptions'] = self.get_number_of_subscriptions(stream_id)
        logging.info("BinanceWebSocketApiManager.unsubscribe_from_stream(" + str(stream_id) + ", " + str(channels) +
                     ", " + str(markets) + ") finished ...")
return True
def wait_till_stream_has_started(self, stream_id):
"""
        Returns `True` as soon as a specific stream has started
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
        # will return `True` as soon as the stream has received its first data row
try:
while self.stream_list[stream_id]['last_heartbeat'] is None:
time.sleep(0.1)
return True
except KeyError:
return False
def wait_till_stream_has_stopped(self, stream_id):
"""
        Returns `True` as soon as a specific stream has stopped itself
:param stream_id: id of a stream
:type stream_id: uuid
:return: bool
"""
try:
while self.stream_list[stream_id]['has_stopped'] is False:
time.sleep(0.1)
return True
except KeyError:
return False
|
base.py
|
# Copyright 2014 ETH Zurich
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
:mod:`base` --- Base beacon server
==================================
"""
# Stdlib
import logging
import os
import threading
import time
from collections import defaultdict
from abc import ABCMeta, abstractmethod
from threading import RLock
# External packages
from prometheus_client import Counter, Gauge
# SCION
from beacon_server.if_state import InterfaceState
from lib.crypto.asymcrypto import get_sig_key
from lib.crypto.symcrypto import kdf
from lib.defines import (
EXP_TIME_UNIT,
GEN_CACHE_PATH,
MIN_REVOCATION_TTL,
PATH_POLICY_FILE,
)
from lib.errors import (
SCIONKeyError,
SCIONParseError,
SCIONPathPolicyViolated,
SCIONServiceLookupError,
)
from lib.msg_meta import UDPMetadata
from lib.packet.cert_mgmt import CertChainRequest, CertMgmt
from lib.packet.ext.one_hop_path import OneHopPathExt
from lib.path_seg_meta import PathSegMeta
from lib.packet.ctrl_pld import CtrlPayload
from lib.packet.ifid import IFIDPayload
from lib.packet.opaque_field import HopOpaqueField, InfoOpaqueField
from lib.packet.path import SCIONPath
from lib.packet.path_mgmt.base import PathMgmt
from lib.packet.path_mgmt.ifstate import (
IFStateInfo,
IFStatePayload,
IFStateRequest,
)
from lib.packet.path_mgmt.rev_info import RevocationInfo, SignedRevInfo
from lib.packet.pcb import (
ASMarking,
PCB,
PCBMarking,
)
from lib.packet.proto_sign import ProtoSignType
from lib.packet.scion_addr import ISD_AS
from lib.packet.signed_util import DefaultSignSrc
from lib.packet.svc import SVCType
from lib.packet.scmp.types import SCMPClass, SCMPPathClass
from lib.path_store import PathPolicy
from lib.rev_cache import RevCache
from lib.thread import thread_safety_net
from lib.types import (
CertMgmtType,
PathMgmtType as PMT,
PayloadClass,
ServiceType,
)
from lib.util import (
SCIONTime,
sleep_interval,
)
from lib.zk.cache import ZkSharedCache
from lib.zk.errors import ZkNoConnection
from lib.zk.id import ZkID
from lib.zk.zk import ZK_LOCK_SUCCESS, Zookeeper
from scion_elem.scion_elem import SCIONElement
# Exported metrics.
BEACONS_PROPAGATED = Counter("bs_beacons_propagated_total", "# of propagated beacons",
["server_id", "isd_as", "type"])
SEGMENTS_REGISTERED = Counter("bs_segments_registered_total", "# of registered segments",
["server_id", "isd_as", "type"])
REVOCATIONS_ISSUED = Counter("bs_revocations_issued_total", "# of issued revocations",
["server_id", "isd_as"])
IS_MASTER = Gauge("bs_is_master", "true if this process is the replication master",
["server_id", "isd_as"])
IF_STATE = Gauge("bs_ifstate", "0/1/2 if interface is active/revoked/other",
["server_id", "isd_as", "ifid"])
class BeaconServer(SCIONElement, metaclass=ABCMeta):
"""
The SCION PathConstructionBeacon Server.
"""
SERVICE_TYPE = ServiceType.BS
# ZK path for incoming PCBs
ZK_PCB_CACHE_PATH = "pcb_cache"
# ZK path for revocations.
ZK_REVOCATIONS_PATH = "rev_cache"
# Time revocation objects are cached in memory (in seconds).
ZK_REV_OBJ_MAX_AGE = MIN_REVOCATION_TTL
# Revocation TTL
REVOCATION_TTL = MIN_REVOCATION_TTL
# Revocation Overlapping (seconds)
REVOCATION_OVERLAP = 2
    # Interval to check for timed-out interfaces.
IF_TIMEOUT_INTERVAL = 1
# Interval to send keep-alive msgs
IFID_INTERVAL = 1
# Interval between two consecutive requests (in seconds).
CERT_REQ_RATE = 10
def __init__(self, server_id, conf_dir, spki_cache_dir=GEN_CACHE_PATH,
prom_export=None, sciond_path=None):
"""
:param str server_id: server identifier.
:param str conf_dir: configuration directory.
:param str prom_export: prometheus export address.
:param str sciond_path: path to sciond socket.
"""
super().__init__(server_id, conf_dir, spki_cache_dir=spki_cache_dir,
prom_export=prom_export, sciond_path=sciond_path)
self.config = self._load_as_conf()
# TODO: add 2 policies
self.path_policy = PathPolicy.from_file(
os.path.join(conf_dir, PATH_POLICY_FILE))
self.signing_key = get_sig_key(self.conf_dir)
self.of_gen_key = kdf(self.config.master_as_key, b"Derive OF Key")
# Amount of time units a HOF is valid (time unit is EXP_TIME_UNIT).
self.default_hof_exp_time = int(self.config.segment_ttl / EXP_TIME_UNIT)
self.ifid_state = {}
for ifid in self.ifid2br:
self.ifid_state[ifid] = InterfaceState()
self.ifid_state_lock = RLock()
self.if_revocations = {}
self.CTRL_PLD_CLASS_MAP = {
PayloadClass.PCB: {PayloadClass.PCB: self.handle_pcb},
PayloadClass.IFID: {PayloadClass.IFID: self.handle_ifid_packet},
PayloadClass.CERT: {
CertMgmtType.CERT_CHAIN_REQ: self.process_cert_chain_request,
CertMgmtType.CERT_CHAIN_REPLY: self.process_cert_chain_reply,
CertMgmtType.TRC_REPLY: self.process_trc_reply,
CertMgmtType.TRC_REQ: self.process_trc_request,
},
PayloadClass.PATH: {
PMT.IFSTATE_REQ: self._handle_ifstate_request,
PMT.REVOCATION: self._handle_revocation,
},
}
self.SCMP_PLD_CLASS_MAP = {
SCMPClass.PATH: {
SCMPPathClass.REVOKED_IF: self._handle_scmp_revocation,
},
}
zkid = ZkID.from_values(self.addr.isd_as, self.id,
[(self.addr.host, self._port)]).pack()
self.zk = Zookeeper(self.addr.isd_as, self.SERVICE_TYPE, zkid,
self.topology.zookeepers)
self.zk.retry("Joining party", self.zk.party_setup)
self.pcb_cache = ZkSharedCache(
self.zk, self.ZK_PCB_CACHE_PATH, self._handle_pcbs_from_zk)
self.revobjs_cache = ZkSharedCache(
self.zk, self.ZK_REVOCATIONS_PATH, self.process_rev_objects)
self.local_rev_cache = RevCache()
self._rev_seg_lock = RLock()
def propagate_downstream_pcb(self, pcb):
"""
Propagates the beacon to all children.
:param pcb: path segment.
:type pcb: PathSegment
"""
propagated_pcbs = defaultdict(list)
prop_cnt = 0
for intf in self.topology.child_interfaces:
if not intf.to_if_id:
continue
new_pcb, meta = self._mk_prop_pcb_meta(
pcb.copy(), intf.isd_as, intf.if_id)
if not new_pcb:
continue
self.send_meta(CtrlPayload(new_pcb.pcb()), meta)
propagated_pcbs[(intf.isd_as, intf.if_id)].append(pcb.short_id())
prop_cnt += 1
if self._labels:
BEACONS_PROPAGATED.labels(**self._labels, type="down").inc(prop_cnt)
return propagated_pcbs
def _mk_prop_pcb_meta(self, pcb, dst_ia, egress_if):
ts = pcb.get_timestamp()
asm = self._create_asm(pcb.ifID, egress_if, ts, pcb.last_hof())
if not asm:
return None, None
pcb.add_asm(asm, ProtoSignType.ED25519, self.addr.isd_as.pack())
pcb.sign(self.signing_key)
one_hop_path = self._create_one_hop_path(egress_if)
return pcb, self._build_meta(ia=dst_ia, host=SVCType.BS_A,
path=one_hop_path, one_hop=True)
def _create_one_hop_path(self, egress_if):
ts = int(SCIONTime.get_time())
info = InfoOpaqueField.from_values(ts, self.addr.isd_as[0], hops=2)
hf1 = HopOpaqueField.from_values(OneHopPathExt.HOF_EXP_TIME, 0, egress_if)
hf1.set_mac(self.of_gen_key, ts, None)
# Return a path where second HF is empty.
return SCIONPath.from_values(info, [hf1, HopOpaqueField()])
def hof_exp_time(self, ts):
"""
Return the ExpTime based on IF timestamp and the certificate chain/TRC.
The certificate chain must be valid for the entire HOF lifetime.
:param int ts: IF timestamp
:return: HF ExpTime
:rtype: int
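        Illustrative example: if default_hof_exp_time is 2000 and the certificate chain stays
        valid for another 3000 time units after ts, the HOF expiration is capped at 2000; if the
        chain only covers 1500 time units, 1500 is returned.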
"""
cert_exp = self._get_my_cert().as_cert.expiration_time
max_exp_time = int((cert_exp-ts) / EXP_TIME_UNIT)
return min(max_exp_time, self.default_hof_exp_time)
def _mk_if_info(self, if_id):
"""
Small helper method to make it easier to deal with ingress/egress
interface being 0 while building ASMarkings.
"""
d = {"remote_ia": ISD_AS.from_values(0, 0), "remote_if": 0, "mtu": 0}
if not if_id:
return d
br = self.ifid2br[if_id]
d["remote_ia"] = br.interfaces[if_id].isd_as
d["remote_if"] = br.interfaces[if_id].to_if_id
d["mtu"] = br.interfaces[if_id].mtu
return d
@abstractmethod
def handle_pcbs_propagation(self):
"""
Main loop to propagate received beacons.
"""
raise NotImplementedError
def _log_propagations(self, propagated_pcbs):
for (isd_as, if_id), pcbs in propagated_pcbs.items():
logging.debug("Propagated %d PCBs to %s via %s (%s)", len(pcbs),
isd_as, if_id, ", ".join(pcbs))
def _handle_pcbs_from_zk(self, pcbs):
"""
Handles cached pcbs through ZK, passed as a list.
"""
for pcb in pcbs:
try:
pcb = PCB.from_raw(pcb)
except SCIONParseError as e:
logging.error("Unable to parse raw pcb: %s", e)
continue
self.handle_pcb(CtrlPayload(pcb))
if pcbs:
logging.debug("Processed %s PCBs from ZK", len(pcbs))
def handle_pcb(self, cpld, meta=None):
"""
Handles pcbs received from the network.
"""
pcb = cpld.union
assert isinstance(pcb, PCB), type(pcb)
pcb = pcb.pseg()
if meta:
pcb.ifID = meta.path.get_hof().ingress_if
try:
self.path_policy.check_filters(pcb)
except SCIONPathPolicyViolated as e:
logging.debug("Segment dropped due to path policy: %s\n%s" %
(e, pcb.short_desc()))
return
if not self._filter_pcb(pcb):
logging.debug("Segment dropped due to looping: %s" %
pcb.short_desc())
return
seg_meta = PathSegMeta(pcb, self.continue_seg_processing, meta)
self._process_path_seg(seg_meta, cpld.req_id)
def continue_seg_processing(self, seg_meta):
"""
For every verified pcb received from the network or ZK
this function gets called to continue the processing for the pcb.
"""
pseg = seg_meta.seg
logging.debug("Successfully verified PCB %s", pseg.short_id())
if seg_meta.meta:
# Segment was received from network, not from zk. Share segment
# with other beacon servers in this AS.
entry_name = "%s-%s" % (pseg.get_hops_hash(hex=True), time.time())
try:
self.pcb_cache.store(entry_name, pseg.pcb().copy().pack())
except ZkNoConnection:
logging.error("Unable to store PCB in shared cache: "
"no connection to ZK")
self.handle_ext(pseg)
self._handle_verified_beacon(pseg)
def _filter_pcb(self, pcb, dst_ia=None):
return True
def handle_ext(self, pcb):
"""
Handle beacon extensions.
"""
# Handle PCB extensions
for asm in pcb.iter_asms():
pol = asm.routing_pol_ext()
if pol:
self.handle_routing_pol_ext(pol)
def handle_routing_pol_ext(self, ext):
# TODO(Sezer): Implement routing policy extension handling
logging.debug("Routing policy extension: %s" % ext)
@abstractmethod
def register_segments(self):
"""
Registers paths according to the received beacons.
"""
raise NotImplementedError
def _log_registrations(self, registrations, seg_type):
reg_cnt = 0
for (dst_meta, dst_type), pcbs in registrations.items():
reg_cnt += len(pcbs)
logging.debug("Registered %d %s-segments @ %s:%s (%s)", len(pcbs),
seg_type, dst_type.upper(), dst_meta, ", ".join(pcbs))
if self._labels:
SEGMENTS_REGISTERED.labels(**self._labels, type=seg_type).inc(reg_cnt)
def _create_asm(self, in_if, out_if, ts, prev_hof):
pcbms = list(self._create_pcbms(in_if, out_if, ts, prev_hof))
if not pcbms:
return None
chain = self._get_my_cert()
_, cert_ver = chain.get_leaf_isd_as_ver()
return ASMarking.from_values(
self.addr.isd_as, self._get_my_trc().version, cert_ver, pcbms, self.topology.mtu)
def _create_pcbms(self, in_if, out_if, ts, prev_hof):
up_pcbm = self._create_pcbm(in_if, out_if, ts, prev_hof)
if not up_pcbm:
return
yield up_pcbm
for intf in sorted(self.topology.peer_interfaces):
in_if = intf.if_id
with self.ifid_state_lock:
if (not self.ifid_state[in_if].is_active() and
not self._quiet_startup()):
continue
peer_pcbm = self._create_pcbm(in_if, out_if, ts, up_pcbm.hof(), xover=True)
if peer_pcbm:
yield peer_pcbm
def _create_pcbm(self, in_if, out_if, ts, prev_hof, xover=False):
in_info = self._mk_if_info(in_if)
if in_info["remote_ia"].int() and not in_info["remote_if"]:
return None
out_info = self._mk_if_info(out_if)
if out_info["remote_ia"].int() and not out_info["remote_if"]:
return None
exp_time = self.hof_exp_time(ts)
if exp_time <= 0:
logging.error("Invalid hop field expiration time value: %s", exp_time)
return None
hof = HopOpaqueField.from_values(exp_time, in_if, out_if, xover=xover)
hof.set_mac(self.of_gen_key, ts, prev_hof)
return PCBMarking.from_values(
in_info["remote_ia"], in_info["remote_if"], in_info["mtu"],
out_info["remote_ia"], out_info["remote_if"], hof)
def _terminate_pcb(self, pcb):
"""
Copies a PCB, terminates it and adds the segment ID.
        Terminating a PCB means adding an opaque field with the egress IF set
to 0, i.e., there is no AS to forward a packet containing this path
segment to.
"""
pcb = pcb.copy()
asm = self._create_asm(pcb.ifID, 0, pcb.get_timestamp(),
pcb.last_hof())
if not asm:
return None
pcb.add_asm(asm, ProtoSignType.ED25519, self.addr.isd_as.pack())
return pcb
def handle_ifid_packet(self, cpld, meta):
"""
Update the interface state for the corresponding interface.
        :param cpld: The CtrlPayload containing an IFIDPayload.
        :type cpld: CtrlPayload
"""
pld = cpld.union
assert isinstance(pld, IFIDPayload), type(pld)
ifid = pld.p.relayIF
with self.ifid_state_lock:
if ifid not in self.ifid_state:
raise SCIONKeyError("Invalid IF %d in IFIDPayload" % ifid)
br = self.ifid2br[ifid]
br.interfaces[ifid].to_if_id = pld.p.origIF
prev_state = self.ifid_state[ifid].update()
if prev_state == InterfaceState.INACTIVE:
logging.info("IF %d activated.", ifid)
elif prev_state in [InterfaceState.TIMED_OUT,
InterfaceState.REVOKED]:
logging.info("IF %d came back up.", ifid)
if prev_state != InterfaceState.ACTIVE:
if self.zk.have_lock():
# Inform BRs about the interface coming up.
metas = []
for br in self.topology.border_routers:
br_addr, br_port = br.int_addrs.public[0]
metas.append(UDPMetadata.from_values(host=br_addr, port=br_port))
info = IFStateInfo.from_values(ifid, True)
self._send_ifstate_update([info], metas)
def run(self):
"""
Run an instance of the Beacon Server.
"""
threading.Thread(
target=thread_safety_net, args=(self.worker,),
name="BS.worker", daemon=True).start()
# https://github.com/scionproto/scion/issues/308:
threading.Thread(
target=thread_safety_net, args=(self._send_ifid_updates,),
name="BS._send_if_updates", daemon=True).start()
threading.Thread(
target=thread_safety_net, args=(self._handle_if_timeouts,),
name="BS._handle_if_timeouts", daemon=True).start()
threading.Thread(
target=thread_safety_net, args=(self._check_trc_cert_reqs,),
name="Elem.check_trc_cert_reqs", daemon=True).start()
threading.Thread(
target=thread_safety_net, args=(self._check_local_cert,),
name="BS._check_local_cert", daemon=True).start()
super().run()
def worker(self):
"""
Worker thread that takes care of reading shared PCBs from ZK, and
        propagating PCBs/registering paths when master.
"""
last_propagation = last_registration = 0
worker_cycle = 1.0
start = time.time()
while self.run_flag.is_set():
sleep_interval(start, worker_cycle, "BS.worker cycle",
self._quiet_startup())
start = time.time()
# Update IS_MASTER metric.
if self._labels:
IS_MASTER.labels(**self._labels).set(int(self.zk.have_lock()))
try:
self.zk.wait_connected()
self.pcb_cache.process()
self.revobjs_cache.process()
self.handle_rev_objs()
ret = self.zk.get_lock(lock_timeout=0, conn_timeout=0)
if not ret: # Failed to get the lock
continue
elif ret == ZK_LOCK_SUCCESS:
logging.info("Became master")
self._became_master()
self.pcb_cache.expire(self.config.propagation_time * 10)
self.revobjs_cache.expire(self.ZK_REV_OBJ_MAX_AGE)
except ZkNoConnection:
continue
now = time.time()
if now - last_propagation >= self.config.propagation_time:
self.handle_pcbs_propagation()
last_propagation = now
if (self.config.registers_paths and
now - last_registration >= self.config.registration_time):
try:
self.register_segments()
except SCIONKeyError as e:
logging.error("Error while registering segments: %s", e)
pass
last_registration = now
def _became_master(self):
"""
Called when a BS becomes the new master. Resets some state that will be
rebuilt over time.
"""
# Reset all timed-out and revoked interfaces to inactive.
with self.ifid_state_lock:
for (_, ifstate) in self.ifid_state.items():
if not ifstate.is_active():
ifstate.reset()
def _get_my_trc(self):
return self.trust_store.get_trc(self.addr.isd_as[0])
def _get_my_cert(self):
return self.trust_store.get_cert(self.addr.isd_as)
@abstractmethod
def _handle_verified_beacon(self, pcb):
"""
Once a beacon has been verified, place it into the right containers.
:param pcb: verified path segment.
:type pcb: PathSegment
"""
raise NotImplementedError
def process_rev_objects(self, rev_infos):
"""
Processes revocation infos stored in Zookeeper.
"""
with self._rev_seg_lock:
for raw in rev_infos:
try:
srev_info = SignedRevInfo.from_raw(raw)
except SCIONParseError as e:
logging.error(
"Error parsing revocation info from ZK: %s", e)
continue
self.check_revocation(srev_info, lambda x: lambda:
self.local_rev_cache.add(srev_info) if not x else False)
def _issue_revocations(self, revoked_ifs):
"""
Store a RevocationInfo in ZK and send a revocation to all BRs.
:param list revoked_ifs: A list of interfaces that needs to be revoked.
"""
# Only the master BS issues revocations.
if not self.zk.have_lock():
return
# Process revoked interfaces.
infos = []
for if_id in revoked_ifs:
br = self.ifid2br[if_id]
rev_info = RevocationInfo.from_values(
self.addr.isd_as, if_id, br.interfaces[if_id].link_type,
int(time.time()), self.REVOCATION_TTL)
logging.info("Issuing revocation: %s", rev_info.short_desc())
if self._labels:
REVOCATIONS_ISSUED.labels(**self._labels).inc()
chain = self._get_my_cert()
_, cert_ver = chain.get_leaf_isd_as_ver()
src = DefaultSignSrc.from_values(rev_info.isd_as(), cert_ver,
self._get_my_trc().version).pack()
srev_info = SignedRevInfo.from_values(rev_info.copy().pack(),
ProtoSignType.ED25519, src)
srev_info.sign(self.signing_key)
# Add to revocation cache
self.if_revocations[if_id] = srev_info
self._process_revocation(srev_info)
infos.append(IFStateInfo.from_values(if_id, False, srev_info))
border_metas = []
# Add all BRs.
for br in self.topology.border_routers:
br_addr, br_port = br.int_addrs.public[0]
border_metas.append(UDPMetadata.from_values(host=br_addr, port=br_port))
# Add local path server.
ps_meta = []
if self.topology.path_servers:
try:
addr, port = self.dns_query_topo(ServiceType.PS)[0]
except SCIONServiceLookupError:
addr, port = None, None
# Create a meta if there is a local path service
if addr:
ps_meta.append(UDPMetadata.from_values(host=addr, port=port))
self._send_ifstate_update(infos, border_metas, ps_meta)
def _handle_scmp_revocation(self, pld, meta):
srev_info = SignedRevInfo.from_raw(pld.info.srev_info)
self._handle_revocation(CtrlPayload(PathMgmt(srev_info)), meta)
def _handle_revocation(self, cpld, meta):
pmgt = cpld.union
srev_info = pmgt.union
rev_info = srev_info.rev_info()
assert isinstance(rev_info, RevocationInfo), type(rev_info)
logging.debug("Received revocation from %s: %s", meta, rev_info.short_desc())
self.check_revocation(srev_info, lambda x:
self._process_revocation(srev_info) if not x else False, meta)
def handle_rev_objs(self):
with self._rev_seg_lock:
for srev_info in self.local_rev_cache.values():
self._remove_revoked_pcbs(srev_info.rev_info())
def _process_revocation(self, srev_info):
"""
Removes PCBs containing a revoked interface and sends the revocation
to the local PS.
:param srev_info: The signed RevocationInfo object
:type srev_info: SignedRevInfo
"""
rev_info = srev_info.rev_info()
assert isinstance(rev_info, RevocationInfo), type(rev_info)
if_id = rev_info.p.ifID
if not if_id:
logging.error("Trying to revoke IF with ID 0.")
return
with self._rev_seg_lock:
self.local_rev_cache.add(srev_info.copy())
srev_info_packed = srev_info.copy().pack()
entry_name = "%s:%s" % (hash(srev_info_packed), time.time())
try:
self.revobjs_cache.store(entry_name, srev_info_packed)
except ZkNoConnection as exc:
logging.error("Unable to store revocation in shared cache "
"(no ZK connection): %s" % exc)
self._remove_revoked_pcbs(rev_info)
@abstractmethod
def _remove_revoked_pcbs(self, rev_info):
"""
Removes the PCBs containing the revoked interface.
:param rev_info: The RevocationInfo object.
:type rev_info: RevocationInfo
"""
raise NotImplementedError
def _pcb_list_to_remove(self, candidates, rev_info):
"""
Calculates the list of PCBs to remove.
Called by _remove_revoked_pcbs.
:param candidates: Candidate PCBs.
:type candidates: List
:param rev_info: The RevocationInfo object.
:type rev_info: RevocationInfo
"""
to_remove = []
if not rev_info.active():
return to_remove
processed = set()
for cand in candidates:
if cand.id in processed:
continue
processed.add(cand.id)
# If the interface on which we received the PCB is
# revoked, then the corresponding pcb needs to be removed.
if (self.addr.isd_as == rev_info.isd_as() and
cand.pcb.ifID == rev_info.p.ifID):
to_remove.append(cand.id)
for asm in cand.pcb.iter_asms():
if self._check_revocation_for_asm(rev_info, asm, False):
to_remove.append(cand.id)
return to_remove
def _handle_if_timeouts(self):
"""
Periodically checks each interface state and issues an IF revocation, if
no keep-alive message was received for IFID_TOUT.
"""
while self.run_flag.is_set():
start_time = time.time()
with self.ifid_state_lock:
to_revoke = []
for (ifid, if_state) in self.ifid_state.items():
if self._labels:
metric = IF_STATE.labels(ifid=ifid, **self._labels)
if if_state.is_active():
metric.set(0)
elif if_state.is_revoked():
metric.set(1)
else:
metric.set(2)
if not if_state.is_expired():
# Interface hasn't timed out
self.if_revocations.pop(ifid, None)
continue
srev_info = self.if_revocations.get(ifid, None)
if if_state.is_revoked() and srev_info:
# Interface is revoked until the revocation time plus the revocation TTL,
# we want to issue a new revocation REVOCATION_OVERLAP seconds
# before it is expired
rev_info = srev_info.rev_info()
if (rev_info.p.timestamp + rev_info.p.ttl -
self.REVOCATION_OVERLAP > start_time):
# Interface has already been revoked within the REVOCATION_TTL -
# REVOCATION_OVERLAP period
continue
if not if_state.is_revoked():
logging.info("IF %d went down.", ifid)
to_revoke.append(ifid)
if_state.revoke_if_expired()
if to_revoke:
self._issue_revocations(to_revoke)
sleep_interval(start_time, self.IF_TIMEOUT_INTERVAL, "Handle IF timeouts")
def _handle_ifstate_request(self, cpld, meta):
# Only master replies to ifstate requests.
pmgt = cpld.union
req = pmgt.union
assert isinstance(req, IFStateRequest), type(req)
if not self.zk.have_lock():
return
with self.ifid_state_lock:
infos = []
for (ifid, state) in self.ifid_state.items():
# Don't include inactive interfaces in update.
if state.is_inactive():
continue
srev_info = None
if state.is_revoked():
srev_info = self.if_revocations.get(ifid, None)
if not srev_info:
logging.warning("No revocation in cache for revoked IFID: %s", ifid)
continue
infos.append(IFStateInfo.from_values(ifid, state.is_active(), srev_info))
if not infos and not self._quiet_startup():
logging.warning("No IF state info to put in IFState update for %s.", meta)
return
self._send_ifstate_update(infos, [meta])
def _send_ifstate_update(self, state_infos, border_metas, server_metas=None):
server_metas = server_metas or []
payload = CtrlPayload(PathMgmt(IFStatePayload.from_values(state_infos)))
for meta in border_metas:
self.send_meta(payload.copy(), meta, (meta.host, meta.port))
for meta in server_metas:
self.send_meta(payload.copy(), meta)
def _send_ifid_updates(self):
start = time.time()
while self.run_flag.is_set():
sleep_interval(start, self.IFID_INTERVAL, "BS._send_ifid_updates cycle")
start = time.time()
# only master sends keep-alive messages
if not self.zk.have_lock():
continue
# send keep-alives on all known BR interfaces
for ifid in self.ifid2br:
br = self.ifid2br[ifid]
br_addr, br_port = br.int_addrs.public[0]
meta = self._build_meta(host=br_addr, port=br_port)
self.send_meta(CtrlPayload(IFIDPayload.from_values(ifid)),
meta, (meta.host, meta.port))
def _check_local_cert(self):
while self.run_flag.is_set():
chain = self._get_my_cert()
exp = min(chain.as_cert.expiration_time, chain.core_as_cert.expiration_time)
diff = exp - int(time.time())
if diff > self.config.segment_ttl:
time.sleep(diff - self.config.segment_ttl)
continue
cs_meta = self._get_cs()
req = CertChainRequest.from_values(
self.addr.isd_as, chain.as_cert.version+1, cache_only=True)
logging.info("Request new certificate chain. Req: %s", req)
self.send_meta(CtrlPayload(CertMgmt(req)), cs_meta)
cs_meta.close()
time.sleep(self.CERT_REQ_RATE)
def _init_metrics(self):
super()._init_metrics()
for type_ in ("core", "up", "down"):
BEACONS_PROPAGATED.labels(**self._labels, type=type_).inc(0)
SEGMENTS_REGISTERED.labels(**self._labels, type=type_).inc(0)
REVOCATIONS_ISSUED.labels(**self._labels).inc(0)
IS_MASTER.labels(**self._labels).set(0)
|
freeze_frame.py
|
#!/usr/bin/env python
# Copyright (c) 2017, Elaine Short, SIM Lab
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the SIM Lab nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import rospy
import yaml
import tf
import sys
import threading
from hlpr_manipulation_utils.srv import FreezeFrame, FreezeFrameRequest, FreezeFrameResponse
from std_srvs.srv import Empty, EmptyResponse
class TFFreezeFrame(object):
def __init__(self, frames, fixed_frame="base_link"):
self.frames = {}
self.fixed = fixed_frame
for frame, subframes in frames.items():
if frame in subframes:
                rospy.logerr("Can't remap frame {} to same name! Skipping.".format(frame))
else:
self.frames[frame]=subframes
self.transforms = dict([(f,(None, None)) for f in self.frames.keys()])
self.frozen = dict([(f,False) for f in self.frames.keys()])
self.broadcaster = tf.TransformBroadcaster()
self.listener = tf.TransformListener()
self.increment = 0
self.monitor_thread = threading.Thread(target=self.monitor_tfs)
self.publish_thread = threading.Thread(target=self.publish_tfs)
self.monitor_thread.start()
rospy.sleep(2)
self.publish_thread.start()
self.server = rospy.Service("freeze_frames", FreezeFrame, self.handle_srv)
def handle_srv(self, req):
if req.action == FreezeFrameRequest.TOGGLE:
if all(self.frozen.values()):
self.unfreeze_all()
else:
self.freeze_all()
elif req.action == FreezeFrameRequest.FREEZE:
self.freeze_all()
elif req.action == FreezeFrameRequest.UNFREEZE:
self.unfreeze_all()
else:
rospy.logerr("unknown freeze frame action {}".format(req.action))
rospy.loginfo("Freezing is: {}".format(self.frozen))
return FreezeFrameResponse()
def freeze(self, frames):
for f in frames:
self.frozen[f] = True
def unfreeze(self, frames):
for f in frames:
self.frozen[f] = False
def freeze_all(self):
self.freeze(self.frames.keys())
def unfreeze_all(self):
self.unfreeze(self.frames.keys())
def monitor_tfs(self):
while not rospy.is_shutdown():
for outframe, inframes in self.frames.items():
if self.frozen[outframe]:
continue
trans = None
rot = None
for inframe in inframes:
try:
trans, rot = self.listener.lookupTransform(self.fixed, inframe, rospy.Time.now()-rospy.Duration(0.3))
except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException) as e:
rospy.logwarn_throttle(30, "FreezeFrame Error: {}".format(e))
continue
break
self.transforms[outframe]=(trans,rot)
rospy.sleep(0.1)
def publish_tfs(self):
while not rospy.is_shutdown():
rospy.loginfo_throttle(30,"Freezing is: {}".format(self.frozen))
for outframe, inframes in self.frames.items():
trans,rot = self.transforms[outframe]
if trans is None or rot is None:
rospy.logwarn_throttle(30,"Couldn't find transform for tf {}; won't republish.".format(outframe))
else:
self.increment += 1
self.broadcaster.sendTransform(trans, rot, rospy.Time.now(), outframe, self.fixed)
rospy.sleep(0.1)
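# Illustrative usage sketch (not part of the original node): how a client might toggle freezing
# through the "freeze_frames" service exposed by TFFreezeFrame. The helper below is never called
# here; it only documents the intended call pattern with the message types imported above.
def _example_toggle_freeze():
    rospy.wait_for_service("freeze_frames")
    freeze_srv = rospy.ServiceProxy("freeze_frames", FreezeFrame)
    # TOGGLE freezes all frames if any are unfrozen, otherwise unfreezes them
    freeze_srv(FreezeFrameRequest.TOGGLE)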
if __name__=="__main__":
rospy.init_node("freeze_frame")
    frames = yaml.safe_load(sys.argv[1])
if len(sys.argv) > 2:
# we have a base link provided
tff = TFFreezeFrame(frames, sys.argv[2])
else:
tff = TFFreezeFrame(frames)
rospy.spin()
|
pythonforward.py
|
import select
import socket
import threading
from robot.utils import PY2, WINDOWS
from .logger import logger
if PY2 and WINDOWS:
import win_inet_pton
try:
import SocketServer
except ImportError:
import socketserver as SocketServer
def check_if_ipv6(ip):
try:
socket.inet_pton(socket.AF_INET6, ip)
return True
except socket.error:
return False
class LocalPortForwarding:
def __init__(self, port, host, transport, bind_address):
self.server = None
self.port = port
self.host = host
self.transport = transport
self.bind_address = bind_address
def forward(self, local_port):
class SubHandler(LocalPortForwardingHandler):
port = self.port
host = self.host
ssh_transport = self.transport
self.server = ForwardServer((self.bind_address or '', local_port), SubHandler, ipv6=check_if_ipv6(self.host))
t = threading.Thread(target=self.server.serve_forever)
        t.daemon = True
t.start()
logger.info("Now forwarding port %d to %s:%d ..." % (local_port, self.host, self.port))
def close(self):
if self.server:
self.server.shutdown()
try:
logger.log_background_messages()
except AttributeError:
pass
class ForwardServer(SocketServer.ThreadingTCPServer):
daemon_threads = True
allow_reuse_address = True
def __init__(self, server_address, RequestHandlerClass, ipv6=False):
if ipv6:
ForwardServer.address_family = socket.AF_INET6
SocketServer.ThreadingTCPServer.__init__(self, server_address, RequestHandlerClass, bind_and_activate=True)
class LocalPortForwardingHandler(SocketServer.BaseRequestHandler):
host, port, ssh_transport = None, None, None
def handle(self):
try:
chan = self.ssh_transport.open_channel('direct-tcpip', (self.host, self.port),
self.request.getpeername())
except Exception as e:
logger.info("Incoming request to %s:%d failed: %s" % (self.host, self.port, repr(e)))
return
if chan is None:
logger.info("Incoming request to %s:%d was rejected by the SSH server." % (self.host, self.port))
return
logger.info("Connected! Tunnel open %r -> %r -> %r" % (self.request.getpeername(),
chan.getpeername(),
(self.host, self.port)))
while True:
r, w, x = select.select([self.request, chan], [], [])
if self.request in r:
data = self.request.recv(1024)
if len(data) == 0:
break
chan.send(data)
if chan in r:
data = chan.recv(1024)
if len(data) == 0:
break
self.request.send(data)
peername = self.request.getpeername()
chan.close()
self.request.close()
logger.info("Tunnel closed from %r" % (peername,))
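# Illustrative usage sketch (not part of the original module): wiring LocalPortForwarding to an
# already connected paramiko SSH client. The host, ports and the `ssh_client` object are
# hypothetical placeholders; the helper is never called here.
def _example_local_port_forwarding(ssh_client, local_port=9000):
    # forward local_port on this machine to 127.0.0.1:80 on the remote side of the SSH session
    forwarding = LocalPortForwarding(port=80, host='127.0.0.1',
                                     transport=ssh_client.get_transport(),
                                     bind_address='127.0.0.1')
    forwarding.forward(local_port)
    return forwarding  # call .close() when the tunnel is no longer needed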
|
base_crash_reporter.py
|
# Electrum - lightweight DECENOMY Standard Wallet
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import asyncio
import json
import locale
import traceback
import sys
import queue
from .version import ELECTRUM_VERSION
from . import constants
from .i18n import _
from .util import make_aiohttp_session
from .logging import describe_os_version, Logger, get_git_version
class BaseCrashReporter(Logger):
report_server = "https://crashhub.electrum.org"
config_key = "show_crash_reporter"
issue_template = """<h2>Traceback</h2>
<pre>
{traceback}
</pre>
<h2>Additional information</h2>
<ul>
<li>Electrum version: {app_version}</li>
<li>Python version: {python_version}</li>
<li>Operating system: {os}</li>
<li>Wallet type: {wallet_type}</li>
<li>Locale: {locale}</li>
</ul>
"""
CRASH_MESSAGE = _('Something went wrong while executing Electrum.')
CRASH_TITLE = _('Sorry!')
REQUEST_HELP_MESSAGE = _('To help us diagnose and fix the problem, you can send us a bug report that contains '
'useful debug information:')
DESCRIBE_ERROR_MESSAGE = _("Please briefly describe what led to the error (optional):")
ASK_CONFIRM_SEND = _("Do you want to send this report?")
USER_COMMENT_PLACEHOLDER = _("Do not enter sensitive/private information here. "
"The report will be visible on the public issue tracker.")
def __init__(self, exctype, value, tb):
Logger.__init__(self)
self.exc_args = (exctype, value, tb)
def send_report(self, asyncio_loop, proxy, endpoint="/crash", *, timeout=None):
if constants.net.GENESIS[-4:] not in ["4943", "e26f"] and ".electrum.org" in BaseCrashReporter.report_server:
# Gah! Some kind of altcoin wants to send us crash reports.
raise Exception(_("Missing report URL."))
report = self.get_traceback_info()
report.update(self.get_additional_info())
report = json.dumps(report)
coro = self.do_post(proxy, BaseCrashReporter.report_server + endpoint, data=report)
response = asyncio.run_coroutine_threadsafe(coro, asyncio_loop).result(timeout)
return response
async def do_post(self, proxy, url, data):
async with make_aiohttp_session(proxy) as session:
async with session.post(url, data=data, raise_for_status=True) as resp:
return await resp.text()
def get_traceback_info(self):
exc_string = str(self.exc_args[1])
stack = traceback.extract_tb(self.exc_args[2])
readable_trace = self.__get_traceback_str_to_send()
id = {
"file": stack[-1].filename,
"name": stack[-1].name,
"type": self.exc_args[0].__name__
}
return {
"exc_string": exc_string,
"stack": readable_trace,
"id": id
}
def get_additional_info(self):
args = {
"app_version": get_git_version() or ELECTRUM_VERSION,
"python_version": sys.version,
"os": describe_os_version(),
"wallet_type": "unknown",
"locale": locale.getdefaultlocale()[0] or "?",
"description": self.get_user_description()
}
try:
args["wallet_type"] = self.get_wallet_type()
except:
# Maybe the wallet isn't loaded yet
pass
return args
def __get_traceback_str_to_send(self) -> str:
# make sure that traceback sent to crash reporter contains
# e.__context__ and e.__cause__, i.e. if there was a chain of
# exceptions, we want the full traceback for the whole chain.
return "".join(traceback.format_exception(*self.exc_args))
def _get_traceback_str_to_display(self) -> str:
# overridden in Qt subclass
return self.__get_traceback_str_to_send()
def get_report_string(self):
info = self.get_additional_info()
info["traceback"] = self._get_traceback_str_to_display()
return self.issue_template.format(**info)
def get_user_description(self):
raise NotImplementedError
def get_wallet_type(self) -> str:
raise NotImplementedError
class EarlyExceptionsQueue:
"""Helper singleton for explicitly sending exceptions to crash reporter.
Typically the GUIs set up an "exception hook" that catches all otherwise
uncaught exceptions (which unroll the stack of a thread completely).
This class provides methods to report *any* exception, and queueing logic
that delays processing until the exception hook is set up.
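    Example (illustrative sketch; `risky_operation` is a placeholder)::

        try:
            risky_operation()
        except Exception as e:
            send_exception_to_crash_reporter(e)   # queued until the GUI exception hook is ready
        # once the GUI has installed its hook:
        EarlyExceptionsQueue.set_hook_as_ready()  # flushes queued exceptions to sys.excepthook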
"""
_is_exc_hook_ready = False
_exc_queue = queue.Queue()
@classmethod
def set_hook_as_ready(cls):
if cls._is_exc_hook_ready:
return
cls._is_exc_hook_ready = True
while cls._exc_queue.qsize() > 0:
e = cls._exc_queue.get()
cls._send_exception_to_crash_reporter(e)
@classmethod
def send_exception_to_crash_reporter(cls, e: BaseException):
if cls._is_exc_hook_ready:
cls._send_exception_to_crash_reporter(e)
else:
cls._exc_queue.put(e)
@staticmethod
def _send_exception_to_crash_reporter(e: BaseException):
assert EarlyExceptionsQueue._is_exc_hook_ready
sys.excepthook(type(e), e, e.__traceback__)
send_exception_to_crash_reporter = EarlyExceptionsQueue.send_exception_to_crash_reporter
def trigger_crash():
# note: do not change the type of the exception, the message,
# or the name of this method. All reports generated through this
# method will be grouped together by the crash reporter, and thus
# don't spam the issue tracker.
class TestingException(Exception):
pass
def crash_test():
raise TestingException("triggered crash for testing purposes")
import threading
t = threading.Thread(target=crash_test)
t.start()
|
graph.py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Copyright 2020 Alibaba Group Holding Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import hashlib
import json
import logging
import threading
from copy import deepcopy
from typing import List
from typing import Mapping
from typing import Union
import vineyard
from graphscope.config import GSConfig as gs_config
from graphscope.framework import dag_utils
from graphscope.framework import graph_utils
from graphscope.framework import utils
from graphscope.framework.errors import check_argument
from graphscope.framework.graph_schema import GraphSchema
from graphscope.framework.graph_utils import EdgeLabel
from graphscope.framework.graph_utils import EdgeSubLabel
from graphscope.framework.graph_utils import VertexLabel
from graphscope.framework.operation import Operation
from graphscope.proto import attr_value_pb2
from graphscope.proto import types_pb2
logger = logging.getLogger("graphscope")
class Graph(object):
"""A class for representing metadata of a graph in the GraphScope.
A :class:`Graph` object holds the metadata of a graph, such as key, schema, and whether the graph is directed.
It is worth noting that the graph is stored by the backend such as Analytical Engine, Vineyard.
In other words, the graph object holds nothing but metadata.
The following example demonstrates its usage:
.. code:: python
>>> import graphscope as gs
>>> from graphscope.framework.loader import Loader
>>> sess = gs.session()
>>> graph = sess.g()
>>> graph = graph.add_vertices("person.csv","person")
>>> graph = graph.add_vertices("software.csv", "software")
>>> graph = graph.add_edges("knows.csv", "knows", src_label="person", dst_label="person")
>>> graph = graph.add_edges("created.csv", "created", src_label="person", dst_label="software")
>>> print(graph)
>>> print(graph.schema)
"""
def __init__(
self,
session,
incoming_data=None,
oid_type="int64",
directed=True,
generate_eid=True,
):
"""Construct a :class:`Graph` object.
Args:
session (:class:`Session`): The session that the graph is created in.
incoming_data: Graph can be initialized through various type of sources,
which can be one of:
- :class:`Operation`
- :class:`nx.Graph`
- :class:`Graph`
- :class:`vineyard.Object`, :class:`vineyard.ObjectId` or :class:`vineyard.ObjectName`
"""
self._key = None
self._graph_type = types_pb2.ARROW_PROPERTY
self._vineyard_id = 0
self._schema = GraphSchema()
self._session = session
self._detached = False
self._interactive_instance_launching_thread = None
self._interactive_instance_list = []
self._learning_instance_list = []
# Hold uncompleted operation for lazy evaluation
self._pending_op = None
# Hold a reference to base graph of modify operation,
# to avoid being garbage collected
self._base_graph = None
oid_type = utils.normalize_data_type_str(oid_type)
if oid_type not in ("int64_t", "std::string"):
raise ValueError("oid_type can only be int64_t or string.")
self._oid_type = oid_type
self._directed = directed
self._generate_eid = generate_eid
self._unsealed_vertices = {}
self._unsealed_edges = {}
# Used to display schema without loading into vineyard,
# and do sanity checking for newly added vertices and edges.
self._v_labels = []
self._e_labels = []
self._e_relationships = []
if incoming_data is not None:
# Don't import the :code:`NXGraph` in top-level statements to improve the
# performance of :code:`import graphscope`.
from graphscope.experimental import nx
if isinstance(incoming_data, Operation):
self._pending_op = incoming_data
if self._pending_op.type == types_pb2.PROJECT_TO_SIMPLE:
self._graph_type = types_pb2.ARROW_PROJECTED
elif isinstance(incoming_data, nx.Graph):
self._pending_op = self._from_nx_graph(incoming_data)
elif isinstance(incoming_data, Graph):
self._pending_op = self._copy_from(incoming_data)
elif isinstance(
incoming_data, (vineyard.Object, vineyard.ObjectID, vineyard.ObjectName)
):
self._pending_op = self._from_vineyard(incoming_data)
else:
raise RuntimeError("Not supported incoming data.")
def __del__(self):
# cleanly ignore all exceptions, since the session may already be closed / destroyed.
try:
self.unload()
except Exception: # pylint: disable=broad-except
pass
def _close_interactive_instances(self):
# Close related interactive instances when graph unloaded.
# Since the graph is gone, querying via interactive client is meaningless.
for instance in self._interactive_instance_list:
instance.close()
self._interactive_instance_list.clear()
def _close_learning_instances(self):
for instance in self._learning_instance_list:
instance.close()
self._learning_instance_list.clear()
def _launch_interactive_instance_impl(self):
try:
self._session.gremlin(self)
except: # noqa: E722
# Record the error msg in `InteractiveQuery` when launching fails.
# Expect and suppress all exceptions here.
pass
def _from_graph_def(self, graph_def):
check_argument(
self._graph_type == graph_def.graph_type, "Graph type doesn't match."
)
self._key = graph_def.key
self._vineyard_id = graph_def.vineyard_id
self._oid_type = graph_def.schema_def.oid_type
self._directed = graph_def.directed
self._generate_eid = graph_def.generate_eid
self._schema_path = graph_def.schema_path
self._schema.get_schema_from_def(graph_def.schema_def)
self._v_labels = self._schema.vertex_labels
self._e_labels = self._schema.edge_labels
self._e_relationships = self._schema.edge_relationships
def _ensure_loaded(self):
if self._key is not None and self._pending_op is None:
return
# Unloaded
if self._session is None:
raise RuntimeError("The graph is not loaded")
# Empty graph
if self._key is None and self._pending_op is None:
raise RuntimeError("Empty graph.")
# Try to load
if self._pending_op is not None:
# Create a graph from scratch.
graph_def = self._pending_op.eval()
self._from_graph_def(graph_def)
self._pending_op = None
self._base_graph = None
self._unsealed_vertices.clear()
self._unsealed_edges.clear()
# init saved_signature (must be after init schema)
self._saved_signature = self.signature
# create gremlin server pod asynchronously
if gs_config.initializing_interactive_engine:
self._interactive_instance_launching_thread = threading.Thread(
target=self._launch_interactive_instance_impl, args=()
)
self._interactive_instance_launching_thread.start()
@property
def key(self):
"""The key of the corresponding graph in engine."""
self._ensure_loaded()
return self._key
@property
def graph_type(self):
"""The type of the graph object.
Returns:
type (`types_pb2.GraphType`): the type of the graph.
"""
return self._graph_type
@property
def schema(self):
"""Schema of the graph.
Returns:
:class:`GraphSchema`: the schema of the graph
"""
self._ensure_loaded()
return self._schema
@property
def schema_path(self):
"""Path that Coordinator will write interactive schema path to.
Returns:
str: The path contains the schema. for interactive engine.
"""
self._ensure_loaded()
return self._schema_path
@property
def signature(self):
self._ensure_loaded()
return hashlib.sha256(
"{}.{}".format(self._schema.signature(), self._key).encode("utf-8")
).hexdigest()
@property
def template_str(self):
self._ensure_loaded()
# transform str/string to std::string
oid_type = utils.normalize_data_type_str(self._oid_type)
vid_type = self._schema.vid_type
vdata_type = utils.data_type_to_cpp(self._schema.vdata_type)
edata_type = utils.data_type_to_cpp(self._schema.edata_type)
if self._graph_type == types_pb2.ARROW_PROPERTY:
template = f"vineyard::ArrowFragment<{oid_type},{vid_type}>"
elif self._graph_type == types_pb2.ARROW_PROJECTED:
template = f"gs::ArrowProjectedFragment<{oid_type},{vid_type},{vdata_type},{edata_type}>"
elif self._graph_type == types_pb2.DYNAMIC_PROJECTED:
template = f"gs::DynamicProjectedFragment<{vdata_type},{edata_type}>"
else:
raise ValueError(f"Unsupported graph type: {self._graph_type}")
return template
@property
def vineyard_id(self):
"""Get the vineyard object_id of this graph.
Returns:
str: The vineyard id of this graph.
"""
self._ensure_loaded()
return self._vineyard_id
@property
def session_id(self):
"""Get the currrent session_id.
Returns:
str: Return session id that the graph belongs to.
"""
return self._session.session_id
def detach(self):
"""Detaching a graph makes it being left in vineyard even when the varaible for
this :class:`Graph` object leaves the lexical scope.
The graph can be accessed using the graph's :code:`ObjectID` or its name later.
"""
self._detached = True
def loaded(self):
try:
self._ensure_loaded()
except RuntimeError:
return False
return self._key is not None
def __str__(self):
v_str = "\n".join([f"VERTEX: {label}" for label in self._v_labels])
relations = []
for i in range(len(self._e_labels)):
relations.extend(
[(self._e_labels[i], src, dst) for src, dst in self._e_relationships[i]]
)
e_str = "\n".join(
[f"EDGE: {label}\tsrc: {src}\tdst: {dst}" for label, src, dst in relations]
)
return f"graphscope.Graph\n{types_pb2.GraphType.Name(self._graph_type)}\n{v_str}\n{e_str}"
def __repr__(self):
return self.__str__()
def unload(self):
"""Unload this graph from graphscope engine."""
if self._session is None:
raise RuntimeError("The graph is not loaded")
if self._key is None:
self._session = None
self._pending_op = None
return
# close interactive instances first
try:
if (
self._interactive_instance_launching_thread is not None
and self._interactive_instance_launching_thread.is_alive()
):
# join raises a RuntimeError if an attempt is made to join the current thread.
# this exception occurs when an object collected by the gc mechanism contains a running thread.
if (
threading.current_thread()
!= self._interactive_instance_launching_thread
):
self._interactive_instance_launching_thread.join()
self._close_interactive_instances()
except Exception as e:
logger.error("Failed to close interactive instances: %s" % e)
try:
self._close_learning_instances()
except Exception as e:
logger.error("Failed to close learning instances: %s" % e)
if not self._detached:
op = dag_utils.unload_graph(self)
op.eval()
self._key = None
self._session = None
self._pending_op = None
def _project_to_simple(self):
self._ensure_loaded()
check_argument(self.graph_type == types_pb2.ARROW_PROPERTY)
check_argument(
self.schema.vertex_label_num == 1,
"Cannot project to simple, vertex label number is more than 1.",
)
check_argument(
self.schema.edge_label_num == 1,
"Cannot project to simple, edge label number is more than 1.",
)
# Check relation v_label -> e_label <- v_label exists.
v_label = self.schema.vertex_labels[0]
e_label = self.schema.edge_labels[0]
relation = (v_label, v_label)
check_argument(
relation in self._schema.get_relationships(e_label),
f"Cannot project to simple, Graph doesn't contain such relationship: {v_label} -> {e_label} <- {v_label}.",
)
v_props = self.schema.get_vertex_properties(v_label)
e_props = self.schema.get_edge_properties(e_label)
check_argument(len(v_props) <= 1)
check_argument(len(e_props) <= 1)
v_label_id = self.schema.get_vertex_label_id(v_label)
e_label_id = self.schema.get_edge_label_id(e_label)
v_prop_id, vdata_type = (
(v_props[0].id, v_props[0].type) if v_props else (-1, None)
)
e_prop_id, edata_type = (
(e_props[0].id, e_props[0].type) if e_props else (-1, None)
)
oid_type = self._schema.oid_type
vid_type = self._schema.vid_type
op = dag_utils.project_arrow_property_graph_to_simple(
self,
v_label_id,
v_prop_id,
e_label_id,
e_prop_id,
vdata_type,
edata_type,
oid_type,
vid_type,
)
graph = Graph(self._session, op)
graph._base_graph = self
return graph
def add_column(self, results, selector):
"""Add the results as a column to the graph. Modification rules are given by the selector.
Args:
results (:class:`Context`): A `Context` that created by doing a query.
selector (dict): Select results to add as column. Format is similar to selectors in `Context`
Returns:
:class:`Graph`: A new `Graph` with new columns.
"""
self._ensure_loaded()
check_argument(
isinstance(selector, Mapping), "selector of add column must be a dict"
)
check_argument(self.graph_type == types_pb2.ARROW_PROPERTY)
self._check_unmodified()
selector = {
key: results._transform_selector(value) for key, value in selector.items()
}
selector = json.dumps(selector)
op = dag_utils.add_column(self, results, selector)
graph = Graph(self._session, op)
graph._base_graph = self
return graph
def to_numpy(self, selector, vertex_range=None):
"""Select some elements of the graph and output to numpy.
Args:
selector (str): Select a portion of graph as a numpy.ndarray.
vertex_range(dict, optional): Slice vertices. Defaults to None.
Returns:
`numpy.ndarray`
"""
check_argument(self.graph_type == types_pb2.ARROW_PROPERTY)
self._ensure_loaded()
self._check_unmodified()
selector = utils.transform_labeled_vertex_property_data_selector(self, selector)
vertex_range = utils.transform_vertex_range(vertex_range)
op = dag_utils.graph_to_numpy(self, selector, vertex_range)
ret = op.eval()
return utils.decode_numpy(ret)
def to_dataframe(self, selector, vertex_range=None):
"""Select some elements of the graph and output as a pandas.DataFrame
Args:
selector (dict): Select some portions of graph.
vertex_range (dict, optional): Slice vertices. Defaults to None.
Returns:
`pandas.DataFrame`
"""
check_argument(self.graph_type == types_pb2.ARROW_PROPERTY)
self._ensure_loaded()
self._check_unmodified()
check_argument(
isinstance(selector, Mapping),
"selector of to_vineyard_dataframe must be a dict",
)
selector = {
key: utils.transform_labeled_vertex_property_data_selector(self, value)
for key, value in selector.items()
}
selector = json.dumps(selector)
vertex_range = utils.transform_vertex_range(vertex_range)
op = dag_utils.graph_to_dataframe(self, selector, vertex_range)
ret = op.eval()
return utils.decode_dataframe(ret)
def is_directed(self):
self._ensure_loaded()
return self._directed
def _check_unmodified(self):
self._ensure_loaded()
check_argument(
self.signature == self._saved_signature, "Graph has been modified!"
)
def _from_nx_graph(self, incoming_graph):
"""Create a gs graph from a nx graph.
Args:
incoming_graph (:class:`nx.graph`): A nx graph that contains graph data.
Returns:
An op that will be used to construct a gs.Graph
Raises:
TypeError: Raise Error if graph type not match.
Examples:
>>> nx_g = nx.path_graph(10)
>>> gs_g = gs.Graph(nx_g)
"""
if hasattr(incoming_graph, "_graph"):
msg = "graph view can not convert to gs graph"
raise TypeError(msg)
return dag_utils.dynamic_to_arrow(incoming_graph)
def _copy_from(self, incoming_graph):
"""Copy a graph.
Args:
incoming_graph (:class:`Graph`): Source graph to be copied from
Returns:
:class:`Graph`: An identical graph, but with a new vineyard id.
"""
check_argument(incoming_graph.graph_type == types_pb2.ARROW_PROPERTY)
check_argument(incoming_graph.loaded())
return dag_utils.copy_graph(incoming_graph)
def _from_vineyard(self, vineyard_object):
"""Load a graph from a already existed vineyard graph.
Args:
vineyard_object (:class:`vineyard.Object`, :class:`vineyard.ObjectID`
or :class:`vineyard.ObjectName`): vineyard object,
which represents a graph.
Returns:
A graph_def.
"""
if isinstance(vineyard_object, vineyard.Object):
return self._from_vineyard_id(vineyard_object.id)
if isinstance(vineyard_object, vineyard.ObjectID):
return self._from_vineyard_id(vineyard_object)
if isinstance(vineyard_object, vineyard.ObjectName):
return self._from_vineyard_name(vineyard_object)
def _from_vineyard_id(self, vineyard_id):
config = {}
config[types_pb2.IS_FROM_VINEYARD_ID] = utils.b_to_attr(True)
config[types_pb2.VINEYARD_ID] = utils.i_to_attr(int(vineyard_id))
# FIXME(hetao) hardcode oid/vid type for codegen, when loading from vineyard
#
# the metadata should be retrieved from vineyard
config[types_pb2.OID_TYPE] = utils.s_to_attr("int64_t")
config[types_pb2.VID_TYPE] = utils.s_to_attr("uint64_t")
return dag_utils.create_graph(
self.session_id, types_pb2.ARROW_PROPERTY, attrs=config
)
def _from_vineyard_name(self, vineyard_name):
config = {}
config[types_pb2.IS_FROM_VINEYARD_ID] = utils.b_to_attr(True)
config[types_pb2.VINEYARD_NAME] = utils.s_to_attr(str(vineyard_name))
# FIXME(hetao) hardcode oid/vid type for codegen, when loading from vineyard
#
# the metadata should be retrieved from vineyard
config[types_pb2.OID_TYPE] = utils.s_to_attr("int64_t")
config[types_pb2.VID_TYPE] = utils.s_to_attr("uint64_t")
return dag_utils.create_graph(
self.session_id, types_pb2.ARROW_PROPERTY, attrs=config
)
def _attach_interactive_instance(self, instance):
"""Store the instance when a new interactive instance is started.
Args:
instance: interactive instance
"""
self._interactive_instance_list.append(instance)
def _attach_learning_instance(self, instance):
"""Store the instance when a new learning instance is created.
Args:
instance: learning instance
"""
self._learning_instance_list.append(instance)
def save_to(self, path, **kwargs):
"""Serialize graph to a location.
The meta and data of the graph are dumped to the specified location,
and can be restored by `Graph.load_from` in other sessions.
Each worker will write a `path_{worker_id}.meta` file and
a `path_{worker_id}` file to storage.
Args:
path (str): supported storages are local, hdfs, oss, s3
"""
import vineyard
import vineyard.io
self._ensure_loaded()
sess = self._session
deployment = "kubernetes" if sess.info["type"] == "k8s" else "ssh"
conf = sess.info["engine_config"]
vineyard_endpoint = conf["vineyard_rpc_endpoint"]
vineyard_ipc_socket = conf["vineyard_socket"]
if sess.info["type"] == "k8s":
hosts = [
"{}:{}".format(sess.info["namespace"], s)
for s in sess.info["engine_hosts"].split(",")
]
else: # type == "hosts"
hosts = sess.info["engine_hosts"].split(",")
vineyard.io.serialize(
path,
vineyard.ObjectID(self._vineyard_id),
type="global",
vineyard_ipc_socket=vineyard_ipc_socket,
vineyard_endpoint=vineyard_endpoint,
storage_options=kwargs,
deployment=deployment,
hosts=hosts,
)
@classmethod
def load_from(cls, path, sess, **kwargs):
"""Construct a `Graph` by deserialize from `path`.
It will read all serialization files, which are dumped by
`Graph.save_to`.
If any serialization file doesn't exist or is broken, this will error out.
Args:
path (str): Path contains the serialization files.
sess (`graphscope.Session`): The target session
that the graph will be constructed in
Returns:
`Graph`: A new graph object. Schema and data are supposed to be
identical to the graph that was serialized.
"""
import vineyard
import vineyard.io
deployment = "kubernetes" if sess.info["type"] == "k8s" else "ssh"
conf = sess.info["engine_config"]
vineyard_endpoint = conf["vineyard_rpc_endpoint"]
vineyard_ipc_socket = conf["vineyard_socket"]
if sess.info["type"] == "k8s":
hosts = [
"{}:{}".format(sess.info["namespace"], s)
for s in sess.info["engine_hosts"].split(",")
]
else: # type == "hosts"
hosts = sess.info["engine_hosts"].split(",")
graph_id = vineyard.io.deserialize(
path,
type="global",
vineyard_ipc_socket=vineyard_ipc_socket,
vineyard_endpoint=vineyard_endpoint,
storage_options=kwargs,
deployment=deployment,
hosts=hosts,
)
return cls(sess, vineyard.ObjectID(graph_id))
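# A hedged round-trip sketch (the path and session below are hypothetical):
# >>> graph.save_to("/tmp/seri")                      # each worker writes /tmp/seri_{worker_id}[.meta]
# >>> restored = Graph.load_from("/tmp/seri", sess)   # schema and data are expected to match `graph`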
def _construct_graph(
self, vertices, edges, v_labels, e_labels, e_relations, mutation_func=None
):
"""Construct graph.
1. Construct a graph from scratch.
If the vertices and edges are empty, return an empty graph.
2. Construct a graph from an existing built graph.
If the vertices and edges are empty, return a copied graph.
Args:
vertices ([type]): [description]
edges ([type]): [description]
v_labels ([type]): [description]
e_labels ([type]): [description]
e_relations ([type]): [description]
mutation_func ([type], optional): [description]. Defaults to None.
Returns:
[type]: [description]
"""
config = graph_utils.assemble_op_config(
vertices.values(),
edges.values(),
self._oid_type,
self._directed,
self._generate_eid,
)
# edge case.
if not vertices and not edges:
if mutation_func:
# Rely on `self._key`
return Graph(self._session, self)
else:
return Graph(
self._session,
None,
self._oid_type,
self._directed,
self._generate_eid,
)
if mutation_func:
op = mutation_func(self, attrs=config)
else:
op = dag_utils.create_graph(
self.session_id, types_pb2.ARROW_PROPERTY, attrs=config
)
graph = Graph(
self._session, op, self._oid_type, self._directed, self._generate_eid
)
graph._unsealed_vertices = vertices
graph._unsealed_edges = edges
graph._v_labels = v_labels
graph._e_labels = e_labels
graph._e_relationships = e_relations
# propagate info about whether this is a loaded graph.
# graph._key = self._key
if mutation_func:
graph._base_graph = self._base_graph or self
return graph
def add_vertices(self, vertices, label="_", properties=None, vid_field=0):
is_from_existed_graph = len(self._unsealed_vertices) != len(
self._v_labels
) or len(self._unsealed_edges) != len(self._e_labels)
if label in self._v_labels:
raise ValueError(f"Label {label} already existed in graph.")
if not self._v_labels and self._e_labels:
raise ValueError("Cannot manually add vertices after inferred vertices.")
unsealed_vertices = deepcopy(self._unsealed_vertices)
unsealed_vertices[label] = VertexLabel(
label=label, loader=vertices, properties=properties, vid_field=vid_field
)
v_labels = deepcopy(self._v_labels)
v_labels.append(label)
# Load after the validity check and before creating the add_vertices op.
# TODO(zsy): Add ability to add vertices and edges to an existing graph simultaneously.
if is_from_existed_graph and self._unsealed_edges:
self._ensure_loaded()
func = dag_utils.add_vertices if is_from_existed_graph else None
return self._construct_graph(
unsealed_vertices,
self._unsealed_edges,
v_labels,
self._e_labels,
self._e_relationships,
func,
)
def add_edges(
self,
edges,
label="_",
properties=None,
src_label=None,
dst_label=None,
src_field=0,
dst_field=1,
):
"""Add edges to graph.
1. Add edges to an uninitialized graph.
i. src_label and dst_label both unspecified. In this case, the current graph must
have 0 vertex labels (we deduce the vertex label from the edge table and name it '_'),
or 1 vertex label (we set src_label and dst_label to this label).
ii. src_label and dst_label both specified and existing in the current graph's vertex labels.
iii. src_label and dst_label both specified while the current graph has no vertex labels;
we deduce all vertex labels from the edge tables.
Note that you either provide all vertex labels, or let graphscope deduce all vertex labels.
We don't support a mixed style.
2. Add edges to an existing graph.
Must add a new kind of edge label, not a new relation to the built graph.
But you can add a new relation to the uninitialized part of the graph.
src_label and dst_label must be specified and exist in the current graph.
A minimal usage sketch is given after this method.
Args:
edges ([type]): [description]
label (str, optional): [description]. Defaults to "_".
properties ([type], optional): [description]. Defaults to None.
src_label ([type], optional): [description]. Defaults to None.
dst_label ([type], optional): [description]. Defaults to None.
src_field (int, optional): [description]. Defaults to 0.
dst_field (int, optional): [description]. Defaults to 1.
Raises:
RuntimeError: [description]
Returns:
Graph: [description]
"""
is_from_existed_graph = len(self._unsealed_vertices) != len(
self._v_labels
) or len(self._unsealed_edges) != len(self._e_labels)
if is_from_existed_graph:
if label in self._e_labels and label not in self._unsealed_edges:
raise ValueError("Cannot add new relation to existed graph.")
if src_label is None or dst_label is None:
raise ValueError("src label and dst label cannot be None.")
if src_label not in self._v_labels or dst_label not in self._v_labels:
raise ValueError("src label or dst_label not existed in graph.")
else:
if src_label is None and dst_label is None:
check_argument(len(self._v_labels) <= 1, "ambiguous vertex label")
if len(self._v_labels) == 1:
src_label = dst_label = self._v_labels[0]
else:
src_label = dst_label = "_"
elif src_label is not None and dst_label is not None:
if self._v_labels:
if (
src_label not in self._v_labels
or dst_label not in self._v_labels
):
raise ValueError("src label or dst_label not existed in graph.")
else:
# Infer all v_labels from edge tables.
pass
else:
raise ValueError(
"src and dst label must be both specified or either unspecified."
)
check_argument(
src_field != dst_field, "src and dst field cannot refer to the same field"
)
unsealed_edges = deepcopy(self._unsealed_edges)
e_labels = deepcopy(self._e_labels)
relations = deepcopy(self._e_relationships)
if label in unsealed_edges:
assert label in self._e_labels
label_idx = self._e_labels.index(label)
# Will check conflict in `add_sub_label`
relations[label_idx].append((src_label, dst_label))
cur_label = unsealed_edges[label]
else:
e_labels.append(label)
relations.append([(src_label, dst_label)])
cur_label = EdgeLabel(label)
cur_label.add_sub_label(
EdgeSubLabel(edges, properties, src_label, dst_label, src_field, dst_field)
)
unsealed_edges[label] = cur_label
# Load after the validity check and before creating the add_edges op.
# TODO(zsy): Add ability to add vertices and edges to an existing graph simultaneously.
if is_from_existed_graph and self._unsealed_vertices:
self._ensure_loaded()
func = dag_utils.add_edges if is_from_existed_graph else None
return self._construct_graph(
self._unsealed_vertices,
unsealed_edges,
self._v_labels,
e_labels,
relations,
func,
)
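# A minimal sketch of the deduction rules described in add_edges() (the files and session
# below are hypothetical; they must exist and be loadable in a real run):
# >>> g = sess.g()                              # no vertex labels yet
# >>> g = g.add_edges("knows.csv", "knows")     # case 1.i: vertex label deduced and named "_"
# >>> g2 = sess.g().add_vertices("person.csv", "person")
# >>> g2 = g2.add_edges("knows.csv", "knows")   # case 1.i with one vertex label: src/dst default to "person"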
def project(
self,
vertices: Mapping[str, Union[List[str], None]],
edges: Mapping[str, Union[List[str], None]],
):
check_argument(self.graph_type == types_pb2.ARROW_PROPERTY)
def get_all_v_props_id(label) -> List[int]:
props = self.schema.get_vertex_properties(label)
return [
self.schema.get_vertex_property_id(label, prop.name) for prop in props
]
def get_all_e_props_id(label) -> List[int]:
props = self.schema.get_edge_properties(label)
return [
self.schema.get_edge_property_id(label, prop.name) for prop in props
]
vertex_collections = {}
edge_collections = {}
for label, props in vertices.items():
label_id = self.schema.get_vertex_label_id(label)
if props is None:
vertex_collections[label_id] = get_all_v_props_id(label)
else:
vertex_collections[label_id] = sorted(
[self.schema.get_vertex_property_id(label, prop) for prop in props]
)
for label, props in edges.items():
# find whether a valid relation exists
relations = self.schema.get_relationships(label)
valid = False
for src, dst in relations:
if src in vertices and dst in vertices:
valid = True
break
if not valid:
raise ValueError(
"Cannot find a valid relation in given vertices and edges"
)
label_id = self.schema.get_edge_label_id(label)
if props is None:
edge_collections[label_id] = get_all_e_props_id(label)
else:
edge_collections[label_id] = sorted(
[self.schema.get_edge_property_id(label, prop) for prop in props]
)
vertex_collections = dict(sorted(vertex_collections.items()))
edge_collections = dict(sorted(edge_collections.items()))
op = dag_utils.project_arrow_property_graph(
self, vertex_collections, edge_collections
)
graph = Graph(self._session, op)
graph._base_graph = self
return graph
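# A minimal usage sketch for project() (the labels and properties are hypothetical and must
# exist in the graph schema; passing None keeps all properties of that label):
# >>> simple = graph.project(vertices={"person": ["age"]}, edges={"knows": None})
# At least one (src, dst) relation of every projected edge label must connect projected vertex
# labels, otherwise a ValueError is raised.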
|
__init__.py
|
#!/usr/bin/env python3
#
# Copyright 2017-2018 Amazon.com, Inc. and its affiliates. All Rights Reserved.
#
# Licensed under the MIT License. See the LICENSE accompanying this file
# for the specific language governing permissions and limitations under
# the License.
#
#
# Copy this script to /sbin/mount.efs and make sure it is executable.
#
# You will be able to mount an EFS file system by its short name, by adding it
# to /etc/fstab. The syntax of an fstab entry is:
#
# [Device] [Mount Point] [File System Type] [Options] [Dump] [Pass]
#
# Add an entry like this:
#
# fs-deadbeef /mount_point efs _netdev 0 0
#
# Using the 'efs' type will cause '/sbin/mount.efs' to be called by 'mount -a'
# for this file system. The '_netdev' option tells the init system that the
# 'efs' type is a networked file system type. This has been tested with systemd
# (Amazon Linux 2, CentOS 7, RHEL 7, Debian 9, and Ubuntu 16.04), and upstart
# (Amazon Linux 2017.09).
#
# Once there is an entry in fstab, the file system can be mounted with:
#
# sudo mount /mount_point
#
# The script will add recommended mount options, if not provided in fstab.
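#
# For example, to mount over TLS with IAM authorization (the options shown are illustrative
# and must match your file system's configuration), an fstab entry could look like:
#
# fs-deadbeef /mount_point efs _netdev,tls,iam 0 0
#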
import base64
import errno
import hashlib
import hmac
import json
import logging
import os
import platform
import pwd
import random
import re
import socket
import subprocess
import sys
import threading
import time
from contextlib import contextmanager
from datetime import datetime, timedelta
from logging.handlers import RotatingFileHandler
try:
from configparser import ConfigParser, NoOptionError, NoSectionError
except ImportError:
import ConfigParser
from ConfigParser import NoOptionError, NoSectionError
try:
from urllib.parse import quote_plus
except ImportError:
from urllib import quote_plus
try:
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError
from urllib.parse import urlencode
except ImportError:
from urllib2 import URLError, HTTPError, build_opener, urlopen, Request, HTTPHandler
from urllib import urlencode
try:
import botocore.session
from botocore.exceptions import ClientError, NoCredentialsError, EndpointConnectionError
BOTOCORE_PRESENT = True
except ImportError:
BOTOCORE_PRESENT = False
VERSION = '1.30.1'
SERVICE = 'elasticfilesystem'
CLONE_NEWNET = 0x40000000
CONFIG_FILE = '/etc/amazon/efs/efs-utils.conf'
CONFIG_SECTION = 'mount'
CLIENT_INFO_SECTION = 'client-info'
CLIENT_SOURCE_STR_LEN_LIMIT = 100
# Cloudwatchlog agent dict includes cloudwatchlog botocore client, cloudwatchlog group name, cloudwatchlog stream name
CLOUDWATCHLOG_AGENT = None
CLOUDWATCH_LOG_SECTION = 'cloudwatch-log'
DEFAULT_CLOUDWATCH_LOG_GROUP = '/aws/efs/utils'
DEFAULT_RETENTION_DAYS = 14
DEFAULT_UNKNOWN_VALUE = 'unknown'
LOG_DIR = '/var/log/amazon/efs'
LOG_FILE = 'mount.log'
STATE_FILE_DIR = '/var/run/efs'
PRIVATE_KEY_FILE = '/etc/amazon/efs/privateKey.pem'
DATE_ONLY_FORMAT = '%Y%m%d'
SIGV4_DATETIME_FORMAT = '%Y%m%dT%H%M%SZ'
CERT_DATETIME_FORMAT = '%y%m%d%H%M%SZ'
AWS_CREDENTIALS_FILE = os.path.expanduser(os.path.join('~' + pwd.getpwuid(os.getuid()).pw_name, '.aws', 'credentials'))
AWS_CONFIG_FILE = os.path.expanduser(os.path.join('~' + pwd.getpwuid(os.getuid()).pw_name, '.aws', 'config'))
CA_CONFIG_BODY = """dir = %s
RANDFILE = $dir/database/.rand
[ ca ]
default_ca = local_ca
[ local_ca ]
database = $dir/database/index.txt
serial = $dir/database/serial
private_key = %s
cert = $dir/certificate.pem
new_certs_dir = $dir/certs
default_md = sha256
preserve = no
policy = efsPolicy
x509_extensions = v3_ca
[ efsPolicy ]
CN = supplied
[ req ]
prompt = no
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
CN = %s
%s
%s
%s
"""
# SigV4 Auth
ALGORITHM = 'AWS4-HMAC-SHA256'
AWS4_REQUEST = 'aws4_request'
HTTP_REQUEST_METHOD = 'GET'
CANONICAL_URI = '/'
CANONICAL_HEADERS_DICT = {
'host': '%s'
}
CANONICAL_HEADERS = '\n'.join(['%s:%s' % (k, v) for k, v in sorted(CANONICAL_HEADERS_DICT.items())])
SIGNED_HEADERS = ';'.join(CANONICAL_HEADERS_DICT.keys())
REQUEST_PAYLOAD = ''
FS_ID_RE = re.compile('^(?P<fs_id>fs-[0-9a-f]+)$')
EFS_FQDN_RE = re.compile(r'^((?P<az>[a-z0-9-]+)\.)?(?P<fs_id>fs-[0-9a-f]+)\.efs\.'
r'(?P<region>[a-z0-9-]+)\.(?P<dns_name_suffix>[a-z0-9.]+)$')
AP_ID_RE = re.compile('^fsap-[0-9a-f]{17}$')
CREDENTIALS_KEYS = ['AccessKeyId', 'SecretAccessKey', 'Token']
ECS_URI_ENV = 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI'
ECS_TASK_METADATA_API = 'http://169.254.170.2'
WEB_IDENTITY_ROLE_ARN_ENV = 'AWS_ROLE_ARN'
WEB_IDENTITY_TOKEN_FILE_ENV = 'AWS_WEB_IDENTITY_TOKEN_FILE'
STS_ENDPOINT_URL_FORMAT = 'https://sts.{}.amazonaws.com/'
INSTANCE_METADATA_TOKEN_URL = 'http://169.254.169.254/latest/api/token'
INSTANCE_METADATA_SERVICE_URL = 'http://169.254.169.254/latest/dynamic/instance-identity/document/'
INSTANCE_IAM_URL = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
SECURITY_CREDS_ECS_URI_HELP_URL = 'https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html'
SECURITY_CREDS_WEBIDENTITY_HELP_URL = 'https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html'
SECURITY_CREDS_IAM_ROLE_HELP_URL = 'https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html'
DEFAULT_STUNNEL_VERIFY_LEVEL = 2
DEFAULT_STUNNEL_CAFILE = '/etc/amazon/efs/efs-utils.crt'
NOT_BEFORE_MINS = 15
NOT_AFTER_HOURS = 3
EFS_ONLY_OPTIONS = [
'accesspoint',
'awscredsuri',
'awsprofile',
'az',
'cafile',
'iam',
'netns',
'noocsp',
'ocsp',
'tls',
'tlsport',
'verify'
]
UNSUPPORTED_OPTIONS = [
'capath'
]
STUNNEL_GLOBAL_CONFIG = {
'fips': 'no',
'foreground': 'yes',
'socket': [
'l:SO_REUSEADDR=yes',
'a:SO_BINDTODEVICE=lo',
],
}
STUNNEL_EFS_CONFIG = {
'client': 'yes',
'accept': '127.0.0.1:%s',
'connect': '%s:2049',
'sslVersion': 'TLSv1.2',
'renegotiation': 'no',
'TIMEOUTbusy': '20',
'TIMEOUTclose': '0',
'TIMEOUTidle': '70',
'delay': 'yes',
}
WATCHDOG_SERVICE = 'amazon-efs-mount-watchdog'
# MacOS instances use plist files. This file needs to be loaded by launchctl (the init system of MacOS)
WATCHDOG_SERVICE_PLIST_PATH = '/Library/LaunchAgents/amazon-efs-mount-watchdog.plist'
SYSTEM_RELEASE_PATH = '/etc/system-release'
OS_RELEASE_PATH = '/etc/os-release'
RHEL8_RELEASE_NAME = 'Red Hat Enterprise Linux release 8'
CENTOS8_RELEASE_NAME = 'CentOS Linux release 8'
FEDORA_RELEASE_NAME = 'Fedora release'
OPEN_SUSE_LEAP_RELEASE_NAME = 'openSUSE Leap'
SUSE_RELEASE_NAME = 'SUSE Linux Enterprise Server'
MACOS_BIG_SUR_RELEASE = 'macOS-11'
SKIP_NO_LIBWRAP_RELEASES = [RHEL8_RELEASE_NAME, CENTOS8_RELEASE_NAME, FEDORA_RELEASE_NAME, OPEN_SUSE_LEAP_RELEASE_NAME,
SUSE_RELEASE_NAME, MACOS_BIG_SUR_RELEASE]
# MacOS does not support the property of Socket SO_BINDTODEVICE in stunnel configuration
SKIP_NO_SO_BINDTODEVICE_RELEASES = [MACOS_BIG_SUR_RELEASE]
MAC_OS_PLATFORM_LIST = ['darwin']
# MacOS Versions : Big Sur - 20.*, Catalina - 19.*, Mojave - 18.*. Catalina and Mojave are not supported for now
MAC_OS_SUPPORTED_VERSION_LIST = ['20']
def errcheck(ret, func, args):
from ctypes import get_errno
if ret == -1:
e = get_errno()
raise OSError(e, os.strerror(e))
def setns(fd, nstype):
from ctypes import CDLL
libc = CDLL('libc.so.6', use_errno=True)
libc.setns.errcheck = errcheck
if hasattr(fd, 'fileno'):
fd = fd.fileno()
return libc.setns(fd, nstype)
class NetNS(object):
# Open sockets from given network namespace: stackoverflow.com/questions/28846059
def __init__(self, nspath):
self.original_nspath = '/proc/%d/ns/net' % os.getpid()
self.target_nspath = nspath
def __enter__(self):
self.original_namespace = open(self.original_nspath)
with open(self.target_nspath) as fd:
setns(fd, CLONE_NEWNET)
def __exit__(self, *args):
setns(self.original_namespace, CLONE_NEWNET)
self.original_namespace.close()
def fatal_error(user_message, log_message=None, exit_code=1):
if log_message is None:
log_message = user_message
sys.stderr.write('%s\n' % user_message)
logging.error(log_message)
publish_cloudwatch_log(CLOUDWATCHLOG_AGENT, 'Mount failed, %s' % log_message)
sys.exit(exit_code)
def get_target_region(config):
def _fatal_error(message):
fatal_error('Error retrieving region. Please set the "region" parameter in the efs-utils configuration file.', message)
metadata_exception = 'Unknown error'
try:
return config.get(CONFIG_SECTION, 'region')
except NoOptionError:
pass
try:
return get_region_from_instance_metadata()
except Exception as e:
metadata_exception = e
logging.warning('Region not found in config file and metadata service call failed, falling back '
'to legacy "dns_name_format" check')
try:
region = get_region_from_legacy_dns_format(config)
sys.stdout.write('Warning: region obtained from "dns_name_format" field. Please set the "region" '
'parameter in the efs-utils configuration file.')
return region
except Exception:
logging.warning('Legacy check for region in "dns_name_format" failed')
_fatal_error(metadata_exception)
def get_region_from_instance_metadata():
instance_identity = get_instance_identity_info_from_instance_metadata('region')
if not instance_identity:
raise Exception("Cannot retrieve region from instance_metadata")
return instance_identity
def get_instance_identity_info_from_instance_metadata(property):
ec2_metadata_unsuccessful_resp = 'Unsuccessful retrieval of EC2 metadata at %s.' % INSTANCE_METADATA_SERVICE_URL
ec2_metadata_url_error_msg = 'Unable to reach %s to retrieve EC2 instance metadata.' % INSTANCE_METADATA_SERVICE_URL
instance_identity = url_request_helper(INSTANCE_METADATA_SERVICE_URL, ec2_metadata_unsuccessful_resp,
ec2_metadata_url_error_msg, retry_with_new_header_token=True)
if instance_identity:
try:
return instance_identity[property]
except KeyError as e:
logging.warning('%s not present in %s: %s' % (property, instance_identity, e))
except TypeError as e:
logging.warning('response %s is not a json object: %s' % (instance_identity, e))
return None
def get_region_from_legacy_dns_format(config):
"""
For backwards compatibility, check dns_name_format to obtain the target region. This functionality
should only be used if region is not present in the config file and metadata calls fail.
"""
dns_name_format = config.get(CONFIG_SECTION, 'dns_name_format')
if '{region}' not in dns_name_format:
split_dns_name_format = dns_name_format.split('.')
if '{dns_name_suffix}' in dns_name_format:
return split_dns_name_format[-2]
elif 'amazonaws.com' in dns_name_format:
return split_dns_name_format[-3]
raise Exception('Region not found in dns_name_format')
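# Illustrative examples of the legacy parsing above (the format strings are hypothetical):
# '{fs_id}.efs.us-east-1.{dns_name_suffix}' -> split on '.' and take [-2] -> 'us-east-1'
# '{fs_id}.efs.us-east-1.amazonaws.com'     -> split on '.' and take [-3] -> 'us-east-1'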
def get_aws_ec2_metadata_token():
try:
opener = build_opener(HTTPHandler)
request = Request(INSTANCE_METADATA_TOKEN_URL)
request.add_header('X-aws-ec2-metadata-token-ttl-seconds', 21600)
request.get_method = lambda: 'PUT'
res = opener.open(request)
return res.read()
except NameError:
headers = {'X-aws-ec2-metadata-token-ttl-seconds': 21600}
req = Request(INSTANCE_METADATA_TOKEN_URL, headers=headers, method='PUT')
res = urlopen(req)
return res.read()
def get_aws_security_credentials(use_iam, region, awsprofile=None, aws_creds_uri=None):
"""
Lookup AWS security credentials (access key ID and secret access key). Adapted credentials provider chain from:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html and
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html
"""
if not use_iam:
return None, None
# attempt to lookup AWS security credentials through the credentials URI the ECS agent generated
if aws_creds_uri:
return get_aws_security_credentials_from_ecs(aws_creds_uri, True)
# attempt to lookup AWS security credentials in AWS credentials file (~/.aws/credentials)
# and configs file (~/.aws/config) with given awsprofile
if awsprofile:
return get_aws_security_credentials_from_awsprofile(awsprofile, True)
# attempt to lookup AWS security credentials through AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable
if ECS_URI_ENV in os.environ:
credentials, credentials_source = get_aws_security_credentials_from_ecs(os.environ[ECS_URI_ENV], False)
if credentials and credentials_source:
return credentials, credentials_source
# attempt to lookup AWS security credentials through AssumeRoleWithWebIdentity
# (e.g. for IAM Role for Service Accounts (IRSA) approach on EKS)
if WEB_IDENTITY_ROLE_ARN_ENV in os.environ and WEB_IDENTITY_TOKEN_FILE_ENV in os.environ:
credentials, credentials_source = get_aws_security_credentials_from_webidentity(
os.environ[WEB_IDENTITY_ROLE_ARN_ENV],
os.environ[WEB_IDENTITY_TOKEN_FILE_ENV],
region,
False
)
if credentials and credentials_source:
return credentials, credentials_source
# attempt to lookup AWS security credentials with IAM role name attached to instance
# through IAM role name security credentials lookup uri
iam_role_name = get_iam_role_name()
if iam_role_name:
credentials, credentials_source = get_aws_security_credentials_from_instance_metadata(iam_role_name)
if credentials and credentials_source:
return credentials, credentials_source
error_msg = 'AWS Access Key ID and Secret Access Key are not found in AWS credentials file (%s), config file (%s), ' \
'from ECS credentials relative uri, or from the instance security credentials service' % \
(AWS_CREDENTIALS_FILE, AWS_CONFIG_FILE)
fatal_error(error_msg, error_msg)
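# A minimal usage sketch (the region and profile are hypothetical). The chain above is tried
# in order: explicit awscredsuri, named awsprofile, ECS container credentials, web identity
# (IRSA), then the instance metadata IAM role.
# >>> creds, source = get_aws_security_credentials(True, 'us-east-1', awsprofile='default')
# >>> source
# 'credentials:default'    # if the keys were found in ~/.aws/credentials under [default]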
def get_aws_security_credentials_from_awsprofile(awsprofile, is_fatal=False):
for file_path in [AWS_CREDENTIALS_FILE, AWS_CONFIG_FILE]:
if os.path.exists(file_path):
credentials = credentials_file_helper(file_path, awsprofile)
if credentials['AccessKeyId']:
return credentials, os.path.basename(file_path) + ':' + awsprofile
# Fail if credentials cannot be fetched from the given awsprofile
if is_fatal:
log_message = 'AWS security credentials not found in %s or %s under named profile [%s]' % \
(AWS_CREDENTIALS_FILE, AWS_CONFIG_FILE, awsprofile)
fatal_error(log_message)
else:
return None, None
def get_aws_security_credentials_from_ecs(aws_creds_uri, is_fatal=False):
ecs_uri = ECS_TASK_METADATA_API + aws_creds_uri
ecs_unsuccessful_resp = 'Unsuccessful retrieval of AWS security credentials at %s.' % ecs_uri
ecs_url_error_msg = 'Unable to reach %s to retrieve AWS security credentials. See %s for more info.' \
% (ecs_uri, SECURITY_CREDS_ECS_URI_HELP_URL)
ecs_security_dict = url_request_helper(ecs_uri, ecs_unsuccessful_resp, ecs_url_error_msg)
if ecs_security_dict and all(k in ecs_security_dict for k in CREDENTIALS_KEYS):
return ecs_security_dict, 'ecs:' + aws_creds_uri
# Fail if credentials cannot be fetched from the given aws_creds_uri
if is_fatal:
fatal_error(ecs_unsuccessful_resp, ecs_unsuccessful_resp)
else:
return None, None
def get_aws_security_credentials_from_webidentity(role_arn, token_file, region, is_fatal=False):
try:
with open(token_file, 'r') as f:
token = f.read()
except Exception as e:
if is_fatal:
unsuccessful_resp = 'Error reading token file %s: %s' % (token_file, e)
fatal_error(unsuccessful_resp, unsuccessful_resp)
else:
return None, None
STS_ENDPOINT_URL = STS_ENDPOINT_URL_FORMAT.format(region)
webidentity_url = STS_ENDPOINT_URL + '?' + urlencode({
'Version': '2011-06-15',
'Action': 'AssumeRoleWithWebIdentity',
'RoleArn': role_arn,
'RoleSessionName': 'efs-mount-helper',
'WebIdentityToken': token
})
unsuccessful_resp = 'Unsuccessful retrieval of AWS security credentials at %s.' % STS_ENDPOINT_URL
url_error_msg = 'Unable to reach %s to retrieve AWS security credentials. See %s for more info.' % \
(STS_ENDPOINT_URL, SECURITY_CREDS_WEBIDENTITY_HELP_URL)
resp = url_request_helper(webidentity_url, unsuccessful_resp, url_error_msg, headers={'Accept': 'application/json'})
if resp:
creds = resp \
.get('AssumeRoleWithWebIdentityResponse', {}) \
.get('AssumeRoleWithWebIdentityResult', {}) \
.get('Credentials', {})
if all(k in creds for k in ['AccessKeyId', 'SecretAccessKey', 'SessionToken']):
return {
'AccessKeyId': creds['AccessKeyId'],
'SecretAccessKey': creds['SecretAccessKey'],
'Token': creds['SessionToken']
}, 'webidentity:' + ','.join([role_arn, token_file])
# Fail if credentials cannot be fetched via the web identity token
if is_fatal:
fatal_error(unsuccessful_resp, unsuccessful_resp)
else:
return None, None
def get_aws_security_credentials_from_instance_metadata(iam_role_name):
security_creds_lookup_url = INSTANCE_IAM_URL + iam_role_name
unsuccessful_resp = 'Unsuccessful retrieval of AWS security credentials at %s.' % security_creds_lookup_url
url_error_msg = 'Unable to reach %s to retrieve AWS security credentials. See %s for more info.' % \
(security_creds_lookup_url, SECURITY_CREDS_IAM_ROLE_HELP_URL)
iam_security_dict = url_request_helper(security_creds_lookup_url, unsuccessful_resp,
url_error_msg, retry_with_new_header_token=True)
if iam_security_dict and all(k in iam_security_dict for k in CREDENTIALS_KEYS):
return iam_security_dict, 'metadata:'
else:
return None, None
def get_iam_role_name():
iam_role_unsuccessful_resp = 'Unsuccessful retrieval of IAM role name at %s.' % INSTANCE_IAM_URL
iam_role_url_error_msg = 'Unable to reach %s to retrieve IAM role name. See %s for more info.' % \
(INSTANCE_IAM_URL, SECURITY_CREDS_IAM_ROLE_HELP_URL)
iam_role_name = url_request_helper(INSTANCE_IAM_URL, iam_role_unsuccessful_resp,
iam_role_url_error_msg, retry_with_new_header_token=True)
return iam_role_name
def credentials_file_helper(file_path, awsprofile):
aws_credentials_configs = read_config(file_path)
credentials = {'AccessKeyId': None, 'SecretAccessKey': None, 'Token': None}
try:
access_key = aws_credentials_configs.get(awsprofile, 'aws_access_key_id')
secret_key = aws_credentials_configs.get(awsprofile, 'aws_secret_access_key')
session_token = aws_credentials_configs.get(awsprofile, 'aws_session_token')
credentials['AccessKeyId'] = access_key
credentials['SecretAccessKey'] = secret_key
credentials['Token'] = session_token
except NoOptionError as e:
if 'aws_access_key_id' in str(e) or 'aws_secret_access_key' in str(e):
logging.debug('aws_access_key_id or aws_secret_access_key not found in %s under named profile [%s]', file_path,
awsprofile)
if 'aws_session_token' in str(e):
logging.debug('aws_session_token not found in %s', file_path)
credentials['AccessKeyId'] = aws_credentials_configs.get(awsprofile, 'aws_access_key_id')
credentials['SecretAccessKey'] = aws_credentials_configs.get(awsprofile, 'aws_secret_access_key')
except NoSectionError:
logging.debug('No [%s] section found in config file %s', awsprofile, file_path)
return credentials
def get_aws_profile(options, use_iam):
awsprofile = options.get('awsprofile')
if not awsprofile and use_iam:
for file_path in [AWS_CREDENTIALS_FILE, AWS_CONFIG_FILE]:
aws_credentials_configs = read_config(file_path)
# check if aws access key id is found under [default] section in current file and return 'default' if so
try:
access_key = aws_credentials_configs.get('default', 'aws_access_key_id')
if access_key is not None:
return 'default'
except (NoSectionError, NoOptionError):
continue
return awsprofile
def url_request_helper(url, unsuccessful_resp, url_error_msg, headers={}, retry_with_new_header_token=False):
try:
req = Request(url)
for k, v in headers.items():
req.add_header(k, v)
request_resp = urlopen(req, timeout=1)
return get_resp_obj(request_resp, url, unsuccessful_resp)
except HTTPError as e:
# For instances enabled with IMDSv2, an Unauthorized 401 error will be thrown;
# to retrieve metadata, the request header should be embedded with a metadata token
if e.code == 401 and retry_with_new_header_token:
token = get_aws_ec2_metadata_token()
req.add_header('X-aws-ec2-metadata-token', token)
request_resp = urlopen(req, timeout=1)
return get_resp_obj(request_resp, url, unsuccessful_resp)
err_msg = 'Unable to reach the url at %s: status=%d, reason is %s' % (url, e.code, e.reason)
except URLError as e:
err_msg = 'Unable to reach the url at %s, reason is %s' % (url, e.reason)
if err_msg:
logging.debug('%s %s', url_error_msg, err_msg)
return None
def get_resp_obj(request_resp, url, unsuccessful_resp):
if request_resp.getcode() != 200:
logging.debug(unsuccessful_resp + ' %s: ResponseCode=%d', url, request_resp.getcode())
return None
resp_body = request_resp.read()
resp_body_type = type(resp_body)
try:
if resp_body_type is str:
resp_dict = json.loads(resp_body)
else:
resp_dict = json.loads(resp_body.decode(request_resp.headers.get_content_charset() or 'us-ascii'))
return resp_dict
except ValueError as e:
logging.info('ValueError parsing "%s" into json: %s. Returning response body.' % (str(resp_body), e))
return resp_body if resp_body_type is str else resp_body.decode('utf-8')
def parse_options(options):
opts = {}
for o in options.split(','):
if '=' in o:
k, v = o.split('=')
opts[k] = v
else:
opts[o] = None
return opts
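# Illustrative example (the option string is hypothetical):
# >>> parse_options('tls,iam,tlsport=20049')
# {'tls': None, 'iam': None, 'tlsport': '20049'}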
def get_tls_port_range(config):
lower_bound = config.getint(CONFIG_SECTION, 'port_range_lower_bound')
upper_bound = config.getint(CONFIG_SECTION, 'port_range_upper_bound')
if lower_bound >= upper_bound:
fatal_error('Configuration option "port_range_upper_bound" defined as %d '
'must be strictly greater than "port_range_lower_bound" defined as %d.'
% (upper_bound, lower_bound))
return lower_bound, upper_bound
def choose_tls_port(config, options):
if 'tlsport' in options:
ports_to_try = [int(options['tlsport'])]
else:
lower_bound, upper_bound = get_tls_port_range(config)
tls_ports = list(range(lower_bound, upper_bound))
# Choose a random midpoint, and then try ports in-order from there
mid = random.randrange(len(tls_ports))
ports_to_try = tls_ports[mid:] + tls_ports[:mid]
assert len(tls_ports) == len(ports_to_try)
if 'netns' not in options:
tls_port = find_tls_port_in_range(ports_to_try)
else:
with NetNS(nspath=options['netns']):
tls_port = find_tls_port_in_range(ports_to_try)
if tls_port:
return tls_port
if 'tlsport' in options:
fatal_error('Specified port [%s] is unavailable. Try selecting a different port.' % options['tlsport'])
else:
fatal_error('Failed to locate an available port in the range [%d, %d], try specifying a different port range in %s'
% (lower_bound, upper_bound, CONFIG_FILE))
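# Worked example of the rotation above, assuming a hypothetical range [20049, 20054):
# tls_ports = [20049, 20050, 20051, 20052, 20053]; if the random midpoint is 2, the ports are
# tried in the order [20051, 20052, 20053, 20049, 20050] until one can be bound.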
def find_tls_port_in_range(ports_to_try):
sock = socket.socket()
for tls_port in ports_to_try:
try:
logging.info("binding %s", tls_port)
sock.bind(('localhost', tls_port))
sock.close()
return tls_port
except socket.error as e:
logging.info(e)
continue
sock.close()
return None
def is_ocsp_enabled(config, options):
if 'ocsp' in options:
return True
elif 'noocsp' in options:
return False
else:
return config.getboolean(CONFIG_SECTION, 'stunnel_check_cert_validity')
def get_mount_specific_filename(fs_id, mountpoint, tls_port):
return '%s.%s.%d' % (fs_id, os.path.abspath(mountpoint).replace(os.sep, '.').lstrip('.'), tls_port)
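# Illustrative example (the values are hypothetical):
# >>> get_mount_specific_filename('fs-deadbeef', '/mnt/efs', 20049)
# 'fs-deadbeef.mnt.efs.20049'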
def serialize_stunnel_config(config, header=None):
lines = []
if header:
lines.append('[%s]' % header)
for k, v in config.items():
if type(v) is list:
for item in v:
lines.append('%s = %s' % (k, item))
else:
lines.append('%s = %s' % (k, v))
return lines
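# Illustrative example of the hand-serialization (the config values are hypothetical):
# >>> serialize_stunnel_config({'client': 'yes', 'socket': ['l:SO_REUSEADDR=yes']}, header='efs')
# ['[efs]', 'client = yes', 'socket = l:SO_REUSEADDR=yes']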
def add_stunnel_ca_options(efs_config, config, options, region):
if 'cafile' in options:
stunnel_cafile = options['cafile']
else:
try:
config_section = get_config_section(config, region)
stunnel_cafile = config.get(config_section, 'stunnel_cafile')
logging.debug("Using stunnel_cafile %s in config section [%s]", stunnel_cafile, config_section)
except NoOptionError:
logging.debug('No CA file configured, using default CA file %s', DEFAULT_STUNNEL_CAFILE)
stunnel_cafile = DEFAULT_STUNNEL_CAFILE
if not os.path.exists(stunnel_cafile):
fatal_error('Failed to find certificate authority file for verification',
'Failed to find CAfile "%s"' % stunnel_cafile)
efs_config['CAfile'] = stunnel_cafile
def get_config_section(config, region):
region_specific_config_section = '%s.%s' % (CONFIG_SECTION, region)
if config.has_section(region_specific_config_section):
config_section = region_specific_config_section
else:
config_section = CONFIG_SECTION
return config_section
def is_stunnel_option_supported(stunnel_output, stunnel_option_name):
supported = False
for line in stunnel_output:
if line.startswith(stunnel_option_name):
supported = True
break
if not supported:
logging.warning('stunnel does not support "%s"', stunnel_option_name)
return supported
def get_version_specific_stunnel_options():
stunnel_command = [_stunnel_bin(), '-help']
proc = subprocess.Popen(stunnel_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
proc.wait()
_, err = proc.communicate()
stunnel_output = err.splitlines()
check_host_supported = is_stunnel_option_supported(stunnel_output, b'checkHost')
ocsp_aia_supported = is_stunnel_option_supported(stunnel_output, b'OCSPaia')
return check_host_supported, ocsp_aia_supported
def _stunnel_bin():
return find_command_path('stunnel',
'Please install it following the instructions at '
'https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html#upgrading-stunnel')
def find_command_path(command, install_method):
try:
env_path = '/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin'
os.putenv('PATH', env_path)
path = subprocess.check_output(['which', command])
except subprocess.CalledProcessError as e:
fatal_error('Failed to locate %s in %s - %s' % (command, env_path, install_method), e)
return path.strip().decode()
def get_system_release_version():
# MacOS does not maintain the paths /etc/os-release and /etc/system-release
if check_if_platform_is_mac():
return platform.platform()
try:
with open(SYSTEM_RELEASE_PATH) as f:
return f.read().strip()
except IOError:
logging.debug('Unable to read %s', SYSTEM_RELEASE_PATH)
try:
with open(OS_RELEASE_PATH) as f:
for line in f:
if 'PRETTY_NAME' in line:
return line.split('=')[1].strip()
except IOError:
logging.debug('Unable to read %s', OS_RELEASE_PATH)
return DEFAULT_UNKNOWN_VALUE
def write_stunnel_config_file(config, state_file_dir, fs_id, mountpoint, tls_port, dns_name, verify_level, ocsp_enabled,
options, region, log_dir=LOG_DIR, cert_details=None):
"""
Serializes stunnel configuration to a file. Unfortunately this does not conform to Python's config file format, so we have to
hand-serialize it.
"""
mount_filename = get_mount_specific_filename(fs_id, mountpoint, tls_port)
system_release_version = get_system_release_version()
global_config = dict(STUNNEL_GLOBAL_CONFIG)
if any(release in system_release_version for release in SKIP_NO_SO_BINDTODEVICE_RELEASES):
global_config['socket'].remove('a:SO_BINDTODEVICE=lo')
if config.getboolean(CONFIG_SECTION, 'stunnel_debug_enabled'):
global_config['debug'] = 'debug'
if config.has_option(CONFIG_SECTION, 'stunnel_logs_file'):
global_config['output'] = config.get(CONFIG_SECTION, 'stunnel_logs_file').replace('{fs_id}', fs_id)
else:
global_config['output'] = os.path.join(log_dir, '%s.stunnel.log' % mount_filename)
efs_config = dict(STUNNEL_EFS_CONFIG)
efs_config['accept'] = efs_config['accept'] % tls_port
efs_config['connect'] = efs_config['connect'] % dns_name
efs_config['verify'] = verify_level
if verify_level > 0:
add_stunnel_ca_options(efs_config, config, options, region)
if cert_details:
efs_config['cert'] = cert_details['certificate']
efs_config['key'] = cert_details['privateKey']
check_host_supported, ocsp_aia_supported = get_version_specific_stunnel_options()
tls_controls_message = 'WARNING: Your client lacks sufficient controls to properly enforce TLS. Please upgrade stunnel, ' \
'or disable "%%s" in %s.\nSee %s for more detail.' % (CONFIG_FILE,
'https://docs.aws.amazon.com/console/efs/troubleshooting-tls')
if config.getboolean(CONFIG_SECTION, 'stunnel_check_cert_hostname'):
if check_host_supported:
# The stunnel checkHost option checks whether the specified DNS host name or wildcard matches any of the
# CN fields provided in the peer certificate. After introducing the AZ field in the dns name, the host name
# in the stunnel config file is no longer valid, so remove the az info there
efs_config['checkHost'] = dns_name[dns_name.index(fs_id):]
else:
fatal_error(tls_controls_message % 'stunnel_check_cert_hostname')
# Only use the config setting if the override is not set
if ocsp_enabled:
if ocsp_aia_supported:
efs_config['OCSPaia'] = 'yes'
else:
fatal_error(tls_controls_message % 'stunnel_check_cert_validity')
if not any(release in system_release_version for release in SKIP_NO_LIBWRAP_RELEASES):
efs_config['libwrap'] = 'no'
stunnel_config = '\n'.join(serialize_stunnel_config(global_config) + serialize_stunnel_config(efs_config, 'efs'))
logging.debug('Writing stunnel configuration:\n%s', stunnel_config)
stunnel_config_file = os.path.join(state_file_dir, 'stunnel-config.%s' % mount_filename)
with open(stunnel_config_file, 'w') as f:
f.write(stunnel_config)
return stunnel_config_file
def write_tls_tunnel_state_file(fs_id, mountpoint, tls_port, tunnel_pid, command, files, state_file_dir, cert_details=None):
"""
Return the name of the temporary file containing TLS tunnel state, prefixed with a '~'. This file needs to be renamed to a
non-temporary version following a successful mount.
"""
state_file = '~' + get_mount_specific_filename(fs_id, mountpoint, tls_port)
state = {
'pid': tunnel_pid,
'cmd': command,
'files': files,
}
if cert_details:
state.update(cert_details)
with open(os.path.join(state_file_dir, state_file), 'w') as f:
json.dump(state, f)
return state_file
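# Illustrative state file (the values are hypothetical): a mount of fs-deadbeef on /mnt/efs over
# port 20049 would first write '~fs-deadbeef.mnt.efs.20049' containing JSON such as
# {"pid": 1234, "cmd": ..., "files": [...]}; the leading '~' is dropped once the mount succeeds.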
def test_tunnel_process(tunnel_proc, fs_id):
tunnel_proc.poll()
if tunnel_proc.returncode is not None:
out, err = tunnel_proc.communicate()
fatal_error('Failed to initialize TLS tunnel for %s' % fs_id,
'Failed to start TLS tunnel (errno=%d). stdout="%s" stderr="%s"'
% (tunnel_proc.returncode, out.strip(), err.strip()))
def poll_tunnel_process(tunnel_proc, fs_id, mount_completed):
"""
Poll the tunnel process health every 0.5s during the mount attempt to fail fast if the tunnel dies. Since this
is not called from the main thread, if the tunnel fails, exit uncleanly with os._exit.
"""
while not mount_completed.is_set():
try:
test_tunnel_process(tunnel_proc, fs_id)
except SystemExit as e:
os._exit(e.code)
mount_completed.wait(.5)
def get_init_system(comm_file='/proc/1/comm'):
init_system = DEFAULT_UNKNOWN_VALUE
if not check_if_platform_is_mac():
try:
with open(comm_file) as f:
init_system = f.read().strip()
except IOError:
logging.warning('Unable to read %s', comm_file)
else:
init_system = 'launchd'
logging.debug('Identified init system: %s', init_system)
return init_system
def check_network_target(fs_id):
with open(os.devnull, 'w') as devnull:
if not check_if_platform_is_mac():
rc = subprocess.call(['systemctl', 'status', 'network.target'], stdout=devnull, stderr=devnull, close_fds=True)
else:
rc = subprocess.call(['sudo', 'ifconfig', 'en0'], stdout=devnull, stderr=devnull, close_fds=True)
if rc != 0:
fatal_error('Failed to mount %s because the network was not yet available, add "_netdev" to your mount options' % fs_id,
exit_code=0)
def check_network_status(fs_id, init_system):
if init_system != 'systemd':
logging.debug('Not testing network on non-systemd init systems')
return
check_network_target(fs_id)
def start_watchdog(init_system):
if init_system == 'init':
proc = subprocess.Popen(
['/sbin/status', WATCHDOG_SERVICE], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
status, _ = proc.communicate()
if 'stop' in str(status):
with open(os.devnull, 'w') as devnull:
subprocess.Popen(['/sbin/start', WATCHDOG_SERVICE], stdout=devnull, stderr=devnull, close_fds=True)
elif 'start' in str(status):
logging.debug('%s is already running', WATCHDOG_SERVICE)
elif init_system == 'systemd':
rc = subprocess.call(['systemctl', 'is-active', '--quiet', WATCHDOG_SERVICE], close_fds=True)
if rc != 0:
with open(os.devnull, 'w') as devnull:
subprocess.Popen(['systemctl', 'start', WATCHDOG_SERVICE], stdout=devnull, stderr=devnull, close_fds=True)
else:
logging.debug('%s is already running', WATCHDOG_SERVICE)
elif init_system == 'launchd':
        proc = subprocess.Popen(['sudo', 'launchctl', 'list', WATCHDOG_SERVICE], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, close_fds=True)
        proc.communicate()
        # check the exit status rather than the Popen object itself
        rc = proc.returncode
        if rc != 0:
if not os.path.exists(WATCHDOG_SERVICE_PLIST_PATH):
                fatal_error('Watchdog plist file missing. Copy the watchdog plist file to the /Library/LaunchAgents directory')
with open(os.devnull, 'w') as devnull:
subprocess.Popen(['sudo', 'launchctl', 'load', WATCHDOG_SERVICE_PLIST_PATH], stdout=devnull,
stderr=devnull, close_fds=True)
else:
logging.debug('%s is already running', WATCHDOG_SERVICE)
else:
error_message = 'Could not start %s, unrecognized init system "%s"' % (WATCHDOG_SERVICE, init_system)
sys.stderr.write('%s\n' % error_message)
logging.warning(error_message)
def create_required_directory(config, directory):
mode = 0o750
try:
mode_str = config.get(CONFIG_SECTION, 'state_file_dir_mode')
try:
mode = int(mode_str, 8)
except ValueError:
logging.warning('Bad state_file_dir_mode "%s" in config file "%s"', mode_str, CONFIG_FILE)
except NoOptionError:
pass
try:
os.makedirs(directory, mode)
except OSError as e:
if errno.EEXIST != e.errno or not os.path.isdir(directory):
raise
@contextmanager
def bootstrap_tls(config, init_system, dns_name, fs_id, mountpoint, options, state_file_dir=STATE_FILE_DIR):
tls_port = choose_tls_port(config, options)
# override the tlsport option so that we can later override the port the NFS client uses to connect to stunnel.
# if the user has specified tlsport=X at the command line this will just re-set tlsport to X.
options['tlsport'] = tls_port
use_iam = 'iam' in options
ap_id = options.get('accesspoint')
cert_details = {}
security_credentials = None
client_info = get_client_info(config)
region = get_target_region(config)
if use_iam:
aws_creds_uri = options.get('awscredsuri')
if aws_creds_uri:
kwargs = {'aws_creds_uri': aws_creds_uri}
else:
kwargs = {'awsprofile': get_aws_profile(options, use_iam)}
security_credentials, credentials_source = get_aws_security_credentials(use_iam, region, **kwargs)
if credentials_source:
cert_details['awsCredentialsMethod'] = credentials_source
if ap_id:
cert_details['accessPoint'] = ap_id
# additional symbol appended to avoid naming collisions
cert_details['mountStateDir'] = get_mount_specific_filename(fs_id, mountpoint, tls_port) + '+'
# common name for certificate signing request is max 64 characters
cert_details['commonName'] = socket.gethostname()[0:64]
region = get_target_region(config)
cert_details['region'] = region
cert_details['certificateCreationTime'] = create_certificate(config, cert_details['mountStateDir'],
cert_details['commonName'], cert_details['region'], fs_id,
security_credentials, ap_id, client_info,
base_path=state_file_dir)
cert_details['certificate'] = os.path.join(state_file_dir, cert_details['mountStateDir'], 'certificate.pem')
cert_details['privateKey'] = get_private_key_path()
cert_details['fsId'] = fs_id
start_watchdog(init_system)
if not os.path.exists(state_file_dir):
create_required_directory(config, state_file_dir)
verify_level = int(options.get('verify', DEFAULT_STUNNEL_VERIFY_LEVEL))
ocsp_enabled = is_ocsp_enabled(config, options)
stunnel_config_file = write_stunnel_config_file(config, state_file_dir, fs_id, mountpoint, tls_port, dns_name, verify_level,
ocsp_enabled, options, region, cert_details=cert_details)
tunnel_args = [_stunnel_bin(), stunnel_config_file]
if 'netns' in options:
tunnel_args = ['nsenter', '--net=' + options['netns']] + tunnel_args
# launch the tunnel in a process group so if it has any child processes, they can be killed easily by the mount watchdog
logging.info('Starting TLS tunnel: "%s"', ' '.join(tunnel_args))
tunnel_proc = subprocess.Popen(
tunnel_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=os.setsid, close_fds=True)
logging.info('Started TLS tunnel, pid: %d', tunnel_proc.pid)
temp_tls_state_file = write_tls_tunnel_state_file(fs_id, mountpoint, tls_port, tunnel_proc.pid, tunnel_args,
[stunnel_config_file], state_file_dir, cert_details=cert_details)
if 'netns' not in options:
test_tlsport(options['tlsport'])
else:
with NetNS(nspath=options['netns']):
test_tlsport(options['tlsport'])
try:
yield tunnel_proc
finally:
os.rename(os.path.join(state_file_dir, temp_tls_state_file), os.path.join(state_file_dir, temp_tls_state_file[1:]))
def test_tlsport(tlsport):
retry_times = 5
while not verify_tlsport_can_be_connected(tlsport) and retry_times > 0:
logging.debug('The tlsport %s cannot be connected yet, sleep 0.05s, %s retry time(s) left', tlsport, retry_times)
time.sleep(0.05)
retry_times -= 1
def check_if_nfsvers_is_compatible_with_macos(options):
# MacOS does not support NFSv4.1
if ('nfsvers' in options and options['nfsvers'] == '4.1') \
or ('vers' in options and options['vers'] == '4.1')\
or ('minorversion' in options and options['minorversion'] == 1):
fatal_error('NFSv4.1 is not supported on MacOS, please switch to NFSv4.0')
def get_nfs_mount_options(options):
# If you change these options, update the man page as well at man/mount.efs.8
if 'nfsvers' not in options and 'vers' not in options:
options['nfsvers'] = '4.1' if not check_if_platform_is_mac() else '4.0'
if check_if_platform_is_mac():
check_if_nfsvers_is_compatible_with_macos(options)
if 'rsize' not in options:
options['rsize'] = '1048576'
if 'wsize' not in options:
options['wsize'] = '1048576'
if 'soft' not in options and 'hard' not in options:
options['hard'] = None
if 'timeo' not in options:
options['timeo'] = '600'
if 'retrans' not in options:
options['retrans'] = '2'
if 'noresvport' not in options:
options['noresvport'] = None
# Set mountport to 2049 for MacOS
if check_if_platform_is_mac():
options['mountport'] = '2049'
if 'tls' in options:
options['port'] = options['tlsport']
def to_nfs_option(k, v):
if v is None:
return k
return '%s=%s' % (str(k), str(v))
nfs_options = [to_nfs_option(k, v) for k, v in options.items() if k not in EFS_ONLY_OPTIONS]
return ','.join(nfs_options)
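# Illustrative sketch (not part of the original mount.efs): how the Linux defaults above serialize
# into the option string passed to the NFS mount command. The values shown are just the defaults.
def _example_nfs_option_string():
    opts = {'nfsvers': '4.1', 'rsize': '1048576', 'wsize': '1048576',
            'hard': None, 'timeo': '600', 'retrans': '2', 'noresvport': None}
    # options whose value is None become bare flags, mirroring to_nfs_option() above
    return ','.join(k if v is None else '%s=%s' % (k, v) for k, v in opts.items())
    # e.g. 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport'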
def mount_nfs(dns_name, path, mountpoint, options):
if 'tls' in options:
mount_path = '127.0.0.1:%s' % path
else:
mount_path = '%s:%s' % (dns_name, path)
if not check_if_platform_is_mac():
command = ['/sbin/mount.nfs4', mount_path, mountpoint, '-o', get_nfs_mount_options(options)]
else:
command = ['/sbin/mount_nfs', '-o', get_nfs_mount_options(options), mount_path, mountpoint]
if 'netns' in options:
command = ['nsenter', '--net=' + options['netns']] + command
logging.info('Executing: "%s"', ' '.join(command))
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
out, err = proc.communicate()
if proc.returncode == 0:
message = 'Successfully mounted %s at %s' % (dns_name, mountpoint)
logging.info(message)
publish_cloudwatch_log(CLOUDWATCHLOG_AGENT, message)
else:
message = 'Failed to mount %s at %s: returncode=%d, stderr="%s"' % (dns_name, mountpoint, proc.returncode, err.strip())
fatal_error(err.strip(), message, proc.returncode)
def usage(out, exit_code=1):
out.write('Usage: mount.efs [--version] [-h|--help] <fsname> <mountpoint> [-o <options>]\n')
sys.exit(exit_code)
def parse_arguments_early_exit(args=None):
"""Parse arguments, checking for early exit conditions only"""
if args is None:
args = sys.argv
if '-h' in args[1:] or '--help' in args[1:]:
usage(out=sys.stdout, exit_code=0)
if '--version' in args[1:]:
sys.stdout.write('%s Version: %s\n' % (args[0], VERSION))
sys.exit(0)
def parse_arguments(config, args=None):
"""Parse arguments, return (fsid, path, mountpoint, options)"""
if args is None:
args = sys.argv
fsname = None
mountpoint = None
options = {}
if not check_if_platform_is_mac():
if len(args) > 1:
fsname = args[1]
if len(args) > 2:
mountpoint = args[2]
if len(args) > 4 and '-o' in args[:-1]:
options_index = args.index('-o') + 1
options = parse_options(args[options_index])
else:
if len(args) > 1:
fsname = args[-2]
if len(args) > 2:
mountpoint = args[-1]
if len(args) > 4 and '-o' in args[:-2]:
for arg in args[1:-2]:
if arg != '-o':
options.update(parse_options(arg))
if not fsname or not mountpoint:
usage(out=sys.stderr)
# We treat az as an option when customer is using dns name of az mount target to mount,
# even if they don't provide az with option, we update the options with that info
fs_id, path, az = match_device(config, fsname, options)
return fs_id, path, mountpoint, add_field_in_options(options, 'az', az)
def get_client_info(config):
client_info = {}
# source key/value pair in config file
if config.has_option(CLIENT_INFO_SECTION, 'source'):
client_source = config.get(CLIENT_INFO_SECTION, 'source')
if 0 < len(client_source) <= CLIENT_SOURCE_STR_LEN_LIMIT:
client_info['source'] = client_source
if not client_info.get('source'):
client_info['source'] = DEFAULT_UNKNOWN_VALUE
client_info['efs_utils_version'] = VERSION
return client_info
def create_certificate(config, mount_name, common_name, region, fs_id, security_credentials, ap_id, client_info,
base_path=STATE_FILE_DIR):
current_time = get_utc_now()
tls_paths = tls_paths_dictionary(mount_name, base_path)
certificate_config = os.path.join(tls_paths['mount_dir'], 'config.conf')
certificate_signing_request = os.path.join(tls_paths['mount_dir'], 'request.csr')
certificate = os.path.join(tls_paths['mount_dir'], 'certificate.pem')
ca_dirs_check(config, tls_paths['database_dir'], tls_paths['certs_dir'])
ca_supporting_files_check(tls_paths['index'], tls_paths['index_attr'], tls_paths['serial'], tls_paths['rand'])
private_key = check_and_create_private_key(base_path)
if security_credentials:
public_key = os.path.join(tls_paths['mount_dir'], 'publicKey.pem')
create_public_key(private_key, public_key)
create_ca_conf(certificate_config, common_name, tls_paths['mount_dir'], private_key, current_time, region, fs_id,
security_credentials, ap_id, client_info)
create_certificate_signing_request(certificate_config, private_key, certificate_signing_request)
not_before = get_certificate_timestamp(current_time, minutes=-NOT_BEFORE_MINS)
not_after = get_certificate_timestamp(current_time, hours=NOT_AFTER_HOURS)
cmd = 'openssl ca -startdate %s -enddate %s -selfsign -batch -notext -config %s -in %s -out %s' % \
(not_before, not_after, certificate_config, certificate_signing_request, certificate)
subprocess_call(cmd, 'Failed to create self-signed client-side certificate')
return current_time.strftime(CERT_DATETIME_FORMAT)
def get_private_key_path():
"""Wrapped for mocking purposes in unit tests"""
return PRIVATE_KEY_FILE
def check_and_create_private_key(base_path=STATE_FILE_DIR):
# Creating RSA private keys is slow, so we will create one private key and allow mounts to share it.
# This means, however, that we have to include a locking mechanism to ensure that the private key is
# atomically created, as mounts occurring in parallel may try to create the key simultaneously.
key = get_private_key_path()
@contextmanager
def open_lock_file():
lock_file = os.path.join(base_path, 'efs-utils-lock')
f = os.open(lock_file, os.O_CREAT | os.O_DSYNC | os.O_EXCL | os.O_RDWR)
try:
lock_file_contents = 'PID: %s' % os.getpid()
os.write(f, lock_file_contents.encode('utf-8'))
yield f
finally:
os.close(f)
os.remove(lock_file)
def do_with_lock(function):
while True:
try:
with open_lock_file():
return function()
except OSError as e:
if e.errno == errno.EEXIST:
logging.info('Failed to take out private key creation lock, sleeping 50 ms')
time.sleep(0.05)
else:
raise
def generate_key():
if os.path.isfile(key):
return
cmd = 'openssl genpkey -algorithm RSA -out %s -pkeyopt rsa_keygen_bits:3072' % key
subprocess_call(cmd, 'Failed to create private key')
read_only_mode = 0o400
os.chmod(key, read_only_mode)
do_with_lock(generate_key)
return key
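# Illustrative sketch (not part of the original mount.efs): the O_CREAT | O_EXCL lock-file pattern
# used by check_and_create_private_key above, shown in isolation. The lock path and the `work`
# callable are hypothetical.
def _example_exclusive_lock(work, lock_path='/tmp/example-create.lock'):
    import errno, os, time
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(0.05)              # another process holds the lock; retry
            continue
        try:
            return work()                 # run the one-time work while holding the lock
        finally:
            os.close(fd)
            os.remove(lock_path)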
def create_certificate_signing_request(config_path, private_key, csr_path):
cmd = 'openssl req -new -config %s -key %s -out %s' % (config_path, private_key, csr_path)
subprocess_call(cmd, 'Failed to create certificate signing request (csr)')
def create_ca_conf(config_path, common_name, directory, private_key, date,
region, fs_id, security_credentials, ap_id, client_info):
"""Populate ca/req configuration file with fresh configurations at every mount since SigV4 signature can change"""
public_key_path = os.path.join(directory, 'publicKey.pem')
ca_extension_body = ca_extension_builder(ap_id, security_credentials, fs_id, client_info)
efs_client_auth_body = efs_client_auth_builder(public_key_path, security_credentials['AccessKeyId'],
security_credentials['SecretAccessKey'], date, region, fs_id,
security_credentials['Token']) if security_credentials else ''
efs_client_info_body = efs_client_info_builder(client_info) if client_info else ''
full_config_body = CA_CONFIG_BODY % (directory, private_key, common_name, ca_extension_body,
efs_client_auth_body, efs_client_info_body)
with open(config_path, 'w') as f:
f.write(full_config_body)
return full_config_body
def ca_extension_builder(ap_id, security_credentials, fs_id, client_info):
ca_extension_str = '[ v3_ca ]\nsubjectKeyIdentifier = hash'
if ap_id:
ca_extension_str += '\n1.3.6.1.4.1.4843.7.1 = ASN1:UTF8String:' + ap_id
if security_credentials:
ca_extension_str += '\n1.3.6.1.4.1.4843.7.2 = ASN1:SEQUENCE:efs_client_auth'
ca_extension_str += '\n1.3.6.1.4.1.4843.7.3 = ASN1:UTF8String:' + fs_id
if client_info:
ca_extension_str += '\n1.3.6.1.4.1.4843.7.4 = ASN1:SEQUENCE:efs_client_info'
return ca_extension_str
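# Illustrative sketch (not part of the original mount.efs): what the v3_ca extension section built
# by ca_extension_builder above can look like. The access point id, credentials and file system id
# below are made-up example values.
def _example_ca_extension_section():
    return ca_extension_builder(ap_id='fsap-0123456789abcdef0',
                                security_credentials={'AccessKeyId': 'AKIDEXAMPLE'},
                                fs_id='fs-deadbeef',
                                client_info={'source': 'cli'})
    # -> '[ v3_ca ]\nsubjectKeyIdentifier = hash\n1.3.6.1.4.1.4843.7.1 = ...\n1.3.6.1.4.1.4843.7.2 = ...'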
def efs_client_auth_builder(public_key_path, access_key_id, secret_access_key, date, region, fs_id, session_token=None):
public_key_hash = get_public_key_sha1(public_key_path)
canonical_request = create_canonical_request(public_key_hash, date, access_key_id, region, fs_id, session_token)
string_to_sign = create_string_to_sign(canonical_request, date, region)
signature = calculate_signature(string_to_sign, date, secret_access_key, region)
efs_client_auth_str = '[ efs_client_auth ]'
efs_client_auth_str += '\naccessKeyId = UTF8String:' + access_key_id
efs_client_auth_str += '\nsignature = OCTETSTRING:' + signature
efs_client_auth_str += '\nsigv4DateTime = UTCTIME:' + date.strftime(CERT_DATETIME_FORMAT)
if session_token:
efs_client_auth_str += '\nsessionToken = EXPLICIT:0,UTF8String:' + session_token
return efs_client_auth_str
def efs_client_info_builder(client_info):
efs_client_info_str = '[ efs_client_info ]'
for key, value in client_info.items():
efs_client_info_str += '\n%s = UTF8String:%s' % (key, value)
return efs_client_info_str
def create_public_key(private_key, public_key):
cmd = 'openssl rsa -in %s -outform PEM -pubout -out %s' % (private_key, public_key)
subprocess_call(cmd, 'Failed to create public key')
def subprocess_call(cmd, error_message):
"""Helper method to run shell openssl command and to handle response error messages"""
retry_times = 3
for retry in range(retry_times):
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
(output, err) = process.communicate()
rc = process.poll()
if rc != 0:
logging.error('Command %s failed, rc=%s, stdout="%s", stderr="%s"' % (cmd, rc, output, err), exc_info=True)
try:
process.kill()
except OSError:
# Silently fail if the subprocess has exited already
pass
else:
return output, err
error_message = '%s, error is: %s' % (error_message, err)
fatal_error(error_message, error_message)
def ca_dirs_check(config, database_dir, certs_dir):
"""Check if mount's database and certs directories exist and if not, create directories (also create all intermediate
directories if they don't exist)."""
if not os.path.exists(database_dir):
create_required_directory(config, database_dir)
if not os.path.exists(certs_dir):
create_required_directory(config, certs_dir)
def ca_supporting_files_check(index_path, index_attr_path, serial_path, rand_path):
"""Recreate all supporting openssl ca and req files if they're not present in their respective directories"""
if not os.path.isfile(index_path):
open(index_path, 'w').close()
if not os.path.isfile(index_attr_path):
with open(index_attr_path, 'w+') as f:
f.write('unique_subject = no')
if not os.path.isfile(serial_path):
with open(serial_path, 'w+') as f:
f.write('00')
if not os.path.isfile(rand_path):
open(rand_path, 'w').close()
def get_certificate_timestamp(current_time, **kwargs):
updated_time = current_time + timedelta(**kwargs)
return updated_time.strftime(CERT_DATETIME_FORMAT)
def get_utc_now():
"""
Wrapped for patching purposes in unit tests
"""
return datetime.utcnow()
def assert_root():
if os.geteuid() != 0:
sys.stderr.write('only root can run mount.efs\n')
sys.exit(1)
def read_config(config_file=CONFIG_FILE):
try:
p = ConfigParser.SafeConfigParser()
except AttributeError:
p = ConfigParser()
p.read(config_file)
return p
def bootstrap_logging(config, log_dir=LOG_DIR):
raw_level = config.get(CONFIG_SECTION, 'logging_level')
levels = {
'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL
}
level = levels.get(raw_level.lower())
level_error = False
if not level:
# delay logging error about malformed log level until after logging is configured
level_error = True
level = logging.INFO
max_bytes = config.getint(CONFIG_SECTION, 'logging_max_bytes')
file_count = config.getint(CONFIG_SECTION, 'logging_file_count')
handler = RotatingFileHandler(os.path.join(log_dir, LOG_FILE), maxBytes=max_bytes, backupCount=file_count)
handler.setFormatter(logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(message)s'))
logger = logging.getLogger()
logger.setLevel(level)
logger.addHandler(handler)
if level_error:
logging.error('Malformed logging level "%s", setting logging level to %s', raw_level, level)
def get_dns_name(config, fs_id, options):
def _validate_replacement_field_count(format_str, expected_ct):
if format_str.count('{') != expected_ct or format_str.count('}') != expected_ct:
raise ValueError('DNS name format has an incorrect number of replacement fields')
dns_name_format = config.get(CONFIG_SECTION, 'dns_name_format')
if '{fs_id}' not in dns_name_format:
raise ValueError('DNS name format must include {fs_id}')
format_args = {'fs_id': fs_id}
expected_replacement_field_ct = 1
if '{az}' in dns_name_format:
az = options.get('az')
if az:
expected_replacement_field_ct += 1
format_args['az'] = az
else:
dns_name_format = dns_name_format.replace('{az}.', '')
if '{region}' in dns_name_format:
expected_replacement_field_ct += 1
format_args['region'] = get_target_region(config)
if '{dns_name_suffix}' in dns_name_format:
expected_replacement_field_ct += 1
config_section = CONFIG_SECTION
region = format_args.get('region')
if region:
config_section = get_config_section(config, region)
format_args['dns_name_suffix'] = config.get(config_section, 'dns_name_suffix')
logging.debug("Using dns_name_suffix %s in config section [%s]", format_args.get('dns_name_suffix'), config_section)
_validate_replacement_field_count(dns_name_format, expected_replacement_field_ct)
dns_name = dns_name_format.format(**format_args)
try:
socket.gethostbyname(dns_name)
except socket.gaierror:
fatal_error('Failed to resolve "%s" - check that your file system ID is correct.\nSee %s for more detail.'
% (dns_name, 'https://docs.aws.amazon.com/console/efs/mount-dns-name'),
'Failed to resolve "%s"' % dns_name)
return dns_name
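# Illustrative sketch (not part of the original mount.efs): how a dns_name_format template expands.
# The format string and substitution values below are hypothetical examples.
def _example_dns_name_expansion():
    dns_name_format = '{az}.{fs_id}.efs.{region}.{dns_name_suffix}'
    format_args = {'az': 'us-east-1a', 'fs_id': 'fs-deadbeef',
                   'region': 'us-east-1', 'dns_name_suffix': 'amazonaws.com'}
    return dns_name_format.format(**format_args)
    # -> 'us-east-1a.fs-deadbeef.efs.us-east-1.amazonaws.com'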
def tls_paths_dictionary(mount_name, base_path=STATE_FILE_DIR):
tls_dict = {
'mount_dir': os.path.join(base_path, mount_name),
# every mount will have its own ca mode assets due to lack of multi-threading support in openssl
'database_dir': os.path.join(base_path, mount_name, 'database'),
'certs_dir': os.path.join(base_path, mount_name, 'certs'),
'index': os.path.join(base_path, mount_name, 'database/index.txt'),
'index_attr': os.path.join(base_path, mount_name, 'database/index.txt.attr'),
'serial': os.path.join(base_path, mount_name, 'database/serial'),
'rand': os.path.join(base_path, mount_name, 'database/.rand')
}
return tls_dict
def get_public_key_sha1(public_key):
# truncating public key to remove the header and footer '-----(BEGIN|END) PUBLIC KEY-----'
with open(public_key, 'r') as f:
lines = f.readlines()
lines = lines[1:-1]
key = ''.join(lines)
key = bytearray(base64.b64decode(key))
# Parse the public key to pull out the actual key material by looking for the key BIT STRING
# Example:
# 0:d=0 hl=4 l= 418 cons: SEQUENCE
# 4:d=1 hl=2 l= 13 cons: SEQUENCE
# 6:d=2 hl=2 l= 9 prim: OBJECT :rsaEncryption
# 17:d=2 hl=2 l= 0 prim: NULL
# 19:d=1 hl=4 l= 399 prim: BIT STRING
cmd = 'openssl asn1parse -inform PEM -in %s' % public_key
output, err = subprocess_call(cmd, 'Unable to ASN1 parse public key file, %s, correctly' % public_key)
key_line = ''
for line in output.splitlines():
if 'BIT STRING' in line.decode('utf-8'):
key_line = line.decode('utf-8')
if not key_line:
err_msg = 'Public key file, %s, is incorrectly formatted' % public_key
fatal_error(err_msg, err_msg)
key_line = key_line.replace(' ', '')
# DER encoding TLV (Tag, Length, Value)
# - the first octet (byte) is the tag (type)
# - the next octets are the length - "definite form"
# - the first octet always has the high order bit (8) set to 1
# - the remaining 127 bits are used to encode the number of octets that follow
# - the following octets encode, as big-endian, the length (which may be 0) as a number of octets
# - the remaining octets are the "value" aka content
#
# For a BIT STRING, the first octet of the value is used to signify the number of unused bits that exist in the last
# content byte. Note that this is explicitly excluded from the SubjectKeyIdentifier hash, per
# https://tools.ietf.org/html/rfc5280#section-4.2.1.2
#
# Example:
# 0382018f00...<subjectPublicKey>
# - 03 - BIT STRING tag
# - 82 - 2 length octets to follow (ignore high order bit)
# - 018f - length of 399
# - 00 - no unused bits in the last content byte
offset = int(key_line.split(':')[0])
key = key[offset:]
num_length_octets = key[1] & 0b01111111
# Exclude the tag (1), length (1 + num_length_octets), and number of unused bits (1)
offset = 1 + 1 + num_length_octets + 1
key = key[offset:]
sha1 = hashlib.sha1()
sha1.update(key)
return sha1.hexdigest()
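# Illustrative sketch (not part of the original mount.efs): the definite-form DER length decoding
# described in the comments above, applied to the example prefix 03 82 01 8f 00. The trailing
# bytes stand in for the actual key material.
def _example_der_bit_string_offset():
    key = bytearray.fromhex('0382018f00aabbcc')   # hypothetical BIT STRING: tag, lengths, content
    num_length_octets = key[1] & 0b01111111       # 0x82 -> 2 length octets follow
    length = 0
    for octet in key[2:2 + num_length_octets]:
        length = (length << 8) | octet            # 0x01 0x8f -> 399
    offset = 1 + 1 + num_length_octets + 1        # tag + first length byte + length octets + unused-bits byte
    return length, key[offset:]                   # (399, content starting at 0xaa)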
def create_canonical_request(public_key_hash, date, access_key, region, fs_id, session_token=None):
"""
Create a Canonical Request - https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
"""
formatted_datetime = date.strftime(SIGV4_DATETIME_FORMAT)
credential = quote_plus(access_key + '/' + get_credential_scope(date, region))
request = HTTP_REQUEST_METHOD + '\n'
request += CANONICAL_URI + '\n'
request += create_canonical_query_string(public_key_hash, credential, formatted_datetime, session_token) + '\n'
request += CANONICAL_HEADERS % fs_id + '\n'
request += SIGNED_HEADERS + '\n'
sha256 = hashlib.sha256()
sha256.update(REQUEST_PAYLOAD.encode())
request += sha256.hexdigest()
return request
def create_canonical_query_string(public_key_hash, credential, formatted_datetime, session_token=None):
canonical_query_params = {
'Action': 'Connect',
# Public key hash is included in canonical request to tie the signature to a specific key pair to avoid replay attacks
'PublicKeyHash': quote_plus(public_key_hash),
'X-Amz-Algorithm': ALGORITHM,
'X-Amz-Credential': credential,
'X-Amz-Date': quote_plus(formatted_datetime),
'X-Amz-Expires': 86400,
'X-Amz-SignedHeaders': SIGNED_HEADERS,
}
if session_token:
canonical_query_params['X-Amz-Security-Token'] = quote_plus(session_token)
# Cannot use urllib.urlencode because it replaces the %s's
return '&'.join(['%s=%s' % (k, v) for k, v in sorted(canonical_query_params.items())])
def create_string_to_sign(canonical_request, date, region):
"""
Create a String to Sign - https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html
"""
string_to_sign = ALGORITHM + '\n'
string_to_sign += date.strftime(SIGV4_DATETIME_FORMAT) + '\n'
string_to_sign += get_credential_scope(date, region) + '\n'
sha256 = hashlib.sha256()
sha256.update(canonical_request.encode())
string_to_sign += sha256.hexdigest()
return string_to_sign
def calculate_signature(string_to_sign, date, secret_access_key, region):
"""
Calculate the Signature - https://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html
"""
def _sign(key, msg):
return hmac.new(key, msg.encode('utf-8'), hashlib.sha256)
key_date = _sign(('AWS4' + secret_access_key).encode('utf-8'), date.strftime(DATE_ONLY_FORMAT)).digest()
add_region = _sign(key_date, region).digest()
add_service = _sign(add_region, SERVICE).digest()
signing_key = _sign(add_service, 'aws4_request').digest()
return _sign(signing_key, string_to_sign).hexdigest()
def get_credential_scope(date, region):
return '/'.join([date.strftime(DATE_ONLY_FORMAT), region, SERVICE, AWS4_REQUEST])
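# Illustrative sketch (not part of the original mount.efs): the three SigV4 helpers above chained
# together. The date, key pair, region and file system id are hypothetical example values.
def _example_sigv4_signature():
    from datetime import datetime
    date = datetime(2023, 1, 1)
    canonical_request = create_canonical_request('example-public-key-hash', date,
                                                 'AKIDEXAMPLE', 'us-east-1', 'fs-deadbeef')
    string_to_sign = create_string_to_sign(canonical_request, date, 'us-east-1')
    return calculate_signature(string_to_sign, date, 'wJalrXUtnFEMI/K7MDENG/EXAMPLEKEY', 'us-east-1')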
def match_device(config, device, options):
"""Return the EFS id, the remote path, and the az to mount"""
try:
remote, path = device.split(':', 1)
except ValueError:
remote = device
path = '/'
if FS_ID_RE.match(remote):
return remote, path, None
try:
primary, secondaries, _ = socket.gethostbyname_ex(remote)
hostnames = list(filter(lambda e: e is not None, [primary] + secondaries))
except socket.gaierror:
create_default_cloudwatchlog_agent_if_not_exist(config)
fatal_error(
'Failed to resolve "%s" - check that the specified DNS name is a CNAME record resolving to a valid EFS DNS '
'name' % remote,
'Failed to resolve "%s"' % remote
)
if not hostnames:
create_default_cloudwatchlog_agent_if_not_exist(config)
fatal_error(
'The specified domain name "%s" did not resolve to an EFS mount target' % remote
)
for hostname in hostnames:
efs_fqdn_match = EFS_FQDN_RE.match(hostname)
if efs_fqdn_match:
az = efs_fqdn_match.group('az')
fs_id = efs_fqdn_match.group('fs_id')
if az and 'az' in options and az != options['az']:
fatal_error(
'The hostname "%s" resolved by the specified domain name "%s" does not match the az provided in the '
'mount options, expected = %s, given = %s' % (hostname, remote, options['az'], az))
expected_dns_name = get_dns_name(config, fs_id, add_field_in_options(options, 'az', az))
# check that the DNS name of the mount target matches exactly the DNS name the CNAME resolves to
if hostname == expected_dns_name:
return fs_id, path, az
else:
create_default_cloudwatchlog_agent_if_not_exist(config)
fatal_error('The specified CNAME "%s" did not resolve to a valid DNS name for an EFS mount target. '
'Please refer to the EFS documentation for mounting with DNS names for examples: %s'
% (remote, 'https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html'))
def add_field_in_options(options, field_key, field_value):
if field_value and field_key not in options:
options[field_key] = field_value
return options
def is_nfs_mount(mountpoint):
if not check_if_platform_is_mac():
cmd = ['stat', '-f', '-L', '-c', '%T', mountpoint]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
output, _ = p.communicate()
return output and 'nfs' in str(output)
else:
process = subprocess.run(['mount', '-t', 'nfs'], check=True, stdout=subprocess.PIPE, universal_newlines=True)
stdout = process.stdout
if not stdout:
return False
mounts = stdout.split('\n')
for mount in mounts:
_mount = mount.split()
if len(_mount) >= 4 and _mount[2] == mountpoint and 'nfs' in _mount[3]:
return True
return False
def mount_tls(config, init_system, dns_name, path, fs_id, mountpoint, options):
if os.path.ismount(mountpoint) and is_nfs_mount(mountpoint):
sys.stdout.write("%s is already mounted, please run 'mount' command to verify\n" % mountpoint)
logging.warning("%s is already mounted, mount aborted" % mountpoint)
return
with bootstrap_tls(config, init_system, dns_name, fs_id, mountpoint, options) as tunnel_proc:
mount_completed = threading.Event()
t = threading.Thread(target=poll_tunnel_process, args=(tunnel_proc, fs_id, mount_completed))
t.daemon = True
t.start()
mount_nfs(dns_name, path, mountpoint, options)
mount_completed.set()
t.join()
def verify_tlsport_can_be_connected(tlsport):
try:
test_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
except Exception as e:
logging.warning('Error opening a socket, %s', e)
return False
try:
logging.debug('Trying to connect to 127.0.0.1: %s', tlsport)
test_socket.connect(('127.0.0.1', tlsport))
return True
except ConnectionRefusedError:
return False
finally:
test_socket.close()
def check_unsupported_options(options):
for unsupported_option in UNSUPPORTED_OPTIONS:
if unsupported_option in options:
warn_message = 'The "%s" option is not supported and has been ignored, as amazon-efs-utils relies on a built-in ' \
'trust store.' % unsupported_option
sys.stderr.write('WARN: %s\n' % warn_message)
logging.warning(warn_message)
del options[unsupported_option]
def check_options_validity(options):
if 'tls' in options:
if 'port' in options:
fatal_error('The "port" and "tls" options are mutually exclusive')
if 'tlsport' in options:
try:
int(options['tlsport'])
except ValueError:
fatal_error('tlsport option [%s] is not an integer' % options['tlsport'])
if 'ocsp' in options and 'noocsp' in options:
fatal_error('The "ocsp" and "noocsp" options are mutually exclusive')
if 'accesspoint' in options:
if 'tls' not in options:
fatal_error('The "tls" option is required when mounting via "accesspoint"')
if not AP_ID_RE.match(options['accesspoint']):
fatal_error('Access Point ID %s is malformed' % options['accesspoint'])
if 'iam' in options and 'tls' not in options:
fatal_error('The "tls" option is required when mounting via "iam"')
if 'awsprofile' in options and 'iam' not in options:
fatal_error('The "iam" option is required when mounting with named profile option, "awsprofile"')
if 'awscredsuri' in options and 'iam' not in options:
fatal_error('The "iam" option is required when mounting with "awscredsuri"')
if 'awscredsuri' in options and 'awsprofile' in options:
fatal_error('The "awscredsuri" and "awsprofile" options are mutually exclusive')
def bootstrap_cloudwatch_logging(config, fs_id=None):
if not check_if_cloudwatch_log_enabled(config):
return None
cloudwatchlog_client = get_botocore_client(config, 'logs')
if not cloudwatchlog_client:
return None
cloudwatchlog_config = get_cloudwatchlog_config(config, fs_id)
log_group_name = cloudwatchlog_config.get('log_group_name')
log_stream_name = cloudwatchlog_config.get('log_stream_name')
retention_days = cloudwatchlog_config.get('retention_days')
group_creation_completed = create_cloudwatch_log_group(cloudwatchlog_client, log_group_name)
if not group_creation_completed:
return None
put_retention_policy_completed = put_cloudwatch_log_retention_policy(cloudwatchlog_client, log_group_name, retention_days)
if not put_retention_policy_completed:
return None
stream_creation_completed = create_cloudwatch_log_stream(cloudwatchlog_client, log_group_name, log_stream_name)
if not stream_creation_completed:
return None
return {
'client': cloudwatchlog_client,
'log_group_name': log_group_name,
'log_stream_name': log_stream_name
}
def create_default_cloudwatchlog_agent_if_not_exist(config):
if not check_if_cloudwatch_log_enabled(config):
return None
global CLOUDWATCHLOG_AGENT
if not CLOUDWATCHLOG_AGENT:
CLOUDWATCHLOG_AGENT = bootstrap_cloudwatch_logging(config)
def get_botocore_client(config, service):
if not BOTOCORE_PRESENT:
logging.error('Failed to import botocore, please install botocore first.')
return None
session = botocore.session.get_session()
region = get_target_region(config)
iam_role_name = get_iam_role_name()
if iam_role_name:
credentials, _ = get_aws_security_credentials_from_instance_metadata(iam_role_name)
if credentials:
return session.create_client(service, aws_access_key_id=credentials['AccessKeyId'],
aws_secret_access_key=credentials['SecretAccessKey'],
aws_session_token=credentials['Token'], region_name=region)
return session.create_client(service, region_name=region)
def get_cloudwatchlog_config(config, fs_id=None):
log_group_name = DEFAULT_CLOUDWATCH_LOG_GROUP
if config.has_option(CLOUDWATCH_LOG_SECTION, 'log_group_name'):
log_group_name = config.get(CLOUDWATCH_LOG_SECTION, 'log_group_name')
retention_days = DEFAULT_RETENTION_DAYS
if config.has_option(CLOUDWATCH_LOG_SECTION, 'retention_in_days'):
retention_days = config.get(CLOUDWATCH_LOG_SECTION, 'retention_in_days')
log_stream_name = get_cloudwatch_log_stream_name(fs_id)
return {
'log_group_name': log_group_name,
'retention_days': int(retention_days),
'log_stream_name': log_stream_name
}
def get_cloudwatch_log_stream_name(fs_id=None):
instance_id = get_instance_identity_info_from_instance_metadata('instanceId')
if instance_id and fs_id:
log_stream_name = '%s - %s - mount.log' % (fs_id, instance_id)
elif instance_id:
log_stream_name = '%s - mount.log' % (instance_id)
elif fs_id:
log_stream_name = '%s - mount.log' % (fs_id)
else:
log_stream_name = 'default - mount.log'
return log_stream_name
def check_if_platform_is_mac():
return sys.platform in MAC_OS_PLATFORM_LIST
def check_if_mac_version_is_supported():
return any(release in platform.release() for release in MAC_OS_SUPPORTED_VERSION_LIST)
def check_if_cloudwatch_log_enabled(config):
if config.has_option(CLOUDWATCH_LOG_SECTION, 'enabled'):
return config.getboolean(CLOUDWATCH_LOG_SECTION, 'enabled')
return False
def cloudwatch_create_log_group_helper(cloudwatchlog_client, log_group_name):
cloudwatchlog_client.create_log_group(
logGroupName=log_group_name
)
logging.info('Created cloudwatch log group %s' % log_group_name)
def create_cloudwatch_log_group(cloudwatchlog_client, log_group_name):
try:
cloudwatch_create_log_group_helper(cloudwatchlog_client, log_group_name)
except ClientError as e:
exception = e.response['Error']['Code']
if exception == 'ResourceAlreadyExistsException':
            logging.debug('Log group %s already exists, %s' % (log_group_name, e.response))
return True
elif exception == 'LimitExceededException':
logging.error('Reached the maximum number of log groups that can be created, %s' % e.response)
return False
elif exception == 'OperationAbortedException':
logging.debug('Multiple requests to update the same log group %s were in conflict, %s' % (log_group_name, e.response))
return False
elif exception == 'InvalidParameterException':
logging.error('Log group name %s is specified incorrectly, %s' % (log_group_name, e.response))
return False
else:
handle_general_botocore_exceptions(e)
return False
except NoCredentialsError as e:
logging.warning('Credentials are not properly configured, %s' % e)
return False
except EndpointConnectionError as e:
logging.warning('Could not connect to the endpoint, %s' % e)
return False
except Exception as e:
logging.warning('Unknown error, %s.' % e)
return False
return True
def cloudwatch_put_retention_policy_helper(cloudwatchlog_client, log_group_name, retention_days):
cloudwatchlog_client.put_retention_policy(
logGroupName=log_group_name,
retentionInDays=retention_days
)
logging.debug('Set cloudwatch log group retention days to %s' % retention_days)
def put_cloudwatch_log_retention_policy(cloudwatchlog_client, log_group_name, retention_days):
try:
cloudwatch_put_retention_policy_helper(cloudwatchlog_client, log_group_name, retention_days)
except ClientError as e:
exception = e.response['Error']['Code']
if exception == 'ResourceNotFoundException':
logging.error('Log group %s does not exist, %s' % (log_group_name, e.response))
return False
elif exception == 'OperationAbortedException':
logging.debug('Multiple requests to update the same log group %s were in conflict, %s' % (log_group_name, e.response))
return False
elif exception == 'InvalidParameterException':
logging.error('Either parameter log group name %s or retention in days %s is specified incorrectly, %s'
% (log_group_name, retention_days, e.response))
return False
else:
handle_general_botocore_exceptions(e)
return False
except NoCredentialsError as e:
logging.warning('Credentials are not properly configured, %s' % e)
return False
except EndpointConnectionError as e:
logging.warning('Could not connect to the endpoint, %s' % e)
return False
except Exception as e:
logging.warning('Unknown error, %s.' % e)
return False
return True
def cloudwatch_create_log_stream_helper(cloudwatchlog_client, log_group_name, log_stream_name):
cloudwatchlog_client.create_log_stream(
logGroupName=log_group_name,
logStreamName=log_stream_name
)
logging.info('Created cloudwatch log stream %s in log group %s' % (log_stream_name, log_group_name))
def create_cloudwatch_log_stream(cloudwatchlog_client, log_group_name, log_stream_name):
try:
cloudwatch_create_log_stream_helper(cloudwatchlog_client, log_group_name, log_stream_name)
except ClientError as e:
exception = e.response['Error']['Code']
if exception == 'ResourceAlreadyExistsException':
            logging.debug('Log stream %s already exists in log group %s, %s' % (log_stream_name, log_group_name, e.response))
return True
elif exception == 'InvalidParameterException':
logging.error('Either parameter log group name %s or log stream name %s is specified incorrectly, %s'
% (log_group_name, log_stream_name, e.response))
return False
elif exception == 'ResourceNotFoundException':
logging.error('Log group %s does not exist, %s' % (log_group_name, e.response))
return False
else:
handle_general_botocore_exceptions(e)
return False
except NoCredentialsError as e:
logging.warning('Credentials are not properly configured, %s' % e)
return False
except EndpointConnectionError as e:
logging.warning('Could not connect to the endpoint, %s' % e)
return False
except Exception as e:
logging.warning('Unknown error, %s.' % e)
return False
return True
def cloudwatch_put_log_events_helper(cloudwatchlog_agent, message, token=None):
kwargs = {
'logGroupName': cloudwatchlog_agent.get('log_group_name'),
'logStreamName': cloudwatchlog_agent.get('log_stream_name'),
'logEvents': [
{
'timestamp': int(round(time.time() * 1000)),
'message': message
}
]
}
if token:
kwargs['sequenceToken'] = token
cloudwatchlog_agent.get('client').put_log_events(**kwargs)
def publish_cloudwatch_log(cloudwatchlog_agent, message):
if not cloudwatchlog_agent or not cloudwatchlog_agent.get('client'):
return False
token = get_log_stream_next_token(cloudwatchlog_agent)
try:
cloudwatch_put_log_events_helper(cloudwatchlog_agent, message, token)
except ClientError as e:
exception = e.response['Error']['Code']
if exception == 'InvalidSequenceTokenException':
logging.debug('The sequence token is not valid, %s' % e.response)
return False
elif exception == 'InvalidParameterException':
logging.debug('One of the parameter to put log events is not valid, %s' % e.response)
return False
elif exception == 'DataAlreadyAcceptedException':
logging.debug('The event %s was already logged, %s' % (message, e.response))
return False
elif exception == 'UnrecognizedClientException':
            logging.debug('The most likely cause is an invalid AWS access key ID or secret access key, %s' % e.response)
return False
elif exception == 'ResourceNotFoundException':
logging.error('Either log group %s or log stream %s does not exist, %s'
% (cloudwatchlog_agent.get('log_group_name'), cloudwatchlog_agent.get('log_stream_name'), e.response))
return False
else:
logging.debug('Unexpected error: %s' % e)
return False
except NoCredentialsError as e:
logging.warning('Credentials are not properly configured, %s' % e)
return False
except EndpointConnectionError as e:
logging.warning('Could not connect to the endpoint, %s' % e)
return False
except Exception as e:
logging.warning('Unknown error, %s.' % e)
return False
return True
def cloudwatch_describe_log_streams_helper(cloudwatchlog_agent):
return cloudwatchlog_agent.get('client').describe_log_streams(
logGroupName=cloudwatchlog_agent.get('log_group_name'),
logStreamNamePrefix=cloudwatchlog_agent.get('log_stream_name')
)
def get_log_stream_next_token(cloudwatchlog_agent):
try:
response = cloudwatch_describe_log_streams_helper(cloudwatchlog_agent)
except ClientError as e:
exception = e.response['Error']['Code']
if exception == 'InvalidParameterException':
logging.debug('Either parameter log group name %s or log stream name %s is specified incorrectly, %s'
% (cloudwatchlog_agent.get('log_group_name'), cloudwatchlog_agent.get('log_stream_name'), e.response))
elif exception == 'ResourceNotFoundException':
logging.debug('Either log group %s or log stream %s does not exist, %s'
% (cloudwatchlog_agent.get('log_group_name'), cloudwatchlog_agent.get('log_stream_name'), e.response))
else:
handle_general_botocore_exceptions(e)
return None
except NoCredentialsError as e:
logging.warning('Credentials are not properly configured, %s' % e)
return None
except EndpointConnectionError as e:
logging.warning('Could not connect to the endpoint, %s' % e)
return None
except Exception as e:
logging.warning('Unknown error, %s' % e)
return None
try:
log_stream = response['logStreams'][0]
return log_stream.get('uploadSequenceToken')
except (IndexError, TypeError, KeyError):
pass
return None
def handle_general_botocore_exceptions(error):
exception = error.response['Error']['Code']
if exception == 'ServiceUnavailableException':
logging.debug('The service cannot complete the request, %s' % error.response)
elif exception == 'AccessDeniedException':
logging.debug('User is not authorized to perform the action, %s' % error.response)
else:
logging.debug('Unexpected error: %s' % error)
def main():
parse_arguments_early_exit()
assert_root()
config = read_config()
bootstrap_logging(config)
if check_if_platform_is_mac() and not check_if_mac_version_is_supported():
fatal_error("We do not support EFS on MacOS " + platform.release())
fs_id, path, mountpoint, options = parse_arguments(config)
logging.info('version=%s options=%s', VERSION, options)
global CLOUDWATCHLOG_AGENT
CLOUDWATCHLOG_AGENT = bootstrap_cloudwatch_logging(config, fs_id)
check_unsupported_options(options)
check_options_validity(options)
init_system = get_init_system()
check_network_status(fs_id, init_system)
dns_name = get_dns_name(config, fs_id, options)
if 'tls' in options:
mount_tls(config, init_system, dns_name, path, fs_id, mountpoint, options)
else:
mount_nfs(dns_name, path, mountpoint, options)
if '__main__' == __name__:
main()
|
_driver.py
|
#! /usr/bin/env python
# coding: utf-8
import serial
import threading
from Queue import Queue
import struct
class Driver:
def __init__(self, port):
self.port = port
self.ser = None
self.buzzer_state = False
self.led_state = False
        # battery callback
        self.battery_callback = None
        # velocity callback
        self.vel_callback = None
        # IMU callback
        self.imu_callback = None
        # queue of data to be sent
        self.__snd_queue = Queue()
        # queue of received data
self.__rcv_queue = Queue()
def connect(self):
"""
连接下位机
:return:
"""
# 重试机制
count = 0
while count < 10:
count += 1
try:
self.ser = serial.Serial(port=self.port, baudrate=115200)
# 如果出错了,后面的代码就不执行了
# 能到达这个位置说明,链接成功
# 开启发送队列的获取
threading.Thread(target=self.__do_snd_work).start()
# 开启读串口数据的线程
threading.Thread(target=self.__do_rcv_work).start()
break
except Exception as e:
print(e)
def disconnect(self):
"""
和下位机断开连接
:return:
"""
if self.ser is None: return
# 向队列里加空数据,跳出循环
self.__snd_queue.put(None)
self.__rcv_queue.put(None)
if not isinstance(self.ser, serial.Serial): return False
self.ser.flushInput()
self.ser.flushOutput()
self.ser.cancel_read()
self.ser.close()
def is_open(self):
if not isinstance(self.ser, serial.Serial): return False
if self.ser is None: return False
return self.ser.isOpen()
def __do_snd_work(self):
while self.is_open():
            # blocking get
            data = self.__snd_queue.get()
            if data is None: break
            # send the frame to the controller board
            self.ser.write(data)
    def __do_rcv_work(self):
        threading.Thread(target=self.__do_parse_work).start()
        while self.is_open():
            try:
                # blocking read of a single byte
                read = self.ser.read(1)
                self.__rcv_queue.put(read)
                # parsing the stream (how many bytes make a frame and how to interpret them)
                # is fairly involved, so it is done separately in __do_parse_work
            except Exception as e:
                print(e)
def __do_parse_work(self):
        # buffer accumulating bytes until a full frame is available
        rcv_buff = []
        while self.is_open():
            buff = self.__rcv_queue.get()
            if buff is None: break
            buff = bytearray(buff)[0]
            rcv_buff.append(buff)
            # expected frame layout, e.g.:
            # 0xFE 0xCE 0x12 0x05 0x00 0x00 0x00 0x00 0xff
            # 0xFE 0xCE 0xFE 0xCE 0x12 0x05 0x00 0x00 0x00 0x00 0xff
            while len(rcv_buff) > 0:
                # check whether the first byte is the first frame-header byte
                if rcv_buff[0] != 0xFE:
                    # not a frame header, discard it
                    del rcv_buff[0]
                    continue
                # first header byte confirmed
                if len(rcv_buff) < 2:
                    # wait for more data from the queue
                    break
                # check whether the second byte is the second frame-header byte
                if rcv_buff[1] != 0xCE:
                    # header mismatch, drop both bytes and resynchronize
                    del rcv_buff[0]
                    del rcv_buff[0]
                    continue
                # need the type and length bytes before going further
                if len(rcv_buff) < 4:
                    # wait for more data from the queue
                    break
                r_type = rcv_buff[2]
                r_len = rcv_buff[3]
                # check whether the complete frame has been buffered yet
                if len(rcv_buff) < r_len + 4:
                    # wait for more data from the queue
                    break
                # verify the checksum: compute it and compare with the byte carried in the frame
                checksum = 0
                for i in range(2, r_len + 4 - 1):
                    checksum += rcv_buff[i]
                checksum = bytearray(struct.pack('h', checksum))[0]
                # checksum byte received in the frame
                check = rcv_buff[r_len + 4 - 1]
                if checksum != check:
                    # corrupted frame, drop bytes and resynchronize
                    del rcv_buff[0]
                    del rcv_buff[1]
                    continue
                # a complete, valid frame has been found
                cmd = rcv_buff[0: r_len + 4]
                # dispatch by frame type
                self.__do_cmd_work(cmd)
                # remove the consumed frame from the buffer
                for i in range(r_len + 4):
                    del rcv_buff[0]
def __do_cmd_work(self, cmd):
        # frame layout, e.g. 0xFE 0xCE 0x12 0x05 0x00 0x00 0x00 0x00 0xff
        # dispatch on the frame type byte
if cmd[2] == 0x11:
self.__do_imu_parse(cmd)
elif cmd[2] == 0x12:
self.__do_vel_parse(cmd)
elif cmd[2] == 0x13:
self.__do_battery_parse(cmd)
elif cmd[2] == 0x01:
# led
self.__do_led_parse(cmd)
elif cmd[2] == 0x02:
# buzzer
self.__do_buzzer_parse(cmd)
else:
            # unknown frame type, ignored
pass
def __do_led_parse(self, cmd):
self.led_state = cmd[5] == 0x01
def __do_buzzer_parse(self, cmd):
# 0xFE 0xCE 0x02 0x03 0x01 0x01 0x07
self.buzzer_state = cmd[5] == 0x01
def __do_vel_parse(self, cmd):
# 0xFE 0xCE 0x12 0x05 0x00 0x00 0x00 0x00 0xff
linear = cmd[4:6]
angular = cmd[6:8]
linear = struct.unpack('h', bytearray(linear))[0] / 1000.0
angular = struct.unpack('h', bytearray(angular))[0] / 1000.0
# print "vel: {} {}".format(linear, angular)
if self.vel_callback is not None:
self.vel_callback(linear, angular)
def __do_battery_parse(self, cmd):
# 0xFE 0xCE 0x13 0x03 0x00 0x00 0xff
battery = cmd[4:6]
battery = struct.unpack('h', bytearray(battery))[0] / 100.0
# print "voltage: {}".format(battery)
if self.battery_callback is not None:
self.battery_callback(battery)
def __do_imu_parse(self, cmd):
# 0xFE 0xCE 0x11 0x13 x,y,z x,y,z x,y,z 0xff
a_s = 164.0
r_s = 16.4
# acc
ax = struct.unpack('h', bytearray(cmd[4:6]))[0] / a_s
ay = struct.unpack('h', bytearray(cmd[6:8]))[0] / a_s
az = struct.unpack('h', bytearray(cmd[8:10]))[0] / a_s
# rot
rx = struct.unpack('h', bytearray(cmd[10:12]))[0] / r_s
ry = struct.unpack('h', bytearray(cmd[12:14]))[0] / r_s
rz = struct.unpack('h', bytearray(cmd[14:16]))[0] / r_s
# m
mx = struct.unpack('h', bytearray(cmd[16:18]))[0]
my = struct.unpack('h', bytearray(cmd[18:20]))[0]
mz = struct.unpack('h', bytearray(cmd[20:22]))[0]
# print "imu: {} {} {} {} {} {} {} {} {}".format(ax, ay, az, rx, ry, rz, mx, my, mz)
if self.imu_callback is not None:
self.imu_callback(ax, ay, az, rx, ry, rz, mx, my, mz)
def buzzer_open(self):
# if not isinstance(self.ser, serial.Serial): return
# data = bytearray([0xAB, 0xBC, 0x02, 0x03, 0x01, 0x01, 0x07])
# self.ser.write(data)
data = bytearray([0xAB, 0xBC, 0x02, 0x03, 0x01, 0x01, 0x07])
self.__snd_queue.put(data)
def buzzer_close(self):
# if not isinstance(self.ser, serial.Serial): return
# data = bytearray([0xAB, 0xBC, 0x02, 0x03, 0x00, 0x01, 0x06])
# self.ser.write(data)
data = bytearray([0xAB, 0xBC, 0x02, 0x03, 0x00, 0x01, 0x06])
self.__snd_queue.put(data)
def led_open(self):
data = bytearray([0xAB, 0xBC, 0x01, 0x03, 0x01, 0x01, 0x06])
self.__snd_queue.put(data)
def led_close(self):
data = bytearray([0xAB, 0xBC, 0x01, 0x03, 0x00, 0x01, 0x05])
self.__snd_queue.put(data)
def vel_ctrl(self, linear, angular):
        # frame header (2 bytes)
        # frame type (1 byte)
        # payload length (1 byte)
        data = bytearray([0xAB, 0xBC, 0x22, 0x05])
        # payload (2 bytes, linear velocity, short)
        linear = int(linear * 1000)
        linear = bytearray(struct.pack('h', linear))
        data.extend(linear)
        # payload (2 bytes, angular velocity, short)
        angular = int(angular * 1000)
        angular = bytearray(struct.pack('h', angular))
        data.extend(angular)
        # per the protocol doc: 0xab 0xbc 0x22 0x5 0xf4 0x1 0x0 0x0 0x1c
        # observed output:      0xab 0xbc 0x22 0x5 0xf4 0x1 0x0 0x0
        #                       0xab 0xbc 0x22 0x5 0xf4 0x1 0x0 0x0 0x1c
        # checksum bytes: 0x1c 0x1 (only the low byte is appended)
        checksum = 0
        for i in range(2, len(data)):
            checksum += data[i]
        check = bytearray(struct.pack('h', checksum))[0]
        data.append(check)
self.__snd_queue.put(data)
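# Illustrative usage sketch (not part of the original _driver.py): wiring up the driver.
# The serial port path and the callback body are hypothetical examples.
def _example_driver_usage():
    def on_battery(voltage):
        print('battery: %s V' % voltage)
    driver = Driver('/dev/ttyUSB0')      # hypothetical port
    driver.battery_callback = on_battery
    driver.connect()                     # starts the send/receive worker threads
    driver.led_open()                    # queue an LED-on frame
    driver.vel_ctrl(0.5, 0.0)            # 0.5 m/s forward, no rotation
    driver.disconnect()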
|
setup_hosts.py
|
import argparse
from common_funcs import run_cmd
from common_funcs import run_cmd_single
from common_funcs import sed
from common_funcs import start_cmd_disown
from common_funcs import start_cmd_disown_nobg
from common_funcs import upload_file
from common_funcs import run_script
from common_funcs import fetch_file_single
from common_funcs import fetch_file_single_compressed
from common_funcs import fetch_file_single_compressed_bg
from threading import Thread, Lock
from experiments import *
from boto import ec2
import os
import itertools
from datetime import datetime
from os import system
from time import sleep
THRIFT_PORT=8080
KAIJU_PORT=9081
KAIJU_HOSTS_INTERNAL=""
KAIJU_HOSTS_EXTERNAL=""
SERVERS_PER_HOST = 1
# Upgraded AMIs
# non-HVM:
# AMIs = {'us-west-2': 'ami-2c8a161c'}
# HVM:
AMIs = {'us-west-2': 'ami-784cd048',
'us-east-1': 'ami-896b27e0'}
tag_blacklist = ["ping"]
netCmd = "sudo sysctl net.ipv4.tcp_syncookies=1 > /dev/null; sudo sysctl net.core.netdev_max_backlog=250000 > /dev/null; sudo ifconfig eth0 txqueuelen 10000000; sudo sysctl net.core.somaxconn=100000 > /dev/null ; sudo sysctl net.core.netdev_max_backlog=10000000 > /dev/null; sudo sysctl net.ipv4.tcp_max_syn_backlog=1000000 > /dev/null; sudo sysctl -w net.ipv4.ip_local_port_range='1024 64000' > /dev/null; sudo sysctl -w net.ipv4.tcp_fin_timeout=2 > /dev/null; "
class Region:
def __init__(self, name):
self.name = name
self.clusters = []
self._ownsGraphite = False
self.graphiteHost = None
def addCluster(self, cluster):
self.clusters.append(cluster)
def getTotalNumHosts(self):
return sum([cluster.getNumHosts() for cluster in self.clusters])
def getTotalNumHostsWithoutGraphite(self):
return sum([cluster.getNumHosts() for cluster in self.clusters])
class Cluster:
def __init__(self, regionName, clusterID, numServers, numClients):
self.regionName = regionName
self.clusterID = clusterID
self.numServers = numServers
self.servers = []
self.numClients = numClients
self.clients = []
def allocateHosts(self, hosts):
for host in hosts:
if len(self.servers) < self.numServers:
self.servers.append(host)
elif len(self.clients) < self.numClients:
self.clients.append(host)
assert len(self.getAllHosts()) == self.getNumHosts(), "Don't have exactly as many hosts as I expect!" \
" (expect: %d, have: %d)" \
% (self.getNumHosts(), len(self.getAllHosts()))
def getAllHosts(self):
return self.servers + self.clients
def getNumHosts(self):
return self.numServers + self.numClients
class Host:
def __init__(self, ip, regionName, instanceid, status):
self.ip = ip
self.regionName = regionName
self.instanceid = instanceid
self.status = status
# UTILITIES
def run_cmd_in_kaiju(hosts, cmd, user='ubuntu'):
run_cmd(hosts, "cd /home/ubuntu/kaiju/; %s" % cmd, user)
def run_cmd_in_ycsb(hosts, cmd, user='ubuntu'):
run_cmd(hosts, "cd /home/ubuntu/kaiju/contrib/YCSB/; %s" % cmd, user)
# Passing tag=None will return all hosts without a tag.
def get_instances(regionName, tag):
system("rm -f instances.txt")
hosts = []
allowed_hosts = []
ignored_hosts = []
conn = ec2.connect_to_region(regionName)
filters={'instance-state-name':'running'}
if tag is not None:
filters['tag:'+tag] = ''
reservations = conn.get_all_instances(filters=filters)
instances = []
for reservation in reservations:
instances += reservation.instances
for i in instances:
if tag is None and len(i.tags) != 0:
continue
hosts.append(Host(str(i.public_dns_name), regionName, str(i.id), str(i.state)))
return hosts
'''
system("ec2-describe-instances --region %s >> instances.txt" % regionName)
for line in open("instances.txt"):
line = line.split()
if line[0] == "INSTANCE":
ip = line[3]
if ip == "terminated":
continue
status = line[5]
if status.find("shutting") != -1:
continue
region = line[10]
instanceid = line[1]
status = line[5]
hosts.append(Host(ip, region, instanceid, status))
elif line[0] == "TAG":
if line[3] == tag:
allowed_hosts.append(line[2])
else:
ignored_hosts.append(line[2])
if tag != None:
return [host for host in hosts if host.instanceid in allowed_hosts]
else:
return [host for host in hosts if host.instanceid not in ignored_hosts]
'''
def get_spot_request_ids(regionName):
system("rm -f instances.txt")
global AMIs
ret = []
conn = ec2.connect_to_region(regionName)
return [str(i.id) for i in conn.get_all_spot_instance_requests()]
'''
system("ec2-describe-spot-instance-requests --region %s >> instances.txt" % regionName)
for line in open("instances.txt"):
line = line.split()
if line[0] == "SPOTINSTANCEREQUEST":
id = line[1]
ret.append(id)
return ret
'''
def get_num_running_instances(regionName, tag):
instances = get_instances(regionName, tag)
return len([host for host in instances if host.status == "running"])
def get_num_nonterminated_instances(regionName, tag):
instances = get_instances(regionName, tag)
return len([host for host in instances if host.status != "terminated"])
def make_instancefile(name, hosts):
f = open("hosts/" + name, 'w')
for host in hosts:
f.write("%s\n" % (host.ip))
    f.close()
# MAIN STUFF
def check_for_instances(regions, tag):
numRunningAnywhere = 0
numUntagged = 0
for region in regions:
numRunning = get_num_nonterminated_instances(region, tag)
numRunningAnywhere += numRunning
numUntagged += get_num_nonterminated_instances(region, None)
if numRunningAnywhere > 0:
pprint("NOTICE: You appear to have %d instances up already." % numRunningAnywhere)
f = raw_input("Continue without terminating them? ")
if f != "Y" and f != "y":
exit(-1)
if numUntagged > 0:
pprint("NOTICE: You appear to have %d UNTAGGED instances up already." % numUntagged)
f = raw_input("Continue without terminating/claiming them? ")
if f != "Y" and f != "y":
exit(-1)
def provision_clusters(regions, use_spot):
global AMIs
for region in regions:
assert region.name in AMIs, "No AMI for region '%s'" % region.name
f = raw_input("spinning up %d %s instances in %s; okay? " %
(region.getTotalNumHosts(),
"spot" if use_spot else "normal",
region.name))
if f != "Y" and f != "y":
exit(-1)
numHosts = region.getTotalNumHostsWithoutGraphite()
if use_spot:
provision_spot(region.name, numHosts)
else:
provision_instance(region.name, numHosts)
def provision_spot(regionName, num):
global AMIs
conn = ec2.connect_to_region(regionName)
try:
conn.create_placement_group(args.placement_group)
    except Exception as e:
        print "Could not create placement group %s (it may already exist): %s" % (args.placement_group, e)
reservations = conn.request_spot_instances(1.5,
AMIs[regionName],
count=num,
instance_type="cr1.8xlarge",
key_name="kaiju",
placement_group=args.placement_group)
'''
system("ec2-request-spot-instances %s --region %s -t m1.xlarge -price 0.50 " \
"-b '/dev/sdb=ephemeral0' -b '/dev/sdc=ephemeral1' -k kaiju -g kaiju -n %d" % (AMIs[regionName], regionName, num));
'''
def provision_instance(regionName, num):
global AMIs
conn = ec2.connect_to_region(regionName)
reservations = conn.run_instances(AMIs[regionName],
count=num,
instance_type="cc2.8xlarge",
key_name="kaiju",
placement_group=args.placement_group)
#system("ec2-run-instances %s --region %s -t m1.xlarge " \
# "-b '/dev/sdb=ephemeral0' -b '/dev/sdc=ephemeral1' -k kaiju -g kaiju -n %d > /tmp/instances" % (AMIs[regionName], regionName, num));
#system("ec2-run-instances %s -n %d -g 'cassandra' --t m1.large -k " \
# "'lenovo-pub' -b '/dev/sdb=ephemeral0' -b '/dev/sdc=ephemeral1'" %
# (AMIs[region], n))
def wait_all_hosts_up(regions, tag):
for region in regions:
pprint("Waiting for instances in %s to start..." % region.name)
while True:
numInstancesInRegion = get_num_running_instances(region.name, None)
numInstancesExpected = region.getTotalNumHosts()
if numInstancesInRegion >= numInstancesExpected:
break
else:
pprint("Got %d of %d hosts; sleeping..." % (numInstancesInRegion, numInstancesExpected))
sleep(5)
pprint("All instances in %s alive!" % region.name)
# Since ssh takes some time to come up
pprint("Waiting for instances to warm up... ")
sleep(10)
pprint("Awake!")
def claim_instances(regions, tag):
for region in regions:
instances = get_instances(region.name, None)
instanceString = ' '.join([host.instanceid for host in instances])
pprint("Claiming %s..." % instanceString)
conn = ec2.connect_to_region(region.name)
instances = [i.instanceid for i in instances]
reservations = conn.create_tags(instances, {tag:""})
#system("ec2-create-tags %s --tag %s --region %s" % (instanceString, tag, region.name))
pprint("Claimed!")
# Assigns hosts to clusters (and specifically as servers, clients)
# Also logs the assignments in the hosts/ files.
def assign_hosts(regions, tag):
allHosts = []
allServers = []
allClients = []
hostsPerRegion = {}
clusterId = 0
system("mkdir -p hosts")
for region in regions:
hostsToAssign = get_instances(region.name, tag)
pprint("Assigning %d hosts to %s... " % (len(hostsToAssign), region.name))
allHosts += hostsToAssign
hostsPerRegion[region.name] = hostsToAssign
for cluster in region.clusters:
cluster.allocateHosts(hostsToAssign[:cluster.getNumHosts()])
hostsToAssign = hostsToAssign[cluster.getNumHosts():]
# Note all the servers in our cluster.
make_instancefile("cluster-%d-all.txt" % cluster.clusterID, cluster.getAllHosts())
make_instancefile("cluster-%d-servers.txt" % cluster.clusterID, cluster.servers)
make_instancefile("cluster-%d-clients.txt" % cluster.clusterID, cluster.clients)
allServers += cluster.servers
allClients += cluster.clients
global KAIJU_HOSTS_INTERNAL
global KAIJU_HOSTS_EXTERNAL
            KAIJU_HOSTS_INTERNAL = None
            KAIJU_HOSTS_EXTERNAL = None
for server in cluster.servers:
for s_localid in range(0, SERVERS_PER_HOST):
if KAIJU_HOSTS_INTERNAL:
KAIJU_HOSTS_INTERNAL += ","
KAIJU_HOSTS_EXTERNAL += ","
else:
KAIJU_HOSTS_INTERNAL = ""
KAIJU_HOSTS_EXTERNAL = ""
KAIJU_HOSTS_INTERNAL+=(server.ip+":"+str(KAIJU_PORT+s_localid))
KAIJU_HOSTS_EXTERNAL+=(server.ip+":"+str(THRIFT_PORT+s_localid))
# Finally write the instance files for the regions and everything.
make_instancefile("all-hosts.txt", allHosts)
make_instancefile("all-servers.txt", allServers)
make_instancefile("all-clients.txt", allClients)
for region, hosts in hostsPerRegion.items():
make_instancefile("region-%s.txt" % region, hosts)
pprint("Assigned all %d hosts!" % len(allHosts))
# Runs general setup over all hosts.
def setup_hosts(clusters):
pprint("Appending authorized key...")
run_cmd("all-hosts", "sudo chown ubuntu /etc/security/limits.conf; sudo chmod u+w /etc/security/limits.conf; sudo echo '* soft nofile 1000000\n* hard nofile 1000000' >> /etc/security/limits.conf; sudo chown ubuntu /etc/pam.d/common-session; sudo echo 'session required pam_limits.so' >> /etc/pam.d/common-session")
run_cmd("all-hosts", "cat /home/ubuntu/.ssh/kaiju_rsa.pub >> /home/ubuntu/.ssh/authorized_keys", user="ubuntu")
pprint("Done")
run_cmd("all-hosts", " wget --output-document sigar.tar.gz 'http://downloads.sourceforge.net/project/sigar/sigar/1.6/hyperic-sigar-1.6.4.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fsigar%2Ffiles%2Fsigar%2F1.6%2F&ts=1375479576&use_mirror=iweb'; tar -xvf sigar*; sudo rm /usr/local/lib/libsigar*; sudo cp ./hyperic-sigar-1.6.4/sigar-bin/lib/libsigar-amd64-linux.so /usr/local/lib/; rm -rf *sigar*")
run_cmd("all-hosts", "sudo echo 'include /usr/local/lib' >> /etc/ld.so.conf; sudo ldconfig")
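# Stashes any local git changes on every host and rebuilds servers and clients.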
def jumpstart_hosts(clusters):
pprint("Resetting git...")
run_cmd_in_kaiju('all-hosts', 'git stash', user="ubuntu")
pprint("Done")
rebuild_all(clusters)
def stop_kaiju_processes(clusters):
pprint("Terminating java processes...")
run_cmd("all-hosts", "killall -9 java; pkill -9 java")
sleep(10)
pprint('Termination command sent.')
def stop_kaiju_clients(clusters):
pprint("Terminating client java processes...")
run_cmd("all-clients", "killall -9 java;")
pprint('Termination command sent.')
def rebuild_all(clusters):
pprint('Rebuilding clients and servers...')
pprint("Checking out branch %s" % (args.branch))
run_cmd_in_kaiju('all-hosts', 'git fetch origin; git checkout -b %s origin/%s; git checkout %s; git reset --hard origin/%s' % (args.branch, args.branch, args.branch, args.branch))
pprint("Done")
pprint("Pulling updates...")
run_cmd_in_kaiju('all-hosts', 'git pull', user="ubuntu")
pprint("Done")
pprint("Building kaiju...")
run_cmd_in_kaiju('all-hosts', 'mvn package', user="ubuntu")
pprint("Done")
pprint("Building ycsb...")
run_cmd_in_ycsb('all-hosts', 'bash install-kaiju-jar.sh; mvn package', user="ubuntu")
pprint("Done")
pprint('Servers re-built!')
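# Thread-safe counter used to hand out unique client IDs.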
CLIENT_ID = 0
CLIENT_ID_LOCK = Lock()
def getNextClientID():
global CLIENT_ID, CLIENT_ID_LOCK
CLIENT_ID_LOCK.acquire()
CLIENT_ID += 1
myClientId = CLIENT_ID
CLIENT_ID_LOCK.release()
return myClientId
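# Launches SERVERS_PER_HOST kaiju JVMs on every server host. Each JVM gets an
# equal slice of a 240GB heap and consecutive KAIJU_PORT/THRIFT_PORT offsets;
# remaining tunables come from kwargs, with defaults where they are optional.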
def start_servers(clusters, **kwargs):
HEADER = "pkill -9 java; cd /home/ubuntu/kaiju/; rm *.log;"
HEADER += netCmd
baseCmd = "java -XX:+UseParallelGC -Xms%dG -Xmx%dG \
-Djava.library.path=/usr/local/lib \
-Dlog4j.configuration=file:log4j.properties \
-jar target/kaiju-1.0-SNAPSHOT.jar \
-bootstrap_time %d \
-kaiju_port %d \
-id %d \
-cluster %s \
-thrift_port %d \
-isolation_level %s \
-ra_algorithm %s \
-metrics_console_rate %d \
-bloom-filter-ne %d \
-max_object_size %d \
-drop_commit_pct %f \
-check_commit_delay_ms %d\
-outbound_internal_conn %d \
-locktable_numlatches %d \
1>server-%d.log 2>&1 & "
sid = 0
for cluster in clusters:
for server in cluster.servers:
servercmd = HEADER
for s_localid in range(0, SERVERS_PER_HOST):
servercmd += (
baseCmd % (
240/SERVERS_PER_HOST,
240/SERVERS_PER_HOST,
kwargs.get("bootstrap_time_ms", 1000),
KAIJU_PORT+s_localid,
sid,
KAIJU_HOSTS_INTERNAL,
THRIFT_PORT+s_localid,
kwargs.get("isolation_level"),
kwargs.get("ra_algorithm"),
kwargs.get("metrics_printrate", -1),
kwargs.get("bloom_filter_numbits", 256),
                        max(16384, (100+kwargs.get("valuesize", 1))*kwargs.get("txnlen", 8)+1000),
kwargs.get("drop_commit_pct", 0),
kwargs.get("check_commit_delay", -1),
kwargs.get("outbound_internal_conn", 1),
kwargs.get("locktable_numlatches", 1024),
s_localid))
sid += 1
pprint("Starting kv-servers on [%s]" % server.ip)
start_cmd_disown_nobg(server.ip, servercmd)
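# Loads the YCSB records from a single client, then runs the workload from all
# clients, either detached in the background (bgrun=True) or synchronously.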
def start_ycsb_clients(clusters, **kwargs):
def fmt_ycsb_string(runType, cluster):
return (('cd /home/ubuntu/kaiju/contrib/YCSB;' +
netCmd+
'rm *.log;' \
'bin/ycsb %s kaiju -p hosts=%s -threads %d -p txnlen=%d -p readproportion=%s -p updateproportion=%s -p fieldlength=%d -p histogram.buckets=%d -p fieldcount=1 -p operationcount=100000000 -p recordcount=%d -p isolation_level=%s -p read_atomic_algorithm=%s -t -s ' \
' -p requestdistribution=%s -p maxexecutiontime=%d -P %s' \
' 1>%s_out.log 2>%s_err.log') % (runType,
KAIJU_HOSTS_EXTERNAL,
kwargs.get("threads", 10) if runType != 'load' else min(1000, kwargs.get("recordcount")/10),
kwargs.get("txnlen", 8),
kwargs.get("readprop", .5),
1-kwargs.get("readprop", .5),
kwargs.get("valuesize", 1),
kwargs.get("numbuckets", 10000),
kwargs.get("recordcount", 10000),
kwargs.get("isolation_level", "READ_COMMITTED"),
kwargs.get("ra_algorithm", "KEY_LIST"),
kwargs.get("keydistribution", "zipfian"),
kwargs.get("time", 60) if runType != 'load' else 10000,
kwargs.get("workload", "workloads/workloada"),
runType,
runType))
cluster = clusters[0]
pprint("Loading YCSB on single client: %s." % (cluster.clients[0].ip))
run_cmd_single(cluster.clients[0].ip, fmt_ycsb_string("load", cluster), time=kwargs.get("recordcount", 180))
pprint("Done")
sleep(10)
pprint("Running YCSB on all clients.")
if kwargs.get("bgrun", False):
for client in cluster.clients:
start_cmd_disown(client.ip, fmt_ycsb_string("run", cluster))
        sleep(kwargs.get("time", 60)+15)
else:
run_cmd("all-clients", fmt_ycsb_string("run", cluster), time=kwargs.get("time", 60)+30)
pprint("Done")
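# Copies YCSB logs from every client and kaiju logs from every server into
# <output_dir>/<runid>/, in parallel threads or (with bgrun) fire-and-forget.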
def fetch_logs(runid, clusters, **kwargs):
def fetchYCSB(rundir, client):
client_dir = rundir+"/"+"C"+client.ip
system("mkdir -p "+client_dir)
fetch_file_single_compressed(client.ip, "/home/ubuntu/kaiju/contrib/YCSB/*.log", client_dir)
def fetchYCSBbg(rundir, client):
client_dir = rundir+"/"+"C"+client.ip
system("mkdir -p "+client_dir)
sleep(.1)
fetch_file_single_compressed_bg(client.ip, "/home/ubuntu/kaiju/contrib/YCSB/*.log", client_dir)
def fetchkaiju(rundir, server, symbol):
server_dir = rundir+"/"+symbol+server.ip
system("mkdir -p "+server_dir)
fetch_file_single_compressed(server.ip, "/home/ubuntu/kaiju/*.log", server_dir)
def fetchkaijubg(rundir, server, symbol):
server_dir = rundir+"/"+symbol+server.ip
system("mkdir -p "+server_dir)
fetch_file_single_compressed_bg(server.ip, "/home/ubuntu/kaiju/*.log", server_dir)
outroot = args.output_dir+'/'+runid
system("mkdir -p "+args.output_dir)
bgfetch = kwargs.get("bgrun", False)
ths = []
pprint("Fetching YCSB logs from clients.")
for cluster in clusters:
for i,client in enumerate(cluster.clients):
if not bgfetch:
t = Thread(target=fetchYCSB, args=(outroot, client))
t.start()
ths.append(t)
else:
fetchYCSBbg(outroot,client)
for th in ths:
th.join()
pprint("Done clients")
ths = []
pprint("Fetching logs from servers.")
for cluster in clusters:
for i,server in enumerate(cluster.servers):
if not bgfetch:
t = Thread(target=fetchkaiju, args=(outroot, server, "S"))
t.start()
ths.append(t)
else:
fetchkaijubg(outroot, server, "S")
for th in ths:
th.join()
pprint("Done")
if bgfetch:
sleep(30)
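# Terminates all tagged and untagged instances, cancels outstanding spot
# requests, and deletes the placement group in every region listed in the
# --clusters argument.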
def terminate_clusters(tag, clusterConfig):
    # clusterConfig is the comma-delimited --clusters string, e.g. "us-east-1:2,us-west-2"
    for regionName in set(c.split(":")[0] for c in clusterConfig.split(",")):
allHosts = get_instances(regionName, tag) + get_instances(regionName, None)
instance_ids = [h.instanceid for h in allHosts]
spot_request_ids = get_spot_request_ids(regionName)
conn = ec2.connect_to_region(regionName)
if len(instance_ids) > 0:
pprint('Terminating instances (tagged & untagged) in %s...' % regionName)
conn.terminate_instances(instance_ids)
else:
pprint('No instances to terminate in %s, skipping...' % regionName)
if len(spot_request_ids) > 0:
pprint('Cancelling spot requests in %s...' % regionName)
conn.cancel_spot_instance_requests(spot_request_ids)
else:
pprint('No spot requests to cancel in %s, skipping...' % regionName)
conn.delete_placement_group(args.placement_group)
'''
allHosts = get_instances(regionName, tag) + get_instances(regionName, None)
instance_ids = ' '.join([h.instanceid for h in allHosts])
spot_request_ids = ' '.join(get_spot_request_ids(regionName))
if instance_ids.strip() != '':
pprint('Terminating instances (tagged & untagged) in %s...' % regionName)
system("ec2-terminate-instances --region %s %s" % (regionName, instance_ids))
else:
pprint('No instances to terminate in %s, skipping...' % regionName)
if spot_request_ids.strip() != '':
pprint('Cancelling spot requests in %s...' % regionName)
system("ec2-cancel-spot-instance-requests --region %s %s" % (regionName, spot_request_ids))
else:
pprint('No spot requests to cancel in %s, skipping...' % regionName)
'''
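# Terminates up to `num` tagged instances in each region, refusing to act in a
# region that has fewer than `num` instances.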
def terminate_num(tag, num):
for regionName in AMIs.keys():
allHosts = get_instances(regionName, tag)
instance_ids = [h.instanceid for h in allHosts]
if len(instance_ids) < num:
pprint("Only %d instances to cancel; %d requested; cowardly not killing" % (len(instance_ids), num))
return
instance_ids = instance_ids[:num]
conn = ec2.connect_to_region(regionName)
conn.terminate_instances(instance_ids)
        pprint("Terminated %d instances (%s)" % (num, ' '.join(instance_ids)))
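# Builds Region/Cluster objects from the --clusters string. For example
# (illustrative), "us-east-1:2,us-west-2" yields two clusters in us-east-1 and
# one in us-west-2, each with args.servers server hosts and args.clients client hosts.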
def parseArgs(args):
clusters = []
regions = []
clusterID = 1
clusterConfig = args.clusters.split(",")
for i in range(len(clusterConfig)):
cluster = clusterConfig[i]
if ":" in cluster:
regionName = cluster.split(":")[0]
numClustersInRegion = int(cluster.split(":")[1])
else:
regionName = cluster
numClustersInRegion = 1
newRegion = Region(regionName)
regions.append(newRegion)
for j in range(numClustersInRegion):
newCluster = Cluster(regionName, clusterID, args.servers, args.clients)
clusterID += 1
clusters.append(newCluster)
newRegion.addCluster(newCluster)
return regions, clusters, args.tag
def pprint(str):
global USE_COLOR
if USE_COLOR:
print '\033[94m%s\033[0m' % str
else:
print str
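# One end-to-end trial: restart the kaiju server processes, run the YCSB
# clients, then pull the logs for this runid.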
def run_ycsb_trial(tag, serverArgs="", **kwargs):
pprint("Running trial %s" % kwargs.get("runid", "no label"))
pprint("Restarting kaiju clusters %s" % tag)
#if kwargs.get("killservers", True):
stop_kaiju_processes(clusters)
start_servers(clusters, **kwargs)
sleep(kwargs.get("bootstrap_time_ms", 1000)/1000.*2+5)
#else:
#stop_kaiju_clients(clusters)
start_ycsb_clients(clusters, **kwargs)
runid = kwargs.get("runid", str(datetime.now()).replace(' ', '_'))
#print "KILLING JAVA"
#run_cmd("all-servers", "pkill --signal SIGQUIT java")
    fetch_logs(runid, clusters, **kwargs)
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Set up and run kaiju clusters on EC2')
parser.add_argument('--tag', dest='tag', required=True, help='Tag to use for your instances')
parser.add_argument('--fetchlogs', '-f', action='store_true',
help='Fetch logs and exit')
parser.add_argument('--launch', '-l', action='store_true',
help='Launch EC2 cluster')
parser.add_argument('--claim', action='store_true',
help='Claim non-tagged instances as our own')
parser.add_argument('--kill_num',
help='Kill specified number of instances',
default=-1,
type=int)
parser.add_argument('--setup', '-s', action='store_true',
help='Set up already running EC2 cluster')
parser.add_argument('--terminate', '-t', action='store_true',
help='Terminate the EC2 cluster')
parser.add_argument('--restart', '-r', action='store_true',
help='Restart kaiju cluster')
parser.add_argument('--rebuild', '-rb', action='store_true',
help='Rebuild kaiju cluster')
parser.add_argument('--fetch', action='store_true',
help='Fetch logs')
parser.add_argument('--rebuild_clients', '-rbc', action='store_true',
help='Rebuild kaiju clients')
parser.add_argument('--rebuild_servers', '-rbs', action='store_true',
help='Rebuild kaiju servers')
parser.add_argument('--num_servers', '-ns', dest='servers', nargs='?',
default=2, type=int,
help='Number of server machines per cluster, default=2')
parser.add_argument('--num_clients', '-nc', dest='clients', nargs='?',
default=2, type=int,
help='Number of client machines per cluster, default=2')
parser.add_argument('--output', dest='output_dir', nargs='?',
default="./output", type=str,
help='output directory for runs')
parser.add_argument('--clusters', '-c', dest='clusters', nargs='?',
default="us-east-1", type=str,
                        help='Comma-delimited list of clusters to start, as region[:count], default=us-east-1')
parser.add_argument('--no_spot', dest='no_spot', action='store_true',
help='Don\'t use spot instances, default off.')
parser.add_argument('--color', dest='color', action='store_true',
help='Print with pretty colors, default off.')
parser.add_argument('-D', dest='kaiju_args', action='append', default=[],
help='Parameters to pass along to the kaiju servers/clients.')
parser.add_argument('--placement_group', dest='placement_group', default="KAIJUCLUSTER")
parser.add_argument('--branch', dest='branch', default='master',
                        help='Git branch to check out and build on the hosts.')
parser.add_argument('--experiment', dest='experiment',
help='Named, pre-defined experiment.')
parser.add_argument('--ycsb_vary_constants_experiment', action='store_true', help='run experiment for varying constants')
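    # Example invocations (illustrative; the script name and tag are placeholders):
    #   python cluster_setup.py --tag mytag --launch --clusters us-east-1:1 -ns 2 -nc 2
    #   python cluster_setup.py --tag mytag --experiment <experiment-name>
    #   python cluster_setup.py --tag mytag --terminate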
args,unknown = parser.parse_known_args()
USE_COLOR = args.color
pprint("Reminder: Run this script from an ssh-agent!")
(regions, clusters, tag) = parseArgs(args)
kaijuArgString = ' '.join(['-D%s' % arg for arg in args.kaiju_args])
if args.fetchlogs:
pprint("Fetching logs")
assign_hosts(regions, tag)
runid = str(datetime.now()).replace(' ', '_')
fetch_logs(runid, clusters)
exit(-1)
if args.launch:
pprint("Launching kaiju clusters")
check_for_instances(AMIs.keys(), tag)
provision_clusters(regions, not args.no_spot)
wait_all_hosts_up(regions, tag)
if args.launch or args.claim:
pprint("Claiming untagged instances...")
claim_instances(regions, tag)
if args.setup or args.launch:
pprint("Setting up kaiju clusters")
assign_hosts(regions, tag)
setup_hosts(clusters)
jumpstart_hosts(clusters)
if args.rebuild:
pprint("Rebuilding kaiju clusters")
assign_hosts(regions, tag)
stop_kaiju_processes(clusters)
rebuild_all(clusters)
if args.rebuild_clients:
pprint("Rebuilding kaiju clients")
stop_kaiju_processes(clusters)
rebuild_clients(clusters)
if args.rebuild_servers:
pprint("Rebuilding kaiju servers")
stop_kaiju_processes(clusters)
rebuild_servers(clusters)
if args.restart:
assign_hosts(regions, tag)
        run_ycsb_trial(tag, runid="DEFAULT_RUN",
threads=30,
distributionparameter=8,
isolation_level="NO_ISOLATION",
atomicity_level="NO_ATOMICITY",
recordcount=100000,
time=60,
timeout=120*1000,
keydistribution="zipfian")
if args.terminate:
pprint("Terminating kaiju clusters")
terminate_clusters(tag, args.clusters)
if args.kill_num != -1:
pprint("Killing %d instances" % (args.kill_num))
terminate_num(tag, args.kill_num)
if args.experiment:
if args.experiment not in experiments:
print "Bad experiment! (%s) Possibilities: %s" % (args.experiment, experiments.keys())
exit(-1)
experiment = experiments[args.experiment]
args.output_dir=args.output_dir+"/"+args.experiment+"-"+str(datetime.now()).replace(" ", "-").replace(":","-").split(".")[0]
system("mkdir -p "+args.output_dir)
system("cp experiments.py "+args.output_dir)
system("git rev-parse HEAD > "+args.output_dir+"/githash.txt")
for nc, ns in experiment["serversList"]:
args.servers = ns
args.clients = nc
(regions, clusters, tag) = parseArgs(args)
assign_hosts(regions, tag)
for iteration in experiment["iterations"]:
firstrun = True
for readprop in experiment["readprop"]:
for numkeys in experiment["numkeys"]:
for valuesize in experiment["valuesize"]:
for txnlen in experiment["txnlen"]:
for threads in experiment["threads"]:
for drop_commit_pct in experiment["drop_commit_pcts"]:
for check_commit_delay in experiment["check_commit_delays"]:
for config in experiment["configs"]:
isolation_level = config
ra_algorithm = "KEY_LIST"
if(config.find("READ_ATOMIC") != -1):
isolation_level = "READ_ATOMIC"
if(config.find("LIST") != -1):
ra_algorithm = "KEY_LIST"
elif(config.find("BLOOM") != -1):
ra_algorithm = "BLOOM_FILTER"
else:
ra_algorithm = "TIMESTAMP"
firstrun = True
run_ycsb_trial(tag, runid=("%s-%d-THREADS%d-RPROP%s-VS%d-TXN%d-NC%s-NS%s-NK%d-DCP%f-CCD%d-IT%d" % (config,
txnlen,
threads,
readprop,
valuesize,
txnlen,
nc,
ns,
numkeys,
drop_commit_pct,
check_commit_delay,
iteration)),
bootstrap_time_ms=experiment["bootstrap_time_ms"],
threads=threads,
txnlen=txnlen,
readprop=readprop,
recordcount=numkeys,
time=experiment["numseconds"],
timeout=120*10000,
ra_algorithm = ra_algorithm,
isolation_level = isolation_level,
keydistribution=experiment["keydistribution"],
valuesize=valuesize,
numbuckets=100,
metrics_printrate=-1,
killservers=firstrun,
drop_commit_pct=drop_commit_pct,
check_commit_delay=check_commit_delay,
bgrun=experiment["launch_in_bg"])
firstrun = False