blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 281 | content_id stringlengths 40 40 | detected_licenses listlengths 0 57 | license_type stringclasses 2 values | repo_name stringlengths 6 116 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 313 values | visit_date timestamp[us] | revision_date timestamp[us] | committer_date timestamp[us] | github_id int64 18.2k 668M ⌀ | star_events_count int64 0 102k | fork_events_count int64 0 38.2k | gha_license_id stringclasses 17 values | gha_event_created_at timestamp[us] | gha_created_at timestamp[us] | gha_language stringclasses 107 values | src_encoding stringclasses 20 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 4 6.02M | extension stringclasses 78 values | content stringlengths 2 6.02M | authors listlengths 1 1 | author stringlengths 0 175 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
325e307598c52d6975a86c8edc8054b6e0fe4a67 | ded10c2f2f5f91c44ec950237a59225e8486abd8 | /.history/2/matrix_squaring_20200423012150.py | d1a2e804abef8dfd13862f5b3c728acfea003b0a | [] | no_license | jearistiz/Statistical-Physics-Projects | 276a86407b32ded4e06b32efb2fadbd8eff8daed | d9c5b16a50856e148dc8604d92b6de3ea21fc552 | refs/heads/master | 2022-11-05T03:41:23.623050 | 2020-06-28T06:36:05 | 2020-06-28T06:36:05 | 254,909,897 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 35,543 | py | # -*- coding: utf-8 -*-
from __future__ import division
import os
import numpy as np
import matplotlib.pyplot as plt
from time import time
import pandas as pd
# Author: Juan Esteban Aristizabal-Zuluaga
# date: 20200414
def rho_free(x,xp,beta):
"""Uso: devuelve elemento de matriz dsnsidad para el caso de una partícula libre en
un toro infinito.
"""
return (2.*np.pi*beta)**(-0.5) * np.exp(-(x-xp)**2 / (2 * beta))
def harmonic_potential(x):
"""Uso: Devuelve valor del potencial armónico para una posición x dada"""
return 0.5*x**2
def anharmonic_potential(x):
"""Devuelve valor de potencial anarmónico para una posición x dada"""
# return np.abs(x)*(1+np.cos(x)) #el resultado de este potencial es interesante
return 0.5*x**2 - x**3 + x**4
def QHO_canonical_ensemble(x,beta):
"""
    Usage: computes the theoretical quantum probability of finding the harmonic
           oscillator (immersed in a thermal bath at inverse temperature beta)
           at position x.
    Receives:
        x: float -> position.
        beta: float -> inverse temperature in reduced units, beta = 1/T.
    Returns:
        theoretical quantum probability at position x for inverse temperature beta.
"""
return (np.tanh(beta/2.)/np.pi)**0.5 * np.exp(- x**2 * np.tanh(beta/2.))
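As a quick sanity check, the thermal density above should integrate to one. The following standalone sketch (it redefines the distribution locally under the illustrative name `qho_pi`, so it runs on its own) verifies the normalization numerically:

```python
import numpy as np

# Numerical check that the QHO thermal probability density integrates to one.
# Mirrors QHO_canonical_ensemble above; `qho_pi` is an illustrative local name.
def qho_pi(x, beta):
    return (np.tanh(beta / 2.) / np.pi) ** 0.5 * np.exp(-x ** 2 * np.tanh(beta / 2.))

beta = 4.0
x = np.linspace(-10., 10., 2001)          # wide grid, dx = 0.01
norm = np.sum(qho_pi(x, beta)) * (x[1] - x[0])
print(abs(norm - 1.0) < 1e-6)             # → True
```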
def Z_QHO(beta):
"""Uso: devuelve valor de función de partición para el QHO unidimensional"""
return 0.5/np.sinh(beta/2)
def E_QHO_avg_theo(beta):
"""Uso: devuelve valor de energía interna para el QHO unidimensional"""
return 0.5/np.tanh(0.5*beta)
def rho_trotter(x_max=5., nx=101, beta=1, potential=harmonic_potential):
"""
Uso: devuelve matriz densidad en aproximación de Trotter para altas temperaturas
y bajo influencia del potencial "potential".
Recibe:
x_max: float -> los valores de x estarán en el intervalo (-x_max,x_max).
nx: int -> número de valores de x considerados (igualmente espaciados).
beta: float -> inverso de temperatura en unidades reducidas.
potential: func -> potencial de interacción. Debe ser solo función de x.
Devuelve:
rho: numpy array, shape=(nx,nx) -> matriz densidad en aproximación de Trotter para
altas temperaturas y potencial dado.
grid_x: numpy array, shape=(nx,) -> valores de x en los que está evaluada rho.
dx: float -> separación entre valores contiguos de grid_x
"""
nx = int(nx)
    # If nx is even, switch to the nearest odd value so that x = 0 is on the grid
if nx%2 == 0:
nx = nx + 1
    # Grid spacing implied by the x_max and nx given as input
dx = 2 * x_max/(nx-1)
    # List of x values given the discretization and x_max
grid_x = [i*dx for i in range(-int((nx-1)/2),int((nx-1)/2 + 1))]
    # Build the density matrix given by the Trotter approximation
rho = np.array([[rho_free(x , xp, beta) * np.exp(-0.5*beta*(potential(x)+potential(xp)))
for x in grid_x]
for xp in grid_x])
return rho, grid_x, dx
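As a sanity check of the Trotter construction above, the matrix should be symmetric and strictly positive. A minimal standalone sketch (it re-implements the free propagator and the harmonic potential locally rather than importing this module; `free_propagator` and `trotter_matrix` are illustrative names):

```python
import numpy as np

# Standalone sketch of the Trotter density matrix (mirrors rho_trotter above
# with the harmonic potential V(x) = x^2 / 2).
def free_propagator(x, xp, beta):
    return (2. * np.pi * beta) ** -0.5 * np.exp(-(x - xp) ** 2 / (2. * beta))

def trotter_matrix(x_max=5.0, nx=11, beta=0.1):
    grid = np.linspace(-x_max, x_max, nx)
    v = 0.5 * grid ** 2
    # rho(x, x'; beta) ~ rho_free(x, x'; beta) * exp(-beta * (V(x) + V(x')) / 2)
    rho = (free_propagator(grid[:, None], grid[None, :], beta)
           * np.exp(-0.5 * beta * (v[:, None] + v[None, :])))
    return rho, grid

rho, grid = trotter_matrix()
print(rho.shape, np.allclose(rho, rho.T), np.all(rho > 0))  # → (11, 11) True True
```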
def density_matrix_squaring(rho, grid_x, N_iter=1, beta_ini=1, print_steps=True):
"""
Uso: devuelve matriz densidad luego de aplicarle algoritmo matrix squaring N_iter veces.
En la primera iteración se usa matriz de densidad dada por el input rho (a
temperatura inversa beta_ini); en las siguientes iteraciones se usa matriz densidad
generada por la iteración inmediatamente anterior. El sistema asociado a la matriz
densidad obtenida (al final de aplicar el algoritmo) está a temperatura inversa
beta_fin = beta_ini * 2**(N_iter).
Recibe:
rho: numpy array, shape=(nx,nx) -> matriz densidad discretizada en valores dados
por x_grid.
grid_x: numpy array, shape=(nx,) -> valores de x en los que está evaluada rho.
N_iter: int -> número de iteraciones del algoritmo.
beta_ini: float -> valor de inverso de temperatura asociado a la
matriz densidad rho dada como input.
print_steps: bool -> decide si muestra valores de beta en cada
iteración.
Devuelve:
rho: numpy array, shape=(nx,nx) -> matriz densidad de estado rho a temperatura
inversa igual a beta_fin.
trace_rho: float -> traza de la matriz densidad a temperatura inversa
igual a beta_fin. Por la definición que tomamos
de rho, ésta es equivalente a la función
partición a dicha temperatura.
beta_fin: float -> temperatura inversa del sistema asociado a rho.
"""
    # Grid spacing of the positions
dx = grid_x[1] - grid_x[0]
    # Compute beta_fin from the values of beta_ini and N_iter given as input
beta_fin = beta_ini * 2 ** N_iter
    # Iterate the matrix-squaring algorithm
if print_steps:
print('\nbeta_ini = %.3f'%beta_ini,
'\n----------------------------------------------------------------')
for i in range(N_iter):
rho = dx * np.dot(rho,rho)
        # Print relevant information
        if print_steps:
            print(u'Iteration %d) 2^%d * beta_ini --> 2^%d * beta_ini'%(i, i, i+1))
if print_steps:
print('----------------------------------------------------------------\n' +
u'beta_fin = %.3f'%beta_fin)
    # Compute the trace of rho
trace_rho = np.trace(rho)*dx
return rho, trace_rho, beta_fin
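For the free particle, the squaring step used above can be checked against a closed form: convolving rho(beta) with itself gives rho(2*beta) exactly, so one `dx * rho @ rho` step should reproduce the free propagator at twice the inverse temperature (away from the grid edges, where truncation of the integral matters). A standalone sketch with an illustrative `free_propagator` helper:

```python
import numpy as np

# Free-particle check of the matrix-squaring identity:
# integral dx'' rho(x, x''; beta) rho(x'', x'; beta) = rho(x, x'; 2*beta).
def free_propagator(x, xp, beta):
    return (2. * np.pi * beta) ** -0.5 * np.exp(-(x - xp) ** 2 / (2. * beta))

beta = 0.5
grid = np.linspace(-10., 10., 201)
dx = grid[1] - grid[0]
rho_b = free_propagator(grid[:, None], grid[None, :], beta)
rho_2b = dx * rho_b @ rho_b                            # one squaring step
exact = free_propagator(grid[:, None], grid[None, :], 2 * beta)
c = slice(50, 151)                                     # interior points, |x| <= 5
print(np.max(np.abs(rho_2b - exact)[c, c]) < 1e-8)     # → True (edges excluded)
```

Near the boundary of the grid the numerical convolution loses part of the Gaussian tail, which is exactly the discretization error the full algorithm controls by choosing x_max large enough.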
def save_csv(data, data_headers=None, data_index=None, file_name=None,
relevant_info=None, print_data=True):
"""
Uso: data debe contener listas que serán las columnas de un archivo CSV que se guardará
con nombre file_name. relevant_info agrega comentarios en primeras líneas del
archivo.
Recibe:
data: array of arrays, shape=(nx,ny) -> cada columna es una columna del archivo.
data_headers: numpy array, shape=(ny,) -> nombres de las columnas
data_index: numpy array, shape=(nx,) -> nombres de las filas
file_name: str -> nombre del archivo en el que se guardarán datos.
relevant_info: list of str -> información que se agrega como comentario en
primeras líneas. Cada elemento de esta lista
se agrega como una nueva línea.
print_data: bool -> decide si imprime datos guardados, en pantalla.
Devuelve:
data_pdDF: pd.DataFrame -> archivo con datos formato "pandas data frame".
guarda archivo con datos e inforamación relevante en primera línea.
"""
if file_name==None:
        # full path of this script
script_dir = os.path.dirname(os.path.abspath(__file__))
file_name = script_dir + '/' + 'file_name.csv'
data_pdDF = pd.DataFrame(data, columns=data_headers, index=data_index)
    # Create the CSV file and add the relevant comments given as input
    if relevant_info is not None:
        # Add the relevant information in the first lines
with open(file_name,mode='w') as file_csv:
for info in list(relevant_info):
file_csv.write('# '+info+'\n')
file_csv.close()
        # Use pandas to append the data to the file in CSV format.
with open(file_name,mode='a') as file_csv:
data_pdDF.to_csv(file_csv)
file_csv.close()
else:
with open(file_name,mode='w') as file_csv:
data_pdDF.to_csv(file_csv)
file_csv.close()
    # Print the data to screen.
if print_data==True:
print(data_pdDF)
return data_pdDF
def run_pi_x_sq_trotter(x_max=5., nx=201, N_iter=7, beta_fin=4, potential=harmonic_potential,
potential_string='harmonic_potential', print_steps=True,
save_data=True, file_name=None, relevant_info=None,
plot=True, save_plot=True, show_plot=True):
"""
Uso: corre algoritmo matrix squaring iterativamente (N_iter veces). En la primera
iteración se usa una matriz densidad en aproximación de Trotter a temperatura
inversa beta_ini = beta_fin * 2**(-N_iter) para potencial dado por potential;
en las siguientes iteraciones se usa matriz densidad generada por la iteración
inmediatamente anterior. Además ésta función guarda datos de pi(x;beta) vs. x
en archivo de texto y grafica pi(x;beta) comparándolo con teoría para el oscilador
armónico cuántico.
Recibe:
x_max: float -> los valores de x estarán en el intervalo (-x_max,x_max).
nx: int -> número de valores de x considerados.
N_iter: int -> número de iteraciones del algoritmo matrix squaring.
beta_ini: float -> valor de inverso de temperatura que queremos tener al final de
aplicar el algoritmo matrix squaring iterativamente.
potential: func -> potencial de interacción usado en aproximación de trotter. Debe
ser función de x.
potential_string: str -> nombre del potencial (con éste nombramos los archivos que
se generan).
print_steps: bool -> decide si imprime los pasos del algoritmo matrix squaring.
save_data: bool -> decide si guarda los datos en archivo .csv.
file_name: str -> nombre de archivo CSV en que se guardan datos. Si valor es None,
se guarda con nombre conveniente según parámetros relevantes.
plot: bool -> decide si grafica.
save_plot: bool -> decide si guarda la figura.
show_plot: bool -> decide si muestra la figura en pantalla.
Devuelve:
rho: numpy array, shape=(nx,nx) -> matriz densidad de estado rho a temperatura
inversa igual a beta_fin.
trace_rho: float -> traza de la matriz densidad a temperatura inversa
igual a beta_fin. Por la definición que tomamos
de "rho", ésta es equivalente a la función
partición en dicha temperatura.
grid_x: numpy array, shape=(nx,) -> valores de x en los que está evaluada rho.
"""
    # Compute beta_ini from the values of beta_fin and N_iter given as input
beta_ini = beta_fin * 2**(-N_iter)
    # Compute rho in the Trotter approximation
rho, grid_x, dx = rho_trotter(x_max, nx, beta_ini, potential)
grid_x = np.array(grid_x)
    # Approximate rho by matrix squaring iterated N_iter times.
rho, trace_rho, beta_fin_2 = density_matrix_squaring(rho, grid_x, N_iter,
beta_ini, print_steps)
print('---------------------------------------------------------'
+ '---------------------------------------------------------\n'
+ u'Matrix squaring: beta_ini = %.3f --> beta_fin = %.3f'%(beta_ini, beta_fin_2)
+ u' N_iter = %d Z(beta_fin) = Tr(rho(beta_fin)) = %.3E \n'%(N_iter,trace_rho)
+ '---------------------------------------------------------'
+ '---------------------------------------------------------'
)
    # Normalize rho to 1 and compute probability densities at the grid_x values.
rho_normalized = np.copy(rho)/trace_rho
x_weights = np.diag(rho_normalized)
    # Save the data to a CSV file.
    script_dir = os.path.dirname(os.path.abspath(__file__))  # full path of this script
    if save_data:
        # Name of the .csv file in which the pi(x;beta_fin) values are saved.
if file_name is None:
csv_file_name = (script_dir
+ u'/pi_x-ms-%s-beta_fin_%.3f-x_max_%.3f-nx_%d-N_iter_%d.csv'
%(potential_string,beta_fin,x_max,nx,N_iter))
else:
csv_file_name = script_dir + '/'+ file_name
        # Relevant information added as a comment to the CSV file.
if relevant_info is None:
relevant_info = ['pi(x;beta_fin) computed using matrix squaring algorithm and'
+ ' Trotter approximation. Parameters:',
u'%s x_max = %.3f nx = %d '%(potential_string,x_max,nx)
+ u'N_iter = %d beta_ini = %.3f '%(N_iter,beta_ini,)
+ u'beta_fin = %.3f'%beta_fin]
        # Save the pi(x;beta_fin) values to a CSV file.
pi_x_data = np.array([grid_x.copy(),x_weights.copy()])
pi_x_data_headers = ['position_x','prob_density']
pi_x_data = save_csv(pi_x_data.transpose(),pi_x_data_headers,None,csv_file_name,
relevant_info,print_data=0)
    # Plot and compare against theory
if plot:
plt.figure(figsize=(8,5))
plt.plot(grid_x, x_weights,
                 label='Matrix squaring +\nTrotter formula.\n$N=%d$ iterations\n$dx=%.3E$'
                       %(N_iter,dx))
        plt.plot(grid_x, QHO_canonical_ensemble(grid_x,beta_fin), label=u'QHO theoretical value')
plt.xlabel(u'x')
plt.ylabel(u'$\pi^{(Q)}(x;\\beta)$')
plt.legend(loc='best',title=u'$\\beta=%.2f$'%beta_fin)
plt.tight_layout()
if save_plot:
if file_name is None:
plot_file_name = (script_dir
+ u'/pi_x-ms-plot-%s-beta_fin_%.3f-x_max_%.3f-nx_%d-N_iter_%d.eps'
%(potential_string,beta_fin,x_max,nx,N_iter))
else:
plot_file_name = script_dir+u'/pi_x-ms-plot-'+file_name+'.eps'
plt.savefig(plot_file_name)
if show_plot:
plt.show()
plt.close()
return rho, trace_rho, grid_x
def Z_several_values(temp_min=1./10, temp_max=1/2., N_temp=10, save_Z_csv=True,
Z_file_name = None, relevant_info_Z = None, print_Z_data = True,
x_max=7., nx=201, N_iter=7, potential = harmonic_potential,
potential_string = 'harmonic_potential', print_steps=False,
save_pi_x_data=False, pi_x_file_name=None, relevant_info_pi_x=None,
plot=False, save_plot=False, show_plot=False):
"""
Uso: calcula varios valores para la función partición, Z, usando operador densidad
aproximado aproximado por el algoritmo matrix squaring.
Recibe:
temp_min: float -> Z se calcula para valores de beta en (1/temp_min,1/temp_max).
con N_temp valores igualmente espaciados.
temp_max: float.
N_temp: int.
save_Z_csv: bool -> decide si guarda valores calculados en archivo CSV.
Z_file_name: str -> nombre del archivo en el que se guardan datos de Z. Si valor
es None, se guarda con nombre conveniente según parámetros
relevantes.
relevant_info_Z: list -> infrmación relevante se añade en primeras líneas del archivo.
Cada str separada por una coma en la lista se añade como una
nueva línea.
print_Z_data: bool -> imprime datos de Z en pantalla.
*args: tuple -> argumentos de run_pi_x_sq_trotter
Devuelve:
Z_data: list, shape=(3,)
Z_data[0]: list, shape(N_temp,) -> contiene valores de beta en los que está evaluada Z.
Z_data[1]: list, shape(N_temp,) -> contiene valores de T en los que está evaluada Z.
Z_data[2]: list, shape(N_temp,) -> contiene valores de Z.
Z(beta) = Z(1/T) =
Z_data[0](Z_data[1]) = Z_data[0](Z_data[2])
"""
    # Convert the temperature limits into beta values and build the list of betas.
beta_max = 1./temp_min
beta_min = 1./temp_max
N_temp = int(N_temp)
beta_array = np.linspace(beta_max,beta_min,N_temp)
Z = []
    # Compute Z for the beta values specified in beta_array.
    for beta_fin in beta_array:
        # Pass the pi(x) file-name and info arguments of this function
        # (the original code referenced the undefined names file_name and
        # relevant_info here, raising a NameError).
        rho, trace_rho, grid_x = run_pi_x_sq_trotter(x_max, nx, N_iter, beta_fin, potential,
                                                     potential_string, print_steps,
                                                     save_pi_x_data, pi_x_file_name,
                                                     relevant_info_pi_x, plot, save_plot,
                                                     show_plot)
Z.append(trace_rho)
    # Build the function's output.
Z_data = np.array([beta_array.copy(), 1./beta_array.copy(), Z.copy()], dtype=float)
    # Save the Z data to a CSV file.
if save_Z_csv == True:
if Z_file_name is None:
script_dir = os.path.dirname(os.path.abspath(__file__))
Z_file_name = ('Z-ms-%s-beta_max_%.3f-'%(potential_string,1./temp_min)
+ 'beta_min_%.3f-N_temp_%d-x_max_%.3f-'%(1./temp_max,N_temp,x_max)
+ 'nx_%d-N_iter_%d.csv'%(nx, N_iter))
Z_file_name = script_dir + '/' + Z_file_name
if relevant_info_Z is None:
relevant_info_Z = ['Partition function at several temperatures',
'%s beta_max = %.3f '%(potential_string,1./temp_min)
+ 'beta_min = %.3f N_temp = %d '%(1./temp_max,N_temp)
+ 'x_max = %.3f nx = %d N_iter = %d'%(x_max,nx, N_iter)]
Z_data_headers = ['beta', 'temperature', 'Z']
Z_data = save_csv(Z_data.transpose(), Z_data_headers, None, Z_file_name, relevant_info_Z,
print_data=False)
if print_Z_data == True:
print(Z_data)
return Z_data
def average_energy(read_Z_data=True, generate_Z_data=False, Z_file_name = None,
plot_energy=True, save_plot_E=True, show_plot_E=True,
E_plot_name=None,
temp_min=1./10, temp_max=1/2., N_temp=10, save_Z_csv=True,
relevant_info_Z=None, print_Z_data=True,
x_max=7., nx=201, N_iter=7, potential=harmonic_potential,
potential_string='harmonic_potential', print_steps=False,
save_pi_x_data=False, pi_x_file_name=None, relevant_info_pi_x=None,
plot_pi_x=False, save_plot_pi_x=False, show_plot_pi_x=False):
"""
Uso: calcula energía promedio, E, del sistema en cuestión dado por potential.
Se puede decidir si se leen datos de función partición o se generan,
ya que E = - (d/d beta )log(Z).
Recibe:
read_Z_data: bool -> decide si se leen datos de Z de un archivo con nombre
Z_file_name.
generate_Z_data: bool -> decide si genera datos de Z.
Nota: read_Z_data y generate_Z_data son excluyentes. Se analiza primero primera opción
Z_file_name: str -> nombre del archivo en del que se leerá o en el que se
guardarán datos de Z. Si valor es None, se guarda con nombre
conveniente según parámetros relevantes.
plot_energy: bool -> decide si gráfica energía.
save_plot_E: bool -> decide si guarda gráfica de energía. Nótese que si
plot_energy=False, no se generará gráfica.
show_plot_E: bool -> decide si muestra gráfica de E en pantalla
E_plot_name: str -> nombre para guardar gráfico de E.
*args: tuple -> argumentos de Z_several_values
Devuelve:
E_avg: list -> valores de energía promedio para beta especificados por
beta__read
beta_read: list
"""
    # Decide whether to read or generate the Z data.
if read_Z_data:
Z_file_read = pd.read_csv(Z_file_name, index_col=0, comment='#')
elif generate_Z_data:
t_0 = time()
Z_data = Z_several_values(temp_min, temp_max, N_temp, save_Z_csv, Z_file_name,
relevant_info_Z, print_Z_data, x_max, nx, N_iter, potential,
potential_string, print_steps, save_pi_x_data, pi_x_file_name,
relevant_info_pi_x, plot_pi_x,save_plot_pi_x, show_plot_pi_x)
t_1 = time()
print('--------------------------------------------------------------------------\n'
+ '%d values of Z(beta) generated --> %.3f sec.'%(N_temp,t_1-t_0))
Z_file_read = Z_data
else:
        print('Choose whether the partition-function data, Z, is generated or read.\n'
              + 'These options are mutually exclusive; if both are selected, the '
              + 'algorithm chooses to read the data.')
beta_read = Z_file_read['beta']
temp_read = Z_file_read['temperature']
Z_read = Z_file_read['Z']
    # Compute the average energy.
E_avg = np.gradient(-np.log(Z_read),beta_read)
    # Plot.
if plot_energy:
plt.figure(figsize=(8,5))
plt.plot(temp_read,E_avg,label=u'$\langle E \\rangle$ via path integral\nnaive sampling')
        plt.plot(temp_read,E_QHO_avg_theo(beta_read),label=u'$\langle E \\rangle$ theoretical')
plt.legend(loc='best')
plt.xlabel(u'$T$')
plt.ylabel(u'$\langle E \\rangle$')
if save_plot_E:
if E_plot_name is None:
script_dir = os.path.dirname(os.path.abspath(__file__))
E_plot_name = ('E-ms-plot-%s-beta_max_%.3f-'%(potential_string,1./temp_min)
+ 'beta_min_%.3f-N_temp_%d-x_max_%.3f-'%(1./temp_max,N_temp,x_max)
+ 'nx_%d-N_iter_%d.eps'%(nx, N_iter))
E_plot_name = script_dir + '/' + E_plot_name
plt.savefig(E_plot_name)
if show_plot_E:
plt.show()
plt.close()
return E_avg, beta_read.to_numpy()
def calc_error(x,xp,dx):
"""
Uso: calcula error acumulado en cálculo computacional de pi(x;beta) comparado
con valor teórico
"""
x, xp = np.array(x), np.array(xp)
N = len(x)
if N != len(xp):
        raise Exception('x and xp must have the same length.')
else:
return np.sum(np.abs(x-xp))*dx
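A tiny usage example of the metric above. The snippet is standalone (it redefines the function locally so it runs on its own), showing that the result is the L1 distance between two sampled curves weighted by the grid spacing:

```python
import numpy as np

# L1-type accumulated error between two sampled curves, weighted by the
# grid spacing dx (mirrors calc_error above).
def calc_error(x, xp, dx):
    x, xp = np.array(x), np.array(xp)
    if len(x) != len(xp):
        raise ValueError('x and xp must have the same length.')
    return np.sum(np.abs(x - xp)) * dx

grid = np.linspace(-1., 1., 5)    # dx = 0.5
f = grid ** 2
g = grid ** 2 + 0.1               # constant offset of 0.1
print(calc_error(f, g, 0.5))      # 5 points x 0.1 offset x dx 0.5 ≈ 0.25
```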
def optimization(generate_opt_data=True, read_opt_data=False, beta_fin=4, x_max=5,
potential=harmonic_potential, potential_string='harmonic_potential',
nx_min=50, nx_max=1000, nx_sampling=50, N_iter_min=1, N_iter_max=20,
save_opt_data=False, opt_data_file_name=None, plot=True,
show_plot=True, save_plot=True, opt_plot_file_name=None):
"""
Uso: calcula diferentes valores de error usando calc_error() para encontrar valores de
dx y beta_ini óptimos para correr el alcoritmo (óptimos = que minimicen error)
Recibe:
generate_opt_data: bool -> decide si genera datos para optimización.
read_opt_data: bool -> decide si lee datos para optimización.
Nota: generate_opt_data y read_opt_data son excluyentes. Se evalúa primero la primera.
nx_min: int
nx_max: int -> se relaciona con dx = 2*x_max/(nx-1).
nx_sampling: int -> se generan nx mediante range(nx_max,nx_min,-1*nx_sampling).
N_iter_min: int
N_iter_max: int -> se relaciona con beta_ini = beta_fin **(-N_iter). Se gereran
valores de N_iter con range(N_iter_max,N_iter_min-1,-1).
save_opt_data: bool -> decide si guarda datos de optimización en archivo CSV.
opt_data_file_name: str -> nombre de archivo para datos de optimización.
plot: bool -> decide si grafica optimización.
show_plot: bool -> decide si muestra optimización.
save_plot: bool -> decide si guarda optimización.
opt_plot_file_name: str -> nombre de gráfico de optimización. Si valor es None, se
guarda con nombre conveniente según parámetros relevantes.
Devuelve:
error: list, shape=(nb,ndx) -> valores de calc_error para diferentes valores de dx y
beta_ini. dx incrementa de izquierda a derecha en lista
y beta_ini incrementa de arriba a abajo.
dx_grid: list, shape=(ndx,) -> valores de dx para los que se calcula error.
beta-ini_grid: list, shape=(nb,) -> valores de beta_ini para los que se calcula error.
"""
    # Decide whether to generate or read the data.
if generate_opt_data:
N_iter_min = int(N_iter_min)
N_iter_max = int(N_iter_max)
nx_min = int(nx_min)
nx_max = int(nx_max)
if nx_min%2==1:
nx_min -= 1
if nx_max%2==0:
nx_max += 1
        # Build the nx and N_iter values (equivalent to generating dx and beta_ini values)
nx_values = range(nx_max,nx_min,-1*nx_sampling)
N_iter_values = range(N_iter_max,N_iter_min-1,-1)
dx_grid = [2*x_max/(nx-1) for nx in nx_values]
beta_ini_grid = [beta_fin * 2**(-N_iter) for N_iter in N_iter_values]
error = []
        # Compute the error for each specified value of nx and N_iter
        # (equivalently, dx and beta_ini).
for N_iter in N_iter_values:
row = []
for nx in nx_values:
rho,trace_rho,grid_x = run_pi_x_sq_trotter(x_max, nx, N_iter, beta_fin,
potential, potential_string,
False, False, None, None, False,
False, False)
grid_x = np.array(grid_x)
dx = grid_x[1]-grid_x[0]
rho_normalized = np.copy(rho)/trace_rho
pi_x = np.diag(rho_normalized)
theoretical_pi_x = QHO_canonical_ensemble(grid_x,beta_fin)
error_comp_theo = calc_error(pi_x,theoretical_pi_x,dx)
row.append(error_comp_theo)
error.append(row)
error = np.array(error)
elif read_opt_data:
error = pd.read_csv(opt_data_file_name, index_col=0, comment='#')
dx_grid = error.columns.to_numpy()
beta_ini_grid = error.index.to_numpy()
error = error.to_numpy()
else:
        raise Exception('Choose whether to generate or read the data in optimization().')
    # Replace failed entries in the error matrix (nan and inf, e.g. from a
    # divergent Z) by a value slightly above the largest finite error on the
    # plot. The original try/except called the built-in max() on a 2D array,
    # which raises instead of returning the largest finite value.
    error = np.asarray(error, dtype=float)
    finite = np.isfinite(error)
    nan_value = 1.1 * np.max(error[finite]) if finite.any() else 0
    error = np.nan_to_num(error, nan=nan_value, posinf=nan_value, neginf=nan_value)
script_dir = os.path.dirname(os.path.abspath(__file__))
    # Save the data (only if it was generated and saving is requested)
if generate_opt_data and save_opt_data:
if opt_data_file_name is None:
opt_data_file_name = ('/pi_x-ms-opt-%s-beta_fin_%.3f'%(potential_string, beta_fin)
+ '-x_max_%.3f-nx_min_%d-nx_max_%d'%(x_max, nx_min, nx_max)
+ '-nx_sampling_%d-N_iter_min_%d'%(nx_sampling, N_iter_min)
+ '-N_iter_max_%d.csv'%(N_iter_max))
opt_data_file_name = script_dir + opt_data_file_name
relevant_info = ['Optimization of parameters dx and beta_ini of matrix squaring'
+ ' algorithm', '%s beta_fin = %.3f '%(potential_string, beta_fin)
+ 'x_max = %.3f nx_min = %d nx_max = %d '%(x_max, nx_min, nx_max)
+ 'nx_sampling = %d N_iter_min = %d '%(nx_sampling, N_iter_min)
+ 'N_iter_max = %d'%(N_iter_max)]
save_csv(error, dx_grid, beta_ini_grid, opt_data_file_name, relevant_info)
    # Plot
if plot:
fig, ax = plt.subplots(1, 1)
DX, BETA_INI = np.meshgrid(dx_grid, beta_ini_grid)
cp = plt.contourf(DX,BETA_INI,error)
plt.colorbar(cp)
ax.set_ylabel(u'$\\beta_{ini}$')
ax.set_xlabel('$dx$')
plt.tight_layout()
        if save_plot:
            if opt_plot_file_name is None:
                opt_plot_file_name = \
                    ('/pi_x-ms-opt-plot-%s-beta_fin_%.3f'%(potential_string, beta_fin)
                     + '-x_max_%.3f-nx_min_%d-nx_max_%d'%(x_max, nx_min, nx_max)
                     + '-nx_sampling_%d-N_iter_min_%d'%(nx_sampling, N_iter_min)
                     + '-N_iter_max_%d.eps'%(N_iter_max))
                opt_plot_file_name = script_dir + opt_plot_file_name
            # Save the figure in both cases; previously savefig sat in the else
            # branch, so a default-named figure was never written to disk.
            plt.savefig(opt_plot_file_name)
if show_plot:
plt.show()
plt.close()
return error, dx_grid, beta_ini_grid
#################################################################################################
# CONTROL PANEL
#
# Whether to run the matrix-squaring algorithm
run_ms_algorithm = True
# Whether to run the internal-energy calculation
run_avg_energy = False
# Whether to run the optimization of dx and beta_ini
run_optimization = False
#
#
#################################################################################################
#################################################################################################
# GENERAL PARAMETERS FOR THE FIGURES
#
# Use LaTeX in figure text and enlarge the font size
plt.rc('text', usetex=True)
plt.rcParams.update({'font.size':15,'text.latex.unicode':True})
# Get the path used to save files in the same directory as this script
script_dir = os.path.dirname(os.path.abspath(__file__))
#
#################################################################################################
#################################################################################################
# RUN THE MATRIX-SQUARING ALGORITHM
#
# Physical parameters of the algorithm
x_max = 5.
nx = 201
N_iter = 7
beta_fin = 4
potential, potential_string = harmonic_potential, 'harmonic_potential'
# Technical parameters
print_steps = False
save_data = True #False
file_name = 'test1' #None
relevant_info = None
plot = True
save_plot = True #False
show_plot = True
if run_ms_algorithm:
rho, trace_rho, grid_x = run_pi_x_sq_trotter(x_max, nx, N_iter, beta_fin, potential,
potential_string, print_steps, save_data,
file_name, relevant_info, plot,
save_plot, show_plot)
#
#
#################################################################################################
#################################################################################################
# RUN THE INTERNAL-ENERGY CALCULATION
#
# Technical parameters for the partition function and the energy calculation
read_Z_data = False
generate_Z_data = True
Z_file_name = None
plot_energy = True
save_plot_E = True
show_plot_E = True
E_plot_name = None #script_dir + 'E.eps'
# Physical parameters for computing Z and <E>
temp_min = 1./10
temp_max = 1./2
N_temp = 10
potential, potential_string = harmonic_potential, 'harmonic_potential'
# More technical parameters
save_Z_csv = True
relevant_info_Z = None
print_Z_data = False
x_max = 7.
nx = 201
N_iter = 7
print_steps = False
save_pi_x_data = False
pi_x_file_name = None
relevant_info_pi_x = None
plot_pi_x = False
save_plot_pi_x = False
show_plot_pi_x = False
if run_avg_energy:
average_energy(read_Z_data, generate_Z_data, Z_file_name, plot_energy, save_plot_E,
show_plot_E, E_plot_name,
temp_min, temp_max, N_temp, save_Z_csv, relevant_info_Z, print_Z_data,
x_max, nx, N_iter, potential, potential_string, print_steps, save_pi_x_data,
pi_x_file_name, relevant_info_pi_x,plot_pi_x, save_plot_pi_x, show_plot_pi_x)
#
#
#################################################################################################
#################################################################################################
# RUN THE OPTIMIZATION OF DX AND BETA_INI
#
# Physical parameters
beta_fin = 4
x_max = 5
potential, potential_string = harmonic_potential, 'harmonic_potential'
nx_min = 10
nx_max = 300
nx_sampling = 30
N_iter_min = 8
N_iter_max = 20
# Technical parameters
generate_opt_data = True
read_opt_data = True
save_opt_data = True
opt_data_file_name = None #script_dir + '/pi_x-ms-opt-harmonic_potential-beta_fin_4.000-x_max_5.000-nx_min_10-nx_max_1001-nx_sampling_50-N_iter_min_1-N_iter_max_20.csv'
plot_opt = True
show_opt_plot = True
save_plot_opt = True
opt_plot_file_name = None #script_dir + '/pi_x-ms-opt-plot-harmonic_potential-beta_fin_4.000-x_max_5.000-nx_min_10-nx_max_1001-nx_sampling_50-N_iter_min_1-N_iter_max_20.eps'
if run_optimization:
t_0 = time()
error, dx_grid, beta_ini_grid = \
optimization(generate_opt_data, read_opt_data, beta_fin, x_max, potential,
potential_string, nx_min, nx_max, nx_sampling, N_iter_min,
N_iter_max, save_opt_data, opt_data_file_name, plot_opt,
show_opt_plot, save_plot_opt, opt_plot_file_name)
t_1 = time()
print('-----------------------------------------'
+ '-----------------------------------------\n'
+ 'Optimization: beta_fin=%.3f, x_max=%.3f, potential=%s\n \
nx_min=%d, nx_max=%d, N_iter_min=%d, N_iter_max=%d\n \
computation time = %.3f sec.\n'%(beta_fin,x_max,potential_string,nx_min,
nx_max,N_iter_min,N_iter_max,t_1-t_0)
+ '-----------------------------------------'
+ '-----------------------------------------')
#
#
################################################################################################# | [
"jeaz.git@gmail.com"
] | jeaz.git@gmail.com |
825c3a081928e9dfab29f460e918c27c3e835224 | 5f041c97889b1eade34669499c300eb764b4fc89 | /test/simple_match.py | 705d9e420743602afa8f119c86eb289b012ac2e6 | [
"MIT"
] | permissive | sirius8156/tea-recognition | d54df2ed0f341ee0bacd3f94f44d85d1b377b6ea | fec58fbca22627433bb37ba57942ebeaa5eefaa5 | refs/heads/master | 2020-03-17T06:21:09.946095 | 2018-05-14T12:17:46 | 2018-05-14T12:17:46 | 133,351,767 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,480 | py | # coding=utf-8
import cv2
import numpy as np
import tools.feature_extract as ife
import tools.feature_key_point as fkp
def test_match(img_path_1, img_path_2):
canny_img_1 = ife.get_canny_img(img_path_1)
key_points_1, desc_1, signed_img_1 = fkp.sift_img(canny_img_1)
canny_img_2 = ife.get_canny_img(img_path_2)
key_points_2, desc_2, signed_img_2 = fkp.sift_img(canny_img_2)
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(desc_1, desc_2, k=2)
matches = sorted(matches, key=lambda x: x[0].distance)
good = [m1 for (m1, m2) in matches if m1.distance < 0.9 * m2.distance]
good = np.expand_dims(good, 1)
img3 = cv2.drawMatchesKnn(canny_img_1, key_points_1, canny_img_2, key_points_2, good[0:20], outImg=None, flags=2)
return good, img3
if __name__ == '__main__':
compare_1_good, compare_1_img = test_match("cropped_1.jpg", "cropped_2.jpg")
compare_2_good, compare_2_img = test_match("cropped_1.jpg", "cropped_3.jpg")
compare_3_good, compare_3_img = test_match("cropped_1.jpg", "cropped_4.jpg")
compare_4_good, compare_3_img = test_match("cropped_2.jpg", "cropped_3.jpg")
compare_5_good, compare_3_img = test_match("cropped_2.jpg", "cropped_4.jpg")
print("compare_1#good: {}, compare_2#good: {}, compare_3#good: {}, compare_4#good: {}, compare_5#good: {}"
.format(len(compare_1_good),
len(compare_2_good),
len(compare_3_good),
len(compare_4_good),
len(compare_5_good)))
    # TODO For now a threshold of len(good) around 100 seems to separate the feature differences
    # TODO The sample patch size still needs extensive experiments to validate this threshold
    # TODO Canny yields few feature vectors; with Sobel the differences would be clearer.
    # Experiments are also needed here to find a suitable operator, balancing speed and complexity.
cv2.imshow("compare_1", compare_1_img)
cv2.imshow("compare_2", compare_2_img)
cv2.imshow("compare_3", compare_3_img)
cv2.imshow("compare_4", compare_3_img)
cv2.imshow("compare_5", compare_3_img)
cv2.waitKey(0)
# matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), {})
#
# knn_matches = matcher.knnMatch(desc_1, desc_2, 2)
# knn_matches = matches = sorted(knn_matches, key=lambda x: x[0].distance)
#
# good = [m1 for (m1, m2) in knn_matches if m1.distance < 0.7 * m2.distance]
#
# src_pts = np.float32([key_points_1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
# dst_pts = np.float32([key_points_2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
#
# M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
#
# h, w = canny_img_1.shape[:2]
# pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
# dst = cv2.perspectiveTransform(pts, M)
#
# canvas = canny_img_2.copy()
# cv2.polylines(canvas, [np.int32(dst)], True, (0, 255, 0), 3, cv2.LINE_AA)
# matched = cv2.drawMatches(canny_img_1, key_points_1, canvas, key_points_2, good, None) # ,**draw_params)
#
# h, w = canny_img_1.shape[:2]
# pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
# dst = cv2.perspectiveTransform(pts, M)
# perspectiveM = cv2.getPerspectiveTransform(np.float32(dst), pts)
# found = cv2.warpPerspective(canny_img_2, perspectiveM, (w, h))
#
# cv2.imshow("matched", matched)
# cv2.imshow("found", found)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
| [
"duyaming@baidu.com"
] | duyaming@baidu.com |
71a647bc0c4861ce9748d1653962bd9272ed223d | 7f32426d36f10d57bc52382a03132093830434eb | /reindeer/sys/handler/user.py | c7b27e2c4e448e0046f45584b0c6f4c1952145a2 | [] | no_license | CuiVincent/MCMS-Python | a7b5b922f3ae5352f0535a9c42b92b76f06db73e | 07f1b8c811d0eb23ebe9bf91ab17d0a6eaa692bd | refs/heads/master | 2022-07-10T16:06:40.863179 | 2015-06-20T14:53:31 | 2015-06-20T14:53:31 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,413 | py | __author__ = 'CuiVincent'
# -*- coding: utf8 -*-
import reindeer.sys.base_handler
from reindeer.sys.model.sys_user import SysUser
from tornado.escape import json_encode
import random
from reindeer.sys.exceptions import BusinessRuleException
class UserListHandler(reindeer.sys.base_handler.BaseHandler):
def get(self):
self.render('sys/user/user_list.html')
def post(self):
r_echo = self.get_argument('sEcho')
r_start = int(self.get_argument('iDisplayStart'))
r_length = int(self.get_argument('iDisplayLength'))
r_search = self.get_argument('sSearch')
r_sort_col = self.get_arguments('iSortCol_0')[0]
r_sort_dir = self.get_arguments('sSortDir_0')[0]
json = SysUser.get_slice_json(r_search, r_start, r_start+r_length, r_sort_col, r_sort_dir)
total = SysUser.get_all_count()
slice_total = SysUser.get_slice_count(r_search)
r_json = '{"success": true, "aaData":'+json+',"iTotalRecords":'+str(total)+',"iTotalDisplayRecords":'+str(slice_total)+',"sEcho":'+str(r_echo)+'}'
print(r_json)
return self.write(r_json)
class UserAddHandler(reindeer.sys.base_handler.BaseHandler):
def post(self):
code = self.get_argument('user_code')
name = self.get_argument('user_name')
status = self.get_argument('user_status') if self.get_argument('user_status') else 1
if SysUser.add(code, name, "111", status, self.get_current_user().CODE):
return self.write(json_encode({'success': True}))
class UserEditHandler(reindeer.sys.base_handler.BaseHandler):
def post(self):
uid = str(self.get_argument('uid'))
status = self.get_argument('user_status') if self.get_argument('user_status') else 1
name = self.get_argument('user_name')
if SysUser.update(uid, name, status):
return self.write(json_encode({'success': True}))
else:
raise BusinessRuleException(1153)
class UserDeleteHandler(reindeer.sys.base_handler.BaseHandler):
def post(self):
uids = str(self.get_argument('uid')).split(',')
success = False
for uid in uids:
d_success = SysUser.delete(uid)
if not success:
success = d_success
if success:
return self.write(json_encode({'success': success}))
else:
raise BusinessRuleException(1152) | [
"xoyoshin@gmail.com"
] | xoyoshin@gmail.com |
590718870c9a1af82d2b270b04cdca68e6a53547 | d9366ddca913f561474ecfbf7b613c2da41ac974 | /posts/forms.py | 81907dab299da60530ef724fc0295167a7c661d4 | [
"MIT"
] | permissive | Duskhorizon/discoplaytogether | cba2b1e510e79166bac5dfe4ea0296fab42f153a | e74a11b0f65d14db6f15d1bb0536411dd546eda6 | refs/heads/master | 2021-03-15T09:43:56.634236 | 2020-04-05T14:12:49 | 2020-04-05T14:12:49 | 246,829,876 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 768 | py | from django import forms
from django.forms import ModelForm
from .models import Event, Game
from .widgets import XDSoftDateTimePickerInput
class EventForm(ModelForm):
title = forms.CharField(label='Nazwa wydarzenia:')
description = forms.CharField(label='Opis:')
game = forms.ModelChoiceField(queryset=Game.objects.all(), label='Gra:')
start_time = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=XDSoftDateTimePickerInput(), label='Rozpoczęcie:'
)
class Meta:
model = Event
fields = ['title', 'description', 'start_time', 'game', 'server']
class DateForm(forms.Form):
date = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=XDSoftDateTimePickerInput()
)
| [
"duskhorizon.ggmu@gmail.com"
] | duskhorizon.ggmu@gmail.com |
7d7c1bbc6e3d39f628484db6de2dbae30a640adc | 95a08c7d63e7b47b42f4a56a40615e6fd7ddfd10 | /venv/bin/easy_install | 9742d91681c24d9fab91db729019ea86bac73878 | [] | no_license | ShahrokhGit/take-a-break | 361569c8f43023f69a2615debc05e6fdb6c3e45b | 40e475603847e9ec6438b2727d6763242e179d92 | refs/heads/master | 2020-04-14T02:34:15.495333 | 2019-01-07T06:39:33 | 2019-01-07T06:39:33 | 163,585,580 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 288 | #!/Users/shahrokh/Documents/ShahProjects/take-a-break/venv/bin/python3.7
# -*- coding: utf-8 -*-
import re
import sys
from setuptools.command.easy_install import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"shahrokh.zita@gmail.com"
] | shahrokh.zita@gmail.com | |
a4dc23eae566a3bc54d8ce9848d14fcec8cffa10 | dd240d4b1298117806d9218c409eaf3237b29509 | /test/test_min_edge_weight.py | c12d47401d1b0ece4159760694a6b6cc408d7120 | [] | no_license | NicolasBondouxA/openbackend_clustering_ls | 10a196ab012edc231aebed7dd88fbcec6a21760b | 36f0bbac9c5c2b09e5de578b32e3a5a8a0d913ba | refs/heads/master | 2023-04-21T15:54:19.426282 | 2021-05-17T17:44:03 | 2021-05-17T17:44:03 | 368,251,462 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,259 | py | """
No unit tests in this class.
Class used to run a test campaign with different nb_clusters and min_edge_weight.
"""
import logging
import pathlib
import unittest
from openbackend_clustering.instance.instance import InstanceMatrixParser, display_graph
from openbackend_clustering.model.local_solver_model import LSModel
logging.basicConfig(format='%(asctime)s:%(levelname)s:%(filename)s:%(lineno)d:%(message)s', level=logging.INFO)
current_path = pathlib.Path(__file__).parent.absolute()
def create_instance(min_edge_weight):
parser = InstanceMatrixParser(str(current_path) + '/../openbackend_clustering/resources/CA_extract.matrix')
return parser.create_instance(ignore=['node_22', 'node_8', 'node_67', 'node_46'], min_edge_weight=min_edge_weight)
def run_test_min_edge(min_edge_weight, nb_clusters):
# create instance
instance = create_instance(min_edge_weight)
separated_nodes = [node for node in ['node_16', 'node_65', 'node_39', 'node_57', 'node_68'] if node in instance.get_all_nodes()]
min_cluster_weight = int(instance.total_exchanges() / nb_clusters * 0.25)
# create model and solve
model = LSModel(instance, nb_clusters=nb_clusters, separated_nodes=separated_nodes,
min_cluster_weight=min_cluster_weight)
model.solve()
model.print_solution()
instance.print_kpis()
# save instance with solution
instance.set_solution(model.solution)
instance.save('instance_min_edge_weight_{}_w_{}_clusters.pickle'.format(min_edge_weight, nb_clusters))
# save a graphical representation of the solution
display_graph(instance, 'solution_min_edge_weight_{}_w_{}_clusters.png'.format(min_edge_weight, nb_clusters))
class MinEdgeWeightTest(unittest.TestCase):
def test_min_edge_weight_20_w_6_clusters(self):
run_test_min_edge(20, 6)
def test_min_edge_weight_20_w_5_clusters(self):
run_test_min_edge(20, 5)
def test_min_edge_weight_10_w_6_clusters(self):
run_test_min_edge(10, 6)
def test_min_edge_weight_10_w_5_clusters(self):
run_test_min_edge(10, 5)
def test_min_edge_weight_5_w_6_clusters(self):
run_test_min_edge(5, 6)
def test_min_edge_weight_5_w_5_clusters(self):
run_test_min_edge(5, 5)
| [
"nbondoux@amadeus.com"
] | nbondoux@amadeus.com |
6546eaac9531d7194aec8eac4caf36a6fda8ad87 | 249a30f5d0913dfd0f289f59d4943bf9253f847d | /del_bupp_files.py | e5ca01b04890817a6b1a02dac6b18fa52f23cc83 | [
"MIT"
] | permissive | SciLifeLab/standalone_scripts | 40d45c6a55ee89ec5ee089d6893df22d544d882f | cc3c6c13395d463b21764a2c7d3e950a002ff21a | refs/heads/master | 2023-05-12T08:56:24.927233 | 2023-05-08T13:36:10 | 2023-05-08T13:36:10 | 24,371,145 | 4 | 23 | MIT | 2023-05-08T09:37:09 | 2014-09-23T12:50:27 | Python | UTF-8 | Python | false | false | 2,278 | py | #!/usr/bin/python
"""A script to clean up among files which will be backed up on tape.
slightly adapted from the original, kindly supplied by it-support@scilifelab.se.
"""
import glob
import datetime
import os
import argparse
import sys
def main(args):
files = glob.glob("/home/bupp/other/*")
for f in files:
bn = os.path.basename(f)
file_date = None
if args.mode == 'github' and 'github' in f:
# Typical file name: githubbackup_2019-09-09T15:42:39.980130.tar.gz
try:
file_date = datetime.datetime.strptime(bn[13:23], "%Y-%m-%d")
except ValueError as e:
print(e, file=sys.stderr)
continue
if args.mode == 'zendesk' and 'github' not in f:
# Typical file name: 2019-09-09_16-33.bckp.json
try:
file_date = datetime.datetime.strptime(bn[0:10], "%Y-%m-%d")
except ValueError as e:
                print(e, file=sys.stderr)
continue
if file_date is None:
continue
remove = False
# Keep all backups from the last 20 days
if (datetime.datetime.now() - file_date).days > 20:
# Remove all but a single last one of the weekly backups
# for April, August and December.
if ((file_date.month == 4 and file_date.day < 24) or
(file_date.month in [8,12] and file_date.day < 25)):
remove = True
# Remove all other weekly backups
if file_date.month % 4:
remove = True
# Remove everything older than approx 2 years
if (datetime.datetime.now() - file_date).days > 600:
remove = True
if remove:
if args.danger:
os.remove(f)
else:
sys.stderr.write("Would have removed {}\n".format(f))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("mode", choices=['github', 'zendesk'])
parser.add_argument("--danger", action="store_true", help="Without this, no files will be deleted")
args = parser.parse_args()
main(args)
| [
"alneberg@kth.se"
] | alneberg@kth.se |
fe0b2ad7b2c4837a963b199042559a743734e1bc | ad9f6a77777dd0384e09a88fe0eddf0a3435fa98 | /ana.py | 48e1771235c0850d8ca825d99409d700cb13bf3c | [] | no_license | Keitokuch/csl_work | 608595277356b4339a6c665761cd4fae73ced69e | 6404445464ac60c3d1ce913eae902e272bba88a3 | refs/heads/master | 2021-08-17T15:34:36.528215 | 2020-09-17T23:11:33 | 2020-09-17T23:11:33 | 222,787,612 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,197 | py | import pandas as pd
import sys
# df = pd.read_csv('cmdata.csv')
# df = pd.read_csv('./parsec_2.csv', index_col='ts')
# df = pd.read_csv('./test_running.csv', index_col='ts')
filename = sys.argv[1]
df = pd.read_csv(filename, index_col='ts')
# print(df['can_migrate'].eq(1).sum(), df['can_migrate'].eq(0).sum())
# sel = df.loc[(df['prefer_dst'] == 0) & (df['prefer_src'] == 0) & (df['same_node'] == 0)]
# sel = df.loc[df['same_node'] == 0]
# sel = df[df.throttled.eq(1) & df.can_migrate.eq(1)]
# sel = df[df.fair_class.eq(0)]
# A = (df['can_migrate'] == 0).sum()
# B = (df['can_migrate'] == 1).sum()
# print(A, B, A/B)
print(filename)
total = len(df)
stats = {}
stats['throttled and can_migrate'] = (df.throttled.eq(1) & df.can_migrate.eq(1)).sum()
stats['throttled'] = df.throttled.eq(1).sum()
stats['running'] = df.p_running.eq(1).sum()
stats['running and can_migrate'] = (df.p_running.eq(1) & df.can_migrate.eq(1)).sum()
stats['buddy_hot and can_migrate'] = (df.buddy_hot.eq(1) & df.can_migrate.eq(1)).sum()
stats['throttled and running'] = (df.throttled.eq(1) & df.p_running.eq(1)).sum()
stats['throttled and aggressive'] = (df.throttled.eq(1) & df.test_aggressive.eq(1)).sum()
stats['running and aggressive'] = (df.p_running.eq(1) & df.test_aggressive.eq(1)).sum()
stats['!test_aggressive and can_migrate'] = (df.test_aggressive.eq(0) & df.can_migrate.eq(1)).sum()
stats['max load'] = df.src_load.max()
stats['mean dstload'] = df.dst_load.mean()
stats['mean len'] = df.src_len.mean()
stats['test_aggressive'] = df.test_aggressive.eq(1).sum()
# stats['fair_class'] = df.fair_class.eq(1).sum()
stats['can migrate'] = df.can_migrate.eq(1).sum()
stats['total'] = len(df)
for item in stats:
print(item, stats[item], '{:.4f}'.format(stats[item] / total))
pd.set_option('display.max_rows', df.shape[0]+1)
# # print(sel.loc[:, ['ts', 'pid', 'src_cpu', 'dst_cpu', 'delta_faults', 'can_migrate']])
# print(sel)
# print(len(sel))
# sel = df[df['delta_faults'].ne(0) & df['prefer_src'].eq(0) & df['prefer_dst'].eq(0)]
# sel = df[df['prefer_src'].eq(0) & df['prefer_dst'].eq(0)]
# print(sel[['prefer_src', 'prefer_dst', 'delta', 'delta_faults', 'can_migrate', 'src_numa_len']])
| [
"jingdec2@illinois.edu"
] | jingdec2@illinois.edu |
e55a27dca2368a2b46c0c89ab0d91c9214f68154 | 9f2f386a692a6ddeb7670812d1395a0b0009dad9 | /python/paddle/fluid/tests/unittests/ipu/test_print_op_ipu.py | 3189e060d58373250ba776271009bcab004e762b | [
"Apache-2.0"
] | permissive | sandyhouse/Paddle | 2f866bf1993a036564986e5140e69e77674b8ff5 | 86e0b07fe7ee6442ccda0aa234bd690a3be2cffa | refs/heads/develop | 2023-08-16T22:59:28.165742 | 2022-06-03T05:23:39 | 2022-06-03T05:23:39 | 181,423,712 | 0 | 7 | Apache-2.0 | 2022-08-15T08:46:04 | 2019-04-15T06:15:22 | C++ | UTF-8 | Python | false | false | 3,306 | py | # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import numpy as np
import paddle
import paddle.static
from paddle.fluid.tests.unittests.ipu.op_test_ipu import IPUOpTest
@unittest.skipIf(not paddle.is_compiled_with_ipu(),
"core is not compiled with IPU")
class TestBase(IPUOpTest):
def setUp(self):
self.set_atol()
self.set_training()
self.set_data_feed()
self.set_feed_attr()
self.set_op_attrs()
@property
def fp16_enabled(self):
return False
def set_data_feed(self):
data = np.random.uniform(size=[1, 3, 3, 3]).astype('float32')
self.feed_fp32 = {"x": data.astype(np.float32)}
self.feed_fp16 = {"x": data.astype(np.float16)}
def set_feed_attr(self):
self.feed_shape = [x.shape for x in self.feed_fp32.values()]
self.feed_list = list(self.feed_fp32.keys())
self.feed_dtype = [x.dtype for x in self.feed_fp32.values()]
def set_op_attrs(self):
self.attrs = {}
@IPUOpTest.static_graph
def build_model(self):
x = paddle.static.data(
name=self.feed_list[0],
shape=self.feed_shape[0],
dtype=self.feed_dtype[0])
out = paddle.fluid.layers.conv2d(x, num_filters=3, filter_size=3)
out = paddle.fluid.layers.Print(out, **self.attrs)
if self.is_training:
loss = paddle.mean(out)
adam = paddle.optimizer.Adam(learning_rate=1e-2)
adam.minimize(loss)
self.fetch_list = [loss.name]
else:
self.fetch_list = [out.name]
def run_model(self, exec_mode):
self.run_op_test(exec_mode)
def test(self):
for m in IPUOpTest.ExecutionMode:
if not self.skip_mode(m):
self.build_model()
self.run_model(m)
class TestCase1(TestBase):
def set_op_attrs(self):
self.attrs = {"message": "input_data"}
class TestTrainCase1(TestBase):
def set_op_attrs(self):
# "forward" : print forward
# "backward" : print forward and backward
# "both": print forward and backward
self.attrs = {"message": "input_data2", "print_phase": "both"}
def set_training(self):
self.is_training = True
self.epoch = 2
@unittest.skip("attrs are not supported")
class TestCase2(TestBase):
def set_op_attrs(self):
self.attrs = {
"first_n": 10,
"summarize": 10,
"print_tensor_name": True,
"print_tensor_type": True,
"print_tensor_shape": True,
"print_tensor_layout": True,
"print_tensor_lod": True
}
if __name__ == "__main__":
unittest.main()
| [
"noreply@github.com"
] | noreply@github.com |
d1d7b3d188456c9ce66966bb6625e2af07ecdf85 | e6b41b3c70ce572c64fdb7ea475f426945190ab2 | /lab10.1.py | f9432834f94bad6bac8ac92dc88c1fac5cc0f268 | [] | no_license | CodeLyokoZ/lab_Ex | fd6f7be963c8a80b9810a21705b3e974e6d963c8 | a0081b288fcd4b16182d5b4d278d9a3ed17148c1 | refs/heads/master | 2022-06-17T14:53:29.504476 | 2020-05-12T17:20:09 | 2020-05-12T17:20:09 | 257,349,180 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,415 | py | def read_file(filename): # funtion for read file
file = open(filename, 'r')
content = list(map(int, file.read().split(',')))
print(content)
file.close()
return content
def save_file(filename, content):  # function to save a file
file = open(filename, 'w')
file.write(content)
file.close()
return
def add_bst(i, index=0):  # function to add a value to the binary search tree
    global out_list
    out_list += ['null' for _ in range((index - len(out_list)) + 1)]  # extend the tree
    if out_list[index] == 'null':  # if the position is empty, assign the node value
        out_list[index] = i
    else:
        if i < out_list[index]:  # check whether the child value is less than the parent value
            l_nod = (index * 2 + 1)  # left node index
            add_bst(i, l_nod)  # if smaller, send the value to the left subtree
        else:
            r_nod = (index * 2 + 2)  # right node index
            add_bst(i, r_nod)  # otherwise, send the value to the right subtree
filename = input() # get file name from terminal
main_list = read_file(filename) # get user file content as list
out_list = [] # create a empty tree list
for i in main_list:  # iterate over each value in the list
    add_bst(i)  # send the value to the tree to place it at the proper node
content = ' '.join(list(map(str, out_list)))  # build the output string
save_file('result.txt', content) # save the content
| [
"noreply@github.com"
] | noreply@github.com |
00fdb46cfc5b0b82858263c76bdbebc1112d5445 | 9222357cc34a687321ee25e750f9aae6aa1989f5 | /alpaca_server/setup.py | e97fc25ced3488e8712da374e64f82210decb2fd | [] | no_license | codefly13/AlpacaTag | 0ad177d3ea22c2dda7849ee1ca0ab4654b2b4d7d | 9f34d495ba0d94fbf9b50e38f39d17113214007e | refs/heads/master | 2022-04-08T21:53:37.513880 | 2020-03-24T06:02:05 | 2020-03-24T06:02:05 | 259,337,840 | 1 | 0 | null | 2020-04-27T13:51:54 | 2020-04-27T13:51:54 | null | UTF-8 | Python | false | false | 1,168 | py | from os import path
from setuptools import setup, find_packages
# setup metainfo
libinfo_py = path.join('alpaca_serving', '__init__.py')
libinfo_content = open(libinfo_py, 'r').readlines()
version_line = [l.strip() for l in libinfo_content if l.startswith('__version__')][0]
exec(version_line) # produce __version__
with open('requirements.txt') as f:
require_packages = [line[:-1] if line[-1] == '\n' else line for line in f]
setup(
name='alpaca-serving-server',
version=__version__,
description='AlpacaTag - server side',
url='https://github.com/INK-USC/AlpacaTag',
author='Bill Yuchen Lin, Dong-Ho Lee',
author_email='dongho.lee@usc.edu',
license='MIT',
packages=find_packages(),
zip_safe=False,
install_requires=require_packages,
classifiers=(
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
),
entry_points={
'console_scripts': [
'alpaca-serving-start=alpaca_serving.server:main']
},
keywords='alpacatag',
) | [
"danny911kr@naver.com"
] | danny911kr@naver.com |
8fd2424a8d46f18ffec8634804ad72d2101d76a1 | 8be1a938a7554bfa6c4c90f7734380bdb23c837a | /A3/systemctl (1)/1systems/B4p2p/pserver.py | 9b1cd15c4467b82361715e1c795f3f8bbe2a249e | [] | no_license | roninx991/Computer-Networks-TE | 4ddd64d544ebdaaa38d8fcc00c937bd4524806eb | f5c5b9952ae8926b6c71807cb2e7b2149802e5f6 | refs/heads/master | 2022-03-14T13:07:27.024369 | 2017-11-17T08:19:43 | 2017-11-17T08:19:43 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 496 | py | import sys
import socket
def server():
host = ''
port = 2000
serversock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
serversock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serversock.bind((host,port))
serversock.listen(1)
newserversock,addr = serversock.accept()
print "connected to :" + str(addr)
while 1:
data = newserversock.recv(1024)
print "\nClient says : " +data
msg = raw_input("\nsend message :")
newserversock.send(msg)
server()
| [
"sangatdas5@gmail.com"
] | sangatdas5@gmail.com |
b3519ec67a53f9f13aa52c53f567d1d776edbd44 | 7f0710a8aea6f41044ee29651c32ca9eebfe7715 | /keys.py | 51be492a55aba4d87e010bbd3ee80b05cf282ec6 | [
"MIT"
] | permissive | KennethD96/IPX | 5d0f0aea471d4997404e1064c878f2b1d1ec0caa | 971b8197ab6d7f84c961b2f613311e98451ff992 | refs/heads/master | 2021-01-15T19:06:50.920263 | 2014-12-11T18:29:36 | 2014-12-11T18:29:36 | 99,811,855 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,821 | py | ####
# The code within this file is copyrighted by its original creator,
# and obtained from the following URL. It is thus not covered by this
# project's license.
#
# Source: http://stackoverflow.com/a/13615802
##
import time
import ctypes
SendInput = ctypes.windll.user32.SendInput
# C struct redefinitions
PUL = ctypes.POINTER(ctypes.c_ulong)
class KeyBdInput(ctypes.Structure):
_fields_ = [("wVk", ctypes.c_ushort),
("wScan", ctypes.c_ushort),
("dwFlags", ctypes.c_ulong),
("time", ctypes.c_ulong),
("dwExtraInfo", PUL)]
class HardwareInput(ctypes.Structure):
_fields_ = [("uMsg", ctypes.c_ulong),
("wParamL", ctypes.c_short),
("wParamH", ctypes.c_ushort)]
class MouseInput(ctypes.Structure):
_fields_ = [("dx", ctypes.c_long),
("dy", ctypes.c_long),
("mouseData", ctypes.c_ulong),
("dwFlags", ctypes.c_ulong),
("time",ctypes.c_ulong),
("dwExtraInfo", PUL)]
class Input_I(ctypes.Union):
_fields_ = [("ki", KeyBdInput),
("mi", MouseInput),
("hi", HardwareInput)]
class Input(ctypes.Structure):
_fields_ = [("type", ctypes.c_ulong),
("ii", Input_I)]
# Actuals Functions
def PressKey(hexKeyCode):
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))
def ReleaseKey(hexKeyCode):
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0x0002, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
SendInput(1, ctypes.pointer(x), ctypes.sizeof(x)) | [
"simen@lybekk.no"
] | simen@lybekk.no |
d2532abf4853a00083e27afaff4c1ee3781091b1 | b92d59b1d78276c2f642b640fbb495fa85e222c9 | /debugger_tools/sitepackages_libs/pygments/styles/borland.py | e7494b0b556f302ed49dc2d3d8cb49a7d0731409 | [] | no_license | tin2tin/weed | d69d27ed9fb0273d0bbcbcf6941d9d9bfd4bbb44 | dade41a9f6e82a493d4817d53a5af3dcdf31f21c | refs/heads/master | 2020-12-27T06:21:09.047047 | 2020-02-02T18:03:15 | 2020-02-02T18:03:15 | 237,793,836 | 0 | 0 | null | 2020-02-02T15:43:25 | 2020-02-02T15:43:25 | null | UTF-8 | Python | false | false | 1,613 | py | # -*- coding: utf-8 -*-
"""
pygments.styles.borland
~~~~~~~~~~~~~~~~~~~~~~~
Style similar to the style used in the Borland IDEs.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace
class BorlandStyle(Style):
"""
Style similar to the style used in the borland IDEs.
"""
default_style = ''
styles = {
Whitespace: '#bbbbbb',
Comment: 'italic #008800',
Comment.Preproc: 'noitalic #008080',
Comment.Special: 'noitalic bold',
String: '#0000FF',
String.Char: '#800080',
Number: '#0000FF',
Keyword: 'bold #000080',
Operator.Word: 'bold',
Name.Tag: 'bold #000080',
Name.Attribute: '#FF0000',
Generic.Heading: '#999999',
Generic.Subheading: '#aaaaaa',
Generic.Deleted: 'bg:#ffdddd #000000',
Generic.Inserted: 'bg:#ddffdd #000000',
Generic.Error: '#aa0000',
Generic.Emph: 'italic',
Generic.Strong: 'bold',
Generic.Prompt: '#555555',
Generic.Output: '#888888',
Generic.Traceback: '#aa0000',
Error: 'bg:#e3d2d2 #a61717'
}
| [
"cristian@blender.cl"
] | cristian@blender.cl |
1e0e33a438ef73e28874fea87773b1cf5ca40c49 | 03b7823d164d126d40189305f5782cb24befeb72 | /detection.py | a092028d36973e02ab46541bea4abcb236e2aa79 | [] | no_license | doanab/person-detection | 518f6e4714baddcf0b6058275c5da29fcf7a5d47 | efb535547261fab6f8d2efe66f229103ce460e53 | refs/heads/master | 2021-03-24T17:08:39.189508 | 2020-03-15T21:19:20 | 2020-03-15T21:19:20 | 247,551,996 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,744 | py | # Them cac goi thu vien
from imutils.object_detection import non_max_suppression
from imutils import paths
import numpy as np
import argparse
import imutils
import cv2
# Construct the argument parser
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--images", required=True, help="path to images directory")
args = vars(ap.parse_args())
# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
# loop over the image paths
for imagePath in paths.list_images(args["images"]):
# load the image and resize it to (1) reduce detection time
# and (2) improve detection accuracy
image = cv2.imread(imagePath)
image = imutils.resize(image, width=min(400, image.shape[1]))
orig = image.copy()
# detect people in the image
(rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
padding=(8, 8), scale=1.05)
# draw the original bounding boxes
for (x, y, w, h) in rects:
cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)
# apply non-maxima suppression to the bounding boxes using a
# fairly large overlap threshold to try to maintain overlapping
# boxes that are still people
rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)
# draw the final bounding boxes
for (xA, yA, xB, yB) in pick:
cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)
# show some information on the number of bounding boxes
filename = imagePath[imagePath.rfind("/") + 1:]
print("[INFO] {}: {} original boxes, {} after suppression".format(
filename, len(rects), len(pick)))
# show the output images
cv2.imshow("Before NMS", orig)
cv2.imshow("After NMS", image)
cv2.waitKey(0)
| [
"doanbebk2018@gmail.com"
] | doanbebk2018@gmail.com |
307953eec58975a8509f55196d62a9976bd07307 | f02cdc9bb054fe82895cbedba09dcb6284de0d88 | /driver.py | 3c9a97a16ea674fc7746a60848542fd3dff61d5a | [] | no_license | Nagesh-a5/A04 | bc2edd49bfe36a70d7653bf466fb085d204ed5cd | 725ee803cb211e5ff06a6e79dee5bdfd836d17c1 | refs/heads/master | 2020-05-18T01:45:24.575301 | 2019-05-08T04:41:48 | 2019-05-08T04:41:48 | 184,098,094 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,739 | py | import os
import virtual
def read_file():
tasks=[]
for path, dirs, files in os.walk('vm_snapshots'):
for f in files:
fin=open(path+'\\'+f, 'r')
data=fin.read()
tasks.append([f, data])
return tasks
def get_params(str):
name, ext=str.split('.')
return [int(s) for s in name.split('_')[1:]]
def get_memory_accesses(str):
accesses=str.split(' ')
seq=[]
for access in accesses:
if access=='' : continue
if len(access.split(','))!=2 : continue
proc_id, address=access.split(',')
seq.append((int(proc_id), int(address, 16)))
return seq
if __name__=='__main__':
tasks=read_file()
for task in tasks:
filename, seq=task
file_no, np, VM_size, PM_size=get_params(filename)
print('Simulating file', file_no, '...\n')
seq=get_memory_accesses(seq)
print('PM size =', PM_size, ' VM size=', VM_size, ' count of process=', np)
fifo_faults=virtual.FIFO(VM_size, PM_size, np, seq)
lru_faults=virtual.LRU(VM_size, PM_size, np, seq)
lfu_faults=virtual.LFU(VM_size, PM_size, np, seq)
opt_faults=virtual.OPTIMAL(VM_size, PM_size, np, seq)
rnd_faults=virtual.Random(VM_size, PM_size, np, seq)
for i in range(np):
print('FIFO PROCESS '+str(i)+' '+str(fifo_faults[i]))
print('LRU PROCESS '+str(i)+' '+str(lru_faults[i]))
print('LFU PROCESS '+str(i)+' '+str(lfu_faults[i]))
print('OPTIMAL PROCESS '+str(i)+' '+str(opt_faults[i]))
print('RANDOM PROCESS '+str(i)+' '+str(rnd_faults[i]))
print('')
| [
"noreply@github.com"
] | noreply@github.com |
c41845008487c6093767f0afb04faa5273492412 | 2729fff7cb053d2577985d38c8962043ee9f853d | /bokeh/sampledata/tests/test_perceptions.py | fd7da155b0a33f279e0b263f36c4dea9d66e1929 | [
"BSD-3-Clause"
] | permissive | modster/bokeh | 2c78c5051fa9cac48c8c2ae7345eafc54b426fbd | 60fce9003aaa618751c9b8a3133c95688073ea0b | refs/heads/master | 2020-03-29T01:13:35.740491 | 2018-09-18T06:08:59 | 2018-09-18T06:08:59 | 149,377,781 | 1 | 0 | BSD-3-Clause | 2018-09-19T02:02:49 | 2018-09-19T02:02:49 | null | UTF-8 | Python | false | false | 2,212 | py | #-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2017, Anaconda, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
from __future__ import absolute_import, division, print_function, unicode_literals
import pytest ; pytest
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Standard library imports
# External imports
# Bokeh imports
from bokeh._testing.util.api import verify_all
# Module under test
#import bokeh.sampledata.perceptions as bsp
#-----------------------------------------------------------------------------
# Setup
#-----------------------------------------------------------------------------
ALL = (
'numberly',
'probly',
)
#-----------------------------------------------------------------------------
# General API
#-----------------------------------------------------------------------------
Test___all__ = pytest.mark.sampledata(verify_all("bokeh.sampledata.perceptions", ALL))
@pytest.mark.sampledata
def test_numberly(pd):
import bokeh.sampledata.perceptions as bsp
assert isinstance(bsp.numberly, pd.DataFrame)
# check detail for package data
assert len(bsp.numberly) == 46
@pytest.mark.sampledata
def test_probly(pd):
import bokeh.sampledata.perceptions as bsp
assert isinstance(bsp.probly, pd.DataFrame)
# check detail for package data
assert len(bsp.probly) == 46
#-----------------------------------------------------------------------------
# Dev API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
| [
"noreply@github.com"
] | noreply@github.com |
9fd8ade44c4ae07271c9aec55135a23ab06f9c4d | ed56a197b12d8e5cfca16cd314b188353aa46ebd | /scripts/get_rdkit_descriptors_for_all_datasets.py | 871681ff454e6e92507e593ffea0ca5fd0e8f191 | [] | no_license | GeorgeBatch/moleculenet | b0698f8cabb2cb65d3399d313e4e5bddad03c971 | a72cf362e0002261e8e1de1bfbcb65efac315437 | refs/heads/master | 2023-02-07T16:25:44.435430 | 2020-12-23T12:35:31 | 2020-12-23T12:35:31 | 278,653,042 | 2 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,603 | py | # Import modules
import numpy as np
import pandas as pd
import time
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import RDLogger
from rdkit.Chem import Descriptors
# record time to show after execution of the script
since = time.time()
# For each dataset and smile_type combination, get the RDKit features from the SMILES strings
for dataset in ['esol', 'freesolv', 'lipophilicity']:
for smile_type in ['original']:
        # tell the user what is happening
print(f'Working on {dataset} dataset, {smile_type} smile_type...')
## Load Data
data = pd.read_csv(f'../data/{dataset}_{smile_type}_IdSmilesLabels.csv', index_col=0)
smiles = data['smiles']
## Get RDKit Molecular descriptors
# load ligands and compute features
features = {}
descriptors = {d[0]: d[1] for d in Descriptors.descList}
for index in smiles.index:
mol = Chem.MolFromSmiles(smiles.loc[index])
# how exactly do we add hydrogens here???
mol = Chem.AddHs(mol)
try:
features[index] = {d: descriptors[d](mol) for d in descriptors}
except ValueError as e:
print(e)
continue
features = pd.DataFrame.from_dict(features).T
# save file
file_path = f'../data/{dataset}_{smile_type}_rdkit_features.csv'
features.to_csv(file_path, index=True)
print(f'Saved file to: {file_path}\n')
time_elapsed = time.time() - since
print(f'Task completed in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s \n')
| [
"george.batchkala@gmail.com"
] | george.batchkala@gmail.com |
5b204d08c098c4b40f0d851978139606c080d14f | bc42b64d28ef03622cc1de3fc22d25c0f4ef5e6d | /bsPipe/bs_ui/bsui_asset/bsui_assetManager.py | 46faf6eed6c45bb3cb225d4dee63c07c69b40c93 | [] | no_license | amolrupnar/bsPipe | fcc032f5c74c12497291e3ee1f582d4cc728ad22 | db9c0039e7fae8d856f7b3f34c2bca18fd588272 | refs/heads/master | 2021-08-30T20:12:53.319618 | 2017-12-19T08:54:17 | 2017-12-19T08:54:17 | 113,181,189 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 24,429 | py | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'D:/bsw_programation/01_maya/Pipeline/bsPipe/bsPipe/bs_ui/bsui_asset/bsui_assetManager.ui'
#
# Created: Tue Dec 19 12:41:12 2017
# by: pyside-uic 0.2.14 running on PySide 1.2.0
#
# WARNING! All changes made in this file will be lost!
from PySide import QtCore, QtGui
class Ui_bs_assetManagerMainWin(object):
def setupUi(self, bs_assetManagerMainWin):
bs_assetManagerMainWin.setObjectName("bs_assetManagerMainWin")
bs_assetManagerMainWin.resize(882, 740)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(bs_assetManagerMainWin.sizePolicy().hasHeightForWidth())
bs_assetManagerMainWin.setSizePolicy(sizePolicy)
bs_assetManagerMainWin.setMinimumSize(QtCore.QSize(882, 740))
bs_assetManagerMainWin.setMaximumSize(QtCore.QSize(882, 740))
self.centralwidget = QtGui.QWidget(bs_assetManagerMainWin)
self.centralwidget.setObjectName("centralwidget")
self.layoutWidget = QtGui.QWidget(self.centralwidget)
self.layoutWidget.setGeometry(QtCore.QRect(1, 11, 871, 118))
self.layoutWidget.setObjectName("layoutWidget")
self.verticalLayout_4 = QtGui.QVBoxLayout(self.layoutWidget)
self.verticalLayout_4.setContentsMargins(0, 0, 0, 0)
self.verticalLayout_4.setObjectName("verticalLayout_4")
self.logo_lbl = QtGui.QLabel(self.layoutWidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(1)
sizePolicy.setHeightForWidth(self.logo_lbl.sizePolicy().hasHeightForWidth())
self.logo_lbl.setSizePolicy(sizePolicy)
self.logo_lbl.setMinimumSize(QtCore.QSize(0, 50))
self.logo_lbl.setMaximumSize(QtCore.QSize(16777215, 50))
self.logo_lbl.setAlignment(QtCore.Qt.AlignCenter)
self.logo_lbl.setObjectName("logo_lbl")
self.verticalLayout_4.addWidget(self.logo_lbl)
self.line_3 = QtGui.QFrame(self.layoutWidget)
self.line_3.setFrameShape(QtGui.QFrame.HLine)
self.line_3.setFrameShadow(QtGui.QFrame.Sunken)
self.line_3.setObjectName("line_3")
self.verticalLayout_4.addWidget(self.line_3)
self.horizontalLayout_3 = QtGui.QHBoxLayout()
self.horizontalLayout_3.setContentsMargins(5, -1, -1, -1)
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.episode_lbl = QtGui.QLabel(self.layoutWidget)
self.episode_lbl.setObjectName("episode_lbl")
self.horizontalLayout_3.addWidget(self.episode_lbl)
self.episode_cBox = QtGui.QComboBox(self.layoutWidget)
self.episode_cBox.setObjectName("episode_cBox")
self.horizontalLayout_3.addWidget(self.episode_cBox)
self.line_5 = QtGui.QFrame(self.layoutWidget)
self.line_5.setFrameShape(QtGui.QFrame.VLine)
self.line_5.setFrameShadow(QtGui.QFrame.Sunken)
self.line_5.setObjectName("line_5")
self.horizontalLayout_3.addWidget(self.line_5)
self.label_7 = QtGui.QLabel(self.layoutWidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label_7.sizePolicy().hasHeightForWidth())
self.label_7.setSizePolicy(sizePolicy)
self.label_7.setObjectName("label_7")
self.horizontalLayout_3.addWidget(self.label_7)
self.line = QtGui.QFrame(self.layoutWidget)
self.line.setFrameShape(QtGui.QFrame.VLine)
self.line.setFrameShadow(QtGui.QFrame.Sunken)
self.line.setObjectName("line")
self.horizontalLayout_3.addWidget(self.line)
self.model_RB = QtGui.QRadioButton(self.layoutWidget)
self.model_RB.setChecked(True)
self.model_RB.setObjectName("model_RB")
self.horizontalLayout_3.addWidget(self.model_RB)
self.texture_RB = QtGui.QRadioButton(self.layoutWidget)
self.texture_RB.setObjectName("texture_RB")
self.horizontalLayout_3.addWidget(self.texture_RB)
self.rig_RB = QtGui.QRadioButton(self.layoutWidget)
self.rig_RB.setObjectName("rig_RB")
self.horizontalLayout_3.addWidget(self.rig_RB)
self.light_RB = QtGui.QRadioButton(self.layoutWidget)
self.light_RB.setObjectName("light_RB")
self.horizontalLayout_3.addWidget(self.light_RB)
self.fx_RB = QtGui.QRadioButton(self.layoutWidget)
self.fx_RB.setObjectName("fx_RB")
self.horizontalLayout_3.addWidget(self.fx_RB)
self.line_2 = QtGui.QFrame(self.layoutWidget)
self.line_2.setFrameShape(QtGui.QFrame.VLine)
self.line_2.setFrameShadow(QtGui.QFrame.Sunken)
self.line_2.setObjectName("line_2")
self.horizontalLayout_3.addWidget(self.line_2)
spacerItem = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
self.horizontalLayout_3.addItem(spacerItem)
self.bsAm_refresh_pb = QtGui.QPushButton(self.layoutWidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_refresh_pb.sizePolicy().hasHeightForWidth())
self.bsAm_refresh_pb.setSizePolicy(sizePolicy)
self.bsAm_refresh_pb.setMinimumSize(QtCore.QSize(0, 40))
self.bsAm_refresh_pb.setMaximumSize(QtCore.QSize(16777215, 40))
self.bsAm_refresh_pb.setObjectName("bsAm_refresh_pb")
self.horizontalLayout_3.addWidget(self.bsAm_refresh_pb)
self.verticalLayout_4.addLayout(self.horizontalLayout_3)
self.line_4 = QtGui.QFrame(self.layoutWidget)
self.line_4.setFrameShape(QtGui.QFrame.HLine)
self.line_4.setFrameShadow(QtGui.QFrame.Sunken)
self.line_4.setObjectName("line_4")
self.verticalLayout_4.addWidget(self.line_4)
self.layoutWidget1 = QtGui.QWidget(self.centralwidget)
self.layoutWidget1.setGeometry(QtCore.QRect(1, 140, 871, 511))
self.layoutWidget1.setObjectName("layoutWidget1")
self.horizontalLayout_2 = QtGui.QHBoxLayout(self.layoutWidget1)
self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.verticalLayout = QtGui.QVBoxLayout()
self.verticalLayout.setContentsMargins(5, -1, -1, -1)
self.verticalLayout.setObjectName("verticalLayout")
self.bsAm_asset_tw = QtGui.QTabWidget(self.layoutWidget1)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_asset_tw.sizePolicy().hasHeightForWidth())
self.bsAm_asset_tw.setSizePolicy(sizePolicy)
self.bsAm_asset_tw.setMinimumSize(QtCore.QSize(310, 425))
self.bsAm_asset_tw.setMaximumSize(QtCore.QSize(300, 500))
self.bsAm_asset_tw.setObjectName("bsAm_asset_tw")
self.chars_tab = QtGui.QWidget()
self.chars_tab.setObjectName("chars_tab")
self.bsAm_chars_lw = QtGui.QListWidget(self.chars_tab)
self.bsAm_chars_lw.setGeometry(QtCore.QRect(0, 40, 300, 425))
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_chars_lw.sizePolicy().hasHeightForWidth())
self.bsAm_chars_lw.setSizePolicy(sizePolicy)
self.bsAm_chars_lw.setMinimumSize(QtCore.QSize(300, 425))
self.bsAm_chars_lw.setMaximumSize(QtCore.QSize(300, 425))
self.bsAm_chars_lw.setObjectName("bsAm_chars_lw")
self.bsAm_charSearch_LE = QtGui.QLineEdit(self.chars_tab)
self.bsAm_charSearch_LE.setGeometry(QtCore.QRect(0, 10, 300, 21))
self.bsAm_charSearch_LE.setMinimumSize(QtCore.QSize(300, 21))
self.bsAm_charSearch_LE.setMaximumSize(QtCore.QSize(300, 21))
self.bsAm_charSearch_LE.setObjectName("bsAm_charSearch_LE")
self.bsAm_asset_tw.addTab(self.chars_tab, "")
self.props_tab = QtGui.QWidget()
self.props_tab.setObjectName("props_tab")
self.bsAm_props_lw = QtGui.QListWidget(self.props_tab)
self.bsAm_props_lw.setGeometry(QtCore.QRect(0, 40, 300, 425))
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_props_lw.sizePolicy().hasHeightForWidth())
self.bsAm_props_lw.setSizePolicy(sizePolicy)
self.bsAm_props_lw.setMinimumSize(QtCore.QSize(300, 425))
self.bsAm_props_lw.setMaximumSize(QtCore.QSize(300, 425))
self.bsAm_props_lw.setObjectName("bsAm_props_lw")
self.bsAm_propSearch_LE = QtGui.QLineEdit(self.props_tab)
self.bsAm_propSearch_LE.setGeometry(QtCore.QRect(0, 10, 300, 21))
self.bsAm_propSearch_LE.setMinimumSize(QtCore.QSize(300, 21))
self.bsAm_propSearch_LE.setMaximumSize(QtCore.QSize(300, 21))
self.bsAm_propSearch_LE.setObjectName("bsAm_propSearch_LE")
self.bsAm_asset_tw.addTab(self.props_tab, "")
self.sets_tab = QtGui.QWidget()
self.sets_tab.setObjectName("sets_tab")
self.bsAm_sets_lw = QtGui.QListWidget(self.sets_tab)
self.bsAm_sets_lw.setGeometry(QtCore.QRect(0, 40, 300, 425))
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_sets_lw.sizePolicy().hasHeightForWidth())
self.bsAm_sets_lw.setSizePolicy(sizePolicy)
self.bsAm_sets_lw.setMinimumSize(QtCore.QSize(300, 425))
self.bsAm_sets_lw.setMaximumSize(QtCore.QSize(300, 425))
self.bsAm_sets_lw.setObjectName("bsAm_sets_lw")
self.bsAm_setSearch_LE = QtGui.QLineEdit(self.sets_tab)
self.bsAm_setSearch_LE.setGeometry(QtCore.QRect(0, 10, 300, 21))
self.bsAm_setSearch_LE.setMinimumSize(QtCore.QSize(300, 21))
self.bsAm_setSearch_LE.setMaximumSize(QtCore.QSize(300, 21))
self.bsAm_setSearch_LE.setObjectName("bsAm_setSearch_LE")
self.bsAm_asset_tw.addTab(self.sets_tab, "")
self.setElements_tab = QtGui.QWidget()
self.setElements_tab.setObjectName("setElements_tab")
self.bsAm_setElems_lw = QtGui.QListWidget(self.setElements_tab)
self.bsAm_setElems_lw.setGeometry(QtCore.QRect(0, 40, 300, 425))
self.bsAm_setElems_lw.setMinimumSize(QtCore.QSize(300, 425))
self.bsAm_setElems_lw.setMaximumSize(QtCore.QSize(300, 425))
self.bsAm_setElems_lw.setObjectName("bsAm_setElems_lw")
self.bsAm_setElementSearch_LE = QtGui.QLineEdit(self.setElements_tab)
self.bsAm_setElementSearch_LE.setGeometry(QtCore.QRect(0, 10, 300, 21))
self.bsAm_setElementSearch_LE.setMinimumSize(QtCore.QSize(300, 21))
self.bsAm_setElementSearch_LE.setMaximumSize(QtCore.QSize(300, 21))
self.bsAm_setElementSearch_LE.setObjectName("bsAm_setElementSearch_LE")
self.bsAm_asset_tw.addTab(self.setElements_tab, "")
self.vehicles_tab = QtGui.QWidget()
self.vehicles_tab.setObjectName("vehicles_tab")
self.bsAm_vehicles_lw = QtGui.QListWidget(self.vehicles_tab)
self.bsAm_vehicles_lw.setGeometry(QtCore.QRect(0, 40, 300, 425))
self.bsAm_vehicles_lw.setMinimumSize(QtCore.QSize(300, 425))
self.bsAm_vehicles_lw.setMaximumSize(QtCore.QSize(300, 425))
self.bsAm_vehicles_lw.setObjectName("bsAm_vehicles_lw")
self.bsAm_vehicleSearch_LE = QtGui.QLineEdit(self.vehicles_tab)
self.bsAm_vehicleSearch_LE.setGeometry(QtCore.QRect(0, 10, 300, 21))
self.bsAm_vehicleSearch_LE.setMinimumSize(QtCore.QSize(300, 21))
self.bsAm_vehicleSearch_LE.setMaximumSize(QtCore.QSize(300, 21))
self.bsAm_vehicleSearch_LE.setObjectName("bsAm_vehicleSearch_LE")
self.bsAm_asset_tw.addTab(self.vehicles_tab, "")
self.verticalLayout.addWidget(self.bsAm_asset_tw)
self.horizontalLayout_2.addLayout(self.verticalLayout)
self.verticalLayout_3 = QtGui.QVBoxLayout()
self.verticalLayout_3.setObjectName("verticalLayout_3")
self.label_5 = QtGui.QLabel(self.layoutWidget1)
self.label_5.setAlignment(QtCore.Qt.AlignCenter)
self.label_5.setObjectName("label_5")
self.verticalLayout_3.addWidget(self.label_5)
self.bsAm_assetVersions_lw = QtGui.QListWidget(self.layoutWidget1)
self.bsAm_assetVersions_lw.setObjectName("bsAm_assetVersions_lw")
self.verticalLayout_3.addWidget(self.bsAm_assetVersions_lw)
self.horizontalLayout_2.addLayout(self.verticalLayout_3)
self.verticalLayout_2 = QtGui.QVBoxLayout()
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.label_6 = QtGui.QLabel(self.layoutWidget1)
self.label_6.setAlignment(QtCore.Qt.AlignCenter)
self.label_6.setObjectName("label_6")
self.verticalLayout_2.addWidget(self.label_6)
self.imageView_lbl = QtGui.QLabel(self.layoutWidget1)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.imageView_lbl.sizePolicy().hasHeightForWidth())
self.imageView_lbl.setSizePolicy(sizePolicy)
self.imageView_lbl.setMinimumSize(QtCore.QSize(250, 250))
self.imageView_lbl.setMaximumSize(QtCore.QSize(250, 250))
self.imageView_lbl.setAlignment(QtCore.Qt.AlignCenter)
self.imageView_lbl.setObjectName("imageView_lbl")
self.verticalLayout_2.addWidget(self.imageView_lbl)
self.gridLayout = QtGui.QGridLayout()
self.gridLayout.setObjectName("gridLayout")
self.owner_lbl = QtGui.QLabel(self.layoutWidget1)
self.owner_lbl.setText("")
self.owner_lbl.setObjectName("owner_lbl")
self.gridLayout.addWidget(self.owner_lbl, 2, 1, 1, 1)
self.label = QtGui.QLabel(self.layoutWidget1)
self.label.setObjectName("label")
self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
self.name_lbl = QtGui.QLabel(self.layoutWidget1)
self.name_lbl.setText("")
self.name_lbl.setObjectName("name_lbl")
self.gridLayout.addWidget(self.name_lbl, 0, 1, 1, 1)
self.type_lbl = QtGui.QLabel(self.layoutWidget1)
self.type_lbl.setText("")
self.type_lbl.setObjectName("type_lbl")
self.gridLayout.addWidget(self.type_lbl, 1, 1, 1, 1)
self.label_2 = QtGui.QLabel(self.layoutWidget1)
self.label_2.setObjectName("label_2")
self.gridLayout.addWidget(self.label_2, 1, 0, 1, 1)
self.label_3 = QtGui.QLabel(self.layoutWidget1)
self.label_3.setObjectName("label_3")
self.gridLayout.addWidget(self.label_3, 2, 0, 1, 1)
self.label_8 = QtGui.QLabel(self.layoutWidget1)
self.label_8.setObjectName("label_8")
self.gridLayout.addWidget(self.label_8, 3, 0, 1, 1)
self.label_9 = QtGui.QLabel(self.layoutWidget1)
self.label_9.setObjectName("label_9")
self.gridLayout.addWidget(self.label_9, 4, 0, 1, 1)
self.size_lbl = QtGui.QLabel(self.layoutWidget1)
self.size_lbl.setText("")
self.size_lbl.setObjectName("size_lbl")
self.gridLayout.addWidget(self.size_lbl, 3, 1, 1, 1)
self.time_lbl = QtGui.QLabel(self.layoutWidget1)
self.time_lbl.setText("")
self.time_lbl.setObjectName("time_lbl")
self.gridLayout.addWidget(self.time_lbl, 4, 1, 1, 1)
self.verticalLayout_2.addLayout(self.gridLayout)
self.label_4 = QtGui.QLabel(self.layoutWidget1)
self.label_4.setAlignment(QtCore.Qt.AlignCenter)
self.label_4.setObjectName("label_4")
self.verticalLayout_2.addWidget(self.label_4)
self.bsAm_versionInfo_TE = QtGui.QTextEdit(self.layoutWidget1)
self.bsAm_versionInfo_TE.setObjectName("bsAm_versionInfo_TE")
self.verticalLayout_2.addWidget(self.bsAm_versionInfo_TE)
self.horizontalLayout_2.addLayout(self.verticalLayout_2)
self.layoutWidget2 = QtGui.QWidget(self.centralwidget)
self.layoutWidget2.setGeometry(QtCore.QRect(0, 660, 871, 42))
self.layoutWidget2.setObjectName("layoutWidget2")
self.horizontalLayout = QtGui.QHBoxLayout(self.layoutWidget2)
self.horizontalLayout.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout.setObjectName("horizontalLayout")
self.bsAm_open_pb = QtGui.QPushButton(self.layoutWidget2)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_open_pb.sizePolicy().hasHeightForWidth())
self.bsAm_open_pb.setSizePolicy(sizePolicy)
self.bsAm_open_pb.setMinimumSize(QtCore.QSize(0, 30))
self.bsAm_open_pb.setMaximumSize(QtCore.QSize(16777215, 30))
self.bsAm_open_pb.setObjectName("bsAm_open_pb")
self.horizontalLayout.addWidget(self.bsAm_open_pb)
self.bsAm_import_pb = QtGui.QPushButton(self.layoutWidget2)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_import_pb.sizePolicy().hasHeightForWidth())
self.bsAm_import_pb.setSizePolicy(sizePolicy)
self.bsAm_import_pb.setMinimumSize(QtCore.QSize(0, 30))
self.bsAm_import_pb.setMaximumSize(QtCore.QSize(16777215, 30))
self.bsAm_import_pb.setObjectName("bsAm_import_pb")
self.horizontalLayout.addWidget(self.bsAm_import_pb)
self.bsAm_reference_pb = QtGui.QPushButton(self.layoutWidget2)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.bsAm_reference_pb.sizePolicy().hasHeightForWidth())
self.bsAm_reference_pb.setSizePolicy(sizePolicy)
self.bsAm_reference_pb.setMinimumSize(QtCore.QSize(0, 30))
self.bsAm_reference_pb.setMaximumSize(QtCore.QSize(16777215, 30))
self.bsAm_reference_pb.setObjectName("bsAm_reference_pb")
self.horizontalLayout.addWidget(self.bsAm_reference_pb)
bs_assetManagerMainWin.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(bs_assetManagerMainWin)
self.menubar.setGeometry(QtCore.QRect(0, 0, 882, 21))
self.menubar.setObjectName("menubar")
self.menuMenu = QtGui.QMenu(self.menubar)
self.menuMenu.setObjectName("menuMenu")
bs_assetManagerMainWin.setMenuBar(self.menubar)
self.actionExit = QtGui.QAction(bs_assetManagerMainWin)
self.actionExit.setObjectName("actionExit")
self.menuMenu.addAction(self.actionExit)
self.menubar.addAction(self.menuMenu.menuAction())
self.retranslateUi(bs_assetManagerMainWin)
self.bsAm_asset_tw.setCurrentIndex(0)
QtCore.QObject.connect(self.actionExit, QtCore.SIGNAL("triggered()"), bs_assetManagerMainWin.close)
QtCore.QMetaObject.connectSlotsByName(bs_assetManagerMainWin)
def retranslateUi(self, bs_assetManagerMainWin):
bs_assetManagerMainWin.setWindowTitle(QtGui.QApplication.translate("bs_assetManagerMainWin", "Bioscopewala Asset Manager", None, QtGui.QApplication.UnicodeUTF8))
self.logo_lbl.setStatusTip(QtGui.QApplication.translate("bs_assetManagerMainWin", "ui logo", None, QtGui.QApplication.UnicodeUTF8))
self.logo_lbl.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "logo", None, QtGui.QApplication.UnicodeUTF8))
self.episode_lbl.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Episode:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_7.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", " Asset Type:-", None, QtGui.QApplication.UnicodeUTF8))
self.model_RB.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Model", None, QtGui.QApplication.UnicodeUTF8))
self.texture_RB.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Texture", None, QtGui.QApplication.UnicodeUTF8))
self.rig_RB.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Rig", None, QtGui.QApplication.UnicodeUTF8))
self.light_RB.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Light", None, QtGui.QApplication.UnicodeUTF8))
self.fx_RB.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "FX", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_refresh_pb.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Refresh", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_asset_tw.setTabText(self.bsAm_asset_tw.indexOf(self.chars_tab), QtGui.QApplication.translate("bs_assetManagerMainWin", "Character", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_asset_tw.setTabText(self.bsAm_asset_tw.indexOf(self.props_tab), QtGui.QApplication.translate("bs_assetManagerMainWin", "Prop", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_asset_tw.setTabText(self.bsAm_asset_tw.indexOf(self.sets_tab), QtGui.QApplication.translate("bs_assetManagerMainWin", "Set", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_asset_tw.setTabText(self.bsAm_asset_tw.indexOf(self.setElements_tab), QtGui.QApplication.translate("bs_assetManagerMainWin", "SetElement", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_asset_tw.setTabText(self.bsAm_asset_tw.indexOf(self.vehicles_tab), QtGui.QApplication.translate("bs_assetManagerMainWin", "Vehicle", None, QtGui.QApplication.UnicodeUTF8))
self.label_5.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "All Versions", None, QtGui.QApplication.UnicodeUTF8))
self.label_6.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Screenshot", None, QtGui.QApplication.UnicodeUTF8))
self.imageView_lbl.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "screenshot", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Name:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Type:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_3.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Owner:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_8.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Size:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_9.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Time:-", None, QtGui.QApplication.UnicodeUTF8))
self.label_4.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Version Comment", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_open_pb.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Open", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_import_pb.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Import", None, QtGui.QApplication.UnicodeUTF8))
self.bsAm_reference_pb.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Reference", None, QtGui.QApplication.UnicodeUTF8))
self.menuMenu.setTitle(QtGui.QApplication.translate("bs_assetManagerMainWin", "Menu", None, QtGui.QApplication.UnicodeUTF8))
self.actionExit.setText(QtGui.QApplication.translate("bs_assetManagerMainWin", "Exit", None, QtGui.QApplication.UnicodeUTF8))
| [
"crashsonu@gmail.com"
] | crashsonu@gmail.com |
9ee890146ce6af634c55139f125f346f2d8a82b8 | 4b3c57a19006ddc2136e20458fec330180f1dd19 | /19캡디 머신러닝/get_data.py | 8119c4c8dd2587bca9c871b882cd7f666e5951fe | [] | no_license | yangtae/Motion-Guantlet | 78fef29931b4cc296c7c798da480061516b5ca5a | f69cf152160cef627b6df7e4229af3769d51247e | refs/heads/main | 2023-06-09T17:52:42.978797 | 2021-07-03T13:42:21 | 2021-07-03T13:49:28 | 382,616,244 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,221 | py | # file: rfcomm-server.py
# auth: Albert Huang <albert@csail.mit.edu>
# desc: simple demonstration of a server application that uses RFCOMM sockets
#
# $Id: rfcomm-server.py 518 2007-08-10 07:20:07Z albert $
import pandas as pd
from bluetooth import *
import pyautogui as p
import keyboard
import sys
# Global Variable
w, h = p.size()
# p.PAUSE = 0
p.FAILSAFE = False
def connect_blutooth():
server_sock = BluetoothSocket(RFCOMM)
server_sock.bind(("", PORT_ANY))
server_sock.listen(1)
port = server_sock.getsockname()[1]
uuid = "94f39d29-7d6d-437d-973b-fba39e49d4ee"
advertise_service(server_sock, "SampleServer",
service_id=uuid,
service_classes=[uuid, SERIAL_PORT_CLASS],
profiles=[SERIAL_PORT_PROFILE],
protocols = [ OBEX_UUID ]
)
print("Waiting for connection on RFCOMM channel %d" % port)
client_sock, client_info = server_sock.accept()
print("Accepted connection from ", client_info)
return client_sock, client_info
if __name__ == "__main__":
client_sock, client_info = connect_blutooth()
df = pd.DataFrame()
flag = False
total = []
try:
while True:
try: #used try so that if user pressed other than the given key error will not be shown
if not keyboard.is_pressed('q'): continue
count = 0
datas = []
while count < 20:
client_sock.send('ack')
recv = client_sock.recv(1024)
if len(recv) == 0: continue
data = recv.decode('ascii')
data = data.split()
datas = datas + data
count = count + 1
total.append(datas)
except:
break #if user pressed a key other than the given key the loop will break
except IOError:
pass
print("disconnected")
df = pd.DataFrame(total)
print(df.head())
df.to_csv('right_2.csv')
client_sock.close()
print("all done")
| [
"xotjr1316@gmail.com"
] | xotjr1316@gmail.com |
5ab89bff4f0be6f2e9393198d7937d50d28f6ba9 | bf683796ca0368468b0d361c91083f7ef1d19e11 | /mainapp/migrations/0005_auto_20200522_1011.py | 8672b1a9f9ba9c9e1385f01e03deea7275d8511a | [] | no_license | pavanvarma08/python-django-models-authentication | d5771a4a04f22f415f366d1699a19623cb989f6b | 83379653506081c50a92048fc689cd66e28fc889 | refs/heads/master | 2022-08-21T09:05:43.487596 | 2020-05-22T10:15:45 | 2020-05-22T10:15:45 | 265,871,127 | 0 | 0 | null | 2020-05-21T14:33:44 | 2020-05-21T14:33:43 | null | UTF-8 | Python | false | false | 695 | py | # Generated by Django 2.2.9 on 2020-05-22 10:11
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('mainapp', '0004_blogpost'),
]
operations = [
migrations.CreateModel(
name='Tag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, unique=True)),
],
),
migrations.AddField(
model_name='blogpost',
name='tags',
field=models.ManyToManyField(related_name='posts', to='mainapp.Tag'),
),
]
| [
"pavan.indukuri@ericsson.com"
] | pavan.indukuri@ericsson.com |
9f4a47f73799a81165385e13b29c9011492f03ef | c00b2b96e00cd999c5bafc9d9e0c57dcbdd080ac | /scripts/Kinda.py | 430cef06cf3eb5c2290e3179fa52ed1ea647eadf | [
"MIT"
] | permissive | thomaslao/home_edu | d038b038f47b84070a6978e927b99e06744897d0 | 6368967917892eb43164ca79fd4faeafb085be37 | refs/heads/master | 2023-05-01T07:09:27.467886 | 2021-05-12T08:58:53 | 2021-05-12T08:58:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 282 | py | flag = 0
# NOTE: `main`, `msg`, `kdata`, `k`, `f` and `c` are assumed to be provided
# by the surrounding robot runtime (question answering, motion control,
# person following and the depth camera); they are not defined in this script.
while True:
answer = main.answer_question_from_data(msg, kdata)['answer']
print(answer)
if answer == 'follow':
flag = 1
elif answer == 'stop':
k.move(0, 0)
break
forward_speed, turn_speed = f.follow(c.depth_image, flag==1)
k.move(forward_speed, turn_speed)
| [
"lamkinun@gmail.com"
] | lamkinun@gmail.com |
124cc01f09e4a09f6b04f3d79b499aba8b561276 | 6796f5a07ae5fe7fd79ba700bd9e68d35b3b1959 | /src/_check_freq_azimuth_amp.py | 82af83d24ba562c7094e7e2641ff500b4aa8131b | [] | no_license | Hiroki-kt/robot-audition-experiment | 642d157514fb9ae886b5c65107a76c27d56cfdf4 | 40feab6231801bcd71ad71a049b60ab7e6e2ac5a | refs/heads/master | 2022-04-10T08:23:54.917243 | 2020-03-27T02:21:07 | 2020-03-27T02:21:07 | 214,172,858 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,674 | py | import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from _function import MyFunc
import sys
if __name__ == '__main__':
mf = MyFunc()
data_set_file_path = mf.onedrive_path + '_array/200220/'
data_name = '191205_PTs05'
origin_path = mf.speaker_sound_path + '2_up_tsp_1num.wav'
mic = 0
data_id = 0
smooth_step = 50
data_set_file = data_set_file_path + data_name + '.npy'
data_set = np.load(data_set_file)
print(data_set.shape)
directions = np.arange(data_set.shape[0]/2 * (-1), data_set.shape[0]/2)
freq_list = np.fft.rfftfreq(mf.get_frames(origin_path), 1/44100)
freq_max_id = mf.freq_ids(freq_list, 7000)
freq_min_id = mf.freq_ids(freq_list, 1000)
X1, X2 = np.meshgrid(freq_list[freq_min_id + int(smooth_step/2) - 1:freq_max_id - int(smooth_step/2)], directions)
print(X1.shape, X2.shape)
fig = plt.figure()
fig.subplots_adjust(bottom=0.2)
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X1,
X2,
data_set[:, mic, data_id, :],
cmap='winter',
linewidth=0)
ax.set_ylabel('Azimuth [deg]', fontsize=12)
ax.set_xlabel('Frequency [Hz]', fontsize=12)
ax.set_zlabel('Amplitude spectrum', fontsize=12)
ax.tick_params(labelsize=10)
# fig.colorbar(surf)
path = mf.make_dir_path(img=True, directory_name='/' + data_name + '/')
for angle in range(0, 360):
ax.view_init(30, angle)
# fig.draw()
# plt.pause(.001)
# fig.show()
plt.savefig(path + str(angle).zfill(3) + '.png')
# fig.show()
| [
"katayama.hiroki.kb3@is.naist.jp"
] | katayama.hiroki.kb3@is.naist.jp |
847583d24105d7141eccf2797b87b466cbd57b01 | 99f43f4591f63d0c57cd07f07af28c0b554b8e90 | /python/프로그래머스/직사각형만들기.py | 16bef1835142eb60413503e1b66a432aafa71fc8 | [] | no_license | SINHOLEE/Algorithm | 049fa139f89234dd626348c753d97484fab811a7 | 5f39d45e215c079862871636d8e0306d6c304f7e | refs/heads/master | 2023-04-13T18:55:11.499413 | 2023-04-10T06:21:29 | 2023-04-10T06:21:29 | 199,813,684 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 543 | py | def solution(v):
v = sorted(v)
x_dic = {}
y_dic = {}
for x, y in v:
if x_dic.get(x) is None:
x_dic[x] = 1
else:
x_dic[x] += 1
if y_dic.get(y) is None:
y_dic[y] = 1
else:
y_dic[y] += 1
answer = []
for x, cnt in x_dic.items():
if cnt == 1:
answer.append(x)
break
for y, cnt in y_dic.items():
if cnt == 1:
answer.append(y)
return answer
print(solution([[1, 1], [2, 2], [1, 2]]))
| [
"dltlsgh5@naver.com"
] | dltlsgh5@naver.com |
bc45b4f440231771d040320fa33af10e7caddee6 | 945ed3bc28abff20e7d4a445941fdd6b2161b3cd | /[python3]/generic.py | df70588171070948c7e2d33347fa6aa2a309a2d4 | [] | no_license | cash2one/adwords | 066a9f3173af37ea456488152c1e95fa16aae35e | ea1376286972886f8b6c47c5873558345ff81491 | refs/heads/master | 2021-01-21T10:55:45.940180 | 2017-01-30T11:04:28 | 2017-01-30T11:04:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,880 | py | #BookYogaRetreats
#Generic Keywords
######################################################################################
# import sys
# #set max cpc bid for keyword and ad group here
# print("Please type in the max cpc for the landing page.")
# max_cpc = float(input())
# if type(max_cpc) != float:
#     sys.exit("Please type in a price in form of 0.00")
######################################################################################
yoga = ['yoga']
holiday_word1 = [
'holiday',
'holidays',
'retreat',
'retreats',
'vacation',
'vacations',
'resort',
'resorts',
'camp',
'camps',
'package',
'packages',
'center',
'centers',
'centre',
'centres',
'deal',
'deals',
'training',
'trainings',
'holiday retreat',
'holiday retreats',
'holidays retreat',
'holidays retreats',
'holiday resort',
'holiday resorts',
'holidays resort',
'holidays resorts',
'holiday package',
'holiday packages',
'holidays package',
'holidays packages',
'holiday deal',
'holiday deals',
'holidays deal',
'holidays deals',
'retreat holiday',
'retreat holidays',
'retreats holiday',
'retreats holidays',
'retreat vacation',
'retreat vacations',
'retreats vacation',
'retreats vacations',
'retreat resort',
'retreat resorts',
'retreats resort',
'retreats resorts',
'retreat camp',
'retreat camps',
'retreats camp',
'retreats camps',
'retreat package',
'retreat packages',
'retreats package',
'retreats packages',
'retreat center',
'retreat centers',
'retreats center',
'retreats centers',
'retreat centre',
'retreat centres',
'retreats centre',
'retreats centres',
'retreat deal',
'retreat deals',
'retreats deal',
'retreats deals',
'retreat training',
'retreat trainings',
'retreats training',
'retreats trainings',
'camp holiday',
'camp holidays',
'camps holiday',
'camps holidays',
'camp retreat',
'camp retreats',
'camps retreat',
'camps retreats',
'camp vacation',
'camp vacations',
'camps vacation',
'camps vacations',
'camp resort',
'camp resorts',
'camps resort',
'camps resorts',
'camp package',
'camp packages',
'camps package',
'camps packages',
'camp deal',
'camp deals',
'camps deal',
'camps deals',
'camp training',
'camp trainings',
'camps training',
'camps trainings',
'center retreat',
'center retreats',
'centers retreat',
'centers retreats',
'centre retreat',
'centre retreats',
'centres retreat',
'centres retreats',
'center training',
'center trainings',
'centers training',
'centers trainings',
'centre training',
'centre trainings',
'centres training',
'centres trainings',
'vacation retreat',
'vacation retreats',
'vacations retreat',
'vacations retreats',
'vacation camp',
'vacation camps',
'vacations camp',
'vacations camps',
'vacation package',
'vacation packages',
'vacations package',
'vacations packages',
'vacation deal',
'vacation deals',
'vacations deal',
'vacations deals',
'resort holiday',
'resort holidays',
'resorts holiday',
'resorts holidays',
'resort retreat',
'resort retreats',
'resorts retreat',
'resorts retreats',
'resort vacation',
'resort vacations',
'resorts vacation',
'resorts vacations',
'resort camp',
'resort camps',
'resorts camp',
'resorts camps',
'resort package',
'resort packages',
'resorts package',
'resorts packages',
'resort deal',
'resort deals',
'resorts deal',
'resorts deals',
'package holiday',
'package holidays',
'packages holiday',
'packages holidays',
'package retreat',
'package retreats',
'packages retreat',
'packages retreats',
'package vacation',
'package vacations',
'packages vacation',
'packages vacations',
'package resort',
'package resorts',
'packages resort',
'packages resorts',
'package camp',
'package camps',
'packages camp',
'packages camps',
'package deal',
'package deals',
'packages deal',
'packages deals',
'package training',
'package trainings',
'packages training',
'packages trainings',
'training holiday',
'training holidays',
'trainings holiday',
'trainings holidays',
'training retreat',
'training retreats',
'trainings retreat',
'trainings retreats',
'training vacation',
'training vacations',
'trainings vacation',
'trainings vacations',
'training camp',
'training camps',
'trainings camp',
'trainings camps',
'training package',
'training packages',
'trainings package',
'trainings packages',
'training center',
'training centers',
'trainings center',
'trainings centers',
'training centre',
'training centres',
'trainings centre',
'trainings centres'
]
holiday_word2 = [
'package',
'packages',
'resort',
'resorts',
'retreat',
'retreats',
'deal',
'deals',
'']
######################################################################################
#structure and combination
a1 = [yoga, holiday_word1, holiday_word2] #ex) yoga retreat holiday package
import itertools
comb = list(itertools.product(*a1))
######################################################################################
#clean the list
for i in range(len(comb)): #remove all empty strings
comb[i] = tuple(x for x in comb[i] if x != '')
comb[i] = ' '.join(comb[i])
comb[i] = tuple(comb[i].split())
all_comb = list(set(comb))
for i in range(len(all_comb)): #remove adjacent duplicates including singular vs plural (e.g. yoga retreat retreats)
a = all_comb[i]
for x in range(len(a)-1):
if (a[x] == a[x+1]) or (a[x]+'s' == a[x+1]) or (a[x] == a[x+1]+'s'):
all_comb[i] = a[:x+1] + a[x+2:]
def remove_duplicates(values): #aux
output = []
seen = set()
for value in values:
# If value has not been encountered yet,
# ... add it to both list and set.
if value not in seen:
output.append(value)
seen.add(value)
return output
#remove keywords with plural/singular duplicates within them
#e.g. yoga retreat budget retreat, yoga retreats budget retreat
for i in range(len(all_comb)): #yoga retreat budget retreat
all_comb[i] = tuple(remove_duplicates(all_comb[i]))
for i in range(len(all_comb)): #yoga retreats budget retreat, yoga retreat budget retreats
if (all_comb[i][-1]+'s' == all_comb[i][1]) or (all_comb[i][-1] == all_comb[i][1]+'s'):
all_comb[i] = []
all_comb = [x for x in all_comb if x != []] #remove empty lists in all_comb
for i in range(len(all_comb)): #remove all duplicate words
all_comb[i] = tuple(remove_duplicates(all_comb[i]))
all_comb[i] = '[' + ' '.join(all_comb[i]) + ']'
all_comb = sorted(set(all_comb)) #remove duplicates and sort the list in an alphabetical order
######################################################################################
#export keywords onto a csv file
import csv
file_name = "Generic.csv"
with open(file_name, 'w', newline='') as f:  # text mode with newline='' for csv in Python 3
    writer = csv.writer(f)
    for i in range(len(all_comb)):
        writer.writerow([all_comb[i]])
print("[Notification] %s generic keywords are generated successfully. Please check your current directory for the output in a CSV file." % len(all_comb))
"stephanie@ebookingservices.com"
] | stephanie@ebookingservices.com |
2060cc6036f984b39e6124ba18d02103c6b57d5d | 21b6f23951aa54e3c59a8a284f69599817f5c5ad | /filehandling_example.py | e25019dbbf3cd0a8eadee895f6288206c8f0f85c | [] | no_license | syladiana/basic-python-b3 | 1f94d70acdab93815dbc146bdbaeb5a3e715ea50 | 36ade0a5b7fbb488af53878b7430bfb4647e2238 | refs/heads/main | 2023-03-01T23:40:25.318183 | 2021-02-07T14:37:52 | 2021-02-07T14:37:52 | 328,089,939 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 770 | py | def menu():
print("Menu: ")
print("1. Lihat daftar kontak")
print("2. Tambahkan kontak")
print("3. Keluar")
option = int(input("Masukkan pilihan: "))
return option
def show():
f = open("file_kontak.txt", "r")
print("List nama kontak: ")
print(f.read())
f.close()
def add():
f = open("file_kontak.txt", "a")
nama = input("Masukkan nama: ")
telepon = input("Masukkan nomor telepon: ")
f.write(nama + "," + telepon)
f.write("\n")
f.close()
def unknown():
print("Menu tidak tersedia")
def keluar():
print("Terima kasih!")
while True:
opsi = menu()
if opsi == 1:
show()
elif opsi == 2:
add()
elif opsi == 3:
keluar()
break
else:
unknown() | [
"sylasudiana@gmail.com"
] | sylasudiana@gmail.com |
f3e8bbb985e9c431950c37d1dbd0f064440bd7c5 | 7437c54720a7e221ac99f436919c7988458abd1e | /src/tests/skills/test_sample.py | 4cd1e26fe0cf80a6258a959eecc4e1b7cd88fa71 | [
"Apache-2.0"
] | permissive | alexkryu4kov/DreamJob | e6a1ec4320836662f393f6080a5ee9600cf614ce | 27ae092d0086c8dde2ceb9598e65c3b79a654f56 | refs/heads/develop | 2023-08-04T14:35:54.231891 | 2019-12-04T15:45:20 | 2019-12-04T15:45:20 | 217,653,574 | 0 | 0 | Apache-2.0 | 2023-07-20T15:03:36 | 2019-10-26T04:03:03 | Python | UTF-8 | Python | false | false | 246 | py | from tests.constants import skills_request
def test_set_email_known_unknown(request_parser):
request_parser.set_email_known_unknown(skills_request)
assert request_parser.known == ['android']
assert request_parser.unknown == ['git']
| [
"noreply@github.com"
] | noreply@github.com |
c6977906e213e68c73e51c78565d020bd2a58022 | 2cefbfc49a118f1bc820e1acb4866cf13240d91b | /aula0305/regex.py | 5ed8891f447504ecd738ce002b8cfc5aabb1298a | [
"MIT"
] | permissive | proffillipesilva/aulasdelogicaprogramacao | c574aee382501f75a713b6b89a1e2f9a533e17d1 | f32d1019d4bfd5da371417911cb6bcafe58dfac7 | refs/heads/main | 2023-06-04T07:48:28.828672 | 2021-06-26T01:27:33 | 2021-06-26T01:27:33 | 354,985,186 | 1 | 25 | MIT | 2021-06-26T01:27:34 | 2021-04-05T22:06:40 | Python | UTF-8 | Python | false | false | 100 | py | import re
email = "<p>Aloha</p>"
valido = re.search("^<p>(.*)</p>$", email )
print(valido.group(1)) | [
"noreply@github.com"
] | noreply@github.com |
34e43b4116a0c8e45799e462f0469116df0a77a4 | 0645f92331396e03d449668bc37d5e26a82b07df | /hash_table.py | 02d8f06c69419dc8e4d0e5244206cab90983b18a | [] | no_license | jhkette/data_structures_algorithms | 8cfba599baf59028ae7ecf905745340f468dd675 | 9ed023082eb64aacb0eed32ed3987fc6d9e44dee | refs/heads/master | 2022-05-03T20:33:22.633849 | 2022-03-14T15:48:07 | 2022-03-14T15:48:07 | 205,164,091 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,459 | py | # Given an array of integers, return indices of the two numbers such
# that they add up to a specific target.
# You may assume that each input would have exactly one solution,
# and you may not use the same element twice.
# Example:
# Given nums = [2, 7, 11, 15], target = 9,
# Because nums[0] + nums[1] = 2 + 7 = 9,
# return [0, 1], i.e. the indexes.
# 1. Two Sum
# THE ARRAY IS NOT SORTED - A TWO-POINTER APPROACH WILL NOT WORK
def two_sum(nums, target):
dic = {}
for i, num in enumerate(nums):
if num in dic:
return [dic[num], i]
else:
dic[target - num] = i
print(two_sum([0,2,5,3,6,5], 11))
print(two_sum([0,2,5,3,6,3,4,5,6,9,11,5], 20))
import collections
# A Counter is a container that keeps track of how many times equivalent
# values are added. It can be used to implement the same algorithms for which
# bag or multiset data structures are commonly used in other languages.
# find the first unique character. collections.Counter creates the hash table:
# it takes an iterable as a parameter and creates a dictionary of counts from it.
def first_unique(s):
count = collections.Counter(s)
# find the index
for idx, ch in enumerate(s):
if count[ch] == 1:
return idx
return -1
print(first_unique('helloh'))
# Time complexity: O(n) since we go through the string of length N two times.
# Space complexity: O(n) since we have to keep a hash map with N elements.
"joseph.ketterer@gmail.com"
] | joseph.ketterer@gmail.com |
5c696de0f62ac5ddf947bd3c760a3a4cb5157354 | 7cda45674ecd42a371928bd7263ca959843b5436 | /Instagram/wsgi.py | da40952ae14e1ce64f4a6d88e6b263d03aa1f457 | [] | no_license | sachitbatra/Django-InstaClone | 096f3437ba74ca3609032077b25a8a299157cd80 | a69d71af91cade589a2bfc227ee50a66eea20c62 | refs/heads/master | 2021-01-01T16:22:39.806016 | 2018-11-24T17:20:00 | 2018-11-24T17:20:00 | 97,817,199 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 395 | py | """
WSGI config for Instagram project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Instagram.settings')
application = get_wsgi_application()
| [
"sachitbatra97@gmail.com"
] | sachitbatra97@gmail.com |
d7fae122e7f8a84921a3b1fe45c07f443c21ff43 | 89b2f5b08c441d4af0a63ed2ec1a5889bc92f0f7 | /NI_DJANGO_PROJECTS/urls_and_templates/second__app/urls.py | cfec6a5b99b358adb0edc6eaa0de920d58001cc6 | [] | no_license | KoliosterNikolayIliev/Softuni_education | 68d7ded9564861f2bbf1bef0dab9ba4a788aa8dd | 18f1572d81ad9eb7edd04300deb8c81bde05d76b | refs/heads/master | 2023-07-18T09:29:36.139360 | 2021-08-27T15:04:38 | 2021-08-27T15:04:38 | 291,744,823 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 105 | py | from django.urls import path
from second__app import views
urlpatterns = [
path('', views.index)
]
| [
"65191727+KoliosterNikolayIliev@users.noreply.github.com"
] | 65191727+KoliosterNikolayIliev@users.noreply.github.com |
55ff3c9682db0bd28e6eb9aa2c943bf04ef521d5 | d1be29b65e7e4118d42446d5c056bb55839358e1 | /Project-Euler/Problem-25.py | cd471aab0ec4a09d6a1314f78ebd96d3b0d4deb3 | [] | no_license | XiaoTaoWang/JustForFun | f77693139181ffc6ef102adbc670e4758a4b7e5f | 09ddf48f22994e6a8fc23bad32cbac41b62574b1 | refs/heads/master | 2021-01-10T14:37:12.211897 | 2016-03-13T04:38:47 | 2016-03-13T04:38:47 | 49,762,427 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 456 | py | # -*- coding: utf-8 -*-
"""
Created on Fri Mar 4 00:17:25 2016
@author: Xiaotao Wang
"""
"""
Problem 25:
What is the index of the first term in the Fibonacci sequence to contain
1000 digits?
"""
def fib():
a = b = 1
while True:
yield a
a, b = b, a+b
if __name__ == '__main__':
F = fib()
for i, num in enumerate(F):
digits = str(num)
if len(digits) == 1000:
print(i+1)
break | [
"wangxiaotao868@163.com"
] | wangxiaotao868@163.com |
2fb599f614a7c8f7aecc28811d5fdeeaa7f4f484 | 3e03f4701ed09f5bf4f53c9b47fa9ebf36309bfb | /ctpExecutionEngine.py | 56eb521829d68a22d4f2f9118a4ba1c4c43433bb | [
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | WinQuant/arsenal-ctp-driver | 387926c08d37866a6bdc12e01265cb903bc3ea1f | 23acd98b8df8e4c7407b14a5b01de6679f2d2501 | refs/heads/master | 2021-01-21T08:02:07.396257 | 2017-02-28T07:42:26 | 2017-02-28T07:42:26 | 83,330,822 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,887 | py | '''The execution engine for CTP interface.
'''
'''
Copyright (c) 2017, WinQuant Information and Technology Co. Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the <organization> nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
# built-in modules
import logging
import yaml
# third-party modules
import gevent
# customized modules
import ctptrader
import ctpOrder
import ctpUtil
import execution.engine as eEngine
# initialize logging configure
logging.basicConfig( format='[%(levelname)s] %(message)s', level=logging.INFO )
class CTPExecutionEngine( eEngine.ExecutionEngine ):
'''The CTP specific execution engine.
'''
def __init__( self, configPath ):
'''Initialize the trader engine.
Parameters
----------
configPath : str
            path to the configuration file for the data feed, including the fields
TRADER_FRONT_IP : str
gateway IP address;
BROKER_ID : str
broker ID;
INVESTOR_ID : str
investor ID;
                PASSWORD : str
password to authenticate.
The configure file is encoded as a YAML.
Exceptions
----------
raise
* FileNotFoundError when the given YAML file is not found;
* KeyError if the required fileds are not specified in the configure file.
'''
with open( configPath, 'r' ) as f:
            self.config = yaml.safe_load( f.read() )  # safe_load avoids arbitrary object construction
# login CTP trader API
ctptrader.login( self.config[ 'TRADER_FRONT_IP' ],
self.config[ 'BROKER_ID' ], self.config[ 'INVESTOR_ID' ],
self.config[ 'PASSWORD' ], self._onRspUserLogin,
self._onOrderSubmitted, self._onOrderActionTaken,
self._onOrderReturn, self._onTradeReturn )
self.executedOrders = {}
# next order identifier
self.nextId = 1
# callbacks
self.onRspUserLogin = None
self.onOrderSubmitted = None
self.onOrderActionTaken = None
self.onOrderReturn = None
self.onTradeReturn = None
def setCallbacks( self, onRspUserLogin=None, onOrderSubmitted=None,
onOrderActionTaken=None, onOrderReturn=None, onTradeReturn=None ):
'''Set callback functions for later processing.
Parameters
----------
        onRspUserLogin : callable
            Callback function invoked on the user-login response, None by default;
        onOrderSubmitted : callable
            Callback function invoked when an order is submitted, None by default;
        onOrderActionTaken : callable
            Callback function invoked to acknowledge an order action, None by default;
        onOrderReturn : callable
            Callback function invoked on an order return, None by default;
        onTradeReturn : callable
            Callback function invoked on a trade return, None by default.
'''
self.onRspUserLogin = onRspUserLogin
self.onOrderSubmitted = onOrderSubmitted
self.onOrderActionTaken = onOrderActionTaken
self.onOrderReturn = onOrderReturn
self.onTradeReturn = onTradeReturn
def connect( self ):
'''Bring the CTP execution engine online.
'''
ctptrader.connect()
def placeOrder( self, order, onOrderFilled=None ):
'''Submit the order to the CTP engine.
Parameters
----------
order : ctpOrder.CTPOrder
an CTP compatible order.
onOrderFilled : callable
the callback if given, when the submitted order is filled.
Returns
-------
orderId : ctpOrder.CTPOrderId
identifier of the CTP order;
'''
orderId = self.nextId
self.nextId += 1
ctpOrderObj = ctpUtil.convertToCtpOrder( order )
ctptrader.placeOrder( ctpOrderObj, orderId )
orderId = ctpOrder.CTPOrderId( order.secId, ctpOrderObj.exch,
orderId )
return orderId
def cancelOrder( self, orderId ):
'''Cancel the given order through the CTP interface.
Parameters
----------
orderId : ctpOrder.CTPOrderId
identifier of the CTP order to cancel.
Returns
-------
        orderStatus : ctpOrder.CTPOrderStatus
Status of the CTP order.
'''
ctptrader.cancelOrder( orderId )
def queryStatus( self, orderId ):
'''Query status of the CTP order.
Parameters
----------
orderId : ctpOrder.CTPOrderId
identifier of the CTP order to query.
Returns
-------
orderStatus : ctpOrder.CTPOrderStatus
Status of the CTP order.
'''
def updateOrder( self, orderId, newOrder ):
'''Update the order associated with the given order identifier to the new order.
Parameters
----------
orderId : ctpOrder.CTPOrderId
identifier of the CTP order to query;
newOrder : ctpOrder.CTPOrder
the new CTP order object to update.
Returns
-------
orderStatus : ctpOrder.CTPOrderStatus
Status of the CTP order.
'''
def _onRspUserLogin( self ):
'''Callbacks for user login.
'''
logging.info( 'CTP trader logged in.' )
def _onOrderSubmitted( self, orderId, requestId ):
        '''Callback for order submitted.
Parameters
----------
orderId : int
Order identifier for the given order;
requestId : int
request identifier for the given order.
'''
print( 'order submitted', orderId, requestId )
def _onOrderActionTaken( self, orderId, requestId ):
'''Callbacks for order action taken.
Parameters
----------
orderId : int
The identifier of the order where the action is taken;
requestId : int
request identifier for the given order.
'''
def _onOrderReturn( self, orderRefId, notifySeq, orderStatus, volumeTraded,
volumeTotal, seqNo ):
'''Callbacks for order return.
Parameters
----------
orderRefId : str
Order reference ID;
notifySeq : int
notify sequence;
orderStatus : int
the status of orders;
volumeTraded : int
volumes filled;
volumeTotal : int
volumes in total;
seqNo : int
sequence number.
'''
def _onTradeReturn( self, orderRefId, orderSysId, tradeId, price, volume,
tradeDate, tradeTime, orderLocalId, seqNo ):
'''Callbacks for trade return.
Parameters
----------
orderRefId : str
Order reference ID;
orderSysId : str
order system ID;
tradeId : str
trade identifier;
price : float
trade price;
volume : int
trade volume;
tradeDate : str
trade date;
tradeTime : str
trade time;
orderLocalId : str
local order ID;
seqNo : int
sequence number.
'''
if self.onTradeReturn is not None:
self.onTradeReturn( orderRefId, orderSysId, tradeId, price, volume,
tradeDate, tradeTime, orderLocalId, seqNo )
| [
"qiush.summer@gmail.com"
] | qiush.summer@gmail.com |
099107a1fc7a937fe06c9e7494308aa4d7f2223e | 26d030d1a8134f1900d11054dc63c674dc2beec8 | /main.py | 0895f7c340f491aad10624532b6215a61944c9a2 | [
"MIT"
] | permissive | kendricktan/pychip8 | 1ea1259abb61485c0db9bd26dda0201c2369452d | c9eb4f950f4546dbad0ca84f1c393d822a925a10 | refs/heads/master | 2021-04-27T15:44:17.064807 | 2018-02-23T14:28:13 | 2018-02-23T14:28:13 | 122,475,722 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 133 | py | from pychip8.emulator import Chip8
if __name__ == '__main__':
rom_name = 'pong.rom'
chip8 = Chip8(rom_name)
chip8.run() | [
"kendricktan0814@gmail.com"
] | kendricktan0814@gmail.com |
f00da1225c7993c12b6cda82ddca593124f48469 | 491ecd86c572906fcbc6e59f3f4ef4812e0022c4 | /mqtt-publisher.py | ea4633db38c84f9de1bf74e68db96d00fe5670a2 | [] | no_license | velmurugan2020/mqtt | d88e614eb907ed5b1c5cc93bc0d2393ebdde3d5e | 398e3548955fb4e2f1f809b72ff26a85e8289fb1 | refs/heads/master | 2022-12-08T19:13:29.788278 | 2020-09-01T12:13:55 | 2020-09-01T12:13:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 306 | py | import paho.mqtt.client as mqtt
import time
broker_address="iot.eclipse.org"
#broker_address="192.168.1.102"
# This is the Publisher
client = mqtt.Client("p1")
client.connect(broker_address)
client.loop_start()
client.publish("test", "Hello world!")
time.sleep(1)  # give the network loop time to flush the message before disconnecting
client.loop_stop()
client.disconnect()
| [
"noreply@github.com"
] | noreply@github.com |
cd7fd456176850a00ea9e237f482bc0ff0b23b23 | 400ad761259442dec1a496f0e8bf516a1665ce0b | /sorters.py | a72d3fb116e422508db30994dc5f55a6f071b72a | [] | no_license | trevanreeser41/bom_words | dcdfabc5e4232e556315a0c6db9468b464f10097 | d89c017485745b42b5449e53a8e033d9d3196494 | refs/heads/master | 2021-05-20T09:42:50.144121 | 2020-04-01T16:40:01 | 2020-04-01T16:40:01 | 252,231,409 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,865 | py | # Parameters in the following functions:
# data: a list of tuples
# index: the tuple index to sort by
#
# Consider the following example data:
data = [
( 'homer', 'simpson', 50 ),
( 'luke', 'skywalker', 87 ),
( 'bilbo', 'baggins', 111 ),
( 'mike', 'azowskie', 21 )
]
#
# bubble_sort(data, 0) sorts on first name (a..z)
# bubble_sort(data, 0, True) sorts on first name (z..a)
# bubble_sort(data, 2) sorts on age (1..infinity)
#
# The data list is sorted in place (a new list is not created).
# You do NOT need to perform validation on input data
# (null data list, index out of bounds, etc.)
#
def bubble_sort(data, index, descending=False):
'''Sorts using the bubble sort algorithm'''
# replace this with your own algorithm (do not use Python's sort)
for item in range(len(data)-1,0,-1):
for i in range(item):
if descending == False:
if data[i][index]>data[i+1][index]:
temp = data[i]
data[i] = data[i+1]
data[i+1] = temp
elif descending == True:
if data[i][index]<data[i+1][index]:
temp = data[i]
data[i] = data[i+1]
data[i+1] = temp
# data.sort(key=lambda t: t[index], reverse=descending)
def insertion_sort(data, index, descending=False):
'''Sorts using the insertion sort algorithm'''
# replace this with your own algorithm (do not use Python's sort)
for item in range(1,len(data)):
currentValue=data[item]
position = item
if descending == False:
while position>0 and data[position-1][index]>currentValue[index]:
data[position]=data[position-1]
position = position-1
elif descending == True:
while position>0 and data[position-1][index]<currentValue[index]:
data[position]=data[position-1]
position = position-1
data[position]=currentValue
# data.sort(key=lambda t: t[index], reverse=descending)
def selection_sort(data, index, descending=False):
'''Sorts using the selection sort algorithm'''
# replace this with your own algorithm (do not use Python's sort)
for item in range(len(data)-1,0,-1):
maxPosition = 0
for i in range(1,item+1):
if descending == False:
if data[i][index] > data[maxPosition][index]:
maxPosition = i
elif descending == True:
if data[i][index] < data[maxPosition][index]:
maxPosition = i
temp = data[item]
data[item] = data[maxPosition]
data[maxPosition] = temp
# data.sort(key=lambda t: t[index], reverse=descending)
# bubble_sort(data, 0, True)
# selection_sort(data, 2, True)
# insertion_sort(data, 1, True) | [
"trevanreeser41@gmail.com"
] | trevanreeser41@gmail.com |
d83ff0cf32e9893c0e9435a58e94531245790845 | e96c5e2770f3df29a9fdfe56ae0270648d1108df | /tweetsplitter.py | 06c45a5e432f441a545da97291619e10533ed1b7 | [] | no_license | NaruBeast/tweet-splitter | 66506dfc9b1d15d487204eeda93aec3ef028f36e | a94338631d465cb8d030d18a03e64c1f7de6930c | refs/heads/master | 2023-03-02T16:02:36.175667 | 2021-02-05T05:10:11 | 2021-02-05T05:10:11 | 336,168,834 | 2 | 3 | null | null | null | null | UTF-8 | Python | false | false | 2,623 | py | def tweet_splitter(text, counter):
char_limit = 280
counter_char = 8 if counter else 0
end = char_limit - counter_char
tweets = []
    break_points = ".,&!?;"  # punctuation marks used as break points
index = 0
if counter: # By Punctuation with Counter
tweets_temp = [] # Temporary list
count = 1
while (index + end) < len(text): # If the remaining text still has more than 272 (+8 for counter) characters, we must cut.
for break_pt in range(index+end-1, index-1, -1): # From the 280th, go backwards and find the nearest punctuation mark.
if text[break_pt] in break_points:
tweets_temp.append(text[index:break_pt+1].strip() + " ({}/".format(count)) # Append from starting index up to punctuation mark. Since we can't yet find the total number of tweets we will be creating, we'll add the denominator later.
index = break_pt+1 # New starting index at the next word
break # Go back to while loop
else:
space = text[index:index+end].rfind(' ')
if space!=-1:
tweets_temp.append(text[index:index+space].strip() + " ({}/".format(count))
index += (space + 1)
else:
tweets_temp.append(text[index:index+end].strip() + " ({}/".format(count))
index += end
count+=1
tweets_temp.append(text[index:len(text)].strip() + " ({}/".format(count)) # No more cutting needed, append remaining words.
for tweet in tweets_temp: # Call all tweets and add the denominator in the counter since we now know the total amount of tweets created. Remember format " ({}/{})".
tweets.append(tweet + "{})".format(count))
else: # By Punctuation without Counter
while (index + end) < len(text): # Same thing as above, without the complex counter
for break_pt in range(index+end-1, index-1, -1):
if text[break_pt] in break_points:
tweets.append(text[index:break_pt+1].strip())
index = break_pt+1
break
else:
space = text[index:index+end].rfind(' ')
if space!=-1:
tweets.append(text[index:index+space].strip())
index += (space + 1)
else:
tweets.append(text[index:index+end].strip())
index += end
tweets.append(text[index:len(text)].strip())
return tweets
| [
"mohmahn@gmail.com"
] | mohmahn@gmail.com |
e281c75728a1005f2eeb65258609d547a8f7e51f | 4dbbd709f7a12976d52f102e05cedc0aec4b7c07 | /launcher.py | ec267e6249936d9e91832436d6f4bbb507e47c83 | [] | no_license | chartes/erasmus | 7b2ac3a4d5123897dc6ae0824e610bf615d090a6 | 5ec1662625112698e9c18cce83720e9371658ef5 | refs/heads/dev | 2020-03-21T18:01:03.711082 | 2018-07-17T09:11:42 | 2018-07-17T09:11:42 | 138,867,565 | 0 | 0 | null | 2018-07-17T09:15:19 | 2018-06-27T10:38:08 | HTML | UTF-8 | Python | false | false | 121 | py | from app import create_app
app = create_app("dev")
if __name__ == "__main__":
app.run(host='127.0.0.1', port=5020)
| [
"vrlitwhai.kridanik@gmail.com"
] | vrlitwhai.kridanik@gmail.com |
28bf6dde8bb5f2f4f836584daa7697bbbb60659a | 5679731cee36c537615d285ed72810f4c6b17380 | /492_ConstructTheRectangle.py | 864fd723b3b57af7cb42b67c170b150f6a55bac9 | [] | no_license | manofmountain/LeetCode | 6b76105190a9b62df65a7b56b6def4120498b9fa | 718f688b3d316e8c10ef680d9c21ecd518d062f8 | refs/heads/master | 2021-01-12T03:41:48.318116 | 2017-07-18T12:35:58 | 2017-07-18T12:35:58 | 78,252,164 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 412 | py | import math
class Solution(object):
def constructRectangle(self, area):
"""
:type area: int
:rtype: List[int]
"""
width, res = int(math.sqrt(area)), list()
while width != 0:
if area % width == 0:
                res.append(area // width)  # integer division keeps the result an int (exact since area % width == 0)
res.append(width)
break
width -= 1
return res | [
"noreply@github.com"
] | noreply@github.com |
a07208baa41623aadfd3f44d003830fe174faa6c | 73b4efb5e2f79131d083711e84f06edf1db92966 | /RDT_2_1.py | d3a76058328615b0520a636ebe8963ac9d5b6dc2 | [
"Apache-2.0"
] | permissive | nicktonj/466-prog-2 | a801a8173a287de763763e97b090cda6dd5649ae | caf362b91818fb4f356a34dbcb62c0f55f977398 | refs/heads/master | 2022-03-01T04:09:03.605837 | 2019-10-12T00:20:55 | 2019-10-12T00:20:55 | 212,420,773 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,616 | py | import Network_2_1 as Network
import argparse
from time import sleep
import hashlib
# This is a Packet class
class Packet:
## the number of bytes used to store packet length
seq_num_S_length = 10
ack_S_length = 10
length_S_length = 10
## length of md5 checksum in hex
checksum_length = 32
def __init__(self, seq_num, msg_S='', ack=1):
self.seq_num = seq_num
self.msg_S = msg_S
self.ack = ack
@classmethod
def from_byte_S(self, byte_S):
if Packet.corrupt(byte_S):
# raise RuntimeError('Cannot initialize Packet: byte_S is corrupt')
# print('CORRUPT')
return self(0, ack=0)
#extract the fields
seq_num = int(byte_S[Packet.length_S_length : Packet.length_S_length+Packet.seq_num_S_length])
# print("OK > seq_num:", seq_num)
ack = int(byte_S[Packet.length_S_length+Packet.seq_num_S_length : Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length])
msg_S = byte_S[Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length+Packet.checksum_length :]
return self(seq_num, msg_S, ack)
def get_byte_S(self):
# print("\nCreating packet...")
#convert sequence number of a byte field of seq_num_S_length bytes
seq_num_S = str(self.seq_num).zfill(self.seq_num_S_length)
# print("OK > seq_num:", str(int(seq_num_S)))
# convert ack flag to a byte field of ack_S_length bytes
ack_S = str(self.ack).zfill(self.ack_S_length)
# print("ack_S:", str(int(ack_S)))
#convert length to a byte field of length_S_length bytes
length_S = str(self.length_S_length + len(seq_num_S) + len(ack_S) + self.checksum_length + len(self.msg_S)).zfill(self.length_S_length)
# print("length_S:", str(int(length_S)))
# print("msg_S:", str(self.msg_S))
#compute the checksum
checksum = hashlib.md5((length_S+seq_num_S+ack_S+self.msg_S).encode('utf-8'))
checksum_S = checksum.hexdigest()
# print("\nchecksum:", checksum_S, "\n")
#compile into a string
return length_S + seq_num_S + ack_S + checksum_S + self.msg_S
@staticmethod
def corrupt(byte_S):
# print("\nChecking for corruption...")
#extract the fields
length_S = byte_S[0:Packet.length_S_length]
# print("length:", str(length_S))
seq_num_S = byte_S[Packet.length_S_length : Packet.length_S_length+Packet.seq_num_S_length]
ack_S = byte_S[Packet.length_S_length+Packet.seq_num_S_length : Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length]
# print("ack:", str(ack_S))
checksum_S = byte_S[Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length : Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length+Packet.checksum_length]
# print("checksum:", str(checksum_S))
msg_S = byte_S[Packet.length_S_length+Packet.seq_num_S_length+Packet.ack_S_length+Packet.checksum_length :]
# print("msg:", str(msg_S))
#compute the checksum locally
checksum = hashlib.md5(str(length_S+seq_num_S+ack_S+msg_S).encode('utf-8'))
computed_checksum_S = checksum.hexdigest()
#and check if the same
'''
if checksum_S != computed_checksum_S:
print("\nCORRUPTION DETECTED")
# print("Checksum:", checksum_S)
# print("Computed Checksum:", computed_checksum_S, "\n")
print("CORRUPT > seq_num:", str(seq_num_S))
else:
print("No corruption found.")
'''
return checksum_S != computed_checksum_S
class RDT:
## latest sequence number used in a packet
seq_num = 0
## buffer of bytes read from network
byte_buffer = ''
def __init__(self, role_S, server_S, port):
self.network = Network.NetworkLayer(role_S, server_S, port)
def disconnect(self):
self.network.disconnect()
def rdt_1_0_send(self, msg_S):
p = Packet(self.seq_num, msg_S)
self.seq_num += 1
self.network.udt_send(p.get_byte_S())
def rdt_1_0_receive(self):
ret_S = None
byte_S = self.network.udt_receive()
self.byte_buffer += byte_S
#keep extracting packets - if reordered, could get more than one
while True:
#check if we have received enough bytes
if(len(self.byte_buffer) < Packet.length_S_length):
return ret_S #not enough bytes to read packet length
#extract length of packet
length = int(self.byte_buffer[:Packet.length_S_length])
if len(self.byte_buffer) < length:
return ret_S #not enough bytes to read the whole packet
#create packet from buffer content and add to return string
p = Packet.from_byte_S(self.byte_buffer[0:length])
ret_S = p.msg_S if (ret_S is None) else ret_S + p.msg_S
#remove the packet bytes from the buffer
self.byte_buffer = self.byte_buffer[length:]
#if this was the last packet, will return on the next iteration
def rdt_2_1_send(self, msg_S):
p = Packet(self.seq_num, msg_S=msg_S)
byte_S = ''
while True:
self.network.udt_send(p.get_byte_S())
sleep(0.1)
while byte_S == '':
byte_S = self.network.udt_receive()
self.byte_buffer += byte_S
length = int(self.byte_buffer[:Packet.length_S_length])
p_ack = Packet.from_byte_S(self.byte_buffer[0:length])
self.byte_buffer = ''
if p_ack.ack == 1 and p_ack.seq_num == self.seq_num:
self.seq_num = 1 if self.seq_num == 0 else 0
return
def rdt_2_1_receive(self):
ret_S = None
byte_S = self.network.udt_receive()
self.byte_buffer += byte_S
while True:
byte_S = self.network.udt_receive()
self.byte_buffer += byte_S
if len(self.byte_buffer) < Packet.length_S_length:
return ret_S
length = int(self.byte_buffer[:Packet.length_S_length])
if len(self.byte_buffer) < length:
return ret_S
p = Packet.from_byte_S(self.byte_buffer[0:length])
self.byte_buffer = self.byte_buffer[length:]
self.network.udt_send(Packet(p.seq_num, ack=p.ack).get_byte_S())
sleep(0.1)
if p.ack == 1 and p.seq_num == self.seq_num:
ret_S = p.msg_S if ret_S is None else ret_S + p.msg_S
self.seq_num = 1 if self.seq_num == 0 else 0
return ret_S
def rdt_3_0_send(self, msg_S):
pass
def rdt_3_0_receive(self):
pass
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='RDT implementation.')
parser.add_argument('role', help='Role is either client or server.', choices=['client', 'server'])
parser.add_argument('server', help='Server.')
parser.add_argument('port', help='Port.', type=int)
args = parser.parse_args()
rdt = RDT(args.role, args.server, args.port)
if args.role == 'client':
rdt.rdt_1_0_send('MSG_FROM_CLIENT')
sleep(2)
print(rdt.rdt_1_0_receive())
rdt.disconnect()
else:
sleep(1)
print(rdt.rdt_1_0_receive())
rdt.rdt_1_0_send('MSG_FROM_SERVER')
rdt.disconnect()
| [
"zbryantaylor@gmail.com"
] | zbryantaylor@gmail.com |
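The `Packet` class in the RDT file above frames each message with fixed-width header fields (length, sequence number, ack flag, MD5 checksum) and `corrupt()` re-derives the checksum to detect damage. A minimal standalone sketch of that framing follows; the field widths 10/10/1/32 are assumptions, since the real `Packet` class constants are defined outside this excerpt (32 is forced by the MD5 hexdigest length):

```python
import hashlib

# Assumed field widths; the real values live in Packet's class
# attributes, which are not shown in this excerpt.
LENGTH_W, SEQ_W, ACK_W, CHECKSUM_W = 10, 10, 1, 32

def pack(seq_num, ack, msg):
    # Zero-pad each header field to its fixed width, then checksum
    # header + payload together, as get_byte_S() does above.
    seq_S = str(seq_num).zfill(SEQ_W)
    ack_S = str(ack).zfill(ACK_W)
    length_S = str(LENGTH_W + SEQ_W + ACK_W + CHECKSUM_W + len(msg)).zfill(LENGTH_W)
    checksum_S = hashlib.md5((length_S + seq_S + ack_S + msg).encode('utf-8')).hexdigest()
    return length_S + seq_S + ack_S + checksum_S + msg

def is_corrupt(byte_S):
    # Slice the fixed-width fields back out and recompute the checksum.
    length_S = byte_S[:LENGTH_W]
    seq_S = byte_S[LENGTH_W:LENGTH_W + SEQ_W]
    ack_S = byte_S[LENGTH_W + SEQ_W:LENGTH_W + SEQ_W + ACK_W]
    checksum_S = byte_S[LENGTH_W + SEQ_W + ACK_W:LENGTH_W + SEQ_W + ACK_W + CHECKSUM_W]
    msg = byte_S[LENGTH_W + SEQ_W + ACK_W + CHECKSUM_W:]
    recomputed = hashlib.md5((length_S + seq_S + ack_S + msg).encode('utf-8')).hexdigest()
    return recomputed != checksum_S

packet = pack(3, 1, 'hello')
assert not is_corrupt(packet)
# Flip one payload byte: the MD5 check catches it.
assert is_corrupt(packet[:-1] + '?')
```

Note that the leading length field is what lets `rdt_1_0_receive` pull whole packets out of a byte buffer that may hold several of them.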
5fd20e3011e980f89584b88c39314ab97024b9db | 8a6a7d63f637c242052cbbb42b50dcf98f11642e | /inProgress/pyChess-master/BasicObjects/BaseObject.py | 0db35f95f4f00ccc470c4239e1ff7e520b605153 | [] | no_license | Paulware/waveSharePython | f77a7a2ab10c9ce52a180fe7e6607db59d36467c | 06b6b4f1527f31dbb4f65057c8c54559c36367cf | refs/heads/master | 2021-07-11T00:20:39.759342 | 2020-07-06T18:25:55 | 2020-07-06T18:25:55 | 168,863,769 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 886 | py | """
BaseObject class
"""
# Module informations
__project__ = u''
__author__ = u'Pires Baptiste (baptiste.pires37@gmail.com)'
__modifiers__ = u''
__date__ = u''
__version__ = u'1.0.0'
# Importations
# Specific definitions
# Classes / Functions declaration
class BaseObject(object):
"""
    This class will be the base class for all the other classes that are not
    processes.
---------------------------------------------------------------------------
Attributes :
- _own_config = Config of the class
"""
_own_config = {}
def __init__(self, config):
"""
Constructor
-----------------------------------------------------------------------
Arguments :
-----------------------------------------------------------------------
Return : None.
"""
        self._own_config = config
"Paul Richards"
] | Paul Richards |
1e9cb8061f6aae23dd4a1704f261f1fd6fbb8916 | b85c21315f9ebb19b76cb1bab2f904fa7c726f11 | /预科/7-10/作业讲解/非诚勿扰.py | d58a2a48159a290b9c7ab3699fa64ffc9bd2ea1a | [] | no_license | Ngyg520/python--class | 10d07981dc7fd1a559bc9ab1acec68840c5f0cf0 | 58647b6a8e9ad631ce5dc293395cc8bddd992adc | refs/heads/master | 2020-03-26T11:28:19.367471 | 2018-08-15T11:27:37 | 2018-08-15T11:27:37 | 144,844,447 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 868 | py | man_age=int(input('请输入男生的年龄:'))
man_height=int(input('请输入男生的身高:'))
man_weight=int(input('请输入男生的体重(公斤):'))
man_income=int(input('请输入男生的收入:'))
woman_age=int(input('请输入女生的年龄:'))
woman_height=int(input('请输入女生的身高:'))
woman_weight=int(input('请输入女生的体重(公斤):'))
woman_income=int(input('请输入女生的收入:'))
if (20<=woman_age<=30) and (160<=woman_height <=170) and (50<=woman_weight <=60)and (2000<=woman_income <=3000):
#再判断女生符合男生的要求下,判断男生是否符合女生的要求
if (22<=man_age<=25) and (170<=man_height <=180) and (70<=man_weight <=80) and (10000<=man_income ):
print('配对成功')
else:
print('男生不符合女生的要求')
else:
print('女生不符合男生的要求') | [
"1755193535@qq.com"
] | 1755193535@qq.com |
80c97795d9082435f7a91d8ed6670f3aaffe5112 | 7bca567686375d643542dedafd119546a3e63526 | /ex43_classes.py | 78d34f9e32e914e2bceb876def40ab3828020e39 | [] | no_license | gitkenan/Games-and-experiments | 50f263021b07afbf70564c84ae6fc223a3d71427 | 81e7e160fe381a32bc8a97c0a88d2f69b67ac8d0 | refs/heads/master | 2020-05-18T03:21:32.725115 | 2019-09-12T00:18:16 | 2019-09-12T00:18:16 | 184,143,062 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,058 | py | from sys import exit
from random import randint
class Scene(object):
def enter(self):
pass
class Engine(object):
def __init__(self, scene_map):
pass
def play(self):
pass
class Death(Scene):
def enter(self):
print """You don't make it out alive. The Gothons take you captive
and ensure that you have nothing to do but listen to their jokes until
you starve to death."""
exit(0)
class CentralCorridor(Scene):
def Gothon(self, joke):
print "Gothon hears your joke and dies."
def enter(self):
print """As you enter the Central Corridor you notice there is
a Gothon standing in front of you. He seems to be expecting entertainment
from you."""
class LaserWeaponArmory(Scene):
def enter(self):
pass
class TheBridge(Scene):
def enter(self):
pass
class EscapePod(Scene):
def enter(self):
pass
class Map(object):
def __init__(self, start_scene):
pass
def next_scene(self, scene_name):
pass
def opening_scene(self):
pass
a_map = Map('central_corridor')
a_game = Engine(a_map)
a_game.play()
| [
"kenan.mth@gmail.com"
] | kenan.mth@gmail.com |
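The ex43 file above leaves `Engine`, `Map`, and several scenes as `pass` stubs. In the usual LPTHW pattern those stubs are filled in by having each scene's `enter()` return the key of the next scene while `Engine.play()` loops until no scene is registered under that key. A compact sketch of that pattern (the `Start`/`Win` scene names here are illustrative, not the exercise's actual scenes):

```python
class Scene(object):
    def enter(self):
        # Each concrete scene returns the key of the next scene.
        raise NotImplementedError

class Win(Scene):
    def enter(self):
        return 'finished'   # no scene registered under this key -> game ends

class Start(Scene):
    def enter(self):
        return 'win'

class Map(object):
    scenes = {'start': Start(), 'win': Win()}

    def __init__(self, start_scene):
        self.start_scene = start_scene

    def next_scene(self, scene_name):
        return self.scenes.get(scene_name)

    def opening_scene(self):
        return self.next_scene(self.start_scene)

class Engine(object):
    def __init__(self, scene_map):
        self.scene_map = scene_map

    def play(self):
        current = self.scene_map.opening_scene()
        visited = []
        # Keep entering scenes until the returned key maps to nothing.
        while current:
            visited.append(type(current).__name__)
            current = self.scene_map.next_scene(current.enter())
        return visited

assert Engine(Map('start')).play() == ['Start', 'Win']
```

The file's closing lines (`a_map = Map('central_corridor')`, `a_game.play()`) already assume exactly this interface.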
f2a995cb08d5daa59a055ec2dd0139d44f2eb797 | f54fc8f56e5710d990966f273bc783cff9818b41 | /train.py | 5471e3a2a81677241f10c537a1b2128441ae10ef | [] | no_license | xavierign/tangoAi | aea1d7c70308b0a4e0b04ff9039c972a3aa4f48f | bef793f6a55465dab195aae95ae242b18125d60a | refs/heads/master | 2023-06-19T15:33:41.593085 | 2021-07-04T19:57:59 | 2021-07-04T19:57:59 | 367,705,041 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,637 | py | import os
import pickle
import numpy
from music21 import note, chord, instrument, stream, duration
import time
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.utils import plot_model
from models.RNNAttention import get_distinct, create_lookups, prepare_sequences, get_music_list, create_network
from models.RNNAttention import create_network, sample_with_temp
import matplotlib.pyplot as plt
import numpy as np
import random
import pandas as pd
melody_len = 16 # 12, 16, or 24
section = 'SEQ_' + str(melody_len)
indir = 'data/' + section + "/"
notes_in = np.load(indir + 'notes_in.npy')
notes_out = np.load(indir + 'notes_out.npy')
durations_in = np.load(indir + 'durations_in.npy')
durations_out = np.load(indir + 'durations_out.npy')
network_input = [notes_in, durations_in]
network_output = [notes_out, durations_out]
# run params
### fixed by data
n_notes = 42
n_durations = 9
#run experiments
n_experiments = 4
music_name = 'experiments'
additional_epochs = 4
n_initial = 1
for id_ in range(n_experiments):
run_id = str(id_)
run_folder = 'run/{}/'.format(section)
run_folder += str(int(random.random()*999999999999))#'_'.join([run_id, music_name])
store_folder = os.path.join(run_folder, 'store')
data_folder = os.path.join('data', music_name)
weights_folder = os.path.join(run_folder, 'weights')
checkpoint1 = ModelCheckpoint(
os.path.join(weights_folder, "weights-improvement-{epoch:02d}-{val_loss:.4f}-bigger.h5"),
monitor='val_loss',
verbose=0,
save_best_only=True,
mode='min')
checkpoint2 = ModelCheckpoint(
os.path.join(weights_folder, "weights.h5"),
monitor='val_loss',
verbose=0,
save_best_only=True,
mode='min')
early_stopping = EarlyStopping(
monitor='val_loss'
, restore_best_weights=True
, patience = 3 )
callbacks_list = [
#checkpoint1
checkpoint2
, early_stopping]
if not os.path.exists(run_folder):
os.mkdir(run_folder)
os.mkdir(os.path.join(run_folder, 'store'))
os.mkdir(os.path.join(run_folder, 'output'))
os.mkdir(os.path.join(run_folder, 'weights'))
os.mkdir(os.path.join(run_folder, 'viz'))
    # hyperparameters sampled per experiment
    embed_size = random.choice([36, 48, 60])
    rnn_units = random.choice([50, 100, 200])
    use_attention = random.choice([True, False])
model, att_model = create_network(n_notes, n_durations, embed_size, rnn_units, use_attention)
trainHistory = model.fit(network_input, network_output
, epochs=500, batch_size=128
, validation_split = 0.2
, callbacks=callbacks_list
, shuffle=False )
#retrain with all data
model, att_model = create_network(n_notes, n_durations, embed_size, rnn_units, use_attention)
fullTrainHistory = model.fit(network_input, network_output
, epochs=early_stopping.stopped_epoch + additional_epochs, batch_size=64
, validation_split = 0
, shuffle=False )
#save fullTrained model
model.save(store_folder + '/model.h5')
#save training history
with open(store_folder + '/trainHistory.json', mode='w') as f:
pd.DataFrame(trainHistory.history).to_json(f)
with open(store_folder + '/fullTrainHistory.json', mode='w') as f:
pd.DataFrame(fullTrainHistory.history).to_json(f)
#write the parameters
text_file = open(store_folder + "/parameters.txt", "w")
text_file.write(str(embed_size)+","+str(rnn_units)+","+str(use_attention)+ "," + str(len(notes_in[0])) +"\n")
text_file.close() | [
"xig2000@columbia.edu"
] | xig2000@columbia.edu |
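The train.py script above draws `embed_size`, `rnn_units`, and `use_attention` independently with `random.choice` inside the experiment loop. That random search can be factored into a small seedable helper, which also makes individual runs reproducible. A sketch using the same grid values as the script (`SEARCH_SPACE` and `sample_params` are names introduced here, not part of the original code):

```python
import random

SEARCH_SPACE = {
    'embed_size': [36, 48, 60],
    'rnn_units': [50, 100, 200],
    'use_attention': [True, False],
}

def sample_params(space, seed=None):
    # Draw one value per hyperparameter, independently and uniformly;
    # a fixed seed makes the experiment configuration reproducible.
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in space.items()}

params = sample_params(SEARCH_SPACE, seed=0)
assert params['embed_size'] in SEARCH_SPACE['embed_size']
assert isinstance(params['use_attention'], bool)
```

Logging the seed alongside `parameters.txt` would then let any of the `n_experiments` runs be re-created exactly.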
7d02c53728c44c82d12d502f85beedd67a1d19c0 | d631bcbcec6b0d4fc2a6e3e18d6d23ef7eb462e3 | /inventoryproject/user/migrations/0001_initial.py | e97923b45f902c1f34fe3c901c7ff57f0f453c34 | [] | no_license | NazmulMilon/inventory_system | 63e1e027977969f5fc3035811d367e8ed5277b1a | 58740904372333c779c27877b3f9ff1ae165fad5 | refs/heads/master | 2023-07-13T21:42:46.256820 | 2021-08-26T19:38:20 | 2021-08-26T19:38:20 | 399,541,885 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 938 | py | # Generated by Django 3.2.6 on 2021-08-25 16:07
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Profile',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('address', models.CharField(max_length=200, null=True)),
('phone', models.CharField(max_length=20, null=True)),
('image', models.ImageField(default='avatar.jpg', upload_to='Profile_Images')),
('staff', models.OneToOneField(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| [
"milon16103373@gmail.com"
] | milon16103373@gmail.com |
27bca25d2dc36b5ed3b2f910eae3ec8de7b5d080 | a1ef153a7249c50df517e978c7932e07757b426d | /mysite/polls/views.py | 6445a3a072999cd8c6ecb481ba52efaf2f9607f4 | [] | no_license | Sangry13/django-project | 1235e6de173ecd89ff67f1d230de35b5a34a2330 | 5e3e33e79ab564d5e3a2c782c9dfd272743a9fe7 | refs/heads/master | 2020-12-25T02:29:52.629177 | 2012-12-14T00:38:34 | 2012-12-14T00:38:34 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,907 | py | from django.shortcuts import render_to_response, get_object_or_404, render
from django.contrib.auth.models import User
from django.contrib.auth import login, authenticate
from django.contrib.auth.decorators import login_required
from django.core.urlresolvers import reverse
from django.template import RequestContext
from django.http import HttpResponseRedirect
from polls.forms import RegistrationForm, VoteForm, ProfileForm
from polls.models import Poll, Choice
def register(request):
form = RegistrationForm(request.POST or None, prefix="rf")
form2 = ProfileForm(request.POST or None, prefix="up")
if form.is_valid() and form2.is_valid():
if form.save():
u = authenticate(username=form.cleaned_data['username'],
password=form.cleaned_data['password1'])
user_profile = form2.save(commit=False)
user_profile.user = u
user_profile.save()
login(request, u)
return HttpResponseRedirect(reverse("poll_index"))
return render(request, "registration/register.html",
{'form': form, 'up_form': form2})
def index(request):
latest_poll_list = Poll.objects.all().order_by('-pub_date')[:5]
return render(request, 'polls/index.html',
{'latest_poll_list' : latest_poll_list})
def detail(request, poll_id):
poll = get_object_or_404(Poll, pk=poll_id)
form = VoteForm(request.POST or None)
form.fields['choice'].queryset = Choice.objects.filter(poll=poll)
if form.is_valid():
form.save()
return HttpResponseRedirect(poll.get_absolute_url() + "results/")
return render(request, 'polls/detail.html',
{'poll': poll, 'form': form})
def results(request, poll_id):
poll = get_object_or_404(Poll, pk=poll_id)
return render(request, 'polls/results.html', {'poll':poll})
| [
"simeonf@gmail.com"
] | simeonf@gmail.com |
4589205cd366b6f298b02d86f9c43531f17d96f3 | 5d61acc1f9595047861f76b916cd28a167496f9e | /Configuration/GenProduction/python/ThirteenTeV/GMSB/GMSB_L500TeV_Ctau1200cm_Pythia8_13TeV_cff.py | 4c40005548ada354e0fd1851954fd8435b425dbe | [
"Apache-2.0"
] | permissive | zhangzc11/cms-gmsb-sps8-configs | 1bbd3cf2a45ee187f3e41ff51c409976fd59f586 | 838e6aac1d13251e050c0ee8c4ed26ca0c6cef7e | refs/heads/master | 2020-06-24T05:28:46.872990 | 2019-09-24T20:05:33 | 2019-09-24T20:05:33 | 198,862,590 | 0 | 0 | Apache-2.0 | 2019-07-25T16:03:09 | 2019-07-25T16:03:08 | null | UTF-8 | Python | false | false | 59,260 | py |
SLHA_TABLE = '''
# ISAJET SUSY parameters in SUSY Les Houches Accord 2 format
# Created by ISALHA 2.0 Last revision: H Baer 27 May 2014
Block SPINFO # Program information
1 ISASUGRA/ISASUSY from ISAJET # Spectrum Calculator
2 7.88 02-JAN-2018 11:01:14 # Version number
Block MODSEL # Model selection
1 2 # Minimal gauge mediated (GMSB) model
Block SMINPUTS # Standard Model inputs
1 1.28000000E+02 # alpha_em^(-1)
2 1.16570000E-05 # G_Fermi
3 1.19999997E-01 # alpha_s(M_Z)
4 9.11699982E+01 # m_{Z}(pole)
5 4.19999981E+00 # m_{b}(m_{b})
6 1.73100006E+02 # m_{top}(pole)
7 1.77699995E+00 # m_{tau}(pole)
Block MINPAR # SUSY breaking input parameters
1 5.00000000E+05 # Lambda scale of soft SSB
2 1.00000000E+06 # M_mess overall messenger scale
3 1.50000000E+01 # tan(beta)
4 1.00000000E+00 # sign(mu)
5 1.00000000E+00 # N_5 messenger index
6 9.78250000E+02 # c_grav gravitino mass factor
51 1.00000000E+00 # N5_1 U(1)_Y messenger index
52 1.00000000E+00 # N5_2 SU(2)_L messenger index
53 1.00000000E+00 # N5_3 SU(3)_C messenger index
101 1.00000000E+00 # Rsl
102 0.00000000E+00 # dmH_d^2
103 0.00000000E+00 # dmH_u^2
104 0.00000000E+00 # d_Y
Block MASS # Scalar and gaugino mass spectrum
# PDG code mass particle
6 1.73100006E+02 # top
24 8.04229965E+01 # W^+
25 1.20963837E+02 # h^0
35 2.33982568E+03 # H^0
36 2.32449658E+03 # A^0
37 2.34119263E+03 # H^+
1000001 4.97861523E+03 # dnl
1000002 4.97796680E+03 # upl
1000003 4.97861523E+03 # stl
1000004 4.97796680E+03 # chl
1000005 4.67874658E+03 # b1
1000006 4.28783154E+03 # t1
1000011 1.74556335E+03 # el-
1000012 1.73479907E+03 # nuel
1000013 1.74556335E+03 # mul-
1000014 1.73479907E+03 # numl
1000015 8.77231506E+02 # tau1
1000016 1.72525610E+03 # nutl
1000021 3.59506958E+03 # glss
1000022 7.22850281E+02 # z1ss
1000023 1.36930420E+03 # z2ss
1000024 1.36947656E+03 # w1ss
1000025 -1.61272607E+03 # z3ss
1000035 1.62492908E+03 # z4ss
1000037 1.62620276E+03 # w2ss
1000039 1.17665186E-04 # gvss
2000001 4.69962061E+03 # dnr
2000002 4.72713818E+03 # upr
2000003 4.69962061E+03 # str
2000004 4.72713867E+03 # chr
2000005 4.79156592E+03 # b2
2000006 4.82052734E+03 # t2
2000011 8.74960999E+02 # er-
2000013 8.74960999E+02 # mur-
2000015 1.73672705E+03 # tau2
Block ALPHA # Effective Higgs mixing parameter
-6.68404102E-02 # alpha
Block STOPMIX # stop mixing matrix
1 1 2.02913228E-02 # O_{11}
1 2 9.99794126E-01 # O_{12}
2 1 -9.99794126E-01 # O_{21}
2 2 2.02913228E-02 # O_{22}
Block SBOTMIX # sbottom mixing matrix
1 1 7.70947039E-02 # O_{11}
1 2 9.97023761E-01 # O_{12}
2 1 -9.97023761E-01 # O_{21}
2 2 7.70947039E-02 # O_{22}
Block STAUMIX # stau mixing matrix
1 1 1.58321951E-02 # O_{11}
1 2 9.99874651E-01 # O_{12}
2 1 -9.99874651E-01 # O_{21}
2 2 1.58321951E-02 # O_{22}
Block NMIX # neutralino mixing matrix
1 1 9.99309301E-01 #
1 2 -1.99518353E-03 #
1 3 3.34340036E-02 #
1 4 -1.60968006E-02 #
2 1 -9.81154572E-03 #
2 2 -9.75596309E-01 #
2 3 1.66011706E-01 #
2 4 -1.43372744E-01 #
3 1 1.22171640E-02 #
3 2 -1.64305065E-02 #
3 3 -7.06651509E-01 #
3 4 -7.07265913E-01 #
4 1 -3.36956233E-02 #
4 2 2.18947381E-01 #
4 3 6.86998308E-01 #
4 4 -6.92069888E-01 #
Block UMIX # chargino U mixing matrix
1 1 -9.73944068E-01 # U_{11}
1 2 2.26788387E-01 # U_{12}
2 1 -2.26788387E-01 # U_{21}
2 2 -9.73944068E-01 # U_{22}
Block VMIX # chargino V mixing matrix
1 1 -9.80740488E-01 # V_{11}
1 2 1.95315510E-01 # V_{12}
2 1 -1.95315510E-01 # V_{21}
2 2 -9.80740488E-01 # V_{22}
Block GAUGE Q= 4.41147656E+03 #
1 3.57515574E-01 # g`
2 6.52412176E-01 # g_2
3 1.21976912E+00 # g_3
Block YU Q= 4.41147656E+03 #
3 3 8.23612988E-01 # y_t
Block YD Q= 4.41147656E+03 #
3 3 1.79693878E-01 # y_b
Block YE Q= 4.41147656E+03 #
3 3 1.53216481E-01 # y_tau
Block HMIX Q= 4.41147656E+03 # Higgs mixing parameters
1 1.59305103E+03 # mu(Q)
2 1.42569609E+01 # tan(beta)(Q)
3 2.52204025E+02 # Higgs vev at Q
4 5.40328450E+06 # m_A^2(Q)
Block MSOFT Q= 4.41147656E+03 # DRbar SUSY breaking parameters
1 7.36200684E+02 # M_1(Q)
2 1.32917480E+03 # M_2(Q)
3 3.27734692E+03 # M_3(Q)
21 2.71649850E+06 # MHd^2(Q)
22 -2.20397425E+06 # MHu^2(Q)
31 1.73146716E+03 # MeL(Q)
32 1.73146716E+03 # MmuL(Q)
33 1.72214929E+03 # MtauL(Q)
34 8.74915039E+02 # MeR(Q)
35 8.74915039E+02 # MmuR(Q)
36 8.62024658E+02 # MtauR(Q)
41 4.81839209E+03 # MqL1(Q)
42 4.81839209E+03 # MqL2(Q)
43 4.64446826E+03 # MqL3(Q)
44 4.56409326E+03 # MuR(Q)
45 4.56409326E+03 # McR(Q)
46 4.19017236E+03 # MtR(Q)
47 4.53597314E+03 # MdR(Q)
48 4.53597314E+03 # MsR(Q)
49 4.52017188E+03 # MbR(Q)
Block AU Q= 4.41147656E+03 #
1 1 -9.46278625E+02 # A_u
2 2 -9.46278625E+02 # A_c
3 3 -9.46278625E+02 # A_t
Block AD Q= 4.41147656E+03 #
1 1 -1.05144751E+03 # A_d
2 2 -1.05144751E+03 # A_s
3 3 -1.05144751E+03 # A_b
Block AE Q= 4.41147656E+03 #
1 1 -1.31876114E+02 # A_e
2 2 -1.31876114E+02 # A_mu
3 3 -1.31876114E+02 # A_tau
# ISAJET decay tables in SUSY Les Houches accord format
# Created by ISALHD. Last revision: C. Balazs, 2005 May 25
Block DCINFO # Program information
1 ISASUGRA from ISAJET # Spectrum Calculator
2 7.88 02-JAN-2018 11:01:14 # Version number
# PDG Width
DECAY 6 1.48575687E+00 # TP decays
# BR NDA ID1 ID2 ID3 ID4
3.33333313E-01 3 2 -1 5 # TP --> UP DB BT
3.33333313E-01 3 4 -3 5 # TP --> CH SB BT
1.11111097E-01 3 -11 12 5 # TP --> E+ NUE BT
1.11111097E-01 3 -13 14 5 # TP --> MU+ NUM BT
1.11111097E-01 3 -15 16 5 # TP --> TAU+ NUT BT
# PDG Width
DECAY 1000021 8.68305862E-02 # GLSS decays
# BR NDA ID1 ID2 ID3 ID4
4.62337658E-02 3 1000024 1 -2 # GLSS --> W1SS+ DN UB
4.62337658E-02 3 -1000024 2 -1 # GLSS --> W1SS- UP DB
4.62337658E-02 3 1000024 3 -4 # GLSS --> W1SS+ ST CB
4.62337658E-02 3 -1000024 4 -3 # GLSS --> W1SS- CH SB
5.38903959E-02 3 1000024 5 -6 # GLSS --> W1SS+ BT TB
5.38903959E-02 3 -1000024 6 -5 # GLSS --> W1SS- TP BB
1.50996190E-03 3 1000037 1 -2 # GLSS --> W2SS+ DN UB
1.50996190E-03 3 -1000037 2 -1 # GLSS --> W2SS- UP DB
1.50996190E-03 3 1000037 3 -4 # GLSS --> W2SS+ ST CB
1.50996190E-03 3 -1000037 4 -3 # GLSS --> W2SS- CH SB
1.04367949E-01 3 1000037 5 -6 # GLSS --> W2SS+ BT TB
1.04367949E-01 3 -1000037 6 -5 # GLSS --> W2SS- TP BB
1.29011560E-05 2 1000022 21 # GLSS --> Z1SS GL
3.13564092E-02 3 1000022 2 -2 # GLSS --> Z1SS UP UB
9.18567367E-03 3 1000022 1 -1 # GLSS --> Z1SS DN DB
9.18567367E-03 3 1000022 3 -3 # GLSS --> Z1SS ST SB
3.13563906E-02 3 1000022 4 -4 # GLSS --> Z1SS CH CB
9.73962154E-03 3 1000022 5 -5 # GLSS --> Z1SS BT BB
5.12548834E-02 3 1000022 6 -6 # GLSS --> Z1SS TP TB
8.66053379E-05 2 1000023 21 # GLSS --> Z2SS GL
2.31342353E-02 3 1000023 2 -2 # GLSS --> Z2SS UP UB
2.29462497E-02 3 1000023 1 -1 # GLSS --> Z2SS DN DB
2.29462497E-02 3 1000023 3 -3 # GLSS --> Z2SS ST SB
2.31342353E-02 3 1000023 4 -4 # GLSS --> Z2SS CH CB
2.89335642E-02 3 1000023 5 -5 # GLSS --> Z2SS BT BB
2.51190588E-02 3 1000023 6 -6 # GLSS --> Z2SS TP TB
1.33392459E-03 2 1000025 21 # GLSS --> Z3SS GL
2.32389334E-06 3 1000025 2 -2 # GLSS --> Z3SS UP UB
2.80408267E-06 3 1000025 1 -1 # GLSS --> Z3SS DN DB
2.80408267E-06 3 1000025 3 -3 # GLSS --> Z3SS ST SB
2.32389311E-06 3 1000025 4 -4 # GLSS --> Z3SS CH CB
3.98141053E-03 3 1000025 5 -5 # GLSS --> Z3SS BT BB
9.14951563E-02 3 1000025 6 -6 # GLSS --> Z3SS TP TB
1.26973237E-03 2 1000035 21 # GLSS --> Z4SS GL
7.81447918E-04 3 1000035 2 -2 # GLSS --> Z4SS UP UB
8.62020184E-04 3 1000035 1 -1 # GLSS --> Z4SS DN DB
8.62020184E-04 3 1000035 3 -3 # GLSS --> Z4SS ST SB
7.81447918E-04 3 1000035 4 -4 # GLSS --> Z4SS CH CB
4.77397442E-03 3 1000035 5 -5 # GLSS --> Z4SS BT BB
9.79650915E-02 3 1000035 6 -6 # GLSS --> Z4SS TP TB
5.75114283E-13 2 1000039 21 # GLSS --> GVSS GL
# PDG Width
DECAY 1000002 1.14308014E+02 # UPL decays
# BR NDA ID1 ID2 ID3 ID4
5.76027064E-03 2 1000022 2 # UPL --> Z1SS UP
1.49618506E-01 2 1000023 2 # UPL --> Z2SS UP
2.95784539E-05 2 1000025 2 # UPL --> Z3SS UP
6.62464788E-03 2 1000035 2 # UPL --> Z4SS UP
5.25529683E-01 2 1000021 2 # UPL --> GLSS UP
3.01277310E-01 2 1000024 1 # UPL --> W1SS+ DN
1.11600524E-02 2 1000037 1 # UPL --> W2SS+ DN
# PDG Width
DECAY 1000001 1.14323929E+02 # DNL decays
# BR NDA ID1 ID2 ID3 ID4
6.01694640E-03 2 1000022 1 # DNL --> Z1SS DN
1.48524866E-01 2 1000023 1 # DNL --> Z2SS DN
5.11833059E-05 2 1000025 1 # DNL --> Z3SS DN
7.41615752E-03 2 1000035 1 # DNL --> Z4SS DN
5.25817454E-01 2 1000021 1 # DNL --> GLSS DN
2.97126144E-01 2 -1000024 2 # DNL --> W1SS- UP
1.50472615E-02 2 -1000037 2 # DNL --> W2SS- UP
# PDG Width
DECAY 1000003 1.14323921E+02 # STL decays
# BR NDA ID1 ID2 ID3 ID4
6.01694686E-03 2 1000022 3 # STL --> Z1SS ST
1.48524880E-01 2 1000023 3 # STL --> Z2SS ST
5.11833096E-05 2 1000025 3 # STL --> Z3SS ST
7.41615798E-03 2 1000035 3 # STL --> Z4SS ST
5.25817454E-01 2 1000021 3 # STL --> GLSS ST
2.97126085E-01 2 -1000024 4 # STL --> W1SS- CH
1.50472606E-02 2 -1000037 4 # STL --> W2SS- CH
# PDG Width
DECAY 1000004 1.14307983E+02 # CHL decays
# BR NDA ID1 ID2 ID3 ID4
5.76027157E-03 2 1000022 4 # CHL --> Z1SS CH
1.49618536E-01 2 1000023 4 # CHL --> Z2SS CH
2.95784575E-05 2 1000025 4 # CHL --> Z3SS CH
6.62464881E-03 2 1000035 4 # CHL --> Z4SS CH
5.25529563E-01 2 1000021 4 # CHL --> GLSS CH
3.01277399E-01 2 1000024 3 # CHL --> W1SS+ ST
1.11600552E-02 2 1000037 3 # CHL --> W2SS+ ST
# PDG Width
DECAY 1000005 4.95794106E+01 # BT1 decays
# BR NDA ID1 ID2 ID3 ID4
5.10155894E-02 2 1000022 5 # BT1 --> Z1SS BT
6.63381396E-03 2 1000023 5 # BT1 --> Z2SS BT
2.43770368E-02 2 1000025 5 # BT1 --> Z3SS BT
2.02832837E-02 2 1000035 5 # BT1 --> Z4SS BT
8.39735508E-01 2 1000021 5 # BT1 --> GLSS BT
1.29529992E-02 2 -1000024 6 # BT1 --> W1SS- TP
4.49891165E-02 2 -1000037 6 # BT1 --> W2SS- TP
1.26570822E-05 2 -24 1000006 # BT1 --> W- TP1
# PDG Width
DECAY 1000006 1.09464081E+02 # TP1 decays
# BR NDA ID1 ID2 ID3 ID4
1.79395169E-01 2 1000021 6 # TP1 --> GLSS TP
8.32933933E-02 2 1000022 6 # TP1 --> Z1SS TP
9.64419264E-03 2 1000023 6 # TP1 --> Z2SS TP
1.84550896E-01 2 1000025 6 # TP1 --> Z3SS TP
1.74011007E-01 2 1000035 6 # TP1 --> Z4SS TP
1.80708487E-02 2 1000024 5 # TP1 --> W1SS+ BT
3.51034433E-01 2 1000037 5 # TP1 --> W2SS+ BT
# PDG Width
DECAY 2000002 5.46927567E+01 # UPR decays
# BR NDA ID1 ID2 ID3 ID4
1.86094329E-01 2 1000022 2 # UPR --> Z1SS UP
1.57847553E-05 2 1000023 2 # UPR --> Z2SS UP
2.27690271E-05 2 1000025 2 # UPR --> Z3SS UP
1.72508648E-04 2 1000035 2 # UPR --> Z4SS UP
8.13694596E-01 2 1000021 2 # UPR --> GLSS UP
# PDG Width
DECAY 2000001 4.53831749E+01 # DNR decays
# BR NDA ID1 ID2 ID3 ID4
5.57093807E-02 2 1000022 1 # DNR --> Z1SS DN
4.71783096E-06 2 1000023 1 # DNR --> Z2SS DN
6.79890718E-06 2 1000025 1 # DNR --> Z3SS DN
5.15089087E-05 2 1000035 1 # DNR --> Z4SS DN
9.44227636E-01 2 1000021 1 # DNR --> GLSS DN
# PDG Width
DECAY 2000003 4.53831749E+01 # STR decays
# BR NDA ID1 ID2 ID3 ID4
5.57093807E-02 2 1000022 3 # STR --> Z1SS ST
4.71783096E-06 2 1000023 3 # STR --> Z2SS ST
6.79890718E-06 2 1000025 3 # STR --> Z3SS ST
5.15089087E-05 2 1000035 3 # STR --> Z4SS ST
9.44227636E-01 2 1000021 3 # STR --> GLSS ST
# PDG Width
DECAY 2000004 5.46927414E+01 # CHR decays
# BR NDA ID1 ID2 ID3 ID4
1.86094388E-01 2 1000022 4 # CHR --> Z1SS CH
1.57847589E-05 2 1000023 4 # CHR --> Z2SS CH
2.27690271E-05 2 1000025 4 # CHR --> Z3SS CH
1.72508720E-04 2 1000035 4 # CHR --> Z4SS CH
8.13694537E-01 2 1000021 4 # CHR --> GLSS CH
# PDG Width
DECAY 2000005 1.50105927E+02 # BT2 decays
# BR NDA ID1 ID2 ID3 ID4
4.34915954E-03 2 1000022 5 # BT2 --> Z1SS BT
1.06184937E-01 2 1000023 5 # BT2 --> Z2SS BT
8.59377626E-03 2 1000025 5 # BT2 --> Z3SS BT
1.43277599E-02 2 1000035 5 # BT2 --> Z4SS BT
3.22379410E-01 2 1000021 5 # BT2 --> GLSS BT
2.21527189E-01 2 -1000024 6 # BT2 --> W1SS- TP
3.21123898E-01 2 -1000037 6 # BT2 --> W2SS- TP
1.48442271E-03 2 -24 1000006 # BT2 --> W- TP1
2.94006004E-05 2 23 1000005 # BT2 --> Z0 BT1
# PDG Width
DECAY 2000006 1.52893723E+02 # TP2 decays
# BR NDA ID1 ID2 ID3 ID4
3.22430015E-01 2 1000021 6 # TP2 --> GLSS TP
2.14427546E-01 2 1000024 5 # TP2 --> W1SS+ BT
2.59258728E-02 2 1000037 5 # TP2 --> W2SS+ BT
8.65389244E-04 2 23 1000006 # TP2 --> Z0 TP1
2.97915353E-03 2 25 1000006 # TP2 --> HL0 TP1
3.08037765E-04 2 24 1000005 # TP2 --> W+ BT1
4.20037098E-03 2 1000022 6 # TP2 --> Z1SS TP
1.11457132E-01 2 1000023 6 # TP2 --> Z2SS TP
1.58510298E-01 2 1000025 6 # TP2 --> Z3SS TP
1.58896253E-01 2 1000035 6 # TP2 --> Z4SS TP
# PDG Width
DECAY 1000011 4.63572931E+00 # EL- decays
# BR NDA ID1 ID2 ID3 ID4
3.25833857E-01 2 1000022 11 # EL- --> Z1SS E-
2.25667387E-01 2 1000023 11 # EL- --> Z2SS E-
3.20709046E-06 2 1000025 11 # EL- --> Z3SS E-
1.13378419E-03 2 1000035 11 # EL- --> Z4SS E-
4.44517344E-01 2 -1000024 12 # EL- --> W1SS- NUE
2.84443167E-03 2 -1000037 12 # EL- --> W2SS- NUE
2.90703587E-16 2 11 1000039 # EL- --> E- GVSS
# PDG Width
DECAY 1000013 4.63572931E+00 # MUL- decays
# BR NDA ID1 ID2 ID3 ID4
3.25833857E-01 2 1000022 13 # MUL- --> Z1SS MU-
2.25667387E-01 2 1000023 13 # MUL- --> Z2SS MU-
3.20708955E-06 2 1000025 13 # MUL- --> Z3SS MU-
1.13378372E-03 2 1000035 13 # MUL- --> Z4SS MU-
4.44517344E-01 2 -1000024 14 # MUL- --> W1SS- NUM
2.84443167E-03 2 -1000037 14 # MUL- --> W2SS- NUM
2.90703587E-16 2 13 1000039 # MUL- --> MU- GVSS
# PDG Width
DECAY 1000015 4.59101647E-01 # TAU1- decays
# BR NDA ID1 ID2 ID3 ID4
1.00000000E+00 2 1000022 15 # TAU1- --> Z1SS TAU-
9.40904381E-17 2 15 1000039 # TAU1- --> TAU- GVSS
# PDG Width
DECAY 1000012 4.50129700E+00 # NUEL decays
# BR NDA ID1 ID2 ID3 ID4
3.36635560E-01 2 1000022 12 # NUEL --> Z1SS NUE
2.17013910E-01 2 1000023 12 # NUEL --> Z2SS NUE
1.60228446E-05 2 1000025 12 # NUEL --> Z3SS NUE
1.37625961E-03 2 1000035 12 # NUEL --> Z4SS NUE
4.43137556E-01 2 1000024 11 # NUEL --> W1SS+ E-
1.82052923E-03 2 1000037 11 # NUEL --> W2SS+ E-
2.90267630E-16 2 12 1000039 # NUEL --> NUE GVSS
# PDG Width
DECAY 1000014 4.50129652E+00 # NUML decays
# BR NDA ID1 ID2 ID3 ID4
3.36635590E-01 2 1000022 14 # NUML --> Z1SS NUM
2.17013940E-01 2 1000023 14 # NUML --> Z2SS NUM
1.60228465E-05 2 1000025 14 # NUML --> Z3SS NUM
1.37625972E-03 2 1000035 14 # NUML --> Z4SS NUM
4.43137586E-01 2 1000024 13 # NUML --> W1SS+ MU-
1.82052853E-03 2 1000037 13 # NUML --> W2SS+ MU-
2.90267656E-16 2 14 1000039 # NUML --> NUM GVSS
# PDG Width
DECAY 1000016 4.71154881E+00 # NUTL decays
# BR NDA ID1 ID2 ID3 ID4
3.18354905E-01 2 1000022 16 # NUTL --> Z1SS NUT
1.98698238E-01 2 1000023 16 # NUTL --> Z2SS NUT
1.31497764E-05 2 1000025 16 # NUTL --> Z3SS NUT
1.10832800E-03 2 1000035 16 # NUTL --> Z4SS NUT
4.06784296E-01 2 1000024 15 # NUTL --> W1SS+ TAU-
3.64203425E-03 2 1000037 15 # NUTL --> W2SS+ TAU-
7.13991448E-02 2 24 1000015 # NUTL --> W+ TAU1-
2.69770570E-16 2 16 1000039 # NUTL --> NUT GVSS
# PDG Width
DECAY 2000011 4.47922766E-01 # ER- decays
# BR NDA ID1 ID2 ID3 ID4
1.00000000E+00 2 1000022 11 # ER- --> Z1SS E-
9.51986219E-17 2 11 1000039 # ER- --> E- GVSS
# PDG Width
DECAY 2000013 4.47922617E-01 # MUR- decays
# BR NDA ID1 ID2 ID3 ID4
1.00000000E+00 2 1000022 13 # MUR- --> Z1SS MU-
9.51986550E-17 2 13 1000039 # MUR- --> MU- GVSS
# PDG Width
DECAY 2000015 4.94278526E+00 # TAU2- decays
# BR NDA ID1 ID2 ID3 ID4
3.02915215E-01 2 1000022 15 # TAU2- --> Z1SS TAU-
2.03916073E-01 2 1000023 15 # TAU2- --> Z2SS TAU-
1.63616333E-03 2 1000025 15 # TAU2- --> Z3SS TAU-
2.25865375E-03 2 1000035 15 # TAU2- --> Z4SS TAU-
4.00540292E-01 2 -1000024 16 # TAU2- --> W1SS- NUT
2.38614902E-03 2 -1000037 16 # TAU2- --> W2SS- NUT
3.55129205E-02 2 23 1000015 # TAU2- --> Z0 TAU1-
5.08345105E-02 2 25 1000015 # TAU2- --> HL0 TAU1-
# PDG Width
DECAY 1000022 1.64491995E-17 # Z1SS decays
# BR NDA ID1 ID2 ID3 ID4
7.63475120E-01 2 1000039 22 # Z1SS --> GVSS GM
1.79258157E-02 3 1000039 11 -11 # Z1SS --> GVSS E- E+
2.18513936E-01 2 1000039 23 # Z1SS --> GVSS Z0
8.51374789E-05 2 1000039 25 # Z1SS --> GVSS HL0
# PDG Width
DECAY 1000023 6.47801831E-02 # Z2SS decays
# BR NDA ID1 ID2 ID3 ID4
1.51884387E-06 2 1000022 22 # Z2SS --> Z1SS GM
5.29290959E-02 2 1000022 23 # Z2SS --> Z1SS Z0
1.09971518E-06 3 1000022 2 -2 # Z2SS --> Z1SS UP UB
1.13848353E-06 3 1000022 1 -1 # Z2SS --> Z1SS DN DB
1.13848353E-06 3 1000022 3 -3 # Z2SS --> Z1SS ST SB
1.09971518E-06 3 1000022 4 -4 # Z2SS --> Z1SS CH CB
2.76562628E-06 3 1000022 5 -5 # Z2SS --> Z1SS BT BB
5.33444283E-04 3 1000022 11 -11 # Z2SS --> Z1SS E- E+
5.33444283E-04 3 1000022 13 -13 # Z2SS --> Z1SS MU- MU+
5.51571313E-04 3 1000022 15 -15 # Z2SS --> Z1SS TAU- TAU+
5.51783713E-04 3 1000022 12 -12 # Z2SS --> Z1SS NUE ANUE
5.51783713E-04 3 1000022 14 -14 # Z2SS --> Z1SS NUM ANUM
5.72720019E-04 3 1000022 16 -16 # Z2SS --> Z1SS NUT ANUT
7.72540212E-01 2 1000022 25 # Z2SS --> Z1SS HL0
1.81178725E-03 2 2000011 -11 # Z2SS --> ER- E+
1.81178725E-03 2 -2000011 11 # Z2SS --> ER+ E-
1.81178725E-03 2 2000013 -13 # Z2SS --> MUR- MU+
1.81178725E-03 2 -2000013 13 # Z2SS --> MUR+ MU-
8.19899961E-02 2 1000015 -15 # Z2SS --> TAU1- TAU+
8.19899961E-02 2 -1000015 15 # Z2SS --> TAU1+ TAU-
1.41490078E-15 2 1000039 22 # Z2SS --> GVSS GM
3.47193677E-17 3 1000039 11 -11 # Z2SS --> GVSS E- E+
4.46058477E-15 2 1000039 23 # Z2SS --> GVSS Z0
5.21462037E-17 2 1000039 25 # Z2SS --> GVSS HL0
# PDG Width
DECAY 1000025 4.55006266E+00 # Z3SS decays
# BR NDA ID1 ID2 ID3 ID4
1.31105821E-07 2 1000022 22 # Z3SS --> Z1SS GM
1.44819921E-07 2 1000023 22 # Z3SS --> Z2SS GM
2.46137574E-01 2 1000024 -24 # Z3SS --> W1SS+ W-
2.46137574E-01 2 -1000024 24 # Z3SS --> W1SS- W+
1.87547490E-01 2 1000022 23 # Z3SS --> Z1SS Z0
2.52311766E-01 2 1000023 23 # Z3SS --> Z2SS Z0
4.97181150E-11 3 1000022 2 -2 # Z3SS --> Z1SS UP UB
1.26815962E-11 3 1000022 1 -1 # Z3SS --> Z1SS DN DB
1.26815962E-11 3 1000022 3 -3 # Z3SS --> Z1SS ST SB
4.97180977E-11 3 1000022 4 -4 # Z3SS --> Z1SS CH CB
8.70761028E-07 3 1000022 5 -5 # Z3SS --> Z1SS BT BB
2.53470489E-09 3 1000022 11 -11 # Z3SS --> Z1SS E- E+
2.53470489E-09 3 1000022 13 -13 # Z3SS --> Z1SS MU- MU+
1.99515966E-06 3 1000022 15 -15 # Z3SS --> Z1SS TAU- TAU+
1.56548055E-08 3 1000022 12 -12 # Z3SS --> Z1SS NUE ANUE
1.56548055E-08 3 1000022 14 -14 # Z3SS --> Z1SS NUM ANUM
1.66999463E-08 3 1000022 16 -16 # Z3SS --> Z1SS NUT ANUT
5.08758021E-13 3 1000023 2 -2 # Z3SS --> Z2SS UP UB
8.73334391E-13 3 1000023 1 -1 # Z3SS --> Z2SS DN DB
8.73334391E-13 3 1000023 3 -3 # Z3SS --> Z2SS ST SB
5.08758021E-13 3 1000023 4 -4 # Z3SS --> Z2SS CH CB
8.05588130E-09 3 1000023 5 -5 # Z3SS --> Z2SS BT BB
7.19554624E-11 3 1000023 11 -11 # Z3SS --> Z2SS E- E+
7.19554624E-11 3 1000023 13 -13 # Z3SS --> Z2SS MU- MU+
1.02007419E-07 3 1000023 15 -15 # Z3SS --> Z2SS TAU- TAU+
4.45968401E-10 3 1000023 12 -12 # Z3SS --> Z2SS NUE ANUE
4.45968401E-10 3 1000023 14 -14 # Z3SS --> Z2SS NUM ANUM
4.94453700E-10 3 1000023 16 -16 # Z3SS --> Z2SS NUT ANUT
2.32765023E-02 2 1000022 25 # Z3SS --> Z1SS HL0
1.27826596E-03 2 1000023 25 # Z3SS --> Z2SS HL0
6.69943329E-05 2 2000011 -11 # Z3SS --> ER- E+
6.69943329E-05 2 -2000011 11 # Z3SS --> ER+ E-
6.69943329E-05 2 2000013 -13 # Z3SS --> MUR- MU+
6.69943329E-05 2 -2000013 13 # Z3SS --> MUR+ MU-
2.15199906E-02 2 1000015 -15 # Z3SS --> TAU1- TAU+
2.15199906E-02 2 -1000015 15 # Z3SS --> TAU1+ TAU-
1.55487589E-21 2 1000039 22 # Z3SS --> GVSS GM
3.85759104E-23 3 1000039 11 -11 # Z3SS --> GVSS E- E+
4.27834026E-17 2 1000039 23 # Z3SS --> GVSS Z0
5.52461358E-17 2 1000039 25 # Z3SS --> GVSS HL0
# PDG Width
DECAY 1000035 5.46985912E+00 # Z4SS decays
# BR NDA ID1 ID2 ID3 ID4
2.45756127E-08 2 1000022 22 # Z4SS --> Z1SS GM
3.05220986E-08 2 1000023 22 # Z4SS --> Z2SS GM
1.32293912E-10 2 1000025 22 # Z4SS --> Z3SS GM
2.64156073E-01 2 1000024 -24 # Z4SS --> W1SS+ W-
2.64156073E-01 2 -1000024 24 # Z4SS --> W1SS- W+
1.89151037E-02 2 1000022 23 # Z4SS --> Z1SS Z0
1.62043236E-03 2 1000023 23 # Z4SS --> Z2SS Z0
3.47367934E-09 3 1000022 2 -2 # Z4SS --> Z1SS UP UB
3.23292926E-09 3 1000022 1 -1 # Z4SS --> Z1SS DN DB
3.23292926E-09 3 1000022 3 -3 # Z4SS --> Z1SS ST SB
3.47367890E-09 3 1000022 4 -4 # Z4SS --> Z1SS CH CB
6.89211788E-07 3 1000022 5 -5 # Z4SS --> Z1SS BT BB
1.91832100E-06 3 1000022 11 -11 # Z4SS --> Z1SS E- E+
1.91832100E-06 3 1000022 13 -13 # Z4SS --> Z1SS MU- MU+
3.86848205E-06 3 1000022 15 -15 # Z4SS --> Z1SS TAU- TAU+
2.90643720E-06 3 1000022 12 -12 # Z4SS --> Z1SS NUE ANUE
2.90643720E-06 3 1000022 14 -14 # Z4SS --> Z1SS NUM ANUM
3.07654636E-06 3 1000022 16 -16 # Z4SS --> Z1SS NUT ANUT
3.57983559E-10 3 1000023 2 -2 # Z4SS --> Z2SS UP UB
3.97563316E-10 3 1000023 1 -1 # Z4SS --> Z2SS DN DB
3.97563316E-10 3 1000023 3 -3 # Z4SS --> Z2SS ST SB
3.57983559E-10 3 1000023 4 -4 # Z4SS --> Z2SS CH CB
7.47458184E-09 3 1000023 5 -5 # Z4SS --> Z2SS BT BB
9.17663456E-08 3 1000023 11 -11 # Z4SS --> Z2SS E- E+
9.17663456E-08 3 1000023 13 -13 # Z4SS --> Z2SS MU- MU+
2.14685855E-07 3 1000023 15 -15 # Z4SS --> Z2SS TAU- TAU+
1.39999415E-07 3 1000023 12 -12 # Z4SS --> Z2SS NUE ANUE
1.39999415E-07 3 1000023 14 -14 # Z4SS --> Z2SS NUM ANUM
1.54462825E-07 3 1000023 16 -16 # Z4SS --> Z2SS NUT ANUT
5.98884276E-09 3 1000025 2 -2 # Z4SS --> Z3SS UP UB
7.72269182E-09 3 1000025 1 -1 # Z4SS --> Z3SS DN DB
7.72269182E-09 3 1000025 3 -3 # Z4SS --> Z3SS ST SB
5.98884276E-09 3 1000025 4 -4 # Z4SS --> Z3SS CH CB
6.67879807E-10 3 1000025 5 -5 # Z4SS --> Z3SS BT BB
1.75199422E-09 3 1000025 11 -11 # Z4SS --> Z3SS E- E+
1.75199422E-09 3 1000025 13 -13 # Z4SS --> Z3SS MU- MU+
1.35863532E-09 3 1000025 15 -15 # Z4SS --> Z3SS TAU- TAU+
3.48505869E-09 3 1000025 12 -12 # Z4SS --> Z3SS NUE ANUE
3.48505869E-09 3 1000025 14 -14 # Z4SS --> Z3SS NUM ANUM
3.48499962E-09 3 1000025 16 -16 # Z4SS --> Z3SS NUT ANUT
1.66654408E-01 2 1000022 25 # Z4SS --> Z1SS HL0
2.48504817E-01 2 1000023 25 # Z4SS --> Z2SS HL0
4.32477711E-04 2 2000011 -11 # Z4SS --> ER- E+
4.32477711E-04 2 -2000011 11 # Z4SS --> ER+ E-
4.32477711E-04 2 2000013 -13 # Z4SS --> MUR- MU+
4.32477711E-04 2 -2000013 13 # Z4SS --> MUR+ MU-
1.71224866E-02 2 1000015 -15 # Z4SS --> TAU1- TAU+
1.71224866E-02 2 -1000015 15 # Z4SS --> TAU1+ TAU-
9.92908420E-19 2 1000039 22 # Z4SS --> GVSS GM
2.46461065E-20 3 1000039 11 -11 # Z4SS --> GVSS E- E+
5.34557067E-17 2 1000039 23 # Z4SS --> GVSS Z0
3.49974492E-17 2 1000039 25 # Z4SS --> GVSS HL0
# PDG Width
DECAY 1000024 5.37228398E-02 # W1SS+ decays
# BR NDA ID1 ID2 ID3 ID4
2.70995679E-06 3 1000022 2 -1 # W1SS+ --> Z1SS UP DB
2.70995679E-06 3 1000022 4 -3 # W1SS+ --> Z1SS CH SB
1.31557952E-03 3 1000022 -11 12 # W1SS+ --> Z1SS E+ NUE
1.31557952E-03 3 1000022 -13 14 # W1SS+ --> Z1SS MU+ NUM
1.36305287E-03 3 1000022 -15 16 # W1SS+ --> Z1SS TAU+ NUT
8.12415600E-01 2 1000022 24 # W1SS+ --> Z1SS W+
8.53304796E-18 3 1000023 -11 12 # W1SS+ --> Z2SS E+ NUE
8.53304796E-18 3 1000023 -13 14 # W1SS+ --> Z2SS MU+ NUM
1.83584750E-01 2 -1000015 16 # W1SS+ --> TAU1+ NUT
# PDG Width
DECAY 1000037 4.87808514E+00 # W2SS+ decays
# BR NDA ID1 ID2 ID3 ID4
6.21149754E-09 3 1000022 2 -1 # W2SS+ --> Z1SS UP DB
6.21149754E-09 3 1000022 4 -3 # W2SS+ --> Z1SS CH SB
5.00140595E-06 3 1000022 -11 12 # W2SS+ --> Z1SS E+ NUE
5.00140595E-06 3 1000022 -13 14 # W2SS+ --> Z1SS MU+ NUM
7.58720262E-06 3 1000022 -15 16 # W2SS+ --> Z1SS TAU+ NUT
1.83986947E-01 2 1000022 24 # W2SS+ --> Z1SS W+
2.75353351E-10 3 1000023 2 -1 # W2SS+ --> Z2SS UP DB
2.75353351E-10 3 1000023 4 -3 # W2SS+ --> Z2SS CH SB
9.38854967E-08 3 1000023 -11 12 # W2SS+ --> Z2SS E+ NUE
9.38854967E-08 3 1000023 -13 14 # W2SS+ --> Z2SS MU+ NUM
2.38884525E-07 3 1000023 -15 16 # W2SS+ --> Z2SS TAU+ NUT
2.39771843E-01 2 1000023 24 # W2SS+ --> Z2SS W+
3.78574860E-08 3 1000025 2 -1 # W2SS+ --> Z3SS UP DB
3.78574860E-08 3 1000025 4 -3 # W2SS+ --> Z3SS CH SB
1.26186341E-08 3 1000025 -11 12 # W2SS+ --> Z3SS E+ NUE
1.26186341E-08 3 1000025 -13 14 # W2SS+ --> Z3SS MU+ NUM
1.26187842E-08 3 1000025 -15 16 # W2SS+ --> Z3SS TAU+ NUT
2.97163336E-13 3 1000035 2 -1 # W2SS+ --> Z4SS UP DB
9.92110379E-14 3 1000035 -11 12 # W2SS+ --> Z4SS E+ NUE
9.92110379E-14 3 1000035 -13 14 # W2SS+ --> Z4SS MU+ NUM
3.77806313E-02 2 -1000015 16 # W2SS+ --> TAU1+ NUT
2.50494421E-01 2 1000024 23 # W2SS+ --> W1SS+ Z0
4.64719235E-10 3 1000024 1 -1 # W2SS+ --> W1SS+ DN DB
4.64719235E-10 3 1000024 3 -3 # W2SS+ --> W1SS+ ST SB
6.17546625E-10 3 1000024 2 -2 # W2SS+ --> W1SS+ UP UB
6.17546625E-10 3 1000024 4 -4 # W2SS+ --> W1SS+ CH CB
1.84072505E-07 3 1000024 12 -12 # W2SS+ --> W1SS+ NUE ANUE
1.84072505E-07 3 1000024 14 -14 # W2SS+ --> W1SS+ NUM ANUM
1.54359057E-07 3 1000024 11 -11 # W2SS+ --> W1SS+ E- E+
1.54359057E-07 3 1000024 13 -13 # W2SS+ --> W1SS+ MU- MU+
1.70915968E-07 3 1000024 15 -15 # W2SS+ --> W1SS+ TAU- TAU+
2.87947237E-01 2 1000024 25 # W2SS+ --> W1SS+ HL0
# PDG Width
DECAY 25 3.38289002E-03 # HL0 decays
# BR NDA ID1 ID2 ID3 ID4
6.12632656E-09 2 11 -11 # HL0 --> E- E+
2.58663378E-04 2 13 -13 # HL0 --> MU- MU+
7.39895925E-02 2 15 -15 # HL0 --> TAU- TAU+
7.68337031E-06 2 1 -1 # HL0 --> DN DB
3.10446718E-03 2 3 -3 # HL0 --> ST SB
6.64919317E-01 2 5 -5 # HL0 --> BT BB
2.43838622E-06 2 2 -2 # HL0 --> UP UB
4.55486365E-02 2 4 -4 # HL0 --> CH CB
2.68818764E-03 2 22 22 # HL0 --> GM GM
5.55850491E-02 2 21 21 # HL0 --> GL GL
7.78724020E-03 3 24 11 -12 # HL0 --> W+ E- ANUE
7.78724020E-03 3 24 13 -14 # HL0 --> W+ MU- ANUM
7.78724020E-03 3 24 15 -16 # HL0 --> W+ TAU- ANUT
2.33617220E-02 3 24 -2 1 # HL0 --> W+ UB DN
2.33617220E-02 3 24 -4 3 # HL0 --> W+ CB ST
7.78724020E-03 3 -24 -11 12 # HL0 --> W- E+ NUE
7.78724020E-03 3 -24 -13 14 # HL0 --> W- MU+ NUM
7.78724020E-03 3 -24 -15 16 # HL0 --> W- TAU+ NUT
2.33617220E-02 3 -24 2 -1 # HL0 --> W- UP DB
2.33617220E-02 3 -24 4 -3 # HL0 --> W- CH SB
9.38703422E-04 3 23 12 -12 # HL0 --> Z0 NUE ANUE
9.38703422E-04 3 23 14 -14 # HL0 --> Z0 NUM ANUM
9.38703422E-04 3 23 16 -16 # HL0 --> Z0 NUT ANUT
4.72440035E-04 3 23 11 -11 # HL0 --> Z0 E- E+
4.72440035E-04 3 23 13 -13 # HL0 --> Z0 MU- MU+
4.72440035E-04 3 23 15 -15 # HL0 --> Z0 TAU- TAU+
1.61853945E-03 3 23 2 -2 # HL0 --> Z0 UP UB
1.61853945E-03 3 23 4 -4 # HL0 --> Z0 CH CB
2.08507758E-03 3 23 1 -1 # HL0 --> Z0 DN DB
2.08507758E-03 3 23 3 -3 # HL0 --> Z0 ST SB
2.08507758E-03 3 23 5 -5 # HL0 --> Z0 BT BB
# PDG Width
DECAY 35 7.69684315E+00 # HH0 decays
# BR NDA ID1 ID2 ID3 ID4
1.16233236E-08 2 11 -11 # HH0 --> E- E+
4.90757695E-04 2 13 -13 # HH0 --> MU- MU+
1.40560046E-01 2 15 -15 # HH0 --> TAU- TAU+
1.41470109E-05 2 1 -1 # HH0 --> DN DB
5.71610872E-03 2 3 -3 # HH0 --> ST SB
7.80803204E-01 2 5 -5 # HH0 --> BT BB
9.01491173E-11 2 2 -2 # HH0 --> UP UB
1.15403418E-06 2 4 -4 # HH0 --> CH CB
5.98319769E-02 2 6 -6 # HH0 --> TP TB
8.35925889E-08 2 22 22 # HH0 --> GM GM
1.09686289E-05 2 21 21 # HH0 --> GL GL
3.98430493E-05 2 24 -24 # HH0 --> W+ W-
2.04096523E-05 2 23 23 # HH0 --> Z0 Z0
3.94009112E-04 2 1000022 1000022 # HH0 --> Z1SS Z1SS
1.52045733E-03 2 1000022 1000023 # HH0 --> Z1SS Z2SS
1.04495510E-02 2 1000022 1000025 # HH0 --> Z1SS Z3SS
1.38539312E-04 2 25 25 # HH0 --> HL0 HL0
3.24044777E-06 2 2000011 -2000011 # HH0 --> ER- ER+
3.23632548E-06 2 2000013 -2000013 # HH0 --> MUR- MUR+
2.38183475E-06 2 1000015 -1000015 # HH0 --> TAU1- TAU1+
# PDG Width
DECAY 36 7.62508869E+00 # HA0 decays
# BR NDA ID1 ID2 ID3 ID4
1.16562608E-08 2 11 -11 # HA0 --> E- E+
4.92148334E-04 2 13 -13 # HA0 --> MU- MU+
1.40958667E-01 2 15 -15 # HA0 --> TAU- TAU+
1.41877599E-05 2 1 -1 # HA0 --> DN DB
5.73257264E-03 2 3 -3 # HA0 --> ST SB
7.83057988E-01 2 5 -5 # HA0 --> BT BB
8.96717769E-11 2 2 -2 # HA0 --> UP UB
1.14874319E-06 2 4 -4 # HA0 --> CH CB
6.05726093E-02 2 6 -6 # HA0 --> TP TB
2.78544547E-07 2 22 22 # HA0 --> GM GM
3.47714595E-05 2 21 21 # HA0 --> GL GL
7.23920704E-04 2 1000022 1000022 # HA0 --> Z1SS Z1SS
8.37204233E-03 2 1000022 1000023 # HA0 --> Z1SS Z2SS
3.97260228E-05 2 25 23 # HA0 --> HL0 Z0
# PDG Width
DECAY 37 7.12928963E+00 # H+ decays
# BR NDA ID1 ID2 ID3 ID4
1.25564288E-08 2 12 -11 # H+ --> NUE E+
5.30155085E-04 2 14 -13 # H+ --> NUM MU+
1.51844352E-01 2 16 -15 # H+ --> NUT TAU+
1.41390019E-05 2 2 -1 # H+ --> UP DB
5.71397599E-03 2 4 -3 # H+ --> CH SB
8.31536174E-01 2 6 -5 # H+ --> TP BB
1.03183016E-02 2 1000024 1000022 # H+ --> W1SS+ Z1SS
4.28967905E-05 2 25 24 # H+ --> HL0 W+
'''
import FWCore.ParameterSet.Config as cms
from Configuration.Generator.Pythia8CommonSettings_cfi import *
from Configuration.Generator.MCTunes2017.PythiaCP5Settings_cfi import *
from Configuration.Generator.PSweightsPythia.PythiaPSweightsSettings_cfi import *
generator = cms.EDFilter("Pythia8GeneratorFilter",
comEnergy = cms.double(13000.0),
pythiaHepMCVerbosity = cms.untracked.bool(False),
pythiaPylistVerbosity = cms.untracked.int32(1),
filterEfficiency = cms.untracked.double(1.0),
SLHATableForPythia8 = cms.string('%s' % SLHA_TABLE),
PythiaParameters = cms.PSet(
pythia8CommonSettingsBlock,
pythia8CP5SettingsBlock,
pythia8PSweightsSettingsBlock,
processParameters = cms.vstring(
'ParticleDecays:limitTau0 = off',
'ParticleDecays:tau0Max = 10000000',
'SUSY:all on',
),
parameterSets = cms.vstring('pythia8CommonSettings',
'pythia8CP5Settings',
'pythia8PSweightsSettings',
'processParameters')
)
)
ProductionFilterSequence = cms.Sequence(generator)
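The DECAY blocks embedded in the SLHA table above use a fixed layout per line: branching ratio, number of daughters (NDA), then the daughter PDG IDs, with the human-readable decay as a trailing comment. A minimal, illustrative parser for such lines (not part of the CMSSW fragment itself; the function name is made up for this sketch):

```python
def parse_decay_line(line):
    """Parse one SLHA decay-table entry into (branching_ratio, daughter_ids)."""
    payload = line.split('#')[0].split()   # drop the trailing comment
    br = float(payload[0])                 # branching ratio
    nda = int(payload[1])                  # number of daughter particles
    ids = [int(x) for x in payload[2:2 + nda]]
    return br, ids

br, ids = parse_decay_line(
    "8.12415600E-01    2     1000022        24   # W1SS+  -->  Z1SS   W+")
print(br, ids)  # 0.8124156 [1000022, 24]
```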
| [
"zzhang2@caltech.edu"
] | zzhang2@caltech.edu |
8c6c97e42f1033b93ec5c59b7037a4bbb78588ae | f570654b7b7edd4fb45f8ff6610c68c99ecf3d9b | /restaurant/restaurant/urls.py | 926eb83fc79b9875032a8f4fa4abf3e43860e7b0 | [] | no_license | Sam11360/django | 596f8d7a224f80486bf71c29cfa40c54ea40db70 | 3fa5a62888e1671a68dd94d8bb9156c86d48a59c | refs/heads/master | 2021-01-21T14:56:40.622282 | 2017-07-03T13:59:50 | 2017-07-03T13:59:50 | 95,230,134 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 975 | py | """restaurant URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^menu/', include('menu.urls'))
"""
from django.conf.urls import include, url
from django.contrib import admin
from menu import views
urlpatterns = [
# Examples:
# url(r'^$', 'restaurant.views.home', name='home'),
# url(r'^menu/', include('menu.urls')),
url(r'^admin/', include(admin.site.urls)),
url(r'^menu/', include('menu.urls')),
]
| [
"baleclyps@hotmail.fr"
] | baleclyps@hotmail.fr |
f7bddb9f78d1af55f6cdc5c3f6c8d5430cbb95c3 | a11b99b9e423eda5c4e0b26d32b93fe9af5967e7 | /app/posts/models.py | df64e4d71ec671f464ad64ba92615657084ff745 | [] | no_license | vintkor/test_app_drf | d88d86717d5d1b05c0114584397c7932515caa61 | 2aceb5185f95ef2ee8c082ff7e89b8a8e34e7c2d | refs/heads/master | 2022-04-29T13:15:08.311520 | 2020-03-19T09:48:04 | 2020-03-19T09:48:04 | 248,354,484 | 0 | 0 | null | 2022-04-22T23:10:47 | 2020-03-18T22:13:07 | Python | UTF-8 | Python | false | false | 993 | py | from django.db import models
from django.contrib.auth import get_user_model
from django.utils.translation import gettext_lazy as _
User = get_user_model()
class Post(models.Model):
author = models.ForeignKey(User, on_delete=models.CASCADE)
title = models.CharField(max_length=255)
text = models.TextField()
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = _('Post')
verbose_name_plural = _('Posts')
ordering = ('created',)
def __str__(self):
return self.title
class Like(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.ForeignKey(Post, on_delete=models.CASCADE)
created = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name = _('Like')
verbose_name_plural = _('Likes')
unique_together = ('user', 'post')
def __str__(self):
return self.post.title
| [
"alkv84@gmail.com"
] | alkv84@gmail.com |
58a54670cfbc63f0049ae9e408774a69a90f14e8 | 98eed0bd411dbf36115d45299a669914ef7d3421 | /Ping-Ping/Jogo Ping-Ping.py | c014a5045b698be5beed8e784714c2060ef4eda2 | [] | no_license | GabrielMorais2/jogo-ping-pong | c5d833ac4501654022eef44f9ff37b87a6a0938f | c5866f205769776e1677d792abcc915399dcde30 | refs/heads/master | 2022-11-24T11:54:36.257323 | 2020-07-26T20:05:57 | 2020-07-26T20:05:57 | 282,724,444 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,756 | py | # Game Ping-Pong
# Imports
from tkinter import *
import random
import time
# Variable receiving the value entered by the user
level = int(input("Qual nível você gostaria de jogar? 1/2/3/4/5 \n"))
# Variable
length = 500 / level
# Instance of the Tk object
root = Tk()
root.title("Ping Pong")
root.resizable(0, 0)
root.wm_attributes("-topmost", -1)
# Variable receiving the result of the Canvas function
canvas = Canvas(root, width=800, height=600, bd=0, highlightthickness=0)
canvas.pack()
root.update()
# Variable
count = 0
# Variable
lost = False
# Class
class Bola:
    def __init__(self, canvas, Barra, color):
        # Variables
        self.canvas = canvas
        self.Barra = Barra
        self.id = canvas.create_oval(0, 0, 15, 15, fill=color)
        self.canvas.move(self.id, 245, 200)
        # List
        starts_x = [-3, -2, -1, 1, 2, 3]
        random.shuffle(starts_x)
        # Variables
        self.x = starts_x[0]
        self.y = -3
        self.canvas_height = self.canvas.winfo_height()
        self.canvas_width = self.canvas.winfo_width()
    # Function
    def draw(self):
        # Variables
        self.canvas.move(self.id, self.x, self.y)
        pos = self.canvas.coords(self.id)
        # if conditional
        if pos[1] <= 0:
            # Variable
            self.y = 3
        # if conditional
        if pos[3] >= self.canvas_height:
            # Variable
            self.y = -3
        # if conditional
        if pos[0] <= 0:
            # Variable
            self.x = 3
        # if conditional
        if pos[2] >= self.canvas_width:
            # Variable
            self.x = -3
        # Variable
        self.Barra_pos = self.canvas.coords(self.Barra.id)
        # Nested if conditional
        if pos[2] >= self.Barra_pos[0] and pos[0] <= self.Barra_pos[2]:
            if pos[3] >= self.Barra_pos[1] and pos[3] <= self.Barra_pos[3]:
                # Variables
                self.y = -3
                global count
                count += 1
                # Function call
                score()
        # if conditional
        if pos[3] <= self.canvas_height:
            # Variable
            self.canvas.after(10, self.draw)
        else:
            # Function call
            game_over()
            # Variables
            global lost
            lost = True
# Class
class Barra:
    def __init__(self, canvas, color):
        # Variables
        self.canvas = canvas
        self.id = canvas.create_rectangle(0, 0, length, 10, fill=color)
        self.canvas.move(self.id, 200, 400)
        self.x = 0
        self.canvas_width = self.canvas.winfo_width()
        self.canvas.bind_all("<KeyPress-Left>", self.move_left)
        self.canvas.bind_all("<KeyPress-Right>", self.move_right)
    # Function
    def draw(self):
        # Method call
        self.canvas.move(self.id, self.x, 0)
        # Variable
        self.pos = self.canvas.coords(self.id)
        # if conditional
        if self.pos[0] <= 0:
            # Variable
            self.x = 0
        # if conditional
        if self.pos[2] >= self.canvas_width:
            # Variable
            self.x = 0
        global lost
        # if conditional
        if lost == False:
            self.canvas.after(10, self.draw)
    # Function
    def move_left(self, event):
        # if conditional
        if self.pos[0] >= 0:
            # Variable
            self.x = -3
    # Function
    def move_right(self, event):
        # if conditional
        if self.pos[2] <= self.canvas_width:
            # Variable
            self.x = 3
# Function
def start_game(event):
    # Variables
    global lost, count
    lost = False
    count = 0
    # Function call
    score()
    # Variable receiving the result of the function
    canvas.itemconfig(game, text=" ")
    # Object methods
    time.sleep(1)
    Barra.draw()
    Bola.draw()
# Function
def score():
    canvas.itemconfig(score_now, text="Pontos: " + str(count))
# Function
def game_over():
    canvas.itemconfig(game, text="Game over!")
# Instances of the Barra and Bola objects
Barra = Barra(canvas, "orange")
Bola = Bola(canvas, Barra, "purple")
# Variables receiving the results of the functions
score_now = canvas.create_text(430, 20, text="Pontos: " + str(count), fill="green", font=("Arial", 16))
game = canvas.create_text(400, 300, text=" ", fill="red", font=("Arial", 40))
canvas.bind_all("<Button-1>", start_game)
# Run the program
root.mainloop()
| [
"noreply@github.com"
] | noreply@github.com |
ba5005318e358cdcd9c9dfb8ddb237303ae65140 | 8a3b144f871815b9e89e6b4326910747817a4bc7 | /learning_logs/models.py | 3d6d9289b0340984e28a31ca8680d39defe1c6be | [] | no_license | borays/learning | b71777be9298dc87b81f821319d0131495762ee7 | e2e5ab0fe6859635b329a844837546b711b557e3 | refs/heads/master | 2021-01-21T01:33:22.570183 | 2017-08-30T12:43:51 | 2017-08-30T12:43:51 | 101,879,918 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 601 | py | from django.db import models
#for user data
from django.contrib.auth.models import User
# Create your models here.
class Topic(models.Model):
text = models.CharField(max_length=200)
date_added = models.DateTimeField(auto_now_add=True)
owner=models.ForeignKey(User)
def __str__(self):
return self.text
class Entry(models.Model):
topic = models.ForeignKey(Topic)
text = models.TextField()
date_added = models.DateTimeField(auto_now_add=True)
class Meta:
verbose_name_plural = 'entries'
def __str__(self):
return self.text[:50] + '...'
| [
"borays@qq.com"
] | borays@qq.com |
0a2e966187b89beb9f8331300b18f5e41d660407 | 69c882c678103b182988fb60d3e898d569980f1c | /Day 6/day6prog14.py | 6b4f901785e52ff819f090084f7e227d01a62b68 | [] | no_license | gittygupta/stcet-python | 44be9d91cdd6215879d9f04497214819228821be | e77456172746ee76b6e2a901ddb0c3dbe457f82a | refs/heads/master | 2022-03-05T11:37:08.720226 | 2019-12-01T00:56:03 | 2019-12-01T00:56:03 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 202 | py | def fact(x):
pro = 1
for i in range(1, x+1):
pro *= i
return pro
def perm(n, r):
    # n! / ((n - r)! * r!) -- note this is actually the combination (nCr) formula
    return fact(n) // (fact(n - r) * fact(r))
n = int(input("n = "))
r = int(input("r = "))
print(perm(n, r))
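As a quick sanity check of the factorial formula above (despite its name, `perm` computes combinations, nCr), here is a self-contained version compared against `math.comb` (Python 3.8+):

```python
import math

def fact(x):
    pro = 1
    for i in range(1, x + 1):
        pro *= i
    return pro

def n_choose_r(n, r):
    # n! / ((n - r)! * r!) -- same formula as perm() above
    return fact(n) // (fact(n - r) * fact(r))

for n in range(10):
    for r in range(n + 1):
        assert n_choose_r(n, r) == math.comb(n, r)
print(n_choose_r(5, 2))  # 10
```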
| [
"noreply@github.com"
] | noreply@github.com |
fba5870fe191a0fc0b1193de47a1b812cb0a2846 | 7574d5b3dc8a6feb9a363fed8e7303a110faf4ba | /public/cgi-bin/signup.cgi | 29f954e826aeee78159eb13fb8371a6a8cea4091 | [] | no_license | augustecolle/Komen-eten | a6b4e820734291083cf122127d0122e5d458e5bc | 50ca2087ef26e059d28b5353b304a635eb26fcc0 | refs/heads/master | 2016-08-12T03:00:56.172754 | 2015-07-15T19:40:19 | 2015-07-15T19:40:19 | 36,119,552 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,083 | cgi | #!/usr/bin/python
# this script uses passlib to hash the passwords
# to install it:
#
# pip install passlib
#
#
import imp # to find our user_database module
mod = imp.find_module("user_database",["."])
user_database = imp.load_module("user_database",*mod)
import cgi,cgitb; cgitb.enable()
from user_database import UserDatabase
db = UserDatabase()
form = cgi.FieldStorage()
username=form['username'].value
password=form['password'].value
email =form['email'].value
bodystring = ""
# check if username is not already present in the database
if (db.user_exists(username)):
bodystring = "Your username is already in use! pick another one<br>"
else: # insert the user in the database
db.add_user(username,password,email)
    bodystring = "Congratulations {:s}, you have successfully registered".format(username)
print("Content-Type: text/html\n")
print("<html>")
print("<head>")
print("<title>This is the title</title>")
print("</head>")
print("<body>")
print("Your username was {:}<br>".format(username))
print("Your password was {:}<br>".format(password))
print(bodystring)
print("</body>")
print("</html>")
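The header comment mentions passlib, so `db.add_user` presumably stores a password hash rather than the plaintext. As a rough, standard-library-only sketch of the idea (this is not the actual `user_database` implementation, and passlib's API differs):

```python
import hashlib

def hash_password(password, salt, iterations=100_000):
    # PBKDF2-HMAC-SHA256 from the standard library; passlib wraps
    # similar primitives behind a higher-level API.
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return dk.hex()

salt = b"demo-fixed-salt"  # in real code: a random per-user salt, stored with the hash
stored = hash_password("hunter2", salt)
assert hash_password("hunter2", salt) == stored   # same password verifies
assert hash_password("letmein", salt) != stored   # wrong password fails
print(len(stored))  # 64 hex characters (32-byte digest)
```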
| [
"camille.colle@ugent.be"
] | camille.colle@ugent.be |
628a3cc362d668c9258cc2ea42153488961de655 | d8b19fd2011ef684444c94bdabd15cfdea61d340 | /pythonProject/data08/dataframe살펴보기/데이터프레임으로읽어오기.py | 081d7dbd7586cebaa4ee522bb36394a991f7eb1f | [] | no_license | kanu21sj/Python_Edu | 28045690addd2b161a4d134dd06be29da8dbee18 | f5b3cc3e3841a32dcf6297b2c7c3dbe72bc9a24e | refs/heads/main | 2023-04-19T01:24:20.103017 | 2021-05-06T01:10:19 | 2021-05-06T01:10:19 | 364,742,533 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 191 | py | # 데이터를 2차원 데이터의 형태인 테이블로 읽어오려면 pandas 라이브러리 필요
import pandas as pd
import numpy as np
df = pd.read_csv("../data/2010.csv")
print(df) | [
"kanu21sj@gmail.com"
] | kanu21sj@gmail.com |
7cc4981c7cd0c60282834db79a919e6aa5ad472b | f879a376a17de22256257436bc59d34fa45a0c05 | /estado.py | 414e34caf7ad1669a60dcd10d6ab04713ab4a8e2 | [] | no_license | braldanago/Trabajo1 | c2b58c3b9089ae92e8b97e80bb35cc28ec980455 | 5bdce9246f4c5a3f3d9ed559ceb5d57e9f4fc9a1 | refs/heads/master | 2021-06-23T19:19:27.700833 | 2017-08-11T03:00:11 | 2017-08-11T03:00:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 176 | py | def dec_a_bin(decimal):
if decimal == 0:
return ""
else:
return dec_a_bin(decimal//2) + str(decimal % 2)
print("Resultado " + dec_a_bin(6))
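A self-contained check of the recursion above (note that with the empty-string base case, an input of 0 yields "" rather than "0"):

```python
def dec_a_bin(decimal):
    if decimal == 0:
        return ""
    else:
        return dec_a_bin(decimal // 2) + str(decimal % 2)

for value, expected in [(1, "1"), (6, "110"), (13, "1101"), (32, "100000")]:
    assert dec_a_bin(value) == expected
    assert int(dec_a_bin(value), 2) == value  # round-trip back to int
assert dec_a_bin(0) == ""  # edge case: empty string, not "0"
print(dec_a_bin(6))  # 110
```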
| [
"noreply@github.com"
] | noreply@github.com |
dc5ea16ec7bea01bcd37de490d79bc1f42277f89 | 174dd6831c7c79dc299bf83b7278c72244f553f8 | /Una0112/source/0804/GET.py | 84abad7bd36e93952abbfd1337b39dfa2ff8c7fc | [] | no_license | Zaybc/bingyan-summer-camp-2020 | a4e95cd5c2ac6bd7687516c02f0336587e7ae2e7 | 821b949565a4cbb4f5d3d89c3ea13c04518c60bc | refs/heads/master | 2023-04-15T21:25:41.432267 | 2021-03-20T07:24:53 | 2021-03-20T07:24:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,030 | py | # -*- coding: utf-8 -*-
"""
Created on Mon Aug 3 16:15:22 2020
@author: 16038
"""
import urllib.request
import urllib.parse
import ssl
# Disable HTTPS certificate verification
ssl._create_default_https_context = ssl._create_unverified_context
# URL address
url = "http://www.baidu.com/s"
word = {"wd": "贴吧"}
word = urllib.parse.urlencode(word) # convert to URL-encoded format (a string)
url = url + "?" + word # the first query-string separator in a URL is ?
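`urlencode` percent-encodes non-ASCII query values as UTF-8 bytes; a standalone illustration of the query string built above:

```python
from urllib.parse import urlencode, unquote

query = urlencode({"wd": "贴吧"})
print(query)           # wd=%E8%B4%B4%E5%90%A7  (UTF-8 bytes, percent-encoded)
print(unquote(query))  # wd=贴吧
```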
# User-Agent
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36"}
# The url is passed to Request() to build and return a Request object
request = urllib.request.Request(url, headers=headers)
# The Request object is passed to urlopen(), sent to the server, and the response is received
response = urllib.request.urlopen(request)
# The file-like response object supports file methods, e.g. read() reads the whole body, returned here as a decoded string
html = response.read().decode("utf-8")
# Print the string
print(html) | [
"59985892+Una0112@users.noreply.github.com"
] | 59985892+Una0112@users.noreply.github.com |
39a076f793af787e4e0f6223f985459d3b3a06eb | ebd2de1b198d3aacfe18f1e04788beaf81a00948 | /npc.py | a349798d548a7b1c4b671f593daf77df9fdf9233 | [] | no_license | Manethpak/python-text-adventure | 0a0d38b04ed3286496523916d8809ac0440aa8f4 | b3762260822ca6782888678af71cdee54f735944 | refs/heads/master | 2023-01-29T04:04:59.228919 | 2020-12-12T07:07:06 | 2020-12-12T07:07:06 | 303,360,665 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 649 | py | import items
class NonPlayableCharacter():
def __init__(self):
raise NotImplementedError("Do not create raw NPC objects.")
def __str__(self):
return self.name
class Trader(NonPlayableCharacter):
def __init__(self):
self.name = "Trader"
self.gold = 1000
self.inventory = [
items.CrustyBread(),
items.CrustyBread(),
items.CrustyBread(),
items.WaterPouch(),
items.WaterPouch(),
items.HealingPotion(),
items.RustySword(),
items.SilverSword(),
items.HolySword()
] | [
"58499300+Manethpak@users.noreply.github.com"
] | 58499300+Manethpak@users.noreply.github.com |
3ff68a4c1c5275640a532fc3ea8213874164cbde | 81435421d3741d7c3a09f66537427717fe0ee823 | /PyBitcoin/utils/conversions.py | 87f2a53f9dbe6a2b8e4fc1b337b17932ecfcda1f | [] | no_license | Scowley4/PyBitcoin | 2d958c3c164d45f7544465e8bab4105fee78352e | 7fa228e06907a57b27cd82710eeb2adda5f2f207 | refs/heads/master | 2020-05-03T07:03:49.276531 | 2019-04-18T15:04:51 | 2019-04-18T15:04:51 | 178,488,139 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 819 | py | def hex_byte_swap(h):
"""Swaps endianness of a hex value."""
return bytes.fromhex(h)[::-1].hex()
def int_to_nbyte_hex(i, n, lendian=False):
"""Converts int to nbyte hex."""
h = ('{:x}'.format(i)).zfill(n*2)
if lendian:
h = hex_byte_swap(h)
return h
def int_to_varint(i):
    """Converts int to varint."""
if i <= 0xfc:
prefix = ''
n_bytes = 1
elif i <= 0xffff:
prefix = 'fd'
n_bytes = 2
elif i <= 0xffffffff:
prefix = 'fe'
n_bytes = 4
elif i <= 0xffffffffffffffff:
prefix = 'ff'
n_bytes = 8
return prefix + int_to_nbyte_hex(i, n_bytes, lendian=True)
def varint_to_int(varint):
"""Converts a varint to int."""
h = varint[2:] if len(varint)>2 else varint
return int(hex_byte_swap(h), 16)
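A self-contained round-trip check of the helpers above (a condensed re-statement of the module, exercising Bitcoin-style little-endian varints):

```python
def hex_byte_swap(h):
    # reverse byte order of a hex string
    return bytes.fromhex(h)[::-1].hex()

def int_to_nbyte_hex(i, n, lendian=False):
    h = ('{:x}'.format(i)).zfill(n * 2)
    return hex_byte_swap(h) if lendian else h

def int_to_varint(i):
    if i <= 0xfc:
        prefix, n_bytes = '', 1
    elif i <= 0xffff:
        prefix, n_bytes = 'fd', 2
    elif i <= 0xffffffff:
        prefix, n_bytes = 'fe', 4
    else:
        prefix, n_bytes = 'ff', 8
    return prefix + int_to_nbyte_hex(i, n_bytes, lendian=True)

def varint_to_int(varint):
    h = varint[2:] if len(varint) > 2 else varint
    return int(hex_byte_swap(h), 16)

assert int_to_varint(100) == '64'          # single byte, no prefix
assert int_to_varint(0x1234) == 'fd3412'   # 2-byte payload, little-endian
for value in (0, 100, 252, 253, 0x1234, 0xffff, 0x12345678):
    assert varint_to_int(int_to_varint(value)) == value
print(int_to_varint(0x1234))  # fd3412
```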
| [
"stephencowley44@gmail.com"
] | stephencowley44@gmail.com |
277a305282a2d7828d7ea54255db9def6e09bb6b | 4f6fff91dce0f9712a4c2d3bd02ac5608e48dc8e | /garageproject/models/pemilik_motor.py | 54942dd0c93d1e9591fa30614289fbd49bb22ae4 | [] | no_license | ManBeunta/ThreeGaragePython | 473046965c6255b7e2e7a8e228b1084e80e89a07 | b02c6b59b1c47f6bef07f275d7125ad65879948a | refs/heads/master | 2023-02-15T20:18:12.912033 | 2021-01-06T17:05:53 | 2021-01-06T17:05:53 | 326,122,936 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 516 | py | from django.db import models
from garageproject.models.size_motor import Size_motor
from garageproject.models.brand_motor import Brand_motor
class Pemilik_motor(models.Model):
name = models.CharField(max_length=100)
nameMotor = models.CharField(max_length=100)
brand_motor = models.ForeignKey(Brand_motor, null=True, on_delete=models.SET_NULL)
size_motor = models.ManyToManyField(Size_motor)
class Meta:
app_label = 'garageproject'
def __str__(self):
return f'{self.name}' | [
"ramadhanyananta@gmail.com"
] | ramadhanyananta@gmail.com |
fa5a3c808bc276ec761788b984f27ff991cc67d7 | f5b221c8b6926ae4ab5ba65e636acb67eb35554c | /groups_manager/migrations/0001_initial.py | 42d87452c34c1d38449475d63bf4b7be9e222bd4 | [
"MIT"
] | permissive | vittoriozamboni/django-groups-manager | 2f388ef6d29c37194ce3f43d103eee054ad51c45 | b07ee33e9dd45cdf21e9d5b5e574c5ac85bab3d9 | refs/heads/master | 2023-07-01T02:30:58.548617 | 2023-06-15T09:49:48 | 2023-06-15T09:49:48 | 25,861,369 | 95 | 33 | MIT | 2023-06-15T09:49:49 | 2014-10-28T09:15:27 | Python | UTF-8 | Python | false | false | 6,381 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import jsonfield.fields
import mptt.fields
import django.db.models.deletion
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('auth', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Group',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=255)),
('codename', models.SlugField(max_length=255, blank=True)),
('description', models.TextField(default=b'', blank=True)),
('comment', models.TextField(default=b'', blank=True)),
('full_name', models.CharField(default=b'', max_length=255, blank=True)),
('properties', jsonfield.fields.JSONField(default={}, blank=True)),
('django_auth_sync', models.BooleanField(default=True)),
('lft', models.PositiveIntegerField(editable=False, db_index=True)),
('rght', models.PositiveIntegerField(editable=False, db_index=True)),
('tree_id', models.PositiveIntegerField(editable=False, db_index=True)),
('level', models.PositiveIntegerField(editable=False, db_index=True)),
('django_group', models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, blank=True, to='auth.Group', null=True)),
],
options={
'ordering': ('name',),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupEntity',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('label', models.CharField(max_length=255)),
('codename', models.SlugField(unique=True, max_length=255, blank=True)),
],
options={
'ordering': ('label',),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupMember',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('group', models.ForeignKey(related_name='group_membership', to='groups_manager.Group', on_delete=models.CASCADE)),
],
options={
'ordering': ('group', 'member'),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupMemberRole',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('label', models.CharField(max_length=255)),
('codename', models.SlugField(unique=True, max_length=255, blank=True)),
],
options={
'ordering': ('label',),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='GroupType',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('label', models.CharField(max_length=255)),
('codename', models.SlugField(unique=True, max_length=255, blank=True)),
],
options={
'ordering': ('label',),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Member',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('first_name', models.CharField(max_length=255)),
('last_name', models.CharField(max_length=255)),
('username', models.CharField(default=b'', max_length=255, blank=True)),
('email', models.EmailField(default=b'', max_length=255, blank=True)),
('django_auth_sync', models.BooleanField(default=True)),
('django_user', models.ForeignKey(related_name='groups_manager_member', on_delete=django.db.models.deletion.SET_NULL, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('last_name', 'first_name'),
},
bases=(models.Model,),
),
migrations.AddField(
model_name='groupmember',
name='member',
field=models.ForeignKey(related_name='group_membership', to='groups_manager.Member', on_delete=models.CASCADE),
preserve_default=True,
),
migrations.AddField(
model_name='groupmember',
name='roles',
field=models.ManyToManyField(to='groups_manager.GroupMemberRole', null=True, blank=True),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='groupmember',
unique_together=set([('group', 'member')]),
),
migrations.AddField(
model_name='group',
name='group_entities',
field=models.ManyToManyField(related_name='groups', null=True, to='groups_manager.GroupEntity', blank=True),
preserve_default=True,
),
migrations.AddField(
model_name='group',
name='group_members',
field=models.ManyToManyField(related_name='groups', through='groups_manager.GroupMember', to='groups_manager.Member'),
preserve_default=True,
),
migrations.AddField(
model_name='group',
name='group_type',
field=models.ForeignKey(related_name='groups', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='groups_manager.GroupType', null=True),
preserve_default=True,
),
migrations.AddField(
model_name='group',
name='parent',
field=mptt.fields.TreeForeignKey(related_name='subgroups', on_delete=models.CASCADE, blank=True, to='groups_manager.Group', null=True),
preserve_default=True,
),
]
| [
"vittorio.zamboni@gmail.com"
] | vittorio.zamboni@gmail.com |
1e02d4a21be50b792bdf4b14e5bcc50bd4fd9afd | c231c3205adc965d98ab69e846e543bf38207b1a | /bin/notify-proxy | ab5951b17672d6248aafc96e8555f19c39b7f9af | [] | permissive | rasa/dotfiles-3 | 879bc9cbfda315d68055c03c375ebcd4e64d70bd | 3438a379c5c70074ca697937126f3f28409db293 | refs/heads/master | 2023-09-05T13:58:03.095705 | 2021-11-22T18:41:44 | 2021-11-22T18:41:44 | 354,554,255 | 0 | 0 | MIT | 2021-04-04T13:44:12 | 2021-04-04T13:44:12 | null | UTF-8 | Python | false | false | 2,812 | #!/usr/bin/env python3
import asyncio
from distutils.spawn import find_executable
import os
import logging
import json
logger = logging.getLogger('notify-proxy')
notify_send_path = find_executable('notify-send')
@asyncio.coroutine
def is_focused(window_title):
environ = dict(os.environ, NOTIFY_TITLE=window_title)
child = yield from asyncio.create_subprocess_exec(
'notify-is-focused',
stdout=asyncio.subprocess.PIPE,
env=environ,
)
stdout, _ = yield from child.communicate()
return child.returncode == 0
@asyncio.coroutine
def notify_send(
summary, app_name=None, body=None, urgency='normal', icon=None, window_title=None, expire_time=None, hint=None, **kwargs
):
if window_title:
logger.debug("Checking focused window is %s", window_title)
is_window_focused = yield from is_focused(window_title)
if is_window_focused:
return logger.info("Inhibited for focused window for %s.", summary)
args = [notify_send_path]
if app_name:
args.extend(['--app-name', app_name])
if expire_time:
args.extend(['--expire-time', str(expire_time)])
if icon:
args.extend(['--icon', icon])
if urgency:
args.extend(['--urgency', urgency])
if hint:
if not isinstance(hint, list):
hint = [hint]
for item in hint:
args.extend(['--hint', item])
args.append(summary)
if body:
args.append(body)
logger.debug("%r", ' '.join(args))
child = yield from asyncio.create_subprocess_exec(*args)
yield from child.communicate()
logger.info("Notification sent: %r", summary)
class NotifyProtocol(asyncio.Protocol):
def data_received(self, payload):
logger.debug("<<< %r", payload)
try:
data = json.loads(payload.decode('utf-8'))
except Exception as e:
            logger.warning("Failed to parse payload:\n%s", e)
            return
        asyncio.ensure_future(notify_send(**data))
DEBUG = os.environ.get('DEBUG')
logging.basicConfig(
level=logging.DEBUG if DEBUG else logging.INFO,
format='%(message)s',
)
logger.info("Starting notify-proxy")
loop = asyncio.get_event_loop()
addr = os.environ.get('PROXY_ADDR', '0.0.0.0')
port = int(os.environ.get('PROXY_PORT', '1216'))
server = loop.run_until_complete(loop.create_server(
NotifyProtocol,
addr,
port,
reuse_address=True,
reuse_port=True,
))
logger.info("Listening on %s:%d", addr, port)
if DEBUG:
data = json.loads(DEBUG)
logger.debug("Testing %r", data)
    loop.run_until_complete(asyncio.ensure_future(notify_send(**data)))
else:
try:
loop.run_forever()
except KeyboardInterrupt:
logger.info('Done')
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
| [
"github@skywww.net"
] | github@skywww.net | |
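The proxy above reads one JSON object per TCP connection and forwards it to `notify-send`. A minimal client sketch — the default host/port and the field names (`summary`, `body`, `urgency`, …) are taken from the script above; the function names here are my own:

```python
import json
import socket

def build_payload(summary, **fields):
    """Encode a notification request the proxy understands: one JSON object."""
    payload = dict(fields, summary=summary)
    return json.dumps(payload).encode("utf-8")

def send_notification(summary, host="127.0.0.1", port=1216, **fields):
    """Push a single notification request to a running notify-proxy."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(build_payload(summary, **fields))

# Example (requires the proxy to be listening):
# send_notification("Build finished", body="all tests passed", urgency="low")
```

The proxy does not send a reply, so the client can close the connection immediately after writing the payload.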
a2cb206e12ecf1ac18256a7a20df090cdb7e6379 | 4fb0feb7f460dae19d788bcea02e294020fc5fc3 | /src/AlgorithmGridView.py | 71566f844b520ad58a7e56863042ef2625da4983 | [] | no_license | TheChouzanOne/AlgorithmCompare | e24033fde6aa6e2ff469bfe08807ad1d4912cc8c | b76dc6221c5ea39ed48f4f3802d94f4bb6181444 | refs/heads/master | 2022-09-06T10:02:55.402314 | 2020-05-26T01:14:48 | 2020-05-26T01:14:48 | 262,175,311 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,297 | py | import graphics as gx
class AlgorithmGridView:
def __init__(self, width, height, nodesPerSide, config):
self.config = config
self.background = self._getAlgorithmGridBackground()
self.titleLabel = self._getTitleLabel()
def draw(self, window):
self.titleLabel.draw(window)
self.background.draw(window)
def undraw(self):
self.titleLabel.undraw()
self.background.undraw()
def setBackgroundColor(self, color):
self.background.setFill(color)
def _getTitleLabel(self):
anchorPoint = gx.Point(
self.config['title']['x'],
self.config['title']['y'],
)
title = gx.Text(anchorPoint, self.config['algorithm'])
title.setFace('times roman')
title.setSize(18)
title.setStyle('bold')
return title
def _getAlgorithmGridBackground(self):
P1 = gx.Point(
self.config['algorithmColumns']['xOffsetUL'],
self.config['algorithmColumns']['yOffsetUL'],
)
P2 = gx.Point(
self.config['algorithmColumns']['xOffsetLR'],
self.config['algorithmColumns']['yOffsetLR'],
)
background = gx.Rectangle(P1, P2)
background.setFill('black')
        return background
| [
"thechouzanone@gmail.com"
] | thechouzanone@gmail.com |
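`AlgorithmGridView` reads its geometry and labels from a nested config dict. The keys below are exactly the ones the class accesses; every concrete value is an arbitrary placeholder:

```python
# Hypothetical config for AlgorithmGridView: only the key names come from the
# class above -- the numeric values are made-up placeholders.
example_config = {
    "algorithm": "Dijkstra",           # text shown by the title label
    "title": {"x": 150, "y": 20},      # anchor point of the title label
    "algorithmColumns": {              # background rectangle corners
        "xOffsetUL": 10, "yOffsetUL": 40,    # upper-left corner
        "xOffsetLR": 290, "yOffsetLR": 440,  # lower-right corner
    },
}

# width, height and nodesPerSide are constructor arguments too, but only the
# dict above is read, so these keys are the contract a caller must satisfy.
required = {"algorithm", "title", "algorithmColumns"}
assert required <= set(example_config)
```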
24c29542f75701de4464297026f75f48b43d3ed4 | 18e2effa75180759d4fed875ae5a9a53dbbd93f9 | /HackerRank/Candies.py | d328bac5aa5726b461bcf42da137cb15f3a4e137 | [] | no_license | Shubham20091999/Problems | 3161422cc3d502df12a52bf62f078e727dc4f3c5 | 565fdd452a9c7f5db58ca3baf7cfeca6c824e459 | refs/heads/master | 2021-07-12T03:07:50.153479 | 2020-08-28T15:30:48 | 2020-08-28T15:30:48 | 198,468,062 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 447 | py | # https://www.hackerrank.com/challenges/candies/problem
def candies(n,arr):
candies = [1]*n
# Forward
for i in range(1,n):
if arr[i]>arr[i-1]:
candies[i] = candies[i-1]+1
# Backward
for i in range(n-1,0,-1):
if arr[i-1]>arr[i] and candies[i-1]<=candies[i]:
candies[i-1] = candies[i]+1
return sum(candies)
n = int(input())
arr = [int(input()) for _ in range(n)]
print(candies(n,arr))
| [
"shubham.patel@jai-kisan.com"
] | shubham.patel@jai-kisan.com |
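The solution above is the standard two-pass greedy: a forward pass enforces the left-neighbour constraint, a backward pass fixes violations against the right neighbour. A self-contained restatement, checked against the HackerRank sample input:

```python
def min_candies(ratings):
    """Minimum candies so every child beats a lower-rated adjacent child."""
    n = len(ratings)
    candies = [1] * n
    # Forward pass: a child rated higher than the left neighbour gets more.
    for i in range(1, n):
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1
    # Backward pass: repair violations against the right neighbour.
    for i in range(n - 1, 0, -1):
        if ratings[i - 1] > ratings[i] and candies[i - 1] <= candies[i]:
            candies[i - 1] = candies[i] + 1
    return sum(candies)

print(min_candies([2, 4, 2, 6, 1, 7, 8, 9, 2, 1]))  # -> 19
```

Both passes are O(n), so the whole solution is linear time and linear space.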
a7a56c33c5dc7c3f5ec6f3f23457b3c7828ea508 | 1ddae0fb9cbcc187a4fcafadf668dba75f9dbc74 | /prac_07/hello_world.py | 4bcf821b59cecf75876a72b03856cbbb12f2b320 | [] | no_license | riskarampengan/CP1404 | d21bba7317d55398074f64b4d3eaf51a6bb9f490 | aa1fc655bcb75b5401e6884baa89401107585a6e | refs/heads/master | 2020-06-21T16:25:01.302935 | 2019-09-26T13:30:06 | 2019-09-26T13:30:06 | 197,501,977 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 173 | py | from kivy.app import App
from kivy.uix.widget import Widget  # Widget lives in kivy.uix.widget, not kivy.app
class HelloWorld(App):
def build(self):
self.root = Widget()
return self.root
HelloWorld().run()
| [
"noreply@github.com"
] | noreply@github.com |
5e54e5543f6dc3aa9f9a040b8151490d7c0ce6cc | 3aecb162945b4738bf5a13ae10a638fb42057bdc | /ex46.py | b0fd8865135316d587da3fa683e94b19bbfd4ab7 | [] | no_license | tmemud/Python-Projects | c35f38f5adfafdc24356ac889780358283147328 | c6330986cd49f19014e18dfb12b976574d962931 | refs/heads/master-Repo | 2023-02-02T17:34:25.229646 | 2023-01-31T13:16:55 | 2023-01-31T13:16:55 | 265,056,954 | 0 | 0 | null | 2023-01-31T13:16:57 | 2020-05-18T20:41:52 | Python | UTF-8 | Python | false | false | 209 | py | #average in a loop
count = 0
total = 0  # use "total" instead of shadowing the built-in sum()
for other_numbers in [1, 2, 3, 4, 5, 6, 7, 8]:
    count = count + 1
    total = total + other_numbers
    print(count, other_numbers, total)
print(count, other_numbers, total / count)
| [
"noreply@github.com"
] | noreply@github.com |
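The loop above keeps a running total and divides by the count at the end. The same idea packaged as a reusable function (the function name and the list-of-averages return shape are my own):

```python
def running_averages(values):
    """Return the running average after each element of `values`."""
    total = 0
    averages = []
    for count, value in enumerate(values, start=1):
        total += value
        averages.append(total / count)
    return averages

print(running_averages([1, 2, 3, 4, 5, 6, 7, 8])[-1])  # -> 4.5
```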
fc579a4c1ef34346215732c86f64d6cccd35a1d7 | 0c212aa63d07e84fbad849d15f2ee6a72aea82d2 | /03-Package/p05.py | 97746f6706a0c92afc79db4efa4a3440e9f2a3a6 | [] | no_license | flyingtothe/Python | e55b54e1b646d391550c8ced12ee92055c902c63 | 064964cb30308a38eefa5dc3059c065fcb89dd9f | refs/heads/master | 2021-08-06T19:44:42.137076 | 2018-12-03T12:15:15 | 2018-12-03T12:15:15 | 145,518,863 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 63 | py | from p01 import *
sayHello()
stu = Student('sss', 88)
stu.say()
| [
"heidemeirenai@163.com"
] | heidemeirenai@163.com |
eb5d7aba99ce10ed488f5be198476d799554acb2 | f4434c85e3814b6347f8f8099c081ed4af5678a5 | /sdk/rdbms/azure-mgmt-rdbms/azure/mgmt/rdbms/mariadb/aio/operations/_private_link_resources_operations.py | 95f8c439d59dff4da2f06f3595ae038121fa0f85 | [
"LicenseRef-scancode-generic-cla",
"MIT",
"LGPL-2.1-or-later"
] | permissive | yunhaoling/azure-sdk-for-python | 5da12a174a37672ac6ed8e3c1f863cb77010a506 | c4eb0ca1aadb76ad892114230473034830116362 | refs/heads/master | 2022-06-11T01:17:39.636461 | 2020-12-08T17:42:08 | 2020-12-08T17:42:08 | 177,675,796 | 1 | 0 | MIT | 2020-03-31T20:35:17 | 2019-03-25T22:43:40 | Python | UTF-8 | Python | false | false | 8,751 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, Optional, TypeVar
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from azure.mgmt.core.exceptions import ARMErrorFormat
from ... import models as _models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class PrivateLinkResourcesOperations:
"""PrivateLinkResourcesOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~maria_db_management_client.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def list_by_server(
self,
resource_group_name: str,
server_name: str,
**kwargs
) -> AsyncIterable["_models.PrivateLinkResourceListResult"]:
"""Gets the private link resources for MariaDB server.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param server_name: The name of the server.
:type server_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either PrivateLinkResourceListResult or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~maria_db_management_client.models.PrivateLinkResourceListResult]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PrivateLinkResourceListResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-06-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_by_server.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+$'),
'serverName': self._serialize.url("server_name", server_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str', min_length=1),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('PrivateLinkResourceListResult', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_by_server.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMariaDB/servers/{serverName}/privateLinkResources'} # type: ignore
async def get(
self,
resource_group_name: str,
server_name: str,
group_name: str,
**kwargs
) -> "_models.PrivateLinkResource":
"""Gets a private link resource for MariaDB server.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param server_name: The name of the server.
:type server_name: str
:param group_name: The name of the private link resource.
:type group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PrivateLinkResource, or the result of cls(response)
:rtype: ~maria_db_management_client.models.PrivateLinkResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PrivateLinkResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-06-01"
accept = "application/json"
# Construct URL
url = self.get.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+$'),
'serverName': self._serialize.url("server_name", server_name, 'str'),
'groupName': self._serialize.url("group_name", group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str', min_length=1),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('PrivateLinkResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMariaDB/servers/{serverName}/privateLinkResources/{groupName}'} # type: ignore
| [
"noreply@github.com"
] | noreply@github.com |
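Both generated operations follow azure-core's paging protocol: `prepare_request` builds either the first request or one derived from `next_link`, `extract_data` returns a page of items plus the continuation link, and iteration stops when the link is empty. Stripped of the SDK machinery, the continuation-token pattern looks like this (the page data is fake, standing in for `PrivateLinkResourceListResult` responses):

```python
import asyncio

async def iterate_pages(fetch_page):
    """Walk a paged API where fetch_page(token) -> (items, next_token)."""
    token = None
    while True:
        items, token = await fetch_page(token)
        for item in items:
            yield item
        if token is None:  # no continuation link: last page reached
            break

async def fake_fetch(token):
    # Two fake pages; a real client would issue an HTTP request here.
    pages = {None: (["res-a", "res-b"], "page-2"), "page-2": (["res-c"], None)}
    return pages[token]

async def _collect():
    return [name async for name in iterate_pages(fake_fetch)]

resources = asyncio.run(_collect())
print(resources)  # -> ['res-a', 'res-b', 'res-c']
```

In the real SDK, `AsyncItemPaged(get_next, extract_data)` plays the role of `iterate_pages`, so callers simply `async for` over the result of `list_by_server`.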
fc6c4503a020087e941bb71c2c4ee42a2d2a12fd | d3d61b0c141d71a9666a3a700ee1c11e76288b3e | /extra/helper.py | 7f1c183e33fe0cf5c73c8b8d2118ed5f1f05fb5c | [] | no_license | sharababy/py-dip | c84aaec7bc2a55c9d37e4eaa806744b42a940454 | 0ef480e9b177bca479e49be4a6fd4bbede0e5e07 | refs/heads/master | 2021-04-29T00:41:22.460947 | 2018-05-16T09:26:01 | 2018-05-16T09:26:01 | 121,835,693 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 455 | py | def truncate(f, n):
'''Truncates/pads a float f to n decimal places without rounding'''
s = '{}'.format(f)
if 'e' in s or 'E' in s:
return '{0:.{1}f}'.format(f, n)
i, p, d = s.partition('.')
return '.'.join([i, (d+'0'*n)[:n]])
def linearSolver(x1,y1, x2,y2):
    m = (y1 - y2) / (x1 - x2)
    c = int(y1 - (m * x1))  # intercept from y = m*x + c, evaluated at (x1, y1)
    m = truncate(m, 2)  # note: truncate() returns a string
    return [m, c]
def paSolver(x1,y1, x2,y2):
# p = xsin(a) + ycoz(a);
# p/cos(a) - x tan(a) = y
#
| [
"bassi.vdt@gmail.com"
] | bassi.vdt@gmail.com |
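For reference, the line through two points in slope-intercept form is m = (y2 - y1) / (x2 - x1) with c = y1 - m*x1. A self-contained check that keeps plain floats (unlike the string/int mix returned by `linearSolver` above):

```python
def line_through(x1, y1, x2, y2):
    """Slope and intercept of the line through two points (vertical lines excluded)."""
    m = (y2 - y1) / (x2 - x1)
    c = y1 - m * x1  # from y = m*x + c, evaluated at (x1, y1)
    return m, c

m, c = line_through(0, 1, 2, 5)
print(m, c)  # -> 2.0 1.0
```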
ae30829a988b8ab8e312dc21f65011b6c2658bcf | 8396606fcb98d8ab2d5e5a139da83e7ce0880324 | /galaxy-dist/lib/galaxy/web/form_builder.py | 72e71ab851b14a80aeb1253dc5f87971c253993d | [
"CC-BY-2.5",
"AFL-2.1",
"AFL-3.0",
"CC-BY-3.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | bopopescu/Learn2Mine-Main | 87b3d6d7fa804568a93c4c7e8324c95574afc819 | acc0267b86ad6a9e5e1619d494c20407d4710e90 | refs/heads/master | 2021-05-29T17:48:17.844094 | 2015-09-14T16:19:36 | 2015-09-14T16:19:36 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 37,185 | py | """
Classes for generating HTML forms
"""
import logging, sys, os, time
from operator import itemgetter
from cgi import escape
from galaxy.util import restore_text, relpath, nice_size, unicodify
from galaxy.web import url_for
from binascii import hexlify
log = logging.getLogger(__name__)
class BaseField(object):
def get_html( self, prefix="" ):
"""Returns the html widget corresponding to the parameter"""
raise TypeError( "Abstract Method" )
def get_disabled_str( self, disabled=False ):
if disabled:
return ' disabled="disabled"'
else:
return ''
class TextField(BaseField):
"""
A standard text input box.
>>> print TextField( "foo" ).get_html()
<input type="text" name="foo" size="10" value="">
>>> print TextField( "bins", size=4, value="default" ).get_html()
<input type="text" name="bins" size="4" value="default">
"""
def __init__( self, name, size=None, value=None ):
self.name = name
self.size = int( size or 10 )
self.value = value or ""
def get_html( self, prefix="", disabled=False ):
value = self.value
if not isinstance( value, basestring ):
value = str( value )
value = unicodify( value )
return unicodify( '<input type="text" name="%s%s" size="%d" value="%s"%s>' \
% ( prefix, self.name, self.size, escape( value, quote=True ), self.get_disabled_str( disabled ) ) )
def set_size(self, size):
self.size = int( size )
class PasswordField(BaseField):
"""
A password input box. text appears as "******"
>>> print PasswordField( "foo" ).get_html()
<input type="password" name="foo" size="10" value="">
>>> print PasswordField( "bins", size=4, value="default" ).get_html()
<input type="password" name="bins" size="4" value="default">
"""
def __init__( self, name, size=None, value=None ):
self.name = name
self.size = int( size or 10 )
self.value = value or ""
def get_html( self, prefix="", disabled=False ):
return unicodify( '<input type="password" name="%s%s" size="%d" value="%s"%s>' \
% ( prefix, self.name, self.size, escape( str( self.value ), quote=True ), self.get_disabled_str( disabled ) ) )
def set_size(self, size):
self.size = int( size )
class TextArea(BaseField):
"""
A standard text area box.
>>> print TextArea( "foo" ).get_html()
<textarea name="foo" rows="5" cols="25"></textarea>
>>> print TextArea( "bins", size="4x5", value="default" ).get_html()
<textarea name="bins" rows="4" cols="5">default</textarea>
"""
_DEFAULT_SIZE = "5x25"
def __init__( self, name, size=None, value=None ):
self.name = name
size = size or self._DEFAULT_SIZE
self.size = size.split("x")
self.rows = int(self.size[0])
self.cols = int(self.size[-1])
self.value = value or ""
def get_html( self, prefix="", disabled=False ):
return unicodify( '<textarea name="%s%s" rows="%d" cols="%d"%s>%s</textarea>' \
% ( prefix, self.name, self.rows, self.cols, self.get_disabled_str( disabled ), escape( str( self.value ), quote=True ) ) )
def set_size(self, rows, cols):
self.rows = rows
self.cols = cols
class CheckboxField(BaseField):
"""
A checkbox (boolean input)
>>> print CheckboxField( "foo" ).get_html()
<input type="checkbox" id="foo" name="foo" value="true"><input type="hidden" name="foo" value="true">
>>> print CheckboxField( "bar", checked="yes" ).get_html()
<input type="checkbox" id="bar" name="bar" value="true" checked="checked"><input type="hidden" name="bar" value="true">
"""
def __init__( self, name, checked=None, refresh_on_change = False, refresh_on_change_values = None ):
self.name = name
self.checked = ( checked == True ) or ( isinstance( checked, basestring ) and ( checked.lower() in ( "yes", "true", "on" ) ) )
self.refresh_on_change = refresh_on_change
self.refresh_on_change_values = refresh_on_change_values or []
if self.refresh_on_change:
self.refresh_on_change_text = ' refresh_on_change="true" '
if self.refresh_on_change_values:
self.refresh_on_change_text = '%s refresh_on_change_values="%s" ' % ( self.refresh_on_change_text, ",".join( self.refresh_on_change_values ) )
else:
self.refresh_on_change_text = ''
def get_html( self, prefix="", disabled=False ):
if self.checked:
checked_text = ' checked="checked"'
else:
checked_text = ""
id_name = prefix + self.name
# The hidden field is necessary because if the check box is not checked on the form, it will
# not be included in the request params. The hidden field ensure that this will happen. When
# parsing the request, the value 'true' in the hidden field actually means it is NOT checked.
# See the is_checked() method below. The prefix is necessary in each case to ensure functional
# correctness when the param is inside a conditional.
return unicodify( '<input type="checkbox" id="%s" name="%s" value="true"%s%s%s><input type="hidden" name="%s%s" value="true"%s>' \
% ( id_name, id_name, checked_text, self.get_disabled_str( disabled ), self.refresh_on_change_text, prefix, self.name, self.get_disabled_str( disabled ) ) )
@staticmethod
def is_checked( value ):
if value == True:
return True
# This may look strange upon initial inspection, but see the comments in the get_html() method
# above for clarification. Basically, if value is not True, then it will always be a list with
# 2 input fields ( a checkbox and a hidden field ) if the checkbox is checked. If it is not
# checked, then value will be only the hidden field.
return isinstance( value, list ) and len( value ) == 2
def set_checked(self, value):
if isinstance( value, basestring ):
self.checked = value.lower() in [ "yes", "true", "on" ]
else:
self.checked = value
class FileField(BaseField):
"""
A file upload input.
>>> print FileField( "foo" ).get_html()
<input type="file" name="foo">
>>> print FileField( "foo", ajax = True ).get_html()
<input type="file" name="foo" galaxy-ajax-upload="true">
"""
def __init__( self, name, value = None, ajax=False ):
self.name = name
self.ajax = ajax
self.value = value
def get_html( self, prefix="" ):
value_text = ""
if self.value:
value_text = ' value="%s"' % escape( str( self.value ), quote=True )
ajax_text = ""
if self.ajax:
ajax_text = ' galaxy-ajax-upload="true"'
return unicodify( '<input type="file" name="%s%s"%s%s>' % ( prefix, self.name, ajax_text, value_text ) )
class FTPFileField(BaseField):
"""
An FTP file upload input.
"""
thead = '''
<table id="grid-table" class="grid">
<thead id="grid-table-header">
<tr>
<th id="select-header"></th>
<th id="name-header">
File
</th>
<th id="size-header">
Size
</th>
<th id="date-header">
Date
</th>
</tr>
</thead>
<tbody id="grid-table-body">
'''
trow = '''
<tr>
<td><input type="checkbox" name="%s%s" value="%s"/></td>
<td>%s</td>
<td>%s</td>
<td>%s</td>
</tr>
'''
tfoot = '''
</tbody>
</table>
'''
def __init__( self, name, dir, ftp_site, value = None ):
self.name = name
self.dir = dir
self.ftp_site = ftp_site
self.value = value
def get_html( self, prefix="" ):
rval = FTPFileField.thead
if self.dir is None:
rval += '<tr><td colspan="4"><em>Please <a href="%s">create</a> or <a href="%s">log in to</a> a Galaxy account to view files uploaded via FTP.</em></td></tr>' % ( url_for( controller='user', action='create', cntrller='user', referer=url_for( controller='root' ) ), url_for( controller='user', action='login', cntrller='user', referer=url_for( controller='root' ) ) )
elif not os.path.exists( self.dir ):
rval += '<tr><td colspan="4"><em>Your FTP upload directory contains no files.</em></td></tr>'
else:
uploads = []
for ( dirpath, dirnames, filenames ) in os.walk( self.dir ):
for filename in filenames:
path = relpath( os.path.join( dirpath, filename ), self.dir )
statinfo = os.lstat( os.path.join( dirpath, filename ) )
uploads.append( dict( path=path,
size=nice_size( statinfo.st_size ),
ctime=time.strftime( "%m/%d/%Y %I:%M:%S %p", time.localtime( statinfo.st_ctime ) ) ) )
if not uploads:
rval += '<tr><td colspan="4"><em>Your FTP upload directory contains no files.</em></td></tr>'
uploads = sorted(uploads, key=itemgetter("path"))
for upload in uploads:
rval += FTPFileField.trow % ( prefix, self.name, upload['path'], upload['path'], upload['size'], upload['ctime'] )
rval += FTPFileField.tfoot
rval += '<div class="toolParamHelp">This Galaxy server allows you to upload files via FTP. To upload some files, log in to the FTP server at <strong>%s</strong> using your Galaxy credentials (email address and password).</div>' % self.ftp_site
return rval
class HiddenField(BaseField):
"""
A hidden field.
>>> print HiddenField( "foo", 100 ).get_html()
<input type="hidden" name="foo" value="100">
"""
def __init__( self, name, value=None ):
self.name = name
self.value = value or ""
def get_html( self, prefix="" ):
return unicodify( '<input type="hidden" name="%s%s" value="%s">' % ( prefix, self.name, escape( str( self.value ), quote=True ) ) )
class SelectField(BaseField):
"""
A select field.
>>> t = SelectField( "foo", multiple=True )
>>> t.add_option( "tuti", 1 )
>>> t.add_option( "fruity", "x" )
>>> print t.get_html()
<select name="foo" multiple>
<option value="1">tuti</option>
<option value="x">fruity</option>
</select>
>>> t = SelectField( "bar" )
>>> t.add_option( "automatic", 3 )
>>> t.add_option( "bazooty", 4, selected=True )
>>> print t.get_html()
<select name="bar" last_selected_value="4">
<option value="3">automatic</option>
<option value="4" selected>bazooty</option>
</select>
>>> t = SelectField( "foo", display="radio" )
>>> t.add_option( "tuti", 1 )
>>> t.add_option( "fruity", "x" )
>>> print t.get_html()
<div><input type="radio" name="foo" value="1" id="foo|1"><label class="inline" for="foo|1">tuti</label></div>
<div><input type="radio" name="foo" value="x" id="foo|x"><label class="inline" for="foo|x">fruity</label></div>
>>> t = SelectField( "bar", multiple=True, display="checkboxes" )
>>> t.add_option( "automatic", 3 )
>>> t.add_option( "bazooty", 4, selected=True )
>>> print t.get_html()
<div class="checkUncheckAllPlaceholder" checkbox_name="bar"></div>
<div><input type="checkbox" name="bar" value="3" id="bar|3"><label class="inline" for="bar|3">automatic</label></div>
<div><input type="checkbox" name="bar" value="4" id="bar|4" checked='checked'><label class="inline" for="bar|4">bazooty</label></div>
"""
def __init__( self, name, multiple=None, display=None, refresh_on_change=False, refresh_on_change_values=None, size=None ):
self.name = name
self.multiple = multiple or False
self.size = size
self.options = list()
if display == "checkboxes":
assert multiple, "Checkbox display only supported for multiple select"
elif display == "radio":
assert not( multiple ), "Radio display only supported for single select"
elif display is not None:
raise Exception, "Unknown display type: %s" % display
self.display = display
self.refresh_on_change = refresh_on_change
self.refresh_on_change_values = refresh_on_change_values or []
if self.refresh_on_change:
self.refresh_on_change_text = ' refresh_on_change="true"'
if self.refresh_on_change_values:
self.refresh_on_change_text = '%s refresh_on_change_values="%s"' % ( self.refresh_on_change_text, escape( ",".join( self.refresh_on_change_values ), quote=True ) )
else:
self.refresh_on_change_text = ''
def add_option( self, text, value, selected = False ):
self.options.append( ( text, value, selected ) )
def get_html( self, prefix="", disabled=False ):
if self.display == "checkboxes":
return self.get_html_checkboxes( prefix, disabled )
elif self.display == "radio":
return self.get_html_radio( prefix, disabled )
else:
return self.get_html_default( prefix, disabled )
def get_html_checkboxes( self, prefix="", disabled=False ):
rval = []
ctr = 0
if len( self.options ) > 1:
rval.append ( '<div class="checkUncheckAllPlaceholder" checkbox_name="%s%s"></div>' % ( prefix, self.name ) ) #placeholder for the insertion of the Select All/Unselect All buttons
for text, value, selected in self.options:
style = ""
if not isinstance( value, basestring ):
value = str( value )
if not isinstance( text, basestring ):
text = str( text )
text = unicodify( text )
escaped_value = escape( unicodify( value ), quote=True )
uniq_id = "%s%s|%s" % (prefix, self.name, escaped_value)
if len(self.options) > 2 and ctr % 2 == 1:
style = " class=\"odd_row\""
selected_text = ""
if selected:
selected_text = " checked='checked'"
rval.append( '<div%s><input type="checkbox" name="%s%s" value="%s" id="%s"%s%s><label class="inline" for="%s">%s</label></div>' % \
( style, prefix, self.name, escaped_value, uniq_id, selected_text, self.get_disabled_str( disabled ), uniq_id, escape( text, quote=True ) ) )
ctr += 1
return unicodify( "\n".join( rval ) )
def get_html_radio( self, prefix="", disabled=False ):
rval = []
ctr = 0
for text, value, selected in self.options:
style = ""
escaped_value = escape( str( value ), quote=True )
uniq_id = "%s%s|%s" % (prefix, self.name, escaped_value)
if len(self.options) > 2 and ctr % 2 == 1:
style = " class=\"odd_row\""
selected_text = ""
if selected:
selected_text = " checked='checked'"
rval.append( '<div%s><input type="radio" name="%s%s"%s value="%s" id="%s"%s%s><label class="inline" for="%s">%s</label></div>' % \
( style,
prefix,
self.name,
self.refresh_on_change_text,
escaped_value,
uniq_id,
selected_text,
self.get_disabled_str( disabled ),
uniq_id,
text ) )
ctr += 1
return unicodify( "\n".join( rval ) )
def get_html_default( self, prefix="", disabled=False ):
if self.multiple:
multiple = " multiple"
else:
multiple = ""
if self.size:
size = ' size="%s"' % str( self.size )
else:
size = ''
rval = []
last_selected_value = ""
for text, value, selected in self.options:
if selected:
selected_text = " selected"
last_selected_value = value
if not isinstance( last_selected_value, basestring ):
last_selected_value = str( last_selected_value )
else:
selected_text = ""
if not isinstance( value, basestring ):
value = str( value )
if not isinstance( text, basestring ):
text = str( text )
rval.append( '<option value="%s"%s>%s</option>' % ( escape( unicodify( value ), quote=True ), selected_text, escape( unicodify( text ), quote=True ) ) )
if last_selected_value:
last_selected_value = ' last_selected_value="%s"' % escape( unicodify( last_selected_value ), quote=True )
rval.insert( 0, '<select name="%s%s"%s%s%s%s%s>' % \
( prefix, self.name, multiple, size, self.refresh_on_change_text, last_selected_value, self.get_disabled_str( disabled ) ) )
rval.append( '</select>' )
return unicodify( "\n".join( rval ) )
def get_selected( self, return_label=False, return_value=False, multi=False ):
'''
Return the currently selected option's label, value or both as a tuple. For
multi-select lists, a list is returned.
'''
if multi:
selected_options = []
for label, value, selected in self.options:
if selected:
if return_label and return_value:
if multi:
selected_options.append( ( label, value ) )
else:
return ( label, value )
elif return_label:
if multi:
selected_options.append( label )
else:
return label
elif return_value:
if multi:
selected_options.append( value )
else:
return value
if multi:
return selected_options
return None
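The `get_selected` method above walks the `(label, value, selected)` triples in `self.options`. A standalone sketch of that same scan, usable without Galaxy (the sample options below are made-up values, not real data):

```python
# Standalone sketch of the (label, value, selected) scan performed by
# SelectField.get_selected; mirrors the flag combinations of the original.
def get_selected(options, return_label=False, return_value=False, multi=False):
    selected_options = []
    for label, value, selected in options:
        if not selected:
            continue
        if return_label and return_value:
            item = (label, value)
        elif return_label:
            item = label
        elif return_value:
            item = value
        else:
            continue
        if multi:
            selected_options.append(item)
        else:
            return item  # single-select: first selected option wins
    return selected_options if multi else None

opts = [("One", "1", False), ("Two", "2", True), ("Three", "3", True)]
first_value = get_selected(opts, return_value=True)
all_labels = get_selected(opts, return_label=True, multi=True)
```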
class DrillDownField( BaseField ):
"""
A hierarchical select field, which allows users to 'drill down' a tree-like set of options.
>>> t = DrillDownField( "foo", multiple=True, display="checkbox", options=[{'name': 'Heading 1', 'value': 'heading1', 'options': [{'name': 'Option 1', 'value': 'option1', 'options': []}, {'name': 'Option 2', 'value': 'option2', 'options': []}, {'name': 'Heading 1', 'value': 'heading1', 'options': [{'name': 'Option 3', 'value': 'option3', 'options': []}, {'name': 'Option 4', 'value': 'option4', 'options': []}]}]}, {'name': 'Option 5', 'value': 'option5', 'options': []}] )
>>> print t.get_html()
<div class="form-row drilldown-container" id="drilldown--666f6f">
<div class="form-row-input">
<div><span class="form-toggle icon-button toggle-expand" id="drilldown--666f6f-68656164696e6731-click"></span>
<input type="checkbox" name="foo" value="heading1" >Heading 1
</div><div class="form-row" id="drilldown--666f6f-68656164696e6731-container" style="float: left; margin-left: 1em;">
<div class="form-row-input">
<input type="checkbox" name="foo" value="option1" >Option 1
</div>
<div class="form-row-input">
<input type="checkbox" name="foo" value="option2" >Option 2
</div>
<div class="form-row-input">
<div><span class="form-toggle icon-button toggle-expand" id="drilldown--666f6f-68656164696e6731-68656164696e6731-click"></span>
<input type="checkbox" name="foo" value="heading1" >Heading 1
</div><div class="form-row" id="drilldown--666f6f-68656164696e6731-68656164696e6731-container" style="float: left; margin-left: 1em;">
<div class="form-row-input">
<input type="checkbox" name="foo" value="option3" >Option 3
</div>
<div class="form-row-input">
<input type="checkbox" name="foo" value="option4" >Option 4
</div>
</div>
</div>
</div>
</div>
<div class="form-row-input">
<input type="checkbox" name="foo" value="option5" >Option 5
</div>
</div>
>>> t = DrillDownField( "foo", multiple=False, display="radio", options=[{'name': 'Heading 1', 'value': 'heading1', 'options': [{'name': 'Option 1', 'value': 'option1', 'options': []}, {'name': 'Option 2', 'value': 'option2', 'options': []}, {'name': 'Heading 1', 'value': 'heading1', 'options': [{'name': 'Option 3', 'value': 'option3', 'options': []}, {'name': 'Option 4', 'value': 'option4', 'options': []}]}]}, {'name': 'Option 5', 'value': 'option5', 'options': []}] )
>>> print t.get_html()
<div class="form-row drilldown-container" id="drilldown--666f6f">
<div class="form-row-input">
<div><span class="form-toggle icon-button toggle-expand" id="drilldown--666f6f-68656164696e6731-click"></span>
<input type="radio" name="foo" value="heading1" >Heading 1
</div><div class="form-row" id="drilldown--666f6f-68656164696e6731-container" style="float: left; margin-left: 1em;">
<div class="form-row-input">
<input type="radio" name="foo" value="option1" >Option 1
</div>
<div class="form-row-input">
<input type="radio" name="foo" value="option2" >Option 2
</div>
<div class="form-row-input">
<div><span class="form-toggle icon-button toggle-expand" id="drilldown--666f6f-68656164696e6731-68656164696e6731-click"></span>
<input type="radio" name="foo" value="heading1" >Heading 1
</div><div class="form-row" id="drilldown--666f6f-68656164696e6731-68656164696e6731-container" style="float: left; margin-left: 1em;">
<div class="form-row-input">
<input type="radio" name="foo" value="option3" >Option 3
</div>
<div class="form-row-input">
<input type="radio" name="foo" value="option4" >Option 4
</div>
</div>
</div>
</div>
</div>
<div class="form-row-input">
<input type="radio" name="foo" value="option5" >Option 5
</div>
</div>
"""
def __init__( self, name, multiple=None, display=None, refresh_on_change=False, options = [], value = [], refresh_on_change_values = [] ):
self.name = name
self.multiple = multiple or False
self.options = options
if value and not isinstance( value, list ):
value = [ value ]
elif not value:
value = []
self.value = value
if display == "checkbox":
assert multiple, "Checkbox display only supported for multiple select"
elif display == "radio":
assert not( multiple ), "Radio display only supported for single select"
else:
raise Exception, "Unknown display type: %s" % display
self.display = display
self.refresh_on_change = refresh_on_change
self.refresh_on_change_values = refresh_on_change_values
if self.refresh_on_change:
self.refresh_on_change_text = ' refresh_on_change="true"'
if self.refresh_on_change_values:
self.refresh_on_change_text = '%s refresh_on_change_values="%s"' % ( self.refresh_on_change_text, ",".join( self.refresh_on_change_values ) )
else:
self.refresh_on_change_text = ''
def get_html( self, prefix="" ):
def find_expanded_options( expanded_options, options, parent_options = [] ):
for option in options:
if option['value'] in self.value:
expanded_options.extend( parent_options )
if option['options']:
new_parents = list( parent_options ) + [ option['value'] ]
find_expanded_options( expanded_options, option['options'], new_parents )
def recurse_options( html, options, base_id, expanded_options = [] ):
for option in options:
escaped_option_value = escape( str( option['value'] ), quote=True )
selected = ( option['value'] in self.value )
if selected:
selected = ' checked'
else:
selected = ''
span_class = 'form-toggle icon-button toggle'
if option['value'] not in expanded_options:
span_class = "%s-expand" % ( span_class )
html.append( '<div class="form-row-input">')
drilldown_group_id = "%s-%s" % ( base_id, hexlify( option['value'] ) )
if option['options']:
html.append( '<div><span class="%s" id="%s-click"></span>' % ( span_class, drilldown_group_id ) )
html.append( '<input type="%s" name="%s%s" value="%s" %s>%s' % ( self.display, prefix, self.name, escaped_option_value, selected, option['name']) )
if option['options']:
html.append( '</div><div class="form-row" id="%s-container" style="float: left; margin-left: 1em;">' % ( drilldown_group_id ) )
recurse_options( html, option['options'], drilldown_group_id, expanded_options )
html.append( '</div>')
html.append( '</div>')
drilldown_id = "drilldown-%s-%s" % ( hexlify( prefix ), hexlify( self.name ) )
rval = []
rval.append( '<div class="form-row drilldown-container" id="%s">' % ( drilldown_id ) )
expanded_options = []
find_expanded_options( expanded_options, self.options )
recurse_options( rval, self.options, drilldown_id, expanded_options )
rval.append( '</div>' )
return unicodify( '\n'.join( rval ) )
class AddressField(BaseField):
@staticmethod
def fields():
return [ ( "short_desc", "Short address description", "Required" ),
( "name", "Name", "Required" ),
( "institution", "Institution", "Required" ),
( "address", "Address", "Required" ),
( "city", "City", "Required" ),
( "state", "State/Province/Region", "Required" ),
( "postal_code", "Postal Code", "Required" ),
( "country", "Country", "Required" ),
( "phone", "Phone", "" ) ]
def __init__(self, name, user=None, value=None, params=None):
self.name = name
self.user = user
self.value = value
self.select_address = None
self.params = params
def get_html( self, disabled=False ):
address_html = ''
add_ids = ['none']
if self.user:
for a in self.user.addresses:
add_ids.append( str( a.id ) )
add_ids.append( 'new' )
self.select_address = SelectField( self.name,
refresh_on_change=True,
refresh_on_change_values=add_ids )
if self.value == 'none':
self.select_address.add_option( 'Select one', 'none', selected=True )
else:
self.select_address.add_option( 'Select one', 'none' )
if self.user:
for a in self.user.addresses:
if not a.deleted:
if self.value == str( a.id ):
self.select_address.add_option( a.desc, str( a.id ), selected=True )
# Display this address
address_html += '''
<div class="form-row">
%s
</div>
''' % a.get_html()
else:
self.select_address.add_option( a.desc, str( a.id ) )
if self.value == 'new':
self.select_address.add_option( 'Add a new address', 'new', selected=True )
for field_name, label, help_text in self.fields():
add_field = TextField( self.name + '_' + field_name,
40,
restore_text( self.params.get( self.name + '_' + field_name, '' ) ) )
address_html += '''
<div class="form-row">
<label>%s</label>
%s
''' % ( label, add_field.get_html( disabled=disabled ) )
if help_text:
address_html += '''
<div class="toolParamHelp" style="clear: both;">
%s
</div>
''' % help_text
address_html += '''
</div>
'''
else:
self.select_address.add_option( 'Add a new address', 'new' )
return self.select_address.get_html( disabled=disabled ) + address_html
class WorkflowField( BaseField ):
def __init__( self, name, user=None, value=None, params=None ):
self.name = name
self.user = user
self.value = value
self.select_workflow = None
self.params = params
def get_html( self, disabled=False ):
self.select_workflow = SelectField( self.name )
if self.value == 'none':
self.select_workflow.add_option( 'Select one', 'none', selected=True )
else:
self.select_workflow.add_option( 'Select one', 'none' )
if self.user:
for a in self.user.stored_workflows:
if not a.deleted:
if str( self.value ) == str( a.id ):
self.select_workflow.add_option( a.name, str( a.id ), selected=True )
else:
self.select_workflow.add_option( a.name, str( a.id ) )
return self.select_workflow.get_html( disabled=disabled )
class WorkflowMappingField( BaseField):
def __init__( self, name, user=None, value=None, params=None, **kwd ):
# DBTODO integrate this with the new __build_workflow approach in requests_common. As it is, not particularly useful.
self.name = name
self.user = user
self.value = value
self.select_workflow = None
self.params = params
self.workflow_inputs = []
def get_html( self, disabled=False ):
self.select_workflow = SelectField( self.name, refresh_on_change = True )
workflow_inputs = []
if self.value == 'none':
self.select_workflow.add_option( 'Select one', 'none', selected=True )
else:
self.select_workflow.add_option( 'Select one', 'none' )
if self.user:
for a in self.user.stored_workflows:
if not a.deleted:
if str( self.value ) == str( a.id ):
self.select_workflow.add_option( a.name, str( a.id ), selected=True )
else:
self.select_workflow.add_option( a.name, str( a.id ) )
if self.value and self.value != 'none':
# Workflow selected. Find all inputs.
for workflow in self.user.stored_workflows:
if workflow.id == int(self.value):
for step in workflow.latest_workflow.steps:
if step.type == 'data_input':
if step.tool_inputs and "name" in step.tool_inputs:
workflow_inputs.append((step.tool_inputs['name'], TextField( '%s_%s' % (self.name, step.id), 20)))
# Do something more appropriate here and allow selection of inputs
return self.select_workflow.get_html( disabled=disabled ) + ''.join(['<div class="form-row"><label>%s</label>%s</div>' % (s[0], s[1].get_html()) for s in workflow_inputs])
def get_display_text(self):
if self.value:
return self.value
else:
return '-'
class HistoryField( BaseField ):
def __init__( self, name, user=None, value=None, params=None ):
self.name = name
self.user = user
self.value = value
self.select_history = None
self.params = params
def get_html( self, disabled=False ):
self.select_history = SelectField( self.name )
if self.value == 'none':
self.select_history.add_option( 'No Import', 'none', selected=True )
self.select_history.add_option( 'New History', 'new' )
else:
self.select_history.add_option( 'No Import', 'none' )
if self.value == 'new':
self.select_history.add_option( 'New History', 'new', selected=True )
else:
self.select_history.add_option( 'New History', 'new')
if self.user:
for a in self.user.histories:
if not a.deleted:
if str( self.value ) == str( a.id ):
self.select_history.add_option( a.name, str( a.id ), selected=True )
else:
self.select_history.add_option( a.name, str( a.id ) )
return self.select_history.get_html( disabled=disabled )
def get_display_text(self):
if self.value:
return self.value
else:
return '-'
class LibraryField( BaseField ):
def __init__( self, name, value=None, trans=None ):
self.name = name
self.lddas = value
self.trans = trans
def get_html( self, prefix="", disabled=False ):
if not self.lddas:
ldda_ids = ""
text = "Select library dataset(s)"
else:
ldda_ids = "||".join( [ self.trans.security.encode_id( ldda.id ) for ldda in self.lddas ] )
text = "<br />".join( [ "%s. %s" % (i+1, ldda.name) for i, ldda in enumerate(self.lddas)] )
return unicodify( '<a href="javascript:void(0);" class="add-librarydataset">%s</a> \
<input type="hidden" name="%s%s" value="%s">' % ( text, prefix, self.name, escape( str(ldda_ids), quote=True ) ) )
def get_display_text(self):
        # self.lddas holds the selected library datasets as a list; the
        # original referenced a nonexistent self.ldda attribute.
        if self.lddas:
            return ", ".join( [ ldda.name for ldda in self.lddas ] )
        else:
            return 'None'
def get_suite():
"""Get unittest suite for this module"""
import doctest, sys
return doctest.DocTestSuite( sys.modules[__name__] )
# --------- Utility methods -----------------------------
def build_select_field( trans, objs, label_attr, select_field_name, initial_value='none',
selected_value='none', refresh_on_change=False, multiple=False, display=None, size=None ):
"""
Build a SelectField given a set of objects. The received params are:
- objs: the set of objects used to populate the option list
- label_attr: the attribute of each obj (e.g., name, email, etc ) whose value is used to populate each option label.
- If the string 'self' is passed as label_attr, each obj in objs is assumed to be a string, so the obj itself is used
- select_field_name: the name of the SelectField
- initial_value: the value of the first option in the SelectField - allows for an option telling the user to select something
- selected_value: the value of the currently selected option
- refresh_on_change: True if the SelectField should perform a refresh_on_change
"""
if initial_value == 'none':
values = [ initial_value ]
else:
values = []
for obj in objs:
if label_attr == 'self':
# Each obj is a string
values.append( obj )
else:
values.append( trans.security.encode_id( obj.id ) )
if refresh_on_change:
refresh_on_change_values = values
else:
refresh_on_change_values = []
select_field = SelectField( name=select_field_name,
multiple=multiple,
display=display,
refresh_on_change=refresh_on_change,
refresh_on_change_values=refresh_on_change_values,
size=size )
for obj in objs:
if label_attr == 'self':
# Each obj is a string
if str( selected_value ) == str( obj ):
select_field.add_option( obj, obj, selected=True )
else:
select_field.add_option( obj, obj )
else:
label = getattr( obj, label_attr )
if str( selected_value ) == str( obj.id ) or str( selected_value ) == trans.security.encode_id( obj.id ):
select_field.add_option( label, trans.security.encode_id( obj.id ), selected=True )
else:
select_field.add_option( label, trans.security.encode_id( obj.id ) )
return select_field
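For the `label_attr == 'self'` case, `build_select_field` treats each obj as a plain string and marks the one matching `selected_value`. A minimal standalone sketch of just that option-building logic (no Galaxy `trans` object needed in this simplified form; the sample values are assumptions):

```python
# Simplified mirror of build_select_field's option marking for the
# label_attr == 'self' case, where each obj is a plain string.
def build_options(objs, selected_value='none'):
    """Return (label, value, selected) triples for string objects."""
    return [(obj, obj, str(selected_value) == str(obj)) for obj in objs]

options = build_options(['fastq', 'bam'], selected_value='bam')
```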
| [
"nashtf@g.cofc.edu"
] | nashtf@g.cofc.edu |
931e52355a9877de357fa0e0b6a602e2de02d64e | 5a52ccea88f90dd4f1acc2819997fce0dd5ffb7d | /alipay/aop/api/response/AlipayBossFncInvoiceBatchqueryResponse.py | cfe0f6e638e1c8ae763e1e65af4347042e876024 | [
"Apache-2.0"
] | permissive | alipay/alipay-sdk-python-all | 8bd20882852ffeb70a6e929038bf88ff1d1eff1c | 1fad300587c9e7e099747305ba9077d4cd7afde9 | refs/heads/master | 2023-08-27T21:35:01.778771 | 2023-08-23T07:12:26 | 2023-08-23T07:12:26 | 133,338,689 | 247 | 70 | Apache-2.0 | 2023-04-25T04:54:02 | 2018-05-14T09:40:54 | Python | UTF-8 | Python | false | false | 2,739 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.response.AlipayResponse import AlipayResponse
from alipay.aop.api.domain.MultiCurrencyMoneyOpenApi import MultiCurrencyMoneyOpenApi
from alipay.aop.api.domain.ArInvoiceOpenApiResponse import ArInvoiceOpenApiResponse
class AlipayBossFncInvoiceBatchqueryResponse(AlipayResponse):
def __init__(self):
super(AlipayBossFncInvoiceBatchqueryResponse, self).__init__()
self._amt = None
self._current_page = None
self._items_page = None
self._result_set = None
self._total_items = None
self._total_pages = None
@property
def amt(self):
return self._amt
@amt.setter
def amt(self, value):
if isinstance(value, MultiCurrencyMoneyOpenApi):
self._amt = value
else:
self._amt = MultiCurrencyMoneyOpenApi.from_alipay_dict(value)
@property
def current_page(self):
return self._current_page
@current_page.setter
def current_page(self, value):
self._current_page = value
@property
def items_page(self):
return self._items_page
@items_page.setter
def items_page(self, value):
self._items_page = value
@property
def result_set(self):
return self._result_set
@result_set.setter
def result_set(self, value):
if isinstance(value, list):
self._result_set = list()
for i in value:
if isinstance(i, ArInvoiceOpenApiResponse):
self._result_set.append(i)
else:
self._result_set.append(ArInvoiceOpenApiResponse.from_alipay_dict(i))
@property
def total_items(self):
return self._total_items
@total_items.setter
def total_items(self, value):
self._total_items = value
@property
def total_pages(self):
return self._total_pages
@total_pages.setter
def total_pages(self, value):
self._total_pages = value
def parse_response_content(self, response_content):
response = super(AlipayBossFncInvoiceBatchqueryResponse, self).parse_response_content(response_content)
if 'amt' in response:
self.amt = response['amt']
if 'current_page' in response:
self.current_page = response['current_page']
if 'items_page' in response:
self.items_page = response['items_page']
if 'result_set' in response:
self.result_set = response['result_set']
if 'total_items' in response:
self.total_items = response['total_items']
if 'total_pages' in response:
self.total_pages = response['total_pages']
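The `parse_response_content` method above copies known keys from the response dict into attributes one by one. A standalone sketch of that pattern with a lookup table instead of repeated `if` blocks (class and field names here are illustrative, not part of the Alipay SDK):

```python
# Standalone sketch of the key-by-key response copy done by
# parse_response_content; the response dict below is fabricated sample data.
class PageInfo(object):
    FIELDS = ('current_page', 'items_page', 'total_items', 'total_pages')

    def __init__(self):
        for name in self.FIELDS:
            setattr(self, name, None)

    def parse(self, response):
        # copy only the keys that are actually present
        for name in self.FIELDS:
            if name in response:
                setattr(self, name, response[name])
        return self

info = PageInfo().parse({'current_page': 1, 'total_pages': 3})
```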
| [
"liuqun.lq@alibaba-inc.com"
] | liuqun.lq@alibaba-inc.com |
22b53232f46e095efd4866c40161ce323aec31d8 | 7940c9c528057521a4b2c6a9608edf48706139c9 | /code.py | 6c3b6df0054ed69aa9d8d1242634d30b89dc3114 | [] | no_license | msafdarkhan/mydata | 4f700d7b9c6709eb200cc733841ba0aa6f75c13a | 37589f71f68afb2d92ca592e8093dbf940e5c73d | refs/heads/master | 2020-06-01T20:21:38.214830 | 2019-06-13T17:50:16 | 2019-06-13T17:50:16 | 190,915,963 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,627 | py | from tkinter import *
#Defining all Sub-routines:
####################################################################################
def identify():
import cv2
import numpy as np
import os
identifier = cv2.face.LBPHFaceRecognizer_create()
identifier.read('trainer/trainer.yml')
cascadePath = "Cascades.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
    font = cv2.FONT_HERSHEY_SIMPLEX
    # alternative fonts:
    #font = cv2.FONT_HERSHEY_SCRIPT_SIMPLEX
    #font = cv2.FONT_ITALIC
    #initiate id counter
id = 0
# names related to ids respectively starting from zero
names = ['None','Tahir','Safdar','Guest']
# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640)
cam.set(4, 480)
# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)
while True:
ret, img =cam.read()
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor = 1.2,
minNeighbors = 5,
minSize = (int(minW), int(minH)),
)
for(x,y,w,h) in faces:
cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
id, confidence = identifier.predict(gray[y:y+h,x:x+w])
            # LBPH confidence is a distance: lower means a closer match
            # (0 would be a perfect match), so accept predictions below 70
if (confidence < 70):
id = names[id]
confidence = " {0}%".format(round(100 - confidence))
else:
id = "unknown"
confidence = " {0}%".format(round(100 - confidence))
cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)
cv2.imshow('camera',img)
k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
if k == 27:
break
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
###########################################################################################end recognizing
def entrant():
import cv2
import os
cam = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
face_detector = cv2.CascadeClassifier('Cascades.xml')
# For each person, enter one numeric face id
    face_id = input('\n enter user id and press <return> ==> ')
print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize individual sampling face count
count = 0
while(True):
ret, img = cam.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
count += 1
txtis= 'image saved: '+str(count)
cv2.putText(img, txtis, (x+5,y-5), font, 1, (255,255,255), 2)
# Save the captured image into the datasets folder
cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])
cv2.imshow('image', img)
k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
if k == 27:
break
elif count >= 30: # Take 30 face sample and stop video
break
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
#######################################################################################end entry
def train():
import cv2
import numpy as np
from PIL import Image
import os
# Path for face image database
path = 'dataset'
identifier = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("Cascades.xml")
# function to get the images and label data
def getImagesAndLabels(path):
imagePaths = [os.path.join(path,f) for f in os.listdir(path)]
faceSamples=[]
ids = []
for imagePath in imagePaths:
PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
img_numpy = np.array(PIL_img,'uint8')
id = int(os.path.split(imagePath)[-1].split(".")[1])
faces = detector.detectMultiScale(img_numpy)
for (x,y,w,h) in faces:
faceSamples.append(img_numpy[y:y+h,x:x+w])
ids.append(id)
return faceSamples,ids
print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
identifier.train(faces, np.array(ids))
# Save the model into trainer/trainer.yml
identifier.write('trainer/trainer.yml')
    # Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))
#######################################################################################end training
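The trainer above recovers the numeric user id from filenames of the form `User.<id>.<count>.jpg`, using the same split expression as `getImagesAndLabels`. A standalone check of that parsing (the filenames below are examples, not real dataset files):

```python
import os

# The capture routine saves faces as "User.<id>.<count>.jpg"; the trainer
# recovers <id> with this split, isolated here for clarity.
def id_from_path(image_path):
    return int(os.path.split(image_path)[-1].split(".")[1])

uid = id_from_path(os.path.join("dataset", "User.2.15.jpg"))
```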
main=Tk()
main.title("FYP GUI")
main.geometry("500x500")
#Adding a label
Label(main, text="Select").grid(row=0, column=0, sticky=W)
#Adding buttons
Button(main, text="Identify", width=14,bg="light green", command=identify).grid(row=1, column=0,sticky=W)
Button(main, text="New Entry", width=14,bg="sky blue", command=entrant).grid(row=2, column=0 ,sticky=W)
Button(main, text="Training", width=14,bg="orange", command=train).grid(row=3, column=0, sticky=W)
#start the Tk event loop
main.mainloop() | [
"noreply@github.com"
] | noreply@github.com |
5106b4bc6065ba4a922017012271755814c6cbb5 | ad0df4e7cd5a75f9272ec40321c9e4863d40dc18 | /extensions.py | 0f2c9962d20e3e92841d4e984adf2adf3548e067 | [] | no_license | gitxiangxiang/crawls_of_guangxi | 2ddad8d61384553aaddd9367ea6d93289350428e | f5174928cc072078c7bfbde3e25883ee15125d77 | refs/heads/master | 2023-01-02T07:36:28.806193 | 2020-11-01T17:17:18 | 2020-11-01T17:17:18 | 297,000,424 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,600 | py | from scrapy import signals
from scrapy.crawler import CrawlerProcess
from scrapy.exceptions import NotConfigured
from guangxi_by_sxp.utils import SpiderCounter
class SpiderOpenCloseLogging(object):
def __init__(self, item_count):
self.item_count = item_count
self.items_scraped = 0
self.spider_counter = SpiderCounter()
@classmethod
def from_crawler(cls, crawler):
# first check if the extension should be enabled and raise
# NotConfigured otherwise
# if not crawler.settings.getbool('MYEXT_ENABLED'):
#
# raise NotConfigured
# get the number of items from settings
item_count = 5
# instantiate the extension object
ext = cls(item_count)
# connect the extension object to signals
crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
# return the extension object
return ext
def spider_opened(self, spider):
spider.log("----opened spider %s---------" % spider.name)
self.spider_counter.open_spider()
def spider_closed(self, spider):
spider.log("----closed spider %s---------" % spider.name)
self.spider_counter.close_spider()
def item_scraped(self, item, spider):
self.items_scraped += 1
if self.items_scraped % self.item_count == 0:
spider.log("scraped %d items" % self.items_scraped) | [
"xiang@git.com"
] | xiang@git.com |
77eb6e1582768564296247181be6190754fbe83c | 6da178d1c3fee5709e43d8162359c1b087c29a92 | /Intermediates/level_1/level_1d.py | 4324803b75398e8a5cb52a6ce3ac48c40200302d | [] | no_license | joewofford/coding_exercise | d74e50ad3ecbb0cee13c78fb44a7a317847f2863 | ae57eee4c556e78b04668dd6cb9e799f309bc3f9 | refs/heads/master | 2020-04-06T06:54:50.296103 | 2016-08-30T16:10:31 | 2016-08-30T16:10:31 | 65,870,783 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,886 | py | from __future__ import division
import requests
import random
import time
import websocket
import json
import Queue
import re
from threading import Thread
from bs4 import BeautifulSoup
from selenium import webdriver
from datetime import datetime
AUTH = {'X-Starfighter-Authorization': '0c758ac77e1595c23756812113e730df324730e4'}
USERNAME = 'joewofford'
PS = 'givepeyton#18'
PATH_TO_CHROMEDRIVER = '/Users/joewofford/anaconda/chromedriver'
DELTA = .95
TRADE_WINDOW = 5
MAX_QUOTE_AGE = 1
SPREAD_SPLIT = .9
class Account(object):
def __init__(self):
self.target_price = 0
return
def block_buy(self, qty=100000, max_buy=200, owned=0):
self.qty = qty
self.max_buy = max_buy
self.owned = owned
b_chrome = self._login()
self._initiate_market(b_chrome)
self._parse_trade_info(b_chrome)
self._get_venue()
q = Queue.Queue()
tickertape = Thread(target=self._launch_tickertape, args=(q,))
tickertape.start()
#testing to see if the ticker has launched yet, if not then wait
while q.empty():
time.sleep(1)
self._extract_target_price(b_chrome)
self._iterative_buying(q)
print 'We have now bought {} shares of {} on account {}.'.format(self.owned, self.ticker, self.account)
return
#Example quote from tickertape:
#{"ok":true,"quote":{"symbol":"YLVO","venue":"WDBTEX","bid":5470,"ask":5497,"bidSize":5,"askSize":11446,"bidDepth":41023,"askDepth":34338,"last":5470,"lastSize":348,"lastTrade":"2016-08-18T20:56:09.856793761Z","quoteTime":"2016-08-18T20:56:09.856854129Z"}}
def _iterative_buying(self, q):
print 'Starting iterative buying...'
while self.qty > self.owned:
if not q.empty():
quote = json.loads(q.get())
if all(x in quote['quote'] for x in ['bid', 'ask', 'quoteTime']):
q_time = datetime.strptime(quote['quote']['quoteTime'].split('.')[0],'%Y-%m-%dT%H:%M:%S')
                    #Check the quote's age so we only act on reasonably current prices
if (datetime.utcnow() - q_time).total_seconds() < MAX_QUOTE_AGE:
print 'Age good.'
ask = quote['quote']['ask']
#Checking if the 'current' ask price is lower than our target price
if ask < (self.target_price*100):
print 'Price good.'
bid = quote['quote']['bid']
buy_size = min(random.randint(1, self.max_buy), (self.qty - self.owned))
#Determining what our bid price should be
buy_bid = bid + (ask - bid) * SPREAD_SPLIT
print 'Ordering {} shares of {} at a bid of {}, out of {} left to buy.'.format(str(buy_size), self.ticker, str(buy_bid/100), str(self.qty-self.owned))
buy = self._single_buy(buy_size, 'limit', buy_bid)
while buy.status_code != requests.codes.ok:
buy = self._single_buy(buy_size, 'limit', buy_bid)
t_sent = time.time()
print 'Trade sent.'
remaining = buy.json()['qty']
                        #Check whether the full order was filled; monitor its status for the duration of TRADE_WINDOW, then cancel and add the number of shares purchased to the class attribute
while remaining > 0:
if time.time() - t_sent > TRADE_WINDOW:
self._cancel_buy(buy.json()['id'])
print 'Trade cancelled.'
break
time.sleep(.05)
remaining = self._trade_status(buy.json()['id']).json()['qty']
self.owned = self.owned + (buy_size - remaining)
print 'We just bought {} shares out of an intial request of {}, giving us a total of {} currently owned.'.format(str(buy_size-remaining), str(buy_size), str(self.owned))
time.sleep(.2)
return
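The loop above prices its bids a fixed fraction of the way across the bid/ask spread (`SPREAD_SPLIT`) and discards quotes older than `MAX_QUOTE_AGE`. The helper below isolates that arithmetic (constants copied from this file; prices are in cents, per the API):

```python
SPREAD_SPLIT = .9
MAX_QUOTE_AGE = 1  # seconds

def buy_bid_price(bid, ask, split=SPREAD_SPLIT):
    """Place the bid `split` of the way from bid toward ask (cents)."""
    return bid + (ask - bid) * split

def quote_is_fresh(age_seconds, max_age=MAX_QUOTE_AGE):
    """True if the quote is recent enough to act on."""
    return age_seconds < max_age

# e.g. bid=5470, ask=5497 as in the sample tickertape quote
price = buy_bid_price(5470, 5497)
```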
def _get_tickers(self, venue):
call = requests.get('https://api.stockfighter.io/ob/api/venues/{}/stocks'.format(venue), headers=AUTH)
return [d['symbol'] for d in call.json()['symbols']]
def _get_venue(self):
print 'Getting venue now.'
call = requests.get('https://api.stockfighter.io/ob/api/venues', headers=AUTH)
venues = [d['venue'] for d in call.json()['venues']]
for venue in venues:
if self.ticker in self._get_tickers(venue):
self.venue = venue
return
def _single_buy(self, qty, order_type, price=0):
order = {
'account': self.account,
'venue': self.venue,
'stock': self.ticker,
'qty': qty,
'direction': 'buy',
'price': price,
'orderType': order_type
}
call = requests.post('https://api.stockfighter.io/ob/api/venues/{}/stocks/{}/orders'.format(self.venue, self.ticker), headers=AUTH, json=order)
return call
def _trade_status(self, t_id):
call = requests.get('https://api.stockfighter.io/ob/api/venues/{}/stocks/{}/orders/{}'.format(self.venue, self.ticker, t_id), headers=AUTH)
print call.json()
return call
def _cancel_buy(self, t_id):
call = requests.post('https://api.stockfighter.io/ob/api/venues/{}/stocks/{}/orders/{}/cancel'.format(self.venue, self.ticker, t_id), headers=AUTH)
return call
def _launch_tickertape(self, q):
url="wss://api.stockfighter.io/ob/api/ws/{}/venues/{}/tickertape/stocks/{}".format(self.account, self.venue, self.ticker)
print 'Launching tickertape now.'
while 1:
try:
tick = ws.recv()
with q.mutex:
q.queue.clear()
q.put(tick)
except:
ws = websocket.create_connection(url)
return
def _login(self):
url = 'https://www.stockfighter.io'
b_chrome = webdriver.Chrome(executable_path = PATH_TO_CHROMEDRIVER)
b_chrome.get(url)
b_chrome.find_element_by_name('session[username]').send_keys(USERNAME)
b_chrome.find_element_by_name('session[password]').send_keys(PS)
b_chrome.find_element_by_xpath('//*[@id="loginform"]/button').click()
time.sleep(10)
return b_chrome
def _initiate_market(self, b_chrome):
b_chrome.find_element_by_xpath('//*[@id="app"]/div/div/div/div/div[1]/div[2]/ul/li[1]/b/a').click()
time.sleep(3)
b_chrome.find_element_by_xpath('//*[@id="wrapping"]/nav/div/ul/li[2]').click()
time.sleep(1)
b_chrome.find_element_by_xpath('//*[@id="wrapping"]/nav/div/ul/li[2]/ul/li[1]/a/span[1]/b').click()
time.sleep(10)
return
def _parse_trade_info(self, b_chrome):
self.account = b_chrome.find_element_by_xpath('/html/body/div[3]/div/div[2]/div/div/div[2]/span/p[2]/strong[2]').text.split()[1]
self.ticker = b_chrome.find_element_by_xpath('/html/body/div[3]/div/div[2]/div/div/div[2]/span/p[2]/em').text
time.sleep(2)
b_chrome.find_element_by_xpath('/html/body/div[3]/div/div[2]/div/div/div[3]/button').click()
return
def _parse_target_price(self, b_chrome):
temp = b_chrome.find_elements_by_xpath('//*[@id="wrapping"]/div/div[1]/div/div[1]/p')
if len(temp) == 0:
return False
else:
self.target_price = float(temp[0].text.split('$')[-1][:-1])
return True
def _sum_fills(self, trade):
return sum([x['qty'] for x in trade.json()['fills']])
def _extract_target_price(self, b_chrome):
print 'Starting to try to extract target price...'
        #Make small buys to prod the target price into appearing; once it shows up, parse it and store it as an attribute.
while self.target_price == 0:
buy = self._single_buy(10, 'market')
while buy.status_code != requests.codes.ok:
buy = self._single_buy(10, 'market')
t_sent = time.time()
print 'Trade sent.'
print buy.json()
remaining = buy.json()['qty']
print 'Remaining = {}'.format(str(remaining))
            #Check whether the full order was filled; if not, monitor its status until the trade has been open for TRADE_WINDOW, then cancel it and add the number of shares purchased to the class attribute.
while remaining >0:
if time.time() - t_sent > TRADE_WINDOW:
self._cancel_buy(buy.json()['id'])
print 'Trade cancelled.'
break
time.sleep(.1)
remaining = self._trade_status(buy.json()['id']).json()['qty']
print 'Remaining = {}'.format(str(remaining))
self.owned = self.owned + (10 - remaining)
            print 'We just bought {} shares out of an initial request of {}, giving us a total of {} currently owned.'.format(str(10-remaining), '10', str(self.owned))
#Now pause for a bit and check if the trade desk has revealed the target price, then iterate before trading again to prod it along.
time.sleep(1)
for x in xrange(3):
if self._parse_target_price(b_chrome):
print 'Target price = {}, ending extraction function.'.format(str(self.target_price))
return
else:
time.sleep(10)
if __name__ == '__main__':
acc = Account()
acc.block_buy()
| [
"joewofford@Josephs-Mini.domain"
] | joewofford@Josephs-Mini.domain |
c586bd5693c7518eb1d938ce2ad960a01f98d250 | f95e73867e4383784d6fdd6a1c9fe06cffbfd019 | /ProjectEuler/p004_Largest_palindrome_product.py | 5d8c9510aee2afac0a0864fbbfc27608ef991779 | [] | no_license | linxiaohui/CodeLibrary | da03a9ed631d1d44b098ae393b4bd9e378ab38d3 | 96a5d22a8c442c4aec8a064ce383aba8a7559b2c | refs/heads/master | 2021-01-18T03:42:39.536939 | 2018-12-11T06:47:15 | 2018-12-11T06:47:15 | 85,795,767 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 342 | py | #!/usr/bin/env python
# -*- coding:utf-8 -*-
import timeit
def PE004():
M=0
    for i in range(100,1000):
        for j in range(i,1000):
k=i*j
#if k==int(str(k)[::-1]) and k>M :
if k>M and k==int(str(k)[::-1]) :
M=k
print M
print timeit.timeit(PE004, number=1)
| [
"llinxiaohui@126.com"
] | llinxiaohui@126.com |
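The solution above tests palindromes by comparing a number's string to its reverse. A minimal Python 3 sketch of the same search, with the helper factored out (function names are illustrative, not from the file):

```python
def is_palindrome(n):
    # A palindromic number reads the same forwards and backwards.
    s = str(n)
    return s == s[::-1]

def largest_palindrome_product(lo=100, hi=1000):
    # Scan all unordered pairs of factors in [lo, hi) and keep the
    # largest palindromic product found.
    best = 0
    for i in range(lo, hi):
        for j in range(i, hi):
            k = i * j
            if k > best and is_palindrome(k):
                best = k
    return best

print(largest_palindrome_product())  # -> 906609 (913 * 993)
```

Starting the inner loop at `i` rather than `i + 1` also covers products of equal factors.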
2d5ecf7358af515774131476888553a17bd614ae | 0f0be8e5cfb5fc9b0da164e428249c73e4e7b025 | /Self/Test/inherit.py | 110af38dc6e6c1fc046c584a7463f02c65123da7 | [] | no_license | tatatakky/algorithm | 3158ca558ef2171440be51750e1b6031f75220c6 | ed72e2520622a88efe8ed8120b4d2daad8f80c33 | refs/heads/master | 2020-03-10T11:17:23.882533 | 2018-05-02T16:54:12 | 2018-05-02T16:54:12 | 129,353,366 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 844 | py | class Parent:
    def __init__(self, name, height, weight):  # constructor
        self.name = name
        self.height = height
        self.weight = weight

    def output_p(self):
        print("<Information about {}>".format(self.name))
        print("Name: {}  Height: {}  Weight: {}".format(self.name, self.height, self.weight))


class Child(Parent):
    def __init__(self, name, height, weight, bloodtype):  # constructor
        super().__init__(name, height, weight)  # call the superclass constructor
        self.bloodtype = bloodtype

    def output_c(self):
        super().output_p()
        print("Blood type: {}".format(self.bloodtype))


if __name__ == "__main__":
    p = Parent("AAA", 170, 70)  # create an instance
    p.output_p()
    c = Child("aaa", 165, 60, "A")  # create an instance
    c.output_c()
| [
"s1240232@gmail.com"
] | s1240232@gmail.com |
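The file above chains constructors with `super()` and extends, rather than replaces, the parent's output method. The same pattern in a minimal, self-contained sketch (class and method names here are illustrative):

```python
class Base:
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"name={self.name}"


class Derived(Base):
    def __init__(self, name, tag):
        super().__init__(name)  # run the parent constructor first
        self.tag = tag

    def describe(self):
        # Extend the parent behaviour instead of replacing it.
        return super().describe() + f" tag={self.tag}"


d = Derived("aaa", "A")
print(d.describe())  # -> name=aaa tag=A
```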
06f4a5191659fecff39026c84ba1da0bf26db778 | 1bbb75db28908f492ee05b1d084fd8be71527579 | /hotel.py | f486d420ff862cb4287d98e4cc80112464c5ac2c | [] | no_license | gayoungoh5/my_project | 090f95c297228bbb7f0b19cf6c9ecbaa9368348a | 71f85413f9eb06fb9e8c5991c74e639c0b889b21 | refs/heads/master | 2022-12-17T22:48:36.493908 | 2020-09-10T14:05:46 | 2020-09-10T14:05:46 | 285,817,319 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,942 | py | import requests
import pprint
import urllib.parse
import time
from pymongo import MongoClient

# Store the results in a MongoDB database named seoul_matjip.
client = MongoClient('mongodb://test:test@13.125.185.68', 27017)
db = client.seoul_matjip

# Search each of these Korean cities in turn.
seoul_gu = ["서울", "인천", "대전", "대구", "광주", "부산", "울산"]

# Client ID and secret key issued through the Naver Search API registration.
client_id = "Xkk8AQs9lWNsiooHjJ8R"
client_secret = "UW2NSMecHO"

# Given a search keyword, return the search results.
def get_naver_result(keyword):
    time.sleep(0.1)
    # Insert the received keyword into the request URL.
    api_url = f"https://openapi.naver.com/v1/search/local.json?query={keyword}&display=10&start=1&sort=random"
    # Send the client ID and secret key as extra request headers.
    headers = {'X-Naver-Client-Id': client_id, 'X-Naver-Client-Secret': client_secret }
    # Store the search response in data.
    data = requests.get(api_url, headers=headers)
    # Convert the returned JSON into a dictionary.
    data = data.json()
    return data['items']

# Full list of documents to save.
docs = []

# Run one search per city.
for gu in seoul_gu:
    # Build the query for each city, e.g. "서울 호텔" ("Seoul hotel").
    keyword = f'{gu} 호텔'
    # Fetch the result list.
    restaurant_list = get_naver_result(keyword)
    # Separator line between cities.
    print("*"*80 + gu)
    for matjip in restaurant_list:
        # Attach the city name.
        matjip['area'] = gu
        # Print the record.
        pprint.pprint(matjip)
        # Append the record to docs.
        docs.append(matjip)

# Save the records to MongoDB.
db.matjip.insert_many(docs) | [
"gayoungoh5@users.noreply.github.com"
] | gayoungoh5@users.noreply.github.com |
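The script above imports `urllib.parse` but interpolates the raw keyword into the URL. When building such a query string by hand, non-ASCII keywords like "서울 호텔" need percent-encoding; a stdlib-only sketch (the endpoint string mirrors the one in the file, and is only illustrative here):

```python
from urllib.parse import urlencode

params = {"query": "서울 호텔", "display": 10, "start": 1, "sort": "random"}
# urlencode percent-encodes values (quote_plus by default), so the Korean
# keyword becomes e.g. %EC%84%9C%EC%9A%B8... and the space becomes '+'.
url = "https://openapi.naver.com/v1/search/local.json?" + urlencode(params)
print(url)
```

Passing a `params` dict to `requests.get` achieves the same encoding automatically.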
4c5e296aa7cd413d37554e01f65388753100d7e0 | 6a78933bab79ddb45092010246c34810c2c9acfc | /trunk/modules/python/dataset/hdf5.py | 4bb3a57528882c73e6528e346536d56257deb5aa | [
"MIT"
] | permissive | yamadaNano/playbox | 5ac98ca90c0beb35e9845ba323c97fe7a734d858 | 31ba5a11301e2dd345fd3b64e559ca9901890b3a | refs/heads/master | 2020-12-24T10:55:16.504019 | 2016-11-04T18:14:51 | 2016-11-04T18:14:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,225 | py | import h5py
def createHDF5Unlabeled (outputFile, trainDataShape, trainDataDtype,
trainMaxShape=None, log=None) :
'''Utility to create the HDF5 file and return the handles. This allows
users to fill out the buffers in a memory conscious manner.
outputFile : Name of the file to write. The extension should be
either .h5 or .hdf5
trainDataShape : Training data dimensions
trainDataDtype : Training data dtype
trainMaxShape : Optionally specify a maxshape for the training set
log : Logger to use
'''
if not outputFile.endswith('.h5') and not outputFile.endswith('.hdf5') :
raise Exception('The file must end in the .h5 or .hdf5 extension.')
# create a file in a standard way
hdf5 = h5py.File(outputFile, libver='latest', mode='w')
# the file will always have training data in train/data
trainData = hdf5.create_dataset('train/data', shape=trainDataShape,
dtype=trainDataDtype,
maxshape=trainMaxShape)
# TODO: Should we additionally have a test/data set to allow early
# stoppage? This will test the ability to reconstruct data
# never before encountered. It's likely a better way to perform
# training instead of naive number of epochs.
return [hdf5, trainData]
def createHDF5Labeled (outputFile,
trainDataShape, trainDataDtype, trainIndicesDtype,
testDataShape, testDataDtype, testIndicesDtype,
labelsShape, log=None) :
'''Utility to create the HDF5 file and return the handles. This allows
users to fill out the buffers in a memory conscious manner.
outputFile : Name of the file to write. The extension should be
either .h5 or .hdf5
trainDataShape : Training data dimensions
trainDataDtype : Training data dtype
trainIndicesDtype : Training indicies dtype
testDataShape : Testing data dimensions
testDataDtype : Testing data dtype
testIndicesDtype : Testing indicies dtype
labelsShape : Labels shape associated with indices
log : Logger to use
'''
hdf5, trainData = createHDF5Unlabeled(outputFile, trainDataShape,
trainDataDtype, log=log)
# supervised learning will have indices associated with the training data
trainIndices = hdf5.create_dataset('train/indices',
shape=tuple(trainDataShape[:2]),
dtype=trainIndicesDtype)
# add testing data and indices
testData = hdf5.create_dataset('test/data', shape=testDataShape,
dtype=testDataDtype)
testIndices = hdf5.create_dataset('test/indices',
shape=tuple(testDataShape[:2]),
dtype=testIndicesDtype)
# each index with have an associated string label
labelsShape = labelsShape if isinstance(labelsShape, tuple) else\
(labelsShape, )
labelsDtype = h5py.special_dtype(vlen=str)
labels = hdf5.create_dataset('labels', shape=labelsShape,
dtype=labelsDtype)
return [hdf5, trainData, trainIndices, testData, testIndices, labels]
def writeHDF5 (outputFile, trainData, trainIndices=None,
testData=None, testIndices=None, labels=None, log=None) :
'''Utility to write a hdf5 file to disk given the data exists in numpy.
outputFile : Name of the file to write. The extension should be pkl.gz
trainData : Training data (numBatch, batchSize, chan, row, col)
trainIndices : Training indices (either one-hot or float vectors)
testData : Testing data (numBatch, batchSize, chan, row, col)
testIndices : Testing indices (either one-hot or float vectors)
labels : String labels associated with indices
log : Logger to use
'''
if log is not None :
log.debug('Writing to [' + outputFile + ']')
# write the data to disk -- if it was supplied
with h5py.File(outputFile, mode='w') as hdf5 :
hdf5.create_dataset('train/data', data=trainData)
# TODO: This should also be updated if we find unsupervised training
# is better with early stoppage via a test set.
if trainIndices is not None :
hdf5.create_dataset('train/indices', data=trainIndices)
if testData is not None :
hdf5.create_dataset('test/data', data=testData)
if testIndices is not None :
hdf5.create_dataset('test/indices', data=testIndices)
if labels is not None :
hdf5.create_dataset('labels', data=[l.encode("ascii", "ignore") \
for l in labels])
# ensure it gets to disk
hdf5.flush()
hdf5.close()
def readHDF5 (inFile, log=None) :
'''Utility to read a pickle in from disk.
inFile : Name of the file to read. The extension should be pkl.gz
log : Logger to use
return : (train, test, labels)
'''
if not inFile.endswith('.h5') and not inFile.endswith('.hdf5') :
raise Exception('The file must end in the .h5 or .hdf5 extension.')
if log is not None :
log.debug('Opening the file in memory-mapped mode')
# open the file
hdf5 = h5py.File(inFile, mode='r')
# read the available information
trainData = hdf5.get("train/data")
trainIndices = None
if 'train/indices' in hdf5 :
trainIndices = hdf5.get('train/indices')
testData = None
if 'test/data' in hdf5 :
testData = hdf5.get('test/data')
testIndices = None
if 'test/indices' in hdf5 :
testIndices = hdf5.get('test/indices')
labels = None
if 'labels' in hdf5 :
labels = hdf5.get('labels')
# the returned information should be checked for None
return (trainData, trainIndices), (testData, testIndices), labels
| [
"micah.bojrab@mdaus.com"
] | micah.bojrab@mdaus.com |
6e3669121fdd67488f4e7ec58aa121cf467f15dc | f8ffac4fa0dbe27316fa443a16df8a3f1f5cff05 | /Regex/Matching_Anything_But_New_Line.py | 9d1b4802c8910f717e3f7aafecd4dfcb1cc4b4c3 | [] | no_license | ankitniranjan/HackerrankSolutions | e27073f9837787a8af7a0157d95612028c07c974 | e110c72d3b137cf4c5cef6e91f58a17452c54c08 | refs/heads/master | 2023-03-16T19:06:17.805307 | 2021-03-09T16:28:39 | 2021-03-09T16:28:39 | 292,994,949 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 411 | py | # Your task is to write a regular expression that matches only and exactly strings of form: abc.def.ghi.jkx, where each variable a,b,c,d,e,f,g,h,i,j,k,x can be
# any single character except the newline.
regex_pattern = r"^.{3}\..{3}\..{3}\..{3}$" # Do not delete 'r'.
import re
import sys
test_string = input()
match = re.match(regex_pattern, test_string) is not None
print(str(match).lower())
| [
"noreply@github.com"
] | noreply@github.com |
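The pattern's behaviour is easy to check: `.` matches any character except a newline, and the `^`/`$` anchors force the entire string to fit the template:

```python
import re

pattern = r"^.{3}\..{3}\..{3}\..{3}$"

print(bool(re.match(pattern, "abc.def.ghi.jkx")))   # True
print(bool(re.match(pattern, "ab.def.ghi.jkx")))    # False: first group too short
print(bool(re.match(pattern, "a\nc.def.ghi.jkx")))  # False: '.' never matches a newline
```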
8c5ed0a6b6e6053c544a2a177a5da87e2fbad4ef | da0f3a46bd09ab32ce16ea46912ccb24900a887c | /LUIS-mendoza/ej_fibonacci.py | 00182aa800577e69a05b346f35fe2bdf9adf2140 | [] | no_license | 1109israel/AprendiendoGit | ea275a145390c169f987280b5d934eb14d9026f8 | f366a4555153376876a337e9b246110fdaa87795 | refs/heads/master | 2020-03-21T04:13:00.584488 | 2018-07-25T22:43:21 | 2018-07-25T22:43:21 | 138,097,310 | 2 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,252 | py | ar = open("fibonacci.txt","a")
def fibonacci_or(lim):
    a = 0
    b = 1
    c = 0
    cont = 2
    print(a, b, end=' ')
    ar.write('%s' % a + "\t")
    ar.write('%s' % b + "\t")
    while cont < lim:
        c = a + b
        a = b
        b = c
        print(c, end=' ')
        cont += 1
        ar.write('%s' % c + "\t")
    ar.write("\n")

def fibonacci(a, b):
    lim = int(input("\nEnter the limit for the series: "))
    c = 0
    cont = 2
    print(a, b, end=' ')
    ar.write('%s' % a + "\t")
    ar.write('%s' % b + "\t")
    while cont < lim:
        c = a + b
        a = b
        b = c
        print(c, end=' ')
        cont += 1
        ar.write('%s' % c + "\t")
    ar.write("\n")

'''Fibonacci exercises.'''
opc = 1
while opc != 0:
    opc = input("[1]\n[2]\n[0] Exit\n\nChoose an option: ")
    if opc == '1':
        l = int(input("\nEnter the limit for the series: "))
        fibonacci_or(l)
        print("\n")
    elif opc == '2':
        fn = int(input("\nEnter the first number for the series: "))
        sn = int(input("\nEnter the second number for the series: "))
        fibonacci(fn, sn)
        print("\n")
    elif opc == '0':
        break
    else:
        print("\nPlease enter a valid option")
ar.close() | [
"luisito250899@gmail.com"
] | luisito250899@gmail.com |
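Both functions in the file share the same update step (`c = a + b; a = b; b = c`). The sequence can also be expressed as a generator; a sketch that yields terms instead of writing to a file (names are illustrative):

```python
from itertools import islice

def fibonacci(a=0, b=1):
    # Yield terms of the sequence starting from the given seed pair.
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fibonacci(), 10)))      # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(list(islice(fibonacci(2, 3), 5)))   # -> [2, 3, 5, 8, 13]
```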
44533fbbc7b814605f1f41f18cc1a17868022aff | ad4f3c164cb3acda3863e4b3f644f5e241cdf890 | /cli_tools_py/rosnode_list.py | 021f7e6cc5345a723d23654255927c7be9f7a4e4 | [
"Apache-2.0"
] | permissive | nuclearsandwich/cli_tools | 9a1b41846c6e9910d0ab0feca1485d132e23ea4b | 4937030dbec7dabc5a051f24f5f5db54c97f03f0 | refs/heads/master | 2021-01-25T09:09:59.743138 | 2017-06-09T01:10:59 | 2017-06-09T01:10:59 | 93,792,172 | 0 | 0 | null | 2017-06-08T21:10:27 | 2017-06-08T21:10:27 | null | UTF-8 | Python | false | false | 2,047 | py | # Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import rclpy
# TODO(mikaelarguedas) revisit this once it's specified
HIDDEN_NODE_PREFIX = '_'
def main():
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-a', '--all', action='store_true',
help='display all nodes even hidden ones')
parser.add_argument('-t', '--time', default=0.01, type=float,
help='spin time to collect nodes (in seconds)')
parser.add_argument('-n', '--number-of-nodes', action='store_true',
help='display the number of nodes discovered')
args = parser.parse_args()
timeout_reached = False
def timer_callback():
nonlocal timeout_reached
timeout_reached = True
rclpy.init()
node = rclpy.create_node(HIDDEN_NODE_PREFIX + 'rosnode_list')
timer = node.create_timer(args.time, timer_callback)
while not timeout_reached:
rclpy.spin_once(node)
if rclpy.ok():
node_names = node.get_node_names()
        if not args.all:
            # Build a new list instead of calling remove() while iterating:
            # each in-place removal shifts the remaining elements and makes
            # the iterator skip the one that follows.
            node_names = [name for name in node_names
                          if not name.startswith(HIDDEN_NODE_PREFIX)]
print(*node_names, sep='\n')
if args.number_of_nodes:
print('Number of currently available nodes: %d' % len(node_names))
node.destroy_timer(timer)
node.destroy_node()
rclpy.shutdown()
| [
"noreply@github.com"
] | noreply@github.com |
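Filtering hidden names by removing them from the list while iterating over it is a classic pitfall: each removal shifts the remaining elements, so the iterator skips the one that follows. A stdlib-only sketch of the safer comprehension-based filter:

```python
HIDDEN_NODE_PREFIX = "_"

def visible_nodes(node_names):
    # Keep only names that do not start with the hidden prefix.
    return [n for n in node_names if not n.startswith(HIDDEN_NODE_PREFIX)]

names = ["_a", "_b", "talker", "_c", "listener"]
print(visible_nodes(names))  # -> ['talker', 'listener']
```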
e15553e15d02f44a2891a48cfd96e63bd730812a | 3a85606fa2f643a12f0747e57cdb8bb626ccb338 | /FER_2020_python/faceDetectionWithAPIAndVision.py | 277b90e89a5e7789505fcf40462b8534a710d8a8 | [] | no_license | LucasBarrot/FER2020 | b98aaa43a8ff2784900acb8a4fccb5d43a9ba86c | dea3e175e4474a1899af1c3bcbc895b335a1def0 | refs/heads/master | 2022-12-27T19:56:46.657577 | 2020-10-15T22:11:44 | 2020-10-15T22:11:44 | 275,368,535 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,430 | py | # -*- coding: utf-8 -*-
import requests
# Detect and return the first face found location in the image_data given (byte array : <class 'bytes'> )
def faceDetection(image_data):
ENDPOINT = 'https://istancedetecface.cognitiveservices.azure.com/' # Set the endpoint
KEY = '787e01a860bb4be0a7fb0044e286931c' # Set the autentification key
analyze_url = ENDPOINT + "vision/v3.0/analyze" # Set the complete url
# Read the image into a byte array
#image_data = open(image_path, "rb").read()
# set the headers of the request
headers = {'Ocp-Apim-Subscription-Key': KEY,
'Content-Type': 'application/octet-stream'}
# set the parameters of the request
params = {'visualFeatures': 'Faces'}
# send the request and get the response
response = requests.post(
analyze_url, headers=headers, params=params, data=image_data)
    # raise an exception if an HTTP error status occurred
response.raise_for_status()
# The 'complete_response_json' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
complete_response_json = response.json()
# print(complete_response_json)
found_face_json = complete_response_json["faces"][0]["faceRectangle"] # select only the face rectangle of the first face detected
# print (found_face_json)
return found_face_json | [
"lbarrot.lb@gmail.com"
] | lbarrot.lb@gmail.com |
8a6abacedfda94088a76510acfaf478288bec22d | 586695a9a52c69123fd1053358c754ad3ccc11fd | /hackerrank/SockMerchant.py | f35e40b7cce9f6f4525059280226f9911658e666 | [] | no_license | nidiodolfini/descubra-o-python | 9ba8b72b88f6b0db797d844610f93cded170161b | 0759410a82f4e26272f7b761955e9cf5bab461dc | refs/heads/master | 2020-08-28T05:22:16.862971 | 2020-05-14T18:46:27 | 2020-05-14T18:46:27 | 217,604,306 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 348 | py | n = 9
ar = [10, 20, 20, 10, 10, 30, 50, 10, 20]
def sockMerchant(n, ar):
count = 0
ar.sort()
ar.append('#')
i = 0
print(ar)
while i<n:
if ar[i]==ar[i+1]:
print(i, ar[i], "" , ar[i])
count = count+1
i+=2
else:
i+=1
return count
print(sockMerchant(n, ar)) | [
"nidiosdolfini@gmail.com"
] | nidiosdolfini@gmail.com |
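The pair count can also be derived directly from per-colour frequencies instead of scanning a sorted list; a sketch using `collections.Counter`:

```python
from collections import Counter

def sock_merchant(ar):
    # Each colour contributes floor(count / 2) complete pairs.
    return sum(count // 2 for count in Counter(ar).values())

print(sock_merchant([10, 20, 20, 10, 10, 30, 50, 10, 20]))  # -> 3
```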
deb6dc985d28ad6b65b35a345c03761e49dc5c24 | 3709dfa23b42dc4a9a89199490b8533963f8c534 | /splat/advanced_splat.py | d377a4f51e02f4cbc908523069f979e8b643def0 | [
"Apache-2.0"
] | permissive | boltanalytics/splat | 8e4723fcc67d62c216c286c32e899a817b578290 | a44434ce336e6ffc6340ff55407413c0886e7d1f | refs/heads/main | 2023-01-30T11:11:24.949655 | 2020-12-07T14:30:03 | 2020-12-07T14:30:03 | 315,811,952 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 11,789 | py | from math import isnan
from os import stat
from splunklib.client import connect
import click
import json
import numpy as np
import os
import pandas as pd
import re
import shlex
import splunklib.client as client
import splunklib.results as results
import time
import warnings
# command_line_args = {}
# logging.basicConfig(level=logging.INFO,
# format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
# datefmt='%Y-%m-%d %H:%M:%S',
# filename='/tmp/compression_estimator.log')
def read_logs(file_name: str) -> list:
log_lines: list = []
with open(file_name, "r") as f:
file_contents: list = f.readlines()
for line in file_contents:
log_lines.append(line)
return log_lines
def get_file_size(file_name: str) -> int:
return stat(file_name).st_size
def convert_kv_to_string(kv_pairs: list) -> list:
kv_str: list = []
for kv_pair in kv_pairs:
temp: list = []
for k, v in kv_pair.items():
if isinstance(v, str):
temp.append(f"{k}={v}")
elif (isinstance(v, (int, np.integer, float, np.float))) and (not isnan(v)):
temp.append(f"{k}={v}")
kv_str.append(" ".join(temp))
return kv_str
def convert_kv_to_logs(kv_pairs: list, verbatim_text: list):
str_kv_pairs: list = convert_kv_to_string(kv_pairs)
str_verbatim_text: list = [" ".join(x) for x in verbatim_text]
return [f"{kv_pair} {verb_text}" for kv_pair, verb_text in zip(str_kv_pairs, str_verbatim_text)]
def get_size_on_disk(logs: list) -> int:
tmp_file_name: str = "tmp.txt"
with open(tmp_file_name, "w") as f:
logs_text: str = "\n".join(logs)
f.write(logs_text)
file_size: int = get_file_size(tmp_file_name)
os.remove(tmp_file_name)
return file_size
def get_key_value_pairs(text: list) -> (list, list):
key_value_pairs: list = []
verbatim_content: list = []
for line in text:
kv_pairs: dict = {}
non_kv_content: list = []
for pair in shlex.split(line):
if "=" in pair:
kv_pair: list = pair.split("=")
kv_pairs[kv_pair[0]] = kv_pair[1]
else:
non_kv_content.append(pair)
key_value_pairs.append(kv_pairs)
verbatim_content.append(non_kv_content)
return key_value_pairs, verbatim_content
def evaluate_ip_addr_freq(values: pd.Series, freq_thresh: float) -> bool:
# Split IP addresses into 4 columns.
ips_split: pd.DataFrame = values.str.split(".", expand=True)
# Get the proportion of the most frequently occurring value in a column.
ip_component_freq: pd.Series = ips_split.apply(lambda x: x.value_counts(normalize=True)[0], axis=0)
return sum(ip_component_freq.values > freq_thresh) > 2
def evaluate_value_freq(values: pd.Series, freq_thresh: float) -> bool:
# if pd.api.types.is_integer_dtype(values):
# return False
# else:
return round(values.value_counts(normalize=True)[0], 2) > freq_thresh
def is_ip_addr(x: str) -> bool:
if not isinstance(x, str):
return False
return bool(re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", x))
def get_signif_keys_using_freq(kv_df: pd.DataFrame, freq_thresh: float) -> list:
signif_keys = []
for col_name in kv_df.columns:
# Whether a column contains IP addresses is determined by only looking
# at the first value.
# if is_ip_addr(kv_df[col_name].iloc[0]):
# is_col_signif = evaluate_ip_addr_freq(kv_df[col_name], freq_thresh)
# else:
is_col_signif = evaluate_value_freq(kv_df[col_name], freq_thresh)
if is_col_signif:
signif_keys.append(col_name)
return signif_keys
def convert_values_to_tokens(kv_df: pd.DataFrame, cols: list):
for col in cols:
kv_df.loc[:, col] = pd.factorize(kv_df[col])[0]
# The `factorize` method converts NaNs to -1 which increases the size of output file.
# Resetting -1's to None turns a series of type `int64` to `float64`. That is because
# None values are represented by `numpy.nan` in a series. And `numpy.nan` is a float
# value. So we convert the array to the type `arrays.IntegerArray` which is an
# extension type implemented within Pandas.
# https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html
kv_df.loc[kv_df[col] == -1, col] = None
if kv_df[col].dtype in ["float64"]:
kv_df.loc[:, col] = kv_df[col].astype("Int64")
return kv_df
def dedup_kv_pairs(log_kv_pairs: list, freq_threshold: float) -> list:
"""
    Convert frequently occurring values into integer tokens.
:param log_kv_pairs: List of key-value pairs.
:param freq_threshold: Minimum proportion required from the value of a key for the key to be converted to a
template.
:return: List of key-value pairs with frequent values deduped.
"""
log_kv_df: pd.DataFrame = pd.DataFrame(log_kv_pairs)
keys_to_templatise: list = get_signif_keys_using_freq(log_kv_df, freq_threshold)
log_kv_df: pd.DataFrame = convert_values_to_tokens(log_kv_df, keys_to_templatise)
compressed_log_kv_pairs: list = log_kv_df.to_dict("records")
return compressed_log_kv_pairs
def get_splunk_connection(splunk_config: dict) -> client.Service:
service: client.Service = connect(
host=splunk_config["host"],
port=splunk_config["port"],
username=splunk_config["username"],
password=splunk_config["password"],
owner=splunk_config["owner"],
app=splunk_config["app"]
)
return service
def construct_query_args(earliest_time: str, latest_time: str, query_type: str) -> dict:
if query_type == "event_count":
search_args: dict = dict(exec_mode="normal")
else:
search_args: dict = dict(search_mode="normal")
search_args["earliest_time"] = earliest_time
search_args["latest_time"] = latest_time
return search_args
def check_for_sufficient_data(event_count: int) -> None:
if event_count <= 0:
raise ValueError("There are no events for the given time. Please check input parameters such as "
"user credentials and query.")
def check_processing_time(event_count: int, verbose: bool, warning_thresh: int = 180) -> None:
approx_processing_time = round((event_count / 100000) * 10, 2)
if verbose:
print(f"Fetching {event_count} records from Splunk will require approximately "
f"{approx_processing_time} minutes.")
if approx_processing_time > warning_thresh:
warnings.warn(f"Fetching {event_count} records from Splunk will require approximately "
f"{approx_processing_time} minutes. Faster but slightly less accurate results can be seen "
f"by reducing the query timespan")
def get_splunk_query_results(splunk_connection: client.Service, query: str, search_args: dict, n: int, verbose: bool):
query_results: list = []
if n > 0:
query = f"{query} | head {n}"
job = splunk_connection.jobs.create(query, **search_args)
if verbose:
print("Querying the splunk results...")
while not job.is_done():
time.sleep(.2)
count = 0
# rr = results.ResultsReader(job.results(count=0)) # Max number of records in one poll 50k.
rr = results.ResultsReader(splunk_connection.jobs.export(query, **search_args))
for result in rr:
if isinstance(result, dict):
query_results.append(result["_raw"])
count += 1
if verbose:
if count % 1000 == 0:
print(f"Retrieved {count} events so far and the current event is {result['_raw']}")
assert rr.is_preview is False
return query_results
def get_event_count(splunk_connection: client.Service, query: str, search_args: dict, verbose: bool) -> int:
query = f"{query} | stats count as events_count"
job = splunk_connection.jobs.create(query, **search_args)
if verbose:
print("Getting event count...")
while not job.is_done():
time.sleep(.2)
rr = results.ResultsReader(job.results(count=0)) # Max number of records in one poll 50k.
for result in rr:
if isinstance(result, dict) and "events_count" in result:
return int(result["events_count"])
return 0
def splat(splunk_details: dict, index: str, start_time: str, end_time: str, num_records: int, verbose: bool):
search_query: str = "search index={0}".format(index)
splunk_conn: client.Service = get_splunk_connection(splunk_details)
search_params: dict = construct_query_args(start_time, end_time, query_type="search")
event_count_params: dict = construct_query_args(start_time, end_time, query_type="event_count")
total_event_count: int = get_event_count(splunk_conn, search_query, event_count_params, verbose)
check_for_sufficient_data(total_event_count)
check_processing_time(total_event_count, verbose)
splunk_logs: list = get_splunk_query_results(splunk_conn, search_query, search_params, num_records,
verbose)
if verbose:
print(f"Number of records retrieved from Splunk: {len(splunk_logs)}")
input_size: int = get_size_on_disk(splunk_logs)
log_key_value_pairs: list
log_verbatim_text: list
log_key_value_pairs, log_verbatim_text = get_key_value_pairs(splunk_logs)
deduped_key_value_pairs: list = dedup_kv_pairs(log_key_value_pairs, 0.9)
deduped_splunk_logs: list = convert_kv_to_logs(deduped_key_value_pairs, log_verbatim_text)
output_size: int = get_size_on_disk(deduped_splunk_logs)
percent_reduction: float = round((input_size - output_size) * 100 / input_size, 2)
print(f"Input data size: {round(input_size / 1024, 2)} KBs")
print(f"Output data size: {round(output_size / 1024, 2)} KBs")
print(f"Estimated compression: {percent_reduction}%")
@click.command()
@click.option("-sc", "--splunk-config", type=str, required=True, help="Path to JSON file specifying Splunk "
"server configuration. Check GitHub "
"README for details.")
@click.option("-i", "--index", required=True, help="Splunk index to fetch results.")
@click.option("-st", "--start-time", required=True, help=f"Query start time. It is specified in relation to the current"
f"time. So `-2d` means 2 days prior to now.")
@click.option("-et", "--end-time", required=True, help=f"Query end time. It can either be `now` or a time in the past"
f"from the present like `-2d`.")
@click.option("-n", type=int, default=0, help="Number of records to limit the query to.")
@click.option("-v", "--verbose", is_flag=True)
def main(splunk_config: str, index: str, start_time: str, end_time: str, n: int, verbose: bool):
"""Estimate compression of Splunk records achieved by Bolt's <product_name>
\b
Examples:
splat --splunk-config splunk.json --index firewall -n 5000 -v
splat --splunk-config splunk.json --index firewall --start-time 2020-11-10T12:00:00.000-00:00
--end-time 2020-11-10T13:00:00.000-00:00
"""
with open(splunk_config, "r") as f:
splunk_server_details: dict = json.load(f)
splat(splunk_server_details, index, start_time, end_time, n, verbose)
if __name__ == '__main__':
# main()
main(["-sc", "../splunk.json", "-i", "firewall", "-st", "-2h", "-et", "now"]) # For debugging.
| [
"tejas.kale@cumulus-systems.com"
] | tejas.kale@cumulus-systems.com |
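`convert_values_to_tokens` above leans on pandas' `factorize` to swap repeated values for small integer codes. The core idea can be sketched with a plain dict — this loosely mimics first-appearance ordering and the missing-value handling, and is not the pandas implementation:

```python
def factorize(values):
    # Map each distinct value to the order in which it first appears;
    # None entries stay as None, standing in for missing values.
    codes, mapping = [], {}
    for v in values:
        if v is None:
            codes.append(None)
            continue
        if v not in mapping:
            mapping[v] = len(mapping)
        codes.append(mapping[v])
    return codes, list(mapping)

codes, uniques = factorize(["allow", "deny", "allow", None, "allow"])
print(codes)    # -> [0, 1, 0, None, 0]
print(uniques)  # -> ['allow', 'deny']
```

Replacing a frequent string with a one- or two-digit token is what shrinks the rewritten logs on disk.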
768d56b4aa61d462bcd2ff40eae5b901a5f5a04a | 478925516b5d32e61eacdb22555097b2318cd649 | /P47/01-06-2021/functions_examples.py | 485186078f8b09e067ba652f8a990729e4e5d0e4 | [] | no_license | Rogerd97/mintic_class_examples | fee9f867d8f846757b5a14326ab5e737b86b64c2 | 0a53f9fac13b1abe539bd345ed2186c95b3e08b5 | refs/heads/main | 2023-05-10T06:13:58.863766 | 2021-06-12T02:02:07 | 2021-06-12T02:02:07 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,969 | py | # default parameters
def fun(a, b, c=1, d=2):
pass
import datetime
from io import DEFAULT_BUFFER_SIZE
def func(delta_days=5):
return datetime.datetime.now() + datetime.timedelta(days=delta_days)
# Create a function that makes a math operation to a given number. By default, it should be to calculate
# the sqrt
import math
def apply_operation(number, default_operation="sqrt"):
if default_operation == "sqrt":
return math.sqrt(number)
def apply_operation(number, default_operation="sqrt"):
if default_operation == "sqrt":
return math.sqrt(number)
if default_operation == "tan":
return math.tan(number)
if default_operation == "cos":
return math.cos(number)
def apply_operation(number, default_operation=math.sqrt):
return default_operation(number)
def sumar(number_1, number_2=4):
return number_1 + number_2
MISC_FUNC = {
"sqrt": math.sqrt,
"tan": math.tan,
"sin": math.sin,
"suma": sumar
}
def apply_operation(number, default_operation="sqrt"):
# operation = MISC_FUNC[default_operation]
operation = MISC_FUNC.get(default_operation, None)
if operation is not None:
return operation(number)
else:
print("invalid")
return None
# Positional arguments
def minimum(*n):
# print(n) # n is a tuple
mn = n[0]
for value in n[1:]:
if value < mn:
mn = value
print(mn)
# Create a function that calculates the maximum inside a list
# variable keywords
def func(**kwargs):
print(kwargs)
# create a function that follows this structure:
# fun(<operation_math>, <number>):
# return operation_math(number)
def fun(math_operation, number):
return math_operation(number)
data = {"math_operation": math.sqrt, "number": 4}
fun(**data)
def fun(math_operation, number, **data):
return math_operation(number)
data = {"math_operation": math.sqrt, "number": 4, "number-2": 45}
fun(**data)
| [
"reyes2000jose@gmail.com"
] | reyes2000jose@gmail.com |
166b11993b083285167131c9da62538a39942a29 | ad53026bc745c328a3dc49a13b3f9b8d64d4dbdb | /Moniter.py | eb7b06b332ab6420a923cb79b78f787ce08f2109 | [] | no_license | ykozxy/EngradeHelper | 2af473b2b7ec17e264f33e1dd9b030920a713edb | 5178b5801a298fcbbd3f858ad2cac54662faf6a6 | refs/heads/master | 2020-04-17T03:52:43.479741 | 2019-04-16T12:15:08 | 2019-04-16T12:15:08 | 166,204,214 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 14,975 | py | import errno
import json
import logging
import os
import pickle
import platform
import random
import subprocess
import sys
import time
import traceback
from datetime import datetime, timedelta
from subprocess import PIPE
from typing import List, Dict, Union
import selenium
from selenium import webdriver
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common import utils
from selenium.webdriver.remote.webelement import WebElement
class WebDriver:
def __init__(self):
self.username: str = ""
self.password: str = ""
self.wait_time: int = 0
self.random_time_margin: int = 0
self.email_notify: bool = False
self.email_receivers: List[str] = []
self.mail_host: str = ""
self.mail_user: str = ""
self.mail_pass: str = ""
self.bark_notify: bool = False
self.bark_api: str = ""
chrome_options = Options()
        chrome_options.add_argument("--headless")  # headless mode; disable this option when debugging
chrome_options.add_argument("--disable-gpu")
self.driver = webdriver.Chrome(options=chrome_options)
self.previous_data = {}
self.previous_score = {}
def start_loop(self):
"""
        Main loop.
"""
log.info("Load config...")
self.load_config()
log.info("Open Engrade...")
self.driver.get("https://engradepro.com")
while True:
log.info("Load data...")
self.load_data()
need_login = True
# noinspection PyUnresolvedReferences
try:
self.driver.find_element_by_name("usr")
except selenium.common.exceptions.NoSuchElementException:
                # already on the main page
need_login = False
if need_login:
log.info("Login...")
self.login()
log.info("Change course category...")
if not self.change_course_category():
notify("Engrade Helper", "Not enrolled in any classes.")
log.critical("Not enrolled in any classes.")
return False
c = self.get_course_list()
change_list = []
for i in range(len(c)):
is_change = self.get_course_detail(c[i])
            # the element list must be refreshed here, because the elements change after a page reload
c = self.get_course_list()
if is_change:
course = c[i][0].text
log.info("Change detected for {}!".format(course))
if len(c[i]) <= 2:
                    # a row without a score only has 2 columns
score = "NO SCORE"
else:
score = c[i][2].text
change_list.append((course, self.previous_score[course], score))
else:
course = c[i][0].text
if len(c[i]) <= 2:
score = "NO SCORE"
else:
score = c[i][2].text
if course not in self.previous_score.keys():
self.previous_score[course] = score
self.save_data()
content: str = "Score change{} detected for {} course{} on Engrade.\n\n {}\n\nOpen in Engrade: {}".format(
"" if len(change_list) == 1 else "s",
len(change_list),
"" if len(change_list) == 1 else "s",
"\n".join("{}: {} -> {}".format(x[0], x[1], x[2]) for x in change_list),
self.driver.current_url,
)
if change_list:
notify(
"EngradeHelper",
content,
self.email_notify,
{
"email_receivers": self.email_receivers,
"mail_host": self.mail_host,
"mail_user": self.mail_user,
"mail_pass": self.mail_pass,
},
self.bark_notify,
self.bark_api,
)
wait = random.randint(
self.wait_time - self.random_time_margin,
self.wait_time + self.random_time_margin,
)
log.info("Waiting for {} seconds...".format(wait))
time.sleep(wait)
            # refresh the page to avoid being logged out
self.driver.refresh()
def load_config(self):
"""
        Load the configuration options.
"""
with open("config.json", "r") as file:
setting = json.load(file)
self.username = setting["Engrade"]["username"]
self.password = setting["Engrade"]["password"]
self.wait_time = setting["wait_time"]
self.random_time_margin = setting["random_time_margin"]
self.email_notify = setting["email_notification"]
self.email_receivers = setting["email_receivers"]
self.mail_host = setting["email_sender"]["smtp_host"]
self.mail_user = setting["email_sender"]["address"]
self.mail_pass = setting["email_sender"]["password"]
self.bark_notify = setting["Bark_notification"]
self.bark_api = setting["Bark_api"]
def login(self):
"""
        Perform the login.
"""
self.driver.find_element_by_name("usr").send_keys(self.username)
self.driver.find_element_by_name("pwd").send_keys(self.password)
self.driver.find_element_by_name("_submit").click()
def load_data(self):
"""
        Load the data saved by the previous run.
"""
if "data.cache" in os.listdir("."):
with open("data.cache", "rb") as data:
self.previous_data, self.previous_score = pickle.load(data)
def change_course_category(self):
"""
        Select the course period; currently SEMESTER is chosen automatically.
"""
try:
self.driver.find_element_by_xpath('//*[@id="gpselector"]/ul/li[1]').click()
except selenium.common.exceptions.NoSuchElementException:
return False
for i in range(1, 30):
course_xpath = '//*[@id="gpperiods"]/li[{}]'.format(i)
# noinspection PyUnresolvedReferences
try:
c = self.driver.find_element_by_xpath(course_xpath)
except selenium.common.exceptions.NoSuchElementException:
break
if "SEMESTER" in c.text:
c.click()
break
return True
def get_course_list(self) -> List[List[WebElement]]:
"""
获取主界面上课程列表
:return: [[CourseName, Teacher, Score], ...]
"""
table = self.driver.find_element_by_xpath('//*[@id="classTable"]/tbody')
courses = []
for c in table.find_elements_by_tag_name("tr"):
if not c.is_displayed():
continue
course_detail = c.find_elements_by_tag_name("a")
courses.append(course_detail)
return courses
def get_course_detail(self, course: List[WebElement]) -> bool:
"""
        Open a single course and compare its detailed contents.
        Currently the Semester category under Assignment is selected automatically.
        :param course: the course (one entry of get_course_list's return value)
        :return: whether the score has changed
"""
url = self.driver.current_url
course_name = course[0].text
log.debug("Get detail for {}...".format(course_name))
course[0].click()
# Navigate to semester detail
self.driver.find_element_by_xpath('//*[@id="sideappgradebook"]/span[1]').click()
self.driver.find_element_by_xpath('//*[@id="gpselector"]/ul/li[1]').click()
self.driver.find_element_by_xpath('//*[@id="gpperiods"]/span[3]/a').click()
detail = self.driver.find_element_by_xpath(
'//*[@id="content-expanded"]/div[2]'
).get_attribute("outerHTML")
is_change = False
if course_name in self.previous_data.keys():
if detail != self.previous_data[course_name]:
is_change = True
self.previous_data[course_name] = detail
self.driver.get(url)
return is_change
def save_data(self):
"""
        Save the data.
"""
if "data.cache" not in os.listdir("."):
open("data.cache", "w").close()
with open("data.cache", "wb") as data:
pickle.dump((self.previous_data, self.previous_score), data)
def notify(
title: str,
content: str,
email_notify: bool = False,
email_data: Dict[str, Union[str, List[str]]] = None,
bark_notify: bool = False,
bark_api: str = None,
):
"""
    Send a notification; three channels are currently supported:
    1. System notification, supported on Windows 10 (depends on win10toast) and macOS (untested)
    2. Email notification, requires an SMTP-capable mailbox (e.g. 126, QQ)
    3. Bark push notification, requires the Bark app on the phone and an API key
    :param title: notification title
    :param content: notification body
    :param email_notify: whether to notify by email
    :param email_data: detailed email notification settings
    :param bark_notify: whether to push via Bark
    :param bark_api: Bark push API key
"""
if platform.system() == "Windows":
try:
while notifier.notification_active():
...
notifier.show_toast(
title, content.split("\n")[0], duration=10, threaded=True
)
log.debug("Show notification on Windows")
except NameError:
log.warning("Fail to show notification on Windows")
pass
elif platform.system() == "Darwin":
# MacOS
from subprocess import call
cmd = 'display notification "{}" with title "{}"'.format(
content.split("\n")[0], title
)
call(["osascript", "-e", cmd])
log.info("Show notification on MacOS")
if email_notify:
import smtplib
from email.mime.text import MIMEText
from email.header import Header
message = MIMEText(content, "plain", "utf-8")
message["From"] = Header("Engrade Helper", "utf-8")
message["To"] = Header("You <{}>".format(email_data["mail_user"]))
message["Subject"] = Header(title, "utf-8")
        receivers = email_data["email_receivers"]  # already a list of addresses; wrapping it in another list would break sendmail
smtp_obj = smtplib.SMTP()
smtp_obj.connect(email_data["mail_host"], 25)
smtp_obj.login(email_data["mail_user"], email_data["mail_pass"])
smtp_obj.sendmail(email_data["mail_user"], receivers, message.as_string())
log.debug("Email notify success")
else:
log.debug("Skip email notify")
if bark_notify:
from urllib.request import urlopen
urlopen("https://api.day.app/{}/{}/{}".format(bark_api, title, content))
def start(self):
"""
    Monkey patch for the start method of selenium's Service class,
    used to hide the webdriver console window.
"""
try:
cmd = [self.path]
cmd.extend(self.command_line_args())
self.process = subprocess.Popen(
cmd,
env=self.env,
close_fds=platform.system() != "Windows",
stdout=self.log_file,
stderr=self.log_file,
stdin=PIPE,
                creationflags=subprocess.CREATE_NO_WINDOW,  # suppress the console window
)
except TypeError:
raise
except OSError as err:
if err.errno == errno.ENOENT:
raise WebDriverException(
"'%s' executable needs to be in PATH. %s"
% (os.path.basename(self.path), self.start_error_message)
)
elif err.errno == errno.EACCES:
raise WebDriverException(
"'%s' executable may have wrong permissions. %s"
% (os.path.basename(self.path), self.start_error_message)
)
else:
raise
except Exception as e:
raise WebDriverException(
"The executable %s needs to be available in the path. %s\n%s"
% (os.path.basename(self.path), self.start_error_message, str(e))
)
count = 0
while True:
self.assert_process_still_running()
if self.is_connectable():
break
count += 1
time.sleep(1)
if count == 30:
raise WebDriverException("Can not connect to the Service %s" % self.path)
def delete_old_log():
"""
    Delete logs older than two days.
"""
yesterday = "Log {}.log".format(str(datetime.today() - timedelta(1)).split()[0])
today = "Log {}.log".format(str(datetime.today()).split()[0])
for file in os.listdir("."):
if file.endswith(".log") and file != yesterday and file != today:
os.remove(file)
if __name__ == "__main__":
log = logging.Logger("Logger", logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(levelname)s: %(message)s")
fh = logging.FileHandler("Log {}.log".format(str(datetime.today()).split()[0]))
fh.setFormatter(formatter)
fh.setLevel(logging.DEBUG)
log.addHandler(fh)
sh = logging.StreamHandler(__import__("sys").stdout)
sh.setFormatter(formatter)
log.addHandler(sh)
if platform.system() == "Windows":
try:
from win10toast import ToastNotifier
notifier = ToastNotifier()
except ImportError:
pass
notify("EngradeHelper", "Start!")
# noinspection PyUnresolvedReferences
webdriver.common.service.Service.start = start
w: WebDriver
t = 0
has_disconnected = False
while True:
t += 1
fh = logging.FileHandler("Log {}.log".format(str(datetime.today()).split()[0]))
delete_old_log()
try:
w = WebDriver()
if not w.start_loop():
break
except selenium.common.exceptions.TimeoutException:
log.warning("Timeout! Retry = " + str(t))
notify("EngradeHelper", "Connection timeout. Retry = " + str(t))
# noinspection PyUnboundLocalVariable
w.driver.quit()
if t >= 6:
break
except Exception as e:
if has_disconnected:
t = 1
log.critical("Unknown Error!\n" + str(e))
with open("EngradeHelper.log", "a") as f:
exc_type, exc_value, exc_traceback = sys.exc_info()
traceback.print_tb(exc_traceback, file=f)
notify(
"EngradeHelper",
"Unknown Error! Detail stored in log file, please report it on Github.",
)
notify("EngradeHelper", "Process end!")
w.driver.quit()
raise e
notify("EngradeHelper", "Process end!")
w.driver.quit()
| [
"32444891+ykozxy@users.noreply.github.com"
] | 32444891+ykozxy@users.noreply.github.com |
c3ffeec57531320981263bd35fe7e2b6d51878f0 | 877f2c8a24c42703b26db38bcebd22400fe0b948 | /test1/excelUtil.py | b13b8d7f1b5f39a568d7c2e2153b7974c7c37087 | [] | no_license | zsjkulong/spider | c17932247a2ea4bbab03980293329f4dd14b54c2 | 13b78350fed355f6743fd911ef885dae600de19b | refs/heads/master | 2020-03-20T05:22:47.942043 | 2018-08-06T08:33:46 | 2018-08-06T08:33:46 | 137,212,581 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 16,836 | py | import openpyxl
import datetime
import operator
from openpyxl.styles import colors
from openpyxl.styles import Font, Color, Fill,PatternFill
from openpyxl.cell import Cell
from test1.tushareUtil import AnalysisIndexData
import configparser
from test1.Item.indexItem import indexItem
class excelUtil:
sheetName= ['每日指数','每日领涨概念板块','每日领涨行业板块','每日走势分析']
path = ''
numberOfGet = 5;
yesterdayNumber = numberOfGet*1;
beforeYesterdayNumber = numberOfGet*2;
#exit = ['BK0815','BK0816']
color = 'FFEEE8AA';
#startRows = 0;
    def topN(self,item,startRow):  # write the item's values into the workbook
#print(item);
wb = openpyxl.load_workbook(self.path)
if(operator.eq(item['isConcept'],'true')):
sheet = wb.get_sheet_by_name(self.sheetName[1]);
else:
sheet = wb.get_sheet_by_name(self.sheetName[2]);
sheet['B'+str(startRow)] = item['title']
sheet['C' + str(startRow)] = item['code']
self.setValAndFontColor(sheet['D' + str(startRow)],item['upRate'])
self.setValAndFontColor(sheet['E' + str(startRow)], item['amount'])
sheet['F' + str(startRow)] = item['ytitle']
sheet['G' + str(startRow)] = item['ycode']
self.setValAndFontColor(sheet['H' + str(startRow)],item['yupRate']);
self.setValAndFontColor(sheet['I' + str(startRow)],item['yamount']);
sheet['J' + str(startRow)] = item['bytitle']
sheet['K' + str(startRow)] = item['bycode']
self.setValAndFontColor(sheet['L' + str(startRow)],item['byupRate'])
self.setValAndFontColor(sheet['M' + str(startRow)],item['byamount'])
wb.save(self.path)
#print("写入数据成功!")
    def makeSheetAndHeader(self):  # create the sheets and write their headers
wb = openpyxl.Workbook()
sheet = wb.active
sheet.title =self.sheetName[0]
sheet.merge_cells('B1:D1')
sheet['B1'] = '上证指数';
sheet['B2'] = '涨幅';
sheet['C2'] = '成交量(亿)';
sheet['D2'] = '成交量相比昨日';
sheet.merge_cells('E1:G1')
sheet['E1'] = '深证成指';
sheet['E2'] = '涨幅';
sheet['F2'] = '成交量(亿)';
sheet['G2'] = '成交量相比昨日';
sheet.merge_cells('H1:J1')
sheet['H1'] = '创业板指';
sheet['H2'] = '涨幅';
sheet['I2'] = '成交量(亿)';
sheet['J2'] = '成交量相比昨日';
sheet.merge_cells('K1:M1')
sheet['K1'] = '上证50';
sheet['K2'] = '涨幅';
sheet['L2'] = '成交量(亿)';
sheet['M2'] = '成交量相比昨日';
sheet.merge_cells('N1:P1')
sheet['N1'] = '中小板指';
sheet['N2'] = '涨幅';
sheet['O2'] = '成交量(亿)';
sheet['P2'] = '成交量相比昨日';
sheet['A2'] = '日期'
wb.save(self.path)
#wb = openpyxl.Workbook()
self.topNHeader(wb,1);
self.topNHeader(wb,2)
self.writeAnalysisHeader();
def topNHeader(self,wb,i):
sheet = wb.create_sheet(title=self.sheetName[i])
# sheet.title = self.sheetName[1]
sheet.merge_cells('B1:E1')
sheet['B1'] = '当日领涨板块';
sheet['B2'] = '板块名称';
sheet['C2'] = '板块编码';
sheet['D2'] = '涨幅';
sheet['E2'] = '主力金额'
sheet.merge_cells('F1:I1')
sheet['F1'] = '昨日领涨板块';
sheet['F2'] = '板块名称';
sheet['G2'] = '板块编码';
sheet['H2'] = '涨幅';
sheet['I2'] = '主力金额'
sheet.merge_cells('J1:M1')
sheet['J1'] = '前日领涨板块';
sheet['J2'] = '板块名称';
sheet['K2'] = '板块编码';
sheet['L2'] = '涨幅';
sheet['M2'] = '主力金额'
sheet['A2'] = '日期'
wb.save(self.path)
    def readExcelRows(self,i):  # get the number of rows in the sheet
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[i])
return sheet.max_row+1;
# sheet.get
    def readYesterday(self,startRowin,i):  # read yesterday's data
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[i])
if(startRowin and startRowin>0):
startRow = startRowin;
else:
startRow = sheet.max_row+1
list = [];
if(startRow<=self.yesterdayNumber):
return list;
for cell in sheet['C'+str(startRow-self.yesterdayNumber):'C'+str(startRow-1)]:
#print(cell[0].value);
list.append(cell[0].value)
print(list);
return list;
    def readBeforeYesterday(self,startRowin,i):  # read the data from the day before yesterday
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[i])
if (startRowin and startRowin > 0):
startRow = startRowin;
else:
startRow = sheet.max_row + 1
list = [];
if (startRow <= self.beforeYesterdayNumber):
return list;
for cell in sheet['C' + str(startRow - self.beforeYesterdayNumber):'C' + str(startRow - 1 - self.yesterdayNumber)]:
# print(cell[0].value);
list.append(cell[0].value)
#print(list);
return list;
    def writerDate(self):  # write the date into each sheet
self.wDate_two(self.sheetName[0])
self.wDate_one(self.sheetName[1])
self.wDate_one(self.sheetName[2])
self.wDate_two(self.sheetName[3])
def wDate_one(self,sheet):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(sheet)
startRow = sheet.max_row + 1;
mergeCell = 'A' + str(startRow - self.numberOfGet) + ':A' + str(startRow - 1);
sheet.merge_cells(mergeCell);
sheet['A' + str(startRow - self.numberOfGet)] = datetime.datetime.now().strftime('%Y-%m-%d')
wb.save(self.path)
def wDate_two(self,sheet):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(sheet)
startRow = sheet.max_row;
sheet['A' + str(startRow)] = datetime.datetime.now().strftime('%Y-%m-%d')
wb.save(self.path)
    def setValAndFontColor(self,cell,val):  # color the font red/green based on the sign of the number
if(operator.eq(val,'')):
cell.value = val;
return;
if(float(val) > 0):
ft = Font(color=colors.RED)
cell.font = ft;
elif(float(val) < 0):
ft = Font(color=colors.GREEN)
cell.font = ft;
else:
ft = Font(color=colors.BLACK)
cell.font = ft;
cell.value = val;
    def setCellColor(self):  # shade every other day's rows
self.setCellColorTool_one(self.sheetName[1],'A','M')
self.setCellColorTool_two(self.sheetName[0], 'A', 'P')
self.setCellColorTool_one(self.sheetName[2], 'A', 'M')
self.setCellColorTool_two(self.sheetName[3], 'A', 'AE')
def setCellColorTool_one(self,sheetName,start,end):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(sheetName)
startRow = sheet.max_row + 1;
# i = 0
if (startRow % 2 == 0):
for cells in sheet[(start + str(startRow - self.numberOfGet)):(end + str(startRow - 1))]:
for cell in cells:
cell.fill = PatternFill(fill_type='solid', fgColor=self.color)
wb.save(self.path);
def setCellColorTool_two(self,sheetName,start,end):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(sheetName)
startRow = sheet.max_row;
# i = 0
if (startRow % 2 == 0):
for cells in sheet[start + str(startRow):end + str(startRow)]:
for cell in cells:
cell.fill = PatternFill(fill_type='solid', fgColor=self.color)
# i += 1;
wb.save(self.path);
    def makeItRed(self):  # mark repeat entries in red or purple
self.toRed(1);
self.toRed(2);
def toRed(self,i):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[i])
startRow = sheet.max_row + 1;
yesterday = self.readYesterday(startRow - self.numberOfGet,i);
beforeYesterday = self.readBeforeYesterday(startRow - self.numberOfGet,i);
if (len(yesterday) == 0):
return;
if (len(beforeYesterday) == 0):
return;
i = self.numberOfGet;
for cells in sheet['C' + str(startRow - self.numberOfGet):'C' + str(startRow - 1)]:
if (cells[0].value in yesterday and cells[0].value in beforeYesterday):
ft = Font(color=colors.RED)
sheet['B' + str(startRow - i)].font = ft;
elif (cells[0].value in yesterday or cells[0].value in beforeYesterday):
ft = Font(color='FF9932CC')
sheet['B' + str(startRow - i)].font = ft;
else:
i -= 1;
continue;
i -= 1;
wb.save(self.path);
def writeIndexData(self,indexItem):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[0])
startRow = sheet.max_row + 1;
self.setRedOrGreen(sheet['B'+str(startRow)],indexItem.shrate);
sheet['C' + str(startRow)].value = str(indexItem.shamount) ;
if (startRow-1 <= 2):
print()
else:
value = float(indexItem.shamount) - float(sheet['C' + str(startRow - 1)].value);
self.setRedOrGreen(sheet['D' + str(startRow)], value);
self.setRedOrGreen(sheet['E' + str(startRow)], indexItem.szrate);
sheet['F' + str(startRow)].value = str(indexItem.szamount);
if (startRow-1 <= 2):
print()
else:
value = float(indexItem.szamount) - float(sheet['F' + str(startRow - 1)].value);
self.setRedOrGreen(sheet['G' + str(startRow)], value);
self.setRedOrGreen(sheet['H' + str(startRow)], indexItem.cyrate);
sheet['I' + str(startRow)] = str(indexItem.cyamount);
if (startRow-1 <= 2):
print()
else:
value = float(indexItem.cyamount) - float(sheet['I' + str(startRow - 1)].value);
self.setRedOrGreen(sheet['J' + str(startRow)], value);
self.setRedOrGreen(sheet['K' + str(startRow)], indexItem.sh50rate);
sheet['L' + str(startRow)] = str(indexItem.sh50amount);
if (startRow-1 <= 2):
print()
else:
value = float(indexItem.sh50amount) - float(sheet['L' + str(startRow - 1)].value);
self.setRedOrGreen(sheet['M' + str(startRow)], value);
self.setRedOrGreen(sheet['N' + str(startRow)], indexItem.zxrate);
sheet['O' + str(startRow)] = str(indexItem.zxamount);
if (startRow-1 <= 2):
print()
else:
value = float(indexItem.zxamount) - float(sheet['O' + str(startRow - 1)].value);
self.setRedOrGreen(sheet['P' + str(startRow)], value);
wb.save(self.path);
def setRedOrGreen(self,cell,value):
ftRed = Font(color=colors.RED)
ftGreen = Font(color=colors.GREEN)
cell.value = value
if(float(value) > 0):
cell.font = ftRed;
else:
cell.font = ftGreen;
def setCellRedOrGreen(self,cell,value):
ftRed = Font(color=colors.RED)
ftGreen = Font(color=colors.GREEN)
ftBlue = Font(color=colors.BLUE)
val = str(value).split('|');
# cell.value = value
if(len(val)>1):
if(operator.eq(val[1],'red')):
cell.font = ftRed;
elif(operator.eq(val[1],'green')):
cell.font = ftGreen;
else:
cell.font = ftBlue;
cell.value = val[0];
else:
cell.value = value;
def readConfig(self):
cf = configparser.ConfigParser()
cf.read('config.ini',encoding="utf-8-sig")
self.numberOfGet = cf.getint('config', 'numberOfGet')
self.path = cf.get('config', 'path')
def writeIndexAnalysisData(self,AnalysisIndexData):
wb = openpyxl.load_workbook(self.path)
sheet = wb.get_sheet_by_name(self.sheetName[3])
startRow = sheet.max_row + 1;
self.setCellRedOrGreen(sheet['B' + str(startRow)], AnalysisIndexData.todayshClose)
self.setCellRedOrGreen(sheet['C' + str(startRow)], AnalysisIndexData.shDire)
self.setCellRedOrGreen(sheet['D' + str(startRow)], AnalysisIndexData.shValStatus)
self.setCellRedOrGreen(sheet['E' + str(startRow)], AnalysisIndexData.shStatusHit)
self.setCellRedOrGreen(sheet['F' + str(startRow)], AnalysisIndexData.shMACDHit)
self.setCellRedOrGreen(sheet['G' + str(startRow)], AnalysisIndexData.shMACDDayHit)
self.setCellRedOrGreen(sheet['H' + str(startRow)], AnalysisIndexData.todayszClose)
self.setCellRedOrGreen(sheet['I' + str(startRow)], AnalysisIndexData.szDire)
self.setCellRedOrGreen(sheet['J' + str(startRow)], AnalysisIndexData.szValStatus)
self.setCellRedOrGreen(sheet['K' + str(startRow)], AnalysisIndexData.szStatusHit)
self.setCellRedOrGreen(sheet['L' + str(startRow)], AnalysisIndexData.szMACDHit)
self.setCellRedOrGreen(sheet['M' + str(startRow)], AnalysisIndexData.szMACDDayHit)
self.setCellRedOrGreen(sheet['N' + str(startRow)], AnalysisIndexData.todaycyClose)
self.setCellRedOrGreen(sheet['O' + str(startRow)], AnalysisIndexData.cyDire)
self.setCellRedOrGreen(sheet['P' + str(startRow)], AnalysisIndexData.cyValStatus)
self.setCellRedOrGreen(sheet['Q' + str(startRow)], AnalysisIndexData.cyStatusHit)
self.setCellRedOrGreen(sheet['R' + str(startRow)], AnalysisIndexData.cyMACDHit)
self.setCellRedOrGreen(sheet['S' + str(startRow)], AnalysisIndexData.cyMACDDayHit)
self.setCellRedOrGreen(sheet['T' + str(startRow)], AnalysisIndexData.todaysh50Close)
self.setCellRedOrGreen(sheet['U' + str(startRow)], AnalysisIndexData.sh50Dire)
self.setCellRedOrGreen(sheet['V' + str(startRow)], AnalysisIndexData.sh50ValStatus)
self.setCellRedOrGreen(sheet['W' + str(startRow)], AnalysisIndexData.sh50StatusHit)
self.setCellRedOrGreen(sheet['X' + str(startRow)], AnalysisIndexData.sh50MACDHit)
self.setCellRedOrGreen(sheet['Y' + str(startRow)], AnalysisIndexData.sh50MACDDayHit)
self.setCellRedOrGreen(sheet['Z' + str(startRow)], AnalysisIndexData.todayzxClose)
self.setCellRedOrGreen(sheet['AA' + str(startRow)], AnalysisIndexData.zxDire)
self.setCellRedOrGreen(sheet['AB' + str(startRow)], AnalysisIndexData.zxValStatus)
self.setCellRedOrGreen(sheet['AC' + str(startRow)], AnalysisIndexData.zxStatusHit)
self.setCellRedOrGreen(sheet['AD' + str(startRow)], AnalysisIndexData.zxMACDHit)
self.setCellRedOrGreen(sheet['AE' + str(startRow)], AnalysisIndexData.zxMACDDayHit)
wb.save(self.path)
def writeAnalysisHeader(self):
wb = openpyxl.load_workbook(self.path)
sheet = wb.create_sheet(title=self.sheetName[3])
sheet.merge_cells('B1:G1')
sheet['B1'] = '上证指数';
sheet['B2'] = '指数';
sheet['C2'] = '走势';
sheet['D2'] = '成交量状态';
sheet['E2'] = '走势成交量提示'
sheet['F2'] = 'MACD提示'
sheet['G2'] = '每日指数提示'
sheet.merge_cells('H1:M1')
sheet['H1'] = '深证成指';
sheet['H2'] = '指数';
sheet['I2'] = '走势';
sheet['J2'] = '成交量状态';
sheet['K2'] = '走势成交量提示'
sheet['L2'] = 'MACD提示'
sheet['M2'] = '每日指数提示'
sheet.merge_cells('N1:S1')
sheet['N1'] = '创业板指';
sheet['N2'] = '指数';
sheet['O2'] = '走势';
sheet['P2'] = '成交量状态';
sheet['Q2'] = '走势成交量提示'
sheet['R2'] = 'MACD提示'
sheet['S2'] = '每日指数提示'
sheet.merge_cells('T1:Y1')
sheet['T1'] = '上证50';
sheet['T2'] = '指数';
sheet['U2'] = '走势';
sheet['V2'] = '成交量状态';
sheet['W2'] = '走势成交量提示'
sheet['X2'] = 'MACD提示'
sheet['Y2'] = '每日指数提示'
sheet.merge_cells('Z1:AE1')
sheet['Z1'] = '中小板指';
sheet['Z2'] = '指数';
sheet['AA2'] = '走势';
sheet['AB2'] = '成交量状态';
sheet['AC2'] = '走势成交量提示'
sheet['AD2'] = 'MACD提示'
sheet['AE2'] = '每日指数提示'
wb.save(self.path)
| [
"zhousijian@travelsky.com"
] | zhousijian@travelsky.com |
6a20c6b46f1956d64bcd5fc20261bb7af697a8eb | 6679ab23bf4f0100eb07cf13be21a8c1b1ae4c1f | /Python_Team_Notes/구현/input_alpha_to_num.py | 817176213e5896f5d99570aae3099f7501ab5271 | [] | no_license | gimquokka/problem-solving | 1c77e0ad1828fa93ebba360dcf774e38e157d7b6 | f3c661241d3e41adee330d19db3a66e20d23cf50 | refs/heads/master | 2023-06-28T10:19:07.230366 | 2021-07-29T11:29:26 | 2021-07-29T11:29:26 | 365,461,737 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 131 | py | # a ~ z를 입력으로 받아 숫자로 전환
char = input()
# ord 함수 이용
num = int(ord(char)) - int(ord('a'))
print(num)
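A non-interactive sketch of the same mapping: since ord('a') is 97, subtracting it maps 'a'..'z' onto 0..25, and chr reverses the conversion.

```python
for ch in "abz":
    print(ch, "->", ord(ch) - ord("a"))  # a -> 0, b -> 1, z -> 25

# chr goes the other way: number back to character
print(chr(0 + ord("a")))  # a
```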
| [
"gimquokka@gmail.com"
] | gimquokka@gmail.com |
553f3a3d1f1ae0c200e965ab8b316854da0f69b9 | 91191339a1f9731bb7d1bd646a363db5e9812fab | /Task6.py | a765df91993fa43732d3a6e731c395513ca0add9 | [] | no_license | Hedin555/Python | 3432ecd3a0aa0d22bbd47054eec29c7765622489 | 856dde8ce2d0292c83737cb0a2d8395d85fef670 | refs/heads/main | 2023-01-24T16:52:20.271802 | 2020-11-22T16:45:24 | 2020-11-22T16:45:24 | 315,083,881 | 0 | 0 | null | 2020-11-22T17:33:43 | 2020-11-22T16:40:08 | Python | UTF-8 | Python | false | false | 375 | py | a = int(input('Введите начальную дистанцию, пробегаемую спортсменом в день: '))
b = int(input('Введите требуемое расстояние: '))
day = 1
while a < b:
a = a * 1.1
day = day + 1
print(f'Спортсмен добъется необходимого результата на {day} день')
| [
"qmkonstantin@gmail.com"
] | qmkonstantin@gmail.com |
72caec1e57d85a6bf4b606a5228254cf3c680874 | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_179/ch25_2020_03_23_14_36_52_247565.py | b77b247db72eeb75a6603e8b3a253feeebcab017 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 300 | py | import math
g = 9.8
def calcula_distancia (velocidade, angulo):
    # math.radians (not math.degrees) converts the angle from degrees to radians
    angulo_radianos = math.radians(angulo)
    distancia = (velocidade**2 * math.sin(2*angulo_radianos))/g
    # the checks must come before the return, otherwise they are unreachable
    if distancia < 98:
        print ('Muito perto')
    elif distancia > 102:
        print ('Muito longe')
    else:
        print ('Acertou!')
    return distancia
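A quick numeric sanity check of the range formula under assumed values: at a 45 degree launch angle sin(2*theta) = 1, so v = 31.3 m/s gives v**2 * sin(2*theta) / g of roughly 99.97 m, which lands inside the 98-102 'Acertou!' band used above.

```python
import math

g = 9.8
v, angulo = 31.3, 45
# projectile range on flat ground: v^2 * sin(2*theta) / g, with theta in radians
distancia = (v**2 * math.sin(2 * math.radians(angulo))) / g
print(round(distancia, 2))  # 99.97
```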
"you@example.com"
] | you@example.com |
1e47736427d5b29ddbed8c696b895ae76e78410d | 5da5473ff3026165a47f98744bac82903cf008e0 | /packages/google-cloud-vm-migration/samples/generated_samples/vmmigration_v1_generated_vm_migration_finalize_migration_async.py | 9725b31782e691f5713fa20467e00eb66fe54fa1 | [
"Apache-2.0"
] | permissive | googleapis/google-cloud-python | ed61a5f03a476ab6053870f4da7bc5534e25558b | 93c4e63408c65129422f65217325f4e7d41f7edf | refs/heads/main | 2023-09-04T09:09:07.852632 | 2023-08-31T22:49:26 | 2023-08-31T22:49:26 | 16,316,451 | 2,792 | 917 | Apache-2.0 | 2023-09-14T21:45:18 | 2014-01-28T15:51:47 | Python | UTF-8 | Python | false | false | 1,993 | py | # -*- coding: utf-8 -*-
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Generated code. DO NOT EDIT!
#
# Snippet for FinalizeMigration
# NOTE: This snippet has been automatically generated for illustrative purposes only.
# It may require modifications to work in your environment.
# To install the latest published package dependency, execute the following:
# python3 -m pip install google-cloud-vm-migration
# [START vmmigration_v1_generated_VmMigration_FinalizeMigration_async]
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
# client as shown in:
# https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import vmmigration_v1
async def sample_finalize_migration():
# Create a client
client = vmmigration_v1.VmMigrationAsyncClient()
# Initialize request argument(s)
request = vmmigration_v1.FinalizeMigrationRequest(
migrating_vm="migrating_vm_value",
)
# Make the request
operation = client.finalize_migration(request=request)
print("Waiting for operation to complete...")
response = (await operation).result()
# Handle the response
print(response)
# [END vmmigration_v1_generated_VmMigration_FinalizeMigration_async]
| [
"noreply@github.com"
] | noreply@github.com |
7fd7a01d4607f5455302f773014b401914bae6c2 | a85ddbac6ce4e7738ccd4b37b2c6279559a14e0c | /verification_types/migrations/0003_auto_20210117_2358.py | 4da53b9ad4b5629e3551d236f0a1524c56de816b | [] | no_license | Fire2Bear/url_monitor | 644781b74653402672dc2b38219ecb6034f0c400 | 546fc49be6f7943b29ef4d1ca2fe6893c31bb7b0 | refs/heads/master | 2023-02-26T18:39:41.635307 | 2021-02-03T14:52:29 | 2021-02-03T14:52:29 | 330,383,462 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 505 | py | # Generated by Django 3.1.3 on 2021-01-17 22:58
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('verification_types', '0002_verificationtype_code'),
]
operations = [
migrations.AlterField(
model_name='verificationtype',
name='code',
field=models.CharField(default=None, max_length=4, null=True, unique=True, verbose_name='Identifiant unique du type de vérification'),
),
]
| [
"nicolas.jousset@etudiant.univ-rennes1.fr"
] | nicolas.jousset@etudiant.univ-rennes1.fr |
7eb3e34af4c94f5c6476f267de752ba66437517a | 1b4baa4f2ceee7f9e22be2c9ecdd5ed7470317cb | /marks.py | ea51f41abd5754a7cb44aa8a76c0cac3c9547cd1 | [] | no_license | bnprk/pybegin | 5cac1dd098e8202e2574d4030560185d8517933b | 38ca59a31257e2000db284ed6b4dd828e5c29b61 | refs/heads/master | 2020-12-24T08:49:14.437329 | 2016-08-19T21:11:05 | 2016-08-19T21:11:05 | 25,602,208 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 664 | py | #!/usr/bin/env python3
n = int(input("Enter the number of students:"))
data = {} # here we will store the data
languages = ('Physics', 'Maths', 'History') #all languages
for i in range(0, n): #for the n number of students
name = input('Enter the name of the student %d: ' % (i + 1)) #Get the name of the student
marks = []
for x in languages:
marks.append(int(input('Enter marks of %s: ' % x))) #Get the marks for languages
data[name] = marks
for x, y in data.items():  # dict.iteritems() does not exist in Python 3
total = sum(y)
print("%s 's total marks %d" % (x, total))
if total < 120:
print("%s failed :(" % x)
else:
print("%s passed :)" % x)
| [
"bnprk27@gmail.com"
] | bnprk27@gmail.com |
5bd58b1f6673a0ce365efc48e5b852e5ac8e54ed | 984529e5171b0594c966b55d4680d54389ae7d0c | /cc/apps/tracking/migrations/0011_auto__chg_field_trackinglog_participant.py | 7f6cacb87c49401ac3bd180636f18403772ce2f7 | [] | no_license | afsmith/cc | 6d15616cbba7b0d47a3de1b9d4d11dddb1986f5b | 86f08c78e8d855e55488c4fdf73353e4dd8a43cd | refs/heads/master | 2021-03-16T09:52:19.020011 | 2015-04-22T11:35:07 | 2015-04-22T11:35:07 | 10,797,416 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,458 | py | # -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Changing field 'TrackingLog.participant'
db.alter_column(u'tracking_trackinglog', 'participant_id', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['accounts.CUser'], null=True))
def backwards(self, orm):
# Changing field 'TrackingLog.participant'
db.alter_column(u'tracking_trackinglog', 'participant_id', self.gf('django.db.models.fields.related.ForeignKey')(default=1, to=orm['accounts.CUser']))
models = {
u'accounts.cuser': {
'Meta': {'object_name': 'CUser'},
'country': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'unique': 'True', 'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'industry': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'signature': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'user_type': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'username': ('django.db.models.fields.CharField', [], {'max_length': '30'})
},
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'cc_messages.message': {
'Meta': {'object_name': 'Message'},
'allow_download': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'cc_me': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'expired_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'files': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'files'", 'symmetrical': 'False', 'to': u"orm['content.File']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.TextField', [], {}),
'modified_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'owner'", 'null': 'True', 'to': u"orm['accounts.CUser']"}),
'receivers': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'receivers'", 'symmetrical': 'False', 'to': u"orm['accounts.CUser']"}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '130'})
},
u'content.file': {
'Meta': {'ordering': "('created_on',)", 'object_name': 'File'},
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'index': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'key': ('django.db.models.fields.CharField', [], {'default': "'wCf5hQh5sZoTCWhmXsld2T'", 'unique': 'True', 'max_length': '22'}),
'link_text': ('django.db.models.fields.CharField', [], {'max_length': '150', 'blank': 'True'}),
'orig_filename': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['accounts.CUser']", 'null': 'True'}),
'pages_num': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '10'}),
'subkey_conv': ('django.db.models.fields.CharField', [], {'default': "'np54r'", 'max_length': '5'}),
'subkey_orig': ('django.db.models.fields.CharField', [], {'default': "'Rmbr3'", 'max_length': '5'}),
'subkey_preview': ('django.db.models.fields.CharField', [], {'default': "'hMY2m'", 'max_length': '5'}),
'subkey_thumbnail': ('django.db.models.fields.CharField', [], {'default': "'krXkK'", 'max_length': '5'}),
'type': ('django.db.models.fields.IntegerField', [], {}),
'updated_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'tracking.closeddeal': {
'Meta': {'object_name': 'ClosedDeal'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cc_messages.Message']"}),
'participant': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['accounts.CUser']"})
},
u'tracking.trackingevent': {
'Meta': {'object_name': 'TrackingEvent'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'page_number': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'page_view': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'total_time': ('django.db.models.fields.BigIntegerField', [], {'default': '0'}),
'tracking_session': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['tracking.TrackingSession']", 'null': 'True'})
},
u'tracking.trackinglog': {
'Meta': {'object_name': 'TrackingLog'},
'action': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'client_ip': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'device': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'file_index': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'message': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cc_messages.Message']"}),
'participant': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['accounts.CUser']", 'null': 'True'}),
'revision': ('django.db.models.fields.IntegerField', [], {})
},
u'tracking.trackingsession': {
'Meta': {'object_name': 'TrackingSession'},
'client_ip': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'device': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True'}),
'file_index': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cc_messages.Message']"}),
'participant': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['accounts.CUser']"}),
'tracking_log': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['tracking.TrackingLog']", 'null': 'True'})
}
}
complete_apps = ['tracking'] | [
"hieuh25@gmail.com"
] | hieuh25@gmail.com |
12397dba9aa20dfa6ddc202d2645785817557c92 | a3038ca9afae4ab5a6c3426e91374a37a9c0a581 | /.venv/lib/python3.6/fnmatch.py | 52114f17c96460f02d6dd447121e52fb804f9a87 | [] | no_license | kgmour/webservice-pycon | f88e79e2cf338a35e0ba6a49801170c5c2249313 | 004027f08443c26cef32b31a90b6a71cd843f026 | refs/heads/master | 2020-03-16T19:21:28.998960 | 2018-05-10T14:36:06 | 2018-05-10T14:36:06 | 132,912,086 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 68 | py | /Users/kevinmouritsen/.pyenv/versions/3.6.1/lib/python3.6/fnmatch.py | [
"kgmour@gmail.com"
] | kgmour@gmail.com |
f1852c7da40eb5c08990351bb1c5c7ea3197c233 | 7bfcb91f95d20f1199d54f91c9a095df08b44d83 | /Backup/Django_Youtube/WebBanHang/user/models.py | b2d82ecc75ed2668b3c7dbb54babf9acbad04250 | [] | no_license | llduyll10/backup | bcb09eb632dd0858d515aacb7132d913da4dc24c | 8849d812566977f9a379d38ee1daa2ef42c02c7f | refs/heads/master | 2023-02-28T11:22:23.831040 | 2021-02-01T17:09:55 | 2021-02-01T17:09:55 | 335,006,700 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 263 | py | from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class CustomerUser(AbstractUser):
phone_number = models.CharField(default='', max_length=15)
address = models.CharField(default='', max_length=255) | [
"llduyll10@gmail.com"
] | llduyll10@gmail.com |
c98ac5620ffbd99f16e637d08b518f76f5ff669f | edaa42987b34a97b91de60f4cec98cf3f552e91c | /train_graph_classifier.py | 70c8ca7a015b3f40c0c7df059a5544dfcb9a9c7a | [] | no_license | PriyankaChakraborti/Benchmarking-movie-posters-with-transfer-and-bandit-learning | 2ec55a64c3c0cf8a387e02a12f5bcefbd92ce2d8 | 578dcf2ed29a10284d680a469fc2049301d59b90 | refs/heads/master | 2021-08-04T02:35:32.070264 | 2020-12-21T22:49:05 | 2020-12-21T22:49:05 | 229,343,216 | 0 | 0 | null | 2020-12-21T22:49:06 | 2019-12-20T22:07:38 | Python | UTF-8 | Python | false | false | 13,641 | py | # Import Modules
from keras.models import load_model, Model
from keras import optimizers
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
import pickle
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.ensemble import RandomForestRegressor
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
from skmultilearn.embedding import OpenNetworkEmbedder, EmbeddingClassifier
from skmultilearn.adapt import MLkNN
from skmultilearn.cluster import NetworkXLabelGraphClusterer
import dill
import networkx as nx
import matplotlib.pyplot as plt
from skmultilearn.embedding import CLEMS
import sklearn.metrics as metrics
from sklearn.metrics import classification_report
from sklearn.externals import joblib
#################### Input Parameters ####################
img_shape = (100,100,3)
des_layer = -8
only_drama_thresh = 1000
comedy_and_drama_thresh = 1000
train_perc = 0.90
learning_rate = 10**(-6)
# Choose whether to perform gridsearch or create classifier
use_gridsearch = False
# Parameters for Random Forest Classifier
max_depth = None
max_features = 6
min_samples_split = 3
min_samples_leaf = 5
bootstrap = True
n_estimators = 50
num_neighbors = 5
genres = ['Action','Comedy','Drama','Horror','Romance','Thriller']
# Import the CSV File
full_csv_df = pd.read_csv('Data/full_poster_6.csv')
############################################################
#################### Define functions ####################
# Transform labels from ML to MC
def ML_to_MC(labels_ML):
labels_MC = np.zeros((1,len(labels_ML)))
# Find all unique row-vectors
unique_label_vec, counts = np.unique(labels_ML, axis=0, return_counts=True)
#unique_label_vec = [int(label) for label in unique_label_vec]
class_num = 0
for label_vec in unique_label_vec:
for count,label_ML in enumerate(labels_ML):
if np.array_equal(label_vec, label_ML):
labels_MC[0,count] = int(class_num)
class_num += 1
return unique_label_vec, labels_MC[0], counts
def generate_MC_genres(unique_label_vec, labels_MC, genres):
names = []
for index,vect in enumerate(unique_label_vec):
if index in labels_MC:
out_vect = []
# Go through each index of vector
for label_index,name in enumerate(vect):
try:
if int(name) == 1:
out_vect.append(genres[label_index])
else:
out_vect.append(str(0))
except:
out_vect.append(str(0))
names.append(out_vect)
return names
# Function to calculate hamming distance
def hamming_dist(y_true,y_pred):
y_true = tf.cast(y_true,dtype=tf.float32)
y_pred = tf.cast(y_pred,dtype=tf.float32)
return K.mean(K.sum(K.abs(y_true-y_pred),axis=1))
def hamming_dist_classifier(y_true,y_pred):
return np.mean(np.sum(np.abs(y_true-y_pred),axis=1))
############################################################
#################### Load data and preprocess it ####################
# Load model
model=load_model('Data/vgg16_1.h5', custom_objects={"hamming_dist": hamming_dist})
# Create new model, removing the last layer
model = Model(model.inputs, model.layers[des_layer].output)
# Compile model
model.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=learning_rate, beta_1=0.9), metrics=['mean_squared_error'])
num_imgs_6 = 68054
pickle_in = open("Data/labels_6_ML.pickle","rb")
labels = pickle.load(pickle_in)
data = np.memmap('Data/data_6.dat', dtype='float32', mode='r', shape=(num_imgs_6,img_shape[0],img_shape[1],img_shape[2]))
total_count = np.sum(labels,axis=0)
print('')
print('Below are the number of posters for each genre before thresholding:')
print([int(val) for val in total_count])
unique_label_vec, labels_MC, counts = ML_to_MC(labels)
names = generate_MC_genres(unique_label_vec, labels_MC, genres)
print('')
print('Below are the number of posters in each unique label vector before thresholding:')
print([int(num) for num in counts])
print([str(label) for label in names])
# Go through each label and threshold desired label vectors
print('')
print('Thresholding the data...')
num_after_thresh = 49260 # Manually found
drama_count = 0
comedy_drama_count = 0
other_count = 0
pos_index = 0
temp_data = np.memmap('Data/data_6_MLthresh.dat', dtype='float32', mode='w+', shape=(num_after_thresh,img_shape[0],img_shape[1],img_shape[2]))
temp_labels = np.zeros((num_after_thresh,labels.shape[1]))
cnt = 0
for label_index,label in enumerate(labels):
#print('On ' + str(cnt) + ' out of ' + str(labels.shape[0]) )
cnt += 1
# Check current label to see if it is only nonzero for Drama (index 2)
if (int(np.sum(label))==1) and (int(label[2])==1):
if drama_count <= only_drama_thresh:
temp_data[pos_index,:,:,:] = data[label_index,:,:,:]
temp_labels[pos_index] = labels[label_index]
pos_index+=1
drama_count+=1
# Also reduce number of comedy and dramas shared
elif (int(np.sum(label))==2) and (int(label[1])==1) and (int(label[2])==1):
if comedy_drama_count <= comedy_and_drama_thresh:
temp_data[pos_index,:,:,:] = data[label_index,:,:,:]
temp_labels[pos_index] = labels[label_index]
pos_index+=1
comedy_drama_count+=1
else:
temp_data[pos_index,:,:,:] = data[label_index,:,:,:]
temp_labels[pos_index] = labels[label_index]
pos_index += 1
other_count += 1
print('')
print(str(drama_count) + ' movies removed tagged only with drama')
print(str(comedy_drama_count) + ' movies removed tagged specifically with comedy and drama')
print('There are now ' + str(pos_index) + ' unique movies')
total_count = np.sum(temp_labels,axis=0)
print('')
print('Below are the number of posters for each genre after thresholding:')
print([int(val) for val in total_count])
genre_hist = total_count
unique_label_vec, labels_MC, counts = ML_to_MC(temp_labels)
names = generate_MC_genres(unique_label_vec, labels_MC, genres)
print('')
print('Below are the number of posters in each unique label vector after thresholding:')
print([int(num) for num in counts])
print([str(label) for label in names])
# Split into test and train sets
num_in_test = int(len(temp_labels)*(1-train_perc))
num_in_train = int(len(temp_labels)*train_perc)+1
print('')
print('There are ' + str(num_in_train) + ' train posters and ' + str(num_in_test) + ' test posters')
print('')
print('Creating the test-dataset...')
test_data = np.memmap('Data/test_data.dat', dtype='float32', mode='w+', shape=(num_in_test,img_shape[0], img_shape[1], img_shape[2]))
test_labels = np.zeros((num_in_test,temp_labels.shape[1]))
counter = 0
for row in list(range(num_in_test)):
test_data[counter,:,:,:] = temp_data[row,:,:,:]
test_labels[counter,:] = temp_labels[row,:]
counter += 1
print('The shape of the test-dataset is: ' + str(test_data.shape))
print('The shape of the test-labels is: ' + str(test_labels.shape))
print('')
print('Creating the train dataset...')
train_data = np.memmap('Data/train_data.dat', dtype='float32', mode='w+', shape=(num_in_train, img_shape[0], img_shape[1], img_shape[2]))
train_labels = np.zeros((num_in_train,temp_labels.shape[1]))
counter = 0
for row in list(range(num_in_test,temp_labels.shape[0])):
train_data[counter,:,:,:] = temp_data[row,:,:,:]
train_labels[counter,:] = temp_labels[row,:]
counter += 1
print('The shape of the train-dataset is: ' + str(train_data.shape))
print('The shape of the train-labels is: ' + str(train_labels.shape))
del temp_data
del temp_labels
print('')
print('Extracting feature vectors for the test and train data...')
print('')
feature_train = model.predict(train_data)
feature_test = model.predict(test_data)
del model
print('')
print('The feature vectors have been extracted')
print('The dimensions of the feature vector array for the test set is: ' + str(feature_test.shape) )
print('The dimensions of the feature vector array for the train set is: ' + str(feature_train.shape) )
############################################################
#################### Define basic architecture for the model ####################
# construct a graph builder that will include label relations weighted by how many times they co-occurred in the data, without self-edges
graph_builder = LabelCooccurrenceGraphBuilder(weighted = True, include_self_edges = False)
edge_map = graph_builder.transform(train_labels)
print('')
print("Our graph builder for {} labels has {} edges".format(6, len(edge_map)))
print('Below is the associated edge map for the train set:')
print(edge_map)
# setup the clusterer to use, we selected the modularity-based approach
clusterer = NetworkXLabelGraphClusterer(graph_builder=graph_builder, method='louvain')
partition = clusterer.fit_predict(train_data,train_labels)
print('')
print('The output of the NetworkXLabelGraphClusterer is below:')
print(partition)
def to_membership_vector(partition):
return {
member : partition_id
for partition_id, members in enumerate(partition)
for member in members
}
membership_vector = to_membership_vector(partition)
indices = list(range(len(genres)))
names_dict = dict(zip(indices,genres))
# Create and save map of graph-based relationships
f = plt.figure()
nx.draw(
clusterer.graph_,
pos=nx.circular_layout(clusterer.graph_),
labels=names_dict,
with_labels = True,
width = [10*x/train_labels.shape[0] for x in clusterer.weights_['weight']],
node_color = [membership_vector[i] for i in range(train_labels.shape[1])],
cmap=plt.cm.Spectral,
node_size=10,
font_size=14
)
f.savefig("Data/label_graph.png")
# Set up the ensemble metaclassifier
#openne_line_params = dict(batch_size=1000, order=3)
#embedder = OpenNetworkEmbedder(
# graph_builder,
# 'LINE',
# dimension = 5*train_labels.shape[1],
# aggregation_function = 'add',
# normalize_weights=True,
# param_dict = openne_line_params
#)
dimensional_scaler_params = {'n_jobs': -1}
embedder = CLEMS(metrics.jaccard_similarity_score, is_score=True, params=dimensional_scaler_params)
############################################################
#################### Run Grid-Search over this classifier ####################
if use_gridsearch==True:
classifier = EmbeddingClassifier( embedder=embedder, regressor=RandomForestRegressor(), classifier=MLkNN(k=5), require_dense=[False, False])
#classifier = EmbeddingClassifier( embedder, RandomForestRegressor(), MLkNN(k=1), regressor_per_dimension= True )
parameters = {"regressor__max_depth": [3, None],
"regressor__max_features": [3, 6], # was [1,3,6]
"regressor__min_samples_split": [3, 10],
"regressor__min_samples_leaf": [1, 5], # [1,3,10]
"regressor__bootstrap": [True, False],
"regressor__n_estimators": [20, 50]}
scorer = make_scorer(hamming_dist_classifier, greater_is_better=False)
grid_classifier = GridSearchCV(classifier, param_grid=parameters, scoring=scorer, verbose=0)
grid_classifier.fit(feature_train, train_labels)
print("Best Score (Hamming Distance): %f" % grid_classifier.best_score_)
print("Optimal Hyperparameter Values: ", grid_classifier.best_params_)
############################################################
#################### Build the classifier for use on our test-set ####################
if use_gridsearch==False:
# Define using found optimal parameters
reg = RandomForestRegressor(max_depth=max_depth, max_features=max_features, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf, bootstrap=bootstrap, n_estimators=n_estimators)
# reg = RandomForestRegressor(n_estimators=10, n_jobs=-1)
#reg = RandomForestRegressor()
# Create classifier using those values
#classifier = EmbeddingClassifier( embedder=embedder, regressor=reg, classifier=MLkNN(k=num_neighbors))
#classifier = EmbeddingClassifier( embedder=embedder, regressor=reg, classifier=MLkNN(k=1), regressor_per_dimension= True, require_dense=[False, False] )
classifier = EmbeddingClassifier( embedder=embedder, regressor=reg, classifier=MLkNN(k=5), regressor_per_dimension=True)
print('')
print('Fitting the train data to our graph-based classifier...')
print('')
# Fit to the train set
classifier.fit(feature_train, train_labels)
# Make the predictions on our test dataset
print('Making predictions on our test data...')
pred_test_labels = classifier.predict(feature_test).toarray()
# Output the classification report
print('')
print(classification_report(test_labels,pred_test_labels))
print('')
print('These results use:')
print('Max_depth: ' + str(max_depth))
print('max_features: ' + str(max_features))
print('min_samples_split: ' + str(min_samples_split))
print('min_samples_leaf: ' + str(min_samples_leaf))
print('bootstrap: ' + str(bootstrap))
print('n_estimators: ' + str(n_estimators))
print('num_neighbors: ' + str(num_neighbors))
# Save the classifier model pieces
#with open('Data/classifier.pickle', 'wb') as file:
# pickle.dump(classifier, file)
with open("Data/classifier.pkd", "wb") as dill_file:
dill.dump(classifier, dill_file)
############################################################
| [
"priyanka.chakraborti@huskers.unl.edu"
] | priyanka.chakraborti@huskers.unl.edu |
03004024534da0e57bfd5e600e5e287e7ef641d6 | 0c21b60518e4e96ff1a68cf551eeb59d42be8b8a | /rango/migrations/0001_initial.py | dcf5def48ada10100b093e3a575d30849b304681 | [] | no_license | mzeeshanid/pythonlearning-userauth | 522bd56f7163757c38bb49ce1c6ccaebc8101544 | 2ecba2599c4f319d9607e5384b8aef7185bdfc90 | refs/heads/master | 2021-01-01T03:40:08.521769 | 2016-06-03T13:02:48 | 2016-06-03T13:02:48 | 58,745,008 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 895 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.9.6 on 2016-05-13 04:51
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UserProfile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('website', models.URLField(blank=True)),
('picture', models.ImageField(blank=True, upload_to='profileImages')),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| [
"mzeeshanid@yahoo.com"
] | mzeeshanid@yahoo.com |
d1b7d022400ba00ee4fce89919dee52215244a92 | e064a58ff63212ead8df1683aaa9e44f44be3058 | /integration-tests/tests/entity_last_read_date_tests.py | 142f617056ba8c127ac3e9e64b94a5a39460fea2 | [
"Apache-2.0"
] | permissive | kkasravi/atk | 09cc7cba05abbcf0bd4c3eaae7dbccef7d02c7e8 | 65c3180ee33b752567c72c333f0cc8a8efc27816 | refs/heads/master | 2020-12-28T23:23:06.922418 | 2015-10-22T22:48:14 | 2015-10-22T22:48:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,632 | py | #
# Copyright (c) 2015 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import unittest
import trustedanalytics as ta
# show full stack traces
ta.errors.show_details = True
ta.loggers.set_api()
# TODO: port setup should move to a super class
if ta.server.port != 19099:
ta.server.port = 19099
ta.connect()
class LastReadDateUpdatesTests(unittest.TestCase):
_multiprocess_can_split_ = True
def test_access_refreshes_frames(self):
"""Tests that some actions do or do not update the last_read_date entity property"""
csv = ta.CsvFile("/datasets/dates.csv", schema= [('start', ta.datetime),
('id', int),
('stop', ta.datetime),
('color', str)], delimiter=',')
name = "update_last_read"
if name in ta.get_frame_names():
ta.drop_frames(name)
f = ta.Frame(csv, name=name) # name it, to save it from the other GC blasting test in here
t0 = f.last_read_date
t1 = f.last_read_date
#print "t0=%s" % t0.isoformat()
self.assertEqual(t0, t1)
f.schema # schema, or other meta data property reads, should not update the last read date
t2 = f.last_read_date
#print "t2=%s" % t2.isoformat()
self.assertEqual(t0, t2)
f.inspect() # inspect should update the last read date
t3 = f.last_read_date
#print "t3=%s" % t3.isoformat()
self.assertLess(t2,t3)
f.copy() # copy should update the last read date
t4 = f.last_read_date
#print "t4=%s" % t4.isoformat()
self.assertLess(t3,t4)
f.bin_column('id', [3, 5, 8])
t5 = f.last_read_date
#print "t5=%s" % t5.isoformat()
self.assertLess(t4,t5)
# testing graph and model last_read_date is piggybacked on other tests which already do the
# heavy lifting of proper construction. See graph_pagerank_tests.py and model_kmeans_test.py
if __name__ == "__main__":
unittest.main()
| [
"briton.barker@intel.com"
] | briton.barker@intel.com |
dfa2627f80936e2cfdd5cdba073341bfc8db39ee | 65bad08af9e76c1adbdfd9d5447621c713d4e483 | /studay_sele/__init__.py | 84c8ae7085de930eafff8406caf0cb2f69b69e8a | [] | no_license | jinsm01/changlumanman | c74bbd7f1d80bdca40076722265213a94f2dc691 | 0afe5f01d4ed31e9dba1611eed8dae3a24884635 | refs/heads/master | 2022-07-21T23:58:15.525735 | 2020-05-18T03:07:07 | 2020-05-18T03:07:07 | 262,966,453 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 121 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2020/5/13 17:22
# @Author :labixiaoxin
# @File : __init__.py.py | [
"61052379+jinsm01@users.noreply.github.com"
] | 61052379+jinsm01@users.noreply.github.com |
53f752fc2df0837ee2781e7c187d13028052fdca | b2fceb7e11d2d0c8bdfecc5e3d50685855b9f4fd | /example/pythonSnippet.py | 17c246231e4dad4bbf2283522d842632de40be1c | [
"MIT"
] | permissive | DIPSAS/DockerBuildSystem | a6f9858e1e7b4895f17c8434482ab700d1c72570 | 0ddfb0a4cebf707a36d2708c12c0c2641172299b | refs/heads/master | 2021-09-28T22:37:51.484815 | 2021-09-22T23:49:51 | 2021-09-22T23:49:51 | 133,937,355 | 10 | 2 | MIT | 2020-01-09T14:53:02 | 2018-05-18T10:02:22 | Python | UTF-8 | Python | false | false | 148 | py | def GetInfoMsg():
infoMsg = "This python snippet is just an example.\r\n"
return infoMsg
if __name__ == "__main__":
print(GetInfoMsg()) | [
"hans.erik.heggem@gmail.com"
] | hans.erik.heggem@gmail.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.