| repo_name (stringlengths 6–97) | path (stringlengths 3–341) | text (stringlengths 8–1.02M) |
|---|---|---|
JayramMardi/codeMania
|
codeMania-python-matplotlib/mymodule.py
|
'''
TRICKS FOR CODING {
1. press the down arrow key when the cursor is centered on a word to save time
}
'''
person1 = {
"name": "j",
"age": "23",
"class": "standard"
}
class myclass():
agent1 = {
"name": " india ",
"age":"21",
"standard":"A class v",
"majority" : "senior z level supreme commander battle star "}
agent2= {
"name": " <NAME>",
"age": "21",
"majority": "ex major counter terriost ek no ka gunda ",
"proffesion" : "city attaker one of the most cyber attack in india "
}
agent3 = {
"name": "pk",
"age": "23",
"criteria" : "single" ,
"address" : "kolkata",
"keystroke" : "No" ,
"natural" : "yes" ,
"far cry id" : '388393333'
, "profession" : "most wanted gangster most hitllist in the world like me means to owner of this code he is beyound over expectation like me and he never gone rush in the battle . but he soon comming to the battleground mobile india he is never gone stopped. "
}
class myclass2:
employee1 ={
"name" : " sync" ,
"age" : "21",
"majority" : " senior article officer",
"national id " : "driving liscense ",
"address " : "new york texas " ,
}
|
JayramMardi/codeMania
|
codeMania-python-AI-Machine-learning/tut9_practise.py
|
def testing(num):
if (num>50):
return (num-2)
return testing(testing(num+10))
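# trace: testing(30) -> testing(testing(40)) -> ...; the recursion bottoms out once num exceeds 50
# and the nested calls unwind to 52, so the line below prints 52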
print(testing(30))
|
JayramMardi/codeMania
|
codeMania-python-begginer/01_pythonforbegginer.py
|
<filename>codeMania-python-begginer/01_pythonforbegginer.py
# What Is Programming
# Python is easy to use and needs less development time. It is a high-level language, which means you don't have to worry about memory allocation or crashing your program, and the code is portable: it installs on Windows/Mac/Linux and installation is very quick.
# Modules PiP And Comments
# Modules are code written by someone else that anyone can reuse in their own programs. Download one by typing "pip install <module name>" in the terminal and hitting enter.
# 01 hello.py
print("hello world")
# what are pip
# pip is basically the Python package installer. It ships with the Python interpreter and is used to download modules.
# what are comments
# Comments are readable by humans but are not executed, because they are not treated as Python statements. Comments can be single-line or multi-line: use # for a single line and ''' ... ''' for multiple lines.
'''
hello world
'''
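# A minimal sketch of the pip workflow described above; the package name "requests" is only an illustration:
#   in the terminal:  pip install requests
#   then in Python:   import requests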
|
JayramMardi/codeMania
|
codeMania-python-AI-Machine-learning/tut8_linear-polynomial.py
|
'''
Machine Learning - Polynomial Regression
Polynomial Regression
If your data points clearly will not fit a linear regression (a straight line through all data points), it might be ideal for polynomial regression.
Polynomial regression, like linear regression, uses the relationship between the variables x and y to find the best way to draw a line through the data points.
How Does it Work?
Python has methods for finding a relationship between data-points and to draw a line of polynomial regression. We will show you how to use these methods instead of going through the mathematic formula.
In the example below, we have registered 18 cars as they were passing a certain tollbooth.
We have registered the car's speed, and the time of day (hour) the passing occurred.
The x-axis represents the hours of the day and the y-axis represents the speed:
Example
Start by drawing a scatter plot:
import matplotlib.pyplot as plt
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
plt.scatter(x, y)
plt.show()
Result:
'''
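# The scatter-plot example described in the text above, as runnable code (same data as in the text):
import matplotlib.pyplot as plt
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
plt.scatter(x, y)
plt.show()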
'''
Example Explained
Import the modules you need.
You can learn about the NumPy module in our NumPy Tutorial.
You can learn about the SciPy module in our SciPy Tutorial.
import numpy
import matplotlib.pyplot as plt
Create the arrays that represent the values of the x and y axis:
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
NumPy has a method that lets us make a polynomial model:
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
Then specify how the line will display; we start at position 1 and end at position 22:
myline = numpy.linspace(1, 22, 100)
Draw the original scatter plot:
plt.scatter(x, y)
Draw the line of polynomial regression:
plt.plot(myline, mymodel(myline))
Display the diagram:
plt.show()
R-Squared
It is important to know how strong the relationship between the values of the x- and y-axis is; if there is no relationship, polynomial regression cannot be used to predict anything.
The relationship is measured with a value called the r-squared.
The r-squared value ranges from 0 to 1, where 0 means no relationship, and 1 means 100% related.
Python and the Sklearn module will compute this value for you, all you have to do is feed it with the x and y arrays:
Example
How well does my data fit in a polynomial regression?
import numpy
from sklearn.metrics import r2_score
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
print(r2_score(y, mymodel(x)))
Note: The result 0.94 shows that there is a very good relationship, and we can use polynomial regression in future predictions.
Predict Future Values
Now we can use the information we have gathered to predict future values.
Example: Let us try to predict the speed of a car that passes the tollbooth at around 17:00:
To do so, we need the same mymodel array from the example above:
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
Example
Predict the speed of a car passing at 17:00:
import numpy
from sklearn.metrics import r2_score
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
speed = mymodel(17)
print(speed)
The example predicted the speed to be 88.87, which we could also read from the diagram:
Bad Fit?
Let us create an example where polynomial regression would not be the best method to predict future values.
Example
These values for the x- and y-axis should result in a very bad fit for polynomial regression:
import numpy
import matplotlib.pyplot as plt
x = [89,43,36,36,95,10,66,34,38,20,26,29,48,64,6,5,36,66,72,40]
y = [21,46,3,35,67,95,53,72,58,10,26,34,90,33,38,20,56,2,47,15]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
myline = numpy.linspace(2, 95, 100)
plt.scatter(x, y)
plt.plot(myline, mymodel(myline))
plt.show()
Result:
And the r-squared value?
Example
You should get a very low r-squared value.
import numpy
from sklearn.metrics import r2_score
x = [89,43,36,36,95,10,66,34,38,20,26,29,48,64,6,5,36,66,72,40]
y = [21,46,3,35,67,95,53,72,58,10,26,34,90,33,38,20,56,2,47,15]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))
print(r2_score(y, mymodel(x)))
The result: 0.00995 indicates a very bad relationship, and tells us that this data set is not suitable for polynomial regression.
'''
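# Runnable sketch of the polynomial-regression steps described in the text above
# (same data, fit, r-squared check and prediction as in the tutorial; nothing new is added):
import numpy
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))   # fit a 3rd-degree polynomial
myline = numpy.linspace(1, 22, 100)              # x values for drawing the fitted line
plt.scatter(x, y)
plt.plot(myline, mymodel(myline))
plt.show()
print(r2_score(y, mymodel(x)))   # roughly 0.94 for this data, as the text notes
print(mymodel(17))               # predicted speed at 17:00, roughly 88.87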
|
JayramMardi/codeMania
|
codeMania-python-matplotlib/tut14_historigram.py
|
<filename>codeMania-python-matplotlib/tut14_historigram.py
# Matplotlib Histograms
'''
Histogram
A histogram is a graph showing frequency distributions.
It is a graph showing the number of observations within each given interval.
Example: Say you ask for the height of 250 people, you might end up with a histogram like this:
'''
import matplotlib.pyplot as plt
import numpy as r
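# draw 250 values from a normal distribution with mean 170 and standard deviation 10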
x=r.random.normal(170,10,250)
plt.hist(x)
plt.show()
p=r.random.normal(170,10,250)
plt.hist(p)
plt.show()
|
JayramMardi/codeMania
|
codeMania-python-matplotlib/tut11_Matplot_Subplot.py
|
<gh_stars>0
import matplotlib.pyplot as plt
import numpy as np
# plot 1
x=np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
y=np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
# plt.subplot(1,2,1)
# plt.plot(x, y)
# plt.title('income')
# # plot 2
# x=np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
# y=np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
# plt.subplot(1,2,2)
# plt.plot(x,y)
# plt.title("grenade")
# plt.suptitle('megatron')
plt.scatter(x,y)
plt.show()
|
JayramMardi/codeMania
|
codeMania-python-AI-Machine-learning/tut6_scatterplot.py
|
<reponame>JayramMardi/codeMania<gh_stars>0
'''
Machine Learning - Scatter Plot
Scatter Plot
A scatter plot is a diagram where each value in the data set is represented by a dot.
The Matplotlib module has a method for drawing scatter plots, it needs two arrays of the same length, one for the values of the x-axis, and one for the values of the y-axis:
'''
import matplotlib.pyplot as plt
x = [5,7,8,7,2,17,2,9,4,11,12,9,6]
y = [99,86,87,88,111,86,103,87,94,78,77,85,86]
plt.scatter(x,y)
plt.show()
'''
Scatter Plot Explained
The x-axis represents ages, and the y-axis represents speeds.
What we can read from the diagram is that the two fastest cars were both 2 years old, and the slowest car was 12 years old.
Note: It seems that the newer the car, the faster it drives, but that could be a coincidence, after all we only registered 13 cars.
'''
'''
Random Data Distributions
In Machine Learning the data sets can contain thousands-, or even millions, of values.
You might not have real world data when you are testing an algorithm, you might have to use randomly generated values.
As we have learned in the previous chapter, the NumPy module can help us with that!
Let us create two arrays that are both filled with 1000 random numbers from a normal data distribution.
The first array will have the mean set to 5.0 with a standard deviation of 1.0.
The second array will have the mean set to 10.0 with a standard deviation of 2.0:
Example
'''
import matplotlib.pyplot as plt
import numpy as r
x=r.random.normal(5.0,1.0,1000)
y=r.random.normal(10.0,2.0,1000)
plt.scatter(x,y)
plt.show()
'''
Scatter Plot Explained
We can see that the dots are concentrated around the value 5 on the x-axis, and 10 on the y-axis.
We can also see that the spread is wider on the y-axis than on the x-axis.
'''
|
JayramMardi/codeMania
|
codeMania-python-matplotlib/tut3.py
|
<filename>codeMania-python-matplotlib/tut3.py
# TYPECASTING CHAPTER
a = "antman"
print(bool(a))
x=48
y=898.889
u="duhdh"
print(int(y))
print(float(x))
|
JayramMardi/codeMania
|
codeMania-python-matplotlib/tut9_LINE_plotting.py
|
'''
Chapter Matplot lining ....
Linestyle
You can use the keyword argument linestyle, or shorter ls, to change the style of the plotted line:
Example
Use a dotted line:
import matplotlib.pyplot as plt
import numpy as np
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, linestyle = 'dotted')
plt.show()
'''
import matplotlib.pyplot as plt
import numpy as np
x=np.array([1,2,3,4,5,6,5,7])
y=np.array([1,2,4,5,5,6,])
plt.plot(y,ms=34,marker='*',mec='#ccffff',mfc="#ccffff",linestyle=':')
plt.show()
'''
Example
Use a dashed line:
plt.plot(ypoints, linestyle = 'dashed')
'''
import matplotlib.pyplot as plt
import numpy as np
p=np.array([3,8,1,10])
plt.plot(p,marker='*',mec='r',mfc='r',ms=20,ls='dashed')
plt.show()
'''
Shorter Syntax
The line style can be written in a shorter syntax:
linestyle can be written as ls.
dotted can be written as :.
dashed can be written as --.
'''
'''
Line Width
You can use the keyword argument linewidth or the shorter lw to change the width of the line.
The value is a floating number, in points:
Example
Plot with a 20.5pt wide line:
import matplotlib.pyplot as plt
import numpy as np
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, linewidth = '20.5')
plt.show()
'''
import matplotlib.pyplot as plt
import numpy as np
p=np.array([3,8,1,10])
plt.plot(p,marker='*',mec='r',mfc='r',ms=20,ls='dashed',color='hotpink',lw='20.5')
plt.show()
'''
Multiple Lines
You can plot as many lines as you like by simply adding more plt.plot() functions:
Example
Draw two lines by specifying a plt.plot() function for each line:
import matplotlib.pyplot as plt
import numpy as np
y1 = np.array([3, 8, 1, 10])
y2 = np.array([6, 2, 7, 11])
plt.plot(y1)
plt.plot(y2)
plt.show()
'''
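# Runnable version of the "Multiple Lines" example quoted above (same values as in the text):
import matplotlib.pyplot as plt
import numpy as np
y1 = np.array([3, 8, 1, 10])
y2 = np.array([6, 2, 7, 11])
plt.plot(y1)
plt.plot(y2)
plt.show()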
|
JayramMardi/codeMania
|
codeMania-python-AI-Machine-learning/tut3_percentiles.py
|
<reponame>JayramMardi/codeMania
'''
Machine Learning - Percentiles
What are Percentiles?
Percentiles are used in statistics to give you a number that describes the value that a given percent of the values are lower than.
Example: Let's say we have an array of the ages of all the people that lives in a street.
ages = [5,31,43,48,50,41,7,11,15,39,80,82,32,2,8,6,25,36,27,61,31]
What is the 75th percentile? The answer is 43, meaning that 75% of the people are 43 or younger.
The NumPy module has a method for finding the specified percentile:
Example
Use the NumPy percentile() method to find the percentiles:
import numpy
ages = [5,31,43,48,50,41,7,11,15,39,80,82,32,2,8,6,25,36,27,61,31]
x = numpy.percentile(ages, 75)
print(x)
'''
import numpy as r
ages=[5,31,43,48,50,41,7,11,15,39,80,82,32,2,8,6,25,36,27,61,31]
p=r.percentile(ages,75)
print(p)
|
rsriram315/eds_covid-19
|
src/data/get_data.py
|
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 21 13:02:59 2020
@author: Sriram
"""
import subprocess
import os
import pandas as pd
import numpy as np
from datetime import datetime
# Check Working directory and set the path
if os.path.split(os.getcwd())[-1]=='notebooks':
os.chdir("../")
# Function to pull the latest data from the Johns Hopkins GitHub page
def get_john_hopkins():
'Use git pull to update the data in the COVID-19 folder. The data is saved as csv files under various names.'
git_pull = subprocess.Popen( "git pull" ,
cwd = os.path.dirname( 'data/raw/COVID-19/' ),
shell = True,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE )
(out, error) = git_pull.communicate()
print("Error : " + str(error))
print("out : " + str(out))
if __name__ == '__main__':
get_john_hopkins()
|
rsriram315/eds_covid-19
|
src/visualization/visualize_SIR.py
|
<reponame>rsriram315/eds_covid-19
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 27 22:59:56 2020
@author: Sriram
"""
import os
import pandas as pd
import numpy as np
import plotly.graph_objects as go
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input,Output
import plotly.io as pio
df_SIR_large=pd.read_csv('data/processed/COVID_JH_flat_table_confirmed.csv',sep=';',parse_dates=[0])
df_SIR_large=df_SIR_large.sort_values('date',ascending=True)
fig=go.Figure()
app=dash.Dash()
app.layout=html.Div([
dcc.Markdown('''
# Applied Datascience on COVID-19 Data
This Dashboard shows the actual confirmed infected people and the simulated
SIR curve.
'''),
# For Country dropdown menu
dcc.Markdown(''' ## Single-Select Country for Visualization'''),
dcc.Dropdown( id='single_select_country',
options=[{'label':each,'value':each} for each in df_SIR_large.columns[1:]],
value='Germany',
multi=False),
#For changing beta ,gamma, t_initial, t_intro_measures,t_hold,t_relax
dcc.Markdown(''' ## Change the values below to manipulate the SIR curve(And press enter):'''),
html.Label(["No measures introduced(days):",
dcc.Input(id='t_initial',
type='number',
value=28,debounce=True)],style={"margin-left": "30px"}),
html.Label(["Measures introduced over(days):",
dcc.Input(id='t_intro_measures',
type='number',
value=14,debounce=True)],style={"margin-left": "30px"}),
html.Label(["Introduced measures hold time(days):",
dcc.Input(id='t_hold',
type='number',
value=21,debounce=True)],style={"margin-left": "30px"}),
html.Br(),
html.Br(),
html.Label(["Introduced measures relaxed(days):",
dcc.Input(id='t_relax',
type='number',
value=21,debounce=True)],style={"margin-left": "30px"}),
html.Label(["Beta max:",
dcc.Input(id='beta_max',
type='number',
value=0.4,debounce=True)],style={"margin-left": "30px"}),
html.Label(["Beta min:",
dcc.Input(id='beta_min',
type='number',
value=0.11,debounce=True)],style={"margin-left": "30px"}),
html.Label(["Gamma:",
dcc.Input(id='gamma',
type='number',
value=0.1,debounce=True)],style={"margin-left": "30px"}),
html.Br(),
html.Br(),
# For plotting graph
dcc.Graph(figure=fig,
id='SIR_curve',
animate=False,)
])
@app.callback(
Output('SIR_curve', 'figure'),
[Input('single_select_country', 'value'),
Input('t_initial','value'),
Input('t_intro_measures','value'),
Input('t_hold','value'),
Input('t_relax','value'),
Input('beta_max','value'),
Input('beta_min','value'),
Input('gamma','value')])
def update_figure(country,initial_time,intro_measures,hold_time,relax_time,max_beta,min_beta,gamma_max):
ydata=df_SIR_large[country][df_SIR_large[country]>=30]
xdata=np.arange(len(ydata))
N0=5000000
I0=30
S0=N0-I0
R0=0
gamma=gamma_max
SIR=np.array([S0,I0,R0])
t_initial=initial_time
t_intro_measures=intro_measures
t_hold=hold_time
t_relax=relax_time
beta_max=max_beta
beta_min=min_beta
propagation_rates=pd.DataFrame(columns=['susceptible','infected','recovered'])
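# build pd_beta, the day-by-day infection-rate (beta) schedule: beta_max while no measures are in place,
# a linear ramp down to beta_min while measures are introduced, a hold at beta_min, then a ramp back up during relaxation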
pd_beta=np.concatenate((np.array(t_initial*[beta_max]),
np.linspace(beta_max,beta_min,t_intro_measures),
np.array(t_hold*[beta_min]),
np.linspace(beta_min,beta_max,t_relax),
))
def SIR_model(SIR,beta,gamma):
'''SIR model for simulating the spread.
S: susceptible population
I: infected population
R: recovered population
S + I + R = N (remains constant)
dS + dI + dR = 0 (the model has to satisfy this condition at all times)
'''
S,I,R=SIR
dS_dt=-beta*S*I/N0
dI_dt=beta*S*I/N0-gamma*I
dR_dt=gamma*I
return ([dS_dt,dI_dt,dR_dt])
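# integrate the SIR equations with a simple forward-Euler step, one day per beta value in the schedule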
for each_beta in pd_beta:
new_delta_vec=SIR_model(SIR,each_beta,gamma)
SIR=SIR+new_delta_vec
propagation_rates=propagation_rates.append({'susceptible':SIR[0],'infected':SIR[1],'recovered':SIR[2]},ignore_index=True)
fig=go.Figure()
fig.add_trace(go.Bar(x=xdata,
y=ydata,
marker_color='crimson',
name='Confirmed Cases'
))
fig.add_trace(go.Scatter(x=xdata,
y=propagation_rates.infected,
mode='lines',
marker_color='blue',
name='Simulated curve'))
fig.update_layout(shapes=[
dict(type='rect',xref='x',yref='paper',x0=0,y0=0,x1=t_initial,y1=1,fillcolor="LightSalmon",opacity=0.4,layer="below",line_width=0,),
dict(type='rect',xref='x',yref='paper',x0=t_initial,y0=0,x1=t_initial+t_intro_measures,y1=1,fillcolor="LightSalmon",opacity=0.5,layer="below",line_width=0,),
dict(type='rect',xref='x',yref='paper',x0=t_initial+t_intro_measures,y0=0,x1=t_initial+t_intro_measures+t_hold,y1=1,fillcolor="LightSalmon",opacity=0.6,layer='below',line_width=0,),
dict(type='rect',xref='x',yref='paper',x0=t_initial+t_intro_measures+t_hold,y0=0,x1=t_initial+t_intro_measures+t_hold+t_relax,y1=1,fillcolor='LightSalmon',opacity=0.7,layer='below',line_width=0,)
],
title='SIR Simulation Scenario',
title_x=0.5,
xaxis=dict(title='Time(days)',
titlefont_size=16),
yaxis=dict(title='Confirmed cases[JH Data, log scale] ',
type='log',
titlefont_size=16,
),
width=1280,
height=600,
template='plotly_dark'
)
return fig
if __name__ == '__main__':
app.run_server(debug=True,use_reloader=False)
|
rsriram315/eds_covid-19
|
src/data/process_JH_data.py
|
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 21 18:59:53 2020
@author: Sriram
"""
import pandas as pd
import numpy as np
from datetime import datetime
#defining a function to process raw JH data into a relational data structure
def store_relational_JH_data():
"process raw JH data into a relational data structure"
data_path='data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw=pd.read_csv(data_path)
pd_data_base=pd_raw.rename(columns={'Country/Region':'country','Province/State':'state'})
pd_data_base['state']=pd_data_base['state'].fillna('no')
pd_data_base=pd_data_base.drop(['Lat','Long'],axis=1)
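# transpose so that dates become rows, then stack the (state, country) column levels into the index
# to get one row per (date, state, country) combination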
pd_relational_model=pd_data_base.set_index(['state','country']) \
.T \
.stack(level=[0,1]) \
.reset_index() \
.rename(columns={'level_0':'date',
0:'confirmed'},
)
pd_relational_model['date']=pd_relational_model.date.astype('datetime64[ns]')
pd_relational_model.to_csv('data/processed/COVID_relational_confirmed.csv',sep=';',index=False)
print(' Number of rows stored: '+str(pd_relational_model.shape[0]))
print(' Latest date is: '+str(max(pd_relational_model.date)))
#running the function
if __name__ == '__main__':
store_relational_JH_data()
|
rsriram315/eds_covid-19
|
src/features/build_features.py
|
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 22 10:32:53 2020
@author: Sriram
"""
import numpy as np
from sklearn import linear_model
import pandas as pd
from scipy import signal
# we define the linear regression object
reg=linear_model.LinearRegression(fit_intercept=True)
def get_doubling_time_via_regression(in_array):
" Use linear regression to find the doubling rate"
y=np.array(in_array)
X=np.arange(-1,2).reshape(-1,1)
# for safety we are asserting that the length of the input array is 3
assert len(in_array)==3
reg.fit(X,y)
intercept=reg.intercept_
slope=reg.coef_
return intercept/slope
def savgol_filter(df_input,column='confirmed',window=5):
df_result=df_input
degree=1
# we fill the missing entries with zero
filter_in=df_input[column].fillna(0)
result=signal.savgol_filter(np.array(filter_in),
window,
degree)
df_result[str(column+'_filtered')]=result
return df_result
def rolling_reg(df_input,col='confirmed'):
"Input is dataframe"
"return value is a single series of doubling rates"
days_back=3
result=df_input[col].rolling(window=days_back,min_periods=days_back).apply(get_doubling_time_via_regression,raw=False)
return result
def calc_filtered_data(df_input,filter_on='confirmed'):
"Apply SavGol filter on the dataset and return the merged dataset"
must_contain=set(['state','country',filter_on])
assert must_contain.issubset(set(df_input.columns)),'Error in calc_filtered_data not all columns in data Frame'
df_output=df_input.copy()
pd_filtered_result=df_output[['state','country',filter_on]].groupby(['state','country']).apply(savgol_filter)#.reset_index()
df_output=pd.merge(df_output,pd_filtered_result[[str(filter_on+'_filtered')]],left_index=True,right_index=True,how='left')
return df_output.copy()
def calc_doubling_rate(df_input,filter_on='confirmed'):
"Calculate doubling rate and return the dataframe"
must_contain=set(['state','country',filter_on])
assert must_contain.issubset(set(df_input.columns)),'Error in calc_filtered_data not all columns in data Frame'
pd_DR_result=df_input[['state','country',filter_on]].groupby(['state','country']).apply(rolling_reg,filter_on).reset_index()
pd_DR_result=pd_DR_result.rename(columns={filter_on:filter_on+'_DR','level_2':'index'})
df_output=pd.merge(df_input,pd_DR_result[['index',str(filter_on+'_DR')]],left_index=True,right_on=['index'],how='left')
df_output=df_output.drop(columns=['index'])
return df_output
if __name__=='__main__':
#test_data=np.array([2,4,6])
#doubling_time=get_doubling_time_via_regression(test_data)
#print('Test slope is :'+str(doubling_time))
# We read the data from file
pd_JH_data=pd.read_csv('data/processed/COVID_relational_confirmed.csv',sep=';',parse_dates=[0])
pd_JH_data=pd_JH_data.sort_values('date',ascending=True).reset_index(drop=True).copy()
# We process the data calculating filtered data and doubling rate
pd_JH_result_large=calc_filtered_data(pd_JH_data)
pd_JH_result_large=calc_doubling_rate(pd_JH_result_large)
pd_JH_result_large=calc_doubling_rate(pd_JH_result_large,filter_on='confirmed_filtered')
# we apply a threshold on confirmed column since if values are small doubling rate goes to infinity
mask=pd_JH_result_large['confirmed']>100
pd_JH_result_large['confirmed_filtered_DR']=pd_JH_result_large['confirmed_filtered_DR'].where(mask,other=np.NaN)
pd_JH_result_large.to_csv('data/processed/COVID_final_set.csv',sep=';',index=False)
print(pd_JH_result_large.head())
|
rsriram315/eds_covid-19
|
src/visualization/visualize.py
|
# -*- coding: utf-8 -*-
"""
Created on Mon Aug 24 17:00:40 2020
@author: Sriram
"""
import os
import pandas as pd
import numpy as np
import plotly.graph_objects as go
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input,Output
df_input_large=pd.read_csv('data/processed/COVID_final_set.csv',sep=';')
fig=go.Figure()
app=dash.Dash()
app.layout=html.Div([
dcc.Markdown('''
# Applied Datascience on COVID-19 Data
Goal of the project is to create a responsive Dashboard
with data from many countries in an automated way through:
data gathering , data transformation,
filtering and machine learning to approximate doubling time.
'''),
# For Country dropdown menu
dcc.Markdown(''' ## Multi-Select Country for Visualization'''),
dcc.Dropdown( id='country_drop_down',
options=[{'label':each,'value':each} for each in df_input_large['country'].unique()],
value=['Germany','India','US'],
multi=True),
# For the doubling-rate / confirmed-cases dropdown menu
dcc.Markdown(''' ## Select Timeline of confirmed COVID-19 cases or approximated doubling time'''),
dcc.Dropdown( id='doubling_time',
options=[
{'label':'Timeline Confirmed','value':'confirmed'},
{'label':'Timeline Confirmed Filtered','value':'confirmed_filtered'},
{'label':'Timeline Doubling Rate','value':'confirmed_DR'},
{'label':'Timeline Doubling Rate Filtered','value':'confirmed_filtered_DR'},
],
value='confirmed',
multi=False),
dcc.Graph(figure=fig,id='main_window_slope')
])
@app.callback(
Output('main_window_slope', 'figure'),
[Input('country_drop_down', 'value'),
Input('doubling_time', 'value')])
def update_figure(country_list,show_doubling):
if 'DR' in show_doubling:
my_yaxis={'type':"log",
'title':'Approximated doubling rate over 3 days'
}
else:
my_yaxis={'type':"log",
'title':'Confirmed infected people (source: johns hopkins csse, log-scale)'
}
#Define the traces for the countries
traces = []
for each in country_list:
df_plot=df_input_large[df_input_large['country']==each]
if show_doubling=='confirmed_filtered_DR':
df_plot=df_plot[['state','country','confirmed','confirmed_filtered','confirmed_DR','confirmed_filtered_DR','date']].groupby(['country','date']).agg(np.mean).reset_index()
else:
df_plot=df_plot[['state','country','confirmed','confirmed_filtered','confirmed_DR','confirmed_filtered_DR','date']].groupby(['country','date']).agg(np.sum).reset_index()
traces.append(dict(x=df_plot.date,
y=df_plot[show_doubling],
mode='markers+lines',
opacity=0.9,
name=each
)
)
return {'data':traces,
'layout':dict(
width=1280,
height=600,
xaxis={'title':'Timeline',
'tickangle':-45,
'nticks':20,
'tickfont':dict(size=14,color="#7f7f7f")},
yaxis=my_yaxis)
}
if __name__ == '__main__':
app.run_server(debug=True, use_reloader=False)
|
rsriram315/eds_covid-19
|
src/data/process_SIR_JH_data.py
|
<gh_stars>0
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 29 12:22:53 2020
@author: Sriram
"""
import pandas as pd
import requests
import subprocess
import os
import numpy as np
from datetime import datetime
def store_flat_table_JH_data():
"process raw JH data into a flat table data structure"
datapath='data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
JH_data_raw=pd.read_csv(datapath)
time_index=JH_data_raw.columns[4:]
pd_flat_table=pd.DataFrame({'date':time_index})
country_list=JH_data_raw['Country/Region'].unique()
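# for each country, sum the per-province rows (columns 4 onwards are the dates) into a single time series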
for country in country_list:
pd_flat_table[country]=np.array(JH_data_raw[JH_data_raw['Country/Region']==country].iloc[:,4::].sum(axis=0))
time_index=[datetime.strptime(each,"%m/%d/%y") for each in pd_flat_table.date]
pd_flat_table['date']=time_index
pd_flat_table.to_csv('data/processed/COVID_JH_flat_table_confirmed.csv',sep=';',index=False )
print('Latest date is: '+str(max(pd_flat_table.date)))
print(' Number of rows stored: '+str(pd_flat_table.shape[0]))
#running the function
if __name__ == '__main__':
store_flat_table_JH_data()
|
foxmask/baeuda
|
baeuda/datasource/joplin.py
|
<reponame>foxmask/baeuda<filename>baeuda/datasource/joplin.py
from joplin_api import JoplinApi
import pypandoc
import settings
class Joplin:
def __init__(self):
self.joplin = JoplinApi(settings.TOKEN)
def subfolder(self, folder):
"""
:param folder: subfolder
:return: the id of the child folder whose title matches settings.FOLDER
"""
for line in folder['children']:
if settings.FOLDER == line['title']:
return line['id']
async def folder(self):
"""
:return: the folder id
"""
res = await self.joplin.get_folders()
for line in res.json():
if settings.FOLDER == line['title']:
return line['id']
if 'children' in line:
folder_id = self.subfolder(line)
if folder_id is not None:
return folder_id
async def my_folder(self):
"""
:return: return the folder name used in Joplin
"""
return settings.FOLDER
async def data(self):
"""
:return: list of notes
"""
data = []
folder_id = await self.folder()
notes = await self.joplin.get_folders_notes(folder_id)
for note in notes.json():
if settings.FILTER in note['title'] and note['is_todo'] == 0:
content = pypandoc.convert_text(source=note['body'],
to='html',
format=settings.PYPANDOC_MARKDOWN)
data.append({'title': note['title'], 'body': content})
return data
|
foxmask/baeuda
|
baeuda/__init__.py
|
# coding: utf-8
"""
This module allows us to manage Anki cards
created from the Joplin editor :P
"""
import baeuda.settings
from baeuda.datasource.joplin import Joplin
from baeuda.datasource.mdfile import MdFile
__version__ = "0.3.0"
|
foxmask/baeuda
|
baeuda/datasource/mdfile.py
|
<gh_stars>10-100
from pathlib import Path
import os
import pypandoc
import settings
class MdFile:
def __init__(self):
pass
async def my_folder(self):
"""
:return: return the folder name used by the filesystem where the md files are
"""
return os.path.basename(os.path.normpath(settings.FOLDER))
async def data(self):
"""
:return: list of notes
"""
data = []
p = Path(settings.FOLDER).glob('**/*.md')
for full_file in p:
file = os.path.basename(str(full_file))
with open(str(full_file)) as md_file:
content = md_file.read()
content = pypandoc.convert_text(source=content,
to='html',
format=settings.PYPANDOC_MARKDOWN)
data.append({'title': file, 'body': content})
return data
|
foxmask/baeuda
|
baeuda/settings.py
|
JOPLIN_WEBCLIPPER = 41184
TOKEN = '' # provide the JOPLIN TOKEN you have on the webclipper configuration page
FOLDER = 'Kimchi!' # the folder from which baeuda reads the notes to create in Anki
# FOLDER = '/home/foxmask/tmp/' # the folder from which baeuda reads the notes to create in Anki
PYPANDOC_MARKDOWN = 'markdown_github'
FILTER = ''
ANKI_URL = 'http://localhost:8765/' # url provided by AnkiConnect https://ankiweb.net/shared/info/2055492159
ANKI_MODEL = 'Korean (foxmask)' # fields are front/back/romanisation
ANKI_FIELD_COUNT = 3 # number of columns to grab from a joplin table
ANKI_FIELDS = ['Coréen', 'Romanisation', 'Français'] # put the name of the fields you want to use with the "ANKI_MODEL"
ONE_DECK_PER_NOTE = False # will create one deck
DATASOURCE = 'Joplin' # Joplin or MdFile
|
imagetyperz-api/imagetyperz-api-python2
|
imagetyperzapi2/imagetyperzapi.py
|
# Imagetyperz captcha API
# -----------------------
# requests lib
try:
from requests import session
except ImportError:
raise Exception('requests package not installed, try with: \'pip install requests\'')
import os, json
from base64 import b64encode
from urllib import urlencode
from json import loads as json_loads
# endpoints
# -------------------------------------------------------------------------------------------
CAPTCHA_ENDPOINT = 'http://captchatypers.com/Forms/UploadFileAndGetTextNEW.ashx'
RECAPTCHA_SUBMIT_ENDPOINT = 'http://captchatypers.com/captchaapi/UploadRecaptchaV1.ashx'
RECAPTCHA_RETRIEVE_ENDPOINT = 'http://captchatypers.com/captchaapi/GetRecaptchaText.ashx'
BALANCE_ENDPOINT = 'http://captchatypers.com/Forms/RequestBalance.ashx'
BAD_IMAGE_ENDPOINT = 'http://captchatypers.com/Forms/SetBadImage.ashx'
GEETEST_SUBMIT_ENDPOINT = 'http://captchatypers.com/captchaapi/UploadGeeTest.ashx'
GEETEST_RETRIEVE_ENDPOINT = 'http://captchatypers.com/captchaapi/getrecaptchatext.ashx'
HCAPTCHA_ENDPOINT = 'http://captchatypers.com/captchaapi/UploadHCaptchaUser.ashx'
CAPY_ENDPOINT = 'http://captchatypers.com/captchaapi/UploadCapyCaptchaUser.ashx'
TIKTOK_ENDPOINT = 'http://captchatypers.com/captchaapi/UploadTikTokCaptchaUser.ashx'
RETRIEVE_JSON_ENDPOINT = 'http://captchatypers.com/captchaapi/GetCaptchaResponseJson.ashx'
CAPTCHA_ENDPOINT_CONTENT_TOKEN = 'http://captchatypers.com/Forms/UploadFileAndGetTextNEWToken.ashx'
CAPTCHA_ENDPOINT_URL_TOKEN = 'http://captchatypers.com/Forms/FileUploadAndGetTextCaptchaURLToken.ashx'
RECAPTCHA_SUBMIT_ENDPOINT_TOKEN = 'http://captchatypers.com/captchaapi/UploadRecaptchaToken.ashx'
RECAPTCHA_RETRIEVE_ENDPOINT_TOKEN = 'http://captchatypers.com/captchaapi/GetRecaptchaTextToken.ashx'
BALANCE_ENDPOINT_TOKEN = 'http://captchatypers.com/Forms/RequestBalanceToken.ashx'
BAD_IMAGE_ENDPOINT_TOKEN = 'http://captchatypers.com/Forms/SetBadImageToken.ashx'
GEETEST_SUBMIT_ENDPOINT_TOKEN = 'http://captchatypers.com/captchaapi/UploadGeeTestToken.ashx'
# user agent used in requests
# ---------------------------
USER_AGENT = 'pythonAPI1.0'
# API class
# -----------------------------------------
class ImageTyperzAPI:
def __init__(self, access_token, affiliate_id = 0, timeout = 120):
self._access_token = access_token
self._affiliate_id = affiliate_id
# empty by default
self._username = ''
self._password = ''
self._timeout = timeout
self._session = session() # init a new session
self._headers = { # use this user agent
'User-Agent' : USER_AGENT
}
# set username and password
def set_user_password(self, username, password):
self._username = username
self._password = password
# solve normal captcha
def submit_image(self, image_path, is_case_sensitive = False, is_math = False, is_phrase = False, digits_only = False, letters_only = False, min_length = 0, max_length = 0):
data = {}
# if username is given, do it with user otherwise token
if self._username:
data['username'] = self._username
data['password'] = self._password
url = CAPTCHA_ENDPOINT
if not os.path.exists(image_path): raise Exception('captcha image does not exist: {}'.format(image_path))
# read image/captcha
with open(image_path, 'rb') as f:
image_data = b64encode(f.read())
else:
if image_path.lower().startswith('http'):
url = CAPTCHA_ENDPOINT_URL_TOKEN
image_data = image_path
else:
url = CAPTCHA_ENDPOINT_CONTENT_TOKEN
# check if image/file exists
if not os.path.exists(image_path): raise Exception('captcha image does not exist: {}'.format(image_path))
# read image/captcha
with open(image_path, 'rb') as f:
image_data = b64encode(f.read())
# set token
data['token'] = self._access_token
# init dict params (request params)
data['action'] = 'UPLOADCAPTCHA'
data['iscase'] = 'true' if is_case_sensitive else None
data['isphrase'] = 'true' if is_phrase else None
data['ismath'] = 'true' if is_math else None
# digits, letters, or both
if digits_only: data['alphanumeric'] = '1'
elif letters_only: data['alphanumeric'] = '2'
# min, max length
if min_length != 0: data['minlength'] = min_length
if max_length != 0: data['maxlength'] = max_length
data['file'] = image_data
if self._affiliate_id: data['affiliateid'] = self._affiliate_id
# make request with all data
response = self._session.post(url, data=data,
headers=self._headers,
timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
return response_text.split('|')[0]
# submit recaptcha to system
def submit_recaptcha(self, d):
page_url = d['page_url']
sitekey = d['sitekey']
# check for proxy
proxy = None
if 'proxy' in d: proxy = d['proxy'] # if proxy, add it
# check if page_url and sitekey are != None
if not page_url: raise Exception('provide a valid page_url')
if not sitekey: raise Exception('provide a valid sitekey')
data = {} # create data obj here, we might need it for proxy
if self._username:
data['username'] = self._username
data['password'] = self._password
url = RECAPTCHA_SUBMIT_ENDPOINT
else:
data['token'] = self._access_token
url = RECAPTCHA_SUBMIT_ENDPOINT_TOKEN
# check proxy and set dict (request params) accordingly
if proxy: # if proxy is given, check proxytype
# we have both proxy and type at this point
data['proxy'] = proxy
data['proxytype'] = 'HTTP'
# init dict params (request params)
data['action'] = 'UPLOADCAPTCHA'
data['pageurl'] = page_url
data['googlekey'] = sitekey
if self._affiliate_id:
data['affiliateid'] = self._affiliate_id
# user agent
if 'user_agent' in d: data['useragent'] = d['user_agent']
# v3
data['recaptchatype'] = 0
if 'type' in d: data['recaptchatype'] = d['type']
if 'v3_action' in d: data['captchaaction'] = d['v3_action']
if 'v3_min_score' in d: data['score'] = d['v3_min_score']
if 'data-s' in d: data['data-s'] = d['data-s']
# make request with all data
response = self._session.post(url, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
# -------------------------------------------------------------
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
return response_text
# submit geetest captcha
def submit_geetest(self, d):
# check if page_url and sitekey are != None
if 'domain' not in d: raise Exception('domain is missing')
if 'challenge' not in d: raise Exception('challenge is missing')
if 'gt' not in d: raise Exception('gt is missing')
d['action'] = 'UPLOADCAPTCHA'
# credentials and url
if self._username:
d['username'] = self._username
d['password'] = self._password
url = GEETEST_SUBMIT_ENDPOINT
else:
d['token'] = self._access_token
url = GEETEST_SUBMIT_ENDPOINT_TOKEN
# affiliate ID
if self._affiliate_id: d['affiliateid'] = self._affiliate_id
url = '{}?{}'.format(url, urlencode(d))
# make request with all data
response = self._session.post(url, data=d,
headers=self._headers, timeout=self._timeout)
response_text = '{}'.format(response.text)
# check if we got an error
# -------------------------------------------------------------
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
return response_text
# submit hcaptcha
def submit_hcaptcha(self, d):
page_url = d['page_url']
sitekey = d['sitekey']
# check for proxy
proxy = None
if 'proxy' in d: proxy = d['proxy'] # if proxy, add it
# check if page_url and sitekey are != None
if not page_url: raise Exception('provide a valid page_url')
if not sitekey: raise Exception('provide a valid sitekey')
data = {} # create data obj here, we might need it for proxy
if self._username:
data['username'] = self._username
data['password'] = self._password
else:
data['token'] = self._access_token
# check proxy and set dict (request params) accordingly
if proxy: # if proxy is given, check proxytype
# we have both proxy and type at this point
data['proxy'] = proxy
data['proxytype'] = 'HTTP'
# init dict params (request params)
data['action'] = 'UPLOADCAPTCHA'
data['pageurl'] = page_url
data['sitekey'] = sitekey # just to make sure it's not sitekey that's required as input
data['captchatype'] = 11
if self._affiliate_id:
data['affiliateid'] = self._affiliate_id
# user agent
if 'user_agent' in d: data['useragent'] = d['user_agent']
# make request with all data
response = self._session.post(HCAPTCHA_ENDPOINT, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
# -------------------------------------------------------------
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
else:
js = json_loads(response.text)
response_text = js[0]['CaptchaId']
return response_text
# submit capy
def submit_capy(self, d):
page_url = d['page_url']
sitekey = d['sitekey']
# check for proxy
proxy = None
if 'proxy' in d: proxy = d['proxy'] # if proxy, add it
# check if page_url and sitekey are != None
if not page_url: raise Exception('provide a valid page_url')
if not sitekey: raise Exception('provide a valid sitekey')
data = {} # create data obj here, we might need it for proxy
if self._username:
data['username'] = self._username
data['password'] = self._password
else:
data['token'] = self._access_token
# check proxy and set dict (request params) accordingly
if proxy: # if proxy is given, check proxytype
# we have both proxy and type at this point
data['proxy'] = proxy
data['proxytype'] = 'HTTP'
# init dict params (request params)
data['action'] = 'UPLOADCAPTCHA'
data['pageurl'] = page_url
data['sitekey'] = sitekey
data['captchatype'] = 12
if self._affiliate_id:
data['affiliateid'] = self._affiliate_id
# user agent
if 'user_agent' in d: data['useragent'] = d['user_agent']
# make request with all data
response = self._session.post(CAPY_ENDPOINT, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
# -------------------------------------------------------------
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
else:
js = json_loads(response.text)
response_text = js[0]['CaptchaId']
return response_text
# submit tiktok
def submit_tiktok(self, d):
page_url = d['page_url']
cookie_input = ''
if 'cookie_input' in d: cookie_input = d['cookie_input']
# check for proxy
proxy = None
if 'proxy' in d: proxy = d['proxy'] # if proxy, add it
# check if page_url and sitekey are != None
if not page_url: raise Exception('provide a valid page_url')
data = {} # create data obj here, we might need it for proxy
if self._username:
data['username'] = self._username
data['password'] = self._password
else:
data['token'] = self._access_token
# check proxy and set dict (request params) accordingly
if proxy: # if proxy is given, check proxytype
# we have both proxy and type at this point
data['proxy'] = proxy
data['proxytype'] = 'HTTP'
# init dict params (request params)
data['action'] = 'UPLOADCAPTCHA'
data['pageurl'] = page_url
data['cookie_input'] = cookie_input
data['captchatype'] = 10
if self._affiliate_id:
data['affiliateid'] = self._affiliate_id
# user agent
if 'user_agent' in d: data['useragent'] = d['user_agent']
# make request with all data
response = self._session.post(TIKTOK_ENDPOINT, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
# -------------------------------------------------------------
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise Ex
else:
js = json_loads(response.text)
return js[0]['CaptchaId']
# use to retrieve captcha response
def retrieve_response(self, captcha_id):
# create params dict (multipart)
data = {
'action': 'GETTEXT',
'captchaid': captcha_id
}
if self._username:
data['username'] = self._username
data['password'] = self._password
else:
data['token'] = self._access_token
# make request with all data
response = self._session.post(RETRIEVE_JSON_ENDPOINT, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text) # get text from response
# check if we got an error
if 'ERROR:' in response_text and len(response_text.split('|')) != 2:
raise Exception(response_text.split('ERROR:')[1].strip().split('"')[0])
# load json
try:
js = json_loads(response_text)
if not js[0]: return None
status = js[0]['Status']
if status.lower() == 'pending': return None
except:
raise Exception('invalid JSON response from server: {}.'
' Make sure your input is correct and retry'.format(response_text))
return json_loads(response_text)[0] # return response
# get account balance
def account_balance(self):
data = {}
if self._username:
url = BALANCE_ENDPOINT
data["username"] = self._username
data["password"] = self._password
else:
url = BALANCE_ENDPOINT_TOKEN
data["token"] = self._access_token
data["action"] = "REQUESTBALANCE"
data["submit"] = "Submit"
response = self._session.post(url, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text)
# check if we have an error
if 'ERROR:' in response_text:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise
return '${}'.format(response_text) # we don't, return balance
# set captcha bad, if given id, otherwise set the last one
def set_captcha_bad(self, captcha_id):
data = {
"action": "SETBADIMAGE",
"imageid": captcha_id,
"submit": "Submissssst"
}
if self._username:
data["username"] = self._username
data["password"] = <PASSWORD>
url = BAD_IMAGE_ENDPOINT
else:
data["token"] = self._access_token
url = BAD_IMAGE_ENDPOINT_TOKEN
# make request
response = self._session.post(url, data=data,
headers=self._headers, timeout=self._timeout)
response_text = str(response.text)
# check if we have an error
if 'ERROR:' in response_text:
raise Exception(response_text.split('ERROR:')[1].strip()) # raise
return response_text # no error, return the server response
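# A minimal usage sketch (not part of the original module); the access token, page URL and sitekey
# are placeholders and the 5-second polling interval is an arbitrary choice:
if __name__ == '__main__':
    import time
    ita = ImageTyperzAPI('YOUR_ACCESS_TOKEN')
    print(ita.account_balance())        # e.g. '$8.84'
    captcha_id = ita.submit_recaptcha({'page_url': 'http://example.com', 'sitekey': 'SITE_KEY'})
    result = ita.retrieve_response(captcha_id)
    while result is None:               # retrieve_response returns None while the captcha is still pending
        time.sleep(5)
        result = ita.retrieve_response(captcha_id)
    print(result)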
|
sdlouhy/TensorFlow-ENet
|
test_enet.py
|
<gh_stars>0
import tensorflow as tf
from tensorflow.contrib.framework.python.ops.variables import get_or_create_global_step
from tensorflow.python.platform import tf_logging as logging
from enet import ENet, ENet_arg_scope
from preprocessing import preprocess
import os
import time
import numpy as np
import matplotlib.pyplot as plt
slim = tf.contrib.slim
#============INPUT ARGUMENTS================
flags = tf.app.flags
#Directories
flags.DEFINE_string('dataset_dir', './dataset', 'The dataset directory to find the train, validation and test images.')
flags.DEFINE_string('checkpoint_dir', './log/original', 'The checkpoint directory from which to restore your model.')
flags.DEFINE_string('logdir', './log/original_test', 'The log directory for event files created during test evaluation.')
flags.DEFINE_boolean('save_images', True, 'If True, saves 10 images to your logdir for visualization.')
#Evaluation information
flags.DEFINE_integer('num_classes', 12, 'The number of classes to predict.')
flags.DEFINE_integer('batch_size', 10, 'The batch_size for evaluation.')
flags.DEFINE_integer('image_height', 360, "The input height of the images.")
flags.DEFINE_integer('image_width', 480, "The input width of the images.")
flags.DEFINE_integer('num_epochs', 10, "The number of epochs to evaluate your model.")
#Architectural changes
flags.DEFINE_integer('num_initial_blocks', 1, 'The number of initial blocks to use in ENet.')
flags.DEFINE_integer('stage_two_repeat', 2, 'The number of times to repeat stage two.')
flags.DEFINE_boolean('skip_connections', False, 'If True, perform skip connections from encoder to decoder.')
FLAGS = flags.FLAGS
#==========NAME HANDLING FOR CONVENIENCE==============
num_classes = FLAGS.num_classes
batch_size = FLAGS.batch_size
image_height = FLAGS.image_height
image_width = FLAGS.image_width
num_epochs = FLAGS.num_epochs
save_images = FLAGS.save_images
#Architectural changes
num_initial_blocks = FLAGS.num_initial_blocks
stage_two_repeat = FLAGS.stage_two_repeat
skip_connections = FLAGS.skip_connections
dataset_dir = FLAGS.dataset_dir
checkpoint_dir = FLAGS.checkpoint_dir
photo_dir = os.path.join(FLAGS.logdir, "images")
logdir = FLAGS.logdir
#===============PREPARATION FOR TRAINING==================
#Checkpoint directories
checkpoint_file = tf.train.latest_checkpoint(checkpoint_dir)
#Dataset directories
image_files = sorted([os.path.join(dataset_dir, 'test', file) for file in os.listdir(dataset_dir + "/test") if file.endswith('.png')])
annotation_files = sorted([os.path.join(dataset_dir, "testannot", file) for file in os.listdir(dataset_dir + "/testannot") if file.endswith('.png')])
num_batches_per_epoch = len(image_files) / batch_size
num_steps_per_epoch = num_batches_per_epoch
#=============EVALUATION=================
def run():
with tf.Graph().as_default() as graph:
tf.logging.set_verbosity(tf.logging.INFO)
#===================TEST BRANCH=======================
#Load the files into one input queue
images = tf.convert_to_tensor(image_files)
annotations = tf.convert_to_tensor(annotation_files)
input_queue = tf.train.slice_input_producer([images, annotations])
#Decode the image and annotation raw content
image = tf.read_file(input_queue[0])
image = tf.image.decode_image(image, channels=3)
annotation = tf.read_file(input_queue[1])
annotation = tf.image.decode_image(annotation)
#preprocess and batch up the image and annotation
preprocessed_image, preprocessed_annotation = preprocess(image, annotation, image_height, image_width)
images, annotations = tf.train.batch([preprocessed_image, preprocessed_annotation], batch_size=batch_size, allow_smaller_final_batch=True)
#Create the model inference
with slim.arg_scope(ENet_arg_scope()):
logits, probabilities = ENet(images,
num_classes,
batch_size=batch_size,
is_training=True,
reuse=None,
num_initial_blocks=num_initial_blocks,
stage_two_repeat=stage_two_repeat,
skip_connections=skip_connections)
# Set up the variables to restore and restoring function from a saver.
exclude = []
variables_to_restore = slim.get_variables_to_restore(exclude=exclude)
saver = tf.train.Saver(variables_to_restore)
def restore_fn(sess):
return saver.restore(sess, checkpoint_file)
#perform one-hot-encoding on the ground truth annotation to get same shape as the logits
annotations = tf.reshape(annotations, shape=[batch_size, image_height, image_width])
annotations_ohe = tf.one_hot(annotations, num_classes, axis=-1)
annotations = tf.cast(annotations, tf.int64)
#State the metrics that you want to predict. We get predictions that are not one-hot encoded.
predictions = tf.argmax(probabilities, -1)
accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(predictions, annotations)
mean_IOU, mean_IOU_update = tf.contrib.metrics.streaming_mean_iou(predictions=predictions, labels=annotations, num_classes=num_classes)
per_class_accuracy, per_class_accuracy_update = tf.metrics.mean_per_class_accuracy(labels=annotations, predictions=predictions, num_classes=num_classes)
metrics_op = tf.group(accuracy_update, mean_IOU_update, per_class_accuracy_update)
#Create the global step and an increment op for monitoring
global_step = get_or_create_global_step()
global_step_op = tf.assign(global_step, global_step + 1) #no apply_gradient method so manually increasing the global_step
#Create an evaluation step function
def eval_step(sess, metrics_op, global_step):
'''
Simply takes in a session, runs the metrics op and some logging information.
'''
start_time = time.time()
_, global_step_count, accuracy_value, mean_IOU_value, per_class_accuracy_value = sess.run([metrics_op, global_step_op, accuracy, mean_IOU, per_class_accuracy])
time_elapsed = time.time() - start_time
#Log some information
logging.info('Global Step %s: Streaming Accuracy: %.4f Streaming Mean IOU: %.4f Per-class Accuracy: %.4f (%.2f sec/step)',
global_step_count, accuracy_value, mean_IOU_value, per_class_accuracy_value, time_elapsed)
return accuracy_value, mean_IOU_value, per_class_accuracy_value
#Create your summaries
tf.summary.scalar('Monitor/test_accuracy', accuracy)
tf.summary.scalar('Monitor/test_mean_per_class_accuracy', per_class_accuracy)
tf.summary.scalar('Monitor/test_mean_IOU', mean_IOU)
my_summary_op = tf.summary.merge_all()
#Define your supervisor for running a managed session. Do not run the summary_op automatically or else it will consume too much memory
sv = tf.train.Supervisor(logdir = logdir, summary_op = None, init_fn=restore_fn)
#Run the managed session
with sv.managed_session() as sess:
for step in range(int(num_steps_per_epoch * num_epochs)):
#print vital information every start of the epoch as always
if step % num_batches_per_epoch == 0:
accuracy_value, mean_IOU_value = sess.run([accuracy, mean_IOU])
logging.info('Epoch: %s/%s', step / num_batches_per_epoch + 1, num_epochs)
logging.info('Current Streaming Accuracy: %.4f', accuracy_value)
logging.info('Current Streaming Mean IOU: %.4f', mean_IOU_value)
#Compute summaries every 10 steps and continue evaluating
if step % 10 == 0:
test_accuracy, test_mean_IOU, test_per_class_accuracy = eval_step(sess, metrics_op = metrics_op, global_step = sv.global_step)
summaries = sess.run(my_summary_op)
sv.summary_computed(sess, summaries)
#Otherwise just run as per normal
else:
test_accuracy, test_mean_IOU, test_per_class_accuracy = eval_step(sess, metrics_op = metrics_op, global_step = sv.global_step)
#At the end of all the evaluation, show the final accuracy
logging.info('Final Streaming Accuracy: %.4f', test_accuracy)
logging.info('Final Mean IOU: %.4f', test_mean_IOU)
logging.info('Final Per Class Accuracy %.4f', test_per_class_accuracy)
#Show end of evaluation
logging.info('Finished evaluating!')
#Save the images
if save_images:
if not os.path.exists(photo_dir):
os.mkdir(photo_dir)
#Save the image visualizations for the first 10 images.
logging.info('Saving the images now...')
predictions_val, annotations_val = sess.run([predictions, annotations])
for i in range(10):
predicted_annotation = predictions_val[i]
annotation = annotations_val[i]
plt.subplot(1,2,1)
plt.imshow(predicted_annotation)
plt.subplot(1,2,2)
plt.imshow(annotation)
plt.savefig(photo_dir+"/image_" + str(i))
if __name__ == '__main__':
run()
|
c3TNT/master
|
manager.py
|
<gh_stars>0
from flask_migrate import MigrateCommand
from flask_script import Manager
# NOTE: create_app (the Flask application factory) is not imported in this file; it is assumed to come from the project's app module
app = create_app('develop')
manager = Manager(app)
manager.add_command('db',MigrateCommand)
if __name__ == '__main__':
manager.run()
|
glqstrauss/oopsgenie
|
clean.py
|
import csv
from utils import get_valid_colum_indices
class Cleaner():
def clean(file, clean_columns, remove):
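# writes a "...clean.csv" file next to the input, keeping only the requested columns;
# when `remove` is given, rows whose "Message" column contains any of those strings are skipped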
print ("Cleaning {}".format(file))
print ("For columns {}".format(clean_columns))
new_file = file[0:-7] + "clean.csv"
with open(file, 'r') as raw_file:
reader = csv.reader(raw_file, delimiter=',')
headers = next(reader)
col_count = len(clean_columns)
if remove:
clean_columns.append("Message")
indices = get_valid_colum_indices(headers, clean_columns)
if indices is None:
print ("invalid column specified for in {}".format(file))
return
with open(new_file, 'w') as clean_file:
writer = csv.writer(clean_file, delimiter=',')
writer.writerow(clean_columns)
for row in reader:
if remove:
blacklisted = False
for r in remove:
if r in row[indices[-1]]:
blacklisted = True
if blacklisted:
continue
cleaned_row = []
for i in range(col_count):
cleaned_row.append(row[indices[i]])
writer.writerow(cleaned_row)
print("Done")
|
glqstrauss/oopsgenie
|
utils.py
|
def get_valid_colum_indices(full_cols, specified_cols):
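# return the positions of specified_cols within full_cols, or None if any requested column is missing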
indices = []
for column in specified_cols:
# Validate columns to extract
if column not in full_cols:
return None
indices.append(full_cols.index(column))
return indices
|
glqstrauss/oopsgenie
|
count.py
|
<gh_stars>1-10
import csv
from datetime import datetime
from utils import get_valid_colum_indices
from fuzzywuzzy import fuzz
class Counter(object):
def count(self, file, column, limit, interval, match, update_minutes, outfile):
with open(file, 'r') as f:
reader = csv.reader(f, delimiter=',')
headers = next(reader)
if column == 'all':
print("Total number of alerts: {}".format(sum(1 for line in f)))
cols = [column, "CreatedAtDate"] if interval else [column]
if update_minutes:
cols.extend(["CreatedAt", "UpdatedAt"])
indices = get_valid_colum_indices(headers, cols)
if indices is None:
print ("invalid column specified for in {}".format(file))
return
index = indices[0]
count_map = {}
for row in reader:
if match:
if match not in row[index]:
continue
if update_minutes:
ms_to_update = float(row[indices[-1]]) - float(row[indices[-2]])
min_to_update = ms_to_update/1000/60
if min_to_update > int(update_minutes):
continue
if interval:
if len(interval) != 2:
print ("invalid use of --interval, must give 2 values")
# CreatedAtDate in this format 12/12/12 12:12:12.123-4567
# cut off the last 9 values to properly parse
dtime = datetime.strptime(row[indices[1]][0:-9], '%Y/%m/%d %H:%M:%S')
if dtime.hour < int(interval[0]) or dtime.hour > int(interval[1]):
continue
count_map[row[index]] = count_map.get(row[index], 0) + 1
return self.format_output(count_map, column, outfile, limit)
@classmethod
def format_output(self, count_map, column, outfile, limit, fuzzy=False):
alert_list = sorted(count_map.items(),
key = lambda kv:(kv[1], kv[0]),
reverse=True)
if not outfile:
for alert, num in alert_list:
if limit <= 0:
break
print("{}: {}".format(alert, num))
limit -=1
else:
output_file = 'fuzzy-' + outfile if fuzzy else outfile
with open(output_file, 'w') as out:
writer = csv.writer(out, delimiter=',')
writer.writerow([column, "Count"])
for row in alert_list:
writer.writerow(row)
print("Done")
class FuzzyCounter(Counter):
def count(self, file, column, limit, threshold, remove_numbers, outfile, alias_strip_list):
strip_list = []
if alias_strip_list:
with open(alias_strip_list, 'rt', encoding='utf-8') as f:
csv_reader = csv.reader(f)
values_to_strip = list(csv_reader)
strip_list = [item[0] for item in values_to_strip]
with open(file, 'r') as f:
reader = csv.reader(f, delimiter=',')
headers = next(reader)
indices = get_valid_colum_indices(headers, [column])
if indices is None:
print ("invalid column specified for in {}".format(file))
return
index = indices[0]
count_map = {}
for row in reader:
for strip in strip_list:
if strip in row[1]:
row[1] = row[1].replace(strip, "")
break
count_map[row[index]] = count_map.get(row[index], 0) + 1
# if the fuzzy threshold is set to 100% match, we can skip this
# O Notation murdering step
if threshold < 100:
# Get a list of the keys in the count_map
# Then see if any are a fuzzy match
count_map = self.count_fuzzy_matches(count_map, threshold, remove_numbers)
return self.format_output(count_map, column, outfile, limit, fuzzy=True)
@classmethod
def count_fuzzy_matches(self, count_map, fuzzy_thresh, remove_numbers):
alert_keys = list(count_map.keys())
skip_list = []
for key, val in count_map.items():
if key not in skip_list:
for alert_key in alert_keys:
if remove_numbers:
new_key = ''.join([i for i in key if not i.isdigit()])
new_alert_key = ''.join([i for i in alert_key if not i.isdigit()])
else:
new_key = key
new_alert_key = alert_key
fuzzy_match_ratio = fuzz.ratio(new_key, new_alert_key)
if fuzzy_match_ratio >= fuzzy_thresh:
if key != alert_key:
count_map[key] = count_map[key] + count_map[alert_key]
skip_list.append(alert_key)
for skip in skip_list:
if count_map.get(skip):
del count_map[skip]
return count_map
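# Illustrative only: a minimal sketch of how count_fuzzy_matches folds near-duplicate
# alert keys together. The alert names and counts below are hypothetical.
if __name__ == "__main__":
    sample = {"disk full on host-1": 3, "disk full on host-2": 2, "cpu high": 1}
    merged = FuzzyCounter.count_fuzzy_matches(sample, 90, remove_numbers=True)
    print(merged)  # the two "disk full" keys are expected to collapse into one entry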
|
glqstrauss/oopsgenie
|
oopsgenie.py
|
import argparse
import os.path
from clean import Cleaner
from count import Counter, FuzzyCounter
def is_valid_file(parser, arg):
if not os.path.exists(arg):
parser.error("The file {} does not exist!".format(arg))
return arg
def main():
parser = argparse.ArgumentParser(description="OpsGenie Alert Classifier")
parser.add_argument('file', type=lambda x: is_valid_file(parser, x),
metavar='FILE', help='file to work with')
parser.add_argument("--clean", nargs='+', dest="clean",
help="create a 'clean' file with whitelisted columns a raw file")
parser.add_argument("--remove", nargs='+', dest="remove",
help="Match rows to remove based the 'Message' column")
parser.add_argument("--count", nargs='?', dest="count", default=None, const=None,
help="count of alerts grouped by specified column name")
parser.add_argument("--fuzzy-count", nargs='?', dest="fuzzy_count", default=None, const=None,
help="fuzzy count alerts grouped by specified column name")
parser.add_argument("--limit", nargs='?', dest="limit", default=20, const=20, type=int,
help="limit number of results returned (default: 20)")
parser.add_argument("--interval", nargs='+', dest="interval",
help="Time interval in hours to filter alerts")
parser.add_argument("--match", nargs='?', dest="match", default=None, const=None,
help="Regex match against specified column name for count")
parser.add_argument("--update-minutes", nargs='?', dest="update_minutes", default=None, const=None,
help="Number of minutes between 'CreatedAt' and 'UpdatedAt'")
parser.add_argument("--outfile", nargs='?', dest="outfile", default=None, const=None,
help="Optional file to output results of count")
parser.add_argument("--threshold", nargs='?', dest="threshold", default=90, const=90, type=int,
help="Threshold for alert fuzzy match (default: 100 - so 100% match)")
parser.add_argument("--remove-numbers", nargs='?', dest="remove_numbers", default=False, const=None, type=bool,
help="Remove numbers from alias before doing fuzzy matching (default: False). \
To be used in conjuction with the fuzzy threshold flag")
parser.add_argument('--alias-strip-list', type=lambda x: is_valid_file(parser, x),
dest='strip_file', help='csv file with a column of values to strip', metavar="FILE")
args = parser.parse_args()
if args.clean:
if not args.file.endswith("raw.csv"):
parser.error("The file {} does not end with 'raw.csv'".format(args.file))
Cleaner.clean(args.file, args.clean, args.remove)
elif args.count:
counter = Counter()
counter.count(file=args.file, column=args.count, limit=args.limit, interval=args.interval,
match=args.match, update_minutes=args.update_minutes, outfile=args.outfile)
elif args.fuzzy_count:
fuzzy_counter = FuzzyCounter()
fuzzy_counter.count(file=args.file, column=args.fuzzy_count, limit=args.limit, threshold=args.threshold,
remove_numbers=args.remove_numbers, outfile=args.outfile,
alias_strip_list=args.strip_file)
if __name__ == "__main__":
main()
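# Example invocations (illustrative; the file names, and the Priority/Alias column
# names, are hypothetical -- only "Message" is a column the tool itself references):
#   python oopsgenie.py alerts-raw.csv --clean Message Priority --remove heartbeat
#   python oopsgenie.py alerts-clean.csv --count Message --limit 10 --interval 9 17
#   python oopsgenie.py alerts-clean.csv --fuzzy-count Alias --threshold 85 --remove-numbers true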
|
develmaycare/django-fixman
|
tests/test_library_files.py
|
from fixman.library.files import *
# Tests
class TestFixtureFile(object):
def test_get_full_path(self):
f = FixtureFile("testing")
assert f.get_full_path() == "fixtures/testing/initial.json"
def test_get_path(self):
f = FixtureFile("testing")
assert f.get_path() == "fixtures/testing"
def test_init(self):
f = FixtureFile("testing", model="TestModel")
assert f.export == "testing.TestModel"
assert f.file_name == "testmodel.json"
f = FixtureFile("testing")
assert f.export == "testing"
f = FixtureFile("testing", path="testing")
assert f.path == "testing"
f = FixtureFile("testing")
assert f.path == "fixtures/testing"
f = FixtureFile("testing")
assert f._full_path == "fixtures/testing/initial.json"
f = FixtureFile("testing", project_root="/path/to/example_project")
assert f._full_path == "/path/to/example_project/fixtures/testing/initial.json"
def test_label(self):
f = FixtureFile("testing", model="TestModel")
assert f.label == "testing.TestModel"
f = FixtureFile("testing")
assert f.label == "testing"
def test_repr(self):
f = FixtureFile("testing")
assert repr(f) == "<FixtureFile fixtures/testing/initial.json>"
|
develmaycare/django-fixman
|
fixman/variables.py
|
import os
FIXMAN_PATH = os.path.abspath(os.path.dirname(__file__))
"""The current path to the fixman install directory."""
CURRENT_WORKING_DIRECTORY = os.getcwd()
"""The current working directory. Used as the default path for all commands."""
|
develmaycare/django-fixman
|
sandbox/fixman.py
|
#! /usr/bin/env python
import re
import sys
sys.path.insert(0, "../")
from fixman.cli import main_command
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main_command())
|
develmaycare/django-fixman
|
fixman/utils.py
|
# Imports
from configparser import ConfigParser
import logging
import os
from commonkit import smart_cast
from .constants import LOGGER_NAME
log = logging.getLogger(LOGGER_NAME)
# Exports
__all__ = (
"filter_fixtures",
"load_fixtures",
"scan_fixtures",
)
# Functions
def filter_fixtures(fixtures, apps=None, groups=None, models=None, skip_readonly=False):
"""Filter fixtures for various criteria.
:param fixtures: The fixtures to be filtered.
:type fixtures: list[fixman.library.files.FixtureFile]
:param apps: Include only the provided app names.
:type apps: list[str]
:param groups: Include only the provided group names.
:type groups: list[str]
:param models: Include only the provided model names.
:type models: list[str]
:param skip_readonly: Skip fixtures that are marked read only.
:type skip_readonly: bool
:rtype: list[fixman.library.files.FixtureFile]
:returns: The filters that match the given criteria.
"""
_fixtures = list()
for f in fixtures:
if apps is not None and f.app not in apps:
log.debug("Skipping %s app (not in apps list)." % f.app)
continue
# BUG: Model filter will return on a partial match; Group and Grouping.
if models is not None and f.model is not None and f.model not in models:
log.debug("Skipping %s model (not in models list)." % f.model)
continue
if groups is not None and f.group not in groups:
log.debug("Skipping %s (not in group)." % f.label)
continue
if f.readonly and skip_readonly:
log.debug("Skipping %s (read only)." % f.label)
continue
_fixtures.append(f)
return _fixtures
def load_fixtures(path, **kwargs):
"""Load fixture meta data.
:param path: The path to the fixtures INI file.
:type path: str
:rtype: list[FixtureFile] | None
Remaining keyword arguments are passed to the file.
"""
from .library.files import FixtureFile
if not os.path.exists(path):
log.error("Path does not exist: %s" % path)
return None
ini = ConfigParser()
ini.read(path)
fixtures = list()
group = None
for section in ini.sections():
_kwargs = kwargs.copy()
_section = section
if ":" in section:
_section, group = section.split(":")
if "." in _section:
app_label, model_name = _section.split(".")
else:
app_label = _section
model_name = None
_kwargs['group'] = group
_kwargs['model'] = model_name
for key, value in ini.items(section):
if key == "db":
key = "database"
elif key == "nfk":
key = "natural_foreign"
elif key == "npk":
key = "natural_primary"
else:
pass
_kwargs[key] = smart_cast(value)
fixtures.append(FixtureFile(app_label, **_kwargs))
return fixtures
def scan_fixtures(path):
"""Scan for fixture files on the given path.
:param path: The path to scan.
:type path: str
:rtype: list
:returns: A list of three-element tuples; the app name, file name, and relative path.
"""
results = list()
for root, dirs, files in os.walk(path):
relative_path = root.replace(path + "/", "")
if relative_path.startswith("static") or relative_path.startswith("theme"):
continue
for f in files:
if not f.endswith(".json"):
continue
app_name = os.path.basename(os.path.dirname(relative_path))
results.append((app_name, f, relative_path))
return results
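# Illustrative only: the rough INI shape that load_fixtures() parses. Section names
# may be "app", "app.Model", or "app.Model:group"; the keys "db", "nfk" and "npk"
# are shorthand for database, natural_foreign and natural_primary. The app, model
# and file names below are hypothetical.
#
#   [lookups.Category:defaults]
#   file_name = category.json
#   nfk = true
#   readonly = true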
|
develmaycare/django-fixman
|
tests/test_utils.py
|
from fixman.utils import *
import os
def test_filter_fixtures():
path = os.path.join("tests", "example_project", "fixtures", "config.ini")
fixtures = load_fixtures(path)
assert len(filter_fixtures(fixtures, apps=["lookups"])) == 9
# BUG: Model filter will also fail if there is a fixture with no specified model.
assert len(filter_fixtures(fixtures, models=["Goal"])) == 2
assert len(filter_fixtures(fixtures, groups=["defaults"])) == 10
assert len(filter_fixtures(fixtures, groups=["examples"])) == 20
assert len(filter_fixtures(fixtures, skip_readonly=True)) == 31
def test_load_fixtures():
assert load_fixtures("nonexistent") is None
path = os.path.join("tests", "example_project", "fixtures", "config.ini")
fixtures = load_fixtures(path)
assert len(fixtures) == 33
def test_scan_fixtures():
path = os.path.join("tests", "example_project", "source")
results = scan_fixtures(path)
assert len(results) == 4
|
develmaycare/django-fixman
|
setup.py
|
# See https://packaging.python.org/en/latest/distributing.html
# and https://docs.python.org/2/distutils/setupscript.html
# and https://pypi.python.org/pypi?%3Aaction=list_classifiers
from setuptools import setup, find_packages
def read(path):
with open(path, "r") as f:
contents = f.read()
f.close()
return contents
setup(
name='django-fixman',
version=read("VERSION.txt"),
description=read("DESCRIPTION.txt"),
long_description=read("README.markdown"),
long_description_content_type="text/markdown",
author='<NAME>',
author_email='<EMAIL>',
url='https://github.com/develmaycare/django-fixman/',
packages=find_packages(),
include_package_data=True,
install_requires=[
"pygments",
# "commonkit @ git+https://github.com/develmaycare/python-commonkit",
"python-commonkit",
],
# dependency_links=[
# "https://github.com/develmaycare/python-commonkit",
# ],
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
zip_safe=False,
tests_require=[
"coverage",
"pytest",
],
test_suite='runtests.runtests',
entry_points={
'console_scripts': [
'fixman = fixman.cli:main_command',
],
},
)
|
develmaycare/django-fixman
|
fixman/library/commands.py
|
# Imports
import logging
import os
from subprocess import getstatusoutput
from commonkit.shell import EXIT
from ..constants import LOGGER_NAME
log = logging.getLogger(LOGGER_NAME)
# Classes
class DumpData(object):
"""A command for dumping fixture data."""
def __init__(self, app, database=None, export=None, natural_foreign=False, natural_primary=False, path=None,
settings=None):
"""Initialize the command.
:param app: The name of the app to which the fixture file belongs.
:type app: str
:param database: The database name into which the fixtures are installed.
:type database: str
:param export: ???
:param natural_foreign: Indicates whether natural foreign keys are used.
:type natural_foreign: bool
:param natural_primary: Indicates whether natural primary keys are used.
:type natural_primary: bool
:param path: The full path to the fixture file, including the file name.
:type path: str
:param settings: The settings to use when loading or dumping data.
:type settings: str
"""
self.app = app
self.database = database
self.export = export or app
self.natural_foreign = natural_foreign
self.natural_primary = natural_primary
self.path = path or os.path.join("../fixtures", app, "initial.json")
self.settings = settings
self._output = None
self._status = None
def get_command(self):
"""Get the command.
:rtype: str
"""
a = list()
a.append("(cd source && ./manage.py dumpdata")
if self.database is not None:
a.append("--database=%s" % self.database)
a.append("--indent=4")
if self.natural_foreign:
a.append("--natural-foreign")
if self.natural_primary:
a.append("--natural-primary")
if self.settings is not None:
a.append("--settings=%s" % self.settings)
# if self.copy_to is not None:
# a.append("%s > %s && cp %s %s" % (
# self.export,
# self.path,
# self.path,
# self.copy_to
# ))
# else:
a.append("%s > %s)" % (self.export, self.path))
return " ".join(a)
def get_output(self):
"""Get the output of the command.
:rtype: str
"""
return self._output
def preview(self):
"""Preview the command statement.
:rtype: str
"""
return self.get_command()
def run(self):
"""Run the command.
:rtype: bool
"""
command = self.get_command()
self._status, self._output = getstatusoutput(command)
if self._status > EXIT.OK:
return False
return True
class LoadData(object):
"""A command for loading fixture data."""
def __init__(self, app, database=None, path=None, settings=None):
"""Initialize the command.
:param app: The name of the app to which the fixture file belongs.
:type app: str
:param database: The database name into which the fixtures are installed.
:type database: str
:param path: The full path to the fixture file, including the file name.
:type path: str
:param settings: The settings to use when loading or dumping data.
:type settings: str
"""
self.app = app
self.database = database
self.path = path or os.path.join("../fixtures", app, "initial.json")
self.settings = settings
self._output = None
self._status = None
def get_command(self):
"""Get the command.
:rtype: str
"""
a = list()
a.append("(cd source && ./manage.py loaddata")
if self.database is not None:
a.append("--database=%s" % self.database)
if self.settings is not None:
a.append("--settings=%s" % self.settings)
a.append("%s)" % self.path)
return " ".join(a)
def get_output(self):
"""Get the output of the command.
:rtype: str
"""
return self._output
def preview(self):
"""Preview the command.
:rtype: str
"""
return self.get_command()
def run(self):
"""Run the command.
:rtype: str
"""
command = self.get_command()
self._status, self._output = getstatusoutput(command)
if self._status > EXIT.OK:
return False
return True
'''
class Fixture(object):
def __init__(self, model, operation, **kwargs):
self.app_label, self.model_name = model.split(".")
self.database = kwargs.pop("database", None)
self.group = kwargs.pop("group", "defaults")
self.is_readonly = kwargs.pop("readonly", False)
self.model = model
self.operation = operation
self.output = None
self.settings = kwargs.pop("settings", None)
self.status = None
self._preview = kwargs.pop("preview", False)
default_path = os.path.join(self.app_label, "fixtures", self.model_name.lower())
self.path = kwargs.pop("path", default_path)
def __repr__(self):
return "<%s %s.%s>" % (self.__class__.__name__, self.app_label, self.model_name)
def get_command(self):
if self.operation == "dumpdata":
return self._get_dumpdata_command()
elif self.operation == "loaddata":
return self._get_loaddata_command()
else:
raise ValueError("Invalid fixture operation: %s" % self.operation)
def preview(self):
return self.get_command()
def run(self):
command = self.get_command()
status, output = getstatusoutput(command)
self.output = output
self.status = status
if status > EXIT.OK:
return False
return True
def _get_dumpdata_command(self):
a = list()
a.append("(cd source && ./manage.py dumpdata")
if self.settings is not None:
a.append("--settings=%s" % self.settings)
if self.database is not None:
a.append("--database=%s" % self.database)
# args.full_preview_enabled = True
if self._preview:
a.append("--indent=4 %s)" % self.model)
else:
a.append("--indent=4 %s > %s)" % (self.model, self.path))
return " ".join(a)
def _get_loaddata_command(self):
a = list()
a.append("(cd source && ./manage.py loaddata")
if self.settings is not None:
a.append("--settings=%s" % self.settings)
if self.database is not None:
a.append("--database=%s" % self.database)
a.append("%s)" % self.path)
return " ".join(a)
'''
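# Illustrative only: previewing the shell statements the two commands build (no Django
# project is needed just to preview). The app name is hypothetical. Because this module
# uses relative imports, run it as: python -m fixman.library.commands
if __name__ == "__main__":
    print(DumpData("blog", natural_foreign=True).preview())
    print(LoadData("blog", database="default").preview())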
|
develmaycare/django-fixman
|
fixman/cli/initialize.py
|
# Imports
# Exports
# Functions
def subcommands(subparsers):
commands = SubCommands(subparsers)
commands.dumpdata()
commands.init()
commands.inspect()
commands.list()
commands.loaddata()
commands.scan()
# Classes
class SubCommands(object):
"""A utility class which keeps the ``cli.py`` module clean."""
def __init__(self, subparsers):
self.subparsers = subparsers
def dumpdata(self):
"""Create the dumpdata sub-command."""
sub = self.subparsers.add_parser(
"dumpdata",
aliases=["dd", "dump"],
help="Export Django fixtures."
)
self._add_fixture_options(sub)
self._add_common_options(sub)
def init(self):
"""Create the init sub-command."""
sub = self.subparsers.add_parser(
"init",
help="Initialize fixture management."
)
sub.add_argument(
"-b=",
"--base=",
default="source",
dest="base_directory",
help="The base directory within the project where fixture files may be located."
)
sub.add_argument(
"-F",
"--force-it",
action="store_true",
dest="force_enabled",
help="Force initialization (and scanning with -S) even if an existing configuration exists. This will "
"overwrite your current config.ini file."
)
sub.add_argument(
"-S",
"--scan",
action="store_true",
dest="scan_enabled",
help="Scan the current directory (or project root) to find fixture files to be added to the config."
)
self._add_common_options(sub)
def inspect(self):
"""Create the inspect sub-command."""
sub = self.subparsers.add_parser(
"inspect",
aliases=["ins"],
help="Display Django fixtures."
)
self._add_fixture_options(sub)
self._add_common_options(sub)
def list(self):
"""Create the list sub-command."""
# Arguments do NOT use _add_common_options() because this sub-command doesn't utilize the common options for
# dump, load, etc. So common options such as -D and -p have to be added here.
sub = self.subparsers.add_parser(
"list",
aliases=["ls"],
help="List the configured fixtures."
)
self._add_fixture_options(sub)
self._add_common_options(sub)
def loaddata(self):
"""Create the loaddata sub-command."""
sub = self.subparsers.add_parser(
"loaddata",
aliases=["ld", "load"],
help="Load Django fixtures."
)
sub.add_argument(
"-S",
"--script",
action="store_true",
dest="to_script",
help="Export to a bash script."
)
self._add_fixture_options(sub)
self._add_common_options(sub)
def scan(self):
"""Create the scan sub-command."""
# Arguments do NOT use _add_common_options() because this sub-command doesn't utilize the common options for
# dump, load, etc. So common options such as -D and -p have to be added here.
sub = self.subparsers.add_parser(
"scan",
help="Scan for fixture files in project source."
)
sub.add_argument(
"-P=",
"--path=",
default="deploy/fixtures/config.ini",
dest="path",
help="The path to the fixtures INI file. Default: deploy/fixtures/config.ini"
)
sub.add_argument(
"-S=",
"--source=",
default="source",
dest="base_directory",
help="The base directory in project root from which the scan will take place."
)
self._add_common_options(sub)
def _add_fixture_options(self, sub):
"""Add the common options for the fixture dump/inspect/load commands."""
sub.add_argument(
"-A=",
"--app-name=",
action="append",
dest="app_names",
help="Only work with this app. May be used multiple times."
)
sub.add_argument(
"-G=",
"--group-name=",
action="append",
dest="group_names",
help="Only work with this group. May be used multiple times."
)
sub.add_argument(
"-M=",
"--model-name=",
action="append",
dest="model_names",
help="Only work with this model. May be used multiple times."
)
sub.add_argument(
"-P=",
"--path=",
default="deploy/fixtures/config.ini",
dest="path",
help="The path to the fixtures INI file. Default: deploy/fixtures/config.ini"
)
sub.add_argument(
"-s=",
"--settings=",
dest="settings",
help="The dotted path to the Django settings file."
)
# noinspection PyMethodMayBeStatic
def _add_common_options(self, sub):
"""Add the common switches to a given sub-command instance.
:param sub: The sub-command instance.
"""
sub.add_argument(
"-D",
"--debug",
action="store_true",
dest="debug_enabled",
help="Enable debug output."
)
sub.add_argument(
"-p",
"--preview",
action="store_true",
dest="preview_enabled",
help="Preview the commands."
)
sub.add_argument(
"-r=",
"--project-root=",
dest="project_root",
help="The path to the project."
)
|
develmaycare/django-fixman
|
fixman/library/files.py
|
# Imports
import os
# Exports
__all__ = (
"FixtureFile",
)
# Classes
class FixtureFile(object):
"""Represents a fixture file."""
def __init__(self, app, comment=None, copy_to=None, database=None, file_name=None, group=None, model=None,
natural_foreign=False, natural_primary=False, path=None, project_root=None, readonly=False,
settings=None):
"""Initialize a fixture file.
:param app: The name of the app to which the fixture file belongs.
:type app: str
:param comment: A comment regarding the fixture file.
:type comment: str
:param copy_to: The path to which the fixture file should be copied.
:type copy_to: str
:param database: The database name into which the fixtures are installed.
:type database: str
:param file_name: The file name of the file.
:type file_name: str
:param group: The group into which the fixtures are organized.
:type group: str
:param model: The name of the model to which the fixtures apply.
:type model: str
:param natural_foreign: Indicates whether natural foreign keys are used.
:type natural_foreign: bool
:param natural_primary: Indicates whether natural primary keys are used.
:type natural_primary: bool
:param path: The path to the fixture file, excluding the file name.
:type path: str
:param project_root: The root (path) of the project where the fixture is used.
:type project_root: str
:param readonly: Indicates the fixture file may only be loaded (not dumped).
:type readonly: bool
:param settings: The settings to use when loading or dumping data.
:type settings: str
"""
self.app = app
self.comment = comment
self.copy_to = copy_to
self.database = database
self.file_name = file_name or "initial.json"
self.group = group
self.model = model
self.natural_foreign = natural_foreign
self.natural_primary = natural_primary
self.readonly = readonly
self.settings = settings
if model is not None:
self.export = "%s.%s" % (app, model)
if file_name is None:
self.file_name = "%s.json" % model.lower()
else:
self.export = app
source = ""
if path is not None:
source = "source"
self.path = path
else:
self.path = os.path.join("fixtures", app)
if project_root is not None:
self._full_path = os.path.join(project_root, source, self.path, self.file_name)
else:
self._full_path = os.path.join(source, self.path, self.file_name)
def __repr__(self):
return "<%s %s>" % (self.__class__.__name__, self._full_path)
def get_path(self):
"""Get the path to the fixture file (without the file).
:rtype: str
"""
return os.path.dirname(self._full_path)
def get_full_path(self):
"""Get the full path to the fixture file.
:rtype: str
"""
return self._full_path
@property
def label(self):
"""Get a display name for the fixture.
:rtype: str
"""
if self.model is not None:
return "%s.%s" % (self.app, self.model)
return self.app
|
develmaycare/django-fixman
|
fixman/constants.py
|
LOGGER_NAME = "fixman"
"""The internal name used for logging."""
|
develmaycare/django-fixman
|
fixman/cli/subcommands.py
|
# Imports
import logging
import os
from commonkit import highlight_code, read_file, truncate, write_file
from commonkit.shell import EXIT, TABLE_FORMAT, Table
from subprocess import getstatusoutput
from ..library.commands import DumpData, LoadData
from ..constants import LOGGER_NAME
from ..utils import filter_fixtures, load_fixtures, scan_fixtures
log = logging.getLogger(LOGGER_NAME)
# Functions
def dumpdata(path, apps=None, database=None, groups=None, models=None, natural_foreign=False, natural_primary=False,
preview_enabled=False, project_root=None, settings=None):
fixtures = load_fixtures(
path,
database=database,
natural_foreign=natural_foreign,
natural_primary=natural_primary,
project_root=project_root,
settings=settings
)
if not fixtures:
return EXIT.ERROR
success = list()
_fixtures = filter_fixtures(fixtures, apps=apps, groups=groups, models=models, skip_readonly=True)
for f in _fixtures:
log.info("Dumping fixtures to: %s" % f.get_full_path())
dump = DumpData(
f.app,
database=f.database,
export=f.export,
natural_foreign=f.natural_foreign,
natural_primary=f.natural_primary,
path=f.get_full_path(),
settings=f.settings
)
if preview_enabled:
success.append(True)
if not os.path.exists(f.get_path()):
print("mkdir -p %s" % f.get_path())
print(dump.preview())
if f.copy_to is not None:
print("cp %s %s" % (f.get_full_path(), f.copy_to))
else:
if not os.path.exists(f.get_path()):
os.makedirs(f.get_path())
if dump.run():
success.append(dump.run())
if f.copy_to is not None:
getstatusoutput("cp %s %s" % (f.get_full_path(), f.copy_to))
else:
log.error(dump.get_output())
if all(success):
return EXIT.OK
return EXIT.ERROR
def init(base_directory="source", force_enabled=False, preview_enabled=False, project_root=None, scan_enabled=False):
# The base path is where global fixtures and the config.ini file are located.
base_path = os.path.join(project_root, "deploy", "fixtures")
if not os.path.exists(base_path):
log.info("Creating fixtures directory.")
if not preview_enabled:
os.makedirs(base_path)
# The path to the config.ini file.
config_path = os.path.join(base_path, "config.ini")
if os.path.exists(config_path) and not force_enabled:
log.warning("A %s file already exists. Use the -F switch to force initialization (and scanning if "
"using -S). Alternatively, use the -p switch to copy and paste the results of the "
"init." % config_path)
return EXIT.TEMP
# Output for the config is collected in a list with or without scan_enabled.
output = list()
# Scan for fixture files.
if scan_enabled:
path = os.path.join(project_root, base_directory)
if not os.path.exists(path):
log.error("Path does not exist: %s" % path)
return EXIT.ERROR
log.info("Scanning the project for fixture files.")
results = scan_fixtures(path)
for app_name, file_name, relative_path in results:
log.debug("Found fixtures for %s app: %s/%s" % (app_name, relative_path, file_name))
output.append("[%s]" % app_name)
output.append("file_name = %s" % file_name)
output.append("path = %s" % relative_path)
output.append("")
# for root, directories, files in os.walk(path):
# for f in files:
# if not f.endswith(".json"):
# continue
#
# relative_path = root.replace(path + "/", "")
# # print(root.replace(path + "/", ""), f)
#
# app_name = os.path.basename(os.path.dirname(relative_path))
#
# log.debug("Found fixtures for %s app: %s/%s" % (app_name, relative_path, f))
#
# output.append("[%s]" % app_name)
# output.append("file_name = %s" % f)
# output.append("path = %s" % relative_path)
# output.append("")
else:
output.append(";[app_name]")
output.append(";file_name = fixture-file-name.json")
output.append("")
if preview_enabled:
print("\n".join(output))
return EXIT.OK
log.info("Writing config.ini file.")
write_file(config_path, content="\n".join(output))
if scan_enabled:
log.warning("Fixture entries may not exist in the correct order for loading. Please double-check and change "
"as needed.")
return EXIT.OK
def inspect(path, apps=None, groups=None, models=None, project_root=None):
fixtures = load_fixtures(path, project_root=project_root)
if not fixtures:
return EXIT.ERROR
exit_code = EXIT.OK
_fixtures = filter_fixtures(fixtures, apps=apps, groups=groups, models=models)
for f in _fixtures:
try:
content = read_file(f.get_full_path())
except FileNotFoundError as e:
exit_code = EXIT.IO
content = str(e)
print("")
print(f.label)
print("-" * 120)
print(highlight_code(content, language="json"))
print("-" * 120)
return exit_code
def ls(path, apps=None, groups=None, models=None, project_root=None):
fixtures = load_fixtures(path, project_root=project_root)
if not fixtures:
return EXIT.ERROR
headings = [
"App",
"File Name",
"Path",
"Read Only",
"Group",
"Comment",
]
table = Table(headings, output_format=TABLE_FORMAT.SIMPLE)
_fixtures = filter_fixtures(fixtures, apps=apps, groups=groups, models=models)
for f in _fixtures:
readonly = "no"
if f.readonly:
readonly = "yes"
comment = "None."
if f.comment:
comment = truncate(f.comment)
values = [
f.app,
f.file_name,
f.path,
readonly,
f.group,
comment
]
table.add(values)
print("")
print("Configured Fixtures")
print("")
print(table)
print("")
return EXIT.OK
def loaddata(path, apps=None, database=None, groups=None, models=None, preview_enabled=False, project_root=None,
settings=None, to_script=False):
fixtures = load_fixtures(path, database=database, project_root=project_root, settings=settings)
if not fixtures:
return EXIT.ERROR
if to_script:
script = list()
script.append("!# /usr/bin/env bash")
script.append("")
success = list()
_fixtures = filter_fixtures(fixtures, apps=apps, groups=groups, models=models)
for f in _fixtures:
log.info("Loading fixtures from: %s" % f.get_full_path())
load = LoadData(
f.app,
database=f.database,
path=f.get_full_path(),
settings=f.settings
)
if to_script:
# noinspection PyUnboundLocalVariable
script.append(load.preview())
continue
if preview_enabled:
success.append(True)
print(load.preview())
else:
if load.run():
success.append(load.run())
else:
log.error(load.get_output())
if to_script:
script.append("")
print("\n".join(script))
return EXIT.OK
if all(success):
return EXIT.OK
return EXIT.ERROR
def scan(path, base_directory="source", project_root=None):
configured_fixtures = load_fixtures(path, project_root=project_root)
search_path = os.path.join(project_root, base_directory)
if not os.path.exists(search_path):
log.error("Path does not exist: %s" % search_path)
return EXIT.ERROR
headings = [
"App Name",
"File Name",
"Path",
"Configured",
]
table = Table(headings, output_format=TABLE_FORMAT.SIMPLE)
results = scan_fixtures(search_path)
for values in results:
configured = "no"
for cf in configured_fixtures:
conditions = [
cf.app == values[0],
cf.file_name == values[1],
cf.path == values[2],
]
if all(conditions):
configured = "yes"
break
elif any(conditions):
configured = "maybe"
break
else:
configured = "no"
_values = list(values)
_values.append(configured)
table.add(_values)
print("")
print("Fixtures Found in Project")
print("")
print(table)
print("")
return EXIT.OK
'''
def dumpdata(args):
"""Dump data using a fixtures.ini file."""
# Make sure the file exists.
path = args.path
if not os.path.exists(path):
logger.warning("fixtures.ini file does not exist: %s" % path)
return EXIT_INPUT
# Load the file.
ini = ConfigParser()
ini.read(path)
# Generate the commands.
for model in ini.sections():
kwargs = dict()
if args.full_preview_enabled:
kwargs['preview'] = True
if args.settings:
kwargs['settings'] = args.settings
for key, value in ini.items(model):
kwargs[key] = value
fixture = Fixture(model, "dumpdata", **kwargs)
if args.app_labels and fixture.app_label not in args.app_labels:
logger.info("(SKIPPED) %s" % fixture.model)
continue
if args.model_name and fixture.model_name.lower() != args.model_name.lower():
logger.info("(SKIPPED) %s" % fixture.model)
continue
if args.groups and fixture.group not in args.groups:
logger.info("(SKIPPED) %s" % fixture.model)
continue
if fixture.is_readonly:
logger.info("(READONLY) %s (dumpdata skipped)" % fixture.model)
continue
if args.preview_enabled or args.full_preview_enabled:
logger.info("(PREVIEW) %s" % fixture.preview())
if args.full_preview_enabled:
fixture.run()
print(fixture.output)
else:
result = fixture.run()
if result:
logger.info("(OK) %s" % fixture.model)
else:
logger.info("(FAILED) %s %s" % (fixture.model, fixture.output))
return EXIT.UNKNOWN
return EXIT.OK
def loaddata(args):
"""Load data using a fixtures.ini file."""
path = args.path
if not os.path.exists(path):
logger.warning("fixtures.ini file does not exist: %s" % path)
return EXIT_INPUT
# Load the file.
ini = ConfigParser()
ini.read(path)
# Generate the commands.
for model in ini.sections():
kwargs = dict()
if args.settings:
kwargs['settings'] = args.settings
for key, value in ini.items(model):
kwargs[key] = value
fixture = Fixture(model, "loaddata", **kwargs)
if args.app_labels and fixture.app_label not in args.app_labels:
if args.preview_enabled:
logger.info("(SKIPPED) %s" % fixture.model)
continue
if args.model_name and fixture.model_name.lower() != args.model_name.lower():
if args.preview_enabled:
logger.info("(SKIPPED) %s" % fixture.model)
continue
if args.groups and fixture.group not in args.groups:
if args.preview_enabled:
logger.info("[SKIPPED] %s" % fixture.model)
continue
if args.preview_enabled:
logger.info("(PREVIEW) %s" % fixture.preview())
else:
result = fixture.run()
if result:
logger.info("(OK) %s %s" % (fixture.model, fixture.output))
else:
logger.warning("(FAILED) %s %s" % (fixture.model, fixture.output))
return EXIT.UNKNOWN
return EXIT.OK
'''
|
develmaycare/django-fixman
|
fixman/cli/__init__.py
|
# Imports
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from commonkit.logging import LoggingHelper
from commonkit.shell import EXIT
from ..constants import LOGGER_NAME
from ..variables import CURRENT_WORKING_DIRECTORY
from ..version import DATE as VERSION_DATE, VERSION
from . import initialize
from . import subcommands
DEBUG = 10
logging = LoggingHelper(colorize=True, name=LOGGER_NAME)
log = logging.setup()
# Commands
def main_command():
"""Work with Django fixtures."""
__author__ = "<NAME> <<EMAIL>>"
__date__ = VERSION_DATE
__help__ = """NOTES
Work with Django fixtures.
"""
__version__ = VERSION
# Main argument parser from which sub-commands are created.
parser = ArgumentParser(description=__doc__, epilog=__help__, formatter_class=RawDescriptionHelpFormatter)
# Access to the version number requires special consideration, especially
# when using sub parsers. The Python 3.3 behavior is different. See this
# answer: http://stackoverflow.com/questions/8521612/argparse-optional-subparser-for-version
parser.add_argument(
"-v",
action="version",
help="Show version number and exit.",
version=__version__
)
parser.add_argument(
"--version",
action="version",
help="Show verbose version information and exit.",
version="%(prog)s" + " %s %s by %s" % (__version__, __date__, __author__)
)
# Initialize sub-commands.
subparsers = parser.add_subparsers(
dest="subcommand",
help="Commands",
metavar="dumpdata, init, inspect, list, loaddata, scan"
)
initialize.subcommands(subparsers)
# Parse arguments.
args = parser.parse_args()
command = args.subcommand
# Set debug level.
if args.debug_enabled:
log.setLevel(DEBUG)
log.debug("Namespace: %s" % args)
project_root = args.project_root or CURRENT_WORKING_DIRECTORY
log.debug("Project root: %s" % project_root)
exit_code = EXIT.UNKNOWN
if command in ("dd", "dump", "dumpdata"):
exit_code = subcommands.dumpdata(
args.path,
apps=args.app_names,
groups=args.group_names,
models=args.model_names,
preview_enabled=args.preview_enabled,
project_root=project_root,
settings=args.settings
)
elif command == "init":
        exit_code = subcommands.init(
            base_directory=args.base_directory,
            force_enabled=args.force_enabled,
            preview_enabled=args.preview_enabled,
            project_root=project_root,
            scan_enabled=args.scan_enabled
        )
elif command in ("ins", "inspect"):
exit_code = subcommands.inspect(
args.path,
apps=args.app_names,
groups=args.group_names,
models=args.model_names,
project_root=project_root
)
elif command in ("ld", "load", "loaddata"):
exit_code = subcommands.loaddata(
args.path,
apps=args.app_names,
groups=args.group_names,
models=args.model_names,
preview_enabled=args.preview_enabled,
project_root=project_root,
settings=args.settings,
to_script=args.to_script
)
elif command in ("list", "ls"):
exit_code = subcommands.ls(
args.path,
apps=args.app_names,
groups=args.group_names,
models=args.model_names,
project_root=project_root
)
elif command == "scan":
exit_code = subcommands.scan(
args.path,
base_directory=args.base_directory,
project_root=project_root
)
else:
log.error("Unsupported command: %s" % command)
exit(exit_code)
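# Example invocations (illustrative), using the "fixman" console script declared in setup.py:
#   fixman init -S                        # scaffold deploy/fixtures/config.ini by scanning the project
#   fixman list -P deploy/fixtures/config.ini
#   fixman dumpdata -A lookups -p         # preview the dumpdata commands for a single app
#   fixman loaddata -G defaults -S        # emit a bash script loading the "defaults" group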
|
develmaycare/django-fixman
|
tests/test_library_commands.py
|
from fixman.library.commands import *
# import subprocess
def mock_bad_getstatusoutput(command):
return 1, "fail"
def mock_good_getstatusoutput(command):
return 0, "ok"
# Tests
class TestDumpData(object):
def test_get_command(self):
dd = DumpData("test_app", database="testing", natural_foreign=True, natural_primary=True,
settings="tenants.example.settings")
#(cd source &&
# ./manage.py dumpdata --database=testing --indent=4 --natural-foreign --natural-primary
# --settings=tenants.example.settings.py test_app > ../fixtures/test_app/initial.json)
cmd = dd.get_command()
assert "dumpdata" in cmd
assert "database=testing" in cmd
assert "indent=4" in cmd
assert "natural-foreign" in cmd
assert "natural-primary" in cmd
assert "settings=tenants.example.settings" in cmd
assert "test_app > ../fixtures/test_app/initial.json"
def test_get_output(self):
dd = DumpData("test_app")
assert dd.get_output() is None
def test_preview(self):
dd = DumpData("test_app")
assert dd.get_command() == dd.preview()
def test_run(self, monkeypatch):
# https://stackoverflow.com/a/28405771/241720
monkeypatch.setattr("subprocess.getstatusoutput.__code__", mock_good_getstatusoutput.__code__)
dd = DumpData("test_app")
assert dd.run() is True
monkeypatch.setattr("subprocess.getstatusoutput.__code__", mock_bad_getstatusoutput.__code__)
dd = DumpData("test_app")
assert dd.run() is False
class TestLoadData(object):
def test_get_command(self):
ld = LoadData("test_app", database="testing", settings="tenant.example.settings")
# (cd source && ./manage.py loaddata --database=testing
# --settings=tenant.example.settings ../fixtures/test_app/initial.json)
cmd = ld.get_command()
assert "loaddata" in cmd
assert "database=testing" in cmd
assert "settings=tenant.example.settings" in cmd
assert "../fixtures/test_app/initial.json" in cmd
def test_get_output(self):
ld = LoadData("test_app")
assert ld.get_output() is None
def test_preview(self):
ld = LoadData("test_app")
assert ld.preview() == ld.get_command()
def test_run(self, monkeypatch):
# https://stackoverflow.com/a/28405771/241720
monkeypatch.setattr("subprocess.getstatusoutput.__code__", mock_good_getstatusoutput.__code__)
ld = LoadData("test_app")
assert ld.run() is True
monkeypatch.setattr("subprocess.getstatusoutput.__code__", mock_bad_getstatusoutput.__code__)
ld = LoadData("test_app")
assert ld.run() is False
|
develmaycare/django-fixman
|
fixman/version.py
|
DATE = "2021-05-01"
VERSION = "1.0.0"
MAJOR = 1
MINOR = 0
PATCH = 0
|
carlopires/tnetstring3
|
tools/shootout.py
|
import sys
import random
import cjson
import ujson
import tnetstring
import marshal
from tnetstring.tests.test_format import FORMAT_EXAMPLES, get_random_object
TESTS = []
def add_test(v):
# These modules have a few round-tripping problems...
try:
assert cjson.decode(cjson.encode(v)) == v
assert ujson.loads(ujson.dumps(v)) == v
except Exception:
pass
else:
TESTS.append((v,tnetstring.dumps(v),cjson.encode(v),marshal.dumps(v)))
# Test it on all our format examples.
for (k, v) in FORMAT_EXAMPLES.items():
add_test(v)
# And on some randomly-generated objects.
# Use a fixed random seed for consistency.
r = random.Random(7)
for _ in range(20):
v = get_random_object(r)
add_test(v)
TEST_DUMP_ONLY = False
TEST_LOAD_ONLY = False
if len(sys.argv) >1 :
if sys.argv[1] == "dumps":
TEST_DUMP_ONLY = True
elif sys.argv[1] == "loads":
TEST_LOAD_ONLY = True
elif sys.argv[1] == "roundtrip":
pass
else:
raise ValueError("unknown test type: " + sys.argv[1])
def thrash_tnetstring():
for obj, tns, json, msh in TESTS:
if TEST_DUMP_ONLY:
tnetstring.dumps(obj)
elif TEST_LOAD_ONLY:
assert tnetstring.loads(tns) == obj
else:
assert tnetstring.loads(tnetstring.dumps(obj)) == obj
def thrash_cjson():
for obj, tns, json, msh in TESTS:
if TEST_DUMP_ONLY:
cjson.encode(obj)
elif TEST_LOAD_ONLY:
assert cjson.decode(json) == obj
else:
assert cjson.decode(cjson.encode(obj)) == obj
def thrash_ujson():
for obj, tns, json, msh in TESTS:
if TEST_DUMP_ONLY:
ujson.dumps(obj)
elif TEST_LOAD_ONLY:
assert ujson.loads(json) == obj
else:
assert ujson.loads(ujson.dumps(obj)) == obj
def thrash_marshal():
for obj, tns, json, msh in TESTS:
if TEST_DUMP_ONLY:
marshal.dumps(obj)
elif TEST_LOAD_ONLY:
assert marshal.loads(msh) == obj
else:
assert marshal.loads(marshal.dumps(obj)) == obj
if __name__ == "__main__":
import timeit
t1 = timeit.Timer("thrash_tnetstring()",
"from shootout import thrash_tnetstring")
t1 = min(t1.repeat(number=10000))
print "tnetstring", t1
t2 = timeit.Timer("thrash_cjson()",
"from shootout import thrash_cjson")
t2 = min(t2.repeat(number=10000))
print "cjson:", t2
print "speedup: ", round((t2 - t1) / (t2) * 100,2), "%"
t3 = timeit.Timer("thrash_ujson()",
"from shootout import thrash_ujson")
t3 = min(t3.repeat(number=10000))
print "ujson:", t3
print "speedup: ", round((t3 - t1) / (t3) * 100,2), "%"
t4 = timeit.Timer("thrash_marshal()",
"from shootout import thrash_marshal")
t4 = min(t4.repeat(number=10000))
print "marshal:", t4
print "speedup: ", round((t4 - t1) / (t4) * 100,2), "%"
|
carlopires/tnetstring3
|
tnetstring/__init__.py
|
"""
tnetstring: data serialization using typed netstrings
======================================================
This is a data serialization library. It's a lot like JSON but it uses a
new syntax called "typed netstrings" that Zed has proposed for use in the
Mongrel2 webserver. It's designed to be simpler and easier to implement
than JSON, with a happy consequence of also being faster in many cases.
An ordinary netstring is a blob of data prefixed with its length and postfixed
with a sanity-checking comma. The string "hello world" encodes like this::
11:hello world,
Typed netstrings add other datatypes by replacing the comma with a type tag.
Here's the integer 12345 encoded as a tnetstring::
5:12345#
And here's the list [12345,True,0] which mixes integers and bools::
19:5:12345#4:true!1:0#]
Simple enough? This module gives you the following functions:
:dump: dump an object as a tnetstring to a file
:dumps: dump an object as a tnetstring to a string
:load: load a tnetstring-encoded object from a file
:loads: load a tnetstring-encoded object from a string
:pop: pop a tnetstring-encoded object from the front of a string
Note that since parsing a tnetstring requires reading all the data into memory
at once, there's no efficiency gain from using the file-based versions of these
functions. They're only here so you can use load() to read precisely one
item from a file or socket without consuming any extra data.
The tnetstrings specification explicitly states that strings are binary blobs
and forbids the use of unicode at the protocol level. Accordingly, this Python 3
port works with ``bytes`` payloads, and decoding them to ``str`` is left to the
application:
    >>> tnetstring.loads(b'2:\\xce\\xb1,')
    b'\\xce\\xb1'
:Copyright: (c) 2012-2013 by <NAME> <<EMAIL>>.
:Copyright: (c) 2014 by <NAME> <<EMAIL>>.
:License: MIT, see LICENCE for more details.
"""
__ver_major__ = 0
__ver_minor__ = 3
__ver_patch__ = 1
__ver_sub__ = ''
__version__ = '{}.{}.{}{}'.format(__ver_major__,__ver_minor__,__ver_patch__,__ver_sub__)
# Use the c-extension version if available
try:
import _tnetstring
except ImportError:
from collections import deque
def dumps(value: object) -> bytes:
"""
This function dumps a python object as a tnetstring.
"""
# This uses a deque to collect output fragments in reverse order,
# then joins them together at the end. It's measurably faster
# than creating all the intermediate strings.
# If you're reading this to get a handle on the tnetstring format,
# consider the _gdumps() function instead; it's a standard top-down
# generator that's simpler to understand but much less efficient.
q = deque()
_rdumpq(q, 0, value)
return b''.join(q)
    def dump(value: object, file_handle: 'file object') -> None:
"""
This function dumps a python object as a tnetstring and
writes it to the given file.
"""
file_handle.write(dumps(value))
def _rdumpq(q: deque, size: int, value: object) -> None:
"""
Dump value as a tnetstring, to a deque instance, last chunks first.
This function generates the tnetstring representation of the given value,
pushing chunks of the output onto the given deque instance. It pushes
the last chunk first, then recursively generates more chunks.
When passed in the current size of the string in the queue, it will return
the new size of the string in the queue.
Operating last-chunk-first makes it easy to calculate the size written
for recursive structures without having to build their representation as
a string. This is measurably faster than generating the intermediate
strings, especially on deeply nested structures.
"""
write = q.appendleft
if value is None:
write(b'0:~')
return size + 3
elif value is True:
write(b'4:true!')
return size + 7
elif value is False:
write(b'5:false!')
return size + 8
elif isinstance(value, int):
data = str(value).encode()
ldata = len(data)
span = str(ldata).encode()
write(b'#')
write(data)
write(b':')
write(span)
return size + 2 + len(span) + ldata
elif isinstance(value, float):
# Use repr() for float rather than str().
# It round-trips more accurately.
# Probably unnecessary in later python versions that
# use <NAME>'s ftoa routines.
data = repr(value).encode()
ldata = len(data)
span = str(ldata).encode()
write(b'^')
write(data)
write(b':')
write(span)
return size + 2 + len(span) + ldata
elif isinstance(value, bytes):
lvalue = len(value)
span = str(lvalue).encode()
write(b',')
write(value)
write(b':')
write(span)
return size + 2 + len(span) + lvalue
elif isinstance(value, (list,tuple)):
write(b']')
init_size = size = size + 1
for item in reversed(value):
size = _rdumpq(q, size, item)
span = str(size - init_size).encode()
write(b':')
write(span)
return size + 1 + len(span)
elif isinstance(value, dict):
write(b'}')
init_size = size = size + 1
for (k,v) in value.items():
size = _rdumpq(q,size,v)
size = _rdumpq(q,size,k)
span = str(size - init_size).encode()
write(b':')
write(span)
return size + 1 + len(span)
else:
raise ValueError("unserializable object: {} ({})".format(value, type(value)))
def _gdumps(value: object) -> bytes:
"""
Generate fragments of value dumped as a tnetstring.
This is the naive dumping algorithm, implemented as a generator so that
it's easy to pass to "".join() without building a new list.
This is mainly here for comparison purposes; the _rdumpq version is
        measurably faster as it doesn't have to build intermediate strings.
"""
if value is None:
yield b'0:~'
elif value is True:
yield b'4:true!'
elif value is False:
yield b'5:false!'
elif isinstance(value, int):
data = str(value).encode()
yield str(len(data)).encode()
yield b':'
yield data
yield b'#'
elif isinstance(value, float):
data = repr(value).encode()
yield str(len(data)).encode()
yield b':'
yield data
yield b'^'
elif isinstance(value, bytes):
yield str(len(value)).encode()
yield b':'
yield value
yield b','
elif isinstance(value, (list,tuple)):
sub = []
for item in value:
sub.extend(_gdumps(item))
sub = b''.join(sub)
yield str(len(sub)).encode()
yield b':'
yield sub
yield b']'
elif isinstance(value,(dict,)):
sub = []
for (k,v) in value.items():
sub.extend(_gdumps(k))
sub.extend(_gdumps(v))
sub = b''.join(sub)
yield str(len(sub)).encode()
yield b':'
yield sub
yield b'}'
else:
raise ValueError("unserializable object")
def loads(string: bytes) -> object:
"""
This function parses a tnetstring into a python object.
"""
# No point duplicating effort here. In the C-extension version,
# loads() is measurably faster then pop() since it can avoid
# the overhead of building a second string.
return pop(string)[0]
def load(file_handle: 'file object') -> object:
"""load(file) -> object
This function reads a tnetstring from a file and parses it into a
python object. The file must support the read() method, and this
function promises not to read more data than necessary.
"""
# Read the length prefix one char at a time.
# Note that the netstring spec explicitly forbids padding zeros.
c = file_handle.read(1)
if not ord(b'0') <= ord(c) <= ord(b'9'):
raise ValueError("not a tnetstring: missing or invalid length prefix")
datalen = ord(c) - ord('0')
c = file_handle.read(1)
if datalen != 0:
while ord(b'0') <= ord(c) <= ord(b'9'):
datalen = (10 * datalen) + (ord(c) - ord('0'))
if datalen > 999999999:
errmsg = "not a tnetstring: absurdly large length prefix"
raise ValueError(errmsg)
c = file_handle.read(1)
if ord(c) != ord(b':'):
raise ValueError("not a tnetstring: missing or invalid length prefix")
# Now we can read and parse the payload.
# This repeats the dispatch logic of pop() so we can avoid
# re-constructing the outermost tnetstring.
data = file_handle.read(datalen)
if len(data) != datalen:
raise ValueError("not a tnetstring: length prefix too big")
tns_type = ord(file_handle.read(1))
if tns_type == ord(b','):
return data
if tns_type == ord(b'#'):
try:
return int(data)
except ValueError:
raise ValueError("not a tnetstring: invalid integer literal")
if tns_type == ord(b'^'):
try:
return float(data)
except ValueError:
raise ValueError("not a tnetstring: invalid float literal")
if tns_type == ord(b'!'):
if data == b'true':
return True
elif data == b'false':
return False
else:
raise ValueError("not a tnetstring: invalid boolean literal")
if tns_type == ord(b'~'):
if data:
raise ValueError("not a tnetstring: invalid null literal")
return None
if tns_type == ord(b']'):
l = []
while data:
item, data = pop(data)
l.append(item)
return l
if tns_type == ord(b'}'):
d = {}
while data:
key, data = pop(data)
val, data = pop(data)
d[key] = val
return d
raise ValueError("unknown type tag")
def pop(string: bytes) -> object:
"""pop(string,encoding='utf_8') -> (object, remain)
This function parses a tnetstring into a python object.
It returns a tuple giving the parsed object and a string
containing any unparsed data from the end of the string.
"""
# Parse out data length, type and remaining string.
try:
dlen, rest = string.split(b':', 1)
dlen = int(dlen)
except ValueError:
raise ValueError("not a tnetstring: missing or invalid length prefix: {}".format(string))
try:
data, tns_type, remain = rest[:dlen], rest[dlen], rest[dlen+1:]
except IndexError:
# This fires if len(rest) < dlen, meaning we don't need
# to further validate that data is the right length.
raise ValueError("not a tnetstring: invalid length prefix: {}".format(dlen))
# Parse the data based on the type tag.
if tns_type == ord(b','):
return data, remain
if tns_type == ord(b'#'):
try:
return int(data), remain
except ValueError:
raise ValueError("not a tnetstring: invalid integer literal: {}".format(data))
if tns_type == ord(b'^'):
try:
return float(data), remain
except ValueError:
raise ValueError("not a tnetstring: invalid float literal: {}".format(data))
if tns_type == ord(b'!'):
if data == b'true':
return True, remain
elif data == b'false':
return False, remain
else:
raise ValueError("not a tnetstring: invalid boolean literal: {}".format(data))
if tns_type == ord(b'~'):
if data:
raise ValueError("not a tnetstring: invalid null literal")
return None, remain
if tns_type == ord(b']'):
l = []
while data:
item, data = pop(data)
l.append(item)
return (l,remain)
if tns_type == ord(b'}'):
d = {}
while data:
key, data = pop(data)
val, data = pop(data)
d[key] = val
return d, remain
raise ValueError("unknown type tag: {}".format(tns_type))
else:
dumps = _tnetstring.dumps
load = _tnetstring.load
loads = _tnetstring.loads
pop = _tnetstring.pop
    def dump(value: object, file_handle: 'file object') -> None:
"""
This function dumps a python object as a tnetstring and
writes it to the given file.
"""
file_handle.write(dumps(value))
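# Illustrative only: a quick round-trip check of the examples given in the module
# docstring; it should pass with either the pure-python fallback or the C extension.
if __name__ == "__main__":
    assert dumps(12345) == b'5:12345#'
    assert loads(b'11:hello world,') == b'hello world'
    assert loads(dumps([12345, True, 0])) == [12345, True, 0]
    print("tnetstring round-trip OK")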
|
carlopires/tnetstring3
|
tests/__init__.py
|
import unittest
def suite():
from . import test_format
suite = unittest.TestSuite()
suite.addTest(test_format.suite())
return suite
if __name__ == '__main__':
unittest.TextTestRunner(verbosity=2).run(suite())
|
carlopires/tnetstring3
|
setup.py
|
from setuptools import setup, Extension
def get_info():
info = {}
src = open("tnetstring/__init__.py")
lines = []
ln = next(src)
while "__version__" not in ln:
lines.append(ln)
ln = next(src)
while "__version__" in ln:
lines.append(ln)
ln = next(src)
exec("".join(lines),info)
return info
info = get_info()
setup(name="tnetstring3",
version=info["__version__"],
author="<NAME>",
author_email="<EMAIL>",
url="http://github.com/carlopires/tnetstring3",
description="Super fast data serialization for Python 3",
long_description=info["__doc__"],
license="MIT",
keywords="netstring serialization",
packages=["tnetstring"],
ext_modules = [
Extension(name="_tnetstring", sources=["tnetstring/_tnetstring.c"]),
],
classifiers=[
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License"
],
test_suite='tests.suite'
)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/ex2_create_table.py
|
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab8"
)
with connection:
with connection.cursor() as cursor:
sql1 = """CREATE TABLE user (
title varchar(100) DEFAULT NULL,
first_name varchar(250) DEFAULT NULL,
last_name varchar(250) DEFAULT NULL,
email varchar(250) NOT NULL PRIMARY KEY,
usergroup varchar(20) DEFAULT NULL,
disabled tinyint(4) DEFAULT NULL
) DEFAULT CHARSET=utf8mb4;
"""
cursor.execute(sql1)
sql2 = """CREATE TABLE course(
title varchar(250) DEFAULT NULL,
code varchar(10) NOT NULL PRIMARY KEY,
credit float DEFAULT NULL
) DEFAULT CHARSET=utf8mb4;
"""
cursor.execute(sql2)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/ex4_select_join.py
|
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab8",
)
with connection:
with connection.cursor() as cursor:
sql = """SELECT u.email,u.first_name,u.last_name,c.title
FROM user_course uc INNER JOIN user u ON uc.user_email = u.email
INNER JOIN course c ON uc.course_code = c.code"""
cursor.execute(sql)
for row in cursor:
print(row)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS324/Homework 1/main.py
|
# Ball Sort Puzzle Assignment
# CSS324 Section 3
# 6222780379 <NAME>.
# 6222780833 <NAME>.
import BallSort as BS
import UCGS
import utils
goal_node, _ = UCGS.uniform_cost_graph_search(BS)
if goal_node:
utils.print_solution(goal_node)
else:
print("No Solution")
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/create_m2m_link.py
|
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab8"
)
with connection:
with connection.cursor() as cursor:
# create m2m table
sql = """CREATE TABLE IF NOT EXISTS user_course (
user_email varchar(250) NOT NULL,
course_code varchar(250) NOT NULL,
PRIMARY KEY (user_email,course_code),
CONSTRAINT uc_u_fk FOREIGN KEY (user_email) REFERENCES user (email)
ON DELETE RESTRICT ON UPDATE CASCADE,
CONSTRAINT uc_c_fk FOREIGN KEY (course_code) REFERENCES course (code)
ON DELETE RESTRICT ON UPDATE CASCADE
) DEFAULT CHARSET=utf8mb4;
"""
cursor.execute(sql)
# add user data
sql = """INSERT INTO user(title, first_name, last_name, email)
VALUES (%s, %s, %s, %s);
"""
values = [
("Mr", "James", "NoTime", "<EMAIL>"),
("Mr", "Dora", "Emon", "<EMAIL>"),
("Mr", "Aegon", "Targaryen", "<EMAIL>"),
("Mr", "Walter", "White", "<EMAIL>")
]
try:
cursor.executemany(sql, values)
connection.commit() # equal to save
except Exception as err:
print("Error, rollbacked!",err)
connection.rollback()
print(f"Total {cursor.rowcount} users inserted")
# add user_course data
sql2 = """INSERT INTO user_course(user_email, course_code)
VALUES (%s, %s);
"""
values = [
("<EMAIL>", "SPY007"),
("<EMAIL>", "DRA101")
]
try:
cursor.executemany(sql2, values)
connection.commit() # equal to save
except Exception as err:
print("Error, rollbacked!",err)
connection.rollback()
print(f"Total {cursor.rowcount} m2m inserted")
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/flaskBackend/01_helloworld.py
|
from flask import Flask
import os
app = Flask(__name__)
@app.route("/")
def hello_world():
return "<p>Hello, World! 1234</p>"
if __name__ == "__main__":
"""
The default port is 5000. To change the port of your service in PowerShell, use the following command:
$env:PORT = xxxx
"""
app.run(host="0.0.0.0",port=int(os.environ.get('PORT',5000)))
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/ex1_create_database.py
|
<reponame>waterthatfrozen/Y3T1-CodeVault
import pymysql
connection = pymysql.connect(
host="localhost"
)
with connection:
with connection.cursor() as cursor:
sql = "CREATE DATABASE css326_lab8"
cursor.execute(sql)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS324/Homework 1/UCGS.py
|
<filename>CSS324/Homework 1/UCGS.py<gh_stars>0
# Uniform Cost Graph Search
# from lecture 3
from heapq import heappush, heappop, heapify
from utils import create_node
def index(f, s):
return next((i for i, x in enumerate(f) if x[1][0] == s), -1)
def uniform_cost_graph_search(problem):
initial_node = create_node(problem.initial_state(), None, "", 0, 0)
frontier = [(0, initial_node)]
explored = []
n_visits = 0
while True:
if not frontier:
return (None, n_visits)
else:
n_visits += 1
_, node = heappop(frontier)
state, _, _, path_cost, depth = node
explored.append(state)
if problem.is_goal(state):
return (node, n_visits)
else:
for succ, cost in problem.successors(state):
child_cost = path_cost + cost
child = create_node(succ, node, "", child_cost, depth + 1)
if succ not in explored:
idx = index(frontier, succ)
if idx < 0:
heappush(frontier, (child_cost, child))
else:
_, existing = frontier[idx]
if existing[3] > child_cost:
frontier[idx] = (child_cost, child)
heapify(frontier)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS324/Homework 1/BallSort.py
|
<gh_stars>0
# Ball Sorting
import copy
def initial_state():
# "" as empty stack
# return ["AAAA","BBBB","DCCC","CDDD","",""]
S1 = S2 = S3 = S4 = ""
for _ in range(4):
a = input().split()
S1+=a[0]
S2+=a[1]
S3+=a[2]
S4+=a[3]
init_state = [S1,S2,S3,S4,"",""]
#print(init_state)
return init_state
def is_goal(s):
count = 0
for i in s:
if i in ["AAAA","BBBB","CCCC","DDDD",""]:
count +=1
return count == 6
# print("TEST GOAL FUNCTION")
# print(is_goal(["AADC","AADB","CCDD","CBBB","",""]))
# print(is_goal(["","BBBB","CCCC","DDDD","AAAA",""]))
# print(is_goal(["","","DDDD","BBBB","AAAA","CCCC"]))
def valid_move(s,orgn,dest):
# check the valid move
if s[orgn] == "" or len(s[dest]) == 4 or orgn == dest:
return False
else:
if s[dest] == "":
return True
elif s[orgn][0] == s[dest][0]:
# print(s[dest],s[orgn][0],len(s[dest]),s[dest].count(s[orgn][0]))
return True
else:
return False
# return True
#print("TEST VALID MOVE FUNCTION")
#print(valid_move(["AADC","AADB","CCDD","CBBB","",""],4,0))
#print(valid_move(["AADC","AADB","CCDD","CBBB","",""],0,4))
def move_ball(s,orgn,dest):
new_s = copy.deepcopy(s)
new_s[dest] = new_s[orgn][0] + new_s[dest]
new_s[orgn] = new_s[orgn][1:]
return new_s
#rint("TEST MOVE BALL FUNCTION")
#print(move_ball(["AADC","AADB","CCDD","CBBB","",""],0,4))
def successors(s):
# Successor Function
# not the same stack, not full, same character -> moveable
for orgn in range (6):
for dest in range(6):
if valid_move(s,orgn,dest):
#print("now trying to move top of orgn stk#",orgn," to dest stk#",dest)
new_ball_s = move_ball(s,orgn,dest)
# print(new_ball_s)
yield new_ball_s,1
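# Minimal demo (added for illustration; not part of the assignment code):
# enumerate the successors of the example state used in the comments above.
if __name__ == "__main__":
    example = ["AADC", "AADB", "CCDD", "CBBB", "", ""]
    print("goal reached?", is_goal(example))
    for new_state, cost in successors(example):
        print(cost, new_state)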
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab09/ex1_create_and_import.py
|
import pymysql
from pymysql.constants import CLIENT
connection = pymysql.connect(
host="localhost",
client_flag=CLIENT.MULTI_STATEMENTS
)
with connection: # Create a connection (pipe)
with connection.cursor() as cursor: # Create a cursor(pointer)
sql = "DROP DATABASE IF EXISTS css326_lab9"
cursor.execute(sql)
sql = "CREATE DATABASE css326_lab9"
cursor.execute(sql)
sql = "USE css326_lab9"
cursor.execute(sql)
myfile = open('sample_database.sql')
sql = myfile.read()
cursor.execute(sql)
print(f"Imported successfully".center(50,"-"))
cursor.execute("SHOW TABLES;")
for row in cursor:
print(*row)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/batch_insert_users.py
|
<reponame>waterthatfrozen/Y3T1-CodeVault
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab8",
)
with connection:
with connection.cursor() as cursor:
sql = """INSERT INTO user(title, first_name, last_name, email)
VALUES (%s, %s, %s, %s);
"""
values = [
("Mr", "James", "NoTime", "<EMAIL>"),
("Mr", "Dora", "Emon", "<EMAIL>"),
("Mr", "Aegon", "Targaryen", "<EMAIL>"),
("Mr", "Walter", "White", "<EMAIL>")
]
try:
cursor.executemany(sql, values)
connection.commit() # equal to save
except Exception as err:
print("Error, rollbacked!",err)
connection.rollback()
print(f"Total {cursor.rowcount} inserted")
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/flaskBackend/07_axios.py
|
import peewee
from flask import Flask, jsonify, request
from flask import render_template, send_from_directory, abort
import os
from product import *
app = Flask(__name__)
# This hook ensures that a connection is opened to handle any queries
# generated by the request.
@app.before_request
def _db_connect():
db.connect()
# This hook ensures that the connection is closed when we've finished
# processing the request.
@app.teardown_request
def _db_close(exc):
if not db.is_closed():
db.close()
@app.route("/imgs/<image_name>")
def get_image(image_name):
try:
return send_from_directory("./static/frontend/imgs/", image_name)
except FileNotFoundError:
abort(404)
@app.route("/")
def index():
return app.send_static_file("frontend/axios.html")
@app.route("/product")
def product_list():
results = Product.select().dicts()
for r in list(results):
print(r)
return jsonify({'products':list(results)})
@app.route("/delete", methods=["POST"])
def delete_product():
pid = request.get_json()
try:
product = Product.get(Product.id == pid["id"])
product.delete_instance()
except peewee.DoesNotExist:
return jsonify({'response': 'error'})
return jsonify({'response': 'success'})
if __name__ == "__main__":
"""
The default port is 5000. To change the port of your service in PowerShell, use the following command:
$env:PORT = xxxx
"""
app.run(host="0.0.0.0",port=int(os.environ.get('PORT',5000)))
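# Quick manual test (illustrative only; assumes the server above is running on
# localhost:5000, the Product table is populated, and `requests` is installed):
#   import requests
#   print(requests.get("http://localhost:5000/product").json())
#   print(requests.post("http://localhost:5000/delete", json={"id": "<some-uuid>"}).json())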
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/flaskBackend/02_routing.py
|
from flask import Flask
import os
app = Flask(__name__)
@app.route("/")
def hello_world():
return "<p>Hello, World!</p>"
@app.route("/hi")
def say_hi():
return "<p>Hi, there!</p>"
@app.route("/hello/<name>")
def hello(name):
return f"<p>Hello, {name}!</p>"
if __name__ == "__main__":
"""
The default port is 5000. To change the port of your service in PowerShell, use the following command:
$env:PORT = xxxx
"""
app.run(host="0.0.0.0",port=int(os.environ.get('PORT',5000)))
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab08/ex3_insert_data.py
|
<filename>CSS326/Lab08/ex3_insert_data.py<gh_stars>0
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab8",
)
with connection:
with connection.cursor() as cursor:
sql2 = """INSERT INTO course(title, code, credit)
VALUES (%s, %s, %s);
"""
values = [("How to train your dragon","DRA101",3),
("Black magic defense","HOG411",3),("Killing License","SPY007",3)]
try:
cursor.executemany(sql2, values)
connection.commit() # equal to save
except Exception as err:
print("Error, rollbacked!",err)
connection.rollback()
print(f"Total {cursor.rowcount} course inserted")
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab09/ex3_agent_performance.py
|
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab9"
)
with connection:
with connection.cursor() as cursor:
sql = """SELECT a.AGENT_NAME, SUM(c.OUTSTANDING_AMT) FROM customer c INNER JOIN agents a ON c.AGENT_CODE = a.AGENT_CODE GROUP BY c.AGENT_CODE;"""
cursor.execute(sql)
for row in cursor:
print(*row)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab09/ex4_monthly_sale.py
|
<reponame>waterthatfrozen/Y3T1-CodeVault
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab9"
)
with connection:
with connection.cursor() as cursor:
sql = """SELECT MONTHNAME(ORD_DATE),SUM(ORD_AMOUNT) FROM orders GROUP BY MONTH(ORD_DATE) ORDER BY MONTH(ORD_DATE);"""
cursor.execute(sql)
for row in cursor:
print(*row)
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/flaskBackend/03_template.py
|
<gh_stars>0
from flask import Flask
from flask import render_template
import os
app = Flask(__name__)
@app.route("/")
def hello_world():
return "<p>Hello, World!</p>"
@app.route("/index")
def index():
return render_template("index.html")
@app.route("/hello/<name>")
def hello(name):
return render_template("hello.html", name=name)
if __name__ == "__main__":
"""
The default port is 5000. To change the port of your service in PowerShell, use the following command:
$env:PORT = xxxx
"""
app.run(host="0.0.0.0",port=int(os.environ.get('PORT',5000)))
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/flaskBackend/product.py
|
from peewee import *
import uuid
db = SqliteDatabase('database/product.db')
class Product(Model):
id = UUIDField(primary_key=True) #Autogenerated key, required for Peewee
productid = CharField(unique=True)
name = CharField()
desc = TextField()
price = FloatField()
image = CharField()
class Meta:
database = db
def create():
db.connect()
db.create_tables([Product])
db.close()
def addProduct():
db.connect()
id = input("Product ID : ")
name = input("Product Name : ")
desc = input("Product Desc : ")
price = float(input("Product Price : "))
image = input("Image name : ")
product = Product(id=uuid.uuid4(), productid=id,name=name,desc=desc, price=price,image=image)
product.save(force_insert=True)
db.close()
def delProduct(pid):
db.connect()
product = Product.get(Product.productid == pid)
product.delete_instance()
db.close()
def new_db():
create()
for i in range(3):
addProduct()
def list_item():
results = Product.select().dicts()
for r in list(results):
print(r)
if __name__ == "__main__":
new_db()
list_item()
# addProduct()
# delProduct("001")
|
waterthatfrozen/Y3T1-CodeVault
|
DES322/GitLecture/OnlineStore/main.py
|
<filename>DES322/GitLecture/OnlineStore/main.py<gh_stars>0
import sys
# `os` has no __version__ attribute; print the interpreter version instead.
print(sys.version)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS326/Lab09/ex2_find_min_max.py
|
import pymysql
connection = pymysql.connect(
host="localhost",
database="css326_lab9"
)
with connection:
with connection.cursor() as cursor:
sql = """SELECT MAX(OUTSTANDING_AMT),MIN(OUTSTANDING_AMT),AVG(OUTSTANDING_AMT) FROM customer;"""
cursor.execute(sql)
for row in cursor:
print(*row)
|
waterthatfrozen/Y3T1-CodeVault
|
CSS324/Homework 1/utils.py
|
<reponame>waterthatfrozen/Y3T1-CodeVault
from collections import deque
import copy
def create_node(state, parent, action, path_cost, depth):
return (state, parent, action, path_cost, depth)
def print_solution(n):
r = deque()
while n is not None:
r.appendleft(n[0])
n = n[1]
step = 0
for s in r:
print("Step",step)
print_s = []
for i in range(6):
if len(s[i]) < 4:
empty_height = 4 - len(s[i])
print_s.append('.'*empty_height+s[i])
else:
print_s.append(s[i])
for i in range(4):
for j in range(6):
print(print_s[j][i],end=' ')
print()
step+=1
print()
|
Josue-Rodriguez-98/TwitterAPI
|
twitter.py
|
<gh_stars>0
# -----------------------------------------------------------
# Assignment: Twitter Data Analysis
# Intelligent Systems
# Developed by <NAME> (11641196)
# 2. Use regular expressions to clean the text, removing tags, special characters, URLs and images (done)
# 2.1 Use the Viterbi algorithm to segment hashtags
# -----------------------------------------------------------
import tweepy as tw
import re
import os
from dotenv import load_dotenv
#Twitter API credentials, loaded from the .env file
load_dotenv()
consumer_key = os.getenv('consumer_key')
consumer_secret = os.getenv('consumer_secret')
access_token = os.getenv('access_token')
access_secret = os.getenv('access_secret')
#Initialize the API
auth = tw.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tw.API(auth, wait_on_rate_limit=True)
#Removes emojis from the string (tweet)
def removeEmojis(tweet):
return tweet.encode('ascii', 'ignore').decode('ascii')
#--------------------------------------------------------
#Removes blank lines from the string (tweet)
def removeEmptyLines(tweet):
lines = tweet.split('\n')
nonEmpty = [line for line in lines if line.strip() != '']
retVal = ''
for line in nonEmpty:
retVal += line + '\n'
return retVal[:len(retVal)-1]
#--------------------------------------------------------
#Removes tags, special characters and URLs
def removeTrash(tweet):
#Regular expression to remove usernames
retVal = re.sub(r'@[\w]+', '', tweet, flags=re.I)
#Regular expression to remove links (using Twitter's t.co link service)
retVal = re.sub(r'https://t.co/[\w]+','', retVal, flags = re.I)
#Regular expression to remove special characters
retVal = re.sub(r'[^#a-zA-Z0-9 \n]+','',retVal, flags = re.I)
#Regular expression to remove consecutive whitespace
retVal = re.sub(r"\s+",' ', retVal, flags = re.I)
return retVal
#--------------------------------------------------------
#Calls all the helper methods that clean the tweet
def cleanTweet(tweet):
retVal = removeEmojis(tweet)
retVal = removeTrash(retVal)
retVal = removeEmptyLines(retVal)
return retVal
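#Example (illustrative only, with a made-up tweet): the pipeline above turns
#   "Check this out @user https://t.co/abc123 #StayAtHome"
#into roughly "Check this out #StayAtHome" (mentions, links and special
#characters removed, whitespace collapsed).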
#---------------------------------------------------------
def main():
#Keywords for our tweet query
query = 'coronavirus OR covid-19 OR #StayAtHome -filter:retweets'
#Start date for our tweets
date_since = '2020-2-22'
tweets = tw.Cursor(api.search, q = query, lang = 'en OR es', since = date_since, tweet_mode = 'extended').items(20)
filteredTweets = []
count = 1
for tweet in tweets:
print(f'##################Tweet #{count}#####################')
print(f'{tweet.full_text}')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++')
filteredText = cleanTweet(tweet.full_text)
#print(f'{filteredText}')
filteredTweets.append(filteredText)
count += 1
count = 0
for txt in filteredTweets:
print(f'##################Tweet #{count}#####################')
print(f'{txt}')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++')
count += 1
#---------------------------------------------------------
if '__main__' == __name__:
main()
|
MOBSkuchen/bodytracking
|
bodytracking/__init__.py
|
import cv2,mediapipe
__title__ = "bodytracking"
__version__ = "1.02"
__author__ = "Suprime"
class Hand:
def __init__(self,n_hand:int,lm:list,handLms):
self.hand_n = n_hand
self.sol = mediapipe.solutions
self.lm_list = lm
self.hand_lms = handLms
self.mpHands = self.sol.hands
self.WRIST = self.lm_list[0]
self.THUMB_CMC = self.lm_list[1]
self.THUMB_MCP = self.lm_list[2]
self.THUMB_IP = self.lm_list[3]
self.THUMB_TIP = self.lm_list[4]
self.INDEX_FINGER_MCP = self.lm_list[5]
self.INDEX_FINGER_PIP = self.lm_list[6]
self.INDEX_FINGER_DIP = self.lm_list[7]
self.INDEX_FINGER_TIP = self.lm_list[8]
self.MIDDLE_FINGER_MCP = self.lm_list[9]
self.MIDDLE_FINGER_PIP = self.lm_list[10]
self.MIDDLE_FINGER_DIP = self.lm_list[11]
self.MIDDLE_FINGER_TIP = self.lm_list[12]
self.RING_FINGER_MCP = self.lm_list[13]
self.RING_FINGER_PIP = self.lm_list[14]
self.RING_FINGER_DIP = self.lm_list[15]
self.RING_FINGER_TIP = self.lm_list[16]
self.PINKY_MCP = self.lm_list[17]
self.PINKY_PIP = self.lm_list[18]
self.PINKY_DIP = self.lm_list[19]
self.PINKY_TIP = self.lm_list[20]
self.THUMB = [self.THUMB_CMC,self.THUMB_MCP,self.THUMB_IP,self.THUMB_TIP]
self.INDEX = [self.INDEX_FINGER_MCP,self.INDEX_FINGER_PIP,self.INDEX_FINGER_DIP,self.INDEX_FINGER_TIP]
self.MIDDLE = [self.MIDDLE_FINGER_MCP,self.MIDDLE_FINGER_PIP,self.MIDDLE_FINGER_DIP,self.MIDDLE_FINGER_TIP]
self.RING = [self.RING_FINGER_MCP,self.RING_FINGER_PIP,self.RING_FINGER_DIP,self.RING_FINGER_TIP]
self.PINKY = [self.PINKY_MCP,self.PINKY_PIP,self.PINKY_DIP,self.PINKY_TIP]
class Pose:
def __init__(self,lm_list:list,res):
self.lm_list = lm_list
self.results = res
self.nose = lm_list[0]
self.left_eye_inner = lm_list[1]
self.left_eye = lm_list[2]
self.left_eye_outer = lm_list[3]
self.right_eye_inner = lm_list[4]
self.right_eye = lm_list[5]
self.right_eye_outer = lm_list[6]
self.left_ear = lm_list[7]
self.right_ear = lm_list[8]
self.mouth_left = lm_list[9]
self.mouth_right = lm_list[10]
self.left_shoulder = lm_list[11]
self.right_shoulder = lm_list[12]
self.left_elbow = lm_list[13]
self.right_elbow = lm_list[14]
self.left_wrist = lm_list[15]
self.right_wrist = lm_list[16]
self.left_pinky = lm_list[17]
self.right_pinky = lm_list[18]
self.left_index = lm_list[19]
self.right_index = lm_list[20]
self.left_thumb = lm_list[21]
self.right_thumb = lm_list[22]
self.left_hip = lm_list[23]
self.right_hip = lm_list[24]
self.left_knee = lm_list[25]
self.right_knee = lm_list[26]
self.left_ankle = lm_list[27]
self.right_ankle = lm_list[28]
self.left_heel = lm_list[29]
self.right_heel = lm_list[30]
self.left_foot_index = lm_list[31]
self.right_foot_index = lm_list[32]
class HandDetector:
def __init__(self,mode=False,maxHands=2,detectionConfidence=0.5,trackConfidence=0.5):
"""
Sets all the values for mediapipe and the other HandDetector functions.
"""
self.mode = mode
self.maxHands = maxHands
self.detectConf = detectionConfidence
self.trackConf = trackConfidence
self.sol = mediapipe.solutions
self.mpHands = self.sol.hands
self.hands = self.mpHands.Hands(self.mode,self.maxHands,self.detectConf,self.trackConf)
self.mpDraw = self.sol.drawing_utils
self.nt_list = [(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
def get_Hands(self,img):
"""
Finds the hands in the given img; mediapipe needs an RGB image,
so the frame is converted first.
Yields a Hand object with all the landmarks for each detected hand.
(Initialize the HandDetector itself only once; call this method per frame.)
"""
imgRGB = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
res = self.hands.process(imgRGB)
HAND = 0
if res.multi_hand_landmarks:
for handLms in res.multi_hand_landmarks:
HAND = HAND + 1
List = []
for id, lm in enumerate(handLms.landmark):
h, w, c = img.shape
cx, cy = int(lm.x * w), int(lm.y * h)
List.append((cx,cy))
yield Hand(HAND,List,handLms)
else:
return Hand(1,self.nt_list,[])
def draw_hand(self,img,hand:Hand):
"""
Draws the Landmarks and connections on the image.
"""
self.mpDraw.draw_landmarks(img,hand.hand_lms ,self.mpHands.HAND_CONNECTIONS)
return True
class PoseDetector:
def __init__(self,static_image_mode=False,model_complexity=1,smooth_landmarks=True,min_detection_confidence=0.5,min_tracking_confidence=0.5):
"""
Sets all the values for mediapipe and the other PoseDetector functions.
!!! ONLY INITIALIZE THIS ONCE!!!
"""
self.static_image_mode = static_image_mode
self.model_complexity = model_complexity
self.smooth_landmarks = smooth_landmarks
self.min_detection_conf = min_detection_confidence
self.min_tracking_conf = min_tracking_confidence
self.sol = mediapipe.solutions
self.mpPose = self.sol.pose
self.pose = self.mpPose.Pose()
self.nt_list = [(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),(0,0),]
self.mpDraw = self.sol.drawing_utils
def get_Pose(self,img,wd=True):
"""
Transforms the img to RGB and then builds the Pose object
based off all the landmarks on the frame.
Returns the Pose object with the complete list of landmarks.
"""
imgRGB = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
res = self.pose.process(imgRGB).pose_landmarks
if res:
List = []
if wd:
for id, lm in enumerate(res.landmark):
h, w, c = img.shape
cx, cy = int(lm.x * w), int(lm.y * h)
List.append((cx,cy))
try:
return Pose(List, res)
except IndexError:
return Pose(self.nt_list, [])
else:
return Pose(self.nt_list, res)
else:
return Pose(self.nt_list,[])
def draw_pose(self,img,pose:Pose):
"""
Draws the Landmarks and connections on the image.
"""
self.mpDraw.draw_landmarks(img,pose.results,self.mpPose.POSE_CONNECTIONS)
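# Minimal usage sketch (added for illustration; not part of the library).
# Assumes a webcam is available at index 0 and OpenCV can open a window.
if __name__ == "__main__":
    detector = HandDetector()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for hand in detector.get_Hands(frame):
            detector.draw_hand(frame, hand)
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()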
|
BitcoinNanoLabs/infinitum-wallet-server
|
ws_server.py
|
import asyncio
import websockets
from logging import getLogger, INFO, StreamHandler
logger = getLogger('websockets')
logger.setLevel(INFO)
logger.addHandler(StreamHandler())
clients = set()
async def handler(websocket, path):
msg = await websocket.recv()
print(f"Received: {msg}")
global clients
clients.add(websocket)
try:
await asyncio.wait([ws.send("{\"ack\": \"subscribe\",\"time\": \"10\"}") for ws in clients])
await asyncio.sleep(10)
finally:
clients.remove(websocket)
start_server = websockets.serve(handler, port=7078)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
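# Minimal client sketch (illustrative; not part of this server). It assumes
# the server above is running locally on port 7078:
#
#   import asyncio, websockets
#   async def client():
#       async with websockets.connect("ws://localhost:7078") as ws:
#           await ws.send("hello")
#           print(await ws.recv())
#   asyncio.get_event_loop().run_until_complete(client())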
|
pranav083/puauto_login
|
pu_cam.py
|
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import webbrowser
# firefox_capabilities = {}
# firefox_capabilities['marionette'] = True
# firefox_capabilities['binary'] = '/usr/bin/firefox'
browser = webdriver.Firefox()  # requires geckodriver to be installed
browser.get("http://bit.do/wifion")
time.sleep(2)  # based on processor speed, increase the delay while the browser initialises
username = browser.find_element_by_id("user")
password = browser.find_element_by_id("password")
username.send_keys("------")  # enter your PU wifi username here
password.send_keys("------")  # enter your PU wifi password here
login_attempt = browser.find_element_by_xpath("//*[@type='submit']")
login_attempt.submit()
|
lizardsystem/django-stored-messages
|
stored_messages/tests/urls.py
|
<gh_stars>0
from django.contrib import admin
from django.conf.urls import url, include
from stored_messages.tests.views import message_view, message_create, message_create_mixed
admin.autodiscover()
urlpatterns = [
url(r'^consume$', message_view),
url(r'^create$', message_create),
url(r'^create_mixed$', message_create_mixed),
url(r'^messages', include(('stored_messages.urls', 'reviews'), namespace='stored_messages'))
]
|
lizardsystem/django-stored-messages
|
stored_messages/tests/base.py
|
from __future__ import unicode_literals
from django.contrib.auth import get_user_model
from django.test import TestCase, RequestFactory
from importlib import reload
import mock
from stored_messages import storage
from stored_messages import settings
class BaseTest(TestCase):
def setUp(self):
# settings and storage modules should be reloaded
reload(settings)
reload(storage)
self.factory = RequestFactory()
self.user = get_user_model().objects.create_user("test_user", "<EMAIL>", "123456")
self.request = RequestFactory().get('/')
self.request.session = mock.MagicMock()
self.request.user = self.user
def tearDown(self):
self.user.delete()
class BackendBaseTest(BaseTest):
"""
Tests that need to access a Backend.
Given the dynamic nature of Stored Messages settings, retrieving the backend class when we
need to override settings is a little bit tricky
"""
def setUp(self):
super(BackendBaseTest, self).setUp()
self.backend = settings.stored_messages_settings.STORAGE_BACKEND()
def tearDown(self):
self.backend._flush()
|
lizardsystem/django-stored-messages
|
stored_messages/urls.py
|
"""
At the moment this module does something only when restframework is available
"""
from django.conf import settings
if 'rest_framework' in settings.INSTALLED_APPS:
from rest_framework.routers import DefaultRouter
from django.conf.urls import url, include
from . import views
router = DefaultRouter()
router.register(r'inbox', views.InboxViewSet, basename='inbox')
urlpatterns = [
url(r'^', include(router.urls)),
url(r'^mark_all_read/$', views.mark_all_read, name='mark_all_read'),
]
|
fastfists/project
|
tools.py
|
<gh_stars>1-10
import pandas as pd
well_productions = pd.read_csv("well_productions/well productions.csv")
"""
Sets a proppant per stage as "ppf" defaults to the maximum weight
method can be ["min, "max", "avg"]
"""
def propant_per_stage(dataframe: pd.DataFrame, method="avg"):
if method == "min":
val = min(dataframe["proppant weight (lbs)"])
elif method == "max":
val = max(dataframe["proppant weight (lbs)"])
elif method == "avg":
val = dataframe["proppant weight (lbs)"].describe()["mean"]
else:
raise Exception(f"No method found for {method}")
dataframe["ppf"] = val
"""
Calculates the original oil in place and places the data in
row "ooip"
"""
def original_oil_in_place(dataframe: pd.DataFrame):
"""
Sets a pump rate as "pr" defaults to the maximum weight
method can be ["min, "max", "avg"]
"""
def pump_rate(dataframe: pd.DataFrame, method="avg"):
if method == "min":
val = min(dataframe["pump rate (cubic feet/min)"])
elif method == "max":
val = max(dataframe["pump rate (cubic feet/min)"])
elif method == "avg":
val = dataframe["pump rate (cubic feet/min)"].describe()["mean"]
else:
raise Exception(f"No method found for {method}")
dataframe["pr"] = val
def calc_production():
print("hi")
def well_length(dataframe: pd.DataFrame):
dataframe["well length"] = dataframe["easting"].iloc[-1] - dataframe["easting"][0]
def frac_stages(dataframe : pd.DataFrame):
dataframe["frac stages"] = dataframe[dataframe["proppant weight (lbs)"].isna() == False].shape[0]
|
machow/duckdb_engine
|
duckdb_engine/tests/test_basic.py
|
from hypothesis import assume, given
from hypothesis.strategies import text
from pytest import fixture, mark
from sqlalchemy import (
Column,
ForeignKey,
Integer,
MetaData,
Sequence,
String,
Table,
create_engine,
inspect,
)
from sqlalchemy.dialects.postgresql.base import PGInspector
from sqlalchemy.engine import Engine
from sqlalchemy.engine.url import registry
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import RelationshipProperty, Session, relationship, sessionmaker
@fixture
def engine() -> Engine:
registry.register("duckdb", "duckdb_engine", "Dialect")
eng = create_engine("duckdb:///:memory:")
Base.metadata.create_all(eng)
return eng
Base = declarative_base()
class FakeModel(Base): # type: ignore
__tablename__ = "fake"
id = Column(Integer, Sequence("fakemodel_id_sequence"), primary_key=True)
name = Column(String)
owner = relationship("Owner") # type: RelationshipProperty[Owner]
class Owner(Base): # type: ignore
__tablename__ = "owner"
id = Column(Integer, Sequence("owner_id"), primary_key=True)
fake_id = Column(Integer, ForeignKey("fake.id"))
owned = relationship(
FakeModel, back_populates="owner"
) # type: RelationshipProperty[FakeModel]
@fixture
def session(engine: Engine) -> Session:
return sessionmaker(bind=engine)()
def test_basic(session: Session) -> None:
session.add(FakeModel(name="Frank"))
session.commit()
frank = session.query(FakeModel).one() # act
assert frank.name == "Frank"
def test_foreign(session: Session) -> None:
model = FakeModel(name="Walter")
session.add(model)
session.add(Owner(owned=model))
session.commit()
owner = session.query(Owner).one() # act
assert owner.owned.name == "Walter"
@given(text())
def test_simple_string(s: str) -> None:
assume("\x00" not in s)
eng = create_engine("duckdb:///:memory:")
Base.metadata.create_all(eng)
session = sessionmaker(bind=eng)()
model = FakeModel(name=s)
session.add(model)
session.add(Owner(owned=model))
session.commit()
owner = session.query(Owner).one() # act
assert owner.owned.name == s
def test_get_tables(inspector: PGInspector) -> None:
assert inspector.get_table_names()
@fixture
def inspector(engine: Engine, session: Session) -> PGInspector:
session.execute("create table test (id int);")
session.commit()
meta = MetaData()
Table("test", meta)
return inspect(engine)
def test_get_columns(inspector: PGInspector) -> None:
inspector.get_columns("test", None)
def test_get_foreign_keys(inspector: PGInspector) -> None:
inspector.get_foreign_keys("test", None)
@mark.xfail(reason="reflection not yet supported in duckdb", raises=NotImplementedError)
def test_get_check_constraints(inspector: PGInspector) -> None:
inspector.get_check_constraints("test", None)
def test_get_unique_constraints(inspector: PGInspector) -> None:
inspector.get_unique_constraints("test", None)
def test_reflect(session: Session, engine: Engine) -> None:
session.execute("create table test (id int);")
session.commit()
meta = MetaData(engine)
meta.reflect(only=["test"])
def test_commit(session: Session, engine: Engine) -> None:
session.execute("commit;")
from IPython.core.interactiveshell import InteractiveShell
shell = InteractiveShell()
assert not shell.run_line_magic("load_ext", "sql")
assert not shell.run_line_magic("sql", "duckdb:///:memory:")
assert shell.run_line_magic("sql", "select 42;") == [(42,)]
def test_table_reflect(session: Session, engine: Engine) -> None:
session.execute("create table test (id int);")
session.commit()
meta = MetaData()
user_table = Table("test", meta)
insp = inspect(engine)
insp.reflect_table(user_table, None)
def test_fetch_df_chunks() -> None:
import duckdb
duckdb.connect(":memory:").execute("select 1").fetch_df_chunk(1)
def test_description() -> None:
import duckdb
duckdb.connect("").description
|
aiorchestra/aiorchestra-persistency
|
aiorchestra_persistence/models.py
|
# Author: <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa
from sqlalchemy import exc
from sqlalchemy import orm
from sqlalchemy.ext import declarative
Base = declarative.declarative_base()
def with_session(action):
def wrapper(*args, **kwargs):
self_or_class = list(args)[1]
if not hasattr(self_or_class, 'engine'):
raise Exception("Engine for ORM model was not configured.")
try:
session = orm.sessionmaker(bind=self_or_class.engine)
setattr(self_or_class, 'session', session)
action(*args, **kwargs)
except (exc.SQLAlchemyError, Exception):
if hasattr(self_or_class, 'session'):
del self_or_class.session
return wrapper
class BaseDatabaseModel(object):
@with_session
def save(self, engine):
self.session.add(self)
self.session.commit()
@with_session
def delete(self, engine):
self.session.delete(self)
self.session.commit()
@with_session
def update(self, engine, **values):
try:
for key in values:
if hasattr(self, key):
setattr(self, key, values[key])
self.save(engine)
return self.find_by(engine, name=self.name)
except Exception as e:
raise e
@classmethod
def list(cls, engine):
return cls.session.query(cls).all()
@classmethod
def find_by(cls, engine, **kwargs):
obj = cls.session.query(cls).filter_by(**kwargs).first()
return obj if obj else None
@classmethod
def get_all_by(cls, engine, **kwargs):
objs = cls.session.query(cls).filter_by(**kwargs).all()
return objs if objs else []
class ContextModel(Base, BaseDatabaseModel):
__tablename__ = "context"
name = sa.Column(sa.String(), primary_key=True,
nullable=False, unique=True)
status = sa.Column(sa.String(), nullable=False)
template_path = sa.Column(sa.String(), nullable=False)
inputs = sa.Column(sa.Text(), nullable=False)
def __init__(self, context, engine):
s_context = context.serialize()
self.name = s_context['name']
self.status = s_context['status']
self.template_path = s_context['template_path']
self.inputs = s_context['inputs']
self.save(engine)
for node in s_context['nodes']:
ContextNodeModel(engine, context, node)
def jsonify(self):
return {
'name': self.name,
'status': self.status,
'template_path': self.template_path,
'inputs': self.inputs,
}
@classmethod
def assemble(cls, name, engine):
new_context = cls.find_by(engine, name=name).jsonify()
nodes = [node.jsonify() for node in
ContextNodeModel.get_all_by(
engine, context=new_context['name'])]
new_context.update(nodes=nodes,
path=new_context['template_path'])
class ContextNodeModel(Base, BaseDatabaseModel):
__tablename__ = "node"
context = sa.Column(sa.ForeignKey('context.name'))
name = sa.Column(sa.String(), nullable=False, unique=True)
is_provisioned = sa.Column(sa.Boolean(), nullable=False)
properties = sa.Column(sa.Text(), nullable=True)
attributes = sa.Column(sa.Text(), nullable=True)
runtime_properties = sa.Column(sa.Text(), nullable=True)
def __init__(self, engine, context, node):
self.context = context.name
self.name = node['__name']
self.is_provisioned = node['is_provisioned']
self.properties = node['__properties']
self.attributes = node['__attributes']
self.runtime_properties = node['runtime_properties']
self.save(engine)
def jsonify(self):
return {
'name': self.name,
'is_provisioned': self.is_provisioned,
'__properties': self.properties,
'__attributes': self.attributes,
'runtime_properties': self.runtime_properties,
}
|
aiorchestra/aiorchestra-persistency
|
migrations/versions/f1a1565a8dd4_aiorchestra_persistency.py
|
<reponame>aiorchestra/aiorchestra-persistency
"""AIOrchestra persistence
Revision ID: f1a1565a8dd4
Revises:
Create Date: 2016-08-19 11:04:20.567324
"""
# revision identifiers, used by Alembic.
import sqlalchemy as sa
from alembic import op
revision = 'f1a1565a8dd4'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
'context',
sa.Column('name', sa.String(), nullable=False, unique=True, primary_key=True),
sa.Column('status', sa.String(), nullable=False),
sa.Column('template_path', sa.String(), nullable=False),
sa.Column('inputs', sa.Text(), nullable=False),
)
op.create_table(
'node',
sa.Column('context', sa.String(), sa.ForeignKey('context.name'), ),
sa.Column('name', sa.String(), nullable=False, unique=True),
sa.Column('is_provisioned', sa.Boolean(), nullable=False),
sa.Column('properties', sa.Text(), nullable=True),
sa.Column('attributes', sa.Text(), nullable=True),
sa.Column('runtime_properties', sa.Text(), nullable=True),
)
def downgrade():
op.drop_table('node')
op.drop_table('context')
|
abhishek62073/algorithm-paradise
|
entries/abhishek'sswap.py
|
a = int(input("Enter a number: "))
b = int(input("Enter another number: "))
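# Swap a and b using arithmetic, without a temporary variable.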
a = a+b
b = a-b
a = a-b
print(a)
print(b)
|
Nazukixv/OpenNMT-py
|
onmt/decoders/ensemble.py
|
<filename>onmt/decoders/ensemble.py
"""
Ensemble decoding.
Decodes using multiple models simultaneously,
combining their prediction distributions by averaging.
All models in the ensemble must share a target vocabulary.
"""
import torch
import torch.nn as nn
from onmt.encoders.encoder import EncoderBase
from onmt.models import NMTModel
import onmt.model_builder
class EnsembleDecoderOutput(object):
""" Wrapper around multiple decoder final hidden states """
def __init__(self, model_dec_outs):
self.model_dec_outs = tuple(model_dec_outs)
def squeeze(self, dim=None):
"""
Delegate squeeze to avoid modifying
:obj:`Translator.translate_batch()`
"""
return EnsembleDecoderOutput([
x.squeeze(dim) for x in self.model_dec_outs])
def __getitem__(self, index):
return self.model_dec_outs[index]
class EnsembleEncoder(EncoderBase):
""" Dummy Encoder that delegates to individual real Encoders """
def __init__(self, model_encoders):
super(EnsembleEncoder, self).__init__()
self.model_encoders = nn.ModuleList(model_encoders)
def forward(self, src, lengths=None):
enc_hidden, memory_bank, _ = zip(*[
model_encoder(src, lengths)
for model_encoder in self.model_encoders])
return enc_hidden, memory_bank, lengths
class EnsembleDecoder(nn.Module):
""" Dummy Decoder that delegates to individual real Decoders """
def __init__(self, model_decoders):
super(EnsembleDecoder, self).__init__()
self.model_decoders = nn.ModuleList(model_decoders)
def forward(self, tgt, memory_bank, memory_lengths=None, step=None):
""" See :obj:`RNNDecoderBase.forward()` """
# Memory_lengths is a single tensor shared between all models.
# This assumption will not hold if Translator is modified
# to calculate memory_lengths as something other than the length
# of the input.
dec_outs, attns = zip(*[
model_decoder(
tgt, memory_bank[i], memory_lengths, step=step)
for i, model_decoder in enumerate(self.model_decoders)])
mean_attns = self.combine_attns(attns)
return EnsembleDecoderOutput(dec_outs), mean_attns
def combine_attns(self, attns):
result = {}
for key in attns[0].keys():
result[key] = torch.stack([attn[key] for attn in attns]).mean(0)
return result
def init_state(self, src, memory_bank, enc_hidden, with_cache=False):
""" See :obj:`RNNDecoderBase.init_state()` """
for i, model_decoder in enumerate(self.model_decoders):
model_decoder.init_state(src, memory_bank[i],
enc_hidden[i], with_cache)
def map_state(self, fn):
for model_decoder in self.model_decoders:
model_decoder.map_state(fn)
class EnsembleGenerator(nn.Module):
"""
Dummy Generator that delegates to individual real Generators,
and then averages the resulting target distributions.
"""
def __init__(self, model_generators, raw_probs=False):
super(EnsembleGenerator, self).__init__()
self.model_generators = nn.ModuleList(model_generators)
self._raw_probs = raw_probs
def forward(self, hidden, attn=None, src_map=None):
"""
Compute a distribution over the target dictionary
by averaging distributions from models in the ensemble.
All models in the ensemble must share a target vocabulary.
"""
distributions = torch.stack(
[mg(h) if attn is None else mg(h, attn, src_map)
for h, mg in zip(hidden, self.model_generators)]
)
if self._raw_probs:
return torch.log(torch.exp(distributions).mean(0))
else:
return distributions.mean(0)
class EnsembleModel(NMTModel):
""" Dummy NMTModel wrapping individual real NMTModels """
def __init__(self, models, raw_probs=False):
encoder = EnsembleEncoder(model.encoder for model in models)
decoder = EnsembleDecoder(model.decoder for model in models)
super(EnsembleModel, self).__init__(encoder, decoder)
self.generator = EnsembleGenerator(
[model.generator for model in models], raw_probs)
self.models = nn.ModuleList(models)
def load_test_model(opt, dummy_opt):
""" Read in multiple models for ensemble """
shared_fields = None
shared_model_opt = None
models = []
for model_path in opt.models:
fields, model, model_opt = \
onmt.model_builder.load_test_model(opt,
dummy_opt,
model_path=model_path)
if shared_fields is None:
shared_fields = fields
else:
for key, field in fields.items():
if field is not None and 'vocab' in field.__dict__:
assert field.vocab.stoi == shared_fields[key].vocab.stoi, \
'Ensemble models must use the same preprocessed data'
models.append(model)
if shared_model_opt is None:
shared_model_opt = model_opt
ensemble_model = EnsembleModel(models, opt.avg_raw_probs)
return shared_fields, ensemble_model, shared_model_opt
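# Illustration (not part of OpenNMT): the two averaging modes used by
# EnsembleGenerator above, on a toy pair of log-probability vectors.
#   import torch
#   log_p = torch.log(torch.tensor([[0.9, 0.1], [0.5, 0.5]]))
#   avg_log_space = log_p.mean(0)                        # default mode
#   avg_raw_probs = torch.log(torch.exp(log_p).mean(0))  # raw_probs=True mode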
|
Nazukixv/OpenNMT-py
|
onmt/decoders/decoder.py
|
<reponame>Nazukixv/OpenNMT-py
""" Base Class and function for Decoders """
import torch
import torch.nn as nn
import onmt.models.stacked_rnn
from onmt.utils.misc import aeq
from onmt.utils.rnn_factory import rnn_factory
class RNNDecoderBase(nn.Module):
"""
Base recurrent attention-based decoder class.
Specifies the interface used by different decoder types
and required by :obj:`models.NMTModel`.
.. mermaid::
graph BT
A[Input]
subgraph RNN
C[Pos 1]
D[Pos 2]
E[Pos N]
end
G[Decoder State]
H[Decoder State]
I[Outputs]
F[memory_bank]
A--emb-->C
A--emb-->D
A--emb-->E
H-->C
C-- attn --- F
D-- attn --- F
E-- attn --- F
C-->I
D-->I
E-->I
E-->G
F---I
Args:
rnn_type (:obj:`str`):
style of recurrent unit to use, one of [RNN, LSTM, GRU, SRU]
bidirectional_encoder (bool) : use with a bidirectional encoder
num_layers (int) : number of stacked layers
hidden_size (int) : hidden size of each layer
attn_type (str) : see :obj:`onmt.modules.GlobalAttention`
coverage_attn (str): see :obj:`onmt.modules.GlobalAttention`
context_gate (str): see :obj:`onmt.modules.ContextGate`
copy_attn (bool): setup a separate copy attention mechanism
dropout (float) : dropout value for :obj:`nn.Dropout`
embeddings (:obj:`onmt.modules.Embeddings`): embedding module to use
"""
def __init__(self, rnn_type, bidirectional_encoder, num_layers,
hidden_size, attn_type="general", attn_func="softmax",
coverage_attn=False, context_gate=None,
copy_attn=False, dropout=0.0, embeddings=None,
reuse_copy_attn=False):
super(RNNDecoderBase, self).__init__()
# Basic attributes.
self.decoder_type = 'rnn'
self.bidirectional_encoder = bidirectional_encoder
self.num_layers = num_layers
self.hidden_size = hidden_size
self.embeddings = embeddings
self.dropout = nn.Dropout(dropout)
# Decoder state
self.state = {}
# Build the RNN.
self.rnn = self._build_rnn(rnn_type,
input_size=self._input_size,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout)
# Set up the context gate.
self.context_gate = None
if context_gate is not None:
self.context_gate = onmt.modules.context_gate_factory(
context_gate, self._input_size,
hidden_size, hidden_size, hidden_size
)
# Set up the standard attention.
self._coverage = coverage_attn
self.attn = onmt.modules.GlobalAttention(
hidden_size, coverage=coverage_attn,
attn_type=attn_type, attn_func=attn_func
)
# Set up a separated copy attention layer, if needed.
self._copy = False
if copy_attn and not reuse_copy_attn:
self.copy_attn = onmt.modules.GlobalAttention(
hidden_size, attn_type=attn_type, attn_func=attn_func
)
if copy_attn:
self._copy = True
self._reuse_copy_attn = reuse_copy_attn
def init_state(self, src, memory_bank, encoder_final, with_cache=False):
""" Init decoder state with last state of the encoder """
def _fix_enc_hidden(hidden):
# The encoder hidden is (layers*directions) x batch x dim.
# We need to convert it to layers x batch x (directions*dim).
if self.bidirectional_encoder:
hidden = torch.cat([hidden[0:hidden.size(0):2],
hidden[1:hidden.size(0):2]], 2)
return hidden
if isinstance(encoder_final, tuple): # LSTM
self.state["hidden"] = tuple([_fix_enc_hidden(enc_hid)
for enc_hid in encoder_final])
else: # GRU
self.state["hidden"] = (_fix_enc_hidden(encoder_final), )
# Init the input feed.
batch_size = self.state["hidden"][0].size(1)
h_size = (batch_size, self.hidden_size)
self.state["input_feed"] = \
self.state["hidden"][0].data.new(*h_size).zero_().unsqueeze(0)
self.state["coverage"] = None
def update_state(self, rnnstate, input_feed, coverage):
""" Update decoder state """
if not isinstance(rnnstate, tuple):
self.state["hidden"] = (rnnstate,)
else:
self.state["hidden"] = rnnstate
self.state["input_feed"] = input_feed
self.state["coverage"] = coverage
def map_state(self, fn):
self.state["hidden"] = tuple(map(lambda x: fn(x, 1),
self.state["hidden"]))
self.state["input_feed"] = fn(self.state["input_feed"], 1)
def detach_state(self):
""" Need to document this """
self.state["hidden"] = tuple([_.detach()
for _ in self.state["hidden"]])
self.state["input_feed"] = self.state["input_feed"].detach()
def forward(self, tgt, memory_bank, memory_lengths=None,
step=None):
"""
Args:
tgt (`LongTensor`): sequences of padded tokens
`[tgt_len x batch x nfeats]`.
memory_bank (`FloatTensor`): vectors from the encoder
`[src_len x batch x hidden]`.
memory_lengths (`LongTensor`): the padded source lengths
`[batch]`.
Returns:
(`FloatTensor`,:obj:`onmt.Models.DecoderState`,`FloatTensor`):
* dec_outs: output from the decoder (after attn)
`[tgt_len x batch x hidden]`.
* attns: distribution over src at each tgt
`[tgt_len x batch x src_len]`.
"""
# Run the forward pass of the RNN.
dec_state, dec_outs, attns = self._run_forward_pass(
tgt, memory_bank, memory_lengths=memory_lengths)
# Update the state with the result.
output = dec_outs[-1]
coverage = None
if "coverage" in attns:
coverage = attns["coverage"][-1].unsqueeze(0)
self.update_state(dec_state, output.unsqueeze(0), coverage)
# Concatenates sequence of tensors along a new dimension.
# NOTE: v0.3 to 0.4: dec_outs / attns[*] may not be list
# (in particular in case of SRU) it was not raising error in 0.3
# since stack(Variable) was allowed.
# In 0.4, SRU returns a tensor that shouldn't be stacked
if type(dec_outs) == list:
dec_outs = torch.stack(dec_outs)
for k in attns:
if type(attns[k]) == list:
attns[k] = torch.stack(attns[k])
# TODO change the way attns is returned dict => list or tuple (onnx)
return dec_outs, attns
class StdRNNDecoder(RNNDecoderBase):
"""
Standard fully batched RNN decoder with attention.
Faster implementation, uses CuDNN for implementation.
See :obj:`RNNDecoderBase` for options.
Based around the approach from
"Neural Machine Translation By Jointly Learning To Align and Translate"
:cite:`Bahdanau2015`
Implemented without input_feeding and currently with no `coverage_attn`
or `copy_attn` support.
"""
def _run_forward_pass(self, tgt, memory_bank, memory_lengths=None):
"""
Private helper for running the specific RNN forward pass.
Must be overriden by all subclasses.
Args:
tgt (LongTensor): a sequence of input tokens tensors
[len x batch x nfeats].
memory_bank (FloatTensor): output(tensor sequence) from the
encoder RNN of size (src_len x batch x hidden_size).
state (FloatTensor): hidden state from the encoder RNN for
initializing the decoder.
memory_lengths (LongTensor): the source memory_bank lengths.
Returns:
dec_state (Tensor): final hidden state from the decoder.
dec_outs ([FloatTensor]): an array of output of every time
step from the decoder.
attns (dict of (str, [FloatTensor]): a dictionary of different
type of attention Tensor array of every time
step from the decoder.
"""
assert not self._copy # TODO, no support yet.
assert not self._coverage # TODO, no support yet.
# Initialize local and return variables.
attns = {}
emb = self.embeddings(tgt)
# Run the forward pass of the RNN.
if isinstance(self.rnn, nn.GRU):
rnn_output, dec_state = self.rnn(emb, self.state["hidden"][0])
else:
rnn_output, dec_state = self.rnn(emb, self.state["hidden"])
# Check
tgt_len, tgt_batch, _ = tgt.size()
output_len, output_batch, _ = rnn_output.size()
aeq(tgt_len, output_len)
aeq(tgt_batch, output_batch)
# END
# Calculate the attention.
dec_outs, p_attn = self.attn(
rnn_output.transpose(0, 1).contiguous(),
memory_bank.transpose(0, 1),
memory_lengths=memory_lengths
)
attns["std"] = p_attn
# Calculate the context gate.
if self.context_gate is not None:
dec_outs = self.context_gate(
emb.view(-1, emb.size(2)),
rnn_output.view(-1, rnn_output.size(2)),
dec_outs.view(-1, dec_outs.size(2))
)
dec_outs = \
dec_outs.view(tgt_len, tgt_batch, self.hidden_size)
dec_outs = self.dropout(dec_outs)
return dec_state, dec_outs, attns
def _build_rnn(self, rnn_type, **kwargs):
rnn, _ = rnn_factory(rnn_type, **kwargs)
return rnn
@property
def _input_size(self):
"""
Private helper returning the number of expected features.
"""
return self.embeddings.embedding_size
class InputFeedRNNDecoder(RNNDecoderBase):
"""
Input feeding based decoder. See :obj:`RNNDecoderBase` for options.
Based around the input feeding approach from
"Effective Approaches to Attention-based Neural Machine Translation"
:cite:`Luong2015`
.. mermaid::
graph BT
A[Input n-1]
AB[Input n]
subgraph RNN
E[Pos n-1]
F[Pos n]
E --> F
end
G[Encoder]
H[memory_bank n-1]
A --> E
AB --> F
E --> H
G --> H
"""
def _run_forward_pass(self, tgt, memory_bank, memory_lengths=None):
"""
See StdRNNDecoder._run_forward_pass() for description
of arguments and return values.
"""
# Additional args check.
input_feed = self.state["input_feed"].squeeze(0)
input_feed_batch, _ = input_feed.size()
_, tgt_batch, _ = tgt.size()
aeq(tgt_batch, input_feed_batch)
# END Additional args check.
# Initialize local and return variables.
dec_outs = []
attns = {"std": []}
if self._copy:
attns["copy"] = []
if self._coverage:
attns["coverage"] = []
emb = self.embeddings(tgt)
assert emb.dim() == 3 # len x batch x embedding_dim
dec_state = self.state["hidden"]
coverage = self.state["coverage"].squeeze(0) \
if self.state["coverage"] is not None else None
# Input feed concatenates hidden state with
# input at every time step.
for _, emb_t in enumerate(emb.split(1)):
emb_t = emb_t.squeeze(0)
decoder_input = torch.cat([emb_t, input_feed], 1)
rnn_output, dec_state = self.rnn(decoder_input, dec_state)
decoder_output, p_attn = self.attn(
rnn_output,
memory_bank.transpose(0, 1),
memory_lengths=memory_lengths)
if self.context_gate is not None:
# TODO: context gate should be employed
# instead of second RNN transform.
decoder_output = self.context_gate(
decoder_input, rnn_output, decoder_output
)
decoder_output = self.dropout(decoder_output)
input_feed = decoder_output
dec_outs += [decoder_output]
attns["std"] += [p_attn]
# Update the coverage attention.
if self._coverage:
coverage = coverage + p_attn \
if coverage is not None else p_attn
attns["coverage"] += [coverage]
# Run the forward pass of the copy attention layer.
if self._copy and not self._reuse_copy_attn:
_, copy_attn = self.copy_attn(decoder_output,
memory_bank.transpose(0, 1))
attns["copy"] += [copy_attn]
elif self._copy:
attns["copy"] = attns["std"]
# Return result.
return dec_state, dec_outs, attns
def _build_rnn(self, rnn_type, input_size,
hidden_size, num_layers, dropout):
assert not rnn_type == "SRU", "SRU doesn't support input feed! " \
"Please set -input_feed 0!"
if rnn_type == "LSTM":
stacked_cell = onmt.models.stacked_rnn.StackedLSTM
else:
stacked_cell = onmt.models.stacked_rnn.StackedGRU
return stacked_cell(num_layers, input_size,
hidden_size, dropout)
@property
def _input_size(self):
"""
Using input feed by concatenating input with attention vectors.
"""
return self.embeddings.embedding_size + self.hidden_size
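# Illustration (not part of OpenNMT): the bidirectional fix in init_state above
# reshapes an encoder hidden state of shape (layers*directions, batch, dim)
# into (layers, batch, directions*dim) by interleaving forward/backward states:
#   import torch
#   h = torch.randn(4, 8, 100)                    # 2 layers * 2 directions
#   fixed = torch.cat([h[0:4:2], h[1:4:2]], 2)    # -> shape (2, 8, 200)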
|
regalk13/updated_discord-BOT.py
|
launcher.py
|
<reponame>regalk13/updated_discord-BOT.py
from lib.bot import bot
from discord.ext.commands import Cog
VERSION = "0.0.32"
bot.run(VERSION)
|
regalk13/updated_discord-BOT.py
|
lib/cogs/fun.py
|
<gh_stars>1-10
import random
from random import choice, randint
from typing import Optional
from aiohttp import request
from discord import Member, Embed
from discord.errors import HTTPException
from discord.ext.commands import Cog
from discord.ext.commands import BucketType
from discord.ext.commands import command, cooldown, BadArgument
class Fun(Cog):
def __init__(self, bot):
self.bot = bot
@command(name= "8ball")
async def _8ball(self, ctx, *, question):
responses = ["Es cierto.",
"Es decididamente así.",
"Sin duda.",
"Sí definitivamente.",
"Puedes confiar en ello.",
"Como yo lo veo, sí.",
"Más probable.",
"Perspectivas buenas.",
"Si.",
"Las señales apuntan a que sí.",
"Respuesta confusa, intenta otra vez.",
"Pregunta de nuevo más tarde.",
"Mejor no decirte ahora.",
"No se puede predecir ahora.",
"Concéntrate y pregunta otra vez.",
"No cuentes con eso",
"Mi respuesta es no.",
"Mis fuentes dicen que no.",
"Outlook no es tan bueno",
"Muy dudoso."]
await ctx.send(f"Pregunta: {question}\nRespuesta: {random.choice(responses)}")
@command(name="fact")
@cooldown(3, 60, BucketType.guild)
async def animal_fact(self, ctx, animal: str):
if (animal := animal.lower()) in ("dog", "cat", "panda", "fox", "bird", "koala"):
fact_url = f"https://some-random-api.ml/facts/{animal}"
image_url = f"https://some-random-api.ml/img/{'birb' if animal == 'bird' else animal}"
async with request("GET", image_url, headers={}) as response:
if response.status == 200:
data = await response.json()
image_link = data["link"]
else:
image_link = None
async with request("GET", fact_url, headers={}) as response:
if response.status == 200:
data = await response.json()
embed = Embed(title=f"{animal.title()} fact",
description=data["fact"],
colour=ctx.author.colour)
if image_link is not None:
embed.set_image(url=image_link)
await ctx.send(embed=embed)
else:
await ctx.send(f"API returned a {response.status} status.")
else:
await ctx.send("No fact are aviable for that animal.")
@command(name="slap", aliases=["hit"])
async def slap_member(self, ctx, member: Member, *, reason: Optional[str] = "Ninguna razón"):
await ctx.send(f"{ctx.author.mention} amonesto a {member.mention} por {reason}!")
@slap_member.error
async def slap_member_error(self, ctx, exc):
if isinstance(exc, BadArgument):
await ctx.send("No puedo encontrar ese miembro.")
@command(name="echo", aliases=["say"])
@cooldown(1, 15, BucketType.guild)
async def echo_message(self, ctx, *, message):
await ctx.message.delete()
await ctx.send(message)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("fun")
def setup(bot):
bot.add_cog(Fun(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/mod.py
|
from asyncio import sleep
from re import search
import discord
from discord.ext import commands
from discord.ext.commands import Cog, Greedy
from better_profanity import profanity
from discord import Embed, Member
from discord.ext.commands import CheckFailure
from discord.ext.commands import command, has_permissions, bot_has_permissions
from datetime import datetime, timedelta
from typing import Optional
from ..db import db
profanity.load_censor_words_from_file("./data/profanity.txt")
class Mod(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.url_regex = r"(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'\".,<>?«»“”‘’]))"
self.links_allowed = (760460794139377714,)
async def kick_members(self, message, targets, reason):
for target in targets:
if (message.guild.me.top_role.position > target.top_role.position
and not target.guild_permissions.administrator):
await target.kick(reason=reason)
embed = Embed(title="Miembro kickeado",
colour=0xDD2222,
timestamp=datetime.utcnow())
embed.set_thumbnail(url=target.avatar_url)
fields = [("Miembro", f"{target.name} {target.display_name}", False),
("Kickeado por", target.display_name, message.author.display_name, False),
("Razón", reason, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
@command(name="kick")
@bot_has_permissions(kick_members=True)
@has_permissions(kick_members=True)
async def kick_command(self, ctx, targets: Greedy[Member], *, reason: Optional[str] = "Ninguna razón."):
if not len(targets):
await ctx.send("algún o algunos argumentos faltan.")
else:
await self.kick_members(ctx.message, targets, reason)
await ctx.send("Acción completada.")
@kick_command.error
async def kick_command_error(self, ctx, exc):
if isinstance(exc, CheckFailure):
await ctx.send("Insuficientes permisos para kickear.")
async def ban_members(self, message, targets, reason):
for target in targets:
if (message.guild.me.top_role.position > target.top_role.position
and not target.guild_permissions.administrator):
await target.ban(reason=reason)
embed = Embed(title="Miembro baneado",
colour=0xDD2222,
timestamp=datetime.utcnow())
embed.set_thumbnail(url=target.avatar_url)
fields = [("Miembro", f"{target.name}-{target.display_name}", False),
("baneado por", message.author.display_name, False),
("Razón", reason, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
@command(name="ban")
@bot_has_permissions(ban_members=True)
@has_permissions(ban_members=True)
async def ban_command(self, ctx, targets: Greedy[Member], *, reason: Optional[str] = "Ninguna razón."):
if not len(targets):
await ctx.send("algún o algunos argumentos faltan.")
else:
await self.ban_members(ctx.message, targets, reason)
await ctx.send("Acción completada.")
@ban_command.error
async def ban_command_error(self, ctx, exc):
if isinstance(exc, CheckFailure):
await ctx.send("Insuficientes permisos para banear.")
@command(name="clear", aliases=["purgue"])
@bot_has_permissions(manage_messages=True)
@has_permissions(manage_messages=True)
async def clear_menssages(self, ctx, targets: Greedy[Member], limit: Optional[int] = 1):
def _check(message):
return not len(targets) or message.author in targets
if 0 < limit <= 100:
with ctx.channel.typing():
await ctx.message.delete()
deleted = await ctx.channel.purge(limit=limit, after=datetime.utcnow()-timedelta(days=14),
check=_check)
await ctx.send(f"✅ Se han borrado {len(deleted):,} mensajes", delete_after=5)
else:
await ctx.send("El número de mensajes que desea borrar no esta entre los limites.")
async def unmute(self, ctx, targets, reason="Tiempo de mute expirado."):
for target in targets:
if self.mute_role in target.roles:
role_ids = db.field("SELECT RoleIDs FROM mutes WHERE UserID = ?", target.id)
roles = [ctx.guild.get_role(int(id_)) for id_ in role_ids.split(",") if len(id_)]
db.execute("DELETE FROM mutes WHERE UserID = ?", target.id)
await target.remove_roles(target.guild.get_role(764653159452114954))
await ctx.send(f"{target.mention} ha sido desmuteado.")
@command(name="unmute")
@bot_has_permissions(manage_roles=True)
@has_permissions(manage_roles=True)
async def unmute_members(self, ctx, targets: Greedy[Member], *, reason: Optional[str] = "Ninguna razón"):
if not len(targets):
await ctx.send("Por favor indica al miembro muteado.")
else:
for target in targets:
                await self.unmute(ctx, [target], reason=reason)
await target.send(f"Has sido desmuteado por {reason}.")
@command(name="mute")
@bot_has_permissions(manage_roles=True)
@has_permissions(manage_roles=True)
async def mute_members(self, message, targets: Greedy[Member], hours: Optional[int], *, reason: Optional[str] = "Ninguna razón"):
if not len(targets):
await message.channel.send("Por favor indica el miembro a mutear.")
else:
unmutes = []
for target in targets:
if not self.mute_role in target.roles:
if message.guild.me.top_role.position > target.top_role.position:
role_ids = ",".join([str(r.id) for r in target.roles])
end_time = datetime.utcnow() + timedelta(seconds=hours*3600) if hours else None
db.execute("INSERT INTO mutes VALUES (?, ?, ?)", target.id, role_ids, getattr(end_time, "isoformat", lambda: None)())
await target.add_roles(target.guild.get_role(764653159452114954))
await message.channel.send(f"{target.mention} ha sido muteado durante {hours} hora(s)")
await target.send(f"Has sido muteado del server por {reason}.")
if hours:
unmutes.append(target)
else:
await message.channel.send(f"{target.mention} no puede ser muteado.")
else:
await message.channel.send(f"{target.display_name} ya está muteado.")
if len(unmutes):
await sleep(hours)
                await self.unmute(message, targets)
            return unmutes
@command(name="addprofanity", aliases=["addswears", "addcurses"])
@has_permissions(manage_guild=True)
async def add_profanity(self, ctx, *words):
with open("./data/profanity.txt", "a", encoding="utf-8") as f:
f.write("".join([f"{w}\n" for w in words]))
profanity.load_censor_words_from_file("./data/profanity.txt")
await ctx.send("Acción completada.")
@command(name="delprofanity", aliases=["delswears", "delcurses"])
@has_permissions(manage_guild=True)
async def remove_profanity(self, ctx, *words):
with open("./data/profanity.txt", "r", encoding="utf-8") as f:
stored = [w.strip() for w in f.readlines()]
with open("./data/profanity.txt", "w", encoding="utf-8") as f:
f.write("".join([f"{w}\n" for w in stored if w not in words]))
profanity.load_censor_words_from_file("./data/profanity.txt")
await ctx.send("Acción completada.")
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.log_channel = self.bot.get_channel(762403035586101288)
self.mute_role = self.bot.guild.get_role(764557420788842538)
self.bot.cogs_ready.ready_up("mod")
@Cog.listener()
async def on_message(self, message):
def _check(m):
return (m.author == message.author
and len(m.mentions)
and (datetime.utcnow()-m.created_at).seconds < 60)
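        # (added note) _check flags cached messages from the same author that
        # contain mentions and are less than a minute old; five or more of
        # them trigger the anti-mention-spam mute below.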
if not message.author.bot:
if len(list(filter(lambda m: _check(m), self.bot.cached_messages))) >= 5:
await message.channel.send("No hagas SPAM de menciones!", delete_after=10)
unmutes = await self.mute_members(message, [message.author], 5, reason="SPAM de menciones")
if len(unmutes):
await sleep(5)
await self.unmute_members(message.guild, [message.author])
if profanity.contains_profanity(message.content):
await message.delete()
await message.channel.send("Mejora tú vocabulario por favor.", delete_after=10)
elif message.channel.id not in self.links_allowed and search(self.url_regex, message.content):
await message.delete()
await message.channel.send("No puedes enviar links aquí.", delete_after=10)
def setup(bot):
bot.add_cog(Mod(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/info.py
|
from datetime import datetime
from typing import Optional
from discord import Embed, Member
from discord.ext.commands import Cog
from discord.ext.commands import command, has_permissions
class Info(Cog):
def __init__(self, bot):
self.bot = bot
@command(name="userinfo", alisases=["ui"])
async def userinfo(self, ctx, target: Optional[Member]):
target = target or ctx.author
embed = Embed(title="Info Usuario",
colour=target.colour,
timestamp=datetime.utcnow())
embed.set_thumbnail(url=target.avatar_url)
fields = [("Nombre", str(target), True),
("ID", target.id, True),
("Bot", target.bot, True),
("Top Role", target.top_role.mention, True),
("Status", str(target.status).title(), True),
("Activity", f"{str(target.activity.type).split('.')[-1].title() if target.activity else 'N/A'} {target.activity.name if target.activity else ''}", True),
("Creación", target.created_at.strftime("%d/%m/%Y %H/:%M:%S"), True),
("Ingresó", target.joined_at.strftime("%d/%m/%Y %H/:%M:%S"), True),
("Boosted", bool(target.premium_since),True)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await ctx.send(embed=embed)
@command(name="serverinfo", aliases=["guildinfo"])
async def serverinfo(self, ctx):
embed = Embed(title="Server information",
colour=ctx.guild.owner.colour,
timestamp=datetime.utcnow())
embed.set_thumbnail(url=ctx.guild.icon_url)
statuses = [len(list(filter(lambda m: str(m.status) == "online", ctx.guild.members))),
len(list(filter(lambda m: str(m.status) == "idle", ctx.guild.members))),
len(list(filter(lambda m: str(m.status) == "dnd", ctx.guild.members))),
len(list(filter(lambda m: str(m.status) == "offline", ctx.guild.members)))]
fields = [("ID", ctx.guild.id, True),
("Owner", ctx.guild.owner, True),
("Región", ctx.guild.region, True),
("Creación", ctx.guild.created_at.strftime("%d/%m/%Y %H:%M:%S"), True),
("Miembros", len(ctx.guild.members), True),
("Personas", len(list(filter(lambda m: not m.bot, ctx.guild.members))), True),
("Bots", len(list(filter(lambda m: m.bot, ctx.guild.members))), True),
("Miembros baneados", len(await ctx.guild.bans()), True),
("Estados", f"🟢 {statuses[0]} 🟠 {statuses[1]} 🔴 {statuses[2]} ⚪ {statuses[3]}", True),
("Canales de texto", len(ctx.guild.text_channels), True),
("Canales de voz", len(ctx.guild.voice_channels), True),
("Categorías", len(ctx.guild.categories), True),
("Roles", len(ctx.guild.roles), True),
("Invitaciones", len(await ctx.guild.invites()), True),
("\u200b", "\u200b", True)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await ctx.send(embed=embed)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("info")
def setup(bot):
bot.add_cog(Info(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/welcome.py
|
from discord import Forbidden
from discord.ext.commands import Cog
from discord.ext.commands import command
from ..db import db
class Welcome(Cog):
def __init__(self, bot):
self.bot = bot
@Cog.listener()
async def on_member_join(self, member):
db.execute("INSERT INTO exp (UserID) VALUES (?)", member.id)
await self.bot.get_channel(752951710346903694).send(f"Bienvenido@ a **{member.guild.name}** {member.mention}! Visita <#751428354086928464> y saluda a todos!")
try:
await member.send(f"Bienvenid@ a **{member.guild.name}**! disfruta tu estadía!, no olvides revisar las reglas para evitar problemas.")
except Forbidden:
pass
await member.add_roles(member.guild.get_role(760567454275207209)) #member.guild.get_role(763812765751574528))
@Cog.listener()
async def on_member_remove(self, member):
db.execute("DELETE FROM exp WHERE UserID = ?", member.id)
await self.bot.get_channel(762434670574567474).send(f"{member.display_name} ha dejado {member.guild.name}.")
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("welcome")
def setup(bot):
bot.add_cog(Welcome(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/log.py
|
from datetime import datetime
from discord import Embed
from discord.ext.commands import Cog
from discord.ext.commands import command
class Log(Cog):
def __init__(self, bot):
self.bot = bot
@Cog.listener()
async def on_user_update(self, before, after):
if before.name != after.name:
embed = Embed(title="Actualización de nombre",
colour=after.colour,
timestamp=datetime.utcnow())
fields = [("Antes", before.name, False),
("Despues", after.name, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
        if before.discriminator != after.discriminator:
embed = Embed(title="Actualización de Tag",
colour=after.colour,
timestamp=datetime.utcnow())
fields = [("Antes", before.discriminator, False),
("Despues", after.discriminator, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
if before.avatar_url != after.avatar_url:
embed = Embed(title="Actualización Miembro",
description="Avatar cambiado (La imagen de abajo es la nueva, la antigua la de la derecha).",
colour=self.log_channel.guild.get_member(after.id).colour,
timestamp=datetime.utcnow())
embed.set_thumbnail(url=before.avatar_url)
embed.set_image(url=after.avatar_url)
await self.log_channel.send(embed=embed)
@Cog.listener()
async def on_member_update(self, before, after):
if before.display_name != after.display_name:
embed = Embed(title="Actualización Miembro",
description="Nickname cambiado",
colour=after.colour,
timestamp=datetime.utcnow())
fields = [("Antes", before.display_name, False),
("Despues", after.display_name, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
elif before.roles != after.roles:
embed = Embed(title="Actualización Miembro",
description=f"Actualización de roles de {after.display_name}",
colour=after.colour,
timestamp=datetime.utcnow())
fields = [("Antes", ", ".join([r.mention for r in before.roles[:1]]), False),
("Despues", ", ".join([r.mention for r in after.roles[:1]]), False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
@Cog.listener()
async def on_message_edit(self, before, after):
if not after.author.bot:
if before.content != after.content:
embed = Embed(title="Edición de mensaje.",
description=f"Editado por {after.author.display_name}",
colour=after.author.colour,
timestamp=datetime.utcnow())
fields = [("Antes", before.content, False),
("Despues", after.content, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
@Cog.listener()
async def on_message_delete(self, message):
if not message.author.bot:
embed = Embed(title="Mensaje borrado.",
description=f"mensaje borrado por {message.author.display_name}",
colour=message.author.colour,
timestamp=datetime.utcnow())
fields = [("Contenido", message.content, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
await self.log_channel.send(embed=embed)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.log_channel = self.bot.get_channel(764255491680632842)
self.bot.cogs_ready.ready_up("log")
def setup(bot):
bot.add_cog(Log(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/misc.py
|
from discord.ext.commands import Cog
from discord.ext.commands import CheckFailure
from discord.ext.commands import command, has_permissions
from ..db import db
class Misc(Cog):
def __init__(self, bot):
self.bot = bot
@command(name="prefix")
@has_permissions(manage_guild=True)
async def change_prefix(self, ctx, new: str):
if len(new) > 5:
await ctx.send("El prefix no puede tener más de 5 carácteres.")
else:
db.execute("UPDATE guilds SET Prefix = ? WHERE GuildID = ?", new, ctx.guild.id)
await ctx.send(f"El prefix fue actualizado a {new}.")
@change_prefix.error
async def change_prefix_error(self, ctx, exc):
if isinstance(exc, CheckFailure):
await ctx.send("No tienes permisos para esto.")
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("misc")
def setup(bot):
bot.add_cog(Misc(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/reactions.py
|
from discord.ext.commands import Cog
from discord.ext.commands import command, has_permissions
from datetime import datetime, timedelta
from discord import Embed
from ..db import db
#numbers
#0⃣ 1️⃣ 2⃣ 3⃣ 4⃣ 5⃣ 6⃣ 7⃣ 8⃣ 9⃣
numbers = ("1️⃣", "2️⃣", "3️⃣", "4️⃣", "5️⃣",
           "6️⃣", "7️⃣", "8️⃣", "9️⃣", "🔟")
class Reactions(Cog):
def __init__(self, bot):
self.bot = bot
self.polls = []
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.colours = {
"❤️": self.bot.guild.get_role(766074172585017384), #red
"💛": self.bot.guild.get_role(766075992111448074), #yellow
"🧡": self.bot.guild.get_role(766074278625673236), #orange
"💚": self.bot.guild.get_role(766074815496585237), #gren
"💙": self.bot.guild.get_role(766074244253483038), #blue
"💜": self.bot.guild.get_role(766074864628006942), #purple
"🖤": self.bot.guild.get_role(66074510277345281), #black
}
self.reaction_message = await self.bot.get_channel(766038079558516767).fetch_message(766089712573612083)
self.bot.cogs_ready.ready_up("reactions")
self.starboard_channel = self.bot.get_channel(766321103210938379)
@command(name="createpoll", aliases=["mkpoll"])
@has_permissions(manage_guild=True)
async def create_poll(self, ctx, hours: int, question: str, *options):
await ctx.message.delete()
if len(options) > 10:
await ctx.send("La encuesta no puede tener más de 10 opciones.")
else:
embed = Embed(title="Encuesta",
description=question,
colour=ctx.author.colour,
timestamp=datetime.utcnow())
fields = [("Opciones", "\n".join([f"{numbers[idx]} {option}" for idx, option in enumerate(options)]), False),
("Instrucciones", "Reacciona y vota!", False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
message = await ctx.send(embed=embed)
for emoji in numbers[:len(options)]:
await message.add_reaction(emoji)
self.polls.append((message.channel.id, message.id))
self.bot.scheduler.add_job(self.complete_poll, "date", run_date=datetime.now()+timedelta(seconds=hours),
args=[message.channel.id, message.id])
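            # (added note) as written the job runs `hours` *seconds* after
            # creation (timedelta(seconds=hours)); multiply by 3600 if real
            # hours are intended.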
async def complete_poll(self, channel_id, message_id):
message = await self.bot.get_channel(channel_id).fetch_message(message_id)
most_voted = max(message.reactions, key=lambda r: r.count)
await message.channel.send(f"En la encuesta, la opción favorita fue {most_voted.emoji} con {most_voted.count-1:} votos!")
self.polls.remove((message.channel.id, message.id))
@Cog.listener()
async def on_raw_reaction_add(self, payload):
if self.bot.ready and payload.message_id == self.reaction_message.id:
current_colours = filter(lambda r: r in self.colours.values(), payload.member.roles)
await payload.member.remove_roles(*current_colours, reason="Color configurado.")
await payload.member.add_roles(self.colours[payload.emoji.name], reason="Color configurado.")
await self.reaction_message.remove_reaction(payload.emoji, payload.member)
elif payload.message_id in (poll[1] for poll in self.polls):
message = await self.bot.get_channel(payload.channel_id).fetch_message(payload.message_id)
for reaction in message.reactions:
if (not payload.member.bot
and payload.member in await reaction.users().flatten()
and reaction.emoji != payload.emoji.name):
await message.remove_reaction(reaction.emoji, payload.member)
elif payload.emoji.name == "⭐":
message = await self.bot.get_channel(payload.channel_id).fetch_message(payload.message_id)
if not message.author.bot: #and payload.member.id != message.author.id:
msg_id, stars = db.record("SELECT StartMessageID, Stars FROM starboard WHERE RootMessageID = ?", message.id) or (None, 0)
embed = Embed(title="Starred message",
colour=message.author.colour,
timestamp=datetime.utcnow())
fields = [("Autor ", message.author.mention, False),
("Contenido", message.content or "Mira el post", False),
("Estrellas", stars+1, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
if len(message.attachments):
embed.set_image(url=message.attachments[0].url)
if not stars:
star_message = await self.starboard_channel.send(embed=embed)
db.execute("INSERT INTO starboard (RootMessageID, StartMessageID) VALUES (?, ?)", message.id, star_message.id)
else:
star_message = await self.starboard_channel.fetch_message(msg_id)
await star_message.edit(embed=embed)
db.execute("UPDATE starboard SET Stars = Stars + 1 WHERE RootMessageID = ?", message.id)
else:
await message.remove_reaction(payload.emoji, payload.member)
def setup(bot):
bot.add_cog(Reactions(bot))
|
regalk13/updated_discord-BOT.py
|
lib/cogs/exp.py
|
from datetime import datetime, timedelta
from random import randint
from typing import Optional
from discord import Member, Embed
from discord.ext.menus import MenuPages, ListPageSource
from discord.ext.commands import Cog
from discord.ext.commands import CheckFailure
from discord.ext.commands import command, has_permissions
from ..db import db
class HelpMenu(ListPageSource):
def __init__(self, ctx, data):
self.ctx = ctx
super().__init__(data, per_page=10)
async def write_page(self, menu, offset, fields=[]):
        len_data = len(self.entries)
        embed = Embed(title="Tabla de experiencia.",
colour=self.ctx.author.colour)
embed.set_thumbnail(url=self.ctx.guild.icon_url)
embed.set_footer(text=f"{offset:,} - {min(len_data, offset+self.per_page-1):,} de {len_data:,} miembros.")
for name, value in fields:
embed.add_field(name=name, value=value, inline=False)
return embed
async def format_page(self, menu, entries):
offset = (menu.current_page*self.per_page) + 1
fields = []
table = ("\n".join(f"{idx+offset}. {self.ctx.bot.guild.get_member(entry[0]).display_name} (XP: {entry[1]} | Level: {entry[2]})"
for idx, entry in enumerate(entries)))
fields.append(("Ranks", table))
return await self.write_page(menu, offset, fields)
class Exp(Cog):
def __init__(self, bot):
self.bot = bot
async def process_xp(self, message):
xp, lvl, xplock = db.record("SELECT XP, Level, XPLock FROM exp WHERE UserID = ?", message.author.id)
if datetime.utcnow() > datetime.fromisoformat(xplock):
await self.add_xp(message, xp, lvl)
async def add_xp(self, message, xp, lvl):
xp_to_add = randint(10, 20)
new_lvl = int(((xp+xp_to_add)//42) ** 0.55)
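        # (added note) with this curve, int((xp // 42) ** 0.55), roughly 800 XP
        # reaches level 5 and about 2,800 XP reaches level 10.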
db.execute("UPDATE exp SET XP = XP + ?, Level = ?, XPLock = ? WHERE UserID = ?",
xp_to_add, new_lvl, (datetime.utcnow()+timedelta(seconds=60)).isoformat(), message.author.id)
        if new_lvl > lvl:
            await self.levelup_channel.send(f"Felicidades {message.author.mention} has alcanzado el nivel {new_lvl:,}!")
            await self.check_lvl_rewards(message, new_lvl)  # award level roles; otherwise check_lvl_rewards is never called
async def check_lvl_rewards(self, message, lvl):
if lvl >= 50: # Mammut
if (new_role := message.guild.get_role(766808210630901810)) not in message.author.roles:
await message.author.add_roles(new_role)
elif 40 <= lvl < 50: # Ninja
if (new_role := message.guild.get_role(766808108877611029)) not in message.author.roles:
await message.author.add_roles(new_role)
elif 30 <= lvl < 40: # Master
if (new_role := message.guild.get_role(766807991378640897)) not in message.author.roles:
await message.author.add_roles(new_role)
elif 20 <= lvl < 30: # Hechizero
if (new_role := message.guild.get_role(766807888525393922)) not in message.author.roles:
await message.author.add_roles(new_role)
elif 10 <= lvl < 20: # Aprendiz
if (new_role := message.guild.get_role(766807837618995251)) not in message.author.roles:
await message.author.add_roles(new_role)
        elif 5 <= lvl < 10: # Beginner
if (new_role := message.guild.get_role(766807774742446140)) not in message.author.roles:
await message.author.add_roles(new_role)
@command(name="level")
async def display_level(self, ctx, target: Optional[Member]):
target = target or ctx.author
xp, lvl = db.record("SELECT XP, level FROM exp WHERE UserID = ?", target.id) or (None, None)
if lvl is not None:
await ctx.send(f"{target.display_name} está en el nivel {lvl:,} con {xp:,} de experiencia.")
else:
await ctx.send("Este miembro no se encuentra en el sistema de niveles.")
@command(name="rank")
async def display_rank(self, ctx, target: Optional[Member]):
target = target or ctx.author
ids = db.column("SELECT UserID FROM ex ORDER BY XP DESC")
try:
await ctx.send(f"{target.display_name} está en el puesto {ids.index(target.id)+1} de {len(ids)}")
except ValueError:
await ctx.send("Este miembro no se encuentra en el sistema de niveles.")
@command(name="leaderboard", aliases=["lb"])
async def display_leaderboard(self, ctx):
records = db.records("SELECT UserID, XP, Level FROM exp ORDER BY XP DESC")
menu = MenuPages(source=HelpMenu(ctx, records),
clear_reactions_after=True,
timeout=60.0)
await menu.start(ctx)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.levelup_channel = self.bot.get_channel(766449343212421120)
self.bot.cogs_ready.ready_up("exp")
@Cog.listener()
async def on_message(self, message):
if not message.author.bot:
await self.process_xp(message)
def setup(bot):
bot.add_cog(Exp(bot))
|
regalk13/updated_discord-BOT.py
|
lib/bot/__init__.py
|
import discord
from asyncio import sleep
from datetime import datetime
from glob import glob
import sqlite3
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger
from discord import Embed, File, DMChannel
from discord.ext.commands import Bot as BotBase
from discord.ext.commands import Context
from discord.errors import HTTPException, Forbidden
from discord.ext.commands import (CommandNotFound, BadArgument , MissingRequiredArgument, CommandOnCooldown, MissingPermissions)
from discord.ext.commands import when_mentioned_or, command, has_permissions
from pathlib import Path
from ..db import db
intents=discord.Intents.all()
OWNER_IDS = [751143350299787276]
COGS = [p.stem for p in Path(".").glob("./lib/cogs/*.py")]  # Path.stem keeps this portable: Linux paths use "/", so splitting on "\\" would fail
IGNORE_EXCEPTIONS = (CommandNotFound, BadArgument)
def get_prefix(bot, message):
prefix = db.field("SELECT Prefix FROM guilds WHERE GuildID = ?", message.guild.id)
return when_mentioned_or(prefix)(bot, message)
class Ready(object):
def __init__(self):
for cog in COGS:
setattr(self, cog, False)
def ready_up(self, cog):
setattr(self, cog, True)
print(f"{cog} cog ready")
def all_ready(self):
return all([getattr(self, cog) for cog in COGS])
class Bot(BotBase):
def __init__(self):
self.ready = False
self.cogs_ready = Ready()
self.guild = None
self.scheduler = AsyncIOScheduler()
db.autosave(self.scheduler)
super().__init__(command_prefix=get_prefix, owner_ids=OWNER_IDS, intents=discord.Intents.all())
def setup(self):
for cog in COGS:
self.load_extension(f"lib.cogs.{cog}")
print(f" {cog} cog loaded")
print("setup complete")
def update_db(self):
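        # (added note) keep the guilds and exp tables in sync with the live
        # member list: insert missing rows, then delete rows for members that
        # have left the guild.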
db.multiexec("INSERT OR IGNORE INTO guilds (GuildID) VALUES (?)",
((guild.id,) for guild in self.guilds))
db.multiexec("INSERT OR IGNORE INTO exp (UserID) VALUES (?)",
((member.id,) for member in self.guild.members if not member.bot))
to_remove = []
stored_members = db.column("SELECT UserID FROM exp")
for id_ in stored_members:
if not self.guild.get_member(id_):
to_remove.append(id_)
db.multiexec("DELETE FROM exp WHERE UserID = ?",
((id_,) for id_ in to_remove))
db.commit()
def run(self, version):
self.VERSION = version
print("running setup...")
self.setup()
with open("./lib/bot/token.0", "r", encoding="utf-8") as tf:
self.TOKEN = tf.read()
print("Running BOT...")
super().run(self.TOKEN, reconnect=True)
async def process_commands(self, message):
ctx = await self.get_context(message, cls=Context)
if ctx.command is not None and ctx.guild is not None:
if self.ready:
await self.invoke(ctx)
else:
await ctx.send("No estoy listo para recibir mensajes.")
async def rules_reminder(self):
await self.channel.send("Recuerda las reglas del server, visita Reglas y leelas.")
async def on_connect(self):
print("bot connected")
    async def on_disconnect(self):
        print("bot disconnected")
async def on_error(self, err, *arg, **kwargs):
if err == "on_command_error":
await arg[0].send("Tengo algún error.")
await self.stdout.send("Ha ocurrido algún error")
raise
async def on_command_error(self, ctx, exc):
if any([isinstance(exc, error) for error in IGNORE_EXCEPTIONS]):
pass
elif isinstance(exc, CommandNotFound):
await ctx.send("Este comando no existe.")
elif isinstance(exc, MissingRequiredArgument):
await ctx.send("Algún o algunos argumentos faltan.")
elif isinstance(exc, CommandOnCooldown):
await ctx.send(f"Toma un respiro e intenta de nuevo en {exc.retry_after:,.2f} segundos")
elif isinstance(exc, MissingPermissions):
await ctx.send("No tienes permisos para esto...")
elif hasattr(exc, "original"):
if isinstance(exc.original, HTTPException):
await ctx.send("Inhabilitado para enviar mensajes.")
if isinstance(exc.original, Forbidden):
await ctx.send("No tengo permisos para eso.")
else:
raise exc.original
else:
raise exc
async def on_ready(self):
if not self.ready:
self.guild = self.get_guild(751428354086928461)
self.stdout = self.get_channel(762837811485212712)
self.channel = self.get_channel(751428354086928464)
self.scheduler.add_job(self.rules_reminder, CronTrigger(day_of_week=0, hour=12, minute=0, second=0))
self.scheduler.start()
self.update_db()
while not self.cogs_ready.all_ready():
await sleep(0.5)
await self.stdout.send("Now online!")
self.ready = True
print("bot ready")
else:
print("bot reconnect")
async def on_message(self, message):
if not message.author.bot:
if isinstance(message.channel, DMChannel):
if len(message.content) < 50:
await message.channel.send("Tú mensaje debe superar los 50 carácteres.")
else:
member = self.guild.get_member(message.author.id)
embed = Embed(title="Modmail",
colour=member.colour,
                                  timestamp=datetime.utcnow())
embed.set_thumbnail(url=member.avatar_url)
fields = [("Miembro", member.display_name, False),
("Message", message.content, False)]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
#mod = self.get_cog("Mod")
await self.stdout.send(embed=embed)
await message.channel.send("Mensaje enviado a los moderadores.")
await self.process_commands(message)
bot = Bot()
|
regalk13/updated_discord-BOT.py
|
lib/cogs/meta.py
|
from datetime import datetime, timedelta
from discord import Embed
from discord.ext.commands import Cog
from discord.ext.commands import command, has_permissions
from discord import Activity, ActivityType
from apscheduler.triggers.cron import CronTrigger
from time import time
from psutil import Process, virtual_memory
from discord import __version__ as discord_version
from platform import python_version
class Meta(Cog):
def __init__(self, bot):
self.bot = bot
self._message = "Watching >help | Mi DM para ModMail."
bot.scheduler.add_job(self.set, CronTrigger(second=0))
@property
def message(self):
return self._message.format(users=len(self.bot.users), guilds=len(self.bot.guilds))
@message.setter
def message(self, value):
if value.split(" ")[0] not in ("playing", "watching", "listening", "streaming"):
raise ValueError("Esta actividad no existe.")
self._message = value
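    # (added note) example, via the setactivity command below:
    #   >setactivity watching {users} usuarios en {guilds} servidores
    # the `message` property fills the {users}/{guilds} placeholders before
    # the presence is applied in set().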
async def set(self):
_type, _name = self.message.split(" ", maxsplit=1)
await self.bot.change_presence(activity=Activity(
name=_name, type=getattr(ActivityType, _type, ActivityType.playing)
))
@command(name="setactivity")
@has_permissions(manage_guild=True)
async def set_activity_message(self, ctx, *, text: str):
self.message = text
await self.set()
@command(name="ping")
async def ping(self, ctx):
start = time()
message = await ctx.send(f"Pong! latencia: {self.bot.latency*1000:,.0f} ms.")
end = time()
await message.edit(content=f"Pong! latencia: {self.bot.latency*1000:,.0f} ms, tiempo de respuesta: {(end-start)*1000:,.0f} ms.")
@command(name="stats")
async def show_bot_stats(self, ctx):
embed = Embed(title="Bot stats",
colour=ctx.author.colour,
timestamp=datetime.utcnow())
proc = Process()
with proc.oneshot():
uptime = timedelta(seconds=time()-proc.create_time())
cpu_time = timedelta(seconds=(cpu := proc.cpu_times()).system + cpu.user)
mem_total = virtual_memory().total / (1024**2)
mem_of_total = proc.memory_percent()
mem_usage = mem_total * (mem_of_total / 100)
fields = [
("Bot version", self.bot.VERSION, True),
("Python version", python_version(), True),
("discord.py version", discord_version, True),
("Uptime", uptime, True),
("CPU", cpu_time, True),
("Memoria usada", f"{mem_usage:,.3f} / {mem_total:,.0f} MiB ({mem_of_total:.0f}%)", True),
("Usuarios", f"{self.bot.guild.member_count:,}", True)
]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
embed.set_thumbnail(url=self.bot.user.avatar_url)
embed.set_footer(text="Creador Regalk13.")
await ctx.send(embed=embed)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("meta")
def setup(bot):
bot.add_cog(Meta(bot))
|
hzli-ucas/pytorch-retinanet
|
lib/nms/build.py
|
import os
import torch
from torch.utils.cpp_extension import BuildExtension, CppExtension
from setuptools import setup
sources = ['src/nms_binding.cpp']
defines = []
with_cuda = False
if torch.cuda.is_available():
print('Including CUDA code.')
defines += [('WITH_CUDA', None)]
from torch.utils.cpp_extension import CUDAExtension
build_extension = CUDAExtension
else:
build_extension = CppExtension
this_file = os.path.dirname(os.path.realpath(__file__))
print(this_file)
extra_objects = ['src/cuda/nms_kernel.cu.o']
extra_objects = [os.path.join(this_file, fname) for fname in extra_objects]
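# (added note) the precompiled CUDA object above is only linked, not built,
# by this script; src/cuda/nms_kernel.cu.o must already exist (e.g. compiled
# beforehand with nvcc) or the extension build will fail.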
ext_module = build_extension(
'_ext.nms',
sources=sources,
define_macros=defines,
relative_to=__file__,
extra_objects=extra_objects,
extra_compile_args=['-std=c99']
)
if __name__ == '__main__':
setup(
name = '_ext.nms',
ext_modules = [ext_module],
cmdclass={'build_ext': BuildExtension}
)
|
99Kies/ecdsa
|
demo.py
|
## 数字签名和验签
from ecdsa import PrivateKey, hash256
e = int.from_bytes(hash256(b'my secret'), 'big') # private key (derived from a secret)
z = int.from_bytes(hash256(b'my message'), 'big') # hash of the message to sign
print(e)
print(z)
pk = PrivateKey(e)
sig = pk.sign(z)
print(sig)
res = pk.point.verify(z, sig)
print(res)
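# A small extra check (added sketch, not in the original demo): the same
# signature must NOT verify against a different message hash.
z_bad = int.from_bytes(hash256(b'my message (tampered)'), 'big')
print(pk.point.verify(z_bad, sig))  # expected: False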
|
99Kies/ecdsa
|
ecdsa.py
|
import hashlib
import hmac
def hash256(s):
'''two rounds of sha256'''
return hashlib.sha256(hashlib.sha256(s).digest()).digest()
class FieldElement:
    # finite field element
def __init__(self, num, prime):
if num >= prime or num < 0:
error = 'Num {} not in field range 0 to {}'.format(
num, prime - 1)
raise ValueError(error)
self.num = num
self.prime = prime
def __repr__(self):
return 'FieldElement_{}({})'.format(self.prime, self.num)
def __eq__(self, other):
if other is None:
return False
return self.num == other.num and self.prime == other.prime
def __ne__(self, other):
# this should be the inverse of the == operator
return not (self == other)
def __add__(self, other):
if self.prime != other.prime:
raise TypeError('Cannot add two numbers in different Fields')
# self.num and other.num are the actual values
# self.prime is what we need to mod against
num = (self.num + other.num) % self.prime
# We return an element of the same class
return self.__class__(num, self.prime)
def __sub__(self, other):
if self.prime != other.prime:
raise TypeError('Cannot subtract two numbers in different Fields')
# self.num and other.num are the actual values
# self.prime is what we need to mod against
num = (self.num - other.num) % self.prime
# We return an element of the same class
return self.__class__(num, self.prime)
def __mul__(self, other):
if self.prime != other.prime:
raise TypeError('Cannot multiply two numbers in different Fields')
# self.num and other.num are the actual values
# self.prime is what we need to mod against
num = (self.num * other.num) % self.prime
# We return an element of the same class
return self.__class__(num, self.prime)
def __pow__(self, exponent):
n = exponent % (self.prime - 1)
num = pow(self.num, n, self.prime)
return self.__class__(num, self.prime)
def __truediv__(self, other):
if self.prime != other.prime:
raise TypeError('Cannot divide two numbers in different Fields')
# self.num and other.num are the actual values
# self.prime is what we need to mod against
# use fermat's little theorem:
# self.num**(p-1) % p == 1
# this means:
# 1/n == pow(n, p-2, p)
num = (self.num * pow(other.num, self.prime - 2, self.prime)) % self.prime
# We return an element of the same class
return self.__class__(num, self.prime)
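        # (added example) with a small prime field, p = 19:
        #   FieldElement(2, 19) / FieldElement(7, 19)
        #   -> 2 * pow(7, 17, 19) % 19 == 2 * 11 % 19 == 3   (check: 3 * 7 % 19 == 2)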
def __rmul__(self, coefficient):
num = (self.num * coefficient) % self.prime
return self.__class__(num=num, prime=self.prime)
class Point:
    # a point on an elliptic curve y^2 = x^3 + a*x + b
def __init__(self, x, y, a, b):
self.a = a
self.b = b
self.x = x
self.y = y
if self.x is None and self.y is None:
return
if self.y**2 != self.x**3 + a * x + b:
raise ValueError('({}, {}) is not on the curve'.format(x, y))
def __eq__(self, other):
return self.x == other.x and self.y == other.y \
and self.a == other.a and self.b == other.b
def __ne__(self, other):
# this should be the inverse of the == operator
return not (self == other)
def __repr__(self):
if self.x is None:
return 'Point(infinity)'
elif isinstance(self.x, FieldElement):
return 'Point({},{})_{}_{} FieldElement({})'.format(
self.x.num, self.y.num, self.a.num, self.b.num, self.x.prime)
else:
return 'Point({},{})_{}_{}'.format(self.x, self.y, self.a, self.b)
def __add__(self, other):
"""
点加法
"""
if self.a != other.a or self.b != other.b:
raise TypeError('Points {}, {} are not on the same curve'.format(self, other))
# Case 0.0: self is the point at infinity, return other
if self.x is None:
return other
# Case 0.1: other is the point at infinity, return self
if other.x is None:
return self
# Case 1: self.x == other.x, self.y != other.y
# Result is point at infinity
if self.x == other.x and self.y != other.y:
return self.__class__(None, None, self.a, self.b)
# Case 2: self.x ≠ other.x
# Formula (x3,y3)==(x1,y1)+(x2,y2)
# s=(y2-y1)/(x2-x1)
# x3=s**2-x1-x2
# y3=s*(x1-x3)-y1
if self.x != other.x:
s = (other.y - self.y) / (other.x - self.x)
x = s**2 - self.x - other.x
y = s * (self.x - x) - self.y
return self.__class__(x, y, self.a, self.b)
# Case 4: if we are tangent to the vertical line,
# we return the point at infinity
# note instead of figuring out what 0 is for each type
# we just use 0 * self.x
if self == other and self.y == 0 * self.x:
return self.__class__(None, None, self.a, self.b)
# Case 3: self == other
# Formula (x3,y3)=(x1,y1)+(x1,y1)
# s=(3*x1**2+a)/(2*y1)
# x3=s**2-2*x1
# y3=s*(x1-x3)-y1
if self == other:
s = (3 * self.x**2 + self.a) / (2 * self.y)
x = s**2 - 2 * self.x
y = s * (self.x - x) - self.y
return self.__class__(x, y, self.a, self.b)
def __rmul__(self, coefficient):
        # scalar multiplication via binary expansion (double-and-add)
coef = coefficient
current = self
result = self.__class__(None, None, self.a, self.b)
while coef:
if coef & 1:
result += current
current += current
coef >>= 1
return result
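# (added example) a quick sanity check of the addition formulas above, using
# the small curve y^2 = x^3 + 7 over F_223 (values chosen for illustration):
#   a, b = FieldElement(0, 223), FieldElement(7, 223)
#   p1 = Point(FieldElement(192, 223), FieldElement(105, 223), a, b)
#   p2 = Point(FieldElement(17, 223), FieldElement(56, 223), a, b)
#   p1 + p2  ->  Point(170,142)_0_7 FieldElement(223)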
# Define Bitcoin's curve, secp256k1:
# y^2 = x^3 + 7
A = 0
B = 7
P = 2**256 - 2**32 - 977  # prime modulus of the finite field
N = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141  # order of the group generated by G
# G = S256Point(
# 0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798,
# 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)  # generator point (defined below, after S256Point)
class S256Field(FieldElement):
def __init__(self, num, prime=P):
super().__init__(num=num, prime=prime)
def __repr__(self):
return '{:x}'.format(self.num).zfill(64)
class S256Point(Point):
def __init__(self, x, y, a=None, b=None):
a, b = S256Field(A), S256Field(B)
if type(x) == int:
super().__init__(x=S256Field(x), y=S256Field(y), a=a, b=b)
else:
super().__init__(x=x, y=y, a=a, b=b)
def __repr__(self):
if self.x is None:
return 'S256Point(infinity)'
else:
return 'S256Point({}, {})'.format(self.x, self.y)
def __rmul__(self, coefficient):
coef = coefficient % N
return super().__rmul__(coef)
def verify(self, z, sig):
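        # (added note) ECDSA verification: with u = z/s and v = r/s (mod N),
        # a valid signature satisfies (u*G + v*P).x == r, where P is this
        # public point.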
s_inv = pow(sig.s, N - 2, N)
u = z * s_inv % N
v = sig.r * s_inv % N
total = u * G + v * self
return total.x.num == sig.r
G = S256Point(
0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798,
0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)
class Signature:
def __init__(self, r, s):
self.r = r
self.s = s
def __repr__(self):
return 'Signature({:x},{:x})'.format(self.r, self.s)
class PrivateKey:
def __init__(self, secret):
self.secret = secret
self.point = secret * G
def hex(self):
return '{:x}'.format(self.secret).zfill(64)
def sign(self, z):
k = self.deterministic_k(z)
r = (k * G).x.num
k_inv = pow(k, N - 2, N)
s = (z + r * self.secret) * k_inv % N
if s > N / 2:
s = N - s
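            # (added note) keep the "low-s" form (s <= N/2) so each signature
            # has a single canonical encoding (avoids trivial malleability).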
return Signature(r, s)
def deterministic_k(self, z):
        # deterministic nonce k, derived from the private key and the message hash (RFC 6979-style HMAC construction), so no external RNG is needed
k = b'\x00' * 32
v = b'\x01' * 32
if z > N:
z -= N
z_bytes = z.to_bytes(32, 'big')
secret_bytes = self.secret.to_bytes(32, 'big')
s256 = hashlib.sha256
k = hmac.new(k, v + b'\x00' + secret_bytes + z_bytes, s256).digest()
v = hmac.new(k, v, s256).digest()
k = hmac.new(k, v + b'\x01' + secret_bytes + z_bytes, s256).digest()
v = hmac.new(k, v, s256).digest()
while True:
v = hmac.new(k, v, s256).digest()
candidate = int.from_bytes(v, 'big')
if candidate >= 1 and candidate < N:
return candidate # <2>
k = hmac.new(k, v + b'\x00', s256).digest()
v = hmac.new(k, v, s256).digest()
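if __name__ == '__main__':
    # (added sketch) quick self-checks, not part of the original module:
    # N is the order of G, so N*G is the point at infinity, and a signature
    # produced by PrivateKey.sign must verify against its own public point.
    assert (N * G).x is None
    _priv = PrivateKey(12345)
    _z = int.from_bytes(hash256(b'sanity check'), 'big')
    assert _priv.point.verify(_z, _priv.sign(_z))
    print('ecdsa self-checks passed')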
|
icbi-lab/miopy
|
miopy/__init__.py
|
from .correlation import *
|
icbi-lab/miopy
|
miopy/classification.py
|
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier,GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import SVC
# used for normalization
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import label_binarize
# used for cross-validation
from sklearn.model_selection import StratifiedKFold
import numpy as np
import pandas as pd
import copy
def rf(X_train, y_train, X_test, y_test, lFeature = None, seed = 123):
seed = np.random.RandomState(seed)
md = RandomForestClassifier(n_estimators=1000, random_state=seed)
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
md.fit(X_train, y_train)
y_predict = md.predict(X_test)
scoreTraining = md.score(X_train, y_train)
scoreTest = md.score(X_test, y_test)
feature = pd.Series(md.feature_importances_, index = lFeature)
md.feature_names = lFeature
return scoreTraining, scoreTest, y_predict, feature, md
def lr(X_train, y_train, X_test, y_test, lFeature = None, seed = 123):
seed = np.random.RandomState(seed)
md = LogisticRegression(penalty="l2", max_iter=100000, random_state=seed)
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
md.fit(X_train, y_train)
y_predict = md.predict(X_test)
scoreTraining = md.score(X_train, y_train)
scoreTest = md.score(X_test, y_test)
feature = pd.Series(md.coef_[0], index = lFeature)
md.feature_names = lFeature
return scoreTraining, scoreTest, y_predict, feature, md
def ridge(X_train, y_train, X_test, y_test, lFeature = None, seed = 123):
seed = np.random.RandomState(seed)
md = RidgeClassifier(max_iter=10000,random_state=seed)
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
md.fit(X_train, y_train)
scoreTraining = md.score(X_train, y_train)
scoreTest = md.score(X_test, y_test)
feature = pd.Series(md.coef_[0], index = lFeature)
md.feature_names = lFeature
return scoreTraining, scoreTest, feature, md
def svm(X_train, y_train, X_test, y_test, lFeature = None, seed = 123):
seed = np.random.RandomState(seed)
md = SVC(kernel='linear',random_state=seed)
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
md.fit(X_train, y_train)
y_predict = md.predict(X_test)
    scoreTraining = md.score(X_train, y_train)
    scoreTest = md.score(X_test, y_test)
feature = pd.Series(md.coef_[0], index = lFeature)
md.feature_names = lFeature
return scoreTraining, scoreTest, y_predict, feature, md
def classification_cv(data, k = 10, name = "Random Forest", group = "event", lFeature = None, seed = 123):
    # list of candidate classifiers, chosen on the basis of the authors' previous work
from sklearn.metrics import roc_auc_score, roc_curve
modelList = { "Random Forest": rf,
"Logistic Regression": lr,
"Support Vector Machine": svm}
print("Loading dataset...")
lFeature = list(set(lFeature).intersection(data.columns.tolist()))
X, Y = data[lFeature], label_binarize(data[group], classes = data[group].unique().tolist())[:,0]
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state= np.random.RandomState(seed))
indexes = [ (training, test) for training, test in skf.split(X, Y) ]
model = modelList[name]
results = { 'model': name,
'auc': [],
'fpr': [],
'tpr': [],
'train': [],
'test' : [],
'classifier' : []
}
print("\nClassifier " + name)
# iterate over all folds
featureDf = pd.DataFrame()
for train_index, test_index in indexes:
X_train, X_test = X.iloc[train_index,:], X.iloc[test_index,:]
y_train, y_test = Y[train_index], Y[test_index]
classifier = copy.deepcopy(model)
scoreTraining, scoreTest, y_predict, feature, model_fit = classifier(X_train, y_train,\
X_test, y_test, lFeature = lFeature, seed = seed)
print("\ttraining: %.4f, test: %.4f" % (scoreTraining, scoreTest))
fpr, tpr, thresholds = roc_curve(y_test, y_predict)
results['auc'].append(roc_auc_score(y_test, y_predict))
results['fpr'].append(fpr)
results["tpr"].append(tpr)
results["train"].append(scoreTraining)
results["test"].append(scoreTest)
results["classifier"].append(model_fit)
featureDf = pd.concat([featureDf,feature], axis = 1)
#print(featureDf.mean(axis = 1))
results["feature"] = featureDf
print("\tTest Mean: %.4f" % (np.mean(results['test'])))
return results
def classification_training_model(data, model = None, group = "event", lFeature = None):
    # list of candidate classifiers, chosen on the basis of the authors' previous work
from sklearn.metrics import roc_auc_score, roc_curve
print("Loading dataset...")
lFeature = list(set(lFeature).intersection(data.columns.tolist()))
print(data.head())
print(lFeature)
X, Y = data[lFeature], label_binarize(data[group], classes = data[group].unique().tolist())[:,0]
name = type(model).__name__
results = { 'model': name,
'auc': [],
'fpr': [],
'tpr': [],
'train': [],
'test' : [],
'classifier' : []
}
print("\nClassifier " + name)
# iterate over all folds
print(X)
print(model)
y_predict = model.predict(X)
print("predicted")
    scoreTraining = model.score(X, Y)
    scoreTest = model.score(X, Y)
try:
feature = pd.Series(model.coef_[0], index = lFeature)
except:
feature = pd.Series(model.feature_importances_, index = lFeature)
fpr, tpr, thresholds = roc_curve(Y, y_predict)
results['auc'].append(roc_auc_score(Y, y_predict))
results['fpr'].append(fpr)
results["tpr"].append(tpr)
results["train"].append(scoreTraining)
results["test"].append(scoreTest)
results["feature"] = feature
return results
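if __name__ == "__main__":
    # (added sketch) minimal usage example with synthetic data; the column
    # names ("event", "miR_*") are illustrative assumptions, not part of the
    # original module.
    import numpy as np
    import pandas as pd
    rng = np.random.RandomState(0)
    lF = [f"miR_{i}" for i in range(5)]
    demo = pd.DataFrame(rng.normal(size=(60, 5)), columns=lF)
    demo["event"] = [0, 1] * 30
    res = classification_cv(demo, k=3, name="Random Forest", group="event", lFeature=lF)
    print("mean AUC: %.3f" % np.mean(res["auc"]))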
|
icbi-lab/miopy
|
setup.py
|
#!/usr/bin/python3
from setuptools import setup
setup(
name='miopy',
version='1.0.05',
author='<NAME>',
author_email='<EMAIL>',
packages=['miopy', ],
scripts=['bin/mio_correlation.py',],
url='https://gitlab.i-med.ac.at/cbio/miopy',
license='LICENSE.txt',
description='',
long_description=open('README.rst').read(),
long_description_content_type='text/markdown',
install_requires=[
"pandas",
"scipy",
"numpy",
"ranky",
"pandarallel",
"lifelines",
"argparse",
"eli5",
],
include_package_data=True,
package_data={'': ['data/*', "Rscript/*.r", "dataset/*.csv"],
},
)
|
icbi-lab/miopy
|
miopy/survival.py
|
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from multiprocessing import Pool
import numpy as np
import functools
from .correlation import intersection, header_list
import plotly
import plotly.offline as opy
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import ShuffleSplit, GridSearchCV
import warnings
#######################
### Sklearn Survival ##
#######################
class EarlyStoppingMonitor:
def __init__(self, window_size, max_iter_without_improvement):
self.window_size = window_size
self.max_iter_without_improvement = max_iter_without_improvement
self._best_step = -1
def __call__(self, iteration, estimator, args):
# continue training for first self.window_size iterations
if iteration < self.window_size:
return False
# compute average improvement in last self.window_size iterations.
# oob_improvement_ is the different in negative log partial likelihood
# between the previous and current iteration.
start = iteration - self.window_size + 1
end = iteration + 1
improvement = np.mean(estimator.oob_improvement_[start:end])
if improvement > 1e-6:
self._best_step = iteration
return False # continue fitting
# stop fitting if there was no improvement
# in last max_iter_without_improvement iterations
diff = iteration - self._best_step
return diff >= self.max_iter_without_improvement
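# (added note) usage sketch, mirroring gradient_boosted_models below:
#   monitor = EarlyStoppingMonitor(window_size=25, max_iter_without_improvement=100)
#   model.fit(X_train, y_train, monitor=monitor)
# GradientBoostingSurvivalAnalysis calls the monitor after each boosting
# iteration and stops once the averaged oob_improvement_ stalls for too long.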
def IPC_RIDGE(X_train, y_train, X_test, y_test, lFeature = None, n_core = 2, seed = 123):
from sksurv.linear_model import IPCRidge
from sklearn.pipeline import make_pipeline
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
seed = np.random.RandomState(seed)
y_train_log = y_train.copy()
y_train_log["time"] = np.log1p(y_train["time"])
y_test_log = y_test.copy()
y_test_log["time"] = np.log1p(y_test["time"])
#https://github.com/sebp/scikit-survival/issues/41
n_alphas = 50
alphas = np.logspace(-10, 1, n_alphas)
gcv = GridSearchCV(IPCRidge(max_iter=100000),
{"alpha":alphas},
cv = 2,
                       n_jobs=n_core).fit(X_train,y_train_log)
    best_model = gcv.best_estimator_  # the grid search was fit on a bare IPCRidge, not a pipeline, so there is no named_steps
scoreTraining = best_model.score(X_train,y_train_log)
scoreTest = best_model.score(X_test,y_test_log)
feature = pd.DataFrame(best_model.coef_, index=lFeature)[0]
return scoreTraining, scoreTest, feature
def score_survival_model(model, X, y):
from sksurv.metrics import concordance_index_censored
prediction = model.predict(X)
result = concordance_index_censored(y['event'], y['time'], prediction)
return result[0]
def SurvivalSVM(X_train, y_train, X_test, y_test, lFeature = None, n_core = 2, seed = 123):
from sksurv.svm import FastSurvivalSVM
import numpy as np
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
seed = np.random.RandomState(seed)
ssvm = FastSurvivalSVM(max_iter=100, tol=1e-5, random_state=seed)
param_grid = {'alpha': 2. ** np.arange(-12, 13, 4)}
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=seed)
gcv = GridSearchCV(ssvm, param_grid, scoring=score_survival_model,
n_jobs = n_core , refit=False,
cv=cv)
warnings.filterwarnings("ignore", category=FutureWarning)
gcv = gcv.fit(X_train, y_train)
ssvm.set_params(**gcv.best_params_)
ssvm.fit(X_train, y_train)
scoreTraining = ssvm.score(X_train,y_train)
scoreTest = ssvm.score(X_test,y_test)
feature = pd.Series(ssvm.coef_, index=lFeature)
return scoreTraining, scoreTest, feature
def PenaltyCox(X_train, y_train, X_test, y_test, lFeature = None, n_core = 2, seed = 123):
from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis
from sklearn.pipeline import make_pipeline
seed = np.random.RandomState(seed)
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model = CoxnetSurvivalAnalysis(alpha_min_ratio=0.12, l1_ratio=0.9, max_iter=100)
#https://github.com/sebp/scikit-survival/issues/41
model.set_params(max_iter = 100, n_alphas = 50)
model.fit(X_train, y_train)
warnings.simplefilter("ignore", ConvergenceWarning)
alphas = model.alphas_
gcv = GridSearchCV(
make_pipeline(CoxnetSurvivalAnalysis(l1_ratio=0.9, max_iter=1000)),
param_grid={"coxnetsurvivalanalysis__alphas": [[v] for v in alphas]},
cv = 2,
n_jobs= n_core).fit(X_train,y_train)
best_model = gcv.best_estimator_.named_steps["coxnetsurvivalanalysis"]
alpha = best_model.alphas_
scoreTraining = best_model.score(X_train,y_train)
scoreTest = best_model.score(X_test,y_test)
feature = pd.DataFrame(best_model.coef_, index=lFeature)[0]
return scoreTraining, scoreTest, feature
def SurvivalForest(X_train, y_train, X_test, y_test, lFeature = None, n_core = 2, seed = 123):
from sksurv.ensemble import RandomSurvivalForest
from eli5.formatters import format_as_dataframe
from eli5.sklearn import explain_weights_sklearn
from eli5.sklearn import PermutationImportance
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
seed = np.random.RandomState(seed)
rsf = RandomSurvivalForest(n_estimators=300,
min_samples_split=10,
min_samples_leaf=15,
max_features="sqrt",
n_jobs= n_core,
random_state=seed)
rsf.fit(X_train, y_train)
scoreTraining = rsf.score(X_train,y_train)
scoreTest = rsf.score(X_test,y_test)
perm = PermutationImportance(rsf, n_iter=3, random_state=seed)
perm.fit(X_test, y_test)
feature = format_as_dataframe(explain_weights_sklearn(perm, feature_names=lFeature, top = len(lFeature) ))
feature = pd.Series(feature["weight"].tolist(), index=feature["feature"].tolist())
#feature = pd.DataFrame(rsf.feature_importances_, index=lFeature)
return scoreTraining, scoreTest, feature
def gradient_boosted_models(X_train, y_train, X_test, y_test, lFeature = None, n_core = 2, seed = 123):
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
# let's normalize, anyway
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
seed = np.random.RandomState(seed)
model = GradientBoostingSurvivalAnalysis(
n_estimators=1000, learning_rate=0.05, subsample=0.5,
max_depth=1, random_state=seed
)
monitor = EarlyStoppingMonitor(25, 100)
model.fit(X_train, y_train, monitor=monitor)
scoreTraining = model.score(X_train,y_train)
scoreTest = model.score(X_test,y_test)
feature = pd.Series(model.feature_importances_, index=lFeature)
return scoreTraining, scoreTest, feature
def survival_selection(data, k = 10, topk = 100, event = "event", n_core = 2, seed = 123):
from sksurv.datasets import get_x_y
from sklearn.model_selection import StratifiedKFold
import copy
from miopy.feature_selection import sort_abs
    # list of candidate survival models, chosen on the basis of the authors' previous work
modelList = [
[gradient_boosted_models,"Gradient Boosted Models"],
[SurvivalSVM,"Support Vector Machine"],
#[SurvivalForest,"Random Forest",],
[PenaltyCox,"Penalized Cox",]
]
print("Loading dataset...")
X, Y = get_x_y(data, attr_labels = [event,"time"], pos_label=0)
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state = np.random.RandomState(seed))
indexes = [ (training, test) for training, test in skf.split(X, Y) ]
lFeature = X.columns.tolist()
topFeatures = pd.Series(dtype='float64', index=lFeature).fillna(0)
lAll = []
DictScore = {}
dfTopCoef = pd.DataFrame(dtype='float64', index=lFeature).fillna(0)
for model, name in modelList :
print("\nClassifier " + name)
ListScore = []
classifierTopFeatures = pd.Series(dtype='float64', name = name, index=lFeature).fillna(0)
dfTopCoefTemp = pd.DataFrame(dtype='float64', index=lFeature).fillna(0)
i = 1
# iterate over all folds
for train_index, test_index in indexes :
X_train, X_test = X.iloc[train_index,:], X.iloc[test_index,:]
y_train, y_test = Y[train_index], Y[test_index]
try:
classifier = copy.deepcopy(model)
scoreTraining, scoreTest, features = classifier(X_train, y_train,\
X_test, y_test, lFeature = lFeature, n_core = n_core, seed = seed)
except Exception as error:
print(error)
else:
print("\ttraining: %.4f, test: %.4f" % (scoreTraining, scoreTest))
ListScore.append( scoreTest )
# now, let's get a list of the most important features, then mark the ones in the top X
orderedFeatures = sort_abs(features[features != 0]).round(3)
if topk <= len(orderedFeatures):
lF = orderedFeatures.index[0:topk].tolist()
else:
lF = orderedFeatures.index.tolist()
dfTopCoefTemp.loc[:, i] = orderedFeatures
for f in lF:
if orderedFeatures[f] != 0:
topFeatures[f] += 1
classifierTopFeatures[ f ] += 1
finally:
i +=1
dfTopCoef[name] = dfTopCoefTemp.apply(lambda row: row.mean(), axis=1)
print("\ttest mean: %.4f" % (np.mean(ListScore)))
DictScore[name] = np.mean(ListScore)
lAll.append(classifierTopFeatures)
feature_per = topFeatures.div(len(modelList)*k)*100
feature_per = feature_per.sort_values(ascending=False)[:topk]
dAll = pd.DataFrame(lAll).div(k)*100
return feature_per, dAll, DictScore, dfTopCoef
########################
### Survival Analysis ##
########################
def get_exprs_cutoff(exprDF, target="hsa-miR-223-3p", q = 0.5, treshold = None, optimal = True):
from scipy import stats
if optimal:
q, treshold = get_survival_cutoff(exprDF = exprDF, time = "time", event = "event", target = target)
else:
if treshold != None:
q = stats.percentileofscore(exprDF[target],treshold)/100
else:
treshold = exprDF[target].quantile(q)
return q, treshold
def split_by_exprs(exprDF, target="hsa-miR-223-3p", treshold = 0.5):
exprDF["exprs"] = None
is_higher = exprDF[target] >= float(treshold)
exprDF["exprs"] = exprDF["exprs"].mask(is_higher, 1)
exprDF["exprs"] = exprDF["exprs"].mask(~is_higher, 0)
#print("Splitted")
return exprDF
def get_survival_cutoff(exprDF = "exprDF", time = "time", event = "event", target = "target"):
lPoint = exprDF[target].unique().tolist()
df = pd.DataFrame()
for point in lPoint:
q, treshold = get_exprs_cutoff(exprDF, target=target, treshold = point, optimal = False)
if 0.1 < q < 0.9:
try:
tRes = get_hazard_ratio(split_by_exprs(exprDF, target=target, treshold = treshold))
except Exception as error:
print(error)
tRes = (0, 1,)
dfTemp = pd.Series({"Target":target,"Q":q,"Cutpoint":treshold,"HR":tRes[0],"pval":tRes[1]})
df = pd.concat([df,dfTemp], axis = 1)
df = df.transpose()
df["P_ADJ"] = df.pval.apply(lambda x: -1.63 * x * (1 + 2.35 * np.log(x)))
df = df.query("0.001 < pval < 0.1")
df = df.sort_values("P_ADJ")
row = df.iloc[0,:]
print(df)
return row["Q"], row["Cutpoint"]
def get_hazard_ratio(exprDF, target = "exprs"):
np.seterr(divide='ignore', invalid='ignore')
cph = CoxPHFitter()
cph.fit(exprDF[[target,"time","event"]].dropna(), "time", event_col = "event")
pval = cph.summary["p"][target]
hr_high, hr_low = cph.summary["exp(coef) upper 95%"][target], cph.summary["exp(coef) lower 95%"][target]
log_hr = cph.summary["exp(coef)"][target]
#print(cph.summary)
return (log_hr, pval, hr_high, hr_low)
def obatin_hr(ltarget, exprDF = None):
lhr = []
for target in ltarget:
try:
q, treshold = get_exprs_cutoff(exprDF, target=target, q=0.5, optimal = False)
print(q), print(treshold)
tRes = get_hazard_ratio(split_by_exprs(exprDF, target=target, treshold = treshold))
print("%s"%(target))
print(tRes)
hr = tRes[0]
except Exception as error:
print(error)
hr = 1
finally:
lhr.append(hr)
df = pd.DataFrame({"target":ltarget,"log(hr)":lhr})
return df
def obatin_hr_by_exprs(ltarget, exprDF = None):
lhr = []
for target in ltarget:
try:
tRes = get_hazard_ratio(exprDF, target = target)
hr = tRes[0]
except Exception as error:
print(error)
hr = 0
finally:
lhr.append(hr)
print("Lista HR")
print(lhr)
print(len(ltarget)), print(len(lhr))
df = pd.DataFrame({"target":ltarget,"log(hr)":lhr})
print("DF inside obtain_hr")
print(df)
return df
def same_length(list_lists):
lmax = 0
for l in list_lists:
lmax = max(lmax, len(l))
new_l = []
for l in list_lists:
ll = len(l)
if ll < lmax:
l += ["foo"] * (lmax - ll)
new_l.append(l)
return new_l
def hazard_ratio(lGeneUser = None, lMirUser = None, exprDF = None, n_core = 4):
### Intersect with Gene and Mir from table##
lMir, lGene = header_list(exprDF=exprDF)
if lGeneUser is not None:
lGene = intersection(lGene, lGeneUser)
if lMirUser is not None:
lMir = intersection(lMir, lMirUser)
lTarget = lGene+lMir
print(exprDF)
##Split List
np_list_split = np.array_split(lTarget, n_core)
split_list = [i.tolist() for i in np_list_split]
#split_list = same_length(split_list)
#Fix Exprs Variable
partial_func = functools.partial(obatin_hr, exprDF=exprDF)
#Generating Pool
pool = Pool(n_core)
lres = pool.map(partial_func, split_list)
print("lResultados")
print(lres)
res = pd.concat(lres)
pool.close()
pool.join()
print(res)
return res
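# --- Illustrative usage sketch (editor addition, assumptions flagged). ---
# Assumes this module is importable alongside miopy.process_data (for
# load_dataset/concat_matrix) and that the bundled metadata exposes "time"
# and "event" columns; both are assumptions, not guaranteed by this file.
# Because hazard_ratio spawns a multiprocessing Pool, call it under a
# __main__ guard:
#
# if __name__ == "__main__":
#     from miopy.process_data import load_dataset, concat_matrix
#     dfMir, dfRna, metadata = load_dataset()
#     exprDF = concat_matrix(dfRna, dfMir).join(metadata[["time", "event"]])
#     res = hazard_ratio(exprDF=exprDF, n_core=4)
#     print(res.sort_values("log(hr)", ascending=False).head())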
|
icbi-lab/miopy
|
miopy/R_utils.py
|
import subprocess
from os import path, remove
import pandas as pd
import uuid
def read_count(file_path, sep = "\t"):
"""
Function to read dataset from the csv
Args:
file_path string path to the csv file
Return:
df dataframe DataFrame with the matrix count
"""
df = pd.read_csv(file_path, index_col=0, sep = sep)
return df
def tmm_normalization(fPath, bFilter):
lRemove = []
r_path = path.join(path.dirname(__file__), 'Rscript')
if not isinstance(fPath, str):
tempPath = "%s.csv"%(str(uuid.uuid1()))
fPath.to_csv(tempPath)
fPath = tempPath
lRemove.append(tempPath)
outPath = "%s.csv"%(str(uuid.uuid1()))
r_path = path.join(path.dirname(__file__), 'Rscript')
cmd = ["Rscript", path.join(r_path,"get_normal_counts.r"),\
"-f", fPath, "-o", outPath, "-t", bFilter]
a = subprocess.run(cmd,stdout=subprocess.PIPE)
df = read_count(outPath,",")
lRemove.append(outPath)
for p in lRemove:
remove(p)
return df
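# --- Illustrative usage sketch (editor addition). Requires Rscript plus the
# bundled get_normal_counts.r script; "counts.csv" is a hypothetical
# genes-x-samples raw count matrix, not a file shipped with the package. ---
def _example_tmm(counts_csv="counts.csv", filter_low="True"):
    counts = read_count(counts_csv, sep=",")
    return tmm_normalization(counts, filter_low)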
def get_survival_cutoff(fPath, time = "time", event = "event", target = "target"):
lRemove = []
r_path = path.join(path.dirname(__file__), 'Rscript')
if not isinstance(fPath, str):
tempPath = "%s.csv"%(str(uuid.uuid1()))
fPath.to_csv(tempPath)
fPath = tempPath
lRemove.append(tempPath)
outPath = "%s.csv"%(str(uuid.uuid1()))
r_path = path.join(path.dirname(__file__), 'Rscript')
cmd = ["Rscript", path.join(r_path,"get_survival_cutoff.r"),\
"-f", fPath, "-o", outPath,"-t", time,"-e",event,"-g",target]
a = subprocess.run(cmd,stdout=subprocess.PIPE)
df = pd.read_csv(outPath,",")
print(df)
lRemove.append(outPath)
for p in lRemove:
remove(p)
return float(df["cutpoint"])
def deg_edger(fPath, metaFile, bNormal="False", bFilter="False", bPaired="False", group = "event"):
lRemove = []
r_path = path.join(path.dirname(__file__), 'Rscript')
if not isinstance(fPath, str):
tempPath = "%s.csv"%(str(uuid.uuid1()))
fPath.to_csv(tempPath)
fPath = tempPath
lRemove.append(tempPath)
if not isinstance(metaFile, str):
tempMetaFile = "%s.csv"%(str(uuid.uuid1()))
metaFile.to_csv(tempMetaFile)
metaFile = tempMetaFile
lRemove.append(tempMetaFile)
outPath = "%s.csv"%(str(uuid.uuid1()))
cmd = ["Rscript", path.join(r_path,"get_deg.r"),\
"-f", fPath, "-m",metaFile,"-o", outPath,\
"-n", bNormal,"-t",bFilter, "-p", bPaired,\
"-g",group]
a = subprocess.run(cmd,stdout=subprocess.PIPE)
DegDf = read_count(outPath,",")
lRemove.append(outPath)
for p in lRemove:
remove(p)
return DegDf
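# --- Illustrative usage sketch (editor addition). Both CSV paths are
# hypothetical; the metadata table must contain the grouping column passed
# via `group` (here the default "event"). ---
def _example_deg(counts_csv="counts.csv", meta_csv="metadata.csv"):
    counts = read_count(counts_csv, sep=",")
    meta = read_count(meta_csv, sep=",")
    return deg_edger(counts, meta, bNormal="True", bFilter="True",
                     bPaired="False", group="event")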
def voom_normalization(fPath, bFilter):
lRemove = []
r_path = path.join(path.dirname(__file__), 'Rscript')
if not isinstance(fPath, str):
tempPath = "%s.csv"%(str(uuid.uuid1()))
fPath.to_csv(tempPath)
fPath = tempPath
lRemove.append(tempPath)
outPath = "%s.csv"%(str(uuid.uuid1()))
r_path = path.join(path.dirname(__file__), 'Rscript')
cmd = ["Rscript", path.join(r_path,"get_voom.r"),\
"-f", fPath, "-o", outPath, "-t", bFilter]
a = subprocess.run(cmd,stdout=subprocess.PIPE)
df = read_count(outPath,",")
lRemove.append(outPath)
for p in lRemove:
remove(p)
return df
def deg_limma_array(fPath, metaFile, bNormal,bFilter, bPaired):
lRemove = []
r_path = path.join(path.dirname(__file__), 'Rscript')
if not isinstance(fPath, str):
tempPath = "%s.csv"%(str(uuid.uuid1()))
fPath.to_csv(tempPath)
fPath = tempPath
lRemove.append(tempPath)
if not isinstance(metaFile, str):
tempMetaFile = "%s.csv"%(str(uuid.uuid1()))
metaFile.to_csv(tempMetaFile)
metaFile = tempMetaFile
lRemove.append(tempMetaFile)
outPath = "%s.csv"%(str(uuid.uuid1()))
cmd = ["Rscript", path.join(r_path,"get_de_array_limma.r"),\
"-f", fPath, "-m",metaFile,"-o", outPath,\
"-n", bNormal,"-t",bFilter, "-p", bPaired]
a = subprocess.run(cmd,stdout=subprocess.PIPE)
DegDf = read_count(outPath,",")
lRemove.append(outPath)
for p in lRemove:
remove(p)
return DegDf
|
icbi-lab/miopy
|
miopy/metrics.py
|
import numpy as np
from scipy.stats import rankdata
from scipy.signal import decimate
import pandas as pd
import math
import time
import matplotlib.pyplot as plt
from sklearn.preprocessing import KBinsDiscretizer
import warnings
"""
Implements the Randomized Dependence Coefficient
<NAME>, <NAME>, <NAME>
http://papers.nips.cc/paper/5138-the-randomized-dependence-coefficient.pdf
"""
def rdc(x, y, f=np.sin, k=20, s=1/6., n=1):
"""
Computes the Randomized Dependence Coefficient
x,y: numpy arrays 1-D or 2-D
If 1-D, size (samples,)
If 2-D, size (samples, variables)
f: function to use for random projection
k: number of random projections to use
s: scale parameter
n: number of times to compute the RDC and
return the median (for stability)
According to the paper, the coefficient should be relatively insensitive to
the settings of the f, k, and s parameters.
"""
if n > 1:
values = []
for i in range(n):
try:
values.append(rdc(x, y, f, k, s, 1))
except np.linalg.LinAlgError: pass
return np.median(values)
if len(x.shape) == 1: x = x.reshape((-1, 1))
if len(y.shape) == 1: y = y.reshape((-1, 1))
# Copula Transformation
cx = np.column_stack([rankdata(xc, method='ordinal') for xc in x.T])/float(x.size)
cy = np.column_stack([rankdata(yc, method='ordinal') for yc in y.T])/float(y.size)
# Add a vector of ones so that w.x + b is just a dot product
O = np.ones(cx.shape[0])
X = np.column_stack([cx, O])
Y = np.column_stack([cy, O])
# Random linear projections
Rx = (s/X.shape[1])*np.random.randn(X.shape[1], k)
Ry = (s/Y.shape[1])*np.random.randn(Y.shape[1], k)
X = np.dot(X, Rx)
Y = np.dot(Y, Ry)
# Apply non-linear function to random projections
fX = f(X)
fY = f(Y)
# Compute full covariance matrix
C = np.cov(np.hstack([fX, fY]).T)
# Due to numerical issues, if k is too large,
# then rank(fX) < k or rank(fY) < k, so we need
# to find the largest k such that the eigenvalues
# (canonical correlations) are real-valued
k0 = k
lb = 1
ub = k
while True:
# Compute canonical correlations
Cxx = C[:k, :k]
Cyy = C[k0:k0+k, k0:k0+k]
Cxy = C[:k, k0:k0+k]
Cyx = C[k0:k0+k, :k]
eigs = np.linalg.eigvals(np.dot(np.dot(np.linalg.pinv(Cxx), Cxy),
np.dot(np.linalg.pinv(Cyy), Cyx)))
# Binary search if k is too large
if not (np.all(np.isreal(eigs)) and
0 <= np.min(eigs) and
np.max(eigs) <= 1):
ub -= 1
k = (ub + lb) // 2
continue
if lb == ub: break
lb = k
if ub == lb + 1:
k = ub
else:
k = (ub + lb) // 2
return np.sqrt(np.max(eigs))
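# --- Illustrative check (editor addition): RDC should be clearly positive for
# a noisy quadratic dependence, where Pearson correlation is near zero. ---
def _example_rdc(n=500, seed=0):
    rng = np.random.RandomState(seed)
    x = rng.uniform(-1, 1, n)
    y = x ** 2 + 0.05 * rng.randn(n)
    return rdc(x, y, n=5)  # median of 5 repetitions for stability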
warnings.filterwarnings("ignore", category=UserWarning)
def hoeffding(*arg):
"""
ref: https://github.com/PaulVanDev/HoeffdingD
Hoeffding’s test for dependence was proposed by
<NAME> (1948) as a test of correlation for two variables
with continuous distribution functions. Hoeffding’s D is a
nonparametric measure of the distance between the joint distribution F(x, y)
and the product of marginal distributions F1(x)F2(y). The advantage of this statistic
lies in the fact that it has more power to detect non-monotonic dependency structures
compared to other more common measures (Pearson, Kendall, Spearman)
"""
if(len(arg)==1):
if isinstance(arg[0], pd.DataFrame):
if(arg[0].shape[0]>1):
return arg[0].apply(lambda x: arg[0].apply(lambda y: hoeffding(x.values, y.values)))
else:
if(len(arg)==2):
if type(arg[0]) is np.ndarray:
if (len(arg[0].shape)>1):
return print("ERROR inputs : hoeffding(df >2col) or hoeffding(numpy.array -1d- ,numpy.array -1d-)")
if type(arg[1]) is np.ndarray:
if (len(arg[1].shape)>1):
return print("ERROR inputs : hoeffding(df >2col) or hoeffding(numpy.array -1d- ,numpy.array -1d-)")
xin=arg[0]
yin=arg[1]
#crop data to the smallest array, length have to be equal
if len(xin)<len(yin):
yin=yin[:len(xin)]
if len(xin)>len(yin):
xin=xin[:len(yin)]
# dropna
x = xin[~(np.isnan(xin) | np.isnan(yin))]
y = yin[~(np.isnan(xin) | np.isnan(yin))]
# undersampling if length too long
lenx=len(x)
if lenx>99999:
factor=math.ceil(lenx/100000)
x=x[::factor]
y=y[::factor]
# bining if too much "definition"
if len(np.unique(x))>50:
est = KBinsDiscretizer(n_bins=50, encode='ordinal', strategy='quantile') #faster strategy='quantile' but less accurate
est.fit(x.reshape(-1, 1))
Rtemp = est.transform(x.reshape(-1, 1))
R=rankdata(Rtemp)
else:
R=rankdata(x)
if len(np.unique(y))>50:
est1 = KBinsDiscretizer(n_bins=50, encode='ordinal', strategy='quantile') #faster strategy='quantile' but less accurate
est1.fit(y.reshape(-1, 1))
Stemp = est1.transform(y.reshape(-1, 1))
S=rankdata(Stemp)
else:
S=rankdata(y)
# core processing
N=x.shape
dico={(np.nan,np.nan):np.nan}
dicoRin={np.nan:np.nan}
dicoSin={np.nan:np.nan}
dicoRless={np.nan:np.nan}
dicoSless={np.nan:np.nan}
Q=np.ones(N[0])
i=0;
for r,s in np.nditer([R,S]):
r=float(r)
s=float(s)
if (r,s) in dico.keys():
Q[i]=dico[(r,s)]
else:
if r in dicoRin.keys():
isinR=dicoRin[r]
lessR=dicoRless[r]
else:
isinR=np.isin(R,r)
dicoRin[r]=isinR
lessR=np.less(R,r)
dicoRless[r]=lessR
if s in dicoSin.keys():
isinS=dicoSin[s]
lessS=dicoSless[s]
else:
isinS=np.isin(S,s)
dicoSin[s]=isinS
lessS=np.less(S,s)
dicoSless[s]=lessS
Q[i] = Q[i] + np.count_nonzero(lessR & lessS) \
+ 1/4 * (np.count_nonzero(isinR & isinS)-1) \
+ 1/2 * (np.count_nonzero(isinR & lessS)) \
+ 1/2 * (np.count_nonzero(lessR & isinS))
dico[(r,s)]=Q[i]
i+=1
D1 = np.sum( np.multiply((Q-1),(Q-2)) );
D2 = np.sum( np.multiply(np.multiply((R-1),(R-2)),np.multiply((S-1),(S-2)) ) );
D3 = np.sum( np.multiply(np.multiply((R-2),(S-2)),(Q-1)) );
D = 30*((N[0]-2)*(N[0]-3)*D1 + D2 - 2*(N[0]-2)*D3) / (N[0]*(N[0]-1)*(N[0]-2)*(N[0]-3)*(N[0]-4));
return D
return print("ERROR inputs : hoeffding(df >2col) or hoeffding(numpy.array -1d- ,numpy.array -1d-)")
|
icbi-lab/miopy
|
miopy/process_data.py
|
import pandas as pd
import numpy as np
from os import path
import io
import re
def _get_path_data():
return path.join(path.dirname(__file__), 'data')
def get_target_query(method = "and", lTarget = []):
#Build DB query:
dbQ = ""
lHeader = load_matrix_header()
if method == "and":
for db in lHeader:
if db in lTarget:
dbQ += "1"
else:
dbQ += "."
elif method == "or":
lQ = []
nullString = "."*len(lHeader)
for db in lHeader:
if db in lTarget:
i = lHeader.index(db)
q = list(nullString)
q[i] = "1"
lQ.append("".join(q))
dbQ="("+"|".join(lQ)+")"
return dbQ
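# --- Illustrative note (editor addition). ---
# get_target_query builds a dot/"1" pattern over the tool columns listed in
# data/MATRIX_LIST.txt. With method="and" every selected tool's position must
# be "1" (all tools have to support the interaction); with method="or" an
# alternation is returned, so one supporting tool is enough. A hedged sketch
# (tool names hypothetical; the "Prediction Tools" column name is taken from
# count_db further down in this file):
# pattern = get_target_query(method="or", lTarget=["TargetScan", "miRDB"])
# table = load_table_counts()
# hits = table[table["Prediction Tools"].str.match(pattern)]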
def load_matrix_header():
import codecs
import pkg_resources
"""Return a dataframe about miRNA/Gene prediction tool.
Contains the following fields:
col gene symbol
index mirbase mature id
"""
stream = pkg_resources.resource_stream(__name__, 'data/MATRIX_LIST.txt')
return stream.read().decode('utf-8').split()
def load_matrix_counts():
import pkg_resources
"""Return a dataframe about miRNA/Gene prediction tool.
Contains the following fields:
col gene symbol
index mirbase mature id
"""
stream = pkg_resources.resource_filename(__name__, 'data/MATRIX.pickle.gz')
return pd.read_pickle(stream)
def load_table_counts():
import pkg_resources
"""Return a dataframe about miRNA/Gene prediction tool.
Contains the following fields:
col gene symbol
index mirbase mature id
"""
stream = pkg_resources.resource_filename(__name__, 'data/MATRIX_TABLE.pickle.gz')
return pd.read_pickle(stream)
def load_synthetic():
import pkg_resources
"""Return a dataframe about Gene/Gene synthetic lehal
Contains the following fields:
col1 gene symbol
col2 gene symbol
"""
stream = pkg_resources.resource_stream(__name__, 'data/SL.tsv')
return pd.read_csv(stream, sep = "\t",header=None, names = ["GeneA","GeneB"])
def load_dataset():
import pkg_resources
"""Return a 3dataframe about Gene/Gene synthetic lehal
Contains the following fields:
col1 gene symbol
col2 gene symbol
"""
stream = pkg_resources.resource_stream(__name__, 'dataset/TCGA-OV_miRNAs.csv')
dfMir = pd.read_csv(stream, index_col=0)
stream = pkg_resources.resource_stream(__name__, 'dataset/TCGA-OV_RNAseq.csv')
dfRna = pd.read_csv(stream, index_col=0)
stream = pkg_resources.resource_stream(__name__, 'dataset/metadata.csv')
metadata = pd.read_csv(stream, index_col=0)
return dfMir, dfRna, metadata
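# --- Illustrative usage sketch (editor addition). Assumes the bundled
# matrices are stored features-x-samples, as concat_matrix below expects. ---
def _example_load_dataset():
    """Load the bundled TCGA-OV example data and report basic dimensions."""
    dfMir, dfRna, metadata = load_dataset()
    shared = intersection(dfMir.columns.tolist(), dfRna.columns.tolist())
    print(f"{dfMir.shape[0]} miRNAs, {dfRna.shape[0]} genes, "
          f"{len(shared)} shared samples, metadata rows: {metadata.shape[0]}")
    return dfMir, dfRna, metadata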
def read_count(file_path, sep = "\t"):
"""
Function to read dataset from the csv
Args:
file_path string path to the csv file
Return:
df dataframe DataFrame with the matrix count
"""
df = pd.read_csv(file_path, index_col=0, sep = sep)
return df
def count_db(table):
table["Tool Number"] = table["Prediction Tools"].str.count("1")
return table
def concat_matrix(mrnaDF, mirDF):
"""
Function to concat the miR and Gene expression Matrix.
The sample name have to be the same in both Matrix.
With dropna, pandas remove all genes/mir with NA.
Args:
mrnaDF df Dataframe rows are genes and cols are samples
mirDF df Dataframe rows are mirs and cols are samples
Return:
exprDF df Concat Dataframe rows are samples and cols are gene/mirs
"""
# Stack gene and miR rows, keep only samples present in both matrices (drop columns with NA), then transpose to samples x features
exprDF = pd.concat([mrnaDF, mirDF]).dropna(axis = 1).transpose()
return exprDF
def header_list (exprDF):
"""
Function to obtain a list of the miR and genes present in the dataframe
Args:
exprDF df Concat Dataframe rows are samples and cols are gene/mirs
Return:
lMir list miR List
lGene list Gene List
"""
lAll = exprDF.columns.tolist()
patMir = re.compile("^hsa-...-*")
lMir = [i for i in lAll if patMir.match(i)]
lGene = [i for i in lAll if not patMir.match(i)]
return lMir, lGene
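# --- Illustrative sketch (editor addition): build the samples-x-features
# frame used throughout miopy and split its columns back into miRNAs/genes. ---
def _example_expression_frame():
    dfMir, dfRna, _ = load_dataset()
    exprDF = concat_matrix(dfRna, dfMir)   # samples as rows, genes + miRs as columns
    lMir, lGene = header_list(exprDF)
    print(f"{exprDF.shape[0]} samples, {len(lGene)} genes, {len(lMir)} miRNAs")
    return exprDF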
def intersection(lst1, lst2):
"""
Python program to illustrate the intersection
of two lists using set() method
"""
return list(set(lst1).intersection(lst2))
def GetMethodList(lHeader):
lDefault = ["R", "Rho","Tau","Hoeffding","RDC","Lasso", "Ridge","ElasticNet","Lars","Random Forest","Log(HR)"]
return intersection(lHeader,lDefault)
def get_confident_df(df):
dfConf = df / 40
return dfConf
def get_confident_serie(serie):
serie = serie / 40
return serie
|