Dataset schema. Each record below lists these fields in this order, separated by "|", with the source file text in the content field:

- blob_id: string (length 40)
- directory_id: string (length 40)
- path: string (length 2-616)
- content_id: string (length 40)
- detected_licenses: list (0-69 items)
- license_type: string (2 classes)
- repo_name: string (length 5-118)
- snapshot_id: string (length 40)
- revision_id: string (length 40)
- branch_name: string (length 4-63)
- visit_date: timestamp[us]
- revision_date: timestamp[us]
- committer_date: timestamp[us]
- github_id: int64 (2.91k-686M, nullable)
- star_events_count: int64 (0-209k)
- fork_events_count: int64 (0-110k)
- gha_license_id: string (23 classes)
- gha_event_created_at: timestamp[us]
- gha_created_at: timestamp[us]
- gha_language: string (213 classes)
- src_encoding: string (30 classes)
- language: string (1 class)
- is_vendor: bool (2 classes)
- is_generated: bool (2 classes)
- length_bytes: int64 (2-10.3M)
- extension: string (246 classes)
- content: string (length 2-10.3M)
- authors: list (1 item)
- author_id: string (length 0-212)
41e9b4afbab2d779f665fd3575b54ad8011ea7a8
|
60715c9ea4c66d861708531def532814eab781fd
|
/python-programming-workshop/interesting_programs/find_top_n_items_occuring_in_a_list.py
|
a80d1580bc755548d78d93a5e1a576fcfbf06feb
|
[] |
no_license
|
bala4rtraining/python_programming
|
6ce64d035ef04486f5dc9572cb0975dd322fcb3e
|
99a5e6cf38448f5a01b310d5f7fa95493139b631
|
refs/heads/master
| 2023-09-03T00:10:26.272124
| 2021-11-01T08:20:52
| 2021-11-01T08:20:52
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 767
|
py
|
# example.py
#
# Determine the most common words in a list
words = [
'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',
'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',
'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into',
'my', 'eyes', "you're", 'under'
]
from collections import Counter
word_counts = Counter(words)
print(word_counts)
top_three = word_counts.most_common(3)
print(top_three)
# outputs [('eyes', 8), ('the', 5), ('look', 4)]
# Example of merging in more words
morewords = ['why','are','you','not','looking','in','my','eyes','my','actually','actually',
'actually','actually','actually','actually']
word_counts.update(morewords)
print(word_counts)
print(word_counts.most_common(3))
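# (Added illustration, not part of the original script.) Counter objects also
# support arithmetic, which combines counts without mutating either operand:
a = Counter(words)
b = Counter(morewords)
print((a + b).most_common(3))   # same result as the update() call above, with a and b unchanged
print(a - b)                    # counts in a reduced by counts in b (non-positive counts dropped)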
|
[
"karthikkannan@gmail.com"
] |
karthikkannan@gmail.com
|
2a745553b9c631a13ce660834e8b05dfce2df968
|
c839961aeab22795200d9edef9ba043fe42eeb9c
|
/data/script763.py
|
1b0213959badc4deafd6a607917ba8d0b1a00f22
|
[] |
no_license
|
StevenLOL/kaggleScape
|
ad2bb1e2ed31794f1ae3c4310713ead1482ffd52
|
18bede8420ab8d2e4e7c1eaf6f63280e20cccb97
|
refs/heads/master
| 2020-03-17T05:12:13.459603
| 2018-05-02T19:35:55
| 2018-05-02T19:35:55
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 10,785
|
py
|
# coding: utf-8
# # Aim
# This is a small yet useful kernel providing an introduction to **Artificial Neural Networks** for people who want to begin their journey into the field of **deep learning**. For this, I have used Keras, which is a high-level Neural Networks API built on top of low-level neural network APIs like Tensorflow and Theano. As it is high-level, many things are already taken care of, so it is easy to work with and a great tool to start with. [Here's the documentation for keras](https://keras.io/)
#
# # What is Deep learning?
# Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks. Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.
#
#
# # What are artificial neural networks?
# An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN because a neural network changes - or learns, in a sense - based on that input and output. ANNs are considered nonlinear statistical data modeling tools where the complex relationships between inputs and outputs are modeled or patterns are found. An ANN is also known simply as a neural network.
#
# <img src="https://cdn-images-1.medium.com/max/1000/1*ZX05x1xYgaVoa4Vn2kKS9g.png">
# A single neuron is known as a perceptron. It consists of a layer of inputs (corresponding to the columns of a dataframe). Each input has a weight which controls the magnitude of that input.
# The summation of the products of these input values and weights is fed to the activation function. Activation functions are really important for an Artificial Neural Network to learn and make sense of something really complicated, and to model the non-linear, complex functional mappings between the inputs and the response variable.
#
# They introduce non-linear properties to our network. Their main purpose is to convert the input signal of a node in an ANN to an output signal. That output signal is then used as an input to the next layer in the stack. Specifically, in an ANN we take the sum of products of the inputs (X) and their corresponding weights (W) and apply an activation function f(x) to it to get the output of that layer, which is fed as an input to the next layer. [Refer to this article for more info.](https://towardsdatascience.com/activation-functions-and-its-types-which-is-better-a9a5310cc8f)
# <img src="https://cdnpythonmachinelearning.azureedge.net/wp-content/uploads/2017/09/Single-Perceptron.png">
# **Concept of backpropagation** - Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.
# <img src="https://www.researchgate.net/profile/Hassan_Al-Haj_Ibrahim/publication/235338024/figure/fig6/AS:299794191929349@1448487913220/Flow-chart-for-the-back-propagation-BP-learning-algorithm.png">
# **Gradient Descent** - To explain Gradient Descent I’ll use the classic mountaineering example. Suppose you are at the top of a mountain, and you have to reach a lake which is at the lowest point of the mountain (a.k.a valley). A twist is that you are blindfolded and you have zero visibility to see where you are headed. So, what approach will you take to reach the lake? The best way is to check the ground near you and observe where the land tends to descend. This will give an idea in what direction you should take your first step. If you follow the descending path, it is very likely you would reach the lake. [Refer to this article for more information.](https://www.analyticsvidhya.com/blog/2017/03/introduction-to-gradient-descent-algorithm-along-its-variants/)
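# In[ ]:
# (Added illustration, not part of the original kernel.) A minimal gradient-descent
# sketch for a single linear neuron y = w*x with squared error E = 0.5*(y - t)**2.
# The gradient dE/dw = (y - t)*x is the delta rule that backpropagation generalises
# to multilayer networks; repeatedly stepping against the gradient is the
# "walk downhill" idea described above. All numbers here are made up.
w, x, t, lr = 0.0, 2.0, 1.0, 0.1      # weight, input, target, learning rate
for step in range(25):
    y = w * x                          # forward pass
    grad_w = (y - t) * x               # dE/dw via the chain rule
    w -= lr * grad_w                   # one descent step
print(w)                               # w*x is now close to the target t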
# # About the Breast Cancer Wisconsin (Diagnostic) Data Set
# Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
#
# This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu cd math-prog/cpo-dataset/machine-learn/WDBC/
#
# Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
#
# Attribute Information:
#
# 1) ID number 2) Diagnosis (M = malignant, B = benign) 3-32)
#
# Ten real-valued features are computed for each cell nucleus:
#
# a) radius (mean of distances from center to points on the perimeter) b) texture (standard deviation of gray-scale values) c) perimeter d) area e) smoothness (local variation in radius lengths) f) compactness (perimeter^2 / area - 1.0) g) concavity (severity of concave portions of the contour) h) concave points (number of concave portions of the contour) i) symmetry j) fractal dimension ("coastline approximation" - 1)
#
# The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.
#
# All feature values are recorded with four significant digits.
#
# Missing attribute values: none
#
# Class distribution: 357 benign, 212 malignant
# In[ ]:
# Importing libraries
import pandas as pd
import numpy as np
# Importing data
data = pd.read_csv('../input/data.csv')
del data['Unnamed: 32']
# In[ ]:
X = data.iloc[:, 2:].values
y = data.iloc[:, 1].values
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder
labelencoder_X_1 = LabelEncoder()
y = labelencoder_X_1.fit_transform(y)
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1, random_state = 0)
# **Now that we have prepared data, we will import Keras and its packages.**
# In[ ]:
import keras
from keras.models import Sequential
from keras.layers import Dense
# In[ ]:
# Initialising the ANN
classifier = Sequential()
# In[ ]:
# Adding the input layer and the first hidden layer
classifier.add(Dense(output_dim=16, init='uniform', activation='relu', input_dim=30))
# input_dim - number of columns of the dataset
#
# output_dim - number of outputs to be fed to the next layer, if any
#
# activation - activation function which is ReLU in this case
#
# init - the way in which weights should be provided to an ANN
#
# The **ReLU** function is f(x)=max(0,x). Usually this is applied element-wise to the output of some other function, such as a matrix-vector product. In MLP usages, rectifier units replace all other activation functions except perhaps the readout layer. But I suppose you could mix-and-match them if you'd like. One way ReLUs improve neural networks is by speeding up training. The gradient computation is very simple (either 0 or 1 depending on the sign of x). Also, the computational step of a ReLU is easy: any negative elements are set to 0.0 -- no exponentials, no multiplication or division operations. Gradients of logistic and hyperbolic tangent networks are smaller than the positive portion of the ReLU. This means that the positive portion is updated more rapidly as training progresses. However, this comes at a cost. The 0 gradient on the left-hand side has its own problem, called "dead neurons," in which a gradient update sets the incoming values to a ReLU such that the output is always zero; modified ReLU units such as ELU (or Leaky ReLU etc.) can minimize this. Source : [StackExchange](https://stats.stackexchange.com/questions/226923/why-do-we-use-relu-in-neural-networks-and-how-do-we-use-it)
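# In[ ]:
# (Added illustration, not part of the original kernel.) The ReLU described above,
# f(x) = max(0, x), and its gradient (0 for negative inputs, 1 for positive),
# sketched with NumPy on a few made-up values.
z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(0, z)                # negative elements are set to 0
relu_grad = (z > 0).astype(float)      # gradient is 0 or 1 depending on the sign of z
print(relu, relu_grad)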
# In[ ]:
# Adding the second hidden layer
classifier.add(Dense(output_dim=16, init='uniform', activation='relu'))
# In[ ]:
# Adding the output layer
classifier.add(Dense(output_dim=1, init='uniform', activation='sigmoid'))
# output_dim is 1 as we want only 1 output from the final layer.
#
# The sigmoid function is used when dealing with classification problems with two classes of results. (The softmax function is used for three or more classes.)
# <img src="https://cdn-images-1.medium.com/max/1000/1*Xu7B5y9gp0iL5ooBj7LtWw.png">
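# In[ ]:
# (Added illustration, not part of the original kernel.) The sigmoid squashes any real
# number into (0, 1), which is why it suits a single-output binary classifier: the
# output can be read as the probability of the positive class.
z = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(1 / (1 + np.exp(-z)))            # close to 0 or 1 at the extremes, 0.5 at z = 0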
# In[ ]:
# Compiling the ANN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Optimizer is chosen as adam for gradient descent.
#
# Binary_crossentropy is the loss function used.
#
# Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. So predicting a probability of .012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0. [More about this](http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html)
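# In[ ]:
# (Added illustration, not part of the original kernel.) Binary cross-entropy for a
# single prediction: -[y*log(p) + (1-y)*log(1-p)]. As described above, a confident
# wrong prediction (p = 0.012 for a true label of 1) is penalised heavily, while a
# confident correct one costs almost nothing.
def binary_crossentropy(y_true, p):
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(binary_crossentropy(1, 0.012))   # large loss, roughly 4.4
print(binary_crossentropy(1, 0.95))    # small loss, roughly 0.05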
# In[ ]:
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size=400, nb_epoch=400)
# Long scroll ahead, but worth it.
# The batch size and number of epochs have been set using trial and error. Still looking for more efficient ways. Open to suggestions.
# Batch size defines the number of samples that are going to be propagated through the network.
#
# An Epoch is a complete pass through all the training data.
#
# # So, we get more than 94% accuracy
#
# You can manipulate the above algorithm to get even better results.
# In[ ]:
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# In[ ]:
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# In[ ]:
print("Our accuracy is {}".format((cm[0][0] + cm[1][1])/57))
# Thanks for reading this. May this help you on your "deep" journey into machine learning.
|
[
"adithyagirish@berkeley.edu"
] |
adithyagirish@berkeley.edu
|
033d868494a1885d332dd755e35485ada7a883c0
|
39f800b66f5c3c6d98fb41e5551cfb8c1959f4f3
|
/pyspark/test/bigdl/keras/test_load_model.py
|
0be15f22b02c895bcdbadf7b47c35b2496357bd4
|
[
"Apache-2.0"
] |
permissive
|
GaryHalo/BigDL
|
ec523d13305880e9dde39d46cd9601eab5f277f5
|
987053d05fa55a685a25ac2e0ad17470433688fb
|
refs/heads/master
| 2022-10-30T20:11:51.622634
| 2021-04-21T02:14:37
| 2021-04-21T02:14:37
| 143,459,472
| 1
| 0
|
Apache-2.0
| 2022-10-05T00:10:17
| 2018-08-03T18:14:56
|
Scala
|
UTF-8
|
Python
| false
| false
| 4,683
|
py
|
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import numpy as np
import pytest
from numpy.testing import assert_allclose
import bigdl.nn.layer as BLayer
from bigdl.keras.converter import WeightLoader
from bigdl.keras.converter import DefinitionLoader
np.random.seed(1337) # for reproducibility
from test.bigdl.test_utils import BigDLTestCase, TestModels
from bigdl.examples.keras.keras_utils import *
import keras.backend as K
class TestLoadModel(BigDLTestCase):
def __kmodel_load_def_weight_test(self, kmodel, input_data):
keras_model_path_json, keras_model_path_hdf5 = dump_keras(kmodel, dump_weights=True)
bmodel = DefinitionLoader.from_json_path(keras_model_path_json)
WeightLoader.load_weights_from_hdf5(bmodel,
kmodel,
keras_model_path_hdf5)
bmodel.training(False)
boutput = bmodel.forward(input_data)
koutput = kmodel.predict(input_data)
assert_allclose(boutput, koutput, rtol=1e-5)
def test_load_api_with_hdf5(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_graph_1_layer()
keras_model_json_path, keras_model_hdf5_path = dump_keras(kmodel, dump_weights=True)
bmodel = BLayer.Model.load_keras(json_path=keras_model_json_path,
hdf5_path=keras_model_hdf5_path)
self.assert_allclose(kmodel.predict(input_data),
bmodel.forward(input_data))
def test_load_model_with_hdf5_with_definition(self):
kmodel, input_data, output_data = TestModels.kmodel_graph_1_layer()
keras_model_json_path, keras_model_hdf5_path = dump_keras(kmodel, dump_weights=True)
bmodel = BLayer.Model.load_keras(hdf5_path=keras_model_hdf5_path)
self.assert_allclose(kmodel.predict(input_data),
bmodel.forward(input_data))
def test_load_api_no_hdf5(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_graph_1_layer()
keras_model_json_path, keras_model_hdf5_path = dump_keras(kmodel, dump_weights=True)
bmodel = BLayer.Model.load_keras(json_path=keras_model_json_path)
def test_load_def_weights_graph_1_layer(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_graph_1_layer()
self.__kmodel_load_def_weight_test(kmodel, input_data)
def test_load_def_weights_graph_activation(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_graph_activation_is_layer()
self.__kmodel_load_def_weight_test(kmodel, input_data)
def test_load_def_weights_kmodel_seq_lenet_mnist(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_seq_lenet_mnist()
self.__kmodel_load_def_weight_test(kmodel, input_data)
def test_load_definition(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_seq_lenet_mnist()
keras_model_json_path, keras_model_hdf5_path = dump_keras(kmodel, dump_weights=True)
bmodel = DefinitionLoader.from_json_path(keras_model_json_path)
WeightLoader.load_weights_from_kmodel(bmodel, kmodel)
self.assert_allclose(bmodel.forward(input_data), kmodel.predict(input_data))
def test_load_weights(self):
K.set_image_dim_ordering("th")
kmodel, input_data, output_data = TestModels.kmodel_graph_1_layer()
keras_model_json_path, keras_model_hdf5_path = dump_keras(kmodel, dump_weights=True)
bmodel = DefinitionLoader.from_json_path(keras_model_json_path)
kmodel.set_weights([kmodel.get_weights()[0] + 100, kmodel.get_weights()[1]])
WeightLoader.load_weights_from_hdf5(bmodel, kmodel, filepath=keras_model_hdf5_path)
self.assert_allclose(bmodel.forward(input_data), kmodel.predict(input_data))
if __name__ == "__main__":
pytest.main([__file__])
|
[
"noreply@github.com"
] |
GaryHalo.noreply@github.com
|
492b3eeb310bb197c2e37049e101fdfd915fa423
|
090e58e3bc859fdaf57035f5823c2427211945df
|
/src/Linear_Distributed_Delay_System/Linear_Distributed_Delay_Stability_Analysis_Example.py
|
d0a026c7b711b0625b05f4f4ed9315fe4c56c97c
|
[] |
no_license
|
WDash1/MMSC-Distributed_Delay
|
4a2708b90fe65412b7a9bc4f0a1020f6ad2ef641
|
71502409c1eba40385ff4f7f27e758d32ea44b0b
|
refs/heads/master
| 2022-12-12T05:39:44.300135
| 2020-08-23T09:02:41
| 2020-08-23T09:02:41
| 267,675,451
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,380
|
py
|
import numpy as NP
from pylab import figure, plot, xlabel, ylabel, legend, title, savefig
import matplotlib.pyplot as plt
from DistributedDelaySimulator import DistributedDelaySimulator;
# The number of points at which the Trapezium rule is used to discretise the
# integral.
n=20;
# The model parameters we wish to use for simulations.
alpha_amt = 50
alpha_values = NP.linspace(-10, 2, num=alpha_amt);
beta_amt = 50;
beta_values = NP.linspace(-20, 10, num=beta_amt);
# The time values at which we wish to compute the values of the trajectories.
t_values = NP.linspace(10, 20, num=300);
# Initial data for the simulations.
y0_values = lambda t: NP.sin(NP.sqrt(2)*t)+NP.cos(t);
# Produce trajectory simulations for each value of beta.
distributed_delay_simulator = DistributedDelaySimulator(n, t_values);
stability_matrix = NP.zeros((beta_amt, alpha_amt));
for i in range(0, alpha_amt):
for j in range(0, beta_amt):
alpha_value = alpha_values[i];
beta_value = beta_values[j];
y_sol = distributed_delay_simulator.generateTrajectory(y0_values,
alpha_value,
beta_value);
if(max(abs(y_sol))>1.0):
stability_matrix[j][i] = 6;
else:
stability_matrix[j][i] = 0;
# Plot the type of each fixed point in a bifurcation diagram.
XX, YY = NP.meshgrid(alpha_values, beta_values);
fig,ax = plt.subplots(1,1)
plt.title('Bifurcation Plot for Linear Distributed Delay System');
p = plt.imshow(stability_matrix,
extent=[min(alpha_values), max(alpha_values), max(beta_values), min(beta_values)],
aspect = (max(alpha_values)-min(alpha_values))/(max(beta_values)-min(beta_values)),
cmap=plt.cm.get_cmap('jet'));
plt.clim(0,10)
fig, ax = plt.subplots(1,1)
plt.title('Bifurcation Plot for Linear Distributed Delay System');
p = ax.imshow(stability_matrix,
extent=[min(alpha_values), max(alpha_values), max(beta_values), min(beta_values)],
aspect = (max(alpha_values)-min(alpha_values))/(max(beta_values)-min(beta_values)));
#plt.colorbar(p);
plt.xlabel(r'$\alpha$');
plt.ylabel(r'$\beta$');
plt.gca().invert_yaxis()
fig.savefig('fig2.png', dpi=300);
plt.show();
|
[
"William@ITSs-iMac.local"
] |
William@ITSs-iMac.local
|
f48e3965f482d60c5e43a0c45060ceee144d79f2
|
1adbd4b6b9b56f3ca45d8f7d244280415183a2b3
|
/src/particle_filter.py
|
3dbcbe41e43764025d63250ec746173b3c792865
|
[] |
no_license
|
shortstheory/RGBD-Tracking
|
d50258f3553d98302489c1a512c468648248493c
|
475a987b69ea7db6dacd88e97f0d121217533f37
|
refs/heads/master
| 2022-06-18T03:42:23.165484
| 2020-05-01T19:45:54
| 2020-05-01T19:45:54
| 255,496,271
| 3
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,424
|
py
|
import numpy as np
class PF:
def __init__(self, init_pose, num_p = 10, cov = 0.1, model = "velocity"):
'''
initialize the particle filter with num_p particles and a velocity or acceleration motion model
Inputs:
1. init_pose: initial pose obtained from first frame
2. num_p: number of particles
3. cov: covariance for noise to be added in the predict step
4. model: which motion model to use for the predict step. Currently, only supports constant velocity model
'''
self.num_p = num_p
self.model = model
if model == "velocity":
self.state_dims = 6
self.cov = cov * np.identity(self.state_dims)
self.best_p = init_pose
self.init_pose = init_pose
self.particles = np.random.multivariate_normal(self.init_pose, self.cov, self.num_p)
self.weights = np.ones((self.particles.shape[0]))/self.particles.shape[0]
def predict(self, dt = 0.1):
"""
Move the particles as per the motion model and then add noise
"""
self.propagate(dt)
noise = np.random.multivariate_normal(np.zeros((self.state_dims)), self.cov, self.num_p)
# print(noise)
self.particles += noise
def propagate(self,dt):
"""
apply the motion model
"""
F = np.identity((self.state_dims))
if self.model == "velocity":
F[0, -3] = dt
F[1, -2] = dt
F[2, -1] = dt
# print(F)
self.particles = np.matmul(F, self.particles[:,:,None])[:,:,0]
def update(self, correlation):
'''
Reweight the particles as per the correlation score
'''
self.weights = correlation/np.sum(correlation)
self.best_p = self.particles[np.argmax(self.weights),:]
def restratified_sampling(self):
'''
Resample the particles as per the distribution governed by current weights
'''
print("resampling particles!")
means = self.particles
weights = self.weights
N = self.weights.shape[0]
c = weights[0]
j = 0
u = np.random.uniform(0,1.0/N)
new_mean = np.zeros(means.shape)
new_weights = np.zeros(weights.shape)
for k in range(N):
beta = u + float(k)/N
while beta > c:
j += 1
c += weights[j]
# add point
new_mean[k] = means[j]
new_weights[k] = 1.0/N
self.particles = new_mean
self.weights = new_weights
if __name__ == "__main__":
init_pose = np.array([0, 0, 0, 0, 0, 0])
pf = PF(init_pose)
pf.predict()
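# (Added illustration, not part of the original file.) A hypothetical full cycle:
# after predict(), weight each particle with a made-up correlation score, read off
# the best pose, and resample. In the real tracker the correlation would come from
# comparing each particle's pose against the observed RGBD frame.
fake_correlation = np.random.uniform(0.0, 1.0, pf.num_p)
pf.update(fake_correlation)
print("best pose estimate:", pf.best_p)
pf.restratified_sampling()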
|
[
"arnav.dhamija@gmail.com"
] |
arnav.dhamija@gmail.com
|
624dc6a10958ace519ad830e661c28a837ba12ed
|
ae63c9d81a11c4ab10d7a6bc723d1f3d94761abc
|
/upload/migrations/0004_auto__del_field_photo_file__add_field_photo_src__add_field_photo_pub_d.py
|
f6fe80cfdebe12ff97996dc048e5aa8b77d49a9a
|
[] |
no_license
|
rif/blackbar
|
2fdc3cc57853cd0d49d1db65763090b0455b4e81
|
eae31830ab25c554a6160e19092188a24bb8c614
|
refs/heads/master
| 2022-11-29T12:44:18.815860
| 2017-07-07T07:07:22
| 2017-07-07T07:07:22
| 204,672,053
| 0
| 0
| null | 2022-11-22T00:20:09
| 2019-08-27T09:48:23
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 5,549
|
py
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Deleting field 'Photo.file'
db.delete_column('upload_photo', 'file')
# Adding field 'Photo.src'
db.add_column('upload_photo', 'src',
self.gf('django.db.models.fields.files.FileField')(default='', max_length=100),
keep_default=False)
# Adding field 'Photo.pub_date'
db.add_column('upload_photo', 'pub_date',
self.gf('django.db.models.fields.DateTimeField')(auto_now=True, default=datetime.datetime(2013, 1, 1, 0, 0), blank=True),
keep_default=False)
def backwards(self, orm):
# Adding field 'Photo.file'
db.add_column('upload_photo', 'file',
self.gf('django.db.models.fields.files.FileField')(default=datetime.datetime(2013, 1, 1, 0, 0), max_length=100),
keep_default=False)
# Deleting field 'Photo.src'
db.delete_column('upload_photo', 'src')
# Deleting field 'Photo.pub_date'
db.delete_column('upload_photo', 'pub_date')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'upload.blackbarprofile': {
'Meta': {'object_name': 'BlackbarProfile'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mugshot': ('django.db.models.fields.files.ImageField', [], {'max_length': '100', 'blank': 'True'}),
'privacy': ('django.db.models.fields.CharField', [], {'default': "'registered'", 'max_length': '15'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'my_profile'", 'unique': 'True', 'to': "orm['auth.User']"})
},
'upload.photo': {
'Meta': {'object_name': 'Photo'},
'caption': ('django.db.models.fields.CharField', [], {'max_length': '300'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'src': ('django.db.models.fields.files.FileField', [], {'max_length': '100'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True'})
}
}
complete_apps = ['upload']
|
[
"radu@fericean.ro"
] |
radu@fericean.ro
|
de1498ab923433223a4a2d4502ba859c27e07d78
|
a81a1efe1a93d5af0ef3f6403862a1544befd6cf
|
/Array/54_SpiralMatrix.py
|
4186aa8310f613fd10f02a0a2bdcd644ad169971
|
[] |
no_license
|
fishleongxhh/LeetCode
|
89da4ae3ca1715b1909c350437c0ba79eb2a8349
|
d0352fecc61396fc460e1350572189b175a13f61
|
refs/heads/master
| 2020-04-05T17:14:27.976946
| 2018-12-16T14:10:54
| 2018-12-16T14:10:54
| 157,050,997
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,428
|
py
|
# -*- coding: utf-8 -*-
# Author: Xu Hanhui
# This program solves LeetCode 54: Spiral Matrix
def spiralOrder(matrix):
if not matrix:
return []
res = []
i, j = 0, 0
min_left, max_right, max_down, min_up = 0, len(matrix[0])-1, len(matrix)-1, 0
direction = 'right'
while True:
if direction == 'right':
if j > max_right:
break
res.extend(matrix[i][j:max_right+1])
i, j = i+1, max_right
min_up += 1
direction = 'down'
if direction == 'down':
if i > max_down:
break
res.extend([matrix[k][j] for k in range(i, max_down+1)])
i, j = max_down, j-1
max_right -= 1
direction = 'left'
if direction == 'left':
if j < min_left:
break
res.extend(reversed(matrix[i][min_left:j+1]))
i, j = i-1, min_left
max_down -= 1
direction = 'up'
if direction == 'up':
if i < min_up:
break
res.extend([matrix[k][j] for k in range(i, min_up-1, -1)])
i, j = min_up, j+1
min_left += 1
direction = 'right'
return res
if __name__ == "__main__":
matrix = [[1,2,3,4],[5,6,7,8],[9,10,11,12]]
matrix = [[],[]]  # note: this reassignment overrides the matrix above, so only the empty-row edge case is exercised
for l in matrix:
print(l)
print(spiralOrder(matrix))
|
[
"xhh1120132805@163.com"
] |
xhh1120132805@163.com
|
35ab3a19e850bbe01e7c10a7b8760ea6d19c925e
|
e378277e270487668ec34e3aeb1e843d193f87ee
|
/scrap_site/urls.py
|
b12ee179860ea1ef94863d968d38cd3eef30159a
|
[] |
no_license
|
simofirdoussi/Django-webscraping
|
f9516940d47d8a7e8a9f35dc6184ad1411e71fc4
|
f7f75cde707f2b44827fe5c6d9c230f9779404ec
|
refs/heads/master
| 2022-10-22T20:24:26.294367
| 2020-06-09T10:36:17
| 2020-06-09T10:36:17
| 252,498,354
| 3
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 798
|
py
|
"""scrap_site URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('', include('myapp.urls')),
path('admin/', admin.site.urls),
]
|
[
"simow@MacBook-Pro-de-Simow.local"
] |
simow@MacBook-Pro-de-Simow.local
|
bba29c0aeb561e9cc5092213128b7a624854b703
|
ddac7feb045569ba059ec7a96874e94d86ed1feb
|
/python/course/douban/spider.py
|
b6b0dc6de66c736267ebd04dfc182b3bffe2945a
|
[
"Apache-2.0"
] |
permissive
|
TimVan1596/ACM-ICPC
|
8d38e1843b40afe9294fd668e3e18ebc69e89bbf
|
b8d4e681b4b999cc025ac6d1d0357f0ccbaf092f
|
refs/heads/master
| 2023-08-14T07:58:20.340426
| 2022-08-20T16:33:59
| 2022-08-20T16:33:59
| 182,549,941
| 1
| 0
|
Apache-2.0
| 2023-07-22T03:45:01
| 2019-04-21T15:22:22
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 1,578
|
py
|
# -*- coding:utf-8 -*-
# @Time:2020/8/4 11:55
# @Author:TimVan
# @File:spider.py
# @Software:PyCharm
import urllib.request, urllib.error  # urllib: build request URLs and fetch page data
from bs4 import BeautifulSoup  # bs4: parse the HTML and extract data
import re  # re: regular expressions for text matching
import xlwt  # xlwt: Excel operations
import sqlite3  # sqlite3: SQLite database operations
def main():
# 1. Crawl the pages
# 2. Parse the data entry by entry
# 3. Save the data (to SQL or Excel)
baseUrl = "https://movie.douban.com/top250?start="
savePath = ".\\豆瓣电影TOP250.xls"
getData(baseUrl)
saveData(savePath)
# Fetch a single URL and return its page source
def askUrl(url):
header = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
}
request = urllib.request.Request(url, headers=header)
# html = the page source to be returned
html = None
try:
response = urllib.request.urlopen(request)
html = response.read().decode("utf-8")
except urllib.error.URLError as e:
print("请求错误")
if hasattr(e, "code"):
print(",错误状态码为{}".format(e.code))
if hasattr(e, "reason"):
print(",原因为{}".format(e.reason))
return html
# 1. Crawl the pages
def getData(baseUrl):
dataList = []
html = askUrl(baseUrl)
print(html)
# 2. Parse the data entry by entry
return dataList
# 3. Save the data (to SQL or Excel)
def saveData(savePath):
print()
if __name__ == "__main__":
main()
|
[
"877020296@qq.com"
] |
877020296@qq.com
|
c8e339c1b6310663e6035ac79c1f923100bb627e
|
d63ae7d6076d52477778e456388e5231147ef96a
|
/app/applications/repository.py
|
c2474352265c6fe2f56c762d7aa8be401d4bbf59
|
[] |
no_license
|
oscarTematio/projet-back
|
6687f8b6a70a93e23372715a3a484ad943c3cb67
|
b3dd5d7b767aaa93b3ff62d4dca1502641e747e4
|
refs/heads/master
| 2020-06-07T06:21:08.490554
| 2019-06-21T15:47:41
| 2019-06-21T15:47:41
| 192,947,681
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,247
|
py
|
from .models import ApplicationModel
from flask_sqlalchemy import SQLAlchemy
from injector import inject
class ApplicationRepository:
"""Persistence of applications"""
@inject
def __init__(self, db: SQLAlchemy):
self._db = db
def get_all(self):
return self._db.session.query(ApplicationModel).all()
def json(self):
return {'name': self.name, 'source': self.source}
def find_application_by_id(self, _id):
return self._db.session.query(ApplicationModel).filter(ApplicationModel.app_id == _id).first()
def find_application_by_name(self, name):
return self._db.session.query(ApplicationModel).filter(ApplicationModel.name == name).first()
def create_app(self,name,source):
return ApplicationModel(name,source)
def _update_application(self,_id,object):
return self._db.session.query(ApplicationModel).filter(ApplicationModel.app_id ==_id).update(object)
def save_to_db(self, object):
self._db.session.add(object)
self._db.session.commit()
def delete(self, object):
self._db.session.delete(object)
self._db.session.commit()
def flush(self):
self._db.session.flush()
|
[
"oscar-miguel.tematio@PCP105.intech.lan"
] |
oscar-miguel.tematio@PCP105.intech.lan
|
41523cb4cc1d648782d753058c682c1af1aa9015
|
1d1ff3dbce4035cc437d5b6a57800156b007e4e8
|
/.ycm_extra_conf.py
|
7e282807438b8dd6b39bf017f731aa9325bbd0f2
|
[] |
no_license
|
osolong/vim-scripts
|
45fc5b211a3bb477e8bfc52546a490dd91b40778
|
a2a24a94842bfbfedfa6a9a79ff93c4f7233feb1
|
refs/heads/master
| 2021-01-18T17:25:44.245878
| 2014-07-16T20:59:37
| 2014-07-16T20:59:37
| 2,991,096
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 6,415
|
py
|
# This file is NOT licensed under the GPLv3, which is the license for the rest
# of YouCompleteMe.
#
# Here's the license text for this file:
#
# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
# of this software dedicate any and all copyright interest in the
# software to the public domain. We make this dedication for the benefit
# of the public at large and to the detriment of our heirs and
# successors. We intend this dedication to be an overt act of
# relinquishment in perpetuity of all present and future rights to this
# software under copyright law.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# For more information, please refer to <http://unlicense.org/>
import os
import ycm_core
# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
flags = [
'-Wall',
# You 100% do NOT need -DUSE_CLANG_COMPLETER in your flags; only the YCM
# source code needs it.
#'-DUSE_CLANG_COMPLETER',
# THIS IS IMPORTANT! Without a "-std=<something>" flag, clang won't know which
# language to use when compiling headers. So it will guess. Badly. So C++
# headers will be compiled as C headers. You don't want that so ALWAYS specify
# a "-std=<something>".
# For a C project, you would set this to something like 'c99' instead of
# 'c++11'.
'-std=c++0x',
# ...and the same thing goes for the magic -x option which specifies the
# language that the files to be compiled are written in. This is mostly
# relevant for c++ headers.
# For a C project, you would set this to 'c' instead of 'c++'.
'-x',
'c++',
'-I',
'include',
'-I',
'/home/rafael/Perforce/rodriraf_MEQ-RODRIRAF01-Linux01_7903/users/rodriraf/10087/test/test_DEV_EU_TELEMETRY/libtomcrypt/src/headers',
'-I',
'/home/rafael/Perforce/rodriraf_MEQ-RODRIRAF01-Linux01_7903/users/rodriraf/10087/test/test_DEV_EU_TELEMETRY/libtommath',
'-isystem',
'/home/rafael/Perforce/rodriraf_MEQ-RODRIRAF01-Linux01_7903/users/rodriraf/10087/test/test_DEV_EU_TELEMETRY',
'-isystem',
'/home/rafael/CodeSourcery/Sourcery_G++_Lite/arm-none-linux-gnueabi/include/c++/4.4.1',
'-isystem',
'/home/rafael/CodeSourcery/Sourcery_G++_Lite/arm-none-linux-gnueabi/libc/usr/include',
'-isystem',
'/home/rafael/CodeSourcery/Sourcery_G++_Lite/arm-none-linux-gnueabi/include/c++/4.4.1/arm-none-linux-gnueabi',
]
# Set this to the absolute path to the folder (NOT the file!) containing the
# compile_commands.json file to use that instead of 'flags'. See here for
# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
#
# Most projects will NOT need to set this to anything; you can just change the
# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
compilation_database_folder = ''
if os.path.exists( compilation_database_folder ):
database = ycm_core.CompilationDatabase( compilation_database_folder )
else:
database = None
SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]
def DirectoryOfThisScript():
return os.path.dirname( os.path.abspath( __file__ ) )
def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
if not working_directory:
return list( flags )
new_flags = []
make_next_absolute = False
path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
for flag in flags:
new_flag = flag
if make_next_absolute:
make_next_absolute = False
if not flag.startswith( '/' ):
new_flag = os.path.join( working_directory, flag )
for path_flag in path_flags:
if flag == path_flag:
make_next_absolute = True
break
if flag.startswith( path_flag ):
path = flag[ len( path_flag ): ]
new_flag = path_flag + os.path.join( working_directory, path )
break
if new_flag:
new_flags.append( new_flag )
return new_flags
def IsHeaderFile( filename ):
extension = os.path.splitext( filename )[ 1 ]
return extension in [ '.h', '.hxx', '.hpp', '.hh' ]
def GetCompilationInfoForFile( filename ):
# The compilation_commands.json file generated by CMake does not have entries
# for header files. So we do our best by asking the db for flags for a
# corresponding source file, if any. If one exists, the flags for that file
# should be good enough.
if IsHeaderFile( filename ):
basename = os.path.splitext( filename )[ 0 ]
for extension in SOURCE_EXTENSIONS:
replacement_file = basename + extension
if os.path.exists( replacement_file ):
compilation_info = database.GetCompilationInfoForFile(
replacement_file )
if compilation_info.compiler_flags_:
return compilation_info
return None
return database.GetCompilationInfoForFile( filename )
def FlagsForFile( filename, **kwargs ):
if database:
# Bear in mind that compilation_info.compiler_flags_ does NOT return a
# python list, but a "list-like" StringVec object
compilation_info = GetCompilationInfoForFile( filename )
if not compilation_info:
return None
final_flags = MakeRelativePathsInFlagsAbsolute(
compilation_info.compiler_flags_,
compilation_info.compiler_working_dir_ )
# NOTE: This is just for YouCompleteMe; it's highly likely that your project
# does NOT need to remove the stdlib flag. DO NOT USE THIS IN YOUR
# ycm_extra_conf IF YOU'RE NOT 100% SURE YOU NEED IT.
#try:
# final_flags.remove( '-stdlib=libc++' )
#except ValueError:
# pass
else:
relative_to = DirectoryOfThisScript()
final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )
return {
'flags': final_flags,
'do_cache': True
}
|
[
"rafael.rodriguez@meigroup.com"
] |
rafael.rodriguez@meigroup.com
|
dfcac23bda71dc7eb0f21a66c59fefefdf7f6041
|
c8d49e7ba66ccaaa31ea11dd097dbdd4c2f532ad
|
/jogo/urls.py
|
c7d2a321dc8af21f92d4769ee550c764a8e0ad68
|
[] |
no_license
|
lucascastejon/lucas-jogodavelha
|
abd475dfba952f5167ee75bca5051cbaa37df120
|
9eb426c76b1d93fc2d92bafc02991a78b0d17c63
|
refs/heads/master
| 2021-01-22T22:57:43.645558
| 2015-05-31T04:01:08
| 2015-05-31T04:01:08
| 18,375,393
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 235
|
py
|
from django.conf.urls import patterns, include, url
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
url(r'', include('jogo.core.urls', namespace='core')),
url(r'^admin/', include(admin.site.urls)),
)
|
[
"lucascastejon@gmail.com"
] |
lucascastejon@gmail.com
|
3f56c3f4cc60ed35eb0df059daae27c2af8b1e0f
|
52470638e2dd65049f9e7a31716b400acc21302a
|
/movie-system/testing.py
|
cefaf9fecc8bec19551a8e74829e9029d5e1e9d4
|
[] |
no_license
|
asechnaya/-python-postgresql-
|
c9e1d5ba502077533a1aedbe35be58725e39df7c
|
8a6be2e3efd017f441ed3e7237943bbfa32532ef
|
refs/heads/master
| 2020-04-15T18:21:25.641526
| 2019-03-08T11:04:15
| 2019-03-08T11:04:15
| 164,910,686
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 397
|
py
|
from user import User
import json
with open('my_file.txt', 'w') as f:
f.write('Hello, World!')
with open('my_file.txt', 'r') as f:
print(f.readline())
with open('my_file.txt', 'r') as f:
json_data = json.load(f)
user = User.from_json(json_data)
'''
import sys
my_vars = []
for i in range(3):
my_vars.append(lambda: i)
print([f() for f in my_vars])
print(sys.argv)
'''
|
[
"anutya@mail.ru"
] |
anutya@mail.ru
|
96cecd7dd8ddb4ed96f5708602ad98e74a9c41d1
|
ef4eab407f24e04278db71db0da9306b7890dd91
|
/src/Algorithms/LeetCode/Arrays/34-findFistandLastPositionofElement.py
|
663d686a09c4a05060431d4341950e033ad9f6a9
|
[] |
no_license
|
someonehan/Algorithms
|
aabda722d7124d9953e5151a45b786ac946a7d7a
|
4b3f584591952c769d5bfe2146c7043ef6e045b9
|
refs/heads/master
| 2021-06-09T14:27:47.588858
| 2021-04-06T15:23:21
| 2021-04-06T15:23:21
| 141,306,297
| 2
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,042
|
py
|
""" given an array of integers nums sorted in ascending order, first the starting and
the ending position of a given target value.
if target is not found in the array return [-1, -1]
Example:
input nums [1, 7, 7, 8, 8, 10] target = 8
return [3, 4]
Example:
input nums [5, 7, 7, 8, 8, 10] target = 6
return [-1, -1]
"""
class Solution:
    def flPosition1(self, nums: list[int], target: int) -> list[int]:
        # Linear scan: remember the first index holding target, then stop at the
        # first element greater than target.
        left_found = False
        lo = -1
        for index, elem in enumerate(nums):
            if not left_found and elem == target:
                lo = index
                left_found = True
            if left_found and elem > target:
                return [lo, index - 1]
        if left_found:
            return [lo, len(nums) - 1]
        return [-1, -1]
    def find_insert_pos(self, nums: list[int], target: int, left: bool) -> int:
        # Binary search for the left-most (left=True) or right-most (left=False)
        # position at which target could be inserted while keeping nums sorted.
        lo, hi = 0, len(nums)
        while lo < hi:
            mid = (lo + hi) // 2
            if nums[mid] < target or (not left and nums[mid] == target):
                lo = mid + 1
            else:
                hi = mid
        return lo
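# (Added illustration, not part of the original file.) A hypothetical way to combine
# the helper above into the full answer: the left-most insert position gives the first
# occurrence, and the right-most insert position minus one gives the last.
if __name__ == "__main__":
    s = Solution()
    nums, target = [5, 7, 7, 8, 8, 10], 8
    lo = s.find_insert_pos(nums, target, left=True)
    hi = s.find_insert_pos(nums, target, left=False) - 1
    print([lo, hi] if lo <= hi and lo < len(nums) and nums[lo] == target else [-1, -1])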
|
[
"hxzh@localhost.localdomain"
] |
hxzh@localhost.localdomain
|
2e1fa2579cafb12827fa7579a57c7dfe4d844b71
|
8e18f7fe444040105b34703030029355641ddf2a
|
/standalone/udfs.py
|
c506e57ff17eca883c96f4b2738975123768d676
|
[] |
no_license
|
smplisri/AdventOfCode2020
|
f24a2486a66b04857a6f6573626c23daf270de25
|
12b06508c840f3e44be54e4f5c785fb1d75d35ff
|
refs/heads/main
| 2023-01-31T02:47:44.519208
| 2020-12-16T04:50:14
| 2020-12-16T04:50:14
| 321,431,778
| 0
| 0
| null | 2020-12-14T18:57:36
| 2020-12-14T18:04:17
| null |
UTF-8
|
Python
| false
| false
| 969
|
py
|
import os
def inputFileHandler(script_file_name, input_file_name):
dir_path = os.path.dirname(os.path.realpath(script_file_name))
file_name = dir_path + "/" + input_file_name
ifh = open(file_name, "r")
return ifh
def lineSepGroupFormatter(lines_list, delimiter, join_char):
formatted_list = map(lambda x: x.strip() if x != delimiter else x, lines_list)
return join_char.join(formatted_list).split(join_char + delimiter + join_char)
def lineSepGroupFormatterDict(lines_list, delimiter, join_char):
iterator, answerset, interim_data = 1, {}, []
formatted_list = list(map(lambda x: x.strip() if x != delimiter else x, lines_list))
formatted_list.append(delimiter)
for item in formatted_list:
if item.strip() == "":
answerset["group" + str(iterator)] = interim_data
interim_data = []
iterator = iterator + 1
else:
interim_data.append(item)
return answerset
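# (Added illustration, not part of the original file.) A hypothetical run of the two
# group formatters on blank-line-delimited lines, the shape Advent of Code inputs use:
if __name__ == "__main__":
    sample = ["abc\n", "\n", "a\n", "b\n", "c\n", "\n", "ab\n", "ac\n"]
    print(lineSepGroupFormatter(sample, "\n", " "))       # ['abc', 'a b c', 'ab ac']
    print(lineSepGroupFormatterDict(sample, "\n", " "))   # {'group1': ['abc'], ...}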
|
[
"s.srinivasakalyan@gmail.com"
] |
s.srinivasakalyan@gmail.com
|
32d7fc142d327a807d2b8c1585963888cc16f038
|
7b8fbf5b79b56428d3015e2b9130e82d4424f682
|
/menu/forms.py
|
5ea260b163c23113232c0760c5f2c98a793291fc
|
[] |
no_license
|
chohanjoo/WebOrderSystem
|
6fb1e31f289ec4f88bfa93b144643317d562376d
|
1b90620b4b8d3a53b6c36a3636a75e8fb7e8442e
|
refs/heads/master
| 2022-12-11T08:53:59.316516
| 2019-06-17T08:04:15
| 2019-06-17T08:04:15
| 188,407,879
| 0
| 0
| null | 2022-12-03T11:44:18
| 2019-05-24T11:08:54
|
CSS
|
UTF-8
|
Python
| false
| false
| 572
|
py
|
from django import forms
from .models import Menu, Category
class MenuForm(forms.ModelForm):
class Meta:
model = Menu
fields = ('name', 'price','image','category')
# class MenuBoardForm(forms.ModelForm):
# class Meta:
# model = Menu
# fields = ('name', 'price','image','category')
class CategoryForm(forms.ModelForm):
class Meta:
model = Category
fields = ('name',)
# class ShopForm(forms.ModelForm):
# class Meta:
# model = Menu
# fields = ('name', 'price','image','category')
|
[
"johanjoo@naver.com"
] |
johanjoo@naver.com
|
837d7f156b75755d43c1273240b8db9dcbecab16
|
6649777d8f0b0e3e54c43669e46cd93baf0b4956
|
/underworld/units/cerberus.py
|
820b9226d4014fba09db1bd979abf2a468c842e7
|
[] |
no_license
|
Lnk2past/underworld
|
5266e20468ff1ec67ad8a8134a7890f54e170403
|
04b5f43234a53ed8072a0ecc62d701d873193daf
|
refs/heads/master
| 2021-01-06T05:57:12.335451
| 2020-03-04T03:04:47
| 2020-03-04T03:04:47
| 241,229,241
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,336
|
py
|
from underworld.event_manager.event_manager import global_event_manager
from underworld.event_manager.triggers import *
from underworld.units.base_unit import base_unit
from underworld.units.other import bomber_rocket_rocket
from underworld.modules import *
from underworld.traits import *
class sentinel(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 750.0
self.weapon_slot = battery(6)
class guardian(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 7000.0
self.weapon_slot = guardian_battery()
class interceptor(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 8000.0
self.weapon_slot = mass_battery(1)
class colossus(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 40000.0
self.weapon_slot = colossus_laser()
self.shield_slot = passive_shield(10)
self.support_slots = [salvage(12)]
self.support_slots[0].register(self)
class destroyer(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 10000.0
class bomber(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 48000.0
self.weapon_slot = bomber_rocket()
self.weapon_slot.register(self)
self.set_trigger(enemy_in_neighboring_sector, self.weapon_slot.activate)
def spawn_bomber_rocket(self):
brr = bomber_rocket_rocket()
brr.time = self.time
brr.corporation = self.corporation
self.corporation.add(brr)
class phoenix(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 45000.0
self.weapon_slot = dual_laser(5)
self.shield_slot = phoenix_area_shield()
self.set_trigger(death, phoenix.spawn_sentinels)
@staticmethod
def spawn_sentinels(s):
for i in range(3):
se = sentinel()
se.time = s.time
s.corporation.add(se)
se.corporation = s.corporation
class storm(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 40000.0
self.weapon_slot = dart_barrage()
class ghost(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 200.0
self.weapon_slot = ghost_battery()
class weak_cerberus_base(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 20000.0
self.weapon_slot = weak_cerberus_base_battery()
self.shield_slot = weak_cerberus_base_passive_shield()
class cerberus_base(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 50000.0
self.weapon_slot = cerberus_base_battery()
self.shield_slot = cerberus_base_passive_shield()
class strong_cerberus_base(base_unit, salvageable):
def __init__(self, name=None):
super().__init__()
self.name = self.get_class_name() if name is None else name
self.hull = self.max_hull = 90000.0
self.weapon_slot = strong_cerberus_base_battery()
self.shield_slot = strong_cerberus_base_passive_shield()
|
[
"Lnk2past@gmail.com"
] |
Lnk2past@gmail.com
|
d380d216484991a1eecc5857a84f6ae5d10474af
|
005340836278129a8f3d947c953792527fd34bee
|
/calculator1/mycalc10.py
|
40e913c2116f3ac6874b1cc6db990906ae627a3c
|
[] |
no_license
|
Myoung-heeSeo/SoftwareProject2-KMU-2017
|
b8508652b787e41bb80f828d8c0cc22e956c5807
|
1c5f202306208f869c0d2e1f46a0468bb0341264
|
refs/heads/master
| 2020-09-05T00:13:40.945923
| 2019-11-06T06:58:28
| 2019-11-06T06:58:28
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,855
|
py
|
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QWidget
from PyQt5.QtWidgets import QLineEdit, QToolButton
from PyQt5.QtWidgets import QSizePolicy
from PyQt5.QtWidgets import QLayout, QGridLayout
from keypad2 import numPadList, operatorList, constantList, functionList
import calcFunctions
class Button(QToolButton):
def __init__(self, text, callback):
super().__init__()
self.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Preferred)
self.setText(text)
self.clicked.connect(callback)
def sizeHint(self):
size = super(Button, self).sizeHint()
size.setHeight(size.height() + 20)
size.setWidth(max(size.width(), size.height()))
return size
class Calculator(QWidget):
def __init__(self, parent=None):
super().__init__(parent)
# Display Window
self.display = QLineEdit()
self.display.setReadOnly(True)
self.display.setAlignment(Qt.AlignRight)
self.display.setMaxLength(15)
# Button Creation and Placement
numLayout = QGridLayout()
opLayout = QGridLayout()
constLayout = QGridLayout()
funcLayout = QGridLayout()
buttonGroups = {
'num': {'buttons': numPadList, 'layout': numLayout, 'columns': 3},
'op': {'buttons': operatorList, 'layout': opLayout, 'columns': 2},
'constants': {'buttons': constantList, 'layout': constLayout, 'columns': 1},
'functions': {'buttons': functionList, 'layout': funcLayout, 'columns': 1},
}
for label in buttonGroups.keys():
r = 0; c = 0
buttonPad = buttonGroups[label]
for btnText in buttonPad['buttons']:
button = Button(btnText, self.buttonClicked)
buttonPad['layout'].addWidget(button, r, c)
c += 1
if c >= buttonPad['columns']:
c = 0; r += 1
# Layout
mainLayout = QGridLayout()
mainLayout.setSizeConstraint(QLayout.SetFixedSize)
mainLayout.addWidget(self.display, 0, 0, 1, 2)
mainLayout.addLayout(numLayout, 1, 0)
mainLayout.addLayout(opLayout, 1, 1)
mainLayout.addLayout(constLayout, 2, 0)
mainLayout.addLayout(funcLayout, 2, 1)
self.setLayout(mainLayout)
self.setWindowTitle("My Calculator")
def buttonClicked(self):
if self.display.text() == 'Error!':
self.display.setText('')
button = self.sender()
key = button.text()
if key == '=':
try:
result = str(eval(self.display.text()))
except:
result = 'Error!'
self.display.setText(result)
elif key == 'C':
self.display.clear()
elif key in constantList:  # shortened by using a for loop and a list
conLi = ['3.141592','3E+8', '340', '1.5E+8']
for i in range(4):
if key == constantList[i]:
self.display.setText(self.display.text() + conLi[i])
elif key in functionList:  # shortened by using a for loop and a list
n = self.display.text()
fun = [calcFunctions.factorial(n), calcFunctions.decToBin(n), calcFunctions.binToDec(n),
calcFunctions.decToRoman(n)]  # could be shortened further with getattr
for i in range(4):
if key==functionList[i]:
value = fun[i]
self.display.setText(str(value))
else:
self.display.setText(self.display.text() + key)
if __name__ == '__main__':
import sys
app = QApplication(sys.argv)
calc = Calculator()
calc.show()
sys.exit(app.exec_())
|
[
"noreply@github.com"
] |
Myoung-heeSeo.noreply@github.com
|
3244b9e5e936800c4b69ed2a2614cb5fdf388dfb
|
77d13fa4a3beea42d460eb745dd854061578e095
|
/ship/fmp/datunits/initialconditionsunit.py
|
38f2cba61d8da27474a88c20e474c73c570b168a
|
[
"MIT"
] |
permissive
|
duncan-r/SHIP
|
b2a899b393a6e5a10c0cbc580c12cf0dc7e60455
|
e8e7249a511d52b29d34be0951d6a05f346b836c
|
refs/heads/develop
| 2022-11-09T09:34:38.872270
| 2022-10-26T08:09:36
| 2022-10-26T08:09:36
| 55,844,064
| 6
| 6
| null | 2017-06-28T10:22:26
| 2016-04-09T12:51:32
|
Python
|
UTF-8
|
Python
| false
| false
| 11,400
|
py
|
"""
Summary:
Contains the InitialConditionsUnit class.
This is a holder for all of the data in the initial conditions section
of the dat file.
Author:
Duncan Runnacles
Created:
01 Apr 2016
Copyright:
Duncan Runnacles 2016
TODO:
Not fully implemented at the moment - see class TODO comment for details.
Updates:
"""
from __future__ import unicode_literals
from ship.fmp.datunits.isisunit import AUnit
from ship.datastructures.rowdatacollection import RowDataCollection
from ship.datastructures import dataobject as do
from ship.fmp.datunits import ROW_DATA_TYPES as rdt
import logging
logger = logging.getLogger(__name__)
class InitialConditionsUnit (AUnit):
"""isisunit for storing the initial conditions.
Stores the initial conditions data; near the end of the .dat file.
"""
# Class constants
UNIT_TYPE = 'initial_conditions'
UNIT_CATEGORY = 'initial_conditions'
FILE_KEY = 'INITIAL'
FILE_KEY2 = None
def __init__(self, **kwargs):
"""Constructor
Args:
node_count (int): The number of nodes in the model. We need this to
know how many lines there are to read from the contents list.
fileOrder (int): The location of the initial conditions in the
.DAT file. This will always be at the end but before the
GISINFO if there is any.
"""
super(InitialConditionsUnit, self).__init__(**kwargs)
self._unit_type = InitialConditionsUnit.UNIT_TYPE
self._unit_category = InitialConditionsUnit.UNIT_CATEGORY
self._name = "initial_conditions"
self._name_types = {}
self._node_count = 0
self._label_length = 12
# self.has_datarows = True
# self.has_ics = False
dobjs = [
do.StringData(rdt.LABEL, format_str='{:<12}'),
do.StringData(rdt.QMARK, format_str='{:>2}', default='y'),
do.FloatData(rdt.FLOW, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.STAGE, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.FROUDE_NO, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.VELOCITY, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.UMODE, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.USTATE, format_str='{:>10}', default=0.000, no_of_dps=3),
do.FloatData(rdt.ELEVATION, format_str='{:>10}', default=0.000, no_of_dps=3),
]
self.row_data['main'] = RowDataCollection.bulkInitCollection(dobjs)
@property
def node_count(self):
return self._node_count
# return self.row_data['main'].getNumberOfRows()
def readUnitData(self, unit_data, file_line, **kwargs):
"""
"""
self._node_count = kwargs['node_count']
self._name_types = kwargs['name_types']
self._label_length = kwargs['label_length']
i = file_line
out_line = file_line + self._node_count + 2
for i in range(file_line, out_line):
if i < file_line + 2:
continue # Skip the first couple of header lines
label = unit_data[i][0:self._label_length].strip()
qmark = unit_data[i][self._label_length:self._label_length+2].strip()
flow = unit_data[i][self._label_length+2:self._label_length+12].strip()
stage = unit_data[i][self._label_length+12:self._label_length+22].strip()
froude_no = unit_data[i][self._label_length+22:self._label_length+32].strip()
velocity = unit_data[i][self._label_length+32:self._label_length+42].strip()
umode = unit_data[i][self._label_length+42:self._label_length+52].strip()
ustate = unit_data[i][self._label_length+52:self._label_length+62].strip()
elevation = unit_data[i][self._label_length+62:self._label_length+72].strip()
try:
self.row_data['main'].addRow({
rdt.LABEL: label, rdt.QMARK: qmark, rdt.FLOW: flow,
rdt.STAGE: stage, rdt.FROUDE_NO: froude_no, rdt.VELOCITY: velocity,
rdt.UMODE: umode, rdt.USTATE: ustate,
rdt.ELEVATION: elevation
}, no_copy=True)
except:
pass
return out_line - 1
def getData(self):
"""
"""
out_data = []
out_data.append('INITIAL CONDITIONS')
out_data.append(' label ? flow stage froude no velocity umode ustate z')
# for i in range(0, self._node_count):
for i in range(0, self.row_data['main'].numberOfRows()):
out_data.append(self.row_data['main'].getPrintableRow(i))
return out_data
# def updateDataRow(self, row_vals, index):
def updateRow(self, row_vals, index, **kwargs):
"""Updates the row at the given index in the row_collection.
Changes the state of the values in the initial conditions list of
the .dat file at the given index.
Args:
row_vals(Dict): keys must be datunits.ROW_DATA_TYPES with a legal
value assigned for the DataType. Chainage and Elevation MUST
be included.
index: the row to update.
Raises:
IndexError: If the index does not exist.
ValueError: If the given value is not accepted by the DataObject's.
See Also:
ADataObject and subclasses for information on the parameters.
"""
# Call superclass method to add the new row
AUnit.updateRow(self, row_vals=row_vals, index=index, **kwargs)
# def updateDataRowByName(self, row_vals, name):
def updateRowByName(self, row_vals, name, **kwargs):
"""Updates the row for the entry with the give name.
Changes the state of the values in the initial conditions list for the
the .dat file for the unit with the given name.
Args:
row_vals(Dict): keys must be datunits.ROW_DATA_TYPES with a legal
value assigned for the DataType. Chainage and Elevation MUST
be included.
            name: the name of the unit whose ICs should be updated.
Raises:
IndexError: If the index does not exist.
ValueError: If the given value is not accepted by the DataObject's.
AttributeError: If the given name doesn't exists in the collection.
See Also:
ADataObject and subclasses for information on the parameters.
"""
labels = self.row_data['main'].dataObjectAsList(rdt.LABEL)
try:
index = labels.index(name)
except ValueError:
raise KeyError('Name does not exist in initial conditions: ' + str(name))
# Call superclass method to add the new row
AUnit.updateRow(self, row_vals=row_vals, index=index, **kwargs)
# def addDataRow(self, row_vals):
def addRow(self, row_vals, unit_type, **kwargs):
"""Adds a new row to the InitialCondition units row_collection.
The new row will be added at the given index. If no index is given it
will be appended to the end of the collection.
If no LABEL value is given a AttributeError will be raised as it
        cannot have a default value. All other values can be omitted. If they
are they will be given defaults.
Examples:
>>> import ship.fmp.datunits.ROW_DATA_TYPES as rdt
>>> ics.addRow({rdt.LABEL:UNITNAME, rdt.STAGE:10.2}, index=4)
Args:
row_vals(Dict): keys must be datunits.ROW_DATA_TYPES with a legal
value assigned for the DataType. Chainage and Elevation MUST
be included.
Raises:
AttributeError: If LABEL is not given.
IndexError: If the index does not exist.
ValueError: If the given value is not accepted by the DataObject's.
See Also:
ADataObject and subclasses for information on the parameters.
"""
if not rdt.LABEL in row_vals.keys():
logger.error('Required values of LABEL not given')
raise AttributeError("Required value 'LABEL' not given")
# Keep a record of multiple unit types under the same name
if row_vals[rdt.LABEL] in self._name_types.keys():
if not unit_type in self._name_types[row_vals[rdt.LABEL]]:
self._name_types[row_vals[rdt.LABEL]].append(unit_type)
else:
self._name_types[row_vals[rdt.LABEL]] = [unit_type]
# Don't add the same ic's in twice
labels = self.row_data['main'].dataObjectAsList(rdt.LABEL)
if row_vals[rdt.LABEL] in labels:
return self._node_count
# Call superclass method to add the new row
AUnit.addRow(self, row_vals=row_vals, index=None, **kwargs)
self._node_count += 1
return self._node_count
def deleteRowByName(self, unit_name, unit_type, **kwargs):
"""Delete one of the RowDataCollection objects in the row_collection.
This calls the AUnit deleteRow method, but obtains the index
of the row to be deleted from the name first.
Args:
section_name(str): the name of the AUnit to be removed from
the initial conditions.
Raises:
KeyError - if section_name does not exist.
"""
labels = self.row_data['main'].dataObjectAsList(rdt.LABEL)
try:
index = labels.index(unit_name)
except ValueError:
raise KeyError('Name does not exist in initial conditions: ' + str(unit_name))
# Delete the ic if the unit_name is the only one using it
# Otherwise remove the type and keep the ic's as they are
if not unit_name in self._name_types.keys():
return
elif len(self._name_types[unit_name]) > 1:
self._name_types[unit_name].remove(unit_type)
else:
self.deleteRow(index, **kwargs)
self._node_count -= 1
del self._name_types[unit_name]
def rowByName(self, section_name):
"""Get the data vals in a particular row by name.
This is the same functionality as the AUnit's getRow(int) method
which returns a row in the RowDataCollection by the index value given.
In this case it will find the index based on the section label and
return the same dictionary of row values.
Args:
            section_name(str): the name of the AUnit whose initial conditions
                row values should be returned.
Return:
dict - containing the values for the requested row.
"""
labels = self.row_data['main'].dataObjectAsList(rdt.LABEL)
index = labels.index(section_name)
if index == -1:
raise AttributeError('Name does not exist in initial conditions: ' + str(section_name))
return self.row(index)
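# A hedged usage sketch of the row API defined above. The label and values are illustrative,
# and it assumes the unit can be constructed stand-alone with its default keyword arguments.
if __name__ == '__main__':
    ics = InitialConditionsUnit()
    ics.addRow({rdt.LABEL: 'SEC_001', rdt.STAGE: 10.2, rdt.FLOW: 1.5}, unit_type='river')
    ics.updateRowByName({rdt.STAGE: 11.0}, 'SEC_001')
    print(ics.rowByName('SEC_001'))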
|
[
"duncan.runnacles@ermeviewenvironmental.co.uk"
] |
duncan.runnacles@ermeviewenvironmental.co.uk
|
091cc45d13b1fc09b094fe4c2213b177e322453f
|
01f4d411909aacc878b654f033a4bffe9136bb5f
|
/orangecontrib/wonder/widgets/wonder/ow_decrease_points.py
|
0ab19f65ce4f404d7e4d9e18ece9e33163afac52
|
[] |
no_license
|
WONDER-project/OASYS1-WONDER
|
75cdf07c19b91f525835b9ea1280da63507cb142
|
cf6e3620f95c0b14c5c33d13161f615f2ac23b14
|
refs/heads/master
| 2020-12-19T04:17:43.660676
| 2020-03-04T23:59:48
| 2020-03-04T23:59:48
| 235,618,279
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 7,135
|
py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# #########################################################################
# Copyright (c) 2020, UChicago Argonne, LLC. All rights reserved. #
# #
# Copyright 2020. UChicago Argonne, LLC. This software was produced #
# under U.S. Government contract DE-AC02-06CH11357 for Argonne National #
# Laboratory (ANL), which is operated by UChicago Argonne, LLC for the #
# U.S. Department of Energy. The U.S. Government has rights to use, #
# reproduce, and distribute this software. NEITHER THE GOVERNMENT NOR #
# UChicago Argonne, LLC MAKES ANY WARRANTY, EXPRESS OR IMPLIED, OR #
# ASSUMES ANY LIABILITY FOR THE USE OF THIS SOFTWARE. If software is #
# modified to produce derivative works, such modified software should #
# be clearly marked, so as not to confuse it with the version available #
# from ANL. #
# #
# Additionally, redistribution and use in source and binary forms, with #
# or without modification, are permitted provided that the following #
# conditions are met: #
# #
# * Redistributions of source code must retain the above copyright #
# notice, this list of conditions and the following disclaimer. #
# #
# * Redistributions in binary form must reproduce the above copyright #
# notice, this list of conditions and the following disclaimer in #
# the documentation and/or other materials provided with the #
# distribution. #
# #
# * Neither the name of UChicago Argonne, LLC, Argonne National #
# Laboratory, ANL, the U.S. Government, nor the names of its #
# contributors may be used to endorse or promote products derived #
# from this software without specific prior written permission. #
# #
# THIS SOFTWARE IS PROVIDED BY UChicago Argonne, LLC AND CONTRIBUTORS #
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS #
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL UChicago #
# Argonne, LLC OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, #
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, #
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; #
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER #
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT #
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN #
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE #
# POSSIBILITY OF SUCH DAMAGE. #
# #########################################################################
import sys
from PyQt5.QtWidgets import QMessageBox
from orangewidget.settings import Setting
from orangecontrib.wonder.util.gui_utility import gui
from orangecontrib.wonder.widgets.gui.ow_generic_parameter_widget import OWGenericDiffractionPatternParametersWidget, ParameterBox
class OWDecreasePoints(OWGenericDiffractionPatternParametersWidget):
name = "Decrease Number of Points"
description = "Decrease Number of Points"
icon = "icons/decrease.png"
priority = 1000
reduction_factor = Setting([1])
def get_max_height(self):
return 310
def get_parameter_name(self):
return "Reduction Factor"
def get_current_dimension(self):
return len(self.reduction_factor)
def get_parameter_box_instance(self, parameter_tab, index):
return DecreasePointsBox(widget=self,
parent=parameter_tab,
index=index,
reduction_factor=self.reduction_factor[index])
def get_empty_parameter_box_instance(self, parameter_tab, index):
return DecreasePointsBox(widget=self, parent=parameter_tab, index=index)
def set_data(self, data):
try:
if not data is None: self.input_diffraction_patterns = data.measured_dataset.duplicate_diffraction_patterns()
super().set_data(data)
except Exception as e:
QMessageBox.critical(self, "Error",
str(e),
QMessageBox.Ok)
if self.IS_DEVELOP: raise e
def set_parameter_data(self):
for diffraction_pattern_index in range(self.fit_global_parameters.measured_dataset.get_diffraction_patterns_number()):
reduction_factor = self.get_parameter_box(diffraction_pattern_index).get_reduction_factor()
if reduction_factor > 1:
self.fit_global_parameters.measured_dataset.diffraction_patterns[diffraction_pattern_index].diffraction_pattern = \
self.input_diffraction_patterns[diffraction_pattern_index].diffraction_pattern[::reduction_factor]
def get_parameter_array(self):
return self.fit_global_parameters.measured_dataset.diffraction_patterns
def get_parameter_item(self, diffraction_pattern_index):
return self.fit_global_parameters.measured_dataset.diffraction_patterns[diffraction_pattern_index]
def dumpSettings(self):
self.dump_reduction_factor()
def dump_reduction_factor(self): self.dump_variable("reduction_factor")
class DecreasePointsBox(ParameterBox):
def __init__(self,
widget=None,
parent=None,
index=0,
reduction_factor=1):
super(DecreasePointsBox, self).__init__(widget=widget,
parent=parent,
index=index,
reduction_factor = reduction_factor)
def get_height(self):
return 100
def init_fields(self, **kwargs):
self.reduction_factor = kwargs["reduction_factor"]
def init_gui(self, container):
gui.lineEdit(container, self, "reduction_factor", "Reduction Factor", labelWidth=300, valueType=int, callback=self.widget.dump_reduction_factor)
def get_basic_parameter_prefix(self):
pass
def set_data(self, data):
pass
def get_reduction_factor(self):
return self.reduction_factor
from PyQt5.QtWidgets import QApplication
if __name__ == "__main__":
a = QApplication(sys.argv)
ow = OWDecreasePoints()
ow.show()
a.exec_()
ow.saveSettings()
|
[
"lrebuffi@anl.gov"
] |
lrebuffi@anl.gov
|
32c2c61016bb44f1e91f4977fa16d3d59d8750e3
|
c0df2f79f372c9599dddf83846b25e23600e4e55
|
/examples/ga/3d.py
|
1b6d741324bafb3716f2f4962bbd28940d4f3302
|
[] |
no_license
|
mikbuch/pymri
|
548612acf42a41898a2370d48bdd9d02486b2b98
|
9be4d40dcb085ad3c92b2979d83f5918f1ad1624
|
refs/heads/master
| 2021-01-15T15:31:30.187858
| 2016-09-28T11:23:25
| 2016-09-28T11:23:25
| 41,487,163
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,267
|
py
|
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import math
def randrange(n, vmin, vmax):
return (vmax - vmin)*np.random.rand(n) + vmin
def fitness(x, y, category):
if category == 1:
z = x - y
z = z + (1-z)/2
# z = x / math.sqrt(math.pow(x, 2) + math.pow(y, 2))
else:
z = y - x
z = z + (1-z)/2
# z = y / math.sqrt(math.pow(y, 2) + math.pow(x, 2))
return z
n = 1000
xs = randrange(n, 0, 1)
ys = randrange(n, 0, 1)
category_0 = []
category_1 = []
for x, y in zip(xs, ys):
category_0.append(fitness(x, y, 0))
category_1.append(fitness(x, y, 1))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# ax.scatter(xs, ys, category_0, c='g', marker='o', s=40)
# ax.scatter(xs, ys, category_1, c='r', marker=',', s=40)
ax.scatter(xs, ys, category_0, c='g', marker='.', s=40)
ax.scatter(xs, ys, category_1, c='r', marker='.', s=40)
ax.scatter(1, 0, 1, c='k', marker='^', s=100)
ax.scatter(0, 1, 1, c='y', marker='s', s=100)
# import pdb
# pdb.set_trace()
ax.set_xlabel('x_1')
ax.set_ylabel('x_2')
ax.set_zlabel('fitness value')
font = {'family': 'sans-serif',
'weight': 'bold',
'size': 20}
plt.rc('font', **font)
plt.show()
|
[
"mikolaj.buchwald@gmail.com"
] |
mikolaj.buchwald@gmail.com
|
c5f4ff0c5f04c5d1f06299bd126ff0522753cdec
|
58ca7d5c042572604023783040e4e5b11897242f
|
/profile_pbs_base/ipengine_config.py
|
e2164c20c8418932f0a856ee394756f1b8ec57ef
|
[] |
no_license
|
rmcgibbo/ipython_parallel_profiles
|
25f54523687f66fe5bee320da8a62d39dfc9d6df
|
ca251b039bae1eadc7aa838d9a160adc3178a4f8
|
refs/heads/master
| 2021-01-15T10:19:26.176894
| 2012-08-17T05:39:02
| 2012-08-17T05:39:02
| 5,447,952
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 227
|
py
|
import os
from IPython.utils.path import expand_path
c = get_config()
# set working directory
work_dir = expand_path('~/ipclusterworkdir')
if not os.path.exists(work_dir):
os.makedirs(work_dir)
c.IPClusterStart.work_dir = work_dir
|
[
"rmcgibbo@gmail.com"
] |
rmcgibbo@gmail.com
|
d2dc9e047ddd95ab1ded819bbe41f031122270da
|
62ccdb11daefaecc8e63f235c7519cc7594f705a
|
/images/google-cloud-sdk/lib/surface/compute/backend_services/create.py
|
bf00d9d770eec74c877aea318addb654f0159d98
|
[
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] |
permissive
|
hiday1979/kalabasa-mas
|
eccc869bfe259bb474f9d2a4dc4b8561a481f308
|
53a9818eb2a6f35ee57c4df655e7abaaa3e7ef5b
|
refs/heads/master
| 2021-07-05T16:34:44.962142
| 2018-07-10T10:22:24
| 2018-07-10T10:22:24
| 129,709,974
| 0
| 1
| null | 2020-07-24T22:15:29
| 2018-04-16T08:27:13
|
Python
|
UTF-8
|
Python
| false
| false
| 20,541
|
py
|
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Command for creating backend services.
There are separate alpha, beta, and GA command classes in this file.
"""
from googlecloudsdk.api_lib.compute import base_classes
from googlecloudsdk.calliope import base
from googlecloudsdk.calliope import exceptions
from googlecloudsdk.command_lib.compute import flags as compute_flags
from googlecloudsdk.command_lib.compute import signed_url_flags
from googlecloudsdk.command_lib.compute.backend_services import backend_services_utils
from googlecloudsdk.command_lib.compute.backend_services import flags
from googlecloudsdk.core import log
# TODO(b/73642225): Determine whether 'https' should be default
def _ResolvePortName(args):
"""Determine port name if one was not specified."""
if args.port_name:
return args.port_name
if args.protocol == 'HTTPS':
return 'https'
if args.protocol == 'HTTP2':
return 'http2'
if args.protocol == 'SSL':
return 'ssl'
if args.protocol == 'TCP':
return 'tcp'
return 'http'
# TODO(b/73642225): Determine whether 'HTTPS' should be default
def _ResolveProtocol(messages, args, default='HTTP'):
return messages.BackendService.ProtocolValueValuesEnum(
args.protocol or default)
def AddIapFlag(parser):
# TODO(b/34479878): It would be nice if the auto-generated help text were
# a bit better so we didn't need to be quite so verbose here.
flags.AddIap(
parser,
help="""\
Configure Identity Aware Proxy (IAP) service. You can configure IAP to be
'enabled' or 'disabled' (default). If it is enabled you can provide values
for 'oauth2-client-id' and 'oauth2-client-secret'. For example,
'--iap=enabled,oauth2-client-id=foo,oauth2-client-secret=bar' will
turn IAP on, and '--iap=disabled' will turn it off. See
https://cloud.google.com/iap/ for more information about this feature.
""")
@base.ReleaseTracks(base.ReleaseTrack.GA)
class CreateGA(base.CreateCommand):
"""Create a backend service.
*{command}* is used to create backend services. Backend
services define groups of backends that can receive
traffic. Each backend group has parameters that define the
group's capacity (e.g. max CPU utilization, max queries per
second, ...). URL maps define which requests are sent to which
backend services.
Backend services created through this command will start out
without any backend groups. To add backend groups, use 'gcloud
compute backend-services add-backend' or 'gcloud compute
backend-services edit'.
"""
HEALTH_CHECK_ARG = None
HTTP_HEALTH_CHECK_ARG = None
HTTPS_HEALTH_CHECK_ARG = None
@classmethod
def Args(cls, parser):
parser.display_info.AddFormat(flags.DEFAULT_LIST_FORMAT)
flags.GLOBAL_REGIONAL_BACKEND_SERVICE_ARG.AddArgument(
parser, operation_type='create')
flags.AddDescription(parser)
cls.HEALTH_CHECK_ARG = flags.HealthCheckArgument()
cls.HEALTH_CHECK_ARG.AddArgument(parser, cust_metavar='HEALTH_CHECK')
cls.HTTP_HEALTH_CHECK_ARG = flags.HttpHealthCheckArgument()
cls.HTTP_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTP_HEALTH_CHECK')
cls.HTTPS_HEALTH_CHECK_ARG = flags.HttpsHealthCheckArgument()
cls.HTTPS_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTPS_HEALTH_CHECK')
flags.AddTimeout(parser)
flags.AddPortName(parser)
flags.AddProtocol(parser, default=None)
flags.AddEnableCdn(parser, default=False)
flags.AddSessionAffinity(parser, internal_lb=False)
flags.AddAffinityCookieTtl(parser)
flags.AddConnectionDrainingTimeout(parser)
flags.AddLoadBalancingScheme(parser)
flags.AddCacheKeyIncludeProtocol(parser, default=True)
flags.AddCacheKeyIncludeHost(parser, default=True)
flags.AddCacheKeyIncludeQueryString(parser, default=True)
flags.AddCacheKeyQueryStringList(parser)
AddIapFlag(parser)
parser.display_info.AddCacheUpdater(flags.BackendServicesCompleter)
def _CreateBackendService(self, holder, args, backend_services_ref):
health_checks = flags.GetHealthCheckUris(args, self, holder.resources)
if not health_checks:
raise exceptions.ToolException('At least one health check required.')
enable_cdn = True if args.enable_cdn else None
return holder.client.messages.BackendService(
description=args.description,
name=backend_services_ref.Name(),
healthChecks=health_checks,
portName=_ResolvePortName(args),
protocol=_ResolveProtocol(holder.client.messages, args),
timeoutSec=args.timeout,
enableCDN=enable_cdn)
def CreateGlobalRequests(self, holder, args, backend_services_ref):
if args.load_balancing_scheme == 'INTERNAL':
raise exceptions.ToolException(
'Must specify --region for internal load balancer.')
backend_service = self._CreateBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout)
if args.session_affinity is not None:
backend_service.sessionAffinity = (
client.messages.BackendService.SessionAffinityValueValuesEnum(
args.session_affinity))
if args.session_affinity is not None:
backend_service.affinityCookieTtlSec = args.affinity_cookie_ttl
backend_services_utils.ApplyCdnPolicyArgs(
client, args, backend_service, is_update=False)
self._ApplyIapArgs(client.messages, args.iap, backend_service)
request = client.messages.ComputeBackendServicesInsertRequest(
backendService=backend_service,
project=backend_services_ref.project)
return [(client.apitools_client.backendServices, 'Insert', request)]
def CreateRegionalRequests(self, holder, args, backend_services_ref):
backend_service = self._CreateRegionBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout)
request = client.messages.ComputeRegionBackendServicesInsertRequest(
backendService=backend_service,
region=backend_services_ref.region,
project=backend_services_ref.project)
return [(client.apitools_client.regionBackendServices, 'Insert', request)]
def _CreateRegionBackendService(self, holder, args, backend_services_ref):
health_checks = flags.GetHealthCheckUris(args, self, holder.resources)
if not health_checks:
raise exceptions.ToolException('At least one health check required.')
messages = holder.client.messages
return messages.BackendService(
description=args.description,
name=backend_services_ref.Name(),
healthChecks=health_checks,
loadBalancingScheme=(
messages.BackendService.LoadBalancingSchemeValueValuesEnum(
args.load_balancing_scheme)),
protocol=_ResolveProtocol(messages, args, default='TCP'),
timeoutSec=args.timeout)
def _ApplyIapArgs(self, messages, iap_arg, backend_service):
if iap_arg is not None:
backend_service.iap = backend_services_utils.GetIAP(iap_arg,
messages)
if backend_service.iap.enabled:
log.warning(backend_services_utils.IapBestPracticesNotice())
if (backend_service.iap.enabled and backend_service.protocol is not
messages.BackendService.ProtocolValueValuesEnum.HTTPS):
log.warning(backend_services_utils.IapHttpWarning())
def Run(self, args):
"""Issues request necessary to create Backend Service."""
holder = base_classes.ComputeApiHolder(self.ReleaseTrack())
client = holder.client
ref = flags.GLOBAL_REGIONAL_BACKEND_SERVICE_ARG.ResolveAsResource(
args,
holder.resources,
scope_lister=compute_flags.GetDefaultScopeLister(client))
if ref.Collection() == 'compute.backendServices':
requests = self.CreateGlobalRequests(holder, args, ref)
elif ref.Collection() == 'compute.regionBackendServices':
requests = self.CreateRegionalRequests(holder, args, ref)
return client.MakeRequests(requests)
@base.ReleaseTracks(base.ReleaseTrack.ALPHA)
class CreateAlpha(CreateGA):
"""Create a backend service.
*{command}* is used to create backend services. Backend
services define groups of backends that can receive
traffic. Each backend group has parameters that define the
group's capacity (e.g. max CPU utilization, max queries per
second, ...). URL maps define which requests are sent to which
backend services.
Backend services created through this command will start out
without any backend groups. To add backend groups, use 'gcloud
compute backend-services add-backend' or 'gcloud compute
backend-services edit'.
"""
HEALTH_CHECK_ARG = None
HTTP_HEALTH_CHECK_ARG = None
HTTPS_HEALTH_CHECK_ARG = None
@classmethod
def Args(cls, parser):
parser.display_info.AddFormat(flags.DEFAULT_LIST_FORMAT)
flags.GLOBAL_REGIONAL_BACKEND_SERVICE_ARG.AddArgument(
parser, operation_type='create')
flags.AddDescription(parser)
cls.HEALTH_CHECK_ARG = flags.HealthCheckArgument()
cls.HEALTH_CHECK_ARG.AddArgument(parser, cust_metavar='HEALTH_CHECK')
cls.HTTP_HEALTH_CHECK_ARG = flags.HttpHealthCheckArgument()
cls.HTTP_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTP_HEALTH_CHECK')
cls.HTTPS_HEALTH_CHECK_ARG = flags.HttpsHealthCheckArgument()
cls.HTTPS_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTPS_HEALTH_CHECK')
flags.AddTimeout(parser)
flags.AddPortName(parser)
flags.AddProtocol(
parser,
default=None,
choices=['HTTP', 'HTTPS', 'HTTP2', 'SSL', 'TCP', 'UDP'])
flags.AddEnableCdn(parser, default=False)
flags.AddCacheKeyIncludeProtocol(parser, default=True)
flags.AddCacheKeyIncludeHost(parser, default=True)
flags.AddCacheKeyIncludeQueryString(parser, default=True)
flags.AddCacheKeyQueryStringList(parser)
flags.AddSessionAffinity(parser, internal_lb=True)
flags.AddAffinityCookieTtl(parser)
flags.AddConnectionDrainingTimeout(parser)
flags.AddLoadBalancingScheme(parser)
flags.AddCustomRequestHeaders(parser, remove_all_flag=False, default=False)
signed_url_flags.AddSignedUrlCacheMaxAge(parser, required=False)
flags.AddConnectionDrainOnFailover(parser, default=None)
flags.AddDropTrafficIfUnhealthy(parser, default=None)
flags.AddFailoverRatio(parser)
AddIapFlag(parser)
parser.display_info.AddCacheUpdater(flags.BackendServicesCompleter)
def CreateGlobalRequests(self, holder, args, backend_services_ref):
if args.load_balancing_scheme == 'INTERNAL':
raise exceptions.ToolException(
'Must specify --region for internal load balancer.')
if (args.connection_drain_on_failover is not None or
args.drop_traffic_if_unhealthy is not None or args.failover_ratio):
raise exceptions.InvalidArgumentException(
'--global',
'cannot specify failover policies for global backend services.')
backend_service = self._CreateBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = (client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout))
if args.enable_cdn:
backend_service.enableCDN = args.enable_cdn
backend_services_utils.ApplyCdnPolicyArgs(
client,
args,
backend_service,
is_update=False,
apply_signed_url_cache_max_age=True)
if args.session_affinity is not None:
backend_service.sessionAffinity = (
client.messages.BackendService.SessionAffinityValueValuesEnum(
args.session_affinity))
if args.affinity_cookie_ttl is not None:
backend_service.affinityCookieTtlSec = args.affinity_cookie_ttl
if args.custom_request_header is not None:
backend_service.customRequestHeaders = args.custom_request_header
self._ApplyIapArgs(client.messages, args.iap, backend_service)
request = client.messages.ComputeBackendServicesInsertRequest(
backendService=backend_service,
project=backend_services_ref.project)
return [(client.apitools_client.backendServices, 'Insert', request)]
def CreateRegionalRequests(self, holder, args, backend_services_ref):
if (not args.cache_key_include_host or
not args.cache_key_include_protocol or
not args.cache_key_include_query_string or
args.cache_key_query_string_blacklist is not None or
args.cache_key_query_string_whitelist is not None):
raise exceptions.ToolException(
'Custom cache key flags cannot be used for regional requests.')
backend_service = self._CreateRegionBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout)
if args.custom_request_header is not None:
backend_service.customRequestHeaders = args.custom_request_header
backend_services_utils.ApplyFailoverPolicyArgs(client.messages, args,
backend_service)
request = client.messages.ComputeRegionBackendServicesInsertRequest(
backendService=backend_service,
region=backend_services_ref.region,
project=backend_services_ref.project)
return [(client.apitools_client.regionBackendServices, 'Insert', request)]
def _CreateRegionBackendService(self, holder, args, backend_services_ref):
health_checks = flags.GetHealthCheckUris(args, self, holder.resources)
if not health_checks:
raise exceptions.ToolException('At least one health check required.')
messages = holder.client.messages
return messages.BackendService(
description=args.description,
name=backend_services_ref.Name(),
healthChecks=health_checks,
loadBalancingScheme=(
messages.BackendService.LoadBalancingSchemeValueValuesEnum(
args.load_balancing_scheme)),
protocol=_ResolveProtocol(messages, args, default='TCP'),
timeoutSec=args.timeout)
@base.ReleaseTracks(base.ReleaseTrack.BETA)
class CreateBeta(CreateGA):
"""Create a backend service.
*{command}* is used to create backend services. Backend
services define groups of backends that can receive
traffic. Each backend group has parameters that define the
group's capacity (e.g. max CPU utilization, max queries per
second, ...). URL maps define which requests are sent to which
backend services.
Backend services created through this command will start out
without any backend groups. To add backend groups, use 'gcloud
compute backend-services add-backend' or 'gcloud compute
backend-services edit'.
"""
HEALTH_CHECK_ARG = None
HTTP_HEALTH_CHECK_ARG = None
HTTPS_HEALTH_CHECK_ARG = None
@classmethod
def Args(cls, parser):
parser.display_info.AddFormat(flags.DEFAULT_LIST_FORMAT)
flags.GLOBAL_REGIONAL_BACKEND_SERVICE_ARG.AddArgument(
parser, operation_type='create')
flags.AddDescription(parser)
cls.HEALTH_CHECK_ARG = flags.HealthCheckArgument()
cls.HEALTH_CHECK_ARG.AddArgument(parser, cust_metavar='HEALTH_CHECK')
cls.HTTP_HEALTH_CHECK_ARG = flags.HttpHealthCheckArgument()
cls.HTTP_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTP_HEALTH_CHECK')
cls.HTTPS_HEALTH_CHECK_ARG = flags.HttpsHealthCheckArgument()
cls.HTTPS_HEALTH_CHECK_ARG.AddArgument(
parser, cust_metavar='HTTPS_HEALTH_CHECK')
flags.AddTimeout(parser)
flags.AddPortName(parser)
flags.AddProtocol(parser, default=None)
flags.AddEnableCdn(parser, default=False)
flags.AddSessionAffinity(parser, internal_lb=True)
flags.AddAffinityCookieTtl(parser)
flags.AddConnectionDrainingTimeout(parser)
flags.AddLoadBalancingScheme(parser)
flags.AddCustomRequestHeaders(parser, remove_all_flag=False)
flags.AddCacheKeyIncludeProtocol(parser, default=True)
flags.AddCacheKeyIncludeHost(parser, default=True)
flags.AddCacheKeyIncludeQueryString(parser, default=True)
flags.AddCacheKeyQueryStringList(parser)
signed_url_flags.AddSignedUrlCacheMaxAge(parser, required=False)
AddIapFlag(parser)
def CreateGlobalRequests(self, holder, args, backend_services_ref):
if args.load_balancing_scheme == 'INTERNAL':
raise exceptions.ToolException(
'Must specify --region for internal load balancer.')
backend_service = self._CreateBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout)
if args.session_affinity is not None:
backend_service.sessionAffinity = (
client.messages.BackendService.SessionAffinityValueValuesEnum(
args.session_affinity))
if args.session_affinity is not None:
backend_service.affinityCookieTtlSec = args.affinity_cookie_ttl
if args.IsSpecified('custom_request_header'):
backend_service.customRequestHeaders = args.custom_request_header
backend_services_utils.ApplyCdnPolicyArgs(
client,
args,
backend_service,
is_update=False,
apply_signed_url_cache_max_age=True)
self._ApplyIapArgs(client.messages, args.iap, backend_service)
request = client.messages.ComputeBackendServicesInsertRequest(
backendService=backend_service,
project=backend_services_ref.project)
return [(client.apitools_client.backendServices, 'Insert', request)]
def CreateRegionalRequests(self, holder, args, backend_services_ref):
backend_service = self._CreateRegionBackendService(holder, args,
backend_services_ref)
client = holder.client
if args.connection_draining_timeout is not None:
backend_service.connectionDraining = client.messages.ConnectionDraining(
drainingTimeoutSec=args.connection_draining_timeout)
if args.IsSpecified('custom_request_header'):
backend_service.customRequestHeaders = args.custom_request_header
request = client.messages.ComputeRegionBackendServicesInsertRequest(
backendService=backend_service,
region=backend_services_ref.region,
project=backend_services_ref.project)
return [(client.apitools_client.regionBackendServices, 'Insert', request)]
def _CreateRegionBackendService(self, holder, args, backend_services_ref):
health_checks = flags.GetHealthCheckUris(args, self, holder.resources)
if not health_checks:
raise exceptions.ToolException('At least one health check required.')
messages = holder.client.messages
return messages.BackendService(
description=args.description,
name=backend_services_ref.Name(),
healthChecks=health_checks,
loadBalancingScheme=(
messages.BackendService.LoadBalancingSchemeValueValuesEnum(
args.load_balancing_scheme)),
protocol=_ResolveProtocol(messages, args, default='TCP'),
timeoutSec=args.timeout)
|
[
"accounts@wigitech.com"
] |
accounts@wigitech.com
|
928a332440c5e2aadd586ef2ceef6ed62ae0663b
|
c2aa88848a65eb657707dc4edcf5eefd7b059f08
|
/twitter_stream.py
|
4275e34ce5d00cefd3798c4ca5b9c680f098024b
|
[] |
no_license
|
jorgecontreras/kafka
|
111619de698c9a86d4e70e3ce825044d9483cae4
|
32e46a6016eab7f23704c176eb27a1ca9c17fdc5
|
refs/heads/main
| 2023-03-21T22:31:43.287940
| 2021-03-14T22:38:48
| 2021-03-14T22:38:48
| 345,859,481
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,606
|
py
|
# TWITTER TO KAFKA PRODUCER
#
# This program will read a Twitter stream and create a Kafka producer from it.
#
import tweepy, json
from kafka import KafkaProducer
from time import sleep
# twitter credentials
consumer_key = "CONSUMER_KEY"
consumer_secret = "CONSUMER_SECRET"
access_token = "ACCESS_TOKEN"
access_token_secret = "ACCESS_SECRET"
# topic to track
TOPIC_NAME = 'covid'
# server config
KAFKA_SERVER = 'localhost:9092'
class StreamListener(tweepy.StreamListener):
def on_status(self, status):
print(status.text)
def on_error(self, status_code):
if status_code == 420:
return False
def on_data(self, data):
try:
tweet = json.loads(data)
producer.send(TOPIC_NAME, tweet['text'].encode('utf-8'))
print(tweet['text'])
sleep(3)
except Exception as e:
print(e)
return False
return True
def on_timeout(self):
return True
# Twitter authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
# create producer
producer = KafkaProducer(bootstrap_servers=KAFKA_SERVER)
# stream listener
stream_listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True, wait_on_rate_limit_notify=True, timeout=20, retry_delay=5, retry_count=10, retry_errors=set([401, 404, 500, 503])))
stream = tweepy.Stream(auth = api.auth, listener=stream_listener)
# start the stream
print("Tracking: " + str(TOPIC_NAME))
stream.filter(track=[TOPIC_NAME], languages = ['en'])
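# A hedged sketch of the consumer side, meant to live in a separate script/process (reusing the
# TOPIC_NAME and KAFKA_SERVER constants above); it uses kafka-python's KafkaConsumer to read
# back the tweets published by this producer:
#
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer(TOPIC_NAME, bootstrap_servers=KAFKA_SERVER,
#                            auto_offset_reset='earliest')
#   for message in consumer:
#       print(message.value.decode('utf-8'))   # each message holds one tweet's text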
|
[
"jrcr23@gmail.com"
] |
jrcr23@gmail.com
|
e02a06586aa43081b713ad3d21709e807b830292
|
97bc1d18cfe06705c4208eff4eb0cc86238a12a5
|
/app/dashboard_app/views.py
|
e378b5bd122fbe76928116b0c4b9c917d8482449
|
[
"Apache-2.0"
] |
permissive
|
ganggas95/E-Wisata
|
422ac1acd451c6f6d40d1ec5ff18de29cb890094
|
fb66fc7d3d4cc5a45ad9acea42fb306140a6449f
|
refs/heads/master
| 2020-03-24T20:01:33.828622
| 2018-07-31T03:14:36
| 2018-07-31T03:14:36
| 142,955,756
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 314
|
py
|
from flask import (
render_template,
views
)
from flask_login import login_required
class DashboardView(views.View):
def __init__(self, template_name):
self.template_name = template_name
@login_required
def dispatch_request(self):
return render_template(self.template_name)
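# A hedged registration sketch: Flask's View.as_view forwards extra keyword arguments to
# __init__, so the template name can be supplied at registration time. The app object and
# template path below are assumptions for illustration only.
if __name__ == '__main__':
    from flask import Flask
    app = Flask(__name__)
    app.add_url_rule('/dashboard',
                     view_func=DashboardView.as_view('dashboard',
                                                     template_name='dashboard.html'))
    app.run(debug=True)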
|
[
"subhannizar25@gmail.com"
] |
subhannizar25@gmail.com
|
a450853a742044f8885b76f1cdf35648cafc0e4c
|
281d110466c050eaeeecb76cc8ceb63e81b22252
|
/codes/backend/mongo_accessor.py
|
59e889c628e85ce201d8755300e4f8d24ff50d89
|
[] |
no_license
|
Greilfang/Amzaon-movie-analysis
|
be9e5d532a60b9f7d239df80c5d989d52463e33e
|
2e0f50f9d956f3518b64b8b944750ed5772ec2a4
|
refs/heads/master
| 2023-03-18T09:28:38.440523
| 2019-12-24T12:21:58
| 2019-12-24T12:21:58
| 223,198,531
| 1
| 1
| null | 2023-03-02T09:56:16
| 2019-11-21T14:55:35
|
Python
|
UTF-8
|
Python
| false
| false
| 4,973
|
py
|
import pymongo
import json
import os
class IntroHandler:
def __init__(self, addr='localhost', port=27017, base_name='Amazon-Movies'):
# def __init__(self,addr='localhost',port=27017,base_name='Amazon-Movie'):
self.client = pymongo.MongoClient(addr, port)
self.database = self.client[base_name]
def insert_all_reviews(self, path, collection_name='Reviews'):
collection = self.database[collection_name]
names = os.listdir(path)
count = 0
for name in names:
with open(path + '\\{}'.format(name)) as f:
data = json.load(f)
for review in data['Reviews']:
count = count + 1
review['Title'] = data['Title']
review['MovieID'] = data['ID']
collection.insert_one(review)
if count % 10000 == 0:
print("{} reviews inserted".format(count))
def insert_all_intros(self, path, collection_name='Intros'):
collection = self.database[collection_name]
names = os.listdir(path)
count = 0
for name in names:
with open(path + '\\{}'.format(name)) as f:
data = json.load(f)
if not data['Intro']:
continue
intro = dict()
intro['MovieID'] = data['ID']
intro['Title'] = data['Title']
intro['Intro'] = data['Intro']
collection.insert_one(intro)
count = count + 1
if count % 1000 == 0:
print("{} intros inserted".format(count))
def insert_all_people(self, path, collection_name='Details'):
collection = self.database[collection_name]
names = os.listdir(path)
count = 0
for name in names:
with open(path + '\\{}'.format(name)) as f:
data = json.load(f)
peoples = dict()
peoples['MovieID'] = data['ID']
peoples['Director'] = data['Director']
if data['Supporting']:
peoples['Actor'] = data['Starring'] + data['Supporting']
else:
peoples['Actor'] = data['Starring']
peoples['Genre'] = data['Genre']
peoples["Intro"] = data["Intro"]
peoples["Emotion"] = data["Emotion"] if "Emotion" in data else 0.5
collection.insert_one(peoples)
count = count + 1
if count % 1000 == 0:
print("{} details inserted".format(count))
def query_id_with_bundent(self, directors, actors, intro):
collection = self.database['Details']
condition = dict()
if not directors == '':
condition['Director'] = {'$regex': directors[0]}
if not actors == '':
asr = ""
for actor in actors:
asr = asr + "+" + actor
asr = asr[1:]
#print('asr:', asr)
condition['Actor'] = {'$regex': asr}
if not intro == '':
condition['Intro'] = {'$regex': intro}
results = collection.find(condition, {"_id": 0, "MovieID": 1, })
ids = [result["MovieID"] for result in results]
return ids
def query_more_info_with_bundent(self, directors, actors, intro, genre):
collection = self.database['Details']
condition = dict()
if not directors == '':
dsr = ""
for director in directors:
dsr = dsr + "+" + director
dsr = dsr[1:]
condition['Director'] = {'$regex': dsr}
if not actors == '':
asr = ""
for actor in actors:
asr = asr + "+" + actor
asr = asr[1:]
#print('asr:', asr)
condition['Actor'] = {'$regex': asr}
if not intro == '':
condition['Intro'] = {'$regex': intro}
if not genre == '':
condition['Genre'] = genre
results = collection.find(condition,
{"_id": 0, "MovieID": 1, "Intro": 1, "Genre": 1, "Director": 1, "Actor": 1})
new_results = dict()
for result in results:
new_results[result["MovieID"]] = result
#print("new_results:", new_results)
return new_results
def query_more_info_with_ids(self, ids):
collection = self.database['Details']
condition = {'MovieID': {'$in': ids}}
results = collection.find(condition,
{"_id": 0, "MovieID": 1, "Intro": 1, "Genre": 1, "Director": 1, "Actor": 1})
new_results = dict()
for result in results:
new_results[result["MovieID"]] = result
return new_results
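# A hedged usage sketch of IntroHandler; the MongoDB connection details and the search terms
# below are illustrative assumptions, and a populated 'Details' collection is presumed.
if __name__ == '__main__':
    handler = IntroHandler(addr='localhost', port=27017)
    ids = handler.query_id_with_bundent(directors=['Nolan'], actors=['Bale'], intro='')
    print(handler.query_more_info_with_ids(ids))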
|
[
"565222945@qq.com"
] |
565222945@qq.com
|
f3ab31a5bb6beae5f2ca34e84ae948eaf94ae166
|
e2e130172767e061bad272429dfd9096540f85f7
|
/python-testing/checkout_kata.py
|
81622efa0786d9135d63733d7da2863cc0b06174
|
[] |
no_license
|
seanmortimer/udemy-python
|
ee8033e4155d5ed8937d56d0ea793f2b7f56a6df
|
0acb93b079014adf1020926560595e3a137fd085
|
refs/heads/master
| 2023-04-12T23:20:30.423492
| 2020-07-20T08:26:24
| 2020-07-20T08:26:24
| 275,500,095
| 0
| 0
| null | 2021-04-20T20:28:08
| 2020-06-28T03:35:36
|
Python
|
UTF-8
|
Python
| false
| false
| 1,604
|
py
|
class Checkout:
class Discount:
def __init__(self, quantity, price):
self.quantity = quantity
self.price = price
def __init__(self):
self.prices = {}
self.discounts = {}
self.items = {}
self.total = 0
def addItemPrice(self, item, price):
self.prices[item] = price
def addItem(self, item):
if item not in self.prices:
raise Exception("Item is missing price")
if item in self.items:
self.items[item] += 1
else:
self.items[item] = 1
def addDiscount(self, item, quantity, price):
discount = self.Discount(quantity, price)
self.discounts[item] = discount
def totalItems(self):
total = 0
for (item, cnt) in self.items.items():
total += self.calculateItemTotal(item, cnt)
return total
def calculateItemTotal(self, item, count):
total = 0
if item in self.discounts:
discount = self.discounts[item]
if count >= discount.quantity:
total += self.calcItemDiscount(item, count, discount)
else:
total+= self.prices[item] * count
else:
total += self.prices[item] * count
return total
def calcItemDiscount(self, item, count, discount):
total = 0
        quantity = count // discount.quantity  # integer division: number of complete discount bundles
total += quantity * discount.price
remain = count % discount.quantity
total += remain * self.prices[item]
return total
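# A hedged usage sketch of the Checkout class; the item names, prices and discount are
# illustrative and not part of the kata itself.
if __name__ == '__main__':
    co = Checkout()
    co.addItemPrice("a", 1)
    co.addItemPrice("b", 2)
    co.addDiscount("a", 3, 2)      # buy 3 of "a" for 2
    for item in ["a", "a", "a", "b"]:
        co.addItem(item)
    print(co.totalItems())         # 3 x "a" at the bundle price (2) + 1 x "b" (2) = 4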
|
[
"sean.mortimer@gmail.com"
] |
sean.mortimer@gmail.com
|
d7e879f4af1622534ee8798a8f6c375c73f73e0d
|
b25b72df03b68f262b34ea3a8daa5ef4a26104de
|
/facereader1.py
|
ce3956c95971645bce473adeb0333fd2ecf53274
|
[] |
no_license
|
16LeeSeul/FaceReader
|
961a83c7b840f76d2184421c642518b093264581
|
c3d0f05a423b78080433efc35c08e6af533162db
|
refs/heads/master
| 2020-06-02T00:46:34.460891
| 2019-06-09T09:01:54
| 2019-06-09T09:01:54
| 190,983,982
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 17,165
|
py
|
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 10 22:45:32 2018
@author: BME
"""
import sys
import torch
import torch.nn.init
from torch.autograd import Variable
import torchvision.utils as utils
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import torchvision
import matplotlib.pyplot as plt
import numpy as np
import time
import zipfile
import random
import cv2 # OpenCV, used here for face/eye detection
import numpy
from matplotlib import pyplot as plt
output_dir = sys.argv[3]
usr_name = sys.argv[2]
start =time.time()
#filename = raw_input()
filename = sys.argv[1]
cascadefile = "./haarcascade_lefteye_2splits.xml" # Haar cascade that detects the left eye
cascadefile1 = "./haarcascade_righteye_2splits.xml" # Haar cascade that detects the right eye separately
img = cv2.imread(filename)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert the colour image to grayscale so it can be processed
# extract only the left-eye region
cascade = cv2.CascadeClassifier(cascadefile)
facelist = cascade.detectMultiScale(imgray, scaleFactor=2.08, minNeighbors=1)
cropped = []
if len(facelist) >= 1:
for face in facelist:
x, y, w, h = face
        cv2.rectangle(imgray, (x, y), (x+w, y+h), (255, 0, 0), 2) # mark the detected eye with a rectangle
        cropped = imgray[y:y+h, x:x+w] # crop out the eye region
        result_filename = ["./real/left/1.jpg"]
        result_filename = ''.join(result_filename)
        cv2.imwrite(result_filename,cropped) # save the cropped eye region
if not np.any(cropped): # the eye could not be detected
print('왼쪽 눈을 인식하지 못했습니다..ㅜㅠ')
# extract only the right-eye region
cascade = cv2.CascadeClassifier(cascadefile1)
facelist = cascade.detectMultiScale(imgray, scaleFactor=2.08, minNeighbors=1)
cropped=[]
if len(facelist) >= 1:
for face in facelist:
x, y, w, h = face
        cv2.rectangle(imgray, (x, y), (x+w, y+h), (255, 0, 0), 2) # mark the detected eye with a rectangle
        cropped = imgray[y:y+h, x:x+w] # crop out the eye region
        result_filename = ["./real1/right/1.jpg"]
        result_filename = ''.join(result_filename)
        cv2.imwrite(result_filename,cropped) # save the cropped eye region
if not np.any(cropped): # the eye could not be detected
print('오른쪽 눈을 인식하지 못했습니다..ㅠㅜ')
transform = transforms.Compose(
[transforms.Resize(24),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size=16
testset = torchvision.datasets.ImageFolder(root='./real',transform=transform)
test_loader = torch.utils.data.DataLoader(dataset=testset,
batch_size = batch_size,
                                           shuffle=True) # use the extracted left eye as the test set
testset1 = torchvision.datasets.ImageFolder(root='./real1',transform=transform)
test_loader1 = torch.utils.data.DataLoader(dataset=testset1,
batch_size = batch_size,
                                           shuffle=True) # use the extracted right eye as the test set
test_images_l, test_labels_l = next(iter(test_loader))
test_images_r, test_labels_r = next(iter(test_loader1))
class LeNet(torch.nn.Module):
def __init__(self):
super(LeNet,self).__init__()
self.layer1=torch.nn.Sequential(
torch.nn.Conv2d(3,48,3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(48),
torch.nn.MaxPool2d(2),
torch.nn.Conv2d(48,96,3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(96),
torch.nn.MaxPool2d(2)
)
self.fc = torch.nn.Sequential(
torch.nn.Linear(96*4*4,120),
torch.nn.ReLU(),
torch.nn.Linear(120,84),
torch.nn.ReLU(),
torch.nn.Linear(84,3)
)
def forward(self,x):
x=self.layer1(x)
x=x.view(x.size()[0],-1)
x=self.fc(x)
return x
left_eye = LeNet() # instantiate the model class
left_eye.load_state_dict(torch.load('./Left_eye.path'))
left_eye.eval()
right_eye = LeNet()
right_eye.load_state_dict(torch.load('./right_eye.path'))
right_eye.eval()
size_eye = LeNet()
size_eye.load_state_dict(torch.load('./size_eye.path'))
size_eye.eval()
X = Variable(test_images_l.view(-1,3,24,24).float())
E1 = left_eye(X) # classify the left eye tail as up, middle or down
E3 = size_eye(X) # classify the left eye size
Y = Variable(test_images_r.view(-1,3,24,24).float())
E2 = right_eye(Y) # classify the right eye tail as up, middle or down
E4 = size_eye(Y) # classify the right eye size
classes1 = ('down','middle','up')
classes2 = ('down','middle','up')
classes3 = ('big', 'small')
# print the predicted classes for eye size and eye-tail direction
print('왼쪽 눈꼬리는 '+classes1[torch.max(E1,1)[1][0]]+' 되어 있습니다!')
print('오른쪽 눈꼬리는 '+classes2[torch.max(E2,1)[1][0]]+' 되어 있습니다!')
print('왼쪽 눈의 크기는 '+classes3[torch.max(E3,1)[1][0]]+' 하군요!')
print('오른쪽 눈의 크기는 '+classes3[torch.max(E4,1)[1][0]]+' 하군요!')
# readings used when the eyes are large
L1 = "감정표현이 뛰어나다."
L2 = "감정 중시하며, 천진하고, 착하다."
L3 = "동정심 많음, 금전이나 애정 문제로 남에게 쉽게 이용당할 수 있다."
L4 = "애정에서는 우유부단하고 주저하여 결정을 내리지 못하는 경우가 있고, 심지어 양다리를 걸치는 상황도 생길 수 있다."
L5 = "시야가 넓고 명랑하고 외향적이며 사교와 단체 생활을 좋아한다."
L6 = "관찰력이 예리하고 반응이 민첩하다."
L7 = "색채 분별력이 뛰어나고 음악이나 회화 쪽으로 재능을 발휘할 수 있다."
L8 = "목표를 이루기 위한 의지와 집중력이 부족하기 때문에 전문 분야로 성과를 거두기 어려울 수 있다."
L9 = "언변이 좋아 이성의 환심을 살 수 있다."
L10 = "마음이 열려 있어 정이 많고, 열정적이다."
L11 = "호기심이 넘치고 개방적인 성격을 갖추고 있다."
L12 = "정이 많아 이성에 대한 관심과 인기도도 많고 개방적인 성격을 갖고 있다."
L13 = "적극적인 애정공세를 펴는 경우가 많다."
L14 = "심리 변화가 심하기 때문에 즉흥적인 행동을 보여 오해를 받는 경우가 많다."
L15 = "현실보다는 이상을 추구하여 금전적으로 기복이 심하다."
L16 = "일반적으로 얼굴을 보았을 때 크다고 느껴지는 눈을 가진 사람은 감각이 뛰어나고 이성을 끌어들이는 매력이 있으며 개방적이다."
L17 = "정열적인 성격을 갖추고 있으며 상대방을 잘 배려해주는 한편, 상대방의 마음을 읽어내는 재능이 있다."
L18 = "개방적인 성격이기는 하지만, 사람을 가려서 사귀는 편이고 정열이 지나치게 강해서 애정문제에 빠지면 헤어나지 못한다."
L19 = "사랑을 할 때에는 최선을 다 하지만, 사랑이 식으면 미련 없이 등을 돌리는 냉정함이 있다."
L20 = "남성의 경우에는 리더가 될 수 있는 자질을 충분히 갖추고 있기 때문에 다른 사람 밑에서 일하는 것에 거부감을 느낀다. 단, 직장생활을 하면 승진이 빠른 편이다."
L21 = "여성의 경우에는 남성에게 인기가 좋으며 음악적 감각이 뛰어나서 노래를 잘하며 춤에도 소질이 있다."
L = [L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11, L12, L13, L14, L15, L16, L17, L18, L19, L20, L21]
# readings used when the eyes are small
S1 = "차분하고 겸손한 성격을 갖추고 있다."
S2 = "강인하고 냉정한 자기만의 세계를 가진 사람이 많다."
S3 = "말보다는 행동으로 생각을 표현하는 신중함을 가진다."
S4 = "자신의 속내를 쉽게 드러내지 않는다."
S5 = "사회적으로 믿음직하다는 평가를 받는다."
S6 = "한번 마음먹은 일은 가능하면 끝까지 성사시키려는 끈기도 있다."
S7 = "힘든 시기가 닥치더라도 꿋꿋이 이겨낼 수 있는 사람이다."
S8 = "젊은 시절에 고생이 많고 매력이 뒤떨어져 윗사람들의 사랑을 받지 못한다."
S9 = "겸손한 성격으로 대인관계에서 자신을 굽힐 줄 알고 지적인 능력이 뛰어나기 때문에 학문적인 분야에서 성공할 가능성이 높다."
S10 = "특히 한 우물을 파서 성공을 거두는 예가 많지만, 성격이 매우 강해 냉정하다는 인상을 주기 쉽고 자신만의 공간에 틀어박혀 좀처럼 마음을 열지 않는다."
S11 = "남성의 경우 여자를 다루는 능력과 금전을 융통하는 능력은 부족하지만, 믿음직하고 성실하기 때문에 늦게 인정을 받는 타입이다."
S12 = "의지가 강하기 때문에 난관을 잘 극복한다."
S13 = "여성의 경우에는 남성을 선택하는 데 많은 시간이 걸리지만, 한번 마음을 주면 어지간해서는 다른 이성에게 눈길을 돌리지 않는 일편단심형이며 가족을 매우 중요하게 생각한다."
S14 = "가정 경제를 꾸려나가는 능력이 있고 사회활동을 해도 성공할 수 있다."
S = [S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13, S14]
# readings for upturned eye tails
U1 = "성급하며 양의 특성 상 기질이 강하고 빠르고 폭발적이고 급한 것이다."
U2 = "감각이 뛰어나고 어떤 일에도 굽히지 않는 강한 용기를 갖추고 있으며 두뇌회전이 빠르고 기회를 잡는 능력이 뛰어나다."
U3 = "예술적인 방면에 소질이 있고 추친력을 갖추고 있으며 아무리 어려운 난관에 부딪혀도 강한 인내력으로 돌파할 수 있는 용기가 있다."
U4 = "기회가 오면 어떻게 해서든 움켜쥐려하기 때문에 이기적이라는 인상을 주기 쉽고 독단적인 성향이 강하다."
U5 = "남성의 경우에는 두뇌회전이 빨라 중간관리직으로 잘 어울리며 실행력이 있어 운세가 좋은 편이다."
U6 = "자신의 주장을 약간 억제하고 다른 사람의 의견을 받아들이는 포용력을 갖추는 것이 바람직하다."
U7 = "성공을 추구하는 눈이다."
U8 = "성격이 예민하고 반응이 빠르고 결단력이 있고 시기를 놓치지 않는다."
U9 = "그러나 자존심과 승부욕 소유욕이 강하고 의심이 많은 것이 단점이다."
U10 = "품격이 있다."
U11 = "두뇌 회전이 빠르고 총명하다."
U12 = "예상 밖의 아이디어를 가져 영리해 보일 수 있다."
U13 = "남의 어려움을 앞장 서 해결하므로 인복이 많다."
U14 = "애정 문제에서 주도권을 잡고 적극적으로 어필한다."
U15 = "점유욕과 지배욕이 있다."
U16 = "끈기가 있고 체력이 강하다."
U17 = "주관이 분명하고 대범한 성격을 갖췄다."
U18 = "어떤 일을 하든 반드시 성사시키는 강인함을 갖추고 있다."
U19 = "리더십도 매우 뛰어나 ‘대인의 상’, ‘장수의 상’이라고 표현한다."
U20 = "자존심도 강해 다른 사람에게 지는 것을 싫어한다."
U21 = "자신의 영역을 침범당하면 즉각 자기방어에 나설 정도로 철저한 자기관리 능력을 자랑한다."
U = [U1, U2, U3, U4, U5, U6, U7, U8, U9, U10, U11, U12, U13, U14, U15, U16, U17, U18, U19, U20, U21]
# readings for downturned eye tails
O1 = "만약 반대로 눈끝이 아래로 숙인 자는 음에 속하니 문질이며 부드럽고 약하며 침착하며 느린 것이다."
O2 = "눈꼬리가 처진 눈을 가진 사람은 심리적으로 느긋하고 여유 있는 성격이며 투쟁이나 다툼보다는 평화를 사랑한다."
O3 = "모든 일을 긍정적이고 원만하게 처리하려 하기 때문에 대인관계가 매우 좋아 다른 사람의 도움으로 출세를 할 가능성이 매우 높다. "
O4 = "성실하다는 점도 장점이다."
O5 = "수동적이며 소극적이기 때문에 주위 사람들로부터 자신의 주장을 할 줄 모르는 사람이라는 비난을 받는다."
O6 = "남성의 경우에는 친구나 동료, 선후배와의 관계가 원만해서 일찍 출세할 수 있다."
O7 = "여성의 유혹에 넘어가기 쉽고 그 때문에 실패를 맛볼 가능성이 높다."
O8 = "여성의 경우에는 역시 남성의 유혹에 넘어가기 쉽고 그 때문에 손해를 볼 가능성이 높다."
O9 = "사교적이며 인정이 많다."
O10 = "인정이 많고 보스 기질이 있다."
O11 = "대인관계가 좋고 주변에 사람이 많이 모이는 편이다."
O12 = "유머도 풍부하여 재미있고 즐거운 인생을 보낼 것 같으나 사실 외로움도 많이 탄다."
O13 = "이성에 대한 호기심이 매우 강해서 정 때문에 마음을 졸일 가능성이 매우 높다."
O14 = "모든 사람에게 친절하고 다정하게 행동하지만, 그에 못지 않을 정도로 자존심이 강하고 보스 기질이 강하기 때문에 실질적으로는 성격이 매우 강한 사람이다."
O = [O1, O2, O3, O4, O5, O6, O7, O8, O9, O10, O11, O12, O13, O14]
# readings for level (horizontal) eye tails
M1 = "불상불하의 눈으로 불투(사물을 훔쳐보지 말아야)해야 모름지기 쓸 만한 그릇이 된다."
M2 = "위인(爲人)이 강개롭고 심평정직하다."
M3 = "위, 아래로 향하지 않고 수평을 유지하는 것이 가장 이상적이다."
M = [M1, M2, M3]
# readings for asymmetric eyes
D1 = "성격이 변덕스럽고 우유부단하다."
D2 = "부모님의 사이가 좋지않을 확률이 높다."
D3 = "성격상 소극적이면서 어두운 면이 있다."
D4 = "남다른 관찰력과 예민한 직감력을 지녔다."
D5 = "인생 굴곡이 많다."
D6 = " 활동적이고 야심이 있고 부를 축적한다."
D7 = "세상을 두가지 관점으로 보는 경향이 있어 객관성이 매우 뛰어나고 논리적이다."
D8 = "어떤 분야에서든 상위에까지 오르기는 하지만, 최상위에 오르기는 어려움이 있다."
D9 = "주변에 시기와 질투를 하는 사람들이 많다."
D10 = "한쪽은 크고 한쪽은 작은 눈을 가진 사람은 인생에서 큰 전환기를 겪을 가능성이 높고 두뇌회전이 빠른 편이다."
D11 = "자기 주장이 뚜렷하고 활동적이며 승부에 대한 열정이 강하고 이상도 높다."
D12 = "고집이 세고 자기 주장이 강해 견제의 대상이 될 가능성이 높고 이성에게 약한 편이며 인생에 기복이 아주 심하다."
D13 = "남성의 경우에는 왼쪽이 클 경우에는 매우 활동적이고 승부욕이 강하며 이상이 높고, 오른쪽이 클 경우에는 정에 이끌리기는 해도 리더심과 자신감이 있어서 노력에 따라 행복을 만끽할 수 있다."
D = [D1, D2, D3, D4, D5, D6, D7, D8, D9, D10, D11, D12, D13]
# eye-tail evaluation
n=4
if classes1[torch.max(E1,1)[1][0]] == classes2[torch.max(E2,1)[1][0]]: # both eye tails fall in the same class
n=4
    if classes1[torch.max(E1,1)[1][0]] == 'down': # both tails turned down
ind = random.sample(range(14),n)
answer=[]
for i in ind:
answer.append(O[i])
    elif classes1[torch.max(E1,1)[1][0]] == 'up': # both tails turned up
ind = random.sample(range(21),4)
answer=[]
for i in ind:
answer.append(U[i])
    else: # both tails level
answer=[]
for i in range(3):
answer.append(M[i])
elif classes1[torch.max(E1,1)[1][0]] == 'middle' or classes2[torch.max(E2,1)[1][0]] == 'middle': # exactly one tail is level
    n=3
    if classes1[torch.max(E1,1)[1][0]] == 'up' or classes2[torch.max(E2,1)[1][0]] == 'up': # one tail level, the other turned up
ind = random.sample(range(21),n)
answer=[]
for i in ind:
answer.append(U[i])
    else: # one tail level, the other turned down
ind = random.sample(range(14),n)
answer=[]
for i in ind:
answer.append(O[i])
else:
n=2
ind = random.sample(range(14),n)
answer=[]
for i in ind:
answer.append(O[i])
answer.append(U[i])
# eye-size evaluation
n=4
if classes3[torch.max(E3,1)[1][0]] == classes3[torch.max(E4,1)[1][0]]: # both eyes fall in the same size class
    if classes3[torch.max(E3,1)[1][0]] == 'big' : # both eyes are large
ind = random.sample(range(21),n)
for i in ind:
answer.append(L[i])
else:
ind = random.sample(range(14),n)
for i in ind:
answer.append(S[i])
else: # the two eyes were classified differently
ind = random.sample(range(14),n)
answer.append(L[ind[0]])
answer.append(S[ind[1]])
answer.append(D[ind[2]])
answer.append(D[ind[3]])
print('수행 시간은 '+str(time.time()-start)+ '초 걸렸습니다.')
with open(output_dir+"+.txt", "w") as f:
f.write('< ')
f.write(usr_name)
f.write(' 님의 관상 결과!!!! > \n\n')
f.write('\n'.join(answer))
|
[
"tmf789@likelion.org"
] |
tmf789@likelion.org
|
ba20ea31980447ed3f66403fce6e3d557a34b971
|
6583a0af3f0e1b2c7a4e0efdf6b946fc4e3b6009
|
/blog/D4DJ/read_movie.py
|
717e4666959d8a9eaf4c2682e47eb08db08c116c
|
[] |
no_license
|
nasuika1/my-first-blog
|
87b3cdd9092512c0c91e029dce9173629b8e8c31
|
46c94d134712ad20483dc1c8d6c785cab45800c4
|
refs/heads/master
| 2021-06-21T15:26:08.154139
| 2021-05-17T21:06:54
| 2021-05-17T21:06:54
| 220,879,589
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 21,704
|
py
|
import cv2
import os
import numpy as np
import csv
class Note_Type:
def __init__(self,color,region,n_name):
self.color = color
self.region = region
self.n_name = n_name
class Note:
def __init__(self,min_value,max_value,color,name):
self.min_value = min_value
self.max_value = max_value
self.color = color
self.name = name
class ChartInfo:
def __init__(self):
self.chartinfo = []
def Add_Note(self,frame,note_type,note_place):
self.chartinfo += [[frame,note_type,note_place]]
def reset(self):
self.chartinfo = []
def Substi(self,s):
self.chartinfo = s
class Chart:
def __init__(self):
self.chart = []
def add_note(self,frame,note_type,note_place):
self.chart += [[frame,note_place,note_type]]
def serch_region(hsv,min_value,max_value):
color_min = np.array(min_value,np.uint8)
color_max = np.array(max_value,np.uint8)
color_region = cv2.inRange(hsv,color_min,color_max)
return color_region
def padding_position(x,y,w,h,p):
return x-p,y-p,w+p*2,h+p*2
class Analysis:
def __init__(self,movie_name):
self.movie_name = cv2.VideoCapture(movie_name)
self.width = self.movie_name.get(cv2.CAP_PROP_FRAME_WIDTH)
self.height = self.movie_name.get(cv2.CAP_PROP_FRAME_HEIGHT)
self.fps = self.movie_name.get(cv2.CAP_PROP_FPS)
self.fc = self.movie_name.get(cv2.CAP_PROP_FRAME_COUNT)
self.ci = ChartInfo()
self.note_count = [0,0,0,0,0,0,0]
self.slider_count = 0
self.hold_count = 0
self.long_count = 0
self.chart = Chart()
self.count_1 = 0
self.count_2 = 0
def save_movie(self,file_name,frame_num = None):
fourcc = cv2.VideoWriter_fourcc('m','p','4','v')
video = cv2.VideoWriter(file_name,fourcc,self.fps,(int(self.width),int(self.height)))
if(frame_num == None):
frame_num = int(self.fc)
i = 0
while i < frame_num:
print(i)
self.ci.reset()
self.count_1 = 0
self.count_2 = 0
img = self.save_frame(i)
if(img.dtype!='uint8'):
break
video.write(img)
i += 4-2*self.count_1 - self.count_2
video.release()
def analys(self,frame_num = None):
if(frame_num == None):
frame_num = int(self.fc)
self.c_before = []
self.c_after = []
for i in range(7):
self.c_before += [ChartInfo()]
self.c_after += [ChartInfo()]
i = 0
while i < frame_num:
print(i)
self.count_1 = 0
self.count_2 = 0
b = self.analys_frame(i)
if not(b):
break
i += 3-self.count_1 - self.count_2
m = int(700-500/0.85)
for i in range(len(self.chart.chart)):
n = self.chart.chart[i][1]
if(self.chart.chart[i][2] == 6):
print(i,self.chart.chart[i])
else:
w = 960+(1115-m)/(n[1]+n[3]/2-m)*(n[0]+n[2]/2-960)
print(i,self.chart.chart[i],int(w*7/1920))
def analys_frame(self,frame_num):
if not self.movie_name.isOpened():
return False
# read the target frame
# ret reports whether the read succeeded (True/False); frame is the image data
self.movie_name.set(cv2.CAP_PROP_POS_FRAMES, frame_num)
ret, frame = self.movie_name.read()
if not(ret):
return False
# colour extraction
hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
#slider,[150~180,140~200,180~255]
#note1,[90~100,170~220,200~255]
#note2,[100~110,60~230,200~255]
#long,[25~35,60~230,180~255]
#scrach,[5~20,60~230,200~255]
#hold,[170~180,60~230,200~255]
pink_region = serch_region(hsv,[150,140,180],[170,200,255])
sky_region = serch_region(hsv,[90,170,200],[100,220,255])
blue_region = serch_region(hsv,[100,60,200],[110,230,255])
yellow_region = serch_region(hsv,[25,60,180],[35,230,255])
orange_region = serch_region(hsv,[5,180,200],[15,230,255])
red_region = serch_region(hsv,[170,60,200],[180,230,255])
red_line = serch_region(hsv,[170,200,100],[180,255,255])
# enclose each note region in a rectangle drawn in the complementary colour of that note
Slider = Note_Type([128,255,20],pink_region,'slider')
Note1 = Note_Type([0,0,255],sky_region,'note1')
Note2 = Note_Type([0,255,255],blue_region,'note2')
Long = Note_Type([255,0,0],yellow_region,'longnote')
Scrach = Note_Type([255,90,0],orange_region,'scrach')
Red = Note_Type([255,0,0],red_region,'hold')
Skill_Line = Note_Type([255,255,255],red_line,'skill_line')
note_array = [Slider,Note1,Note2,Long,Scrach,Red,Skill_Line]
# detect the notes and mark them with rectangles
for i in range(len(note_array)):
self.c_before[i].Substi(self.c_after[i].chartinfo)
self.ci.reset()
self.mark_region(frame,note_array[i],frame_num)
for j in range(len(self.ci.chartinfo)):
n = self.ci.chartinfo[j]
print(n[2],n[1].n_name)
m = int(700-500/0.85)
self.c_after[i].Substi(self.ci.chartinfo)
self.counting(self.c_before[i].chartinfo,self.c_after[i].chartinfo,i,frame_num)
if(len(self.c_before[i].chartinfo) > 0 and len(self.c_after[i].chartinfo)):
by = (1440-self.c_before[i].chartinfo[0][2][1])/(self.c_before[i].chartinfo[0][2][1]-m)*10
ay = (1440-self.c_after[i].chartinfo[0][2][1])/(self.c_after[i].chartinfo[0][2][1]-m)*10
#ay-by = -4.2
print(self.note_count)
print(len(self.chart.chart))
return True
def counting(self,sb,sa,c,f):
if(c == 0):
if(len(sb)>0):
if(len(sa) == 0):
self.note_count[c] += 1
if(len(sb[self.slider_count]) == 4):
c = 7
self.chart.add_note(f,c,sb[self.slider_count][2])
self.slider_count = 0
else:
if(self.slider_count == 0):
if(sb[0][2][1] > sa[0][2][1]+4):
self.note_count[c] += 1
if(len(sb[0]) == 4):
c = 7
self.chart.add_note(f,c,sb[0][2])
elif(sa[0][2][1] > 990):
self.slider_count = 1
self.note_count[c] += 1
if(len(sb[0]) == 4):
c = 7
self.chart.add_note(f,c,sb[0][2])
elif(self.slider_count == 1):
if(sa[0][2][1] < 990):
if(sa[0][2][1] < 800):
self.note_count[c] += 1
if(len(sb[1]) == 4):
c = 7
self.chart.add_note(f,c,sb[1][2])
self.slider_count = 0
elif(len(sb) > 1 and len(sa) == 1):
self.note_count[c] += 1
if(len(sb[1]) == 4):
c = 7
self.chart.add_note(f,c,sb[1][2])
else:
for j in range(len(sb)-1):
if(sb[j+1][2][1] > sa[1][2][1]):
self.note_count[c] += 1
if(len(sb[j+1]) == 4):
c = 7
self.chart.add_note(f,c,sb[j+1][2])
break
elif(c == 3):
if(len(sb)>0):
if(len(sa) == 0):
self.note_count[c] += len(sb)-self.long_count
for i in range(len(sb)-self.long_count):
self.chart.add_note(f,c,sb[i+self.long_count][2])
self.long_count = 0
else:
if(self.long_count == 0):
for j in range(len(sb)):
if(sb[j][2][1] > sa[0][2][1]+4):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[j][2])
elif(sa[j][2][1] > 1050):
self.long_count += 1
self.note_count[c] += 1
self.chart.add_note(f,c,sb[j][2])
else:
break
elif(self.long_count == 1):
if(sa[0][2][1] < 1050):
if(sa[0][2][1] < 900):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
self.long_count -= 1
elif(len(sb) > 1):
if(len(sa) > 1):
if(sa[1][2][1] > 1050):
self.long_count += 1
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
if not(sb[0][2][0] > sa[0][2][0] - 200 and sb[0][2][0] < sa[0][2][0] + 200):
self.note_count[c] += 2
self.chart.add_note(f,c,sb[1][2])
self.chart.add_note(f,c,sb[2][2])
elif(self.long_count == 2):
if(len(sa) > 1):
if(sa[0][2][1] < 1050):
if(sa[0][2][1] < 900):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[3][2])
self.long_count -= 1
if(sa[1][2][1] < 1050):
if(sa[1][2][1] < 900):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[2][2])
self.long_count -= 1
else:
if(sa[0][2][1] < 1050):
if(sa[0][2][1] < 900):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
self.long_count -= 1
elif(c == 5):
if(len(sb)>0):
if(len(sa) == 0):
self.note_count[c] += len(sb)-self.hold_count
for i in range(len(sb)-self.hold_count):
self.chart.add_note(f,c,sb[i+self.hold_count][2])
self.hold_count = 0
else:
if(self.hold_count == 0):
for j in range(len(sb)):
if(sb[j][2][1] > sa[0][2][1]+4):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[j][2])
elif(sa[j][2][1] > 1000):
self.hold_count += 1
self.note_count[c] += 1
self.chart.add_note(f,c,sb[j][2])
else:
break
elif(self.hold_count == 1):
if(sa[0][2][1] < 1000):
if(sa[0][2][1] < 800):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
self.hold_count -= 1
elif(len(sb) > 1):
if(len(sa) > 1):
if(sa[1][2][1] > 1000):
self.hold_count += 1
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
if not(sb[0][2][0] > sa[0][2][0] - 50 and sb[0][2][0] < sa[0][2][0] + 50):
self.note_count[c] += 2
self.chart.add_note(f,c,sb[1][2])
self.chart.add_note(f,c,sb[2][2])
elif(self.hold_count == 2):
if(len(sa) > 1):
if(sa[0][2][1] < 1000):
if(sa[0][2][1] < 800):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[3][2])
self.hold_count -= 1
if(sa[1][2][1] < 1000):
if(sa[1][2][1] < 800):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[2][2])
self.hold_count -= 1
else:
if(sa[0][2][1] < 1000):
if(sa[0][2][1] < 800):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[1][2])
self.hold_count -= 1
else:
if(len(sb) > 0 ):
if(len(sa) == 0):
if(c == 6):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[0][2])
else:
self.note_count[c] += len(sb)
for i in range(len(sb)):
self.chart.add_note(f,c,sb[i][2])
else:
for j in range(len(sb)):
if(sb[j][2][1] > sa[0][2][1]+4):
self.note_count[c] += 1
self.chart.add_note(f,c,sb[j][2])
else:
break
def save_frame(self,frame_num,result_path = None):
if not self.movie_name.isOpened():
return
# read the target frame
# ret reports whether the read succeeded (True/False); frame is the image data
self.movie_name.set(cv2.CAP_PROP_POS_FRAMES, frame_num)
ret, frame = self.movie_name.read()
if not(ret):
return np.array(0,dtype=np.int64)
# colour extraction
hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
a,b = 700,600
m = -0.61
print(hsv[a:a+10,b:b+10])
for i in range(1440):
frame[i,int(m*(i-a))+b]=[0,0,255]
for i in range(1440):
frame[i,int(-m*(i-a))+1920-b]=[0,0,255]
a,b = 700,460
m = -0.85
for i in range(1440):
if(int(m*(i-a))+b>0):
frame[i,int(m*(i-a))+b]=[0,0,255]
for i in range(1440):
if(int(-m*(i-a))+1920-b < 1920):
frame[i,int(-m*(i-a))+1920-b]=[0,0,255]
a = 1115
b = 692
c = 820
for i in range(1920):
frame[a,i]=[0,0,255]
frame[b,i]=[0,0,255]
frame[c,i]=[0,0,255]
#slider,[150~180,140~200,180~255]
#note1,[90~100,170~220,200~255]
#note2,[100~110,60~230,200~255]
#long,[25~35,60~230,180~255]
#scrach,[5~20,60~230,200~255]
#hold,[170~180,60~230,200~255]
pink_region = serch_region(hsv,[150,140,180],[170,200,255])
sky_region = serch_region(hsv,[90,170,200],[100,220,255])
blue_region = serch_region(hsv,[100,60,200],[110,230,255])
yellow_region = serch_region(hsv,[25,60,180],[35,230,255])
orange_region = serch_region(hsv,[5,180,200],[15,230,255])
red_region = serch_region(hsv,[170,60,200],[180,230,255])
red_line = serch_region(hsv,[170,200,100],[180,255,255])
# enclose each note region in a rectangle drawn in the complementary colour of that note
Slider = Note_Type([128,255,20],pink_region,'slider')
Note1 = Note_Type([0,0,255],sky_region,'note1')
Note2 = Note_Type([0,255,255],blue_region,'note2')
Long = Note_Type([255,0,0],yellow_region,'longnote')
Scrach = Note_Type([255,90,0],orange_region,'scrach')
Red = Note_Type([255,0,0],red_region,'hold')
Skill_Line = Note_Type([255,255,255],red_line,'skill_line')
note_array = [Slider,Note1,Note2,Long,Scrach,Red,Skill_Line]
# detect the notes and mark them with rectangles
for i in range(len(note_array)):
self.ci.reset()
frame = self.mark_region(frame,note_array[i],frame_num)
for j in range(len(self.ci.chartinfo)):
n = self.ci.chartinfo[j]
print(n[2],n[1].n_name)
cv2.rectangle(frame,(n[2][0],n[2][1]),(n[2][0]+n[2][2],n[2][1]+n[2][3]),n[1].color,3)
if result_path !=None:
os.makedirs(os.path.dirname(result_path),exist_ok=True)
cv2.imwrite(result_path,frame)
return frame
def print_property(self):
print(self.movie_name.isOpened())
print(self.width)
print(self.height)
print(self.fps)
print(self.fc)
def mark_region(self,img,note,frame_num):
contours, hieralky = cv2.findContours(note.region,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
# contour display
if(note.color != [255,255,255]):
# minimum detection size
min_size = 2000
for c in contours:
if cv2.contourArea(c) < min_size:
continue
x,y,w,h = cv2.boundingRect(c)
x,y,w,h = padding_position(x,y,w,h,5)
if(y > 200):
if(len(self.ci.chartinfo) > 0):
n = self.ci.chartinfo[-1]
if(n[1].n_name == 'slider' and y + h/2 > n[2][1] and y < n[2][1]+5):
self.ci.chartinfo[-1] = [frame_num,note,[x,y,w,h]]
if(x < n[2][0] or x > n[2][0] + n[2][2]):
self.ci.chartinfo[-1] += [1]
continue
if(n[1].n_name == 'longnote' and (y < n[2][1] + 10 and y > n[2][1] -10)):
if(n[2][0] - x < 400 and y > 950):
self.ci.chartinfo[-1] = [frame_num,note,[x,y,n[2][0]+n[2][3],h]]
continue
if(y < 995):
self.count_1 = 1
if(y > 550):
self.count_2 = 1
if note.n_name == 'slider' and ((y > 800 and w*h < 10000) or y > 1000):
continue
self.ci.Add_Note(frame_num,note,[x,y,w,h])
else:
min_size = 500
for c in contours:
if cv2.contourArea(c) < min_size:
continue
x,y,w,h = cv2.boundingRect(c)
x,y,w,h = padding_position(x,y,w,h,5)
if h < 30 and y > 400:
self.ci.Add_Note(frame_num,note,[x,y,w,h])
if(y < 995):
self.count_1 = 1
if(y > 550):
self.count_2 = 1
return img
def F2S(self,file_name):
m = int(700-500/0.85)
before_time = 0
s_c = []
for i in range(len(self.chart.chart)):
t = self.chart.chart[i][0]
p = self.chart.chart[i][1]
n = self.chart.chart[i][2]
if(n == 1 or n == 2 or n == 3):
b = (1440-p[1])/(p[1]-m)*10
a = (1440-1057)/(1057-m)*10
time = (t+(a-b)/4.2)/self.fps
elif(n == 0 or n == 7):
b = (1440-p[1])/(p[1]-m)*10
a = (1440-995)/(995-m)*10
time = (t+(a-b)/4.2)/self.fps
elif(n == 4 or n == 5):
b = (1440-p[1])/(p[1]-m)*10
a = (1440-1005)/(1005-m)*10
time = (t+(a-b)/4.2)/self.fps
elif(n == 6):
b = (1440-p[1])/(p[1]-m)*10
a = (1440-1057)/(1057-m)*10
time = (t+(a-b)/4.2)/self.fps
if(n == 6):
w = 7
else:
w = int((960+(1115-m)/(p[1]+p[3]/2-m)*(p[0]+p[2]/2-960))*7/1920)
if(i > 0):
if(time-s_c[-1][0] < 0.002 and s_c[-1][1] != 7 and w != 7):
time = (time+s_c[-1][0])/2
s_c[-1][0] = time
s_c += [[time,w,n]]
before_time = time
for i in range(len(s_c)):
print(s_c[i][0],s_c[i][1],s_c[i][2])
with open(file_name,'w') as f:
writer = csv.writer(f)
writer.writerows(s_c)
def release(self):
self.movie_name.release()
a = Analysis('動画/ぐるぐるDJTURN.mov')
#a.save_frame(824,'動画/テスト.jpg')
a.print_property()
#a.save_movie('動画/テスト.mp4',frame_num = None)
a.analys(frame_num = None)
a.F2S('動画/ぐるぐるDJTURN.csv')
a.release()
|
[
"nasuika1@icloud.com"
] |
nasuika1@icloud.com
|
0617378a687756f8866a77c1abb3d78b316dff95
|
43cbff554a9b7d06a761cb8e62276b6aaa5e7754
|
/src/old/text_vae_6.py
|
822a16d5dffbea8a2c27d2fe795c9d2bac4afa50
|
[] |
no_license
|
an-seunghwan/vrae
|
7d0022ec31d2acd267ec38b99072574d60fe7690
|
6a8b7f7e65b439f89dd9317da3b3ee485bcc4b33
|
refs/heads/master
| 2023-01-09T11:52:01.201800
| 2020-11-09T11:50:18
| 2020-11-09T11:50:18
| 281,103,819
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 15,796
|
py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Jul 16 14:29:42 2020
@author: anseunghwan
"""
#%%
'''
- separation of the latent space - mixture distribution
- imputing missing words = filling in the parts missing from a given sentence template,
  using words that reflect the information in z (positive, negative, and so on)
- beta (= sigma of continuous data) learning
- categorical reparametrization with gumbel softmax
- negative sampling?
- sentence interpolation
'''
#%%
import tensorflow as tf
# import tensorflow_probability as tfp
import tensorflow.keras as K
from tensorflow.keras import layers
from tensorflow.keras import preprocessing
print('TensorFlow version:', tf.__version__)
print('즉시 실행 모드:', tf.executing_eagerly())
print('available GPU:', tf.config.list_physical_devices('GPU'))
from tensorflow.python.client import device_lib
print('==========================================')
print(device_lib.list_local_devices())
tf.debugging.set_log_device_placement(False)
#%%
from tqdm import tqdm
import pandas as pd
import numpy as np
# import random
import math
# import json
import time
import re
import matplotlib.pyplot as plt
from pprint import pprint
from konlpy.tag import Okt
okt = Okt()
import os
# os.chdir('/home/jeon/Desktop/an/kakao_arena')
os.chdir('/Users/anseunghwan/Documents/uos/generating_text')
print('current directory:', os.getcwd())
from subprocess import check_output
print('=====Data list=====')
print(check_output(["ls", "./data"]).decode("utf8"))
#%% data 1
'''Bank of Korea data'''
data = pd.read_csv('./data/total_sample_labeling_fin44.csv', encoding='euc-kr')
data.head()
data.columns
sentence_idx = data['news/sentence'] == 0
data = data.loc[sentence_idx].reset_index()
#%% data 2
'''sentiment-label data'''
# use both the consumer and the corporate labels
sentiment_idx1_pos = np.array(data['소비자'] == 1) | np.array(data['소비자'] == 4)
sentiment_idx1_neg = np.array(data['소비자'] == 2) | np.array(data['소비자'] == 5)
sentiment_idx2_pos = np.array(data['기업'] == 1) | np.array(data['기업'] == 4)
sentiment_idx2_neg = np.array(data['기업'] == 2) | np.array(data['기업'] == 5)
sentence1_pos = data.loc[sentiment_idx1_pos]['content_new'].to_list()
sentence1_neg = data.loc[sentiment_idx1_neg]['content_new'].to_list()
sentence2_pos = data.loc[sentiment_idx2_pos]['content_new'].to_list()
sentence2_neg = data.loc[sentiment_idx2_neg]['content_new'].to_list()
sentence = sentence1_pos + sentence2_pos + sentence1_neg + sentence2_neg
print(len(sentence))
# label
label_ = np.zeros((len(sentence), 2))
label_[:len(sentence1_pos + sentence2_pos), 0] = 1
label_[len(sentence1_pos + sentence2_pos):, 1] = 1
#%%
'''Split the sentences again on '.' (full stops); some records contain whole articles, and this guards against that.'''
corpus_ = []
label_data = []
for i in tqdm(range(len(sentence))):
# corpus.extend([x + '.' for x in sentence[i].split('. ')])
temp = [x.strip() for x in sentence[i].split('. ')]
corpus_.extend(temp)
label_data.extend([label_[i] for _ in range(len(temp))])
#%%
def clean_korean(sent):
if type(sent) == str:
h = re.compile('[^가-힣ㄱ-ㅎㅏ-ㅣ\\s]+')
result = h.sub('', sent)
else:
result = ''
return result
#%% tokenize
p = re.compile('[가-힣]+')
corpus = []
label = []
# useful_tag = ['Noun', 'Verb', 'Adjective', 'Adverb']
for i in tqdm(range(len(corpus_))):
if type(corpus_[i] == str):
# corpus.append(['<sos>'] + [x[0] for x in okt.pos(sentence[i], stem=True) if p.match(x[0]) and len(x[0]) > 1 and x[1] in useful_tag] + ['<eos>'])
# corpus[i] = ['<sos>'] + [x[0] for x in okt.pos(temp, stem=False) if p.match(x[0]) and len(x[0]) > 1 and x[1] != 'Josa'] + ['<eos>']
temp = clean_korean(corpus_[i])
corpus.append(['<sos>'] + [x[0] for x in okt.pos(temp, stem=False) if len(x[0]) > 1 and x[1] != 'Josa'] + ['<eos>'])
label.append(label_data[i])
label = np.array(label)
#%%
vocab = set()
for i in tqdm(range(len(corpus))):
vocab.update(corpus[i])
vocab = {x:i+2 for i,x in enumerate(sorted(list(vocab)))}
vocab['<PAD>'] = 0
vocab['<UNK>'] = 1
vocab_size = len(vocab)
print(len(vocab))
num_vocab = {i:x for x,i in vocab.items()}
#%%
input_text = [0]*len(corpus)
for i in tqdm(range(len(corpus))):
input_text[i] = [vocab.get(x) for x in corpus[i]]
#%%
# choose maxlen
plt.hist([len(x) for x in corpus])
# maxlen = max(len(x) for x in input_text)
maxlen = 50
input_text = preprocessing.sequence.pad_sequences(input_text,
maxlen=maxlen,
padding='post',
value=0)
output_text = np.concatenate((input_text[:, 1:], np.zeros((len(input_text), 1))), axis=1)
#%% parameters
batch_size = 200
embedding_size = 150
latent_dim = 40
units = 100
#%% prior
M = 2 # the number of components
prior_mu = np.ones((M, latent_dim))
prior_mu[0, :] *= 2
prior_mu[1, :] *= -2
'''we set sigma for 1 globally'''
#%% encoder
x = layers.Input((maxlen))
# embedding_layer = layers.Embedding(input_dim=vocab_size,
# output_dim=embedding_size)
embedding_layer = layers.Embedding(input_dim=vocab_size,
output_dim=embedding_size,
mask_zero=True)
ex = embedding_layer(x)
encoder_lstm = layers.LSTM(units)
encoder_h = encoder_lstm(ex)
# could be generated as a list with a for loop (in case M grows later?)
mix_prob_dense = layers.Dense(M, activation='softmax')
mean_dense1 = layers.Dense(latent_dim)
log_var_dense1 = layers.Dense(latent_dim)
mean_dense2 = layers.Dense(latent_dim)
log_var_dense2 = layers.Dense(latent_dim)
mix_prob = mix_prob_dense(encoder_h)
z_mean1 = mean_dense1(encoder_h)
z_log_var1 = log_var_dense1(encoder_h)
z_mean2 = mean_dense2(encoder_h)
z_log_var2 = log_var_dense2(encoder_h)
prob_sampling = tf.random.categorical(mix_prob, 1)
chosen_idx = tf.concat((prob_sampling, tf.cast(tf.cast(tf.logical_not(tf.cast(prob_sampling, tf.bool)), tf.bool), tf.int64)), axis=1)
epsilon1 = tf.random.normal((latent_dim, ))
z1 = z_mean1 + tf.math.exp(z_log_var1 / 2) * epsilon1
epsilon2 = tf.random.normal((latent_dim, ))
z2 = z_mean2 + tf.math.exp(z_log_var2 / 2) * epsilon2
z12 = tf.concat((z1[:, tf.newaxis, :], z2[:, tf.newaxis, :]), axis=1)
z = tf.reduce_sum(tf.multiply(tf.cast(tf.tile(chosen_idx[..., tf.newaxis], (1, 1, latent_dim)), tf.float32), z12), axis=1)
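# Added sketch (not part of the original model): the header lists "categorical
# reparametrization with gumbel softmax" as a goal, while the block above picks
# a component with tf.random.categorical, which is not differentiable w.r.t.
# mix_prob. A relaxed alternative could look like this; the temperature value
# and the helper name gumbel_softmax_weights are assumptions.
def gumbel_softmax_weights(probs, temperature=0.5, eps=1e-20):
    # probs: (batch, M) mixture probabilities -> soft, differentiable one-hot weights
    gumbel = -tf.math.log(-tf.math.log(tf.random.uniform(tf.shape(probs)) + eps) + eps)
    return tf.nn.softmax((tf.math.log(probs + eps) + gumbel) / temperature, axis=-1)
# soft_w = gumbel_softmax_weights(mix_prob)                      # (batch, M)
# z_soft = tf.reduce_sum(soft_w[..., tf.newaxis] * z12, axis=1)  # (batch, latent_dim)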
#%% decoder
y = layers.Input((maxlen))
ey = embedding_layer(y)
decoder_lstm = layers.LSTM(units,
return_sequences=True)
'''for initial state, z could be reweighted using dense layer'''
reweight_h_dense = layers.Dense(units)
reweight_c_dense = layers.Dense(units)
init_h = reweight_h_dense(z)
init_c = reweight_c_dense(z)
decoder_h = decoder_lstm(ey, initial_state=[init_h, init_c])
logit_layer = layers.TimeDistributed(layers.Dense(vocab_size)) # no softmax normalizing -> logit tensor (from_logits=True)
logit = logit_layer(decoder_h)
#%% model
mixprob_vae = K.models.Model([x, y], mix_prob)
mixprob_vae.summary()
text_vae = K.models.Model([x, y], [z_mean1, z_log_var1, z_mean2, z_log_var2, logit])
text_vae.summary()
#%% decoder by case
# case1 = True
# # case1 = False
# if case1:
# '''decoder case 1: latent variable z in only given as hidden vector of LSTM'''
# y = layers.Input((maxlen))
# ey = embedding_layer(y)
# decoder_lstm = layers.LSTM(units,
# return_sequences=True)
# '''for initial state, z could be reweighted using dense layer'''
# decoder_h = decoder_lstm(ey, initial_state=[z, z])
# logit_layer = layers.TimeDistributed(layers.Dense(vocab_size))
# logit = logit_layer(decoder_h)
# text_vae = K.models.Model([x, y], [z_mean, z_log_var, z, logit])
# text_vae.summary()
# else:
# '''decoder case 2: latent variable z in given as input of decoder
# in this case, word dropout is not needed'''
# hiddens = layers.RepeatVector(maxlen)(z)
# decoder_lstm = layers.LSTM(units,
# return_sequences=True)
# '''for initial state, z could be reweighted using dense layer'''
# decoder_h = decoder_lstm(hiddens, initial_state=[z, z])
# logit_layer = layers.TimeDistributed(layers.Dense(vocab_size))
# logit = logit_layer(decoder_h)
# text_vae = K.models.Model(x, [z_mean, z_log_var, z, logit])
# text_vae.summary()
#%% loss
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def loss_fun(y, y_pred, mean_pred, log_var_pred, beta):
'''do not consider padding'''
# reconstruction loss
non_pad_count = tf.reduce_sum(tf.cast(tf.cast(y != 0, tf.bool), tf.float32), axis=1, keepdims=True)
recon_loss = tf.reduce_mean(tf.reduce_sum(tf.divide(tf.multiply(scce(y, y_pred),
tf.cast(tf.cast(y != 0, tf.bool), tf.float32)),
non_pad_count), axis=1))
# non_pad_ = np.sum(y != vocab.get('<PAD>'), axis=1)
# recon_loss = tf.zeros(())
# for i in range(len(non_pad_)):
# n = non_pad_[i]
# recon_loss += scce(y[[i], :n], y_pred[i, :n, :]) / n
# recon_loss /= len(non_pad_)
# kl-divergence loss
kl_loss = tf.reduce_mean(tf.reduce_sum(-0.5 * (1 + log_var_pred - tf.math.pow(mean_pred, 2) - tf.math.exp(log_var_pred)), axis=1))
return recon_loss, kl_loss, recon_loss + beta * kl_loss
#%% loss
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def loss_mixture_fun(y, y_pred, mean_pred1, log_var_pred1, mean_pred2, log_var_pred2, pi, beta):
'''do not consider padding'''
# reconstruction loss
non_pad_count = tf.reduce_sum(tf.cast(tf.cast(y != 0, tf.bool), tf.float32), axis=1, keepdims=True)
recon_loss = tf.reduce_mean(tf.reduce_sum(tf.divide(tf.multiply(scce(y, y_pred),
tf.cast(tf.cast(y != 0, tf.bool), tf.float32)),
non_pad_count), axis=1))
# kl-divergence loss
term1 = tf.reduce_mean(tf.reduce_sum(pi * tf.math.log(pi * M), axis=1))
kl1 = tf.reduce_sum(-0.5 * (1 + log_var_pred1 - tf.math.pow(mean_pred1 - prior_mu[0, :], 2) - tf.math.exp(log_var_pred1)), axis=1, keepdims=True)
kl2 = tf.reduce_sum(-0.5 * (1 + log_var_pred2 - tf.math.pow(mean_pred2 - prior_mu[1, :], 2) - tf.math.exp(log_var_pred2)), axis=1, keepdims=True)
kl_loss = term1 + tf.reduce_mean(tf.reduce_sum(tf.multiply(pi, tf.concat((kl1, kl2), axis=1)), axis=1))
return recon_loss, kl_loss, recon_loss + beta * kl_loss
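# Added note (not in the original source): loss_mixture_fun evaluates an ELBO of
# the form
#   L = E_q[-log p(x|z)] + beta * ( KL(q(c|x) || Uniform(M))
#         + sum_k q(c=k|x) * KL(q(z|x, c=k) || N(prior_mu[k], I)) )
# where the reconstruction term is normalised by the number of non-padding
# tokens; term1, kl1 and kl2 in the body correspond to the KL pieces above.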
#%%
'''
- kl annealing using logistic vs linear
'''
def kl_anneal(step, s, k=0.001):
# logistic
return 1 / (1 + math.exp(-k*(step - s)))
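# For comparison with the logistic schedule above (the note mentions "logistic
# vs linear"), a linear annealing schedule could be written as below. This is
# illustrative only; the cap at 1.0 and the argument names are assumptions.
def kl_anneal_linear(step, s):
    # ramp beta linearly from 0 at step 0 to 1 at step s, then hold it at 1
    return min(step / float(s), 1.0)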
#%%
optimizer = tf.keras.optimizers.Adam(0.005)
optimizer1 = tf.keras.optimizers.Adam(0.005)
#%% training
epochs = 3000
# beta = 0.1
dropout_rate = 0.5
t1 = time.time()
for epoch in range(700, epochs):
beta = kl_anneal(epoch, int(epochs/2))
if epoch % 10 == 1:
t1 = time.time()
idx = np.random.randint(0, len(input_text), batch_size) # sampling random batch -> stochasticity
input_sequence = input_text[idx][:, ::-1]
input_sequence_dropout = input_text[idx]
output_sequence = output_text[idx]
'''word dropout with UNK
-> keep the PAD and UNK word-embedding vectors as zero vectors (non-trainable)'''
non_pad = np.sum(input_sequence != vocab.get('<PAD>'), axis=1)
dropout_ = [np.random.binomial(1, dropout_rate, x-2) for x in non_pad]
dropout_index = [d * np.arange(1, x-1) for d, x in zip(dropout_, non_pad)]
for i in range(batch_size):
input_sequence_dropout[i][[d for d in dropout_index[i] if d != 0]] = vocab.get('<UNK>')
with tf.GradientTape(persistent=True) as tape:
# get output
z_mean_pred1, z_log_var_pred1, z_mean_pred2, z_log_var_pred2, sequence_pred = text_vae([input_sequence, input_sequence_dropout])
pi_hat = mixprob_vae([input_sequence, input_sequence_dropout])
# ELBO
recon_loss, kl_loss, loss = loss_mixture_fun(output_sequence, sequence_pred, z_mean_pred1, z_log_var_pred1, z_mean_pred2, z_log_var_pred2, pi_hat, beta)
# mixture probability loss
mix_loss = -tf.reduce_mean(tf.math.log(tf.reduce_sum(tf.multiply(label[idx, :], pi_hat), axis=1)))
grad = tape.gradient(loss, text_vae.weights)
optimizer.apply_gradients(zip(grad, text_vae.weights))
grad1 = tape.gradient(mix_loss, mixprob_vae.weights)
optimizer1.apply_gradients(zip(grad1, mixprob_vae.weights))
if epoch % 10 == 0:
t2 = time.time()
print('({} epoch, time: {:.3})'.format(epoch, t2-t1))
print('Text VAE loss: {:.6}, Reconstruction: {:.6}, KL: {:.6}, MIX: {:.6}'.format(loss.numpy(), recon_loss.numpy(), kl_loss.numpy(), mix_loss.numpy()))
#%%
# K.backend.clear_session()
#%% latent generation
latent_input = layers.Input((maxlen))
latent_emb = embedding_layer(latent_input)
latent_h = encoder_lstm(latent_emb)
latent_mix_prob = mix_prob_dense(latent_h)
latent_mean1 = mean_dense1(latent_h)
latent_log_var1 = log_var_dense1(latent_h)
latent_mean2 = mean_dense2(latent_h)
latent_log_var2 = log_var_dense2(latent_h)
latent_prob_sampling = tf.random.categorical(latent_mix_prob, 1)
latent_chosen_idx = tf.concat((latent_prob_sampling, tf.cast(tf.cast(tf.logical_not(tf.cast(latent_prob_sampling, tf.bool)), tf.bool), tf.int64)), axis=1)
epsilon1 = tf.random.normal((latent_dim, ))
latent_z1 = latent_mean1 + tf.math.exp(latent_log_var1 / 2) * epsilon1
epsilon2 = tf.random.normal((latent_dim, ))
latent_z2 = latent_mean2 + tf.math.exp(latent_log_var2 / 2) * epsilon2
latent_z12 = tf.concat((latent_z1[:, tf.newaxis, :], latent_z2[:, tf.newaxis, :]), axis=1)
latent_z = tf.reduce_sum(tf.multiply(tf.cast(tf.tile(latent_chosen_idx[..., tf.newaxis], (1, 1, latent_dim)), tf.float32), latent_z12), axis=1)
latent_model = K.models.Model(latent_input, latent_z)
latent_model.summary()
#%% inference model
inf_input = layers.Input((maxlen))
inf_hidden = layers.Input((latent_dim))
inf_emb = embedding_layer(inf_input)
latent_init_h = reweight_h_dense(inf_hidden)
latent_init_c = reweight_c_dense(inf_hidden)
inf_output = logit_layer(decoder_lstm(inf_emb, initial_state=[latent_init_h, latent_init_c]))
inference_model = K.models.Model([inf_input, inf_hidden], inf_output)
inference_model.summary()
#%% interpolation & inference
j1 = 2
j2 = 3
print('===input===')
print(' '.join([num_vocab.get(x) for x in input_text[j1, :] if x != 0]))
print(' '.join([num_vocab.get(x) for x in input_text[j2, :] if x != 0]))
z1 = latent_model(input_text[[j1], :])
z2 = latent_model(input_text[[j2], :])
# interpolation
z_inter = z1
for v in np.linspace(0, 1, 7):
z_inter = np.vstack((z_inter, v * z1 + (1 - v) * z2))
z_inter = np.vstack((z_inter, z2))
val_seq = np.zeros((len(z_inter), maxlen))
val_seq[:, 0] = vocab.get('<sos>')
result = ['']*len(z_inter)
for k in range(len(result)):
for t in range(1, maxlen):
pred = inference_model([val_seq[[k], :], z_inter[[k], :]])
pred_id = tf.argmax(pred[0][t-1]).numpy()
result[k] += num_vocab.get(pred_id) + ' '
if num_vocab.get(pred_id) == '<eos>':
break
val_seq[:, t] = pred_id
print('===output===')
pprint(result)
#%%
|
[
"dpeltms79@gmail.com"
] |
dpeltms79@gmail.com
|
aab291f1cacaafadba65a903047532155d75d8f5
|
71ab28f41329457cbba8bd749c7e44768880d964
|
/experiments/suite/data/sachs/load_data.py
|
2c5abed5e92ba52b7c7a89eb59b7ed86879e96d2
|
[
"Apache-2.0"
] |
permissive
|
diadochos/incorporating-causal-graphical-prior-knowledge-into-predictive-modeling-via-simple-data-augmentation
|
210a3382445d9e11a1594e6b26c788f21b8089bd
|
11eb7b4bb9c39672ece6177e321f63ce205e0307
|
refs/heads/main
| 2023-05-30T23:10:49.323396
| 2021-06-14T06:39:54
| 2021-06-14T06:39:54
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,735
|
py
|
import bnlearn
import pandas as pd
# Type hinting
from causal_data_augmentation.api_support.typing import GraphType
VARIABLES_REMOVED = []
def get_predicted_variable_name():
return 'PKA'
def load_data(path: str) -> pd.DataFrame:
DATA_RENAME = {
'raf': 'Raf',
'mek': 'Mek',
'plc': 'Plcg',
'pip2': 'PIP2',
'pip3': 'PIP3',
'erk': 'Erk',
'akt': 'Akt',
'pka': 'PKA',
'pkc': 'PKC',
'p38': 'P38',
'jnk': 'Jnk'
}
data = pd.read_csv(path, delimiter='\t')
data = data.rename(columns=DATA_RENAME).drop(VARIABLES_REMOVED, axis=1)
return data
def _bnlearn_adjmat_to_edge_tuples(adjmat: pd.DataFrame):
edge_tuples = []
for rowname in adjmat.index.values:
for colname in adjmat.columns:
if adjmat[colname][rowname]:
edge_tuples.append((rowname, colname))
return edge_tuples
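# Illustrative note (not from the original repository): given a boolean adjacency
# DataFrame whose rows are parents and columns are children, e.g.
#   adjmat = pd.DataFrame([[False, True], [False, False]],
#                         index=['PKA', 'Raf'], columns=['PKA', 'Raf'])
# _bnlearn_adjmat_to_edge_tuples(adjmat) returns [('PKA', 'Raf')], i.e. the
# (row -> column) edge direction that load_bif below passes on as directed_edges.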
def load_bif(path: str) -> GraphType:
"""
Params:
path : path to BIF file.
"""
is_DAG = True
verbose = 0
bnlearn_model = bnlearn.import_DAG(path, CPD=is_DAG, verbose=verbose)
bayesian_model, adjmat = bnlearn_model['model'], bnlearn_model['adjmat']
adjmat = adjmat.drop(VARIABLES_REMOVED, axis=1).drop(VARIABLES_REMOVED,
axis=0)
vertices = adjmat.columns
directed_edges = _bnlearn_adjmat_to_edge_tuples(adjmat)
bi_edges = []
graph = vertices, directed_edges, bi_edges
return graph
def load_consensus_graph(_=None) -> GraphType:
"""Load graph."""
vertices = [
'Raf', 'Mek', 'Plcg', 'PIP2', 'PIP3', 'Erk', 'Akt', 'PKA', 'PKC',
'P38', 'Jnk'
]
directed_edges = [('Plcg', 'PIP2'), ('Plcg', 'PKC'), ('PIP2', 'PKC'),
('PIP3', 'PIP2'), ('PIP3', 'Plcg'), ('PIP3', 'Akt'),
('PKA', 'Akt'), ('PKA', 'Erk'), ('PKA', 'Mek'),
('PKA', 'Raf'), ('PKA', 'Jnk'), ('PKA', 'P38'),
('PKC', 'Mek'), ('PKC', 'Raf'), ('PKC', 'Jnk'),
('PKC', 'P38'), ('Mek', 'Erk')]
bi_edges = []
vertices = [v for v in vertices if v not in VARIABLES_REMOVED]
directed_edges = [
e for e in directed_edges
if (e[0] not in VARIABLES_REMOVED) and (e[1] not in VARIABLES_REMOVED)
]
bi_edges = [
e for e in bi_edges
if (e[0] not in VARIABLES_REMOVED) and (e[1] not in VARIABLES_REMOVED)
]
graph = vertices, directed_edges, bi_edges
return graph
def load_mooij_heskes_2013_graph(_=None) -> GraphType:
"""Load graph."""
vertices = [
'Raf', 'Mek', 'Plcg', 'PIP2', 'PIP3', 'Erk', 'Akt', 'PKA', 'PKC',
'P38', 'Jnk'
]
directed_edges = [
('PIP2', 'Plcg'),
('PIP3', 'PIP2'),
('Akt', 'Erk'),
('PKA', 'Akt'),
('PKA', 'Mek'),
('PKA', 'Jnk'),
('PKA', 'P38'),
('PKC', 'PKA'),
('PKC', 'Akt'),
('PKC', 'PIP2'),
('PKC', 'Plcg'),
('PKC', 'Mek'),
('PKC', 'Raf'),
('PKC', 'Jnk'),
('PKC', 'P38'),
('Mek', 'Raf'),
('Mek', 'Erk'),
]
bi_edges = []
vertices = [v for v in vertices if v not in VARIABLES_REMOVED]
directed_edges = [
e for e in directed_edges
if (e[0] not in VARIABLES_REMOVED) and (e[1] not in VARIABLES_REMOVED)
]
bi_edges = [
e for e in bi_edges
if (e[0] not in VARIABLES_REMOVED) and (e[1] not in VARIABLES_REMOVED)
]
graph = vertices, directed_edges, bi_edges
return graph
if __name__ == '__main__':
data = load_data("main.result.ourvarrs/1. cd3cd28.txt")
graph = load_bif("sachs.bif")
|
[
"takeshi.diadochos@gmail.com"
] |
takeshi.diadochos@gmail.com
|
fac8b4f5546ab26d53970a5f7f3ca0643c0446b0
|
c3297a96e0dacadba5fdd5c9b30a06794f6fd5d7
|
/base/urls.py
|
825610661687b46622a959dcfd1af8e7409bef5f
|
[] |
no_license
|
craig-r-w/website
|
bac54f3598f270ea6812d2ddac9ae04496ce1f96
|
a73692da78bed5a67616df9785fc8eec5358128c
|
refs/heads/master
| 2023-08-31T05:33:19.951678
| 2021-07-07T15:30:54
| 2021-07-07T15:30:54
| 268,131,854
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,826
|
py
|
"""base URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from blog import views
from django.contrib.auth import views as auth_views
urlpatterns = [
path('admin/', admin.site.urls),
path('', views.view_published, name='view_published'), # Direct to the published posts view when no url directory is given.
path('post/<int:primary_key>/', views.view_post, name='view_post'), # Direct to the selected post.
path('post/new/', views.create_post, name='create_post'), # Create a new Post.
path('post/edit/<int:primary_key>/', views.edit_post, name='edit_post'), # Direct to the selected post.
path('post/delete/<int:primary_key>/', views.delete_post, name='delete_post'), # Delete the selected post.
path('posts/all', views.view_all, name='view_all'), # Display all posts.
path('posts/unpublished', views.view_unpublished, name='view_unpublished'), # Display unpublished posts.
path('posts/published', views.view_published, name='view_published'), # Display published posts.
path('login/', auth_views.LoginView.as_view(template_name='blog/login.html'), name='login'),
path('logout/', auth_views.LogoutView.as_view(next_page='/'), name='logout'),
]
|
[
"craigwilbourne@zoho.com"
] |
craigwilbourne@zoho.com
|
16565fdd13096dd6115c8dc833e71a7cfc96815b
|
6d9af7eade7ddc239ee6839a3766cb40c27f619d
|
/src/main.py
|
c3ffaf19271d395c270b629dc07aaefc59bdb1a0
|
[] |
no_license
|
lmj1029123/SingleNN
|
24dfe40c8d920e2a777742c885907c27484976f4
|
701752c3378e537387fa0dc2b410aec44577b7a3
|
refs/heads/master
| 2021-11-27T04:42:46.229290
| 2021-11-09T00:40:10
| 2021-11-09T00:40:10
| 246,651,678
| 7
| 2
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 10,699
|
py
|
import sys
sys.path.append("./SimpleNN")
import os
import shutil
from ase.db import connect
import torch
from ContextManager import cd
from preprocess import train_test_split, train_val_split, get_scaling, CV
from preprocess import snn2sav
from NN import MultiLayerNet
from train import train, evaluate
from fp_calculator import set_sym, calculate_fp
import pickle
is_train = True
is_transfer = False
is_force = True
if is_train and is_transfer:
raise ValueError('train and transfer could not be true at the same time.')
##################################################################################
#Hyperparameters
##################################################################################
E_coeff = 100
if is_force:
F_coeff = 1
else:
F_coeff = 0
val_interval = 10
n_val_stop = 10
epoch = 3000
opt_method = 'lbfgs'
if opt_method == 'lbfgs':
history_size = 100
lr = 1
max_iter = 10
line_search_fn = 'strong_wolfe'
SSE = torch.nn.MSELoss(reduction='sum')
SAE = torch.nn.L1Loss(reduction='sum')
convergence = {'E_cov':0.0005,'F_cov':0.005}
# min_max will scale fingerprints to (0,1)
fp_scale_method = 'min_max'
e_scale_method = 'min_max'
test_percent = 0.2
# Percentage from train+val
val_percent = 0.2
# Training model configuration
SEED = [1,2,3,4,5]
#n_nodes = [20,20]
#activations = [torch.nn.Tanh(), torch.nn.Tanh()]
n_nodes = []
activations = []
lr = 1
hp = {'n_nodes': n_nodes, 'activations': activations, 'lr': lr}
####################################################################################################
# Configuration for train
####################################################################################################
if is_train:
# The Name of the training
Name = f'linear_Ge_e'
for seed in SEED:
if not os.path.exists(Name+f'-{seed}'):
os.makedirs(Name+f'-{seed}')
dbfile = f'./db/Ge.db'
db = connect(dbfile)
elements = ['Li', 'Si', 'Ni', 'Cu', 'Ge', 'Mo']
nelem = len(elements)
# This is the energy of the metal in its ground state structure
#if you don't know the energy of the ground state structure,
# you can set it to None
element_energy = torch.tensor([-1.90060294,-10.84460345/2,-5.51410074,-3.71807396,-8.94730881/2,-10.96382467])
# Atomic number
#weights = [3, 14, 28, 29, 32, 42]
# Allen electronegativity
weights = [0.912,1.916,1.88,1.85,1.994,1.47]
# Covalent radii
#weights = [1.28,1.11,1.24,1.32,1.2,1.54]
Gs = [22,24]
cutoff = 6.0
g2_etas = [0.001, 0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.5]
#g2_etas = [0, 0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05, 0.07, 0.1]
g2_Rses = [0.0]
g4_etas=[0.01]
g4_zetas=[1.0, 4.0]
g4_lambdas=[-1.0, 1.0]
sym_params = [Gs, cutoff, g2_etas, g2_Rses, g4_etas, g4_zetas, g4_lambdas, elements, weights, element_energy]
params_set = set_sym(elements, Gs, cutoff,
g2_etas=g2_etas, g2_Rses=g2_Rses,
g4_etas=g4_etas, g4_zetas = g4_zetas,
g4_lambdas= g4_lambdas, weights=weights)
N_sym = params_set[elements[0]]['num']
####################################################################################################
# Configuration for transfer
####################################################################################################
if is_transfer:
source_Name = 'combined_noNi_e'
# The Name of the training
Name = f'combined_Ni_e'
for seed in SEED:
if not os.path.exists(Name+f'-{seed}'):
os.makedirs(Name+f'-{seed}')
dbfile = f'./db/Ni.db'
db = connect(dbfile)
elements = ['Li', 'Si', 'Ni', 'Cu', 'Ge', 'Mo']
nelem = len(elements)
# This is the energy of the metal in its ground state structure
#if you don't know the energy of the ground state structure,
# you can set it to None
element_energy = torch.tensor([-1.90060294,-10.84460345/2,-5.51410074,-3.71807396,-8.94730881/2,-10.96382467])
# Atomic number
#weights = [3, 14, 28, 29, 32, 42]
# Allen electronegativity
weights = [0.912,1.916,1.88,1.85,1.994,1.47]
# Covalent radii
#weights = [1.28,1.11,1.24,1.32,1.2,1.54]
####################################################################################################
# Train
####################################################################################################
if is_train:
for seed in SEED:
# This use the context manager to operate in the data directory
with cd(Name+f'-{seed}'):
pickle.dump(sym_params, open("sym_params.sav", "wb"))
logfile = open('log.txt','w+')
resultfile = open('result.txt','w+')
if os.path.exists('test.sav'):
logfile.write('Did not calculate symfunctions.\n')
else:
data_dict = snn2sav(db, Name, elements, params_set,
element_energy=element_energy)
train_dict = train_test_split(data_dict,1-test_percent,seed=seed)
train_val_split(train_dict,1-val_percent,seed=seed)
logfile.flush()
train_dict = torch.load('final_train.sav')
val_dict = torch.load('final_val.sav')
test_dict = torch.load('test.sav')
scaling = get_scaling(train_dict, fp_scale_method, e_scale_method)
n_nodes = hp['n_nodes']
activations = hp['activations']
lr = hp['lr']
model = MultiLayerNet(N_sym, n_nodes, activations, nelem, scaling=scaling)
if opt_method == 'lbfgs':
optimizer = torch.optim.LBFGS(model.parameters(), lr=lr,
max_iter=max_iter, history_size=history_size,
line_search_fn=line_search_fn)
results = train(train_dict, val_dict,
model,
opt_method, optimizer,
E_coeff, F_coeff,
epoch, val_interval,
n_val_stop,
convergence, is_force,
logfile)
[loss, E_MAE, F_MAE, v_loss, v_E_MAE, v_F_MAE] = results
test_results = evaluate(test_dict, E_coeff, F_coeff, is_force)
[test_loss, test_E_MAE, test_F_MAE] =test_results
resultfile.write(f'Hyperparameter: n_nodes = {n_nodes}, activations = {activations}, lr = {lr}\n')
resultfile.write(f'loss = {loss}, E_MAE = {E_MAE}, F_MAE = {F_MAE}.\n')
resultfile.write(f'v_loss = {v_loss}, v_E_MAE = {v_E_MAE}, v_F_MAE = {v_F_MAE}.\n')
resultfile.write(f'test_loss = {test_loss}, test_E_MAE = {test_E_MAE}, test_F_MAE = {test_F_MAE}.\n')
logfile.close()
resultfile.close()
####################################################################################################
# Transfer
####################################################################################################
if is_transfer:
for seed in SEED:
# This use the context manager to operate in the data directory
with cd(source_Name+f'-{seed}'):
model = torch.load('best_model')
sym_params = pickle.load(open( "sym_params.sav", "rb" ))
[Gs, cutoff, g2_etas, g2_Rses, g4_etas, g4_zetas, g4_lambdas, _, _, _]=sym_params
sym_params = [Gs, cutoff, g2_etas, g2_Rses, g4_etas, g4_zetas, g4_lambdas, elements, weights, element_energy]
params_set = set_sym(elements, Gs, cutoff,
g2_etas=g2_etas, g2_Rses=g2_Rses,
g4_etas=g4_etas, g4_zetas = g4_zetas,
g4_lambdas= g4_lambdas, weights=weights)
N_sym = params_set[elements[0]]['num']
with cd(Name+f'-{seed}'):
pickle.dump(sym_params, open("sym_params.sav", "wb"))
logfile = open('log.txt','w+')
resultfile = open('result.txt','w+')
if os.path.exists('test.sav'):
logfile.write('Did not calculate symfunctions.\n')
else:
data_dict = snn2sav(db, Name, elements, params_set,
element_energy=element_energy)
train_dict = train_test_split(data_dict,1-test_percent,seed=seed)
train_val_split(train_dict,1-val_percent,seed=seed)
logfile.flush()
train_dict = torch.load('final_train.sav')
val_dict = torch.load('final_val.sav')
test_dict = torch.load('test.sav')
#n_nodes = hp['n_nodes']
#activations = hp['activations']
lr = hp['lr']
for param in model.parameters():
param.requires_grad = False
H = model.net[-1].in_features
model.net[-1] = torch.nn.Linear(H, nelem)
trainable_params = filter(lambda p: p.requires_grad, model.parameters())
if opt_method == 'lbfgs':
optimizer = torch.optim.LBFGS(trainable_params, lr=lr,
max_iter=max_iter, history_size=history_size,
line_search_fn=line_search_fn)
results = train(train_dict, val_dict,
model,
opt_method, optimizer,
E_coeff, F_coeff,
epoch, val_interval,
n_val_stop,
convergence, is_force,
logfile)
[loss, E_MAE, F_MAE, v_loss, v_E_MAE, v_F_MAE] = results
test_results = evaluate(test_dict, E_coeff, F_coeff, is_force)
[test_loss, test_E_MAE, test_F_MAE] =test_results
resultfile.write(f'Hyperparameter: n_nodes = {n_nodes}, activations = {activations}, lr = {lr}\n')
resultfile.write(f'loss = {loss}, E_MAE = {E_MAE}, F_MAE = {F_MAE}.\n')
resultfile.write(f'v_loss = {v_loss}, v_E_MAE = {v_E_MAE}, v_F_MAE = {v_F_MAE}.\n')
resultfile.write(f'test_loss = {test_loss}, test_E_MAE = {test_E_MAE}, test_F_MAE = {test_F_MAE}.\n')
logfile.close()
resultfile.close()
|
[
"mingjie1@andrew.cmu.edu"
] |
mingjie1@andrew.cmu.edu
|
df64e6c89083f39d1c9f7b60839bf17d8a58b3e6
|
1a258306d04da48964c9b10b31d543d10856e2e8
|
/DealWithTables/test.py
|
7233bd1a3b7c55b0475df3fc5daeb58e16f8f36d
|
[] |
no_license
|
DIAOZHAFENG/simplework
|
1485727772908642d2cc4712e3afda2a3a499f3a
|
1b9cf12f0497f118bca9e633b4dab279039fa74e
|
refs/heads/master
| 2021-08-24T00:46:24.708984
| 2017-12-07T09:33:12
| 2017-12-07T09:33:12
| 113,014,080
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,411
|
py
|
# -*- coding: utf-8 -*-
import time
# from db_inconnect import MSSQL
from servers import get_info
get_ping_tables = '''
SELECT Name FROM SysObjects Where XType='U' AND Name LIKE 'UserPingListTable_%' ORDER BY Name
'''
mark_test_ip = '''
update ip set Name = '{name}' where IpAddress = '{ip}'
'''
get_name = '''
select Name from ip where IpAddress = '{ip}'
'''
ips = '''115.159.158.220;101.226.247.79;183.129.141.83;116.211.92.26;118.123.240.10;113.106.98.174;125.76.242.76;119.188.39.132;125.46.49.74;111.206.162.204;123.56.177.90;139.129.193.189;120.27.142.56;60.191.12.142;222.73.235.79;115.238.100.105;123.206.26.215'''
if __name__ == '__main__':
# i = 0
# now = time.time()
# db = MSSQL()
# while i < 1000:
# for row in db.query('select * from ip where name = \'阿里北京\''):
# print(row[1].encode('latin1').decode('gbk'))
# i += 1
# print time.time() - now
req = '''/SubmitUserPingListInterface/?command=submit&userid=389506263&username=%3F%3F%3F&ipaddress=101.226.247.79&pings=1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000;1000'''
info = get_info(req)
print info
|
[
"513414712@qq.com"
] |
513414712@qq.com
|
1f969d32266a2a1461f90074b270393360e66605
|
8a2e7242846a6d6d95f8d150738d5a7a9c8f96d7
|
/main/migrations/0023_order_status.py
|
e9f055d53fd6db443abf301c6599e278a21e8fae
|
[] |
no_license
|
sidharth1017/Dreammarket
|
88ccb979014cc16469ec0f534734d415d96cb75e
|
52c03aeabd454d948b802613865197864f8d1aaf
|
refs/heads/master
| 2023-07-31T09:27:42.119763
| 2021-09-14T18:56:10
| 2021-09-14T18:56:10
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 370
|
py
|
# Generated by Django 3.1.7 on 2021-03-21 11:33
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0022_order'),
]
operations = [
migrations.AddField(
model_name='order',
name='status',
field=models.BooleanField(default=False),
),
]
|
[
"sidharthv605@gmail.com"
] |
sidharthv605@gmail.com
|
f7529a6c09b46c2d7f94c5330f76f204993ebe4b
|
3ae3489e63de992d504b6bf80e49469c195aa0d0
|
/mailmachine/mail.py
|
9166f0775f9c2e27264c2c6f16d6c48af121dceb
|
[] |
no_license
|
paluh/mailmachine
|
81e218cf4b3352de266cb9685de02fd1e00d7f14
|
e3ad8a70b67c140d146cfc44c475a68278dc16ea
|
refs/heads/master
| 2021-01-17T13:12:42.779596
| 2020-03-19T20:16:12
| 2020-03-19T20:16:55
| 12,365,379
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,697
|
py
|
from __future__ import absolute_import
from email.utils import formatdate
import email_message
import time
def enqueue(mail_queue, subject, body, from_email, recipients, alternatives=None,
attachments=None):
mail_queue.put(subject=subject, body=body, from_email=from_email,
recipients=recipients, alternatives=alternatives,
attachments=attachments)
def send(connection, subject, body, from_email, recipients, alternatives=None, attachments=None):
messages = _build_messages(subject, body, from_email, recipients, alternatives, attachments)
for from_email, recipients, msg in messages:
connection.sendmail(from_email, recipients, msg.encode('utf-8') if isinstance(msg, unicode) else msg)
def _build_messages(subject, body, from_email, recipients, alternatives=None, attachments=None):
headers = {
'Date': formatdate(int(time.time()))
}
messages = []
attachments = attachments or []
for recipient in recipients:
message = email_message.EmailMultiAlternatives(to=[recipient], alternatives=alternatives,
headers=headers, subject=subject, body=body,
from_email=from_email, encoding='utf-8')
for attachment in attachments:
message.attach(*attachment)
fe = email_message.sanitize_address(message.from_email, message.encoding)
recipients = [email_message.sanitize_address(addr, message.encoding) for addr in message.recipients()]
messages.append((fe, recipients,
message.message().as_string()))
return messages
|
[
"paluho@gmail.com"
] |
paluho@gmail.com
|
dc9f10739fa6306a29577835e9d7d18e3a409cc7
|
10b3f8b1bb2d43a053558e2974b1190ec5af9ab3
|
/test/functional/feature_loadblock.py
|
1078f70c1c56018d1ce056e74ae5a3588156a46e
|
[
"MIT"
] |
permissive
|
Satoex/Sato
|
ff4683226c2cedb14203a86af68ae168e3c45400
|
fda51ccc241ca426e838e1ba833c7eea26f1aedd
|
refs/heads/master
| 2022-07-27T23:30:32.734477
| 2022-01-29T17:44:00
| 2022-01-29T17:44:00
| 346,001,467
| 6
| 8
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,866
|
py
|
#!/usr/bin/env python3
# Copyright (c) 2017-2019 The Bitcoin Core developers
# Copyright (c) 2017-2020 The Sato Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""
Test loadblock option
Test the option to start a node with the option loadblock which loads
a serialized blockchain from a file (usually called bootstrap.dat).
To generate that file this test uses the helper scripts available
in contrib/linearize.
"""
import configparser
import os
import subprocess
import sys
import tempfile
import urllib
from test_framework.test_framework import SatoTestFramework
from test_framework.util import assert_equal, wait_until
class LoadblockTest(SatoTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 2
def run_test(self):
self.nodes[1].setnetworkactive(state=False)
self.nodes[0].generate(100)
# Parsing the url of our node to get settings for config file
data_dir = self.nodes[0].datadir
node_url = urllib.parse.urlparse(self.nodes[0].url)
cfg_file = os.path.join(data_dir, "linearize.cfg")
bootstrap_file = os.path.join(self.options.tmpdir, "bootstrap.dat")
genesis_block = self.nodes[0].getblockhash(0)
blocks_dir = os.path.join(data_dir, "regtest", "blocks")
hash_list = tempfile.NamedTemporaryFile(dir=data_dir,
mode='w',
delete=False,
encoding="utf-8")
self.log.info("Create linearization config file")
with open(cfg_file, "a", encoding="utf-8") as cfg:
cfg.write("datadir={}\n".format(data_dir))
cfg.write("rpcuser={}\n".format(node_url.username))
cfg.write("rpcpassword={}\n".format(node_url.password))
cfg.write("port={}\n".format(node_url.port))
cfg.write("host={}\n".format(node_url.hostname))
cfg.write("output_file={}\n".format(bootstrap_file))
cfg.write("max_height=100\n")
cfg.write("netmagic=43524f57\n")
cfg.write("input={}\n".format(blocks_dir))
cfg.write("genesis={}\n".format(genesis_block))
cfg.write("hashlist={}\n".format(hash_list.name))
# Get the configuration file to find src and linearize
config = configparser.ConfigParser()
if not self.options.configfile:
self.options.configfile = os.path.abspath(os.path.join(os.path.dirname(__file__), "../config.ini"))
config.read_file(open(self.options.configfile))
base_dir = config["environment"]["SRCDIR"]
linearize_dir = os.path.join(base_dir, "contrib", "linearize")
self.log.info("Run linearization of block hashes")
linearize_hashes_file = os.path.join(linearize_dir, "linearize-hashes.py")
subprocess.run([sys.executable, linearize_hashes_file, cfg_file],
stdout=hash_list,
check=True)
self.log.info("Run linearization of block data")
linearize_data_file = os.path.join(linearize_dir, "linearize-data.py")
subprocess.run([sys.executable, linearize_data_file, cfg_file],
check=True)
self.log.info("Restart second, unsynced node with bootstrap file")
self.stop_node(1)
self.start_node(1, ["-loadblock=" + bootstrap_file])
wait_until(lambda: self.nodes[1].getblockcount() == 100, err_msg="Wait for block count == 100")
assert_equal(self.nodes[1].getblockchaininfo()['blocks'], 100)
assert_equal(self.nodes[0].getbestblockhash(), self.nodes[1].getbestblockhash())
if __name__ == '__main__':
LoadblockTest().main()
|
[
"78755872+Satoex@users.noreply.github.com"
] |
78755872+Satoex@users.noreply.github.com
|
ac16325e32d04380008cb982641765605f50d959
|
9ac405635f3ac9332e02d0c7803df757417b7fee
|
/bandas_eurobelt/migrations/0013_auto_20190801_2046.py
|
42051039e8faed86b51d878b396193648a4a306b
|
[] |
no_license
|
odecsarrollo/07_intranet_proyectos
|
80af5de8da5faeb40807dd7df3a4f55f432ff4c0
|
524aeebb140bda9b1bf7a09b60e54a02f56fec9f
|
refs/heads/master
| 2023-01-08T04:59:57.617626
| 2020-09-25T18:01:09
| 2020-09-25T18:01:09
| 187,250,667
| 0
| 0
| null | 2022-12-30T09:36:37
| 2019-05-17T16:41:35
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 409
|
py
|
# Generated by Django 2.2 on 2019-08-02 01:46
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('bandas_eurobelt', '0012_auto_20190801_1806'),
]
operations = [
migrations.AlterModelOptions(
name='bandaeurobelt',
options={'permissions': [('list_bandaeurobelt', 'Can see list bandas eurobelt')]},
),
]
|
[
"fabio.garcia.sanchez@gmail.com"
] |
fabio.garcia.sanchez@gmail.com
|
3db904de747b7d0704639a3328746010003fc72d
|
58f8ac8ffec2d8c0dd561452d6335bb344707a5e
|
/venv/bin/django-admin.py
|
e3b39416210acd2d09d4766f302aca92fd9750d8
|
[] |
no_license
|
Sheikh2Imran/corona19-graphQL
|
e255ed6fabf5f5044298edf0e3deb0ee1383656f
|
927d6771001dfac038cb61501cc81af5919709a7
|
refs/heads/master
| 2022-11-25T03:16:26.605685
| 2020-07-24T17:41:08
| 2020-07-24T17:41:08
| 253,882,941
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 164
|
py
|
#!/Users/ergoventuresltd/Desktop/corona19/venv/bin/python
from django.core import management
if __name__ == "__main__":
management.execute_from_command_line()
|
[
"imranjustcse@gmail.com"
] |
imranjustcse@gmail.com
|
3a88f0f40fb307b48c95ea8a48095acc6f6353b8
|
99bdfce39152daa7f1d6088be4e57f7962cb14b1
|
/notifications/sms.py
|
d4933f7fa447575a798fa6da097454a66aef43cf
|
[] |
no_license
|
gusevartemasd/mypplanner-master
|
94f67142a57b7fc93e408eb256efb2f5b6bf2144
|
44a88e22b1eb4a1d789bee8811d06000078ea9b3
|
refs/heads/master
| 2020-04-23T15:20:42.876205
| 2019-02-18T10:19:30
| 2019-02-18T10:19:30
| 171,261,989
| 2
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 581
|
py
|
from urllib.parse import urlencode
import requests
from django.conf import settings
from django.template.loader import get_template
def send_sms(phone, template, context):
tpl_ns = '/'.join(['sms', template])
text = get_template('/'.join([tpl_ns, 'template.txt']))
sms_content = text.render(context)
params = {
'login': settings.SMSC_LOGIN,
'psw': settings.SMSC_PASSWORD,
'charset': 'utf-8',
'phones': phone,
'mes': sms_content,
}
url = 'https://smsc.ru/sys/send.php?' + urlencode(params)
requests.get(url)
|
[
"gusevartemasd@gmail.com"
] |
gusevartemasd@gmail.com
|
dc8f95ac89ae5ed51bc0323cc90278c11846444a
|
d77fb68b1d5e3af068124c8c5e5af25207ef12f2
|
/Python14期课上代码(day1-day30)/day29/PerfectCRM/kingadmin/urls.py
|
2fd905cae29981c020857b0b9062c99896b9aabb
|
[] |
no_license
|
sy106/s13
|
f4c2645e872fc4e60c4ac64776ba10ff97a6db8f
|
6371f3a782cf7292216dfb973741556c69513338
|
refs/heads/master
| 2020-07-20T18:56:40.615723
| 2018-02-02T09:32:20
| 2018-02-02T09:33:34
| 65,897,826
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 818
|
py
|
"""PerfectCRM URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url,include
from kingadmin import views
urlpatterns = [
url(r'^$', views.app_index),
url(r'^(\w+)/(\w+)/$', views.table_data_list),
]
|
[
"sy106@126.com"
] |
sy106@126.com
|
408ab2c050d1804ec19a89d53647617839d81fdd
|
c792b076cdf8c943c344d90b21817dd501c165ab
|
/programmers/Level2/후보키.py
|
8d42a17989ea96cbf6a6abe95e73dfc8efb0ee44
|
[] |
no_license
|
Jdoublee/CodingTestPractice
|
d68afa38e64de67aa53ab8c6569e07e7b310a83d
|
83eb2b84f63d55808a5e9b014e023b72bf4a4e9e
|
refs/heads/master
| 2023-06-02T16:48:52.913402
| 2021-06-16T13:34:40
| 2021-06-16T13:34:40
| 290,072,409
| 6
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,170
|
py
|
from itertools import combinations # combinations
def solution(relation):
answer = 0
col = len(relation[0])
row = len(relation)
rem = [i for i in range(col)]
res = []
i = 1
while i <= col:
combs = list(combinations(rem, i)) # enumerate every combination of 1 to col columns -> feasible because the input is small enough to stay within the time limit
for comb in combs:
checklist = []
# minimality check
flag = True
for r in res:
if set(r) == set(comb).intersection(set(r)): # subset test against an accepted key; the operands must be sets
flag = False
break
if not flag:
continue
for r in range(row):
tmp = ''
for c in comb:
tmp += relation[r][c]
checklist.append(tmp)
if len(set(checklist)) == row: # uniqueness check
answer += 1
res.append(tuple(comb))
i += 1
return answer
# revisit this problem later
# see the solutions that use bit operations instead; a sketch follows below
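# Sketch of the bitmask approach the comment above points to (illustrative, not
# the original author's code). Each column subset is an integer mask; minimality
# becomes the subset test (key & cand) == key against already accepted keys.
# The function name solution_bitmask is an assumption.
def solution_bitmask(relation):
    col, row = len(relation[0]), len(relation)
    keys = []
    # visit masks in increasing popcount order so smaller keys are accepted first
    for cand in sorted(range(1, 1 << col), key=lambda m: bin(m).count('1')):
        if any((key & cand) == key for key in keys):  # an accepted key is a subset -> not minimal
            continue
        seen = {tuple(relation[r][c] for c in range(col) if cand & (1 << c)) for r in range(row)}
        if len(seen) == row:  # projection is unique across all rows
            keys.append(cand)
    return len(keys)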
|
[
"hyewon3429@gmail.com"
] |
hyewon3429@gmail.com
|
e57b5ff3fa92a86d4242fa3d5b53e2757533c7a0
|
253363653815dbe51ffb9cc8f7b470bb1b4e7f90
|
/thermal_models/k_spectral.py
|
9f3dc8742705400bbc30ba2504a03a3cb0a8d2f0
|
[] |
no_license
|
RamyaGuru/Single-MultiBandModel
|
767ae3900f2591086239110c496f9fc382cbf5ea
|
bdd29f38c6668ac44419eac0cf7233d45f1f084b
|
refs/heads/master
| 2021-07-22T00:14:57.615495
| 2021-07-08T19:36:00
| 2021-07-08T19:36:00
| 254,430,054
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 469
|
py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Jun 4 09:14:04 2021
@author: ramyagurunathan
Silicon spectral thermal conductivity
"""
'''
constants
'''
from math import pi as pi
kB = 1.38E-23
hbar = 1.054E-34
Na = 6.02E23
'''
Silicon values
'''
vs = 6084
T = 300
a = 2.7e-10
grun = 0.56
atmM = (28.05 / Na) / 1e3
k_full = (6 * pi**2)**(2 / 3) * (1 / (4 * pi**2)) * (vs**3 / (T * a**2 * grun**2)) * atmM
k_spec = k_full / (vs * (2 * pi) / a)
|
[
"ramya1006@gmail.com"
] |
ramya1006@gmail.com
|
c74df65958d4ad2bfccfacfc541f05c9a3e3ad24
|
30e26d4376d2d233be7b6acb45516a8e873a65db
|
/pycurl_requests/models.py
|
048cc9e98b95e7debfb3a43f1477553e130fcb78
|
[
"MIT"
] |
permissive
|
chibie/pycurl-requests
|
53a658aa39058dc538d36f42b7b087b96baa2a96
|
66ee39e2d357f0e91d1e9bfb7a2e3339aaa11aef
|
refs/heads/master
| 2022-08-24T00:45:39.322652
| 2020-05-29T05:43:36
| 2020-05-29T05:43:36
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 9,384
|
py
|
import codecs
import datetime
import io
import json as json_
from collections import abc
from urllib.parse import urlsplit, urlunsplit, urlencode, parse_qsl, quote
from io import BytesIO
from typing import *
import chardet
from pycurl_requests.cookies import RequestsCookieJar
from pycurl_requests import exceptions
from pycurl_requests import structures
DEFAULT_REDIRECT_LIMIT = 30
class Request:
def __init__(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None,
json=None):
self.method = method
self.url = url
self.headers = headers
self.files = files
self.data = data
self.json = json
self.params = params
self.auth = auth
self.cookies = cookies
self.hooks = hooks
def deregister_hook(self, event, hook):
raise NotImplementedError
def prepare(self):
prepared = PreparedRequest()
prepared.prepare(
method=self.method,
url=self.url,
headers=self.headers,
files=self.files,
data=self.data,
params=self.params,
auth=self.auth,
cookies=self.cookies,
hooks=self.hooks,
json=self.json)
return prepared
def register_hook(self, event, hook):
raise NotImplementedError
class Response:
def __init__(self):
self.request = None # type: Optional[Request]
self.elapsed = None # type: Optional[datetime.timedelta]
self.status_code = None # type: Optional[int]
self.reason = None # type: Optional[str]
self.headers = None # type: Optional[structures.CaseInsensitiveDict]
self.encoding = None # type: Optional[str]
self.url = None # type: Optional[str]
self.raw = None # type: Optional[BytesIO]
@property
def apparent_encoding(self):
return chardet.detect(self.content)['encoding']
def close(self):
# Not implemented
pass
@property
def content(self):
return self.raw.getvalue()
@property
def cookies(self):
return NotImplemented
@property
def history(self):
return NotImplemented
@property
def is_permanent_redirect(self):
# Moved Permanently (HTTP 301) or Permanent Redirect (HTTP 308)
return self.status_code in {301, 308}
@property
def is_redirect(self):
return self.status_code in {301, 302, 303, 307, 308}
def iter_content(self, chunk_size=1, decode_unicode=False):
chunk_size = chunk_size or -1
decoder = codecs.getincrementaldecoder(self.encoding)('replace') if self.encoding and decode_unicode else None
for chunk in iter(lambda: self.raw.read1(chunk_size), b''):
if decoder:
yield decoder.decode(chunk)
else:
yield chunk
if decoder:
# Make sure we finalize the decoder (may yield replacement character)
tail = decoder.decode(b'', True)
if tail:
yield tail
def iter_lines(self, chunk_size=512, decode_unicode=False, delimiter=None):
leftover = None
for chunk in self.iter_content(chunk_size, decode_unicode=decode_unicode):
if leftover:
chunk = leftover + chunk
if delimiter is not None:
parts = chunk.split(delimiter)
else:
parts = chunk.splitlines()
# FIXME: This logic doesn't work for CR-only line endings
            if chunk[-1:] in (b'\n', '\n'):  # slicing keeps this working for both bytes and str chunks
yield from parts
leftover = None
else:
# This may be a partial line, so add to the next chunk
yield from parts[:-1]
leftover = parts[-1]
if leftover is not None:
yield leftover
def json(self, **kwargs):
return json_.loads(self.content, **kwargs)
@property
def links(self):
return NotImplemented
@property
def next(self):
return NotImplemented
@property
def ok(self):
return self.status_code < 400
def raise_for_status(self):
if 400 <= self.status_code < 500:
raise exceptions.HTTPError('{s.status_code} Client Error: {s.reason} for url: {s.url}'.format(s=self),
response=self)
if 500 <= self.status_code < 600:
            raise exceptions.HTTPError('{s.status_code} Server Error: {s.reason} for url: {s.url}'.format(s=self),
response=self)
@property
def text(self):
return self.content.decode(self.encoding or 'ISO-8859-1')
class PreparedRequest:
def __init__(self):
self.method = None
self.url = None
self.headers = None
self.body = None
self.hooks = None
@property
def path_url(self):
return urlsplit(self.url).path
def prepare(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None,
json=None):
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files, json)
self.prepare_auth(auth, url)
self.prepare_hooks(hooks)
def prepare_method(self, method):
self.method = method.upper() if method else None
def prepare_url(self, url, params):
if isinstance(url, bytes):
url = url.decode('iso-8859-1')
url = url.strip()
# Leave non-HTTP schemes as-is
if ':' in url and not url.lower().startswith('http'):
self.url = url
return
parts = urlsplit(url)
path = quote(parts.path) if parts.path else '/'
if not params:
query = parts.query
else:
if isinstance(params, (str, bytes)):
params = parse_qsl(params)
if isinstance(params, abc.Mapping):
params = list(params.items())
else:
params = list(params)
query = urlencode(parse_qsl(parts.query) + params, doseq=True)
self.url = urlunsplit(parts[:2] + (path, query) + parts[4:])
def prepare_headers(self, headers):
# NOTE: Only user-defined headers, not those set by libcurl
headers = headers or structures.CaseInsensitiveDict()
# Filter out headers with None value
        header_names = list(headers.keys())  # copy the keys so we can delete entries while iterating
for name in header_names:
if headers[name] is None:
del headers[name]
self.headers = headers
def prepare_cookies(self, cookies):
# Cookies can only be set if there is no existing `Cookie` header
if 'Cookie' in self.headers or cookies is None:
return
cookiejar = RequestsCookieJar()
cookiejar.update(cookies)
value = '; '.join(('{}={}'.format(n, v) for n, v in cookiejar.iteritems()))
self.headers['Cookie'] = value
def prepare_content_length(self, body):
content_length = None
if body is None:
if self.method not in ('GET', 'HEAD'):
content_length = 0
elif isinstance(body, bytes):
content_length = len(body)
elif isinstance(body, str):
content_length = len(body.encode('iso-8859-1'))
elif getattr(body, 'seekable', False):
content_length = body.seek(0, io.SEEK_END)
body.seek(0)
if content_length is not None:
self.headers['Content-Length'] = str(content_length)
def prepare_body(self, data, files, json=None):
body = None
if files is not None:
raise NotImplementedError
elif data is not None:
if isinstance(data, (io.RawIOBase, io.BufferedReader)):
# It's a file-like object, so can be sent directly
body = data
elif isinstance(data, (abc.Mapping, list, tuple)):
self._set_header_default('Content-Type', 'application/x-www-form-urlencoded')
body = urlencode(data)
else:
# Assume it's something bytes-compatible
body = data
elif json is not None:
self._set_header_default('Content-Type', 'application/json')
body = json_.dumps(json, ensure_ascii=True).encode('ascii')
if 'Content-Length' not in self.headers:
self.prepare_content_length(body)
self.body = body
def _set_header_default(self, key, default):
"""Set header `key` to `default` if not already set"""
if key not in self.headers:
self.headers[key] = default
def prepare_auth(self, auth, url=''):
# FIXME: Not implemented
pass
def prepare_hooks(self, hooks):
# FIXME: Not implemented
pass
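# Illustrative smoke test (not part of the library's public API): shows how
# PreparedRequest.prepare_url merges extra params into an existing query string.
if __name__ == '__main__':
    req = PreparedRequest()
    req.prepare_url('http://example.com/path?x=1', {'a': '1'})
    print(req.url)  # expected: http://example.com/path?x=1&a=1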
|
[
"coles.david@gmail.com"
] |
coles.david@gmail.com
|
5fc4c383cc710de67fea02c946439b51e1d2d0a9
|
2d9365671b746e17097ed15efd67c25053f4f2d9
|
/setup.py
|
304fb7d7fd8c1ee07f8cb0a4a66353041d461f6a
|
[] |
no_license
|
stargliderdev/registos_paroquiais
|
3012e154befd4bbf1ddc4244f915865a8717f4e3
|
73444ed7668dc7a1fa357aa10c47c93015eae5aa
|
refs/heads/master
| 2020-12-20T20:59:00.210240
| 2020-01-25T18:12:41
| 2020-01-25T18:12:41
| 236,209,037
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,742
|
py
|
# ======================================================== #
# File automagically generated by GUI2Exe version 0.5.1
# Copyright: (c) 2007-2011 Andrea Gavana
# ======================================================== #
# Let's start with some default (for me) imports...
from distutils.core import setup
from py2exe.build_exe import py2exe
import glob
import os
import zlib
import shutil
# Remove the build folder
shutil.rmtree("build", ignore_errors=True)
class Target(object):
""" A simple class that holds information on our executable file. """
def __init__(self, **kw):
""" Default class constructor. Update as you need. """
self.__dict__.update(kw)
# Ok, let's explain why I am doing that.
# Often, data_files, excludes and dll_excludes (but also resources)
# can be very long list of things, and this will clutter too much
# the setup call at the end of this file. So, I put all the big lists
# here and I wrap them using the textwrap module.
data_files = []
includes = ['PyQt4.QtNetwork', 'sip','psycopg2']
excludes = ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger',
'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl',
'Tkconstants', 'Tkinter']
packages = ['sip','psycopg2']
dll_excludes = ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll',
'tk84.dll']
icon_resources = [(1, 'z:\\source\\paroquia\\icone.ico')]
bitmap_resources = []
other_resources = []
# This is a place where the user custom code may go. You can do almost
# whatever you want, even modify the data_files, includes and friends
# here as long as they have the same variable name that the setup call
# below is expecting.
# No custom code added
# Ok, now we are going to build our target class.
# I chose this building strategy as it works perfectly for me :-D
GUI2Exe_Target_1 = Target(
# what to build
script = "main.py",
icon_resources = icon_resources,
bitmap_resources = bitmap_resources,
other_resources = other_resources,
dest_base = "main",
version = "1.2.4",
company_name = "Jorge Espiridiao.",
copyright = "Jorge Espiridiao (c) 2014",
name = "Registos Paroquiais",
)
# No custom class for UPX compression or Inno Setup script
# That's serious now: we have all (or almost all) the options py2exe
# supports. I put them all even if some of them are usually defaulted
# and not used. Some of them I didn't even know about.
setup(
# No UPX or Inno Setup
data_files = data_files,
options = {"py2exe": {"compressed": 2,
"optimize": 2,
"includes": includes,
"excludes": excludes,
"packages": packages,
"dll_excludes": dll_excludes,
"bundle_files": 3,
"dist_dir": "C:\\bin\\paroquia",
"xref": False,
"skip_archive": False,
"ascii": False,
"custom_boot_script": '',
}
},
zipfile = r'QtCore5.dll',
console = [],
windows = [GUI2Exe_Target_1],
service = [],
com_server = [],
ctypes_com_server = []
)
# This is a place where any post-compile code may go.
# You can add as much code as you want, which can be used, for example,
# to clean up your folders or to do some particular post-compilation
# actions.
# No post-compilation code added
# And we are done. That's a setup script :-D
|
[
"stargliderdev@gmail.com"
] |
stargliderdev@gmail.com
|
0edcf86b0f495c20df193852fb974f68195218f7
|
1a22bee5a01e5aa4ddf5a5d2b24f0139ba261d75
|
/interactive-build-guide/raw/build_guide.py
|
5f9af12392fdd8d91ec0bf1c301befcd1ab8c656
|
[] |
no_license
|
ffont/ddrm-tools
|
961af7a0614ac3e3c76c7ca962397fd8af597e46
|
472e06bb7033e6df2bcb86bf1a8307ed9110855e
|
refs/heads/master
| 2022-03-26T10:01:30.552437
| 2020-09-08T11:30:34
| 2020-09-08T11:30:34
| 141,696,341
| 3
| 0
| null | 2022-03-01T23:42:54
| 2018-07-20T10:03:59
|
JavaScript
|
UTF-8
|
Python
| false
| false
| 551
|
py
|
import os
import shutil
from collections import defaultdict
outdir = 'out'
index = defaultdict(list)
import re
exp = re.compile(r'[0-9]x[0-9]')
for filename in os.listdir('.'):
if filename.endswith('.jpg') and not exp.search(filename):
board, number, name = [elm for elm in filename.split('_') if elm][0:3]
number = int(number)
index[board].append((number, name, filename))
for board, pics in index.items():
pics = sorted(pics, key=lambda x: x[0])
for number, name, filename in pics:
shutil.copy(filename, 'out/%s' % filename)
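# Illustrative note (hypothetical filename): 'boardA_03_resistor_cap.jpg' is parsed as
# board='boardA', number=3, name='resistor' and copied to out/, while names containing
# a size tag such as '2x3' are skipped by the regex filter above.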
|
[
"frederic.font@upf.edu"
] |
frederic.font@upf.edu
|
a9b548e829889ce8f1507dcc2f013e7e1a205c68
|
202be9ce15e7e41bad55e6bbe4d0c941ecbb6781
|
/1015 德才论.py
|
a6c87a2b4866593f98692848aacc2428f1c12c4f
|
[] |
no_license
|
junyechen/Basic-level
|
ae55ab4e13fd38595772786af25fcc91c055f28c
|
a6e15bc3829dfe05cefc248454f0433f8070cdfb
|
refs/heads/master
| 2020-04-29T08:01:21.936408
| 2019-07-06T04:16:14
| 2019-07-06T04:16:14
| 175,972,034
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,481
|
py
|
#The Song-dynasty historian Sima Guang wrote a famous passage on virtue and talent in the Zizhi Tongjian: one who is perfect in both talent and virtue is a sage; one lacking both is a fool; one whose virtue outweighs his talent is a gentleman; one whose talent outweighs his virtue is a petty person. When selecting people, if a sage cannot be had, take a gentleman; and rather than take a petty person, it is better to take a fool.
#Given the virtue and talent scores of a batch of examinees, produce the admission ranking according to Sima Guang's theory.
#Input format:
#The first line gives 3 positive integers: N (≤ 10^5), the total number of examinees; L (≥ 60), the minimum admission line - only examinees whose virtue and talent scores are both at least L are considered; and H (< 100), the priority admission line - examinees whose virtue and talent scores are both at least H are "perfect in both virtue and talent" and are ranked by total score in descending order; those whose talent score is below H but whose virtue score reaches H are "virtue over talent" and are also ranked by total score, but after the first group; those with both scores below H but with virtue not lower than talent are "lacking both, yet virtue still over talent", ranked by total score after the second group; all other examinees who reach the minimum line L are also ranked by total score, but after the third group.
#The following N lines each give one examinee's record: admission-ticket number, virtue score, talent score, where the ticket number is an 8-digit integer and both scores are integers in [0, 100]. Fields are separated by spaces.
#Output format:
#The first line outputs M, the number of examinees who reach the minimum line; the following M lines each output one examinee's record in the input format, sorted from high to low by the rules above. When several examinees in the same category have the same total score, they are ordered by descending virtue score; if virtue scores also tie, by ascending ticket number.
#Sample input:
#14 60 80
#10000001 64 90
#10000002 90 60
#10000011 85 80
#10000003 85 80
#10000004 80 85
#10000005 82 77
#10000006 83 76
#10000007 90 78
#10000008 75 79
#10000009 59 90
#10000010 88 45
#10000012 80 100
#10000013 90 99
#10000014 66 60
#Sample output:
#12
#10000013 90 99
#10000012 80 100
#10000003 85 80
#10000011 85 80
#10000004 80 85
#10000007 90 78
#10000006 83 76
#10000005 82 77
#10000002 90 60
#10000014 66 60
#10000008 75 79
#10000001 64 90
#########
#Python performance issue: in Python, 3 of the test cases will exceed the time limit
line = input().split()
N = int(line[0])
L = int(line[1])
H = int(line[2])
line1 = []
line2 = []
line3 = []
line4 = []
for i in range(N):
line = str(input())
line = line.split()
line = list(map(int,line))
score = line[1] + line[2]
line.append(score)
if line[1] < L or line[2] < L:
continue
elif line[1] >= H and line[2] >= H:
line1.append(line)
elif line[1] >= H and line[2] < H:
line2.append(line)
elif line[1] < H and line[2] < H and line[1] >= line[2]:
line3.append(line)
else:
line4.append(line)
line1.sort(key=(lambda x:[x[3],x[1],-x[0]]),reverse=True)
line2.sort(key=(lambda x:[x[3],x[1],-x[0]]),reverse=True)
line3.sort(key=(lambda x:[x[3],x[1],-x[0]]),reverse=True)
line4.sort(key=(lambda x:[x[3],x[1],-x[0]]),reverse=True)
print(len(line1) + len(line2) + len(line3) + len(line4))
for i in range(len(line1)):
line1[i] = list(map(str,line1[i]))
print(' '.join(line1[i][:3]))
for i in range(len(line2)):
line2[i] = list(map(str,line2[i]))
print(' '.join(line2[i][:3]))
for i in range(len(line3)):
line3[i] = list(map(str,line3[i]))
print(' '.join(line3[i][:3]))
for i in range(len(line4)):
line4[i] = list(map(str,line4[i]))
print(' '.join(line4[i][:3]))
|
[
"chenjunyeword@outlook.com"
] |
chenjunyeword@outlook.com
|
7fe6d72d895363d88d2dfb0cd48dbe3f3769d8e9
|
fe842b9f42c1b1112c2a0f9f934d3b3360b97957
|
/backend/app/alembic/versions/13d5b7bf4214_add_chinook_models.py
|
78b9b146a0565a4562930fe826f9c60ede0c3e5c
|
[] |
no_license
|
JayGitH/SQLacodegen-FastAPI
|
138d494aca29f426cbeadda81dda773003a4e094
|
80af2a402366806d257f6f9cd163abc20b34d1c6
|
refs/heads/master
| 2023-03-16T04:54:07.649103
| 2020-06-19T04:12:52
| 2020-06-19T04:12:52
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 8,072
|
py
|
"""Add chinook models
Revision ID: 13d5b7bf4214
Revises: b7f884f5fc23
Create Date: 2020-06-17 22:23:58.202580
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '13d5b7bf4214'
down_revision = 'b7f884f5fc23'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('Artist',
sa.Column('ArtistId', sa.Integer(), nullable=False),
sa.Column('Name', sa.String(length=120), nullable=True),
sa.PrimaryKeyConstraint('ArtistId')
)
op.create_table('Employee',
sa.Column('EmployeeId', sa.Integer(), nullable=False),
sa.Column('LastName', sa.String(length=20), nullable=False),
sa.Column('FirstName', sa.String(length=20), nullable=False),
sa.Column('Title', sa.String(length=30), nullable=True),
sa.Column('ReportsTo', sa.Integer(), nullable=True),
sa.Column('BirthDate', sa.DateTime(), nullable=True),
sa.Column('HireDate', sa.DateTime(), nullable=True),
sa.Column('Address', sa.String(length=70), nullable=True),
sa.Column('City', sa.String(length=40), nullable=True),
sa.Column('State', sa.String(length=40), nullable=True),
sa.Column('Country', sa.String(length=40), nullable=True),
sa.Column('PostalCode', sa.String(length=10), nullable=True),
sa.Column('Phone', sa.String(length=24), nullable=True),
sa.Column('Fax', sa.String(length=24), nullable=True),
sa.Column('Email', sa.String(length=60), nullable=True),
sa.ForeignKeyConstraint(['ReportsTo'], ['Employee.EmployeeId'], ),
sa.PrimaryKeyConstraint('EmployeeId')
)
op.create_index(op.f('ix_Employee_ReportsTo'), 'Employee', ['ReportsTo'], unique=False)
op.create_table('Genre',
sa.Column('GenreId', sa.Integer(), nullable=False),
sa.Column('Name', sa.String(length=120), nullable=True),
sa.PrimaryKeyConstraint('GenreId')
)
op.create_table('MediaType',
sa.Column('MediaTypeId', sa.Integer(), nullable=False),
sa.Column('Name', sa.String(length=120), nullable=True),
sa.PrimaryKeyConstraint('MediaTypeId')
)
op.create_table('Playlist',
sa.Column('PlaylistId', sa.Integer(), nullable=False),
sa.Column('Name', sa.String(length=120), nullable=True),
sa.PrimaryKeyConstraint('PlaylistId')
)
op.create_table('Album',
sa.Column('AlbumId', sa.Integer(), nullable=False),
sa.Column('Title', sa.String(length=160), nullable=False),
sa.Column('ArtistId', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['ArtistId'], ['Artist.ArtistId'], ),
sa.PrimaryKeyConstraint('AlbumId')
)
op.create_index(op.f('ix_Album_ArtistId'), 'Album', ['ArtistId'], unique=False)
op.create_table('Customer',
sa.Column('CustomerId', sa.Integer(), nullable=False),
sa.Column('FirstName', sa.String(length=40), nullable=False),
sa.Column('LastName', sa.String(length=20), nullable=False),
sa.Column('Company', sa.String(length=80), nullable=True),
sa.Column('Address', sa.String(length=70), nullable=True),
sa.Column('City', sa.String(length=40), nullable=True),
sa.Column('State', sa.String(length=40), nullable=True),
sa.Column('Country', sa.String(length=40), nullable=True),
sa.Column('PostalCode', sa.String(length=10), nullable=True),
sa.Column('Phone', sa.String(length=24), nullable=True),
sa.Column('Fax', sa.String(length=24), nullable=True),
sa.Column('Email', sa.String(length=60), nullable=False),
sa.Column('SupportRepId', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['SupportRepId'], ['Employee.EmployeeId'], ),
sa.PrimaryKeyConstraint('CustomerId')
)
op.create_index(op.f('ix_Customer_SupportRepId'), 'Customer', ['SupportRepId'], unique=False)
op.create_table('Invoice',
sa.Column('InvoiceId', sa.Integer(), nullable=False),
sa.Column('CustomerId', sa.Integer(), nullable=False),
sa.Column('InvoiceDate', sa.DateTime(), nullable=False),
sa.Column('BillingAddress', sa.String(length=70), nullable=True),
sa.Column('BillingCity', sa.String(length=40), nullable=True),
sa.Column('BillingState', sa.String(length=40), nullable=True),
sa.Column('BillingCountry', sa.String(length=40), nullable=True),
sa.Column('BillingPostalCode', sa.String(length=10), nullable=True),
sa.Column('Total', sa.Numeric(precision=10, scale=2), nullable=False),
sa.ForeignKeyConstraint(['CustomerId'], ['Customer.CustomerId'], ),
sa.PrimaryKeyConstraint('InvoiceId')
)
op.create_index(op.f('ix_Invoice_CustomerId'), 'Invoice', ['CustomerId'], unique=False)
op.create_table('Track',
sa.Column('TrackId', sa.Integer(), nullable=False),
sa.Column('Name', sa.String(length=200), nullable=False),
sa.Column('AlbumId', sa.Integer(), nullable=True),
sa.Column('MediaTypeId', sa.Integer(), nullable=False),
sa.Column('GenreId', sa.Integer(), nullable=True),
sa.Column('Composer', sa.String(length=220), nullable=True),
sa.Column('Milliseconds', sa.Integer(), nullable=False),
sa.Column('Bytes', sa.Integer(), nullable=True),
sa.Column('UnitPrice', sa.Numeric(precision=10, scale=2), nullable=False),
sa.ForeignKeyConstraint(['AlbumId'], ['Album.AlbumId'], ),
sa.ForeignKeyConstraint(['GenreId'], ['Genre.GenreId'], ),
sa.ForeignKeyConstraint(['MediaTypeId'], ['MediaType.MediaTypeId'], ),
sa.PrimaryKeyConstraint('TrackId')
)
op.create_index(op.f('ix_Track_AlbumId'), 'Track', ['AlbumId'], unique=False)
op.create_index(op.f('ix_Track_GenreId'), 'Track', ['GenreId'], unique=False)
op.create_index(op.f('ix_Track_MediaTypeId'), 'Track', ['MediaTypeId'], unique=False)
op.create_table('InvoiceLine',
sa.Column('InvoiceLineId', sa.Integer(), nullable=False),
sa.Column('InvoiceId', sa.Integer(), nullable=False),
sa.Column('TrackId', sa.Integer(), nullable=False),
sa.Column('UnitPrice', sa.Numeric(precision=10, scale=2), nullable=False),
sa.Column('Quantity', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['InvoiceId'], ['Invoice.InvoiceId'], ),
sa.ForeignKeyConstraint(['TrackId'], ['Track.TrackId'], ),
sa.PrimaryKeyConstraint('InvoiceLineId')
)
op.create_index(op.f('ix_InvoiceLine_InvoiceId'), 'InvoiceLine', ['InvoiceId'], unique=False)
op.create_index(op.f('ix_InvoiceLine_TrackId'), 'InvoiceLine', ['TrackId'], unique=False)
op.create_table('PlaylistTrack',
sa.Column('PlaylistId', sa.Integer(), nullable=False),
sa.Column('TrackId', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['PlaylistId'], ['Playlist.PlaylistId'], ),
sa.ForeignKeyConstraint(['TrackId'], ['Track.TrackId'], ),
sa.PrimaryKeyConstraint('PlaylistId', 'TrackId')
)
op.create_index(op.f('ix_PlaylistTrack_TrackId'), 'PlaylistTrack', ['TrackId'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_PlaylistTrack_TrackId'), table_name='PlaylistTrack')
op.drop_table('PlaylistTrack')
op.drop_index(op.f('ix_InvoiceLine_TrackId'), table_name='InvoiceLine')
op.drop_index(op.f('ix_InvoiceLine_InvoiceId'), table_name='InvoiceLine')
op.drop_table('InvoiceLine')
op.drop_index(op.f('ix_Track_MediaTypeId'), table_name='Track')
op.drop_index(op.f('ix_Track_GenreId'), table_name='Track')
op.drop_index(op.f('ix_Track_AlbumId'), table_name='Track')
op.drop_table('Track')
op.drop_index(op.f('ix_Invoice_CustomerId'), table_name='Invoice')
op.drop_table('Invoice')
op.drop_index(op.f('ix_Customer_SupportRepId'), table_name='Customer')
op.drop_table('Customer')
op.drop_index(op.f('ix_Album_ArtistId'), table_name='Album')
op.drop_table('Album')
op.drop_table('Playlist')
op.drop_table('MediaType')
op.drop_table('Genre')
op.drop_index(op.f('ix_Employee_ReportsTo'), table_name='Employee')
op.drop_table('Employee')
op.drop_table('Artist')
# ### end Alembic commands ###
|
[
"evv@alum.mit.edu"
] |
evv@alum.mit.edu
|
0f59f0691cf22570a0967631f2f5876793a49327
|
345c5fcbc995c47b12e5afe2a5d62e437dab937e
|
/Camera.py
|
404803eeef858137157399f14a7721f156e057c1
|
[] |
no_license
|
apuly/Nobodys_Watching
|
18a257c720d2f3677c8c3e7d2f12a3cf86473264
|
55dcb34569bb415e9e6d821746ea44bfe8b768e8
|
refs/heads/master
| 2020-03-17T05:45:07.715582
| 2018-08-07T09:52:22
| 2018-08-07T09:52:22
| 133,327,499
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 976
|
py
|
from abc import ABC, abstractmethod
import cv2 as cv
import urllib.request as request
import base64
import numpy as np
class Camera(ABC):
@abstractmethod
def read_image(self):
pass
class URLCam(Camera):
def __init__(self, url):
self._url = url
def read_image(self):
req = request.urlopen(self._url)
img_arr = np.array(bytearray(req.read()), dtype=np.uint8)
img = cv.imdecode(img_arr, -1)
return img
class NULLCam(Camera):
"""
always returns an empty image of 400 by 300 pixels
"""
def read_image(self):
return np.zeros((400,300,3))
class WebCam(Camera):
"""
returns images from webcam
"""
def __init__(self, cam_index):
        self._vc = cv.VideoCapture(cam_index)  # use the requested camera index rather than hard-coding 0
self._vc.set(3,1280)
self._vc.set(4,720)
def read_image(self):
rval, frame = self._vc.read()
if rval:
return frame
else:
return None
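# Minimal usage sketch: NULLCam needs no hardware, so it is safe to exercise here.
# (URLCam and WebCam would require a reachable URL or an attached camera.)
if __name__ == '__main__':
    cam = NULLCam()
    frame = cam.read_image()
    print(frame.shape)  # expected: (400, 300, 3)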
|
[
"paul@bersee.nl"
] |
paul@bersee.nl
|
0c4dc595e48ce2a6399cc0fb39d7cdcb43412ef9
|
da893fbeedfc197a74c96b380a0cb2f22a81917c
|
/Cpg_island.py
|
ae36d5d566b8368501af2af60cceeb2ccbe7688d
|
[] |
no_license
|
MeenakshiAnbukkarasu/CpG-Island
|
dc9dc3c376f64cb1243bb2f846669be3c047421a
|
3ed09aab94608827ec88a83128da2aabe37f1c4c
|
refs/heads/master
| 2020-04-04T18:12:40.914385
| 2018-11-05T03:41:51
| 2018-11-05T03:41:51
| 156,154,386
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 9,351
|
py
|
# NAME: hmm_example.py
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
#matplotlib inline
"""
A Markov chain (model) describes a stochastic process where the probability of
future state(s) is assumed to depend only on the current process state and not on any
of the states that preceded it.
Let's get into a simple example. Assume you want to model the future probability that
you land in a CpG island given its current state. To do this we need to
specify the state space, the initial probabilities, and the transition probabilities.
Imagine you have a DNA sequence. We define the state space as the four different bases A, T, C and G.
We will set the initial probabilities to 0.25, 0.25, 0.25 and 0.25 respectively.
"""
# create state space and initial state probabilities
states = ['a', 't', 'c' ,'g']
pi = [0.25, 0.25, 0.25, 0.25]
state_space = pd.Series(pi, index=states, name='states')
# create transition matrix
# equals transition probability matrix of changing states given a state
# matrix is size (M x M) where M is number of states
q_df = pd.DataFrame(columns=states, index=states)
q_df.loc[states[0]] = [0.180, 0.274, 0.426, 0.120]
q_df.loc[states[1]] = [0.171, 0.368, 0.274, 0.188]
q_df.loc[states[2]] = [0.161, 0.339, 0.375, 0.125]
q_df.loc[states[3]] = [0.079, 0.355, 0.384, 0.182]
q = q_df.values
"""
Now that we have the initial and transition probabilities setup we can create a
Markov diagram using the Networkx package.
To do this requires a little bit of flexible thinking. Networkx creates Graphs
that consist of nodes and edges. In our example the possible states are
the nodes and the edges are the lines that connect the nodes. The transition
probabilities are the weights. They represent the probability of transitioning
to a state given the current state.
Something to note is networkx deals primarily with dictionary objects. With that
said, we need to create a dictionary object that holds our edges and their weights.
"""
from pprint import pprint
# create a function that maps transition probability dataframe
# to markov edges and weights
def _get_markov_edges(Q):
edges = {}
for col in Q.columns:
for idx in Q.index:
edges[(idx,col)] = Q.loc[idx,col]
return edges
edges_wts = _get_markov_edges(q_df)
"""
Now we can create the graph. To visualize a Markov model we need to
use nx.MultiDiGraph(). A multidigraph is simply a directed graph that allows multiple
parallel arcs, so the same pair of nodes can be joined by more than one edge and a node
can even be both the origin and the destination of an edge (a self-loop).
In the following code, we create the graph object, add our nodes, edges, and
labels, then draw a rough networkx plot while outputting our graph to a dot file.
"""
# create graph object
G = nx.MultiDiGraph()
# nodes correspond to states
states = ['a', 't', 'c', 'g']
G.add_nodes_from(states)
# edges represent transition probabilities
for k, v in edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
pos = nx.drawing.nx_pydot.graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos)
# In Windows: dot -Tps filename.dot -o outfile.ps
# create edge labels for jupyter plot but is not necessary
edge_labels = {(n1,n2):d['label'] for n1,n2,d in G.edges(data=True)}
nx.draw_networkx_edge_labels(G , pos, edge_labels=edge_labels)
nx.drawing.nx_pydot.write_dot(G, 'cpg_markov.dot')
"""
Let us assume that you are traversing the DNA sequence.
Consider a situation in which you encounter more CpG pairs along the DNA and want to model
the probability of landing in a CpG island.
In this situation the true state of the sequence is unknown, thus hidden from you.
One way to model this is to assume a hidden state. Let's walk through an example.
First we create our state space - CpG or Not-Cpg. We assume they are equiprobable.
"""
# create state space and initial state probabilities
hidden_states = ['CpG', 'Not-Cpg']
pi = [0.5, 0.5]
state_space = pd.Series(pi, index=hidden_states, name='states')
# Next we create our transition matrix for the hidden states.
# create hidden transition matrix
# a or alpha = transition probability matrix of changing states given a state
# matrix is size (M x M) where M is number of states
a_df = pd.DataFrame(columns=hidden_states, index=hidden_states)
a_df.loc[hidden_states[0]] = [0.7, 0.3]
a_df.loc[hidden_states[1]] = [0.4, 0.6]
a = a_df.values
"""
Now we create the emission or observation probability matrix. This matrix is size M x O where M is the number
of hidden states and O is the number of possible observable states.
The emission matrix tells us the probability that we are in one of the hidden
states, given the current, observable state.
Let's keep the same observable states from the previous example. We can be in
either A, T,C or G. For now we make our best guess to fill in
the probabilities.
"""
# create matrix of observation (emission) probabilities
# b or beta = observation probabilities given state
# matrix is size (M x O) where M is number of states
# and O is number of different possible observations
observable_states = ['a', 't', 'c', 'g']
b_df = pd.DataFrame(columns=observable_states, index=hidden_states)
b_df.loc[hidden_states[0]] = [0.155, 0.341, 0.350, 0.154]
b_df.loc[hidden_states[1]] = [0.262, 0.246, 0.239, 0.253]
b = b_df.values
# Now we create the graph edges and the graph object.
# create graph edges and weights
hide_edges_wts = _get_markov_edges(a_df)
#pprint(hide_edges_wts)
emit_edges_wts = _get_markov_edges(b_df)
# pprint(emit_edges_wts)
print()
# create graph object
G = nx.MultiDiGraph()
# nodes correspond to states
G.add_nodes_from(hidden_states)
# edges represent hidden probabilities
for k, v in hide_edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
# edges represent emission probabilities
for k, v in emit_edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
pos = nx.drawing.nx_pydot.graphviz_layout(G, prog='neato')
nx.draw_networkx(G, pos)
# create edge labels for jupyter plot but is not necessary
emit_edge_labels = {(n1,n2):d['label'] for n1,n2,d in G.edges(data=True)}
nx.draw_networkx_edge_labels(G , pos, edge_labels=emit_edge_labels)
nx.drawing.nx_pydot.write_dot(G, 'pet_dog_hidden_markov.dot')
# In Windows: dot -Tps filename.dot -o outfile.ps
print("========================================================================")
print(" CpG Island using HMMs")
print("========================================================================")
def viterbi(pi, a, b, obs_seq):
nStates = np.shape(b)[0]
T = np.shape(obs_seq)[0]
# init blank path
path = np.zeros(T)
# delta --> highest probability of any path that reaches state i
delta = np.zeros((nStates, T))
# phi --> argmax by time step for each state
phi = np.zeros((nStates, T))
# init delta and phi
delta[:, 0] = pi * b[:, obs_seq[0]]
phi[:, 0] = 0
# the forward algorithm extension
for t in range(1, T):
for s in range(nStates):
delta[s, t] = np.max(delta[:, t-1] * a[:, s]) * b[s, obs_seq[t]]
phi[s, t] = np.argmax(delta[:, t-1] * a[:, s])
# find optimal path
print('-'*50)
path[T-1] = np.argmax(delta[:, T-1])
#p('init path\n t={} path[{}-1]={}\n'.format(T-1, T, path[T-1])) #LPW
for t in range(T-2, -1, -1):
path[t] = phi[int(path[t+1]), [t+1]]
return path, delta, phi
"""
"""
# observation sequence of DNA
# observations are encoded numerically
obs_map = { 0:'a', 1:'t',2:'c',3:'g' }
filepath = "dna_seq.txt"
fp=open(filepath,'r+')
i=1
for line in fp.readlines():
line = line.splitlines()
char_array = []
for entry in line:
for c in entry:
char_array.append(c)
obs = np.array(char_array)
inv_obs_map = dict((v,k) for k, v in obs_map.items())
obs_seq = [inv_obs_map[v] for v in list(obs)]
path, delta, phi = viterbi(pi, a, b, obs_seq)
# Let's take a look at the result.
state_map = {0:'I', 1:'N'}
state_path = [state_map[v] for v in path]
print(' '.join(str(o) for o in obs))
print(' '.join(str(p) for p in state_path))
print()
"""
References
https://en.wikipedia.org/wiki/Andrey_Markov
https://www.britannica.com/biography/Andrey-Andreyevich-Markov
https://www.reddit.com/r/explainlikeimfive/comments/vbxfk/eli5_brownian_motion_and_what_it_has_to_do_with/
http://www.math.uah.edu/stat/markov/Introduction.html
http://setosa.io/ev/markov-chains/
http://www.cs.jhu.edu/~langmea/resources/lecture_notes/hidden_markov_models.pdf
https://github.com/alexsosn/MarslandMLAlgo/blob/master/Ch16/HMM.py
http://hmmlearn.readthedocs.io
http://www.blackarbs.com/blog/introduction-hidden-markov-models-python-networkx-sklearn/2/9/2017
"""
|
[
"noreply@github.com"
] |
MeenakshiAnbukkarasu.noreply@github.com
|
72689bcea781c0e5d7a7fc60ce1159c980987af9
|
2b9bcdd45c70f9029d3469899a4d716c245a0146
|
/rename_files.py
|
be12ecd8379a66e49cc2450c54781a1a6e51857d
|
[] |
no_license
|
tatbikat/Pytutos
|
599de36db52f15307a248e14ae715784b1eec2d5
|
da9d9d787265b8ef244cfac881529431398d44c0
|
refs/heads/master
| 2021-04-06T18:32:03.564015
| 2018-04-13T06:52:10
| 2018-04-13T06:52:10
| 125,377,384
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 333
|
py
|
import os
os.chdir('Z:\Marketing\Videos')
for f in os.listdir() :
f_name,f_ext = os.path.splitext(f)
#print(f_name.split(' '))
f_num, f_title = f_name.split('- ')
f_title = f_title.strip()
f_num = f_num.strip()[0:100].zfill(2)
new_name = ('{} {}{}'.format(f_num, f_title, f_ext))
os.rename(f,new_name)
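# Illustrative example of the transformation (hypothetical filename):
#   '3 - Intro to Python.mp4' -> f_num='3', f_title='Intro to Python'
#   -> renamed to '03 Intro to Python.mp4'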
|
[
"bas2nagy@hotmail.com"
] |
bas2nagy@hotmail.com
|
2c70c2d2c2b147d59749853a4dfcffa6eccb4c91
|
7bb75e8560afc65ff77f198ac02d1f65253762f4
|
/Gazebo_ROS/catkin_ws/build/catkin_generated/generate_cached_setup.py
|
42ce7e26a04be48aba44b398d40858af164bab5b
|
[] |
no_license
|
maxwellalexgordon/Autonomous-Intersection-Research-
|
98f8509230e6fa25653e2920e7a5a1b78f449729
|
7ef55399ddace95f1089018bda7ebf750616613e
|
refs/heads/master
| 2020-12-10T16:50:04.526986
| 2020-04-09T18:38:52
| 2020-04-09T18:38:52
| 233,649,244
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,276
|
py
|
# -*- coding: utf-8 -*-
from __future__ import print_function
import argparse
import os
import stat
import sys
# find the import for catkin's python package - either from source space or from an installed underlay
if os.path.exists(os.path.join('/opt/ros/melodic/share/catkin/cmake', 'catkinConfig.cmake.in')):
sys.path.insert(0, os.path.join('/opt/ros/melodic/share/catkin/cmake', '..', 'python'))
try:
from catkin.environment_cache import generate_environment_script
except ImportError:
# search for catkin package in all workspaces and prepend to path
for workspace in "/opt/ros/melodic".split(';'):
python_path = os.path.join(workspace, 'lib/python2.7/dist-packages')
if os.path.isdir(os.path.join(python_path, 'catkin')):
sys.path.insert(0, python_path)
break
from catkin.environment_cache import generate_environment_script
code = generate_environment_script('/home/maxwell/catkin_ws/devel/env.sh')
output_filename = '/home/maxwell/catkin_ws/build/catkin_generated/setup_cached.sh'
with open(output_filename, 'w') as f:
#print('Generate script for cached setup "%s"' % output_filename)
f.write('\n'.join(code))
mode = os.stat(output_filename).st_mode
os.chmod(output_filename, mode | stat.S_IXUSR)
|
[
"maxwellalexgordon@gmail.com"
] |
maxwellalexgordon@gmail.com
|
201f4d8f69f22e1cb40d58bc0d586fecce461e43
|
3935529ff3b09431f48e092f9bc31f473d4be1e7
|
/PYTHON/ceng 434 -DATA COMMUNICATIONS AND NETWORKING/term project part1/ExperimentScripts/ExperimentScripts/exper_d.py
|
9dfb867ceb5c6eb28c38f00d6489dcc412c98503
|
[] |
no_license
|
emrahkosen/Programming-Languages
|
23e0c1590e791e4d04ab9932a4675ed2f0e6852d
|
8e38c82344643b819e5d52ee8bbf934bb57af93e
|
refs/heads/master
| 2022-12-20T21:03:09.089656
| 2020-12-30T09:59:10
| 2020-12-30T09:59:10
| 166,031,713
| 0
| 0
| null | 2022-12-12T20:04:51
| 2019-01-16T11:50:12
|
C++
|
UTF-8
|
Python
| false
| false
| 3,541
|
py
|
#!/usr/bin/python
import threading
import thread
import socket
import time
f=open("routelist.txt", "r")
thisIndex = 4 #which host
UDP_IP = [ [0 ,"10.10.1.2","10.10.2.1" ,"10.10.3.2" ,0],
["10.10.1.1",0 ,"10.10.8.2" ,0 ,"10.10.4.2"],
["10.10.2.2","10.10.8.1",0 ,"10.10.6.2" ,"10.10.5.2"],
["10.10.3.1",0 ,"10.10.6.1" ,0 ,"10.10.7.1"],
[0 ,"10.10.4.1","10.10.5.1" ,"10.10.7.2" ,0]
] #matrix for referencing connections
if f.mode == 'r': #reading routelist
content =f.read()
f.close()
route = list(content.split(" "))
for i in range(len(route)):
route[i] = int(route[i])
print route
if route[thisIndex] != -2:
    if route[thisIndex] == -1: # source host s: RTT calculations are done here
N=50
sendHost = route.index(thisIndex)
        #send messages to UDP_IP[thisIndex][sendHost]
sock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
sock.settimeout(10)
sock.bind((UDP_IP[sendHost][thisIndex],5000+sendHost))
for i in range(N):
try:
time.sleep(10 / 1000)
data = str(float(round(time.time() * 1000)))
sock.sendto( data , (UDP_IP[thisIndex][sendHost],5000+thisIndex))
data, addr = sock.recvfrom(1024)
data = float(round(time.time() * 1000)) - float(data) #rtt calculated
if data > 0:
print data
except socket.timeout:
break
sock.sendto( "end", (UDP_IP[thisIndex][sendHost],5000+thisIndex))
elif thisIndex == 4:# for d host
recieveHost = route[thisIndex]
sockd = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
sockd.settimeout(10)
sockd.bind((UDP_IP[recieveHost][thisIndex],5000+recieveHost))
while True:
            #get message from UDP_IP[recieveHost][thisIndex]
try:
data, addr = sockd.recvfrom(1024)
print "rec: " + data
if data== "end":
break
sockd.sendto( data , (UDP_IP[thisIndex][recieveHost],5000 + thisIndex)) #forward data
except socket.timeout:
break
else:
sendHost = route.index(thisIndex)
recieveHost = route[thisIndex]
sockr = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
socks = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
sockr.settimeout(10)
socks.settimeout(10)
sockr.bind((UDP_IP[recieveHost][thisIndex],5000+recieveHost))
socks.bind((UDP_IP[sendHost][thisIndex],5000+sendHost))
while True:
            #first get message from UDP_IP[recieveHost][thisIndex]
try:
data, addr = sockr.recvfrom(1024)
print "rec: " + data
                #then send messages to UDP_IP[thisIndex][sendHost]
socks.sendto(data,(UDP_IP[thisIndex][sendHost],5000 + thisIndex))
if data=="end":
break
data, addr = socks.recvfrom(1024)
sockr.sendto(data,(UDP_IP[thisIndex][recieveHost],5000+thisIndex)) #forward data
except socket.timeout:
break
|
[
"noreply@github.com"
] |
emrahkosen.noreply@github.com
|
e8d055c5841edf710ebc345263b13e6c6b097124
|
256f322c70bab8b77266bcb2b9e4b0e2eeb169f9
|
/shs/wsgi.py
|
be6437fad393bd8dbad3b74d7d22359b017a0461
|
[] |
no_license
|
stajama/Secret-Hitler-Server
|
c4337e7da73d95324be6ef71e88af5edef8b2bdf
|
988e0ee0a2c81a60711bc1cc04307635e4b59eac
|
refs/heads/master
| 2022-08-15T22:29:28.018148
| 2018-04-27T04:42:00
| 2018-04-27T04:42:00
| 124,625,887
| 0
| 0
| null | 2022-07-06T19:46:56
| 2018-03-10T05:38:11
|
Python
|
UTF-8
|
Python
| false
| false
| 383
|
py
|
"""
WSGI config for shs project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "shs.settings")
application = get_wsgi_application()
|
[
"stajama@gmail.com"
] |
stajama@gmail.com
|
08d63a83fd62fb584f9dac8b5ce43b058372e3b2
|
807d842325d62319ff98d539e559df9bbae68ee1
|
/config.py
|
54f8a401deeecb344dd01fe27c1348f764383bb5
|
[] |
no_license
|
AndrewVasilevskii/exc-power-supply
|
41d3f0e19dfc80f165c264abb33537349202476d
|
67e137426a95e7cf674cccb7c503b6bf69c258da
|
refs/heads/master
| 2020-09-14T21:31:04.113475
| 2019-11-21T20:48:11
| 2019-11-21T20:48:11
| 223,262,468
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 7,454
|
py
|
APP_NAME = 'ExDPowerSupply'
CONFIG_FILE_NAME = 'userConfig.ini'
position = 300,150
childFrameDisplacement = 40,60
positionInPercent = 0,0
expandSize = 660, 500
decreaseSize = 660, 160
portSettingsSize = 410, 300
grapgSettingsSize = 710, 300
chSize = (40, 20)
timeToAppear = 125
autoConnect = True
savePosition = True
alwaysOnTop = True
import wx
onTopTrue = wx.DEFAULT_FRAME_STYLE & ~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX) | wx.STAY_ON_TOP
onTopFalse = wx.DEFAULT_FRAME_STYLE & ~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX)
onAxuliaryPageWithOnTopTrue = wx.DEFAULT_FRAME_STYLE & ~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX | wx.CLOSE_BOX) | wx.STAY_ON_TOP
onAxuliaryPageWithOnTopFalse = wx.DEFAULT_FRAME_STYLE & ~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX | wx.CLOSE_BOX)
childFrameStyle = wx.DEFAULT_FRAME_STYLE &~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX) | wx.FRAME_FLOAT_ON_PARENT
channelBorderColour = '#a9a9a9'
autoTuneColour = '#89CFF0'
import configparser
import os, sys
import win32api
monitors = win32api.EnumDisplayMonitors()
def creatingDefaultConfigFile():
configParser = configparser.ConfigParser()
configParser['SAVING_CONFIG'] = {'autoConnect': str(autoConnect),
'savePosition': str(savePosition),
'alwaysOnTop': str(alwaysOnTop)}
configParser['POSITION'] = {'positionX': str(position[0]),
'positionY': str(position[1])}
configParser['POSITION_IN_PERCENT'] = {'positionX': str(positionInPercent[0]),
'positionY': str(positionInPercent[1])}
pos = float(position[0]), float(position[1])
style = onTopTrue
with open(CONFIG_FILE_NAME, 'w') as configfile:
configParser.write(configfile)
return pos, style
def GetConfigurations():
if not os.path.exists(os.path.join(os.getcwd(), CONFIG_FILE_NAME)):
pos, style = creatingDefaultConfigFile()
else:
configParser = configparser.ConfigParser()
configParser.read(CONFIG_FILE_NAME)
pos = getPosition(configParser)
style = getStyle(configParser)
return wx.Point(pos), wx.Size(expandSize), style
def getPosition(configParser):
try:
pos = float(configParser['POSITION']['positionX']), float(configParser['POSITION']['positionY'])
if pos[0] > monitors[-1][2][2] or pos[1] > monitors[-1][2][3]:
raise
except:
if type(KeyError()) == sys.exc_info()[0]:
configParser['POSITION'] = {'positionX': str(position[0]),
'positionY': str(position[1])}
with open(CONFIG_FILE_NAME, 'w') as configfile:
configParser.write(configfile)
try:
x_PercetPos = float(configParser['POSITION_IN_PERCENT']['positionX'])
y_PercetPos = float(configParser['POSITION_IN_PERCENT']['positionY'])
pos = monitors[0][2][2] / 100 * x_PercetPos, monitors[0][2][3] / 100 * y_PercetPos
except KeyError:
configParser['POSITION_IN_PERCENT'] = {'positionX': str(positionInPercent[0]),
'positionY': str(positionInPercent[1])}
with open(CONFIG_FILE_NAME, 'w') as configfile:
configParser.write(configfile)
finally:
x_PercetPos = float(configParser['POSITION_IN_PERCENT']['positionX'])
y_PercetPos = float(configParser['POSITION_IN_PERCENT']['positionY'])
pos = monitors[0][2][2] / 100 * x_PercetPos, monitors[0][2][3] / 100 * y_PercetPos
configParser['POSITION'] = {'positionX': str(pos[0]),
'positionY': str(pos[1])}
with open(CONFIG_FILE_NAME, 'w') as configfile:
configParser.write(configfile)
return pos
def getStyle(configParser):
try:
onTop = configParser['SAVING_CONFIG']['alwaysOnTop']
except KeyError as key:
if 'alwaysOnTop' in str(key):
configParser.set('SAVING_CONFIG', 'alwaysOnTop', str(alwaysOnTop))
else:
configParser['SAVING_CONFIG'] = {'autoConnect': str(autoConnect),
'savePosition': str(savePosition),
'alwaysOnTop': str(alwaysOnTop)}
with open(CONFIG_FILE_NAME, 'w') as configFile:
configParser.write(configFile)
finally:
onTop = configParser['SAVING_CONFIG']['alwaysOnTop']
if 'True' in onTop:
style = onTopTrue
else:
style = onTopFalse
return style
def GetSavingConfig():
configParser = configparser.ConfigParser()
configParser.read(CONFIG_FILE_NAME)
success = False
while not success:
try:
configAlwaysOnTop = getAlwaysOnTop(configParser)
configAutoConnect = getAutoConnect(configParser)
configSavePosition = getSavePosition(configParser)
success = True
except KeyError as key:
if 'autoConnect' in str(key):
configParser.set('SAVING_CONFIG', 'autoConnect', str(autoConnect))
with open(CONFIG_FILE_NAME, 'w') as configFile:
configParser.write(configFile)
elif 'savePosition' in str(key):
configParser.set('SAVING_CONFIG', 'savePosition', str(savePosition))
with open(CONFIG_FILE_NAME, 'w') as configFile:
configParser.write(configFile)
return configAlwaysOnTop, configAutoConnect, configSavePosition
def getAlwaysOnTop(configParser):
if 'True' in configParser['SAVING_CONFIG']['alwaysOnTop']:
configAlwaysOnTop = True
else:
configAlwaysOnTop = False
return configAlwaysOnTop
def getAutoConnect(configParser):
if 'True' in configParser['SAVING_CONFIG']['autoConnect']:
configAutoConnect = True
else:
configAutoConnect = False
return configAutoConnect
def getSavePosition(configParser):
if 'True' in configParser['SAVING_CONFIG']['savePosition']:
configSavePosition = True
else:
configSavePosition = False
return configSavePosition
def SavingUsersConfig(window):
configParser = configparser.ConfigParser()
configParser.read(CONFIG_FILE_NAME)
configParser.set('SAVING_CONFIG', 'autoConnect', str(window.autoConnect))
configParser.set('SAVING_CONFIG', 'savePosition', str(window.savePosition))
configParser.set('SAVING_CONFIG', 'alwaysOnTop', str(window.alwaysOnTop))
if window.savePosition:
pos = window.GetPosition()
for monitor in monitors:
if (pos[0] >= monitor[2][0] and pos[0] < monitor[2][2]) and (pos[1] >= monitor[2][1] and pos[1] < monitor[2][3]):
x_PercetPos = (pos[0] - monitor[2][0]) * 100 / (monitor[2][2] - monitor[2][0])
y_PercetPos = (pos[1] - monitor[2][1]) * 100 / (monitor[2][3] - monitor[2][1])
configParser.set('POSITION_IN_PERCENT', 'positionX', str(x_PercetPos))
configParser.set('POSITION_IN_PERCENT', 'positionY', str(y_PercetPos))
configParser.set('POSITION', 'positionX', str(pos[0]))
configParser.set('POSITION', 'positionY', str(pos[1]))
with open(CONFIG_FILE_NAME, 'w') as configFile:
configParser.write(configFile)
def GetScreenCenter():
x = monitors[0][2][2]
y = monitors[0][2][3]
return int(x/2), int(y/2)
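# Illustrative usage (hypothetical call site, not part of this module): a main frame
# would typically start with
#   pos, size, style = GetConfigurations()
#   frame = wx.Frame(None, title=APP_NAME, pos=pos, size=size, style=style)
# and call SavingUsersConfig(frame) on close to persist the position and user flags.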
|
[
"andrew.vasilevskii@gmail.com"
] |
andrew.vasilevskii@gmail.com
|
69e678bd25b69f2768e4642e525cdd18b0358859
|
1cc868f3beebd9b875f2d5a1c533a06875800db7
|
/Python/Cisco_pyATS/pyats_env/lib/python3.6/site-packages/ansible/cli/arguments/option_helpers.py
|
dc3a88b53cbf791ccb0178b97891bcd00ef7d035
|
[] |
no_license
|
Saby2002/arinsnetwork_Automation
|
4dc2b933deefe9317f73143e5998aa3749f518d7
|
f77ab7f2e849a6ca5247f7b3eb171a4362a7423b
|
refs/heads/master
| 2023-06-25T21:44:18.783505
| 2021-07-26T19:09:56
| 2021-07-26T19:09:56
| 389,741,652
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 17,375
|
py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
import yaml
try:
import _yaml
HAS_LIBYAML = True
except ImportError:
HAS_LIBYAML = False
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value)
return inner
def maybe_unfrack_path(beacon):
def inner(value):
if value.startswith(beacon):
return beacon + unfrackpath(value[1:])
return value
return inner
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
gitdir = yaml.safe_load(open(repo_path)).get('gitdir')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}] ".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s" % ''.join(sys.version.splitlines()))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="verbose mode (-vvv for more, -vvvv to enable connection debugging)")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.config.get_config_value('PLAYBOOK_DIR'), dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory."
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
# ssh only
connect_group.add_argument('--ssh-common-args', default='', dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default='', dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default='', dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default='', dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
add_runas_prompt_options(parser, runas_group=runas_group)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is None:
runas_group = parser.add_argument_group("Privilege Escalation Options",
"control how and which user you become as on target hosts")
runas_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
parser.add_argument_group(runas_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append", type=maybe_unfrack_path('@'),
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(), action='append')
|
[
"arin@arinsnetwork.com"
] |
arin@arinsnetwork.com
|
7d2b33a086a30db63febd6257723fcb552dec508
|
6cc700408356f8574d7c036828ef47e14bbfe210
|
/model/FaceAlignment3D/bfm.py
|
d9fc06335cc776b7f8151c87fdf0e3ee4140c18e
|
[
"MIT"
] |
permissive
|
bubingy/HeadPoseEstimate
|
81307da9a29f68fe92e82eea57c8b1a8ffea94ec
|
bfc6bfecc7269d63a83fc4db37de76fba44f9577
|
refs/heads/main
| 2023-09-05T09:04:24.603001
| 2021-11-16T02:03:30
| 2021-11-16T02:03:30
| 309,602,708
| 45
| 4
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,145
|
py
|
# coding: utf-8
__author__ = 'cleardusk'
import sys
sys.path.append('..')
import os
import pickle
import numpy as np
SCRIPT_HOME = os.path.dirname(os.path.abspath(__file__))
def _to_ctype(arr):
if not arr.flags.c_contiguous:
return arr.copy(order='C')
return arr
class BFMModel(object):
def __init__(self, bfm_fp, shape_dim=40, exp_dim=10):
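        # Load the 3D morphable face model (BFM, presumably the Basel Face Model):
        # mean shape `u`, shape basis `w_shp` and expression basis `w_exp`,
        # truncated to the requested shape/expression dimensions.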
bfm = pickle.load(open(bfm_fp, 'rb'))
self.u = bfm.get('u').astype(np.float32) # fix bug
self.w_shp = bfm.get('w_shp').astype(np.float32)[..., :shape_dim]
self.w_exp = bfm.get('w_exp').astype(np.float32)[..., :exp_dim]
self.tri = pickle.load(
open(os.path.join(SCRIPT_HOME, 'weights', 'tri.pkl'), 'rb')
)
self.tri = _to_ctype(self.tri.T).astype(np.int32)
self.keypoints = bfm.get('keypoints').astype(np.long) # fix bug
w = np.concatenate((self.w_shp, self.w_exp), axis=1)
self.w_norm = np.linalg.norm(w, axis=0)
self.u_base = self.u[self.keypoints].reshape(-1, 1)
self.w_shp_base = self.w_shp[self.keypoints]
self.w_exp_base = self.w_exp[self.keypoints]
|
[
"769004837@qq.com"
] |
769004837@qq.com
|
4809bbca43bdc1f670954e2cf37ca605b33ce402
|
3b46b6d9d3f4f67ae6876add492cb13bd3d79b4b
|
/wordcount/settings.py
|
a19e61c0f2dda2534f055651081e91801be20390
|
[] |
no_license
|
girinabin/wordcount
|
d76ad4aeee1c012a2d6bb77739056740dcb0fd7d
|
5570d2c8ff33206ca44608838575cf30b4ae9965
|
refs/heads/master
| 2020-05-03T13:37:39.449840
| 2019-03-31T07:38:52
| 2019-03-31T07:38:52
| 178,657,720
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,107
|
py
|
"""
Django settings for wordcount project.
Generated by 'django-admin startproject' using Django 2.1.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'nothrc#+^e80#fv13ciy75^d)iu%p-9d_fff6dul81x520d!1g'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'wordcount.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['template'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'wordcount.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
|
[
"giri.nabin1994@gmail.com"
] |
giri.nabin1994@gmail.com
|
cf184f3ea14e19e885441844ff0389b8a44a86b6
|
ef7eabdd5f9573050ef11d8c68055ab6cdb5da44
|
/topCoder/srms/400s/srm444/div2/four_blocks_easy.py
|
adf0b528de9ac3ebd503d194d293994285d1b77e
|
[
"WTFPL"
] |
permissive
|
gauravsingh58/algo
|
cdbf68e28019ba7c3e4832e373d32c71902c9c0d
|
397859a53429e7a585e5f6964ad24146c6261326
|
refs/heads/master
| 2022-12-28T01:08:32.333111
| 2020-09-30T19:37:53
| 2020-09-30T19:37:53
| 300,037,652
| 1
| 1
|
WTFPL
| 2020-10-15T09:26:32
| 2020-09-30T19:29:29
|
Java
|
UTF-8
|
Python
| false
| false
| 303
|
py
|
from itertools import groupby
class FourBlocksEasy:
def maxScore(self, grid):
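        # The grid has two rows; zip(*grid) walks it column by column and groupby
        # collapses runs of identical columns. A run of t fully empty columns
        # ('.', '.') holds t//2 2x2 blocks (16 points each) plus 2 points for a
        # leftover single column; any other run scores 2 points per column.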
s = 0
for k, g in groupby(zip(*grid)):
t = len(list(g))
if k == ('.', '.'):
                s += (t // 2) * 16 + (t % 2) * 2
else:
s += 2 * t
return s
|
[
"elmas.ferhat@gmail.com"
] |
elmas.ferhat@gmail.com
|
de6f5e995f30cab05aeec0095428f649013ae77b
|
4d01b0674072bf6ab817cd28d8a137d70ade68d3
|
/tzlink/preprocessing/overlap.py
|
088bd87748165ed15939710950e629f038552188
|
[
"BSD-3-Clause"
] |
permissive
|
lfurrer/tzlink
|
84f60ae90ddebff37282e7f0506dcacaf0d5c13c
|
0fd09a4c48d73cbd51e8f1628628812a74f209a7
|
refs/heads/master
| 2022-11-27T22:52:51.160758
| 2020-07-30T10:37:12
| 2020-07-30T10:37:12
| 132,708,312
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,383
|
py
|
#!/usr/bin/env python3
# coding: utf8
# Author: Lenz Furrer, 2018
'''
Overlap between token sequences.
'''
from .tokenization import create_tokenizer
from .stem import PorterStemmer
class TokenOverlap:
'''
Compute token overlap between two texts.
'''
def __init__(self):
self._tokenize = create_tokenizer('charclass')
self._stem = PorterStemmer().stem
self._cached_text = None
self._cached_tokens = None
def overlap(self, query, answer):
'''
Compute the Jaccard index of the stemmed tokens.
'''
if not isinstance(answer, str): # allow a sequence of str for answer
return max(self.overlap(query, a) for a in answer)
q_toks = self.tokens(query, cache=True)
a_toks = self.tokens(answer)
intersection = q_toks.intersection(a_toks)
union = q_toks.union(a_toks)
return len(intersection)/len(union)
def tokens(self, text, cache=False):
'''
Get a set of stemmed tokens.
'''
if cache and text == self._cached_text:
toks = self._cached_tokens
else:
toks = self._tokens(text)
if cache:
self._cached_text, self._cached_tokens = text, toks
return toks
def _tokens(self, text):
return set(self._stem(t) for t in self._tokenize(text))
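# Illustrative usage (a sketch added here, not part of the original module; the
# example strings and the expected value are assumptions based on the code above):
#     >>> TokenOverlap().overlap('heart attack', 'attack of the heart')
#     0.5    # 2 shared stems ({'heart', 'attack'}) out of 4 distinct stems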
|
[
"Lenz.Furrer@gmail.com"
] |
Lenz.Furrer@gmail.com
|
f8069a85e534c0df65b75245b29e7cbb228285bf
|
dfbde5609fc18e7641a1004d7ecf55648fcfebd5
|
/server/snippets/urls.py
|
4d73b69e8f991c7c4fb0d51996cc0f0a942220b5
|
[
"MIT"
] |
permissive
|
ildoc/ildoc.it_django
|
93b19a9ef67b6521be90c8e4baadeaad0a5a7802
|
3f9582a9e9e74877f37aa739be147cfada01d99e
|
refs/heads/master
| 2021-06-01T18:13:04.707625
| 2016-08-23T14:00:03
| 2016-08-23T14:00:03
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 417
|
py
|
from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns
from . import views
urlpatterns = [
url(r'^$', views.SnippetList.as_view()),
url(r'^(?P<pk>[0-9]+)/$', views.SnippetDetail.as_view()),
url(r'^users/$', views.UserList.as_view()),
url(r'^users/(?P<pk>[0-9]+)/$', views.UserDetail.as_view()),
]
urlpatterns = format_suffix_patterns(urlpatterns)
|
[
"filippo.giomi@gmail.com"
] |
filippo.giomi@gmail.com
|
ba46cd97d7765c543c37f8ae7976c0dbe1b8a5bf
|
c0668407f94cad329a31169e57e970df0e8c3c57
|
/test/functional/feature_nulldummy.py
|
49fd54bc4c7f30c53ed6853fdb3112743a9b0b5b
|
[
"MIT"
] |
permissive
|
crypTuron/mocha
|
5d7ad9befdcba88e717d4e91d094400f1f158f12
|
e3d6c6d13ef7c5aa918ccc44770f22138835b336
|
refs/heads/master
| 2022-09-23T15:10:05.410444
| 2020-05-31T21:04:52
| 2020-05-31T21:04:52
| 269,326,681
| 0
| 0
|
NOASSERTION
| 2020-06-04T10:15:09
| 2020-06-04T10:15:08
| null |
UTF-8
|
Python
| false
| false
| 5,889
|
py
|
#!/usr/bin/env python3
# Copyright (c) 2016-2017 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test NULLDUMMY softfork.
Connect to a single node.
Generate 2 blocks (save the coinbases for later).
Generate 427 more blocks.
[Policy/Consensus] Check that NULLDUMMY compliant transactions are accepted in the 430th block.
[Policy] Check that non-NULLDUMMY transactions are rejected before activation.
[Consensus] Check that the new NULLDUMMY rules are not enforced on the 431st block.
[Policy/Consensus] Check that the new NULLDUMMY rules are enforced on the 432nd block.
"""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
from test_framework.mininode import CTransaction, network_thread_start
from test_framework.blocktools import create_coinbase, create_block, add_witness_commitment
from test_framework.script import CScript
from io import BytesIO
import time
NULLDUMMY_ERROR = "64: non-mandatory-script-verify-flag (Dummy CHECKMULTISIG argument must be zero)"
def trueDummy(tx):
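    # Replace the NULLDUMMY element (the empty initial push) in the scriptSig
    # with OP_1 (0x51) so the transaction violates the NULLDUMMY rule.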
scriptSig = CScript(tx.vin[0].scriptSig)
newscript = []
for i in scriptSig:
if (len(newscript) == 0):
assert(len(i) == 0)
newscript.append(b'\x51')
else:
newscript.append(i)
tx.vin[0].scriptSig = CScript(newscript)
tx.rehash()
class NULLDUMMYTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.setup_clean_chain = True
        # This script tests NULLDUMMY activation, which is part of the 'segwit' deployment, so we go through normal segwit activation here.
# Must set the blockversion for this test
self.extra_args = [['-whitelist=127.0.0.1']]
def run_test(self):
self.address = self.nodes[0].getnewaddress()
self.ms_address = self.nodes[0].addmultisigaddress(1,[self.address])
network_thread_start()
self.coinbase_blocks = self.nodes[0].generate(2) # Block 2
coinbase_txid = []
for i in self.coinbase_blocks:
coinbase_txid.append(self.nodes[0].getblock(i)['tx'][0])
self.nodes[0].generate(427) # Block 429
self.lastblockhash = self.nodes[0].getbestblockhash()
self.tip = int("0x" + self.lastblockhash, 0)
self.lastblockheight = 429
self.lastblocktime = int(time.time()) + 429
self.log.info("Test 1: NULLDUMMY compliant base transactions should be accepted to mempool and mined before activation [430]")
test1txs = [self.create_transaction(self.nodes[0], coinbase_txid[0], self.ms_address, 49)]
txid1 = self.nodes[0].sendrawtransaction(bytes_to_hex_str(test1txs[0].serialize_without_witness()), True)
test1txs.append(self.create_transaction(self.nodes[0], txid1, self.ms_address, 48))
txid2 = self.nodes[0].sendrawtransaction(bytes_to_hex_str(test1txs[1].serialize_without_witness()), True)
self.block_submit(self.nodes[0], test1txs, True)
self.log.info("Test 2: Non-NULLDUMMY base multisig transaction should not be accepted to mempool before activation")
test2tx = self.create_transaction(self.nodes[0], txid2, self.ms_address, 48)
trueDummy(test2tx)
txid4 = self.tx_submit(self.nodes[0], test2tx, NULLDUMMY_ERROR)
self.log.info("Test 3: Non-NULLDUMMY base transactions should be accepted in a block before activation [431]")
self.block_submit(self.nodes[0], [test2tx], True)
self.log.info("Test 4: Non-NULLDUMMY base multisig transaction is invalid after activation")
test4tx = self.create_transaction(self.nodes[0], txid4, self.address, 47)
test6txs=[CTransaction(test4tx)]
trueDummy(test4tx)
self.tx_submit(self.nodes[0], test4tx, NULLDUMMY_ERROR)
self.block_submit(self.nodes[0], [test4tx])
self.log.info("Test 6: NULLDUMMY compliant transactions should be accepted to mempool and in block after activation [432]")
for i in test6txs:
self.nodes[0].sendrawtransaction(bytes_to_hex_str(i.serialize_without_witness()), True)
self.block_submit(self.nodes[0], test6txs, True)
def create_transaction(self, node, txid, to_address, amount):
inputs = [{ "txid" : txid, "vout" : 0}]
outputs = { to_address : amount }
rawtx = node.createrawtransaction(inputs, outputs)
signresult = node.signrawtransaction(rawtx)
tx = CTransaction()
f = BytesIO(hex_str_to_bytes(signresult['hex']))
tx.deserialize(f)
return tx
def tx_submit(self, node, tx, msg = ""):
tx.rehash()
try:
node.sendrawtransaction(bytes_to_hex_str(tx.serialize()), True)
except JSONRPCException as exp:
assert_equal(exp.error["message"], msg)
else:
assert_equal('', msg)
return tx.hash
def block_submit(self, node, txs, accept = False):
block = create_block(self.tip, create_coinbase(self.lastblockheight + 1), self.lastblocktime + 1)
block.nVersion = 4
for tx in txs:
tx.rehash()
block.vtx.append(tx)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
node.submitblock(bytes_to_hex_str(block.serialize()))
if (accept):
assert_equal(node.getbestblockhash(), block.hash)
self.tip = block.sha256
self.lastblockhash = block.hash
self.lastblocktime += 1
self.lastblockheight += 1
else:
assert_equal(node.getbestblockhash(), self.lastblockhash)
if __name__ == '__main__':
NULLDUMMYTest().main()
|
[
"root@0912356.localdomain"
] |
root@0912356.localdomain
|
9471c05cb11258d5263d62d08f2d9cde21643fa2
|
327540dcd6a4596a8bd47b4b92fd98b8d8b2f67d
|
/FinMate-master/Finmate/urls.py
|
0241df14c2a7191e827f1ed28e6d189e4e8e5272
|
[] |
no_license
|
el-Catedratic/finmate
|
9af52af5d338dd50997d575f456e4c99ad431889
|
61491f063588389e8fe8d43124f10043bc53e01c
|
refs/heads/master
| 2023-06-16T18:18:31.785150
| 2021-07-17T14:44:40
| 2021-07-17T14:44:40
| 377,436,540
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,146
|
py
|
"""Finmate URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import include, path
from django.conf.urls.static import static
from django.conf import settings
import management.views
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('management.urls')),
path('save/', management.views.save_data, name='save'),
path('delete/', management.views.delete_data, name='delete'),
path('edit/', management.views.edit_data, name='edit'),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
|
[
"akhil9167782208@gmail.com"
] |
akhil9167782208@gmail.com
|
8213c42ce759c7be9b20326180bc58947dc6ed45
|
5635674f48e13d0de604ec4d141f24a5aff39472
|
/FindCharacterFace/google.py
|
1ac1235c3e54f07157bf86ed26f8848ee747076a
|
[] |
no_license
|
wisteria204/FullstackPrac
|
957e3c2a4d7d6751735ef84e38a6f6a5ad2d430c
|
83b97a1f552c1bca3d1bc5062c02a0a63fa35108
|
refs/heads/main
| 2023-03-22T11:18:09.512672
| 2021-03-24T21:55:34
| 2021-03-24T21:55:34
| 344,835,718
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,508
|
py
|
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import urllib.request
import os
name = "아야세 에리"
if not os.path.isdir('./{}'.format(name)):
os.mkdir('./{}'.format(name))
driver = webdriver.Chrome()
driver.get("https://www.google.co.kr/imghp?hl=ko&ogbl")
elem = driver.find_element_by_name("q")
elem.send_keys(name)
elem.send_keys(Keys.RETURN)
SCROLL_PAUSE_TIME = 1
# Get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
# Scroll down to bottom
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
# Calculate new scroll height and compare with last scroll height
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
try:
driver.find_element_by_css_selector(".mye4qd").click()
except:
break
last_height = new_height
images = driver.find_elements_by_css_selector(".rg_i.Q4LuWd")
count = 1
for image in images:
try:
image.click()
time.sleep(2)
imgUrl = driver.find_element_by_xpath('/html/body/div[2]/c-wiz/div[3]/div[2]/div[3]/div/div/div[3]/div[2]/c-wiz/div/div[1]/div[1]/div/div[2]/a/img').get_attribute("src")
urllib.request.urlretrieve(imgUrl, './{}/{}{}{}{}'.format(name, name, " ", count, ".jpg"))
count = count + 1
except:
pass
driver.close()
|
[
"wisteria204@naver.com"
] |
wisteria204@naver.com
|
f012aba5c3e5bcf074da3520052561b9b51960f6
|
7d3d0d89712228179366e9b7cf1d0dcc9b15f57c
|
/app_juego/migrations/0004_auto_20210828_0221.py
|
db5d53740fa07663a125218f1dd8eb7cb90d2c37
|
[] |
no_license
|
informatorio2021com06/proyectoInfo
|
1cf95552b15035f9cd289826187fd611b35dbb07
|
f8716ecc9096e05b0153b11545da86c06a2408cd
|
refs/heads/master
| 2023-07-14T06:36:16.667857
| 2021-09-03T01:01:02
| 2021-09-03T01:01:02
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 769
|
py
|
# Generated by Django 2.2.5 on 2021-08-28 05:21
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('app_juego', '0003_auto_20210828_0139'),
]
operations = [
migrations.AlterField(
model_name='preguntasrespondidas',
name='respuesta',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='app_juego.ElegirRespuesta'),
),
migrations.AlterField(
model_name='preguntasrespondidas',
name='triviaUser',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='intentos', to='app_juego.TriviaUsuario'),
),
]
|
[
"danulucas44@gmail.com"
] |
danulucas44@gmail.com
|
ee1c33af20b9cbad0b3a102d83eb2198eccecbc9
|
c8a306fa252aa54e8399e41fa420d776ea3eb8cc
|
/webStands/settings.py
|
a1e9e3dad3626aeee59bae168a46a3c0598a1f7a
|
[] |
no_license
|
bytsur01/webStands
|
755990de7e76baa0a041360293d656206337afdc
|
3fb2f3dc61d2e6836e6e1cd6772032f084c3974f
|
refs/heads/master
| 2021-02-28T00:21:48.464768
| 2020-02-28T10:11:46
| 2020-02-28T10:11:46
| 245,647,468
| 0
| 0
| null | 2020-03-07T14:29:20
| 2020-03-07T14:29:19
| null |
UTF-8
|
Python
| false
| false
| 3,201
|
py
|
"""
Django settings for webStands project.
Generated by 'django-admin startproject' using Django 3.0.3.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '=4#50-!#v3_*kf=qcw=u=zub0&khx-_v#$q&p^8%h!blu^^fe8'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'stands',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'webStands.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, "templates")],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'webStands.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'stands/static')
|
[
"54931596+ACphil@users.noreply.github.com"
] |
54931596+ACphil@users.noreply.github.com
|
b2f62ea0473aa5b1ff000592d8e716dd96c54c27
|
1c0158145cbc7afa9b969b739f7e3507b73276a4
|
/packages/OpenCV/nodes/OpenCV___ImageBlend0/OpenCV___ImageBlend0___METACODE.py
|
40b58fc20bd8689187a4f191daf90c799716de40
|
[
"MIT"
] |
permissive
|
farukdemirbas/pyScript
|
cc3726d0de730234d4f36ba535532b9306e3c971
|
89139615f95c86178cfdb072945942de3be405b7
|
refs/heads/master
| 2023-08-03T12:26:58.328450
| 2020-06-04T09:26:04
| 2020-06-04T09:26:04
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,353
|
py
|
from custom_src.NodeInstance import NodeInstance
from custom_src.Node import Node
import cv2
# USEFUL
# self.input(index) <- access to input data
# self.outputs[index].set_val(val) <- set output data port value
# self.main_widget <- access to main widget
class %NODE_TITLE%_NodeInstance(NodeInstance):
def __init__(self, parent_node: Node, flow, configuration=None):
super(%NODE_TITLE%_NodeInstance, self).__init__(parent_node, flow, configuration)
# self.special_actions['action name'] = self.actionmethod ...
self.img_unblend1 = None
self.img_unblend2 = None
self.img_blend= None
self.initialized()
    def update_event(self, input_called=-1):
        self.img_unblend1 = self.input(0)
        # the blend weight must stay a float in [0, 1]; casting it to int
        # would force it to 0 or 1 and defeat the weighted blend
        alpha = float(self.input(1))
        self.img_unblend2 = self.input(2)
        beta = 1.0 - alpha
        self.img_blend = cv2.addWeighted(self.img_unblend1, alpha, self.img_unblend2, beta, 0.0)
        self.main_widget.show_image(self.img_blend)
        self.outputs[0].set_val(self.img_blend)
def get_data(self):
data = {}
# ...
return data
def set_data(self, data):
pass
# ...
# optional - important for threading - stop everything here
def removing(self):
pass
|
[
"miskimit@gmail.com"
] |
miskimit@gmail.com
|
c1a5ecf3bfefbe8d4be1281c68120ce3451f3ff2
|
1626e16760c9c5b5dc9bd7c345871c716d5ffd99
|
/Problems/1000_1099/1054_Distante_Barcodes/Project_Python3/Distant_Barcodes.py
|
b1f63038f324314e1669142ea896010e68adf714
|
[] |
no_license
|
NobuyukiInoue/LeetCode
|
94ddb19e63cb8d0775cdc13f311fe90c87a1d718
|
3f0ffd519404165fd1a735441b212c801fd1ad1e
|
refs/heads/master
| 2023-09-01T07:38:50.939942
| 2023-08-23T09:51:17
| 2023-08-23T09:51:17
| 158,100,912
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,522
|
py
|
# coding: utf-8
import collections
import os
import sys
import time
class Solution:
    # def rearrangeBarcodes(self, barcodes: List[int]) -> List[int]:
    def rearrangeBarcodes(self, barcodes):
# 440ms
i, n = 0, len(barcodes)
res = [0] * n
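        # place the most frequent barcode first, filling even indices 0, 2, 4, ...;
        # wrap around to the odd indices once the end is reached so that no two
        # equal barcodes end up adjacent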
for k, v in collections.Counter(barcodes).most_common():
for _ in range(v):
res[i] = k
i += 2
if i >= n:
i = 1
return res
def main():
argv = sys.argv
argc = len(argv)
if argc < 2:
print("Usage: python {0} <testdata.txt>".format(argv[0]))
exit(0)
if not os.path.exists(argv[1]):
print("{0} not found...".format(argv[1]))
exit(0)
testDataFile = open(argv[1], "r")
lines = testDataFile.readlines()
for temp in lines:
temp = temp.strip()
if temp == "":
continue
print("args = {0}".format(temp))
loop_main(temp)
# print("Hit Return to continue...")
# input()
def loop_main(temp):
flds = temp.replace("[","").replace("]","").replace("\"","").replace(" ","").rstrip().split(",")
barcodes = [int(num) for num in flds]
print("barcodes = {0}".format(barcodes))
sl = Solution()
time0 = time.time()
    result = sl.rearrangeBarcodes(barcodes)
time1 = time.time()
print("result = {0}".format(result))
print("Execute time ... : {0:f}[s]\n".format(time1 - time0))
if __name__ == "__main__":
main()
|
[
"spring555@gmail.com"
] |
spring555@gmail.com
|
b0a676978bcc1843449b5466560c8edb938cd098
|
b08e84c92ce41147432ba65c403f63faf82f582e
|
/0-Fontes/DataMiningSamples-master/3-Classification/KNN.py
|
c9c8181711704b22ad9f6ff6868e15efb3b03a9f
|
[] |
no_license
|
iuri-ramon98/DataMining
|
3fb605ab91b6c84aa4fe2399389203630e229786
|
d7fa55efcf43f2969378d610ef15aeba9c80375b
|
refs/heads/main
| 2023-06-08T13:10:01.147012
| 2021-06-23T07:01:00
| 2021-06-23T07:01:00
| 351,201,554
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 5,239
|
py
|
# Initial imports
import itertools
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from collections import Counter
# Calculate distance between two points
def minkowski_distance(a, b, p=1):
# Store the number of dimensions
dim = len(a)
# Set initial distance to 0
distance = 0
# Calculate minkowski distance using parameter p
for d in range(dim):
distance += abs(a[d] - b[d])**p
distance = distance**(1/p)
return distance
def knn_predict(X_train, X_test, y_train, y_test, k, p):
# Make predictions on the test data
# Need output of 1 prediction per test data point
y_hat_test = []
for test_point in X_test:
distances = []
for train_point in X_train:
distance = minkowski_distance(test_point, train_point, p=p)
distances.append(distance)
# Store distances in a dataframe
df_dists = pd.DataFrame(data=distances, columns=['dist'],
index=y_train.index)
# Sort distances, and only consider the k closest points
df_nn = df_dists.sort_values(by=['dist'], axis=0)[:k]
# Create counter object to track the labels of k closest neighbors
counter = Counter(y_train[df_nn.index])
# Get most common label of all the nearest neighbors
prediction = counter.most_common()[0][0]
# Append prediction to output list
y_hat_test.append(prediction)
return y_hat_test
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.figure()
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def main():
# Load iris data and store in dataframe
iris = datasets.load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['target'] = iris.target
df.head()
# Separate X and y data
X = df.drop('target', axis=1)
y = df.target
print("Total samples: {}".format(X.shape[0]))
# Split the data - 75% train, 25% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
print("Total train samples: {}".format(X_train.shape[0]))
print("Total test samples: {}".format(X_test.shape[0]))
# Scale the X data using Z-score
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# STEP 1 - TESTS USING knn classifier write from scratch
# Make predictions on test dataset using knn classifier
y_hat_test = knn_predict(X_train, X_test, y_train, y_test, k=5, p=2)
# Get test accuracy score
accuracy = accuracy_score(y_test, y_hat_test)*100
f1 = f1_score(y_test, y_hat_test, average='macro')
print("Acurracy K-NN from scratch: {:.2f}%".format(accuracy))
print("F1 Score K-NN from scratch: {:.2f}%".format(f1))
# Get test confusion matrix
cm = confusion_matrix(y_test, y_hat_test)
plot_confusion_matrix(cm, iris.target_names, False, "Confusion Matrix - K-NN")
plot_confusion_matrix(cm, iris.target_names, True, "Confusion Matrix - K-NN normalized")
# STEP 2 - TESTS USING knn classifier from sk-learn
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_hat_test = knn.predict(X_test)
# Get test accuracy score
accuracy = accuracy_score(y_test, y_hat_test)*100
f1 = f1_score(y_test, y_hat_test,average='macro')
print("Acurracy K-NN from sk-learn: {:.2f}%".format(accuracy))
print("F1 Score K-NN from sk-learn: {:.2f}%".format(f1))
# Get test confusion matrix
cm = confusion_matrix(y_test, y_hat_test)
plot_confusion_matrix(cm, iris.target_names, False, "Confusion Matrix - K-NN sklearn")
plot_confusion_matrix(cm, iris.target_names, True, "Confusion Matrix - K-NN sklearn normalized" )
plt.show()
if __name__ == "__main__":
main()
|
[
"noreply@github.com"
] |
iuri-ramon98.noreply@github.com
|
75f04de7f68c82753200c5d88844b72f84790ef1
|
77fdfa980f6d923d8fccb7eefdcadadad6f7cdcc
|
/main/views.py
|
5afb7b921a77976a693494ab466616ad3a201a63
|
[] |
no_license
|
joegotflow83/tdd_blog
|
fe72657a361a6203bcebc1ff64a831c3c307e871
|
254d44de3037bfaeee4495c6a1620afbfe87c7fb
|
refs/heads/master
| 2021-01-18T18:42:11.594528
| 2016-07-31T00:20:44
| 2016-07-31T00:20:44
| 63,269,446
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 910
|
py
|
from django.views.generic import TemplateView, ListView, DetailView
from django.views.generic.edit import CreateView
from django.core.urlresolvers import reverse
from .models import Post
class Home(TemplateView):
"""Home page when users login"""
template_name = 'main/home.html'
class CreatePost(CreateView):
"""Users can create posts"""
model = Post
fields = ('title', 'body')
def form_valid(self, form):
new_post = form.save(commit=False)
new_post.user = self.request.user
new_post.save()
return super().form_valid(form)
def get_success_url(self):
return reverse('home')
class ListPosts(ListView):
"""List posts created by all users"""
model = Post
class SinglePost(DetailView):
"""Users can view a single post"""
model = Post
def get_queryset(self):
return Post.objects.filter(pk=self.kwargs['pk'])
|
[
"joe@absolutod.com"
] |
joe@absolutod.com
|
cabe79e959383cbe05e5fad5155996245729f80f
|
fd62676eec4e84fd450ffac3024a68a07bc45645
|
/Ejercicios-01/Datos-_compuestos-05.py
|
237e4af07269dbe91896ad20dc92136006b9d5c4
|
[] |
no_license
|
FidelNava2002/Actividades-de-python
|
086040aa2b1c4a136f4cdb9721d2198421c1da4c
|
d4cec969fc3eb7398490470ad6547f17a69dfa5f
|
refs/heads/main
| 2023-03-09T15:24:28.681531
| 2021-02-28T21:37:47
| 2021-02-28T21:37:47
| 340,786,879
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 573
|
py
|
"""5.- Proponer una representación con tuplas para las cartas de la baraja francesa.
Escribir una función poker que recibe cinco cartas de la baraja francesa e informe
(devuelva el valor lógico correspondiente) si esas cartas forman o no un poker (es
decir que hay 4 cartas con el mismo número)."""
import sys
cartas=('A', 'A', 'A', 'A', 'K')
cont=0;
for i in range(5):
if cont==4:
print("Se formo el ¡POKER!")
sys.exit()
else:
cont=0
for a in range(5):
if cartas[i]==cartas[a]:
cont+=1
print("No se formo el poker")
|
[
"fidelnava2002@gmail.com"
] |
fidelnava2002@gmail.com
|
4c51e20ce07d04dc6653490b0e053574ca63e9f0
|
e3a86afd44cd9034fc1cca716850b1a90c86a860
|
/FeatureCode.py
|
bed1209a1014b25a9a87ad11c59541af033d13bb
|
[] |
no_license
|
DumasDED/push-pin
|
9aa940f1e72a09e226f976837d8f8c63e520e1bb
|
e39851ae4f1ec5b6b0cc2aa11a7c15f343566ed4
|
refs/heads/master
| 2021-06-23T10:54:46.791036
| 2017-09-01T11:59:30
| 2017-09-01T11:59:30
| 100,262,204
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 237
|
py
|
from enum import Enum
class FeatureCode(Enum):
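    # These appear to correspond to GeoNames feature codes: ADM* are
    # administrative divisions, PPL* are populated places (PPLC = capital,
    # PPLA* = seats of administrative divisions).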
ADM1 = 1
ADM2 = 2
ADM3 = 3
ADMD = 4
PPLC = 5
PPLG = 6
PPLA = 7
PPLA2 = 8
PPLA3 = 9
PPLA4 = 10
PPL = 11
PPLX = 12
PPLS = 13
PPLL = 14
|
[
"DDumas@enstoa.com"
] |
DDumas@enstoa.com
|
b185bb8e94f7fe10a35b1fa4729d3fc7a3d7cd50
|
46a624e335783f9035f14e8d5a1f7c1d76fdf69a
|
/python/tempest/shaders/__init__.py
|
df75956e46449e7d09d8629cf903ddb43445b890
|
[] |
no_license
|
tymonpitts/game_test
|
cc04fccef2e54112de578e37db92870e3fd4c2d3
|
b40ef729353205a9751b1f975f8a487e24286bf9
|
refs/heads/master
| 2021-01-18T18:30:36.789779
| 2020-05-31T04:19:57
| 2020-05-31T04:19:57
| 22,335,441
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 4,910
|
py
|
import os
import glob
import re
import game_core
from OpenGL import GL
from OpenGL.GL.shaders import compileShader
def init():
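    """Compile every *.vert.glsl / *.frag.glsl file next to this module and
    link them into named game_core.ShaderProgram objects, returned as a dict."""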
shaders_dir = os.path.abspath( os.path.dirname(__file__) )
frag_shaders = {}
for frag_shader_path in glob.iglob('%s/*.frag.glsl' % shaders_dir):
name = re.sub( r'\.frag\.glsl$', '', os.path.basename(frag_shader_path) )
with open(frag_shader_path, 'r') as handle:
contents = handle.read()
frag_shaders[name] = compileShader(contents, GL.GL_FRAGMENT_SHADER)
vert_shaders = {}
for vert_shader_path in glob.iglob('%s/*.vert.glsl' % shaders_dir):
name = re.sub( r'\.vert\.glsl$', '', os.path.basename(vert_shader_path) )
with open(vert_shader_path, 'r') as handle:
contents = handle.read()
vert_shaders[name] = compileShader(contents, GL.GL_VERTEX_SHADER)
shaders = {}
shaders['skin'] = game_core.ShaderProgram(vert_shaders['skin'], frag_shaders['frag'])
shaders['skin'].store_uniform_location('modelToWorldMatrix')
shaders['skin'].store_uniform_location('worldToCameraMatrix')
shaders['skin'].store_uniform_location('cameraToClipMatrix')
shaders['skin'].store_uniform_location('lightIntensity')
shaders['skin'].store_uniform_location('ambientIntensity')
shaders['skin'].store_uniform_location('diffuseColor')
shaders['skin'].store_uniform_location('dirToLight')
with shaders['skin'] as shader:
GL.glUniform4f(shader.uniforms['lightIntensity'], 0.8, 0.8, 0.8, 1.0)
GL.glUniform4f(shader.uniforms['ambientIntensity'], 0.2, 0.2, 0.2, 1.0)
# TODO: figure out a better range than just 8
# colors = [
# (1.0, 0.0, 0.0), # red
# (1.0, 0.5, 0.0), # orange
# (1.0, 1.0, 0.0), # yellow
# (0.0, 1.0, 0.0), # green
# (0.0, 1.0, 1.0), # cyan
# (0.0, 0.0, 1.0), # blue
# (0.5, 0.0, 1.0), # purple
# (1.0, 0.0, 1.0), # pink
# ]
colors = [
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
(0.5, 0.75, 0.5),
]
for i in range(8):
name = 'lod_test_{}'.format(i)
shaders[name] = game_core.ShaderProgram(vert_shaders['lod_test'], frag_shaders['frag'])
shaders[name].store_uniform_location('fineDistance')
shaders[name].store_uniform_location('coarseDistance')
shaders[name].store_uniform_location('cameraWorldPosition')
shaders[name].store_uniform_location('modelToWorldMatrix')
shaders[name].store_uniform_location('worldToCameraMatrix')
shaders[name].store_uniform_location('cameraToClipMatrix')
shaders[name].store_uniform_location('lightIntensity')
shaders[name].store_uniform_location('ambientIntensity')
shaders[name].store_uniform_location('diffuseColor')
shaders[name].store_uniform_location('dirToLight')
with shaders[name] as shader:
GL.glUniform4f(shader.uniforms['lightIntensity'], 0.8, 0.8, 0.8, 1.0)
GL.glUniform4f(shader.uniforms['ambientIntensity'], 0.2, 0.2, 0.2, 1.0)
color = colors[i]
GL.glUniform4f(shader.uniforms['diffuseColor'], color[0], color[1], color[2], 1.0)
# # FOR DEBUGGING
# shaders[name].store_uniform_location('coarsness')
# with shaders[name] as shader:
# GL.glUniform1f(shader.uniforms['coarsness'], 1.0)
shaders['ndc'] = game_core.ShaderProgram(vert_shaders['ndc'], frag_shaders['ndc'])
shaders['simple'] = game_core.ShaderProgram(vert_shaders['simple'], frag_shaders['simple'])
shaders['simple'].store_uniform_location('modelToWorldMatrix')
shaders['simple'].store_uniform_location('worldToCameraMatrix')
shaders['simple'].store_uniform_location('cameraToClipMatrix')
shaders['point'] = game_core.ShaderProgram(vert_shaders['point'], frag_shaders['frag'])
shaders['point'].store_uniform_location('modelToWorldMatrix')
shaders['point'].store_uniform_location('worldToCameraMatrix')
shaders['point'].store_uniform_location('cameraToClipMatrix')
shaders['point'].store_uniform_location('color')
shaders['constant'] = game_core.ShaderProgram(vert_shaders['constant'], frag_shaders['frag'])
shaders['constant'].store_uniform_location('modelToWorldMatrix')
shaders['constant'].store_uniform_location('worldToCameraMatrix')
shaders['constant'].store_uniform_location('cameraToClipMatrix')
shaders['constant'].store_uniform_location('color')
shaders['heightmap'] = game_core.ShaderProgram(vert_shaders['heightmap'], frag_shaders['heightmap'])
shaders['heightmap'].store_uniform_location('textureSampler')
    for shader in list(frag_shaders.values()) + list(vert_shaders.values()):
GL.glDeleteShader(shader)
return shaders
|
[
"tpitts@wetafx.co.nz"
] |
tpitts@wetafx.co.nz
|
6e5fdd9b62863d4f27ad3514f5747fbd039a0174
|
bb5f99ca25c3ba11c01227367dd4eb0995a237bb
|
/csc 121/Chapter 9/Nicolas_DeJohn_9-5_Lab.py
|
fdeb3ff49d66df45b9e6d0c53e7dc94d66217a0f
|
[] |
no_license
|
nicolasdejohn/nicolasdejohn.github.io
|
3572097b02093c2466e1b282390cdac436aa24f8
|
83141986ec27173bf9686a9368751783fffbd848
|
refs/heads/master
| 2023-04-22T07:46:11.642323
| 2021-05-10T04:20:04
| 2021-05-10T04:20:04
| 330,054,501
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 991
|
py
|
# Nicolas DeJohn | Chapter 9-5 Lab | March 26 2021
'''
The purpose of this program to read a file and store the number of occurances for each word in a dictionary.
'''
import collections
import os
os.chdir(r"C:\Users\Nick\Desktop\CPCC\.github.io\csc 121\Chapter 9")
# Defines main function
def main():
# Opens and reads the file
file = open("text95.txt", "r")
readFile = file.read()
file.close()
dictionary = {} # Initializes an empty dictionary
unwantedChars = ".,_-()!?'" # Establishes characters that will be removed
words = readFile.split() # Creates an array of every word without spaces
# Loop through the words array
for i in words:
words = i.strip(unwantedChars) # Remove unwanted characters
if words not in dictionary: # If the key doesn't exist..
dictionary[words] = 0
dictionary[words] += 1 # Add 1 to the key's value
print(dictionary) # Display information
# Call the main function
main()
|
[
"49886199+nickdejohn@users.noreply.github.com"
] |
49886199+nickdejohn@users.noreply.github.com
|
d2e002d463f27ec19af829888c1a6b1d36e0ac77
|
9bffb40d694079257741d703b2a311082efcc0c2
|
/goha/migrations/0001_initial.py
|
463dda73b74a366d3b82e0a94c0665fb5060dda4
|
[] |
no_license
|
ubracklow/GoldenHands
|
20d858dbf2233dc39581bf155b7c2bdac2ad45b0
|
aa717a01d7f8a156c2c9e8ecca16d853be01a8e3
|
refs/heads/master
| 2021-07-03T17:55:11.330059
| 2020-04-22T19:34:49
| 2020-04-22T19:34:49
| 44,267,003
| 1
| 0
| null | 2021-06-10T17:53:04
| 2015-10-14T18:10:58
|
Python
|
UTF-8
|
Python
| false
| false
| 1,673
|
py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(auto_created=True, verbose_name='ID', primary_key=True, serialize=False)),
('date_time', models.DateTimeField()),
('location', models.CharField(max_length=250)),
('number_of_guests', models.IntegerField()),
],
),
migrations.CreateModel(
name='Guest',
fields=[
('id', models.AutoField(auto_created=True, verbose_name='ID', primary_key=True, serialize=False)),
('guest_name', models.CharField(max_length=50)),
('guest_email', models.EmailField(max_length=254)),
('guest_task', models.CharField(blank=True, max_length=50)),
('related_event', models.ForeignKey(to='goha.Event')),
],
),
migrations.CreateModel(
name='Host',
fields=[
('id', models.AutoField(auto_created=True, verbose_name='ID', primary_key=True, serialize=False)),
('host_name', models.CharField(max_length=50)),
('host_email', models.EmailField(max_length=254)),
('host_choice', models.CharField(choices=[('SA', 'Salty'), ('SW', 'Sweet'), ('DR', 'Drink'), ('IDC', 'I dont care')], max_length=3)),
('related_event', models.ForeignKey(to='goha.Event')),
],
),
]
|
[
"ubracklow@hotmail.com"
] |
ubracklow@hotmail.com
|
eab6cd61c54c2dce994ad5e89fb750b5b3d5ee81
|
a5d004f5c484ecbf5fd6d41d1a9186f2af78f20e
|
/run.py
|
5321d64b456f5ecfd4b99ac87a0bde345442b951
|
[] |
no_license
|
Easthy/garden
|
91778831d0611dc9dbd23dceb9eaaea391b16936
|
f257585232920a32f6b73d32e810d23654ff3fbb
|
refs/heads/master
| 2020-08-01T00:06:35.877177
| 2019-09-25T09:00:31
| 2019-09-25T09:00:31
| 210,795,476
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 694
|
py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Author: Ovchinnikov Anatoly Vladimirovich
# Email: east@thyloved.ru
# Version: 1.0-2017
import os
import sys
# Add the modules folder to the import path
sys.path.append(os.path.dirname(__file__) + 'modules')
from Garden import *
from PyQt5.QtWidgets import QApplication
from UiForm import * # imports the form definition module
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
def main():
g = Garden()
app = QApplication(sys.argv)
f = UiForm(g)
f.web.show()
f.draw('index')
f.center()
g.start()
return app.exec_()
if __name__ == '__main__':
sys.exit(main())
|
[
"east@thyloved.ru"
] |
east@thyloved.ru
|
01fc34ca4c86e39a1056f5e2da4c65e63a439c47
|
e48f694259b153457c14456a18a14431e93d3b41
|
/PY_TEST/Basic/writeHTML.py
|
8b48ec1dda29c220aa74cc7981491a4c6b9374b0
|
[] |
no_license
|
desaidipen/Python
|
5e216e2b3ff017d7e09e1810918de667ec8ec394
|
c6bcce3f9b70e50d9512a4110810491f5a230c02
|
refs/heads/master
| 2023-08-21T17:20:57.709556
| 2023-08-07T15:55:57
| 2023-08-07T15:55:57
| 135,466,190
| 0
| 1
| null | 2020-04-28T21:07:31
| 2018-05-30T15:55:48
|
Java
|
UTF-8
|
Python
| false
| false
| 337
|
py
|
import re
x = "Dipen Desai"
y = f'''
<html>
<head>
<title>Look at this</title>
</head>
<body>
<h1>{x}</h1>
<a href='http://www.google.com'>CLICK</a>
</body>
</html>
'''
with open("C:/Users/RRDD/Desktop/myhtml.html", "w+") as my_html_file:
my_html_file.write(y)
print ('HTML File Created')
|
[
"RRDD@RRDD-PC"
] |
RRDD@RRDD-PC
|
9f01a38f40869bcff88b1d1b134ed2654537deb6
|
61ba9ec78e004cbf7ad38dbc047b7d9b99a013cb
|
/src/GymNow_site/pages/migrations/0034_remove_bookingitem_complete.py
|
834f9dfef3858e9f642a5ec10c6a005d65799ae3
|
[] |
no_license
|
lackeya2/GymNow-Final-Year-Project
|
eb286d5b75238057cc1443e05f0c569fc6b10846
|
89cabd3cb44b78dd5e103c7c34f940a222a4d9aa
|
refs/heads/master
| 2023-06-05T09:31:09.094600
| 2021-05-24T15:16:37
| 2021-05-24T15:16:37
| 378,228,110
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 335
|
py
|
# Generated by Django 2.2.7 on 2021-04-27 21:43
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('pages', '0033_bookingitem_complete'),
]
operations = [
migrations.RemoveField(
model_name='bookingitem',
name='complete',
),
]
|
[
"adrian.lackey2@mail.dcu.ie"
] |
adrian.lackey2@mail.dcu.ie
|
272ffbc73e9c7ae7398326658c0e363535799673
|
34ec7982e2e5f676f86caa8bfedccb8617b9b7cf
|
/Eric/Day5/day5-2.py
|
243aa4719cab58ebb1f7c708e6158ce80b0435be
|
[] |
no_license
|
ntceleste/AdventOfCode2020
|
e1420523303b3cafb53cb14014bd2e8261091c19
|
139a9ba53b77700ab9385fb92a75ac43d1327f30
|
refs/heads/main
| 2023-01-31T19:54:14.107288
| 2020-12-20T06:15:56
| 2020-12-20T06:15:56
| 317,680,504
| 1
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,197
|
py
|
input = open('input.txt').read().split("\n")
# BFFFBBFRRR: row 70, column 7, seat ID 567.
# FFFBBBFRRR: row 14, column 7, seat ID 119.
# BBFFBBFRLL: row 102, column 4, seat ID 820.
# input = ['BFFFBBFRRR', 'FFFBBBFRRR', 'BBFFBBFRLL']
# print(input)
ids = []
maxid = 0
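# decode each boarding pass: the first 7 characters give the row (B = set bit,
# F = clear) and the last 3 give the column (R = set, L = clear); seat ID = row * 8 + column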
for bsp in input:
bspa = list(bsp)
# print(bspa)
row = 0
seat = 0
if len(bspa) == 10:
if bspa[0] == 'B':
row += 64
if bspa[1] == 'B':
row += 32
if bspa[2] == 'B':
row += 16
if bspa[3] == 'B':
row += 8
if bspa[4] == 'B':
row += 4
if bspa[5] == 'B':
row += 2
if bspa[6] == 'B':
row += 1
if bspa[7] == 'R':
seat += 4
if bspa[8] == 'R':
seat += 2
if bspa[9] == 'R':
seat += 1
seatid = row * 8 + seat
ids.append(seatid)
if seatid > maxid:
maxid = seatid
# print(row, seat, seatid)
myseatid = 0
for seatid in range(maxid):
if seatid in ids:
continue
if seatid + 1 in ids:
if seatid - 1 in ids:
myseatid = seatid
break
print(myseatid)
|
[
"efc@clst.org"
] |
efc@clst.org
|
0ec07afd40abdc8267ae389ccbb542bcd1c4364e
|
5027bed7f149f977943ce0f8f6bffa4e5683eb98
|
/testing1.py
|
8f83a3c79a4a1d01cf8fb51ad7a024926928fae9
|
[] |
no_license
|
fbarneda/testing
|
3efe1ef6ba5010e840b0420d5916c1b360a7d9ce
|
273b94dae3cb2348a167f6f49e68a20a158c13fc
|
refs/heads/master
| 2020-07-09T03:24:39.633290
| 2019-11-04T23:27:56
| 2019-11-04T23:27:56
| 203,862,070
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,453
|
py
|
#!/usr/bin/env python
# coding: utf-8
# In[1]:
dic_key = {'key1':'value1','key2':'value2'}
# In[2]:
dic_key
# In[3]:
dic_key['key2']
# In[4]:
prices_lookup = {'apples':2.99,'oranges':1.99,'milk':5.80}
# In[5]:
prices_lookup['apples']
# In[7]:
d = {'k1':123,'k2':[0,1,2],'k3':{'insideKey':100}}
# In[8]:
d['k3']['insideKey']
# In[10]:
d['k2'][-1]
# In[11]:
d = {'key1':['a','b','c']}
# In[13]:
d['key1'][-1] = d['key1'][-1].upper()
# In[14]:
d
# In[15]:
d = {'k1':100,'k2':200}
# In[16]:
d
# In[17]:
d['k3'] = 300
# In[18]:
d
# In[19]:
d['k1'] = 'NEW VALUE'
# In[20]:
d
# In[21]:
d = {'k1': 100, 'k2': 200, 'k3': 300}
# In[22]:
d.keys()
# In[23]:
for key in d.keys():
print(key)
# In[24]:
d.values()
# In[25]:
d.items()
# In[ ]:
# In[26]:
# TUPLES
# In[27]:
t = (1,2,3)
# In[28]:
mylist = [1,2,3]
# In[29]:
type(t)
# In[30]:
type(mylist)
# In[31]:
t = ('one',2)
# In[32]:
t[0]
# In[33]:
t[-1]
# In[34]:
t = ('a','a','b')
# In[46]:
t.count('a') # COUNT number of elements in tuple
# In[38]:
t.index('a')
# In[39]:
t.index('b')
# In[40]:
t
# In[42]:
mylist[0] = "NEW"
# In[43]:
mylist
# In[44]:
t[0] = "NEW"
# In[ ]:
# In[45]:
# SETS
# In[47]:
s = set()
# In[48]:
type(s)
# In[49]:
s
# In[50]:
s.add(1)
# In[51]:
s
# In[52]:
s.add(2)
# In[53]:
s
# In[54]:
s.add(2)
# In[55]:
s
# In[56]:
mylist = [1,1,1,1,2,2,2,2,2,3,3]
# In[59]:
set(mylist) # WE CAST it to a SET and we have only the UNIQUE values, UNORDERED
# In[ ]:
# In[60]:
# BOOLS
# In[61]:
True
# In[62]:
False
# In[63]:
type(False)
# In[64]:
1 > 2
# In[65]:
1==1
# In[71]:
b = None
# In[ ]:
# In[72]:
# FILES
# In[73]:
get_ipython().run_cell_magic('writefile', 'myfile.txt', 'Hello this is a text file\nthis is the second line\nthis is the third line')
# In[74]:
myfile = open('myfile.txt')
# In[75]:
myfile = open('whoops.txt')
# In[76]:
pwd
# In[77]:
myfile = open('myfile.txt')
# In[78]:
myfile.read()
# In[79]:
myfile.read()
# In[82]:
myfile.seek(0)
# In[83]:
contents = myfile.read()
# In[84]:
contents
# In[90]:
myfile.seek(0)
# In[91]:
myfile.readlines() # grab all lines in a list, each element is a line
# In[94]:
myfile.close() # best practise to close the process
# In[96]:
myfile = open('myfile.txt') # old way of doing things
# In[97]:
# new way of doing things:
# In[98]:
with open('myfile.txt') as my_new_file:
contents = my_new_file.read()
# with that, because of the indentation, no need to close the file
# In[99]:
contents
# In[100]:
with open('myfile.txt',mode='r') as myfile:
contents = myfile.read()
# In[101]:
contents
# In[102]:
with open('myfile.txt',mode='w') as myfile:
contents = myfile.read()
# In[6]:
get_ipython().run_cell_magic('writefile', 'my_new_file.txt', 'ONE ON FIRST\nTWO ON SECOND\nTHREE ON THIRD')
# In[7]:
with open('my_new_file.txt') as f:
print(f.read())
# In[8]:
with open('my_new_file.txt',mode='a') as f:
f.write('FOUR ON FOURTH')
# In[4]:
# In[9]:
with open('my_new_file.txt') as f:
print(f.read())
# In[10]:
with open('qwertyuiop.txt',mode='w') as f:
f.write('I CREATED THIS FILE!')
# In[12]:
with open('qwertyuiop.txt',mode='r') as f:
print(f.read())
|
[
"fbarneda@gmail.com"
] |
fbarneda@gmail.com
|
28149a1fcfc7385618ac85811adcb0098abed088
|
7a28c3540cbaa2f583f60bae4721ec73a7dc7be9
|
/Project_useful/send_Email_Photo.py
|
00353d2f96380b93e3424bfcc73a3bd412318417
|
[] |
no_license
|
mafiadarm/Python-Practice
|
0d1b085f883d95bf39e0c396ccaac548b951c957
|
ba9a8dd9a62f60df7130d2240628b7969800c647
|
refs/heads/master
| 2021-01-24T03:37:26.458176
| 2019-04-18T13:31:38
| 2019-04-18T13:31:38
| 122,897,253
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,085
|
py
|
# -*- coding: utf-8 -*-
"""
==============================
Date: 02_07_2018 14:08
File Name: /GitHub/send_mail_html
Created From: PyCharm
Python version: 3.6.2
- - - - - - - - - - - - - - -
Description:
Send an email with an image, displaying the image inline as part of the HTML body.
==============================
"""
import logging
import smtplib
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.header import Header
from email.utils import parseaddr, formataddr
__author__ = 'Loffew'
logging.basicConfig(level=logging.DEBUG, format=" %(asctime)s - %(levelname)s - %(message)s") # [filename]
# logging.disable(logging.CRITICAL)
def pp_dbg(*args):
return logging.debug(*args)
def formatAddr(mail):
name, addr = parseaddr(mail)
return formataddr((Header(name, 'utf-8').encode(), addr))
smtp_server = "smtp.rosun.com.cn" # SMTP server address
from_mail = "hq-it@rosun.com.cn" # sender account
mail_pwd = "r0sun*953@143@" # login password
to_mail = ["32336434@qq.com", "zhongshuai@rosun.com.cn"] # recipient addresses
cc_mail = [] # CC, e.g. "gaowh@rosun.com.cn"
from_name = "集团流程IT部" # sender display name [can be changed]
subject = "标题" # subject line [can be changed]
body = '''
<h1>测试邮件</h1>
<h2 style='color:red'>This is a test</h1>
<img src="cid:image"/>
''' # HTML body; the image must be referenced inside the body (via cid), otherwise it is handled as an attachment
msg = MIMEMultipart() # build the message container
msg["From"] = formatAddr("{} <{}>".format(from_name, from_mail))
msg["To"] = ','.join(to_mail)
msg["Subject"] = "标题"
msg.attach(MIMEText(body, 'html', 'utf-8'))
image_path = "C:/Users/lo/Desktop/sss.png" # image to embed
with open(image_path, "rb") as rr:
msgImage = MIMEImage(rr.read())
    msgImage.add_header("Content-ID", "<image>") # set the Content-ID matching cid:image in the body
msg.attach(msgImage)
s = smtplib.SMTP(smtp_server)
s.login(from_mail, mail_pwd)
s.sendmail(from_mail, to_mail + cc_mail, msg.as_string())
s.quit()
|
[
"Loffew@users.noreply.github.com"
] |
Loffew@users.noreply.github.com
|
78bb2b9efb6fd01cdd50b4672b46b3cd3faf5f41
|
0f23b4b96cbe644991e6e191c56f0c7838c05bb4
|
/spider/baiduimage.py
|
5b7dfebcdfa1ec3f11d90455cb091cc7698166cf
|
[] |
no_license
|
hongbozheng/PythonScrape
|
8e60f057ad1ccca719e2a1db94a5b4d0c0eb498f
|
0a9a0f988c225420d5b02023df88b9fa32ef2ad0
|
refs/heads/master
| 2021-07-07T01:35:09.741681
| 2017-10-03T08:15:13
| 2017-10-03T08:15:13
| 105,628,450
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,876
|
py
|
# !/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import sys
sys.path.append("/usr/local/lib/python2.7/site-packages")
import requests
import os
from datetime import datetime as dt
from bs4 import BeautifulSoup
start_page = 124
end_page = 199
#2786
s_age = 0
e_age = 0
s_state = 0
folder = 'AsiandateGirlsVietam'
index = 13395
uniqueIndex = 1700
def count(start=0, step=1):
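    # yield successive page numbers starting at `start`, one per results page,
    # until (end_page - start_page + 1) pages have been produced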
n = start
i = 0
pages = end_page - start_page
while i <= pages:
yield n
n += step
i += 1
def getUrls():
url = "https://www.asiandate.com/Pages/Search/SearchResults.aspx?age_min=18&age_max=99&countryID=1000272&sortBy=4&pageNum={pn}"
urls = (url.format(pn=x) for x in count(start=start_page, step=1))
return urls
dirpath = '/Users/danny/Desktop/MultifacesForOnePerson/' + folder
if not os.path.isdir(dirpath):
os.mkdir(dirpath)
def postSearchform(session, state):
print "state: ", state
searchURL = "https://www.ourtime.com/v3/search/processlegacyasync"
searchpayload = {
}
headers = {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
}
# searchP = c.post(searchURL, data=searchpayload, headers=headers)
# print "searchP: ",searchP
def downloadImageByUrl(pic_url):
print 'Downloading imageNum: {0}, address:{1} to {2}, uniqueFaceNum: {3}'.format(
str(index), str(
pic_url), folder, str(
uniqueIndex))
try:
res = requests.get(pic_url, timeout=10)
if str(res.status_code)[0] == "4":
print("Fail download: ", pic_url)
except Exception as inst:
print("Fail download2: ", pic_url)
print(type(inst)) # the exception instance
print(inst.args) # arguments stored in .args
print(inst)
else:
filename = os.path.join(dirpath, str(index) + '-' + name + '-' + str(uniqueIndex) + ".jpg")
with open(filename, 'wb') as f:
f.write(res.content)
with requests.Session() as c:
LOGIN_URL = "https://www.asiandate.com/Pages/Security/Login.aspx?logout=1"
# html = c.get(LOGIN_URL)
# soup = BeautifulSoup(html.content, 'html.parser')
payload = {
"ctl00$ucContent$cntrlLogin$txtBoxLogin": "dannyzhengtest@gmail.com",
"ctl00$ucContent$cntrlLogin$txtBoxPassword": "zheng123456",
"ctl00$ucContent$cntrlLogin$btnLogin": "Login"
}
p = c.post(LOGIN_URL, data=payload)
print "loginpost: ",p
start_time = dt.now()
urls = getUrls()
for url in urls:
print '************************newpage***************************'
print 'url: ' + str(url)
page = c.get(url)
soupPage = BeautifulSoup(page.content, 'html.parser')
ladys = soupPage.findAll('a',{'class':'b'})
print len(ladys)
# print photolist
for lady in ladys:
href = lady['href']
print 'ladynameid: ' + href
strs = href.split('/')
nameid = strs[3][:-4]
name = nameid[:-8]
idstr = nameid[-7:]
profile_url = 'https://www.asiandate.com/pages/lady/profile/profilepreview.aspx?LadyID=' + idstr
print 'profile_url: ' + profile_url
profilep = c.get(profile_url)
profilesoup = BeautifulSoup(profilep.content,'html.parser')
photolist = profilesoup.select('.thumbnail')
for img in photolist:
pic_url = img['href']
if 'http' not in pic_url:
continue
downloadImageByUrl(pic_url)
index += 1
uniqueIndex +=1
print "Finish date :", dt.now(), "Images: ", index
print "time used :", dt.now() - start_time
|
[
"danny@rainbe.com"
] |
danny@rainbe.com
|
a7a49f8c72b412a0f3c972b895936a8a25e7021a
|
0bda1d033e321c00803325cf32010c069f393230
|
/Main.py
|
be0d961901d9d539ab4f43dcf9dc8fbaa1c95014
|
[] |
no_license
|
RayLyu-Mac/COVID-19-SImulation
|
5d18ea92b009ac26f948fbcabe1ee7c76d175857
|
e7a87ed281e2822cb87b31fc5b8c1debf198b92d
|
refs/heads/main
| 2023-08-02T17:09:50.979893
| 2021-09-06T18:53:13
| 2021-09-06T18:53:13
| 312,124,618
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 2,387
|
py
|
import pygame
import random
from os import path
#initialize
pygame.init()
screen=pygame.display.set_mode((800,600))
running=True
#Background
background=pygame.image.load('bg.jpg')
#title and icon
pygame.display.set_caption("Space Invader")
icon=pygame.image.load('spaceship.png')
pygame.display.set_icon(icon)
#player
playerImage=pygame.image.load('pla.png')
playerx=0
playery=500
playerxChange=5
def player(playerx,playery):
screen.blit(playerImage,(playerx,playery))
#enemy
enemyImage=pygame.image.load('ufo.png')
enemyx=random.randint(0,800)
enemyy=random.randint(50,150)
enemyxchange=3
enemyychange=-5
#bullet
bulletImage=pygame.image.load('bullet.png')
bulletY=500
bulletX=0
bulletYchange=20
bullet_state='ready'
def enemy(x,y):
screen.blit(enemyImage,(x,y))
def fire_bullet(x,y):
global bullet_state
bullet_state="fire"
screen.blit(bulletImage,(x+16,y+10))
#Game Loop
while running:
screen.fill((255,0,0))
screen.blit(background,(0,0))
player(playerx,playery)
enemy(enemyx,enemyy)
enemyx+=enemyxchange
playerx+=playerxChange
#checking the boundaries
if playerx<=0:
playerx=0
elif playerx>=740:
playerx=740
if enemyx<=0:
enemyx=0
enemyxchange*=-1
enemyy+=enemyychange
elif enemyx>=740:
enemyx=740
enemyxchange*=-1
enemyy-=enemyychange
#bullet movement
if bullet_state=="fire":
fire_bullet(bulletX,bulletY)
bulletY-=bulletYchange
if bulletY<=0:
bulletY=480
bullet_state='ready'
for event in pygame.event.get():
if event.type==pygame.QUIT:
running=False
#if keystroke is pressed check whether its right or left
if event.type==pygame.KEYDOWN:
if event.key==pygame.K_LEFT:
playerxChange=-5
if event.key==pygame.K_RIGHT:
playerxChange=5
if event.key==pygame.K_SPACE:
#after push the space, it will check if there is a bullet already
if bullet_state is "ready":
bulletX=playerx
fire_bullet(bulletX,bulletY)
if event.type==pygame.KEYUP:
if event.key==pygame.K_LEFT or event.key==pygame.K_RIGHT:
playerxChange=0
pygame.display.update()
#RGB
|
[
"59774755+RayLyu-Mac@users.noreply.github.com"
] |
59774755+RayLyu-Mac@users.noreply.github.com
|
e123e7fab6c6603d07d4503c50303cd33eda8a35
|
b4c4dc777fdfda297f52ae539e9123bcd4a31a39
|
/downloader.py
|
67edc1e7ef2111a02136613029764e1aae21a14c
|
[] |
no_license
|
syedfaisalrizvi0-zz/youtubedownloader
|
9e218166a1cac9f3ea0ac91f648a6f8ae8d645a3
|
042fc3f0c65a485d126d0535f8d87130c56a8c93
|
refs/heads/master
| 2022-08-14T13:30:30.456438
| 2019-03-14T18:51:50
| 2019-03-14T18:51:50
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 556
|
py
|
import requests as re
from bs4 import BeautifulSoup
import youtube_dl
print('Engine start....')
qry = str(input('Enter the Video name : '))
qry = qry.replace(' ','+')
data ='https://www.youtube.com/results?search_query='+qry
html = re.get(data)
soup = BeautifulSoup(html.text,'html.parser')
yt_links = soup.find_all("a", class_ = "yt-uix-tile-link")
yt_href = yt_links[0].get("href")
href ='https://www.youtube.com'+yt_href
ydl_opts = {'format': 'bestaudio/best','noplaylist' : True,}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download([href])
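# Illustrative run (values are made up): entering "lo fi beats" builds the search URL
# https://www.youtube.com/results?search_query=lo+fi+beats, the first result's href is
# appended to https://www.youtube.com, and youtube_dl fetches the best audio stream
# for that single video (noplaylist=True).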
|
[
"noreply@github.com"
] |
syedfaisalrizvi0-zz.noreply@github.com
|
b935da832f38d9d5b97eeb29131fc2a068757d69
|
509194b6a9e2ae124e819c3f358a93f49358c70c
|
/consistent/GetconsistantRegions.py
|
c43d2451b88e8377cbb0a1137f8ef5ff763b229e
|
[
"MIT"
] |
permissive
|
tk2/assembly-eval
|
b540a6af3f09a2edb85fdb64cac42b3c34387919
|
3200867bf2490871fe54f3b8e03de0b16fa9b41e
|
refs/heads/master
| 2021-01-10T02:35:58.885068
| 2015-07-22T11:19:57
| 2015-07-22T11:19:57
| 36,653,335
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 8,960
|
py
|
#!/homes/dthybert/software/Python-2.7.5/python
import pysam
import scipy.stats
import sys
import argparse
class GenomeSegment:
def __init__(self,size, chr,start):
self.chr=chr
self.start=start
self.windowsSmoothing=5
self.lstPos=[0]*size
self.lstNbRead=[0]*size
self.lstFraction=[0.0]*size
self.lstNormFraction=[0.0]*size
self.lstOtherInformation=[[]]*size
self.smoothedNbReads=[0.0]*size
self.smoothedFraction=[0.0]*size
def addPosition(self, position, index):
tabLine=position.split("\t")
self.lstPos[index]=int(tabLine[1])
self.lstNbRead[index]=int(tabLine[2])
self.lstFraction[index]=float(tabLine[3])
self.lstNormFraction[index]=float(tabLine[4])
self.lstOtherInformation[index]=tabLine[5:]
def _average(self,lst):
sum=0
for v in lst:
sum=v+sum
return float(sum)/float(len(lst))
def smooth(self,size):
i=0
size=len(self.lstPos)
while i < size:
smoothNBRead=0.0
smmothFraction=0.0
if i < 5:
smoothNBRead=self._average(self.lstNbRead[:i+self.windowsSmoothing])
smmothFraction=self._average(self.lstFraction[:i+self.windowsSmoothing])
elif i > size-5:
smoothNBRead=self._average(self.lstNbRead[i-self.windowsSmoothing:])
smmothFraction=self._average(self.lstFraction[i-self.windowsSmoothing:])
else:
smoothNBRead=self._average(self.lstNbRead[i-self.windowsSmoothing:i+self.windowsSmoothing])
smmothFraction=self._average(self.lstFraction[i-self.windowsSmoothing:i+self.windowsSmoothing])
self.smoothedNbReads[i]=smoothNBRead
self.smoothedFraction[i]=smmothFraction
i=i+1
def IdentifyGoodRegion(self, nbReadMini, FreqThreshold):
lstRegions=[]
start=self.start
end=self.start
i=0
while i < len(self.smoothedNbReads):
if self.smoothedNbReads[i] < nbReadMini and self.smoothedFraction[i] <FreqThreshold:
if start!=end:
lstRegions.append([self.chr, start,end])
start=self.start+i
end=self.start+i
else:
end=end+1
i=i+1
return lstRegions
def Z_score(val, mean,std):
return (float(val)-float(mean))/float(std)
def loadStatistics(strconfigFile):
statByFile={}
objFile=open(strconfigFile)
for line in objFile:
if line[0]=="#":
continue
tabLine=line.split()
file=tabLine[0]
mean=float(tabLine[1])
std=float(tabLine[2])
statByFile[file]=[mean,std]
return statByFile
def getString(dico, file,pos):
#print pos
lsttag=dico[file][pos]
stringTag="-"
for tag in lsttag:
if stringTag=="-":
stringTag=str(tag)
else:
stringTag=stringTag+","+str(tag)
return stringTag
def getLineToPrint(dico,index,pos,chr):
nbTotalOK=0
nbTotal=0
fractionOk=0.0
correctoedFractionOk=0.0
lstTotal=[]
lstFraction=[]
i=0
for sample in dico.keys():
lstTag=dico[sample][index]
nbTagOK=0
nbTagMQbad=0
for tag in lstTag:
if tag==1:
nbTagOK=nbTagOK+1
if tag==4:
nbTagMQbad=nbTagMQbad+1
lstTotal.append(nbTagOK)
sizeSample=len(lstTag)-nbTagMQbad
print sizeSample,len(lstTag)
if sizeSample==0:
fraction=0
else:
fraction=float(nbTagOK)/float(sizeSample)
lstFraction.append(fraction)
nbTotal=nbTotal+sizeSample
nbTotalOK=nbTotalOK+nbTagOK
for fr in lstFraction:
correctoedFractionOk=correctoedFractionOk+fr
correctoedFractionOk=correctoedFractionOk/float(len(lstFraction))
fractionOk=0.0
if nbTotal!=0:
fractionOk=float(nbTotalOK)/float(nbTotal)
string=chr+"\t"+str(pos)+"\t"+str(nbTotalOK)+"\t"+str(fractionOk)+"\t"+str(correctoedFractionOk)
i=0
for sample in dico.keys():
string=string+"\t"+str(lstTotal[i])+"\t"+str(lstFraction[i])
i=i+1
i=0
for sample in dico.keys():
string=string+"\t"+getString(dico,sample,index)
i=i+1
return string
def calculateFrequency(objreadcount, chr,start,end,outFile):
objFile=open(outFile,"a")
length=end-start+1
obgGenomeSegment=GenomeSegment(length,chr,start)
i=0
while i < length:
#print i, length
pos=start+i
string=getLineToPrint(objreadcount,i, pos, chr)
obgGenomeSegment.addPosition(string, i)
objFile.write(string+"\n")
#print string
i=i+1
objFile.close()
return obgGenomeSegment
##################################################################
#
#
#
#
#
#################################################################
def countReadsMate(lstFile,dicoStats,chr,start,end,threshold_pval,MQ):
dicoPos={}
for file in lstFile:
samfile = pysam.AlignmentFile(file, "rb")
lstPos=[[]]*(end-start+1)
for pileupcolumn in samfile.pileup(chr,start,end):
position=pileupcolumn.reference_pos
lst=[]
if position < start:
continue
if position > end:
break
posTab=position-start
for pReads in pileupcolumn.pileups:
if pReads.alignment.mapping_quality < MQ:
lst.append(4)
if pReads.alignment.mate_is_unmapped:
lst.append(0)
#lstPos[posTab].append(0)
elif samfile.getrname(pReads.alignment.next_reference_id) != chr:
lst.append(3)
else:
rend=pReads.alignment.reference_end
startMate=pReads.alignment.next_reference_start
delta=abs(startMate-rend)
mean=dicoStats[file][0]
std=dicoStats[file][1]
z=Z_score(delta,mean,std)
p_value = scipy.stats.norm.sf([abs(z)])[0]
#print pReads.alignment.next_reference_id
#print mean, std, delta, p_value
if p_value < threshold_pval:
lst.append(2)
else:
lst.append(1)
lstPos[posTab]=lst
dicoPos[file]=lstPos
return dicoPos
def saveLstRegion(lstRegion, fileOut):
objFile=open(fileOut,"a")
for region in lstRegion:
string=region[0]+"\t"+str(region[1])+"\t"+str(region[2])+"\n"
objFile.write(string)
objFile.close()
def main(param):
dicoStats=loadStatistics(param.strConfigFile)
##InitFileTo analyse
outfile=param.outFile
outReadCount=outfile+".rdc"
outGoodRegion=outfile+".bed"
objFile=open(outReadCount,"w")
objFile.close()
objFile=open(outGoodRegion,"w")
objFile.close()
lstBams=param.lstBamFiles.split(",")
CurrStart=param.start
CurrEnd=param.start+param.bin-1
#print end-start
if param.end-param.start < param.bin:
CurrEnd=param.end
while CurrEnd <=param.end:
##count reads pair
print "counting paired reads"
hashReadCount=countReadsMate(lstBams,dicoStats,param.chr,CurrStart,CurrEnd,param.pvalMate,param.MQthreshold)
        ## calculate some stats and create an object that represents the genome segment (and save the data to file)
print " calculate frequencies"
objGenomSegment=calculateFrequency(hashReadCount,param.chr,CurrStart,CurrEnd,outReadCount)
        ## get the regions
print "smoothing count"
objGenomSegment.smooth(param.smoothingWindows)
print "identify regions"
lstRegion=objGenomSegment.IdentifyGoodRegion(param.minReads, param.minFreq)
## save the regions
saveLstRegion(lstRegion,outGoodRegion)
CurrStart=CurrEnd+1
CurrEnd=CurrStart+param.bin-1
if CurrEnd > param.end:
CurrEnd=param.end
if CurrEnd<=CurrStart:
break
####################################################################################
parser = argparse.ArgumentParser()
parser.add_argument('--bam_files', action='store', dest='lstBamFiles', default ="", help='liste of bam file to analyse format : bam1,bam2,...,bamN',required=True)
parser.add_argument('--config', action='store', dest='strConfigFile', help='configuration file describing the mean and std of the insert per library', required=True)
parser.add_argument('--out', action='store', dest='outFile', help='output file prefix where the data will be stored ', required=True)
parser.add_argument('--chr', action='store', dest='chr', help='chromosome to analyse',required=True)
parser.add_argument('--start', action='store', dest='start', help='start of the region to analyse',required=True, type=int)
parser.add_argument('--end', action='store', dest='end', help='end of the region to analyse\n',required=True,type=int)
parser.add_argument('--pval_mate', action='store', dest='pvalMate', help='pval threshold that two mates are in a good distance [0.0001]', default=0.0001, type=float)
parser.add_argument('--min_reads', action='store', dest='minReads', help='minimum number of reads that satisfy the pair-ends constraints required to have a "good" region [8]', default=8, type=int)
parser.add_argument('--min_freq', action='store', dest='minFreq', help='frequency threshold of reads satisfying the pair-end constraints to have a good regions [0.2]', default=0.2, type=float)
parser.add_argument('--MQ', action='store', dest='MQthreshold', help='reads with a mapping quality < MQ won\'t be considered [25]', default=25, type =int)
parser.add_argument('--smoothing_size', action='store', dest='smoothingWindows', help='size of the window used to smooth the dataset [5]', default=5, type=int)
parser.add_argument('--bin', action='store', dest='bin', help='number of position evaluated before storing in file (this is for performances issues) [30000]', default=30000, type=int)
param = parser.parse_args()
main(param)
|
[
"tk2@sanger.ac.uk"
] |
tk2@sanger.ac.uk
|
fcbba8373ebd9c2af4f3b86506b5331bbeb7ecda
|
7de58acd871c2306002f7694957e8f573e577b99
|
/src/training/train_xgboost.py
|
3fa5006a779526cc603be0c58b341b6fda88c31b
|
[] |
no_license
|
dekukkk/StravaKudos
|
aa893b42c2b216e312e94065856aaf9a222743c9
|
5ac21e035010ce900a03f67d3b8046ff41768078
|
refs/heads/main
| 2023-08-22T04:51:59.410924
| 2021-10-22T22:44:01
| 2021-10-22T22:44:01
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 3,766
|
py
|
import pandas as pd
import numpy as np
import xgboost as xgb
import pickle
from sklearn import metrics
from sklearn import preprocessing
from sklearn import impute
from sklearn.pipeline import Pipeline
from scipy.sparse import hstack, vstack
from imblearn.over_sampling import SMOTENC
def run(fold):
# read training data with folds
df = pd.read_csv("input/data_train_kfold.csv")
# list all numeric features
num_cols = [
"distance",
"moving_time",
"total_elevation_gain",
"max_speed",
"average_heartrate",
"max_heartrate",
"suffer_score",
"run_area",
"average_speed_mpk",
]
cat_cols = [
"workout_type",
"timezone",
"manual",
"dayofweek",
"weekend",
"is_uk_awake",
"latlng_cluster",
"city",
"has_photo",
"run_per_day",
"max_run",
"is_named",
]
ordinal_cols = ["hour", "pr_count"]
# all cols are features except for target and kfold
features = num_cols + cat_cols + ordinal_cols
# fill cat column NaN values with NONE
for col in cat_cols + ordinal_cols:
df.loc[:, col] = df[col].astype(str).fillna("NONE")
# training data is where kfold is not equal to fold
df_train = df[df.kfold != fold].reset_index(drop=True)
y_train = df_train.kudos_count.values
# validation data is where kfold = fold
df_valid = df[df.kfold == fold].reset_index(drop=True)
y_valid = df_valid.kudos_count.values
# pipelines for model transformation
num_pipeline = Pipeline([("imputer", impute.SimpleImputer(strategy="median"))])
cat_pipeline = Pipeline(
[("cat", preprocessing.OneHotEncoder(handle_unknown="ignore"))]
)
# transforms columns and drops columns not specified
x_train_num = num_pipeline.fit_transform(df_train[num_cols])
x_train_cat = cat_pipeline.fit_transform(df_train[cat_cols + ordinal_cols])
x_valid_num = num_pipeline.transform(df_valid[num_cols])
x_valid_cat = cat_pipeline.transform(df_valid[cat_cols + ordinal_cols])
# check shapes are the same
assert (
x_train_num.shape[0] == y_train.shape[0]
), "training data (numeric) and label dimension are not equal"
assert (
x_train_cat.shape[0] == y_train.shape[0]
), "training data (categorical) and label dimension are not equal"
assert (
x_valid_num.shape[0] == y_valid.shape[0]
), "validation data (numeric) and label dimension are not equal"
assert (
x_valid_cat.shape[0] == y_valid.shape[0]
), "validation data (categorical) and label dimension are not equal"
# join numeric data and categorical data
x_train = hstack((x_train_num, x_train_cat), format="csr")
x_valid = hstack((x_valid_num, x_valid_cat), format="csr")
# initialize xgboost model
model = xgb.XGBRegressor(n_jobs=-1)
# fit model on training data
eval_set = [(x_valid, y_valid)]
model.fit(
x_train,
y_train,
early_stopping_rounds=10,
eval_metric="rmse",
eval_set=eval_set,
verbose=False,
)
# model.fit(x_train, y_train)
# predict on validation data
valid_preds = model.predict(x_valid)
    # get rmse and max error
rmse = metrics.mean_squared_error(y_valid, valid_preds, squared=False)
max_error = metrics.max_error(y_valid, valid_preds)
print(f"\nFold = {fold}, rmse = {rmse}, max error = {max_error}")
data = [x_train, y_train, x_valid, y_valid]
return rmse, model, data
if __name__ == "__main__":
scores = []
for fold_ in range(3):
rmse, _, _ = run(fold_)
scores.append(rmse)
print(f"\nAverage rmse = {sum(scores) / len(scores)}")
|
[
"jackmleitch@gmail.com"
] |
jackmleitch@gmail.com
|
f462a3d3110bd55c42950d29ebc9001cb1de915d
|
c45b7f89afa87d07cf8a65fd33d03f95f54898a2
|
/Sample_random_Sampling.py
|
23307de58640186f4d48db10576e24b34feb0beb
|
[] |
no_license
|
kpp46/statistic_calculator
|
cf6a8ea77c92a5724f7e8d54a9f8d091e4abf7d8
|
2b32eceded7c51af89e9a47ef53376be3fc33290
|
refs/heads/master
| 2023-07-09T15:51:34.477301
| 2021-08-03T02:19:45
| 2021-08-03T02:19:45
| 392,249,457
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 199
|
py
|
import random
from NListWithSeed import generator_int_and_float
def population(data, sample_size):
pp = random.choices(generator_int_and_float(data, sample_size), k=sample_size)
return pp
|
[
"P.kirtan@yahoo.com"
] |
P.kirtan@yahoo.com
|
b101e8a1d2e1295edb59878707e8b8b795eb6a7b
|
32f5787972ca0408ffbc57692cf38292eb80c6b3
|
/users/models.py
|
195961734988749e595de94a2a7598007339e25c
|
[] |
no_license
|
bakiev05/avito_djangorestframework
|
c863fa0722cefaf47506e3f30b3d87e30ed4ca26
|
c07febcff913d631c7e14c5625112106aff16e66
|
refs/heads/main
| 2023-06-20T09:39:48.066575
| 2021-07-16T03:12:09
| 2021-07-16T03:12:09
| 386,479,921
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 260
|
py
|
from django.db import models
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
image = models.ImageField(upload_to='profile')
class Meta:
ordering = ('-id',)
|
[
"aziz.hadj1212@gmail.com"
] |
aziz.hadj1212@gmail.com
|
38e33bb7a8de3cc2f1a1e00f7e908ff017e0c400
|
89033fbde9f166aabba4769d8104c18e1c2baa81
|
/amznscrp/autocompletesearch.py
|
0085886f90d951b665f3e3296ba433bd4d6543ca
|
[] |
no_license
|
jenslaufer/amznscrp
|
67ff3087dc4d4e5591839d37eed25c4cb7b4a1f0
|
3d74701e7551106f3b2411f16a4f1ed0656303d5
|
refs/heads/master
| 2021-07-12T23:14:28.546741
| 2019-02-19T13:46:45
| 2019-02-19T13:46:45
| 167,195,848
| 3
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 1,018
|
py
|
from urllib.parse import quote_plus
import re
import requests
import argparse
import json
from string import ascii_lowercase
def scrape(keyword, proxy_srv, user_agents):
s = requests.session()
headers = {
'User-Agent': user_agents.get(),
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
}
base_url = 'https://completion.amazon.co.uk'
mid = 'A1PA6795UKMFR9'
lop = 'de_DE'
uri = '{0}/api/2017/suggestions?lop={1}&mid={2}&alias=aps&prefix={3}'
f_kwrd = quote_plus(keyword)
result = s.get(uri.format(base_url, lop, mid, f_kwrd), headers=headers,
proxies=proxy_srv.get())
if "Invalid Marketplace ID" in result.text:
resp = s.get(base_url).text
mid = re.findall(re.compile(
r'obfuscatedMarketId:\s"(.*)"'), resp)[0]
        result = s.get(uri.format(base_url, lop, mid, f_kwrd), headers=headers,
                       proxies=proxy_srv.get())
return json.loads(result.content)
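# Hypothetical call, assuming proxy_srv and user_agents are the project's helper
# objects exposing a .get() method as used above:
#   suggestions = scrape("kaffeemaschine", proxy_srv, user_agents)
# The return value is the parsed JSON suggestion payload from the completion endpoint.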
|
[
"jenslaufer@gmail.com"
] |
jenslaufer@gmail.com
|
4336c5431f1f6de898f7d06c3d024e572c248da5
|
89a0034c6a0904552d23a0f8c7a645869af275ca
|
/myportfolio/urls.py
|
4351e2f846a993ed522be692e951c10cd568b408
|
[] |
no_license
|
Pheonix12/my_portfolio
|
9974be4f6b183f40811c13f393b8483afab329c6
|
fe6ef08d32d54ff15ce2f41e7d0760c14b52d2ae
|
refs/heads/main
| 2023-02-26T21:11:35.072276
| 2021-02-05T13:18:42
| 2021-02-05T13:18:42
| 334,504,263
| 0
| 1
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 797
|
py
|
"""myportfolio URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.1/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('base.urls'))
]
|
[
"s.shaown@outlook.com"
] |
s.shaown@outlook.com
|
c5383644b1d22a2d111b06f36d3970196a9d1d6a
|
94bfec39dd8bbf2906c010f96112fb71511ca1fb
|
/RTE/testfolder/part2/workin/part2.py
|
a932f0d7f1f719ec048049bfe30ccf6affe07019
|
[] |
no_license
|
magnuskiro/IT3105-AIprog
|
636622c28f3dc04d9a2e066d0e1c8b79c9303ab0
|
5ac8abebdb230b89f6dd2da82c7c1c152a406af0
|
refs/heads/master
| 2020-04-21T11:57:20.100239
| 2011-11-23T22:52:34
| 2011-11-23T22:52:34
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 283
|
py
|
import rewrite
import part2a
import predict
def init(file_name):
print "Starting part II"
step_size = 0.001
processed_file = rewrite.run(file_name)
result = part2a.run(processed_file)
predict.predict(step_size, result)
print "Part II done"
init("RTE2_dev.preprocessed.xml")
|
[
"janbremnes@gmail.com"
] |
janbremnes@gmail.com
|
c556b39470401d6dc6b15137a371bc2d01395417
|
7829d22ea38576231cd286d6be4d66ec03783091
|
/Python/lab/lab-8/compute.py
|
e7a607c558d91a57d951b821a01c4b5e19ac85aa
|
[] |
no_license
|
cuppar/course
|
10fd118e9b4b053cd11065864324877adefcb180
|
5fe112d34987de972dfb91c68ff5ab147d5f42c9
|
refs/heads/master
| 2020-03-07T02:43:32.760145
| 2018-06-26T15:41:33
| 2018-06-26T15:41:33
| 127,215,460
| 2
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 730
|
py
|
"""
This is a computation module supporting the +, -, *, / and ** operations.
"""
def add(a, b):
"""
    Return the sum of the two inputs.
    a: augend
    b: addend
    return: sum
"""
return a+b
def sub(a, b):
"""
    Return the difference of the two inputs.
    a: minuend
    b: subtrahend
    return: difference
"""
return a-b
def mul(a, b):
"""
    Return the product of the two inputs.
    a: multiplicand
    b: multiplier
    return: product
"""
return a*b
def div(a, b):
"""
    Return the quotient of the two inputs.
    a: dividend
    b: divisor
    return: quotient
"""
return a/b
def pow(a, b):
"""
    Raise the first input to the power of the second.
    a: base
    b: exponent
    return: power
"""
return a**b
|
[
"cuppar.hzy@gmail.com"
] |
cuppar.hzy@gmail.com
|
37bcac96bbcd76d4f2527023b5e58f2e276b9c58
|
b1e325259687b58572ea962e5528fa5afa17e6f6
|
/python/src/algorithm_and_data_structure/alds1_9_a_complete_binary_tree.py
|
60e2c4f5b2a0ec6c77f4479383f5845eb1214f7c
|
[] |
no_license
|
gen0083/atcoder_python
|
84e1b0a63a736f1fca21bf7fcda776f4016a30bd
|
93d5b1023242e562e4687119c94812d8c0df429c
|
refs/heads/master
| 2023-07-23T21:47:38.964277
| 2023-07-13T12:23:51
| 2023-07-13T12:23:51
| 231,208,047
| 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 660
|
py
|
# Complete binary tree
# http://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=ALDS1_9_A&lang=jp
import sys
def main():
n = int(input())
heap = [""] * (n + 1)
i = 1
for s in input().split():
heap[i] = s
i += 1
for i in range(1, n + 1):
sys.stdout.write("node %d: key = %s," % (i, heap[i]))
if i // 2 > 0:
sys.stdout.write(" parent key = %s," % heap[i // 2])
if i * 2 <= n:
sys.stdout.write(" left key = %s," % heap[i * 2])
if i * 2 + 1 <= n:
sys.stdout.write(" right key = %s," % heap[i * 2 + 1])
print(" ")
if __name__ == '__main__':
main()
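# Worked example of the index arithmetic above (1-based array layout), assuming the
# sample input "5" followed by "7 8 1 2 3":
#   heap = ["", 7, 8, 1, 2, 3]
#   node 2 (key 8): parent key = heap[2 // 2] = 7, left key = heap[4] = 2, right key = heap[5] = 3
# i.e. for node i the parent sits at index i // 2 and the children at 2*i and 2*i + 1.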
|
[
"archiherewego@gmail.com"
] |
archiherewego@gmail.com
|
6dfd74694db9b8ab526ada69999029b70c75dd49
|
d4341f9f4f3c389e0c2aa0143330d2aef19ac25c
|
/Algorithms/little_labyrinth.py
|
4bc356de45cb38decb0150668fcf1e9afe35af4c
|
[] |
no_license
|
aeirado/hello-world
|
0d24f027dcb9342363e0c01c8202c22facec48c4
|
bf0679e74f9088730d641d5c7e7cc6615283de0a
|
refs/heads/master
| 2021-05-23T05:09:51.750783
| 2018-06-04T20:02:45
| 2018-06-04T20:02:45
| 95,236,575
| 1
| 0
| null | 2017-06-23T17:50:44
| 2017-06-23T16:19:26
| null |
UTF-8
|
Python
| false
| false
| 10,132
|
py
|
'''Maze Solution Algorithm'''
LABYRINTH = [
['x', 'x', 'x', 'x', 'x', '-', 'x', 'x'],
['x', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', '-', 'x'],
['x', '-', '-', '-', '-', '-', '-', 'x'],
['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x']
]
LABYRINTH_2 = [
['x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x'],
['x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', 'x', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', '-', '-', '-', '-', 'x', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', '-', '-', '-', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', 'x', '-', 'x', '-', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', 'x', '-', '-', 'x', '-', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', '-', '-', '-', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', '-', '-', 'x', 'x', 'x', 'x', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', '-', '-', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', 'x', 'x', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x']
]
LAB_CLEAN = [
['x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x'],
['x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', 'x', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', '-', '-', '-', '-', 'x', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', '-', '-', '-', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', '-', 'x', '-', '-', '-', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', '-', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', 'x', '-', 'x', '-', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', 'x', '-', '-', 'x', '-', '-', 'x', 'x', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', '-', '-', '-', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', '-', '-', 'x', 'x', 'x', 'x', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', '-', 'x', 'x', 'x', '-', 'x', '-', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', '-', '-', '-', '-', '-', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', '-', 'x', 'x', 'x', 'x', 'x', '-', 'x'],
['x', 'x', 'x', 'x', 'x', '-', 'x', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', 'x'],
['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x']
]
GOAL = 'G'
PERSON = '☺'
WALL = 'x'
MSG1 = ''
def print_maze_coordinates(maze):
print('\n{:^68}'.format('L A B Y R I N T H'))
print('--------------------------------------------------------------------')
print(' 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22')
for i in range(22):
print('{:0>2}'.format(i + 1), end=' ')
for j in maze[i]:
print('{:' '<2}'.format(j), end=' ')
print()
def input_print(*msg):
pass
def true_coord(x, y):
coord = (x, y)
coord = tuple(i - 1 for i in coord)
return coord
def insert_G_P(maze, goal, person, gxy=(21, 6), pxy=(21, 21)):
gxy = true_coord(gxy[0], gxy[1])
pxy = true_coord(pxy[0], pxy[1])
maze[gxy[0]][gxy[1]] = goal
maze[pxy[0]][pxy[1]] = person
return maze, gxy, pxy
def map_maze(maze, wall):
maze_keys = [(x, y) for x in range(len(maze)) for y in range(len(maze))]
maze_maped = dict(zip(maze_keys,
[j for i in range(len(maze)) for j in maze[i]]))
# for k in maze_maped.copy():
# if maze_maped[k] == 'x':
# maze_maped.pop(k)
# the for above works too
maze_maped = {k : v for k, v in iter(maze_maped.items()) if v is not wall}
return maze_maped
def count_distances(maze_maped, gxy, pxy):
'''
    Marks every step of the labyrinth with a number that grows outward from
    GOAL until the whole labyrinth is filled. The shortest path is the one
    that reaches PERSON with the fewest steps.
    A dict structure records each cell's "distance" (an integer) and its
    predecessor ("father").
    Walking the labyrinth from PERSON to GOAL, father by father, reaches
    GOAL along the shortest path.
'''
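    # For illustration (coordinates here are hypothetical, not from the maps above):
    # an entry such as maze[(3, 5)] == {"distance": 7, "father": (3, 4)} means cell
    # (3, 5) lies 7 steps from GOAL and was reached from (3, 4); following "father"
    # links from PERSON back to GOAL therefore reproduces the shortest path.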
def prepare_maze_map(maze_maped):
for k in maze_maped.keys():
maze_maped[k] = {"distance": 0, "father": (0, 0)}
return maze_maped
def update_dist_father(actual, queue, exploreds):
maze[adj] = {"distance": maze[actual]['distance'] + 1,
"father": actual}
queue.append(adj)
exploreds.append(adj)
maze = prepare_maze_map(maze_maped)
maze[gxy] = {"distance": 0, "father": gxy}
adjacents = list(maze.keys())
adjacents.remove(gxy)
goal = gxy
queue = [goal]
exploreds = [goal]
while queue:
actual = queue.pop(0)
actual_row, actual_col = actual[0], actual[1]
for adj in adjacents:
if adj in exploreds:
continue
adj_row, adj_col = adj[0], adj[1]
if actual_row == 0 and actual_row + 1 == adj_row:
if actual_col == adj_col:
update_dist_father(actual, queue, exploreds)
elif actual_row == 7 and actual_row - 1 == adj_row:
if actual_col == adj_col:
update_dist_father(actual, queue, exploreds)
elif actual_col == 0 and actual_col + 1 == adj_col:
if actual_row == adj_row:
update_dist_father(actual, queue, exploreds)
elif actual_col == 7 and actual_col - 1 == adj_col:
if actual_row == adj_row:
update_dist_father(actual, queue, exploreds)
elif actual_row + 1 == adj_row or actual_row - 1 == adj_row:
if actual_col == adj_col:
update_dist_father(actual, queue, exploreds)
elif actual_col + 1 == adj_col or actual_col - 1 == adj_col:
if actual_row == adj_row:
update_dist_father(actual, queue, exploreds)
maze[pxy]['distance'] = PERSON
maze[gxy]['distance'] = GOAL
return maze
def label_distances(maze, maze_distances):
for key in maze_distances.keys():
maze[key[0]][key[1]] = maze_distances[key]['distance']
return maze
def walking_to_goal(maze, maze_distances, gxy, pxy):
father = (0, 0)
actual = pxy
distance = maze_distances[maze_distances[actual]['father']]['distance']
for _ in range(distance + 1):
father = maze_distances[actual]['father']
# maze[actual[0]][actual[1]] = '-'
# print('actual:', actual,'====> father:', father)
maze[father[0]][father[1]] = PERSON
actual = father
return maze
if __name__ == "__main__":
lab, g_coord, p_coord = insert_G_P(LABYRINTH_2, GOAL, PERSON)
lab_clean, x, y = insert_G_P(LAB_CLEAN, GOAL, PERSON)
print_maze_coordinates(lab)
maze_distances = count_distances(map_maze(lab, WALL), g_coord, p_coord)
print_maze_coordinates(label_distances(lab, maze_distances))
print_maze_coordinates(walking_to_goal(lab_clean, maze_distances, g_coord,
p_coord))
|
[
"aeirado@gmail.com"
] |
aeirado@gmail.com
|
d8c7e6d7db9bca7d7f44fc210e2ec9ec36afa5ea
|
aa87ba72785c0a32f98adcb8a978963baf5d122f
|
/Mobley_logP/tautomerExploration/makeMAEfiles.py
|
31b1598a63ee4d2fd623d9c493ecf1d97b3d7400
|
[
"MIT"
] |
permissive
|
zyh0608/SAMPL5_logD_PredictionAnalysis
|
b475ad5156827c100e126ef0e39dadcc64621cca
|
c7675a8f183a465bee89599a6df9e360476ef868
|
refs/heads/master
| 2021-12-24T03:40:06.460732
| 2017-12-07T20:34:27
| 2017-12-07T20:34:27
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 953
|
py
|
# Written by Caitlin C Bannan
# Mobley Group, University of California Irvine
# February 2016
# This script uses the schrodinger tool ligprep to analyze all the SAMPL5 molecule files and perform a tautomer enumeration to get a state penalty for the tautomer used in our analysis
import commands as c
import os
import glob
import sys
#convertCommand = "/opt/schrodinger/suites2014-4/utilities/structconvert -imol2 %s -omae %s.mae"
#allpKaCommand = "/opt/schrodinger/suites2014-4/epik -imae %s.mae -omae %s_allpKa.mae"
LigPrepCommand = "/opt/schrodinger/suites2014-4/ligprep -imae %s.mae -omae ligprep_%s.maegz -bff 14 -ph 7.4 -retain_i -ac -s 32 -r 1 -epik"
maeFiles = glob.glob('../MoleculeFiles/SAMPL5_*.mae')
for f in maeFiles:
samplID = f.split('.')[0]
if os.path.isfile('ligprep_%s.maegz' % samplID):
continue
print samplID
# Create mae and log file with pKas
print c.getoutput(LigPrepCommand % (samplID, samplID))
|
[
"bannanc@uci.edu"
] |
bannanc@uci.edu
|
4aa21d3a7adf0f9f45a4c35ec08d660026730f4a
|
8c06317ee3bb99e8035ef2182256030a965acf29
|
/spint/universal.py
|
60fefbbfbc97c1469513dfa8569cc17b6cd268f5
|
[
"BSD-3-Clause"
] |
permissive
|
TaylorOshan/spint
|
4120bea89d56d7be35e74328f953eef4cb5428a8
|
8496c55ae5097904965f1fb1f70a918aea77353a
|
refs/heads/master
| 2021-08-17T22:47:24.466224
| 2021-01-15T18:20:13
| 2021-01-15T18:20:13
| 79,855,998
| 1
| 1
| null | 2017-01-23T22:42:19
| 2017-01-23T22:42:19
| null |
UTF-8
|
Python
| false
| false
| 10,196
|
py
|
"""
Implementations of universal spatial interaction models: Lenormand's
model, radiation model, and population-weighted opportunities.
References
----------
Lenormand, M., Huet, S., Gargiulo, F., and Deffuant, G. (2012). "A Universal
Model of Commuting Networks." PLOS One, 7, 10.
Simini, F., Gonzalez, M. C., Maritan, A., Barabasi, A.-L. (2012). "A universal
model for mobility and migration patterns." Nature, 484, 96-100.
Yan, X.-Y., Zhao, C., Fan, Y., Di, Z., and Wang, W.-X. (2014). "Universal
predictability of mobility patterns in cities." Journal of the Royal
Society Interface, 11, 100.
"""
__author__ = 'Tyler Hoffman tylerhoff1@gmail.com'
from abc import ABC, abstractmethod
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
class Universal(ABC):
"""
Base class for all the universal models as they all have similar
underlying structures. For backend design purposes, not practical use.
Parameters
----------
inflows : array of reals
N x 1, observed flows into each location
outflows : array of reals
M x 1, observed flows out of each location
dists : matrix of reals
N x M, pairwise distances between each location
Attributes
----------
N : integer
number of origins
M : integer
number of destinations
flowmat : abstract method
estimates flows, implemented by children
"""
def __init__(self, inflows, outflows, dists):
self.N = len(outflows) # number of origins
self.M = len(inflows) # number of destinations
self.outflows = outflows.copy() # list of origin outflows
self.inflows = inflows.copy() # list of destination inflows
self.dists = dists.copy() # list of distances
@abstractmethod
def flowmat(self): pass
class Lenormand(Universal):
"""
Universal model based off of Lenormand et al. 2012,
"A Universal Model of Commuting Networks".
Parameters
----------
inflows : array of reals
N x 1, observed flows into each location
outflows : array of reals
M x 1, observed flows out of each location
dists : matrix of reals
N x M, pairwise distances between each location
beta : scalar
real, universal parameter for the model
avg_sa : scalar
real, average surface area of units
Attributes
----------
N : integer
number of origins
M : integer
number of destinations
calibrate : method
calibrates beta using constants from the paper
flowmat : method
estimates flows via the Lenormand model
"""
def __init__(self, inflows, outflows, dists, beta=1, avg_sa=None):
super().__init__(inflows, outflows, dists)
self.beta = self.calibrate(avg_sa) if avg_sa is not None else beta
def calibrate(self, avg_sa):
# Constants from the paper
nu = 0.177
alpha = 3.15 * 10**(-4)
        self.beta = alpha * avg_sa**(-nu)
        return self.beta  # return the value so the assignment in __init__ receives it
def flowmat(self):
# Builds the matrix T from the parameter beta and a matrix of distances
T = np.zeros((self.N, self.M))
# Copy class variables so as not to modify
sIN = self.inflows.copy()
sOUT = self.outflows.copy()
# Assembly loop
while sum(sOUT) > 0:
# Pick random nonzero sOUT
idxs, = np.where(sOUT > 0)
i = np.random.choice(idxs)
# Compute Pij's (not memoized b/c it changes on iteration)
Pi = np.multiply(sIN, np.exp(-self.beta*self.dists[i, :])) / \
np.dot(sIN, np.exp(-self.beta*self.dists[i, :]))
# Pick random j according to Pij
            j = np.random.choice(range(self.M), p=Pi)  # Pi is defined over the M destinations
# Adjust values
T[i, j] += 1
sIN[j] -= 1
sOUT[i] -= 1
return T
class Radiation(Universal):
"""
Universal model based off of Simini et al. 2012,
"A universal model for mobility and migration patterns".
Requires slightly more data than Lenormand.
Parameters
----------
inflows : array of reals
N x 1, observed flows into each location
outflows : array of reals
M x 1, observed flows out of each location
dists : matrix of reals
N x M, pairwise distances between each location
ilocs : array of reals
N x 2, inflow node locations
olocs : array of reals
M x 2, outflow node locations
Attributes
----------
N : integer
number of origins
M : integer
number of destinations
flowmat : method
estimates flows via the Radiation model
"""
def __init__(self, inflows, outflows, dists, ilocs, olocs):
super().__init__(inflows, outflows, dists)
self.ilocs = ilocs.copy()
self.olocs = olocs.copy()
def _from_origin(self, idx, total_origins):
# Sort destinations by distance from origin
didxs = np.argsort(self.dists[idx, :])
inflows = self.inflows[didxs]
# Normalization
F = 1.0/(1.0 - self.outflows[idx]/total_origins)
pop_in_radius = 0
flows = np.zeros((self.M,))
for j in range(self.M):
# Use formula from the paper
flows[j] = F*(self.outflows[idx]*inflows[j]) / \
((self.outflows[idx] + pop_in_radius) *
(self.outflows[idx] + inflows[j] + pop_in_radius))
pop_in_radius += inflows[j]
# Unsort list
return flows[didxs.argsort()]
def flowmat(self):
# Builds the OD matrix T from the inputted data
T = np.zeros((self.N, self.M))
total_origins = sum(self.outflows)
for i in range(self.N):
T[i, :] = self._from_origin(i, total_origins)
return T
class PWO(Universal):
"""
Population-weighted opportunies (PWO) implements a
universal model based off of Yan et al. 2014,
"Universal predictability of mobility patterns in cities".
Requires slightly more data than Lenormand.
Parameters
----------
inflows : array of reals
N x 1, observed flows into each location
outflows : array of reals
M x 1, observed flows out of each location
dists : matrix of reals
N x M, pairwise distances between each location
ilocs : array of reals
N x 2, inflow node locations
olocs : array of reals
M x 2, outflow node locations
Attributes
----------
N : integer
number of origins
M : integer
number of destinations
flowmat : method
        estimates flows via the PWO model
"""
def __init__(self, inflows, outflows, dists, ilocs, olocs):
super().__init__(inflows, outflows, dists)
self.ilocs = ilocs.copy()
self.olocs = olocs.copy()
self.total = sum(inflows) # total population of the system
def _from_destination(self, jdx):
# Sort origins by distance from destination
didxs = np.argsort(self.dists[jdx, :])
outflows = self.outflows[didxs]
pop_in_radius = self.inflows[jdx] # here pop_in_radius includes endpts
flows = np.zeros((self.N,))
# Loop over origins
for i in range(self.N):
pop_in_radius += outflows[i] # add other endpt
# Compute denominator
denom = 0
denom_pop_in_radius = outflows[i]
for k in range(self.M): # loop over destinations
denom_pop_in_radius += self.inflows[k]
if k != i:
denom += self.inflows[k] * (1/denom_pop_in_radius -
1/self.total)
# Use formula from the paper
flows[i] = self.inflows[jdx]*(1/pop_in_radius - 1/self.total)/denom
# Unsort list
return flows[didxs.argsort()]
def flowmat(self):
# Builds the OD matrix T from the inputted data
T = np.zeros((self.N, self.M))
for j in range(self.M):
T[:, j] = self._from_destination(j)
return T
def test():
# Read data from Austria file
N = 9
austria = pd.read_csv('austria.csv')
modN = austria[austria.index % N == 0]
outflows = modN['Oi'].values
inflows = austria['Dj'].head(n=N).values
locs = np.zeros((N, 2))
locs[:, 0] = modN['X'].values
locs[:, 1] = modN['Y'].values
dists = np.reshape(austria['Dij'].values, (N, N), order='C')
T_obs = np.reshape(austria['Data'].values, (N, N), order='C')
# Lenormand paper's model
model = Lenormand(inflows, outflows, dists)
T_L = model.flowmat()
print(pearsonr(T_L.flatten(), T_obs.flatten()))
# Radiation model -- requires locations of each node
model = Radiation(inflows, outflows, dists, locs, locs)
T_R = model.flowmat()
print(pearsonr(T_R.flatten(), T_obs.flatten()))
# PWO model
model = PWO(inflows, outflows, dists, locs, locs)
T_P = model.flowmat()
print(pearsonr(T_P.flatten(), T_obs.flatten()))
if __name__ == '__main__':
test()
|
[
"tayoshan@gmail.com"
] |
tayoshan@gmail.com
|
08aa07947973611446df1c9c162384db088191e3
|
6a6c2b922c3ff3d35622c8b9638426fb162b79ed
|
/pnc_cli/swagger_client/models/project.py
|
82765a36a314e3882363e137823cac3e52b4b789
|
[
"Apache-2.0"
] |
permissive
|
thauser/pnc-cli
|
1383f45a309a34b12a8792d4e648259c8cd8b33b
|
cf9a1ce236c3de9ec6c393816e3c75db2f75bc33
|
refs/heads/master
| 2021-04-19T00:48:26.375270
| 2017-11-09T13:01:24
| 2017-11-09T16:14:23
| 36,449,869
| 0
| 4
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 6,973
|
py
|
# coding: utf-8
"""
Copyright 2015 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Ref: https://github.com/swagger-api/swagger-codegen
"""
from datetime import datetime
from pprint import pformat
from six import iteritems
class Project(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self):
"""
Project - a model defined in Swagger
:param dict swaggerTypes: The key is attribute name
and the value is attribute type.
:param dict attributeMap: The key is attribute name
and the value is json key in definition.
"""
self.swagger_types = {
'id': 'int',
'name': 'str',
'description': 'str',
'issue_tracker_url': 'str',
'project_url': 'str',
'license': 'License',
'build_configurations': 'list[BuildConfiguration]',
'field_handler': 'FieldHandler'
}
self.attribute_map = {
'id': 'id',
'name': 'name',
'description': 'description',
'issue_tracker_url': 'issueTrackerUrl',
'project_url': 'projectUrl',
'license': 'license',
'build_configurations': 'buildConfigurations',
'field_handler': 'fieldHandler'
}
self._id = None
self._name = None
self._description = None
self._issue_tracker_url = None
self._project_url = None
self._license = None
self._build_configurations = None
self._field_handler = None
@property
def id(self):
"""
Gets the id of this Project.
:return: The id of this Project.
:rtype: int
"""
return self._id
@id.setter
def id(self, id):
"""
Sets the id of this Project.
:param id: The id of this Project.
:type: int
"""
self._id = id
@property
def name(self):
"""
Gets the name of this Project.
:return: The name of this Project.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this Project.
:param name: The name of this Project.
:type: str
"""
self._name = name
@property
def description(self):
"""
Gets the description of this Project.
:return: The description of this Project.
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""
Sets the description of this Project.
:param description: The description of this Project.
:type: str
"""
self._description = description
@property
def issue_tracker_url(self):
"""
Gets the issue_tracker_url of this Project.
:return: The issue_tracker_url of this Project.
:rtype: str
"""
return self._issue_tracker_url
@issue_tracker_url.setter
def issue_tracker_url(self, issue_tracker_url):
"""
Sets the issue_tracker_url of this Project.
:param issue_tracker_url: The issue_tracker_url of this Project.
:type: str
"""
self._issue_tracker_url = issue_tracker_url
@property
def project_url(self):
"""
Gets the project_url of this Project.
:return: The project_url of this Project.
:rtype: str
"""
return self._project_url
@project_url.setter
def project_url(self, project_url):
"""
Sets the project_url of this Project.
:param project_url: The project_url of this Project.
:type: str
"""
self._project_url = project_url
@property
def license(self):
"""
Gets the license of this Project.
:return: The license of this Project.
:rtype: License
"""
return self._license
@license.setter
def license(self, license):
"""
Sets the license of this Project.
:param license: The license of this Project.
:type: License
"""
self._license = license
@property
def build_configurations(self):
"""
Gets the build_configurations of this Project.
:return: The build_configurations of this Project.
:rtype: list[BuildConfiguration]
"""
return self._build_configurations
@build_configurations.setter
def build_configurations(self, build_configurations):
"""
Sets the build_configurations of this Project.
:param build_configurations: The build_configurations of this Project.
:type: list[BuildConfiguration]
"""
self._build_configurations = build_configurations
@property
def field_handler(self):
"""
Gets the field_handler of this Project.
:return: The field_handler of this Project.
:rtype: FieldHandler
"""
return self._field_handler
@field_handler.setter
def field_handler(self, field_handler):
"""
Sets the field_handler of this Project.
:param field_handler: The field_handler of this Project.
:type: FieldHandler
"""
self._field_handler = field_handler
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, datetime):
result[attr] = str(value.date())
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
|
[
"thauser@redhat.com"
] |
thauser@redhat.com
|
a61152fb7d88aa633634dfe02f3bc6b0900c09a5
|
4a4d8b806f42a1943d3c3d378174529265380324
|
/useMap.py
|
e79b1ecb6a11cc2fe8a70a1558cb753c28a0a72b
|
[] |
no_license
|
syves/sandbox
|
c4ec6f1c03ebeb94dcf02fb3de07f2611e59f0e7
|
0394fc74d119ec1cc4bb1403bf1488825923152a
|
refs/heads/master
| 2016-08-02T21:27:16.514892
| 2014-04-07T05:52:12
| 2014-04-07T05:52:12
| null | 0
| 0
| null | null | null | null |
UTF-8
|
Python
| false
| false
| 446
|
py
|
lst = [37, 996, 28, 10]
def times_3(num):
return num * 3
def times_2(num):
return num * 2
def make_multiplier(n):
    return lambda m: m * n
times_3 = make_multiplier(3)
times_2 = make_multiplier(2)
def product(a, b):
return a * b
import functools
times_3 = functools.partial(product, 3)
times_2 = functools.partial(product, 2)
print "map", map(times_2, lst)
print "list comp", [nums * 2 for nums in lst]
|
[
"7shakrahs@gmail.com"
] |
7shakrahs@gmail.com
|
3b624809c01f392d200d800727230749108bafad
|
c41a5d8923e3954232c7bb401cf528b60bf5d615
|
/docs/conf.py
|
31434ccdde21d167da3efe7e475c4fa6e9dddb9f
|
[
"MIT"
] |
permissive
|
TG-Techie/Adafruit_CircuitPython_ST7735R
|
fed1718fe305b7b5937da97fffab6455636ed481
|
7bc2f385464db75a0bd580f3297439d2ad330fa1
|
refs/heads/master
| 2020-05-30T01:53:55.026168
| 2019-06-01T17:47:11
| 2019-06-01T17:47:11
| 189,487,410
| 0
| 0
|
MIT
| 2019-06-01T13:05:45
| 2019-05-30T21:53:53
|
Python
|
UTF-8
|
Python
| false
| false
| 5,255
|
py
|
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'sphinx.ext.todo',
]
# TODO: Please Read!
# Uncomment the below if you use native CircuitPython modules such as
# digitalio, micropython and busio. List the modules you use. Without it, the
# autodoc module docs will fail to generate with a warning.
# autodoc_mock_imports = ["digitalio", "busio"]
autodoc_mock_imports = ["displayio"]
intersphinx_mapping = {'python': ('https://docs.python.org/3.4', None),'CircuitPython': ('https://circuitpython.readthedocs.io/en/latest/', None)}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Adafruit ST7735R Library'
copyright = u'2019 Scott Shawcroft'
author = u'Scott Shawcroft'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'1.0'
# The full version, including alpha/beta/rc tags.
release = u'1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '.env', 'CODE_OF_CONDUCT.md']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
default_role = "any"
# If true, '()' will be appended to :func: etc. cross-reference text.
#
add_function_parentheses = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# If this is True, todo emits a warning for each TODO entries. The default is False.
todo_emit_warnings = True
napoleon_numpy_docstring = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if not on_rtd: # only import and set the theme if we're building docs locally
try:
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path(), '.']
except:
html_theme = 'default'
html_theme_path = ['.']
else:
html_theme_path = ['.']
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
html_favicon = '_static/favicon.ico'
# Output file base name for HTML help builder.
htmlhelp_basename = 'AdafruitSt7735RLibrarydoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'AdafruitST7735RLibrary.tex', u'AdafruitST7735R Library Documentation',
author, 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'AdafruitST7735Rlibrary', u'Adafruit ST7735R Library Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'AdafruitST7735RLibrary', u'Adafruit ST7735R Library Documentation',
author, 'AdafruitST7735RLibrary', 'One line description of project.',
'Miscellaneous'),
]
|
[
"melissa@melissagirl.com"
] |
melissa@melissagirl.com
|
67249075941b16fc6956ad6e62942e8090c2c8cc
|
ec61946a176935044d08cf1244d2185f2460df32
|
/pyleecan/Methods/Machine/MachineSRM/get_machine_type.py
|
3469489164e082846042f4764cbb5bc2b647bfaf
|
[
"Apache-2.0"
] |
permissive
|
Lunreth/pyleecan
|
d3974a144cb8a6c332339ab0426f1630b7516fc9
|
1faedde4b24acc6361fa1fdd4e980eaec4ca3a62
|
refs/heads/master
| 2023-06-07T01:46:32.453763
| 2021-07-01T21:29:51
| 2021-07-01T21:29:51
| 383,880,732
| 1
| 0
|
Apache-2.0
| 2021-07-07T17:47:01
| 2021-07-07T17:47:01
| null |
UTF-8
|
Python
| false
| false
| 979
|
py
|
# -*- coding: utf-8 -*-
def get_machine_type(self):
"""Return a string with the main information about the machine architecture
Parameters
----------
self : MachineSRM
A MachineSRM object
Returns
-------
type_str: str
SRM Zs/Zr/p (int/ext rotor)
"""
type_str = "SRM "
if self.stator.slot is None:
type_str += "0s / "
elif self.stator.slot.Zs is not None:
type_str += str(self.stator.slot.Zs) + "s / "
else:
type_str += "?s / "
if self.rotor.slot is None:
type_str += "0r / "
elif self.rotor.slot.Zs is not None:
type_str += str(self.rotor.slot.Zs) + "r / "
else:
type_str += "?r / "
if self.stator.winding.p is not None:
type_str += str(self.stator.winding.p) + "p"
else:
type_str += "?p"
if self.stator.is_internal:
type_str += " (ext rotor)"
else:
type_str += " (int rotor)"
return type_str
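    # For example, illustrative values Zs=6, Zr=4, p=2 with an internal rotor would
    # yield "SRM 6s / 4r / 2p (int rotor)" (values not taken from a real machine).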
|
[
"pierre.bonneel@gmail.com"
] |
pierre.bonneel@gmail.com
|