#!/usr/bin/env python
# coding: utf-8
# # Training models
# ### Imports
# In[ ]:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.linear_model import LinearRegression
# ### Data loading
#
# We'll start by using the Boston housing dataset again.
# In[ ]:
df = pd.read_csv('data/boston.csv')
# ### Data cleaning
#
# Change all column names to lowercase, convert spaces to underscores, replace "%" with "pct", replace "-" with an underscore, check for missing values.
# In[ ]:
df.columns = [x.lower().replace(' ', '_').replace('%', 'pct').replace('-', '_') for x in df.columns]
# In[ ]:
df.head()
# ## Training models <a id="training"></a>
#
# As we learned last week, linear regression is the process of fitting a line to data in the "best" way possible. We measured "best" in terms of mean squared error (MSE), which (up to dividing by the number of data points, which doesn't change the best-fitting line) is given by
# $$
# \displaystyle\sum_i (y_i - (mx_i + b))^2
# $$
# where $y_i$ are the values to be predicted (median home value in our example last week) and $x_i$ are the data being used to make predictions (things like poverty rate, distance to downtown, etc). We then showed that you could simply take derivatives and find critical points to solve for what values of $m$ and $b$ will make this "error" as small as possible, i.e. minimize it.
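# To make this formula concrete, here is a small sketch (the `x`, `y`, `m`, and `b` values are made up for illustration, not taken from the chapter) that computes the mean squared error directly:

```python
import numpy as np

# Made-up data and a candidate line y = m*x + b
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
m, b = 2.0, 0.0

# Mean of the squared differences between actual values and the line's predictions
mse = np.mean((y - (m * x + b)) ** 2)
print(mse)
```

# Trying other values of `m` and `b` and watching this number change is a good way to build intuition for what minimizing the error means.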
#
# None of this is common in machine learning. In fact, linear regression is largely the only case of machine learning where we can actually *solve* for what value of the **model parameters**, the variables used in the model, will give the smallest error. Instead, we do what is called "model training".
#
# Model training works like this:
# 1. Find data which you want your model to predict.
# 2. Pick a model. Last week this was linear regression. As the semester goes on you'll learn about several other models.
# 3. Do gradient descent (covered last week) to optimize your parameters.
#
# Let's do a brief overview of each step.
# ### Finding data to train your model on
# The first step is to find data. Normally you've got a general problem in mind that you want to answer. Your first step should be looking for data related to that problem. If you don't have data then you can't do anything else either.
#
# Let's define a few terms that we will be using throughout the rest of this semester:
# - **features:** Features are simply the "inputs" in your data. So in the Titanic example, this would be things like age, fare, sex, etc.
# - **labels:** Labels are the values you want to predict. The term "label" comes from when you are trying to predict a categorical variable, such as the breed of a dog, or the survival or death of a passenger. However we also use it for numerical variables, such as home value.
# - **ground truth:** This refers to the "correct" values of the labels. For instance, suppose we collected data on passengers on the Titanic. We could build a machine learning model to predict whether or not each person survived, and the model would predict a label ("survived" or "died"). However, these are just *predictions*. By "ground truth" we mean the actual correct labels. That is, for each person described in the data, did they *actually* survive or die? Whatever the answer to this is is called the ground truth.
#
# So we want data with features and ground truth. Once we have that, we can move on to step 2.
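# To make these terms concrete, here is a tiny made-up DataFrame (the numbers are invented, not real Titanic data) showing features sitting alongside ground-truth labels:

```python
import pandas as pd

# Hypothetical Titanic-style data: 'age', 'fare', and 'sex' are features,
# while 'survived' is the ground-truth label a model would try to predict
data = pd.DataFrame({
    'age': [22, 38, 4],
    'fare': [7.25, 71.28, 16.70],
    'sex': ['male', 'female', 'female'],
    'survived': [0, 1, 1],
})

X = data[['age', 'fare', 'sex']]  # features
y = data['survived']              # ground-truth labels
print(X.shape, y.shape)
```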
# ### Picking a model
#
# Machine learning "models" are simply functions. Linear regression is an especially simple function represented by a simple equation ($y=mx+b$). When dealing with inputs with many features, $m$ and $x$ are vectors, which makes things seem complicated. But in reality it's just a line. Another model we will deal with extensively this semester is called a "decision tree". We will hold off on the details for now, but a decision tree is simply a function that repeatedly asks "yes/no" questions of the data. For instance, suppose we want to use a decision tree to determine whether or not a passenger on the Titanic survives. Below is a possible decision tree:
#
# 
# You can see that the first question is about the person's gender, then if they are a female the model predicts they will survive. If they are a male, the model then asks about their age, and so forth. This may not *look* like a function, but it is. Recall that a function is simply something that takes in input and returns a single output (think about the "vertical line test"). We could write this as an *equation* (which is probably how you typically think about functions) as follows:
# $$
# f(\text{sex}, \text{age}, \text{sibsp}) = \text{piecewise function}
# $$
# In this decision tree we have the following model parameters:
# - Which columns should we ask questions about?
# - What order should we ask these questions? Do we start with sex, age, or sibsp?
# - When we ask about a numerical column (such as "is age > 9.5" or "is sibsp > 2.5"), what value should be our cutoff? That is, why aren't we asking "is age > 14" or "is sibsp < 5"?
# - After each question, what should we do next? Should we predict a value or go to another question?
# - Whenever we decide to predict a value, what value should we predict?
#
# As you can see, model parameters can be quite complicated. It is impossible to set up an equation and "solve" for each of these like we did for linear regression. So instead, we train the model. That leads us to step 3.
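# To drive home that a decision tree really is a function, here is the tree above written out as plain Python. The questions and cutoffs (9.5 and 2.5) come from the discussion; the prediction at each leaf is an assumption about how the diagram resolves:

```python
def titanic_tree(sex, age, sibsp):
    # First question: is the passenger female?
    if sex == 'female':
        return 'survived'
    # Male passengers: ask about age next
    if age > 9.5:
        return 'died'
    # Young males: ask about number of siblings/spouses aboard
    if sibsp > 2.5:
        return 'died'
    return 'survived'

print(titanic_tree('female', 30, 0))
```

# Each call walks the same path you would follow by tracing arrows in the diagram, and every input produces exactly one output, which is all it takes to be a function.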
# ### Gradient descent
#
# We saw last class that gradient descent allows us to find values of the model parameters which make the loss function (mean squared error) as small as possible. By doing gradient descent we are able to find model parameters that make our model "fit" our data very well. By "fit", we mean that the loss function is very small.
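# As a refresher, here is a bare-bones sketch of gradient descent fitting $y = mx + b$. The data and learning rate are made up for illustration (the points lie exactly on $y = 2x + 1$, so we can check the answer):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])  # exactly y = 2x + 1

m, b = 0.0, 0.0
lr = 0.01  # learning rate

for _ in range(5000):
    pred = m * x + b
    # Gradients of the mean squared error with respect to m and b
    grad_m = -2 * np.mean(x * (y - pred))
    grad_b = -2 * np.mean(y - pred)
    # Step each parameter downhill
    m -= lr * grad_m
    b -= lr * grad_b

print(round(m, 2), round(b, 2))
```

# After enough steps, `m` and `b` land very close to the true values 2 and 1, which is the "fit" the loss function is measuring.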
# ### Putting it all together
#
# In practice, the first two steps are done by you, and the final step is done automatically using sklearn. Let's do a couple examples. You may remember from last week that when doing linear regression there were a few options, in particular `fit_intercept` and `normalize`. How do these affect our model? Let's try doing one of each of the four possibilities (True/False for each of the two options) and see what we get.
# In[ ]:
lr1 = LinearRegression(fit_intercept=False, normalize=False)
lr2 = LinearRegression(fit_intercept=True, normalize=False)
lr3 = LinearRegression(fit_intercept=False, normalize=True)
lr4 = LinearRegression(fit_intercept=True, normalize=True)
# In[ ]:
X = df[['crime_rate', 'avg_num_rooms', 'pct_below_poverty_line']]
y = df['median_home_value']
# In[ ]:
lr1.fit(X, y)
lr2.fit(X, y)
lr3.fit(X, y)
lr4.fit(X, y)
# We've now trained four models. That's great (and simple!), but how do we know which one is best? That is, which one makes the best predictions? Or, said more formally, how do we *evaluate* our models?
# ## Evaluating your model <a id="model_evaluation"></a>
#
# Now that we have trained our model and used it to make predictions, how can we tell if it's making "good" predictions? Let's start with the simplest way, which is just comparing the predicted values to the actual values, which were stored in `y`. We'll loop through the first ten predictions.
# In[ ]:
lr1_pred = lr1.predict(X)
for i in range(10):
    print(f'Actual = {y[i]}, Predicted = {lr1_pred[i]:.2f}')
# We can see that most predictions seem reasonable, though definitely not perfect. It's a little annoying to look through the predictions this way. Let's plot both the predicted and actual values and see what we get. The problem is, how can we do this? To plot we put the `X` on the x-axis and the `y` on the y-axis. But our `X` is three-dimensional, which means we'd be plotting in four dimensions! That won't work. Instead, what we'll do is just plot our y's (both `y` and `y_pred`) against each other. Let's do it, then discuss it.
# In[ ]:
fig = plt.figure(figsize=(12, 10))
plt.scatter(y, lr1_pred)
plt.xlabel('Actual home prices')
plt.ylabel('Predicted home prices')
plt.title('Actual vs predicted using linear regression');
# What are we looking at? Each point represents a home. The x value is the actual home price, and the y value is the predicted home price using our linear regression model. Let's look at a single point to make this clear. I'll do this by plotting twice: once for all except the first prediction, and once for the first prediction (in a different color to make it easy to see). I'll also make the size bigger using `s`.
# In[ ]:
fig = plt.figure(figsize=(12, 10))
plt.scatter(y[1:], lr1_pred[1:], color='gray')
plt.scatter(y[0], lr1_pred[0], color='red', s=100)
plt.xlabel('Actual home prices')
plt.ylabel('Predicted home prices')
plt.title('Actual vs predicted using linear regression');
# What is this red point?
# In[ ]:
print(f'Actual home value = {y[0]}, Predicted home value = {lr1_pred[0]}')
# It is a home which cost \\$24,000 and had a predicted home price of a little over \\$28,857. We can see the information on it by looking at the data.
# In[ ]:
X.iloc[0]
# So how should we make sense of the scatter plot above? Think about the following: suppose the regression line made perfect predictions. That is, suppose that for every single home, the regression line could predict exactly the correct value. Here is a simple example:
# In[ ]:
y = df['median_home_value']
y_pred_fake = y # The (fake) predicted values are exactly the real values.
# Make the scatterplot
fig = plt.figure(figsize=(12, 10))
plt.scatter(y, y_pred_fake)
plt.xlabel('Actual home prices')
plt.ylabel('(Fake) Predicted home prices')
plt.title('Actual vs (fake) predicted using linear regression');
# A perfect line! That is because both the x and y values are identical. So a model which returned perfect predictions would have this scatter plot being a perfect line.
#
# Because of this, a common way to measure how strong a model's predictions are is by making this scatter plot and computing the correlation coefficient $R^2$. Let's do that now using what we learned last homework.
# In[ ]:
fig = plt.figure(figsize=(12, 10))
plt.scatter(y, lr1_pred)
plt.xlabel('Actual home prices')
plt.ylabel('Predicted home prices')
plt.title(f'Actual vs predicted using linear regression: R^2 = {np.corrcoef(y, lr1_pred)[0,1]**2:.3f}');
# Not bad! That is a high correlation coefficient, as reflected by the fact that the scatterplot is quite linear. This means that our predictions are generally pretty good.
#
# At this point you don't really have a grasp on what's "good" and "not good". We'll do one more example below, but I highly encourage you to play with all of this on your own. Use both this dataset and others to try it all out.
#
# Let's pick some columns that probably *aren't* that useful in predicting home value and see what we get.
# In[ ]:
lr_weak = LinearRegression()
X_weak = df[['pct_industrial', 'distance_to_downtown', 'pupil_teacher_ratio']]
y = df['median_home_value'] # y is the same regardless, it's just the home prices
lr_weak.fit(X_weak, y)
y_weak_pred = lr_weak.predict(X_weak)
# In[ ]:
fig = plt.figure(figsize=(12, 10))
plt.scatter(y, y_weak_pred)
plt.xlabel('Actual home prices')
plt.ylabel('Predicted home prices')
plt.title(f'Actual vs predicted using linear regression: R^2 = {np.corrcoef(y, y_weak_pred)[0, 1]**2:.3f}');
# A much lower $R^2$ value, and a much less linear graph. We can see that generally higher actual home prices seem to match with higher predicted home prices, and similarly for lower-priced homes. But the correlation is pretty weak.
#
# Since computing the correlation coefficient between the actual and predicted y values is so common and important, sklearn has it built-in. You can access the $R^2$ value using the `.score(...)` method. Go look at the documentation on it now on the sklearn page. At this point you should be comfortable enough reading that documentation to try it on your own.
# In[ ]:
lr1.score(X, y)
# This shows one particular way to compare how "good" different models' predictions are. That is, we can compare their $R^2$ value by using `.score()`.
# In[ ]:
print(f'Score for lr1 (fit_intercept=False, normalize=False): {lr1.score(X, y)}')
print(f'Score for lr2 (fit_intercept=True, normalize=False): {lr2.score(X, y)}')
print(f'Score for lr3 (fit_intercept=False, normalize=True): {lr3.score(X, y)}')
print(f'Score for lr4 (fit_intercept=True, normalize=True): {lr4.score(X, y)}')
# It turns out that all four have very similar scores! In other words, it seems like those changes make only very minor differences. Perhaps normalizing (*without* fitting the intercept) is very slightly the worst, but the difference is so small it likely isn't statistically significant.
# ## Training vs testing <a id="train_test"></a>
#
# Now that you know how model training works, let's do some more examples. Suppose you go back through your old exams and you find what score you got. You then estimate how many hours you studied alone, how many hours you studied with a group, and how many hours you were studying for other courses that week. You then decide to build a linear model that will predict your score on your next exam, given how many hours you studied each way.
# In[ ]:
# Hours studied
exam_df = pd.DataFrame({'score': [56, 86, 84], 'hours_alone': [3, 8, 4], 'hours_group': [3, 6, 10], 'hours_other': [7, 2, 5]})
X = exam_df[['hours_alone', 'hours_group', 'hours_other']]
y = exam_df['score']
lr = LinearRegression()
lr.fit(X, y)
# Let's now look at the $R^2$ value (the score) to see how well this model fits the data.
# In[ ]:
lr.score(X, y)
# My god, it's perfect! We have a perfect model that can determine exactly what score we will get based on knowing this study info! Let's confirm that.
# In[ ]:
exam_df
# In[ ]:
print(f'Actual score for first exam: {y[0]}, Predicted score for first exam: {lr.predict(X)[0]:.2f}')
print(f'Actual score for second exam: {y[1]}, Predicted score for second exam: {lr.predict(X)[1]:.2f}')
print(f'Actual score for third exam: {y[2]}, Predicted score for third exam: {lr.predict(X)[2]:.2f}')
# We've done it! Let's go publish our results and become heroes to every student who wants to know how long they need to study!
#
# Okay, enough sarcasm, obviously something is wrong here. The numbers above are just made up (go ahead, try tweaking them and see that your model is essentially always perfect), and yet it can perfectly predict our scores? That doesn't make sense. What's happening is that *we are testing our model on the same data that we trained it with*. To understand this, consider the following real-world situation: Suppose your professor tells you your next exam is in a week. She gives you a practice exam, but you're too busy to really spend much time with it. However, having seen it, you memorized the answers. You don't know how to do the problems, but you remember that the answer to the first problem is 42, the second is $x^2+y$, and so forth. Finally, the day of the test arrives. You sit down nervous about how much your grade is going to tank after this exam. She passes out the exams and you flip it over only to find...it's exactly the same as the practice test! Thrilled, you write down the correct answers and turn in the test after just 30 seconds. You come back next week to see that you got a perfect score!
#
# Does this mean that you *understand* the material? Of course not! You simply memorized the answers. More importantly, you don't know how to *generalize* to other topics. For example, perhaps you memorized that if $f(x)=2x^3$ then $f'(1) = 6$. However, if you got a new question "$f(x)=2x^4$, compute $f'(1)$" you would have no idea how to do it. This is exactly what happened with our model above. We trained it using some data for which it "memorized" the answers by choosing values of $m$ and $b$ which would produce perfect results. It then simply spit those back out when asked for predictions. Let's illustrate this by putting in the data for another exam you studied for.
# In[ ]:
# Create a DataFrame for this single exam
next_exam_df = pd.DataFrame({'score': [76], 'hours_alone': [7], 'hours_group': [4], 'hours_other': [19]})
# Concatenate it to the already existing DataFrame
exam_df = pd.concat([exam_df, next_exam_df])
exam_df = exam_df.reset_index(drop=True)
exam_df.head()
# In[ ]:
X = exam_df[['hours_alone', 'hours_group', 'hours_other']]
y = exam_df['score']
print(f'Actual score for fourth exam: {y[3]}, Predicted score for fourth exam: {lr.predict(X)[3]:.2f}')
# Not even close! The reason is that our model did not learn to *generalize* what it learned. In reality, there's a bit more going on here. This is a linear system with three columns (variables) and three rows (exams). So it just solved the system and found a solution. However, in general models are *overdetermined*, meaning there are more rows than columns. Thus there is no exact solution, and the model always has to approximate one. Regardless, the idea is the same: it is a mistake to *test* a model on the same data that you *trained* it on. Said in simple terms, the professor should have given you a real exam which was different from (but similar to) the practice exam.
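# To see the difference, here is a made-up overdetermined system: four exams (rows) but only two unknowns. `np.linalg.lstsq` can only find the best approximation, not an exact solution:

```python
import numpy as np

# 4 equations but only 2 unknowns (m and b): an overdetermined system
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0],
              [4.0, 1.0]])
y = np.array([2.0, 4.1, 5.9, 8.2])

# lstsq finds the (m, b) minimizing the sum of squared errors,
# since no line passes exactly through all four points
(m, b), residual, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(m, b)
```

# With only three rows (as in the exam example) the system can be solved exactly, which is why the "perfect" score was an illusion rather than generalization.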
# ## Train-test split <a id="train_test_split"></a>
#
# The fix to this is to take your data and **split** it into training and testing sets. The idea is that you will *train* your model on the **training set**, and *test* (or *evaluate*) your model on the **test set**.
#
# Splitting is very simple--it just consists of randomly putting some rows aside as the test set, and leaving the rest as the training set. Let's use some data on avocado sales to do this.
# In[ ]:
df = pd.read_csv('data/avocado.csv')
df.head()
# Here is a brief rundown of the columns:
# - **Date:** The week when sales were measured
# - **AveragePrice:** The average price of a single avocado
# - **Total Volume:** The total number of avocados sold in that week
# - **4046:** The number of small avocados sold (4046 is the PLU code)
# - **4225:** The number of large avocados sold
# - **4770:** The number of extra-large avocados sold
# - **Total Bags:** The total number of bags of avocados sold
# - **Small Bags/Large Bags/XLarge Bags:** The total number of small/large/xlarge bags sold
# - **type:** Whether the avocados were conventional or organic
# - **year:** The year
# - **region:** The region of sales
# We'll start by quickly cleaning and checking the data, and renaming some columns.
# In[ ]:
df = df.drop('Unnamed: 0', axis='columns')
df.columns = ['date', 'avg_price', 'total_volume', 'small', 'large', 'xlarge', 'total_bags', 'small_bags', 'large_bags', 'xlarge_bags', 'type', 'year', 'region']
df['date'] = pd.to_datetime(df['date'])
# In[ ]:
df.head()
# In[ ]:
df.isna().sum()
# In[ ]:
df.dtypes
# Let's now split into training and testing sets. One way would be to just take the first rows and set them aside.
# In[ ]:
num_rows = df.shape[0]
print(f'# rows = {num_rows}')
# In[ ]:
# Keep 30% of the data for testing
first_30_pct = int(0.3 * num_rows)
# Make the test set be the first 30%...
test_df = df.iloc[:first_30_pct]
# ... and the training set the last 70%
train_df = df.iloc[first_30_pct:]
# In[ ]:
test_df.shape
# In[ ]:
train_df.shape
# While this may seem reasonable, there are potential problems here. For example, what if your data was sorted by date? Or by neighborhood? Or by home value? By taking the first 30% we're potentially taking the first 30% of the dates or lowest 30% of home values, or only certain neighborhoods. We always want our samples to be representative of the population, so this could be a problem.
#
# The simplest fix is just to take a random sample. If you happen to know that your data has some other structure that you want to preserve (like neighborhoods), perhaps you could sample from those neighborhoods instead and be fancier. But 99% of the time the best idea is just to take a random sample.
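# Before reaching for sklearn, it's worth seeing that a random split is easy to do by hand with pandas (the DataFrame below is a stand-in, not the avocado data):

```python
import pandas as pd

df = pd.DataFrame({'a': range(10), 'b': range(10, 20)})  # stand-in data

# Randomly sample 70% of the rows for training...
train_df = df.sample(frac=0.7, random_state=42)
# ...and use every row not in the training set for testing
test_df = df.drop(train_df.index)

print(train_df.shape, test_df.shape)
```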
#
# The simplest way to take a random sample is by using sklearn. They have a helper function called `train_test_split()`. You just need to supply your data, and what percentage of the data you want to be in the training set. The function will handle the rest.
# In[ ]:
from sklearn.model_selection import train_test_split
# In[ ]:
train_df, test_df = train_test_split(df, test_size=0.3)
# In[ ]:
test_df.shape
# In[ ]:
train_df.shape
# Now that we have our data split into a training and testing set let's train on that train set and test on the test set. We'll try to predict the number of large avocados sold based on information about the small avocados.
# In[ ]:
# Split into training and testing data (30% test)
train_df, test_df = train_test_split(df, test_size=0.3)
# Grab the columns we want for X and y for both train and test
X_train = train_df[['small', 'small_bags', 'avg_price']]
y_train = train_df['large']
X_test = test_df[['small', 'small_bags', 'avg_price']]
y_test = test_df['large']
# Create the linear regression object
lr = LinearRegression()
# Fit (i.e. train) the model on the training data
lr.fit(X_train, y_train)
# See how well it did (R^2 score) by looking at the test data
lr.score(X_test, y_test)
# ## Examining your predictions <a id="examining_predictions"></a>
#
# That's a really strong score! Let's look at the prediction vs ground truth scatterplot to see how it lines up.
# In[ ]:
fig = plt.figure(figsize=(12, 10))
# Note that we look at the predictions on the _test_ set
test_pred = lr.predict(X_test)
plt.scatter(y_test, test_pred)
plt.xlabel('Actual large avocado sales')
plt.ylabel('Predicted large avocado sales')
plt.title(f'Actual vs predicted using linear regression: R^2 = {np.corrcoef(y_test, test_pred)[0,1]**2:.3f}');
# Hmm, that's interesting. It looks like the majority of the actual large avocado sales are fairly low (the bottom-left). There's also a small set of outliers with large sales.
#
# This example shows why you always need to actually *look at* what your model is predicting. While it returns a very high $R^2$ value, the actual scatterplot of predictions doesn't look great. It seems like the majority of the avocados are in the clump in the lower-left, and it doesn't look like the predictions are great there.
#
# Let's dive into this further. First we want to see how "large" prices are distributed. A boxplot seems like a good choice for this.
# In[ ]:
ax = df.boxplot('large', figsize=(12, 8), vert=False)
ax.set_title('Large avocado sales');
# So the avocado sales are almost entirely on the smaller side, with a bunch of outliers. Let's try grouping by a few different factors and see if we can figure out whether some particular factor (organic vs conventional, a certain region, a certain year) is where all of those high sales are located.
# In[ ]:
# Group by type
df.groupby('type')['large'].mean()
# Yes, organic avocados are sold in far lower numbers, but the "conventional" mean of 574805 is about $5.7 \cdot 10^5$, which is still way, way on the left of the boxplot. So that's not the issue. Let's try regions.
# In[ ]:
# Group by region
df.groupby('region')['large'].mean()
# Skimming through the regions, the sales seem to be reasonably similar across the board. In the boxplot the outlier sales are on the order of $2 \cdot 10^7$, and we're not seeing anything like that here. California is about $10^6$, which is $0.1 \cdot 10^7$, so also not an outlier. Let's try year.
# In[ ]:
# Group by year
df.groupby('year')['large'].mean()
# Nope, still nothing jumping out. Let's try just directly grabbing those outliers and looking at them, maybe something will jump out.
# In[ ]:
outlier_df = df[df['large'] > 0.5 * 10**7]
# In[ ]:
outlier_df.head()
# In[ ]:
df.head()
# That's interesting, the first few in `outlier_df` all have a region of `TotalUS`. Is that a coincidence?
# In[ ]:
outlier_df['region'].value_counts()
# Doesn't look like it! Let's see how many `TotalUS` rows there are all together in the original data.
# In[ ]:
df[df['region'] == 'TotalUS'].shape[0]
# Let's directly look at those.
# In[ ]:
df[df['region'] == 'TotalUS']['large'].plot(kind='hist', figsize=(12, 8))
# So that's why, when we grouped by region, `TotalUS` didn't jump out. Yes, there are plenty of outliers, but there are also plenty on the low range which "cancel out" the outliers.
#
# Regardless, it seems silly to compare individual regions (Albany, Boise, etc.) to the entire US. Let's remove those and just work with actual regions.
# In[ ]:
region_df = df[df['region'] != 'TotalUS']
# In[ ]:
region_lr = LinearRegression()
# Split into training and testing data (30% test)
train_df, test_df = train_test_split(region_df, test_size=0.3)
# Wait a second, we're duplicating code from above! That means we should make a function.
# In[ ]:
def lr_train_test_split(df, X_cols, y_col, test_size=0.3):
    # Split into training and testing data
    train_df, test_df = train_test_split(df, test_size=test_size)
    # Grab the columns we want for X and y for both train and test
    X_train = train_df[X_cols]
    y_train = train_df[y_col]
    X_test = test_df[X_cols]
    y_test = test_df[y_col]
    # Create the linear regression object
    lr = LinearRegression()
    # Fit (i.e. train) the model on the training data
    lr.fit(X_train, y_train)
    return lr, X_train, y_train, X_test, y_test
# In[ ]:
region_lr, X_train, y_train, X_test, y_test = lr_train_test_split(region_df, X_cols=['small', 'small_bags', 'avg_price'], y_col='large')
# In[ ]:
region_lr.score(X_test, y_test)
# We have a lower, but still strong $R^2$ value. Let's look at the scatterplot.
# In[ ]:
def lr_scatterplot(lr, X_test, y_test):
    fig = plt.figure(figsize=(12, 10))
    test_pred = lr.predict(X_test)
    plt.scatter(y_test, test_pred)
    plt.xlabel('Actual sales')
    plt.ylabel('Predicted sales')
    plt.title(f'Actual vs predicted using linear regression: R^2 = {np.corrcoef(y_test, test_pred)[0,1]**2:.3f}');
# In[ ]:
lr_scatterplot(region_lr, X_test, y_test)
# That looks decent. The cone shape is not great, as it means the spread of predicted values widens as the actual values grow. But regardless, we have a model that does decently well.
#
# This sort of "inspecting your model" is absolutely necessary. If you ever train a model and then stop, you are doing data science wrong. What makes a good data scientist is *not* how many models you know or how quickly you write code, it's how well you understand your model and data. Someone who is able to dive into and understand their model and data is far more useful as a data scientist than someone who simply knows commands. Make it a habit to spend several times more time *analyzing* your model and data than *modeling* your data.
# ## Exercises
#
# 1. Repeat the models comparing small sales to large sales, but with different columns. For instance, use medium to predict large. Try to not just copy-paste the code above. First, do as much as you can from memory. Then, when you get stuck, go up and see how to proceed. Type that part, and then again try to do the next step on your own. Repeat.
# 2. Do exercise 1 again, but with new columns. This sort of "try on your own, look for help when you're stuck" process is the best way to learn these things, so do it over and over.
# 3. Use more or fewer columns.
# 4. Get another dataset and do linear regression on it. As you may have guessed from these exercises, linear regression is incredibly important, and the best thing you can do right now is to practice it.
#! /usr/bin/python
"""PILdriver, an image-processing calculator using PIL.
An instance of class PILDriver is essentially a software stack machine
(Polish-notation interpreter) for sequencing PIL image
transformations. The state of the instance is the interpreter stack.
The only method one will normally invoke after initialization is the
`execute' method. This takes an argument list of tokens, pushes them
onto the instance's stack, and then tries to clear the stack by
successive evaluation of PILdriver operators. Any part of the stack
not cleaned off persists and is part of the evaluation context for
the next call of the execute method.
PILDriver doesn't catch any exceptions, on the theory that these
are actually diagnostic information that should be interpreted by
the calling code.
When called as a script, the command-line arguments are passed to
a PILDriver instance. If there are no command-line arguments, the
module runs an interactive interpreter, each line of which is split into
space-separated tokens and passed to the execute method.
In the method descriptions below, a first line beginning with the string
`usage:' means this method can be invoked with the token that follows
it. Following <>-enclosed arguments describe how the method interprets
the entries on the stack. Each argument specification begins with a
type specification: either `int', `float', `string', or `image'.
All operations consume their arguments off the stack (use `dup' to
keep copies around). Use `verbose 1' to see the stack state displayed
before each operation.
Usage examples:
`show crop 0 0 200 300 open test.png' loads test.png, crops out a portion
of its upper-left-hand corner and displays the cropped portion.
`save rotated.png rotate 30 open test.tiff' loads test.tiff, rotates it
30 degrees, and saves the result as rotated.png (in PNG format).
"""
# by Eric S. Raymond <esr@thyrsus.com>
# $Id$
# TO DO:
# 1. Add PILFont capabilities, once that's documented.
# 2. Add PILDraw operations.
# 3. Add support for composing and decomposing multiple-image files.
#
from __future__ import print_function
from PIL import Image
class PILDriver:
verbose = 0
def do_verbose(self):
"""usage: verbose <int:num>
Set verbosity flag from top of stack.
"""
self.verbose = int(self.do_pop())
# The evaluation stack (internal only)
stack = [] # Stack of pending operations
def push(self, item):
"Push an argument onto the evaluation stack."
self.stack = [item] + self.stack
def top(self):
"Return the top-of-stack element."
return self.stack[0]
# Stack manipulation (callable)
def do_clear(self):
"""usage: clear
Clear the stack.
"""
self.stack = []
def do_pop(self):
"""usage: pop
Discard the top element on the stack.
"""
top = self.stack[0]
self.stack = self.stack[1:]
return top
def do_dup(self):
"""usage: dup
Duplicate the top-of-stack item.
"""
if hasattr(self.stack[0], 'format'):  # If it's an image, do a real copy
dup = self.stack[0].copy()
else:
dup = self.stack[0]
self.stack = [dup] + self.stack
def do_swap(self):
"""usage: swap
Swap the top-of-stack item with the next one down.
"""
self.stack = [self.stack[1], self.stack[0]] + self.stack[2:]
# Image module functions (callable)
def do_new(self):
"""usage: new <int:xsize> <int:ysize> <int:color>:
Create and push a greyscale image of given size and color.
"""
xsize = int(self.do_pop())
ysize = int(self.do_pop())
color = int(self.do_pop())
self.push(Image.new("L", (xsize, ysize), color))
def do_open(self):
"""usage: open <string:filename>
Open the indicated image, read it, push the image on the stack.
"""
self.push(Image.open(self.do_pop()))
def do_blend(self):
"""usage: blend <image:pic1> <image:pic2> <float:alpha>
Replace two images and an alpha with the blended image.
"""
image1 = self.do_pop()
image2 = self.do_pop()
alpha = float(self.do_pop())
self.push(Image.blend(image1, image2, alpha))
def do_composite(self):
"""usage: composite <image:pic1> <image:pic2> <image:mask>
Replace two images and a mask with their composite.
"""
image1 = self.do_pop()
image2 = self.do_pop()
mask = self.do_pop()
self.push(Image.composite(image1, image2, mask))
def do_merge(self):
"""usage: merge <string:mode> <image:pic1> [<image:pic2> [<image:pic3> [<image:pic4>]]]
Merge top-of-stack images in a way described by the mode.
"""
mode = self.do_pop()
bandlist = []
for band in mode:
bandlist.append(self.do_pop())
self.push(Image.merge(mode, bandlist))
# Image class methods
def do_convert(self):
"""usage: convert <string:mode> <image:pic1>
Convert the top image to the given mode.
"""
mode = self.do_pop()
image = self.do_pop()
self.push(image.convert(mode))
def do_copy(self):
"""usage: copy <image:pic1>
Make and push a true copy of the top image.
"""
self.do_dup()
def do_crop(self):
"""usage: crop <int:left> <int:upper> <int:right> <int:lower> <image:pic1>
Crop and push a rectangular region from the current image.
"""
left = int(self.do_pop())
upper = int(self.do_pop())
right = int(self.do_pop())
lower = int(self.do_pop())
image = self.do_pop()
self.push(image.crop((left, upper, right, lower)))
def do_draft(self):
"""usage: draft <string:mode> <int:xsize> <int:ysize>
Configure the loader for a given mode and size.
"""
mode = self.do_pop()
xsize = int(self.do_pop())
ysize = int(self.do_pop())
self.push(self.draft(mode, (xsize, ysize)))
def do_filter(self):
"""usage: filter <string:filtername> <image:pic1>
Process the top image with the given filter.
"""
from PIL import ImageFilter
filter = eval("ImageFilter." + self.do_pop().upper())
image = self.do_pop()
self.push(image.filter(filter))
def do_getbbox(self):
"""usage: getbbox
Push left, upper, right, and lower pixel coordinates of the top image.
"""
bounding_box = self.do_pop().getbbox()
self.push(bounding_box[3])
self.push(bounding_box[2])
self.push(bounding_box[1])
self.push(bounding_box[0])
def do_getextrema(self):
"""usage: extrema
Push minimum and maximum pixel values of the top image.
"""
extrema = self.do_pop().getextrema()
self.push(extrema[1])
self.push(extrema[0])
def do_offset(self):
"""usage: offset <int:xoffset> <int:yoffset> <image:pic1>
Offset the pixels in the top image.
"""
from PIL import ImageChops
xoff = int(self.do_pop())
yoff = int(self.do_pop())
image = self.do_pop()
self.push(ImageChops.offset(image, xoff, yoff))
def do_paste(self):
"""usage: paste <image:figure> <int:xoffset> <int:yoffset> <image:ground>
Paste figure image into ground with upper left at given offsets.
"""
figure = self.do_pop()
xoff = int(self.do_pop())
yoff = int(self.do_pop())
ground = self.do_pop()
if figure.mode == "RGBA":
ground.paste(figure, (xoff, yoff), figure)
else:
ground.paste(figure, (xoff, yoff))
self.push(ground)
def do_resize(self):
"""usage: resize <int:xsize> <int:ysize> <image:pic1>
Resize the top image.
"""
ysize = int(self.do_pop())
xsize = int(self.do_pop())
image = self.do_pop()
self.push(image.resize((xsize, ysize)))
def do_rotate(self):
"""usage: rotate <int:angle> <image:pic1>
Rotate image through a given angle
"""
angle = int(self.do_pop())
image = self.do_pop()
self.push(image.rotate(angle))
def do_save(self):
"""usage: save <string:filename> <image:pic1>
Save image with default options.
"""
filename = self.do_pop()
image = self.do_pop()
image.save(filename)
def do_save2(self):
"""usage: save2 <string:filename> <string:options> <image:pic1>
Save image with specified options.
"""
filename = self.do_pop()
options = self.do_pop()
image = self.do_pop()
image.save(filename, None, options)
def do_show(self):
"""usage: show <image:pic1>
Display and pop the top image.
"""
self.do_pop().show()
def do_thumbnail(self):
"""usage: thumbnail <int:xsize> <int:ysize> <image:pic1>
Modify the top image in the stack to contain a thumbnail of itself.
"""
ysize = int(self.do_pop())
xsize = int(self.do_pop())
self.top().thumbnail((xsize, ysize))
def do_transpose(self):
"""usage: transpose <string:operator> <image:pic1>
Transpose the top image.
"""
transpose = self.do_pop().upper()
image = self.do_pop()
self.push(image.transpose(transpose))
# Image attributes
def do_format(self):
"""usage: format <image:pic1>
Push the format of the top image onto the stack.
"""
self.push(self.do_pop().format)
def do_mode(self):
"""usage: mode <image:pic1>
Push the mode of the top image onto the stack.
"""
self.push(self.do_pop().mode)
def do_size(self):
"""usage: size <image:pic1>
Push the image size on the stack as (y, x).
"""
size = self.do_pop().size
self.push(size[0])
self.push(size[1])
# ImageChops operations
def do_invert(self):
"""usage: invert <image:pic1>
Invert the top image.
"""
from PIL import ImageChops
self.push(ImageChops.invert(self.do_pop()))
def do_lighter(self):
"""usage: lighter <image:pic1> <image:pic2>
Pop the two top images, push an image of the lighter pixels of both.
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
self.push(ImageChops.lighter(image1, image2))
def do_darker(self):
"""usage: darker <image:pic1> <image:pic2>
Pop the two top images, push an image of the darker pixels of both.
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
self.push(ImageChops.darker(image1, image2))
def do_difference(self):
"""usage: difference <image:pic1> <image:pic2>
Pop the two top images, push the difference image
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
self.push(ImageChops.difference(image1, image2))
def do_multiply(self):
"""usage: multiply <image:pic1> <image:pic2>
Pop the two top images, push the multiplication image.
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
self.push(ImageChops.multiply(image1, image2))
def do_screen(self):
"""usage: screen <image:pic1> <image:pic2>
Pop the two top images, superimpose their inverted versions.
"""
from PIL import ImageChops
image2 = self.do_pop()
image1 = self.do_pop()
self.push(ImageChops.screen(image1, image2))
def do_add(self):
"""usage: add <image:pic1> <image:pic2> <int:offset> <float:scale>
Pop the two top images, produce the scaled sum with offset.
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
scale = float(self.do_pop())
offset = int(self.do_pop())
self.push(ImageChops.add(image1, image2, scale, offset))
def do_subtract(self):
"""usage: subtract <image:pic1> <image:pic2> <int:offset> <float:scale>
Pop the two top images, produce the scaled difference with offset.
"""
from PIL import ImageChops
image1 = self.do_pop()
image2 = self.do_pop()
scale = float(self.do_pop())
offset = int(self.do_pop())
self.push(ImageChops.subtract(image1, image2, scale, offset))
# ImageEnhance classes
def do_color(self):
"""usage: color <image:pic1>
Enhance color in the top image.
"""
from PIL import ImageEnhance
factor = float(self.do_pop())
image = self.do_pop()
enhancer = ImageEnhance.Color(image)
self.push(enhancer.enhance(factor))
def do_contrast(self):
"""usage: contrast <image:pic1>
Enhance contrast in the top image.
"""
from PIL import ImageEnhance
factor = float(self.do_pop())
image = self.do_pop()
enhancer = ImageEnhance.Contrast(image)
self.push(enhancer.enhance(factor))
def do_brightness(self):
"""usage: brightness <image:pic1>
Enhance brightness in the top image.
"""
from PIL import ImageEnhance
factor = float(self.do_pop())
image = self.do_pop()
enhancer = ImageEnhance.Brightness(image)
self.push(enhancer.enhance(factor))
def do_sharpness(self):
"""usage: sharpness <image:pic1>
Enhance sharpness in the top image.
"""
from PIL import ImageEnhance
factor = float(self.do_pop())
image = self.do_pop()
enhancer = ImageEnhance.Sharpness(image)
self.push(enhancer.enhance(factor))
# The interpreter loop
def execute(self, tokens):
"Interpret a list of PILDriver commands."
tokens.reverse()
while len(tokens) > 0:
self.push(tokens[0])
tokens = tokens[1:]
if self.verbose:
print("Stack: " + repr(self.stack))
top = self.top()
if not isinstance(top, str):
continue
funcname = "do_" + top
if not hasattr(self, funcname):
continue
else:
self.do_pop()
func = getattr(self, funcname)
func()
if __name__ == '__main__':
import sys
try:
import readline
except ImportError:
pass # not available on all platforms
# If we see command-line arguments, interpret them as a stack state
# and execute. Otherwise go interactive.
driver = PILDriver()
if len(sys.argv[1:]) > 0:
driver.execute(sys.argv[1:])
else:
print("PILDriver says hello.")
while True:
try:
if sys.version_info[0] >= 3:
line = input('pildriver> ')
else:
line = raw_input('pildriver> ')
except EOFError:
print("\nPILDriver says goodbye.")
break
driver.execute(line.split())
print(driver.stack)
# The following sets edit modes for GNU EMACS
# Local Variables:
# mode:python
# End:
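The reverse-Polish dispatch in `execute` above is why the usage examples read backwards (`show crop 0 0 200 300 open test.png`): tokens are reversed, pushed one at a time, and whenever the top of the stack names a `do_*` method it is popped and invoked against the arguments beneath it. A standalone sketch of just that dispatch loop (PIL-free, with a made-up `add` operator standing in for the image operations) makes the token order concrete:

```python
# Minimal sketch of PILDriver-style reverse-Polish dispatch (no PIL needed).
# The hypothetical `add` operator consumes two numbers from the stack.
class MiniDriver:
    def __init__(self):
        self.stack = []

    def do_pop(self):
        "Discard and return the top element on the stack."
        return self.stack.pop(0)

    def do_add(self):
        "usage: add <int:a> <int:b> -- pop two numbers, push their sum."
        a = int(self.do_pop())
        b = int(self.do_pop())
        self.stack.insert(0, a + b)

    def execute(self, tokens):
        "Reverse the tokens, push them, and evaluate do_* operators."
        tokens = list(tokens)
        tokens.reverse()
        while tokens:
            self.stack.insert(0, tokens.pop(0))
            top = self.stack[0]
            if not isinstance(top, str):
                continue
            funcname = "do_" + top
            if hasattr(self, funcname):
                self.do_pop()          # consume the operator token itself
                getattr(self, funcname)()

driver = MiniDriver()
driver.execute("add 2 3".split())
print(driver.stack)  # [5]
```

The operands `2` and `3` are pushed before `add` is seen, so by the time the operator fires its arguments are already waiting on the stack, exactly as in the full driver.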
# --- Section_07_code/extract_freq_features.py (repo: PacktPublishing/Python-Machine-Learning-Solutions-V-, MIT license) ---
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from python_speech_features import mfcc, logfbank
# Read input sound file
sampling_freq, audio = wavfile.read("input_freq.wav")
# Extract MFCC and Filter bank features
mfcc_features = mfcc(audio, sampling_freq)
filterbank_features = logfbank(audio, sampling_freq)
# Print parameters
print('\nMFCC:\nNumber of windows =', mfcc_features.shape[0])
print('Length of each feature =', mfcc_features.shape[1])
print('\nFilter bank:\nNumber of windows =', filterbank_features.shape[0])
print('Length of each feature =', filterbank_features.shape[1])
# Plot the features
mfcc_features = mfcc_features.T
plt.matshow(mfcc_features)
plt.title('MFCC')
filterbank_features = filterbank_features.T
plt.matshow(filterbank_features)
plt.title('Filter bank')
plt.show()
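The "Number of windows" printed above follows from simple frame arithmetic: the signal is split into overlapping analysis windows, and with python_speech_features' documented defaults (25 ms window, 10 ms step, assumed here since the script does not override them) the count is one frame plus one per step that fits in the remainder. A small illustrative helper (`num_frames` is my name, not a library function):

```python
# Sketch of the frame-count arithmetic behind mfcc()/logfbank() output shape.
# Assumes the python_speech_features defaults: winlen=0.025 s, winstep=0.01 s.
import math

def num_frames(num_samples, sampling_freq, winlen=0.025, winstep=0.01):
    frame_len = int(round(winlen * sampling_freq))    # samples per window
    frame_step = int(round(winstep * sampling_freq))  # samples per hop
    if num_samples <= frame_len:
        return 1
    return 1 + int(math.ceil((num_samples - frame_len) / frame_step))

# e.g. a 1-second clip at 16 kHz: (16000 - 400) / 160 = 97.5 -> ceil -> 98 + 1
print(num_frames(16000, 16000))  # 99
```

Each of those windows then yields one row of 13 MFCCs (or 26 filter-bank energies), which is what the shape printouts report.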
# --- edure_env/lib/python2.7/site-packages/dropbox/base.py (Dropbox Python SDK, vendored in repo: priyabrata88/EDURE) ---
# -*- coding: utf-8 -*-
# Auto-generated by Stone, do not modify.
# flake8: noqa
# pylint: skip-file
from abc import ABCMeta, abstractmethod
import warnings
from . import (
async,
auth,
common,
file_properties,
file_requests,
files,
paper,
sharing,
team,
team_common,
team_log,
team_policies,
users,
users_common,
)
class DropboxBase(object):
__metaclass__ = ABCMeta
@abstractmethod
def request(self, route, namespace, arg, arg_binary=None):
pass
# ------------------------------------------
# Routes in auth namespace
def auth_token_from_oauth1(self,
oauth1_token,
oauth1_token_secret):
"""
Creates an OAuth 2.0 access token from the supplied OAuth 1.0 access
token.
:param str oauth1_token: The supplied OAuth 1.0 access token.
:param str oauth1_token_secret: The token secret associated with the
supplied access token.
:rtype: :class:`dropbox.auth.TokenFromOAuth1Result`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.auth.TokenFromOAuth1Error`
"""
arg = auth.TokenFromOAuth1Arg(oauth1_token,
oauth1_token_secret)
r = self.request(
auth.token_from_oauth1,
'auth',
arg,
None,
)
return r
def auth_token_revoke(self):
"""
Disables the access token used to authenticate the call.
:rtype: None
"""
arg = None
r = self.request(
auth.token_revoke,
'auth',
arg,
None,
)
return None
# ------------------------------------------
# Routes in file_properties namespace
def file_properties_properties_add(self,
path,
property_groups):
"""
Add property groups to a Dropbox file. See
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team` to create new templates.
:param str path: A unique identifier for the file or folder.
:param list property_groups: The property groups which are to be added
to a Dropbox file.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.AddPropertiesError`
"""
arg = file_properties.AddPropertiesArg(path,
property_groups)
r = self.request(
file_properties.properties_add,
'file_properties',
arg,
None,
)
return None
def file_properties_properties_overwrite(self,
path,
property_groups):
"""
Overwrite property groups associated with a file. This endpoint should
be used instead of :meth:`file_properties_properties_update` when
property groups are being updated via a "snapshot" instead of via a
"delta". In other words, this endpoint will delete all omitted fields
from a property group, whereas :meth:`file_properties_properties_update`
will only delete fields that are explicitly marked for deletion.
:param str path: A unique identifier for the file or folder.
:param list property_groups: The property groups "snapshot" updates to
force apply.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.InvalidPropertyGroupError`
"""
arg = file_properties.OverwritePropertyGroupArg(path,
property_groups)
r = self.request(
file_properties.properties_overwrite,
'file_properties',
arg,
None,
)
return None
def file_properties_properties_remove(self,
path,
property_template_ids):
"""
Permanently removes the specified property group from the file. To
remove specific property field key value pairs, see
:meth:`file_properties_properties_update`. To update a template, see
:meth:`file_properties_templates_update_for_user` or
:meth:`file_properties_templates_update_for_team`. Templates can't be
removed once created.
:param str path: A unique identifier for the file or folder.
:param list property_template_ids: A list of identifiers for a template
created by :meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.RemovePropertiesError`
"""
arg = file_properties.RemovePropertiesArg(path,
property_template_ids)
r = self.request(
file_properties.properties_remove,
'file_properties',
arg,
None,
)
return None
def file_properties_properties_search(self,
queries,
template_filter=file_properties.TemplateFilter.filter_none):
"""
Search across property templates for particular property field values.
:param list queries: Queries to search.
:param template_filter: Filter results to contain only properties
associated with these template IDs.
:type template_filter: :class:`dropbox.file_properties.TemplateFilter`
:rtype: :class:`dropbox.file_properties.PropertiesSearchResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.PropertiesSearchError`
"""
arg = file_properties.PropertiesSearchArg(queries,
template_filter)
r = self.request(
file_properties.properties_search,
'file_properties',
arg,
None,
)
return r
def file_properties_properties_search_continue(self,
cursor):
"""
Once a cursor has been retrieved from
:meth:`file_properties_properties_search`, use this to paginate through
all search results.
:param str cursor: The cursor returned by your last call to
:meth:`file_properties_properties_search` or
:meth:`file_properties_properties_search_continue`.
:rtype: :class:`dropbox.file_properties.PropertiesSearchResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.PropertiesSearchContinueError`
"""
arg = file_properties.PropertiesSearchContinueArg(cursor)
r = self.request(
file_properties.properties_search_continue,
'file_properties',
arg,
None,
)
return r
def file_properties_properties_update(self,
path,
update_property_groups):
"""
Add, update or remove properties associated with the supplied file and
templates. This endpoint should be used instead of
:meth:`file_properties_properties_overwrite` when property groups are
being updated via a "delta" instead of via a "snapshot" . In other
words, this endpoint will not delete any omitted fields from a property
group, whereas :meth:`file_properties_properties_overwrite` will delete
any fields that are omitted from a property group.
:param str path: A unique identifier for the file or folder.
:param list update_property_groups: The property groups "delta" updates
to apply.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.UpdatePropertiesError`
"""
arg = file_properties.UpdatePropertiesArg(path,
update_property_groups)
r = self.request(
file_properties.properties_update,
'file_properties',
arg,
None,
)
return None
def file_properties_templates_add_for_team(self,
name,
description,
fields):
"""
Add a template associated with a team. See
:meth:`file_properties_properties_add` to add properties to a file or
folder. Note: this endpoint will create team-owned templates.
:rtype: :class:`dropbox.file_properties.AddTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.ModifyTemplateError`
"""
arg = file_properties.AddTemplateArg(name,
description,
fields)
r = self.request(
file_properties.templates_add_for_team,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_add_for_user(self,
name,
description,
fields):
"""
Add a template associated with a user. See
:meth:`file_properties_properties_add` to add properties to a file. This
endpoint can't be called on a team member or admin's behalf.
:rtype: :class:`dropbox.file_properties.AddTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.ModifyTemplateError`
"""
arg = file_properties.AddTemplateArg(name,
description,
fields)
r = self.request(
file_properties.templates_add_for_user,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_get_for_team(self,
template_id):
"""
Get the schema for a specified template.
:param str template_id: An identifier for template added by route See
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:rtype: :class:`dropbox.file_properties.GetTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = file_properties.GetTemplateArg(template_id)
r = self.request(
file_properties.templates_get_for_team,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_get_for_user(self,
template_id):
"""
Get the schema for a specified template. This endpoint can't be called
on a team member or admin's behalf.
:param str template_id: An identifier for template added by route See
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:rtype: :class:`dropbox.file_properties.GetTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = file_properties.GetTemplateArg(template_id)
r = self.request(
file_properties.templates_get_for_user,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_list_for_team(self):
"""
Get the template identifiers for a team. To get the schema of each
template use :meth:`file_properties_templates_get_for_team`.
:rtype: :class:`dropbox.file_properties.ListTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = None
r = self.request(
file_properties.templates_list_for_team,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_list_for_user(self):
"""
Get the template identifiers for a team. To get the schema of each
template use :meth:`file_properties_templates_get_for_user`. This
endpoint can't be called on a team member or admin's behalf.
:rtype: :class:`dropbox.file_properties.ListTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = None
r = self.request(
file_properties.templates_list_for_user,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_remove_for_team(self,
template_id):
"""
Permanently removes the specified template created from
:meth:`file_properties_templates_add_for_user`. All properties
associated with the template will also be removed. This action cannot be
undone.
:param str template_id: An identifier for a template created by
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = file_properties.RemoveTemplateArg(template_id)
r = self.request(
file_properties.templates_remove_for_team,
'file_properties',
arg,
None,
)
return None
def file_properties_templates_remove_for_user(self,
template_id):
"""
Permanently removes the specified template created from
:meth:`file_properties_templates_add_for_user`. All properties
associated with the template will also be removed. This action cannot be
undone.
:param str template_id: An identifier for a template created by
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.TemplateError`
"""
arg = file_properties.RemoveTemplateArg(template_id)
r = self.request(
file_properties.templates_remove_for_user,
'file_properties',
arg,
None,
)
return None
def file_properties_templates_update_for_team(self,
template_id,
name=None,
description=None,
add_fields=None):
"""
Update a template associated with a team. This route can update the
template name, the template description and add optional properties to
templates.
:param str template_id: An identifier for template added by See
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:param Nullable name: A display name for the template. template names
can be up to 256 bytes.
:param Nullable description: Description for the new template. Template
descriptions can be up to 1024 bytes.
:param Nullable add_fields: Property field templates to be added to the
group template. There can be up to 32 properties in a single
template.
:rtype: :class:`dropbox.file_properties.UpdateTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.ModifyTemplateError`
"""
arg = file_properties.UpdateTemplateArg(template_id,
name,
description,
add_fields)
r = self.request(
file_properties.templates_update_for_team,
'file_properties',
arg,
None,
)
return r
def file_properties_templates_update_for_user(self,
template_id,
name=None,
description=None,
add_fields=None):
"""
Update a template associated with a user. This route can update the
template name, the template description and add optional properties to
templates. This endpoint can't be called on a team member or admin's
behalf.
:param str template_id: An identifier for template added by See
:meth:`file_properties_templates_add_for_user` or
:meth:`file_properties_templates_add_for_team`.
:param Nullable name: A display name for the template. template names
can be up to 256 bytes.
:param Nullable description: Description for the new template. Template
descriptions can be up to 1024 bytes.
:param Nullable add_fields: Property field templates to be added to the
group template. There can be up to 32 properties in a single
template.
:rtype: :class:`dropbox.file_properties.UpdateTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_properties.ModifyTemplateError`
"""
arg = file_properties.UpdateTemplateArg(template_id,
name,
description,
add_fields)
r = self.request(
file_properties.templates_update_for_user,
'file_properties',
arg,
None,
)
return r
# ------------------------------------------
# Routes in file_requests namespace
def file_requests_create(self,
title,
destination,
deadline=None,
open=True):
"""
Creates a file request for this user.
:param str title: The title of the file request. Must not be empty.
:param str destination: The path of the folder in the Dropbox where
uploaded files will be sent. For apps with the app folder
permission, this will be relative to the app folder.
:param Nullable deadline: The deadline for the file request. Deadlines
can only be set by Pro and Business accounts.
:param bool open: Whether or not the file request should be open. If the
file request is closed, it will not accept any file submissions, but
it can be opened later.
:rtype: :class:`dropbox.file_requests.FileRequest`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.file_requests.CreateFileRequestError`
"""
arg = file_requests.CreateFileRequestArgs(title,
destination,
deadline,
open)
r = self.request(
file_requests.create,
'file_requests',
arg,
None,
)
return r
def file_requests_get(self,
id):
"""
Returns the specified file request.
:param str id: The ID of the file request to retrieve.
:rtype: :class:`dropbox.file_requests.FileRequest`
"""
arg = file_requests.GetFileRequestArgs(id)
r = self.request(
file_requests.get,
'file_requests',
arg,
None,
)
return r
def file_requests_list(self):
"""
Returns a list of file requests owned by this user. For apps with the
app folder permission, this will only return file requests with
destinations in the app folder.
:rtype: :class:`dropbox.file_requests.ListFileRequestsResult`
"""
arg = None
r = self.request(
file_requests.list,
'file_requests',
arg,
None,
)
return r
def file_requests_update(self,
id,
title=None,
destination=None,
deadline=file_requests.UpdateFileRequestDeadline.no_update,
open=None):
"""
Update a file request.
:param str id: The ID of the file request to update.
:param Nullable title: The new title of the file request. Must not be
empty.
:param Nullable destination: The new path of the folder in the Dropbox
where uploaded files will be sent. For apps with the app folder
permission, this will be relative to the app folder.
:param deadline: The new deadline for the file request.
:type deadline: :class:`dropbox.file_requests.UpdateFileRequestDeadline`
:param Nullable open: Whether to set this file request as open or
closed.
:rtype: :class:`dropbox.file_requests.FileRequest`
"""
arg = file_requests.UpdateFileRequestArgs(id,
title,
destination,
deadline,
open)
r = self.request(
file_requests.update,
'file_requests',
arg,
None,
)
return r
# ------------------------------------------
# Routes in files namespace
def files_alpha_get_metadata(self,
path,
include_media_info=False,
include_deleted=False,
include_has_explicit_shared_members=False,
include_property_groups=None,
include_property_templates=None):
"""
Returns the metadata for a file or folder. This is an alpha endpoint
compatible with the properties API. Note: Metadata for the root folder
is unsupported.
:param Nullable include_property_templates: If set to a valid list of
template IDs, ``FileMetadata.property_groups`` is set for files with
custom properties.
:rtype: :class:`dropbox.files.Metadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.AlphaGetMetadataError`
"""
warnings.warn(
'alpha/get_metadata is deprecated. Use get_metadata.',
DeprecationWarning,
)
arg = files.AlphaGetMetadataArg(path,
include_media_info,
include_deleted,
include_has_explicit_shared_members,
include_property_groups,
include_property_templates)
r = self.request(
files.alpha_get_metadata,
'files',
arg,
None,
)
return r
def files_alpha_upload(self,
f,
path,
mode=files.WriteMode.add,
autorename=False,
client_modified=None,
mute=False,
property_groups=None):
"""
Create a new file with the contents provided in the request. Note that
this endpoint is part of the properties API alpha and is slightly
different from :meth:`files_upload`. Do not use this to upload a file
larger than 150 MB. Instead, create an upload session with
:meth:`files_upload_session_start`.
:param bytes f: Contents to upload.
:rtype: :class:`dropbox.files.FileMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UploadErrorWithProperties`
"""
warnings.warn(
'alpha/upload is deprecated. Use alpha/upload.',
DeprecationWarning,
)
arg = files.CommitInfoWithProperties(path,
mode,
autorename,
client_modified,
mute,
property_groups)
r = self.request(
files.alpha_upload,
'files',
arg,
f,
)
return r
def files_copy(self,
from_path,
to_path,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Copy a file or folder to a different location in the user's Dropbox. If
the source path is a folder all its contents will be copied.
:param bool allow_shared_folder: If true, :meth:`files_copy` will copy
contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``from_path`` contains shared folder. This field is always true for
:meth:`files_move`.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.Metadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RelocationError`
"""
warnings.warn(
'copy is deprecated. Use copy_v2.',
DeprecationWarning,
)
arg = files.RelocationArg(from_path,
to_path,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.copy,
'files',
arg,
None,
)
return r
def files_copy_batch(self,
entries,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Copy multiple files or folders to different locations at once in the
user's Dropbox. If ``RelocationBatchArg.allow_shared_folder`` is false,
this route is atomic. If one entry fails, the whole transaction will
abort. If ``RelocationBatchArg.allow_shared_folder`` is true, atomicity
is not guaranteed, but you will be able to copy the contents of shared
folders to new locations. This route returns a job ID immediately and
performs the async copy job in the background. Please use
:meth:`files_copy_batch_check` to check the job status.
:param list entries: List of entries to be moved or copied. Each entry
is :class:`dropbox.files.RelocationPath`.
:param bool allow_shared_folder: If true, :meth:`files_copy_batch` will
copy contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``RelocationPath.from_path`` contains shared folder. This field is
always true for :meth:`files_move_batch`.
:param bool autorename: If there's a conflict with any file, have the
Dropbox server try to autorename that file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.RelocationBatchLaunch`
"""
arg = files.RelocationBatchArg(entries,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.copy_batch,
'files',
arg,
None,
)
return r
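The launch-then-poll pattern described above is commonly wrapped in a small helper. The following is a hypothetical sketch, not part of the SDK: it assumes `dbx` is an authenticated client and that the launch and status objects expose the `is_async_job_id()`, `get_async_job_id()` and `is_in_progress()` accessors that stone-generated union types provide.

```python
import time

def copy_batch_and_wait(dbx, entries, poll_interval=1.0):
    """Launch files_copy_batch and block until the async job finishes.

    Hypothetical helper: `dbx` is assumed to be an authenticated Dropbox
    client. Returns the terminal job-status object (or the launch result
    itself if the batch completed synchronously).
    """
    launch = dbx.files_copy_batch(entries)
    if not launch.is_async_job_id():
        # The server finished the batch inline; no polling needed.
        return launch
    job_id = launch.get_async_job_id()
    while True:
        status = dbx.files_copy_batch_check(job_id)
        if not status.is_in_progress():
            return status
        time.sleep(poll_interval)
```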
def files_copy_batch_check(self,
async_job_id):
"""
Returns the status of an asynchronous job for :meth:`files_copy_batch`.
On success, it returns a list of results, one for each entry.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.files.RelocationBatchJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
files.copy_batch_check,
'files',
arg,
None,
)
return r
def files_copy_reference_get(self,
path):
"""
Get a copy reference to a file or folder. This reference string can be
used to save that file or folder to another user's Dropbox by passing it
to :meth:`files_copy_reference_save`.
:param str path: The path to the file or folder you want to get a copy
reference to.
:rtype: :class:`dropbox.files.GetCopyReferenceResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.GetCopyReferenceError`
"""
arg = files.GetCopyReferenceArg(path)
r = self.request(
files.copy_reference_get,
'files',
arg,
None,
)
return r
def files_copy_reference_save(self,
copy_reference,
path):
"""
Save a copy reference returned by :meth:`files_copy_reference_get` to
the user's Dropbox.
:param str copy_reference: A copy reference returned by
:meth:`files_copy_reference_get`.
:param str path: Path in the user's Dropbox that is the destination.
:rtype: :class:`dropbox.files.SaveCopyReferenceResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.SaveCopyReferenceError`
"""
arg = files.SaveCopyReferenceArg(copy_reference,
path)
r = self.request(
files.copy_reference_save,
'files',
arg,
None,
)
return r
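The two copy-reference endpoints are designed to be used together, typically across accounts. A hypothetical sketch (not part of the SDK) assuming `src_dbx` and `dst_dbx` are two authenticated clients and the paths are illustrative:

```python
def transfer_via_copy_reference(src_dbx, dst_dbx, src_path, dst_path):
    """Save a file from one account into another without re-uploading it.

    files_copy_reference_get returns a result whose `copy_reference`
    string can be handed to any other account's
    files_copy_reference_save.
    """
    ref = src_dbx.files_copy_reference_get(src_path)
    return dst_dbx.files_copy_reference_save(ref.copy_reference, dst_path)
```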
def files_copy_v2(self,
from_path,
to_path,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Copy a file or folder to a different location in the user's Dropbox. If
the source path is a folder all its contents will be copied.
:param bool allow_shared_folder: If true, :meth:`files_copy` will copy
contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``from_path`` contains shared folder. This field is always true for
:meth:`files_move`.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.RelocationResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RelocationError`
"""
arg = files.RelocationArg(from_path,
to_path,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.copy_v2,
'files',
arg,
None,
)
return r
def files_create_folder(self,
path,
autorename=False):
"""
Create a folder at a given path.
:param str path: Path in the user's Dropbox to create.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the folder to avoid the conflict.
:rtype: :class:`dropbox.files.FolderMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.CreateFolderError`
"""
warnings.warn(
'create_folder is deprecated. Use create_folder_v2.',
DeprecationWarning,
)
arg = files.CreateFolderArg(path,
autorename)
r = self.request(
files.create_folder,
'files',
arg,
None,
)
return r
def files_create_folder_v2(self,
path,
autorename=False):
"""
Create a folder at a given path.
:param str path: Path in the user's Dropbox to create.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the folder to avoid the conflict.
:rtype: :class:`dropbox.files.CreateFolderResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.CreateFolderError`
"""
arg = files.CreateFolderArg(path,
autorename)
r = self.request(
files.create_folder_v2,
'files',
arg,
None,
)
return r
def files_delete(self,
path):
"""
Delete the file or folder at a given path. If the path is a folder, all
its contents will be deleted too. A successful response indicates that
the file or folder was deleted. The returned metadata will be the
corresponding :class:`dropbox.files.FileMetadata` or
:class:`dropbox.files.FolderMetadata` for the item at time of deletion,
and not a :class:`dropbox.files.DeletedMetadata` object.
:param str path: Path in the user's Dropbox to delete.
:rtype: :class:`dropbox.files.Metadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.DeleteError`
"""
warnings.warn(
'delete is deprecated. Use delete_v2.',
DeprecationWarning,
)
arg = files.DeleteArg(path)
r = self.request(
files.delete,
'files',
arg,
None,
)
return r
def files_delete_batch(self,
entries):
"""
Delete multiple files/folders at once. This route is asynchronous: it
returns a job ID immediately and runs the delete batch in the background.
Use :meth:`files_delete_batch_check` to check the job status.
:type entries: list
:rtype: :class:`dropbox.files.DeleteBatchLaunch`
"""
arg = files.DeleteBatchArg(entries)
r = self.request(
files.delete_batch,
'files',
arg,
None,
)
return r
def files_delete_batch_check(self,
async_job_id):
"""
Returns the status of an asynchronous job for
:meth:`files_delete_batch`. On success, it returns a list of results, one
for each entry.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.files.DeleteBatchJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
files.delete_batch_check,
'files',
arg,
None,
)
return r
def files_delete_v2(self,
path):
"""
Delete the file or folder at a given path. If the path is a folder, all
its contents will be deleted too. A successful response indicates that
the file or folder was deleted. The returned metadata will be the
corresponding :class:`dropbox.files.FileMetadata` or
:class:`dropbox.files.FolderMetadata` for the item at time of deletion,
and not a :class:`dropbox.files.DeletedMetadata` object.
:param str path: Path in the user's Dropbox to delete.
:rtype: :class:`dropbox.files.DeleteResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.DeleteError`
"""
arg = files.DeleteArg(path)
r = self.request(
files.delete_v2,
'files',
arg,
None,
)
return r
def files_download(self,
path,
rev=None):
"""
Download a file from a user's Dropbox.
:param str path: The path of the file to download.
:param Nullable rev: Please specify revision in ``path`` instead.
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.DownloadError`
If you do not consume the entire response body, then you must call close
on the response object, otherwise you will max out your available
connections. We recommend using the `contextlib.closing
<https://docs.python.org/2/library/contextlib.html#contextlib.closing>`_
context manager to ensure this.
"""
arg = files.DownloadArg(path,
rev)
r = self.request(
files.download,
'files',
arg,
None,
)
return r
def files_download_to_file(self,
download_path,
path,
rev=None):
"""
Download a file from a user's Dropbox.
:param str download_path: Path on local machine to save file.
:param str path: The path of the file to download.
:param Nullable rev: Please specify revision in ``path`` instead.
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.DownloadError`
"""
arg = files.DownloadArg(path,
rev)
r = self.request(
files.download,
'files',
arg,
None,
)
self._save_body_to_file(download_path, r[1])
return r[0]
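For large files, the raw :meth:`files_download` variant plus `contextlib.closing` (as its docstring recommends) lets you stream the body to disk instead of buffering it in memory. A hypothetical sketch assuming the second element of the return tuple is a `requests` Response supporting `iter_content`:

```python
import contextlib

def download_streaming(dbx, path, local_path, chunk_size=4096):
    """Stream a Dropbox file to disk in chunks.

    contextlib.closing guarantees the HTTP response is closed even on a
    partial read, which the files_download docstring warns is needed to
    avoid exhausting the available connections.
    """
    metadata, resp = dbx.files_download(path)
    with contextlib.closing(resp):
        with open(local_path, 'wb') as f:
            for chunk in resp.iter_content(chunk_size):
                f.write(chunk)
    return metadata
```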
def files_get_metadata(self,
path,
include_media_info=False,
include_deleted=False,
include_has_explicit_shared_members=False,
include_property_groups=None):
"""
Returns the metadata for a file or folder. Note: Metadata for the root
folder is unsupported.
:param str path: The path of a file or folder on Dropbox.
:param bool include_media_info: If true, ``FileMetadata.media_info`` is
set for photo and video.
:param bool include_deleted: If true,
:class:`dropbox.files.DeletedMetadata` will be returned for deleted
file or folder, otherwise ``LookupError.not_found`` will be
returned.
:param bool include_has_explicit_shared_members: If true, the results
will include a flag for each file indicating whether or not that
file has any explicit members.
:param Nullable include_property_groups: If set to a valid list of
template IDs, ``FileMetadata.property_groups`` is set if there
exists property data associated with the file and each of the listed
templates.
:rtype: :class:`dropbox.files.Metadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.GetMetadataError`
"""
arg = files.GetMetadataArg(path,
include_media_info,
include_deleted,
include_has_explicit_shared_members,
include_property_groups)
r = self.request(
files.get_metadata,
'files',
arg,
None,
)
return r
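Because :meth:`files_get_metadata` is typed to return the base `Metadata` class, callers usually branch on the concrete subclass (`FileMetadata`, `FolderMetadata`, or `DeletedMetadata` when `include_deleted` is true). A minimal sketch; real code would use `isinstance` against the `dropbox.files` classes, while this import-free version dispatches on the class name:

```python
def classify_metadata(md):
    """Classify a files_get_metadata result by its concrete subclass.

    Dispatching on the class name keeps this sketch import-free; real
    code would use isinstance with files.FileMetadata,
    files.FolderMetadata and files.DeletedMetadata.
    """
    kind = type(md).__name__
    if kind == 'FileMetadata':
        return 'file'
    if kind == 'FolderMetadata':
        return 'folder'
    if kind == 'DeletedMetadata':
        return 'deleted'
    return 'unknown'
```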
def files_get_preview(self,
path,
rev=None):
"""
Get a preview for a file. Currently, PDF previews are generated for
files with the following extensions: .ai, .doc, .docm, .docx, .eps,
.odp, .odt, .pps, .ppsm, .ppsx, .ppt, .pptm, .pptx, .rtf. HTML previews
are generated for files with the following extensions: .csv, .ods, .xls,
.xlsm, .xlsx. Other formats will return an unsupported extension error.
:param str path: The path of the file to preview.
:param Nullable rev: Please specify revision in ``path`` instead.
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PreviewError`
If you do not consume the entire response body, then you must call close
on the response object, otherwise you will max out your available
connections. We recommend using the `contextlib.closing
<https://docs.python.org/2/library/contextlib.html#contextlib.closing>`_
context manager to ensure this.
"""
arg = files.PreviewArg(path,
rev)
r = self.request(
files.get_preview,
'files',
arg,
None,
)
return r
def files_get_preview_to_file(self,
download_path,
path,
rev=None):
"""
Get a preview for a file. Currently, PDF previews are generated for
files with the following extensions: .ai, .doc, .docm, .docx, .eps,
.odp, .odt, .pps, .ppsm, .ppsx, .ppt, .pptm, .pptx, .rtf. HTML previews
are generated for files with the following extensions: .csv, .ods, .xls,
.xlsm, .xlsx. Other formats will return an unsupported extension error.
:param str download_path: Path on local machine to save file.
:param str path: The path of the file to preview.
:param Nullable rev: Please specify revision in ``path`` instead.
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PreviewError`
"""
arg = files.PreviewArg(path,
rev)
r = self.request(
files.get_preview,
'files',
arg,
None,
)
self._save_body_to_file(download_path, r[1])
return r[0]
def files_get_temporary_link(self,
path):
"""
Get a temporary link to stream content of a file. This link will expire
in four hours and afterwards you will get 410 Gone. Content-Type of the
link is determined automatically by the file's mime type.
:param str path: The path to the file you want a temporary link to.
:rtype: :class:`dropbox.files.GetTemporaryLinkResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.GetTemporaryLinkError`
"""
arg = files.GetTemporaryLinkArg(path)
r = self.request(
files.get_temporary_link,
'files',
arg,
None,
)
return r
def files_get_thumbnail(self,
path,
format=files.ThumbnailFormat.jpeg,
size=files.ThumbnailSize.w64h64):
"""
Get a thumbnail for an image. This method currently supports files with
the following file extensions: jpg, jpeg, png, tiff, tif, gif and bmp.
Photos that are larger than 20MB in size won't be converted to a
thumbnail.
:param str path: The path to the image file you want to thumbnail.
:param format: The format for the thumbnail image, jpeg (default) or
png. For images that are photos, jpeg should be preferred, while
png is better for screenshots and digital art.
:type format: :class:`dropbox.files.ThumbnailFormat`
:param size: The size for the thumbnail image.
:type size: :class:`dropbox.files.ThumbnailSize`
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ThumbnailError`
If you do not consume the entire response body, then you must call close
on the response object, otherwise you will max out your available
connections. We recommend using the `contextlib.closing
<https://docs.python.org/2/library/contextlib.html#contextlib.closing>`_
context manager to ensure this.
"""
arg = files.ThumbnailArg(path,
format,
size)
r = self.request(
files.get_thumbnail,
'files',
arg,
None,
)
return r
def files_get_thumbnail_to_file(self,
download_path,
path,
format=files.ThumbnailFormat.jpeg,
size=files.ThumbnailSize.w64h64):
"""
Get a thumbnail for an image. This method currently supports files with
the following file extensions: jpg, jpeg, png, tiff, tif, gif and bmp.
Photos that are larger than 20MB in size won't be converted to a
thumbnail.
:param str download_path: Path on local machine to save file.
:param str path: The path to the image file you want to thumbnail.
:param format: The format for the thumbnail image, jpeg (default) or
png. For images that are photos, jpeg should be preferred, while
png is better for screenshots and digital art.
:type format: :class:`dropbox.files.ThumbnailFormat`
:param size: The size for the thumbnail image.
:type size: :class:`dropbox.files.ThumbnailSize`
:rtype: (:class:`dropbox.files.FileMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ThumbnailError`
"""
arg = files.ThumbnailArg(path,
format,
size)
r = self.request(
files.get_thumbnail,
'files',
arg,
None,
)
self._save_body_to_file(download_path, r[1])
return r[0]
def files_get_thumbnail_batch(self,
entries):
"""
Get thumbnails for a list of images. We allow up to 25 thumbnails in a
single batch. This method currently supports files with the following
file extensions: jpg, jpeg, png, tiff, tif, gif and bmp. Photos that are
larger than 20MB in size won't be converted to a thumbnail.
:param list entries: List of files to get thumbnails.
:rtype: :class:`dropbox.files.GetThumbnailBatchResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.GetThumbnailBatchError`
"""
arg = files.GetThumbnailBatchArg(entries)
r = self.request(
files.get_thumbnail_batch,
'files',
arg,
None,
)
return r
def files_list_folder(self,
path,
recursive=False,
include_media_info=False,
include_deleted=False,
include_has_explicit_shared_members=False,
include_mounted_folders=True,
limit=None,
shared_link=None,
include_property_groups=None):
"""
Starts returning the contents of a folder. If the result's
``ListFolderResult.has_more`` field is ``True``, call
:meth:`files_list_folder_continue` with the returned
``ListFolderResult.cursor`` to retrieve more entries. If you're using
``ListFolderArg.recursive`` set to ``True`` to keep a local cache of the
contents of a Dropbox account, iterate through each entry in order and
process them as follows to keep your local state in sync: For each
:class:`dropbox.files.FileMetadata`, store the new entry at the given
path in your local state. If the required parent folders don't exist
yet, create them. If there's already something else at the given path,
replace it and remove all its children. For each
:class:`dropbox.files.FolderMetadata`, store the new entry at the given
path in your local state. If the required parent folders don't exist
yet, create them. If there's already something else at the given path,
replace it but leave the children as they are. Check the new entry's
``FolderSharingInfo.read_only`` and set all its children's read-only
statuses to match. For each :class:`dropbox.files.DeletedMetadata`, if
your local state has something at the given path, remove it and all its
children. If there's nothing at the given path, ignore this entry. Note:
:class:`dropbox.auth.RateLimitError` may be returned if multiple
:meth:`files_list_folder` or :meth:`files_list_folder_continue` calls
with same parameters are made simultaneously by same API app for same
user. If your app implements retry logic, please hold off the retry
until the previous request finishes.
:param str path: A unique identifier for the file.
:param bool recursive: If true, the list folder operation will be
applied recursively to all subfolders and the response will contain
contents of all subfolders.
:param bool include_media_info: If true, ``FileMetadata.media_info`` is
set for photo and video.
:param bool include_deleted: If true, the results will include entries
for files and folders that used to exist but were deleted.
:param bool include_has_explicit_shared_members: If true, the results
will include a flag for each file indicating whether or not that
file has any explicit members.
:param bool include_mounted_folders: If true, the results will include
entries under mounted folders which includes app folder, shared
folder and team folder.
:param Nullable limit: The maximum number of results to return per
request. Note: This is an approximate number and there can be
slightly more entries returned in some cases.
:param Nullable shared_link: A shared link to list the contents of. If
the link is password-protected, the password must be provided. If
this field is present, ``ListFolderArg.path`` will be relative to
root of the shared link. Only non-recursive mode is supported for
shared link.
:param Nullable include_property_groups: If set to a valid list of
template IDs, ``FileMetadata.property_groups`` is set if there
exists property data associated with the file and each of the listed
templates.
:rtype: :class:`dropbox.files.ListFolderResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ListFolderError`
"""
arg = files.ListFolderArg(path,
recursive,
include_media_info,
include_deleted,
include_has_explicit_shared_members,
include_mounted_folders,
limit,
shared_link,
include_property_groups)
r = self.request(
files.list_folder,
'files',
arg,
None,
)
return r
def files_list_folder_continue(self,
cursor):
"""
Once a cursor has been retrieved from :meth:`files_list_folder`, use
this to paginate through all files and retrieve updates to the folder,
following the same rules as documented for :meth:`files_list_folder`.
:param str cursor: The cursor returned by your last call to
:meth:`files_list_folder` or :meth:`files_list_folder_continue`.
:rtype: :class:`dropbox.files.ListFolderResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ListFolderContinueError`
"""
arg = files.ListFolderContinueArg(cursor)
r = self.request(
files.list_folder_continue,
'files',
arg,
None,
)
return r
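The cursor pagination described in the two docstrings above is commonly wrapped in a generator. A hypothetical sketch (not part of the SDK), assuming `dbx` is an authenticated client:

```python
def iter_folder(dbx, path, **list_folder_args):
    """Yield every entry under `path`, following `has_more` cursors.

    Implements the pagination loop the files_list_folder docstring
    describes: call files_list_folder once, then
    files_list_folder_continue with the returned cursor until
    has_more is False.
    """
    result = dbx.files_list_folder(path, **list_folder_args)
    while True:
        for entry in result.entries:
            yield entry
        if not result.has_more:
            return
        result = dbx.files_list_folder_continue(result.cursor)
```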
def files_list_folder_get_latest_cursor(self,
path,
recursive=False,
include_media_info=False,
include_deleted=False,
include_has_explicit_shared_members=False,
include_mounted_folders=True,
limit=None,
shared_link=None,
include_property_groups=None):
"""
A way to quickly get a cursor for the folder's state. Unlike
:meth:`files_list_folder`, :meth:`files_list_folder_get_latest_cursor`
doesn't return any entries. This endpoint is for apps which only need to
know about new files and modifications and don't need to know about
files that already exist in Dropbox.
:param str path: A unique identifier for the file.
:param bool recursive: If true, the list folder operation will be
applied recursively to all subfolders and the response will contain
contents of all subfolders.
:param bool include_media_info: If true, ``FileMetadata.media_info`` is
set for photo and video.
:param bool include_deleted: If true, the results will include entries
for files and folders that used to exist but were deleted.
:param bool include_has_explicit_shared_members: If true, the results
will include a flag for each file indicating whether or not that
file has any explicit members.
:param bool include_mounted_folders: If true, the results will include
entries under mounted folders which includes app folder, shared
folder and team folder.
:param Nullable limit: The maximum number of results to return per
request. Note: This is an approximate number and there can be
slightly more entries returned in some cases.
:param Nullable shared_link: A shared link to list the contents of. If
the link is password-protected, the password must be provided. If
this field is present, ``ListFolderArg.path`` will be relative to
root of the shared link. Only non-recursive mode is supported for
shared link.
:param Nullable include_property_groups: If set to a valid list of
template IDs, ``FileMetadata.property_groups`` is set if there
exists property data associated with the file and each of the listed
templates.
:rtype: :class:`dropbox.files.ListFolderGetLatestCursorResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ListFolderError`
"""
arg = files.ListFolderArg(path,
recursive,
include_media_info,
include_deleted,
include_has_explicit_shared_members,
include_mounted_folders,
limit,
shared_link,
include_property_groups)
r = self.request(
files.list_folder_get_latest_cursor,
'files',
arg,
None,
)
return r
def files_list_folder_longpoll(self,
cursor,
timeout=30):
"""
A longpoll endpoint to wait for changes on an account. In conjunction
with :meth:`files_list_folder_continue`, this call gives you a
low-latency way to monitor an account for file changes. The connection
will block until there are changes available or a timeout occurs. This
endpoint is useful mostly for client-side apps. If you're looking for
server-side notifications, check out our `webhooks documentation
<https://www.dropbox.com/developers/reference/webhooks>`_.
:param str cursor: A cursor as returned by :meth:`files_list_folder` or
:meth:`files_list_folder_continue`. Cursors retrieved by setting
``ListFolderArg.include_media_info`` to ``True`` are not supported.
:param long timeout: A timeout in seconds. The request will block for at
most this length of time, plus up to 90 seconds of random jitter
added to avoid the thundering herd problem. Care should be taken
when using this parameter, as some network infrastructure does not
support long timeouts.
:rtype: :class:`dropbox.files.ListFolderLongpollResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ListFolderLongpollError`
"""
arg = files.ListFolderLongpollArg(cursor,
timeout)
r = self.request(
files.list_folder_longpoll,
'files',
arg,
None,
)
return r
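One iteration of the longpoll loop combines this endpoint with :meth:`files_list_folder_continue`, as the docstring suggests. A hypothetical sketch assuming the result objects expose the `changes`, `entries`, `cursor` and `has_more` attributes described in these docstrings:

```python
def poll_changes_once(dbx, cursor, timeout=30):
    """Block until the folder changes (or the timeout lapses), then
    fetch the new entries.

    Returns (entries, new_cursor); entries is empty when the longpoll
    timed out without any changes.
    """
    result = dbx.files_list_folder_longpoll(cursor, timeout=timeout)
    if not result.changes:
        return [], cursor
    entries = []
    delta = dbx.files_list_folder_continue(cursor)
    while True:
        entries.extend(delta.entries)
        cursor = delta.cursor
        if not delta.has_more:
            return entries, cursor
        delta = dbx.files_list_folder_continue(cursor)
```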
def files_list_revisions(self,
path,
mode=files.ListRevisionsMode.path,
limit=10):
"""
Returns revisions for files based on a file path or a file id. The file
path or file id is identified from the latest file entry at the given
file path or id. This endpoint allows your app to query either by file
path or file id by setting the mode parameter appropriately. In the
``ListRevisionsMode.path`` (default) mode, all revisions at the same
file path as the latest file entry are returned. If revisions with the
same file id are desired, then mode must be set to
``ListRevisionsMode.id``. The ``ListRevisionsMode.id`` mode is useful to
retrieve revisions for a given file across moves or renames.
:param str path: The path to the file you want to see the revisions of.
:param mode: Determines the behavior of the API in listing the revisions
for a given file path or id.
:type mode: :class:`dropbox.files.ListRevisionsMode`
:param long limit: The maximum number of revision entries returned.
:rtype: :class:`dropbox.files.ListRevisionsResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.ListRevisionsError`
"""
arg = files.ListRevisionsArg(path,
mode,
limit)
r = self.request(
files.list_revisions,
'files',
arg,
None,
)
return r
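A small hypothetical helper that flattens the result into `(rev, client_modified)` pairs; it assumes `ListRevisionsResult.entries` holds `FileMetadata` objects carrying those attributes:

```python
def latest_revs(dbx, path, limit=10):
    """Return (rev, client_modified) pairs for a file's revisions.

    Each `rev` string identifies one revision of the file and can be
    passed back to revision-aware endpoints.
    """
    result = dbx.files_list_revisions(path, limit=limit)
    return [(e.rev, e.client_modified) for e in result.entries]
```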
def files_move(self,
from_path,
to_path,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Move a file or folder to a different location in the user's Dropbox. If
the source path is a folder all its contents will be moved.
:param bool allow_shared_folder: If true, :meth:`files_copy` will copy
contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``from_path`` contains shared folder. This field is always true for
:meth:`files_move`.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.Metadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RelocationError`
"""
warnings.warn(
'move is deprecated. Use move_v2.',
DeprecationWarning,
)
arg = files.RelocationArg(from_path,
to_path,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.move,
'files',
arg,
None,
)
return r
def files_move_batch(self,
entries,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Move multiple files or folders to different locations at once in the
user's Dropbox. This route is 'all or nothing', which means if one entry
fails, the whole transaction will abort. This route returns a job ID
immediately and performs the async move job in the background. Please use
:meth:`files_move_batch_check` to check the job status.
:param list entries: List of entries to be moved or copied. Each entry
is :class:`dropbox.files.RelocationPath`.
:param bool allow_shared_folder: If true, :meth:`files_copy_batch` will
copy contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``RelocationPath.from_path`` contains shared folder. This field is
always true for :meth:`files_move_batch`.
:param bool autorename: If there's a conflict with any file, have the
Dropbox server try to autorename that file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.RelocationBatchLaunch`
"""
arg = files.RelocationBatchArg(entries,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.move_batch,
'files',
arg,
None,
)
return r
def files_move_batch_check(self,
async_job_id):
"""
Returns the status of an asynchronous job for :meth:`files_move_batch`.
On success, it returns a list of results, one for each entry.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.files.RelocationBatchJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
files.move_batch_check,
'files',
arg,
None,
)
return r
def files_move_v2(self,
from_path,
to_path,
allow_shared_folder=False,
autorename=False,
allow_ownership_transfer=False):
"""
Move a file or folder to a different location in the user's Dropbox. If
the source path is a folder all its contents will be moved.
:param bool allow_shared_folder: If true, :meth:`files_copy` will copy
contents in shared folder, otherwise
``RelocationError.cant_copy_shared_folder`` will be returned if
``from_path`` contains shared folder. This field is always true for
:meth:`files_move`.
:param bool autorename: If there's a conflict, have the Dropbox server
try to autorename the file to avoid the conflict.
:param bool allow_ownership_transfer: Allow moves by owner even if it
would result in an ownership transfer for the content being moved.
This does not apply to copies.
:rtype: :class:`dropbox.files.RelocationResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RelocationError`
"""
arg = files.RelocationArg(from_path,
to_path,
allow_shared_folder,
autorename,
allow_ownership_transfer)
r = self.request(
files.move_v2,
'files',
arg,
None,
)
return r
def files_permanently_delete(self,
path):
"""
Permanently delete the file or folder at a given path (see
https://www.dropbox.com/en/help/40). Note: This endpoint is only
available for Dropbox Business apps.
:param str path: Path in the user's Dropbox to delete.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.DeleteError`
"""
arg = files.DeleteArg(path)
r = self.request(
files.permanently_delete,
'files',
arg,
None,
)
return None
def files_properties_add(self,
path,
property_groups):
"""
:param str path: A unique identifier for the file or folder.
:param list property_groups: The property groups which are to be added
to a Dropbox file.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.AddPropertiesError`
"""
warnings.warn(
'properties/add is deprecated.',
DeprecationWarning,
)
arg = file_properties.AddPropertiesArg(path,
property_groups)
r = self.request(
files.properties_add,
'files',
arg,
None,
)
return None
def files_properties_overwrite(self,
path,
property_groups):
"""
:param str path: A unique identifier for the file or folder.
:param list property_groups: The property groups "snapshot" updates to
force apply.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.InvalidPropertyGroupError`
"""
warnings.warn(
'properties/overwrite is deprecated.',
DeprecationWarning,
)
arg = file_properties.OverwritePropertyGroupArg(path,
property_groups)
r = self.request(
files.properties_overwrite,
'files',
arg,
None,
)
return None
def files_properties_remove(self,
path,
property_template_ids):
"""
:param str path: A unique identifier for the file or folder.
:param list property_template_ids: A list of identifiers for a template
created by :meth:`files_templates_add_for_user` or
:meth:`files_templates_add_for_team`.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RemovePropertiesError`
"""
warnings.warn(
'properties/remove is deprecated.',
DeprecationWarning,
)
arg = file_properties.RemovePropertiesArg(path,
property_template_ids)
r = self.request(
files.properties_remove,
'files',
arg,
None,
)
return None
def files_properties_template_get(self,
template_id):
"""
:param str template_id: An identifier for template added by route See
:meth:`files_templates_add_for_user` or
:meth:`files_templates_add_for_team`.
:rtype: :class:`dropbox.files.GetTemplateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.TemplateError`
"""
warnings.warn(
'properties/template/get is deprecated.',
DeprecationWarning,
)
arg = file_properties.GetTemplateArg(template_id)
r = self.request(
files.properties_template_get,
'files',
arg,
None,
)
return r
def files_properties_template_list(self):
warnings.warn(
'properties/template/list is deprecated.',
DeprecationWarning,
)
arg = None
r = self.request(
files.properties_template_list,
'files',
arg,
None,
)
return r
def files_properties_update(self,
path,
update_property_groups):
"""
:param str path: A unique identifier for the file or folder.
:param list update_property_groups: The property groups "delta" updates
to apply.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UpdatePropertiesError`
"""
warnings.warn(
'properties/update is deprecated.',
DeprecationWarning,
)
arg = file_properties.UpdatePropertiesArg(path,
update_property_groups)
r = self.request(
files.properties_update,
'files',
arg,
None,
)
return None
def files_restore(self,
path,
rev):
"""
Restore a file to a specific revision.
:param str path: The path to the file you want to restore.
:param str rev: The revision to restore for the file.
:rtype: :class:`dropbox.files.FileMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.RestoreError`
"""
arg = files.RestoreArg(path,
rev)
r = self.request(
files.restore,
'files',
arg,
None,
)
return r
def files_save_url(self,
path,
url):
"""
Save a specified URL into a file in the user's Dropbox. If the given
path already exists, the file will be renamed to avoid the conflict
(e.g. myfile (1).txt).
:param str path: The path in Dropbox where the URL will be saved to.
:param str url: The URL to be saved.
:rtype: :class:`dropbox.files.SaveUrlResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.SaveUrlError`
"""
arg = files.SaveUrlArg(path,
url)
r = self.request(
files.save_url,
'files',
arg,
None,
)
return r
def files_save_url_check_job_status(self,
async_job_id):
"""
Check the status of a :meth:`files_save_url` job.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.files.SaveUrlJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
files.save_url_check_job_status,
'files',
arg,
None,
)
return r
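Routes like :meth:`files_save_url` launch an asynchronous job and hand back a job id to poll with the check route above. A minimal polling sketch follows; the helper name, interval, and timeout are our own choices, and the ``is_in_progress()`` tag check assumes the union types returned by this SDK, which do expose ``is_*`` accessors:

```python
import time

def poll_until_complete(check, async_job_id, interval=1.0, timeout=60.0):
    # Repeatedly call a *_check_job_status route until the returned
    # status union is no longer tagged in_progress.
    deadline = time.time() + timeout
    while True:
        status = check(async_job_id)
        if not status.is_in_progress():
            return status
        if time.time() >= deadline:
            raise TimeoutError('job %r still in progress' % async_job_id)
        time.sleep(interval)
```

The same loop fits any launch-then-poll pair in this file, e.g. passing ``dbx.files_save_url_check_job_status`` as ``check``.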
def files_search(self,
path,
query,
start=0,
max_results=100,
mode=files.SearchMode.filename):
"""
Searches for files and folders. Note: Recent changes may not immediately
be reflected in search results due to a short delay in indexing.
:param str path: The path in the user's Dropbox to search. Should
probably be a folder.
:param str query: The string to search for. The search string is split
on spaces into multiple tokens. For file name searching, the last
token is used for prefix matching (i.e. "bat c" matches "bat cave"
but not "batman car").
:param long start: The starting index within the search results (used
for paging).
:param long max_results: The maximum number of search results to return.
:param mode: The search mode (filename, filename_and_content, or
deleted_filename). Note that searching file content is only
available for Dropbox Business accounts.
:type mode: :class:`dropbox.files.SearchMode`
:rtype: :class:`dropbox.files.SearchResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.SearchError`
"""
arg = files.SearchArg(path,
query,
start,
max_results,
mode)
r = self.request(
files.search,
'files',
arg,
None,
)
return r
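Search results are paged via ``start``/``max_results``, so collecting every match takes a small loop. A sketch, assuming ``SearchResult`` exposes ``matches``, ``more`` and ``start`` as documented (the generator name is ours):

```python
def iter_search_matches(client, path, query, page_size=100):
    # Walk files_search pages until SearchResult.more is False,
    # feeding SearchResult.start back in as the next offset.
    start = 0
    while True:
        result = client.files_search(path, query,
                                     start=start, max_results=page_size)
        for match in result.matches:
            yield match
        if not result.more:
            return
        start = result.start
```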
def files_upload(self,
f,
path,
mode=files.WriteMode.add,
autorename=False,
client_modified=None,
mute=False,
property_groups=None):
"""
Create a new file with the contents provided in the request. Do not use
this to upload a file larger than 150 MB. Instead, create an upload
session with :meth:`files_upload_session_start`.
:param bytes f: Contents to upload.
:param str path: Path in the user's Dropbox to save the file.
:param mode: Selects what to do if the file already exists.
:type mode: :class:`dropbox.files.WriteMode`
:param bool autorename: If there's a conflict, as determined by
    ``mode``, have the Dropbox server try to autorename the file to
    avoid the conflict.
:param Nullable client_modified: The value to store as the
``client_modified`` timestamp. Dropbox automatically records the
time at which the file was written to the Dropbox servers. It can
also record an additional timestamp, provided by Dropbox desktop
clients, mobile clients, and API apps of when the file was actually
created or modified.
:param bool mute: Normally, users are made aware of any file
modifications in their Dropbox account via notifications in the
client software. If ``True``, this tells the clients that this
modification shouldn't result in a user notification.
:param Nullable property_groups: List of custom properties to add to
file.
:rtype: :class:`dropbox.files.FileMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UploadError`
"""
arg = files.CommitInfo(path,
mode,
autorename,
client_modified,
mute,
property_groups)
r = self.request(
files.upload,
'files',
arg,
f,
)
return r
def files_upload_session_append(self,
f,
session_id,
offset):
"""
Append more data to an upload session. A single request should not
upload more than 150 MB.
:param bytes f: Contents to upload.
:param str session_id: The upload session ID (returned by
:meth:`files_upload_session_start`).
:param long offset: The amount of data that has been uploaded so far. We
use this to make sure upload data isn't lost or duplicated in the
event of a network error.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UploadSessionLookupError`
"""
warnings.warn(
'upload_session/append is deprecated. Use upload_session/append_v2.',
DeprecationWarning,
)
arg = files.UploadSessionCursor(session_id,
offset)
r = self.request(
files.upload_session_append,
'files',
arg,
f,
)
return None
def files_upload_session_append_v2(self,
f,
cursor,
close=False):
"""
Append more data to an upload session. When the parameter close is set,
this call will close the session. A single request should not upload
more than 150 MB.
:param bytes f: Contents to upload.
:param cursor: Contains the upload session ID and the offset.
:type cursor: :class:`dropbox.files.UploadSessionCursor`
:param bool close: If true, the current session will be closed, at which
point you won't be able to call
:meth:`files_upload_session_append_v2` anymore with the current
session.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UploadSessionLookupError`
"""
arg = files.UploadSessionAppendArg(cursor,
close)
r = self.request(
files.upload_session_append_v2,
'files',
arg,
f,
)
return None
def files_upload_session_finish(self,
f,
cursor,
commit):
"""
Finish an upload session and save the uploaded data to the given file
path. A single request should not upload more than 150 MB.
:param bytes f: Contents to upload.
:param cursor: Contains the upload session ID and the offset.
:type cursor: :class:`dropbox.files.UploadSessionCursor`
:param commit: Contains the path and other optional modifiers for the
commit.
:type commit: :class:`dropbox.files.CommitInfo`
:rtype: :class:`dropbox.files.FileMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.UploadSessionFinishError`
"""
arg = files.UploadSessionFinishArg(cursor,
commit)
r = self.request(
files.upload_session_finish,
'files',
arg,
f,
)
return r
def files_upload_session_finish_batch(self,
entries):
"""
This route helps you commit many files at once into a user's Dropbox.
Use :meth:`files_upload_session_start` and
:meth:`files_upload_session_append_v2` to upload file contents. We
recommend uploading many files in parallel to increase throughput. Once
the file contents have been uploaded, rather than calling
:meth:`files_upload_session_finish`, use this route to finish all your
upload sessions in a single request. ``UploadSessionStartArg.close`` or
``UploadSessionAppendArg.close`` needs to be true for the last
:meth:`files_upload_session_start` or
:meth:`files_upload_session_append_v2` call. This route will return a
job_id immediately and do the async commit job in background. Use
:meth:`files_upload_session_finish_batch_check` to check the job status.
For the same account, this route should be executed serially. That means
you should not start the next job before the current job finishes. We
allow up to 1000 entries in a single request.
:param list entries: Commit information for each file in the batch.
:rtype: :class:`dropbox.files.UploadSessionFinishBatchLaunch`
"""
arg = files.UploadSessionFinishBatchArg(entries)
r = self.request(
files.upload_session_finish_batch,
'files',
arg,
None,
)
return r
def files_upload_session_finish_batch_check(self,
async_job_id):
"""
Returns the status of an asynchronous job for
:meth:`files_upload_session_finish_batch`. If successful, it returns a
list of results, one for each entry.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.files.UploadSessionFinishBatchJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.files.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
files.upload_session_finish_batch_check,
'files',
arg,
None,
)
return r
def files_upload_session_start(self,
f,
close=False):
"""
Upload sessions allow you to upload a single file in one or more
requests, for example where the size of the file is greater than 150 MB.
This call starts a new upload session with the given data. You can then
use :meth:`files_upload_session_append_v2` to add more data and
:meth:`files_upload_session_finish` to save all the data to a file in
Dropbox. A single request should not upload more than 150 MB. An upload
session can be used for a maximum of 48 hours. Attempting to use an
``UploadSessionStartResult.session_id`` with
:meth:`files_upload_session_append_v2` or
:meth:`files_upload_session_finish` more than 48 hours after its
creation will return a ``UploadSessionLookupError.not_found``.
:param bytes f: Contents to upload.
:param bool close: If true, the current session will be closed, at which
point you won't be able to call
:meth:`files_upload_session_append_v2` anymore with the current
session.
:rtype: :class:`dropbox.files.UploadSessionStartResult`
"""
arg = files.UploadSessionStartArg(close)
r = self.request(
files.upload_session_start,
'files',
arg,
f,
)
return r
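The three session routes above compose into a chunked upload for files over 150 MB. A hedged sketch: with the real SDK you would construct ``files.UploadSessionCursor`` and pass a ``files.CommitInfo`` as the commit; the ``_Cursor`` stand-in, the helper name, and the 4 MB default chunk size are our own:

```python
class _Cursor(object):
    # Stand-in for files.UploadSessionCursor: a session id plus the
    # number of bytes uploaded so far.
    def __init__(self, session_id, offset):
        self.session_id = session_id
        self.offset = offset

def upload_in_chunks(client, data, commit, chunk_size=4 * 1024 * 1024):
    # Start the session with the first chunk, append the rest, then
    # finish with the commit info (path, write mode, ...).
    session_id = client.files_upload_session_start(data[:chunk_size]).session_id
    offset = min(chunk_size, len(data))
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]
        client.files_upload_session_append_v2(chunk, _Cursor(session_id, offset))
        offset += len(chunk)
    return client.files_upload_session_finish(
        b'', _Cursor(session_id, offset), commit)
```

Keeping ``offset`` equal to the bytes already sent is what lets the server detect lost or duplicated chunks after a network error.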
# ------------------------------------------
# Routes in paper namespace
def paper_docs_archive(self,
doc_id):
"""
Marks the given Paper doc as archived. Note: This action can be
performed or undone by anyone with edit permissions to the doc.
:param str doc_id: The Paper doc ID.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.RefPaperDoc(doc_id)
r = self.request(
paper.docs_archive,
'paper',
arg,
None,
)
return None
def paper_docs_create(self,
f,
import_format,
parent_folder_id=None):
"""
Creates a new Paper doc with the provided content.
:param bytes f: Contents to upload.
:param Nullable parent_folder_id: The Paper folder ID where the Paper
    document should be created. The API user has to have write access
    to this folder or an error is thrown.
:param import_format: The format of provided data.
:type import_format: :class:`dropbox.paper.ImportFormat`
:rtype: :class:`dropbox.paper.PaperDocCreateUpdateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.PaperDocCreateError`
"""
arg = paper.PaperDocCreateArgs(import_format,
parent_folder_id)
r = self.request(
paper.docs_create,
'paper',
arg,
f,
)
return r
def paper_docs_download(self,
doc_id,
export_format):
"""
Exports and downloads Paper doc either as HTML or markdown.
:type export_format: :class:`dropbox.paper.ExportFormat`
:rtype: (:class:`dropbox.paper.PaperDocExportResult`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
If you do not consume the entire response body, then you must call close
on the response object, otherwise you will max out your available
connections. We recommend using the `contextlib.closing
<https://docs.python.org/2/library/contextlib.html#contextlib.closing>`_
context manager to ensure this.
"""
arg = paper.PaperDocExport(doc_id,
export_format)
r = self.request(
paper.docs_download,
'paper',
arg,
None,
)
return r
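The docstring above warns that an unconsumed response body must be closed. A small sketch of that pattern (the wrapper name is ours; ``export_format`` would be e.g. ``paper.ExportFormat.html``, and the response is assumed to be a ``requests``-style object with ``content`` and ``close()``):

```python
import contextlib

def download_paper_doc(client, doc_id, export_format):
    # Fetch the doc and guarantee the HTTP response is closed, per the
    # contextlib.closing recommendation in the docstring above.
    metadata, response = client.paper_docs_download(doc_id, export_format)
    with contextlib.closing(response):
        return metadata, response.content
```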
def paper_docs_download_to_file(self,
download_path,
doc_id,
export_format):
"""
Exports and downloads Paper doc either as HTML or markdown.
:param str download_path: Path on local machine to save file.
:type export_format: :class:`dropbox.paper.ExportFormat`
:rtype: (:class:`dropbox.paper.PaperDocExportResult`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.PaperDocExport(doc_id,
export_format)
r = self.request(
paper.docs_download,
'paper',
arg,
None,
)
self._save_body_to_file(download_path, r[1])
return r[0]
def paper_docs_folder_users_list(self,
doc_id,
limit=1000):
"""
Lists the users who are explicitly invited to the Paper folder in which
the Paper doc is contained. For private folders, all users (including
the owner) shared on the folder are listed; for team folders, all
non-team users shared on the folder are returned.
:param int limit: Size limit per batch. The maximum number of users that
can be retrieved per batch is 1000. Higher value results in invalid
arguments error.
:rtype: :class:`dropbox.paper.ListUsersOnFolderResponse`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.ListUsersOnFolderArgs(doc_id,
limit)
r = self.request(
paper.docs_folder_users_list,
'paper',
arg,
None,
)
return r
def paper_docs_folder_users_list_continue(self,
doc_id,
cursor):
"""
Once a cursor has been retrieved from
:meth:`paper_docs_folder_users_list`, use this to paginate through all
users on the Paper folder.
:param str cursor: The cursor obtained from
:meth:`paper_docs_folder_users_list` or
:meth:`paper_docs_folder_users_list_continue`. Allows for
pagination.
:rtype: :class:`dropbox.paper.ListUsersOnFolderResponse`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.ListUsersCursorError`
"""
arg = paper.ListUsersOnFolderContinueArgs(doc_id,
cursor)
r = self.request(
paper.docs_folder_users_list_continue,
'paper',
arg,
None,
)
return r
def paper_docs_get_folder_info(self,
doc_id):
"""
Retrieves folder information for the given Paper doc. This includes: -
folder sharing policy; permissions for subfolders are set by the
top-level folder. - full 'filepath', i.e. the list of folders (both
folderId and folderName) from the root folder to the folder directly
containing the Paper doc. Note: If the Paper doc is not in any folder
(aka unfiled) the response will be empty.
:param str doc_id: The Paper doc ID.
:rtype: :class:`dropbox.paper.FoldersContainingPaperDoc`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.RefPaperDoc(doc_id)
r = self.request(
paper.docs_get_folder_info,
'paper',
arg,
None,
)
return r
def paper_docs_list(self,
filter_by=paper.ListPaperDocsFilterBy.docs_accessed,
sort_by=paper.ListPaperDocsSortBy.accessed,
sort_order=paper.ListPaperDocsSortOrder.ascending,
limit=1000):
"""
Return the list of all Paper docs according to the argument
specifications. To iterate over through the full pagination, pass the
cursor to :meth:`paper_docs_list_continue`.
:param filter_by: Allows user to specify how the Paper docs should be
filtered.
:type filter_by: :class:`dropbox.paper.ListPaperDocsFilterBy`
:param sort_by: Allows user to specify how the Paper docs should be
sorted.
:type sort_by: :class:`dropbox.paper.ListPaperDocsSortBy`
:param sort_order: Allows user to specify the sort order of the result.
:type sort_order: :class:`dropbox.paper.ListPaperDocsSortOrder`
:param int limit: Size limit per batch. The maximum number of docs that
can be retrieved per batch is 1000. Higher value results in invalid
arguments error.
:rtype: :class:`dropbox.paper.ListPaperDocsResponse`
"""
arg = paper.ListPaperDocsArgs(filter_by,
sort_by,
sort_order,
limit)
r = self.request(
paper.docs_list,
'paper',
arg,
None,
)
return r
def paper_docs_list_continue(self,
cursor):
"""
Once a cursor has been retrieved from :meth:`paper_docs_list`, use this
to paginate through all Paper docs.
:param str cursor: The cursor obtained from :meth:`paper_docs_list` or
:meth:`paper_docs_list_continue`. Allows for pagination.
:rtype: :class:`dropbox.paper.ListPaperDocsResponse`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.ListDocsCursorError`
"""
arg = paper.ListPaperDocsContinueArgs(cursor)
r = self.request(
paper.docs_list_continue,
'paper',
arg,
None,
)
return r
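The list/list_continue pair above follows the cursor pagination used throughout this SDK. A sketch of draining it (the generator name is ours; it assumes ``ListPaperDocsResponse`` exposes ``doc_ids``, ``cursor.value`` and ``has_more``, as it does here):

```python
def iter_paper_docs(client):
    # Yield every doc id, following the cursor until has_more is False.
    response = client.paper_docs_list()
    while True:
        for doc_id in response.doc_ids:
            yield doc_id
        if not response.has_more:
            return
        response = client.paper_docs_list_continue(response.cursor.value)
```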
def paper_docs_permanently_delete(self,
doc_id):
"""
Permanently deletes the given Paper doc. This operation is final as the
doc cannot be recovered. Note: This action can be performed only by the
doc owner.
:param str doc_id: The Paper doc ID.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.RefPaperDoc(doc_id)
r = self.request(
paper.docs_permanently_delete,
'paper',
arg,
None,
)
return None
def paper_docs_sharing_policy_get(self,
doc_id):
"""
Gets the default sharing policy for the given Paper doc.
:param str doc_id: The Paper doc ID.
:rtype: :class:`dropbox.paper.SharingPolicy`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.RefPaperDoc(doc_id)
r = self.request(
paper.docs_sharing_policy_get,
'paper',
arg,
None,
)
return r
def paper_docs_sharing_policy_set(self,
doc_id,
sharing_policy):
"""
Sets the default sharing policy for the given Paper doc. The default
'team_sharing_policy' can be changed only by teams; omit this field for
personal accounts. Note: 'public_sharing_policy' cannot be set to the
value 'disabled' because this setting can be changed only via the team
admin console.
:param sharing_policy: The default sharing policy to be set for the
Paper doc.
:type sharing_policy: :class:`dropbox.paper.SharingPolicy`
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.PaperDocSharingPolicy(doc_id,
sharing_policy)
r = self.request(
paper.docs_sharing_policy_set,
'paper',
arg,
None,
)
return None
def paper_docs_update(self,
f,
doc_id,
doc_update_policy,
revision,
import_format):
"""
Updates an existing Paper doc with the provided content.
:param bytes f: Contents to upload.
:param doc_update_policy: The policy used for the current update call.
:type doc_update_policy: :class:`dropbox.paper.PaperDocUpdatePolicy`
:param long revision: The latest doc revision. This value must match the
head revision or an error code will be returned. This is to prevent
colliding writes.
:param import_format: The format of provided data.
:type import_format: :class:`dropbox.paper.ImportFormat`
:rtype: :class:`dropbox.paper.PaperDocCreateUpdateResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.PaperDocUpdateError`
"""
arg = paper.PaperDocUpdateArgs(doc_id,
doc_update_policy,
revision,
import_format)
r = self.request(
paper.docs_update,
'paper',
arg,
f,
)
return r
def paper_docs_users_add(self,
doc_id,
members,
custom_message=None,
quiet=False):
"""
Allows an owner or editor to add users to a Paper doc or change their
permissions using their email address or Dropbox account ID. Note: The
Doc owner's permissions cannot be changed.
:param list members: Users who should be added to the Paper doc.
    Specify only email addresses or Dropbox account IDs.
:param Nullable custom_message: A personal message that will be emailed
to each successfully added member.
:param bool quiet: Clients should set this to true if no email message
shall be sent to added users.
:rtype: list
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.AddPaperDocUser(doc_id,
members,
custom_message,
quiet)
r = self.request(
paper.docs_users_add,
'paper',
arg,
None,
)
return r
def paper_docs_users_list(self,
doc_id,
limit=1000,
filter_by=paper.UserOnPaperDocFilter.shared):
"""
Lists all users who visited the Paper doc or users with explicit access.
This call excludes users who have been removed. The list is sorted by
the date of the visit or the share date. The list will include both the
explicitly shared users and those who opened the doc via the Paper URL
link.
:param int limit: Size limit per batch. The maximum number of users that
can be retrieved per batch is 1000. Higher value results in invalid
arguments error.
:param filter_by: Specify this attribute if you want to obtain users
that have already accessed the Paper doc.
:type filter_by: :class:`dropbox.paper.UserOnPaperDocFilter`
:rtype: :class:`dropbox.paper.ListUsersOnPaperDocResponse`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.ListUsersOnPaperDocArgs(doc_id,
limit,
filter_by)
r = self.request(
paper.docs_users_list,
'paper',
arg,
None,
)
return r
def paper_docs_users_list_continue(self,
doc_id,
cursor):
"""
Once a cursor has been retrieved from :meth:`paper_docs_users_list`, use
this to paginate through all users on the Paper doc.
:param str cursor: The cursor obtained from
:meth:`paper_docs_users_list` or
:meth:`paper_docs_users_list_continue`. Allows for pagination.
:rtype: :class:`dropbox.paper.ListUsersOnPaperDocResponse`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.ListUsersCursorError`
"""
arg = paper.ListUsersOnPaperDocContinueArgs(doc_id,
cursor)
r = self.request(
paper.docs_users_list_continue,
'paper',
arg,
None,
)
return r
def paper_docs_users_remove(self,
doc_id,
member):
"""
Allows an owner or editor to remove users from a Paper doc using their
email address or Dropbox account ID. Note: The doc owner cannot be
removed.
:param member: User who should be removed from the Paper doc. Specify
    only an email address or Dropbox account ID.
:type member: :class:`dropbox.paper.MemberSelector`
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.paper.DocLookupError`
"""
arg = paper.RemovePaperDocUser(doc_id,
member)
r = self.request(
paper.docs_users_remove,
'paper',
arg,
None,
)
return None
# ------------------------------------------
# Routes in sharing namespace
def sharing_add_file_member(self,
file,
members,
custom_message=None,
quiet=False,
access_level=sharing.AccessLevel.viewer,
add_message_as_comment=False):
"""
Adds specified members to a file.
:param str file: File to which to add members.
:param list members: Members to add. Note that even if an email address
    is given, this may result in a user being directly added to the
    membership if that email is the user's main account email.
:param Nullable custom_message: Message to send to added members in
their invitation.
:param bool quiet: Whether added members should be notified via device
notifications of their invitation.
:param access_level: AccessLevel union object, describing what access
level we want to give new members.
:type access_level: :class:`dropbox.sharing.AccessLevel`
:param bool add_message_as_comment: If the custom message should be
added as a comment on the file.
:rtype: list
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.AddFileMemberError`
"""
arg = sharing.AddFileMemberArgs(file,
members,
custom_message,
quiet,
access_level,
add_message_as_comment)
r = self.request(
sharing.add_file_member,
'sharing',
arg,
None,
)
return r
def sharing_add_folder_member(self,
shared_folder_id,
members,
quiet=False,
custom_message=None):
"""
Allows an owner or editor (if the ACL update policy allows) of a shared
folder to add another member. For the new member to get access to all
the functionality for this folder, you will need to call
:meth:`sharing_mount_folder` on their behalf. Apps must have full
Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param list members: The intended list of members to add. Added members
will receive invites to join the shared folder.
:param bool quiet: Whether added members should be notified via email
and device notifications of their invite.
:param Nullable custom_message: Optional message to display to added
members in their invitation.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.AddFolderMemberError`
"""
arg = sharing.AddFolderMemberArg(shared_folder_id,
members,
quiet,
custom_message)
r = self.request(
sharing.add_folder_member,
'sharing',
arg,
None,
)
return None
def sharing_change_file_member_access(self,
file,
member,
access_level):
"""
Identical to update_file_member but with less information returned.
:param str file: File for which we are changing a member's access.
:param member: The member whose access we are changing.
:type member: :class:`dropbox.sharing.MemberSelector`
:param access_level: The new access level for the member.
:type access_level: :class:`dropbox.sharing.AccessLevel`
:rtype: :class:`dropbox.sharing.FileMemberActionResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.FileMemberActionError`
"""
warnings.warn(
'change_file_member_access is deprecated. Use update_file_member.',
DeprecationWarning,
)
arg = sharing.ChangeFileMemberAccessArgs(file,
member,
access_level)
r = self.request(
sharing.change_file_member_access,
'sharing',
arg,
None,
)
return r
def sharing_check_job_status(self,
async_job_id):
"""
Returns the status of an asynchronous job. Apps must have full Dropbox
access to use this endpoint.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.sharing.JobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
sharing.check_job_status,
'sharing',
arg,
None,
)
return r
def sharing_check_remove_member_job_status(self,
async_job_id):
"""
Returns the status of an asynchronous job for sharing a folder. Apps
must have full Dropbox access to use this endpoint.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.sharing.RemoveMemberJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
sharing.check_remove_member_job_status,
'sharing',
arg,
None,
)
return r
def sharing_check_share_job_status(self,
async_job_id):
"""
Returns the status of an asynchronous job for sharing a folder. Apps
must have full Dropbox access to use this endpoint.
:param str async_job_id: Id of the asynchronous job. This is the value
of a response returned from the method that launched the job.
:rtype: :class:`dropbox.sharing.ShareFolderJobStatus`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.PollError`
"""
arg = async.PollArg(async_job_id)
r = self.request(
sharing.check_share_job_status,
'sharing',
arg,
None,
)
return r
def sharing_create_shared_link(self,
path,
short_url=False,
pending_upload=None):
"""
Create a shared link. If a shared link already exists for the given
path, that link is returned. Note that in the returned
:class:`dropbox.sharing.PathLinkMetadata`, the ``PathLinkMetadata.url``
field is the shortened URL if ``CreateSharedLinkArg.short_url`` argument
is set to ``True``. Previously, it was technically possible to break a
shared link by moving or renaming the corresponding file or folder. In
the future, this will no longer be the case, so your app shouldn't rely
on this behavior. Instead, if your app needs to revoke a shared link,
use :meth:`sharing_revoke_shared_link`.
:param str path: The path to share.
:param bool short_url: Whether to return a shortened URL.
:param Nullable pending_upload: If it's okay to share a path that does
not yet exist, set this to either ``PendingUploadMode.file`` or
``PendingUploadMode.folder`` to indicate whether to assume it's a
file or folder.
:rtype: :class:`dropbox.sharing.PathLinkMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.CreateSharedLinkError`
"""
warnings.warn(
'create_shared_link is deprecated. Use create_shared_link_with_settings.',
DeprecationWarning,
)
arg = sharing.CreateSharedLinkArg(path,
short_url,
pending_upload)
r = self.request(
sharing.create_shared_link,
'sharing',
arg,
None,
)
return r
def sharing_create_shared_link_with_settings(self,
path,
settings=None):
"""
Create a shared link with custom settings. If no settings are given then
the default visibility is ``RequestedVisibility.public`` (The resolved
visibility, though, may depend on other aspects such as team and shared
folder settings).
:param str path: The path to be shared by the shared link.
:param Nullable settings: The requested settings for the newly created
shared link.
:rtype: :class:`dropbox.sharing.SharedLinkMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.CreateSharedLinkWithSettingsError`
"""
arg = sharing.CreateSharedLinkWithSettingsArg(path,
settings)
r = self.request(
sharing.create_shared_link_with_settings,
'sharing',
arg,
None,
)
return r
def sharing_get_file_metadata(self,
file,
actions=None):
"""
Returns shared file metadata.
:param str file: The file to query.
:param Nullable actions: A list of `FileAction`s corresponding to
`FilePermission`s that should appear in the response's
``SharedFileMetadata.permissions`` field describing the actions the
authenticated user can perform on the file.
:rtype: :class:`dropbox.sharing.SharedFileMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.GetFileMetadataError`
"""
arg = sharing.GetFileMetadataArg(file,
actions)
r = self.request(
sharing.get_file_metadata,
'sharing',
arg,
None,
)
return r
def sharing_get_file_metadata_batch(self,
files,
actions=None):
"""
Returns shared file metadata.
:param list files: The files to query.
:param Nullable actions: A list of `FileAction`s corresponding to
`FilePermission`s that should appear in the response's
``SharedFileMetadata.permissions`` field describing the actions the
authenticated user can perform on the file.
:rtype: list
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharingUserError`
"""
arg = sharing.GetFileMetadataBatchArg(files,
actions)
r = self.request(
sharing.get_file_metadata_batch,
'sharing',
arg,
None,
)
return r
def sharing_get_folder_metadata(self,
shared_folder_id,
actions=None):
"""
Returns shared folder metadata by its folder ID. Apps must have full
Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:rtype: :class:`dropbox.sharing.SharedFolderMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharedFolderAccessError`
"""
arg = sharing.GetMetadataArgs(shared_folder_id,
actions)
r = self.request(
sharing.get_folder_metadata,
'sharing',
arg,
None,
)
return r
def sharing_get_shared_link_file(self,
url,
path=None,
link_password=None):
"""
Download the shared link's file from a user's Dropbox.
:param str url: URL of the shared link.
:param Nullable path: If the shared link is to a folder, this parameter
can be used to retrieve the metadata for a specific file or
sub-folder in this folder. A relative path should be used.
:param Nullable link_password: If the shared link has a password, this
parameter can be used.
:rtype: (:class:`dropbox.sharing.SharedLinkMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.GetSharedLinkFileError`
If you do not consume the entire response body, then you must call close
on the response object, otherwise you will max out your available
connections. We recommend using the `contextlib.closing
<https://docs.python.org/2/library/contextlib.html#contextlib.closing>`_
context manager to ensure this.
"""
arg = sharing.GetSharedLinkMetadataArg(url,
path,
link_password)
r = self.request(
sharing.get_shared_link_file,
'sharing',
arg,
None,
)
return r
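As the docstring above notes, the second element of the returned tuple is a `requests` response whose connection must be released if the body is not fully consumed, and `contextlib.closing` is the recommended way to guarantee that. A minimal offline sketch of that pattern — `FakeResponse` is a hypothetical stand-in for `requests.models.Response`, used here only so the pattern can run without network access or credentials:

```python
import contextlib

class FakeResponse:
    """Hypothetical stand-in for requests.models.Response."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

# Real call (assumes an authenticated client `dbx`):
#   metadata, resp = dbx.sharing_get_shared_link_file(url)
resp = FakeResponse()
with contextlib.closing(resp):
    pass  # read as much of the body as needed here
```

On exit from the `with` block, `closing` calls `resp.close()` even if an exception was raised, so the underlying connection is always returned to the pool.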
def sharing_get_shared_link_file_to_file(self,
download_path,
url,
path=None,
link_password=None):
"""
Download the shared link's file from a user's Dropbox.
:param str download_path: Path on local machine to save file.
:param str url: URL of the shared link.
:param Nullable path: If the shared link is to a folder, this parameter
can be used to retrieve the metadata for a specific file or
sub-folder in this folder. A relative path should be used.
:param Nullable link_password: If the shared link has a password, this
parameter can be used.
:rtype: (:class:`dropbox.sharing.SharedLinkMetadata`,
:class:`requests.models.Response`)
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.GetSharedLinkFileError`
"""
arg = sharing.GetSharedLinkMetadataArg(url,
path,
link_password)
r = self.request(
sharing.get_shared_link_file,
'sharing',
arg,
None,
)
self._save_body_to_file(download_path, r[1])
return r[0]
def sharing_get_shared_link_metadata(self,
url,
path=None,
link_password=None):
"""
Get the shared link's metadata.
:param str url: URL of the shared link.
:param Nullable path: If the shared link is to a folder, this parameter
can be used to retrieve the metadata for a specific file or
sub-folder in this folder. A relative path should be used.
:param Nullable link_password: If the shared link has a password, this
parameter can be used.
:rtype: :class:`dropbox.sharing.SharedLinkMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharedLinkError`
"""
arg = sharing.GetSharedLinkMetadataArg(url,
path,
link_password)
r = self.request(
sharing.get_shared_link_metadata,
'sharing',
arg,
None,
)
return r
def sharing_get_shared_links(self,
path=None):
"""
Returns a list of :class:`dropbox.sharing.LinkMetadata` objects for this
user, including collection links. If no path is given, returns a list of
all shared links for the current user, including collection links, up to
a maximum of 1000 links. If a non-empty path is given, returns a list of
all shared links that allow access to the given path. Collection links
are never returned in this case. Note that the url field in the response
is never the shortened URL.
:param Nullable path: See :meth:`sharing_get_shared_links` description.
:rtype: :class:`dropbox.sharing.GetSharedLinksResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.GetSharedLinksError`
"""
warnings.warn(
'get_shared_links is deprecated. Use list_shared_links.',
DeprecationWarning,
)
arg = sharing.GetSharedLinksArg(path)
r = self.request(
sharing.get_shared_links,
'sharing',
arg,
None,
)
return r
def sharing_list_file_members(self,
file,
actions=None,
include_inherited=True,
limit=100):
"""
Use this endpoint to obtain the members who have been invited to a file, both
inherited and uninherited members.
:param str file: The file for which you want to see members.
:param Nullable actions: The actions for which to return permissions on
a member.
:param bool include_inherited: Whether to include members who only have
access from a parent shared folder.
:param long limit: Number of members to return max per query. Defaults
to 100 if no limit is specified.
:rtype: :class:`dropbox.sharing.SharedFileMembers`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFileMembersError`
"""
arg = sharing.ListFileMembersArg(file,
actions,
include_inherited,
limit)
r = self.request(
sharing.list_file_members,
'sharing',
arg,
None,
)
return r
def sharing_list_file_members_batch(self,
files,
limit=10):
"""
Get members of multiple files at once. The arguments to this route are
more limited, and the limit on query result size per file is more
strict. To customize the results more, use the individual file endpoint.
Inherited users and groups are not included in the result, and
permissions are not returned for this endpoint.
:param list files: Files for which to return members.
:param long limit: Number of members to return max per query. Defaults
to 10 if no limit is specified.
:rtype: list
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharingUserError`
"""
arg = sharing.ListFileMembersBatchArg(files,
limit)
r = self.request(
sharing.list_file_members_batch,
'sharing',
arg,
None,
)
return r
def sharing_list_file_members_continue(self,
cursor):
"""
Once a cursor has been retrieved from :meth:`sharing_list_file_members`
or :meth:`sharing_list_file_members_batch`, use this to paginate through
all shared file members.
:param str cursor: The cursor returned by your last call to
:meth:`sharing_list_file_members`,
:meth:`sharing_list_file_members_continue`, or
:meth:`sharing_list_file_members_batch`.
:rtype: :class:`dropbox.sharing.SharedFileMembers`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFileMembersContinueError`
"""
arg = sharing.ListFileMembersContinueArg(cursor)
r = self.request(
sharing.list_file_members_continue,
'sharing',
arg,
None,
)
return r
def sharing_list_folder_members(self,
shared_folder_id,
actions=None,
limit=1000):
"""
Returns shared folder membership by its folder ID. Apps must have full
Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:param long limit: The maximum number of results to return per request.
:rtype: :class:`dropbox.sharing.SharedFolderMembers`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharedFolderAccessError`
"""
arg = sharing.ListFolderMembersArgs(shared_folder_id,
actions,
limit)
r = self.request(
sharing.list_folder_members,
'sharing',
arg,
None,
)
return r
def sharing_list_folder_members_continue(self,
cursor):
"""
Once a cursor has been retrieved from
:meth:`sharing_list_folder_members`, use this to paginate through all
shared folder members. Apps must have full Dropbox access to use this
endpoint.
:param str cursor: The cursor returned by your last call to
:meth:`sharing_list_folder_members` or
:meth:`sharing_list_folder_members_continue`.
:rtype: :class:`dropbox.sharing.SharedFolderMembers`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFolderMembersContinueError`
"""
arg = sharing.ListFolderMembersContinueArg(cursor)
r = self.request(
sharing.list_folder_members_continue,
'sharing',
arg,
None,
)
return r
def sharing_list_folders(self,
limit=1000,
actions=None):
"""
Return the list of all shared folders the current user has access to.
Apps must have full Dropbox access to use this endpoint.
:param long limit: The maximum number of results to return per request.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:rtype: :class:`dropbox.sharing.ListFoldersResult`
"""
arg = sharing.ListFoldersArgs(limit,
actions)
r = self.request(
sharing.list_folders,
'sharing',
arg,
None,
)
return r
def sharing_list_folders_continue(self,
cursor):
"""
Once a cursor has been retrieved from :meth:`sharing_list_folders`, use
this to paginate through all shared folders. The cursor must come from a
previous call to :meth:`sharing_list_folders` or
:meth:`sharing_list_folders_continue`. Apps must have full Dropbox
access to use this endpoint.
:param str cursor: The cursor returned by the previous API call
specified in the endpoint description.
:rtype: :class:`dropbox.sharing.ListFoldersResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFoldersContinueError`
"""
arg = sharing.ListFoldersContinueArg(cursor)
r = self.request(
sharing.list_folders_continue,
'sharing',
arg,
None,
)
return r
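The cursor handshake between `sharing_list_folders` and `sharing_list_folders_continue` follows the SDK's usual list/continue pattern: call the list endpoint once, then keep feeding the returned cursor to the continue endpoint until no cursor remains. A sketch of the consuming loop — `StubClient` and its `(entries, cursor)` page layout are invented here purely so the loop is runnable offline; against the real API you would read `result.entries` and `result.cursor` from the returned `ListFoldersResult`:

```python
class StubClient:
    """Hypothetical offline stand-in for an authenticated dropbox.Dropbox."""
    def __init__(self, pages):
        self._pages = pages  # each page: (entries, next_cursor_or_None)

    def sharing_list_folders(self, limit=1000):
        return self._pages[0]

    def sharing_list_folders_continue(self, cursor):
        return self._pages[cursor]

def iter_all_folders(client):
    """Yield every folder entry, transparently following cursors."""
    entries, cursor = client.sharing_list_folders()
    while True:
        for entry in entries:
            yield entry
        if cursor is None:
            return
        entries, cursor = client.sharing_list_folders_continue(cursor)

client = StubClient({0: (['a', 'b'], 1), 1: (['c'], None)})
folders = list(iter_all_folders(client))
```

The same loop shape applies to the other `*_continue` pairs in this namespace (file members, mountable folders, received files).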
def sharing_list_mountable_folders(self,
limit=1000,
actions=None):
"""
Return the list of all shared folders the current user can mount or
unmount. Apps must have full Dropbox access to use this endpoint.
:param long limit: The maximum number of results to return per request.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:rtype: :class:`dropbox.sharing.ListFoldersResult`
"""
arg = sharing.ListFoldersArgs(limit,
actions)
r = self.request(
sharing.list_mountable_folders,
'sharing',
arg,
None,
)
return r
def sharing_list_mountable_folders_continue(self,
cursor):
"""
Once a cursor has been retrieved from
:meth:`sharing_list_mountable_folders`, use this to paginate through all
mountable shared folders. The cursor must come from a previous call to
:meth:`sharing_list_mountable_folders` or
:meth:`sharing_list_mountable_folders_continue`. Apps must have full
Dropbox access to use this endpoint.
:param str cursor: The cursor returned by the previous API call
specified in the endpoint description.
:rtype: :class:`dropbox.sharing.ListFoldersResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFoldersContinueError`
"""
arg = sharing.ListFoldersContinueArg(cursor)
r = self.request(
sharing.list_mountable_folders_continue,
'sharing',
arg,
None,
)
return r
def sharing_list_received_files(self,
limit=100,
actions=None):
"""
Returns a list of all files shared with the current user. Does not include
files the user has received via shared folders, and does not include
unclaimed invitations.
:param long limit: Number of files to return max per query. Defaults to
100 if no limit is specified.
:param Nullable actions: A list of `FileAction`s corresponding to
`FilePermission`s that should appear in the response's
``SharedFileMetadata.permissions`` field describing the actions the
authenticated user can perform on the file.
:rtype: :class:`dropbox.sharing.ListFilesResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.SharingUserError`
"""
arg = sharing.ListFilesArg(limit,
actions)
r = self.request(
sharing.list_received_files,
'sharing',
arg,
None,
)
return r
def sharing_list_received_files_continue(self,
cursor):
"""
Get more results with a cursor from :meth:`sharing_list_received_files`.
:param str cursor: Cursor in ``ListFilesResult.cursor``.
:rtype: :class:`dropbox.sharing.ListFilesResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListFilesContinueError`
"""
arg = sharing.ListFilesContinueArg(cursor)
r = self.request(
sharing.list_received_files_continue,
'sharing',
arg,
None,
)
return r
def sharing_list_shared_links(self,
path=None,
cursor=None,
direct_only=None):
"""
List shared links of this user. If no path is given, returns a list of
all shared links for the current user. If a non-empty path is given,
returns a list of all shared links that allow access to the given path -
direct links to the given path and links to parent folders of the given
path. Links to parent folders can be suppressed by setting direct_only
to true.
:param Nullable path: See :meth:`sharing_list_shared_links` description.
:param Nullable cursor: The cursor returned by your last call to
:meth:`sharing_list_shared_links`.
:param Nullable direct_only: See :meth:`sharing_list_shared_links`
description.
:rtype: :class:`dropbox.sharing.ListSharedLinksResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ListSharedLinksError`
"""
arg = sharing.ListSharedLinksArg(path,
cursor,
direct_only)
r = self.request(
sharing.list_shared_links,
'sharing',
arg,
None,
)
return r
def sharing_modify_shared_link_settings(self,
url,
settings,
remove_expiration=False):
"""
Modify the shared link's settings. If the requested visibility conflicts
with the shared links policy of the team or the shared folder (in case
the linked file is part of a shared folder) then the
``LinkPermissions.resolved_visibility`` of the returned
:class:`dropbox.sharing.SharedLinkMetadata` will reflect the actual
visibility of the shared link and the
``LinkPermissions.requested_visibility`` will reflect the requested
visibility.
:param str url: URL of the shared link to change its settings.
:param settings: Set of settings for the shared link.
:type settings: :class:`dropbox.sharing.SharedLinkSettings`
:param bool remove_expiration: If set to true, removes the expiration of
the shared link.
:rtype: :class:`dropbox.sharing.SharedLinkMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ModifySharedLinkSettingsError`
"""
arg = sharing.ModifySharedLinkSettingsArgs(url,
settings,
remove_expiration)
r = self.request(
sharing.modify_shared_link_settings,
'sharing',
arg,
None,
)
return r
def sharing_mount_folder(self,
shared_folder_id):
"""
The current user mounts the designated folder. Mount a shared folder for
a user after they have been added as a member. Once mounted, the shared
folder will appear in their Dropbox. Apps must have full Dropbox access
to use this endpoint.
:param str shared_folder_id: The ID of the shared folder to mount.
:rtype: :class:`dropbox.sharing.SharedFolderMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.MountFolderError`
"""
arg = sharing.MountFolderArg(shared_folder_id)
r = self.request(
sharing.mount_folder,
'sharing',
arg,
None,
)
return r
def sharing_relinquish_file_membership(self,
file):
"""
The current user relinquishes their membership in the designated file.
Note that the current user may still have inherited access to this file
through the parent folder. Apps must have full Dropbox access to use
this endpoint.
:param str file: The path or id for the file.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RelinquishFileMembershipError`
"""
arg = sharing.RelinquishFileMembershipArg(file)
r = self.request(
sharing.relinquish_file_membership,
'sharing',
arg,
None,
)
return None
def sharing_relinquish_folder_membership(self,
shared_folder_id,
leave_a_copy=False):
"""
The current user relinquishes their membership in the designated shared
folder and will no longer have access to the folder. A folder owner
cannot relinquish membership in their own folder. This will run
synchronously if leave_a_copy is false, and asynchronously if
leave_a_copy is true. Apps must have full Dropbox access to use this
endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param bool leave_a_copy: Keep a copy of the folder's contents upon
relinquishing membership.
:rtype: :class:`dropbox.sharing.LaunchEmptyResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RelinquishFolderMembershipError`
"""
arg = sharing.RelinquishFolderMembershipArg(shared_folder_id,
leave_a_copy)
r = self.request(
sharing.relinquish_folder_membership,
'sharing',
arg,
None,
)
return r
def sharing_remove_file_member(self,
file,
member):
"""
Identical to remove_file_member_2 but with less information returned.
:param str file: File from which to remove members.
:param member: Member to remove from this file. Note that even if an
email is specified, it may result in the removal of a user (not an
invitee) if the user's main account corresponds to that email
address.
:type member: :class:`dropbox.sharing.MemberSelector`
:rtype: :class:`dropbox.sharing.FileMemberActionIndividualResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RemoveFileMemberError`
"""
warnings.warn(
'remove_file_member is deprecated. Use remove_file_member_2.',
DeprecationWarning,
)
arg = sharing.RemoveFileMemberArg(file,
member)
r = self.request(
sharing.remove_file_member,
'sharing',
arg,
None,
)
return r
def sharing_remove_file_member_2(self,
file,
member):
"""
Removes a specified member from the file.
:param str file: File from which to remove members.
:param member: Member to remove from this file. Note that even if an
email is specified, it may result in the removal of a user (not an
invitee) if the user's main account corresponds to that email
address.
:type member: :class:`dropbox.sharing.MemberSelector`
:rtype: :class:`dropbox.sharing.FileMemberRemoveActionResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RemoveFileMemberError`
"""
arg = sharing.RemoveFileMemberArg(file,
member)
r = self.request(
sharing.remove_file_member_2,
'sharing',
arg,
None,
)
return r
def sharing_remove_folder_member(self,
shared_folder_id,
member,
leave_a_copy):
"""
Allows an owner or editor (if the ACL update policy allows) of a shared
folder to remove another member. Apps must have full Dropbox access to
use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param member: The member to remove from the folder.
:type member: :class:`dropbox.sharing.MemberSelector`
:param bool leave_a_copy: If true, the removed user will keep their copy
of the folder after it's unshared, assuming it was mounted.
Otherwise, it will be removed from their Dropbox. Also, this must be
set to false when kicking a group.
:rtype: :class:`dropbox.sharing.LaunchResultBase`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RemoveFolderMemberError`
"""
arg = sharing.RemoveFolderMemberArg(shared_folder_id,
member,
leave_a_copy)
r = self.request(
sharing.remove_folder_member,
'sharing',
arg,
None,
)
return r
def sharing_revoke_shared_link(self,
url):
"""
Revoke a shared link. Note that even after revoking a shared link to a
file, the file may be accessible if there are shared links leading to
any of the file parent folders. To list all shared links that enable
access to a specific file, you can use the
:meth:`sharing_list_shared_links` with the file as the
``ListSharedLinksArg.path`` argument.
:param str url: URL of the shared link.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.RevokeSharedLinkError`
"""
arg = sharing.RevokeSharedLinkArg(url)
r = self.request(
sharing.revoke_shared_link,
'sharing',
arg,
None,
)
return None
def sharing_share_folder(self,
path,
acl_update_policy=None,
force_async=False,
member_policy=None,
shared_link_policy=None,
viewer_info_policy=None,
actions=None,
link_settings=None):
"""
Share a folder with collaborators. Most sharing will be completed
synchronously. Large folders will be completed asynchronously. To make
testing the async case repeatable, set `ShareFolderArg.force_async`. If
a ``ShareFolderLaunch.async_job_id`` is returned, you'll need to call
:meth:`sharing_check_share_job_status` until the action completes to get
the metadata for the folder. Apps must have full Dropbox access to use
this endpoint.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:param Nullable link_settings: Settings on the link for this folder.
:rtype: :class:`dropbox.sharing.ShareFolderLaunch`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.ShareFolderError`
"""
arg = sharing.ShareFolderArg(path,
acl_update_policy,
force_async,
member_policy,
shared_link_policy,
viewer_info_policy,
actions,
link_settings)
r = self.request(
sharing.share_folder,
'sharing',
arg,
None,
)
return r
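When `sharing_share_folder` runs asynchronously and returns an `async_job_id`, the caller is expected to poll `sharing_check_share_job_status` until the job settles, as the docstring above describes. A sketch of that polling loop, using a hypothetical stub so it runs offline — a real loop would check `ShareFolderLaunch.is_async_job_id()` first, inspect the returned `ShareFolderJobStatus` tag, and use a non-zero delay between polls:

```python
import time

class StubClient:
    """Hypothetical stand-in; reports 'in_progress' twice, then 'complete'."""
    def __init__(self):
        self._polls = 0

    def sharing_check_share_job_status(self, async_job_id):
        self._polls += 1
        return 'complete' if self._polls >= 3 else 'in_progress'

def wait_for_share(client, async_job_id, delay=0.0, max_polls=10):
    """Poll until the share job leaves the in-progress state."""
    for _ in range(max_polls):
        status = client.sharing_check_share_job_status(async_job_id)
        if status != 'in_progress':
            return status
        time.sleep(delay)
    raise TimeoutError('share job did not finish in time')

status = wait_for_share(StubClient(), 'job-123')
```

Bounding the number of polls (rather than looping forever) keeps a stuck job from hanging the caller.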
def sharing_transfer_folder(self,
shared_folder_id,
to_dropbox_id):
"""
Transfer ownership of a shared folder to a member of the shared folder.
User must have ``AccessLevel.owner`` access to the shared folder to
perform a transfer. Apps must have full Dropbox access to use this
endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param str to_dropbox_id: An account or team member ID to transfer
ownership to.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.TransferFolderError`
"""
arg = sharing.TransferFolderArg(shared_folder_id,
to_dropbox_id)
r = self.request(
sharing.transfer_folder,
'sharing',
arg,
None,
)
return None
def sharing_unmount_folder(self,
shared_folder_id):
"""
The current user unmounts the designated folder. They can re-mount the
folder at a later time using :meth:`sharing_mount_folder`. Apps must
have full Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.UnmountFolderError`
"""
arg = sharing.UnmountFolderArg(shared_folder_id)
r = self.request(
sharing.unmount_folder,
'sharing',
arg,
None,
)
return None
def sharing_unshare_file(self,
file):
"""
Remove all members from this file. Does not remove inherited members.
:param str file: The file to unshare.
:rtype: None
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.UnshareFileError`
"""
arg = sharing.UnshareFileArg(file)
r = self.request(
sharing.unshare_file,
'sharing',
arg,
None,
)
return None
def sharing_unshare_folder(self,
shared_folder_id,
leave_a_copy=False):
"""
Allows a shared folder owner to unshare the folder. You'll need to call
:meth:`sharing_check_job_status` to determine if the action has
completed successfully. Apps must have full Dropbox access to use this
endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param bool leave_a_copy: If true, members of this shared folder will
get a copy of this folder after it's unshared. Otherwise, it will be
removed from their Dropbox. The current user, who is an owner, will
always retain their copy.
:rtype: :class:`dropbox.sharing.LaunchEmptyResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.UnshareFolderError`
"""
arg = sharing.UnshareFolderArg(shared_folder_id,
leave_a_copy)
r = self.request(
sharing.unshare_folder,
'sharing',
arg,
None,
)
return r
def sharing_update_file_member(self,
file,
member,
access_level):
"""
Changes a member's access on a shared file.
:param str file: File for which to change a member's access.
:param member: The member whose access is being changed.
:type member: :class:`dropbox.sharing.MemberSelector`
:param access_level: The new access level for ``member``.
:type access_level: :class:`dropbox.sharing.AccessLevel`
:rtype: :class:`dropbox.sharing.MemberAccessLevelResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.FileMemberActionError`
"""
arg = sharing.UpdateFileMemberArgs(file,
member,
access_level)
r = self.request(
sharing.update_file_member,
'sharing',
arg,
None,
)
return r
def sharing_update_folder_member(self,
shared_folder_id,
member,
access_level):
"""
Allows an owner or editor of a shared folder to update another member's
permissions. Apps must have full Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param member: The member of the shared folder to update. Only the
``MemberSelector.dropbox_id`` may be set at this time.
:type member: :class:`dropbox.sharing.MemberSelector`
:param access_level: The new access level for ``member``.
``AccessLevel.owner`` is disallowed.
:type access_level: :class:`dropbox.sharing.AccessLevel`
:rtype: :class:`dropbox.sharing.MemberAccessLevelResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.UpdateFolderMemberError`
"""
arg = sharing.UpdateFolderMemberArg(shared_folder_id,
member,
access_level)
r = self.request(
sharing.update_folder_member,
'sharing',
arg,
None,
)
return r
def sharing_update_folder_policy(self,
shared_folder_id,
member_policy=None,
acl_update_policy=None,
viewer_info_policy=None,
shared_link_policy=None,
link_settings=None,
actions=None):
"""
Update the sharing policies for a shared folder. User must have
``AccessLevel.owner`` access to the shared folder to update its
policies. Apps must have full Dropbox access to use this endpoint.
:param str shared_folder_id: The ID for the shared folder.
:param Nullable member_policy: Who can be a member of this shared
folder. Only applicable if the current user is on a team.
:param Nullable acl_update_policy: Who can add and remove members of
this shared folder.
:param Nullable viewer_info_policy: Who can enable/disable viewer info
for this shared folder.
:param Nullable shared_link_policy: The policy to apply to shared links
created for content inside this shared folder. The current user must
be on a team to set this policy to ``SharedLinkPolicy.members``.
:param Nullable link_settings: Settings on the link for this folder.
:param Nullable actions: A list of `FolderAction`s corresponding to
`FolderPermission`s that should appear in the response's
``SharedFolderMetadata.permissions`` field describing the actions
the authenticated user can perform on the folder.
:rtype: :class:`dropbox.sharing.SharedFolderMetadata`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.sharing.UpdateFolderPolicyError`
"""
arg = sharing.UpdateFolderPolicyArg(shared_folder_id,
member_policy,
acl_update_policy,
viewer_info_policy,
shared_link_policy,
link_settings,
actions)
r = self.request(
sharing.update_folder_policy,
'sharing',
arg,
None,
)
return r
# ------------------------------------------
# Routes in team_log namespace
def team_log_get_events(self,
limit=1000,
account_id=None,
time=None,
category=None):
"""
Retrieves team events. Permission : Team Auditing.
:param long limit: Number of results to return per call.
:param Nullable account_id: Filter the events by account ID. Return only
events with this account_id as either Actor, Context, or
Participants.
:param Nullable time: Filter by time range.
:param Nullable category: Filter the returned events to a single
category.
:rtype: :class:`dropbox.team_log.GetTeamEventsResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.team_log.GetTeamEventsError`
"""
arg = team_log.GetTeamEventsArg(limit,
account_id,
time,
category)
r = self.request(
team_log.get_events,
'team_log',
arg,
None,
)
return r
def team_log_get_events_continue(self,
cursor):
"""
Once a cursor has been retrieved from :meth:`team_log_get_events`, use
this to paginate through all events. Permission : Team Auditing.
:param str cursor: Indicates from what point to get the next set of
events.
:rtype: :class:`dropbox.team_log.GetTeamEventsResult`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.team_log.GetTeamEventsContinueError`
"""
arg = team_log.GetTeamEventsContinueArg(cursor)
r = self.request(
team_log.get_events_continue,
'team_log',
arg,
None,
)
return r
# ------------------------------------------
# Routes in users namespace
def users_get_account(self,
account_id):
"""
Get information about a user's account.
:param str account_id: A user's account identifier.
:rtype: :class:`dropbox.users.BasicAccount`
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.users.GetAccountError`
"""
arg = users.GetAccountArg(account_id)
r = self.request(
users.get_account,
'users',
arg,
None,
)
return r
def users_get_account_batch(self,
account_ids):
"""
Get information about multiple user accounts. At most 300 accounts may
be queried per request.
:param list account_ids: List of user account identifiers. Should not
contain any duplicate account IDs.
:rtype: list
:raises: :class:`dropbox.exceptions.ApiError`
If this raises, ApiError.reason is of type:
:class:`dropbox.users.GetAccountBatchError`
"""
arg = users.GetAccountBatchArg(account_ids)
r = self.request(
users.get_account_batch,
'users',
arg,
None,
)
return r
def users_get_current_account(self):
"""
Get information about the current user's account.
:rtype: :class:`dropbox.users.FullAccount`
"""
arg = None
r = self.request(
users.get_current_account,
'users',
arg,
None,
)
return r
def users_get_space_usage(self):
"""
Get the space usage information for the current user's account.
:rtype: :class:`dropbox.users.SpaceUsage`
"""
arg = None
r = self.request(
users.get_space_usage,
'users',
arg,
None,
)
return r
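The two `team_log` routes above follow a cursor-based pagination contract: `team_log_get_events` returns a first page plus a cursor, and `team_log_get_events_continue` fetches subsequent pages. A minimal sketch of that loop, using a stand-in client and made-up field names (`events`, `cursor`, `has_more`) rather than the real `dropbox.team_log.GetTeamEventsResult` attributes:

```python
# Sketch of the cursor-pagination pattern behind team_log_get_events /
# team_log_get_events_continue. FakeClient is a stand-in for the real SDK
# client; the result fields here are illustrative, not the SDK's actual names.
class FakeResult:
    def __init__(self, events, cursor, has_more):
        self.events = events
        self.cursor = cursor
        self.has_more = has_more

class FakeClient:
    def __init__(self, pages):
        self._pages = pages  # list of event lists, one per "page"

    def team_log_get_events(self, limit=1000):
        return FakeResult(self._pages[0], cursor=1,
                          has_more=len(self._pages) > 1)

    def team_log_get_events_continue(self, cursor):
        return FakeResult(self._pages[cursor], cursor=cursor + 1,
                          has_more=cursor + 1 < len(self._pages))

def all_events(client):
    # Fetch the first page, then keep following the cursor until exhausted.
    result = client.team_log_get_events()
    events = list(result.events)
    while result.has_more:
        result = client.team_log_get_events_continue(result.cursor)
        events.extend(result.events)
    return events

events = all_events(FakeClient([[1, 2], [3], [4, 5]]))
```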
# --- lampts/kaggle_quora: protos/final_tree.py ---
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from lightgbm.sklearn import LGBMClassifier
import pandas
import pickle
import numpy as np
from tqdm import tqdm
from sklearn.model_selection import GridSearchCV, ParameterGrid, StratifiedKFold
from tfidf_k import calc_weight
from sklearn.metrics import log_loss, roc_auc_score
from logging import StreamHandler, DEBUG, Formatter, FileHandler
log_fmt = Formatter('%(asctime)s %(name)s %(lineno)d [%(levelname)s][%(funcName)s] %(message)s ')
from logging import getLogger
logger = getLogger(__name__)
handler = StreamHandler()
handler.setLevel('INFO')
handler.setFormatter(log_fmt)
logger.setLevel('INFO')
logger.addHandler(handler)
aaa = pandas.read_csv('clique_data.csv')
sample_weight = calc_weight(aaa['label'].values)
# , 'emax', 'emin'] # , # 'l_score', 'r_score', 'm_score'] #
use_cols = ['cnum', 'pred', 'new', 'vmax', 'vmin', 'vavg'] # , 'emax', 'emin']
use_cols = ['cnum', 'pred', 'vmax', 'vmin', 'vavg'] # , 'emax', 'emin']
#'l_num', 'r_num', 'm_num']
x_train = aaa[use_cols].values
y_train = aaa['label'].values
all_params = {'max_depth': [5], # [14],
'learning_rate': [0.02], # [0.06, 0.1, 0.2],
'n_estimators': [10000],
'min_child_weight': [1],
'colsample_bytree': [0.7],
'boosting_type': ['gbdt'],
#'num_leaves': [32, 100, 200], # [1300, 1500, 2000],
'subsample': [0.99],
'min_child_samples': [5],
'reg_alpha': [0],
'reg_lambda': [0],
'max_bin': [500],
'min_split_gain': [0.1],
'silent': [True],
'seed': [2261]
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=871)
min_score = (100, 100, 100)
min_params = None
use_score = 0
logger.info('x size {}'.format(x_train.shape))
for params in tqdm(list(ParameterGrid(all_params))):
cnt = 0
list_score = []
list_score2 = []
list_best_iter = []
all_pred = np.zeros(y_train.shape[0])
for train, test in cv.split(x_train, y_train):
trn_x = x_train[train]
val_x = x_train[test]
trn_y = y_train[train]
val_y = y_train[test]
trn_w = sample_weight[train]
val_w = sample_weight[test]
clf = LGBMClassifier(**params)
clf.fit(trn_x, trn_y,
sample_weight=trn_w,
eval_sample_weight=[val_w],
eval_set=[(val_x, val_y)],
verbose=False,
# eval_metric='logloss',
early_stopping_rounds=100
)
pred = clf.predict_proba(val_x)[:, 1]
_score = log_loss(val_y, pred, sample_weight=val_w)
_score2 = - roc_auc_score(val_y, pred, sample_weight=val_w)
list_score.append(_score)
list_score2.append(_score2)
if clf.best_iteration != -1:
list_best_iter.append(clf.best_iteration)
else:
list_best_iter.append(params['n_estimators'])
logger.info('trees: {}'.format(list_best_iter))
params['n_estimators'] = np.mean(list_best_iter, dtype=int)
score = (np.mean(list_score), np.min(list_score), np.max(list_score))
score2 = (np.mean(list_score2), np.min(list_score2), np.max(list_score2))
logger.info('param: %s' % (params))
logger.info('loss: {} (avg min max {})'.format(score[use_score], score))
logger.info('score: {} (avg min max {})'.format(score2[use_score], score2))
if min_score[use_score] > score[use_score]:
min_score = score
min_score2 = score2
min_params = params
logger.info('best score: {} {}'.format(min_score[use_score], min_score))
logger.info('best score2: {} {}'.format(min_score2[use_score], min_score2))
logger.info('best_param: {}'.format(min_params))
final_tree = LGBMClassifier(**min_params)
final_tree.fit(x_train, y_train, sample_weight=sample_weight)
with open('final_tree.pkl', 'wb') as f:
pickle.dump(final_tree, f, -1)
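`calc_weight` is imported from the repo's local `tfidf_k` module and is not shown here; in the Quora competition the common trick such a helper implements is reweighting samples so the weighted positive rate matches the test prior (roughly 0.175). A hedged, stand-alone sketch of that idea — the function name, the fixed negative weight, and the 0.175 target are assumptions, not the repo's actual code:

```python
# Sketch: reweight classes so the weighted positive rate hits a target prior.
# This is an assumed reconstruction of what calc_weight (from tfidf_k) does.
def calc_weight_sketch(labels, target_pos_rate=0.175):
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Solve w_pos * n_pos / (w_pos * n_pos + w_neg * n_neg) = target,
    # fixing w_neg = 1.
    w_neg = 1.0
    w_pos = target_pos_rate * w_neg * n_neg / ((1 - target_pos_rate) * n_pos)
    return [w_pos if y == 1 else w_neg for y in labels]

labels = [1, 0, 0, 1, 0]
weights = calc_weight_sketch(labels)
weighted_pos = sum(w for w, y in zip(weights, labels) if y == 1)
rate = weighted_pos / sum(weights)  # weighted positive rate, ~0.175
```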
# --- lldenisll/doctor_backend: paciente/migrations/0002_paciente_imagem.py ---
# Generated by Django 3.1.7 on 2021-04-03 19:23
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('paciente', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='paciente',
name='imagem',
field=models.ImageField(null=True, upload_to='img'),
),
]
| [
"namorado@TFGcos-MacBook-Pro.local"
] | namorado@TFGcos-MacBook-Pro.local |
e88613a1e28d2556c2ad8050f5e8ab0266b4c4d4 | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_282/ch23_2020_03_04_22_50_49_571106.py | e8af7c8bab6de99ba6d7382438bd6a574da48447 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 161 | py | velocidade = int(input('qual eh a velocidade? '))
if velocidade>80:
print('multa de R${0:.2f}'.format((velocidade-80)*5))
else:
print('Não foi multado') | [
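The exercise above reads a speed in Portuguese ("qual eh a velocidade?" = "what is the speed?") and prints either a fine ("multa") of R$5 per km/h over the 80 km/h limit, or "Não foi multado" ("not fined"). The same rule as a testable function, without `input()`:

```python
# Fine of R$5.00 for each km/h above the 80 km/h limit; None if not fined.
def fine(speed, limit=80, rate=5):
    if speed > limit:
        return (speed - limit) * rate
    return None

f = fine(100)  # 20 km/h over the limit
```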
"you@example.com"
] | you@example.com |
931ad54b3f9cc8bcf4700cad46f9d5985056e646 | bd435e3ff491d13c3cb1ffcf34771ac1c80f7859 | /code/flask/bookshare/app/views.py | a4a2b0f8c1669efd4097e629e116ff3682494636 | [] | no_license | luningcowboy/PythonTutorial | 8f4b6d16e0fad99a226540a6f12639ccdff402ff | 9024efe8ed22aca0a1271a2c1c388d3ffe1e6690 | refs/heads/master | 2021-06-16T23:03:22.153473 | 2020-04-09T13:52:12 | 2020-04-09T13:52:12 | 187,571,993 | 0 | 0 | null | 2021-03-25T23:02:36 | 2019-05-20T05:16:13 | Python | UTF-8 | Python | false | false | 2,211 | py | from flask import render_template, url_for, redirect
from app import app
books = [
{'name':'呐喊1','author':'鲁迅1','id':0,'desc':'这是一本关于呐喊的书','contents':['xxxxx','xxxx','xxxx'],'download':['https://www.baidu.com','https://www.baidu.com'],'pic_url':'http://haodoo.net/covers/17Z7.jpg'},
{'name':'呐喊2','author':'鲁迅2','id':1,'desc':'这是一本关于呐喊的书','contents':['xxxxx','xxxx','xxxx'],'download':['https://www.baidu.com','https://www.baidu.com'],'pic_url':'http://haodoo.net/covers/17Z7.jpg'},
{'name':'呐喊3','author':'鲁迅3','id':2,'desc':'这是一本关于呐喊的书','contents':['xxxxx','xxxx','xxxx'],'download':['https://www.baidu.com','https://www.baidu.com'],'pic_url':'http://haodoo.net/covers/17Z7.jpg'},
{'name':'呐喊4','author':'鲁迅4','id':3,'desc':'这是一本关于呐喊的书','contents':['xxxxx','xxxx','xxxx'],'download':['https://www.baidu.com','https://www.baidu.com'],'pic_url':'http://haodoo.net/covers/17Z7.jpg'}
]
types = [
{'name':'计算机','tag':'1'},
{'name':'小说','tag':'2'},
{'name':'小说1','tag':'3'},
{'name':'小说2','tag':'4'},
{'name':'计算机','tag':'5'},
{'name':'小说','tag':'6'},
{'name':'小说1','tag':'7'},
{'name':'小说2','tag':'8'}]
def getTypes():
ret = []
for t in types:
print('getTypes', t)
t['url'] = url_for('type', type=t['tag'])
ret.append(t)
return ret
def getBooks():
ret = []
for b in books:
b['url'] = url_for('book_desc', book_id=b['id'])
ret.append(b)
return ret
@app.route("/")
@app.route("/index/")
def index():
return render_template('index.html',types=getTypes(),books=getBooks())
@app.route("/type/<type>")
def type(type):
tmpBooks = getBooks()
return render_template('type.html', type=type,books=books,types=getTypes())
@app.route("/book_desc/<book_id>")
def book_desc(book_id):
if not book_id:
return redirect(url_for('index.html'))
tmpBooks = getBooks()
bookInfo = tmpBooks[int(book_id)]
return render_template('book_desc.html', bookInfo=bookInfo, types=getTypes())
| [
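`book_desc` above indexes `tmpBooks` by position, which only works while every book's `id` equals its list index. A plain-Python sketch (no Flask) of looking a book up by its `id` field instead, so the route keeps working if ids and positions diverge:

```python
# Look a book up by its 'id' field instead of relying on list position.
books = [
    {'name': 'A', 'id': 0},
    {'name': 'B', 'id': 7},   # id no longer matches its index
]

def find_book(book_id, books):
    for b in books:
        if b['id'] == book_id:
            return b
    return None  # unknown id

found = find_book(7, books)
```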
"luningcowboy@gmail.com"
] | luningcowboy@gmail.com |
0c8df35d0cfa2f012fb8cb0c40d18fabd7622a0c | 3c94e96486a7e3616e3656b60abb4a53690fe216 | /tools/workspace/libjpeg/repository.bzl | b515b9136476b75092076de7375ec4ddcf61cd12 | [] | no_license | doubleyou/bazel_deps | da499f26d357739a4558225fc3e371497565eb37 | 06b595fd79d1f6dd85a399e61b3f8ae97c859194 | refs/heads/master | 2022-02-05T15:47:37.630661 | 2019-03-11T19:40:11 | 2019-03-11T19:40:11 | 178,992,654 | 0 | 0 | null | 2019-04-02T03:34:00 | 2019-04-02T03:34:00 | null | UTF-8 | Python | false | false | 1,067 | bzl | # -*- python -*-
# Copyright 2018 Josh Pieper, jjp@pobox.com.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
def libjpeg_repository(name):
http_archive(
name = name,
urls = [
"https://svwh.dl.sourceforge.net/project/libjpeg/libjpeg/6b/jpegsrc.v6b.tar.gz",
],
sha256 = "75c3ec241e9996504fe02a9ed4d12f16b74ade713972f3db9e65ce95cd27e35d",
strip_prefix = "jpeg-6b",
build_file = Label("//tools/workspace/libjpeg:package.BUILD"),
)
| [
"jjp@pobox.com"
] | jjp@pobox.com |
b64950b1af04cca69d274f053c5afa6d6f3e0f6e | 8957f0b42ba945399a2eeb71f796c11c9eb35b06 | /lib/test/test_array.py | 6a4474329970ed63b14d7ca2a2b7e0423d700513 | [] | no_license | notro/tmp_CircuitPython_stdlib | 4de177cbb45b2209f07171c27f844c7d377dffc9 | 641727294039a9441c35ba1a1d22de403664b710 | refs/heads/master | 2020-03-27T18:26:33.544047 | 2019-02-15T20:49:34 | 2019-02-15T20:49:34 | 146,922,496 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 48,142 | py | """Test the arraymodule.
Roger E. Masse
"""
import unittest
from test import support
#import weakref
#import pickle
#import operator
import io
#import math
import struct
import sys
#import warnings
import array
#from array import _array_reconstructor as array_reconstructor
try:
# Try to determine availability of long long independently
# of the array module under test
struct.calcsize('@q')
have_long_long = True
except struct.error:
have_long_long = False
#sizeof_wchar = array.array('u').itemsize
class ArraySubclass(array.array):
pass
class ArraySubclassWithKwargs(array.array):
def __init__(self, typecode, newarg=None):
# array.array.__init__(self)
super().__init__(typecode) ###
#typecodes = "ubBhHiIlLfd"
typecodes = "bBhHiIlLfd" ###
if have_long_long:
typecodes += 'qQ'
class BadConstructorTest(unittest.TestCase):
def test_constructor(self):
self.assertRaises(TypeError, array.array)
self.assertRaises(TypeError, array.array, spam=42)
# self.assertRaises(TypeError, array.array, 'xx')
# self.assertRaises(ValueError, array.array, 'x')
## Machine format codes.
##
## Search for "enum machine_format_code" in Modules/arraymodule.c to get the
## authoritative values.
#UNKNOWN_FORMAT = -1
#UNSIGNED_INT8 = 0
#SIGNED_INT8 = 1
#UNSIGNED_INT16_LE = 2
#UNSIGNED_INT16_BE = 3
#SIGNED_INT16_LE = 4
#SIGNED_INT16_BE = 5
#UNSIGNED_INT32_LE = 6
#UNSIGNED_INT32_BE = 7
#SIGNED_INT32_LE = 8
#SIGNED_INT32_BE = 9
#UNSIGNED_INT64_LE = 10
#UNSIGNED_INT64_BE = 11
#SIGNED_INT64_LE = 12
#SIGNED_INT64_BE = 13
#IEEE_754_FLOAT_LE = 14
#IEEE_754_FLOAT_BE = 15
#IEEE_754_DOUBLE_LE = 16
#IEEE_754_DOUBLE_BE = 17
#UTF16_LE = 18
#UTF16_BE = 19
#UTF32_LE = 20
#UTF32_BE = 21
#
#class ArrayReconstructorTest(unittest.TestCase):
#
# def test_error(self):
# self.assertRaises(TypeError, array_reconstructor,
# "", "b", 0, b"")
# self.assertRaises(TypeError, array_reconstructor,
# str, "b", 0, b"")
# self.assertRaises(TypeError, array_reconstructor,
# array.array, "b", '', b"")
# self.assertRaises(TypeError, array_reconstructor,
# array.array, "b", 0, "")
# self.assertRaises(ValueError, array_reconstructor,
# array.array, "?", 0, b"")
# self.assertRaises(ValueError, array_reconstructor,
# array.array, "b", UNKNOWN_FORMAT, b"")
# self.assertRaises(ValueError, array_reconstructor,
# array.array, "b", 22, b"")
# self.assertRaises(ValueError, array_reconstructor,
# array.array, "d", 16, b"a")
#
# def test_numbers(self):
# testcases = (
# (['B', 'H', 'I', 'L'], UNSIGNED_INT8, '=BBBB',
# [0x80, 0x7f, 0, 0xff]),
# (['b', 'h', 'i', 'l'], SIGNED_INT8, '=bbb',
# [-0x80, 0x7f, 0]),
# (['H', 'I', 'L'], UNSIGNED_INT16_LE, '<HHHH',
# [0x8000, 0x7fff, 0, 0xffff]),
# (['H', 'I', 'L'], UNSIGNED_INT16_BE, '>HHHH',
# [0x8000, 0x7fff, 0, 0xffff]),
# (['h', 'i', 'l'], SIGNED_INT16_LE, '<hhh',
# [-0x8000, 0x7fff, 0]),
# (['h', 'i', 'l'], SIGNED_INT16_BE, '>hhh',
# [-0x8000, 0x7fff, 0]),
# (['I', 'L'], UNSIGNED_INT32_LE, '<IIII',
# [1<<31, (1<<31)-1, 0, (1<<32)-1]),
# (['I', 'L'], UNSIGNED_INT32_BE, '>IIII',
# [1<<31, (1<<31)-1, 0, (1<<32)-1]),
# (['i', 'l'], SIGNED_INT32_LE, '<iii',
# [-1<<31, (1<<31)-1, 0]),
# (['i', 'l'], SIGNED_INT32_BE, '>iii',
# [-1<<31, (1<<31)-1, 0]),
# (['L'], UNSIGNED_INT64_LE, '<QQQQ',
# [1<<31, (1<<31)-1, 0, (1<<32)-1]),
# (['L'], UNSIGNED_INT64_BE, '>QQQQ',
# [1<<31, (1<<31)-1, 0, (1<<32)-1]),
# (['l'], SIGNED_INT64_LE, '<qqq',
# [-1<<31, (1<<31)-1, 0]),
# (['l'], SIGNED_INT64_BE, '>qqq',
# [-1<<31, (1<<31)-1, 0]),
# # The following tests for INT64 will raise an OverflowError
# # when run on a 32-bit machine. The tests are simply skipped
# # in that case.
# (['L'], UNSIGNED_INT64_LE, '<QQQQ',
# [1<<63, (1<<63)-1, 0, (1<<64)-1]),
# (['L'], UNSIGNED_INT64_BE, '>QQQQ',
# [1<<63, (1<<63)-1, 0, (1<<64)-1]),
# (['l'], SIGNED_INT64_LE, '<qqq',
# [-1<<63, (1<<63)-1, 0]),
# (['l'], SIGNED_INT64_BE, '>qqq',
# [-1<<63, (1<<63)-1, 0]),
# (['f'], IEEE_754_FLOAT_LE, '<ffff',
# [16711938.0, float('inf'), float('-inf'), -0.0]),
# (['f'], IEEE_754_FLOAT_BE, '>ffff',
# [16711938.0, float('inf'), float('-inf'), -0.0]),
# (['d'], IEEE_754_DOUBLE_LE, '<dddd',
# [9006104071832581.0, float('inf'), float('-inf'), -0.0]),
# (['d'], IEEE_754_DOUBLE_BE, '>dddd',
# [9006104071832581.0, float('inf'), float('-inf'), -0.0])
# )
# for testcase in testcases:
# valid_typecodes, mformat_code, struct_fmt, values = testcase
# arraystr = struct.pack(struct_fmt, *values)
# for typecode in valid_typecodes:
# try:
# a = array.array(typecode, values)
# except OverflowError:
# continue # Skip this test case.
# b = array_reconstructor(
# array.array, typecode, mformat_code, arraystr)
# self.assertEqual(a, b,
# msg="{0!r} != {1!r}; testcase={2!r}".format(a, b, testcase))
#
# def test_unicode(self):
# teststr = "Bonne Journ\xe9e \U0002030a\U00020347"
# testcases = (
# (UTF16_LE, "UTF-16-LE"),
# (UTF16_BE, "UTF-16-BE"),
# (UTF32_LE, "UTF-32-LE"),
# (UTF32_BE, "UTF-32-BE")
# )
# for testcase in testcases:
# mformat_code, encoding = testcase
# a = array.array('u', teststr)
# b = array_reconstructor(
# array.array, 'u', mformat_code, teststr.encode(encoding))
# self.assertEqual(a, b,
# msg="{0!r} != {1!r}; testcase={2!r}".format(a, b, testcase))
#
#
class BaseTest:
# Required class attributes (provided by subclasses
# typecode: the typecode to test
# example: an initializer usable in the constructor for this type
# smallerexample: the same length as example, but smaller
# biggerexample: the same length as example, but bigger
# outside: An entry that is not in example
# minitemsize: the minimum guaranteed itemsize
def assertEntryEqual(self, entry1, entry2):
self.assertEqual(entry1, entry2)
def badtypecode(self):
# Return a typecode that is different from our own
return typecodes[(typecodes.index(self.typecode)+1) % len(typecodes)]
def test_constructor(self):
a = array.array(self.typecode)
# self.assertEqual(a.typecode, self.typecode)
# self.assertGreaterEqual(a.itemsize, self.minitemsize)
self.assertRaises(TypeError, array.array, self.typecode, None)
def test_len(self):
a = array.array(self.typecode)
a.append(self.example[0])
self.assertEqual(len(a), 1)
a = array.array(self.typecode, self.example)
self.assertEqual(len(a), len(self.example))
# def test_buffer_info(self):
# a = array.array(self.typecode, self.example)
# self.assertRaises(TypeError, a.buffer_info, 42)
# bi = a.buffer_info()
# self.assertIsInstance(bi, tuple)
# self.assertEqual(len(bi), 2)
# self.assertIsInstance(bi[0], int)
# self.assertIsInstance(bi[1], int)
# self.assertEqual(bi[1], len(a))
#
# def test_byteswap(self):
# if self.typecode == 'u':
# example = '\U00100100'
# else:
# example = self.example
# a = array.array(self.typecode, example)
# self.assertRaises(TypeError, a.byteswap, 42)
# if a.itemsize in (1, 2, 4, 8):
# b = array.array(self.typecode, example)
# b.byteswap()
# if a.itemsize==1:
# self.assertEqual(a, b)
# else:
# self.assertNotEqual(a, b)
# b.byteswap()
# self.assertEqual(a, b)
#
# def test_copy(self):
# import copy
# a = array.array(self.typecode, self.example)
# b = copy.copy(a)
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
#
# def test_deepcopy(self):
# import copy
# a = array.array(self.typecode, self.example)
# b = copy.deepcopy(a)
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
#
# def test_reduce_ex(self):
# a = array.array(self.typecode, self.example)
# for protocol in range(3):
# self.assertIs(a.__reduce_ex__(protocol)[0], array.array)
# for protocol in range(3, pickle.HIGHEST_PROTOCOL):
# self.assertIs(a.__reduce_ex__(protocol)[0], array_reconstructor)
#
# def test_pickle(self):
# for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
# a = array.array(self.typecode, self.example)
# b = pickle.loads(pickle.dumps(a, protocol))
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
#
# a = ArraySubclass(self.typecode, self.example)
# a.x = 10
# b = pickle.loads(pickle.dumps(a, protocol))
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
# self.assertEqual(a.x, b.x)
# self.assertEqual(type(a), type(b))
#
# def test_pickle_for_empty_array(self):
# for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
# a = array.array(self.typecode)
# b = pickle.loads(pickle.dumps(a, protocol))
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
#
# a = ArraySubclass(self.typecode)
# a.x = 10
# b = pickle.loads(pickle.dumps(a, protocol))
# self.assertNotEqual(id(a), id(b))
# self.assertEqual(a, b)
# self.assertEqual(a.x, b.x)
# self.assertEqual(type(a), type(b))
#
# def test_iterator_pickle(self):
# data = array.array(self.typecode, self.example)
# for proto in range(pickle.HIGHEST_PROTOCOL + 1):
# orgit = iter(data)
# d = pickle.dumps(orgit, proto)
# it = pickle.loads(d)
# self.assertEqual(type(orgit), type(it))
# self.assertEqual(list(it), list(data))
#
# if len(data):
# it = pickle.loads(d)
# next(it)
# d = pickle.dumps(it, proto)
# self.assertEqual(list(it), list(data)[1:])
#
# def test_insert(self):
# a = array.array(self.typecode, self.example)
# a.insert(0, self.example[0])
# self.assertEqual(len(a), 1+len(self.example))
# self.assertEqual(a[0], a[1])
# self.assertRaises(TypeError, a.insert)
# self.assertRaises(TypeError, a.insert, None)
# self.assertRaises(TypeError, a.insert, 0, None)
#
# a = array.array(self.typecode, self.example)
# a.insert(-1, self.example[0])
# self.assertEqual(
# a,
# array.array(
# self.typecode,
# self.example[:-1] + self.example[:1] + self.example[-1:]
# )
# )
#
# a = array.array(self.typecode, self.example)
# a.insert(-1000, self.example[0])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[:1] + self.example)
# )
#
# a = array.array(self.typecode, self.example)
# a.insert(1000, self.example[0])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example + self.example[:1])
# )
#
# def test_tofromfile(self):
# a = array.array(self.typecode, 2*self.example)
# self.assertRaises(TypeError, a.tofile)
# support.unlink(support.TESTFN)
# f = open(support.TESTFN, 'wb')
# try:
# a.tofile(f)
# f.close()
# b = array.array(self.typecode)
# f = open(support.TESTFN, 'rb')
# self.assertRaises(TypeError, b.fromfile)
# b.fromfile(f, len(self.example))
# self.assertEqual(b, array.array(self.typecode, self.example))
# self.assertNotEqual(a, b)
# self.assertRaises(EOFError, b.fromfile, f, len(self.example)+1)
# self.assertEqual(a, b)
# f.close()
# finally:
# if not f.closed:
# f.close()
# support.unlink(support.TESTFN)
#
# def test_fromfile_ioerror(self):
# # Issue #5395: Check if fromfile raises a proper OSError
# # instead of EOFError.
# a = array.array(self.typecode)
# f = open(support.TESTFN, 'wb')
# try:
# self.assertRaises(OSError, a.fromfile, f, len(self.example))
# finally:
# f.close()
# support.unlink(support.TESTFN)
#
# def test_filewrite(self):
# a = array.array(self.typecode, 2*self.example)
# f = open(support.TESTFN, 'wb')
# try:
# f.write(a)
# f.close()
# b = array.array(self.typecode)
# f = open(support.TESTFN, 'rb')
# b.fromfile(f, len(self.example))
# self.assertEqual(b, array.array(self.typecode, self.example))
# self.assertNotEqual(a, b)
# b.fromfile(f, len(self.example))
# self.assertEqual(a, b)
# f.close()
# finally:
# if not f.closed:
# f.close()
# support.unlink(support.TESTFN)
#
# def test_tofromlist(self):
# a = array.array(self.typecode, 2*self.example)
# b = array.array(self.typecode)
# self.assertRaises(TypeError, a.tolist, 42)
# self.assertRaises(TypeError, b.fromlist)
# self.assertRaises(TypeError, b.fromlist, 42)
# self.assertRaises(TypeError, b.fromlist, [None])
# b.fromlist(a.tolist())
# self.assertEqual(a, b)
#
# def test_tofromstring(self):
# nb_warnings = 4
# with warnings.catch_warnings(record=True) as r:
# warnings.filterwarnings("always",
# message=r"(to|from)string\(\) is deprecated",
# category=DeprecationWarning)
# a = array.array(self.typecode, 2*self.example)
# b = array.array(self.typecode)
# self.assertRaises(TypeError, a.tostring, 42)
# self.assertRaises(TypeError, b.fromstring)
# self.assertRaises(TypeError, b.fromstring, 42)
# b.fromstring(a.tostring())
# self.assertEqual(a, b)
# if a.itemsize>1:
# self.assertRaises(ValueError, b.fromstring, "x")
# nb_warnings += 1
# self.assertEqual(len(r), nb_warnings)
#
# def test_tofrombytes(self):
# a = array.array(self.typecode, 2*self.example)
# b = array.array(self.typecode)
# self.assertRaises(TypeError, a.tobytes, 42)
# self.assertRaises(TypeError, b.frombytes)
# self.assertRaises(TypeError, b.frombytes, 42)
# b.frombytes(a.tobytes())
# c = array.array(self.typecode, bytearray(a.tobytes()))
# self.assertEqual(a, b)
# self.assertEqual(a, c)
# if a.itemsize>1:
# self.assertRaises(ValueError, b.frombytes, b"x")
#
def test_fromarray(self):
a = array.array(self.typecode, self.example)
b = array.array(self.typecode, a)
self.assertEqual(a, b)
def test_repr(self):
a = array.array(self.typecode, 2*self.example)
self.assertEqual(a, eval(repr(a), {"array": array.array}))
a = array.array(self.typecode)
self.assertEqual(repr(a), "array('%s')" % self.typecode)
def test_str(self):
a = array.array(self.typecode, 2*self.example)
str(a)
def test_cmp(self):
a = array.array(self.typecode, self.example)
self.assertIs(a == 42, False)
self.assertIs(a != 42, True)
self.assertIs(a == a, True)
self.assertIs(a != a, False)
# self.assertIs(a < a, False)
# self.assertIs(a <= a, True)
# self.assertIs(a > a, False)
# self.assertIs(a >= a, True)
al = array.array(self.typecode, self.smallerexample)
ab = array.array(self.typecode, self.biggerexample)
# self.assertIs(a == 2*a, False)
self.assertIs(a == a*2, False) ###
# self.assertIs(a != 2*a, True)
self.assertIs(a != a*2, True) ###
# self.assertIs(a < 2*a, True)
# self.assertIs(a <= 2*a, True)
# self.assertIs(a > 2*a, False)
# self.assertIs(a >= 2*a, False)
self.assertIs(a == al, False)
self.assertIs(a != al, True)
# self.assertIs(a < al, False)
# self.assertIs(a <= al, False)
# self.assertIs(a > al, True)
# self.assertIs(a >= al, True)
self.assertIs(a == ab, False)
self.assertIs(a != ab, True)
# self.assertIs(a < ab, True)
# self.assertIs(a <= ab, True)
# self.assertIs(a > ab, False)
# self.assertIs(a >= ab, False)
def test_add(self):
a = array.array(self.typecode, self.example) \
+ array.array(self.typecode, self.example[::-1])
self.assertEqual(
a,
array.array(self.typecode, self.example + self.example[::-1])
)
# b = array.array(self.badtypecode())
# self.assertRaises(TypeError, a.__add__, b)
#
# self.assertRaises(TypeError, a.__add__, "bad")
#
def test_iadd(self):
a = array.array(self.typecode, self.example[::-1])
b = a
a += array.array(self.typecode, 2*self.example)
self.assertIs(a, b)
self.assertEqual(
a,
array.array(self.typecode, self.example[::-1]+2*self.example)
)
a = array.array(self.typecode, self.example)
a += a
self.assertEqual(
a,
array.array(self.typecode, self.example + self.example)
)
# b = array.array(self.badtypecode())
# self.assertRaises(TypeError, a.__add__, b)
#
# self.assertRaises(TypeError, a.__iadd__, "bad")
#
def test_mul(self):
# a = 5*array.array(self.typecode, self.example)
# self.assertEqual(
# a,
# array.array(self.typecode, 5*self.example)
# )
#
a = array.array(self.typecode, self.example)*5
self.assertEqual(
a,
array.array(self.typecode, self.example*5)
)
# a = 0*array.array(self.typecode, self.example)
a = array.array(self.typecode, self.example) * 0 ###
self.assertEqual(
a,
array.array(self.typecode)
)
# a = (-1)*array.array(self.typecode, self.example)
# self.assertEqual(
# a,
# array.array(self.typecode)
# )
#
# a = 5 * array.array(self.typecode, self.example[:1])
a = array.array(self.typecode, self.example[:1]) * 5 ###
self.assertEqual(
a,
array.array(self.typecode, [a[0]] * 5)
)
# self.assertRaises(TypeError, a.__mul__, "bad")
#
def test_imul(self):
a = array.array(self.typecode, self.example)
b = a
a *= 5
self.assertIs(a, b)
self.assertEqual(
a,
array.array(self.typecode, 5*self.example)
)
a *= 0
self.assertIs(a, b)
self.assertEqual(a, array.array(self.typecode))
# a *= 1000
a *= 10 ###
self.assertIs(a, b)
self.assertEqual(a, array.array(self.typecode))
# a *= -1
# self.assertIs(a, b)
# self.assertEqual(a, array.array(self.typecode))
#
# a = array.array(self.typecode, self.example)
# a *= -1
# self.assertEqual(a, array.array(self.typecode))
#
# self.assertRaises(TypeError, a.__imul__, "bad")
#
def test_getitem(self):
a = array.array(self.typecode, self.example)
self.assertEntryEqual(a[0], self.example[0])
self.assertEntryEqual(a[0], self.example[0])
self.assertEntryEqual(a[-1], self.example[-1])
self.assertEntryEqual(a[-1], self.example[-1])
self.assertEntryEqual(a[len(self.example)-1], self.example[-1])
self.assertEntryEqual(a[-len(self.example)], self.example[0])
# self.assertRaises(TypeError, a.__getitem__)
# self.assertRaises(IndexError, a.__getitem__, len(self.example))
# self.assertRaises(IndexError, a.__getitem__, -len(self.example)-1)
def test_setitem(self):
a = array.array(self.typecode, self.example)
a[0] = a[-1]
self.assertEntryEqual(a[0], a[-1])
a = array.array(self.typecode, self.example)
a[0] = a[-1]
self.assertEntryEqual(a[0], a[-1])
a = array.array(self.typecode, self.example)
a[-1] = a[0]
self.assertEntryEqual(a[0], a[-1])
a = array.array(self.typecode, self.example)
a[-1] = a[0]
self.assertEntryEqual(a[0], a[-1])
a = array.array(self.typecode, self.example)
a[len(self.example)-1] = a[0]
self.assertEntryEqual(a[0], a[-1])
a = array.array(self.typecode, self.example)
a[-len(self.example)] = a[-1]
self.assertEntryEqual(a[0], a[-1])
# self.assertRaises(TypeError, a.__setitem__)
# self.assertRaises(TypeError, a.__setitem__, None)
# self.assertRaises(TypeError, a.__setitem__, 0, None)
# self.assertRaises(
# IndexError,
# a.__setitem__,
# len(self.example), self.example[0]
# )
# self.assertRaises(
# IndexError,
# a.__setitem__,
# -len(self.example)-1, self.example[0]
# )
#
# def test_delitem(self):
# a = array.array(self.typecode, self.example)
# del a[0]
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[1:])
# )
#
# a = array.array(self.typecode, self.example)
# del a[-1]
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[:-1])
# )
#
# a = array.array(self.typecode, self.example)
# del a[len(self.example)-1]
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[:-1])
# )
#
# a = array.array(self.typecode, self.example)
# del a[-len(self.example)]
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[1:])
# )
#
# self.assertRaises(TypeError, a.__delitem__)
# self.assertRaises(TypeError, a.__delitem__, None)
# self.assertRaises(IndexError, a.__delitem__, len(self.example))
# self.assertRaises(IndexError, a.__delitem__, -len(self.example)-1)
#
def test_getslice(self):
a = array.array(self.typecode, self.example)
self.assertEqual(a[:], a)
self.assertEqual(
a[1:],
array.array(self.typecode, self.example[1:])
)
self.assertEqual(
a[:1],
array.array(self.typecode, self.example[:1])
)
self.assertEqual(
a[:-1],
array.array(self.typecode, self.example[:-1])
)
self.assertEqual(
a[-1:],
array.array(self.typecode, self.example[-1:])
)
self.assertEqual(
a[-1:-1],
array.array(self.typecode)
)
self.assertEqual(
a[2:1],
array.array(self.typecode)
)
self.assertEqual(
a[1000:],
array.array(self.typecode)
)
self.assertEqual(a[-1000:], a)
self.assertEqual(a[:1000], a)
self.assertEqual(
a[:-1000],
array.array(self.typecode)
)
self.assertEqual(a[-1000:1000], a)
self.assertEqual(
a[2000:1000],
array.array(self.typecode)
)
def test_extended_getslice(self):
# Test extended slicing by comparing with list slicing
# (Assumes list conversion works correctly, too)
a = array.array(self.typecode, self.example)
indices = (0, None, 1, 3, 19, 100, -1, -2, -31, -100)
for start in indices:
for stop in indices:
# Everything except the initial 0 (invalid step)
# for step in indices[1:]:
for step in [1]: ###
self.assertEqual(list(a[start:stop:step]),
list(a)[start:stop:step])
def test_setslice(self):
a = array.array(self.typecode, self.example)
a[:1] = a
self.assertEqual(
a,
array.array(self.typecode, self.example + self.example[1:])
)
a = array.array(self.typecode, self.example)
a[:-1] = a
self.assertEqual(
a,
array.array(self.typecode, self.example + self.example[-1:])
)
a = array.array(self.typecode, self.example)
a[-1:] = a
self.assertEqual(
a,
array.array(self.typecode, self.example[:-1] + self.example)
)
a = array.array(self.typecode, self.example)
a[1:] = a
self.assertEqual(
a,
array.array(self.typecode, self.example[:1] + self.example)
)
a = array.array(self.typecode, self.example)
a[1:-1] = a
self.assertEqual(
a,
array.array(
self.typecode,
self.example[:1] + self.example + self.example[-1:]
)
)
a = array.array(self.typecode, self.example)
a[1000:] = a
self.assertEqual(
a,
array.array(self.typecode, 2*self.example)
)
a = array.array(self.typecode, self.example)
a[-1000:] = a
self.assertEqual(
a,
array.array(self.typecode, self.example)
)
a = array.array(self.typecode, self.example)
a[:1000] = a
self.assertEqual(
a,
array.array(self.typecode, self.example)
)
a = array.array(self.typecode, self.example)
a[:-1000] = a
self.assertEqual(
a,
array.array(self.typecode, 2*self.example)
)
a = array.array(self.typecode, self.example)
a[1:0] = a
self.assertEqual(
a,
array.array(self.typecode, self.example[:1] + self.example + self.example[1:])
)
a = array.array(self.typecode, self.example)
a[2000:1000] = a
self.assertEqual(
a,
array.array(self.typecode, 2*self.example)
)
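The trickiest case in `test_setslice` is assigning the array into one of its own zero-width slices, e.g. `a[1:0] = a`. A standalone sketch of that semantics (self-assignment is safe because a snapshot of the contents is spliced in):

```python
import array

a = array.array('b', [1, 2, 3])
# a[1:0] is an empty slice positioned at index 1; assigning to it
# inserts a copy of the whole array there.
a[1:0] = a
assert list(a) == [1, 1, 2, 3, 2, 3]
```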
# a = array.array(self.typecode, self.example)
# self.assertRaises(TypeError, a.__setitem__, slice(0, 0), None)
# self.assertRaises(TypeError, a.__setitem__, slice(0, 1), None)
#
# b = array.array(self.badtypecode())
# self.assertRaises(TypeError, a.__setitem__, slice(0, 0), b)
# self.assertRaises(TypeError, a.__setitem__, slice(0, 1), b)
#
# def test_extended_set_del_slice(self):
# indices = (0, None, 1, 3, 19, 100, -1, -2, -31, -100)
# for start in indices:
# for stop in indices:
# # Everything except the initial 0 (invalid step)
# for step in indices[1:]:
# a = array.array(self.typecode, self.example)
# L = list(a)
# # Make sure we have a slice of exactly the right length,
# # but with (hopefully) different data.
# data = L[start:stop:step]
# data.reverse()
# L[start:stop:step] = data
# a[start:stop:step] = array.array(self.typecode, data)
# self.assertEqual(a, array.array(self.typecode, L))
#
# del L[start:stop:step]
# del a[start:stop:step]
# self.assertEqual(a, array.array(self.typecode, L))
#
# def test_index(self):
# example = 2*self.example
# a = array.array(self.typecode, example)
# self.assertRaises(TypeError, a.index)
# for x in example:
# self.assertEqual(a.index(x), example.index(x))
# self.assertRaises(ValueError, a.index, None)
# self.assertRaises(ValueError, a.index, self.outside)
#
# def test_count(self):
# example = 2*self.example
# a = array.array(self.typecode, example)
# self.assertRaises(TypeError, a.count)
# for x in example:
# self.assertEqual(a.count(x), example.count(x))
# self.assertEqual(a.count(self.outside), 0)
# self.assertEqual(a.count(None), 0)
#
# def test_remove(self):
# for x in self.example:
# example = 2*self.example
# a = array.array(self.typecode, example)
# pos = example.index(x)
# example2 = example[:pos] + example[pos+1:]
# a.remove(x)
# self.assertEqual(a, array.array(self.typecode, example2))
#
# a = array.array(self.typecode, self.example)
# self.assertRaises(ValueError, a.remove, self.outside)
#
# self.assertRaises(ValueError, a.remove, None)
#
# def test_pop(self):
# a = array.array(self.typecode)
# self.assertRaises(IndexError, a.pop)
#
# a = array.array(self.typecode, 2*self.example)
# self.assertRaises(TypeError, a.pop, 42, 42)
# self.assertRaises(TypeError, a.pop, None)
# self.assertRaises(IndexError, a.pop, len(a))
# self.assertRaises(IndexError, a.pop, -len(a)-1)
#
# self.assertEntryEqual(a.pop(0), self.example[0])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[1:]+self.example)
# )
# self.assertEntryEqual(a.pop(1), self.example[2])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[1:2]+self.example[3:]+self.example)
# )
# self.assertEntryEqual(a.pop(0), self.example[1])
# self.assertEntryEqual(a.pop(), self.example[-1])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[3:]+self.example[:-1])
# )
#
# def test_reverse(self):
# a = array.array(self.typecode, self.example)
# self.assertRaises(TypeError, a.reverse, 42)
# a.reverse()
# self.assertEqual(
# a,
# array.array(self.typecode, self.example[::-1])
# )
#
def test_extend(self):
a = array.array(self.typecode, self.example)
self.assertRaises(TypeError, a.extend)
# a.extend(array.array(self.typecode, self.example[::-1]))
# self.assertEqual(
# a,
# array.array(self.typecode, self.example+self.example[::-1])
# )
a = array.array(self.typecode, self.example)
a.extend(a)
self.assertEqual(
a,
array.array(self.typecode, self.example+self.example)
)
# b = array.array(self.badtypecode())
# self.assertRaises(TypeError, a.extend, b)
#
# a = array.array(self.typecode, self.example)
# a.extend(self.example[::-1])
# self.assertEqual(
# a,
# array.array(self.typecode, self.example+self.example[::-1])
# )
#
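The live part of `test_extend` above checks the self-extend case, where the argument is the array itself. Shown standalone, the contents are snapshotted before appending, so the array simply doubles:

```python
import array

a = array.array('b', [1, 2, 3])
a.extend(a)  # extending with itself appends a snapshot of the contents
assert list(a) == [1, 2, 3, 1, 2, 3]
```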
def test_constructor_with_iterable_argument(self):
a = array.array(self.typecode, iter(self.example))
b = array.array(self.typecode, self.example)
self.assertEqual(a, b)
# non-iterable argument
self.assertRaises(TypeError, array.array, self.typecode, 10)
# pass through errors raised in __iter__
class A:
def __iter__(self):
raise UnicodeError
self.assertRaises(UnicodeError, array.array, self.typecode, A())
# pass through errors raised in next()
def B():
raise UnicodeError
yield None
self.assertRaises(UnicodeError, array.array, self.typecode, B())
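The constructor test above covers two contracts: any iterable works as the initializer, and a non-iterable raises `TypeError`. A quick standalone demonstration of both:

```python
import array

# Any iterable is accepted as the initializer.
a = array.array('b', iter(range(3)))
assert list(a) == [0, 1, 2]

# A non-iterable argument raises TypeError, as the test asserts.
failed = False
try:
    array.array('b', 10)
except TypeError:
    failed = True
assert failed
```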
def test_coveritertraverse(self):
try:
import gc
except ImportError:
self.skipTest('gc module not available')
a = array.array(self.typecode)
l = [iter(a)]
l.append(l)
gc.collect()
# def test_buffer(self):
# a = array.array(self.typecode, self.example)
# m = memoryview(a)
# expected = m.tobytes()
# self.assertEqual(a.tobytes(), expected)
# self.assertEqual(a.tobytes()[0], expected[0])
# # Resizing is forbidden when there are buffer exports.
# # For issue 4509, we also check after each error that
# # the array was not modified.
# self.assertRaises(BufferError, a.append, a[0])
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, a.extend, a[0:1])
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, a.remove, a[0])
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, a.pop, 0)
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, a.fromlist, a.tolist())
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, a.frombytes, a.tobytes())
# self.assertEqual(m.tobytes(), expected)
# if self.typecode == 'u':
# self.assertRaises(BufferError, a.fromunicode, a.tounicode())
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, operator.imul, a, 2)
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, operator.imul, a, 0)
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, operator.setitem, a, slice(0, 0), a)
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, operator.delitem, a, 0)
# self.assertEqual(m.tobytes(), expected)
# self.assertRaises(BufferError, operator.delitem, a, slice(0, 1))
# self.assertEqual(m.tobytes(), expected)
#
# def test_weakref(self):
# s = array.array(self.typecode, self.example)
# p = weakref.proxy(s)
# self.assertEqual(p.tobytes(), s.tobytes())
# s = None
# self.assertRaises(ReferenceError, len, p)
#
# @unittest.skipUnless(hasattr(sys, 'getrefcount'),
# 'test needs sys.getrefcount()')
# def test_bug_782369(self):
# for i in range(10):
# b = array.array('B', range(64))
# rc = sys.getrefcount(10)
# for i in range(10):
# b = array.array('B', range(64))
# self.assertEqual(rc, sys.getrefcount(10))
#
def test_subclass_with_kwargs(self):
# SF bug #1486663 -- this used to erroneously raise a TypeError
ArraySubclassWithKwargs('b', newarg=1)
# def test_create_from_bytes(self):
# # XXX This test probably needs to be moved in a subclass or
# # generalized to use self.typecode.
# a = array.array('H', b"1234")
# self.assertEqual(len(a) * a.itemsize, 4)
#
# @support.cpython_only
# def test_sizeof_with_buffer(self):
# a = array.array(self.typecode, self.example)
# basesize = support.calcvobjsize('Pn2Pi')
# buffer_size = a.buffer_info()[1] * a.itemsize
# support.check_sizeof(self, a, basesize + buffer_size)
#
# @support.cpython_only
# def test_sizeof_without_buffer(self):
# a = array.array(self.typecode)
# basesize = support.calcvobjsize('Pn2Pi')
# support.check_sizeof(self, a, basesize)
#
# def test_initialize_with_unicode(self):
# if self.typecode != 'u':
# with self.assertRaises(TypeError) as cm:
# a = array.array(self.typecode, 'foo')
# self.assertIn("cannot use a str", str(cm.exception))
# with self.assertRaises(TypeError) as cm:
# a = array.array(self.typecode, array.array('u', 'foo'))
# self.assertIn("cannot use a unicode array", str(cm.exception))
# else:
# a = array.array(self.typecode, "foo")
# a = array.array(self.typecode, array.array('u', 'foo'))
class StringTest(BaseTest):
def test_setitem(self):
super().test_setitem()
a = array.array(self.typecode, self.example)
self.assertRaises(TypeError, a.__setitem__, 0, self.example[:2])
#class UnicodeTest(StringTest, unittest.TestCase):
# typecode = 'u'
# example = '\x01\u263a\x00\ufeff'
# smallerexample = '\x01\u263a\x00\ufefe'
# biggerexample = '\x01\u263a\x01\ufeff'
# outside = str('\x33')
# minitemsize = 2
#
# def test_unicode(self):
# self.assertRaises(TypeError, array.array, 'b', 'foo')
#
# a = array.array('u', '\xa0\xc2\u1234')
# a.fromunicode(' ')
# a.fromunicode('')
# a.fromunicode('')
# a.fromunicode('\x11abc\xff\u1234')
# s = a.tounicode()
# self.assertEqual(s, '\xa0\xc2\u1234 \x11abc\xff\u1234')
# self.assertEqual(a.itemsize, sizeof_wchar)
#
# s = '\x00="\'a\\b\x80\xff\u0000\u0001\u1234'
# a = array.array('u', s)
# self.assertEqual(
# repr(a),
# "array('u', '\\x00=\"\\'a\\\\b\\x80\xff\\x00\\x01\u1234')")
#
# self.assertRaises(TypeError, a.fromunicode)
#
# def test_issue17223(self):
# # this used to crash
# if sizeof_wchar == 4:
# # U+FFFFFFFF is an invalid code point in Unicode 6.0
# invalid_str = b'\xff\xff\xff\xff'
# else:
# # PyUnicode_FromUnicode() cannot fail with 16-bit wchar_t
# self.skipTest("specific to 32-bit wchar_t")
# a = array.array('u', invalid_str)
# self.assertRaises(ValueError, a.tounicode)
# self.assertRaises(ValueError, str, a)
#
class NumberTest(BaseTest):
# def test_extslice(self):
# a = array.array(self.typecode, range(5))
# self.assertEqual(a[::], a)
# self.assertEqual(a[::2], array.array(self.typecode, [0,2,4]))
# self.assertEqual(a[1::2], array.array(self.typecode, [1,3]))
# self.assertEqual(a[::-1], array.array(self.typecode, [4,3,2,1,0]))
# self.assertEqual(a[::-2], array.array(self.typecode, [4,2,0]))
# self.assertEqual(a[3::-2], array.array(self.typecode, [3,1]))
# self.assertEqual(a[-100:100:], a)
# self.assertEqual(a[100:-100:-1], a[::-1])
# self.assertEqual(a[-100:100:2], array.array(self.typecode, [0,2,4]))
# self.assertEqual(a[1000:2000:2], array.array(self.typecode, []))
# self.assertEqual(a[-1000:-2000:-2], array.array(self.typecode, []))
#
# def test_delslice(self):
# a = array.array(self.typecode, range(5))
# del a[::2]
# self.assertEqual(a, array.array(self.typecode, [1,3]))
# a = array.array(self.typecode, range(5))
# del a[1::2]
# self.assertEqual(a, array.array(self.typecode, [0,2,4]))
# a = array.array(self.typecode, range(5))
# del a[1::-2]
# self.assertEqual(a, array.array(self.typecode, [0,2,3,4]))
# a = array.array(self.typecode, range(10))
# del a[::1000]
# self.assertEqual(a, array.array(self.typecode, [1,2,3,4,5,6,7,8,9]))
# # test issue7788
# a = array.array(self.typecode, range(10))
# del a[9::1<<333]
#
def test_assignment(self):
# a = array.array(self.typecode, range(10))
# a[::2] = array.array(self.typecode, [42]*5)
# self.assertEqual(a, array.array(self.typecode, [42, 1, 42, 3, 42, 5, 42, 7, 42, 9]))
# a = array.array(self.typecode, range(10))
# a[::-4] = array.array(self.typecode, [10]*3)
# self.assertEqual(a, array.array(self.typecode, [0, 10, 2, 3, 4, 10, 6, 7, 8 ,10]))
# a = array.array(self.typecode, range(4))
# a[::-1] = a
# self.assertEqual(a, array.array(self.typecode, [3, 2, 1, 0]))
a = array.array(self.typecode, range(10))
b = a[:]
c = a[:]
ins = array.array(self.typecode, range(2))
a[2:3] = ins
b[slice(2,3)] = ins
c[2:3:] = ins
def test_iterationcontains(self):
a = array.array(self.typecode, range(10))
self.assertEqual(list(a), list(range(10)))
# b = array.array(self.typecode, [20])
# self.assertEqual(a[-1] in a, True)
# self.assertEqual(b[0] not in a, True)
def check_overflow(self, lower, upper):
# method to be used by subclasses
# should not overflow assigning lower limit
a = array.array(self.typecode, [lower])
a[0] = lower
# should overflow assigning less than lower limit
self.assertRaises(OverflowError, array.array, self.typecode, [lower-1])
self.assertRaises(OverflowError, a.__setitem__, 0, lower-1)
# should not overflow assigning upper limit
a = array.array(self.typecode, [upper])
a[0] = upper
# should overflow assigning more than upper limit
self.assertRaises(OverflowError, array.array, self.typecode, [upper+1])
self.assertRaises(OverflowError, a.__setitem__, 0, upper+1)
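`check_overflow` expects its callers to pass the exact limits for the typecode. The commented-out `test_overflow` further down derives them from `itemsize`; here is that derivation standalone for the signed `'h'` typecode. The concrete values assume a 2-byte C short, which is what mainstream platforms use (the docs only guarantee a minimum of 2 bytes):

```python
import array

a = array.array('h')  # signed short
# Same formulas as the commented-out test_overflow below.
lower = -1 * int(pow(2, a.itemsize * 8 - 1))
upper = int(pow(2, a.itemsize * 8 - 1)) - 1
assert (lower, upper) == (-32768, 32767)

# One past the upper limit overflows, matching check_overflow's premise.
overflowed = False
try:
    array.array('h', [upper + 1])
except OverflowError:
    overflowed = True
assert overflowed
```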
# def test_subclassing(self):
# typecode = self.typecode
# class ExaggeratingArray(array.array):
# __slots__ = ['offset']
#
# def __new__(cls, typecode, data, offset):
# return array.array.__new__(cls, typecode, data)
#
# def __init__(self, typecode, data, offset):
# self.offset = offset
#
# def __getitem__(self, i):
# return array.array.__getitem__(self, i) + self.offset
#
# a = ExaggeratingArray(self.typecode, [3, 6, 7, 11], 4)
# self.assertEntryEqual(a[0], 7)
#
# self.assertRaises(AttributeError, setattr, a, "color", "blue")
#
# def test_frombytearray(self):
# a = array.array('b', range(10))
# b = array.array(self.typecode, a)
# self.assertEqual(a, b)
#
class SignedNumberTest(NumberTest):
example = [-1, 0, 1, 42, 0x7f]
smallerexample = [-1, 0, 1, 42, 0x7e]
biggerexample = [-1, 0, 1, 43, 0x7f]
outside = 23
# def test_overflow(self):
# a = array.array(self.typecode)
# lower = -1 * int(pow(2, a.itemsize * 8 - 1))
# upper = int(pow(2, a.itemsize * 8 - 1)) - 1
# self.check_overflow(lower, upper)
#
class UnsignedNumberTest(NumberTest):
example = [0, 1, 17, 23, 42, 0xff]
smallerexample = [0, 1, 17, 23, 42, 0xfe]
biggerexample = [0, 1, 17, 23, 43, 0xff]
outside = 0xaa
# def test_overflow(self):
# a = array.array(self.typecode)
# lower = 0
# upper = int(pow(2, a.itemsize * 8)) - 1
# self.check_overflow(lower, upper)
#
# def test_bytes_extend(self):
# s = bytes(self.example)
#
# a = array.array(self.typecode, self.example)
# a.extend(s)
# self.assertEqual(
# a,
# array.array(self.typecode, self.example+self.example)
# )
#
# a = array.array(self.typecode, self.example)
# a.extend(bytearray(reversed(s)))
# self.assertEqual(
# a,
# array.array(self.typecode, self.example+self.example[::-1])
# )
#
class ByteTest(SignedNumberTest, unittest.TestCase):
typecode = 'b'
minitemsize = 1
class UnsignedByteTest(UnsignedNumberTest, unittest.TestCase):
typecode = 'B'
minitemsize = 1
class ShortTest(SignedNumberTest, unittest.TestCase):
typecode = 'h'
minitemsize = 2
class UnsignedShortTest(UnsignedNumberTest, unittest.TestCase):
typecode = 'H'
minitemsize = 2
class IntTest(SignedNumberTest, unittest.TestCase):
typecode = 'i'
minitemsize = 2
class UnsignedIntTest(UnsignedNumberTest, unittest.TestCase):
typecode = 'I'
minitemsize = 2
class LongTest(SignedNumberTest, unittest.TestCase):
typecode = 'l'
minitemsize = 4
class UnsignedLongTest(UnsignedNumberTest, unittest.TestCase):
typecode = 'L'
minitemsize = 4
@unittest.skipIf(not have_long_long, 'need long long support')
class LongLongTest(SignedNumberTest, unittest.TestCase):
typecode = 'q'
minitemsize = 8
###
# -1 ends up as 65535 in these tests ###
# We can't mark using expectedFailure since function name is used as key ###
# and all functions with that name would be marked (no function properties) ###
def test_fromarray(self): ###
raise unittest.SkipTest('FAILS on -1') ###
###
def test_mul(self): ###
raise unittest.SkipTest('FAILS on -1') ###
###
def test_setitem(self): ###
raise unittest.SkipTest('FAILS on -1') ###
@unittest.skipIf(not have_long_long, 'need long long support')
class UnsignedLongLongTest(UnsignedNumberTest, unittest.TestCase):
typecode = 'Q'
minitemsize = 8
class FPTest(NumberTest):
example = [-42.0, 0, 42, 1e5, -1e10]
smallerexample = [-42.0, 0, 42, 1e5, -2e10]
biggerexample = [-42.0, 0, 42, 1e5, 1e10]
outside = 23
def assertEntryEqual(self, entry1, entry2):
self.assertAlmostEqual(entry1, entry2)
# def test_byteswap(self):
# a = array.array(self.typecode, self.example)
# self.assertRaises(TypeError, a.byteswap, 42)
# if a.itemsize in (1, 2, 4, 8):
# b = array.array(self.typecode, self.example)
# b.byteswap()
# if a.itemsize==1:
# self.assertEqual(a, b)
# else:
# # On alphas treating the byte swapped bit patterns as
# # floats/doubles results in floating point exceptions
# # => compare the 8bit string values instead
# self.assertNotEqual(a.tobytes(), b.tobytes())
# b.byteswap()
# self.assertEqual(a, b)
#
class FloatTest(FPTest, unittest.TestCase):
typecode = 'f'
minitemsize = 4
class DoubleTest(FPTest, unittest.TestCase):
typecode = 'd'
minitemsize = 8
# def test_alloc_overflow(self):
# from sys import maxsize
# a = array.array('d', [-1]*65536)
# try:
# a *= maxsize//65536 + 1
# except MemoryError:
# pass
# else:
# self.fail("Array of size > maxsize created - MemoryError expected")
# b = array.array('d', [ 2.71828183, 3.14159265, -1])
# try:
# b * (maxsize//3 + 1)
# except MemoryError:
# pass
# else:
# self.fail("Array of size > maxsize created - MemoryError expected")
#
#
#if __name__ == "__main__":
# unittest.main()
# === File: /swig/swig_example/cppfunctions.py (repo: sylvaus/python_bindings, no license) ===
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 4.0.1
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info as _swig_python_version_info
if _swig_python_version_info < (2, 7, 0):
raise RuntimeError("Python 2.7 or later required")
# Import the low-level C/C++ module
if __package__ or "." in __name__:
from . import _cppfunctions
else:
import _cppfunctions
try:
import builtins as __builtin__
except ImportError:
import __builtin__
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
def _swig_setattr_nondynamic_instance_variable(set):
def set_instance_attr(self, name, value):
if name == "thisown":
self.this.own(value)
elif name == "this":
set(self, name, value)
elif hasattr(self, name) and isinstance(getattr(type(self), name), property):
set(self, name, value)
else:
raise AttributeError("You cannot add instance attributes to %s" % self)
return set_instance_attr
def _swig_setattr_nondynamic_class_variable(set):
def set_class_attr(cls, name, value):
if hasattr(cls, name) and not isinstance(getattr(cls, name), property):
set(cls, name, value)
else:
raise AttributeError("You cannot add class attributes to %s" % cls)
return set_class_attr
def _swig_add_metaclass(metaclass):
"""Class decorator for adding a metaclass to a SWIG wrapped class - a slimmed down version of six.add_metaclass"""
def wrapper(cls):
return metaclass(cls.__name__, cls.__bases__, cls.__dict__.copy())
return wrapper
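The decorator above rebuilds the class through the metaclass, keeping the name, bases, and namespace. A stdlib-only sketch of the same pattern follows. Note one assumption: unlike the slimmed SWIG helper, this version pops the `__dict__`/`__weakref__` descriptors from the namespace (as `six.add_metaclass` does) so it also works on plain classes:

```python
def add_metaclass(metaclass):
    """Rebuild the decorated class so it is an instance of `metaclass`."""
    def wrapper(cls):
        ns = dict(cls.__dict__)
        # Drop slot descriptors that type() would refuse to recreate.
        ns.pop('__dict__', None)
        ns.pop('__weakref__', None)
        return metaclass(cls.__name__, cls.__bases__, ns)
    return wrapper

class Meta(type):
    pass

@add_metaclass(Meta)
class Thing:
    pass

assert type(Thing) is Meta
assert Thing.__name__ == 'Thing'
```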
class _SwigNonDynamicMeta(type):
"""Meta class to enforce nondynamic attributes (no new attributes) for a class"""
__setattr__ = _swig_setattr_nondynamic_class_variable(type.__setattr__)
class SwigPyIterator(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined - class is abstract")
__repr__ = _swig_repr
__swig_destroy__ = _cppfunctions.delete_SwigPyIterator
def value(self):
return _cppfunctions.SwigPyIterator_value(self)
def incr(self, n=1):
return _cppfunctions.SwigPyIterator_incr(self, n)
def decr(self, n=1):
return _cppfunctions.SwigPyIterator_decr(self, n)
def distance(self, x):
return _cppfunctions.SwigPyIterator_distance(self, x)
def equal(self, x):
return _cppfunctions.SwigPyIterator_equal(self, x)
def copy(self):
return _cppfunctions.SwigPyIterator_copy(self)
def next(self):
return _cppfunctions.SwigPyIterator_next(self)
def __next__(self):
return _cppfunctions.SwigPyIterator___next__(self)
def previous(self):
return _cppfunctions.SwigPyIterator_previous(self)
def advance(self, n):
return _cppfunctions.SwigPyIterator_advance(self, n)
def __eq__(self, x):
return _cppfunctions.SwigPyIterator___eq__(self, x)
def __ne__(self, x):
return _cppfunctions.SwigPyIterator___ne__(self, x)
def __iadd__(self, n):
return _cppfunctions.SwigPyIterator___iadd__(self, n)
def __isub__(self, n):
return _cppfunctions.SwigPyIterator___isub__(self, n)
def __add__(self, n):
return _cppfunctions.SwigPyIterator___add__(self, n)
def __sub__(self, *args):
return _cppfunctions.SwigPyIterator___sub__(self, *args)
def __iter__(self):
return self
# Register SwigPyIterator in _cppfunctions:
_cppfunctions.SwigPyIterator_swigregister(SwigPyIterator)
class IntVector(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
def iterator(self):
return _cppfunctions.IntVector_iterator(self)
def __iter__(self):
return self.iterator()
def __nonzero__(self):
return _cppfunctions.IntVector___nonzero__(self)
def __bool__(self):
return _cppfunctions.IntVector___bool__(self)
def __len__(self):
return _cppfunctions.IntVector___len__(self)
def __getslice__(self, i, j):
return _cppfunctions.IntVector___getslice__(self, i, j)
def __setslice__(self, *args):
return _cppfunctions.IntVector___setslice__(self, *args)
def __delslice__(self, i, j):
return _cppfunctions.IntVector___delslice__(self, i, j)
def __delitem__(self, *args):
return _cppfunctions.IntVector___delitem__(self, *args)
def __getitem__(self, *args):
return _cppfunctions.IntVector___getitem__(self, *args)
def __setitem__(self, *args):
return _cppfunctions.IntVector___setitem__(self, *args)
def pop(self):
return _cppfunctions.IntVector_pop(self)
def append(self, x):
return _cppfunctions.IntVector_append(self, x)
def empty(self):
return _cppfunctions.IntVector_empty(self)
def size(self):
return _cppfunctions.IntVector_size(self)
def swap(self, v):
return _cppfunctions.IntVector_swap(self, v)
def begin(self):
return _cppfunctions.IntVector_begin(self)
def end(self):
return _cppfunctions.IntVector_end(self)
def rbegin(self):
return _cppfunctions.IntVector_rbegin(self)
def rend(self):
return _cppfunctions.IntVector_rend(self)
def clear(self):
return _cppfunctions.IntVector_clear(self)
def get_allocator(self):
return _cppfunctions.IntVector_get_allocator(self)
def pop_back(self):
return _cppfunctions.IntVector_pop_back(self)
def erase(self, *args):
return _cppfunctions.IntVector_erase(self, *args)
def __init__(self, *args):
_cppfunctions.IntVector_swiginit(self, _cppfunctions.new_IntVector(*args))
def push_back(self, x):
return _cppfunctions.IntVector_push_back(self, x)
def front(self):
return _cppfunctions.IntVector_front(self)
def back(self):
return _cppfunctions.IntVector_back(self)
def assign(self, n, x):
return _cppfunctions.IntVector_assign(self, n, x)
def resize(self, *args):
return _cppfunctions.IntVector_resize(self, *args)
def insert(self, *args):
return _cppfunctions.IntVector_insert(self, *args)
def reserve(self, n):
return _cppfunctions.IntVector_reserve(self, n)
def capacity(self):
return _cppfunctions.IntVector_capacity(self)
__swig_destroy__ = _cppfunctions.delete_IntVector
# Register IntVector in _cppfunctions:
_cppfunctions.IntVector_swigregister(IntVector)
def plus_two_list(v):
return _cppfunctions.plus_two_list(v)
# === File: /tasks/__init__.py (repo: pycontw/mail_handler, license: MIT) ===
from invoke import Collection
from tasks import doc, env, git, secure, style, test
from tasks.build import build_ns
ns = Collection()
ns.add_collection(env)
ns.add_collection(git)
ns.add_collection(test)
ns.add_collection(style)
ns.add_collection(build_ns)
ns.add_collection(doc)
ns.add_collection(secure)
# === File: /python/core/2017/4/unifi.py (repo: rosoareslv/SED99, no license) ===
"""
Support for Unifi WAP controllers.
For more details about this platform, please refer to the documentation at
https://home-assistant.io/components/device_tracker.unifi/
"""
import logging
import urllib
import voluptuous as vol
import homeassistant.helpers.config_validation as cv
import homeassistant.loader as loader
from homeassistant.components.device_tracker import (
DOMAIN, PLATFORM_SCHEMA, DeviceScanner)
from homeassistant.const import CONF_HOST, CONF_USERNAME, CONF_PASSWORD
from homeassistant.const import CONF_VERIFY_SSL
REQUIREMENTS = ['pyunifi==2.0']
_LOGGER = logging.getLogger(__name__)
CONF_PORT = 'port'
CONF_SITE_ID = 'site_id'
DEFAULT_HOST = 'localhost'
DEFAULT_PORT = 8443
DEFAULT_VERIFY_SSL = True
NOTIFICATION_ID = 'unifi_notification'
NOTIFICATION_TITLE = 'Unifi Device Tracker Setup'
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Optional(CONF_HOST, default=DEFAULT_HOST): cv.string,
vol.Optional(CONF_SITE_ID, default='default'): cv.string,
vol.Required(CONF_PASSWORD): cv.string,
vol.Required(CONF_USERNAME): cv.string,
vol.Required(CONF_PORT, default=DEFAULT_PORT): cv.port,
vol.Optional(CONF_VERIFY_SSL, default=DEFAULT_VERIFY_SSL): cv.boolean,
})
def get_scanner(hass, config):
"""Set up the Unifi device_tracker."""
from pyunifi.controller import Controller
host = config[DOMAIN].get(CONF_HOST)
username = config[DOMAIN].get(CONF_USERNAME)
password = config[DOMAIN].get(CONF_PASSWORD)
site_id = config[DOMAIN].get(CONF_SITE_ID)
port = config[DOMAIN].get(CONF_PORT)
verify_ssl = config[DOMAIN].get(CONF_VERIFY_SSL)
persistent_notification = loader.get_component('persistent_notification')
try:
ctrl = Controller(host, username, password, port, version='v4',
site_id=site_id, ssl_verify=verify_ssl)
except urllib.error.HTTPError as ex:
_LOGGER.error("Failed to connect to Unifi: %s", ex)
persistent_notification.create(
hass, 'Failed to connect to Unifi. '
'Error: {}<br />'
'You will need to restart hass after fixing.'
''.format(ex),
title=NOTIFICATION_TITLE,
notification_id=NOTIFICATION_ID)
return False
return UnifiScanner(ctrl)
class UnifiScanner(DeviceScanner):
"""Provide device_tracker support from Unifi WAP client data."""
def __init__(self, controller):
"""Initialize the scanner."""
self._controller = controller
self._update()
def _update(self):
"""Get the clients from the device."""
try:
clients = self._controller.get_clients()
except urllib.error.HTTPError as ex:
_LOGGER.error("Failed to scan clients: %s", ex)
clients = []
self._clients = {client['mac']: client for client in clients}
def scan_devices(self):
"""Scan for devices."""
self._update()
return self._clients.keys()
def get_device_name(self, mac):
"""Return the name (if known) of the device.
If a name has been set in Unifi, then return that, else
return the hostname if it has been detected.
"""
client = self._clients.get(mac, {})
name = client.get('name') or client.get('hostname')
_LOGGER.debug("Device %s name %s", mac, name)
return name
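Stripped of the controller calls, the scanner's bookkeeping above is plain dict work: index client records by MAC, then prefer a user-set name over the detected hostname. A standalone sketch (the `clients`/`by_mac`/`device_name` names and sample records are illustrative, not part of pyunifi's API):

```python
clients = [
    {'mac': 'aa:bb', 'name': 'phone', 'hostname': 'android-1'},
    {'mac': 'cc:dd', 'hostname': 'laptop-2'},
]
# Same pattern as _update(): index client dicts by MAC address.
by_mac = {c['mac']: c for c in clients}

def device_name(mac):
    # Same pattern as get_device_name(): name wins, hostname is fallback.
    client = by_mac.get(mac, {})
    return client.get('name') or client.get('hostname')

assert device_name('aa:bb') == 'phone'     # explicit name wins
assert device_name('cc:dd') == 'laptop-2'  # falls back to hostname
assert device_name('ee:ff') is None        # unknown MAC
```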
# === File: /word2vec/get_close_words.py (repo: muntakimrafi/insbcn, no license) ===
from gensim import models
lans = ['en','es','ca']
model_name = 'word2vec_model_instaBarcelona_lan.model'
model_path = '../../../datasets/instaBarcelona/models/word2vec/' + model_name
models_list = []
print "Loading models ... "
for l in lans:
models_list.append(models.Word2Vec.load(model_path.replace('lan',l)))
# districts = ['surf','ciutatvella', 'eixample', 'santsmontjuic', 'lescorts', 'sarria', 'gracia', 'hortaguinardo', 'noubarris', 'santandreu', 'santmarti']
# districts += ['poblenou','poblesec','sagradafamilia','barceloneta','gothic','vallcarca','gotic','gotico','viladegracia','viladegracia','vallvidrera','diagonalmar','raval','born','borne']
# districts = ['elborn','santmarti','poblesec','barceloneta','gothic','vallcarca','gotic','gotico','born','raval','sants','poblenou','vallcarca','viladegracia','gracia','sagradafamilia','vallvidrera']
districts = ['poblesec','poblenou','born']
print "Checking models"
for d in districts:
print '\n' + d
for m in models_list:
try:
topw = m.wv.most_similar(positive=[d], topn=30)
except KeyError:  # gensim raises KeyError for words missing from the vocabulary
topw = [('Not in voc','')]
toprint = ''
for w in topw:
toprint += str(w[0]) + ' '
print toprint
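The loop above accumulates the top-N words into one line with string concatenation (leaving a trailing space). A Python 3 sketch of the more idiomatic join-based version; the script itself is Python 2, and the sample `(word, score)` pairs are made up:

```python
topw = [('beach', 0.81), ('sand', 0.77), ('surf', 0.74)]
# Join only the words from the (word, score) pairs into one line.
line = ' '.join(str(w) for w, _ in topw)
assert line == 'beach sand surf'
```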
print "DONE" | [
"raulgombru@gmail.com"
] | raulgombru@gmail.com |
# === File: /docs/conf.py (repo: Aspire1Inspire2/tdameritrade, license: Apache-2.0) ===
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# tdameritrade documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 12 22:07:11 2018.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sphinx_rtd_theme
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.coverage',
'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'tdameritrade'
copyright = '2018, Tim Paine'
author = 'Tim Paine'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = 'v0.0.8'
# The full version, including alpha/beta/rc tags.
release = 'v0.0.8'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'alabaster'
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {
'**': [
'relations.html', # needs 'show_related': True theme option to display
'searchbox.html',
]
}
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'tdameritradedoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'tdameritrade.tex', 'tdameritrade Documentation',
'Tim Paine', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'tdameritrade', 'tdameritrade Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'tdameritrade', 'tdameritrade Documentation',
author, 'tdameritrade', 'One line description of project.',
'Miscellaneous'),
]
| [
"t.paine154@gmail.com"
] | t.paine154@gmail.com |
5f8e18df7e9043614b5b0ecf4d2dfbb9fdbba54b | a0dda8be5892a390836e19bf04ea1d098e92cf58 | /叶常春视频例题/chap07/7-3-1-用while检测列表为空.py | 07d112954fbb6a310ba170b0ffbb0422ca5eee39 | [] | no_license | wmm98/homework1 | d9eb67c7491affd8c7e77458ceadaf0357ea5e6b | cd1f7f78e8dbd03ad72c7a0fdc4a8dc8404f5fe2 | refs/heads/master | 2020-04-14T19:22:21.733111 | 2019-01-08T14:09:58 | 2019-01-08T14:09:58 | 164,055,018 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 722 | py | #例7-3-1 用while语句检测列表是否为空
# 首先,创建一个待验证用户列表
# 和一个用于存储已验证用户的空列表
unconfirmed_users = ['alice', 'brian', 'candace']
confirmed_users = []
# 验证每个用户,直到没有未验证用户为止
# 将每个经过验证的列表都移到已验证用户列表中
while unconfirmed_users: #当unconfirmed_users列表不为空,则...
current_user = unconfirmed_users.pop()
print("正在验证用户: " + current_user.title()) #模仿验证用户动作
confirmed_users.append(current_user)
# 显示所有已验证的用户
print("\n以下用户验证通过:")
for confirmed_user in confirmed_users:
print(confirmed_user.title()) | [
"792545884@qq.com"
] | 792545884@qq.com |
34d2334c3931b648f94be9e0a25b6ca4b2b4d527 | 0042c37405a7865c50b7bfa19ca531ec36070318 | /20_selenium/test_incontrol/incontrol_picture.py | b06cfca4773c8ba95547cdb313b520192666e3fb | [] | no_license | lu-judong/untitled1 | b7d6e1ad86168673283917976ef0f5c2ad97d9e0 | aa158e7541bae96332633079d67b5ab19ea29e71 | refs/heads/master | 2022-05-23T18:55:45.272216 | 2020-04-28T09:55:38 | 2020-04-28T09:55:38 | 257,822,681 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,831 | py | from selenium import webdriver
from new_selenium.bin.login import Login
from new_selenium.bin.main import Method
import time
from new_selenium.tech_incontrol.incontrol_config import *
from config.config import path_dir
from config.log_config import logger
class Fault:
def log_file_out(self,msg):
fo = open(r'{}/usecase.txt'.format(path_dir), mode='a', encoding='utf-8')
fo.write(msg + '\r\n')
fo.close()
def picture(self,url,username,password):
driver = webdriver.Chrome()
Login().login(url,username, password, driver)
        self.log_file_out('----- Internal-control indicator chart clicks -----')
for i in contents:
try:
Method(driver).contains_xpath('click',i)
time.sleep(2)
                self.log_file_out('Clicked ' + i + ' successfully')
except Exception as e:
logger.error(e)
                self.log_file_out('Failed to click ' + i)
try:
Method(driver).switch_out()
Method(driver).switch_iframe(
driver.find_element_by_xpath("//iframe[contains(@src,'/darams/a/inControl')]"))
            self.log_file_out('Switched into the internal-control iframe successfully')
except:
            self.log_file_out('Failed to switch into the internal-control iframe')
driver.find_element_by_xpath("//a[contains(text(),\'{}\')]/../../td[7]/a[1]".format('111')).click()
time.sleep(2)
Method(driver).switch_out()
driver.find_element_by_class_name('layui-layer-btn0').click()
time.sleep(2)
Method(driver).switch_out()
Method(driver).switch_iframe(
driver.find_element_by_xpath("//iframe[contains(@src,'/darams/a/inControl')]"))
time.sleep(5)
driver.find_element_by_xpath("//a[contains(text(),\'{}\')]/../../td[7]/a[2]".format('111')).click()
Method(driver).switch_out()
incontrol_p = Method(driver).get_attr('css', "[class='layui-layer layui-layer-iframe']", 'times')
Method(driver).switch_iframe('layui-layer-iframe' + incontrol_p)
home_handles = driver.current_window_handle
time.sleep(2)
value_com = driver.execute_script('var aa = echarts.getInstanceByDom($("#myChart2")[0]);' \
'var option = aa.getOption();' \
'return [option.series[0].data[0].value[0]]')
js1 = 'myChart2.trigger("dblclick",{"data":{"path":"苏州华兴致远电子科技有限公司"},"componentType":"series","seriesType":"treemap"})'
try:
driver.execute_script(js1)
            self.log_file_out('Clicked the supplier internal-control chart successfully')
except:
            self.log_file_out('Failed to click the supplier internal-control chart')
time.sleep(2)
all_handle = driver.window_handles
for i in all_handle:
if i != home_handles:
driver.switch_to.window(i)
driver.find_element_by_xpath('/html/body/div[1]/div/div[2]/div[2]/div[1]/div[2]/button[1]').click()
time.sleep(10)
# for i in range(0,len(driver.find_elements_by_xpath('//*[@id="opFaultOrderTable"]/tbody/tr/td[12]'))):
# print(driver.find_elements_by_xpath('//*[@id="opFaultOrderTable"]/tbody/tr/td[12]')[i].text)
status_aa = driver.execute_script(
'return $("#opFaultOrderTable").bootstrapTable("getData").map(function(row){return $(row).attr("mainResponsibility")}).some(function(item){return item !="苏州华兴致远电子科技有限公司"})')
value_com1 = driver.execute_script('return $("#opFaultOrderTable").bootstrapTable("getData").map(function(row){return $(row).attr("mainResponsibility")}).length')
if status_aa is False and value_com[0] == value_com1:
            self.log_file_out('The main-responsible-unit count is correct')
else:
            self.log_file_out('The main-responsible-unit count is incorrect')
for i in all_handle:
if i != home_handles:
driver.close()
driver.switch_to.window(home_handles)
time.sleep(2)
# Method(driver).click('id','chart2')
# js1 = 'var aa = echarts.getInstanceByDom($("#chart2")[0]);' \
# 'var option = aa.getOption();' \
# 'param = {componentType:"series",name:option.yAxis[0].data[0],seriesName:"关联故障",seriesType:"bar",value:option.series[0].data[0]}; ' \
# 'skipTo(param, "RAILWAY_BUREAU");'
js2 = 'var title = "2017-07责任部室故障统计";' \
'var dateString = title.substring(0,title.length-8);' \
'if (dateString.length > 7){' \
'window.open ("/darams/a/fault/opFaultOrder/qList?confModelId=ed42931a637744a0a11141ccaccfd40b000 &chartType=INSIDE&depart=" + "转向架开发部");}else{window.open("/darams/a/fault/opFaultOrder/qList?confModelId=ed42931a637744a0a11141ccaccfd40b000&chartType=INSIDE&depart=" + "转向架开发部" + "&octMonthFrom=" + dateString + "&octMonthTo=" + dateString);}'
try:
driver.execute_script(js2)
            self.log_file_out('Clicked the responsible-department chart successfully')
except:
            self.log_file_out('Failed to click the responsible-department chart')
time.sleep(2)
all_handle1 = driver.window_handles
for i in all_handle1:
if i != home_handles:
driver.switch_to.window(i)
status_bb = driver.execute_script(
'return $("#opFaultOrderTable").bootstrapTable("getData").map(function(row){return $(row).attr("mainResponsibility")}).some(function(item){return item !="技术中心"})')
if status_bb is False:
                    self.log_file_out('Responsible-department verification is correct')
else:
                    self.log_file_out('Responsible-department verification is incorrect')
url = 'http://192.168.1.115:8080/darams/a?login'
Fault().picture(url, 'test', '1234')
| [
"ljd_python@163.com"
] | ljd_python@163.com |
d5acc9da01fcf37cdad10fb33e9391c39115bb87 | b68c92fe89b701297f76054b0f284df5466eb698 | /Sorting/InsertionSort.py | c42a74592fdfb36e35685e6e8cbc61a47c65daaa | [] | no_license | makrandp/python-practice | 32381a8c589f9b499ab6bde8184a847b066112f8 | 60218fd79248bf8138158811e6e1b03261fb38fa | refs/heads/master | 2023-03-27T18:11:56.066535 | 2021-03-28T04:02:00 | 2021-03-28T04:02:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 690 | py | # Insertion sort is an algorithm that functions by iterating from 0 to n where n is the size of the input dataset
# For every iteration of i from 0 to n, we then swap from i to 0 given a situation where the variable is swappable
## Best ## Avrg ## Wrst ## Spce ##
## n # n^2 # n^2 # 1 ##
# Insertion sort is still bad, but it is stable and has a best case time complexity of n and a space complexity of 1
from typing import List
def insertionSort(arr: List[int]):
for i in range(1,len(arr)):
for j in range(i,0,-1):
if arr[j] < arr[j - 1]:
arr[j], arr[j-1] = arr[j-1], arr[j]
a = [5,2,7,9,0,1,3,4,2,15,25,35]
insertionSort(a)
print(a) | [
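For comparison — a variant sketch, not part of the original file — the inner loop can shift larger elements rightward instead of swapping adjacent pairs, doing one write per comparison rather than three (the function name is chosen here for illustration):

```python
from typing import List

def insertion_sort_shift(arr: List[int]) -> None:
    # In-place insertion sort: shift larger elements right, then drop the key in.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # single write instead of a full swap
            j -= 1
        arr[j + 1] = key

b = [5, 2, 7, 9, 0, 1, 3, 4, 2, 15, 25, 35]
insertion_sort_shift(b)
print(b)  # -> [0, 1, 2, 2, 3, 4, 5, 7, 9, 15, 25, 35]
```

Same O(n^2) worst case and stability as the swap version, just fewer memory writes.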
"awalexweber99@gmail.com"
] | awalexweber99@gmail.com |
f74f921086196d7e6c01a153fed48f3f0806ffdb | d1ff466d7a230409020ebc88aa2f2ffac8c45c15 | /cournot/pages.py | 68eecd9c2f47d2ed52156f48cc24aadc29827df9 | [
"MIT"
] | permissive | Muhammadahmad06/oTree | 224ef99a2ca55c8f2d7e67fec944b8efd0b21884 | 789fb2c2681aa5fbb8385f2f65a633e02592b225 | refs/heads/master | 2020-08-25T19:12:36.929702 | 2019-10-03T09:29:54 | 2019-10-03T09:29:54 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 641 | py | from ._builtin import Page, WaitPage
from otree.api import Currency as c, currency_range
from .models import Constants
class Introduction(Page):
pass
class Decide(Page):
form_model = 'player'
form_fields = ['units']
class ResultsWaitPage(WaitPage):
body_text = "Waiting for the other participant to decide."
def after_all_players_arrive(self):
self.group.set_payoffs()
class Results(Page):
def vars_for_template(self):
return dict(
other_player_units=self.player.other_player().units,
)
page_sequence = [
Introduction,
Decide,
ResultsWaitPage,
Results
]
| [
"chris@otree.org"
] | chris@otree.org |
2a321a878b13ab2bbf4750762e7d6976c44ab082 | 98b4aeadab444eaf6f0d5b469c199e6d24a52f7f | /step14/1904-2.py | b257545f762791c40187c81b554b1391356d7d8d | [] | no_license | kwr0113/BOJ_Python | 7a9dc050bb3bb42ae2b03671c5d6fa76cc0d6d99 | 27bafdaafc44115f55f0b058829cb36b8c79469a | refs/heads/master | 2023-06-10T23:22:20.639613 | 2021-06-25T07:25:53 | 2021-06-25T07:25:53 | 328,057,859 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 104 | py | # 1904-2.py
n = int(input())
a, b = 1, 2
for _ in range(n-1):
a, b = b, (a + b) % 15746
print(a)
| [
"kwr0113@gmail.com"
] | kwr0113@gmail.com |
cff5b0ce9bad60d2be0a2346e00bfc63ed894ca8 | 6eea60bcbf206dafc5fe578b996267ce2bc9ae6e | /interviewbit/Magician_and_Chocolates.py | 5949fe44831449b74974a2e3381a8c91ec4d1949 | [] | no_license | SueAli/cs-problems | 491fef79f3e352d7712cd622d3b80ec15d38642b | b321116d135f868d88bd849b5ea7172feb74fb4c | refs/heads/master | 2023-08-31T10:46:30.374394 | 2023-08-24T20:14:04 | 2023-08-24T20:14:04 | 95,930,918 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 725 | py | import heapq
import math
class Solution:
# @param A : integer
# @param B : list of integers
# @return an integer
    # Time complexity: O(n) to build the heap, plus O(k * log n) for the k draws
    # Space complexity: if we cannot modify the input array in place, O(n) extra
    # memory is required to build the heap
def nchoc(self, A, B):
r =0
m = ((10**9)+7)
h = [ item * -1 for item in B] # O(n)
heapq.heapify(h) # O(n)
for i in range(0,A):
curr = h[0] * -1
r = r + curr
heapq.heapreplace(h,-1 *int(math.floor(curr/2.))) #(log n)
return int(r) % m
s = Solution()
print s.nchoc(10, [ 2147483647, 2000000014, 2147483647 ])
#284628164
| [
"souad.hassanien@gmail.com"
] | souad.hassanien@gmail.com |
19a0fd4d3939243857e88c8327c49561837841ab | d05a59feee839a4af352b7ed2fd6cf10a288a3cb | /xlsxwriter/test/comparison/test_textbox29.py | 46aabfa9a927938c6129ee8a9c355269cb61109c | [
"BSD-2-Clause-Views"
] | permissive | elessarelfstone/XlsxWriter | 0d958afd593643f990373bd4d8a32bafc0966534 | bb7b7881c7a93c89d6eaac25f12dda08d58d3046 | refs/heads/master | 2020-09-24T06:17:20.840848 | 2019-11-24T23:43:01 | 2019-11-24T23:43:01 | 225,685,272 | 1 | 0 | NOASSERTION | 2019-12-03T18:09:06 | 2019-12-03T18:09:05 | null | UTF-8 | Python | false | false | 798 | py | ###############################################################################
#
# Tests for XlsxWriter.
#
# Copyright (c), 2013-2019, John McNamara, jmcnamara@cpan.org
#
from ..excel_comparsion_test import ExcelComparisonTest
from ...workbook import Workbook
class TestCompareXLSXFiles(ExcelComparisonTest):
"""
Test file created by XlsxWriter against a file created by Excel.
"""
def setUp(self):
self.set_filename('textbox29.xlsx')
def test_create_file(self):
"""Test the creation of a simple XlsxWriter file with textbox(s)."""
workbook = Workbook(self.got_filename)
worksheet = workbook.add_worksheet()
worksheet.insert_textbox('E9', None, {'textlink': '=$A$1'})
workbook.close()
self.assertExcelEqual()
| [
"jmcnamara@cpan.org"
] | jmcnamara@cpan.org |
bd23ea04dc04e328739dd86a97a65296c5e7aa4e | 3d91c09bca4e68bf7a527cb40ed70ac208495b93 | /library/templatetags/get_lended.py | dc9e1d23ec96ed29f427b8fd2cf57aa40028802e | [] | no_license | Kaik-a/OCR-Projet13 | 02e9d8c9228d6d7a09013b4ab2570304c01dfc28 | ac339002279397f43316e33a869cce797b5d92b2 | refs/heads/main | 2023-02-17T09:39:11.184120 | 2021-01-11T15:50:58 | 2021-01-11T15:50:58 | 311,875,691 | 0 | 0 | null | 2021-01-11T15:50:59 | 2020-11-11T05:51:34 | CSS | UTF-8 | Python | false | false | 590 | py | """Get lended games"""
from typing import Union
from django import template
from django.core.exceptions import ObjectDoesNotExist
from library.models import LendedGame
register = template.Library()
@register.filter(name="get_lended")
def get_lended(owned_game_id) -> Union[LendedGame, bool]:
"""
Get lended games.
:param owned_game_id: id of game owned
:rtype: Union[LendedGame, bool]
"""
try:
lended_game = LendedGame.objects.get(owned_game=owned_game_id, returned=False)
return lended_game
except ObjectDoesNotExist:
return False
| [
"mehdi.bichari@outscale.com"
] | mehdi.bichari@outscale.com |
a73af9fd8e5bfce2037b80001cd6b910eba5b8f4 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03272/s726182121.py | b242fec41d8d3b81d3bed6415bd6e16cc9349c97 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 59 | py | n ,i = map(int, input().split())
res = n - i + 1
print(res) | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
87a207a046ec88b78484e7b0c816fccf6e4be3bc | ea05a89f4df49323eb630960c31bfbf3eb812e48 | /events/migrations/0001_initial.py | 7bae2e4599ed9154b47d3475390f388f1749a0db | [] | no_license | psteichen/aperta-cms-lts | 3dff06fbf17e4a8c4a124c826b36f083451d613e | cf46e82cd71e7acddb900e558bc155cdd7999d9c | refs/heads/master | 2021-01-20T00:08:06.012978 | 2017-08-11T13:58:10 | 2017-08-11T13:58:10 | 89,083,140 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,654 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2017-04-23 16:28
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import events.models
class Migration(migrations.Migration):
initial = True
dependencies = [
('locations', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100, verbose_name='Titre')),
('when', models.DateField(verbose_name='Date')),
('time', models.TimeField(verbose_name='Heure de début')),
('deadline', models.DateTimeField(verbose_name='Deadline')),
('location', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='locations.Location', verbose_name='Lieu')),
],
),
migrations.CreateModel(
name='Invitation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('message', models.CharField(blank=True, max_length=5000, null=True)),
('attachement', models.FileField(blank=True, null=True, upload_to=events.models.rename_attach, verbose_name='Annexe(s)')),
('sent', models.DateTimeField(blank=True, null=True)),
('event', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='events.Event')),
],
),
]
| [
"pst@libre.lu"
] | pst@libre.lu |
6ccd0fd80e07bc2409e0b4c4d5708a7161fb5fa9 | 6b1b506139088aa30de9fd65cff9e3b6a3a36874 | /sofia_redux/instruments/hawc/steps/__init__.py | dac31389261c9c2edcb7bfdcbb30788994698992 | [
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | SOFIA-USRA/sofia_redux | df2e6ad402b50eb014b574ea561734334d70f84d | 493700340cd34d5f319af6f3a562a82135bb30dd | refs/heads/main | 2023-08-17T11:11:50.559987 | 2023-08-13T19:52:37 | 2023-08-13T19:52:37 | 311,773,000 | 12 | 2 | null | null | null | null | UTF-8 | Python | false | false | 3,262 | py | # Licensed under a 3-clause BSD style license - see LICENSE.rst
__all__ = ['StepBinPixels', 'StepBgSubtract', 'StepCalibrate', 'StepCheckhead',
'StepCombine', 'StepDemodulate', 'StepDmdCut',
'StepDmdPlot', 'StepFlat', 'StepFluxjump', 'StepFocus',
'StepImgMap', 'StepIP', 'StepLabChop', 'StepLabPolPlots',
'StepMerge', 'StepMkflat', 'StepNodPolSub',
'StepNoiseFFT', 'StepNoisePlots', 'StepOpacity',
'StepPolDip', 'StepPolMap', 'StepPolVec', 'StepPrepare',
'StepRegion', 'StepRotate', 'StepScanMap',
'StepScanMapFlat', 'StepScanMapFocus', 'StepScanMapPol',
'StepScanStokes', 'StepShift',
'StepSkycal', 'StepSkydip', 'StepSplit', 'StepStdPhotCal',
'StepStokes', 'StepWcs', 'StepZeroLevel']
from sofia_redux.instruments.hawc.steps.stepbinpixels import *
from sofia_redux.instruments.hawc.steps.stepbgsubtract import *
from sofia_redux.instruments.hawc.steps.stepcalibrate import *
from sofia_redux.instruments.hawc.steps.stepcheckhead import *
from sofia_redux.instruments.hawc.steps.stepcombine import *
from sofia_redux.instruments.hawc.steps.stepdemodulate import *
from sofia_redux.instruments.hawc.steps.stepdmdcut import *
from sofia_redux.instruments.hawc.steps.stepdmdplot import *
from sofia_redux.instruments.hawc.steps.stepflat import *
from sofia_redux.instruments.hawc.steps.stepfluxjump import *
from sofia_redux.instruments.hawc.steps.stepfocus import *
from sofia_redux.instruments.hawc.steps.stepimgmap import *
from sofia_redux.instruments.hawc.steps.stepip import *
from sofia_redux.instruments.hawc.steps.steplabchop import *
from sofia_redux.instruments.hawc.steps.steplabpolplots import *
from sofia_redux.instruments.hawc.steps.stepmerge import *
from sofia_redux.instruments.hawc.steps.stepmkflat import *
from sofia_redux.instruments.hawc.steps.stepnodpolsub import *
from sofia_redux.instruments.hawc.steps.stepnoisefft import *
from sofia_redux.instruments.hawc.steps.stepnoiseplots import *
from sofia_redux.instruments.hawc.steps.stepopacity import *
from sofia_redux.instruments.hawc.steps.steppoldip import *
from sofia_redux.instruments.hawc.steps.steppolmap import *
from sofia_redux.instruments.hawc.steps.steppolvec import *
from sofia_redux.instruments.hawc.steps.stepprepare import *
from sofia_redux.instruments.hawc.steps.stepregion import *
from sofia_redux.instruments.hawc.steps.steprotate import *
from sofia_redux.instruments.hawc.steps.stepscanstokes import *
from sofia_redux.instruments.hawc.steps.stepscanmap import *
from sofia_redux.instruments.hawc.steps.stepscanmapflat import *
from sofia_redux.instruments.hawc.steps.stepscanmapfocus import *
from sofia_redux.instruments.hawc.steps.stepscanmappol import *
from sofia_redux.instruments.hawc.steps.stepshift import *
from sofia_redux.instruments.hawc.steps.stepskycal import *
from sofia_redux.instruments.hawc.steps.stepskydip import *
from sofia_redux.instruments.hawc.steps.stepsplit import *
from sofia_redux.instruments.hawc.steps.stepstdphotcal import *
from sofia_redux.instruments.hawc.steps.stepstokes import *
from sofia_redux.instruments.hawc.steps.stepwcs import *
from sofia_redux.instruments.hawc.steps.stepzerolevel import *
| [
"melanie.j.clarke@nasa.gov"
] | melanie.j.clarke@nasa.gov |
5aa75852540563db657a5c4fe15b75e585fcdfa2 | 37dd16e4e48511e5dab789c57d97ab47ccffd561 | /src/apps/domain/engagement_assignment/admin.py | b5e547f63e46b70a8c0c61b622b47c81dc3b62fd | [] | no_license | willow/scone-api | c9473a043996639024ae028bb3d7bf420eb3d75b | c786915bc0535cb0ed78726afa4ee3c0772a8c0e | refs/heads/production | 2016-09-05T18:43:22.953283 | 2014-08-18T23:16:47 | 2014-08-18T23:18:23 | 18,448,114 | 1 | 0 | null | 2014-08-08T16:40:35 | 2014-04-04T18:21:18 | Python | UTF-8 | Python | false | false | 769 | py | from django.contrib import admin
from src.apps.domain.engagement_assignment.models import AssignedProspect
class AssignedProspectAdmin(admin.ModelAdmin):
actions = None
def has_delete_permission(self, request, obj=None):
return False
def has_add_permission(self, request):
return False
# Allow viewing objects but not actually changing them
# https://gist.github.com/aaugustin/1388243
def has_change_permission(self, request, obj=None):
if request.method not in ('GET', 'HEAD'):
return False
return super().has_change_permission(request, obj)
def get_readonly_fields(self, request, obj=None):
return (self.fields or [f.name for f in self.model._meta.fields])
admin.site.register(AssignedProspect, AssignedProspectAdmin)
| [
"scoarescoare@gmail.com"
] | scoarescoare@gmail.com |
122fdbad2d4235c0448a9c191ffe41de3a7b7478 | fa064f5ef48b29dcf2e90d9e4e30199a32c5e2af | /case/theater/myparser/legacy.py | 97480ea593d418b0cf66e9aafc17bc502f89c3bf | [] | no_license | gsrr/Crawler | ed05971bf6be31f6dae32d6e82bbae9cb93a8d02 | 1e109eeaaf518e699e591fa8e72909e6f965be0c | refs/heads/master | 2020-04-07T06:40:08.492069 | 2017-06-13T03:42:04 | 2017-06-13T03:42:04 | 44,025,883 | 27 | 5 | null | null | null | null | UTF-8 | Python | false | false | 5,099 | py | # -*- coding: utf-8 -*-
import mylib
import re
import urlparse
import urllib
import parseplatform
import copy
def getContent(data, item):
if item == "price":
return data.replace("<span class='ticket_content'></span>", ",").replace("<br />", "\n")
return data
class Parser:
def __init__(self, paras):
self.url = paras['url']
self.queue = []
def extractTitle(self, data):
searchObj = re.search(r'alt="(.*?)"', data , re.M|re.I|re.S)
return searchObj.group(1)
def extractImage(self, data):
searchObj = re.search(r'src="(.*?)"', data , re.M|re.I|re.S)
return searchObj.group(1)
def extractURL(self, data):
searchObj = re.search(r'href="(.*?)"', data , re.M|re.I|re.S)
return urlparse.urljoin(self.url, searchObj.group(1))
def download(self, url_content, url_image):
file_id = url_content.split("/")[-1]
with open("image/%s"%file_id, "w") as fw:
fr = urllib.urlopen(url_image)
data = fr.read()
fw.write(data)
return file_id
def extractPoster(self, data):
contents = []
items = re.findall(r'<a class="poster"(.*?)</a>', data , re.M|re.I|re.S)
for item in items:
content = {}
content['url_content'] = self.extractURL(item)
content['title'] = self.extractTitle(item)
content['url_image'] = self.extractImage(item) #download image
content['image_id'] = self.download(content['url_content'], content['url_image'])
contents.append(copy.deepcopy(content))
return contents
def extractPrice(self, data, contents):
data_dic = {
"票價" : "price",
"場地" : "place",
"開始" : "start_time",
}
items = re.findall(r'<th>(.*?)</th><td>(.*?)</td>', data , re.M|re.I|re.S)
cnt = 0
for item in items:
if item[0] in data_dic.keys():
content = contents[cnt/3]
content[data_dic[item[0]]] = getContent(item[1], data_dic[item[0]])
cnt += 1
else:
pass
def extractDate(self, data, contents):
items = re.findall(r'<div class="m">(.*?)</div>(.*?)<div class="d">(.*?)</div>(.*?)<div class="week">(.*?)</div>', data , re.M|re.I|re.S)
cnt = 0
for item in items:
content = contents[cnt]
content["start_date"] = item[0] + "/" + item[2]
cnt += 1
def _parse_content(self, url, content):
print url
data = mylib.myurl(url)
data_ret = ""
place = 0
price = 0
start_date = 0
start_time = 0
for line in data:
if "<title>" in line:
searchObj = re.search(r'<title>(.*?)</title>', line , re.M|re.I|re.S)
if searchObj:
content['title'] = searchObj.group(1)
if "alignnone" in line:
searchObj = re.search(r'src="(.*?)"', line , re.M|re.I|re.S)
if searchObj:
content['url_image'] = searchObj.group(1)
content['image_id'] = searchObj.group(1).split("/")[-1]
if place == 1 and "</p>" in line:
content['place'] = line.strip().rstrip("</p>")
place = 0
if price == 1 and '</p>' in line:
content['price'] = line.strip().rstrip("</p>")
price = 0
if start_date == 1 and '</p>' in line:
content['start_date'] = line.strip().rstrip("</p>")
start_date = 0
if start_time == 1 and '</p>' in line:
content['start_time'] = line.strip().rstrip("</p>")
start_time = 0
if "演出場地" in line:
place = 1
if "演出票價" in line:
price = 1
if "演出日期" in line:
start_date = 1
if "演出開始" in line:
start_time = 1
def parse(self):
data = mylib.myurl(self.url)
data_ret = ""
contents = []
for line in data:
if "galleries-slide-sub-title1" in line:
concert = {}
concert['url_content'] = self.extractURL(line)
self._parse_content(concert['url_content'], concert)
contents.append(copy.deepcopy(concert))
self.write(contents)
def write(self, data):
with open("result/legacy.result", "a") as fw:
for content in data:
fw.write("--start--\n")
for key in content.keys():
if key == "price":
fw.write(key + "=" + content[key].replace("\n", "::") + "\n")
else:
fw.write(key + "=" + content[key] + "\n")
fw.write("--end--\n\n")
def start(self):
return self.parse()
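The poster-extraction step above boils down to nested regex captures plus a URL join; here is a minimal, self-contained Python 3 sketch of that logic against a made-up HTML snippet (the markup and base URL are invented for the demo):

```python
import re
from urllib.parse import urljoin  # Python 3 counterpart of the urlparse call above

html = '<a class="poster" href="/show/42"><img alt="Demo Concert" src="/img/42.jpg"/></a>'
base = 'http://example.com/calendar'

posters = []
for block in re.findall(r'<a class="poster"(.*?)</a>', html, re.M | re.I | re.S):
    posters.append({
        'url_content': urljoin(base, re.search(r'href="(.*?)"', block).group(1)),
        'title': re.search(r'alt="(.*?)"', block).group(1),
        'url_image': re.search(r'src="(.*?)"', block).group(1),
    })

print(posters[0]['url_content'])  # -> http://example.com/show/42
```

Regex-over-HTML works for a fixed page layout like this, but an HTML parser would be more robust if the site's markup changes.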
| [
"jerrycheng1128@gmail.com"
] | jerrycheng1128@gmail.com |
b0e8975a3841436d1056fbc76ca39921dc2e3f5b | ad13583673551857615498b9605d9dcab63bb2c3 | /output/instances/nistData/atomic/token/Schema+Instance/NISTXML-SV-IV-atomic-token-enumeration-3-2.py | 617a90346e1a437b02be47eb86f0aa348ed0dd9c | [
"MIT"
] | permissive | tefra/xsdata-w3c-tests | 397180205a735b06170aa188f1f39451d2089815 | 081d0908382a0e0b29c8ee9caca6f1c0e36dd6db | refs/heads/main | 2023-08-03T04:25:37.841917 | 2023-07-29T17:10:13 | 2023-07-30T12:11:13 | 239,622,251 | 2 | 0 | MIT | 2023-07-25T14:19:04 | 2020-02-10T21:59:47 | Python | UTF-8 | Python | false | false | 511 | py | from output.models.nist_data.atomic.token.schema_instance.nistschema_sv_iv_atomic_token_enumeration_3_xsd.nistschema_sv_iv_atomic_token_enumeration_3 import NistschemaSvIvAtomicTokenEnumeration3
from output.models.nist_data.atomic.token.schema_instance.nistschema_sv_iv_atomic_token_enumeration_3_xsd.nistschema_sv_iv_atomic_token_enumeration_3 import NistschemaSvIvAtomicTokenEnumeration3Type
obj = NistschemaSvIvAtomicTokenEnumeration3(
value=NistschemaSvIvAtomicTokenEnumeration3Type.STANDARDIZATION
)
| [
"tsoulloftas@gmail.com"
] | tsoulloftas@gmail.com |
d61c6018bc90c2c057e06d9f8d891a4a72b7b642 | 77f63e447ef93bd77ce4315b6d4220da86abffdf | /setup.py | cf8f789dcd9fdad981b8de82177b1b99dc6dbd2a | [
"WTFPL"
] | permissive | wsxxhx/TorchSUL | 8d1625989b5f5ef5aeb879e01019ddf850848961 | 46ee6aab4367d8a02ddb6de66d24455dbfa465c4 | refs/heads/master | 2023-05-25T16:52:10.321801 | 2021-06-12T09:34:56 | 2021-06-12T09:34:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 657 | py | from setuptools import setup, find_packages
with open("README.md", "r") as fh:
long_description = fh.read()
setup_args = dict(
name='TorchSUL',
version='0.1.26',
description='Simple but useful layers for Pytorch',
packages=find_packages(),
long_description=long_description,
long_description_content_type="text/markdown",
author='Cheng Yu',
author_email='chengyu996@gmail.com',
url='https://github.com/ddddwee1/TorchSUL',
install_requires = [
'tqdm',
'progressbar2',
'opencv-python',
'matplotlib',
]
)
if __name__ == '__main__':
setup(**setup_args)
| [
"cy960823@outlook.com"
] | cy960823@outlook.com |
11abbb227113c1d595e49225fb3b0db9a9fcde37 | 1d928c3f90d4a0a9a3919a804597aa0a4aab19a3 | /python/spaCy/2016/12/setup.py | 2a1d56a5eb1afee77dd993bc1b23ef7ca3b18273 | [
"MIT"
] | permissive | rosoareslv/SED99 | d8b2ff5811e7f0ffc59be066a5a0349a92cbb845 | a062c118f12b93172e31e8ca115ce3f871b64461 | refs/heads/main | 2023-02-22T21:59:02.703005 | 2021-01-28T19:40:51 | 2021-01-28T19:40:51 | 306,497,459 | 1 | 1 | null | 2020-11-24T20:56:18 | 2020-10-23T01:18:07 | null | UTF-8 | Python | false | false | 8,194 | py | #!/usr/bin/env python
from __future__ import print_function
import io
import os
import subprocess
import sys
import contextlib
from distutils.command.build_ext import build_ext
from distutils.sysconfig import get_python_inc
from distutils import ccompiler, msvccompiler
try:
from setuptools import Extension, setup
except ImportError:
from distutils.core import Extension, setup
PACKAGE_DATA = {'': ['*.pyx', '*.pxd', '*.txt', '*.tokens']}
PACKAGES = [
'spacy',
'spacy.tokens',
'spacy.en',
'spacy.de',
'spacy.zh',
'spacy.es',
'spacy.fr',
'spacy.it',
'spacy.hu',
'spacy.pt',
'spacy.nl',
'spacy.sv',
'spacy.language_data',
'spacy.serialize',
'spacy.syntax',
'spacy.munge',
'spacy.tests',
'spacy.tests.matcher',
'spacy.tests.morphology',
'spacy.tests.munge',
'spacy.tests.parser',
'spacy.tests.print',
'spacy.tests.serialize',
'spacy.tests.spans',
'spacy.tests.tagger',
'spacy.tests.tokenizer',
'spacy.tests.tokens',
'spacy.tests.vectors',
'spacy.tests.vocab',
'spacy.tests.website']
MOD_NAMES = [
'spacy.parts_of_speech',
'spacy.strings',
'spacy.lexeme',
'spacy.vocab',
'spacy.attrs',
'spacy.morphology',
'spacy.tagger',
'spacy.pipeline',
'spacy.syntax.stateclass',
'spacy.syntax._state',
'spacy.tokenizer',
'spacy.syntax.parser',
'spacy.syntax.nonproj',
'spacy.syntax.transition_system',
'spacy.syntax.arc_eager',
'spacy.syntax._parse_features',
'spacy.gold',
'spacy.orth',
'spacy.tokens.doc',
'spacy.tokens.span',
'spacy.tokens.token',
'spacy.serialize.packer',
'spacy.serialize.huffman',
'spacy.serialize.bits',
'spacy.cfile',
'spacy.matcher',
'spacy.syntax.ner',
'spacy.symbols',
'spacy.syntax.iterators']
# TODO: This is missing a lot of modules. Does it matter?
COMPILE_OPTIONS = {
'msvc': ['/Ox', '/EHsc'],
'mingw32' : ['-O3', '-Wno-strict-prototypes', '-Wno-unused-function'],
'other' : ['-O3', '-Wno-strict-prototypes', '-Wno-unused-function']
}
LINK_OPTIONS = {
'msvc' : [],
'mingw32': [],
'other' : []
}
# I don't understand this very well yet. See Issue #267
# Fingers crossed!
#if os.environ.get('USE_OPENMP') == '1':
# compile_options['msvc'].append('/openmp')
#
#
#if not sys.platform.startswith('darwin'):
# compile_options['other'].append('-fopenmp')
# link_options['other'].append('-fopenmp')
#
USE_OPENMP_DEFAULT = '1' if sys.platform != 'darwin' else None
if os.environ.get('USE_OPENMP', USE_OPENMP_DEFAULT) == '1':
if sys.platform == 'darwin':
COMPILE_OPTIONS['other'].append('-fopenmp')
LINK_OPTIONS['other'].append('-fopenmp')
PACKAGE_DATA['spacy.platform.darwin.lib'] = ['*.dylib']
PACKAGES.append('spacy.platform.darwin.lib')
elif sys.platform == 'win32':
COMPILE_OPTIONS['msvc'].append('/openmp')
else:
COMPILE_OPTIONS['other'].append('-fopenmp')
LINK_OPTIONS['other'].append('-fopenmp')
# By overriding build_extensions we can access the actual compiler that will be used, which is only known after finalize_options has run.
# http://stackoverflow.com/questions/724664/python-distutils-how-to-get-a-compiler-that-is-going-to-be-used
class build_ext_options:
def build_options(self):
for e in self.extensions:
e.extra_compile_args += COMPILE_OPTIONS.get(
self.compiler.compiler_type, COMPILE_OPTIONS['other'])
for e in self.extensions:
e.extra_link_args += LINK_OPTIONS.get(
self.compiler.compiler_type, LINK_OPTIONS['other'])
class build_ext_subclass(build_ext, build_ext_options):
def build_extensions(self):
build_ext_options.build_options(self)
build_ext.build_extensions(self)
def generate_cython(root, source):
print('Cythonizing sources')
p = subprocess.call([sys.executable,
os.path.join(root, 'bin', 'cythonize.py'),
source])
if p != 0:
raise RuntimeError('Running cythonize failed')
def is_source_release(path):
return os.path.exists(os.path.join(path, 'PKG-INFO'))
def clean(path):
for name in MOD_NAMES:
name = name.replace('.', '/')
for ext in ['.so', '.html', '.cpp', '.c']:
file_path = os.path.join(path, name + ext)
if os.path.exists(file_path):
os.unlink(file_path)
@contextlib.contextmanager
def chdir(new_dir):
old_dir = os.getcwd()
try:
os.chdir(new_dir)
sys.path.insert(0, new_dir)
yield
finally:
del sys.path[0]
os.chdir(old_dir)
def setup_package():
root = os.path.abspath(os.path.dirname(__file__))
if len(sys.argv) > 1 and sys.argv[1] == 'clean':
return clean(root)
with chdir(root):
with io.open(os.path.join(root, 'spacy', 'about.py'), encoding='utf8') as f:
about = {}
exec(f.read(), about)
with io.open(os.path.join(root, 'README.rst'), encoding='utf8') as f:
readme = f.read()
include_dirs = [
get_python_inc(plat_specific=True),
os.path.join(root, 'include')]
if (ccompiler.new_compiler().compiler_type == 'msvc'
and msvccompiler.get_build_version() == 9):
include_dirs.append(os.path.join(root, 'include', 'msvc9'))
ext_modules = []
for mod_name in MOD_NAMES:
mod_path = mod_name.replace('.', '/') + '.cpp'
extra_link_args = []
# ???
# Imported from patch from @mikepb
# See Issue #267. Running blind here...
if sys.platform == 'darwin':
dylib_path = ['..' for _ in range(mod_name.count('.'))]
dylib_path = '/'.join(dylib_path)
dylib_path = '@loader_path/%s/spacy/platform/darwin/lib' % dylib_path
extra_link_args.append('-Wl,-rpath,%s' % dylib_path)
ext_modules.append(
Extension(mod_name, [mod_path],
language='c++', include_dirs=include_dirs,
extra_link_args=extra_link_args))
if not is_source_release(root):
generate_cython(root, 'spacy')
setup(
name=about['__title__'],
zip_safe=False,
packages=PACKAGES,
package_data=PACKAGE_DATA,
description=about['__summary__'],
long_description=readme,
author=about['__author__'],
author_email=about['__email__'],
version=about['__version__'],
url=about['__uri__'],
license=about['__license__'],
ext_modules=ext_modules,
install_requires=[
'numpy>=1.7',
'murmurhash>=0.26,<0.27',
'cymem>=1.30,<1.32',
'preshed>=0.46.0,<0.47.0',
'thinc>=5.0.0,<5.1.0',
'plac',
'six',
'cloudpickle',
'pathlib',
'sputnik>=0.9.2,<0.10.0',
'ujson>=1.35'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Operating System :: POSIX :: Linux',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Programming Language :: Cython',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Topic :: Scientific/Engineering'],
cmdclass = {
'build_ext': build_ext_subclass},
)
if __name__ == '__main__':
setup_package()
| [
"rodrigosoaresilva@gmail.com"
] | rodrigosoaresilva@gmail.com |
f9018da09e837ffe4443e04232a0b7cf548b49d9 | 472905e7a5f26465af4eee0fcfaa592de52eed17 | /server/apps/memories/migrations/0002_auto_20191201_1010.py | 824245ab101e92788bc10412075f16655229495e | [] | no_license | backpropogation/memories | 7c72bfeca8a4ab07a2c19960c5af91ed4da24304 | 0da75bcffccbe5d3f4e2d5b30ee3f224f70aa81b | refs/heads/master | 2022-12-24T08:27:24.838744 | 2019-12-01T16:14:04 | 2019-12-01T16:14:04 | 225,188,993 | 0 | 0 | null | 2019-12-01T16:06:24 | 2019-12-01T16:03:37 | JavaScript | UTF-8 | Python | false | false | 756 | py | # Generated by Django 2.2.1 on 2019-12-01 10:10
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('memories', '0001_initial'),
]
operations = [
migrations.AlterModelOptions(
name='memory',
options={'ordering': ('-posted_at',)},
),
migrations.AlterField(
model_name='memory',
name='latitude',
field=models.DecimalField(decimal_places=20, max_digits=22, verbose_name='Latitude'),
),
migrations.AlterField(
model_name='memory',
name='longitude',
field=models.DecimalField(decimal_places=20, max_digits=23, verbose_name='Longitude'),
),
]
| [
"jack.moriarty@mail.ru"
] | jack.moriarty@mail.ru |
cca39d38965f0f35ece283739d54c15e3e72d4d9 | 563274d0bfb720b2d8c4dfe55ce0352928e0fa66 | /TestProject/src/sqlalchemy-default/lib/sqlalchemy/dialects/oracle/cx_oracle.py | bee7308005ea28044d35e31dc0bed1e0b5d8adfd | [
"MIT"
] | permissive | wangzhengbo1204/Python | 30488455637ad139abc2f173a0a595ecaf28bcdc | 63f7488d9df9caf1abec2cab7c59cf5d6358b4d0 | refs/heads/master | 2020-05-19T19:48:27.092764 | 2013-05-11T06:49:41 | 2013-05-11T06:49:41 | 6,544,357 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 31,002 | py | # oracle/cx_oracle.py
# Copyright (C) 2005-2012 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
.. dialect:: oracle+cx_oracle
:name: cx-Oracle
:dbapi: cx_oracle
:connectstring: oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...]
:url: http://cx-oracle.sourceforge.net/
Additional Connect Arguments
----------------------------
When connecting with ``dbname`` present, the host, port, and dbname tokens are
converted to a TNS name using
the cx_oracle :func:`makedsn()` function. Otherwise, the host token is taken
directly as a TNS name.
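As an illustration of the two forms above, here is a rough sketch (the
credentials are placeholders, and the simplified ``make_dsn`` below only
approximates what ``cx_Oracle.makedsn()`` actually emits):

```python
# Placeholder URLs for the two connection forms described above.
tns_url = "oracle+cx_oracle://scott:tiger@mytnsname"            # host token used as a TNS name
full_url = "oracle+cx_oracle://scott:tiger@dbhost:1521/orcl"    # host/port/dbname run through makedsn()

def make_dsn(host, port, sid):
    # Simplified approximation of the TNS descriptor that
    # cx_Oracle.makedsn() builds; the real output may differ.
    return ("(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%d))"
            "(CONNECT_DATA=(SID=%s)))" % (host, port, sid))

print(make_dsn("dbhost", 1521, "orcl"))
```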
Additional arguments which may be specified either as query string arguments
on the URL, or as keyword arguments to :func:`~sqlalchemy.create_engine()` are:
* allow_twophase - enable two-phase transactions. Defaults to ``True``.
* arraysize - set the cx_oracle.arraysize value on cursors, in SQLAlchemy
it defaults to 50. See the section on "LOB Objects" below.
* auto_convert_lobs - defaults to True, see the section on LOB objects.
* auto_setinputsizes - the cx_oracle.setinputsizes() call is issued for
all bind parameters. This is required for LOB datatypes but can be
disabled to reduce overhead. Defaults to ``True``. Specific types
can be excluded from this process using the ``exclude_setinputsizes``
parameter.
* exclude_setinputsizes - a tuple or list of string DBAPI type names to
be excluded from the "auto setinputsizes" feature. The type names here
must match DBAPI types that are found in the "cx_Oracle" module namespace,
such as cx_Oracle.UNICODE, cx_Oracle.NCLOB, etc. Defaults to
``(STRING, UNICODE)``.
.. versionadded:: 0.8 specific DBAPI types can be excluded from the
auto_setinputsizes feature via the exclude_setinputsizes attribute.
* mode - This is given the string value of SYSDBA or SYSOPER, or alternatively
an integer value. This value is only available as a URL query string
argument.
* threaded - enable multithreaded access to cx_oracle connections. Defaults
to ``True``. Note that this is the opposite default of the cx_Oracle DBAPI
itself.
Unicode
-------
cx_oracle 5 fully supports Python unicode objects. SQLAlchemy will pass
all unicode strings directly to cx_oracle, and additionally uses an output
handler so that all string based result values are returned as unicode as well.
Generally, the ``NLS_LANG`` environment variable determines the nature
of the encoding to be used.
Note that this behavior is disabled when Oracle 8 is detected, as it has been
observed that issues remain when passing Python unicodes to cx_oracle with Oracle 8.
LOB Objects
-----------
cx_oracle returns oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts
these to strings so that the interface of the Binary type is consistent with that of
other backends, and so that the linkage to a live cursor is not needed in scenarios
like result.fetchmany() and result.fetchall(). This means that by default, LOB
objects are fully fetched unconditionally by SQLAlchemy, and the linkage to a live
cursor is broken.
To disable this processing, pass ``auto_convert_lobs=False`` to :func:`create_engine()`.
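A minimal sketch of the conversion described above, using a stand-in object
in place of a real ``cx_oracle.LOB`` (which exposes a ``read()`` method):

```python
class FakeLOB(object):
    """Stand-in for cx_oracle.LOB; only read() is modeled."""
    def __init__(self, data):
        self._data = data

    def read(self):
        return self._data

def process(value):
    # Same shape as the result processor the dialect installs:
    # fully fetch the LOB up front, so no live cursor linkage is
    # needed later in result.fetchmany()/fetchall().
    if value is not None:
        return value.read()
    return value

print(process(FakeLOB("lob contents")))  # the plain string "lob contents"
print(process(None))
```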
Two Phase Transaction Support
-----------------------------
Two Phase transactions are implemented using XA transactions. Success has been reported
with this feature but it should be regarded as experimental.
Precision Numerics
------------------
The SQLAlchemy dialect goes through a lot of steps to ensure
that decimal numbers are sent and received with full accuracy.
An "outputtypehandler" callable is associated with each
cx_oracle connection object which detects numeric types and
receives them as string values, instead of receiving a Python
``float`` directly, which is then passed to the Python
``Decimal`` constructor. The :class:`.Numeric` and
:class:`.Float` types under the cx_oracle dialect are aware of
this behavior, and will coerce the ``Decimal`` to ``float`` if
the ``asdecimal`` flag is ``False`` (default on :class:`.Float`,
optional on :class:`.Numeric`).
Because the handler coerces to ``Decimal`` in all cases first,
the feature can detract significantly from performance.
If precision numerics aren't required, the decimal handling
can be disabled by passing the flag ``coerce_to_decimal=False``
to :func:`.create_engine`::
engine = create_engine("oracle+cx_oracle://dsn",
coerce_to_decimal=False)
.. versionadded:: 0.7.6
Add the ``coerce_to_decimal`` flag.
Another alternative to performance is to use the
`cdecimal <http://pypi.python.org/pypi/cdecimal/>`_ library;
see :class:`.Numeric` for additional notes.
The handler attempts to use the "precision" and "scale"
attributes of the result set column to best determine if
subsequent incoming values should be received as ``Decimal`` as
opposed to int (in which case no processing is added). There are
several scenarios where OCI_ does not provide unambiguous data
as to the numeric type, including some situations where
individual rows may return a combination of floating point and
integer values. Certain values for "precision" and "scale" have
been observed to determine this scenario. When it occurs, the
outputtypehandler receives as string and then passes off to a
processing function which detects, for each returned value, if a
decimal point is present, and if so converts to ``Decimal``,
otherwise to int. The intention is that simple int-based
statements like "SELECT my_seq.nextval() FROM DUAL" continue to
return ints and not ``Decimal`` objects, and that any kind of
floating point value is received as a string so that there is no
floating point loss of precision.
The "decimal point is present" logic itself is also sensitive to
locale. Under OCI_, this is controlled by the NLS_LANG
environment variable. Upon first connection, the dialect runs a
test to determine the current "decimal" character, which can be
a comma "," for european locales. From that point forward the
outputtypehandler uses that character to represent a decimal
point. Note that cx_oracle 5.0.3 or greater is required
when dealing with numerics with locale settings that don't use
a period "." as the decimal character.
.. versionchanged:: 0.6.6
The outputtypehandler uses a comma "," character to represent
a decimal point.
.. _OCI: http://www.oracle.com/technetwork/database/features/oci/index.html
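The decimal handling above can be sketched as follows (a simplified
stand-in for the dialect's internal ``_detect_decimal``/``_to_decimal``
logic, with the locale's decimal character passed explicitly rather than
detected from the connection):

```python
import decimal

def to_decimal(value, decimal_char="."):
    # Normalize the detected decimal character (e.g. "," under some
    # NLS_LANG locales) back to "." before building the Decimal.
    return decimal.Decimal(value.replace(decimal_char, "."))

def detect_decimal(value, decimal_char="."):
    # "Ambiguous" numerics arrive as strings; a decimal character
    # present means Decimal, otherwise plain int.
    if decimal_char in value:
        return to_decimal(value, decimal_char)
    return int(value)

print(detect_decimal("3,14", ","))  # Decimal('3.14'), printed as 3.14
print(detect_decimal("42", ","))    # 42, as an int
```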
"""
from .base import OracleCompiler, OracleDialect, \
RESERVED_WORDS, OracleExecutionContext
from . import base as oracle
from ...engine import result as _result
from sqlalchemy import types as sqltypes, util, exc, processors
import random
import collections
from sqlalchemy.util.compat import decimal
import re
class _OracleNumeric(sqltypes.Numeric):
def bind_processor(self, dialect):
# cx_oracle accepts Decimal objects and floats
return None
def result_processor(self, dialect, coltype):
# we apply a cx_oracle type handler to all connections
# that converts floating point strings to Decimal().
# However, in some subquery situations, Oracle doesn't
# give us enough information to determine int or Decimal.
# It could even be int/Decimal differently on each row,
# regardless of the scale given for the originating type.
# So we still need an old school isinstance() handler
# here for decimals.
if dialect.supports_native_decimal:
if self.asdecimal:
if self.scale is None:
fstring = "%.10f"
else:
fstring = "%%.%df" % self.scale
def to_decimal(value):
if value is None:
return None
elif isinstance(value, decimal.Decimal):
return value
else:
return decimal.Decimal(fstring % value)
return to_decimal
else:
if self.precision is None and self.scale is None:
return processors.to_float
elif not getattr(self, '_is_oracle_number', False) \
and self.scale is not None:
return processors.to_float
else:
return None
else:
# cx_oracle 4 behavior, will assume
# floats
return super(_OracleNumeric, self).\
result_processor(dialect, coltype)
class _OracleDate(sqltypes.Date):
def bind_processor(self, dialect):
return None
def result_processor(self, dialect, coltype):
def process(value):
if value is not None:
return value.date()
else:
return value
return process
class _LOBMixin(object):
def result_processor(self, dialect, coltype):
if not dialect.auto_convert_lobs:
# return the cx_oracle.LOB directly.
return None
def process(value):
if value is not None:
return value.read()
else:
return value
return process
class _NativeUnicodeMixin(object):
# Py3K
#pass
# Py2K
def bind_processor(self, dialect):
if dialect._cx_oracle_with_unicode:
def process(value):
if value is None:
return value
else:
return unicode(value)
return process
else:
return super(_NativeUnicodeMixin, self).bind_processor(dialect)
# end Py2K
# we apply a connection output handler that returns
# unicode in all cases, so the "native_unicode" flag
# will be set for the default String.result_processor.
class _OracleChar(_NativeUnicodeMixin, sqltypes.CHAR):
def get_dbapi_type(self, dbapi):
return dbapi.FIXED_CHAR
class _OracleNVarChar(_NativeUnicodeMixin, sqltypes.NVARCHAR):
def get_dbapi_type(self, dbapi):
return getattr(dbapi, 'UNICODE', dbapi.STRING)
class _OracleText(_LOBMixin, sqltypes.Text):
def get_dbapi_type(self, dbapi):
return dbapi.CLOB
class _OracleString(_NativeUnicodeMixin, sqltypes.String):
pass
class _OracleUnicodeText(_LOBMixin, _NativeUnicodeMixin, sqltypes.UnicodeText):
def get_dbapi_type(self, dbapi):
return dbapi.NCLOB
def result_processor(self, dialect, coltype):
lob_processor = _LOBMixin.result_processor(self, dialect, coltype)
if lob_processor is None:
return None
string_processor = sqltypes.UnicodeText.result_processor(self, dialect, coltype)
if string_processor is None:
return lob_processor
else:
def process(value):
return string_processor(lob_processor(value))
return process
class _OracleInteger(sqltypes.Integer):
def result_processor(self, dialect, coltype):
def to_int(val):
if val is not None:
val = int(val)
return val
return to_int
class _OracleBinary(_LOBMixin, sqltypes.LargeBinary):
def get_dbapi_type(self, dbapi):
return dbapi.BLOB
def bind_processor(self, dialect):
return None
class _OracleInterval(oracle.INTERVAL):
def get_dbapi_type(self, dbapi):
return dbapi.INTERVAL
class _OracleRaw(oracle.RAW):
pass
class _OracleRowid(oracle.ROWID):
def get_dbapi_type(self, dbapi):
return dbapi.ROWID
class OracleCompiler_cx_oracle(OracleCompiler):
def bindparam_string(self, name, quote=None, **kw):
if quote is True or quote is not False and \
self.preparer._bindparam_requires_quotes(name):
quoted_name = '"%s"' % name
self._quoted_bind_names[name] = quoted_name
return OracleCompiler.bindparam_string(self, quoted_name, **kw)
else:
return OracleCompiler.bindparam_string(self, name, **kw)
class OracleExecutionContext_cx_oracle(OracleExecutionContext):
def pre_exec(self):
quoted_bind_names = \
getattr(self.compiled, '_quoted_bind_names', None)
if quoted_bind_names:
if not self.dialect.supports_unicode_statements:
# if DBAPI doesn't accept unicode statements,
# keys in self.parameters would have been encoded
# here. so convert names in quoted_bind_names
# to encoded as well.
quoted_bind_names = \
dict(
(fromname.encode(self.dialect.encoding),
toname.encode(self.dialect.encoding))
for fromname, toname in
quoted_bind_names.items()
)
for param in self.parameters:
for fromname, toname in quoted_bind_names.items():
param[toname] = param[fromname]
del param[fromname]
if self.dialect.auto_setinputsizes:
# cx_oracle really has issues when you setinputsizes
# on String, including that outparams/RETURNING
# breaks for varchars
self.set_input_sizes(quoted_bind_names,
exclude_types=self.dialect.exclude_setinputsizes
)
# if a single execute, check for outparams
if len(self.compiled_parameters) == 1:
for bindparam in self.compiled.binds.values():
if bindparam.isoutparam:
dbtype = bindparam.type.dialect_impl(self.dialect).\
get_dbapi_type(self.dialect.dbapi)
if not hasattr(self, 'out_parameters'):
self.out_parameters = {}
if dbtype is None:
raise exc.InvalidRequestError(
"Cannot create out parameter for parameter "
"%r - it's type %r is not supported by"
" cx_oracle" %
(bindparam.key, bindparam.type)
)
name = self.compiled.bind_names[bindparam]
self.out_parameters[name] = self.cursor.var(dbtype)
self.parameters[0][quoted_bind_names.get(name, name)] = \
self.out_parameters[name]
def create_cursor(self):
c = self._dbapi_connection.cursor()
if self.dialect.arraysize:
c.arraysize = self.dialect.arraysize
return c
def get_result_proxy(self):
if hasattr(self, 'out_parameters') and self.compiled.returning:
returning_params = dict(
(k, v.getvalue())
for k, v in self.out_parameters.items()
)
return ReturningResultProxy(self, returning_params)
result = None
if self.cursor.description is not None:
for column in self.cursor.description:
type_code = column[1]
if type_code in self.dialect._cx_oracle_binary_types:
result = _result.BufferedColumnResultProxy(self)
if result is None:
result = _result.ResultProxy(self)
if hasattr(self, 'out_parameters'):
if self.compiled_parameters is not None and \
len(self.compiled_parameters) == 1:
result.out_parameters = out_parameters = {}
for bind, name in self.compiled.bind_names.items():
if name in self.out_parameters:
type = bind.type
impl_type = type.dialect_impl(self.dialect)
dbapi_type = impl_type.get_dbapi_type(self.dialect.dbapi)
result_processor = impl_type.\
result_processor(self.dialect,
dbapi_type)
if result_processor is not None:
out_parameters[name] = \
result_processor(self.out_parameters[name].getvalue())
else:
out_parameters[name] = self.out_parameters[name].getvalue()
else:
result.out_parameters = dict(
(k, v.getvalue())
for k, v in self.out_parameters.items()
)
return result
class OracleExecutionContext_cx_oracle_with_unicode(OracleExecutionContext_cx_oracle):
"""Support WITH_UNICODE in Python 2.xx.
WITH_UNICODE allows cx_Oracle's Python 3 unicode handling
    behavior under Python 2.x. In this mode, non-Python-unicode
    strings (i.e. plain Python 2 byte strings) are in some cases
    rejected and in other cases silently corrupted when passed as
    arguments to connect(), as the statement sent to execute(),
    or as any of the bind parameter keys or values sent to execute().
This optional context therefore ensures that all statements are
passed as Python unicode objects.
"""
def __init__(self, *arg, **kw):
OracleExecutionContext_cx_oracle.__init__(self, *arg, **kw)
self.statement = unicode(self.statement)
def _execute_scalar(self, stmt):
return super(OracleExecutionContext_cx_oracle_with_unicode, self).\
_execute_scalar(unicode(stmt))
class ReturningResultProxy(_result.FullyBufferedResultProxy):
"""Result proxy which stuffs the _returning clause + outparams into the fetch."""
def __init__(self, context, returning_params):
self._returning_params = returning_params
super(ReturningResultProxy, self).__init__(context)
def _cursor_description(self):
returning = self.context.compiled.returning
return [
("ret_%d" % i, None)
for i, col in enumerate(returning)
]
def _buffer_rows(self):
return collections.deque([tuple(self._returning_params["ret_%d" % i]
for i, c in enumerate(self._returning_params))])
class OracleDialect_cx_oracle(OracleDialect):
execution_ctx_cls = OracleExecutionContext_cx_oracle
statement_compiler = OracleCompiler_cx_oracle
driver = "cx_oracle"
    colspecs = {
sqltypes.Numeric: _OracleNumeric,
sqltypes.Date: _OracleDate, # generic type, assume datetime.date is desired
oracle.DATE: oracle.DATE, # non generic type - passthru
sqltypes.LargeBinary: _OracleBinary,
sqltypes.Boolean: oracle._OracleBoolean,
sqltypes.Interval: _OracleInterval,
oracle.INTERVAL: _OracleInterval,
sqltypes.Text: _OracleText,
sqltypes.String: _OracleString,
sqltypes.UnicodeText: _OracleUnicodeText,
sqltypes.CHAR: _OracleChar,
# this is only needed for OUT parameters.
# it would be nice if we could not use it otherwise.
sqltypes.Integer: _OracleInteger,
oracle.RAW: _OracleRaw,
sqltypes.Unicode: _OracleNVarChar,
sqltypes.NVARCHAR: _OracleNVarChar,
oracle.ROWID: _OracleRowid,
}
execute_sequence_format = list
def __init__(self,
auto_setinputsizes=True,
exclude_setinputsizes=("STRING", "UNICODE"),
auto_convert_lobs=True,
threaded=True,
allow_twophase=True,
coerce_to_decimal=True,
arraysize=50, **kwargs):
OracleDialect.__init__(self, **kwargs)
self.threaded = threaded
self.arraysize = arraysize
self.allow_twophase = allow_twophase
self.supports_timestamp = self.dbapi is None or \
hasattr(self.dbapi, 'TIMESTAMP')
self.auto_setinputsizes = auto_setinputsizes
self.auto_convert_lobs = auto_convert_lobs
if hasattr(self.dbapi, 'version'):
self.cx_oracle_ver = tuple([int(x) for x in
self.dbapi.version.split('.')])
else:
self.cx_oracle_ver = (0, 0, 0)
def types(*names):
return set(
getattr(self.dbapi, name, None) for name in names
).difference([None])
self.exclude_setinputsizes = types(*(exclude_setinputsizes or ()))
self._cx_oracle_string_types = types("STRING", "UNICODE",
"NCLOB", "CLOB")
self._cx_oracle_unicode_types = types("UNICODE", "NCLOB")
self._cx_oracle_binary_types = types("BFILE", "CLOB", "NCLOB", "BLOB")
self.supports_unicode_binds = self.cx_oracle_ver >= (5, 0)
self.supports_native_decimal = (
self.cx_oracle_ver >= (5, 0) and
coerce_to_decimal
)
self._cx_oracle_native_nvarchar = self.cx_oracle_ver >= (5, 0)
if self.cx_oracle_ver is None:
# this occurs in tests with mock DBAPIs
self._cx_oracle_string_types = set()
self._cx_oracle_with_unicode = False
elif self.cx_oracle_ver >= (5,) and not hasattr(self.dbapi, 'UNICODE'):
# cx_Oracle WITH_UNICODE mode. *only* python
# unicode objects accepted for anything
self.supports_unicode_statements = True
self.supports_unicode_binds = True
self._cx_oracle_with_unicode = True
# Py2K
# There's really no reason to run with WITH_UNICODE under Python 2.x.
# Give the user a hint.
util.warn("cx_Oracle is compiled under Python 2.xx using the "
"WITH_UNICODE flag. Consider recompiling cx_Oracle without "
"this flag, which is in no way necessary for full support of Unicode. "
"Otherwise, all string-holding bind parameters must "
"be explicitly typed using SQLAlchemy's String type or one of its subtypes,"
"or otherwise be passed as Python unicode. Plain Python strings "
"passed as bind parameters will be silently corrupted by cx_Oracle."
)
self.execution_ctx_cls = OracleExecutionContext_cx_oracle_with_unicode
# end Py2K
else:
self._cx_oracle_with_unicode = False
if self.cx_oracle_ver is None or \
not self.auto_convert_lobs or \
not hasattr(self.dbapi, 'CLOB'):
self.dbapi_type_map = {}
else:
# only use this for LOB objects. using it for strings, dates
# etc. leads to a little too much magic, reflection doesn't know if it should
# expect encoded strings or unicodes, etc.
self.dbapi_type_map = {
self.dbapi.CLOB: oracle.CLOB(),
self.dbapi.NCLOB: oracle.NCLOB(),
self.dbapi.BLOB: oracle.BLOB(),
self.dbapi.BINARY: oracle.RAW(),
}
@classmethod
def dbapi(cls):
import cx_Oracle
return cx_Oracle
def initialize(self, connection):
super(OracleDialect_cx_oracle, self).initialize(connection)
if self._is_oracle_8:
self.supports_unicode_binds = False
self._detect_decimal_char(connection)
def _detect_decimal_char(self, connection):
"""detect if the decimal separator character is not '.', as
is the case with european locale settings for NLS_LANG.
cx_oracle itself uses similar logic when it formats Python
Decimal objects to strings on the bind side (as of 5.0.3),
as Oracle sends/receives string numerics only in the
current locale.
"""
if self.cx_oracle_ver < (5,):
# no output type handlers before version 5
return
cx_Oracle = self.dbapi
conn = connection.connection
# override the output_type_handler that's
# on the cx_oracle connection with a plain
# one on the cursor
def output_type_handler(cursor, name, defaultType,
size, precision, scale):
return cursor.var(
cx_Oracle.STRING,
255, arraysize=cursor.arraysize)
cursor = conn.cursor()
cursor.outputtypehandler = output_type_handler
cursor.execute("SELECT 0.1 FROM DUAL")
val = cursor.fetchone()[0]
cursor.close()
char = re.match(r"([\.,])", val).group(1)
if char != '.':
_detect_decimal = self._detect_decimal
self._detect_decimal = \
lambda value: _detect_decimal(value.replace(char, '.'))
self._to_decimal = \
lambda value: decimal.Decimal(value.replace(char, '.'))
def _detect_decimal(self, value):
if "." in value:
return decimal.Decimal(value)
else:
return int(value)
_to_decimal = decimal.Decimal
def on_connect(self):
if self.cx_oracle_ver < (5,):
# no output type handlers before version 5
return
cx_Oracle = self.dbapi
def output_type_handler(cursor, name, defaultType,
size, precision, scale):
# convert all NUMBER with precision + positive scale to Decimal
# this almost allows "native decimal" mode.
if self.supports_native_decimal and \
defaultType == cx_Oracle.NUMBER and \
precision and scale > 0:
return cursor.var(
cx_Oracle.STRING,
255,
outconverter=self._to_decimal,
arraysize=cursor.arraysize)
# if NUMBER with zero precision and 0 or neg scale, this appears
# to indicate "ambiguous". Use a slower converter that will
# make a decision based on each value received - the type
# may change from row to row (!). This kills
# off "native decimal" mode, handlers still needed.
elif self.supports_native_decimal and \
defaultType == cx_Oracle.NUMBER \
and not precision and scale <= 0:
return cursor.var(
cx_Oracle.STRING,
255,
outconverter=self._detect_decimal,
arraysize=cursor.arraysize)
# allow all strings to come back natively as Unicode
elif defaultType in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR):
return cursor.var(unicode, size, cursor.arraysize)
def on_connect(conn):
conn.outputtypehandler = output_type_handler
return on_connect
def create_connect_args(self, url):
dialect_opts = dict(url.query)
for opt in ('use_ansi', 'auto_setinputsizes', 'auto_convert_lobs',
'threaded', 'allow_twophase'):
if opt in dialect_opts:
util.coerce_kw_type(dialect_opts, opt, bool)
setattr(self, opt, dialect_opts[opt])
if url.database:
# if we have a database, then we have a remote host
port = url.port
if port:
port = int(port)
else:
port = 1521
dsn = self.dbapi.makedsn(url.host, port, url.database)
else:
# we have a local tnsname
dsn = url.host
opts = dict(
user=url.username,
password=url.password,
dsn=dsn,
threaded=self.threaded,
twophase=self.allow_twophase,
)
# Py2K
if self._cx_oracle_with_unicode:
for k, v in opts.items():
if isinstance(v, str):
opts[k] = unicode(v)
else:
for k, v in opts.items():
if isinstance(v, unicode):
opts[k] = str(v)
# end Py2K
if 'mode' in url.query:
opts['mode'] = url.query['mode']
if isinstance(opts['mode'], basestring):
mode = opts['mode'].upper()
if mode == 'SYSDBA':
opts['mode'] = self.dbapi.SYSDBA
elif mode == 'SYSOPER':
opts['mode'] = self.dbapi.SYSOPER
else:
util.coerce_kw_type(opts, 'mode', int)
return ([], opts)
def _get_server_version_info(self, connection):
return tuple(
int(x)
for x in connection.connection.version.split('.')
)
def is_disconnect(self, e, connection, cursor):
error, = e.args
if isinstance(e, self.dbapi.InterfaceError):
return "not connected" in str(e)
elif hasattr(error, 'code'):
# ORA-00028: your session has been killed
# ORA-03114: not connected to ORACLE
# ORA-03113: end-of-file on communication channel
# ORA-03135: connection lost contact
# ORA-01033: ORACLE initialization or shutdown in progress
# TODO: Others ?
return error.code in (28, 3114, 3113, 3135, 1033)
else:
return False
def create_xid(self):
"""create a two-phase transaction ID.
this id will be passed to do_begin_twophase(), do_rollback_twophase(),
do_commit_twophase(). its format is unspecified."""
id = random.randint(0, 2 ** 128)
return (0x1234, "%032x" % id, "%032x" % 9)
def do_begin_twophase(self, connection, xid):
connection.connection.begin(*xid)
def do_prepare_twophase(self, connection, xid):
connection.connection.prepare()
def do_rollback_twophase(self, connection, xid, is_prepared=True, recover=False):
self.do_rollback(connection.connection)
def do_commit_twophase(self, connection, xid, is_prepared=True, recover=False):
self.do_commit(connection.connection)
def do_recover_twophase(self, connection):
pass
dialect = OracleDialect_cx_oracle
| [
"wangzhengbo1204@gmail.com"
] | wangzhengbo1204@gmail.com |
64ba6ec501d975d541fea1dd55faa1b766c24658 | 6444622ad4a150993955a0c8fe260bae1af7f8ce | /djangoenv/bin/django-admin | b9210326fb809c5ec6c08d9923a99c16f9a46121 | [] | no_license | jeremyrich/Lesson_RestAPI_jeremy | ca965ef017c53f919c0bf97a4a23841818e246f9 | a44263e45b1cc1ba812059f6984c0f5be25cd234 | refs/heads/master | 2020-04-25T23:13:47.237188 | 2019-03-22T09:26:58 | 2019-03-22T09:26:58 | 173,138,073 | 0 | 0 | null | 2019-03-22T09:26:59 | 2019-02-28T15:34:19 | Python | UTF-8 | Python | false | false | 349 | #!/home/mymy/Desktop/Python_agility/cours/Hugo/Lessons_RestAPI/Lesson_RestAPI/djangoenv/bin/python2.7
# -*- coding: utf-8 -*-
import re
import sys
from django.core.management import execute_from_command_line
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(execute_from_command_line())
| [
"jeremyrich@free.fr"
] | jeremyrich@free.fr | |
2401aaa2a42cd0e7892df3be26c15c536640cfef | ccb81eef3cd4f5562cab89b51695756ab8dbc736 | /message_ler_17076/wsgi.py | f1c921791698157a52064050940c5b21f2e0dd40 | [] | no_license | crowdbotics-apps/message-ler-17076 | 4e847a2ccd333b77414bf9b913e7705c203accbb | 135b68325e04caf669fd9fe281244c460e71068c | refs/heads/master | 2023-05-16T11:16:10.937234 | 2020-05-16T19:12:53 | 2020-05-16T19:12:53 | 264,508,470 | 0 | 0 | null | 2021-06-10T11:01:47 | 2020-05-16T19:11:07 | Python | UTF-8 | Python | false | false | 411 | py | """
WSGI config for message_ler_17076 project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "message_ler_17076.settings")
application = get_wsgi_application()
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
24ce83491fbb7e1866d1b8b4a18150fc82c28ee5 | 79479634cd8da72fc912b11ec43e237726f7c5e5 | /scripts/subreddit_submissions.py | 5d74fcf29545b78a2ba70950c7140671f3e4f73d | [
"MIT"
] | permissive | PhantomInsights/subreddit-analyzer | 594235169f58d5b2c9de886625464ced5be02790 | ba5e6250797515d664d6cfa6df011f8cec1b2729 | refs/heads/master | 2020-11-25T10:46:13.771188 | 2020-04-02T12:50:12 | 2020-04-02T12:50:12 | 228,625,357 | 513 | 47 | MIT | 2019-12-24T13:35:32 | 2019-12-17T13:42:42 | Python | UTF-8 | Python | false | false | 2,915 | py | """
This script uses the Pushshift API to download posts from the specified subreddits.
By default it downloads 10,000 posts starting from the newest one.
"""
import csv
import time
from datetime import datetime
import requests
import tldextract
SUBREDDITS = ["mexico"]
HEADERS = {"User-Agent": "Submissions Downloader v0.2"}
SUBMISSIONS_LIST = list()
MAX_SUBMISSIONS = 10000
def init():
"""Iterates over all the subreddits and creates their csv files."""
for subreddit in SUBREDDITS:
writer = csv.writer(open("./{}-submissions.csv".format(subreddit),
"w", newline="", encoding="utf-8"))
# Adding the header.
writer.writerow(["datetime", "author", "title", "url", "domain"])
print("Downloading:", subreddit)
download_submissions(subreddit=subreddit)
writer.writerows(SUBMISSIONS_LIST)
SUBMISSIONS_LIST.clear()
def download_submissions(subreddit, latest_timestamp=None):
"""Keeps downloading submissions using recursion, it downloads them 500 at a time.
Parameters
----------
subreddit : str
The desired subreddit.
latest_timestamp: int
The timestamp of the latest downloaded submission.
"""
base_url = "https://api.pushshift.io/reddit/submission/search/"
params = {"subreddit": subreddit, "sort": "desc",
"sort_type": "created_utc", "size": 500}
# After the first call of this function we will use the 'before' parameter.
if latest_timestamp != None:
params["before"] = latest_timestamp
with requests.get(base_url, params=params, headers=HEADERS) as response:
json_data = response.json()
total_submissions = len(json_data["data"])
latest_timestamp = 0
print("Downloading: {} submissions".format(total_submissions))
for item in json_data["data"]:
# We will keep the timestamp, author, title, url and domain.
latest_timestamp = item["created_utc"]
iso_date = datetime.fromtimestamp(latest_timestamp)
tld = tldextract.extract(item["url"])
domain = tld.domain + "." + tld.suffix
if item["is_self"] == True:
domain = "self-post"
if domain == "youtu.be":
domain = "youtube.com"
if domain == "redd.it":
domain = "reddit.com"
SUBMISSIONS_LIST.append(
[iso_date, item["author"], item["title"], item["url"], domain])
if len(SUBMISSIONS_LIST) >= MAX_SUBMISSIONS:
break
if total_submissions < 500:
print("No more results.")
elif len(SUBMISSIONS_LIST) >= MAX_SUBMISSIONS:
print("Download complete.")
else:
time.sleep(1.2)
download_submissions(subreddit, latest_timestamp)
if __name__ == "__main__":
init()
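The download logic above pages backwards through results by passing the oldest seen timestamp as the `before` cursor on each call. A minimal standalone sketch of that cursor-pagination pattern, with a toy in-memory "API" instead of the real Pushshift endpoint (all names here are hypothetical):

```python
# Toy stand-in for the Pushshift call: returns up to 3 items older than `before`.
def fetch_batch(before=None, data=tuple(range(10, 0, -1))):
    older = [t for t in data if before is None or t < before]
    return older[:3]  # page size of 3

collected = []
cursor = None
while True:
    batch = fetch_batch(cursor)
    if not batch:
        break
    collected.extend(batch)
    cursor = batch[-1]  # oldest timestamp seen so far becomes the next cursor

print(collected)  # -> [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```

The real script does the same thing recursively, stopping once a page comes back with fewer than 500 results or `MAX_SUBMISSIONS` is reached.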
| [
"phantom@phantom.im"
] | phantom@phantom.im |
1e189ea6efb35bf1d8f0ab74b3f558a83fc5ab98 | 35b460a5e72e3cb40681861c38dc6d5df1ae9b92 | /CodeFights/Arcade/Intro/throughTheFog/circleOfNumbers.py | aef33d190af0cb87515c13db64c6284b45fdb0e4 | [] | no_license | robgoyal/CodingChallenges | 9c5f3457a213cf54193a78058f74fcf085ef25bc | 0aa99d1aa7b566a754471501945de26644558d7c | refs/heads/master | 2021-06-23T09:09:17.085873 | 2019-03-04T04:04:59 | 2019-03-04T04:04:59 | 94,391,412 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 274 | py | # Name: circleOfNumbers.py
# Author: Robin Goyal
# Last-Modified: July 13, 2017
# Purpose: Given n numbers evenly spaced on a circle (0 to n - 1) and an
#          input number, find the number positioned directly opposite it
def circleOfNumbers(n, firstNumber):
return (firstNumber + n // 2) % n
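A quick sanity check of the same idea as a standalone sketch (function renamed to snake_case here):

```python
def circle_of_numbers(n, first_number):
    # The opposite number sits half the circle away, wrapping modulo n.
    return (first_number + n // 2) % n

print(circle_of_numbers(10, 2))  # -> 7
print(circle_of_numbers(10, 7))  # -> 2
```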
"goyal.rob@gmail.com"
] | goyal.rob@gmail.com |
2e6d663e1f9f0847d490f1af6ae277c83881bfaa | 96dcea595e7c16cec07b3f649afd65f3660a0bad | /homeassistant/components/twentemilieu/calendar.py | f4d1e51b171eedd978a3de9419f2265296cd7c7b | [
"Apache-2.0"
] | permissive | home-assistant/core | 3455eac2e9d925c92d30178643b1aaccf3a6484f | 80caeafcb5b6e2f9da192d0ea6dd1a5b8244b743 | refs/heads/dev | 2023-08-31T15:41:06.299469 | 2023-08-31T14:50:53 | 2023-08-31T14:50:53 | 12,888,993 | 35,501 | 20,617 | Apache-2.0 | 2023-09-14T21:50:15 | 2013-09-17T07:29:48 | Python | UTF-8 | Python | false | false | 3,653 | py | """Support for Twente Milieu Calendar."""
from __future__ import annotations
from datetime import date, datetime, timedelta
from twentemilieu import WasteType
from homeassistant.components.calendar import CalendarEntity, CalendarEvent
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import CONF_ID
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator
import homeassistant.util.dt as dt_util
from .const import DOMAIN, WASTE_TYPE_TO_DESCRIPTION
from .entity import TwenteMilieuEntity
async def async_setup_entry(
hass: HomeAssistant,
entry: ConfigEntry,
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up Twente Milieu calendar based on a config entry."""
coordinator = hass.data[DOMAIN][entry.data[CONF_ID]]
async_add_entities([TwenteMilieuCalendar(coordinator, entry)])
class TwenteMilieuCalendar(TwenteMilieuEntity, CalendarEntity):
"""Defines a Twente Milieu calendar."""
_attr_has_entity_name = True
_attr_icon = "mdi:delete-empty"
_attr_name = None
def __init__(
self,
coordinator: DataUpdateCoordinator[dict[WasteType, list[date]]],
entry: ConfigEntry,
) -> None:
"""Initialize the Twente Milieu entity."""
super().__init__(coordinator, entry)
self._attr_unique_id = str(entry.data[CONF_ID])
self._event: CalendarEvent | None = None
@property
def event(self) -> CalendarEvent | None:
"""Return the next upcoming event."""
return self._event
async def async_get_events(
self, hass: HomeAssistant, start_date: datetime, end_date: datetime
) -> list[CalendarEvent]:
"""Return calendar events within a datetime range."""
events: list[CalendarEvent] = []
for waste_type, waste_dates in self.coordinator.data.items():
events.extend(
CalendarEvent(
summary=WASTE_TYPE_TO_DESCRIPTION[waste_type],
start=waste_date,
end=waste_date + timedelta(days=1),
)
for waste_date in waste_dates
if start_date.date() <= waste_date <= end_date.date()
)
return events
@callback
def _handle_coordinator_update(self) -> None:
"""Handle updated data from the coordinator."""
next_waste_pickup_type = None
next_waste_pickup_date = None
for waste_type, waste_dates in self.coordinator.data.items():
if (
waste_dates
and (
next_waste_pickup_date is None
or waste_dates[0] # type: ignore[unreachable]
< next_waste_pickup_date
)
and waste_dates[0] >= dt_util.now().date()
):
next_waste_pickup_date = waste_dates[0]
next_waste_pickup_type = waste_type
self._event = None
if next_waste_pickup_date is not None and next_waste_pickup_type is not None:
self._event = CalendarEvent(
summary=WASTE_TYPE_TO_DESCRIPTION[next_waste_pickup_type],
start=next_waste_pickup_date,
end=next_waste_pickup_date + timedelta(days=1),
)
super()._handle_coordinator_update()
async def async_added_to_hass(self) -> None:
"""When entity is added to hass."""
await super().async_added_to_hass()
self._handle_coordinator_update()
| [
"noreply@github.com"
] | home-assistant.noreply@github.com |
032a5183b2f4f23432281cb1bdb5b8dbd83c594d | 0ee329d7c2de6783dccf3a5fa128930a000c672d | /Final_loop_for_X_Test_and_predictions_Soros_167.py | 078aa9cbae41c35805972f545b365c4e837638dd | [] | no_license | rsc2143/ViteosModel | 2df5788def1a2c9531b050932c94ca04227bdef6 | 9be8a03fcd3528de1db3d801d11875a60779dd20 | refs/heads/master | 2023-05-09T03:25:26.818465 | 2021-06-07T11:31:31 | 2021-06-07T11:31:31 | 374,601,477 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 87,818 | py | #!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
import pandas as pd
from datetime import datetime
#from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from tqdm import tqdm
import pickle
import os
import sys
#from sklearn.metrics import confusion_matrix
# In[4709]:
print(os.getcwd())
os.chdir('C:\\Users\\consultant138\\Downloads\\Viteos_Rohit\\ViteosModel')
print(os.getcwd())
orig_stdout = sys.stdout
f = open('153_model_run_' + str(datetime.now().strftime("%d_%m_%Y_%H_%M")) + '.txt', 'w')
sys.stdout = f
print(datetime.now())
def equals_fun(a,b):
if a == b:
return 1
else:
return 0
vec_equals_fun = np.vectorize(equals_fun)
def mhreplaced(item):
word1 = []
word2 = []
if (type(item) == str):
for items in item.split(' '):
if (type(items) == str):
items = items.lower()
if items.isdigit() == False:
word1.append(items)
for c in word1:
if c.endswith('MH')==False:
word2.append(c)
words = ' '.join(word2)
return words
else:
return item
vec_tt_match = np.vectorize(mhreplaced)
def fundmatch(item):
items = item.lower()
items = items.replace(' ','')
return items
vec_fund_match = np.vectorize(fundmatch)
def nan_fun(x):
if x=='nan':
return 1
else:
return 0
vec_nan_fun = np.vectorize(nan_fun)
def a_keymatch(a_cusip, a_isin):
pb_nan = 0
a_common_key = 'NA'
if a_cusip=='nan' and a_isin =='nan':
pb_nan =1
elif(a_cusip!='nan' and a_isin == 'nan'):
a_common_key = a_cusip
elif(a_cusip =='nan' and a_isin !='nan'):
a_common_key = a_isin
else:
a_common_key = a_isin
return (pb_nan, a_common_key)
def b_keymatch(b_cusip, b_isin):
accounting_nan = 0
b_common_key = 'NA'
if b_cusip =='nan' and b_isin =='nan':
accounting_nan =1
elif (b_cusip!='nan' and b_isin == 'nan'):
b_common_key = b_cusip
elif(b_cusip =='nan' and b_isin !='nan'):
b_common_key = b_isin
else:
b_common_key = b_isin
return (accounting_nan, b_common_key)
vec_a_key_match_fun = np.vectorize(a_keymatch)
vec_b_key_match_fun = np.vectorize(b_keymatch)
def nan_equals_fun(a,b):
if a==1 and b==1:
return 1
else:
return 0
vec_nan_equal_fun = np.vectorize(nan_equals_fun)
def new_key_match_fun(a,b,c):
if a==b and c==0:
return 1
else:
return 0
vec_new_key_match_fun = np.vectorize(new_key_match_fun)
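The key-match helpers above return tuples, and the script later indexes the vectorized result (e.g. `vec_a_key_match_fun(...)[0]`). That works because `np.vectorize` applied to a tuple-returning function yields a tuple of arrays, one per output. A minimal check with toy inputs (assuming numpy is available):

```python
import numpy as np

def pair(a, b):
    # toy tuple-returning function, mirroring a_keymatch/b_keymatch above
    return (a + b, a * b)

vec_pair = np.vectorize(pair)
sums, prods = vec_pair(np.array([1, 2, 3]), np.array([4, 5, 6]))
print(list(sums))   # -> [5, 7, 9]
print(list(prods))  # -> [4, 10, 18]
```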
cols = ['Currency','Account Type','Accounting Net Amount',
#'Accounting Net Amount Difference','Accounting Net Amount Difference Absolute ',
#'Activity Code',
'Age','Age WK',
'Asset Type Category','Base Currency','Base Net Amount',
#'Bloomberg_Yellow_Key',
'B-P Net Amount',
#'B-P Net Amount Difference','B-P Net Amount Difference Absolute',
'BreakID',
'Business Date','Cancel Amount','Cancel Flag','CUSIP','Custodian',
'Custodian Account',
'Derived Source','Description','ExpiryDate','ExternalComment1','ExternalComment2',
'ExternalComment3','Fund',
#'FX Rate',
#'Interest Amount',
'InternalComment1','InternalComment2',
'InternalComment3','Investment Type','Is Combined Data','ISIN','Keys',
'Mapped Custodian Account','Net Amount Difference','Net Amount Difference Absolute','Non Trade Description',
#'OTE Custodian Account',
#'Predicted Action','Predicted Status','Prediction Details',
'Price','Prime Broker',
'Quantity','SEDOL','Settle Date','SPM ID','Status',
#'Strike Price',
'System Comments','Ticker','Trade Date','Trade Expenses','Transaction Category','Transaction ID','Transaction Type',
'Underlying Cusip','Underlying Investment ID','Underlying ISIN','Underlying Sedol','Underlying Ticker','Source Combination','_ID']
#'UnMapped']
add = ['ViewData.Side0_UniqueIds', 'ViewData.Side1_UniqueIds',
# 'MetaData.0._RecordID','MetaData.1._RecordID',
'ViewData.Task Business Date']
# In[4710]:
new_cols = ['ViewData.' + x for x in cols] + add
cols_to_show = [
'Account Type',
'Accounting Net Amount',
#'Accounting Net Amount Difference',
#'Activity Code',
'Age',
'Alt ID 1',
'Asset Type Category',
#'Bloomberg_Yellow_Key',
'B-P Net Amount',
#'B-P Net Amount Difference',
#'B-P Net Amount Difference Absolute',
'BreakID',
'Business Date',
#'Call Put Indicator',
'Cancel Amount',
'Cancel Flag',
'Commission',
'Currency',
'CUSIP',
'Custodian',
'Custodian Account',
'Department',
'Description',
'ExpiryDate',
'ExternalComment2',
'Fund',
#'FX Rate',
#'Interest Amount',
'InternalComment2',
'Investment ID',
'Investment Type',
'Is Combined Data',
'ISIN',
'Keys',
'Knowledge Date',
'Mapped Custodian Account',
'Net Amount Difference',
'Non Trade Description',
#'OTE Custodian Account',
#'OTE Ticker',
'PB Account Numeric',
'Portfolio ID',
'Portolio',
'Price',
'Prime Broker',
#'Principal Amount',
'Quantity',
#'Sec Fees',
'SEDOL',
'Settle Date',
'Status',
#'Strike Price',
'System Comments',
'Ticker',
'Trade Date',
'Trade Expenses',
'Transaction Category',
'Transaction ID',
'Transaction Type',
'Underlying Cusip',
'Underlying Investment ID',
'Underlying ISIN',
'Underlying Sedol',
'Underlying Ticker',
'UserTran1',
'UserTran2',
'Value Date',
]
add_cols_to_show = ['ViewData.Side0_UniqueIds', 'ViewData.Side1_UniqueIds']
viewdata_cols_to_show = ['ViewData.' + x for x in cols_to_show] + add_cols_to_show
common_cols = ['ViewData.Accounting Net Amount', 'ViewData.Age',
'ViewData.Age WK', 'ViewData.Asset Type Category',
'ViewData.B-P Net Amount', 'ViewData.Base Net Amount','ViewData.CUSIP',
'ViewData.Cancel Amount',
'ViewData.Cancel Flag',
#'ViewData.Commission',
'ViewData.Currency', 'ViewData.Custodian',
'ViewData.Custodian Account',
'ViewData.Description', 'ViewData.ExpiryDate', 'ViewData.Fund',
'ViewData.ISIN',
'ViewData.Investment Type',
# 'ViewData.Keys',
'ViewData.Mapped Custodian Account',
'ViewData.Net Amount Difference',
'ViewData.Net Amount Difference Absolute',
#'ViewData.OTE Ticker',
'ViewData.Price',
'ViewData.Prime Broker', 'ViewData.Quantity',
'ViewData.SEDOL', 'ViewData.SPM ID', 'ViewData.Settle Date',
# 'ViewData.Strike Price',
'Date',
'ViewData.Ticker', 'ViewData.Trade Date',
'ViewData.Transaction Category',
'ViewData.Transaction Type', 'ViewData.Underlying Cusip',
'ViewData.Underlying ISIN',
'ViewData.Underlying Sedol','filter_key','ViewData.Status','ViewData.BreakID',
'ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds','ViewData._ID']
date_numbers_list = [1,2,3,4,
7,8,9,10,11,
14,15,16,17,18,
21,22,23,24,25,
28,29,30]
filepaths_X_test = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/X_Test_153/x_test_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_no_pair_id_data = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/X_Test_153/no_pair_ids_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_no_pair_id_no_data_warning = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/X_Test_153/WARNING_no_pair_ids_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_AUA = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/AUA/AUACollections_SOROS.AUA_HST_RecData_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_MEO = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/MEO/MeoCollections_SOROS.MEO_HST_RecData_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_final_prediction_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/Final_Predictions_153/Final_Predictions_Table_HST_RecData_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_accuracy_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/Final_Predictions_153/Accuracy_Table_HST_RecData_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
filepaths_crosstab_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Soros/JuneData/Final_Predictions_153/Crosstab_Table_HST_RecData_153_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
# In[4711]:
#df_170.shape
# ## Read testing data
# In[4712]:
i = 0
for i in range(0,len(date_numbers_list)):
meo = pd.read_csv(filepaths_MEO[i],usecols=new_cols)
df1 = meo[~meo['ViewData.Status'].isin(['SMT','HST', 'OC', 'CT', 'Archive','SMR'])]
#df = df[df['MatchStatus'] != 21]
df1 = df1[~df1['ViewData.Status'].isnull()]
df1 = df1.reset_index()
df1 = df1.drop('index',1)
# ## Machine generated output
# In[4716]:
df = df1.copy()
# In[4717]:
df = df.reset_index()
df = df.drop('index',1)
# In[4720]:
df['Date'] = pd.to_datetime(df['ViewData.Task Business Date'])
# In[4721]:
#df['Date'] = pd.to_datetime(df['ViewData.Task Business Date'])
# In[4722]:
df = df[~df['Date'].isnull()]
df = df.reset_index()
df = df.drop('index',1)
# In[4723]:
pd.to_datetime(df['Date'])
# In[4724]:
df['Date'] = pd.to_datetime(df['Date']).dt.date
# In[4725]:
df['Date'] = df['Date'].astype(str)
# In[4726]:
# In[4727]:
df = df[df['ViewData.Status'].isin(['OB','SPM','SDB','UOB','UDB','SMB'])]
df = df.reset_index()
df = df.drop('index',1)
# In[4728]:
df['ViewData.Side0_UniqueIds'] = df['ViewData.Side0_UniqueIds'].astype(str)
df['ViewData.Side1_UniqueIds'] = df['ViewData.Side1_UniqueIds'].astype(str)
df['flag_side0'] = df.apply(lambda x: len(x['ViewData.Side0_UniqueIds'].split(',')), axis=1)
df['flag_side1'] = df.apply(lambda x: len(x['ViewData.Side1_UniqueIds'].split(',')), axis=1)
# In[4729]:
#df_170[(df_170['ViewData.Status']=='UMR')]
# In[4730]:
print('The Date value count is:')
print(df['Date'].value_counts())
date_i = df['Date'].mode()[0]
print('Choosing the date : ' + date_i)
df = df.rename(columns= {'ViewData.B-P Net Amount':'ViewData.B-P Net Amount'})
sample = df[df['Date'] == date_i]
sample = sample.reset_index()
sample = sample.drop('index',1)
# In[4945]:
smb = sample[sample['ViewData.Status']=='SMB'].reset_index()
smb = smb.drop('index',1)
# In[4946]:
#import glob
#df_list = []
#path = "//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/AUA/*.csv"
#for fname in glob.glob(path):
# print(fname)
# if "125" in fname:
# df_list.append(pd.read_csv(fname))
# In[4947]:
#dfff = pd.concat(df_list, axis=0)
#dfff['Date'] = pd.to_datetime(dfff['ViewData.Task Business Date'])
#dfff[dfff['ViewData.Status']=='UCB'].shape
# In[4948]:
#dfff[dfff['ViewData.Status']=='UCB'].groupby('Date')['ViewData.Age'].size()
# In[4949]:
#dfff[(dfff['ViewData.Status']=='UCB') & (dfff['Date']=='2020-06-30')]['ViewData.Side1_UniqueIds'].nunique()
# In[4950]:
smb_pb = smb.copy()
smb_acc = smb.copy()
# In[4951]:
smb_pb['ViewData.Accounting Net Amount'] = np.nan
smb_pb['ViewData.Side0_UniqueIds'] = np.nan
smb_pb['ViewData.Status'] ='SMB-OB'
smb_acc['ViewData.B-P Net Amount'] = np.nan
smb_acc['ViewData.Side1_UniqueIds'] = np.nan
smb_acc['ViewData.Status'] ='SMB-OB'
# In[4952]:
sample = sample[sample['ViewData.Status']!='SMB']
sample = sample.reset_index()
sample = sample.drop('index',1)
# In[4953]:
# In[4954]:
sample = pd.concat([sample,smb_pb,smb_acc],axis=0)
sample = sample.reset_index()
sample = sample.drop('index',1)
# In[4955]:
# In[4958]:
sample['ViewData.Side0_UniqueIds'] = sample['ViewData.Side0_UniqueIds'].astype(str)
sample['ViewData.Side1_UniqueIds'] = sample['ViewData.Side1_UniqueIds'].astype(str)
# In[4959]:
sample.loc[sample['ViewData.Side0_UniqueIds']=='nan','flag_side0'] = 0
sample.loc[sample['ViewData.Side1_UniqueIds']=='nan','flag_side1'] = 0
# In[4960]:
# In[4961]:
# In[4962]:
sample.loc[sample['ViewData.Side1_UniqueIds']=='nan','Trans_side'] = 'B_side'
sample.loc[sample['ViewData.Side0_UniqueIds']=='nan','Trans_side'] = 'A_side'
sample.loc[sample['Trans_side']=='A_side','ViewData.B-P Currency'] = sample.loc[sample['Trans_side']=='A_side','ViewData.Currency']
sample.loc[sample['Trans_side']=='B_side','ViewData.Accounting Currency'] = sample.loc[sample['Trans_side']=='B_side','ViewData.Currency']
sample['ViewData.B-P Currency'] = sample['ViewData.B-P Currency'].astype(str)
sample['ViewData.Accounting Currency'] = sample['ViewData.Accounting Currency'].astype(str)
sample['ViewData.Mapped Custodian Account'] = sample['ViewData.Mapped Custodian Account'].astype(str)
#sample['ViewData.Mapped Custodian Account'] = sample['ViewData.Mapped Custodian Account'].astype(str)
sample['filter_key'] = sample.apply(lambda x: x['ViewData.Mapped Custodian Account'] + x['ViewData.B-P Currency'] if x['Trans_side']=='A_side' else x['ViewData.Mapped Custodian Account'] + x['ViewData.Accounting Currency'], axis=1)
sample1 = sample[(sample['flag_side0']<=1) & (sample['flag_side1']<=1) & (sample['ViewData.Status'].isin(['OB','SPM','SDB','UDB','UOB','SMB-OB']))]
sample1 = sample1.reset_index()
sample1 = sample1.drop('index', 1)
# In[4963]:
sample1['ViewData.BreakID'] = sample1['ViewData.BreakID'].astype(int)
# In[4964]:
sample1 = sample1[sample1['ViewData.BreakID']!=-1]
sample1 = sample1.reset_index()
sample1 = sample1.drop('index',1)
# In[4965]:
sample1 = sample1.sort_values(['ViewData.BreakID','Date'], ascending =[True, False])
sample1 = sample1.reset_index()
sample1 = sample1.drop('index',1)
# In[4966]:
# In[5252]:
#sample1[sample1['ViewData.Status']=='SMB-OB']
# In[4968]:
aa = sample1[sample1['Trans_side']=='A_side']
bb = sample1[sample1['Trans_side']=='B_side']
# In[4969]:
aa['filter_key'] = aa['ViewData.Source Combination'].astype(str) + aa['ViewData.Mapped Custodian Account'].astype(str) + aa['ViewData.B-P Currency'].astype(str)
bb['filter_key'] = bb['ViewData.Source Combination'].astype(str) + bb['ViewData.Mapped Custodian Account'].astype(str) + bb['ViewData.Accounting Currency'].astype(str)
# In[4971]:
aa = aa.reset_index()
aa = aa.drop('index', 1)
bb = bb.reset_index()
bb = bb.drop('index', 1)
# In[4972]:
#'ViewData.Side0_UniqueIds', 'ViewData.Side1_UniqueIds'
# In[4973]:
bb = bb[~bb['ViewData.Accounting Net Amount'].isnull()]
bb = bb.reset_index()
bb = bb.drop('index',1)
# In[4974]:
# In[4975]:
###################### loop 3 ###############################
pool =[]
key_index =[]
training_df =[]
no_pair_ids = []
#max_rows = 5
for d in tqdm(aa['Date'].unique()):
aa1 = aa.loc[aa['Date']==d,:][common_cols]
bb1 = bb.loc[bb['Date']==d,:][common_cols]
aa1 = aa1.reset_index()
aa1 = aa1.drop('index',1)
bb1 = bb1.reset_index()
bb1 = bb1.drop('index', 1)
bb1 = bb1.sort_values(by='filter_key',ascending =True)
for key in (list(np.unique(np.array(list(aa1['filter_key'].values) + list(bb1['filter_key'].values))))):
df1 = aa1[aa1['filter_key']==key]
df2 = bb1[bb1['filter_key']==key]
if df1.empty == False and df2.empty == False:
#aa_df = pd.concat([aa1[aa1.index==i]]*repeat_num, ignore_index=True)
#bb_df = bb1.loc[pool[len(pool)-1],:][common_cols].reset_index()
#bb_df = bb_df.drop('index', 1)
df1 = df1.rename(columns={'ViewData.BreakID':'ViewData.BreakID_A_side'})
df2 = df2.rename(columns={'ViewData.BreakID':'ViewData.BreakID_B_side'})
#dff = pd.concat([aa[aa.index==i],bb.loc[pool[i],:][accounting_vars]],axis=1)
df1 = df1.reset_index()
df2 = df2.reset_index()
df1 = df1.drop('index', 1)
df2 = df2.drop('index', 1)
df1.columns = ['SideA.' + x for x in df1.columns]
df2.columns = ['SideB.' + x for x in df2.columns]
df1 = df1.rename(columns={'SideA.filter_key':'filter_key'})
df2 = df2.rename(columns={'SideB.filter_key':'filter_key'})
#dff = pd.concat([aa_df,bb_df],axis=1)
dff = pd.merge(df1, df2, on='filter_key')
training_df.append(dff)
#key_index.append(i)
#else:
#no_pair_ids.append([aa1[(aa1['filter_key']=='key') & (aa1['ViewData.Status'].isin(['OB','SDB']))]['ViewData.Side1_UniqueIds'].values[0]])
# no_pair_ids.append(aa1[(aa1['filter_key']== key) & (aa1['ViewData.Status'].isin(['OB','SDB']))]['ViewData.Side1_UniqueIds'].values[0])
else:
no_pair_ids.append([aa1[(aa1['filter_key']==key) & (aa1['ViewData.Status'].isin(['OB','SDB']))]['ViewData.Side1_UniqueIds'].values])
no_pair_ids.append([bb1[(bb1['filter_key']==key) & (bb1['ViewData.Status'].isin(['OB','SDB']))]['ViewData.Side0_UniqueIds'].values])
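The pairing loop above relies on the fact that `pd.merge` on a non-unique key produces every A-row × B-row combination within that key, i.e. a per-key cross join of candidate pairs. A tiny illustration with hypothetical frames:

```python
import pandas as pd

a = pd.DataFrame({'filter_key': ['k1', 'k1'], 'amt_a': [10, 20]})
b = pd.DataFrame({'filter_key': ['k1'], 'amt_b': [30]})

# 2 A-side rows x 1 B-side row under key 'k1' -> 2 candidate pairs
pairs = pd.merge(a, b, on='filter_key')
print(len(pairs))  # -> 2
```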
# In[4976]:
#no_pair_ids = np.unique(np.concatenate(no_pair_ids,axis=1)[0])
# In[4977]:
#pd.DataFrame(no_pair_ids).rename
# In[4978]:
if len(no_pair_ids) != 0:
no_pair_ids = np.unique(np.concatenate(no_pair_ids,axis=1)[0])
no_pair_ids_df = pd.DataFrame(no_pair_ids)
#no_pair_ids_df = no_pair_ids_df.rename(columns={'0':'filter_key'})
no_pair_ids_df.columns = ['filter_key']
no_pair_ids_df.to_csv(filepaths_no_pair_id_data[i])
else:
with open(filepaths_no_pair_id_no_data_warning[i], 'w') as f:
f.write('No no pair ids found for this setup and date combination')
# In[4981]:
# In[4980]:
test_file = pd.concat(training_df)
# In[4982]:
test_file = test_file.reset_index()
test_file = test_file.drop('index',1)
# In[4983]:
test_file['SideB.ViewData.BreakID_B_side'] = test_file['SideB.ViewData.BreakID_B_side'].astype('int64')
test_file['SideA.ViewData.BreakID_A_side'] = test_file['SideA.ViewData.BreakID_A_side'].astype('int64')
# In[4984]:
model_cols = [
'SideA.ViewData.Accounting Net Amount',
'SideA.ViewData.B-P Net Amount',
'SideA.ViewData.CUSIP',
'SideA.ViewData.Currency',
#'SideA.ViewData.Description',
'SideA.ViewData.ISIN',
'SideB.ViewData.Accounting Net Amount',
'SideB.ViewData.B-P Net Amount',
'SideB.ViewData.CUSIP',
'SideB.ViewData.Currency',
#'SideB.ViewData.Description',
'SideB.ViewData.ISIN',
'SideB.ViewData.Status','SideB.ViewData.BreakID_B_side',
'SideA.ViewData.Status','SideA.ViewData.BreakID_A_side',
'label']
y_col = ['label']
# In[4985]:
test_file['SideB.ViewData.CUSIP'] = test_file['SideB.ViewData.CUSIP'].str.split(".",expand=True)[0]
test_file['SideA.ViewData.CUSIP'] = test_file['SideA.ViewData.CUSIP'].str.split(".",expand=True)[0]
# In[4986]:
test_file['SideA.ViewData.ISIN'] = test_file['SideA.ViewData.ISIN'].astype(str)
test_file['SideB.ViewData.ISIN'] = test_file['SideB.ViewData.ISIN'].astype(str)
test_file['SideA.ViewData.CUSIP'] = test_file['SideA.ViewData.CUSIP'].astype(str)
test_file['SideB.ViewData.CUSIP'] = test_file['SideB.ViewData.CUSIP'].astype(str)
test_file['SideA.ViewData.Currency'] = test_file['SideA.ViewData.Currency'].astype(str)
test_file['SideB.ViewData.Currency'] = test_file['SideB.ViewData.Currency'].astype(str)
test_file['SideA.ViewData.Trade Date'] = test_file['SideA.ViewData.Trade Date'].astype(str)
test_file['SideB.ViewData.Trade Date'] = test_file['SideB.ViewData.Trade Date'].astype(str)
test_file['SideA.ViewData.Settle Date'] = test_file['SideA.ViewData.Settle Date'].astype(str)
test_file['SideB.ViewData.Settle Date'] = test_file['SideB.ViewData.Settle Date'].astype(str)
test_file['SideA.ViewData.Fund'] = test_file['SideA.ViewData.Fund'].astype(str)
test_file['SideB.ViewData.Fund'] = test_file['SideB.ViewData.Fund'].astype(str)
# In[4987]:
#test_file[['SideA.ViewData.ISIN','SideB.ViewData.ISIN']]
values_ISIN_A_Side = test_file['SideA.ViewData.ISIN'].values
values_ISIN_B_Side = test_file['SideB.ViewData.ISIN'].values
#test_file['ISIN_match'] = vec_equals_fun(values_ISIN_A_Side,values_ISIN_B_Side)
values_CUSIP_A_Side = test_file['SideA.ViewData.CUSIP'].values
values_CUSIP_B_Side = test_file['SideB.ViewData.CUSIP'].values
#
# values_CUSIP_A_Side = test_file['SideA.ViewData.Currency'].values
# values_CUSIP_B_Side = test_file['SideB.ViewData.Currency'].values
values_Currency_match_A_Side = test_file['SideA.ViewData.Currency'].values
values_Currency_match_B_Side = test_file['SideA.ViewData.Currency'].values
values_Trade_Date_match_A_Side = test_file['SideA.ViewData.Trade Date'].values
values_Trade_Date_match_B_Side = test_file['SideB.ViewData.Trade Date'].values
values_Settle_Date_match_A_Side = test_file['SideA.ViewData.Settle Date'].values
values_Settle_Date_match_B_Side = test_file['SideB.ViewData.Settle Date'].values
values_Fund_match_A_Side = test_file['SideA.ViewData.Fund'].values
values_Fund_match_B_Side = test_file['SideB.ViewData.Fund'].values
test_file['ISIN_match'] = vec_equals_fun(values_ISIN_A_Side,values_ISIN_B_Side)
test_file['CUSIP_match'] = vec_equals_fun(values_CUSIP_A_Side,values_CUSIP_B_Side)
test_file['Currency_match'] = vec_equals_fun(values_Currency_match_A_Side,values_Currency_match_B_Side)
test_file['Trade_Date_match'] = vec_equals_fun(values_Trade_Date_match_A_Side,values_Trade_Date_match_B_Side)
test_file['Settle_Date_match'] = vec_equals_fun(values_Settle_Date_match_A_Side,values_Settle_Date_match_B_Side)
test_file['Fund_match'] = vec_equals_fun(values_Fund_match_A_Side,values_Fund_match_B_Side)
# In[4988]:
#test_file['ISIN_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.ISIN']==x['SideB.ViewData.ISIN'] else 0, axis=1)
#test_file['CUSIP_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.CUSIP']==x['SideB.ViewData.CUSIP'] else 0, axis=1)
#test_file['Currency_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.Currency']==x['SideB.ViewData.Currency'] else 0, axis=1)
# In[4989]:
#test_file['Trade_Date_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.Trade Date']==x['SideB.ViewData.Trade Date'] else 0, axis=1)
#test_file['Settle_Date_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.Settle Date']==x['SideB.ViewData.Settle Date'] else 0, axis=1)
#test_file['Fund_match'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.Fund']==x['SideB.ViewData.Fund'] else 0, axis=1)
# In[4990]:
test_file['Amount_diff_1'] = test_file['SideA.ViewData.Accounting Net Amount'] - test_file['SideB.ViewData.B-P Net Amount']
test_file['Amount_diff_2'] = test_file['SideB.ViewData.Accounting Net Amount'] - test_file['SideA.ViewData.B-P Net Amount']
# In[4991]:
#test_file = pd.read_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/OakTree/X_test_files_after_loop/meo_testing_HST_RecData_379_06_19_2020_test_file_with_ID.csv')
# In[4992]:
#test_file = test_file.drop('Unnamed: 0',1)
# In[4993]:
test_file['Trade_date_diff'] = (pd.to_datetime(test_file['SideA.ViewData.Trade Date']) - pd.to_datetime(test_file['SideB.ViewData.Trade Date'])).dt.days
test_file['Settle_date_diff'] = (pd.to_datetime(test_file['SideA.ViewData.Settle Date']) - pd.to_datetime(test_file['SideB.ViewData.Settle Date'])).dt.days
# In[4994]:
# In[4995]:
############ Fund match new ########
values_Fund_match_A_Side = test_file['SideA.ViewData.Fund'].values
values_Fund_match_B_Side = test_file['SideB.ViewData.Fund'].values
#test_file['ISIN_match'] = vec_(values_ISIN_A_Side,values_ISIN_B_Side)
#test_file['SideA.ViewData.Fund'] = test_file.apply(lambda x : fundmatch(x['SideA.ViewData.Fund']), axis=1)
test_file['SideA.ViewData.Fund'] = vec_fund_match(values_Fund_match_A_Side)
#test_file['SideB.ViewData.Fund'] = test_file.apply(lambda x : fundmatch(x['SideB.ViewData.Fund']), axis=1)
test_file['SideB.ViewData.Fund'] = vec_fund_match(values_Fund_match_B_Side)
# In[ ]:
# In[4996]:
#test_file['SideA.ViewData.Fund']
# In[4997]:
#test_file['SideB.ViewData.Transaction Type'] = test_file['SideB.ViewData.Transaction Type'].apply(lambda x : mhreplaced(x))
# In[4998]:
#test_file['SideA.ViewData.Transaction Type'] = test_file['SideA.ViewData.Transaction Type'].apply(lambda x : mhreplaced(x))
# In[4999]:
##############
values_transaction_type_match_A_Side = test_file['SideA.ViewData.Transaction Type'].values
values_transaction_type_match_B_Side = test_file['SideB.ViewData.Transaction Type'].values
#test_file['ISIN_match'] = vec_(values_ISIN_A_Side,values_ISIN_B_Side)
#test_file['SideA.ViewData.Fund'] = test_file.apply(lambda x : fundmatch(x['SideA.ViewData.Fund']), axis=1)
test_file['SideA.ViewData.Transaction Type'] = vec_tt_match(values_transaction_type_match_A_Side)
#test_file['SideB.ViewData.Fund'] = test_file.apply(lambda x : fundmatch(x['SideB.ViewData.Fund']), axis=1)
test_file['SideB.ViewData.Transaction Type'] = vec_tt_match(values_transaction_type_match_B_Side)
# In[5000]:
test_file['ViewData.Combined Transaction Type'] = test_file['SideA.ViewData.Transaction Type'].astype(str) + test_file['SideB.ViewData.Transaction Type'].astype(str)
# In[5001]:
#train_full_new1['ViewData.Combined Transaction Type'] = train_full_new1['SideA.ViewData.Transaction Type'].astype(str) + train_full_new1['SideB.ViewData.Transaction Type'].astype(str)
test_file['ViewData.Combined Fund'] = test_file['SideA.ViewData.Fund'].astype(str) + test_file['SideB.ViewData.Fund'].astype(str)
# In[ ]:
# In[5002]:
values_ISIN_A_Side = test_file['SideA.ViewData.ISIN'].values
values_ISIN_B_Side = test_file['SideB.ViewData.ISIN'].values
test_file['SideA.ISIN_NA'] = vec_nan_fun(values_ISIN_A_Side)
test_file['SideB.ISIN_NA'] = vec_nan_fun(values_ISIN_B_Side)
#test_file['SideA.ISIN_NA'] = test_file.apply(lambda x: 1 if x['SideA.ViewData.ISIN']=='nan' else 0, axis=1)
#test_file['SideB.ISIN_NA'] = test_file.apply(lambda x: 1 if x['SideB.ViewData.ISIN']=='nan' else 0, axis=1)
# In[5666]:
# In[ ]:
# In[5669]:
values_ISIN_A_Side = test_file['SideA.ViewData.ISIN'].values
values_ISIN_B_Side = test_file['SideB.ViewData.ISIN'].values
values_CUSIP_A_Side = test_file['SideA.ViewData.CUSIP'].values
values_CUSIP_B_Side = test_file['SideB.ViewData.CUSIP'].values
# Compute each side's key-NAN flag and common key once, then unpack
b_key_nan, b_common_key = vec_a_key_match_fun(values_CUSIP_B_Side, values_ISIN_B_Side)
a_key_nan, a_common_key = vec_b_key_match_fun(values_CUSIP_A_Side, values_ISIN_A_Side)
test_file['SideB.ViewData.key_NAN'] = b_key_nan
test_file['SideB.ViewData.Common_key'] = b_common_key
test_file['SideA.ViewData.key_NAN'] = a_key_nan
test_file['SideA.ViewData.Common_key'] = a_common_key
# In[ ]:
#test_file[['SideB.ViewData.key_NAN','SideB.ViewData.Common_key']] = test_file.apply(lambda x: b_keymatch(x['SideB.ViewData.CUSIP'], x['SideB.ViewData.ISIN']), axis=1)
#test_file[['SideA.ViewData.key_NAN','SideA.ViewData.Common_key']] = test_file.apply(lambda x: a_keymatch(x['SideA.ViewData.CUSIP'],x['SideA.ViewData.ISIN']), axis=1)
# In[ ]:
values_key_NAN_B_Side = test_file['SideB.ViewData.key_NAN'].values
values_key_NAN_A_Side = test_file['SideA.ViewData.key_NAN'].values
test_file['All_key_nan'] = vec_nan_equal_fun(values_key_NAN_B_Side,values_key_NAN_A_Side )
#test_file['All_key_nan'] = test_file.apply(lambda x: 1 if x['SideB.ViewData.key_NAN']==1 and x['SideA.ViewData.key_NAN']==1 else 0, axis=1)
# In[5005]:
test_file['SideB.ViewData.Common_key'] = test_file['SideB.ViewData.Common_key'].astype(str)
test_file['SideA.ViewData.Common_key'] = test_file['SideA.ViewData.Common_key'].astype(str)
values_Common_key_B_Side = test_file['SideB.ViewData.Common_key'].values
values_Common_key_A_Side = test_file['SideA.ViewData.Common_key'].values
values_All_key_NAN = test_file['All_key_nan'].values
#values_accounting_nan = np.where((values_CUSIP_B_Side == 'nan') & (values_ISIN_B_Side == 'nan'),1,0)
#values_b_common_key = np.where((values_CUSIP_B_Side == 'nan') & (values_ISIN_B_Side == 'nan'),'NA',
# np.where((values_CUSIP_B_Side != 'nan') & (values_ISIN_B_Side == 'nan'), values_CUSIP_B_Side,
# np.where((values_CUSIP_B_Side == 'nan') & (values_ISIN_B_Side != 'nan'),values_ISIN_B_Side,values_ISIN_B_Side)))
test_file['new_key_match']= vec_new_key_match_fun(values_Common_key_B_Side,values_Common_key_A_Side,values_All_key_NAN)
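The commented `np.where` chains above sketch the key-coalescing rule behind `Common_key`. A self-contained illustration (hypothetical CUSIP/ISIN values) of flagging missing keys and coalescing to a common key, assuming the string 'nan' marks a missing identifier as elsewhere in this pipeline:

```python
import numpy as np

# Hypothetical CUSIP/ISIN arrays; the string 'nan' marks a missing identifier,
# matching the convention used elsewhere in this pipeline
cusip = np.array(['037833100', 'nan', 'nan'])
isin = np.array(['nan', 'US0378331005', 'nan'])

# Flag rows where both identifiers are missing
key_nan = np.where((cusip == 'nan') & (isin == 'nan'), 1, 0)

# Coalesce to a common key: prefer ISIN when present, else CUSIP, else 'NA'
common_key = np.where(key_nan == 1, 'NA',
                      np.where(isin != 'nan', isin, cusip))

print(key_nan.tolist())     # [0, 0, 1]
print(common_key.tolist())  # ['037833100', 'US0378331005', 'NA']
```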
# In[5006]:
#test_file['new_key_match'] = test_file.apply(lambda x: 1 if x['SideB.ViewData.Common_key']==x['SideA.ViewData.Common_key'] and x['All_key_nan']==0 else 0, axis=1)
# In[5007]:
# In[5008]:
model_cols = [
# 'SideA.ViewData.Accounting Net Amount',
'SideA.ViewData.B-P Net Amount',
'SideA.ViewData.Price',
'SideA.ViewData.Quantity',
'SideB.ViewData.Accounting Net Amount',
# 'SideB.ViewData.B-P Net Amount',
'SideB.ViewData.Price',
'SideB.ViewData.Quantity',
'Trade_Date_match',
'Settle_Date_match',
# 'Fund_match',
'Amount_diff_2',
'Trade_date_diff',
'Settle_date_diff',
'SideA.ISIN_NA',
'SideB.ISIN_NA',
'ViewData.Combined Fund',
'ViewData.Combined Transaction Type',
'All_key_nan',
'new_key_match',
'SideA.ViewData._ID',
'SideB.ViewData._ID',
'SideB.ViewData.Status',
'SideB.ViewData.BreakID_B_side',
'SideA.ViewData.Status',
'SideA.ViewData.BreakID_A_side',
'SideB.ViewData.Side0_UniqueIds',
'SideA.ViewData.Side1_UniqueIds']
# In[4933]:
test_file.to_csv(filepaths_X_test[i])
# In[930]:
#test_file['SideA.ViewData.BreakID_A_side'].value_counts()
# In[4299]:
print('Done till X_Test creation')
print(datetime.now())
#test_file = pd.read_csv("//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/X_Test/x_test_125_2020-06-8.csv")
#
#
## In[4300]:
#
#
#test_file = test_file.drop('Unnamed: 0',1)
# ## Test file served into the model
# In[5009]:
X_test = test_file[model_cols]
# In[5010]:
X_test = X_test.reset_index()
X_test = X_test.drop('index',1)
# In[5011]:
X_test = X_test.fillna(0)
# In[5012]:
# In[5013]:
X_test = X_test.drop_duplicates()
X_test = X_test.reset_index()
X_test = X_test.drop('index',1)
# In[5014]:
# ## Model Pickle file import
# In[5015]:
# In[5016]:
#filename = 'Oak_W125_model_with_umb.sav'
#filename = '125_with_umb_without_des_and_many_to_many.sav'
#filename = '125_with_umb_and_price_without_des_and_many_to_many_tdsd2.sav'
filename = 'Soros_new_model_V1.sav'
clf = pickle.load(open(filename, 'rb'))
# In[5017]:
# In[5018]:
# ## Predictions
# In[5019]:
# Columns carried along for bookkeeping only; dropped before scoring
id_cols = ['SideB.ViewData.Status','SideB.ViewData.BreakID_B_side',
           'SideA.ViewData.Status','SideA.ViewData.BreakID_A_side',
           'SideA.ViewData._ID','SideB.ViewData._ID',
           'SideB.ViewData.Side0_UniqueIds','SideA.ViewData.Side1_UniqueIds']
X_test_features = X_test.drop(id_cols,1)
# Actual class predictions
rf_predictions = clf.predict(X_test_features)
# Probabilities for each class: one predict_proba call, sliced per class column
rf_probs_all = clf.predict_proba(X_test_features)
rf_probs = rf_probs_all[:, 1]
# In[5020]:
probability_class_0 = rf_probs_all[:, 0]
probability_class_1 = rf_probs_all[:, 1]
probability_class_2 = rf_probs_all[:, 2]
probability_class_3 = rf_probs_all[:, 3]
#probability_class_4 = clf1.predict_proba(X_test.drop(['SideB.ViewData.Status','SideB.ViewData.BreakID_B_side', 'SideA.ViewData.Status','SideA.ViewData.BreakID_A_side','SideA.ViewData._ID','SideB.ViewData._ID','SideB.ViewData.Side0_UniqueIds','SideA.ViewData.Side1_UniqueIds'],1))[:, 4]
# In[5021]:
X_test['Predicted_action'] = rf_predictions
#X_test['Predicted_action_probabilty'] = rf_probs
X_test['probability_No_pair'] = probability_class_0
#X_test['probability_Partial_match'] = probability_class_1
#X_test['probability_UMB'] = probability_class_1
X_test['probability_UMB'] = probability_class_1
X_test['probability_UMR'] = probability_class_2
X_test['probability_UMT'] = probability_class_3
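Because the probability columns above are taken by position, it's worth noting that scikit-learn orders `predict_proba` columns by `clf.classes_`, which is sorted lexicographically for string labels. A small self-contained check on synthetic data with hypothetical labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data with hypothetical labels: predict_proba columns follow
# clf.classes_, which is sorted lexicographically for string labels -- worth
# checking before hard-coding positional slices like [:, 0] ... [:, 3]
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array(['No-Pair', 'UMB', 'UMR', 'UMT'])
clf_demo = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

print(clf_demo.classes_.tolist())  # ['No-Pair', 'UMB', 'UMR', 'UMT']
probs = clf_demo.predict_proba(X)  # one column per entry of classes_
```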
# In[5022]:
# ## Prediction Table
# In[5023]:
X_test.loc[(X_test['Predicted_action']=='UMR_One_to_One') & (X_test['Amount_diff_2']!=0),'Predicted_action'] = 'Unrecognized'
# In[5024]:
# In[5025]:
# In[5026]:
###### Probability filter for UMT and UMB ################
#X_test.loc[(X_test['Predicted_action']=='UMT_One_to_One') & (X_test['probability_UMT']<0.90) & (X_test['probability_No_pair']>0.05),'Predicted_action'] = 'No-Pair'
#X_test.loc[(X_test['Predicted_action']=='UMB_One_to_One') & (X_test['probability_UMB']<0.75) & (X_test['probability_No_pair']>0.2),'Predicted_action'] = 'No-Pair'
#X_test.loc[(X_test['Predicted_action']=='UMR_One_to_One') & (X_test['probability_UMR']<0.90) & (X_test['probability_No_pair']>0.05),'Predicted_action'] = 'No-Pair'
#X_test.loc[(X_test['Predicted_action']=='No-Pair') & (X_test['probability_No_pair']<0.9) & (X_test['probability_UMB']>0.05),'Predicted_action'] = 'UMB_One_to_One'
#X_test.loc[(X_test['Predicted_action']=='No-Pair') & (X_test['probability_No_pair']<0.95) & (X_test['probability_UMB']>0.05),'Predicted_action'] = 'UMB_One_to_One'
#X_test.loc[(X_test['Predicted_action']=='UMR_One_to_One') & (X_test['Settle_date_diff']>4),'Predicted_action'] = 'No-Pair'
#X_test.loc[(X_test['Predicted_action']=='UMR_One_to_One') & (X_test['Settle_date_diff']<-4),'Predicted_action'] = 'No-Pair'
# In[5027]:
#X_test.loc[(X_test['SideB.ViewData.Status']=='SDB') & (X_test['SideA.ViewData.Status']=='OB') & (X_test['Predicted_action']=='No-Pair'),'Predicted_action'] = 'SDB/Open Break'
# In[5028]:
prediction_table = X_test.groupby('SideB.ViewData.BreakID_B_side')['Predicted_action'].unique().reset_index()
# In[5029]:
#prob1 = X_test.groupby('SideB.ViewData.BreakID_B_side')['probability_No_pair'].mean().reset_index()
# In[5030]:
prediction_table['len'] = prediction_table['Predicted_action'].str.len()
# In[5031]:
prediction_table['No_Pair_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'No-Pair' in x else 0)
# In[5032]:
prediction_table['UMB_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'UMB_One_to_One' in x else 0)
prediction_table['UMT_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'UMT_One_to_One' in x else 0)
prediction_table['UMR_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'UMR_One_to_One' in x else 0)
# In[5033]:
# In[5034]:
# In[5035]:
umr_array = X_test[X_test['Predicted_action']=='UMR_One_to_One'].groupby(['SideB.ViewData.BreakID_B_side'])['SideA.ViewData.BreakID_A_side'].unique().reset_index()
umt_array = X_test[X_test['Predicted_action']=='UMT_One_to_One'].groupby(['SideB.ViewData.BreakID_B_side'])['SideA.ViewData.BreakID_A_side'].unique().reset_index()
umb_array = X_test[X_test['Predicted_action']=='UMB_One_to_One'].groupby(['SideB.ViewData.BreakID_B_side'])['SideA.ViewData.BreakID_A_side'].unique().reset_index()
# In[5036]:
umr_array.columns = ['SideB.ViewData.BreakID_B_side', 'Predicted_UMR_array']
umt_array.columns = ['SideB.ViewData.BreakID_B_side', 'Predicted_UMT_array']
umb_array.columns = ['SideB.ViewData.BreakID_B_side', 'Predicted_UMB_array']
# In[5037]:
prediction_table = pd.merge(prediction_table,umr_array, on='SideB.ViewData.BreakID_B_side', how='left' )
prediction_table = pd.merge(prediction_table,umt_array, on='SideB.ViewData.BreakID_B_side', how='left' )
prediction_table = pd.merge(prediction_table,umb_array, on='SideB.ViewData.BreakID_B_side', how='left' )
# In[5038]:
#prediction_table
#X_test[X_test['SideB.ViewData.Side0_UniqueIds']=='2495_125897734_Advent Geneva']
# In[5039]:
prediction_table['Final_prediction'] = prediction_table.apply(lambda x: 'UMR_One_to_One' if x['UMR_flag']==1 else('UMT_One_to_One' if x['len']==1 and x['UMT_flag']==1 else('UMB_One_to_One' if x['len']==1 and x['UMB_flag']==1 else('No-Pair' if x['len']==1 else 'Undecided'))), axis=1)
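The nested conditional above can be hard to follow; a self-contained `np.select` sketch on a hypothetical mini table expresses the same precedence (UMR wins outright, then single-label UMT/UMB, then single-label No-Pair, otherwise Undecided):

```python
import numpy as np
import pandas as pd

# Hypothetical mini prediction table with the same flag columns as above
pt = pd.DataFrame({
    'len':      [1, 1, 1, 2, 1],
    'UMR_flag': [1, 0, 0, 0, 0],
    'UMT_flag': [0, 1, 0, 1, 0],
    'UMB_flag': [0, 0, 1, 0, 0],
})
# Conditions are evaluated in order: first match wins
conditions = [
    pt['UMR_flag'] == 1,
    (pt['len'] == 1) & (pt['UMT_flag'] == 1),
    (pt['len'] == 1) & (pt['UMB_flag'] == 1),
    pt['len'] == 1,
]
choices = ['UMR_One_to_One', 'UMT_One_to_One', 'UMB_One_to_One', 'No-Pair']
pt['Final_prediction'] = np.select(conditions, choices, default='Undecided')
print(pt['Final_prediction'].tolist())
# ['UMR_One_to_One', 'UMT_One_to_One', 'UMB_One_to_One', 'Undecided', 'No-Pair']
```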
# In[5040]:
# In[5042]:
prediction_table['UMT_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'UMT_One_to_One' in x else 0)
prediction_table['UMB_flag'] = prediction_table['Predicted_action'].apply(lambda x: 1 if 'UMB_One_to_One' in x else 0)
# In[5043]:
prediction_table.loc[(prediction_table['UMB_flag']==1) & (prediction_table['len']==2),'Final_prediction']='UMB_One_to_One'
prediction_table.loc[(prediction_table['UMT_flag']==1) & (prediction_table['len']==2),'Final_prediction']='UMT_One_to_One'
# In[5044]:
prediction_table.loc[(prediction_table['Final_prediction']=='Undecided') & (prediction_table['len']==2),'Final_prediction']='No-Pair/Unrecognized'
# In[5045]:
prediction_table.loc[(prediction_table['Final_prediction']=='Undecided') & (prediction_table['UMT_flag']==1),'Final_prediction']='UMT_One_to_One'
# In[5046]:
prediction_table.loc[(prediction_table['Final_prediction']=='Undecided') & (prediction_table['UMB_flag']==1),'Final_prediction']='UMB_One_to_One'
# In[5047]:
# In[5049]:
#X_test[(X_test['Predicted_action']=='UMR_One_to_One') & (X_test['SideB.ViewData.BreakID_B_side']==1346769635)]
# In[5050]:
prediction_table.loc[prediction_table['Final_prediction']=='UMR_One_to_One', 'Final_predicted_break'] = prediction_table.loc[prediction_table['Final_prediction']=='UMR_One_to_One', 'Predicted_UMR_array']
# In[5051]:
prediction_table.loc[prediction_table['Final_prediction']=='UMT_One_to_One', 'Final_predicted_break'] = prediction_table.loc[prediction_table['Final_prediction']=='UMT_One_to_One', 'Predicted_UMT_array']
prediction_table.loc[prediction_table['Final_prediction']=='UMB_One_to_One', 'Final_predicted_break'] = prediction_table.loc[prediction_table['Final_prediction']=='UMB_One_to_One', 'Predicted_UMB_array']
#prediction_table.loc[prediction_table['Final_prediction']=='No-Pair', 'Final_predicted_break'] = prediction_table.loc[prediction_table['Final_prediction']=='No-Pair', '']
# In[5052]:
prediction_table['predicted_break_len'] = prediction_table['Final_predicted_break'].str.len()
# In[5053]:
# In[5054]:
#prediction_table[(prediction_table['predicted_break_len']>1) & (prediction_table['Final_prediction']=='UMT_One_to_One')]
# In[5055]:
#prediction_table[['SideB.ViewData.BreakID_B_side', 'Final_prediction', 'Final_predicted_break']]
# In[5056]:
X_test['prob_key'] = X_test['SideB.ViewData.BreakID_B_side'].astype(str) + X_test['Predicted_action']
prediction_table['prob_key'] = prediction_table['SideB.ViewData.BreakID_B_side'].astype(str) + prediction_table['Final_prediction']
# In[5057]:
user_prob = X_test.groupby('prob_key')[['probability_UMR','probability_UMT','probability_UMB']].max().reset_index()
open_prob = X_test.groupby('prob_key')['probability_No_pair'].mean().reset_index()
# In[5058]:
#prediction_table = prediction_table.drop(,1)
prediction_table = pd.merge(prediction_table,user_prob, on='prob_key', how='left')
prediction_table = pd.merge(prediction_table,open_prob, on='prob_key', how='left')
# In[5059]:
prediction_table = prediction_table.drop('prob_key',1)
# In[5060]:
# In[5062]:
prediction_table = pd.merge(prediction_table, X_test[['SideB.ViewData.BreakID_B_side','SideA.ViewData._ID','SideB.ViewData._ID']].drop_duplicates(['SideB.ViewData.BreakID_B_side','SideB.ViewData._ID']), on ='SideB.ViewData.BreakID_B_side', how='left')
# In[5063]:
# ## Merging PB side Break ID's
# In[5065]:
#pb_break_ids = prediction_table[~prediction_table['Final_predicted_break'].isnull()][['Final_prediction','Final_predicted_break']]
# In[4358]:
#pb_break_ids = pb_break_ids.reset_index()
#pb_break_ids = pb_break_ids.drop('index',1)
# In[1706]:
#pb_break_ids['Final_predicted_break'] = pb_break_ids['Final_predicted_break'].apply(lambda x: str(x).replace("[",''))
#pb_break_ids['Final_predicted_break'] = pb_break_ids['Final_predicted_break'].apply(lambda x: str(x).replace("]",''))
# In[2444]:
#pb_break_ids['Final_predicted_break'].unique()
# In[1708]:
#id_list = []
#id_list2 = []
#for i in pb_break_ids['Final_predicted_break'].unique():
# id_list.append(i.split(' '))
#for j in np.concatenate(id_list,axis=0):
# if j!='':
# id_list2.append(j.replace("\n",''))
# In[1709]:
#new_ob_ids =[]
#
#for i in X_test['SideA.ViewData.BreakID_A_side'].astype(str).unique():
# if i not in np.array(id_list2,dtype="O"):
# new_ob_ids.append(i)
# In[1710]:
#prediction_table2 = pd.DataFrame(np.array(new_ob_ids))
# In[1711]:
#prediction_table2.columns = ['SideB.ViewData.BreakID_B_side']
# In[1712]:
#prediction_table2['Final_prediction'] = 'No-Pair'
# In[1713]:
#prediction_table2['Side'] = 'P-B Side'
# In[1714]:
#prediction_table['Side'] = 'Accounting Side'
# In[2074]:
#prediction_table3 = prediction_table
# In[2442]:
#prediction_table3 = pd.concat([prediction_table, prediction_table2], axis=0)
# In[1716]:
#prediction_table3 = prediction_table3.reset_index()
#prediction_table3 = prediction_table3.drop('index',1)
# In[1717]:
#prediction_table3 = prediction_table3[prediction_table.columns]
# In[2443]:
#prediction_table3[['SideB.ViewData.BreakID_B_side', 'Final_prediction', 'Final_predicted_break','Side']]
# In[1719]:
#ids_for_comment = prediction_table3[['SideB.ViewData.BreakID_B_side', 'Final_prediction', 'Final_predicted_break','Side']]
# In[2445]:
#ids_for_comment
# In[1721]:
#ids_for_comment.to_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/Input for comment prediction/prediction_table_testing_HST_RecData_125_1159652110_06-19-2020.csv')
# ## Merging with User Action Data
# In[5474]:
prediction_table3 = prediction_table
# In[5475]:
aua = pd.read_csv(filepaths_AUA[i])
# In[5476]:
#test_file.to_csv("//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/OakTree/JuneData/X_Test/x_test_2020-06-29.csv")
# In[5477]:
# In[5478]:
aua = aua[~((aua['LastPerformedAction']==0) & (aua['ViewData.Status']=='SDB'))]
aua = aua.reset_index()
aua = aua.drop('index',1)
# In[5479]:
# In[5480]:
aua = aua[aua['ViewData.Status'].isin(['UMR','UMB','UMT','OB','SDB','UCB'])]
aua = aua.reset_index()
aua = aua.drop('index',1)
# In[5481]:
# In[5483]:
if 'MetaData.0._ParentID' in aua.columns:
    aua_id_match = aua[['MetaData.0._ParentID','ViewData.Status','ViewData.Age','ViewData.BreakID','ViewData._ID','ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds']]
    print('MetaData.0._ParentID is present')
else:
    aua_id_match = aua[['ViewData.Status','ViewData.Age','ViewData.BreakID','ViewData._ID','ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds']]
    aua_id_match['MetaData.0._ParentID'] = np.nan
    print('MetaData.0._ParentID is absent')
# Set the order of columns
aua_id_match['MetaData.0._ParentID'] = aua_id_match['MetaData.0._ParentID'].astype(str)
aua_id_match = aua_id_match[['MetaData.0._ParentID','ViewData.Status','ViewData.Age','ViewData.BreakID','ViewData._ID','ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds']]
aua_id_match.columns = ['SideB.ViewData._ID','Actual_Status','ViewData.Age','ViewData.BreakID','AUA_ViewData._ID','ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds']
aua_id_match = aua_id_match.drop_duplicates()
aua_id_match = aua_id_match.reset_index()
aua_id_match = aua_id_match.drop('index',1)
########################################################################################################
aua_open_status = aua[['ViewData.BreakID','ViewData.Status']]
aua_open_status.columns = ['SideB.ViewData.BreakID_B_side','Actual_Status_Open']
aua_open_status = aua_open_status.drop_duplicates()
aua_open_status = aua_open_status.reset_index()
aua_open_status = aua_open_status.drop('index',1)
# In[5484]:
# In[5485]:
aua_open_status['SideB.ViewData.BreakID_B_side'] = aua_open_status['SideB.ViewData.BreakID_B_side'].astype(int).astype(str)
prediction_table3['SideB.ViewData.BreakID_B_side'] = prediction_table3['SideB.ViewData.BreakID_B_side'].astype(int).astype(str)
# In[5486]:
# In[5487]:
prediction_table3['SideB.ViewData._ID'] = prediction_table3['SideB.ViewData._ID'].fillna('Not_generated')
prediction_table3['SideA.ViewData._ID'] = prediction_table3['SideA.ViewData._ID'].fillna('Not_generated')
# In[5488]:
# In[5489]:
#aua_id_match['len_side0'] = aua_id_match.apply(lambda x: len(x['Actual_Status'].split(',')), axis=1)
#aua_id_match['len_side1'] = aua_id_match.apply(lambda x: len(x['Actual_Status'].split(',')), axis=1)
# In[5490]:
#aua_one_side = aua_id_match.groupby(['ViewData.Side1_UniqueIds'])['Actual_Status'].unique().reset_index()
#aua_zero_side = aua_id_match.groupby(['ViewData.Side0_UniqueIds'])['Actual_Status'].unique().reset_index()
# In[5491]:
aua_id_match['combined_flag'] = aua_id_match.apply(lambda x: 1 if 'Combined' in x['AUA_ViewData._ID'] else 0,axis=1)
# In[5492]:
#aua_id_match[''.sort_values(['ViewData.Side0_UniqueIds'])
# In[5493]:
aua_id_match1 = aua_id_match[aua_id_match['combined_flag']!=1]
aua_id_match1 = aua_id_match1.reset_index()
aua_id_match1 = aua_id_match1.drop('index',1)
# In[5494]:
side1_repeat = aua_id_match1['ViewData.Side1_UniqueIds'].value_counts().reset_index()
side0_repeat = aua_id_match1['ViewData.Side0_UniqueIds'].value_counts().reset_index()
# In[5495]:
# In[5497]:
repeat1_ids = side1_repeat[side1_repeat['ViewData.Side1_UniqueIds']>1]['index'].values
repeat0_ids = side0_repeat[side0_repeat['ViewData.Side0_UniqueIds']>1]['index'].values
aua_id_match1['1_repeat_flag'] = aua_id_match1['ViewData.Side1_UniqueIds'].isin(repeat1_ids).astype(int)
aua_id_match1['0_repeat_flag'] = aua_id_match1['ViewData.Side0_UniqueIds'].isin(repeat0_ids).astype(int)
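The repeat flags above test each row's id against the list of ids seen more than once. An equivalent vectorized idea, shown on hypothetical ids, maps each id to its frequency and flags counts above one:

```python
import pandas as pd

# Hypothetical ids: map each id to its frequency and flag those seen more
# than once -- a vectorized equivalent of a row-wise membership check
s = pd.Series(['a', 'b', 'a', 'c', 'b', 'a'])
repeat_flag = (s.map(s.value_counts()) > 1).astype(int)
print(repeat_flag.tolist())  # [1, 1, 1, 0, 1, 1]
```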
# In[5498]:
aua_id_match2 = aua_id_match1[~((aua_id_match1['1_repeat_flag']==1) & (aua_id_match1['Actual_Status']=='OB'))]
aua_id_match2 = aua_id_match2.reset_index()
aua_id_match2 = aua_id_match2.drop('index',1)
# In[5499]:
aua_id_match3 = aua_id_match2[~((aua_id_match2['0_repeat_flag']==1) & (aua_id_match2['Actual_Status']=='OB'))]
aua_id_match3 = aua_id_match3.reset_index()
aua_id_match3 = aua_id_match3.drop('index',1)
# In[5500]:
#aua_zero_side['len_side0'].value_counts()
#aua_open_status['SideB.ViewData.BreakID_B_side'].nunique()
# In[5501]:
#aua_sub99[aua_sub99['ViewData.Side0_UniqueIds'] == '789_125897734_Advent Geneva']
# In[5502]:
# In[5505]:
pb_side = X_test.groupby('SideA.ViewData.BreakID_A_side')['Predicted_action'].unique().reset_index()
# In[5506]:
pb_side['len'] = pb_side['Predicted_action'].str.len()
# In[5507]:
pb_side['No_Pair_flag'] = pb_side.apply(lambda x: 1 if 'No-Pair' in x['Predicted_action'] else 0, axis=1)
# In[5508]:
pb_side_open_ids = pb_side[(pb_side['len']==1) & (pb_side['No_Pair_flag']==1)]['SideA.ViewData.BreakID_A_side']
# In[ ]:
# In[5509]:
# In[5510]:
# In[5511]:
prediction_table_new = pd.merge(prediction_table3, aua_id_match3, on='SideB.ViewData._ID', how='left')
# In[5512]:
# In[ ]:
# In[5514]:
aua_id_match4 = aua_id_match3.rename(columns = {'ViewData.BreakID': 'SideB.ViewData.BreakID_B_side'})
aua_id_match4 = aua_id_match4.rename(columns = {'Actual_Status': 'Actual_Status_Open'})
# In[5515]:
aua_id_match4['SideB.ViewData.BreakID_B_side'] = aua_id_match4['SideB.ViewData.BreakID_B_side'].astype(str)
# In[5516]:
#prediction_table_new = pd.merge(prediction_table_new ,aua_open_status, on='SideB.ViewData.BreakID_B_side', how='left')
prediction_table_new = pd.merge(prediction_table_new ,aua_id_match4[['SideB.ViewData.BreakID_B_side','Actual_Status_Open']], on='SideB.ViewData.BreakID_B_side', how='left')
# In[5517]:
#prediction_table_new
# In[5518]:
# In[5519]:
prediction_table_new.loc[prediction_table_new['Final_prediction']=='No-Pair/Unrecognized','Final_prediction'] = 'No-Pair'
# In[5520]:
prediction_table_new.loc[prediction_table_new['Actual_Status'].isnull()]
# In[5521]:
prediction_table_new.loc[~prediction_table_new['Actual_Status_Open'].isnull(),'Actual_Status'] = prediction_table_new.loc[~prediction_table_new['Actual_Status_Open'].isnull(),'Actual_Status_Open']
# In[5522]:
prediction_table_new.loc[~prediction_table_new['Actual_Status_Open'].isnull(),:]
# In[5523]:
# In[5524]:
prediction_table_new.loc[prediction_table_new['Actual_Status']=='OB','Actual_Status'] = 'Open Break'
# In[5525]:
prediction_table_new.loc[prediction_table_new['Final_prediction']=='No-Pair','Final_prediction'] = 'Open Break'
prediction_table_new.loc[prediction_table_new['Final_prediction']=='UMR_One_to_One','Final_prediction'] = 'UMR'
prediction_table_new.loc[prediction_table_new['Final_prediction']=='UMT_One_to_One','Final_prediction'] = 'UMT'
prediction_table_new.loc[prediction_table_new['Final_prediction']=='UMB_One_to_One','Final_prediction'] = 'UMB'
# In[5526]:
# In[5527]:
prediction_table_new = prediction_table_new[~prediction_table_new['Actual_Status'].isnull()]
prediction_table_new = prediction_table_new.reset_index()
prediction_table_new = prediction_table_new.drop('index',1)
# ## Final Actual vs Predicted Table - Process Initiation
# In[5528]:
meo = pd.read_csv(filepaths_MEO[i],usecols=new_cols)
# In[5529]:
meo = meo[['ViewData.BreakID','ViewData.Side1_UniqueIds','ViewData.Side0_UniqueIds','ViewData.Age','ViewData.Status']].drop_duplicates()
# In[5530]:
meo['key'] = meo['ViewData.Side0_UniqueIds'].astype(str) + meo['ViewData.Side1_UniqueIds'].astype(str)
# In[5531]:
aua_id_match5 = aua_id_match3.rename(columns ={'Actual_Status': 'ViewData.Status'})
# In[5532]:
aua_sub = aua_id_match5[['ViewData.Side1_UniqueIds','ViewData.Side0_UniqueIds','ViewData.Age','ViewData.Status']].drop_duplicates()
# In[5533]:
aua_sub['key'] = aua_sub['ViewData.Side0_UniqueIds'].astype(str) + aua_sub['ViewData.Side1_UniqueIds'].astype(str)
# In[5534]:
prediction_table_new['ViewData.BreakID'] = prediction_table_new['SideB.ViewData.BreakID_B_side']
prediction_table_new['ViewData.BreakID'] = prediction_table_new['ViewData.BreakID'].astype(str)
# In[5535]:
meo['ViewData.BreakID'] = meo['ViewData.BreakID'].astype(str)
# In[5536]:
prediction_table_new1 = pd.merge(prediction_table_new, meo[['ViewData.BreakID','key']], on='ViewData.BreakID', how='left')
# In[5537]:
# In[5538]:
# In[5539]:
aua_sub1 = pd.merge(aua_sub, prediction_table_new1[['key','Final_prediction','probability_UMR','probability_No_pair','probability_UMT','probability_UMB','Final_predicted_break']], on='key', how='left')
# In[5540]:
# In[5541]:
no_open = prediction_table_new1[prediction_table_new1['Final_prediction']!='Open Break'].reset_index()
no_open = no_open.drop('index',1)
no_open['key'] = no_open['ViewData.Side0_UniqueIds'].astype(str) + no_open['ViewData.Side1_UniqueIds'].astype(str)
# In[5542]:
#aua_sub1[aua_sub1['Final_prediction']=='UMR_One_to_One']
X_test['key'] = X_test['SideB.ViewData.Side0_UniqueIds'].astype(str) + X_test['SideA.ViewData.Side1_UniqueIds'].astype(str)
# In[5543]:
# In[5544]:
aua_sub = pd.merge(aua_sub1, no_open[['key','Final_prediction']], on='key', how='left')
# In[5545]:
aua_sub11 = aua_sub1[aua_sub1['Final_prediction']=='Open Break']
aua_sub11 = aua_sub11.reset_index()
aua_sub11 = aua_sub11.drop('index',1)
# In[5546]:
aua_sub11['probability_UMR'].fillna(0.00355,inplace=True)
aua_sub11['probability_UMB'].fillna(0.003124,inplace=True)
aua_sub11['probability_UMT'].fillna(0.00255,inplace=True)
aua_sub11['probability_No_pair'].fillna(0.99034,inplace=True)
# In[5547]:
aua_sub22 = aua_sub1[aua_sub1['Final_prediction']!='Open Break'][['ViewData.Side1_UniqueIds', 'ViewData.Side0_UniqueIds', 'ViewData.Age','ViewData.Status', 'key']]
aua_sub22 = aua_sub22.reset_index()
aua_sub22 = aua_sub22.drop('index',1)
aua_sub22 = pd.merge(aua_sub22, no_open[['key','Final_prediction','probability_UMR','probability_No_pair','probability_UMT','probability_UMB','Final_predicted_break']], on='key', how='left')
aua_sub22 = aua_sub22.reset_index()
aua_sub22 = aua_sub22.drop('index',1)
# In[5548]:
aua_sub33 = pd.concat([aua_sub11,aua_sub22], axis=0)
aua_sub33 = aua_sub33.reset_index()
aua_sub33 = aua_sub33.drop('index',1)
# In[5549]:
aua_sub33['ViewData.Side0_UniqueIds'] = aua_sub33['ViewData.Side0_UniqueIds'].astype(str)
aua_sub33['ViewData.Side1_UniqueIds'] = aua_sub33['ViewData.Side1_UniqueIds'].astype(str)
# In[5550]:
aua_sub33['len_side0'] = aua_sub33.apply(lambda x: len(x['ViewData.Side0_UniqueIds'].split(',')), axis=1)
aua_sub33['len_side1'] = aua_sub33.apply(lambda x: len(x['ViewData.Side1_UniqueIds'].split(',')), axis=1)
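Counting comma-separated ids by splitting row-wise, as above, can also be done with a vectorized separator count; a small sketch with hypothetical id strings:

```python
import pandas as pd

# Hypothetical comma-joined id strings; counting separators avoids splitting
# each row into a list just to take its length
s = pd.Series(['id1', 'id1,id2', 'id1,id2,id3'])
n_ids = s.str.count(',') + 1
print(n_ids.tolist())  # [1, 2, 3]
```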
# In[5551]:
aua_sub33.loc[(aua_sub33['len_side0']>1) & (aua_sub33['len_side1']==1) & (aua_sub33['ViewData.Status']=='OB') ,'Type'] = 'One_side_aggregation'
aua_sub33.loc[(aua_sub33['len_side0']>1) & (aua_sub33['len_side1']==1) & (aua_sub33['ViewData.Status']!='OB') ,'Type'] = 'One_to_Many'
aua_sub33.loc[(aua_sub33['len_side0']==1) & (aua_sub33['len_side1']>1) & (aua_sub33['ViewData.Status']=='OB') ,'Type'] = 'One_side_aggregation'
aua_sub33.loc[(aua_sub33['len_side0']==1) & (aua_sub33['len_side1']>1) & (aua_sub33['ViewData.Status']!='OB') ,'Type'] = 'One_to_Many'
aua_sub33.loc[(aua_sub33['len_side0']>1) & (aua_sub33['len_side1']>1) & (aua_sub33['ViewData.Status']!='OB') ,'Type'] = 'Many_to_Many'
aua_sub33.loc[(aua_sub33['len_side0']==1) & (aua_sub33['len_side1']==1) ,'Type'] = 'One_to_One/Open'
# In[5552]:
aua_sub44 = aua_sub33[(aua_sub33['ViewData.Status']=='UMB') & (aua_sub33['ViewData.Age']>1)]
aua_sub44 = aua_sub44.reset_index()
aua_sub44 = aua_sub44.drop('index',1)
# In[5553]:
aua_sub44['Final_prediction'].fillna('UMB-Carry-Forward',inplace= True)
aua_sub44['probability_UMR'].fillna(0.0001,inplace= True)
aua_sub44['probability_UMB'].fillna(0.9998,inplace= True)
aua_sub44['probability_UMT'].fillna(0.0000,inplace= True)
aua_sub44['probability_No_pair'].fillna(0.0000,inplace= True)
# In[5554]:
aua_sub55 = aua_sub33[~((aua_sub33['ViewData.Status']=='UMB') & (aua_sub33['ViewData.Age']>1))]
aua_sub55 = aua_sub55.reset_index()
aua_sub55 = aua_sub55.drop('index',1)
# In[5555]:
aua_sub66 = pd.concat([aua_sub55,aua_sub44], axis=0)
aua_sub66 = aua_sub66.reset_index()
aua_sub66 = aua_sub66.drop('index',1)
# In[5556]:
aua_sub66.loc[(aua_sub66['ViewData.Status']=='UMB') & (aua_sub66['ViewData.Age']>1),'ViewData.Status'] = 'UMB-Carry-Forward'
aua_sub66.loc[(aua_sub66['ViewData.Status']=='OB'),'ViewData.Status'] = 'Open Break'
# In[5557]:
# In[ ]:
# ## Read No-Pair Id File
# In[5558]:
#no_pair_id_data = pd.read_csv("//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/X_Test/no_pair_ids_125_2020-06-8.csv")
# In[5559]:
no_pair_ids = no_pair_ids_df['filter_key'].unique()
# In[5560]:
aua_sub66.loc[aua_sub66['ViewData.Side1_UniqueIds'].isin(no_pair_ids),'Final_prediction'] = aua_sub66.loc[aua_sub66['ViewData.Side1_UniqueIds'].isin(no_pair_ids),'ViewData.Status']
aua_sub66.loc[aua_sub66['ViewData.Side0_UniqueIds'].isin(no_pair_ids),'Final_prediction'] = aua_sub66.loc[aua_sub66['ViewData.Side0_UniqueIds'].isin(no_pair_ids),'ViewData.Status']
# In[5561]:
# In[5672]:
#aua_sub66
# In[5563]:
pb_side_grp = X_test.groupby(['SideA.ViewData.Side1_UniqueIds'])['Predicted_action'].unique().reset_index()
# In[5564]:
# In[5565]:
pb_side_grp_status = X_test.groupby(['SideA.ViewData.Side1_UniqueIds'])['SideA.ViewData.Status'].unique().reset_index()
pb_side_grp_status['SideA.ViewData.Status'] = pb_side_grp_status['SideA.ViewData.Status'].apply(lambda x: str(x).replace("[",""))
pb_side_grp_status['SideA.ViewData.Status'] = pb_side_grp_status['SideA.ViewData.Status'].apply(lambda x: str(x).replace("]",""))
pb_side_grp['len'] = pb_side_grp['Predicted_action'].str.len()
pb_side_grp['No_pair_flag'] = pb_side_grp.apply(lambda x: 1 if x['len'] == 1 and "No-Pair" in x['Predicted_action'] else 0, axis=1)
# In[5566]:
pb_side_grp = pd.merge(pb_side_grp,pb_side_grp_status, on='SideA.ViewData.Side1_UniqueIds', how='left')
# In[5567]:
#pb_side_grp['SideA.ViewData.Status'].value_counts()
# In[5568]:
#pb_side_grp = pd.merge(pb_side_grp,pb_side_grp_status, on='SideA.ViewData.Side1_UniqueIds', how='left')
pb_side_grp['Final_status'] = pb_side_grp.apply(lambda x: "Open Break" if x['SideA.ViewData.Status']=="'OB'" else("SDB" if x['SideA.ViewData.Status']=="'SDB'" else "NA"),axis=1)
pb_side_grp = pb_side_grp.rename(columns = {'SideA.ViewData.Side1_UniqueIds':'ViewData.Side1_UniqueIds'})
pb_side_grp1 = pb_side_grp[pb_side_grp['No_pair_flag']==1]
pb_side_grp1 = pb_side_grp1.reset_index()
pb_side_grp1 = pb_side_grp1.drop('index',1)
# In[5569]:
aua_sub77 = pd.merge(aua_sub66 ,pb_side_grp1[['ViewData.Side1_UniqueIds','Final_status']], on ='ViewData.Side1_UniqueIds',how='left')
# In[5570]:
aua_sub77.loc[(~aua_sub77['Final_status'].isnull()) & (aua_sub77['ViewData.Side0_UniqueIds']=='nan'),'Final_prediction'] = aua_sub77.loc[(~aua_sub77['Final_status'].isnull()) & (aua_sub77['ViewData.Side0_UniqueIds']=='nan'),'Final_status']
# In[5571]:
pb_side_grp_B = X_test.groupby(['SideB.ViewData.Side0_UniqueIds'])['Predicted_action'].unique().reset_index()
# In[5572]:
pb_side_grp_B_status = X_test.groupby(['SideB.ViewData.Side0_UniqueIds'])['SideB.ViewData.Status'].unique().reset_index()
pb_side_grp_B_status['SideB.ViewData.Status'] = pb_side_grp_B_status['SideB.ViewData.Status'].apply(lambda x: str(x).replace("[",""))
pb_side_grp_B_status['SideB.ViewData.Status'] = pb_side_grp_B_status['SideB.ViewData.Status'].apply(lambda x: str(x).replace("]",""))
pb_side_grp_B['len'] = pb_side_grp_B.apply(lambda x: len(x['Predicted_action']), axis=1)
pb_side_grp_B['No_pair_flag'] = pb_side_grp_B.apply(lambda x: 1 if x['len'] == 1 and "No-Pair" in x['Predicted_action'] else 0, axis=1)
# In[5573]:
pb_side_grp_B = pd.merge(pb_side_grp_B,pb_side_grp_B_status, on='SideB.ViewData.Side0_UniqueIds', how='left')
pb_side_grp_B['Final_status_B'] = pb_side_grp_B.apply(lambda x: "Open Break" if x['SideB.ViewData.Status']=="'OB'" else("SDB" if x['SideB.ViewData.Status']=="'SDB'" else "NA"),axis=1)
pb_side_grp_B = pb_side_grp_B.rename(columns = {'SideB.ViewData.Side0_UniqueIds':'ViewData.Side0_UniqueIds'})
pb_side_grp2 = pb_side_grp_B[pb_side_grp_B['No_pair_flag']==1]
pb_side_grp2 = pb_side_grp2.reset_index()
pb_side_grp2 = pb_side_grp2.drop('index',1)
# In[5574]:
aua_sub88 = pd.merge(aua_sub77 ,pb_side_grp2[['ViewData.Side0_UniqueIds','Final_status_B']], on ='ViewData.Side0_UniqueIds',how='left')
# In[5575]:
aua_sub88.loc[(~aua_sub88['Final_status_B'].isnull()) & (aua_sub88['ViewData.Side1_UniqueIds']=='nan'),'Final_prediction'] = aua_sub88.loc[(~aua_sub88['Final_status_B'].isnull()) & (aua_sub88['ViewData.Side1_UniqueIds']=='nan'),'Final_status_B']
# In[5576]:
aua_sub99 = aua_sub88[(aua_sub88['ViewData.Status']!='SDB')]
aua_sub99 = aua_sub99.reset_index()
aua_sub99 = aua_sub99.drop('index',1)
# In[5577]:
aua_sub99['Final_prediction'] = aua_sub99['Final_prediction'].fillna('Open Break')
aua_sub99 = aua_sub99.reset_index()
aua_sub99 = aua_sub99.drop('index',1)
# In[5578]:
aua_sub99['ViewData.Status'] = aua_sub99['ViewData.Status'].astype(str)
aua_sub99['Final_prediction'] = aua_sub99['Final_prediction'].astype(str)
# In[5579]:
#X_test
# In[5580]:
#aua[aua['ViewData.Side0_UniqueIds'] == '789_125897734_Advent Geneva']
# ## Summary file
# In[5581]:
break_id_merge = meo[meo['ViewData.Status'].isin(['OB','SDB','UOB','UDB','SPM'])][['ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds','ViewData.BreakID']].drop_duplicates()
break_id_merge = break_id_merge.reset_index()
break_id_merge = break_id_merge.drop('index',1)
# In[5582]:
# In[5583]:
break_id_merge['key'] = break_id_merge['ViewData.Side0_UniqueIds'].astype(str) + break_id_merge['ViewData.Side1_UniqueIds'].astype(str)
# In[5584]:
final = pd.merge(aua_sub99,break_id_merge[['key','ViewData.BreakID']], on='key', how='left')
# In[5585]:
# In[5586]:
# In[5587]:
# In[5588]:
#final[final['ViewData.BreakID'].isnull()]
final = pd.merge(final,break_id_merge[['ViewData.Side0_UniqueIds','ViewData.BreakID']], on='ViewData.Side0_UniqueIds', how='left')
# In[5589]:
final.loc[final['ViewData.BreakID_x'].isnull(),'ViewData.BreakID_x'] = final.loc[final['ViewData.BreakID_x'].isnull(),'ViewData.BreakID_y']
# In[5590]:
final = final.rename(columns={'ViewData.BreakID_x':'ViewData.BreakID'})
final = final.drop('ViewData.BreakID_y',1)
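# In[ ]:

# The BreakID_x / BreakID_y handling above is the standard pandas
# merge-suffix coalesce pattern: overlapping columns get suffixed on merge,
# and the left value is back-filled from the right where missing.
# A toy sketch (made-up keys and ids):

```python
import pandas as pd

left = pd.DataFrame({'key': ['k1', 'k2'], 'BreakID': [101.0, None]})
right = pd.DataFrame({'key': ['k2'], 'BreakID': [202.0]})

# Overlapping 'BreakID' becomes BreakID_x (left) and BreakID_y (right)
m = pd.merge(left, right, on='key', how='left')

# Coalesce: fill missing left values from the right, then tidy up
m.loc[m['BreakID_x'].isnull(), 'BreakID_x'] = m.loc[m['BreakID_x'].isnull(), 'BreakID_y']
m = m.rename(columns={'BreakID_x': 'BreakID'}).drop(columns='BreakID_y')
print(m)  # BreakID column holds 101.0 and 202.0
```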
# In[ ]:
# In[5591]:
final1 = final[(final['Type']=='One_to_One/Open') & (final['probability_No_pair'].isnull())]
final1 = final1.reset_index()
final1 = final1.drop('index',1)
final2 = final[~((final['Type']=='One_to_One/Open') & (final['probability_No_pair'].isnull()))]
final2 = final2.reset_index()
final2 = final2.drop('index',1)
# In[5592]:
final1['probability_UMR'].fillna(0.0024,inplace=True)
final1['probability_UMB'].fillna(0.004124,inplace=True)
final1['probability_UMT'].fillna(0.00155,inplace=True)
final1['probability_No_pair'].fillna(0.9922,inplace=True)
# In[5593]:
final3 = pd.concat([final1, final2], axis=0)
# In[5594]:
final3['ML_flag'] = final3.apply(lambda x: "ML" if x['Type']=='One_to_One/Open' else "Non-ML", axis=1)
# In[5595]:
prediction_cols = ['ViewData.BreakID', 'ViewData.Side1_UniqueIds', 'ViewData.Side0_UniqueIds','ViewData.Age' ,
'probability_No_pair', 'probability_UMR','probability_UMB', 'probability_UMT',
'Final_predicted_break', 'Type', 'ML_flag','ViewData.Status', 'Final_prediction']
final4 = final3[prediction_cols]
final4 = final4.rename(columns ={'ViewData.Status':'Actual_Status', 'Final_prediction': 'Predicted_Status'})
# In[5596]:
# In[5597]:
#crosstab_table
# In[5598]:
NA_status_file = final4[(final4['Type']=='One_to_One/Open') & (final4['Predicted_Status']=='NA')]
NA_status_file = NA_status_file.reset_index()
NA_status_file = NA_status_file.drop('index',1)
# In[5599]:
final5 = final4[~((final4['Type']=='One_to_One/Open') & (final4['Predicted_Status']=='NA'))]
final5 = final5.reset_index()
final5 = final5.drop('index',1)
# In[5600]:
NA_status_file_A_side = NA_status_file[NA_status_file['ViewData.Side0_UniqueIds']=='nan']
NA_status_file_B_side = NA_status_file[NA_status_file['ViewData.Side1_UniqueIds']=='nan']
# In[5601]:
gg = X_test[X_test['SideA.ViewData.BreakID_A_side'].isin(NA_status_file_A_side['ViewData.BreakID'].unique())].groupby(['SideA.ViewData.BreakID_A_side'])['Predicted_action'].unique().reset_index()
gg.columns = ['ViewData.BreakID','Predicted_action']
gg['NA_prediction_A'] = 'Open Break'
kk = X_test[X_test['SideB.ViewData.BreakID_B_side'].isin(NA_status_file_B_side['ViewData.BreakID'].unique())].groupby(['SideB.ViewData.BreakID_B_side'])['Predicted_action'].unique().reset_index()
kk.columns = ['ViewData.BreakID','Predicted_action']
kk['NA_prediction_B'] = 'Open Break'
# In[5602]:
gg['ViewData.BreakID'] = gg['ViewData.BreakID'].astype(str)
kk['ViewData.BreakID'] = kk['ViewData.BreakID'].astype(str)
# In[5603]:
final6 = pd.merge(NA_status_file, gg[['ViewData.BreakID','NA_prediction_A']], on='ViewData.BreakID', how='left')
final6 = pd.merge(final6, kk[['ViewData.BreakID','NA_prediction_B']], on='ViewData.BreakID', how='left')
# In[5604]:
final6.loc[final6['NA_prediction_A'].isnull(),'Predicted_Status'] = 'Open Break'
final6.loc[final6['NA_prediction_B'].isnull(),'Predicted_Status'] = 'Open Break'
# In[5605]:
final6 = final6.drop(['NA_prediction_A','NA_prediction_B'],1)
# In[5606]:
#final5[final5['ViewData.Side0_UniqueIds']=='789_125897734_Advent Geneva']
# In[5607]:
final7 = pd.concat([final5, final6], axis=0)
final7 = final7.reset_index()
final7 = final7.drop('index',1)
# In[5609]:
# In[5610]:
pair_match = X_test[X_test['Predicted_action']!='No-Pair']
pair_match = pair_match.reset_index()
pair_match = pair_match.drop('index',1)
# In[5611]:
pair_match = pair_match[['Predicted_action',
'probability_No_pair', 'probability_UMB', 'probability_UMR',
'probability_UMT', 'key']]
pair_match.columns = ['New_Predicted_action',
'New_probability_No_pair', 'New_probability_UMB', 'New_probability_UMR',
'New_probability_UMT','key']
# In[5612]:
pair_match['New_Predicted_action'] = pair_match['New_Predicted_action'].apply(lambda x: 'UMR' if x=='UMR_One_to_One' else("UMT" if x=='UMT_One_to_One' else("UMB" if x== "UMB_One_to_One" else x)))
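# In[ ]:

# Note: the nested-lambda mapping above can be expressed more directly with
# Series.replace; a sketch of the equivalent mapping (same labels):

```python
import pandas as pd

s = pd.Series(['UMR_One_to_One', 'UMT_One_to_One', 'UMB_One_to_One', 'No-Pair'])

# Values not present in the dict (e.g. 'No-Pair') pass through unchanged
mapped = s.replace({
    'UMR_One_to_One': 'UMR',
    'UMT_One_to_One': 'UMT',
    'UMB_One_to_One': 'UMB',
})
print(mapped.tolist())  # ['UMR', 'UMT', 'UMB', 'No-Pair']
```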
# In[5613]:
final7['key'] = final7['ViewData.Side0_UniqueIds'].astype(str) + final7['ViewData.Side1_UniqueIds'].astype(str)
# In[5614]:
final8 = pd.merge(final7,pair_match, on='key', how='left')
# In[5615]:
# In[5617]:
# In[5618]:
final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'Predicted_Status'] = final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'New_Predicted_action']
final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'probability_No_pair'] = final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'New_probability_No_pair']
final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'probability_UMB'] = final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'New_probability_UMB']
final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'probability_UMR'] = final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'New_probability_UMR']
final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'probability_UMT'] = final8.loc[(~final8['New_Predicted_action'].isnull()) & (final8['New_Predicted_action']!= final8['Predicted_Status']),'New_probability_UMT']
# In[5619]:
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMT'),'probability_UMT'] = final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMT'),'New_probability_UMT']
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMR'),'probability_UMR'] = final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMR'),'New_probability_UMR']
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMB'),'probability_UMB'] = final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMB'),'New_probability_UMB']
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMT'),'probability_No_pair'] = 0.002
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMR'),'probability_No_pair'] = 0.002
final8.loc[(final8['Final_predicted_break'].isnull()) & (final8['Predicted_Status']=='UMB'),'probability_No_pair'] = 0.002
# In[5620]:
umr_break_array_match = prediction_table[prediction_table['Final_prediction']=='UMR_One_to_One'][['SideB.ViewData.BreakID_B_side','Final_predicted_break']]
umt_break_array_match = prediction_table[prediction_table['Final_prediction']=='UMT_One_to_One'][['SideB.ViewData.BreakID_B_side','Final_predicted_break']]
umb_break_array_match = prediction_table[prediction_table['Final_prediction']=='UMB_One_to_One'][['SideB.ViewData.BreakID_B_side','Final_predicted_break']]
umr_break_array_match.columns = np.array(['ViewData.BreakID','New_Final_predicted_break_UMR'])
umt_break_array_match.columns = np.array(['ViewData.BreakID','New_Final_predicted_break_UMT'])
umb_break_array_match.columns = np.array(['ViewData.BreakID','New_Final_predicted_break_UMB'])
# In[5621]:
#umr_break_array_match['New_Final_predicted_break_UMR'] = umr_break_array_match['New_Final_predicted_break_UMR'].astype(str)
#umb_break_array_match['New_Final_predicted_break_UMB'] = umb_break_array_match['New_Final_predicted_break_UMB'].astype(str)
#umt_break_array_match['New_Final_predicted_break_UMT'] = umt_break_array_match['New_Final_predicted_break_UMT'].astype(str)
# In[5622]:
final9 = pd.merge(final8, umr_break_array_match, on ='ViewData.BreakID', how='left')
final9 = pd.merge(final9, umt_break_array_match, on ='ViewData.BreakID', how='left')
final9 = pd.merge(final9, umb_break_array_match, on ='ViewData.BreakID', how='left')
# In[5623]:
# Note: both sides of each assignment must mask on final9 (the merged frame),
# not final8, so the row selection stays aligned with the frame being updated.
final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMT'),'Final_predicted_break'] = final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMT'),'New_Final_predicted_break_UMT']
final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMR'),'Final_predicted_break'] = final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMR'),'New_Final_predicted_break_UMR']
final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMB'),'Final_predicted_break'] = final9.loc[(final9['Final_predicted_break'].isnull()) & (final9['Predicted_Status']=='UMB'),'New_Final_predicted_break_UMB']
# In[5624]:
#final9[(final9['Actual_Status']=='UMB') & (final9['Predicted_Status']=='UMB') & (final9['ML_flag']=='ML')]['Final_predicted_break']
# In[5625]:
# In[5626]:
final9 = final9.drop(['key','New_Predicted_action',
'New_probability_No_pair', 'New_probability_UMB', 'New_probability_UMR',
'New_probability_UMT','New_Final_predicted_break_UMR',
'New_Final_predicted_break_UMT', 'New_Final_predicted_break_UMB'], 1)
# In[5627]:
# In[5628]:
#final8['Type'].value_counts()
# In[5629]:
#meo1 = pd.read_csv("//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/MEO/MeoCollections.MEO_HST_RecData_125_2020-06-8.csv",usecols=new_cols)
# In[5630]:
#meo1[meo1['ViewData.Side1_UniqueIds']=='6_125858636_Goldman Sachs']
# ## Merging columns from the transaction table
# In[5631]:
# In[5632]:
aua_final = pd.read_csv(filepaths_MEO[i],usecols = viewdata_cols_to_show)
# In[5633]:
#final_predictions = pd.read_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions/Final_Predictions_Table_HST_RecData_125_2020-06-1.csv')
# In[5634]:
final_predictions = final9.copy()
# In[5635]:
#final_predictions.groupby(['Actual_Status'])['Predicted_Status'].value_counts()
# In[5636]:
#final_predictions[(final_predictions['Actual_Status'] == 'Open Break') & (final_predictions['Predicted_Status'] == 'UMR')][['ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds']]
# In[5637]:
#final_predictions.groupby(['ViewData.Side0_UniqueIds'])['ViewData.Side1_UniqueIds'].value_counts()
# In[5638]:
# In[5639]:
final_predictions_both_present = final_predictions[(final_predictions['ViewData.Side0_UniqueIds'] !='nan') & (final_predictions['ViewData.Side1_UniqueIds']!='nan')]
final_predictions_side0_only = final_predictions[(final_predictions['ViewData.Side0_UniqueIds']!='nan') & (final_predictions['ViewData.Side1_UniqueIds'] =='nan')]
final_predictions_side1_only = final_predictions[(final_predictions['ViewData.Side0_UniqueIds']=='nan') & (final_predictions['ViewData.Side1_UniqueIds'] != 'nan')]
final_predictions_both_null = final_predictions[(final_predictions['ViewData.Side0_UniqueIds']=='nan') & (final_predictions['ViewData.Side1_UniqueIds']=='nan')]
# In[5640]:
aua_final = aua_final.drop_duplicates()
aua_final = aua_final.reset_index()
aua_final = aua_final.drop('index',1)
# In[5642]:
final_predictions_both_present_aua_merge = pd.merge(final_predictions_both_present,aua_final, on=['ViewData.Side0_UniqueIds','ViewData.Side1_UniqueIds'], how='left' )
final_predictions_side0_only_aua_merge = pd.merge(final_predictions_side0_only,aua_final, on='ViewData.Side0_UniqueIds', how='left' )
final_predictions_side1_only_aua_merge = pd.merge(final_predictions_side1_only,aua_final, on='ViewData.Side1_UniqueIds', how='left' )
# In[5643]:
#final_predictions_side1_only_aua_merge
# In[5644]:
final_predictions_side0_only_aua_merge = final_predictions_side0_only_aua_merge.drop(['ViewData.BreakID_y', 'ViewData.Side1_UniqueIds_y', 'ViewData.Age_y'], 1)
final_predictions_side0_only_aua_merge = final_predictions_side0_only_aua_merge.rename(columns={'ViewData.BreakID_x': 'ViewData.BreakID'})
final_predictions_side0_only_aua_merge = final_predictions_side0_only_aua_merge.rename(columns={'ViewData.Side1_UniqueIds_x': 'ViewData.Side1_UniqueIds'})
final_predictions_side0_only_aua_merge = final_predictions_side0_only_aua_merge.rename(columns={'ViewData.Age_x': 'ViewData.Age'})
final_predictions_side1_only_aua_merge = final_predictions_side1_only_aua_merge.drop(['ViewData.BreakID_y', 'ViewData.Side0_UniqueIds_y', 'ViewData.Age_y'], 1)
final_predictions_side1_only_aua_merge = final_predictions_side1_only_aua_merge.rename(columns={'ViewData.BreakID_x': 'ViewData.BreakID'})
final_predictions_side1_only_aua_merge = final_predictions_side1_only_aua_merge.rename(columns={'ViewData.Side0_UniqueIds_x': 'ViewData.Side0_UniqueIds'})
final_predictions_side1_only_aua_merge = final_predictions_side1_only_aua_merge.rename(columns={'ViewData.Age_x': 'ViewData.Age'})
final_predictions_both_present_aua_merge = final_predictions_both_present_aua_merge.drop(['ViewData.BreakID_y', 'ViewData.Age_y'], 1)
final_predictions_both_present_aua_merge = final_predictions_both_present_aua_merge.rename(columns={'ViewData.BreakID_x': 'ViewData.BreakID'})
final_predictions_both_present_aua_merge = final_predictions_both_present_aua_merge.rename(columns={'ViewData.Age_x': 'ViewData.Age'})
# In[5645]:
#final_prediction_show_cols = final_predictions_both_present_aua_merge.append([final_predictions_side0_only_aua_merge,final_predictions_side1_only_aua_merge])
# In[5655]:
final11 = pd.concat([final_predictions_both_present_aua_merge, final_predictions_side0_only_aua_merge,final_predictions_side1_only_aua_merge], axis=0)
# In[5656]:
final11 = final11.reset_index()
final11 = final11.drop('index',1)
# In[5657]:
final12 = final11.drop_duplicates(['ViewData.BreakID', 'ViewData.Side1_UniqueIds','ViewData.Side0_UniqueIds', 'ViewData.Age'])
# In[5658]:
final12.loc[(final12['Actual_Status']=='UCB'), 'ML_flag'] ='Non-ML'
final12.loc[(final12['Actual_Status']=='UCB'), 'Type'] = 'Closed Breaks'
# In[5673]:
final12.loc[final12['Actual_Status']=='UCB','Predicted_Status'] = 'No-Prediction'
# In[5246]:
# In[5674]:
from sklearn.metrics import classification_report  # ensure the metric import is in scope here
print('classification_report')
print(classification_report(final12[final12['Type']=='One_to_One/Open']['Actual_Status'], final12[final12['Type']=='One_to_One/Open']['Predicted_Status']))
# In[5675]:
report = classification_report(final12[final12['Type']=='One_to_One/Open']['Actual_Status'], final12[final12['Type']=='One_to_One/Open']['Predicted_Status'], output_dict=True)
accuracy_table = pd.DataFrame(report).transpose()
print('accuracy_table')
print(accuracy_table)
# In[5676]:
crosstab_table = pd.crosstab(final12[final12['Type']=='One_to_One/Open']['Actual_Status'], final12[final12['Type']=='One_to_One/Open']['Predicted_Status'])
# In[5678]:
print('crosstab_table')
print(crosstab_table)
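# In[ ]:

# pd.crosstab(actual, predicted) is effectively a confusion matrix:
# rows are actual statuses, columns are predicted ones. A toy sketch
# with made-up statuses:

```python
import pandas as pd

actual = pd.Series(['Open Break', 'Open Break', 'UMR', 'UMT'], name='Actual_Status')
predicted = pd.Series(['Open Break', 'UMR', 'UMR', 'UMT'], name='Predicted_Status')

# Each cell counts (actual, predicted) pairs
ct = pd.crosstab(actual, predicted)
print(ct)
```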
from datetime import datetime  # imported here so the timestamp below works if this cell runs standalone
print(datetime.now())
# ## Save results
# In[ ]:
# filepaths_final_prediction_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions_2/Final_Predictions_Table_HST_RecData_125_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
# filepaths_accuracy_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions_2/Accuracy_Table_HST_RecData_125_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
# filepaths_crosstab_table = ['//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions_2/Crosstab_Table_HST_RecData_125_2020-06-' + str(date_numbers_list[i]) + '.csv' for i in range(0,len(date_numbers_list))]
final12.to_csv(filepaths_final_prediction_table[i])
# In[ ]:
accuracy_table.to_csv(filepaths_accuracy_table[i])
# In[ ]:
crosstab_table.to_csv(filepaths_crosstab_table[i])
i = i+1
sys.stdout = orig_stdout
f.close()
# ## Entire month prediction
# In[4747]:
#all_june_data = pd.read_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions/All_June_predictions_123.csv')
#
#
## In[4694]:
#
#
#from sklearn.metrics import accuracy_score
#from sklearn.metrics import classification_report
#print(classification_report(all_june_data[all_june_data['Type']=='One_to_One/Open']['Actual_Status'], all_june_data[all_june_data['Type']=='One_to_One/Open']['Predicted_Status']))
#
#
## In[4748]:
#
#
#from sklearn.metrics import accuracy_score
#from sklearn.metrics import classification_report
#print(classification_report(all_june_data[all_june_data['Type']=='One_to_One/Open']['Actual_Status'], all_june_data[all_june_data['Type']=='One_to_One/Open']['Predicted_Status']))
#
#
## In[4695]:
#
#
#report_all_june = classification_report(all_june_data[all_june_data['Type']=='One_to_One/Open']['Actual_Status'], all_june_data[all_june_data['Type']=='One_to_One/Open']['Predicted_Status'], output_dict=True)
#accuracy_table_all_june = pd.DataFrame(report_all_june).transpose()
#
#
## In[4696]:
#
#
#accuracy_table_all_june
#
#
## In[4697]:
#
#
#from sklearn.metrics import confusion_matrix
#crosstab_all_june = pd.crosstab(all_june_data[all_june_data['Type']=='One_to_One/Open']['Actual_Status'], all_june_data[all_june_data['Type']=='One_to_One/Open']['Predicted_Status'])
#
#
## In[4698]:
#
#
#crosstab_all_june
#
#
## ## Save Results (Entire Month)
#
## In[4702]:
#
#
#accuracy_table_all_june.to_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions/Accuracy_table_all_june.csv')
#
#
## In[4703]:
#
#
#crosstab_all_june.to_csv('//vitblrdevcons01/Raman Strategy ML 2.0/All_Data/Weiss/JuneData/Final_Predictions/Crosstab_table_all_june.csv')
#
# author: rohitschauhanitbhu@gmail.com

# --- repo: mikohan/djangoblogtest, file: /elastic/documents.py ---
from django_elasticsearch_dsl import DocType, Index
from elastic.models import Post
post = Index('posts')
@post.doc_type
class PostDocument(DocType):
    class Meta:
        model = Post
        fields = [
            'title',
            'content',
            'timestamp',
        ]

# author: angara99@gmail.com

# --- repo: crowdbotics-apps/test10-4895-dev-16191, file: /test10_4895_dev_16191/settings.py ---
"""
Django settings for test10_4895_dev_16191 project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import environ
import logging
env = environ.Env()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=False)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")
ALLOWED_HOSTS = env.list("HOST", default=["*"])
SITE_ID = 1
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = env.bool("SECURE_REDIRECT", default=False)
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites'
]
LOCAL_APPS = [
'home',
'users.apps.UsersConfig',
]
THIRD_PARTY_APPS = [
'rest_framework',
'rest_framework.authtoken',
'rest_auth',
'rest_auth.registration',
'bootstrap4',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'django_extensions',
'drf_yasg',
'storages',
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'test10_4895_dev_16191.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'test10_4895_dev_16191.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
if env.str("DATABASE_URL", default=None):
DATABASES = {
'default': env.db()
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
MIDDLEWARE += ['whitenoise.middleware.WhiteNoiseMiddleware']
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend'
)
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# allauth / users
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = "optional"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_UNIQUE_EMAIL = True
LOGIN_REDIRECT_URL = "users:redirect"
ACCOUNT_ADAPTER = "users.adapters.AccountAdapter"
SOCIALACCOUNT_ADAPTER = "users.adapters.SocialAccountAdapter"
ACCOUNT_ALLOW_REGISTRATION = env.bool("ACCOUNT_ALLOW_REGISTRATION", True)
SOCIALACCOUNT_ALLOW_REGISTRATION = env.bool("SOCIALACCOUNT_ALLOW_REGISTRATION", True)
REST_AUTH_SERIALIZERS = {
# Replace password reset serializer to fix 500 error
"PASSWORD_RESET_SERIALIZER": "home.api.v1.serializers.PasswordSerializer",
}
REST_AUTH_REGISTER_SERIALIZERS = {
# Use custom serializer that has no username and matches web signup
"REGISTER_SERIALIZER": "home.api.v1.serializers.SignupSerializer",
}
# Custom user model
AUTH_USER_MODEL = "users.User"
EMAIL_HOST = env.str("EMAIL_HOST", "smtp.sendgrid.net")
EMAIL_HOST_USER = env.str("SENDGRID_USERNAME", "")
EMAIL_HOST_PASSWORD = env.str("SENDGRID_PASSWORD", "")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# AWS S3 config
AWS_ACCESS_KEY_ID = env.str("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.str("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = env.str("AWS_STORAGE_BUCKET_NAME", "")
AWS_STORAGE_REGION = env.str("AWS_STORAGE_REGION", "")
USE_S3 = (
AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY and
AWS_STORAGE_BUCKET_NAME and
AWS_STORAGE_REGION
)
if USE_S3:
AWS_S3_CUSTOM_DOMAIN = env.str("AWS_S3_CUSTOM_DOMAIN", "")
AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
AWS_DEFAULT_ACL = env.str("AWS_DEFAULT_ACL", "public-read")
AWS_MEDIA_LOCATION = env.str("AWS_MEDIA_LOCATION", "media")
AWS_AUTO_CREATE_BUCKET = env.bool("AWS_AUTO_CREATE_BUCKET", True)
DEFAULT_FILE_STORAGE = env.str(
"DEFAULT_FILE_STORAGE", "home.storage_backends.MediaStorage"
)
MEDIA_URL = '/mediafiles/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')
# Swagger settings for api docs
SWAGGER_SETTINGS = {
"DEFAULT_INFO": f"{ROOT_URLCONF}.api_info",
}
if DEBUG or not (EMAIL_HOST_USER and EMAIL_HOST_PASSWORD):
# output email to console instead of sending
if not DEBUG:
logging.warning("You should setup `SENDGRID_USERNAME` and `SENDGRID_PASSWORD` env vars to send emails.")
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
# author: team@crowdbotics.com

# --- repo: geovote/geovote-main, file: /flask/repository/__init__.py (MIT license) ---
from repository.answers import *
from repository.clean import *
from repository.questions import *
# author: lawanledoux@gmail.com

# --- repo: crowdbotics-apps/tiny-fog-27865, file: /backend/tiny_fog_27865/settings.py ---
"""
Django settings for tiny_fog_27865 project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import environ
import logging
env = environ.Env()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=False)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")
ALLOWED_HOSTS = env.list("HOST", default=["*"])
SITE_ID = 1
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = env.bool("SECURE_REDIRECT", default=False)
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites'
]
LOCAL_APPS = [
'home',
'modules',
'users.apps.UsersConfig',
]
THIRD_PARTY_APPS = [
'rest_framework',
'rest_framework.authtoken',
'rest_auth',
'rest_auth.registration',
'bootstrap4',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'django_extensions',
'drf_yasg',
'storages',
# start fcm_django push notifications
'fcm_django',
# end fcm_django push notifications
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'tiny_fog_27865.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'web_build')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'tiny_fog_27865.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
if env.str("DATABASE_URL", default=None):
DATABASES = {
'default': env.db()
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
MIDDLEWARE += ['whitenoise.middleware.WhiteNoiseMiddleware']
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend'
)
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'), os.path.join(BASE_DIR, 'web_build/static')]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# allauth / users
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = "optional"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_UNIQUE_EMAIL = True
LOGIN_REDIRECT_URL = "users:redirect"
ACCOUNT_ADAPTER = "users.adapters.AccountAdapter"
SOCIALACCOUNT_ADAPTER = "users.adapters.SocialAccountAdapter"
ACCOUNT_ALLOW_REGISTRATION = env.bool("ACCOUNT_ALLOW_REGISTRATION", True)
SOCIALACCOUNT_ALLOW_REGISTRATION = env.bool("SOCIALACCOUNT_ALLOW_REGISTRATION", True)
REST_AUTH_SERIALIZERS = {
# Replace password reset serializer to fix 500 error
"PASSWORD_RESET_SERIALIZER": "home.api.v1.serializers.PasswordSerializer",
}
REST_AUTH_REGISTER_SERIALIZERS = {
# Use custom serializer that has no username and matches web signup
"REGISTER_SERIALIZER": "home.api.v1.serializers.SignupSerializer",
}
# Custom user model
AUTH_USER_MODEL = "users.User"
EMAIL_HOST = env.str("EMAIL_HOST", "smtp.sendgrid.net")
EMAIL_HOST_USER = env.str("SENDGRID_USERNAME", "")
EMAIL_HOST_PASSWORD = env.str("SENDGRID_PASSWORD", "")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# AWS S3 config
AWS_ACCESS_KEY_ID = env.str("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.str("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = env.str("AWS_STORAGE_BUCKET_NAME", "")
AWS_STORAGE_REGION = env.str("AWS_STORAGE_REGION", "")
USE_S3 = (
AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY and
AWS_STORAGE_BUCKET_NAME and
AWS_STORAGE_REGION
)
if USE_S3:
AWS_S3_CUSTOM_DOMAIN = env.str("AWS_S3_CUSTOM_DOMAIN", "")
AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
AWS_DEFAULT_ACL = env.str("AWS_DEFAULT_ACL", "public-read")
AWS_MEDIA_LOCATION = env.str("AWS_MEDIA_LOCATION", "media")
AWS_AUTO_CREATE_BUCKET = env.bool("AWS_AUTO_CREATE_BUCKET", True)
DEFAULT_FILE_STORAGE = env.str(
"DEFAULT_FILE_STORAGE", "home.storage_backends.MediaStorage"
)
MEDIA_URL = '/mediafiles/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')
# start fcm_django push notifications
FCM_DJANGO_SETTINGS = {
"FCM_SERVER_KEY": env.str("FCM_SERVER_KEY", "")
}
# end fcm_django push notifications
# Swagger settings for api docs
SWAGGER_SETTINGS = {
"DEFAULT_INFO": f"{ROOT_URLCONF}.api_info",
}
if DEBUG or not (EMAIL_HOST_USER and EMAIL_HOST_PASSWORD):
# output email to console instead of sending
if not DEBUG:
logging.warning("You should setup `SENDGRID_USERNAME` and `SENDGRID_PASSWORD` env vars to send emails.")
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
# EDSHyFT/test/run_8TeV_all_crab.py
import subprocess, os, ConfigParser
dummy_config_name = 'crab_dummy_anashyft_ele.cfg'
dummy_config = ConfigParser.RawConfigParser()
dummy_config.read(dummy_config_name)
pset = 'edmNtupleMaker.py'
working_dir = os.environ['HOME'] + '/nobackup/BPrimeEDM_8TeV/Jan15/'
dir_suffix = '_tlbsm_53x_v2'
# one pset for all jobs
dummy_config.set('CMSSW','pset',pset)
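# This chunk does not show how `dummy_config` and the `joblist` entries below
# are combined. A minimal sketch follows (assumptions: a CRAB2-style cfg with a
# [CMSSW] section; the `configure_job` helper and the sample entry are
# hypothetical, and Python 3's `configparser` stands in here for the Python 2
# `ConfigParser` module imported above):

```python
import configparser  # Python 3 name for the Python 2 `ConfigParser` module

def configure_job(config, job):
    """Copy one joblist entry into the [CMSSW] section of a crab config.

    Keys beginning with '#' (e.g. '#sample_name') are bookkeeping for this
    script only, not valid crab.cfg options, so they are skipped.
    """
    if not config.has_section('CMSSW'):
        config.add_section('CMSSW')
    for key, value in job.items():
        if key.startswith('#'):
            continue
        config.set('CMSSW', key, value)
    return config

# Hypothetical entry mirroring the joblist format defined below
job = {'datasetpath': '/SomeSample/someuser-somestring/USER',
       'number_of_jobs': '50',
       'pycfg_params': 'runData=0',
       '#sample_name': 'SomeSample'}
cfg = configure_job(configparser.RawConfigParser(), job)
print(cfg.get('CMSSW', 'number_of_jobs'))  # → 50
```

# The real script presumably writes each filled-in config to a per-sample file
# under `working_dir` and submits it with the `crab` CLI via `subprocess`;
# those details fall outside this chunk.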
joblist = [
# ========== MC ==========
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-450_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-500_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZTWinc_M-500_TuneZ2star_8TeV-madgraph'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-550_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-550_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZTWinc_M-550_TuneZ2star_8TeV-madgraph'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-600_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZTWinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2_A-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZTWinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZTWinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2_A-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZTWinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToBZTWinc_M-800_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZTWinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
# {'datasetpath':'/BprimeBprimeToBZTWinc_M-900_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-900_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7C-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_900', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZTWinc_M-900_TuneZ2star_8TeV-madgraph'},
# {'datasetpath':'/BprimeBprimeToBZTWinc_M-1000_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-1000_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7C-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_1000', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZTWinc_M-1000_TuneZ2star_8TeV-madgraph'},
# {'datasetpath':'/BprimeBprimeToBZTWinc_M-1100_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZTWinc_M-1100_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7C-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZTW_1100', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZTWinc_M-1100_TuneZ2star_8TeV-madgraph'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-450_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
# {'datasetpath':'/BprimeBprimeToTWTWinc_M-500_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToTWTWinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-550_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-550_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-600_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToTWTWinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToTWTWinc_M-600_TuneZ2star_8TeV-madgraph'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2_B-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2_B-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/BprimeBprimeToTWTWinc_M-800_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToTWTWinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToTWTW_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-450_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-500_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-550_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-550_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-600_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBHinc_M-800_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBHinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBH_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
# {'datasetpath': '/BprimeBprimeToBHBZinc_M-450_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBHBZinc_M-450_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-500_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-550_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-550_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-600_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHBZinc_M-800_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHBZinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHBZ_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-450_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBHTWinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBHTWinc_M-450_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-500_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBHTWinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBHTWinc_M-500_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-550_TuneZ2star_8TeV-madgraph/cjenkins-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBHTWinc_M-550_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-600_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHTWinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHTWinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHTWinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBHTWinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBHTWinc_M-800_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBHTWinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBHTW_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBHTWinc_M-800_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-450_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-450_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_450', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-500_TuneZ2star_8TeV-madgraph/cjenkins-BprimeBprimeToBZBZinc_M-500_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_500', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'BprimeBprimeToBZBZinc_M-550_TuneZ2star_8TeV-madgraph' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-550_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-550_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_550', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-600_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-600_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_600', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-650_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-650_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_650', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-700_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-700_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_700', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-750_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-750_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_750', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/BprimeBprimeToBZBZinc_M-800_TuneZ2star_8TeV-madgraph/StoreResults-BprimeBprimeToBZBZinc_M-800_TuneZ2star_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'BBToBZBZ_800', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
#### ========= To be submitted =============
## {'datasetpath': '/TTJets_MassiveBinDECAY_TuneZ2star_8TeV-madgraph-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'200', 'pycfg_params':'runData=0', '#sample_name': 'Top', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/T_s-channel_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleTopS', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/T_t-channel_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleTopT', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/T_tW-channel-DR_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleToptW', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/Tbar_s-channel_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleTopbarS', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/Tbar_t-channel_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleTopbarT', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/Tbar_tW-channel-DR_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'SingleTopbartW', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/DYJetsToLL_M-50_TuneZ2Star_8TeV-madgraph-tarball/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'300','pycfg_params':'runData=0', '#sample_name': 'ZJets', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/WJetsToLNu_TuneZ2Star_8TeV-madgraph-tarball/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v2_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'150', 'pycfg_params':'runData=0', '#sample_name': 'WJets_v2', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/WJetsToLNu_TuneZ2Star_8TeV-madgraph-tarball/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'WJets_v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' }, #not urgent
## {'datasetpath': '/TTWJets_8TeV-madgraph/avetisya-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'TopWJets', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet' },
## {'datasetpath': '/TTZJets_8TeV-madgraph_v2/mmhl-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'TopZJets', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet' },
## {'datasetpath': '/WW_TuneZ2star_8TeV_pythia6_tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'WW', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/ZZ_TuneZ2star_8TeV_pythia6_tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'ZZ', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/WZ_TuneZ2star_8TeV_pythia6_tauola/malik-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-3abcc3b1cd74b7b45c7ed2df0ee1e03c/USER', 'number_of_jobs':'50', 'pycfg_params':'runData=0', '#sample_name': 'WZ', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'WZ_TuneZ2star_8TeV_pythia6_tauola' },
## {'datasetpath': '/TT_8TeV-mcatnlo/galank-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'450', 'pycfg_params':'runData=0', '#sample_name': 'Top_mcatnlo', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'TT_8TeV-mcatnlo' },
## {'datasetpath': '/TT_CT10_TuneZ2star_8TeV-powheg-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v2_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'300', 'pycfg_params':'runData=0', '#sample_name': 'Top_powheg', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'TT_CT10_TuneZ2star_8TeV-powheg-tauola' },
## {'datasetpath': '/TTJets_matchingdown_TuneZ2star_8TeV-madgraph-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0', '#sample_name': 'TopMatchdn', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/TTJets_matchingup_TuneZ2star_8TeV-madgraph-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0', '#sample_name': 'TopMatchup', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/TTJets_scaledown_TuneZ2star_8TeV-madgraph-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2_bugfix_v1-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0', '#sample_name': 'TopScaledn', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/TTJets_scaleup_TuneZ2star_8TeV-madgraph-tauola/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0', '#sample_name': 'TopScaleup', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
## {'datasetpath': '/QCD_Pt_20_30_BCtoE_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_20_30_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_20_30_BCtoE_TuneZ2star_8TeV_pythia6' },
## {'datasetpath': '/QCD_Pt_30_80_BCtoE_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_30_80_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_30_80_BCtoE_TuneZ2star_8TeV_pythia6' },
## {'datasetpath': '/QCD_Pt_80_170_BCtoE_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt_80_170_BCtoE_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_80_170_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt_80_170_BCtoE_TuneZ2star_8TeV_pythia6'},
## {'datasetpath': '/QCD_Pt_170_250_BCtoE_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt_170_250_BCtoE_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_170_250_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt_170_250_BCtoE_TuneZ2star_8TeV_pythia6' },
## {'datasetpath': '/QCD_Pt_250_350_BCtoE_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt_250_350_BCtoE_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_250_350_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet','btag_map':'QCD_Pt_250_350_BCtoE_TuneZ2star_8TeV_pythia6' },
## {'datasetpath': '/QCD_Pt_350_BCtoE_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v2_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_350_BCtoE', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_350_BCtoE_TuneZ2star_8TeV_pythia6' },
## {'datasetpath': '/QCD_Pt_20_30_EMEnriched_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_20_30_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_20_30_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt_30_80_EMEnriched_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt_30_80_EMEnriched_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_30_80_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt_30_80_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt_80_170_EMEnriched_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_80_170_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_80_170_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt_170_250_EMEnriched_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_170_250_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_170_250_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt_250_350_EMEnriched_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_250_350_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_250_350_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt_350_EMEnriched_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'QCD_Pt_350_EMEnriched', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt_350_EMEnriched_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/GJets_HT-200To400_8TeV-madgraph/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'GJets_HT-200To400', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'GJets_HT-200To400_8TeV-madgraph' },
{'datasetpath': '/GJets_HT-400ToInf_8TeV-madgraph/cjenkins-GJets_HT-400ToInf_8TeV-madgraph-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runEleQCDSamples=1', '#sample_name': 'GJets_HT-400ToInf', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'GJets_HT-400ToInf_8TeV-madgraph' },
{'datasetpath': '/QCD_Pt-15to20_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-15to20_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v2_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-15to20_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-15to20_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-20to30_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-20to30_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt-20to30_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-30to50_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-30to50_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt-30to50_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-50to80_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-50to80_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-50to80_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-50to80_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-80to120_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-80to120_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt-80to120_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-120to170_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-120to170_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-120to170_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-120to170_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-170to300_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-170to300_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-170to300_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-170to300_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-300to470_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-300to470_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-300to470_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-300to470_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-470to600_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/cjenkins-QCD_Pt-470to600_MuEnrichedPt5_TuneZ2star_8TeV_pythia6-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-470to600_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet', 'btag_map':'QCD_Pt-470to600_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-600to800_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-600to800_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt-600to800_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
{'datasetpath': '/QCD_Pt-800to1000_MuEnrichedPt5_TuneZ2star_8TeV_pythia6/StoreResults-Summer12_DR53X-PU_S10_START53_V7A-v1_TLBSM_53x_v2-c04f3b4fa74c8266c913b71e0c74901d/USER', 'number_of_jobs':'100', 'pycfg_params':'runData=0 runMuonQCDSamples=1', '#sample_name': 'QCD_Pt-800to1000_MuEnrichedPt5', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet', 'btag_map':'QCD_Pt-800to1000_MuEnrichedPt5_TuneZ2star_8TeV_pythia6' },
# ========== Data Electron ==========
## {'datasetpath':'/SingleElectron/StoreResults-Run2012A-13Jul2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'50', 'pycfg_params':'runData=1','runselection':'190450-193621' ,'lumi_mask':'Cert_190456-196531_8TeV_13Jul2012ReReco_Collisions12_JSON_v2.txt' ,
## '#sample_name': 'Data-Run2012A-13Jul2012-v1','dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012A-recover-06Aug2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'30', 'pycfg_params':'runData=1','runselection':'190782-190949' ,'lumi_mask':'Cert_190782-190949_8TeV_06Aug2012ReReco_Collisions12_JSON.txt' ,
## '#sample_name': 'Data-Run2012A-06Aug2012-v1','dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012B-13Jul2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'300', 'pycfg_params':'runData=1', 'runselection':'193834-196531', 'lumi_mask':'Cert_190456-196531_8TeV_13Jul2012ReReco_Collisions12_JSON_v2.txt' ,
## '#sample_name': 'Data_Run2012B-13Jul2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global//servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012C-24Aug2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'50', 'pycfg_params':'runData=1','runselection':'198022-198523' ,'lumi_mask':'Cert_198022-198523_8TeV_24Aug2012ReReco_Collisions12_JSON.txt' ,
## '#sample_name': 'Data-Run2012C-24Aug2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012C-PromptReco-v2_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'150', 'pycfg_params':'runData=1','runselection':'198934-203755' ,'lumi_mask':'Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON_v2.txt' ,
## '#sample_name': 'Data-Run2012C-PromptReco-v2-a', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012C-PromptReco-v2_TLBSM_53x_v2_extension_v1-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'80', 'pycfg_params':'runData=1','runselection':'198934-203755' ,'lumi_mask':'Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON_v2.txt' ,
## '#sample_name': 'Data-Run2012C-PromptReco-v2-b', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/cjenkins-Run2012C-EcalRecover_11Dec2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'5', 'pycfg_params':'runData=1','runselection':'201191-201191' ,'lumi_mask':'Cert_201191-201191_8TeV_11Dec2012ReReco-recover_Collisions12_JSON.txt' ,
## '#sample_name': 'Data-Run2012C-EcalRecover_11Dec2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_ph_analysis_02/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012D-PromptReco-v1_TLBSM_53x_v2_bugfix-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'300', 'pycfg_params':'runData=1','runselection':'203773-209465' ,'lumi_mask':'Cert_190456-208686_8TeV_PromptReco_Collisions12_JSON.txt' ,
## '#sample_name': 'Data-Run2012D-PromptReco-v1-a', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleElectron/StoreResults-Run2012D-PromptReco-v1_TLBSM_53x_v2_extension_v1-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'100', 'pycfg_params':'runData=1','runselection':'203773-209465' ,'lumi_mask':'Cert_190456-208686_8TeV_PromptReco_Collisions12_JSON.txt' ,
## '#sample_name': 'Data-Run2012D-PromptReco-v1-b', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
##=========== Data Muon ===============
## {'datasetpath':'/SingleMu/StoreResults-SingleMu_Run2012A-13Jul2012-v1_TLBSM_53x_v2_jsonfix-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'40', 'pycfg_params':'runData=1','runselection':'190456-193621','lumi_mask':'Cert_190456-196531_8TeV_13Jul2012ReReco_Collisions12_JSON_MuonPhys_v3.txt' ,
## '#sample_name': 'Data-Run2012A-13Jul2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012A-recover-06Aug2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'50', 'pycfg_params':'runData=1','runselection':'190782-190949','lumi_mask':'Cert_190782-190949_8TeV_06Aug2012ReReco_Collisions12_JSON_MuonPhys.txt' ,
## '#sample_name': 'Data-Run2012A-06Aug2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012B-13Jul2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'300', 'pycfg_params':'runData=1','runselection':'193834-196531','lumi_mask':'Cert_190456-196531_8TeV_13Jul2012ReReco_Collisions12_JSON_MuonPhys_v3.txt' ,
## '#sample_name': 'Data-Run2012B-13Jul2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012C-24Aug2012-v1_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'50', 'pycfg_params':'runData=1','runselection':'197770-198913','lumi_mask':'Cert_198022-198523_8TeV_24Aug2012ReReco_Collisions12_JSON_MuonPhys.txt' ,
## '#sample_name': 'Data-Run2012C-24Aug2012-v1', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012C-PromptReco-v2_TLBSM_53x_v2-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'150', 'pycfg_params':'runData=1','runselection':'198934-203755','lumi_mask':'Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON_MuonPhys_v2.txt' ,
## '#sample_name': 'Data-Run2012C-PromptReco-v2-a', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012C-PromptReco-v2_TLBSM_53x_v2-646f7563e9ae6f48814faa1c250f042a/USER',
## 'number_of_jobs':'150', 'pycfg_params':'runData=1','runselection':'198934-203755','lumi_mask':'Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON_MuonPhys_v2.txt' ,
## '#sample_name': 'Data-Run2012C-PromptReco-v2-b', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet'},
## {'datasetpath':'/SingleMu/StoreResults-Run2012C-PromptReco-v2_TLBSM_53x_v2_extension_v1-e3fb55b810dc7a0811f4c66dfa2267c9/USER',
## 'number_of_jobs':'50', 'pycfg_params':'runData=1','runselection':'198934-203755','lumi_mask':'Cert_190456-203002_8TeV_PromptReco_Collisions12_JSON_MuonPhys_v2.txt' ,
## '#sample_name': 'Data-Run2012C-PromptReco-v2-c', 'dbs_url':'http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet' },
]
for job in joblist:
    config = dummy_config
    # common input files
    config.set('USER', 'additional_input_files', 'Jec12_V3_L1FastJet_AK5PFchs.txt,Jec12_V3_L2Relative_AK5PFchs.txt,Jec12_V3_L3Absolute_AK5PFchs.txt,Jec12_V3_L2L3Residual_AK5PFchs.txt,Jec12_V3_Uncertainty_AK5PFchs.txt,Jec12_V3_MC_L1FastJet_AK5PFchs.txt,Jec12_V3_MC_L2Relative_AK5PFchs.txt,Jec12_V3_MC_L3Absolute_AK5PFchs.txt,Jec12_V3_MC_L2L3Residual_AK5PFchs.txt,Jec12_V3_MC_Uncertainty_AK5PFchs.txt,PUMC_dist_flat10.root,PUData_finebin_dist.root')
    # for MC we don't need this
    config.remove_option('CMSSW','runselection')
    config.remove_option('CMSSW','lumi_mask')
    # plug for data
    if 'Data' in job['#sample_name']:
        config.remove_option('CMSSW','total_number_of_events')
        config.set('CMSSW','total_number_of_lumis','-1')
    # set the CMSSW parameters
    for p in job:
        config.set('CMSSW', p, job[p])
        #if 'cms_dbs_prod_global'in job[p]:
        #config.set('CRAB', 'scheduler', 'remoteGlidein')
    if 'Data' not in job['#sample_name']:
        btagMap = job['datasetpath'].split('/')[1]
        if 'btag_map' in job.keys():
            btagMap = job['btag_map']
        config.set('CMSSW', 'pycfg_params', config.get('CMSSW', 'pycfg_params')+" btagMap="+btagMap)
    # specify the name in case of data and signal
    if 'Data' in job['#sample_name'] or 'Bprime' in job['#sample_name']:
        ui_working_dir = working_dir + job['datasetpath'].split('/')[1] + '_' + job['#sample_name'] + dir_suffix
        if 'Data' in job['#sample_name']:
            publish_data_name = 'BPrimeEDMNtuples_53x_v2_'+job['#sample_name']
    elif 'WJets_v2' in job['#sample_name']:
        publish_data_name = 'BPrimeEDMNtuples_53x_v2_WJets_v2'
        ui_working_dir = working_dir + job['datasetpath'].split('/')[1] + '_v2'+ dir_suffix
    elif 'WJets_v1' in job['#sample_name']:
        publish_data_name = 'BPrimeEDMNtuples_53x_v2_WJets_v1'
        ui_working_dir = working_dir + job['datasetpath'].split('/')[1] + '_v1'+ dir_suffix
    else:
        ui_working_dir = working_dir + job['datasetpath'].split('/')[1] + dir_suffix
        publish_data_name = 'BPrimeEDMNtuples_53x_v2'
    config.set('USER','ui_working_dir',ui_working_dir)
    config.set('USER','publish_data_name',publish_data_name)
    print 'ui', ui_working_dir
    print 'pub name', publish_data_name
    # write cfg file and run crab
    cfgname = 'crab_'+job['#sample_name']+'.cfg'
    with open(cfgname, 'wb') as configfile:
        config.write(configfile)
    s = 'crab -create -cfg ' + cfgname
    print s
    subprocess.call( [s], shell=True )
    s = 'crab -submit -c ' + ui_working_dir
    print s
    subprocess.call( [s], shell=True )
| [
"andrew.m.melo@vanderbilt.edu"
] | andrew.m.melo@vanderbilt.edu |
44338e41bb946806695a6fb18d70a1d6fa64fd0e | 5e989188eb0cfde46f57e033679bd7817eae6620 | /liteeth/phy/common.py | a290dfb1bd84798bfb3c5ab3213b22d8bb6a7865 | [
"BSD-2-Clause"
] | permissive | telantan/liteeth | 1f85b086a7740013f4adfcecc92644fd147085e3 | 73bd27b506211f12f8c515ad93a3cc65a3624dc3 | refs/heads/master | 2020-12-13T16:37:47.229699 | 2020-01-16T14:29:49 | 2020-01-16T14:46:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,186 | py | # This file is Copyright (c) 2015-2018 Florent Kermarrec <florent@enjoy-digital.fr>
# License: BSD
from liteeth.common import *
from migen.genlib.cdc import MultiReg
from migen.fhdl.specials import Tristate
class LiteEthPHYHWReset(Module):
    def __init__(self):
        self.reset = Signal()

        # # #

        counter = Signal(max=512)
        counter_done = Signal()
        counter_ce = Signal()
        self.sync += If(counter_ce, counter.eq(counter + 1))
        self.comb += [
            counter_done.eq(counter == 256),
            counter_ce.eq(~counter_done),
            self.reset.eq(~counter_done)
        ]


class LiteEthPHYMDIO(Module, AutoCSR):
    def __init__(self, pads):
        self._w = CSRStorage(3, name="w")
        self._r = CSRStatus(1, name="r")

        # # #

        data_w = Signal()
        data_oe = Signal()
        data_r = Signal()
        self.comb += [
            pads.mdc.eq(self._w.storage[0]),
            data_oe.eq(self._w.storage[1]),
            data_w.eq(self._w.storage[2])
        ]
        self.specials += [
            MultiReg(data_r, self._r.status[0]),
            Tristate(pads.mdio, data_w, data_oe, data_r)
        ]
| [
"florent@enjoy-digital.fr"
] | florent@enjoy-digital.fr |
521703d00dde8fdf3755df816b625447ecc002e5 | 4a0c047f73458d089dc62bc2be7c3bd098a08ee2 | /data_structor/datetime_format.py | b7c87e2848cb10ed5974e929f4beb7d26f13f2f4 | [] | no_license | sunghyungi/pandas_study | b53e53d88abe733b292c06e2658e2fa21428ffca | b861724995914a4a4644c8b08b3b38070d5abc51 | refs/heads/master | 2020-11-28T02:05:22.565760 | 2020-01-08T03:09:37 | 2020-01-08T03:09:37 | 229,675,769 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 898 | py | import pandas as pd
df = pd.read_csv('stock-data.csv')
pd.set_option('display.max_columns', 15)
pd.set_option('display.max_colwidth', 20)
pd.set_option('display.unicode.east_asian_width', True)
pd.set_option('display.width', 600)
print("# Convert string date data to pandas Timestamp")
df['new_Date'] = pd.to_datetime(df['Date'])
print(df, '\n')
print("# Use the dt accessor to split the new_Date column's date info into Year, Month, and Day")
df['Year'] = df['new_Date'].dt.year
df['Month'] = df['new_Date'].dt.month
df['Day'] = df['new_Date'].dt.day
print(df, '\n')
print("# Convert Timestamp to Period to change the date representation")
df['Date_yr'] = df['new_Date'].dt.to_period(freq='A')
df['Date_m'] = df['new_Date'].dt.to_period(freq='M')
print(df, '\n')
print("# Set the desired column as the new row index")
df.set_index('Date_m', inplace=True)
print(df) | [
"tjdgusrlek@gmail.com"
] | tjdgusrlek@gmail.com |
04f219dbbec76539a6aa9dbf1c473f2c37172866 | f07a42f652f46106dee4749277d41c302e2b7406 | /Data Set/bug-fixing-5/2c284017d48ab0534905f0d287f801db8ba2f673-<fit>-fix.py | 351c29c0af4c2a5ffe9fd52cf6053301e08289d7 | [] | no_license | wsgan001/PyFPattern | e0fe06341cc5d51b3ad0fe29b84098d140ed54d1 | cc347e32745f99c0cd95e79a18ddacc4574d7faa | refs/heads/main | 2023-08-25T23:48:26.112133 | 2021-10-23T14:11:22 | 2021-10-23T14:11:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,290 | py | def fit(self, x=None, y=None, batch_size=32, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, **kwargs):
    'Trains the model for a fixed number of epochs (iterations on a dataset).\n\n # Arguments\n x: Numpy array of training data,\n or list of Numpy arrays if the model has multiple inputs.\n If all inputs in the model are named,\n you can also pass a dictionary\n mapping input names to Numpy arrays.\n y: Numpy array of target data,\n or list of Numpy arrays if the model has multiple outputs.\n If all outputs in the model are named,\n you can also pass a dictionary\n mapping output names to Numpy arrays.\n batch_size: integer. Number of samples per gradient update.\n epochs: integer, the number of times to iterate\n over the training data arrays.\n verbose: 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = verbose, 2 = one log line per epoch.\n callbacks: list of callbacks to be called during training.\n See [callbacks](/callbacks).\n validation_split: float between 0 and 1:\n fraction of the training data to be used as validation data.\n The model will set apart this fraction of the training data,\n will not train on it, and will evaluate\n the loss and any model metrics\n on this data at the end of each epoch.\n validation_data: data on which to evaluate\n the loss and any model metrics\n at the end of each epoch.\n The model will not\n be trained on this data.\n This could be a tuple (x_val, y_val)\n or a tuple (x_val, y_val, val_sample_weights).\n shuffle: boolean, whether to shuffle the training data\n before each epoch.\n class_weight: optional dictionary mapping\n class indices (integers) to\n a weight (float) to apply to the model\'s loss for the samples\n from this class during training.\n This can be useful to tell the model to "pay more attention" to\n samples from an under-represented class.\n sample_weight: optional array of the same length as x, containing\n weights to apply to the model\'s loss for each sample.\n In the case of temporal data, you can pass a 2D array\n with shape (samples, sequence_length),\n to apply a different weight to every timestep of every sample.\n In this case you should make sure to specify\n sample_weight_mode="temporal" in compile().\n initial_epoch: epoch at which to start training\n (useful for resuming a previous training run)\n\n # Returns\n A `History` instance. Its `history` attribute contains\n all information collected during training.\n\n # Raises\n ValueError: In case of mismatch between the provided input data\n and what the model expects.\n '
    if ('nb_epoch' in kwargs):
        warnings.warn('The `nb_epoch` argument in `fit` has been renamed `epochs`.', stacklevel=2)
        epochs = kwargs.pop('nb_epoch')
    if kwargs:
        raise TypeError(('Unrecognized keyword arguments: ' + str(kwargs)))
    (x, y, sample_weights) = self._standardize_user_data(x, y, sample_weight=sample_weight, class_weight=class_weight, check_batch_axis=False, batch_size=batch_size)
    if validation_data:
        do_validation = True
        if (len(validation_data) == 2):
            (val_x, val_y) = validation_data
            val_sample_weight = None
        elif (len(validation_data) == 3):
            (val_x, val_y, val_sample_weight) = validation_data
        else:
            raise ValueError(('When passing validation_data, it must contain 2 (x_val, y_val) or 3 (x_val, y_val, val_sample_weights) items, however it contains %d items' % len(validation_data)))
        (val_x, val_y, val_sample_weights) = self._standardize_user_data(val_x, val_y, sample_weight=val_sample_weight, check_batch_axis=False, batch_size=batch_size)
        self._make_test_function()
        val_f = self.test_function
        if (self.uses_learning_phase and (not isinstance(K.learning_phase(), int))):
            val_ins = (((val_x + val_y) + val_sample_weights) + [0.0])
        else:
            val_ins = ((val_x + val_y) + val_sample_weights)
    elif (validation_split and (0.0 < validation_split < 1.0)):
        do_validation = True
        split_at = int((len(x[0]) * (1.0 - validation_split)))
        (x, val_x) = (_slice_arrays(x, 0, split_at), _slice_arrays(x, split_at))
        (y, val_y) = (_slice_arrays(y, 0, split_at), _slice_arrays(y, split_at))
        (sample_weights, val_sample_weights) = (_slice_arrays(sample_weights, 0, split_at), _slice_arrays(sample_weights, split_at))
        self._make_test_function()
        val_f = self.test_function
        if (self.uses_learning_phase and (not isinstance(K.learning_phase(), int))):
            val_ins = (((val_x + val_y) + val_sample_weights) + [0.0])
        else:
            val_ins = ((val_x + val_y) + val_sample_weights)
    else:
        do_validation = False
        val_f = None
        val_ins = None
    if (self.uses_learning_phase and (not isinstance(K.learning_phase(), int))):
        ins = (((x + y) + sample_weights) + [1.0])
    else:
        ins = ((x + y) + sample_weights)
    self._make_train_function()
    f = self.train_function
    out_labels = self._get_deduped_metrics_names()
    if do_validation:
        callback_metrics = (copy.copy(out_labels) + [('val_' + n) for n in out_labels])
    else:
        callback_metrics = copy.copy(out_labels)
    return self._fit_loop(f, ins, out_labels=out_labels, batch_size=batch_size, epochs=epochs, verbose=verbose, callbacks=callbacks, val_f=val_f, val_ins=val_ins, shuffle=shuffle, callback_metrics=callback_metrics, initial_epoch=initial_epoch) | [
"dg1732004@smail.nju.edu.cn"
] | dg1732004@smail.nju.edu.cn |
63b51da8acb55197f2b5cf0b3f435534ad187add | bf21cd0ef7a94fa106ccd9f91a4bbfdcda7f94ed | /Deep-Learning/scratch/chapter02/ex01.py | 2c5dc92731ed2fa6ccaecca539c46eeb6749499f | [] | no_license | juneglee/Deep_Learning | fdf8cae1b962aaa0ce557cb53f78a22b6d5ae1e8 | 17a448cf6a7c5b61b967dd78af3d328d63378205 | refs/heads/master | 2023-07-15T03:02:55.739619 | 2021-08-19T14:04:55 | 2021-08-19T14:04:55 | 273,253,872 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 350 | py | # Implementing the sigmoid function
import numpy as np
import matplotlib.pylab as plt
def sigmoid(x):
    return 1/(1 + np.exp(-x))

x = np.arange(-5.0, 5.0, 0.1)
y = sigmoid(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()

# Sigmoid = an S-shaped function
# Both the step function and the sigmoid function are nonlinear functions.
| [
"klcpop1@gmail.com"
] | klcpop1@gmail.com |
555f97472640862d217d8dc672bc2c246ae663fb | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_158/624.py | 2a3abc64d2cdf70a551ac5e03e6ac90bbaed9216 | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,118 | py | #!/usr/bin/python
import requests, logging, string, sys
def createOutput(result):
    f = open(sys.argv[2], "w")
    for i in range(0, len(result)):
        f.write("Case #" + str(i + 1) + ": " + result[i] + "\n")
    f.close()
    return

def processResults(X, R, C):
    volume = R * C
    if volume % X != 0:
        return "RICHARD"
    if X == 1 or X == 2:
        return "GABRIEL"
    if X == 3:
        if R == 1 or C == 1:
            return "RICHARD"
        else:
            return "GABRIEL"
    if X == 4:
        if (R == 4 and C == 4) or (R == 4 and C == 3) or (R == 3 and C == 4):
            return "GABRIEL"
        else:
            return "RICHARD"

def processInput(inputlines):
    result = []
    for line in inputlines:
        values = line.split(' ')
        X = int(values[0])
        R = int(values[1])
        C = int(values[2])
        result.append(processResults(X, R, C))
    return result

def readInput():
    inputlines = []
    f = open(sys.argv[1])
    testcases = int(f.readline().strip())
    for i in range(0, testcases):
        line = f.readline().strip()
        inputlines.append(line)
    f.close()
    return inputlines

if __name__ == '__main__':
    inputlines = readInput()
    result = processInput(inputlines)
    createOutput(result)
    sys.exit()
| [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
cfd620bb6fbedecd779cce1cc00f2d22eddeb425 | b22588340d7925b614a735bbbde1b351ad657ffc | /athena/Generators/MadGraphModels/python/models/HeavyHiggsTHDM/__init__.py | 948b99c911ee93b63f7295fe1003ccaa5fc58319 | [] | no_license | rushioda/PIXELVALID_athena | 90befe12042c1249cbb3655dde1428bb9b9a42ce | 22df23187ef85e9c3120122c8375ea0e7d8ea440 | refs/heads/master | 2020-12-14T22:01:15.365949 | 2020-01-19T03:59:35 | 2020-01-19T03:59:35 | 234,836,993 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 905 | py |
import particles
import couplings
import lorentz
import parameters
import vertices
import coupling_orders
import write_param_card
import propagators
import function_library
all_particles = particles.all_particles
all_vertices = vertices.all_vertices
all_couplings = couplings.all_couplings
all_lorentz = lorentz.all_lorentz
all_parameters = parameters.all_parameters
all_orders = coupling_orders.all_orders
all_functions = function_library.all_functions
all_propagators = propagators.all_propagators
try:
import decays
except ImportError:
pass
else:
all_decays = decays.all_decays
try:
import form_factors
except ImportError:
pass
else:
all_form_factors = form_factors.all_form_factors
try:
import CT_vertices
except ImportError:
pass
else:
all_CTvertices = CT_vertices.all_CTvertices
gauge = [0]
__author__ = "N. Christensen, C. Duhr, B. Fuks"
__date__ = "21. 11. 2012"
__version__= "1.4.5"
| [
"rushioda@lxplus754.cern.ch"
] | rushioda@lxplus754.cern.ch |
3f6b359aac70741087ee67c3d384da06abd1b2ac | ac45b55915e634815922329195c203b1e810458c | /minionOC1304_9.py | f7041853ece9ca31941c3cde8472cec562bb2397 | [] | no_license | mj1e16lsst/iridisPeriodicNew | 96a8bfef0d09f13e18adb81b89e25ae885e30bd9 | dc0214b1e702b454e0cca67d4208b2113e1fbcea | refs/heads/master | 2020-03-23T15:01:23.583944 | 2018-07-23T18:58:59 | 2018-07-23T18:58:59 | 141,715,292 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 12,180 | py | from operator import add
#from astropy import units as u
#from astropy.coordinates import SkyCoord
#from astropy.stats import LombScargle
#from gatspy.periodic import LombScargleFast
from functools import partial
#from gatspy import periodic
#import matplotlib.pyplot as plt
#from matplotlib.font_manager import FontProperties
import lomb_scargle_multiband as periodic
from multiprocessing import Pool
import numpy as np
import os
#from sqlite3 import *
import random
from random import shuffle
from random import randint
import Observations
import Magnitudes
# In[13]:
#conn = connect('minion_1016_sqlite.db')
#conn = connect('astro_lsst_01_1004_sqlite.db')
#conn = connect('minion_1020_sqlite.db')
# In[14]:
# LSST zero points u,g,r,i,z,y
zeroPoints = [0,26.5,28.3,28.13,27.79,27.4,26.58]
FWHMeff = [0.8,0.92,0.87,0.83,0.80,0.78,0.76] # arcmins?
pixelScale = 0.2
readOut = 12.7
sigSys = 0.005
flareperiod = 4096
flarecycles = 10
dayinsec=86400
background = 40
# sat mag u,g,r,i,z,y=14.7,15.7,15.8,15.8,15.3 and 13.9
# start date 59580.033829 end date + 10 years
#maglist=[20]*7
lim = [0, 23.5, 24.8, 24.4, 23.9, 23.3, 22.1] # limiting magnitude ugry
sat = [0, 14.7, 15.7, 15.8, 15.8, 15.3, 13.9] # sat mag as above
# In[15]:
looooops = 10000
maglength = 20
freqlength = 20
processors = 20
startnumber = 0 + 9
endnumber = startnumber + 1
#observingStrategy = 'minion'
observingStrategy = 'astroD'
#observingStrategy = 'panstars'
inFile = '/home/mj1e16/periodic/in'+str(startnumber)+'.txt'
outFile = '/home/mj1e16/periodic/outminionOC1304'+str(startnumber)+'.txt'
#inFile = '/home/ubuntu/vagrant/'+observingStrategy+'/in'+observingStrategy+'KtypefullresultsFile'+str(startnumber)+'.txt'
#outFile = '/home/ubuntu/vagrant/'+observingStrategy+'/out'+observingStrategy+'KtypefullresultsFile'+str(startnumber)+'.txt'
obs = Observations.obsminionOC1304
for y in range(len(obs)):
for x in range(len(obs[y])):
obs[y][x] = obs[y][x] + ((random.random()*2.)-1.)
# In[19]:
def magUncertainy(Filter, objectmag, exposuretime,background, FWHM): # b is background counts per pixel
countsPS = 10**((Filter-objectmag)/2.5)
counts = countsPS * exposuretime
uncertainty = 1/(counts/((counts/2.3)+(((background/2.3)+(12.7**2))*2.266*((FWHM/0.2)**2)))**0.5) # gain assumed to be 1
return uncertainty
#from lsst should have got the website! https://smtn-002.lsst.io/
# In[20]:
def averageFlux(observations, Frequency, exptime):
b = [0]*len(observations)
for seconds in range(0, exptime):
a = [np.sin((2*np.pi*(Frequency))*(x+(seconds/(3600*24)))) for x in observations] # optical modulation
b = map(add, a, b)
c = [z/exptime for z in b]
return c
def Flux(observations,Frequency,exptime):
a = [np.sin((2*np.pi*(Frequency)*x)) for x in observations]
return a
# In[21]:
def ellipsoidalFlux(observations, Frequency,exptime):
period = 1/(Frequency)
phase = [(x % (2*period)) for x in observations]
b = [0]*len(observations)
for seconds in range(0, exptime):
a = [np.sin((2*np.pi*(Frequency))*(x+(seconds/(3600*24)))) for x in observations] # optical modulation
b = map(add, a, b)
c = [z/exptime for z in b]
for x in range(0,len(phase)):
if (phase[x]+(1.5*period)) < (3*period):
c[x] = c[x]*(1./3.)
else:
c[x] = c[x]*(2./3.)
return c
## this is doing something but not the right something, come back to it
# In[22]:
def flaring(B, length, dayinsec=86400,amplitude=1):
global flareMag, minutes
fouriers = np.linspace(0.00001,0.05,(dayinsec/30))
logF = [np.log(x) for x in fouriers] # start at 30 go to a day in 30 sec increments
real = [random.gauss(0,1)*((1/x)**(B/2)) for x in fouriers] #random.gauss(mu,sigma) to change for values from zurita
# imaginary = [random.gauss(0,1)*((1/x)**(B/2)) for x in fouriers]
IFT = np.fft.ifft(real)
seconds = np.linspace(0,dayinsec, (dayinsec/30)) # the day in 30 sec increments
minutes = [x for x in seconds]
minimum = (np.max(-IFT))
positive = [x + minimum for x in IFT] # what did this even achieve? it helped with normalisation!
normalised = [x/(np.mean(positive)) for x in positive] # find normalisation
normalisedmin = minimum/(np.mean(positive))
normalised = [x - normalisedmin for x in normalised]
flareMag = [amplitude * x for x in normalised] # normalise to amplitude
logmins = [np.log(d) for d in minutes] # for plotting?
# plt.plot(minutes,flareMag)
# plt.title('lightcurve')
# plt.show()
return flareMag
# In[55]:
def lombScargle(frequencyRange,objectmag=20,loopNo=looooops,df=0.001,fmin=0.001,numsteps=100000,modulationAmplitude=0.1,Nquist=200): # frequency range and object mag in list
#global totperiod, totmperiod, totpower, date, amplitude, frequency, periods, LSperiod, power, mag, error, SigLevel
results = {}
totperiod = []
totmperiod = []
totpower = [] # reset
SigLevel = []
filterletter = ['o','u','g','r','i','z','y']
period = 1/(frequencyRange)
if period > 0.5:
numsteps = 10000
elif period > 0.01:
numsteps = 100000
else:
numsteps = 200000
freqs = fmin + df * np.arange(numsteps) # for manuel
allobsy, uobsy, gobsy, robsy, iobsy, zobsy, yobsy = [], [], [], [], [], [], [] #reset
measuredpower = [] # reset
y = [allobsy, uobsy, gobsy, robsy, iobsy, zobsy, yobsy] # for looping only
for z in range(1, len(y)):
        #y[z] = averageFlux(obs[z], frequencyRange[frange], 30) # amplitude calculation for observations and frequency range
y[z] = ellipsoidalFlux(obs[z], frequencyRange,30)
y[z] = [modulationAmplitude * t for t in y[z]] # scaling
for G in range(0, len(y[z])):
flareMinute = int(round((obs[z][G]*24*60*2)%((dayinsec/(30*2))*flarecycles)))
            y[z][G] = y[z][G] + longflare[flareMinute] # add flares; swapped to seconds, but not changing the name introduces fewer bugs
date = []
amplitude = []
mag = []
error = []
filts = []
for z in range(1, len(y)):
if objectmag[z] > sat[z] and objectmag[z] < lim[z]:
#date.extend([x for x in obs[z]])
date.extend(obs[z])
amplitude = [t + random.gauss(0,magUncertainy(zeroPoints[z],objectmag[z],30,background,FWHMeff[z])) for t in y[z]] # scale amplitude and add poisson noise
mag.extend([objectmag[z] - t for t in amplitude]) # add actual mag
error.extend([sigSys + magUncertainy(zeroPoints[z],objectmag[z],30,background,FWHMeff[z])+0.2]*len(amplitude))
filts.extend([filterletter[z]]*len(amplitude))
phase = [(day % (period*2))/(period*2) for day in obs[z]]
pmag = [objectmag[z] - t for t in amplitude]
# plt.plot(phase, pmag, 'o', markersize=4)
# plt.xlabel('Phase')
# plt.ylabel('Magnitude')
# plt.gca().invert_yaxis()
# plt.title('filter'+str(z)+', Period = '+str(period))#+', MeasuredPeriod = '+str(LSperiod)+', Periodx20 = '+(str(period*20)))
# plt.show()
# plt.plot(date, mag, 'o')
# plt.xlim(lower,higher)
# plt.xlabel('time (days)')
# plt.ylabel('mag')
# plt.gca().invert_yaxis()
# plt.show()
model = periodic.LombScargleMultibandFast(fit_period=False)
model.fit(date, mag, error, filts)
power = model.score_frequency_grid(fmin, df, numsteps)
if period > 10.:
model.optimizer.period_range=(10, 110)
elif period > 0.51:
model.optimizer.period_range=(0.5, 10)
elif period > 0.011:
model.optimizer.period_range=(0.01, 0.52)
else:
model.optimizer.period_range=(0.0029, 0.012)
LSperiod = model.best_period
if period < 10:
higher = 10
else:
higher = 100
# fig, ax = plt.subplots()
# ax.plot(1./freqs, power)
# ax.set(xlim=(0, higher), ylim=(0, 1.2),
# xlabel='period (days)',
# ylabel='Lomb-Scargle Power',
# title='Period = '+str(period)+', MeasuredPeriod = '+str(LSperiod)+', Periodx20 = '+(str(period*20)));
# plt.show()
phase = [(day % (period*2))/(period*2) for day in date]
#idealphase = [(day % (period*2))/(period*2) for day in dayZ]
#print(len(phase),len(idealphase))
#plt.plot(idealphase,Zmag,'ko',)
# plt.plot(phase, mag, 'o', markersize=4)
# plt.xlabel('Phase')
# plt.ylabel('Magnitude')
# plt.gca().invert_yaxis()
# plt.title('Period = '+str(period)+', MeasuredPeriod = '+str(LSperiod)+', Periodx20 = '+(str(period*20)))
# plt.show()
#print(period, LSperiod, period*20)
# print('actualperiod', period, 'measured period', np.mean(LSperiod),power.max())# 'power',np.mean(power[maxpos]))
# print(frequencyRange[frange], 'z', z)
# totperiod.append(period)
# totmperiod.append(np.mean(LSperiod))
# totpower.append(power.max())
mpower = power.max()
measuredpower.append(power.max()) # should this correspond to period power and not max power?
maxpower = []
counter = 0.
for loop in range(0,loopNo):
random.shuffle(date)
model = periodic.LombScargleMultibandFast(fit_period=False)
model.fit(date, mag, error, filts)
power = model.score_frequency_grid(fmin, df, numsteps)
maxpower.append(power.max())
for X in range(0, len(maxpower)):
if maxpower[X] > measuredpower[-1]:
counter = counter + 1.
Significance = (1.-(counter/len(maxpower)))
#print('sig', Significance, 'counter', counter)
SigLevel.append(Significance)
#freqnumber = FrangeLoop.index(frequencyRange)
#magnumber = MagRange.index(objectmag)
#print(fullmaglist)
#listnumber = (magnumber*maglength)+freqnumber
# print(listnumber)
# measuredperiodlist[listnumber] = LSperiod
# periodlist[listnumber] = period
# powerlist[listnumber] = mpower
# siglist[listnumber] = Significance
# fullmaglist[listnumber] = objectmag
# results order, 0=mag,1=period,2=measuredperiod,3=siglevel,4=power,5=listnumber
results[0] = objectmag[3]
results[1] = period
results[2] = LSperiod
results[3] = Significance
results[4] = mpower
results[5] = 0#listnumber
return results
# In[24]:
#findObservations([(630,)])
#remove25(obs)
#averageFlux(obs[0], 1, 30)
longflare = []
for floop in range(0,flarecycles):
flareone = flaring(-1, flareperiod, amplitude=0.3)
flareone = flareone[0:1440]
positiveflare = [abs(x) for x in flareone]
longflare.extend(positiveflare)
# In[25]:
PrangeLoop = np.logspace(-2.5,2,freqlength)
FrangeLoop = [(1/x) for x in PrangeLoop]
# In[26]:
# reset results file
with open(inFile,'w') as f:
f.write('fullmaglist \n\n periodlist \n\n measuredperiodlist \n\n siglist \n\n powerlist \n\n listnumberlist \n\n end of file')
# In[57]:
results = []
fullmeasuredPeriod = []
fullPeriod = []
fullPower = []
fullSigLevel = []
fullMag = []
MagRangearray = np.linspace(17,24,maglength)
MagRange = [x for x in MagRangearray]
maglist = []
for x in range(len(MagRange)):
maglist.append([MagRange[x]]*7)
newlist = Magnitudes.mag1304
pool = Pool(processors)
for h in range(startnumber,endnumber):
print(newlist[h])
results.append(pool.map(partial(lombScargle, objectmag=newlist[h]),FrangeLoop))
twoDlist = [[],[],[],[],[],[]]
for X in range(len(results)):
for Y in range(len(results[X])):
twoDlist[0].append(results[X][Y][0])
twoDlist[1].append(results[X][Y][1])
twoDlist[2].append(results[X][Y][2])
twoDlist[3].append(results[X][Y][3])
twoDlist[4].append(results[X][Y][4])
twoDlist[5].append(results[X][Y][5])
with open(inFile, 'r') as istr:
with open(outFile,'w') as ostr:
for i, line in enumerate(istr):
# Get rid of the trailing newline (if any).
line = line.rstrip('\n')
if i % 2 != 0:
line += str(twoDlist[int((i-1)/2)])+','
ostr.write(line+'\n')
| [
"mj1e16@soton.ac.uk"
] | mj1e16@soton.ac.uk |
d36ac047086b61bb183185f54828352106cbdb9e | c7a94e7b1956c79f3c390508e60902a6bb56f3c5 | /xlsxwriter/core.py | 905736f039167206a9cfb1549c151f8e084c2bb7 | [
"BSD-2-Clause"
] | permissive | alexander-beedie/XlsxWriter | 635b68d98683efb8404d58f5d896f8e6d433e379 | 03f76666df9ce5ac0ab6bb8ff866d424dc8fea58 | refs/heads/main | 2023-05-27T15:33:36.911705 | 2023-05-04T00:00:04 | 2023-05-04T00:00:04 | 144,862,072 | 0 | 0 | null | 2018-08-15T14:15:51 | 2018-08-15T14:15:50 | null | UTF-8 | Python | false | false | 5,656 | py | ###############################################################################
#
# Core - A class for writing the Excel XLSX Worksheet file.
#
# SPDX-License-Identifier: BSD-2-Clause
# Copyright 2013-2023, John McNamara, jmcnamara@cpan.org
#
# Standard packages.
from datetime import datetime
# Package imports.
from . import xmlwriter
class Core(xmlwriter.XMLwriter):
"""
A class for writing the Excel XLSX Core file.
"""
###########################################################################
#
# Public API.
#
###########################################################################
def __init__(self):
"""
Constructor.
"""
super(Core, self).__init__()
self.properties = {}
###########################################################################
#
# Private API.
#
###########################################################################
def _assemble_xml_file(self):
# Assemble and write the XML file.
# Write the XML declaration.
self._xml_declaration()
self._write_cp_core_properties()
self._write_dc_title()
self._write_dc_subject()
self._write_dc_creator()
self._write_cp_keywords()
self._write_dc_description()
self._write_cp_last_modified_by()
self._write_dcterms_created()
self._write_dcterms_modified()
self._write_cp_category()
self._write_cp_content_status()
self._xml_end_tag("cp:coreProperties")
# Close the file.
self._xml_close()
def _set_properties(self, properties):
# Set the document properties.
self.properties = properties
def _datetime_to_iso8601_date(self, date):
# Convert to a ISO 8601 style "2010-01-01T00:00:00Z" date.
if not date:
date = datetime.utcnow()
return date.strftime("%Y-%m-%dT%H:%M:%SZ")
###########################################################################
#
# XML methods.
#
###########################################################################
def _write_cp_core_properties(self):
# Write the <cp:coreProperties> element.
xmlns_cp = (
"http://schemas.openxmlformats.org/package/2006/"
+ "metadata/core-properties"
)
xmlns_dc = "http://purl.org/dc/elements/1.1/"
xmlns_dcterms = "http://purl.org/dc/terms/"
xmlns_dcmitype = "http://purl.org/dc/dcmitype/"
xmlns_xsi = "http://www.w3.org/2001/XMLSchema-instance"
attributes = [
("xmlns:cp", xmlns_cp),
("xmlns:dc", xmlns_dc),
("xmlns:dcterms", xmlns_dcterms),
("xmlns:dcmitype", xmlns_dcmitype),
("xmlns:xsi", xmlns_xsi),
]
self._xml_start_tag("cp:coreProperties", attributes)
def _write_dc_creator(self):
# Write the <dc:creator> element.
data = self.properties.get("author", "")
self._xml_data_element("dc:creator", data)
def _write_cp_last_modified_by(self):
# Write the <cp:lastModifiedBy> element.
data = self.properties.get("author", "")
self._xml_data_element("cp:lastModifiedBy", data)
def _write_dcterms_created(self):
# Write the <dcterms:created> element.
date = self.properties.get("created", datetime.utcnow())
xsi_type = "dcterms:W3CDTF"
date = self._datetime_to_iso8601_date(date)
attributes = [
(
"xsi:type",
xsi_type,
)
]
self._xml_data_element("dcterms:created", date, attributes)
def _write_dcterms_modified(self):
# Write the <dcterms:modified> element.
date = self.properties.get("created", datetime.utcnow())
xsi_type = "dcterms:W3CDTF"
date = self._datetime_to_iso8601_date(date)
attributes = [
(
"xsi:type",
xsi_type,
)
]
self._xml_data_element("dcterms:modified", date, attributes)
def _write_dc_title(self):
# Write the <dc:title> element.
if "title" in self.properties:
data = self.properties["title"]
else:
return
self._xml_data_element("dc:title", data)
def _write_dc_subject(self):
# Write the <dc:subject> element.
if "subject" in self.properties:
data = self.properties["subject"]
else:
return
self._xml_data_element("dc:subject", data)
def _write_cp_keywords(self):
# Write the <cp:keywords> element.
if "keywords" in self.properties:
data = self.properties["keywords"]
else:
return
self._xml_data_element("cp:keywords", data)
def _write_dc_description(self):
# Write the <dc:description> element.
if "comments" in self.properties:
data = self.properties["comments"]
else:
return
self._xml_data_element("dc:description", data)
def _write_cp_category(self):
# Write the <cp:category> element.
if "category" in self.properties:
data = self.properties["category"]
else:
return
self._xml_data_element("cp:category", data)
def _write_cp_content_status(self):
# Write the <cp:contentStatus> element.
if "status" in self.properties:
data = self.properties["status"]
else:
return
self._xml_data_element("cp:contentStatus", data)
| [
"jmcnamara@cpan.org"
] | jmcnamara@cpan.org |
c7f48720bb0d186381903465c450342a3c0e979a | 2b82b45edf199488e45cef97571e57dff4a3e824 | /programs/spectralnorm/spectralnorm-numba-2.py | e373f4c513a9114fd405c25ea2883f5c37f7a01e | [
"BSD-3-Clause"
] | permissive | abilian/python-benchmarks | cf8b82d97c0836c65ff00337b649a53bc9af965e | 37a519a2ee835cf53ca0bb78e7e7c83da69d664e | refs/heads/main | 2023-08-05T18:50:53.490042 | 2021-10-01T10:49:33 | 2021-10-01T10:49:33 | 321,367,812 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 686 | py | # The Computer Language Benchmarks Game
# http://benchmarksgame.alioth.debian.org/
#
# Contributed by Sebastien Loisel
# Fixed by Isaac Gouy
# Sped up by Josh Goldfoot
# Dirtily sped up by Simon Descarpentries
# Sped up with numpy by Kittipong Piyawanno
# 2to3
from numba import jit
from sys import argv
from numpy import *
@jit
def spectralnorm(n):
u = matrix(ones(n))
j = arange(n)
eval_func = lambda i: 1.0 / ((i + j) * (i + j + 1) / 2 + i + 1)
M = matrix([eval_func(i) for i in arange(n)])
MT = M.T
for i in range(10):
v = (u * MT) * M
u = (v * MT) * M
print("%0.9f" % (sum(u * v.T) / sum(v * v.T)) ** 0.5)
spectralnorm(int(argv[1]))
| [
"sf@fermigier.com"
] | sf@fermigier.com |
77066fd264e1194de5a270d36f268632b904b588 | 8d47d0bdf0f3bcc8c8f82e7624e391ba2353efe1 | /hpcloud/networks/workflows.py | 79377a368544c19952a6263b5713bcf7887182df | [
"Apache-2.0"
] | permissive | cosgrid001/cosgrid_hh | 48328bbfae69f9978b82fe2c94799fbf8bc978b2 | 9b4dbf3c9c134f0c08c7d0330a3d0e69af12a8f4 | refs/heads/master | 2020-01-23T21:03:04.242315 | 2016-12-11T05:39:33 | 2016-12-11T05:39:33 | 74,579,908 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 16,524 | py | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import netaddr
from django.core.urlresolvers import reverse
from django.utils.translation import ugettext_lazy as _
from horizon import exceptions, forms, messages, workflows
from horizon.utils import fields
from hpcloud import api
LOG = logging.getLogger(__name__)
class CreateNetworkInfoAction(workflows.Action):
net_name = forms.CharField(max_length=255,
label=_("Network Name"),
required=False)
if api.neutron.is_port_profiles_supported():
net_profile_id = forms.ChoiceField(label=_("Network Profile"))
admin_state = forms.BooleanField(label=_("Admin State"),
initial=True, required=False)
if api.neutron.is_port_profiles_supported():
def __init__(self, request, *args, **kwargs):
super(CreateNetworkInfoAction, self).__init__(request,
*args, **kwargs)
self.fields['net_profile_id'].choices = (
self.get_network_profile_choices(request))
def get_network_profile_choices(self, request):
profile_choices = [('', _("Select a profile"))]
for profile in self._get_profiles(request, 'network'):
profile_choices.append((profile.id, profile.name))
return profile_choices
def _get_profiles(self, request, type_p):
try:
profiles = api.neutron.profile_list(request, type_p)
except Exception:
profiles = []
msg = _('Network Profiles could not be retrieved.')
exceptions.handle(request, msg)
return profiles
# TODO(absubram): Add ability to view network profile information
# in the network detail if a profile is used.
class Meta:
name = _("Network")
help_text = _("From here you can create a new network.\n"
"In addition a subnet associated with the network "
"can be created in the next panel.")
class CreateNetworkInfo(workflows.Step):
action_class = CreateNetworkInfoAction
if api.neutron.is_port_profiles_supported():
contributes = ("net_name", "admin_state", "net_profile_id")
else:
contributes = ("net_name", "admin_state")
class CreateSubnetInfoAction(workflows.Action):
with_subnet = forms.BooleanField(label=_("Create Subnet"),
initial=True, required=False)
subnet_name = forms.CharField(max_length=255,
label=_("Subnet Name"),
required=False)
cidr = fields.IPField(label=_("Network Address"),
required=False,
initial="",
help_text=_("Network address in CIDR format "
"(e.g. 192.168.0.0/24)"),
version=fields.IPv4 | fields.IPv6,
mask=True)
ip_version = forms.ChoiceField(choices=[(4, 'IPv4'), (6, 'IPv6')],
label=_("IP Version"))
gateway_ip = fields.IPField(
label=_("Gateway IP"),
required=False,
initial="",
help_text=_("IP address of Gateway (e.g. 192.168.0.254) "
"The default value is the first IP of the "
"network address (e.g. 192.168.0.1 for "
"192.168.0.0/24). "
"If you use the default, leave blank. "
"If you want to use no gateway, "
"check 'Disable Gateway' below."),
version=fields.IPv4 | fields.IPv6,
mask=False)
no_gateway = forms.BooleanField(label=_("Disable Gateway"),
initial=False, required=False)
class Meta:
name = _("Subnet")
help_text = _('You can create a subnet associated with the new '
'network, in which case "Network Address" must be '
'specified. If you wish to create a network WITHOUT a '
'subnet, uncheck the "Create Subnet" checkbox.')
def _check_subnet_data(self, cleaned_data, is_create=True):
cidr = cleaned_data.get('cidr')
ip_version = int(cleaned_data.get('ip_version'))
gateway_ip = cleaned_data.get('gateway_ip')
no_gateway = cleaned_data.get('no_gateway')
if not cidr:
msg = _('Specify "Network Address" or '
'clear "Create Subnet" checkbox.')
raise forms.ValidationError(msg)
if cidr:
subnet = netaddr.IPNetwork(cidr)
if subnet.version != ip_version:
msg = _('Network Address and IP version are inconsistent.')
raise forms.ValidationError(msg)
if (ip_version == 4 and subnet.prefixlen == 32) or \
(ip_version == 6 and subnet.prefixlen == 128):
msg = _("The subnet in the Network Address is too small (/%s)."
% subnet.prefixlen)
raise forms.ValidationError(msg)
if not no_gateway and gateway_ip:
if netaddr.IPAddress(gateway_ip).version is not ip_version:
msg = _('Gateway IP and IP version are inconsistent.')
raise forms.ValidationError(msg)
if not is_create and not no_gateway and not gateway_ip:
msg = _('Specify IP address of gateway or '
'check "Disable Gateway".')
raise forms.ValidationError(msg)
def clean(self):
cleaned_data = super(CreateSubnetInfoAction, self).clean()
with_subnet = cleaned_data.get('with_subnet')
if not with_subnet:
return cleaned_data
self._check_subnet_data(cleaned_data)
return cleaned_data
class CreateSubnetInfo(workflows.Step):
action_class = CreateSubnetInfoAction
contributes = ("with_subnet", "subnet_name", "cidr",
"ip_version", "gateway_ip", "no_gateway")
class CreateSubnetDetailAction(workflows.Action):
enable_dhcp = forms.BooleanField(label=_("Enable DHCP"),
initial=True, required=False)
allocation_pools = forms.CharField(
widget=forms.Textarea(),
label=_("Allocation Pools"),
help_text=_("IP address allocation pools. Each entry is "
"<start_ip_address>,<end_ip_address> "
"(e.g., 192.168.1.100,192.168.1.120) "
"and one entry per line."),
required=False)
dns_nameservers = forms.CharField(
widget=forms.widgets.Textarea(),
label=_("DNS Name Servers"),
help_text=_("IP address list of DNS name servers for this subnet. "
"One entry per line."),
required=False)
host_routes = forms.CharField(
widget=forms.widgets.Textarea(),
label=_("Host Routes"),
help_text=_("Additional routes announced to the hosts. "
"Each entry is <destination_cidr>,<nexthop> "
"(e.g., 192.168.200.0/24,10.56.1.254) "
"and one entry per line."),
required=False)
class Meta:
name = _("Subnet Detail")
help_text = _('You can specify additional attributes for the subnet.')
def _convert_ip_address(self, ip, field_name):
try:
return netaddr.IPAddress(ip)
except (netaddr.AddrFormatError, ValueError):
msg = _('%(field_name)s: Invalid IP address '
'(value=%(ip)s)' % dict(
field_name=field_name, ip=ip))
raise forms.ValidationError(msg)
def _convert_ip_network(self, network, field_name):
try:
return netaddr.IPNetwork(network)
except (netaddr.AddrFormatError, ValueError):
msg = _('%(field_name)s: Invalid IP address '
'(value=%(network)s)' % dict(
field_name=field_name, network=network))
raise forms.ValidationError(msg)
def _check_allocation_pools(self, allocation_pools):
for p in allocation_pools.split('\n'):
p = p.strip()
if not p:
continue
pool = p.split(',')
if len(pool) != 2:
msg = _('Start and end addresses must be specified '
'(value=%s)') % p
raise forms.ValidationError(msg)
start, end = [self._convert_ip_address(ip, "allocation_pools")
for ip in pool]
if start > end:
msg = _('Start address is larger than end address '
'(value=%s)') % p
raise forms.ValidationError(msg)
def _check_dns_nameservers(self, dns_nameservers):
for ns in dns_nameservers.split('\n'):
ns = ns.strip()
if not ns:
continue
self._convert_ip_address(ns, "dns_nameservers")
def _check_host_routes(self, host_routes):
for r in host_routes.split('\n'):
r = r.strip()
if not r:
continue
route = r.split(',')
if len(route) != 2:
msg = _('Host Routes format error: '
'Destination CIDR and nexthop must be specified '
'(value=%s)') % r
raise forms.ValidationError(msg)
self._convert_ip_network(route[0], "host_routes")
self._convert_ip_address(route[1], "host_routes")
def clean(self):
cleaned_data = super(CreateSubnetDetailAction, self).clean()
self._check_allocation_pools(cleaned_data.get('allocation_pools'))
self._check_host_routes(cleaned_data.get('host_routes'))
self._check_dns_nameservers(cleaned_data.get('dns_nameservers'))
return cleaned_data
class CreateSubnetDetail(workflows.Step):
action_class = CreateSubnetDetailAction
contributes = ("enable_dhcp", "allocation_pools",
"dns_nameservers", "host_routes")
class CreateNetwork(workflows.Workflow):
slug = "create_network"
name = _("Create Network")
finalize_button_name = _("Create")
success_message = _('Created network "%s".')
failure_message = _('Unable to create network "%s".')
default_steps = (CreateNetworkInfo,
CreateSubnetInfo,
CreateSubnetDetail)
def get_success_url(self):
return reverse("horizon:hpcloud:networks:index")
def get_failure_url(self):
return reverse("horizon:hpcloud:networks:index")
def format_status_message(self, message):
name = self.context.get('net_name') or self.context.get('net_id', '')
return message % name
def _create_network(self, request, data):
try:
params = {'name': data['net_name'],
'admin_state_up': data['admin_state']}
if api.neutron.is_port_profiles_supported():
params['net_profile_id'] = data['net_profile_id']
network = api.neutron.network_create(request, **params)
network.set_id_as_name_if_empty()
self.context['net_id'] = network.id
msg = _('Network "%s" was successfully created.') % network.name
LOG.debug(msg)
return network
except Exception as e:
msg = (_('Failed to create network "%(network)s": %(reason)s') %
{"network": data['net_name'], "reason": e})
LOG.info(msg)
redirect = self.get_failure_url()
exceptions.handle(request, msg, redirect=redirect)
return False
def _setup_subnet_parameters(self, params, data, is_create=True):
"""Setup subnet parameters
        This method sets up subnet parameters which are available
in both create and update.
"""
is_update = not is_create
params['enable_dhcp'] = data['enable_dhcp']
if is_create and data['allocation_pools']:
pools = [dict(zip(['start', 'end'], pool.strip().split(',')))
for pool in data['allocation_pools'].split('\n')
if pool.strip()]
params['allocation_pools'] = pools
if data['host_routes'] or is_update:
routes = [dict(zip(['destination', 'nexthop'],
route.strip().split(',')))
for route in data['host_routes'].split('\n')
if route.strip()]
params['host_routes'] = routes
if data['dns_nameservers'] or is_update:
nameservers = [ns.strip()
for ns in data['dns_nameservers'].split('\n')
if ns.strip()]
params['dns_nameservers'] = nameservers
def _create_subnet(self, request, data, network=None, tenant_id=None,
no_redirect=False):
if network:
network_id = network.id
network_name = network.name
else:
network_id = self.context.get('network_id')
network_name = self.context.get('network_name')
try:
params = {'network_id': network_id,
'name': data['subnet_name'],
'cidr': data['cidr'],
'ip_version': int(data['ip_version'])}
if tenant_id:
params['tenant_id'] = tenant_id
if data['no_gateway']:
params['gateway_ip'] = None
elif data['gateway_ip']:
params['gateway_ip'] = data['gateway_ip']
self._setup_subnet_parameters(params, data)
subnet = api.neutron.subnet_create(request, **params)
self.context['subnet_id'] = subnet.id
msg = _('Subnet "%s" was successfully created.') % data['cidr']
LOG.debug(msg)
return subnet
except Exception as e:
msg = _('Failed to create subnet "%(sub)s" for network "%(net)s": '
' %(reason)s')
if no_redirect:
redirect = None
else:
redirect = self.get_failure_url()
exceptions.handle(request,
msg % {"sub": data['cidr'], "net": network_name,
"reason": e},
redirect=redirect)
return False
def _delete_network(self, request, network):
"""Delete the created network when subnet creation failed"""
try:
api.neutron.network_delete(request, network.id)
msg = _('Delete the created network "%s" '
'due to subnet creation failure.') % network.name
LOG.debug(msg)
redirect = self.get_failure_url()
messages.info(request, msg)
raise exceptions.Http302(redirect)
#return exceptions.RecoverableError
except Exception:
msg = _('Failed to delete network "%s"') % network.name
LOG.info(msg)
redirect = self.get_failure_url()
exceptions.handle(request, msg, redirect=redirect)
def handle(self, request, data):
network = self._create_network(request, data)
if not network:
return False
# If we do not need to create a subnet, return here.
if not data['with_subnet']:
return True
subnet = self._create_subnet(request, data, network, no_redirect=True)
if subnet:
return True
else:
self._delete_network(request, network)
return False
| [
"jayaprakash.r@cloudenablers.com"
] | jayaprakash.r@cloudenablers.com |
7d21255d581e353aca38239ac109c88f33e37acd | 13a5a2ab12a65d65a5bbefce5253c21c6bb8e780 | /dnainfo/crimemaps/migrations/0024_nycschoolswatertesting.py | 8679bb07a5f901d7e3c0d9f930f5754ade2f377c | [] | no_license | NiJeLorg/DNAinfo-CrimeMaps | 535b62205fe1eb106d0f610d40f2f2a35e60a09e | 63f3f01b83308294a82565f2dc8ef6f3fbcdb721 | refs/heads/master | 2021-01-23T19:28:12.642479 | 2017-05-11T06:04:08 | 2017-05-11T06:04:08 | 34,847,724 | 2 | 0 | null | 2016-11-25T15:56:14 | 2015-04-30T10:02:41 | JavaScript | UTF-8 | Python | false | false | 1,689 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('crimemaps', '0023_nyctrainsitstand'),
]
operations = [
migrations.CreateModel(
name='NYCschoolsWaterTesting',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now=True)),
('lc', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('bc', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('ln', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('bn', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('add', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('cit', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('stc', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('zip', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('wtp', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('er', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('dohm', models.CharField(default=b'', max_length=255, null=True, blank=True)),
('note', models.CharField(default=b'', max_length=255, null=True, blank=True)),
],
),
]
| [
"jd@nijel.org"
] | jd@nijel.org |
5559ceb97c6a0b3e8d421323d1656c568b83aa72 | 912c4445e7041869d1c8535a493b78d7ee35424b | /status/tests.py | 034f0f6f61543636b32a1de55226078cabb8a2f1 | [] | no_license | maltezc/Udemy-DjangoRestAPI | 3f243ec97ea5e8e9d6ddc2005986b6a05aa11097 | de6f885cf0cddaf22fb6fd72d18fc805b9ce48d2 | refs/heads/master | 2022-12-14T06:04:43.011691 | 2018-08-05T01:10:17 | 2018-08-05T01:10:17 | 140,590,753 | 0 | 0 | null | 2022-11-22T02:48:04 | 2018-07-11T14:56:08 | Python | UTF-8 | Python | false | false | 602 | py |
from django.test import TestCase
from django.contrib.auth import get_user_model
from status.models import Status
User = get_user_model()
class StatusTestCase(TestCase):
def setUp(self):
user = User.objects.create(username='cfe', email='hello@cfe.com')
user.set_password("yeahhhcfe")
user.save()
def test_creating_status(self):
user = User.objects.get(username='cfe')
obj = Status.objects.create(user=user, content='Some cool new content')
self.assertEqual(obj.id, 1)
qs = Status.objects.all()
self.assertEqual(qs.count(), 1) | [
"cflux.maltez@live.com"
] | cflux.maltez@live.com |
48c68f16180b5d0dff96e2b6800b16dfe4b6f958 | ed559cbd80aa290f03ac9f1c8a08258fe051ed29 | /model_r.py | 1088010db9557932c1f80ff8c9f31c62616bb23f | [] | no_license | nmaypeter/project_nw_190424 | 70759e441a9f2782d97428cfac10c99bc0c052a2 | 07adfe85a9389d7a3ea8261ff62b181b2deff2c3 | refs/heads/master | 2020-05-16T17:24:28.658557 | 2019-04-29T14:32:24 | 2019-04-29T14:32:24 | 183,194,312 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 13,541 | py | from SeedSelection_Random import *
import os
if __name__ == '__main__':
model_name = 'mr'
dataset_seq = [2]
prod_seq, prod2_seq = [1, 2], [1, 2, 3]
cm_seq = [1, 2]
wallet_distribution_seq = [1, 2]
total_budget = 10
wpiwp_seq = [bool(1), bool(0)]
sample_number = 10
ppp_seq = [1, 2, 3]
monte_carlo, eva_monte_carlo = 10, 100
for data_setting in dataset_seq:
dataset_name = 'email_undirected' * (data_setting == 1) + 'dnc_email_directed' * (data_setting == 2) + 'email_Eu_core_directed' * (data_setting == 3) + \
'WikiVote_directed' * (data_setting == 4) + 'NetPHY_undirected' * (data_setting == 5)
for cm in cm_seq:
cascade_model = 'ic' * (cm == 1) + 'wc' * (cm == 2)
for prod_setting in prod_seq:
for prod_setting2 in prod2_seq:
product_name = 'item_lphc' * (prod_setting == 1) + 'item_hplc' * (prod_setting == 2) + '_ce' * (prod_setting2 == 2) + '_ee' * (prod_setting2 == 3)
for wallet_distribution in wallet_distribution_seq:
wallet_distribution_type = 'm50e25' * (wallet_distribution == 1) + 'm99e96' * (wallet_distribution == 2)
for wpiwp in wpiwp_seq:
iniG = IniGraph(dataset_name)
iniP = IniProduct(product_name)
seed_cost_dict = iniG.constructSeedCostDict()
graph_dict = iniG.constructGraphDict(cascade_model)
product_list = iniP.getProductList()
num_node = len(seed_cost_dict)
num_product = len(product_list)
seed_set_sequence, ss_time_sequence = [[] for _ in range(total_budget)], [[] for _ in range(total_budget)]
ssr_main = SeedSelectionRandom(graph_dict, seed_cost_dict, product_list)
for sample_count in range(sample_number):
ss_strat_time = time.time()
begin_budget = 1
now_budget = 0.0
seed_set = [set() for _ in range(num_product)]
random_node_set = ssr_main.constructRandomNodeSet()
ss_acc_time = round(time.time() - ss_strat_time, 2)
temp_sequence = [[begin_budget, now_budget, seed_set, random_node_set, ss_acc_time]]
while len(temp_sequence) != 0:
ss_strat_time = time.time()
[begin_budget, now_budget, seed_set, random_node_set, ss_acc_time] = temp_sequence.pop(0)
print('@ seed selection @ dataset_name = ' + dataset_name + '_' + cascade_model + ', dist = ' + str(wallet_distribution_type) + ', wpiwp = ' + str(wpiwp) +
', product_name = ' + product_name + ', budget = ' + str(begin_budget) + ', sample_count = ' + str(sample_count))
mep_g = selectRandomSeed(random_node_set)
mep_k_prod, mep_i_node = mep_g[0], mep_g[1]
while now_budget < begin_budget and mep_i_node != '-1':
sc = seed_cost_dict[mep_i_node]
if round(now_budget + sc, 2) >= begin_budget and begin_budget < total_budget and len(temp_sequence) == 0:
ss_time = round(time.time() - ss_strat_time + ss_acc_time, 2)
temp_random_node_set = copy.deepcopy(random_node_set)
temp_random_node_set.add((mep_k_prod, mep_i_node))
temp_sequence.append([begin_budget + 1, now_budget, copy.deepcopy(seed_set), temp_random_node_set, ss_time])
                                if round(now_budget + sc, 2) >= begin_budget:
mep_g = selectRandomSeed(random_node_set)
mep_k_prod, mep_i_node = mep_g[0], mep_g[1]
if mep_i_node == '-1':
break
continue
seed_set[mep_k_prod].add(mep_i_node)
now_budget += sc
mep_g = selectRandomSeed(random_node_set)
mep_k_prod, mep_i_node = mep_g[0], mep_g[1]
ss_time = round(time.time() - ss_strat_time + ss_acc_time, 2)
print('ss_time = ' + str(ss_time) + 'sec')
seed_set_sequence[begin_budget - 1].append(seed_set)
ss_time_sequence[begin_budget - 1].append(ss_time)
for bud in range(total_budget):
if len(seed_set_sequence[bud]) != sample_count + 1:
seed_set_sequence[bud].append(0)
ss_time_sequence[bud].append(ss_time_sequence[bud - 1][-1])
eva_start_time = time.time()
result = [[[] for _ in range(len(ppp_seq))] for _ in range(total_budget)]
for bud in range(1, total_budget + 1):
for ppp in ppp_seq:
ppp_strategy = 'random' * (ppp == 1) + 'expensive' * (ppp == 2) + 'cheap' * (ppp == 3)
pps_start_time = time.time()
eva_main = Evaluation(graph_dict, seed_cost_dict, product_list, ppp, wpiwp)
iniW = IniWallet(dataset_name, product_name, wallet_distribution_type)
wallet_list = iniW.getWalletList()
personal_prob_list = eva_main.setPersonalPurchasingProbList(wallet_list)
for sample_count, sample_seed_set in enumerate(seed_set_sequence[bud - 1]):
if sample_seed_set != 0:
print('@ evaluation @ dataset_name = ' + dataset_name + '_' + cascade_model + ', dist = ' + wallet_distribution_type + ', wpiwp = ' + str(wpiwp) +
', product_name = ' + product_name + ', budget = ' + str(bud) + ', ppp = ' + ppp_strategy + ', sample_count = ' + str(sample_count))
sample_pro_acc, sample_bud_acc = 0.0, 0.0
sample_sn_k_acc, sample_pnn_k_acc = [0.0 for _ in range(num_product)], [0 for _ in range(num_product)]
sample_pro_k_acc, sample_bud_k_acc = [0.0 for _ in range(num_product)], [0.0 for _ in range(num_product)]
for _ in range(eva_monte_carlo):
pro, pro_k_list, pnn_k_list = eva_main.getSeedSetProfit(sample_seed_set, copy.deepcopy(wallet_list), copy.deepcopy(personal_prob_list))
sample_pro_acc += pro
for kk in range(num_product):
sample_pro_k_acc[kk] += pro_k_list[kk]
sample_pnn_k_acc[kk] += pnn_k_list[kk]
sample_pro_acc = round(sample_pro_acc / eva_monte_carlo, 4)
for kk in range(num_product):
sample_pro_k_acc[kk] = round(sample_pro_k_acc[kk] / eva_monte_carlo, 4)
sample_pnn_k_acc[kk] = round(sample_pnn_k_acc[kk] / eva_monte_carlo, 2)
sample_sn_k_acc[kk] = len(sample_seed_set[kk])
for sample_seed in sample_seed_set[kk]:
sample_bud_acc = round(sample_bud_acc + seed_cost_dict[sample_seed], 2)
sample_bud_k_acc[kk] = round(sample_bud_k_acc[kk] + seed_cost_dict[sample_seed], 2)
result[bud - 1][ppp - 1].append([sample_pro_acc, sample_bud_acc, sample_sn_k_acc, sample_pnn_k_acc, sample_pro_k_acc, sample_bud_k_acc, sample_seed_set])
print('eva_time = ' + str(round(time.time() - eva_start_time, 2)) + 'sec')
print(result[bud - 1][ppp - 1][sample_count])
print('------------------------------------------')
else:
result[bud - 1][ppp - 1].append(result[bud - 2][ppp - 1][sample_count])
avg_pro, avg_bud = 0.0, 0.0
avg_sn_k, avg_pnn_k = [0 for _ in range(num_product)], [0 for _ in range(num_product)]
avg_pro_k, avg_bud_k = [0.0 for _ in range(num_product)], [0.0 for _ in range(num_product)]
for r in result[bud - 1][ppp - 1]:
avg_pro += r[0]
avg_bud += r[1]
for kk in range(num_product):
avg_sn_k[kk] += r[2][kk]
avg_pnn_k[kk] += r[3][kk]
avg_pro_k[kk] += r[4][kk]
avg_bud_k[kk] += r[5][kk]
avg_pro = round(avg_pro / sample_number, 4)
avg_bud = round(avg_bud / sample_number, 2)
for kk in range(num_product):
avg_sn_k[kk] = round(avg_sn_k[kk] / sample_number, 2)
avg_pnn_k[kk] = round(avg_pnn_k[kk] / sample_number, 2)
avg_pro_k[kk] = round(avg_pro_k[kk] / sample_number, 4)
avg_bud_k[kk] = round(avg_bud_k[kk] / sample_number, 2)
total_time = round(sum(ss_time_sequence[bud - 1]), 2)
path1 = 'result/' + model_name + '_' + wallet_distribution_type + '_ppp' + str(ppp) + '_wpiwp' * wpiwp
if not os.path.isdir(path1):
os.mkdir(path1)
path = path1 + '/' + dataset_name + '_' + cascade_model + '_' + product_name
if not os.path.isdir(path):
os.mkdir(path)
fw = open(path + '/b' + str(bud) + '_i' + str(sample_number) + '.txt', 'w')
fw.write(model_name + ', ppp = ' + str(ppp) + ', total_budget = ' + str(bud) + ', dist = ' + wallet_distribution_type + ', wpiwp = ' + str(wpiwp) + '\n' +
'dataset_name = ' + dataset_name + '_' + cascade_model + ', product_name = ' + product_name + '\n' +
'total_budget = ' + str(bud) + ', sample_count = ' + str(sample_number) + '\n' +
'avg_profit = ' + str(avg_pro) + ', avg_budget = ' + str(avg_bud) + '\n' +
'total_time = ' + str(total_time) + ', avg_time = ' + str(round(total_time / sample_number, 4)) + '\n')
fw.write('\nprofit_ratio =')
for kk in range(num_product):
fw.write(' ' + str(avg_pro_k[kk]))
fw.write('\nbudget_ratio =')
for kk in range(num_product):
fw.write(' ' + str(avg_bud_k[kk]))
fw.write('\nseed_number =')
for kk in range(num_product):
fw.write(' ' + str(avg_sn_k[kk]))
fw.write('\ncustomer_number =')
for kk in range(num_product):
fw.write(' ' + str(avg_pnn_k[kk]))
fw.write('\n')
for t, r in enumerate(result[bud - 1][ppp - 1]):
fw.write('\n' + str(t) + '\t' + str(round(r[0], 4)) + '\t' + str(round(r[1], 4)) + '\t' + str(r[2]) + '\t' + str(r[3]) + '\t' + str(r[4]) + '\t' + str(r[5]) + '\t' + str(r[6]))
fw.close() | [
"37822464+nmaypeter@users.noreply.github.com"
] | 37822464+nmaypeter@users.noreply.github.com |
a056f5ef70bc84e79e42928a38abb5e4257c531d | efde71fc3e296804a9e5cb6bc2ab48ad575b7faa | /applications/delivery/management/commands/processing_delivery_send_general.py | 7cfebbccb447c1218e2618bf9fd5692286ff7bb2 | [] | no_license | denispan1993/vitaliy | 597cb546c9d1a14d7abc2931eb71fab38b878ec4 | 764d703ffc285f13a9f05e4c197bc75b495b5ff7 | refs/heads/master | 2021-04-29T21:54:45.095807 | 2018-02-10T19:05:23 | 2018-02-10T19:05:23 | 121,627,169 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,632 | py | # -*- coding: utf-8 -*-
__author__ = 'AlexStarov'
from django.core.management.base import BaseCommand
class Command(BaseCommand, ):
from optparse import make_option
option_list = BaseCommand.option_list + (
make_option('--id', '--pk', '--delivery_id', '--delivery_pk',
action='store', type='int', dest='delivery_pk',
help=''),
make_option('--t', '--delivery_test', '--test',
action='store_true', dest='delivery_test',
help=''),
make_option('--g', '--delivery_general', '--general',
action='store_true', dest='delivery_test',
help=''),
)
#self.verbosity = int(options.get('verbosity'))
#def add_arguments(self, parser):
# parser.add_argument('delivery_id', nargs='+', type=int)
def handle(self, *args, **options):
from applications.delivery.models import Delivery
try:
deliveryes = Delivery.objects.filter(delivery_test=False,
send_test=True,
send_general=False,
type__in=[1, 2, 3, ], )
except Delivery.DoesNotExist:
deliveryes = None
else:
from applications.delivery.models import EmailMiddleDelivery
for delivery in deliveryes:
# print 'delivery', delivery
try:
EmailMiddleDelivery.objects.\
get(delivery=delivery,
send_test=False,
send_general=True,
updated_at__lte=delivery.updated_at, )
except:
                    """ Create a link for sending the mailing """
email_middle_delivery = EmailMiddleDelivery()
email_middle_delivery.delivery = delivery
email_middle_delivery.delivery_test_send = False
email_middle_delivery.delivery_send = True
email_middle_delivery.save()
                    """ Close off the test send in the mailing itself """
delivery.send_general = True
delivery.save()
                    """ Send the test e-mail """
from django.utils.html import strip_tags
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.yandex.ru'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'subscribe@keksik.com.ua'
EMAIL_HOST_PASSWORD = ''
from django.core.mail import get_connection
backend = get_connection(backend='django.core.mail.backends.smtp.EmailBackend',
host=EMAIL_HOST,
port=EMAIL_PORT,
username=EMAIL_HOST_USER,
password=EMAIL_HOST_PASSWORD,
use_tls=EMAIL_USE_TLS,
fail_silently=False, )
from django.core.mail import EmailMultiAlternatives
from proj.settings import Email_MANAGER
from applications.authModel.models import Email
                    """ Create pointers to the mailing's e-mail addresses """
try:
emails = Email.objects.filter(bad_email=False, )
except Email.DoesNotExist:
emails = None
                    """ Need to do something clever with the commit here """
from applications.delivery.models import EmailForDelivery
from applications.delivery.utils import parsing
i = 0
time = 0
for real_email in emails:
i += 1
# if i < 125:
# continue
email = EmailForDelivery.objects.create(delivery=email_middle_delivery,
email=real_email, )
                        """ Sending """
msg = EmailMultiAlternatives(subject=delivery.subject,
body=strip_tags(parsing(value=delivery.html,
key=email.key, ), ),
from_email='subscribe@keksik.com.ua',
to=[real_email.email, ],
connection=backend, )
msg.attach_alternative(content=parsing(value=delivery.html,
key=email.key, ),
mimetype="text/html", )
msg.content_subtype = "html"
try:
msg.send(fail_silently=False, )
except Exception as e:
msg = EmailMultiAlternatives(subject='Error for subject: %s' % delivery.subject,
body='Error: %s - E-Mail: %s - real_email.pk: %d' % (e, real_email.email, real_email.pk, ),
from_email='subscribe@keksik.com.ua',
to=['subscribe@keksik.com.ua', ],
connection=backend, )
msg.send(fail_silently=True, )
else:
print 'i: ', i, 'Pk: ', real_email.pk, ' - ', real_email.email
from random import randrange
time1 = randrange(6, 12, )
time2 = randrange(6, 12, )
time += time1 + time2
print 'Time1: ', time1, ' Time2: ', time2, ' Time all: ', time1+time2, ' average time: ', time/i
from time import sleep
sleep(time1, )
print 'Next'
sleep(time2, )
def hernya2():
try:
deliveryes = Delivery.objects.filter(delivery_test=False, )
except Delivery.DoesNotExist:
deliveryes = None
else:
for delivery in deliveryes:
try:
aaa=EmailMiddleDelivery.objects.\
get(delivery=delivery, updated_at__lte=delivery.updated_at, )
print aaa, delivery.updated_at
except:
email_middle_delivery = EmailMiddleDelivery()
email_middle_delivery.delivery = delivery
email_middle_delivery.delivery_test_send = False
email_middle_delivery.delivery_send = True
email_middle_delivery.save()
from django.utils.html import strip_tags
from django.core.mail import get_connection
backend = get_connection(backend='django.core.mail.backends.smtp.EmailBackend',
fail_silently=False, )
from django.core.mail import EmailMultiAlternatives
from proj.settings import Email_MANAGER
msg = EmailMultiAlternatives(subject=delivery.subject,
body=strip_tags(delivery.html, ),
from_email=u'site@keksik.com.ua',
to=[real_email.email, ],
connection=backend, )
msg.attach_alternative(content=delivery.html,
mimetype="text/html", )
msg.content_subtype = "html"
print real_email.email
#try:
# # msg.send(fail_silently=False, )
#except Exception as inst:
# print type(inst, )
# print inst.args
# print inst
# else:
# email.send
# email.save()
#try:
                #    """ Take 10 e-mail addresses to which we have not yet sent this mailing """
# emails = EmailForDelivery.objects.filter(delivery=email_middle_delivery,
# send=False, )[10]
#except EmailForDelivery.DoesNotExist:
                #    """ No e-mail addresses left in this mailing """
# emails = None
#else:
# emails = ', '.join(emails, )
                #    """ Send the e-mail to 10 recipients """
def hernya():
from datetime import datetime
print datetime.now()
from applications.product.models import Category
try:
action_category = Category.objects.get(url=u'акции', )
except Category.DoesNotExist:
action_category = None
from applications.discount.models import Action
action_active = Action.objects.active()
if action_active:
print 'Action - ACTIVE:', action_active
for action in action_active:
products_of_action = action.product_in_action.all()
print 'All products:', products_of_action
# print action
            """
            If the action has auto start,
            we start it.
            """
if action.auto_start:
                """ Set the 'participates in action' flag on every product added to the action,
                    excluding products that are 'out of stock' """
products_of_action = action.product_in_action.exclude(is_availability=4, )
if len(products_of_action, ) > 0:
print 'Product auto_start:', products_of_action
for product in products_of_action:
                        """ Mark the product as participating in the action """
product.in_action = True
                        """ Add the 'Action' category to the product """
product.category.add(action_category, )
product.save()
                """ Remove products participating in the active action that are 'out of stock' """
products_remove_from_action = action.product_in_action.exclude(is_availability__lt=4, )
                if len(products_remove_from_action, ) > 0:
                    print 'Products to remove from action:', products_remove_from_action
for product in products_remove_from_action:
                        """ Mark the product as not participating in the action """
                        product.in_action = False
                        """ Remove the 'Action' category from the product """
product.category.remove(action_category, )
product.save()
action_not_active = Action.objects.not_active()
if action_not_active:
print 'Action - NOT ACTIVE:', action_not_active
for action in action_not_active:
products_of_action = action.product_in_action.all()
print 'All products:', products_of_action
# print action
            """
            If the action has auto end,
            we end it.
            """
if action.auto_end:
products_of_action = action.product_in_action.in_action()
if len(products_of_action, ) > 0:
print 'Product auto_end:', products_of_action
for product in products_of_action:
print 'Del product from Action: ', product
                        """
                        Mark the product as not participating in the action
                        """
product.category.remove(action_category, )
product.in_action = False
# """
                        # Swap the current and the action prices
# """
# price = product.price
# product.price = product.regular_price
# if action.auto_del_action_price:
# product.regular_price = 0
# else:
# product.regular_price = price
if action.auto_del_action_from_product:
product.action.remove(action, )
product.save()
if action.auto_del:
action.deleted = True
action.save()
# from applications.product.models import Product
# Product.objects.filter(is_availability=2, ).update(is_availability=5, )
# Product.objects.filter(is_availability=3, ).update(is_availability=2, )
# Product.objects.filter(is_availability=5, ).update(is_availability=3, )
        """ Clear the 'participates in action' flag on every product that somehow has it set
        but has no action at all """
from applications.product.models import Product
products = Product.objects.filter(in_action=True, action=None, ).update(in_action=False, )
        print 'Products removed from the action because they were withdrawn from it: ', products
        """ Clear the 'participates in action' flag on every product that is out of stock """
products = Product.objects.filter(in_action=True, is_availability=4, ).update(in_action=False, )
        print 'Products removed from the action because they are out of stock: ', products
        """ Make the action category active if there is at least one action product """
all_actions_products = action_category.products.all()
if len(all_actions_products) != 0 and not action_category.is_active:
action_category.is_active = True
action_category.save()
elif len(all_actions_products) == 0 and action_category.is_active:
action_category.is_active = False
action_category.save()
| [
"alex.starov@gmail.com"
] | alex.starov@gmail.com |
eea3d467d452081cbc8361a71b31783bd7c01f4d | 3185bc3bf14cbcd06ff84b90deb56d5dd1557af0 | /ptensor/include.py | b23c98b25fe219e46db53a37aac2fb2d22cc8b49 | [] | no_license | vegaandagev/fPEPS | 49ea1e910a14037b43b20c1be4e78e690aa9983f | bfb06082caae458b1b03e7586219a0f5413b7d14 | refs/heads/master | 2020-07-06T17:14:24.725570 | 2018-10-25T21:52:48 | 2018-10-25T21:52:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 91 | py | import autograd
np = autograd.numpy
npeinsum = autograd.numpy.einsum
dataType = np.float_
| [
"zhendongli2008@gmail.com"
] | zhendongli2008@gmail.com |
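The `include.py` aliases above make `npeinsum` point at autograd's `einsum`, which mirrors `numpy.einsum`. As a minimal, dependency-free sketch (plain NumPy stands in for the autograd wrapper here; the arrays are made up for illustration), a matrix-vector contraction looks like:

```python
import numpy as np

# np.einsum behaves like the npeinsum alias in include.py;
# 'ij,j->i' contracts the second axis of A with x (a matrix-vector product)
A = np.arange(6.0).reshape(2, 3)   # [[0., 1., 2.], [3., 4., 5.]]
x = np.array([1.0, 0.0, 2.0])
y = np.einsum('ij,j->i', A, x)
print(y)  # -> [ 4. 13.]
```

Swapping `np` for `autograd.numpy` keeps the same call signature while making the contraction differentiable.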
564a8dc4667af1a3fb616136ef6739ca876695e8 | c9c94fcc33b25ebef73ce7c117ea20721a504701 | /tests/spatial_operator/test_rectangle_knn.py | 1f9a5b64d1da90b85cfa670891069534998dfa1b | [
"Apache-2.0"
] | permissive | Imbruced/geo_pyspark | 46468cc95658fa156144246a45df32116d7ff20e | 26da16d48168789c5f2bb75b5fdec1f515bf9cb1 | refs/heads/master | 2022-12-16T18:56:54.675038 | 2020-02-24T20:32:38 | 2020-02-24T20:32:38 | 204,563,687 | 8 | 3 | Apache-2.0 | 2022-12-08T03:32:18 | 2019-08-26T21:21:00 | Python | UTF-8 | Python | false | false | 2,730 | py | import os
import pytest
from shapely.geometry import Point
from geo_pyspark.core.SpatialRDD import RectangleRDD
from geo_pyspark.core.enums import IndexType, FileDataSplitter
from geo_pyspark.core.geom_types import Envelope
from geo_pyspark.core.spatialOperator import KNNQuery
from tests.test_base import TestBase
from tests.tools import tests_path, distance_sorting_functions
inputLocation = os.path.join(tests_path, "resources/zcta510-small.csv")
queryWindowSet = os.path.join(tests_path, "resources/zcta510-small.csv")
offset = 0
splitter = FileDataSplitter.CSV
gridType = "rtree"
indexType = "rtree"
numPartitions = 11
distance = 0.001
queryPolygonSet = os.path.join(tests_path, "resources/primaryroads-polygon.csv")
inputCount = 3000
inputBoundary = Envelope(-171.090042, 145.830505, -14.373765, 49.00127)
matchCount = 17599
matchWithOriginalDuplicatesCount = 17738
class TestRectangleKNN(TestBase):
query_envelope = Envelope(-90.01, -80.01, 30.01, 40.01)
loop_times = 5
query_point = Point(-84.01, 34.01)
top_k = 100
def test_spatial_knn_query(self):
rectangle_rdd = RectangleRDD(self.sc, inputLocation, offset, splitter, True)
for i in range(self.loop_times):
result = KNNQuery.SpatialKnnQuery(rectangle_rdd, self.query_point, self.top_k, False)
assert result.__len__() > -1
assert result[0].getUserData() is not None
def test_spatial_knn_query_using_index(self):
rectangle_rdd = RectangleRDD(self.sc, inputLocation, offset, splitter, True)
rectangle_rdd.buildIndex(IndexType.RTREE, False)
for i in range(self.loop_times):
result = KNNQuery.SpatialKnnQuery(rectangle_rdd, self.query_point, self.top_k, False)
assert result.__len__() > -1
assert result[0].getUserData() is not None
def test_spatial_knn_query_correctness(self):
rectangle_rdd = RectangleRDD(self.sc, inputLocation, offset, splitter, True)
result_no_index = KNNQuery.SpatialKnnQuery(rectangle_rdd, self.query_point, self.top_k, False)
rectangle_rdd.buildIndex(IndexType.RTREE, False)
result_with_index = KNNQuery.SpatialKnnQuery(rectangle_rdd, self.query_point, self.top_k, True)
sorted_result_no_index = sorted(result_no_index, key=lambda geo_data: distance_sorting_functions(
geo_data, self.query_point))
sorted_result_with_index = sorted(result_with_index, key=lambda geo_data: distance_sorting_functions(
geo_data, self.query_point))
difference = 0
for x in range(self.top_k):
difference += sorted_result_no_index[x].geom.distance(sorted_result_with_index[x].geom)
assert difference == 0
| [
"pawel93kocinski@gmail.com"
] | pawel93kocinski@gmail.com |
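The `test_spatial_knn_query_correctness` case above checks ranking equality by sorting both result lists by distance to the query point and summing pairwise distances. A minimal sketch of that check, using plain coordinate tuples instead of shapely geometries (the points and query location below are made up for illustration):

```python
import math

query = (-84.01, 34.01)

def dist(p):
    # Euclidean distance to the query point, as distance_sorting_functions does
    return math.hypot(p[0] - query[0], p[1] - query[1])

no_index = [(-84.0, 34.0), (-85.0, 35.0), (-84.5, 33.5)]
with_index = [(-85.0, 35.0), (-84.0, 34.0), (-84.5, 33.5)]

a = sorted(no_index, key=dist)
b = sorted(with_index, key=dist)

# identical rankings pair identical points, so every pairwise distance is zero
difference = sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))
print(difference)  # -> 0.0
```

A nonzero `difference` would mean the indexed and non-indexed queries ranked at least one neighbor differently.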
c3b3b50d5ff7c5951e37fa4475abe2824058f619 | 140303e86eed46c9260da1a536077e884f809668 | /phytoplankton_classification/resnet50_class.py | aacf1ae47286d7026a170749343aa47410706e50 | [] | no_license | deephdc/phytoplankton-classification-theano | d0d9fae57ac20b2b6c716046e0bdc9e5386741b6 | 7e1da9a849fe137119182aa52bcd83c6325a3dac | refs/heads/master | 2020-03-28T01:25:22.465535 | 2019-01-09T08:40:25 | 2019-01-09T08:40:25 | 147,503,779 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 11,660 | py | """
Resnet50_class
Author: Ignacio Heredia
Date: October 2016
Description:
Class for training a resnet50 for a new dataset by finetuning the weights
already pretrained with ImageNet.
"""
import time
import pickle
import json
import collections
import inspect
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import theano
import theano.tensor as T
import lasagne
from plant_classification.data_utils import iterate_minibatches, data_augmentation
from plant_classification.models.resnet50 import build_model
theano.config.floatX = 'float32'
class prediction_net(object):
def __init__(self, output_dim=3680, lr=1e-3, lr_decay=0.1,
lr_decay_rate=None, lr_decay_schedule=[0.7, 0.9],
finetuning=1e-3, reg=1e-4, num_epochs=50,
batchsize=32):
"""
Parameters
----------
output_dim : int
output dimension (number of possible output classes)
lr : float
Base learning rate (1e-3 is the default for Adam update rule)
lr_decay : float
It's the ratio (new_lr / old_lr)
lr_decay_rate : float, None
Update the lr after this number of epochs
lr_decay_schedule : list, numpy array, None
Update at this % of training.
Eg. [0.7,0.9] and 50 epochs --> update at epochs 35 and 45
This variable overwrites lr_decay_rate.
finetuning : float
Finetuning coefficient for learning the first layers
Eg. layer_lr = finetuning * lr
reg : float
Regularization parameter
num_epochs : int
Number of epochs for training
batchsize: int
Size of each training batch (should fit in GPU)
"""
self.output_dim = output_dim
self.lr_init = lr
self.lr = theano.shared(np.float32(lr))
self.lr_decay = np.float32(lr_decay)
if lr_decay_schedule is not None:
self.lr_decay_schedule = (np.array(lr_decay_schedule) * num_epochs).astype(np.int)
else:
self.lr_decay_schedule = np.arange(0, num_epochs, lr_decay_rate)[1:].astype(np.int)
self.reg = reg
self.num_epochs = num_epochs
self.batchsize = batchsize
self.finetuning = finetuning
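The `lr_decay_schedule` arithmetic in the docstring (e.g. `[0.7, 0.9]` with 50 epochs --> updates at epochs 35 and 45) can be checked in isolation; this standalone sketch is not part of the class:

```python
import numpy as np

# fractions of the training run, scaled by the epoch count and truncated,
# give the concrete epochs at which the learning rate is decayed
num_epochs = 50
lr_decay_schedule = (np.array([0.7, 0.9]) * num_epochs).astype(int)
print(lr_decay_schedule)  # -> [35 45]
```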
def build_and_train(self, X_train, y_train, X_val=None, y_val=None,
display=False, save_model=True, aug_params=None):
"""
Builds the model and runs the training loop.
Parameters
----------
X_train : numpy array
Training data
y_train : numpy array
Training targets.
X_val : numpy array, None, optional
Validation data
y_val : numpy array, None, optional
Validation targets
        display : bool, optional
            Display on-the-fly plots of training and validation results.
        save_model : bool, optional
            Save model weights.
aug_params : dict, None, optional
Dict containing the data augmentation parameters.
Returns
-------
Test function of the net.
"""
# ======================================================================
# Model compilation
# ======================================================================
print("Building model and compiling functions...")
# Create Theano variables for input and target minibatch
input_var = T.tensor4('X', dtype=theano.config.floatX) # shape (batchsize,3,224,224)
target_var = T.ivector('y') # shape (batchsize,)
# Load model weights and metadata
d = pickle.load(open(os.path.join(homedir, 'data', 'pretrained_weights', 'resnet50.pkl')))
# Build the network and fill with pretrained weights except for the last fc layer
net = build_model(input_var, self.output_dim)
lasagne.layers.set_all_param_values(net['pool5'], d['values'][:-2])
# create loss function and accuracy
prediction = lasagne.layers.get_output(net['prob'])
loss = lasagne.objectives.categorical_crossentropy(prediction, target_var)
loss = loss.mean() + self.reg * lasagne.regularization.regularize_network_params(
net['prob'], lasagne.regularization.l2)
train_acc = T.mean(T.eq(T.argmax(prediction, axis=1), target_var), dtype=theano.config.floatX)
# Create parameter update expressions with fine tuning
updates = {}
for name, layer in net.items():
layer_params = layer.get_params(trainable=True)
if name == 'fc1000' or name == 'prob':
layer_lr = self.lr
else:
layer_lr = self.lr * self.finetuning
layer_updates = lasagne.updates.adam(loss, layer_params, learning_rate=layer_lr)
updates.update(layer_updates)
updates = collections.OrderedDict(updates)
# Create a loss expression for validation/testing.
test_prediction = lasagne.layers.get_output(net['prob'], deterministic=True)
test_loss = lasagne.objectives.categorical_crossentropy(test_prediction, target_var)
test_loss = test_loss.mean()
test_acc = T.mean(T.eq(T.argmax(test_prediction, axis=1), target_var), dtype=theano.config.floatX)
# Compile training and validation functions
train_fn = theano.function([input_var, target_var], [loss, train_acc], updates=updates)
val_fn = theano.function([input_var, target_var], [test_loss, test_acc])
test_fn = theano.function([input_var], test_prediction)
# ======================================================================
# Training routine
# ======================================================================
print("Starting training...")
track = {'train_err': [], 'train_acc': [], 'val_err': [], 'val_acc': []}
if display:
fig, (ax1, ax2) = plt.subplots(1, 2)
line1, = ax1.plot([], [], 'r-')
line2, = ax2.plot([], [], 'r-')
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Training loss')
ax1.set_yscale('log')
ax1.set_title('Training loss')
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Validation loss')
ax2.set_yscale('log')
ax2.set_title('Validation loss')
# Batchsize and augmentation parameters
if aug_params is None:
aug_params = {}
train_batchsize = min(len(y_train), self.batchsize)
train_aug_params = aug_params.copy()
train_aug_params.update({'mode': 'standard'})
if X_val is not None:
val_batchsize = min(len(y_val), self.batchsize)
val_aug_params = aug_params.copy()
val_aug_params.update({'mode': 'minimal', 'tags': None})
for epoch in range(self.num_epochs):
start_time = time.time()
# Learning rate schedule decay
if epoch in self.lr_decay_schedule:
self.lr.set_value(self.lr.get_value() * self.lr_decay)
                print('############# Learning rate: {} ####################').format(self.lr.get_value())
# Full pass over training data
train_err, train_batches = 0, 0
for batch in iterate_minibatches(X_train, y_train, train_batchsize, shuffle=True, **train_aug_params):
inputs, targets = batch[0], batch[1]
tmp_train_err, tmp_train_acc = train_fn(inputs, targets)
track['train_err'].append(tmp_train_err)
track['train_acc'].append(tmp_train_acc)
train_err += tmp_train_err
train_batches += 1
            print('Training epoch {} - {:.1f}% completed | Loss: {:.4f} ; Accuracy: {:.1f}%'.format(epoch, train_batches*self.batchsize*100./len(y_train), float(tmp_train_err), float(tmp_train_acc)*100))
if np.isnan(train_err):
print('Your net exploded, try decreasing the learning rate.')
return None
# Full pass over the validation data (if any)
if X_val is not None:
val_err, val_batches = 0, 0
for batch in iterate_minibatches(X_val, y_val, val_batchsize, shuffle=False, **val_aug_params):
inputs, targets = batch[0], batch[1]
tmp_val_err, tmp_val_acc = val_fn(inputs, targets)
track['val_err'].append(tmp_val_err)
track['val_acc'].append(tmp_val_acc)
val_err += tmp_val_err
val_batches += 1
                print('Validation epoch {} - {:.1f}% completed | Loss: {:.4f} ; Accuracy: {:.1f}%'.format(epoch, val_batches*self.batchsize*100./len(y_val), float(tmp_val_err), float(tmp_val_acc)*100))
# Print the results for this epoch
print("Epoch {} of {} took {:.3f}s".format(epoch + 1, self.num_epochs, time.time() - start_time))
print(" training loss:\t\t{:.6f}".format(train_err / train_batches))
if X_val is not None:
print(" validation loss:\t\t{:.6f}".format(val_err / val_batches))
# Display training and validation accuracy in plot
if display:
line1.set_xdata(np.append(line1.get_xdata(), epoch))
line1.set_ydata(np.append(line1.get_ydata(), train_err / train_batches))
ax1.relim(), ax1.autoscale_view()
if X_val is not None:
line2.set_xdata(np.append(line2.get_xdata(), epoch))
line2.set_ydata(np.append(line2.get_ydata(), val_err / val_batches))
ax2.relim(), ax2.autoscale_view()
fig.canvas.draw()
# Save training information and net parameters
print("Saving the model parameters and training information ...")
train_info = {'training_params': {'output_dim': self.output_dim,
'lr_init': self.lr_init,
'lr_decay': float(self.lr_decay),
'lr_schedule': self.lr_decay_schedule.tolist(),
'reg': self.reg,
'num_epochs': self.num_epochs,
'batchsize': self.batchsize,
'finetuning': self.finetuning}}
a = inspect.getargspec(data_augmentation)
augmentation_params = dict(zip(a.args[-len(a.defaults):], a.defaults)) # default augmentation params
augmentation_params.update(aug_params) # update with user's choice
for k, v in augmentation_params.items():
if type(v) == np.ndarray:
augmentation_params[k] = np.array(v).tolist()
train_info.update({'augmentation_params': augmentation_params})
for k, v in track.items():
track[k] = np.array(v).tolist()
train_info.update(track)
if save_model:
filename = 'resnet50_' + str(self.output_dim) + 'classes_' + str(self.num_epochs) + 'epochs'
with open(os.path.join(homedir, 'plant_classification', 'training_info', filename + '.json'), 'w') as outfile:
json.dump(train_info, outfile)
np.savez(os.path.join(homedir, 'plant_classification', 'training_weights', filename + '.npz'), *lasagne.layers.get_all_param_values(net['prob']))
return test_fn
| [
"lara.cern@gmail.com"
] | lara.cern@gmail.com |
0cc4a2fbc3d553407bee5160d9a38847be8d9dd1 | f3b233e5053e28fa95c549017bd75a30456eb50c | /bace_input/L3C/3C-7G_MD_NVT_rerun/set_7.py | 0f46d20e708063854de9926a0271f971f5bca6db | [] | no_license | AnguseZhang/Input_TI | ddf2ed40ff1c0aa24eea3275b83d4d405b50b820 | 50ada0833890be9e261c967d00948f998313cb60 | refs/heads/master | 2021-05-25T15:02:38.858785 | 2020-02-18T16:57:04 | 2020-02-18T16:57:04 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 740 | py | import os
dir = '/mnt/scratch/songlin3/run/bace/L3C/MD_NVT_rerun/ti_one-step/3C_7G/'
filesdir = dir + 'files/'
temp_prodin = filesdir + 'temp_prod_7.in'
temp_pbs = filesdir + 'temp_7.pbs'
lambd = [ 0.00922, 0.04794, 0.11505, 0.20634, 0.31608, 0.43738, 0.56262, 0.68392, 0.79366, 0.88495, 0.95206, 0.99078]
for j in lambd:
os.chdir("%6.5f" %(j))
workdir = dir + "%6.5f" %(j) + '/'
#prodin
prodin = workdir + "%6.5f_prod_7.in" %(j)
os.system("cp %s %s" %(temp_prodin, prodin))
os.system("sed -i 's/XXX/%6.5f/g' %s" %(j, prodin))
#PBS
pbs = workdir + "%6.5f_7.pbs" %(j)
os.system("cp %s %s" %(temp_pbs, pbs))
os.system("sed -i 's/XXX/%6.5f/g' %s" %(j, pbs))
#submit pbs
#os.system("qsub %s" %(pbs))
os.chdir(dir)
| [
"songlin3@msu.edu"
] | songlin3@msu.edu |
e989e386a134506b91ae97587f771f0c11f17115 | 3fa8eead6e001c4d5a6dc5b1fd4c7b01d7693292 | /ros_final_exam/src/path_exam/src/drone_takeoff.py | 72614d2c3d72572785aac0f590f96c7dbe3cb835 | [] | no_license | MarzanShuvo/Ros_from_the_construct | 09261902841cdd832672658947790ec5fbba4cd3 | 4798234284d9d0bab3751e9d8ac2df95ae34a5bf | refs/heads/master | 2023-08-24T17:28:09.182113 | 2021-10-23T07:57:02 | 2021-10-23T07:57:02 | 339,105,075 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 320 | py | #! /usr/bin/env python
import rospy
from std_msgs.msg import Empty
pub = rospy.Publisher('/drone/takeoff', Empty, queue_size=1)
rospy.init_node('taking_off', anonymous=True)
i=0
takeoff_msg = Empty()
while not (i==3):
rospy.loginfo("Taking off....... ")
pub.publish(takeoff_msg)
rospy.sleep(1)
i +=1
| [
"marzanalam3@gmail.com"
] | marzanalam3@gmail.com |
4fc4e6b6c216cafeb53c0703782cfe3a9f1fdd53 | 730f89724aca038c15191f01d48e995cb94648bc | /entrances/migrations/0009_auto_20141110_1309.py | 17928096718feb7f3db7ab280502d06f3d51621d | [] | no_license | Happyandhappy/django_email | 14bc3f63376f2568754292708ec8ca7f2e2cf195 | ea858c9fac79112542551b7ba6e899e348f24de3 | refs/heads/master | 2020-03-22T14:22:08.431334 | 2018-07-21T13:41:23 | 2018-07-21T13:41:23 | 140,174,033 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 699 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('entrances', '0008_auto_20141110_1307'),
]
operations = [
migrations.AddField(
model_name='apartment',
name='apartment_integer',
field=models.IntegerField(null=True, editable=False, blank=True),
preserve_default=True,
),
migrations.AlterField(
model_name='apartment',
name='apartment',
field=models.CharField(max_length=255, verbose_name='Apartment'),
preserve_default=True,
),
]
| [
"greyfrapp@gmail.com"
] | greyfrapp@gmail.com |
4ee9e4930ad0c277ac82dc545653ba3ef880b7e6 | 8acffb8c4ddca5bfef910e58d3faa0e4de83fce8 | /ml-flask/Lib/site-packages/sklearn/metrics/_scorer.py | 4bc04f4d204ed29c114a974229d0a44f0ba6f1b8 | [
"MIT"
] | permissive | YaminiHP/SimilitudeApp | 8cbde52caec3c19d5fa73508fc005f38f79b8418 | 005c59894d8788c97be16ec420c0a43aaec99b80 | refs/heads/master | 2023-06-27T00:03:00.404080 | 2021-07-25T17:51:27 | 2021-07-25T17:51:27 | 389,390,951 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 130 | py | version https://git-lfs.github.com/spec/v1
oid sha256:8f37b7f3b8da7123e4a98b2e1da0d724580dcc50f279d05bd2498a751b565c7c
size 29542
| [
"yamprakash130@gmail.com"
] | yamprakash130@gmail.com |
7fe4fa0948b4b59a3b4f406c2ba089c3276a13f0 | 2e7bcb513d1ae368a7fa42fa41397b1e6f18ed80 | /three.py | 5d1b5ce84daee8af3f4f71806322b437c073f343 | [] | no_license | Anjanaanjujsrr/anju-s-codekata | 2826b3508a4e61a4f7d934bfdc7b626fa46b5fa1 | f977392dd6c32e8256e2a9bb9ff873a2577e6a4f | refs/heads/master | 2020-05-23T01:02:46.231105 | 2019-05-20T10:32:21 | 2019-05-20T10:32:21 | 186,580,774 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 72 | py | x,y,z=input().split()
x=int(x)
y=int(y)
z=int(z)
print(max(x,y,z))
| [
"noreply@github.com"
] | Anjanaanjujsrr.noreply@github.com |
5078acc63f2d24dd99dc18787c02cd2665fb8670 | 48894ae68f0234e263d325470178d67ab313c73e | /sa/profiles/Alentis/NetPing/highlight.py | 60164c4a96843307785a0cc7612ad8968ecfa497 | [
"BSD-3-Clause"
] | permissive | DreamerDDL/noc | 7f949f55bb2c02c15ac2cc46bc62d957aee43a86 | 2ab0ab7718bb7116da2c3953efd466757e11d9ce | refs/heads/master | 2021-05-10T18:22:53.678588 | 2015-06-29T12:28:20 | 2015-06-29T12:28:20 | 118,628,133 | 0 | 0 | null | 2018-01-23T15:19:51 | 2018-01-23T15:19:51 | null | UTF-8 | Python | false | false | 1,558 | py | # -*- coding: utf-8 -*-
##----------------------------------------------------------------------
## Alentis.NetPing highlight lexers
##----------------------------------------------------------------------
## Copyright (C) 2007-2014 The NOC Project
## See LICENSE for details
##----------------------------------------------------------------------
from pygments.lexer import RegexLexer, bygroups, include
from pygments.token import *
class ConfigLexer(RegexLexer):
name = "Alentis.NetPing"
tokens = {
"root": [
(r"^!.*", Comment),
(r"(description)(.*?)$", bygroups(Keyword, Comment)),
(r"(password|shared-secret|secret)(\s+[57]\s+)(\S+)",
bygroups(Keyword, Number, String.Double)),
(r"(ca trustpoint\s+)(\S+)", bygroups(Keyword, String.Double)),
(r"^(interface|controller|router \S+|voice translation-\S+|voice-port)(.*?)$", bygroups(Keyword, Name.Attribute)),
(r"^(dial-peer\s+\S+\s+)(\S+)(.*?)$",
bygroups(Keyword, Name.Attribute, Keyword)),
(r"^(vlan\s+)(\d+)$", bygroups(Keyword, Name.Attribute)),
(r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(/\d{1,2})?",
Number), # IPv4 Address/Prefix
(r"49\.\d{4}\.\d{4}\.\d{4}\.\d{4}\.\d{2}", Number), # NSAP
(r"(\s+[0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4}\s+)",
Number), # MAC Address
(r"^(?:no\s+)?\S+", Keyword),
(r"\s+\d+\s+\d*|,\d+|-\d+", Number),
(r".", Text)
]
}
| [
"dmitryluhtionov@gmail.com"
] | dmitryluhtionov@gmail.com |
eadd7513a637a4fbfe75dbf5f146bd0eb9c4b2a3 | e768a26a03283628ceccf98a021e9441101aae0c | /lstail/util/timestamp.py | 937ec3dd3a7938b64cdc4058ff09eb367a7a4a9f | [
"MIT"
] | permissive | eht16/lstail | d8a4ecadf41b71c72bcc54ab59ce7229f7060d00 | 8fb61e9d07b05b27e3d45e988afe0c198010248d | refs/heads/master | 2023-01-24T02:11:54.864001 | 2021-06-24T20:09:07 | 2021-06-24T20:09:07 | 231,070,462 | 6 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,881 | py | # -*- coding: utf-8 -*-
#
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.
from datetime import datetime, timedelta
from lstail.constants import ELASTICSEARCH_TIMESTAMP_FORMATS
from lstail.error import InvalidTimeRangeFormatError, InvalidTimestampFormatError
# ----------------------------------------------------------------------
def parse_and_convert_time_range_to_start_date_time(time_range):
error_message = 'Invalid time range specified: {}. ' \
'Valid examples are: 60, 5m, 12h, 7d'.format(time_range)
try:
# try to parse the time range as integer, interpret the value as seconds
seconds = value = int(time_range)
except TypeError as exc_type:
raise InvalidTimeRangeFormatError(error_message) from exc_type
except ValueError as exc_value:
try:
suffix = time_range[-1]
value = int(time_range[:-1])
except (ValueError, IndexError):
raise InvalidTimeRangeFormatError(error_message) from exc_value
if suffix == 'd':
seconds = value * 86400
elif suffix == 'h':
seconds = value * 3600
elif suffix == 'm':
seconds = value * 60
else:
raise InvalidTimeRangeFormatError(error_message) from exc_value
if value < 0:
raise InvalidTimeRangeFormatError(error_message)
return datetime.now() - timedelta(seconds=seconds)
# ----------------------------------------------------------------------
def parse_timestamp_from_elasticsearch(timestamp):
for format_ in ELASTICSEARCH_TIMESTAMP_FORMATS:
try:
return datetime.strptime(timestamp, format_)
except ValueError:
continue
# we didn't find any matching format, so cry
raise InvalidTimestampFormatError(timestamp)
| [
"enrico.troeger@uvena.de"
] | enrico.troeger@uvena.de |
ded1616469cab5647a546b3d6712f5da6c57babf | 0c3db34634cb85e778c95a4b4ff64514eca0477f | /lagtraj_aux/_version.py | 3c268f304c8fbc69360ae05202fe04dfffbd8245 | [] | no_license | EUREC4A-UK/lagtraj_aux | 4efad4c94bcb9a2a367a6794abe0bc96e99a06af | da39ec1f6afa04a5a808130175b595c9bd9d01af | refs/heads/master | 2023-03-05T17:06:14.938118 | 2021-02-09T12:17:51 | 2021-02-09T14:14:29 | 337,395,052 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,685 | py | # This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.19 (https://github.com/python-versioneer/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
git_date = "$Format:%ci$"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "pep440"
cfg.tag_prefix = ""
cfg.parentdir_prefix = "None"
cfg.versionfile_source = "lagtraj_aux/_version.py"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(
[c] + args,
cwd=cwd,
env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr else None),
)
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip().decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {
"version": dirname[len(parentdir_prefix) :],
"full-revisionid": None,
"dirty": False,
"error": None,
"date": None,
}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print(
"Tried directories %s but none started with prefix %s"
% (str(rootdirs), parentdir_prefix)
)
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG) :] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix) :]
if verbose:
print("picking %s" % r)
return {
"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": None,
"date": date,
}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {
"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": "no suitable tags",
"date": None,
}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(
GITS,
[
"describe",
"--tags",
"--dirty",
"--always",
"--long",
"--match",
"%s*" % tag_prefix,
],
cwd=root,
)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[: git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = "tag '%s' doesn't start with prefix '%s'" % (
full_tag,
tag_prefix,
)
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix) :]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[
0
].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post0.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post0.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post0.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always -long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {
"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None,
}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {
"version": rendered,
"full-revisionid": pieces["long"],
"dirty": pieces["dirty"],
"error": None,
"date": pieces.get("date"),
}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split("/"):
root = os.path.dirname(root)
except NameError:
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None,
}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to compute version",
"date": None,
}
| [
"leif@denby.eu"
] | leif@denby.eu |
21b40d08fed1635c5fe1d8ce52d30d0da90e50af | 786de89be635eb21295070a6a3452f3a7fe6712c | /pyana_examples/tags/V00-00-22/src/myana_epics.py | 43a50ac4ee210d95837b5b08f3fe499855f1f31f | [] | no_license | connectthefuture/psdmrepo | 85267cfe8d54564f99e17035efe931077c8f7a37 | f32870a987a7493e7bf0f0a5c1712a5a030ef199 | refs/heads/master | 2021-01-13T03:26:35.494026 | 2015-09-03T22:22:11 | 2015-09-03T22:22:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,275 | py | #--------------------------------------------------------------------------
# File and Version Information:
# $Id$
#
# Description:
# Pyana user analysis module myana_epics...
#
#------------------------------------------------------------------------
"""User analysis module for pyana framework.
This software was developed for the LCLS project. If you use all or
part of it, please give an appropriate acknowledgment.
@see RelatedModule
@version $Id$
@author Andrei Salnikov
"""
#------------------------------
# Module's version from SVN --
#------------------------------
__version__ = "$Revision$"
# $Source$
#--------------------------------
# Imports of standard modules --
#--------------------------------
import sys
import logging
#-----------------------------
# Imports for other modules --
#-----------------------------
#----------------------------------
# Local non-exported definitions --
#----------------------------------
# local definitions usually start with _
#---------------------
# Class definition --
#---------------------
class myana_epics (object) :
"""Example analysis module which accesses EPICS data. """
#----------------
# Constructor --
#----------------
def __init__ ( self, pv = "BEAM:LCLS:ELEC:Q") :
"""Class constructor. The parameters to the constructor are passed
from pyana configuration file. If parameters do not have default
values here then the must be defined in pyana.cfg. All parameters
are passed as strings, convert to correct type before use.
@param pv Name of the EPICS PV to dump
"""
self.m_pv = pv
#-------------------
# Public methods --
#-------------------
def beginjob( self, evt, env ) :
# Preferred way to log information is via logging package
logging.info( "myana_epics.beginjob() called" )
# Use environment object to access EPICS data
pv = env.epicsStore().value(self.m_pv)
if not pv:
logging.warning('EPICS PV %s does not exist', self.m_pv)
else:
# Returned value should be of the type epics.EpicsPvCtrl.
# The code here demonstrates few members accessible for that type.
# For full list of members see Pyana Ref. Manual.
print "PV %s: id=%d type=%d size=%d status=%s severity=%s values=%s" % \
(self.m_pv, pv.iPvId, pv.iDbrType, pv.iNumElements,
pv.status, pv.severity, pv.values)
def event( self, evt, env ) :
# Use environment object to access EPICS data
pv = env.epicsStore().value(self.m_pv)
if not pv:
logging.warning('EPICS PV %s does not exist', self.m_pv)
else:
# Returned value should be of the type epics.EpicsPvTime.
# The code here demonstrates few members accessible for that type.
# For full list of members see Pyana Ref. Manual.
print "PV %s: id=%d type=%d size=%d status=%s severity=%s values=%s stamp=%s" % \
(self.m_pv, pv.iPvId, pv.iDbrType, pv.iNumElements,
pv.status, pv.severity, pv.values, pv.stamp)
def endjob( self, env ) :
pass
| [
"salnikov@SLAC.STANFORD.EDU@b967ad99-d558-0410-b138-e0f6c56caec7"
] | salnikov@SLAC.STANFORD.EDU@b967ad99-d558-0410-b138-e0f6c56caec7 |
2493c1cc3a4b0fe3b2854c9e23fc45bfface1968 | d50f50f455a2f96e7fbd9fb76fcdcdd71b8cc27c | /Day-23/Day23_Shahazada(ST).py | c48889980ab7f74e9d0a9c64863664893954a65a | [] | no_license | Rushi21-kesh/30DayOfPython | 9b2cc734c553b81d98593031a334b9a556640656 | d9741081716c3cf67823e2acf37f015b5906b913 | refs/heads/main | 2023-06-29T13:18:09.635799 | 2021-07-30T13:33:04 | 2021-07-30T13:33:04 | 384,316,331 | 1 | 0 | null | 2021-07-09T04:01:24 | 2021-07-09T04:01:23 | null | UTF-8 | Python | false | false | 695 | py | '''This program rotate list cyclically by user choice'''
if __name__ == '__main__':
n=int(input("Enter the size of list:- "))
print("Enter element of list")
elementList=[]
for i in range(n):
ele=int(input())
elementList.append(ele)
rotatedlist=[]
    rotateBy=int(input("By how many elements do you want to rotate:- "))
    # take the last rotateBy elements in their original order, then the rest
    for i in range(n-rotateBy,n):
        rotatedlist.append(elementList[i])
for i in range(n-rotateBy):
rotatedlist.append(elementList[i])
print()
print("Rotated cyclically list element are :-",end=" ")
for i in range(n):
print(rotatedlist[i],end=" ")
| [
"noreply@github.com"
] | Rushi21-kesh.noreply@github.com |
eca0c55b107bd3d4779cf6d82077c32e6d204a7c | d7d524d1c0ba1cf62cdbc2f9bf5b9c66fa56726b | /47high.py | 7e7773c0544de60d34e249dd843254966da9d18b | [] | no_license | ramyasutraye/pythonproject | d997ca5ada024e211b6bf087d0d56684daf9df8b | 38975a99eb3ee1ad9e79a9efd538cc992d249fc3 | refs/heads/master | 2020-04-23T19:30:10.128774 | 2018-05-25T06:18:53 | 2018-05-25T06:18:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 136 | py | n=int(input("enter the number:"))
a=[]
for i in range(0,n):
b=int(input("Enter number:"))
a.append(b)
a.sort()
print(min(a),max(a))
| [
"noreply@github.com"
] | ramyasutraye.noreply@github.com |
e0a183e93aadcfada6ef5a4998601ae0e7797837 | 6f9a5717fed38b0a79c399f7e5da55c6a461de6d | /Baekjoon/CardPurchase.py | 9a394075fa2e93607643af50ef1def9dbd576b48 | [] | no_license | Alfred-Walker/pythonps | d4d3b0f7fe93c138d02651e05ca5165825676a5e | 81ef8c712c36aa83d1c53aa50886eb845378d035 | refs/heads/master | 2022-04-16T21:34:39.316565 | 2020-04-10T07:50:46 | 2020-04-10T07:50:46 | 254,570,527 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,845 | py | # These days, collecting the PS cards made by Startlink is all the rage in Mingyu's neighborhood.
#
# A PS card carries the handle and face of someone famous in the PS (Problem Solving) field.
# Each card is colored according to its grade; there are eight grades:
#
# Legendary card
# Red card
# Orange card
# Purple card
# Blue card
# Teal card
# Green card
# Gray card
# Cards can only be bought in card packs, and there are N kinds of packs:
# a pack containing 1 card, a pack containing 2 cards, ..., a pack containing N cards.
#
# Mingyu believes the superstition that an expensive pack holds many high-grade cards
# even if it contains only a few of them. He therefore wants to pay as much as possible
# to buy exactly N cards. A pack containing i cards costs Pi won.
#
# For example, with 4 kinds of packs and P1 = 1, P2 = 5, P3 = 6, P4 = 7,
# the maximum Mingyu must pay to own 4 cards is 10 won: buy the 2-card pack twice.
#
# With P1 = 5, P2 = 2, P3 = 8, P4 = 10,
# buying the 1-card pack four times costs 20 won, which is the maximum in that case.
#
# Finally, with P1 = 3, P2 = 5, P3 = 15, P4 = 16,
# buying the 3-card pack and the 1-card pack for 18 won gives the maximum.
#
# Given the pack prices, write a program that computes the maximum amount Mingyu must pay to buy N cards.
# Buying more than N cards and discarding the extras is not allowed;
# the number of cards in the purchased packs must sum to exactly N.
#
# Input
# The first line contains N, the number of cards Mingyu wants to buy. (1 ≤ N ≤ 1,000)
# The second line contains P1 through PN in order. (1 ≤ Pi ≤ 10,000)
#
# Output
# Print the maximum amount Mingyu must pay to own N cards.
import sys
N = int(sys.stdin.readline().rstrip()) # number of cards to buy
P = [0] + list(map(int, sys.stdin.readline().rstrip().split())) # leading 0 for 1-based indexing
dp = dict() # dp[i]: maximum amount paid to own i cards
dp[0] = 0
for i in range(1, N+1):
dp[i] = 0
for j in range(1, i + 1):
dp[i] = max(dp[i], dp[i - j] + P[j]) # j번째 카드팩에 대하여, dp[i] = dp[i-j] + P[j]
print(dp[N])
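The three examples from the problem statement can be checked with an input-free, standalone version of the same recurrence (`max_pack_cost` is an illustrative helper, not part of the submitted solution):

```python
def max_pack_cost(prices):
    # prices[j-1] is the cost of the pack holding j cards (P1..PN in the statement)
    n = len(prices)
    p = [0] + prices
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        # Best over the last pack bought: j cards for p[j] won.
        dp[i] = max(dp[i - j] + p[j] for j in range(1, i + 1))
    return dp[n]

print(max_pack_cost([1, 5, 6, 7]))    # → 10
print(max_pack_cost([5, 2, 8, 10]))   # → 20
print(max_pack_cost([3, 5, 15, 16]))  # → 18
```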
| [
"studio.alfred.walker@gmail.com"
] | studio.alfred.walker@gmail.com |
97952047d650f3e84518e6583fe08909eb2da9a6 | 9ac19e6733e1f91bb9cb0fe47967491a5e856040 | /test/test_revoke.py | 8c0a82737e93f456dc810b7cc587fccd0bb78a27 | [
"MIT"
] | permissive | DS4SD/project-mognet | 7898b41046a31b82052b1424e6910cb65b14e5c5 | 9e415e88404da0a0eab3b379d6cd7b7d15ca71a6 | refs/heads/main | 2023-05-23T22:02:10.406590 | 2022-07-13T11:53:28 | 2022-07-13T11:53:28 | 474,094,219 | 5 | 1 | MIT | 2022-07-13T11:53:29 | 2022-03-25T16:55:53 | Python | UTF-8 | Python | false | false | 1,402 | py | import asyncio
import uuid
from mognet.model.result import ResultFailed
from mognet.model.result_state import ResultState
import pytest
from mognet import App, Request, Context, task
@pytest.mark.asyncio
async def test_revoke(test_app: App):
req = Request(name="test.sleep", args=(10,))
await test_app.submit(req)
await asyncio.sleep(2)
await test_app.revoke(req.id)
res = await test_app.result_backend.get(req.id)
assert res is not None
assert res.state == ResultState.REVOKED
assert res.revoked
with pytest.raises(ResultFailed):
await res
@task(name="test.recurses_after_wait")
async def recurses_after_wait(context: Context, child_id: uuid.UUID):
req = Request(name="test.add", id=child_id, args=(1, 2))
try:
await asyncio.sleep(5)
finally:
await context.submit(req)
@pytest.mark.asyncio
async def test_revokes_children_if_parent_revoked(test_app: App):
child_id = uuid.uuid4()
req = Request(name="test.recurses_after_wait", args=(child_id,))
await test_app.submit(req)
await asyncio.sleep(1)
await test_app.revoke(req.id)
await asyncio.sleep(1)
child_res = await test_app.result_backend.get(child_id)
assert child_res is not None
assert child_res.state == ResultState.REVOKED
assert child_res.revoked
with pytest.raises(ResultFailed):
await child_res
| [
"dol@zurich.ibm.com"
] | dol@zurich.ibm.com |
9c26d10ade54eaa70dce931bd5513bb7e4b1f601 | 1b5802806cdf2c3b6f57a7b826c3e064aac51d98 | /tensorrt-basic-1.10-3rd-plugin/TensorRT-main/demo/HuggingFace/NNDF/general_utils.py | 64717b1eb53c28be3c1809bc124766cc218189cd | [
"MIT",
"BSD-3-Clause",
"Apache-2.0",
"ISC",
"BSD-2-Clause"
] | permissive | jinmin527/learning-cuda-trt | def70b3b1b23b421ab7844237ce39ca1f176b297 | 81438d602344c977ef3cab71bd04995c1834e51c | refs/heads/main | 2023-05-23T08:56:09.205628 | 2022-07-24T02:48:24 | 2022-07-24T02:48:24 | 517,213,903 | 36 | 18 | null | 2022-07-24T03:05:05 | 2022-07-24T03:05:05 | null | UTF-8 | Python | false | false | 6,519 | py | #
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Common utils used by demo folder."""
import os
import shutil
import timeit
from shutil import rmtree
from typing import Callable, Union, List
from collections import defaultdict
from statistics import mean, median
from glob import glob
# NNDF
from NNDF.networks import NNConfig, NetworkResult, NetworkMetadata
from NNDF.logger import G_LOGGER
# Used for HuggingFace setting random seed
RANDOM_SEED = 42
# Networks #
def register_network_folders(
root_dir: str, config_file_str: str = "*Config.py"
) -> List[str]:
networks = []
for network_configs in glob(os.path.join(root_dir, "*", config_file_str)):
network_name = os.path.split(os.path.split(network_configs)[0])[1]
networks.append(network_name)
return networks
def process_results(category: List[str], results: List[NetworkResult], nconfig: NNConfig):
"""
Calculate and process results across multiple runs.
"""
general_stats = ["script", "accuracy"]
runtime_result_row_names = list(nconfig.NETWORK_SEGMENTS)
if nconfig.NETWORK_FULL_NAME not in nconfig.NETWORK_SEGMENTS:
runtime_result_row_names.append(nconfig.NETWORK_FULL_NAME)
rows = []
row_entry = []
for cat, result in zip(category, results):
# Process runtime results for each group
runtime_results = defaultdict(list)
for runtimes in [nr.median_runtime for nr in result.network_results]:
for runtime in runtimes:
runtime_results[runtime.name].append(runtime.runtime)
# Calculate average runtime for each group
average_group_runtime = {k: mean(v) for k, v in runtime_results.items()}
row_entry = [cat, result.accuracy] + [
average_group_runtime[n] for n in runtime_result_row_names
]
rows.append(row_entry)
headers = general_stats + [r + " (sec)" for r in runtime_result_row_names]
return headers, rows
def process_per_result_entries(script_category: List[str], results: List[NetworkResult], max_output_char:int = 30):
"""Prints tabulations for each entry returned by the runtime result."""
def _shorten_text(w):
l = len(w)
if l > max_output_char:
return w[0:max_output_char // 2] + " ... " + w[-max_output_char//2:]
return w
headers = ["script", "network_part", "accuracy", "runtime", "input", "output"]
row_data_by_input = defaultdict(list)
for cat, result in zip(script_category, results):
for nr in result.network_results:
for runtime in nr.median_runtime:
row_data_by_input[hash(nr.input)].append([
cat,
runtime.name,
result.accuracy,
runtime.runtime,
_shorten_text(nr.input),
_shorten_text(nr.semantic_output)
])
return headers, dict(row_data_by_input)
# IO #
def confirm_folder_delete(
fpath: str, prompt: str = "Confirm you want to delete entire folder?"
) -> None:
"""
Confirms whether or not user wants to delete given folder path.
Args:
fpath (str): Path to folder.
prompt (str): Prompt to display
Returns:
None
"""
msg = prompt + " {} [Y/n] ".format(fpath)
confirm = input(msg)
if confirm == "Y":
rmtree(fpath)
else:
G_LOGGER.info("Skipping file removal.")
def remove_if_empty(
fpath: str,
success_msg: str = "Folder successfully removed.",
error_msg: str = "Folder cannot be removed, there are files.",
) -> None:
"""
Removes an entire folder if folder is empty. Provides print info statements.
Args:
fpath: Location to folder
success_msg: Success message.
error_msg: Error message.
Returns:
None
"""
if len(os.listdir(fpath)) == 0:
os.rmdir(fpath)
G_LOGGER.info(success_msg + " {}".format(fpath))
else:
G_LOGGER.info(error_msg + " {}".format(fpath))
def measure_python_inference_code(
stmt: Union[Callable, str], warmup: int = 3, number: int = 10, iterations: int = 10
) -> None:
"""
Measures the time it takes to run Pythonic inference code.
Statement given should be the actual model inference like forward() in torch.
See timeit for more details on how stmt works.
Args:
stmt (Union[Callable, str]): Callable or string for generating numbers.
number (int): Number of times to call function per iteration.
iterations (int): Number of measurement cycles.
"""
G_LOGGER.debug(
"Measuring inference call with warmup: {} and number: {} and iterations {}".format(
warmup, number, iterations
)
)
# Warmup
warmup_mintime = timeit.repeat(stmt, number=number, repeat=warmup)
G_LOGGER.debug("Warmup times: {}".format(warmup_mintime))
return median(timeit.repeat(stmt, number=number, repeat=iterations)) / number
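The timing pattern above (median over repeated `timeit` runs, normalised by the number of calls per run) can be sketched in isolation; `time_call` here is an illustrative helper, not part of the module:

```python
import timeit
from statistics import median

def time_call(fn, number=10, repeat=5):
    # Median of `repeat` measurements, each timing `number` calls,
    # divided by `number` to get seconds per single call.
    return median(timeit.repeat(fn, number=number, repeat=repeat)) / number

per_call = time_call(lambda: sum(range(1000)))
print(per_call >= 0.0)  # → True
```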
class NNFolderWorkspace:
"""
For keeping track of workspace folder and for cleaning them up.
Due to potential corruption of ONNX model conversion, the workspace is split up by model variants.
"""
def __init__(
self, network_name: str, metadata: NetworkMetadata, working_directory: str
):
self.rootdir = working_directory
self.metadata = metadata
self.network_name = network_name
self.dpath = os.path.join(self.rootdir, self.network_name, metadata.variant)
os.makedirs(self.dpath, exist_ok=True)
def get_path(self) -> str:
return self.dpath
def cleanup(self, force_remove: bool = False) -> None:
fpath = self.get_path()
if force_remove:
return shutil.rmtree(fpath)
remove_if_empty(
fpath,
            success_msg="Successfully removed workspace.",
error_msg="Unable to remove workspace.",
)
| [
"dujw@deepblueai.com"
] | dujw@deepblueai.com |
a202a75b98c1863c08432cd4589d7efbb80bd7d9 | 5e95083d63ce1e76385dd34c96c13c7ac382aa28 | /Длина последовательности.py | c6500972c5eedee2ad6a125ce06bac0785b77950 | [] | no_license | oOoSanyokoOo/Course-Python-Programming-Basics | 1ee3bff98259951d87d8656af1519884fb089f41 | 88a8ada069da45a882942ef83dd3d3bcb9cb3b0d | refs/heads/main | 2023-06-14T20:57:25.347205 | 2021-07-08T07:09:20 | 2021-07-08T07:09:20 | 384,029,659 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 95 | py | i = 0
while True:
n = int(input())
if n == 0:
break
i += 1
print(i)
| [
"noreply@github.com"
] | oOoSanyokoOo.noreply@github.com |
056c67711ce448d1738be23298e3ca357a3e5980 | 6800da49fb74cbc0079d3106762122ea102562be | /channel_manager.py | a7711d6079828cd8d5634f7a70a045f8e2856764 | [] | no_license | legoktm/adminbots | 8f9e03eb2002addf0e0589d627202cd977bafd7e | 0b0a913c8b1ad3d92b77d6352660a05af54f5e06 | refs/heads/master | 2016-09-05T19:17:30.683600 | 2013-11-21T01:51:20 | 2013-11-21T01:51:20 | 10,352,998 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 903 | py | # -*- coding: utf-8 -*-
# (C) Legoktm, 2013
# Licensed under the MIT License
# Assists with joining/parting channels
import os
import yaml
from mtirc import hooks
filename = os.path.expanduser('~/channels.yml')
def get_channel_list():
with open(filename) as f:
raw = f.read()
data = yaml.load(raw)
return data
def on_connect(**kw):
data = get_channel_list()
if kw['server'] in data:
for channel in data[kw['server']]:
kw['bot'].servers[kw['server']].join(channel)
def on_msg(**kw):
if kw['text'] == '!reload channels':
data = get_channel_list()
for server in data:
if server in kw['bot'].servers:
for channel in data[server]:
kw['bot'].servers[server].join(channel)
hooks.add_hook('connected', 'channel_joiner', on_connect)
hooks.add_hook('on_msg', 'channel_reloader', on_msg)
| [
"legoktm@gmail.com"
] | legoktm@gmail.com |
da01bc57ce96ce7a637d80966352e5dd5539954c | b2ff7365dda9fa9290c2eae04988e3bda9cae23a | /13_top_k/8.py | 74578e1adccfe804dc3394e7b47b49305637cb5f | [] | no_license | terrifyzhao/educative3 | cd6ccdb0fc4b9ba7f5058fe2e3d2707f022d8b16 | 5c7db9ef6cf58ca5e68bb5aec8ed95af1d5c0f47 | refs/heads/master | 2022-12-26T23:53:30.645339 | 2020-10-10T00:48:22 | 2020-10-10T00:48:22 | 298,991,655 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 637 | py | from heapq import *
def find_closest_elements(arr, K, X):
result = []
min_heap = []
for i in range(len(arr)):
heappush(min_heap, [abs(arr[i] - X), i])
for i in range(K):
index = heappop(min_heap)[1]
result.append(arr[index])
return result
def main():
print("'K' closest numbers to 'X' are: " +
str(find_closest_elements([5, 6, 7, 8, 9], 3, 7)))
print("'K' closest numbers to 'X' are: " +
str(find_closest_elements([2, 4, 5, 6, 9], 3, 6)))
print("'K' closest numbers to 'X' are: " +
str(find_closest_elements([2, 4, 5, 6, 9], 3, 10)))
main()
| [
"zjiuzhou@gmail.com"
] | zjiuzhou@gmail.com |
aaacb65e368ef4189378c6a8e678963699b64664 | 8b8351c8d0431a95d2e1ad88a1ef42470ff6f66c | /python/exceptions_hierarchy.py | d02f55310a599e914c6972c208f5bba08b75fd07 | [] | no_license | miso-belica/playground | 2d771197cca8d8a0031466c97317bfa38bb2faff | 6c68f648301801785db8b3b26cb3f31b782389ec | refs/heads/main | 2022-11-29T09:17:24.661966 | 2022-11-23T22:19:50 | 2022-11-24T07:20:13 | 30,124,960 | 14 | 8 | null | 2022-11-24T07:20:14 | 2015-01-31T20:18:53 | Python | UTF-8 | Python | false | false | 386 | py | # -*- coding: utf-8 -*-
import sys
import time
if __name__ == "__main__":
try:
# time.sleep(5)
# sys.exit(1)
raise ValueError("value")
except Exception as e:
        print("Caught ValueError!", e)
try:
sys.exit(1)
except Exception:
            print("Caught exit by Exception.")
except:
            print("Caught exit by empty except")
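The behaviour this playground file probes hinges on Python's exception hierarchy: `SystemExit` (raised by `sys.exit()`) derives from `BaseException` but not from `Exception`, so only the bare `except:` clause catches it. A quick check:

```python
# SystemExit sits outside the Exception branch of the hierarchy.
assert not issubclass(SystemExit, Exception)
assert issubclass(SystemExit, BaseException)
# Ordinary errors such as ValueError are Exception subclasses.
assert issubclass(ValueError, Exception)
print("hierarchy confirmed")
```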
| [
"miso.belica@gmail.com"
] | miso.belica@gmail.com |
40bb5c70b623f899a77f7317d99cbb2312d64a19 | 086ece6952b56602c20709bfa219037b0375ab0c | /ENGLISH_DETECTION.py | 085150b7a9dbb3fd571fb2f0c15a3e22448c73fd | [] | no_license | pseudo11235813/Random-Python-Ciphers | 71aca8561b003ab8818e4d934288aef8d2779e9c | 48f04b2e6a32ea67a2bc88d0bb283a51fd5150e5 | refs/heads/master | 2022-12-20T03:53:04.643760 | 2020-09-21T19:16:41 | 2020-09-21T19:16:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,508 | py | #Self-dev English Detector module
#You need to download an English dictionary text file to use this module.
#Search any search engine for an English dictionary file and download one.
#Example usage:
#import eng
#eng.isEnglish(message) ; returns True or False.
#eng.DictLoader() ; asks for the English dictionary file name, then returns a dict containing every word in that dictionary file.
#eng.countWords(text) ; returns the percentage of English words in the given text.
#eng.englishWords(text) ; returns the English words in the given text as a list.
Englishletters = "abcdefghijklmnopqrstuvwxyz"
fullEnglishLetters = Englishletters + " \n\t"
def DictLoader(): #loads the dictionary text file and returns a dict whose keys are its words
englishWords = {}
dictFileName = input("enter the dictionary file name (make sure to specify the whole path or copy the dictionary file into the python32 folder and just type the File's name) : ")
dictFile = open(dictFileName)
EnglishWords = dictFile.read()
for word in EnglishWords.split('\n'):
englishWords[word] = None
dictFile.close()
return englishWords
def countWords(text): #removes all non-letter characters, counts the real (non-gibberish) words in the provided text, and returns their percentage
chars = []
counter = 0
for char in text:
if char in fullEnglishLetters:
chars.append(char)
nonLetters = ''.join(chars)
if chars == []:
return 0
wordsDict = DictLoader()
for word in nonLetters.split():
if word in wordsDict:
counter += 1
return (counter/len(nonLetters.split())) * 100
def isEnglish(text , percentage = 35 , letterPercentage = 5):
    wordsMatch = countWords(text) >= percentage
chars = []
for char in text:
if char in fullEnglishLetters:
chars.append(char)
nonLetters = ''.join(chars)
lettersPercentage = len(nonLetters)/len(text) * 100
lettersMatch = lettersPercentage >= letterPercentage
return wordsMatch and lettersMatch
def englishWords(text):
chars = []
eng = []
counter = 0
for char in text:
if char in fullEnglishLetters:
chars.append(char)
nonLetters = ''.join(chars)
if chars == []:
return 0
wordsDict = DictLoader()
for word in nonLetters.split():
if word in wordsDict:
eng.append(word)
return eng
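The word-percentage idea in countWords can be exercised without DictLoader's interactive prompt by passing the dictionary in directly; this is a minimal standalone sketch (it mirrors the module's lowercase-only filtering, and `word_percentage` is a hypothetical helper, not part of the module):

```python
ALPHA = "abcdefghijklmnopqrstuvwxyz \n\t"

def word_percentage(text, dictionary_words):
    # Keep only lowercase letters and whitespace, as the module does.
    letters = ''.join(ch for ch in text if ch in ALPHA)
    tokens = letters.split()
    if not tokens:
        return 0
    hits = sum(1 for word in tokens if word in dictionary_words)
    return hits / len(tokens) * 100

print(round(word_percentage("hello there qqqzz", {"hello", "there"}), 2))  # → 66.67
```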
| [
"="
] | = |
ab8c01014610afd428a0eac76f21e2d1ea9158de | 8d43e69234f6d7df8ee66ed306d1b3efbea50fe7 | /CMGTools/WMass/python/analyzers/WAnalyzer.py | b304278533ff77147d5db4c802d94f69affc189f | [] | no_license | mariadalfonso/cmg-wmass-44X | c484583c5dfa6af61cbb5f52c492644221c7bfd1 | 5c2051fc062354d26b78a7426f1541d377022ebb | refs/heads/master | 2020-12-29T02:06:52.874006 | 2013-11-05T16:27:21 | 2013-11-05T16:27:21 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 23,424 | py | import operator
import copy
import math
from CMGTools.RootTools.fwlite.Analyzer import Analyzer
from CMGTools.RootTools.statistics.Counter import Counter, Counters
from CMGTools.RootTools.fwlite.AutoHandle import AutoHandle
from CMGTools.RootTools.physicsobjects.PhysicsObjects import Muon, Jet, GenParticle
from CMGTools.RootTools.utils.TriggerMatching import triggerMatched
from CMGTools.RootTools.utils.DeltaR import bestMatch, deltaR, deltaR2
from CMGTools.WMass.analyzers.common_functions import *
# from CMGTools.Utilities.mvaMET.mvaMet import MVAMet, PFMET
# from WMass.analyzers.GetMVAMET import GetMVAMET
class WAnalyzer( Analyzer ):
MuonClass = Muon
JetClass = Jet
def beginLoop(self):
super(WAnalyzer,self).beginLoop()
self.counters.addCounter('WAna')
count = self.counters.counter('WAna')
count.register('W all events')
count.register('W ev trig, good vertex and >= 1 lepton')
count.register('W at least 1 lep trig matched')
count.register('W only 1 lep trig matched')
count.register('W non trg leading lepton pT < 10 GeV')
count.register('W lep is MuIsTightAndIso')
count.register('W Mu_eta<2.1 && Mu_pt>30')
count.register('W pfmet>25')
count.register('W pt<20')
count.register('W Jet_leading_pt<30')
# self.mvamet = MVAMet() # SHOULD BE MVAMet(0.1)
# void Initialize(const edm::ParameterSet &iConfig,
# TString iU1Weights ="$CMSSW_BASE/src/pharris/data/gbrmet_52.root",
# TString iPhiWeights ="$CMSSW_BASE/src/pharris/data/gbrmetphi_52.root",
# TString iCovU1Weights ="$CMSSW_BASE/src/pharris/data/gbrcovu1_52.root",
# TString iCovU2Weights ="$CMSSW_BASE/src/pharris/data/gbrcovu2_52.root",
# MVAMet::MVAType iType=kBaseline);
# basePath = os.environ['CMSSW_BASE']+"/src/CMGTools/Utilities/data/mvaMET/";
# print ("Inputs are",
# basePath+'gbrmet_42.root',
# basePath+'gbrmetphi_42.root',
# basePath+'gbru1cov_42.root',
# basePath+'gbru2cov_42.root',
# )
# self.mvamet.Initialize(0,
# basePath+'gbrmet_42.root',
# basePath+'gbrmetphi_42.root',
# basePath+'gbrmetu1cov_42.root',
# basePath+'gbrmetu2cov_42.root',
# 0 # useless
# )
def buildLeptons(self, cmgMuons, event):
'''Creates python Leptons from the muons read from the disk.
to be overloaded if needed.'''
return map( self.__class__.MuonClass, cmgMuons )
def buildJets(self, cmgJets, event):
'''Creates python Jets from the Jets read from the disk.
to be overloaded if needed.'''
return map( self.__class__.JetClass, cmgJets )
def buildGenParticles(self, cmgGenParticles, event):
'''Creates python GenParticles from the di-leptons read from the disk.
to be overloaded if needed.'''
return map( GenParticle, cmgGenParticles )
def declareVariables(self):
tr = self.tree
var( tr, 'pfmet')
def process(self, iEvent, event):
# access event
self.readCollections( iEvent )
# access muons
event.muons = self.buildLeptons( self.handles['muons'].product(), event )
# access jets
event.jets = self.buildJets( self.handles['jets'].product(), event )
# access MET
event.pfmet = self.handles['pfmet'].product()[0]
# access genP
event.genParticles = []
event.LHEweights = []
if self.cfg_comp.isMC :
event.genParticles = self.buildGenParticles( self.mchandles['genpart'].product(), event )
event.LHEweights = self.mchandles['LHEweights'].product()
# define good event bool
event.WGoodEvent = False
# select event
return self.selectionSequence(event, fillCounter=True )
def selectionSequence(self, event, fillCounter):
if fillCounter: self.counters.counter('WAna').inc('W all events')
# if self.cfg_comp.isMC :
# print event.LHEweights.comments_size()
# for i in range(0,event.LHEweights.comments_size()):
# print i, event.LHEweights.getComment(i).split()
# retrieve collections of interest (muons and jets)
event.allMuons = copy.copy(event.muons)
event.selMuons = copy.copy(event.muons)
event.NoTriggeredMuonsLeadingPt = copy.copy(event.muons)
event.allJets = copy.copy(event.jets)
event.selJets = copy.copy(event.jets)
# check if the event is MC and if genp must be saved
event.savegenpW=False
# event.savegenpW=True
# if not (self.cfg_ana.savegenp and self.cfg_comp.isMC):
# event.savegenpW=False
# print 'event.savegenpW 1 ',event.savegenpW
# save genp only for signal events
# i.e. only one W is present and daughters are muon plus neutrino
genW_dummy = [ genp for genp in event.genParticles if \
math.fabs(genp.pdgId())==24 if \
( \
math.fabs(genp.daughter(0).pdgId())==11 or
math.fabs(genp.daughter(1).pdgId())==11 or
math.fabs(genp.daughter(0).pdgId())==13 or
math.fabs(genp.daughter(1).pdgId())==13 or
math.fabs(genp.daughter(0).pdgId())==15 or
math.fabs(genp.daughter(1).pdgId())==15
) ]
# if len(genW_dummy)>0:
# if the genp event is selected, associate gen muon and neutrino
event.genMu = []
event.genMuStatus1 = []
event.genNu = []
event.genWLept = []
if self.cfg_ana.savegenp and self.cfg_comp.isMC:
if len(genW_dummy)==1:
# if len(genW_dummy)>0:
event.genW = [ genp for genp in genW_dummy if \
( \
( math.fabs(genW_dummy[0].daughter(0).pdgId())==13 ) or \
( math.fabs(genW_dummy[0].daughter(1).pdgId())==13 ) \
) ]
if len(event.genW)==1:
event.savegenpW=True
event.genW_mt = mT(self,event.genW[0].daughter(0).p4() , event.genW[0].daughter(1).p4())
event.muGenDeltaRgenP=1e6
event.genWLept.append(event.genW[0])
if [ math.fabs(event.genW[0].daughter(0).pdgId())==13 ]:
event.genMu.append(event.genW[0].daughter(0))
event.genNu.append(event.genW[0].daughter(1))
else:
event.genMu.append(event.genW[0].daughter(1))
event.genNu.append(event.genW[0].daughter(0))
if(len(event.genMu) >0):
if(math.fabs(event.genW[0].mother(0).pdgId())!=6):
event.genMuStatus1.append(returnMuonDaughterStatus1(self,event.genMu[0]))
else:
event.genMuStatus1.append(event.genMu[0])
if len(genW_dummy)>1:
event.genW = [ genp for genp in genW_dummy if \
( \
( math.fabs(genW_dummy[0].daughter(0).pdgId())==11 ) or \
( math.fabs(genW_dummy[0].daughter(1).pdgId())==11 ) or \
( math.fabs(genW_dummy[0].daughter(0).pdgId())==13 ) or \
( math.fabs(genW_dummy[0].daughter(1).pdgId())==13 ) or \
( math.fabs(genW_dummy[0].daughter(0).pdgId())==15 ) or \
( math.fabs(genW_dummy[0].daughter(1).pdgId())==15 ) or \
( math.fabs(genW_dummy[1].daughter(0).pdgId())==11 ) or \
( math.fabs(genW_dummy[1].daughter(1).pdgId())==11 ) or \
( math.fabs(genW_dummy[1].daughter(0).pdgId())==13 ) or \
( math.fabs(genW_dummy[1].daughter(1).pdgId())==13 ) or \
( math.fabs(genW_dummy[1].daughter(0).pdgId())==15 ) or \
( math.fabs(genW_dummy[1].daughter(1).pdgId())==15 ) \
) ]
if len(event.genW)==2:
if ( math.fabs(event.genW[0].daughter(0).pdgId())==13. or math.fabs(event.genW[0].daughter(0).pdgId())==15. or math.fabs(event.genW[0].daughter(0).pdgId())==11. ):
# print 'event.savegenpW 2 ',event.savegenpW
event.savegenpW=False
# print 'found leptonic W0 a'
event.genWLept.append(event.genW[0])
if ( math.fabs(event.genW[0].daughter(0).pdgId())==13 ):
event.genMu.append(event.genW[0].daughter(0))
event.genNu.append(event.genW[0].daughter(1))
event.genW_mt = mT(self,event.genMu[0].p4() , event.genNu[0].p4())
event.muGenDeltaRgenP=1e6
if(len(event.genMu) >0):
event.genMuStatus1.append(returnMuonDaughterStatus1(self,event.genMu[0]))
event.savegenpW=True
if ( math.fabs(event.genW[0].daughter(1).pdgId())==13. or math.fabs(event.genW[0].daughter(1).pdgId())==15. or math.fabs(event.genW[0].daughter(1).pdgId())==11. ):
# print 'event.savegenpW 3 ',event.savegenpW
event.savegenpW=False
# print 'found leptonic W0 b'
event.genWLept.append(event.genW[0])
if ( math.fabs(event.genW[0].daughter(1).pdgId())==13 ):
event.genMu.append(event.genW[0].daughter(1))
event.genNu.append(event.genW[0].daughter(0))
event.genW_mt = mT(self,event.genMu[0].p4() , event.genNu[0].p4())
event.muGenDeltaRgenP=1e6
if(len(event.genMu) >0):
event.genMuStatus1.append(returnMuonDaughterStatus1(self,event.genMu[0]))
# print 'event.savegenpW 4 ',event.savegenpW
event.savegenpW=True
if ( math.fabs(event.genW[1].daughter(0).pdgId())==13. or math.fabs(event.genW[1].daughter(0).pdgId())==15. or math.fabs(event.genW[1].daughter(0).pdgId())==11. ):
# print 'event.savegenpW 5 ',event.savegenpW
event.savegenpW=False
# print 'found leptonic W1 c'
event.genWLept.append(event.genW[1])
if ( math.fabs(event.genW[1].daughter(0).pdgId())==13 ):
event.genMu.append(event.genW[1].daughter(0))
event.genNu.append(event.genW[1].daughter(1))
event.genW_mt = mT(self,event.genMu[0].p4() , event.genNu[0].p4())
event.muGenDeltaRgenP=1e6
if(len(event.genMu) >0):
event.genMuStatus1.append(returnMuonDaughterStatus1(self,event.genMu[0]))
event.savegenpW=True
if ( math.fabs(event.genW[1].daughter(1).pdgId())==13. or math.fabs(event.genW[1].daughter(1).pdgId())==15. or math.fabs(event.genW[1].daughter(1).pdgId())==11. ):
# print 'event.savegenpW 6 ',event.savegenpW
event.savegenpW=False
# print 'found leptonic W1 d'
event.genWLept.append(event.genW[1])
if ( math.fabs(event.genW[1].daughter(1).pdgId())==13 ):
event.genMu.append(event.genW[1].daughter(1))
event.genNu.append(event.genW[1].daughter(0))
event.genW_mt = mT(self,event.genMu[0].p4() , event.genNu[0].p4())
event.muGenDeltaRgenP=1e6
if(len(event.genMu) >0):
event.genMuStatus1.append(returnMuonDaughterStatus1(self,event.genMu[0]))
event.savegenpW=True
# if the genp is not signal, don't save genp but do not exit
# -----> events which will pass the reconstruction but are not signal
# can be considered as background (for example, in W+Jets, from W decaying into electrons, taus)
# else:
# ## here put false for fully hadronic WW
# print 'event.savegenpW 7 ',event.savegenpW
# event.savegenpW=False
# print 'genW found ', len(genW_dummy)
# print 'genWLeptonic found ', len(event.genWLept)
# store event number of muons, MET and jets in all gen events (necessary to make cuts in genp studies...)
# total number of reco muons
event.nMuons=len(event.selMuons)
# clean jets by removing muons
event.selJets = [ jet for jet in event.allJets if ( \
not (bestMatch( jet , event.selMuons ))[1] <0.5 \
and jet.looseJetId() and jet.pt()>30 \
)
]
# reco events must have good reco vertex and trigger fired...
if not (event.passedVertexAnalyzer and event.passedTriggerAnalyzer):
return True
# ...and at lest one reco muon...
if len(event.selMuons) == 0:
return True
if fillCounter: self.counters.counter('WAna').inc('W ev trig, good vertex and >= 1 lepton')
#check if the event is triggered according to cfg_ana
if len(self.cfg_comp.triggers)>0:
# muon object trigger matching
event.selMuons = [lep for lep in event.allMuons if \
trigMatched(self, event, lep)]
# exit if there are no triggered muons
if len(event.selMuons) == 0:
return True, 'trigger matching failed'
else:
if fillCounter: self.counters.counter('WAna').inc('W at least 1 lep trig matched')
# to select W impose only 1 triggering lepton in the event:
# the number of triggering lepton is checked on the whole lepton collection
# before any cut, otherwise could be a Z!!!
if len(event.selMuons) != 1:
return True, 'more than 1 lep trig matched'
else:
if fillCounter: self.counters.counter('WAna').inc('W only 1 lep trig matched')
# store muons that did not fire the trigger
event.NoTriggeredMuonsLeadingPt = [lep for lep in event.allMuons if \
not trigMatched(self, event, lep) ]
# print "len(event.NoTriggeredMuonsLeadingPt)= ",len(event.NoTriggeredMuonsLeadingPt)
# if len(event.NoTriggeredMuonsLeadingPt)>0 : print "event.NoTriggeredMuonsLeadingPt[0].pt() = ",event.NoTriggeredMuonsLeadingPt[0].pt()
if len(event.NoTriggeredMuonsLeadingPt) > 0:
if event.NoTriggeredMuonsLeadingPt[0].pt()>10:
# if (event.NoTriggeredMuonsLeadingPt[0].pt()<10): print "ESISTE UN LEPTONE NON TRIGGERING WITH PT>10, event.NoTriggeredMuonsLeadingPt[0].pt() = ",event.NoTriggeredMuonsLeadingPt[0].pt()
return True, 'rejecting event with non triggering lepton with pT > 10 GeV'
else:
if fillCounter: self.counters.counter('WAna').inc('W non trg leading lepton pT < 10 GeV')
else:
if fillCounter: self.counters.counter('WAna').inc('W non trg leading lepton pT < 10 GeV')
# if the genp are saved, compute dR between gen and reco muon
if (event.savegenpW and len(event.genW)==1):
event.muGenDeltaRgenP = deltaR( event.selMuons[0].eta(), event.selMuons[0].phi(), event.genMu[0].eta(), event.genMu[0].phi() )
# associate good vertex to muon to compute dxy
event.selMuons[0].associatedVertex = event.goodVertices[0]
# testing offline muon cuts (tight+iso, no kinematical cuts)
event.selMuonIsTightAndIso = testLeg(self, event.selMuons[0] )
event.selMuonIsTight = testLegID( self,event.selMuons[0] )
# START RETRIEVING MVAMET
# INPUT DEFINITIONS AS OF HTT
# mvaMETTauMu = cms.EDProducer(
# "MVAMETProducerTauMu",
# pfmetSrc = cms.InputTag('pfMetForRegression'),
# tkmetSrc = cms.InputTag('tkMet'),
# nopumetSrc = cms.InputTag('nopuMet'),
# pucmetSrc = cms.InputTag('pcMet'),
# pumetSrc = cms.InputTag('puMet'),
# recBosonSrc = cms.InputTag('cmgTauMuSel'),
# jetSrc = cms.InputTag('cmgPFJetSel'),
# leadJetSrc = cms.InputTag('cmgPFBaseJetLead'),
# vertexSrc = cms.InputTag('goodPVFilter'),
# nJetsPtGt1Src = cms.InputTag('nJetsPtGt1'),
# rhoSrc = cms.InputTag('kt6PFJets','rho'),
# enable = cms.bool(True),
# verbose = cms.untracked.bool( False ),
# weights_gbrmet = cms.string(weights_gbrmet),
# weights_gbrmetphi = cms.string(weights_gbrmetphi),
# weights_gbrmetu1cov = cms.string(weights_gbrmetu1cov),
# weights_gbrmetu2cov = cms.string(weights_gbrmetu2cov),
# #COLIN: make delta R a parameter
# )
# self.prepareObjectsForMVAMET(event)
# self.mvamet.getMet(
# event.cleanpfmetForRegression, #iPFMet,
# event.cleantkmet, #iTKMet,
# event.cleannopumet, #iNoPUMet,
# event.pumet, #iPUMet,
# event.cleanpucmet, #iPUCMet,
# event.iLeadJet, #event.iLeadJet,
# event.i2ndJet, #event.i2ndJet,
# event.NJetsGt30, #iNJetsGt30,
# event.nJetsPtGt1Clean, #iNJetsGt1,
# len(event.goodVertices), #iNGoodVtx,
# event.iJets_p4, #iJets,
# event.iJets_mva, #iJets,
# event.iJets_neutFrac, #iJets,
# False, #iPrintDebug,
# event.visObjectP4s_array #visObjectP4s
# )
# event.mvamet = self.mvamet.GetMet_first();
# event.GetMVAMet_second = self.mvamet.GetMet_second();
# print 'AFTER MVAmet_test'
# print 'event.pfmet.pt() ', event.pfmet.pt()
# print 'event.selMuons[0].pt() ',event.selMuons[0].pt(),' event.mvamet.Pt() ',event.mvamet.Pt()
# print ''
# print 'event.GetMVAMet_second ',event.GetMVAMet_second,' event.GetMVAMet_second.significance() ',event.GetMVAMet_second.significance().Print()
# define a W from lepton and MET
event.W4V = event.selMuons[0].p4() + event.pfmet.p4()
event.W4V_mt = mT(self,event.selMuons[0].p4() , event.pfmet.p4())
event.covMatrixMuon = []
RetrieveMuonMatrixIntoVector(self,event.selMuons[0],event.covMatrixMuon)
# print event.covMatrixMuon
# Code to study the recoil (not very useful for W...)
metVect = event.pfmet.p4().Vect()
metVect.SetZ(0.) # use only transverse info
WVect = event.W4V.Vect()
WVect.SetZ(0.) # use only transverse info
recoilVect = - copy.deepcopy(metVect) ## FIXED (met sign inverted)
# recoilVect -= WVect
temp_recoil = event.selMuons[0].p4().Vect()
temp_recoil.SetZ(0.) # use only transverse info
recoilVect -= temp_recoil ## FIXED (subtract only lepton for consistent recoil definition)
uWVect = WVect.Unit()
zAxis = type(WVect)(0,0,1)
uWVectPerp = WVect.Cross(zAxis).Unit()
u1 = recoilVect.Dot(uWVect) # recoil parallel to W pt
u2 = - recoilVect.Dot(uWVectPerp) # recoil perpendicular to W pt
event.u1 = u1
event.u2 = u2
if fillCounter:
if event.selMuonIsTightAndIso :
self.counters.counter('WAna').inc('W lep is MuIsTightAndIso')
if testLegKine( self, event.selMuons[0] , 30 , 2.1 ) :
self.counters.counter('WAna').inc('W Mu_eta<2.1 && Mu_pt>30')
if event.pfmet.pt() >25:
self.counters.counter('WAna').inc('W pfmet>25')
if event.W4V.Pt() < 20:
self.counters.counter('WAna').inc('W pt<20')
if len(event.selJets) > 0:
if event.selJets[0].pt()<30:
self.counters.counter('WAna').inc('W Jet_leading_pt<30')
else:
self.counters.counter('WAna').inc('W Jet_leading_pt<30')
# event is fully considered as good
# if fillCounter: self.counters.counter('WAna').inc('W pass')
event.WGoodEvent = True
return True
def declareHandles(self):
super(WAnalyzer, self).declareHandles()
self.handles['cmgTriggerObjectSel'] = AutoHandle('cmgTriggerObjectSel','std::vector<cmg::TriggerObject>')
self.handles['muons'] = AutoHandle('cmgMuonSel','std::vector<cmg::Muon>')
self.handles['jets'] = AutoHandle('cmgPFJetSel','std::vector<cmg::PFJet>')
self.handles['jetLead'] = AutoHandle('cmgPFBaseJetLead','vector<cmg::BaseJet>')
self.handles['pfmet'] = AutoHandle('cmgPFMET','std::vector<cmg::BaseMET>' )
self.handles['pfMetForRegression'] = AutoHandle('pfMetForRegression','std::vector<reco::PFMET>' )
self.handles['tkmet'] = AutoHandle('tkMet','std::vector<reco::PFMET>' )
self.handles['nopumet'] = AutoHandle('nopuMet','std::vector<reco::PFMET>' )
self.handles['pumet'] = AutoHandle('puMet','std::vector<reco::PFMET>' )
self.handles['pucmet'] = AutoHandle('pcMet','std::vector<reco::PFMET>' )
self.mchandles['genpart'] = AutoHandle('genParticlesPruned','std::vector<reco::GenParticle>')
self.handles['vertices'] = AutoHandle('offlinePrimaryVertices','std::vector<reco::Vertex>')
self.handles['nJetsPtGt1'] = AutoHandle('nJetsPtGt1','int')
self.mchandles['LHEweights'] = AutoHandle('source','LHEEventProduct')
| [
"perrozzi@cern.ch"
] | perrozzi@cern.ch |
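The u1/u2 recoil decomposition computed in the analyzer above (projection onto axes parallel and perpendicular to the W transverse momentum, with the perpendicular axis built from the cross product with the z axis) can be sketched with plain 2-tuples in place of the ROOT/cmg vector classes. `recoil_components` is a hypothetical helper for illustration, not part of the analyzer:

```python
import math

def recoil_components(recoil, w_pt):
    """Project a transverse recoil vector onto axes parallel (u1) and
    perpendicular (u2) to the boson transverse momentum w_pt."""
    wx, wy = w_pt
    norm = math.hypot(wx, wy)
    ux, uy = wx / norm, wy / norm          # unit vector along the W pT
    # W x z-hat gives the perpendicular axis (wy, -wx)/|W|, as in the analyzer
    u1 = recoil[0] * ux + recoil[1] * uy
    u2 = -(recoil[0] * uy - recoil[1] * ux)
    return u1, u2

# recoil exactly opposite to the W: fully parallel, no perpendicular part
u1, u2 = recoil_components((-3.0, -4.0), (3.0, 4.0))
assert abs(u1 + 5.0) < 1e-9 and abs(u2) < 1e-9
```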
42db80f31e7be0f63eda0be8f66f974d95ed6f61 | dbd23b5c9ead096ea1b4c4ddd2acba3f6b4eb0db | /testing/test_delete_job_when_finished.py | 62808bad32277674cc8859516c778e703e7e1ef6 | [] | no_license | NGTS/real-time-transmission | b067b9572d02ae99c7cbd6c569c4ac36eb14bc25 | f70901dbc9ae59515e7786d5d3a5978c46adc312 | refs/heads/master | 2020-06-07T16:11:45.860569 | 2018-02-26T11:15:49 | 2018-02-26T11:15:49 | 42,189,235 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 508 | py | import pytest
import pymysql
from ngts_transmission.watching import fetch_transmission_jobs
@pytest.fixture
def cursor():
connection = pymysql.connect(user='ops', db='ngts_ops')
cursor = connection.cursor()
try:
yield cursor
finally:
connection.rollback()
connection.close()
def test_job_deleted(job_db, cursor):
job = list(fetch_transmission_jobs(cursor))[0]
job.remove_from_database(cursor)
assert list(fetch_transmission_jobs(cursor)) == []
| [
"s.r.walker101@googlemail.com"
] | s.r.walker101@googlemail.com |
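The rollback-based isolation in the `cursor` fixture above can be sketched without a MySQL server; this sketch uses `sqlite3` purely for illustration — the real tests talk to the `ngts_ops` database through `pymysql`:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def isolated_cursor(connection):
    """Yield a cursor whose changes are rolled back afterwards."""
    cur = connection.cursor()
    try:
        yield cur
    finally:
        connection.rollback()  # undo anything the test wrote
        cur.close()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER)")
conn.commit()

with isolated_cursor(conn) as cur:
    cur.execute("INSERT INTO job VALUES (1)")
    cur.execute("SELECT COUNT(*) FROM job")
    assert cur.fetchone()[0] == 1   # the row is visible inside the block

# after the rollback the table is empty again
assert conn.execute("SELECT COUNT(*) FROM job").fetchone()[0] == 0
```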
3a0df4406e2172b099f147aac840fbdf997001a3 | 620cd7d12a3d241da9fe59f30bbbc97c3ffa61e2 | /apptools/apptools-android-tests/apptools/build_path.py | 2ceabab85becaf578c47fac647bd2d36b0cc4829 | [
"BSD-3-Clause"
] | permissive | playbar/crosswalk-test-suite | e46db96343f4a47f1a19fddaedc519818c10d992 | 29686407e8b3106cf2b0e87080f927609e745f8e | refs/heads/master | 2021-05-29T15:33:01.099059 | 2015-10-09T06:03:22 | 2015-10-09T06:03:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,717 | py | #!/usr/bin/env python
#
# Copyright (c) 2015 Intel Corporation.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of works must retain the original copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the original copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of Intel Corporation nor the names of its contributors
# may be used to endorse or promote products derived from this work without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY INTEL CORPORATION "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL INTEL CORPORATION BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors:
# Yun, Liu<yunx.liu@intel.com>
import unittest
import os
import comm
import shutil
class TestCrosswalkApptoolsFunctions(unittest.TestCase):
def test_build_path_normal(self):
comm.setUp()
comm.create(self)
if os.path.exists("pkg"):
shutil.rmtree("pkg")
os.mkdir("pkg")
os.chdir('pkg')
buildcmd = comm.HOST_PREFIX + comm.PackTools + "crosswalk-app build " + comm.XwalkPath + "org.xwalk.test"
comm.build(self, buildcmd)
comm.run(self)
os.chdir('../')
shutil.rmtree("pkg")
comm.clear("org.xwalk.test")
os.system('adb start-server')
def test_build_path_release(self):
comm.setUp()
comm.create(self)
if os.path.exists("pkg"):
shutil.rmtree("pkg")
os.mkdir("pkg")
os.chdir('pkg')
buildcmd = comm.HOST_PREFIX + comm.PackTools + "crosswalk-app build release " + comm.XwalkPath + "org.xwalk.test"
comm.build(self, buildcmd)
comm.run(self)
os.chdir('../')
shutil.rmtree("pkg")
comm.clear("org.xwalk.test")
os.system('adb start-server')
if __name__ == '__main__':
unittest.main()
| [
"yunx.liu@intel.com"
] | yunx.liu@intel.com |
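The mkdir/chdir/rmtree dance repeated in both tests above can be sketched more safely with `tempfile`, so the scratch tree is removed even when an assertion fails. `run_in_scratch_dir` is a hypothetical helper, not part of the `comm` module:

```python
import os
import tempfile

def run_in_scratch_dir(task):
    """Run task() inside a throwaway directory, always restoring the old cwd."""
    old_cwd = os.getcwd()
    with tempfile.TemporaryDirectory() as scratch:
        os.chdir(scratch)
        try:
            return task()
        finally:
            os.chdir(old_cwd)   # restored even if task() raises

scratch_path = run_in_scratch_dir(os.getcwd)
assert os.getcwd() != scratch_path        # we are back where we started
assert not os.path.exists(scratch_path)   # the scratch tree was removed
```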
78e008b4a51cdbbb81dead7bc5945ee98ccad862 | bf681fbd7edbf4f8f1e0b20cbd09b362f777c9c3 | /matlab/mex_handle_example/stuff.py | 55b253d58c48aa9d653944185b73a4b2aea17ae8 | [
"BSD-3-Clause"
] | permissive | EricCousineau-TRI/repro | 308d4a86f3c7da8be5811db2f3f68d39db60d7ed | 9800f45e07f511c9a355ee90333955451b55559a | refs/heads/master | 2023-08-31T13:49:23.540640 | 2023-08-25T19:18:33 | 2023-08-25T19:18:33 | 87,116,976 | 24 | 13 | NOASSERTION | 2023-03-25T01:40:55 | 2017-04-03T20:19:28 | Jupyter Notebook | UTF-8 | Python | false | false | 25 | py | def test(x):
    print(x)
| [
"eric.cousineau@tri.global"
] | eric.cousineau@tri.global |
3bfdba7d4e743674ea2486e57020635bd9131403 | b529da0623557e3271adbc148617000ca699ac08 | /test/test_homepage_section.py | d2c2493adeb13b95ddddecfecc84b1800516f7eb | [
"MIT"
] | permissive | gustavs408650/looker_sdk_30 | 106dc73ac8949e06c68c6caf544123edabe6c5cc | 8b52449f216b2cb3b84f09e2856bcea1ed4a2b0c | refs/heads/master | 2020-09-08T16:42:02.224496 | 2019-11-13T14:59:12 | 2019-11-13T14:59:12 | 221,186,703 | 0 | 0 | MIT | 2019-11-12T10:05:51 | 2019-11-12T10:05:50 | null | UTF-8 | Python | false | false | 2,486 | py | # coding: utf-8
"""
Looker API 3.0 Reference
### Authorization The Looker API uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. Pass API3 credentials to the **/login** endpoint to obtain a temporary access_token. Include that access_token in the Authorization header of Looker API requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization) ### Client SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. Client SDKs for a variety of programming languages can be generated from the Looker API's Swagger JSON metadata to streamline use of the Looker API in your applications. A client SDK for Ruby is available as an example. For more information, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks) ### Try It Out! The 'api-docs' page served by the Looker instance includes 'Try It Out!' buttons for each API method. After logging in with API3 credentials, you can use the \"Try It Out!\" buttons to call the API directly from the documentation page to interactively explore API features and responses. ### Versioning Future releases of Looker will expand this API release-by-release to securely expose more and more of the core power of Looker to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning. Stable (non-beta) API endpoints should not receive breaking changes in future releases. For more information, see [Looker API Versioning](https://looker.com/docs/r/api/versioning) # noqa: E501
OpenAPI spec version: 3.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import looker_client_30
from looker_client_30.looker_sdk.homepage_section import HomepageSection # noqa: E501
from looker_client_30.rest import ApiException
class TestHomepageSection(unittest.TestCase):
"""HomepageSection unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testHomepageSection(self):
"""Test HomepageSection"""
# FIXME: construct object with mandatory attributes with example values
# model = looker_client_30.models.homepage_section.HomepageSection() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| [
"looker@MacBook-Pro.local"
] | looker@MacBook-Pro.local |
2ba5addce7fd9b7cb325e48d5c45b3ce7fd59344 | 57fc5d54f5df359c7a53020fb903f36479d3a322 | /controllers/.history/supervisor/test_20201127155537.py | e3b792d9d8e09cd6b8a393ddf0de55beb40da8d5 | [] | no_license | shenwuyue-xie/webots_testrobots | 929369b127258d85e66c5275c9366ce1a0eb17c7 | 56e476356f3cf666edad6449e2da874bb4fb4da3 | refs/heads/master | 2023-02-02T11:17:36.017289 | 2020-12-20T08:22:59 | 2020-12-20T08:22:59 | 323,032,362 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,372 | py | import numpy
import math
import os
# def normalize_to_range(value, min, max, newMin, newMax):
# value = float(value)
# min = float(min)
# max = float(max)
# newMin = float(newMin)
# newMax = float(newMax)
# return (newMax - newMin) / (max - min) * (value - max) + newMax
# k = normalize_to_range(50,0,1000,0,1)
# print(k)
# x = [0.5 for i in range(12)]
# y = numpy.random.normal(12)
# """ better function """
# def robot_step(self,action):
# flag_translation = False
# flag_rotation = False
# if action[-1] > 0.8 and action[-1] <= 1 and self.robot_num < Max_robotnum:
# last_translation = self.robot_handles[-1].getField('translation').getSFVec3f()
# last_angle = self.robot_handles[-1].getField('rotation').getSFRotation()[3]
# last_rotation = self.robot_handles[-1].getField('rotation').getSFRotation()
# delta_z = 0.23 * math.cos(last_angle)
# delta_x = 0.23 * math.sin(last_angle)
# new_translation = []
# new_translation.append(last_translation[0] - delta_x)
# new_translation.append(last_translation[1])
# new_translation.append(last_translation[2] - delta_z)
# robot_children = self.robot_handles[-1].getField('children')
# rearjoint_node = robot_children.getMFNode(4)
# joint = rearjoint_node.getField('jointParameters')
# joint = joint.getSFNode()
# para = joint.getField('position')
# hingeposition = para.getSFFloat()
# if hingeposition > 0.8 or hingeposition < -0.8:
# delta = 0.03 - 0.03 * math.cos(hingeposition)
# delta_z = delta * math.cos(last_angle)
# delta_x = delta * math.sin(last_angle)
# new_translation[0] = new_translation[0] + delta_x
# new_translation[2] = new_translation[2] + delta_z
# new_rotation = []
# for i in range(4):
# new_rotation.append(last_rotation[i])
flag_translation = False
flag_rotation = False
new_file = []
with open("Robot.wbo", 'r+') as f:
    lines = f.readlines()
    for line in lines:
        if "translation" in line and not flag_translation:
            replace = "translation " + str(0) + " " + str(0) + " " + str(0)
            line = "\t" + replace + '\n'
            flag_translation = True
        if "rotation" in line and not flag_rotation:
            replace = "rotation " + str(0) + " " + str(0) + " " + str(0) + " " + str(0)
            line = "\t" + replace + '\n'
            flag_rotation = True
        new_file.append(line)
    # write the zeroed pose back in place
    f.seek(0)
    f.writelines(new_file)
    f.truncate()
# rootNode = self.supervisor.getRoot()
# childrenField = rootNode.getField('children')
# childrenField.importMFNode(-1,importname)
# defname = 'robot_' + str(self.robot_num)
# self.robot_handles.append(self.supervisor.getFromDef(defname))
# self.robot_num = self.robot_num + 1
# elif action[-1] >0 and action[-1] <= 0.2 and self.robot_num >1:
# removerobot = self.robot_handles[-1]
# removerobot.remove()
# self.robot_num = self.robot_num - 1
# del(self.robot_handles[-1])
| [
"1092673859@qq.com"
] | 1092673859@qq.com |
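The first-occurrence line rewrite applied to `Robot.wbo` above can be sketched as a pure function over an in-memory string, which makes the replacement logic easy to check without a `.wbo` file on disk:

```python
def reset_pose_lines(text):
    """Zero the first 'translation' and first 'rotation' line,
    mirroring the Robot.wbo rewrite, but as a pure function."""
    done = {"translation": False, "rotation": False}
    out = []
    for line in text.splitlines(keepends=True):
        for key, repl in (("translation", "translation 0 0 0"),
                          ("rotation", "rotation 0 0 0 0")):
            if key in line and not done[key]:
                line = "\t" + repl + "\n"
                done[key] = True
                break
        out.append(line)
    return "".join(out)

src = "DEF robot_1 Robot {\n\ttranslation 1 2 3\n\trotation 0 1 0 1.57\n}\n"
fixed = reset_pose_lines(src)
assert "\ttranslation 0 0 0\n" in fixed
assert "\trotation 0 0 0 0\n" in fixed
```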
1d1b31629d7dcfd346475c36ac89b89bf2102260 | fbbe424559f64e9a94116a07eaaa555a01b0a7bb | /Sklearn_scipy_numpy/source/scipy/signal/ltisys.py | 1deabb3a11809ff2e91ff31f7f036a9b73dbc052 | [
"MIT"
] | permissive | ryfeus/lambda-packs | 6544adb4dec19b8e71d75c24d8ed789b785b0369 | cabf6e4f1970dc14302f87414f170de19944bac2 | refs/heads/master | 2022-12-07T16:18:52.475504 | 2022-11-29T13:35:35 | 2022-11-29T13:35:35 | 71,386,735 | 1,283 | 263 | MIT | 2022-11-26T05:02:14 | 2016-10-19T18:22:39 | Python | UTF-8 | Python | false | false | 79,826 | py | """
ltisys -- a collection of classes and functions for modeling linear
time invariant systems.
"""
from __future__ import division, print_function, absolute_import
#
# Author: Travis Oliphant 2001
#
# Feb 2010: Warren Weckesser
# Rewrote lsim2 and added impulse2.
# Aug 2013: Juan Luis Cano
# Rewrote abcd_normalize.
# Jan 2015: Irvin Probst irvin DOT probst AT ensta-bretagne DOT fr
# Added pole placement
# Mar 2015: Clancy Rowley
# Rewrote lsim
# May 2015: Felix Berkenkamp
# Split lti class into subclasses
#
import warnings
import numpy as np
#np.linalg.qr fails on some tests with LinAlgError: zgeqrf returns -7
#use scipy's qr until this is solved
from scipy.linalg import qr as s_qr
import numpy
from numpy import (r_, eye, real, atleast_1d, atleast_2d, poly,
squeeze, asarray, product, zeros, array,
dot, transpose, ones, zeros_like, linspace, nan_to_num)
import copy
from scipy import integrate, interpolate, linalg
from scipy._lib.six import xrange
from .filter_design import tf2zpk, zpk2tf, normalize, freqs
__all__ = ['tf2ss', 'ss2tf', 'abcd_normalize', 'zpk2ss', 'ss2zpk', 'lti',
'TransferFunction', 'ZerosPolesGain', 'StateSpace', 'lsim',
'lsim2', 'impulse', 'impulse2', 'step', 'step2', 'bode',
'freqresp', 'place_poles']
def tf2ss(num, den):
r"""Transfer function to state-space representation.
Parameters
----------
num, den : array_like
Sequences representing the numerator and denominator polynomials.
The denominator needs to be at least as long as the numerator.
Returns
-------
A, B, C, D : ndarray
State space representation of the system, in controller canonical
form.
Examples
--------
Convert the transfer function:
.. math:: H(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1}
>>> num = [1, 3, 3]
>>> den = [1, 2, 1]
to the state-space representation:
.. math::
\dot{\textbf{x}}(t) =
\begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix} \textbf{x}(t) +
\begin{bmatrix} 1 \\ 0 \end{bmatrix} \textbf{u}(t) \\
\textbf{y}(t) = \begin{bmatrix} 1 & 2 \end{bmatrix} \textbf{x}(t) +
\begin{bmatrix} 1 \end{bmatrix} \textbf{u}(t)
>>> from scipy.signal import tf2ss
>>> A, B, C, D = tf2ss(num, den)
>>> A
array([[-2., -1.],
[ 1., 0.]])
>>> B
array([[ 1.],
[ 0.]])
>>> C
array([[ 1., 2.]])
>>> D
array([ 1.])
"""
# Controller canonical state-space representation.
# if M+1 = len(num) and K+1 = len(den) then we must have M <= K
# states are found by asserting that X(s) = U(s) / D(s)
# then Y(s) = N(s) * X(s)
#
# A, B, C, and D follow quite naturally.
#
num, den = normalize(num, den) # Strips zeros, checks arrays
nn = len(num.shape)
if nn == 1:
num = asarray([num], num.dtype)
M = num.shape[1]
K = len(den)
if M > K:
msg = "Improper transfer function. `num` is longer than `den`."
raise ValueError(msg)
if M == 0 or K == 0: # Null system
return (array([], float), array([], float), array([], float),
array([], float))
# pad numerator to have same number of columns has denominator
num = r_['-1', zeros((num.shape[0], K - M), num.dtype), num]
if num.shape[-1] > 0:
D = num[:, 0]
else:
D = array([], float)
if K == 1:
return array([], float), array([], float), array([], float), D
frow = -array([den[1:]])
A = r_[frow, eye(K - 2, K - 1)]
B = eye(K - 1, 1)
C = num[:, 1:] - num[:, 0] * den[1:]
return A, B, C, D
def _none_to_empty_2d(arg):
if arg is None:
return zeros((0, 0))
else:
return arg
def _atleast_2d_or_none(arg):
if arg is not None:
return atleast_2d(arg)
def _shape_or_none(M):
if M is not None:
return M.shape
else:
return (None,) * 2
def _choice_not_none(*args):
for arg in args:
if arg is not None:
return arg
def _restore(M, shape):
if M.shape == (0, 0):
return zeros(shape)
else:
if M.shape != shape:
raise ValueError("The input arrays have incompatible shapes.")
return M
def abcd_normalize(A=None, B=None, C=None, D=None):
"""Check state-space matrices and ensure they are two-dimensional.
If enough information on the system is provided, that is, enough
properly-shaped arrays are passed to the function, the missing ones
are built from this information, ensuring the correct number of
rows and columns. Otherwise a ValueError is raised.
Parameters
----------
A, B, C, D : array_like, optional
State-space matrices. All of them are None (missing) by default.
See `ss2tf` for format.
Returns
-------
A, B, C, D : array
Properly shaped state-space matrices.
Raises
------
ValueError
If not enough information on the system was provided.
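
    Examples
    --------
    Supplying ``A``, ``B`` and ``C`` is enough to deduce the shape of the
    missing feedthrough matrix ``D``:

    >>> from scipy.signal import abcd_normalize
    >>> A, B, C, D = abcd_normalize([[1, -1], [0, 2]], [[1], [0]], C=[1, 0])
    >>> D.shape
    (1, 1)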
"""
A, B, C, D = map(_atleast_2d_or_none, (A, B, C, D))
MA, NA = _shape_or_none(A)
MB, NB = _shape_or_none(B)
MC, NC = _shape_or_none(C)
MD, ND = _shape_or_none(D)
p = _choice_not_none(MA, MB, NC)
q = _choice_not_none(NB, ND)
r = _choice_not_none(MC, MD)
if p is None or q is None or r is None:
raise ValueError("Not enough information on the system.")
A, B, C, D = map(_none_to_empty_2d, (A, B, C, D))
A = _restore(A, (p, p))
B = _restore(B, (p, q))
C = _restore(C, (r, p))
D = _restore(D, (r, q))
return A, B, C, D
def ss2tf(A, B, C, D, input=0):
r"""State-space to transfer function.
A, B, C, D defines a linear state-space system with `p` inputs,
`q` outputs, and `n` state variables.
Parameters
----------
A : array_like
State (or system) matrix of shape ``(n, n)``
B : array_like
Input matrix of shape ``(n, p)``
C : array_like
Output matrix of shape ``(q, n)``
D : array_like
Feedthrough (or feedforward) matrix of shape ``(q, p)``
input : int, optional
For multiple-input systems, the index of the input to use.
Returns
-------
num : 2-D ndarray
Numerator(s) of the resulting transfer function(s). `num` has one row
for each of the system's outputs. Each row is a sequence representation
of the numerator polynomial.
den : 1-D ndarray
Denominator of the resulting transfer function(s). `den` is a sequence
representation of the denominator polynomial.
Examples
--------
Convert the state-space representation:
.. math::
\dot{\textbf{x}}(t) =
\begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix} \textbf{x}(t) +
\begin{bmatrix} 1 \\ 0 \end{bmatrix} \textbf{u}(t) \\
\textbf{y}(t) = \begin{bmatrix} 1 & 2 \end{bmatrix} \textbf{x}(t) +
\begin{bmatrix} 1 \end{bmatrix} \textbf{u}(t)
>>> A = [[-2, -1], [1, 0]]
>>> B = [[1], [0]] # 2-dimensional column vector
>>> C = [[1, 2]] # 2-dimensional row vector
>>> D = 1
to the transfer function:
.. math:: H(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1}
>>> from scipy.signal import ss2tf
>>> ss2tf(A, B, C, D)
(array([[1, 3, 3]]), array([ 1., 2., 1.]))
"""
# transfer function is C (sI - A)**(-1) B + D
# Check consistency and make them all rank-2 arrays
A, B, C, D = abcd_normalize(A, B, C, D)
nout, nin = D.shape
if input >= nin:
raise ValueError("System does not have the input specified.")
# make SIMO from possibly MIMO system.
B = B[:, input:input + 1]
D = D[:, input:input + 1]
try:
den = poly(A)
except ValueError:
den = 1
if (product(B.shape, axis=0) == 0) and (product(C.shape, axis=0) == 0):
num = numpy.ravel(D)
if (product(D.shape, axis=0) == 0) and (product(A.shape, axis=0) == 0):
den = []
return num, den
num_states = A.shape[0]
type_test = A[:, 0] + B[:, 0] + C[0, :] + D
num = numpy.zeros((nout, num_states + 1), type_test.dtype)
for k in range(nout):
Ck = atleast_2d(C[k, :])
num[k] = poly(A - dot(B, Ck)) + (D[k] - 1) * den
return num, den
def zpk2ss(z, p, k):
"""Zero-pole-gain representation to state-space representation
Parameters
----------
z, p : sequence
Zeros and poles.
k : float
System gain.
Returns
-------
A, B, C, D : ndarray
State space representation of the system, in controller canonical
form.
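
    Examples
    --------
    Convert a single-zero, two-pole system; the state-space matrices come
    back in controller canonical form:

    >>> from scipy.signal import zpk2ss
    >>> A, B, C, D = zpk2ss([-1], [-2, -3], 4)
    >>> A.shape, B.shape, C.shape
    ((2, 2), (2, 1), (1, 2))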
"""
return tf2ss(*zpk2tf(z, p, k))
def ss2zpk(A, B, C, D, input=0):
"""State-space representation to zero-pole-gain representation.
A, B, C, D defines a linear state-space system with `p` inputs,
`q` outputs, and `n` state variables.
Parameters
----------
A : array_like
State (or system) matrix of shape ``(n, n)``
B : array_like
Input matrix of shape ``(n, p)``
C : array_like
Output matrix of shape ``(q, n)``
D : array_like
Feedthrough (or feedforward) matrix of shape ``(q, p)``
input : int, optional
For multiple-input systems, the index of the input to use.
Returns
-------
z, p : sequence
Zeros and poles.
k : float
System gain.
"""
return tf2zpk(*ss2tf(A, B, C, D, input=input))
class lti(object):
"""
Linear Time Invariant system base class.
Parameters
----------
*system : arguments
The `lti` class can be instantiated with either 2, 3 or 4 arguments.
The following gives the number of arguments and the corresponding
subclass that is created:
* 2: `TransferFunction`: (numerator, denominator)
* 3: `ZerosPolesGain`: (zeros, poles, gain)
* 4: `StateSpace`: (A, B, C, D)
Each argument can be an array or a sequence.
Notes
-----
`lti` instances do not exist directly. Instead, `lti` creates an instance
of one of its subclasses: `StateSpace`, `TransferFunction` or
`ZerosPolesGain`.
Changing the value of properties that are not directly part of the current
system representation (such as the `zeros` of a `StateSpace` system) is
very inefficient and may lead to numerical inaccuracies.
"""
def __new__(cls, *system):
"""Create an instance of the appropriate subclass."""
if cls is lti:
N = len(system)
if N == 2:
return super(lti, cls).__new__(TransferFunction)
elif N == 3:
return super(lti, cls).__new__(ZerosPolesGain)
elif N == 4:
return super(lti, cls).__new__(StateSpace)
else:
raise ValueError('Needs 2, 3 or 4 arguments.')
# __new__ was called from a subclass, let it call its own functions
return super(lti, cls).__new__(cls)
def __init__(self, *system):
"""
Initialize the `lti` baseclass.
The heavy lifting is done by the subclasses.
"""
self.inputs = None
self.outputs = None
@property
def num(self):
"""Numerator of the `TransferFunction` system."""
return self.to_tf().num
@num.setter
def num(self, num):
obj = self.to_tf()
obj.num = num
source_class = type(self)
self._copy(source_class(obj))
@property
def den(self):
"""Denominator of the `TransferFunction` system."""
return self.to_tf().den
@den.setter
def den(self, den):
obj = self.to_tf()
obj.den = den
source_class = type(self)
self._copy(source_class(obj))
@property
def zeros(self):
"""Zeros of the `ZerosPolesGain` system."""
return self.to_zpk().zeros
@zeros.setter
def zeros(self, zeros):
obj = self.to_zpk()
obj.zeros = zeros
source_class = type(self)
self._copy(source_class(obj))
@property
def poles(self):
"""Poles of the `ZerosPolesGain` system."""
return self.to_zpk().poles
@poles.setter
def poles(self, poles):
obj = self.to_zpk()
obj.poles = poles
source_class = type(self)
self._copy(source_class(obj))
@property
def gain(self):
"""Gain of the `ZerosPolesGain` system."""
return self.to_zpk().gain
@gain.setter
def gain(self, gain):
obj = self.to_zpk()
obj.gain = gain
source_class = type(self)
self._copy(source_class(obj))
@property
def A(self):
"""State matrix of the `StateSpace` system."""
return self.to_ss().A
@A.setter
def A(self, A):
obj = self.to_ss()
obj.A = A
source_class = type(self)
self._copy(source_class(obj))
@property
def B(self):
"""Input matrix of the `StateSpace` system."""
return self.to_ss().B
@B.setter
def B(self, B):
obj = self.to_ss()
obj.B = B
source_class = type(self)
self._copy(source_class(obj))
@property
def C(self):
"""Output matrix of the `StateSpace` system."""
return self.to_ss().C
@C.setter
def C(self, C):
obj = self.to_ss()
obj.C = C
source_class = type(self)
self._copy(source_class(obj))
@property
def D(self):
"""Feedthrough matrix of the `StateSpace` system."""
return self.to_ss().D
@D.setter
def D(self, D):
obj = self.to_ss()
obj.D = D
source_class = type(self)
self._copy(source_class(obj))
def impulse(self, X0=None, T=None, N=None):
"""
Return the impulse response of a continuous-time system.
See `scipy.signal.impulse` for details.
"""
return impulse(self, X0=X0, T=T, N=N)
def step(self, X0=None, T=None, N=None):
"""
Return the step response of a continuous-time system.
See `scipy.signal.step` for details.
"""
return step(self, X0=X0, T=T, N=N)
def output(self, U, T, X0=None):
"""
Return the response of a continuous-time system to input `U`.
See `scipy.signal.lsim` for details.
"""
return lsim(self, U, T, X0=X0)
def bode(self, w=None, n=100):
"""
Calculate Bode magnitude and phase data of a continuous-time system.
Returns a 3-tuple containing arrays of frequencies [rad/s], magnitude
[dB] and phase [deg]. See `scipy.signal.bode` for details.
Notes
-----
.. versionadded:: 0.11.0
Examples
--------
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> s1 = signal.lti([1], [1, 1])
>>> w, mag, phase = s1.bode()
>>> plt.figure()
>>> plt.semilogx(w, mag) # Bode magnitude plot
>>> plt.figure()
>>> plt.semilogx(w, phase) # Bode phase plot
>>> plt.show()
"""
return bode(self, w=w, n=n)
def freqresp(self, w=None, n=10000):
"""
Calculate the frequency response of a continuous-time system.
Returns a 2-tuple containing arrays of frequencies [rad/s] and
complex magnitude.
See `scipy.signal.freqresp` for details.
"""
return freqresp(self, w=w, n=n)
class TransferFunction(lti):
r"""Linear Time Invariant system class in transfer function form.
Represents the system as the transfer function
:math:`H(s)=\sum_{i=0}^N b[N-i] s^i / \sum_{j=0}^M a[M-j] s^j`, where :math:`b` are
elements of the numerator `num`, :math:`a` are elements of the denominator
`den`, and ``N == len(b) - 1``, ``M == len(a) - 1``.
Parameters
----------
*system : arguments
The `TransferFunction` class can be instantiated with 1 or 2 arguments.
The following gives the number of input arguments and their
interpretation:
* 1: `lti` system: (`StateSpace`, `TransferFunction` or
`ZerosPolesGain`)
* 2: array_like: (numerator, denominator)
See Also
--------
ZerosPolesGain, StateSpace, lti
tf2ss, tf2zpk, tf2sos
Notes
-----
Changing the value of properties that are not part of the
`TransferFunction` system representation (such as the `A`, `B`, `C`, `D`
state-space matrices) is very inefficient and may lead to numerical
inaccuracies.
Examples
--------
Construct the transfer function:
.. math:: H(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1}
>>> from scipy import signal
>>> num = [1, 3, 3]
>>> den = [1, 2, 1]
>>> signal.TransferFunction(num, den)
TransferFunction(
array([ 1., 3., 3.]),
array([ 1., 2., 1.])
)
"""
def __new__(cls, *system):
"""Handle object conversion if input is an instance of lti."""
if len(system) == 1 and isinstance(system[0], lti):
return system[0].to_tf()
# No special conversion needed
return super(TransferFunction, cls).__new__(cls)
def __init__(self, *system):
"""Initialize the state space LTI system."""
# Conversion of lti instances is handled in __new__
if isinstance(system[0], lti):
return
super(TransferFunction, self).__init__(self, *system)
self._num = None
self._den = None
self.num, self.den = normalize(*system)
def __repr__(self):
"""Return representation of the system's transfer function"""
return '{0}(\n{1},\n{2}\n)'.format(
self.__class__.__name__,
repr(self.num),
repr(self.den),
)
@property
def num(self):
"""Numerator of the `TransferFunction` system."""
return self._num
@num.setter
def num(self, num):
self._num = atleast_1d(num)
# Update dimensions
if len(self.num.shape) > 1:
self.outputs, self.inputs = self.num.shape
else:
self.outputs = 1
self.inputs = 1
@property
def den(self):
"""Denominator of the `TransferFunction` system."""
return self._den
@den.setter
def den(self, den):
self._den = atleast_1d(den)
def _copy(self, system):
"""
Copy the parameters of another `TransferFunction` object
Parameters
----------
system : `TransferFunction`
The `StateSpace` system that is to be copied
"""
self.num = system.num
self.den = system.den
def to_tf(self):
"""
Return a copy of the current `TransferFunction` system.
Returns
-------
sys : instance of `TransferFunction`
The current system (copy)
"""
return copy.deepcopy(self)
def to_zpk(self):
"""
Convert system representation to `ZerosPolesGain`.
Returns
-------
sys : instance of `ZerosPolesGain`
Zeros, poles, gain representation of the current system
"""
return ZerosPolesGain(*tf2zpk(self.num, self.den))
def to_ss(self):
"""
Convert system representation to `StateSpace`.
Returns
-------
sys : instance of `StateSpace`
State space model of the current system
"""
return StateSpace(*tf2ss(self.num, self.den))
class ZerosPolesGain(lti):
"""
Linear Time Invariant system class in zeros, poles, gain form.
Represents the system as the transfer function
:math:`H(s)=k \prod_i (s - z[i]) / \prod_j (s - p[j])`, where :math:`k` is
the `gain`, :math:`z` are the `zeros` and :math:`p` are the `poles`.
Parameters
----------
*system : arguments
The `ZerosPolesGain` class can be instantiated with 1 or 3 arguments.
The following gives the number of input arguments and their
interpretation:
* 1: `lti` system: (`StateSpace`, `TransferFunction` or
`ZerosPolesGain`)
* 3: array_like: (zeros, poles, gain)
See Also
--------
TransferFunction, StateSpace, lti
zpk2ss, zpk2tf, zpk2sos
Notes
-----
Changing the value of properties that are not part of the
`ZerosPolesGain` system representation (such as the `A`, `B`, `C`, `D`
state-space matrices) is very inefficient and may lead to numerical
inaccuracies.
"""
def __new__(cls, *system):
"""Handle object conversion if input is an instance of `lti`"""
if len(system) == 1 and isinstance(system[0], lti):
return system[0].to_zpk()
# No special conversion needed
return super(ZerosPolesGain, cls).__new__(cls)
def __init__(self, *system):
"""Initialize the zeros, poles, gain LTI system."""
# Conversion of lti instances is handled in __new__
if isinstance(system[0], lti):
return
super(ZerosPolesGain, self).__init__(self, *system)
self._zeros = None
self._poles = None
self._gain = None
self.zeros, self.poles, self.gain = system
def __repr__(self):
"""Return representation of the `ZerosPolesGain` system"""
return '{0}(\n{1},\n{2},\n{3}\n)'.format(
self.__class__.__name__,
repr(self.zeros),
repr(self.poles),
repr(self.gain),
)
@property
def zeros(self):
"""Zeros of the `ZerosPolesGain` system."""
return self._zeros
@zeros.setter
def zeros(self, zeros):
self._zeros = atleast_1d(zeros)
# Update dimensions
if len(self.zeros.shape) > 1:
self.outputs, self.inputs = self.zeros.shape
else:
self.outputs = 1
self.inputs = 1
@property
def poles(self):
"""Poles of the `ZerosPolesGain` system."""
return self._poles
@poles.setter
def poles(self, poles):
self._poles = atleast_1d(poles)
@property
def gain(self):
"""Gain of the `ZerosPolesGain` system."""
return self._gain
@gain.setter
def gain(self, gain):
self._gain = gain
def _copy(self, system):
"""
Copy the parameters of another `ZerosPolesGain` system.
Parameters
----------
system : instance of `ZerosPolesGain`
The zeros, poles gain system that is to be copied
"""
self.poles = system.poles
self.zeros = system.zeros
self.gain = system.gain
def to_tf(self):
"""
Convert system representation to `TransferFunction`.
Returns
-------
sys : instance of `TransferFunction`
Transfer function of the current system
"""
return TransferFunction(*zpk2tf(self.zeros, self.poles, self.gain))
def to_zpk(self):
"""
Return a copy of the current 'ZerosPolesGain' system.
Returns
-------
sys : instance of `ZerosPolesGain`
The current system (copy)
"""
return copy.deepcopy(self)
def to_ss(self):
"""
Convert system representation to `StateSpace`.
Returns
-------
sys : instance of `StateSpace`
State space model of the current system
"""
return StateSpace(*zpk2ss(self.zeros, self.poles, self.gain))
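As a quick usage sketch of the conversions above (via the public `scipy.signal` API, assuming SciPy is installed), a zeros-poles-gain system can be round-tripped to polynomial form:

```python
import numpy as np
from scipy import signal

# H(s) = 4 * (s + 1) / ((s + 2) * (s + 3)) in zeros-poles-gain form
zpk = signal.ZerosPolesGain([-1], [-2, -3], 4)

# Convert to polynomial (transfer function) form:
# numerator 4s + 4, denominator s^2 + 5s + 6
tf = zpk.to_tf()
print(tf.num, tf.den)
```

`zpk2tf` expands k * prod(s - z_i) into numerator coefficients, so the gain multiplies straight through.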
class StateSpace(lti):
"""
Linear Time Invariant system class in state-space form.
Represents the system as the first order differential equation
:math:`\dot{x} = A x + B u`.
Parameters
----------
*system : arguments
The `StateSpace` class can be instantiated with 1 or 4 arguments.
The following gives the number of input arguments and their
interpretation:
* 1: `lti` system: (`StateSpace`, `TransferFunction` or
`ZerosPolesGain`)
* 4: array_like: (A, B, C, D)
See Also
--------
TransferFunction, ZerosPolesGain, lti
ss2zpk, ss2tf, zpk2sos
Notes
-----
Changing the value of properties that are not part of the
`StateSpace` system representation (such as `zeros` or `poles`) is very
inefficient and may lead to numerical inaccuracies.
"""
def __new__(cls, *system):
"""Handle object conversion if input is an instance of `lti`"""
if len(system) == 1 and isinstance(system[0], lti):
return system[0].to_ss()
# No special conversion needed
return super(StateSpace, cls).__new__(cls)
def __init__(self, *system):
"""Initialize the state space LTI system."""
# Conversion of lti instances is handled in __new__
if isinstance(system[0], lti):
return
super(StateSpace, self).__init__(*system)
self._A = None
self._B = None
self._C = None
self._D = None
self.A, self.B, self.C, self.D = abcd_normalize(*system)
def __repr__(self):
"""Return representation of the `StateSpace` system."""
return '{0}(\n{1},\n{2},\n{3},\n{4}\n)'.format(
self.__class__.__name__,
repr(self.A),
repr(self.B),
repr(self.C),
repr(self.D),
)
@property
def A(self):
"""State matrix of the `StateSpace` system."""
return self._A
@A.setter
def A(self, A):
self._A = _atleast_2d_or_none(A)
@property
def B(self):
"""Input matrix of the `StateSpace` system."""
return self._B
@B.setter
def B(self, B):
self._B = _atleast_2d_or_none(B)
self.inputs = self.B.shape[-1]
@property
def C(self):
"""Output matrix of the `StateSpace` system."""
return self._C
@C.setter
def C(self, C):
self._C = _atleast_2d_or_none(C)
self.outputs = self.C.shape[0]
@property
def D(self):
"""Feedthrough matrix of the `StateSpace` system."""
return self._D
@D.setter
def D(self, D):
self._D = _atleast_2d_or_none(D)
def _copy(self, system):
"""
Copy the parameters of another `StateSpace` system.
Parameters
----------
system : instance of `StateSpace`
The state-space system that is to be copied
"""
self.A = system.A
self.B = system.B
self.C = system.C
self.D = system.D
def to_tf(self, **kwargs):
"""
Convert system representation to `TransferFunction`.
Parameters
----------
kwargs : dict, optional
Additional keywords passed to `ss2zpk`
Returns
-------
sys : instance of `TransferFunction`
Transfer function of the current system
"""
return TransferFunction(*ss2tf(self._A, self._B, self._C, self._D,
**kwargs))
def to_zpk(self, **kwargs):
"""
Convert system representation to `ZerosPolesGain`.
Parameters
----------
kwargs : dict, optional
Additional keywords passed to `ss2zpk`
Returns
-------
sys : instance of `ZerosPolesGain`
Zeros, poles, gain representation of the current system
"""
return ZerosPolesGain(*ss2zpk(self._A, self._B, self._C, self._D,
**kwargs))
def to_ss(self):
"""
Return a copy of the current `StateSpace` system.
Returns
-------
sys : instance of `StateSpace`
The current system (copy)
"""
return copy.deepcopy(self)
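A comparable sketch for the state-space class, again through the public `scipy.signal` API: a double integrator converted to zeros-poles-gain form should show two poles at the origin, no zeros, and unit gain.

```python
import numpy as np
from scipy import signal

# Double integrator y'' = u in state-space form
ss = signal.StateSpace([[0.0, 1.0], [0.0, 0.0]],
                       [[0.0], [1.0]],
                       [[1.0, 0.0]],
                       [[0.0]])
# H(s) = 1/s^2: two poles at the origin, no zeros, unit gain
zpk = ss.to_zpk()
print(zpk.zeros, zpk.poles, zpk.gain)
```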
def lsim2(system, U=None, T=None, X0=None, **kwargs):
"""
Simulate output of a continuous-time linear system, by using
the ODE solver `scipy.integrate.odeint`.
Parameters
----------
system : an instance of the LTI class or a tuple describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2: (num, den)
* 3: (zeros, poles, gain)
* 4: (A, B, C, D)
U : array_like (1D or 2D), optional
An input array describing the input at each time T. Linear
interpolation is used between given times. If there are
multiple inputs, then each column of the rank-2 array
represents an input. If U is not given, the input is assumed
to be zero.
T : array_like (1D or 2D), optional
The time steps at which the input is defined and at which the
output is desired. The default is 101 evenly spaced points on
the interval [0,10.0].
X0 : array_like (1D), optional
The initial condition of the state vector. If `X0` is not
given, the initial conditions are assumed to be 0.
kwargs : dict
Additional keyword arguments are passed on to the function
`odeint`. See the notes below for more details.
Returns
-------
T : 1D ndarray
The time values for the output.
yout : ndarray
The response of the system.
xout : ndarray
The time-evolution of the state-vector.
Notes
-----
This function uses `scipy.integrate.odeint` to solve the
system's differential equations. Additional keyword arguments
given to `lsim2` are passed on to `odeint`. See the documentation
for `scipy.integrate.odeint` for the full list of arguments.
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
if X0 is None:
X0 = zeros(sys.B.shape[0], sys.A.dtype)
if T is None:
# XXX T should really be a required argument, but U was
# changed from a required positional argument to a keyword,
# and T is after U in the argument list. So we either: change
# the API and move T in front of U; check here for T being
# None and raise an exception; or assign a default value to T
# here. This code implements the latter.
T = linspace(0, 10.0, 101)
T = atleast_1d(T)
if len(T.shape) != 1:
raise ValueError("T must be a rank-1 array.")
if U is not None:
U = atleast_1d(U)
if len(U.shape) == 1:
U = U.reshape(-1, 1)
sU = U.shape
if sU[0] != len(T):
raise ValueError("U must have the same number of rows "
"as elements in T.")
if sU[1] != sys.inputs:
raise ValueError("The number of inputs in U (%d) is not "
"compatible with the number of system "
"inputs (%d)" % (sU[1], sys.inputs))
# Create a callable that uses linear interpolation to
# calculate the input at any time.
ufunc = interpolate.interp1d(T, U, kind='linear',
axis=0, bounds_error=False)
def fprime(x, t, sys, ufunc):
"""The vector field of the linear system."""
return dot(sys.A, x) + squeeze(dot(sys.B, nan_to_num(ufunc([t]))))
xout = integrate.odeint(fprime, X0, T, args=(sys, ufunc), **kwargs)
yout = dot(sys.C, transpose(xout)) + dot(sys.D, transpose(U))
else:
def fprime(x, t, sys):
"""The vector field of the linear system."""
return dot(sys.A, x)
xout = integrate.odeint(fprime, X0, T, args=(sys,), **kwargs)
yout = dot(sys.C, transpose(xout))
return T, squeeze(transpose(yout)), xout
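Since `lsim2` is a thin wrapper around `odeint`, the same simulation can be sketched by hand for a first-order lag (an illustration of the approach, not the function's exact internals):

```python
import numpy as np
from scipy.integrate import odeint

# First-order lag x' = -x + u with step input u = 1, x(0) = 0,
# mirroring what lsim2 does internally via odeint.
def fprime(x, t):
    return -x + 1.0

T = np.linspace(0, 5, 101)
x = odeint(fprime, [0.0], T)[:, 0]
# Analytic step response of 1/(s + 1) is 1 - exp(-t)
print(x[-1])
```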
def _cast_to_array_dtype(in1, in2):
"""Cast an array to the dtype of another array, avoiding ComplexWarning.
That warning can be raised when casting complex to real.
"""
if numpy.issubdtype(in2.dtype, numpy.floating):
# dtype to cast to is not complex, so use .real
in1 = in1.real.astype(in2.dtype)
else:
in1 = in1.astype(in2.dtype)
return in1
def lsim(system, U, T, X0=None, interp=True):
"""
Simulate output of a continuous-time linear system.
Parameters
----------
system : an instance of the LTI class or a tuple describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2: (num, den)
* 3: (zeros, poles, gain)
* 4: (A, B, C, D)
U : array_like
An input array describing the input at each time `T`
(interpolation is assumed between given times). If there are
multiple inputs, then each column of the rank-2 array
represents an input. If U = 0 or None, a zero input is used.
T : array_like
The time steps at which the input is defined and at which the
output is desired. Must be nonnegative, increasing, and equally spaced.
X0 : array_like, optional
The initial conditions on the state vector (zero by default).
interp : bool, optional
Whether to use linear (True, the default) or zero-order-hold (False)
interpolation for the input array.
Returns
-------
T : 1D ndarray
Time values for the output.
yout : 1D ndarray
System response.
xout : ndarray
Time evolution of the state vector.
Examples
--------
Simulate a double integrator y'' = u, with a constant input u = 1
>>> from scipy import signal
>>> system = signal.lti([[0., 1.], [0., 0.]], [[0.], [1.]], [[1., 0.]], 0.)
>>> t = np.linspace(0, 5)
>>> u = np.ones_like(t)
>>> tout, y, x = signal.lsim(system, u, t)
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, y)
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
T = atleast_1d(T)
if len(T.shape) != 1:
raise ValueError("T must be a rank-1 array.")
A, B, C, D = map(np.asarray, (sys.A, sys.B, sys.C, sys.D))
n_states = A.shape[0]
n_inputs = B.shape[1]
n_steps = T.size
if X0 is None:
X0 = zeros(n_states, sys.A.dtype)
xout = zeros((n_steps, n_states), sys.A.dtype)
if T[0] == 0:
xout[0] = X0
elif T[0] > 0:
# step forward to initial time, with zero input
xout[0] = dot(X0, linalg.expm(transpose(A) * T[0]))
else:
raise ValueError("Initial time must be nonnegative")
no_input = (U is None
or (isinstance(U, (int, float)) and U == 0.)
or not np.any(U))
if n_steps == 1:
yout = squeeze(dot(xout, transpose(C)))
if not no_input:
yout += squeeze(dot(U, transpose(D)))
return T, squeeze(yout), squeeze(xout)
dt = T[1] - T[0]
if not np.allclose((T[1:] - T[:-1]) / dt, 1.0):
warnings.warn("Non-uniform timesteps are deprecated. Results may be "
"slow and/or inaccurate.", DeprecationWarning)
return lsim2(system, U, T, X0)
if no_input:
# Zero input: just use matrix exponential
# take transpose because state is a row vector
expAT_dt = linalg.expm(transpose(A) * dt)
for i in xrange(1, n_steps):
xout[i] = dot(xout[i-1], expAT_dt)
yout = squeeze(dot(xout, transpose(C)))
return T, squeeze(yout), squeeze(xout)
# Nonzero input
U = atleast_1d(U)
if U.ndim == 1:
U = U[:, np.newaxis]
if U.shape[0] != n_steps:
raise ValueError("U must have the same number of rows "
"as elements in T.")
if U.shape[1] != n_inputs:
raise ValueError("System does not define that many inputs.")
if not interp:
# Zero-order hold
# Algorithm: to integrate from time 0 to time dt, we solve
# xdot = A x + B u, x(0) = x0
# udot = 0, u(0) = u0.
#
# Solution is
# [ x(dt) ] [ A*dt B*dt ] [ x0 ]
# [ u(dt) ] = exp [ 0 0 ] [ u0 ]
M = np.vstack([np.hstack([A * dt, B * dt]),
np.zeros((n_inputs, n_states + n_inputs))])
# transpose everything because the state and input are row vectors
expMT = linalg.expm(transpose(M))
Ad = expMT[:n_states, :n_states]
Bd = expMT[n_states:, :n_states]
for i in xrange(1, n_steps):
xout[i] = dot(xout[i-1], Ad) + dot(U[i-1], Bd)
else:
# Linear interpolation between steps
# Algorithm: to integrate from time 0 to time dt, with linear
# interpolation between inputs u(0) = u0 and u(dt) = u1, we solve
# xdot = A x + B u, x(0) = x0
# udot = (u1 - u0) / dt, u(0) = u0.
#
# Solution is
# [ x(dt) ] [ A*dt B*dt 0 ] [ x0 ]
# [ u(dt) ] = exp [ 0 0 I ] [ u0 ]
# [u1 - u0] [ 0 0 0 ] [u1 - u0]
M = np.vstack([np.hstack([A * dt, B * dt,
np.zeros((n_states, n_inputs))]),
np.hstack([np.zeros((n_inputs, n_states + n_inputs)),
np.identity(n_inputs)]),
np.zeros((n_inputs, n_states + 2 * n_inputs))])
expMT = linalg.expm(transpose(M))
Ad = expMT[:n_states, :n_states]
Bd1 = expMT[n_states+n_inputs:, :n_states]
Bd0 = expMT[n_states:n_states + n_inputs, :n_states] - Bd1
for i in xrange(1, n_steps):
xout[i] = (dot(xout[i-1], Ad) + dot(U[i-1], Bd0) + dot(U[i], Bd1))
yout = (squeeze(dot(xout, transpose(C))) + squeeze(dot(U, transpose(D))))
return T, squeeze(yout), squeeze(xout)
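The zero-order-hold branch above can be sanity-checked on a scalar system, where the augmented matrix exponential has a closed form (Ad = exp(a*dt), Bd = b*(exp(a*dt) - 1)/a):

```python
import numpy as np
from scipy.linalg import expm

# Scalar system x' = a*x + b*u discretized with step dt via the
# augmented-matrix trick used in the zero-order-hold branch above.
a, b, dt = -2.0, 3.0, 0.1
M = np.array([[a * dt, b * dt],
              [0.0, 0.0]])
eM = expm(M)
Ad, Bd = eM[0, 0], eM[0, 1]
# Closed forms: Ad = exp(a*dt), Bd = b*(exp(a*dt) - 1)/a
print(Ad, Bd)
```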
def _default_response_times(A, n):
"""Compute a reasonable set of time samples for the response time.
This function is used by `impulse`, `impulse2`, `step` and `step2`
to compute the response time when the `T` argument to the function
is None.
Parameters
----------
A : ndarray
The system matrix, which is square.
n : int
The number of time samples to generate.
Returns
-------
t : ndarray
The 1-D array of length `n` of time samples at which the response
is to be computed.
"""
# Create a reasonable time interval.
# TODO: This could use some more work.
# For example, what is expected when the system is unstable?
vals = linalg.eigvals(A)
r = min(abs(real(vals)))
if r == 0.0:
r = 1.0
tc = 1.0 / r
t = linspace(0.0, 7 * tc, n)
return t
def impulse(system, X0=None, T=None, N=None):
"""Impulse response of continuous-time system.
Parameters
----------
system : an instance of the LTI class or a tuple of array_like
describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
X0 : array_like, optional
Initial state-vector. Defaults to zero.
T : array_like, optional
Time points. Computed if not given.
N : int, optional
The number of time points to compute (if `T` is not given).
Returns
-------
T : ndarray
A 1-D array of time points.
yout : ndarray
A 1-D array containing the impulse response of the system (except for
singularities at zero).
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
if X0 is None:
X = squeeze(sys.B)
else:
X = squeeze(sys.B + X0)
if N is None:
N = 100
if T is None:
T = _default_response_times(sys.A, N)
else:
T = asarray(T)
_, h, _ = lsim(sys, 0., T, X, interp=False)
return T, h
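For example, the impulse response of H(s) = 1/(s + 1) is exp(-t), which `impulse` reproduces to machine precision on its default time grid:

```python
import numpy as np
from scipy import signal

# Impulse response of H(s) = 1/(s + 1) is exp(-t)
T, h = signal.impulse(([1.0], [1.0, 1.0]))
print(h[0], h[-1])
```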
def impulse2(system, X0=None, T=None, N=None, **kwargs):
"""
Impulse response of a single-input, continuous-time linear system.
Parameters
----------
system : an instance of the LTI class or a tuple of array_like
describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
X0 : 1-D array_like, optional
The initial condition of the state vector. Default: 0 (the
zero vector).
T : 1-D array_like, optional
The time steps at which the input is defined and at which the
output is desired. If `T` is not given, the function will
generate a set of time samples automatically.
N : int, optional
Number of time points to compute. Default: 100.
kwargs : various types
Additional keyword arguments are passed on to the function
`scipy.signal.lsim2`, which in turn passes them on to
`scipy.integrate.odeint`; see the latter's documentation for
information about these arguments.
Returns
-------
T : ndarray
The time values for the output.
yout : ndarray
The output response of the system.
See Also
--------
impulse, lsim2, integrate.odeint
Notes
-----
The solution is generated by calling `scipy.signal.lsim2`, which uses
the differential equation solver `scipy.integrate.odeint`.
.. versionadded:: 0.8.0
Examples
--------
Second order system with a repeated root: x''(t) + 2*x'(t) + x(t) = u(t)
>>> from scipy import signal
>>> system = ([1.0], [1.0, 2.0, 1.0])
>>> t, y = signal.impulse2(system)
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, y)
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
B = sys.B
if B.shape[-1] != 1:
raise ValueError("impulse2() requires a single-input system.")
B = B.squeeze()
if X0 is None:
X0 = zeros_like(B)
if N is None:
N = 100
if T is None:
T = _default_response_times(sys.A, N)
# Move the impulse in the input to the initial conditions, and then
# solve using lsim2().
ic = B + X0
Tr, Yr, Xr = lsim2(sys, T=T, X0=ic, **kwargs)
return Tr, Yr
def step(system, X0=None, T=None, N=None):
"""Step response of continuous-time system.
Parameters
----------
system : an instance of the LTI class or a tuple of array_like
describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
X0 : array_like, optional
Initial state-vector (default is zero).
T : array_like, optional
Time points (computed if not given).
N : int, optional
Number of time points to compute if `T` is not given.
Returns
-------
T : 1D ndarray
Output time points.
yout : 1D ndarray
Step response of system.
See Also
--------
scipy.signal.step2
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
if N is None:
N = 100
if T is None:
T = _default_response_times(sys.A, N)
else:
T = asarray(T)
U = ones(T.shape, sys.A.dtype)
vals = lsim(sys, U, T, X0=X0, interp=False)
return vals[0], vals[1]
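Analogously, the step response of H(s) = 1/(s + 1) is 1 - exp(-t), which the zero-order hold reproduces exactly for a constant input:

```python
import numpy as np
from scipy import signal

# Step response of H(s) = 1/(s + 1) is 1 - exp(-t)
T, y = signal.step(([1.0], [1.0, 1.0]))
print(y[0], y[-1])
```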
def step2(system, X0=None, T=None, N=None, **kwargs):
"""Step response of continuous-time system.
This function is functionally the same as `scipy.signal.step`, but
it uses the function `scipy.signal.lsim2` to compute the step
response.
Parameters
----------
system : an instance of the LTI class or a tuple of array_like
describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
X0 : array_like, optional
Initial state-vector (default is zero).
T : array_like, optional
Time points (computed if not given).
N : int, optional
Number of time points to compute if `T` is not given.
kwargs : various types
Additional keyword arguments are passed on to the function
`scipy.signal.lsim2`, which in turn passes them on to
`scipy.integrate.odeint`. See the documentation for
`scipy.integrate.odeint` for information about these arguments.
Returns
-------
T : 1D ndarray
Output time points.
yout : 1D ndarray
Step response of system.
See Also
--------
scipy.signal.step
Notes
-----
.. versionadded:: 0.8.0
"""
if isinstance(system, lti):
sys = system.to_ss()
else:
sys = lti(*system).to_ss()
if N is None:
N = 100
if T is None:
T = _default_response_times(sys.A, N)
else:
T = asarray(T)
U = ones(T.shape, sys.A.dtype)
vals = lsim2(sys, U, T, X0=X0, **kwargs)
return vals[0], vals[1]
def bode(system, w=None, n=100):
"""
Calculate Bode magnitude and phase data of a continuous-time system.
Parameters
----------
system : an instance of the LTI class or a tuple describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
w : array_like, optional
Array of frequencies (in rad/s). Magnitude and phase data are
calculated for every value in this array. If not given, a reasonable
set will be calculated.
n : int, optional
Number of frequency points to compute if `w` is not given. The `n`
frequencies are logarithmically spaced in an interval chosen to
include the influence of the poles and zeros of the system.
Returns
-------
w : 1D ndarray
Frequency array [rad/s]
mag : 1D ndarray
Magnitude array [dB]
phase : 1D ndarray
Phase array [deg]
Notes
-----
.. versionadded:: 0.11.0
Examples
--------
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> s1 = signal.lti([1], [1, 1])
>>> w, mag, phase = signal.bode(s1)
>>> plt.figure()
>>> plt.semilogx(w, mag) # Bode magnitude plot
>>> plt.figure()
>>> plt.semilogx(w, phase) # Bode phase plot
>>> plt.show()
"""
w, y = freqresp(system, w=w, n=n)
mag = 20.0 * numpy.log10(abs(y))
phase = numpy.unwrap(numpy.arctan2(y.imag, y.real)) * 180.0 / numpy.pi
return w, mag, phase
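At the corner frequency of a first-order low-pass, the magnitude should read about -3 dB and the phase -45 degrees, which makes a handy spot check:

```python
import numpy as np
from scipy import signal

# First-order low-pass 1/(s + 1): at the corner frequency w = 1 rad/s
# the magnitude is 20*log10(1/sqrt(2)) ~ -3.01 dB, phase is -45 degrees.
w, mag, phase = signal.bode(([1.0], [1.0, 1.0]), w=[1.0])
print(mag[0], phase[0])
```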
def freqresp(system, w=None, n=10000):
"""Calculate the frequency response of a continuous-time system.
Parameters
----------
system : an instance of the LTI class or a tuple describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2 (num, den)
* 3 (zeros, poles, gain)
* 4 (A, B, C, D)
w : array_like, optional
Array of frequencies (in rad/s). Magnitude and phase data are
calculated for every value in this array. If not given, a reasonable
set will be calculated.
n : int, optional
Number of frequency points to compute if `w` is not given. The `n`
frequencies are logarithmically spaced in an interval chosen to
include the influence of the poles and zeros of the system.
Returns
-------
w : 1D ndarray
Frequency array [rad/s]
H : 1D ndarray
Array of complex magnitude values
Examples
--------
Generate the Nyquist plot of a transfer function:
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> s1 = signal.lti([], [1, 1, 1], [5])  # transfer function: H(s) = 5 / (s-1)^3
>>> w, H = signal.freqresp(s1)
>>> plt.figure()
>>> plt.plot(H.real, H.imag, "b")
>>> plt.plot(H.real, -H.imag, "r")
>>> plt.show()
"""
if isinstance(system, lti):
sys = system.to_tf()
else:
sys = lti(*system).to_tf()
if sys.inputs != 1 or sys.outputs != 1:
raise ValueError("freqresp() requires a SISO (single input, single "
"output) system.")
if w is not None:
worN = w
else:
worN = n
# In the call to freqs(), sys.num.ravel() is used because there are
# cases where sys.num is a 2-D array with a single row.
w, h = freqs(sys.num.ravel(), sys.den, worN=worN)
return w, h
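A one-point spot check: H(s) = 1/(s + 1) evaluated at s = 1j is 1/(1 + 1j) = 0.5 - 0.5j:

```python
import numpy as np
from scipy import signal

# Evaluate H(s) = 1/(s + 1) at w = 1 rad/s, i.e. s = 1j
w, H = signal.freqresp(([1.0], [1.0, 1.0]), w=[1.0])
print(H[0])
```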
# This class will be used by place_poles to return its results
# see http://code.activestate.com/recipes/52308/
class Bunch:
def __init__(self, **kwds):
self.__dict__.update(kwds)
def _valid_inputs(A, B, poles, method, rtol, maxiter):
"""
Check that the poles come in complex conjugate pairs.
Check that the shapes of A, B and poles are compatible.
Check that the chosen method is compatible with the provided poles.
Return the update method to use and the ordered poles.
"""
poles = np.asarray(poles)
if poles.ndim > 1:
raise ValueError("Poles must be a 1D array like.")
# Will raise ValueError if poles do not come in complex conjugate pairs
poles = _order_complex_poles(poles)
if A.ndim > 2:
raise ValueError("A must be a 2D array/matrix.")
if B.ndim > 2:
raise ValueError("B must be a 2D array/matrix")
if A.shape[0] != A.shape[1]:
raise ValueError("A must be square")
if len(poles) > A.shape[0]:
raise ValueError("maximum number of poles is %d but you asked for %d" %
(A.shape[0], len(poles)))
if len(poles) < A.shape[0]:
raise ValueError("number of poles is %d but you should provide %d" %
(len(poles), A.shape[0]))
r = np.linalg.matrix_rank(B)
for p in poles:
if sum(p == poles) > r:
raise ValueError("at least one of the requested poles is repeated "
"more than rank(B) times")
# Choose update method
update_loop = _YT_loop
if method not in ('KNV0','YT'):
raise ValueError("The method keyword must be one of 'YT' or 'KNV0'")
if method == "KNV0":
update_loop = _KNV0_loop
if not all(np.isreal(poles)):
raise ValueError("Complex poles are not supported by KNV0")
if maxiter < 1:
raise ValueError("maxiter must be at least equal to 1")
# We do not check rtol <= 0 as the user can use a negative rtol to
# force maxiter iterations
if rtol > 1:
raise ValueError("rtol can not be greater than 1")
return update_loop, poles
def _order_complex_poles(poles):
"""
Check that we have complex conjugate pairs and reorder P according to YT,
i.e. real_poles, complex_i, conjugate complex_i, ...
The lexicographic sort on the complex poles is added to help the user
compare sets of poles.
"""
ordered_poles = np.sort(poles[np.isreal(poles)])
im_poles = []
for p in np.sort(poles[np.imag(poles) < 0]):
if np.conj(p) in poles:
im_poles.extend((p, np.conj(p)))
ordered_poles = np.hstack((ordered_poles, im_poles))
if poles.shape[0] != len(ordered_poles):
raise ValueError("Complex poles must come with their conjugates")
return ordered_poles
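The ordering rule can be replicated standalone (NumPy only; this mirrors the private helper above rather than calling it):

```python
import numpy as np

def order_complex_poles(poles):
    # Real poles sorted first, then each complex pair (p, conj(p))
    # with the negative-imaginary member leading, mirroring
    # _order_complex_poles above.
    poles = np.asarray(poles)
    ordered = np.sort(poles[np.isreal(poles)])
    im = []
    for p in np.sort(poles[np.imag(poles) < 0]):
        if np.conj(p) in poles:
            im.extend((p, np.conj(p)))
    return np.hstack((ordered, im))

print(order_complex_poles([1 + 1j, -1, 1 - 1j, -2]))
```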
def _KNV0(B, ker_pole, transfer_matrix, j, poles):
"""
Algorithm "KNV0" from Kautsky et al., "Robust pole assignment in
linear state feedback", International Journal of Control, 1985,
vol. 41, pp. 1129-1155.
http://la.epfl.ch/files/content/sites/la/files/
users/105941/public/KautskyNicholsDooren
"""
# Remove xj from the basis
transfer_matrix_not_j = np.delete(transfer_matrix, j, axis=1)
# If we QR this matrix in full mode Q=Q0|Q1
# then Q1 will be a single column orthogonal to
# Q0, which is what we are looking for!
# After merge of gh-4249 great speed improvements could be achieved
# using QR updates instead of full QR in the line below
# To debug with numpy qr uncomment the line below
# Q, R = np.linalg.qr(transfer_matrix_not_j, mode="complete")
Q, R = s_qr(transfer_matrix_not_j, mode="full")
mat_ker_pj = np.dot(ker_pole[j], ker_pole[j].T)
yj = np.dot(mat_ker_pj, Q[:, -1])
# If Q[:, -1] is "almost" orthogonal to ker_pole[j], its
# projection into ker_pole[j] will yield a vector
# close to 0. As we are looking for a vector in ker_pole[j],
# simply stick with transfer_matrix[:, j] (unless someone provides
# a better choice?)
if not np.allclose(yj, 0):
xj = yj/np.linalg.norm(yj)
transfer_matrix[:, j] = xj
# KNV does not support complex poles, using YT technique the two lines
# below seem to work 9 out of 10 times but it is not reliable enough:
# transfer_matrix[:, j]=real(xj)
# transfer_matrix[:, j+1]=imag(xj)
# Add this at the beginning of this function if you wish to test
# complex support:
# if ~np.isreal(P[j]) and (j>=B.shape[0]-1 or P[j]!=np.conj(P[j+1])):
# return
# Problems arise when imag(xj) => 0; I have no idea how to fix this
def _YT_real(ker_pole, Q, transfer_matrix, i, j):
"""
Applies algorithm from YT section 6.1 page 19 related to real pairs
"""
# step 1 page 19
u = Q[:, -2, np.newaxis]
v = Q[:, -1, np.newaxis]
# step 2 page 19
m = np.dot(np.dot(ker_pole[i].T, np.dot(u, v.T) -
np.dot(v, u.T)), ker_pole[j])
# step 3 page 19
um, sm, vm = np.linalg.svd(m)
# mu1, mu2: first two columns of U => first two rows of U.T
mu1, mu2 = um.T[:2, :, np.newaxis]
# vm is V.T; with numpy we want the first two rows of V.T
nu1, nu2 = vm[:2, :, np.newaxis]
# what follows is a rough python translation of the formulas
# in section 6.2 page 20 (step 4)
transfer_matrix_j_mo_transfer_matrix_j = np.vstack((
transfer_matrix[:, i, np.newaxis],
transfer_matrix[:, j, np.newaxis]))
if not np.allclose(sm[0], sm[1]):
ker_pole_imo_mu1 = np.dot(ker_pole[i], mu1)
ker_pole_i_nu1 = np.dot(ker_pole[j], nu1)
ker_pole_mu_nu = np.vstack((ker_pole_imo_mu1, ker_pole_i_nu1))
else:
ker_pole_ij = np.vstack((
np.hstack((ker_pole[i],
np.zeros(ker_pole[i].shape))),
np.hstack((np.zeros(ker_pole[j].shape),
ker_pole[j]))
))
mu_nu_matrix = np.vstack(
(np.hstack((mu1, mu2)), np.hstack((nu1, nu2)))
)
ker_pole_mu_nu = np.dot(ker_pole_ij, mu_nu_matrix)
transfer_matrix_ij = np.dot(np.dot(ker_pole_mu_nu, ker_pole_mu_nu.T),
transfer_matrix_j_mo_transfer_matrix_j)
if not np.allclose(transfer_matrix_ij, 0):
transfer_matrix_ij = (np.sqrt(2)*transfer_matrix_ij /
np.linalg.norm(transfer_matrix_ij))
transfer_matrix[:, i] = transfer_matrix_ij[
:transfer_matrix[:, i].shape[0], 0
]
transfer_matrix[:, j] = transfer_matrix_ij[
transfer_matrix[:, i].shape[0]:, 0
]
else:
# As in _KNV0, if transfer_matrix_j_mo_transfer_matrix_j is orthogonal
# to Vect{ker_pole_mu_nu}, assign transfer_matrix_i/transfer_matrix_j
# to ker_pole_mu_nu and iterate. As we are looking for a vector in
# Vect{ker_pole_mu_nu} (see section 6.1 page 19) this might help
# (that's a guess, not a claim!)
transfer_matrix[:, i] = ker_pole_mu_nu[
:transfer_matrix[:, i].shape[0], 0
]
transfer_matrix[:, j] = ker_pole_mu_nu[
transfer_matrix[:, i].shape[0]:, 0
]
def _YT_complex(ker_pole, Q, transfer_matrix, i, j):
"""
Applies algorithm from YT section 6.2 page 20 related to complex pairs
"""
# step 1 page 20
ur = np.sqrt(2)*Q[:, -2, np.newaxis]
ui = np.sqrt(2)*Q[:, -1, np.newaxis]
u = ur + 1j*ui
# step 2 page 20
ker_pole_ij = ker_pole[i]
m = np.dot(np.dot(np.conj(ker_pole_ij.T), np.dot(u, np.conj(u).T) -
np.dot(np.conj(u), u.T)), ker_pole_ij)
# step 3 page 20
e_val, e_vec = np.linalg.eig(m)
# sort eigenvalues according to their modulus
e_val_idx = np.argsort(np.abs(e_val))
mu1 = e_vec[:, e_val_idx[-1], np.newaxis]
mu2 = e_vec[:, e_val_idx[-2], np.newaxis]
# what follows is a rough python translation of the formulas
# in section 6.2 page 20 (step 4)
# remember transfer_matrix_i has been split as
# transfer_matrix[i]=real(transfer_matrix_i) and
# transfer_matrix[j]=imag(transfer_matrix_i)
transfer_matrix_j_mo_transfer_matrix_j = (
transfer_matrix[:, i, np.newaxis] +
1j*transfer_matrix[:, j, np.newaxis]
)
if not np.allclose(np.abs(e_val[e_val_idx[-1]]),
np.abs(e_val[e_val_idx[-2]])):
ker_pole_mu = np.dot(ker_pole_ij, mu1)
else:
mu1_mu2_matrix = np.hstack((mu1, mu2))
ker_pole_mu = np.dot(ker_pole_ij, mu1_mu2_matrix)
transfer_matrix_i_j = np.dot(np.dot(ker_pole_mu, np.conj(ker_pole_mu.T)),
transfer_matrix_j_mo_transfer_matrix_j)
if not np.allclose(transfer_matrix_i_j, 0):
transfer_matrix_i_j = (transfer_matrix_i_j /
np.linalg.norm(transfer_matrix_i_j))
transfer_matrix[:, i] = np.real(transfer_matrix_i_j[:, 0])
transfer_matrix[:, j] = np.imag(transfer_matrix_i_j[:, 0])
else:
# same idea as in YT_real
transfer_matrix[:, i] = np.real(ker_pole_mu[:, 0])
transfer_matrix[:, j] = np.imag(ker_pole_mu[:, 0])
def _YT_loop(ker_pole, transfer_matrix, poles, B, maxiter, rtol):
"""
Algorithm "YT" from Tits and Yang, "Globally Convergent
Algorithms for Robust Pole Assignment by State Feedback"
http://drum.lib.umd.edu/handle/1903/5598
The poles P have to be sorted according to section 6.2 page 20
"""
# The IEEE edition of the YT paper gives useful information on the
# optimal update order for the real poles in order to minimize the number
# of times we have to loop over all poles, see page 1442
nb_real = poles[np.isreal(poles)].shape[0]
# hnb => Half Nb Real
hnb = nb_real // 2
# Stick to the indices in the paper and then subtract one to get numpy
# array indices; it is a bit easier to link the code to the paper this
# way, even if it is not very clean. The paper is unclear about what
# should be done when there is only one real pole => using KNV0 on this
# real pole seems to work
if nb_real > 0:
# update the biggest real pole with the smallest one
update_order = [[nb_real], [1]]
else:
update_order = [[],[]]
r_comp = np.arange(nb_real+1, len(poles)+1, 2)
# step 1.a
r_p = np.arange(1, hnb+nb_real % 2)
update_order[0].extend(2*r_p)
update_order[1].extend(2*r_p+1)
# step 1.b
update_order[0].extend(r_comp)
update_order[1].extend(r_comp+1)
# step 1.c
r_p = np.arange(1, hnb+1)
update_order[0].extend(2*r_p-1)
update_order[1].extend(2*r_p)
# step 1.d
if hnb == 0 and np.isreal(poles[0]):
update_order[0].append(1)
update_order[1].append(1)
update_order[0].extend(r_comp)
update_order[1].extend(r_comp+1)
# step 2.a
r_j = np.arange(2, hnb+nb_real % 2)
for j in r_j:
for i in range(1, hnb+1):
update_order[0].append(i)
update_order[1].append(i+j)
# step 2.b
if hnb == 0 and np.isreal(poles[0]):
update_order[0].append(1)
update_order[1].append(1)
update_order[0].extend(r_comp)
update_order[1].extend(r_comp+1)
# step 2.c
r_j = np.arange(2, hnb+nb_real % 2)
for j in r_j:
for i in range(hnb+1, nb_real+1):
idx_1 = i+j
if idx_1 > nb_real:
idx_1 = i+j-nb_real
update_order[0].append(i)
update_order[1].append(idx_1)
# step 2.d
if hnb == 0 and np.isreal(poles[0]):
update_order[0].append(1)
update_order[1].append(1)
update_order[0].extend(r_comp)
update_order[1].extend(r_comp+1)
# step 3.a
for i in range(1, hnb+1):
update_order[0].append(i)
update_order[1].append(i+hnb)
# step 3.b
if hnb == 0 and np.isreal(poles[0]):
update_order[0].append(1)
update_order[1].append(1)
update_order[0].extend(r_comp)
update_order[1].extend(r_comp+1)
update_order = np.array(update_order).T-1
stop = False
nb_try = 0
while nb_try < maxiter and not stop:
det_transfer_matrixb = np.abs(np.linalg.det(transfer_matrix))
for i, j in update_order:
if i == j:
assert i == 0, "i!=0 for KNV call in YT"
assert np.isreal(poles[i]), "calling KNV on a complex pole"
_KNV0(B, ker_pole, transfer_matrix, i, poles)
else:
transfer_matrix_not_i_j = np.delete(transfer_matrix, (i, j),
axis=1)
# after merge of gh-4249 great speed improvements could be
# achieved using QR updates instead of full QR in the line below
                # to debug with numpy qr uncomment the line below
                # Q, _ = np.linalg.qr(transfer_matrix_not_i_j, mode="complete")
Q, _ = s_qr(transfer_matrix_not_i_j, mode="full")
if np.isreal(poles[i]):
assert np.isreal(poles[j]), "mixing real and complex " + \
"in YT_real" + str(poles)
_YT_real(ker_pole, Q, transfer_matrix, i, j)
else:
assert ~np.isreal(poles[i]), "mixing real and complex " + \
"in YT_real" + str(poles)
_YT_complex(ker_pole, Q, transfer_matrix, i, j)
det_transfer_matrix = np.max((np.sqrt(np.spacing(1)),
np.abs(np.linalg.det(transfer_matrix))))
cur_rtol = np.abs(
(det_transfer_matrix -
det_transfer_matrixb) /
det_transfer_matrix)
if cur_rtol < rtol and det_transfer_matrix > np.sqrt(np.spacing(1)):
# Convergence test from YT page 21
stop = True
nb_try += 1
return stop, cur_rtol, nb_try
def _KNV0_loop(ker_pole, transfer_matrix, poles, B, maxiter, rtol):
"""
Loop over all poles one by one and apply KNV method 0 algorithm
"""
# This method is useful only because we need to be able to call
# _KNV0 from YT without looping over all poles, otherwise it would
# have been fine to mix _KNV0_loop and _KNV0 in a single function
stop = False
nb_try = 0
while nb_try < maxiter and not stop:
det_transfer_matrixb = np.abs(np.linalg.det(transfer_matrix))
for j in range(B.shape[0]):
_KNV0(B, ker_pole, transfer_matrix, j, poles)
det_transfer_matrix = np.max((np.sqrt(np.spacing(1)),
np.abs(np.linalg.det(transfer_matrix))))
cur_rtol = np.abs((det_transfer_matrix - det_transfer_matrixb) /
det_transfer_matrix)
if cur_rtol < rtol and det_transfer_matrix > np.sqrt(np.spacing(1)):
# Convergence test from YT page 21
stop = True
nb_try += 1
return stop, cur_rtol, nb_try
def place_poles(A, B, poles, method="YT", rtol=1e-3, maxiter=30):
"""
    Compute K such that eigenvalues(A - dot(B, K)) = poles.
    K is the gain matrix such that the plant described by the linear system
    ``AX + BU`` will have its closed-loop poles, i.e. the eigenvalues of
    ``A - B*K``, as close as possible to those requested in poles.
SISO, MISO and MIMO systems are supported.
Parameters
----------
A, B : ndarray
State-space representation of linear system ``AX + BU``.
poles : array_like
Desired real poles and/or complex conjugates poles.
Complex poles are only supported with ``method="YT"`` (default).
    method : {'YT', 'KNV0'}, optional
Which method to choose to find the gain matrix K. One of:
- 'YT': Yang Tits
- 'KNV0': Kautsky, Nichols, Van Dooren update method 0
See References and Notes for details on the algorithms.
    rtol : float, optional
After each iteration the determinant of the eigenvectors of
``A - B*K`` is compared to its previous value, when the relative
error between these two values becomes lower than `rtol` the algorithm
stops. Default is 1e-3.
    maxiter : int, optional
Maximum number of iterations to compute the gain matrix.
Default is 30.
Returns
-------
full_state_feedback : Bunch object
full_state_feedback is composed of:
        gain_matrix : 1-D ndarray
            The closed-loop matrix K such that the eigenvalues of ``A-BK``
            are as close as possible to the requested poles.
computed_poles : 1-D ndarray
The poles corresponding to ``A-BK`` sorted as first the real
            poles in increasing order, then the complex conjugates in
lexicographic order.
requested_poles : 1-D ndarray
The poles the algorithm was asked to place sorted as above,
they may differ from what was achieved.
X : 2-D ndarray
            The transfer matrix such that ``X * diag(poles) = (A - B*K)*X``
(see Notes)
rtol : float
The relative tolerance achieved on ``det(X)`` (see Notes).
            `rtol` will be NaN if it is possible to solve the system
            ``diag(poles) = (A - B*K)`` directly, or 0 when the optimization
            algorithms can't do anything, i.e. when ``B.shape[1] == 1``.
nb_iter : int
The number of iterations performed before converging.
            `nb_iter` will be NaN if it is possible to solve the system
            ``diag(poles) = (A - B*K)`` directly, or 0 when the optimization
            algorithms can't do anything, i.e. when ``B.shape[1] == 1``.
Notes
-----
    The Tits and Yang (YT) paper [2]_ is an update of the original Kautsky et
    al. (KNV) paper [1]_. KNV relies on rank-1 updates to find the transfer
    matrix X such that ``X * diag(poles) = (A - B*K)*X``, whereas YT uses
    rank-2 updates. This yields on average more robust solutions (see [2]_
    pp. 21-22); furthermore, the YT algorithm supports complex poles whereas
    KNV does not in its original version. Only update method 0 proposed by
    KNV has been implemented here, hence the name ``'KNV0'``.
    KNV extended to complex poles is used in Matlab's ``place`` function, and
    YT is distributed under a non-free licence by Slicot under the name
    ``robpole``. It is unclear and undocumented how KNV0 has been extended to
    complex poles (Tits and Yang claim on page 14 of their paper that their
    method cannot be used to extend KNV to complex poles), therefore only YT
    supports them in this implementation.
    As the solution to the problem of pole placement is not unique for MIMO
    systems, both methods start with a tentative transfer matrix which is
    altered in various ways to increase its determinant. Both methods have
    been proven to converge to a stable solution; however, depending on the
    way the initial transfer matrix is chosen, they will converge to
    different solutions, and therefore there is absolutely no guarantee that
    using ``'KNV0'`` will yield results similar to Matlab's or any other
    implementation of these algorithms.
Using the default method ``'YT'`` should be fine in most cases; ``'KNV0'``
is only provided because it is needed by ``'YT'`` in some specific cases.
Furthermore ``'YT'`` gives on average more robust results than ``'KNV0'``
when ``abs(det(X))`` is used as a robustness indicator.
[2]_ is available as a technical report on the following URL:
http://drum.lib.umd.edu/handle/1903/5598
References
----------
.. [1] J. Kautsky, N.K. Nichols and P. van Dooren, "Robust pole assignment
in linear state feedback", International Journal of Control, Vol. 41
pp. 1129-1155, 1985.
.. [2] A.L. Tits and Y. Yang, "Globally convergent algorithms for robust
pole assignment by state feedback, IEEE Transactions on Automatic
Control, Vol. 41, pp. 1432-1452, 1996.
Examples
--------
A simple example demonstrating real pole placement using both KNV and YT
algorithms. This is example number 1 from section 4 of the reference KNV
publication ([1]_):
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> A = np.array([[ 1.380, -0.2077, 6.715, -5.676 ],
... [-0.5814, -4.290, 0, 0.6750 ],
... [ 1.067, 4.273, -6.654, 5.893 ],
... [ 0.0480, 4.273, 1.343, -2.104 ]])
>>> B = np.array([[ 0, 5.679 ],
... [ 1.136, 1.136 ],
... [ 0, 0, ],
... [-3.146, 0 ]])
>>> P = np.array([-0.2, -0.5, -5.0566, -8.6659])
Now compute K with KNV method 0, with the default YT method and with the YT
method while forcing 100 iterations of the algorithm and print some results
after each call.
>>> fsf1 = signal.place_poles(A, B, P, method='KNV0')
>>> fsf1.gain_matrix
array([[ 0.20071427, -0.96665799, 0.24066128, -0.10279785],
[ 0.50587268, 0.57779091, 0.51795763, -0.41991442]])
>>> fsf2 = signal.place_poles(A, B, P) # uses YT method
>>> fsf2.computed_poles
array([-8.6659, -5.0566, -0.5 , -0.2 ])
>>> fsf3 = signal.place_poles(A, B, P, rtol=-1, maxiter=100)
>>> fsf3.X
array([[ 0.52072442+0.j, -0.08409372+0.j, -0.56847937+0.j, 0.74823657+0.j],
[-0.04977751+0.j, -0.80872954+0.j, 0.13566234+0.j, -0.29322906+0.j],
[-0.82266932+0.j, -0.19168026+0.j, -0.56348322+0.j, -0.43815060+0.j],
[ 0.22267347+0.j, 0.54967577+0.j, -0.58387806+0.j, -0.40271926+0.j]])
    The absolute value of the determinant of X is a good indicator to check
    the robustness of the results; both ``'KNV0'`` and ``'YT'`` aim at
    maximizing it. Below is a comparison of the robustness of the results
    above:
>>> abs(np.linalg.det(fsf1.X)) < abs(np.linalg.det(fsf2.X))
True
>>> abs(np.linalg.det(fsf2.X)) < abs(np.linalg.det(fsf3.X))
True
Now a simple example for complex poles:
>>> A = np.array([[ 0, 7/3., 0, 0 ],
... [ 0, 0, 0, 7/9. ],
... [ 0, 0, 0, 0 ],
... [ 0, 0, 0, 0 ]])
>>> B = np.array([[ 0, 0 ],
... [ 0, 0 ],
... [ 1, 0 ],
... [ 0, 1 ]])
>>> P = np.array([-3, -1, -2-1j, -2+1j]) / 3.
>>> fsf = signal.place_poles(A, B, P, method='YT')
We can plot the desired and computed poles in the complex plane:
>>> t = np.linspace(0, 2*np.pi, 401)
>>> plt.plot(np.cos(t), np.sin(t), 'k--') # unit circle
>>> plt.plot(fsf.requested_poles.real, fsf.requested_poles.imag,
... 'wo', label='Desired')
>>> plt.plot(fsf.computed_poles.real, fsf.computed_poles.imag, 'bx',
... label='Placed')
>>> plt.grid()
>>> plt.axis('image')
>>> plt.axis([-1.1, 1.1, -1.1, 1.1])
>>> plt.legend(bbox_to_anchor=(1.05, 1), loc=2, numpoints=1)
"""
    # Move away all the input checking; it only adds noise to the code
update_loop, poles = _valid_inputs(A, B, poles, method, rtol, maxiter)
# The current value of the relative tolerance we achieved
cur_rtol = 0
# The number of iterations needed before converging
nb_iter = 0
# Step A: QR decomposition of B page 1132 KN
# to debug with numpy qr uncomment the line below
# u, z = np.linalg.qr(B, mode="complete")
u, z = s_qr(B, mode="full")
rankB = np.linalg.matrix_rank(B)
u0 = u[:, :rankB]
u1 = u[:, rankB:]
z = z[:rankB, :]
# If we can use the identity matrix as X the solution is obvious
if B.shape[0] == rankB:
# if B is square and full rank there is only one solution
# such as (A+BK)=inv(X)*diag(P)*X with X=eye(A.shape[0])
# i.e K=inv(B)*(diag(P)-A)
# if B has as many lines as its rank (but not square) there are many
# solutions and we can choose one using least squares
# => use lstsq in both cases.
# In both cases the transfer matrix X will be eye(A.shape[0]) and I
# can hardly think of a better one so there is nothing to optimize
#
        # for complex poles we use the following trick
        #
        # |a -b| has for eigenvalues a+bi and a-bi
        # |b  a|
        #
        # |a+bi  0 | has the obvious eigenvalues a+bi and a-bi
        # | 0  a-bi|
        #
        # i.e. solving the first one in R gives the solution
        # for the second one in C
diag_poles = np.zeros(A.shape)
idx = 0
while idx < poles.shape[0]:
p = poles[idx]
diag_poles[idx, idx] = np.real(p)
if ~np.isreal(p):
diag_poles[idx, idx+1] = -np.imag(p)
diag_poles[idx+1, idx+1] = np.real(p)
diag_poles[idx+1, idx] = np.imag(p)
idx += 1 # skip next one
idx += 1
gain_matrix = np.linalg.lstsq(B, diag_poles-A)[0]
transfer_matrix = np.eye(A.shape[0])
cur_rtol = np.nan
nb_iter = np.nan
else:
        # step A (p1144 KNV) and beginning of step F: decompose
        # dot(U1.T, A-P[i]*I).T and build our set of transfer_matrix vectors
        # in the same loop
ker_pole = []
# flag to skip the conjugate of a complex pole
skip_conjugate = False
# select orthonormal base ker_pole for each Pole and vectors for
# transfer_matrix
for j in range(B.shape[0]):
if skip_conjugate:
skip_conjugate = False
continue
pole_space_j = np.dot(u1.T, A-poles[j]*np.eye(B.shape[0])).T
            # after QR, Q = Q0|Q1
            # only Q0 is used to reconstruct the QR'ed (dot(Q, R)) matrix.
            # Q1 is orthogonal to Q0 and will be multiplied by the zeros in
            # R when using mode "complete". In default mode Q1 and the zeros
            # in R are not computed
# To debug with numpy qr uncomment the line below
# Q, _ = np.linalg.qr(pole_space_j, mode="complete")
Q, _ = s_qr(pole_space_j, mode="full")
ker_pole_j = Q[:, pole_space_j.shape[1]:]
            # We want to select one vector in ker_pole_j to build the
            # transfer matrix; however, qr sometimes returns vectors with
            # zeros on the same row for each pole, and this yields very long
            # convergence times.
            # At other times it returns a set of vectors, one with zero
            # imaginary part and one (or several) with a nonzero imaginary
            # part. After trying many ways to select the best possible one
            # (e.g. ditch vectors with zero imaginary part for complex poles)
            # I ended up summing all vectors in ker_pole_j; this solves 100%
            # of the problems and is a valid choice for transfer_matrix.
            # This way, for complex poles we are sure to have a nonzero
            # imaginary part, and the problem of rows full of zeros in
            # transfer_matrix is solved too: when one vector from ker_pole_j
            # has a zero, the other one(s) (when ker_pole_j.shape[1] > 1)
            # for sure won't have a zero there.
transfer_matrix_j = np.sum(ker_pole_j, axis=1)[:, np.newaxis]
transfer_matrix_j = (transfer_matrix_j /
np.linalg.norm(transfer_matrix_j))
if ~np.isreal(poles[j]): # complex pole
transfer_matrix_j = np.hstack([np.real(transfer_matrix_j),
np.imag(transfer_matrix_j)])
ker_pole.extend([ker_pole_j, ker_pole_j])
# Skip next pole as it is the conjugate
skip_conjugate = True
else: # real pole, nothing to do
ker_pole.append(ker_pole_j)
if j == 0:
transfer_matrix = transfer_matrix_j
else:
transfer_matrix = np.hstack((transfer_matrix, transfer_matrix_j))
if rankB > 1: # otherwise there is nothing we can optimize
stop, cur_rtol, nb_iter = update_loop(ker_pole, transfer_matrix,
poles, B, maxiter, rtol)
if not stop and rtol > 0:
# if rtol<=0 the user has probably done that on purpose,
# don't annoy him
err_msg = (
"Convergence was not reached after maxiter iterations.\n"
"You asked for a relative tolerance of %f we got %f" %
(rtol, cur_rtol)
)
warnings.warn(err_msg)
# reconstruct transfer_matrix to match complex conjugate pairs,
# ie transfer_matrix_j/transfer_matrix_j+1 are
# Re(Complex_pole), Im(Complex_pole) now and will be Re-Im/Re+Im after
transfer_matrix = transfer_matrix.astype(complex)
idx = 0
while idx < poles.shape[0]-1:
if ~np.isreal(poles[idx]):
rel = transfer_matrix[:, idx].copy()
img = transfer_matrix[:, idx+1]
                # rel is a view referencing a column of transfer_matrix;
                # if we don't copy() it, it will change after the next line
                # and the line after will not yield the correct value
transfer_matrix[:, idx] = rel-1j*img
transfer_matrix[:, idx+1] = rel+1j*img
idx += 1 # skip next one
idx += 1
try:
m = np.linalg.solve(transfer_matrix.T, np.dot(np.diag(poles),
transfer_matrix.T)).T
gain_matrix = np.linalg.solve(z, np.dot(u0.T, m-A))
except np.linalg.LinAlgError:
raise ValueError("The poles you've chosen can't be placed. "
"Check the controllability matrix and try "
"another set of poles")
# Beware: Kautsky solves A+BK but the usual form is A-BK
gain_matrix = -gain_matrix
    # K still contains complex values with ~0j imaginary parts; discard them
gain_matrix = np.real(gain_matrix)
full_state_feedback = Bunch()
full_state_feedback.gain_matrix = gain_matrix
full_state_feedback.computed_poles = _order_complex_poles(
np.linalg.eig(A - np.dot(B, gain_matrix))[0]
)
full_state_feedback.requested_poles = poles
full_state_feedback.X = transfer_matrix
full_state_feedback.rtol = cur_rtol
full_state_feedback.nb_iter = nb_iter
return full_state_feedback
| [
"master@MacBook-Pro-admin.local"
] | master@MacBook-Pro-admin.local |
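The complex-pole trick in the comments of `place_poles` above embeds each conjugate pair a ± bi as the real 2×2 block ``[[a, -b], [b, a]]``. That identity is easy to verify numerically; a minimal sketch (the pole values are arbitrary examples, not taken from the code above):

```python
import numpy as np

# Real 2x2 embedding of the conjugate pair a +/- b*i, as used in the
# diag_poles construction of place_poles for complex poles.
a, b = -2.0, 1.0
block = np.array([[a, -b],
                  [b,  a]])

# Characteristic polynomial is (a - l)**2 + b**2, so the eigenvalues
# are exactly a + b*i and a - b*i.
eigvals = np.sort_complex(np.linalg.eigvals(block))
print(eigvals)  # [-2.-1.j -2.+1.j]
```

Solving the placement problem over the reals with these blocks therefore yields the solution for the conjugate pair over the complex numbers, which is why `diag_poles` can stay a real matrix.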
4282f0f0a003d44a67bb0595b6cc543a4271a345 | 4920b6c12dc2427036077d38ed8fa513130418a8 | /bipad_api/bipad_api/models/inline_response20053.py | a52d1c5a3839994bdacc9f2ef436923135b09853 | [] | no_license | laxmitimalsina/covid_dashboard | d51a43d3ba2ad8a9754f723383f6395c1dccdda5 | ccba8a3f5dd6dbd2b28e2479bda6e581eb23805f | refs/heads/master | 2023-05-29T15:07:32.524640 | 2021-05-03T11:15:43 | 2021-05-03T11:15:43 | 273,698,762 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,265 | py | # coding: utf-8
"""
BIPAD API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: v1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class InlineResponse20053(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'count': 'int',
'next': 'str',
'previous': 'str',
'results': 'list[ReleaseStatus]'
}
attribute_map = {
'count': 'count',
'next': 'next',
'previous': 'previous',
'results': 'results'
}
def __init__(self, count=None, next=None, previous=None, results=None): # noqa: E501
"""InlineResponse20053 - a model defined in Swagger""" # noqa: E501
self._count = None
self._next = None
self._previous = None
self._results = None
self.discriminator = None
self.count = count
if next is not None:
self.next = next
if previous is not None:
self.previous = previous
self.results = results
@property
def count(self):
"""Gets the count of this InlineResponse20053. # noqa: E501
:return: The count of this InlineResponse20053. # noqa: E501
:rtype: int
"""
return self._count
@count.setter
def count(self, count):
"""Sets the count of this InlineResponse20053.
:param count: The count of this InlineResponse20053. # noqa: E501
:type: int
"""
if count is None:
raise ValueError("Invalid value for `count`, must not be `None`") # noqa: E501
self._count = count
@property
def next(self):
"""Gets the next of this InlineResponse20053. # noqa: E501
:return: The next of this InlineResponse20053. # noqa: E501
:rtype: str
"""
return self._next
@next.setter
def next(self, next):
"""Sets the next of this InlineResponse20053.
:param next: The next of this InlineResponse20053. # noqa: E501
:type: str
"""
self._next = next
@property
def previous(self):
"""Gets the previous of this InlineResponse20053. # noqa: E501
:return: The previous of this InlineResponse20053. # noqa: E501
:rtype: str
"""
return self._previous
@previous.setter
def previous(self, previous):
"""Sets the previous of this InlineResponse20053.
:param previous: The previous of this InlineResponse20053. # noqa: E501
:type: str
"""
self._previous = previous
@property
def results(self):
"""Gets the results of this InlineResponse20053. # noqa: E501
:return: The results of this InlineResponse20053. # noqa: E501
:rtype: list[ReleaseStatus]
"""
return self._results
@results.setter
def results(self, results):
"""Sets the results of this InlineResponse20053.
:param results: The results of this InlineResponse20053. # noqa: E501
:type: list[ReleaseStatus]
"""
if results is None:
raise ValueError("Invalid value for `results`, must not be `None`") # noqa: E501
self._results = results
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(InlineResponse20053, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, InlineResponse20053):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"laxmitimalsina2017@gmail.com"
] | laxmitimalsina2017@gmail.com |
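The `to_dict` above follows the standard swagger-codegen serialization walk: iterate `swagger_types`, recurse into any value exposing `to_dict`, and map that recursion over lists. A condensed, dependency-free sketch of the same pattern (the `Pager`/`Item` names are illustrative stand-ins, not part of the generated API; the generated code also uses `six.iteritems` for Python 2 compatibility, which plain `dict` iteration replaces on Python 3):

```python
class Item(object):
    """Illustrative leaf model with its own to_dict."""
    swagger_types = {'name': 'str'}

    def __init__(self, name=None):
        self.name = name

    def to_dict(self):
        return {'name': self.name}


class Pager(object):
    """Minimal stand-in mirroring the swagger-codegen envelope pattern."""
    swagger_types = {'count': 'int', 'results': 'list[Item]'}

    def __init__(self, count=None, results=None):
        self.count = count
        self.results = results or []

    def to_dict(self):
        # Same walk as the generated code: recurse into to_dict, map lists.
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = [v.to_dict() if hasattr(v, 'to_dict') else v
                                for v in value]
            elif hasattr(value, 'to_dict'):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result


page = Pager(count=1, results=[Item(name='ok')])
print(page.to_dict())  # {'count': 1, 'results': [{'name': 'ok'}]}
```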
d5fef288e77bbef2f70188793ca31f94dd4b090c | d1fb8bb087564052674cb33ac6d75daca4ae586a | /Amazon 11月VO真题/1/1181. Diameter of Binary Tree.py | feaeae0a737d49417a450acd9d2dc1468f37fd6f | [] | no_license | YunsongZhang/lintcode-python | 7db4ca48430a05331e17f4b79d05da585b1611ca | ea6a0ff58170499c76e9569074cb77f6bcef447a | refs/heads/master | 2020-12-24T03:05:43.487532 | 2020-01-30T19:58:37 | 2020-01-30T19:58:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 638 | py | class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
class Solution:
"""
@param root: a root of binary tree
@return: return a integer
"""
def __init__(self):
self.diameter = 0
def diameterOfBinaryTree(self, root):
if not root:
return 0
self.dfs(root)
return self.diameter
def dfs(self, root):
if not root:
return 0
left = self.dfs(root.left)
right = self.dfs(root.right)
self.diameter = max(self.diameter, left + right)
return max(left, right) + 1
| [
"haixiang6123@gmail.com"
] | haixiang6123@gmail.com |
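A quick sanity check of the diameter solution; the classes are restated verbatim so the snippet runs standalone (the diameter here counts edges on the longest path between any two nodes):

```python
class TreeNode:
    def __init__(self, val):
        self.val = val
        self.left, self.right = None, None


class Solution:
    def __init__(self):
        self.diameter = 0

    def diameterOfBinaryTree(self, root):
        if not root:
            return 0
        self.dfs(root)
        return self.diameter

    def dfs(self, root):
        # Returns the height of `root`; updates the best left+right path.
        if not root:
            return 0
        left = self.dfs(root.left)
        right = self.dfs(root.right)
        self.diameter = max(self.diameter, left + right)
        return max(left, right) + 1


#       1
#      / \
#     2   3
#    / \
#   4   5        longest path 4-2-1-3 has 3 edges
root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
root.left.left, root.left.right = TreeNode(4), TreeNode(5)
print(Solution().diameterOfBinaryTree(root))  # 3
```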
d355b7ca1605808368bffb45037fb5f9c0de8c1b | 31f5c200fbaded1f3670b94042b9c47182a160ca | /ch17/q17.py | 9d33c61cbee9a9322dbd698c19e807c63a45bd91 | [] | no_license | AeroX2/advent-of-code-2020 | d86f15593ceea442515e2853003d3a1ec6527475 | e47c02e4d746ac88f105bf5a8c55dcd519f4afe8 | refs/heads/main | 2023-02-03T02:24:58.494613 | 2020-12-22T10:15:29 | 2020-12-22T10:15:29 | 317,873,883 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,610 | py | import sys
data = open(sys.argv[1]).read().strip()
data = [list(x) for x in data.split('\n')]
print(data)
cube = {}
for y,v in enumerate(data):
h = len(v)//2
for x,v2 in enumerate(v):
cube[(x-h,y-h,0)] = v2
print(cube)
width = len(data[0])
height = width
depth = 0
print('Dimensions')
print((width, height, depth))
def check_active(pos):
active_count = 0
for x in range(-1,2):
for y in range(-1,2):
for z in range(-1,2):
if (x == 0 and y == 0 and z == 0):
continue
new_pos = (pos[0]+x, pos[1]+y, pos[2]+z)
active_count += 1 if cube.get(new_pos, '.') == '#' else 0
return active_count
for i in range(6):
#for z in range(-depth,depth+1):
# print('z =',z)
# for y in range(-height,height+1):
# for x in range(-width,width+1):
# print(cube.get((x,y,z),'.'),end='')
# print()
width += 1
height += 1
depth += 1
modify_list = []
for x in range(-width,width+1):
for y in range(-height,height+1):
for z in range(-depth,depth+1):
is_active = cube.get((x,y,z), '.') == '#'
active_count = check_active((x,y,z))
if (is_active and not (active_count == 2 or active_count == 3)):
modify_list.append((x,y,z,'.'))
elif (not is_active and (active_count == 3)):
modify_list.append((x,y,z,'#'))
for x,y,z,v in modify_list:
cube[(x,y,z)] = v
print(len([x for x in cube.values() if x == '#']))
| [
"james@ridey.email"
] | james@ridey.email |
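The heart of the solver above is `check_active`, a 26-neighbour count over a sparse dict keyed by ``(x, y, z)``. The same rule in isolation, exercised on a toy grid rather than the puzzle input:

```python
from itertools import product

def active_neighbours(cube, pos):
    """Count active ('#') cells among the 26 neighbours of pos."""
    x, y, z = pos
    return sum(
        cube.get((x + dx, y + dy, z + dz), '.') == '#'
        for dx, dy, dz in product((-1, 0, 1), repeat=3)
        if (dx, dy, dz) != (0, 0, 0)
    )

# Toy grid: three active cells in a row on the z=0 plane.
cube = {(0, 0, 0): '#', (1, 0, 0): '#', (2, 0, 0): '#'}
print(active_neighbours(cube, (1, 0, 0)))  # 2 (both ends are adjacent)
```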
aceeca9c2d8787ad6a846e833d5d569f4584213e | 4a191e5aecd53c4cea28482a0179539eeb6cd74b | /comments/forms.py | 5ba7334316083549b5600318794c46b4b51310b5 | [] | no_license | jiangjingwei/blogproject | 631a2e8e2f72420cce45ddaf152174852376d831 | daf14e88092dc030a3ab0c295ee06fb6b2164372 | refs/heads/master | 2020-03-14T23:29:08.052253 | 2018-05-10T11:35:59 | 2018-05-10T11:35:59 | 131,846,149 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 191 | py | from django import forms
from comments.models import Comments
class CommentForm(forms.ModelForm):
class Meta:
model = Comments
fields = ['name', 'email', 'url', 'text']
| [
"270159429@qq.com"
] | 270159429@qq.com |
852adc191b890bbcb734581a6f26bd32495378c8 | ec21d4397a1939ac140c22eca12491c258ed6a92 | /Zope-2.9/lib/python/Testing/dispatcher.py | a309a4937442b12ebedc1492329a89a0f71071bb | [] | no_license | wpjunior/proled | dc9120eaa6067821c983b67836026602bbb3a211 | 1c81471295a831b0970085c44e66172a63c3a2b0 | refs/heads/master | 2016-08-08T11:59:09.748402 | 2012-04-17T07:37:43 | 2012-04-17T07:37:43 | 3,573,786 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,469 | py | ##############################################################################
#
# Copyright (c) 2002 Zope Corporation and Contributors. All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
# Dispatcher for usage inside Zope test environment
# Andreas Jung, andreas@digicool.com 03/24/2001
__version__ = '$Id: dispatcher.py 40222 2005-11-18 15:46:28Z andreasjung $'
import os,sys,re,string
import threading,time,commands,profile
class Dispatcher:
"""
a multi-purpose thread dispatcher
"""
def __init__(self,func=''):
self.fp = sys.stderr
self.f_startup = []
self.f_teardown = []
self.lastlog = ""
self.lock = threading.Lock()
self.func = func
self.profiling = 0
self.doc = getattr(self,self.func).__doc__
def setlog(self,fp):
self.fp = fp
def log(self,s):
if s==self.lastlog: return
self.fp.write(s)
self.fp.flush()
self.lastlog=s
def logn(self,s):
if s==self.lastlog: return
self.fp.write(s + '\n')
self.fp.flush()
self.lastlog=s
    def profiling_on(self):
        self.profiling = 1
    def profiling_off(self):
        self.profiling = 0
def dispatcher(self,name='', *params):
""" dispatcher for threads
The dispatcher expects one or several tupels:
(functionname, number of threads to start , args, keyword args)
"""
self.mem_usage = [-1]
mem_watcher = threading.Thread(None,self.mem_watcher,name='memwatcher')
mem_watcher.start()
self.start_test = time.time()
self.name = name
self.th_data = {}
self.runtime = {}
self._threads = []
s2s=self.s2s
for func,numthreads,args,kw in params:
f = getattr(self,func)
for i in range(0,numthreads):
kw['t_func'] = func
th = threading.Thread(None,self.worker,name="TH_%s_%03d" % (func,i) ,args=args,kwargs=kw)
self._threads.append(th)
for th in self._threads: th.start()
while threading.activeCount() > 1: time.sleep(1)
self.logn('ID: %s ' % self.name)
self.logn('FUNC: %s ' % self.func)
self.logn('DOC: %s ' % self.doc)
self.logn('Args: %s' % params)
for th in self._threads:
self.logn( '%-30s ........................ %9.3f sec' % (th.getName(), self.runtime[th.getName()]) )
for k,v in self.th_data[th.getName()].items():
self.logn ('%-30s %-15s = %s' % (' ',k,v) )
self.logn("")
self.logn('Complete running time: %9.3f sec' % (time.time()-self.start_test) )
if len(self.mem_usage)>1: self.mem_usage.remove(-1)
self.logn( "Memory: start: %s, end: %s, low: %s, high: %s" % \
(s2s(self.mem_usage[0]),s2s(self.mem_usage[-1]),s2s(min(self.mem_usage)), s2s(max(self.mem_usage))))
self.logn('')
def worker(self,*args,**kw):
for func in self.f_startup: f = getattr(self,func)()
t_func = getattr(self,kw['t_func'])
del kw['t_func']
ts = time.time()
apply(t_func,args,kw)
te = time.time()
for func in self.f_teardown: getattr(self,func)()
def th_setup(self):
""" initalize thread with some environment data """
env = {'start': time.time()
}
return env
def th_teardown(self,env,**kw):
""" famous last actions of thread """
self.lock.acquire()
self.th_data[ threading.currentThread().getName() ] = kw
self.runtime [ threading.currentThread().getName() ] = time.time() - env['start']
self.lock.release()
def getmem(self):
""" try to determine the current memory usage """
if not sys.platform in ['linux2']: return None
cmd = '/bin/ps --no-headers -o pid,vsize --pid %s' % os.getpid()
outp = commands.getoutput(cmd)
pid,vsize = filter(lambda x: x!="" , string.split(outp," ") )
data = open("/proc/%d/statm" % os.getpid()).read()
fields = re.split(" ",data)
mem = string.atoi(fields[0]) * 4096
return mem
def mem_watcher(self):
""" thread for watching memory usage """
running = 1
while running ==1:
self.mem_usage.append( self.getmem() )
time.sleep(1)
if threading.activeCount() == 2: running = 0
def register_startup(self,func):
self.f_startup.append(func)
def register_teardown(self,func):
self.f_teardown.append(func)
def s2s(self,n):
import math
if n <1024.0: return "%8.3lf Bytes" % n
if n <1024.0*1024.0: return "%8.3lf KB" % (1.0*n/1024.0)
if n <1024.0*1024.0*1024.0: return "%8.3lf MB" % (1.0*n/1024.0/1024.0)
else: return n
if __name__=="__main__":
d=Dispatcher()
print d.getmem()
pass
| [
"root@cpro5106.publiccloud.com.br"
] | root@cpro5106.publiccloud.com.br |
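The `s2s` helper at the bottom of the dispatcher pretty-prints byte counts for the memory report. A Python 3 restatement of the same thresholds (the original module is Python 2, as the `print d.getmem()` in its `__main__` block shows):

```python
def s2s(n):
    """Format a byte count the way Dispatcher.s2s does."""
    if n < 1024.0:
        return "%8.3f Bytes" % n
    if n < 1024.0 * 1024.0:
        return "%8.3f KB" % (n / 1024.0)
    if n < 1024.0 * 1024.0 * 1024.0:
        return "%8.3f MB" % (n / 1024.0 / 1024.0)
    # Like the original, anything >= 1 GB is returned unformatted.
    return n

print(repr(s2s(512)))              # ' 512.000 Bytes'
print(repr(s2s(2048)))             # '   2.000 KB'
print(repr(s2s(4 * 1024 * 1024)))  # '   4.000 MB'
```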
d624d83e089f1a87248f66a4ca5b174c8b084b89 | 99a2d82d2a10c0af77731885f80307edcdc48535 | /maildir-cat | c6e018e5ef5f707ebd50c8ae0030ce0ce56bf5cd | [
"WTFPL",
"LicenseRef-scancode-public-domain"
] | permissive | mk-fg/fgtk | be60c102f6ad6cd0d0e364c3863c36a1902a15a3 | 90de180b0d4184f3040d85a4ff2ac38319a992af | refs/heads/master | 2023-09-06T08:41:33.852815 | 2023-08-19T07:25:54 | 2023-08-19T07:25:54 | 3,831,498 | 149 | 46 | null | 2017-08-15T19:23:59 | 2012-03-26T09:58:03 | Python | UTF-8 | Python | false | false | 4,883 | #!/usr/bin/env python3
import itertools as it, operator as op, functools as ft
import mailbox, email, email.header, email.charset, email.errors
import os, sys, re, pathlib as pl, collections as cs
def bytes_decode(b, enc, errors='strict'):
try: return b.decode(enc, errors)
except LookupError as err:
# Try to handle cp-850, unknown-8bit and such
if enc == 'unknown-8bit': enc_sub = 'utf-8'
else: enc_sub = enc.replace('-', '')
if enc_sub == enc: raise
try: return b.decode(enc_sub, errors)
except LookupError:
raise LookupError(enc, enc_sub) from None
def _mail_header_decode_part(line):
header = ''
for part, enc in email.header.decode_header(line):
if enc: part = bytes_decode(part, enc, 'replace')
if isinstance(part, bytes): part = part.decode('utf-8', 'replace')
# RFC2822#2.2.3 whitespace folding auto-adds spaces.
# But extra space can also be encoded in base64 or such,
# so this does not preserve exact number of encoded spaces.
if not header.endswith(' '): header += ' '
header += part.lstrip()
return header.strip()
def mail_header_decode(val):
res, header = list(), _mail_header_decode_part(val)
while True:
match = re.search('=\?[\w\d-]+(\*[\w\d-]+)?\?[QB]\?[^?]+\?=', header)
if not match:
res.append(header)
break
start, end = match.span(0)
match = header[start:end]
try: match = _mail_header_decode_part(match)
except email.errors.HeaderParseError: pass
res.extend([header[:start], match])
header = header[end:]
return ''.join(res)
def _mail_parse(msg):
headers = MailMsgHeaders((k.lower(), mail_header_decode(v)) for k,v in msg.items())
payload = ( msg.get_payload(decode=True)
if not msg.is_multipart() else list(map(_mail_parse, msg.get_payload())) )
if not headers.get('content-type'): headers['content-type'] = [msg.get_content_type()]
if headers.get_core('content-disposition') == 'attachment': payload = '<attachment scrubbed>'
elif isinstance(payload, bytes):
payload = bytes_decode(payload, msg.get_content_charset() or 'utf-8', 'replace')
return MailMsg(headers, payload)
def mail_parse(msg):
if isinstance(msg, (bytes, str)): msg = email.message_from_bytes(msg)
return _mail_parse(msg)
class MailMsg(cs.namedtuple('MailMsg', 'headers payload')):
@property
def all_parts(self):
return [self] if isinstance(self.payload, str)\
else sorted(it.chain.from_iterable(m.all_parts for m in self.payload), key=len)
def _text_ct_prio(self, part):
ct = part.headers.get('content-type')
if ct == 'text/plain': return 1
if ct.startswith('text/'): return 2
return 3
@property
def text(self):
return sorted(self.all_parts, key=self._text_ct_prio)[0].payload
class MailMsgHeaders(cs.UserDict):
def __init__(self, header_list):
super().__init__()
for k, v in header_list:
if k not in self: self[k] = list()
self[k].append(v)
def get(self, k, default=None, proc=op.itemgetter(0)):
hs = self.data.get(k)
if not hs: return default
return proc(hs)
def get_core(self, k, default=None):
return self.get(k, default, lambda hs: hs[0].split(';', 1)[0].strip())
def get_all(self, k, default=None):
return self.get(k, default, lambda x: x)
def dump_msg(pre, msg):
msg = mail_parse(msg)
header_list = 'from to subject date message-id reply-to sender'.split()
# header_list = sorted(msg.headers.keys())
for k in header_list:
for v in msg.headers.get_all(k, list()): print(f'{pre}{k.title()}: {v}')
print(pre)
for line in msg.text.strip().split('\n'): print(f'{pre}{line}')
def main(args=None):
import argparse
parser = argparse.ArgumentParser(
description='Tool to find all messages in the maildir, decode'
' MIME msg bodies and dump every line in these along with the filename'
' to stdout to run grep or any other search on them to find specific msg/file.')
parser.add_argument('maildir', nargs='*', default=['~/.maildir'],
help='Path to maildir(s) or individual msg file(s). Default: %(default)s.')
opts = parser.parse_args(sys.argv[1:] if args is None else args)
log_err = ft.partial(print, file=sys.stderr, flush=True)
for p in opts.maildir:
p_root_base = p = pl.Path(p)
p_root = p_root_base.expanduser().resolve()
if p_root.is_file():
			try: dump_msg(f'{p}: ', p_root.read_bytes())
			except email.errors.MessageParseError: log_err(f'Malformed msg file: {p}')
			continue
ps_root = str(p_root)
maildir = mailbox.Maildir(ps_root)
box_dirs = [maildir, *(maildir.get_folder(key) for key in maildir.list_folders())]
for box in box_dirs:
for key in box.keys():
ps = str((pl.Path(box._path) / box._lookup(key)).resolve())
assert ps.startswith(ps_root), [ps_root, ps]
p = p_root_base / ps[len(ps_root)+1:]
try: msg = box[key]
				except email.errors.MessageParseError: log_err(f'Malformed msg file: {p}')
else: dump_msg(f'{p}: ', msg)
if __name__ == '__main__': sys.exit(main())
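The dump tool above leans on the stdlib `email` package for header decoding and payload extraction; a minimal standalone sketch of the same two steps (the message bytes here are invented purely for illustration):

```python
import email
import email.header

# Tiny invented RFC 2822 message, for illustration only.
raw = (b"From: Alice <alice@example.com>\r\n"
       b"Subject: =?utf-8?b?aGVsbG8=?=\r\n"
       b"Content-Type: text/plain; charset=utf-8\r\n"
       b"\r\n"
       b"body text\r\n")

msg = email.message_from_bytes(raw)
# Decode the MIME-encoded Subject header, much like mail_header_decode() above.
subject = str(email.header.make_header(email.header.decode_header(msg['Subject'])))
# Decode the payload with the declared charset, as _mail_parse() does above.
body = msg.get_payload(decode=True).decode(msg.get_content_charset() or 'utf-8')
print(subject)       # hello
print(body.strip())  # body text
```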
| [
"mk.fraggod@gmail.com"
] | mk.fraggod@gmail.com | |
0a22320d5ad8c6a27fe4569472cbc5867d672629 | 07b249d8b26fc49f1268798b3bd6bdcfd0b86447 | /0x07-python-test_driven_development/testmod_.py | 49bd0c640293d84da2e7e2965fdca8e0dc1c19a2 | [] | no_license | leocjj/holbertonschool-higher_level_programming | 544d6c40632fbcf721b1f39d2453ba3d033007d6 | 50cf2308d2c9eeca8b25c01728815d91e0a9b784 | refs/heads/master | 2020-09-28T23:21:13.378060 | 2020-08-30T23:45:11 | 2020-08-30T23:45:11 | 226,889,413 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,398 | py | #!/usr/bin/python3
"""
This is the "example" module.
The example module supplies one function, factorial(). For example,
>>> factorial(6)
720
"""
def factorial(n):
"""Return the factorial of n, an exact integer >= 0.
>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]
>>> factorial(30)
265252859812191058636308480000000
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: n must be >= 0
Factorials of floats are OK, but the float must be an exact integer:
>>> factorial(30.1)
Traceback (most recent call last):
...
ValueError: n must be exact integer
>>> factorial(30.0)
265252859812191058636308480000000
It must also not be ridiculously large:
>>> factorial(1e100)
Traceback (most recent call last):
...
OverflowError: n too large
"""
import math
if not n >= 0:
raise ValueError("n must be >= 0")
if math.floor(n) != n:
raise ValueError("n must be exact integer")
if n+1 == n: # catch a value like 1e300
raise OverflowError("n too large")
result = 1
factor = 2
while factor <= n:
result *= factor
factor += 1
return result
if __name__ == "__main__":
#print(factorial(3))
import doctest
doctest.testmod()
    #doctest.testmod(verbose=True) #Parameter to force verbose output
| [
"leocj@hotmail.com"
] | leocj@hotmail.com |
5b8c78a5752cf5cdaa9f3f64037d7faeab7cad3f | 659d41f0c737dffc2a6ebd5e773a6513da32e5ba | /scripts_OLD/PulseSequences/tests/turn_on_auto.py | e2444cebfe26f3413c0cf7aa2ce2c234bd956621 | [] | no_license | HaeffnerLab/sqip | b3d4d570becb1022083ea01fea9472115a183ace | 5d18f167bd9a5344dcae3c13cc5a84213fb7c199 | refs/heads/master | 2020-05-21T23:11:10.448549 | 2019-11-21T02:00:58 | 2019-11-21T02:00:58 | 19,164,232 | 0 | 0 | null | 2019-11-04T04:39:37 | 2014-04-25T23:54:47 | Python | UTF-8 | Python | false | false | 226 | py |
def main():
import labrad
cxn = labrad.connect()
cxn.dac_server.reset_queue()
cxn.pulser.switch_auto('adv',True)
cxn.pulser.switch_auto('rst',True)
if __name__ == '__main__':
main() | [
"haeffnerlab@gmail.com"
] | haeffnerlab@gmail.com |
393d7ea4800d14b27bca6522be3500d2762cfe10 | 85a9ffeccb64f6159adbd164ff98edf4ac315e33 | /pysnmp-with-texts/Wellfleet-MPLS-MLM-MIB.py | 7cc3f54a6e31ffff8938d45ebeffda15c8aaebe6 | [
"LicenseRef-scancode-warranty-disclaimer",
"LicenseRef-scancode-proprietary-license",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] | permissive | agustinhenze/mibs.snmplabs.com | 5d7d5d4da84424c5f5a1ed2752f5043ae00019fb | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | refs/heads/master | 2020-12-26T12:41:41.132395 | 2019-08-16T15:51:41 | 2019-08-16T15:53:57 | 237,512,469 | 0 | 0 | Apache-2.0 | 2020-01-31T20:41:36 | 2020-01-31T20:41:35 | null | UTF-8 | Python | false | false | 42,622 | py | #
# PySNMP MIB module Wellfleet-MPLS-MLM-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/Wellfleet-MPLS-MLM-MIB
# Produced by pysmi-0.3.4 at Wed May 1 15:40:59 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ValueSizeConstraint, SingleValueConstraint, ConstraintsUnion, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ValueSizeConstraint", "SingleValueConstraint", "ConstraintsUnion", "ConstraintsIntersection")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
ModuleIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, ObjectIdentity, Unsigned32, IpAddress, Counter64, Gauge32, TimeTicks, NotificationType, MibIdentifier, Bits, Integer32, iso, Counter32 = mibBuilder.importSymbols("SNMPv2-SMI", "ModuleIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "ObjectIdentity", "Unsigned32", "IpAddress", "Counter64", "Gauge32", "TimeTicks", "NotificationType", "MibIdentifier", "Bits", "Integer32", "iso", "Counter32")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
wfMplsAtmGroup, = mibBuilder.importSymbols("Wellfleet-COMMON-MIB", "wfMplsAtmGroup")
wfMplsAtm = MibIdentifier((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1))
wfMplsAtmIfConfTable = MibTable((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1), )
if mibBuilder.loadTexts: wfMplsAtmIfConfTable.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfConfTable.setDescription('MPLS ATM interface configuration table - This tabulates the ATM interfaces within an mpls protocol group. All interfaces are indexed according to their line number. There is one ATM interface per MPLS LDP session, but there might be more LDP sessions per ATM interface.')
wfMplsAtmIfConfEntry = MibTableRow((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1, 1), ).setIndexNames((0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmIfConfLineNumber"))
if mibBuilder.loadTexts: wfMplsAtmIfConfEntry.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfConfEntry.setDescription('MPLS ATM interface configuration entries.')
wfMplsAtmIfCreate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("created", 1), ("deleted", 2))).clone('created')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmIfCreate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfCreate.setDescription('Create/Delete parameter. Default is created. Users modify this object in order to create/delete MPLS ATM interfaces')
wfMplsAtmIfAdminStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('enabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmIfAdminStatus.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfAdminStatus.setDescription('Enable/Disable parameter. Default is enabled.')
wfMplsAtmIfConfLineNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfConfLineNumber.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfConfLineNumber.setDescription('The line number of the driver line to which the interface belongs.')
wfMplsAtmIfDebugLogMask = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 16, 128, 255))).clone(namedValues=NamedValues(("none", 1), ("fsm", 16), ("other", 128), ("all", 255))).clone('none')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmIfDebugLogMask.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfDebugLogMask.setDescription("Log bits for debug messages. This values above can be used separately, or OR'd together for a combination of debug levels.")
wfMplsAtmIfStatusTable = MibTable((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2), )
if mibBuilder.loadTexts: wfMplsAtmIfStatusTable.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfStatusTable.setDescription('MPLS ATM interface status table - This tabulates the ATM interfaces within an mpls protocol group. All interfaces are indexed according to their line number. There is one ATM interface per MPLS LDP session, but there might be multiple LDP sessions per ATM interface.')
wfMplsAtmIfStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1), ).setIndexNames((0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmIfStatusLineNumber"))
if mibBuilder.loadTexts: wfMplsAtmIfStatusEntry.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfStatusEntry.setDescription('MPLS ATM interface status entries.')
wfMplsAtmIfOperStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("down", 1), ("init", 2), ("up", 3), ("cleanup", 4), ("notpresent", 5))).clone('notpresent')).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfOperStatus.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfOperStatus.setDescription('The current state of the MPLS ATM interface')
wfMplsAtmIfStatusLineNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfStatusLineNumber.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfStatusLineNumber.setDescription('The line number of the driver line to which the interface belongs.')
wfMplsAtmIfCircuit = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 1023))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfCircuit.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfCircuit.setDescription('The circuit number of the circuit to which the interface belongs.')
wfMplsAtmIfTotalSess = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfTotalSess.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfTotalSess.setDescription('The total number of LDPs up running.')
wfMplsAtmIfTotalVcs = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfTotalVcs.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfTotalVcs.setDescription('The total number of VCs either in use, or available on this interface.')
wfMplsAtmIfAllocVcs = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 2, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmIfAllocVcs.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmIfAllocVcs.setDescription('The number of VCs which are currently allocated for different LDP sessions on this interface. wfMplsAtmTotalVcs minus the value of this object results in knowing how many VCs are still available to allocate to new LDP sessions on this interface.')
wfMplsAtmSessConfTable = MibTable((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3), )
if mibBuilder.loadTexts: wfMplsAtmSessConfTable.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfTable.setDescription('This is the ATM related configuration table for every LDP session.')
wfMplsAtmSessConfEntry = MibTableRow((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1), ).setIndexNames((0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmSessConfLineNumber"), (0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmSessConfIndex"))
if mibBuilder.loadTexts: wfMplsAtmSessConfEntry.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfEntry.setDescription('Entry definition.')
wfMplsAtmSessDelete = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("create", 1), ("delete", 2))).clone('create')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDelete.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDelete.setDescription('Create/delete MIB instance parameter.')
wfMplsAtmSessAdminStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessAdminStatus.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessAdminStatus.setDescription('Specifies the desired administrative state of the VCL. The up and down states indicate that the traffic flow is enabled and disabled respectively for the VCL.')
wfMplsAtmSessConfLineNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessConfLineNumber.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfLineNumber.setDescription('Uniquely identifies the interface (port) that contains the appropriate management information. We use line number here.')
wfMplsAtmSessConfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessConfIndex.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfIndex.setDescription('LDP session index number.')
wfMplsAtmSessConfDefVclVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessConfDefVclVpi.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfDefVclVpi.setDescription('The VPI value of the default VC.')
wfMplsAtmSessConfDefVclVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535)).clone(32)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessConfDefVclVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfDefVclVci.setDescription('The VCI value of the default VC. It should not in the configured VC range for any LDP session.')
wfMplsAtmSessConfVcRangeVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeVpi.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeVpi.setDescription('The VPI value of the VCs in this LDP session.')
wfMplsAtmSessConfVcRangeMinVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535)).clone(33)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeMinVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeMinVci.setDescription('The minimum VCI value of the configured LDP session.')
wfMplsAtmSessConfVcRangeMaxVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535)).clone(65535)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeMaxVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessConfVcRangeMaxVci.setDescription('The maximum VCI value of the configured LDP session.')
wfMplsAtmSessDefVclXmtPeakCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 10), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtPeakCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtPeakCellRate.setDescription('Transmit (Forward) Peak Cell Rate in cells/second. This specifies the upper bound on the traffic that can be submitted on an ATM connection. 0 means best effort peak cell rate.')
wfMplsAtmSessDefVclXmtSustainableCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 11), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtSustainableCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtSustainableCellRate.setDescription("Transmit (Forward) Sustainable Cell Rate in cells/second. This specifies the upper bound on the conforming average rate of an ATM connection, where 'average rate' is the number of cells transmitted divided by the 'duration of the connection'. 0 means best effort sustainable cell rate.")
wfMplsAtmSessDefVclXmtBurstSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 12), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(40))).clone(namedValues=NamedValues(("default", 40))).clone('default')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtBurstSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtBurstSize.setDescription('Transmit (Forward) Burst Size in cells.')
wfMplsAtmSessDefVclXmtQosClass = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 13), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("class0", 1), ("class1", 2), ("class2", 3), ("class3", 4))).clone('class3')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtQosClass.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtQosClass.setDescription('Transmit (Forward) Quality of Service as specified in Appendix A, Section 4 of the ATM Forum UNI Specification, Version 3.0')
wfMplsAtmSessDefVclRcvPeakCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 14), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvPeakCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvPeakCellRate.setDescription('Receive (Backward) Peak Cell Rate in cells/second. This specifies the upper bound on the traffic that can be submitted on an ATM connection. 0 means best effort peak cell rate.')
wfMplsAtmSessDefVclRcvSustainableCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 15), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvSustainableCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvSustainableCellRate.setDescription("Receive (Backward) Sustainable Cell Rate in cells/second. This specifies the upper bound on the conforming average rate of an ATM connection, where 'average rate' is the number of cells transmitted divided by the 'duration of the connection'. 0 means best effort sustainable cell rate.")
wfMplsAtmSessDefVclRcvBurstSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 16), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(40))).clone(namedValues=NamedValues(("default", 40))).clone('default')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvBurstSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvBurstSize.setDescription('Receive (Backward) Burst Size in cells.')
wfMplsAtmSessDefVclRcvQosClass = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("class0", 1), ("class1", 2), ("class2", 3), ("class3", 4))).clone('class3')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvQosClass.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvQosClass.setDescription('Receive (Backward) Quality of Service as specified in Appendix A, Section 4 of the ATM Forum UNI Specification, Version 3.0')
wfMplsAtmSessDefVclAalType = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 18), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("type1", 1), ("type34", 2), ("type5", 3), ("other", 4), ("unknown", 5))).clone('type5')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalType.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalType.setDescription('The type of AAL used on the VCL.')
wfMplsAtmSessDefVclAalCpcsTransmitSduSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 19), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(4608)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalCpcsTransmitSduSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalCpcsTransmitSduSize.setDescription('The maximum AAL CPCS SDU size in octets that is supported on the transmit direction of this VCC.')
wfMplsAtmSessDefVclAalCpcsReceiveSduSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 20), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(4608)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalCpcsReceiveSduSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalCpcsReceiveSduSize.setDescription('The maximum AAL CPCS SDU size in octets that is supported on the receive direction of this VCC.')
wfMplsAtmSessDefVclAalEncapsType = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 21), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("unknown", 1), ("llcencaps", 2), ("null", 3), ("other", 4))).clone('llcencaps')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalEncapsType.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclAalEncapsType.setDescription('The type of data encapsulation used over both AAL3/4 and AAL5 SSCS layer. Currently, the only values supported are : ATM_VCLENCAPS_LLCENCAPS - RFC1483 ATM_VCLENCAPS_ROUTEDPROTO - NONE')
wfMplsAtmSessDefVclCongestionIndication = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 22), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("off", 1), ("on", 2))).clone('off')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclCongestionIndication.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclCongestionIndication.setDescription('The desired state of the Congestion Indication (CI) bit in the payload field of each ATM cell for this VCL.')
wfMplsAtmSessDefVclCellLossPriority = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 23), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("off", 1), ("on", 2))).clone('off')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclCellLossPriority.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclCellLossPriority.setDescription('The desired state of the Cell Loss Priority (CLP) bit in the ATM header of each cell for this VCL.')
wfMplsAtmSessDefVclXmtTagging = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 24), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("on", 1), ("off", 2))).clone('off')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtTagging.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclXmtTagging.setDescription('Tagging forward VC messages if peak/sustainable rates exceeded')
wfMplsAtmSessDefVclRcvTagging = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 3, 1, 25), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("on", 1), ("off", 2))).clone('off')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvTagging.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessDefVclRcvTagging.setDescription('Tagging backward VC messages if peak/sustainable rates exceeded')
wfMplsAtmSessStatusTable = MibTable((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4), )
if mibBuilder.loadTexts: wfMplsAtmSessStatusTable.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessStatusTable.setDescription('This is the ATM related status table for every LDP session.')
wfMplsAtmSessStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1), ).setIndexNames((0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmSessStatusLineNumber"), (0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmSessStatusIndex"))
if mibBuilder.loadTexts: wfMplsAtmSessStatusEntry.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessStatusEntry.setDescription('Entry definition.')
wfMplsAtmSessOperStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("down", 1), ("init", 2), ("up", 3), ("cleanup", 4), ("notpresent", 5))).clone('notpresent')).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessOperStatus.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessOperStatus.setDescription('The ATM related LDP session state.')
wfMplsAtmSessStatusLineNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessStatusLineNumber.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessStatusLineNumber.setDescription('Uniquely identifies the interface (port) that contains the appropriate management information. We use line number here.')
wfMplsAtmSessStatusIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessStatusIndex.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessStatusIndex.setDescription('LDP session index number.')
wfMplsAtmSessActualVcRangeVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeVpi.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeVpi.setDescription('The VPI value of the actually allowable VC range for this session. The maximum VPI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVpiBits.')
wfMplsAtmSessActualVcRangeMinVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeMinVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeMinVci.setDescription('The minimum VCI value of the actually allowable VC range for LDP session. The maximum VCI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVciBits.')
wfMplsAtmSessActualVcRangeMaxVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeMaxVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessActualVcRangeMaxVci.setDescription('The maximum VCI value of the actually allowable VC range for LDP session. The maximum VCI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVciBits.')
wfMplsAtmSessNegotiatedVcRangeVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(-1, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeVpi.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeVpi.setDescription('The VPI value of the VCs negotiated with LDP peer for this LDP session. The maximum VPI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVpiBits. -1 means it is not possible.')
wfMplsAtmSessNegotiatedVcRangeMinVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeMinVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeMinVci.setDescription('The minimum VCI value of VCs negotiated with LDP peer for this LDP session. The maximum VCI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVciBits.')
wfMplsAtmSessNegotiatedVcRangeMaxVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(32, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeMaxVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessNegotiatedVcRangeMaxVci.setDescription('The maximum VCI value of VCs negotiated with LDP peer for this LDP session. The maximum VCI value cannot exceed the value allowable by the wfAtmInterfaceMaxActiveVciBits.')
wfMplsAtmSessInboundInuseVcs = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessInboundInuseVcs.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessInboundInuseVcs.setDescription('number of VCs opened for inbound LSP.')
wfMplsAtmSessOutboundInuseVcs = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 4, 1, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmSessOutboundInuseVcs.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmSessOutboundInuseVcs.setDescription('number of VCs opened for outbound LSP.')
wfMplsAtmVclTable = MibTable((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5), )
if mibBuilder.loadTexts: wfMplsAtmVclTable.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclTable.setDescription('Read-only per VC information.')
wfMplsAtmVclEntry = MibTableRow((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1), ).setIndexNames((0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmVclLineNumber"), (0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmVclVpi"), (0, "Wellfleet-MPLS-MLM-MIB", "wfMplsAtmVclVci"))
if mibBuilder.loadTexts: wfMplsAtmVclEntry.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclEntry.setDescription('Entry definition.')
wfMplsAtmVclLineNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclLineNumber.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclLineNumber.setDescription('Uniquely identifies the interface (port) that contains the appropriate management information. We use line number here.')
wfMplsAtmVclVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclVpi.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclVpi.setDescription('The VPI value of the VCL. The maximum VPI value cannot exceed the value allowable by the wfMplsAtmInterfaceMaxActiveVpiBits.')
wfMplsAtmVclVci = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclVci.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclVci.setDescription('The VCI value of the VCL. The maximum VCI value cannot exceed the value allowable by the wfMplsAtmInterfaceMaxActiveVciBits.')
wfMplsAtmVclLdpIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclLdpIndex.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclLdpIndex.setDescription('Uniquely identifies the LDP session number in this interface.')
wfMplsAtmVclDirection = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("inbound", 1), ("outbound", 2), ("duplex", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclDirection.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclDirection.setDescription('The direction of LSP')
wfMplsAtmVclState = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("down", 1), ("init", 2), ("up", 3), ("cleanup", 4), ("notpresent", 5)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclState.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclState.setDescription('The VC state.')
wfMplsAtmVclType = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("default", 1), ("lsp", 2), ("unknown", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclType.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclType.setDescription('default VC or normal LSP VC.')
wfMplsAtmVclLastChange = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 8), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclLastChange.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclLastChange.setDescription("The value of MIBII's sysUpTime at the time this VCL entered its current operational state.")
wfMplsAtmVclXmtPeakCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(4716))).clone(namedValues=NamedValues(("default", 4716)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclXmtPeakCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclXmtPeakCellRate.setDescription('Transmit (Forward) Peak Cell Rate in cells/second. This specifies the upper bound on the traffic that can be submitted on an ATM connection.')
wfMplsAtmVclXmtSustainableCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(4716))).clone(namedValues=NamedValues(("default", 4716)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclXmtSustainableCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclXmtSustainableCellRate.setDescription("Transmit (Forward) Sustainable Cell Rate in cells/second. This specifies the upper bound on the conforming average rate of an ATM connection, where 'average rate' is the number of cells transmitted divided by the 'duration of the connection'.")
wfMplsAtmVclXmtBurstSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 11), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(40))).clone(namedValues=NamedValues(("default", 40)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclXmtBurstSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclXmtBurstSize.setDescription('Transmit (Forward) Burst Size in cells.')
wfMplsAtmVclXmtQosClass = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 12), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("class0", 1), ("class1", 2), ("class2", 3), ("class3", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclXmtQosClass.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclXmtQosClass.setDescription('Transmit (Forward) Quality of Service as specified in Appendix A, Section 4 of the ATM Forum UNI Specification, Version 3.0')
wfMplsAtmVclRcvPeakCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 13), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(4716))).clone(namedValues=NamedValues(("default", 4716)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclRcvPeakCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclRcvPeakCellRate.setDescription('Receive (Backward) Peak Cell Rate in cells/second. This specifies the upper bound on the traffic that can be submitted on an ATM connection.')
wfMplsAtmVclRcvSustainableCellRate = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 14), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(4716))).clone(namedValues=NamedValues(("default", 4716)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclRcvSustainableCellRate.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclRcvSustainableCellRate.setDescription("Receive (Backward) Sustainable Cell Rate in cells/second. This specifies the upper bound on the conforming average rate of an ATM connection, where 'average rate' is the number of cells transmitted divided by the 'duration of the connection'.")
wfMplsAtmVclRcvBurstSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 15), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(40))).clone(namedValues=NamedValues(("default", 40)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclRcvBurstSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclRcvBurstSize.setDescription('Receive (Backward) Burst Size in cells.')
wfMplsAtmVclRcvQosClass = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 16), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("class0", 1), ("class1", 2), ("class2", 3), ("class3", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclRcvQosClass.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclRcvQosClass.setDescription('Receive (Backward) Quality of Service as specified in Appendix A, Section 4 of the ATM Forum UNI Specification, Version 3.0')
wfMplsAtmVclAalType = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("type1", 1), ("type34", 2), ("type5", 3), ("other", 4), ("unknown", 5)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclAalType.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclAalType.setDescription('The type of AAL used on the VCL.')
wfMplsAtmVclAalCpcsTransmitSduSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 18), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclAalCpcsTransmitSduSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclAalCpcsTransmitSduSize.setDescription('The maximum AAL CPCS SDU size in octets that is supported on the transmit direction of this VCC.')
wfMplsAtmVclAalCpcsReceiveSduSize = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 19), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclAalCpcsReceiveSduSize.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclAalCpcsReceiveSduSize.setDescription('The maximum AAL CPCS SDU size in octets that is supported on the receive direction of this VCC.')
wfMplsAtmVclAalEncapsType = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 20), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("unknown", 1), ("llcencaps", 2), ("null", 3), ("other", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclAalEncapsType.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclAalEncapsType.setDescription('The type of data encapsulation used over both AAL3/4 and AAL5 SSCS layer. Currently, the only values supported are : ATM_VCLENCAPS_LLCENCAPS - RFC1483 ATM_VCLENCAPS_ROUTEDPROTO - NONE')
wfMplsAtmVclCongestionIndication = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 21), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("off", 1), ("on", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclCongestionIndication.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclCongestionIndication.setDescription('The desired state of the Congestion Indication (CI) bit in the payload field of each ATM cell for this VCL.')
wfMplsAtmVclCellLossPriority = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 22), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("off", 1), ("on", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclCellLossPriority.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclCellLossPriority.setDescription('The desired state of the Cell Loss Priority (CLP) bit in the ATM header of each cell for this VCL.')
wfMplsAtmVclXmtTagging = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 23), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("on", 1), ("off", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclXmtTagging.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclXmtTagging.setDescription('Tagging forward VC messages if peak/sustainable rates exceeded')
wfMplsAtmVclRcvTagging = MibTableColumn((1, 3, 6, 1, 4, 1, 18, 3, 5, 9, 16, 1, 5, 1, 24), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("on", 1), ("off", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: wfMplsAtmVclRcvTagging.setStatus('mandatory')
if mibBuilder.loadTexts: wfMplsAtmVclRcvTagging.setDescription('Tagging backward VC messages if peak/sustainable rates exceeded')
mibBuilder.exportSymbols("Wellfleet-MPLS-MLM-MIB", wfMplsAtmVclCellLossPriority=wfMplsAtmVclCellLossPriority, wfMplsAtmSessConfVcRangeMaxVci=wfMplsAtmSessConfVcRangeMaxVci, wfMplsAtmSessDelete=wfMplsAtmSessDelete, wfMplsAtmIfConfLineNumber=wfMplsAtmIfConfLineNumber, wfMplsAtmSessConfIndex=wfMplsAtmSessConfIndex, wfMplsAtmSessDefVclXmtPeakCellRate=wfMplsAtmSessDefVclXmtPeakCellRate, wfMplsAtmVclLastChange=wfMplsAtmVclLastChange, wfMplsAtmSessDefVclAalCpcsTransmitSduSize=wfMplsAtmSessDefVclAalCpcsTransmitSduSize, wfMplsAtmVclXmtTagging=wfMplsAtmVclXmtTagging, wfMplsAtmSessConfTable=wfMplsAtmSessConfTable, wfMplsAtmSessDefVclRcvSustainableCellRate=wfMplsAtmSessDefVclRcvSustainableCellRate, wfMplsAtmVclAalEncapsType=wfMplsAtmVclAalEncapsType, wfMplsAtmIfOperStatus=wfMplsAtmIfOperStatus, wfMplsAtmVclType=wfMplsAtmVclType, wfMplsAtmSessOperStatus=wfMplsAtmSessOperStatus, wfMplsAtmSessStatusIndex=wfMplsAtmSessStatusIndex, wfMplsAtmSessActualVcRangeVpi=wfMplsAtmSessActualVcRangeVpi, wfMplsAtmVclTable=wfMplsAtmVclTable, wfMplsAtmSessDefVclCongestionIndication=wfMplsAtmSessDefVclCongestionIndication, wfMplsAtmSessAdminStatus=wfMplsAtmSessAdminStatus, wfMplsAtmSessNegotiatedVcRangeMinVci=wfMplsAtmSessNegotiatedVcRangeMinVci, wfMplsAtmSessDefVclAalCpcsReceiveSduSize=wfMplsAtmSessDefVclAalCpcsReceiveSduSize, wfMplsAtmIfStatusLineNumber=wfMplsAtmIfStatusLineNumber, wfMplsAtmIfConfTable=wfMplsAtmIfConfTable, wfMplsAtmVclXmtSustainableCellRate=wfMplsAtmVclXmtSustainableCellRate, wfMplsAtmSessDefVclXmtBurstSize=wfMplsAtmSessDefVclXmtBurstSize, wfMplsAtmVclAalCpcsTransmitSduSize=wfMplsAtmVclAalCpcsTransmitSduSize, wfMplsAtmSessDefVclXmtTagging=wfMplsAtmSessDefVclXmtTagging, wfMplsAtmVclRcvTagging=wfMplsAtmVclRcvTagging, wfMplsAtmSessDefVclRcvQosClass=wfMplsAtmSessDefVclRcvQosClass, wfMplsAtmIfStatusEntry=wfMplsAtmIfStatusEntry, wfMplsAtmSessDefVclAalType=wfMplsAtmSessDefVclAalType, wfMplsAtmIfDebugLogMask=wfMplsAtmIfDebugLogMask, wfMplsAtmIfAllocVcs=wfMplsAtmIfAllocVcs, 
wfMplsAtmSessConfDefVclVci=wfMplsAtmSessConfDefVclVci, wfMplsAtmSessConfLineNumber=wfMplsAtmSessConfLineNumber, wfMplsAtmSessNegotiatedVcRangeMaxVci=wfMplsAtmSessNegotiatedVcRangeMaxVci, wfMplsAtmSessActualVcRangeMaxVci=wfMplsAtmSessActualVcRangeMaxVci, wfMplsAtmVclLdpIndex=wfMplsAtmVclLdpIndex, wfMplsAtmIfConfEntry=wfMplsAtmIfConfEntry, wfMplsAtmVclRcvPeakCellRate=wfMplsAtmVclRcvPeakCellRate, wfMplsAtm=wfMplsAtm, wfMplsAtmSessDefVclAalEncapsType=wfMplsAtmSessDefVclAalEncapsType, wfMplsAtmVclLineNumber=wfMplsAtmVclLineNumber, wfMplsAtmVclAalType=wfMplsAtmVclAalType, wfMplsAtmIfAdminStatus=wfMplsAtmIfAdminStatus, wfMplsAtmVclXmtPeakCellRate=wfMplsAtmVclXmtPeakCellRate, wfMplsAtmSessConfVcRangeMinVci=wfMplsAtmSessConfVcRangeMinVci, wfMplsAtmVclXmtBurstSize=wfMplsAtmVclXmtBurstSize, wfMplsAtmSessActualVcRangeMinVci=wfMplsAtmSessActualVcRangeMinVci, wfMplsAtmVclDirection=wfMplsAtmVclDirection, wfMplsAtmSessDefVclRcvPeakCellRate=wfMplsAtmSessDefVclRcvPeakCellRate, wfMplsAtmVclState=wfMplsAtmVclState, wfMplsAtmSessNegotiatedVcRangeVpi=wfMplsAtmSessNegotiatedVcRangeVpi, wfMplsAtmSessStatusEntry=wfMplsAtmSessStatusEntry, wfMplsAtmIfTotalSess=wfMplsAtmIfTotalSess, wfMplsAtmSessConfEntry=wfMplsAtmSessConfEntry, wfMplsAtmSessConfVcRangeVpi=wfMplsAtmSessConfVcRangeVpi, wfMplsAtmVclAalCpcsReceiveSduSize=wfMplsAtmVclAalCpcsReceiveSduSize, wfMplsAtmSessOutboundInuseVcs=wfMplsAtmSessOutboundInuseVcs, wfMplsAtmIfTotalVcs=wfMplsAtmIfTotalVcs, wfMplsAtmIfCircuit=wfMplsAtmIfCircuit, wfMplsAtmVclVpi=wfMplsAtmVclVpi, wfMplsAtmSessConfDefVclVpi=wfMplsAtmSessConfDefVclVpi, wfMplsAtmSessStatusLineNumber=wfMplsAtmSessStatusLineNumber, wfMplsAtmSessStatusTable=wfMplsAtmSessStatusTable, wfMplsAtmVclEntry=wfMplsAtmVclEntry, wfMplsAtmIfStatusTable=wfMplsAtmIfStatusTable, wfMplsAtmVclCongestionIndication=wfMplsAtmVclCongestionIndication, wfMplsAtmSessDefVclRcvBurstSize=wfMplsAtmSessDefVclRcvBurstSize, wfMplsAtmVclRcvSustainableCellRate=wfMplsAtmVclRcvSustainableCellRate, 
wfMplsAtmSessDefVclXmtQosClass=wfMplsAtmSessDefVclXmtQosClass, wfMplsAtmSessDefVclRcvTagging=wfMplsAtmSessDefVclRcvTagging, wfMplsAtmVclRcvQosClass=wfMplsAtmVclRcvQosClass, wfMplsAtmIfCreate=wfMplsAtmIfCreate, wfMplsAtmVclVci=wfMplsAtmVclVci, wfMplsAtmSessInboundInuseVcs=wfMplsAtmSessInboundInuseVcs, wfMplsAtmVclRcvBurstSize=wfMplsAtmVclRcvBurstSize, wfMplsAtmSessDefVclCellLossPriority=wfMplsAtmSessDefVclCellLossPriority, wfMplsAtmSessDefVclXmtSustainableCellRate=wfMplsAtmSessDefVclXmtSustainableCellRate, wfMplsAtmVclXmtQosClass=wfMplsAtmVclXmtQosClass)
| [
"dcwangmit01@gmail.com"
] | dcwangmit01@gmail.com |
cefeca51da9d665c2fa87a15a7a909aa2b76ceb4 | 71596c8aec5ea7eb44b0f86736bc5acdccd55ac1 | /Graphs/dfs_adv.py | 95d92150b5ca2fd58e2045eea1bb44e2b2db9ca6 | [] | no_license | karthikeyansa/Data_Structures_python | dbab61f67d1bc33995dd7ff86989aa56b6f11a5c | b64618a4cff2b1d29ce8c129cb1f8ec35dcddf6f | refs/heads/master | 2023-01-16T05:21:10.308318 | 2020-11-22T06:28:08 | 2020-11-22T06:28:08 | 264,231,273 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 360 | py | #undirected_cyclic_graph
def dfs(g, s):
    # Iterative DFS from source s over an adjacency-list graph g.
    # Nodes are marked visited when pushed, so each node enters the stack once.
    vis, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in g[u]:  # use a distinct name for neighbors instead of shadowing u
            if v not in vis:
                vis.add(v)
                stack.append(v)
    return vis
n,m = map(int,input().split())
g = {}
for i in range(n):
g[i+1] = []
for _ in range(m):
x,y = map(int,input().split())
g[x].append(y)
g[y].append(x)
print(dfs(g,1)) | [
"karthikeyansa39@gmail.com"
] | karthikeyansa39@gmail.com |
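For context, the `dfs` helper in the file above can be exercised on a small hand-built graph. This is an illustrative sketch only; the adjacency list below is made up and is not part of the original file, which instead reads the graph from stdin.

```python
# Mirror of the dfs helper from the file above (adjacency-list graph),
# applied to a small undirected cycle 1-2-3 plus an isolated node 4.
def dfs(g, s):
    # Iterative DFS; nodes are marked visited when pushed.
    vis, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in g[u]:
            if v not in vis:
                vis.add(v)
                stack.append(v)
    return vis

g = {1: [2, 3], 2: [1, 3], 3: [2, 1], 4: []}
print(dfs(g, 1))  # the connected component containing 1: {1, 2, 3}
print(dfs(g, 4))  # {4}
```

The return value is the set of nodes reachable from the start vertex, i.e. one connected component of the undirected graph.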
8a0cae9036743c46a8fba91fa5ef68a7cc72396c | 17c371020e9d5f163246092dc2ba405a4ec19900 | /posts/migrations/0001_initial.py | ab0b028560dcd5c70b2611f8b510221a5776e1d9 | [] | no_license | avs8/My-Blog | 2820386c8af8ceba448e45566c0cad01b832a2a6 | 636a48cf91d55c5688707295b0c0d78b47a17f7d | refs/heads/master | 2021-01-10T23:19:32.467922 | 2016-10-12T18:39:00 | 2016-10-12T18:39:00 | 70,621,951 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 734 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.9 on 2016-10-10 18:40
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=256)),
('content', models.TextField()),
('updated', models.DateTimeField(auto_now=True)),
('timestamp', models.DateTimeField(auto_now_add=True)),
],
),
]
| [
"ajitavsingh_8@yahoo.com"
] | ajitavsingh_8@yahoo.com |
b5d2e8ce15ead6d7ef987071845d4c21c1689de8 | 7a4ed01a40e8d79126b26f5e8fca43c8e61e78fd | /Geeky Shows/Advance Pyhton/203.Passing_Member_Of_One_Class_To_Another_Class[17].py | 503cf407b9664e539e7ad2bd8bdf02bdd184c17a | [] | no_license | satyam-seth-learnings/python_learning | 5a7f75bb613dcd7fedc31a1567a434039b9417f8 | 7e76c03e94f5c314dcf1bfae6f26b4a8a6e658da | refs/heads/main | 2023-08-25T14:08:11.423875 | 2021-10-09T13:00:49 | 2021-10-09T13:00:49 | 333,840,032 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 502 | py | # Passing Members Of One Class To Another Class
class Student:
# Constructor
def __init__(self,n,r):
self.name=n
self.roll=r
# Instance Method
def disp(self):
print('Student Name:',self.name)
print('Student Roll:',self.roll)
class User:
# Static Method
@staticmethod
def show(s):
print('User Name:',s.name)
print('User Roll:',s.roll)
s.disp()
# Creating Object Of Student Class
stu=Student('Satyam',101)
User.show(stu) | [
"satyam1998.1998@gmail.com"
] | satyam1998.1998@gmail.com |
2cad8786efb1b0659b1f3bf2c217c5e0e997bd99 | 70c532c46847329d09757455721f4dc15bc16a77 | /morsite/settings.py | 982da5827335efc59b68cc8dd085fa929c876566 | [] | no_license | yaronsamuel-zz/morsite | 31a8f8b25c76f33819bc4eb72ad23c1ca258b7f7 | 4a609bc8cfa49ab8798c1bb87c43cd224a635f1b | refs/heads/master | 2023-01-29T13:26:08.915327 | 2014-04-17T14:20:01 | 2014-04-17T14:20:01 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,551 | py | import os
# Django settings for morsite project.
LOCAL_DIR = r"c:\morsite"
IS_LOCAL = os.path.isdir(LOCAL_DIR)
if IS_LOCAL:
PROJECT_DIR = LOCAL_DIR
# BASE_URL = "http://127.0.0.1:8000/"
else:
PROJECT_DIR = r"/home/ordercak/public_html/sweetsamuel.co.il/"
# BASE_URL = "http://www.morsite.ordercakeinhaifa.com/"
def relToAbs(path):
return os.path.join(PROJECT_DIR, path).replace('\\','/')
def dec(st):
ret = ''
key = '\xab\x67\xa4\x5c\xbb' * 10
for i in xrange(len(st)):
ret += chr( ord(st[i]) ^ ord(key[i]) )
return ret
def assign(name , value):
attr_name = dec(name)
attr_value = dec(value)
globals()[attr_name] = attr_value
DEBUG = False
TEMPLATE_DEBUG = DEBUG
ADMINS = [
('Mor' , 'SamuelCakes@gmail.com') ,
# ('Your Name', 'your_emai@example.com'),
]
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': 'morsite.db', # Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': '',
'PASSWORD': '',
'HOST': '', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '', # Set to empty string for default.
}
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
ALLOWED_HOSTS = ['*']
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'Asia/Tel_Aviv'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'he'#'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
MEDIA_ROOT = relToAbs('media')
MEDIA_URL = '/media/'
STATIC_ROOT = relToAbs('static')
STATIC_URL = '/static/'
MY_STATIC_ROOT = relToAbs('static_files')
# Additional locations of static files
STATICFILES_DIRS = (
MY_STATIC_ROOT,
# Put strings here, like "/home/html/static" or "C:/www/django/static".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'f3oda#81rs%yu+*-bc%_5@*nmmf0!yiyw23d(!34awfexfc+j-'
# List of callables that know how to import templates from various sources.
if IS_LOCAL:
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
else:
TEMPLATE_LOADERS = (
('django.template.loaders.cached.Loader', (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
)),
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'morsite.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'morsite.wsgi.application'
TEMPLATE_DIRS = (
relToAbs('templates') ,
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
'grappelli',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
# 'django.contrib.admindocs',
'Prices' ,
'orderedmodel',
'django.contrib.comments',
'tagging',
'mptt',
'zinnia',
'menu' ,
'Gallery',
'contact_form',
'my_comment_app',
'tinymce',
)
COMMENTS_APP = 'my_comment_app'
#Zinnia stuff
TEMPLATE_CONTEXT_PROCESSORS = (
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.i18n',
'django.core.context_processors.request',
'django.core.context_processors.media',
'django.core.context_processors.static',
'zinnia.context_processors.version',
"django.core.context_processors.debug",
"django.contrib.messages.context_processors.messages",
) # Optional
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.JSONSerializer'
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
TINYMCE_DEFAULT_CONFIG = {
'theme_advanced_buttons1' : "save,newdocument,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,|,styleselect,formatselect,fontselect,fontsizeselect",
}
# assign('\xee*\xe5\x15\xf7\xf4/\xeb\x0f\xef\xf47\xe5\x0f\xe8\xfc(\xf6\x18' , '\x9aU\x97h\x8e\x9dP\xdd')
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'cakesnmore1010@gmail.com'
assign('\xee*\xe5\x15\xf7\xf4/\xeb\x0f\xef\xf47\xe5\x0f\xe8\xfc(\xf6\x18' , '\xd1\x00\xce2\xd5\xc7\x08\xd1-\xcc\xde\x11\xd32\xcf\xc1')
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# EMAIL_RECIPIAENTS_LIST = [EMAIL_HOST_USER ]
EMAIL_RECIPIAENTS_LIST = ['cakesnmore1010@gmail.com' , 'SamuelCakes@gmail.com'] | [
"samuel.yaron@gmail.com"
] | samuel.yaron@gmail.com |