| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|
49,715,583
|
I have this Python code solving the knapsack problem using dynamic programming.
This function returns the total value of the best subset, but I want it to return the elements of the best subset. Can anybody help me with this?
```
def knapSack(W, wt, val, n):
    K = [[0 for x in range(W + 1)] for x in range(n + 1)]
    # Build table K[][] in bottom-up manner
    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i - 1] <= w:
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]
    return K[n][W]

val = [40, 100, 120, 140]
wt = [1, 2, 3, 4]
W = 4
n = len(val)
print(knapSack(W, wt, val, n))
```
|
2018/04/08
|
[
"https://Stackoverflow.com/questions/49715583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238009/"
] |
What you can do is return the whole table `K` instead of only `K[n][W]`, and then backtrack through `K` like this:
```
elements = list()
dp = K  # the full table returned by knapSack
w = W
i = n
while i > 0:
    if dp[i][w] != dp[i - 1][w]:
        # element 'i' is in the knapsack
        elements.append(i)
        w -= wt[i - 1]
    i = i - 1  # move down one row per step (0-1 knapsack: each item used at most once)
```
The idea is to iterate the `K` matrix in reverse to determine which elements' values were added to reach the optimum `K[n][W]` value.
|
You can add this code to the end of your function to work your way back through the items added:
```
res = K[n][W]
print(res)
w = W
for i in range(n, 0, -1):
    if res <= 0:
        break
    if res == K[i - 1][w]:
        # item i was not needed for the optimum
        continue
    else:
        # item i is included in the knapsack
        print(wt[i - 1])
        res = res - val[i - 1]
        w = w - wt[i - 1]
```
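For reference: with the question's data (`W = 4`, weights `[1, 2, 3, 4]`, values `[40, 100, 120, 140]`), the optimum is 160, and this backtracking prints the chosen weights 3 and 1, i.e. the items worth 120 and 40.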
| 9,723
|
33,008,401
|
How to define an attribute in a Python 3 enum class that is NOT an enum value?
```
class Color(Enum):
red = 0
blue = 1
violet = 2
foo = 'this is a regular attribute'
bar = 55 # this is also a regular attribute
```
But this seems to fail for me. It seems that Color tries to include foo and bar as part of its enum values.
EDIT:
Lest you think I'm not using Enum in a way that's intended... For example, take the official Python documentation's example enum class Planet (docs.python.org/3/library/enum.html#planet). Note that they define the gravitational constant G within the surface\_gravity() method. But this is weird code. A normal programmer would say, set that constant G once, outside the function. But if I try to move G out (but not to global scope, just class scope), then I run into the issue I'm asking about here.
|
2015/10/08
|
[
"https://Stackoverflow.com/questions/33008401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5421703/"
] |
The point of the `Enum` type is to define enum values, so non-enum values are theoretically out of scope for this type. For constants, you should consider moving them out of the type anyway: they are likely not directly related to the enum values (but rather some logic that builds on those), and they also shouldn't be a mutable property of the type. Usually, you would just create a constant at module level.
If you really need something on the type, then you could add it as a class method:
```
class Color(Enum):
    red = 0
    blue = 1
    violet = 2
    bar = 55

    @classmethod
    def foo(cls):
        return 'this is not really an attribute…'
```
And using the `classproperty` descriptor from [this answer](https://stackoverflow.com/a/5191224/216074), you can also turn this into a property at class level which you can access as if it was a normal attribute:
```
class Color(enum.Enum):
    red = 0
    blue = 1
    violet = 2
    bar = 55

    @classproperty
    def foo(cls):
        return 'this is almost a real attribute'
```
```
>>> Color.foo
'this is almost a real attribute'
>>> list(Color)
[<Color.red: 0>, <Color.blue: 1>, <Color.violet: 2>, <Color.bar: 55>]
```
|
When building an `enum.Enum` class, **all** regular attributes become members of the enumeration. A different type of value does not make a difference.
By regular attributes I mean all objects that are not descriptors (like functions are) and excluded names (using single underscore names, see the [*Allowed members and attributes of enumerations* section](https://docs.python.org/3/library/enum.html#allowed-members-and-attributes-of-enumerations)).
If you need additional attributes on the final `enum.Enum` object, add attributes *afterwards*:
```
class Color(Enum):
    red = 0
    blue = 1
    violet = 2

Color.foo = 'this is a regular attribute'
Color.bar = 55
```
Demo:
```
>>> from enum import Enum
>>> class Color(Enum):
...     red = 0
...     blue = 1
...     violet = 2
...
>>> Color.foo = 'this is a regular attribute'
>>> Color.bar = 55
>>> Color.foo
'this is a regular attribute'
>>> Color.bar
55
>>> Color.red
<Color.red: 0>
>>> list(Color)
[<Color.red: 0>, <Color.blue: 1>, <Color.violet: 2>]
```
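For completeness: on Python 3.11 and later, `enum.nonmember` keeps a value from ever becoming a member, which addresses the question directly. A minimal sketch:
```
>>> from enum import Enum, nonmember
>>> class Color(Enum):
...     red = 0
...     blue = 1
...     violet = 2
...     foo = nonmember('this is a regular attribute')
...
>>> Color.foo
'this is a regular attribute'
>>> list(Color)
[<Color.red: 0>, <Color.blue: 1>, <Color.violet: 2>]
```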
| 9,724
|
60,998,188
|
I'm trying to write Python 3 code that prints a square matrix from user input. In addition, the first row of this matrix must be filled with the numbers 1 to n, the second row is the first row multiplied by 2, the third by 3, and so on, until the n-th row, which is the first row multiplied by n. I was able to get my code to print a square matrix, but I don't understand how to modify it so that it prints according to this description. For example, for n=3 I should get the matrix: 1 2 3 (1st row), 2 4 6 (2nd row) and
3 6 9 (3rd row), but instead I get 1 2 3 (1st row), 4 5 6 (2nd row) and 7 8 9 (3rd row).
My code:
```
n = int(input("Enter dimensions of matrix :"))
m = n
x = 1
columns = []
for row in range(n):
    inner_column = []
    for col in range(m):
        inner_column.append(x)
        x = x + 1
    columns.append(inner_column)
for inner_column in columns:
    print(' '.join(map(str, inner_column)))
```
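For reference, a minimal sketch of the row-multiplication fill described above (names are illustrative):
```
n = int(input("Enter dimensions of matrix :"))
# row r of the matrix is the first row (1..n) multiplied by r
rows = [[r * c for c in range(1, n + 1)] for r in range(1, n + 1)]
for row in rows:
    print(' '.join(map(str, row)))
```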
|
2020/04/02
|
[
"https://Stackoverflow.com/questions/60998188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13111470/"
] |
You can store functions in a `List`.
```
public class NeutralFactory : NPCFactory
{
    private List<Func<int, Creature>> humanoids = new List<Func<int, Creature>> {
        hp => new Dwarf(hp),
        hp => new Fairy(hp),
        hp => new Elf(hp),
        hp => new Troll(hp),
        hp => new Orc(hp)
    };

    private Random random = new Random();

    public Creature CreateHumanoid(int hp = 100)
    {
        int index = random.Next(humanoids.Count);
        return humanoids[index](hp);
    }
}
```
|
To avoid having to write creation functions for each class, you can use `Activator.CreateInstance`:
```cs
using System;
using System.Collections.Generic;

namespace so60998181
{
    public class Creature
    {
        public int hp;

        public Creature()
        {
            this.hp = 100;
        }

        public Creature(int hp)
        {
            this.hp = hp;
        }

        public override string ToString()
        {
            return String.Format("{0}, {1} hp", GetType().Name, hp);
        }
    }

    public class Dwarf : Creature
    {
        public Dwarf(int hp) : base(hp) { }
    }

    public class Fairy : Creature
    {
        public Fairy(int hp) : base(hp / 3) { }
    }

    public class Orc : Creature
    {
        public Orc(int hp) : base(hp * 4) { }
    }

    class Program
    {
        private static List<Type> humanoids = new List<Type> { typeof(Dwarf), typeof(Fairy), typeof(Orc) };
        private static Random rng = new Random();

        private static Creature CreateHumanoid(int hp = 100)
        {
            int index = rng.Next(humanoids.Count);
            // 'params' is a reserved word in C#, so use a different name for the argument array
            var ctorArgs = new object[] { hp };
            return Activator.CreateInstance(humanoids[index], ctorArgs) as Creature;
        }

        static void Main(string[] args)
        {
            for (var i = 0; i < 10; i++)
            {
                Console.WriteLine(CreateHumanoid(i * 10));
            }
        }
    }
}
```
This outputs e.g.
```
Fairy, 0 hp
Orc, 40 hp
Fairy, 6 hp
Orc, 120 hp
Fairy, 13 hp
Orc, 200 hp
Fairy, 20 hp
Fairy, 23 hp
Fairy, 26 hp
Fairy, 30 hp
```
(poor fairy #1)
| 9,725
|
60,579,544
|
I have a frontend, which is hosted via Firebase. The code uses Firebase authentication and retrieves the token via `user.getIdToken()`. According to answers to similar questions that's the way to go.
The backend is written in Python, expects the token and verifies it using the firebase\_admin SDK. On my local machine, I set `FIREBASE_CONFIG` to the path to `firebase-auth.json` that I exported from my project. Everything works as expected.
Now I deployed my backend via Google AppEngine. Here I configure `FIREBASE_CONFIG` as JSON string in the app.yaml. The code looks like this:
```
runtime: python37
env_variables:
FIREBASE_CONFIG: '{
"type": "service_account",
"project_id": "[firebase-project-name]",
...
```
The backend logs the value of `FIREBASE_CONFIG` at startup. In the logs I can see the JSON string is there and `{` is the first character, so everything looks good to me. But if I retrieve the token from the client and try to validate it (same code that works locally), I get this error:
>
> Firebase ID token has incorrect "aud" (audience) claim. Expected
> "[backend-appengine-project-name]" but got "[firebase-project-name]". Make sure the ID token
> comes from the same Firebase project as the service account used to
> authenticate this SDK.
>
>
>
Can somebody explain what I'm missing and how to solve it?
|
2020/03/07
|
[
"https://Stackoverflow.com/questions/60579544",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/110963/"
] |
The error message makes it sound like the user of your client app is signed into a different Firebase project than your backend is working with. Taking the error message literally, your backend expects "[backend-appengine-project-name]", but the client's token comes from "[firebase-project-name]". Make sure they are both configured to use the same project, using the same project ID.
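If it helps, a minimal sketch of pinning the Admin SDK to the intended project explicitly (the file name and project ID below are the question's own placeholders, and `token` stands for the ID token sent by the client):
```
import firebase_admin
from firebase_admin import auth, credentials

# initialize with the service account of the SAME project the client signs into
cred = credentials.Certificate('firebase-auth.json')
firebase_admin.initialize_app(cred, {'projectId': '[firebase-project-name]'})

decoded = auth.verify_id_token(token)
```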
|
```
final GoogleSignInAccount googleUser = await _googleSignIn.signIn();
final GoogleSignInAuthentication googleAuth = await googleUser.authentication;
final AuthCredential credential = GoogleAuthProvider.getCredential(
accessToken: googleAuth.accessToken,
idToken: googleAuth.idToken,
);
final FirebaseUser user = (await _auth.signInWithCredential(credential)).user;
IdTokenResult idTokenResult = await user.getIdToken(refresh: true);
print("userIdToken:" + idTokenResult.token);
```
| 9,730
|
44,669,963
|
I'm only starting to get into Python from C#, and I have a question that I wasn't able to find an answer to; maybe I wasn't able to phrase the question right.
I need this to create two lists when calling **load(positives)** and **load(negatives)**, where positives is a path to a file. From C# I'm used to this kind of structure to avoid copying the same code again with just another variable, e.g. what if I needed 5 lists? With this code I'm only able to access the self.dictionary variable, but not self.positives or self.negatives.
I get the error AttributeError: 'Analyzer' object has no attribute 'positives' at the line 'for p in self.positives:'.
MAIN QUESTION IS: how do I make self.dictionary = [] create list attributes named after the argument — self.positives and self.negatives — which I need later in the code?
---
```
def load(self, dictionary):
    i = 0
    self.dictionary = []
    with open(dictionary) as lines:
        for line in lines:
            # some more code
            self.dictionary.append(0)
            self.dictionary[i] = line
            i += 1

# later in code
for p in self.positives:
    if text == p:
        score += 1
for p in self.negatives:
    if text == p:
        score -= 1

# structure of the program:
class Analyzer():
    def load()
    def init()
        load(positives)
        load(negatives)
    def analyze()
        for p in self.positives
```
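For reference, a minimal sketch of the `setattr`-based pattern the question seems to be after (the attribute name is passed in explicitly; names are illustrative):
```
def load(self, path, name):
    values = []
    with open(path) as lines:
        for line in lines:
            values.append(line.rstrip('\n'))
    # creates self.positives or self.negatives dynamically
    setattr(self, name, values)

# in __init__:
#   self.load(positives, 'positives')
#   self.load(negatives, 'negatives')
```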
|
2017/06/21
|
[
"https://Stackoverflow.com/questions/44669963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8192419/"
] |
Integration with Spring and other frameworks is available in IntelliJ IDEA Ultimate, but you're using the Community Edition, which supports mainly core Java features, so there's no inspection capable of determining whether or not the field is assigned.
|
In recent versions of CE you can suppress these warnings when the @Autowired annotation is present, by using the light bulb (I'm using version 2022.2).
| 9,731
|
33,169,619
|
Having this:
```
a = 12
b = [1, 2, 3]
```
What is the most pythonic way to convert it into this?:
```
[12, 1, 12, 2, 12, 3]
```
|
2015/10/16
|
[
"https://Stackoverflow.com/questions/33169619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1468388/"
] |
If you want to alternate between `a` and the elements of `b`, you can use [`itertools.cycle`](https://docs.python.org/2/library/itertools.html#itertools.cycle) and `zip`. Example -
```
>>> a = 12
>>> b = [1, 2, 3]
>>> from itertools import cycle
>>> [i for item in zip(cycle([a]),b) for i in item]
[12, 1, 12, 2, 12, 3]
```
|
You can use `itertools.repeat` to create an iterable that repeats `a` for the length of `b`, then use `zip` to pair its items with the items of `b`, and at last use the `chain.from_iterable` function to concatenate the pairs:
```
>>> from itertools import repeat,chain
>>> list(chain.from_iterable(zip(repeat(a,len(b)),b)))
[12, 1, 12, 2, 12, 3]
```
Also, without `itertools` you can use the following trick:
```
>>> it = iter(b)
>>> [a if i % 2 == 0 else next(it) for i in range(len(b) * 2)]
[12, 1, 12, 2, 12, 3]
```
| 9,732
|
52,821,996
|
How do I do early stopping for an LSTM?
I am using Python TensorFlow, not Keras.
I would appreciate it if you could provide sample Python code.
Regards
|
2018/10/15
|
[
"https://Stackoverflow.com/questions/52821996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9023836/"
] |
You can do it using the `EarlyStopping` callback:
```
from keras.callbacks import EarlyStopping
earlyStop=EarlyStopping(monitor="val_loss",verbose=2,mode='min',patience=3)
history=model.fit(xTrain,yTrain,epochs=100,batch_size=10,validation_data=(xTest,yTest) ,verbose=2,callbacks=[earlyStop])
```
Training will stop when "val\_loss" has not decreased (`mode='min'`) even after 3 epochs (`patience=3`).
(Didn't realize you were not using Keras.)
|
You can find it with a little search
<https://github.com/mmuratarat/handson-ml/blob/master/11_deep_learning.ipynb>
```
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty
....
if loss_val < best_loss:
    save_path = saver.save(sess, './my_mnist_model.ckpt')
    best_loss = loss_val
    checks_without_progress = 0
else:
    checks_without_progress += 1
    if checks_without_progress > max_checks_without_progress:
        print("Early stopping!")
        break
print("Epoch: {:d} - ".format(epoch), \
      "Training Loss: {:.5f}, ".format(loss_train), \
      "Training Accuracy: {:.2f}%, ".format(accuracy_train*100), \
      "Validation Loss: {:.4f}, ".format(loss_val), \
      "Best Loss: {:.4f}, ".format(best_loss), \
      "Validation Accuracy: {:.2f}%".format(accuracy_val*100))
```
| 9,737
|
51,405,580
|
I am trying to install behave-parallel using pip install. I have installed programs previously using pip, so I know my Python/script path is correct in my env variables. However, I am seeing the following error:
```
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\.....Temp\\pip-install-rjiorrn7\\behave-parallel\\setup.py'
```
How can I resolve this issue?
```
C:\Users\.....>pip install behave-parallel
Collecting behave-parallel
Using cached https://files.pythonhosted.org/packages/05/9d/22f74dd77bc4fa85d391564a232c49b4e99cfdeac7bfdee8151ea4606632/behave-parallel-1.2.4a1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\.........\python\lib\tokenize.py", line 447, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\.........\\AppData\\Local\\Temp\\pip-install-7vgf8_mu\\behave-parallel\\setup.py'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\.........\AppData\Local\Temp\pip-install-7vgf8_mu\behave-parallel\
```
|
2018/07/18
|
[
"https://Stackoverflow.com/questions/51405580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7546921/"
] |
The package is simply broken, as it is missing the `setup.py` file.
```
$ tar tzvf behave-parallel-1.2.4a1.tar.gz | grep setup.py
$
```
You might be able to download the source from Github or wherever and package it yourself (`python setup.py bdist_wheel`), then install that wheel (`pip install ../../dist/behave-parallel...whl`).
|
There is a newer feature for building python packages (see also [PEP 517](https://www.python.org/dev/peps/pep-0517/) and [PEP 518](https://www.python.org/dev/peps/pep-0518/)). A package can now be built without setup.py (with pyproject.toml), but older pip versions are not aware of this feature and raise the error shown in the question.
So if you have reason to believe that the library was packaged properly, try updating pip to something newer ([version 19 or newer](https://pip.pypa.io/en/stable/news/#id209) will probably work).
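For illustration, a minimal `pyproject.toml` of the kind those PEPs describe (the setuptools backend shown is just the common choice, not necessarily what this package uses):
```
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
```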
| 9,738
|
2,332,773
|
I am using newt/snack (a TUI graphical widget library for Python based on slang) to build some interactive scripts. However, for some target terminals the output of those screens is not very nice. I can change the look of them by changing the `$TERM` variable to remove non-printable characters and to convert them to something more suitable. For example:
```
TERM=linux python myscript.py
```
So far, the values I tested for `$TERM` yielded only moderate success.
Is there a known value for `$TERM` that consistently converts graphical characters:
```
┌────────────┤ Title ├────────────┐
│                                 │
│ Some text for the entry window  │
│                                 │
│ foo _______________________     │
│ bar _______________________     │
│ baz _______________________     │
│                                 │
│     ┌────┐       ┌────────┐     │
│     │ Ok │       │ Cancel │     │
│     └────┘       └────────┘     │
│                                 │
└─────────────────────────────────┘
```
into non graphical characters:
```
+------------| Title |------------+
|                                 |
| Some text for the entry window  |
|                                 |
| foo _______________________     |
| bar _______________________     |
| baz _______________________     |
|                                 |
|     +----+       +--------+     |
|     | Ok |       | Cancel |     |
|     +----+       +--------+     |
|                                 |
+---------------------------------+
```
|
2010/02/25
|
[
"https://Stackoverflow.com/questions/2332773",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/110488/"
] |
short: no - it (newt and slang) simply doesn't **do** that.
long:
newt uses the function `SLsmg_draw_box` which is shown here for reference:
```
void SLsmg_draw_box (int r, int c, unsigned int dr, unsigned int dc)
{
   if (Smg_Mode == SMG_MODE_NONE) return;
   if (!dr || !dc) return;

   This_Row = r; This_Col = c;
   dr--; dc--;

   SLsmg_draw_hline (dc);
   SLsmg_draw_vline (dr);
   This_Row = r; This_Col = c;
   SLsmg_draw_vline (dr);
   SLsmg_draw_hline (dc);
   SLsmg_draw_object (r, c, SLSMG_ULCORN_CHAR);
   SLsmg_draw_object (r, c + (int) dc, SLSMG_URCORN_CHAR);
   SLsmg_draw_object (r + (int) dr, c, SLSMG_LLCORN_CHAR);
   SLsmg_draw_object (r + (int) dr, c + (int) dc, SLSMG_LRCORN_CHAR);
   This_Row = r; This_Col = c;
}
```
In the `slsmg.c` file, slang does have a table:
```
typedef struct
{
   unsigned char vt100_char;
   unsigned char ascii;
   SLwchar_Type unicode;          /* may have an ambiguous width */
   SLwchar_Type unicode_narrow;   /* has a narrow width */
}
ACS_Def_Type;

static SLCONST ACS_Def_Type UTF8_ACS_Map[] =
{
   {'+', '>', 0x2192, '>'},      /* RIGHTWARDS ARROW [A] */
   {',', '<', 0x2190, '<'},      /* LEFTWARDS ARROW [A] */
   {'-', '^', 0x2191, 0x2303},   /* UPWARDS ARROW [A] */
   {'.', 'v', 0x2193, 0x2304},   /* DOWNWARDS ARROW [A] */
```
but it will automatically choose the Unicode values if told by the application (newt) to use UTF-8 encoding. newt does that unconditionally, ignoring the terminal and locale information:
```
/**
 * @brief Initialize the newt library
 * @return int - 0 for success, else < 0
 */
int newtInit(void) {
    char * MonoValue, * MonoEnv = "NEWT_MONO";
    const char *lang;
    int ret;

    if ((lang = getenv("LC_ALL")) == NULL)
        if ((lang = getenv("LC_CTYPE")) == NULL)
            if ((lang = getenv("LANG")) == NULL)
                lang = "";

    /* slang doesn't support multibyte encodings except UTF-8,
       avoid character corruption by redrawing the screen */
    if (strstr (lang, ".euc") != NULL)
        trashScreen = 1;

    (void) strlen(ident);

    SLutf8_enable(-1);
    SLtt_get_terminfo();
    SLtt_get_screen_size();
Back in `slsmg.c`, the `init_acs()` function will first try to use Unicode (and always succeed if your locale supports UTF-8). If it happens to be something else, the functions proceeds to use whatever information exists in the terminal description.
As a rule, if a terminal supports line-drawing characters, every terminal description is written to tell how to draw lines. If you **modified** a terminal description to remove those capabilities, you could get just ASCII (but that's based on just reading the function: slang has numerous hard-coded cases designed to fill in for terminal descriptions which don't behave as slang's author wants, and you might trip over one of those).
For what it's worth, the terminfo capabilities used by slang are: **`acsc`**, **`enacs`**, **`rmacs`**, **`smacs`**.
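A quick way to check whether a given terminal description advertises those capabilities (a hedged sketch; `xterm` is just an example entry name):
```
$ infocmp -1 xterm | grep -E 'acsc|enacs|rmacs|smacs'
```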
|
It might be based on your `$LANG` being set to something like `en_US.UTF-8`. Try changing it to `en_US` (assuming your base locale is `en_US`).
| 9,744
|
41,454,563
|
I could just write a long-running CLI app and run it, but I'm assuming it wouldn't comply with all the expectations one would have of a standards-compliant Linux daemon (responding to SIGTERM, started by the System V init process, ignoring terminal I/O signals, [etc.](https://www.python.org/dev/peps/pep-3143/#id1))
Most ecosystems have some best-practice way of doing this, for example, in python, you can use <https://pypi.python.org/pypi/python-daemon/>
Is there some documentation about how to do this with .Net Core?
|
2017/01/04
|
[
"https://Stackoverflow.com/questions/41454563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970673/"
] |
I toyed with an idea similar to how the .NET Core web host waits for shutdown in console applications. I was reviewing it on GitHub and was able to extract the gist of how they perform the `Run`:
<https://github.com/aspnet/Hosting/blob/15008b0b7fcb54235a9de3ab844c066aaf42ea44/src/Microsoft.AspNetCore.Hosting/WebHostExtensions.cs#L86>
```cs
public static class ConsoleHost {
    /// <summary>
    /// Blocks the calling thread until shutdown is triggered via Ctrl+C or SIGTERM.
    /// </summary>
    public static void WaitForShutdown() {
        WaitForShutdownAsync().GetAwaiter().GetResult();
    }

    /// <summary>
    /// Runs an application and blocks the calling thread until host shutdown.
    /// </summary>
    public static void Wait() {
        WaitAsync().GetAwaiter().GetResult();
    }

    /// <summary>
    /// Runs an application and returns a Task that only completes when the token is triggered or shutdown is triggered.
    /// </summary>
    /// <param name="token">The token to trigger shutdown.</param>
    public static async Task WaitAsync(CancellationToken token = default(CancellationToken)) {
        // Wait for the token shutdown if it can be cancelled
        if (token.CanBeCanceled) {
            await WaitAsync(token, shutdownMessage: null);
            return;
        }

        // If the token cannot be cancelled, attach Ctrl+C and SIGTERM shutdown
        var done = new ManualResetEventSlim(false);
        using (var cts = new CancellationTokenSource()) {
            AttachCtrlcSigtermShutdown(cts, done, shutdownMessage: "Application is shutting down...");
            await WaitAsync(cts.Token, "Application running. Press Ctrl+C to shut down.");
            done.Set();
        }
    }

    /// <summary>
    /// Returns a Task that completes when shutdown is triggered via the given token, Ctrl+C or SIGTERM.
    /// </summary>
    /// <param name="token">The token to trigger shutdown.</param>
    public static async Task WaitForShutdownAsync(CancellationToken token = default (CancellationToken)) {
        var done = new ManualResetEventSlim(false);
        using (var cts = CancellationTokenSource.CreateLinkedTokenSource(token)) {
            AttachCtrlcSigtermShutdown(cts, done, shutdownMessage: string.Empty);
            await WaitForTokenShutdownAsync(cts.Token);
            done.Set();
        }
    }

    private static async Task WaitAsync(CancellationToken token, string shutdownMessage) {
        if (!string.IsNullOrEmpty(shutdownMessage)) {
            Console.WriteLine(shutdownMessage);
        }
        await WaitForTokenShutdownAsync(token);
    }

    private static void AttachCtrlcSigtermShutdown(CancellationTokenSource cts, ManualResetEventSlim resetEvent, string shutdownMessage) {
        Action ShutDown = () => {
            if (!cts.IsCancellationRequested) {
                if (!string.IsNullOrWhiteSpace(shutdownMessage)) {
                    Console.WriteLine(shutdownMessage);
                }
                try {
                    cts.Cancel();
                } catch (ObjectDisposedException) { }
            }
            // Wait on the given reset event
            resetEvent.Wait();
        };

        AppDomain.CurrentDomain.ProcessExit += delegate { ShutDown(); };
        Console.CancelKeyPress += (sender, eventArgs) => {
            ShutDown();
            // Don't terminate the process immediately, wait for the Main thread to exit gracefully.
            eventArgs.Cancel = true;
        };
    }

    private static async Task WaitForTokenShutdownAsync(CancellationToken token) {
        var waitForStop = new TaskCompletionSource<object>();
        token.Register(obj => {
            var tcs = (TaskCompletionSource<object>)obj;
            tcs.TrySetResult(null);
        }, waitForStop);
        await waitForStop.Task;
    }
}
```
I tried adapting something like an `IConsoleHost`, but quickly realized I was over-engineering it. I extracted the main parts into something like `await ConsoleUtil.WaitForShutdownAsync();` that operates like `Console.ReadLine`.
This then allowed the utility to be used like this
```cs
public class Program {
public static async Task Main(string[] args) {
//relevant code goes here
//...
//wait for application shutdown
await ConsoleUtil.WaitForShutdownAsync();
}
}
```
From there, creating a *systemd* service as in the following link should get you the rest of the way:
[Writing a Linux daemon in C#](https://developers.redhat.com/blog/2017/06/07/writing-a-linux-daemon-in-c/)
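For illustration, a minimal unit file of the kind that article walks through (every path and name here is hypothetical):
```
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=Example .NET Core daemon

[Service]
ExecStart=/usr/bin/dotnet /opt/myapp/myapp.dll
Restart=always
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target
```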
|
I'm not sure it is production grade, but for a quick and dirty console app this works well:
```cs
await Task.Delay(-1); //-1 indicates infinite timeout
```
| 9,745
|
62,328,661
|
I understand that higher-order functions are functions that take functions as parameters or return functions. I also know that decorators are functions that add some functionality to other functions. But what are they exactly? Are they the functions that are passed in as parameters, or are they the higher-order functions themselves?
Note: if you give an example, please give it in Python.
|
2020/06/11
|
[
"https://Stackoverflow.com/questions/62328661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13675368/"
] |
A higher order function is a function that takes a function as an argument OR\* returns a function.
A decorator in Python is (typically) an example of a higher-order function, but there are decorators that aren't (class decorators\*\*, and decorators that aren't functions), and there are higher-order functions that aren't decorators, for example those that take two required arguments that are functions.
Not decorator, not higher-order function:
=========================================
```
def hello(who):
    print("Hello", who)
```
Not decorator, but higher-order function:
=========================================
```
def compose(f, g):
    def wrapper(*args, **kwargs):
        return g(f(*args, **kwargs))
    return wrapper
```
Decorator, not higher-order function:
=====================================
```
def classdeco(cls):
    cls.__repr__ = lambda self: "WAT"
    return cls

# Usage:
@classdeco
class Foo:
    pass
```
Decorator, higher-order function:
=================================
```
def log_calls(fn):
    def wrapper(*args, **kwargs):
        print("Calling", fn.__name__)
        return fn(*args, **kwargs)
    return wrapper
```
---
\* Not XOR
\*\* Whether or not you consider class decorators to be higher-order functions, because classes are callable etc., is up for debate, I guess.
|
A higher-order function is a function that either takes a function as an argument or returns a function.
Decorator *syntax* is a syntactic shortcut:
```
@f
def g(...):
    ...
```
is just a convenient shorthand for
```
def g(...):
    ...

g = f(g)
```
As such, a decorator really is simply a function that takes another function as an argument. It would be more accurate to talk about using `f` *as* a decorator than to say that `f` *is* a decorator.
| 9,755
|
59,505,322
|
I am going through a Django tutorial, but it's an old one. The videos were all made using Django 1.11 and Python 3.6. The problem is that I have Python 3.8 installed on my machine. So I was trying to create a virtualenv with Python 3.6, but as Python 3.6 is not available on my machine, I couldn't do that. At this point I was wondering whether it is actually possible to have both Python 3.6 and Python 3.8 on a machine at the same time.
Kindly help me with this problem or point me to the right resource to understand more about it.
|
2019/12/27
|
[
"https://Stackoverflow.com/questions/59505322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11748245/"
] |
It is possible to have two versions of Python installed on the same machine. You might have to do some path manipulation to get things working, depending on the specifics of your setup. You can also probably just follow along with the tutorial using Python 3.8 even if the tutorial itself uses 3.6.
You can also use the python launcher to manage multiple versions of python installed on the same machine: <https://docs.python.org/3/using/windows.html#python-launcher-for-windows>
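For example, on Windows the launcher lets you pick the interpreter per invocation (a sketch, assuming both versions are installed):
```
py -3.6 -m venv env36   # create a virtual environment with Python 3.6
py -3.8 -m venv env38   # create one with Python 3.8
```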
|
Yes, you can have both versions installed on a single machine. All you need to do is download Python 3.6 from its official site, set your interpreter to Python 3.6, and you are all set.
| 9,756
|
25,502,666
|
I want to execute a Linux shell command in Python, for example:
```
import os
cmd='ps -ef | grep java | grep -v grep'
p=os.popen(cmd)
print p.read()
```
The code works well in Python 2.7, but it doesn't work well in Python 2.4 or 2.6.
The problem is: when the environment is 2.4 or 2.6, for each process in Linux it only returns one truncated line.
For example, this is what I want (and this is just what is returned in 2.7):
```
59996 17038 17033 0 14:08 pts/3 00:00:02 java -Xms64m -Xmx256m classpath=/home/admin/axxxX xxxx//xxxxxxxx ....
root 85751 85750 0 12:25 XXXXX XXXXXXX XXXXXXXX
```
but it actually returns this (in 2.4/2.6):
```
59996 17038 17033 0 14:08 pts/3 00:00:02 java -Xms64m -Xmx256m classpath=/home/admin/ax\n
root 85751 85750 0 12:25 XXXXX XXXXXXX XXXXXXXX\n
```
That means it cuts each item short so that only one line is left per item, and it adds an `\n` to each item in the result, which is not what I want to see.
I have tried other methods like `subprocess.Popen` and `commands.getstatusoutput(command)`, but the results are the same -- I only get one line for each item (process).
Info:
1. if I execute the shell command directly in an ssh session on Linux, the result is good
2. if I execute `ps -ef |grep java |grep -v grep >>1.txt`, redirecting the result into a text file, the result is also OK
The Python script will be executed on many machines, so it is not practical to update all of them to Python 2.7 or a newer version.
I am a little bit worried because the deadline is coming soon, and I need help.
Looking forward to your answers, thanks very much.
|
2014/08/26
|
[
"https://Stackoverflow.com/questions/25502666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3978288/"
] |
You're making an unwarranted assumption about the behavior of `read`. Use [subprocess.Popen](https://docs.python.org/2.7/library/subprocess.html#popen-objects) (and especially its `communicate` method) to read the whole thing. It was introduced in 2.4.
Use the string `splitlines` method as necessary if you want individual lines.
|
`ps` usually clips its output according to the terminal width but, because you are piping the output to `grep`, `ps` can not determine the width, and so it determines that from various things such as the terminal type, environment variables, or command line options such as `--cols`.
You might find that you get different results depending on how you execute your python script. In an interactive session you will most likely see that the output of the pipeline is clipped to your terminal width. If you run the script from the command line you will probably see the full output.
Are your tests for the different versions of python being run on the same machine, and in the same manner (interactive vs. command line)? I suspect that some inconsistency here might be causing the different output.
Fortunately you can tell `ps` to use unlimited width using the `-ww` command line option:
```
import os
cmd='ps -efww | grep java | grep -v grep'
p = os.popen(cmd)
print p.read()
```
With this you should receive the full `ps` output.
Although `subprocess.Popen()` should produce the same result (since `ps` is doing the clipping) you should use it instead of `os.popen()` - when used with `subprocess.communicate()` you can avoid possible deadlocks.
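A minimal sketch of that `subprocess` form, matching the question's Python 2 era code (the `java` filtering is done in Python instead of `grep`):
```
import subprocess

# -ww tells ps to use unlimited width, so nothing is clipped
p = subprocess.Popen(['ps', '-efww'], stdout=subprocess.PIPE)
out, _ = p.communicate()  # read everything at once, avoiding pipe deadlocks
for line in out.splitlines():
    if 'java' in line:
        print line
```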
| 9,757
|
71,662,125
|
Context
-------
The [Linux/MacOS instructions](https://github.com/lava-nc/lava#linuxmacos) for setting up your device for the Lava neuromorphic computing framework by Intel provide a few pip commands, a git clone command and some poetry instructions. I am used to being able to integrate `pip` commands in an `environment.yml` for conda, and I thought the git clone command could also be included in the `environment.yml` file. However, I am not yet sure how to integrate the poetry commands.
Question
--------
Hence, I would like to ask: *How can I convert the following installation script into a (single) `conda` environment yaml file?*:
```
cd $HOME
pip install -U pip
pip install "poetry>=1.1.13"
git clone git@github.com:lava-nc/lava.git
cd lava
poetry config virtualenvs.in-project true
poetry install
source .venv/bin/activate
pytest
```
Attempts
--------
I have been able to install the Lava software successfully in a single environment.yml using:
```
# run: conda env create --file lava_environment.yml
# include new packages: conda env update --file lava_environment.yml
name: lava
channels:
- conda-forge
- conda
dependencies:
- anaconda
- conda:
# Run python tests.
- pytest=6.1.2
- pip
- pip:
# Auto generate docstrings
- pyment
# Run pip install on .tar.gz file in GitHub repository.
- https://github.com/lava-nc/lava/releases/download/v0.3.0/lava-nc-0.3.0.tar.gz
```
Which I've installed with:
```
conda env create --file lava_environment.yml
```
However, that [installs it from a binary](https://github.com/lava-nc/lava#windowsmacoslinux) instead of from source.
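For what it's worth, a sketch of the closest pip-only step: pip inside an `environment.yml` can also install straight from the repository, which builds from source (the exact ref is an assumption, and this still skips the poetry steps):
```
  - pip:
      # assumption: the default branch is what you want to build from source
      - git+https://github.com/lava-nc/lava.git
```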
|
2022/03/29
|
[
"https://Stackoverflow.com/questions/71662125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7437143/"
] |
Just add the "slideUp" class in your HTML markup:
```
<div class="box slideUp">
```
NB: the `style="display: none;"` attribute on that element is then no longer needed, nor do you have to execute `$('.box').show()`.
Updated snippet:
```js
$(".openNav").click(function() {
$('.box').toggleClass("slideUp")
});
```
```css
.clickbox {
width: 100px;
height: 100px;
background: #343434;
margin: 0 auto;
color: #fff;
}
.openNav {
color: #fff;
}
.box {
width: 200px;
height: 200px;
background: orange;
margin: 0 auto;
margin-top: 3%;
overflow: hidden;
transition: all 0.2s ease-in-out;
}
.box.slideUp {
height: 0;
}
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="clickbox"><a href="javascript:;" class="openNav">dddd</a></div>
<div class="box slideUp"><a href="javascript:;" class="openNav">dddd</a></div>
```
|
Try this code:
```js
$(".openNav").click(function() {
$('.box').slideToggle("fast");
});
```
```css
.clickbox {
width: 100px;
height: 100px;
background: #343434;
margin: 0 auto;
color: #fff;
}
.openNav {
color: #fff;
}
.box {
width: 200px;
height: 200px;
background: orange;
margin: 0 auto;
margin-top: 3%;
overflow: hidden;
transition: all 0.2s ease-in-out;
}
.box.slideUp {
height: 0;
}
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="clickbox"><a href="javascript:;" class="openNav">dddd</a></div>
<div class="box" style="display: none;"><a href="javascript:;" class="openNav">dddd</a></div>
```
| 9,758
|
56,456,656
|
I'm a newbie to data science with Python, so I wanted to play around with the following data: <https://www.ssa.gov/OACT/babynames/limits.html>. The main problem here is that instead of giving me one file containing the data for all years, it contains a separate file for each year. Furthermore, each separate file also lacks column headings.
FYI, the data contains the names, genders and some identification number of all registered US citizens from 1910 onwards. The data is available to the public (intended to aid demographers tracking trends in popular names).
Thus, one major problem I'm facing is the need to edit more than 100 files directly (manually open each and edit) to ensure that all column headings are the same (which is required for a function like concat to work).
Another big problem is the sheer magnitude of the task: it's very inefficient to call concat on 100+ files, as well as to use up more than 100 lines of code just scanning/reading the data.
Of course, 'concat' was built for this, but I think it's quite inefficient to use it for around 130 files. Regarding the missing column headings, I've manually edited some files, but there are just too many to edit directly.
```
names2010 = pd.read_csv("../yob2010.txt")
names2011 = pd.read_csv("../yob2011.txt")
names = pd.concat([names2010, names2011])
```
Intuitively, this is what I want to avoid:
```
#rough notation
names = pd.concat([names1910, names1911 ..., names2017, names2018])
```
This is just two years' worth of data; I need to create a single data frame consisting of all data from the years 1910 to 2018.
Update: I've figured out how to combine all the different .txt files, but I still need to resolve the column headings.
```
dataframes = pd.read_csv("../yob1910.txt")
for year in range(1911, 2019):
    temp_frame = pd.read_csv("../yob{}.txt".format(year))
    dataframes = pd.concat([temp_frame, dataframes])
```
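For reference, a minimal sketch that also supplies column headings at read time via `names=`, so none of the files needs manual editing (the column names are an assumption about the SSA files' layout):
```
import pandas as pd

frames = []
for year in range(1910, 2019):
    # assumed layout of each yob file: name, sex, birth count
    frame = pd.read_csv("../yob{}.txt".format(year), names=["name", "sex", "births"])
    frame["year"] = year
    frames.append(frame)

names = pd.concat(frames, ignore_index=True)  # one concat over the whole list
```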
|
2019/06/05
|
[
"https://Stackoverflow.com/questions/56456656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11557970/"
] |
I saw people running into this issue on Windows by not realizing that in File Explorer file extensions are hidden by default, so while they wanted to create a file called "config", they actually created a file called "config.txt" and that's not found by kubectl.
|
I ended up deleting Windows and installing Ubuntu. Windows was a nightmare.
| 9,763
|
53,421,991
|
I am using Visual Studio Code as my IDE for building web applications using Python's Django web development framework. I am developing on a 2018 MacBook Pro. I am able to launch my web applications from the terminal using:
```
python3 manage.py runserver
```
However, I want to be able to launch my application through the debugger. To try to do this, I navigated to the debug section, created the launch.json file, and changed my configuration in the drop-down to Python: Django. Here are my configurations from the file:
```
{
"name": "Python: Django",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/manage.py",
"console": "integratedTerminal",
"args": [
"runserver",
"--noreload",
"--nothreading"
],
"django": true
},
```
When I try to run the debugger using the green play arrow, I get the following exception:
>
> **Exception has occurred: ImportError**
> Couldn't import Django. Are you
> sure it's installed and available on your PYTHONPATH environment
> variable? Did you forget to activate a virtual environment? File
> "/Users/justinoconnor/Desktop/Rapid
> Prototyping/Projects/hello\_django/manage.py", line 14, in
> ) from exc
>
>
>
Launching the VS Code debugger with this configuration should be the same as running python manage.py runserver --noreload --nothreading, but it is not working. I'm thinking it is because on the MacBook I have to use the "python3" command rather than "python", but I did not see anything in the documentation that would allow me to specify this in the launch.json configuration file.
Does anyone know how to resolve this so that running the debugger executes my project? I don't understand why this is not working when I can type python3 manage.py runserver into the terminal and it executes just fine.
|
2018/11/21
|
[
"https://Stackoverflow.com/questions/53421991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5644892/"
] |
Use the command `virtualenv -p python3 venv` (or replace "venv" with your virtual environment name) in the terminal to create the virtual environment with python3 as the default when "python" is used in the terminal (e.g. `python manage.py ...`).
The `-p` is used to specify a specific version of python.
|
The issue was that I used the "python" command instead of the "python3" command when creating the virtual environment for my project. This was causing the debugger to execute the wrong command when trying to run the local server. I was able to create a new virtual environment using the command ...
```
python3 -m venv env
```
... that the Visual Studio Code debugger was able to successfully recognize when debugging using the "Python: Django" drop down configuration.
| 9,766
|
860,140
|
What is the best way to find out the user that a python process is running under?
I could do this:
```
name = os.popen('whoami').read()
```
But that has to start a whole new process.
```
os.environ["USER"]
```
works sometimes, but sometimes that environment variable isn't set.
|
2009/05/13
|
[
"https://Stackoverflow.com/questions/860140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83898/"
] |
```
import getpass
print(getpass.getuser())
```
See the documentation of the [getpass](https://docs.python.org/3/library/getpass.html) module.
>
> getpass.getuser()
>
>
> Return the βlogin nameβ of the user. Availability: Unix, Windows.
>
>
> This function checks the environment variables LOGNAME, USER,
> LNAME and USERNAME, in order, and
> returns the value of the first one
> which is set to a non-empty string. If
> none are set, the login name from the
> password database is returned on
> systems which support the pwd module,
> otherwise, an exception is raised.
>
>
>
|
This should work under Unix.
```
import os
print(os.getuid()) # numeric uid
import pwd
print(pwd.getpwuid(os.getuid())) # full /etc/passwd info
```
| 9,769
|
2,664,099
|
I've found numerous posts on Stack Overflow on how to store user passwords. However, I need to know the best way to store a password that my application needs in order to communicate with another application via the web. Currently, our web app needs to transmit data to a remote website. To upload the data, our web app reads the password from a text file, creates the header with payloads, and submits via https.
This password in plain text on the file system is the issue. Is there any way to store the password more securely?
This is a linux os and the application is written in python and is not compiled.
Further clarification:
There are no users involved in this process at all. The password stored in the file system is used by the other web app to authenticate the web app that is making the request. To put it in the words of a commenter below:
"In this case, the application is the client to another remote application."
|
2010/04/18
|
[
"https://Stackoverflow.com/questions/2664099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can use a two-way key encryption algorithm like RSA.
The password is stored encrypted (by a key, which is stored in the user's brain) on the filesystem, but to decode the password, the user must enter the key.
|
I don't think you are understanding the answers provided. You don't ever store a plain-text password anywhere, nor do you transmit it to another device.
>
> You wrote: Sorry, but the issue is storing a
> password on the file system... This
> password is needed to authenticate by
> the other web app.
>
>
>
You can't count on file system protections to keep plain-text safe, which is why others have responded that you need SHA or similar. If you think that a hashed password can't be sufficient for authentication, you don't understand the relevant algorithm:
1. get password P from user
2. store encrypted (e.g. salted hash) password Q someplace relatively secure
3. forget P (even clear the buffer you used to read it)
4. send Q to remote host H
5. H gets password P' from user when needed
6. H computes Q' from P', compares Q' to Q for equality
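A minimal sketch of steps 2 and 6, assuming a salted SHA-256 hash (function and variable names are illustrative):
```
import hashlib
import os

def make_q(p, salt=None):
    # step 2: derive the stored value Q from the password P
    salt = salt or os.urandom(16)
    return salt, hashlib.sha256(salt + p.encode()).hexdigest()

def check(p_prime, salt, q):
    # step 6: recompute Q' from P' and compare for equality
    return hashlib.sha256(salt + p_prime.encode()).hexdigest() == q
```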
| 9,770
|
49,524,189
|
Comparing two lists is tough; there are numerous posts on this subject. But what if I have a list of lists? Simplified to the extreme:
```
members=[['john',1964,'NY'], \
['anna',1991,'CA'], \
['bert',2001,'AL'], \
['eddy',1990,'OH']]
cash =[['john',200], \
['dirk',200], \
['anna',300], \
['eddy',150]]
```
What I need are differences and intersections:
```
a =[['john',1964,'NY'], \
['anna',1991,'CA'], \
['eddy',1990,'OH']] #BOTH in members and cash
b =[['bert',2001,'AL']] #in members only
```
Usually I handle this with a loop, but I feel it is time to switch to a more pythonic way.
The big problem is that I have to fish out the whole (sub)list while comparing just its first element. The only way I can imagine is to make two sets of names, compare these, and recreate the lists (of lists) from the sets of differences. Not much better than a loop.
|
2018/03/28
|
[
"https://Stackoverflow.com/questions/49524189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8407665/"
] |
You'll need a nested list comprehension. Additionally, you can get rid of punctuation using `re.sub`.
```
import re

data = ["How are you. Don't wait for me", "this is all fine"]
words = [
    re.sub(r'[^a-z\s]', '', j.text.lower()).split() for i in data for j in nlp(i).sents
]
```
Or,
```
words = []
for i in data:
    ...  # do something here
    for j in nlp(i).sents:
        words.append(re.sub(r'[^a-z\s]', '', j.text.lower()).split())
```
|
There is a much simpler way with a list comprehension.
You can first join the strings with a period '.' and then split them again:
```
[x.split() for x in '.'.join(s).split('.')]
```
It will give the desired result.
```
[["How", "are","you"],["Don't", "wait", "for", "me"],["this","is","all","fine"]]
```
For Pandas dataframes, you may get an object, and hence a list of lists after `tolist` function in return. Just extract the first element.
For example,
```
import pandas as pd

def splitwords(s):
    s1 = [x.split() for x in '.'.join(s).split('.')]
    return s1

df = pd.DataFrame(s)
result = df.apply(splitwords).tolist()[0]
```
Again, it will give you the preferred result.
Hope it helps ;)
| 9,780
|
18,423,941
|
I have an Excel sheet that has a lot of data in one column, in the form of a Python dictionary, from a SQL database. I don't have access to the original database, and I can't import the CSV back into SQL with the local infile command because the keys/values on each row of the CSV are not in the same order. When I export the Excel sheet to CSV I get:
```
"{""first_name"":""John"",""last_name"":""Smith"",""age"":30}"
"{""first_name"":""Tim"",""last_name"":""Johnson"",""age"":34}"
```
What is the best way to remove the " before and after the curly brackets, as well as the extra " around the keys/values?
I also need to leave alone the integers that don't have quotes around them.
I am trying to then import this into Python with the json module so that I can print specific keys, but I can't import it with the doubled double quotes. I ultimately need the data saved in a file that looks like:
```
{"first_name":"John","last_name":"Smith","age":30}
{"first_name":"Tim","last_name":"Johnson","age":34}
```
Any help is most appreciated!
|
2013/08/24
|
[
"https://Stackoverflow.com/questions/18423941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/802136/"
] |
If the input file is just as shown, and of the small size you mention, you can load the whole file in memory, make the substitutions, and then save it. IMHO, you don't need a RegEx to do this. The easiest to read code that does this is:
```
with open(filename) as f:
    input = f.read()

input = input.replace('""', '"')
input = input.replace('"{', '{')
input = input.replace('}"', '}')

with open(filename, "w") as f:
    f.write(input)
```
I tested it with the sample input and it produces:
```
{"first_name":"John","last_name":"Smith","age":30}
{"first_name":"Tim","last_name":"Johnson","age":34}
```
Which is exactly what you want.
If you want, you can also pack the code and write
```
with open(inputFilename) as fin:
    with open(outputFilename, "w") as fout:
        fout.write(fin.read().replace('""', '"').replace('"{', '{').replace('}"', '}'))
```
but I think the first one is much clearer and both do exactly the same.
|
I think you are overthinking the problem; why not just replace the data?
```
l = list()
with open('foo.txt') as f:
    for line in f:
        l.append(line.replace('""', '"').replace('"{', '{').replace('}"', '}'))

s = ''.join(l)
print s  # or save it to file
```
It generates:
```
{"first_name":"John","last_name":"Smith","age":30}
{"first_name":"Tim","last_name":"Johnson","age":34}
```
Use a `list` to store intermediate lines and then invoke `.join` to improve performance, as explained in [Good way to append to a string](https://stackoverflow.com/questions/4435169/good-way-to-append-to-a-string).
| 9,781
|
46,164,770
|
Keywords [have to](https://mail.python.org/pipermail/python-dev/2012-March/117441.html) be strings
```
>>> def foo(**kwargs):
... pass
...
>>> foo(**{0:0})
TypeError: foo() keywords must be strings
```
But by some black magic, namespaces are able to bypass that
```
>>> from types import SimpleNamespace
>>> SimpleNamespace(**{0:0})
namespace()
```
Why? And *how*? **Could you implement a Python function that can receive integers in the `kwargs` mapping?**
|
2017/09/11
|
[
"https://Stackoverflow.com/questions/46164770",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/674039/"
] |
>
> Could you implement a Python function that can receive integers in the kwargs mapping?
>
>
>
No, you can't. The Python evaluation loop handles calling functions defined in Python code differently from calling a callable object defined in C code. The Python evaluation loop code that handles keyword argument expansion has firmly closed the door on non-string keyword arguments.
But `SimpleNamespace` is not a Python-defined callable, it is defined [entirely in C code](https://github.com/python/cpython/blob/v3.6.2/Objects/namespaceobject.c). It accepts keyword arguments directly, without any validation, which is why you can pass in a dictionary with non-string keyword arguments.
That's perhaps a bug; you are supposed to use the [C-API argument parsing functions](https://docs.python.org/3/c-api/arg.html), which all do guard against non-string keyword arguments. `SimpleNamespace` was initially designed just as a container for the [`sys.implementation` data](https://docs.python.org/3/library/sys.html#sys.implementation)\*, and wasn't really designed for other uses.
There might be other such exceptions, but they'll all be C-defined callables, not Python functions.
---
\* The [`time.get_clock_info()` method](https://docs.python.org/3/library/time.html#time.get_clock_info) also returns an instance of the `SimpleNamespace` class; it's the only other place the type is currently used.
|
No, kwargs cannot be integers. This answer, however, is designed as a (very) short history lesson rather than a technical answer (for that, please see @MartijnPieters' answer).
The check was originally added in 2010, in [issue 8419](https://bugs.python.org/issue8419) ([commit fb88636199c12f63d6c8c89f311cdafc91f30d2f](https://github.com/python/cpython/commit/fb88636199c12f63d6c8c89f311cdafc91f30d2f)) for Python 3.2 (and I believe Python 2.7.4 as well, but don't quote me on that), and simply checked that call kwargs were strings (raising a `TypeError` if they weren't). It also added `PyArg_ValidateKeywordArguments` to the C API, which simply performs the above check.
In 2017, [issue 29951](https://bugs.python.org/issue29951) changed the error text for Python 3.7 from "keyword arguments must be strings" to "keywords must be strings" in [PR 916](https://github.com/python/cpython/pull/916) ([commit 64c8f705c0121a4b45ca2c3bc7b47b282e9efcd8](https://github.com/python/cpython/commit/64c8f705c0121a4b45ca2c3bc7b47b282e9efcd8)). The error remained a `TypeError` and the change did **not** alter the behaviour in any way; it was simply a small change to the error message.
| 9,787
|
71,277,152
|
```
"127.0.0.1": {
"nmaprun": {
"@scanner": "nmap",
"@args": "nmap -v -sS -sV -sC -A -O -oX nmap 127.0.0.1 1-1024",
"@start": "1645467733",
"@startstr": "Mon Feb 21 23:52:13 2022",
"@version": "7.91",
"@xmloutputversion": "1.05",
"scaninfo": {
"@type": "syn",
"@protocol": "tcp",
"@numservices": "1000",
"@services": "1,3-4,6-7,9,13,17,19-26,
},
"verbose": {
"@level": "1"
},
"debugging": {
"@level": "0"
},
"runstats": {
"finished": {
"@time": "1645467744",
"@timestr": "Mon Feb 21 23:52:24 2022",
"@summary": "Nmap done at Mon Feb 21 23:52:24 2022; 1 IP address (1 host up) scanned in 12.14 seconds",
"@elapsed": "12.14",
"@exit": "success"
}
}
}
}
}
```
I have this scan output from Nmap. I want to parse the entire JSON and replace the '@' character with ''.
How can I do this in Python?
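A minimal sketch of one way to do this, walking the parsed structure recursively (names are illustrative):
```
import json

def strip_at(obj):
    # rebuild dicts with the leading '@' dropped from every key
    if isinstance(obj, dict):
        return {key.lstrip('@'): strip_at(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [strip_at(item) for item in obj]
    return obj

cleaned = strip_at(json.loads(raw_json))  # raw_json holds the scan output shown above
```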
|
2022/02/26
|
[
"https://Stackoverflow.com/questions/71277152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13668217/"
] |
I would guess that `Contest` does not implement `IEquatable<Contest>` or otherwise override `Equals(object)`. Therefore `Assert.Equal` will only work when the objects are reference equal. If you don't want to do this, you can create your own `IEqualityComparer<Contest>` and use the overload `Assert.Equal(expectedRanking, input, equalityComparer)`. Best of luck
|
I'd suggest to use the [Fluent Assertions](https://fluentassertions.com/introduction) library for this kind of assert checks in unit tests. Instead of rigid equality, you are looking for *equivalency*.
Then your assert becomes:
```cs
input.Should().BeEquivalentTo(expectedRanking);
```
and this doesn't look for references, only values!
| 9,789
|
9,921,976
|
I have a list iteration in Python defined like this:
```
for i in range(5):
    for j in range(5):
        if i != j:
            print i, j
```
So for each element in my defined range (0..4) I want to get each element i, but also all the other elements which are not i.
This code does exactly what I expect, but is there a cleaner way of doing it?
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9921976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220022/"
] |
Use [`itertools.permutations`](http://docs.python.org/library/itertools.html#itertools.permutations):
```
import itertools as it
for i, j in it.permutations(range(5), 2):
    print i, j
```
|
```
[(x, y) for x in range(5) for y in range(5) if x != y]
```
| 9,790
|
48,339,383
|
My Python script does not run under my crontab, but when I try to run it from the terminal it works perfectly. I have placed this at the top of the Python script:
```
#!/usr/bin/python
```
Also I tried:
```
#!/usr/bin/env python
```
I made my file executable:
```
chmod a+x vida.py
```
I added it to my crontab and added PATH:
```
USER=gg
SHELL=/bin/sh
PATH=/usr/local/sbin/:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin/:/home/gg/DC.bin/:/home/gg/GNSSMET/DC/:usr/bin:/usr/bin/X11:/:/home/gg/GNSSMET/DC/bin/:/home/gg/:/usr/lib/python2.7/:
PYTHONPATH=/usr/bin/:/usr/lib/python2.7/
*/1 * * * * gg /usr/bin/python /home/gg/vida.py 2>&1 >>/home/gg/out1.txt
```
I checked the log via `grep CRON /var/log/syslog`
```
Jan 19 13:37:01 gg-pc CRON[26500]: (gg) CMD ( /usr/bin/python /home/gg/vida.py 2>&1 >>/home/gg/out1.txt)
```
I even ran a dummy Python script from crontab and it worked like a charm (a simple Hello, World!). But when it comes to my script, the output file `out1.txt` is created (but empty) and the actual script does not run. I have checked all of the solutions presented on StackOverflow; none worked.
So here is my python script:
```
#!/usr/bin/env python
from datetime import *
import os
import sys

gamitRinexDir = '/home/gg/GAMIT/rinex'
stalist = ['ankr','argi','aut1','beug','brst','bucu','busk','ganm','gism','glsv','gmlk','gope','hofn','ingl','ista','joze',
           'kiru','krcy','ktvl','mas1','mate','mets','mkps','morp','nico','onsa','orhn','orid','pdel','penc','polv','pots','puyv',
           'sofi','vis0','vlns','wtzr','yebe','zeck','zimm']
letlist = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X']
seslist = ['0','1','2','3','4','5','6','7','8','9']

tnow = datetime.now()
dback = timedelta(hours=2)
tnow = tnow - dback
wlength = 4
os.system('rm ' + gamitRinexDir + '/*')
wlett = []
updir = []
doylist = []
yrlist = []
for i in range(wlength):
    delta = timedelta(hours=i+1)
    tback = tnow - delta
    wlett.append(letlist[tback.hour])
    doystr = 'doy ' + str(tnow.year) + ' ' + str(tnow.month) + ' ' + str(tnow.day) + ' ' + '> /home/gg/sil.sil'
    os.system(doystr)
    fid = open('/home/gg/sil.sil')
    line = fid.readline().split()
    doynum = '%03d' % (int(line[5]))
    x = str(tnow.year)
    yrnum = x[2:4]
    updir.append(yrnum + doynum)
    doylist.append(doynum)
    yrlist.append(yrnum)

dirname = '/home/gg/REPO/nrtdata/'
for i in range(len(wlett)):
    adirname = dirname + updir[i] + '/' + wlett[i]
    for sta in stalist:
        fname = adirname + '/' + sta + doylist[i] + wlett[i].lower() + '.' + yrlist[i] + 'd.Z'
        fname2 = gamitRinexDir + '/' + sta + doylist[i] + seslist[i] + '.' + yrlist[i] + 'd.Z'
        os.system('cp ' + fname + ' ' + fname2)

udoy = list(set(doylist))
dcmd = ''
for gun in udoy:
    dcmd = dcmd + gun + ' '
CmdGamit = 'sh_gamit -d ' + x + ' ' + dcmd + ' ' + '-orbit IGSU -expt expt -eops usnd -gnss G -nopngs -metutil Z'
print(CmdGamit)
mainCmd = 'cd /home/gg/GAMIT/;' + CmdGamit
os.system(mainCmd)

filestocopy1 = 'met_*'
filestocopy2 = 'hexpta.*'
filestocopy3 = 'uexpt*'
ndirname = ' /home/gg/REPO_GAMIT/' + doynum + '_' + wlett[-1]
os.system('mkdir ' + ndirname)
cleancmd1 = 'mv /home/gg/GAMIT/' + doynum + '/' + filestocopy1 + ' ' + ndirname
cleancmd2 = 'mv /home/gg/GAMIT/' + doynum + '/' + filestocopy2 + ' ' + ndirname
cleancmd3 = 'mv /home/gg/GAMIT/' + doynum + '/' + filestocopy3 + ' ' + ndirname
cleancmd4 = 'rm -r /home/gg/GAMIT/' + doynum
os.system(cleancmd1)
os.system(cleancmd2)
os.system(cleancmd3)
os.system(cleancmd4)
```
Please show me some pointers, I am seriously stuck here.
|
2018/01/19
|
[
"https://Stackoverflow.com/questions/48339383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9239574/"
] |
You should change your crontab line as follows to get `stdout` and `stderr` saved to the file:
```
*/1 * * * * gg /usr/bin/python /home/gg/vida.py >> /home/gg/out1.txt 2>&1
```
Simply read `out1.txt` after crontab has run the line to see what's wrong
Edit after your comment:
Based on the error you've shared, I believe you're not actually writing anything in the `/home/gg/sil.sil` file:
```
doystr = 'doy ' + str(tnow.year) + ' ' + str(tnow.month) + ' ' + str(tnow.day) + ' ' + '> /home/gg/sil.sil'
os.system(doystr)
```
`doystr` does not evaluate to a shell command that writes anything to the file; I think you need to `echo` the values, as below, for them to actually be written:
```
doystr = 'echo "doy ' + str(tnow.year) + ' ' + str(tnow.month) + ' ' + str(tnow.day) + '" ' + '> /home/gg/sil.sil'
```
|
Syntax: `minutes hour dom mon dow user command`
For example: `55 16 * * * root /root/anaconda/bin/python /root/path/file_name.py &>> /root/output/output.log`
| 9,791
|
65,699,603
|
I want to use some functions from ximgproc, so I uninstalled opencv-python and re-installed opencv-contrib-python:
```
(venv) C:\Users\Administrator\PycharmProjects\eps>pip uninstall opencv-contrib-python opencv-python
Skipping opencv-contrib-python as it is not installed.
Uninstalling opencv-python-4.5.1.48:
Would remove:
c:\users\administrator\pycharmprojects\eps\venv\lib\site-packages\cv2\*
c:\users\administrator\pycharmprojects\eps\venv\lib\site-packages\opencv_python-4.5.1.48.dist-info\*
Proceed (y/n)? y
Successfully uninstalled opencv-python-4.5.1.48
(venv) C:\Users\Administrator\PycharmProjects\eps>pip install opencv-contrib-python
Collecting opencv-contrib-python
Using cached https://files.pythonhosted.org/packages/01/07/4da5e3a2262c033bd10d7f6275b6fb6c77396b2058cedd02e61dbcc4a56d/opencv_contrib_python-4.5.1.48-cp36-cp36m-win_amd64.whl
Requirement already satisfied: numpy>=1.13.3 in c:\users\administrator\pycharmprojects\eps\venv\lib\site-packages (from opencv-contrib-python) (1.18.2)
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-4.5.1.48
```
However, I still could not import ximgproc. Is there anything I forgot?
|
2021/01/13
|
[
"https://Stackoverflow.com/questions/65699603",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11702897/"
] |
Can you check which packages you have installed with `pip list`?
Even though you uninstalled opencv-python, you also need to uninstall the `opencv-python-headless` and `opencv-contrib-python-headless` packages. As @skvark says:
>
> never install both packages
>
>
>
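If the headless variants are present, a cleanup along these lines should leave only the contrib build (a sketch assuming a plain pip setup; pip simply warns about packages that aren't installed):
```
pip uninstall -y opencv-python opencv-python-headless opencv-contrib-python opencv-contrib-python-headless
pip install opencv-contrib-python
```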
|
Install both the opencv and opencv-contrib modules.
Command for a standard desktop environment:
```
$ pip install opencv-python opencv-contrib-python
```
| 9,792
|
13,262,575
|
I am making a turn-based game using Python 3. I want two characters (hero & foe) to attack, pause for a duration based on a random roll plus speed, then attack again if they are still alive.
The problem I am running into is that `time.sleep` freezes both processes, not one or the other. Any suggestions to make this work effectively?
```
from multiprocessing import Process
import time
import random
def timing1():
speed=60#SPEED IS NORMALLY A KEY FROM LIST, USING 60 FOR EXAMPLE
sleeptime=36/((random.randint(1,20)+speed)/5)
print (sleeptime)
time.sleep(sleeptime)
input('HERO ACTION')
def timing2():
speed=45
sleeptime=36/((random.randint(1,20)+speed)/5)
print (sleeptime)
time.sleep(sleeptime)
input('FOE ACTION')
if __name__ == '__main__':
p1=Process(target=timing1)
p1.start()
p2=Process(target=timing2)
p2.start()
p1.join()
p2.join()
```
|
2012/11/07
|
[
"https://Stackoverflow.com/questions/13262575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1804903/"
] |
Edit the [netbeans.conf](http://wiki.netbeans.org/FaqNetbeansConf) file (located in the /etc folder of your NetBeans installation), look for the line that starts with "netbeans\_default\_options=". Edit the fontsize parameter if present. If not, add something like "--fontsize 11" (without the quote) at the end of the line.
[Source](http://wiki.netbeans.org/FaqFontSize)
|
How about using a screen magnifier?
on Windows 7, [Sysinternals ZoomIt](https://technet.microsoft.com/en-us/sysinternals/zoomit.aspx) works fine.
Nearly every Linux desktop has one as well.
| 9,793
|
74,327,541
|
FastCGI is not working properly in a Django deployment on IIS (Windows Server):
```
HTTP Error 500.0 - Internal Server Error
C:\Users\satish.pal\AppData\Local\Programs\Python\Python310\python.exe - The FastCGI process exited unexpectedly
Most likely causes:
β’IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred.
β’IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
β’IIS was not able to process configuration for the Web site or application.
β’The authenticated user does not have permission to use this DLL.
β’The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
Things you can try:
β’Ensure that the NTFS permissions for the web.config file are correct and allow access to the Web server's machine account.
β’Check the event logs to see if any additional information was logged.
β’Verify the permissions for the DLL.
β’Install the .NET Extensibility feature if the request is mapped to a managed handler.
β’Create a tracing rule to track failed requests for this HTTP status code. For more information about creating a tracing rule for failed requests, click here.
Detailed Error Information:
Module
FastCgiModule
Notification
ExecuteRequestHandler
Handler
fastcgiModule
Error Code
0x00000001
Requested URL
http://10.0.0.5:8097/
Physical Path
C:\inetpub\wwwroot\hcm.ariespro.com
Logon Method
Anonymous
Logon User
Anonymous
More Information:
This error means that there was a problem while processing the request. The request was received by the Web server, but during processing a fatal error occurred, causing the 500 error.
View more information Β»
Microsoft Knowledge Base Articles:
β’294807
```
I have tried everything from granting AppPool permissions to changing versions of Python and wfastcgi, but nothing is working for me.
The project works just fine on the Django development server.
I have also deployed it using Nginx and Waitress on the Windows server, but I need it to work with IIS.
Please help me out, at any cost.
|
2022/11/05
|
[
"https://Stackoverflow.com/questions/74327541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17904860/"
] |
As I see from your code, whenever you get an image from `imagePickerController` you store it in the variable `self.image`. Then whenever you click Done you just upload this `self.image`.
Make the variable `self.image` optional so it can be nil, then remember to reset it to nil after a successful upload.
Code will be like this
```swift
var image : UIImage? = nil
@IBAction func updateProfile(_ sender: UIButton) {
uploadPic(arg: true, completion: { (success) -> Void in
if success {
addUrlToFirebaseProfile()
self.image = nil // reset image to nil if success
} else {
}
})
}
```
|
You are setting `self.image` if the user selects a photo.
But you are not *unsetting* `self.image` if the user *doesn't* select a photo. It needs to be set to `nil` (not to an empty `UIImage()`).
| 9,803
|
59,705,956
|
I'm working with `tensorflow-gpu` version `2.0.0` and **I have installed gpu driver and CUDA and cuDNN** (`CUDA version 10.1.243_426` and `cuDNN v7.6.5.32` and I'm using windows!)
When I compile my model or run:
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
It will print out:
```
2020-01-12 19:56:50.961755: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-01-12 19:56:50.974003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-01-12 19:56:51.628299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
2020-01-12 19:56:51.636256: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-01-12 19:56:51.642106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-01-12 19:56:52.386608: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-12 19:56:52.393162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2020-01-12 19:56:52.396516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2020-01-12 19:56:52.400632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 1356 MB memory) -> physical GPU (device: 0, na
me: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 1008745203605650029
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 1422723891
locality {
bus_id: 1
links {
}
}
incarnation: 18036547379173389852
physical_device_desc: "device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1"
]
```
This says TensorFlow is going to use the GPU device for sure! But when I run my model, I can see that the GPU isn't doing anything!
[](https://i.stack.imgur.com/QDeEL.png)
However, you can see that part of the GPU memory is being used, and I can even see a GPU activity entry, which is my program!
[](https://i.stack.imgur.com/cK50D.png)
What's going on?! Am I doing something wrong?! I have searched a lot and have checked a lot of questions in SO but nobody asked such a question!
|
2020/01/12
|
[
"https://Stackoverflow.com/questions/59705956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8342406/"
] |
Taken from the official documentation of TensorFlow.
```
import tensorflow as tf
tf.debugging.set_log_device_placement(True)
# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
```
If you run the code above (which should run on your GPU if your GPU is visible to TensorFlow), then your training will run on the GPU.
You must see an output like this one:
>
> Executing op MatMul in device
> /job:localhost/replica:0/task:0/device:GPU:0 tf.Tensor( [[22. 28.]
> [49. 64.]], shape=(2, 2), dtype=float32)
>
>
>
Also, you can see that you have a surge in the dedicated GPU memory usage in the Task Manager --> it appears that your GPU is being used, but for certainty run the code above.
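As a side check, you can also ask TensorFlow directly which GPUs it sees (a sketch for TF 2.0, where this call still lives under `experimental`; later releases expose `tf.config.list_physical_devices`):
```
import tensorflow as tf

# prints a non-empty list such as [PhysicalDevice(name='/physical_device:GPU:0', ...)]
# when the GPU is visible to TensorFlow
print(tf.config.experimental.list_physical_devices('GPU'))
```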
|
I have also noticed that Windows Task Manager is not useful for monitoring (dual) GPU activity. Try installing TechPowerUp GPU-Z (I am running dual NVIDIA cards). It monitors CPU and GPU activity, power, and temperatures.
| 9,804
|
23,968,716
|
I am using the following code to get a remote PC's CPU usage percentage, which is slow and loads the remote PC because of the SSHing:
```
per=(subprocess.check_output('ssh root@192.168.32.218 nohup python psutilexe.py',stdin=None,stderr=subprocess.STDOUT,shell=True)).split(' ')
print 'CPU %=',float(per[0])
print 'MEM %=',float(per[1])
```
where `psutilexe.py` is as follows:
```
import psutil
print psutil.cpu_percent(), psutil.virtual_memory()[2]
```
Would you please let me know if there is an alternative way to measure a remote PC's CPU usage percentage using Python?
|
2014/05/31
|
[
"https://Stackoverflow.com/questions/23968716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3693882/"
] |
I would suggest taking a look at Glances. It's written in Python and can also be used for remote server monitoring:
<https://github.com/nicolargo/glances>
Using glances on remote server:
<http://mylinuxbook.com/glances-an-all-in-one-system-monitoring-tool/>
|
You don't need a custom Python script, since you can [have CPU usage directly with `top`](https://stackoverflow.com/a/9229692/240613), (or [with `sysstat`](https://stackoverflow.com/a/9229396/240613), if installed).
Have you **profiled** your app? Is it the custom script which is making it slow, or the SSHing itself? If it's the SSHing, then:
* Consider logging in once if you're getting multiple values, or:
* Consider using a message queue instead of SSHing: monitored machines constantly send their CPU usage to a message-queue service, which is listened to by the machine in charge of gathering the results (see the sketch below).
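A minimal sketch of that push-based idea, using plain UDP instead of a full message-queue service (`COLLECTOR_HOST` and `COLLECTOR_PORT` are hypothetical names for the gathering machine):
```
import json
import socket
import time

import psutil

COLLECTOR_HOST = '192.168.32.1'  # assumption: address of the gathering machine
COLLECTOR_PORT = 9999            # assumption: any free UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    payload = json.dumps({
        'cpu': psutil.cpu_percent(interval=1),   # CPU % over the last second
        'mem': psutil.virtual_memory().percent,  # memory % in use
    })
    sock.sendto(payload.encode(), (COLLECTOR_HOST, COLLECTOR_PORT))
    time.sleep(5)  # report every 5 seconds instead of SSHing on demand
```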
| 9,805
|
50,547,218
|
Why does the Python code below crash my website, while the code at the very bottom does not?
Here is the code that crashes the website:
```
from django.urls import path, include
from django.contrib import admin
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('learning_logs.urls')),
]
```
Here is the code that does not crash:
```
from django.contrib import admin
from django.urls import path
urlpatterns = [
path('admin/', admin.site.urls),
]
```
Thank you
|
2018/05/26
|
[
"https://Stackoverflow.com/questions/50547218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9852457/"
] |
You have a space between `url` and `patterns`. It should be all one word `urlpatterns`.
If you ever need to check the code for any of the other exercises in that book, they're all on github [here](https://github.com/ehmatthes/pcc).
|
I got to this point in the Crash Course; this area will break your site, temporarily. You will not have created all the files referenced in the code yet. In this case, you haven't made the urls.py file in learning\_logs. After this is made, you still will not have updated your views.py nor made your index.html template. Keep going; it should be resolved later. See also: <https://ehmatthes.github.io/pcc/chapter_18/README.html#updates>
| 9,807
|
73,793,403
|
I frequently need to generate similar-looking Excel sheets for humans to read. Background colors and formatting should be similar. I'd like to be able to read a template into Python and have the values and cells filled in by Python.
It does not appear that xlsxwriter can read background color and formatting. It can output formatting, but it's taking a long time to code one template in manually.
openpyxl does not appear to have that function either.
I'm looking for a solution that would be able to read through a worksheet and say "A1 has a background color of red (or hex value), is bold, and has 'this is a template' in it." Does such a module exist?
|
2022/09/20
|
[
"https://Stackoverflow.com/questions/73793403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5775112/"
] |
Fill color is `fgColor`; per the OOXML spec, "For solid cell fills (no pattern), fgColor is used".
You can get the color from about three attributes, and all should provide the same hex value, unless the fill is grey, in which case the index/value is 0 and the shade of grey is determined by the tint:
```
for cell in ws['A']:
print(cell)
if cell.fill.fgColor.index != 0:
print(cell.fill.fgColor.index)
print(cell.fill.fgColor.rgb)
print(cell.fill.fgColor.value)
else:
print(cell.fill.fgColor.tint)
print(cell.fill.patternType)
print("-------------")
```
|
There were no openpyxl answers I could find about reading existing background-color formatting. The answers I did find were about formatting the cell as things like percentage or currency.
Here is a solution I've put together for background cell color from the openpyxl documentation, though fill color was not explicit in what I read.
```py
from openpyxl import load_workbook
wb = load_workbook("test.xlsx") # your specific template workbook name
ws = wb["Sheet1"] # your specific sheet name
style_dictionary = {}
for row in ws.rows:
for cell in row:
style_dictionary[cell] = cell.fill
style_dictionary
```
The background color will appear under the `rgb` parameter as an ARGB hex string (e.g. `FFFF0000` for red).
I'm hopeful this dictionary can be used to template the background and pattern fill for other worksheets but I haven't gotten that far yet.
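A hedged sketch of that next step, copying the captured fills (plus fonts) onto a second sheet. It assumes a reasonably recent openpyxl where `cell.column` is an integer, and the sheet name "Copy" is just an illustration; openpyxl style objects are shared, so they must be copied before reassignment:
```py
from copy import copy
from openpyxl import load_workbook

wb = load_workbook("test.xlsx")
src = wb["Sheet1"]
dst = wb.create_sheet("Copy")  # hypothetical target sheet

for row in src.rows:
    for cell in row:
        target = dst.cell(row=cell.row, column=cell.column)
        target.value = cell.value
        target.fill = copy(cell.fill)  # copy, don't share, style objects
        target.font = copy(cell.font)

wb.save("test_copy.xlsx")
```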
| 9,808
|
55,095,983
|
I'm having some trouble with the `replace()` function in python. Here is my code :
```
string = input()
word = string.find('word')
if word >= 1:
string = string.replace('word', 'word.2')
print(string)
```
The output gives `word`. Shouldn't it be `word.2`?
I'm confused. Any help?
Edit: After playing around with the issue for a bit, I've found that the question is now "Why is `string.find('word')` equal to 0 for input `word`?
|
2019/03/11
|
[
"https://Stackoverflow.com/questions/55095983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11182783/"
] |
Instead of
```
word >= 1
```
write
```
word >= 0
```
`string.find()` returns the index of the first occurrence of the word. If your string is `'word'` and you search for `'word'`, it returns 0, because `'word'` occurs at index 0.
In Python, indices start at 0: the first character in a string is at index 0.
Therefore, `'word'` in `'word'` is at the first location, i.e. 0.
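A quick demonstration:
```
s = 'word'
print(s.find('word'))    # 0  -> found, at index 0
print(s.find('absent'))  # -1 -> not found
# so the "found" test should be s.find('word') >= 0,
# or more idiomatically: 'word' in s
```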
|
There is no need to use the find function, just do:
```
string = input()
string = string.replace('word', 'word.2')
```
But nevertheless, when I run it in Python 3, your code is correct ;-)
What does your input look like?
| 9,809
|
54,093,253
|
I've been trying to work with BeautifulSoup because I want to try to scrape a webpage (<https://www.imdb.com/search/title?release_date=2017&sort=num_votes,desc&page=1>). So far I have scraped some elements with success, but now I want to scrape a movie description and I've been struggling. The description is situated like this in the HTML:
```
<div class="lister-item mode-advanced">
<div class="lister-item-content">
<p class="muted-text"> paragraph I don't need</p>
<p class="muted-text"> paragraph I need</p>
</div>
</div>
```
I want to scrape the second paragraph, which seemed easy to do, but everything I tried gave me 'None' as output. I've been digging around to find an answer. In another Stack Overflow post I found that
```
find('p:nth-of-type(1)')
```
or
```
find_elements_by_css_selector('.lister-item-mode >p:nth-child(1)')
```
could do the trick but it still gives me
```
none #as output
```
Below you can find a piece of my code; it's a bit rough because I'm just trying things out to learn.
```
import urllib2
from bs4 import BeautifulSoup
from requests import get
url = 'http://www.imdb.com/search/title?release_date=2017&sort=num_votes,desc&page=1'
response = get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')
type(html_soup)
movie_containers = html_soup.find_all('div', class_='lister-item mode-advanced')
first_movie = movie_containers[0]
first_title = first_movie.h3.a.text
print first_title
first_year = first_movie.h3.find('span', class_='lister-item-year text-muted unbold')
first_year = first_year.text
print first_year
first_imdb = float(first_movie.strong.text)
print first_imdb
# !!!! problem zone ---------------------------------------------
first_description = first_movie.find('p', class_='muted-text')
#first_description = first_description.text
print first_description
```
the above code gives me this output:
```
$ python scrape.py
Logan
(2017)
8.1
None
```
I would like to learn the correct method of selecting html tags because it will be useful to know for future projects.
|
2019/01/08
|
[
"https://Stackoverflow.com/questions/54093253",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9441232/"
] |
>
> [find\_all()](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all) method looks through a tagβs descendants and retrieves
> all descendants that match your filters.
>
>
>
You can then use the list's index to get the element you need. Index starts at 0, so 1 will give the second item.
Change the first\_description to this.
```
first_description = first_movie.find_all('p', {"class":"text-muted"})[1].text.strip()
```
Full code
```
import urllib2
from bs4 import BeautifulSoup
from requests import get
url = 'http://www.imdb.com/search/title?release_date=2017&sort=num_votes,desc&page=1'
response = get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')
type(html_soup)
movie_containers = html_soup.find_all('div', class_='lister-item mode-advanced')
first_movie = movie_containers[0]
first_title = first_movie.h3.a.text
print first_title
first_year = first_movie.h3.find('span', class_='lister-item-year text-muted unbold')
first_year = first_year.text
print first_year
first_imdb = float(first_movie.strong.text)
print first_imdb
# !!!! problem zone ---------------------------------------------
first_description = first_movie.find_all('p', {"class":"text-muted"})[1].text.strip()
#first_description = first_description.text
print first_description
```
Output
```
Logan
(2017)
8.1
In the near future, a weary Logan cares for an ailing Professor X. However, Logan's attempts to hide from the world and his legacy are upended when a young mutant arrives, pursued by dark forces.
```
Read the [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to learn the correct method of selecting html tags.
Also consider moving to python 3.
|
Just playing around with `.next_sibling`, I was able to get it. There's probably a more elegant way, though. At least it might give you a start/some direction.
```
from bs4 import BeautifulSoup
html = '''<div class="lister-item mode-advanced">
<div class="lister-item-content>
<p class="muted-text"> paragraph I don't need</p>
<p class="muted-text"> paragraph I need</p>
</div>
</div>'''
soup = BeautifulSoup(html, 'html.parser')
first_p = soup.find('div',{'class':'lister-item mode-advanced'}).text.strip()
second_p = soup.find('div',{'class':'lister-item mode-advanced'}).next_sibling.next_sibling.text.strip()
print (second_p)
```
**Output:**
```
print (second_p)
paragraph I need
```
| 9,812
|
64,754,032
|
I am trying to use SageMaker script mode for training a model on image data. I have multiple scripts for data preparation, model creation, and training. This is the content of my working directory:
```
WORKDIR
|-- config
| |-- hyperparameters.json
| |-- lossweights.json
| `-- lr.json
|-- dataset.py
|-- densenet.py
|-- resnet.py
|-- models.py
|-- train.py
|-- imagenet_utils.py
|-- keras_utils.py
|-- utils.py
`-- train.ipynb
```
The training script is `train.py` and it makes use of other scripts. To run the training script, I'm using the following code:
```py
bucket='ashutosh-sagemaker'
data_key = 'training'
data_location = 's3://{}/{}'.format(bucket, data_key)
print(data_location)
inputs = {'data':data_location}
print(inputs)
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(entry_point='train.py',
role=role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
framework_version='1.14',
py_version='py3',
script_mode=True,
hyperparameters={
'epochs': 10
}
)
estimator.fit(inputs)
```
On running this code, I get the following output:
```
2020-11-09 10:42:07 Starting - Starting the training job...
2020-11-09 10:42:10 Starting - Launching requested ML instances......
2020-11-09 10:43:24 Starting - Preparing the instances for training.........
2020-11-09 10:44:43 Downloading - Downloading input data....................................
2020-11-09 10:51:08 Training - Downloading the training image...
2020-11-09 10:51:40 Uploading - Uploading generated training model
Traceback (most recent call last):
File "train.py", line 5, in <module>
from dataset import WatchDataSet
ModuleNotFoundError: No module named 'dataset'
WARNING: Logging before flag parsing goes to stderr.
E1109 10:51:37.525632 140519531874048 _trainer.py:94] ExecuteUserScriptError:
Command "/usr/local/bin/python3.6 train.py --epochs 10 --model_dir s3://sagemaker-ap-northeast-1-485707876195/tensorflow-training-2020-11-09-10-42-06-234/model"
2020-11-09 10:51:47 Failed - Training job failed
```
What should I do to remove the `ModuleNotFoundError`? I tried to look for solutions but didn't find any relevant resources.
The contents of `train.py` file:
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dataset import WatchDataSet
from models import BCNN
from utils import image_generator, val_image_generator
from utils import BCNNScheduler, LossWeightsModifier
from utils import restore_checkpoint, get_epoch_key
import argparse
from collections import defaultdict
import json
import keras
from keras import backend as K
from keras import optimizers
from keras.backend import tensorflow_backend
from keras.callbacks import LearningRateScheduler, TensorBoard
from math import ceil
import numpy as np
import os
import glob
from sklearn.model_selection import train_test_split
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=100, help='number of epoch of training')
parser.add_argument('--batch_size', type=int, default=32, help='size of the batches')
parser.add_argument('--data', type=str, default=os.environ.get('SM_CHANNEL_DATA'))
opt = parser.parse_args()
def main():
csv_config_dict = {
'csv': opt.data + 'train.csv',
'image_dir': opt.data + 'images',
'xlabel_column': opt.xlabel_column,
'brand_column': opt.brand_column,
'model_column': opt.model_column,
'ref_column': opt.ref_column,
'encording': opt.encoding
}
dataset = WatchDataSet(
csv_config_dict=csv_config_dict,
min_data_ref=opt.min_data_ref
)
X, y_c1, y_c2, y_fine = dataset.X, dataset.y_c1, dataset.y_c2, dataset.y_fine
brand_uniq, model_uniq, ref_uniq = dataset.brand_uniq, dataset.model_uniq, dataset.ref_uniq
print("ref. shape: ", y_fine.shape)
print("brand shape: ", y_c1.shape)
print("model shape: ", y_c2.shape)
height, width = 224, 224
channel = 3
# get pre-trained weights
if opt.mode == 'dense':
WEIGHTS_PATH = 'https://github.com/keras-team/keras-applications/releases/download/densenet/densenet121_weights_tf_dim_ordering_tf_kernels.h5'
elif opt.mode == 'res':
WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5'
weights_path, current_epoch, checkpoint = restore_checkpoint(opt.ckpt_path, WEIGHTS_PATH)
# split train/validation
y_ref_list = np.array([ref_uniq[np.argmax(i)] for i in y_fine])
index_list = np.array(range(len(X)))
train_index, test_index, _, _ = train_test_split(index_list, y_ref_list, train_size=0.8, random_state=23, stratify=None)
print("Train")
model = None
bcnn = BCNN(
height=height,
width=width,
channel=channel,
num_classes=len(ref_uniq),
coarse1_classes=len(brand_uniq),
coarse2_classes=len(model_uniq),
mode=opt.mode
)
if __name__ == '__main__':
main()
```
|
2020/11/09
|
[
"https://Stackoverflow.com/questions/64754032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7697327/"
] |
If you don't mind switching from TF 1.14 to TF 1.15.2+, you'll be able to bring a local code directory containing your custom modules to your SageMaker TensorFlow Estimator via the argument `source_dir`. Your entry point script shall be in that `source_dir`. Details in the SageMaker TensorFlow doc: <https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html#use-third-party-libraries>
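A sketch of what that could look like for the setup in the question (the framework version and the use of `'.'` as `source_dir` are assumptions; the point is that `entry_point` becomes relative to `source_dir`, and the whole directory, including dataset.py, models.py, and utils.py, is shipped with the job):
```
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='train.py',   # relative to source_dir
                       source_dir='.',           # directory holding train.py and its helper modules
                       role=role,
                       train_instance_count=1,
                       train_instance_type='ml.p2.xlarge',
                       framework_version='1.15.2',
                       py_version='py3',
                       script_mode=True,
                       hyperparameters={'epochs': 10})
estimator.fit(inputs)
```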
|
This isn't exactly what the questioner asked, but if anyone has come here wanting to know how to use custom libraries with SKLearn, you can pass `dependencies` as an argument, like in the following:
```
import sagemaker
from sagemaker.sklearn.estimator import SKLearn
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
model = SKLearn(
entry_point='training.py',
role=role,
instance_type='ml.m5.large',
sagemaker_session=sess,
dependencies=['my_custom_file.py']
)
```
| 9,814
|
62,097,219
|
I am trying to connect to Google Sheets' API from a Django view. The bulk of the code I have taken from this link:
<https://developers.google.com/sheets/api/quickstart/python>
Anyway, here are the codes:
**sheets.py** (Copy pasted from the link above, function renamed)
```
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
SAMPLE_RANGE_NAME = 'Class Data!A2:E'
def test():
"""Shows basic usage of the Sheets API.
Prints values from a sample spreadsheet.
"""
creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
service = build('sheets', 'v4', credentials=creds)
# Call the Sheets API
sheet = service.spreadsheets()
result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
range=SAMPLE_RANGE_NAME).execute()
values = result.get('values', [])
if not values:
print('No data found.')
else:
print('Name, Major:')
for row in values:
# Print columns A and E, which correspond to indices 0 and 4.
print('%s, %s' % (row[0], row[4]))
```
**urls.py**
```
urlpatterns = [
path('', views.index, name='index')
]
```
**views.py**
```
from django.http import HttpResponse
from django.shortcuts import render
from .sheets import test
# Views
def index(request):
test()
return HttpResponse('Hello world')
```
All the view function does is call the `test()` method from the **sheets.py** module. Anyway, when I run my server and go to the URL, another tab opens up for the Google OAuth2 flow, which means that the credentials file is detected and everything. However, in this tab, the following error message is displayed by Google:
```
Error 400: redirect_uri_mismatch The redirect URI in the request, http://localhost:65262/, does not match the ones authorized for the OAuth client.
```
In my API console, I have the callback URL set exactly to `127.0.0.1:8000` to match my Django's view URL. I don't even know where the `http://localhost:65262/` URL comes from. Any help in fixing this? And can someone explain to me why this is happening? Thanks in advance.
**EDIT**
I tried removing the `port=0` in the flow method as mentioned in the comment; then the URL mismatch occurs with `http://localhost:8080/`, which is again pretty weird because my Django app is running on port 8000.
|
2020/05/30
|
[
"https://Stackoverflow.com/questions/62097219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5056347/"
] |
You shouldn't be using [Flow.run\_local\_server()](https://github.com/googleapis/google-auth-library-python-oauthlib/blob/v0.4.1/google_auth_oauthlib/flow.py#L408) unless you have no intention of deploying the code. This is because `run_local_server` launches a browser on the server to complete the flow.
This works just fine if you're developing the project locally for yourself.
If you're intent on using the local server to negotiate the OAuth flow, the redirect URI configured in your secrets must match it; the local server defaults to a host of [`localhost` and a port of `8080`](https://github.com/googleapis/google-auth-library-python-oauthlib/blob/v0.4.1/google_auth_oauthlib/flow.py#L409).
If you're looking to deploy the code, you must perform the flow via an exchange between the user's browser, your server and Google.
Since you have a Django server already running, you can use that to negotiate the flow.
For example,
Say there is a tweets app in a Django project with `urls.py` module as follows.
```
from django.urls import path, include
from . import views
urlpatterns = [
path('google_oauth', views.google_oath, name='google_oauth'),
path('hello', views.say_hello, name='hello'),
]
urls = include(urlpatterns)
```
You could implement a guard for views that require credentials as follow.
```py
import functools
import json
import urllib
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from django.shortcuts import redirect
from django.http import HttpResponse
SCOPES = ['https://www.googleapis.com/auth/userinfo.email', 'https://www.googleapis.com/auth/userinfo.profile', 'openid']
def provides_credentials(func):
@functools.wraps(func)
def wraps(request):
# If OAuth redirect response, get credentials
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES,
redirect_uri="http://localhost:8000/tweet/hello")
existing_state = request.GET.get('state', None)
current_path = request.path
if existing_state:
secure_uri = request.build_absolute_uri(
).replace('http', 'https')
location_path = urllib.parse.urlparse(existing_state).path
flow.fetch_token(
authorization_response=secure_uri,
state=existing_state
)
request.session['credentials'] = flow.credentials.to_json()
if location_path == current_path:
return func(request, flow.credentials)
# Head back to location stored in state when
# it is different from the configured redirect uri
return redirect(existing_state)
# Otherwise, retrieve credential from request session.
stored_credentials = request.session.get('credentials', None)
if not stored_credentials:
# It's strongly recommended to encrypt state.
# location is needed in state to remember it.
location = request.build_absolute_uri()
# Commence OAuth dance.
auth_url, _ = flow.authorization_url(state=location)
return redirect(auth_url)
# Hydrate stored credentials.
credentials = Credentials(**json.loads(stored_credentials))
        # If the credentials are expired, refresh them.
        if credentials.expired and credentials.refresh_token:
            credentials.refresh(Request())
# Store JSON representation of credentials in session.
request.session['credentials'] = credentials.to_json()
return func(request, credentials=credentials)
return wraps
@provides_credentials
def google_oauth(request, credentials):
return HttpResponse('Google OAUTH <a href="/tweet/hello">Say Hello</a>')
@provides_credentials
def say_hello(request, credentials):
# Use credentials for whatever
return HttpResponse('Hello')
```
Note that this is only an example. If you decide to go this route, I recommend looking into extracting the OAuth flow to its very own Django App.
|
The redirect URI tells Google the location you would like the authorization response to be returned to. This must be set up properly in the Google developer console to avoid anyone hijacking your client, and it must match exactly.
Go to the [Google developer console](https://console.developers.google.com/). Edit the client you are currently using and add the following as a redirect URI:
```
http://localhost:65262/
```
[](https://i.stack.imgur.com/W7NVf.png)
Tip click the little pencil icon to edit a client :)
TBH, while in development it's easier to just add the port that Google says you are calling from than to fiddle with the settings in your application.
| 9,815
|
24,204,582
|
I want to generate multiple streams of random numbers in python.
I am writing a program for simulating a queueing system and want one stream for the inter-arrival times and another stream for the service times, and so on.
`numpy.random()` generates random numbers from a global stream.
In matlab there is [something called RandStream](http://www.mathworks.com/help/matlab/ref/randstream.html) which enables me to create multiple streams.
Is there any way to create something like RandStream in Python?
|
2014/06/13
|
[
"https://Stackoverflow.com/questions/24204582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3429820/"
] |
Both NumPy and the built-in `random` module have instantiable generator classes.
For just `random`:
```
import random
random_generator = random.Random()
random_generator.random()
#>>> 0.9493959884174072
```
And for Numpy:
```
import numpy
random_generator = numpy.random.RandomState()
random_generator.uniform(0, 1, 10)
#>>> array([ 0.98992857, 0.83503764, 0.00337241, 0.76597264, 0.61333436,
#>>> 0.0916262 , 0.52129459, 0.44857548, 0.86692693, 0.21150068])
```
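Applied to the queueing use case in the question, a sketch with one seeded generator per stream (seeds chosen arbitrarily) keeps the inter-arrival and service streams reproducible and separate from each other:
```
import numpy

arrival_stream = numpy.random.RandomState(seed=12345)  # stream for inter-arrival times
service_stream = numpy.random.RandomState(seed=67890)  # stream for service times

interarrival_times = arrival_stream.exponential(scale=2.0, size=5)
service_times = service_stream.exponential(scale=1.5, size=5)
print(interarrival_times)
print(service_times)
```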
|
Veedrac's answer did not address how one might generate independent streams.
The best way I could find to generate independent streams is to use a replacement for numpy's RandomState. This is provided by the [RandomGen package](https://bashtage.github.io/randomgen/index.html).
It supports [independent random streams](https://bashtage.github.io/randomgen/parallel.html#independent-streams), but these use one of three random number generators: PCG64, ThreeFry or Philox. If you want to use the more conventional MT19937, you can rely on [jumping](https://bashtage.github.io/randomgen/parallel.html#jump-advance-the-prng-state) instead.
| 9,818
|
38,430,491
|
I'm writing a Python application that needs to fetch a Google document from Google Drive as markdown.
I'm looking for ideas for the design and existing open-source code.
As far as I know, Google doesn't provide export as markdown. I suppose this means I would have to figure out which of the available download/export formats is the best for converting to markdown.
The contents of the document is ensured to not contain anything that markdown doesn't support.
EDIT: I would like to avoid non python software to keep the setup as simple as possible.
|
2016/07/18
|
[
"https://Stackoverflow.com/questions/38430491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/852140/"
] |
You might want to take a look at [Pandoc](http://pandoc.org/ "Pandoc") which supports conversions i.e. from docx to markdown. There are several Python wrappers for Pandoc, such as [pypandoc](https://pypi.python.org/pypi/pypandoc/ "pypandoc").
After fetching a document from Google Drive in docx format, the conversion is as simple as:
```
import pypandoc
markdown_output = pypandoc.convert_file('Document.docx', 'markdown')
```
|
Google Drive offers a "Zipped HTML" export option.
[](https://i.stack.imgur.com/BosJ2.png)
Use the [Python module `html2text`](https://pypi.python.org/pypi/html2text) to convert the HTML into Markdown.
>
> html2text is a Python script that converts a page of HTML into clean, easy-to-read plain ASCII text. Better yet, that ASCII also happens to be valid Markdown (a text-to-HTML format).
>
>
>
```
>>> import html2text
>>>
>>> print(html2text.html2text("<p><strong>Zed's</strong> dead baby, <em>Zed's</em> dead.</p>"))
**Zed's** dead baby, _Zed's_ dead.
```
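Putting the two steps together, a minimal sketch that converts every HTML file inside the export (`export.zip` is a hypothetical filename for the downloaded archive):
```
import zipfile

import html2text

with zipfile.ZipFile('export.zip') as archive:
    for name in archive.namelist():
        if name.endswith('.html'):
            html = archive.read(name).decode('utf-8')
            print(html2text.html2text(html))
```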
| 9,827
|
9,753,885
|
I'd like to have the matplotlib "show" command return to the command line
while displaying the plot. Most other plot packages, like R, do this.
But pylab hangs until the plot window closes. For example:
```
import pylab
x = pylab.arange( 0, 10, 0.1)
y = pylab.sin(x)
pylab.plot(x,y, 'ro-')
pylab.show() # Python hangs here until the plot window is closed
```
I'd like to be able to view the plot while doing command line queries.
I'm running Debian squeeze with python 2.6.6.
My ~/.matplotlib/matplotlibrc contains
```
backend : GTKAgg
```
Thanks!
|
2012/03/17
|
[
"https://Stackoverflow.com/questions/9753885",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/647331/"
] |
Add `pylab.ion()` ([interactive mode](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion)) before the `pylab.show()` call. That will make the UI run in a separate thread and the call to `show` will return immediately.
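Applied to the snippet from the question, that would look something like this:
```
import pylab

pylab.ion()  # interactive mode: show() no longer blocks
x = pylab.arange(0, 10, 0.1)
y = pylab.sin(x)
pylab.plot(x, y, 'ro-')
pylab.show()  # returns immediately; the prompt stays usable
```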
|
You need to run it as
```
$ ipython --pylab
```
and run your code as
```
In [8]: x = arange(0,10,.1)
In [9]: y = sin(x)
In [10]: plot(x,y,'ro-')
Out[10]: [<matplotlib.lines.Line2D at 0x2f2fd50>]
In [11]:
```
This gives you the prompt for cases where you would want to modify other parts or plot more.
| 9,828
|
52,711,988
|
I'm having trouble using Pipenv on my Windows 10 machine. Initially, I got a timeout error while trying to run `pipenv install <module>` and following [this answer](https://stackoverflow.com/a/52509038/5535114), I disabled Windows Defender.
That got rid of the timeout error and it then seems to successfully install the package at **~/.virtualenvs** but I get an error when it comes to creating **Pipfile.lock**:
```
Adding flask to Pipfile's [packages]...
Pipfile.lock not found, creating...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
File "C:\Users\Edgar\AppData\Roaming\Python\Python36\site-packages\pipenv\utils.py", line 402, in resolve_deps
req_dir=req_dir
File "C:\Users\Edgar\AppData\Roaming\Python\Python36\site-packages\pipenv\utils.py", line 250, in actually_resolve_deps
req = Requirement.from_line(dep)
File "C:\Users\Edgar\AppData\Roaming\Python\Python36\site-packages\pipenv\vendor\requirementslib\models\requirements.py", line 704, in from_line
line, extras = _strip_extras(line)
TypeError: 'module' object is not callable
```
I've tried installing `requests` and `flask`, with the same results.
* **python**: Python 3.6.4 :: Anaconda, Inc.
* **pip**: pip 18.0 from c:\users\edgar\anaconda3\lib\site-packages\pip (python 3.6)
* **pipenv**: pipenv, version 2018.7.1
Any clues as to what is the problem/solution?
|
2018/10/09
|
[
"https://Stackoverflow.com/questions/52711988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5535114/"
] |
Finally solved it. This is current issue, with a [workaround](https://github.com/pypa/pipenv/issues/2924#issuecomment-427383459) for Windows:
`pipenv run python -m pip install -U pip==18.0`
|
I got the same problem. It looks like the problem happens with pip 18.1; however, you are using pip 18.0. Anyway, I solved it with these commands, which you can try: `pipenv run pip install pip==18.0`, then `pipenv install`.
Reference:
<https://github.com/pypa/pipenv/issues/2924>
| 9,829
|
33,845,726
|
For example, this is my simple Python code to send e-mail:
```
import smtplib
import getpass
mail = "example@example.com"
passs = getpass.getpass("pass: ")
sendto = "example1@example2.com"
title = "Subject: example\n"
body = "blabla\n"
msg = title + body
send = smtplib.SMTP("smtp.example.com",587)
send.ehlo()
send.starttls()
send.login(mail,passs)
send.sendmail(mail,sendto,msg)
```
and it works perfectly, but whenever I search for sending emails from Python, much more complicated code shows up, with many more modules and lines, yet it does the same thing! Why is that? Is my code good or bad?
|
2015/11/21
|
[
"https://Stackoverflow.com/questions/33845726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848384/"
] |
The simplest way is to ask for its `description`:
```
cell.priceLabel.text = productPrice.price.description;
```
(All those answers that suggest formatting with `"%@"` are using `description`, indirectly.)
But if it's a price, you probably want to format it like a price. For example, in the USA, prices in US dollars are normally formatted with two digits to the right of the decimal point and a comma before every group of three digits to the left of the decimal point. So instead of using `description`, you should add an `NSNumberFormatter` to your controller and use that:
.m
--
```
@interface ViewController ()
@property (nonatomic, strong) NSNumberFormatter *priceFormatter;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.priceFormatter = [[NSNumberFormatter alloc] init];
self.priceFormatter.numberStyle = NSNumberFormatterCurrencyStyle;
// If you don't want a currency symbol like $ in the output, do this:
// self.priceFormatter.currencySymbol = nil;
}
- (void)showPrice:(NSDecimalNumber *)price inTextField:(UILabel *)label {
label.text = [self.priceFormatter stringFromNumber:price];
}
```
There are lots of other `NSNumberFormatter` properties you can use to tweak the output, so check the [class reference](https://developer.apple.com/library/ios/documentation/Cocoa/Reference/Foundation/Classes/NSNumberFormatter_Class/) if you need to.
### UPDATE
Assuming `price` is declared as `NSArray`:
```
BUYProductVariant *productPrice = price[indexPath.row];
cell.priceLabel.text = [self.priceFormatter stringFromNumber:productPrice.price];
```
|
A UILabel expects its text value to be an NSString, so you need to create a string using the value of product.price.
```
cell.priceLabel.text = [NSString stringWithFormat:@"%@", product.price];
```
What's important is that you can't simply cast (change) the type of NSDecimalNumber, you have to convert the value in some way.
| 9,832
|
12,665,574
|
I'm working with a class that emulates a python list. I want to return it as a python list() when I access it without an index.
with a normal list():
```
>>> a = [1,2,3]
>>> a
[1,2,3]
```
what I'm getting, essentially:
```
>>> a = MyList([1,2,3])
>>> a
<MyList object at 0xdeadbeef>
```
I can't figure out which dunder method (if any) would allow me to customize this behavior.
I'd think it would be `__get__`? Although `list()` doesn't implement get/set/delete; I guess because it's a built-in?
|
2012/09/30
|
[
"https://Stackoverflow.com/questions/12665574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212685/"
] |
You should override the `__repr__` method in your class (and optionally the `__str__` method too); see this [post](https://stackoverflow.com/questions/1436703/difference-between-str-and-repr-in-python) for a discussion of the differences.
Something like this:
```
class MyList(object):
    def __repr__(self):
        # produce the same text a plain list would show, by
        # materialising the elements (the class is iterable, per the question)
        return repr([item for item in self])
```
As pointed out in the comments, `str()` falls back to `__repr__` if `__str__` isn't defined, but `repr()` doesn't call `__str__` if `__repr__` isn't defined.
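For a class wrapping an internal list, a fuller sketch (the `_items` attribute is just an assumption for illustration):
```
class MyList(object):
    def __init__(self, items):
        self._items = list(items)  # hypothetical backing store

    def __iter__(self):
        return iter(self._items)

    def __repr__(self):
        # reuse list's own repr so the output matches a plain list
        return repr(self._items)

a = MyList([1, 2, 3])
print(a)  # [1, 2, 3] -- str() falls back to __repr__ here
```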
|
Allow me to answer my own question: I believe it's the `__repr__` method that I'm looking for. Please correct me if I'm wrong. Here's what I came up with:
```
def __repr__(self):
return str([i for i in iter(self)])
```
| 9,833
|
55,564,014
|
I am unable to import the TensorFlow 2.0 module into my code; I end up getting this error:
```
Traceback (most recent call last):
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Perseus\Anaconda3\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Perseus\Anaconda3\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Perseus\Anaconda3\lib\runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\Perseus\Anaconda3\lib\runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "C:\Users\Perseus\Anaconda3\lib\runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 27, in <module>
from tensorflow._api.v2 import audio
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\_api\v2\audio\__init__.py", line 8, in <module>
from tensorflow.python.ops.gen_audio_ops import decode_wav
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Perseus\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Perseus\Anaconda3\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Perseus\Anaconda3\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55564014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11258767/"
] |
I think you hit [this bug](https://github.com/tensorflow/tensorflow/issues/22794).
You can downgrade tensorflow to `v1.10.0`
```
pip install tensorflow-gpu==1.10.0
```
or make sure that you have these versions for CUDA, Tensorflow and CUDNN:
* CUDA v9.0
* tensorflow-gpu v1.12.0
* CUDNN 7.4.1.5
Alternatively, you can uninstall tensorflow and install it through conda:
```
pip uninstall tensorflow-gpu
conda install tensorflow-gpu
```
|
TensorFlow 2.0 is now officially available, so you can retry. This time it should work without any errors, provided CUDA and cuDNN are properly installed.
| 9,838
|
63,160,976
|
I am trying to split my nested list of strings into nested lists of floats. My nested list is below:
```
nested = [['0.3, 0.4, 0.2', '0.5, 0.1, 0.3'], ['0.7, 0.4, 0.2'], ['0.4, 0.1, 0.3']]
```
My desired output would be a nested list where these values remain in their sublist and are converted to floats as seen below:
```
nested = [[0.3, 0.4, 0.2, 0.5, 0.1, 0.3], [0.7, 0.4, 0.2], [0.4, 0.1, 0.3]]
```
The difficulty comes when trying to handle sublists with multiple strings (i.e. the first element). I have found some examples, such as here: [How do I split strings within nested lists in Python?](https://stackoverflow.com/questions/29002067/how-do-i-split-strings-within-nested-lists-in-python), but this code only handles sublists with one string element, and I'm unsure how to apply it to sublists with multiple strings.
I am trying to avoid hardcoding anything as this is part of a script for a larger dataset and the sublist length may vary.
If anyone has any ideas, I'd appreciate some help.
|
2020/07/29
|
[
"https://Stackoverflow.com/questions/63160976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13984141/"
] |
```
result = [[float(t) for s in sublist for t in s.split(', ')] for sublist in nested]
```
which is equivalent to
```
result = []
for sublist in nested:
inner = []
for s in sublist:
for t in s.split(', '):
inner.append(float(t))
result.append(inner)
```
|
OK, starting with your example:
```
myNestedList = [['0.3, 0.4, 0.2', '0.5, 0.1, 0.3'], ['0.7, 0.4, 0.2'], ['0.4, 0.1, 0.3']]

myOutputList = []
for subList in myNestedList:
    tempList = []
    for valueStr in subList:
        # each string holds several comma-separated numbers, so split first
        for part in valueStr.split(','):
            tempList.append(float(part))
    myOutputList.append(tempList)
```
It will look something like that (I didn't have time to try it out, but it should be close to correct).
| 9,839
|
54,901,493
|
I have a problem with my code: when I try to sign up, the error `Manager isn't available; 'auth.User' has been swapped for 'members.CustomUser'` appears. I tried the solutions from similar questions like [Manager isn't available; 'auth.User' has been swapped for 'members.CustomUser'](https://stackoverflow.com/questions/17873855/manager-isnt-available-user-has-been-swapped-for-pet-person), but all of them ask to add `User = get_user_model()`, and I don't use any `User` in my code, or at least I don't know where I used it. I'm new to Django, Python, JS, etc., so forgive me if my question is silly.
For more information: 1) I used the [Django Signup Tutorial](https://wsvincent.com/django-user-authentication-tutorial-signup/) to create the signup method. At first it was working well, but after I expanded my homework project I got the error. 2) In the other apps ("products" and "search") I never import User, and I don't use CustomUser either, because they don't need to work with User. Only `members` works with User and CustomUser.
models.py:
```
from django.contrib.auth.models import AbstractUser
class CustomUser(AbstractUser):
def __str__(self):
return self.email
class Meta:
verbose_name = "member"
verbose_name_plural = "members"
```
setting.py:
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'card.apps.CardConfig',
'members.apps.MembersConfig',
'search.apps.SearchConfig',
'products.apps.ProductsConfig',
'rest_framework',
]
AUTH_USER_MODEL = 'members.CustomUser'
```
admin.py:
```
from django.contrib import admin
from django.contrib.auth import get_user_model
from django.contrib.auth.admin import UserAdmin
from .forms import CustomUserCreationForm, CustomUserChangeForm
from .models import CustomUser
class CustomUserAdmin(UserAdmin):
add_form = CustomUserCreationForm
form = CustomUserChangeForm
model = CustomUser
list_display = ['email', 'username']
admin.site.register(CustomUser, CustomUserAdmin)
```
forms.py:
```
# users/forms.py
from django.contrib.auth.forms import UserCreationForm, UserChangeForm
from .models import CustomUser
class CustomUserCreationForm(UserCreationForm):
class Meta(UserCreationForm):
model = CustomUser
fields = ('username', 'email')
class CustomUserChangeForm(UserChangeForm):
class Meta:
model = CustomUser
fields = ('username', 'email')
```
error:
```
27/Feb/2019 12:36:01] "GET /signup/ HTTP/1.1" 200 5293
Internal Server Error: /signup/
Traceback (most recent call last):
File "C:\shopping\venv\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "C:\shopping\venv\lib\site-packages\django\core\handlers\base.py", line 126, in _get_response
response = self.process_exception_by_middleware(e, request)
File "C:\shopping\venv\lib\site-packages\django\core\handlers\base.py", line 124, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\shopping\venv\lib\site-packages\django\views\generic\base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "C:\shopping\venv\lib\site-packages\django\views\generic\base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "C:\shopping\venv\lib\site-packages\django\views\generic\edit.py", line 172, in post
return super().post(request, *args, **kwargs)
File "C:\shopping\venv\lib\site-packages\django\views\generic\edit.py", line 141, in post
if form.is_valid():
File "C:\shopping\venv\lib\site-packages\django\forms\forms.py", line 185, in is_valid
return self.is_bound and not self.errors
File "C:\shopping\venv\lib\site-packages\django\forms\forms.py", line 180, in errors
self.full_clean()
File "C:\shopping\venv\lib\site-packages\django\forms\forms.py", line 383, in full_clean
self._post_clean()
File "C:\shopping\venv\lib\site-packages\django\contrib\auth\forms.py", line 107, in _post_clean
super()._post_clean()
File "C:\shopping\venv\lib\site-packages\django\forms\models.py", line 403, in _post_clean
self.instance.full_clean(exclude=exclude, validate_unique=False)
File "C:\shopping\venv\lib\site-packages\django\db\models\base.py", line 1137, in full_clean
self.clean()
File "C:\shopping\venv\lib\site-packages\django\contrib\auth\models.py", line 338, in clean
self.email = self.__class__.objects.normalize_email(self.email)
File "C:\shopping\venv\lib\site-packages\django\db\models\manager.py", line 188, in __get__
cls._meta.swapped,
AttributeError: Manager isn't available; 'auth.User' has been swapped for 'members.CustomUser'
[27/Feb/2019 12:36:04] "POST /signup/ HTTP/1.1" 500 113770
```
|
2019/02/27
|
[
"https://Stackoverflow.com/questions/54901493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10642829/"
] |
Modify views.py
```
from django.contrib.auth.forms import UserCreationForm
from django.urls import reverse_lazy
from django.views import generic
class SignUp(generic.CreateView):
form_class = UserCreationForm
success_url = reverse_lazy('login')
template_name = 'signup.html'
```
to
```
from .forms import CustomUserCreationForm
from django.urls import reverse_lazy
from django.views import generic
class SignUp(generic.CreateView):
form_class = CustomUserCreationForm
success_url = reverse_lazy('login')
template_name = 'signup.html'
```
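For context: the built-in `UserCreationForm` is bound to the stock user model via `Meta.model = User`, so once `AUTH_USER_MODEL` is swapped, validating that form touches the manager of the old `auth.User` model, which is exactly the error in your traceback.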
|
In your forms.py make changes as:
```
from django.contrib.auth import get_user_model
class CustomUserChangeForm(UserChangeForm):
class Meta:
model = get_user_model()
fields = ('username', 'email')
class CustomUserCreationForm(UserCreationForm):
class Meta:
model = get_user_model()
fields = ('username', 'email')
```
Also add this line to your `members/__init__.py` file:
```
default_app_config = 'members.apps.MembersConfig'
```
| 9,841
|
14,447,202
|
Heroku seems to prefer the apps deployed have a certain structure, mostly that the .git and manage.py is at root level and everything else is below that.
I have inherited a Django app I'm trying to deploy for testing purposes and I don't think I can restructure it so I was wondering if I have an alternative.
The structure I've inherited has most of the files in the root folder:
```
./foo:
__init__.py,
.git,
Procfile,
settings.py,
manage.py,
bar/
models.py, etc
```
From within foo I can run `python manage.py shell` and in there `from foo.bar import models` works.
However, when I push this to Heroku, it puts the root in `/app`, so `foo` becomes `app` and `from foo.bar import models` no longer works.
Is there any magic settings that would allow me to indicate that `app` is really `foo` and allow me to continue without refactoring the app structure and/or all the imports?
*Similar question*: I think my question is similar to [Heroku - Django: Had to change every mentioning of `myproject` into `app` to get my site working. How to best avoid this in the future?](https://stackoverflow.com/questions/11972817/heroku-django-had-to-change-every-mentioning-of-myproject-into-app-to-get), except I'm asking if there's anything I can do without changing the site structure.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14447202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1998312/"
] |
Just in case someone else has this trouble, my findings are:
There is nothing to solve -- the sandbox is just really slow, sometimes it took a couple days for the profile to become active and send the IPN. In other words, sandbox isn't good to test these functions at all, just go live and refund a couple tests. Even live sometimes takes a bit of time... I've seen it take a few hours sometimes, so don't go crazy.
|
From PayPal doco:
"By default, PayPal does not activate the profile if the initial payment amount fails. To override this default behavior, set the FAILEDINITAMTACTION field to ContinueOnFailure. If the initial payment amount fails, ContinueOnFailure instructs PayPal to add the failed payment amount to the outstanding balance due on this recurring payment profile.
If you do not set FAILEDINITAMTACTION or set it to CancelOnFailure, PayPal creates the recurring payment profile. However, PayPal places the profile into a pending status until the initial payment completes. If the initial payment clears, PayPal notifies you by Instant Payment Notification (IPN) that it has activated the pending profile. If the payment fails, PayPal notifies you by IPN that it has canceled the pending profile"
from <https://cms.paypal.com/mx/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_api_WPRecurringPayments>, just below Table 6.
| 9,842
|
25,317,140
|
I may be going about this the wrong way but that's why I'm asking the question.
I have a source of serial data that is connected to a SOC then streams the serial data up to a socket on my server over UDP. The baud rate of the raw data is 57600, I'm trying to use Python to receive and parse the data. I tested that I'm receiving the data successfully on the port via the script below (found here: <https://wiki.python.org/moin/UdpCommunication>)
```
import socket
UDP_IP = "MY IP"
UDP_PORT = My PORT
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
while True:
data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
print "received message:", data
```
Since I'm not reading the data with the serial lib in Python or setting a baud rate to read at, it comes through all garbled, as would be expected. My end goal is to be able to receive and parse the data for server-side processing, and also to have another client connect to the raw data stream piped back out from the server (proxy), which is why I'm not processing the data directly from the serial port on the device.
So my question is, how can I have Python treat the socket as a serial port that I can set a baud rate on and #import serial and .read from? I can't seem to find any examples online which makes me think I'm overlooking something simple or am trying to do something stupid.
sadface
=======
|
2014/08/14
|
[
"https://Stackoverflow.com/questions/25317140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/993775/"
] |
You can't treat a socket as a serial line. A socket can only send and receive data (data stream for TCP, packets for UDP). If you would need a facility to control the serial line on the SOC you would need to build an appropriate control protocol over the socket, i.e. either use another socket for control like FTP does or use in-band control and distinguish between controlling and data like HTTP does. And of course both sides of the connection have to understand this protocol.
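Put differently: the 57600 Bd setting only exists on the SOC's serial link. By the time the bytes arrive in a UDP datagram there is no baud rate left to set; you just parse the payload bytes. A minimal sketch (the port number and the newline framing are assumptions; they depend on how the SOC frames its records):
```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # assumed port

buf = b""
while True:
    data, _addr = sock.recvfrom(1024)
    buf += data
    # assumed framing: newline-terminated ASCII records from the SOC
    while b"\n" in buf:
        record, buf = buf.split(b"\n", 1)
        print(record.decode("ascii", errors="replace"))
```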
|
Build on facts
--------------
A first thing to start with is to summarise facts -- **beginning from the very SystemOnChip** (SOC) all the way up ...:
1. an originator serial-bitstream parameters ::= **57600** Bd, **X**-<*dataBIT*>-s, **Y**-<*stopBIT*>, **Z**-<*parityBIT*>,
2. a mediator receiving process de-framing <*data*> from a raw-bitstream
3. a mediator upStream sending to server-process integration needs specification ( FSA-model of a multi-party hand-shaking, fault-detection, failure-state-resilience, logging, ... )
4. T.B.C. ...
Design as per a valid requirement list
--------------------------------------
A few things work as a just bunch of SLOC one-liners. Design carefully against the validated Requirement List as a principle. It saves both your side and the cooperating Team(s).
Test on models
--------------
Worth a time to test on-the-fly, during the efforts to progress from simple parts to more complex, multiparty scenarios.
Integrate on smarter frameworks
-------------------------------
Definitely a waste of time to reinvent wheel. Using a smart framework for server-side integration will unlock a lot of your time and energy on your ProblemDOMAIN-specific tasks, rather than to waste both the former and the latter for writing your just-another-socket-framework ( destined in majority of cases to failure ... )
Try **ZeroMQ** or a **nanomsg** Scale-able Formal Communication Patterns Framework for smart-services to send de-framed data from your serial-bitStream source and you are almost there.
| 9,843
|
61,831,953
|
I am looking for a way in python to make a dictionary of dictionaries based on the desired structure dynamically.
I have the data bellow:
```py
{'weather': ['windy', 'calm'], 'season': ['summer', 'winter', 'spring', 'autumn'], 'lateness': ['ontime', 'delayed']}
```
I give the structure I want them to be like:
```py
['weather', 'season', 'lateness']
```
and finally get the data in this format:
```py
{'calm': {'autumn': {'delayed': 0, 'ontime': 0},
'spring': {'delayed': 0, 'ontime': 0},
'summer': {'delayed': 0, 'ontime': 0},
'winter': {'delayed': 0, 'ontime': 0}},
'windy': {'autumn': {'delayed': 0, 'ontime': 0},
'spring': {'delayed': 0, 'ontime': 0},
'summer': {'delayed': 0, 'ontime': 0},
'winter': {'delayed': 0, 'ontime': 0}}}
```
This is the manual way I thought of for achieving this:
```py
dtree = {}
for cat1 in category_cases['weather']:
dtree.setdefault(cat1, {})
for cat2 in category_cases['season']:
dtree[cat1].setdefault(cat2, {})
for cat3 in category_cases['lateness']:
dtree[cat1][cat2].setdefault(cat3, 0)
```
Can you think of a way where I can just change the structure I wrote and still get the desired result?
Keep in mind that the structure might not be the same size every time.
Also, if you can think of another way besides dictionaries to access the result, that will also work for me.
|
2020/05/16
|
[
"https://Stackoverflow.com/questions/61831953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4377632/"
] |
If you're not averse to using external packages, [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame) might be a viable candidate since it looks like you'll be using a table:
```
import pandas as pd
df = pd.DataFrame(
index=pd.MultiIndex.from_product([d['weather'], d['season']]),
columns=d['lateness'], data=0
)
```
Result:
```
ontime delayed
windy summer 0 0
winter 0 0
spring 0 0
autumn 0 0
calm summer 0 0
winter 0 0
spring 0 0
autumn 0 0
```
And you can easily make changes with [indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html):
```
df.loc[('windy', 'summer'), 'ontime'] = 1
df.loc['calm', 'autumn']['delayed'] = 2
# Result:
ontime delayed
windy summer 1 0
winter 0 0
spring 0 0
autumn 0 0
calm summer 0 0
winter 0 0
spring 0 0
autumn 0 2
```
The table can be constructed dynamically if you will always use the last key for columns, *assuming your keys are in the desired insertion order*:
```
df = pd.DataFrame(
index=pd.MultiIndex.from_product(list(d.values())[:-1]),
columns=list(d.values())[-1], data=0
)
```
---
Since you're interested in `pandas`, given your structure, I would also recommend giving [MultiIndex and Advanced Indexing](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html) a good read, just to get some idea of how to play around with your data. Here are some examples:
```
# Gets the sum of 'delayed' items in all of 'calm'
# Filters all the 'delayed' data in 'calm'
df.loc['calm', 'delayed']
# summer 5
# winter 0
# spring 0
# autumn 2
# Name: delayed, dtype: int64
# Apply a sum:
df.loc['calm', 'delayed'].sum()
# 7
# Gets the mean of all 'summer' (notice the `slice(None)` is required to return all of the 'calm' and 'windy' group)
df.loc[(slice(None), 'summer'), :].mean()
# ontime 0.5
# delayed 2.5
# dtype: float64
```
It definitely is very handy and versatile, but before you get too deep into it you will definitely want to read up first; the framework might take some getting used to.
---
Otherwise, if you still prefer `dict`, there's nothing wrong with that. Here's a recursive function to generate based on the given keys *(assuming your keys are in the desired insertion order)*:
```
def gen_dict(d, level=0):
if level >= len(d):
return 0
key = tuple(d.keys())[level]
return {val: gen_dict(d, level+1) for val in d.get(key)}
gen_dict(d)
```
Result:
```
{'calm': {'autumn': {'delayed': 0, 'ontime': 0},
'spring': {'delayed': 0, 'ontime': 0},
'summer': {'delayed': 0, 'ontime': 0},
'winter': {'delayed': 0, 'ontime': 0}},
'windy': {'autumn': {'delayed': 0, 'ontime': 0},
'spring': {'delayed': 0, 'ontime': 0},
'summer': {'delayed': 0, 'ontime': 0},
'winter': {'delayed': 0, 'ontime': 0}}}
```
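If you then need to update the counters without writing the nested indexing by hand, a small helper along these lines works (a sketch; `path` must supply one value per level of your structure, and `d` is the input dict from above):
```
from functools import reduce

def bump(tree, path, amount=1):
    """Increment the counter at tree[path[0]][path[1]]...[path[-1]]."""
    leaf = reduce(lambda node, key: node[key], path[:-1], tree)
    leaf[path[-1]] += amount

counts = gen_dict(d)
bump(counts, ('calm', 'winter', 'delayed'))
```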
|
Yes, you can achieve this using the following code:
```
import copy

structure = ['weather', 'season', 'lateness']
data = {'weather': ['windy', 'calm'], 'season': ['summer', 'winter', 'spring', 'autumn'],
        'lateness': ['ontime', 'delayed'], }

d_tree = dict()
n = len(structure)  # length of the structure list
prev_val = 0  # the innermost value
while n > 0:
    n -= 1
    keys = data.get(structure[n]) or list()  # get the list of values from data
    # Idea here is to start with the innermost dict and keep moving outward
    d_tree.clear()
    for key in keys:
        # deepcopy so sibling branches don't share the same inner dicts
        d_tree[key] = copy.deepcopy(prev_val)
    prev_val = copy.deepcopy(d_tree)  # copy the d_tree to use as the value of the outer dict
print(d_tree)
```
Hope this helps!!
| 9,844
|
37,912,206
|
Given a list:
```
l1 = [0, 211, 576, 941, 1307, 1672, 2037]
```
What is the most pythonic way of getting the index of the last element of the list? Given that Python lists are zero-indexed, is it:
```
len(l1) - 1
```
Or, is it the following which uses Python's list operations:
```
l1.index(l1[-1])
```
Both return the same value, that is 6.
|
2016/06/19
|
[
"https://Stackoverflow.com/questions/37912206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3096712/"
] |
Only the first is correct:
```
>>> lst = [1, 2, 3, 4, 1]
>>> len(lst) - 1
4
>>> lst.index(lst[-1])
0
```
However, it depends on what you mean by "the index of the last element".
Note that `index` must traverse the whole list in order to provide an answer:
```
In [1]: %%timeit lst = list(range(100000))
...: lst.index(lst[-1])
...:
1000 loops, best of 3: 1.82 ms per loop
In [2]: %%timeit lst = list(range(100000))
len(lst)-1
...:
The slowest run took 80.20 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 109 ns per loop
```
Note that the second timing is in **nanoseconds** versus **milliseconds** for the first one.
|
You should use the first. Why?
```
>>> l1 = [1,2,3,4,3]
>>> l1.index(l1[-1])
2
```
| 9,851
|
41,788,056
|
I am following [this tutorial](https://cloud.google.com/endpoints/docs/frameworks/python/quickstart-frameworks-python) on setting up Cloud Endpoints in Python on Google's App Engine and keep on getting an import error
```
ImportError: No module named control
```
on the **Generating the OpenAPI configuration file** step when I input
```
python lib/endpoints/endpointscfg.py get_swagger_spec main.EchoApi --hostname echo-api.endpoints.projectid.cloud.goog
```
I followed these steps on a new account and still got this error. No idea what I am doing wrong/steps I am skipping.
here is the traceback:
```
Traceback (most recent call last):
File "lib/endpoints/endpointscfg.py", line 625, in <module>
main(sys.argv)
File "lib/endpoints/endpointscfg.py", line 621, in main
args.callback(args)
File "lib/endpoints/endpointscfg.py", line 479, in _GenOpenApiSpecCallback
application_path=args.application)
File "lib/endpoints/endpointscfg.py", line 324, in _GenOpenApiSpec
application_path=application_path)
File "lib/endpoints/endpointscfg.py", line 181, in GenApiConfig
module = __import__(module_name, fromlist=base_service_class_name)
File "/home/hairyhenry/python-docs-samples/appengine/standard/endpoints-frameworks-v2/echo/main.py", line 19, in <module>
import endpoints
File "/home/hairyhenry/python-docs-samples/appengine/standard/endpoints-frameworks-v2/echo/lib/endpoints/__init__.py", line 29, in <module>
from apiserving import *
File "/home/hairyhenry/python-docs-samples/appengine/standard/endpoints-frameworks-v2/echo/lib/endpoints/apiserving.py", line 74, in <module>
from google.api.control import client as control_client
ImportError: No module named control
```
any insight would be fabulous
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41788056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5327279/"
] |
You have to indent all the statements after your while loops and a single iteration version of your program should work. Proper indentation is critical in python. Lots of sites talk about python indentation (see [here](http://www.peachpit.com/articles/article.aspx?p=1312792&seqNum=3) for example). You were also missing the outer loop that would allow it to loop indefinitely.
Additionally you can clear the graph after each iteration.
```
import pylab
import math
xs=[]
ys=[]
x0=-4.0
x1=+4.0
x=x0
n=500
dx=(x1-x0)/n
while True:
a=input("Enter a: ")
b=input("Enter b: ")
c=input("Enter c: ")
a=int(a)
b=int(b)
c=int(c)
xs=[]
ys=[]
x=x0
while x<=x1:
xs.append(x)
y=(a*x**2)+(b*x+c)
x+=dx
ys.append(y)
pylab.plot(xs,ys,"rx-")
print(xs)
print(ys)
pylab.show()
pylab.clf()
```
|
First, you have to do your indentation correctly in the while-loop.
Second, your while loop only creates the lists `xs` and `ys`. That's why you can't keep the prompt and plot running again and again. So you have to use another loop to repeat your code above. Here is an example.
```
import matplotlib.pyplot as plt
import math
def interactiveQPlot():
xs=[]
ys=[]
x0=-4.0
x1=+4.0
x=x0
n=500
dx=(x1-x0)/n
a= input("Enter a: ")
b = input("Enter b: ")
c= input("Enter c: ")
a=int(a)
b=int(b)
c=int(c)
while x<=x1:
xs.append(x)
y=(a*x**2)+(b*x+c)
ys.append(y)
x+=dx
plt.plot(xs,ys,"rx-")
print(xs)
print(ys)
plt.show()
while True:
interactiveQPlot()
```
| 9,854
|
1,756,721
|
I just updated Python to 2.6.4 on my Mac.
I installed from the dmg package.
The binary did not seem to correctly set my Python path, so I added `'/usr/local/lib/python2.6/site-packages'` in `.bash_profile`
```
>>> pprint.pprint(sys.path)
['',
'/Users/Bryan/work/django-trunk',
'/usr/local/lib/python2.6/site-packages',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages']
```
Apparently that is not all the required paths because I can't run iPython.
```
$ ipython
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named `pkg_resources`
```
I've done Google searches and I can't really figure out how to install `pkg_resources` or make sure it's on the path.
What do I need to do to fix this?
|
2009/11/18
|
[
"https://Stackoverflow.com/questions/1756721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/86731/"
] |
If you upgraded your Python on Mac OS 10.7 and pkg\_resources doesn't work, the simplest fix is to just reinstall setuptools, as Ned mentioned above.
```
sudo pip install setuptools --upgrade
# or
sudo easy_install --upgrade setuptools
```
|
Try this only if you are ok with uninstalling python.
I uninstalled python using
```
brew uninstall python
```
then later installed using
```
brew install python
```
then it worked!
| 9,855
|
42,838,366
|
I think this question has been asked many times, but I can't find the answer. I am probably not using the correct words in my searches.
I am a beginner in python and I am learning to make simple games, with the pygame library. I would like to create a variable `character`, containing x and y coordinates.
I would like something like `character[x=default_value,y=default_value]`, and then edit these coordinates for example by typing `character[x]=new_value`.
Is there a way to achieve something like that in python ?
EDIT : Thank you all, I used dictionary like that `character= {'x':'value', 'y':'value'}` and it works perfectly!
|
2017/03/16
|
[
"https://Stackoverflow.com/questions/42838366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5682871/"
] |
You could use a dictionary like this:
```
character = {'Name': 'Mary', 'xPos': 0, 'yPos': 0}
character['xPos'] = 10
```
|
You can use dictionary.
```
character = dict()
character['x']= default_value
character['y']= default_value
```
You might want to have a look at this [Documentation](https://learnpythonthehardway.org/book/ex39.html)
| 9,865
|
39,919,586
|
I know this is probably a really easy question, but I'm struggling to split a string in Python. My regex has group separators like this:
```
myRegex = "(\W+)"
```
And I want to parse this string into words:
```
testString = "This is my test string, hopefully I can get the word i need"
testAgain = re.split("(\W+)", testString)
```
Here's the results:
```
['This', ' ', 'is', ' ', 'my', ' ', 'test', ' ', 'string', ', ', 'hopefully', ' ', 'I', ' ', 'can', ' ', 'get', ' ', 'the', ' ', 'word', ' ', 'i', ' ', 'need']
```
Which isn't what I expected. I am expecting the list to contain:
```
['This','is','my','test']......etc
```
Now I know it's something to do with the grouping in my regex, and I can fix the issue by removing the brackets. **But how can I keep the brackets and get the result above?**
Sorry about this question; I have read the official Python documentation on regex splitting with groups, but I still don't understand why the empty spaces are in my list.
|
2016/10/07
|
[
"https://Stackoverflow.com/questions/39919586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5613356/"
] |
As described in this answer, [How to split but ignore separators in quoted strings, in python?](https://stackoverflow.com/questions/2785755/how-to-split-but-ignore-separators-in-quoted-strings-in-python), you can simply slice the list once it's split. It's easy to do so because you want every other element, starting with the first one (indexes 0, 2, 4, ...).
You can use the [start:end:step] notation as described below:
```
testString = "This is my test string, hopefully I can get the word i need"
testAgain = re.split("(\W+)", testString)
testAgain = testAgain[0::2]
```
Also, I must point out that `\W` matches any non-word characters, including punctuation. If you want to keep your punctuation, you'll need to change your regex.
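The underlying reason is documented behaviour of `re.split`: when the pattern contains capturing groups, the text matched by each group is also returned in the result list, which is why the separator strings show up at all. A quick demonstration:
```
import re

s = "This is my test"
print(re.split(r"\W+", s))    # ['This', 'is', 'my', 'test'] - separators dropped
print(re.split(r"(\W+)", s))  # ['This', ' ', 'is', ' ', 'my', ' ', 'test'] - captured separators kept
```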
|
You can simly do:
```
testAgain = testString.split() # built-in split with space
```
Different `regex` ways of doing this:
```
testAgain = re.split(r"\s+", testString) # split with space
testAgain = re.findall(r"\w+", testString) # find all words
testAgain = re.findall(r"\S+", testString) # find all non space characters
```
| 9,868
|
21,458,037
|
I am using Ubuntu 12.04 LTS.
The .cer file is uploaded to my Windows Azure account.
My Python script is:
```
#!/usr/bin/python
from azure import *
from azure.servicemanagement import *
azureId = "XXXXXXXXXXXXXXXXXXXXX";
certificate_path= "/home/dharampal/Desktop/azure.pem";
sms = ServiceManagementService(azureId,certificate_path)
print sms
result = sms.list_locations()
print result
```
When the script runs, I get the ServiceManagementService object, but a certificate-related error is thrown.
The output of the script is:
```
<azure.servicemanagement.servicemanagementservice.ServiceManagementService object at 0xb7259f2c>
Traceback (most recent call last):
File "available_locations_list.py", line 13, in <module>
result = sms.list_locations()
File "/usr/local/lib/python2.7/dist-packages/azure/servicemanagement/servicemanagementservice.py", line 796, in list_locations
Locations)
File "/usr/local/lib/python2.7/dist-packages/azure/servicemanagement/servicemanagementclient.py", line 96, in _perform_get
response = self._perform_request(request)
File "/usr/local/lib/python2.7/dist-packages/azure/servicemanagement/servicemanagementclient.py", line 83, in _perform_request
resp = self._filter(request)
File "/usr/local/lib/python2.7/dist-packages/azure/http/httpclient.py", line 144, in perform_request
self.send_request_headers(connection, request.headers)
File "/usr/local/lib/python2.7/dist-packages/azure/http/httpclient.py", line 125, in send_request_headers
connection.endheaders()
File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 776, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 1161, in connect
self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
File "/usr/lib/python2.7/ssl.py", line 381, in wrap_socket
ciphers=ciphers)
File "/usr/lib/python2.7/ssl.py", line 141, in __init__
ciphers)
ssl.SSLError: [Errno 336265218] _ssl.c:351: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
```
Where am I going wrong?
If anyone has faced the same issue and found a solution, please help me.
I googled but was unable to find a solution.
|
2014/01/30
|
[
"https://Stackoverflow.com/questions/21458037",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3086379/"
] |
I think your approach is overly complicated for this. You should take a step back, think about what it is that you wish to accomplish, plan it out first, then start programming.
What you want to do is sort your array with a random sort order.
Create a new `IComparer` that returns the comparison randomly:
```
public class RandomComparer<T> : IComparer<T> {
private static Random random = new Random();
public int Compare(T a, T b) {
return random.Next(2) == 0 ? 1 : -1;
}
}
```
Now, sort your array:
```
int[] array = {
1, 2, 3, 4, 5,
6, 7, 8, 9, 10
};
Array.Sort<int>(array, new RandomComparer<int>());
for (int i = 0; i < array.Length; i++)
Console.WriteLine(array[i]);
```
It's really that simple. See this demonstration at [IDEOne.com](http://ideone.com/4XTZAg)
|
You can randomly swap the values:
```
int n;
Console.WriteLine("Please enter a positive integer for the array size"); // asking the user for the int n
n = Int32.Parse(Console.ReadLine());
int[] array = new int[n]; // declaring the array
int[] newarray = new int[n];
Random rand = new Random();
for (int i = 0; i < array.Length; i++)
{
array[i] = i + 1;
newarray[i] = i + 1;
}
for (int y = 0; y < newarray.Length; y++)
{
int r = rand.Next(n);
int tmp = newarray[y];
newarray[y] = newarray[r];
newarray[r] = tmp;
}
for (int x=0;x<newarray.Length;x++)
{
Console.Write(" {0}", newarray[x]);
}
Console.ReadLine();
```
| 9,869
|
72,521,192
|
Given a reproducible dataframe, I want to find the number of unique values in each column, not including missing (NA) values. The code below counts NA values; as a result, the cardinality of the `nat_country` column shows as 4 in the `n_unique_values` dataframe (it is supposed to be 3). In Python there exists a `nunique()` function which does not take NA values into consideration. How can one achieve this in R?
```r
nat_country = c("United-States", "Germany", "United-States", "United-States", "United-States", "United-States", "Taiwan", NA)
age = c(14,15,45,78,96,58,25,36)
dat = data.frame(nat_country, age)
n_unique_values = t(data.frame(apply(dat, 2, function(x) length(unique(x)))))
```
|
2022/06/06
|
[
"https://Stackoverflow.com/questions/72521192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10617194/"
] |
You can use `dplyr::n_distinct` with `na.rm = T`:
```r
library(dplyr)
sapply(dat, n_distinct, na.rm = T)
#map_dbl(dat, n_distinct, na.rm = T)
#nat_country age
# 3 8
```
---
In base R, you can use `na.omit` as well:
```r
sapply(dat, \(x) length(unique(na.omit(x))))
#nat_country age
# 3 8
```
|
We could use `map` or `map_dfr` with `n_distinct`:
```
library(dplyr)
library(purrr)
dat %>%
map_dfr(., n_distinct, na.rm = TRUE)
nat_country age
<int> <int>
1 3 8
```
```
library(dplyr)
library(purrr)
dat %>%
map(., n_distinct, na.rm = TRUE) %>%
unlist()
```
```
nat_country age
3 8
```
| 9,879
|
41,604,223
|
I am trying to read N lines of a file in Python.
This is my code:
```
N = 10
counter = 0
lines = []
with open(file) as f:
    for line in f:
        if counter < N:
            lines.append(line)
            counter += 1
        else:
            break
```
Assuming the file is a super large text file, is there any way to write this better? I understand that in production code it's advised not to use break in loops, so as to achieve better readability, but I cannot think of a good way to get the same effect without break (one attempt with `itertools.islice` is sketched below).
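The sketch (I believe `islice` stops after N lines without an explicit break, though I'm not sure it counts as more readable):
```
from itertools import islice

N = 10
with open(file) as f:
    lines = list(islice(f, N))
```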
I am a new developer and am just trying to improve my code quality.
Any advice is greatly appreciated. Thanks.
|
2017/01/12
|
[
"https://Stackoverflow.com/questions/41604223",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939166/"
] |
try using
`gem install nokogiri -v 1.7.0.1 -- --use-system-libraries=true --with-xml2-include=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/libxml2`
|
try:
gem update --system
then:
xcode-select --install
then:
gem install nokogiri
and finally:
install the rails gem
| 9,882
|
68,368,323
|
I want to run a macro with Python. I am doing:
```
import win32com.client as w3c
def ejecuntar_macro():
xlApp_mrapp = w3c.Dispatch("Excel.Application")
pw_str = str('Plantilla123')
mrapp = r'D:\Proyectos\Tablero estados\Tablero.xlsm'
xlApp_mrapp.Visible = True
xlApp_mrapp.DisplayAlerts = False
wb = xlApp_mrapp.Workbooks.Open(mrapp, False, False, None, pw_str)
xlApp_mrapp.Application.Run("'" + mrapp + "'" + "!Módulo1.guardar_archivo")
wb.Close(True)
xlApp_mrapp.Quit()
ejecuntar_macro()
```
but I keep getting an error:
>
> File ".\ejecucion.py", line 244, in ejecuntar\_macro
> xlApp\_mrapp.Application.Run("'" + mrapp + "'" +
> "!MΓ³dulo1.guardar\_archivo") File "", line 14, in Run File
> "C:\Users\Ruben\AppData\Roaming\Python\Python37\site-packages\win32com\client\dynamic.py",
> line 314, in ApplyTypes result = self.oleobj.InvokeTypes(\*(dispid,
> LCID, wFlags, retType, argTypes) + args) pywintypes.com\_error:
> (-2147352567, 'Ocurrió una excepción.', (0, None, None, None, 0,
> -2146788248), None)
>
>
>
Please, can you help me solve it?
|
2021/07/13
|
[
"https://Stackoverflow.com/questions/68368323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16053579/"
] |
Use `text-align: center;`
Link: <https://developer.mozilla.org/en-US/docs/Web/CSS/text-align>
```html
<div style="text-align:center; width: 150px;">
veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat
</div>
```
|
Give the property `text-align: center` to the element containing the text.
| 9,883
|
60,306,244
|
We are automating the process of creating/modifying tables in our database. We keep our DDLs in a GitHub repo. Our objective is to drop and recreate the table if the definition has changed; otherwise, no change.
Lets say we have a table named `table1`
Steps:
```
1. Query database to get ddl for table1.
2. Get ddl from github repo, check if there is any difference between github & ddl from database server.
3. If there is a difference, drop and create again
4. Else, no change.
```
For comparing, plain string comparison is very naive (a change in whitespace doesn't mean a change in schema).
Is there any API for this comparison? I am specifically looking for Python APIs. The standard diff utility doesn't do a good job of comparing two SQL files: it will report a diff if the order of the fields is different even though the overall DDL may be the same.
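One idea would be to normalise both DDLs before comparing them, e.g. with the `sqlparse` package. A rough sketch (the `ddl_from_db` / `ddl_from_git` names are placeholders, and whether this canonicalises enough, e.g. for field order, is exactly my open question):
```
import sqlparse

def normalize(ddl):
    # collapse case/whitespace/comment differences before comparing
    return sqlparse.format(
        ddl,
        keyword_case="upper",
        identifier_case="lower",
        strip_comments=True,
        reindent=True,
    ).strip()

changed = normalize(ddl_from_db) != normalize(ddl_from_git)
```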
|
2020/02/19
|
[
"https://Stackoverflow.com/questions/60306244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1768610/"
] |
This is not currently supported, but I am 80% sure it is in the roadmap.
An alternative would be to use the SDK to create the same pipeline using `ModuleStep` where I *believe* you can reference a Designer Module by its name to use it like a `PythonScriptStep`
|
The export Designer graph to notebook is in our roadmap. For now, please take a look at the ModuleStep in SDK and let us know if you have any questions.
Thanks,
Lu Zhang | Senior Program Manager | Azure Machine Learning
| 9,886
|
2,859,081
|
I'm trying to create a database connection to my DB2 database in a Python script. When the connection is established, I have to run several different SQL statements.
I googled the problem and have read the ibm\_db API (<http://code.google.com/p/ibm-db/wiki/APIs>) but just can't seem to get it right.
Here is what I got so far:
```
import sys
import getopt
import timeit
import multiprocessing
import random
import os
import re
import ibm_db
import time
from string import maketrans
query_str = None
conn = ibm_db.pconnect("dsn=write","usrname","secret")
query_stmt = ibm_db.prepare(conn, query_str)
ibm_db.execute(query_stmt, "SELECT COUNT(*) FROM accounts")
result = ibm_db.fetch_assoc()
print result
status = ibm_db.close(conn)
```
but I get an error. I really tried everything (or, not everything but pretty damn close) and I can't get it to work.
I just need to make an automatic test Python script that can test different queries with different indexes and so on, and for that I need to create and remove indexes along the way.
Hope someone has a solution or maybe knows about some example code out there I can download and study.
Thanks
Mestika
|
2010/05/18
|
[
"https://Stackoverflow.com/questions/2859081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/188082/"
] |
it should be:
```
query_str = "SELECT COUNT(*) FROM accounts"
conn = ibm_db.pconnect("dsn=write","usrname","secret")
query_stmt = ibm_db.prepare(conn, query_str)
ibm_db.execute(query_stmt)
```
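Note also that `ibm_db.fetch_assoc` takes the statement handle, so the fetch should be `result = ibm_db.fetch_assoc(query_stmt)`; calling it with no arguments will fail as well.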
|
I'm sorry, of course you need the error message. When trying to run my script it gives me this error:
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
ibm_db.execute(query_stmt, "SELECT COUNT(*) FROM accounts")
Exception: Param is not a tuple
```
I'm pretty sure that it is my parameter "SELECT COUNT(\*) FROM accounts" that is the problem, but I have no idea how to fix it or what to put in its place.
| 9,889
|
69,818,851
|
I am running a simple React/Django app with webpack that is receiving this error on build:
```
ERROR in ./src/index.js
Module build failed (from ./node_modules/eslint-loader/dist/cjs.js):
TypeError: Cannot read properties of undefined (reading 'getFormatter')
at getFormatter (**[Relative path]**/frontend/node_modules/eslint-loader/dist/getOptions.js:52:20)
at getOptions (**[Relative path]**/frontend/node_modules/eslint-loader/dist/getOptions.js:30:23)
at Object.loader (**[Relative path]**/frontend/node_modules/eslint-loader/dist/index.js:17:43)
```
Here's my `package.json`
```
{
"name": "frontend",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start:dev": "webpack serve --config webpack.config.dev.js --port 3000",
"clean:build": "rimraf ../static && mkdir ../static",
"prebuild": "run-p clean:build",
"build": "webpack --config webpack.config.prod.js",
"postbuild": "rimraf ../static/index.html"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"axios": "^0.24.0",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-redux": "^7.2.6",
"redux": "^4.1.2",
"redux-thunk": "^2.4.0",
"reselect": "^4.1.1"
},
"devDependencies": {
"@babel/core": "^7.16.0",
"@babel/node": "^7.16.0",
"@babel/preset-env": "^7.16.0",
"@babel/preset-react": "^7.16.0",
"babel-eslint": "^10.1.0",
"babel-loader": "^8.2.3",
"babel-polyfill": "^6.26.0",
"css-loader": "^6.5.0",
"cssnano": "^5.0.9",
"eslint": "^8.1.0",
"eslint-loader": "^4.0.2",
"eslint-plugin-import": "^2.25.2",
"eslint-plugin-react": "^7.26.1",
"eslint-plugin-react-hooks": "^4.2.0",
"html-webpack-plugin": "^5.5.0",
"mini-css-extract-plugin": "^2.4.3",
"npm-run-all": "^4.1.5",
"postcss-loader": "^6.2.0",
"redux-immutable-state-invariant": "^2.1.0",
"rimraf": "^3.0.2",
"style-loader": "^3.3.1",
"webpack": "^5.61.0",
"webpack-cli": "^4.9.1",
"webpack-dev-server": "^4.4.0"
}
}
```
And my `webpack.config.dev.js`
```
const webpack = require('webpack');
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
process.env.NODE_ENV = 'development';
module.exports = {
mode: 'development',
target: 'web',
devtool: 'cheap-module-source-map',
entry: ['babel-polyfill', './src/index'],
output: {
path: path.resolve(__dirname),
publicPath: '/',
filename: 'bundle.js'
},
devServer: {
// stats: 'minimal',
client: {
overlay: true
},
historyApiFallback: true,
// disableHostCheck: true,
headers: { 'Access-Control-Allow-Origin': '*' },
https: false
},
plugins: [
new webpack.DefinePlugin({
'process.env.API_URL': JSON.stringify('http://localhost:8000/api/')
}),
new HtmlWebpackPlugin({
template: './src/index.html',
favicon: './src/favicon.ico'
})
],
module: {
rules: [
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: [
{
loader: 'babel-loader'
},
'eslint-loader'
]
},
{
test: /(\.css)$/,
use: ['style-loader', 'css-loader']
}
]
}
};
```
Unsure if I need to make changes to my config file or if it's an issue with a package I have installed. Any guidance would be helpful as I am not very familiar with webpack's inner workings.
Versions:
* node: v17.0.1
* npm: 8.1.2
* python: 3.9.6 (pretty sure it's a js/webpack issue)
* Django: 3.2.8 (^^^)
|
2021/11/03
|
[
"https://Stackoverflow.com/questions/69818851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7359966/"
] |
eslint-loader is deprecated:
<https://www.npmjs.com/package/eslint-loader>
You may use eslint-webpack-plugin instead:
<https://www.npmjs.com/package/eslint-webpack-plugin>
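Note that the plugin is registered in the `plugins` array rather than as a loader, roughly `new ESLintPlugin({ extensions: ['js', 'jsx'] })`, so the `'eslint-loader'` entry in the babel rule should be removed at the same time.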
|
I found an issue about this in the eslint-loader GitHub repo (<https://github.com/webpack-contrib/eslint-loader/issues/331>), but I don't know if it will be useful for you.
It would also help to have a git repository with the failing code, for better testing. :)
| 9,890
|
51,113,531
|
I am setting up `docker-for-windows` on my private pc.
When I set it up a while ago on my office laptop I had the same issue but it just stopped happening.
So I am stuck with this:
I have a docker-working project (on my other computer) with a `docker-compose.yml` like this:
```
version: '2'
services:
web:
depends_on:
- db
build: .
env_file: ./docker-compose.env
command: bash ./run_web_local.sh
volumes:
- .:/srv/project
ports:
- 8001:8001
links:
- db
- rabbit
restart: always
```
**Dockerfile:**
```
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app
### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project
# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]
# Copy application source code to SRCDIR
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules
# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
```
**docker-compose.env:**
```
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
```
**run\_web\_local.sh:**
```
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
```
When I call `docker-compose up web` I get the following error:
>
> web\_1 | bash: ./run\_web\_local.sh: No such file or directory
>
>
>
* I checked the line endings, they are UNIX
* the file exists on the file system as well as inside the container
* I can call `bash run_web_local.sh` from my windows powershell and inside the container
* I changed the UNIX permissions inside the container
* I left out the `bash` in the `command` in the `docker-compose` command. And tried with backslash, no dot etc.
* I reinstalled docker
* I tried switching to version 3
* docker claims to have a connection to my shared drive C
And: The exact same setup **works** on my other laptop.
Any ideas? All the two million github posts didn't solve the problem for me.
Thanks!
**Update**
Removing `volumes:` from the docker-compose makes it work [like stated here](https://github.com/docker/compose/issues/2548) but I don't have an instant mapping. That's kind of important for me...
|
2018/06/30
|
[
"https://Stackoverflow.com/questions/51113531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1331671/"
] |
I had the same issue recently and the problem went away after opening the sh entrypoint scripts in an editor and changing the line endings to Unix style.
In my case I'm not sure why it happened, because git normally handles line endings well depending on whether the host is Linux or Windows, but I ended up in the same situation anyway.
If the files are mounted both in the container and on the Windows host, then depending on how you edit them, the line endings may be changed to Unix style from inside the container, which also affects the files on the Windows host.
After that, git reported the .sh file as changed, but with no additions and no deletions;
a graphical compare tool showed that only the line endings had changed.
As another troubleshooting approach, you can start the container with the entrypoint overridden to `sh`; from there you can check how Linux sees the entrypoint script, and even run it and see the exact same error.
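A common way to guard against this is to pin line endings at checkout, e.g. a `.gitattributes` rule such as `*.sh text eol=lf`, or to run `dos2unix` on the scripts while building the image.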
|
Do not use `-d` at the end.
Instead of this command
**docker-compose -f start\_tools.yaml up -d**
Use
**docker-compose -f start\_tools.yaml up**
| 9,894
|
63,694,387
|
The below is a Selenium Python script where I am trying to click Sign In by sending the login details via Selenium. However, when I use the `find_element_by_id` method to locate the username and password input areas, the script throws an error:
`Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="usernameOrEmail"]"}`. But when I inspect the webpage, the input element shows the same id that I used in my script.
P.S.: When Selenium opens the browser, please maximize the window, else the code will not work.
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Chrome(executable_path='C://Arighna/chromedriver.exe')
driver.get("https://www.fool.com/")
print(driver.title)
mybutton = driver.find_element_by_id('login-menu-item')
mybutton.click()
delay = 5
WebDriverWait(driver,delay)
email_area = driver.find_element_by_id('usernameOrEmail')
email_area.send_keys(Keys.ENTER)
email_area.send_keys('ar')
WebDriverWait(driver,delay)
pwd_area = driver.find_element_by_id('password')
pwd_area.send_keys(Keys.ENTER)
pwd_area.send_keys('1234')
WebDriverWait(driver,delay)
login_btn = driver.find_element_by_id('btn-login')
login_btn.click()
```
Any help is really appreciated.
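For reference, a minimal explicit-wait sketch that I would expect to help if the problem is timing (assuming the field is not inside an iframe, which I haven't verified):
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# wait until the field is actually rendered instead of a fixed delay
wait = WebDriverWait(driver, 10)
email_area = wait.until(EC.visibility_of_element_located((By.ID, 'usernameOrEmail')))
email_area.send_keys('ar')
```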
|
2020/09/01
|
[
"https://Stackoverflow.com/questions/63694387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8599554/"
] |
don't initialized the variables. use the `nillable` attribute and set it value to `true`
```
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = { "currencyCode", "discountValue", "setPrice" })
@XmlRootElement(name = "countryData")
public class CountryData {
@XmlElement(nillable=true)
protected String currencyCode;
@XmlElement(nillable=true)
protected String discountValue;
@XmlElement(nillable=true)
protected String setPrice;
// getters and setters
}
```
output
```
<currencyCode>GBP</currencyCode>
<discountValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
<setPrice xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
```
|
Although the strings are empty they still contain non-null data and the end tag is generated. Remove the default values of the strings or set them as `null` (a default instance field value):
```
protected String discountValue;
protected String setPrice;
```
The tags become closed:
```
<discountValue/>
<setPrice/>
```
| 9,904
|
1,736,655
|
Before resorting to stackoverflow, i have spend a lot of times looking for the solutions. I have been a linux-user/developer for few years, now shifting to windows-7.
I am looking for seting-up a development environment (mainly c/c++/bash/python) on my windows machine. Solutions i tired -
* VirtuaBox latest, with grml-medium (very light debian-based distro)
Somehow managed to install it in VBox, but there are still lots of issues regarding Guest Additions, sharing files, and screen resolutions. Tired of it now.
* MinGW
Installed it, added it to %PATH%, along with GVIM. Now I can use PowerShell, run gvim, vim, and MinGW from the shell as bash. But no manpages; it is very convenient to have them available locally and offline. I think it does give me a gcc development environment, though.
Do I need MSYS now? I can install it if it provides me with manpages and ssh.
* Cygwin
Have avoided it till now. But I think it will give me manpages, gcc utils, and the latest Python.
* Something called Interix.
Any takers for that? Is it recommended?
What are the best practices? What are you following? I don't have a Linux box to ssh to; well, if the VBox setup works fine at some point, I can then ssh to my VBox. I have lost a lot of time setting it up, so I'm abandoning it for a while.
I think only the VirtualBox solution will let me try things like iptables, or other Linux system frameworks.
I checked this
[Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows)
Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
|
2009/11/15
|
[
"https://Stackoverflow.com/questions/1736655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/132597/"
] |
Here is what I do for Python development on Windows:
* EasyEclipse for Python (includes eclipse, subclipse, pydev)
* GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/)
* Vim and Emacs (for non-IDE editing work)
|
The following suggestions hold if you are not going to do complex template programming, as the C++ IDEs other than Visual Studio SUCK: they cannot efficiently index modern C++ code (e.g. the Boost library).
I would suggest using Netbeans (it has far better support for C++ than eclipse/CDT) with the following two build environments. Both are important if you want to cross-compile and test against POSIX and win32. This is not a silver-bullet, you should test on different variants of UNIX once in a while:
I would suggest installing MinGW and MSYS for Windows development; it's nice when you can use awk, grep, sed etc. on your code :D Generative programming is easier with shell tools as well -- writing generative build scripts is a bitch to do effectively off the command line in Windows (PowerShell might have changed this).
I would ALSO suggest installing Cygwin and using that on the side. Mingw is for programming against the win32 low-level API, Cygwin is for programming against the POSIX standard. Cygwin also compiles a lot of software that you would otherwise have to port.
Also once you get your project up and running you can use CMake as the build environment, it's the best thing since sliced bread :P You can get it to spit out build definitions for anything and everything -- including Visual Studio.
| 9,905
|
16,648,670
|
I am developing the structure of the MySQL database and I've faced a small decisional problem about its structure.
I have 2 tables:
1. All messages published on the site.
2. All comments published on the site.
Every message can have more than one comment associated to it.
What is a better way to make connection between a message and comments related to it?
1. Have a field for comments that contains id of the related message.
2. Have a field for messages that contains an array of ids of related comments in json format.
I think that usually the first method is used and then MySQL query is used to find comments that have message\_id of corresponding message. But how efficient will it be when there are hundreds of thousands of comments?
Would, in this case, decoding the JSON string and accessing comments by their exact unique ids be more efficient and faster?
I am using python for back-end if that matters.
|
2013/05/20
|
[
"https://Stackoverflow.com/questions/16648670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/656100/"
] |
The first option is the way to go. So you'll have:
comment\_id | message\_id | comment\_text | timestamp etc.
For your MySQL table you can specify to build the index over the first two columns for good performance.
10 million comments should work OK, but you could test this in advance with a test scenario yourself.
If you want to plan for more, then after about 100,000 comments you can do the following:
* determine how many comments there are on average per message
* determine how many messages would be required for about 5mio comments
* let's say it takes 50,000 messages for 5mio comments
* add comment\_table1 [..] comment\_table9 to your database
* switch within python: if message\_id > 50,000 -> then look at comment\_table2 etc.
* Of course, you'll have to save the comments accordingly
This should be performant for a large number of entries.
You can adapt the numbers to your individual hosting (performance) environment...
|
Option one is the best approach. You'll want an index on the `message_id` column in the comments table. This allows MySQL to quickly and efficiently pull out all the comments for a particular message, even when there are hundreds of thousands of comments.
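For example, something along the lines of `CREATE INDEX idx_comments_message_id ON comments (message_id);` (table and column names assumed); with InnoDB, declaring a foreign key on `message_id` creates such an index automatically.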
| 9,915
|
71,607,064
|
In openai.py, `Completion.create` is highlighted as an error and is also not working. The error is shown below. What's the problem with the code?
```
response = openai.Completion.create(
engine="text-davinci-002",
prompt="Generate blog topic on: Ethical hacking",
temperature=0.7,
max_tokens=256,
top_p=1,
frequency_penalty=0,
presence_penalty=0
)
$ python openai.py
Traceback (most recent call last):
File "E:\python\openAI\openai.py", line 2, in <module>
import openai
File "E:\python\openAI\openai.py", line 9, in <module>
response = openai.Completion.create(
AttributeError: partially initialized module 'openai' has no attribute 'Completion' (most likely due to a circular import)
```
|
2022/03/24
|
[
"https://Stackoverflow.com/questions/71607064",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16521679/"
] |
For my fellow doofuses going through all the above suggestions and wondering why it's not working:
make sure your file is NOT named `openai.py`, because then it will import itself, because Python.
wasted 2 hours on this nonsense lol.
relevant link [How to fix AttributeError: partially initialized module?](https://stackoverflow.com/questions/59762996/how-to-fix-attributeerror-partially-initialized-module)
|
Try this,
engine="davinci"
| 9,916
|
57,449,963
|
I want to install Ansible on RHEL 8 (CentOS).
To use `yum install ansible` I must enable the EPEL release, but I can't find a good source of the EPEL release for RHEL 8.
**I tried this**
```
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install ansible
```
**The output i got is**
```
Last metadata expiration check: 0:01:26 ago on Sun 11 Aug 2019 12:21:55 PM UTC.
Error:
Problem: conflicting requests
- nothing provides python-setuptools needed by ansible-2.8.2-1.el7.noarch
- nothing provides python-jinja2 needed by ansible-2.8.2-1.el7.noarch
- nothing provides python-six needed by ansible-2.8.2-1.el7.noarch
- nothing provides PyYAML needed by ansible-2.8.2-1.el7.noarch
- nothing provides python2-cryptography needed by ansible-2.8.2-1.el7.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
|
2019/08/11
|
[
"https://Stackoverflow.com/questions/57449963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7179457/"
] |
EPEL8 is not released yet. There are some packages available, but a lot are still being worked on and the repo is not considered "generally available".
For now, you can install Ansible from the Python Package Index (PyPI):
```
yum install python3-pip
pip3 install ansible
```
|
If you are using RHEL 8 then you can use the subscription manager to get Ansible with the host and config file pre-built.
Also, you will need to create an account on <https://developers.redhat.com> before you can do this:
```
subscription-manager register --auto-attach
subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms
yum -y install ansible
ansible --version
```
| 9,919
|
7,330,279
|
I am writing a python interface to a c++ library and am wondering about the correct design of the library.
I have found out (the hard way) that all methods passed to python must be declared static. If I understand correctly, this means that all functions basically must be defined in the same .cpp file. My interface has many functions, so this gets ugly very quickly.
What is the standard way to deal with this problem? Possibilities I could think of:
* don't worry about it and use one looong .cpp file
* compile into more than one library (.so file)
* write a .cpp for each group of functions and #include that .cpp into the body of the main defining cpp file (the one with the PyMethodDef)
they all seem very ugly
|
2011/09/07
|
[
"https://Stackoverflow.com/questions/7330279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446137/"
] |
>
> I have found out (the hard way) that all methods passed to python must
> be declared static. If I understand correctly, this means that all
> functions basically must be defined in the same .cpp file. My
> interface has many functions, so this gets ugly very quickly.
>
>
>
Where did you find this out? It isn't true. The keyword `static` means two different things in C++. There is class-static, which means a class-scoped function is called without an instance of the object (just like a normal function). There is also static linkage, which means your functions do not get added to the global offset table and you'll have a tough time finding them outside of the translation unit (CPP file).
I would recommend looking at [Boost.Python](http://www.boost.org/doc/libs/1_47_0/libs/python/doc/index.html). They have solved many of the problems you would encounter and make it extremely easy to make C++ and Python talk to each other.
|
Why do you say that all functions called by Python have to be
static? It's usual for that to be the case, in order to avoid
name conflicts (since any namespace, etc. will be ignored
because of the `extern "C"`), but whether the function is static
or not is of no consequence.
When interfacing a library in C++, in my experience, it's
generally not a big problem to make it static, and to put all of
the functions in a single translation unit, because the
functions will be just small wrappers which call the actual C++,
and normally, will be automatically generated from some sort of
descripter file; you surely aren't going to write all of the
necessary boilerplate by hand.
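In practice you can declare the wrapper functions in a shared header and define them across several .cpp files; only the translation unit holding the `PyMethodDef` table needs to see those declarations, so nothing forces everything into one file.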
| 9,922
|
7,542,421
|
[Python Challenge #2](http://www.pythonchallenge.com/pc/def/ocr.html)
[Answer I found](http://ymcagodme.blogspot.com/2011/04/python-challenge-level-2.html)
```
FILE_PATH = 'l2-text'
f = open(FILE_PATH)
print ''.join([ t for t in f.read() if t.isalpha()])
f.close()
```
Question: Why is there a 't' before the for loop in `t for t in f.read()`?
I understand the rest of the code except for that one bit.
If I try to remove it I get an error, so what does it do?
Thanks.
|
2011/09/24
|
[
"https://Stackoverflow.com/questions/7542421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/963081/"
] |
`[t for t in f.read() if t.isalpha()]` is a list comprehension. Basically, it takes the given iterable (`f.read()`) and forms a list by taking all the elements read by applying an optional filter (the `if` clause) and a mapping function (the part on the left of the `for`).
However, the mapping part is trivial here, which makes the syntax look a bit redundant: for each element `t` given, it just adds the element value (`t`) to the output list. But more complex expressions are possible; for example, `t*2 for t ...` would duplicate all valid characters.
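A quick illustration of the filter and the mapping parts (a small standalone example, not from the challenge itself):
```
chars = "a1b2c3"
# Identity mapping: keep each alphabetic character as-is.
print([t for t in chars if t.isalpha()])      # ['a', 'b', 'c']
# Non-trivial mapping: duplicate each alphabetic character.
print([t * 2 for t in chars if t.isalpha()])  # ['aa', 'bb', 'cc']
```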
|
This is a [list comprehension](http://www.python.org/doc//current/tutorial/datastructures.html#list-comprehensions), not a `for`-loop.
>
> List comprehensions provide a concise way to create lists.
>
>
>
```
[t for t in f.read() if t.isalpha()]
```
This creates a [`list`](http://www.python.org/doc//current/tutorial/datastructures.html#more-on-lists) of all of the `alpha` characters in the file (`f`). You then [`join()`](http://www.python.org/doc//current/library/string.html#string.join) them all together.
You now have a link to the documentation, which should help you comprehend comprehensions. It's tricky to search for things when you don't know what they're called!
Hope this helps.
| 9,923
|
57,077,432
|
I was trying to add two tuples to create a new sort of nested tuple using the coerce function of Python.
I'm using Python version 3.7, which says that the function isn't defined.
It is supposed to be a built-in function in Python.
|
2019/07/17
|
[
"https://Stackoverflow.com/questions/57077432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11797954/"
] |
Including `Building` will also include `MultiApartmentBuilding` entries (in fact all types deriving from `Building`).
You can use C# 7.0's pattern matching to test and cast at the same time (where `apartments` is the result of the query):
```
foreach (Apartment apartment in apartments) {
// Access common Building field.
Console.WriteLine(apartment.Building.Id);
// Access specialized field from derived building type.
if (apartment.Building is MultiApartmentBuilding maBuilding) {
Console.WriteLine(maBuilding.GroundFloorCount);
}
}
```
If you have many types of buildings, you can use pattern matching in the switch statement
```
switch (apartment.Building)
{
case MultiApartmentBuilding maBuilding:
Console.WriteLine(maBuilding.GroundFloorCount);
break;
case Igloo igloo:
Console.WriteLine(igloo.SnowQuality);
break;
default:
Console.WriteLine("all other building types");
break;
}
```
|
You can't access the child's class attributes. In other words, if you have a Building, you can't access its **MultiApartmentBuilding** attributes, because you don't even know if it really is a **MultiApartmentBuilding**.
What I would do in this case would be to change your **Apartment** class and use the type **MultiApartmentBuilding** instead of **Building**:
```
public class Apartment : EntityBase
{
public int Id { get; set; }
public int BuildingId { get; set; }
public MultiApartmentBuilding MultiApartmentBuilding { get; set; }
public Common.Enums.ApartmentState State { get; set; }
public AccessibilityState Accessibility { get; set; }
public int Floor { get; set; }
public bool IsPentHouse { get; set; }
}
```
| 9,926
|
24,452,972
|
I would like to extract all the lines from the first file (gzipped, i.e. Input.csv.gz) whose 4th field falls within a range given by the second file (Slab.csv),
i.e. between its first field (StartRange) and second field (EndRange), and then populate, slab-wise, the count of rows and the sums of the 4th and 5th fields of the first file.
Input.csv.gz (gzipped)
```
Desc,Date,Zone,Duration,Calls
AB,01-06-2014,XYZ,450,3
AB,01-06-2014,XYZ,642,3
AB,01-06-2014,XYZ,0,0
AB,01-06-2014,XYZ,205,3
AB,01-06-2014,XYZ,98,1
AB,01-06-2014,XYZ,455,1
AB,01-06-2014,XYZ,120,1
AB,01-06-2014,XYZ,0,0
AB,01-06-2014,XYZ,193,1
AB,01-06-2014,XYZ,0,0
AB,01-06-2014,XYZ,161,2
```
Slab.csv
```
StartRange,EndRange
0,0
1,10
11,100
101,200
201,300
301,400
401,500
501,10000
```
Expected Output:
```
StartRange,EndRange,Count,Sum-4,Sum-5
0,0,3,0,0
1,10,NotFound,NotFound,NotFound
11,100,1,98,1
101,200,3,474,4
201,300,1,205,3
301,400,NotFound,NotFound,NotFound
401,500,2,905,4
501,10000,1,642,3
```
I am using the two commands below to get the above output, except for the "NotFound" cases.
```
awk -F, 'NR==FNR{s[NR]=$1;e[NR]=$2;c[NR]=$0;n++;next} {for(i=1;i<=n;i++) if($4>=s[i]&&$4<=e[i]) {print $0,","c[i];break}}' Slab.csv <(gzip -dc Input.csv.gz) >Op_step1.csv
cat Op_step1.csv | awk -F, '{key=$6","$7;++a[key];b[key]=b[key]+$4;c[key]=c[key]+$5} END{for(i in a)print i","a[i]","b[i]","c[i]}' >Op_step2.csv
```
Op\_step2.csv
```
101,200,3,474,4
501,10000,1,642,3
0,0,3,0,0
401,500,2,905,4
11,100,1,98,1
201,300,1,205,3
```
Any suggestions to make it a one-liner command to achieve the expected output? I don't have Perl or Python access.
|
2014/06/27
|
[
"https://Stackoverflow.com/questions/24452972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3350223/"
] |
Here is one way using `awk` and `sort`:
```
awk '
BEGIN {
FS = OFS = SUBSEP = ",";
print "StartRange,EndRange,Count,Sum-4,Sum-5"
}
FNR == 1 { next }
NR == FNR {
ranges[$1,$2]++;
next
}
{
for (range in ranges) {
split(range, tmp, SUBSEP);
if ($4 >= tmp[1] && $4 <= tmp[2]) {
count[range]++;
sum4[range]+=$4;
sum5[range]+=$5;
next
}
}
}
END {
for(range in ranges)
print range, (count[range]?count[range]:"NotFound"), (sum4[range]?sum4[range]:"NotFound"), (sum5[range]?sum5[range]:"NotFound") | "sort -t, -nk1,2"
}' slab input
StartRange,EndRange,Count,Sum-4,Sum-5
0,0,3,NotFound,NotFound
1,10,NotFound,NotFound,NotFound
11,100,1,98,1
101,200,3,474,4
201,300,1,205,3
301,400,NotFound,NotFound,NotFound
401,500,2,905,4
501,10000,1,642,3
```
* Set the Input, Output Field Separators and `SUBSEP` to `,`. Print the Header line.
* If it is the first line skip it.
* Load the entire `slab.txt` into an array called `ranges`.
* For every range in the `ranges` array, split the field to get start and end range. If the 4th column is in the range, increment the count array and add the value to `sum4` and `sum5` array appropriately.
* In the `END` block, iterate through the ranges and print them.
* Pipe the output to `sort` to get the output in order.
|
Here is another option using `perl`, which takes advantage of multi-dimensional arrays and hashes.
```
perl -F, -lane'
BEGIN {
$x = pop;
## Create array of arrays from start and end ranges
## $range = ( [0,0] , [1,10] ... )
(undef, @range)= map { chomp; [split /,/] } <>;
@ARGV = $x;
}
## Skip the first line
next if $. ==1;
## Create hash of hash
## $line = '[0,0]' => { "count" => counts , "sum4" => sum_of_col4 , "sum5" => sum_of_col5 }
for (@range) {
if ($F[3] >= $_->[0] && $F[3] <= $_->[1]) {
$line{"@$_"}{"count"}++;
$line{"@$_"}{"sum4"} +=$F[3];
$line{"@$_"}{"sum5"} +=$F[4];
}
}
}{
print "StartRange,EndRange,Count,Sum-4,Sum-5";
print join ",", @$_,
$line{"@$_"}{"count"} //"NotFound",
$line{"@$_"}{"sum4"} //"NotFound",
$line{"@$_"}{"sum5"} //"NotFound"
for @range
' slab input
StartRange,EndRange,Count,Sum-4,Sum-5
0,0,3,0,0
1,10,NotFound,NotFound,NotFound
11,100,1,98,1
101,200,3,474,4
201,300,1,205,3
301,400,NotFound,NotFound,NotFound
401,500,2,905,4
501,10000,1,642,3
```
| 9,927
|
44,335,494
|
So I downloaded Deuces, code for poker hand evaluations, and originally I think it was in Python 2, because all of the print statements had no parentheses. I fixed all of those, and everything seems to work, except this last part. Here is the code for it:
```
def get_lexographically_next_bit_sequence(self, bits):
"""
Bit hack from here:
http://www-graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
Generator even does this in poker order rank
so no need to sort when done! Perfect.
"""
t = (bits | (bits - 1)) + 1
next = t | ((((t & -t) / (bits & -bits)) >> 1) - 1)
yield next
while True:
t = (next | (next - 1)) + 1
next = t | ((((t & -t) / (next & -next)) >> 1) - 1)
yield next
```
I looked online and found that they are bit operators, but I don't understand why Python doesn't recognize them. Do I have to import something, or are those operators not used in Python 3?
```
File "/Volumes/PROJECTS/deuces/All_poker.py", line 709, in get_lexographically_next_bit_sequence
next = t | ((((t and -t) / (bits and -bits)) // 2) - 1)
```
TypeError: unsupported operand type(s) for |: 'float' and 'float'
This is the error I get and the code can be found at <https://github.com/vitamins/deuces/tree/8222a6505979886171b8a0c581ef667f13c5d165>
It is the last portion of the lookup class
when I write
```
board = [ Card.new('Ah'), Card.new('Kd'), ('Jc') ]
hand = [ Card.new('Qs'),Card.new('Th')]
evaluator=Evaluator()
```
On that last line of code I get the error. All of the code can be found in the link
|
2017/06/02
|
[
"https://Stackoverflow.com/questions/44335494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8062387/"
] |
In accordance with Arrivillaga's comment, I modified what you had posted to this:
```
def get_lexographically_next_bit_sequence(bits):
"""
Bit hack from here:
http://www-graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
Generator even does this in poker order rank
so no need to sort when done! Perfect.
"""
t = (bits | (bits - 1)) + 1
next = t | ((((t & -t) // (bits & -bits)) >> 1) - 1)
yield next
while True:
t = (next | (next - 1)) + 1
next = t | ((((t & -t) // (next & -next)) >> 1) - 1)
yield next
for i, g in enumerate(get_lexographically_next_bit_sequence(123)):
print (g)
if i > 10:
break
```
Do these results seem reasonable?
```
125
126
159
175
183
187
189
190
207
215
219
221
```
|
It was the / symbol; as the gentleman said above, it is supposed to be floor division (//). A quick fix and it works fine.
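To see why, here is a quick illustration of the two operators in Python 3 (a standalone sketch, not part of the deuces code):
```
t, bits = 8, 3
print(t / bits)   # 2.666... -- true division always returns a float in Python 3
print(t // bits)  # 2        -- floor division keeps integer operands integral
# Bitwise | requires integers, so a float operand raises, e.g.:
# 5 | (t / bits)  ->  TypeError: unsupported operand type(s) for |: 'int' and 'float'
```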
| 9,928
|
69,217,390
|
I'm trying to build a website in Python and Flask, however my CSS is not loading. I don't see anything wrong with my code, and I've tried the same code snippet from a few different sites.
My Link:
```html
<link rel="stylesheet" href="{{ url_for('static', filename= 'css/style.css') }}">
```
File structure as below:
[](https://i.stack.imgur.com/eyo3p.png)
>
> Error: 127.0.0.1 - - [16/Sep/2021 20:18:34] "GET /static/css/style.css
> HTTP/1.1" 404 -
>
>
>
|
2021/09/17
|
[
"https://Stackoverflow.com/questions/69217390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11386215/"
] |
The problem was probably with the numpy function 'percentile' and how I passed in my argument to the find\_outliers\_tukey function. So these changes worked for me
step 1
======
1. Include two arguments; one for the name of df, another for the name of the feature.
2. Put the feature argument into the df explicitly.
3. Don't use attribute chaining when accessing the feature and use quantile instead of percentile.
```
def find_outliers_tukey(df:"dataframe", feature:"series") -> "list, list":
"write later"
q1 = df[feature].quantile(0.25)
q3 = df[feature].quantile(0.75)
iqr = q3-q1
floor = q1 -1.5*iqr
ceiling = q3 +1.5*iqr
outlier_indices = list(df.index[ (df[feature] < floor) | (df[feature] > ceiling) ])
#outlier_values = list(df[feature][outlier_indices])
#print(f"outliers are {outlier_values} at indices {outlier_indices}")
#return outlier_indices, outlier_values
return outlier_indices
```
step 2
======
I put all the columns I wanted to remove outliers from into a list.
```
df_columns = list(df.columns[1:56])
```
step 3
======
no change here. Just used 2 arguments instead of 1 for the find\_outliers\_tukey function. Oh and I stored the indices of the outliers just for future use.
```
index_list = []
for feature in df_columns:
index_list.extend(find_outliers_tukey(df, feature))
```
This gave me better statistical results for the columns.
|
For Question 1, your code seems to work fine on my end, but of course I don't have your original data.
For Question 2, there are two problems. The first is that you are passing the column *names* to `find_outliers_tukey` instead of the columns themselves. Use `iteritems` to iterate over pairs of `(column name, column Series)`:
```py
for feature, column in df.iteritems():
tukey_indices, tukey_values = find_outliers_tukey(column)
print(f"Outliers in {feature} are {tukey_values} \n")
```
The second problem, which you'll run into after solving the first, is that your `location` column is not a numeric column, so you won't be able to find outliers for it. Make sure to only iterate over the columns that you actually want to perform the calculation on.
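For instance, a sketch like this (assuming pandas' `select_dtypes` is available in your version) restricts the loop to numeric columns:
```py
# Iterate only over numeric columns, skipping non-numeric ones such as `location`.
for feature, column in df.select_dtypes(include='number').iteritems():
    tukey_indices, tukey_values = find_outliers_tukey(column)
    print(f"Outliers in {feature} are {tukey_values} \n")
```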
| 9,929
|
54,695,126
|
I am trying to parse a webpage and print the links for items (href).
Can you help with where I am going wrong?
```
import requests
from bs4 import BeautifulSoup
link = "https://www.amazon.in/Power-Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031"
def amazon(url):
sourcecode = requests.get(url)
sourcecode_text = sourcecode.text
soup = BeautifulSoup(sourcecode_text)
for link in soup.findALL('a', {'class': 'a-link-normal aok-block a-text-normal'}):
href = link.get('href')
print(href)
amazon(link)
```
Output :
>
> C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\python.exe
> "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self
> Basic/Class\_Test.py" Traceback (most recent call last): File
> "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self
> Basic/Class\_Test.py", line 15, in
> amazon(link) File "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self Basic/Class\_Test.py", line 9, in
> amazon
> soup = BeautifulSoup(sourcecode\_text, 'features="html.parser"') File
> "C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\lib\site-packages\bs4\_\_init\_\_.py",
> line 196, in **init**
> % ",".join(features)) bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: features="html.parser". Do
> you need to install a parser library?
>
>
> Process finished with exit code 1
>
>
>
|
2019/02/14
|
[
"https://Stackoverflow.com/questions/54695126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4992020/"
] |
You can, though, add headers. Also, when you do `find_all('a')`, you can restrict it to tags that actually have an href:
```
import requests
from bs4 import BeautifulSoup
link = "https://www.amazon.in/Power-Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031"
def amazon(url):
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}
sourcecode = requests.get(url, headers=headers)
sourcecode_text = sourcecode.text
soup = BeautifulSoup(sourcecode_text, 'html.parser')
for link in soup.find_all('a', href=True):
href = link.get('href')
print(href)
amazon(link)
```
|
If you try to scrape Amazon right now with `requests`, you won't get anything in return, since Amazon will know that it's a script, and headers won't help (as far as I know).
Instead, in response they will tell the following:
```
To discuss automated access to Amazon data please contact api-services-support@amazon.com.
```
---
You can scrape Amazon using `requests-html` or `selenium` by rendering it.
`requests-html` simple example scraping titles (results will be similar if you open the same link in an incognito tab):
```
from requests_html import HTMLSession
session = HTMLSession()
url = 'https://www.amazon.com/s?k=apple+watch+series+6+band'
r = session.get(url)
r.html.render(sleep=1, keep_page=True, scrolldown = 1)
for container in r.html.find('.a-size-medium'):
title = container.text
print(f"Title: {title}")
```
Output:
```none
Title: New AppleΒ Watch Series 6 (GPS, 40mm) - (Product) RED - Aluminum Case with (Product) RED - Sport Band
Title: SUPCASE [Unicorn Beetle Pro] Designed for Apple Watch Series 6/SE/5/4 [44mm], Rugged Protective Case with Strap Bands(Black)
Title: Spigen Rugged Armor Pro Designed for Apple Watch Band with Case for 44mm Series 6/SE/5/4 - Charcoal Gray
Title: Highly rated and well-priced products
Title: Fitlink Stainless Steel Metal Band for Apple Watch 38/40/42/44mm Replacement Link Bracelet Band Compatible with Apple Watch Series 6 Apple Watch Series 5 Apple Watch Series 1/2/3/4 (Grey,42/44mm)
Title: TalkWorks Compatible for Apple Watch Band 42mm / 44mm Comfort Fit Mesh Loop Stainless Steel Adjustable Magnetic Strap for iWatch Series 6, 5, 4, 3, 2, 1, SE - Rose Gold
Title: COOYA Compatible for Apple Watch Band 44mm 42mm Women Men iWatch Wristband with Protective Rugged Case Sport Strap Adjustable Replacement Band Compatible with Apple Watch Series 6 SE 5 4 3 2, Clear
Title: Stainless Steel Metal Bands Compatible with Apple Watch Band 42mm 44mm, Gold Replacement Strap with Adapter+Case Cover Compatible with iWatch Series 6 5 4 3 2 1 SE Sport
Title: elago W2 Charger Stand Compatible with Apple Watch Series 6/SE/5/4/3/2/1 (44mm, 42mm, 40mm, 38mm), Durable Silicone, Compatible with Nightstand Mode (Black)
Title: Element Case Black Ops Watch Band for Apple Watch Series 4/5/6/SE, 44mm - Black (EMT-522-244A-01)
...
```
| 9,930
|
52,788,039
|
I've been given a task to convert a **Perl script to Python**.
I'm really new to Perl, and while trying to understand the script I came across the command line option `-Sx`.
There is good documentation for these parameters in Perl, but there is not much documentation for the same in Python (I didn't find much info on the official Python site).
My question is: are the command line options `-Sx` the same for both **Perl** and **Python**?
Do they achieve the same task in both?
|
2018/10/12
|
[
"https://Stackoverflow.com/questions/52788039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10496576/"
] |
Questions I asked in a comment:
>
> Have you thought about if that bit of shell you were asking about is really necessary for what you're doing? Or are you just trying to blindly translate without understanding what things are doing?
>
>
>
I'm pretty sure the answers are no and yes respectively. That's not a good place to be when you're trying to translate code from one language to another; you should understand **what** is going on in the original and make your new version do the same thing in whatever way is most appropriate for the new language, and not get trapped into some blind-leading-the-blind cargo cult code where you have no idea what's going on or how to fix it when it invariably doesn't work.
It doesn't help that based on your [other question](https://stackoverflow.com/questions/52785232/what-does-exec-perl-perl-sx-0-1-mean-in-shell-script) your source program that you're trying to translate is rather confusing if you've never seen one like it before.
You have a shell script that, as the only thing it does, is run perl with a program whose source is directly embedded in the shell script. The reason to do this is to have the real script run under different perl installs on the same computer depending on the environment (Personally I'd put the perl code in its own separate file instead of trying to be clever with having it directly in the shell script; [perlbrew documentation](https://perlbrew.pl/Perlbrew-In-Shell-Scripts.html) examples take that approach). Is that something you need to be concerned about with the python version? I'm guessing probably not (and if it is, look into pythonic ways to do it, not obscure perlish ways). Which means the answer to another question
>
> Are you sure you even *need* equivalents [to -S and -x]?
>
>
>
is no, I don't think you do. I think you should just keep it to pure python, making the things those options do irrelevant.
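For illustration, the Python side can then be a plain script with an ordinary entry point (a minimal sketch; nothing about your actual program is assumed):
```
#!/usr/bin/env python
"""Translated script -- no shell/perl bootstrapping layer needed."""
import sys

def main(argv):
    # ... the translated logic goes here ...
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
```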
|
The following snippet is used to support ancient systems that predate the existence of `python`:
```
#!/bin/sh
exec perl -Sx $0 ${1+"$@"}
if 0;
```
Now, [it appears](https://stackoverflow.com/questions/52785232/what-does-exec-perl-perl-sx-0-1-mean-in-shell-script) that you are dealing with a bastardized and modified version, but it makes no more sense to use that. If the caller wants to use a specific `perl`, they should use `"$PERL" script` instead of relying on `script` to use `$PERL`.
So, you should be using the following for the Perl script:
```
#!/usr/bin/env perl
```
or
```
#!/path/to/the/installing/perl
```
So, you should be using the following for the Python script:
```
#!/usr/bin/env python
```
or
```
#!/path/to/the/installing/python
```
| 9,932
|
48,466,337
|
I have been working on creating a python GUI for some work. I would self-describe as a novice when it comes to my Python knowledge. I am using wxPython and wxGlade to help with the GUI development, as well.
The problem is as follows:
I have an empty TextCtrl object and a Button next to it.
The Button is meant to open a FileDialog and populate or replace the TextCtrl with the value of the file location that is selected. I have created the functionality for the button to open the FileDialog but I can't seem to figure out how to populate the TextCtrl with that resulting value.
```
import wx
class frmCheckSubmital(wx.Frame):
def __init__(self, *args, **kwds):
# begin wxGlade: frmCheckSubmitall.__init__
kwds["style"] = kwds.get("style", 0) | wx.DEFAULT_FRAME_STYLE
wx.Frame.__init__(self, *args, **kwds)
self.rbxUtilitySelect = wx.RadioBox(self, wx.ID_ANY, "Utility", choices=["Stormwater", "Sewer", "Water"], majorDimension=1, style=wx.RA_SPECIFY_ROWS)
self.txtFeaturesPath = wx.TextCtrl(self, wx.ID_ANY, "")
self.btnSelectFeatures = wx.Button(self, wx.ID_ANY, "Select")
# selectEvent = lambda event, pathname=txt: self.dialogFeatures(event, pathname)
self.btnSelectFeatures.Bind(wx.EVT_BUTTON, self.dialogFeatures)
self.txtPipesPath = wx.TextCtrl(self, wx.ID_ANY, "")
self.btnSelectPipes = wx.Button(self, wx.ID_ANY, "Select")
self.bxOutput = wx.Panel(self, wx.ID_ANY)
self.cbxDraw = wx.CheckBox(self, wx.ID_ANY, "Draw")
self.btnClear = wx.Button(self, wx.ID_ANY, "Clear")
self.btnZoom = wx.Button(self, wx.ID_ANY, "Zoom")
self.btnRun = wx.Button(self, wx.ID_ANY, "Run", style=wx.BU_EXACTFIT)
self.__set_properties()
self.__do_layout()
# end wxGlade
def __set_properties(self):
# begin wxGlade: frmCheckSubmitall.__set_properties
self.SetTitle("Check Submittal")
self.rbxUtilitySelect.SetSelection(0)
self.btnSelectFeatures.SetMinSize((80, 20))
self.btnSelectPipes.SetMinSize((80, 20))
self.cbxDraw.SetValue(1)
self.btnClear.SetMinSize((50, 20))
self.btnZoom.SetMinSize((50, 20))
# end wxGlade
def __do_layout(self):
# begin wxGlade: frmCheckSubmitall.__do_layout
sizer_1 = wx.BoxSizer(wx.VERTICAL)
sizer_5 = wx.BoxSizer(wx.VERTICAL)
sizer_8 = wx.BoxSizer(wx.HORIZONTAL)
sizer_7 = wx.BoxSizer(wx.HORIZONTAL)
sizer_6 = wx.BoxSizer(wx.HORIZONTAL)
sizer_5.Add(self.rbxUtilitySelect, 0, wx.ALIGN_CENTER | wx.BOTTOM, 10)
lblFeatures = wx.StaticText(self, wx.ID_ANY, "Features: ")
sizer_6.Add(lblFeatures, 0, wx.ALIGN_CENTER | wx.LEFT, 16)
sizer_6.Add(self.txtFeaturesPath, 1, 0, 0)
sizer_6.Add(self.btnSelectFeatures, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5)
sizer_5.Add(sizer_6, 0, wx.EXPAND, 0)
lblPipes = wx.StaticText(self, wx.ID_ANY, "Pipes: ")
sizer_7.Add(lblPipes, 0, wx.ALIGN_CENTER | wx.LEFT | wx.RIGHT, 16)
sizer_7.Add(self.txtPipesPath, 1, 0, 0)
sizer_7.Add(self.btnSelectPipes, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5)
sizer_5.Add(sizer_7, 0, wx.ALL | wx.EXPAND, 0)
sizer_5.Add(self.bxOutput, 1, wx.ALL | wx.EXPAND, 10)
sizer_8.Add(self.cbxDraw, 0, wx.LEFT | wx.RIGHT, 10)
sizer_8.Add(self.btnClear, 0, wx.RIGHT, 10)
sizer_8.Add(self.btnZoom, 0, 0, 0)
sizer_8.Add((20, 20), 1, 0, 0)
sizer_8.Add(self.btnRun, 0, wx.BOTTOM | wx.RIGHT, 10)
sizer_5.Add(sizer_8, 0, wx.EXPAND, 0)
sizer_1.Add(sizer_5, 1, wx.EXPAND, 0)
self.SetSizer(sizer_1)
self.Layout()
self.SetSize((400, 300))
# end wxGlade
# Begin Dialog Method
def dialogFeatures(self, event):
# otherwise ask the user what new file to open
#with wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt",
# style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) as fileDialog:
fileDialog = wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt",
style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST)
if fileDialog.ShowModal() == wx.ID_CANCEL:
return # the user changed their mind
# Proceed loading the file chosen by the user
pathname = fileDialog.GetPath()
self.txtFeaturesPath.SetValue = pathname
try:
with open(pathname, 'r') as file:
self.txtFeaturesPath = file
except IOError:
wx.LogError("Cannot open file '%s'." % newfile)
# End Dialog Method
# end of class frmCheckSubmitall
if __name__ == '__main__':
app=wx.PySimpleApp()
frame = frmCheckSubmital(parent=None, id=-1)
frame.Show()
app.MainLoop()
```
I've tried to do several things and I am just burnt out and in need of some help.
Some things I've tried to do:
- Add a third argument in the dialog method to return that (just not sure where to assign)
- Use a lambda event to try and assign the value with the constructors?
Any help or insight would be greatly appreciated. Thank you!
|
2018/01/26
|
[
"https://Stackoverflow.com/questions/48466337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9272739/"
] |
As others have already pointed out, the way to go is by using the text control's `SetValue`. But here's a small runnable example:
```
import wx
class MyPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
open_file_dlg_btn = wx.Button(self, label="Open FileDialog")
open_file_dlg_btn.Bind(wx.EVT_BUTTON, self.on_open_file)
self.file_path = wx.TextCtrl(self)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(open_file_dlg_btn, 0, wx.ALL, 5)
sizer.Add(self.file_path, 1, wx.ALL|wx.EXPAND, 5)
self.SetSizer(sizer)
def on_open_file(self, event):
wildcard = "Python source (*.py)|*.py|" \
"All files (*.*)|*.*"
dlg = wx.FileDialog(
self, message="Choose a file",
defaultDir='',
defaultFile="",
wildcard=wildcard,
style=wx.FD_OPEN | wx.FD_MULTIPLE | wx.FD_CHANGE_DIR
)
if dlg.ShowModal() == wx.ID_OK:
paths = dlg.GetPath()
print("You chose the following file(s):")
for path in paths:
print(path)
self.file_path.SetValue(str(paths))
dlg.Destroy()
class MyFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None,
title="File Dialogs Tutorial")
panel = MyPanel(self)
self.Show()
if __name__ == '__main__':
app = wx.App(False)
frame = MyFrame()
app.MainLoop()
```
|
Try:
```
self.txtFeaturesPath.SetValue(pathname)
```
You have a few other buggy "features" in your example code, so watch out.
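For reference, here is a minimal corrected sketch of the handler (untested, reusing the names from your code) that calls `SetValue` instead of assigning to it and never rebinds the `txtFeaturesPath` widget:
```
def dialogFeatures(self, event):
    fileDialog = wx.FileDialog(self, "Select the Features File",
                               wildcard="Text files (*.txt)|*.txt",
                               style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST)
    if fileDialog.ShowModal() == wx.ID_CANCEL:
        return  # the user changed their mind
    pathname = fileDialog.GetPath()
    fileDialog.Destroy()
    # Call SetValue(...); assigning to it would replace the method with a string.
    self.txtFeaturesPath.SetValue(pathname)
```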
| 9,933
|
44,060,906
|
I just installed python-vlc via pip and when I try
```
import vlc
```
The follow error message shows up:
```
... ...
File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module>
dll, plugin_path = find_lib()
File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib
dll = ctypes.CDLL('libvlc.dll')
File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__
self._handle = _dlopen(self._name, mode)
builtins.OSError: [WinError 126] The specified module could not be found
```
I am unfamiliar with the ctypes module. What is causing the problem?
|
2017/05/19
|
[
"https://Stackoverflow.com/questions/44060906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7970976/"
] |
The problem has been solved. I was using 64-bit Python and 32-bit VLC. Installing the 64-bit VLC fixed the problem.
|
`python-vlc` on Windows needs to load `libvlc.dll` from VLC. If it's not found in the normal `%PATH%`, it will try to use `pywin32` to look in the registry to find the VLC install path, and fall back to a hard-coded set of
directories after that. The stack trace looks like all of that failed.
Do you have VLC installed?
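If it is installed but simply not on the search path, you can point Python at the install directory before importing `vlc`. A minimal sketch (the path below is the usual 64-bit install location and is only an assumption; adjust it to your system):
```
import os

vlc_dir = r"C:\Program Files\VideoLAN\VLC"  # assumed install location

if hasattr(os, "add_dll_directory"):
    # Python 3.8+ on Windows no longer searches PATH for DLL dependencies.
    os.add_dll_directory(vlc_dir)
else:
    # Older Pythons: prepend to PATH so ctypes.CDLL can locate libvlc.dll.
    os.environ["PATH"] = vlc_dir + os.pathsep + os.environ.get("PATH", "")

import vlc
```
Also remember that the bitness of VLC must match that of your Python interpreter.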
| 9,935
|
68,036,975
|
**Done**
I am just trying to run and replicate the following project: <https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/> . Basically, up to this point I have done everything as in the linked project, but then I got the following issue:
**My Own Dataset - I have tried with the dataframe:**
* I have tried with his original dataset and 100% of his code, but I still get the same error
* A.) having the 2 columns (1st column date and 2nd column target values),
* B.) putting the time column into the index, with the dataframe only containing the target value.
**INPUT CODE:**
```
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
```
**OUTPUT ERROR:**
```
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
1879 try:
-> 1880 c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
1881 except errors.InvalidArgumentError as e:
InvalidArgumentError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16].
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-146-278c5358bee6> in <module>
1 # create and fit the LSTM network
2 model = Sequential()
----> 3 model.add(LSTM(4, input_shape=(1, look_back)))
4 model.add(Dense(1))
5 model.compile(loss='mean_squared_error', optimizer='adam')
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
520 self._self_setattr_tracking = False # pylint: disable=protected-access
521 try:
--> 522 result = method(self, *args, **kwargs)
523 finally:
524 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/sequential.py in add(self, layer)
206 # and create the node connecting the current layer
207 # to the input layer we just created.
--> 208 layer(x)
209 set_inputs = True
210
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
658
659 if initial_state is None and constants is None:
--> 660 return super(RNN, self).__call__(inputs, **kwargs)
661
662 # If any of `initial_state` or `constants` are specified and are Keras
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
944 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
945 return self._functional_construction_call(inputs, args, kwargs,
--> 946 input_list)
947
948 # Maintains info about the `Layer.call` stack.
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1082 # Check input assumptions set after layer building, e.g. input shape.
1083 outputs = self._keras_tensor_symbolic_call(
-> 1084 inputs, input_masks, args, kwargs)
1085
1086 if outputs is None:
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
814 return tf.nest.map_structure(keras_tensor.KerasTensor, output_signature)
815 else:
--> 816 return self._infer_output_signature(inputs, args, kwargs, input_masks)
817
818 def _infer_output_signature(self, inputs, args, kwargs, input_masks):
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
854 self._maybe_build(inputs)
855 inputs = self._maybe_cast_inputs(inputs)
--> 856 outputs = call_fn(inputs, *args, **kwargs)
857
858 self._handle_activity_regularization(inputs, outputs)
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
1250 else:
1251 (last_output, outputs, new_h, new_c,
-> 1252 runtime) = lstm_with_backend_selection(**normal_lstm_kwargs)
1253
1254 states = [new_h, new_c]
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in lstm_with_backend_selection(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask)
1645 # Call the normal LSTM impl and register the CuDNN impl function. The
1646 # grappler will kick in during session execution to optimize the graph.
-> 1647 last_output, outputs, new_h, new_c, runtime = defun_standard_lstm(**params)
1648 _function_register(defun_gpu_lstm, **params)
1649
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
3020 with self._lock:
3021 (graph_function,
-> 3022 filtered_flat_args) = self._maybe_define_function(args, kwargs)
3023 return graph_function._call_flat(
3024 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3442
3443 self._function_cache.missed.add(call_context_key)
-> 3444 graph_function = self._create_graph_function(args, kwargs)
3445 self._function_cache.primary[cache_key] = graph_function
3446
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3287 arg_names=arg_names,
3288 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3289 capture_by_value=self._capture_by_value),
3290 self._function_attributes,
3291 function_spec=self.function_spec,
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
997 _, original_func = tf_decorator.unwrap(python_func)
998
--> 999 func_outputs = python_func(*func_args, **func_kwargs)
1000
1001 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in standard_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask)
1386 input_length=(sequence_lengths
1387 if sequence_lengths is not None else timesteps),
-> 1388 zero_output_for_mask=zero_output_for_mask)
1389 return (last_output, outputs, new_states[0], new_states[1],
1390 _runtime(_RUNTIME_CPU))
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask)
4341 # the value is discarded.
4342 output_time_zero, _ = step_function(
-> 4343 input_time_zero, tuple(initial_states) + tuple(constants))
4344 output_ta = tuple(
4345 tf.TensorArray(
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in step(cell_inputs, cell_states)
1364 z = backend.dot(cell_inputs, kernel)
1365 z += backend.dot(h_tm1, recurrent_kernel)
-> 1366 z = backend.bias_add(z, bias)
1367
1368 z0, z1, z2, z3 = tf.split(z, 4, axis=1)
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in bias_add(x, bias, data_format)
5961 if len(bias_shape) == 1:
5962 if data_format == 'channels_first':
-> 5963 return tf.nn.bias_add(x, bias, data_format='NCHW')
5964 return tf.nn.bias_add(x, bias, data_format='NHWC')
5965 if ndim(x) in (3, 4, 5):
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py in bias_add(value, bias, data_format, name)
3376 else:
3377 return gen_nn_ops.bias_add(
-> 3378 value, bias, data_format=data_format, name=name)
3379
3380
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py in bias_add(value, bias, data_format, name)
689 data_format = _execute.make_str(data_format, "data_format")
690 _, _, _op, _outputs = _op_def_library._apply_op_helper(
--> 691 "BiasAdd", value=value, bias=bias, data_format=data_format, name=name)
692 _result = _outputs[:]
693 if _execute.must_record_gradient():
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords)
748 op = g._create_op_internal(op_type_name, inputs, dtypes=None,
749 name=scope, input_types=input_types,
--> 750 attrs=attr_protos, op_def=op_def)
751
752 # `outputs` is returned as a separate return value so that the output
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
599 return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
600 op_type, captured_inputs, dtypes, input_types, name, attrs, op_def,
--> 601 compute_device)
602
603 def capture(self, tensor, name=None, shape=None):
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
3563 input_types=input_types,
3564 original_op=self._default_original_op,
-> 3565 op_def=op_def)
3566 self._create_op_helper(ret, compute_device=compute_device)
3567 return ret
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
2040 op_def = self._graph._get_op_def(node_def.op)
2041 self._c_op = _create_c_op(self._graph, node_def, inputs,
-> 2042 control_input_ops, op_def)
2043 name = compat.as_str(node_def.name)
2044
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
1881 except errors.InvalidArgumentError as e:
1882 # Convert to ValueError for backwards compatibility.
-> 1883 raise ValueError(str(e))
1884
1885 return c_op
ValueError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16].
```
**Tried Solutions**
* no actual solution in the answers - <https://www.reddit.com/r/tensorflow/comments/ipbse4/valueerror_shape_must_be_at_least_rank_3_but_is/>
* no actual solution in the answers - <https://github.com/tensorflow/recommenders/issues/237>
* no actual solution in the answers, different input code - [ValueError: Shape must be rank 2 but is rank 3 for 'MatMul'](https://stackoverflow.com/questions/50162787/valueerror-shape-must-be-rank-2-but-is-rank-3-for-matmul)
|
2021/06/18
|
[
"https://Stackoverflow.com/questions/68036975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10270590/"
] |
I continue to see this problem in 2022 when using LSTMs or GRUs in Sagemaker with conda\_tensorflow2\_p38 kernel. Here's my workaround:
Early in your notebook, before defining your model, set
```
tf.keras.backend.set_image_data_format("channels_last")
```
I know it looks weird to set image data format when you aren't processing pics, but this somehow works around the dimension error.
To demonstrate that this isn't just a library mismatch in the default kernel, here's something I sometimes add to the beginning of my notebooks to update to the latest library versions (currently TF 2.9.0). It did not solve the error above.
```
import sys
!{sys.executable} -m pip install --upgrade pip tensorflow numpy scikit-learn pandas
```
|
**Solution**
* I switched to the AWS EC2 SageMaker "Python [conda env:tensorflow2\_p36]" kernel, so this is exactly the pre-made environment "tensorflow2\_p36".
* As I have read in some other places, it is probably a library collision, maybe with NumPy.
| 9,940
|
30,284,611
|
I have a python web app that carries out calculations on data you send to it via POST / GET parameters.
The app works perfectly on my machine, but when deployed to openshift, it fails to access the parameters with an errno 32: Broken pipe.
I then used this [quickstart](https://github.com/openshift-quickstart/flask-base) repo to just focus on server code and not app code.
I got as far as differentiating between a POST and a GET request and got stuck there.
Here's the relevant python code:
```
@app.route('/', methods=['GET','POST'])
def index():
result = ""
if request.method == "GET":
name = request.form['name'] if "name" in request.form else ""
result = "We received a GET request and the value for <name> is :%s" % name
elif request.method == "POST":
result = "We received a POST request"
else :
result = "We don't know what type of request we have received"
return result
```
So I just want to know how I can access the parameters.
|
2015/05/17
|
[
"https://Stackoverflow.com/questions/30284611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1677589/"
] |
Don't use Flask's development server in production. Use a proper WSGI server that can handle concurrent requests, like [Gunicorn](http://gunicorn.org/ "Gunicorn"). For now try turning on the server's threaded mode and see if it works.
```
app.run(host="x.x.x.x", port=1234, threaded=True)
```
|
You can get form data from the POST request via:
```
name = request.form.get("name")
```
Refactor:
```
@app.route('/', methods=['GET', 'POST'])
def index():
if request.method == 'POST':
name = request.form.get("name")
result = "We received a POST request and the value for <name> is - {0}".format(name)
else:
result = "This is a GET request"
return result
```
---
Refer to the [official Flask documentation](http://flask.pocoo.org/docs/0.10/api/#incoming-request-data) to learn more about the Request object.
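Note also that parameters sent with a GET request arrive in the query string, which Flask exposes as `request.args` rather than `request.form`. A minimal sketch covering both cases:
```
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        name = request.form.get('name', '')   # form body of a POST
    else:
        name = request.args.get('name', '')   # ?name=... query string of a GET
    return "We received a {0} request and <name> is: {1}".format(request.method, name)
```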
| 9,942
|
4,542,730
|
I have an app with a kind of REST API that I'm using to send emails. However, it currently sends only text email, so I need to know how to modify it to send HTML. Below is the code:
```
from __future__ import with_statement
#!/usr/bin/env python
#
import cgi
import os
import logging
import contextlib
from xml.dom import minidom
from xml.dom.minidom import Document
import exceptions
import warnings
import imghdr
from google.appengine.api import images
from google.appengine.api import users
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext.webapp import template
from google.appengine.api import mail
import wsgiref.handlers
# START Constants
CONTENT_TYPE_HEADER = "Content-Type"
CONTENT_TYPE_TEXT = "text/plain"
XML_CONTENT_TYPE = "application/xml"
XML_ENCODING = "utf-8"
"""
Allows you to specify IP addresses and associated "api_key"s to prevent others from using your app.
Storage and Manipulation methods will check for this "api_key" in the POST/GET params.
Retrieval methods don't use it (however you could enable them to use it, but maybe rewrite so you have a "read" key and a "write" key to prevent others from manipulating your data).
Set "AUTH = False" to disable (allowing anyone use your app and CRUD your data).
To generate a hash/api_key visit https://www.grc.com/passwords.htm
To find your ip visit http://www.whatsmyip.org/
"""
AUTH = {
'000.000.000.000':'JLQ7P5SnTPq7AJvLnUysJmXSeXTrhgaJ',
}
# END Constants
# START Exception Handling
class Error(StandardError):
pass
class Forbidden(Error):
pass
logging.getLogger().setLevel(logging.DEBUG)
@contextlib.contextmanager
def mailExcpHandler(ctx):
try:
yield {}
except (ValueError), exc:
xml_error_response(ctx, 400 ,'app.invalid_parameters', 'The indicated parameters are not valid: ' + exc.message)
except (Forbidden), exc:
xml_error_response(ctx, 403 ,'app.forbidden', 'You don\'t have permission to perform this action: ' + exc.message)
except (Exception), exc:
xml_error_response(ctx, 500 ,'system.other', 'An unexpected error in the web service has happened: ' + exc.message)
def xml_error_response(ctx, status, error_id, error_msg):
ctx.error(status)
doc = Document()
errorcard = doc.createElement("error")
errorcard.setAttribute("id", error_id)
doc.appendChild(errorcard)
ptext = doc.createTextNode(error_msg)
errorcard.appendChild(ptext)
ctx.response.headers[CONTENT_TYPE_HEADER] = XML_CONTENT_TYPE
ctx.response.out.write(doc.toxml(XML_ENCODING))
# END Exception Handling
# START Helper Methods
def isAuth(ip = None, key = None):
if AUTH == False:
return True
elif AUTH.has_key(ip) and key == AUTH[ip]:
return True
else:
return False
# END Helper Methods
# START Request Handlers
class Send(webapp.RequestHandler):
def post(self):
"""
Sends an email based on POST params. It will queue if resources are unavailable at the time.
Returns "Success"
POST Args:
to: the receipent address
from: the sender address (must be a registered GAE email)
subject: email subject
body: email body content
"""
with mailExcpHandler(self):
# check authorised
if isAuth(self.request.remote_addr,self.request.POST.get('api_key')) == False:
raise Forbidden("Invalid Credentials")
# read data from request
mail_to = str(self.request.POST.get('to'))
mail_from = str(self.request.POST.get('from'))
mail_subject = str(self.request.POST.get('subject'))
mail_body = str(self.request.POST.get('body'))
mail.send_mail(mail_from, mail_to, mail_subject, mail_body)
self.response.headers[CONTENT_TYPE_HEADER] = CONTENT_TYPE_TEXT
self.response.out.write("Success")
# END Request Handlers
# START Application
application = webapp.WSGIApplication([
('/send', Send)
],debug=True)
def main():
run_wsgi_app(application)
if __name__ == '__main__':
main()
# END Application
```
|
2010/12/27
|
[
"https://Stackoverflow.com/questions/4542730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/331071/"
] |
Have a look at the [Email message fields](http://code.google.com/intl/it/appengine/docs/python/mail/emailmessagefields.html) of the `send_mail` function.
Here is the parameter you need:
>
> **html**
>
> An HTML version of the body content, for recipients that prefer HTML email.
>
>
>
You should add the `html` input parameter like this:
```
#Your html body
mail_html_body = '<h1>Hello!</h1>'
# read data from request
mail_to = str(self.request.POST.get('to'))
mail_from = str(self.request.POST.get('from'))
mail_subject = str(self.request.POST.get('subject'))
mail_body = str(self.request.POST.get('body'))
mail.send_mail(mail_from,
mail_to,
mail_subject,
mail_body,
html = mail_html_body ) #your html body
```
|
You can use the `html` field of the `EmailMessage` class:
```
message = mail.EmailMessage(sender=emailFrom,subject=emailSubject)
message.to = emailTo
message.body = emailBody
message.html = emailHtml
message.send()
```
| 9,943
|
73,906,061
|
Consider the following python pandas DataFrame:
| ID | country | money | code | money\_add | other | time |
| --- | --- | --- | --- | --- | --- | --- |
| 832932 | Other | NaN | 00000 | NaN | [N2,N2,N4] | 0 days 01:37:00 |
| 217#8# | NaN | NaN | NaN | NaN | [N1,N2,N3] | 2 days 01:01:00 |
| 1329T2 | France | 12131 | 00020 | 3452 | [N1,N1] | 1 days 03:55:00 |
| 124932 | France | NaN | 00016 | NaN | [N2] | 0 days 01:28:00 |
| 194022 | France | NaN | 00000 | NaN | [N4,N3] | 3 days 02:35:00 |
If the `code` column is not `NaN` and the `money` column is `NaN`, we update the values of `money` and `money_add` from the following table, using the `code` and `cod_t` columns as the key.
| cod\_t | money | money\_add |
| --- | --- | --- |
| 00000 | 4532 | 72323 |
| 00016 | 1213 | 23822 |
| 00030 | 1313 | 8393 |
| 00020 | 1813 | 27328 |
Example of the resulting table:
| ID | country | money | code | money\_add | other | time |
| --- | --- | --- | --- | --- | --- | --- |
| 832932 | Other | 4532 | 00000 | 72323 | [N2,N2,N4] | 0 days 01:37:00 |
| 217#8# | NaN | NaN | NaN | NaN | [N1,N2,N3] | 2 days 01:01:00 |
| 1329T2 | France | 12131 | 00020 | 3452 | [N1,N1] | 1 days 03:55:00 |
| 124932 | France | 1213 | 00016 | 23822 | [N2] | 0 days 01:28:00 |
| 194022 | France | 4532 | 00000 | 72323 | [N4,N3] | 3 days 02:35:00 |
User @jezrael gave me the following solution to the problem:
```py
df1 = df1.drop_duplicates('cod_t').set_index('cod_t')
df = df.set_index(df['code'])
df.update(df1, overwrite=False)
df = df.reset_index(drop=True).reindex(df.columns, axis=1)
```
But this code gives me an error that I don't know how to solve:
```
TypeError: The DType <class 'numpy.dtype[timedelta64]'> could not be promoted by <class
'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs.
For example they cannot be stored in a single array unless the dtype is `object`.
The full list of DTypes is: (<class 'numpy.dtype[timedelta64]'>, <class 'numpy.dtype[float64]'>)
```
```
// First DataFrame dtypes
ID object
country object
code object
money float64
money_add float64
other object
time timedelta64[ns]
dtype: object
// Second DataFrame dtypes
cod_t object
money int64
money_add int64
dtype: object
```
I would be grateful if you could help me to solve the error, or suggest an alternative method to using `update`.
|
2022/09/30
|
[
"https://Stackoverflow.com/questions/73906061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18396935/"
] |
Because [`DataFrame.update`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.update.html) is not working well here, an alternative is to first bring in the new columns from the second DataFrame with a left join via [`DataFrame.merge`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html):
```
df2 = df.merge(df1.drop_duplicates('cod_t').rename(columns={'cod_t':'code'}),
on='code',
how='left',
suffixes=('','_'))
print (df2)
ID country money code money_add other time \
0 832932 Other NaN 0.0 NaN [N2, N2, N4] 0 days 01:37:00
1 217#8# NaN NaN NaN NaN [N1, N2, N3] 2 days 01:01:00
2 1329T2 France 12131.0 20.0 3452.0 [N1, N1] 1 days 03:55:00
3 124932 France NaN 16.0 NaN [N2] 0 days 01:28:00
4 194022 France NaN 0.0 NaN [N4, N3] 3 days 02:35:00
money_ money_add_
0 4532.0 72323.0
1 NaN NaN
2 1813.0 27328.0
3 1213.0 23822.0
4 4532.0 72323.0
```
Then get the column names with/without `_`:
```
cols_with_ = df2.columns[df2.columns.str.endswith('_')]
cols_without_ = cols_with_.str.rstrip('_')
print (cols_with_)
Index(['money_', 'money_add_'], dtype='object')
print (cols_without_)
Index(['money', 'money_add'], dtype='object')
```
Pass them to [`DataFrame.combine_first`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html) and finally remove the helper columns:
```
df2[cols_without_] = (df2[cols_without_].combine_first(df2[cols_with_]
.rename(columns=lambda x: x.rstrip('_'))))
df2 = df2.drop(cols_with_, axis=1)
print (df2)
ID country money code money_add other time
0 832932 Other 4532.0 0.0 72323.0 [N2, N2, N4] 0 days 01:37:00
1 217#8# NaN NaN NaN NaN [N1, N2, N3] 2 days 01:01:00
2 1329T2 France 12131.0 20.0 3452.0 [N1, N1] 1 days 03:55:00
3 124932 France 1213.0 16.0 23822.0 [N2] 0 days 01:28:00
4 194022 France 4532.0 0.0 72323.0 [N4, N3] 3 days 02:35:00
```
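A more compact alternative in the same spirit (a sketch, assuming `cod_t` is unique after `drop_duplicates`) builds a lookup indexed by `cod_t` and fills only the masked rows with `Series.map`:
```
lookup = df1.drop_duplicates('cod_t').set_index('cod_t')
mask = df['code'].notna() & df['money'].isna()
df.loc[mask, 'money'] = df.loc[mask, 'code'].map(lookup['money'])
df.loc[mask, 'money_add'] = df.loc[mask, 'code'].map(lookup['money_add'])
```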
|
This is another method different from Jezrael's, but you can try it out.
You can first create a condition variable for your dataframe.
```
condition = (df.code.isin(df1.cod_t) & ~df.code.isnull() & df.money.isna())
columns = ['money', 'money_add']
```
Next, use `df.loc` to do the update.
```
df.loc[condition, columns] = df1.loc[condition, columns]
ID country money code money_add other time
0 832932 Other 4532.0 0.0 72323.0 [N2,N2,N4] 0 days 01:37:00
1 217#8# NaN NaN NaN NaN [N1,N2,N3] 2 days 01:01:00
2 1329T2 France 12131.0 20.0 3452.0 [N1,N1] 1 days 03:55:00
3 124932 France 1813.0 16.0 27328.0 [N2] 0 days 01:28:00
4 194022 France 8932.0 0.0 3204.0 [N4,N3] 3 days 02:35:00
```
Update
------
If the two dataframes have unequal lengths:
```
df1_cond = df1.cod_t.isin(df.loc[condition].code)
result = [i[1:] for row in df.loc[condition].code for i in df1.loc[df1_cond].values if row in i]
df.loc[condition, columns] = result
```
| 9,944
|
43,006,368
|
I am trying to connect to AWS Athena using python. I am trying to use pyathenajdbc to achieve this task. The issue I am having is obtaining a connection. When I run the code below, I receive an error message stating it cannot find the AthenaDriver. ( java.lang.RuntimeException: Class com.amazonaws.athena.jdbc.AthenaDriver not found). I did download this file from AWS and I have confirmed it is sitting in that directory.
```
from mdpbi.rsi.config import *
from mdpbi.tools.functions import mdpLog
from pkg_resources import resource_string
import argparse
import os
import pyathenajdbc
import sys
SCRIPT_NAME = "Athena_Export"
ATHENA_JDBC_CLASSPATH = "/opt/amazon/athenajdbc/AthenaJDBC41-1.0.0.jar"
EXPORT_OUTFILE = "RSI_Export.txt"
EXPORT_OUTFILE_PATH = os.path.join(WORKINGDIR, EXPORT_OUTFILE)
def get_arg_parser():
"""This function returns the argument parser object to be used with this script"""
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
return parser
def main():
args = get_arg_parser().parse_args(sys.argv[1:])
logger = mdpLog(SCRIPT_NAME, LOGDIR)
SQL = resource_string("mdpbi.rsi.athena.resources", "athena.sql")
conn = pyathenajdbc.connect(
s3_staging_dir="s3://athena",
access_key=AWS_ACCESS_KEY_ID,
secret_key=AWS_SECRET_ACCESS_KEY,
region_name="us-east-1",
log_path=LOGDIR,
driver_path=ATHENA_JDBC_CLASSPATH
)
try:
with conn.cursor() as cursor:
cursor.execute(SQL)
logger.info(cursor.description)
logger.info(cursor.fetchall())
finally:
conn.close()
return 0
if __name__ == '__main__':
rtn = main()
sys.exit(rtn)
```
>
> Traceback (most recent call last): File
> "/usr/lib64/python2.7/runpy.py", line 174, in \_run\_module\_as\_main
> "**main**", fname, loader, pkg\_name) File "/usr/lib64/python2.7/runpy.py", line 72, in \_run\_code
> exec code in run\_globals File "/home/ec2-user/jason\_testing/mdpbi/rsi/athena/**main**.py", line 53,
> in
> rtn = main() File "/home/ec2-user/jason\_testing/mdpbi/rsi/athena/**main**.py", line 39,
> in main
> driver\_path=athena\_jdbc\_driver\_path File "/opt/mdpbi/Python\_Envs/2.7.10/local/lib/python2.7/dist-packages/pyathenajdbc/**init**.py",
> line 65, in connect
> driver\_path, \*\*kwargs) File "/opt/mdpbi/Python\_Envs/2.7.10/local/lib/python2.7/dist-packages/pyathenajdbc/connection.py",
> line 68, in **init**
> jpype.JClass(ATHENA\_DRIVER\_CLASS\_NAME) File "/opt/mdpbi/Python\_Envs/2.7.10/lib64/python2.7/dist-packages/jpype/\_jclass.py",
> line 55, in JClass
> raise \_RUNTIMEEXCEPTION.PYEXC("Class %s not found" % name)
>
>
>
|
2017/03/24
|
[
"https://Stackoverflow.com/questions/43006368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3389780/"
] |
The JDBC driver requires Java 8. I was running Java 7. I was able to install another version of Java on the EC2 instance.
<https://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#>
I also had to set the Java version in my code. With these changes, the code now runs as expected.
```
from mdpbi.rsi.config import *
from mdpbi.tools.functions import mdpLog
from pkg_resources import resource_string
import argparse
import os
import pyathenajdbc
import sys
SCRIPT_NAME = "Athena_Export"
def get_arg_parser():
"""This function returns the argument parser object to be used with this script"""
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
return parser
def main():
args = get_arg_parser().parse_args(sys.argv[1:])
logger = mdpLog(SCRIPT_NAME, LOGDIR)
SQL = resource_string("mdpbi.rsi.athena.resources", "athena.sql")
os.environ["JAVA_HOME"] = "/opt/jdk1.8.0_121"
os.environ["JRE_HOME"] = "/opt/jdk1.8.0_121/jre"
os.environ["PATH"] = "/opt/jdk1.8.0_121/bin:/opt/jdk1.8.0_121/jre/bin"
conn = pyathenajdbc.connect(
s3_staging_dir="s3://mdpbi.data.rsi.out/",
access_key=AWS_ACCESS_KEY_ID,
secret_key=AWS_SECRET_ACCESS_KEY,
schema_name="rsi",
region_name="us-east-1"
)
try:
with conn.cursor() as cursor:
cursor.execute(SQL)
logger.info(cursor.description)
logger.info(cursor.fetchall())
finally:
conn.close()
return 0
if __name__ == '__main__':
rtn = main()
sys.exit(rtn)
```
|
Try this:
```
pyathenajdbc.ATHENA_JAR = ATHENA_JDBC_CLASSPATH
```
You won't need to specify the `driver_path` argument in the connect method.
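A minimal sketch of that approach (jar path and credentials taken from the question's code):
```
import pyathenajdbc

# point the library at the locally downloaded driver jar
pyathenajdbc.ATHENA_JAR = "/opt/amazon/athenajdbc/AthenaJDBC41-1.0.0.jar"

conn = pyathenajdbc.connect(
    s3_staging_dir="s3://athena",
    access_key=AWS_ACCESS_KEY_ID,      # from mdpbi.rsi.config, as in the question
    secret_key=AWS_SECRET_ACCESS_KEY,
    region_name="us-east-1",
)  # note: no driver_path argument
```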
| 9,945
|
59,986,413
|
I'm trying to use the new python dataclasses to create some mix-in classes (already as I write this I think it sounds like a rash idea), and I'm having some issues. Behold the example below:
```py
from dataclasses import dataclass
@dataclass
class NamedObj:
name: str
def __post_init__(self):
print("NamedObj __post_init__")
self.name = "Name: " + self.name
@dataclass
class NumberedObj:
number: int = 0
def __post_init__(self):
print("NumberedObj __post_init__")
self.number += 1
@dataclass
class NamedAndNumbered(NumberedObj, NamedObj):
def __post_init__(self):
super().__post_init__()
print("NamedAndNumbered __post_init__")
```
If I then try:
```
nandn = NamedAndNumbered('n_and_n')
print(nandn.name)
print(nandn.number)
```
I get
```py
NumberedObj __post_init__
NamedAndNumbered __post_init__
n_and_n
1
```
Suggesting it has run `__post_init__` for `NamedObj`, but not for `NumberedObj`.
What I would like is to have NamedAndNumbered run `__post_init__` for both of its mix-in classes, Named and Numbered. One might think that it could be done if `NamedAndNumbered` had a `__post_init__` like this:
```
def __post_init__(self):
super(NamedObj, self).__post_init__()
super(NumberedObj, self).__post_init__()
print("NamedAndNumbered __post_init__")
```
But this just gives me an error `AttributeError: 'super' object has no attribute '__post_init__'` when I try to call `NamedObj.__post_init__()`.
At this point I'm not entirely sure if this is a bug/feature with dataclasses or something to do with my probably-flawed understanding of Python's approach to inheritance. Could anyone lend a hand?
|
2020/01/30
|
[
"https://Stackoverflow.com/questions/59986413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9112585/"
] |
This:
```
def __post_init__(self):
super(NamedObj, self).__post_init__()
super(NumberedObj, self).__post_init__()
print("NamedAndNumbered __post_init__")
```
doesn't do what you think it does. `super(cls, obj)` will return a proxy to the class **after** `cls` in `type(obj).__mro__` - so, in your case, to `object`. And the whole point of cooperative `super()` calls is to avoid having to explicitly call each of the parents.
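As a quick check, printing the MRO of the classes from the question (a sketch):
```
print(NamedAndNumbered.__mro__)
# (<class 'NamedAndNumbered'>, <class 'NumberedObj'>, <class 'NamedObj'>, <class 'object'>)
# so super(NamedObj, self) proxies `object`, which has no __post_init__
```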
The way cooperative `super()` calls are intended to work is, well, by being "cooperative" - IOW, everyone in the mro is supposed to relay the call to the next class (actually, the `super` name is a rather sad choice, as it's not about calling "the super class", but about "calling the next class in the mro").
IOW, you want each of your "composable" dataclasses (which are not mixins - mixins only have behaviour) to relay the call, so you can compose them in any order. A first naive implementation would look like:
```
@dataclass
class NamedObj:
name: str
def __post_init__(self):
super().__post_init__()
print("NamedObj __post_init__")
self.name = "Name: " + self.name
@dataclass
class NumberedObj:
number: int = 0
def __post_init__(self):
super().__post_init__()
print("NumberedObj __post_init__")
self.number += 1
@dataclass
class NamedAndNumbered(NumberedObj, NamedObj):
def __post_init__(self):
super().__post_init__()
print("NamedAndNumbered __post_init__")
```
BUT this doesn't work, since for the last class in the mro (here `NamedObj`), the next class in the mro is the builtin `object` class, which doesn't have a `__post_init__` method. The solution is simple: just add a base class that defines this method as a noop, and make all your composable dataclasses inherit from it:
```
class Base(object):
def __post_init__(self):
# just intercept the __post_init__ calls so they
# aren't relayed to `object`
pass
@dataclass
class NamedObj(Base):
name: str
def __post_init__(self):
super().__post_init__()
print("NamedObj __post_init__")
self.name = "Name: " + self.name
@dataclass
class NumberedObj(Base):
number: int = 0
def __post_init__(self):
super().__post_init__()
print("NumberedObj __post_init__")
self.number += 1
@dataclass
class NamedAndNumbered(NumberedObj, NamedObj):
def __post_init__(self):
super().__post_init__()
print("NamedAndNumbered __post_init__")
```
|
The problem (most probably) isn't related to `dataclass`es. The problem is in Python's [method resolution](http://python-history.blogspot.com/2010/06/method-resolution-order.html). Calling a method on `super()` invokes the first matching method found in a parent class along the [MRO](https://www.python.org/download/releases/2.3/mro/) chain. So to make it work you need to call the methods of the parent classes manually:
```
@dataclass
class NamedAndNumbered(NumberedObj, NamedObj):
def __post_init__(self):
NamedObj.__post_init__(self)
NumberedObj.__post_init__(self)
print("NamedAndNumbered __post_init__")
```
Another approach (if you really like `super()`) could be to continue the MRO chain by calling `super()` in all parent classes (but it needs to have a `__post_init__` in the chain):
```
@dataclass
class MixinObj:
def __post_init__(self):
pass
@dataclass
class NamedObj(MixinObj):
name: str
def __post_init__(self):
super().__post_init__()
print("NamedObj __post_init__")
self.name = "Name: " + self.name
@dataclass
class NumberedObj(MixinObj):
number: int = 0
def __post_init__(self):
super().__post_init__()
print("NumberedObj __post_init__")
self.number += 1
@dataclass
class NamedAndNumbered(NumberedObj, NamedObj):
def __post_init__(self):
super().__post_init__()
print("NamedAndNumbered __post_init__")
```
In both approaches:
```
>>> nandn = NamedAndNumbered('n_and_n')
NamedObj __post_init__
NumberedObj __post_init__
NamedAndNumbered __post_init__
>>> print(nandn.name)
Name: n_and_n
>>> print(nandn.number)
1
```
| 9,946
|
38,913,502
|
I am trying to install a Python package on my Ubuntu machine. I am trying to install it through a setup script I have written. The setup.py script looks like this:
```
from setuptools import setup
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
setup(
name = 'pyduino',
description = 'PyDuino project aims to make python interactive with hardware particularly arduino.',
url = '###',
keywords = 'python arduino',
author = '###',
author_email = '###',
version = '0.0.0',
license = 'GNU',
packages = ['pyduino'],
install_requires = ['pyserial'],
classifiers = [
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Topic :: Software Development :: Build Tools',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
scripts=['pyduino/pyduino.py'],
)
```
The package installs in the /usr/local/bin directory. But when I import the modules outside of /usr/local/bin, an import error occurs. I tried changing the path to /usr/local/bin and it works perfectly; the import error doesn't occur. How can I install the package so that I can import the modules from any directory? Thanks in advance...
|
2016/08/12
|
[
"https://Stackoverflow.com/questions/38913502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5507861/"
] |
Try installing your package with pip using this:
```
pip install --install-option="--prefix=$PREFIX_PATH" package_name
```
as described here [Install a Python package into a different directory using pip?](https://stackoverflow.com/questions/2915471/install-a-python-package-into-a-different-directory-using-pip)
I'd also suggest reading up on what pip and virtualenv are.
Good luck :)
EDIT: I found the package can be installed with pip like:
```
pip install --install-option="--prefix=/usr/local/bin" pyduino_mk
```
|
Currently, you're using a `scripts` tag to install your python code. This will put your code in `/usr/local/bin`, which is not in `PYTHONPATH`.
According to [the documentation](https://docs.python.org/2/distutils/setupscript.html), you use `scripts` when you want to install executable scripts (stuff you want to call from command line). Otherwise, you need to use `packages`.
My approach would be like this:
* install the `pyduino/pyduino.py` in the library with something like `packages=['pyduino']`
* create a wrapper (shell or python) capable of calling your installed script and install that via `scripts=[...]`
Using the `packages` tag for your module will install it in `/usr/local/lib/python...`, which is in `PYTHONPATH`. This will allow you to import your script with something like `from pyduino import pyduino`.
For the wrapper script part:
A best practice is to isolate the code to be executed if the script is triggered from command line in something like:
```
def main():
# insert your code here
pass
if __name__ == '__main__':
main()
```
* Assuming there is a `def main()` as above
* create a directory `scripts` in your tree (at the same level with `setup.py`)
* create a file `scripts/pyduino`
* in `scripts/pyduino`:
```
#!/usr/bin/env python
from pyduino.pyduino import main
if __name__ == '__main__':
main()
```
* add `scripts = ['scripts/pyduino']` to your setup.py code
| 9,947
|
49,355,434
|
How do I navigate to another webpage using the same driver with Selenium in python?
I do not want to open a new page. I want to keep on using the same driver.
I thought that the following would work:
```
driver.navigate().to("https://support.tomtom.com/app/contact/")
```
But it doesn't! Navigate seems not to be a 'WebDriver' method
|
2018/03/19
|
[
"https://Stackoverflow.com/questions/49355434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3623123/"
] |
To navigate to a webpage you just write
```
driver.get(url)
```
You can do this multiple times in your program; the same driver instance is reused.
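For example, a sketch reusing one driver for several pages (the first URL is illustrative, the second is from the question):
```
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.tomtom.com/")
driver.get("https://support.tomtom.com/app/contact/")  # same driver, new page
driver.quit()
```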
|
The line of code which you have tried as :
```
driver.navigate().to("https://support.tomtom.com/app/contact/")
```
It is a typical *Java*-based line of code.
However, as per the current **Python API Docs** of [The WebDriver implementation](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webdriver.html#module-selenium.webdriver.remote.webdriver), the **navigate()** method is yet to be supported/implemented.
Instead, you can use the **get(url)** method, which is defined as:
```
def get(self, url):
"""
Loads a web page in the current browser session.
"""
self.execute(Command.GET, {'url': url})
```
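So, applied to the URL from the question, the navigation line simply becomes:
```
driver.get("https://support.tomtom.com/app/contact/")
```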
| 9,948
|
23,726,365
|
I'm using tweepy and trying to run the basic script as shown by this [video](https://www.youtube.com/watch?v=pUUxmvvl2FE). I was previously receiving 401 errors (unsynchronized time zones) but am using the provided keys. I fixed that problem and now I'm getting this result:
```
Traceback (most recent call last):
File "algotest.py", line 25, in <module>
twitterStream.filter(track=["North"])
File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 313, in filter
File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 235, in _start
File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 151, in _run
File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='stream.twitter.com', port=443): Max retries exceeded with url: /1.1/statuses/filter.json?track=North&delimited=length (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
```
Any way around this? Is there some sort of reset option I can trigger?
Thanks in advance
|
2014/05/18
|
[
"https://Stackoverflow.com/questions/23726365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1472645/"
] |
Turns out the solution is simply to wait a day. Who would've thought!
|
I was also getting the same error while using the python-twitter module in my script, but it resolved itself when I tried again after an interval. There is a limit on the number of attempts within a given interval, so we get this error when we exceed that maximum.
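A minimal retry sketch along those lines (the backoff interval is an assumption, not a documented limit):
```
import time

for attempt in range(5):
    try:
        twitterStream.filter(track=["North"])
        break
    except Exception:
        # back off before retrying, in case the per-interval limit was exceeded
        time.sleep(60 * (attempt + 1))
```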
| 9,949
|
33,340,442
|
I am trying to post [some data via ajax](http://jsfiddle.net/g1wvryp7/) to our backend API, but the arrays within the JSON data get turned into weird things by jQuery... for example, the backend (python) sees the jQuery ajax data as a dict of two lists
```
{'subject': ['something'], 'members[]': ['joe','bob']}
```
when it should be
```
{'subject':'something','members':['joe','bob']}
```
The HTML Form extracted from a react component:
```
<div class="the_form">
<form onSubmit={this.handleSubmit}>
<input type="textarea" ref="members" placeholder="spongebob, patrick" />
<input type="submit" value="Add Thread" />
</form>
</div>
```
The jquery ajax code:
```
$.ajax({
beforeSend: function(xhr, settings) {
// csrf validation
},
url: this.props.url,
dataType: 'json',
type: 'POST',
data: {subject: "something", members: ["joe","bob"]},
success: function(data) {
this.setState({data: data});
}.bind(this),
error: function(xhr, status, err) {
console.log(this.props.url, status, err.toString());
}.bind(this)
});
```
I am able, however, to make such a request appropriately with httpie (simple http command line client):
```
echo '{"subject":"something", "members":["joe","bob"]}' | http --auth test:test POST localhost:8000/api/some_page/ --verbose
```
What might I be doing wrong in the javascript request such that the inputs come into the server differently than expected?
|
2015/10/26
|
[
"https://Stackoverflow.com/questions/33340442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1449443/"
] |
That shouldn't be a problem, since you only pass the reference to that list to other objects. That means you have only one big list.
But you should be aware that every object that has a reference to that list can change it.
|
Well, in Java you only pass objects "by reference"...
From the link in the comments:
>
> Letβs be a little bit more specific by what we mean here: objects are
> passed by reference β meaning that a reference/memory address is
> passed when an object is assigned to another β BUT (and this is whatβs
> important) that reference is actually passed by value.
>
>
>
| 9,950
|
62,733,213
|
I'm trying to figure out how to read a file from Azure blob storage.
Studying its documentation, I can see that the [download\_blob](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python#download-blob-offset-none--length-none----kwargs-) method seems to be the main way to access a blob.
This method, though, seems to require downloading the whole blob into a file or some other stream.
Is it possible to read a file from Azure Blob Storage line by line as a stream from the service? (And without having to have downloaded the whole thing first)
|
2020/07/04
|
[
"https://Stackoverflow.com/questions/62733213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1255356/"
] |
**Update 0710:**
In the latest SDK [azure-storage-blob 12.3.2](https://pypi.org/project/azure-storage-blob/), we can also do the same thing by using `download_blob`.
The screenshot of the source code of `download_blob`:
[](https://i.stack.imgur.com/eFH3j.jpg)
So just provide the `offset` and `length` parameters, like below (it works as per my test):
```
blob_client.download_blob(60,100)
```
---
**Original answer:**
You cannot read the blob file line by line, but you can read it in byte ranges: first read 10 bytes of the data, then continue reading bytes 10 to 20, etc.
This is only available in the older version of [python blob storage sdk 2.1.0](https://pypi.org/project/azure-storage-blob/2.1.0/). Install it like below:
```
pip install azure-storage-blob==2.1.0
```
Here is the sample code(here I read the text, but you can change it to use `get_blob_to_stream(container_name,blob_name,start_range=0,end_range=10)` method to read stream):
```
from azure.storage.blob import BlockBlobService, PublicAccess
accountname="xxxx"
accountkey="xxxx"
blob_service_client = BlockBlobService(account_name=accountname,account_key=accountkey)
container_name="test2"
blob_name="a5.txt"
#get the length of the blob file, you can use it if you need a loop in your code to read a blob file.
blob_property = blob_service_client.get_blob_properties(container_name,blob_name)
print("the length of the blob is: " + str(blob_property.properties.content_length) + " bytes")
print("**********")
#get the first 10 bytes data
b1 = blob_service_client.get_blob_to_text(container_name,blob_name,start_range=0,end_range=10)
#you can use the method below to read stream
#blob_service_client.get_blob_to_stream(container_name,blob_name,start_range=0,end_range=10)
print(b1.content)
print("*******")
#get the next range of data
b2=blob_service_client.get_blob_to_text(container_name,blob_name,start_range=10,end_range=50)
print(b2.content)
print("********")
#get the next range of data
b3=blob_service_client.get_blob_to_text(container_name,blob_name,start_range=50,end_range=200)
print(b3.content)
```
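With the 12.x SDK you can also stream a download in manageable pieces via the downloader's `chunks()` iterator; a sketch (connection string and names are placeholders):
```
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<connection-string>",
    container_name="test2",
    blob_name="a5.txt",
)

downloader = blob.download_blob()
for chunk in downloader.chunks():  # bytes arrive piece by piece, not all at once
    handle(chunk)                  # `handle` is a placeholder for your own processing
```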
|
The accepted answer [here](https://stackoverflow.com/questions/33091830/how-best-to-convert-from-azure-blob-csv-format-to-pandas-dataframe-while-running) may be of use to you. The documentation can be found [here](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python-previous).
| 9,951
|
23,201,351
|
I know this is a basic question, but I'm new to python and can't figure out how to solve it.
I have a list like the next example:
```
entities = ["#1= IFCORGANIZATION($,'Autodesk Revit 2014 (ENU)',$,$,$)";, "#5= IFCAPPLICATION(#1,'2014','Autodesk Revit 2014 (ENU)','Revit');"]
```
My problem is how to add the information from the list `"entities"` to a dictionary in the following format:
```
dic = {'#1= IFCORGANIZATION' : ['$','Autodesk Revit 2014 (ENU)','$','$','$'], '#5= IFCAPPLICATION' : ['#1','2014','Autodesk Revit 2014 (ENU)','Revit']}
```
I tried to do this using `find` but I'm getting the following error:
`'list' object has no attribute 'find'`,
and I don't know how to do this without the find method.
|
2014/04/21
|
[
"https://Stackoverflow.com/questions/23201351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3556883/"
] |
If you want to know if a value is in a list you can use `in`, like this:
```
>>> my_list = ["one", "two", "three"]
>>> "two" in my_list
True
>>>
```
If you need to get the position of the value in the list you must use `index`:
```
>>> my_list.index("two")
1
>>>
```
Note that the first element of the list has the 0 index.
|
Here you go:
```
>>> import re
>>> import ast
>>> entities = ["#1= IFCORGANIZATION('$','Autodesk Revit 2014 (ENU)','$','$','$');", "#5= IFCAPPLICATION('#1','2014','Autodesk Revit 2014 (ENU)','Revit');"]
>>> entities = [a.strip(';') for a in entities]
>>> pattern = re.compile(r'\((.*)\)')
>>> dic = {}
>>> for a in entities:
... s = re.search(pattern, a)
... dic[a[:a.index(s.group(0))]] = list(ast.literal_eval(s.group(0)))
>>> dic
{'#5= IFCAPPLICATION': ['#1', '2014', 'Autodesk Revit 2014 (ENU)', 'Revit'], '#1= IFCORGANIZATION': ['$', 'Autodesk Revit 2014 (ENU)', '$', '$', '$']}
```
This regex `r'\((.*)\)'` looks for elements in `(` and `)` and converts them to a list. It makes the sub string appearing before the brackets as the key and the list as the value.
| 9,952
|
69,637,510
|
I am trying to add a local KVM machine dynamically to the Ansible inventory with Ansible 2.11.6.
```
ansible [core 2.11.6]
config file = /home/ansible/ansible.cfg
configured module search path = ['/home/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.0.2
libyaml = True
```
I create the KVM successfully, start it, wait for port 22, and try to add it to the inventory with the following task in play "A":
```
- name: "{{libvirt_maschine_name}}: Add VM to in-memory inventory"
local_action:
module: add_host
name: "{{libvirt_maschine_name}}"
groups: libvirt
ansible_ssh_private_key_file: "{{ansible_user_home}}/.ssh/{{libvirt_maschine_name}}-ssh.key"
ansible_default_ipv4: "{{vm_ip}}"
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
ansible_host: "{{vm_ip}}"
```
When I output the content of hostvars in play "B", I see the groups and hostname as expected:
```
...
"group_names": [
"libvirt"
],
"groups": {
"all": [
"ansible",
"k8smaster"
],
"libvirt": [
"k8smaster"
],
"local_ansible": [
"ansible"
],
"ungrouped": []
},
...
```
When I add
```
- debug: var=group_names
- debug: var=play_hosts
```
to my play "B", i get just the static information of my inventory.
```
TASK [debug] ****************************************************************************************************************************************************************************************************
ok: [ansible] => {
"group_names": [
"local_ansible"
]
}
TASK [debug] ****************************************************************************************************************************************************************************************************
ok: [ansible] => {
"play_hosts": [
"ansible"
]
}
```
My inventory.ini looks like
```
[all]
ansible ansible_host=localhost
[local_ansible]
ansible ansible_host=localhost
[local_ansible:vars]
ansible_ssh_private_key_file=~/.ssh/ansible.key
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ansible_user=ansible
```
Here is a minimal example:
```
---
- name: "Play A"
hosts: all
become: yes
gather_facts: yes
tasks:
- name: "Import variables from file"
include_vars:
file: k8s-single-node_vars.yaml
- name: "Do some basic stuff"
include_role:
name: ansible-core
- name: "Add VM to in-memory inventory"
add_host:
name: "myMaschine"
groups: myGroup
ansible_ssh_private_key_file: "test.key"
ansible_default_ipv4: "192.168.1.1"
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
ansible_host: "192.168.1.1"
- name: "Play B"
hosts: all
become: yes
gather_facts: no
tasks:
- debug: var=hostvars
- debug: var=group_names
- debug: var=play_hosts
- name: test-ping
ping:
```
Therefore, I am not able to run any task against the VM, because Ansible completely ignores it. A ping only works against the host "ansible".
Any idea what I am doing wrong here?
|
2021/10/19
|
[
"https://Stackoverflow.com/questions/69637510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8478100/"
] |
Next.js is a framework for React which helps developers manage Server-Side Rendering in React.
There are many benefits of server-side rendering including: caching specific pages (or caching only what is public and keeping user-specific data or auth-required data to be loaded on the frontend).
Since Next.js does server-side rendering, it sometimes uses the `ReactDOMServer.renderToString()` function in Node.js. It builds the full page as HTML and sends it to the user who is browsing the site. Next.js' intention in generating the page HTML is to maximize the capabilities of CDNs and improve your page's SEO. So not only does it render the React page as HTML; it also makes the API requests and `await`s their return, allowing it to render the list of elements the API responded with.
This can allow developers to take advantage of dynamic aspects of React and run JavaScript function within the rendering code (like: `{products.length <= 0 && <EmptyStateDiv type='products' />}`), but sadly you can't use JavaScript/functionality which lives on the client's/user's browser (as opposed to native to JavaScript/cross-platform Node.js/Browser).
So while all functionality built into JS (like Array prototype methods) can be used without a second thought, other functionality like fetch works cross-platform on Node.js and the frontend/React only thanks to cross-platform libraries like `isomorphic-fetch`. And finally, some functionality lives only within the browser and is not native to JavaScript. This especially includes methods/properties accessible from the specific user's browser; it might be great to check something like `document.innerWidth > 1600`, but that isn't possible since this function runs before a specific client has rendered the page.
Next.js built the page on the server side where things like `document`/`window` are not defined and where it wouldn't make sense for them to exist. (Though you can probably optimize and cache different experiences for mobile vs desktop users, by reading some the client's headers.)
While it runs on the server in Node.js (Server-Side Rendering), `window` is not defined in the Node.js runtime, and the code could crash before rendering. It also wouldn't make sense for `window` to be defined on the server, as `window` typically contains browser-specific properties like clientHeight/clientWidth or allows a user to do client-side redirects with `window.location.assign`, which would be impossible on the server.
|
If this code is run on the server as part of [pre-rendering](https://nextjs.org/docs/basic-features/pages#pre-rendering) (either server-side rendering or static rendering), there will be no `window` (and hence no `window.btoa` for base64-encoding) since there is no browser, but instead node.js's `Buffer` can be utilized.
| 9,955
|
70,290,737
|
I tried to use this command in cmd to install the certifi module:
```
pip install certifi
```
But it throws some warning like this:
```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```
How can I fix it and install certifi? (Python 3.9.6)
|
2021/12/09
|
[
"https://Stackoverflow.com/questions/70290737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17291416/"
] |
There is no question in your title nor in your description.
A mathematical resolution to your problem could be to assign numbers to your directions, for example up=1, down=-1, left=2, right=-2,
and then on a keypress to change direction, check:
```
if not actualPosition + newPosition:
#dont do anything since collision
else:
#do your action
```
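A small Python sketch of that numeric idea (names are illustrative):
```
# up=1, down=-1, left=2, right=-2: opposite directions sum to zero
DIRECTIONS = {"up": 1, "down": -1, "left": 2, "right": -2}

def can_turn(current, new):
    # block only the 180-degree reversal; every other turn is allowed
    return DIRECTIONS[current] + DIRECTIONS[new] != 0

assert not can_turn("up", "down")
assert can_turn("up", "left")
```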
|
You can check if the new direction is different from the old one. If it is different, you update the direction, otherwise you keep the same direction:
```
def new_dir(self, new_dir):
return new_dir if new_dir != self.direction else self.direction
def move_up(self):
self.direction = self.new_dir("up")
def move_down(self):
self.direction = self.new_dir("down")
def move_right(self):
self.direction = self.new_dir("right")
def move_left(self):
self.direction = self.new_dir("left")
```
| 9,956
|
32,277,153
|
I'm using wxPython to code this simple form: a notebook with a scroll bar and a few text controls. I can see the widgets that are viewable on screen, but the ones that need to be scrolled to are not visible. In my code below I can see up to "Enter the Logs" and the appropriate text control for that field, but the "review" fields are missing, along with the submit and cancel buttons.
```
import wx
import wx.lib.filebrowsebutton as filebrowse

class Frame ( wx.Frame ):
def __init__( self, parent ):
wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = u"Test", pos = wx.DefaultPosition, size = wx.Size( 600,300 ), style = wx.DEFAULT_FRAME_STYLE|wx.TAB_TRAVERSAL )
self.SetSizeHintsSz( wx.DefaultSize, wx.DefaultSize )
sizer = wx.BoxSizer( wx.VERTICAL )
self.notebook = wx.Notebook( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, 0 )
self.login = wx.Panel( self.notebook, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
self.notebook.AddPage( self.login, u"Login", False )
self.scroll = wx.ScrolledWindow( self.notebook, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.HSCROLL|wx.VSCROLL )
vbox = wx.BoxSizer(wx.VERTICAL)
# Sizer for widgets inside tabs
inside_sizer_h1 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h2 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h3 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h4 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h5 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h6 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h7 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h8 = wx.BoxSizer(wx.HORIZONTAL)
inside_sizer_h9 = wx.BoxSizer(wx.HORIZONTAL)
#Test Approve Label
self.test_app_label = wx.StaticText(self.scroll, -1 , label="Test Approved By :")
inside_sizer_h1.Add(self.test_app_label, 1, wx.ALL,5)
#Test Approve Combo
self.tes_app_combo = wx.ComboBox(self.scroll, -1, value='None', choices= ['None', 'approver1', 'approver2', 'approver3', 'approver4'] )
inside_sizer_h1.Add(self.tes_app_combo, 1, wx.ALL, 5 )
#Workspace Label
self.wrksp_label = wx.StaticText(self.scroll, -1 , label="Workspace :")
inside_sizer_h2.Add(self.wrksp_label, 1, wx.ALL,5)
#Workspace file selector
self.select_wrksp_dir = filebrowse.DirBrowseButton(self.scroll, -1,labelText = "", toolTip = 'Select tip of your workspace')
inside_sizer_h2.Add(self.select_wrksp_dir, 1, wx.ALL|wx.EXPAND, 5 )
# Issuelist label
self.ar_list_label = wx.StaticText(self.scroll, -1 , label="Issue List :")
inside_sizer_h3.Add(self.ar_list_label, 1, wx.ALL,5)
# Issue Text box
self.ar_list_text = wx.TextCtrl(self.scroll, -1, value=u"Enter The issue, one per line", style=wx.TE_MULTILINE)
inside_sizer_h3.Add(self.ar_list_text, 1, wx.ALL, 5 )
# Summary of change Title
self.change_summary_label = wx.StaticText(self.scroll, -1 , label=u"Summary of change :")
inside_sizer_h4.Add(self.change_summary_label, 1, wx.ALL, 5)
# Summary of change Text Box
self.change_summary_text = wx.TextCtrl(self.scroll, -1, value=u"What componet has changed?",style=wx.TE_MULTILINE)
inside_sizer_h4.Add(self.change_summary_text, 1, wx.ALL, 5 )
# Changed File List Title
self.change_file_list_label = wx.StaticText(self.scroll, -1 , label=u"Changed File List :")
inside_sizer_h5.Add(self.change_file_list_label,1, wx.ALL, 5)
# Changed File List Box
self.change_summary_text = wx.TextCtrl(self.scroll, -1, u' enter list of changed files',style=wx.TE_MULTILINE)
inside_sizer_h5.Add(self.change_summary_text,1, wx.ALL, 5)
# GUI Testing done label
self.testing_done_label = wx.StaticText(self.scroll, -1 , label=u"What tests have you done? :")
inside_sizer_h6.Add(self.testing_done_label,1, wx.ALL, 5)
#FlexGUi Checkbox
self.gui_check_list = wx.CheckListBox(self.scroll, -1, choices=['GUI Builds Successfully', 'GUI Automation Tests', 'CLI Automation Tests'])
inside_sizer_h6.Add(self.gui_check_list,1, wx.ALL, 5)
# GUI Automation test logs label
self.gui_auto_log_label = wx.StaticText(self.scroll, -1 , label=u"Enter the logs :")
inside_sizer_h7.Add(self.gui_auto_log_label,1, wx.ALL, 5)
#GUI Automation test box
self.gui_auto_log = wx.TextCtrl(self.scroll, -1, u'Copy and paste the logs.',style=wx.TE_MULTILINE)
inside_sizer_h7.Add(self.gui_auto_log,1, wx.ALL, 5)
# Review URL Text
self.review_url_label = wx.StaticText(self.scroll, -1 , label=u"Code review URL :")
inside_sizer_h8.Add(self.review_url_label,1, wx.ALL, 5)
#Code Review Textbox
self.review_url_tbox = wx.TextCtrl(self.scroll, -1, value=u"Enter the code review URL",style=wx.TE_MULTILINE)
inside_sizer_h8.Add(self.review_url_tbox,1, wx.ALL, 5)
#Submit button
self.sub_button = wx.Button(self.scroll, label = 'Submit')
inside_sizer_h9.Add(self.sub_button, wx.ALL, 5)
#Cancel button
self.canc_button = wx.Button(self.scroll, label = 'Cancel')
inside_sizer_h9.Add(self.canc_button,1, wx.ALL, 5)
vbox.Add(inside_sizer_h1, 0 , wx.TOP|wx.EXPAND, 40 )
vbox.Add(inside_sizer_h2, 0 , wx.ALL|wx.EXPAND, 5 )
vbox.Add(inside_sizer_h3, 0 , wx.ALL|wx.EXPAND, 5 )
vbox.Add(inside_sizer_h4, 0 , wx.ALL|wx.EXPAND, 10)
vbox.Add(inside_sizer_h5, 0 , wx.ALL|wx.EXPAND, 10)
vbox.Add(inside_sizer_h6, 0 , wx.ALL|wx.EXPAND, 10)
vbox.Add(inside_sizer_h7, 0 , wx.ALL|wx.EXPAND, 10)
vbox.Add(inside_sizer_h8, 0 , wx.ALL|wx.EXPAND, 10)
vbox.Add(inside_sizer_h9, 0 , wx.ALL|wx.EXPAND, 10)
self.Maximize()
self.scroll.Size = self.GetSize()
print self.GetSize()
self.scroll.SetScrollbars(20,25,45,50)
self.SetSizer( vbox )
self.SetSizerAndFit(vbox)
self.Layout()
self.notebook.AddPage( self.scroll, u"Delivery", True )
sizer.Add( self.notebook, 1, wx.EXPAND |wx.ALIGN_RIGHT|wx.ALL, 0 )
self.SetSizer( sizer )
self.Layout()
self.Centre( wx.BOTH )
self.Show()

if __name__ == "__main__":
    app = wx.App()
    Frame(None)
    app.MainLoop()
```
|
2015/08/28
|
[
"https://Stackoverflow.com/questions/32277153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4672258/"
] |
Your Java application is going to run in a Linux container, so you can use any Linux or Java method of setting the timezone.
The easy ones that come to mind...
```
cf set-env <app-name> TZ 'America/Los_Angeles'
cf restage <app-name>
```
or
```
cf set-env <app-name> JAVA_OPTS '-Duser.timezone=Europe/Sofia'
cf restage <app-name>
```
The first should be generic to any application that respects the TZ environment variable. The second is specific to Java.
If you don't want to use `cf set-env` you could alternatively set the environment variables via your `manifest.yml` file.
|
To expand on @DanielMikusa. (Great Answer)
SAMPLE MANIFEST:
```
applications:
- path: .
buildpack: nodejs_buildpack
memory: 128M
instances: 1
name: sampleCronJobService
health-check-type: process
disk_quota: 1024M
env:
TZ: Etc/Greenwich
CF_STAGING_TIMEOUT: 15
CF_STARTUP_TIMEOUT: 15
TIMEOUT: 180
SuperSecret: passwordToTheWorld
```
WHY I KNOW IT WORKS:
```
var date = new Date();
var utc_hour = date.getUTCHours();
var local_hour = date.getHours();
console.log('(Timezone) Local Hour: ' + local_hour + ' UTC Hour: ' + utc_hour);
```
Prints: `(Timezone) Local Hour: 23 UTC Hour: 23`
If I set the manifest to have `TZ: America/Los_Angeles`
It Prints: `(Timezone) Local Hour: 16 UTC Hour: 23`
| 9,961
|
50,238,512
|
I have installed virtualenv on my system using <http://www.pythonforbeginners.com/basics/how-to-use-python-virtualenv>
According to these [guidelines](http://blog.niandrei.com/2016/03/01/install-tensorflow-on-ubuntu-with-virtualenv/#comment-21), the initial step is:
```
$ sudo apt-get install python-pip python-dev python-virtualenv
```
However, I do not want to touch my parent environment. The only reason I believe virtualenv might be of some help for my case is because I have some weird errors that point to python version inconsistencies.
So my requirements are:
* virtualenv with e.g. python 3.5
* tensorflow
* no influence on my parent environment
* ability to disable virtualenv with no side effects
Is it doable, and how?
|
2018/05/08
|
[
"https://Stackoverflow.com/questions/50238512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9332343/"
] |
Have you noticed that in the screenshot you're using version 2.5 in the references for the OpenXml assembly, but the exception message references the newer 2.7.2? That could be your issue: you've referenced 2.5, but ClosedXML expects 2.7.2, and when it doesn't find it, it throws an error.
I would get 2.7.2, update your reference and then see if that works.
|
You have to go to NuGet and update the OpenXml reference to the latest one, which is 2.7, and the issue is fixed.
| 9,962
|
65,871,734
|
**Update: I literally tried 12 suggested solutions but nothing worked at all.**
Is my question missing any details? The suggested answer doesn't solve the problem.
In python I wrote:
```
print(s.cookies.get_dict())
```
where s is my session, the output is:
```
{'lubl': 'https%3A%2F%2Fopenworld.com%2Fconfirm', 'rishum': 'SHjshd2398-'}
```
Now my question is: how can I edit the rishum cookie to append 'test' to it (or, to make things simple, replace it with 'test')?
For example, I want:
```
'rishum': 'SHjshd2398-test'
```
---
**Note: as someone suggested, I tried the following but it didn't work:**
```
print(s.cookies.get_dict())
s.cookies.get_dict()['rishum'] = 'test'
print(s.cookies.get_dict())
```
output before and after is:
```
{'lubl': 'confirm', 'rishum': 'SUqsadkjn239s8n-', 'PHPSESSID': 'nfdskjfn3k42342', 'authchallenge': 'asjkdnjnkj34'}
{'rishum': 'SUqsadkjn239s8n-', 'lubl': 'confirm', 'PHPSESSID': 'nfdskjfn3k42342', 'authchallenge': 'asjkdnjnkj34'}
```
Note the order has changed.
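For context: `get_dict()` builds and returns a new plain dict on every call, so mutating that dict never touches the session's cookie jar. A sketch of editing the jar itself with `requests`' `RequestsCookieJar.set()` (the default domain/path may matter if the cookie was set for a specific domain):
```
old = s.cookies.get('rishum')
s.cookies.set('rishum', old + 'test')  # updates the jar the session actually sends
```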
|
2021/01/24
|
[
"https://Stackoverflow.com/questions/65871734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
What you are looking for is [`pandas.DataFrame.applymap`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html), which applies a function element-wise:
```
df.applymap(lambda x: -1 if x < low else (1 if x > high else 0))
```
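A quick demonstration on a small frame (values chosen arbitrarily):
```
import pandas as pd

df = pd.DataFrame({'a': [1, 5, 9], 'b': [2, 7, 11]})
low, high = 3, 8
print(df.applymap(lambda x: -1 if x < low else (1 if x > high else 0)))
#    a  b
# 0 -1 -1
# 1  0  0
# 2  1  1
```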
The method [`pandas.DataFrame.apply`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html) applies a function along a given axis (default is column-wise).
|
You are sending the lambda the full dataframe, but you need to send it a column:
```
for col in df.columns:
    # apply returns a new Series, so assign the result back
    df[col] = df[col].apply(lambda x: -1 if x < low else (1 if x > high else 0))
```
| 9,963
|
56,105,090
|
I'm trying to upload a file using the built-in wagtaildocs application in my Wagtail application. My Ubuntu 16.04 server was set up with the Digital Ocean tutorial methods for Nginx | Gunicorn | Postgres
Some initial clarifications:
1. In my Nginx config I've set `client_max_body_size` 10000M;
2. In my production settings I have the following lines:
```
MAX_UPLOAD_SIZE = "5242880000"
WAGTAILIMAGES_MAX_UPLOAD_SIZE = 5000 * 1024 * 1024
```
3. My file type is a `.zip`
4. This a production test at this point. I've only implemented a basic wagtail application without an additional modules.
So as long as my file size is below 10GB I should be fine from a configuration standpoint, unless I'm missing something or am blind to a typo.
I've already tried adjusting all the configuration values, even to unreasonably large values. I've tried using other file extensions and it doesn't change my error.
I assume this has to do with a TCP or SSL connection being closed during the session. I've never encountered this problem before so I'd appreciate some help.
Here is my error message:
```
Internal Server Error: /admin/documents/multiple/add/
Traceback (most recent call last):
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.DatabaseError: SSL SYSCALL error: Operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/urls/__init__.py", line 102, in wrapper
return view_func(request, *args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/decorators.py", line 34, in decorated_view
return view_func(request, *args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/utils.py", line 151, in wrapped_view_func
return view_func(request, *args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/views/decorators/vary.py", line 20, in inner_func
response = func(*args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/documents/views/multiple.py", line 60, in add
doc.save()
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 741, in save
force_update=force_update, update_fields=update_fields)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 779, in save_base
force_update, using, update_fields,
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 870, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 908, in _do_insert
using=using, raw=raw)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/query.py", line 1186, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1335, in execute_sql
cursor.execute(sql, params)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.DatabaseError: SSL SYSCALL error: Operation timed out
```
Here are my settings
```
### base.py ###
import os
PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = os.path.dirname(PROJECT_DIR)
SECRET_KEY = os.getenv('SECRET_KEY_WAGTAILDEV')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# Application definition
INSTALLED_APPS = [
'home',
'search',
'wagtail.contrib.forms',
'wagtail.contrib.redirects',
'wagtail.embeds',
'wagtail.sites',
'wagtail.users',
'wagtail.snippets',
'wagtail.documents',
'wagtail.images',
'wagtail.search',
'wagtail.admin',
'wagtail.core',
'modelcluster',
'taggit',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'storages',
]
MIDDLEWARE = [
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'wagtail.core.middleware.SiteMiddleware',
'wagtail.contrib.redirects.middleware.RedirectMiddleware',
]
ROOT_URLCONF = 'wagtaildev.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(PROJECT_DIR, 'templates'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'wagtaildev.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'HOST': os.getenv('DATABASE_HOST_WAGTAILDEV'),
'USER': os.getenv('DATABASE_USER_WAGTAILDEV'),
'PASSWORD': os.getenv('DATABASE_PASSWORD_WAGTAILDEV') ,
'NAME': os.getenv('DATABASE_NAME_WAGTAILDEV'),
'PORT': '5432',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATICFILES_FINDERS = [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]
STATICFILES_DIRS = [
os.path.join(PROJECT_DIR, 'static'),
]
# ManifestStaticFilesStorage is recommended in production, to prevent outdated
# Javascript / CSS assets being served from cache (e.g. after a Wagtail upgrade).
# See https://docs.djangoproject.com/en/2.2/ref/contrib/staticfiles/#manifeststaticfilesstorage
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
# Wagtail settings
WAGTAIL_SITE_NAME = "wagtaildev"
# Base URL to use when referring to full URLs within the Wagtail admin backend -
# e.g. in notification emails. Don't include '/admin' or a trailing slash
BASE_URL = 'http://example.com'
### production.py ###
from .base import *
DEBUG = True
ALLOWED_HOSTS = ['wagtaildev.wesgarlock.com', '127.0.0.1','134.209.230.125']
from wagtaildev.aws.conf import *
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
MAX_UPLOAD_SIZE = "5242880000"
WAGTAILIMAGES_MAX_UPLOAD_SIZE = 5000 * 1024 * 1024
FILE_UPLOAD_TEMP_DIR = str(os.path.join(BASE_DIR, 'tmp'))
```
Here are my Nginx settings
```
server {
listen 80;
server_name wagtaildev.wesgarlock.com;
client_max_body_size 10000M;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_pass http://unix:/home/wesgarlock/run/wagtaildev.sock;
}
}
```
|
2019/05/13
|
[
"https://Stackoverflow.com/questions/56105090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11490399/"
] |
I was never able to solve this problem directly, but I did come up with a hack to get around it.
I'm not a Wagtail or Django expert so I'm sure there is a proper solution to this answer, but anyway here's what I did. If you have any recommendations on improvement feel free to leave a comment.
As a note, this is really documentation to remind me what I did as well. There are many redundant lines of code at this point (05-25-19) because I Frankenstein'ed a lot of code together. I'll edit it down over time.
Here are the tutorials I Frankenstein'ed together to create this solution.
1. <https://www.codingforentrepreneurs.com/blog/large-file-uploads-with-amazon-s3-django/>
2. <http://docs.wagtail.io/en/v2.1.1/advanced_topics/documents/custom_document_model.html>
3. <https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html>
4. <https://medium.com/faun/summary-667d0fdbcdae>
5. <http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-browser-credentials-federated-id.html>
6. <https://kite.com/python/examples/454/threading-wait-for-a-thread-to-finish>
7. <http://docs.celeryproject.org/en/latest/userguide/daemonizing.html#usage-systemd>
There may be a few others, but these were the principles.
Okay here we go.
I created an app called "files" and then a custom Document model in a models.py file. You need to specify `WAGTAILDOCS_DOCUMENT_MODEL = 'files.LargeDocument'` in your settings file. The only reason I did this was to track the behavior I was changing more explicitly. This custom Document model simply extends the standard Document model in Wagtail.
```
#models.py
from django.db import models
from wagtail.documents.models import AbstractDocument
from wagtail.admin.edit_handlers import FieldPanel
# Create your models here.
class LargeDocument(AbstractDocument):
admin_form_fields = (
'file',
)
panels = [
FieldPanel('file', classname='fn'),
]
```
Next you'll need to create a `wagtail_hooks.py` file (the filename Wagtail scans for hooks) with the following content.
```
#wagtail_hook.py
from wagtail.contrib.modeladmin.options import (
ModelAdmin, modeladmin_register)
from .models import LargeDocument
from .views import LargeDocumentAdminView
class LargeDocumentAdmin(ModelAdmin):
model = LargeDocument
menu_label = 'Large Documents' # ditch this to use verbose_name_plural from model
menu_icon = 'pilcrow' # change as required
menu_order = 200 # will put in 3rd place (000 being 1st, 100 2nd)
add_to_settings_menu = False # or True to add your model to the Settings sub-menu
exclude_from_explorer = False # or True to exclude pages of this type from Wagtail's explorer view
create_template_name ='large_document_index.html'
# Now you just need to register your customised ModelAdmin class with Wagtail
modeladmin_register(LargeDocumentAdmin)
```
This allows you to do 2 things:
1. Create a new menu item for uploading Large Documents while maintaining your standard document menu item with its standard functionality.
2. Specify a custom html file for handling large uploads.
Here is the html
```
{% extends "wagtailadmin/base.html" %}
{% load staticfiles cache %}
{% load static wagtailuserbar %}
{% load compress %}
{% load underscore_hyphan_to_space %}
{% load url_vars %}
{% load pagination_value %}
{% load static %}
{% load i18n %}
{% block titletag %}{{ view.page_title }}{% endblock %}
{% block content %}
{% include "wagtailadmin/shared/header.html" with title=view.page_title icon=view.header_icon %}
<!-- Google Signin Button -->
<div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark">
</div>
<!-- Select the file to upload -->
<div class="input-group mb-3">
<link rel="stylesheet" href="{% static 'css/input.css'%}"/>
<div class="custom-file">
<input type="file" class="custom-file-input" id="file" name="file">
<label id="file_label" class="custom-file-label" style="width:auto!important;" for="inputGroupFile02" aria-describedby="inputGroupFileAddon02">Choose file</label>
</div>
<div class="input-group-append">
<span class="input-group-text" id="file_submission_button">Upload</span>
</div>
<div id="start_progress"></div>
</div>
<div class="progress-upload">
<div class="progress-upload-bar" role="progressbar" style="width: 100%;" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% endblock %}
{% block extra_js %}
{{ block.super }}
{{ form.media.js }}
<script src="https://apis.google.com/js/platform.js" async defer></script>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.148.0.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="{% static 'js/awsupload.js' %}"></script>
{% endblock %}
{% block extra_css %}
{{ block.super }}
{{ form.media.css }}
<meta name="google-signin-client_id" content="847336061839-9h651ek1dv7u1i0t4edsk8pd20d0lkf3.apps.googleusercontent.com">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
{% endblock %}
```
I then created some objects in views.py
```
# views.py
from django.shortcuts import render
# Create your views here.
import base64
import hashlib
import hmac
import os
import time
from rest_framework import permissions, status, authentication
from rest_framework.response import Response
from rest_framework.views import APIView
from .config_aws import (
    AWS_UPLOAD_BUCKET,
    AWS_UPLOAD_REGION,
    AWS_UPLOAD_ACCESS_KEY_ID,
    AWS_UPLOAD_SECRET_KEY
)
from .models import LargeDocument
import datetime
from wagtail.contrib.modeladmin.views import WMABaseView
from django.db.models.fields.files import FieldFile
from django.core.files import File
import urllib.request
from django.core.mail import send_mail
from .tasks import file_creator


class FilePolicyAPI(APIView):
    """
    This view is to get the AWS Upload Policy for our s3 bucket.
    What we do here is first create a LargeDocument object instance in our
    Django backend. This is to include the LargeDocument instance in the path
    we will use within our bucket as you'll see below.
    """
    permission_classes = [permissions.IsAuthenticated]
    authentication_classes = [authentication.SessionAuthentication]

    def post(self, request, *args, **kwargs):
        """
        The initial post request includes the filename
        and auth credentials. In our case, we'll use
        Session Authentication but any auth should work.
        """
        filename_req = request.data.get('filename')
        if not filename_req:
            return Response({"message": "A filename is required"}, status=status.HTTP_400_BAD_REQUEST)
        policy_expires = int(time.time() + 5000)
        user = request.user
        username_str = str(request.user.username)
        """
        Below we create the Django object. We'll use this
        in our upload path to AWS.
        Example:
        To-be-uploaded file's name: Some Random File.mp4
        Eventual Path on S3: <bucket>/username/2312/2312.mp4
        """
        doc_obj = LargeDocument.objects.create(uploaded_by_user=user)
        doc_obj_id = doc_obj.id
        doc_obj.title = filename_req
        upload_start_path = "{location}".format(
            location="LargeDocuments/",
        )
        file_extension = os.path.splitext(filename_req)[1]  # splitext() returns (root, ext); keep the extension
        filename_final = "{title}".format(
            title=filename_req,
        )
        """
        The eventual file_upload_path includes the renamed file to the
        Django-stored LargeDocument instance ID. Renaming the file is
        done to prevent issues with user-generated formatted names.
        """
        final_upload_path = "{upload_start_path}/{filename_final}".format(
            upload_start_path=upload_start_path,
            filename_final=filename_final,
        )
        if filename_req and file_extension:
            """
            Save the eventual path to the Django-stored LargeDocument instance
            """
            policy_document_context = {
                "expire": policy_expires,
                "bucket_name": AWS_UPLOAD_BUCKET,
                "key_name": "",
                "acl_name": "public-read",
                "content_name": "",
                "content_length": 524288000,
                "upload_start_path": upload_start_path,
            }
            policy_document = """
            {"expiration": "2020-01-01T00:00:00Z",
              "conditions": [
                {"bucket": "%(bucket_name)s"},
                ["starts-with", "$key", "%(upload_start_path)s"],
                {"acl": "public-read"},
                ["starts-with", "$Content-Type", "%(content_name)s"],
                ["starts-with", "$filename", ""],
                ["content-length-range", 0, %(content_length)d]
              ]
            }
            """ % policy_document_context
            aws_secret = str.encode(AWS_UPLOAD_SECRET_KEY)
            policy_document_str_encoded = str.encode(policy_document.replace(" ", ""))
            url = 'https://thearchmedia.s3.amazonaws.com/'
            policy = base64.b64encode(policy_document_str_encoded)
            signature = base64.b64encode(hmac.new(aws_secret, policy, hashlib.sha1).digest())
            doc_obj.file_hash = signature
            doc_obj.path = final_upload_path
            doc_obj.save()
            data = {
                "policy": policy,
                "signature": signature,
                "key": AWS_UPLOAD_ACCESS_KEY_ID,
                "file_bucket_path": upload_start_path,
                "file_id": doc_obj_id,
                "filename": filename_final,
                "url": url,
                "username": username_str,
            }
            return Response(data, status=status.HTTP_200_OK)


class FileUploadCompleteHandler(APIView):
    permission_classes = [permissions.IsAuthenticated]
    authentication_classes = [authentication.SessionAuthentication]

    def post(self, request, *args, **kwargs):
        file_id = request.POST.get('file')
        size = request.POST.get('fileSize')
        data = {}
        type_ = request.POST.get('fileType')
        if file_id:
            obj = LargeDocument.objects.get(id=int(file_id))
            obj.size = int(size)
            obj.uploaded = True
            obj.type = type_
            obj.save()
            data['id'] = obj.id
            data['saved'] = True
            data['url'] = obj.url
        return Response(data, status=status.HTTP_200_OK)


class ModelFileCompletion(APIView):
    permission_classes = [permissions.IsAuthenticated]
    authentication_classes = [authentication.SessionAuthentication]

    def post(self, request, *args, **kwargs):
        file_id = request.POST.get('file')
        url = request.POST.get('aws_url')
        data = {}
        if file_id:
            obj = LargeDocument.objects.get(id=int(file_id))
            file_creator.delay(obj.pk)  # hand the heavy download off to Celery
            data['test'] = 'process started'
        return Response(data, status=status.HTTP_200_OK)


def LargeDocumentAdminView(request):
    context = super(WMABaseView, self).get_context(request)
    return render(request, 'modeladmin/files/index.html', context)
```
This view goes around the standard file handling system. I didn't want to abandon the standard file handling system or write a new one, which is why I call this a hack and a non-ideal solution.
```
// javascript upload file "awsupload.js"
var id_token; // token we get upon authentication with the Web Identity Provider

function onSignIn(googleUser) {
    var profile = googleUser.getBasicProfile();
    // The ID token you need to pass to your backend:
    id_token = googleUser.getAuthResponse().id_token;
}

$(document).ready(function(){
    // Set up session cookie data. This is Django-related.
    function getCookie(name) {
        var cookieValue = null;
        if (document.cookie && document.cookie !== '') {
            var cookies = document.cookie.split(';');
            for (var i = 0; i < cookies.length; i++) {
                var cookie = jQuery.trim(cookies[i]);
                // Does this cookie string begin with the name we want?
                if (cookie.substring(0, name.length + 1) === (name + '=')) {
                    cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                    break;
                }
            }
        }
        return cookieValue;
    }
    var csrftoken = getCookie('csrftoken');

    function csrfSafeMethod(method) {
        // These HTTP methods do not require CSRF protection.
        return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
    }
    $.ajaxSetup({
        beforeSend: function(xhr, settings) {
            if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
                xhr.setRequestHeader("X-CSRFToken", csrftoken);
            }
        }
    });
    // End session cookie data setup.

    // Declare an empty array for potential uploaded files, and a progress
    // counter so the click handler below never reads an undefined variable.
    var fileItemList = [];
    var progress = 0;

    $(document).on('click', '#file_submission_button', function(event){
        var selectedFiles = $('#file').prop('files');
        var formItem = $(this).parent();
        $.each(selectedFiles, function(index, item){
            uploadFile(item);
        });
        $(this).val('');
        $('.progress-upload-bar').attr('aria-valuenow', progress);
        $('.progress-upload-bar').attr('style', "width:" + progress.toString() + '%');
        $('.progress-upload-bar').text(progress.toString() + '%');
    });

    $(document).on('change', '#file', function(event){
        var selectedFiles = $('#file').prop('files');
        $('#file_label').text(selectedFiles[0].name);
    });

    function constructFormPolicyData(policyData, fileItem) {
        var contentType = fileItem.type != '' ? fileItem.type : 'application/octet-stream';
        var url = policyData.url;
        var filename = policyData.filename;
        var responseUser = policyData.user;
        // var keyPath = 'www/' + responseUser + '/' + filename
        var keyPath = policyData.file_bucket_path;
        var fd = new FormData();
        fd.append('key', keyPath + filename);
        fd.append('acl', 'public-read'); // must match the acl in the signed upload policy
        fd.append('Content-Type', contentType);
        fd.append("AWSAccessKeyId", policyData.key);
        fd.append('Policy', policyData.policy);
        fd.append('filename', filename);
        fd.append('Signature', policyData.signature);
        fd.append('file', fileItem);
        return fd;
    }

    function fileUploadComplete(fileItem, policyData){
        var data = {
            uploaded: true,
            fileSize: fileItem.size,
            file: policyData.file_id,
        };
        $.ajax({
            method: "POST",
            data: data,
            url: "/api/files/complete/",
            success: function(data){
                displayItems(fileItemList);
            },
            error: function(jqXHR, textStatus, errorThrown){
                alert("An error occurred, please refresh the page.");
            }
        });
    }

    function modelComplete(policyData, aws_url){
        var data = {
            file: policyData.file_id,
            aws_url: aws_url
        };
        $.ajax({
            method: "POST",
            data: data,
            url: "/api/files/modelcomplete/",
            success: function(data){
                console.log('model complete success');
            },
            error: function(jqXHR, textStatus, errorThrown){
                alert("An error occurred, please refresh the page.");
            }
        });
    }

    function displayItems(fileItemList){
        var itemList = $('.item-loading-queue');
        itemList.html("");
        $.each(fileItemList, function(index, obj){
            var item = obj.file;
            var id_ = obj.id;
            var order_ = obj.order;
            var html_ = "<div class=\"progress\">" +
                "<div class=\"progress-bar\" role=\"progressbar\" style='width:" + item.progress + "%' aria-valuenow='" + item.progress + "' aria-valuemin=\"0\" aria-valuemax=\"100\"></div></div>";
            itemList.append("<div>" + order_ + ") " + item.name + "<a href='#' class='srvup-item-upload float-right' data-id='" + id_ + ")'>X</a> <br/>" + html_ + "</div><hr/>");
        });
    }

    function uploadFile(fileItem){
        var policyData;
        var newLoadingItem;
        // Get an AWS upload policy for each file uploaded through the POST method.
        // Remember we're creating an instance in the backend, so using POST is needed.
        $.ajax({
            method: "POST",
            data: {
                filename: fileItem.name
            },
            url: "/api/files/policy/",
            success: function(data){
                policyData = data;
            },
            error: function(data){
                alert("An error occurred, please try again later.");
            }
        }).done(function(){
            // Construct the needed data using the policy for AWS.
            var file = fileItem;
            AWS.config.credentials = new AWS.WebIdentityCredentials({
                RoleArn: 'arn:aws:iam::120974195102:role/thearchmedia-google-role',
                ProviderId: null, // this is null for Google
                WebIdentityToken: id_token // access token from the identity provider
            });
            var bucket = 'thearchmedia';
            var key = 'LargeDocuments/' + file.name;
            var aws_url = 'https://' + bucket + '.s3.amazonaws.com/' + key;
            var s3bucket = new AWS.S3({params: {Bucket: bucket}});
            var params = {Key: key, ContentType: file.type, Body: file, ACL: 'public-read'};
            s3bucket.upload(params, function (err, data) {
                $('#results').html(err ? 'ERROR!' : 'UPLOADED: ' + data.Location);
            }).on('httpUploadProgress', function(evt) {
                progress = parseInt((evt.loaded * 100) / evt.total);
                $('.progress-upload-bar').attr('aria-valuenow', progress);
                $('.progress-upload-bar').attr('style', "width:" + progress.toString() + '%');
                $('.progress-upload-bar').text(progress.toString() + '%');
            }).send(function(err, data) {
                alert("File uploaded successfully.");
                fileUploadComplete(fileItem, policyData);
                modelComplete(policyData, aws_url);
            });
        });
    }
});
```
Explanation of the .js and views.py interaction
First, an Ajax call with the file information in the header creates the Document object, but since the file never touches the server, no "File" object is created on the Document instance. That "File" object contains the functionality I needed, so I had to do more. Next, my JavaScript uploads the file to my S3 bucket using the AWS JavaScript SDK. The s3bucket.upload() function from the SDK is robust enough to upload files up to 5GB, and with a few other modifications not included here it can upload up to 5TB (the AWS limit). After the file is uploaded to the S3 bucket, my final API call occurs. It triggers a Celery task that downloads the file to a temporary directory on my remote server. Once the file exists on my remote server, the File object is created and saved to the document model.
The tasks.py file below handles the download of the file from the S3 bucket to the remote server and then creates and saves the File object on the document model.
```
# tasks.py
from .models import LargeDocument
from celery import shared_task
import urllib.request
from django.core.mail import send_mail
from django.core.files import File
import threading


@shared_task
def file_creator(pk_num):
    obj = LargeDocument.objects.get(pk=pk_num)
    tmp_loc = 'tmp/' + obj.title

    def downloadit():
        urllib.request.urlretrieve('https://thearchmedia.s3.amazonaws.com/LargeDocuments/' + obj.title, tmp_loc)

    def after_dwn():
        dwn_thread.join()  # waits until the download thread has finished executing
        send_mail(
            obj.title + ' has finished downloading to the server',
            obj.title + ' downloaded to the server',
            'info@thearchmedia.com',
            ['wes@wesgarlock.com'],
            fail_silently=False,
        )
        # The file now exists locally, so a proper Django File object can be built.
        reopen = open(tmp_loc, 'rb')
        django_file = File(reopen)
        obj.file = django_file
        obj.save()
        send_mail(
            obj.title + ' file model created',
            'File model created for ' + obj.title,
            'info@thearchmedia.com',
            ['wes@wesgarlock.com'],
            fail_silently=False,
        )

    dwn_thread = threading.Thread(target=downloadit)
    dwn_thread.start()
    metadata_thread = threading.Thread(target=after_dwn)
    metadata_thread.start()
```
This process needs to run in Celery because downloading large files takes time and I didn't want to wait around with a browser open. Also, inside this tasks.py a Python `threading.Thread` forces the follow-up work to wait until the file has successfully downloaded to the remote server. If you are new to Celery, here is the start of their documentation (<http://docs.celeryproject.org/en/master/getting-started/introduction.html>)
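For context, a task like this assumes a Celery app is wired into the Django project. Here is a minimal sketch of that wiring; the project name, module name, and the Redis broker URL are assumptions for illustration, not taken from the project above:
```
# celery_app.py: hypothetical minimal Celery wiring for a Django project
import os
from celery import Celery

# Point Celery at the Django settings module (project name is an assumption).
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject', broker='redis://localhost:6379/0')

# Read any CELERY_* settings from Django's settings.py.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Auto-discover tasks.py modules in installed apps, so file_creator is registered.
app.autodiscover_tasks()
```
A worker then picks up the queued `file_creator.delay(...)` calls, e.g. `celery -A myproject worker -l info`.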
I've also added some email notifications to confirm that the processes completed.
Final note: I created a /tmp directory in my project and set up a daily delete of old files to give it tmp-like functionality.
```
# crontab -e, then add a daily entry (the midnight schedule here is an example)
0 0 * * * find ~/thearchmedia/tmp -mtime +1 -delete
```
|
I would suspect that the exception `psycopg2.DatabaseError SSL SYSCALL error: Operation timed out` will happen if the droplet is running out of memory.
Try to create a swap partition or extend your memory.
[Creating a swap partition](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04)
| 9,964
|
9,301,531
|
Why doesn't the following work?
```
$ alias sayHello='/bin/echo "Hello world!"'
$ sayHello
Hello world!
$ nohup sayHello
nohup: appending output to `nohup.out'
nohup: cannot run command `sayHello': No such file or directory
```
(the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
|
2012/02/15
|
[
"https://Stackoverflow.com/questions/9301531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/884447/"
] |
Because the shell doesn't pass aliases on to child processes (except when you use `$()` or backticks).
```
$ alias sayHello='/bin/echo "Hello world!"'
```
Now an alias is known in this shell process, which is fine but only works in this one shell process.
```
$ sayHello
Hello world!
```
Since you ran "sayHello" in the same shell, it worked.
```
$ nohup sayHello
```
Here, a program "nohup" is being started as a child process. Therefore, it will not receive the aliases.
Then it starts the child process "sayHello" - which isn't found.
For your specific problem, it's best to make the new "perl" and "python" look like the normal ones as much as possible. I'd suggest setting the search path.
In your `~/.bash_profile` add
```
export PATH="/my/shiny/interpreters/bin:${PATH}"
```
Then re-login.
Since this is an environment variable, it *will* be passed to all child processes, be they shells or not, so it should now work in most cases.
|
If you look at the [Aliases](http://www.gnu.org/software/bash/manual/html_node/Aliases.html) section of the Bash manual, it says
>
> The first word of each simple command, if unquoted, is checked to see
> if it has an alias.
>
>
>
Unfortunately, it doesn't seem like `bash` has anything like `zsh`'s [global aliases](http://zsh.sourceforge.net/Doc/Release/Shell-Builtin-Commands.html#index-alias), which are expanded in any position.
| 9,965
|
21,822,054
|
I've tried what's suggested in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners)
By doing
```
os.system('GREPDB="my command"')
os.system('/bin/bash -c \'$GREPDB\'')
```
However, no luck. Unfortunately I need to run this command with bash, and subprocess isn't an option in this environment; I'm limited to Python 2.4. Any suggestions to point me in the right direction?
|
2014/02/17
|
[
"https://Stackoverflow.com/questions/21822054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2453153/"
] |
Both commands are executed in different subshells.
Setting variables in the first `system` call does not affect the second `system` call.
You need to put the two commands in one string (combining them with `;`).
```
>>> import os
>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
123
0
```
**NOTE** You need to use `"$GREPDB"` instead of `'$GREPDB'`. Otherwise it is interpreted literally instead of being expanded.
If you can use `subprocess`:
```
>>> import subprocess
>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,
... env={'GREPDB': 'echo 123'})
123
0
```
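One caveat with the `env=` keyword worth flagging: it replaces the *entire* environment of the child process, not just the variables you list. A small sketch that preserves the existing environment while adding the one variable (variable name reused from the question):
```
import os
import subprocess

env = os.environ.copy()        # start from the current environment
env['GREPDB'] = 'echo 123'     # add/override just this variable
subprocess.call('/bin/bash -c "$GREPDB"', shell=True, env=env)
```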
|
The solution below still initially invokes a shell, but it switches to bash for the command you are trying to execute:
```
os.system('/bin/bash -c "echo hello world"')
```
| 9,971
|
14,402,654
|
extreme python/sql beginner here. I've looked around for some help with this but wasn't able to find exactly what I need- would really appreciate any assistance.
As the title indicates, I have a very large text file that I want to parse into a sql database preferably using python. The text file is set up as so:
```
#Parent field 1.1
child 1.1
child 1.1 continued
# Parent field 1.2
child 1.2
# Parent field 1.3
child 1.3 text
child 1.3 text
more child 1.3 text
...
# Parent field 1.88
child 1.88
#Parent field 2.1
child 2.1
etc...
```
Some key points about the list:
* the first field (i.e. 1.1, 2.1) has no space after the #
* the length of each child row has variable character lengths and line breaks but there is always an empty line before the next parent
* there are 88 fields for each parent
* there are hundreds of parent fields
Now, I'd like each parent field (1.1, 1.2, 1.3 --> .88) to be a column and the rows populated by subsequent numbers (2.1, 3.1 -->100s)
Could someone help me set up a python script and give me some direction of how to begin parsing? Let me know if I haven't explained the task properly and I'll promptly provide more details.
Thanks so much!
Ben
EDIT: I just realized that the # of columns is NOT constant 88, it is variable
|
2013/01/18
|
[
"https://Stackoverflow.com/questions/14402654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1756574/"
] |
A few points:
1. From the description it seems like you aim at your data being denormalized in one table. This is generally not a good idea. Split your data into two tables: PARENT and CHILDREN.
PARENT should contain ID and CHILDREN should have at least two columns: PARENT\_ID and CHILD\_VALUE (or smth like it) with PARENT\_ID being ID of a parent, whether linked explicitly as foreign key DB construct or not (depending on database). Then, while parsing, INSERT into table CHILDREN relevant record with VALUES("1.1", "1.1childA"), VALUES("1.1", "1.1childB") and so on.
2. parsing should be trivial: iterate line by line; on a "parent" line, change parent\_id and INSERT into PARENT, and INSERT the child rows into the CHILDREN table as you go. You could also do it in two passes.
Smth like this:
```
#!/usr/bin/python
parent = ''
child = ''
for line in open('input.txt'):
    if line.find('#Parent') > -1 or line.find('# Parent') > -1:
        parent = field_extract(line)  # fun where you extract the parent value
        parent_id = ...  # write it down or generate
        # INSERT into PARENT
    elif line.strip():  # skip the blank separator lines
        child = field_extract(line)
        # INSERT into CHILDREN with parent_id and child values
```
Although... I shudder when I see smth so primitive. I'd urge you to learn the Pyparsing module; it's absolutely great for this kind of work.
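To make that concrete, here is a minimal self-contained sketch using `sqlite3` and the two-table layout described above (the table and column names, and the assumption that the text after the `#` is the field label, are made up for illustration):
```
#!/usr/bin/python
import sqlite3

conn = sqlite3.connect('fields.db')
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS parent (id INTEGER PRIMARY KEY, label TEXT)")
cur.execute("CREATE TABLE IF NOT EXISTS children (parent_id INTEGER, child_value TEXT)")

parent_id = None
for line in open('input.txt'):
    line = line.strip()
    if line.startswith('#'):
        label = line.lstrip('# ')            # e.g. "Parent field 1.1"
        cur.execute("INSERT INTO parent (label) VALUES (?)", (label,))
        parent_id = cur.lastrowid            # remember which parent owns the next rows
    elif line and parent_id is not None:     # skip blank separator lines
        cur.execute("INSERT INTO children (parent_id, child_value) VALUES (?, ?)",
                    (parent_id, line))

conn.commit()
conn.close()
```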
|
You should look into **file handling** in Python.
The `open()` and `.readlines()` methods and lists will help you **a lot**.
For example:
```
f = open("NAMEOFTXTFILE.TXT", "r")  # r for read, w for write, a for append
cell = f.readlines()  # reads the whole file into a list, one line per element
f.seek(0)  # just takes the cursor back to the start of the document
print cell[2]  # prints the third line (index 2)
```
Then, from there, you can send `cell[2]` along with your SQL statements.
| 9,981
|
36,064,495
|
currently I need to make some distance calculation. For this I am trying the following on my ipython-notebook (version 4.0.4):
```
from geopy.distance import vincenty
ig_gruendau = (50.195883, 9.115557)
delphi = (49.99908,19.84481)
print(vincenty(ig_gruendau,delphi).miles)
```
Unfortunately I receive the following error when running the code above: ImportError: No module named 'geopy'
Since I am pretty new at Python, I wonder how I can install this module (without admin rights), or what other simple options I have for these calculations?
Thanks,
ML
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36064495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5446609/"
] |
You need to install the missing module in your python installation. So you have to run the command:
```
pip install geopy
```
in your terminal. If you don't have pip, you'll have to install it using:
```
easy_install pip
```
and if both commands fail with `Permission denied`, then you'll have to either launch the command as root:
```
sudo easy_install pip
sudo pip install geopy
```
or for pip, install it only for your user:
```
pip install geopy --user
```
And for future reference, whenever you get that kind of error:
```
ImportError: No module named 'XXXXX'
```
you can search for it on pypi using pip:
```
% pip search XXXXX
```
and in your case:
```
% pip search geopy
tornado-geopy (0.1.0) - tornado-geopy is an asynchronous version of the awesome geopy library.
geopy.jp (0.1.0) - Geocoding library for Python.
geopy.jp-2.7 (0.1.0) - Geocoding library for Python.
geopy (1.11.0) - Python Geocoding Toolbox
```
HTH
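One caveat worth flagging for readers on a newer geopy: `vincenty` was removed in geopy 2.0 in favour of `geodesic`, so the import in the question would need updating, roughly like this:
```
from geopy.distance import geodesic  # replaces vincenty in geopy >= 2.0

ig_gruendau = (50.195883, 9.115557)
delphi = (49.99908, 19.84481)
print(geodesic(ig_gruendau, delphi).miles)
```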
|
If you are working inside an Anaconda environment, even after installing with `pip install` you may still have to use:
```
conda install -c conda-forge geopy
```
This pulls the package from the conda-forge channel so that it gets installed into the Anaconda environment itself.
| 9,982
|
34,013,185
|
Let's say that I have this list in python
```
A = ["(a,1)", "(b,2)", "(c,3)", "(d,4)"]
```
so how can I print it out in the following format:
```
(a,1), (b,2), (c,3), (d,4)
```
using one line, preferably without an explicit for loop
Thanks in advance
|
2015/12/01
|
[
"https://Stackoverflow.com/questions/34013185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5482492/"
] |
When A is a list of str:
```
print(', '.join(A))
```
Or more general:
```
print(', '.join(map(str, A)))
```
|
In your case, the code below will work:
```
print(', '.join(A))
```
| 9,983
|
49,514,684
|
I'm relatively new to using sklearn and python for data analysis and am trying to run some linear regression on a dataset that I loaded from a `.csv` file.
I have loaded my data into `train_test_split` without any issues, but when I try to fit my training data I receive an error `ValueError: Expected 2D array, got 1D array instead: ... Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.`.
Error at `model = lm.fit(X_train, y_train)`
Because of my freshness with working with these packages, I'm trying to determine if this is the result of not setting my imported csv to a pandas data frame before running the regression or if this has to do with something else.
My CSV is in the format of:
```
Month,Date,Day of Week,Growth,Sunlight,Plants
7,7/1/17,Saturday,44,611,26
7,7/2/17,Sunday,30,507,14
7,7/5/17,Wednesday,55,994,25
7,7/6/17,Thursday,50,1014,23
7,7/7/17,Friday,78,850,49
7,7/8/17,Saturday,81,551,50
7,7/9/17,Sunday,59,506,29
```
Here is how I set up the regression:
```
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
organic = pd.read_csv("linear-regression.csv")
organic.columns
Index(['Month', 'Date', 'Day of Week', 'Growth', 'Sunlight', 'Plants'], dtype='object')
# Set the depedent (Growth) and independent (Sunlight)
y = organic['Growth']
X = organic['Sunlight']
# Test train split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print (X_train.shape, X_test.shape)
print (y_train.shape, y_test.shape)
(192,) (49,)
(192,) (49,)
lm = linear_model.LinearRegression()
model = lm.fit(X_train, y_train)
# Error pointing to an array with values from Sunlight [611, 507, 994, ...]
```
|
2018/03/27
|
[
"https://Stackoverflow.com/questions/49514684",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1061892/"
] |
You are only using one feature, so the error itself tells you what to do:
>
> Reshape your data either using array.reshape(-1, 1) if your data has a single feature.
>
>
>
The data always has to be 2D in scikit-learn.
(Also double-check the column name: a typo like `organic['Sunglight']` would raise a `KeyError`.)
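A minimal sketch of both fixes, reusing the variables from the question (either reshape the 1-D data, or select the column with double brackets so it stays 2-D from the start):
```
import numpy as np

# Option 1: reshape the 1-D Series into an (n_samples, 1) array
model = lm.fit(np.asarray(X_train).reshape(-1, 1), y_train)

# Option 2: select a one-column DataFrame instead of a Series up front
X = organic[['Sunlight']]   # note the double brackets
```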
|
Once you load the data into `train_test_split(X, y, test_size=0.2)`, it returns the Pandas Series `X_train` and `X_test` with dimensions `(192, )` and `(49, )`. As mentioned in the previous answers, scikit-learn expects matrices of shape `[n_samples, n_features]` as the `X_train`, `X_test` data. You can simply convert the Pandas Series `X_train` and `X_test` to Pandas DataFrames to change their dimensions to `(192, 1)` and `(49, 1)`.
```
lm = linear_model.LinearRegression()
model = lm.fit(X_train.to_frame(), y_train)
```
| 9,985
|
72,484,522
|
I am making a little math game, similar to [zeta mac](https://arithmetic.zetamac.com/game?key=a7220a92). Everything seems to be working well. Ideally I would like this console output to erase incorrect answers entered by the user, without reprinting the math problem again for them to solve. Is something like this possible?
For example, I may prompt the user to answer "57 + 37 = " in the console. If they type 24 (the console would then show "57 + 37 = 24"), I would like the 24 to be erased and the "57 + 37 = " to remain, allowing the user to guess again without the same equation being printed again on a line below.
Here is the source code (sorry if it's messy, I just started learning Python):
```
import random
import time

def play(seconds):
    start_time = time.time()
    score = 0
    while True:
        current_time = time.time()
        elapsed_time = current_time - start_time
        a = random.randint(2, 100)
        b = random.randint(2, 100)
        d = random.randint(2, 12)
        asmd = random.choice([1, 2, 3, 4])
        if (asmd == 1):
            solve = a + b
            answer = input("%d + %d = " % (a, b))
        elif (asmd == 2):
            if (a > b):
                solve = a - b
                answer = input("%d - %d = " % (a, b))
            else:
                solve = b - a
                answer = input("%d - %d = " % (b, a))
        elif (asmd == 3):
            solve = a * d
            answer = input("%d * %d = " % (a, d))
        else:
            solve = d
            c = a * d
            answer = input("%d / %d = " % (c, a))
        while (solve != int(answer)):
            answer = input("= ")
        score += 1
        if elapsed_time > seconds:
            print("Time's up! Your score was %d." % (score))
            break

play(10)
```
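For what it's worth, the erase-and-reprompt behaviour the question describes is possible on ANSI-capable terminals with escape codes; here is a minimal sketch (not taken from the question's code, and the fixed prompt is just for illustration):
```
# "\033[F" moves the cursor up to the start of the previous line,
# "\033[K" clears that line; together they erase the wrong guess,
# and re-showing the prompt keeps the equation visually in place.
prompt = "57 + 37 = "
answer = input(prompt)
while answer != "94":
    print("\033[F\033[K", end="")   # erase the line containing the wrong guess
    answer = input(prompt)
```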
|
2022/06/03
|
[
"https://Stackoverflow.com/questions/72484522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19244670/"
] |
Keep the duration in a variable and decrease it on every loop iteration:
```
def blink_green2():
    red1.on()
    sleep_duration = 0.5
    for i in range(5):
        green2.toggle()
        time.sleep(sleep_duration)
        green2.toggle()
        time.sleep(sleep_duration)
        sleep_duration -= 0.01  # each cycle sleeps a little less, so the blinking speeds up
```
|
Gradually increasing the blinking speed means you need to decrease the sleep duration between toggles. Since `range()` only accepts integers, scale the loop index into a float offset and subtract it from the base duration, something like this:
```
def blink_green2():
    red1.on()
    for i in range(5):  # range() cannot step by floats, so scale the index instead
        green2.toggle()
        time.sleep(0.5 - i * 0.1)
```
| 9,987
|