| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
16,375,781
|
I am trying a simple nested for loop in python to scan a thresholded image, detect the white pixels and store their locations. The problem is that although the array it is reading from is only 160\*120 (19200 pixels), it still takes about 6 s to execute. My code is as follows, and any help or guidance would be greatly appreciated:
```
from PIL import Image  # imports implied by the snippet
import numpy as np

im = Image.open('PYGAMEPIC')
r, g, b = np.array(im).T
x = np.zeros_like(b)
height = len(x[0])
width = len(x)
x[r > 120] = 255
x[g > 100] = 0
x[b > 100] = 0
row_array = np.zeros(shape = (19200,1))
col_array = np.zeros(shape = (19200,1))
z = 0
for i in range (0,width-1):
    for j in range (0,height-1):
        if x[i][j] == 255:
            z = z+1
            row_array[z] = i
            col_array[z] = j
```
|
2013/05/04
|
[
"https://Stackoverflow.com/questions/16375781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2274632/"
] |
First, it shouldn't take 6 seconds. Trying your code on a 160x120 image takes ~0.2 s for me.
That said, for good `numpy` performance, you generally want to avoid loops. Sometimes it's simpler to vectorize along all except the smallest axis and loop along that, but when possible you should try to do everything at once. This usually makes things both faster (pushing the loops down to C) and easier.
Your for loop itself seems a little strange to me-- you seem to have an off-by-one error both in terms of where you're starting storing the results (your first value is placed in `z=1`, not `z=0`) and in terms of how far you're looking (`range(0, x-1)` doesn't include `x-1`, so you're missing the last row/column-- probably you want `range(x)`.)
If all you want is the indices where `r > 120` but neither `g > 100` nor `b > 100`, there are much simpler approaches. We can create boolean arrays. For example, first we can make some dummy data:
```
>>> r = np.random.randint(0, 255, size=(8,8))
>>> g = np.random.randint(0, 255, size=(8,8))
>>> b = np.random.randint(0, 255, size=(8,8))
```
Then we can find the places where our condition is met:
```
>>> (r > 120) & ~(g > 100) & ~(b > 100)
array([[False, True, False, False, False, False, False, False],
[False, False, True, False, False, False, False, False],
[False, True, False, False, False, False, False, False],
[False, False, False, True, False, True, False, False],
[False, False, False, False, False, False, False, False],
[False, True, False, False, False, False, False, False],
[False, False, False, False, False, False, False, False],
[False, False, False, False, False, False, False, False]], dtype=bool)
```
Then we can use `np.where` to get the coordinates:
```
>>> r_idx, c_idx = np.where((r > 120) & ~(g > 100) & ~(b > 100))
>>> r_idx
array([0, 1, 2, 3, 3, 5])
>>> c_idx
array([1, 2, 1, 3, 5, 1])
```
And we can sanity-check these by indexing back into `r`, `g`, and `b`:
```
>>> r[r_idx, c_idx]
array([166, 175, 155, 150, 241, 222])
>>> g[r_idx, c_idx]
array([ 6, 29, 19, 62, 85, 31])
>>> b[r_idx, c_idx]
array([67, 97, 30, 4, 50, 71])
```
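Applied back to the arrays from the question (a small sketch, assuming `r`, `g` and `b` come from `np.array(im).T` as in the original snippet), the whole nested loop collapses to:
```
>>> row_idx, col_idx = np.where((r > 120) & ~(g > 100) & ~(b > 100))
```
which gives the same row/column locations the loop was accumulating in `row_array` and `col_array`.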
|
I assume you're on python 2.x (2.6 or 2.7). In python 2, every time you call `range` you're creating a list with that many elements. (In this case, you're creating 1 list of `width - 1` length, and then `width - 1` lists of `height - 1` length.) One way to speed this up is to make one list of each ahead of time and use that list each time.
For example
```
height_indices = range(0, height - 1)
for i in range(0, width - 1):
    for j in height_indices:
        # etc
```
To prevent python having to create either list, you can use `xrange` to return a generator which will save memory and time, e.g.,
```
for i in xrange(0, width - 1):
    for j in xrange(0, height - 1):
        # etc.
```
You should also look into the `filter` function, which takes a function and an iterable and returns the items for which the function returns a truthy value. That way you can collect the matching coordinates directly instead of incrementing a counter and writing into preallocated arrays yourself.
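For example, a minimal Python 2 sketch in that spirit (my wording; it assumes `x`, `width` and `height` as defined in the question, and it is not necessarily faster than the vectorized approach in the other answer):
```
from itertools import product

# Collect (row, col) pairs of white pixels without writing into preallocated arrays.
white_coords = filter(lambda ij: x[ij[0]][ij[1]] == 255,
                      product(xrange(width), xrange(height)))
```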
|
11,705,114
|
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
|
2012/07/28
|
[
"https://Stackoverflow.com/questions/11705114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546936/"
] |
I would try a character class regex similar to
```
"[.!?\\-]"
```
Add whatever characters you wish to match inside the `[]`s. Be careful to escape any characters that might have a special meaning to the regex parser.
You then have to iterate through the matches by using `Matcher.find()` until it returns false.
|
I would try
>
> `\W`
>
>
>
it matches any non-word character. This includes spaces and punctuation, but not underscores. It’s equivalent to [^A-Za-z0-9\_]
|
11,705,114
|
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
|
2012/07/28
|
[
"https://Stackoverflow.com/questions/11705114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546936/"
] |
I would try a character class regex similar to
```
"[.!?\\-]"
```
Add whatever characters you wish to match inside the `[]`s. Be careful to escape any characters that might have a special meaning to the regex parser.
You then have to iterate through the matches by using `Matcher.find()` until it returns false.
|
I was trying to find how to replace part of a regex match while keeping another part.
Example: `Hi , how are you ?` -> `Hi, how are you?`.
After studying a little I found that I could create groups using "()", so I just replaced group one, which was "(\s)".
```java
String a = "Hi , how are you ?";
String p = "(\\s)([,.!?\\-])";
System.out.println(a.replaceAll(p, "$2"));
// output: Hi, how are you?
```
|
11,705,114
|
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
|
2012/07/28
|
[
"https://Stackoverflow.com/questions/11705114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546936/"
] |
Java does support POSIX character classes in a roundabout way. For punctuation, the Java equivalent of **[:punct:]** is **\p{Punct}**.
Please see the following [link](http://www.regular-expressions.info/posixbrackets.html) for details.
Here is a concrete, working example that uses the expression in the comments
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexFindPunctuation {

    public static void main(String[] args) {
        Pattern p = Pattern.compile("\\p{Punct}");
        Matcher m = p.matcher("One day! when I was walking. I found your pants? just kidding...");

        int count = 0;
        while (m.find()) {
            count++;
            System.out.println("\nMatch number: " + count);
            System.out.println("start() : " + m.start());
            System.out.println("end() : " + m.end());
            System.out.println("group() : " + m.group());
        }
    }
}
```
|
I would try
>
> `\W`
>
>
>
it matches any non-word character. This includes spaces and punctuation, but not underscores. It’s equivalent to [^A-Za-z0-9\_]
|
11,705,114
|
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
|
2012/07/28
|
[
"https://Stackoverflow.com/questions/11705114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546936/"
] |
Java does support POSIX character classes in a roundabout way. For punctuation, the Java equivalent of **[:punct:]** is **\p{Punct}**.
Please see the following [link](http://www.regular-expressions.info/posixbrackets.html) for details.
Here is a concrete, working example that uses the expression in the comments
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexFindPunctuation {

    public static void main(String[] args) {
        Pattern p = Pattern.compile("\\p{Punct}");
        Matcher m = p.matcher("One day! when I was walking. I found your pants? just kidding...");

        int count = 0;
        while (m.find()) {
            count++;
            System.out.println("\nMatch number: " + count);
            System.out.println("start() : " + m.start());
            System.out.println("end() : " + m.end());
            System.out.println("group() : " + m.group());
        }
    }
}
```
|
I was trying to find how to replace part of a regex match while keeping another part.
Example: `Hi , how are you ?` -> `Hi, how are you?`.
After studying a little I found that I could create groups using "()", so I just replaced group one, which was "(\s)".
```java
String a = "Hi , how are you ?";
String p = "(\\s)([,.!?\\-])";
System.out.println(a.replaceAll(p, "$2"));
// output: Hi, how are you?
```
|
11,705,114
|
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
|
2012/07/28
|
[
"https://Stackoverflow.com/questions/11705114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546936/"
] |
I would try
>
> `\W`
>
>
>
it matches any non-word character. This includes spaces and punctuation, but not underscores. It’s equivalent to [^A-Za-z0-9\_]
|
I was trying to find how to replace part of a regex match while keeping another part.
Example: `Hi , how are you ?` -> `Hi, how are you?`.
After studying a little I found that I could create groups using "()", so I just replaced group one, which was "(\s)".
```java
String a = "Hi , how are you ?";
String p = "(\\s)([,.!?\\-])";
System.out.println(a.replaceAll(p, "$2"));
// output: Hi, how are you?
```
|
49,557,625
|
For my exercise I must, using Selenium and the Chrome webdriver with Python 2.7, click on the link:
>
> <https://test.com/console/remote.pl>
>
>
>
Below structure of the html file :
```
<div class="leftside" >
<span class="spacer spacer-20"></span>
<a href="https://test.com" title="Retour à l'accueil"><img class="logo" src="https://img.test.com/frontoffice.png" /></a>
<span class="spacer spacer-20"></span>
<a href="https://test.com/console/index.pl" class="menu selected"><img src="https://img.test.com/icons/fichiers.png" alt="" /> Mes fichiers</a>
<a href="https://test.com/console/ftpmode.pl" class="menu"><img src="https://img.test.com/icons/publication.png" alt="" /> Gestion FTP</a>
<a href="https://test.com/console/remote.pl" class="menu"><img src="https://img.test.com/icons/telechargement-de-liens.png" alt="" /> Remote Upload</a>
<a href="https://test.com/console/details.pl" class="menu"><img src="https://img.test.com/icons/profil.png" alt="" /> Mon profil</a>
<a href="https://test.com/console/params.pl" class="menu"><img src="https://img.test.com/icons/parametres.png" alt="" /> Paramètres</a>
<a href="https://test.com/console/abo.pl" class="menu"><img src="https://img.test.com/icons/abonnement.png" alt="" /> Services Payants</a>
<a href="https://test.com/console/aff.pl" class="menu"><img src="https://img.test.com/icons/af.png" alt="" /> Af</a>
<a href="https://test.com/console/com.pl" class="menu"><img src="https://img.test.com/icons/v.png" alt="" /> V</a>
<a href="https://test.com/console/logs.pl" class="menu"><img src="https://img.test.com/icons/logs.png" alt="" /> Jour</a>
<a href="https://test.com/logout.pl" class="menu"><img src="https://img.test.com/icons/deconnexion.png" alt="" /> Déconnexion</a>
<span class="spacer spacer-20"></span>
<a href="#" id="msmall"><img src="https://img.test.com/btns/reverse.png"></a>
</div>
```
I use **driver.find\_element\_by\_xpath()** as explained here: [Click on element in dropdown with selenium and python](https://stackoverflow.com/questions/41602539/click-on-element-in-dropdown-with-selenium-and-python "Click on element")
```
driver.find_element_by_xpath('//[@id="leftside"]/a[3]').click()
```
But I have this error message :
>
> SyntaxError: Failed to execute 'evaluate' on 'Document': The string
> '//[@id="leftside"]/a[3]' is not a valid XPath expression.
>
>
>
Who can help me please ?
Regards
|
2018/03/29
|
[
"https://Stackoverflow.com/questions/49557625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4200256/"
] |
I think you were pretty close. But as it is a `<div>` tag with the *class* attribute set to **leftside**, you have to be specific about that. Again, the third `<a>` tag won't necessarily be an immediate child of the `//div[@class='leftside']` node but a descendant, so instead of `/` you have to use `//`, as follows:
```
driver.find_element_by_xpath("//div[@class='leftside']//a[3]").click()
```
|
The issue is that you need a tag name (or `*`) after `//` as well. So you should either use
```
driver.find_element_by_xpath('//*[@id="leftside"]/a[3]').click()
```
when you don't care which tag it is, or use the actual tag if you do care:
```
driver.find_element_by_xpath('//div[@id="leftside"]/a[3]').click()
```
I would suggest against using the second one, with the `div` tag: XPath is slower anyway, and using a `*` may make it only slightly slower.
|
49,557,625
|
For my exercise I must, using Selenium and the Chrome webdriver with Python 2.7, click on the link:
>
> <https://test.com/console/remote.pl>
>
>
>
Below structure of the html file :
```
<div class="leftside" >
<span class="spacer spacer-20"></span>
<a href="https://test.com" title="Retour à l'accueil"><img class="logo" src="https://img.test.com/frontoffice.png" /></a>
<span class="spacer spacer-20"></span>
<a href="https://test.com/console/index.pl" class="menu selected"><img src="https://img.test.com/icons/fichiers.png" alt="" /> Mes fichiers</a>
<a href="https://test.com/console/ftpmode.pl" class="menu"><img src="https://img.test.com/icons/publication.png" alt="" /> Gestion FTP</a>
<a href="https://test.com/console/remote.pl" class="menu"><img src="https://img.test.com/icons/telechargement-de-liens.png" alt="" /> Remote Upload</a>
<a href="https://test.com/console/details.pl" class="menu"><img src="https://img.test.com/icons/profil.png" alt="" /> Mon profil</a>
<a href="https://test.com/console/params.pl" class="menu"><img src="https://img.test.com/icons/parametres.png" alt="" /> Paramètres</a>
<a href="https://test.com/console/abo.pl" class="menu"><img src="https://img.test.com/icons/abonnement.png" alt="" /> Services Payants</a>
<a href="https://test.com/console/aff.pl" class="menu"><img src="https://img.test.com/icons/af.png" alt="" /> Af</a>
<a href="https://test.com/console/com.pl" class="menu"><img src="https://img.test.com/icons/v.png" alt="" /> V</a>
<a href="https://test.com/console/logs.pl" class="menu"><img src="https://img.test.com/icons/logs.png" alt="" /> Jour</a>
<a href="https://test.com/logout.pl" class="menu"><img src="https://img.test.com/icons/deconnexion.png" alt="" /> Déconnexion</a>
<span class="spacer spacer-20"></span>
<a href="#" id="msmall"><img src="https://img.test.com/btns/reverse.png"></a>
</div>
```
I use **driver.find\_element\_by\_xpath()** as explained here: [Click on element in dropdown with selenium and python](https://stackoverflow.com/questions/41602539/click-on-element-in-dropdown-with-selenium-and-python "Click on element")
```
driver.find_element_by_xpath('//[@id="leftside"]/a[3]').click()
```
But I have this error message :
>
> SyntaxError: Failed to execute 'evaluate' on 'Document': The string
> '//[@id="leftside"]/a[3]' is not a valid XPath expression.
>
>
>
Who can help me please ?
Regards
|
2018/03/29
|
[
"https://Stackoverflow.com/questions/49557625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4200256/"
] |
I think you were pretty close. But as it is a `<div>` tag with the *class* attribute set to **leftside**, you have to be specific about that. Again, the third `<a>` tag won't necessarily be an immediate child of the `//div[@class='leftside']` node but a descendant, so instead of `/` you have to use `//`, as follows:
```
driver.find_element_by_xpath("//div[@class='leftside']//a[3]").click()
```
|
I have got a similar problem but didn't post it, so thanks for the post.
The xpath should be `'//*[@class="leftside"]/a[3]'`; there is no id with the name **leftside** in your html, only a class.
|
49,544,207
|
I am using python 2.7. I am looking to calculate compounding returns from daily returns and my current code is pretty slow at calculating returns, so I was looking for areas where I could gain efficiency.
What I want to do is pass two dates and a security into a price table and calculate the compounding returns between those dates for the given security.
I have a price table (`prices_df`):
```
security_id px_last asof
1 3.055 2015-01-05
1 3.360 2015-01-06
1 3.315 2015-01-07
1 3.245 2015-01-08
1 3.185 2015-01-09
```
I also have a table with two dates and security (`events_df`):
```
asof disclosed_on security_ref_id
2015-01-05 2015-01-09 16:31:00 1
2018-03-22 2018-03-27 16:33:00 3616
2017-08-03 2018-03-27 12:13:00 2591
2018-03-22 2018-03-27 11:33:00 3615
2018-03-22 2018-03-27 10:51:00 3615
```
Using the two dates in this table, I want to use the price table to calculate the returns.
The two functions I am using:
```
import pandas as pd

# compounds returns
def cum_rtrn(df):
    df_out = df.add(1).cumprod()
    df_out['return'].iat[0] = 1
    return df_out

# calculates compound returns from prices between two dates
def calc_comp_returns(price_df, start_date=None, end_date=None, security=None):
    df = price_df[price_df.security_id == security]
    df = df.set_index(['asof'])
    df = df.loc[start_date:end_date]
    df['return'] = df.px_last.pct_change()
    df = df[['return']]
    df = cum_rtrn(df)
    return df.iloc[-1][0]
```
I then iterate over the `events_df` with `.iterrows`, calling the `calc_comp_returns` function each time. However, this is a very slow process as I have 10K+ iterations, so I am looking for improvements. The solution does not need to be based in `pandas`.
```
# example of how the function is called
import datetime

start = datetime.datetime.strptime('2015-01-05', '%Y-%m-%d').date()
end = datetime.datetime.strptime('2015-01-09', '%Y-%m-%d').date()
calc_comp_returns(prices_df, start_date=start, end_date=end, security=1)
```
|
2018/03/28
|
[
"https://Stackoverflow.com/questions/49544207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5293603/"
] |
Here is a solution (about 100x faster on my computer with some dummy data).
```
import numpy as np

price_df = price_df.set_index('asof')

def calc_comp_returns_fast(price_df, start_date, end_date, security):
    rows = price_df[price_df.security_id == security].loc[start_date:end_date]
    changes = rows.px_last.pct_change()
    comp_rtrn = np.prod(changes + 1)
    return comp_rtrn
```
Or, as a one-liner:
```
def calc_comp_returns_fast(price_df, start_date, end_date, security):
    return np.prod(price_df[price_df.security_id == security].loc[start_date:end_date].px_last.pct_change() + 1)
```
Note that I call the `set_index` method beforehand; it only needs to be done once on the entire `price_df` dataframe.
It is faster because it does not recreate DataFrames at each step. In your code, `df` is overwritten almost at each line by a new dataframe. Both the init process and the garbage collection (erasing unused data from memory) take a lot of time.
In my code, `rows` is a slice, or a "view", of the original data; it does not need to copy or re-init any object. Also, I used the numpy product function directly, which is the same as taking the last cumprod element (pandas uses `np.cumprod` internally anyway).
Suggestion: if you are using IPython, Jupyter or Spyder, you can use the magic `%prun calc_comp_returns(...)` to see which part takes the most time. I ran it on your code, and it was the garbage collector, using more than 50% of the total running time!
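As a usage sketch (my own addition, assuming the column names shown in `events_df` and the `price_df` indexed by `asof` above), the per-event loop can then become a single `apply`; it is still row-by-row, but each call is now much cheaper:
```
returns = events_df.apply(
    lambda row: calc_comp_returns_fast(price_df, row['asof'],
                                       row['disclosed_on'], row['security_ref_id']),
    axis=1)
```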
|
I'm not very familiar with pandas, but I'll give this a shot.
Problem with your solution
==========================
Your solution currently does a huge amount of unnecessary calculation. This is mostly due to the line:
```
df['return'] = df.px_last.pct_change()
```
This line is actually calculating the percent change for *every* date between start and end. Just fixing this issue should give you a huge speed up. You should just get the start price and the end price and compare the two. The prices in between are completely irrelevant to your calculations. Again, my familiarity with pandas is nil, but you should do something like this instead:
```
def calc_comp_returns(price_df, start_date=None, end_date=None, security=None):
    df = price_df[price_df.security_id == security]
    df = df.set_index(['asof'])
    df = df.loc[start_date:end_date]
    return 1 + (df['px_last'].iloc[-1] - df['px_last'].iloc[0]) / df['px_last'].iloc[0]
```
Remember that this code relies on the fact that price\_df is sorted by date, so be careful to make sure you only pass `calc_comp_returns` a date-sorted price\_df.
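If you are unsure whether that holds, one defensive line (a sketch reusing the question's column name) sorts it once up front:
```
prices_df = prices_df.sort_values('asof')
```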
|
49,544,207
|
I am using python 2.7. I am looking to calculate compounding returns from daily returns and my current code is pretty slow at calculating returns, so I was looking for areas where I could gain efficiency.
What I want to do is pass two dates and a security into a price table and calculate the compounding returns between those dates for the given security.
I have a price table (`prices_df`):
```
security_id px_last asof
1 3.055 2015-01-05
1 3.360 2015-01-06
1 3.315 2015-01-07
1 3.245 2015-01-08
1 3.185 2015-01-09
```
I also have a table with two dates and security (`events_df`):
```
asof disclosed_on security_ref_id
2015-01-05 2015-01-09 16:31:00 1
2018-03-22 2018-03-27 16:33:00 3616
2017-08-03 2018-03-27 12:13:00 2591
2018-03-22 2018-03-27 11:33:00 3615
2018-03-22 2018-03-27 10:51:00 3615
```
Using the two dates in this table, I want to use the price table to calculate the returns.
The two functions I am using:
```
import pandas as pd

# compounds returns
def cum_rtrn(df):
    df_out = df.add(1).cumprod()
    df_out['return'].iat[0] = 1
    return df_out

# calculates compound returns from prices between two dates
def calc_comp_returns(price_df, start_date=None, end_date=None, security=None):
    df = price_df[price_df.security_id == security]
    df = df.set_index(['asof'])
    df = df.loc[start_date:end_date]
    df['return'] = df.px_last.pct_change()
    df = df[['return']]
    df = cum_rtrn(df)
    return df.iloc[-1][0]
```
I then iterate over the `events_df` with `.iterrows`, calling the `calc_comp_returns` function each time. However, this is a very slow process as I have 10K+ iterations, so I am looking for improvements. The solution does not need to be based in `pandas`.
```
# example of how the function is called
import datetime

start = datetime.datetime.strptime('2015-01-05', '%Y-%m-%d').date()
end = datetime.datetime.strptime('2015-01-09', '%Y-%m-%d').date()
calc_comp_returns(prices_df, start_date=start, end_date=end, security=1)
```
|
2018/03/28
|
[
"https://Stackoverflow.com/questions/49544207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5293603/"
] |
Here is a solution (about 100x faster on my computer with some dummy data).
```
import numpy as np

price_df = price_df.set_index('asof')

def calc_comp_returns_fast(price_df, start_date, end_date, security):
    rows = price_df[price_df.security_id == security].loc[start_date:end_date]
    changes = rows.px_last.pct_change()
    comp_rtrn = np.prod(changes + 1)
    return comp_rtrn
```
Or, as a one-liner:
```
def calc_comp_returns_fast(price_df, start_date, end_date, security):
    return np.prod(price_df[price_df.security_id == security].loc[start_date:end_date].px_last.pct_change() + 1)
```
Note that I call the `set_index` method beforehand; it only needs to be done once on the entire `price_df` dataframe.
It is faster because it does not recreate DataFrames at each step. In your code, `df` is overwritten almost at each line by a new dataframe. Both the init process and the garbage collection (erasing unused data from memory) take a lot of time.
In my code, `rows` is a slice, or a "view", of the original data; it does not need to copy or re-init any object. Also, I used the numpy product function directly, which is the same as taking the last cumprod element (pandas uses `np.cumprod` internally anyway).
Suggestion: if you are using IPython, Jupyter or Spyder, you can use the magic `%prun calc_comp_returns(...)` to see which part takes the most time. I ran it on your code, and it was the garbage collector, using more than 50% of the total running time!
|
We'll use `pd.merge_asof` to grab prices from `prices_df`. However, when we do, we'll need to have relevant dataframes sorted by the date columns we are utilizing. Also, for convenience, I'll aggregate some `pd.merge_asof` parameters in dictionaries to be used as keyword arguments.
```
prices_df = prices_df.sort_values(['asof'])
aed = events_df.sort_values('asof')
ded = events_df.sort_values('disclosed_on')
aokw = dict(
left_on='asof', right_on='asof',
left_by='security_ref_id', right_by='security_id'
)
start_price = pd.merge_asof(aed, prices_df, **aokw).px_last
dokw = dict(
left_on='disclosed_on', right_on='asof',
left_by='security_ref_id', right_by='security_id'
)
end_price = pd.merge_asof(ded, prices_df, **dokw).px_last
returns = end_price.div(start_price).sub(1).rename('return')
events_df.join(returns)
asof disclosed_on security_ref_id return
0 2015-01-05 2015-01-09 16:31:00 1 0.040816
1 2018-03-22 2018-03-27 16:33:00 3616 NaN
2 2017-08-03 2018-03-27 12:13:00 2591 NaN
3 2018-03-22 2018-03-27 11:33:00 3615 NaN
4 2018-03-22 2018-03-27 10:51:00 3615 NaN
```
|
51,454,694
|
Azure Cognitive Services OCR has a demo on the site
<https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/#text>
On the website, I get pretty accurate results. However, when I try to call the same using the code mentioned in their documentation, I get different and poor results.
<https://learn.microsoft.com/en-us/azure/cognitive-services/Computer-vision/quickstarts/python-print-text>
I'm assuming the version available on the site is the preview one.
<https://westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/587f2c6a154055056008f200>
How can I call that version in Python?
Thank you for help!
|
2018/07/21
|
[
"https://Stackoverflow.com/questions/51454694",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10093162/"
] |
There is now an official Microsoft package for that:
* <https://pypi.org/project/azure-cognitiveservices-vision-computervision/>
With samples:
* <https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/blob/master/samples/vision/computer_vision_samples.py>
Create an issue on GitHub if you have trouble :)
* <https://github.com/Azure/azure-sdk-for-python/issues>
(I work at MS in the Azure SDK team, which releases this SDK)
|
There are two different APIs for recognizing text. The demo page is using the new way, but has the caveat that it only works for English as of this writing.
The example code you should be looking at is [here](https://learn.microsoft.com/en-us/azure/cognitive-services/Computer-vision/quickstarts/python-hand-text). If you want to recognize printed text, you will tweak the `params`. It'll look something like this:
```
import requests

# subscription_key and image_url are assumed to be defined as in the linked quickstart
region = 'westcentralus'
request_url = 'https://{region}.api.cognitive.microsoft.com/vision/v2.0/recognizeText'.format(region=region)
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'mode': 'Printed'}
data = {'url': image_url}
response = requests.post(
    request_url, headers=headers, params=params, json=data)
```
You will normally get an HTTP 202 response, not the recognition result. You will need to fetch the result from the operation location:
```
operation_url = response.headers["Operation-Location"]
operation_response = requests.get(operation_url, headers=headers)
```
Note that you'll need to check the status of the `operation_response` to make sure the task has completed:
```
if operation_response.json()[u'status'] == 'Succeeded': ...
```
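Putting the two pieces together, here is a minimal polling sketch (my own addition; it reuses `operation_url` and `headers` from above and relies only on the `status` field, with `'Failed'` assumed as the other terminal state):
```
import time

while True:
    operation_response = requests.get(operation_url, headers=headers)
    analysis = operation_response.json()
    # Stop polling once the service reports a terminal state.
    if analysis.get('status') in ('Succeeded', 'Failed'):
        break
    time.sleep(1)

if analysis['status'] == 'Succeeded':
    print(analysis)  # the recognized text is contained in this JSON payload
```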
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
return false after the first pair (a,b) where a != b
```
def compare(list1, list2):
    for a, b in zip(list1[:-2], list2[:-2]):
        if a != b:
            return False
    return True
```
|
This way:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
return false after the first pair (a,b) where a != b
```
def compare(list1, list2):
    for a, b in zip(list1[:-2], list2[:-2]):
        if a != b:
            return False
    return True
```
|
Just slice the lists directly...
================================
`Python` has the syntax for slicing lists which looks like:
```
lst[start:stop:step]
```
a neat feature of this being that you can slice lists up to a position specified from the end using negative values. So if you have a list such as:
```
lst = [1,2,3,4,5]
```
you can slice it with:
```
lst[:-3]
```
to get the values up to, but not including, the third element from the end:
```
[1, 2]
```
so this can be used to compare your two lists:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
return false after the first pair (a,b) where a != b
```
def compare(list1, list2):
    for a, b in zip(list1[:-2], list2[:-2]):
        if a != b:
            return False
    return True
```
|
If your lists are very large and you want to avoid duplicating them with `list1[:-2]==list2[:-2]`, you can use a generator expression for a more memory-efficient solution:
```
all(a==b for a,b,_ in zip(list1, list2, range(len(list1)-2)))
```
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
This way:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
Just slice the lists directly...
================================
`Python` has the syntax for slicing lists which looks like:
```
lst[start:stop:step]
```
a neat feature of this being that you can slice lists up to a position specified from the end using negative values. So if you have a list such as:
```
lst = [1,2,3,4,5]
```
you can slice it with:
```
lst[:-3]
```
to get the values up to, but not including, the third element from the end:
```
[1, 2]
```
so this can be used to compare your two lists:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
This way:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
If your lists are very large and you want to avoid duplicating them with `list1[:-2]==list2[:-2]`, you can use a generator expression for a more memory-efficient solution:
```
all(a==b for a,b,_ in zip(list1, list2, range(len(list1)-2)))
```
|
45,836,036
|
Comparing two python lists up to n-2 elements:
```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1 == list2 => True
```
Excluding the last 2 elements of the 2 lists they are the same.
I am able to do it by comparing each and every element of the 2 lists. But is there any other efficient way to do this?
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7016928/"
] |
If your lists are very large and you want to avoid duplicating them with `list1[:-2]==list2[:-2]`, you can use a generator expression for a more memory-efficient solution:
```
all(a==b for a,b,_ in zip(list1, list2, range(len(list1)-2)))
```
|
Just slice the lists directly...
================================
`Python` has the syntax for slicing lists which looks like:
```
lst[start:stop:step]
```
a neat feature of this being that you can slice lists up to a position specified from the end using negative values. So if you have a list such as:
```
lst = [1,2,3,4,5]
```
you can slice it with:
```
lst[:-3]
```
to get the values up to, but not including, the third element from the end:
```
[1, 2]
```
so this can be used to compare your two lists:
```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']
list1[:-2] == list2[:-2] => True
```
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
For CSS, how about CSS in a Nutshell, by O'Reilly? Nice and thin.
|
[W3Schools](http://www.w3schools.com/) is a good place to start.
However, you might also benefit by poking around the [Mozilla Developer Centre](https://developer.mozilla.org/En) (MDC), which has lots of information about HTML, CSS, and JavaScript. I now almost exclusively use the MDC for looking things up—it has lots of examples, lots of detail (if you want to go into it), and it shows you many different things that you can do with the item you're looking up.
Also, for JavaScript, after you've learnt the basics ("[A re-introduction to JavaScript](https://developer.mozilla.org/En/A_re-introduction_to_JavaScript)" on the MDC is a good place to start), Douglas Crockford's [JavaScript page](http://javascript.crockford.com/) and John Resig's "[Learning Advanced JavaScript](http://ejohn.org/apps/learn/)" make for excellent reading.
Steve
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
I won't suggest w3schools for CSS and XHTML, but rather [htmldog.com](http://www.htmldog.com). For JS, I would suggest something about unobtrusive JavaScript.
|
The W3Schools site has a "try it yourself" section that I think will be perfect for you.
[W3Schools CSS](http://www.w3schools.com/Css/default.asp)
[W3Schools Javascript](http://www.w3schools.com/JS/default.asp)
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
For CSS, how about CSS in a Nutshell, by O'Reilly? Nice and thin.
|
Install [firebug](http://getfirebug.com/).
* It helps inspecting html.
* You can edit CSS on the fly.
* Has a JavaScript console.
[Here](http://net.tutsplus.com/tutorials/other/10-reasons-why-you-should-be-using-firebug/) is a nice article explaining the some features of [firebug](http://getfirebug.com/).
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
You can learn style and best practices on [A List Apart](http://www.alistapart.com/) web site.
|
Opera recently put a lot of effort into getting people to write a [bunch of tutorials](http://opera.com/wsc/). The quality is high, and they pay attention to feedback (unlike W3Schools). It covers HTML, CSS and JavaScript and I haven't come across a better starting point.
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
Since you are an experienced programmer, a good place to start with javascript might be [Javascript: The Good Parts](https://rads.stackoverflow.com/amzn/click/com/0596517742) by Douglas Crockford. It is a brief but thorough tour of, well, the best parts of javascript (and pretty much all you'll need for quite a while).
Your approach to CSS and HTML will have to be very different. I suggest trying to make a static site or two, checking reference material if you get stuck. Pick a site that you like, and try recreating the basic layout in HTML. Got the layout? Try making it look pretty. Repeat.
|
My favourite CSS tutorial site has always been [www.htmldog.com](http://www.htmldog.com). The reason I like it so much is that not only does it teach you CSS, it also teaches you to drop any bad html habits you may have picked up over the years. In my view learning to write clean, semantic html is an important precursor to really getting to grips with css.
As for javascript, w3schools is probably best
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
You can learn style and best practices on [A List Apart](http://www.alistapart.com/) web site.
|
I found <http://htmldog.com/> to be useful when learning HTML/CSS. It teaches w3c compliant HTML and CSS, unlike many other sites. Looking at other people's CSS is also really useful. CSS is pretty simple (ignoring all the browser incompatibilities), so even with little CSS knowledge you can figure out what other people are doing.
Javascript is more complicated. Javascript has a pretty strange object system (it uses prototypal inheritance), so it's best to pick up a book. Crockford's Javascript: The Good Parts is an excellent book to learn the fundamentals of javascript. The thing about javascript is that there are basically two parts to it: the language and the DOM (document object model). Most of the time, javascript is used in the browser, which means it has to interact with HTML via the DOM. Many people don't realize that javascript can be used outside of a web browser. JS: The Good Parts will teach you the javascript core, then you can look up the DOM interaction elsewhere.
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
Since you are an experienced programmer, a good place to start with javascript might be [Javascript: The Good Parts](https://rads.stackoverflow.com/amzn/click/com/0596517742) by Douglas Crockford. It is a brief but thorough tour of, well, the best parts of javascript (and pretty much all you'll need for quite a while).
Your approach to CSS and HTML will have to be very different. I suggest trying to make a static site or two, checking reference material if you get stuck. Pick a site that you like, and try recreating the basic layout in HTML. Got the layout? Try making it look pretty. Repeat.
|
For CSS, how about CSS in a Nutshell, by O'Reilly? Nice and thin.
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
You can learn style and best practices on [A List Apart](http://www.alistapart.com/) web site.
|
My favourite CSS tutorial site has always been [www.htmldog.com](http://www.htmldog.com). The reason I like it so much is that not only does it teach you CSS, it also teaches you to drop any bad html habits you may have picked up over the years. In my view learning to write clean, semantic html is an important precursor to really getting to grips with css.
As for javascript, w3schools is probably best
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
I found <http://htmldog.com/> to be useful when learning HTML/CSS. It teaches w3c compliant HTML and CSS, unlike many other sites. Looking at other people's CSS is also really useful. CSS is pretty simple (ignoring all the browser incompatibilities), so even with little CSS knowledge you can figure out what other people are doing.
Javascript is more complicated. Javascript has a pretty strange object system (it uses prototypal inheritance), so it's best to pick up a book. Crockford's Javascript: The Good Parts is an excellent book to learn the fundamentals of javascript. The thing about javascript is that there are basically two parts to it: the language and the DOM (document object model). Most of the time, javascript is used in the browser, which means it has to interact with HTML via the DOM. Many people don't realize that javascript can be used outside of a web browser. JS: The Good Parts will teach you the javascript core, then you can look up the DOM interaction elsewhere.
|
My favourite CSS tutorial site has always been [www.htmldog.com](http://www.htmldog.com). The reason I like it so much is that not only does it teach you CSS, it also teaches you to drop any bad html habits you may have picked up over the years. In my view learning to write clean, semantic html is an important precursor to really getting to grips with css.
As for javascript, w3schools is probably best
|
746,873
|
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development.
Now for various reasons I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc.
So I need advice on good sources of information for these topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/746873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90593/"
] |
I would highly recommend you look at [Dev Opera](http://dev.opera.com/), its full of up to date information with a strong focus on webstandards. In particular, the [Web Standards Curriculum](http://dev.opera.com/articles/wsc/) is a great resource for beginners to get started.
I really wouldn't rely on the W3 Schools site, its content isn't kept as up to date and the examples often show bad-practice. If you know what you're doing it can be good as a quick resource for a single technique, but for a beginner it could easily lead you down the wrong path.
|
My favourite CSS tutorial site has always been [www.htmldog.com](http://www.htmldog.com). The reason I like it so much is that not only does it teach you CSS, it also teaches you to drop any bad html habits you may have picked up over the years. In my view learning to write clean, semantic html is an important precursor to really getting to grips with css.
As for javascript, w3schools is probably best
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
[Perl](http://www.perl.org/) - interesting that there are so many answers before this
|
You can do almost-interactive C# and VB.NET using [LINQPad](http://www.linqpad.net/)
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
Lisp and Scheme have interactive mode.
|
I guess one of the first was LISP.
Just try clisp
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
FORTH comes immediately to mind.
So does APL.
I remember seeing an interactive FORTRAN implementation on an SDS-930 (I think), many, many moons ago.
|
True to its name, the science-oriented and proprietary [Interactive Data Language](http://en.wikipedia.org/wiki/IDL_%28programming_language%29) (usually just called IDL, but spelled out here to avoid confusion with the other [IDL](http://en.wikipedia.org/wiki/Interface_description_language)) has an interactive mode which many of its users utilize more often than they program in it.
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
[Haskell](http://www.haskell.org/) even has two (mainstream) interactive interpreters, [Hugs](http://www.mirrorservice.org/sites/www.haskell.org/hugs/) and [ghci](http://www.haskell.org/haskellwiki/GHC/GHCi).
|
[Perl](http://www.perl.org/) - interesting that there are so many answers before this
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
* PHP can do that too: [PHP from the command line](http://php.net/manual/en/features.commandline.php)
* Does mySQL count? [mySQL Commands](http://dev.mysql.com/doc/refman/4.1/en/mysql-commands.html)
* [JavaScript shell in SpiderMonkey](https://developer.mozilla.org/en/Introduction_to_the_JavaScript_shell) (including, but not limited to, Firefox)
|
Most scripting languages will read from stdin and execute code typed at the console if you don't specify a filename to run. PHP and Perl will both do it.
Ruby has irb.
Lua has a more formal interactive mode like python, which will show you the indent level of your code at the prompt. It's very helpful since lua is typically used as an embedded scripting language, and you don't have to run your full application to test out code snippets.
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
Most (all?) lisps (including common lisp, scheme and clojure), sml, ocaml, haskell, F#, erlang, scala, ruby, python, lua, groovy, prolog.
|
Scala has [REPL](http://scala-lang.org/node/2097).
>
> The Scala Interpreter (often called a REPL for Read-Evaluate-Print
> Loop) sits in an unusual design space - an interactive interpreter for
> a statically typed language straddles two worlds which historically
> have been distinct. In version 2.8 the REPL further exploits the
> unique possibilities.
>
>
>
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
Any interpreted language is most likely going to have one.
|
True to its name, the science-oriented and proprietary [Interactive Data Language](http://en.wikipedia.org/wiki/IDL_%28programming_language%29) (usually just called IDL, but spelled out here to avoid confusion with the other [IDL](http://en.wikipedia.org/wiki/Interface_description_language)) has an interactive mode which many of its users utilize more often than they program in it.
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
[Haskell](http://www.haskell.org/) even has two (mainstream) interactive interpreters, [Hugs](http://www.mirrorservice.org/sites/www.haskell.org/hugs/) and [ghci](http://www.haskell.org/haskellwiki/GHC/GHCi).
|
There's one for [C#](http://www.mono-project.com/CsharpRepl).
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
As has been pointed out lots of languages can be used interactively, though how conveniently they can be so used varies quite a bit. The interactive environment I'm most familiar with, and one that I have found among the most congenial of all the free environments for interactive programming I've tried (not that I've tried them all) is Slime, a mode for emacs that allows interaction with a running Common Lisp, and can also be used with Clojure, a Lisp for the JVM.
If Lisp isn't your cup of tea a variety of Smalltalk environments are worth mentioning. One of the interesting things about many Smalltalk systems is that they expose almost all of the code that implements the system in the programming environment- if you want you can browse or even rewrite parts of the programming environment as you are using it, just as you would write new code. In fact the line between the system provided to you and the code you are writing is pretty blurry. Squeak is an interesting free Smalltalk, and Cincom offers an evaluation version of their commercial Smalltalk, which is a great environment IMHO.
Anyway, if you're interested in playing with interactive environments you could do worse than to play with those two, though of course there are a lot of other systems out there that allow interactive programming to one degree or another.
|
You can do almost-interactive C# and VB.NET using [LINQPad](http://www.linqpad.net/)
|
2,575,219
|
Python language has a well known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode) where the interpreter can read commands directly from tty.
I typically use this mode to test if a given module is in the classpath or to play around and test some snippets.
Do you know any other programming languages that have Interactive Mode?
If you can, give the name of the languages and where possible, a web reference.
If it is already mentioned, you can just vote for it.
|
2010/04/04
|
[
"https://Stackoverflow.com/questions/2575219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130929/"
] |
**Logo** programming language.
Some implementations are so interactive that some people don't even use any other mode.
|
[Erlang](http://www.erlang.org/index.html) does, as well as [Haskell](http://www.haskell.org/), and I'm guessing [Ruby](http://www.ruby-lang.org/en/) does too. Also there are JavaScript CLIs like [Firebug](http://getfirebug.com/).
|
59,023,371
|
I am trying to have a form submit to a Python script using Flask. The form is in my index.html:
```
<form action="{{ url_for('/predict') }}" method="POST">
<p>Enter Mileage</p>
<input type="text" name="mileage">
<p>Enter Year</p>
<input type="text" name="year">
<input type="submit" value="Predict">
</form>
```
Here is my flask page (my\_flask.py)-
```
@app.route('/')
def index():
return render_template('index.html')
@app.route('/predict', methods=("POST", "GET"))
def predict():
df = pd.read_csv("carprices.csv")
X = df[['Mileage','Year']]
y = df['Sell Price($)']
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
clf = LinearRegression()
clf.fit(X_train, y_train)
if request.method == 'POST':
mileage = request.form['mileage']
year = request.form['year']
data = [[mileage, year]]
price = clf.predict(data)
return render_template('prediction.html', prediction = price)
```
But when I go to my index page I get an internal server error because of the `{{ url_for('/predict') }}`.
Why would this be happening?
|
2019/11/24
|
[
"https://Stackoverflow.com/questions/59023371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4671619/"
] |
Instead of `url_for('/predict')`, drop the leading slash and use `url_for('predict')`.
`url_for(...)` takes the endpoint name (the view function's name), not the URL rule.
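A minimal sketch (separate from the asker's app) showing that the endpoint name is what `url_for` resolves:
```
from flask import Flask, url_for

app = Flask(__name__)

@app.route('/predict', methods=("POST", "GET"))
def predict():
    return "ok"

# url_for wants the endpoint (view function) name, not the URL rule
with app.test_request_context():
    print(url_for('predict'))    # -> '/predict'
    # url_for('/predict') would raise a BuildError, which is what surfaces
    # as the internal server error when the template is rendered
```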
|
I was not importing url\_for.
```
from flask import Flask, request, render_template, url_for
```
|
62,075,847
|
I tried to create a polygon shapefile in QGIS and read it in python by shapely. An example code looks like this:
```
import fiona
from shapely.geometry import shape
multipolys = fiona.open(somepath)
multi = multipolys[0]
coord = shape(multi['geometry'])
```
I get the error: GEOSGeom\_createLinearRing\_r returned a NULL pointer.
I checked if the polygon is valid in QGIS and no error was reported. Actually, it does not work even for a simple triangle generated in QGIS. Does anyone know how to solve it?
Thank you
|
2020/05/28
|
[
"https://Stackoverflow.com/questions/62075847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13242482/"
] |
I had a similar problem but with the shapely.geometry.LineString. The error I got was
```
ValueError: GEOSGeom_createLineString_r returned a NULL pointer
```
I don't know the reason behind this message, but there are two ways to avoid it:
1. Do the following:
```
...
from shapely import speedups
...
speedups.disable()
```
Import the speedups module and disable the speedups. This needs to be done, since they are enabled by default.
From shapelys speedups init method:
```
"""
The shapely.speedups module contains performance enhancements written in C.
They are automaticaly installed when Python has access to a compiler and
GEOS development headers during installation, and are enabled by default.
"""
```
If you disable them, you won't get the NULL pointer error, because you then use the usual Python implementation rather than the C implementation.
2. If you call python in a command shell, type:
```
from shapely.geometry import shape
```
this loads your needed shape. Then load your program
```
import yourscript
```
then run your script.
```
yourscript.main()
```
This should also work. I think in this variant the C modules get properly loaded and therefore you don't get the NULL pointer error. But this only works if you open a Python terminal by hand and import the needed shape by hand. If you import the shape from within your program, you will run into the same error again.
|
Faced the same issue and this worked for me:
`import shapely`
`shapely.speedups.disable()`
|
62,075,847
|
I tried to create a polygon shapefile in QGIS and read it in python by shapely. An example code looks like this:
```
import fiona
from shapely.geometry import shape
multipolys = fiona.open(somepath)
multi = multipolys[0]
coord = shape(multi['geometry'])
```
I get the error: GEOSGeom\_createLinearRing\_r returned a NULL pointer.
I checked if the polygon is valid in QGIS and no error was reported. Actually, it does not work even for a simple triangle generated in QGIS. Does anyone know how to solve it?
Thank you
|
2020/05/28
|
[
"https://Stackoverflow.com/questions/62075847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13242482/"
] |
Like J. P., I had this issue with creating LineStrings as well. There is [an old issue](https://github.com/Toblerity/Shapely/issues/353) (2016) in the Shapely github repository that seems related. Changing the order of the imports solved the problem for me:
```py
from shapely.geometry import LineString
import fiona
LineString([[0, 0], [1, 1]]).to_wkt()
# 'LINESTRING (0.0000000000000000 0.0000000000000000, 1.0000000000000000 1.0000000000000000)'
```
whereas
```py
import fiona
from shapely.geometry import LineString
LineString([[0, 0], [1, 1]]).to_wkt()
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "C:\Users\xxxxxxx\AppData\Roaming\Python\Python37\site-packages\shapely\geometry\linestring.py", line 48, in __init__
# self._set_coords(coordinates)
# File "C:\Users\xxxxxxx\AppData\Roaming\Python\Python37\site-packages\shapely\geometry\linestring.py", line 97, in _set_coords
# ret = geos_linestring_from_py(coordinates)
# File "shapely\speedups\_speedups.pyx", line 208, in shapely.speedups._speedups.geos_linestring_from_py
# ValueError: GEOSGeom_createLineString_r returned a NULL pointer
```
Some other issues in the Shapely repository to look at
* [553](https://github.com/Toblerity/Shapely/issues/553#issuecomment-369206763) for import order issues on a Mac
* [887](https://github.com/Toblerity/Shapely/issues/887) (same reverse-import-order trick with `osgeo` and `shapely`)
* [919](https://github.com/Toblerity/Shapely/issues/919)
|
Faced the same issue and this worked for me:
`import shapely`
`shapely.speedups.disable()`
|
62,479,608
|
What's the difference? The [docs](https://docs.python.org/3.7/library/types.html#types.FunctionType) show nothing on this, and their `help()` is identical. Is there an object for which `isinstance` will fail with one but not the other?
|
2020/06/19
|
[
"https://Stackoverflow.com/questions/62479608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10133797/"
] |
Back in 1994 I wasn't sure that we would always be using the same implementation type for lambda and def. That's all there is to it. It would be a pain to remove it, so we're just leaving it (it's only one line). If you want to add a note to the docs, feel free to submit a PR.
|
See [`cpython/Lib/types.py`](https://github.com/python/cpython/blob/a041e116db5f1e78222cbf2c22aae96457372680/Lib/types.py#L11-L13):
```
def _f(): pass
FunctionType = type(_f)
LambdaType = type(lambda: None) # Same as FunctionType
```
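A quick sketch confirming that the two names refer to the same type object, so `isinstance` can never succeed with one and fail with the other:
```
import types

def f(): pass
g = lambda: None

print(types.FunctionType is types.LambdaType)   # True -- same object
print(isinstance(f, types.LambdaType))          # True
print(isinstance(g, types.FunctionType))        # True
```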
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
1. Go to the Windows cmd prompt
2. Go to the Python directory
3. Then type `python -m pip install package-name`
|
I had the same problem with version 3.5.2.
Have you tried `py.exe -m pip install package-name`? This worked for me.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
1. Go to the Windows cmd prompt
2. Go to the Python directory
3. Then type `python -m pip install package-name`
|
If you are working in PyCharm, an easy way is to go to File > Settings > Project Interpreter. Click the + icon on the right side, then search for and install the required library.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
1. Go to the Windows cmd prompt
2. Go to the Python directory
3. Then type `python -m pip install package-name`
|
As soon as you open a command prompt, use:
```
python -m pip install --upgrade pip
```
then
```
python -m pip install <<package-name>>
```
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
Add the Scripts folder of your Python installation to your environment path,
or you can do this from the command line:
```
python -m pip install package-name
```
|
As soon as you open a command prompt, use:
```
python -m pip install --upgrade pip
```
then
```
python -m pip install <<package-name>>
```
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
I was having the same problem on Windows 10. This is how I fixed it:
1. Click the *search* icon and type **System Environment**
2. In *System Properties* click on **Environment Variables**
3. In *System Variables* tab click **New**
4. Enter **PYTHON3\_SCRIPTS** for the *variable name* and `C:\Users\YOUR USER NAME\AppData\Local\Programs\Python\Python38-32\Scripts` for *Variable Value*. *Don't forget to change (YOUR USER NAME) in the path with your **user**, And to change your **Python version** or just go to this path to check it `C:\Users\YOUR USER NAME\AppData\Local\Programs\Python`*
5. Click **OK**
6. Click **NEW** again!
7. Enter **PYTHON3\_HOME** for the *variable name* and `C:\Users\YOUR USER NAME\AppData\Local\Programs\Python\Python38-32\` for *Variable Value*. *Don't forget to change (YOUR USER NAME) in the path with your **user**, And to change your **Python version** or just go to this path to check it `C:\Users\YOUR USER NAME\AppData\Local\Programs\Python`*
8. Click **OK**
9. Find **Path** in the same tab *select it* and click **Edit**
10. Click **New** and type `%PYTHON3_SCRIPTS%` Then click **OK**
Now, everything is set. Restart your **Terminal** and `pip` should be working now.
|
If you are working in PyCharm, an easy way is to go to File > Settings > Project Interpreter. Click the + icon on the right side, then search for and install the required library.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
Run it in the cmd window, not inside the Python interpreter window. It took me forever to realize my mistake.
|
I had the same problem with version 3.5.2.
Have you tried `py.exe -m pip install package-name`? This worked for me.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
Run it in the cmd window, not inside the Python interpreter window. It took me forever to realize my mistake.
|
If you are working in PyCharm, an easy way is to go to File > Settings > Project Interpreter. Click the + icon on the right side, then search for and install the required library.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
If you are working in PyCharm, an easy way is to go to File > Settings > Project Interpreter. Click the + icon on the right side, then search for and install the required library.
|
For those with several versions of Python 3 installed on Windows: I solved this issue by executing the pip install command directly from my Python35 Scripts folder in cmd... for some reason pip3 pointed to Python 3.4 even though Python 3.5 was set first in the environment variables.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
Add the Scripts folder of your Python installation to your environment path,
or you can do this from the command line:
```
python -m pip install package-name
```
|
For those with several versions of Python 3 installed on Windows: I solved this issue by executing the pip install command directly from my Python35 Scripts folder in cmd... for some reason pip3 pointed to Python 3.4 even though Python 3.5 was set first in the environment variables.
|
34,247,930
|
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34247930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542278/"
] |
I had the same problem with version 3.5.2.
Have you tried `py.exe -m pip install package-name`? This worked for me.
|
For those with several versions of Python 3 installed on Windows: I solved this issue by executing the pip install command directly from my Python35 Scripts folder in cmd... for some reason pip3 pointed to Python 3.4 even though Python 3.5 was set first in the environment variables.
|
66,873,774
|
I'm really new to python and pandas so would you please help me answer this seemingly simple question? I already have an excel file containing my data, now I want to create an array containing those data in python. For example, I have data in excel that look like this:
[](https://i.stack.imgur.com/xgcx9.jpg)
From those data, I want to create a matrix like the one in the Python code below:
[](https://i.stack.imgur.com/zOSpc.jpg)
Actually, my data is much longer so is there any way that I can take advantage of pandas to put the data from my excel file into a matrix in python similar to the simple example above?
Thank you!
|
2021/03/30
|
[
"https://Stackoverflow.com/questions/66873774",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14533186/"
] |
This is quite simple; if you take a look at your code you should be able to follow through this sequence of operations.
1. The widget is created. **No action.** *At this point userIconData is null.*
2. `initState` is called. **async http call is initiated.** *userIconData == null*
3. `build` is called. **build occurs, throws error.** *userIconData == null*
4. http call returns. **userIconData is set.** *userIconData == your image*
Due to not calling `setState`, your build function won't run again. If you did, this would happen (but you'd still have had the exception earlier).
6. `build` is called. **userIconData is set.** *userIconData == your image*
The key here is understanding that asynchronous calls (anything that returns a future and optionally uses `async` and `await`) do not return immediately, but rather at some later point, and that you can't rely on them having set what you need in the meantime. If you had previously tried doing this with an image loaded from disk and it worked, that's only because flutter does some tricks that are only possible because loading from disk is synchronous.
Here are two options for how you can write your code instead.
```dart
class _FuncState extends State<Func> {
Uint8List? userIconData;
// if you're using any data from the `func` widget, use this instead
// of initState in case the widget changes.
// You could also check the old vs new and if there has been no change
// that would need a reload, not do the reload.
@override
void didUpdateWidget(Func oldWidget) {
super.didUpdateWidget(oldWidget);
updateUI();
}
void updateUI() async {
await getUserIconData(widget.role, widget.id, widget.session).then((value){
// this ensures that a rebuild happens
setState(() => userIconData = value);
});
}
@override
Widget build(BuildContext context) {
return SafeArea(
child: Scaffold(
body: Container(
// this only uses your circle avatar if the image is loaded, otherwise
// show a loading indicator.
child: userIconData != null ? CircleAvatar(
backgroundImage: Image.memory(userIconData!).image,
maxRadius: 20,
) : CircularProgressIndicator(),
),
),
);
}
}
```
Another way to do the same thing is to use a FutureBuilder.
```dart
class _FuncState extends State<Func> {
// using late isn't entirely safe, but we can trust
// flutter to always call didUpdateWidget before
// build so this will work.
late Future<Uint8List> userIconDataFuture;
@override
void didUpdateWidget(Func oldWidget) {
super.didUpdateWidget(oldWidget);
userIconDataFuture =
getUserIconData(widget.role, widget.id, widget.session);
}
@override
Widget build(BuildContext context) {
return SafeArea(
child: Scaffold(
body: Container(
child: FutureBuilder(
future: userIconDataFuture,
builder: (BuildContext context, AsyncSnapshot<Uint8List> snapshot) {
if (snapshot.hasData) {
return CircleAvatar(
backgroundImage: Image.memory(snapshot.data!).image,
maxRadius: 20);
} else {
return CircularProgressIndicator();
}
},
),
),
),
);
}
}
```
Note that the loading indicator is just one option; I'd actually recommend having a hard-coded default for your avatar (i.e. a grey 'user' image) that gets switched out when the image is loaded.
Note that I've used null-safe code here as that will make this answer have better longevity, but to switch back to non-null-safe code you can just remove the extraneous `?`, `!` and `late` in the code.
|
The error message is pretty clear to me. `userIconData` is `null` when you pass it to the `Image.memory` constructor.
Either use [FutureBuilder](https://api.flutter.dev/flutter/widgets/FutureBuilder-class.html) or a condition to check if `userIconData` is null before rendering image, and manually show a loading indicator if it is, or something along these lines. Also you'd need to actually set the state to trigger a re-render. I'd go with the former, though.
|
54,392,016
|
I have a Python script where I was experimenting with minimax AI, and so I tried to make a tic-tac-toe game.
I had a self-calling function to calculate the values, and it used a variable called alist (not the one below) which would be given to it by the previous call. It would then save it as a new list and modify it.
This worked, but when it came to back-tracking and viewing all the other possibilities, the original alist variable had been changed by the call following it, e.g.:
```
import sys
sys.setrecursionlimit(1000000)
def somefunction(alist):
newlist = alist
newlist[0][0] = newlist[0][0] + 1
if newlist[0][0] < 10:
somefunction(newlist)
print(newlist)
thelist = [[0, 0], [0, 0]]
somefunction(thelist)
```
It may be that this is difficult to solve, but if someone could help me it would be greatly appreciated.
|
2019/01/27
|
[
"https://Stackoverflow.com/questions/54392016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10976004/"
] |
`newlist = alist` does not make a copy of the list. You just have two variable names for the same list.
There are several ways to actually copy a list. I usually do this:
```
newlist = alist[:]
```
Note, though, that this makes a new outer list whose elements (the inner lists) are still shared. To make a deep copy of the list:
```
import copy
newlist = copy.deepcopy(alist)
```
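A small sketch of the difference for nested lists like the one in the question:
```
import copy

alist = [[0, 0], [0, 0]]

shallow = alist[:]            # new outer list, but the inner lists are shared
deep = copy.deepcopy(alist)   # new outer list and new inner lists

shallow[0][0] = 99
print(alist)   # [[99, 0], [0, 0]] -- the shallow copy shares the inner lists

deep[1][1] = 7
print(alist)   # [[99, 0], [0, 0]] -- the deep copy is fully independent
```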
|
You probably want to `deepcopy` your list, as it contains other lists:
```
from copy import deepcopy
```
And then change:
```
newlist = alist
```
to:
```
newlist = deepcopy(alist)
```
|
70,163,997
|
I have a folder of Python scripts, and I want to call each of them and pass in a DB object. This is easily doable, but I would like to do it dynamically, that is, when I don't know the names of the scripts beforehand. Is this possible?
Let's say all scripts are in the "scripts" subfolder.
My caller file:
```
#!/usr/bin/python
import scripts.Script1 as MyScript1
import scripts.Script2 as MyScript2
import pandas as pd
_scripts = {
'MyScript1': MyScript1,
'MyScript2': MyScript2,
}
def Invoke(DB, script, parameters):
if script in _scripts:
curScript = _scripts[script]
tables = GetTable(DB, curScript)
result = curScript.Invoke(tables)
return result
def GetTable(DB, script):
tables = script.TableToLoad()
dataframes = {}
if not isinstance(tables, list):
tables = [tables]
for table in tables:
dataframes[table] = DB.LoadDataframe(table)
return dataframes
```
The script file:
```
def TableToLoad():
return ['MyDBTable1']
def Invoke(tables):
df = tables['MyDBTable1']
# do useful work here
```
Is something like this dynamically loadable, i.e., can the \_scripts variable be populated dynamically?
|
2021/11/30
|
[
"https://Stackoverflow.com/questions/70163997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/468384/"
] |
If we work backwards, you'll need your DataFrame to have the addenda information in a single row before using the `.to_dict` operation:
| id\_number | name | amount | addenda |
| --- | --- | --- | --- |
| 1234 | ABCD | $100 | [{payment\_related\_info: Car-wash-$30, payment\_related\_info: Maintenance-$70}] |
To get here, you can perform a `groupby` on `id_number, name, amount`, then apply a function that collapses the strings in the `addenda` rows for that groupby into a list of dictionaries where each key is the string `'payment_related_info'`.
This works as expected if you add more rows to your original `df` as well:
| id\_number | name | amount | addenda |
| --- | --- | --- | --- |
| 1234 | ABCD | $100 | Car-wash-$30 |
| 1234 | ABCD | $100 | Maintenance-$70 |
| 2345 | BCDE | $200 | Car-wash-$100 |
| 2345 | BCDE | $200 | Maintenance-$100 |
```
def collapse_row(x):
addenda_list = x["addenda"].to_list()
last_row = x.iloc[-1]
last_row["addenda"] = [{'payment_related_info':v} for v in addenda_list]
return last_row
grouped = df.groupby(["id_number","name","amount"]).apply(collapse_row).reset_index(drop=True)
grouped.to_dict(orient='records')
```
Result:
```
[
{
"id_number":1234,
"name":"ABCD",
"amount":"$100",
"addenda":[
{"payment_related_info":"Car-wash-$30"},
{"payment_related_info":"Maintenance-$70"}
]
},
{
"id_number":2345,
"name":"BCDE",
"amount":"$200",
"addenda":[
{"payment_related_info":"Car-wash-$100"},
{"payment_related_info":"Maintenance-$100"}
]
}
]
```
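A more compact variant of the same idea (a sketch, reusing the `df` and column names from above) that builds the list of dicts directly from the `addenda` column:
```
records = (
    df.groupby(["id_number", "name", "amount"])["addenda"]
      .apply(lambda s: [{"payment_related_info": v} for v in s])
      .reset_index()
      .to_dict(orient="records")
)
print(records)
```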
|
Just apply a groupby and aggregate by creating a dataframe inside like this:
```py
data = {
"id_number": [1234, 1234],
"name": ["ABCD", "ABCD"],
"amount": ["$100", "$100"],
"addenda": ["Car-wash-$30", "Maintenance-$70"]
}
df = pd.DataFrame(data=data)
df.groupby(by=["id_number", "name", "amount"]) \
.agg(lambda col: pd.DataFrame(data=col) \
.rename(columns={"addenda": "payment_related_info"})) \
.reset_index() \
.to_json(orient="records")
```
This returns exactly the result you want!
|
25,598,838
|
I'm really new to python so this is probably a really stupid problem but I honestly have no idea what I'm doing and I have spent hours trying to get this to work.
I need to have the user input a date (in string form) and then use this date to return some data (The function get\_data\_for\_date has already previously been created and works fine, I just have to call it manually in the console and enter the date for it to work currently ). The data then needs to be split when it is returned. Any help would be appreciated, or even if you could just point me in the right direction.
```
dateStr = raw_input('Date? ')
def load_data(dateStr):
def get_data_for_date(dateStr):
text = data
return data.split('\n')
```
|
2014/09/01
|
[
"https://Stackoverflow.com/questions/25598838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3995938/"
] |
Try this sequence :
```
MYApplication.getInstance().clearApplicationData();
android.os.Process.killProcess(android.os.Process.myPid());
Intent intent1 = new Intent(Intent.ACTION_MAIN);
intent1.addCategory(Intent.CATEGORY_HOME);
intent1.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(intent1);
finish();
```
|
Avoid `killProcess`
Try this code :
```
Intent startMain = new Intent(Intent.ACTION_MAIN);
startMain.addCategory(Intent.CATEGORY_HOME);
startMain.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
activity.startActivity(startMain);
System.exit(-1);
```
|
25,598,838
|
I'm really new to python so this is probably a really stupid problem but I honestly have no idea what I'm doing and I have spent hours trying to get this to work.
I need to have the user input a date (in string form) and then use this date to return some data (The function get\_data\_for\_date has already previously been created and works fine, I just have to call it manually in the console and enter the date for it to work currently ). The data then needs to be split when it is returned. Any help would be appreciated, or even if you could just point me in the right direction.
```
dateStr = raw_input('Date? ')
def load_data(dateStr):
def get_data_for_date(dateStr):
text = data
return data.split('\n')
```
|
2014/09/01
|
[
"https://Stackoverflow.com/questions/25598838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3995938/"
] |
This is a fair and simple way of exiting an android app programmatically in my opinion:
```
Intent intent = new Intent(Intent.ACTION_MAIN);
intent.addCategory(Intent.CATEGORY_HOME);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
finish();
```
Hope it helps.
|
Avoid `killProcess`
Try this code :
```
Intent startMain = new Intent(Intent.ACTION_MAIN);
startMain.addCategory(Intent.CATEGORY_HOME);
startMain.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
activity.startActivity(startMain);
System.exit(-1);
```
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
If you are using Linux and a distribution like Ubuntu, you will definitely break the OS. Don't do it.
Moreover, there is no evidence that your installation is broken because of Python, so you will probably not solve your problem this way.
|
There's no harm I can see in overwriting a pip installation. So, just follow the [instructions](https://pip.pypa.io/en/stable/installing/) and let us know if you have further problems:
1. Download [get-pip.py](https://bootstrap.pypa.io/get-pip.py).
2. Run python get-pip.py and get on with the rest of your stuff.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
Your Python files are not specially managed by Python itself. If you uninstall Python, source code files (files with the `.py` extension) won't be affected.
|
There's no harm I can see in overwriting a pip installation. So, just follow the [instructions](https://pip.pypa.io/en/stable/installing/) and let us know if you have further problems:
1. Download [get-pip.py](https://bootstrap.pypa.io/get-pip.py).
2. Run python get-pip.py and get on with the rest of your stuff.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
There's no harm I can see in overwriting a pip installation. So, just follow the [instructions](https://pip.pypa.io/en/stable/installing/) and let us know if you have further problems:
1. Download [get-pip.py](https://bootstrap.pypa.io/get-pip.py).
2. Run python get-pip.py and get on with the rest of your stuff.
|
Before uninstalling Python, make sure all your Python applications support the new Python version.
My suggestion is to create virtual environments on your system so you can use multiple Python versions.
Try Anaconda - <https://www.anaconda.com/> - to create multiple virtual environments, where you can run a different Python version in each environment.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
There's no harm I can see in overwriting a pip installation. So, just follow the [instructions](https://pip.pypa.io/en/stable/installing/) and let us know if you have further problems:
1. Download [get-pip.py](https://bootstrap.pypa.io/get-pip.py).
2. Run python get-pip.py and get on with the rest of your stuff.
|
It depends on whether you installed Python yourself or it came with the OS.
If you installed Python yourself, it’s no problem at all: your files are safe and uninstalling Python won’t touch them.
If you’re planning on uninstalling the Python that came with your OS, I’d advise not to do that, as it could cause a whole lot of trouble. Instead, you could install a new version of Python into your user directory and link to it by adding its location to the `PATH` variable used by your shell.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
If you are using Linux and a distribution like Ubuntu, you will definitely break the OS. Don't do it.
Moreover, there is no evidence that your installation is broken because of Python, so you will probably not solve your problem this way.
|
Before uninstalling Python, make sure all your Python applications support the new Python version.
My suggestion is to create virtual environments on your system so you can use multiple Python versions.
Try Anaconda - <https://www.anaconda.com/> - to create multiple virtual environments, where you can run a different Python version in each environment.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
If you are using Linux and a distribution like Ubuntu, you will definitely break the OS. Don't do it.
Moreover, there is no evidence that your installation is broken because of Python, so you will probably not solve your problem this way.
|
It depends on whether you installed Python yourself or it came with the OS.
If you installed Python yourself, it’s no problem at all: your files are safe and uninstalling Python won’t touch them.
If you’re planning on uninstalling the Python that came with your OS, I’d advise not to do that, as it could cause a whole lot of trouble. Instead, you could install a new version of Python into your user directory and link to it by adding its location to the `PATH` variable used by your shell.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
Your Python files are not specially managed by Python itself. If you uninstall Python, source code files (files with the `.py` extension) won't be affected.
|
Before uninstalling Python, make sure all your Python applications support the new Python version.
My suggestion is to create virtual environments on your system so you can use multiple Python versions.
Try Anaconda - <https://www.anaconda.com/> - to create multiple virtual environments, where you can run a different Python version in each environment.
|
63,570,453
|
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
|
2020/08/25
|
[
"https://Stackoverflow.com/questions/63570453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13931651/"
] |
Your Python files are not specially managed by Python itself. If you uninstall Python, source code files (files with the `.py` extension) won't be affected.
|
It depends on whether you installed Python yourself or it came with the OS.
If you installed Python yourself, it’s no problem at all: your files are safe and uninstalling Python won’t touch them.
If you’re planning on uninstalling the Python that came with your OS, I’d advise not to do that, as it could cause a whole lot of trouble. Instead, you could install a new version of Python into your user directory and link to it by adding its location to the `PATH` variable used by your shell.
|
14,068,042
|
Recently I installed OpenCV on my machine. It's working well in Python (I just checked it with some example programs). But due to the lack of tutorials in Python I decided to move to C. I just ran a Hello World program from <http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/>
My program is
```
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
int main(int argc, char *argv[])
{
IplImage* img = 0;
int height,width,step,channels;
uchar *data;
int i,j,k;
if(argc<2){
printf("Usage: main <image-file-name>\n\7");
exit(0);
}
// load an image
img=cvLoadImage(argv[1]);
if(!img){
printf("Could not load image file: %s\n",argv[1]);
exit(0);
}
// get the image data
height = img->height;
width = img->width;
step = img->widthStep;
channels = img->nChannels;
data = (uchar *)img->imageData;
printf("Processing a %dx%d image with %d channels\n",height,width,channels);
// create a window
cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
cvMoveWindow("mainWin", 100, 100);
// invert the image
for(i=0;i<height;i++) for(j=0;j<width;j++) for(k=0;k<channels;k++)
data[i*step+j*channels+k]=255-data[i*step+j*channels+k];
// show the image
cvShowImage("mainWin", img );
// wait for a key
cvWaitKey(0);
// release the image
cvReleaseImage(&img );
return 0;
}
```
First, while compiling I got the following error:
```
hello-world.c:4:16: fatal error: cv.h: No such file or directory
compilation terminated.
```
and I tried to fix this error by compiling like this:
```
gcc -I/usr/lib/perl/5.12.4/CORE -o hello-world hello-world.c
```
But now the error is
```
In file included from hello-world.c:4:0:
/usr/lib/perl/5.12.4/CORE/cv.h:14:5: error: expected specifier-qualifier-list before ‘_XPV_HEAD’
hello-world.c:5:21: fatal error: highgui.h: No such file or directory
compilation terminated.
```
Questions:
Is this header not installed on my system? When I use the command `find /usr -name "highgui.h"` I don't find anything.
If this header is not on my system, how do I install it?
Please help me. I'm new to OpenCV.
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14068042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1894272/"
] |
First check if highgui.h exists on your machine:
```
sudo find /usr/include -name "highgui.h"
```
If you find it at a path, let's say "/usr/include/opencv/highgui.h",
then use:
```
#include <opencv/highgui.h>
```
in your C file. Or, while compiling, you could add
```
-I/usr/include/opencv
```
to the gcc line, but then your include line in the C file should become:
```
#include "highgui.h"
```
If your first command fails, that is, you don't "find" highgui.h on your machine, then clearly you are missing some package. To figure out that package name, use the apt-file command:
```
sudo apt-file search highgui.h
```
on my machine, it gave me this:
```
libhighgui-dev: /usr/include/opencv/highgui.h
libhighgui-dev: /usr/include/opencv/highgui.hpp
```
If you don't have apt-file then install it first, using:
```
sudo apt-get install apt-file
```
So, now that you know the package name, issue:
```
sudo apt-get install libhighgui-dev
```
Once this is done, use the find command to see exactly where the headers have been installed, and then change your include path accordingly.
|
I have the following headers in my project:
```
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/features2d/features2d.hpp>
```
The version of OpenCV 2.4.2
|
64,983,755
|
Until now, my understanding has been that Python imports a module by its path relative to the current directory, regardless of where the importing source file is.
for example:
```
bar
|-foo.py
|-foo1.py
```
So if we want to access `foo1.py` through `foo.py` from `bar`, I would think I need to do `from bar import foo1`. But that does not seem to work.
Since that is not how it works, I run into the following issue: suppose we now have another dir, `examples`:
```
bar
|-foo.py
|-foo1.py
examples
|- getfoo.py
```
How can we access `foo` and `foo1` under `bar` from `getfoo.py` under examples? (My intuition would be to import `foo` from the scope of `bar.<foo>`, but it does not work.)
|
2020/11/24
|
[
"https://Stackoverflow.com/questions/64983755",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9817556/"
] |
Before explaining why and how to make things work, let me show some correct code.
Here is the directory tree (which follows yours):
```
.
├── bar
│ ├── foo1.py
│ └── foo.py
└── examples
└── getfoo.py
```
Assume there is a variable named `var` in both foo1.py and foo.py.
Question I:
>
> so if we want to access fo01.py through foo.py from bar, I would think I need to do from bar import foo1. But the former does not seem to work.
>
>
>
Answer I:
>
> In `foo.py`, you should change `from bar import foo1` to `from foo1 import xxx` (where `xxx` are the things you want from `foo1.py`).
>
>
>
Question II:
>
> How can we access foo and foo1 under bar from getfoo.py under examples? (My common intuition would be, import foo from the scope of bar., but it does not work)
>
>
>
Answer II:
>
> You can't import modules like this until you adjust `sys.path`, when you're running the file directly like `python getfoo.py`.
> If you want to import `foo` and `foo1` from `getfoo.py`, you need to `sys.path.append()` the directory that contains them (here, the `bar` directory), after which you can do the imports.
>
>
>
Explanation:
This is about the Python module search path, which you can read about in [python sys.path](https://docs.python.org/3/library/sys.html?highlight=sys%20path#sys.path).
When we run `python file.py`, the interpreter creates a list of all the directories it will search when importing modules.
You can see the list with `import sys; print(sys.path)`.
The problem is that this list only contains the directory where file.py is located, plus some system paths of Python's own.
So, in the `bar` directory, you can access `foo1.py` from foo.py using `from foo1 import xxx`, or `from foo import xxx` in `foo1.py`.
But you cannot import `bar` (or its modules) in `getfoo.py`, because the interpreter doesn't know where to find them.
So if you want to import those things, tell the interpreter where to look with `sys.path.append('path/to/add')`.
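A minimal sketch of `examples/getfoo.py` along these lines (the variable `var` is the assumed test value mentioned above):
```
# examples/getfoo.py
import os
import sys

# make the bar/ directory itself importable, so foo.py and foo1.py
# can be loaded as top-level modules from anywhere
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "bar"))

import foo
import foo1

print(foo.var, foo1.var)
```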
|
Try importing foo1.py in getfoo.py by first putting its directory on the module search path:
```
import sys; sys.path.append('../bar')
import foo1
```
Or copy foo1.py into examples and then call
```
import foo1
```
(Note that `import ../bar/foo1.py` and `import foo1.py` are not valid syntax; you import the module name without the `.py` extension.)
|
67,045,619
|
I have a Python script that I run on Kubernetes.
After the process in the Python script ends, Kubernetes restarts the pod, and I don't want that.
I tried to add a line of code to the Python script like this:
```
text = input("please a key for exiting")
```
But I get an EOF error, presumably because the container has no stdin attached in my Kubernetes setup.
After that I tried to use restartPolicy: Never, but restartPolicy apparently cannot be Never and I get an error like this:
```
error validating data: ValidationError(Deployment.spec.template): unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;
```
How can I do this? I just want this pod not to restart. The fix can be in the Python script or in the Kubernetes YAML file.
|
2021/04/11
|
[
"https://Stackoverflow.com/questions/67045619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13338897/"
] |
You get `unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;` because you most probably messed up some indentation.
Here is an example deployment with **incorrect indentation** of the `restartPolicy` field:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
```
---
Here is a deployment with **correct indentation**:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
```
But this will result in error:
```
kubectl apply -f deploy.yaml
The Deployment "nginx-deployment" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
```
Here is an explanation why: [restartpolicy-unsupported-value-never-supported-values-always](https://stackoverflow.com/questions/55169075/restartpolicy-unsupported-value-never-supported-values-always)
---
If you want to run a one time pod, use a [k8s job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) or use pod directly:
```
apiVersion: v1
kind: Pod
metadata:
labels:
run: ngx
name: ngx
spec:
containers:
- image: nginx
name: ngx
restartPolicy: Never
```
|
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always.
Could you please try OnFailure? Since you have only one container, it should work.
|
67,045,619
|
I have a Python script that I run on Kubernetes.
After the process in the Python script ends, Kubernetes restarts the pod, and I don't want that.
I tried to add a line of code to the Python script like this:
```
text = input("please a key for exiting")
```
But I get an EOF error, presumably because the container has no stdin attached in my Kubernetes setup.
After that I tried to use restartPolicy: Never, but restartPolicy apparently cannot be Never and I get an error like this:
```
error validating data: ValidationError(Deployment.spec.template): unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;
```
How can I do this? I just want this pod not to restart. The fix can be in the Python script or in the Kubernetes YAML file.
|
2021/04/11
|
[
"https://Stackoverflow.com/questions/67045619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13338897/"
] |
You get `unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;` because you most probably messed up some indentation.
Here is an example deployment with **incorrect indentation** of the `restartPolicy` field:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
```
---
Here is a deployment with **correct indentation**:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
restartPolicy: Never # <-------
```
But this will result in error:
```
kubectl apply -f deploy.yaml
The Deployment "nginx-deployment" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
```
Here is an explanation why: [restartpolicy-unsupported-value-never-supported-values-always](https://stackoverflow.com/questions/55169075/restartpolicy-unsupported-value-never-supported-values-always)
---
If you want to run a one time pod, use a [k8s job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) or use pod directly:
```
apiVersion: v1
kind: Pod
metadata:
labels:
run: ngx
name: ngx
spec:
containers:
- image: nginx
name: ngx
restartPolicy: Never
```
|
You have a single process which ends (possibly due to an exception), so you need to correct that logic; then you can set restartPolicy to OnFailure.
If you want to debug your code, catch the error and log it so that your process does not end. Alternatively, you can run the logic inside a new child thread, while the main thread sleeps or waits on a condition.
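A minimal sketch of that idea on the Python side (the `main_logic` name is just a placeholder for your existing code):
```
import logging
import time

def main_logic():
    ...  # your existing script goes here

if __name__ == "__main__":
    try:
        main_logic()
    except Exception:
        logging.exception("script failed")
    # keep the process alive so the container does not exit and get restarted
    while True:
        time.sleep(3600)
```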
|
67,025,052
|
As I am teaching myself Bash programming, I came across an interesting use case, where **I want to take a list of variables that exist in the environment, and put them into an array. Then, I want to output a list of the variable names and their values, and store that output in an array, one entry per variable.**
I'm only about 2 weeks into Bash shell scripting in any "real" way, and I am educating myself on arrays. A common function in other programming languages is the ability to "zip" two arrays, e.g. [as is done in Python](https://realpython.com/python-zip-function/). Another common feature in any programming language is indirection, e.g. via [pointer indirection](https://en.wikipedia.org/wiki/Pointer_(computer_programming)), etc. This is largely academic, to teach myself through a somewhat challenging example, but I think this has widespread use if for no other reason than debugging, keeping track of overall system state, etc.
**What I want is for the following input... :**
```
VAR_ONE="LIGHT RED"
VAR_TWO="DARK GREEN"
VAR_THREE="BLUE"
VARIABLE_ARRAY=(VAR_ONE VAR_TWO VAR_THREE)
```
**... to be converted into the following output (as an array, one element per line):**
```
VAR_ONE: LIGHT RED
VAR_TWO: DARK GREEN
VAR_THREE: BLUE
```
**Constraints:**
* Assume that I do not have control of all of the variables, so I cannot just sidestep the problem e.g. by using an associative array from the get-go. (i.e. please do not recommend avoiding the need for indirect reference lookups altogether by never having a discrete variable named "VAR\_ONE"). But a solution that stores the result in an associative array is fine.
* Assume that variable names will never contain spaces, but their values might.
* The final output should *not* contain separate elements just because the input variables had values containing spaces.
**What I've read about so far:**
* I've read some StackOverflow posts like this one, that deal with using indirect references to arrays *themselves* (e.g. if you have three arrays and want to choose which one to pull from based on an "array choice" variable): [How to iterate over an array using indirect reference?](https://stackoverflow.com/q/11180714/12854372)
* I've also found one single post that deals with "zipping" arrays in Bash in the manner I'm talking about, where you pair-up e.g. the 1st element from `array1` and `array2`, then pair up the 2nd elements, etc.: [Iterate over two arrays simultaneously in bash](https://stackoverflow.com/questions/17403498/iterate-over-two-arrays-simultaneously-in-bash)
* ...but I haven't found anything that quite discusses this unique use-case...
**QUESTION:**
**How should I make an array containing a list of variable names and their values (colon-separated), given an array containing a list of variable names only. I'm not "failing to come up with any way to do it" but I want to find the "preferred" way to do this in Bash, considering performance, security, and being concise/understandable.**
EDIT: I'll post what I've come up with thus far as an answer to this post... but not mark it as answered, since I want to also hear some unbiased recommendations...
|
2021/04/09
|
[
"https://Stackoverflow.com/questions/67025052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12854372/"
] |
OP starts with:
```
VAR_ONE="LIGHT RED"
VAR_TWO="DARK GREEN"
VAR_THREE="BLUE"
VARIABLE_ARRAY=(VAR_ONE VAR_TWO VAR_THREE)
```
OP has provided an answer with 4 sets of code:
```
# first 3 sets of code generate:
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT RED" [1]="VAR_TWO: DARK GREEN" [2]="VAR_THREE: BLUE")
# the 4th set of code generates the following where the data values are truncated at the first space:
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT" [1]="VAR_TWO: DARK" [2]="VAR_THREE: BLUE")
```
**NOTES**:
* I'm assuming the output from the 4th set of code is wrong so will be ignoring this one
* OP's code samples touch on a couple ideas I'm going to make use of (below) ... nameref's (`declare -n <variable_name>`) and indirect variable references (`${!<variable_name>}`)
---
---
For readability (and maintainability by others) I'd probably avoid the various `eval` and expansion ideas and instead opt for using `bash` namesref's (`declare -n`); a quick example:
```
$ x=5
$ echo "${x}"
5
$ y=x
$ echo "${y}"
x
$ declare -n y="x" # nameref => y=(value of x)
$ echo "${y}"
5
```
Pulling this into the original issue we get:
```
unset outputValues
declare -a outputValues # optional; declare 'normal' array
for var_name in "${VARIABLE_ARRAY[@]}"
do
declare -n data_value="${var_name}"
outputValues+=("${var_name}: ${data_value}")
done
```
Which gives us:
```
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT RED" [1]="VAR_TWO: DARK GREEN" [2]="VAR_THREE: BLUE")
```
While this generates the same results (as OP's first 3 sets of code), there is (for me) the nagging question of how this new array is going to be used.
If the sole objective is to print this pre-formatted data to stdout ... ok, though why bother with a new array when the same can be done with the current array and nameref's?
If the objective is to access this array as sets of variable name/value pairs for processing purposes, then the current structure is going to be hard(er) to work with, eg, each array 'value' will need to be parsed/split based on the delimiter `:<space>` in order to access the actual variable names and values.
In this scenario I'd opt for using an associative array, eg:
```
unset outputValues
declare -A outputValues # required; declare associative array
for var_name in "${VARIABLE_ARRAY[@]}"
do
declare -n data_value="${var_name}"
outputValues[${var_name}]="${data_value}"
done
```
Which gives us:
```
$ typeset -p outputValues
declare -A outputValues=([VAR_ONE]="LIGHT RED" [VAR_THREE]="BLUE" [VAR_TWO]="DARK GREEN" )
```
**NOTES**:
* again, why bother with a new array when the same can be done with the current array and namerefs?
* if the variable `$data_value` is to be re-used in follow-on code as a 'normal' variable it will be necessary to remove the nameref attribute (`unset -n data_value`)
With an associative array (index=variable name / array element=variable value) it becomes easier to reference the variable name/value pairs, eg:
```
$ myvar=VAR_ONE
$ echo "${myvar}: ${outputValues[${myvar}]}"
VAR_ONE: LIGHT RED
$ for var_name in "${!outputValues[@]}"; do echo "${var_name}: ${outputValues[${var_name}]}"; done
VAR_ONE: LIGHT RED
VAR_THREE: BLUE
VAR_TWO: DARK GREEN
```
---
---
In older versions of `bash` (before namerefs were available), and still available in newer versions, there's the option of using indirect variable references:
```
$ x=5
$ echo "${x}"
5
$ unset -n y # make sure 'y' has not been previously defined as a nameref
$ y=x
$ echo "${y}"
x
$ echo "${!y}"
5
```
Pulling this into the associative array approach:
```
unset -n var_name # make sure var_name not previously defined as a nameref
unset outputValues
declare -A outputValues # required; declare associative array
for var_name in "${VARIABLE_ARRAY[@]}"
do
outputValues[${var_name}]="${!var_name}"
done
```
Which gives us:
```
$ typeset -p outputValues
declare -A outputValues=([VAR_ONE]="LIGHT RED" [VAR_THREE]="BLUE" [VAR_TWO]="DARK GREEN" )
```
**NOTE**: While this requires less coding in the `for` loop, if you forget to `unset -n` the variable (`var_name` in this case) then you'll end up with the wrong results if `var_name` was previously defined as a nameref; perhaps a minor issue but it requires the coder to know of, and code for, this particular issue ... a bit too esoteric (for my taste) so I prefer to stick with namerefs ... ymmv ...
|
I've come up with a handful of possible solutions in the last couple days, each one with their own pro's and con's. I won't mark this as the answer for awhile though, since I'm interested in hearing unbiased recommendations.
---
My brainstorming solutions thus far:
OPTION #1 - FOR-LOOP:
```
alias PrintCommandValues='unset outputValues
for var in ${VARIABLE_ARRAY[@]}
do outputValues+=("${var}: ${!var}")
done; printf "%s\n\n" "${outputValues[@]}"'
PrintCommandValues
```
Pros: Traditional, easy to understand
Cons: A little verbose. I'm not sure about Bash, but I've been doing a lot of Mathematica programming (imperative-style), where such loops are notably slower. Anybody know if that's true for Bash?
OPTION #2 - EVAL:
```
i=0; outputValues=("${VARIABLE_ARRAY[@]}")
eval declare "${VARIABLE_ARRAY[@]/#/outputValues[i++]+=:\\ $}"
printf "%s\n\n" "${outputValues[@]}"
```
Pros: Shorter than the for-loop, and still easy to understand.
Cons: I'm no expert, but I've read a lot of warnings to avoid `eval` whenever possible, due to security issues. Probably not something I'll concern myself a ton over when I'm mostly writing scripts for "handy utility purposes" for my personal machine only, but...
OPTION #3 - QUOTED DECLARE WITH PARENTHESIS:
```
i=0; declare -a outputValues="(${VARIABLE_ARRAY[@]/%/'\:\ "${!VARIABLE_ARRAY[i++]}"'})"
printf "%s\n\n" "${outputValues[@]}"
```
Pros: Super-concise. I just plain stumbled onto this syntax -- I haven't found it mentioned anywhere on the web. Apparently, using `declare` in Bash (I use version 4.4.20(1)), if (**and ONLY if**) you place array-style `(...)` brackets after the equals-sign, and quote it, you get one more "round" of expansion/dereferencing, similar to `eval`. I happened to be toying with [this post](https://superuser.com/a/1186997), and found the part about the "extra expansion" by accident.
For example, compare these two tests:
```
varName=varOne; varOne=something
declare test1=\$$varName
declare -a test2="(\$$varName)"
declare -p test1 test2
```
Output:
```
declare -- test1="\$varOne"
declare -a test2=([0]="something")
```
Pretty neat, I think...
Anyways, the cons for this method are... I've never seen it documented officially or unofficially anywhere, so... portability...?
Alternative for this option:
```
i=0; declare -a LABELED_VARIABLE_ARRAY="(${VARIABLE_ARRAY[@]/%/'\:\ \$"${VARIABLE_ARRAY[i++]}"'})"
declare -a outputValues=("${LABELED_VARIABLE_ARRAY[@]@P}")
printf "%s\n\n" "${outputValues[@]}"
```
JUST FOR FUN - BRACE EXPANSION:
```
unset outputValues; OLDIFS=$IFS; IFS=; i=0; j=0
declare -n nameCursor=outputValues[i++]; declare -n valueCursor=outputValues[j++]
declare {nameCursor+=,valueCursor+=": "$}{VAR_ONE,VAR_TWO,VAR_THREE}
printf "%s\n\n" "${outputValues[@]}"
IFS=$OLDIFS
```
Pros: ??? Maybe speed?
Cons: Pretty verbose, not very easy to understand
---
Anyways, those are all of my methods... Are any of them reasonable, or would you do something different altogether?
|
1,894,099
|
I am trying to run the script [csv2json.py](http://www.djangosnippets.org/snippets/1680/) in the Command Prompt, but I get this error:
```
C:\Users\A\Documents\PROJECTS\Django\sw2>csv2json.py csvtest1.csv wkw1.Lawyer
Converting C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv from CSV to JSON as C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv.json
Traceback (most recent call last):
File "C:\Users\A\Documents\PROJECTS\Django\sw2\csv2json.py", line 37, in <module>
f = open(in_file, 'r' )
IOError: [Errno 2] No such file or directory: 'C:\\Users\\A\\Documents\\PROJECTS\\Django\\sw2csvtest1.csv'
```
Here are the relevant lines from the snippet:
```
31 in_file = dirname(__file__) + input_file_name
32 out_file = dirname(__file__) + input_file_name + ".json"
34 print "Converting %s from CSV to JSON as %s" % (in_file, out_file)
36 f = open(in_file, 'r' )
37 fo = open(out_file, 'w')
```
It seems that the directory name and file name are combined. How can I make this script run?
Thanks.
**Edit:**
Altering lines 31 and 32 as answered by Denis Otkidach worked fine. But I realized that the first column name needs to be pk and each row needs to start with an integer:
```
for row in reader:
if not header_row:
header_row = row
continue
pk = row[0]
model = model_name
fields = {}
for i in range(len(row)-1):
active_field = row[i+1]
```
So my csv row now looks like this (including the header row):
```
pk, firm_url, firm_name, first, last, school, year_graduated
1, http://www.graychase.com/aabbas, Gray & Chase, Amr A, Babas, The George Washington University Law School, 2005
```
Is this a requirement of the django fixture or json format? If so, I need to find a way to add the pk numbers to each row. Can I delete this pk column? Any suggestions?
**Edit 2**
I keep getting this ValidationError: "This value must be an integer". There is only one integer field and that's the pk. Is there a way to find out from the traceback what the line numbers refer to?
```
Problem installing fixture 'C:\Users\A\Documents\Projects\Django\sw2\wkw2\fixtures\csvtest1.csv.json': Traceback (most recent call last):
File "C:\Python26\Lib\site-packages\django\core\management\commands\loaddata.py", line 150, in handle
for obj in objects:
File "C:\Python26\lib\site-packages\django\core\serializers\json.py", line 41, in Deserializer
for obj in PythonDeserializer(simplejson.load(stream)):
File "C:\Python26\lib\site-packages\django\core\serializers\python.py", line 95, in Deserializer
data[field.attname] = field.rel.to._meta.get_field(field.rel.field_name).to_python(field_value)
File "C:\Python26\lib\site-packages\django\db\models\fields\__init__.py", line 356, in to_python
_("This value must be an integer."))
ValidationError: This value must be an integer.
```
|
2009/12/12
|
[
"https://Stackoverflow.com/questions/1894099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/215094/"
] |
```
from os import path
in_file = path.join(path.dirname(__file__), input_file_name)
out_file = path.join(path.dirname(__file__), input_file_name + ".json")
[...]
```
|
You should be using `os.path.join` rather than just concatenating `dirname()` and filenames.
```
import os.path
in_file = os.path.join(os.path.dirname(__file__), input_file_name)
out_file = os.path.join(os.path.dirname(__file__), input_file_name + ".json")
```
will fix your problem, though depending on what exactly you're doing, there's probably a more elegant way to do it.
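If you're on Python 3.4+, one arguably more elegant route is `pathlib`; a minimal sketch, keeping the same "directory of the script" semantics as above (the sample filename is just an illustration):
```
from pathlib import Path

input_file_name = "csvtest1.csv"  # example value; normally taken from the CLI args
in_file = Path(__file__).parent / input_file_name
out_file = in_file.with_name(in_file.name + ".json")  # e.g. csvtest1.csv.json
```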
|
1,894,099
|
I am trying to run the script [csv2json.py](http://www.djangosnippets.org/snippets/1680/) in the Command Prompt, but I get this error:
```
C:\Users\A\Documents\PROJECTS\Django\sw2>csv2json.py csvtest1.csv wkw1.Lawyer
Converting C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv from CSV to JSON as C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv.json
Traceback (most recent call last):
File "C:\Users\A\Documents\PROJECTS\Django\sw2\csv2json.py", line 37, in <module>
f = open(in_file, 'r' )
IOError: [Errno 2] No such file or directory: 'C:\\Users\\A\\Documents\\PROJECTS\\Django\\sw2csvtest1.csv'
```
Here are the relevant lines from the snippet:
```
31 in_file = dirname(__file__) + input_file_name
32 out_file = dirname(__file__) + input_file_name + ".json"
34 print "Converting %s from CSV to JSON as %s" % (in_file, out_file)
36 f = open(in_file, 'r' )
37 fo = open(out_file, 'w')
```
It seems that the directory name and file name are combined. How can I make this script run?
Thanks.
**Edit:**
Altering lines 31 and 32 as answered by Denis Otkidach worked fine. But I realized that the first column name needs to be pk and each row needs to start with an integer:
```
for row in reader:
if not header_row:
header_row = row
continue
pk = row[0]
model = model_name
fields = {}
for i in range(len(row)-1):
active_field = row[i+1]
```
So my csv row now looks like this (including the header row):
```
pk, firm_url, firm_name, first, last, school, year_graduated
1, http://www.graychase.com/aabbas, Gray & Chase, Amr A, Babas, The George Washington University Law School, 2005
```
Is this a requirement of the django fixture or json format? If so, I need to find a way to add the pk numbers to each row. Can I delete this pk column? Any suggestions?
**Edit 2**
I keep getting this ValidationError: "This value must be an integer". There is only one integer field and that's the pk. Is there a way to find out from the traceback what the line numbers refer to?
```
Problem installing fixture 'C:\Users\A\Documents\Projects\Django\sw2\wkw2\fixtures\csvtest1.csv.json': Traceback (most recent call last):
File "C:\Python26\Lib\site-packages\django\core\management\commands\loaddata.py", line 150, in handle
for obj in objects:
File "C:\Python26\lib\site-packages\django\core\serializers\json.py", line 41, in Deserializer
for obj in PythonDeserializer(simplejson.load(stream)):
File "C:\Python26\lib\site-packages\django\core\serializers\python.py", line 95, in Deserializer
data[field.attname] = field.rel.to._meta.get_field(field.rel.field_name).to_python(field_value)
File "C:\Python26\lib\site-packages\django\db\models\fields\__init__.py", line 356, in to_python
_("This value must be an integer."))
ValidationError: This value must be an integer.
```
|
2009/12/12
|
[
"https://Stackoverflow.com/questions/1894099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/215094/"
] |
`+` is used incorrectly here; the proper way to combine a directory name and a file name is `os.path.join()`. But there is no need to combine the directory where the script is located with the file name, since it's common to pass a path relative to the current working directory. So, change lines 31-32 to the following:
```
in_file = input_file_name
out_file = in_file + '.json'
```
|
You should be using `os.path.join` rather than just concatenating `dirname()` and filenames.
```
import os.path
in_file = os.path.join(os.path.dirname(__file__), input_file_name)
out_file = os.path.join(os.path.dirname(__file__), input_file_name + ".json")
```
will fix your problem, though depending on what exactly you're doing, there's probably a more elegant way to do it.
|
1,894,099
|
I am trying to run the script [csv2json.py](http://www.djangosnippets.org/snippets/1680/) in the Command Prompt, but I get this error:
```
C:\Users\A\Documents\PROJECTS\Django\sw2>csv2json.py csvtest1.csv wkw1.Lawyer
Converting C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv from CSV to JSON as C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv.json
Traceback (most recent call last):
File "C:\Users\A\Documents\PROJECTS\Django\sw2\csv2json.py", line 37, in <module>
f = open(in_file, 'r' )
IOError: [Errno 2] No such file or directory: 'C:\\Users\\A\\Documents\\PROJECTS\\Django\\sw2csvtest1.csv'
```
Here are the relevant lines from the snippet:
```
31 in_file = dirname(__file__) + input_file_name
32 out_file = dirname(__file__) + input_file_name + ".json"
34 print "Converting %s from CSV to JSON as %s" % (in_file, out_file)
36 f = open(in_file, 'r' )
37 fo = open(out_file, 'w')
```
It seems that the directory name and file name are combined. How can I make this script run?
Thanks.
**Edit:**
Altering lines 31 and 32 as answered by Denis Otkidach worked fine. But I realized that the first column name needs to be pk and each row needs to start with an integer:
```
for row in reader:
if not header_row:
header_row = row
continue
pk = row[0]
model = model_name
fields = {}
for i in range(len(row)-1):
active_field = row[i+1]
```
So my csv row now looks like this (including the header row):
```
pk, firm_url, firm_name, first, last, school, year_graduated
1, http://www.graychase.com/aabbas, Gray & Chase, Amr A, Babas, The George Washington University Law School, 2005
```
Is this a requirement of the django fixture or json format? If so, I need to find a way to add the pk numbers to each row. Can I delete this pk column? Any suggestions?
**Edit 2**
I keep getting this ValidationError: "This value must be an integer". There is only one integer field and that's the pk. Is there a way to find out from the traceback what the line numbers refer to?
```
Problem installing fixture 'C:\Users\A\Documents\Projects\Django\sw2\wkw2\fixtures\csvtest1.csv.json': Traceback (most recent call last):
File "C:\Python26\Lib\site-packages\django\core\management\commands\loaddata.py", line 150, in handle
for obj in objects:
File "C:\Python26\lib\site-packages\django\core\serializers\json.py", line 41, in Deserializer
for obj in PythonDeserializer(simplejson.load(stream)):
File "C:\Python26\lib\site-packages\django\core\serializers\python.py", line 95, in Deserializer
data[field.attname] = field.rel.to._meta.get_field(field.rel.field_name).to_python(field_value)
File "C:\Python26\lib\site-packages\django\db\models\fields\__init__.py", line 356, in to_python
_("This value must be an integer."))
ValidationError: This value must be an integer.
```
|
2009/12/12
|
[
"https://Stackoverflow.com/questions/1894099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/215094/"
] |
`+` is used incorrectly here; the proper way to combine a directory name and a file name is `os.path.join()`. But there is no need to combine the directory where the script is located with the file name, since it's common to pass a path relative to the current working directory. So, change lines 31-32 to the following:
```
in_file = input_file_name
out_file = in_file + '.json'
```
|
```
from os import path
in_file = path.join(path.dirname(__file__), input_file_name)
out_file = path.join(path.dirname(__file__), input_file_name + ".json")
[...]
```
|
33,545,813
|
I am creating a Python class but it seems I can't get the constructor class to work properly. Here is my class:
```
class IQM_Prep(SBconcat):
def __init__(self,project_dir):
self.project_dir=project_dir #path to parent project dir
self.models_path=self.__get_models_path__() #path to parent models dir
self.experiments_path=self.__get_experiments_path__() #path to parent experiemnts dir
def __get_models_path__(self):
for i in os.listdir(self.project_dir):
if i=='models':
models_path=os.path.join(self.project_dir,i)
return models_path
def __get_experiments_path__(self):
for i in os.listdir(self.project_dir):
if i == 'experiments':
experiments_path= os.path.join(self.project_dir,i)
return experiments
```
When I initialize this class:
```
project_dir='D:\\MPhil\\Model_Building\\Models\\TGFB\\Vilar2006\\SBML_sh_ver\\vilar2006_SBSH_test7\\Python_project'
IQM= Modelling_Tools.IQM_Prep(project_dir)
```
I get the following error:
```
Traceback (most recent call last):
File "<ipython-input-49-7c46385755ce>", line 1, in <module>
runfile('D:/MPhil/Python/My_Python_Modules/Modelling_Tools/Modelling_Tools.py', wdir='D:/MPhil/Python/My_Python_Modules/Modelling_Tools')
File "C:\Anaconda1\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
execfile(filename, namespace)
File "D:/MPhil/Python/My_Python_Modules/Modelling_Tools/Modelling_Tools.py", line 1655, in <module>
import test
File "test.py", line 19, in <module>
print parameter_file
File "Modelling_Tools.py", line 1536, in __init__
self.models_path=self.__get_models_path__() #path to parent models dir
File "Modelling_Tools.py", line 1543, in __get_models_path__
return models_path
UnboundLocalError: local variable 'models_path' referenced before assignment
```
`Modelling_Tools` is the name of my custom module.
|
2015/11/05
|
[
"https://Stackoverflow.com/questions/33545813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3059024/"
] |
Based on the traceback, it seems that either:
```
def __get_models_path__(self):
for i in os.listdir(self.project_dir): # 1. this never loops; or
if i=='models': # 2. this never evaluates True
models_path=os.path.join(self.project_dir,i) # hence this never happens
return models_path # and this causes an error
```
You should review the result of `os.listdir(self.project_dir)` to find out why; either the directory is empty or nothing in it is named `models`. You could initialise e.g. `models_path = None` at the start of the method, but that would just hide the problem until later.
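As a rough sketch (not the original code) of how the lookup could fail loudly with a useful message instead of hitting the `UnboundLocalError` (the method is renamed with a single leading underscore, per the naming sidenote below):
```
import os

def _get_models_path(self):
    # Look for the expected 'models' subdirectory directly and report
    # what was actually found if it is missing.
    candidate = os.path.join(self.project_dir, 'models')
    if os.path.isdir(candidate):
        return candidate
    raise IOError("no 'models' directory in %r (found: %r)"
                  % (self.project_dir, os.listdir(self.project_dir)))
```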
---
**Sidenote**: per my comments, you should check out the [style guide](https://www.python.org/dev/peps/pep-0008/), particularly on naming conventions for methods...
|
`models_path` is initialized only when:
* `self.project_dir` has some files/dirs and
* one of those files/dirs is named `models`
If either condition is not fulfilled, then `models_path` is never initialized (a quick check is sketched below).
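A quick way to see which condition fails (a sketch; `project_dir` is the path you pass to the constructor):
```
import os
print(os.listdir(project_dir))  # is the directory empty, or just missing 'models'?
```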
|
21,319,261
|
I am trying to execute some code on a Beaglebone Black running ubuntu. The script has two primary functions:
1: count digital pulse
2: store the counted pulses in mySQL every 10s or so
These two functions need to run indefinitely.
My question is how do I get these two functions to run in parallel? Here is my latest code revision that doesn't work so you can see what I am attempting to do. Any help much appreciated -- as I am new to Python.
Thank you in advance!
```
#!/usr/bin/python
import Adafruit_BBIO.GPIO as GPIO
import MySQLdb
import time
import thread
from threading import Thread
now = time.strftime('%Y-%m-%d %H:%M:%S')
total1 = 0
total2 = 0
def insertDB_10sec(now, total1, total2):
while True:
conn = MySQLdb.connect(host ="localhost",
user="root",
passwd="password",
db="dbname")
cursor= conn.cursor()
cursor.execute("INSERT INTO tablename VALUES (%s, %s, %s)", (now, total1, total2))
conn.commit()
conn.close()
print "DB Entry Made"
time.sleep(10)
def countPulse():
now = time.strftime('%Y-%m-%d %H:%M:%S')
GPIO.setup("P8_12", GPIO.IN)
total1 = 0
total2 = 0
while True:
GPIO.wait_for_edge("P8_12", GPIO.RISING)
GPIO.wait_for_edge("P8_12", GPIO.FALLING)
now = time.strftime('%Y-%m-%d %H:%M:%S')
print now
total1 +=1
print total1
return now, total1, total2
t1 = Thread(target = countPulse)
t2 = Thread(target = insertDB_10sec(now, total1, total2))
t1.start()
t2.start()
```
|
2014/01/23
|
[
"https://Stackoverflow.com/questions/21319261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2133624/"
] |
This is a perfect problem for a `Queue`!
```
#!/usr/bin/python
import Adafruit_BBIO.GPIO as GPIO
import MySQLdb
import time
import thread
import Queue
from threading import Thread
now = time.strftime('%Y-%m-%d %H:%M:%S')
total1 = 0
total2 = 0
pulse_objects = Queue.Queue()
def insertDB_10sec(pulse_objects):
while True:
now,total1,total2 = pulse_objects.get()
conn = MySQLdb.connect(host ="localhost",
user="root",
passwd="password",
db="dbname")
cursor= conn.cursor()
cursor.execute("INSERT INTO tablename VALUES (%s, %s, %s)", (now, total1, total2))
conn.commit()
conn.close()
print "DB Entry Made"
time.sleep(10)
def countPulse(pulse_objects):
now = time.strftime('%Y-%m-%d %H:%M:%S')
GPIO.setup("P8_12", GPIO.IN)
total1 = 0
total2 = 0
while True:
GPIO.wait_for_edge("P8_12", GPIO.RISING)
GPIO.wait_for_edge("P8_12", GPIO.FALLING)
now = time.strftime('%Y-%m-%d %H:%M:%S')
print now
total1 +=1
print total1
pulse_objects.put( (now, total1, total2) )
t1 = Thread(target = countPulse, args = (pulse_objects,))
t2 = Thread(target = insertDB_10sec, args = (pulse_objects,))
t1.start()
t2.start()
```
|
Why do you need two threads? Move the insert to `countPulse`. That being said:
You should not call `insertDB_10sec` here:
```
t2 = Thread(target = insertDB_10sec(now, total1, total2))
```
Provide the arguments as actual arguments:
```
t2 = Thread(target = insertDB_10sec, args=(now, total1, total2))
```
This will still not do what you intend, because `now, total1, total2` are local variables in `countPulse`. Instead declare them as global:
```
global now
global total1
global total2
```
The same holds for the other thread, because you pass the variables in at start. Remove the arguments to `insertDB_10sec` altogether.
Why would you want to store a new database entry every ten seconds regardless of what the other thread does? Your comment suggests that the field for `now` is unique. Therefore you should first check whether `now` changed before you try to insert again. Or replace `INSERT INTO` with `REPLACE INTO`.
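A rough sketch of the `REPLACE INTO` variant (not the poster's code; the table name and credentials are the placeholders from the question):
```
import MySQLdb

def save_totals(now, total1, total2):
    conn = MySQLdb.connect(host="localhost", user="root",
                           passwd="password", db="dbname")
    try:
        cursor = conn.cursor()
        # REPLACE INTO overwrites a row that has the same unique key
        # (assumed here to be the timestamp column) instead of erroring.
        cursor.execute("REPLACE INTO tablename VALUES (%s, %s, %s)",
                       (now, total1, total2))
        conn.commit()
    finally:
        conn.close()
```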
|
3,023,136
|
Is it somehow possible to extract .cab files in python?
|
2010/06/11
|
[
"https://Stackoverflow.com/questions/3023136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/341963/"
] |
Not strictly answering what you asked, but if you are running on a windows platform you could spawn a process to do it for you.
Taken from Wikipedia:
>
> Microsoft Windows provides two
> command-line tools for creation and
> extraction of CAB files. They are
> MAKECAB.EXE (included within Windows
> packages such as 'ie501sp2.exe' and
> 'orktools.msi'; also available from
> the SDK, see below) and EXTRACT.EXE
> (included on the installation CD),
> respectively. Windows XP also provides
> the EXPAND.EXE command.
>
>
>
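For example, a minimal sketch of shelling out to Windows' `expand.exe` from Python (the paths are placeholders, and you should double-check the flags against `expand /?` on your system):
```
import subprocess

def extract_cab(cab_path, dest_dir):
    # -F:* asks expand.exe to extract every file in the cabinet into dest_dir.
    subprocess.check_call(["expand", "-F:*", cab_path, dest_dir])
```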
|
Oddly, the [msilib](http://docs.python.org/library/msilib.html) can only create or append to .CAB files, but not extract them. :(
However, the [hachoir](https://hachoir.readthedocs.io/en/latest/parser.html) parser module can apparently read & edit Cabinets. (I have not used it, though, so I couldn't tell you how fitting it is or not!)
|
3,023,136
|
Is it somehow possible to extract .cab files in python?
|
2010/06/11
|
[
"https://Stackoverflow.com/questions/3023136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/341963/"
] |
I had the same problem last week so I implemented this in python. Comments, additions and especially pull requests welcome: <https://github.com/hughsie/python-cabarchive>
|
Oddly, the [msilib](http://docs.python.org/library/msilib.html) can only create or append to .CAB files, but not extract them. :(
However, the [hachoir](https://hachoir.readthedocs.io/en/latest/parser.html) parser module can apparently read & edit Cabinets. (I have not used it, though, so I couldn't tell you how fitting it is or not!)
|
66,283,314
|
I am writing a script to automate data collection and was having trouble clicking a link. The website is behind a login, but I navigated that successfully. I ran into problems when trying to navigate to the download page. This is in python using chrome webdriver.
I have tried using:
```
find_element_by_partial_link_text('stuff').click()
find_element_by_xpath('stuff').click()
#and a few others
```
I get iterations of following message when I try a few of the selector statements.
```
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"partial link text","selector":"download"}
(Session info: chrome=88.0.4324.182)
```
Html source I'm trying to use is:
```
<a routerlink="/download" title="Download" href="/itron-mvweb/download"><i class="fa fa-lg fa-fw fa-download"></i><span class="menu-item-parent">Download</span></a>
```
Thank you!
|
2021/02/19
|
[
"https://Stackoverflow.com/questions/66283314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14024634/"
] |
This is caused by a typo. `Download` is case-sensitive, make sure you capitalize the `D`!
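With the capitalization fixed, the original lookup would be (a sketch using the same partial-link-text strategy as the question):
```
driver.find_element_by_partial_link_text('Download').click()
```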
|
To click on the element with text as **Download** you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
* Using `css_selector`:
```
driver.find_element(By.CSS_SELECTOR, "a[title='Download'][href='/itron-mvweb/download'] span.menu-item-parent").click()
```
* Using `xpath`:
```
driver.find_element(By.XPATH, "//a[@title='Download' and @href='/itron-mvweb/download']//span[@class='menu-item-parent' and text()='Download']").click()
```
---
Ideally, to click on the element you need to induce [WebDriverWait](https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336) for the [`element_to_be_clickable()`](https://stackoverflow.com/questions/65604057/unable-to-locate-element-using-selenium-chrome-webdriver-in-python-selenium/65604613#65604613) and you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
* Using `CSS_SELECTOR`:
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[title='Download'][href='/itron-mvweb/download'] span.menu-item-parent"))).click()
```
* Using `XPATH`:
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@title='Download' and @href='/itron-mvweb/download']//span[@class='menu-item-parent' and text()='Download']"))).click()
```
* **Note**: You have to add the following imports :
```
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
```
---
References
----------
You can find a couple of relevant discussions on [NoSuchElementException](https://stackoverflow.com/questions/47993443/selenium-selenium-common-exceptions-nosuchelementexception-when-using-chrome/47995294#47995294) in:
* [selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element while trying to click Next button with selenium](https://stackoverflow.com/questions/50315587/selenium-common-exceptions-nosuchelementexception-message-no-such-element-una/50315715#50315715)
* [selenium in python : NoSuchElementException: Message: no such element: Unable to locate element](https://stackoverflow.com/questions/53441658/selenium-in-python-nosuchelementexception-message-no-such-element-unable-to/53442511#53442511)
|
27,572,688
|
I have written the following code using Python 2.7 to search the list 'dem\_nums' for the first three characters from each element in the list 'dems', and if they are not present to append them. When I run the code the list 'dem\_nums' is returned as empty. I've tried using this article to help ([check if a number already exist in a list in python](https://stackoverflow.com/questions/14667578/check-if-a-number-already-exist-in-a-list-in-python)) but using the information there hasn't solved the problem.
```
dems = ["083c15", "083c16", "083f01", "083f02"]
dem_nums = []
for dem in dems:
dem_num = dem[0:3]
if dem_num not in dem_nums:
dem_nums.append(dem_num)
```
|
2014/12/19
|
[
"https://Stackoverflow.com/questions/27572688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4289336/"
] |
I am not sure I understand your requirement. Why write such a complicated stylesheet when the end result should simply be a total amount of numbers? Also, it seems you are already familiar with the relevant EXSLT functions and with converting strings into numbers.
**Stylesheet**
```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
xmlns:exsl="http://exslt.org/common">
<xsl:output method="text" encoding="UTF-8" indent="no"/>
<xsl:variable name="amounts">
<xsl:for-each select="//LI_Amount_display">
<amount>
<xsl:value-of select="number(substring(translate(.,',',''),2))"/>
</amount>
</xsl:for-each>
</xsl:variable>
<xsl:template match="/">
<xsl:value-of select="sum(exsl:node-set($amounts)/amount)"/>
</xsl:template>
</xsl:stylesheet>
```
**Output**
```
9000
```
|
While I tend to go with the suggestion made by Mathias Müller, I wanted to show how you can do this using a recursive named template:
**XSLT 1.0**
```
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" omit-xml-declaration="yes" version="1.0" encoding="utf-8" indent="yes"/>
<xsl:template match="/">
<xsl:call-template name="sum-nodes" >
<xsl:with-param name="nodes" select="query/results/result/columns/column/LI_Amount_display" />
</xsl:call-template>
</xsl:template>
<xsl:template name="sum-nodes" >
<xsl:param name="nodes"/>
<xsl:param name="sum" select="0"/>
<xsl:param name="newSum" select="$sum + translate($nodes[1], '$,', '')"/>
<xsl:choose>
<xsl:when test="count($nodes) > 1">
<!-- recursive call -->
<xsl:call-template name="sum-nodes" >
<xsl:with-param name="nodes" select="$nodes[position() > 1]" />
<xsl:with-param name="sum" select="$newSum" />
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="format-number($newSum, '#,##0.00')"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
```
**Result**:
```
9,000.00
```
|
2,051,526
|
As we all know (or should), you can use Django's template system to render email bodies:
```
def email(email, subject, template, context):
from django.core.mail import send_mail
from django.template import loader, Context
send_mail(subject, loader.get_template(template).render(Context(context)), 'from@domain.com', [email,])
```
This has one flaw in my mind: to edit the subject and content of an email, you have to edit both the view and the template. While I can justify giving admin users access to the templates, I'm not giving them access to the raw python!
What would be really cool is if you could specify blocks in the email and pull them out separately when you send the email:
```
{% block subject %}This is my subject{% endblock %}
{% block plaintext %}My body{% endblock%}
{% block html %}My HTML body{% endblock%}
```
But how would you do that? How would you go about rendering just one block at a time?
|
2010/01/12
|
[
"https://Stackoverflow.com/questions/2051526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12870/"
] |
This is my third working iteration. It assumes you have an email template like so:
```
{% block subject %}{% endblock %}
{% block plain %}{% endblock %}
{% block html %}{% endblock %}
```
I've refactored to iterate the email sending over a list by default and there are utility methods for sending to a single email and `django.contrib.auth` `User`s (single and multiple). I'm covering perhaps more than I'll sensibly need but there you go.
I also might have gone over the top with Python-love.
```
def email_list(to_list, template_path, context_dict):
from django.core.mail import send_mail
from django.template import loader, Context
nodes = dict((n.name, n) for n in loader.get_template(template_path).nodelist if n.__class__.__name__ == 'BlockNode')
con = Context(context_dict)
r = lambda n: nodes[n].render(con)
for address in to_list:
send_mail(r('subject'), r('plain'), 'from@domain.com', [address,])
def email(to, template_path, context_dict):
return email_list([to,], template_path, context_dict)
def email_user(user, template_path, context_dict):
return email_list([user.email,], template_path, context_dict)
def email_users(user_list, template_path, context_dict):
return email_list([user.email for user in user_list], template_path, context_dict)
```
As ever, if you can improve on that, please do.
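For reference, an example call might look like this (the template path and context values are made up):
```
email_list(["a@example.com", "b@example.com"],
           "emails/welcome_email.txt",
           {"first_name": "Ada", "site_name": "Example"})
```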
|
Just use two templates: one for the body and one for the subject.
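A minimal sketch of that approach, assuming two template files named `email_subject.txt` and `email_body.txt` (the names are made up):
```
from django.core.mail import send_mail
from django.template import loader, Context

def email(address, context):
    ctx = Context(context)
    # Render subject and body from their own templates; strip the subject so
    # stray newlines don't break the mail header.
    subject = loader.get_template('email_subject.txt').render(ctx).strip()
    body = loader.get_template('email_body.txt').render(ctx)
    send_mail(subject, body, 'from@domain.com', [address])
```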
|
2,051,526
|
As we all know (or should), you can use Django's template system to render email bodies:
```
def email(email, subject, template, context):
from django.core.mail import send_mail
from django.template import loader, Context
send_mail(subject, loader.get_template(template).render(Context(context)), 'from@domain.com', [email,])
```
This has one flaw in my mind: to edit the subject and content of an email, you have to edit both the view and the template. While I can justify giving admin users access to the templates, I'm not giving them access to the raw python!
What would be really cool is if you could specify blocks in the email and pull them out separately when you send the email:
```
{% block subject %}This is my subject{% endblock %}
{% block plaintext %}My body{% endblock%}
{% block html %}My HTML body{% endblock%}
```
But how would you do that? How would you go about rendering just one block at a time?
|
2010/01/12
|
[
"https://Stackoverflow.com/questions/2051526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12870/"
] |
This is my third working iteration. It assumes you have an email template like so:
```
{% block subject %}{% endblock %}
{% block plain %}{% endblock %}
{% block html %}{% endblock %}
```
I've refactored to iterate the email sending over a list by default and there are utility methods for sending to a single email and `django.contrib.auth` `User`s (single and multiple). I'm covering perhaps more than I'll sensibly need but there you go.
I also might have gone over the top with Python-love.
```
def email_list(to_list, template_path, context_dict):
from django.core.mail import send_mail
from django.template import loader, Context
nodes = dict((n.name, n) for n in loader.get_template(template_path).nodelist if n.__class__.__name__ == 'BlockNode')
con = Context(context_dict)
r = lambda n: nodes[n].render(con)
for address in to_list:
send_mail(r('subject'), r('plain'), 'from@domain.com', [address,])
def email(to, template_path, context_dict):
return email_list([to,], template_path, context_dict)
def email_user(user, template_path, context_dict):
return email_list([user.email,], template_path, context_dict)
def email_users(user_list, template_path, context_dict):
return email_list([user.email for user in user_list], template_path, context_dict)
```
As ever, if you can improve on that, please do.
|
I couldn't get template inheritance to work using the `{% body %}` tags, so I switched to a template like this:
```
{% extends "base.txt" %}
{% if subject %}Subject{% endif %}
{% if body %}Email body{% endif %}
{% if html %}<p>HTML body</p>{% endif %}
```
Now we have to render the template three times, but the inheritance works properly.
```
c = Context(context, autoescape = False)
subject = render_to_string(template_name, {'subject': True}, c).strip()
body = render_to_string(template_name, {'body': True}, c).strip()
c = Context(context, autoescape = True)
html = render_to_string(template_name, {'html': True}, c).strip()
```
I also found it necessary to turn off autoescape when rendering the non-HTML text to avoid escaped text in the email.
|
62,191,724
|
Trying to make use of this package: <https://github.com/microsoft/Simplify-Docx>
Can someone pls tell me the proper sequence of actions needed to install and use the package?
What I've tried (as separate commands from the VS Code terminal):
```
pip install python-docx
Git clone <git link>
python setup.py install
```
After the installation has been successfully completed I'm trying to run from VS Code terminal the file in which I've pasted the code from readme's "usage" section:
```
import docx
from simplify_docx import simplify
# read in a document
my_doc = docx.Document("docxinaprojectfolder.docx")  # I wonder how I should properly specify the path to the file?
# coerce to JSON using the standard options
my_doc_as_json = simplify(my_doc)
# or with non-standard options
my_doc_as_json = simplify(my_doc,{"remove-leading-white-space":False})
```
And I only get
```
ModuleNotFoundError: No module named 'docx'
```
But I've installed this module in the first place.
What am I doing wrong? Am I missing some of the steps? (Like init or smth).
Vscode status bar at the bottom left says that I'm using python 3.8.x, and I'm trying to run the script via "play" button.
```
python --version
Python 3.6.5
py show's though that 3.8.x is being used.
```
Thanks
|
2020/06/04
|
[
"https://Stackoverflow.com/questions/62191724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9130563/"
] |
The problem is that your system doesn't have the "docx" module.
To install the docx module:
1) open a CMD prompt.
2) type "pip install docx"
If your installation is fresh it may also need the "simplify" module.
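A quick way to confirm that the interpreter VS Code is actually using can resolve both modules (a sketch):
```
import docx
from simplify_docx import simplify
print("imports OK")
```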
|
Like any python package that doesn't come with python, you need to install it before using it. In your terminal window you can install if from the Python package index like this:
```bash
pip install simplify-docx
```
or you can install it directly from GitHub like this:
```bash
pip install git+git://github.com/microsoft/Simplify-Docx.git
```
|
62,191,724
|
Trying to make use of this package: <https://github.com/microsoft/Simplify-Docx>
Can someone pls tell me the proper sequence of actions needed to install and use the package?
What I've tried (as separate commands from the VS Code terminal):
```
pip install python-docx
Git clone <git link>
python setup.py install
```
After the installation has been successfully completed I'm trying to run from VS Code terminal the file in which I've pasted the code from readme's "usage" section:
```
import docx
from simplify_docx import simplify
# read in a document
my_doc = docx.Document("docxinaprojectfolder.docx")  # I wonder how I should properly specify the path to the file?
# coerce to JSON using the standard options
my_doc_as_json = simplify(my_doc)
# or with non-standard options
my_doc_as_json = simplify(my_doc,{"remove-leading-white-space":False})
```
And I only get
```
ModuleNotFoundError: No module named 'docx'
```
But I've installed this module in the first place.
What am I doing wrong? Am I missing some of the steps? (Like init or smth).
Vscode status bar at the bottom left says that I'm using python 3.8.x, and I'm trying to run the script via "play" button.
```
python --version
Python 3.6.5
py show's though that 3.8.x is being used.
```
Thanks
|
2020/06/04
|
[
"https://Stackoverflow.com/questions/62191724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9130563/"
] |
Amin sama was right - that was indeed an environment issue.
Looks like modules were getting installed globally into an older Python folder, different from the Python that runs when you try to run a file. So I had to uninstall the older Python.
After that
```
py --version
```
and
```
Python --version
```
Started to show the same version unlike before.
So, the sequence
1. Opened a fresh folder within VS Code
2. `git clone <git link to repository from github>`
3. copied all the files from cloned repo to my current folder (or you can go one level down with cd command)
4. installed dependency: `pip install python-docx`
5. run setup.py from where you copied files: `python setup.py install`
6. Copy "usage" into a new file, for example run.py
7. Specify an absolute path to your file with double backslash.
8. Add strings to run.py to output the result in a json:
```
import json
with open('data.txt', 'w') as f:
json.dump(my_doc_as_json, f, ensure_ascii=False)
```
9. Run this file from the terminal opened in your project folder typing `run.py` or `python run.py`
It wasn't necessary to open `>>>` python console.
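Putting steps 6-8 together, `run.py` might look like this (the .docx path is a placeholder):
```
# run.py - sketch combining the README usage example with the JSON dump
import json
import docx
from simplify_docx import simplify

my_doc = docx.Document("C:\\path\\to\\your\\document.docx")  # absolute path, double backslashes
my_doc_as_json = simplify(my_doc)

with open('data.txt', 'w') as f:
    json.dump(my_doc_as_json, f, ensure_ascii=False)
```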
|
Like any python package that doesn't come with python, you need to install it before using it. In your terminal window you can install if from the Python package index like this:
```bash
pip install simplify-docx
```
or you can install it directly from GitHub like this:
```bash
pip install git+git://github.com/microsoft/Simplify-Docx.git
```
|
19,085,887
|
I searched and tried the following but could not find any solution; please let me know if this is possible:
I am trying to develop a Python module as a wrapper where I call another 3rd-party module's .main() and provide the required parameters, which I need to get from the command line in my module. I need a few parameters for my module too.
I am using argparse to parse the command line for both the called module and my own module. The called module's parameter list is huge (more than 40 optional parameters), and any of them may be needed by whoever uses my module. Currently I have declared a few important parameters in my module to parse, but I would need to expand this to cover all the parameters.
I thought of passing all the parameters through my module without declaring them in add\_argument. I tried parse\_known\_args, which also seems to require that every parameter be declared.
Is there any way I can pass all parameters on to the called module without declaring them in my module? If it's possible please let me know how it can be done.
Thanks in advance,
|
2013/09/30
|
[
"https://Stackoverflow.com/questions/19085887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/948673/"
] |
Does this scenario fit?
Module B:
```
import argparse
parser = argparse....
def main(args):
....
if __name__ == '__main__':
args = parser.parse_args()
main(args)
```
Module A
```
import argparse
import B
parser = argparse....
# define arguments that A needs to use
if __name__ == '__main__':
    args, rest = parser.parse_known_args()
    # use args
    # rest - argument strings that A could not process
    argsB = B.parser.parse_args(rest)
    # 'rest' does not have the strings that A used;
    # but could also use
    # argsB, _ = B.parser.parse_known_args()  # using sys.argv; ignore what it does not recognize
    # or even
    # argsB, _ = B.parser.parse_known_args(rest)
    B.main(argsB)
```
Alternate A
```
import argparse
import B
parser = argparse.ArgumentParser(parents=[B.parser], add_help=False)
# B.parser probably already defines -h
# add arguments that A needs to use
if __name__ == '__main__':
args = parser.parse_args()
# use args that A needs
B.main(args)
```
In one case, each parser handles only the strings that it recognizes. In the other A.parser handles everything, using the 'parents' parameter to 'learn' what B.parser recognizes.
|
Provided that you are calling third-party modules, a possible solution is to
change **sys.argv** at runtime to reflect the correct parameters for
the module you're calling, once you're done with your own parameters.
|
19,085,887
|
I searched and tried the following but could not find any solution; please let me know if this is possible:
I am trying to develop a Python module as a wrapper where I call another 3rd-party module's .main() and provide the required parameters, which I need to get from the command line in my module. I need a few parameters for my module too.
I am using argparse to parse the command line for both the called module and my own module. The called module's parameter list is huge (more than 40 optional parameters), and any of them may be needed by whoever uses my module. Currently I have declared a few important parameters in my module to parse, but I would need to expand this to cover all the parameters.
I thought of passing all the parameters through my module without declaring them in add\_argument. I tried parse\_known\_args, which also seems to require that every parameter be declared.
Is there any way I can pass all parameters on to the called module without declaring them in my module? If it's possible please let me know how it can be done.
Thanks in advance,
|
2013/09/30
|
[
"https://Stackoverflow.com/questions/19085887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/948673/"
] |
Thanks hpaulj, mguijarr and mike,
I was able to resolve the issue with all the above inputs, as follows:
My module:
```
import sys
import argparse
parser = argparse.ArgumentParser(description='something')
parser.add_argument('--my_env', help='my environment')
if __name__=='__main__':
args,rest = parser.parse_known_args()
rest_arg = ['calling_module.py']
rest_arg.extend(rest)
sys.argv = rest_arg
import calling_module
calling_module.main()
```
|
Provided that you are calling third-party modules, a possible solution is to
change **sys.argv** at runtime to reflect the correct parameters for
the module you're calling, once you're done with your own parameters.
|
51,876,794
|
I have a text file named `file.txt` with some numbers like the following :
```
1 79 8.106E-08 2.052E-08 3.837E-08
1 80 -4.766E-09 9.003E-08 4.812E-07
1 90 4.914E-08 1.563E-07 5.193E-07
2 2 9.254E-07 5.166E-06 9.723E-06
2 3 1.366E-06 -5.184E-06 7.580E-06
2 4 2.966E-06 5.979E-07 9.702E-08
2 5 5.254E-07 0.166E-02 9.723E-06
3 23 1.366E-06 -5.184E-03 7.580E-06
3 24 3.244E-03 5.239E-04 9.002E-08
```
I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is
```
'1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ]
'2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ]
'3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ]
```
All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is :
```
'1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07']
```
Is it possible to do it elegantly in python? The code I have right now is the following :
```
diction = {}
with open("file.txt") as f:
for line in f:
pa = line.split()
diction[pa[0]] = pa[1:]
with open('file.txt') as f:
diction = {pa[0]: pa[1:] for pa in map(str.split, f)}
```
|
2018/08/16
|
[
"https://Stackoverflow.com/questions/51876794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8869818/"
] |
You can use a `defaultdict`.
```
from collections import defaultdict
data = defaultdict(list)
with open("file.txt", "r") as f:
for line in f:
line = line.split()
data[line[0]].extend(line[2:])
```
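For the sample file in the question this produces, for example (values kept as strings):
```
print(data['1'])
# ['8.106E-08', '2.052E-08', '3.837E-08', '-4.766E-09', '9.003E-08',
#  '4.812E-07', '4.914E-08', '1.563E-07', '5.193E-07']
```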
|
Try this:
```
from collections import defaultdict
diction = defaultdict(list)
with open("file.txt") as f:
for line in f:
key, _, *values = line.strip().split()
diction[key].extend(values)
print(diction)
```
This is a solution for Python 3, because the statement `a, *b = tuple1` is invalid in Python 2. Look at the solution of @cha0site if you are using Python 2.
|
51,876,794
|
I have a text file named `file.txt` with some numbers like the following :
```
1 79 8.106E-08 2.052E-08 3.837E-08
1 80 -4.766E-09 9.003E-08 4.812E-07
1 90 4.914E-08 1.563E-07 5.193E-07
2 2 9.254E-07 5.166E-06 9.723E-06
2 3 1.366E-06 -5.184E-06 7.580E-06
2 4 2.966E-06 5.979E-07 9.702E-08
2 5 5.254E-07 0.166E-02 9.723E-06
3 23 1.366E-06 -5.184E-03 7.580E-06
3 24 3.244E-03 5.239E-04 9.002E-08
```
I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is
```
'1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ]
'2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ]
'3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ]
```
All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is :
```
'1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07']
```
Is it possible to do it elegantly in python? The code I have right now is the following :
```
diction = {}
with open("file.txt") as f:
for line in f:
pa = line.split()
diction[pa[0]] = pa[1:]
with open('file.txt') as f:
diction = {pa[0]: pa[1:] for pa in map(str.split, f)}
```
|
2018/08/16
|
[
"https://Stackoverflow.com/questions/51876794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8869818/"
] |
You can use a `defaultdict`.
```
from collections import defaultdict
data = defaultdict(list)
with open("file.txt", "r") as f:
for line in f:
line = line.split()
data[line[0]].extend(line[2:])
```
|
Make the value of each key in `diction` be a list and extend that list with each iteration. With your code as it is written now when you say `diction[pa[0]] = pa[1:]` you're overwriting the value in `diction[pa[0]]` each time the key appears, which describes the behavior you're seeing.
```
with open("file.txt") as f:
for line in f:
pa = line.split()
try:
diction[pa[0]].extend(pa[1:])
except KeyError:
diction[pa[0]] = pa[1:]
```
In this code each value of `diction` will be a list. In each iteration if the key exists that list will be extended with new values from `pa` giving you a list of all the values for each key.
|
51,876,794
|
I have a text file named `file.txt` with some numbers like the following :
```
1 79 8.106E-08 2.052E-08 3.837E-08
1 80 -4.766E-09 9.003E-08 4.812E-07
1 90 4.914E-08 1.563E-07 5.193E-07
2 2 9.254E-07 5.166E-06 9.723E-06
2 3 1.366E-06 -5.184E-06 7.580E-06
2 4 2.966E-06 5.979E-07 9.702E-08
2 5 5.254E-07 0.166E-02 9.723E-06
3 23 1.366E-06 -5.184E-03 7.580E-06
3 24 3.244E-03 5.239E-04 9.002E-08
```
I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is
```
'1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ]
'2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ]
'3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ]
```
All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is :
```
'1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07']
```
Is it possible to do it elegantly in python? The code I have right now is the following :
```
diction = {}
with open("file.txt") as f:
for line in f:
pa = line.split()
diction[pa[0]] = pa[1:]
with open('file.txt') as f:
diction = {pa[0]: pa[1:] for pa in map(str.split, f)}
```
|
2018/08/16
|
[
"https://Stackoverflow.com/questions/51876794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8869818/"
] |
You can use a `defaultdict`.
```
from collections import defaultdict
data = defaultdict(list)
with open("file.txt", "r") as f:
for line in f:
line = line.split()
data[line[0]].extend(line[2:])
```
|
To do this in a very simple for loop:
```
with open('file.txt') as f:
return_dict = {}
for item_list in map(str.split, f):
if item_list[0] not in return_dict:
return_dict[item_list[0]] = []
return_dict[item_list[0]].extend(item_list[1:])
return return_dict
```
Or, if you wanted to use defaultdict in a one liner-ish:
```
from collections import defaultdict
with open('file.txt') as f:
return_dict = defaultdict(list)
[return_dict[item_list[0]].extend(item_list[1:]) for item_list in map(str.split, f)]
return return_dict
```
|
51,876,794
|
I have a text file named `file.txt` with some numbers like the following :
```
1 79 8.106E-08 2.052E-08 3.837E-08
1 80 -4.766E-09 9.003E-08 4.812E-07
1 90 4.914E-08 1.563E-07 5.193E-07
2 2 9.254E-07 5.166E-06 9.723E-06
2 3 1.366E-06 -5.184E-06 7.580E-06
2 4 2.966E-06 5.979E-07 9.702E-08
2 5 5.254E-07 0.166E-02 9.723E-06
3 23 1.366E-06 -5.184E-03 7.580E-06
3 24 3.244E-03 5.239E-04 9.002E-08
```
I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is
```
'1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ]
'2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ]
'3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ]
```
All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is :
```
'1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07']
```
Is it possible to do it elegantly in python? The code I have right now is the following :
```
diction = {}
with open("file.txt") as f:
for line in f:
pa = line.split()
diction[pa[0]] = pa[1:]
with open('file.txt') as f:
diction = {pa[0]: pa[1:] for pa in map(str.split, f)}
```
|
2018/08/16
|
[
"https://Stackoverflow.com/questions/51876794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8869818/"
] |
Try this:
```
from collections import defaultdict
diction = defaultdict(list)
with open("file.txt") as f:
for line in f:
key, _, *values = line.strip().split()
diction[key].extend(values)
print(diction)
```
This is a solution for Python 3, because the statement `a, *b = tuple1` is invalid in Python 2. Look at the solution of @cha0site if you are using Python 2.
|
Make the value of each key in `diction` be a list and extend that list with each iteration. With your code as it is written now when you say `diction[pa[0]] = pa[1:]` you're overwriting the value in `diction[pa[0]]` each time the key appears, which describes the behavior you're seeing.
```
with open("file.txt") as f:
for line in f:
pa = line.split()
try:
diction[pa[0]].extend(pa[1:])
except KeyError:
diction[pa[0]] = pa[1:]
```
In this code each value of `diction` will be a list. In each iteration if the key exists that list will be extended with new values from `pa` giving you a list of all the values for each key.
|
51,876,794
|
I have a text file named `file.txt` with some numbers like the following :
```
1 79 8.106E-08 2.052E-08 3.837E-08
1 80 -4.766E-09 9.003E-08 4.812E-07
1 90 4.914E-08 1.563E-07 5.193E-07
2 2 9.254E-07 5.166E-06 9.723E-06
2 3 1.366E-06 -5.184E-06 7.580E-06
2 4 2.966E-06 5.979E-07 9.702E-08
2 5 5.254E-07 0.166E-02 9.723E-06
3 23 1.366E-06 -5.184E-03 7.580E-06
3 24 3.244E-03 5.239E-04 9.002E-08
```
I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is
```
'1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ]
'2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ]
'3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ]
```
All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is :
```
'1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07']
```
Is it possible to do it elegantly in python? The code I have right now is the following :
```
diction = {}
with open("file.txt") as f:
for line in f:
pa = line.split()
diction[pa[0]] = pa[1:]
with open('file.txt') as f:
diction = {pa[0]: pa[1:] for pa in map(str.split, f)}
```
|
2018/08/16
|
[
"https://Stackoverflow.com/questions/51876794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8869818/"
] |
Try this:
```
from collections import defaultdict
diction = defaultdict(list)
with open("file.txt") as f:
for line in f:
key, _, *values = line.strip().split()
diction[key].extend(values)
print(diction)
```
This is a solution for Python 3, because the statement `a, *b = tuple1` is invalid in Python 2. Look at the solution of @cha0site if you are using Python 2.
|
To do this in a very simple for loop:
```
with open('file.txt') as f:
    return_dict = {}
    for item_list in map(str.split, f):
        if item_list[0] not in return_dict:
            return_dict[item_list[0]] = []
        return_dict[item_list[0]].extend(item_list[1:])
# return_dict now holds the grouped values (a bare `return` is only valid inside a function)
```
Or, if you wanted to use defaultdict in a one liner-ish:
```
from collections import defaultdict
with open('file.txt') as f:
    return_dict = defaultdict(list)
    [return_dict[item_list[0]].extend(item_list[1:]) for item_list in map(str.split, f)]
# return_dict now holds the grouped values (a bare `return` is only valid inside a function)
```
|
59,745,214
|
I have 2 files to copy from a folder to another folder and these are my codes:
```
import shutil
src = '/Users/cadellteng/Desktop/Program Booklet/'
dst = '/Users/cadellteng/Desktop/Python/'
file = ['AI+Product+Manager+Nanodegree+Program+Syllabus.pdf','Artificial+Intelligence+with+Python+Nanodegree+Syllabus+9-5.pdf']
for i in file:
shutil.copyfile(src+file[i], dst+file[i])
```
When I tried to run the code I got the following error message:
```
/Users/cadellteng/venv/bin/python /Users/cadellteng/PycharmProjects/someProject/movingFiles.py
Traceback (most recent call last):
File "/Users/cadellteng/PycharmProjects/someProject/movingFiles.py", line 8, in <module>
shutil.copyfile(src+file[i], dst+file[i])
TypeError: list indices must be integers or slices, not str
Process finished with exit code 1
```
I tried to find a solution on Stack Overflow, and one thread suggested doing this:
```
for i in range(file):
shutil.copyfile(src+file[i], dst+file[i])
```
and then I got the following error message:
```
/Users/cadellteng/venv/bin/python /Users/cadellteng/PycharmProjects/someProject/movingFiles.py
Traceback (most recent call last):
File "/Users/cadellteng/PycharmProjects/someProject/movingFiles.py", line 7, in <module>
for i in range(file):
TypeError: 'list' object cannot be interpreted as an integer
Process finished with exit code 1
```
So now I am thoroughly confused. If "i" can't be a string and it can't be an integer, what should it be?
I am using PyCharm CE and very new to Python.
|
2020/01/15
|
[
"https://Stackoverflow.com/questions/59745214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3910616/"
] |
Just use the code below: `i` is already a filename from the list, not an index, so it doesn't need the extra `file[...]` indexing:
```
for i in file:
shutil.copyfile(src + i, dst + i)
```
If you want to use `range`, use it this way with `len`:
```
for i in range(len(file)):
shutil.copyfile(src+file[i], dst+file[i])
```
But of course the first solution is preferred.
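As a side note, joining paths with `os.path.join` is a bit more robust than string concatenation because it takes care of the separator — a minimal sketch, assuming the same folders and filenames as in the question:
```
import os
import shutil

src = '/Users/cadellteng/Desktop/Program Booklet'
dst = '/Users/cadellteng/Desktop/Python'
files = ['AI+Product+Manager+Nanodegree+Program+Syllabus.pdf',
         'Artificial+Intelligence+with+Python+Nanodegree+Syllabus+9-5.pdf']

for name in files:
    # os.path.join adds the '/' even if src or dst lack a trailing slash
    shutil.copyfile(os.path.join(src, name), os.path.join(dst, name))
```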
|
Try the code below, and read [for Statement in python](https://docs.python.org/3/tutorial/controlflow.html#for-statements)
```
import shutil
src = '/Users/cadellteng/Desktop/Program Booklet/'
dst = '/Users/cadellteng/Desktop/Python/'
file = ['AI+Product+Manager+Nanodegree+Program+Syllabus.pdf','Artificial+Intelligence+with+Python+Nanodegree+Syllabus+9-5.pdf']
for i in file:
shutil.copyfile(src + i, dst + i)
```
|
64,578,491
|
The other version of this question was never answered; the original poster didn't give a full example of their code...
I have a function that's meant to import a spreadsheet for formatting purposes. Now, the spreadsheet can come in two forms:
1. As a filename string (excel, .csv, etc) to be imported as a DataFrame
2. Directly as a DataFrame (there's another function that may or may not be called to do some preprocessing)
the code looks like
```
def func1(spreadsheet):
if type(spreadsheet) == pd.DataFrame:
df = spreadsheet
else:
df_ext = os.path.splitext(spreadsheet)[1]
etc. etc.
```
If I run this function with a DataFrame, I get the following error:
```
---> 67 if type(spreadsheet) == pd.DataFrame: df = spreadsheet
68 else:
/opt/anaconda3/lib/python3.7/posixpath.py in splitext(p)
120
121 def splitext(p):
--> 122 p = os.fspath(p)
123 if isinstance(p, bytes):
124 sep = b'/'
TypeError: expected str, bytes or os.PathLike object, not DataFrame
```
Why is it doing this?
|
2020/10/28
|
[
"https://Stackoverflow.com/questions/64578491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2954167/"
] |
So, one way is to just compare with a string and read the dataframe in the else branch.
The other way would be to use `isinstance`:
```py
In [21]: dict1
Out[21]: {'a': [1, 2, 3, 4], 'b': [2, 4, 6, 7], 'c': [2, 3, 4, 5]}
In [24]: df = pd.DataFrame(dict1)
In [28]: isinstance(df, pd.DataFrame)
Out[28]: True
In [30]: isinstance(os.getcwd(), pd.DataFrame)
Out[30]: False
```
So, in your case just do this
```
if isinstance(spreadsheet, pd.DataFrame):
```
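Put together, the check inside the question's function might look like this — a minimal sketch, since the original `func1` body is elided; the file-reading branch below is an assumption:
```
import os
import pandas as pd

def func1(spreadsheet):
    # accept either a ready-made DataFrame or a path to a spreadsheet file
    if isinstance(spreadsheet, pd.DataFrame):
        df = spreadsheet
    else:
        df_ext = os.path.splitext(spreadsheet)[1]  # e.g. '.csv' or '.xlsx'
        df = pd.read_csv(spreadsheet) if df_ext == '.csv' else pd.read_excel(spreadsheet)
    return df
```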
|
This line is the problem:
```
if type(spreadsheet) == pd.DataFrame:
```
The type of a dataframe is `pandas.core.frame.DataFrame`. [pandas.DataFrame is a class](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) which returns a dataframe when you call it.
Either of these would work:
```
if type(spreadsheet) == type(pd.DataFrame()):
if type(spreadsheet) == pd.core.frame.DataFrame:
```
|
16,710,374
|
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
2013/05/23
|
[
"https://Stackoverflow.com/questions/16710374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2155605/"
] |
There are indeed several other alternatives to BFS and DFS.
One that is well suited to computing shortest paths is [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm).
Dijkstra's algorithm is basically an adaptation of BFS, and it's much more efficient than searching the entire graph if your graph is weighted.
As @ThomasH said, Dijkstra is only relevant if you have a weighted graph; if the weight of every edge is the same, it basically defaults back to BFS.
If the choice is between BFS and DFS, then BFS is better suited to finding shortest paths, because you explore the immediate vicinity of a node completely before moving on to nodes that are at a greater distance.
This means that if there's a path of size 3, it'll be explored before the algorithm moves on to exploring nodes at distance 4, for instance.
With DFS, you don't have such a guarantee, since you explore nodes in depth, you can find a longer path that just happened to be explored earlier, and you'll need to explore the entire graph to make sure that that is the shortest path.
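To make the BFS argument concrete, here is a minimal sketch of shortest-path BFS on an unweighted graph stored as an adjacency dict (the example graph is hypothetical):
```
from collections import deque

def bfs_shortest_path(graph, start, goal):
    # graph: dict mapping node -> iterable of neighbour nodes
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()           # paths come out in order of increasing length
        node = path[-1]
        if node == goal:
            return path                  # the first path found is a shortest one
        for neighbour in graph.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None                          # goal not reachable

# bfs_shortest_path({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}, 'a', 'd') -> ['a', 'b', 'd']
```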
As to why you're getting downvotes, most SO questions should show a little effort has been put into finding a solution, for instance, there are several related questions on the pros and cons of DFS versus BFS.
Next time try to make sure that you've searched a bit, and then ask questions about any specific doubts that you have.
|
Take a look at the following two algorithms:
1. [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) - Single source shortest path
2. [Floyd-Warshall algorithm](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) - All pairs shortest path
|
16,710,374
|
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
2013/05/23
|
[
"https://Stackoverflow.com/questions/16710374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2155605/"
] |
There are indeed several other alternatives to BFS and DFS.
One that is well suited to computing shortest paths is [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm).
Dijkstra's algorithm is basically an adaptation of BFS, and it's much more efficient than searching the entire graph if your graph is weighted.
As @ThomasH said, Dijkstra is only relevant if you have a weighted graph; if the weight of every edge is the same, it basically defaults back to BFS.
If the choice is between BFS and DFS, then BFS is better suited to finding shortest paths, because you explore the immediate vicinity of a node completely before moving on to nodes that are at a greater distance.
This means that if there's a path of size 3, it'll be explored before the algorithm moves on to exploring nodes at distance 4, for instance.
With DFS, you don't have such a guarantee, since you explore nodes in depth, you can find a longer path that just happened to be explored earlier, and you'll need to explore the entire graph to make sure that that is the shortest path.
As to why you're getting downvotes, most SO questions should show a little effort has been put into finding a solution, for instance, there are several related questions on the pros and cons of DFS versus BFS.
Next time try to make sure that you've searched a bit, and then ask questions about any specific doubts that you have.
|
If there are no weights on the edges of the graph, a simple breadth-first search will do: visit the nodes of the graph level by level and check whether any newly reached node equals the destination node. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are the things you should be looking at, depending on the time and space complexity you can afford.
|
16,710,374
|
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
2013/05/23
|
[
"https://Stackoverflow.com/questions/16710374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2155605/"
] |
There are indeed several other alternatives to BFS and DFS.
One that is well suited to computing shortest paths is [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm).
Dijkstra's algorithm is basically an adaptation of BFS, and it's much more efficient than searching the entire graph if your graph is weighted.
As @ThomasH said, Dijkstra is only relevant if you have a weighted graph; if the weight of every edge is the same, it basically defaults back to BFS.
If the choice is between BFS and DFS, then BFS is better suited to finding shortest paths, because you explore the immediate vicinity of a node completely before moving on to nodes that are at a greater distance.
This means that if there's a path of size 3, it'll be explored before the algorithm moves on to exploring nodes at distance 4, for instance.
With DFS, you don't have such a guarantee, since you explore nodes in depth, you can find a longer path that just happened to be explored earlier, and you'll need to explore the entire graph to make sure that that is the shortest path.
As to why you're getting downvotes, most SO questions should show a little effort has been put into finding a solution, for instance, there are several related questions on the pros and cons of DFS versus BFS.
Next time try to make sure that you've searched a bit, and then ask questions about any specific doubts that you have.
|
When you want to find the shortest path you should use BFS and not DFS, because BFS explores the closest nodes first, so when you reach your goal you know for sure that you used a shortest path and you can stop searching. DFS, by contrast, explores one branch at a time, so when you reach your goal you can't be sure that there isn't another, shorter path via another branch.
So you should use BFS.
If your graph has different weights on its edges, then you should use Dijkstra's algorithm, which is an adaptation of BFS for weighted graphs, but don't use it if you don't have weights.
Some people may recommend you to use Floyd-Warshall algorithm but it is a very bad idea for a graph this large.
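For the weighted case, a minimal Dijkstra sketch using the standard-library `heapq`, assuming the graph is a dict mapping each node to a list of `(neighbour, weight)` pairs with non-negative weights:
```
import heapq

def dijkstra(graph, start, goal):
    # graph: dict mapping node -> list of (neighbour, weight) pairs
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d                                  # shortest distance start -> goal
        if d > dist.get(node, float('inf')):
            continue                                  # stale heap entry, skip it
        for neighbour, weight in graph.get(node, ()):
            nd = d + weight
            if nd < dist.get(neighbour, float('inf')):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return None                                       # goal not reachable
```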
|
16,710,374
|
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
2013/05/23
|
[
"https://Stackoverflow.com/questions/16710374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2155605/"
] |
Take a look at the following two algorithms:
1. [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) - Single source shortest path
2. [Floyd-Warshall algorithm](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) - All pairs shortest path
|
If there are no weights on the edges of the graph, a simple breadth-first search will do: visit the nodes of the graph level by level and check whether any newly reached node equals the destination node. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are the things you should be looking at, depending on the time and space complexity you can afford.
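A minimal Bellman-Ford sketch, which also copes with negative edge weights, assuming the graph is given as a list of `(u, v, weight)` edges plus the set of nodes:
```
def bellman_ford(edges, nodes, source):
    # edges: list of (u, v, weight) tuples; nodes: iterable of all node names
    dist = {n: float('inf') for n in nodes}
    dist[source] = 0
    for _ in range(len(dist) - 1):        # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                 # one extra pass to detect negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist
```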
|
16,710,374
|
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
2013/05/23
|
[
"https://Stackoverflow.com/questions/16710374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2155605/"
] |
Take a look at the following two algorithms:
1. [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) - Single source shortest path
2. [Floyd-Warshall algorithm](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) - All pairs shortest path
|
When you want to find the shortest path you should use BFS and not DFS, because BFS explores the closest nodes first, so when you reach your goal you know for sure that you used a shortest path and you can stop searching. DFS, by contrast, explores one branch at a time, so when you reach your goal you can't be sure that there isn't another, shorter path via another branch.
So you should use BFS.
If your graph has different weights on its edges, then you should use Dijkstra's algorithm, which is an adaptation of BFS for weighted graphs, but don't use it if you don't have weights.
Some people may recommend you to use Floyd-Warshall algorithm but it is a very bad idea for a graph this large.
|
44,408,625
|
I am writing a python wrapper for calling programs of the AMOS package (specifically for merging genome assemblies from different sources using good ol' minimus2 from AMOS).
The scripts should be called like this when using the shell directly:
```
toAmos -s myinput.fasta -o testoutput.afg
minimus2 testoutput -D REFCOUNT=400 -D OVERLAP=500
```
>
> [*just for clarification:*
>
>
> *-toAmos: converts my input.fasta file to .afg format and requires an input sequence argument ("-s") and an output argument ("-o")*
>
>
> *-minimus2: merges a sequence dataset against reference contigs and requires an argument "-D REFCOUNT=x" for stating the number of reference sequences in your input and an argument "-D OVERLAP=Y" for stating the minimum overlap between sequences*]
>
>
>
So within my script I use subprocess.call() to call the necessary AMOS tools.
Basically I do this:
```
from subprocess import call
output_basename = "testoutput"
inputfile = "myinput.fasta"
call(["toAmos", "-s " + inputfile, "-o " + output_basename + ".afg"])
call(["minimus2", output_basename, "-D REFCOUNT=400", "-D OVERLAP=500"])
```
But in this case the AMOS tools cannot interpret the arguments anymore. The arguments seem to get modified by subprocess.call() and passed incorrectly. The error message I get is:
```
Unknown option: s myinput.fasta
Unknown option: o testoutput.afg
You must specify an output AMOS AFG file with option -o
/home/jov14/tools/miniconda2/bin/runAmos: unrecognized option '-D REFCOUNT=400'
Command line parsing failed, use -h option for usage info
```
It seems that the arguments get passed without the leading "-"? So I then tried passing the command as a single string (including arguments) like this:
```
call(["toAmos -s " + inputfile +" -o " + output_basename + ".afg"])
```
But then I get this error...
```
OSError: [Errno 2] No such file or directory
```
... presumably because subprocess.call is interpreting the whole string as the name for a single script.
I guess I COULD probably try `shell=True` as a workaround, but the internet is FULL of instructions clearly advising against this.
What seems to be the problem here?
What can I do?
|
2017/06/07
|
[
"https://Stackoverflow.com/questions/44408625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4685799/"
] |
### answer
Either do:
```
call("toAmos -s " + inputfile +" -o " + output_basename + ".afg") # single string
```
or do:
```
call(["toAmos", "-s", inputfile, "-o", output_basename + ".afg"]) # list of arguments
```
### discussion
In the case of your:
```
call(["toAmos", "-s " + inputfile, "-o " + output_basename + ".afg"])
```
you should supply:
* either `"-s" + inputfile` (no space after `-s`), `"-o" + output_basename + ".afg"`
* or `"-s", inputfile` (separate arguments), `"-o", output_basename + ".afg"`
In the case of your:
```
call(["minimus2", output_basename, "-D REFCOUNT=400", "-D OVERLAP=500"])
```
the `"-D REFCOUNT=400"` and `"-D OVERLAP=500"` should be provided as two items each (`'-D', 'REFCOUNT=400', '-D', 'OVERLAP=500'`), or drop the spaces (`'-DREFCOUNT=400', '-DOVERLAP=500'`).
### additional info
You seem to lack knowledge of how a shell splits a command line. I suggest you always use the single-string method with `shell=True`, unless there are spaces in filenames or you want to avoid the shell (`shell=False`, the default); in that case, always supply a list of arguments.
|
`-s` and following input file name should be separate arguments to `call`, as they are in the command line:
```
call(["toAmos", "-s", inputfile, "-o", output_basename + ".afg"])
```
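If it feels more natural to keep the command as one string, `shlex.split` from the standard library can produce that argument list — a minimal sketch, reusing the example names from the question:
```
import shlex
from subprocess import call

inputfile = "myinput.fasta"
output_basename = "testoutput"

cmd = "toAmos -s {} -o {}.afg".format(inputfile, output_basename)
# shlex.split(cmd) -> ['toAmos', '-s', 'myinput.fasta', '-o', 'testoutput.afg']
call(shlex.split(cmd))
```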
|
46,460,218
|
I'm new to Python and to the world of programming, so straight to the point: when I run this code and enter an input, say "chicken", it replies with the two-leg-animal description. But I can't get a reply for two-word entries that have a space in between, like "space monkey" (although it appears in my dictionary). So how do I solve this?
my dictionary: example.py
```
dictionary2 = {
"chicken":"chicken two leg animal",
"fish":"fish is animal that live under water",
"cow":"cow is big vegetarian animal",
"space monkey":"monkey live in space",
```
my code: test.py
```
from example import *
print "how can i help you?"
print
user_input = raw_input()
print
print "You asked: " + user_input + "."
response = "I will get back to you. "
input_ls = user_input.split(" ")
processor = {
"dictionary2":False,
"dictionary_lookup":[]
}
for w in input_ls:
if w in dictionary2:
processor["dictionary2"] = True
processor["dictionary_lookup"].append(w)
if processor["dictionary2"] is True:
dictionary_lookup = processor["dictionary_lookup"][0]
translation = dictionary2[dictionary_lookup]
response = "what you were looking for is: " + translation
print
print "Response: " + response
```
|
2017/09/28
|
[
"https://Stackoverflow.com/questions/46460218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8681243/"
] |
Try this in your code .
```
@Override
protected String doInBackground(Void... params) {
RequestHandler rh = new RequestHandler();
String s = rh.sendGetRequest(konfigurasi.URL_GET_ALL);
return s;
}
@Override
protected void onPostExecute(String s) {
// edited here
try {
JSONObject jsonObject = new JSONObject(s);
JSONArray jsonArray = jsonObject.getJSONArray("result");
if(jsonArray.length() == 0){
Toast.makeText(getApplicationContext(), "No Data", Toast.LENGTH_LONG).show();
if (loading.isShowing()){
loading.dismiss();
}
return;
}
} catch (JSONException e) {
e.printStackTrace();
}
Log.e("TAG",s);
// Dismiss the progress dialog
if (loading.isShowing())
loading.dismiss();
JSON_STRING = s;
showEmployee();
}
```
1. Determine whether the return value is empty in the `doInBackground` method
2. Determine whether the param value is empty in the `onPostExecute` method
|
You need to debug why your toast is not showing:
You have correctly put the show-Toast code in onPostExecute.
Now, to debug, first add a Log to see the value of s, and whether it is ever null or empty.
If it is, and the toast still does not show, move the dialog dismiss before the Toast and check again.
If the Toast still does not show, debug your showEmployee() method to see what it does.
|
50,388,396
|
I try to compile this code but I get this error:
```
NameError: name 'dtype' is not defined
```
Here is the python code :
```
# -*- coding: utf-8 -*-
from __future__ import division
import pandas as pd
import numpy as np
import re
import missingno as msno
from functools import partial
import seaborn as sns
sns.set(color_codes=True)
if dtype(data.OPP_CREATION_DATE)=="datetime64[ns]":
print("OPP_CREATION_DATE is of datetime type")
else:
print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```
Any ideas to help me resolve this problem?
Thank you
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50388396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9360453/"
] |
As written by Amr Keleg,
>
> If `data` is a pandas dataframe then you can check the type of a
> column as follows:
> `df['colname'].dtype` or `df.colname.dtype`
>
>
>
In that case you need e.g.
```
df['colname'].dtype == np.dtype('datetime64')
```
or
```
df.colname.dtype == np.dtype('datetime64')
```
|
You should use `type` instead of `dtype`.
`type` is a built-in function of python -
<https://docs.python.org/3/library/functions.html#type>
On the other hand, If `data` is a pandas dataframe then you can check the type of a column as follows:
`df['colname'].dtype` or `df.colname.dtype`
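As an alternative, pandas ships dtype predicates in `pandas.api.types`, which avoids spelling out the dtype string — a minimal sketch against the column name from the question, assuming `data` is the DataFrame being checked:
```
from pandas.api.types import is_datetime64_any_dtype

if is_datetime64_any_dtype(data['OPP_CREATION_DATE']):
    print("OPP_CREATION_DATE is of datetime type")
else:
    print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```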
|
50,388,396
|
I try to compile this code but I get this error:
```
NameError: name 'dtype' is not defined
```
Here is the python code :
```
# -*- coding: utf-8 -*-
from __future__ import division
import pandas as pd
import numpy as np
import re
import missingno as msno
from functools import partial
import seaborn as sns
sns.set(color_codes=True)
if dtype(data.OPP_CREATION_DATE)=="datetime64[ns]":
print("OPP_CREATION_DATE is of datetime type")
else:
print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```
Any ideas to help me resolve this problem?
Thank you
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50388396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9360453/"
] |
You should use `type` instead of `dtype`.
`type` is a built-in function of python -
<https://docs.python.org/3/library/functions.html#type>
On the other hand, If `data` is a pandas dataframe then you can check the type of a column as follows:
`df['colname'].dtype` or `df.colname.dtype`
|
I have just realized that I could have used:
```
from pandas.api.types import is_string_dtype, is_numeric_dtype
```
|