| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars) |
|---|---|---|---|---|---|
68,660,419
|
I don't have a lot of experience with Selenium, but I am trying to run code that searches for an element in the HTML with ChromeDriver. I keep getting the error below. The first thing I would like to confirm is that this error is not due to the connection between ChromeDriver and the web page, but rather to the way the Python script searches the HTML. Any help would be appreciated.
The error:
```
('no such element: Unable to locate element: {"method":"xpath","selector":"//*[contains(text(),\'Find exited companies announced\')]/../.."}\n (Session info: headless chrome=91.0.4472.101)', None, None)
```
The page source:
```
<div id="logon-brownContent" style="width:100%;display:true;;padding-bottom: 0px;" class="hideforprinting">
<table width="" cellpadding="0" cellspacing="0" class="">
</table>
</div>
</div>
</td></tr></table>
</div>
</td>
<td class="homepage_mainbody-headlines">
<table class="framework_standard">
<tr>
<td colspan="2" valign="top">
<form action="exitbroker.asp?" method="post" name="oz" id="oz" sumbit="javascript:return validate();">
<input type="hidden" name="verb" value="8" />
<input type="hidden" name="dateformat" value="dd/mm/yyyy" />
<input type="hidden" name="contextid" value="1032390856" />
<input type="hidden" name="statecodelength" value="0" />
<table cellspacing="0" cellpadding="0" border="0">
<tr>
<td>
<table>
<tr>
<td class="framework_page-title">
<span class="framework_page-title">PE Exit Companies: Search</span><br/>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td height="1"><img src="/images/spacer.gif" height="13" width="1"></td>
</tr>
</table>
<table class="criteriaborder" cellspacing="0" cellpadding="2" width="100%" border="0">
<tbody>
<tr>
<td>
<table cellspacing="0" cellpadding="0" border="0" style="width:100%;">
<tbody>
<tr>
<td valign="top">
<table cellspacing="0" cellpadding="0" width="100%" border="0">
<tr>
<td align="center" valign="middle" width="100%" height="18" class="criteriaheader2">Exits</td>
</tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br />Exit Types</td>
</tr>
<tr>
<td>
<table border="0" cellpadding="0" cellspacing="0">
<tr valign="top"><td width="200"><input type="checkbox" name="exitdealtype" value="ipo"/>Initial Public Offering</td><td width="200"><input type="checkbox" name="exitdealtype" value="sbo"/>Secondary Buyout</td><td width="200"><input type="checkbox" name="exitdealtype" value="tradesale"/>Trade Sale</td></tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br />Date Range</td>
</tr>
<tr>
<td>
Find exited companies announced<br><br>
</td>
</tr>
<tr>
<td>
<table cellpadding="2" cellspacing="0" border="0">
<tr>
<td>From </td>
<td><input type="text" name="datefrom" style="width:100" value=""></td>
<td> To </td>
<td><input type="text" name="dateto" style="width:100" value=""></td>
<td> <a href="javascript:removeMe(document.oz.datefrom);removeMe(document.oz.dateto);">Clear Date</a></td>
</tr>
<tr>
<td> </td>
<td><span class="hint">(dd/mm/yyyy)</span></td>
<td> </td>
<td><span class="hint">(dd/mm/yyyy)</span></td>
<td> </td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<br />
Please Note: The default start date for our searches has been changed to 01/01/2005. You can still access all
<br />
of our historical data by inserting the desired start date above. For help or further information please contact
<br />
your Customer Relationship Consultant.
<br />
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br />Industry</td>
</tr>
<tr>
<td>
Find exited companies in these sectors.
<br />The industries defined here are affiliated with both the core business and divisions of the portfolio/exited companies.
<br />Multiple select using ctrl and click. The default is set to all.<br><br>
</td>
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td><span class="criterialabel">Sectors<a href="javascript:displaySectorGlossary('../includes/glossary');"><img src="/includes/images/mm-info-icon.gif"></a></span></td>
<td><span class="criterialabel">Sub-Sectors</span></td>
</tr>
<tr>
<td><select multiple="multiple" size="6" name="sectorcode" style="width:250px" onChange="javascript:emptyListBox(document.oz.subsectorcode);fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));"></select> </td>
<td><select multiple="multiple" size="6" name="subsectorcode" style="width:250px"></select> </td>
</tr>
<tr>
<td><a name="selectAllSubsectorLink" href="javascript:fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));selectAll(document.oz.sectorcode);fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));">Select All Sectors</a> </td>
<td><a href="javascript:if(!document.oz.domsectoronly.checked){selectAll(document.oz.subsectorcode)};">Select All Sub-Sectors</a> </td>
</tr>
<tr>
<td><a href="javascript:emptyListBox(document.oz.subsectorcode);deselectAll(document.oz.sectorcode);">Clear All</a><br><br></td>
</tr>
<tr>
<td colspan="4">
<input type="hidden" name="normalsectorsearch" value="" />
<input type="hidden" name="normalsubsectorsearch" value="" />
<input type="checkbox" name="domsectoronly" value="true" onclick="javascript:deselectAll(document.oz.subsectorcode);setItemDisableStatus(document.oz.subsectorcode);setItemDisableStatus(document.oz.selectAllSubsectorLink);">Search by dominant sector only<a href="javascript:displayPEPortfolioDominantSectorCountryGlossary('../includes/glossary');"><img src="/includes/images/mm-info-icon.gif" title="More information" />
</td>
</tr>
</table>
</td>
<!--
<td><select size="6" multiple="multiple" name="sectorcode" style="width:250px" ></select> </td>
</tr>
<tr>
<td>
<a href="javascript:selectAll(document.oz.sectorcode);">Select All</a>
<a href="javascript:deselectAll(document.oz.sectorcode);">Clear All</a>
</td>
</tr>
-->
</tr>
<tr>
<td style="TEXT-ALIGN: right;" class="search_buttons_right">
<input type="button" value="Save Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=1;document.oz.target='_self';document.oz.submit();};"/>
<!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=28;defaultDatesWithLocale( document.oz.datefrom, document.oz.dateto, 'dd/mm/yyyy' );if (verifyDateSubSectors(document.oz.datefrom.value)) {countWindow();document.oz.target='_self';document.oz.submit();}}"><img src="/images/button_countresults.gif" border="0" /></a -->
<input type="button" value="Count Results" class="framework_flatbutton" onclick="javascript:submitCount();" />
<!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=8;defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );document.oz.target='_self';if (verifyDateSubSectors(document.oz.datefrom.value)) {document.oz.target='_self';document.oz.submit();}};"><img src="/images/button_search.gif" border="0" /></a -->
<input type="button" value="Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {
document.oz.verb.value=8
;document.oz.target='_self'
defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );
; document.oz.target='_self';
document.oz.submit();
}" />
</td>
</tr>
</tbody>
</table>
</tr>
</td>
</tbody>
</table>
<table>
<tr>
<td>
<br>
</td>
</tr>
</table>
<table class="criteriaborder" cellspacing="0" cellpadding="2" width="100%" border="0">
<tbody>
<tr>
<td>
<table cellspacing="0" cellpadding="0" border="0" style="width:100%;">
<tbody>
<tr>
<td valign="top">
<table cellspacing="0" cellpadding="0" width="100%" border="0">
<tr>
<td align="center" valign="middle" width="100%" height="18" class="criteriaheader2">Further Search Criteria</td>
</tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br/>Geography</td>
</tr>
<tr>
<td>Find exited companies in these locations.
<br />Multiple select using ctrl and click. The default is set to all. </td>
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td> </td>
<td> </td>
<td> </td>
<td><img src="/images/spacer.gif" width="10" height="1" alt="" /></td><td> </td>
</tr>
<tr>
<td><select multiple="multiple" size="6" name="areacode" style="width:200px" onChange="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);fillSelect(document.oz.regioncode,null,buildSelectedItems(document.oz.areacode));emptyListBox(document.oz.statecode);"></select></td>
<td><select multiple="multiple" size="6" name="regioncode" style="width:200px" onChange="javascript:emptyListBox(document.oz.countrycode);fillSelect(document.oz.countrycode,null,buildSelectedItems(document.oz.regioncode));emptyListBox(document.oz.statecode);"></select></td>
<td><select multiple="multiple" size="6" name="countrycode" style="width:200px" onChange="javascript:emptyListBox(document.oz.statecode);fillSelect(document.oz.statecode,null,buildSelectedItems(document.oz.countrycode));"></select></td>
<td> </td><td><select multiple="multiple" size="6" name="statecode" style="width:200px"></select></td>
</tr>
<tr>
<td><a href="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);selectAll(document.oz.areacode);fillSelect(document.oz.regioncode,null,buildSelectedItems(document.oz.areacode));">Select All</a></td>
<td><a href="javascript:emptyListBox(document.oz.countrycode);selectAll(document.oz.regioncode);fillSelect(document.oz.countrycode,null,buildSelectedItems(document.oz.regioncode));">Select All</a></td>
<td><a href="javascript:selectAll(document.oz.countrycode);emptyListBox(document.oz.statecode);fillSelect(document.oz.statecode,null,buildSelectedItems(document.oz.countrycode));">Select All</a></td>
<td> </td><td><a href="javascript:selectAll(document.oz.statecode);">Select All</a></td>
</tr>
<tr>
<td><a href="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);emptyListBox(document.oz.statecode);deselectAll(document.oz.areacode);">Clear All</a></td>
</tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br/>PE House</td>
</tr>
<tr>
<td>Find exit companies who are currently held by specific PE Houses.
<br />Maximum of 50 selections allowed.</td >
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td>
<a class="search_lookup" href="javascript:openWin('qpehousenotapproved','hyperlink','pehousesysid','select-multiple','pehousesysiddescription','');">Lookup</a>
</td>
</tr>
<tr>
<td>
<select size="4" multiple="multiple" name="pehousesysid" style="width:350px"></select>
<input type="hidden" name="pehousesysiddescription" />
</td>
</tr>
<tr>
<td>
<a href="javascript:removeLookupOption(document.oz.pehousesysid);removeMe(document.oz.pehousesysid);">Remove</a>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br/>Advisors</td>
</tr>
<tr>
<td>
Find exited companies who have been advised by these companies.
<br />Maximum of 50 selections allowed.
</td>
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td>
<a class="search_lookup" href="javascript:openWin('ecadvisor','hyperlink','advisorcompanysysid','select-multiple','advisorcompanysysiddescription','');">Lookup</a>
</td>
</tr>
<tr>
<td>
<select size="4" multiple="multiple" name="advisorcompanysysid" style="width:350px"></select>
<input type="hidden" name="advisorcompanysysiddescription" />
</td>
</tr>
<tr>
<td>
<a href="javascript:removeLookupOption(document.oz.advisorcompanysysid);removeMe(document.oz.advisorcompanysysid);">Remove</a>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td><br /><span class="criteriasectionheader">Deal Value</span></td>
</tr>
<tr>
<td>Find exited companies with the following deal value. </td>
</tr>
<tr>
<td>
<table>
<tr>
<td><p><span class="criterialabel">Currency</span></p></td>
<td> </td>
<td><select id="currencycode" name="currencycode"><option value="AUD">AUD</option>
<option value="CHF">CHF</option>
<option value="CNY">CNY</option>
<option value="EUR">EUR</option>
<option value="GBP">GBP</option>
<option value="HKD">HKD</option>
<option value="INR">INR</option>
<option value="JPY">JPY</option>
<option value="USD" selected="">USD</option></select></td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<table>
<tr>
<td width="180"><p><span class="criterialabel">Minimum value in millions</span></p></td>
<td> </td>
<td><p><input type="text" name="mindealvalue" size="12" value="" onkeypress="checkMinimumValue();" onkeyup="checkMinimumValue();" /></td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<table>
<tr>
<td width="180"><span class="criterialabel">Maximum value in millions</span></td>
<td> </td>
<td><input type="text" name="maxdealvalue" size="12" value=""></td>
</tr>
</table>
</td>
</tr>
<tr><td><br>Include deals with undisclosed value <input type="checkbox" name="undiscloseddealvalues" value="true" Checked></td></tr>
<tr>
<td class="criteriasectionheader"><br/>Exited Companies</td>
</tr>
<tr>
<td>Maximum of 50 selections allowed.</td >
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td>
<a class="search_lookup" href="javascript:openWin('eccompany','hyperlink','eccompanysysid','select-multiple','eccompanysysiddescription','');">Lookup</a>
</td>
</tr>
<tr>
<td>
<select size="4" multiple="multiple" name="eccompanysysid" style="width:350px"></select>
<input type="hidden" name="eccompanysysiddescription" />
</td>
</tr>
<tr>
<td>
<a href="javascript:removeLookupOption(document.oz.eccompanysysid);removeMe(document.oz.eccompanysysid);">Remove</a>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td class="criteriasectionheader"><br/>Free Text Search</td>
</tr>
<tr>
<td>Please use the Free Text Search by typing in a keyword or phrase to identify the required portfolio.
<br />
<span class="hint">Searches on companies' information, deal description, and condition, type, nature, consideration structure.<br><br></span>
</td>
</tr>
<tr>
<td>
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td width="150" class="criterialabel">Search</td>
<td><input type="text" name="textsearch" style="width:250px" value="" /></td>
<td><table border="0" cellpadding="0" cellspacing="0">
<tr valign="top"><td width="350"><input checked type="radio" name="andorfreetext" value="and"/>Match all words<br><input type="radio" name="andorfreetext" value="or"/>Match any word<br><input type="radio" name="andorfreetext" value="phrase"/>Match exact phrase</td></tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td style="TEXT-ALIGN: right;" class="search_buttons_right">
<input type="button" value="Save Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=1;document.oz.target='_self';document.oz.submit();};"/>
<!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=28;defaultDatesWithLocale( document.oz.datefrom, document.oz.dateto, 'dd/mm/yyyy' );if (verifyDateSubSectors(document.oz.datefrom.value)) {countWindow();document.oz.target='_self';document.oz.submit();}}"><img src="/images/button_countresults.gif" border="0" /></a -->
<input type="button" value="Count Results" class="framework_flatbutton" onclick="javascript:submitCount();" />
<!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=8;defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );document.oz.target='_self';if (verifyDateSubSectors(document.oz.datefrom.value)) {document.oz.target='_self';document.oz.submit();}};"><img src="/images/button_search.gif" border="0" /></a -->
<input type="button" value="Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {
document.oz.verb.value=8
;document.oz.target='_self';
defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );
document.oz.target='_self';
document.oz.submit();
}" />
</td>
</tr>
</tbody>
</table>
</tr>
</td>
</tbody>
</table>
</form>
<script LANGUAGE="JavaScript">
<!--
function validatePage(objitem) {
selectAll(objitem.pehousesysid);
selectAll(objitem.eccompanysysid);
objitem.eccompanysysid.required=false;
objitem.eccompanysysid.description='Portfolio Company Name';
objitem.eccompanysysid.datatype='alphanumeric';
selectAll(objitem.advisorcompanysysid);
objitem.advisorcompanysysid.required=false;
objitem.advisorcompanysysid.description='Advisor Name';
objitem.advisorcompanysysid.datatype='alphanumeric';
// locale info.
objitem.localedateformat='dd/mm/yyyy';
objitem.localecurrencycode='USD';
objitem.localelanguagecode='en_eu';
objitem.localetimezone='235';
objitem.mindealvalue.required=false;
objitem.mindealvalue.description='Currency minimum value in millions';
objitem.mindealvalue.datatype='decimal';
objitem.mindealvalue.min =0;
objitem.mindealvalue.max=1000000000000000000;
objitem.maxdealvalue.required=false;
objitem.maxdealvalue.description='Currency maximum value in millions';
objitem.maxdealvalue.datatype='decimal';
objitem.maxdealvalue.min=0;
objitem.maxdealvalue.max=1000000000000000000;
objitem.datefrom.required=false;
objitem.datefrom.description='Date from';
objitem.datefrom.datatype='date';
objitem.dateto.required=false;
objitem.dateto.description='Date to';
objitem.dateto.datatype='date';
if (objitem.statecode)
{
objitem.statecodelength.value = objitem.statecode.length;
}
// DanielC: 7/11/08: Case 107136: set the hidden field so that it will end up in the token XML and can be used in criteria.xml
if (document.oz.domsectoronly.checked == false)
{
document.oz.normalsectorsearch.value = "true";
document.oz.normalsubsectorsearch.value = "true";
}
return verify(objitem,false);
}
function submitCount()
{
if (validatePage(document.oz)) {
var dOz = document.oz;
//need to change pPopup variable to pPopup=1 to ensure no chrome on popup in event of failure
var vAction = dOz.action;
dOz.action = (dOz.action.search(/pPopup/) == -1) ? dOz.action+= "&pPopup=1" : dOz.action.replace(/pPopup=./,"pPopup=1");
defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );
dOz.verb.value=28;
countWindow();
document.oz.submit();
dOz.action = vAction;
}
}
//-->
</script>
</td>
</tr>
</table>
</td>
<td class="homepage_mainbody-leaguetbl"></td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="100%"><img src="/images/spacer.gif" width="1" height="1"></td>
</tr>
</table>
</td>
</tr>
</table>
</div><footer class="acuris-footer" xmlns:msxsl="urn:schemas
```
A piece of code with an XPath that does not raise an error:
```
def openSearchPageCommon(self, url, clear_xpath):
    self.drv.get(url)
    for x in self.drv.find_elements_by_xpath(clear_xpath):
        x.click()

def openSearchPage(self):
    xpath = "//form[@action='portfoliobroker.asp?']//table//*[contains(text(),'Clear Date')]"
    self.openSearchPageCommon(self.tgt, xpath)
```
Full error:
```
Traceback (most recent call last):
File "mmmm_lib.py", line 73, in __init__
self.drv.find_element_by_xpath("//*[contains(text(),'Find exited companies announced')]/../..")
File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 394, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[contains(text(),'Find exited companies announced')]/../.."}
```
|
2021/08/05
|
[
"https://Stackoverflow.com/questions/68660419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8867871/"
] |
Using `repr` or a raw string on a **target** string is a bad idea!
By doing that, newline characters are treated as the literal characters '`\n`'.
This is likely to cause unexpected behavior on other test cases.
The real problem is that `.` matches any character **except** newline.
If you want to match everything, replace `.` with `[\s\S]`.
This means "whitespace or non-whitespace" = "anything".
Other character groups like `[\w\W]` also work,
[and they are more efficient when all you need is to remove the exception for newline.](https://stackoverflow.com/a/33312193/11556864)
One more thing: it is good practice to use a raw string for the **pattern** string (not the match target).
This eliminates the need to escape every character that has a special meaning in normal Python strings.
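As a quick illustration (a minimal sketch using Python's `re` module with made-up sample text), compare `.` with `[\s\S]` on a string containing a newline; `re.DOTALL` is the flag-based alternative:

```python
import re

text = "first line match\nsecond line fail"

# '.' does not cross the newline, so this finds nothing
assert re.search(r"match.*fail", text) is None

# '[\s\S]' matches any character, newline included
assert re.search(r"match[\s\S]*fail", text) is not None

# equivalently, make '.' match newlines with the DOTALL flag
assert re.search(r"match.*fail", text, re.DOTALL) is not None
```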
|
You could add it as an *or*, but make sure you escape the `\` in the regex string, so the regex engine actually receives `\n` and not an actual newline.
Something like this:
```
regex = '.*match(.|\\n)*fail.*'
```
This would match anything from the last `\n` to `match`, then any mix or number of characters and `\n` until `testfail`. You can change this however you want, but the idea is the same: put what you want into a group, and then use `|` as an *or*.
[Screenshot of the matched text](https://i.stack.imgur.com/O97Gn.png)
On the left is what this regex pattern matched from your example.
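The grouping idea can be sketched in Python (hypothetical sample text; the pattern is the one from the answer, where `'\\n'` in a normal string puts a literal backslash-n into the pattern for the regex engine to interpret as a newline):

```python
import re

pattern = '.*match(.|\\n)*fail.*'

text = "start\nhere is a match\nsome more lines\ntestfail end"

# The (.|\n)* group lets the match span line breaks
m = re.search(pattern, text)
assert m is not None
```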
|
4,561,113
|
Hi,
how do I convert `a = ['1', '2', '3', '4']`
into `a = [1, 2, 3, 4]` in one line in Python?
|
2010/12/30
|
[
"https://Stackoverflow.com/questions/4561113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/557854/"
] |
With a list comprehension.
```
a[:] = [int(x) for x in a]
```
|
With a **generator**:
```
a[:] = (int(x) for x in a)
```
... list comprehensions are so ummmmm, 2.1, don't you know?
but please be wary of replacing the contents in situ; compare this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a[:] = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
[1, 2, 3, 4]
```
with this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
['1', '2', '3', '4']
```
|
4,561,113
|
Hi,
how do I convert `a = ['1', '2', '3', '4']`
into `a = [1, 2, 3, 4]` in one line in Python?
|
2010/12/30
|
[
"https://Stackoverflow.com/questions/4561113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/557854/"
] |
With a list comprehension.
```
a[:] = [int(x) for x in a]
```
|
```
a = ['1', '2', '3', '4']
l = [int(x) for x in a]
print l  # Python 2 syntax; prints [1, 2, 3, 4]
```
|
4,561,113
|
Hi,
how do I convert `a = ['1', '2', '3', '4']`
into `a = [1, 2, 3, 4]` in one line in Python?
|
2010/12/30
|
[
"https://Stackoverflow.com/questions/4561113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/557854/"
] |
You can use `map()`:
```
a = map(int, a)
```
This is an alternative to the (more common) list comprehension, and it can be more succinct in some cases.
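For example (note the version difference: in Python 2, `map()` returns a list directly, while in Python 3 it returns a lazy iterator, so materialize it with `list()` if you need a list):

```python
a = ['1', '2', '3', '4']

# Python 3: map() returns an iterator, so wrap it in list()
a = list(map(int, a))
assert a == [1, 2, 3, 4]
```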
|
With a **generator**:
```
a[:] = (int(x) for x in a)
```
... list comprehensions are so ummmmm, 2.1, don't you know?
but please be wary of replacing the contents in situ; compare this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a[:] = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
[1, 2, 3, 4]
```
with this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
['1', '2', '3', '4']
```
|
4,561,113
|
Hi,
how do I convert `a = ['1', '2', '3', '4']`
into `a = [1, 2, 3, 4]` in one line in Python?
|
2010/12/30
|
[
"https://Stackoverflow.com/questions/4561113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/557854/"
] |
You can use `map()`:
```
a = map(int, a)
```
This is an alternative to the (more common) list comprehension, and it can be more succinct in some cases.
|
```
a = ['1', '2', '3', '4']
l = [int(x) for x in a]
print l  # Python 2 syntax; prints [1, 2, 3, 4]
```
|
4,561,113
|
Hi,
how do I convert `a = ['1', '2', '3', '4']`
into `a = [1, 2, 3, 4]` in one line in Python?
|
2010/12/30
|
[
"https://Stackoverflow.com/questions/4561113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/557854/"
] |
With a **generator**:
```
a[:] = (int(x) for x in a)
```
... list comprehensions are so ummmmm, 2.1, don't you know?
but please be wary of replacing the contents in situ; compare this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a[:] = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
[1, 2, 3, 4]
```
with this:
```
>>> a = b = ['1', '2', '3', '4']
>>> a = [int(x) for x in a]
>>> a
[1, 2, 3, 4]
>>> b
['1', '2', '3', '4']
```
|
```
a = ['1', '2', '3', '4']
l = [int(x) for x in a]
print l  # Python 2 syntax; prints [1, 2, 3, 4]
```
|
14,007,784
|
I'm trying to create a scheduled task using the Unix `at` command. I wanted to run a Python script, but quickly realized that `at` is configured to run whatever file I give it with `sh`. In an attempt to circumvent this, I created a file containing the command `python mypythonscript.py` and passed that to `at` instead.
I have set the permissions on the Python file to executable by everyone (`chmod a+x`), but when the `at` job runs, I am told `python: can't open file 'mypythonscript.py': [Errno 13] Permission denied`.
If I run `source myshwrapperscript.sh`, the shell script invokes the Python script fine. Is there some obvious reason why I'm having permission problems with `at`?
**Edit:** I got frustrated with the Python script, so I went ahead and made an `sh` script version of the thing I wanted to run. I now find that the `sh` script fails with `rm: cannot remove <filename>: Permission denied` (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to break when `at` does it.
|
2012/12/23
|
[
"https://Stackoverflow.com/questions/14007784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/599391/"
] |
You can use a `LEFT JOIN`, but it would be so much easier to do that if you started off by using the cleaner and more modern `JOIN` syntax:
```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c
JOIN users d ON d.id = c.id
JOIN country e ON e.country_id = c.country_id
WHERE c.user_id = 42
```
Now to solve your problem you can just add `LEFT`:
```
LEFT JOIN country e ON e.country_id = c.country_id
```
Full query:
```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c
JOIN users d ON d.id = c.id
LEFT JOIN country e ON e.country_id = c.country_id
WHERE c.user_id = 42
```
**Related**
* [Why isn't SQL ANSI-92 standard better adopted over ANSI-89?](https://stackoverflow.com/questions/334201/why-isnt-sql-ansi-92-standard-better-adopted-over-ansi-89)
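The behavioral difference between `JOIN` and `LEFT JOIN` can be demonstrated with an in-memory SQLite database (hypothetical table contents, with names chosen to mirror the query above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER, username TEXT);
    CREATE TABLE user_profiles (user_id INTEGER, country_id INTEGER);
    CREATE TABLE country (country_id INTEGER, country_name TEXT);
    INSERT INTO users VALUES (42, 'alice');
    INSERT INTO user_profiles VALUES (42, NULL);  -- no country set
    INSERT INTO country VALUES (1, 'Sweden');
""")

# An inner JOIN drops the row because country_id is NULL ...
inner = con.execute("""
    SELECT d.username, e.country_name
    FROM user_profiles c
    JOIN users d ON d.id = c.user_id
    JOIN country e ON e.country_id = c.country_id
    WHERE c.user_id = 42
""").fetchall()

# ... while a LEFT JOIN keeps it, with NULL for the missing country.
left = con.execute("""
    SELECT d.username, e.country_name
    FROM user_profiles c
    JOIN users d ON d.id = c.user_id
    LEFT JOIN country e ON e.country_id = c.country_id
    WHERE c.user_id = 42
""").fetchall()

print(inner)  # []
print(left)   # [('alice', None)]
```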
|
Before I even start thinking about your current problem, can I just point out that your current query is a mess. Really bad. It might work, it might even work efficiently - but it's still a mess:
```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c, users d, country e
WHERE d.id = ".$id."
AND d.id = c.user_id
AND e.id = c.country_id;
```
>
> I have tried to rewrite this with CASE or LEFT JOIN
>
>
>
But you're not going to show us your code?
One solution would be to use a sub select against each row in user\_profiles/users:
```
SELECT c.*, d.username, d.email,
(SELECT e.country_name
FROM country e
WHERE e.id = c.country_id LIMIT 0,1) AS country_name
FROM user_profiles c, users d
WHERE d.id = ".$id."
AND d.id = c.user_id;
```
Alternatively, use a LEFT JOIN:
```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c
INNER JOIN users d
ON d.id = c.user_id
LEFT JOIN country e
ON e.id = c.country_id
WHERE d.id = ".$id.";
```
|
47,891,644
|
I am doing a Python project with the SikuliX feature. I want to make an automatic mail-sending system, where I import the TO, CC/BCC, and so on through a BAT file, which writes its data to a txt file; Python imports the txt and then uses it to do the job. But my problem is that when I leave a variable in Batch empty, it is automatically filled with 'ECHO is off.' How can I prevent this? Here's the code:
```
@echo Off
SETLOCAL EnableDelayedExpansion
for /F "tokens=1,2,3 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do (
set "DEL=%%a"
)
call :colorEcho f0 "-----[]AutoMail System[]-----"
echo.
echo.
call :colorEcho 0f "Fill out the next part:"
echo.
pause
del userdata.txt
echo.
echo.
set /p to="TO: "
echo To: >> userdata.txt
echo %to% >> userdata.txt
set /p ccbcc="CC/BCC: "
CC/ BCC: >> userdata.txt
%ccbcc% >> userdata.txt
set /p targy="Tárgy: "
Targy: >> userdata.txt
%targy% >> userdata.txt
set /p szoveg=">->-> "
szoveg: >> userdata.txt
%szoveg% >> userdata.txt
echo.
PAUSE
echo.
echo Starting AutoFill
echo.
PAUSE
start C:\\Users\\gutiw\\Desktop\\Sikuli\\runsikulix.cmd -r C:\\Users\\gutiw\\Desktop\\Sikuli\\AUTOMATION\\AutoMail.sikuli
exit
:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
```
Thanks for helping!
|
2017/12/19
|
[
"https://Stackoverflow.com/questions/47891644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8961515/"
] |
Use the command line:
```
>>userdata.txt echo/%to%
```
Now the environment variable `to` can be undefined and **ECHO** will nevertheless not output the current state of **ECHO** mode, because of the forward slash `/` in the command line.
The output redirection is specified at the beginning of this command line for safety: it makes it possible to output `to` even when it holds one of the string values `1` to `9`, without a trailing space being written into the file `userdata.txt`.
The command line `echo.%to% >> userdata.txt` has two small disadvantages:
1. Although extremely unlikely, `echo.` can fail and do something completely different as expected, see DosTips forum topic [ECHO. FAILS to give text or blank line - Instead use ECHO/](https://www.dostips.com/forum/viewtopic.php?f=3&t=774) for details.
2. The space between `%to%` and the redirection operator `>>` is also written as trailing space into the output file `userdata.txt`.
Example to demonstrate the difference:
```
del userdata.txt 2>nul
set to=1
>>userdata.txt echo/%to%
```
Execution of a batch file with the three lines above results in the Windows command interpreter actually executing:
```
del userdata.txt 2>nul
set to=1
echo/11>>userdata.txt
```
The file `userdata.txt` contains `1`, carriage return and line-feed, i.e. **three** bytes with the hexadecimal values `31 0D 0A`.
A batch file with the three lines
```
del userdata.txt 2>nul
set to=1
echo.%to% >>userdata.txt
```
results in the Windows command interpreter actually executing
```
del userdata.txt 2>nul
set to=1
echo.1 1>>userdata.txt
```
and the file `userdata.txt` contains `1`, a space character, carriage return and line-feed, i.e. **four** bytes with the hexadecimal values `31 20 0D 0A`.
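The byte-level claims above are easy to verify outside of Windows, e.g. with a short Python check:

```python
# '1' + carriage return + line-feed -> three bytes 31 0D 0A
assert b"1\r\n" == bytes([0x31, 0x0D, 0x0A])

# '1' + space + carriage return + line-feed -> four bytes 31 20 0D 0A
assert b"1 \r\n" == bytes([0x31, 0x20, 0x0D, 0x0A])
```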
In this case it also works to specify the redirection on the right side as usual, without a space between the environment variable reference `%to%` and the redirection operator `>>`, i.e. use the command line:
```
echo/%to%>>userdata.txt
```
This works because of the `/` after the command **ECHO**. `.` could also be used if it did not have issue 1 described above.
See also [Why does ECHO command print some extra trailing space into the file?](https://stackoverflow.com/a/46972524/3074564)
|
So it looks like it's working now.
The problem was that I hadn't used the **.** between `echo` and the variables.
After editing, it looks like this:
```
@echo Off
SETLOCAL EnableDelayedExpansion
for /F "tokens=1,2,3 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do (
set "DEL=%%a"
)
call :colorEcho f0 "-----[]AutoMail System[]-----"
echo.
echo.
call :colorEcho 0f "Fill out the next part"
echo.
pause
del userdata.txt
echo.
echo.
set /p to="TO: "
echo To: >> userdata.txt
echo.%to% >> userdata.txt
set /p ccbcc="CC/BCC: "
echo.CC/ BCC: >> userdata.txt
echo.%ccbcc% >> userdata.txt
set /p targy="Tárgy: "
echo.Targy: >> userdata.txt
echo.%targy% >> userdata.txt
set /p szoveg=">->-> "
echo.szoveg: >> userdata.txt
echo.%szoveg% >> userdata.txt
echo.
PAUSE
echo.
echo Starting AutoFill
echo.
PAUSE
start C:\Users\gutiw\Desktop\Sikuli\runsikulix.cmd -r C:\Users\gutiw\Desktop\Sikuli\AUTOMATION\AutoMail.sikuli
exit
:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
```
|
57,408,736
|
I can’t figure out how to give the debug symbols of my R package’s shared library source line information. What am I missing?
1. I create the following `src/Makevars` file:
```
PKG_CXXFLAGS=-O0 -ggdb
PKG_LIBS=-O0 -ggdb
```
2. I compile the package using `R CMD INSTALL --no-multiarch --with-keep.source`:
```
* installing to library ‘~/.local/lib/R/3.6’
* installing *source* package ‘reticulate’ ...
** using staged installation
** libs
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG -I"$HOME/.local/lib/R/3.6/Rcpp/include" -D_FORTIFY_SOURCE=2 -O0 -ggdb -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c RcppExports.cpp -o RcppExports.o
g++ -std=gnu++11 -shared -L/usr/lib64/R/lib -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -o reticulate.so RcppExports.o event_loop.o libpython.o output.o python.o readline.o -O0 -ggdb -L/usr/lib64/R/lib -lR
installing to ~/.local/lib/R/3.6/00LOCK-reticulate/00new/reticulate/libs
```
3. I debug like this:
```
R -d gdb --slave -e 'reticulate::py_eval("print")()'
GNU gdb (GDB) 8.3
[...]
(No debugging symbols found in /usr/lib64/R/bin/exec/R)
(gdb) break py_get_formals
Function "py_get_formals" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (py_get_formals) pending.
(gdb) run
Starting program: /usr/lib/R/bin/exec/R --slave -e reticulate::py_eval\(\"print\"\)\(\)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[...]
Thread 1 "R" hit Breakpoint 1, 0x00007fffeb6b79a0 in py_get_formals(PyObjectRef, bool) () from /home/angerer/.local/lib/R/3.6/reticulate/libs/reticulate.so
(gdb) step
Single stepping until exit from function _Z14py_get_formals11PyObjectRefb,
which has no line number information.
[...]
```
Why does my function not have line numbers even though I specified `-ggdb` for both compiling and linking? I see that only `RcppExports.cpp` is mentioned in the compile output; is that the problem? If so, how can I change this?
|
2019/08/08
|
[
"https://Stackoverflow.com/questions/57408736",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247482/"
] |
Changing the Makevars doesn’t trigger recompilation.
I needed to `rm -f src/*.o src/*.so` before the object files would be recompiled.
|
This is specifically for Windows. The simplest way to do it is to set the `R_MAKEVARS_USER` environment variable to point to the Makevars.win file. That seems to work. However, debug breakpoints have stopped working!
|
20,201,562
|
I have a list where each element is a letter. Like this:
```
myList = ['L', 'H', 'V', 'M']
```
However, I want to reverse these letters and store them as a string. Like this:
```
myString = 'MVHL'
```
Is there an easy way to do this in Python? Is there a `.reverse` I could call on my list and then just loop through and add items to my string?
|
2013/11/25
|
[
"https://Stackoverflow.com/questions/20201562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1110590/"
] |
There is a [`reversed()` function](http://docs.python.org/2/library/functions.html#reversed), as well as the `[::-1]` negative-stride slice:
```
>>> myList = ['L', 'H', 'V', 'M']
>>> ''.join(reversed(myList))
'MVHL'
>>> ''.join(myList[::-1])
'MVHL'
```
Both get the job done admirably when combined with the [`str.join()` method](http://docs.python.org/2/library/stdtypes.html#str.join), but of the two, the negative stride slice is the faster method:
```
>>> import timeit
>>> timeit.timeit("''.join(reversed(myList))", 'from __main__ import myList')
1.4639930725097656
>>> timeit.timeit("''.join(myList[::-1])", 'from __main__ import myList')
0.4923250675201416
```
This is because `str.join()` really wants a sequence so it can pass over the input strings twice (once to allocate space, once to copy the character data); the negative-stride slice returns a list directly, while `reversed()` returns an iterator that must first be converted to a list.
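A quick way to see the type difference described here (CPython shown; the iterator type name is an implementation detail):

```python
myList = ['L', 'H', 'V', 'M']

# A negative-stride slice materializes a new list immediately
print(type(myList[::-1]).__name__)      # list

# reversed() returns a lazy iterator; join() must turn it into a list first
print(type(reversed(myList)).__name__)  # list_reverseiterator

# Both forms produce the same string
assert ''.join(myList[::-1]) == ''.join(reversed(myList)) == 'MVHL'
```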
|
You can use `reversed` (or `[::-1]`) and `str.join`:
```
>>> myList = ['L', 'H', 'V', 'M']
>>> "".join(reversed(myList))
'MVHL'
```
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
There is no direct method to print the output of pymongo in a structured way,
but as the output of a pymongo query is a `dict`, the standard `json` module can format it:
```
print(json.dumps(result, indent=4, default=str))  # result: the dict returned by the pymongo query
```
I think this will serve your purpose (`default=str` takes care of non-JSON types such as `ObjectId`).
|
It probably depends on your IDE, not on pymongo itself; pymongo is responsible for manipulating data and communicating with MongoDB. I am using Visual Studio with PTVS and get such options from Visual Studio. PyCharm is also a good IDE option that will let you watch your code variables and see the JSON in a formatted structure.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
>
> I want to know whether there is any method like `pretty()` in PyMongo
>
>
>
**No**, PyMongo doesn't provide such a method; it is only available in the shell.
You need to use the [`pprint`](https://docs.python.org/3.5/library/pprint.html#pprint.pprint) function from the [`pprint`](https://docs.python.org/3.5/library/pprint.html#module-pprint) module.
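For example (using a plain `dict` in place of a real pymongo result):

```python
from pprint import pprint

# A plain dict standing in for a document returned by a pymongo query
doc = {
    "name": "Alice",
    "tags": ["reader", "writer"],
    "address": {"city": "Paris", "zip": "75001"},
}

# pprint splits nested structures across lines once they exceed `width`
pprint(doc, width=40)
```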
|
There is no direct method to print the output of pymongo in a structured way,
but as the output of a pymongo query is a `dict`, the standard `json` module can format it:
```
print(json.dumps(result, indent=4, default=str))  # result: the dict returned by the pymongo query
```
I think this will serve your purpose (`default=str` takes care of non-JSON types such as `ObjectId`).
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
Actually you can also program it yourself, like:
```
db = connection["dbname"]                # your database name
collection = db["yourcollectionname"]    # your collection name
for doc in collection.find({}):
    for key in doc:
        print('{', key, ':', doc[key], '}')
```
I think this will be helpful, or take it as an option.
|
There is no direct method to print the output of pymongo in a structured way,
but as the output of a pymongo query is a `dict`, the standard `json` module can format it:
```
print(json.dumps(result, indent=4, default=str))  # result: the dict returned by the pymongo query
```
I think this will serve your purpose (`default=str` takes care of non-JSON types such as `ObjectId`).
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
>
> I want to know whether there is any method like `pretty()` in PyMongo
>
>
>
**No**, PyMongo doesn't provide such a method; it is only available in the shell.
You need to use the [`pprint`](https://docs.python.org/3.5/library/pprint.html#pprint.pprint) function from the [`pprint`](https://docs.python.org/3.5/library/pprint.html#module-pprint) module.
|
It probably depends on your IDE, not on pymongo itself; pymongo is responsible for manipulating data and communicating with MongoDB. I am using Visual Studio with PTVS and get such options from Visual Studio. PyCharm is also a good IDE option that will let you watch your code variables and see the JSON in a formatted structure.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
Actually you can also program it yourself, like:
```
db = connection["dbname"]                # your database name
collection = db["yourcollectionname"]    # your collection name
for doc in collection.find({}):
    for key in doc:
        print('{', key, ':', doc[key], '}')
```
I think this will be helpful, or take it as an option.
|
It probably depends on your IDE, not on pymongo itself; pymongo is responsible for manipulating data and communicating with MongoDB. I am using Visual Studio with PTVS and get such options from Visual Studio. PyCharm is also a good IDE option that will let you watch your code variables and see the JSON in a formatted structure.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
I'm a bit new to this too, but I might have found a viable answer for those who are looking. The libraries I'm using are `pymongo`, `bson`, and `json`, with `from bson import json_util` and `from bson.json_util import dumps, loads`.
Where you want to print (or return), try:
```
print(loads(dumps(stringToPrint, indent=4, default=json_util.default)))
```
If your data has already been through `loads`, you will not need `loads` in this statement.
If you want to use `return`, leave out the outer `print(...)` call.
Example:
```
return json.loads(json.dumps(string, ..... )
```
If you imported `loads` and `dumps` directly, you can leave out the `json.` prefix.
I haven't tried altering the `indent` value (because this worked great for me), but if you don't like how the output looks, try changing it.
|
It probably depends on your IDE, not the pymongo itself. the pymongo is responsible for manipulating data and communicating with the mongodb. I am using Visual Studio with PTVS and I have such options provided from the Visual Studio. The PyCharm is also a good option for IDE that will allow you to watch your code variables and the JSON in a formatted structure.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
>
> I want to know whether there is any method like `pretty()` in PyMongo
>
>
>
**No**, PyMongo doesn't provide such a method; it is only available in the shell.
You need to use the [`pprint`](https://docs.python.org/3.5/library/pprint.html#pprint.pprint) function from the [`pprint`](https://docs.python.org/3.5/library/pprint.html#module-pprint) module.
|
Actually you can also program it yourself, like:
```
db = connection["dbname"]                # your database name
collection = db["yourcollectionname"]    # your collection name
for doc in collection.find({}):
    for key in doc:
        print('{', key, ':', doc[key], '}')
```
I think this will be helpful, or take it as an option.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
>
> I want to know whether there is any method like `pretty()` in PyMongo
>
>
>
**No**, PyMongo doesn't provide such a method; it is only available in the shell.
You need to use the [`pprint`](https://docs.python.org/3.5/library/pprint.html#pprint.pprint) function from the [`pprint`](https://docs.python.org/3.5/library/pprint.html#module-pprint) module.
|
I'm a bit new to this too, but I might have found a viable answer for those who are looking. The libraries I'm using are `pymongo`, `bson`, and `json`, with `from bson import json_util` and `from bson.json_util import dumps, loads`.
Where you want to print (or return), try:
```
print(loads(dumps(stringToPrint, indent=4, default=json_util.default)))
```
If your data has already been through `loads`, you will not need `loads` in this statement.
If you want to use `return`, leave out the outer `print(...)` call.
Example:
```
return json.loads(json.dumps(string, ..... )
```
If you imported `loads` and `dumps` directly, you can leave out the `json.` prefix.
I haven't tried altering the `indent` value (because this worked great for me), but if you don't like how the output looks, try changing it.
|
34,493,535
|
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to read. I have used the `.pretty()` option with the mongo shell, which prints output in a structured way.
I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way.
|
2015/12/28
|
[
"https://Stackoverflow.com/questions/34493535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4138764/"
] |
Actually you can also program it yourself, like:
```
db = connection["dbname"]                # your database name
collection = db["yourcollectionname"]    # your collection name
for doc in collection.find({}):
    for key in doc:
        print('{', key, ':', doc[key], '}')
```
I think this will be helpful, or take it as an option.
|
I'm a bit new to this too, but I might have found a viable answer for those who are looking. The libraries I'm using are `pymongo`, `bson`, and `json`, with `from bson import json_util` and `from bson.json_util import dumps, loads`.
Where you want to print (or return), try:
```
print(loads(dumps(stringToPrint, indent=4, default=json_util.default)))
```
If your data has already been through `loads`, you will not need `loads` in this statement.
If you want to use `return`, leave out the outer `print(...)` call.
Example:
```
return json.loads(json.dumps(string, ..... )
```
If you imported `loads` and `dumps` directly, you can leave out the `json.` prefix.
I haven't tried altering the `indent` value (because this worked great for me), but if you don't like how the output looks, try changing it.
|
42,162,985
|
**Use Case**
I am making a factory type script in Python that consumes XML and based on that XML, returns information from a specific factory. I have created a file that I call FactoryMap.json that stores the mapping between the location an item can be found in XML and the appropriate factory.
**Issue**
The JSON in my mapping file looks like:
```
{
"path": "['project']['builders']['hudson.tasks.Shell']",
"class": "bin.classes.factories.step.ShellStep"
}
```
*path* is where the element can be found in the xml once its converted to a dict.
*class* is the corresponding path to the factory that can consume that elements information.
In order to do anything with this, I need to descend into the dictionary's structure, which would look like this if I didn't have to draw this information from a file (note that the key reference comes from 'path' in my JSON):
```
configDict={my xml config dict}
for k,v in configDict['project']['builders']['hudson.tasks.Shell'].iteritems():
#call the appropriate factory
```
The issue is that if I look up the path value as a string or a list, I cannot use it with `iteritems()`:
```
path="['project']['builders']['hudson.tasks.Shell']" #this is taken from the JSON
for k,v in configDict[path].iteritems():
#call the appropriate factory
```
This raises a `KeyError` because the whole string is treated as a single key. How can I use a variable as the key for that Python dictionary?
|
2017/02/10
|
[
"https://Stackoverflow.com/questions/42162985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7127136/"
] |
You could use `eval`:
```
eval( "configDict"+path )
```
|
You can use the `eval()` function to evaluate your path into an actual dict object vs a string. Something like this is what I'm referring to:
```
path="['project']['builders']['hudson.tasks.Shell']" #this is taken from the JSON
d = eval("configDict%s" % path)
for k,v in d.iteritems():
#call the appropriate factory
```
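Since `eval` will execute arbitrary code read from the mapping file, a safer sketch (assuming the path always has the bracketed-string form shown in the question) extracts the keys with a regular expression and descends with `functools.reduce`:

```python
import re
from functools import reduce

def descend(config, path):
    # Pull the quoted keys out of a path like "['a']['b']"
    keys = re.findall(r"\['([^']+)'\]", path)
    # Walk down the nested dicts one key at a time
    return reduce(lambda d, k: d[k], keys, config)

# Hypothetical config dict mirroring the structure from the question
configDict = {'project': {'builders': {'hudson.tasks.Shell': {'command': 'make'}}}}
path = "['project']['builders']['hudson.tasks.Shell']"
for k, v in descend(configDict, path).items():
    print(k, v)  # command make
```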
|
25,109,445
|
I am developing a client-server application whose server side is written in Python. I want to call a group of methods from a Java program in Python. All the Java methods exist in one jar file, so I do not need to load different jars.
For this purpose, I used jpype. For each request from a client, I invoke a Python function which looks like this:
```
def test(self, userName, password):
Classpath = "/home/DataSource/DMP.jar"
jpype.startJVM(
"/usr/local/java/jdk1.7.0_60/jre/lib/amd64/server/libjvm.so",
"-ea",
"- Xmx512m",
"-Djava.class.path=%s" % Classpath)
NCh = jpype.JClass("Common.NChainInterface")
n = NCh(self._DB_ipAddress, self._DB_Port, self._XML_SCHEMA_PATH, self._DSTDir)
jpype.shutdownJVM()
```
For one call it works, but for the second call it cannot start the JVM.
I saw a lot of complaints about this but could not find any solution. I would appreciate any help.
If jpype has a problem with starting the JVM multiple times, is there any way to start and stop the JVM once? The server is deployed on an Ubuntu virtual machine, but I do not have enough knowledge to write, for example, a script for this purpose. Could you please provide a link or an example?
|
2014/08/03
|
[
"https://Stackoverflow.com/questions/25109445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Check `isJVMStarted()` before `startJVM()`.
If JVM is running, it will return `True`, otherwise `False`.
```
def init_jvm(jvmpath=None):
if jpype.isJVMStarted():
return
jpype.startJVM(jpype.getDefaultJVMPath())
```
For a real example, see [here](https://github.com/e9t/konlpy/blob/master/konlpy/jvm.py#L21).
|
This issue is not resolved by et9's answer above.
The problem is explained [here](https://sourceforge.net/p/jpype/discussion/379372/thread/8dab696c/).
Effectively you need to start/stop the JVM at the server/module level.
I have had success with multiple calls using this method in unit tests.
|
25,109,445
|
I am developing a client-server application whose server side is written in Python. I want to call a group of methods from a Java program in Python. All the Java methods exist in one jar file, so I do not need to load different jars.
For this purpose, I used jpype. For each request from a client, I invoke a Python function which looks like this:
```
def test(self, userName, password):
Classpath = "/home/DataSource/DMP.jar"
jpype.startJVM(
"/usr/local/java/jdk1.7.0_60/jre/lib/amd64/server/libjvm.so",
"-ea",
"- Xmx512m",
"-Djava.class.path=%s" % Classpath)
NCh = jpype.JClass("Common.NChainInterface")
n = NCh(self._DB_ipAddress, self._DB_Port, self._XML_SCHEMA_PATH, self._DSTDir)
jpype.shutdownJVM()
```
For one call it works, but for the second call it cannot start the JVM.
I saw a lot of complaints about this but could not find any solution. I would appreciate any help.
If jpype has a problem with starting the JVM multiple times, is there any way to start and stop the JVM once? The server is deployed on an Ubuntu virtual machine, but I do not have enough knowledge to write, for example, a script for this purpose. Could you please provide a link or an example?
|
2014/08/03
|
[
"https://Stackoverflow.com/questions/25109445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Check `isJVMStarted()` before `startJVM()`.
If JVM is running, it will return `True`, otherwise `False`.
```
def init_jvm(jvmpath=None):
if jpype.isJVMStarted():
return
jpype.startJVM(jpype.getDefaultJVMPath())
```
For a real example, see [here](https://github.com/e9t/konlpy/blob/master/konlpy/jvm.py#L21).
|
I have solved it by adding these lines when defining the connection:
```
if not jpype.isJVMStarted():
jpype.startJVM(jvmPath, args)
```
|
25,109,445
|
I am developing a client-server application whose server side is written in Python. I want to call a group of methods from a Java program in Python. All the Java methods exist in one jar file, so I do not need to load different jars.
For this purpose, I used jpype. For each request from a client, I invoke a Python function which looks like this:
```
def test(self, userName, password):
Classpath = "/home/DataSource/DMP.jar"
jpype.startJVM(
"/usr/local/java/jdk1.7.0_60/jre/lib/amd64/server/libjvm.so",
"-ea",
"- Xmx512m",
"-Djava.class.path=%s" % Classpath)
NCh = jpype.JClass("Common.NChainInterface")
n = NCh(self._DB_ipAddress, self._DB_Port, self._XML_SCHEMA_PATH, self._DSTDir)
jpype.shutdownJVM()
```
For one call it works, but for the second call it cannot start the JVM.
I saw a lot of complaints about this but could not find any solution. I would appreciate any help.
If jpype has a problem with starting the JVM multiple times, is there any way to start and stop the JVM once? The server is deployed on an Ubuntu virtual machine, but I do not have enough knowledge to write, for example, a script for this purpose. Could you please provide a link or an example?
|
2014/08/03
|
[
"https://Stackoverflow.com/questions/25109445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I have solved it by adding these lines when defining the connection:
```
if not jpype.isJVMStarted():
jpype.startJVM(jvmPath, args)
```
|
This issue is not resolved by et9's answer above.
The problem is explained [here](https://sourceforge.net/p/jpype/discussion/379372/thread/8dab696c/).
Effectively you need to start/stop the JVM at the server/module level.
I have had success with multiple calls using this method in unit tests.
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class `types.MappingProxyType` was added to public API in python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so this would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
TL;DR
-----
`MappingProxyType` is a read-only proxy for mapping (e.g. `dict`) objects.
`frozendict` is an immutable `dict`.
Answer
------
The proxy pattern is (quoting [wikipedia](https://en.wikipedia.org/wiki/Proxy_pattern)):
>
> A proxy, in its most general form, is a class functioning as an
> interface to something else.
>
>
>
`MappingProxyType` is just a simple proxy (i.e. an interface) for accessing the real object (the real map, which in our example is a `dict`).
The suggested `frozendict` object is to `dict` what `frozenset` is to `set`: a read-only (immutable) object whose contents are fixed at creation.
So why do we need `MappingProxyType`? An example use case is when you want to pass a dictionary to another function without letting it change your dictionary; it acts as a read-only proxy (quoting the [python docs](https://docs.python.org/3.5/library/types.html#types.MappingProxyType)):
>
> Read-only proxy of a mapping. It provides a dynamic view on the
> mapping’s entries, which means that when the mapping changes, the view
> reflects these changes.
>
>
>
Let's see some example usage of `MappingProxyType`:
```
In [1]: from types import MappingProxyType
In [2]: d = {'a': 1, 'b': 2}
In [3]: m = MappingProxyType(d)
In [4]: m['a']
Out[4]: 1
In [5]: m['a'] = 5
TypeError: 'mappingproxy' object does not support item assignment
In [6]: d['a'] = 42
In [7]: m['a']
Out[7]: 42
In [8]: for i in m.items():
...: print(i)
('a', 42)
('b', 2)
```
Update:
-------
Because the PEP did not make it into Python, we cannot know for sure what the implementation would have been.
Looking at the PEP, we see that:
```
frozendict({'a': {'b': 1}})
```
would raise an exception because `{'b': 1}` is not a hashable value, while your implementation would create the object anyway. Of course, you can add validation for the values as noted in the PEP.
I assume part of the PEP's motivation was memory optimization, and an implementation of this kind of `frozendict` could have benefited from faster dict comparison using the `__hash__` implementation.
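As the question notes, `MappingProxyType` cannot be subclassed, so a hashable variant has to wrap one by composition; a minimal sketch (assuming all values are hashable):

```python
from types import MappingProxyType

class HashableFrozenDict:
    """Minimal hashable wrapper around MappingProxyType (composition, not subclassing)."""

    def __init__(self, *args, **kwargs):
        self._proxy = MappingProxyType(dict(*args, **kwargs))
        # Hash a frozenset of the items; this requires every value to be hashable
        self._hash = hash(frozenset(self._proxy.items()))

    def __getitem__(self, key):
        return self._proxy[key]

    def __iter__(self):
        return iter(self._proxy)

    def __len__(self):
        return len(self._proxy)

    def __hash__(self):
        return self._hash

    def __eq__(self, other):
        return isinstance(other, HashableFrozenDict) and self._proxy == other._proxy

fd = HashableFrozenDict(a=1, b=2)
# Equal contents give equal hashes, so the wrapper can be used as a dict key
assert hash(fd) == hash(HashableFrozenDict({'a': 1, 'b': 2}))
```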
|
One thing I've noticed is that `frozendict.copy` supports add/replace (limited to string keys), whereas `MappingProxyType.copy` does not. For instance:
```py
d = {'a': 1, 'b': 2}
from frozendict import frozendict
fd = frozendict(d)
fd2 = fd.copy(b=3, c=5)
from types import MappingProxyType
mp = MappingProxyType(d)
# mp2 = mp.copy(b=3, c=5) => TypeError: copy() takes no keyword arguments
# to do that with MappingProxyType we need more boilerplate
temp = dict(mp)
temp.update(b=3, c=5)
mp2 = MappingProxyType(temp)
```
Note: neither of these immutable maps supports a "remove and return new immutable copy" operation.
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class `types.MappingProxyType` was added to public API in python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so this would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
TL;DR
-----
`MappingProxyType` is a read-only proxy for mapping (e.g. `dict`) objects.
`frozendict` is an immutable `dict`.
Answer
------
The proxy pattern is (quoting [wikipedia](https://en.wikipedia.org/wiki/Proxy_pattern)):
>
> A proxy, in its most general form, is a class functioning as an
> interface to something else.
>
>
>
`MappingProxyType` is just a simple proxy (i.e. an interface) for accessing the real object (the real map, which in our example is a `dict`).
The suggested `frozendict` object is to `dict` what `frozenset` is to `set`: a read-only (immutable) object whose contents are fixed at creation.
So why do we need `MappingProxyType`? An example use case is when you want to pass a dictionary to another function without letting it change your dictionary; it acts as a read-only proxy (quoting the [python docs](https://docs.python.org/3.5/library/types.html#types.MappingProxyType)):
>
> Read-only proxy of a mapping. It provides a dynamic view on the
> mapping’s entries, which means that when the mapping changes, the view
> reflects these changes.
>
>
>
Let's see some example usage of `MappingProxyType`:
```
In [1]: from types import MappingProxyType
In [2]: d = {'a': 1, 'b': 2}
In [3]: m = MappingProxyType(d)
In [4]: m['a']
Out[4]: 1
In [5]: m['a'] = 5
TypeError: 'mappingproxy' object does not support item assignment
In [6]: d['a'] = 42
In [7]: m['a']
Out[7]: 42
In [8]: for i in m.items():
...: print(i)
('a', 42)
('b', 2)
```
Update:
-------
Because the PEP did not make it into Python, we cannot know for sure what the implementation would have been.
Looking at the PEP, we see that:
```
frozendict({'a': {'b': 1}})
```
would raise an exception because `{'b': 1}` is not a hashable value, while your implementation would create the object anyway. Of course, you can add validation for the values as noted in the PEP.
I assume part of the PEP's motivation was memory optimization, and an implementation of this kind of `frozendict` could have benefited from faster dict comparison using the `__hash__` implementation.
|
`MappingProxyType` adds immutability only at the first level:
```
>>> from types import MappingProxyType
>>> d = {'a': {'b': 1}}
>>> md = MappingProxyType(d)
>>> md
mappingproxy({'a': {'b': 1}})
>>> md['a']['b']
1
>>> md['a']['b'] = 3
>>> md['a']['b']
3
```
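If deep read-only behavior is needed, one workaround (a sketch, not a standard library feature) is to wrap nested dicts recursively:

```python
from types import MappingProxyType

def deep_proxy(d):
    # Recursively wrap nested dicts so every level is read-only
    return MappingProxyType(
        {k: deep_proxy(v) if isinstance(v, dict) else v for k, v in d.items()}
    )

md = deep_proxy({'a': {'b': 1}})
print(md['a']['b'])  # 1
# md['a']['b'] = 3 would now raise TypeError at any depth
```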
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class `types.MappingProxyType` was added to public API in python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so this would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
TL;DR
-----
`MappingProxyType` is a read-only proxy for mapping (e.g. `dict`) objects.
`frozendict` is an immutable `dict`.
Answer
------
The proxy pattern is (quoting [wikipedia](https://en.wikipedia.org/wiki/Proxy_pattern)):
>
> A proxy, in its most general form, is a class functioning as an
> interface to something else.
>
>
>
The `MappingProxyType` is just a simple proxy (i.e. an interface) for accessing the real object (the real map, which in our example is a dict).
The suggested `frozendict` is to dict as frozenset is to set: a read-only (immutable) object whose contents are fixed at creation.
So why do we need `MappingProxyType`? An example use case is when you want to pass a dictionary to another function without letting that function change your dictionary; it acts as a read-only proxy (quoting the [python docs](https://docs.python.org/3.5/library/types.html#types.MappingProxyType)):
>
> Read-only proxy of a mapping. It provides a dynamic view on the
> mapping’s entries, which means that when the mapping changes, the view
> reflects these changes.
>
>
>
Let's see some example usage of `MappingProxyType`:
```
In [1]: from types import MappingProxyType
In [2]: d = {'a': 1, 'b': 2}
In [3]: m = MappingProxyType(d)
In [4]: m['a']
Out[4]: 1
In [5]: m['a'] = 5
TypeError: 'mappingproxy' object does not support item assignment
In [6]: d['a'] = 42
In [7]: m['a']
Out[7]: 42
In [8]: for i in m.items():
...: print(i)
('a', 42)
('b', 2)
```
Update:
-------
Because the PEP did not make it into Python, we cannot know for sure what the implementation would have been.
By looking at the PEP, we see that:
```
frozendict({'a': {'b': 1}})
```
would raise an exception, as `{'b': 1}` is not a hashable value, but your implementation would create the object. Of course, you can add validation for the values, as noted in the PEP.
I assume part of the PEP's motivation was memory optimization, and that this kind of frozendict could have benefited from faster dict comparison using the `__hash__` implementation.
|
`MappingProxyType` is also terribly slow. I suggest you use [frozendict](https://pypi.org/project/frozendict/) instead.
PS: I'm the new owner of the package.
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class, `types.MappingProxyType`, was added to the public API in Python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so it would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
MappingProxyType adds immutability only at the first level:
```
>>> from types import MappingProxyType
>>> d = {'a': {'b': 1}}
>>> md = MappingProxyType(d)
>>> md
mappingproxy({'a': {'b': 1}})
>>> md['a']['b']
1
>>> md['a']['b'] = 3
>>> md['a']['b']
3
```
|
One thing I've noticed is that `frozendict.copy` supports add/replace (limited to string keys), whereas `MappingProxyType.copy` does not. For instance:
```py
d = {'a': 1, 'b': 2}
from frozendict import frozendict
fd = frozendict(d)
fd2 = fd.copy(b=3, c=5)
from types import MappingProxyType
mp = MappingProxyType(d)
# mp2 = mp.copy(b=3, c=5) => TypeError: copy() takes no keyword arguments
# to do that w/ MappingProxyType we need more boilerplate
temp = dict(mp)
temp.update(b=3, c=5)
mp2 = MappingProxyType(temp)
```
Note: neither of these immutable maps supports a "remove and return a new immutable copy" operation.
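Both gaps can be worked around with small helpers that rebuild a fresh proxy from the old one; a sketch (the helper names `proxy_updated` and `proxy_without` are invented):

```python
from types import MappingProxyType

def proxy_updated(mp, **changes):
    # Return a new read-only proxy with keys added or replaced.
    return MappingProxyType({**mp, **changes})

def proxy_without(mp, *keys):
    # Return a new read-only proxy with the given keys removed.
    return MappingProxyType({k: v for k, v in mp.items() if k not in keys})

mp = MappingProxyType({'a': 1, 'b': 2})
mp2 = proxy_updated(mp, b=3, c=5)   # mappingproxy({'a': 1, 'b': 3, 'c': 5})
mp3 = proxy_without(mp, 'a')        # mappingproxy({'b': 2})
```

Each helper snapshots the underlying dict, so unlike a plain `MappingProxyType` the result no longer reflects later changes to the original mapping.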
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class, `types.MappingProxyType`, was added to the public API in Python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so it would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
`MappingProxyType` is also terribly slow. I suggest you use [frozendict](https://pypi.org/project/frozendict/) instead.
PS: I'm the new owner of the package.
|
One thing I've noticed is that `frozendict.copy` supports add/replace (limited to string keys), whereas `MappingProxyType.copy` does not. For instance:
```py
d = {'a': 1, 'b': 2}
from frozendict import frozendict
fd = frozendict(d)
fd2 = fd.copy(b=3, c=5)
from types import MappingProxyType
mp = MappingProxyType(d)
# mp2 = mp.copy(b=3, c=5) => TypeError: copy() takes no keyword arguments
# to do that w/ MappingProxyType we need more boilerplate
temp = dict(mp)
temp.update(b=3, c=5)
mp2 = MappingProxyType(temp)
```
Note: neither of these immutable maps supports a "remove and return a new immutable copy" operation.
|
41,795,116
|
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class, `types.MappingProxyType`, was added to the public API in Python 3.3.
I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`?
In other words, what's the substantive difference between the original PEP 416 `frozendict` and this:
```
from types import MappingProxyType
def frozendict(*args, **kwargs):
return MappingProxyType(dict(*args, **kwargs))
```
Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (`MappingProxyType` cannot be subclassed, so it would require composition and forwarding of methods).
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41795116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
MappingProxyType adds immutability only at the first level:
```
>>> from types import MappingProxyType
>>> d = {'a': {'b': 1}}
>>> md = MappingProxyType(d)
>>> md
mappingproxy({'a': {'b': 1}})
>>> md['a']['b']
1
>>> md['a']['b'] = 3
>>> md['a']['b']
3
```
|
`MappingProxyType` is also terribly slow. I suggest you use [frozendict](https://pypi.org/project/frozendict/) instead.
PS: I'm the new owner of the package.
|
49,217,962
|
I tend to write a lot of command line utility programs and was wondering if
there is a standard way of messaging the user in Python. Specifically, I would like to print error and warning messages, as well as other more conversational output in a manner that is consistent with Unix conventions. I could produce these myself using the built-in print function, but the messages have a uniform structure so it seems like it would be useful to have a package to handle this for me.
For example, for commands that you run directly in the command line you might
get messages like this:
```
This is normal output.
error: no files given.
error: parse.c: no such file or directory.
error: parse.c:7:16: syntax error.
warning: /usr/lib64/python2.7/site-packages/simplejson:
not found, skipping.
```
If the commands might be run in a script or pipeline, they should include their name:
```
grep: /usr/dict/words: no such file or directory.
```
It would be nice if it could handle levels of verbosity.
These things are all relatively simple in concept, but can result in a lot of
extra conditionals and complexity for each print statement.
I have looked at the logging facility in Python, but it seems overly complicated and more suited for daemons than command line utilities.
|
2018/03/11
|
[
"https://Stackoverflow.com/questions/49217962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8323360/"
] |
I can recommend [Inform](https://inform.readthedocs.io). It is the only package I have seen that seems to address this need. It provides a variety of print functions that print in different circumstances or with different headers. For example:
```
log() -- prints to log file, no header
comment() -- prints if verbose, no header
display() -- prints if not quiet, no header
output() -- always prints, no header
warning() -- always prints with warning header
error() -- always prints with error header
fatal() -- always prints with error header, terminates program.
```
Inform refers to these functions as 'informants'. Informants are very similar to the Python print function in that they take any number of arguments and build the message by joining them together. It also allows you to specify a *culprit*, which is added to the front of the message.
For example, here is a simple search and replace program written using Inform.
```
#!/usr/bin/env python3
"""
Replace a string in one or more files.
Usage:
replace [options] <target> <replacement> <file>...
Options:
-v, --verbose indicate whether file is changed
"""
from docopt import docopt
from inform import Inform, comment, error, os_error
from pathlib import Path
# read command line
cmdline = docopt(__doc__)
target = cmdline['<target>']
replacement = cmdline['<replacement>']
filenames = cmdline['<file>']
Inform(verbose=cmdline['--verbose'], prog_name=True)
for filename in filenames:
try:
filepath = Path(filename)
orig = filepath.read_text()
new = orig.replace(target, replacement)
comment('updated' if orig != new else 'unchanged', culprit=filename)
filepath.write_text(new)
except OSError as e:
error(os_error(e))
```
Inform() is used to specify your preferences; comment() and error() are the
informants that actually print the messages; and os\_error() is a useful utility that converts OSError exceptions into a string that can be used as an error message.
If you were to run this, you might get the following output:
```
> replace -v tiger toe eeny meeny miny moe
eeny: updated
meeny: unchanged
replace error: miny: no such file or directory.
replace error: moe: no such file or directory.
```
Hopefully this gives you an idea of what Inform does. There is a lot more power there. For example, it provides a collection of utilities that are useful when printing messages. An example is os\_error(), but there are others. You can also define your own informants, which is a way of handling multiple levels of verbosity.
|
```
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
```
`level` specified above controls the verbosity of the output.
You can attach handlers (this is where the complexity outweighs the benefit in my case) to the logging to send output to different places (<https://docs.python.org/2/howto/logging-cookbook.html#multiple-handlers-and-formatters>) but I haven't needed more than command line output to date.
To produce output you must specify its *verbosity* as you log it:
`logging.debug("This debug message will rarely appeal to end users")`
I hadn't read your very last line; the answer seemed obvious by then, and I wouldn't have imagined that a single `basicConfig` line could be described as "overly complicated". It's all I use 60% of the time when print is not enough.
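To connect a command-line verbosity flag to that `basicConfig` line, one possible sketch (the `-v` flag handling is just an example, not part of the logging API):

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', action='count', default=0)
args = parser.parse_args([])  # in a real script: parser.parse_args()

# 0 -> warnings/errors only, 1 (-v) -> info, 2+ (-vv) -> debug
level = [logging.WARNING, logging.INFO, logging.DEBUG][min(args.verbose, 2)]
logging.basicConfig(level=level,
                    format=f'{parser.prog}: %(levelname)s: %(message)s')

logging.warning('always shown')
logging.info('shown with -v')
logging.debug('shown with -vv')
```

Putting the program name (`parser.prog`) in the format string also gives you the `grep: ...`-style prefix the Unix convention calls for.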
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using dictionary comprehension (Python 2.7+) and slicing -
```
d = {e[0] : e[1:] for e in ll}
```
Demo -
```
>>> ll = [[1,2,3], [4,5,6], [7,8,9]]
>>> d = {e[0] : e[1:] for e in ll}
>>> d
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
you could do it this way:
```
ll = [[1,2,3], [4,5,6], [7,8,9]]
dct = dict( (item[0], item[1:]) for item in ll)
# or even: dct = { item[0]: item[1:] for item in ll }
print(dct)
# {1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using a [dict comprehension](https://stackoverflow.com/questions/14507591/python-dictionary-comprehension):
```
{words[0]:words[1:] for words in ll}
```
**output:**
```
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
you could do it this way:
```
ll = [[1,2,3], [4,5,6], [7,8,9]]
dct = dict( (item[0], item[1:]) for item in ll)
# or even: dct = { item[0]: item[1:] for item in ll }
print(dct)
# {1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
you could do it this way:
```
ll = [[1,2,3], [4,5,6], [7,8,9]]
dct = dict( (item[0], item[1:]) for item in ll)
# or even: dct = { item[0]: item[1:] for item in ll }
print(dct)
# {1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Another:
```
d = {k: v for k, *v in ll}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using a [dict comprehension](https://stackoverflow.com/questions/14507591/python-dictionary-comprehension):
```
{words[0]:words[1:] for words in ll}
```
**output:**
```
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Using dictionary comprehension (Python 2.7+) and slicing -
```
d = {e[0] : e[1:] for e in ll}
```
Demo -
```
>>> ll = [[1,2,3], [4,5,6], [7,8,9]]
>>> d = {e[0] : e[1:] for e in ll}
>>> d
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using dictionary comprehension (Python 2.7+) and slicing -
```
d = {e[0] : e[1:] for e in ll}
```
Demo -
```
>>> ll = [[1,2,3], [4,5,6], [7,8,9]]
>>> d = {e[0] : e[1:] for e in ll}
>>> d
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Another variation on the theme:
```
d = {e.pop(0): e for e in ll}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using dictionary comprehension (Python 2.7+) and slicing -
```
d = {e[0] : e[1:] for e in ll}
```
Demo -
```
>>> ll = [[1,2,3], [4,5,6], [7,8,9]]
>>> d = {e[0] : e[1:] for e in ll}
>>> d
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Another:
```
d = {k: v for k, *v in ll}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using a [dict comprehension](https://stackoverflow.com/questions/14507591/python-dictionary-comprehension):
```
{words[0]:words[1:] for words in ll}
```
**output:**
```
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Another variation on the theme:
```
d = {e.pop(0): e for e in ll}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Using a [dict comprehension](https://stackoverflow.com/questions/14507591/python-dictionary-comprehension):
```
{words[0]:words[1:] for words in ll}
```
**output:**
```
{1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
|
Another:
```
d = {k: v for k, *v in ll}
```
|
32,604,558
|
I looked but I didn't find the answer (and I'm pretty new to Python).
The question is pretty simple. I have a list made of sublists:
```
ll
[[1,2,3], [4,5,6], [7,8,9]]
```
What I'm trying to do is to create a dictionary that has as key the first element of each sublist and as values the values of the corresponding sublists, like:
```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```
How can I do that?
|
2015/09/16
|
[
"https://Stackoverflow.com/questions/32604558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2509085/"
] |
Another variation on the theme:
```
d = {e.pop(0): e for e in ll}
```
|
Another:
```
d = {k: v for k, *v in ll}
```
|
24,151,563
|
I've got a presentation running with reveal.js and everything is working. I am writing some sample code and highlight.js is working well within my presentation. But, I want to incrementally display code. E.g., imagine that I'm explaining a function to you, and I show you the first step, and then want to show the subsequent steps. Normally, I would use fragments to incrementally display items, but it's not working in a code block.
So I have something like this:
```
<pre><code>
def python_function()
<span class="fragment">display this first</span>
<span class="fragment">now display this</span>
</code></pre>
```
But the `<span>` elements are getting syntax-highlighted instead of being read as HTML fragments. It looks something like this: <http://imgur.com/nK3yNIS>
FYI without the `<span>` elements highlight.js reads this correctly as python, but with the `<span>`, the language it detects is coffeescript.
Any ideas on how to have fragments inside a code block (or another way to simulate this) would be greatly appreciated.
|
2014/06/10
|
[
"https://Stackoverflow.com/questions/24151563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2423506/"
] |
I got this to work. I had to change the init for the highlight.js dependency:
```
{ src: 'plugin/highlight/highlight.js', async: true, callback: function() {
[].forEach.call( document.querySelectorAll( '.highlight' ), function( v, i) {
hljs.highlightBlock(v);
});
} },
```
Then I authored the section this way:
```
<section>
<h2>Demo</h2>
<pre class="stretch highlight cpp">
#pragma once
void step_one_setup(ofApp* app)
{
auto orbit_points = app-><span class="fragment zoom-in highlight-current-green">orbitPointsFromTimeInPeriod</span>(
app-><span class="fragment zoom-in highlight-current-green">timeInPeriodFromMilliseconds</span>(
app->updates.
<span class="fragment zoom-in highlight-current-green" data->milliseconds</span>()));
}
</pre>
</section>
```
Results:



|
I would try to use multiple `<pre class="fragment">` elements and manually change `.reveal pre` to `margin: 0 auto;` and `box-shadow: none;` so they will look like one block of code.
OR
Have you tried `<code class="fragment">`? If you use negative vertical margin to remove space between individual fragments and add the same background to `<pre>` as `<code>` has then you get what you want.
Result:


|
24,151,563
|
I've got a presentation running with reveal.js and everything is working. I am writing some sample code and highlight.js is working well within my presentation. But, I want to incrementally display code. E.g., imagine that I'm explaining a function to you, and I show you the first step, and then want to show the subsequent steps. Normally, I would use fragments to incrementally display items, but it's not working in a code block.
So I have something like this:
```
<pre><code>
def python_function()
<span class="fragment">display this first</span>
<span class="fragment">now display this</span>
</code></pre>
```
But the `<span>` elements are getting syntax-highlighted instead of being read as HTML fragments. It looks something like this: <http://imgur.com/nK3yNIS>
FYI without the `<span>` elements highlight.js reads this correctly as python, but with the `<span>`, the language it detects is coffeescript.
Any ideas on how to have fragments inside a code block (or another way to simulate this) would be greatly appreciated.
|
2014/06/10
|
[
"https://Stackoverflow.com/questions/24151563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2423506/"
] |
To make fragments work in code snippets, you can now use the attribute `data-noescape` with the `<code>` tag
Source: [Reveal.js docs](https://revealjs.com/code/#presenting-code)
|
I would try to use multiple `<pre class="fragment">` elements and manually change `.reveal pre` to `margin: 0 auto;` and `box-shadow: none;` so they will look like one block of code.
OR
Have you tried `<code class="fragment">`? If you use negative vertical margin to remove space between individual fragments and add the same background to `<pre>` as `<code>` has then you get what you want.
Result:


|
37,020,181
|
I am trying to pull a `change` of a `gerrit` project into my local repository using `gitpython`. This can be done using the following command,
```
git pull origin refs/changes/25/225/1
```
Here, `refs/changes/25/225/1` is the change that has not been submitted in `gerrit`. I have cloned the `gerrit` project into a directory. Now, I want to `pull` the changes that have not been submitted into this directory. The code below is the usual way to `git pull` into a directory containing a `.git` folder.
```
#gitPull.py
import git
repo = git.Repo('/home/user/gitRepo')
o = repo.remotes.origin
o.pull()
```
Here, `gitRepo` has the `.git` folder (it is the cloned gerrit project). I did a lot of searching, but did not find a way to execute the above-mentioned command `git pull origin refs/changes/25/225/1` using `gitpython`.
|
2016/05/04
|
[
"https://Stackoverflow.com/questions/37020181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6164440/"
] |
It's as simple as giving the change ref as the [`refspec` parameter to the pull method](http://gitpython.readthedocs.io/en/stable/reference.html#git.remote.Remote.pull):
```
import git
repo = git.Repo('/home/user/gitRepo')
o = repo.remotes.origin
o.pull('refs/changes/25/225/1')
```
|
```
import git
import os, os.path
g = git.Git(os.path.expanduser("/home/user/gitRepo"))
result = g.execute(["git", "pull", "origin", "refs/changes/25/225/1"])
```
You could do the same thing using `execute()`.
|
17,209,397
|
This is the code, its quite simple, it just looks like a lot of code:
```
from collections import namedtuple
# make a basic Link class
Link = namedtuple('Link', ['id', 'submitter_id', 'submitted_time', 'votes',
'title', 'url'])
# list of Links to work with
links = [
Link(0, 60398, 1334014208.0, 109,
"C overtakes Java as the No. 1 programming language in the TIOBE index.",
"http://pixelstech.net/article/index.php?id=1333969280"),
Link(1, 60254, 1333962645.0, 891,
"This explains why technical books are all ridiculously thick and overpriced",
"http://prog21.dadgum.com/65.html"),
Link(23, 62945, 1333894106.0, 351,
"Learn Haskell Fast and Hard",
"http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Way/"),
Link(2, 6084, 1333996166.0, 81,
"Announcing Yesod 1.0- a robust, developer friendly, high performance web framework for Haskell",
"http://www.yesodweb.com/blog/2012/04/announcing-yesod-1-0"),
Link(3, 30305, 1333968061.0, 270,
"TIL about the Lisp Curse",
"http://www.winestockwebdesign.com/Essays/Lisp_Curse.html"),
Link(4, 59008, 1334016506.0, 19,
"The Downfall of Imperative Programming. Functional Programming and the Multicore Revolution",
"http://fpcomplete.com/the-downfall-of-imperative-programming/"),
Link(5, 8712, 1333993676.0, 26,
"Open Source - Twitter Stock Market Game - ",
"http://www.twitstreet.com/"),
Link(6, 48626, 1333975127.0, 63,
"First look: Qt 5 makes JavaScript a first-class citizen for app development",
"http://arstechnica.com/business/news/2012/04/an-in-depth-look-at-qt-5-making-javascript-a-first-class-citizen-for-native-cross-platform-developme.ars"),
Link(7, 30172, 1334017294.0, 5,
"Benchmark of Dictionary Structures", "http://lh3lh3.users.sourceforge.net/udb.shtml"),
Link(8, 678, 1334014446.0, 7,
"If It's Not on Prod, It Doesn't Count: The Value of Frequent Releases",
"http://bits.shutterstock.com/?p=165"),
Link(9, 29168, 1334006443.0, 18,
"Language proposal: dave",
"http://davelang.github.com/"),
Link(17, 48626, 1334020271.0, 1,
"LispNYC and EmacsNYC meetup Tuesday Night: Large Scale Development with Elisp ",
"http://www.meetup.com/LispNYC/events/47373722/"),
Link(101, 62443, 1334018620.0, 4,
"research!rsc: Zip Files All The Way Down",
"http://research.swtch.com/zip"),
Link(12, 10262, 1334018169.0, 5,
"The Tyranny of the Diff",
"http://michaelfeathers.typepad.com/michael_feathers_blog/2012/04/the-tyranny-of-the-diff.html"),
Link(13, 20831, 1333996529.0, 14,
"Understanding NIO.2 File Channels in Java 7",
"http://java.dzone.com/articles/understanding-nio2-file"),
Link(15, 62443, 1333900877.0, 1244,
"Why vector icons don't work",
"http://www.pushing-pixels.org/2011/11/04/about-those-vector-icons.html"),
Link(14, 30650, 1334013659.0, 3,
"Python - Getting Data Into Graphite - Code Examples",
"http://coreygoldberg.blogspot.com/2012/04/python-getting-data-into-graphite-code.html"),
Link(16, 15330, 1333985877.0, 9,
"Mozilla: The Web as the Platform and The Kilimanjaro Event",
"https://groups.google.com/forum/?fromgroups#!topic/mozilla.dev.planning/Y9v46wFeejA"),
Link(18, 62443, 1333939389.0, 104,
"github is making me feel stupid(er)",
"http://www.serpentine.com/blog/2012/04/08/github-is-making-me-feel-stupider/"),
Link(19, 6937, 1333949857.0, 39,
"BitC Retrospective: The Issues with Type Classes",
"http://www.bitc-lang.org/pipermail/bitc-dev/2012-April/003315.html"),
Link(20, 51067, 1333974585.0, 14,
"Object Oriented C: Class-like Structures",
"http://cecilsunkure.blogspot.com/2012/04/object-oriented-c-class-like-structures.html"),
Link(10, 23944, 1333943632.0, 188,
"The LOVE game framework version 0.8.0 has been released - with GLSL shader support!",
"https://love2d.org/forums/viewtopic.php?f=3&t=8750"),
Link(22, 39191, 1334005674.0, 11,
"An open letter to language designers: Please kill your sacred cows. (megarant)",
"http://joshondesign.com/2012/03/09/open-letter-language-designers"),
Link(21, 3777, 1333996565.0, 2,
"Developers guide to Garage48 hackatron",
"http://martingryner.com/developers-guide-to-garage48-hackatron/"),
Link(24, 48626, 1333934004.0, 17,
"An R programmer looks at Julia",
"http://www.r-bloggers.com/an-r-programmer-looks-at-julia/")]
def query():
return_list = [link for link in links if link.submitter_id == 62443]
return_list = sorted(return_list, key=lambda var: var.submitted_time)
return return_list
query()
```
So, this is the problem, whenever I use the above code, it works fine, however whenever I do this, in the `query()` function it gives me a problem:
```
return_list = [link for link in links if link.submitter_id == 62443].sort(key=lambda var: var.submitted_time)
```
Now, I do not know why, because to me both of these look identical. When I try to do it using `.sort()`, I get None as my list (I tried to iterate over it), which is rather odd. How do you get the list that you want, using the `.sort()` method in Python?
I am on Windows 8 using Python 2.7.5.
|
2013/06/20
|
[
"https://Stackoverflow.com/questions/17209397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1624921/"
] |
`.sort` is an in-place method that sorts an existing list and returns `None`.
`sorted` is its counterpart that returns a new sorted list.
```
return_list = sorted((link for link in links if link.submitter_id == 62443),
key=lambda var: var.submitted_time)
```
On a side note I would use `operator.attrgetter` to eliminate the `lambda`
```
from operator import attrgetter
return_list = sorted((link for link in links if link.submitter_id == 62443),
key=attrgetter('submitted_time'))
```
|
If you are bent on using `.sort`, you need a temporary reference to the unsorted list
```
return_list = [link for link in links if link.submitter_id == 62443]
return_list.sort(key=lambda var: var.submitted_time)
```
Again, it's nice to use `attrgetter` instead of the lambda.
|
61,546,785
|
I'm fairly experienced with Python as a tool for data science, with no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
parameters = input_file
def __init__(self):
self.fn1
self.fn2
self.fn3
def fn1():
# with nested functions
def fn2
def fn3
...
def fn(n)
```
Each function either generates or acts on some data. Would a way to test be to have some standardized input/output run and check against that? Is there a way to do this within the standard conventions of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
Hope everything is alright with you!
`pytest` is good for simple cases like yours (1 file script).
It's really simple to get started. Just install it using pip:
```
pip install -U pytest
```
Then create a test file (`pytest` will run all files of the form test\_\*.py or \*\_test.py in the current directory and its subdirectories)
```py
# content of test_fn1.py
from your_script import example_function
def test_1():
    assert example_function(1, 2, 3) == 'expected output'
```
You can add as many tests to this file as you want, and as many test files as you desire. To run them, open a terminal in the folder and just execute `pytest`. For organization's sake, create a folder named `tests` with all the test files inside. If you do this, pay attention to how you import your script, since the test files won't be in the same folder as it anymore.
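For the "standardized input/output" idea from the question, a minimal pattern is to pin known inputs to recorded outputs with plain `assert`s; the `heat_flux` function here is a made-up stand-in for one of the simulator's functions:

```python
# Made-up stand-in for one of the simulator's functions
def heat_flux(conductivity, area, temp_gradient):
    # Fourier's law: q = -k * A * dT/dx
    return -conductivity * area * temp_gradient

# Standardized input/output pairs recorded from a known-good run
CASES = [
    ((0.5, 2.0, 10.0), -10.0),
    ((1.0, 1.0, -3.0), 3.0),
]

def test_heat_flux_baseline():
    for args, expected in CASES:
        assert abs(heat_flux(*args) - expected) < 1e-9
```

pytest will collect this automatically from any `test_*.py` file.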
Check [pytest docs](https://docs.pytest.org/en/latest/getting-started.html) for more information.
Hope this helps! Stay safe!
|
No matter how hard you test a program, it is reasonable to assume there will always be bugs left unfound; in other words, it is impossible to check for everything. To start, I recommend that you thoroughly understand how the program works; that way you will know what the expected and important return values are, and what exceptions should be raised when an error occurs. You will have to write the tests yourself, which may be a hassle, and it sounds as if you would rather not, but rigorous testing takes perseverance and determination. As you may know, debugging and fixing code can take a lot longer than writing it in the first place.
Here is the [pytest](https://docs.pytest.org/en/latest/contents.html) documentation; I suggest you map out what you want to test before reading it. You don't need to know how pytest works until you first understand how that script of yours works. Take a pen and paper if necessary and plan out which functions do what and which exceptions should be raised. Good luck!
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
If you are savvy enough with Python but struggle to apply unit testing meaningfully to a set of scripts that take data in and transform it, I would take a different approach, at least if the output is deterministic:
* store sample data somewhere and keep it under source control
* run individual functions against that test data.
+ record the output. this is assumed to be "known good"/baseline.
- one challenge is that you may have to "scrub" out continuously-varying data like timestamps or GUIDs.
- sort, sort, sort. plenty of data comes out unsorted and is fine as long as all the records are correct, but you cannot compare anything meaningfully under those circumstances, so you'll need to sort in a deterministic fashion.
- file diffing typically works on a line-by-line basis, so it's best to "explode" multiple fields in 1 row into 1 field per row, possibly with a label `<row1key>.f1 : <value1>\n<row1key>.f2 : <value2>`
+ at this point, you don't need to validate anything via this mechanism. (traditional unittesting approaches can still be used elsewhere)
* whenever you modify/refactor the code, run the sample data against the relevant functions.
+ compare your new output against your previous baseline. if it doesn't match you have 2 possibilities:
- the new code produces better output, i.e. it fixes something. the new output becomes the baseline; store it.
- the old output is better. fix the new code till you get the same output again.
+ if you store in text/json/yaml form, you can leverage diff-type utilities such as WinMerge, Beyond Compare (never used), diff, opendiff, etc. to assist you in finding points of divergence. In fact, at the start it's often easier to have Python just write the output files without checking for equality and then use file diff tools to compare the last-run vs current-run files.
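A minimal sketch of the scrub / sort / explode steps above, assuming toy record dicts and an in-memory comparison rather than real baseline files:

```python
def normalize(records):
    """Scrub varying fields, sort deterministically, explode one field per line."""
    lines = []
    for rec in sorted(records, key=lambda r: r['id']):   # deterministic order
        for field in sorted(rec):                        # stable field order
            value = '<TS>' if field == 'timestamp' else rec[field]  # scrub
            lines.append('{}.{} : {}'.format(rec['id'], field, value))
    return '\n'.join(lines)

old_run = [{'id': 2, 'temp': 41.5, 'timestamp': '2020-05-01T10:00'},
           {'id': 1, 'temp': 40.0, 'timestamp': '2020-05-01T10:01'}]
new_run = [{'id': 1, 'temp': 40.0, 'timestamp': '2020-05-02T09:30'},
           {'id': 2, 'temp': 41.5, 'timestamp': '2020-05-02T09:31'}]

# identical after normalization, despite ordering and timestamp noise
assert normalize(old_run) == normalize(new_run)
```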
This sounds rather naive. After all, your "tests" don't really know what is going on. But it is a surprisingly powerful method to achieve stability and refactorability against an existing codebase that takes in lots of data and outputs lots of *already acceptable* results. This is especially true when you don't know the codebase well yet. It is less useful against a new codebase.
Note that you can always use regular pytest/unittest techniques to feed your functions more limited, carefully crafted test data that exercises some particular aspect.
I've done this a number of times and it has always served me well. As you get more comfortable with the technique it takes less and less time to adapt to new circumstances and becomes more and more powerful. It is good for batch and data-transformation pipelines, not so much for GUI testing.
I have [an html-oriented toolkit on github, lazy regression tests,](https://github.com/jpeyret/lazy-regression-tests) based on this approach. Probably unsuited for a data pipeline, but you can really write your own.
|
Hope everything is alright with you!
`pytest` is good for simple cases like yours (1 file script).
It's really simple to get started. Just install it using pip:
```
pip install -U pytest
```
Then create a test file (`pytest` will run all files of the form test\_\*.py or \*\_test.py in the current directory and its subdirectories)
```py
# content of test_fn1.py
from your_script import example_function
def test_1():
    assert example_function(1, 2, 3) == 'expected output'
```
You can add as many tests to this file as you want, and as many test files as you desire. To run them, open a terminal in the folder and just execute `pytest`. For organization's sake, create a folder named `tests` with all the test files inside. If you do this, pay attention to how you import your script, since the test files won't be in the same folder as it anymore.
Check [pytest docs](https://docs.pytest.org/en/latest/getting-started.html) for more information.
Hope this helps! Stay safe!
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
Hope everything is alright with you!
`pytest` is good for simple cases like yours (1 file script).
It's really simple to get started. Just install it using pip:
```
pip install -U pytest
```
Then create a test file (`pytest` will run all files of the form test\_\*.py or \*\_test.py in the current directory and its subdirectories)
```py
# content of test_fn1.py
from your_script import example_function
def test_1():
    assert example_function(1, 2, 3) == 'expected output'
```
You can add as many tests to this file as you want, and as many test files as you desire. To run them, open a terminal in the folder and just execute `pytest`. For organization's sake, create a folder named `tests` with all the test files inside. If you do this, pay attention to how you import your script, since the test files won't be in the same folder as it anymore.
Check [pytest docs](https://docs.pytest.org/en/latest/getting-started.html) for more information.
Hope this helps! Stay safe!
|
Summary: One method is to create a stand-alone [doctests](https://docs.python.org/3/library/doctest.html) file from tests run on your existing code through the [command line interpreter](https://docs.python.org/3/tutorial/interpreter.html?highlight=interactive).
Ideally you want a set of tests in place before you refactor the existing code. If you have inherited a [big ball of mud](https://stackoverflow.com/questions/1030388/how-to-overcome-the-anti-pattern-big-ball-of-mud#1030430) (bbom), it can be tricky to create a set of comprehensive unit tests for each function prior to refactoring. Creating doctests can be a faster way to go.
You can quickly extend your doctests file as your understanding of the code develops and you encounter edge cases you need to include in your tests.
Please find an example for a nonsense 'little ball of mud' class (lbom) below.
```
import random
class Lbom():
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key == 'color':
                self.print_color(value)
            if key == 'combo':
                self.combo(value)

    def combo(self, combination):
        print(combination * random.randint(0, 100))

    def print_color(self, color):
        print('color: {}'.format(color))
```
Enter the Python REPL by typing `python` at the command line:
```
>>> from lbom import *
>>> random.seed(1234)
>>> test = Lbom(color='blue', combo = 3.4)
color: blue
336.59999999999997
>>> test.print_color('red')
color: red
>>> random.seed(1010)
>>> test.combo(-1)
-85
>>>
```
Cut and paste the tests into a file. I wrapped these commands in a python module as shown below and saved it as test\_lbom.py in a subdirectory called tests. The advantage of saving it as a .py file, rather than using doctest with a .txt file, is that you can keep the file in a folder separate from the file under test.
test\_lbom.py:
```
def test_lbom():
    '''
    >>> from lbom import *
    >>> random.seed(1234)
    >>> test = Lbom(color='blue', combo = 3.4)
    color: blue
    336.59999999999997
    >>> test.print_color('red')
    color: red
    >>> random.seed(1010)
    >>> test.combo(-1)
    -85
    '''

if __name__ == '__main__':
    import doctest
    doctest.testmod(name='test_lbom', verbose=True)
```
Run this using:
```
python -m tests.test_lbom
```
You will get a verbose output showing all tests are passing.
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
Broadly speaking, you test a function by calling it with arguments and checking if the return value is what you expect it to be. This means you should *know beforehand* how you expect your function to behave.
Here's a test for a simple `add` function:
```
def add(a, b):
    return a + b

def test_add_function():
    a = 1
    b = 2
    assert add(a, b) == 3  # we KNOW that adding 1 + 2 must equal 3
```
If you call `test_add_function` and no `AssertionError` is raised, congrats! Your test passed.
Of course, testing gets messier if you don't have "pure" functions, but rather objects that operate on shared data, like classes. Still, the logic is basically the same: *call the function and check whether the expected result actually happens*:
```
class MyClass:
    def __init__(self, a):
        self.a = a

    def add_one_to_a(self):
        self.a += 1

def test_method_add_one_to_a():
    initial_a = 1
    instance = MyClass(a=1)
    assert instance.a == initial_a  # we expect this to be 1
    instance.add_one_to_a()  # instance.a is now 2
    assert instance.a == initial_a + 1  # we expect this to be 2
```
I suggest reading/watching some tutorials on Python's `unittest` module to get your feet wet, especially getting used to the [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) class, which helps a lot to common test operations, like set up/tear down routines (which allows you to e.g. "refresh" your `Simulator` instance between tests), testing if an error is raised when a function is called with wrong arguments, etc.
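A minimal `unittest.TestCase` sketch of that set-up idea, with a toy `Simulator` stand-in (the real class's constructor and methods will of course differ):

```python
import unittest

class Simulator:
    """Toy stand-in for the real simulator class."""
    def __init__(self):
        self.temperature = 20.0

    def heat(self, delta):
        self.temperature += delta

class TestSimulator(unittest.TestCase):
    def setUp(self):
        # runs before EACH test method: every test gets a fresh instance
        self.sim = Simulator()

    def test_initial_temperature(self):
        self.assertEqual(self.sim.temperature, 20.0)

    def test_heat(self):
        self.sim.heat(5.0)
        self.assertAlmostEqual(self.sim.temperature, 25.0)
```

Run it with `python -m unittest` from the project folder.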
There are of course other strategies when things are more complicated than this (as they often are), like [mocking objects](https://docs.python.org/3/library/unittest.mock.html) which basically allows you to inspect any object/function called or modified by another object/function, check if it was called, what arguments were used, and so on.
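And a minimal sketch of the mocking idea: a `Mock` stands in for a hypothetical expensive solver object, so the test can verify how it was called without actually running it:

```python
from unittest import mock

def run_simulation(solver, steps):
    """Toy driver: calls solver.solve once per time step."""
    for step in range(steps):
        solver.solve(step)

fake_solver = mock.Mock()             # records every call made to it
run_simulation(fake_solver, steps=3)

assert fake_solver.solve.call_count == 3
fake_solver.solve.assert_called_with(2)   # the last call used step index 2
```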
If testing your functions is *still* too complex, this probably means that your code isn't modularized enough, or that your functions try to perform too many things at once.
|
No matter how hard you test a program, it is reasonable to assume there will always be bugs left unfound; in other words, it is impossible to check for everything. To start, I recommend that you thoroughly understand how the program works; that way you will know what the expected and important return values are, and what exceptions should be raised when an error occurs. You will have to write the tests yourself, which may be a hassle, and it sounds as if you would rather not, but rigorous testing takes perseverance and determination. As you may know, debugging and fixing code can take a lot longer than writing it in the first place.
Here is the [pytest](https://docs.pytest.org/en/latest/contents.html) documentation; I suggest you map out what you want to test before reading it. You don't need to know how pytest works until you first understand how that script of yours works. Take a pen and paper if necessary and plan out which functions do what and which exceptions should be raised. Good luck!
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
If you are savvy enough with Python but struggle to apply unit testing meaningfully to a set of scripts that take data in and transform it, I would take a different approach, at least if the output is deterministic:
* store sample data somewhere and keep it under source control
* run individual functions against that test data.
+ record the output. this is assumed to be "known good"/baseline.
- one challenge is that you may have to "scrub" out continuously-varying data like timestamps or GUIDs.
- sort, sort, sort. plenty of data comes out unsorted and is fine as long as all the records are correct, but you cannot compare anything meaningfully under those circumstances, so you'll need to sort in a deterministic fashion.
- file diffing typically works on a line-by-line basis, so it's best to "explode" multiple fields in 1 row into 1 field per row, possibly with a label `<row1key>.f1 : <value1>\n<row1key>.f2 : <value2>`
+ at this point, you don't need to validate anything via this mechanism. (traditional unittesting approaches can still be used elsewhere)
* whenever you modify/refactor the code, run the sample data against the relevant functions.
+ compare your new output against your previous baseline. if it doesn't match you have 2 possibilities:
- the new code produces better output, i.e. it fixes something. the new output becomes the baseline; store it.
- the old output is better. fix the new code till you get the same output again.
+ if you store in text/json/yaml form, you can leverage diff-type utilities such as WinMerge, Beyond Compare (never used), diff, opendiff, etc. to assist you in finding points of divergence. In fact, at the start it's often easier to have Python just write the output files without checking for equality and then use file diff tools to compare the last-run vs current-run files.
This sounds rather naive. After all, your "tests" don't really know what is going on. But it is a surprisingly powerful method to achieve stability and refactorability against an existing codebase that takes in lots of data and outputs lots of *already acceptable* results. This is especially true when you don't know the codebase well yet. It is less useful against a new codebase.
Note that you can always use regular pytest/unittest techniques to feed your functions more limited, carefully crafted test data that exercises some particular aspect.
I've done this a number of times and it has always served me well. As you get more comfortable with the technique it takes less and less time to adapt to new circumstances and becomes more and more powerful. It is good for batch and data-transformation pipelines, not so much for GUI testing.
I have [an html-oriented toolkit on github, lazy regression tests,](https://github.com/jpeyret/lazy-regression-tests) based on this approach. Probably unsuited for a data pipeline, but you can really write your own.
|
Broadly speaking, you test a function by calling it with arguments and checking if the return value is what you expect it to be. This means you should *know beforehand* how you expect your function to behave.
Here's a test for a simple `add` function:
```
def add(a, b):
    return a + b

def test_add_function():
    a = 1
    b = 2
    assert add(a, b) == 3  # we KNOW that adding 1 + 2 must equal 3
```
If you call `test_add_function` and no `AssertionError` is raised, congrats! Your test passed.
Of course, testing gets messier if you don't have "pure" functions, but rather objects that operate on shared data, like classes. Still, the logic is basically the same: *call the function and check whether the expected result actually happens*:
```
class MyClass:
    def __init__(self, a):
        self.a = a

    def add_one_to_a(self):
        self.a += 1

def test_method_add_one_to_a():
    initial_a = 1
    instance = MyClass(a=1)
    assert instance.a == initial_a  # we expect this to be 1
    instance.add_one_to_a()  # instance.a is now 2
    assert instance.a == initial_a + 1  # we expect this to be 2
```
I suggest reading/watching some tutorials on Python's `unittest` module to get your feet wet, especially getting used to the [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) class, which helps a lot to common test operations, like set up/tear down routines (which allows you to e.g. "refresh" your `Simulator` instance between tests), testing if an error is raised when a function is called with wrong arguments, etc.
There are of course other strategies when things are more complicated than this (as they often are), like [mocking objects](https://docs.python.org/3/library/unittest.mock.html) which basically allows you to inspect any object/function called or modified by another object/function, check if it was called, what arguments were used, and so on.
If testing your functions is *still* too complex, this probably means that your code isn't modularized enough, or that your functions try to perform too many things at once.
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
Broadly speaking, you test a function by calling it with arguments and checking if the return value is what you expect it to be. This means you should *know beforehand* how you expect your function to behave.
Here's a test for a simple `add` function:
```
def add(a, b):
    return a + b

def test_add_function():
    a = 1
    b = 2
    assert add(a, b) == 3  # we KNOW that adding 1 + 2 must equal 3
```
If you call `test_add_function` and no `AssertionError` is raised, congrats! Your test passed.
Of course, testing gets messier if you don't have "pure" functions, but rather objects that operate on shared data, like classes. Still, the logic is basically the same: *call the function and check whether the expected result actually happens*:
```
class MyClass:
    def __init__(self, a):
        self.a = a

    def add_one_to_a(self):
        self.a += 1

def test_method_add_one_to_a():
    initial_a = 1
    instance = MyClass(a=1)
    assert instance.a == initial_a  # we expect this to be 1
    instance.add_one_to_a()  # instance.a is now 2
    assert instance.a == initial_a + 1  # we expect this to be 2
```
I suggest reading/watching some tutorials on Python's `unittest` module to get your feet wet, especially getting used to the [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase) class, which helps a lot to common test operations, like set up/tear down routines (which allows you to e.g. "refresh" your `Simulator` instance between tests), testing if an error is raised when a function is called with wrong arguments, etc.
There are of course other strategies when things are more complicated than this (as they often are), like [mocking objects](https://docs.python.org/3/library/unittest.mock.html) which basically allows you to inspect any object/function called or modified by another object/function, check if it was called, what arguments were used, and so on.
If testing your functions is *still* too complex, this probably means that your code isn't modularized enough, or that your functions try to perform too many things at once.
|
Summary: One method is to create a stand-alone [doctests](https://docs.python.org/3/library/doctest.html) file from tests run on your existing code through the [command line interpreter](https://docs.python.org/3/tutorial/interpreter.html?highlight=interactive).
Ideally you want a set of tests in place before you refactor the existing code. If you have inherited a [big ball of mud](https://stackoverflow.com/questions/1030388/how-to-overcome-the-anti-pattern-big-ball-of-mud#1030430) (bbom), it can be tricky to create a set of comprehensive unit tests for each function prior to refactoring. Creating doctests can be a faster way to go.
You can quickly extend your doctests file as your understanding of the code develops and you encounter edge cases you need to include in your tests.
Please find an example for a nonsense 'little ball of mud' class (lbom) below.
```
import random
class Lbom():
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key == 'color':
                self.print_color(value)
            if key == 'combo':
                self.combo(value)

    def combo(self, combination):
        print(combination * random.randint(0, 100))

    def print_color(self, color):
        print('color: {}'.format(color))
```
Enter the Python REPL by typing `python` at the command line:
```
>>> from lbom import *
>>> random.seed(1234)
>>> test = Lbom(color='blue', combo = 3.4)
color: blue
336.59999999999997
>>> test.print_color('red')
color: red
>>> random.seed(1010)
>>> test.combo(-1)
-85
>>>
```
Cut and paste the tests into a file. I wrapped these commands in a python module as shown below and saved it as test\_lbom.py in a subdirectory called tests. The advantage of saving it as a .py file, rather than using doctest with a .txt file, is that you can keep the file in a folder separate from the file under test.
test\_lbom.py:
```
def test_lbom():
    '''
    >>> from lbom import *
    >>> random.seed(1234)
    >>> test = Lbom(color='blue', combo = 3.4)
    color: blue
    336.59999999999997
    >>> test.print_color('red')
    color: red
    >>> random.seed(1010)
    >>> test.combo(-1)
    -85
    '''

if __name__ == '__main__':
    import doctest
    doctest.testmod(name='test_lbom', verbose=True)
```
Run this using:
```
python -m tests.test_lbom
```
You will get a verbose output showing all tests are passing.
|
61,546,785
|
I'm fairly experienced with python as a tool for datascience, no CS background (but eager to learn).
I've inherited a 3K line python script (simulates thermal effects on a part of a machine). It was built *organically* by physics people used to matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately. I figure testing of some form can help there.
My question is how do I even get started in this case of a large existing script? I see pytest and unittest but is that where I should start? The code is roughly structured like this:
```
class Simulator:
    parameters = input_file
    def __init__(self):
        self.fn1
        self.fn2
        self.fn3
    def fn1():
        # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```
Each function either generates or acts on some data. Would a way to test to have some standardized input/output run and check against that? Is there a way to do this within the standard convention of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
If you are savvy enough with Python but struggle to apply unit testing meaningfully to a set of scripts that take data in and transform it, I would take a different approach, at least if the output is deterministic:
* store sample data somewhere and keep it under source control
* run individual functions against that test data.
+ record the output. this is assumed to be "known good"/baseline.
- one challenge is that you may have to "scrub" out continuously-varying data like timestamps or GUIDs.
- sort, sort, sort. plenty of data comes out unsorted and is fine as long as all the records are correct, but you cannot compare anything meaningfully under those circumstances, so you'll need to sort in a deterministic fashion.
- file diffing typically works on a line-by-line basis, so it's best to "explode" multiple fields in 1 row into 1 field per row, possibly with a label `<row1key>.f1 : <value1>\n<row1key>.f2 : <value2>`
+ at this point, you don't need to validate anything via this mechanism. (traditional unittesting approaches can still be used elsewhere)
* whenever you modify/refactor the code, run the sample data against the relevant functions.
+ compare your new output against your previous baseline. if it doesn't match you have 2 possibilities:
- the new code produces better output, i.e. it fixes something. the new output becomes the baseline; store it.
- the old output is better. fix the new code till you get the same output again.
+ if you store in text/json/yaml form, you can leverage diff-type utilities such as WinMerge, Beyond Compare (never used), diff, opendiff, etc. to assist you in finding points of divergence. In fact, at the start it's often easier to have Python just write the output files without checking for equality and then use file diff tools to compare the last-run vs current-run files.
This sounds rather naive. After all, your "tests" don't really know what is going on. But it is a surprisingly powerful method to achieve stability and refactorability against an existing codebase that takes in lots of data and outputs lots of *already acceptable* results. This is especially true when you don't know the codebase well yet. It is less useful against a new codebase.
Note that you can always use regular pytest/unittest techniques to feed your functions more limited, carefully crafted test data that exercises some particular aspect.
I've done this a number of times and it has always served me well. As you get more comfortable with the technique it takes less and less time to adapt to new circumstances and becomes more and more powerful. It is good for batch and data-transformation pipelines, not so much for GUI testing.
I have [an html-oriented toolkit on github, lazy regression tests,](https://github.com/jpeyret/lazy-regression-tests) based on this approach. Probably unsuited for a data pipeline, but you can really write your own.
|
No matter how hard you test a program, it is reasonable to assume there will always be bugs left unfound; in other words, it is impossible to check for everything. To start, I recommend that you thoroughly understand how the program works: then you will know what the expected and important return values are, and what exceptions should be thrown when an error occurs. You will have to write the tests yourself, which may be a hassle, and it sounds as if you don't want to do that, but rigorous testing involves perseverance and determination. As you may know, debugging and fixing code can take a lot longer than writing it in the first place.
Here is the [pytest](https://docs.pytest.org/en/latest/contents.html) documentation; I suggest you map out what you want to test before reading it. You don't need to know how pytest works until you understand how that script of yours works. Take a pen and paper if necessary and plan out which functions do what and which exceptions should be thrown. Good luck!
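For instance, a first pytest test file for such a script might look like the following; the function here is a hypothetical stand-in for one of the script's own functions, just to show the shape of a test:

```python
# test_simulator.py -- run with `pytest` (all names here are illustrative)

def add_offsets(values, offset):
    """Toy stand-in for one of the script's data-transforming functions."""
    return [v + offset for v in values]

def test_add_offsets_basic():
    # a known input/output pair acts as the specification
    assert add_offsets([1, 2, 3], 10) == [11, 12, 13]

def test_add_offsets_empty():
    # edge case: empty input should not raise, just return empty output
    assert add_offsets([], 5) == []
```

pytest will discover any function whose name starts with `test_` and report which assertions fail.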
|
61,546,785
|
I'm fairly experienced with Python as a tool for data science, but have no CS background (though I'm eager to learn).
I've inherited a 3K-line Python script (it simulates thermal effects on part of a machine). It was built *organically* by physics people used to Matlab. I've cleaned it up and modularized it (put it into a class and functions). Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately, and I figure testing of some form can help there.
My question is: how do I even get started in the case of a large existing script? I see pytest and unittest, but is that where I should start? The code is roughly structured like this:
```
class Simulator:
parameters = input_file
def __init__(self):
self.fn1
self.fn2
self.fn3
def fn1():
# with nested functions
def fn2
def fn3
...
def fn(n)
```
Each function either generates or acts on some data. Would a way to test be to have some standardized input/output run and check against that? Is there a way to do this within the standard conventions of testing?
Appreciate any advice or tips,
cheers!
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61546785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559356/"
] |
If you are savvy enough with Python but struggle to adopt unit testing meaningfully to a set of scripts which take data in and transform it, I would take a different approach, at least if the output is deterministic:
* store sample data somewhere and keep it under source control
* run individual functions against that test data.
+ record the output. this is assumed to be "known good"/baseline.
- one challenge is that you may have to "scrub" out continuously-varying data like timestamps or GUIDs.
- sort, sort, sort. plenty of data comes out unsorted, and is good as long as all the records are correct. you cannot compare anything meaningfully under those circumstances, so you'll need to sort in a deterministic fashion.
- file diffing typically works on a line-by-line basis, so it's best to "explode" multiple fields in 1 row into 1 field per row, possibly with a label `<row1key>.f1 : <value1>\n<row1key>.f2 : <value2>`
+ at this point, you don't need to validate anything via this mechanism. (traditional unittesting approaches can still be used elsewhere)
* whenever you modify/refactor the code, run the sample data against the relevant functions.
+ compare your new output against your previous baseline. if it doesn't match you have 2 possibilities:
- the new code produces better output, i.e. it fixes something. the new output becomes the new baseline; store it.
- the old output is better. fix the new code till you have the same output
+ if you store in text/json/yaml form, you can leverage diff-type utilities such as WinMerge, Beyond Compare (never used), diff, opendiff, etc. to assist you in finding points of divergence. In fact, at the start it's often easier to ask Python to just write the output files without checking for equality and then use file diff tools to compare last-run vs current-run files.
This sounds rather naive. After all, your "tests" don't really know what is going on. But it is a surprisingly powerful method to achieve stability and refactorability against an existing codebase that takes in lots of data and outputs lots of *already acceptable* results. This is especially true when you don't know the codebase well yet. It is less useful against a new codebase.
Note that you can always use regular pytest/unittest techniques to feed your functions more limited, carefully crafted test data that exercises some particular aspect.
I've done this a number of times and it has always served me well. As you get more comfortable with the technique it takes less and less time to adapt to new circumstances and becomes more and more powerful. It is good for batch and data-transformation pipelines, not so much for GUI testing.
I have [an html-oriented toolkit on github, lazy regression tests,](https://github.com/jpeyret/lazy-regression-tests) based on this approach. Probably unsuited for a data pipeline, but you can really write your own.
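A minimal sketch of this baseline ("known good") workflow in Python might look like the following; the directory name and the pretend pipeline output are made up for illustration (in practice the baseline directory would live in the repo, under source control):

```python
import json
import tempfile
from pathlib import Path

# A temp dir is used here only so the sketch is self-contained;
# in practice this would be e.g. tests/baselines/ under source control.
BASELINE_DIR = Path(tempfile.mkdtemp()) / "baselines"

def check_against_baseline(name, result):
    """Compare `result` to a stored baseline; create the baseline on first run."""
    BASELINE_DIR.mkdir(parents=True, exist_ok=True)
    baseline_file = BASELINE_DIR / f"{name}.json"
    # Deterministic serialization: sorted keys so diffs are meaningful.
    current = json.dumps(result, indent=2, sort_keys=True)
    if not baseline_file.exists():
        baseline_file.write_text(current)  # first run records the baseline
        return True
    return baseline_file.read_text() == current

# Pretend output of one pipeline step:
output = {"n_samples": 3, "mean_temp": 21.5}
assert check_against_baseline("step1", output)      # first run writes the baseline
assert check_against_baseline("step1", output)      # unchanged output still matches
assert not check_against_baseline("step1", {"n_samples": 4, "mean_temp": 21.5})
```

When a check fails, running a diff tool on the stored and freshly written files shows exactly which fields diverged.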
|
Summary: One method is to create a stand-alone [doctests](https://docs.python.org/3/library/doctest.html) file from tests run on your existing code through the [command line interpreter](https://docs.python.org/3/tutorial/interpreter.html?highlight=interactive).
Ideally you want a set of tests in place before you refactor the existing code. If you have inherited a [big ball of mud](https://stackoverflow.com/questions/1030388/how-to-overcome-the-anti-pattern-big-ball-of-mud#1030430) (bbom), it can be tricky to create a set of comprehensive unit tests for each function prior to refactoring. Creating doctests can be a faster way to go.
You can quickly extend your doctests file as your understanding of the code develops and you encounter edge cases you need to include in your tests.
Please find an example of a nonsense 'little ball of mud' class (lbom) below.
```
import random
class Lbom():
def __init__(self, **kwargs):
for key, value in kwargs.items():
if key == 'color':
self.print_color(value)
if key == 'combo':
self.combo(value)
def combo(self, combination):
print(combination * random.randint(0,100))
def print_color(self, color):
print('color: {}'.format(color))
```
Enter the Python REPL by typing `python` at the command line:
```
>>> from lbom import *
>>> random.seed(1234)
>>> test = Lbom(color='blue', combo = 3.4)
color: blue
336.59999999999997
>>> test.print_color('red')
color: red
>>> random.seed(1010)
>>> test.combo(-1)
-85
>>>
```
Cut and paste the tests into a file. I wrap these commands into a python module as shown below and saved it as test\_lbom.py in a subdirectory called tests. The advantage of saving as a .py file and not just using doctest with a .txt file is that you can place the file in a folder separate from the file under test.
test\_lbom.py:
```
def test_lbom():
'''
>>> from lbom import *
>>> random.seed(1234)
>>> test = Lbom(color='blue', combo = 3.4)
color: blue
336.59999999999997
>>> test.print_color('red')
color: red
>>> random.seed(1010)
>>> test.combo(-1)
-85
>>>
'''
if __name__ == '__main__':
import doctest
doctest.testmod(name='test_lbom', verbose=True)
```
Run this using:
```
python -m tests.test_lbom
```
You will get a verbose output showing all tests are passing.
|
38,228,593
|
I have the following dict in Python:
```
d = {'ABC': ["DEF", "ASD"], 'DEF': ["AFS", "UAP"]}
```
Now I want to delete the value "DEF" but leave it as a key, so it will be:
```
d = {'ABC': [ "ASD"], 'DEF': ["AFS", "UAP"]}
```
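For reference, the transformation being asked for can be sketched in Python like this (it removes the value "DEF" from every list while keeping 'DEF' as a key):

```python
d = {'ABC': ["DEF", "ASD"], 'DEF': ["AFS", "UAP"]}

# Rebuild each value list without "DEF"; the keys themselves are untouched.
for key in d:
    d[key] = [v for v in d[key] if v != "DEF"]

assert d == {'ABC': ["ASD"], 'DEF': ["AFS", "UAP"]}
```

If "DEF" only needs to be removed from one known key's list, `d['ABC'].remove("DEF")` would also work (it removes the first occurrence).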
|
2016/07/06
|
[
"https://Stackoverflow.com/questions/38228593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6171823/"
] |
The solution was simple. I installed the Microsoft.NETCore.UniversalWindowsPlatform package via the Package Manager Console:
`PM> Install-Package Microsoft.NETCore.UniversalWindowsPlatform`
|
If you have the newest version of NETCore.UniversalWindowsPlatform installed and it still isn't working, make sure that you're using the newest NuGet.
We had it working in Visual Studio but failing on the command line. The reason was that we were using an old version of nuget.exe, which caused our F# FAKE script to use MSBuild 14 instead of 15.
|
47,301,581
|
I'm building a genetic algorithm for feature selection in Python. I have extracted features from my data, then divided them into two dataframes, 'train' and 'test'.
How can I multiply the values of each row in the 'population' dataframe (each individual) with the 'train' dataframe?
'train' dataframe:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 13.611829 -7.250185 -11.773605 -18.265003
1 17.899545 -15.503942 -0.741729 -0.053619 -6.734652 4.398419
4 16.432750 -22.490190 -4.611659 -15.247781 -13.941488 -2.433374
5 15.905368 -4.812785 18.291712 3.742221 3.631887 -1.074326
6 16.991823 -15.946251 8.299577 8.057511 8.057510 -1.482333
```
'population' dataframe:
```
0 1 2 3 4 5
0 1 1 0 0 0 1
1 0 1 0 1 0 0
2 0 0 0 0 0 1
3 0 0 1 0 1 1
```
Multiplying each row in 'population' by all rows in 'train' gives the following results:
1) From population row 1:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0 0 0 -18.265003
1 17.899545 -15.503942 0 0 0 4.398419
4 16.432750 -22.490190 0 0 0 -2.433374
5 15.905368 -4.812785 0 0 0 -1.074326
6 16.991823 -15.946251 0 0 0 -1.482333
```
2) From population row 2:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 0 -3.921346 0 -7.250185 0 0
1 0 -15.503942 0 -0.053619 0 0
4 0 -22.490190 0 -15.247781 0 0
5 0 -4.812785 0 3.742221 0 0
6 0 -15.946251 0 8.057511 0 0
```
And so on...
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47301581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4093535/"
] |
If you need a loop (slow for large data):
```
for i, x in population.iterrows():
print (train * x.values)
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003
1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419
4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374
5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326
6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333
feature0 feature1 feature2 feature3 feature4 feature5
0 0.0 -3.921346 0.0 -7.250185 -0.0 -0.0
1 0.0 -15.503942 -0.0 -0.053619 -0.0 0.0
4 0.0 -22.490190 -0.0 -15.247781 -0.0 -0.0
5 0.0 -4.812785 0.0 3.742221 0.0 -0.0
6 0.0 -15.946251 0.0 8.057511 0.0 -0.0
feature0 feature1 feature2 feature3 feature4 feature5
0 0.0 -0.0 0.0 -0.0 -0.0 -18.265003
1 0.0 -0.0 -0.0 -0.0 -0.0 4.398419
4 0.0 -0.0 -0.0 -0.0 -0.0 -2.433374
5 0.0 -0.0 0.0 0.0 0.0 -1.074326
6 0.0 -0.0 0.0 0.0 0.0 -1.482333
feature0 feature1 feature2 feature3 feature4 feature5
0 0.0 -0.0 13.611829 -0.0 -11.773605 -18.265003
1 0.0 -0.0 -0.741729 -0.0 -6.734652 4.398419
4 0.0 -0.0 -4.611659 -0.0 -13.941488 -2.433374
5 0.0 -0.0 18.291712 0.0 3.631887 -1.074326
6 0.0 -0.0 8.299577 0.0 8.057510 -1.482333
```
---
Or each row separately:
```
print (train * population.values[0])
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003
1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419
4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374
5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326
6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333
```
---
Or for MultiIndex DataFrame:
```
d = pd.concat([train * population.values[i] for i in range(population.shape[0])],
keys=population.index.tolist())
print (d)
feature0 feature1 feature2 feature3 feature4 feature5
0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003
1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419
4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374
5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326
6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333
1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000
1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000
4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000
5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000
6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000
2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003
1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419
4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374
5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326
6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333
3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003
1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419
4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374
5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326
6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333
```
And select by [`xs`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html):
```
print (d.xs(0))
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003
1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419
4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374
5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326
6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333
```
|
Once you set the columns of `population` to match `train` you can use `*`:
```
In [11]: population.columns = train.columns
In [12]: train * population.iloc[0]
Out[12]:
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003
1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419
4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374
5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326
6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333
```
---
You can make a MultiIndex (as recommended by @jezrael) very efficiently using `np.tile` and `np.repeat`:
```
In [11]: res = population.iloc[np.repeat(np.arange(len(population)), len(train))]
In [12]: res = res.set_index(np.tile(train.index, len(population)), append=True)
In [13]: res
Out[13]:
feature0 feature1 feature2 feature3 feature4 feature5
0 0 1 1 0 0 0 1
1 1 1 0 0 0 1
4 1 1 0 0 0 1
5 1 1 0 0 0 1
6 1 1 0 0 0 1
1 0 0 1 0 1 0 0
1 0 1 0 1 0 0
4 0 1 0 1 0 0
5 0 1 0 1 0 0
6 0 1 0 1 0 0
2 0 0 0 0 0 0 1
1 0 0 0 0 0 1
4 0 0 0 0 0 1
5 0 0 0 0 0 1
6 0 0 0 0 0 1
3 0 0 0 1 0 1 1
1 0 0 1 0 1 1
4 0 0 1 0 1 1
5 0 0 1 0 1 1
6 0 0 1 0 1 1
In [14]: res.mul(train, level=1)
Out[14]:
feature0 feature1 feature2 feature3 feature4 feature5
0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003
1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419
4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374
5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326
6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333
1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000
1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000
4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000
5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000
6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000
2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003
1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419
4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374
5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326
6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333
3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003
1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419
4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374
5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326
6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333
```
|
47,301,581
|
I'm building a genetic algorithm for feature selection in Python. I have extracted features from my data, then divided them into two dataframes, 'train' and 'test'.
How can I multiply the values of each row in the 'population' dataframe (each individual) with the 'train' dataframe?
'train' dataframe:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 13.611829 -7.250185 -11.773605 -18.265003
1 17.899545 -15.503942 -0.741729 -0.053619 -6.734652 4.398419
4 16.432750 -22.490190 -4.611659 -15.247781 -13.941488 -2.433374
5 15.905368 -4.812785 18.291712 3.742221 3.631887 -1.074326
6 16.991823 -15.946251 8.299577 8.057511 8.057510 -1.482333
```
'population' dataframe:
```
0 1 2 3 4 5
0 1 1 0 0 0 1
1 0 1 0 1 0 0
2 0 0 0 0 0 1
3 0 0 1 0 1 1
```
Multiplying each row in 'population' by all rows in 'train' gives the following results:
1) From population row 1:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0 0 0 -18.265003
1 17.899545 -15.503942 0 0 0 4.398419
4 16.432750 -22.490190 0 0 0 -2.433374
5 15.905368 -4.812785 0 0 0 -1.074326
6 16.991823 -15.946251 0 0 0 -1.482333
```
2) From population row 2:
```
feature0 feature1 feature2 feature3 feature4 feature5
0 0 -3.921346 0 -7.250185 0 0
1 0 -15.503942 0 -0.053619 0 0
4 0 -22.490190 0 -15.247781 0 0
5 0 -4.812785 0 3.742221 0 0
6 0 -15.946251 0 8.057511 0 0
```
And so on...
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47301581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4093535/"
] |
I'd use numpy broadcasting to do it all in one go...
```
train_ = pd.DataFrame(
(train.values * pop.values[:, None]).reshape(-1, train.shape[1]),
pd.MultiIndex.from_product([pop.index, train.index]),
train.columns
)
train_
feature0 feature1 feature2 feature3 feature4 feature5
0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003
1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419
4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374
5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326
6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333
1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000
1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000
4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000
5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000
6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000
2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003
1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419
4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374
5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326
6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333
3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003
1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419
4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374
5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326
6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333
```
You can access just the one corresponding to the ith row of `population` with `train_.loc[i]`
```
train_.loc[3]
feature0 feature1 feature2 feature3 feature4 feature5
0 0.0 -0.0 13.611829 -0.0 -11.773605 -18.265003
1 0.0 -0.0 -0.741729 -0.0 -6.734652 4.398419
4 0.0 -0.0 -4.611659 -0.0 -13.941488 -2.433374
5 0.0 -0.0 18.291712 0.0 3.631887 -1.074326
6 0.0 -0.0 8.299577 0.0 8.057510 -1.482333
```
---
**ROUGH TIME TEST**
*I'm too lazy to do more robust testing*
```
%%timeit
pd.DataFrame(
(train.values * pop.values[:, None]).reshape(-1, train.shape[1]),
pd.MultiIndex.from_product([pop.index, train.index]),
train.columns
)
%%timeit
res = pop.iloc[np.repeat(np.arange(len(pop)), len(train))]
res = res.set_index(np.tile(train.index, len(pop)), append=True).add_prefix('feature')
res.mul(train, level=1)
%%timeit
pd.concat([train * pop.values[i] for i in range(pop.shape[0])],
keys=pop.index.tolist())
571 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.42 ms ± 18 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.7 ms ± 69.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
|
Once you set the columns of `population` to match `train` you can use `*`:
```
In [11]: population.columns = train.columns
In [12]: train * population.iloc[0]
Out[12]:
feature0 feature1 feature2 feature3 feature4 feature5
0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003
1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419
4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374
5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326
6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333
```
---
You can make a MultiIndex (as recommended by @jezrael) very efficiently using `np.tile` and `np.repeat`:
```
In [11]: res = population.iloc[np.repeat(np.arange(len(population)), len(train))]
In [12]: res = res.set_index(np.tile(train.index, len(population)), append=True)
In [13]: res
Out[13]:
feature0 feature1 feature2 feature3 feature4 feature5
0 0 1 1 0 0 0 1
1 1 1 0 0 0 1
4 1 1 0 0 0 1
5 1 1 0 0 0 1
6 1 1 0 0 0 1
1 0 0 1 0 1 0 0
1 0 1 0 1 0 0
4 0 1 0 1 0 0
5 0 1 0 1 0 0
6 0 1 0 1 0 0
2 0 0 0 0 0 0 1
1 0 0 0 0 0 1
4 0 0 0 0 0 1
5 0 0 0 0 0 1
6 0 0 0 0 0 1
3 0 0 0 1 0 1 1
1 0 0 1 0 1 1
4 0 0 1 0 1 1
5 0 0 1 0 1 1
6 0 0 1 0 1 1
In [14]: res.mul(train, level=1)
Out[14]:
feature0 feature1 feature2 feature3 feature4 feature5
0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003
1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419
4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374
5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326
6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333
1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000
1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000
4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000
5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000
6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000
2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003
1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419
4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374
5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326
6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333
3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003
1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419
4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374
5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326
6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333
```
|
45,657,365
|
I just started Python about a week ago and now I am stuck on a question about rolling dice. My friend sent it to me yesterday and I have just no idea how to solve it myself.
>
> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.
>
>
>
So it seems I should make a function with a parameter "n"; when it reaches a certain point, say 10, we can all see how many possibilities there are to get exactly 10 spaces away from the starting point.
I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python.
Please, python masters!
|
2017/08/13
|
[
"https://Stackoverflow.com/questions/45657365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6891099/"
] |
This is one way to compute the exact result, using neither iteration nor recursion:
```
def ways(n):
A = 3**(n+6)
M = A**6 - A**5 - A**4 - A**3 - A**2 - A - 1
return pow(A, n+6, M) % A
for i in range(20):
    print(i, '->', ways(i))
```
The output is in agreement with <https://oeis.org/A001592>
```
0 -> 1
1 -> 1
2 -> 2
3 -> 4
4 -> 8
5 -> 16
6 -> 32
7 -> 63
8 -> 125
9 -> 248
10 -> 492
11 -> 976
12 -> 1936
13 -> 3840
14 -> 7617
15 -> 15109
16 -> 29970
17 -> 59448
18 -> 117920
19 -> 233904
```
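As an independent cross-check (not part of the original answer), a straightforward dynamic-programming count in Python reproduces the same values: `table[i]` counts the ordered roll sequences (compositions into parts 1..6) that sum to `i`.

```python
def ways_dp(n):
    # table[i] = number of ordered roll sequences summing to exactly i
    table = [0] * (n + 1)
    table[0] = 1  # one way to reach 0: roll nothing
    for i in range(1, n + 1):
        # sum over the value of the last roll, skipping rolls that overshoot
        table[i] = sum(table[i - j] for j in range(1, 7) if i - j >= 0)
    return table[n]

assert [ways_dp(i) for i in range(8)] == [1, 1, 2, 4, 8, 16, 32, 63]
assert ways_dp(10) == 492
```

These match the OEIS A001592 values printed above.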
|
Sorry, I'm not an expert in Python, but Java can solve this; you can easily translate it to the language you want:
**First idea using recursion :**
The idea is to create all possible combinations in a GameTree; after calculating the running sum, we increment our counter when it reaches the target.
```
public class GameTree {
public int value;
public GameTree[] childs;
public GameTree(int value) {
this.value = value;
}
public GameTree(int value, GameTree[] childs) {
this.value = value;
this.childs = childs;
}
}
```
For memory reasons I'll prune any subtree whose running sum already exceeds our target ([like the Alpha–beta pruning algorithm](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning))
```
static void generateGameTreeRecursive(String path, GameTree node, int winnerScore, int currentScore) throws InterruptedException {
// Build the path
if(node.value != 0)// We exclude the root node
path += " " + String.valueOf(node.value);
if (winnerScore <= currentScore) {
// Release the current node (prevents Java heap space errors)
node = null;
// Add the winner route
count++;
// Finish for this node
return;
} else{
// Create the children
node.childs = new GameTree[6];
for (int i = 0; i < 6; i++) {
// Generate the possible values to the childs
node.childs[i] = new GameTree(i+1);
// Recursion for each child
generateGameTreeRecursive(path, node.childs[i], winnerScore, currentScore + i + 1);
}
}
}
```
**Second idea using iteration :**
This solution is more elegant; just don't ask me how I found it :)
```
// Returns number of ways to reach score n
static int getCombinaison(int n)
{
int[] table = new int[n+1];
// table[i] will store the number of ways to reach score i
// Base case: there is one way to reach score 0 (roll nothing)
table[0] = 1;
// For each score, sum over the value of the last die roll (1..6)
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= 6 && j <= i; j++)
table[i] += table[i-j];
}
return table[n];
}
```
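Since the question asked for Python, here is a sketch of the same table-based idea translated to Python (summing over the value of the last roll so that ordered roll sequences are counted); the function name just mirrors the Java one:

```python
def get_combination(n):
    table = [0] * (n + 1)
    table[0] = 1  # one way to reach 0: roll nothing
    for i in range(1, n + 1):
        # the last roll can be any face 1..6 that does not overshoot i
        for j in range(1, min(6, i) + 1):
            table[i] += table[i - j]
    return table[n]

assert get_combination(3) == 4    # {1,1,1}, {1,2}, {2,1}, {3}
assert get_combination(10) == 492
```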
|
45,657,365
|
I just started Python about a week ago and now I am stuck on a question about rolling dice. My friend sent it to me yesterday and I have just no idea how to solve it myself.
>
> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.
>
>
>
So it seems I should make a function with a parameter "n"; when it reaches a certain point, say 10, we can all see how many possibilities there are to get exactly 10 spaces away from the starting point.
I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python.
Please, python masters!
|
2017/08/13
|
[
"https://Stackoverflow.com/questions/45657365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6891099/"
] |
This is one way to compute the exact result, using neither iteration nor recursion:
```
def ways(n):
A = 3**(n+6)
M = A**6 - A**5 - A**4 - A**3 - A**2 - A - 1
return pow(A, n+6, M) % A
for i in range(20):
    print(i, '->', ways(i))
```
The output is in agreement with <https://oeis.org/A001592>
```
0 -> 1
1 -> 1
2 -> 2
3 -> 4
4 -> 8
5 -> 16
6 -> 32
7 -> 63
8 -> 125
9 -> 248
10 -> 492
11 -> 976
12 -> 1936
13 -> 3840
14 -> 7617
15 -> 15109
16 -> 29970
17 -> 59448
18 -> 117920
19 -> 233904
```
|
I'm not really good in Python, but I can help you using Ruby ;)
```
def ways(n)
return ways(n-1) + ways(n-2) + ways(n-3) + ways(n-4) + ways(n-5) + ways(n-6) if n >= 6
return ways(4) + ways(3) + ways(2) + ways(1) + ways(0) if n == 5
return ways(3) + ways(2) + ways(1) + ways(0) if n == 4
return ways(2) + ways(1) + ways(0) if n == 3
return ways(1) + ways(0) if n == 2
return ways(0) if n == 1
return 1 if n == 0
end
```
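The same recursion reads naturally in Python too; the memoization decorator here is an addition (not in the Ruby version) that keeps the exponential recursion fast for larger n:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n):
    if n == 0:
        return 1  # exactly one way to cover zero distance: roll nothing
    # sum over the value of the last roll, skipping rolls that overshoot
    return sum(ways(n - k) for k in range(1, 7) if n - k >= 0)

assert ways(5) == 16
assert ways(10) == 492
```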
|
45,657,365
|
I just started Python about a week ago and now I am stuck on a question about rolling dice. My friend sent it to me yesterday and I have just no idea how to solve it myself.
>
> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.
>
>
>
So it seems I should make a function with a parameter "n"; when it reaches a certain point, say 10, we can all see how many possibilities there are to get exactly 10 spaces away from the starting point.
I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python.
Please, python masters!
|
2017/08/13
|
[
"https://Stackoverflow.com/questions/45657365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6891099/"
] |
This is one way to compute the exact result, using neither iteration nor recursion:
```
def ways(n):
A = 3**(n+6)
M = A**6 - A**5 - A**4 - A**3 - A**2 - A - 1
return pow(A, n+6, M) % A
for i in range(20):
    print(i, '->', ways(i))
```
The output is in agreement with <https://oeis.org/A001592>
```
0 -> 1
1 -> 1
2 -> 2
3 -> 4
4 -> 8
5 -> 16
6 -> 32
7 -> 63
8 -> 125
9 -> 248
10 -> 492
11 -> 976
12 -> 1936
13 -> 3840
14 -> 7617
15 -> 15109
16 -> 29970
17 -> 59448
18 -> 117920
19 -> 233904
```
|
After a lot of calculation I found that the solution forms a Hexanacci series. Let me explain the Hexanacci series a little: each element is the sum of the previous 6 elements. I implemented this in Objective-C using a simple for loop, which can easily be converted to any language:
```
-(unsigned long)getPossibleWaysFor:(NSInteger)number
{
    unsigned long ways = 0;
    unsigned long first = 0;
    unsigned long second = 0;
    unsigned long third = 0;
    unsigned long fourth = 0;
    unsigned long fifth = 0;
    unsigned long sixth = 1;
    for (int i = 0; i <= number; i++) {
        ways = first + second + third + fourth + fifth + sixth;
        if (i > 0) {
            first = second;
            second = third;
            third = fourth;
            fourth = fifth;
            fifth = sixth;
            sixth = ways;
        }
        NSLog(@"%d : -> %lu", i, ways);
    }
    return ways;
}
```
// Result:
```
[self getPossibleWaysFor:20];
0 : -> 1
1 : -> 1
2 : -> 2
3 : -> 4
4 : -> 8
5 : -> 16
6 : -> 32
7 : -> 63
8 : -> 125
9 : -> 248
10 : -> 492
11 : -> 976
12 : -> 1936
13 : -> 3840
14 : -> 7617
15 : -> 15109
16 : -> 29970
17 : -> 59448
18 : -> 117920
19 : -> 233904
20 : -> 463968
```
|
45,657,365
|
I just started Python about a week ago and now I am stuck on a question about rolling dice. My friend sent it to me yesterday and I have just no idea how to solve it myself.
>
> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.
>
>
>
So it seems I should make a function with a parameter "n"; when it reaches a certain point, say 10, we can all see how many possibilities there are to get exactly 10 spaces away from the starting point.
I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python.
Please, python masters!
|
2017/08/13
|
[
"https://Stackoverflow.com/questions/45657365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6891099/"
] |
After a lot of calculation I found that the solution forms a Hexanacci series. Let me explain the Hexanacci series a little: each element is the sum of the previous 6 elements. I implemented this in Objective-C using a simple for loop, which can easily be converted to any language:
```
-(unsigned long)getPossibleWaysFor:(NSInteger)number
{
    unsigned long ways = 0;
    unsigned long first = 0;
    unsigned long second = 0;
    unsigned long third = 0;
    unsigned long fourth = 0;
    unsigned long fifth = 0;
    unsigned long sixth = 1;
    for (int i = 0; i <= number; i++) {
        ways = first + second + third + fourth + fifth + sixth;
        if (i > 0) {
            first = second;
            second = third;
            third = fourth;
            fourth = fifth;
            fifth = sixth;
            sixth = ways;
        }
        NSLog(@"%d : -> %lu", i, ways);
    }
    return ways;
}
```
// Result:
```
[self getPossibleWaysFor:20];
0 : -> 1
1 : -> 1
2 : -> 2
3 : -> 4
4 : -> 8
5 : -> 16
6 : -> 32
7 : -> 63
8 : -> 125
9 : -> 248
10 : -> 492
11 : -> 976
12 : -> 1936
13 : -> 3840
14 : -> 7617
15 : -> 15109
16 : -> 29970
17 : -> 59448
18 : -> 117920
19 : -> 233904
20 : -> 463968
```
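For reference, the same sliding-window recurrence translates directly to Python. This is a sketch of the loop above (the function name `count_ways` is mine):

```python
def count_ways(n):
    # ways(k) = number of ordered roll sequences summing exactly to k.
    # Keep only the last six values, exactly like the six variables above.
    window = [0, 0, 0, 0, 0, 1]  # ways(n-6) .. ways(n-1), seeded with ways(0) = 1
    ways = 1                     # ways(0): the empty sequence
    for _ in range(n):
        ways = sum(window)       # Hexanacci step: sum of the previous six terms
        window = window[1:] + [ways]
    return ways

print(count_ways(20))  # -> 463968
```

It reproduces the table printed above, e.g. `count_ways(10)` gives 492.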
|
Sorry, I'm not an expert in Python, but Java can solve this; you can easily translate it into the language you want:
**First idea, using recursion:**
The idea is to generate all possible combinations in a GameTree; once the accumulated sum reaches the requested total, we increment our counter.
```
public class GameTree {
    public int value;
    public GameTree[] childs;

    public GameTree(int value) {
        this.value = value;
    }

    public GameTree(int value, GameTree[] childs) {
        this.value = value;
        this.childs = childs;
    }
}
```
For memory reasons I'll prune any subtree whose sum exceeds our target ([like the Alpha–beta pruning algorithm](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning)):
```
static void generateGameTreeRecursive(String path, GameTree node, int winnerScore, int currentScore) {
    // Build the path
    if (node.value != 0) // We exclude the root node
        path += " " + String.valueOf(node.value);
    if (currentScore == winnerScore) {
        // Free the current node (prevents a Java heap space error)
        node = null;
        // Count the winning route
        count++;
        // Finished for this node
        return;
    } else if (currentScore > winnerScore) {
        // Overshot the target: this path does not land exactly, discard it
        return;
    } else {
        // Create the children
        node.childs = new GameTree[6];
        for (int i = 0; i < 6; i++) {
            // Generate the possible values for the children
            node.childs[i] = new GameTree(i + 1);
            // Recurse into each child
            generateGameTreeRecursive(path, node.childs[i], winnerScore, currentScore + i + 1);
        }
    }
}
```
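The same pruned tree walk fits in a few lines of Python without materialising any nodes. This sketch counts only the paths that land exactly on the target (the function name `count_exact` is mine):

```python
def count_exact(n):
    # Count ordered sequences of die rolls (1..6) summing exactly to n.
    if n == 0:
        return 1   # landed exactly: one completed path
    if n < 0:
        return 0   # overshot the target: discard this path
    # Branch on the next roll, like the six children of each tree node.
    return sum(count_exact(n - roll) for roll in range(1, 7))

print(count_exact(10))  # -> 492
```

Like the tree version, it is exponential in n; fine for small targets, but memoisation or a table is needed for large ones.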
**Second idea, using iteration:**
This solution is more elegant; just don't ask me how I found it :)
```
// Returns the number of ordered roll sequences that reach score n
static int getCombinaison(int n)
{
    int[] table = new int[n+1];
    // table[i] will store the count of ways to reach score i
    // Base case (the empty sequence reaches 0)
    table[0] = 1;
    // Each score i is reachable from i-1 .. i-6 by making one of the
    // six possible rolls last; iterating i outermost keeps order significant
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= 6 && j <= i; j++)
            table[i] += table[i-j];
    }
    return table[n];
}
```
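The same tabulation can be sketched in Python, counting ordered sequences (so 1+2 and 2+1 are distinct, matching the Hexanacci table above); `table_ways` is my name for it:

```python
def table_ways(n):
    # table[i] counts ordered roll sequences summing exactly to i.
    table = [0] * (n + 1)
    table[0] = 1  # the empty sequence reaches 0
    for i in range(1, n + 1):
        # The last roll j takes us from score i-j to score i.
        for j in range(1, min(6, i) + 1):
            table[i] += table[i - j]
    return table[n]

print(table_ways(20))  # -> 463968
```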
|
45,657,365
|
I just started Python about a week ago and now I am stuck on a question about rolling dice. This is a question that my friend sent to me yesterday and I have no idea how to solve it myself.
>
> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.
>
>
>
So it seems I should write a function with a parameter "N", and for a given distance, let's say 10, it would show how many possibilities there are to land exactly 10 spaces from the starting point.
I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python.
Please, Python masters!
|
2017/08/13
|
[
"https://Stackoverflow.com/questions/45657365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6891099/"
] |
After a lot of calculation I found that the solution forms a Hexanacci series. Let me explain the Hexanacci series a little: each element is the sum of the previous 6 elements. I implemented this in Objective-C with a simple for loop, which can easily be converted to any language:
```
-(unsigned long)getPossibleWaysFor:(NSInteger)number
{
    unsigned long ways = 0;
    unsigned long first = 0;
    unsigned long second = 0;
    unsigned long third = 0;
    unsigned long fourth = 0;
    unsigned long fifth = 0;
    unsigned long sixth = 1;
    for (int i = 0; i <= number; i++) {
        ways = first + second + third + fourth + fifth + sixth;
        if (i > 0) {
            first = second;
            second = third;
            third = fourth;
            fourth = fifth;
            fifth = sixth;
            sixth = ways;
        }
        NSLog(@"%d : -> %lu", i, ways);
    }
    return ways;
}
```
// Result:
```
[self getPossibleWaysFor:20];
0 : -> 1
1 : -> 1
2 : -> 2
3 : -> 4
4 : -> 8
5 : -> 16
6 : -> 32
7 : -> 63
8 : -> 125
9 : -> 248
10 : -> 492
11 : -> 976
12 : -> 1936
13 : -> 3840
14 : -> 7617
15 : -> 15109
16 : -> 29970
17 : -> 59448
18 : -> 117920
19 : -> 233904
20 : -> 463968
```
|
I'm not really good at Python, but I can help you using Ruby ;)
```
def ways(n)
return ways(n-1) + ways(n-2) + ways(n-3) + ways(n-4) + ways(n-5) + ways(n-6) if n >= 6
return ways(4) + ways(3) + ways(2) + ways(1) + ways(0) if n == 5
return ways(3) + ways(2) + ways(1) + ways(0) if n == 4
return ways(2) + ways(1) + ways(0) if n == 3
return ways(1) + ways(0) if n == 2
return ways(0) if n == 1
return 1 if n == 0
end
```
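The same recurrence works in Python; with memoisation it also stays fast for large n, since the plain recursion above is exponential. A sketch using `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n):
    # One way to cover 0 spaces: roll nothing.
    if n == 0:
        return 1
    # Sum over the possible last rolls that do not overshoot.
    return sum(ways(n - roll) for roll in range(1, 7) if roll <= n)

print(ways(10))  # -> 492
```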
|
64,507,361
|
I am trying to write a SQL query that helps me find the unique set of "Numbers" that show up in a specific column. For example, in a `select *` query, the column I want can look like this:
```
Num_Option
9000
9001
9000,9001,9002
8080
8080,8000,8553
```
I then have another field of "date\_available" which is a date/time.
Basically, what I want is something where I can group by "date\_available" while combining all the Num\_Options on that date, so something like this:
```
Num_Option date_available
9000,9001,9002,8080 10/22/2020
9000,9002,8080,8000,8553 10/23/2020
```
I am struggling to figure this out. I have gotten to the point of possibly using a Python script and matplotlib instead... but I am hoping there is a SQL way to handle this as well.
|
2020/10/23
|
[
"https://Stackoverflow.com/questions/64507361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973837/"
] |
In Postgres, you can use `regexp_split_to_table()` in a lateral join to turn the csv elements to rows, then `string_agg()` to aggregate by date:
```
select string_agg(distinct x.num, ',') num_option, t.date_available
from mytable t
cross join lateral regexp_split_to_table(t.num_option, ',') x(num)
group by date_available
```
Of course, this assumes that you want to avoid duplicate nums on the same date (otherwise there is no need to split; you can aggregate directly).
|
You may just be able to use `string_agg()`:
```
select date_available, string_agg(num_option, ',')
from t
group by date_available;
```
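If you do end up falling back to Python, here is a minimal sketch of the same split-dedup-group step (the sample values are taken from the question; in practice the rows would come from your database driver):

```python
from collections import defaultdict

rows = [
    ("9000", "10/22/2020"),
    ("9001", "10/22/2020"),
    ("9000,9001,9002", "10/22/2020"),
    ("8080", "10/22/2020"),
    ("8080,8000,8553", "10/23/2020"),
]

# Collect the unique numbers seen on each date.
by_date = defaultdict(set)
for num_option, date_available in rows:
    by_date[date_available].update(num_option.split(","))

for date_available, nums in sorted(by_date.items()):
    print(date_available, ",".join(sorted(nums)))
```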
|
64,507,361
|
I am trying to write a SQL query that helps me find the unique set of "Numbers" that show up in a specific column. For example, in a `select *` query, the column I want can look like this:
```
Num_Option
9000
9001
9000,9001,9002
8080
8080,8000,8553
```
I then have another field of "date\_available" which is a date/time.
Basically, what I want is something where I can group by "date\_available" while combining all the Num\_Options on that date, so something like this:
```
Num_Option date_available
9000,9001,9002,8080 10/22/2020
9000,9002,8080,8000,8553 10/23/2020
```
I am struggling to figure this out. I have gotten to the point of possibly using a Python script and matplotlib instead... but I am hoping there is a SQL way to handle this as well.
|
2020/10/23
|
[
"https://Stackoverflow.com/questions/64507361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973837/"
] |
In Postgres, you can use `regexp_split_to_table()` in a lateral join to turn the csv elements to rows, then `string_agg()` to aggregate by date:
```
select string_agg(distinct x.num, ',') num_option, t.date_available
from mytable t
cross join lateral regexp_split_to_table(t.num_option, ',') x(num)
group by date_available
```
Of course, this assumes that you want to avoid duplicate nums on the same date (otherwise there is no need to split; you can aggregate directly).
|
First you have to split the strings into multiple rows with something like `split_part('9000,9001,9002',',',1)` etc. (use `UNION ALL` to append the 2nd number etc.), then group them back by availability date with `string_agg`.
If you don't want to hardcode the `split_part` calls, there is an answer here on how to dynamically split strings in Redshift; look for it.
|
8,560,320
|
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```
The reason I stumbled upon this:
For my unit-testing I created lists of valid and invalid example values for each of my types. (By 'my types' I mean they are not 100% equal to the Python types.)
So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the other hand, fail if they are not.
That does not work so well now:
```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
... if value in valid_values:
... print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```
Of course I disagree with the last two 'valid' values.
Does this mean I really have to iterate through my valid\_values and compare the type?
|
2011/12/19
|
[
"https://Stackoverflow.com/questions/8560320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/532373/"
] |
The problem is not missing type checking, but the fact that in Python `bool` is a subclass of `int`. Try this:
```
>>> False == 0
True
>>> isinstance(False, int)
True
```
|
According to the [documentation](http://docs.python.org/reference/datamodel.html#object.__contains__), `__contains__` is done by iterating over the collection and testing elements with `==`. Hence the actual problem is caused by the fact that `False == 0` is `True`.
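One way around this is to make the membership test type-strict yourself (a sketch; the helper name `strict_in` is mine):

```python
def strict_in(value, values):
    # `in` compares with ==, which treats False and 0 as equal;
    # also requiring the exact same type keeps bools out of int lists.
    return any(type(v) is type(value) and v == value for v in values)

valid_values = [-1, 0, 1, 2, 3]
print(strict_in(0, valid_values))      # True
print(strict_in(False, valid_values))  # False
```

Note that `type(v) is type(value)` deliberately rejects subclasses, which is exactly what you want here.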
|
8,560,320
|
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```
The reason I stumbled upon this:
For my unit-testing I created lists of valid and invalid example values for each of my types. (By 'my types' I mean they are not 100% equal to the Python types.)
So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the other hand, fail if they are not.
That does not work so well now:
```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
... if value in valid_values:
... print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```
Of course I disagree with the last two 'valid' values.
Does this mean I really have to iterate through my valid\_values and compare the type?
|
2011/12/19
|
[
"https://Stackoverflow.com/questions/8560320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/532373/"
] |
The problem is not missing type checking, but the fact that in Python `bool` is a subclass of `int`. Try this:
```
>>> False == 0
True
>>> isinstance(False, int)
True
```
|
Since `True == 1` and `False == 0` it's hard to differentiate between the two.
One possible but ugly approach (which is also not guaranteed to work in all Python implementations but should be OK in CPython):
```
>>> for value in valid_values + invalid_values:
... if value in valid_values and not any(v is value for v in invalid_values):
... print ('valid value:', value)
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
```
|
8,560,320
|
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```
The reason I stumbled upon this:
For my unit-testing I created lists of valid and invalid example values for each of my types. (By 'my types' I mean they are not 100% equal to the Python types.)
So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the other hand, fail if they are not.
That does not work so well now:
```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
... if value in valid_values:
... print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```
Of course I disagree with the last two 'valid' values.
Does this mean I really have to iterate through my valid\_values and compare the type?
|
2011/12/19
|
[
"https://Stackoverflow.com/questions/8560320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/532373/"
] |
The problem is not missing type checking, but the fact that in Python `bool` is a subclass of `int`. Try this:
```
>>> False == 0
True
>>> isinstance(False, int)
True
```
|
As others have written, the "in" code does not do what you want it to do. You'll need something else.
If you really want a type check (where the check is for exactly the same type) then you can include the type in the list:
```
>>> valid_values = [(int, i) for i in [-1, 0, 1, 2, 3]]
>>> invalid_values = [True, False, "foo"]
>>> for value in [v[1] for v in valid_values] + invalid_values:
... if (type(value), value) in valid_values:
... print value, "is valid"
... else:
... print value, "is invalid"
...
-1 is valid
0 is valid
1 is valid
2 is valid
3 is valid
True is invalid
False is invalid
foo is invalid
>>>
```
Handling subtypes is a bit more difficult, and will depend on what you want to do.
|
8,560,320
|
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```
The reason I stumbled upon this:
For my unit-testing I created lists of valid and invalid example values for each of my types. (By 'my types' I mean they are not 100% equal to the Python types.)
So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the other hand, fail if they are not.
That does not work so well now:
```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
... if value in valid_values:
... print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```
Of course I disagree with the last two 'valid' values.
Does this mean I really have to iterate through my valid\_values and compare the type?
|
2011/12/19
|
[
"https://Stackoverflow.com/questions/8560320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/532373/"
] |
According to the [documentation](http://docs.python.org/reference/datamodel.html#object.__contains__), `__contains__` is done by iterating over the collection and testing elements with `==`. Hence the actual problem is caused by the fact that `False == 0` is `True`.
|
Since `True == 1` and `False == 0` it's hard to differentiate between the two.
One possible but ugly approach (which is also not guaranteed to work in all Python implementations but should be OK in CPython):
```
>>> for value in valid_values + invalid_values:
... if value in valid_values and not any(v is value for v in invalid_values):
... print ('valid value:', value)
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
```
|
8,560,320
|
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```
The reason I stumbled upon this:
For my unit-testing I created lists of valid and invalid example values for each of my types. (By 'my types' I mean they are not 100% equal to the Python types.)
So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the other hand, fail if they are not.
That does not work so well now:
```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
... if value in valid_values:
... print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```
Of course I disagree with the last two 'valid' values.
Does this mean I really have to iterate through my valid\_values and compare the type?
|
2011/12/19
|
[
"https://Stackoverflow.com/questions/8560320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/532373/"
] |
As others have written, the "in" code does not do what you want it to do. You'll need something else.
If you really want a type check (where the check is for exactly the same type) then you can include the type in the list:
```
>>> valid_values = [(int, i) for i in [-1, 0, 1, 2, 3]]
>>> invalid_values = [True, False, "foo"]
>>> for value in [v[1] for v in valid_values] + invalid_values:
... if (type(value), value) in valid_values:
... print value, "is valid"
... else:
... print value, "is invalid"
...
-1 is valid
0 is valid
1 is valid
2 is valid
3 is valid
True is invalid
False is invalid
foo is invalid
>>>
```
Handling subtypes is a bit more difficult, and will depend on what you want to do.
|
Since `True == 1` and `False == 0` it's hard to differentiate between the two.
One possible but ugly approach (which is also not guaranteed to work in all Python implementations but should be OK in CPython):
```
>>> for value in valid_values + invalid_values:
... if value in valid_values and not any(v is value for v in invalid_values):
... print ('valid value:', value)
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
```
|
15,371,643
|
I am receiving this error when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch) in the auth.log file,
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual); the kernel normally is:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When trying the python script directly it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Protected fields can be inherited, but cannot be accessed from outside the class like `echo $q->protectedQ;`.
Private fields can be neither displayed from outside nor inherited.
|
Protected functions make your class more flexible.
Think of a class that somewhere has to load some data. It has a default implementation, which reads the data from a file. If you want to use the same class but change the way it gets its data, you can create a subclass and override the getData() function.
|
15,371,643
|
I am receiving this error when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch) in the auth.log file,
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual); the kernel normally is:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When trying the python script directly it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Think of your public properties and methods as an API you expose to the outside world, and private/protected ones as the "inner workings" of your class that the outside world not only shouldn't be concerned with, but shouldn't be able to mess with either.
Here comes the obligatory bad car analogy:
The methods you'd expose in a `Car` class could be `driveForward()` and `driveBackwards()`. Both of them would make use of a method called `transmitTheDriveToTheWheels()` but it shouldn't concern the car's users and shouldn't be accessed by them, so you'd "hide" it by making it private.
Your car would have an `engine` property. You definitely don't want someone to be able to replace the engine with a cute little kitty by going `$car->engine = $kitty;` so you'd make the engine private as well.
Finally, your car would have a `mileage` property. You want the user to be able to read the mileage but not to be able to modify it. So you make the `mileage` private and expose a public `getMileage()` method.
Now whether you want to use private or protected to encapsulate the "inner" stuff of your class, depends on whether you expect the class to be extended or not.
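For comparison, the read-only-mileage idea can be sketched in Python as well (class and attribute names here are mine, not from the answer):

```python
class Car:
    def __init__(self):
        self._mileage = 0  # "protected" by convention: leading underscore

    def drive_forward(self, distance):
        self._mileage += distance

    @property
    def mileage(self):
        # Readable from outside, but there is no setter,
        # so `car.mileage = 0` raises AttributeError.
        return self._mileage

car = Car()
car.drive_forward(42)
print(car.mileage)  # -> 42
```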
|
Protected functions make your class more flexible.
Think of a class that somewhere has to load some data. It has a default implementation, which reads the data from a file. If you want to use the same class but change the way it gets its data, you can create a subclass and override the getData() function.
|
15,371,643
|
I am receiving this error when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch) in the auth.log file,
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual); the kernel normally is:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When trying the python script directly it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Protected fields can be inherited, but cannot be accessed from outside the class like `echo $q->protectedQ;`.
Private fields can be neither displayed from outside nor inherited.
|
The only real difference from public methods is, as you've discovered, that protected functions can only be accessed from within the class or another class in the inheritance tree.
You want to declare functions as protected when they are not meant to be used from outside the class.
This is a language feature purely to make your code more understandable (easier to read) and less susceptible to bugs and misuse. There is nothing (in terms of functionality) that you can't accomplish using only public methods.
It is very useful if you're sharing your code with others or if it's some kind of library.
Specific to PHP there is a particular useful case when using PHP's magic getter and setter functions (<http://www.php.net/manual/en/language.oop5.overloading.php#object.set>).
```
class Demo {
    public $a = '1';
    protected $b = '2';

    public function __get($name) {
        // Only called for properties that are inaccessible from outside
        return $this->{$name} . ' (protected)';
    }
}

$obj = new Demo();
echo $obj->a; // 1
echo $obj->b; // 2 (protected)
```
As the example shows, you can "protect" your variables and catch reads with the magic function.
It's useful if you've published a class with a variable, and you later decide to do some preprocessing internally in the class before returning the variable.
|
15,371,643
|
I am receiving this error when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch) in the auth.log file,
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual); the kernel normally is:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When trying the python script directly it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Protected fields can be inherited, but cannot be accessed from outside the class like `echo $q->protectedQ;`.
Private fields can be neither displayed from outside nor inherited.
|
You use protected/private methods to contain functionality to make your code easier to read and prevent repeating the same functionality in your public methods.
Making properties protected protects the object from being modified from outside unless you provide access via a setter.
You get more control over how your object is able to be used.
|
15,371,643
|
I am receiving this error when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch) in the auth.log file,
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual); the kernel normally is:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When running the Python script directly, it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Think of your public properties and methods as an API you expose to the outside world, and private/protected ones as "inner workings" of your class that the outside world not only shouldn't be concerned with but shouldn't be able to mess with either.
Here comes the obligatory bad car analogy:
The methods you'd expose in a `Car` class could be `driveForward()` and `driveBackwards()`. Both of them would make use of a method called `transmitTheDriveToTheWheels()` but it shouldn't concern the car's users and shouldn't be accessed by them, so you'd "hide" it by making it private.
Your car would have an `engine` property. You definitely don't want someone to be able to replace the engine with a cute little kitty by going `$car->engine = $kitty;` so you'd make the engine private as well.
Finally, your car would have a `mileage` property. You want the user to be able to read the mileage but not to be able to modify it. So you make the `mileage` private and expose a public `getMileage()` method.
Now whether you want to use private or protected to encapsulate the "inner" stuff of your class, depends on whether you expect the class to be extended or not.
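The analogy can be sketched in Python (all names here are hypothetical); double-underscore names are Python's closest equivalent to private members:

```python
class Car:
    def __init__(self):
        self.__engine = "V6"   # "private": name-mangled, not meant to be replaced
        self.__mileage = 0

    def drive_forward(self, km):
        self.__transmit_drive(km)

    def drive_backwards(self, km):
        self.__transmit_drive(km)

    def __transmit_drive(self, km):
        # inner working the car's users never call directly
        self.__mileage += km

    def get_mileage(self):
        # read-only view of the mileage
        return self.__mileage

car = Car()
car.drive_forward(10)
print(car.get_mileage())  # 10
```

From outside the class, `car.__mileage` raises `AttributeError`, so nobody can tamper with the odometer.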
|
Protected fields can be inherited, but cannot be shown like `echo $q->protectedQ;`
Private fields can be neither displayed nor inherited.
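That PHP behaviour maps roughly onto Python conventions (a sketch with hypothetical names): a single leading underscore marks a "protected" field by convention, while double underscores are name-mangled per class, so subclasses cannot reach them by the short name:

```python
class Base:
    def __init__(self):
        self._protected_q = 'protected'   # convention: "protected"
        self.__private_q = 'private'      # name-mangled to _Base__private_q

class Child(Base):
    def inherited(self):
        # self.__private_q here would look for _Child__private_q and fail
        return self._protected_q          # the "protected" field is inherited

c = Child()
print(c.inherited())  # protected
```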
|
15,371,643
|
I am receiving this error in the auth.log file when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch):
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual). The kernel is normally:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When running the Python script directly, it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Think of your public properties and methods as an API you expose to the outside world, and private/protected ones as "inner workings" of your class that the outside world not only shouldn't be concerned with but shouldn't be able to mess with either.
Here comes the obligatory bad car analogy:
The methods you'd expose in a `Car` class could be `driveForward()` and `driveBackwards()`. Both of them would make use of a method called `transmitTheDriveToTheWheels()` but it shouldn't concern the car's users and shouldn't be accessed by them, so you'd "hide" it by making it private.
Your car would have an `engine` property. You definitely don't want someone to be able to replace the engine with a cute little kitty by going `$car->engine = $kitty;` so you'd make the engine private as well.
Finally, your car would have a `mileage` property. You want the user to be able to read the mileage but not to be able to modify it. So you make the `mileage` private and expose a public `getMileage()` method.
Now whether you want to use private or protected to encapsulate the "inner" stuff of your class, depends on whether you expect the class to be extended or not.
|
The only real difference from public methods is, as you've discovered, that protected functions can only be accessed from within the class or another class in the inheritance tree.
You want to declare functions as protected when they are not meant to be used from outside the class.
This is a language feature purely to make your code more understandable (easier to read) and less susceptible to bugs and misuse. There is nothing (in terms of functionality) that you can't accomplish using only public methods.
It is very useful if you're sharing your code with others or if it's some kind of library.
Specific to PHP, there is a particularly useful case involving PHP's magic getter and setter functions (<http://www.php.net/manual/en/language.oop5.overloading.php#object.set>).
```
class Example
{
    public $a = '1';
    protected $b = '2';

    public function __get($name) {
        // __get() is invoked only for inaccessible (e.g. protected) properties
        return $this->{$name} . ' (protected)';
    }
}

$obj = new Example();
echo $obj->a; // 1
echo $obj->b; // 2 (protected)
```
As per the example, you can "protect" your variables and catch reads with the magic function.
It's useful if you've published a class with a variable, and you later decide to do some preprocessing internally in the class before returning the variable.
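For comparison, a rough Python analogue of the same trick (names here are illustrative) uses `__getattr__`, which, like PHP's `__get()`, only fires when normal attribute lookup fails, so you can retrofit preprocessing behind an attribute you already published:

```python
class Example:
    def __init__(self):
        self.a = '1'    # public: found by normal lookup, __getattr__ never runs
        self._b = '2'   # "protected" by convention

    def __getattr__(self, name):
        # called only when normal attribute lookup fails (e.g. for 'b')
        internal = '_' + name
        if internal in self.__dict__:
            return self.__dict__[internal] + ' (protected)'
        raise AttributeError(name)

obj = Example()
print(obj.a)  # 1
print(obj.b)  # 2 (protected)
```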
|
15,371,643
|
I am receiving this error in the auth.log file when authenticating users for vsftpd with pam\_python on Ubuntu (13.04 development branch):
```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
and then vsftpd says the password is wrong when attempting to connect.
Here is the full section from the auth.log file:
```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```
Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give an error). Here is a healthy output from another server with the exact same configuration:
```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```
The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual). The kernel is normally:
```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```
Whereas the kernel on the server where I can't get pam to work is:
```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message.
When running the Python script directly, it outputs success too:
```
$ pam_auth.py dev test
success
```
What could be causing this? And how can I fix it/get around it?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15371643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112652/"
] |
Think of your public properties and methods as an API you expose to the outside world, and private/protected ones as "inner workings" of your class that the outside world not only shouldn't be concerned with but shouldn't be able to mess with either.
Here comes the obligatory bad car analogy:
The methods you'd expose in a `Car` class could be `driveForward()` and `driveBackwards()`. Both of them would make use of a method called `transmitTheDriveToTheWheels()` but it shouldn't concern the car's users and shouldn't be accessed by them, so you'd "hide" it by making it private.
Your car would have an `engine` property. You definitely don't want someone to be able to replace the engine with a cute little kitty by going `$car->engine = $kitty;` so you'd make the engine private as well.
Finally, your car would have a `mileage` property. You want the user to be able to read the mileage but not to be able to modify it. So you make the `mileage` private and expose a public `getMileage()` method.
Now whether you want to use private or protected to encapsulate the "inner" stuff of your class, depends on whether you expect the class to be extended or not.
|
You use protected/private methods to contain functionality, making your code easier to read and preventing you from repeating the same logic across your public methods.
Making properties protected protects the object from being modified from outside unless you provide access via a setter.
You get more control over how your object can be used.
|
67,728,723
|
There is a really old thread on stackoverflow here [Getting 'DatabaseOperations' object has no attribute 'geo\_db\_type' error when doing a syncdb](https://stackoverflow.com/questions/12538510/getting-databaseoperations-object-has-no-attribute-geo-db-type-error-when-do)
but the difference from their issue is that my containers already have PostGIS and Postgres installed. Specifically I used QGIS and the image is like so
```
db:
image: kartoza/postgis:13.0
volumes:
- postgis-data:/var/lib/postgresql
```
So locally I have two docker images - one is web and the other is the kartoza/postgis
I also have this as well in the settings.py file
```
import dj_database_url
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
```
which should support the GIS data. All my GIS and geolocation packages are installed with no issues, but I am getting the above error when I run `heroku run python manage.py migrate`.
The website runs with very limited functionality, as the geo variables are needed to get past the landing page.
The steps I have taken to deploy is
```
heroku create appname
heroku stack:set container -a appname
heroku addons:create heroku-postgresql:hobby-dev -a appname
heroku git:remote -a appname
git push heroku main
```
**EDIT** The db url on heroku is `postgres://foobar:3242q34rq2rq32rf3q2rfq2q2r3vq23rvq23vr@er3-13-234-91-69.compute-`
I have also run the below command, and it shows that the db now accepts GIS, but I still get the error
```
$ heroku pg:psql
create extension postgis;
```
|
2021/05/27
|
[
"https://Stackoverflow.com/questions/67728723",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6046858/"
] |
>
> Process exited ... with return value 3221225620
>
>
>
3221225620 is the numeric code for a `STATUS_INTEGER_DIVIDE_BY_ZERO` (0xC0000094) exception, which means your code is crashing before it has a chance to print out its messages.
On this line:
```cpp
avg=sum/n;
```
`n` is 0 at this point, because the `while` loop above it had decremented `n` on each iteration until `n` reached 0.
To avoid that, change the `while` loop to not modify `n` anymore:
```cpp
while (i <= n)
{
...
i++;
}
```
Or, use a `for` loop instead:
```cpp
for (int i = 1; i <= n; ++i)
{
...
}
```
|
You're dividing by 0: you modify `n` until it reaches 0 and then use it as the divisor. It's better to iterate with another variable instead of reducing `n`. So the problem isn't that the message isn't printing; the program dies before reaching that point.
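The same fix, sketched in Python for clarity (the original question is C++; this just illustrates counting with a separate variable so `n` survives until the division):

```python
def average(values):
    n = len(values)
    total = 0.0
    i = 0
    while i < n:       # i does the counting; n is left untouched
        total += values[i]
        i += 1
    return total / n   # n still holds the element count, so no divide-by-zero

print(average([2, 4, 6]))  # 4.0
```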
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Try
```
try:
thread=reqthread()
thread.daemon=True
thread.start()
while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
Without the call to `time.sleep`, the main process is jumping out of the `try...except` block too early, so the `KeyboardInterrupt` is not caught. My first thought was to use `thread.join`, but that seems to block the main process (ignoring KeyboardInterrupt) until the `thread` is finished.
`thread.daemon=True` causes the thread to terminate when the main process ends.
|
To summarize the changes recommended in [the](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment16625542_3788243) [comments](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment28203538_3788243), the following works well for me:
```
try:
thread = reqthread()
thread.start()
while thread.isAlive():
thread.join(1) # not sure if there is an appreciable cost to this.
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
sys.exit()
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Try
```
try:
thread=reqthread()
thread.daemon=True
thread.start()
while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
Without the call to `time.sleep`, the main process is jumping out of the `try...except` block too early, so the `KeyboardInterrupt` is not caught. My first thought was to use `thread.join`, but that seems to block the main process (ignoring KeyboardInterrupt) until the `thread` is finished.
`thread.daemon=True` causes the thread to terminate when the main process ends.
|
My (hacky) solution is to monkey-patch `Thread.join()` like this:
```
def initThreadJoinHack():
import threading, thread
mainThread = threading.currentThread()
assert isinstance(mainThread, threading._MainThread)
mainThreadId = thread.get_ident()
join_orig = threading.Thread.join
def join_hacked(threadObj, timeout=None):
"""
:type threadObj: threading.Thread
:type timeout: float|None
"""
if timeout is None and thread.get_ident() == mainThreadId:
# This is a HACK for Thread.join() if we are in the main thread.
# In that case, a Thread.join(timeout=None) would hang and even not respond to signals
# because signals will get delivered to other threads and Python would forward
# them for delayed handling to the main thread which hangs.
# See CPython signalmodule.c.
# Currently the best solution I can think of:
while threadObj.isAlive():
join_orig(threadObj, timeout=0.1)
else:
# In all other cases, we can use the original.
join_orig(threadObj, timeout=timeout)
threading.Thread.join = join_hacked
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Try
```
try:
thread=reqthread()
thread.daemon=True
thread.start()
while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
Without the call to `time.sleep`, the main process is jumping out of the `try...except` block too early, so the `KeyboardInterrupt` is not caught. My first thought was to use `thread.join`, but that seems to block the main process (ignoring KeyboardInterrupt) until the `thread` is finished.
`thread.daemon=True` causes the thread to terminate when the main process ends.
|
Slight modification of ubuntu's solution.
Removing `thread.daemon = True` as suggested by Eric and replacing the sleeping loop with `signal.pause()`:
```
import signal
try:
thread=reqthread()
thread.start()
signal.pause() # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Try
```
try:
thread=reqthread()
thread.daemon=True
thread.start()
while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
Without the call to `time.sleep`, the main process is jumping out of the `try...except` block too early, so the `KeyboardInterrupt` is not caught. My first thought was to use `thread.join`, but that seems to block the main process (ignoring KeyboardInterrupt) until the `thread` is finished.
`thread.daemon=True` causes the thread to terminate when the main process ends.
|
Putting the `try ... except` in each thread, plus a `signal.pause()` in the *real* `main()`, works for me.
Watch out for the [import lock](https://stackoverflow.com/a/46354248/5896591) though. I am guessing this is why Python doesn't handle Ctrl-C for you by default.
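A runnable sketch of that layout (Unix-only, since `signal.pause()` and `SIGALRM` are not available on Windows; here a scheduled `SIGALRM` stands in for a real Ctrl+C, and all names are illustrative):

```python
import signal
import threading
import time

def worker(stop):
    # the thread polls an Event instead of relying on interruption
    while not stop.is_set():
        time.sleep(0.1)

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,), daemon=True)
t.start()

# Pretend the user presses Ctrl+C after one second.
signal.signal(signal.SIGALRM, lambda signum, frame: None)
signal.alarm(1)
try:
    signal.pause()   # main thread sleeps here but stays responsive to signals
finally:
    stop.set()       # tell the worker to finish
    t.join()
print('! Received signal, worker stopped.')
```

Unlike a plain `thread.join()`, the main thread here is parked in `signal.pause()`, so signal delivery wakes it immediately.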
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
To summarize the changes recommended in [the](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment16625542_3788243) [comments](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment28203538_3788243), the following works well for me:
```
try:
thread = reqthread()
thread.start()
while thread.isAlive():
thread.join(1) # not sure if there is an appreciable cost to this.
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
sys.exit()
```
|
My (hacky) solution is to monkey-patch `Thread.join()` like this:
```
def initThreadJoinHack():
import threading, thread
mainThread = threading.currentThread()
assert isinstance(mainThread, threading._MainThread)
mainThreadId = thread.get_ident()
join_orig = threading.Thread.join
def join_hacked(threadObj, timeout=None):
"""
:type threadObj: threading.Thread
:type timeout: float|None
"""
if timeout is None and thread.get_ident() == mainThreadId:
# This is a HACK for Thread.join() if we are in the main thread.
# In that case, a Thread.join(timeout=None) would hang and even not respond to signals
# because signals will get delivered to other threads and Python would forward
# them for delayed handling to the main thread which hangs.
# See CPython signalmodule.c.
# Currently the best solution I can think of:
while threadObj.isAlive():
join_orig(threadObj, timeout=0.1)
else:
# In all other cases, we can use the original.
join_orig(threadObj, timeout=timeout)
threading.Thread.join = join_hacked
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
To summarize the changes recommended in [the](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment16625542_3788243) [comments](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment28203538_3788243), the following works well for me:
```
try:
thread = reqthread()
thread.start()
while thread.isAlive():
thread.join(1) # not sure if there is an appreciable cost to this.
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
sys.exit()
```
|
Slight modification of ubuntu's solution.
Removing `thread.daemon = True` as suggested by Eric and replacing the sleeping loop with `signal.pause()`:
```
import signal
try:
thread=reqthread()
thread.start()
signal.pause() # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
To summarize the changes recommended in [the](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment16625542_3788243) [comments](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment28203538_3788243), the following works well for me:
```
try:
thread = reqthread()
thread.start()
while thread.isAlive():
thread.join(1) # not sure if there is an appreciable cost to this.
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
sys.exit()
```
|
Putting the `try ... except` in each thread, plus a `signal.pause()` in the *real* `main()`, works for me.
Watch out for the [import lock](https://stackoverflow.com/a/46354248/5896591) though. I am guessing this is why Python doesn't handle Ctrl-C for you by default.
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Slight modification of ubuntu's solution.
Removing `thread.daemon = True` as suggested by Eric and replacing the sleeping loop with `signal.pause()`:
```
import signal
try:
thread=reqthread()
thread.start()
signal.pause() # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
|
My (hacky) solution is to monkey-patch `Thread.join()` like this:
```
def initThreadJoinHack():
import threading, thread
mainThread = threading.currentThread()
assert isinstance(mainThread, threading._MainThread)
mainThreadId = thread.get_ident()
join_orig = threading.Thread.join
def join_hacked(threadObj, timeout=None):
"""
:type threadObj: threading.Thread
:type timeout: float|None
"""
if timeout is None and thread.get_ident() == mainThreadId:
# This is a HACK for Thread.join() if we are in the main thread.
# In that case, a Thread.join(timeout=None) would hang and even not respond to signals
# because signals will get delivered to other threads and Python would forward
# them for delayed handling to the main thread which hangs.
# See CPython signalmodule.c.
# Currently the best solution I can think of:
while threadObj.isAlive():
join_orig(threadObj, timeout=0.1)
else:
# In all other cases, we can use the original.
join_orig(threadObj, timeout=timeout)
threading.Thread.join = join_hacked
```
|
3,788,208
|
I'm running this simple code:
```
import threading, time
class reqthread(threading.Thread):
def run(self):
for i in range(0, 10):
time.sleep(1)
print('.')
try:
thread = reqthread()
thread.start()
except (KeyboardInterrupt, SystemExit):
print('\n! Received keyboard interrupt, quitting threads.\n')
```
But when I run it, it prints
```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```
In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
|
2010/09/24
|
[
"https://Stackoverflow.com/questions/3788208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39796/"
] |
Slight modification of ubuntu's solution.
Removing `thread.daemon = True` as suggested by Eric and replacing the sleeping loop with `signal.pause()`:
```
import signal
try:
thread=reqthread()
thread.start()
signal.pause() # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
print '\n! Received keyboard interrupt, quitting threads.\n'
```
|
Putting the `try ... except` in each thread, plus a `signal.pause()` in the *real* `main()`, works for me.
Watch out for the [import lock](https://stackoverflow.com/a/46354248/5896591) though. I am guessing this is why Python doesn't handle Ctrl-C for you by default.
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the below script, but I am getting the error below. I have already installed googlemaps.
Kindly reply.
Thanks in advance
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
The `googlemaps` package you are using is not an official one and does not use the Google Maps API v3, which is the latest version from Google.
You can use Google's [geocode REST API](https://developers.google.com/maps/documentation/geocoding/) to fetch coordinates for an address. Here's an example.
```
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA')
resp_json_payload = response.json()
print(resp_json_payload['results'][0]['geometry']['location'])
```
|
For a Python script that does not require an API key (only the `requests` library) you can query the Nominatim service, which in turn queries the OpenStreetMap database.
For more information on how to use it see <https://nominatim.org/release-docs/develop/api/Search/>
A simple example is below:
```
import requests
import urllib.parse

address = 'Shivaji Nagar, Bangalore, KA 560001'
url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
# Nominatim's usage policy requires an identifying User-Agent
response = requests.get(url, headers={'User-Agent': 'my-geocoder-script'}).json()
print(response[0]["lat"])
print(response[0]["lon"])
```
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
Try this code:
```
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="my_user_agent")
city = "London"
country = "UK"
loc = geolocator.geocode(city + ',' + country)
print("latitude is:", loc.latitude, "\nlongitude is:", loc.longitude)
```
|
The simplest way to get latitude and longitude is the Google Maps Geocoding API with Python `requests` (this works the same inside or outside Django):
```
# Simplest way to get the lat, long of any address,
# using Python requests and the Google Maps Geocoding API.
import requests

GOOGLE_MAPS_API_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

params = {
    'address': 'oshiwara industerial center goregaon west mumbai',
    'sensor': 'false',
    'region': 'india',
    # 'key': 'YOUR_API_KEY',  # the API now requires a key
}

# Do the request and get the response data
req = requests.get(GOOGLE_MAPS_API_URL, params=params)
res = req.json()

# Use the first result
result = res['results'][0]

geodata = dict()
geodata['lat'] = result['geometry']['location']['lat']
geodata['lng'] = result['geometry']['location']['lng']
geodata['address'] = result['formatted_address']

print('{address}. (lat, lng) = ({lat}, {lng})'.format(**geodata))
# Result => Link Rd, Best Nagar, Goregaon West, Mumbai, Maharashtra 400104, India. (lat, lng) = (19.1528967, 72.8371262)
```
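One pitfall with the snippet above: `res['results'][0]` raises an `IndexError` when the API returns `ZERO_RESULTS` or an error status. A minimal sketch of a guard (the payloads below are simulated so the sketch runs without a network call or key; real responses carry the same `status` and `results` fields):

```python
def extract_location(payload):
    """Return (lat, lng) from a Geocoding API JSON payload, or None on failure."""
    if payload.get('status') != 'OK' or not payload.get('results'):
        return None
    loc = payload['results'][0]['geometry']['location']
    return loc['lat'], loc['lng']

# Simulated responses, mirroring the JSON structure used above
ok_payload = {
    'status': 'OK',
    'results': [{'geometry': {'location': {'lat': 19.1528967, 'lng': 72.8371262}}}],
}
empty_payload = {'status': 'ZERO_RESULTS', 'results': []}

print(extract_location(ok_payload))     # (19.1528967, 72.8371262)
print(extract_location(empty_payload))  # None
```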
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
As @WSaitama said, `geopy` works well and does not require authentication. To download it: <https://pypi.org/project/geopy/>. An example of how to use it:
```
from geopy.geocoders import Nominatim
address='Barcelona'
geolocator = Nominatim(user_agent="Your_Name")
location = geolocator.geocode(address)
print(location.address)
print((location.latitude, location.longitude))
#Barcelona, Barcelonès, Barcelona, Catalunya, 08001, España
#(41.3828939, 2.1774322)
```
|
Here is my solution: Nominatim via `geopy`, with the Positionstack API as a backup.
The Positionstack API supports up to 25,000 requests per month on the free tier.
```
import urllib.parse

import requests
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent='myapplication')

def get_nominatim_geocode(address):
    try:
        location = geolocator.geocode(address)
        return location.raw['lon'], location.raw['lat']
    except Exception as e:
        # print(e)
        return None, None

def get_positionstack_geocode(address):
    BASE_URL = "http://api.positionstack.com/v1/forward?access_key="
    API_KEY = "YOUR_API_KEY"
    url = BASE_URL + API_KEY + '&query=' + urllib.parse.quote(address)
    try:
        response = requests.get(url).json()
        # print(response["data"][0])
        return response["data"][0]["longitude"], response["data"][0]["latitude"]
    except Exception as e:
        # print(e)
        return None, None

def get_geocode(address):
    long, lat = get_nominatim_geocode(address)
    if long is None:
        return get_positionstack_geocode(address)
    else:
        return long, lat

address = "50TH ST S"
get_geocode(address)
```
Output:
```
('-80.2581662', '26.6077474')
```
You can use `nominatim` without the `geopy` client as well:
```
import urllib.parse

import requests

def get_nominatim_geocode(address):
    url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
    try:
        response = requests.get(url).json()
        return response[0]["lon"], response[0]["lat"]
    except Exception as e:
        # print(e)
        return None, None
```
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
The `googlemaps` package you are using is not an official one and does not use the Google Maps API v3, which is the latest version from Google.
You can use Google's [geocode REST API](https://developers.google.com/maps/documentation/geocoding/) to fetch coordinates for an address. Here's an example.
```
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA')
resp_json_payload = response.json()
print(resp_json_payload['results'][0]['geometry']['location'])
```
|
You can also use `geocoder` with the Bing Maps API. The API returns latitude and longitude for all addresses for me (unlike Nominatim, which worked for only 60% of my addresses) and it has a generous free tier for non-commercial use (up to 125,000 requests per year). To get a free API key, go [here](https://www.microsoft.com/en-us/maps/create-a-bing-maps-key) and click "Get a free Basic key". Then you can use it in the code below:
```
import geocoder # pip install geocoder
g = geocoder.bing('Mountain View, CA', key='<API KEY>')
results = g.json
print(results['lat'], results['lng'])
```
The `results` dict contains much more information than just latitude and longitude. Check it out.
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
The `googlemaps` package you are using is not an official one and does not use the Google Maps API v3, which is the latest version from Google.
You can use Google's [geocode REST API](https://developers.google.com/maps/documentation/geocoding/) to fetch coordinates for an address. Here's an example.
```
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA')
resp_json_payload = response.json()
print(resp_json_payload['results'][0]['geometry']['location'])
```
|
Here is the approach I use most of the time to get latitude and longitude from a physical address.
NB: please fill `NaN` values in the address columns with empty strings first, e.g. `df['street'].fillna('')`.
```
import geopy.geocoders
from geopy.exc import GeocoderTimedOut

# Columns that together form the address; this can also be a single column
col_addr = ['street', 'postcode', 'town']

geocode = geopy.geocoders.BANFrance().geocode

def geopoints(row):
    search = ""
    for x in col_addr:
        search = search + str(row[x]) + ' '
    print(row.name + 1, end="\r")
    try:
        search_location = geocode(search, timeout=5)
        return search_location.latitude, search_location.longitude
    except (AttributeError, GeocoderTimedOut):
        print("Got an error on index:", row.name)
        return 0, 0

print("Number of addresses to locate /", len(df), ":")
df['latitude'], df['longitude'] = zip(*df.apply(geopoints, axis=1))
```
NB: I use `BANFrance()` as the geocoder; you can find other geocoders here: [Geocoders](https://geopy.readthedocs.io/en/stable/index.html#module-geopy.geocoders).
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
For a Python script that does not require an API key (only the `requests` library) you can query the Nominatim service, which in turn queries the OpenStreetMap database.
For more information on how to use it see <https://nominatim.org/release-docs/develop/api/Search/>
A simple example is below:
```
import requests
import urllib.parse

address = 'Shivaji Nagar, Bangalore, KA 560001'
url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
# Nominatim's usage policy requires an identifying User-Agent
response = requests.get(url, headers={'User-Agent': 'my-geocoder-script'}).json()
print(response[0]["lat"])
print(response[0]["lon"])
```
|
Here is my solution: Nominatim via `geopy`, with the Positionstack API as a backup.
The Positionstack API supports up to 25,000 requests per month on the free tier.
```
import urllib.parse

import requests
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent='myapplication')

def get_nominatim_geocode(address):
    try:
        location = geolocator.geocode(address)
        return location.raw['lon'], location.raw['lat']
    except Exception as e:
        # print(e)
        return None, None

def get_positionstack_geocode(address):
    BASE_URL = "http://api.positionstack.com/v1/forward?access_key="
    API_KEY = "YOUR_API_KEY"
    url = BASE_URL + API_KEY + '&query=' + urllib.parse.quote(address)
    try:
        response = requests.get(url).json()
        # print(response["data"][0])
        return response["data"][0]["longitude"], response["data"][0]["latitude"]
    except Exception as e:
        # print(e)
        return None, None

def get_geocode(address):
    long, lat = get_nominatim_geocode(address)
    if long is None:
        return get_positionstack_geocode(address)
    else:
        return long, lat

address = "50TH ST S"
get_geocode(address)
```
Output:
```
('-80.2581662', '26.6077474')
```
You can use `nominatim` without the `geopy` client as well:
```
import urllib.parse

import requests

def get_nominatim_geocode(address):
    url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
    try:
        response = requests.get(url).json()
        return response[0]["lon"], response[0]["lat"]
    except Exception as e:
        # print(e)
        return None, None
```
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
For a Python script that does not require an API key (only the `requests` library) you can query the Nominatim service, which in turn queries the OpenStreetMap database.
For more information on how to use it see <https://nominatim.org/release-docs/develop/api/Search/>
A simple example is below:
```
import requests
import urllib.parse

address = 'Shivaji Nagar, Bangalore, KA 560001'
url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
# Nominatim's usage policy requires an identifying User-Agent
response = requests.get(url, headers={'User-Agent': 'my-geocoder-script'}).json()
print(response[0]["lat"])
print(response[0]["lon"])
```
|
The simplest way to get latitude and longitude is the Google Maps Geocoding API with Python `requests` (this works the same inside or outside Django):
```
# Simplest way to get the lat, long of any address,
# using Python requests and the Google Maps Geocoding API.
import requests

GOOGLE_MAPS_API_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

params = {
    'address': 'oshiwara industerial center goregaon west mumbai',
    'sensor': 'false',
    'region': 'india',
    # 'key': 'YOUR_API_KEY',  # the API now requires a key
}

# Do the request and get the response data
req = requests.get(GOOGLE_MAPS_API_URL, params=params)
res = req.json()

# Use the first result
result = res['results'][0]

geodata = dict()
geodata['lat'] = result['geometry']['location']['lat']
geodata['lng'] = result['geometry']['location']['lng']
geodata['address'] = result['formatted_address']

print('{address}. (lat, lng) = ({lat}, {lng})'.format(**geodata))
# Result => Link Rd, Best Nagar, Goregaon West, Mumbai, Maharashtra 400104, India. (lat, lng) = (19.1528967, 72.8371262)
```
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
The simplest way to get latitude and longitude is the Google Maps Geocoding API with Python `requests` (this works the same inside or outside Django):
```
# Simplest way to get the lat, long of any address,
# using Python requests and the Google Maps Geocoding API.
import requests

GOOGLE_MAPS_API_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

params = {
    'address': 'oshiwara industerial center goregaon west mumbai',
    'sensor': 'false',
    'region': 'india',
    # 'key': 'YOUR_API_KEY',  # the API now requires a key
}

# Do the request and get the response data
req = requests.get(GOOGLE_MAPS_API_URL, params=params)
res = req.json()

# Use the first result
result = res['results'][0]

geodata = dict()
geodata['lat'] = result['geometry']['location']['lat']
geodata['lng'] = result['geometry']['location']['lng']
geodata['address'] = result['formatted_address']

print('{address}. (lat, lng) = ({lat}, {lng})'.format(**geodata))
# Result => Link Rd, Best Nagar, Goregaon West, Mumbai, Maharashtra 400104, India. (lat, lng) = (19.1528967, 72.8371262)
```
|
Did you try the `geopy` library?
<https://pypi.org/project/geopy/>
It works with Python 2.7 through 3.8.
It also works with OpenStreetMap Nominatim, the Google Geocoding API (V3), and others.
Hope it can help you.
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
For a Python script that does not require an API key (only the `requests` library) you can query the Nominatim service, which in turn queries the OpenStreetMap database.
For more information on how to use it see <https://nominatim.org/release-docs/develop/api/Search/>
A simple example is below:
```
import requests
import urllib.parse

address = 'Shivaji Nagar, Bangalore, KA 560001'
url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
# Nominatim's usage policy requires an identifying User-Agent
response = requests.get(url, headers={'User-Agent': 'my-geocoder-script'}).json()
print(response[0]["lat"])
print(response[0]["lon"])
```
|
You can also use `geocoder` with the Bing Maps API. The API returns latitude and longitude for all addresses for me (unlike Nominatim, which worked for only 60% of my addresses) and it has a generous free tier for non-commercial use (up to 125,000 requests per year). To get a free API key, go [here](https://www.microsoft.com/en-us/maps/create-a-bing-maps-key) and click "Get a free Basic key". Then you can use it in the code below:
```
import geocoder # pip install geocoder
g = geocoder.bing('Mountain View, CA', key='<API KEY>')
results = g.json
print(results['lat'], results['lng'])
```
The `results` dict contains much more information than just latitude and longitude. Check it out.
|
25,888,396
|
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps.
Kindly reply.
Thanks in advance.
```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```
**Output:**
```
Traceback (most recent call last):
File "Fetching.py", line 12, in <module>
add=GoogleMaps().address_to_latlng(address)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 407, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 445, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3008712/"
] |
The `googlemaps` package you are using is not an official one and does not use the Google Maps API v3, which is the latest version from Google.
You can use Google's [geocode REST API](https://developers.google.com/maps/documentation/geocoding/) to fetch coordinates for an address. Here's an example.
```
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA')
resp_json_payload = response.json()
print(resp_json_payload['results'][0]['geometry']['location'])
```
|
Here is my solution: Nominatim via `geopy`, with the Positionstack API as a backup.
The Positionstack API supports up to 25,000 requests per month on the free tier.
```
import urllib.parse

import requests
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent='myapplication')

def get_nominatim_geocode(address):
    try:
        location = geolocator.geocode(address)
        return location.raw['lon'], location.raw['lat']
    except Exception as e:
        # print(e)
        return None, None

def get_positionstack_geocode(address):
    BASE_URL = "http://api.positionstack.com/v1/forward?access_key="
    API_KEY = "YOUR_API_KEY"
    url = BASE_URL + API_KEY + '&query=' + urllib.parse.quote(address)
    try:
        response = requests.get(url).json()
        # print(response["data"][0])
        return response["data"][0]["longitude"], response["data"][0]["latitude"]
    except Exception as e:
        # print(e)
        return None, None

def get_geocode(address):
    long, lat = get_nominatim_geocode(address)
    if long is None:
        return get_positionstack_geocode(address)
    else:
        return long, lat

address = "50TH ST S"
get_geocode(address)
```
Output:
```
('-80.2581662', '26.6077474')
```
You can use `nominatim` without the `geopy` client as well:
```
import urllib.parse

import requests

def get_nominatim_geocode(address):
    url = 'https://nominatim.openstreetmap.org/search?q=' + urllib.parse.quote(address) + '&format=json'
    try:
        response = requests.get(url).json()
        return response[0]["lon"], response[0]["lat"]
    except Exception as e:
        # print(e)
        return None, None
```
|
40,616,527
|
I'm trying to build Caffe with Python but it keeps saying this:
```
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
/usr/bin/ld: cannot find -lboost_python3
collect2: error: ld returned 1 exit status
make: *** [python/caffe/_caffe.so] Error 1
```
This is what I get when I try to locate `boost_python`
```
$ sudo locate boost_python
/usr/lib/x86_64-linux-gnu/libboost_python-py27.a
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py33.a
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py34.a
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python.a
/usr/lib/x86_64-linux-gnu/libboost_python.so
```
I've added this path as well:
```
## .bashrc
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu":$LD_LIBRARY_PATH
```
Any idea why that is happening?
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40616527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4258576/"
] |
I've found the problem. It turned out that the linker was looking for a library named `libboost_python3.so`. After changing the name in Makefile.config from `boost_python3` to `boost_python-py34`, it worked just fine!
|
I know this thread is quite old, but :
```
dnf install boost-python3-devel
```
may help !
|
40,616,527
|
I'm trying to build Caffe with Python, but it keeps saying this:
```
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
/usr/bin/ld: cannot find -lboost_python3
collect2: error: ld returned 1 exit status
make: *** [python/caffe/_caffe.so] Error 1
```
This is what I get when I try to locate `boost_python`
```
$ sudo locate boost_python
/usr/lib/x86_64-linux-gnu/libboost_python-py27.a
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py33.a
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py34.a
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python.a
/usr/lib/x86_64-linux-gnu/libboost_python.so
```
I've added this path as well:
```
## .bashrc
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu":$LD_LIBRARY_PATH
```
Any idea why that is happening?
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40616527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4258576/"
] |
I've found the problem. It turned out that the linker was looking for a library named `libboost_python3.so`. After changing the name in Makefile.config from `boost_python3` to `boost_python-py34`, it worked just fine!
|
I wanted to build Caffe and faced the same issue. Unfortunately, none of the answers worked in my case. I checked the location of boost_python with the following command:
`find /usr/lib -name "libboost_python*"`
I found the boost_python libraries, and here are some of them:
* libboost_python37.a
* libboost_python37-mt.a
* libboost_python37-mt.so
* libboost_python37-mt.so.1.68.0
Then in the **Makefile.config** file I changed the following line:
PYTHON_LIBRARIES := **boost_python** python3.7m
to
PYTHON_LIBRARIES := **boost_python37-mt** python3.7m
|
47,966,556
|
I'm getting confused about regex groups in an example from the book *Automate the Boring Stuff with Python: Practical Programming for Total Beginners*. The regex is as follows:
```
#! python3
# phoneAndEmail.py - Finds phone numbers and email addresses on the clipboard
# The data of paste from: https://www.nostarch.com/contactus.html
import pyperclip, re
phoneRegex = re.compile(r'''(
(\d{3}|\(\d{3}\))? # area code
(\s|-|\.)? # separator
(\d{3}) # first 3 digits
(\s|-|\.) # separator
(\d{4}) # last 4 digits
(\s*(ext|x|ext.)\s*(\d{2,5}))? # extension
)''', re.VERBOSE )
# TODO: Create email regex.
emailRegex = re.compile(r'''(
[a-zA-Z0-9._%+-]+ # username
@ # @ symbol
[a-zA-Z0-9.-]+ # domain name
(\.[a-zA-Z]{2,4}) # dot-something
)''', re.VERBOSE)
# TODO: Find matches in clipboard text.
text = str(pyperclip.paste())
matches = []
for groups in phoneRegex.findall(text):
phoneNum = '-'.join([groups[1], groups[3], groups[5]])
if groups[8] != '':
    phoneNum += ' x' + groups[8]
matches.append(phoneNum)
print(groups[0])
for groups in emailRegex.findall(text):
matches.append(groups[0])
# TODO: Copy results to the clipboard.
if len(matches) > 0:
pyperclip.copy('\n'.join(matches))
print('Copied to clipboard:')
print('\n'.join(matches))
else:
print('No phone number or email addresses found.')
```
I am confused about **groups[1]**, **groups[2]**, ..., **groups[8]**; how many groups there are in the phoneRegex; and what the difference is between **groups()** and **groups[]**.
The pasted data is from: <https://www.nostarch.com/contactus.html>
|
2017/12/25
|
[
"https://Stackoverflow.com/questions/47966556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9135962/"
] |
Regexes can have *groups*. They are denoted by `()`. Groups can be used to extract a part of the match which might be useful.
In the phone number regex for example, there are 9 groups:
```
Group Subpattern
1      ((\d{3}|\(\d{3}\))?(\s|-|\.)?(\d{3})(\s|-|\.)(\d{4})(\s*(ext|x|ext.)\s*(\d{2,5}))?)
2      (\d{3}|\(\d{3}\))?
3      (\s|-|\.)?
4      (\d{3})
5      (\s|-|\.)
6      (\d{4})
7      (\s*(ext|x|ext.)\s*(\d{2,5}))?
8      (ext|x|ext.)
9      (\d{2,5})
```
Note how each group is enclosed in `()`s.
The `groups[x]` is just referring to the string matched by a particular group. `groups[0]` means the string matched by group 1, `groups[1]` means the string matched by group 2, etc.
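To see this numbering concretely, here is a short sketch (the sample phone string is my own; the extension quantifier is written `{2,5}`, which is what the book intends):

```python
import re

phoneRegex = re.compile(r'''(
    (\d{3}|\(\d{3}\))?              # area code      -> group 2
    (\s|-|\.)?                      # separator      -> group 3
    (\d{3})                         # first 3 digits -> group 4
    (\s|-|\.)                       # separator      -> group 5
    (\d{4})                         # last 4 digits  -> group 6
    (\s*(ext|x|ext.)\s*(\d{2,5}))?  # extension      -> groups 7-9
    )''', re.VERBOSE)

# findall returns one tuple per match, with one entry per group;
# tuple index 0 corresponds to regex group 1, index 8 to group 9
for groups in phoneRegex.findall('Call 415-555-1011 ext 42 today'):
    print(groups[0])  # '415-555-1011 ext 42'  (group 1: the whole number)
    print(groups[1])  # '415'                  (group 2: area code)
    print(groups[8])  # '42'                   (group 9: extension digits)
```

Note the off-by-one: `groups[0]` in the tuple is regex group 1, because group 0 (the entire match) is not included in `findall` tuples when the pattern has groups.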
|
In a regex, parenthesis `()` create what is called a **capturing group**. Each group is assigned a number, starting with 1.
For example:
```
In [1]: import re
In [2]: m = re.match('([0-9]+)([a-z]+)', '123xyz')
In [3]: m.group(1)
Out[3]: '123'
In [4]: m.group(2)
Out[4]: 'xyz'
```
Here, `([0-9]+)` is the first capturing group, and `([a-z]+)` is the second capturing group. When you apply the regex, the first capturing group ends up "capturing" the string `123` (since that's the part it matches), and the second part captures `xyz`.
With `findall`, it searches the string for all places where the regex matches, and for each match, it returns the list of captured groups as a tuple. I'd encourage you to play with it a bit in `ipython` to understand how it works. Also check the docs: <https://docs.python.org/3.6/library/re.html#re.findall>
|
59,591,632
|
I am currently trying to use the [dynamodb](https://www.npmjs.com/package/dynamodb) module (Node.js) to get the position/rank in a descending query on an index. The following is what I have come up with:
```
router.get('/'+USER_ROUTE+'/top', (req, res) => {
POST.query()
.usingIndex('likecount')
.attributes(['id', 'likecount'])
.descending()
.loadAll()
.exec((error, result) => {
if (error) {
res.status(400).json({ error: 'Error retrieving most liked post' });
}
//convert res to dict and sort dict by value (likecount)
res.json({position:Object.keys(result).indexOf(req.body.id)});
});
});
```
As you can see, I would convert the result into a dict of ID and likecount and then sort the dict after which I get the index of the key I am looking for.
Obviously this fails in multiple respects, it is slow/inefficient (iterates over every item in the database per call) and requires multiple steps.
Is there a more succinct method to achieve this?
Thanks.
|
2020/01/04
|
[
"https://Stackoverflow.com/questions/59591632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4134377/"
] |
It is not possible in Delta Lake up to and including 0.5.0.
There's an issue to track this at <https://github.com/delta-io/delta/issues/294>. Feel free to upvote that to help get it prioritized.
---
Just a day after Google posted [Getting started with new table formats on Dataproc](https://cloud.google.com/blog/products/data-analytics/getting-started-with-new-table-formats-on-dataproc):
>
> We’re announcing that table format projects Delta Lake and Apache Iceberg (Incubating) are now available in the latest version of Cloud Dataproc (version 1.5 Preview). You can start using them today with either Spark or Presto. Apache Hudi is also available on Dataproc 1.3.
>
>
>
|
It's possible.
Here's a sample of the code and the libraries that you need.
Make sure to set your credentials first; you can do it either as part of the code or as an environment variable:
```
export GOOGLE_APPLICATION_CREDENTIALS={gcs-key-path.json}
```
```
import org.apache.spark.sql.{SparkSession, DataFrame}
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryException
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.DatasetInfo
spark.conf.set("parentProject", {Proj})
spark.conf.set("spark.hadoop.fs.gs.auth.service.account.enable", "true")
spark.conf.set("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
spark.conf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
spark.conf.set("spark.delta.logStore.gs.impl", "io.delta.storage.GCSLogStore")
spark.conf.set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
val targetTablePath = "gs://{bucket}/{dataset}/{tablename}"
spark.range(5, 10).write.format("delta")
.mode("overwrite")
.save(targetTablePath)
```
Libraries that you need:
```
"io.delta" % "delta-core_2.12" % "1.0.0",
"io.delta" % "delta-contribs_2.12" % "1.0.0",
"com.google.cloud.spark" % "spark-bigquery-with-dependencies_2.12" % "0.21.1",
"com.google.cloud.bigdataoss" % "gcs-connector" % "1.9.4-hadoop3"
```
Checking my delta files in GCS:
```
$ gsutil ls gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00000-ce79bfc7-e28f-4929-955c-56a7a08caf9f-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00001-dda0bd2d-a081-4444-8983-ac8f3a2ffe9d-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00002-93f7429b-777a-42f4-b2dd-adc9a482a6e8-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00003-e9874baf-6c0b-46de-891e-032ac8b67287-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00004-ede54816-2da1-412f-a9e3-5233e77258fb-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/_delta_log/
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/_symlink_format_manifest/
```
|
48,464,693
|
So, I created a Python program and converted it to an exe using Py2Exe; I tried PyInstaller and cx_freeze as well. All of these cause the program to be detected as a virus by Avast, AVG, and others on VirusTotal and on my local machine.
I tried switching to a Hello World script to see if the problem is in my code, but the results are exactly the same.
My question is: what is triggering this detection? The way in which the .exe is created?
If so, are there any other alternatives to Py2exe, PyInstaller, cx_freeze?
|
2018/01/26
|
[
"https://Stackoverflow.com/questions/48464693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9273160/"
] |
You can try nuitka.
```
pip install -U nuitka
```
Example:
```
nuitka --recurse-all --icon=app.ico --portable helloworld.py
```
Website:
<http://nuitka.net/>
Maybe you need to install Visual C++ 2015 Build Tools for compile.
<http://landinghub.visualstudio.com/visual-cpp-build-tools>
|
If you download the Nuitka package, you will find Trojan-flagged files in the folder.
If you use this library, you will create an exe file with that Trojan flag embedded in it.
That said, it converts files much faster than other similar libraries, with no errors.
|
60,421,328
|
I am trying to import a Python dictionary from models and manipulate/print its properties in JavaScript. However, nothing seems to print out and I don't receive any error warnings.
**Views.py**
```
from chesssite.models import Chess_board
import json
def chess(request):
board = Chess_board()
data = json.dumps(board.rep)
return render(request, 'home.html', {'board': data})
```
Here `board.rep` is a Python dictionary, `{"0a": 0, "0b": 0, "0c": "K0"}` - basically a chess board.
**home.html**
```
<html>
<body>
{% block content %}
<script>
for (x in {{board}}) {
document.write(x)
}
</script>
{% endblock %}
</body>
</html>
```
I also would very much appreciate some debugging tips!
Thanks in advance, Alex
|
2020/02/26
|
[
"https://Stackoverflow.com/questions/60421328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12315848/"
] |
You can utilize the [z algorithm](https://codeforces.com/blog/entry/3107), a linear time (***O***(n)) algorithm that:
>
> Given a string *S* of length n, the Z Algorithm produces an array *Z*
> where *Z[i]* is the length of the longest substring starting from *S[i]*
> which is also a prefix of *S*
>
>
>
You need to concatenate your arrays (*b*+*a*) and run the algorithm on the resulting constructed array till the first *i* such that *Z[i]*+*i* == *m*+*n*.
For example, for *a* = [1, 2, 3, 6, 2, 3] & *b* = [2, 3, 6, 2, 1, 0], the concatenation would be [2, 3, 6, 2, 1, 0, 1, 2, 3, 6, 2, 3] which would yield *Z[10]* = 2 fulfilling *Z[i]* + *i* = 12 = *m* + *n*.
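For reference, here is a minimal Python sketch of this idea (the function names are mine; the Z-array construction is the standard linear-time one):

```python
def z_array(s):
    # z[i] = length of the longest substring starting at i
    # that is also a prefix of s
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0  # current rightmost Z-box [l, r)
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])  # reuse previously computed info
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1                    # extend the match naively
        if i + z[i] > r:
            l, r = i, i + z[i]           # update the Z-box
    return z

def overlap(a, b):
    # longest suffix of a that is also a prefix of b
    s = list(b) + list(a)   # works for strings and lists alike
    z = z_array(s)
    total = len(s)
    for i in range(len(b), total):
        if z[i] + i == total:  # suffix starting at i matches a prefix of b
            return z[i]
    return 0

print(overlap([1, 2, 3, 6, 2, 3], [2, 3, 6, 2, 1, 0]))  # 2
```

Both the Z-array construction and the final scan are linear, so the whole thing runs in O(m + n).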
|
For O(n) time/space complexity, the trick is to evaluate hashes for each subsequence. Consider the array `b`:
```
[b1 b2 b3 ... bn]
```
Using [Horner's method](https://en.wikipedia.org/wiki/Horner%27s_method), you can evaluate all the possible hashes for each subsequence. Pick a base value `B` (bigger than any value in both of your arrays):
```
from b1 to b1 = b1 * B^1
from b1 to b2 = b1 * B^1 + b2 * B^2
from b1 to b3 = b1 * B^1 + b2 * B^2 + b3 * B^3
...
from b1 to bn = b1 * B^1 + b2 * B^2 + b3 * B^3 + ... + bn * B^n
```
Note that you can evaluate each sequence in O(1) time, using the result of the previous sequence, hence all the job costs O(n).
Now you have an array `Hb = [h(b1), h(b2), ... , h(bn)]`, where `Hb[i]` is the hash from `b1` until `bi`.
Do the same thing for the array `a`, but with a little trick:
```
from an to an = (an * B^1)
from an-1 to an = (an-1 * B^1) + (an * B^2)
from an-2 to an = (an-2 * B^1) + (an-1 * B^2) + (an * B^3)
...
from a1 to an = (a1 * B^1) + (a2 * B^2) + (a3 * B^3) + ... + (an * B^n)
```
You must note that, when you step from one sequence to another, you multiply the whole previous sequence by B and add the new value multiplied by B. For example:
```
from an to an = (an * B^1)
for the next sequence, multiply the previous by B: (an * B^1) * B = (an * B^2)
now sum with the new value multiplied by B: (an-1 * B^1) + (an * B^2)
hence:
from an-1 to an = (an-1 * B^1) + (an * B^2)
```
Now you have an array `Ha = [h(an), h(an-1), ... , h(a1)]`, where `Ha[i]` is the hash from `ai` until `an`.
Now, you can compare `Ha[d] == Hb[d]` for all `d` values from n to 1, if they match, you have your answer.
---
>
> **ATTENTION**: this is a hash method, the values can be large and you may have to use a [fast exponentiation method and modular arithmetics](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/fast-modular-exponentiation), which may (hardly) give you **collisions**, making this method not totally safe. A good practice is to pick a base `B` as a really big prime number (at least bigger than the biggest value in your arrays). You should also be careful as the limits of the numbers may overflow at each step, so you'll have to use (modulo `K`) in each operation (where `K` can be a prime bigger than `B`).
>
>
>
This means that two different sequences **might** have the same hash, but two equal sequences will **always** have the same hash.
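A minimal Python sketch of this hashing scheme (integer arrays assumed; the function name is mine, and I add a direct comparison on hash equality to guard against the collisions the note warns about):

```python
def overlap_by_hash(a, b, B=1_000_003, K=(1 << 61) - 1):
    # Hb[d] = hash of b[0..d-1] = b1*B^1 + ... + bd*B^d   (mod K)
    # Ha[d] = hash of the length-d suffix of a, same positional weights
    n = min(len(a), len(b))
    hb = ha = 0
    p = B       # B^d, weight for the element being appended to hb
    best = 0
    for d in range(1, n + 1):
        hb = (hb + b[d - 1] * p) % K       # append b_d with weight B^d
        p = p * B % K
        ha = (ha + a[len(a) - d]) * B % K  # prepend: shift old weights up by B
        if ha == hb and a[len(a) - d:] == b[:d]:  # verify to rule out collisions
            best = d                       # largest matching d wins
    return best

print(overlap_by_hash([1, 2, 3, 6, 2, 3], [2, 3, 6, 2, 1, 0]))  # 2
```

Each step updates both hashes in O(1); the verification slice is only a safeguard and can be dropped if probabilistic correctness is acceptable.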
|
60,421,328
|
I am trying to import a Python dictionary from models and manipulate/print its properties in JavaScript. However, nothing seems to print out and I don't receive any error warnings.
**Views.py**
```
from chesssite.models import Chess_board
import json
def chess(request):
board = Chess_board()
data = json.dumps(board.rep)
return render(request, 'home.html', {'board': data})
```
Here `board.rep` is a Python dictionary, `{"0a": 0, "0b": 0, "0c": "K0"}` - basically a chess board.
**home.html**
```
<html>
<body>
{% block content %}
<script>
for (x in {{board}}) {
document.write(x)
}
</script>
{% endblock %}
</body>
</html>
```
I also would very much appreciate some debugging tips!
Thanks in advance, Alex
|
2020/02/26
|
[
"https://Stackoverflow.com/questions/60421328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12315848/"
] |
You can utilize the [z algorithm](https://codeforces.com/blog/entry/3107), a linear time (***O***(n)) algorithm that:
>
> Given a string *S* of length n, the Z Algorithm produces an array *Z*
> where *Z[i]* is the length of the longest substring starting from *S[i]*
> which is also a prefix of *S*
>
>
>
You need to concatenate your arrays (*b*+*a*) and run the algorithm on the resulting constructed array till the first *i* such that *Z[i]*+*i* == *m*+*n*.
For example, for *a* = [1, 2, 3, 6, 2, 3] & *b* = [2, 3, 6, 2, 1, 0], the concatenation would be [2, 3, 6, 2, 1, 0, 1, 2, 3, 6, 2, 3] which would yield *Z[10]* = 2 fulfilling *Z[i]* + *i* = 12 = *m* + *n*.
|
This can indeed be done in linear time, *O(n)*, and *O(n)* extra space. I will assume the input arrays are character strings, but this is not essential.
A naive method would -- after matching *k* characters that are equal -- find a character that does not match, and go back *k-1* units in *a*, reset the index in *b*, and then start the matching process from there. This clearly represents a *O(n²)* worst case.
To avoid this backtracking process, we can observe that going back is not useful if we have not encountered the b[0] character while scanning the last *k-1* characters. If we *did* find that character, then backtracking to that position would only be useful, if in that *k* sized substring we had a periodic repetition.
For instance, if we look at substring "abcabc" somewhere in *a*, and *b* is "abcabd", and we find that the final character of *b* does not match, we must consider that a successful match might start at the second "a" in the substring, and we should move our current index in *b* back accordingly before continuing the comparison.
The idea is then to do some preprocessing based on string *b* to log back-references in *b* that are useful to check when there is a mismatch. So for instance, if *b* is "acaacaacd", we could identify these 0-based backreferences (put below each character):
```
index: 0 1 2 3 4 5 6 7 8
b: a c a a c a a c d
ref: 0 0 0 1 0 0 1 0 5
```
For example, if we have *a* equal to "acaacaaca" the first mismatch happens on the final character. The above information then tells the algorithm to go back in *b* to index 5, since "acaac" is common. And then with only changing the current index in *b* we can continue the matching at the current index of *a*. In this example the match of the final character then succeeds.
With this we can optimise the search and make sure that the index in *a* can always progress forwards.
Here is an implementation of that idea in JavaScript, using the most basic syntax of that language only:
```js
function overlapCount(a, b) {
// Deal with cases where the strings differ in length
let startA = 0;
if (a.length > b.length) startA = a.length - b.length;
let endB = b.length;
if (a.length < b.length) endB = a.length;
// Create a back-reference for each index
// that should be followed in case of a mismatch.
// We only need B to make these references:
let map = Array(endB);
let k = 0; // Index that lags behind j
map[0] = 0;
for (let j = 1; j < endB; j++) {
if (b[j] == b[k]) {
map[j] = map[k]; // skip over the same character (optional optimisation)
} else {
map[j] = k;
}
while (k > 0 && b[j] != b[k]) k = map[k];
if (b[j] == b[k]) k++;
}
// Phase 2: use these references while iterating over A
k = 0;
for (let i = startA; i < a.length; i++) {
while (k > 0 && a[i] != b[k]) k = map[k];
if (a[i] == b[k]) k++;
}
return k;
}
console.log(overlapCount("ababaaaabaabab", "abaababaaz")); // 7
```
Although there are nested `while` loops, these do not have more iterations in total than *n*. This is because the value of *k* strictly decreases in the `while` body, and cannot become negative. This can only happen when `k++` was executed that many times to give enough room for such decreases. So all in all, there cannot be more executions of the `while` body than there are `k++` executions, and the latter is clearly O(n).
To complete, here you can find the same code as above, but in an interactive snippet: you can input your own strings and see the result interactively:
```js
function overlapCount(a, b) {
// Deal with cases where the strings differ in length
let startA = 0;
if (a.length > b.length) startA = a.length - b.length;
let endB = b.length;
if (a.length < b.length) endB = a.length;
// Create a back-reference for each index
// that should be followed in case of a mismatch.
// We only need B to make these references:
let map = Array(endB);
let k = 0; // Index that lags behind j
map[0] = 0;
for (let j = 1; j < endB; j++) {
if (b[j] == b[k]) {
map[j] = map[k]; // skip over the same character (optional optimisation)
} else {
map[j] = k;
}
while (k > 0 && b[j] != b[k]) k = map[k];
if (b[j] == b[k]) k++;
}
// Phase 2: use these references while iterating over A
k = 0;
for (let i = startA; i < a.length; i++) {
while (k > 0 && a[i] != b[k]) k = map[k];
if (a[i] == b[k]) k++;
}
return k;
}
// I/O handling
let [inputA, inputB] = document.querySelectorAll("input");
let output = document.querySelector("pre");
function refresh() {
let a = inputA.value;
let b = inputB.value;
let count = overlapCount(a, b);
let padding = a.length - count;
// Apply some HTML formatting to highlight the overlap:
if (count) {
a = a.slice(0, -count) + "<b>" + a.slice(-count) + "</b>";
b = "<b>" + b.slice(0, count) + "</b>" + b.slice(count);
}
output.innerHTML = count + " overlapping characters:\n" +
a + "\n" +
" ".repeat(padding) + b;
}
document.addEventListener("input", refresh);
refresh();
```
```css
body { font-family: monospace }
b { background:yellow }
input { width: 90% }
```
```html
a: <input value="acacaacaa"><br>
b: <input value="acaacaacd"><br>
<pre></pre>
```
|