| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
55,975,930
I'm locally running a standard app engine environment through dev\_appserver and cannot get rid of the following error: > > ImportError: No module named google.auth > > > Full traceback (replaced personal details with `...`): ``` Traceback (most recent call last): File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/Users/.../.../main.py", line 6, in <module> from services.get_campaigns import get_campaigns File "/Users/.../.../get_campaigns.py", line 3, in <module> from googleads import adwords File "/Users/.../.../lib/googleads/__init__.py", line 17, in <module> from ad_manager import AdManagerClient File "/Users/.../lib/googleads/ad_manager.py", line 28, in <module> import googleads.common File "/Users/.../lib/googleads/common.py", line 51, in <module> import googleads.oauth2 File "/Users/.../lib/googleads/oauth2.py", line 28, in <module> import google.auth.transport.requests File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/sandbox.py", line 1154, in load_module raise ImportError('No module named %s' % fullname) ImportError: No module named google.auth ``` I do have google.auth installed, as `pip show google.auth` shows: ``` Name: google-auth Version: 1.6.3 Summary: Google Authentication Library Home-page: https://github.com/GoogleCloudPlatform/google-auth-library-python Author: Google Cloud Platform Author-email: jonwayne+google-auth@google.com License: Apache 2.0 Location: /Users/.../Library/Python/2.7/lib/python/site-packages Requires: rsa, pyasn1-modules, 
cachetools, six Required-by: googleads, google-auth-oauthlib, google-auth-httplib2, google-api-python-client ``` I have already upgraded all the modules that require google.auth - *googleads, google-auth-oauthlib, google-auth-httplib2, google-api-python-client* - but without success. I'm not sure what to try next to debug this issue. Can anyone here point me in the right direction?
2019/05/03
[ "https://Stackoverflow.com/questions/55975930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3793914/" ]
Your `google.auth` is installed in the system's Python site packages, not in your app: > > Location: /Users/.../Library/Python/2.7/lib/python/site-packages > > > You need to install your app's python dependencies inside your app instead - note the `-t lib/` pip option in the [Copying a third-party library](https://cloud.google.com/appengine/docs/standard/python/tools/using-libraries-python-27#copying_a_third-party_library) procedure you should follow: > > 2. Use [pip](https://pypi.python.org/pypi/pip) (version 6 or later) with the `-t <directory>` flag to copy the libraries into the folder you created in the previous step. > For example: > > > > ``` > pip install -t lib/ <library_name> > > ``` > > >
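The linked procedure has one more step worth sketching here: on the Python 2.7 standard runtime, the vendored folder also has to be registered on `sys.path` via an `appengine_config.py` file at the app root (a minimal sketch, assuming the folder is named `lib/` as in the pip command above):

```python
# appengine_config.py (app root) -- registers the vendored lib/ folder
# so that packages installed with `pip install -t lib/` become importable.
from google.appengine.ext import vendor

vendor.add('lib')
```

Without this file the dev server won't look inside `lib/` at all, and the `ImportError` persists even though the packages are present.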
After much trial and error, I found the bug: a Python runtime version issue. In my app.yaml file I had specified: ``` service: default runtime: python27 api_version: 1 threadsafe: false ``` I changed the runtime to: ``` runtime: python37 ``` Thanks to @AlassaneNdiaye for pointing me in this direction in the comments.
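For reference, the Python 3.7 runtime also drops `api_version` and `threadsafe`, which are Python 2.7-only settings, so the minimal `app.yaml` becomes (a sketch; the service name is carried over from the question):

```yaml
service: default
runtime: python37
```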
56,317,630
I am new to Python and I am working with a CSV file with over 10000 rows. In my CSV file there are many rows with the same id, which I would like to merge into one row, combining their information as well. For instance, data.csv looks like this (id and info are the names of the columns):

```
id| info
1112| storage is full and needs extra space
1112| there is many problems with space
1113| pickup cars come and take the garbage
1113| payment requires for the garbage
```

and I want to get the output as:

```
id| info
1112| storage is full and needs extra space there is many problems with space
1113| pickup cars come and take the garbage payment requires for the garbage
```

I already looked at a few posts such as [1](https://stackoverflow.com/questions/35182749/merging-rows-with-the-same-id-variable) [2](https://stackoverflow.com/questions/39646345/pandas-merging-rows-with-the-same-value-and-same-index) [3](https://stackoverflow.com/questions/41915138/how-can-i-merge-csv-rows-that-have-the-same-value-in-the-first-cell) but none of them helped me answer my question. It would be great if you could describe your help with Python code that I can run and learn from on my side. Thank you
2019/05/26
[ "https://Stackoverflow.com/questions/56317630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8921989/" ]
I can think of a simpler way:

```
some_dict = {}
for idt, txt in reader:  # iterate over (id, info) pairs from your CSV reader
    some_dict[idt] = some_dict.get(idt, "") + txt
```

This builds the structure you want without extra imports, and, I hope, in an efficient way. To explain: `get` takes a second argument, the value to return if the key isn't found in the dict. So for a new id it starts from an empty string and appends the text; for an existing id it appends the text to what is already there. **@Edit:** Here is a complete example with a reader:

```
import csv
import pandas as pd

some_dict = {}
with open('file.csv') as f:
    reader = csv.reader(f)
    for idt, info in reader:
        temp = some_dict.get(idt, "")
        some_dict[idt] = temp + " " + info if temp else info
print(some_dict)
df = pd.Series(some_dict).to_frame("Title of your column")
```

This is a full program that should work for you. However, it won't work if you have more than 2 columns in the file; in that case replace `idt, info` with `row` and use indexes for the first and second elements. **@Next Edit:** For more than 2 columns:

```
import csv
import pandas as pd

some_dict = {}
with open('file.csv') as f:
    reader = csv.reader(f)
    for row in reader:
        temp = some_dict.get(row[0], "")
        some_dict[row[0]] = temp + " " + row[1] if temp else row[1]
        # You can combine other columns here as well, e.g.:
        # another_dict[row[2]] = another_dict.get(row[2], "") + row[3]
print(some_dict)
df = pd.Series(some_dict).to_frame("Title of your column")
```
Just make a dictionary where the ids are keys:

```
from collections import defaultdict

by_id = defaultdict(list)
for id, info in your_list:
    by_id[id].append(info)

for key, value in by_id.items():
    print(key, value)
```
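The dictionary approach from the answers above can be sketched as a self-contained, runnable example. The sample rows and column names come from the question; the `|` delimiter is an assumption based on how the question formats its CSV.

```python
import csv
import io

# Inline sample shaped like the question's file ("|"-delimited).
raw = """id|info
1112|storage is full and needs extra space
1112|there is many problems with space
1113|pickup cars come and take the garbage
1113|payment requires for the garbage
"""

merged = {}
reader = csv.reader(io.StringIO(raw), delimiter="|")
next(reader)  # skip the header row
for row_id, info in reader:
    # Append to any text already collected for this id.
    merged[row_id] = merged[row_id] + " " + info if row_id in merged else info

for row_id, text in merged.items():
    print(row_id + "| " + text)
```

With a real file, replace the `io.StringIO(raw)` wrapper with `open('data.csv', newline='')`.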
25,012,031
I've migrated a Liferay 6.2-CE-GA2 server from Liferay 6.1.1-ce-ga2. I made a few changes in custom hooks and themes to adapt to the new version. Locally I have never had a problem with memory, not even with the 6.1 version, but once in production the server runs out of memory in a few hours. I tried adjusting heap parameters and increasing server memory (from 2GB to 3GB), but the heap keeps growing slowly and never stops, until I get an `OutOfMemory: Java heap space` or, if I grant the heap bigger limits, the system runs out of memory. I've spent some days studying catalina.out, trying to minimize warnings and errors, and these are the only interesting things I've seen during a shutdown-reboot process (I replaced all domain names in the logs): ``` [on shutdown] WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked. WARN: Please see http://www.slf4j.org/codes.html#release for an explanation. [...] 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [org.python.google.common.base.internal.Finalizer] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-26] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-27] but has failed to stop it. This is very likely to create a memory leak. [...] 
16:55:40,517 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. 
[Sanitized] 16:55:40,518 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] ERROR: duplicate key value violates unique constraint "ix_f4c61797" 16:55:40,536 ERROR [liferay/hot_deploy-1][SerialDestination:68] Unable to process message {destinationName=liferay/hot_deploy, response=null, responseDestinationName=null, responseId=null, payload=null, values={command=deploy, companyId=0, servletContextName=calendar-portlet}} com.liferay.portal.kernel.messaging.MessageListenerException: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at com.liferay.portal.kernel.messaging.BaseMessageListener.receive(BaseMessageListener.java:32) at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:72) at com.liferay.portal.kernel.messaging.SerialDestination$1.run(SerialDestination.java:65) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593) at java.lang.Thread.run(Thread.java:636) Caused by: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update [...] ... 5 more Caused by: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [...] ... 
62 more Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. [Sanitized] at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2621) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1837) [...] ... 
68 more ``` Then the server runs properly for 5 hours with some scattered warnings every few minutes: ``` 23:39:09,275 WARN [ajp-apr-8009-exec-20][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.com/rrrrrr25n/notadeprensa on 56_INSTANCE_rk7ADlb9Ui2w 23:43:51,234 WARN [ajp-apr-8009-exec-19][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.org/feiron/normas on 56_INSTANCE_jh6ewEPuvvjb 23:46:59,568 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/recursos-servizossociais on 56_INSTANCE_4eX2GzETiAQb 23:55:51,177 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/cans on 56_INSTANCE_4eX2GzETiAQb 00:00:13,713 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/rexistro on 56_INSTANCE_4eX2GzETiAQb 00:00:25,822 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/plenos on 110_INSTANCE_acNEFnslrX8c ``` And then the memory problems begin. 
I'm posting the first errors in the log: ``` Exception in thread "http-apr-8080-exec-4" java.lang.OutOfMemoryError: Java heap space 01:00:01,223 ERROR [MemoryQuartzSchedulerEngineInstance_Worker-3][SimpleThreadPool:120] Error while executing the Runnable: java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/modules" Exception in thread "ajp-apr-8009-AsyncTimeout" Exception in thread "ajp-apr-8009-exec-21" at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:441) at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:423) at java.util.HashMap.put(HashMap.java:402) Exception in thread "ajp-apr-8009-exec-24" at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:134) Exception in thread "MemoryQuartzSchedulerEngineInstance_QuartzSchedulerThread" Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:113) Exception in thread "ajp-apr-8009-exec-35" Exception in thread "ajp-apr-8009-exec-36" Exception in thread "ajp-apr-8009-exec-29" Exception in thread "ajp-apr-8009-exec-37" at sun.util.resources.OpenListResourceBundle.handleGetKeys(OpenListResourceBundle.java:91) at sun.util.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:353) Exception in thread "ajp-apr-8009-exec-33" Exception in thread "ajp-apr-8009-exec-34" Exception in thread "ajp-apr-8009-exec-30" at sun.util.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:284) Exception in thread "ajp-apr-8009-exec-28" Exception in thread "ajp-apr-8009-exec-31" Exception in thread "http-apr-8080-AsyncTimeout" Exception in thread "http-apr-8080-exec-5" Exception in thread "http-apr-8080-exec-2" Exception in thread "liferay/scheduler_dispatch-3" Exception in thread "ajp-apr-8009-exec-41" at 
sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:111) at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:99) at java.util.TimeZone.getDisplayNames(TimeZone.java:418) at java.util.TimeZone.getDisplayName(TimeZone.java:369) at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1110) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:899) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:869) at org.apache.tomcat.util.http.ServerCookie.appendCookieValue(ServerCookie.java:254) at org.apache.catalina.connector.Response.generateCookieString(Response.java:1032) at org.apache.catalina.connector.Response.addCookie(Response.java:974) at org.apache.catalina.connector.ResponseFacade.addCookie(ResponseFacade.java:381) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.HttpOnlyCookieServletResponse.addCookie(HttpOnlyCookieServletResponse.java:62) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) 01:02:56,362 ERROR [PersistedQuartzSchedulerEngineInstance_QuartzSchedulerThread][ErrorLogger:120] An error occurred while scanning for the next triggers to fire. 
org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'ds': java.lang.OutOfMemoryError: Java heap space [See nested exception: java.lang.OutOfMemoryError: Java heap space] at org.quartz.impl.jdbcjobstore.JobStoreSupport.getConnection(JobStoreSupport.java:771) at org.quartz.impl.jdbcjobstore.JobStoreTX.getNonManagedTXConnection(JobStoreTX.java:71) at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3808) at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTriggers(JobStoreSupport.java:2751) at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264) Caused by: java.lang.OutOfMemoryError: Java heap space at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:99) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:63) at com.liferay.portal.language.LanguageImpl.updateCookie(LanguageImpl.java:751) java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space 01:03:00,917 ERROR [QuartzScheduler_PersistedQuartzSchedulerEngineInstance-NON_CLUSTERED_MisfireHandler][PortalJobStore:120] MisfireHandler: Error handling misfires: Unexpected runtime exception: Index: 0, Size: 0 org.quartz.JobPersistenceException: Unexpected runtime exception: Index: 0, Size: 0 [See nested exception: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0] at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3200) at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:3947) at 
org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:3968) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.rangeCheck(ArrayList.java:571) at java.util.ArrayList.get(ArrayList.java:349) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1689) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:512) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:273) at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.countMisfiredTriggersInState(StdJDBCDelegate.java:416) at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3176) ... 2 more java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/portal" java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space ``` I have some custom themes and hooks, which I post below. I suspect there must be a memory leak somewhere in them, but I cannot find it. 
First, I have a custom `Application Display Template` for Blogs: ``` <div class="cr-blog container-fluid"> #foreach ($entry in $entries) <div class="entry-content"> #set ($viewUrl = $currentURL.replaceFirst("\?.*$","") + "/-/blogs/" + $entry.getUrlTitle()) #set($img_ini=$entry.content.indexOf("<img")) #if ($img_ini >= 0) #set($img_end=$entry.content.indexOf(">",$img_ini) + 1) #set($first_img_tag= $entry.content.substring($img_ini, $img_end)) #set($first_img_url=$first_img_tag.replaceFirst("<img.*src=\"","")) #set($first_img_url=$first_img_url.replaceFirst("\".*","")) #end <div class="entry-extract"> #if ($img_ini >= 0) <div class="extract-thumbnail"> <a href="$viewUrl"> <img src="$escapeTool.html($first_img_url)" /> </a> </div> #end <div class="extract-title"> <a href="$viewUrl"><span>$entry.title</span></a> </div> <div class="extract-content"> <a href="$viewUrl"> <span class="extract-date">$dateFormats.getSimpleDateFormat("dd MMMMM yyyy HH:mm", $locale).format($entry.displayDate)</span> #set($plain_content = $entry.content.replaceAll("</?[^>]+/?>", "")) #set($res_length = 240) #if ($res_length > $plain_content.length()) #set($res_length = $plain_content.length()) #end <p> $plain_content.substring(0,$res_length) ... </p> </a> </div> </div> </div> #end </div> ``` A custom hook, Blogshome, overrides some JSPs from blogs\_aggregator. 
view\_entries.jspf ``` <c:choose> <c:when test="<%= results.isEmpty() %>"> <liferay-ui:message key="there-are-no-blogs" /> <br /><br /> </c:when> <c:otherwise> <% if (displayStyle.startsWith("extract-side-events")) { List<BlogsEntry> eventsColumn = new ArrayList<BlogsEntry>(); List<BlogsEntry> mainColumn = new ArrayList<BlogsEntry>(); for (int i=0; i< results.size(); i++) { BlogsEntry entry = (BlogsEntry) results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } boolean isEvent = ((Boolean) entry.getExpandoBridge().getAttribute("evento")); if (isEvent) eventsColumn.add(entry); mainColumn.add(entry); /* change: add ALL to mainColumn; events are duplicated on side */ } /* reorder eventsColumn */ TreeMap<Date, BlogsEntry> next= new TreeMap<Date, BlogsEntry>(); List<BlogsEntry> toRemove = new ArrayList<BlogsEntry>(); for (BlogsEntry entry: eventsColumn) { Date ini = (Date) entry.getExpandoBridge().getAttribute("evento-inicio"); Date end = (Date) entry.getExpandoBridge().getAttribute("evento-remate"); Date now = new Date(); if (ini.before(now) && (end.after(now))) { next.put(end, entry); /* mainColumn.remove(entry); */ } else if (end.before(now)) { toRemove.add(entry); } else { next.put(ini, entry); /* mainColumn.remove(entry); */ } } eventsColumn.removeAll(toRemove); /* third rearrangement: current & next are visible; past are pushed to mainColumn */ /* current ordered by end; next ordered by ini */ ArrayList<BlogsEntry> lNext = new ArrayList<BlogsEntry>(next.values()); %> <div class="home-events"> <div class="events-showdown" id="events-showdown"> <% if (!lNext.isEmpty()) { for (BlogsEntry entry: lNext) { %> <div class="carousel-item"> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> </div> <% } } %> </div> <script> YUI().use('aui-carousel', function(Y) { new Y.Carousel( {contentBox: '#events-showdown', height: 320, width: 600, intervalTime: 5 }).render(); }); 
</script> <%-- <div class="home-events-nav"> <button type="button" class="btn btn-default btn-large" onclick="document.getElementById('events-showdown').style.right =50"> <span class="glyphicon glyphicon-chevron-right"></span> </button> </div> --%> </div> <% %> <div class="home-blogs container-fluid" > <% for (BlogsEntry entry: mainColumn) { %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> <% } %> </div> <% /* original blogs styles */ } else { for (int i = 0; i < results.size(); i++) { BlogsEntry entry = (BlogsEntry)results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_content.jspf" %> <% } } %> </c:otherwise> </c:choose> <c:if test="<%= enableRssSubscription %>"> <% StringBundler rssURLParams = new StringBundler(); if (selectionMethod.equals("users")) { if (organizationId > 0) { rssURLParams.append("&organizationId="); rssURLParams.append(organizationId); } else { rssURLParams.append("&companyId="); rssURLParams.append(company.getCompanyId()); } } else { rssURLParams.append("&groupId="); rssURLParams.append(themeDisplay.getScopeGroupId()); } %> <span class="button"> <liferay-ui:icon image="rss" label="<%= true %>" method="get" target="_blank" url='<%= themeDisplay.getPathMain() + "/blogs_aggregator/rss?p_l_id=" + plid + rssURLParams %>' /> </span> </c:if> <c:if test="<%= !results.isEmpty() %>"> <div class="search-container"> <liferay-ui:search-paginator searchContainer="<%= searchContainer %>" /> </div> </c:if> ``` view\_entry\_extract.jspf ``` <c:if test="<%= BlogsEntryPermission.contains(permissionChecker, entry, ActionKeys.VIEW) %>"> <div class="entry-content"> <% PortletURL showBlogEntryURL = renderResponse.createRenderURL(); showBlogEntryURL.setParameter("struts_action", "/blogs_aggregator/view_entry"); showBlogEntryURL.setParameter("entryId", String.valueOf(entry.getEntryId())); 
StringBundler sb = new StringBundler(8); StringBundler ab = new StringBundler(8); ab.append(themeDisplay.getURLPortal()); ab.append(GroupLocalServiceUtil.getGroup(entry.getGroupId()).getFriendlyURL()); ab.append("/-/blogs/"); ab.append(entry.getUrlTitle()); String viewEntryURL = ab.toString(); sb.append("&showAllEntries=1"); String viewAllEntriesURL = sb.toString(); User user2 = UserLocalServiceUtil.getUserById(entry.getUserId()); %> <div class="entry-header"> <c:if test='<%= (Boolean) entry.getExpandoBridge().getAttribute("evento")%>'> <div class="event-schedule"> <% Calendar iniDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); Calendar endDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); iniDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-inicio"))); endDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-remate"))); boolean sameDay = false; if ((iniDate.get(Calendar.DAY_OF_YEAR) == endDate.get(Calendar.DAY_OF_YEAR)) && (iniDate.get(Calendar.YEAR) == endDate.get(Calendar.YEAR))) sameDay = true; String diaDaSemana = (new SimpleDateFormat("EEEE", locale)).format(iniDate.getTime()); String numeroDeDia = (new SimpleDateFormat("d", locale)).format(iniDate.getTime()); String mes = (new SimpleDateFormat("MMMM", locale)).format(iniDate.getTime()); // String hora = (new SimpleDateFormat("HH:mm", locale)).format(iniDate.getTime()) + "h"; String hora = (iniDate.get(Calendar.HOUR_OF_DAY) + new SimpleDateFormat(":mm", locale).format(iniDate.getTime()) + "h"); String numeroDeDiaFin = StringPool.BLANK; String mesFin = StringPool.BLANK; if (!sameDay) { numeroDeDiaFin = (new SimpleDateFormat("d", locale)).format(endDate.getTime()); mesFin = (new SimpleDateFormat("MMMM", locale)).format(endDate.getTime()); } %> <% if (sameDay) { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana 
%></div> <div class="event-time"><%= hora %></div> </div> <% } else { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana %></div> <div class="event-time"><liferay-ui:message key="rrrrrr.events.until" /> <%= numeroDeDiaFin %> <liferay-ui:message key="rrrrrr.events.of" /> <%= mesFin %></div> </div> <% } %> </div> </c:if> </div> <div class="entry-extract"> <% String resumeText = StringPool.BLANK; String resumeImage = StringPool.BLANK; int extLength = 240; if (entry.isSmallImage()) { if (Validator.isNotNull(entry.getSmallImageURL())) resumeImage = entry.getSmallImageURL(); else resumeImage = themeDisplay.getPathImage() + "/journal/article?img_id=" + entry.getSmallImageId() + "&t=" + WebServerServletTokenUtil.getToken(entry.getSmallImageId()) ; resumeImage.trim(); } /* if no small image, extract first */ if ((resumeImage == null) || (resumeImage.isEmpty())) { java.util.regex.Pattern p = java.util.regex.Pattern.compile("src=['\"]([^'\"]+)['\"]"); java.util.regex.Matcher m = p.matcher(entry.getContent()); if (m.find()) resumeImage = m.group().substring(5, m.group().length() -1); } resumeText = HtmlUtil.stripHtml(entry.getDescription()); resumeText.trim(); /* if no resume description, extract text */ if ((resumeText == null) || (resumeText.isEmpty())) { resumeText = HtmlUtil.escape(StringUtil.shorten(HtmlUtil.extractText(entry.getContent()), extLength)); } %> <div class="extract-thumbnail"> <a href="<%= viewEntryURL %>" style="background-image: url('<%= HtmlUtil.escape(resumeImage) %>')"> <img class="asset-small-image" src="<%= HtmlUtil.escape(resumeImage) %>"/> </a> </div> <div class="extract-title"> <a href="<%= viewEntryURL %>"><%= HtmlUtil.escape(entry.getTitle()) %></a> </div> <div class="extract-content"> <a href="<%= viewEntryURL %>"> <span class="extract-scope"><%= GroupLocalServiceUtil.getGroup(entry.getGroupId()).getDescriptiveName() %></span> <span 
class="extract-date"><%= dateFormatDateTime.format(entry.getDisplayDate()) %></span> <span><%= " " + resumeText %></span> </a> </div> </div> </div> ``` I am unable to spot any memory leak there, but there has to be one! I've spent a few weeks trying to find the bug (deactivating hooks to see if the errors persisted) but couldn't get a clue. Does anybody see something potentially dangerous in my code? What other way can I use to trace Java memory usage and hunt for leaks?
2014/07/29
[ "https://Stackoverflow.com/questions/25012031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3837065/" ]
As it seems the `open` method doesn't update the `position` of the `infowindow`, you'll need to do it on your own (e.g. by binding the position of the infowindow to the position of the marker):

```
infowindow.unbind('position');
if (infowindow.getPosition() != this.getPosition()) {
    infowindow.bindTo('position', this, 'position');
    infowindow.open(map, this);
} else {
    infowindow.close();
    infowindow.setPosition(null);
}
```

Demo: [**http://jsfiddle.net/aBg3N/**](http://jsfiddle.net/aBg3N/) Another solution (but this one relies on an undocumented property, `anchor`):

```
if (infowindow.get('anchor') != this) {
    infowindow.open(map, this);
} else {
    infowindow.close();
}
```

Demo: [**http://jsfiddle.net/AZC3z/**](http://jsfiddle.net/AZC3z/) Both solutions will work with draggable markers, and also when you use the same infowindow for multiple markers.
I am not sure about `infowindow.getPosition()`, but you can try this code if you want to check whether the infowindow is open or not. JS: ``` function check(infoWindow) { var map = infoWindow.getMap(); return (map !== null && typeof map !== "undefined"); } ``` Pass the `infowindow` into the function and it will return `true` or `false` based on the visibility of the `infowindow`. Demo: <http://jsfiddle.net/lotusgodkk/x8dSP/3699/>
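Since the check above only relies on `getMap()`, the logic can be exercised without loading the Maps API at all — here with plain stub objects standing in for `google.maps.InfoWindow` (the stubs are purely for illustration):

```javascript
// Same check as in the answer: an InfoWindow counts as "open" while it
// is attached to a map, and "closed" once close() clears the map.
function check(infoWindow) {
  var map = infoWindow.getMap();
  return (map !== null && typeof map !== "undefined");
}

// Stub objects mimic the only method the check touches.
var openWindow = { getMap: function () { return {}; } };     // as if open(map, marker) had been called
var closedWindow = { getMap: function () { return null; } }; // as if close() had been called

console.log(check(openWindow));   // true
console.log(check(closedWindow)); // false
```

This also shows why the check is robust: it never touches `position` or `anchor`, only the map attachment.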
25,012,031
I've migrated a Liferay 6.2-CE-GA2 server from Liferay 6.1.1-ce-ga2. I made a few changes in custom hooks and themes to adapt them to the new version. Locally I never had a memory problem, not even with the 6.1 version, but once in production the server runs out of memory in a few hours. I tried adjusting the heap parameters and increasing the server memory (from 2GB to 3GB), but the heap seems to keep growing slowly and never stops, until I get an `OutOfMemory: Java heap space` or, if I grant the heap bigger limits, the system runs out of memory. I've spent some days studying catalina.out, trying to minimize warnings and errors, and these are the only interesting things I've seen during a shutdown-reboot process (I replaced all domain names in the logs): ``` [on shutdown] WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked. WARN: Please see http://www.slf4j.org/codes.html#release for an explanation. [...] 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [org.python.google.common.base.internal.Finalizer] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-26] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-27] but has failed to stop it. This is very likely to create a memory leak. [...] 
16:55:40,517 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. 
[Sanitized] 16:55:40,518 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] ERROR: duplicate key value violates unique constraint "ix_f4c61797" 16:55:40,536 ERROR [liferay/hot_deploy-1][SerialDestination:68] Unable to process message {destinationName=liferay/hot_deploy, response=null, responseDestinationName=null, responseId=null, payload=null, values={command=deploy, companyId=0, servletContextName=calendar-portlet}} com.liferay.portal.kernel.messaging.MessageListenerException: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at com.liferay.portal.kernel.messaging.BaseMessageListener.receive(BaseMessageListener.java:32) at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:72) at com.liferay.portal.kernel.messaging.SerialDestination$1.run(SerialDestination.java:65) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593) at java.lang.Thread.run(Thread.java:636) Caused by: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update [...] ... 5 more Caused by: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [...] ... 
62 more Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. [Sanitized] at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2621) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1837) [...] ... 
68 more ``` Then the server runs properly for 5 hours, with some scattered warnings every few minutes: ``` 23:39:09,275 WARN [ajp-apr-8009-exec-20][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.com/rrrrrr25n/notadeprensa on 56_INSTANCE_rk7ADlb9Ui2w 23:43:51,234 WARN [ajp-apr-8009-exec-19][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.org/feiron/normas on 56_INSTANCE_jh6ewEPuvvjb 23:46:59,568 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/recursos-servizossociais on 56_INSTANCE_4eX2GzETiAQb 23:55:51,177 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/cans on 56_INSTANCE_4eX2GzETiAQb 00:00:13,713 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/rexistro on 56_INSTANCE_4eX2GzETiAQb 00:00:25,822 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/plenos on 110_INSTANCE_acNEFnslrX8c ``` And then the memory problems begin. 
These are the first errors in the log: ``` Exception in thread "http-apr-8080-exec-4" java.lang.OutOfMemoryError: Java heap space 01:00:01,223 ERROR [MemoryQuartzSchedulerEngineInstance_Worker-3][SimpleThreadPool:120] Error while executing the Runnable: java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/modules" Exception in thread "ajp-apr-8009-AsyncTimeout" Exception in thread "ajp-apr-8009-exec-21" at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:441) at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:423) at java.util.HashMap.put(HashMap.java:402) Exception in thread "ajp-apr-8009-exec-24" at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:134) Exception in thread "MemoryQuartzSchedulerEngineInstance_QuartzSchedulerThread" Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:113) Exception in thread "ajp-apr-8009-exec-35" Exception in thread "ajp-apr-8009-exec-36" Exception in thread "ajp-apr-8009-exec-29" Exception in thread "ajp-apr-8009-exec-37" at sun.util.resources.OpenListResourceBundle.handleGetKeys(OpenListResourceBundle.java:91) at sun.util.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:353) Exception in thread "ajp-apr-8009-exec-33" Exception in thread "ajp-apr-8009-exec-34" Exception in thread "ajp-apr-8009-exec-30" at sun.util.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:284) Exception in thread "ajp-apr-8009-exec-28" Exception in thread "ajp-apr-8009-exec-31" Exception in thread "http-apr-8080-AsyncTimeout" Exception in thread "http-apr-8080-exec-5" Exception in thread "http-apr-8080-exec-2" Exception in thread "liferay/scheduler_dispatch-3" Exception in thread "ajp-apr-8009-exec-41" at 
sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:111) at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:99) at java.util.TimeZone.getDisplayNames(TimeZone.java:418) at java.util.TimeZone.getDisplayName(TimeZone.java:369) at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1110) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:899) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:869) at org.apache.tomcat.util.http.ServerCookie.appendCookieValue(ServerCookie.java:254) at org.apache.catalina.connector.Response.generateCookieString(Response.java:1032) at org.apache.catalina.connector.Response.addCookie(Response.java:974) at org.apache.catalina.connector.ResponseFacade.addCookie(ResponseFacade.java:381) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.HttpOnlyCookieServletResponse.addCookie(HttpOnlyCookieServletResponse.java:62) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) 01:02:56,362 ERROR [PersistedQuartzSchedulerEngineInstance_QuartzSchedulerThread][ErrorLogger:120] An error occurred while scanning for the next triggers to fire. 
org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'ds': java.lang.OutOfMemoryError: Java heap space [See nested exception: java.lang.OutOfMemoryError: Java heap space] at org.quartz.impl.jdbcjobstore.JobStoreSupport.getConnection(JobStoreSupport.java:771) at org.quartz.impl.jdbcjobstore.JobStoreTX.getNonManagedTXConnection(JobStoreTX.java:71) at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3808) at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTriggers(JobStoreSupport.java:2751) at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264) Caused by: java.lang.OutOfMemoryError: Java heap space at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:99) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:63) at com.liferay.portal.language.LanguageImpl.updateCookie(LanguageImpl.java:751) java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space 01:03:00,917 ERROR [QuartzScheduler_PersistedQuartzSchedulerEngineInstance-NON_CLUSTERED_MisfireHandler][PortalJobStore:120] MisfireHandler: Error handling misfires: Unexpected runtime exception: Index: 0, Size: 0 org.quartz.JobPersistenceException: Unexpected runtime exception: Index: 0, Size: 0 [See nested exception: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0] at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3200) at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:3947) at 
org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:3968) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.rangeCheck(ArrayList.java:571) at java.util.ArrayList.get(ArrayList.java:349) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1689) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:512) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:273) at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.countMisfiredTriggersInState(StdJDBCDelegate.java:416) at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3176) ... 2 more java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/portal" java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space ``` I have some custom themes and hooks, which I post below. I suspect there must be a memory leak somewhere in them, but I cannot find it. 
First, I have a custom `Application Display Template` for Blogs: ``` <div class="cr-blog container-fluid"> #foreach ($entry in $entries) <div class="entry-content"> #set ($viewUrl = $currentURL.replaceFirst("\?.*$","") + "/-/blogs/" + $entry.getUrlTitle()) #set($img_ini=$entry.content.indexOf("<img")) #if ($img_ini >= 0) #set($img_end=$entry.content.indexOf(">",$img_ini) + 1) #set($first_img_tag= $entry.content.substring($img_ini, $img_end)) #set($first_img_url=$first_img_tag.replaceFirst("<img.*src=\"","")) #set($first_img_url=$first_img_url.replaceFirst("\".*","")) #end <div class="entry-extract"> #if ($img_ini >= 0) <div class="extract-thumbnail"> <a href="$viewUrl"> <img src="$escapeTool.html($first_img_url)" /> </a> </div> #end <div class="extract-title"> <a href="$viewUrl"><span>$entry.title</span></a> </div> <div class="extract-content"> <a href="$viewUrl"> <span class="extract-date">$dateFormats.getSimpleDateFormat("dd MMMMM yyyy HH:mm", $locale).format($entry.displayDate)</span> #set($plain_content = $entry.content.replaceAll("</?[^>]+/?>", "")) #set($res_length = 240) #if ($res_length > $plain_content.length()) #set($res_length = $plain_content.length()) #end <p> $plain_content.substring(0,$res_length) ... </p> </a> </div> </div> </div> #end </div> ``` The custom hook Blogshome overrides some JSPs from blogs_aggregator. 
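For reference, the strip-tags-and-shorten step that the template performs with `replaceAll` and `substring` can be sketched as a standalone function (a hypothetical helper, shown in plain JavaScript rather than Velocity, using the template's own regex):

```javascript
// Same idea as the ADT above: remove HTML tags with the template's
// regex, then cut the plain text down to at most maxLen characters.
function extractSummary(html, maxLen) {
  var plain = html.replace(/<\/?[^>]+\/?>/g, "");
  var len = Math.min(maxLen, plain.length);
  return plain.substring(0, len);
}

console.log(extractSummary("<p>Hello <b>world</b></p>", 240)); // "Hello world"
```

Note this is a presentation-layer transformation done once per rendered entry; by itself it allocates only short-lived strings, so it is an unlikely leak candidate.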
view\_entries.jspf ``` <c:choose> <c:when test="<%= results.isEmpty() %>"> <liferay-ui:message key="there-are-no-blogs" /> <br /><br /> </c:when> <c:otherwise> <% if (displayStyle.startsWith("extract-side-events")) { List<BlogsEntry> eventsColumn = new ArrayList<BlogsEntry>(); List<BlogsEntry> mainColumn = new ArrayList<BlogsEntry>(); for (int i=0; i< results.size(); i++) { BlogsEntry entry = (BlogsEntry) results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } boolean isEvent = ((Boolean) entry.getExpandoBridge().getAttribute("evento")); if (isEvent) eventsColumn.add(entry); mainColumn.add(entry); /* change: add ALL to mainColumn; events are duplicated on side */ } /* reorder eventsColumn */ TreeMap<Date, BlogsEntry> next= new TreeMap<Date, BlogsEntry>(); List<BlogsEntry> toRemove = new ArrayList<BlogsEntry>(); for (BlogsEntry entry: eventsColumn) { Date ini = (Date) entry.getExpandoBridge().getAttribute("evento-inicio"); Date end = (Date) entry.getExpandoBridge().getAttribute("evento-remate"); Date now = new Date(); if (ini.before(now) && (end.after(now))) { next.put(end, entry); /* mainColumn.remove(entry); */ } else if (end.before(now)) { toRemove.add(entry); } else { next.put(ini, entry); /* mainColumn.remove(entry); */ } } eventsColumn.removeAll(toRemove); /* third rearrangement: current & next are visible; past are pushed to mainColumn */ /* current ordered by end; next ordered by ini */ ArrayList<BlogsEntry> lNext = new ArrayList<BlogsEntry>(next.values()); %> <div class="home-events"> <div class="events-showdown" id="events-showdown"> <% if (!lNext.isEmpty()) { for (BlogsEntry entry: lNext) { %> <div class="carousel-item"> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> </div> <% } } %> </div> <script> YUI().use('aui-carousel', function(Y) { new Y.Carousel( {contentBox: '#events-showdown', height: 320, width: 600, intervalTime: 5 }).render(); }); 
</script> <%-- <div class="home-events-nav"> <button type="button" class="btn btn-default btn-large" onclick="document.getElementById('events-showdown').style.right =50"> <span class="glyphicon glyphicon-chevron-right"></span> </button> </div> --%> </div> <% %> <div class="home-blogs container-fluid" > <% for (BlogsEntry entry: mainColumn) { %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> <% } %> </div> <% /* original blogs styles */ } else { for (int i = 0; i < results.size(); i++) { BlogsEntry entry = (BlogsEntry)results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_content.jspf" %> <% } } %> </c:otherwise> </c:choose> <c:if test="<%= enableRssSubscription %>"> <% StringBundler rssURLParams = new StringBundler(); if (selectionMethod.equals("users")) { if (organizationId > 0) { rssURLParams.append("&organizationId="); rssURLParams.append(organizationId); } else { rssURLParams.append("&companyId="); rssURLParams.append(company.getCompanyId()); } } else { rssURLParams.append("&groupId="); rssURLParams.append(themeDisplay.getScopeGroupId()); } %> <span class="button"> <liferay-ui:icon image="rss" label="<%= true %>" method="get" target="_blank" url='<%= themeDisplay.getPathMain() + "/blogs_aggregator/rss?p_l_id=" + plid + rssURLParams %>' /> </span> </c:if> <c:if test="<%= !results.isEmpty() %>"> <div class="search-container"> <liferay-ui:search-paginator searchContainer="<%= searchContainer %>" /> </div> </c:if> ``` view\_entry\_extract.jspf ``` <c:if test="<%= BlogsEntryPermission.contains(permissionChecker, entry, ActionKeys.VIEW) %>"> <div class="entry-content"> <% PortletURL showBlogEntryURL = renderResponse.createRenderURL(); showBlogEntryURL.setParameter("struts_action", "/blogs_aggregator/view_entry"); showBlogEntryURL.setParameter("entryId", String.valueOf(entry.getEntryId())); 
StringBundler sb = new StringBundler(8); StringBundler ab = new StringBundler(8); ab.append(themeDisplay.getURLPortal()); ab.append(GroupLocalServiceUtil.getGroup(entry.getGroupId()).getFriendlyURL()); ab.append("/-/blogs/"); ab.append(entry.getUrlTitle()); String viewEntryURL = ab.toString(); sb.append("&showAllEntries=1"); String viewAllEntriesURL = sb.toString(); User user2 = UserLocalServiceUtil.getUserById(entry.getUserId()); %> <div class="entry-header"> <c:if test='<%= (Boolean) entry.getExpandoBridge().getAttribute("evento")%>'> <div class="event-schedule"> <% Calendar iniDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); Calendar endDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); iniDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-inicio"))); endDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-remate"))); boolean sameDay = false; if ((iniDate.get(Calendar.DAY_OF_YEAR) == endDate.get(Calendar.DAY_OF_YEAR)) && (iniDate.get(Calendar.YEAR) == endDate.get(Calendar.YEAR))) sameDay = true; String diaDaSemana = (new SimpleDateFormat("EEEE", locale)).format(iniDate.getTime()); String numeroDeDia = (new SimpleDateFormat("d", locale)).format(iniDate.getTime()); String mes = (new SimpleDateFormat("MMMM", locale)).format(iniDate.getTime()); // String hora = (new SimpleDateFormat("HH:mm", locale)).format(iniDate.getTime()) + "h"; String hora = (iniDate.get(Calendar.HOUR_OF_DAY) + new SimpleDateFormat(":mm", locale).format(iniDate.getTime()) + "h"); String numeroDeDiaFin = StringPool.BLANK; String mesFin = StringPool.BLANK; if (!sameDay) { numeroDeDiaFin = (new SimpleDateFormat("d", locale)).format(endDate.getTime()); mesFin = (new SimpleDateFormat("MMMM", locale)).format(endDate.getTime()); } %> <% if (sameDay) { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana 
%></div> <div class="event-time"><%= hora %></div> </div> <% } else { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana %></div> <div class="event-time"><liferay-ui:message key="rrrrrr.events.until" /> <%= numeroDeDiaFin %> <liferay-ui:message key="rrrrrr.events.of" /> <%= mesFin %></div> </div> <% } %> </div> </c:if> </div> <div class="entry-extract"> <% String resumeText = StringPool.BLANK; String resumeImage = StringPool.BLANK; int extLength = 240; if (entry.isSmallImage()) { if (Validator.isNotNull(entry.getSmallImageURL())) resumeImage = entry.getSmallImageURL(); else resumeImage = themeDisplay.getPathImage() + "/journal/article?img_id=" + entry.getSmallImageId() + "&t=" + WebServerServletTokenUtil.getToken(entry.getSmallImageId()) ; resumeImage.trim(); } /* if no small image, extract first */ if ((resumeImage == null) || (resumeImage.isEmpty())) { java.util.regex.Pattern p = java.util.regex.Pattern.compile("src=['\"]([^'\"]+)['\"]"); java.util.regex.Matcher m = p.matcher(entry.getContent()); if (m.find()) resumeImage = m.group().substring(5, m.group().length() -1); } resumeText = HtmlUtil.stripHtml(entry.getDescription()); resumeText.trim(); /* if no resume description, extract text */ if ((resumeText == null) || (resumeText.isEmpty())) { resumeText = HtmlUtil.escape(StringUtil.shorten(HtmlUtil.extractText(entry.getContent()), extLength)); } %> <div class="extract-thumbnail"> <a href="<%= viewEntryURL %>" style="background-image: url('<%= HtmlUtil.escape(resumeImage) %>')"> <img class="asset-small-image" src="<%= HtmlUtil.escape(resumeImage) %>"/> </a> </div> <div class="extract-title"> <a href="<%= viewEntryURL %>"><%= HtmlUtil.escape(entry.getTitle()) %></a> </div> <div class="extract-content"> <a href="<%= viewEntryURL %>"> <span class="extract-scope"><%= GroupLocalServiceUtil.getGroup(entry.getGroupId()).getDescriptiveName() %></span> <span 
class="extract-date"><%= dateFormatDateTime.format(entry.getDisplayDate()) %></span> <span><%= " " + resumeText %></span> </a> </div> </div> </div> ``` I am unable to guess any memory leak there, but there has to be! I've spent few weeks trying to find a bug (deactivating hooks to see if errors persisted) but couldn't come to a clue. Does anybody see something potentially dangerous in my code? Wich other way can I trace Java memory usage to fetch for leaks?
2014/07/29
[ "https://Stackoverflow.com/questions/25012031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3837065/" ]
As it seems the `open` method doesn't update the `position` of the `infowindow`, you'll need to do it on your own (e.g. by binding the position of the infowindow to the position of the marker): ``` infowindow.unbind('position'); if(infowindow.getPosition() != this.getPosition()) { infowindow.bindTo('position',this,'position'); infowindow.open(map,this); } else { infowindow.close(); infowindow.setPosition(null); } ``` Demo: [**http://jsfiddle.net/aBg3N/**](http://jsfiddle.net/aBg3N/) Another solution (but this one relies on an undocumented property `anchor`): ``` if(infowindow.get('anchor') != this) { infowindow.open(map,this); } else { infowindow.close(); } ``` Demo: [**http://jsfiddle.net/AZC3z/**](http://jsfiddle.net/AZC3z/) Both solutions will work with draggable markers, and also when you use the same infowindow for multiple markers.
Apparently the google.maps.InfoWindow won't have a position unless you set one. If I do this: ``` var infowindow = new google.maps.InfoWindow({ content: '<div id="content">test</div>', position: new google.maps.LatLng(0,0) }); ``` [fiddle](http://jsfiddle.net/5yj2a/25/) or: ``` var infowindow = new google.maps.InfoWindow({ content: '<div id="content">test</div>', position: marker.getPosition() }); ``` `infowindow.getPosition()` returns the value set. Unfortunately, that can't be used to detect whether the infowindow is currently visible, so your test won't work as expected.
17,066,347
I have this issue with Titanium Studio: I can't compile my project for Android. I tried to Run or Debug the project, but I get this message: ``` Titanium Command-Line Interface, CLI version 3.1.0, Titanium SDK version 3.1.0.GA Copyright (c) 2012-2013, Appcelerator, Inc. All Rights Reserved. [INFO] : Running emulator process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "emulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "armeabi" [INFO] : Running build process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "simulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "/127.0.0.1:49314" [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyApp for Android ... one moment [INFO] Titanium SDK version: 3.1.0 (04/15/13 18:45 57634ef) [ERROR] : Emulator process exited with code 1 [INFO] : Project built successfully in 5s 421ms [INFO] : Emulator not running, exiting... ``` The emulator does not start and no APK file is built in the bin folder. I have the Android 2.2 and 4.2.2 SDKs installed. I tried everything (cleaning the project, even uninstalling and reinstalling Titanium Studio). I built this project with Titanium 2.1.4; now I'm using 3.1.0 and I get this error message. 
In tiapp.xml, if I choose to run the project with the Titanium 2.1.4 SDK, I get these messages: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Launching Android emulator...one moment [INFO] Creating new Android Virtual Device (2 WVGA854) [ERROR] Exception occured while building Android project: [ERROR] Traceback (most recent call last): [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 2282, in <module> [ERROR] s.run_emulator(avd_id, avd_skin, avd_name, avd_abi, add_args) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 523, in run_emulator [ERROR] avd_name = self.create_avd(avd_id, avd_skin, avd_abi) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 485, in create_avd [ERROR] inifilec = open(inifile,'r').read() [ERROR] IOError: [Errno 2] No such file or directory: 'C:\\Users\\Dev\\.android\\avd\\titanium_2_WVGA854.avd\\config.ini' ``` And then: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyAppfor Android ... one moment [INFO] Titanium SDK version: 2.1.4 (11/09/12 12:46 51f2c64) [ERROR] Application Installer abnormal process termination. Process exit value was 1 [ERROR] Timed out waiting for emulator to be ready, you may need to close the emulator and try again ``` No emulators are running and no APKs are built. If anyone has an idea... I'm using Windows 7 64-bit. Maybe I missed something during the configuration. Thank you for your help.
2013/06/12
[ "https://Stackoverflow.com/questions/17066347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486286/" ]
If this happens with the Kitchen Sink demo, the fix is to go into the Android SDK Manager and install "Android 3.0 (API 11)". Make sure the app uses emulator "Google APIs (Android 2.3.3)" and "WVGA854". I assume there's a Titanium bug because you have to install a higher API level (3.0) than is actually used (2.3.3). Using exactly these settings, Kitchen Sink works as expected.
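To double-check the SDK Manager step, the old standalone `android` tool (an assumption: the pre-`sdkmanager` SDK tooling in use in 2013) can list which platform targets are actually installed:

```shell
# List installed Android platform targets; "android-11" should appear
# once "Android 3.0 (API 11)" has been installed, alongside the
# "Google APIs (Android 2.3.3)" target used by the emulator.
android list targets
```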
Did you read the [System Requirements](http://docs.appcelerator.com/titanium/latest/#!/guide/Quick_Start-section-29004949_QuickStart-SystemRequirements)? From the documentation: > For Windows, the 32-bit version of Java JDK is required regardless of whether Titanium is running on a 32-bit or 64-bit system. Try installing the 32-bit version of Java in addition (without removing the 64-bit one) and set the system variable. Maybe this will help you.
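One quick way to see which JDK Titanium will pick up (a sketch for the Windows command prompt; the install path below is hypothetical, adjust to your machine):

```bat
:: A 64-bit JDK announces itself as "64-Bit Server VM" in its banner;
:: if you see that, Titanium on Windows is pointed at the wrong Java.
java -version

:: Point JAVA_HOME (and PATH) at a 32-bit JDK instead:
set JAVA_HOME=C:\Program Files (x86)\Java\jdk1.6.0_45
set PATH=%JAVA_HOME%\bin;%PATH%
```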
17,066,347
I have this issue with Titanium Studio: I can't compile my project for Android. I tried to Run or Debug the project, but I get this message: ``` Titanium Command-Line Interface, CLI version 3.1.0, Titanium SDK version 3.1.0.GA Copyright (c) 2012-2013, Appcelerator, Inc. All Rights Reserved. [INFO] : Running emulator process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "emulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "armeabi" [INFO] : Running build process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "simulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "/127.0.0.1:49314" [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyApp for Android ... one moment [INFO] Titanium SDK version: 3.1.0 (04/15/13 18:45 57634ef) [ERROR] : Emulator process exited with code 1 [INFO] : Project built successfully in 5s 421ms [INFO] : Emulator not running, exiting... ``` The emulator does not start and no APK file is built in the bin folder. I have the Android 2.2 and 4.2.2 SDKs installed. I tried everything (cleaning the project, even uninstalling and reinstalling Titanium Studio). I built this project with Titanium 2.1.4; now I'm using 3.1.0 and I get this error message. 
In tiapp.xml, if I choose to run the project with the Titanium 2.1.4 SDK, I get these messages: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Launching Android emulator...one moment [INFO] Creating new Android Virtual Device (2 WVGA854) [ERROR] Exception occured while building Android project: [ERROR] Traceback (most recent call last): [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 2282, in <module> [ERROR] s.run_emulator(avd_id, avd_skin, avd_name, avd_abi, add_args) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 523, in run_emulator [ERROR] avd_name = self.create_avd(avd_id, avd_skin, avd_abi) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 485, in create_avd [ERROR] inifilec = open(inifile,'r').read() [ERROR] IOError: [Errno 2] No such file or directory: 'C:\\Users\\Dev\\.android\\avd\\titanium_2_WVGA854.avd\\config.ini' ``` And then: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyAppfor Android ... one moment [INFO] Titanium SDK version: 2.1.4 (11/09/12 12:46 51f2c64) [ERROR] Application Installer abnormal process termination. Process exit value was 1 [ERROR] Timed out waiting for emulator to be ready, you may need to close the emulator and try again ``` No emulators are running and no APKs are built. If anyone has an idea... I'm using Win7 64-bit. Maybe I missed something during the configuration. Thank you for your help.
2013/06/12
[ "https://Stackoverflow.com/questions/17066347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486286/" ]
If this happens with the Kitchen Sink demo, the fix is to go into the Android SDK Manager and install "Android 3.0 (API 11)". Make sure the app uses emulator "Google APIs (Android 2.3.3)" and "WVGA854". I assume there's a Titanium bug because you have to install a higher API level (3.0) than is actually used (2.3.3). Using exactly these settings, Kitchen Sink works as expected.
Answer 1 : ========== It seems the build tools got moved to another directory with the latest Android SDK update. I created symlinks to aapt and dx in /Applications/Android-sdk/platform-tools: ``` ln -s /Applications/Android-sdk/build-tools/17.0.0/aapt aapt ln -s /Applications/Android-sdk/build-tools/17.0.0/dx dx ``` This solved it for me (after some digging in their Python code). Answer 2 : ========== I'm on Windows, so I used mklink. I had to add a link to lib/dx.jar for it to work. What I did was first add a 'lib' folder to the platform-tools folder, and then in the command line: ``` cd %YOUR_ANDROID_DIR%\platform-tools mklink aapt.exe ..\build-tools\android-4.2.2\aapt.exe mklink dx.bat ..\build-tools\android-4.2.2\dx.bat cd lib mklink dx.bat ..\..\build-tools\android-4.2.2\lib\dx.jar ``` Answer 3 : ========== I copied the following files: ``` C:\Android\build-tools\17.0.0\aapt.exe to C:\Android\platform-tools\aapt.exe C:\Android\build-tools\17.0.0\dx.bat to C:\Android\platform-tools\dx.bat C:\Android\build-tools\17.0.0\lib to C:\Android\platform-tools\lib ``` I then cleaned the project and rebuilt, and everything is now working. Source here : <http://developer.appcelerator.com/question/152497/titanium-sdk-310-error-typeerror-argument-of-type-nonetype-is-not-iterable-on-building-android-app#comment-175782>
17,066,347
I have this issue with Titanium Studio: I can't compile my project for Android. When I try to Run or Debug the project, I get this message: ``` Titanium Command-Line Interface, CLI version 3.1.0, Titanium SDK version 3.1.0.GA Copyright (c) 2012-2013, Appcelerator, Inc. All Rights Reserved. [INFO] : Running emulator process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "emulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "armeabi" [INFO] : Running build process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "simulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "/127.0.0.1:49314" [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyApp for Android ... one moment [INFO] Titanium SDK version: 3.1.0 (04/15/13 18:45 57634ef) [ERROR] : Emulator process exited with code 1 [INFO] : Project built successfully in 5s 421ms [INFO] : Emulator not running, exiting... ``` The emulator does not start and no APK file is built in the bin folder. I have the Android 2.2 and 4.2.2 SDKs installed. I tried everything (cleaning the project, even uninstalling and reinstalling Titanium Studio). I built this project with Titanium 2.1.4; now I'm using 3.1.0 and I get this error message.
In tiapp.xml, if I choose to run the project with the Titanium 2.1.4 SDK, I get these messages: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Launching Android emulator...one moment [INFO] Creating new Android Virtual Device (2 WVGA854) [ERROR] Exception occured while building Android project: [ERROR] Traceback (most recent call last): [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 2282, in <module> [ERROR] s.run_emulator(avd_id, avd_skin, avd_name, avd_abi, add_args) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 523, in run_emulator [ERROR] avd_name = self.create_avd(avd_id, avd_skin, avd_abi) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 485, in create_avd [ERROR] inifilec = open(inifile,'r').read() [ERROR] IOError: [Errno 2] No such file or directory: 'C:\\Users\\Dev\\.android\\avd\\titanium_2_WVGA854.avd\\config.ini' ``` And then: ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyAppfor Android ... one moment [INFO] Titanium SDK version: 2.1.4 (11/09/12 12:46 51f2c64) [ERROR] Application Installer abnormal process termination. Process exit value was 1 [ERROR] Timed out waiting for emulator to be ready, you may need to close the emulator and try again ``` No emulators are running and no APKs are built. If anyone has an idea... I'm using Win7 64-bit. Maybe I missed something during the configuration. Thank you for your help.
2013/06/12
[ "https://Stackoverflow.com/questions/17066347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486286/" ]
If this happens with the Kitchen Sink demo, the fix is to go into the Android SDK Manager and install "Android 3.0 (API 11)". Make sure the app uses emulator "Google APIs (Android 2.3.3)" and "WVGA854". I assume there's a Titanium bug because you have to install a higher API level (3.0) than is actually used (2.3.3). Using exactly these settings, Kitchen Sink works as expected.
I had a similar problem; when I was trying to run a project on Android I got: ``` [ERROR] : Emulator process exited with code 1 [ERROR] : Build process exited with code 1 [ERROR] : Project failed to build after 234ms [ERROR] Application Installer abnormal process termination. Process exit value was 1 ``` I tried compiling with previous Android SDKs (2.3.3, 2.1, 2.2) because the app claimed to be compatible with them, but no luck. The solution was to delete/change this tag/line within the Android tag in tiapp.xml: ``` <tool-api-level>15</tool-api-level> ``` It was pointing to API 15 (Android SDK 4.0.3), which I hadn't installed. Personally, I've deleted that line. That solution worked for Kitchen Sink too; you must delete/change the same tag/line mentioned above. Now I have it built for API 10 (Android SDK 2.3.3), the one I use.
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
I had the same issue and just running the below command solved it for me: ``` easy_install pip ```
The short answer is: don't do this. Use `setup.py` or a straight import statement. [Here is why this doesn't work with pip and how to get around it if necessary.](https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program) `pip` affects the whole environment. Depending on who is running this and why, they may or may not want to install `requests` in the environment they're running your script in. It could be a nasty surprise that running your script affects their Python environment. Installing it as a package (using `python setup.py` or `pip install`) is a different matter. There are well-established ways to install other packages using `requirements.txt` and `setup.py`, and it is expected that installing a package will install its dependencies. [You can read more in the python.org packaging tutorials.](https://packaging.python.org/tutorials/distributing-packages/) If your script has dependencies but people don't need to install it, you should tell them in the `README.rst` and/or `requirements.txt`, **or** simply include the import statement, and when they get the error, they'll know what to do. Let them control which environment installs which packages.
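If you go the plain-import route that this answer suggests, a short startup check can turn bare ImportErrors into one actionable message. A minimal sketch — the package names here are stdlib placeholders standing in for whatever your `requirements.txt` would actually list:

```python
import importlib

# Placeholders: substitute the real dependencies your script needs.
REQUIRED = ["csv", "json"]

def missing_packages(required):
    # Return the subset of package names that cannot be imported.
    missing = []
    for name in required:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

missing = missing_packages(REQUIRED)
if missing:
    raise SystemExit("Missing packages: %s -- run 'pip install -r requirements.txt'"
                     % ", ".join(missing))
```

This keeps the environment under the user's control: the script reports what it needs instead of silently installing it.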
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip developers do not recommend calling pip from within a program, and the pip.main() method was removed in pip v10. As an alternative, it is recommended to execute pip in a subprocess. <https://pip.pypa.io/en/stable/user_guide/?highlight=_internal#using-pip-from-your-program> ``` try: import requests except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'requests==2.0.1', 'PyYAML==3.11']) import requests ```
I had the same issue and just running the below command solved it for me: ``` easy_install pip ```
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
I had the same issue and just running the below command solved it for me: ``` easy_install pip ```
The pip.main function was moved, not removed, by the pip devs. The highest-voted solution here is not good: going from Python -> shell -> Python is bad practice when you can just run the Python code directly. Try `from pip._internal import main`; then you can use that main function to execute your pip calls as before.
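A sketch of that version-tolerant import. Note that `pip._internal` is explicitly unsupported API, so the fallback chain below is an assumption that may break with future pip releases:

```python
def get_pip_main():
    # Resolve pip's main entry point across versions (internal, unstable API).
    try:
        from pip._internal.cli.main import main  # newer pip (>= 20)
    except ImportError:
        try:
            from pip._internal import main  # pip 10 .. 19
        except ImportError:
            from pip import main  # pip 9 and earlier
    return main

# Usage (not executed here):
# get_pip_main()(["install", "-q", "requests==2.0.1"])
```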
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
I had the same issue and just running the below command solved it for me: ``` easy_install pip ```
You can define a function to install a lib if needed. It is convenient. ``` #%% # IMPORTS import pip def import_or_install(package): try: __import__(package) except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', package]) __import__(package) import_or_install("numpy") import_or_install("matplotlib") import numpy as np import matplotlib.pyplot as plt #%% # Your code print("Hello") ```
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip developers do not recommend calling pip from within a program, and the pip.main() method was removed in pip v10. As an alternative, it is recommended to execute pip in a subprocess. <https://pip.pypa.io/en/stable/user_guide/?highlight=_internal#using-pip-from-your-program> ``` try: import requests except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'requests==2.0.1', 'PyYAML==3.11']) import requests ```
The short answer is don't do this. Use `setup.py` or a straight import statement. [Here is why this doesn't work with pip and how to get around it if necessary.](https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program) `pip` affects the whole environment. Depending on who is running this and why, they may or may not want to install `requests` in the environment they're running your script in. It could be a nasty surprise that running your script affects their python environment. Installing it as a package (using `python setup.py` or `pip install`) is a different matter. There are well-established ways to install other packages using `requirements.txt` and `setup.py`. It is also expected that installing a package will install its dependencies. [You can read more in the python.org packaging tutorials](https://packaging.python.org/tutorials/distributing-packages/) If your script has dependencies but people don't need to install it, you should tell them in the `README.rst` and/or `requirements.txt`. **or** simply include the import statement, and when they get the error, they'll know what to do. Let them control what environment installs which packages.
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip.main function was moved, not removed, by the pip devs. The highest-voted solution here is not good: going from Python -> shell -> Python is bad practice when you can just run the Python code directly. Try `from pip._internal import main`; then you can use that main function to execute your pip calls as before.
The short answer is don't do this. Use `setup.py` or a straight import statement. [Here is why this doesn't work with pip and how to get around it if necessary.](https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program) `pip` affects the whole environment. Depending on who is running this and why, they may or may not want to install `requests` in the environment they're running your script in. It could be a nasty surprise that running your script affects their python environment. Installing it as a package (using `python setup.py` or `pip install`) is a different matter. There are well-established ways to install other packages using `requirements.txt` and `setup.py`. It is also expected that installing a package will install its dependencies. [You can read more in the python.org packaging tutorials](https://packaging.python.org/tutorials/distributing-packages/) If your script has dependencies but people don't need to install it, you should tell them in the `README.rst` and/or `requirements.txt`. **or** simply include the import statement, and when they get the error, they'll know what to do. Let them control what environment installs which packages.
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip developers do not recommend calling pip from within a program, and the pip.main() method was removed in pip v10. As an alternative, it is recommended to execute pip in a subprocess. <https://pip.pypa.io/en/stable/user_guide/?highlight=_internal#using-pip-from-your-program> ``` try: import requests except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'requests==2.0.1', 'PyYAML==3.11']) import requests ```
The pip.main function was moved, not removed, by the pip devs. The highest-voted solution here is not good: going from Python -> shell -> Python is bad practice when you can just run the Python code directly. Try `from pip._internal import main`; then you can use that main function to execute your pip calls as before.
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip developers do not recommend calling pip from within a program, and the pip.main() method was removed in pip v10. As an alternative, it is recommended to execute pip in a subprocess. <https://pip.pypa.io/en/stable/user_guide/?highlight=_internal#using-pip-from-your-program> ``` try: import requests except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'requests==2.0.1', 'PyYAML==3.11']) import requests ```
You can define a function to install a lib if needed. It is convenient. ``` #%% # IMPORTS import pip def import_or_install(package): try: __import__(package) except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', package]) __import__(package) import_or_install("numpy") import_or_install("matplotlib") import numpy as np import matplotlib.pyplot as plt #%% # Your code print("Hello") ```
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main()` for that. Below is a code snippet: ``` try: import requests except: import pip pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11']) import requests ``` I have tried importing main from pip.\_internal and using pipmain instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
The pip.main function was moved, not removed, by the pip devs. The highest-voted solution here is not good: going from Python -> shell -> Python is bad practice when you can just run the Python code directly. Try `from pip._internal import main`; then you can use that main function to execute your pip calls as before.
You can define a function to install a lib if needed. It is convenient. ``` #%% # IMPORTS import pip def import_or_install(package): try: __import__(package) except: import sys import subprocess subprocess.check_call([sys.executable, '-m', 'pip', 'install', package]) __import__(package) import_or_install("numpy") import_or_install("matplotlib") import numpy as np import matplotlib.pyplot as plt #%% # Your code print("Hello") ```
3,215,455
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails, and I would like to calculate recommendations using Python. So essentially, the Python code would get the data (probably from the DB), calculate all the stuff, and update the tables. Is it possible, and what do you think about the advantages/disadvantages? Thanks
2010/07/09
[ "https://Stackoverflow.com/questions/3215455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
If you are offloading work to an external process, you may want to make this a web service (AJAX, perhaps) of some sort so that you have a consistent interface. Otherwise, you could always execute the Python script in a subshell through Ruby, using stdin/stdout/argv, but this can get ugly quickly.
I would use the system command as such ``` system("python myscript.py") ```
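For the subshell route both answers describe, a minimal sketch of the Python side of such a pipe: it reads a JSON payload on stdin and writes results to stdout, so the Ruby side can drive it with backticks or popen and parse the output. The scoring logic and all field names here are hypothetical stand-ins for real recommendation code:

```python
import json
import sys

def recommend(payload):
    # Hypothetical scoring: rank items by view count (stand-in for real logic)
    # and return the top three ids.
    items = payload.get("items", [])
    ranked = sorted(items, key=lambda it: it.get("views", 0), reverse=True)
    return {"recommendations": [it["id"] for it in ranked[:3]]}

if __name__ == "__main__":
    # Read one JSON document from stdin, write one JSON document to stdout.
    print(json.dumps(recommend(json.load(sys.stdin))))
```

Keeping the interface to plain JSON over stdin/stdout is what makes this approach less ugly: neither side needs to know anything about the other's language.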
3,215,455
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails, and I would like to calculate recommendations using Python. So essentially, the Python code would get the data (probably from the DB), calculate all the stuff, and update the tables. Is it possible, and what do you think about the advantages/disadvantages? Thanks
2010/07/09
[ "https://Stackoverflow.com/questions/3215455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
If you are offloading work to an external process, you may want to make this a web service (AJAX, perhaps) of some sort so that you have a consistent interface. Otherwise, you could always execute the Python script in a subshell through Ruby, using stdin/stdout/argv, but this can get ugly quickly.
An easy, quick 'n' dirty solution, in case you have Python scripts and want to execute them from inside Rails, is this: `%x[python path/of/pythonscript.py #{ruby variables to pass on the script}]`, or the same command wrapped in backticks (a backtick at the beginning and the end). Put the above inside a controller and it will execute. For some reason, inside Ruby on Rails, the system and exec commands didn't work for me (exec crashed my application and system didn't do anything).
3,215,455
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails, and I would like to calculate recommendations using Python. So essentially, the Python code would get the data (probably from the DB), calculate all the stuff, and update the tables. Is it possible, and what do you think about the advantages/disadvantages? Thanks
2010/07/09
[ "https://Stackoverflow.com/questions/3215455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
Depending on your exact needs, you can either call out to an external process (using popen, system, etc.) or set up another mini web server or something along those lines and have the Rails server communicate with it over HTTP with a REST-style API (or whatever best suits your needs). In your example, you have a Ruby frontend website and a number-crunching Python backend service that builds up recommendation data for the Ruby site. A fairly nice solution is to have the Ruby site send an HTTP request to the Python service when it needs data updated (with a payload identifying what needs doing to what, or some such); the Python backend can then crunch away and update the table, and your Ruby frontend will presumably pick up the changes automatically on the next request and display them.
I would use the system command as such ``` system("python myscript.py") ```
3,215,455
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails, and I would like to calculate recommendations using Python. So essentially, the Python code would get the data (probably from the DB), calculate all the stuff, and update the tables. Is it possible, and what do you think about the advantages/disadvantages? Thanks
2010/07/09
[ "https://Stackoverflow.com/questions/3215455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
Depending on your exact needs, you can either call out to an external process (using popen, system, etc.) or set up another mini web server or something along those lines and have the Rails server communicate with it over HTTP with a REST-style API (or whatever best suits your needs). In your example, you have a Ruby frontend website and a number-crunching Python backend service that builds up recommendation data for the Ruby site. A fairly nice solution is to have the Ruby site send an HTTP request to the Python service when it needs data updated (with a payload identifying what needs doing to what, or some such); the Python backend can then crunch away and update the table, and your Ruby frontend will presumably pick up the changes automatically on the next request and display them.
An easy, quick 'n' dirty solution, in case you have Python scripts and want to execute them from inside Rails, is this: `%x[python path/of/pythonscript.py #{ruby variables to pass on the script}]`, or the same command wrapped in backticks (a backtick at the beginning and the end). Put the above inside a controller and it will execute. For some reason, inside Ruby on Rails, the system and exec commands didn't work for me (exec crashed my application and system didn't do anything).
48,160,819
I want to write a python program to process csv sheets, the total numbers of rows and cols are different each time. One of things I want to do is to delete columns containing a specific string. ``` import csv input = open("1.csv","rb") reader = csv.reader(input) output = open("2.csv","wb") writer = csv.writer(output) index = -1 for row in reader: for item in row: if item == str('string'): index = row.index(item) print(index) ... ``` Update:I rewrite the code thanks to [tuan-huynh](https://stackoverflow.com/users/2305843/tuan-huynh/), but this code only works for the first column containing "string".
2018/01/09
[ "https://Stackoverflow.com/questions/48160819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9190725/" ]
You can find the column index with this code and then delete that column. I tested it and it works: ``` import csv with open("SampleCSVFile_2kb.csv","rb") as source: rdr = csv.reader(source) with open("result","wb") as result: wtr = csv.writer(result) index = -1 for r in rdr: for item in r: if item == 'string': index = r.index(item) ```
Assume csv looks like ``` name,color,price apple,red,10 banana,yellow,5 ``` ``` import csv with open(file_path, "r") as f: file = csv.reader(f) for line in file: print(line[0], line[1], line[2]) ``` print out would be ``` name color price apple red 10 banana yellow 5 ```
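Building on the answers above, a sketch of the full task from the question — drop every column that contains a given string. It uses in-memory CSV text (`io.StringIO`) so the example is self-contained; swap those objects for real file handles in practice:

```python
import csv
import io

def drop_columns_containing(text, target):
    # Parse all rows, collect the indices of columns where any cell equals
    # the target string, then rewrite the rows without those columns.
    rows = list(csv.reader(io.StringIO(text)))
    bad = {i for row in rows for i, cell in enumerate(row) if cell == target}
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in rows:
        writer.writerow([c for i, c in enumerate(row) if i not in bad])
    return out.getvalue()
```

Because the column indices are collected over all rows before anything is written, this handles multiple matching columns, not just the first one.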
48,892,348
I have a for loop in Python, and at the end of each step I want the output to be added as a new column in a CSV file. The output is a 40x1 array, so if the for loop consists of 100 steps, I want to end up with a CSV file with 100 columns and 40 rows. What I have now, at the end of each step, is the following: ``` with open( 'File name.csv','w') as output: writer=csv.writer(output, lineterminator='\n') for val in myvector: writer.writerow([val]) ``` However, this creates a separate CSV file with 40 rows and 1 column each time. How can I add them all as different columns in the same CSV file? This would save me a lot of computation time, so any help would be very much appreciated.
2018/02/20
[ "https://Stackoverflow.com/questions/48892348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6756920/" ]
The insight would be that when `serverUrl` is truthy, you don't need the `switch` at all - you always return the same value that was switched upon. So don't do the test in every switch `case`, but do it once before that: ``` function checkField(str: string) : string { if (serverUrl === 'abc') return str.toLowerCase(); else switch (str.toLowerCase()) { case 'code': return 'CODE'; case 'webid': return 'Webid'; case 'pkid': return 'PkId'; case 'barcode': return 'Barcode'; case 'price': return 'Price'; case 'bestbefore': return 'BestBefore'; case 'produce': return 'Produce'; case 'sales': return 'Sales'; case 'marketid': return "MarketId"; case 'regdate': return "Regdate"; //and more fields default: return str; } } ``` Instead of the `switch` statement, you can also use an object literal or a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) as a lookup table.
Borrowing a bit from @Bergi's answer, I would create a mapping object to make it a little cleaner. E.g.: ``` function checkField(str: string) : string { //create a mapping var myMapping = { 'code' : 'CODE', 'webid' : 'Webid', 'pkid' : 'PkId', 'barcode': 'Barcode', //and more fields } if (serverUrl === 'abc') { return str.toLowerCase(); } else { return myMapping[str.toLowerCase()] || str; } } ``` This keeps the logic a little more separate from the mapping, so it feels cleaner to me. But that's a personal preference.
48,892,348
I have a for loop in Python, and at the end of each step I want the output to be added as a new column in a CSV file. The output is a 40x1 array, so if the for loop consists of 100 steps, I want to end up with a CSV file with 100 columns and 40 rows. What I have now, at the end of each step, is the following: ``` with open( 'File name.csv','w') as output: writer=csv.writer(output, lineterminator='\n') for val in myvector: writer.writerow([val]) ``` However, this creates a separate CSV file with 40 rows and 1 column each time. How can I add them all as different columns in the same CSV file? This would save me a lot of computation time, so any help would be very much appreciated.
2018/02/20
[ "https://Stackoverflow.com/questions/48892348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6756920/" ]
The insight would be that when `serverUrl` is truthy, you don't need the `switch` at all - you always return the same value that was switched upon. So don't do the test in every switch `case`, but do it once before that: ``` function checkField(str: string) : string { if (serverUrl === 'abc') return str.toLowerCase(); else switch (str.toLowerCase()) { case 'code': return 'CODE'; case 'webid': return 'Webid'; case 'pkid': return 'PkId'; case 'barcode': return 'Barcode'; case 'price': return 'Price'; case 'bestbefore': return 'BestBefore'; case 'produce': return 'Produce'; case 'sales': return 'Sales'; case 'marketid': return "MarketId"; case 'regdate': return "Regdate"; //and more fields default: return str; } } ``` Instead of the `switch` statement, you can also use an object literal or a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) as a lookup table.
You can put all the values in a map. ``` function checkField(str: string): string { const map = new Map([["code", "CODE"], ["webid", "Webid"]]); if (serverUrl === 'abc') return str.toLowerCase(); else return map.get(str.toLowerCase()) ?? str; } ``` Note that `Map` lookups use `.get()`, not bracket indexing, and falling back to `str` keeps the function total for unmapped fields.
28,334,966
I am trying to open an Excel file (.xls) using xlrd. This is a summary of the code I am using: ``` import xlrd workbook = xlrd.open_workbook('thefile.xls') ``` This works for most files, but fails for files I get from a specific organization. The error I get when I try to open Excel files from this organization follows. ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/__init__.py", line 435, in open_workbook ragged_rows=ragged_rows, File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 116, in open_workbook_xls bk.parse_globals() File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1180, in parse_globals self.handle_writeaccess(data) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1145, in handle_writeaccess strg = unpack_unicode(data, 0, lenlen=2) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/biffh.py", line 303, in unpack_unicode strg = unicode(rawstrg, 'utf_16_le') File "/app/.heroku/python/lib/python2.7/encodings/utf_16_le.py", line 16, in decode return codecs.utf_16_le_decode(input, errors, True) UnicodeDecodeError: 'utf16' codec can't decode byte 0x40 in position 104: truncated data ``` This looks as if xlrd is trying to open an Excel file encoded in something other than UTF-16. How can I avoid this error? Is the file being written in a flawed way, or is there just a specific character that is causing the problem? If I open and re-save the Excel file, xlrd opens the file without a problem. I have tried opening the workbook with different encoding overrides but this doesn't work either. The file I am trying to open is available here: <https://dl.dropboxusercontent.com/u/6779408/Stackoverflow/AEPUsageHistoryDetail_RequestID_00183816.xls> Issue reported here: <https://github.com/python-excel/xlrd/issues/128>
2015/02/05
[ "https://Stackoverflow.com/questions/28334966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/382374/" ]
What are they using to generate that file? Some Java Excel API (see below, [link here](http://jexcelapi.sourceforge.net/)), probably on an IBM mainframe or similar. From the stack trace, the WRITEACCESS information can't be decoded as Unicode because of the `@` characters. For more information on the WRITEACCESS record of the XLS file format see [5.112 WRITEACCESS](https://www.openoffice.org/sc/excelfileformat.pdf) or [Page 277](http://www.digitalpreservation.gov/formats/digformatspecs/Excel97-2007BinaryFileFormat%28xls%29Specification.pdf). This field contains the username of the user that saved the file. ``` import xlrd xlrd.dump('thefile.xls') ``` Running `xlrd.dump` on the original file gives ``` 36: 005c WRITEACCESS len = 0070 (112) 40: d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40 ????@?????@???@@ 56: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 72: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 88: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 104: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 120: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 136: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ ``` After resaving it with Excel (or, in my case, LibreOffice Calc) the write access information is overwritten with something like ``` 36: 005c WRITEACCESS len = 0070 (112) 40: 04 00 00 43 61 6c 63 20 20 20 20 20 20 20 20 20 ?~~Calc 56: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 72: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 88: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 104: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 120: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 136: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 ``` Based on the spaces being encoded as `40`, I believe the encoding is EBCDIC, and when we convert `d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40` to EBCDIC we get `Java Excel API`.
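That conversion is easy to check with Python 3's built-in EBCDIC codec (`cp037` corresponds to EBCDIC 037):

```python
# The WRITEACCESS bytes from the dump above, decoded with Python's
# built-in EBCDIC codec (cp037 is EBCDIC code page 037)
raw = bytes([0xd1, 0x81, 0xa5, 0x81, 0x40, 0xc5, 0xa7, 0x83,
             0x85, 0x93, 0x40, 0xc1, 0xd7, 0xc9, 0x40, 0x40])
decoded = raw.decode('cp037')
print(decoded)  # -> 'Java Excel API  ' (0x40 is the EBCDIC space)
```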
So yes, the file is being written in a flawed way: in BIFF8 and higher this field should be a Unicode string, while in BIFF3 to BIFF5 it should be a byte string in the encoding given by the CODEPAGE record, which is ``` 152: 0042 CODEPAGE len = 0002 (2) 156: 12 52 ?R ``` 1252 is Windows CP-1252 (Latin I) (BIFF4-BIFF5), which is not [EBCDIC\_037](http://en.wikipedia.org/wiki/EBCDIC_037). The fact that xlrd tried to use Unicode means that it determined the version of the file to be BIFF8. In this case, you have two options 1. Fix the file before opening it with xlrd. You could check it by dumping to a file that isn't standard out, and then, if needed, overwrite the WRITEACCESS information with xlutils.save or another library. 2. Patch [xlrd](https://github.com/python-excel/xlrd/blob/9429b4c1cd479830b1ecb08a5e7639244ef8dbcf/xlrd/book.py#L1136-L1148) to handle your special case: in `handle_writeaccess`, add a try block and set `strg` to an empty string when `unpack_unicode` fails. The following snippet ``` def handle_writeaccess(self, data): DEBUG = 0 if self.biff_version < 80: if not self.encoding: self.raw_user_name = True self.user_name = data return strg = unpack_string(data, 0, self.encoding, lenlen=1) else: try: strg = unpack_unicode(data, 0, lenlen=2) except: strg = "" if DEBUG: fprintf(self.logfile, "WRITEACCESS: %d bytes; raw=%s %r\n", len(data), self.raw_user_name, strg) strg = strg.rstrip() self.user_name = strg ``` combined with ``` workbook=xlrd.open_workbook('thefile.xls',encoding_override="cp1252") ``` seems to open the file successfully. Without the encoding override it complains `ERROR *** codepage 21010 -> encoding 'unknown_codepage_21010' -> LookupError: unknown encoding: unknown_codepage_21010`
This worked for me. ``` import xlrd my_xls = xlrd.open_workbook('//myshareddrive/something/test.xls',encoding_override="gb2312") ```
21,867,596
I'm a little new to web parsing in python. I am using beautiful soup. I would like to create a list by parsing strings from a webpage. I've looked around and can't seem to find the right answer. Does anyone know how to create a list of strings from a web page? Any help is appreciated. My code is something like this: ``` from BeautifulSoup import BeautifulSoup import urllib2 url="http://www.any_url.com" page=urllib2.urlopen(url) soup = BeautifulSoup(page.read()) #The data I need is coming from HTML tag of td page_find=soup.findAll('td') for page_data in page_find: print page_data.string #I tried to create my list here page_List = [page_data.string] print page_List ```
2014/02/18
[ "https://Stackoverflow.com/questions/21867596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2278570/" ]
Having difficulty understanding what you are trying to achieve... If you want all values of `page_data.string` in `page_List`, then your code should look like this: ``` page_List = [] for page_data in page_find: page_List.append(page_data.string) ``` Or using a list comprehension: ``` page_List = [page_data.string for page_data in page_find] ``` The problem with your original code is that you create the list using the text from the last `td` element only (i.e. outside of the loop which processes each `td` element).
Here is a version that first reads the web page into a string, then parses it with lxml: ``` import requests the_web_page_as_a_string = requests.get(some_path).content from lxml import html myTree = html.fromstring(the_web_page_as_a_string) td_list = [e for e in myTree.iter() if e.tag == 'td'] text_list = [] for td_e in td_list: text = td_e.text_content() text_list.append(text) ```
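For completeness, here is a standard-library-only sketch of the same td-text collection, using `html.parser` on an inline example document (no network access assumed):

```python
from html.parser import HTMLParser

class TdCollector(HTMLParser):
    """Collects the text content of every <td> element into a list."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells.append(data)

p = TdCollector()
p.feed('<table><tr><td>a</td><td>b</td></tr></table>')
print(p.cells)  # -> ['a', 'b']
```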
21,867,596
I'm a little new to web parsing in python. I am using beautiful soup. I would like to create a list by parsing strings from a webpage. I've looked around and can't seem to find the right answer. Does anyone know how to create a list of strings from a web page? Any help is appreciated. My code is something like this: ``` from BeautifulSoup import BeautifulSoup import urllib2 url="http://www.any_url.com" page=urllib2.urlopen(url) soup = BeautifulSoup(page.read()) #The data I need is coming from HTML tag of td page_find=soup.findAll('td') for page_data in page_find: print page_data.string #I tried to create my list here page_List = [page_data.string] print page_List ```
2014/02/18
[ "https://Stackoverflow.com/questions/21867596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2278570/" ]
I'd recommend lxml over BeautifulSoup; when you start scraping a lot of pages, the speed advantage of lxml is hard to ignore. ``` import requests import lxml.html dom = lxml.html.fromstring(requests.get('http://www.any_url.com').content) page_list = [x for x in dom.xpath('//td/text()')] print page_list ```
Here is a version that first reads the web page into a string, then parses it with lxml: ``` import requests the_web_page_as_a_string = requests.get(some_path).content from lxml import html myTree = html.fromstring(the_web_page_as_a_string) td_list = [e for e in myTree.iter() if e.tag == 'td'] text_list = [] for td_e in td_list: text = td_e.text_content() text_list.append(text) ```
21,867,596
I'm a little new to web parsing in python. I am using beautiful soup. I would like to create a list by parsing strings from a webpage. I've looked around and can't seem to find the right answer. Does anyone know how to create a list of strings from a web page? Any help is appreciated. My code is something like this: ``` from BeautifulSoup import BeautifulSoup import urllib2 url="http://www.any_url.com" page=urllib2.urlopen(url) soup = BeautifulSoup(page.read()) #The data I need is coming from HTML tag of td page_find=soup.findAll('td') for page_data in page_find: print page_data.string #I tried to create my list here page_List = [page_data.string] print page_List ```
2014/02/18
[ "https://Stackoverflow.com/questions/21867596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2278570/" ]
Having difficulty understanding what you are trying to achieve... If you want all values of `page_data.string` in `page_List`, then your code should look like this: ``` page_List = [] for page_data in page_find: page_List.append(page_data.string) ``` Or using a list comprehension: ``` page_List = [page_data.string for page_data in page_find] ``` The problem with your original code is that you create the list using the text from the last `td` element only (i.e. outside of the loop which processes each `td` element).
I'd recommend lxml over BeautifulSoup; when you start scraping a lot of pages, the speed advantage of lxml is hard to ignore. ``` import requests import lxml.html dom = lxml.html.fromstring(requests.get('http://www.any_url.com').content) page_list = [x for x in dom.xpath('//td/text()')] print page_list ```
16,815,170
So this is probably a very basic question about output formatting in python using '.format' and since I'm a beginner, I can't figure this out for the life of me. I've tried to be as detailed as possible, just to make sure that there's no confusion. Let me give you an example so that you can better understand my dilemma. Consider the following program ``` list = (['wer', 'werwe', 'werwe' ,'wer we']) # list[0], list[1], list[2], list[3] list.append(['vbcv', 'cvnc', 'bnfhn', 'mjyh']) # list[4] list.append(['yth', 'rnhn', 'mjyu', 'mujym']) # list[5] list.append(['cxzz', 'bncz', 'nhrt', 'qweq']) # list[6] first = 'bill' last = 'gates' print ('{:10} {:10} {:10} {:10}'.format(first,last,list[5], list[6])) ``` Understandably that would give the output: ``` bill gates ['yth', 'rnhn', 'mjyu', 'mujym'] ['cxzz', 'bncz', 'nhrt', 'qweq'] ``` So here's my real question. I was doing this practice problem from the book and I don't understand the answer. The program below will give you a good idea of what kind of output we are going for: ``` students = [] students.append(['DeMoines', 'Jim', 'Sophomore', 3.45]) #students[0] students.append(['Pierre', 'Sophie', 'Sophomore', 4.0]) #students[1] students.append(['Columbus', 'Maria', 'Senior', 2.5]) #students[2] students.append(['Phoenix', 'River', 'Junior', 2.45]) #students[3] students.append(['Olympis', 'Edgar', 'Junior', 3.99]) #students[4] students.append(['van','john', 'junior', 3.56]) #students[5] def Grades(students): print ('Last First Standing GPA') for students in students: print('{0:10} {1:10} {2:10} {3:8.2f}'.format(students[0],students[1],students[2],students[3])) ``` The output we're trying to get is a kind of a table that gives all the stats for all the students - ``` Last First Standing GPA DeMoines Jim Sophomore 3.45 Pierre Sophie Sophomore 4.00 Columbus Maria Senior 2.50 Phoenix River Junior 2.45 Olympis Edgar Junior 3.99 van john junior 3.56 ``` So here's what I don't understand. 
We are working with basically the same thing in the two examples i.e. a list inside a list. For my first example, the print statement was: ``` print('{:10} {:10} {:10} {:10}'.format(first, last, list[5], list[6])) ``` where `list[5]` and `list[6]` are lists themselves and they are printed in entirety, as you can see from the output. **But that doesn't happen in the book problem**. There, the print statement says ``` print('{0:10} {1:10} {2:10} {3:8.2f}'.format(students[0], students[1], students[2], students[3])) ``` As you can see from the table output, here `students[0]` refers only to **'DeMoines'**. But if you just run the statement `students[0]` in the Python interpreter, it gives the whole sub list, as it should. ``` ['DeMoines', 'Jim', 'Sophomore', 3.45] ``` So, basically, I've got two questions, why does `students[0]` have two different meanings and why does `students[0]` not print the whole list like we did with `list[5]` and `list[6]`?
2013/05/29
[ "https://Stackoverflow.com/questions/16815170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2396553/" ]
Look at the *for loop*: ``` for students in students: # ^^^^^^^^ ``` Inside the loop, `students` no longer refers to the **list of lists**; it is rebound to the current **sub-list** on each iteration. So `students[0]` refers to the **first element of that sub-list** (e.g. `'DeMoines'`), as expected. I suggest renaming the function argument, say, to `all_students` or something like that.
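A minimal demonstration of the rebinding, with hypothetical two-student data:

```python
students = [['DeMoines', 'Jim'], ['Pierre', 'Sophie']]

firsts = []
for students in students:       # rebinds the name 'students' each iteration
    firsts.append(students[0])  # first element of the current sub-list

# After the loop, 'students' names the LAST sub-list,
# not the original list of lists.
print(students)  # -> ['Pierre', 'Sophie']
```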
Try renaming the variable `list` into something that's not a reserved word or built-in function or type. What's confusing to beginners, and it happens to everyone sooner or later, is what happens when you redefine a builtin or use it in unintended ways. If you do ``` list = [1, 2, 3, 4] ``` you re-bind the name `list` so it no longer points to the builtin `list` data type but to the actual list `[1, 2, 3, 4]` in the current scope. That is almost always not what you intend to do. Using a variable `dir` is a similar pitfall. Also, do not use additional parentheses `()` around the square brackets of the list assignment. Something like `words = ['wer', 'werwe', 'werwe', 'wer we']` suffices. Generally, consider which names you choose for a variable. `students` is descriptive, helpful commentary; `list` is not. Also, if `list` currently holds a list, your algorithm might later be changed so that the variable holds a `set` or another container type. Then a type-based variable name will even be misleading.
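A small demonstration of what shadowing the builtin does, and how `del` restores it:

```python
list = [1, 2, 3, 4]   # rebinds the name 'list', shadowing the builtin type
try:
    list('abc')       # fails: 'list' is now a plain list object, not callable
    shadowed = False
except TypeError:
    shadowed = True

del list              # remove the binding; the builtin type is visible again
restored = list('ab')
print(restored)  # -> ['a', 'b']
```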
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You may try mlxtend, which offers various selection methods. ``` from sklearn.linear_model import LinearRegression from mlxtend.feature_selection import SequentialFeatureSelector as sfs clf = LinearRegression() # Build step forward feature selection sfs1 = sfs(clf, k_features=10, forward=True, floating=False, scoring='r2', cv=5) # Perform SFFS sfs1 = sfs1.fit(X_train, y_train) ```
You can make forward-backward selection based on `statsmodels.api.OLS` model, as shown [in this answer](https://datascience.stackexchange.com/a/24447/24162). However, [this answer](https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection/20856#20856) describes why you should not use stepwise selection for econometric models in the first place.
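For intuition only, here is a minimal greedy forward-selection sketch in plain numpy; it selects by residual sum of squares rather than by p-values, and the function and data names are made up:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the column that most
    reduces the residual sum of squares of an intercept+features fit."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = ((y - A @ beta) ** 2).sum()
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 2] + 0.1 * rng.normal(size=100)  # column 2 drives the response
selected = forward_select(X, y, 2)
print(selected)  # column 2 is picked first
```

A real stepwise procedure would also apply an entry/exit criterion (p-value, AIC, or BIC) instead of always adding k features.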
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
I developed this repository <https://github.com/xinhe97/StepwiseSelectionOLS> My Stepwise Selection Classes (best subset, forward stepwise, backward stepwise) are compatible with sklearn. You can do Pipeline and GridSearchCV with my Classes. The essential part of my code is as follows: ``` ################### Criteria ################### def processSubset(self, X,y,feature_index): # Fit model on feature_set and calculate rsq_adj regr = sm.OLS(y,X[:,feature_index]).fit() rsq_adj = regr.rsquared_adj bic = self.myBic(X.shape[0], regr.mse_resid, len(feature_index)) rsq = regr.rsquared return {"model":regr, "rsq_adj":rsq_adj, "bic":bic, "rsq":rsq, "predictors_index":feature_index} ################### Forward Stepwise ################### def forward(self,predictors_index,X,y): # Pull out predictors we still need to process remaining_predictors_index = [p for p in range(X.shape[1]) if p not in predictors_index] results = [] for p in remaining_predictors_index: new_predictors_index = predictors_index+[p] new_predictors_index.sort() results.append(self.processSubset(X,y,new_predictors_index)) # Wrap everything up in a nice dataframe models = pd.DataFrame(results) # Choose the model with the highest rsq_adj # best_model = models.loc[models['bic'].idxmin()] best_model = models.loc[models['rsq'].idxmax()] # Return the best model, along with model's other information return best_model def forwardK(self,X_est,y_est, fK): models_fwd = pd.DataFrame(columns=["model", "rsq_adj", "bic", "rsq", "predictors_index"]) predictors_index = [] M = min(fK,X_est.shape[1]) for i in range(1,M+1): print(i) models_fwd.loc[i] = self.forward(predictors_index,X_est,y_est) predictors_index = models_fwd.loc[i,'predictors_index'] print(models_fwd) # best_model_fwd = models_fwd.loc[models_fwd['bic'].idxmin(),'model'] best_model_fwd = models_fwd.loc[models_fwd['rsq'].idxmax(),'model'] # best_predictors = models_fwd.loc[models_fwd['bic'].idxmin(),'predictors_index'] best_predictors = models_fwd.loc[models_fwd['rsq'].idxmax(),'predictors_index'] return best_model_fwd, best_predictors ```
``` """Importing the api module from statsmodels""" import statsmodels.api as sm """X_opt variable has all the columns of independent variables of matrix X in this case we have 5 independent variables""" X_opt = X[:,[0,1,2,3,4]] """Running the OLS method on X_opt and storing results in regressor_OLS""" regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit() regressor_OLS.summary() ``` Using the summary method, you can check the p-values of your variables, shown as 'P>|t|'. Then check for the variable with the highest p-value. Suppose x3 has the highest value, e.g. 0.956. Then remove this column from your array and repeat all the steps. ``` X_opt = X[:,[0,1,3,4]] regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit() regressor_OLS.summary() ``` Repeat these steps until you have removed all the columns with a p-value higher than the significance level (e.g. 0.05). In the end your variable X\_opt will have all the optimal variables with p-values less than the significance level.
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You may try mlxtend which got various selection methods. ``` from mlxtend.feature_selection import SequentialFeatureSelector as sfs clf = LinearRegression() # Build step forward feature selection sfs1 = sfs(clf,k_features = 10,forward=True,floating=False, scoring='r2',cv=5) # Perform SFFS sfs1 = sfs1.fit(X_train, y_train) ```
Here's a method I just wrote that uses "mixed selection" as described in Introduction to Statistical Learning. As input, it takes: * lm, a statsmodels.OLS.fit(Y,X), where X is an array of n ones, where n is the number of data points, and Y, where Y is the response in the training data * curr\_preds- a list with ['const'] * potential\_preds- a list of all potential predictors. There also needs to be a pandas dataframe X\_mix that has all of the data, including 'const', and all of the data corresponding to the potential predictors * tol, optional. The max pvalue, .05 if not specified ``` def mixed_selection (lm, curr_preds, potential_preds, tol = .05): while (len(potential_preds) > 0): index_best = -1 # this will record the index of the best predictor curr = -1 # this will record current index best_r_squared = lm.rsquared_adj # record the r squared of the current model # loop to determine if any of the predictors can better the r-squared for pred in potential_preds: curr += 1 # increment current preds = curr_preds.copy() # grab the current predictors preds.append(pred) lm_new = sm.OLS(y, X_mix[preds]).fit() # create a model with the current predictors plus an additional potential predictor new_r_sq = lm_new.rsquared_adj # record r squared for new model if new_r_sq > best_r_squared: best_r_squared = new_r_sq index_best = curr if index_best != -1: # a potential predictor improved the r-squared; remove it from potential_preds and add it to current_preds curr_preds.append(potential_preds.pop(index_best)) else: # none of the remaining potential predictors improved the adjusted r-squared; exit loop break # fit a new lm using the new predictors, look at the p-values pvals = sm.OLS(y, X_mix[curr_preds]).fit().pvalues pval_too_big = [] # make a list of all the p-values that are greater than the tolerance for feat in pvals.index: if(pvals[feat] > tol and feat != 'const'): # if the pvalue is too large, add it to the list of big pvalues pval_too_big.append(feat) # now remove all the features from curr_preds that have a p-value that is too large for feat in pval_too_big: pop_index = curr_preds.index(feat) curr_preds.pop(pop_index) ```
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
Trevor Smith and I wrote a little forward selection function for linear regression with statsmodels: <http://planspace.org/20150423-forward_selection_with_statsmodels/> You could easily modify it to minimize a p-value, or select based on beta p-values with just a little more work.
``` """Importing the api module from statsmodels""" import statsmodels.api as sm """X_opt variable has all the columns of independent variables of matrix X in this case we have 5 independent variables""" X_opt = X[:,[0,1,2,3,4]] """Running the OLS method on X_opt and storing results in regressor_OLS""" regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit() regressor_OLS.summary() ``` Using the summary method, you can check the p-values of your variables, shown as 'P>|t|'. Then check for the variable with the highest p-value. Suppose x3 has the highest value, e.g. 0.956. Then remove this column from your array and repeat all the steps. ``` X_opt = X[:,[0,1,3,4]] regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit() regressor_OLS.summary() ``` Repeat these steps until you have removed all the columns with a p-value higher than the significance level (e.g. 0.05). In the end your variable X\_opt will have all the optimal variables with p-values less than the significance level.
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You can make forward-backward selection based on `statsmodels.api.OLS` model, as shown [in this answer](https://datascience.stackexchange.com/a/24447/24162). However, [this answer](https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection/20856#20856) describes why you should not use stepwise selection for econometric models in the first place.
Here's a method I just wrote that uses "mixed selection" as described in Introduction to Statistical Learning. As input, it takes: * lm, a statsmodels.OLS.fit(Y,X), where X is an array of n ones, where n is the number of data points, and Y, where Y is the response in the training data * curr\_preds- a list with ['const'] * potential\_preds- a list of all potential predictors. There also needs to be a pandas dataframe X\_mix that has all of the data, including 'const', and all of the data corresponding to the potential predictors * tol, optional. The max pvalue, .05 if not specified ``` def mixed_selection (lm, curr_preds, potential_preds, tol = .05): while (len(potential_preds) > 0): index_best = -1 # this will record the index of the best predictor curr = -1 # this will record current index best_r_squared = lm.rsquared_adj # record the r squared of the current model # loop to determine if any of the predictors can better the r-squared for pred in potential_preds: curr += 1 # increment current preds = curr_preds.copy() # grab the current predictors preds.append(pred) lm_new = sm.OLS(y, X_mix[preds]).fit() # create a model with the current predictors plus an additional potential predictor new_r_sq = lm_new.rsquared_adj # record r squared for new model if new_r_sq > best_r_squared: best_r_squared = new_r_sq index_best = curr if index_best != -1: # a potential predictor improved the r-squared; remove it from potential_preds and add it to current_preds curr_preds.append(potential_preds.pop(index_best)) else: # none of the remaining potential predictors improved the adjusted r-squared; exit loop break # fit a new lm using the new predictors, look at the p-values pvals = sm.OLS(y, X_mix[curr_preds]).fit().pvalues pval_too_big = [] # make a list of all the p-values that are greater than the tolerance for feat in pvals.index: if(pvals[feat] > tol and feat != 'const'): # if the pvalue is too large, add it to the list of big pvalues pval_too_big.append(feat) # now remove all the features from curr_preds that have a p-value that is too large for feat in pval_too_big: pop_index = curr_preds.index(feat) curr_preds.pop(pop_index) ```
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You can make forward-backward selection based on `statsmodels.api.OLS` model, as shown [in this answer](https://datascience.stackexchange.com/a/24447/24162). However, [this answer](https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection/20856#20856) describes why you should not use stepwise selection for econometric models in the first place.
Statsmodels has additional methods for regression: <http://statsmodels.sourceforge.net/devel/examples/generated/example_ols.html>. I think it will help you to implement stepwise regression.
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SCIPY but I am not able to do stepwise. Any help in this regard would be a great help. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. Following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
```
"""Importing the OLS API from statsmodels"""
import statsmodels.api as sm

"""X_opt variable has all the columns of independent variables of matrix X;
in this case we have 5 independent variables"""
X_opt = X[:, [0, 1, 2, 3, 4]]

"""Running the OLS method on X_opt and storing results in regressor_OLS"""
regressor_OLS = sm.OLS(endog=y, exog=X_opt).fit()
regressor_OLS.summary()
```

Using the summary method, you can check in your kernel the p-values of your variables, listed as 'P>|t|'. Then check for the variable with the highest p-value. Suppose x3 has the highest value, e.g. 0.956. Then remove this column from your array and repeat all the steps.

```
X_opt = X[:, [0, 1, 3, 4]]
regressor_OLS = sm.OLS(endog=y, exog=X_opt).fit()
regressor_OLS.summary()
```

Repeat these steps until you have removed all the columns with a p-value higher than the significance level (e.g. 0.05). In the end your variable X\_opt will hold the optimal variables, all with p-values below the significance level.
Here's a method I just wrote that uses "mixed selection" as described in Introduction to Statistical Learning. As input, it takes:

* lm, a fitted `sm.OLS(Y, X).fit()` model, where X is an array of n ones (n being the number of data points) and Y is the response in the training data
* curr\_preds - a list with ['const']
* potential\_preds - a list of all potential predictors. There also needs to be a pandas dataframe X\_mix that has all of the data, including 'const', and all of the data corresponding to the potential predictors
* tol, optional. The max p-value, .05 if not specified

```
def mixed_selection(lm, curr_preds, potential_preds, tol=.05):
    while len(potential_preds) > 0:
        index_best = -1  # this will record the index of the best predictor
        curr = -1  # this will record the current index
        best_r_squared = lm.rsquared_adj  # record the r squared of the current model
        # loop to determine if any of the predictors can better the r-squared
        for pred in potential_preds:
            curr += 1  # increment current
            preds = curr_preds.copy()  # grab the current predictors
            preds.append(pred)
            # create a model with the current predictors plus an additional potential predictor
            lm_new = sm.OLS(y, X_mix[preds]).fit()
            new_r_sq = lm_new.rsquared_adj  # record r squared for new model
            if new_r_sq > best_r_squared:
                best_r_squared = new_r_sq
                index_best = curr
        if index_best != -1:
            # a potential predictor improved the r-squared;
            # remove it from potential_preds and add it to curr_preds
            curr_preds.append(potential_preds.pop(index_best))
        else:
            # none of the remaining potential predictors improved the adjusted r-squared; exit loop
            break
        # fit a new lm using the new predictors and look at the p-values;
        # refitting here also makes the next iteration compare against the updated model
        lm = sm.OLS(y, X_mix[curr_preds]).fit()
        pvals = lm.pvalues
        # make a list of all the p-values that are greater than the tolerance
        pval_too_big = []
        for feat in pvals.index:
            if pvals[feat] > tol and feat != 'const':
                # the p-value is too large, so add it to the list of big p-values
                pval_too_big.append(feat)
        # now remove all the features from curr_preds that have a p-value that is too large
        for feat in pval_too_big:
            pop_index = curr_preds.index(feat)
            curr_preds.pop(pop_index)
```
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SciPy, but I am not able to do stepwise regression. Any help in this regard would be much appreciated. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and, using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. The following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
Trevor Smith and I wrote a little forward selection function for linear regression with statsmodels: <http://planspace.org/20150423-forward_selection_with_statsmodels/> You could easily modify it to minimize a p-value, or select based on beta p-values with just a little more work.
Here's a method I just wrote that uses "mixed selection" as described in Introduction to Statistical Learning. As input, it takes:

* lm, a fitted `sm.OLS(Y, X).fit()` model, where X is an array of n ones (n being the number of data points) and Y is the response in the training data
* curr\_preds - a list with ['const']
* potential\_preds - a list of all potential predictors. There also needs to be a pandas dataframe X\_mix that has all of the data, including 'const', and all of the data corresponding to the potential predictors
* tol, optional. The max p-value, .05 if not specified

```
def mixed_selection(lm, curr_preds, potential_preds, tol=.05):
    while len(potential_preds) > 0:
        index_best = -1  # this will record the index of the best predictor
        curr = -1  # this will record the current index
        best_r_squared = lm.rsquared_adj  # record the r squared of the current model
        # loop to determine if any of the predictors can better the r-squared
        for pred in potential_preds:
            curr += 1  # increment current
            preds = curr_preds.copy()  # grab the current predictors
            preds.append(pred)
            # create a model with the current predictors plus an additional potential predictor
            lm_new = sm.OLS(y, X_mix[preds]).fit()
            new_r_sq = lm_new.rsquared_adj  # record r squared for new model
            if new_r_sq > best_r_squared:
                best_r_squared = new_r_sq
                index_best = curr
        if index_best != -1:
            # a potential predictor improved the r-squared;
            # remove it from potential_preds and add it to curr_preds
            curr_preds.append(potential_preds.pop(index_best))
        else:
            # none of the remaining potential predictors improved the adjusted r-squared; exit loop
            break
        # fit a new lm using the new predictors and look at the p-values;
        # refitting here also makes the next iteration compare against the updated model
        lm = sm.OLS(y, X_mix[curr_preds]).fit()
        pvals = lm.pvalues
        # make a list of all the p-values that are greater than the tolerance
        pval_too_big = []
        for feat in pvals.index:
            if pvals[feat] > tol and feat != 'const':
                # the p-value is too large, so add it to the list of big p-values
                pval_too_big.append(feat)
        # now remove all the features from curr_preds that have a p-value that is too large
        for feat in pval_too_big:
            pop_index = curr_preds.index(feat)
            curr_preds.pop(pop_index)
```
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SciPy, but I am not able to do stepwise regression. Any help in this regard would be much appreciated. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and, using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. The following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You may try mlxtend, which provides various selection methods.

```
from mlxtend.feature_selection import SequentialFeatureSelector as sfs

clf = LinearRegression()

# Build step forward feature selection
sfs1 = sfs(clf, k_features=10, forward=True, floating=False,
           scoring='r2', cv=5)

# Perform SFFS
sfs1 = sfs1.fit(X_train, y_train)
```
Statsmodels has additional methods for regression: <http://statsmodels.sourceforge.net/devel/examples/generated/example_ols.html>. I think it will help you to implement stepwise regression.
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SciPy, but I am not able to do stepwise regression. Any help in this regard would be much appreciated. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and, using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. The following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
I developed this repository <https://github.com/xinhe97/StepwiseSelectionOLS> My Stepwise Selection Classes (best subset, forward stepwise, backward stepwise) are compatible with sklearn. You can use Pipeline and GridSearchCV with my Classes. The essential part of my code is as follows:

```
################### Criteria ###################
def processSubset(self, X, y, feature_index):
    # Fit model on feature_set and calculate rsq_adj
    regr = sm.OLS(y, X[:, feature_index]).fit()
    rsq_adj = regr.rsquared_adj
    bic = self.myBic(X.shape[0], regr.mse_resid, len(feature_index))
    rsq = regr.rsquared
    return {"model": regr, "rsq_adj": rsq_adj, "bic": bic, "rsq": rsq,
            "predictors_index": feature_index}

################### Forward Stepwise ###################
def forward(self, predictors_index, X, y):
    # Pull out predictors we still need to process
    remaining_predictors_index = [p for p in range(X.shape[1])
                                  if p not in predictors_index]
    results = []
    for p in remaining_predictors_index:
        new_predictors_index = predictors_index + [p]
        new_predictors_index.sort()
        results.append(self.processSubset(X, y, new_predictors_index))
    # Wrap everything up in a nice dataframe
    models = pd.DataFrame(results)
    # Choose the model with the highest rsq
    # best_model = models.loc[models['bic'].idxmin()]
    best_model = models.loc[models['rsq'].idxmax()]
    # Return the best model, along with the model's other information
    return best_model

def forwardK(self, X_est, y_est, fK):
    models_fwd = pd.DataFrame(columns=["model", "rsq_adj", "bic", "rsq",
                                       "predictors_index"])
    predictors_index = []
    M = min(fK, X_est.shape[1])
    for i in range(1, M + 1):
        print(i)
        models_fwd.loc[i] = self.forward(predictors_index, X_est, y_est)
        predictors_index = models_fwd.loc[i, 'predictors_index']
    print(models_fwd)
    # best_model_fwd = models_fwd.loc[models_fwd['bic'].idxmin(), 'model']
    best_model_fwd = models_fwd.loc[models_fwd['rsq'].idxmax(), 'model']
    # best_predictors = models_fwd.loc[models_fwd['bic'].idxmin(), 'predictors_index']
    best_predictors = models_fwd.loc[models_fwd['rsq'].idxmax(), 'predictors_index']
    return best_model_fwd, best_predictors
```
Here's a method I just wrote that uses "mixed selection" as described in Introduction to Statistical Learning. As input, it takes: * lm, a statsmodels.OLS.fit(Y,X), where X is an array of n ones, where n is the number of data points, and Y, where Y is the response in the training data * curr\_preds- a list with ['const'] * potential\_preds- a list of all potential predictors. There also needs to be a pandas dataframe X\_mix that has all of the data, including 'const', and all of the data corresponding to the potential predictors * tol, optional. The max pvalue, .05 if not specified ``` def mixed_selection (lm, curr_preds, potential_preds, tol = .05): while (len(potential_preds) > 0): index_best = -1 # this will record the index of the best predictor curr = -1 # this will record current index best_r_squared = lm.rsquared_adj # record the r squared of the current model # loop to determine if any of the predictors can better the r-squared for pred in potential_preds: curr += 1 # increment current preds = curr_preds.copy() # grab the current predictors preds.append(pred) lm_new = sm.OLS(y, X_mix[preds]).fit() # create a model with the current predictors plus an addional potential predictor new_r_sq = lm_new.rsquared_adj # record r squared for new model if new_r_sq > best_r_squared: best_r_squared = new_r_sq index_best = curr if index_best != -1: # a potential predictor improved the r-squared; remove it from potential_preds and add it to current_preds curr_preds.append(potential_preds.pop(index_best)) else: # none of the remaining potential predictors improved the adjust r-squared; exit loop break # fit a new lm using the new predictors, look at the p-values pvals = sm.OLS(y, X_mix[curr_preds]).fit().pvalues pval_too_big = [] # make a list of all the p-values that are greater than the tolerance for feat in pvals.index: if(pvals[feat] > tol and feat != 'const'): # if the pvalue is too large, add it to the list of big pvalues pval_too_big.append(feat) # now remove all the 
features from curr_preds that have a p-value that is too large for feat in pval_too_big: pop_index = curr_preds.index(feat) curr_preds.pop(pop_index) ```
17,332,350
For some reason I can't log into the same account on my home computer as my work computer. I was able to get Bo10's code to work, but not abernert's, and I would really like to understand why. Here are my updates to abernert's code:

```
import csv
import sys
import json
import urllib2

j = urllib2.urlopen('https://citibikenyc.com/stations/json')
js = json.load(j)
citi = js['stationBeanList']

columns = ('stationName', 'totalDocks', 'availableDocks',
           'latitude', 'longitude', 'availableBikes')
stations = (operator.itemgetter(columns)(station) for station in citi)

with open('output.csv', 'w') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_file.writerows(stations)
```

I thought adding the line `csv_writer = csv.writer(csv_file)` would fix the "object has no attribute" error, but I am still getting it. This is the actual error:

```
Andrews-MacBook:coding Andrew$ python citibike1.py
Traceback (most recent call last):
  File "citibike1.py", line 17, in <module>
    csv_file.writerows(stations)
AttributeError: 'file' object has no attribute 'writerows'
```

---

So now I have changed the code to this, and the output is just repeating the names of the columns 322 times. I changed it on line 14 because I was getting this error:

```
Traceback (most recent call last):
  File "citibike1.py", line 17, in <module>
    csv_writer.writerows(stations)
  File "citibike1.py", line 13, in <genexpr>
    stations = (operator.itemgetter(columns)(station) for station in citi)
NameError: global name 'operator' is not defined
```

```
import csv
import sys
import json
import urllib2
import operator

j = urllib2.urlopen('https://citibikenyc.com/stations/json')
js = json.load(j)
citi = js['stationBeanList']

columns = ('stationName', 'totalDocks', 'availableDocks',
           'latitude', 'longitude', 'availableBikes')
stations = (operator.itemgetter(0,1,2,3,4,5)(columns) for station in citi)

with open('output.csv', 'w') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerows(stations)
```
2013/06/26
[ "https://Stackoverflow.com/questions/17332350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1887261/" ]
The problem is that you're not using the `csv` module, you're using the `pickle` module, and this is what `pickle` output looks like. To fix it:

```
csvfile = open('output.csv', 'w')
csv.writer(csvfile).writerows(stationList)
csvfile.close()
```

---

Note that you're going out of your way to build a transposed table, with 6 lists of 322 lists, not 322 lists of 6 lists. So, you're going to get 6 rows of 322 columns each. If you want the opposite, just don't do that:

```
stationList = []
for f in citi:
    stationList.append((f['stationName'], f['totalDocks'],
                        f['availableDocks'], f['latitude'],
                        f['longitude'], f['availableBikes']))
```

Or, more briefly:

```
stationlist = map(operator.itemgetter('stationName', 'totalDocks',
                                      'availableDocks', 'latitude',
                                      'longitude', 'availableBikes'), citi)
```

---

However, instead of building up a huge list, you may want to consider writing the rows one at a time. You can do that by putting `csv.writerow` calls into the middle of the for loop. But you can also do that just by using `itertools.imap` or a generator expression instead of `map` or a list comprehension. That will make `stationlist` into an iterable that creates new values as needed, instead of creating them all at once.

---

Putting that all together, here's how I'd write your program:

```
import csv
import sys
import json
import operator
import urllib2

j = urllib2.urlopen('https://citibikenyc.com/stations/json')
js = json.load(j)
citi = js['stationBeanList']

columns = ('stationName', 'totalDocks', 'availableDocks',
           'latitude', 'longitude', 'availableBikes')
# itemgetter takes the keys as separate arguments, so unpack the tuple
stations = (operator.itemgetter(*columns)(station) for station in citi)

with open('output.csv', 'w') as csv_file:
    csv.writer(csv_file).writerows(stations)
```
As abarnert mentions, you're not actually using the `csv` module that you've imported. Also, your logic for storing the columns might actually be transposed. I think you might want to do this instead (*edited to fix the tuple/list confusion*):

```
import csv
import json
import urllib2

j = urllib2.urlopen('https://citibikenyc.com/stations/json')
js = json.load(j)
citi = js['stationBeanList']

columns = ["stationName", "totalDocks", "availableDocks",
           "latitude", "longitude", "availableBikes"]
station_list = [[f[s] for s in columns] for f in citi]

with open("output.csv", 'w') as outfile:
    csv_writer = csv.writer(outfile)
    csv_writer.writerows(station_list)
```
42,418,713
I need to perform an integration with python, but with one of the limits being a variable, not a number (from 0 to z). I tried the following:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def I(z,y,a): #function I want to integrate
    I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)
    return I

def dl(z,y,a): #Integration of I
    dl = quad(I, 0, z, args=(z,y,a))
    return dl
```

The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following:

> ValueError: The truth value of an array with more than one element is ambiguous.

I don't know if there is any solution for that. Thanks in advance.
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
**Edit:** In your code you should pass your `args` argument as `args=(y, a)`; z should not be included. Then you can access the result of the integration by indexing the first element of the returned tuple. Actually `quad` returns a tuple, and the first element in the tuple is the result you want. Since I cannot get your code to run without problems, I wrote some short code instead. I am not sure if this is what you want:

```
def I(a):
    return lambda z, y: (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)

def dl(z, y, a):
    return quad(I(a), 0, z, args=(y,))

print(dl(1, 2, 3)[0])
```

Results:

```
0.15826362868629346
```
I don't think `quad` accepts vector valued integration boundaries. So in this case you'll actually have to either loop over `z` or use `np.vectorize`.
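For example, wrapping the scalar-valued integral with `np.vectorize` (a sketch reusing the question's integrand, with `args` trimmed to `(y, a)` as the other answers note) lets `dl` accept an array of upper limits:

```python
import numpy as np
from scipy.integrate import quad

def I(z, y, a):  # integrand from the question
    return (a*(y*(1+z)**3 + (1-y))**0.5)**(-1)

def dl(z, y, a):
    # quad returns (value, abserr); keep only the value
    return quad(I, 0, z, args=(y, a))[0]

# vectorize over the upper limit z so dl accepts arrays
dl_vec = np.vectorize(dl)
zs = np.linspace(0.1, 1.0, 5)
vals = dl_vec(zs, 1.0, 2.0)
```

`np.vectorize` is just a convenience loop, not a speedup, but it makes plotting `dl` against an array of `z` values straightforward.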
42,418,713
I need to perform an integration with python, but with one of the limits being a variable, not a number (from 0 to z). I tried the following:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def I(z,y,a): #function I want to integrate
    I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)
    return I

def dl(z,y,a): #Integration of I
    dl = quad(I, 0, z, args=(z,y,a))
    return dl
```

The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following:

> ValueError: The truth value of an array with more than one element is ambiguous.

I don't know if there is any solution for that. Thanks in advance.
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
The correct way to call `quad` with your `I` is:

```
In [20]: quad(I, 0, 10, args=(1,2))
Out[20]: (0.6984886554222364, 1.1361829471531105e-11)
```

As Longwen points out, the first argument to `I` is the `z` that `quad` varies. The `(y,a)` are parameters that `quad` passes on to `I` without change. But you got the error because you tried using an array as the `z` boundary

```
In [21]: quad(I, 0, np.arange(3), args=(1,2))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-fbbfa9c0cd3f> in <module>()
----> 1 quad(I, 0, np.arange(3), args=(1,2))

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in quad(func, a, b, args, full_output, epsabs, epsrel, limit, points, weight, wvar, wopts, maxp1, limlst)
    313     if (weight is None):
    314         retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
--> 315                        points)
    316     else:
    317         retval = _quad_weight(func, a, b, args, full_output, epsabs, epsrel,

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in _quad(func, a, b, args, full_output, epsabs, epsrel, limit, points)
    362 def _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points):
    363     infbounds = 0
--> 364     if (b != Inf and a != -Inf):
    365         pass        # standard integration
    366     elif (b == Inf and a != -Inf):

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

[ValueError when using if commands in function](https://stackoverflow.com/questions/41868099/valueerror-when-using-if-commands-in-function) - another recent question trying to do the same thing, use an array as an integration boundary. That post gave more of the error traceback, so it was easier to identify the problem.

---

If it hadn't been for this ValueError, your 3-term `args` would have produced a different error:

```
In [19]: quad(I, 0, 10, args=(10,1,2))
....
TypeError: I() takes 3 positional arguments but 4 were given
```
I don't think `quad` accepts vector valued integration boundaries. So in this case you'll actually have to either loop over `z` or use `np.vectorize`.
42,418,713
I need to perform an integration with python, but with one of the limits being a variable, not a number (from 0 to z). I tried the following:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def I(z,y,a): #function I want to integrate
    I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)
    return I

def dl(z,y,a): #Integration of I
    dl = quad(I, 0, z, args=(z,y,a))
    return dl
```

The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following:

> ValueError: The truth value of an array with more than one element is ambiguous.

I don't know if there is any solution for that. Thanks in advance.
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
**Edit:** In your code you should pass your `args` argument as `args=(y, a)`; z should not be included. Then you can access the result of the integration by indexing the first element of the returned tuple. Actually `quad` returns a tuple, and the first element in the tuple is the result you want. Since I cannot get your code to run without problems, I wrote some short code instead. I am not sure if this is what you want:

```
def I(a):
    return lambda z, y: (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)

def dl(z, y, a):
    return quad(I(a), 0, z, args=(y,))

print(dl(1, 2, 3)[0])
```

Results:

```
0.15826362868629346
```
The Python convention is that an empty list (or iterable) is false when cast to a boolean. Imagine you are listing something. As you are doing numerical computing, you may consider the zero vector, [0,0,0], which in linear algebra may be considered "zero" but is not an empty list. Your error seems to come from a non-null check. As other answers suggest, you called a function with an array where a number was expected.
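A quick demo of that distinction, plus the NumPy twist that actually produces the question's error (an array with more than one element refuses the boolean cast entirely):

```python
import numpy as np

print(bool([]))         # empty list -> False
print(bool([0, 0, 0]))  # non-empty list of zeros -> True

# a NumPy array with more than one element cannot be cast to bool at all;
# this raises the exact ValueError from the question
try:
    bool(np.array([1, 2]))
except ValueError as err:
    print(err)
```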
42,418,713
I need to perform an integration with python, but with one of the limits being a variable, not a number (from 0 to z). I tried the following:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def I(z,y,a): #function I want to integrate
    I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)
    return I

def dl(z,y,a): #Integration of I
    dl = quad(I, 0, z, args=(z,y,a))
    return dl
```

The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following:

> ValueError: The truth value of an array with more than one element is ambiguous.

I don't know if there is any solution for that. Thanks in advance.
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
The correct way to call `quad` with your `I` is:

```
In [20]: quad(I, 0, 10, args=(1,2))
Out[20]: (0.6984886554222364, 1.1361829471531105e-11)
```

As Longwen points out, the first argument to `I` is the `z` that `quad` varies. The `(y,a)` are parameters that `quad` passes on to `I` without change. But you got the error because you tried using an array as the `z` boundary

```
In [21]: quad(I, 0, np.arange(3), args=(1,2))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-fbbfa9c0cd3f> in <module>()
----> 1 quad(I, 0, np.arange(3), args=(1,2))

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in quad(func, a, b, args, full_output, epsabs, epsrel, limit, points, weight, wvar, wopts, maxp1, limlst)
    313     if (weight is None):
    314         retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
--> 315                        points)
    316     else:
    317         retval = _quad_weight(func, a, b, args, full_output, epsabs, epsrel,

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in _quad(func, a, b, args, full_output, epsabs, epsrel, limit, points)
    362 def _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points):
    363     infbounds = 0
--> 364     if (b != Inf and a != -Inf):
    365         pass        # standard integration
    366     elif (b == Inf and a != -Inf):

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

[ValueError when using if commands in function](https://stackoverflow.com/questions/41868099/valueerror-when-using-if-commands-in-function) - another recent question trying to do the same thing, use an array as an integration boundary. That post gave more of the error traceback, so it was easier to identify the problem.

---

If it hadn't been for this ValueError, your 3-term `args` would have produced a different error:

```
In [19]: quad(I, 0, 10, args=(10,1,2))
....
TypeError: I() takes 3 positional arguments but 4 were given
```
**Edit:** In your code you should pass your `args` argument as `args=(y, a)`; z should not be included. Then you can access the result of the integration by indexing the first element of the returned tuple. Actually `quad` returns a tuple, and the first element in the tuple is the result you want. Since I cannot get your code to run without problems, I wrote some short code instead. I am not sure if this is what you want:

```
def I(a):
    return lambda z, y: (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)

def dl(z, y, a):
    return quad(I(a), 0, z, args=(y,))

print(dl(1, 2, 3)[0])
```

Results:

```
0.15826362868629346
```
42,418,713
I need to perform an integration with python, but with one of the limits being a variable, not a number (from 0 to z). I tried the following:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def I(z,y,a): #function I want to integrate
    I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1)
    return I

def dl(z,y,a): #Integration of I
    dl = quad(I, 0, z, args=(z,y,a))
    return dl
```

The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following:

> ValueError: The truth value of an array with more than one element is ambiguous.

I don't know if there is any solution for that. Thanks in advance.
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
The correct way to call `quad` with your `I` is:

```
In [20]: quad(I, 0, 10, args=(1,2))
Out[20]: (0.6984886554222364, 1.1361829471531105e-11)
```

As Longwen points out, the first argument to `I` is the `z` that `quad` varies. The `(y,a)` are parameters that `quad` passes on to `I` without change. But you got the error because you tried using an array as the `z` boundary

```
In [21]: quad(I, 0, np.arange(3), args=(1,2))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-fbbfa9c0cd3f> in <module>()
----> 1 quad(I, 0, np.arange(3), args=(1,2))

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in quad(func, a, b, args, full_output, epsabs, epsrel, limit, points, weight, wvar, wopts, maxp1, limlst)
    313     if (weight is None):
    314         retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
--> 315                        points)
    316     else:
    317         retval = _quad_weight(func, a, b, args, full_output, epsabs, epsrel,

/usr/local/lib/python3.5/dist-packages/scipy/integrate/quadpack.py in _quad(func, a, b, args, full_output, epsabs, epsrel, limit, points)
    362 def _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points):
    363     infbounds = 0
--> 364     if (b != Inf and a != -Inf):
    365         pass        # standard integration
    366     elif (b == Inf and a != -Inf):

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

[ValueError when using if commands in function](https://stackoverflow.com/questions/41868099/valueerror-when-using-if-commands-in-function) - another recent question trying to do the same thing, use an array as an integration boundary. That post gave more of the error traceback, so it was easier to identify the problem.

---

If it hadn't been for this ValueError, your 3-term `args` would have produced a different error:

```
In [19]: quad(I, 0, 10, args=(10,1,2))
....
TypeError: I() takes 3 positional arguments but 4 were given
```
The Python convention is that an empty list (or iterable) is false when cast to a boolean. Imagine you are listing something. As you are doing numerical computing, you may consider the zero vector, [0,0,0], which in linear algebra may be considered "zero" but is not an empty list. Your error seems to come from a non-null check. As other answers suggest, you called a function with an array where a number was expected.
62,908,688
Recently I went on to clean up my Python code. I found it tiresome to remove all the print statements in the code one by one. Is there any shortcut in an editor, or a regular expression, for removing or commenting out the print statements in a Python program in one go?
2020/07/15
[ "https://Stackoverflow.com/questions/62908688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8721742/" ]
Find / Replace
--------------

* Find Replace `print(` with `# print(` will comment them out
* Probably works in most editors

Using [Notepad++](https://notepad-plus-plus.org/downloads/) with regex
----------------------------------------------------------------------

* Free to download
* Recognizes many programming languages
* Search expression `(print).*`
  + `^(print).*` if you only want print statements from the beginning of the line
  + [![enter image description here](https://i.stack.imgur.com/nac5J.png)](https://i.stack.imgur.com/nac5J.png)

[![enter image description here](https://i.stack.imgur.com/Bz3cA.png)](https://i.stack.imgur.com/Bz3cA.png)

Write a script
--------------

* Use [`pathlib`](https://docs.python.org/3/library/pathlib.html) to find files
  + [How to replace characters and rename multiple files?](https://stackoverflow.com/questions/62668490)
  + [Python 3's pathlib Module: Taming the File System](https://realpython.com/python-pathlib/)
* Use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) to find and replace the expression

```py
import re
from pathlib import Path

p = Path('c:\...\path_to_python_files')  # path to directory with files
files = list(p.rglob('*.py'))  # find all python files, including subdirectories of p

for file in files:
    with file.open('r') as f:
        rows = [re.sub('(print).*', '', row) for row in f.readlines()]
    new_file_name = file.parent / f'{file.stem}_no_print{file.suffix}'
    # you could overwrite the original file, but that might be scary
    with new_file_name.open('w') as f:
        f.writelines(rows)
```
You should avoid working with print statements. Use the Python logging module instead:

```
import logging

logging.debug('debug message')
```

Once you have finished development and don't need the debugging information, you can increase the log level:

```
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.WARNING)
```

This suppresses all logging messages lower than WARNING. See the [DOCS](https://docs.python.org/3/howto/logging.html) for more information.
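The level filtering described above can be shown in a self-contained sketch. A `StringIO` handler is used here only so the output is capturable; a real application would just call `logging.basicConfig(...)` as above:

```python
import io
import logging

# Capture log output in a buffer so the filtering is visible.
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)   # suppress anything below WARNING
logger.propagate = False           # keep output away from the root logger
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
logger.addHandler(handler)

logger.debug("debug message")            # filtered out
logger.warning("something went wrong")   # kept

print(buf.getvalue().strip())  # WARNING:something went wrong
```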
62,908,688
Recently I went to clean up my Python code. It felt tiresome to remove all the print statements in the code one by one. Is there any shortcut in an editor, or a regular expression, for removing or commenting out print statements in a Python program in one go?
2020/07/15
[ "https://Stackoverflow.com/questions/62908688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8721742/" ]
Find / Replace
--------------

* Find Replace `print(` with `# print(` will comment them out
* Probably works in most editors

Using [Notepad++](https://notepad-plus-plus.org/downloads/) with regex
----------------------------------------------------------------------

* Free to download
* Recognizes many programming languages
* Search expression `(print).*`
  + `^(print).*` if you only want print statements from the beginning of the line
  + [![enter image description here](https://i.stack.imgur.com/nac5J.png)](https://i.stack.imgur.com/nac5J.png) [![enter image description here](https://i.stack.imgur.com/Bz3cA.png)](https://i.stack.imgur.com/Bz3cA.png)

Write a script
--------------

* Use [`pathlib`](https://docs.python.org/3/library/pathlib.html) to find files
  + [How to replace characters and rename multiple files?](https://stackoverflow.com/questions/62668490)
  + [Python 3's pathlib Module: Taming the File System](https://realpython.com/python-pathlib/)
* Use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) to find and replace the expression

```py
import re  # needed for re.sub
from pathlib import Path

p = Path('c:\...\path_to_python_files')  # path to directory with files
files = list(p.rglob('*.py'))  # find all python files, including subdirectories of p

for file in files:
    with file.open('r') as f:
        rows = [re.sub('(print).*', '', row) for row in f.readlines()]
    new_file_name = file.parent / f'{file.stem}_no_print{file.suffix}'
    with new_file_name.open('w') as f:  # you could overwrite the original file, but that might be scary
        f.writelines(rows)
```
1. Open Text Editor in Ubuntu
2. Press Ctrl + h
3. Replace: `print`
4. Replace with: `# print`
66,530,908
I am using `gcc10.2`, `c++20`. I am studying C++ after 2 years of Python. In Python we always did a run-time check for input validity:

```
def createRectangle(x, y, width, height):
    # just for example
    for v in [x, y, width, height]:
        if v < 0:
            raise ValueError("Cant be negative")
    # blahblahblah
```

How would I do such a check in C++?
2021/03/08
[ "https://Stackoverflow.com/questions/66530908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8471995/" ]
```
for (int v : {x, y, width, height})
    if (v < 0)
        throw std::runtime_error("Can't be negative");
```

Note that such a loop copies each variable twice. If your variables are heavy to copy (e.g. containers), use pointers instead:

```
for (const int *v : {&x, &y, &width, &height})
    if (*v < 0)
        ...
```

---

Comments also suggest using a reference, e.g. `for (const int &v : {x, y, width, height})`, but that will still give you one copy per variable. So if a type is that heavy, I'd prefer pointers.
In C++:

1. Use an appropriate type so validation (at the point you *use* the variables, as opposed to setting them up from some input) is unnecessary, e.g. `unsigned` for a length. C++ is more strongly typed than Python, so you don't need large validation checks to make sure the correct type is passed to a function.
2. A `throw` is broadly equivalent to a `raise` in Python. In C++, we tend to derive an exception from `std::exception`, and throw that.

Boost ([www.boost.org](http://www.boost.org)) has a nice validation library which is well worth looking at.
55,351,647
I'm a beginner of Python and follow a book to practice. In my book, the author uses this code:

```
s, k = 0
```

but I get the error:

```none
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
```

I want to know what happened here.
2019/03/26
[ "https://Stackoverflow.com/questions/55351647", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7290997/" ]
You are asking to initialize two variables `s` and `k` from a single int object `0`, which of course is not iterable. The correct syntax is:

```
s, k = 0, 0
```

**Where**

```
s, k = 0, 1
```

would assign `s = 0` and `k = 1`.

> Notice that each `int` object on the right is assigned to the corresponding variable on the left.

**OR**

```
s, k = [0 for _ in range(2)]
print(s)  # 0
print(k)  # 0
```
```
s = k = 0
```

OR

```
s, k = (0, 0)
```

depends on what you need
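One caveat with the chained form: `s = k = 0` is safe for immutable values like integers, but with a mutable object both names end up pointing at the same object:

```python
s = k = 0          # fine: ints are immutable, k += 1 rebinds k only
k += 1
print(s, k)        # 0 1

a = b = []         # both names refer to ONE list
a.append(42)
print(b)           # [42] -- b changed too

c, d = [], []      # tuple unpacking: two independent lists
c.append(42)
print(d)           # []
```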
55,351,647
I'm a beginner of Python and follow a book to practice. In my book, the author uses this code:

```
s, k = 0
```

but I get the error:

```none
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
```

I want to know what happened here.
2019/03/26
[ "https://Stackoverflow.com/questions/55351647", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7290997/" ]
You are asking to initialize two variables `s` and `k` from a single int object `0`, which of course is not iterable. The correct syntax is:

```
s, k = 0, 0
```

**Where**

```
s, k = 0, 1
```

would assign `s = 0` and `k = 1`.

> Notice that each `int` object on the right is assigned to the corresponding variable on the left.

**OR**

```
s, k = [0 for _ in range(2)]
print(s)  # 0
print(k)  # 0
```
Instead of:

```
s, k = 0
```

use:

```
s, k = 0, 0
```
41,377,820
I downgraded Postgres.app from 9.6 to 9.5 by removing the Postgres.app desktop app. I updated the database by doing (I downloaded Postgres by downloading Postgres.app Desktop app and I installed Django by doing pip install Django) ``` sudo /usr/libexec/locate.updatedb ``` And it looks like it is initiating database from the right directory. ``` /Applications/Postgres.app/Contents/Versions/9.5/bin/initdb /Applications/Postgres.app/Contents/Versions/9.5/share/doc/postgresql/html/app-initdb.html /Applications/Postgres.app/Contents/Versions/9.5/share/man/man1/initdb.1 ``` However, when I am trying to do a migration in my Django app, it looks like the path is still point to the 9.6 version of Postgress ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 341, in execute django.setup() File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate app_config.import_models(all_models) File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/apps/config.py", line 199, in import_models self.models_module = import_module(models_module_name) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/tenant_schemas/models.py", line 4, in <module> from tenant_schemas.postgresql_backend.base import _check_schema_name File 
"/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/tenant_schemas/postgresql_backend/base.py", line 14, in <module> import psycopg2 File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 50, in <module> from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID ImportError: dlopen(/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: /Applications/Postgres.app/Contents/Versions/9.6/lib/libpq.5.dylib Referenced from: /Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/_psycopg.so Reason: image not found ```
2016/12/29
[ "https://Stackoverflow.com/questions/41377820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1427176/" ]
This solved the problem for me:

1. Uninstall your psycopg2:

```
pip uninstall psycopg2
```

2. Then reinstall it without the cache:

```
pip --no-cache-dir install -U psycopg2
```
I think that your problem is that the version of `psycopg2` that is currently installed references the C postgres library that was bundled with your previous install of postgres (`/Applications/Postgres.app/Contents/Versions/9.6/lib/libpq.5.dylib`). Try uninstalling and reinstalling `psycopg2`:

```
pip uninstall psycopg2
pip install psycopg2
```
55,454,182
Read dates from user input in the form yyyy,mm,dd. Then find number of days between dates. Datetime wants integers and input needs to be converted from string to integer. Previous questions seem to involve time or a different OS and those suggestions do not seem to work with Anaconda and Win10. I tried those which seem to be for a Win10 OS. The user input variables do not work as entered since they are strings and apparently need to be converted to an integer format. I've tried various ways to convert to integer and datetime responds with errors indicating it needs integer and is seeing tuple, list or string. If I input the date values directly, everything works fine. I've tried - as a separator in the user input and still get an error. I've tried wrapping the input in (" "), " ", ' ' and ( ) and still get errors. I was not able to be more specific in the tags but the following may help with responses. I am using python 2.6 on Windows 10 in Anaconda. For some reason datetime.strptime is not recognized so some of the responses did not work. ``` InitialDate = input("Enter the begin date as yyyy,mm,dd") FinalDate = input("Enter the end date as yyyy,mm,dd") ``` ``` ID = InitialDate.split(",") ID2 = int(Id[0]),int(Id[1]),int(Id[2]) Iday = datetime.datetime(Id2) Fd = FinalDate.split(",") Fd2 = int(Fd[0]),int(Fd[1]),int(Fd[2]) Fday = datetime.datetime(Fd2) age = (Fd2 - Id2).day ``` I expect an integer value for age. I get type error an integer is required (got type tuple) before execution of the age line.
2019/04/01
[ "https://Stackoverflow.com/questions/55454182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8491388/" ]
Use `.strptime()` to convert the string date to a date object and then calculate the diff.

**Ex:**

```
import datetime

InitialDate = "2019,02,10"  # input("Enter the begin date as yyyy,mm,dd")
FinalDate = "2019,02,20"  # input("Enter the end date as yyyy,mm,dd")

InitialDate = datetime.datetime.strptime(InitialDate, "%Y,%m,%d")
FinalDate = datetime.datetime.strptime(FinalDate, "%Y,%m,%d")
age = (FinalDate - InitialDate).days
print(age)
```

**Output:**

```
10
```
```
from datetime import datetime

InitialDate = input("Enter the begin date as yyyy/mm/dd: ")
FinalDate = input("Enter the end date as yyyy/mm/dd: ")

InitialDate = datetime.strptime(InitialDate, '%Y/%m/%d')
FinalDate = datetime.strptime(FinalDate, '%Y/%m/%d')

difference = FinalDate - InitialDate
print(difference.days)
```
55,454,182
Read dates from user input in the form yyyy,mm,dd. Then find number of days between dates. Datetime wants integers and input needs to be converted from string to integer. Previous questions seem to involve time or a different OS and those suggestions do not seem to work with Anaconda and Win10. I tried those which seem to be for a Win10 OS. The user input variables do not work as entered since they are strings and apparently need to be converted to an integer format. I've tried various ways to convert to integer and datetime responds with errors indicating it needs integer and is seeing tuple, list or string. If I input the date values directly, everything works fine. I've tried - as a separator in the user input and still get an error. I've tried wrapping the input in (" "), " ", ' ' and ( ) and still get errors. I was not able to be more specific in the tags but the following may help with responses. I am using python 2.6 on Windows 10 in Anaconda. For some reason datetime.strptime is not recognized so some of the responses did not work. ``` InitialDate = input("Enter the begin date as yyyy,mm,dd") FinalDate = input("Enter the end date as yyyy,mm,dd") ``` ``` ID = InitialDate.split(",") ID2 = int(Id[0]),int(Id[1]),int(Id[2]) Iday = datetime.datetime(Id2) Fd = FinalDate.split(",") Fd2 = int(Fd[0]),int(Fd[1]),int(Fd[2]) Fday = datetime.datetime(Fd2) age = (Fd2 - Id2).day ``` I expect an integer value for age. I get type error an integer is required (got type tuple) before execution of the age line.
2019/04/01
[ "https://Stackoverflow.com/questions/55454182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8491388/" ]
Use `.strptime()` to convert the string date to a date object and then calculate the diff.

**Ex:**

```
import datetime

InitialDate = "2019,02,10"  # input("Enter the begin date as yyyy,mm,dd")
FinalDate = "2019,02,20"  # input("Enter the end date as yyyy,mm,dd")

InitialDate = datetime.datetime.strptime(InitialDate, "%Y,%m,%d")
FinalDate = datetime.datetime.strptime(FinalDate, "%Y,%m,%d")
age = (FinalDate - InitialDate).days
print(age)
```

**Output:**

```
10
```
I just had a similar problem; my solution was to convert each element of the split list to an int before building the date. Something like this:

```
Id = InitialDate.split(",")
Iday = datetime.datetime(int(Id[0]), int(Id[1]), int(Id[2]))

Fd = FinalDate.split(",")
Fday = datetime.datetime(int(Fd[0]), int(Fd[1]), int(Fd[2]))

age = (Fday - Iday).days
```
47,794,007
So time.sleep isn't working. I am on Python 3.4.3 and I have not had this problem on my computer with 3.6. This is my code:

```
import calendar

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower == 'yes':
        print("Creating profile...")
    elif ProfileAns.lower == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
2017/12/13
[ "https://Stackoverflow.com/questions/47794007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9035852/" ]
You might want to `import time`, as at the moment the `time` module isn't actually defined. Just add the import to the top of your code and this should fix it.
Change it to the following:

```
import calendar
import time

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower() == 'yes':  # note: lower() must be called, not just referenced
        print("Creating profile...")
    elif ProfileAns.lower() == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
47,794,007
So time.sleep isn't working. I am on Python 3.4.3 and I have not had this problem on my computer with 3.6. This is my code:

```
import calendar

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower == 'yes':
        print("Creating profile...")
    elif ProfileAns.lower == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
2017/12/13
[ "https://Stackoverflow.com/questions/47794007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9035852/" ]
You might want to `import time` as at the moment `time.sleep(1)` isn't actually defined, just add the import to the top of your code and this should fix it.
In your import statement you have imported `calendar`; you should import `time` also, since `sleep` is inside the `time` package.

```
import calendar
import time

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower() == 'yes':  # note: lower() must be called, not just referenced
        print("Creating profile...")
    elif ProfileAns.lower() == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
44,707,384
For my evaluation, I wanted to run a rolling 1000-window `OLS regression estimation` of the dataset found at this URL: <https://drive.google.com/open?id=0B2Iv8dfU4fTUa3dPYW5tejA0bzg> using the following `Python` script.

```
# /usr/bin/python -tt
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.formula.api import ols

df = pd.read_csv('estimated.csv', names=('x','y'))

model = pd.stats.ols.MovingOLS(y=df.Y, x=df[['y']], window_type='rolling', window=1000, intercept=True)
df['Y_hat'] = model.y_predict
```

However, when I run my Python script, I am getting this error: `AttributeError: module 'pandas.stats' has no attribute 'ols'`. Could this error be from the version that I am using? The `pandas` installed on my Linux node has a version of `0.20.2`.
2017/06/22
[ "https://Stackoverflow.com/questions/44707384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1731796/" ]
`pd.stats.ols.MovingOLS` was removed in Pandas version 0.20.0

<http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#whatsnew-0200-prior-deprecations>

<https://github.com/pandas-dev/pandas/pull/11898>

I can't find an 'off the shelf' solution for what should be such an obvious use case as rolling regressions. The following should do the trick without investing too much time in a more elegant solution. It uses numpy to calculate the predicted value of the regression based on the regression parameters and the X values in the rolling window.

```
window = 1000
a = np.array([np.nan] * len(df))
b = [np.nan] * len(df)  # If betas required.
y_ = df.y.values
x_ = df[['x']].assign(constant=1).values

for n in range(window, len(df)):
    y = y_[(n - window):n]
    X = x_[(n - window):n]
    # betas = Inverse(X'.X).X'.y
    betas = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
    y_hat = betas.dot(x_[n, :])
    a[n] = y_hat
    b[n] = betas.tolist()  # If betas required.
```

The code above is equivalent to the following and about 35% faster:

```
model = pd.stats.ols.MovingOLS(y=df.y, x=df.x, window_type='rolling', window=1000, intercept=True)
y_pandas = model.y_predict
```
It was [deprecated](https://github.com/pandas-dev/pandas/pull/11898) in favor of statsmodels. See [examples of how to use statsmodels rolling regression](https://www.statsmodels.org/dev/examples/notebooks/generated/rolling_ls.html).
60,008,614
I have two dictionaries:

```
members_singles = {'member3': ['PCP3'], 'member4': ['PCP1'], 'member11': ['PCP2'], 'member12': ['PCP3'], 'member14': ['PCP4'], 'member15': ['PCP4'], 'member16': ['PCP4'], 'members17': ['PCP3']}

providers = {
    "PCP1": 3,
    "PCP2": 4,
    "PCP3": 1,
    "PCP4": 2,
    "PCP5": 4,
}
```

I want to iterate through `members_singles` and, each time a particular value occurs, count down one from the matching count in `providers`:

```
to_remove_zero = []
pcps_in_negative = []
for member, provider_list in members_singles.items():
    provider = provider_list[0]
    if provider in providers:
        providers[provider] -= 1
        if providers[provider] == 0:
            to_remove_zero.append(provider)
        elif providers[provider] < 0:
            pcps_in_negative.append(provider)
        else:
            pass
```

I want to save those providers that end up with an even zero count to a list, and save those providers that go into the negative to another list. But my results show `to_remove_zero` contains `['PCP3', 'PCP4']` even though they should be in the negative. So the loop is popping them out before they get to the second condition. Do Python simple counters like this stop at zero when counting down, or am I missing something?
2020/01/31
[ "https://Stackoverflow.com/questions/60008614", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11069614/" ]
When the `providers` count reaches zero, your code adds the provider to the `to_remove_zero` list. But the count may go negative later on, which will add the same provider to the `pcps_in_negative` list. At that point, your code needs to back-track and remove it from the `to_remove_zero` list:

```
providers[provider] -= 1
if providers[provider] == 0:
    to_remove_zero.append(provider)
elif providers[provider] < 0:
    pcps_in_negative.append(provider)
    # back-track
    if provider in to_remove_zero:
        to_remove_zero.remove(provider)
```

This could be made a little neater if you used sets:

```
to_remove_zero = set()
pcps_in_negative = set()

for member, provider_list in members_singles.items():
    provider = provider_list[0]
    if provider in providers:
        providers[provider] -= 1
        if providers[provider] == 0:
            to_remove_zero.add(provider)
        elif providers[provider] < 0:
            pcps_in_negative.add(provider)
            to_remove_zero.discard(provider)
```
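Putting the set-based back-tracking approach together with the data from the question gives a runnable check (iteration follows dict insertion order in Python 3.7+):

```python
members_singles = {
    'member3': ['PCP3'], 'member4': ['PCP1'], 'member11': ['PCP2'],
    'member12': ['PCP3'], 'member14': ['PCP4'], 'member15': ['PCP4'],
    'member16': ['PCP4'], 'members17': ['PCP3'],
}
providers = {"PCP1": 3, "PCP2": 4, "PCP3": 1, "PCP4": 2, "PCP5": 4}

to_remove_zero = set()
pcps_in_negative = set()
for member, provider_list in members_singles.items():
    provider = provider_list[0]
    if provider in providers:
        providers[provider] -= 1
        if providers[provider] == 0:
            to_remove_zero.add(provider)
        elif providers[provider] < 0:
            pcps_in_negative.add(provider)
            to_remove_zero.discard(provider)  # back-track: it went negative after hitting zero

# PCP3 and PCP4 both hit zero and then went negative,
# so they end up only in pcps_in_negative.
print(to_remove_zero)    # set()
print(pcps_in_negative)  # PCP3 and PCP4
```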
This is probably because you are altering the list, while expecting the indices to remain consistent:

```
a = [1, 2, 3]
a[0]     # 1
a.pop()  # 3
a[0]     # 1
```
55,501,746
**Below is the problem I am running into:**

*Linear Regression - Given 16 pairs of prices (as dependent variable) and corresponding demands (as independent variable), use the linear regression tool to estimate the best fitting linear line.*

```
Price  Demand
127    3420
134    3400
136    3250
139    3410
140    3190
141    3250
148    2860
149    2830
151    3160
154    2820
155    2780
157    2900
159    2810
167    2580
168    2520
171    2430
```

**Here is my code:**

```
from pylab import *
from numpy import *
from scipy.stats import *

x = [3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830, 3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430]
np.asarray(x, dtype=np.float64)
y = [127, 134, 136, 139, 140, 141, 148, 149, 151, 154, 155, 157, 159, 167, 168, 171]
np.asarray(y, dtype=np.float64)

slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)
y_modeled = x*slope+intercept

plot(x, y, 'ob', markersize=2)
plot(x, y_modeled, '-r', linewidth=1)
show()
```

**Here is the error I get:**

```
Traceback (most recent call last):
  File "<ipython-input-48-0a0274c24b19>", line 13, in <module>
    y_modeled = x*slope+intercept
TypeError: can't multiply sequence by non-int of type 'numpy.float64'
```
2019/04/03
[ "https://Stackoverflow.com/questions/55501746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11307363/" ]
I would rewrite this to be simpler and remove unnecessary looping: ``` $filenameOut = "out.html" #get current working dir $cwd = Get-ScriptDirectory #(Get-Location).path #PSScriptRoot #(Get-Item -Path ".").FullName $filenamePathOut = Join-Path $cwd $filenameOut $InitialAppointmentGenArr = Get-ChildItem -Path $temp foreach($file in $InitialAppointmentGenArr) { $fileWithoutExtension = [io.path]::GetFileNameWithoutExtension($file) $temp = '<li><a href="' + ($file.FullName -replace "\\",'/') + '" target="_app">' + $fileWithoutExtension + '</a></li>' Add-Content -Path $filenamePathOut -Value $temp } } ```
Well, let's start with what you are attempting to do, and why it isn't working. If you look at the file object for any of those files (`$file|get-member`), you see that the `FullName` property only has a `get` method, no `set` method, so you can't change that property. So you are never going to change that property without renaming the source file and getting the file info again. Knowing that, if you want to capture the path with the replaced slashes you will need to capture the output of the replace in a variable. You can then use that to build your string. ``` $filenameOut = "out.html" #get current working dir $cwd = Get-ScriptDirectory #(Get-Location).path #PSScriptRoot #(Get-Item -Path ".").FullName $filenamePathOut = Join-Path $cwd $filenameOut $InitialAppointmentGenArr = Get-ChildItem -Path $temp foreach($file in $InitialAppointmentGenArr) { $filePath = $file.FullName -replace "\\", "/" '<li><a href="' + $filePath + '" target="_app">' + $file.BaseName + '</a></li>' | Add-Content -Path $filenamePathOut} } ```
31,096,151
I am parsing streaming hex data with Python regex. I have the following packet structure that I am trying to extract from the stream of packets:

```
'\xaa\x01\xFF\x44'
```

* \xaa - start of packet
* \x01 - data length [value can vary from 00-FF]
* \xFF - data
* \x44 - end of packet

I want to use Python regex to indicate how much of the data portion of the packet to match, as such:

```
r = re.compile('\xaa(?P<length>[\x00-\xFF]{1})(.*){?P<length>}\x44')
```

This compiles without errors, but it doesn't work. I suspect it doesn't work because the regex engine cannot convert the `<length>` named group hex value to an appropriate integer for use inside the regex `{}` expression. Is there a method by which this can be accomplished in Python without resorting to disseminating the match groups?

Background: I have been using Erlang for packet unpacking and I was looking for something similar in Python.
2015/06/28
[ "https://Stackoverflow.com/questions/31096151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2163865/" ]
I ended up doing something as follows:

```
self.packet_regex = \
    re.compile('(\xaa)([\x04-\xFF]{1})([\x00-\xFF]{1})([\x10-\xFF]{1})([\x00-\xFF]*)([\x00-\xFF]{1})(\x44)')

match = self.packet_regex.search(self.buffer)
if match and match.groups():
    groups = match.groups()
    if (ord(groups[1]) - 4) == len(groups[4]) + len(groups[5]) + len(groups[6]):
        ...
```
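For comparison, a fixed binary framing like `0xAA, length, payload..., 0x44` can also be parsed without regex at all, by trusting the length byte and slicing. A minimal Python 3 sketch (bytes instead of the py2 str above, and assuming, as in the question's example packet, that the length byte counts only the payload bytes):

```python
def parse_packet(buf: bytes):
    """Return (payload, remaining_buffer) for the first valid packet, or None."""
    start = buf.find(b'\xaa')            # locate start-of-packet marker
    if start < 0 or len(buf) < start + 3:
        return None                      # no marker, or not enough bytes yet
    length = buf[start + 1]              # length byte (0x00-0xFF)
    end = start + 2 + length             # index of the expected end marker
    if len(buf) < end + 1 or buf[end] != 0x44:
        return None                      # incomplete packet or bad framing
    return buf[start + 2:end], buf[end + 1:]

print(parse_packet(b'\xaa\x01\xff\x44'))  # (b'\xff', b'')
```

This avoids the backreference-inside-`{}` problem entirely, since the length is read as data rather than re-matched by the regex engine.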
This is pretty much a workaround for what you have asked. Just have a look at it:

```
import re

orig_str = '\xaa\x01\xFF\x44'
print orig_str

# converting original hex data into its representation form
st = repr(orig_str)
print st

# getting the representation form of regex and removing leading and trailing single quotes
reg = re.compile(repr("(\\xaa)")[1:-1])
p = reg.search(st)

# creating the representation from matched string by adding leading and trailing single quotes
extracted_repr = "\'" + p.group(1) + "\'"
print extracted_repr

# evaluating the matched string to get the original hex information
extracted_str = eval(extracted_repr)
print extracted_str
```

```
>>>
��D
'\xaa\x01\xffD'
'\xaa'
�
```
50,841,542
I am little bit familiar with np.fromregex. I read the tutorials and tried to implement it to read a data file. When the file is read using simple python list comprehension, it gives the desired result: `[400, 401, 405, 408, 412, 414, 420, 423, 433]`. But, when `np.fromregex` is is gives another format answer: `[(400,) (401,) (405,) (408,) (412,) (414,) (420,) (423,) (433,)]`. How can the code be changed so that the answer from regex becomes same as the simple python for loop. Thanks. P.S. I know this is a simple question but it took me a lot of time to look for the solution and it might be benificial to others too and save some time. Related links: <https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html> [np.fromregex with string as dtype](https://stackoverflow.com/questions/33014828/np-fromregex-with-string-as-dtype) ``` from __future__ import print_function, division, with_statement, unicode_literals import numpy as np import re data = """ DMStack failed for: lsst_z1.0_400.fits DMStack failed for: lsst_z1.0_401.fits DMStack failed for: lsst_z1.0_405.fits DMStack failed for: lsst_z1.0_408.fits DMStack failed for: lsst_z1.0_412.fits DMStack failed for: lsst_z1.0_414.fits DMStack failed for: lsst_z1.0_420.fits DMStack failed for: lsst_z1.0_423.fits DMStack failed for: lsst_z1.0_433.fits """ ifile = 'a.txt' with open(ifile, 'w') as fo: fo.write(data.lstrip()) # regex regexp = r".*_(\d+?).fits" # This works fine ans = [int(re.findall(regexp, line)[0]) for line in open(ifile)] print(ans) # using fromregex dt = [('num', np.int32)] x = np.fromregex(ifile, regexp, dt) print(x) ``` **Update** The above code failed when I used the future imports. 
The error log is given below: ``` Traceback (most recent call last): File "a.py", line 31, in <module> x = np.fromregex(ifile, regexp, dt) File "/Users/poudel/miniconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1452, in fromregex dtype = np.dtype(dtype) TypeError: data type not understood $ which python python is /Users/poudel/miniconda2/bin/python $ python -c "import numpy; print(numpy.__version__)" 1.14.0 ```
2018/06/13
[ "https://Stackoverflow.com/questions/50841542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Just choose the group and you'll get what you want: ``` dt = [('num', np.int32)] x = np.fromregex(ifile, regexp, dt) print(x['num']) #[400 401 405 408 412 414 420 423 433] ```
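The same field access works on any structured array; a tiny self-contained illustration (no file needed — the array is built by hand with the same layout `fromregex` returns):

```python
import numpy as np

dt = [('num', np.int32)]
# same shape of result that np.fromregex produces: one record per match
x = np.array([(400,), (401,), (405,)], dtype=dt)

print(x['num'])           # [400 401 405]
print(x['num'].tolist())  # [400, 401, 405] -- plain Python ints
```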
```
import numpy as np
import cStringIO
import re

data = """
DMStack failed for: lsst_z1.0_400.fits
DMStack failed for: lsst_z1.0_401.fits
DMStack failed for: lsst_z1.0_405.fits
DMStack failed for: lsst_z1.0_408.fits
DMStack failed for: lsst_z1.0_412.fits
DMStack failed for: lsst_z1.0_414.fits
DMStack failed for: lsst_z1.0_420.fits
DMStack failed for: lsst_z1.0_423.fits
DMStack failed for: lsst_z1.0_433.fits
"""

# ifile = cStringIO.StringIO()
# ifile.write(data)
ifile = 'a.txt'
with open(ifile, 'w') as fo:
    fo.write(data.lstrip())

# regex
regexp = r".*_(\d+?).fits"

# This works fine
ans = [int(re.findall(regexp, line)[0]) for line in open(ifile)]
print(ans)

# using fromregex
dt = [('num', np.int32)]
x = np.fromregex(ifile, regexp, dt)

y = []
for i in x:
    y = y + [i[0]]
print y

"""
[400, 401, 405, 408, 412, 414, 420, 423, 433]
[400, 401, 405, 408, 412, 414, 420, 423, 433]
"""
```

I am not aware of doing this without a loop.
50,841,542
I am little bit familiar with np.fromregex. I read the tutorials and tried to implement it to read a data file. When the file is read using simple python list comprehension, it gives the desired result: `[400, 401, 405, 408, 412, 414, 420, 423, 433]`. But, when `np.fromregex` is is gives another format answer: `[(400,) (401,) (405,) (408,) (412,) (414,) (420,) (423,) (433,)]`. How can the code be changed so that the answer from regex becomes same as the simple python for loop. Thanks. P.S. I know this is a simple question but it took me a lot of time to look for the solution and it might be benificial to others too and save some time. Related links: <https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html> [np.fromregex with string as dtype](https://stackoverflow.com/questions/33014828/np-fromregex-with-string-as-dtype) ``` from __future__ import print_function, division, with_statement, unicode_literals import numpy as np import re data = """ DMStack failed for: lsst_z1.0_400.fits DMStack failed for: lsst_z1.0_401.fits DMStack failed for: lsst_z1.0_405.fits DMStack failed for: lsst_z1.0_408.fits DMStack failed for: lsst_z1.0_412.fits DMStack failed for: lsst_z1.0_414.fits DMStack failed for: lsst_z1.0_420.fits DMStack failed for: lsst_z1.0_423.fits DMStack failed for: lsst_z1.0_433.fits """ ifile = 'a.txt' with open(ifile, 'w') as fo: fo.write(data.lstrip()) # regex regexp = r".*_(\d+?).fits" # This works fine ans = [int(re.findall(regexp, line)[0]) for line in open(ifile)] print(ans) # using fromregex dt = [('num', np.int32)] x = np.fromregex(ifile, regexp, dt) print(x) ``` **Update** The above code failed when I used the future imports. 
The error log is given below: ``` Traceback (most recent call last): File "a.py", line 31, in <module> x = np.fromregex(ifile, regexp, dt) File "/Users/poudel/miniconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1452, in fromregex dtype = np.dtype(dtype) TypeError: data type not understood $ which python python is /Users/poudel/miniconda2/bin/python $ python -c "import numpy; print(numpy.__version__)" 1.14.0 ```
2018/06/13
[ "https://Stackoverflow.com/questions/50841542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Just choose the group and you'll get what you want:

```
dt = [('num', np.int32)]
x = np.fromregex(ifile, regexp, dt)
print(x['num'])
#[400 401 405 408 412 414 420 423 433]
```
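To get exactly the plain-list output of the question's list comprehension, the structured field can additionally be converted with `.tolist()`. A minimal self-contained sketch (Python 3; the file name `a.txt` and the two sample lines just mirror the question):

```python
import numpy as np

# Recreate a small version of the question's input file.
with open('a.txt', 'w') as fo:
    fo.write("DMStack failed for: lsst_z1.0_400.fits\n"
             "DMStack failed for: lsst_z1.0_401.fits\n")

regexp = r".*_(\d+?).fits"
dt = [('num', np.int32)]

x = np.fromregex('a.txt', regexp, dt)  # structured array with a single field
nums = x['num'].tolist()               # plain Python ints, like the list comprehension
print(nums)  # → [400, 401]
```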
All the thanks goes to @zipa and @hpaulj. Finally, this code works for Python 2 with future statements. It also works for Python 3. Instead of `dt = [('num', np.int32)]` we need to use `dt = [(str('num'), np.int32)]`.

```
#!python
# -*- coding: utf-8 -*-#
#
# Imports
from __future__ import print_function, division, with_statement, unicode_literals
import numpy as np
import re

data = """
DMStack failed for: lsst_z1.0_400.fits
DMStack failed for: lsst_z1.0_401.fits
DMStack failed for: lsst_z1.0_405.fits
DMStack failed for: lsst_z1.0_408.fits
DMStack failed for: lsst_z1.0_412.fits
DMStack failed for: lsst_z1.0_414.fits
DMStack failed for: lsst_z1.0_420.fits
DMStack failed for: lsst_z1.0_423.fits
DMStack failed for: lsst_z1.0_433.fits
"""

ifile = 'a.txt'
with open(ifile, 'w') as fo:
    fo.write(data.lstrip())

# regex
regexp = r".*_(\d+?).fits"

dt = [(str('num'), np.int32)]
x = np.fromregex(ifile, regexp, dt)
print(x['num'])
```
63,814,809
Hi, I need to make a project for school where people can insert a review, but before the review is posted on Twitter it is going to be placed in a database. Before it is posted on Twitter, a moderator needs to check every review to see that there are no swear words etc. I wanted to make a small piece of code in Python where the moderator can insert which review he wants to see. I have written in Python:

```
show_review = str(input("which reviews do you want to check: "))
```

and I want Python to search the result of that question in the list

```
reviews_of_today = [review1, review2, review3, review4, review5, review6, review7, review8, review9, review10]
```

What code do I need to use or write to perform my needs?

```
review1 = ("Reizen ging soepel")
review2 = ("Het reizen was erg tof")
review3 = ("Kanker NS")
review4 = ("Het ging simpel")
review5 = ("Goede regels voor corona")
review6 = ("Trein kwam eindelijk een keer optijd")
review7 = ("NS komt altijd telaat! Tyfus zooi!")
review8 = ("Kut NS weer vertraging")
review9 = ("Volgende keer neem ik de taxi, sjonge jonge jonge altijd weer het zelfde probleem met NS")
review10 = ("NS Altijd goede ervaring mee gehad")

reviews_of_today = [review1, review2, review3, review4, review5, review6, review7, review8, review9, review10]
#reviews_of_today_no_duplicates = list(dict.fromkeys(reviews_of_today))
# result = []

show_review = str(input("which reviews do you want to check: "))
moderator_want_to_see_review = (show_review)

if moderator_want_to_see_review in show_review:
    print()
```
2020/09/09
[ "https://Stackoverflow.com/questions/63814809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try this code. I am not clear on what you are trying to do in the `output_table()` data frame.

```
library(shiny)
library(shinyWidgets)
library(DT)  # needed for DTOutput()/renderDT()

# ui object
ui <- fluidPage(
  titlePanel(p("Spatial app", style = "color:#3474A7")),
  sidebarLayout(
    sidebarPanel(
      uiOutput("inputp1"),
      numericInput("num", label = ("value"), value = 1),
      # Add the output for new pickers
      uiOutput("pickers"),
      actionButton("button", "Update")
    ),
    mainPanel(
      DTOutput("table")
    )
  )
)

# server()
server <- function(input, output, session) {
  DF1 <- reactiveValues(data=NULL)

  dt <- reactive({
    name<-c("John","Jack","Bill")
    value1<-c(2,4,6)
    dt<-data.frame(name,value1)
  })

  observe({
    DF1$data <- dt()
  })

  output$inputp1 <- renderUI({
    pickerInput(
      inputId = "p1",
      label = "Select Column headers",
      choices = colnames( dt()),
      multiple = TRUE,
      options = list(`actions-box` = TRUE)
    )
  })

  observeEvent(input$p1, {
    # Create the new pickers
    output$pickers<-renderUI({
      dt1 <- DF1$data
      div(lapply(input$p1, function(x){
        if (is.numeric(dt1[[x]])) {
          sliderInput(inputId=x, label=x,
                      min=min(dt1[[x]]), max=max(dt1[[x]]),
                      value=c(min(dt1[[x]]),max(dt1[[x]])))
        }else { # if (is.factor(dt1[[x]])) {
          selectInput(
            inputId = x,        # The col name of selected column
            label = x,          # The col label of selected column
            choices = dt1[,x],  # all rows of selected column
            multiple = TRUE
          )
        }
      }))
    })
  })

  dt2 <- eventReactive(input$button, {
    req(input$num)
    ## here you can provide the user input data read inside this
    ## observeEvent or recently modified data DF1$data
    dt <- DF1$data
    dt$value1<-dt$value1*isolate(input$num)
    dt
  })

  observe({DF1$data <- dt2()})

  output_table <- reactive({
    req(input$p1, sapply(input$p1, function(x) input[[x]]))
    dt_part <- dt2()
    for (colname in input$p1) {
      if (is.factor(dt_part[[colname]]) && !is.null(input[[colname]])) {
        dt_part <- subset(dt_part, dt_part[[colname]] %in% input[[colname]])
      } else {
        if (!is.null(input[[colname]][[1]])) {
          dt_part <- subset(dt_part, (dt_part[[colname]] >= input[[colname]][[1]]) &
                                      dt_part[[colname]] <= input[[colname]][[2]])
        }
      }
    }
    dt_part
  })

  output$table<-renderDT({
    output_table()
  })
}

# shinyApp()
shinyApp(ui = ui, server = server)
```

[![output](https://i.stack.imgur.com/nRoks.png)](https://i.stack.imgur.com/nRoks.png)
First of all, you'd need to include a `req` in your `reactive()`, since `input$num` is not available at the initialization of your example:

```r
dt<-reactive({
  input$button
  req(input$num)
  name<-c("John","Jack","Bill")
  value1<-c(2,4,6)
  dt<-data.frame(name,value1)
  dt$value1<-dt$value1*isolate(input$num)
  dt
})
```
57,264,952
Very new to Python and I hope I can get some help here. I'm trying to sum up the totals from different lists in a dictionary:

```
{'1': [0, 2, 2, 0, 0], '2': [0, 1, 1, 0, 0], '3': [2, 4, 2, 0, 2]}
```

I have been trying to find a way to sum up the totals as follows and append them to a new list:

```
'1': [0, 2, 2, 0, 0]
'2': [0, 1, 1, 0, 0]
'3': [2, 4, 2, 0, 2]

0+0+2 = 2
2+1+4 = 7
2+1+2 = 5

[2, 7, 5, 0, 2]
```

I was able to get partial results for one row doing it this way, but wasn't able to figure out a way to get the output I want.

```
total_all = list()
for x, result_total in result_all.items():
    new_total = (result_total[1])
    total_all.append((new_total))

print(sum(total_all))

output
7
```

Any suggestions and help would be greatly appreciated.
2019/07/30
[ "https://Stackoverflow.com/questions/57264952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7280296/" ]
Using the `zip()` ([doc](https://docs.python.org/3/library/functions.html#zip)) function to transpose the dictionary values and `sum()` to sum them inside a list comprehension:

```
d = {'1': [0, 2, 2, 0, 0], '2': [0, 1, 1, 0, 0], '3': [2, 4, 2, 0, 2]}

out = [sum(i) for i in zip(*d.values())]
print(out)
```

Prints:

```
[2, 7, 5, 0, 2]
```

EDIT (little explanation):

The star-expression `*` inside `zip()` effectively unpacks the dict values into this:

```
out = [sum(i) for i in zip([0, 2, 2, 0, 0], [0, 1, 1, 0, 0], [2, 4, 2, 0, 2])]
```

`zip()` iterates over each of its arguments:

```
1. iteration -> (0, 0, 2)
2. iteration -> (2, 1, 4)
...
```

`sum()` does the sum of these tuples:

```
1. iteration -> sum( (0, 0, 2) ) -> 2
2. iteration -> sum( (2, 1, 4) ) -> 7
...
```
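One caveat worth knowing: `zip()` silently stops at the shortest list. If the lists could ever have unequal lengths, `itertools.zip_longest` pads the missing values instead. A small sketch (the shortened second list is made up for illustration):

```python
from itertools import zip_longest

d = {'1': [0, 2, 2, 0, 0], '2': [0, 1, 1], '3': [2, 4, 2, 0, 2]}

# fillvalue=0 treats missing entries as zero instead of truncating the columns
out = [sum(i) for i in zip_longest(*d.values(), fillvalue=0)]
print(out)  # → [2, 7, 5, 0, 2]
```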
You can do it like this,

```
In [6]: list(map(sum,zip(*d.values())))
Out[6]: [2, 7, 5, 0, 2]
```
43,742,143
I have a text file that contains data in JSON format:

```
{"Header": {
    "name":"test"},
 "params":{
    "address":"myhouse"
    }
}
```

I am trying to read it from a Python file and convert it to JSON format. I have tried with both the yaml and json libraries, and, with both libraries, it converts it to JSON format but it also converts the double quotes to single quotes. Is there any way of parsing it into JSON format but keeping the double quotes? I don't think using a replace call is a valid option, as it will also replace single quotes that are part of the data. Thanks
2017/05/02
[ "https://Stackoverflow.com/questions/43742143", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5193545/" ]
Do this:

```
import json

with open("file.json", "r") as f:
    obj = json.load(f)

with open("file.json", "w") as f:
    # If your strings contain some unicode characters, also pass ensure_ascii=False.
    json.dump(obj, f, indent=4)
```
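As a quick illustration of the `ensure_ascii` remark above, here is what the flag changes when writing JSON text (the sample dict is made up):

```python
import json

obj = {"name": "café"}

print(json.dumps(obj))                      # → {"name": "caf\u00e9"}
print(json.dumps(obj, ensure_ascii=False))  # → {"name": "café"}
```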
You can use `json.load` to load a json object from a file.

```
import json

with open("file.json", "r") as f:
    obj = json.load(f)
```

The resulting json object delimits strings with `'` rather than `"`, but you can easily use a `replace` call at that point.

```
In [6]: obj
Out[6]: {'Header': {'name': 'test'}, 'params': {'address': 'myhouse'}}
```

EDIT: Though I misunderstood the question at first, you can use `json.dumps` to write a string-encoding of the json object that uses double quotes as the standard requires.

```
In [10]: json.dumps(obj)
Out[10]: '{"params": {"address": "myhouse"}, "Header": {"na\'me": "test"}}'
```

It's unclear what you're trying to do though, as if you're trying to read the json into a Python object, it doesn't matter what string delimiters are used; if you're trying to read your valid json into a Python string, you can just read the file without any libraries.
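The quote confusion can be shown in a couple of lines: the single quotes only appear in Python's repr of the dict, while `json.dumps` always emits standard double-quoted JSON, so no replace call is needed:

```python
import json

obj = {'Header': {'name': 'test'}, 'params': {'address': 'myhouse'}}

print(repr(obj))        # Python repr: single quotes
print(json.dumps(obj))  # JSON text: double quotes, as the standard requires
```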
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting the error message *ImportError: attempted relative import with no known parent package*

  [![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running the Python test from the VS Code terminal by using the command line

  > python test\_HelloWorld.py

  I'm getting the error message *ValueError: attempted relative import beyond top-level package*

  [![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using a sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there could still be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
Do not use a relative import. Simply change it to

```
from solutions import helloWorldPackage as hw
```

**Update**

I initially tested this in PyCharm. PyCharm has a nice feature - it adds the content root and source roots to PYTHONPATH (both options are configurable). You can achieve the same effect in VS Code by adding a `.env` file:

```
PYTHONPATH=.:${PYTHONPATH}
```

Now the project directory will be in the PYTHONPATH for every tool that is launched via VS Code, and Ctrl+F5 works fine.
> Setup a main module and its source packages paths
> =================================================

Solution found at:

* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=create%20a%20settings.json%20within%20.vscode)
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=Inside%20the-,launch.json,-you%20have%20to)

which also provide a neat in-depth [video](https://www.youtube.com/watch?v=Ad-inC3mJfU) explanation

---

The solution to the `attempted relative import with no known parent package` issue, which is especially tricky in VS Code (as opposed to PyCharm, which provides GUI tools to flag folders as packages), is to:

### Add configuration files for the VS Code debugger

> That is, add `launch.json` as a `Module` (this will always execute the file given in the "module" key) and `settings.json` inside the [`MyProjectRoot/.vscode`](https://i.stack.imgur.com/DKJGV.png) folder (manually add them if they are not there yet, or be guided by the VS Code GUI for [`Run & Debug`](https://i.stack.imgur.com/W9bHy.png))

### `launch.json` setup

> That is, add an `"env"` key to `launch.json` containing an object with `"PYTHONPATH"` as key and `"${workspaceFolder}/mysourcepackage"` as value
>
> [final launch.json configuration](https://i.stack.imgur.com/k5aOb.png)

### `settings.json` setup

> That is, add a `"python.analysis.extraPaths"` key to `settings.json` containing a list of paths for the debugger to be aware of, which in our case is `["${workspaceFolder}/mysourcepackage"]` as value **(note that we put the string in a list only for the case in which we want to include other paths too; it is not needed for our specific example but it's still a de facto standard as far as I know)**
>
> [final settings.json configuration](https://i.stack.imgur.com/T7G8C.png)

This should be everything needed for it to work both by calling the script with python from the terminal and from the VS Code debugger.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting the error message *ImportError: attempted relative import with no known parent package*

  [![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running the Python test from the VS Code terminal by using the command line

  > python test\_HelloWorld.py

  I'm getting the error message *ValueError: attempted relative import beyond top-level package*

  [![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using a sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there could still be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
Do not use a relative import. Simply change it to

```
from solutions import helloWorldPackage as hw
```

**Update**

I initially tested this in PyCharm. PyCharm has a nice feature - it adds the content root and source roots to PYTHONPATH (both options are configurable). You can achieve the same effect in VS Code by adding a `.env` file:

```
PYTHONPATH=.:${PYTHONPATH}
```

Now the project directory will be in the PYTHONPATH for every tool that is launched via VS Code, and Ctrl+F5 works fine.
An Answer From 2022
===================

Here's a potential approach from 2022. The issue is identified correctly: if you're using an IDE like VS Code, it doesn't automatically extend the python path and discover modules. One way you can do this is with a .env file that will automatically extend the python path.

I used this website [k0nze.dev](https://k0nze.dev/posts/python-relative-imports-vscode/) repeatedly to find an answer and actually discovered another solution. Here are the drawbacks of the solution provided on k0nze.dev:

* It only extends the python path via the launch.json file, which doesn't affect running python outside of the debugger in this case
* You can only use ${workspaceFolder} and other variables within an "env" variable in the launch.json, which gets overridden in precedence by the existence of a .env file.
* The solution works only within VS Code since it has to be written within the launch.json (- overall portability)

### The .env File

In your example **tests** falls under its own directory and has its own `__init__.py`. An IDE like VS Code is not going to automatically discover this directory and module. You can see this by creating the below script anywhere in your project and running it:

**\_path.py**

```py
from sys import path as pythonpath

print("\n ,".join(pythonpath))
```

You shouldn't see your ${workspaceFolder}/tests/, or if you do, it's because your \_path.py script is sitting in that directory and python automatically adds the script path to pythonpath.

To solve this issue across your project, you need to extend the python path using a .env file that applies to all files in your project. To do this, use **dot notation** to indicate your ${workspaceFolder} in lieu of being able to actually use ${workspaceFolder}. You have to use dot notation because .env files *do not* do variable substitution like ${workspaceFolder}.

Your env file should look like:

#### Windows

```bash
PYTHONPATH = ".\\tests\\;.\\"
```

#### Mac / Linux / etc

```bash
PYTHONPATH = "./tests/:./"
```

where:

* ; and : are the path separators for environment variables for Windows and Mac respectively
* ./tests/ and .\tests\ extend the python path to the files within the module tests for import in the `__init__.py`
* ./ and .\ extend the python path to the modules tests and presumably solutions? I don't know if solutions is a module but I'm going to run with it.

#### Test It Out

Now re-run your **\_path.py** script and you should see permanent additions to your path. This works for deeply nested modules as well if your company has a more stringent project structure.

VS Code
-------

If you are using VS Code, you cannot use environment variables provided by VS Code in the .env file. This includes ${workspaceFolder}, which is very handy to extend a file path to your currently open folder. I've beaten myself up trying to figure out why it's not adding these environment variables to the path for a *very* long time now, and it seems the solution is instead to use dot notation to prepend the path with a relative file path. This allows the user to append a file path relative to the project structure and not their own file structure.

### For Other IDEs

The reason the above is written for VS Code is because it automatically reads in the .env file every time you run a python file. This functionality is **very** handy, and unless your IDE does this, you will need the help of the **dotenv** package. You can actually see the location that your version of VS Code looks for by searching for the below setting in your preferences:

[VSCode settings env file](https://i.stack.imgur.com/dSiIE.png)

Anyways, to install the package you need to import .env files with, run:

```bash
pip install python-dotenv
```

In your python script, you need to run and import the below to get it to load the .env file as your environment variables:

```py
from dotenv import load_dotenv

# load env variables
load_dotenv()

"""
The rest of your code here
"""
```

That's It
---------

Congrats on making it to the bottom. This topic nearly drove me insane when I went to tackle it, but I think it's helpful to be elaborate, to understand the issue, and to know how to tackle it without doing hacky sys.path appends or absolute file paths. This also gives you a way to test what's on your path and an explanation of why each path is added in your project structure.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting the error message *ImportError: attempted relative import with no known parent package*

  [![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running the Python test from the VS Code terminal by using the command line

  > python test\_HelloWorld.py

  I'm getting the error message *ValueError: attempted relative import beyond top-level package*

  [![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using a sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there could still be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
Do not use a relative import. Simply change it to

```
from solutions import helloWorldPackage as hw
```

**Update**

I initially tested this in PyCharm. PyCharm has a nice feature - it adds the content root and source roots to PYTHONPATH (both options are configurable). You can achieve the same effect in VS Code by adding a `.env` file:

```
PYTHONPATH=.:${PYTHONPATH}
```

Now the project directory will be in the PYTHONPATH for every tool that is launched via VS Code, and Ctrl+F5 works fine.
I was just going through this with VS Code and Python (using Win10) and found a solution. Below is my project folder. Files in folder "core" import functions from folder "events", and files in folder "unit tests" import functions from folder "core". I could run and debug the top-level file file\_gui\_tk.py within VS Code, but I couldn't run/debug any of the files in the sub-folders due to import errors. I believe the issue is that when I try to run/debug those files, the working directory is no longer the project directory, and consequently the import path declarations no longer work.

Folder Structure:

```
testexample
    core
        __init__.py
        core_os.py
        dir_parser.py
    events
        __init__.py
        event.py
    unit tests
        list_files.py
        test_parser.py
    .env
    file_gui_tk.py
```

My file import statements:

in core/core\_os.py:

```
from events.event import post_event
```

in core/dir\_parser.py:

```
from core.core_os import compare_file_bytes, check_dir
from events.event import post_event
```

To run/debug any file within the project directory, I added a top-level .env file with contents:

```
PYTHONPATH="./"
```

Added this statement to the launch.json file:

```
"env": {"PYTHONPATH": "/testexample"},
```

And added this to the settings.json file:

```
"terminal.integrated.env.windows": {"PYTHONPATH": "./",}
```

Now I can run and debug any file and VS Code finds the import dependencies within the project. I haven't tried this with a project dir structure more than two levels deep.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting the error message *ImportError: attempted relative import with no known parent package*

  [![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running the Python test from the VS Code terminal by using the command line

  > python test\_HelloWorld.py

  I'm getting the error message *ValueError: attempted relative import beyond top-level package*

  [![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using a sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there could still be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
You're bumping into two issues. One is that you're running your test file from within the directory where it's written, and so Python doesn't know what `..` represents. There are a couple of ways to fix this. One is to take the solution that @lesiak proposed by changing the import to `from solutions import helloWorldPackage`, but to execute your tests by running `python tests/test_helloWorld.py`. That will make sure that your project's top level is in Python's search path and so it will see `solutions`.

The other solution is to open your project in VS Code one directory higher (whatever directory contains `solutions` and `tests`). You will still need to change how you execute your code, though, so you are doing it from the top level as I suggested above.

Even better would be to either run your code using `python -m tests.test_helloWorld`, use the Python extension's Run command, or use the extension's Test Explorer. All of those options should help you with how to run your code (you will still need to either change the import or open the higher directory in VS Code).
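The `python -m` suggestion can be sketched end-to-end. This builds a throwaway copy of the question's layout in a temp directory and runs the test as a module from the top level; the file contents are invented stand-ins for the real ones:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the question's project layout.
root = tempfile.mkdtemp()
for pkg in ("solutions", "tests"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

with open(os.path.join(root, "solutions", "helloWorldPackage.py"), "w") as f:
    f.write('def hello():\n    return "Hello World"\n')

with open(os.path.join(root, "tests", "test_helloWorld.py"), "w") as f:
    f.write("from solutions import helloWorldPackage as hw\n"
            "print(hw.hello())\n")

# Equivalent of `cd <project root>; python -m tests.test_helloWorld`.
# With -m, the current directory lands on sys.path, so the absolute
# import `from solutions import ...` resolves without any sys.path hacks.
result = subprocess.run([sys.executable, "-m", "tests.test_helloWorld"],
                        cwd=root, capture_output=True, text=True)
print(result.stdout.strip())  # → Hello World
```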
> Setup a main module and its source packages paths
> =================================================

Solution found at:

* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=create%20a%20settings.json%20within%20.vscode)
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=Inside%20the-,launch.json,-you%20have%20to)

which also provide a neat in-depth [video](https://www.youtube.com/watch?v=Ad-inC3mJfU) explanation

---

The solution to the `attempted relative import with no known parent package` issue, which is especially tricky in VS Code (as opposed to PyCharm, which provides GUI tools to flag folders as packages), is to:

### Add configuration files for the VS Code debugger

> That is, add `launch.json` as a `Module` (this will always execute the file given in the "module" key) and `settings.json` inside the [`MyProjectRoot/.vscode`](https://i.stack.imgur.com/DKJGV.png) folder (manually add them if they are not there yet, or be guided by the VS Code GUI for [`Run & Debug`](https://i.stack.imgur.com/W9bHy.png))

### `launch.json` setup

> That is, add an `"env"` key to `launch.json` containing an object with `"PYTHONPATH"` as key and `"${workspaceFolder}/mysourcepackage"` as value
>
> [final launch.json configuration](https://i.stack.imgur.com/k5aOb.png)

### `settings.json` setup

> That is, add a `"python.analysis.extraPaths"` key to `settings.json` containing a list of paths for the debugger to be aware of, which in our case is `["${workspaceFolder}/mysourcepackage"]` as value **(note that we put the string in a list only for the case in which we want to include other paths too; it is not needed for our specific example but it's still a de facto standard as far as I know)**
>
> [final settings.json configuration](https://i.stack.imgur.com/T7G8C.png)

This should be everything needed for it to work both by calling the script with python from the terminal and from the VS Code debugger.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting the error message *ImportError: attempted relative import with no known parent package*

  [![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running the Python test from the VS Code terminal by using the command line

  > python test\_HelloWorld.py

  I'm getting the error message *ValueError: attempted relative import beyond top-level package*

  [![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using a sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there could still be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
You're bumping into two issues. One is that you're running your test file from within the directory where it's written, and so Python doesn't know what `..` represents. There are a couple of ways to fix this. One is to take the solution that @lesiak proposed by changing the import to `from solutions import helloWorldPackage`, but to execute your tests by running `python tests/test_helloWorld.py`. That will make sure that your project's top level is in Python's search path and so it will see `solutions`.

The other solution is to open your project in VS Code one directory higher (whatever directory contains `solutions` and `tests`). You will still need to change how you execute your code, though, so you are doing it from the top level as I suggested above.

Even better would be to either run your code using `python -m tests.test_helloWorld`, use the Python extension's Run command, or use the extension's Test Explorer. All of those options should help you with how to run your code (you will still need to either change the import or open the higher directory in VS Code).
An Answer From 2022 =================== Here's a potential approach from 2022. The issue is identified correctly and if you're using an IDE like VS Code, it doesn't automatically extend the python path and discover modules. One way you can do this using an .env file that will automatically extend the python path. I used this website [k0nze.dev](https://k0nze.dev/posts/python-relative-imports-vscode/) repeatedly to find an answer and actually discovered another solution. Here are the drawbacks of the solution provided in the k0nze.dev solution: * It only extends the python path via the launch.json file which doesn't effect running python outside of the debugger in this case * You can only use the ${workspaceFolder} and other variables within an "env" variable in the launch.json, which gets overwritten in precedence by the existence of a .env file. * The solution works only within VS Code since it has to be written within the launch.json (- overall portability) ### The .env File In your example **tests** falls under it's own directory and has it's own **init**.py. In an IDE like VS Code, it's not going to automatically discover this directory and module. You can see this by creating the below script anywhere in your project and running it: **\_path.py** ```py from sys import path as pythonpath print("\n ,".join(pythonpath)) ``` You shouldn't see your ${workspaceFolder}/tests/ or if you do, it's because your \_path.py script is sitting in that directory and python automatically adds the script path to pythonpath. To solve this issue across your project, you need to extend the python path using .env file across all files in your project. To do this, use **dot notation** to indicate your ${workspaceFolder} in lieu of being able to actually use ${workspaceFolder}. You have to do dot notation because .env files *do not* do variable assignment like ${workspaceFolder}. 
Your .env file should look like:

#### Windows

```bash
PYTHONPATH = ".\\tests\\;.\\"
```

#### Mac / Linux / etc

```bash
PYTHONPATH = "./tests/:./"
```

where:

* ; and : are the path separators for environment variables on Windows and Mac respectively
* ./tests/ and .\tests\ extend the Python path to the files within the module tests, for import in the `__init__.py`
* ./ and .\ extend the Python path to the modules tests and presumably solutions? I don't know if solutions is a module, but I'm going to run with it.

#### Test It Out

Now re-run your **\_path.py** script and you should see permanent additions to your path. This works for deeply nested modules as well, if your company has a more stringent project structure.

VS Code
-------

If you are using VS Code, you cannot use environment variables provided by VS Code in the .env file. This includes ${workspaceFolder}, which would be very handy for extending a file path to your currently open folder. I've beaten myself up trying to figure out why it's not adding these environment variables to the path for a *very* long time now. The solution is instead to use dot notation to prepend the path with a relative file path. This allows the user to append a file path relative to the project structure and not their own file structure.

### For Other IDEs

The reason the above is written for VS Code is that it automatically reads in the .env file every time you run a Python file. This functionality is **very** handy, and unless your IDE does this, you will need the help of the **python-dotenv** package.
You can actually see the .env location that your version of VS Code looks for by searching for the below setting in your preferences:

[VSCode settings env file](https://i.stack.imgur.com/dSiIE.png)

Anyways, to install the package you need to import .env files, run:

```bash
pip install python-dotenv
```

In your Python script, you need to import and run the below to get it to load the .env file into your environment variables:

```py
from dotenv import load_dotenv

# load env variables
load_dotenv()

"""
The rest of your code here
"""
```

That's It
---------

Congrats on making it to the bottom. This topic nearly drove me insane when I went to tackle it, but I think it's helpful to be elaborate, to understand the issue, and to tackle it without doing hacky sys.path appends or absolute file paths. This also gives you a way to test what's on your path and an explanation of why each path is added in your project structure.
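Outside an IDE that loads .env automatically, the simple KEY=VALUE case can also be covered without the python-dotenv dependency. This is only an illustrative stdlib sketch (the function name and parsing rules are my own and far less robust than the real package):

```py
import os
import tempfile

def load_env_file(path=".env"):
    """Naive .env loader: KEY=VALUE lines, '#' comments, surrounding quotes stripped."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip("'\"")
            # note: PYTHONPATH in os.environ only influences *newly started*
            # interpreters; for the current process you would still extend
            # sys.path yourself.

# Demo: write a throwaway .env and load it.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('# comment\nPYTHONPATH = "./tests/:./"\n')
    env_path = f.name

load_env_file(env_path)
print(os.environ["PYTHONPATH"])  # ./tests/:./
os.unlink(env_path)
```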
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting error message *ImportError: attempted relative import with no known parent package*

[![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running a Python test from the VS Code terminal by using the command line

> python test\_HelloWorld.py

I'm getting error message *ValueError: attempted relative import beyond top-level package*

[![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there still could be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
You're bumping into two issues. One is that you're running your test file from within the directory it's written in, so Python doesn't know what `..` represents. There are a couple of ways to fix this. One is to take the solution that @lesiak proposed by changing the import to `from solutions import helloWorldPackage`, but to execute your tests by running `python tests/test_helloWorld.py`. That will make sure that your project's top level is in Python's search path, so it will see `solutions`. The other solution is to open your project in VS Code one directory higher (whichever directory contains `solutions` and `tests`). You will still need to change how you execute your code, though, so that you are running it from the top level as I suggested above. Even better would be to either run your code using `python -m tests.test_helloWorld`, use the Python extension's Run command, or use the extension's Test Explorer. All of those options should help you with how to run your code (you will still need to either change the import or open the higher directory in VS Code).
I was just going through this with VS Code and Python (using Win10) and found a solution. Below is my project folder. Files in folder "core" import functions from folder "events", and files in folder "unit tests" import functions from folder "core". I could run and debug the top-level file file\_gui\_tk.py within VS Code, but I couldn't run/debug any of the files in the sub-folders due to import errors. I believe the issue is that when I try to run/debug those files, the working directory is no longer the project directory, and consequently the import path declarations no longer work.

Folder Structure:

```
testexample
    core
        __init__.py
        core_os.py
        dir_parser.py
    events
        __init__.py
        event.py
    unit tests
        list_files.py
        test_parser.py
    .env
    file_gui_tk.py
```

My file import statements:

in core/core\_os.py:

```
from events.event import post_event
```

in core/dir\_parser.py:

```
from core.core_os import compare_file_bytes, check_dir
from events.event import post_event
```

To run/debug any file within the project directory, I added a top-level .env file with contents:

```
PYTHONPATH="./"
```

Added this statement to the launch.json file:

```
"env": {"PYTHONPATH": "/testexample"},
```

And added this to the settings.json file:

```
"terminal.integrated.env.windows": {"PYTHONPATH": "./",}
```

Now I can run and debug any file, and VS Code finds the import dependencies within the project. I haven't tried this with a project dir structure more than two levels deep.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting error message *ImportError: attempted relative import with no known parent package*

[![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running a Python test from the VS Code terminal by using the command line

> python test\_HelloWorld.py

I'm getting error message *ValueError: attempted relative import beyond top-level package*

[![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there still could be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
> Setup a main module and its source packages paths
> =================================================

Solution found at:

* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=create%20a%20settings.json%20within%20.vscode)
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=Inside%20the-,launch.json,-you%20have%20to)

which also provides a neat in-depth [video](https://www.youtube.com/watch?v=Ad-inC3mJfU) explanation

---

The solution to the `attempted relative import with no known parent package` issue, which is especially tricky in VS Code (as opposed to PyCharm, which provides GUI tools to flag folders as packages), is to:

### Add configuration files for the VS Code debugger

> That is, add `launch.json` as a `Module` (this will always execute the file given in the "module" key) and `settings.json` inside the [`MyProjectRoot/.vscode`](https://i.stack.imgur.com/DKJGV.png) folder (manually add them if they're not there yet, or be guided by the VS Code GUI for [`Run & Debug`](https://i.stack.imgur.com/W9bHy.png))

### `launch.json` setup

> That is, add an `"env"` key to `launch.json` containing an object with `"PYTHONPATH"` as key and `"${workspaceFolder}/mysourcepackage"` as value
>
> [final launch.json configuration](https://i.stack.imgur.com/k5aOb.png)

### `settings.json` setup

> That is, add a `"python.analysis.extraPaths"` key to `settings.json` containing a list of paths for the debugger to be aware of, which in our case is `["${workspaceFolder}/mysourcepackage"]` **(note that we put the string in a list only for the case in which we want to include other paths too; it is not needed for our specific example, but it's still a de facto standard as far as I know)**
>
> [final settings.json configuration](https://i.stack.imgur.com/T7G8C.png)

This should be everything needed for the script to work both when called with python from the terminal and from the VS Code debugger.
An Answer From 2022
===================

Here's a potential approach from 2022. The issue is identified correctly: if you're using an IDE like VS Code, it doesn't automatically extend the Python path and discover modules. One way to do this is with a .env file that will automatically extend the Python path. I used this website [k0nze.dev](https://k0nze.dev/posts/python-relative-imports-vscode/) repeatedly to find an answer and actually discovered another solution. Here are the drawbacks of the solution provided on k0nze.dev:

* It only extends the Python path via the launch.json file, which doesn't affect running Python outside of the debugger
* You can only use ${workspaceFolder} and other variables within an "env" variable in the launch.json, which is overridden in precedence by the existence of a .env file
* The solution works only within VS Code, since it has to be written within the launch.json (reducing overall portability)

### The .env File

In your example, **tests** falls under its own directory and has its own `__init__.py`. An IDE like VS Code is not going to automatically discover this directory and module. You can see this by creating the below script anywhere in your project and running it:

**\_path.py**

```py
from sys import path as pythonpath

print("\n ,".join(pythonpath))
```

You shouldn't see your ${workspaceFolder}/tests/, or if you do, it's because your \_path.py script is sitting in that directory and Python automatically adds the script's directory to the Python path. To solve this issue across your project, you need to extend the Python path with a .env file that applies to all files in your project. To do this, use **dot notation** to indicate your ${workspaceFolder}, in lieu of being able to actually use ${workspaceFolder}. You have to use dot notation because .env files *do not* do variable substitution like ${workspaceFolder}.
Your .env file should look like:

#### Windows

```bash
PYTHONPATH = ".\\tests\\;.\\"
```

#### Mac / Linux / etc

```bash
PYTHONPATH = "./tests/:./"
```

where:

* ; and : are the path separators for environment variables on Windows and Mac respectively
* ./tests/ and .\tests\ extend the Python path to the files within the module tests, for import in the `__init__.py`
* ./ and .\ extend the Python path to the modules tests and presumably solutions? I don't know if solutions is a module, but I'm going to run with it.

#### Test It Out

Now re-run your **\_path.py** script and you should see permanent additions to your path. This works for deeply nested modules as well, if your company has a more stringent project structure.

VS Code
-------

If you are using VS Code, you cannot use environment variables provided by VS Code in the .env file. This includes ${workspaceFolder}, which would be very handy for extending a file path to your currently open folder. I've beaten myself up trying to figure out why it's not adding these environment variables to the path for a *very* long time now. The solution is instead to use dot notation to prepend the path with a relative file path. This allows the user to append a file path relative to the project structure and not their own file structure.

### For Other IDEs

The reason the above is written for VS Code is that it automatically reads in the .env file every time you run a Python file. This functionality is **very** handy, and unless your IDE does this, you will need the help of the **python-dotenv** package.
You can actually see the .env location that your version of VS Code looks for by searching for the below setting in your preferences:

[VSCode settings env file](https://i.stack.imgur.com/dSiIE.png)

Anyways, to install the package you need to import .env files, run:

```bash
pip install python-dotenv
```

In your Python script, you need to import and run the below to get it to load the .env file into your environment variables:

```py
from dotenv import load_dotenv

# load env variables
load_dotenv()

"""
The rest of your code here
"""
```

That's It
---------

Congrats on making it to the bottom. This topic nearly drove me insane when I went to tackle it, but I think it's helpful to be elaborate, to understand the issue, and to tackle it without doing hacky sys.path appends or absolute file paths. This also gives you a way to test what's on your path and an explanation of why each path is added in your project structure.
58,709,973
* when running a Python test from within VS Code using CTRL+F5 I'm getting error message *ImportError: attempted relative import with no known parent package*

[![Error message text: "ImportError: attempted relative import with no known parent package"](https://i.stack.imgur.com/jbTGS.png)](https://i.stack.imgur.com/jbTGS.png)

* when running a Python test from the VS Code terminal by using the command line

> python test\_HelloWorld.py

I'm getting error message *ValueError: attempted relative import beyond top-level package*

[![Error Message: "ValueError: attempted relative import beyond top-level package"](https://i.stack.imgur.com/xpjn3.png)](https://i.stack.imgur.com/xpjn3.png)

Here is the project structure

[![Project structure](https://i.stack.imgur.com/Jxrkt.png)](https://i.stack.imgur.com/Jxrkt.png)

How to solve the subject issue(s) with minimal (code/project structure) change efforts? TIA!

**[Update]** I have got the following solution using sys.path correction:

[![The subject issue solution using sys.path correction](https://i.stack.imgur.com/udG1c.png)](https://i.stack.imgur.com/udG1c.png)

```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```

but I guess there still could be a more effective solution without source code corrections, by using some (VS Code) settings or Python running context/environment settings (files)?
2019/11/05
[ "https://Stackoverflow.com/questions/58709973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/93277/" ]
> Setup a main module and its source packages paths
> =================================================

Solution found at:

* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=create%20a%20settings.json%20within%20.vscode)
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=Inside%20the-,launch.json,-you%20have%20to)

which also provides a neat in-depth [video](https://www.youtube.com/watch?v=Ad-inC3mJfU) explanation

---

The solution to the `attempted relative import with no known parent package` issue, which is especially tricky in VS Code (as opposed to PyCharm, which provides GUI tools to flag folders as packages), is to:

### Add configuration files for the VS Code debugger

> That is, add `launch.json` as a `Module` (this will always execute the file given in the "module" key) and `settings.json` inside the [`MyProjectRoot/.vscode`](https://i.stack.imgur.com/DKJGV.png) folder (manually add them if they're not there yet, or be guided by the VS Code GUI for [`Run & Debug`](https://i.stack.imgur.com/W9bHy.png))

### `launch.json` setup

> That is, add an `"env"` key to `launch.json` containing an object with `"PYTHONPATH"` as key and `"${workspaceFolder}/mysourcepackage"` as value
>
> [final launch.json configuration](https://i.stack.imgur.com/k5aOb.png)

### `settings.json` setup

> That is, add a `"python.analysis.extraPaths"` key to `settings.json` containing a list of paths for the debugger to be aware of, which in our case is `["${workspaceFolder}/mysourcepackage"]` **(note that we put the string in a list only for the case in which we want to include other paths too; it is not needed for our specific example, but it's still a de facto standard as far as I know)**
>
> [final settings.json configuration](https://i.stack.imgur.com/T7G8C.png)

This should be everything needed for the script to work both when called with python from the terminal and from the VS Code debugger.
I was just going through this with VS Code and Python (using Win10) and found a solution. Below is my project folder. Files in folder "core" import functions from folder "events", and files in folder "unit tests" import functions from folder "core". I could run and debug the top-level file file\_gui\_tk.py within VS Code, but I couldn't run/debug any of the files in the sub-folders due to import errors. I believe the issue is that when I try to run/debug those files, the working directory is no longer the project directory, and consequently the import path declarations no longer work.

Folder Structure:

```
testexample
    core
        __init__.py
        core_os.py
        dir_parser.py
    events
        __init__.py
        event.py
    unit tests
        list_files.py
        test_parser.py
    .env
    file_gui_tk.py
```

My file import statements:

in core/core\_os.py:

```
from events.event import post_event
```

in core/dir\_parser.py:

```
from core.core_os import compare_file_bytes, check_dir
from events.event import post_event
```

To run/debug any file within the project directory, I added a top-level .env file with contents:

```
PYTHONPATH="./"
```

Added this statement to the launch.json file:

```
"env": {"PYTHONPATH": "/testexample"},
```

And added this to the settings.json file:

```
"terminal.integrated.env.windows": {"PYTHONPATH": "./",}
```

Now I can run and debug any file, and VS Code finds the import dependencies within the project. I haven't tried this with a project dir structure more than two levels deep.
62,494,807
I have the following python script

```
from bs4 import BeautifulSoup
import requests

home_dict = []

for year in range(2005, 2021):
    if year == 2020:
        for month in range(1, 6):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)
    else:
        for month in range(1, 13):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)

print(home_dict)
```

This script is web scraping a website. If the year is 2020, it only scrapes from January to May. For other years, it goes from January to December. You can tell that the body of the script is repeated inside that if-else conditional statement; is there an easier way to write this to make it look cleaner and not repeat itself?
2020/06/21
[ "https://Stackoverflow.com/questions/62494807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11303284/" ]
Perhaps try a `try` clause?

```
for year in range(2005, 2021):
    for month in range(1, 13):
        try:
            <your code>
        except Exception:
            continue
```
As scraping months 1 to 5 (January to May) is common to all the years, you can **scrape** those months first for every year. Then, if the year is not equal to 2020, you can **scrape** the rest of the months.
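Both suggestions boil down to computing the month range per year once and keeping a single loop body. A minimal sketch (the helper name `months_for` and its defaults are my own, not from the question):

```py
def months_for(year, cutoff_year=2020, last_month=5):
    """Months to scrape: Jan-May for the cutoff year, Jan-Dec otherwise."""
    return range(1, last_month + 1) if year == cutoff_year else range(1, 13)

# The scraping loop then collapses to a single body:
for year in range(2005, 2021):
    for month in months_for(year):
        pass  # fetch and parse the page for (year, month) here

print(list(months_for(2020)))        # [1, 2, 3, 4, 5]
print(len(list(months_for(2019))))   # 12
```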
62,494,807
I have the following python script

```
from bs4 import BeautifulSoup
import requests

home_dict = []

for year in range(2005, 2021):
    if year == 2020:
        for month in range(1, 6):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)
    else:
        for month in range(1, 13):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)

print(home_dict)
```

This script is web scraping a website. If the year is 2020, it only scrapes from January to May. For other years, it goes from January to December. You can tell that the body of the script is repeated inside that if-else conditional statement; is there an easier way to write this to make it look cleaner and not repeat itself?
2020/06/21
[ "https://Stackoverflow.com/questions/62494807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11303284/" ]
Just define a `dict` with year as key & month range as values,

```
filter_ = {2020: (1, 6)}

for year in range(2005, 2021):
    start, stop = filter_.get(year, (1, 13))
    for month in range(start, stop):
        url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
        r = requests.get(url)
        ...
```
As scraping months 1 to 5 (January to May) is common to all the years, you can **scrape** those months first for every year. Then, if the year is not equal to 2020, you can **scrape** the rest of the months.
62,494,807
I have the following python script

```
from bs4 import BeautifulSoup
import requests

home_dict = []

for year in range(2005, 2021):
    if year == 2020:
        for month in range(1, 6):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)
    else:
        for month in range(1, 13):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)

print(home_dict)
```

This script is web scraping a website. If the year is 2020, it only scrapes from January to May. For other years, it goes from January to December. You can tell that the body of the script is repeated inside that if-else conditional statement; is there an easier way to write this to make it look cleaner and not repeat itself?
2020/06/21
[ "https://Stackoverflow.com/questions/62494807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11303284/" ]
Perhaps try a `try` clause?

```
for year in range(2005, 2021):
    for month in range(1, 13):
        try:
            <your code>
        except Exception:
            continue
```
Just define a `dict` with year as key & month range as values,

```
filter_ = {2020: (1, 6)}

for year in range(2005, 2021):
    start, stop = filter_.get(year, (1, 13))
    for month in range(start, stop):
        url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html'
        r = requests.get(url)
        ...
```
67,790,430
I have a big text file that has around 200K lines of records. But I need to extract only specific lines which start with CLM. For example, if the file has 100K lines that start with CLM, I should print all those 100K lines alone. Can anyone help me achieve this using a Python script?
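What the question describes is a single streaming filter over the file. A minimal sketch (the function name, the `CLM` prefix parameter, and the throwaway demo file are illustrative assumptions):

```py
import os
import tempfile

def extract_clm_lines(path, prefix="CLM"):
    """Yield lines starting with the prefix, streaming so a 200K-line
    file is never held fully in memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith(prefix):
                yield line.rstrip("\n")

# Demo on a throwaway file:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("CLM record one\nOTHER record\nCLM record two\n")
    demo = f.name

matches = list(extract_clm_lines(demo))
print(matches)  # ['CLM record one', 'CLM record two']
os.unlink(demo)
```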
2021/06/01
[ "https://Stackoverflow.com/questions/67790430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15924358/" ]
this would work:

```
df2[, colnames(df2) %in% colnames(df1)]

         x3        x4        x7       x10        x12
1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```

You simply check which column names of `df2` also appear in `df1` and select these columns from `df2`.
If all of `df1`'s column names are present in `df2` you can use -

```
df2[names(df1)]

#        x3        x4        x7       x10        x12
#1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```

If only a few of `df1`'s column names are present in `df2` you can either use -

```
df2[intersect(names(df2), names(df1))]
```

Or in `dplyr` -

```
library(dplyr)

df2 %>% select(any_of(names(df1)))
```
67,790,430
I have a big text file that has around 200K lines of records. But I need to extract only specific lines which start with CLM. For example, if the file has 100K lines that start with CLM, I should print all those 100K lines alone. Can anyone help me achieve this using a Python script?
2021/06/01
[ "https://Stackoverflow.com/questions/67790430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15924358/" ]
this would work:

```
df2[, colnames(df2) %in% colnames(df1)]

         x3        x4        x7       x10        x12
1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```

You simply check which column names of `df2` also appear in `df1` and select these columns from `df2`.
We can also use

```
library(dplyr)

df2 %>% select_at(vars(names(df1)))
```
67,790,430
I have a big text file that has around 200K lines of records. But I need to extract only specific lines which start with CLM. For example, if the file has 100K lines that start with CLM, I should print all those 100K lines alone. Can anyone help me achieve this using a Python script?
2021/06/01
[ "https://Stackoverflow.com/questions/67790430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15924358/" ]
We can also use

```
library(dplyr)

df2 %>% select_at(vars(names(df1)))
```
If all of `df1`'s column names are present in `df2` you can use -

```
df2[names(df1)]

#        x3        x4        x7       x10        x12
#1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```

If only a few of `df1`'s column names are present in `df2` you can either use -

```
df2[intersect(names(df2), names(df1))]
```

Or in `dplyr` -

```
library(dplyr)

df2 %>% select(any_of(names(df1)))
```
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
This is not supported by Windows Installer. Elevation is usually handled by the application through its [manifest](http://msdn.microsoft.com/en-us/library/bb756929.aspx). A solution is to create a wrapper (VBScript or EXE) which uses [ShellExecute](http://msdn.microsoft.com/en-us/library/bb762153%28VS.85%29.aspx) with **runas** verb to launch your application as an Administrator. Your shortcut can then point to this wrapper instead of the actual application.
Sorry for the confusion - I now understand what you are after. There are indeed ways to set the shortcut flag but none that I know of straight in Visual Studio. I have found a number of functions written in C++ that set the SLDF\_RUNAS\_USER flag on a shortcut. Some links to such functions include: * <http://blogs.msdn.com/b/oldnewthing/archive/2007/12/19/6801084.aspx> * <http://social.msdn.microsoft.com/Forums/en-US/windowssecurity/thread/a55aa70e-ae4d-4bf6-b179-2e3df3668989/> Another interesting discussion on the same topic was carried out at the NSIS forums; the thread may be of help. There is a function listed that can be built, as well as mention of a registry location which stores such shortcut settings (this seems to be the easiest way to go, if it works) - I am unable to test the registry method at the moment, but can do a bit later to see if it works. This thread can be found here: <http://forums.winamp.com/showthread.php?t=278764> If you are quite keen to do this programmatically, then maybe you could adapt one of the functions above to be run as a post-install task? This would set the flag of the shortcut after your install, but this once again needs to be done on Non-Advertised shortcuts, so the MSI would have to be fixed as I mentioned earlier. I'll keep looking and test out the registry setting method to see if it works and report back. Chada
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
This is not supported by Windows Installer. Elevation is usually handled by the application through its [manifest](http://msdn.microsoft.com/en-us/library/bb756929.aspx). A solution is to create a wrapper (VBScript or EXE) which uses [ShellExecute](http://msdn.microsoft.com/en-us/library/bb762153%28VS.85%29.aspx) with **runas** verb to launch your application as an Administrator. Your shortcut can then point to this wrapper instead of the actual application.
I needed my application to prompt for Administrator's rights when run from the Start Menu or Program Files. I achieved this behavior by setting the 'Run this program as administrator' checkbox to true on \bin\Debug\my\_app.exe (located in the Properties\Compatibility section). While installing the project, this file was copied to Program Files (and therefore the shortcut in the Start Menu) with the needed behavior. Thanks, Pavlo
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
This is largely due to the fact that Windows Installer uses 'Advertised shortcuts' for the Windows Installer packages. There is no way inherently to disable this in Visual Studio, but it is possible to modify the MSI that is produced to make sure that it does not use advertised shortcuts (or uses only one). There are 2 ways of going about this: * If your application uses a single exe or two - Use ORCA to edit the MSI. Under the shortcuts table, change the Target Entry to "[TARGETDIR]\MyExeName.exe" - where MyExeName is the name of your exe - this ensures that that particular shortcut is not advertised. * Add DISABLEADVTSHORTCUTS=1 to the Property table of the MSI using ORCA or a post build event (using the WiRunSQL.vbs script). If you need more info on this let me know. This disables all advertised shortcuts. It may be better to use the first approach, create 2 shortcuts and modify only one in ORCA so that you can right click and run as admin. Hope this helps
Sorry for the confusion - I now understand what you are after. There are indeed ways to set the shortcut flag but none that I know of straight in Visual Studio. I have found a number of functions written in C++ that set the SLDF\_RUNAS\_USER flag on a shortcut. Some links to such functions include: * <http://blogs.msdn.com/b/oldnewthing/archive/2007/12/19/6801084.aspx> * <http://social.msdn.microsoft.com/Forums/en-US/windowssecurity/thread/a55aa70e-ae4d-4bf6-b179-2e3df3668989/> Another interesting discussion on the same topic was carried out at the NSIS forums; the thread may be of help. There is a function listed that can be built, as well as mention of a registry location which stores such shortcut settings (this seems to be the easiest way to go, if it works) - I am unable to test the registry method at the moment, but can do a bit later to see if it works. This thread can be found here: <http://forums.winamp.com/showthread.php?t=278764> If you are quite keen to do this programmatically, then maybe you could adapt one of the functions above to be run as a post-install task? This would set the flag of the shortcut after your install, but this once again needs to be done on Non-Advertised shortcuts, so the MSI would have to be fixed as I mentioned earlier. I'll keep looking and test out the registry setting method to see if it works and report back. Chada
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
This is largely due to the fact that Windows Installer uses 'Advertised shortcuts' for the Windows Installer packages. There is no way inherently to disable this in Visual Studio, but it is possible to modify the MSI that is produced to make sure that it does not use advertised shortcuts (or uses only one). There are 2 ways of going about this: * If your application uses a single exe or two - Use ORCA to edit the MSI. Under the shortcuts table, change the Target Entry to "[TARGETDIR]\MyExeName.exe" - where MyExeName is the name of your exe - this ensures that that particular shortcut is not advertised. * Add DISABLEADVTSHORTCUTS=1 to the Property table of the MSI using ORCA or a post build event (using the WiRunSQL.vbs script). If you need more info on this let me know. This disables all advertised shortcuts. It may be better to use the first approach, create 2 shortcuts and modify only one in ORCA so that you can right click and run as admin. Hope this helps
I needed my application to prompt for Administrator's rights when run from the Start Menu or Program Files. I achieved this behavior by setting the 'Run this program as administrator' checkbox to true on \bin\Debug\my\_app.exe (located in the Properties\Compatibility section). While installing the project, this file was copied to Program Files (and therefore the shortcut in the Start Menu) with the needed behavior. Thanks, Pavlo
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
I know this is quite an old question, but I needed to find an answer and I thought I could help other searchers. I wrote a small function to perform this task in VBScript (pasted below). It is easily adapted to VB.net / VB6. Return codes from function: 0 - success, changed the shortcut. 99 - shortcut flag already set to run as administrator. 114017 - file not found 114038 - Data file format not valid (specifically the file is way too small) All other non-zero = unexpected errors. As mentioned by Chada in a later post, this script will not work on msi Advertised shortcuts. If you use this method to manipulate the bits in the shortcut, it must be a standard, non-advertised shortcut. References: MS Shortcut LNK format: <http://msdn.microsoft.com/en-us/library/dd871305> Some inspiration: [Read and write binary file in VBscript](https://stackoverflow.com/questions/6060529/read-and-write-binary-file-in-vbscript) Please note that the function does not check for a valid LNK shortcut. In fact you can feed it ANY file and it will alter Hex byte 15h in the file to set bit 32 to on. It copies the original shortcut to %TEMP% before amending it. Daz. ``` '# D.Collins - 12:58 03/09/2012 '# Sets a shortcut to have the RunAs flag set.
Drag an LNK file onto this script to test Option Explicit Dim oArgs, ret Set oArgs = WScript.Arguments If oArgs.Count > 0 Then ret = fSetRunAsOnLNK(oArgs(0)) MsgBox "Done, return = " & ret Else MsgBox "No Args" End If Function fSetRunAsOnLNK(sInputLNK) Dim fso, wshShell, oFile, iSize, aInput(), ts, i Set fso = CreateObject("Scripting.FileSystemObject") Set wshShell = CreateObject("WScript.Shell") If Not fso.FileExists(sInputLNK) Then fSetRunAsOnLNK = 114017 : Exit Function Set oFile = fso.GetFile(sInputLNK) iSize = oFile.Size ReDim aInput(iSize) Set ts = oFile.OpenAsTextStream() i = 0 Do While Not ts.AtEndOfStream aInput(i) = ts.Read(1) i = i + 1 Loop ts.Close If UBound(aInput) < 50 Then fSetRunAsOnLNK = 114038 : Exit Function If (Asc(aInput(21)) And 32) = 0 Then aInput(21) = Chr(Asc(aInput(21)) + 32) Else fSetRunAsOnLNK = 99 : Exit Function End If fso.CopyFile sInputLNK, wshShell.ExpandEnvironmentStrings("%temp%\" & oFile.Name & "." & Hour(Now()) & "-" & Minute(Now()) & "-" & Second(Now())) On Error Resume Next Set ts = fso.CreateTextFile(sInputLNK, True) If Err.Number <> 0 Then fSetRunAsOnLNK = Err.number : Exit Function ts.Write(Join(aInput, "")) If Err.Number <> 0 Then fSetRunAsOnLNK = Err.number : Exit Function ts.Close fSetRunAsOnLNK = 0 End Function ```
Sorry for the confusion - I now understand what you are after. There are indeed ways to set the shortcut flag but none that I know of straight in Visual Studio. I have found a number of functions written in C++ that set the SLDF\_RUNAS\_USER flag on a shortcut. Some links to such functions include: * <http://blogs.msdn.com/b/oldnewthing/archive/2007/12/19/6801084.aspx> * <http://social.msdn.microsoft.com/Forums/en-US/windowssecurity/thread/a55aa70e-ae4d-4bf6-b179-2e3df3668989/> Another interesting discussion on the same topic was carried out at the NSIS forums; the thread may be of help. There is a function listed that can be built, as well as mention of a registry location which stores such shortcut settings (this seems to be the easiest way to go, if it works) - I am unable to test the registry method at the moment, but can do a bit later to see if it works. This thread can be found here: <http://forums.winamp.com/showthread.php?t=278764> If you are quite keen to do this programmatically, then maybe you could adapt one of the functions above to be run as a post-install task? This would set the flag of the shortcut after your install, but this once again needs to be done on Non-Advertised shortcuts, so the MSI would have to be fixed as I mentioned earlier. I'll keep looking and test out the registry setting method to see if it works and report back. Chada
6,652,492
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error: ``` Jar export finished with problems. See details for additional information. Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/' ``` When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas? \*NOTE\*I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
2011/07/11
[ "https://Stackoverflow.com/questions/6652492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/584676/" ]
I know this is quite an old question, but I needed to find an answer and I thought I could help other searchers. I wrote a small function to perform this task in VBScript (pasted below). It is easily adapted to VB.net / VB6. Return codes from function: 0 - success, changed the shortcut. 99 - shortcut flag already set to run as administrator. 114017 - file not found 114038 - Data file format not valid (specifically the file is way too small) All other non-zero = unexpected errors. As mentioned by Chada in a later post, this script will not work on msi Advertised shortcuts. If you use this method to manipulate the bits in the shortcut, it must be a standard, non-advertised shortcut. References: MS Shortcut LNK format: <http://msdn.microsoft.com/en-us/library/dd871305> Some inspiration: [Read and write binary file in VBscript](https://stackoverflow.com/questions/6060529/read-and-write-binary-file-in-vbscript) Please note that the function does not check for a valid LNK shortcut. In fact you can feed it ANY file and it will alter Hex byte 15h in the file to set bit 32 to on. It copies the original shortcut to %TEMP% before amending it. Daz. ``` '# D.Collins - 12:58 03/09/2012 '# Sets a shortcut to have the RunAs flag set.
Drag an LNK file onto this script to test Option Explicit Dim oArgs, ret Set oArgs = WScript.Arguments If oArgs.Count > 0 Then ret = fSetRunAsOnLNK(oArgs(0)) MsgBox "Done, return = " & ret Else MsgBox "No Args" End If Function fSetRunAsOnLNK(sInputLNK) Dim fso, wshShell, oFile, iSize, aInput(), ts, i Set fso = CreateObject("Scripting.FileSystemObject") Set wshShell = CreateObject("WScript.Shell") If Not fso.FileExists(sInputLNK) Then fSetRunAsOnLNK = 114017 : Exit Function Set oFile = fso.GetFile(sInputLNK) iSize = oFile.Size ReDim aInput(iSize) Set ts = oFile.OpenAsTextStream() i = 0 Do While Not ts.AtEndOfStream aInput(i) = ts.Read(1) i = i + 1 Loop ts.Close If UBound(aInput) < 50 Then fSetRunAsOnLNK = 114038 : Exit Function If (Asc(aInput(21)) And 32) = 0 Then aInput(21) = Chr(Asc(aInput(21)) + 32) Else fSetRunAsOnLNK = 99 : Exit Function End If fso.CopyFile sInputLNK, wshShell.ExpandEnvironmentStrings("%temp%\" & oFile.Name & "." & Hour(Now()) & "-" & Minute(Now()) & "-" & Second(Now())) On Error Resume Next Set ts = fso.CreateTextFile(sInputLNK, True) If Err.Number <> 0 Then fSetRunAsOnLNK = Err.number : Exit Function ts.Write(Join(aInput, "")) If Err.Number <> 0 Then fSetRunAsOnLNK = Err.number : Exit Function ts.Close fSetRunAsOnLNK = 0 End Function ```
I needed my application to prompt for Administrator's rights when run from the Start Menu or Program Files. I achieved this behavior by setting the 'Run this program as administrator' checkbox to true on \bin\Debug\my\_app.exe (located in the Properties\Compatibility section). While installing the project, this file was copied to Program Files (and therefore the shortcut in the Start Menu) with the needed behavior. Thanks, Pavlo
38,644,397
I am trying to make a login system with python and mysql. I connected to the database, but when I try to insert values into a table, it fails. I'm not sure what's wrong. I am using python 3.5 and the PyMySQL module. ``` #!python3 import pymysql, sys, time try: print('Connecting.....') time.sleep(1.66) conn = pymysql.connect(user='root', passwd='root', host='127.0.0.1', port=3306, database='MySQL') print('Connection succeeded!') except: print('Connection failed.') sys.exit('Error.') cursor = conn.cursor() sql = "INSERT INTO login(USER, PASS) VALUES('test', 'val')" try: cursor.execute(sql) conn.commit() except: conn.rollback() print('Operation failed.') conn.close() ```
2016/07/28
[ "https://Stackoverflow.com/questions/38644397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544457/" ]
It looks like the `get`/`getAs` methods mentioned in the example are just convenience wrappers for the `fetch` method. See <https://github.com/http4s/http4s/blob/a4b52b042338ab35d89d260e0bcb39ccec1f1947/client/src/main/scala/org/http4s/client/Client.scala#L116> Use the `Request` constructor and pass `Method.POST` as the `method`. ``` fetch(Request(Method.POST, uri)) ```
``` import org.http4s.circe._ import org.http4s.dsl._ import io.circe.generic.auto._ case class Name(name: String) implicit val nameDecoder: EntityDecoder[Name] = jsonOf[Name] def routes: PartialFunction[Request, Task[Response]] = { case req @ POST -> Root / "hello" => req.decode[Name] { name => Ok(s"Hello, ${name.name}") } } ``` Hope this helps.
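On the PyMySQL question in this record: the bare `except` swallows the actual error, and values are safer passed as query parameters than spliced into the SQL string. A sketch of both points, using the stdlib `sqlite3` driver as a stand-in for the MySQL connection (PyMySQL has the same DB-API shape but uses `%s` placeholders instead of `?`):

```python
import sqlite3

def insert_login(conn, user, password):
    # Parameterized insert: the driver quotes the values, so no string splicing.
    cur = conn.cursor()
    try:
        cur.execute('INSERT INTO login (USER, PASS) VALUES (?, ?)', (user, password))
        conn.commit()
    except sqlite3.Error as exc:
        conn.rollback()
        print('Operation failed:', exc)   # surface the real error, not a bare except

# Demo with an in-memory database standing in for the MySQL server:
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE login (USER TEXT, PASS TEXT)')
insert_login(conn, 'test', 'val')
print(conn.execute('SELECT USER, PASS FROM login').fetchall())
```

Catching the concrete driver exception (here `sqlite3.Error`, in PyMySQL `pymysql.MySQLError`) instead of a bare `except` is what makes the real failure visible.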
38,644,397
I am trying to make a login system with python and mysql. I connected to the database, but when I try to insert values into a table, it fails. I'm not sure what's wrong. I am using python 3.5 and the PyMySQL module. ``` #!python3 import pymysql, sys, time try: print('Connecting.....') time.sleep(1.66) conn = pymysql.connect(user='root', passwd='root', host='127.0.0.1', port=3306, database='MySQL') print('Connection succeeded!') except: print('Connection failed.') sys.exit('Error.') cursor = conn.cursor() sql = "INSERT INTO login(USER, PASS) VALUES('test', 'val')" try: cursor.execute(sql) conn.commit() except: conn.rollback() print('Operation failed.') conn.close() ```
2016/07/28
[ "https://Stackoverflow.com/questions/38644397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544457/" ]
It looks like the `get`/`getAs` methods mentioned in the example are just convenience wrappers for the `fetch` method. See <https://github.com/http4s/http4s/blob/a4b52b042338ab35d89d260e0bcb39ccec1f1947/client/src/main/scala/org/http4s/client/Client.scala#L116> Use the `Request` constructor and pass `Method.POST` as the `method`. ``` fetch(Request(Method.POST, uri)) ```
http4s version: 0.14.11 The hard part is how to set the post body. When you dive into the code, you may find `type EntityBody = Process[Task, ByteVector]`. But what is that, exactly? If you are not ready to dive into scalaz just yet, use `withBody`. ``` object Client extends App { val client = PooledHttp1Client() val httpize = Uri.uri("http://httpize.herokuapp.com") def post() = { val req = Request(method = Method.POST, uri = httpize / "post").withBody("hello") val task = client.expect[String](req) val x = task.unsafePerformSync println(x) } post() client.shutdownNow() } ``` P.S. my helpful post about the http4s client (just skip the Chinese and read the Scala code): <http://sadhen.com/blog/2016/11/27/http4s-client-intro.html>
38,644,397
I am trying to make a login system with python and mysql. I connected to the database, but when I try to insert values into a table, it fails. I'm not sure what's wrong. I am using python 3.5 and the PyMySQL module. ``` #!python3 import pymysql, sys, time try: print('Connecting.....') time.sleep(1.66) conn = pymysql.connect(user='root', passwd='root', host='127.0.0.1', port=3306, database='MySQL') print('Connection succeeded!') except: print('Connection failed.') sys.exit('Error.') cursor = conn.cursor() sql = "INSERT INTO login(USER, PASS) VALUES('test', 'val')" try: cursor.execute(sql) conn.commit() except: conn.rollback() print('Operation failed.') conn.close() ```
2016/07/28
[ "https://Stackoverflow.com/questions/38644397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544457/" ]
http4s version: 0.14.11 The hard part is how to set the post body. When you dive into the code, you may find `type EntityBody = Process[Task, ByteVector]`. But what is that, exactly? If you are not ready to dive into scalaz just yet, use `withBody`. ``` object Client extends App { val client = PooledHttp1Client() val httpize = Uri.uri("http://httpize.herokuapp.com") def post() = { val req = Request(method = Method.POST, uri = httpize / "post").withBody("hello") val task = client.expect[String](req) val x = task.unsafePerformSync println(x) } post() client.shutdownNow() } ``` P.S. my helpful post about the http4s client (just skip the Chinese and read the Scala code): <http://sadhen.com/blog/2016/11/27/http4s-client-intro.html>
``` import org.http4s.circe._ import org.http4s.dsl._ import io.circe.generic.auto._ case class Name(name: String) implicit val nameDecoder: EntityDecoder[Name] = jsonOf[Name] def routes: PartialFunction[Request, Task[Response]] = { case req @ POST -> Root / "hello" => req.decode[Name] { name => Ok(s"Hello, ${name.name}") } } ``` Hope this helps.
66,503,032
Is there any way to make the function below faster and more optimized with `pandas` or `numpy`? The function appends elements to `seq45` until its sum is equal to or over 10000; the elements appended are `3, 7, 11`, cycled in order. The reason I want more speed is to be able to test values much larger than 10000, like 1000000, and have them processed faster. The code came from this issue: [issue](https://stackoverflow.com/questions/66487120/adding-through-an-array-with-np-where-function-numpy-python?noredirect=1#comment117539177_66487120) Code: ``` Sequence = np.array([3, 7, 11]) seq45 = [] for n in itertools.cycle(Sequence): seq45.append(n) if sum(seq45) >= 10000: break print(seq45) ``` Current Performance / Processing time: 71.9ms [![enter image description here](https://i.stack.imgur.com/d7NRt.png)](https://i.stack.imgur.com/d7NRt.png)
2021/03/06
[ "https://Stackoverflow.com/questions/66503032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15217016/" ]
You can input numbers using the Scanner class, similar to the following code from w3schools: ``` import java.util.Scanner; // Import the Scanner class class Main { public static void main(String[] args) { Scanner myObj = new Scanner(System.in); // Create a Scanner object System.out.println("Enter username"); String userName = myObj.nextLine(); // Read user input System.out.println("Username is: " + userName); // Output user input } } ``` `low`, `high` and `x` can be of data type `int`. To check which numbers between `low` and `high` are multiples of `x`, you can use a `for` loop. You can declare a new variable `count` that is incremented using `count++` every time the `for` loop finds a multiple. The `%` operator could be useful to find multiples. You can output the result using `System.out.println(count);` **Edit:** The `%` operator takes 2 operands and gives the remainder. So if `i % x == 0`, it means `i` is a multiple of `x`, and we do `count++`. The value of `i` will run through `low` to `high`. ``` for (i = low; i <= high; i++) { if (i % x == 0) { count++; } } ```
Put `count++;` after the last print statement.
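A vectorized take on the cumulative-sum question in this record: tile the pattern enough times up front, then cut at the first index where the running total reaches the target. The function name is mine; this is a sketch, not the asker's exact code:

```python
import numpy as np

def build_seq(pattern, target):
    # Enough whole repetitions of the pattern to be sure the total passes target.
    reps = target // int(np.sum(pattern)) + 1
    tiled = np.tile(pattern, reps)
    running = np.cumsum(tiled)
    # First position where the running sum is >= target, kept inclusively.
    stop = int(np.searchsorted(running, target)) + 1
    return tiled[:stop]

seq45 = build_seq(np.array([3, 7, 11]), 10000)
```

Because `np.cumsum` and `np.searchsorted` run in C, this avoids the quadratic cost of calling `sum(seq45)` on a growing list at every iteration.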
66,503,032
Is there any way to make the function below faster and more optimized with `pandas` or `numpy`? The function appends elements to `seq45` until its sum is equal to or over 10000; the elements appended are `3, 7, 11`, cycled in order. The reason I want more speed is to be able to test values much larger than 10000, like 1000000, and have them processed faster. The code came from this issue: [issue](https://stackoverflow.com/questions/66487120/adding-through-an-array-with-np-where-function-numpy-python?noredirect=1#comment117539177_66487120) Code: ``` Sequence = np.array([3, 7, 11]) seq45 = [] for n in itertools.cycle(Sequence): seq45.append(n) if sum(seq45) >= 10000: break print(seq45) ``` Current Performance / Processing time: 71.9ms [![enter image description here](https://i.stack.imgur.com/d7NRt.png)](https://i.stack.imgur.com/d7NRt.png)
2021/03/06
[ "https://Stackoverflow.com/questions/66503032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15217016/" ]
Once you get to the basic implementation (as explained by Charisma), you'll notice that it can take a lot of time if the numbers are huge: you have `high - low + 1` iterations of the loop. Therefore you can start optimizing, to get a result in constant time: * the first multiple is `qLow * x`, where `qLow` is the [ceiling](https://en.wikipedia.org/wiki/Floor_and_ceiling_functions) of the rational quotient `((double) low) / x`, * the last multiple is `qHigh * x`, where `qHigh` is the floor of the rational quotient `((double) high) / x`, Java provides `Math.floor()` and `Math.ceil()`, but you can get the same result using integer division and playing with the signs: ```java final int qLow = -(-low / x); final int qHigh = high / x; ``` Now you just have to count the number of integers between `qLow` and `qHigh` inclusive. ```java return qHigh - qLow + 1; ``` **Attention**: if `x < 0`, then you need to use `qLow - qHigh`, so it is safer to use: ```java return x > 0 ? qHigh - qLow + 1 : qLow - qHigh + 1; ``` The case `x == 0` should be dealt with at the beginning.
Put `count++;` after the last print statement.
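The constant-time arithmetic from the answer above is easy to sanity-check; here it is transcribed into Python for the `x > 0` case (Python's `//` floors, so the same ceiling trick applies):

```python
def count_multiples(low, high, x):
    # Multiples of x in [low, high] for x > 0: floor(high / x) - ceil(low / x) + 1.
    q_low = -(-low // x)    # ceiling division via negated floor division
    q_high = high // x      # floor division
    return max(0, q_high - q_low + 1)

print(count_multiples(1, 10, 3))   # 3, 6 and 9 -> 3
```

The `max(0, ...)` covers ranges that contain no multiple at all, where the subtraction would otherwise go negative.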
62,929,576
I would like to approximate bond yields in python. But the question arose: which curve describes this better? ``` import numpy as np import matplotlib.pyplot as plt x = [0.02, 0.22, 0.29, 0.38, 0.52, 0.55, 0.67, 0.68, 0.74, 0.83, 1.05, 1.06, 1.19, 1.26, 1.32, 1.37, 1.38, 1.46, 1.51, 1.61, 1.62, 1.66, 1.87, 1.93, 2.01, 2.09, 2.24, 2.26, 2.3, 2.33, 2.41, 2.44, 2.51, 2.53, 2.58, 2.64, 2.65, 2.76, 3.01, 3.17, 3.21, 3.24, 3.3, 3.42, 3.51, 3.67, 3.72, 3.74, 3.83, 3.84, 3.86, 3.95, 4.01, 4.02, 4.13, 4.28, 4.36, 4.4] y = [3, 3.96, 4.21, 2.48, 4.77, 4.13, 4.74, 5.06, 4.73, 4.59, 4.79, 5.53, 6.14, 5.71, 5.96, 5.31, 5.38, 5.41, 4.79, 5.33, 5.86, 5.03, 5.35, 5.29, 7.41, 5.56, 5.48, 5.77, 5.52, 5.68, 5.76, 5.99, 5.61, 5.78, 5.79, 5.65, 5.57, 6.1, 5.87, 5.89, 5.75, 5.89, 6.1, 5.81, 6.05, 8.31, 5.84, 6.36, 5.21, 5.81, 7.88, 6.63, 6.39, 5.99, 5.86, 5.93, 6.29, 6.07] a = np.polyfit(np.power(x,0.5), y, 1) y1 = a[0]*np.power(x,0.5)+a[1] b = np.polyfit(np.log(x), y, 1) y2 = b[0]*np.log(x) + b[1] c = np.polyfit(x, y, 2) y3 = c[0] * np.power(x,2) + np.multiply(c[1], x) + c[2] plt.plot(x, y, 'ro', lw = 3, color='black') plt.plot(x, y1, 'g', lw = 3, color='red') plt.plot(x, y2, 'g', lw = 3, color='green') plt.plot(x, y3, 'g', lw = 3, color='blue') plt.axis([0, 4.5, 2, 8]) plt.rcParams['figure.figsize'] = [10, 5] ``` The parabola, too, goes down at the end **(blue)**; the logarithmic curve drops too quickly near zero **(green)**; and the square root has a strange hump **(red)**. Are there other ways to get a more accurate approximation, or is what I have already pretty good? [![enter image description here](https://i.stack.imgur.com/nvX7M.png)](https://i.stack.imgur.com/nvX7M.png)
2020/07/16
[ "https://Stackoverflow.com/questions/62929576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12412154/" ]
This may help you. I'm not sure, but this works for me. ### Signout Function ```dart Future _signOut() async { try { return await auth.signOut(); } catch (e) { print(e.toString()); return null; } } ``` ### Function Usage ```dart IconButton( onPressed: () async { await _auth.signOut(); MaterialPageRoute( builder: (context) => Login(), ); }, icon: Icon(Icons.exit_to_app), ), ```
You need to have AuthWidgetBuilder as a top-level widget (ideally above MaterialApp) so that the entire widget tree is rebuilt on sign-in / sign-out events. You could make SplashScreen a child, and have some conditional logic to decide if you should present it. By the way, if your splash screen doesn't contain any animations you don't need a widget at all, and you can use the Launch Screen on iOS or the equivalent on Android (there are tutorials about this online). By @bizz84
61,645,140
I am really struggling with this issue. In the below, I take a domain like something.facebook.com and turn it into facebook.com using my UDF. I get this error though:

```
UnicodeEncodeError: 'ascii' codec can't encode characters in position 64-65: ordinal not in range(128)
```

I've tried a few things to get around it but I really don't understand why it's causing a problem. I would really appreciate any pointers :)

```
toplevel = ['.co.uk', '.co.nz', '.com', '.net', '.uk', '.org', '.ie', '.it', '.gov.uk', '.news', '.co.in', '.io', '.tw', '.es', '.pe', '.ca', '.de', '.to', '.us', '.br', '.im', '.ws', '.gr', '.cc', '.cn', '.me', '.be', '.tv', '.ru', '.cz', '.st', '.eu', '.fi', '.jp', '.ai', '.at', '.ch', '.ly', '.fr', '.nl', '.se', '.cat', '.com.au', '.com.ar', '.com.mt', '.com.co', '.org.uk', '.com.mx', '.tech', '.life', '.mobi', '.info', '.ninja', '.today', '.earth', '.click']

def cleanup(domain):
    print(domain)
    if domain is None or domain == '':
        domain = 'empty'
        return domain
    for tld in toplevel:
        if tld in str(domain):
            splitdomain = domain.split('.')
            ext = tld.count('.')
            if ext == 1:
                cdomain = domain.split('.')[-2].encode('utf-8') + '.' + domain.split('.')[-1].encode('utf-8')
                return cdomain
            elif ext == 2:
                cdomain = domain.split('.')[-3].encode('utf-8') + '.' + domain.split('.')[-2].encode('utf-8') + '.' + domain.split('.')[-1].encode('utf-8')
                return cdomain
            elif domain == '':
                cdomain = 'empty'
                return cdomain
            else:
                return domain

'''
#IPFR DOMS
'''
output = ipfr_logs.withColumn('capital', udfdict(ipfr_logs.domain)).createOrReplaceTempView('ipfrdoms')

ipfr_table_output = spark_session.sql('insert overwrite table design.ipfr_tld partition(dt=' + yday_date + ') select dt, hour, vservername, loc, cast(capital as string), count(distinct(emsisdn)) as users, sum(bytesdl) as size from ipfrdoms group by dt, hour, vservername, loc, capital')
```

Here is the full trace

```
Traceback (most recent call last):
  File "/data/keenek1/py_files/2020_scripts/web_ipfr_complete_new.py", line 177, in <module>
    ipfr_table_output = spark_session.sql('insert overwrite table design.ipfr_tld partition(dt=' + yday_date + ') select dt, hour, vservername, loc, cast(capital as string), count(distinct(emsisdn)) as users, sum(bytesdl) as size from ipfrdoms group by dt, hour, vservername, loc, capital')
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/session.py", line 714, in sql
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o51.sql.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
    at org.apache.spark.sql.hive.execution.SaveAsHiveFile$class.saveAsHiveFile(SaveAsHiveFile.scala:87)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.saveAsHiveFile(InsertIntoHiveTable.scala:66)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:195)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 685 in stage 0.0 failed 4 times, most recent failure: Lost task 685.3 in stage 0.0 (TID 4945, uds-far-dn112.dab.02.net, executor 22): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 229, in main
    process()
  File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 224, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 149, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "<string>", line 1, in <lambda>
  File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 74, in <lambda>
    return lambda *a: f(*a)
  File "/data/keenek1/py_files/2020_scripts/web_ipfr_complete_new.py", line 147, in cleanup
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
```
2020/05/06
[ "https://Stackoverflow.com/questions/61645140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9629771/" ]
This seems to solve it. Interestingly, the output is never 'unicode error'

```
def cleanup(domain):
    try:
        if domain is None or domain == '':
            domain = 'empty'
            return str(domain)
        for tld in toplevel:
            if tld in domain:
                splitdomain = domain.split('.')
                ext = tld.count('.')
                if ext == 1:
                    cdomain = domain.split('.')[-2] + '.' + domain.split('.')[-1]
                    return str(cdomain)
                elif ext == 2:
                    cdomain = domain.split('.')[-3] + '.' + domain.split('.')[-2] + '.' + domain.split('.')[-1]
                    return str(cdomain)
                elif domain == '':
                    cdomain = 'empty'
                    return str(cdomain)
                else:
                    return str(domain)
    except UnicodeEncodeError:
        domain = 'unicode error'
        return domain
```
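For reference, on Python 3 the whole `encode('utf-8')` dance goes away because strings are already unicode. A minimal sketch of the same idea (the shortened `TOPLEVEL` list here is illustrative, and using `endswith` instead of the original `in` test is my assumption about the intent):

```python
# Hypothetical, shortened TLD list for illustration (the question's list is longer)
TOPLEVEL = ['.co.uk', '.com', '.net', '.org']

def cleanup(domain):
    """Reduce 'something.facebook.com' to 'facebook.com'.

    Python 3 strings are already unicode, so no .encode() calls are needed
    and non-ASCII domains pass through without raising UnicodeEncodeError.
    """
    if not domain:
        return 'empty'
    for tld in TOPLEVEL:
        if domain.endswith(tld):       # endswith avoids '.com' matching mid-string
            parts = domain.split('.')
            keep = tld.count('.') + 1  # labels to keep: registered name + TLD labels
            return '.'.join(parts[-keep:])
    return domain

print(cleanup('something.facebook.com'))  # facebook.com
```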
Try this one.

```
def cleanup(domain):
    # Test the empty entry
    if domain is None or domain == '':
        domain = 'empty'
        return domain
    listSub = domain.split('.')
    result = listSub[1]  # get the part we need: www.facebook.com -> facebook.com
    for part in listSub[2:]:
        result = result + '.' + part
    return result.encode('Utf-8')
```
13,365,876
I am using python [tox](http://pypi.python.org/pypi/tox) to run python unittests for several versions of python, but these python interpreters are not all available on all machines or platforms where I'm running tox. How can I configure tox so it will run tests only when the python interpreters are available?

Example of `tox.ini`:

```
[tox]
envlist=py25,py27

[testenv]
...

[testenv:py25]
...
```

The big problem is that I do want to have a list of python environments which is auto-detected.
2012/11/13
[ "https://Stackoverflow.com/questions/13365876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/99834/" ]
As of Tox version 1.7.2, you can pass the `--skip-missing-interpreters` flag to achieve this behavior. You can also set `skip_missing_interpreters=true` in your `tox.ini` file. More info [here](http://tox.readthedocs.org/en/latest/config.html#confval-skip_missing_interpreters=BOOL).

```
[tox]
envlist = py24, py25, py26, py27, py30, py31, py32, py33, py34, jython, pypy, pypy3
skip_missing_interpreters = true
```
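If you want to see up front which interpreters tox will actually find on a machine, you can approximate the check with `shutil.which`. This is a sketch for inspection only; the candidate names are illustrative and this is not tox's own discovery logic:

```python
import shutil

def available_interpreters(candidates):
    """Return the candidate interpreter names that resolve to an executable on PATH."""
    return [name for name in candidates if shutil.which(name) is not None]

# Probe a few interpreter names (illustrative list)
found = available_interpreters(["python3", "python2.5", "jython", "pypy"])
print(found)
```

Interpreters missing from the printed list are exactly the environments `skip_missing_interpreters = true` would skip rather than fail.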
tox will display an error if an interpreter cannot be found. The open question is whether there should be a "SKIPPED" state that makes tox return a "0" success result. This should probably be explicitly enabled via a command line option. If you agree, file an issue at <http://bitbucket.org/hpk42/tox>.
13,365,876
I am using python [tox](http://pypi.python.org/pypi/tox) to run python unittests for several versions of python, but these python interpreters are not all available on all machines or platforms where I'm running tox. How can I configure tox so it will run tests only when the python interpreters are available?

Example of `tox.ini`:

```
[tox]
envlist=py25,py27

[testenv]
...

[testenv:py25]
...
```

The big problem is that I do want to have a list of python environments which is auto-detected.
2012/11/13
[ "https://Stackoverflow.com/questions/13365876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/99834/" ]
First, if you don't have tox: `pip install tox`. Then use the command `tox --skip-missing-interpreters`; it skips the interpreters that are not available locally and runs the tests only for the installed versions of Python.
tox will display an error if an interpreter cannot be found. The open question is whether there should be a "SKIPPED" state that makes tox return a "0" success result. This should probably be explicitly enabled via a command line option. If you agree, file an issue at <http://bitbucket.org/hpk42/tox>.
13,365,876
I am using python [tox](http://pypi.python.org/pypi/tox) to run python unittests for several versions of python, but these python interpreters are not all available on all machines or platforms where I'm running tox. How can I configure tox so it will run tests only when the python interpreters are available?

Example of `tox.ini`:

```
[tox]
envlist=py25,py27

[testenv]
...

[testenv:py25]
...
```

The big problem is that I do want to have a list of python environments which is auto-detected.
2012/11/13
[ "https://Stackoverflow.com/questions/13365876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/99834/" ]
As of Tox version 1.7.2, you can pass the `--skip-missing-interpreters` flag to achieve this behavior. You can also set `skip_missing_interpreters=true` in your `tox.ini` file. More info [here](http://tox.readthedocs.org/en/latest/config.html#confval-skip_missing_interpreters=BOOL).

```
[tox]
envlist = py24, py25, py26, py27, py30, py31, py32, py33, py34, jython, pypy, pypy3
skip_missing_interpreters = true
```
First, if you don't have tox: `pip install tox`. Then use the command `tox --skip-missing-interpreters`; it skips the interpreters that are not available locally and runs the tests only for the installed versions of Python.
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a bidimensional array. However I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this but I could only find counting the amount of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking how I could do this and my brain keeps blocking; this is what I came up with but I really don't know where to head.

```
int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all ascii lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
The code assumes that the input has one word per line without leading spaces and will count all words that start with two ASCII letters from `'a'`..`'z'`. As the statement in the question is not fully clear, I further assume that the character encoding is ASCII or at least ASCII compatible. (The question states: "there are no accentuated latin characters and they are all ascii lowercase")

If you want to include words that consist of only one letter or words that contain `'`, the calculation of the index values from the characters would be a bit more complicated. In this case I would add a function to calculate the index from the character value. Also for non-ASCII letters the simple calculation of the array index would not work.

The program reads the input line by line without storing all lines, checks the input as defined above, and converts the first two characters from range `'a'`..`'z'` to index values in range `0`..`'z'-'a'` to count the occurrences in a two-dimensional array.

```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;

    /* Counter array, initialized with 0. The highest possible index will
     * be 'z'-'a', so the size in each dimension is 1 more */
    unsigned long count['z'-'a'+1]['z'-'a'+1] = {0};

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    while ((read = getline(&line, &len, fp)) != -1)
    {
        /* ignore short input */
        if (read >= 2)
        {
            /* ignore other characters */
            if ((line[0] >= 'a') && (line[0] <= 'z') &&
                (line[1] >= 'a') && (line[1] <= 'z'))
            {
                /* convert first 2 characters to array index range and count */
                count[line[0]-'a'][line[1]-'a']++;
            }
        }
    }

    fclose(fp);
    if (line)
        free(line);

    /* example output */
    for (int i = 'a'-'a'; i <= 'z'-'a'; i++)
    {
        for (int j = 'a'-'a'; j <= 'z'-'a'; j++)
        {
            /* only print combinations that actually occurred */
            if (count[i][j] > 0)
            {
                printf("%c%c %lu\n", i+'a', j+'a', count[i][j]);
            }
        }
    }
    return 0;
}
```

The example input

```none
foo
a
foobar
bar
baz
fish
ford
```

results in

```
ba 2
fi 1
fo 3
```
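The per-line counting logic above is easy to cross-check in a few lines of Python. This is a sketch using only the standard library, mirroring the C program's rules (lines of at least two characters, both in `a`..`z`):

```python
import string
from collections import Counter

LOWER = set(string.ascii_lowercase)

def digraph_freq(words):
    """Count the first two letters of each word, skipping short or non a-z words."""
    return Counter(w[:2] for w in words
                   if len(w) >= 2 and w[0] in LOWER and w[1] in LOWER)

words = ["foo", "a", "foobar", "bar", "baz", "fish", "ford"]
print(digraph_freq(words))  # matches the C program's example: ba 2, fi 1, fo 3
```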
Such a job is more suitable for languages like Python, Perl, Ruby etc. instead of C. I suggest at least trying C++. If you don't have to write it in C, here is my Python version (since you didn't mention it in the question - are you working on an embedded system or something where C/ASM are the only options?):

```py
FILENAME = '/etc/dictionaries-common/words'

with open(FILENAME) as f:
    flattened = [line[:2] for line in f]

dic = {key: flattened.count(key) for key in sorted(frozenset(flattened))}

for k, v in dic.items():
    print(f'{k} = {v}')
```

Outputs:

```
A' = 1
AM = 2
AO = 2
AW = 2
Aa = 6
Ab = 44
Ac = 37
Ad = 68
Ae = 18
Af = 22
Ag = 36
Ah = 12
Ai = 17
Aj = 2
Ak = 14
Al = 284
Am = 91
An = 223
Ap = 44
Aq = 13
Ar = 185
As = 88
At = 56
Au = 81
Av = 28
Ax = 2
...
```
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a bidimensional array. However I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this but I could only find counting the amount of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking how I could do this and my brain keeps blocking; this is what I came up with but I really don't know where to head.

```
int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all ascii lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
> How to count the frequency of the two first letters in a word from a dictionary?

Use a simple [state machine](https://en.wikipedia.org/wiki/Finite-state_machine#Example:_coin-operated_turnstile) to read one character at a time, detect when the character is one of the first 2 letters of a word, then increment a 26x26 table. Words do not need to be on separate lines. Any word length is allowed.

```
unsigned long long frequency[26][26] = { 0 }; // Set all to 0

FILE *fp = fopen("large", "r");
...
int ch;
// Below 2 objects are the state machine
int word[2];
int word_length = 0;

while ((ch = fgetc(fp)) != EOF) {
    if (isalpha(ch)) {
        if (word_length < 2) {
            word[word_length++] = tolower(ch);
            if (word_length == 2) {
                // 2nd letter just arrived
                assert(word[0] >= 'a' && word[0] <= 'z'); // Note 1
                assert(word[1] >= 'a' && word[1] <= 'z');
                frequency[word[0] - 'a'][word[1] - 'a']++;
            }
        }
    } else {
        word_length = 0; // Make ready for a new word
    }
}

for (int L0 = 'a'; L0 <= 'z'; L0++) {
    for (int L1 = 'a'; L1 <= 'z'; L1++) {
        unsigned long long sum = frequency[L0 - 'a'][L1 - 'a'];
        if (sum) {
            printf("%c%c %llu\n", L0, L1, sum);
        }
    }
}
...
```

---

Note 1: in locales that have more than a-z letters, like `á, é, í, ó, ú, ü, ñ`, additional handling is needed. A simple approach is to use a `frequency[256][256]` - somewhat memory hoggish.
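The state machine's behaviour (words anywhere in the line, reset on any non-letter, case folded) can be mimicked for cross-checking with a regular expression over alphabetic runs. This Python sketch is an illustration, not part of the C answer:

```python
import re
from collections import Counter

def digraph_freq(text):
    """First two letters of every alphabetic run of length >= 2, case-folded.

    A run shorter than two letters never reaches the 'second letter arrived'
    state, so it is not counted, matching the C state machine.
    """
    return Counter(m.group(0)[:2].lower()
                   for m in re.finditer(r"[A-Za-z]{2,}", text))

print(digraph_freq("My dog has fleas and my cat has none lucky cat!"))
```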
Such a job is more suitable for languages like Python, Perl, Ruby etc. instead of C. I suggest at least trying C++. If you don't have to write it in C, here is my Python version (since you didn't mention it in the question - are you working on an embedded system or something where C/ASM are the only options?):

```py
FILENAME = '/etc/dictionaries-common/words'

with open(FILENAME) as f:
    flattened = [line[:2] for line in f]

dic = {key: flattened.count(key) for key in sorted(frozenset(flattened))}

for k, v in dic.items():
    print(f'{k} = {v}')
```

Outputs:

```
A' = 1
AM = 2
AO = 2
AW = 2
Aa = 6
Ab = 44
Ac = 37
Ad = 68
Ae = 18
Af = 22
Ag = 36
Ah = 12
Ai = 17
Aj = 2
Ak = 14
Al = 284
Am = 91
An = 223
Ap = 44
Aq = 13
Ar = 185
As = 88
At = 56
Au = 81
Av = 28
Ax = 2
...
```
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a bidimensional array. However I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this but I could only find counting the amount of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking how I could do this and my brain keeps blocking; this is what I came up with but I really don't know where to head.

```
int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all ascii lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
The idea is to have a two-dimensional array, each dimension holding one of the first two characters of each line. The clever bit is that in C, even a string whose length as reported by `strlen()` is 1 has two `char`s - the character and the trailing 0 at the end - so you don't need to special-case inputs like `"a"`. Its frequency is tracked in `counts['a'][0]`.

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Reads input on stdin, outputs to stdout. Using a multibyte
 * character encoding will likely cause unusual output; don't do
 * that. But it will work with encodings other than ASCII. Also handles
 * mixed-case input. */
int main(void)
{
    int *counts[UCHAR_MAX + 1] = { NULL };
    char *line = NULL;
    size_t bufsize = 0;
    ssize_t len;

    // Populate the frequency counts
    while ((len = getline(&line, &bufsize, stdin)) > 0) {
        if (line[len - 1] == '\n') {
            // Get rid of newline
            line[len - 1] = 0;
        }
        if (line[0] == 0) {
            // Skip empty lines
            continue;
        }
        unsigned fc = (unsigned char)line[0];
        unsigned sc = (unsigned char)line[1];
        if (!counts[fc]) {
            // Allocate the second dimension if needed
            counts[fc] = calloc(UCHAR_MAX + 1, sizeof(int));
        }
        counts[fc][sc] += 1;
    }

    // Print out the frequency table.
    for (int fc = 1; fc <= UCHAR_MAX; fc += 1) {
        if (!counts[fc]) {
            // Skip unused first characters
            continue;
        }
        if (counts[fc][0]) {
            // Single-character line count
            printf("%c\t%d\n", fc, counts[fc][0]);
        }
        for (int sc = 1; sc <= UCHAR_MAX; sc += 1) {
            if (counts[fc][sc]) {
                printf("%c%c\t%d\n", fc, sc, counts[fc][sc]);
            }
        }
    }
    return 0;
}
```

Example:

```sh
$ perl -Ci -ne 'print if /^[[:ascii:]]+$/ && /^[[:lower:]]+$/' /usr/share/dict/american-english-large | ./freqs
a       1
aa      6
ab      483
ac      651
ad      497
ae      112
af      198
ag      235
ah      7
ai      161
```

etc.
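The trailing-NUL trick (one-letter lines landing in `counts[c][0]`) can be modelled in Python by using `None` as the "second character" of a one-letter line. A small sketch for illustration:

```python
from collections import Counter

def prefix_counts(lines):
    """Count (first, second) character pairs per line; one-character lines
    are tracked under (first, None), like counts[c][0] in the C code."""
    c = Counter()
    for line in lines:
        word = line.rstrip("\n")
        if not word:
            continue  # skip empty lines
        c[(word[0], word[1] if len(word) > 1 else None)] += 1
    return c

print(prefix_counts(["a\n", "accent\n", "ab\n"]))
```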
Such job is more suitable for languages like Python, Perl, Ruby etc. instead of C. I suggest at least trying C++. If you don't have to write it in C, here is my Python version: (since you didn't mention it in the question - are you working on an embedded system or something where C/ASM are the only options?) ```py FILENAME = '/etc/dictionaries-common/words' with open(FILENAME) as f: flattened = [ line[:2] for line in f ] dic = { key: flattened.count(key) for key in sorted(frozenset(flattened)) } for k, v in dic.items(): print(f'{k} = {v}') ``` Outputs: ``` A' = 1 AM = 2 AO = 2 AW = 2 Aa = 6 Ab = 44 Ac = 37 Ad = 68 Ae = 18 Af = 22 Ag = 36 Ah = 12 Ai = 17 Aj = 2 Ak = 14 Al = 284 Am = 91 An = 223 Ap = 44 Aq = 13 Ar = 185 As = 88 At = 56 Au = 81 Av = 28 Ax = 2 ... ... ```
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a bidimensional array. However I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this but I could only find counting the amount of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking how I could do this and my brain keeps blocking; this is what I came up with but I really don't know where to head.

```
int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all ascii lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
There is no need to read the entire dictionary into memory, or even to buffer lines. The dictionary consists of words, one per line. This means it has this structure:

```
"aardvark\nabacus\n"
```

The first two characters of the file are the first digraph. The other interesting digraphs are all characters which immediately follow a newline. This can be read by a state machine, which we can code into a loop like this. Suppose `f` is the `FILE *` handle to the stream reading from the dictionary file:

```
for (;;) {
    /* Read two characters from the dictionary file. */
    int ch0 = getc(f);
    int ch1 = getc(f);

    /* Is ch0 a newline? That means we read an empty line, and one
       character after that. So, let us move that character into ch0,
       and read another ch1. Keep doing this until ch0 is not a newline,
       and bail at EOF. */
    while (ch0 == '\n' && ch1 != EOF) {
        ch0 = ch1;
        ch1 = getc(f);
    }

    /* After the above, if we have EOF, we are done: bail the loop */
    if (ch0 == EOF || ch1 == EOF)
        break;

    /* We know that ch0 isn't a newline. But ch1 could be a newline;
       i.e. we found a one-letter-long dictionary entry. We don't process
       those, only two or more letters. */
    if (ch1 != '\n') {
        /* Here we put the code which looks up the ch0-ch1 pair in our
           frequency table and increments the count. */
    }

    /* Now drop characters until the end of the line. If ch1 is a newline,
       we are already there. If not, let's just use ch1 for reading more
       characters until we get a newline. */
    while (ch1 != '\n' && ch1 != EOF)
        ch1 = getc(f);

    /* Watch out for EOF in the middle of a line that isn't
       newline-terminated. */
    if (ch1 == EOF)
        break;
}
```

I would do this with a state machine:

```
enum { begin, have_ch0, scan_eol } state = begin;
int ch0, ch1;

for (;;) {
    int c = getc(f);
    if (c == EOF)
        break;
    switch (state) {
    case begin:
        /* stay in begin state if newline seen */
        if (c != '\n') {
            /* otherwise accumulate ch0, and switch to have_ch0 state */
            ch0 = c;
            state = have_ch0;
        }
        break;
    case have_ch0:
        if (c == '\n') {
            /* newline in have_ch0 state: back to begin */
            state = begin;
        } else {
            /* we got a second character! */
            ch1 = c;
            /* code for processing ch0 and ch1 goes here! */
            state = scan_eol; /* switch to scanning for EOL. */
        }
        break;
    case scan_eol:
        if (c == '\n') {
            /* We got the newline we are looking for; go to begin state. */
            state = begin;
        }
        break;
    }
}
```

Now we have a tidy loop around a single call to `getc`. `EOF` is checked in one place where we bail out of the loop. The state machine recognizes the situation when we have the first two characters of a line which is at least two characters long; there is a single place in the code where to put the logic for dealing with the two characters. We are not allocating any buffers; we are not `malloc`-ing lines, so there is nothing to free. There is no limit on the dictionary size we can scan (we just have to watch for overflowing frequency counters).
Such job is more suitable for languages like Python, Perl, Ruby etc. instead of C. I suggest at least trying C++. If you don't have to write it in C, here is my Python version: (since you didn't mention it in the question - are you working on an embedded system or something where C/ASM are the only options?) ```py FILENAME = '/etc/dictionaries-common/words' with open(FILENAME) as f: flattened = [ line[:2] for line in f ] dic = { key: flattened.count(key) for key in sorted(frozenset(flattened)) } for k, v in dic.items(): print(f'{k} = {v}') ``` Outputs: ``` A' = 1 AM = 2 AO = 2 AW = 2 Aa = 6 Ab = 44 Ac = 37 Ad = 68 Ae = 18 Af = 22 Ag = 36 Ah = 12 Ai = 17 Aj = 2 Ak = 14 Al = 284 Am = 91 An = 223 Ap = 44 Aq = 13 Ar = 185 As = 88 At = 56 Au = 81 Av = 28 Ax = 2 ... ... ```
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a bidimensional array. However I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this but I could only find counting the amount of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking how I could do this and my brain keeps blocking; this is what I came up with but I really don't know where to head.

```
int main(void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all ascii lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
You are started in the right direction. You do need a 2D array 27 x 27 for a single case (e.g. lowercase or uppercase), not including digits. To handle digits, just add another 11 x 11 array and map 2-digit frequencies there. The reason you can't use a flat 1D array and map to it without serious indexing gymnastics is that the ASCII sum of `"ab"` and `"ba"` would be the same. The 2D array solves that problem allowing the map of the 1st character ASCII value to the first index, and the map of the ASCII of the 2nd character to the 2nd index or after in that row. An easy way to think of it is to just take a lowercase example. Let's look at the word `"accent"`. You have your 2D array: ```none +---+---+---+---+---+---+ | a | a | b | c | d | e | ... +---+---+---+---+---+---+ | b | a | b | c | d | e | ... +---+---+---+---+---+---+ | c | a | b | c | d | e | ... +---+---+---+---+---+---+ ... ``` The first column tracks the first letter and then the remaining columns (the next `'a' - 'z'` characters) track the 2nd character that follows the first character. (you can do this will an array of struct holding the 1st char and a 26 char array as well -- up to you) This way, you remove ambiguity of which combination `"ab"` or `"ba"`. Now note -- you do not actually need a 27 x 27 arrays with the 1st column repeated. Recall, by mapping the ASCII value to the first index, it designates the first character associated with the row on its own, e.g. `row[0][..]` indicates the first character was `'a'`. So a 26 x 26 array is fine (and the same for digits). So you simply need: ```none +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ ... ``` So the remainder of the approach is simple. Open the file, read the word into a buffer, validate there is a 1st character (e.g. 
not the nul-character), then validate the 2nd character (`continue` to get the next word if either validation fails). Convert both to lowercase (or add the additional arrays if tracking both cases -- that gets ugly). Now just map the ASCII value for each character to an index in the array, e.g. ```c int ltrfreq[ALPHABET][ALPHABET] = {{0}}; ... while (fgets (buf, SZBUF, fp)) { /* read each line into buf */ int ch1 = *buf, ch2; /* initialize ch1 with 1st char */ if (!ch1 || !isalpha(ch1)) /* validate 1st char or get next word */ continue; ch2 = buf[1]; /* assign 2nd char */ if (!ch1 || !isalpha(ch2)) /* validate 2nd char or get next word */ continue; ch1 = tolower (ch1); /* convert to lower to eliminate case */ ch2 = tolower (ch2); ltrfreq[ch1-'a'][ch2-'a']++; /* map ASCII to index, increment */ } ``` With our example word `"accent"`, that would increment the array element `[0][2]`, so that corresponds to row `0` and column `2` for `"ac"` in: ```none +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ ... ^ [0][2] ``` Where you increment the value at that index. So `ltrfreq[0][2]++` now holds the value `1` for the combination `"ac"` having been seen once. When encountered again, the element would be incremented to `2` and so on... Since the value is *incremented* it is imperative the array be initialized all zero when declared. When you output the results, you just have to remember to subtract `1` from the `j` index when mapping from index back to ASCII, e.g. ```c for (int i = 0; i < ALPHABET; i++) /* loop over all 1st char index */ for (int j = 0; j < ALPHABET; j++) /* loop over all 2nd char index */ if (ltrfreq[i][j]) /* map i, j back to ASCII, output freq */ printf ("%c%c = %d\n", i + 'a', j + 'a', ltrfreq[i][j]); ``` That's it. 
Putting it all together in an example that takes the filename to read as the first argument to the program (or reads from `stdin` if no argument is given), you would have:

```c
#include <stdio.h>
#include <ctype.h>

#define ALPHABET 26
#define SZBUF 1024

int main (int argc, char **argv) {

    char buf[SZBUF] = "";
    int ltrfreq[ALPHABET][ALPHABET] = {{0}};
    /* use filename provided as 1st argument (stdin by default) */
    FILE *fp = argc > 1 ? fopen (argv[1], "r") : stdin;

    if (!fp) {  /* validate file open for reading */
        perror ("file open failed");
        return 1;
    }

    while (fgets (buf, SZBUF, fp)) {    /* read each line into buf */
        int ch1 = *buf, ch2;            /* initialize ch1 with 1st char */
        if (!ch1 || !isalpha(ch1))      /* validate 1st char or get next word */
            continue;
        ch2 = buf[1];                   /* assign 2nd char */
        if (!ch2 || !isalpha(ch2))      /* validate 2nd char or get next word */
            continue;
        ch1 = tolower (ch1);            /* convert to lower to eliminate case */
        ch2 = tolower (ch2);
        ltrfreq[ch1-'a'][ch2-'a']++;    /* map ASCII to index, increment */
    }

    if (fp != stdin)    /* close file if not stdin */
        fclose (fp);

    for (int i = 0; i < ALPHABET; i++)          /* loop over all 1st char index */
        for (int j = 0; j < ALPHABET; j++)      /* loop over all 2nd char index */
            if (ltrfreq[i][j])                  /* map i, j back to ASCII, output freq */
                printf ("%c%c = %d\n", i + 'a', j + 'a', ltrfreq[i][j]);
}
```

**Example Input Dictionary**

In the file `dat/ltrfreq2.txt`:

```none
$ cat dat/ltrfreq2.txt
My
dog
has
fleas
and
my
cat
has
none
lucky
cat!
```

**Example Use/Output**

```none
$ ./bin/ltrfreq2 dat/ltrfreq2.txt
an = 1
ca = 2
do = 1
fl = 1
ha = 2
lu = 1
my = 2
no = 1
```

Where both `"cat"` words accurately account for `ca = 2`, both `"has"` for `ha = 2`, and `"My"` and `"my"` for `my = 2`. The rest are just the 2-character prefixes for words that appear once in the dictionary.
Or with the entire 307993-word dictionary that comes with SuSE, timed to show the efficiency of the approach (all within 15 ms):

```none
$ time ./bin/ltrfreq2 /var/lib/dict/words
aa = 40
ab = 990
ac = 1391
ad = 1032
ae = 338
af = 411
ag = 608
ah = 68
ai = 369
aj = 18
ak = 70
al = 2029
...
zn = 2
zo = 434
zr = 2
zs = 2
zu = 57
zw = 25
zy = 135
zz = 1

real    0m0.015s
user    0m0.015s
sys     0m0.001s
```

A bit about the array type. Since you have 143K words, that rules out a `short` or `unsigned short` counter: a 16-bit `unsigned short` tops out at 65535, so a pathological dictionary where all 143K words begin with the same two letters (say, all `"aardvark"`) would overflow it. The `int` type is more than capable of handling any count this dictionary can produce.

Look things over and let me know if this is what you need; if not, let me know where I misunderstood. Also, let me know if you have further questions.
Such a job is more suitable for languages like Python, Perl, Ruby, etc. instead of C. I suggest at least trying C++. If you don't have to write it in C, here is my Python version (since you didn't mention it in the question -- are you working on an embedded system or something where C/ASM are the only options?):

```py
FILENAME = '/etc/dictionaries-common/words'

with open(FILENAME) as f:
    # rstrip the newline so one-letter words don't carry '\n' into the key
    flattened = [line.rstrip('\n')[:2] for line in f]

dic = {key: flattened.count(key) for key in sorted(frozenset(flattened))}

for k, v in dic.items():
    print(f'{k} = {v}')
```

Outputs:

```
A' = 1
AM = 2
AO = 2
AW = 2
Aa = 6
Ab = 44
Ac = 37
Ad = 68
Ae = 18
Af = 22
Ag = 36
Ah = 12
Ai = 17
Aj = 2
Ak = 14
Al = 284
Am = 91
An = 223
Ap = 44
Aq = 13
Ar = 185
As = 88
At = 56
Au = 81
Av = 28
Ax = 2
...
...
```
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e.: `aa* = 14, ab* = 534, ac = 714` ... `za = 65,` ... `zz = 0`) and put it in a two-dimensional array. However, I have no idea how to even go about iterating them without switches or a bunch of if-elses. I tried looking on Google for a solution to this, but I could only find counting the number of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking about how I could do this and my brain keeps blocking; this is what I came up with, but I really don't know where to head:

```
int main (void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    char *arr[143091];

    FILE *fp = fopen("large", “r”);
    if (*fp == NULL)
    {
        return 1;
    }

    int i = 0;
    while ((read = getline(&line, &len, fp)) != -1)
    {
        arr[i] = line;
        i++;
    }

    char c1 = 'a';
    char c2 = 'a';
    i = 0;
    int j = 0;
    while (c1 <= 'z')
    {
        while (arr[k][0] == c1)
        {
            while (arr[k][1] == c2)
            {
            }
            c2++;
        }
        c1++;
    }

    fclose(fp);
    if (line)
        free(line);
    return 0;
}
```

Am I being an idiot or am I just missing something really basic? How can I go about this problem?

Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`). There are no accented Latin characters and they are all ASCII lowercase.
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
The code assumes that the input has one word per line without leading spaces and will count all words that start with two ASCII letters from `'a'`..`'z'`. As the statement in the question is not fully clear, I further assume that the character encoding is ASCII or at least ASCII-compatible. (The question states: "there are no accentuated latin characters and they are all accii lowercase")

If you want to include words that consist of only one letter or words that contain `'`, the calculation of the index values from the characters would be a bit more complicated. In this case I would add a function to calculate the index from the character value. Also for non-ASCII letters the simple calculation of the array index would not work.

The program reads the input line by line without storing all lines, checks the input as defined above and converts the first two characters from range `'a'`..`'z'` to index values in range `0`..`'z'-'a'` to count the occurrence in a two-dimensional array.

```
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    char *line = NULL;
    size_t len = 0;
    ssize_t read;
    /* Counter array, initialized with 0. The highest possible index will
     * be 'z'-'a', so the size in each dimension is 1 more */
    unsigned long count['z'-'a'+1]['z'-'a'+1] = {0};

    FILE *fp = fopen("large", "r");
    if (fp == NULL)
    {
        return 1;
    }

    while ((read = getline(&line, &len, fp)) != -1)
    {
        /* ignore short input */
        if(read >= 2)
        {
            /* ignore other characters */
            if((line[0] >= 'a') && (line[0] <= 'z') &&
               (line[1] >= 'a') && (line[1] <= 'z'))
            {
                /* convert first 2 characters to array index range and count */
                count[line[0]-'a'][line[1]-'a']++;
            }
        }
    }

    fclose(fp);
    if (line)
        free(line);

    /* example output */
    for(int i = 'a'-'a'; i <= 'z'-'a'; i++)
    {
        for(int j = 'a'-'a'; j <= 'z'-'a'; j++)
        {
            /* only print combinations that actually occurred */
            if(count[i][j] > 0)
            {
                printf("%c%c %lu\n", i+'a', j+'a', count[i][j]);
            }
        }
    }

    return 0;
}
```

The example input

```none
foo
a
foobar
bar
baz
fish
ford
```

results in

```
ba 2
fi 1
fo 3
```
> How to count the frequency of the two first letters in a word from a dictionary?

Use a simple [state machine](https://en.wikipedia.org/wiki/Finite-state_machine#Example:_coin-operated_turnstile) to read one character at a time, detect when the character is one of the first 2 letters of a word, then increment a 26x26 table. Words do not need to be on separate lines. Any word length is allowed.

```
unsigned long long frequency[26][26] = { 0 }; // Set all to 0
FILE *fp = fopen("large", "r");
...
int ch;
// Below 2 objects are the state machine
int word[2];
int word_length = 0;

while ((ch = fgetc(fp)) != EOF) {
  if (isalpha(ch)) {
    if (word_length < 2) {
      word[word_length++] = tolower(ch);
      if (word_length == 2) { // 2nd letter just arrived
        assert(word[0] >= 'a' && word[0] <= 'z'); // Note 1
        assert(word[1] >= 'a' && word[1] <= 'z');
        frequency[word[0] - 'a'][word[1] - 'a']++;
      }
    }
  } else {
    word_length = 0; // Make ready for a new word
  }
}

for (int L0 = 'a'; L0 <= 'z'; L0++) {
  for (int L1 = 'a'; L1 <= 'z'; L1++) {
    unsigned long long sum = frequency[L0 - 'a'][L1 - 'a'];
    if (sum) {
      printf("%c%c %llu\n", L0, L1, sum);
    }
  }
}
...
```

---

Note 1: in locales that have more than a-z letters, like `á, é, í, ó, ú, ü, ñ`, additional handling is needed. A simple approach is to use a `frequency[256][256]` -- somewhat memory hoggish.
The idea is to have a two-dimensional array, each dimension holding one of the first two characters of each line. The clever bit is that in C, even a string whose length as reported by `strlen()` is 1 still has two `char`'s - the character and the trailing 0 at the end - so you don't need to special-case words like `"a"`. Its frequency is tracked in `counts['a'][0]`.

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Reads input on stdin, outputs to stdout. Using a multibyte
 * character encoding will likely cause unusual output; don't do
 * that. But it will work with encodings other than ASCII. Also handles
 * mixed-case input. */

int main(void) {
  int *counts[UCHAR_MAX + 1] = { NULL };
  char *line = NULL;
  size_t bufsize = 0;
  ssize_t len;

  // Populate the frequency counts
  while ((len = getline(&line, &bufsize, stdin)) > 0) {
    if (line[len - 1] == '\n') {
      // Get rid of newline
      line[len - 1] = 0;
    }
    if (line[0] == 0) {
      // Skip empty lines
      continue;
    }
    // Go through unsigned char so high-bit characters don't index negatively
    unsigned fc = (unsigned char)line[0];
    unsigned sc = (unsigned char)line[1];
    if (!counts[fc]) {
      // Allocate the second dimension if needed
      counts[fc] = calloc(UCHAR_MAX + 1, sizeof(int));
    }
    counts[fc][sc] += 1;
  }

  // Print out the frequency table.
  for (int fc = 1; fc <= UCHAR_MAX; fc += 1) {
    if (!counts[fc]) {
      // Skip unused first characters
      continue;
    }
    if (counts[fc][0]) {
      // Single-character line count
      printf("%c\t%d\n", fc, counts[fc][0]);
    }
    for (int sc = 1; sc <= UCHAR_MAX; sc += 1) {
      if (counts[fc][sc]) {
        printf("%c%c\t%d\n", fc, sc, counts[fc][sc]);
      }
    }
  }

  return 0;
}
```

Example:

```sh
$ perl -Ci -ne 'print if /^[[:ascii:]]+$/ && /^[[:lower:]]+$/' /usr/share/dict/american-english-large | ./freqs
a	1
aa	6
ab	483
ac	651
ad	497
ae	112
af	198
ag	235
ah	7
ai	161
etc.
```