Q: How many of you do 3-tier design? 3-Tier design has been my standard design philosophy for years for database driven applications, and it has never failed me. For those who practice it, describe your layers.
I've found that many people muddle up the business tier and the data access tier, making it more like a 2.5-Tier design.
I prefer to move the data tier almost entirely into the database using stored procedures, and just have a very lightweight data tier in code that wraps sproc calls into business objects.
How do you approach it?
EDIT: If all you are going to do is define what 3-tier is, don't waste your time replying. I am looking for how specific people implemented it. Did you use stored procedures or an ORM? How did you handle circular dependencies between the DAL and the BLL? There's a lot of depth to this topic besides saying
*
*UI
*Business
*Data
A: I've been doing primarily web apps for a while now and have been following 3-Tier as well:
UI: Pure ASPX pages. It is actually kind of hard to push your business layer down from here at times, because doing a quick calculation or something seems so easy to do here. However, I've gotten disciplined enough to make sure the code-behind pages are doing nothing but showing/hiding panels, updating user input, etc.
DAL: All data access layer stuff. I have really enjoyed using the XSD/DataTable/TableAdapter classes that are available. I also use stored procedure based CRUD methods, so hooking up the adapters to the stored procs is easy.
BLL: The business layer tends to be the lightest of the three layers in most of my apps here, since they are primarily CRUD type apps with some reporting built in.
A: 3-tier:
*
*Database back end - functions as a data store; we also enforce dependencies in the database
*C# business layer - deals with taking user requests submitted via HTTP (received by an ASPX page), gathering the correct response based on the state of the database, and returning it to the client via XML (although I would recommend JSON)
*JavaScript front end - deals with rendering the XML in a user-friendly fashion
A: I practice 3-tier design much the same way you do in that I use stored procedures to handle most, if not all of my communication with the database. I approach the design of my classes so that each one has a specific purpose in order to reduce complexity and to allow for greater flexibility when it comes to change.
One of the biggest problems I come across in 3-tier design is where to put input validation. Oftentimes I find myself duplicating validation in both the UI and business layer to benefit the user with quick validation checking and to ensure that the data going in and coming out of the data layer is valid. How do you handle validation?
A: More of a side note: never forget that n-tier layering is a logical layering, not a physical separation of processes. That is, there should be no need to have the business logic running in a different process (or on a different box) than the presentational code. The important thing is keeping the code clean.
Physically separating presentational code and business logic has been advertised for some time, e.g., by using web services to connect to a backend. There are cases where this is a good idea, but it's not necessary in any way, and it will significantly complicate your architecture, deployment, and design, and cost you performance.
A: n-tier design
I think that layering works quite well. Take a look at the layers in the OSI model. I've used the three tiers that you describe, and that approach really helped. The abstraction of Model-View-Controller is often helpful in a large desktop application. You can keep splitting things down into smaller and smaller, more manageable pieces. There is a point of diminishing returns. And there are occasions when we want to remove the abstraction layers and perhaps talk directly to the hardware -- it depends on the goals of your application.
A: I have found that the 2.5-Tier design works best for the new web applications I have created.
I typically start off with 2 class libraries and 1 web application.
*
*Company.Data (Class library)
*Company.Web (Class library) (Contains PageBase, Custom Controls, Helper functions, etc)
*Company.Web.WebApplication (Web Application)
For applications that I have created, I only use stored procedures to access the data.
In fact, I have created CodeSmith templates to generate all of the stored procedures, data and business classes.
Company.Data
This assembly consists mainly of the entity data classes and collections.
For example, I typically have a table called Settings in my web applications.
A class called Setting and SettingCollection will be generated.
So if I need to load a specific setting, I can do this..
Dim setting As New Setting(1) 'pass in the id of the specific setting
setting.Value = "False" 'change the value
setting.Save() ' Call the save method to persist changes back to the database
or to create a new setting, I just do not pass a value in the constructor
Dim setting As New Setting()
setting.Name = "SmtpServer"
setting.Value = "localhost"
setting.Save()
My namespaces in the Company.Data assembly typically look like this..
Company.Data.Entities
Company.Data.Collections
Company.Data.BusinessObjects
(This namespace is used to create custom methods to access data.)
I also generate custom methods based on primary keys, foreign keys, and unique indexes.
For example, the name column in the settings table has a unique index.
A shared method called GetSettingByName will be generated automatically and this will return a setting object.
This method would be in the Company.Data.BusinessObjects namespace
For the entity and business object classes, two files are generated: one that is regenerated each time, and one that is editable and only generated the first time. I use partial classes to allow me to add custom code without it being overwritten.
For me, this methodology has worked really well. CodeSmith generation saves me countless hours of coding. I can add 5 new columns to a table and regenerate all of the new code in seconds. Stored procedures will be dropped and recreated.
The 2.5-tier design works well because my application is only going to use one database, and that is SQL Server. The need to use Access, Oracle, or MySQL in the future won't happen.
A: We once approached it using the following:
- UI Layer (where all the UI is)
- Business layer (where all the business logic is)
- Data layer (where all the DB access is)
A: We use about a 6 tier design.
*
*Browser-side Javascript and what-not. This is pure visual sugar with little business value or processing. Any input validations here are redundant checks -- they can be bypassed, so we don't trust them.
*Server-side HTML presentation. This is partially business-rule driven. But there's no processing in the template language we use.
*Server-side view functions, business logic, "control". This is where navigation, validation and the "higher-level" application-oriented processing occurs. This is where state change occurs -- things are computed, updated, deleted, etc. This is the processing. This is where authentication and authorization are enforced.
*Model definitions (using an ORM layer). This is the object model. It maps to a relational model. It includes all of the model-level processing as object methods. This is where some calculations are done; filtering, counting, and ordering are defined here. This is our useful view of the data.
*Access layer (some kind of database connectivity). Depends on the database product. It's managed by the ORM layer, so we don't do any coding, just configuring.
*Persistent storage in the DB (no stored procedures, no triggers).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I configure Tomcat to always direct to index.jsp after login? Currently Tomcat's login support redirects users back to where they initially were when the application figured out they weren't logged in.
For this particular application I need to force them to always go back to index.jsp.
I'm pretty sure this is a simple configuration option in the WAR's web.xml, but I haven't found the answer in google.
A: A better solution would probably be to use a servlet filter.
You could then check for j_username / j_password, and a successful login and redirect them where you wanted them to go.
A: It's something you can't configure in web.xml, as it is not part of the standard. For Tomcat (tested on version 6.0.14) you can force users back to index.jsp by adding the following code at the top of your login.jsp. It redirects every request that does not have a parameter named 'login' in the URL to the /index.jsp?login page. Because the redirect does have the 'login' parameter, the user will be presented the login page.
It's not a secure solution. If someone requests a page and adds the login parameter, he will not be redirected. So:
/showPerson?id=1234 will redirect to /index.jsp?login
/showPerson?id=1234&login will NOT redirect to /index.jsp?login
The code that goes on top of your login.jsp:
<%
if (request.getParameter("login") == null) {
response.sendRedirect(request.getContextPath() + "/index.jsp?login");
return;
}
%>
Instead of using the 'login' parameter you probably could use a cookie. You can make it more secure by creating a random value for the login parameter (login=randomvalue) and store the value in the session object for comparison.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to interpret code metrics (calculated by SourceMonitor) After reading the answers to the question "Calculate Code Metrics" I installed the tool SourceMonitor and calculated some metrics.
But I have no idea how to interpret them.
What's a "good" value for the metric
*
*"Percent Branch Statements"
*"Methods per Class"
*"Average Statements per Method"
*"Maximum Method or Function
Complexity"
I found no hints in the documentation, can anybody help me?
A: As a general rule of thumb, a cyclomatic complexity of 10 or less is where you want to be. A CC from 11 to 20 is about as high as you want to get in most cases: once you get above 20, you're more likely to encounter problems finding and fixing defects, and once you get above 50, you're usually looking at a method that needs to be refactored now.
Keep in mind that these are guidelines. It is possible to have a method with a CC of 25 that is as simplified as you can get it; you'll just want to be more careful with these methods when you need to update them.
A: SourceMonitor is an awesome tool.
"Methods Per Class" is useful to those who wish to ensure their classes follow good OO principles (too many methods indicates that a class could be taking on more than it should).
"Average Statements per Method" is useful for a general feel of how big each method is. More useful to me is the info on the methods with too many statements (double click on the module for finer grain detail).
Function Complexity is useful for ascertaining how nasty the code is. Truly I use this info more than anything else. This is info on how complicated the nastiest function in a module is (at least according to cyclomatic complexity). If you double click on the module / file you can find out which particular method is so bad.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: How to receive UDP Multicast in VxWorks 5.5 I have been unable to receive UDP multicast under VxWorks 5.5. I've joined the multicast group:
setsockopt(soc, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *) &ipMreq, sizeof (ipMreq));
Similar code on an adjacent Windows machine does receive multicast.
I am able to send multicast from VxWorks; ifShow() indicates the interface is multicast capable; MCAST_ROUTING is enabled in the kernel config, but I am still unable to receive multicast.
Edit: I needed to set a bit in the RealTek Ethernet driver's RX configuration register to enable multicast to be passed on to the application layer.
#define RTL_RXCG_AM 0x04 /* Accept Multicast */
A: Are you checking the return value on the Join setsockopt() call to be sure it's actually succeeding? I had a specific problem with VxWorks 5.5 in the past where my multicast joins were failing when they shouldn't be. I believe we had to get new libraries from WindRiver to fix the issue.
Edit: There is no specific trick that I'm aware of to getting multicast to work with VxWorks. It should use the standard BSD sockets operations. If the interface can receive unicast traffic properly, and a network analyzer (Wireshark, for instance) shows that the multicast JOINs are being sent and the inbound multicast packets are correctly formed, I would suspect a driver issue. WindRiver support has been very helpful for us in the past with these sorts of problems; I don't know if you have a support contract with them to get that level of assistance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Hiding the header on an Infragistics Winform UltraCombo I've gone through just about every property I can think of, but haven't found a simple way to hide the header on a winform UltraCombo control from Infragistics.
Headers make sense when I have multiple visible columns and whatnot, but sometimes it would be nice to hide it.
To give a simple example, let's say I have a combobox that displays whether something is active or not. There's a label next to it that says "Active". The combobox has one visible column with two rows -- "Yes" and "No".
When the user opens the drop down, they see "Active" or whatever the header caption for the column is and then the choices. I'd like it to just show "Yes" and "No" only.
It's a minor aesthetic issue that probably just bothers me and isn't even noticed by the users, but I'd still really like to know if there's a way around this default behavior.
RESOLUTION: As @Craig suggested, ColHeadersVisible is what I needed. The location of the property was slightly different, but it was easy enough to track down. Once I set DisplayLayout.Bands(0).ColHeadersVisible=False, the dropdown displayed the way I wanted it to.
A: <DropDownLayout ColHeadersVisible="No"></DropDownLayout> works for us. This is on Infragistics NetAdvantage for .NET 2008.
A: My understanding of the Infragistics WinForms suite is that the UltraCombo is designed for multi-column (or embedded UltraGrid) use.
What I did to get around this was to replace those UltraCombos with UltraComboEditor controls. These are IG's "enhanced" versions of the standard .NET combobox.
That may or may not be appropriate in your case, depending on your usage scenario. However, it looks like you have a resolution using the original UltraCombo, which will definitely be lower-impact on your existing code.
(And thanks to you and Craig both: I actually overlooked that property when I went through this pain the first time; I'm making a mental note of where it is for the future!)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How select the rest of the word in incremental search in Intellij IDEA? When in incremental search mode in Intellij IDEA, is there a way to select the rest of the word. For example, suppose I want to find the word “handleReservationGranted”. I type Ctrl-f to enter incremental search mode, and start typing the letters “han”. Now suppose I have found the beginning of “handleReservationGranted”. In my search box I have “han”, but I would now like to be able to select the rest of the word, so that the search box contains “handleReservationGranted” instead of “han”.
In Xemacs, I can type Ctrl-s, type “han”, and then type Ctrl-w. Now my search term is “handleReservationGranted”, and not “han”. So now if I press Ctrl-s, I find the next occurrence of “handleReservationGranted”.
Is there a similar feature in Intellij IDEA? The best I can do now is either to keep typing in the rest of the letters (dleReservationGranted), or exit incremental search, select the word with Ctrl-W, then enter search again with Ctrl-f.
I am using Intellij IDEA 7.0.3.
A: Yes, you can use autocomplete during an incremental search.
After you type "han", press CTRL-SPACE (autocomplete) and it will give you a list of potential matches in the file. Just pick "handleReservationsGranted" from the list and that will become your search term.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I browse my Tomcat localhost from another computer on the network? I'm an IIS guy and know its as simple as just using the http://[computername]/path to webapp.. however, I can't seem to figure out how to make this possible for a JSP application I'm writing that runs under Tomcat. Is there a configuration setting I need to set somewhere?
A: If your IP were 192.122.11.22, you would browse to http://192.122.11.22:8080/projectname (if that doesn't work, check your firewall).
A: You need to use Tomcat's port, which is 8080 by default. So to access localhost on machine A from machine B, use:
http://A:8080/YourProject
And remember: unlike IIS, it is case-sensitive.
A: You can use your IP address instead of localhost:
http://10.4.0.1:8080/YourProject
A: Step 1: Add a firewall exception to inbound connections to the port that you use for your hosts (the Host tags in CATALINA_HOME(Tomcat dir)/conf/server.xml).
Step 2: At least in Windows 10, allow Tomcat to communicate through the firewall. One way would be Control Panel -> System and Security -> Windows Firewall -> "Allow an app or feature through Windows Firewall" -> "Change settings" -> Enable Private and Public for "Commons Daemon Service Runner" (if not present: "Allow another app..." -> Choose tomcat#.exe in the Tomcat bin directory, where # is the Tomcat version number)
Step 3: Add a firewall exception for javaw. In Windows 10, that is the steps above up to "Change settings", followed by: Find Java(TM) Platform SE binary with a path to javaw (add as above if not present) -> Enable Private and Public for it.
Let me know if that does not work. :)
A: Have you created an exception in your firewall?
Assuming that Tomcat is running on port 8080 and this is a Windows XP machine, the firewall will block that port (not the case on Windows Server 2003).
The firewall can be configured by choosing the Windows Firewall from the Control Panel, then clicking Exceptions -> Add Port, entering the name and number (Tomcat, 8080), and leaving the transport protocol as TCP.
A: Tomcat uses port 8080 by default so you have to provide the port number in the URL to see anything. If it is running http://yourcomputer:8080/app should do the trick.
A: As well as blocking the port (see AirSource Ltd's answer), your firewall may have restrictions on the Tomcat service. For example, Mcafee Firewall restricts Tomcat to "outgoing only".
If using Mcafee, under Change Settings > Firewall, expand Internet Connections for Programs and find Commons Daemon Service Runner (aka tomcat*.exe). Edit it and change Access from Outgoing Only to Incoming and outgoing - Use designated ports (recommended).
A: You must write your machine's IP instead of using "localhost"
A: This works fine: simply browse to http://your_ipaddress:8080/projectname (8080 being the Tomcat server port), and make sure you are connected to the same network and the project is deployed on Tomcat.
A: `Step 1: Go to directory where tomcat is installed and look for server.xml file.Usually the path is
C:\Program Files\Apache Software Foundation\Tomcat 9.0\conf\server.xml
Open it with editor and look for connector block.It will be like
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"/>
Add address="0.0.0.0" to it
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
address="0.0.0.0" />
Save the file.
Step 2: Go to the firewall and network protection settings of the PC and turn off the public network firewall.
Step 3: Start the Tomcat server. Then use the local IP address of the PC and port 8080 (used by the Tomcat server by default, unless you have changed it) from another device to connect to the Tomcat server on the PC.
e.g. - http://192.168.8.137:8080/ (replace 192.168.8.137 with your PC's local IP address)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Current standard compliance level of IronPython & IronRuby Does anyone have some numbers on this? I am just looking for a percentage, a summary will be better.
Standards compliance: How does the implementation stack up to the standard language specification?
For those still unclear: I place emphasis on current. The IronPython link provided below has info that was last edited more than 2 years back.
A: The following sites usually have updates as to how their 'compliance' is progressing:
IronPython -
http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython
http://www.codeplex.com/IronPython/Wiki/View.aspx?title=Regression%20Tests&referringTitle=More%20Information
IronRuby -
http://www.ironruby.net/
In fact from the IronRuby site -
"We showed IronRuby dispatching some static and dynamic Rails requests at RailsConf this year. We are running the RubySpecs to measure our conformity with Ruby and we're passing the core specs at a 71% rate (12026 / 16793 expectations for RubySpec core)."
A: IronRuby has a site that shows the updated numbers: http://www.ironruby.info
A: We don't actively track these kinds of numbers, but you could download them and run them against the respective test suites for the languages if you wanted to boil it down to a single numeric value.
A: Found an update.
They claim 85% now. Not bad :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Delphi: OpenFileDialog crashes with URL Giving a URL to the TOpenFileDialog, the Execute method throws an exception:
OpenDialog1.Filename := 'http://www.osfi-bsif.gc.ca/app/DocRepository/1/eng/issues/terrorism/indstld_e.xls';
bResult := OpenDialog1.Execute;
But you are allowed to open files from a URL.
Delphi 5
A: TOpenDialog is just a wrapper for the Windows function GetOpenFileName in comdlg32.dll.
function TOpenDialog.Execute(ParentWnd: HWND): Boolean;
begin
Result := DoExecute(@GetOpenFileName, ParentWnd);
end;
Unfortunately the documentation for this function isn't that great. But I'm pretty sure it doesn't support http.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Error handling in PHP I'm familiar with some of the basics, but what I would like to know more about is when and why error handling (including throwing exceptions) should be used in PHP, especially on a live site or web app. Is it something that can be overused and if so, what does overuse look like? Are there cases where it shouldn't be used? Also, what are some of the common security concerns in regard to error handling?
A: One thing to add to what was said already is that it's paramount that you record any errors in your web application into a log. This way, as Jeff "Coding Horror" Atwood suggests, you'll know when your users are experiencing trouble with your app (instead of "asking them what's wrong").
To do this, I recommend the following type of infrastructure:
*
*Create a "crash" table in your database and a set of wrapper classes for reporting errors. I'd recommend setting categories for the crashes ("blocking", "security", "PHP error/warning" (vs exception), etc).
*In all of your error handling code, make sure to record the error. Doing this consistently depends on how well you built the API (above step) - it should be trivial to record crashes if done right.
Extra credit: sometimes, your crashes will be database-level crashes: i.e. DB server down, etc. If that's the case, your error logging infrastructure (above) will fail (you can't log the crash to the DB because the log tries to write to the DB). In that case, I would write failover logic in your Crash wrapper class to either
*
*send an email to the admin, AND/OR
*record the details of the crash to a plain text file
All of this sounds like overkill, but believe me, it makes a difference in whether your application is perceived as "stable" or "flaky". That difference comes from the fact that all apps start out flaky/crashing all the time, but the developers who know about all the issues with their app have a chance to actually fix them.
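A minimal sketch of that failover logic in PHP (assuming PDO is available; the DSN, credentials, table name, email address, and log path are all hypothetical placeholders):
<?php
// Record a crash in the DB; fall back to a file + email if the DB is down.
class CrashLog {
    public static function record($category, $message) {
        try {
            $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
            $stmt = $pdo->prepare(
                'INSERT INTO crash (category, message, created_at) VALUES (?, ?, NOW())');
            $stmt->execute(array($category, $message));
        } catch (Exception $e) {
            // DB-level crash: the log itself failed, so don't lose the report.
            error_log(date('c') . " [$category] $message\n", 3, '/var/log/app-crash.log');
            mail('admin@example.com', 'Crash logger failover', $message);
        }
    }
}
?>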
A: The best practice IMHO is to use the following approach:
1. create an error/exception handler
2. start it upon app startup
3. handle all your errors from inside there
<?php
class Debug {
    public static function setAsErrorHandler() {
        set_error_handler(array(__CLASS__, '__error_handler'));
    }
    public static function __error_handler($errcode, $errmsg, $errfile, $errline) {
        if (IN_DEV) {          // placeholder: your "in development" check
            // print on screen
        } else if (IN_PROD) {  // placeholder: your "in production" check
            // log and mail
        }
    }
}
Debug::setAsErrorHandler();
?>
A: Roughly speaking, errors are a legacy in PHP, while exceptions are the modern way to treat errors. The simplest thing then, is to set up an error-handler, that throws an exception. That way all errors are converted to exceptions, and then you can simply deal with one error-handling scheme. The following code will convert errors to exceptions for you:
function exceptions_error_handler($severity, $message, $filename, $lineno) {
if (error_reporting() == 0) {
return;
}
if (error_reporting() & $severity) {
throw new ErrorException($message, 0, $severity, $filename, $lineno);
}
}
set_error_handler('exceptions_error_handler');
error_reporting(E_ALL ^ E_STRICT);
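For example, once this handler is installed, an ordinary warning becomes catchable:
try {
    $fh = fopen('/no/such/file', 'r'); // raises E_WARNING, rethrown as ErrorException
} catch (ErrorException $e) {
    echo 'Caught: ' . $e->getMessage();
}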
There are a few cases though, where code is specifically designed to work with errors. For example, the schemaValidate method of DomDocument raises warnings, when validating a document. If you convert errors to exceptions, it will stop validating after the first failure. Some times this is what you want, but when validating a document, you might actually want all failures. In this case, you can temporarily install an error-handler, that collects the errors. Here's a small snippet, I've used for that purpose:
class errorhandler_LoggingCaller {
protected $errors = array();
function call($callback, $arguments = array()) {
set_error_handler(array($this, "onError"));
$orig_error_reporting = error_reporting(E_ALL);
try {
$result = call_user_func_array($callback, $arguments);
} catch (Exception $ex) {
restore_error_handler();
error_reporting($orig_error_reporting);
throw $ex;
}
restore_error_handler();
error_reporting($orig_error_reporting);
return $result;
}
function onError($severity, $message, $file = null, $line = null) {
$this->errors[] = $message;
}
function getErrors() {
return $this->errors;
}
function hasErrors() {
return count($this->errors) > 0;
}
}
And a use case:
$doc = new DomDocument();
$doc->load($xml_filename);
$validation = new errorhandler_LoggingCaller();
$validation->call(
array($doc, 'schemaValidate'),
array($xsd_filename));
if ($validation->hasErrors()) {
var_dump($validation->getErrors());
}
A: Unhandled errors stop the script; that alone is a pretty good reason to handle them.
Generally you can use a try-catch block to deal with errors:
try
{
// Code that may error
}
catch (Exception $e)
{
// Do other stuff if there's an error
}
If you want to stop the error or warning message appearing on the page then you can prefix the call with an @ sign like so.
@mysql_query($query);
With queries however it's generally a good idea to do something like this so you have a better idea of what's going on.
@mysql_query($query)
or die('Invalid query: ' . mysql_error() . '<br />Line: ' . __LINE__ . '<br />File: ' . __FILE__ . '<br /><br />');
A: You should use error handling in cases where you don't have explicit control over the data your script is working on. I tend to use it frequently, for example, in places like form validation. Knowing how to spot error-prone places in code takes some practice: some common ones are after function calls that return a value, or when dealing with results from a database query. You should never assume the return from a function will be what you're expecting, and you should be sure to code in anticipation. You don't have to use try/catch blocks, though they are useful. A lot of times you can get by with a simple if/else check.
Error handling goes hand in hand with secure coding practices, as there are a lot of "errors" that don't cause your script to simply crash. While not strictly about error handling per se, addedbytes has a good 4-article series on some of the basics of secure PHP programming, which you can find HERE. There are a lot of other questions here on Stack Overflow on topics such as mysql_real_escape_string and regular expressions, which can be very powerful in confirming the content of user-entered data.
A: Rather than outputting the mysql_error, you might store it in a log. That way you can track the error (and you don't depend on users to report it), and you can go in and remove the problem.
The best error handling is the kind that is transparent to the user, let your code sort out the problem, no need to involve that user fellow.
A: Besides handling errors right away in your code, you can also make use of
http://us.php.net/manual/en/function.set-exception-handler.php
and
http://us.php.net/manual/en/function.set-error-handler.php
I find setting your own exception handler particularly useful. When an exception occurs you can perform different operations depending on what type of exception it is.
e.g., when a mysql_connect call returns FALSE, I throw a new DBConnectionException(mysql_error()) and handle it a "special" way: log the error, the DB connection info (host, username, password), etc., and maybe even email the dev team notifying them that something may be really wrong with the DB.
I use this to complement standard error handling. I wouldn't recommend overusing this approach.
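A rough sketch of that pattern (DBConnectionException is the custom class described above; the credentials and the handling inside the handler are hypothetical):
<?php
class DBConnectionException extends Exception {}

function my_exception_handler($e) {
    if ($e instanceof DBConnectionException) {
        // log the error and DB connection info, email the dev team, etc.
    } else {
        // generic last-resort handling
    }
}
set_exception_handler('my_exception_handler');

$link = mysql_connect('localhost', 'user', 'pass');
if ($link === false) {
    throw new DBConnectionException(mysql_error()); // lands in my_exception_handler
}
?>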
A: Error suppression with @ is very slow.
A: You can also use Google Forms to catch and analyse exceptions, without having to maintain a database or publicly accessible server. There is a tutorial here that explains the process.
A: public $error=array();
public function Errors($Err)
{
------ how to use -------
$Err = array("func" => "constr", "ref" => "constrac","context" =>
"2222222ت","state" => 3,);
$ResultErr=$this->Errors($Err);
$context=(array_filter(explode(',', $ResultErr['context'])));
$func=(array_filter(explode(',', $ResultErr['func'])));
$ref=(array_filter(explode(',', $ResultErr['ref'])));
$state=($ResultErr['state']);
$errors=array_merge(["context"=>$context], ["func"=>$func],
["ref"=>$ref], ["state"=>$state]);
var_dump($errors);
---------------begine ------------------------
global $error;
if (!is_array($Err)) {
return $error;
} else {
if (!(isset($error['state']))) {
$error['state']="";
}
if (!(isset($error['func']))) {
$error['func']="";
}
if (!(isset($error['ref']))) {
$error['ref']="";
}
if (!(isset($error['context']))) {
$error['context']="";
}
$error['state']=$error['state'];
$error['func']=$error['func'].= $Err["func"].",";
$error['ref']=$error['ref'].= $Err["ref"].",";
$error['context']=$error['context'].= $Err["context"].",";
$error["state"]=$Err["state"];
return $error;
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
}
|
Q: Eclipse folder Referenced Libraries disappears In Java projects in Eclipse version 3.4.1 sometimes the folder "Referenced Libraries" disappears from the "Project Explorer" view. All third party jars are shown directly in the root of the project folder. The project compiles and runs fine. It seems to be a GUI problem.
How can I get this folder back?
A: First, bring up the "Package Explorer" view (instead of the "Project Explorer" view).
Then, if the referenced .jar files still are visible in the root of the project, click on the little "down arrow" icon in the top-right corner of the Package Explorer view. In the context menu that appears, one of the items on the menu is "Show 'Referenced Libraries' Node." Click on that menu item.
A: I've been struggling with this thing for a while in Eclipse Juno because it's a little bit different.
*
*click the little down arrow as before
*click Customize view
*check Libraries from external
A: For those hitting the issue today (Kepler): it is possible that you are in the "Java EE" perspective, which by default has the Project Explorer. Simply switch to the "Java" perspective and it will replace Project Explorer with Package Explorer, which will have the missing Referenced Libraries folder.
A: Make sure you're actually in Project Explorer and not in some other view like Navigator, like my friend was...
A: Use the Package Explorer view instead of the Project Explorer view.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: How to retrieve time zone choices on Windows Mobile for PDA Is there a way to retrieve the time zone choices in Windows Mobile in order to display them in a GUI? It would be much better not to have to show every 15-minute offset just to be able to display GMT+5:45 for Kathmandu.
A: As per MSDN: City List and Time Zone Data Files,
You can add or remove content to these files. You can redistribute these files as is or repackage this data by including it in source code, a database, or another format. You are permitted to use excerpts of this data rather than the entire data set.
Note: Microsoft bears no responsibility for the content or usage of these files. Certain locales have specific legal requirements with regard to providing data of this type; ensure you are in compliance with such regulations.
If you use the city data provided or if you use any type of geographical information from any source, you are encouraged to provide a way for users to edit, add, and delete information.
A: Windows Mobile stores timezone info in a file called Timezones.csv
A: Timezones.csv & CityList.csv files are provided in the \Resource folder of the Windows Mobile SDK!
You could modify the list and decide the ones which you want to show to the user.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I determine program interrupt in Windows Mobile I have a game application I have written for Windows Mobile, and I want to have a timer associated with the puzzle. If the program loses focus for any reason (a call comes in, the user switches programs, the user hits the Windows button), then I want a pop-up dialog box to cover the puzzle and the timer to stop. When the user closes the pop-up dialog, the timer can start up again.
Does anyone know how to do this?
Thanks
A: Take a look at the article over at OpenNETCF's Community site on determining when a Form or Process changes.
A: A quick way would be to use PInvoke to call GetForegroundWindow() and GetWindowText() whenever your timer ticks (once a second?).
GetForegroundWindow() returns a windows handle which you can use to call GetWindowText(). If the text of the foreground window matches your form's Text property (its caption), you know your app has the focus. You can then show or hide your puzzle in each timer tick.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Benefits of multiple memcached instances Is there any difference between having four .5GB memcached servers running or one 2GB instance?
Does running multiple instances offer any benefits?
A: If one instance fails, you still get the advantages of using the cache. This is especially true if you are using consistent hashing, which will bring the same data to the same instance, rather than spreading new reads/writes among the machines that are still up.
You may also elect to run servers on 32 bit operating systems, that cannot address more than around 3GB of memory.
Check the FAQ: http://www.socialtext.net/memcached/ and http://www.danga.com/memcached/
A: High availability is nice, and memcached will automatically distribute your cache across the 4 servers. If one of those servers dies for some reason, you can handle that error by either just continuing as if the cache was blank, redirecting to a different server, or any sort of custom error handling you want. If your 1x 2gb server dies, then your options are pretty limited.
The important thing to remember is that you do not have 4 copies of your cache, it is 1 cache, split amongst the 4 servers.
The only downside is that it's easier to run out of memory with 4x .5GB than with 1x 2GB.
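To make the "one cache, split amongst the servers" point concrete, here is a sketch using the classic pecl/memcache PHP client (hostnames and key are hypothetical):
<?php
$mc = new Memcache();
// Four .5GB instances form one logical cache.
$mc->addServer('cache1.example.com', 11211);
$mc->addServer('cache2.example.com', 11211);
$mc->addServer('cache3.example.com', 11211);
$mc->addServer('cache4.example.com', 11211);

// Each key is hashed to one of the four servers automatically.
$mc->set('user:42', $userData, 0, 300); // cache for five minutes
$value = $mc->get('user:42');
?>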
A: I would also add that, theoretically, in the case of several machines it might gain you some performance: if you have a lot of frontends doing a lot of heavy reads, it's much better to split them across different machines, since the network capacity and processing power of one machine can become an upper bound for you.
This advantage is highly dependent on memcache utilization, however (sometimes it might be much faster to fetch everything from one machine).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How Do You Write Code That Is Safe for UTF-8? We have a set of applications that were developed for the ASCII character set. Now, we're trying to install it in Iceland, and are running into problems where the Icelandic characters are getting screwed up.
We are working through our issues, but I was wondering: Is there a good "guide" out there for writing C++ code that is designed for 8-bit characters and which will work properly when UTF-8 data is given to it?
I can't expect everyone to read the whole Unicode standard, but if there is something more digestible available, I'd like to share it with the team so we don't run into these issues again.
Re-writing all the applications to use wchar_t or some other string representation is not feasible at this time. I'll also note that these applications communicate over networks with servers and devices that use 8-bit characters, so even if we did Unicode internally, we'd still have issues with translation at the boundaries. For the most part, these applications just pass data around; they don't "process" the text in any way other than copying it from place to place.
The operating systems used are Windows and Linux. We use std::string and plain-old C strings. (And don't ask me to defend any of the design decisions. I'm just trying to help fix the mess.)
Here is a list of what has been suggested:
*
*The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
*UTF-8 and Unicode FAQ for Unix/Linux
*The Unicode HOWTO
A: This looks like a comprehensive quick guide:
http://www.cl.cam.ac.uk/~mgk25/unicode.html
A: Just be 8-bit clean, for the most part. However, you will have to be aware that any non-ASCII character is split across multiple bytes, so you must take account of this when line-breaking or truncating text for display.
UTF-8 has the advantage that you can always tell where you are in a multi-byte character: if bit 7 is set and bit 6 reset (byte is 0x80-0xBF) this is a trailing byte, while if bits 7 and 6 are set and 5 is reset (0xC0-0xDF) it is a lead byte with one trailing byte; if 7, 6 and 5 are set and 4 is reset (0xE0-0xEF) it is a lead byte with two trailing bytes, and so on. The number of consecutive bits set at the most-significant bit is the total number of bytes making up the character. That is:
110x xxxx = two-byte character
1110 xxxx = three-byte character
1111 0xxx = four-byte character
etc
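As a quick illustration of those bit tests (PHP used here purely for illustration; the same masks apply in C++):
<?php
// Classify one byte of a UTF-8 stream by its high bits.
function utf8_byte_type($b) {
    if (($b & 0x80) == 0x00) return 'single byte (ASCII)';      // 0xxx xxxx
    if (($b & 0xC0) == 0x80) return 'trailing byte';            // 10xx xxxx
    if (($b & 0xE0) == 0xC0) return 'lead byte, 2-byte char';   // 110x xxxx
    if (($b & 0xF0) == 0xE0) return 'lead byte, 3-byte char';   // 1110 xxxx
    if (($b & 0xF8) == 0xF0) return 'lead byte, 4-byte char';   // 1111 0xxx
    return 'invalid in UTF-8';
}

$s = "\xC3\xBE"; // Icelandic thorn (U+00FE) encoded in UTF-8
for ($i = 0; $i < strlen($s); $i++) {
    printf("0x%02X: %s\n", ord($s[$i]), utf8_byte_type(ord($s[$i])));
}
?>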
The Icelandic alphabet is all contained in ISO 8859-1 and hence Windows-1252. If this is a console-mode application, be aware that the console uses IBM codepages, so (depending on the system locale) it might display in 437, 850, or 861. Windows has no native display support for UTF-8; you must transform to UTF-16 and use Unicode APIs.
Calling SetConsoleCP and SetConsoleOutputCP, specifying codepage 1252, will help with your problem, if it is a console-mode application. Unfortunately the console font selected has to be a font that supports the codepage, and I can't see a way to set the font. The standard bitmap fonts only support the system default OEM codepage.
A: Be aware that full Unicode doesn't fit in 16-bit characters; so either use 32-bit chars, or a variable-width encoding (UTF-8 is the most popular).
A: UTF-8 was designed exactly with your problems in mind. One thing I would be careful about is that ASCII is really a 7-bit encoding, so if any part of your infrastructure is using the 8th bit for other purposes, that may be tricky.
A: You might want to check out ICU. They might have functions available that would make working with UTF-8 strings easier.
A: Icelandic uses ISO Latin 1, so eight bits should be enough. We need more details to figure out what's happening.
A: Icelandic, like French, German, and most other languages of Western Europe, can be supported using an 8-bit character set (CP1252 on Windows, ISO 8859-1 aka Latin1 on *x). This was the standard approach before Unicode was invented, and is still quite common. As you say you have a constraint that you can't rewrite your app to use wchar, and you don't need to.
You shouldn't be surprised that UTF-8 is causing problems; UTF-8 encodes the non-ASCII characters (e.g. the accented Latin characters, thorn, eth, etc) as TWO BYTES each.
The only general advice that can be given is quite simple (in theory):
(1) decide what character set you are going to support (Unicode, Latin1, CP1252, ...) in your system
(2) if you are being supplied data encoded in some other fashion (e.g. UTF-8) then transcode it to your standard (e.g. CP1252) at the system border
(3) if you need to supply data encoded in some other fashion, ...
A: You may want to use wide characters (wchar_t instead of char, and std::wstring instead of std::string). This doesn't automatically solve 100% of your problems, but it is a good first step.
Also use string functions which are Unicode-aware (refer to the documentation). If something manipulates wide chars or strings, it generally is aware that they are wide.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How to listen to elements method calls in Javascript I would like to listen to method calls.
For example, when an element is appended by anything to the document, I would like to be passed that element to act on it, like:
//somewhere
aParent.appendChild(aChild);
//when the former, a function I defined as listener is called with the aChild as argument
Does anybody know how to do that?
A: Don't know if that's possible with the core functions, but you could always create your own functions for the actions you want to monitor:
function AppendChild(oParent, oChild) {
// your stuff on oParent
// append oChild
oParent.appendChild(oChild)
}
or, maybe, modify the actual appendChild(), but that would be tricky...
A: I know that the Dojo Toolkit provides this functionality. You can find some explanation here - jump to the section that says "Connecting Functions to One Another". If you are interested, you can look at the source of dojo.connect to see what's going on.
A: In Firefox you could rewrite Node.prototype.appendChild to call your own function (saving the original appendChild first, then calling it within) to perform additional actions.
Node.prototype._appendChild = Node.prototype.appendChild;
Node.prototype.appendChild = function (el) {
    // ...perform your additional actions here...
    return this._appendChild(el);
};
Internet Explorer doesn't implement these interfaces (but there might be a workaround floating around, maybe using .htc..). IE8 will have Element instead of Node.
A: What you're describing is Aspect Oriented programming. In AOP parlance, your "join point" would be element.appendChild(), and your "advice" is the function that you would like to execute (before and/or after) every matching join point executes.
I've been keenly interested in the possibilities for JavaScript AOP for some time, and I just found this Aspect Oriented Programming and javascript, which looks promising without needing to adopt a big old API. -- I'm really glad that you brought this up. I have uses for this, like temporary logging, timing code segments, etc.
A: Multiple browsers handle the DOM in different ways, and unfortunately the way IE handles things is not as powerful as the way Mozilla does. The easiest way to do it is by using a custom function like the one that Filini mentioned.
However you could also wrap the different browsers DOM objects in a facade and use it for all element access. This is a bit more work but you would then be able to handle all the browsers in the same way and be able to add/remove listeners with ease. I'm not sure if it would be anymore useful than the custom functions, but worth a look at.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Infopath doesn't render background colors/pictures w/ Outlook Task/Sharepoint I am using InfoPath forms to collect information to trigger my Windows Workflow sitting on SharePoint 2007. The InfoPath forms have logos and branding which only show up on the SharePoint portal; the InfoPath forms which are emailed when tasks are created during the workflow look different, in that the background colors (InfoPath theme) and JPEGs are not there... web form compatibility was checked for in all appropriate places... any ideas?
A: Figured out the issue here... InfoPath seems to cache the form on the client (it seems to check for the form's unique URN in the cache), which means that if you click on the email's “Edit this task…” link, the new form is not downloaded; instead, the InfoPath form from the cache is displayed.
I am looking at a few ways to fix this. In the meanwhile, to be able to see the JPEGs and background colors on the InfoPath form, run the following in your cmd window (sorry, it's “hacky” for now):
"C:\Program Files\Microsoft Office\Office12\INFOPATH.EXE" /cache clearall
Will write a blog post when I figure out a nice way to fix this
Jacob
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Returning query results in predefined order Is it possible to do a SELECT statement with a predetermined order, ie. selecting IDs 7,2,5,9 and 8 and returning them in that order, based on nothing more than the ID field?
Both these statements return them in the same order:
SELECT id FROM table WHERE id in (7,2,5,9,8)
SELECT id FROM table WHERE id in (8,2,5,9,7)
A: I didn't think this was possible, but found a blog entry here that seems to do the type of thing you're after:
SELECT id FROM table WHERE id in (7,2,5,9,8)
ORDER BY FIND_IN_SET(id,"7,2,5,9,8");
will give different results to
SELECT id FROM table WHERE id in (7,2,5,9,8)
ORDER BY FIND_IN_SET(id,"8,2,5,9,7");
FIND_IN_SET returns the position of id in the second argument given to it, so for the first case above, an id of 7 is at position 1 in the set, 2 at 2, and so on - MySQL internally works out something like
id | FIND_IN_SET
---|-----------
7 | 1
2 | 2
5 | 3
then orders by the results of FIND_IN_SET.
A: Your best bet is:
ORDER BY FIELD(ID,7,2,5,9,8)
...but it's still ugly.
A: Could you include a case expression that maps your IDs 7,2,5,... to the ordinals 1,2,3,... and then order by that expression?
A: All ordering is done by the ORDER BY keyword; however, you can only sort ascending or descending. If you are using a language such as PHP, you can then sort them accordingly using some code (see the sketch below), but I do not believe it is possible with MySQL alone.
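For instance, a sketch of reordering in PHP after fetching (using the era's mysql_* API; $result is assumed to hold the result of the query):
<?php
$order = array(7, 2, 5, 9, 8); // the predefined order

// Index the fetched rows by id.
$byId = array();
while ($row = mysql_fetch_assoc($result)) {
    $byId[$row['id']] = $row;
}

// Emit the rows in the predefined order.
$sorted = array();
foreach ($order as $id) {
    if (isset($byId[$id])) {
        $sorted[] = $byId[$id];
    }
}
?>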
A: This works in Oracle. Can you do something similar in MySql?
SELECT ID_FIELD
FROM SOME_TABLE
WHERE ID_FIELD IN(11,10,14,12,13)
ORDER BY
CASE WHEN ID_FIELD = 11 THEN 0
WHEN ID_FIELD = 10 THEN 1
WHEN ID_FIELD = 14 THEN 2
WHEN ID_FIELD = 12 THEN 3
WHEN ID_FIELD = 13 THEN 4
END
A: You may need to create a temp table with an autonumber field and insert into it in the desired order. Then sort on the new autonumber field.
A: It's hacky (and probably slow), but you can get the effect with UNION ALL:
SELECT id FROM table WHERE id = 7
UNION ALL SELECT id FROM table WHERE id = 2
UNION ALL SELECT id FROM table WHERE id = 5
UNION ALL SELECT id FROM table WHERE id = 9
UNION ALL SELECT id FROM table WHERE id = 8;
Edit: Other people mentioned the find_in_set function which is documented here.
A: Erm, not really. Closest you can get is probably:
SELECT * FROM table WHERE id IN (3, 2, 1, 4) ORDER BY id=4, id=1, id=2, id=3
But you probably don't want that :)
It's hard to give you any more specific advice without more information about what's in the tables.
A: You get answers fast around here, don't you…
The reason I'm asking this is that it's the only way I can think of to avoid sorting a complex multidimensional array. I'm not saying it would be difficult to sort, but if there were a simpler way to do it with straight sql, then why not.
A: One Oracle solution is:
SELECT id FROM table WHERE id in (7,2,5,9,8)
ORDER BY DECODE(id,7,1,2,2,5,3,9,4,8,5,6);
This assigns an order number to each ID. Works OK for a small set of values.
A: Best I can think of is adding a second column, orderColumn:
7 1
2 2
5 3
9 4
8 5
And then just do a ORDER BY orderColumn
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Run Pylons controller as separate app? I have a Pylons app where I would like to move some of the logic to a separate batch process. I've been running it under the main app for testing, but it is going to be doing a lot of work in the database, and I'd like it to be a separate process that will be running in the background constantly. The main pylons app will submit jobs into the database, and the new process will do the work requested in each job.
How can I launch a controller as a stand alone script?
I currently have:
from warehouse2.controllers import importServer
importServer.runServer(60)
and in the controller file, but not part of the controller class:
def runServer(sleep_secs):
try:
imp = ImportserverController()
while(True):
imp.runImport()
sleepFor(sleep_secs)
except Exception, e:
log.info("Unexpected error: %s" % sys.exc_info()[0])
log.info(e)
But starting ImportServer.py on the command line results in:
2008-09-25 12:31:12.687000 Could not locate a bind configured on mapper Mapper|ImportJob|n_imports, SQL expression or this Session
A: If you want to load parts of a Pylons app, such as the models from outside Pylons, load the Pylons app in the script first:
from paste.deploy import appconfig
from pylons import config
from YOURPROJ.config.environment import load_environment
conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
That will load the Pylons app, which sets up most of the state so that you can proceed to use the SQLAlchemy models and Session to work with the database.
Note that if your code is using the pylons globals such as request/response/etc then that won't work since they require a request to be in progress to exist.
A: I'm redacting my response and upvoting the other answer by Ben Bangert, as it's the correct one. I answered and have since learned the correct way (mentioned below). If you really want to, check out the history of this answer to see the wrong (but working) solution I originally proposed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Where do you do your validation? model, controller or view Where do you put user input validation in a web form application?
*
*View: JavaScript client side
*Controller: Server side language (C#...)
*Model: Database (stored procedures or dependencies)
I think there is validation required at each level:
*
*Did the user input a sane value?
*
*Are dates actual dates, are numbers actually numbers, ...
*Do all of the checks in 1. again, plus checks for malicious attacks (e.g. XSS or SQL injection).
*
*The checks done in 1. are mainly to avoid a server round trip when the user makes a mistake.
*Since they are done on the client side in JavaScript, you can't trust that they were run. Validating these values again will stop some malicious attacks.
*Are dependencies met (i.e., did the user add a comment to a valid question)?
*
*A good interface makes these very hard to violate. If something is caught here, something went very wrong.
[inspired by this response]
A: I check in all tiers, but I'd like to note a validation trick that I use.
I validate in the database layer, proper constraints on your model will provide automatic data integrity validation.
This is an art that seems to be lost on most web programmers.
A: Validation in the model, optionally automated routines in the UI that take their hints from the model and improve the user experience.
By automated routines I mean that there shouldn't be any per-model validation code in the user interface. If you have a library of validation methods, such as RoR's (which has methods like validates_presence_of :username) the controller or view should be able to read these and apply equivalent javascript (or whatever is convenient) methods.
That means you will have to duplicate the complete validation library in the UI, or at least provide a mapping if you use a preexisting one. But once that's done, you won't have to write any validation logic outside the model.
A: Validation can be done at all layers.
Validating the input from a web form (all strings, casting to proper types, etc.) is different from validating the input from a web service, an XML file, etc. Each has its own special cases. You can create a Validator helper class, of course, thus externalising the validation and allowing it to be shared by views; a bare-bones sketch follows.
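Such a helper might look like this (PHP purely for illustration; the method names are hypothetical):
<?php
class Validator {
    private $errors = array();

    public function required($value, $field) {
        if (trim((string) $value) === '') {
            $this->errors[] = "$field is required";
        }
        return $this; // allow chaining
    }

    public function isDate($value, $field) {
        if (strtotime($value) === false) {
            $this->errors[] = "$field is not a valid date";
        }
        return $this;
    }

    public function isValid()   { return count($this->errors) === 0; }
    public function getErrors() { return $this->errors; }
}
?>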
Then you have the DAO layer validation - is there enough data in the model to persist (to meet not null constraints, etc) and so on. You can even have check constraints in the database (is status in ('N', 'A', 'S', 'D') etc).
A: This is interesting. For the longest time I performed all validation in the model, right above what I would consider DAL (data access layer). My models are typically pattern'ed after table data gateway with a DAL providing the abstraction and low level API.
Inside the TDG I would implement the business logic and validations, such as:
*
*Is username empty
*Is username > 30 characters
*If record doesn't exist, return error
As my application grew in complexity I began to realize that much of the validation could be done on the client side, using JavaScript. So I refactored most of the validation logic into JS and cleanuped up my models.
Then I realized that server-side validation (not filtering/escaping -- which I consider different) should probably be done on the server as well, and only on the client side as icing on the cake.
So back the validation logic went, when I realized again that there was probably a distinct difference between INPUT validation/assertion and business rules/logic.
Basically, if it can be done in the client side of the application (using JS), I consider this to be INPUT validation... if it MUST be done by the model (does this record already exist, etc.?), then I would consider that business logic. What's confusing is that they both protect the integrity of the data model.
If you don't validate the length of a username, then what's to stop people from creating a single-character username?
I still have not entirely decided where to put that logic next; I think it really depends on what you favour more: thin controllers, heavy models, or vice versa...
Controllers in my case tend to be far more application-centric, whereas models, if crafted carefully, I can often reuse in "other" projects, not just internally, so I prefer keeping models lightweight and controllers on the heavier side.
What forces drive you in either direction is really a matter of personal opinion, requirements, experience, etc...
Interesting subject :)
A: Validation must be done in the controller - it's the only place which assures safety and response.
Validation should be done in the view - it's the point of contact and will provide the best user experience and save your server extra work.
Validation will be done on the model - but only for a certain core level of checks. Databases should always reflect appropriate constraints, but it's inefficient to let this stand for real validation, nor is it always possible for a database to determine valid input with simple constraints.
A: All validation should happen at least one time, and this should be in the middle tier, whether it be in your value objects (in the DDD sense, not to be confused with DTOs), or through the business object of the entity itself. Client-side validation can occur to enhance the user experience. I tend not to do client-side validation, because I can just expose all of the things that are wrong on the form at once, but that's just my personal preference. Database validation can occur to ensure data integrity in case you screwed up the logic in the middle tier or back-ended something.
A: I only do it in the View and Controller, the database enforces some of that by your data types and whatnot, but I'd rather it not get that far without me catching an error.
You pretty much answered your own question though, the important thing to know is that you can never trust the view, although that's the easiest route to give feedback to the user, so you need to sanitize on at least one more level.
A: Hmmmm, not sure. I would have said the Controller until I read this article re: skinny Controllers, fat Models
http://blog.astrumfutura.com/archives/373-The-M-in-MVC-Why-Models-are-Misunderstood-and-Unappreciated.html
A: Since most validations depend on business rules, I do the validation in the business layer as third-party tool classes. There are other types of validation, such as user input, which need to be handled in the controller, but you can encapsulate those validation rules in third-party classes too. Really, it depends on what you are validating.
Client-side validations are the minor ones, just there for lightweight input validation, but server-side validation is always required. You can never trust user input ;)
.NET has nice controls for building validations, but the business layer always needs a better approach to validating the data, and those controls are not enough for that task.
A: Simple input validation in the view. Full validation in the model. Reason? If you change your view technology, and the validation is in the view/controller, you have to rewrite your validation for the new view. This can introduce bugs. Put it in the model, and this is reused by all views...
But, as I said, simple validation in the view for speed and ease.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How to add currency strings (non-standardized input) together in PHP? I have a form in which people will be entering dollar values.
Possible inputs:
$999,999,999.99
999,999,999.99
999999999
99,999
$99,999
The user can enter a dollar value however they wish. I want to read the inputs as doubles so I can total them.
I tried just typecasting the strings to doubles but that didn't work. Total just equals 50 when it is output:
$string1 = "$50,000";
$string2 = "$50000";
$string3 = "50,000";
$total = (double)$string1 + (double)$string2 + (double)$string3;
echo $total;
A: A regex won't convert your string into a number. I would suggest that you use a regex to validate the field (confirm that it fits one of your allowed formats), and then just loop over the string, discarding all non-digit and non-period characters. If you don't care about validation, you could skip the first step. The second step will still strip it down to digits and periods only.
By the way, you cannot safely use floats when calculating currency values. You will lose precision, and very possibly end up with totals that do not exactly match the inputs.
Update: Here are two functions you could use to verify your input and to convert it into a decimal-point representation.
function validateCurrency($string)
{
return preg_match('/^\$?(\d{1,3})(,\d{3})*(\.\d{2})?$/', $string) ||
preg_match('/^\$?\d+(\.\d{2})?$/', $string);
}
function makeCurrency($string)
{
$newstring = "";
$array = str_split($string);
foreach($array as $char)
{
if (($char >= '0' && $char <= '9') || $char == '.')
{
$newstring .= $char;
}
}
return $newstring;
}
The first function will match the bulk of currency formats you can expect "$99", "99,999.00", etc. It will not match ".00" or "99.", nor will it match most European-style numbers (99.999,00). Use this on your original string to verify that it is a valid currency string.
The second function will just strip out everything except digits and decimal points. Note that by itself it may still return invalid strings (e.g. "", "....", and "abc" come out as "", "....", and ""). Use this to eliminate extraneous commas once the string is validated, or possibly use this by itself if you want to skip validation.
A: You don't ever want to represent monetary values as floats!
For example, take the following (seemingly straight forward) code:
$x = 1.0;
for ($ii=0; $ii < 10; $ii++) {
$x = $x - .1;
}
var_dump($x);
You might assume that it would produce the value zero, but that is not the case. Since $x is a floating point, it actually ends up being a tiny bit more than zero (1.38777878078E-16), which isn't a big deal in itself, but it means that comparing the value with another value isn't guaranteed to be correct. For example $x == 0 would produce false.
A: http://p2p.wrox.com/topic.asp?TOPIC_ID=3099
goes through it step by step
[edit] typical...the site seems to be down now... :(
A: Not a one-liner, but if you strip out the ','s first you can do this in PHP:
if (preg_match('/^\$?(\d+)(?:\.(\d\d))?$/', $input, $m)) {
$value = (int)$m[1] + (isset($m[2]) ? (int)$m[2] / 100 : 0);
}
That allows $9.99 but not $9. or $9.9, and (since the commas are already stripped) fails to complain about misplaced thousands separators (bug or feature?)
There is a potential 'locality' issue here because you are assuming that thousands are done with ',' and cents with '.', but in Europe it is the opposite (e.g. 1.000,99).
A: I recommend not to use a float for storing currency values. You can get rounding errors if the sum gets large. (Ok, if it gets very large.)
Better use an integer variable with a large enough range, and store the input in cents, not dollars.
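A minimal sketch of that conversion (the helper name is invented; it assumes the string has already been reduced to digits and at most one decimal point):
function toCents($clean) {
$parts = explode('.', $clean, 2);
$cents = isset($parts[1]) ? str_pad(substr($parts[1], 0, 2), 2, '0') : '00';
return (int)$parts[0] * 100 + (int)$cents;
}
echo toCents('999999999.99'); // 99999999999
Summing integer cents avoids the float precision problem entirely.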
A: I believe you can accomplish this with printf, which is similar to the C function of the same name. Its parameters can be somewhat esoteric, though. You can also use PHP's number_format function.
A: Assuming that you are getting real money values, you could simply strip characters that are not digits or the decimal point:
(in PHP)
$newnumber = preg_replace('/[^0-9.]/', '', $oldnumber);
Now you can convert using something like
$value = (double)$newnumber;
However, this will not take care of strings such as "5.6.3" and other such non-money strings. Which raises the question, "Do you need to handle badly formatted strings?"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Anyone ever tried to develop in C or C++ for Blackberry platforms? Every indication I have, based on my experience in embedded computing is that doing something like this would require expensive equipment to get access to the platform (ICE debuggers, JTAG probes, I2C programmers, etc, etc), but I've always wondered if some ambitious hacker out there has found a way to load native code on a Blackberry device. Anyone?
Edit: I'm aware of the published SDK and its attendant restrictions. I'm curious if anyone has attempted to get around them, and if so, how far they got.
A: I've seen this question pop up in a number of different forums over time. The original Blackberries were programmable in C++ but I think that RIM ran up against the problems of trying to implement a secure platform in the C/C++ compile to native paradigm.
The devices do have JTAG ports, but unless one could get hands on the RIM code as a place to start the problem is enormous.
I also have to wonder how useful a Blackberry with a replacement FOSS operating system would be, since it would not likely have the protocols to connect to BES or BIS, send PINs, etc. If one was simply looking for the power of the hand held computing platform, I suspect there are many more likely candidates available.
A: No, C++ is no longer a supported RIM development tool, as they phased it out a number of years ago. Client applications can be developed in Java (or one of a few 5GL frameworks), and web + server-side apps can be developed using standard tools.
A: For those looking for updated information, the new PlayBook OS, also known as QNX, also known as BlackBerry 10 (or it will be when the phones running it come out), is in fact C/C++ based, also using QML and a C++ add-on called Cascades.
A: Unfortunately the official SDK website only seems to mention Java. According to wikipedia, different versions of the BlackBerry use different processors. Combined with the fact that RIM uses a proprietary operating system for the devices, it becomes pretty difficult to develop native code without official tools. There is also a partial API-level security restriction which would further prohibit advanced tinkering.
A: Just randomly searching for an answer to this and came across http://supportforums.blackberry.com/t5/Tablet-OS-SDK-for-Adobe-AIR/Native-C-C-SDK/td-p/778009 which mentions that BB intend to release a C/C++ SDK soon, more details will be provided at the 2011 Game Developer Conference.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do I submit multiple models in Struts 2? I have a JSP that allows users to dynamically create additional form fields to create multiple objects. Perhaps I want to allow users to be able to submit as many line items as they want when submitting an invoice form.
How do I create a Struts 2 Action that will be able to take in an ArrayList populated with objects created from those dynamically generated fields.
A: You should read the Tabular input guide.
A: According to the (ever-poor) documentation, which forces you to try to extrapolate the information you want, rather than just telling you authoritatively (and assuming you're really asking about Struts' built-in type conversion), your form fields would need to be named something like...
someList.makeNew(0).someField1
someList.makeNew(0).someField2
...
someList.makeNew(1).someField1
someList.makeNew(1).someField2
...
...and you would then need to set up an ActionClassName-conversion.properties file to let the type converter know how to handle type conversion for fields which begin with someList.
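If memory serves, the entries in that conversion file look something like this (the element class name here is invented):
Element_someList=com.example.LineItem
CreateIfNull_someList=true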
The only time I actually tried this myself, I had trouble getting it working with Lists and ended up having to use Maps.
Here's a useful blog entry about modifying a Map of objects using type conversion - I haven't had much luck finding useful information about the makeNew field name format the documentation mentions, but this might help you get started.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Mobile capability for rendering meta tags in ASP.NET Mobile Does anybody know which mobile capability controls rendering meta tags for each adapter?
I am using Marg.Wurfl to detect the mobile device, and it maps WURFL capabilities to mobile capabilities, but it does not render meta tags. I have found the requiresXhtmlCssSuppression capability in the ASP.NET Mobile Controls XHTML adapter source, but it doesn't work for me.
Thanks in advance.
A: After intensive use of Reflector, I have found that the Wml, Chtml and Html (Mobile) control adapters use the RequiredMetaTagNameValue, RequiresContentTypeMetaTag and PreferredRenderingMime capabilities for rendering meta tags; you can see them in the RenderExtraHeadElements function in *FormAdapter (not in *PageAdapter, WTF).
But the XHTML controls adapter (source code) doesn't render meta tags. I will create one.
Before, I thought the problem was the capabilities mapping, but now I think it is an XhtmlPageAdapter rendering problem.
more information?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What's the best free IDE for learning smalltalk? What do you think is a good IDE for learning SmallTalk? I'll only be using it as a hobby, so it has to be free.
A: Definitely go for Squeak. It's a closed system in terms of the environment, or what you call the IDE, but it's fun to do webapps with - look for Seaside. However I always recommend everyone involved in development to take a look at it, just to understand how development in an image is working - and to experience a live system.
The main problem with Squeak, or maybe Smalltalk in general, is that once you get used to it, it's very hard to go back to the conventional way of programming.
Besides, I heard that you might become a better programmer if you work for some time in Smalltalk. I don't know if that's true, but I certainly like to think so.
A: You should also consider Pharo. Pharo is a fork of Squeak. Their goals are:
*
*a clean and lean open-source Smalltalk platform, derived from Squeak
*the obvious choice for professional Smalltalk development
*an emerging platform to help people invent the future
Whether it is Squeak or Pharo, there is a large, active and supportive community.
A: Squeak is nice and free and very cool
A: I think Squeak is the way to go. It has an entire Smalltalk environment and is constantly updated. It's what I used for learning, and it's actually even a cool app in itself.
A: You can also use Cincom Smalltalk or Dolphin Smalltalk. They both have community editions.
A: If you start with Cincom Smalltalk, there's a ton of learning material available:
-- tutorials
-- daily screencasts
-- videos
-- weekly podcast
You can find the screencasts, videos, and podcasts on iTunes - just search for "Smalltalk" in the podcast section.
A: You won't need a separate IDE because smalltalks usually come with their own IDE, so choosing your smalltalk flavour pretty much determines the IDE for you. Don't let this fact scare you off from taking on smalltalk though!
WRT your original question, I had two wonderful years developing in Dolphin Smalltalk & highly recommend it.
Dolphin Smalltalk is only as free as a beer is though. If you need an opensource smalltalk go with Squeak.
In my opinion Dolphin is the more polished/comfortable/user-friendly one.
A: If you are used to Eclipse or Visual Studio, and are running on Windows - then Dolphin is something that will feel very familiar to you. It looks very nice (no emulated widgets, as its not trying to be cross platform), and it has nice touches like code completion and a graphical window designer (rather like IB on the mac). It also has great refactoring tools and can easily create small .exe file (e.g. 500k including the vm). There is a little screencast of doing TDD in Dolphin
Of course, these things are available in other dialects - in particular Pharo (the Squeak fork mentioned above) looks very promising if you are after an open source product.
A: Squeak is free. Cincom has a non-commercial version of VisualWorks. GemStone/S is free for small installations. GNU Smalltalk is "free" in the GPL sense.
A: Smalltalk/X came up on Reddit the other day. It looked pretty good.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: Log/Graph PHP execution time Are there any tools available to log the page load time for a php site?
Mainly I'm looking for something that lets me see trends in load times over time. I was considering dumping them into a file using error_log(), but I don't know what I could use to parse it and display graphs.
A: You can record the microtime at the start of execution, hold that variable until the end, check the time, subtract them, and there you have your execution time. Output buffering will be required to make this work in most cases, unless it's a situation in which a particular thing always runs last (like footer()).
function microtime_float() {
list($usec, $sec) = explode(" ", microtime());
return ((float)$usec + (float)$sec);
}
//at the start:
$time_start = microtime_float();
//at the end:
$time_end = microtime_float();
$time = round($time_end - $time_start, 4);
echo "Last uncached content render took $time seconds";
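If you log those timings instead of echoing them, trends become easy to graph. One rough approach (the log path and format are just an example) uses error_log's append-to-file mode:
$line = sprintf("%s\t%s\t%.4f\n", date('c'), $_SERVER['REQUEST_URI'], $time);
error_log($line, 3, '/var/log/php_timing.log');
Each row then carries a timestamp, URL and duration, which loads easily into a spreadsheet or gnuplot for trend graphs.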
A: Use the Firebug extension for Firefox, it has a Net panel that shows you load times.
If you want to do load testing, apache comes with a utility called apache bench, try ab --help in a console window near you.
A: See PEAR Benchmark. It allows you to add benchmarks into your code. You can have it dump an HTML table on your pages, or you can loop through the data and write to a log file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Any performance impact in Oracle for using LIKE 'string' vs = 'string'? This
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE '%some_value%';
is slower than this
SELECT * FROM SOME_TABLE WHERE SOME_FIELD = 'some_value';
but what about this?
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE 'some_value';
My testing indicates the second and third examples are exactly the same. If that's true, my question is, why ever use "=" ?
A: Check out the EXPLAIN PLAN for both. They generate the same execution plan, so to the database, they're the same thing.
You would use = to test for equality, not similarity. If you're controlling the comparison value as well, then it doesn't make much of a difference. If that's being submitted by a user, then 'apple' and 'apple%' would give you much different results.
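For example, a quick way to compare them yourself, reusing the queries from the question:
EXPLAIN PLAN FOR SELECT * FROM SOME_TABLE WHERE SOME_FIELD = 'some_value';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EXPLAIN PLAN FOR SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE 'some_value';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Both should show the same access path.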
A:
If that's true, my question is, why ever use "="?
A better question: If that's true, why use "LIKE" to test for equality? You get to save hitting the shift key, and everyone who reads the script gets to be confused.
A: There is a clear difference when you use bind variables, which you should be using in Oracle for anything other than data warehousing or other bulk data operations.
Take the case of:
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE :b1
Oracle cannot know that the value of :b1 is '%some_value%', or 'some_value' etc. until execution time, so it will make an estimation of the cardinality of the result based on heuristics and come up with an appropriate plan that either may or may not be suitable for various values of :b, such as '%A','%', 'A' etc.
Similar issues can apply with an equality predicate but the range of cardinalities that might result is much more easily estimated based on column statistics or the presence of a unique constraint, for example.
So, personally I wouldn't start using LIKE as a replacement for =. The optimizer is pretty easy to fool sometimes.
A: Have you tried it? Testing is the only sure way to know.
As an aside, none of these statements are certain to return the same rows. Try out:
insert into some_table (some_field) values ('some_value');
insert into some_table (some_field) values ('1some_value2');
insert into some_table (some_field) values ('some1value');
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE '%some_value%';
SELECT * FROM SOME_TABLE WHERE SOME_FIELD = 'some_value';
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE 'some_value';
In terms of clarity and to avoid subtle bugs, it's best to never use LIKE unless you need its wildcard functionality. (Obviously, when doing ad-hoc queries, it's probably alright.)
A: LIKE '%WHATEVER%' will have to do a full scan (of the table, or at best of the index), since a leading wildcard defeats an index range scan.
If there is no percent, then it acts like an equals.
If the % is only at the end, then the index can be used for a range scan.
I'm not sure how the optimizer handles bind variables here.
A: LIKE is formally the same as = if the pattern contains no wildcard characters (% or _), so it is not a big surprise to find that it has the same cost.
I find David Aldridge's answer interesting, as your application should be using bind variables. With a LIKE '%foobar' you cannot make use of ordering in the index; if the query is pre-compiled it will lead to more full index or table scans.
Furthermore, I find it dangerous, as it can lead to SQL injection and strange bugs (for example, if there is a user called john, a hacker can create a user named 'joh_' and try to log in).
why take the risk? '=' is clearer and has none of those problems.
A: 1) % and = are intended to be used in different scenarios.
Even though we can search for an exact value with a LIKE clause and get the desired result, we should still use = whenever we have an exact value to search for.
2) Performance of LIKE and = when no % is provided in the search clause:
In all such cases, the query optimizer automatically converts the LIKE clause to =, and this is evident from the query plan. Therefore performance should be exactly the same in this case. Thanks to the query optimizer!
This can be verified by comparing the execution plans of both queries, i.e. the query with a LIKE clause but without % and the query with a LIKE clause and with % (the screenshots of those plans are not reproduced here).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Refactoring global to local. Should they be static or not? I'm refactoring a "spaghetti code" C module to work in a multitasking (RTOS) environment.
Now, there are very long functions and many unnecessary global variables.
When I try to replace global variables that exist only in one function with locals, I run into a dilemma: every such global behaves like a local "static" - i.e. it keeps its value even when you exit and re-enter the function.
For multitasking, "static" local vars are worse than globals: they make the functions non-reentrant.
Is there a way to examine whether a function relies on a variable's value being preserved across calls, without tracing the whole logical flow?
A: Short answer: no, there isn't any way to tell automatically whether the function will behave differently according to whether the declaration of a local variable is static or not. You just have to examine the logic of each function that uses globals in the original code.
However, if replacing a global variable with a static local-scope variable means the function is not re-entrant, then it wasn't re-entrant when it was a global, either. So I don't think that changing a global to a static local-scope variable will make your functions any less re-entrant than they were to start with.
Provided that the global really was used only in that scope (which the compiler/linker should confirm when you remove the global), the behaviour should be close to the same. There may or may not be issues over when things are initialized, I can't remember what the standard says: if static initialization occurs in C the same time it does in C++, when execution first reaches the declaration, then you might have changed a concurrency-safe function into a non-concurrency-safe one.
Working out whether a function is safe for re-entrancy also requires looking at the logic. Unless the standard says otherwise (I haven't checked), a function isn't automatically non-re-entrant just because it declares a static variable. But if it uses either a global or a static in any significant way, you can assume that it's non-re-entrant. If there isn't synchronization then assume it's also non-concurrency-safe.
Finally, good luck. Sounds like this code is a long way from where you want it to be...
A: If your compiler will warn you if a variable is used before initialized, make a suspected variable local without assigning it a value in its declaration.
Any variable that gives a warning cannot be made local without changing other code.
A: Changing global variables to static local variables will help a little, since the scope for modification has been reduced. However the concurrency issue still remains a problem and you have to work around it with locks around access to those static variables.
But what you want to be doing is pushing the definition of the variable into the highest scope it is used as a local, then pass it as an argument to anything that needs it. This obviously requires a lot of work potentially (since it has a cascading effect). You can group similarly needed variables into "context" objects and then pass those around.
See the design pattern Encapsulate Context
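A minimal sketch of that idea in C (all names invented for illustration):
/* Group related state into a context that each task owns. */
typedef struct {
int nextn; /* was a file-scope global */
} TaskContext;

int getn(TaskContext *ctx)
{
return ctx->nextn++; /* re-entrant: state lives in the caller's context */
}
Each task allocates and initializes its own TaskContext, so no locking is needed for this state.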
A: If your global vars are truly used only in one function, you're losing nothing by making them into static locals since the fact that they were global anyway made the function that used them non-re-entrant. You gain a little by limiting the scope of the variable.
You should make that change to all globals that are used in only one function, then examine each static local variable to see if it can be made non-static (automatic).
The rule is: if the variable is used in the function before being set, then leave it static.
An example of a variable that can be made an automatic local - you would put "int nplus4;" inside the function. You don't need to set it to zero, since it's set before use, and the compiler should issue a warning if you actually use it before setting it (a useful check):
int nplus4 = 0; // used only in add5
int add5 (int n) {
nplus4 = n + 4; // set
return nplus4 + 1; // use
}
The nplus4 var is set before being used. The following is an example that should be left static by putting "static int nextn = 0;" inside the function:
int nextn = 0; // used only in getn
int getn (void) {
int n = nextn++; // use, then set
return n;
}
Note that it can get tricky, "nextn++" is not setting, it's using and setting since it's equivalent to "nextn = nextn + 1".
One other thing to watch out for: in an RTOS environment, stack space may be more limited than global memory so be careful moving big globals such as "char buffer[10000]" into the functions.
A: Please give examples of what you call 'global' and 'local' variables
int global_c; // can be used by any other file with 'extern int global_c;'
static int static_c; // cannot be seen or used outside of this file.
int foo(...)
{
int local_c; // cannot be seen or used outside of this function.
}
If you provide some code samples of what you have and what you changed we could better answer the question.
A: If I understand your question correctly, your concern is that global variables retain their value from one function call to the next. Obviously when you move to using a normal local variable that won't be the case. If you want to know whether or not it is safe to change them I don't think you have any option other than reading and understanding the code. Simply doing a full text search for the the name of the variable in question might be instructive.
If you want a quick and dirty solution that isn't completely safe, you can just change it and see what breaks. I recommend making sure you have a version you can roll back to in source control and setting up some unit tests in advance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Delphi Popup Menu Checks I am using a popup menu in Delphi. I want to use it in a "radio group" fashion where if the user selects an item it is checked and the other items are not checked. I tried using the AutoCheck property, but this allows multiple items to be checked. Is there a way to set the popup menu so that only one item can be checked?
A: Zartog is right, but if you want to keep the checkbox, assign this event to every item in the popup menu.
Note that this code is a little hairy looking because it does not depend on knowing the name of your popup menu (hence, looking it up with "GetParentComponent").
procedure TForm2.OnPopupItemClick(Sender: TObject);
var
i : integer;
begin
with (Sender as TMenuItem) do begin
//if they just checked something...
if Checked then begin
//go through the list and *un* check everything *else*
for i := 0 to (GetParentComponent as TPopupMenu).Items.Count - 1 do begin
if i <> MenuIndex then begin //don't uncheck the one they just clicked!
(GetParentComponent as TPopupMenu).Items[i].Checked := False;
end; //if not the one they just clicked
end; //for each item in the popup
end; //if we checked something
end; //with
end;
You can assign the event at runtime to every popup box on your form like this (if you want to do that):
procedure TForm2.FormCreate(Sender: TObject);
var
i,j: integer;
begin
inherited;
//look for any popup menus, and assign our custom checkbox handler to them
if Sender is TForm then begin
with (Sender as TForm) do begin
for i := 0 to ComponentCount - 1 do begin
if (Components[i] is TPopupMenu) then begin
for j := 0 to (Components[i] as TPopupMenu).Items.Count - 1 do begin
(Components[i] as TPopupMenu).Items[j].OnClick := OnPopupItemClick;
end; //for every item in the popup list we found
end; //if we found a popup list
end; //for every component on the form
end; //with the form
end; //if we are looking at a form
end;
In response to a comment below this answer: If you want to require at least one item to be checked, then use this instead of the first code block. You may want to set a default checked item in the OnCreate event.
procedure TForm2.OnPopupItemClick(Sender: TObject);
var
i : integer;
begin
with (Sender as TMenuItem) do begin
//go through the list and make sure *only* the clicked item is checked
for i := 0 to (GetParentComponent as TPopupMenu).Items.Count - 1 do begin
(GetParentComponent as TPopupMenu).Items[i].Checked := (i = MenuIndex);
end; //for each item in the popup
end; //with
end;
A: To enlarge on Zartog's post: Popup menus in Delphi (from at least D6) have a GroupIndex property which allows you to have multiple sets of radio items within a menu. Set GroupIndex to 1 for the first group, 2 for a second, etc.
So:
Set AutoCheck = True
Set RadioItem = True
Set GroupIndex if you need more than one group of radio items
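In code, that might look like this (menu name and single group assumed):
for i := 0 to PopupMenu1.Items.Count - 1 do
begin
PopupMenu1.Items[i].AutoCheck := True;
PopupMenu1.Items[i].RadioItem := True;
PopupMenu1.Items[i].GroupIndex := 1;
end;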
A: To treat the popup (or any other) menu items like radio group items, set the 'RadioItem' property to true for each item you want to have in the radio group.
Instead of showing a checkmark, it will show a bullet by the selected item, but it will work the way you want, and the visual cue will actually match a Windows standard.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Where to put controller parent class in CakePHP? I have two controllers which share most of their code (but must be, nonetheless, different controllers). The obvious solution (to me, at least) is to create a class, and make the two controllers inherit from it. The thing is... where to put it? Now I have it in app_controller.php, but it's kind of messy there.
A: In cake, components are used to store logic that can be used by multiple controllers. The directory is /app/controllers/components. For instance, if you had some sharable utility logic, you would have an object called UtilComponent and a file in /app/controllers/components called UtilComponent.php.
<?php
class UtilComponent extends Object {
function yourMethod($param) {
// logic here.......
return $param;
}
}
?>
Then, in your controller classes, you would add:
var $components = array('Util');
Then you call the methods like:
$this->Util->yourMethod($yourparam);
More Info:
Documentation
A: Btw, if the reason for "they must be separate controllers" is the URLs you require, remember you can use routing:
Router::connect('/posts', array('controller' => 'posts', 'action' => 'index'));
Router::connect('/comments', array('controller' => 'posts', 'action' => 'list_comments'));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can I pass an argument to a C# plug-in being loaded through Assembly.CreateInstance? What I have now (which successfully loads the plug-in) is this:
Assembly myDLL = Assembly.LoadFrom("my.dll");
IMyClass myPluginObject = myDLL.CreateInstance("MyCorp.IMyClass") as IMyClass;
This only works for a class that has a constructor with no arguments. How do I pass in an argument to a constructor?
A: call
public object CreateInstance(string typeName, bool ignoreCase, BindingFlags bindingAttr, Binder binder, object[] args, CultureInfo culture, object[] activationAttributes)
instead.
MSDN Docs
EDIT: If you are going to vote this down, please give insight into why this approach is wrong/or not the best way.
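For instance, a rough sketch against the snippet from the question (the constructor arguments and concrete class name are invented; you'll need using System.Reflection; for BindingFlags):
Assembly myDLL = Assembly.LoadFrom("my.dll");
IMyClass myPluginObject = myDLL.CreateInstance(
"MyCorp.MyClass", false,
BindingFlags.Instance | BindingFlags.Public | BindingFlags.CreateInstance,
null, new object[] { 42, "Adams" }, null, null) as IMyClass;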
A: You can with Activator.CreateInstance
A: You cannot. Instead use Activator.CreateInstance as shown in the example below (note that the Client namespace is in one DLL and the Host in another. Both must be found in the same directory for code to work.)
However, if you want to create a truly pluggable interface, I suggest you use an Initialize method that take the given parameters in your interface, instead of relying on constructors. That way you can just demand that the plugin class implement your interface, instead of "hoping" that it accepts the accepted parameters in the constructor.
using System;
using Host;
namespace Client
{
public class MyClass : IMyInterface
{
public int _id;
public string _name;
public MyClass(int id,
string name)
{
_id = id;
_name = name;
}
public string GetOutput()
{
return String.Format("{0} - {1}", _id, _name);
}
}
}
namespace Host
{
public interface IMyInterface
{
string GetOutput();
}
}
using System;
using System.Reflection;
namespace Host
{
internal class Program
{
private static void Main()
{
//These two would be read in some configuration
const string dllName = "Client.dll";
const string className = "Client.MyClass";
try
{
Assembly pluginAssembly = Assembly.LoadFrom(dllName);
Type classType = pluginAssembly.GetType(className);
var plugin = (IMyInterface) Activator.CreateInstance(classType,
42, "Adams");
if (plugin == null)
throw new ApplicationException("Plugin not correctly configured");
Console.WriteLine(plugin.GetOutput());
}
catch (Exception e)
{
Console.Error.WriteLine(e.ToString());
}
}
}
}
A: Activator.CreateInstance takes a Type and whatever you want to pass to the Types constructor.
http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx
A: You can also avoid Activator.CreateInstance entirely, which can perform better. See the StackOverflow question below.
How to pass ctor args in Activator.CreateInstance or use IL?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Source Control for multiple projects/solutions with shared libraries I am currently working on a project to convert a number of Excel VBA powered workbooks to VSTO solutions. All of the workbooks will share a number of class libraries and third party assemblies, in fact most of the work is done in the class libraries. I currently have my folder structure laid out like this.
Base
Libraries
Assemblies
Workbooks
Workbook1
Workbook2
Each of the workbooks will be its own solution, and the workbook solutions just reference the assemblies in the folder structure. My question is how would you lay out the source control? Would you start the repository at the base? Or would you create a repository for each workbook solution? Would you rearrange the folders?
Now that we have the initial development done, we're about to have a bunch of outside developers come on to the project to help us convert the rest of the workbooks, and I really like the idea of them being able to check out from the base directory and having all of the dependencies ready to go. I also worry that there are other concerns that come with having 20+ solutions/projects under one source control repository.
I want everything to be as simple as possible for people joining the project but I don't want to sacrifice long term usability. In my mind I've been going back and forth, what's simpler one repository or one repository per solution?
I'd appreciate and insight you have, because I'm fresh out.
Additional Information: Currently, I am using Mercurial personally, but the project will probably get moved to StarTeam unless I can make some convincing arguments for something else.
A: You don't mention in your question what source control you are using. As it doesn't sound like you need to limit your outside developers' access to the rest of the repository, I would not bother with setting up multiple repositories. I would assume that unless your code runs into the millions of lines, repository size is not an issue.
It all depends on what functionality your revision control system supports. In Subversion you can declare folders as externals, providing a URL for the folder's content; Subversion will then treat that folder as coming from a separate repository location even though it sits within your folder structure.
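For example (repository URLs invented), you could run svn propedit svn:externals Workbooks/Workbook1 and add:
Libraries https://server/repo/Base/Libraries
Assemblies https://server/repo/Base/Assemblies
A checkout of Workbook1 then pulls down the shared Libraries and Assemblies folders automatically.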
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to serialize an object into a string I am able to serialize an object into a file and then restore it again as is shown in the next code snippet. I would like to serialize the object into a string and store into a database instead. Can anyone help me?
LinkedList<Diff_match_patch.Patch> patches = // whatever...
FileOutputStream fileStream = new FileOutputStream("foo.ser");
ObjectOutputStream os = new ObjectOutputStream(fileStream);
os.writeObject(patches);
os.close();
FileInputStream fileInputStream = new FileInputStream("foo.ser");
ObjectInputStream oInputStream = new ObjectInputStream(fileInputStream);
Object one = oInputStream.readObject();
LinkedList<Diff_match_patch.Patch> patches3 = (LinkedList<Diff_match_patch.Patch>) one;
oInputStream.close();
A: Thanks for the great and quick replies. I will give some upvotes immediately to acknowledge your help. I have coded what I think is the best solution based on your answers.
LinkedList<Patch> patches1 = diff.patch_make(text2, text1);
try {
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream os = new ObjectOutputStream(bos);
os.writeObject(patches1);
// Encode the raw bytes (java.util.Base64, Java 8+); bos.toString() would corrupt binary data.
String serialized_patches1 = Base64.getEncoder().encodeToString(bos.toByteArray());
os.close();
ByteArrayInputStream bis = new ByteArrayInputStream(Base64.getDecoder().decode(serialized_patches1));
ObjectInputStream oInputStream = new ObjectInputStream(bis);
LinkedList<Patch> restored_patches1 = (LinkedList<Patch>) oInputStream.readObject();
// patches1 equals restored_patches1
oInputStream.close();
} catch(Exception ex) {
ex.printStackTrace();
}
Note: I did not consider using JSON because it is less efficient.
Note: I will consider your advice about not storing the serialized object as a string in the database, but as a byte[] instead.
A: Java8 approach, converting Object from/to String, inspired by answer from OscarRyz. For de-/encoding, java.util.Base64 is required and used.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;
import java.util.Optional;
final class ObjectHelper {
private ObjectHelper() {}
static Optional<String> convertToString(final Serializable object) {
try (final ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos)) {
oos.writeObject(object);
return Optional.of(Base64.getEncoder().encodeToString(baos.toByteArray()));
} catch (final IOException e) {
e.printStackTrace();
return Optional.empty();
}
}
static <T extends Serializable> Optional<T> convertFrom(final String objectAsString) {
final byte[] data = Base64.getDecoder().decode(objectAsString);
try (final ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
return Optional.of((T) ois.readObject());
} catch (final IOException | ClassNotFoundException e) {
e.printStackTrace();
return Optional.empty();
}
}
}
A: How about persisting the object as a BLOB?
A: XStream provides a simple utility for serializing/deserializing to/from XML, and it's very quick. Storing XML CLOBs rather than binary BLOBs is going to be less fragile, not to mention more readable.
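As a sketch, an XStream round-trip is essentially a two-liner (alias configuration omitted):
XStream xstream = new XStream();
String xml = xstream.toXML(patches); // serialize to an XML string
LinkedList<Diff_match_patch.Patch> restored = (LinkedList<Diff_match_patch.Patch>) xstream.fromXML(xml);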
A: If you're storing an object as binary data in the database, then you really should use a BLOB datatype. The database is able to store it more efficiently, and you don't have to worry about encodings and the like. JDBC provides methods for creating and retrieving blobs in terms of streams. Use Java 6 if you can, it made some additions to the JDBC API that make dealing with blobs a whole lot easier.
If you absolutely need to store the data as a String, I would recommend XStream for XML-based storage (much easier than XMLEncoder), but alternative object representations might be just as useful (e.g. JSON). Your approach depends on why you actually need to store the object in this way.
A: Sergio:
You should use a BLOB. It is pretty straightforward with JDBC.
The problem with the second code you posted is the encoding. You should additionally encode the bytes to make sure none of them gets corrupted.
If you still want to write it down into a String you can encode the bytes using java.util.Base64.
Still you should use CLOB as data type because you don't know how long the serialized data is going to be.
Here is a sample of how to use it.
import java.util.*;
import java.io.*;
/**
* Usage sample serializing SomeClass instance
*/
public class ToStringSample {
public static void main( String [] args ) throws IOException,
ClassNotFoundException {
String string = toString( new SomeClass() );
System.out.println(" Encoded serialized version " );
System.out.println( string );
SomeClass some = ( SomeClass ) fromString( string );
System.out.println( "\n\nReconstituted object");
System.out.println( some );
}
/** Read the object from Base64 string. */
private static Object fromString( String s ) throws IOException ,
ClassNotFoundException {
byte [] data = Base64.getDecoder().decode( s );
ObjectInputStream ois = new ObjectInputStream(
new ByteArrayInputStream( data ) );
Object o = ois.readObject();
ois.close();
return o;
}
/** Write the object to a Base64 string. */
private static String toString( Serializable o ) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream( baos );
oos.writeObject( o );
oos.close();
return Base64.getEncoder().encodeToString(baos.toByteArray());
}
}
/** Test subject. A very simple class. */
class SomeClass implements Serializable {
private final static long serialVersionUID = 1; // See Nick's comment below
int i = Integer.MAX_VALUE;
String s = "ABCDEFGHIJKLMNOP";
Double d = new Double( -1.0 );
public String toString(){
return "SomeClass instance says: Don't worry, "
+ "I'm healthy. Look, my data is i = " + i
+ ", s = " + s + ", d = " + d;
}
}
Output:
C:\samples>javac *.java
C:\samples>java ToStringSample
Encoded serialized version
rO0ABXNyAAlTb21lQ2xhc3MAAAAAAAAAAQIAA0kAAWlMAAFkdAASTGphdmEvbGFuZy9Eb3VibGU7T
AABc3QAEkxqYXZhL2xhbmcvU3RyaW5nO3hwf////3NyABBqYXZhLmxhbmcuRG91YmxlgLPCSilr+w
QCAAFEAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cL/wAAAAAAAAdAAQQUJ
DREVGR0hJSktMTU5PUA==
Reconstituted object
SomeClass instance says: Don't worry, I'm healthy. Look, my data is i = 2147483647, s = ABCDEFGHIJKLMNOP, d = -1.0
NOTE: for Java 7 and earlier you can see the original answer here
A: Take a look at the java.sql.PreparedStatement class, specifically the function
http://java.sun.com/javase/6/docs/api/java/sql/PreparedStatement.html#setBinaryStream(int,%20java.io.InputStream)
Then take a look at the java.sql.ResultSet class, specifically the function
http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#getBinaryStream(int)
Keep in mind that if you are serializing an object into a database, and then you change the object's class in a new version of your code, the deserialization process can easily fail because your object's signature changed. I once made this mistake: I stored a serialized custom Preferences object and then made a change to the Preferences definition. Suddenly I couldn't read any of the previously serialized information.
You might be better off writing clunky per-property columns in a table and composing and decomposing the object that way instead, to avoid this issue with object versions and deserialization. Or you could write the properties into a hash map of some sort, like a java.util.Properties object, and then serialize the Properties object, which is extremely unlikely to change.
A: How about writing the data to a ByteArrayOutputStream instead of a FileOutputStream?
Otherwise, you could serialize the object using XMLEncoder, persist the XML, then deserialize via XMLDecoder.
A: The serialised stream is just a sequence of bytes (octets). So the question is how to convert a sequence of bytes to a String, and back again. Further it needs to use a limited set of character codes if it is going to be stored in a database.
The obvious solution to the problem is to change the field to a binary LOB. If you want to stick with a character LOB, then you'll need to encode in some scheme such as base64, hex or uu.
A: You can use the built-in classes sun.misc.BASE64Encoder and sun.misc.BASE64Decoder to convert the binary data of the serialization to a string. You do not need additional classes because these are built in (note, though, that they are internal, unsupported APIs).
A: Simple Solution,worked for me
public static byte[] serialize(Object obj) throws IOException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
ObjectOutputStream os = new ObjectOutputStream(out);
os.writeObject(obj);
os.flush(); // flush the ObjectOutputStream buffer before grabbing the bytes
return out.toByteArray();
}
A: Today the most obvious approach is to save the object(s) to JSON.
*
*JSON is readable, and easier to work with than XML.
*A lot of NoSQL databases allow storing JSON directly.
*Your client already communicates with the server using JSON. (If it doesn't, it is very likely a mistake.)
Example using Gson.
Gson gson = new Gson();
Person[] persons = getArrayOfPersons();
String json = gson.toJson(persons);
System.out.println(json);
//output: [{"name":"Tom","age":11},{"name":"Jack","age":12}]
Person[] personsFromJson = gson.fromJson(json, Person[].class);
//...
class Person {
public String name;
public int age;
}
Gson allows converting a List directly. Examples can be easily googled. I prefer to convert lists to arrays first.
A: You can use uuencoding.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "159"
}
|
Q: Is there any difference between 'valid xml' and 'well formed xml'? I wasn't aware of a difference, but a coworker says there is, although he can't back it up. What's the difference if any?
A: There is a difference, yes.
XML that adheres to the XML standard is considered well-formed, while XML that adheres to a DTD is considered valid.
A: Well-Formed XML is XML that meets the syntactic requirements of the language. Not missing any closing tags, having all your singleton tags use <whatever /> instead of just <whatever>, and having your closing tags in the right order.
Valid XML is XML that uses a DTD and complies with all its requirements. So if you use an attribute improperly, you violate the DTD and aren't valid.
All valid XML is well-formed, but not all well-formed XML is valid.
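A quick illustration (the DTD is invented):
<?xml version="1.0"?>
<!DOCTYPE note [
<!ELEMENT note (to, from)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
]>
<note>
<to>Tove</to>
<from>Jani</from>
</note>
This document is both well-formed and valid. Replace <from> with an undeclared <sender> element and it remains well-formed but becomes invalid; remove the closing </note> tag and it is not even well-formed.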
A: XML is well-formed if meets the requirements for all XML documents set out by the standards - so things like having a single root node, having nodes correctly nested, all nodes having a closing tag (or using the empty node shorthand of a slash before the closing angle bracket), attributes being quoted etc. Being well-formed just means it adheres to the rules of XML and can therefore be parsed properly.
XML is valid if it will validate against a DTD or schema. This obviously differs from case to case - XML that is valid against one schema won't be valid against another schema, even though it is still well-formed.
If XML isn't well-formed it can't be properly parsed - parsers will simply throw an exception or report an error. This is generic and it doesn't matter what your XML contains. Only once it is parsed can it be checked for validity. This is domain- or context-dependent and requires a DTD or schema to validate against. For simple XML documents, you may not have a DTD or schema, in which case you can't know if the XML is valid - the concept of validity simply doesn't apply in this case. Of course, this doesn't mean you can't use it, it just means you can't tell whether or not it's valid.
A: Well-formed vs Valid XML
Well-formed means that a textual object meets the W3C requirements for being XML.
Valid means that well-formed XML meets additional requirements given by a specified schema.
Official Definitions
Per the W3C Recommendation for XML:
[Definition: A data object is an XML document if it is
well-formed, as defined in this specification. In addition, the
XML document is valid if it meets certain further constraints.]
Observations:
*
*A document that is not well-formed is not XML. (Well-formed XML is commonly used but technically redundant.)
*Being valid implies being well-formed.
*Being well-formed does not imply being valid.
*Although the W3C Recommendation for XML defines validity to be against a DTD, conventional use allows the term to be applied for conformance to XML schemas specified via XSD, RELAX NG, Schematron, or other methods.
Examples of what causes a document to be...
Not well-formed:
*
*An element lacks a closing tag (and is not self-closing).
*Elements overlap without proper nesting: <a><b></a></b>
*An attribute value is missing a closing quote that matches the
opening quote.
*< or & appear literally in content rather than as the escaped entities &lt; or &amp;.
*Multiple root elements exist.
*Multiple XML declarations exist, or an XML declaration appears other than at the top of the document.
Invalid
*
*An element or attribute is missing but required by the XML schema.
*An element or attribute is used but undefined by the XML schema.
*The content of an element does not match the content specified by the XML schema.
*The value of an attribute does not match the type specified by the XML schema.
Namespace-Well-Formed
Technically, colon characters are permitted in component names in XML. However, colons should only be used in names for namespace purposes:
Note:
The Namespaces in XML Recommendation [XML Names] assigns a
meaning to names containing colon characters. Therefore, authors
should not use the colon in XML names except for namespace purposes,
but XML processors must accept the colon as a name character.
Therefore, another term, namespace-well-formed, is defined in the Namespaces in XML 1.0 W3C Recommendation that implies all of the XML rules for well-formedness plus those governing namespaces and namespace prefixes.
Colloquially, the term well-formed is often used where namespace-well-formed would be more precise. However, this is a minor technical manner of less practical consequence than the distinction between well-formed vs valid XML described in this answer.
A: W3C, in the XML specification, has defined certain rules that needs to be followed while creating XML documents. The examples of such rules include having exactly one root element, having end-tag for each start-tag, using single/double quotes for attribute values, and so on. If an XML document follows all these rules, it is said to be well-formed document and XML parsers can be used to parse and process such documents.
Document Type Definitions (DTDs) or XML Schemas can be used to define the structure and content of a specific class of XML documents. This includes the parent-child relationship details, attribute lists, data type information, value restrictions, etc. In addition to the well-formedness rules, if an XML document also follows the rules specified in the associated DTD/Schema, it is said to be a valid XML document.
All valid XML documents are well-formed, but the reverse is not always true. Well-formed XML documents do not necessarily have to be valid.
A: Valid XML is XML that succeeds validation against a DTD.
Well formed XML is XML that has all tags closed in the proper order and, if it has a declaration, it has it first thing in the file with the proper attributes.
In other words, validity refers to semantics, well-formedness refers to syntax.
So you can have invalid well formed XML.
A: As others have said, well-formed XML conforms to the XML spec, and valid XML conforms to a given schema.
Another way to put it is that well-formed XML is lexically correct (it can be parsed), while valid XML is grammatically correct (it can be matched to a known vocabulary and grammar).
An XML document cannot be valid until it is well-formed. All XML documents are held to the same standard for well-formedness (a Recommendation put out by the W3C). One XML document can be valid against some schemas, and invalid against others. There are a number of schema languages, many of which are themselves XML-based.
A: I'll add that valid XML also implies that it's well-formed, but well-formed XML is not necessarily valid.
A: If an XML document conforms to DTD rules, then it is valid XML.
If an XML document conforms to the XML rules (all opened tags are closed, there is a single root element, etc.), then it is well-formed XML.
A: Taken from Extensible Markup Language (XML) 1.0 (Fifth Edition) - W3C Recommendation 26 November 2008 :
[Definition: A data object is an XML document if it is well-formed, as
defined in this specification. In addition, the XML document is valid
if it meets certain further constraints.]
For those who prefer pseudo-code to paragraphs upon paragraphs of text... :)
IF is_well_formed(<XML_doc>) THEN
# It is well-formed, and can be parsed
IF is_valid(<XML_doc>) THEN
# Well-formed and ALSO valid. Hurray!
# **A valid XML doc, is a well-formed doc!**
ELSE
# Only well-formed, NOT valid
END IF
ELSE
# Not well-formed, or valid!
END IF
FUNCTION is_well_formed
IF <does_not_contain_syntax,_spelling,_punctuation,_grammar_errors,_etc._errors> THEN
RETURN TRUE
ELSE
RETURN FALSE
END IF
END FUNCTION
FUNCTION is_valid
IF <markup_of_the_XML_document_matches_"some"_defined_standard> THEN
# Standards used to validate XML could be a DTDs or XML Schemas, referenced within the XML document
RETURN TRUE
ELSE
RETURN FALSE
END IF
END FUNCTION
Based on the theory: "Well Formed" vs. Valid
A: Well, XML that isn't well-formed, sort of by definition, isn't XML. People usually refer to valid XML as XML that adheres to a certain schema (XSD or DTD).
A: DTD is the acronym for Document Type Definition. This is a description of the content for a family of XML files. This is part of the XML 1.0 specification, and allows one to describe and verify that a given document instance conforms to the set of rules detailing its structure and content.
Validation is the process of checking a document against a DTD (more generally against a set of construction rules).
The validation process and building DTDs are the two most difficult parts of the XML life cycle. Briefly a DTD defines all the possible elements to be found within your document, what is the formal shape of your document tree (by defining the allowed content of an element; either text, a regular expression for the allowed list of children, or mixed content i.e. both text and children). The DTD also defines the valid attributes for all elements and the types of those attributes.
A: See XML DTD on W3 Schools:
An XML document with correct syntax is called "Well Formed".
An XML document validated against a DTD is both "Well Formed" and
"Valid".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
}
|
Q: fork() as an argument Usually when I need to fork in C, I do something like this:
pid_t p = fork();
if(p == 0) { /* do child stuff */ }
else { /* do parent stuff and pray there wasn't an error */ }
It occured to me that I could ditch the extra variable and use:
if(fork() == 0) { /* child */ }
else { /* parent/pray */ }
Improper error handling aside, (why) does this work/not work?
A: You lose the child process ID in the parent, which is what is returned to the parent. I think you could recover that information, but perhaps not uniquely (that is, I think you could get the PID of all of your children, but not necessarily the PID of the child you just forked). If you don't need to know the child's PID, I think the second way is fine.
Also, -1 is returned if there's an error in forking, which you aren't testing for in either case, which is usually a mistake.
A: What you are suggesting will certainly work. However, error handling is not optional in any well-behaved application. The following implementation pattern is similarly succinct and also handles errors. Furthermore, it saves the fork() return value in the pid variable, in case you want to use it later in the parent to, say, wait for the child.
switch (pid = fork()) {
case -1: /* Failure */
/* ... */
case 0: /* Child */
/* ... */
default: /* Parent */
/* ... */
}
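As a concrete illustration of that pattern, here is a minimal, self-contained sketch (the child just prints and exits, and the parent waits on the saved pid):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid;
    switch (pid = fork()) {
    case -1:                 /* Failure: no child process was created */
        perror("fork");
        exit(EXIT_FAILURE);
    case 0:                  /* Child: runs with a copy of the parent's state */
        printf("child: pid %d\n", (int)getpid());
        _exit(EXIT_SUCCESS);
    default:                 /* Parent: pid holds the child's process ID */
        if (waitpid(pid, NULL, 0) == -1)
            perror("waitpid");
        printf("parent: reaped child %d\n", (int)pid);
    }
    return 0;
}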
A: You should do this instead. I've never known it to not work. It's how it's done in the Stevens books.
int p;
if((p = fork()) == 0) { /* child */ }
else { /* parent/pray */ }
A: You are free to do that in C and it will work because the parent and child will receive different return values from the fork - and it is evaluated first. The only issues are the error handling as you mentioned. Also, you won't have any other way to recover the child PID in case you wanted to operate on it, such as with a waitpid, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Microsoft.ApplicationBlocks.Data.ODBCHelper? I've found mention of a data application block existing for ODBC, but can't seem to find it anywhere. If I didn't have a copy of the Access DB application block I wouldn't believe it ever existed either.
Anyone know where to download either the DLL or the code-base from?
--UPDATE: It is NOT included in either the v1, v2, or Enterprise Library versions of the Data ApplicationBlocks
Thanks,
Brian Swanson
A: Which version of .net are you interested in using the ODBC block on?
The Enterprise library has a Data Access component. It is useful on SQL, Oracle, and ODBC. Just set a different provider name in the .config file
EX:
<add name="MyConnection" connectionString="Dsn=Datasource;uid=UserID;pwd=Password"
providerName="System.Data.Odbc" />
At that point, the data access code is "standardized" and looks identical for SQL, Oracle, and ODBC
EX:
Imports Microsoft.Practices.EnterpriseLibrary.Data
Imports Microsoft.Practices.EnterpriseLibrary.ExceptionHandling
Public Class MyClass
Private dbMyDatabase As Database = DatabaseFactory.CreateDatabase("MyConnection")
Public Function GetMyData(ByVal FacilityCode As String) As Data.DataSet
Try
Dim SQL As String
SQL = "SELECT * from MyDataTable"
Dim cmd As Data.Common.DbCommand = dbMyDatabase.GetSqlStringCommand(SQL)
Return dbMyDatabase.ExecuteDataSet(cmd)
Catch ex As Exception
ExceptionPolicy.HandleException(ex, "All")
Throw
End Try
End Function
End Class
The address for the latest Enterprise Library is:
http://msdn.microsoft.com/en-us/library/cc467894.aspx
This is assuming you are using .net 3x.
Also note that we are using the Exception Handling block in the above code.
A: http://www.microsoft.com/downloads/details.aspx?FamilyId=F63D1F0A-9877-4A7B-88EC-0426B48DF275&displaylang=en
Pretty sure it's in there.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Show weights in JgraphT I have implemented this Graph:
ListenableDirectedWeightedGraph<String, MyWeightedEdge> g =
new ListenableDirectedWeightedGraph<String, MyWeightedEdge>(MyWeightedEdge.class);
In order to show what the class name says; a simple listenable directed weighted graph. I want to change the label of the edges and instead of the format
return "(" + source + " : " + target + ")";
I want it to show the weight of the edge. I realise that all actions on the nodes, e.g. the getEdgesWeight() method, are delegated from the graph and not the edge. How can I show the weight of the edge? Do I have to pass in the Graph to the edge somehow?
Any help is appreciated.
A: I assume that the class MyWeightedEdge already contains a method such as
public void setWeight(double weight)
If this is indeed the case, then what you need to do is:
Derive your own subclass from ListenableDirectedWeightedGraph (e.g., CustomListenableDirectedWeightedGraph). I would add both constructor versions, delegating to "super" to ensure compatibility with the original class.
Create the graph as in your question, but using the new class
ListenableDirectedWeightedGraph<String, MyWeightedEdge> g =
    new CustomListenableDirectedWeightedGraph<String, MyWeightedEdge>(
        MyWeightedEdge.class);
Override the method setEdgeWeight as follows:
public void setEdgeWeight(E e, double weight) {
super.setEdgeWeight(e, weight);
((MyWeightedEdge)e).setWeight(weight);
}
And, last but not least, override the toString method of the class MyWeightedEdge to return the label you want the edge to have (presumably including the weight, which is now available to it).
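Putting the pieces together, the subclass might look something like this (a sketch; everything except the setEdgeWeight override is an assumption about your code and JGraphT version):
import org.jgrapht.graph.ListenableDirectedWeightedGraph;

public class CustomListenableDirectedWeightedGraph<V, E>
        extends ListenableDirectedWeightedGraph<V, E> {

    public CustomListenableDirectedWeightedGraph(Class<? extends E> edgeClass) {
        super(edgeClass);
    }

    @Override
    public void setEdgeWeight(E e, double weight) {
        super.setEdgeWeight(e, weight);
        // Push the weight into the edge itself so its toString() can show it.
        ((MyWeightedEdge) e).setWeight(weight);
    }
}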
I hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Getting software version numbers right. v1.0.0.1 I distribute software online, and always wonder if there is a proper way to better define version numbers.
Let's assume A.B.C.D in the answers. When do you increase each of the components?
Do you use any other version number tricks such as D mod 2 == 1 means it is an in house release only?
Do you have beta releases with their own version numbers, or do you have beta releases per version number?
A: In my opinion, almost any release number scheme can be made to work more or less sanely. The system I work on uses version numbers such as 11.50.UC3, where the U indicates 32-bit Unix, and the C3 is a minor revision (fix pack) number; other letters are used for other platform types. (I'd not recommend this scheme, but it works.)
There are a few golden rules which have not so far been stated, but which are implicit in what people have discussed.
*
*Do not release the same version twice - once version 1.0.0 is released to anyone, it can never be re-released.
*Release numbers should increase monotonically. That is, the code in version 1.0.1 or 1.1.0 or 2.0.0 should always be later than version 1.0.0, 1.0.9, or 1.4.3 (respectively).
Now, in practice, people do have to release fixes for older versions while newer versions are available -- see GCC, for example:
*
*GCC 3.4.6 was released after 4.0.0, 4.1.0 (and AFAICR 4.2.0), but it continues the functionality of GCC 3.4.x rather than adding the extra features added to GCC 4.x.
So, you have to build your version numbering scheme carefully.
One other point which I firmly believe in:
*
*The release version number is unrelated to the CM (VCS) system version numbering, except for trivial programs. Any serious piece of software with more than one main source file will have a version number unrelated to the version of any single file.
With SVN, you could use the SVN version number - but probably wouldn't as it changes too unpredictably.
For the stuff I work with, the version number is a purely political decision.
Incidentally, I know of software that went through releases from version 1.00 through 9.53, but that then changed to 2.80. That was a gross mistake - dictated by marketing. Granted, version 4.x of the software is/was obsolete, so it didn't immediately make for confusion, but version 5.x of the software is still in use and sold, and the revisions have already reached 3.50. I'm very worried about what my code that has to work with both the 5.x (old style) and 5.x (new style) is going to do when the inevitable conflict occurs. I guess I have to hope that they will dilly-dally on changing to 5.x until the old 5.x really is dead -- but I'm not optimistic. I also use an artificial version number, such as 9.60, to represent the 3.50 code, so that I can do sane if VERSION > 900 testing, rather than having to do: if (VERSION >= 900 || (VERSION >= 280 && VERSION < 400)), where I represent version 9.00 by 900. And then there's the significant change introduced in version 3.00.xC3 -- my scheme fails to detect changes at the minor release level...grumble...grumble...
NB: Eric Raymond provides Software Release Practice HOWTO including the (linked) section on naming (numbering) releases.
A: I usually use D as a build counter (automatic increment by compiler)
I increment C every time a build is released to "public" (not every build is released)
A and B are used as major/minor version number and changed manually.
A: I think there are two ways to answer this question, and they are not entirely complementary.
*
*Technical: Increment versions based on technical tasks. Example: D is build number, C is Iteration, B is a minor release, A is a major release. Defining minor and major releases is really subjective, but could be related things like changes to underlying architecture.
*Marketing: Increment versions based on how many "new" or "useful" features are being provided to your customers. You may also tie the version numbers to an update policy...Changes to A require the user to purchase an upgrade license, whereas other changes do not.
The bottom line, I think, is finding a model that works for you and your customers. I've seen some cases where even versions are public releases, and odd versions are considered beta, or dev releases. I've seen some products which ignore C and D all together.
Then there is the example from Microsoft, where the only rational explanation for the version numbers of the .NET Framework is that Marketing was involved.
A: Our policy:
*
*A - Significant (> 25%) changes or additions in functionality or interface.
*B - Small changes or additions in functionality or interface.
*C - Minor changes that break the interface.
*D - Fixes to a build that do not change the interface.
A: I'm starting to like the Year.Release[.Build] convention that some apps (e.g. Perforce) use. Basically it just says the year in which you release, and the sequence within that year. So 2008.1 would be the first version, and if you released another a month or three later, it would go to 2008.2.
The advantage of this scheme is there is no implied "magnitude" of release, where you get into arguments about whether a feature is major enough to warrant a major version increment or not.
An optional extra is to tag on the build number, but that tends to be for internal purposes only (e.g. added to the EXE/DLL so you can inspect the file and ensure the right build is there).
A: People tend to want to make this much harder than it really needs to be. If your product has only a single long-lived branch, just name successive versions by their build number. If you've got some kind of "minor bug fixes are free, but you have to pay for major new versions", then use 1.0, 1.1 ... 1.n, 2.0, 2.1... etc.
If you can't immediately figure out what the A,B,C, and D in your example are, then you obviously don't need them.
A: The only use I have ever made of the version number was so that a customer could tell me they're using version 2.5.1.0 or whatever.
My only rule is designed to minimize mistakes in reporting that number: all four numbers have to be 1 digit only.
1.1.2.3
is ok, but
1.0.1.23
is not. Customers are likely to report both numbers (verbally, at least) as "one-one-two-three".
Auto-incrementing build numbers often results in version numbers like
1.0.1.12537
which doesn't really help, either.
A: A good and non-technical scheme just uses the build date in this format:
YYYY.MM.DD.BuildNumber
Where BuildNumber is either a continuous number (changelist) or just starts over at 1 each day.
Examples: 2008.03.24.1 or 2008.03.24.14503
This is mainly for internal releases, public releases would see the version printed as 2008.03 if you don't release more often than once a month. Maintenance releases get flagged as 2008.03a 2008.03b and so on. They should rarely go past "c" but if it does it's a good indicator you need better QA and/or testing procedures.
Version fields that are commonly seen by the user should be printed in a friendly "March 2008" format, reserve the more technical info in the About dialog or log files.
Biggest disadvantage: just compiling the same code on another day might change the version number. But you can avoid this by using the version control changelist as last number and checking against that to determine if the date needs to be changed as well.
A: In the github world, it has become popular to follow Tom Preston-Werner's "semver" spec for version numbers.
From http://semver.org/ :
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes, MINOR version
when you add functionality in a backwards-compatible manner, and PATCH
version when you make backwards-compatible bug fixes. Additional
labels for pre-release and build metadata are available as extensions
to the MAJOR.MINOR.PATCH format.
A: I use V.R.M e.g. 2.5.1
V (version) changes are a major rewrite
R (revision) changes are significant new features or bug fixes
M (modification) changes are minor bug fixes (typos, etc)
I sometimes use an SVN commit number on the end too.
A: It's all really subjective at the end of the day, and simply up to you/your team.
Just take a look at all the answers already - all very different.
Personally I use Major.Minor.*.* - where Visual Studio fills in the revision/build number automatically. This is used where I work too.
A: I like Year.Month.Day. So, v2009.6.8 would be the "version" of this post. It is impossible to duplicate (reasonably) and it very clear when something is a newer release. You could also drop the decimals and make it v20090608.
A: In the case of a library, the version number tells you about the level of compatibility between two releases, and thus how difficult an upgrade will be.
A bug fix release needs to preserve binary, source, and serialization compatibility.
Minor releases mean different things to different projects, but usually they don't need to preserve source compatibility.
Major version numbers can break all three forms.
I wrote more about the rationale here.
A: For in-house development, we use the following format.
[Program #] . [Year] . [Month] . [Release # of this app within the month]
For example, if I'm releasing application # 15 today, and it's the third update this month, then my version # will be
15.2008.9.3
It's totally non-standard, but it is useful for us.
A: For the past six major versions, we've used M.0.m.b where M is the major version, m is the minor version, and b is the build number. So released versions included 6.0.2, 7.0.1, ..., up to 11.0.0. Don't ask why the second number is always 0; I've asked a number of times and nobody really knows. We haven't had a non-zero there since 5.5 was released in 1996.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: ASPNET TreeView Expanded Node Style Does anyone know how in ASP.Net's TreeView control, to have a custom style applied to an Expanded node? I have many root nodes and want the Expanded nodes to have a different background.
A: There is no way of doing this with out-of-the-box controls, and this goes for a lot of MS ASP.NET controls. However, there is an adapters project on CodePlex that makes your ASP.NET controls CSS-friendly:
http://www.codeplex.com/cssfriendly
It's pretty straightforward, but ask again if you need any help setting it up.
A: I didn't want to deviate from the original control, so I ended up writing some JavaScript that would modify the tree via node structures on page load.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Integrate Stack Overflow into IDEs? Okay, this is just a crazy idea I have. Stack Overflow looks very structured and integrable into development applications. So would it be possible, even useful, to have a Stack Overflow plugin for, say, Eclipse?
Which features of Stack Overflow would you like to have directly integrated into your IDE so you can use it "natively" without changing to a browser?
EDIT: I'm thinking about ways of deeper integration than just using the web page inside the IDE. Like when you use a certain Java class and have a problem, answers from SO might flare up. There would probably be cases where something like this is annoying, but others may be very helpful.
A: In Visual Studio, you could add a shortcut to search for a highlighted term in StackOverflow. Jeff Atwood wrote about doing something similar with Google in his Google search VS.NET macro blog entry.
Using this approach would allow you to highlight a term or error message (or any other selectable text in the IDE), press the shortcut keys, and then see all the matching results on StackOverflow.
I'm sure there's a way to do this in other IDE's as well.
A: If StackOverflow can begin identifying the language that each code snippet contains, then I could see an code-completion/code-snippet plugin to an IDE that responds to a special syntax for performing searches on SO and inserting the code portion of accepted answers.
Eg: in my source I might type:
//# read an XML file
The //# syntax prompts the plugin to start a search and display a list of question titles. When I pick one, it inserts the code portion of the accepted answer.
A: Following up on Josh's answer. This VS Macro will search StackOverflow for highlighted text in the Visual Studio IDE. Just highlight and press Alt+F1
Public Sub SearchStackOverflowForSelectedText()
Dim s As String = ActiveWindowSelection().Trim()
If s.Length > 0 Then
DTE.ItemOperations.Navigate("http://www.stackoverflow.com/search?q=" & _
Web.HttpUtility.UrlEncode(s))
End If
End Sub
Private Function ActiveWindowSelection() As String
If DTE.ActiveWindow.ObjectKind = EnvDTE.Constants.vsWindowKindOutput Then
Return OutputWindowSelection()
End If
If DTE.ActiveWindow.ObjectKind = "{57312C73-6202-49E9-B1E1-40EA1A6DC1F6}" Then
Return HTMLEditorSelection()
End If
Return SelectionText(DTE.ActiveWindow.Selection)
End Function
Private Function HTMLEditorSelection() As String
Dim hw As HTMLWindow = ActiveDocument.ActiveWindow.Object
Dim tw As TextWindow = hw.CurrentTabObject
Return SelectionText(tw.Selection)
End Function
Private Function OutputWindowSelection() As String
Dim w As Window = DTE.Windows.Item(EnvDTE.Constants.vsWindowKindOutput)
Dim ow As OutputWindow = w.Object
Dim owp As OutputWindowPane = ow.OutputWindowPanes.Item(ow.ActivePane.Name)
Return SelectionText(owp.TextDocument.Selection)
End Function
Private Function SelectionText(ByVal sel As EnvDTE.TextSelection) As String
If sel Is Nothing Then
Return ""
End If
If sel.Text.Length = 0 Then
SelectWord(sel)
End If
If sel.Text.Length <= 2 Then
Return ""
End If
Return sel.Text
End Function
Private Sub SelectWord(ByVal sel As EnvDTE.TextSelection)
Dim leftPos As Integer
Dim line As Integer
Dim pt As EnvDTE.EditPoint = sel.ActivePoint.CreateEditPoint()
sel.WordLeft(True, 1)
line = sel.TextRanges.Item(1).StartPoint.Line
leftPos = sel.TextRanges.Item(1).StartPoint.LineCharOffset
pt.MoveToLineAndOffset(line, leftPos)
sel.MoveToPoint(pt)
sel.WordRight(True, 1)
End Sub
To install:
*
*go to Tools - Macros - IDE
*create a new Module with a name of your choice under "MyMacros". Or use an existing module.
*paste the above code into the module
*add a reference to the System.Web namespace (for HttpUtility) to the module
*close the macro IDE window
*go to Tools - Options - Environment - Keyboard
*type "google" in the Show Commands Containing textbox. The SearchGoogleForSelectedText macro should show up
*click in the Press Shortcut Keys textbox, then press ALT+F1
*click the Assign button
*click OK
This is all taken from Jeff Atwood's Google Search VS Macro post, just modified to search StackOverflow instead.
A: I don't know about Eclipse, but for Visual Studio, if someone really wanted this they could easily add the SO RSS feed for the "Start Page News Channel" so the SO question list appeared in the start page, or even better, narrow it down with a tag (like for C#). It's not exactly "integration", but it would provide a quick look at recent things with extremely little effort. However, not sure how "useful" it would be.
A: I don't think I'll be able to get any work done with SO integrated into an IDE. It's almost as bad, if not worse than, integrating Digg/Reddit into an IDE.
A: You have the RSS plugin for Eclipse to read the StackOverflow feed.
But I'm with you, a SO Eclipse plugin would be really cool.
A: You could just set it as your Start Page in Visual Studio.
Not sure what benefit this would provide... but to each his own.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: How do I know I reached a file's maximum size when using ofstream? While writing a file using ofstream, how do I know when the file's size has reached the OS's maximum file size (more specifically, Linux's maximum file size)?
A: First off, maximum file size is a filesystem limit, not an Operating System limit. It will even vary for a particular filesystem, based on how the filesystem was formatted.
As for how you'd figure out that you'd reached the limit, your code will likely throw an exception when that happens, which you'll then be able to relate back to the OS error codes.
Note that the actual limit is pretty darn big for a "typical" EXT2 filesystem - in the Terabytes. You'll likely never reach it, in practice. If you seriously are accumulating Terabytes of data, you might want to consider whether there's a more reasonable way to store it, rather than a single gigantic file.
A: You can check if the bad bit is set. Also, using exceptions you can force the stream to throw an exception, when the bad bit gets set.
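For example, a minimal sketch of turning on stream exceptions so that a failed write surfaces immediately (the file name is made up):
#include <fstream>
#include <iostream>

int main() {
    std::ofstream out("huge.dat", std::ios::binary);
    // Ask the stream to throw instead of silently setting error flags.
    out.exceptions(std::ofstream::badbit | std::ofstream::failbit);
    try {
        for (;;)
            out.write("x", 1);  // keep writing until something gives out
    } catch (const std::ios_base::failure& e) {
        // Reached when a write fails, e.g. at the filesystem's file-size
        // limit or when the disk fills up; check errno for the OS cause.
        std::cerr << "write failed: " << e.what() << '\n';
    }
    return 0;
}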
A: I think (not 100% sure) that you'd just have to compare the stream's current size after a write to whatever the OS's max file size is. Otherwise I'm guessing the underlying implementation will just let you keep writing until the actual OS io calls fail.
A: Well, fstream does not have an exclusive exception for a "write to a file that exceeds the implementation-defined maximum file-size" (from 'man 2 write'), like the error code EFBIG available when using the C function 'write'. So, I think one has to do as Jim Crafton said and compare the file size against some user-defined maximum size, or against the maximum value held by a 'streamoff' variable, which is the variable type used to handle file sizes (file offsets, actually) in iostream.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Can you suggest the best screencast available to learn CSS? It should be hands-on, complete, targeted to programmers and detailed on layout techniques!
A: Here are a few that you might like to consider:
*
*CSS for Designers with: Andy Clarke and Molly E. Holzschlag
*CSS Web Site Design with: Eric Meyer
*SitePoint :The CSS Video Crash Course
*CSS Tricks
*Beginners CSS Tutorial
A: Are you looking for a free screen cast? If not, Eric Meyer (one of the gods of web standards) has a video called CSS Web Design which will tell you everything you need to be fluent with CSS.
It also covers some advanced CSS techniques like Sliding Doors. He also goes and refactors some HTML on some well known websites to take advantage of CSS.
If you are a fast learner, W3School's CSS section has more than enough information to get you started. CSS is actually very easy to learn.
A: Check out www.css-tricks.com. They have excellent screen casts.
Another place to check out is the various web design podcasts. Go to iTunes and search the podcasts and you will probably find a few to check out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is it frowned upon to release your software with > 1 first commercial version number? Is it frowned upon to release your software with a version number higher than 1?
For example, some non-tech-savvy people might see a competitor's product with a higher version number as meaning my software is not as good.
A: I believe there are 3 major ways people market product versions:
*
*Product Name [Version #] (i.e. Wordperfect 5.0)
*Product Name [Release Year] (i.e. Gentoo 2008.0)
*Product Name [Code Name] (i.e. Windows Vista)
I actually prefer the release year as part of the versioning strategy: it lets your customers know what the latest version of a product is and the general age of the release, and using the 2008.1 type of number, you know the release is a revision (i.e. Service Pack).
If you still decide to go with a version number, I see no harm in starting with 1.1, that is what we always did at my old place of employ...
A: I would be miffed by anyone who set their version number based on marketing, rather than software engineering, considerations.
The reality is that having more releases, or patches, or a higher version doesn't make your software any inherently better, just as how having more eraser marks doesn't make something written in pencil any inherently better.
What marketing wants to call the product, I don't care, and in fact, I'd prefer that they didn't latch on to version numbers. Which Microsoft has gotten right, because while their internal version numbers remain somewhat consistent and sane, marketing doesn't push "Windows 6" but rather "Vista."
A: Oracle did it with their first DB release. There was no Oracle 1.
What you think of the quality of Oracle's software is a different matter.... :-)
A: A simple solution to this is to release the first couple version with year numbers (current year, or coming year), or model names. That is kind of in fashion right now anyway:
*
*Delphi 2009
*MS SQL 2008
*Windows Vista
*Microsoft Office 2007
*RemObjects Oxygene (which came after Chrome and Joyride)
Although individual version numbers are still more common.
This avoids the question of "Where is Version 1?" without having a version 1. After a few releases you can switch back to version numbers if you want, and be at a respectable number.
You can also do away with version numbers altogether and sell your software on a subscription basis, so a purchase includes updates for a year, and then you can give them build 20080925A (build date), and update them frequently to the latest certified build.
It really depends on what kind of software you are releasing, how you are distributing it, and your target market.
A: I'd actually be more inclined to keep my version numbers lower than my competitors', so my marketing people could say "Our version 2 is equal to their version 10! They took 10 releases to get this far, and we made it in 2! Our people are therefore 5 times smarter! Give us more money!"
(This is why I don't have, or ever want, my own company.)
A: For our software we used version 1.0 and 1.1 internally and never released them to the public. Our first public release was version 2.0. No one was upset and if anyone asked we told them about the internal version.
A: I wouldn't worry about it. Let the quality of your software speak for itself. Stick with 1.0. Or, even better, don't attach a version number at all. If someone wants to know the version - well, that's what Help / About is for. =)
A: Either way people are going to know it is version 1. If you call it version 2.0 but version 1.0 is nowhere to be found, and no one has ever seen it, people are going to figure it out.
Alternatively you can version internally and just call your product "Amazing App better than XYZ 2008"
Edit: Actually I changed my mind, call your software Amazing App 2010. That way people will think it is from the future.
A: My company regularly has this argument with the engineering department. There was a huge explosion when we tried to label our next release a major version above the previous one, because marketing/sales thought our current customers would be upset that we have a new major version so soon and they need to upgrade. They wanted it to be a .5 release higher than the previous one. We went back and forth on this a few times... currently, engineering is winning. The marketing department also uses releases numbered by year as the names of the product, so I don't know what the fuss is about. We tend to mix our internal and external names for things a lot though. Also, when we started using version numbers of the current format, we started at a fairly high number for exactly the reason of fear that customers would think our software isn't as good as competitors with higher version numbers. This is a real problem, at least in marketing's heads.
My $0.02 is that the version number should reflect major features/changes of the product, and thus the first public release is by definition 1.0.0.
A: No, absolutely go ahead and start from an arbitrary point. Alternatively, use a vaguely defensible scheme like the last digit of the release year.
A: You could use the year of release, or a fancy codename instead of a version number, thus bypassing the ethical issue of artificially making your software seem more crufty than it actually is :)
Marketing has a lot to answer for.
A: I say yes it is unethical to artificially inflate the number to your advantage. It is, however, perfectly fine to call it version 10.0 X .1!
A: I really don't think it matters at all. I don't know anyone, even those who aren't tech-savvy, who would assume that Version 2.0 of product A is even remotely related to a Version 1.0 of a product B.
The really non tech-savvy people probably won't even know what the version number is for a piece of software they have/want.
A: I think that it's all in the marketing department to sell the product to interested parties, however the version number should be accurate, although how you define the version number might play into things.
A: Let me turn the question back on you: would you rather buy version 1 of a big, expensive software package or version 3.14?
The purpose of the version number is to be a unique handle for debugging and troubleshooting. It also communicates something to the users. But the first use is irrelevant to the second use. If I use version 1702 internally, there's no way to say whether it should be released as version 1 or version 11.
A: Sure, you should call it version 1. But there's no rule on how quickly after that you can release version 2!
A: WordStar 1986
A: I am a big believer in:
[major].[minor].[sub-minor/patch]
Where for beta [major] = 0 and for gamma [major] starts at 1. I think it's the most informative, easy to understand, and honest version numbering scheme.
Of course, for alpha I'm a fan of the simpler: [svn-revision-num] since it's only for internal use at that point and that's more informative.
A: Ubuntu numbers releases by date, like 9.04, 9.10, etc., so you can always tell which one is the latest.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: c++ exception : throwing std::string I would like to throw an exception when my C++ methods encounter something weird and can't recover. Is it OK to throw a std::string pointer?
Here's what I was looking forward to doing:
void Foo::Bar() {
if(!QueryPerformanceTimer(&m_baz)) {
throw new std::string("it's the end of the world!");
}
}
void Foo::Caller() {
try {
this->Bar(); // should throw
}
catch(std::string *caught) { // not quite sure the syntax is OK here...
std::cout << "Got " << caught << std::endl;
}
}
A: It works, but I wouldn't do it if I were you. You don't seem to be deleting that heap data when you're done, which means that you've created a memory leak. The C++ compiler takes care of ensuring that exception data is kept alive even as the stack is popped, so don't feel that you need to use the heap.
Incidentally, throwing a std::string isn't the best approach to begin with. You'll have a lot more flexibility down the road if you use a simple wrapper object. It may just encapsulate a string for now, but maybe in future you will want to include other information, like some data which caused the exception or maybe a line number (very common, that). You don't want to change all of your exception handling in every spot in your code-base, so take the high road now and don't throw raw objects.
A: In addition to probably throwing something derived from std::exception you should throw anonymous temporaries and catch by reference:
void Foo::Bar(){
if(!QueryPerformanceTimer(&m_baz)){
throw std::string("it's the end of the world!");
}
}
void Foo::Caller(){
try{
this->Bar();// should throw
}catch(std::string& caught){ // not quite sure the syntax is ok here...
std::cout<<"Got "<<caught<<std::endl;
}
}
*
*You should throw anonymous temporaries so the compiler deals with the object lifetime of whatever you're throwing - if you throw something new-ed off the heap, someone else needs to free the thing.
*You should catch by reference to prevent object slicing.
See Meyer's "Effective C++ - 3rd edition" for details or visit https://www.securecoding.cert.org/.../ERR02-A.+Throw+anonymous+temporaries+and+catch+by+reference
A: A few principles:
*
*You have the std::exception base class available; you should have your exceptions derive from it. That way generic exception handlers still have some information.
*Don't throw pointers but objects; that way the memory is handled for you.
Example:
struct MyException : public std::exception
{
std::string s;
MyException(std::string ss) : s(ss) {}
~MyException() throw () {} // Updated
const char* what() const throw() { return s.c_str(); }
};
And then use it in your code:
void Foo::Bar(){
if(!QueryPerformanceTimer(&m_baz)){
throw MyException("it's the end of the world!");
}
}
void Foo::Caller(){
try{
this->Bar();// should throw
}catch(MyException& caught){
std::cout<<"Got "<<caught.what()<<std::endl;
}
}
A: Simplest way to throw an Exception in C++:
#include <iostream>
using namespace std;
void purturb(){
throw "Cannot purturb at this time.";
}
int main() {
try{
purturb();
}
catch(const char* msg){
cout << "We caught a message: " << msg << endl;
}
cout << "done";
return 0;
}
This prints:
We caught a message: Cannot purturb at this time.
done
If you catch the thrown exception, the exception is contained and the program will continue. If you do not catch the exception, then the program exits and prints:
This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.
A: All these work:
#include <iostream>
using namespace std;
//Good, because manual memory management isn't needed and this uses
//less heap memory (or no heap memory) so this is safer if
//used in a low memory situation
void f() { throw string("foo"); }
//Valid, but avoid manual memory management if there's no reason to use it
void g() { throw new string("foo"); }
//Best. Just a pointer to a string literal, so no allocation is needed,
//saving on cleanup, and removing a chance for an allocation to fail.
void h() { throw "foo"; }
int main() {
try { f(); } catch (string s) { cout << s << endl; }
try { g(); } catch (string* s) { cout << *s << endl; delete s; }
try { h(); } catch (const char* s) { cout << s << endl; }
return 0;
}
You should prefer h to f to g. Note that in the least preferable option you need to free the memory explicitly.
A: Though this question is rather old and has already been answered, I just want to add a note on how to do proper exception handling in C++11:
Use std::nested_exception and std::throw_with_nested
Using these, in my opinion, leads to cleaner exception design and makes it unnecessary to create an exception class hierarchy.
Note that this enables you to get a backtrace on your exceptions inside your code without need for a debugger or cumbersome logging. It is described on StackOverflow here and here how to write a proper exception handler that will rethrow nested exceptions.
Since you can do this with any derived exception class, you can add a lot of information to such a backtrace!
You may also take a look at my MWE on GitHub, where a backtrace would look something like this:
Library API: Exception caught in function 'api_function'
Backtrace:
~/Git/mwe-cpp-exception/src/detail/Library.cpp:17 : library_function failed
~/Git/mwe-cpp-exception/src/detail/Library.cpp:13 : could not open file "nonexistent.txt"
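To make the mechanics concrete, here is a minimal sketch of nesting and unwrapping exceptions with C++11's std::throw_with_nested and std::rethrow_if_nested (function names and messages are illustrative):
#include <exception>
#include <iostream>
#include <stdexcept>

void open_file() {
    try {
        throw std::runtime_error("could not open file \"nonexistent.txt\"");
    } catch (...) {
        // Wrap whatever is in flight inside a new exception, preserving it.
        std::throw_with_nested(std::runtime_error("library_function failed"));
    }
}

// Recursively print an exception and every exception nested inside it.
void print_backtrace(const std::exception& e, int depth = 0) {
    std::cerr << std::string(depth * 2, ' ') << e.what() << '\n';
    try {
        std::rethrow_if_nested(e);
    } catch (const std::exception& nested) {
        print_backtrace(nested, depth + 1);
    }
}

int main() {
    try {
        open_file();
    } catch (const std::exception& e) {
        print_backtrace(e);
    }
    return 0;
}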
A: Yes. std::exception is the base exception class in the C++ standard library. You may want to avoid using strings as exception classes because they themselves can throw an exception during use. If that happens, then where will you be?
boost has an excellent document on good style for exceptions and error handling. It's worth a read.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
}
|
Q: jQuery & Prototype Conflict I am using the jQuery AutoComplete plugin in an html page where I also have an accordion menu which uses prototype.
They both work perfectly separately but when I tried to implement both components in a single page I get an error that I have not been able to understand.
uncaught exception: [Exception... "Component returned failure code:
0x80004005 (NS_ERROR_FAILURE) [nsIDOMViewCSS.getComputedStyle]"
nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame ::
file:///C:/Documents and
Settings/Administrator/Desktop/website/js/jquery-1.2.6.pack.js ::
anonymous :: line 11" data: no]
I found out the file conflicting with jQuery is 'effects.js', which is used by the accordion menu. I tried replacing this file with a newer version, but the newer one seems to break the accordion behavior.
My guess is that the 'effects.js' file used in the accordion was modified to obtain the accordion demo output. I also tried using the overriding methods jQuery needs to avoid conflict with other libraries and that did not work.
I obtained the accordion demo from stickmanlabs.com.
And the jQuery AutoComplete can be obtained from jQuery site.
Has any one else experienced this issue?
A: I don't really see the reason for using both libraries at the same time in this case.
You can either use Prototype's (well, Scriptaculous' actually) Ajax.Autocompleter and ditch jQuery, or you can use jQuery's Accordion and get rid of Prototype.
Using both libraries at once is not really a good idea, because:
*
*They can cause conflicts.
*By including them both you force your users to download them both, which is not a bandwidth-friendly approach.
A: There are two possible solutions: There was a conflict with an older version of Scriptaculous and jQuery (Scriptaculous was attempting to extend the native Array prototype incorrectly) - first try upgrading your copy of Scriptaculous.
If that does not work you will need to use noConflict() (as alluded to above). However, there's a catch. Since you're including a plugin you'll need to do the includes in a specific order, for example:
<script src="jquery.js"></script>
<script src="jquery.autocomplete.js"></script>
<script>
jQuery.noConflict();
jQuery(document).ready(function($){
$("#example").autocomplete(options);
});
</script>
<script src="prototype.js"></script>
<script src="effects.js"></script>
<script src="accordion.js"></script>
A: jQuery lets you rename the jQuery function from $ to something else to avoid namespace conflicts with other libraries.
You can do something like this
var J = jQuery.noConflict();
Details here: michaelshadle.com — jQuery's no-conflict mode: yet another reason why it's the best
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: How to make HTML rendering fast I am working on a web application developed on C#/ASP.NET. We are using third-party controls for displaying Grids, Tabs, Trees and other complex controls in our pages. The problem is that these controls render a huge amount of HTML. Due to this the size of pages have grown heavily and the browser takes a while to load a page. I want to find some general techniques to make HTML rendering in a browser (Internet Explorer, Firefox, etc.) fast.
Note that all the pages have ViewState turned off.
A: gzip the HTML - won't increase the rendering speed, but will massively reduce the page size.
Make sure you aren't using a table based layout, and make sure any javascript or css that's used is minified, gzipped, and linked in the head so that it can be cached.
A: Open your normal pages, and type this in the URL, and press enter:
javascript:var tags = document.getElementsByTagName('*');alert('Page weight: ' + tags.length + ' tags.');
(you can even save it as a Bookmarklet)
If you have OVER 500 tags, you likely want to look at cleaning up some of the "tag soup" where possible.
Digg's homepage weighs in around 1,000 tags! thus is very slow to render (first time)
MSN's homepage weighs in around 700+ tags... thus is quite slow to render
Yahoo's homepage weighs in around 600 tags... thus renders faster
Google's homepage weighs in around 92 tags!... thus renders like lightning!
A: For third-party controls, all you can do is bug their support to improve performance.
But when coding you can use several techniques.
One key is to understand that JavaScript DOM calls are way slower than HTML parsing.
So if you set innerHTML = " .... " with thousands of rows, it will be extremely fast compared to rendering it via document.createElement() calls in a loop.
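For instance, a rough sketch of the difference (element IDs are made up):
// Fast: build the markup as one big string and let the parser do the work.
var rows = [];
for (var i = 0; i < 5000; i++) {
    rows.push("<tr><td>Row " + i + "</td></tr>");
}
document.getElementById("fast").innerHTML =
    "<table>" + rows.join("") + "</table>";

// Slow: thousands of individual DOM calls, each one crossing the
// script/DOM boundary.
var table = document.createElement("table");
for (var j = 0; j < 5000; j++) {
    var tr = document.createElement("tr");
    var td = document.createElement("td");
    td.appendChild(document.createTextNode("Row " + j));
    tr.appendChild(td);
    table.appendChild(tr);
}
document.getElementById("slow").appendChild(table);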
Here are tips from MSDN:
http://msdn.microsoft.com/en-us/library/ms533019.aspx
A: *
*Compress the HTML using a 3rd-party tool or at least by using the IIS6 built-in compression option (Microsoft TechNet).
*Evaluate the third-party controls to see if they are necessary. If they are, write your own for your own needs and/or use their controls if they are "AJAX-enabled." Most popular 3rd-party controls do have AJAX capabilities and would allow the data to be populated after the rest of the page loads thus showing the user some progress.
*Turn off ViewState if it is not needed. When using third-party controls, ViewState can get huge.
Update: I've blogged about this at http://weblogs.asp.net/jgaylord/archive/2008/09/29/web-site-performance.aspx
A: There's a Firefox extension, YSlow ( http://developer.yahoo.com/yslow/ ) that analyzes any web page and lists the specific changes to be made, to improve the speed. Some of the changes that it suggests are related to the web server, not the content of the HTML, but it's very helpful anyway.
Screenshot from the YSlow webpage:
A: Compression doesn't accelerate rendering at all, it just accelerates content delivery.
I would recommend doing some sort of 'profiling': make a test page with a lot of some kind of your HTML objects (like table rows or some common div container) and then use plugins like YSlow to test how much time it takes to render, for example, 10K of such elements. After profiling you will see the actual rendering bottleneck.
Btw, the problem may actually be with content delivery, not rendering. YSlow will also show where it is. You also can use some visual tools to verify site loading speed, like http://Site-Perf.com/
A: I would take a look at the ViewState of the controls on the page. You should disable it if at all possible, since it gets serialized (and Base64 encoded, I think) and stuffed in the page. If you're updating the data in the controls on each post-back, you should be able to safely disable ViewState and likely save a good chunk of bandwidth.
A: I would strongly suggest looking at the Yahoo CSS & JavaScript Compressor which will not only reduce your CSS & JavaScript file sizes but also raise any errors & possible duplication in your code. A definite must in any web developer's tool box.
A: If the problem rests with the control itself, perhaps shop around for a new vendor. I do realize this would involve a reinvestment in time and money, but it may need to be tabled for the next major revision if you cannot get the result you need with the previously mentioned compression methods.
And remember, set EnableViewstate to false where you can
A: To speed up rendering of grids, use grid paging.
Grid paging will cause less rendering by reducing the number of grid rows shown.
We usually start out with 50 rows per page and always set the number of grid rows as a system parameter, which can easily be decreased or increased after deployment.
When using standard ASP.NET controls, we also found that they are faster when using CSS Friendly control adapters.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Using Interface Builder for UITableViews I'm very early in the iPhone development learning process. I'm trying to get my head around various pieces. Right now I've just taken the basic NavigationController template and I'm trying to create a simple grouped table view with a couple of text fields. What I can't seem to do is to get Interface Builder to allow me to drop a UITableViewCell into a UITableView so that I can then add a text field to the Cell. Is this even possible (it would seem that its supposed to be given the fact that UITableViewCell is a draggable control)?
If not, does that mean all of that is code I will need to just write myself?
A: Be careful with Boot To The Head's method. You will leak if you don't properly deal with your IBOutlets. I will try to explain this to the best of my ability without posting code (NDA). If you plan on using IB to create your cell, make the UITableViewCell its own xib file. Set the File's Owner as your UIViewController subclass (or UITableController). Call the IBOutlet something like UITableViewCell *cellFactory. In the UITableViewDataSource method tableView:cellForRowAtIndexPath: do the following pseudo-code:
*
*Try to dequeue a cell using the identifier you setup in IB
*If successful, you're done. Just use the cell
*Else you need to create a new cell. Use the [NSBundle mainBundle] loadNibNamed:owner:options: method with your proper xib file in there. This will fill the cellFactory ivar with a fresh cell. Here comes the tricky part.
*set cell = cellFactory then release cellFactory and set it to nil to be sure you don't accidentally use it again. You are now safe to use your cell as normal
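A sketch of those steps under manual reference counting (the xib name, reuse identifier, and cellFactory outlet are assumptions about your project):
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (cell == nil) {
        // Loading the nib fills the cellFactory outlet with a fresh cell.
        [[NSBundle mainBundle] loadNibNamed:@"CustomCell" owner:self options:nil];
        cell = cellFactory;
        [cellFactory release];   // balance the outlet's retain
        cellFactory = nil;       // never accidentally reuse the stale pointer
    }
    // ...configure the cell for this indexPath...
    return cell;
}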
A: You can create the cell with Interface Builder, but you have to make it a top-level object, rather than a child of the table view. Then you can return this cell in your view controller's tableView:cellForRowAtIndexPath: function.
Make sure to give the cell an identifier in Interface Builder and then use the same identifier with dequeueReusableCellWithIdentifier: (see the sample code for how this works -- the idea is that cells get re-used - the OS will only allocate as many cells as fit on the screen at once. Clever way to save memory.)
A: Unfortunately, it doesn't really work that way - the cells in the tableview are generated by the delegate at run-time. It turns out to be very straightforward code, though. Check out the tableview example code, it's pretty easy to follow.
A: This is a good tutorial on using the UI builder for UITableView's
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Applescriptable mySQL client - low cost or free? Can anyone recommend a MySQL front-end client for OS X that is AppleScriptable? Preferably low cost or free.
A: I would suggest just using the command-line mysql client and using the do shell script command in applescript to invoke it:
do shell script "mysql -e 'select * from customer'"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Gem update on Windows - is it broken? This is a follow-up to this question.
When I issue the gem update command on Windows, whenever it gets to a
gem whose latest version DOESN'T have Windows binaries, it'll attempt to
build the native extension which will, of course, fail. For example:
Updating sqlite3-ruby
Building native extensions. This could take a while...
ERROR: While executing gem ... (Gem::Installer::ExtensionBuildError)
ERROR: Failed to build gem native extension.
c:/ruby/bin/ruby.exe extconf.rb update
checking for fdatasync() in rt.lib... no
checking for sqlite3.h... no
nmake
'nmake' is not recognized as an internal or external command,
operable program or batch file.
The old pre-1.x behavior of asking for the required platform at least
made updating possible. Now I can't update at all unless I uninstall the
troublesome gems (currently sqlite3-ruby and hpricot), run the update,
then re-install the gems using the --version switch.
Does anyone have a solution to this conundrum or are we stuck with it?
Note:
$ gem -v
1.2.0
$ ruby -v
ruby 1.8.6 (2007-09-24 patchlevel 111) [i386-mswin32]
Note (26 September 2008): I just updated to gems 1.3.0 and this problem persists.
Note (18 November 2008): Just updated to gems 1.3.1 and the problem persists.
Note (28 April 2009): The latest version of Gems (1.3.2) now skips any gems where building of native extensions fails during update; in other words, the problem is fixed. Hooray!
A: Gems, as of version 1.3.2, will now skip gems that fail to build, so update Rubygems to the latest version and the problem discussed here should be solved.
gem update --system
The following solution is now deprecated, but I leave it here for the record.
I started a thread on this issue on the Ruby Forum (it's a front end to the mailing list). There's some interesting discussion; it's worth a read. There's even a very hacky solution to this problem on there:
`gem.bat outdated`.split(/\n/).map{|z|z.scan(/^[^[:space:]]+/)}.flatten.each{|z| `gem.bat update #{z}`}
It calls the gem outdated command and builds a list of all of the outdated gems. It then iterates over the list and calls gem update for each individual outdated gem. If one fails, it just moves onto the next.
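Spelled out, that one-liner is roughly equivalent to this readable sketch (Windows, hence gem.bat):
outdated = `gem.bat outdated`.split(/\n/).map { |line| line[/^\S+/] }.compact
outdated.each do |name|
  # Each gem is updated on its own, so one failed native build
  # doesn't abort the updates for the remaining gems.
  system("gem.bat", "update", name)
end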
A: It seems that we are stuck. I have found here that there is no mswin32 gem for the latest version (1.2.4); I tried to install it on my computer and got the same problem.
Installing the previous version works fine:
gem install sqlite3-ruby --version '1.2.3'
A: Execute the below command and it should work:
gem install sqlite3-ruby --platform=mswin32
A: Looking at the RubyForge file list for sqlite3-ruby reveals that version 1.2.3 has gems that were built using Visual Studio 6 and MinGW (sqlite3-ruby-1.2.3-mswin32.gem & sqlite3-ruby-1.2.3-x86-mingw32.gem). However, version 1.2.4 doesn't not have any such pre-built gems.
If you have Visual Studio 6 or MinGW installed and have the compiler environment variables set up (at least for Visual Studio 6 but not sure about MinGW), the gem should build during installation. I'm not sure if the gems will build under newer versions of Visual Studio.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Signature capture possible in various mobile web browsers? My company is considering offering a lightweight mobile web site for data entry in the field (we already have a thick-client mobile application). One hard requirement is that we must be able to capture a signature.
Is there any prior art for capturing a signature, specifically inside a web page running inside a mobile web browser, across a wide variety of mobile devices/web browsers? I am only asking for in-browser solutions, not thick-clients.
For obvious reasons, the device would be required to have a touchscreen.
Certainly there are many, many different mobile browsers out there with a wide variety of capabilities. The ideal solution would support as many browsers as possible and degrade gracefully based on browser capabilities.
I am already aware that certain versions of Flash might provide the drawing APIs needed for something like this, assuming the desired device's browser supports Flash.
I'm also aware of a third party ActiveX/OCX control for Pocket IE on Windows Mobile devices. It is necessary for the user to manually download/install the control within the browser before use. Unfortunate, but acceptable.
I'm not personally aware of many mobile browsers that support hosting a Java applet, but there are probably some. Again, based on the support for various Java APIs, perhaps this would be a possible avenue.
Javascript could do this, if the engine and processor are robust enough on the device.
Finally, total pipe-dream here, perhaps one could have the user take a picture of a signature using the mobile device's camera on a plain piece of paper and somehow count that as a valid signature. However, this would produce a bitmap image, as opposed to vectors which I'd likely be collecting in all other instances. Also, it would be pretty difficult, if not impossible/unreasonable, to integrate the taking of the photo via a camera app and upload that using the web browser app while associating that specific image with the rest of the data being captured.
Thanks.
A: There is a jQuery plugin to do this now -> http://thomasjbradley.ca/lab/signature-pad
The previous link is inactive as of March 17, 2016, but the relevant repository is on GitHub: https://github.com/thomasjbradley/signature-pad
A: I think the picture idea is really clever, but I'd take it one step further. Some mobile devices (phones in particular) don't even do file uploads in a browser. I'd generate an operation-specific email address (a hash of some sort of transaction ID and the user ID, for instance) and allow the signature to be sent as an email attachment. This should catch a very wide variety of clients, as well as not adding terrible complexity.
A: thomas j bradley's jQuery signature-pad plugin is awesome and very easy to implement.
A: All these answers are outdated.
Currently the best library is - https://github.com/szimek/signature_pad
signature-pad by thomas bradley is no longer maintained
A: First off, I'm a C++ developer, not web, but have written and deployed a Windows Mobile signature capture routine in C++ / MFC. If you want to use or translate the code, let me know and I'll post it here. It is not particularly elegant, but does the job. Basically, you need the button clicks and mouse movement messages available.
Having already been down this road, my conclusion is that it is not a great use of technology. The screens tend to get scuffed and unresponsive on the signature capture area, making them useless not only for signature capture, but also for other operation. Our experience was that for mobile sales force type applications, it limited the life of the hand helds to about a year, and resulted with less than happy users.
The camera idea seems much cleverer and isn't going break the device. IMO you'd also get much better signatures, touch screen ones are awful.
A: Yes, found one, this works on Android 2.1, 2.2, iPhone.
It works really well, and comes with php code for turning your JSON saved co-ordinates into images.
http://thomasjbradley.ca/lab/signature-pad
A: http://mysignature.brinkster.net - Does not work for a Mobile browser
http://thomasjbradley.ca/lab/signature-pad - Does not work for IE. It uses canvas and Flash technologies, and IE has problems with the canvas tag.
A: I don't think this is even technically possible if you're talking about having it work on a wide array of mobile browsers. Most phones can at least email a picture pretty easily, so you could always send it to some account where the attachments are dumped somewhere. Still, you would have to manually type in some identifier in the subject.
A: If the mobile browser supports javascript then you might be able to do this on some touchscreen devices. Otherwise it's got to be done with a plugin, java, flash, or some similar method.
With javascript you'd look at where the 'mouse' is. On some devices if the user is pressing on the screen with the stylus you can capture mouse movements and record the pattern they follow (signature).
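For illustration, a bare-bones sketch of that capture idea for devices whose browser supports the canvas element and mouse events (element IDs and variable names are illustrative, and this is untested on real handsets):
<canvas id="sigpad" width="300" height="100"></canvas>
<script>
// Record stylus/mouse strokes as arrays of points while drawing them.
var canvas = document.getElementById('sigpad');
var ctx = canvas.getContext('2d');
var drawing = false;
var strokes = []; // each stroke is an array of {x, y} points

canvas.onmousedown = function (e) {
    drawing = true;
    strokes.push([]);
    ctx.beginPath();
    ctx.moveTo(e.clientX - canvas.offsetLeft, e.clientY - canvas.offsetTop);
};
canvas.onmousemove = function (e) {
    if (!drawing) return;
    var x = e.clientX - canvas.offsetLeft;
    var y = e.clientY - canvas.offsetTop;
    strokes[strokes.length - 1].push({x: x, y: y});
    ctx.lineTo(x, y);
    ctx.stroke();
};
canvas.onmouseup = function () {
    drawing = false;
};
</script>
The strokes array could then be serialized (e.g. as JSON) and posted to the server with the rest of the form data.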
I suspect that some mobile browsers don't pass that info onto the javascript though - they may only pass clicks...
Some testing may be in order.
-Adam
A: Without embedding something on a web page, the only way to do this would be with JavaScript.
Unfortunately most mobile browsers don't support JavaScript and the ones that do aren't particularly fast.
I don't think that it's possible to create a generic solution based on most of the devices which are currently around.
A: Canvas, with Flash (Through FlashCanvas) worked well for us with jSignature. http://willowsystems.github.com/jSignature/
MIT-licensed; works (was specifically written to run) everywhere there is Canvas or Flash; tested on iPad, iPhone, and Android tablets and phones.
A: You may want to consider OpenSource jQuery plugin: https://github.com/applicius/jquery.signfield/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: Google Maps-Like Scrolling Panel in WPF I have a Canvas where I'm drawing a bunch of shapes and other UI elements. This canvas can be very large so I want to put this in a panel which allows me to zoom in/out using the mouse and scroll by dragging the mouse, just like Google Maps. The closest thing I could find was the ScrollViewer but obviously this isn't close enough.
Has anyone done this in WPF and have any XAML and/or C# code?
A: I asked last week whether DeepZoom was planned for WPF (since it's available on Silverlight). I received a link to this code, which sounds very much like your desired solution: Pan and Zoom in WPF
A: Would this link be of any help? I haven't gotten into WPF, but a quick search yields this link and hopefully it helps you out:
http://blogs.vertigo.com/personal/swarren/Blog/Lists/Posts/Post.aspx?ID=7
A: I think you're on the right track with using a large canvas/grid with the ScrollViewer. I've recently done something similar using the same setup.
For zooming in and out, you can use a ScaleTransform in the canvas's LayoutTransform property, then hook that up to the MouseWheel event. You can change the ScaleX and ScaleY to "zoom" in and out, and all of the canvas's child elements will "zoom" as well.
For panning, you can hide the scroll bars in the ScrollViewer, and use the MouseMove event to call the ScrollViewer's ScrollToHorizontalOffset() function to move the scroll bars programmatically. Use the link that "Optimal Solutions" posted; it is exactly how to do it.
If I was at my dev machine, I could give you some example code.
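In the meantime, here is a rough, untested C# sketch of the approach (scroller and canvas are placeholder names for the ScrollViewer and Canvas controls):
private Point lastDrag;

private void Canvas_MouseWheel(object sender, MouseWheelEventArgs e)
{
    // Scale the whole canvas (and all its children) up or down per wheel tick.
    var scale = canvas.LayoutTransform as ScaleTransform;
    if (scale == null)
        canvas.LayoutTransform = scale = new ScaleTransform(1.0, 1.0);

    double factor = e.Delta > 0 ? 1.1 : 1.0 / 1.1;
    scale.ScaleX *= factor;
    scale.ScaleY *= factor;
}

private void Canvas_MouseMove(object sender, MouseEventArgs e)
{
    // Drag with the left button held down to pan, Google Maps style.
    Point pos = e.GetPosition(scroller);
    if (e.LeftButton == MouseButtonState.Pressed)
    {
        scroller.ScrollToHorizontalOffset(scroller.HorizontalOffset - (pos.X - lastDrag.X));
        scroller.ScrollToVerticalOffset(scroller.VerticalOffset - (pos.Y - lastDrag.Y));
    }
    lastDrag = pos;
}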
A: What you need here is a 'Virtualizing Canvas Panel' Please see a sample here http://blogs.msdn.com/jgoldb/archive/2008/03/08/performant-virtualized-wpf-canvas.aspx
More about VirtualizingPanel http://blogs.msdn.com/dancre/archive/2006/02/06/526310.aspx
A: If I remember correctly, here you can find something like what you want.
http://www.codeproject.com/KB/vista/swordfishcharts.aspx
A: Check out this CodeProject article by Sacha... He has a FrictionScrollViewer that does the scrolling by dragging the mouse (Also supports some physics...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Best way to handle multilanguage support? I've been searching SO for how to do i18n support in my little application...
I know that Microsoft offers 'culture' classes, but you have to re-compile your application to include any new string you add. So, all I'm trying to do is put all my strings in an external file and offer users the choice to translate the app without the need to recompile.
Is there an easy way to handle this? Use XML, or something INI-like? Any tutorial available?
P.S.: Trying to do this in C#... don't bother with other languages.
A: Here is a nice blog post from Scott Hanselman which contains several good resources:
http://www.hanselman.com/blog/ASPNETInternationalizationGlobalizationAndLocalizationWhew.aspx
Generally speaking I can say that you will want to keep your resources external to your binaries (using something like a .resource file), which will allow you to add/edit resources without a recompile. I've not done much myself, so I'm a bit rusty on the whole thing.
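For instance, here's a minimal sketch using a file-based ResourceManager, so translators can drop in a new language file without a recompile (file names are illustrative; the .resources files are produced from .resx or .txt sources with the resgen tool):
using System;
using System.Globalization;
using System.Resources;

class Translator
{
    // Looks for Strings.resources, Strings.fr.resources, etc.
    // in the application's base directory.
    static readonly ResourceManager Resources =
        ResourceManager.CreateFileBasedResourceManager(
            "Strings", AppDomain.CurrentDomain.BaseDirectory, null);

    public static string T(string key)
    {
        return Resources.GetString(key, CultureInfo.CurrentUICulture);
    }
}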
Hope this is helpful.
A: If you include a new string in your app, you have to recompile it anyway, do you not?
If you add languages often, resource files and/or satellite DLLs are probably your best bet.
Failing that, you can write your own provider. Here are some links I found useful; your mileage may vary:
http://en.csharp-online.net/Localization_Like_the_Pros
http://www.devhood.com/tutorials/tutorial_details.aspx?tutorial_id=211
http://www.codeproject.com/KB/aspnet/DeclarativeGlobalization.aspx
MS toolkit for web pages
CE solution
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Which is more preferable to use: lambda functions or nested functions ('def')? I mostly use lambda functions but sometimes use nested functions that seem to provide the same behavior.
Here are some trivial examples where they functionally do the same thing if either were found within another function:
Lambda function
>>> a = lambda x : 1 + x
>>> a(5)
6
Nested function
>>> def b(x): return 1 + x
>>> b(5)
6
Are there advantages to using one over the other? (Performance? Readability? Limitations? Consistency? etc.)
Does it even matter? If it doesn't then does that violate the Pythonic principle:
There should be one-- and preferably only one --obvious way to do it.
A: Performance:
Creating a function with lambda is slightly faster than creating it with def. The difference is due to def creating a name entry in the locals table. The resulting function has the same execution speed.
Readability:
Lambda functions are somewhat less readable for most Python users, but also much more concise in some circumstances. Consider converting from using non-functional to functional routine:
# Using non-functional version.
heading(math.sqrt(v.x * v.x + v.y * v.y), math.atan(v.y / v.x))
# Using lambda with functional version.
fheading(v, lambda v: math.sqrt(v.x * v.x + v.y * v.y), lambda v: math.atan(v.y / v.x))
# Using def with functional version.
def size(v):
    return math.sqrt(v.x * v.x + v.y * v.y)
def direction(v):
    return math.atan(v.y / v.x)
fheading(v, size, direction)
As you can see, the lambda version is shorter and "easier" in the sense that you only need to add lambda v: to the original non-functional version to convert to the functional version. It's also a lot more concise. But remember, a lot of Python users will be confused by the lambda syntax, so what you lose in length and real complexity might be gained back in confusion from fellow coders.
Limitations:
*
*lambda functions can only be used once, unless assigned to a variable name.
*lambda functions assigned to variable names have no advantage over def functions.
*lambda functions can be difficult or impossible to pickle.
*def functions' names must be carefully chosen to be reasonably descriptive and unique or at least otherwise unused in scope.
Consistency:
Python mostly avoids functional programming conventions in favor of procedural and simpler objective semantics. The lambda operator stands in direct contrast to this bias. Moreover, as an alternative to the already prevalent def, the lambda function adds diversity to your syntax. Some would consider that less consistent.
Pre-existing functions:
As noted by others, many uses of lambda in the field can be replaced by members of the operator or other modules. For instance:
do_something(x, y, lambda x, y: x + y)
do_something(x, y, operator.add)
Using the pre-existing function can make code more readable in many cases.
The Pythonic principle: “There should be one—and preferably only one—obvious way to do it”
That's similar to the single source of truth doctrine. Unfortunately, the single-obvious-way-to-do-it principle has always been more a wistful aspiration for Python than a true guiding principle. Consider the very powerful list comprehensions in Python. They are functionally equivalent to the map and filter functions:
[e for e in some_array if some_condition(e)]
filter(some_condition, some_array)
lambda and def are the same.
It's a matter of opinion, but I would say that anything in the Python language intended for general use which doesn't obviously break anything is "Pythonic" enough.
A: I agree with nosklo's advice: if you need to give the function a name, use def. I reserve lambda functions for cases where I'm just passing a brief snippet of code to another function, e.g.:
a = [ (1,2), (3,4), (5,6) ]
b = map( lambda x: x[0]+x[1], a )
A: While agreeing with the other answers, sometimes it's more readable. Here's an example where lambda comes in handy, in a use case I keep encountering: an N-dimensional defaultdict:
from collections import defaultdict
d = defaultdict(lambda: defaultdict(list))
d['Foo']['Bar'].append(something)
I find it more readable than creating a def for the second dimension. This is even more significant for higher dimensions.
A: Practically speaking, to me there are two differences:
The first is about what they do and what they return:
*
*def is a keyword that doesn't return anything and creates a 'name' in the local namespace.
*lambda is a keyword that returns a function object and does not create a 'name' in the local namespace.
Hence, if you need to call a function that takes a function object, the only way to do that in one line of python code is with a lambda. There's no equivalent with def.
In some frameworks this is actually quite common; for example, I use Twisted a lot, and so doing something like
d.addCallback(lambda result: setattr(self, _someVariable, result))
is quite common, and more concise with lambdas.
The second difference is about what the actual function is allowed to do.
*
*A function defined with 'def' can contain any python code
*A function defined with 'lambda' has to evaluate to an expression, and can thus not contain statements like print, import, raise, ...
For example,
def p(x): print x
works as expected, while
lambda x: print x
is a SyntaxError.
Of course, there are workarounds - substitute print with sys.stdout.write, or import with __import__. But usually you're better off going with a function in that case.
A: The primary use of lambda has always been for simple callback functions, and for map, reduce, and filter, which require a function as an argument. With list comprehensions becoming the norm, and with the allowed if clause, as in:
x = [f for f in range(1, 40) if f % 2]
it's hard to imagine a real case for lambda in daily use. As a result, I'd say, avoid lambda and create nested functions.
A: An important limitation of lambdas is that they cannot contain anything besides an expression. It's nearly impossible for a lambda expression to produce anything besides trivial side effects, since it cannot have anywhere near as rich a body as a def'ed function.
That being said, Lua influenced my programming style toward the extensive use of anonymous functions, and I litter my code with them. On top of that, I tend to think about map/reduce as abstract operators in ways I don't consider list comprehensions or generators, almost as if I'm deferring an implementation decision explicitly by using those operators.
Edit: This is a pretty old question, and my opinions on the matter have changed, somewhat.
First off, I am strongly biased against assigning a lambda expression to a variable; as python has a special syntax just for that (hint, def). In addition to that, many of the uses for lambda, even when they don't get a name, have predefined (and more efficient) implementations. For instance, the example in question can be abbreviated to just (1).__add__, without the need to wrap it in a lambda or def. Many other common uses can be satisfied with some combination of the operator, itertools and functools modules.
A: In this interview, Guido van Rossum says he wishes he hadn't let 'lambda' into Python:
"Q. What feature of Python are you least pleased with?
Sometimes I've been too quick in accepting contributions, and later realized that it was a mistake. One example would be some of the functional programming features, such as lambda functions. lambda is a keyword that lets you create a small anonymous function; built-in functions such as map, filter, and reduce run a function over a sequence type, such as a list.
In practice, it didn't turn out that well. Python only has two scopes: local and global. This makes writing lambda functions painful, because you often want to access variables in the scope where the lambda was defined, but you can't because of the two scopes. There's a way around this, but it's something of a kludge. Often it seems much easier in Python to just use a for loop instead of messing around with lambda functions. map and friends work well only when there's already a built-in function that does what you want.
IMHO, lambdas can be convenient sometimes, but usually at the expense of readability. Can you tell me what this does:
str(reduce(lambda x,y:x+y,map(lambda x:x**x,range(1,1001))))[-10:]
I wrote it, and it took me a minute to figure it out. This is from Project Euler - I won't say which problem because I hate spoilers, but it runs in 0.124 seconds :)
A: *
*Computation time.
*A function without a name.
*One function with many uses.
Considering a simple example,
# Create one function and use it to perform many operations on the same type of data structure.
def variousUse(a, b=lambda x: x[0]):
    return [b(i) for i in a]

dummyList = [(0,1,2,3), (4,5,6,7), (78,45,23,43)]
variousUse(dummyList)                                # extract first element
variousUse(dummyList, lambda x: [x[0], x[2], x[3]])  # extract specific indexed elements
variousUse(dummyList, lambda x: x[0] + x[2])         # add specific elements
variousUse(dummyList, lambda x: x[0] * x[2])         # multiply specific elements
A: If you need to assign the lambda to a name, use a def instead. defs are just syntactic sugar for an assignment, so the result is the same, and they are a lot more flexible and readable.
lambdas can be used for use-once, throw-away functions which won't have a name.
However, this use case is very rare. You rarely need to pass around unnamed function objects.
The builtins map() and filter() need function objects, but list comprehensions and generator expressions are generally more readable than those functions and can cover all use cases, without the need of lambdas.
For the cases you really need a small function object, you should use the operator module functions, like operator.add instead of lambda x, y: x + y
If you still need some lambda not covered, you might consider writing a def, just to be more readable. If the function is more complex than the ones at operator module, a def is probably better.
So, real world good lambda use cases are very rare.
A: For n=1000, here are some timeit runs of calling a function vs a lambda:
In [11]: def f(a, b):
    return a * b
In [12]: g = lambda x, y: x * y
In [13]: %%timeit -n 100
for a in xrange(n):
    for b in xrange(n):
        f(a, b)
   ....:
100 loops, best of 3: 285 ms per loop
In [14]: %%timeit -n 100
for a in xrange(n):
    for b in xrange(n):
        g(a, b)
   ....:
100 loops, best of 3: 298 ms per loop
In [15]: %%timeit -n 100
for a in xrange(n):
    for b in xrange(n):
        (lambda x, y: x * y)(a, b)
   ....:
100 loops, best of 3: 462 ms per loop
A:
More preferable: lambda functions or nested functions (def)?
There is one advantage to using a lambda over a regular function: they are created in an expression.
There are several drawbacks:
*
*no name (just '<lambda>')
*no docstrings
*no annotations
*no complex statements
They are also both the same type of object. For those reasons, I generally prefer to create functions with the def keyword instead of with lambdas.
First point - they're the same type of object
A lambda results in the same type of object as a regular function
>>> l = lambda: 0
>>> type(l)
<class 'function'>
>>> def foo(): return 0
...
>>> type(foo)
<class 'function'>
>>> type(foo) is type(l)
True
Since lambdas are functions, they're first-class objects.
Both lambdas and functions:
*
*can be passed around as an argument (same as a regular function)
*when created within an outer function, become a closure over that outer function's locals
But lambdas are, by default, missing some things that functions get via full function definition syntax.
A lambda's __name__ is '<lambda>'
Lambdas are anonymous functions, after all, so they don't know their own name.
>>> l.__name__
'<lambda>'
>>> foo.__name__
'foo'
Thus lambdas can't be looked up programmatically in their namespace.
This limits certain things. For example, foo can be pickled, while l cannot:
>>> import pickle
>>> pickle.loads(pickle.dumps(l))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <function <lambda> at 0x7fbbc0464e18>:
attribute lookup <lambda> on __main__ failed
We can lookup foo just fine - because it knows its own name:
>>> pickle.loads(pickle.dumps(foo))
<function foo at 0x7fbbbee79268>
Lambdas have no annotations and no docstring
Basically, lambdas are not documented. Let's rewrite foo to be better documented:
def foo() -> int:
    """a nullary function, returns 0 every time"""
    return 0
Now, foo has documentation:
>>> foo.__annotations__
{'return': <class 'int'>}
>>> help(foo)
Help on function foo in module __main__:
foo() -> int
a nullary function, returns 0 every time
Whereas, we don't have the same mechanism to give the same information to lambdas:
>>> help(l)
Help on function <lambda> in module __main__:
<lambda> lambda (...)
But we can hack them on:
>>> l.__doc__ = 'nullary -> 0'
>>> l.__annotations__ = {'return': int}
>>> help(l)
Help on function <lambda> in module __main__:
<lambda> lambda ) -> in
nullary -> 0
But there's evidently some error messing up the output of help.
Lambdas can only return an expression
Lambdas can't contain statements, only a single expression.
>>> lambda: if True: 0
File "<stdin>", line 1
lambda: if True: 0
^
SyntaxError: invalid syntax
Expressions can admittedly be rather complex, and if you try very hard you can probably accomplish the same with a lambda, but the added complexity is more of a detriment to writing clear code.
We use Python for clarity and maintainability. Overuse of lambdas can work against that.
The only upside for lambdas: can be created in a single expression
This is the only possible upside. Since you can create a lambda with an expression, you can create it inside of a function call.
Creating a function inside a function call avoids the (inexpensive) name lookup versus one created elsewhere.
However, since Python is strictly evaluated, there is no other performance gain to doing so aside from avoiding the name lookup.
For a very simple expression, I might choose a lambda.
I also tend to use lambdas when doing interactive Python, to avoid multiple lines when one will do. I use the following sort of code format when I want to pass in an argument to a constructor when calling timeit.repeat:
import timeit
def return_nullary_lambda(return_value=0):
    return lambda: return_value

def return_nullary_function(return_value=0):
    def nullary_fn():
        return return_value
    return nullary_fn
And now:
>>> min(timeit.repeat(lambda: return_nullary_lambda(1)))
0.24312214995734394
>>> min(timeit.repeat(lambda: return_nullary_function(1)))
0.24894469301216304
I believe the slight time difference above can be attributed to the name lookup in return_nullary_function - note that it is very negligible.
Conclusion
Lambdas are good for informal situations where you want to minimize lines of code in favor of making a singular point.
Lambdas are bad for more formal situations where you need clarity for editors of code who will come later, especially in cases where they are non-trivial.
We know we are supposed to give our objects good names. How can we do so when the object has no name?
For all of these reasons, I generally prefer to create functions with def instead of with lambda.
A: If you are just going to assign the lambda to a variable in the local scope, you may as well use def because it is more readable and can be expanded more easily in the future:
fun = lambda a, b: a ** b # a pointless use of lambda
map(fun, someList)
or
def fun(a, b): return a ** b # more readable
map(fun, someList)
A: One use for lambdas I have found... is in debug messages.
Since a lambda's body is only evaluated when the lambda is actually called, you can have code like this:
log.debug(lambda: "this is my message: %r" % (some_data,))
instead of possibly expensive:
log.debug("this is my message: %r" % (some_data,))
which builds the formatted message even if the debug call produces no output because of the current logging level.
Of course for it to work as described the logging module in use must support lambdas as "lazy parameters" (as my logging module does).
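As an illustration, a minimal wrapper over the standard logging module along those lines (the author's own module isn't shown, so this is only a sketch of the idea):
import logging

def lazy_debug(logger, msg_or_factory):
    # Only build the (possibly expensive) message if DEBUG is enabled.
    if logger.isEnabledFor(logging.DEBUG):
        if callable(msg_or_factory):
            msg_or_factory = msg_or_factory()
        logger.debug(msg_or_factory)

log = logging.getLogger(__name__)
lazy_debug(log, lambda: "this is my message: %r" % (some_data,))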
The same idea may be applied to any other case of lazy evaluation for on demand content value creation.
For example, this custom ternary operator:
def mif(condition, when_true, when_false):
    if condition:
        return when_true()
    else:
        return when_false()

mif(a < b, lambda: a + a, lambda: b + b)
instead of:
def mif(condition, when_true, when_false):
    if condition:
        return when_true
    else:
        return when_false

mif(a < b, a + a, b + b)
With lambdas, only the expression selected by the condition will be evaluated; without lambdas, both will be evaluated.
Of course you could simply use functions instead of lambdas, but for short expressions lambdas are (c)leaner.
A: I agree with nosklo. By the way, even with a use-once, throw-away function, most of the time you just want to use something from the operator module.
E.g.:
You have a function with this signature: myFunction(data, callback_function).
You want to pass a function that add 2 elements.
Using lambda:
myFunction(data, (lambda x, y : x + y))
The Pythonic way:
import operator
myFunction(data, operator.add)
Of course this is a simple example, but there is a lot of stuff the operator module provides, including the item setters/getters for lists and dicts. Really cool.
A: A major difference is that you cannot use def functions inline, which is in my opinion the most convenient use case for a lambda function. For example, when sorting a list of objects:
my_list.sort(key=lambda o: o.x)
I would therefore suggest keeping the use of lambdas to this kind of trivial operations, which also do not really benefit from the automatic documentation provided by naming the function.
A: lambda is useful for generating new functions:
>>> def somefunc(x): return lambda y: x+y
>>> f = somefunc(10)
>>> f(2)
12
>>> f(4)
14
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "132"
}
|
Q: How do I use django.core.urlresolvers.reverse with a function reference instead of a named URL pattern? In my urls.py file, I have:
from myapp import views
...
(r'^categories/$', views.categories)
Where categories is a view function inside myapp/views.py. No other URLconf lines reference views.categories.
In a unit test file, I’m trying to grab this URL using django.core.urlresolvers.reverse(), instead of just copying '/categories/' (DRY and all that). So, I have:
from django.core.urlresolvers import reverse
from myapp import views
...
url = reverse(views.categories)
When I run my tests, I get a NoReverseMatch error:
NoReverseMatch: Reverse for '<function categories at 0x1082f30>' with arguments '()' and keyword arguments '{}' not found.
It matches just fine if I make the URL pattern a named pattern, like this:
url(r'^categories/$', views.categories, name='myapp-categories')
And use the pattern name to match it:
url = reverse('myapp-categories')
But as far as I can tell from the reverse documentation, I shouldn’t need to make it a named URL pattern just to use reverse.
Any ideas what I’m doing wrong?
A: Jack M.'s example is nearly correct.
It needs to be a url function, not a tuple, if you want to use named urls.
url(r'^no_monkeys/$', 'views.noMonkeys', {}, "no-monkeys"),
A: After further investigation, it turns out it was an issue with how I was importing the views module:
How do I successfully pass a function reference to Django’s reverse() function?
Thanks for the help though, guys: you inspired me to look at it properly.
A: This does work, and all the code that you've pasted is correct and works fine (I just copied it into a clean test/project app and it reversed the URL without any problem). So there's something else going on here that you haven't showed us. Simplify down to the bare-bones basics until it works, then start adding complexity back in and see where it's breaking.
Also, you can do "./manage.py shell" and then interactively import the reverse function and your view function and try the reverse. That'll remove the test setup as a possible cause.
A: The reverse function actually uses the "name" of the URL. This is defined like so:
urlpatterns = patterns('',
(r'^no_monkeys/$', 'views.noMonkeys', {}, "no-monkeys"),
(r'^admin/(.*)', admin.site.root),
)
Now you would call reverse with the string "no-monkeys" to get the correct url.
Ninja Edit: Here is a link to the django docs on the subject.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is it possible to implement a COM interface with a .NET generics class? I have the following interface which I'm trying to make COM-visible. When I try to generate the type-library it doesn't like the fact that my implementation class derives from a generic-class.
Is it possible to use a generic class as a COM implementation class?
(I know I could write a non-generic wrapper and export that to COM, but this adds another layer that I'd rather do without.)
[ComVisible(true)]
public interface IMyClass
{
...
}
[ComVisible(true), ComDefaultInterface(typeof(IMyClass))]
[ClassInterface(ClassInterfaceType.None)]
public class MyClass : BaseClass<IMyClass>, IMyClass
{
...
}
Error message:
Warning: Type library exporter encountered a type that derives
from a generic class and is not marked as
[ClassInterface(ClassInterfaceType.None)]. Class interfaces cannot
be exposed for such types. Consider marking the type with
[ClassInterface(ClassInterfaceType.None)]
and exposing an explicit interface as the default interface to
COM using the ComDefaultInterface attribute.
A: Generic types and types that derive from a generic type cannot be exported. Set ComVisible(false) on your MyClass type. You'll need to either create a non-generic class implementation or use the interface only.
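For illustration, a sketch of the non-generic wrapper option (MyComClass is an invented name; the members elided in the question's interface would each need to be forwarded):
[ComVisible(false)]
public class MyClass : BaseClass<IMyClass>, IMyClass
{
    // ... original implementation, hidden from COM ...
}

[ComVisible(true), ComDefaultInterface(typeof(IMyClass))]
[ClassInterface(ClassInterfaceType.None)]
public class MyComClass : IMyClass
{
    private readonly MyClass inner = new MyClass();

    // Forward each IMyClass member to the wrapped instance, e.g.:
    // public void DoWork() { inner.DoWork(); }
}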
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Checking for duplicates in a collection Suppose you have a collection of Foo classes:
class Foo
{
public string Bar;
public string Baz;
}
List<Foo> foolist;
And you want to check this collection to see if another entry has a matching Bar.
bool isDuplicate = false;
foreach (Foo f in foolist)
{
if (f.Bar == SomeBar)
{
isDuplicate = true;
break;
}
}
Contains() doesn't work because it compares the classes as a whole.
Does anyone have a better way to do this that works for .NET 2.0?
A: Implement the IEqualityComparer<T> interface, and use the matching Contains method.
public class MyFooComparer: IEqualityComparer<Foo> {
public bool Equals(Foo foo1, Foo foo2) {
return Equals(foo1.Bar, foo2.Bar);
}
public int GetHashCode(Foo foo) {
return foo.Bar.GetHashCode();
}
}
Foo exampleFoo = new Foo();
exampleFoo.Bar = "someBar";
if(myList.Contains(exampleFoo, new MyFooComparer())) {
...
}
A: fooList.Exists(item => item.Bar == SomeBar)
That's not LINQ but a lambda expression; nevertheless, it uses a v3.5 feature. No problem:
fooList.Exists(delegate(Foo item) { return item.Bar == SomeBar; });
That should work in 2.0.
A: fooList.Exists(item => item.Bar == SomeBar)
or with anonymous delegate
fooList.Exists(delegate(Foo item) {return item.Bar == SomeBar;})
A: If you need the element, you can also use List.Find() and pass in a delegate that returns true for your definition of a "match" (http://msdn.microsoft.com/en-us/library/x0b5b5bc.aspx).
There's an example of how to define a delegate on that MSDN doc.
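For instance, a minimal .NET 2.0 sketch using an anonymous delegate (Find returns null when nothing matches, since Foo is a reference type):
Foo match = foolist.Find(delegate(Foo f) { return f.Bar == SomeBar; });
bool isDuplicate = (match != null);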
A: If the 'Bar's for your class are unique (a key to class Foo), then you can try implementing a System.Collections.ObjectModel.KeyedCollection. It's pretty simple: just implement the GetKeyForItem() method.
class Foo
{
public string Bar;
public string Baz;
}
class FooList : KeyedCollection<string, Foo>
{
protected override string GetKeyForItem(Foo item)
{
return item.Bar;
}
}
FooList fooList;
A: If you override Equals on Foo to make key on Bar, Contains() will work.
A: If you can use LINQ you can do the following:
bool contains = foolist.Where(f => f.Bar == someBar).Count() != 0;
A: Here are 4 ways to check whether a collection contains duplicates (each method returns true if a duplicate exists):
public static bool LinqAll<T>(IEnumerable<T> enumerable)
{
HashSet<T> set = new();
return !enumerable.All(set.Add);
}
public static bool LinqAny<T>(IEnumerable<T> enumerable)
{
HashSet<T> set = new();
return enumerable.Any(element => !set.Add(element));
}
public static bool LinqDistinct<T>(IEnumerable<T> enumerable)
{
return enumerable.Distinct().Count() != enumerable.Count();
}
public static bool ToHashSet<T>(IEnumerable<T> enumerable)
{
return enumerable.ToHashSet().Count != enumerable.Count();
}
A: You probably want to use C5.HashSet, and implement Equals and GetHashCode() for Foo.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to search through archived files with Perl What is your preferred method for reading through the contents of zipped directories with Perl ?
A: There are several modules on CPAN for working with various archive formats (zip, tar, etc.), the one you're probably after is Archive::Zip.
A: Archive::Zip
require Archive::Zip;
my $zip = Archive::Zip->new($somefile);
for($zip->memberNames()) {
print "$_\n";
}
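If you want the file data as well as the names, Archive::Zip can read member contents too; a small sketch with minimal error handling:
use Archive::Zip qw(:ERROR_CODES);

my $zip = Archive::Zip->new();
$zip->read($somefile) == AZ_OK or die "cannot read $somefile";

for my $name ($zip->memberNames()) {
    my $contents = $zip->contents($name);   # the member's uncompressed data
    print "== $name ==\n", $contents, "\n";
}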
A: If you want the contents of a .tar.gz archive
open(DIR_LISTING, "gzip -dc concert25.tgz | tar -tf -|") || die;
while (<DIR_LISTING>) {
print;
}
close (DIR_LISTING);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Finalizer launched while its object was still being used Summary: C#/.NET is supposed to be garbage collected. C# has a destructor, used to clean up resources. What happens when an object A is garbage collected on the same line where I try to clone one of its member variables? Apparently, on multiprocessors, sometimes, the garbage collector wins...
The problem
Today, on a training session on C#, the teacher showed us some code which contained a bug only when run on multiprocessors.
I'll summarize by saying that sometimes the compiler or the JIT screws up by calling the finalizer of a C# class object before returning from its called method.
The full code, given in the Visual C++ 2005 documentation, will be posted as an "answer" to avoid making a very, very large question, but the essentials are below:
The following class has a "Hash" property which will return a cloned copy of an internal array. At its construction, the first item of the array has a value of 2. In the destructor, its value is set to zero.
The point is: If you try to get the "Hash" property of "Example", you'll get a clean copy of the array, whose first item is still 2, as the object is being used (and as such, not being garbage collected/finalized):
public class Example
{
private int nValue;
public int N { get { return nValue; } }
// The Hash property is slower because it clones an array. When
// KeepAlive is not used, the finalizer sometimes runs before
// the Hash property value is read.
private byte[] hashValue;
public byte[] Hash { get { return (byte[])hashValue.Clone(); } }
public Example()
{
nValue = 2;
hashValue = new byte[20];
hashValue[0] = 2;
}
~Example()
{
nValue = 0;
if (hashValue != null)
{
Array.Clear(hashValue, 0, hashValue.Length);
}
}
}
But nothing is so simple...
The code using this class is working inside a thread, and of course, for the test, the app is heavily multithreaded:
public static void Main(string[] args)
{
Thread t = new Thread(new ThreadStart(ThreadProc));
t.Start();
t.Join();
}
private static void ThreadProc()
{
// running is a boolean which is always true until
// the user press ENTER
while (running) DoWork();
}
The DoWork static method is the code where the problem happens:
private static void DoWork()
{
Example ex = new Example();
byte[] res = ex.Hash; // [1]
// If the finalizer runs before the call to the Hash
// property completes, the hashValue array might be
// cleared before the property value is read. The
// following test detects that.
if (res[0] != 2)
{
// Oops... The finalizer of ex was launched before
// the Hash method/property completed
}
}
About once in every 1,000,000 executions of DoWork, apparently, the garbage collector does its magic and tries to reclaim "ex", as it is no longer referenced in the remaining code of the function, and this time it is faster than the "Hash" get method. So what we have in the end is a clone of a zeroed byte array, instead of the right one (with the first item set to 2).
My guess is that there is inlining of the code, which essentially replaces the line marked [1] in the DoWork function by something like:
// Supposed inlined processing
byte[] res2 = ex.Hash2;
// note that after this line, "ex" could be garbage collected,
// but not res2
byte[] res = (byte[])res2.Clone();
If we suppose Hash2 is a simple accessor coded like:
// Hash2 code:
public byte[] Hash2 { get { return (byte[])hashValue; } }
So, the question is: Is this supposed to work that way in C#/.NET, or could this be considered a bug in either the compiler or the JIT?
edit
See Chris Brumme's and Chris Lyons' blogs for an explanation.
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
http://blogs.msdn.com/clyon/archive/2004/09/21/232445.aspx
Everyone's answer was interesting, but I couldn't choose one better than the other. So I gave you all a +1...
Sorry
:-)
Edit 2
I was unable to reproduce the problem on Linux/Ubuntu/Mono, despite using the same code under the same conditions (multiple copies of the same executable running simultaneously, release mode, etc.).
A: It's simply a bug in your code: finalizers should not be accessing managed objects.
The only reason to implement a finalizer is to release unmanaged resources. And in this case, you should carefully implement the standard IDisposable pattern.
With this pattern, you implement a protected method "protected Dispose(bool disposing)". When this method is called from the finalizer, it cleans up unmanaged resources, but does not attempt to clean up managed resources.
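The standard pattern looks roughly like this (the unmanaged handle is illustrative):
public class ResourceHolder : IDisposable
{
    private IntPtr unmanagedHandle;   // stands in for some unmanaged resource

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);    // the finalizer is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Called via Dispose(): safe to touch managed objects here.
        }
        // Reached from both Dispose() and the finalizer:
        // release unmanaged resources only, e.g. close unmanagedHandle.
    }

    ~ResourceHolder()
    {
        Dispose(false);   // never touch managed objects from here
    }
}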
In your example, you don't have any unmanaged resources, so should not be implementing a finalizer.
A: What you're seeing is perfectly natural.
You don't keep a reference to the object that owns the byte array, so that object (not the byte array) is actually free for the garbage collector to collect.
The garbage collector really can be that aggressive.
So if you call a method on your object which returns a reference to an internal data structure, and the finalizer for your object messes up that data structure, you need to keep a live reference to the object as well.
The garbage collector sees that the ex variable isn't used in that method any more, so it can, and as you notice, will garbage collect it under the right circumstances (ie. timing and need).
The correct way to do this is to call GC.KeepAlive on ex, so add this line of code to the bottom of your method, and all should be well:
GC.KeepAlive(ex);
I learned about this aggressive behavior by reading the book Applied .NET Framework Programming by Jeffrey Richter.
A: This looks like a race condition between your work thread and the GC thread(s); to avoid it, I think there are two options:
(1) change your if statement to use ex.Hash[0] instead of res, so that ex cannot be GC'd prematurely, or
(2) lock ex for the duration of the call to Hash
That's a pretty spiffy example - was the teacher's point that there may be a bug in the JIT compiler that only manifests on multicore systems, or that this kind of coding can have subtle race conditions with garbage collection?
A: I think what you are seeing is reasonable behavior due to the fact that things are running on multiple threads. This is the reason for the GC.KeepAlive() method, which should be used in this case to tell the GC that the object is still being used and that it isn't a candidate for cleanup.
Looking at the DoWork function in your "full code" response, the problem is that immediately after this line of code:
byte[] res = ex.Hash;
the function no longer makes any references to the ex object, so it becomes eligible for garbage collection at that point. Adding the call to GC.KeepAlive would prevent this from happening.
A: It's perfectly normal for the finalizer to be called in your DoWork method, as after the ex.Hash call, the CLR knows that the ex instance won't be needed anymore...
Now, if you want to keep the instance alive do this:
private static void DoWork()
{
Example ex = new Example();
byte[] res = ex.Hash; // [1]
// If the finalizer runs before the call to the Hash
// property completes, the hashValue array might be
// cleared before the property value is read. The
// following test detects that.
if (res[0] != 2) // NOTE
{
// Oops... The finalizer of ex was launched before
// the Hash method/property completed
}
GC.KeepAlive(ex); // keep our instance alive in case we need it.. uh.. we don't
}
GC.KeepAlive does... nothing :) It's an empty, non-inlinable/jittable method whose only purpose is to trick the GC into thinking the object will be used after this.
WARNING: Your example would be perfectly valid if the DoWork method were a managed C++ method... You DO have to manually keep the managed instances alive if you don't want the destructor to be called from within another thread. I.e., you pass a reference to a managed object that is going to delete a blob of unmanaged memory when finalized, and the method is using this same blob. If you don't keep the instance alive, you're going to have a race condition between the GC and your method's thread.
And this will end up in tears. And managed heap corruption...
A: Yes, this is an issue that has come up before.
It's even more fun in that you need to run a release build for this to happen, and you end up scratching your head going "huh, how can that be null?".
A: Interesting comment from Chris Brumme's blog
http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx
class C {
    IntPtr _handle;
    static void OperateOnHandle(IntPtr h) { ... }
    void m() {
        OperateOnHandle(_handle);
        ...
    }
    ...
}
class Other {
    void work() {
        if (something) {
            C aC = new C();
            aC.m();
            ... // most guess here
        } else {
            ...
        }
    }
}
So we can’t say how long ‘aC’ might live in the above code. The JIT might report the reference until Other.work() completes. It might inline Other.work() into some other method, and report aC even longer. Even if you add “aC = null;” after your usage of it, the JIT is free to consider this assignment to be dead code and eliminate it. Regardless of when the JIT stops reporting the reference, the GC might not get around to collecting it for some time.
It’s more interesting to worry about the earliest point that aC could be collected. If you are like most people, you’ll guess that the soonest aC becomes eligible for collection is at the closing brace of Other.work()’s “if” clause, where I’ve added the comment. In fact, braces don’t exist in the IL. They are a syntactic contract between you and your language compiler. Other.work() is free to stop reporting aC as soon as it has initiated the call to aC.m().
A: The Full Code
You'll find below the full code, copy/pasted from a Visual Studio 2008 .cs file. As I'm now on Linux, without a Mono compiler or knowledge about its use, there's no way I can do tests now. Still, a couple of hours ago, I saw this code work and its bug:
using System;
using System.Threading;
public class Example
{
private int nValue;
public int N { get { return nValue; } }
// The Hash property is slower because it clones an array. When
// KeepAlive is not used, the finalizer sometimes runs before
// the Hash property value is read.
private byte[] hashValue;
public byte[] Hash { get { return (byte[])hashValue.Clone(); } }
public byte[] Hash2 { get { return (byte[])hashValue; } }
public int returnNothing() { return 25; }
public Example()
{
nValue = 2;
hashValue = new byte[20];
hashValue[0] = 2;
}
~Example()
{
nValue = 0;
if (hashValue != null)
{
Array.Clear(hashValue, 0, hashValue.Length);
}
}
}
public class Test
{
private static int totalCount = 0;
private static int finalizerFirstCount = 0;
// This variable controls the thread that runs the demo.
private static bool running = true;
// In order to demonstrate the finalizer running first, the
// DoWork method must create an Example object and invoke its
// Hash property. If there are no other calls to members of
// the Example object in DoWork, garbage collection reclaims
// the Example object aggressively. Sometimes this means that
// the finalizer runs before the call to the Hash property
// completes.
private static void DoWork()
{
totalCount++;
// Create an Example object and save the value of the
// Hash property. There are no more calls to members of
// the object in the DoWork method, so it is available
// for aggressive garbage collection.
Example ex = new Example();
// Normal processing
byte[] res = ex.Hash;
// Supposed inlined processing
//byte[] res2 = ex.Hash2;
//byte[] res = (byte[])res2.Clone();
// successful try to keep reference alive
//ex.returnNothing();
// Failed try to keep reference alive
//ex = null;
// If the finalizer runs before the call to the Hash
// property completes, the hashValue array might be
// cleared before the property value is read. The
// following test detects that.
if (res[0] != 2)
{
finalizerFirstCount++;
Console.WriteLine("The finalizer ran first at {0} iterations.", totalCount);
}
//GC.KeepAlive(ex);
}
public static void Main(string[] args)
{
Console.WriteLine("Test:");
// Create a thread to run the test.
Thread t = new Thread(new ThreadStart(ThreadProc));
t.Start();
// The thread runs until Enter is pressed.
Console.WriteLine("Press Enter to stop the program.");
Console.ReadLine();
running = false;
// Wait for the thread to end.
t.Join();
Console.WriteLine("{0} iterations total; the finalizer ran first {1} times.", totalCount, finalizerFirstCount);
}
private static void ThreadProc()
{
while (running) DoWork();
}
}
For those interested, I can send the zipped project through email.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Differences between Ruby VMs What are the advantages/disadvantages of the major Ruby VMs (things like features, compatibility, performance, and quirks?) I know there are also some bonus features like being able to use Java interfaces through JRuby, too. Those would also be helpful to note. Does any VM have a clear advantage at this point, and in what contexts?
A: I've used both Matz's Ruby and JRuby, and they solve different tasks. If you are developing a straight Ruby or Rails app, then that will probably suffice, but if there are some powerful Java libraries that would help a lot, then JRuby might be worthwhile.
I haven't done anything overly complicated, but JRuby seemed to match up pretty well, at least as far as implementing the core language features (I haven't run into any differences yet, but they may exist).
One little anecdote I wish to share... I was writing a script to interact with a DB2 database. The DB2 support in Ruby is abysmal... you have to install the whole DB2 express version just to be able to compile the Ruby drivers, which didn't even work for me. I got fed up and switched to JRuby, using JDBC and a few small DB2 JDBC jars. It resolved my problem perfectly. The point? Well, if gaining access to some Java libraries will simplify the problem at hand, by all means go for it!
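For instance, opening a JDBC connection from JRuby looks something like this (a rough sketch; the URL, table, and credentials are made up, and the DB2 JDBC jars must be on the classpath):
require 'java'

conn = java.sql.DriverManager.get_connection(
  'jdbc:db2://dbhost:50000/SAMPLE', 'user', 'secret')
rs = conn.create_statement.execute_query('SELECT id, name FROM customers')
puts "#{rs.get_int(1)}: #{rs.get_string(2)}" while rs.next
conn.close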
I hope this was helpful! Sorry I don't have any experience with other VMs....
One more caveat I have read about, but I don't know the details too well... JRuby I think supports threading via Java threads, instead of the "green" threads supported in Matz's implementation... so if you want multithreading on multicore systems, JRuby will probably serve you better... unless you want to do the threading in C.
A: Here's a bit of info I scrounged up on the main VMs: Ruby MRI, Ruby 1.9 (YARV), JRuby, XRuby, Rubinius, and IronRuby
There was a performance benchmark last year that compared the major VMs, but given how quickly VM development moves, it probably is not as relevant today. Ruby 1.9 was generally the fastest, and still has the edge over JRuby for now, I believe.
Four VMs are currently capable of running Ruby on Rails: Ruby MRI, Ruby 1.9, JRuby, and Rubinius.
XRuby runs on the JVM, as does JRuby, and compiles the Ruby source files to a Java .class.
IronRuby runs on .NET, making use of their DLR, and allows you to integrate Ruby with the .NET libraries and infrastructure. It cannot yet run Ruby on Rails.
There is also a VM called HotRuby that lets you run Ruby source code in the browser or in Flash.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Where can I find a Perl module for converting a Perl data structure into a JavaScript one? Where can I find a Perl module for converting a Perl data structure into a JavaScript one?
e.g. this is my code (Mason):
% # convert our @cti data structure into a javascript one
var cti = [
% foreach my $cti_category (@cti) {
{
label: "<% $cti_category->{'label'} %>",
value: "<% $cti_category->{'value'} %>",
children: [
% foreach my $cti_type (@{$cti_category->{'children'}}) {
{
label: "<% $cti_type->{'label'} %>",
value: "<% $cti_type->{'value'} %>",
},
% }
]
},
% }
];
is there a module for this?
A: Check out JSON or JSON::XS.
To elaborate a bit more, JSON is "JavaScript Object Notation", and the two modules above convert perl data structures into that format.
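For instance, with a recent JSON.pm, the whole Mason loop above could collapse to something like this (a sketch; @cti is the data structure from the question):
use JSON;   # or JSON::XS, which has the same interface

my $json_text = to_json(\@cti, { pretty => 1 });
print qq{var cti = $json_text;\n};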
A: JSON stands for JavaScript Object Notation, which is the format you're looking for.
Unfortunately, none of the modules you're looking for are in the Perl core, but they are available on CPAN, as a quick search will reveal.
I'd actually recommend installing JSON::Any as a wrapper, as well as JSON::XS (if you have a C compiler) or one of JSON and JSON::Syck if you don't. JSON::Any provides an interface class on top of several other JSON modules (you can choose, or let it pick from what's installed) that's independent of which module you wind up using. That way, if your code should need to be ported elsewhere, and (say) the target machine can install JSON::XS when you can't, you get a performance boost without any extra code.
use JSON::Any;
my $j = JSON::Any->new;
$json = $j->objToJson($perl_data);
Like so.
A: JSON!
This module converts Perl data structures to JSON and vice versa using either JSON::XS or JSON::PP.
A: The JSON module will convert data structures - it's basically a to/from JSON serializer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Problem encrypting membership element in web.config I am trying to encrypt the "system.web.membership" element within the Web.Config of our .Net application to secure the username and password to Active Directory. I am using the aspnet_regiis command to encrypt, and have tried several different strings for the value of the "pe" option with no success. I have successfully encrypted the "connectionStrings" element in my web.config.
Cmd
C:\Windows\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "connectionStrings" -site MySite -app /MyApp
Encrypting configuration section...
Succeeded!
C:\Windows\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "membership" -site MySite -app /MyApp
Encrypting configuration section...
The configuration section 'membership' was not found.
Failed!
C:\Windows\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "system.web.membership" -site MySite -app /MyApp
Encrypting configuration section...
The configuration section 'system.web.membership' was not found.
Failed!
Web.Config
<configuration>
...
<system.web>
...
<authentication mode="Forms">
<forms name=".ADAuthCookie"
timeout="30"/>
</authentication>
<authorization>
<deny users="?"/>
<allow users="*"/>
</authorization>
<membership defaultProvider="MyADMembershipProvider">
<providers>
<add name="MyADMembershipProvider"
type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0,Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
connectionStringName="ADConnectionString"
connectionUsername="MyUserName"
connectionPassword="MyPassword"/>
</providers>
</membership>
...
</system.web>
...
</configuration>
So what gives? What am I missing?
A: The configuration section is identified by "system.web/membership", not "membership" nor "system.web.membership".
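So, reusing the site and app values from the question, the command should be along these lines:
C:\Windows\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "system.web/membership" -site MySite -app /MyApp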
A: I know that your issue has already been solved, but for other people getting this error message, it seems that only certain sections of the web.config can be encrypted. I was trying to encrypt the SMTP settings in my web config:
<?xml version="1.0"?>
<configuration>
<system.net>
<mailSettings>
<smtp>
<network host="myhost" port="25" userName="myusername" password="mypassword" />
</smtp>
</mailSettings>
</system.net>
</configuration>
This worked:
aspnet_regiis.exe -pef "system.net/mailSettings/smtp" "path_to_site" -prov "DataProtectionConfigurationProvider"
but these didn't:
aspnet_regiis.exe -pef "system.net/mailSettings" "path_to_site" -prov "DataProtectionConfigurationProvider"
aspnet_regiis.exe -pef "system.net" "path_to_site" -prov "DataProtectionConfigurationProvider"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Project dependency missing in deployment project I've got a VS2008 deployment project that builds an installer for a couple of Windows services.
Each service references several different projects:
CustomerName.MailSendingService
-> CustomerName.Network
-> CustomerName.Data
-> CustomerName.Security
CustomerName.ProductIntegrationService
-> CustomerName.Core
-> CustomerName.Security
The Windows service projects, the projects they reference, and the deployment project are all in the same VS2008 solution.
I've added the primary output from the Windows service projects in the deployment project's file system editor.
My expectation is that the primary output for the Windows service projects would include the DLLs from the referenced projects. However, when the deployment project is built, the DLL from one of the referenced projects is missing. (CustomerName.ProductIntegrationService is missing CustomerName.Security)
Maddeningly, the DLLs for the other projects referenced by the Windows service are present; just one project's output is missing.
(Edit) I've verified that the reference is set to Copy Local in the reference properties window. The DLL for the referenced project is placed in the windows service project's bin\Release folder, but isn't packaged in the MSI file built for the deployment project.
(Edit 2) Following Joseph Daigle's suggestion, I checked that the dependency is in the dependencies list for the primary output, and it's not marked "excluded," so that doesn't appear to be the cause of this issue.
Why would just one project's output be missing?
A: I have a couple more things to add after reproducing the same suspected msi defect.
1) When I added the second project output sharing the same detected dependency to the installer, it did not automatically add the dependency. I removed both project outputs and added them back in reverse order. The second project output added never got the detected dependency. This excludes any configuration or code issue with the projects and how the references were added. It's always the second one that fails.
2) My team actually hit a second problem after using the 'Manually add detected assembly' workaround. Initially we added the dependency from the location in '\Program Files\xxx' but ran into build problems on 64 bit machines where that same dependency was in the '\Program Files (x86)\xxx' folder even though VS is smart enough to handle this problem when picking up references.
*
*The proper way to manually add the assembly is by navigating to the bin folder and adding the assembly that is copied local. This ensures that the right assembly will be present on x86 or x64 machines.
A: I can verify this is an issue for us as well. I suspect it's a bug in the deployment project - it only adds dependent project output in one location (maybe it thinks it's a COM dll?)
Manually adding Primary Output for the missing dll seems to be a viable workaround.
A: I have not used Visual Studio 2008 yet, however in 2005 you have to verify that the missing reference on the project has the Copy Local property set to true.
This will copy the missing file to the output directory.
A: In addition to hectorsq's response, verify that the dependency is in the deployment project's dependencies list, and that the DLL in question is marked to be included.
A: Have you tried looking at your DLL in Reflector to see if it really does depend on the other DLL? VS is smart enough to not include a referenced assembly if it can see that you are not actually using it.
Added to that, even if you 'think' you're using it, VS can optimise away your use - this is an edge case, but I have seen it:
For example, if you have a 'constants' assembly with this in:
public const string LockPanelUrn = "ApplicationRack.LockPanel";
VS will stick the string directly in your referencing code.
Beyond that, I'd suggest deleting and rebuilding your install solution.
A: Did you add this assembly dependency after initially creating the deployment project? If so, you may need to right-click the Detected Dependencies folder and select Refresh Dependencies. It will pick up anything new that has been added since the last time you did this.
A: check this out - maybe this not explains why is that, but at least it provides some workaround :)
http://lo-sharpdevs.blogspot.com/2009/07/vs-2008-disappearing-dependencies.html
A: I have had a similar issue with Microsoft SMO objects usage. I have a binary component (X.dll) that uses these Microsoft SMO objects that I've made myself. After compiling X.dll, I reference it in another EXE project using X.dll (and not the code). The installer project attached to that detects that it needs the Microsoft SMO objects and detects that they are local to my SQL Server installation on the machine.
The component X.dll that uses the SMO objects references them via a local "Externals" folder I keep on a shared drive. All modules are compiled with reference to those; however, my install project with my EXE project detects the ones from my SQL Server installation.
Because of this, we have another machine that has the "Externals" folder with SMO objects, but the install project won't find the SMO objects from 'Detected Dependencies' anymore, as it's not realising the SMO objects are in the Externals folder! I'm not sure where it searches for the detected dependency files, but it's not looking at where X.dll originally picked them up from, or even the EXE folder, perhaps...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: DataGrid Edit Problem I have a DataGrid, with an ItemTemplate that has an image and label. In the EditItemTemplate the label is replaced by a textbox. My problem is that the edit template only shows when I click the Edit button the second time. What gives?
A: Make sure you check for Page.IsPostback before binding your datagrid. It may be the case that you are binding during every page load.
If Not Page.IsPostBack() Then
DoDataBinding()
End If
A: Make sure you're re-binding the DataGrid after setting the EditItemIndex property.
Edit: Agreed with Massa. Best practice is to move your databinding into a separate method and call it first on the first page load, and again after setting EditItemIndex.
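For example, a minimal code-behind sketch of that pattern (DataGrid1 and BindGrid are illustrative names, not from the question):
protected void DataGrid1_EditCommand(object source, DataGridCommandEventArgs e)
{
    // Enter edit mode for the clicked row, then re-bind so the
    // EditItemTemplate renders on this postback rather than the next one.
    DataGrid1.EditItemIndex = e.Item.ItemIndex;
    BindGrid(); // the same databinding method called on the first page load
}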
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why do small spaces keep showing up in my web pages? This might be a stupid question but if there's a better or proper way to do this, I'd love to learn it.
I have run across this a few times, including recently, where small spaces show up in the rendered version of my HTML page. Intuitively I think these should not be there because outside of text or entities the formatting of a page's HTML shouldn't matter but apparently it does.
What I'm referring to is this - I have some Photoshop file from the client on how they want their site to look. They want it to look basically pixel perfect to the image in this file.
One of the places in the page calls for a menu bar, where each one does the changing bit on hovering, acts like a hyperlink, etc. In the Photoshop file this is one long bar, so a cheap and easy way to do this is to just split that segment into multiple images and then place them next to each other in the file.
So instinctively I lay it out like so (there's more to it but this is the gist)
<a href="page1.html">
<img src="image1.png" />
</a>
<a href="page2.html">
<img src="image2.png" />
</a>
<a href="page3.html">
<img src="image3.png" />
</a>
and so forth.
The problem is the images have this tiny space between them which is unacceptable since the client wants this thing pixel-perfect (and it just plain looks bad).
One way to get it to render properly is to remove the carriage returns between the images:
<a href="page1.html"><img src="image1.png" /></a><a href="page2.html"><img src="image2.png" /></a><a href="page3.html"><img src="image3.png" /></a>
Which makes the images go right up against each other (the desired effect) but it makes the line incredibly long and the code more difficult to maintain (it wraps here in SO and this is a simplified version - the real one has longer filenames and JavaScript sprinkled in to do the hovering).
It seems to me that this shouldn't happen but it looks like the carriage return in the HTML is being rendered as a small empty space. And this happens in all browsers, looks like.
Am I right or wrong for thinking the two snippets above should render the same? And is there something I'm doing wrong? Maybe saving the file with the wrong encoding? Should I make every one of these links a perfectly positioned CSS element instead?
A: I don't know if this is general enough for your page, but you could class these particular a tags and float them all left, then they'll bunch together no matter how your HTML is formatted.
<style>
a.together {
float:left;
}
</style>
<a class='together' href="page1.html"><img src="image1.png" /></a>
<a class='together' href="page2.html"><img src="image2.png" /></a>
<a class='together' href="page3.html"><img src="image3.png" /></a>
A: That's part of the HTML specification - the spaces are in the markup, so they're considered part of the document.
The only other options you've got, since you dislike the single-line formatting, are to break the HTML tags:
<a href="..."><img src=".." /></a
><a href="..."><img src=".." /></a
><a href="..."><img src=".." /></a
which is undesirable in my opinion, or to create the HTML dynamically - either via JavaScript or using a templating system and dynamic HTML.
A: The reason is simple: in HTML, white space matters, but only once. Runs of white space are collapsed, so only the first one is rendered.
The only reliable way to avoid this is, as you did, to put no white space between the elements.
If table-based layout weren't as out of favor as it currently is, you could use a zero-border, zero-padding table to align your elements while keeping them on separate lines in the source code.
A: The behavior you demonstrated is expected, as the browser treats a carriage return as a space. To fix it, you can style the links like so:
a { display: block; float: left; }
Please note that the above rule applies it to all links, so you might want to narrow the selector to certain elements only, ie:
#nav a { display: block; float: left; }
A: The way I handle this is to use an unordered list, and make each image/link an item.
Then use CSS to display each item inline and float them to the left.
This will give you a lot more flexibility and make the markup very readable.
A: The whitespace (carriage returns included) is usually rendered as a space in all browsers.
You need to put the elements one after another, but you can use a trick:
<a href="page1.html"><img src="image1.png"
/></a><a href="page2.html"><img src="image2.png"
/></a><a href="page3.html"><img src="image3.png"
/></a>
This also looks a little ugly, but it's still better than one single line. You might change the formatting, but the idea is to add carriage returns inside the elements and not between them.
A: If you're going to do a tabbed interface on a website, take great pains to do it properly, and it will be worthwhile. There are many websites with great examples of CSS tab implementations. Consider using one of them.
This one has a lot of CSS+Javascript/AJAX tabs. Or see this set of simple CSS examples (some styled). Finally, check out this actually-pretty-cool tabs generator.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Visual Studio ">of" command causes solution explorer to go wacky I'm working with a fairly large solution right now in Visual Studio 2005 and whenever I try to use the ">of" command in the toolbar search box it causes my solution explorer go haywire. Specifically, all of the folders start showing that there aren't any files in them anymore and I have to hit the "Refresh" button in order to get them all to show up. This is extremely annoying so I never use the ">of" command anymore, but a jump-to-file command sure does come in handy.
Has anyone else experienced this or have a better alternative to jumping to a specific file by filename?
Note: ReSharper is not an option. It is too slow on a solution of this size. I tried it out and each load up took 10-15 minutes.
A: First, thanks for getting me to discover that you can run commands from the Find Combo box.
I haven't experienced your problem (not enough files in my solution?), but the better alternative you mentioned could be Visual Assist's 'Open File in Workspace' command. It's lightning fast for me with 2500 or so files in a solution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I create a jar file, which includes xml and html files? I am trying to create a jar file which includes some class and java files needed, but I also would like to include some extra xml, xsl, html, txt (README) files.
I am using Eclipse on Windows XP.
Is there an easy way for me to set up a directory structure and package all my files into a jar?
A: Add the files to a source folder and they can be included in the jar.
One common way is to have, at the root of your project, a src folder. Within that, folders for Java files and others, something like:
src/
css/
java/
html/
images/
Then you can make each of those subfolders a source folder (Right click, Use as Source Folder) and they should be available to add to the jar.
A: A .jar is nothing but a ZIP archive, so you can use any program capable of creating ZIPs. Just make sure that you include the manifest and all the class files.
A: If you're using Ant, you can use the jar task (see the examples section for how to include/exclude certain files, etc.)
A: I just added all the files into my Eclipse project (including the txt, html, xml, etc files).
Then I used Eclipse to File->Export->Jar File->Next
Check the "Export Java source files and resources" box.
Done.
A: If you move to Ant (or Maven, for you Maven fans) then you can automate the jar building very nicely, and also use it outside of Eclipse (e.g., in an automated build environment). All you need to do is copy the files from your src, jsp, foobar and resources locations into a build staging folder, then jar the resulting files using Ant's Jar task.
<target name="makejar" depends="compile, copyfiles">
<jar destfile="${jars.dir}/myjarfile.jar" index="true" basedir="${build.dir}" />
</target>
One thing I look down on is including non-source files (except package.html files for Javadoc) within the src folder. If you feel you have to do this to achieve something, then you are doing it wrong.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Linq To SQL caching VS multi-user application We develop a Win32 application that accesses a SQL 2005 database through Linq to SQL. The issue is when 2 users access the same record (view and edit)… User 1 updates the record (DataContext.SubmitChanges()), but User 2 will continue to see the old information until he restarts the application. So, we would like to update the context of user 2… The solution that appears right now is to call DataContext.Refresh to sync the object with the SQL table… We are wondering if another solution exists?
Thank you
A: I've noticed that Refresh can be really nasty depending on the data you've already grabbed from an entity; another solution is to reset the context you are using to a new instance:
context = new MyDataContext(ConnectionString);
This, at least in the scenarios where I am using it is less overhead and less DB calls.
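If you do stick with Refresh, a minimal sketch (entity stands for whichever object the second user is viewing):
// Re-read the entity from the database, discarding the stale in-memory copy.
context.Refresh(RefreshMode.OverwriteCurrentValues, entity);
Note that OverwriteCurrentValues throws away any local edits; RefreshMode.KeepChanges is the option to use if user 2 may have pending modifications of his own.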
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Catching exceptions as expected program execution flow control? I always felt that expecting exceptions to be thrown on a regular basis and using them as flow logic was a bad thing. Exceptions feel like they should be, well, the "exception". If you're expecting and planning for an exception, that would seem to indicate that your code should be refactored, at least in .NET...
However. A recent scenario gave me pause. I posted this on msdn a while ago, but I'd like to generate more discussion about it and this is the perfect place!
So, say you've got a database table which has a foreign key for several other tables (in the case that originally prompted the debate, there were 4 foreign keys pointing to it). You want to allow the user to delete, but only if there are NO foreign key references; you DON'T want to cascade delete.
I normally just do a check to see if there are any references, and if there are, I inform the user instead of doing the delete. It's very easy and relaxing to write that in LINQ as related tables are members on the object, so Section.Projects and Section.Categories and et cetera is nice to type with intellisense and all...
But the fact is that LINQ then has to hit potentially all 4 tables to see if there are any result rows pointing to that record, and hitting the database is obviously always a relatively expensive operation.
The lead on this project asked me to change it to just catch a SqlException with a code of 547 (foreign key constraint) and deal with it that way.
I was...
resistant.
But in this case, it's probably a lot more efficient to swallow the exception-related overhead than to swallow the 4 table hits... Especially since we have to do the check in every case, but we're spared the exception in the case when there are no children...
Plus the database really should be the one responsible for handling referential integrity, that's its job and it does it well...
So they won and I changed it.
On some level it still feels wrong to me though.
What do you guys think about expecting and intentionally handling exceptions? Is it okay when it looks like it'll be more efficient than checking beforehand? Is it more confusing to the next developer looking at your code, or less confusing? Is it safer, since the database might know about new foreign key constraints that the developer might not think to add a check for? Or is it a matter of perspective on what exactly you think best practice is?
A: Your lead is absolutely right. Exceptions are not just for once-in-a-blue-moon situations, but specifically for reporting other-than-expected outcomes.
In this case the foreign key check would still take place, and exceptions are the mechanism by which you can be notified.
What you should NOT do is catch and suppress exceptions with a blanket catchall statement. Doing fine-grained exception handling is specifically why exceptions were designed in the first place.
A: Wow,
First off, can you please distill the question down a bit? While it was nice to read a well thought out and explained question, that was quite a lot to digest.
The short answer is "yes", but it can depend.
*
*We have some applications where we have lots of business logic tied up in the SQL queries (not my design, Gov!). If this is how it is structured, management can be difficult to convince otherwise, since it "already works".
*In this situation, is it really a big deal? It's still one trip across the wire and back. Does the server do much work before it realises that it cannot continue (i.e. if there is a sequence of transactions that make up your action, does it fall over half way through, wasting time)?
*Does it make sense to do the check in the UI first? Does it help with your application? Does it provide a nicer user experience? (i.e. I have seen cases where you step through several steps in a wizard; it starts, then falls over, when it had all the info it needed to fall over after step 1.)
*Is concurrency an issue? Is it possible that the record may be removed/edited or whatever before your commit takes place (as in the classic File.Exists boo-boo).
In my opinion:
I would do both. If I can fail fast and provide a better user experience, great. Any expected SQL (or any other) exceptions should be getting caught and fed back appropriately anyway.
I know there is a consensus that exceptions should not be used for other than exceptional circumstances, but remember, we are crossing application boundaries here; expect nothing. Like I said, this is like File.Exists: there is no point, since the file can be deleted before you access it anyway.
A: I think you are right: exceptions should only be used to handle unexpected outcomes. Here you are using an exception to deal with a possibly expected outcome; you should deal with this case explicitly, but still catch the exception to show a possible error.
Unless this is the way this case is handled all throughout the code, I would side with you. The performance issue should only be brought up if it is actually an issue, i.e. it depends on the size of those tables and the number of times this function is used.
A: Nice question. However, I find the answers ... scary!
An exception is a kind of GOTO.
I don't like using exceptions in this way because that leads to spaghetti code.
Simple as that.
A: There's nothing wrong with what you are doing. Exceptions aren't necessarily "exceptional". They exist to allow calling objects to have fine grained error handling depending on their needs.
A: Catching the specific SqlException is the right thing to do. This is the mechanism by which SQL Server communicates the foreign key condition. Even if you might favor a different usage of the exception mechanism, this is how SQL Server does it.
Also, during your check on the four tables, some other user might add a related record before your check is completed but after you read that table.
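A minimal sketch of that catch (DeleteSection and ShowCannotDeleteMessage are hypothetical placeholders for your own data-access and UI code):
try
{
    DeleteSection(sectionId); // issues the DELETE against SQL Server
}
catch (SqlException ex)
{
    if (ex.Number == 547) // 547 = constraint violation, e.g. a foreign key reference
    {
        // The record is still referenced; inform the user instead of deleting.
        ShowCannotDeleteMessage();
    }
    else
    {
        throw; // anything else really is unexpected
    }
}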
A: I'd recommend calling a stored procedure that checks if there are any dependencies, then deletes if there aren't any. That way, the check for integrity is done.
Of course, you'd probably want a different stored procedure for singleton deletes vs. batch deleting... The batch delete could look for child rows and return back a set of records that were not eligible for a bulk delete (had child rows).
A: I don't like to see exceptions everywhere in a program, but in this case I would use the exception mechanism.
Even if you test in LINQ, you will have to catch the exception in case someone inserts a child record while you're testing integrity with LINQ. Since you have to handle the exception anyway, why duplicate the code?
Another point is that this kind of "short range" exception doesn't cause maintenance problems, nor does it make your program more difficult to read. You will have the try, the SQL call to delete, and the catch, all within 10 lines of code, together. The intent is obvious.
That's unlike throwing an exception that is to be caught five procedure calls up the stack, with the problem of making sure that all objects in between have been taken care of (freed) correctly.
Exceptions are not always a magic answer, but I would not feel wrong to use them in this situation.
My two cents,
Yves
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How can I sort a 2-D array in MATLAB with respect to one column? I would like to sort a matrix according to a particular column. There is a sort function, but it sorts all columns independently.
For example, if my matrix data is:
1 3
5 7
-1 4
Then the desired output (sorting by the first column) would be:
-1 4
1 3
5 7
But the output of sort(data) is:
-1 3
1 4
5 7
How can I sort this matrix by the first column?
A: I think the sortrows function is what you're looking for.
>> sortrows(data,1)
ans =
-1 4
1 3
5 7
A: An alternative to sortrows(), which can be applied in broader scenarios:
*
*save the sorting indices of the row/column you want to order by:
[~,idx]=sort(data(:,1));
*reorder all the rows/columns according to the previous sorted indices
data=data(idx,:)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
}
|
Q: What's the key difference between HTML 4 and HTML 5? What are the key differences between HTML4 and HTML5 draft?
Please keep the answers related to changed syntax and added/removed html elements.
A: From Wikipedia:
*
*New parsing rules oriented towards flexible parsing and compatibility
*New elements – section, video, progress, nav, meter, time, aside, canvas
*New input attributes – dates and times, email, url
*New attributes – ping, charset, async
*Global attributes (that can be applied for every element) – id, tabindex, repeat
*Deprecated elements dropped – center, font, strike
A: You might be interested in this list of HTML5 elements and attributes.
Also, please note that it's "HTML 4", not "HTML4". Indeed, for HTML 5, both variants are used, but there is an important difference in meaning. HTML 5 refers to the name of the W3C specification, whereas "HTML5" is the document type of those HTML files with a text/html MIME type that follow this spec.
The same goes for XHTML 5 vs. XHTML5.
A: HTML5 has several goals which differentiate it from HTML4.
Consistency in Handling Malformed Documents
The primary one is consistent, defined error handling. As you know, HTML purposely supports 'tag soup', or the ability to write malformed code and have it corrected into a valid document. The problem is that the rules for doing this aren't written down anywhere. When a new browser vendor wants to enter the market, they just have to test malformed documents in various browsers (especially IE) and reverse-engineer their error handling. If they don't, then many pages won't display correctly (estimates place roughly 90% of pages on the net as being at least somewhat malformed).
So, HTML5 is attempting to discover and codify this error handling, so that browser developers can all standardize and greatly reduce the time and money required to display things consistently. As well, long in the future after HTML has died as a document format, historians may still want to read our documents, and having a completely defined parsing algorithm will greatly aid this.
Better Web Application Features
The secondary goal of HTML5 is to develop the ability of the browser to be an application platform, via HTML, CSS, and Javascript. Many elements have been added directly to the language that are currently (in HTML4) Flash or JS-based hacks, such as <canvas>, <video>, and <audio>. Useful things such as Local Storage (a js-accessible browser-built-in key-value database, for storing information beyond what cookies can hold), new input types such as date for which the browser can expose easy user interface (so that we don't have to use our js-based calendar date-pickers), and browser-supported form validation will make developing web applications much simpler for the developers, and make them much faster for the users (since many things will be supported natively, rather than hacked in via javascript).
Improved Element Semantics
There are many other smaller efforts taking place in HTML5, such as better-defined semantic roles for existing elements (<strong> and <em> now actually mean something different, and even <b> and <i> have vague semantics that should work well when parsing legacy documents) and adding new elements with useful semantics - <article>, <section>, <header>, <aside>, and <nav> should replace the majority of <div>s used on a web page, making your pages a bit more semantic, but more importantly, easier to read. No more painful scanning to see just what that random </div> is closing - instead you'll have an obvious </header>, or </article>, making the structure of your document much more intuitive.
A: The W3C now provides an official list of the differences on their site:
http://www.w3.org/TR/html5-diff/
A: You'll want to check HTML5 Differences from HTML4: W3C Working Group Note 9 December 2014 for the complete differences. There are many new elements and element attributes. Some elements were removed and others have different semantic value than before.
There are also APIs defined, such as the use of canvas, to help build the next generation of web apps and make sure implementations are standardized.
A: HTML5 introduces a number of APIs that help in creating Web applications. These can be used together with the new elements introduced for applications:
*
*An API for playing of video and audio which can be used with the new video and audio elements.
*An API that enables offline Web applications.
*An API that allows a Web application to register itself for certain protocols or media types.
*An editing API in combination with a new global contenteditable attribute.
*A drag & drop API in combination with a draggable attribute.
*An API that exposes the history and allows pages to add to it to prevent breaking the back button.
A: HTML 5 invites you to add a lot of semantic value to your code. What's more, there are native solutions for embedding multimedia content.
The rest is important, but it's more technical sugar that will save you from doing the same stuff with a client-side programming language.
A: In short, it is much simpler compared to HTML 4: the long doctype is removed, and the center and font tags are removed as well.
I also covered this difference on my blog:
http://ravisinghblog.in/key-difference-between-html-and-html-5/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "150"
}
|
Q: Generic Object in VB.NET I have a class declared as follows:
Public MustInherit Class Container(Of T As {New, BaseClass}) Inherits ArrayList(Of T)
I have classes that inherit this class.
I have another class that I must pass instances in this method:
Public Sub LoadCollection(Of T As {BaseClass, New})(ByRef Collection As Container(Of T))
I need to store the passed in object in a global variable, but i can't simply declare it:
Private _Container as Collection(Of BaseClass)
What is the syntax to declare this object?
A: Sorry, I haven't got time to expand on this right now, but I think this link describes your underlying problem and a solution.
(You might also find this interesting.)
A: It cannot be a global variable. Container is an idea, not a thing.
As you have it designed, that idea is only formed into an actual thing inside LoadCollection(). You need to convey the information outside of that method.
A: Hmmm. "Collection" is a variable name, not a Type. I think this is what you want:
Private _Container as Container(Of BaseClass)
Also, ArrayList is not a generic class; don't you mean Inherits List(Of T)?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Returning a const reference to an object instead of a copy Whilst refactoring some code I came across some getter methods that returns a std::string. Something like this for example:
class foo
{
private:
std::string name_;
public:
std::string name()
{
return name_;
}
};
Surely the getter would be better returning a const std::string&? The current method is returning a copy which isn't as efficient. Would returning a const reference instead cause any problems?
A: Okay, so the differences between returning a copy and returning the reference are:
*
*Performance: Returning the reference may or may not be faster; it depends on how std::string is implemented by your compiler implementation (as others have pointed out). But even if you return the reference the assignment after the function call usually involves a copy, as in std::string name = obj.name();
*Safety: Returning the reference may or may not cause problems (dangling reference). If the users of your function don't know what they are doing, storing the reference as reference and using it after the providing object goes out of scope then there's a problem.
If you want it fast and safe, use boost::shared_ptr. Your object can internally store the string as a shared_ptr and return a shared_ptr. That way, there will be no copying of the object going on, and it's always safe (unless your users pull out the raw pointer with get() and do stuff with it after your object goes out of scope).
A: The only way this can cause a problem is if the caller stores the reference, rather than copy the string, and tries to use it after the object is destroyed. Like this:
foo *pFoo = new foo;
const std::string &myName = pFoo->getName();
delete pFoo;
cout << myName; // error! dangling reference
However, since your existing function returns a copy, you would not break any of the existing code.
Edit: Modern C++ (i.e. C++11 and up) supports Return Value Optimization, so returning things by value is no longer frowned upon. One should still be mindful of returning extremely large objects by value, but in most cases it should be OK.
A: I'd change it to return const std::string&. The caller will probably make a copy of the result anyway if you don't change all the calling code, but it won't introduce any problems.
One potential wrinkle arises if you have multiple threads calling name(). If you return a reference, but then later change the underlying value, then the caller's value will change. But the existing code doesn't look thread-safe anyway.
Take a look at Dima's answer for a related potential-but-unlikely problem.
A: Actually, another issue specifically with returning a string not by reference, is the fact that std::string provides access via pointer to an internal const char* via the c_str() method. This has caused me many hours of debugging headache. For instance, let's say I want to get the name from foo, and pass it to JNI to be used to construct a jstring to pass into Java later on, and that name() is returning a copy and not a reference. I might write something like this:
foo myFoo = getFoo(); // Get the foo from somewhere.
const char* fooCName = myFoo.name().c_str(); // Whoops! myFoo.name() creates a temporary that's destructed as soon as this line executes!
jniEnv->NewStringUTF(fooCName); // No good, fooCName was released when the temporary was deleted.
If your caller is going to be doing this kind of thing, it might be better to use some type of smart pointer, or a const reference, or at the very least have a nasty warning comment header over your foo.name() method. I mention JNI because former Java coders might be particularly vulnerable to this type of method chaining that may seem otherwise harmless.
A: It is conceivable that you could break something if the caller really wanted a copy, because they were about to alter the original and wanted to preserve a copy of it. However it is far more likely that it should, indeed, just be returning a const reference.
The easiest thing to do is try it and then test it to see if it still works, provided that you have some sort of test you can run. If not, I'd focus on writing the test first, before continuing with refactoring.
A: One problem for the const reference return would be if the user coded something like:
const std::string & str = myObject.getSomeString() ;
With a std::string return, the temporary object would remain alive and attached to str until str goes out of scope.
But what happens with a const std::string &? My guess is that we would have a const reference to an object that could die when its parent object deallocates it:
MyObject * myObject = new MyObject("My String") ;
const std::string & str = myObject->getSomeString() ;
delete myObject ;
// Use str... which references a destroyed object.
So my preference goes to the const reference return (because, anyway, I'm just more comfortable with returning a reference than hoping the compiler will optimize away the extra temporary), as long as the following contract is respected: "if you want it beyond my object's existence, then copy it before my object's destruction"
A: Some implementations of std::string share memory with copy-on-write semantics, so return-by-value can be almost as efficient as return-by-reference and you don't have to worry about the lifetime issues (the runtime does it for you).
If you're worried about performance, then benchmark it (<= can't stress that enough)!!! Try both approaches and measure the gain (or lack thereof). If one is better and you really care, then use it. If not, then prefer by-value for the protection it offers against the lifetime issues mentioned by other people.
You know what they say about making assumptions...
A: Odds are pretty good that typical usage of that function won't break if you change to a const reference.
If all of the code calling that function is under your control, just make the change and see if the compiler complains.
A: Does it matter? As soon as you use a modern optimizing compiler, functions that return by value will not involve a copy unless they are semantically required to.
See the C++ lite FAQ on this.
A: Depends what you need to do. Maybe you want to allow the caller to change the returned value without changing the class. If you return a const reference, that won't fly.
Of course, the next argument is that the caller could then make their own copy. But if you know how the function will be used and know that happens anyway, then maybe doing this saves you a step later in code.
A: I normally return const& unless I can't. QBziZ gives an example of where this is the case. Of course QBziZ also claims that std::string has copy-on-write semantics, which is rarely true today since COW involves a lot of overhead in a multi-threaded environment. By returning const& you put the onus on the caller to do the right thing with the string on their end. But since you are dealing with code that is already in use, you probably shouldn't change it unless profiling shows that the copying of this string is causing massive performance problems. Then, if you decide to change it, you will need to test thoroughly to make sure you didn't break anything. Hopefully the other developers you work with don't do sketchy stuff like in Dima's answer.
A: Returning a reference to a member exposes the implementation of the class.
That could prevent you from changing the class later. It may be useful for private or protected methods, in case the optimization is needed.
What should a C++ getter return
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
}
|
Q: Bind ASP.NET DropDownList DataTextField to method? Is there any way to have items in an ASP.NET DropDownList have either their Text or Value bound to a method on the source rather than a property?
A: This is my solution:
<asp:DropDownList ID="dropDownList" runat="server" DataSourceID="dataSource" DataValueField="DataValueField" DataTextField="DataTextField" />
<asp:ObjectDataSource ID="dataSource" runat="server" SelectMethod="SelectForDataSource" TypeName="CategoryDao" />
public IEnumerable<object> SelectForDataSource()
{
return _repository.Search().Select(x => new{
DataValueField = x.CategoryId,
DataTextField = x.ToString() // Here is the trick!
}).Cast<object>();
}
A: Here are 2 examples of binding a dropdown in ASP.NET from a class
Your aspx page
<asp:DropDownList ID="DropDownListJour1" runat="server">
</asp:DropDownList>
<br />
<asp:DropDownList ID="DropDownListJour2" runat="server">
</asp:DropDownList>
Your aspx.cs page
protected void Page_Load(object sender, EventArgs e)
{
//Example where the value is the same as the text (dropdown)
DropDownListJour1.DataSource = jour.ListSameValueText();
DropDownListJour1.DataBind();
//Example where the value is different from the text (dropdown)
DropDownListJour2.DataSource = jour.ListDifferentValueText();
DropDownListJour2.DataValueField = "Key";
DropDownListJour2.DataTextField = "Value";
DropDownListJour2.DataBind();
}
Your jour.cs class
public class jour
{
public static string[] ListSameValueText()
{
string[] myarray = {"a","b","c","d","e"} ;
return myarray;
}
public static Dictionary<int, string> ListDifferentValueText()
{
var joursem2 = new Dictionary<int, string>();
joursem2.Add(1, "Lundi");
joursem2.Add(2, "Mardi");
joursem2.Add(3, "Mercredi");
joursem2.Add(4, "Jeudi");
joursem2.Add(5, "Vendredi");
return joursem2;
}
}
A: The only way to do it is to handle the Databinding event of the DropDownList, call the method and set the values in the DropDownList item yourself.
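Alternatively, you can skip data binding entirely and build the items yourself; a rough sketch (MyItem, GetItems, and GetDisplayText are hypothetical names for your own types and methods):
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Build the items by hand so the Text can come from a method call.
        foreach (MyItem item in GetItems())
        {
            ddl.Items.Add(new ListItem(item.GetDisplayText(), item.Id.ToString()));
        }
    }
}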
A: Sometimes I need to use navigation properties as the DataTextField, like "User.Address.Description", so I decided to create a simple control that derives from DropDownList.
I also implemented an ItemDataBound Event that can help as well.
public class RTIDropDownList : DropDownList
{
public delegate void ItemDataBoundDelegate( ListItem item, object dataRow );
[Description( "ItemDataBound Event" )]
public event ItemDataBoundDelegate ItemDataBound;
protected override void PerformDataBinding( IEnumerable dataSource )
{
if ( dataSource != null )
{
if ( !AppendDataBoundItems )
this.Items.Clear();
IEnumerator e = dataSource.GetEnumerator();
while ( e.MoveNext() )
{
object row = e.Current;
var item = new ListItem( DataBinder.Eval( row, DataTextField, DataTextFormatString ).ToString(), DataBinder.Eval( row, DataValueField ).ToString() );
this.Items.Add( item );
if ( ItemDataBound != null )
ItemDataBound( item, row );
}
}
}
}
A: Declaratively:
<asp:DropDownList ID="ddlType" runat="server" Width="250px" AppendDataBoundItems="true" DataSourceID="dsTypeList" DataTextField="Description" DataValueField="ID">
<asp:ListItem Value="0">All Categories</asp:ListItem>
</asp:DropDownList><br />
<asp:ObjectDataSource ID="dsTypeList" runat="server" DataObjectTypeName="MyType" SelectMethod="GetList" TypeName="MyTypeManager">
</asp:ObjectDataSource>
The above binds to a method that returns a generic list, but you could also bind to a method that returns a DataReader. You could also create your dataSource in code.
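For completeness, a sketch of the kind of type and select method the markup above assumes (the bodies here are illustrative; GetList would normally query your data layer):
public class MyType
{
    public int ID { get; set; }
    public string Description { get; set; }
}

public class MyTypeManager
{
    // Called by the ObjectDataSource; Description and ID match the
    // DataTextField and DataValueField set in the markup.
    public List<MyType> GetList()
    {
        return new List<MyType>
        {
            new MyType { ID = 1, Description = "Books" },
            new MyType { ID = 2, Description = "Music" }
        };
    }
}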
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Server-side DataTable Sorting in RichFaces I have a data table with a variable number of columns and a data scroller. How can I enable server side sorting? I prefer that it be fired by the user clicking the column header.
<rich:datascroller for="instanceList" actionListener="#{pageDataModel.pageChange}"/>
<rich:dataTable id="instanceList" rows="10" value="#{pageDataModel}"
var="fieldValues" rowKeyVar="rowKey">
<rich:columns value="#{pageDataModel.columnNames}" var="column" index="idx">
<f:facet name="header">
<h:outputText value="#{column}"/>
</f:facet>
<h:outputText value="#{classFieldValues[idx]}" />
</rich:columns>
</rich:dataTable>
I already have a method on the bean for executing the sort.
public void sort(int column)
A: I ended up doing it manually. I added a support tag to the header text tag, like so.
<h:outputText value="#{column}">
<a4j:support event="onclick" action="#{pageDataModel.sort(idx)}"
eventsQueue="instancesQueue"
reRender="instanceList,instanceListScroller"/>
</h:outputText>
To get the ascending/descending arrows, I added a css class.
<h:outputText value="#{column}" styleClass="#{pageDataModel.getOrderClass(idx)}" >
<a4j:support event="onclick" action="#{pageDataModel.sort(idx)}"
eventsQueue="instancesQueue"
reRender="instanceList,instanceListScroller"/>
</h:outputText>
A: Your data model needs to implement the "Modifiable" interface. The datatable will call its modify() method to do server-side sorting and filtering.
A: There is a fairly elegant solution to this problem here:
http://livedemo.exadel.com/richfaces-demo/richfaces/sortingFeature.jsf?tab=ex-usage
This demo avoids using the tag.
A: Have a look at the "sortBy" property of "rich:columns", maybe that's what you're looking for.
Richfaces Reference
A: Can't you just use Collections.sort() when you retrieve the List?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What is the difference between bool and Boolean types in C# What is the difference between bool and Boolean types in C#?
A: bool is an alias for the Boolean class. I use the alias when declaring a variable and the class name when calling a method on the class.
A: I don't believe there is one.
bool is just an alias for System.Boolean
A: bool is an alias for System.Boolean just as int is an alias for System.Int32. See a full list of aliases here: Built-In Types Table (C# Reference).
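A quick illustration that the two names resolve to the same type:
bool b1 = true;            // C# keyword
System.Boolean b2 = false; // the underlying .NET type
Console.WriteLine(b1.GetType() == b2.GetType());           // True
Console.WriteLine(typeof(bool) == typeof(System.Boolean)); // True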
A: They are the same; bool is just System.Boolean shortened. Use Boolean when you are working with a VB.NET programmer, since it works in both C# and VB.
A: They are one and the same.
bool is just an alias for Boolean.
A: There is no difference - bool is simply an alias of System.Boolean.
http://msdn.microsoft.com/en-us/library/c8f5xwh7(VS.71).aspx
A: Note that Boolean will only work where you have using System; (which is usually, but not necessarily, included), unless you write it out as System.Boolean. bool does not need using System;.
A: bool is a primitive type, meaning that the value (true/false in this case) is stored directly in the variable. Boolean is an object. A variable of type Boolean stores a reference to a Boolean object. The only real difference is storage. An object will always take up more memory than a primitive type, but in reality, changing all your Boolean values to bool isn't going to have any noticeable impact on memory usage.
I was wrong; that's how it works in Java with boolean and Boolean. In C#, bool and Boolean are both value types. Both of them store their value directly in the variable, neither of them can be null, and both of them require a conversion method (such as Convert.ToInt32) to store their values in another type. It only matters which one you use if you need to call a static function defined within the Boolean class.
A: I realise this is many years later but I stumbled across this page from google with the same question.
There is one minor difference on the MSDN page as of now.
VS2005
Note:
If you require a Boolean variable that can also have a value of null, use bool.
For more information, see Nullable Types (C# Programming Guide).
VS2010
Note:
If you require a Boolean variable that can also have a value of null, use bool?.
For more information, see Nullable Types (C# Programming Guide).
A: They are the same.
C# programmers tend to prefer bool. It's less typing and just feels more natural for someone coming from that language family. It also guarantees you get the actual System.Boolean type (where otherwise it's possible to make your own Boolean type in a different namespace and the type resolution could become ambiguous).
But if you're in a shop where there's a lot of both VB.Net and C# then you may prefer Boolean because it works in both places and helps simplify conversion back and forth between C# and VB.Net.
A: As has been said, they are the same. There are two because bool is a C# keyword and Boolean a .Net class.
A: One is an alias for the other.
A: No actual difference unless you get the type string.
There when you use reflection or GetType() you get
{Name = "Boolean" FullName = "System.Boolean"}
for both.
A: Perhaps bool is a tad "lighter" than Boolean; interestingly, changing this:
namespace DuckbillServerWebAPI.Models
{
public class Expense
{
. . .
public bool CanUseOnItems { get; set; }
}
}
...to this:
namespace DuckbillServerWebAPI.Models
{
public class Expense
{
. . .
public Boolean CanUseOnItems { get; set; }
}
}
...caused my .cs file to sprout a "using System;". Changing the type back to "bool" turned the using clause grey (unused) again.
(Visual Studio 2010, WebAPI project)
A: bool is an alias for Boolean. The alias is resolved by the compiler, so using one over the other has no effect at run-time.
In most other languages, one would be a primitive type and the other would be an object type (a value type and a reference type in C# jargon). C# does not give you the option of choosing between the two: bool and Boolean are exactly the same value type, whether you are calling a static method defined in the Boolean class or declaring a new variable.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "384"
}
|
Q: Clearing the Windows "Run" dialog history without rebooting I am currently working on a program to immediately clear the list of previously-run-commands which appears in the Windows Start -> Run dialog. The procedure for clearing this list by removing the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU key is well documented; however, before these changes take effect, it seems to be necessary to do one of the following:
*
*Restart the computer
*Select Start -> Shut down, and then select Cancel.
Neither of these is ideal for the task I am trying to accomplish: #1 is extremely disruptive to the user, and #2 appears to require additional user interaction.
Does anyone know how to immediately (and programmatically) force a reload of this information without requiring any user interaction, while also minimizing disruption of the user's other activities? I would like for the user's Run history to be cleared out immediately after executing my program, without requiring any further action on their part (such as using the "Shut Down" -> "Cancel" trick in #2 above) or forcing a reboot.
Or, to approach the problem from a different angle: When clicking Start -> Shut Down -> Cancel, Windows Explorer reloads the RunMRU key. Is there a way to force a similar reload without having the user select Shut Down and then Cancel?
Things I have already tried:
*
*Monitoring the explorer.exe status using procmon while selecting Shutdown and then Cancel. I see Explorer writing to the RunMRU key, but have not been able to determine what triggers this.
*Numerous Google searches along the lines of "reload runmru without reboot". Most results still recommend method #1 above, although a few suggest #2.
*Limited MSDN API examination. The RegFlushKey call appears promising, but I haven't ever used it before, so I don't know if it will apply to registry information cached by different processes.
Any suggestions or other information would be greatly appreciated.
A: Have you tried ccleaner?
http://www.ccleaner.com/
A: Not a full answer to your question, but I did find a third way to trigger the clearing of the run command list, from this article in PC Mag.
Killing explorer.exe and then restarting it will also clear the Run list after the registry modification.
A: I have a nasty hack for you: show the window programmatically, hide it immediately (programmatically), and click Cancel on it (well, you guessed it, programmatically).
You might try looking for the icon cache flush API, or other ones; I wouldn't be too surprised if they had side effects like the one you are looking for.
A: I've seen instances where this actually works even when the F5 key doesn't. Try this: Ctrl+Alt+Delete, then go to Task Manager, Processes tab, and end explorer.exe. Then click File -> New Task and type explorer.exe. Then check - does that work?
A: Windows XP
*
*Right click on the taskbar
*Properties menu option
*Start Menu tab
*Customize button
*Programs pane
*Clear List
*Click on OK
This calls a Windows API function that refreshes the explorer.exe taskbar process and also clears the list (no need for registry edits).
A: As far as I know, it relies on the explorer.exe process that hosts the start menu/taskbar/desktop being closed and reopened. There is no "clean" way to do this that I am aware of.
If you really need to do this without user interaction, you need to close all explorer.exe processes and relaunch one.
Here's a rudimentary C# program to do that:
using System.Diagnostics;

class RestartExplorer
{
    static void Main()
    {
        // Kill every running Explorer process, then restart the shell.
        Process[] procs = Process.GetProcessesByName("explorer");
        foreach (Process proc in procs)
        {
            proc.Kill();
        }
        Process.Start("explorer.exe");
    }
}
Note that this will close all "Windows Explorer" windows open, and may or may not open an additional "Windows Explorer" afterwards.
I just tested that on Windows XP 32bit, and it did indeed clear the Run command cache.
A: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU
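For reference, deleting that key programmatically is a one-liner with the Registry API (note that DeleteSubKeyTree throws if the key does not exist):
using Microsoft.Win32;

Registry.CurrentUser.DeleteSubKeyTree(
    @"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU");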
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Release management - best practice I work for a product development company. We first do internal releases, and then a public release. I was wondering how other product development companies manage their releases. How do you assign release numbers? Do you tag source control?
A: I created a similar question, but I wanted to add to this: I strongly recommend using something like Jira to manage a release cycle. You can associate commits with requests/issues/bugs, and then flag those as part of a release.
In particular, if you want to know how to manage a good release cycle, have a look at how the Apache foundation does it, because they have it down to a science. For instance, here's the roadmap for releases in the Mahout project.
Along with a working system that tracks issues and bundles them in a release package, you will want to start integrating this with your continuous integration (I've used both CruiseControl and Hudson) and unit tests so that your build cycle is managed along with everything else.
A: I worked for a custom software provider that eventually morphed into a solutions provider when customers decided that they didn't want to implement their own callcenters and websites.
In that environment, each major customer had an opportunity to customize some aspects of how the system worked. So development had a core product with components common to all contracts, and separate branches for each customer (some customers needed minor tweaks, others major integration with other systems).
It worked ok, until the business grew and the number of branches expanded, often to accommodate really lame changes. At one point I think they had something like 15 different active versions of the same codebase... which made things really inflexible and difficult to support.
Don't do what we did -- make your releases scale!
A: As others said, the best way of managing releases is by branching.
I highly recommend taking a look at the TFS Branching Guide (http://tfsbranchingguideii.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=20785), which explains several approaches to creating the branch structure depending on the size of your projects and the different ways of providing your software to end-users (major releases, service packs, hotfixes). Most of this is not specific to TFS, so you can apply it to most other source control systems.
A: We use SubVersion, where tags and branches are cheap to create.
As far as releases go, we follow this convention:
(Major Release).(Minor Release).(Patch Release).(SVN revision)
*
*Patch Release = bug fixes
*Minor Release = binary compatible / interface compatible
*Major Release = includes breaking changes.
Does that make sense? If you need more information, add a comment and I'll edit my post to clarify.
A: At my company, when a release is ready, we create a branch for the major/minor release numbers, called something like R_2_1. The initial release is done by making a snapshot branch or label immediately afterwards, called R_2_1_0. When QA files bugs against a release, code changes are made on the R_X_Y branch, and then an R_2_1_1 branch is created to mark that release. So the tree looks like this:
Mainline
|
|- R_2_1
| |
| |-R_2_1_0 (locked)
| |
| |
| |-R_2_1_1 (locked)
| |
. .
. .
. .
A: We use SVN and create two branches for each release. One is a tag of the source code used to build this release, and one is a new import of the actual released binaries. This is important because (no matter how much you try to make two developers' machines the same, or try to maintain a stable build machine) inevitably when you come to try to regenerate build X 6 months down the line, you will find that something has changed and the binary that results is subtly different.
Minor patches are made in branches copied from the release source branch, and merged into the trunk. A minor release can then easily be made by copying the release source branch to a new branch and merging in whichever patches are required.
Major work is carried out in branches copied from the trunk, and merged back into the trunk when complete. Major releases can then be made from the trunk.
A: An answer based on ITIL framework (that's more or less equal to the other ones).
ITIL classifies releases into 3 groups: major software releases, minor software releases and emergency software fixes.
From ITIL books:
•Major software Releases and hardware upgrades, normally containing large areas of new functionality, some of which may make intervening fixes to Problems redundant. A major upgrade or Release usually supersedes all preceding minor upgrades, Releases and emergency fixes.
•Minor software Releases and hardware upgrades, normally containing small enhancements and fixes, some of which may have already been issued as emergency fixes. A minor upgrade or Release usually supersedes all preceding emergency fixes.
•Emergency software and hardware fixes, normally containing the corrections to a small number of known Problems
So, following this you should have:
Major: v1, v2, v3, etc.
Minor: v1.1, v2.1, etc.
Emergency: v1.1.1, v2.1.1, etc.
A: Follow-up to co-cat's answer regarding TFS. There is a new URL with some updates for VS2010 and VS11
http://vsarbranchingguide.codeplex.com/releases
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Datagrid usage sourced from SQL server I would like to know the best way to use a datagrid control linked to a combination (join) of data tables in a way that allows display, creation of new rows in the underlying tables, and deletion.
The datagrid appears to offer the latter capabilities, but I have not found a way that I am happy with to do more than just display on one grid while offering separate edit, create, and delete facilities.
Suppose for the sake of illustration that the database contains:-
Customer Table
* CustomerID
* CustomerName
Order Table
* CustomerID
* OrderLineItem
* OrderLineQuanity
And that I want to lose the CustomerID for display purposes but would like to be able to create new customers and delete existing ones, perhaps with a confirmatory dialog.
A: CSharpAtl is correct, use a Master-Detail control. An example of using one in a WinForm app is at http://msdn.microsoft.com/en-us/library/y8c0cxey.aspx.
WinForm DataGrids support add, edit, and delete of both Master and Detail records. As for your question about what happens if you change a Detail record so it matches a new Master: that is not possible. By design a Detail row only contains records that match the Master; you cannot (for example) change an order to belong to a new customer, because the Detail row does not contain any customer information.
If you want to move a Detail row to another Master, you have to create a new Detail row for the new Master, copy the data from the old Detail row, and delete the old Detail row. If you're ambitious you could support Cut and Paste or Drag and Drop of Detail rows, but internally you have to Create/Copy/Delete.
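A rough ADO.NET sketch of that create/copy/delete sequence (table and column names are illustrative):
// Copy the detail row, re-point it at the new master, then drop the original.
DataRow oldDetail = ordersTable.Rows[0];
DataRow newDetail = ordersTable.NewRow();
newDetail.ItemArray = oldDetail.ItemArray; // copy every field
newDetail["CustomerID"] = newCustomerId;   // attach to the new master
ordersTable.Rows.Add(newDetail);
oldDetail.Delete();                        // mark the old detail row for deletion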
A: If the relationship is 1 to many you can go the route of using Master-Detail: http://msdn.microsoft.com/en-us/library/aa479344.aspx
A: If I understand your question correctly, you have a query that performs a join of several tables which you display on a single grid. You’d like the user to be able to manipulate that grid and have the underlying tables reflect the changes.
One approach to solving this problem is to implement stored procedures to perform the CRUD operations. The stored procedures will contain the logic to insert, update and delete records from all of the required tables. Each procedure will need to have a parameter for each bound field on the grid. Set the procedures to be the insert, update and delete commands on your data source.
So imagine you are adding a new record to the grid. The grid calls the insert command and passes the parameters to the stored procedure. Then, within the stored procedure, you'll create the logic to determine whether the new line in the grid requires a new customer or belongs to an existing one, and adjust the operation accordingly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why pool Stateless session beans? Stateless beans in Java do not keep their state between two calls from the client. So in a nutshell we might consider them as objects with business methods. Each method takes parameters and return results. When the method is invoked some local variables are being created in execution stack. When the method returns the locals are removed from the stack and if some temporary objects were allocated they are garbage collected anyway.
From my perspective that doesn’t differ from calling a method of the same single instance from separate threads. So why can't a container use one instance of a bean instead of pooling a number of them?
A: Pooling does several things.
One, by having one bean instance per request, you're guaranteed to be thread safe (Servlets, for example, are not thread safe).
Two, you reduce any potential startup time that a bean might have. While Session Beans are "stateless", they only need to be stateless with regards to the client. For example, in EJB, you can inject several server resources in to a Session Bean. That state is private to the bean, but there's no reason you can't keep it from invocation to invocation. So, by pooling beans you reduce these lookups to only happening when the bean is created.
Three, you can use the bean pool as a means to throttle traffic. If you only have 10 beans in a pool, you're only going to get at most 10 requests working simultaneously; the rest will be queued up.
A: Pooling enhances performance.
A single instance handling all requests/threads would lead to a lot of contention and blocking.
Since you don't know which instance will be used (and several threads could use a single instance concurrently), the beans must be threadsafe.
The container can manage pool size based on actual activity.
A: The transactionality of the Java EE model uses the thread context to manage the transaction lifecycle.
This simplification exists so that it is not necessary to implement any specific interface to interact with the UserTransaction object directly; when the transaction is retrieved from the InitialContext (or injected into the session bean) it is bound to a thread-local variable for reuse (for example if a method in your stateless session bean calls another stateless session bean that also uses an injected transaction.)
A: The life cycle of a stateless session bean consists of the Does Not Exist, Passive, and Method-Ready (active) states. To optimize performance, instead of taking a bean all the way from creation to the method-ready state for every request, the container moves beans between the active and passive states through the container callbacks ejbActivate() and ejbPassivate(), thereby managing the bean pool.
A: Methods are by nature THREAD SAFE (including static methods). Why? Simply because every local variable inside a method is created on the stack, i.e. per call, and is not shared between threads. Parameters, however, may refer to objects that are not on the stack.
However, a method is unsafe if it touches an unsafe variable:
a) it reads or writes a static field or variable;
b) it uses a resource that is shared, such as the EntityManager;
c) it is passed a parameter that is not thread safe.
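As a rough illustration of that distinction (the class and field names here are made up):
public class Counter {
    private int shared = 0; // unsafe: one field visible to all threads

    public int unsafeIncrement() {
        return ++shared; // read-modify-write race when called concurrently
    }

    public int safeSum(int a, int b) {
        int result = a + b; // local variable on this thread's stack: per-call, not shared
        return result;
    }
}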
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Automatically stop Visual C++ 2008 build at first compile error? I know I can compile individual source files, but sometimes -- say, when editing a header file used by many .cpp files -- multiple source files need to be recompiled. That's what Build is for.
Default behavior of the "Build" command in VC9 (Visual C++ 2008) is to attempt to compile all files that need it. Sometimes this just results in many failed compiles. I usually just watch for errors and hit ctrl-break to stop the build manually.
Is there a way to configure it such the build stops at the very first compile error (not the first failed project build) automatically?
A: Yeah, this works fine on MSVC 2005-2010:
Public Module EnvironmentEvents
    Private Sub OutputWindowEvents_OnPaneUpdated(ByVal pPane As OutputWindowPane) Handles OutputWindowEvents.PaneUpdated
        If Not (pPane.Name = "Build") Then Exit Sub
        Dim foundError As Boolean = pPane.TextDocument.StartPoint.CreateEditPoint().FindPattern(": error")
        Dim foundFatal As Boolean = pPane.TextDocument.StartPoint.CreateEditPoint().FindPattern(": fatal error")
        If foundError Or foundFatal Then
            DTE.ExecuteCommand("Build.Cancel")
        End If
    End Sub
End Module
A: I know the question was for VS 2008, but I stumbled across it when searching for the same answer for VS 2012. Since macros are no longer supported in 2012, macro solutions won't work anymore.
You can download an extension that apparently works in VS 2010 and 2012 here. I can confirm that it works well in VS 2012.
The original link to the extension was given in this response.
A: I came up with a better macro. It stops immediately after the first error (as soon as the build window is updated).
Visual Studio -> Tools -> Macros -> Macro IDE... (or ALT+F11)
Private Sub OutputWindowEvents_OnPaneUpdated(ByVal pPane As OutputWindowPane) Handles OutputWindowEvents.PaneUpdated
    If Not (pPane.Name = "Build") Then Exit Sub
    pPane.TextDocument.Selection.SelectAll()
    Dim Context As String = pPane.TextDocument.Selection.Text
    pPane.TextDocument.Selection.EndOfDocument()
    Dim found As Integer = Context.IndexOf(": error ")
    If found > 0 Then
        DTE.ExecuteCommand("Build.Cancel")
    End If
End Sub
Hope it works out for you guys.
A: This can be done by adding a macro that is run in response to the event OnBuildProjConfigDone.
The macro is as follows:
Private Sub BuildEvents_OnBuildProjConfigDone(ByVal Project As String, ByVal ProjectConfig As String, ByVal Platform As String, ByVal SolutionConfig As String, ByVal Success As Boolean) Handles BuildEvents.OnBuildProjConfigDone
    If Success = False Then
        DTE.ExecuteCommand("Build.Cancel")
    End If
End Sub
A: There is this post - not sure if it stops the build at the first error or the first failed project in a solution.
Ctrl-break will also stop it manually.
Now if there were some way to stop it spending 10 minutes rebuilding IntelliSense after a build failed!
A: You can also download this extension, seems to work for every version of Visual Studio
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: What parts of your application do you prefer to be externalized as configuration, and why? What parts of your application are not coded?
I think one of the most obvious examples would be DB credentials - it's considered bad to have them hard coded. And in most situations it is easy to decide whether you want something externalized or coded. For me the rules are simple. Some part of the application should be externalized if:
*
*it can and should be changed by a non-developer, but not so often as to belong in application settings defined in the UI (DB credentials, service URLs, etc.)
*it does not require a programming language and seems unnatural when coded (localization)
Do you have anything to add?
This is a little related to this question about spring cfg.
Spring configuration seems a less obvious example to me, because in my practice it is never modified by anyone except the developer. And the road of externalizing can take you far, to the point of the entire project being "configured", not coded. So where to stop?
So please post here some examples from your experience, when you got benefit from having something configured, not coded - like dependency injection configuration in spring, etc.
And if you use Spring - how often is the configuration changed without recompiling?
A: Anything that needs to differ between different deployments of your application. That is, anything specific to the environment.
Examples include:
*
*Database connection strings
*URLs for web or WCF services
*Logging configuration
A: Any information your application uses that is "data" and that could change depending on where it is installed. Things like:
*
*smtp mail server used to send e-mails
*Database connect strings
*Paths to file locations / folders used by the app
*FTP servers & connect info
*Active Directory servers used for authentication
*Any links displayed in the application to external information sources
*Warning limit values
*I've even put the RegEx filters used to limit the allowable characters for data entry fields.
A: Besides the obvious changing stuff (paths, servers, ports, and so on), some people argue that you should be able to easily change whatever might reasonably change; for instance, say you have a generic engine which operates on the business logic (a rule engine).
You would then define the rules in a "configuration file", which ends up being no less than programming in a DSL instead of in a general-purpose language. The benefits are that it's closer to the domain, so it's easier and more maintainable, and that you can easily change things that would otherwise demand a new build.
The main argument behind this is that things you assumed would never change always end up changing nonetheless, so you had better be prepared.
A: paths and server names/addresses come to mind..
A: I agree with your two conditions, which is why I:
*
*Rarely include a config file as part of a Windows or Windows Mobile application (web apps yes).
*If I did include a config file meant to be tweaked by end users, it certainly wouldn't be XML.
A: Employee emails/names, since employees can come and go... (you should typically try to keep them out of an application, though)
A: Configuration files should include:
*
*deployment details
*
*DB credentials
*file paths
*host names
*anything that is used in many places but that may change
*
*contact email addresses
*options that aren't in the GUI
The last one is a bit open-ended, but very important. I've found it very useful to foresee variables that the client may, in the future, want to change. If changes are infrequent, I or they can edit the config file. If it becomes a frequent thing, it's trivial to add the option to the GUI, which isn't hardcoded.
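As a minimal sketch of the idea, assuming a plain Java app and a hypothetical app.properties file sitting next to it (java.util.Properties does the parsing):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class AppConfig {
    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        FileInputStream in = new FileInputStream("app.properties");
        try {
            config.load(in); // deployment-specific values live in the file, not in the code
        } finally {
            in.close();
        }
        // Fall back to a sane default when an option isn't configured yet.
        String smtpHost = config.getProperty("mail.smtp.host", "localhost");
        String dbUrl = config.getProperty("db.url");
        System.out.println("SMTP: " + smtpHost + ", DB: " + dbUrl);
    }
}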
A: I would also add encryption keys (which themselves should be encrypted)...
Basically, the rule of thumb is: information the application needs BEFORE its regular, functional operation, data that it MUST have on hand (i.e. local and not networked).
Note that this data should not be dynamically changing or large amounts of it, otherwise it should be in the database.
A: With Spring apps I actually distinguish between two types of configuration:
*
*Items externalized into property files which are "deploy time" concerns or "environment-specific": server IP's / addresses, file system locations, etc etc
*Spring XML configuration which can do lots of things, like indicate the overall application structure, apply behavior via AOP, etc.
A: I use Spring to wire all the beans in a J2SE application that has no GUI (a transactional switch). That way it's very easy for me to have different configurations in each deployment (we have this thing running in different countries), without having to code anything different.
Another thing I like to do is manage all the SQL statements separately from the code when I use plain JDBC (or Spring JDBC): in a properties file or XML or something, sometimes even as String properties in the beans that will use the statement (when there is only one bean that will use it, such as a DAO).
A: We are going to use Spring JDBC or vanilla JDBC for data persistence. Here we have decided to externalize all the SQL from the Java code, so it can be better managed in terms of SQL query tuning and optimization, and we don't need to disturb the Java code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Trying to get a Windows Service to run an executable on a shared drive I have C# code that will run in a Windows service. I'm trying to use the Process and ProcessStartInfo classes to run an executable. If the executable is on the local drive, no problem. However, I need to run an executable on a shared drive. I've tried using the UNC notation (//machine_name/share_name/directory/runme.exe), but the process seems to hang. The service and shared drive are on Windows XP. Has anyone tackled this issue before?
A: The account your service is running as likely does not have permission to access the shared drive. Try configuring it to run as a user with permission to the network via the services applet. Right click on the service, choose properties and set the account in the login tab.
A: What account is the service running as?
LocalSystem will only allow access to the local file system. If you want to access a network resource, you will have to run the service as a domain or network user.
A: Have a look at this: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=471168&SiteID=1
This should help.
A: If the app on the shared drive is a .Net app, make sure it has sufficient trust.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Javascript: Trigger action on function exit Is there a way to listen for a javascript function to exit? A trigger that could be setup when a function has completed?
I am attempting to use a user interface obfuscation technique (BlockUI) while an AJAX object is retrieving data from the DB, but the function doesn't necessarily execute last, even if you put it at the end of the function call.
Example:
function doStuff() {
blockUI();
ajaxCall();
unblockUI();
};
Is there a way for doStuff to listen for ajaxCall to complete, before firing the unBlockUI? As it is, it processes the function linearly, calling each object in order, then a separate thread is spawned to complete each one. So, though my AJAX call might take 10-15 seconds to complete, I am only blocking the user for just a split-second, due to the linear execution of the function.
There are less elegant ways around this...putting a loop to end only when a return value set by the AJAX function is set to true, or something of that nature. But that seems unnecessarily complicated and inefficient.
A: However you're accomplishing your Ajax routines, what you need is a "callback" function that will run once it's complete:
function ajaxCall(callback){
//do ajax stuff...
callback();
}
Then:
function doStuff(){
blockUI();
ajaxCall(unblockUI);
}
A: Your AJAX call should specify a callback function. You can call the unblockUI from within the callback.
SAJAX is a simple AJAX library that has more help on how to do AJAX calls.
There's also another post that describes what you're looking for.
A: You can do a synchronous XHR. This would cause the entire UI to block for the duration of the call (no matter how long it might take).
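For illustration, a minimal sketch (the /data URL is hypothetical); passing false as the third argument to open() makes the request synchronous:
var xhr = new XMLHttpRequest();
xhr.open("GET", "/data", false); // false = synchronous: blocks until the response arrives
xhr.send(null);
if (xhr.status === 200) {
    alert(xhr.responseText); // runs only after the call has completed
}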
A: You need to redesign your program flow to be compatible with asynchronous flow, like specifying a callback function to be called after the response is processed. Check out how Prototype or jQuery accomplishes this.
A: The answer is simple: you have to call unblockUI() when your Ajax request returns its result. Using jQuery you can do it like this:
function doStuff(){
blockUI();
jQuery.ajax({
url: "example.com",
type: "POST", //you can use GET or POST
success: function(){
unblockUI();
}
});
}
A: It sounds to me that you want the user to wait while info is being fetched from the db. What I do when I make an Ajax call for some info from the database is to display an animated gif that says "getting it..." - it flashes continually until the info is retrieved and displayed in the webpage. When the info is displayed, the animated gif is turned off/hidden and the focus is moved to the new info being displayed. The animated gif lets the user know that something is happening.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Bug Fixing Time Allocation We've been asked by a client to give us a time estimate on each and every bug we have.
Though we do have a set schedule for bug fixing and have allocated time for it, we don't have a time allocation on each of the bugs we have. Simply, we have prioritized our bugs and have ensured that Highest priority bugs will be fixed in the time allotted.
I'm not a fan of allocating time to bugs, simply because:
*
*It usually is inaccurate. It's very difficult to figure out how long it would take to fix.
*Waste of time.
*Affects code quality
*Creates more bugs in the long run (We may miss certain things in our attempt to complete it by the deadline).
How should we tackle this issue where we don't want to provide the number of hours per bug, but just a time frame as to what bugs will be fixed?
How do you allocate time to your bugs? Is it effective? Worth the time and effort?
A: The only answer I can give is to be extremely conservative. Guess how long it will take, and multiply your guess by four. Use that as your estimate. As you said, it's very difficult to figure out how long things will take to fix, and it's better to say it will take longer than it actually does than to be caught "breaking your deadline" because you weren't conservative enough.
A: The company I work for often gets unreasonable requests from our customers. The key thing to remember is that customers want to be well informed. We've found the best way to do this is in terms of status reports.
So, we first do a pretty good job of explaining our position. In your example, this would be something like this:
We have a set schedule for fixing the bugs in our project, and historically we have a good track record of staying on schedule. However, the process of detailing how long each bug will take to fix is quite error-prone. We'd be happy to provide you with weekly updates (or twice-weekly, or daily, depending on the customer) on the bugs that have been fixed and the fixes that have been tested.
However, I do believe that it is good to try to estimate how long each bug will take to fix. The reason is that you need to understand what the total time to fix all the bugs will be. You won't be able to get an accurate total if you don't have an estimate for how long the individual parts will take. These can be rough estimates, of course (spend no longer than an hour researching each problem) -- you don't want to waste too much time estimating. Then I typically factor in an extra 20%. So say the estimates for the bugs are 3 days, 5 days, and 2 days; then I'd report to the customer that we should be able to fix the bugs in 12 days. And of course you may need to add more time for testing and repackaging your product before you can give them a deliverable.
A: Don't think of this in terms of estimating how long bugs take to fix, because you can't possibly estimate that correctly.
Think of this in terms of managing client rage. If you tell them the bugs will take no time at all to fix and they end up taking 3 months, your client will be happy with you now and furious with you in the future.
If you tell them the bugs will take 3 months to fix and they actually take 3 months to fix (which they will), your client will be furious now and happy with you in the future.
I usually say bugs will take no time at all (2-3 days seems to be a good pacifying number).
A: It should be the same as estimating any other task you have. Split it up into the smallest tasks possible and estimate those as accurately as you can with padding for the unexpected. Then give them a range so you're not pinned down to a specific date on tasks that are not well-defined. There is no difference between estimating time to fix a bug and estimating time to implement a feature with nebulous requirements.
A: You're right, estimates are usually inaccurate.
Maybe you want to ask them how much each bug costs them if it goes unfixed. Then you can perform the appropriate computation for figuring out if they should ever be fixed, and how much time you (or realistically, they) can afford to devote to each bug.
A: Why not just pick several bands for bug severity, e.g. 1 hour, 1/2 day, 1 day, 1 week, and estimate against those? Generally you will have a feeling for a bug; for the ones where you have no idea, put down the worst-case figure!
I wouldn't think you'd want to estimate at any finer level than that, for the reasons you've quoted (taking too long to investigate, etc.)
I don't think it is a waste of time. Your customer wants to know more than the number of bugs and their priority -- they want a feeling for how much work remains.
Under no circumstances should this result in you generating more bugs. You shouldn't be hurrying against the clock to fix these. If you estimated 1 day and it took 10 hours, that's OK. If you estimated 1 week and it took 2 hours, good result!
This is simply an exercise in estimation!
A: Usually we will agree on which bugs have to be fixed for a particular release, and then define a time frame for fixing all of the bugs. For each individual bug there is a lot of uncertainty/variability in how long it could take to fix, but that tends to average out over a larger number of bugs. For certain bugs that you know will take longer, it may be possible to give individual estimates, e.g. if you need to write a simulator or a test framework for one.
A: If these are bugs that have been found and reported, then you should be able to develop an estimate on the time to fix it (and the time to retest). The confidence of the estimate will likely be proportional to the time you spend on the estimate, perhaps explain this cost to the client.
If there are a number of related small bug reports perhaps you could collapse them into one omnibus report. This might avoid the client trying to pick and choose which bugs to fix based purely on individual estimates.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: On a WPF ComboBox, is it possible to set a different Foreground color for the textbox and the popup? Basically my problem stems from a desire to have the textbox portion be white, and the drop down to be black. When I set the text to white, the drop down appears as I want it, but the text in the textbox itself is hardly readable. Setting the Foreground to black makes the drop down unreadable.
Is there a good way to handle this? I am still learning WPF.
A: Edit the ControlTemplate; you will see a TextBlock and a Popup which again contains a set of controls. Set a different Foreground/Background for each of those parts.
A: Your best bet is to edit a copy of the template of the ComboBox and set the two of them independently.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do I find the MIME type of a file with PHP? I have an index.php file which has to process many different file types. How do I guess the filetype based on the REQUEST_URI?
If I request http://site/image.jpg, and all requests redirect through index.php, which looks like this
<?php
include('/www/site'.$_SERVER['REQUEST_URI']);
?>
How would I make that work correctly?
Should I test based on the extension of the file requested, or is there a way to get the filetype?
A: If you are working with images only and you need a MIME type (e.g., for headers), then this is the fastest and most direct answer:
$file = 'path/to/image.jpg';
$image_mime = image_type_to_mime_type(exif_imagetype($file));
It will output true image MIME type even if you rename your image file.
A: If you are sure you're only ever working with images, you can check out the exif_imagetype() PHP function, which attempts to return the image MIME type.
If you don't mind external dependencies, you can also check out the excellent getID3 library which can determine the MIME type of many different file types.
Lastly, you can check out the mime_content_type() function - but it has been deprecated for the Fileinfo PECL extension.
A: mime_content_type() is deprecated, so you won't be able to count on it working in the future. There is a "fileinfo" PECL extension, but I haven't heard good things about it.
If you are running on a Unix-like server, you can do the following, which has worked fine for me:
$file = escapeshellarg($filename);
$mime = shell_exec("file -bi " . $file);
$filename should probably include the absolute path.
A: function get_mime($file) {
    if (function_exists("finfo_file")) {
        $finfo = finfo_open(FILEINFO_MIME_TYPE); // Return MIME type a la the 'mimetype' extension
        $mime = finfo_file($finfo, $file);
        finfo_close($finfo);
        return $mime;
    } else if (function_exists("mime_content_type")) {
        return mime_content_type($file);
    } else if (!stristr(ini_get("disable_functions"), "shell_exec")) {
        // http://stackoverflow.com/a/134930/1593459
        $file = escapeshellarg($file);
        $mime = shell_exec("file -bi " . $file);
        return $mime;
    } else {
        return false;
    }
}
For me, none of this works: mime_content_type is deprecated, finfo is not installed, and shell_exec is not allowed.
A: I actually got fed up with the lack of standard MIME sniffing methods in PHP. Install fileinfo... Use deprecated functions... Oh, these work, but only for images! I got fed up with it, so I did some research and found the WHATWG MIME sniffing specification - I believe this is still a draft specification though.
Anyway, using this specification, I was able to implement a MIME sniffer in PHP. Performance is not an issue. In fact, on my humble machine, I was able to open and sniff thousands of files before PHP timed out.
Here is the MimeReader class.
require_once("MimeReader.php");
$mime = new MimeReader(<YOUR FILE PATH>);
$mime_type_string = $mime->getType(); // "image/jpeg", etc.
A: According to the PHP manual, the finfo_file() function is the best way to do this. However, you will need to install the FileInfo PECL extension.
If the extension is not an option, you can use the outdated mime_content_type function.
A: mime_content_type() appears to be the way to go, notwithstanding the previous comments saying it is deprecated. It is not -- or at least this incarnation of mime_content_type() is not deprecated, according to http://php.net/manual/en/function.mime-content-type.php. It is part of the FileInfo extension, but the PHP documentation now tells us it is enabled by default as of PHP 5.3.0.
A: You can use finfo to accomplish this as of PHP 5.3:
<?php
$info = new finfo(FILEINFO_MIME_TYPE);
echo $info->file('myImage.jpg');
// prints "image/jpeg"
The FILEINFO_MIME_TYPE flag is optional; without it you get a more verbose string for some files; (apparently some image types will return size and colour depth information). Using the FILEINFO_MIME flag returns the mime-type and encoding if available (e.g. image/png; charset=binary or text/x-php; charset=us-ascii). See this site for more info.
A: I haven't used it, but there's a PECL extension for getting a file's MIME type. The official documentation for it is in the manual.
Depending on your purpose, a file extension can be ok, but it's not incredibly reliable since it's so easily changed.
A: If you're only dealing with images, you can use the getimagesize() function, which returns all sorts of information about the image, including the type (a short sketch follows at the end of this answer).
A more general approach would be to use the FileInfo extension from PECL.
Some people have serious complaints about that extension... so if you run into serious issues or cannot install the extension for some reason you might want to check out the deprecated function mime_content_type().
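For illustration, a short sketch of the getimagesize() route (the file path is hypothetical); the returned array carries the type under its 'mime' key:
<?php
$info = getimagesize('path/to/image.png');
if ($info !== false) {
    echo $info['mime']; // e.g. "image/png"
}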
A: If you run Linux and have the file's extension, you could simply read the MIME types from /etc/mime.types into a hash array. You can then keep that in memory and simply look the MIME type up by array key :)
/**
 * Helper function to extract all mime types from the default Linux /etc/mime.types
 */
function get_mime_types() {
    $mime_types = array();
    if (
        file_exists('/etc/mime.types') &&
        ($fh = fopen('/etc/mime.types', 'r')) !== false
    ) {
        while (($line = fgets($fh)) !== false) {
            if (!trim($line) || substr($line, 0, 1) === '#') continue;
            $mime_type = preg_split('/\t+/', rtrim($line));
            if (
                is_array($mime_type) &&
                isset($mime_type[0]) && $mime_type[0] &&
                isset($mime_type[1]) && $mime_type[1]
            ) {
                foreach (explode(' ', $mime_type[1]) as $ext) {
                    $mime_types[$ext] = $mime_type[0];
                }
            }
        }
        fclose($fh);
    }
    return $mime_types;
}
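A possible usage sketch (the file name is hypothetical); pathinfo() pulls out the extension to use as the lookup key:
$mime_types = get_mime_types();
$ext = strtolower(pathinfo('photo.jpeg', PATHINFO_EXTENSION));
echo isset($mime_types[$ext]) ? $mime_types[$ext] : 'application/octet-stream';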
A: The MIME type of any file on your server can be obtained with this:
<?php
function get_mime($file_path){
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    return $finfo->file($file_path);
}
$mime = get_mime('path/to/file.ext');
A: I got very good results using a user function from
http://php.net/manual/de/function.mime-content-type.php
(comment by ''john dot howard at prismmg dot com'', 26-Oct-2009 03:43):
function get_mime_type($filename, $mimePath = '../etc') { ...
which doesn't use finfo, exec, or a deprecated function.
It also works well with remote resources!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
}
|
Q: Beginner wondering if his code is 'Pythonic' This is really the first thing that I have written in python. I come from Java background. I don't want to just learn how to program java code with Python syntax. I want to learn how to program in a pythonic paradigm.
Could you guys please comment on how I can make the following code more pythonic?
from math import sqrt
# recursively computes the factors of a number
def factors(num):
    factorList = []
    numroot = int(sqrt(num)) + 1
    numleft = num
    # brute force divide the number until you find a factor
    for i in range(2, numroot):
        if num % i == 0:
            # if we found a factor, add it to the list and compute the remainder
            factorList.append(i)
            numleft = num / i
            break
    # if we didn't find a factor, get out of here!
    if numleft == num:
        factorList.append(num)
        return factorList
    # now recursively find the rest of the factors
    restFactors = factors(numleft)
    factorList.extend(restFactors)
    return factorList
# grabs all of the twos in the list and puts them into 2 ^ x form
def transformFactorList(factorList):
    num2s = 0
    # remove all twos, counting them as we go
    while 2 in factorList:
        factorList.remove(2)
        num2s += 1
    # simply return the list with the 2's back in the right spot
    if num2s == 0: return factorList
    if num2s == 1:
        factorList.insert(0, 2)
        return factorList
    factorList.insert(0, '2 ^ ' + str(num2s))
    return factorList
print transformFactorList(factors(#some number))
A: A few comments:
*
*I would replace range() with xrange(); when you call range(), it allocates the entire range all at once, whereas when you iterate over xrange(), it returns each result one at a time, saving memory.
*Don't put expressions after conditionals on the same line (if num2s == 0: return factorList). It makes it harder to see at a glance what it's doing (that it's a block).
*Don't be afraid to use modules. The sympy module already has code to compute factors, which may simplify your code by eliminating most of it.
*Python's string formatting is simple and effective.
For example:
factorList.insert(0, '2 ^ ' + str(num2s))
could be changed to
factorList.insert(0, '2 ^ %s' % num2s)
All in all, I don't find your code to be extensively un-pythonic. Just make sure you want to use floor division, because that's what tends to happen by default with integer values. Otherwise, you'll need to fix up the division operator:
from __future__ import division
A sometimes-frustrating caveat of the language.
A: from itertools import takewhile
def transform_factor_list(factor_list):
    num_2s = len(list(takewhile(lambda e: e == 2, factor_list)))
    if num_2s > 1:
        factor_list[:num_2s] = ["2 ^ %i" % (num_2s, )]
    return factor_list
That's what I would make out of the second function.
Most pythonic changes:
*
*PEP-8 compatible naming
*slicing (and assigning to slices)
*iterators
*string formatting
The function assumes that the input is ordered, which is fulfilled by factors.
Edit: removed special cases for some lists, more compact this way
A: Based on chris's answer, slightly simplified:
*
*for instead of external while
*internal while to preserve ability to use same divisor repeatedly
*use itertools.groupby simplifies compress() by much
*fix a small bug in tostring()
HTH:
import itertools
def factorize(n):
    # ideally an iterator of prime numbers
    # this'll work though
    divisors = itertools.count(2)
    for divisor in divisors:
        # This condition is very clever!
        # Note that `n` is decreasing, while `divisor` is increasing.
        # And we know that `n` is not divisible by anything smaller,
        # so this stops as soon as the remaining `n` is obviously prime.
        if divisor**2 > n:
            yield n
            break
        while n % divisor == 0:
            yield divisor
            n //= divisor

def compress(factors):
    for (factor, copies) in itertools.groupby(factors):
        # The second object yielded by groupby is a generator of equal factors.
        # Using list() to count its length.
        power = len(list(copies))
        yield (factor, power)

def tostring(compressed):
    return ' * '.join("%d**%d" % (factor, power) for (factor, power) in compressed)

# test
assert tostring(compress(factorize(12))) == '2**2 * 3**1'
A: Don't be afraid of list comprehensions. Switching from Java to Python and discovering them was a good day.
For the factors function, maybe something like this:
def factors(num):
    return [i for i in xrange(1, num+1) if num % i == 0]
Probably not the best code but it's short and easy to understand.
Good luck with Python, it's a great language.
A: this is how I'd do this...
import itertools
import collections
def factorize(n):
    # ideally an iterator of prime numbers
    # this'll work though
    divisors = itertools.count(2)
    divisor = divisors.next()
    while True:
        if divisor**2 > n:
            yield n
            break
        a, b = divmod(n, divisor)
        if b == 0:
            yield divisor
            n = a
        else:
            divisor = divisors.next()

def compress(factors):
    summands = collections.defaultdict(lambda: 0)
    for factor in factors:
        summands[factor] += 1
    return [(base, summands[base]) for base in sorted(summands)]

def tostring(compressed):
    return ' * '.join("%d**%d" % factor for factor in compressed)
A: There is an excellent primer by David Goodger called "Code Like a Pythonista" here. A couple of things from that text re naming (quoting):
*
*joined_lower for functions, methods, attributes
*joined_lower or ALL_CAPS for constants
*StudlyCaps for classes
*camelCase only to conform to pre-existing conventions
A: Just use 'import math' and 'math.sqrt()' instead of 'from math import sqrt' and 'sqrt()'; you don't win anything by just importing 'sqrt', and code quickly gets unwieldy with too many from-imports. Also, things like reload() and mocking out for tests break a lot faster when you use from-import a lot.
The divmod() function is a convenient way to perform both division and modulo. You can use for/else instead of the separate check on numleft. Your factors function is a natural candidate for a generator. xrange() was already mentioned in another answer. Here's it all done that way:
import math
# recursively computes the factors of a number as a generator
def factors(num):
    numroot = int(math.sqrt(num)) + 1
    # brute force divide the number until you find a factor
    for i in xrange(2, numroot):
        divider, remainder = divmod(num, i)
        if not remainder:
            # if we found a factor, add it to the list and compute the
            # remainder
            yield i
            break
    else:
        # if we didn't find a factor, get out of here!
        yield num
        return
    # now recursively find the rest of the factors
    for factor in factors(divider):
        yield factor
Using a generator does mean you can only iterate over the result once; if you simply want a list (like you do in transformFactorList) you will have to wrap the call to factors() in list().
A: Here's what jumps out at me:
def transformFactorList(factorList):
    oldsize = len(factorList)
    factorList = [f for f in factorList if f != 2]
    num2s = oldsize - len(factorList)
    if num2s == 0:
        return factorList
    if num2s == 1:
        return [2] + factorList
    return ['2 ^ %s' % num2s] + factorList
The form [f for f in factorList if f != 2] is called a list comprehension.
A: Since this post seems to be resurrected by Casey (lol), I'll add in my 2 cents.
Go over everything in PEP-8. It helped me out substantially when I had code formatting issues.
A: I'd use a list comprehension to get the twos out:
def transformFactorList(factorList):
    twos = [x for x in factorList if x == 2]
    rest = [x for x in factorList if x != 2]
    rest.insert(0, "2 ^ %d" % len(twos))
    return rest
Note that this will give you 2^0 and 2^1, which your code didn't. What you're doing with the twos seems arbitrary (sometimes you get a string, sometimes a number, sometimes nothing), so I figured that would be fine. You can change that easily if you want:
def transformFactorList(factorList):
    twos = [x for x in factorList if x == 2]
    rest = [x for x in factorList if x != 2]
    if twos:
        rest.insert(0, 2 if len(twos)==1 else "2 ^ %d" % len(twos))
    return rest
A: One other thing you might want to look at is the docstring. For example, the comment for this function:
# recursively computes the factors of a number
def factors(num):
Could be converted into this:
def factors(num):
    """ recursively computes the factors of a number"""
It's not really 100% necessary to do it this way, but it's a good habit to get into in case you ever start using something along the lines of pydoc.
You can also do this:
docstring.py
"""This is a docstring"""
at the command line:
>>> import docstring
>>> help(docstring)
results:
Help on module docstring:
NAME
docstring - This is a docstring
FILE
/Users/jason/docstring.py
A: Using recursion (where not necessary) is not pythonic. Python doesn't have tail recursion elimination and flat is better than nested.
When in doubt, try import this
update: by popular request, here goes the iterative factorization (sigh):
"""returns an iterator of tuples (factor, power) such that
reduce(operator.mul, (factor**power for factor, power in factors(n))) == n """
def factors(n):
i = 2
while n > 1:
p = 0
while n > 1 and n % i == 0:
p += 1
n /= i
if p:
yield (i, p)
i += 1
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Which "href" value should I use for JavaScript links, "#" or "javascript:void(0)"? The following are two methods of building a link that has the sole purpose of running JavaScript code. Which is better, in terms of functionality, page load speed, validation purposes, etc.?
function myJsFunc() {
alert("myJsFunc");
}
<a href="#" onclick="myJsFunc();">Run JavaScript Code</a>
or
function myJsFunc() {
alert("myJsFunc");
}
<a href="javascript:void(0)" onclick="myJsFunc();">Run JavaScript Code</a>
A: Doing <a href="#" onclick="myJsFunc();">Link</a> or <a href="javascript:void(0)" onclick="myJsFunc();">Link</a> or whatever else that contains an onclick attribute - was okay back five years ago, though now it can be a bad practice. Here's why:
*
*It promotes the practice of obtrusive JavaScript - which has turned out to be difficult to maintain and difficult to scale. More on this in Unobtrusive JavaScript.
*You're spending your time writing incredibly overly verbose code - which has very little (if any) benefit to your codebase.
*There are now better, easier, and more maintainable and scalable ways of accomplishing the desired result.
The unobtrusive JavaScript way
Just don't have a href attribute at all! Any good CSS reset would take care of the missing default cursor style, so that is a non-issue. Then attach your JavaScript functionality using graceful and unobtrusive best practices - which are more maintainable as your JavaScript logic stays in JavaScript, instead of in your markup - which is essential when you start developing large scale JavaScript applications which require your logic to be split up into blackboxed components and templates. More on this in Large-scale JavaScript Application Architecture
Simple code example
// Cancel click event
$('.cancel-action').click(function(){
alert('Cancel action occurs!');
});
// Hover shim for Internet Explorer 6 and Internet Explorer 7.
$(document.body).on('hover','a',function(){
$(this).toggleClass('hover');
});
a { cursor: pointer; color: blue; }
a:hover,a.hover { text-decoration: underline; }
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a class="cancel-action">Cancel this action</a>
A blackboxed Backbone.js example
For a scalable, blackboxed, Backbone.js component example - see this working jsfiddle example here. Notice how we utilize unobtrusive JavaScript practices, and in a tiny amount of code have a component that can be repeated across the page multiple times without side-effects or conflicts between the different component instances. Amazing!
Notes
*
*Omitting the href attribute on the a element will cause the element to not be accessible using tab key navigation. If you wish for those elements to be accessible via the tab key, you can set the tabindex attribute, or use button elements instead. You can easily style button elements to look like normal links as mentioned in Tracker1's answer.
*Omitting the href attribute on the a element will cause Internet Explorer 6 and Internet Explorer 7 to not take on the a:hover styling, which is why we have added a simple JavaScript shim to accomplish this via a.hover instead. Which is perfectly okay, as if you don't have a href attribute and no graceful degradation then your link won't work anyway - and you'll have bigger issues to worry about.
*If you want your action to still work with JavaScript disabled, then using an a element with a href attribute that goes to some URL that will perform the action manually instead of via an Ajax request or whatever should be the way to go. If you are doing this, then you want to ensure you do an event.preventDefault() on your click call to make sure when the button is clicked it does not follow the link. This option is called graceful degradation.
A: Ideally you should have a real URL as fallback for non-JavaScript users.
If this doesn't make sense, use # as the href attribute. I don't like using the onclick attribute since it embeds JavaScript directly in the HTML. A better idea would be to use an external JS file and then add the event handler to that link. You can then prevent the default event so that the URL doesn't change to append the # after the user clicks it.
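A small sketch of that pattern (doSearch() is a hypothetical function defined in your external JS file):
<a href="#" id="search">Search</a>
<script>
document.getElementById('search').addEventListener('click', function (e) {
    e.preventDefault(); // stop the browser appending # to the URL
    doSearch();         // the real work happens in JavaScript
}, false);
</script>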
A: What I understand from your question is that you want to create a link whose only purpose is to run JavaScript code.
You should then consider that there are people out there who block JavaScript in their browsers.
So if you are really going to use that link only for running a JavaScript function, you should add it dynamically, so it won't even be seen by users who haven't enabled JavaScript; a link that exists only to trigger a JavaScript function makes no sense when JavaScript is disabled.
For that reason, neither of them is good when JavaScript is disabled.
And if JavaScript is enabled and you only want to use that link to invoke a JavaScript function, then
<a href="javascript:void(0)" onclick="myJsFunc();">Link</a>
is far better way than using
<a href="#" onclick="myJsFunc();">Link</a>
because href="#" is going to cause the page to do actions that are not needed.
Also, another reason why <a href="javascript:void(0)" onclick="myJsFunc();">Link</a> is better than <a href="#" onclick="myJsFunc();">Link</a> is that the javascript: prefix explicitly names the scripting language. JavaScript is the default scripting language in most browsers (Internet Explorer, for example, treats the content of an onclick attribute as its default scripting language), but should another scripting language ever be in play, the javascript: prefix tells the browser exactly which language is being used.
Considering this, I would prefer using and exercising
<a href="javascript:void(0)" onclick="myJsFunc();">Link</a>
enough to make it a habit. And to be more user friendly, add that kind of link from within the JavaScript code:
$(document).ready(function(){
$(".blabla").append('<a href="javascript:void(0)" onclick="myJsFunc();">Link</a>')
});
A: I see a lot of answers by people who want to keep using # values for href, hence, here is an answer hopefully satisfying both camps:
A) I'm happy to have javascript:void(0) as my href value:
<a href="javascript:void(0)" onclick="someFunc.call(this)">Link Text</a>
B) I am using jQuery, and want # as my href value:
<a href="#" onclick="someFunc.call(this)">Link Text</a>
<script type="text/javascript">
/* Stop page jumping when javascript links are clicked.
Only select links where the href value is a #. */
$('a[href="#"]').live("click", function(e) {
return false; // prevent default click action from happening!
e.preventDefault(); // same thing as above
});
</script>
Note, if you know links won't be created dynamically, use the click function instead:
$('a[href="#"]').click(function(e) {
A: You could use the href and remove all links that have only hashes:
HTML:
<a href="#" onclick="run_foo()"> foo </a>
JS:
$(document).ready(function(){ // on DOM ready or some other event
$('a[href="#"]').attr('href',''); // set all reference handles to blank strings
// for anchors that have only hashes
});
A: Why not use this? It doesn't scroll the page up.
<span role="button" onclick="myJsFunc();">Run JavaScript Code</span>
A: Edited on 2019 January
In HTML5, using an a element without an href attribute is valid. It is considered to be a "placeholder hyperlink"
If the a element has no href attribute, then the element represents a placeholder for where a link might otherwise have been placed, if it had been relevant, consisting of just the element's contents.
Example:
<a>previous</a>
If after that you want to do otherwise :
1 - If your link doesn't go anywhere, don't use an <a> element. Use a <span> or something else appropriate and add CSS :hover to style it as you wish.
2 - Use the javascript:void(0) OR javascript:undefined OR javascript:; if you want to be raw, precise and fast.
A: Using just # makes some funny movements (the page jumps to the top), so I would recommend using #self if you would like to save yourself the typing effort of javascript:void(0).
A: I use the following
<a href="javascript:;" onclick="myJsFunc();">Link</a>
instead
<a href="javascript:void(0);" onclick="myJsFunc();">Link</a>
A: If you use a link as a way to just execute some JavaScript code (instead of using a span like D4V360 greatly suggested), just do:
<a href="javascript:(function()%7Balert(%22test%22)%3B%7D)()%3B">test</a>
If you're using a link with onclick for navigation, don't use href="#" as the fallback when JavaScript is off. It's usually very annoying when the user clicks on the link. Instead, provide the same link the onclick handler would provide if possible. If you can't do that, skip the onclick and just use a JavaScript URI in the href.
A: If you are using an <a> element, just use this:
<a href="javascript:myJSFunc();" />myLink</a>
Personally I'd attach an event handler with JavaScript later on instead (using attachEvent or addEventListener or maybe <put your favorite JavaScript framework here > also).
A: In total agreement with the overall sentiment, use void(0) when you need it, and use a valid URL when you need it.
Using URL rewriting you can make URLs that not only do what you want to do with JavaScript disabled, but also tell you exactly what it's going to do.
<a href="./Readable/Text/URL/Pointing/To/Server-Side/Script" id="theLinkId">WhyClickHere</a>
On the server side, you just have to parse the URL and query string and do what you want. If you are clever, you can allow the server side script to respond to both Ajax and standard requests differently. Allowing you to have concise centralized code that handles all the links on your page.
URL rewriting tutorials
Pros
*
*Shows up in status bar
*Easily upgraded to Ajax via onclick handler in JavaScript
*Practically comments itself
*Keeps your directories from becoming littered with single use HTML files
Cons
*
*Should still use event.preventDefault() in JavaScript
*Fairly complex path handling and URL parsing on the server side.
I am sure there are tons more cons out there. Feel free to discuss them.
A: I recommend using a <button> element instead, especially if the control is supposed to produce a change in the data. (Something like a POST.)
It's even better if you inject the elements unobtrusively, a type of progressive enhancement. (See this comment.)
A: I strongly prefer to keep my JavaScript out of my HTML markup as much as possible. If I'm using <a> as click event handlers then I'd recommend using <a class="trigger" href="#">Click me!</a>.
$('.trigger').click(function (e) {
e.preventDefault();
// Do stuff...
});
It's very important to note that many developers out there believe that using anchor tags for click-event handlers isn't good. They'd prefer you to use a <span> or <div> with some CSS that adds cursor: pointer; to it. This is a matter of much debate.
A: You should not use inline onclick="something();" in your HTML, so as not to pollute it with meaningless code; all click bindings should be set in JavaScript files (*.js).
Set the binding like this: $('#myAnchor').click(function(){ ... return false; }); or $('#myAnchor').bind('click', function(){ ... return false; });
Then you have a clean HTML file that is easy to load (and SEO friendly), without thousands of href="javascript:void(0);" and href="#" attributes.
A: I agree with suggestions elsewhere stating that you should use a regular URL in the href attribute, then call some JavaScript function in onclick. The flaw is that they automatically add return false after the call.
The problem with this approach is that if the function doesn't work, or if there is any problem, the link becomes unclickable. The onclick event will always return false, so the normal URL will never be called.
There's a very simple solution. Let the function return true if it works correctly. Then use the returned value to determine whether the click should be cancelled or not:
JavaScript
function doSomething() {
alert( 'you clicked on the link' );
return true;
}
HTML
<a href="path/to/some/url" onclick="return !doSomething();">link text</a>
Note that I negate the result of the doSomething() function. If it works, it will return true, so it will be negated (false) and the path/to/some/URL will not be followed. If the function returns false (for example, the browser doesn't support something used within the function, or anything else goes wrong), it is negated to true and the path/to/some/URL is followed.
A: # is better than javascript:anything, but the following is even better:
HTML:
<a href="/gracefully/degrading/url/with/same/functionality.ext" class="some-selector">For great justice</a>
JavaScript:
$(function() {
$(".some-selector").click(myJsFunc);
});
You should always strive for graceful degradation (in the event that the user doesn't have JavaScript enabled... and when it fits the spec and budget). Also, it is considered bad form to use JavaScript attributes and the javascript: protocol directly in HTML.
A: You can use javascript:void(0) here instead of # to stop the anchor tag from jumping to the top of the page.
function helloFunction() {
alert("hello world");
}
<a href="javascript:void(0)" onclick="helloFunction();">Call Hello Function</a>
A: Unless you're writing out the link using JavaScript (so that you know it's enabled in the browser), you should ideally be providing a proper link for people who are browsing with JavaScript disabled and then prevent the default action of the link in your onclick event handler. This way those with JavaScript enabled will run the function and those with JavaScript disabled will jump to an appropriate page (or location within the same page) rather than just clicking on the link and having nothing happen.
A: There are actually four options here.
Using return false; allows you to keep the anchor version in cases where you want a safe "fallback" in browsers that have JavaScript disabled or where it is not supported by the user agent (1-5% of users now). You can use the anchor "#" sign, an empty string, or a special URL for the href should your script fail. Note that you must use an href so screen readers know it is a hyperlink. (Note: I am not going to get into arguments about removing the href attribute, as that point is moot here. Without an href, an anchor is no longer a hyperlink, just an HTML tag with a click event on it that is captured.)
<a href="" onclick="alert('hello world!');return false;">My Link</a>
<a href="#" onclick="alert('hello world!');return false;">My Link</a>
<a href="MyFallbackURL.html" onclick="alert('hello world!');return false;">My Link</a>
Below is the more popular design today, using javascript:void(0) inside the href attribute. If a browser doesn't support scripting, it should simply post back to the same page, as an empty string is returned for the href hyperlink path. Use this if you don't care who supports JavaScript.
<a href="javascript:void(0);" onclick="alert('hello world!');">My Link</a>
A: Definitely hash (#) is better, because javascript: is a pseudo-scheme that:
*
*pollutes history
*instantiates new copy of engine
*runs in global scope and doesn't respect event system.
Of course "#" with an onclick handler which prevents default action is [much] better. Moreover, a link that has the sole purpose to run JavaScript is not really "a link" unless you are sending user to some sensible anchor on the page (just # will send to top) when something goes wrong. You can simply simulate look and feel of link with stylesheet and forget about href at all.
In addition, regarding cowgod's suggestion, particularly this: ...href="javascript_required.html" onclick="...: this is a good approach, but it doesn't distinguish between the "JavaScript disabled" and "onclick fails" scenarios.
A: '#' will take the user back to the top of the page, so I usually go with void(0).
javascript:; also behaves like javascript:void(0);
A: I would honestly suggest neither. I would use a stylized <button></button> for that behavior.
button.link {
display: inline-block;
position: relative;
background-color: transparent;
cursor: pointer;
border: 0;
padding: 0;
color: #00f;
text-decoration: underline;
font: inherit;
}
<p>A button that looks like a <button type="button" class="link">link</button>.</p>
This way you can assign your onclick. I also suggest binding via script, not using the onclick attribute on the element tag. The only gotcha is the pseudo 3D text effect in older IEs that cannot be disabled.
If you MUST use an A element, use javascript:void(0); for reasons already mentioned.
*
*Will always intercept in case your onclick event fails.
*Will not have errant load calls happen, or trigger other events based on a hash change
*The hash tag can cause unexpected behavior if the click falls through (onclick throws), avoid it unless it's an appropriate fall-through behavior, and you want to change the navigation history.
NOTE: You can replace the 0 with a string such as javascript:void('Delete record 123') which can serve as an extra indicator that will show what the click will actually do.
A: I usually go for
<a href="javascript:;" onclick="yourFunction()">Link description</a>
It's shorter than javascript:void(0) and does the same.
A: Here is one more option for completeness sake, that prevents the link from doing anything even if JavaScript is disabled, and it's short :)
<a href="#void" onclick="myJsFunc()">Run JavaScript function</a>
If the id is not present on the page, then the link will do nothing.
Generally, I agree with the Aaron Wagner's answer, the JavaScript link should be injected with JavaScript code into the document.
A: • Javascript: void(0); is void to null value [Not assigned], which that mean your browser is going to NULL click to DOM, and window return to false.
• The '#' is not follow the DOM or Window in javascript. which that mean the '#' sign inside anchor href is a LINK. Link to the same current direction.
A: I choose to use javascript:void(0), because using it can prevent the right-click context menu from opening the link. But javascript:; is shorter and does the same thing.
A: I would use:
<a href="#" onclick="myJsFunc();return false;">Link</a>
Reasons:
*
*This keeps the href simple, which search engines need. If you use anything else (such as a string), it may cause a 404 not found error.
*When the mouse hovers over the link, it doesn't show that it is a script.
*By using return false;, the page doesn't jump to the top or break the back button.
A: Don't use links for the sole purpose of running JavaScript.
The use of href="#" scrolls the page to the top; the use of void(0) creates navigational problems within the browser.
Instead, use an element other than a link:
<span onclick="myJsFunc()" class="funcActuator">myJsFunc</span>
And style it with CSS:
.funcActuator {
cursor: default;
}
.funcActuator:hover {
color: #900;
}
A: So, when you are doing some JavaScript things with an <a> tag and you also put href="#" on it, you can add return false at the end of the event (in the case of inline event binding), like:
<a href="#" onclick="myJsFunc(); return false;">Run JavaScript Code</a>
Or you can change the href attribute with JavaScript like:
<a href="javascript://" onclick="myJsFunc();">Run JavaScript Code</a>
or
<a href="javascript:void(0)" onclick="myJsFunc();">Run JavaScript Code</a>
But semantically, all the above ways to achieve this are wrong (they work fine, though). If an element is not created to navigate the page and is only associated with some JavaScript, then it should not be an <a> tag.
You can simply use a <button /> instead to do things or any other element like b, span or whatever fits there as per your need, because you are allowed to add events on all the elements.
So, there is one benefit to use <a href="#">. You get the cursor pointer by default on that element when you do a href="#". For that, I think you can use CSS for this like cursor:pointer; which solves this problem also.
And at the end, if you are binding the event from the JavaScript code itself, there you can do event.preventDefault() to achieve this if you are using <a> tag, but if you are not using a <a> tag for this, there you get an advantage, you don't need to do this.
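For example, a minimal sketch of that script-bound approach (the element id here is made up):
document.getElementById('myAction').addEventListener('click', function (e) {
    e.preventDefault(); // stop the browser from following the href
    myJsFunc();
});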
So, all things considered, it's better not to use an <a> tag for this kind of thing.
A: I use javascript:void(0).
Three reasons. Encouraging the use of # amongst a team of developers inevitably leads to some using the return value of the function called like this:
function doSomething() {
//Some code
return false;
}
But then they forget to use return doSomething() in the onclick and just use doSomething().
A second reason for avoiding # is that the final return false; will not execute if the called function throws an error. Hence the developers have to also remember to handle any error appropriately in the called function.
A third reason is that there are cases where the onclick event property is assigned dynamically. I prefer to be able to call a function or assign it dynamically without having to code the function specifically for one method of attachment or another. Hence my onclick (or on anything) in HTML markup look like this:
onclick="someFunc.call(this)"
OR
onclick="someFunc.apply(this, arguments)"
Using javascript:void(0) avoids all of the above headaches, and I haven't found any examples of a downside.
So if you're a lone developer then you can clearly make your own choice, but if you work as a team you have to either state:
Use href="#", make sure onclick always contains return false; at the end, that any called function does not throw an error and if you attach a function dynamically to the onclick property make sure that as well as not throwing an error it returns false.
OR
Use href="javascript:void(0)"
The second is clearly much easier to communicate.
A: Usually, you should always have a fallback link to make sure that clients with JavaScript disabled still have some functionality. This concept is called unobtrusive JavaScript.
Example... Let's say you have the following search link:
<a href="search.php" id="searchLink">Search</a>
You can always do the following:
var link = document.getElementById('searchLink');
link.onclick = function() {
try {
// Do Stuff Here
} finally {
return false;
}
};
That way, people with JavaScript disabled are directed to search.php while your viewers with JavaScript view your enhanced functionality.
A: It would be better to use jQuery,
$(document).ready(function() {
$("a").css("cursor", "pointer");
});
and omit both href="#" and href="javascript:void(0)".
The anchor tag markup will be like
<a onclick="hello()">Hello</a>
Simple enough!
A: javascript:void(0) will be deprecated in the future, therefore you should use #.
A: If you happen to be using AngularJS, you can use the following:
<a href="">Do some fancy JavaScript</a>
Which will not do anything.
In addition
*
*It will not take you to the top of the page, as with (#)
*
*Therefore, you don't need to explicitly return false with JavaScript
*It is short and concise
A: Depending on what you want to accomplish, you could forget the onclick and just use the href:
<a href="javascript:myJsFunc()">Link Text</a>
It gets around the need to return false. I don't like the # option because, as mentioned, it will take the user to the top of the page. If you have somewhere else to send the user if they don't have JavaScript enabled (which is rare where I work, but a very good idea), then Steve's proposed method works great.
<a href="javascriptlessDestination.html" onclick="myJSFunc(); return false;">Link text</a>
Lastly, you can use javascript:void(0) if you do not want anyone to go anywhere and if you don't want to call a JavaScript function. It works great if you have an image you want a mouseover event to happen with, but there's not anything for the user to click on.
A: I believe you are presenting a false dichotomy. These are not the only two options.
I agree with Mr. D4V360, who suggested that, even though you are using the anchor tag, you do not truly have an anchor here. All you have is a special section of a document that should behave slightly differently. A <span> tag is far more appropriate.
A: I personally use them in combination. For example:
HTML
<a href="#">Link</a>
with little bit of jQuery
$('a[href="#"]').attr('href','javascript:void(0);');
or
$('a[href="#"]').click(function(e) {
e.preventDefault();
});
But I'm using that just for preventing the page jumping to the top when the user clicks on an empty anchor. I'm rarely using onClick and other on events directly in HTML.
My suggestion would be to use a <span> element with a class attribute instead of
an anchor. For example:
<span class="link">Link</span>
Then assign the function to .link with a script wrapped in the body and just before the </body> tag or in an external JavaScript document.
<script>
(function($) {
$('.link').click(function() {
// do something
});
})(jQuery);
</script>
*Note: For dynamically created elements, use:
$('.link').on('click', function() {
// do something
});
And for dynamically created elements which are created within other dynamically created elements, use:
$(document).on('click','.link', function() {
// do something
});
Then you can style the span element to look like an anchor with a little CSS:
.link {
color: #0000ee;
text-decoration: underline;
cursor: pointer;
}
.link:active {
color: red;
}
Here's a jsFiddle example of the aforementioned approaches.
A: I tried both in Google Chrome with the developer tools, and href="#" took 0.32 seconds while the javascript:void(0) method took only 0.18 seconds. So in Google Chrome, javascript:void(0) works better and faster.
A: On a modern website the use of href should be avoided if the element is only doing JavaScript functionality (not a real link).
Why?
The presence of the href attribute tells the browser that this is a link with a destination.
With that, the browser will show the Open In New Tab / Window function (also triggered when you use shift+click).
Doing so will result in opening the same page without the desired function triggered (resulting in user frustration).
In regards to IE:
As of IE8, element styling (including hover) works if the doctype is set. Other versions of IE are no longer really worth worrying about.
Only Drawback:
Removing HREF removes the tabindex.
To overcome this, you can use a button that's styled as a link or add a tabindex attribute using JS.
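For instance, a quick sketch of that second workaround (the selector is hypothetical):
// Make an href-less anchor keyboard-focusable again
document.querySelector('a.js-action').setAttribute('tabindex', '0');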
A: The first one, ideally with a real link to follow in case the user has JavaScript disabled. Just make sure to return false to prevent the click event from firing if the JavaScript executes.
<a href="#" onclick="myJsFunc(); return false;">Link</a>
If you use Angular2, this way works:
<a [routerLink]="" (click)="passTheSalt()">Click me</a>.
See here https://stackoverflow.com/a/45465728/2803344
A: It's nice to have your site be accessible by users with JavaScript disabled, in which case the href points to a page that performs the same action as the JavaScript being executed. Otherwise I use "#" with a "return false;" to prevent the default action (scroll to top of the page) as others have mentioned.
Googling for "javascript:void(0)" provides a lot of information on this topic. Some of them, like this one mention reasons to NOT use void(0).
A: When I've got several faux-links, I prefer to give them a class of 'no-link'.
Then in jQuery, I add the following code:
$(function(){
$('.no-link').click(function(e){
e.preventDefault();
});
});
And for the HTML, the link is simply
<a href="/" class="no-link">Faux-Link</a>
I don't like using hash-tags unless they're used for anchors, and I only do the above when I've got more than two faux-links; otherwise I go with javascript:void(0).
<a href="javascript:void(0)" class="no-link">Faux-Link</a>
Typically, I like to just avoid using a link at all and just wrap something in a span and use that as a way to activate some JavaScript code, like a pop-up or a content-reveal.
A: Neither.
If you can have an actual URL that makes sense use that as the HREF. The onclick won't fire if someone middle-clicks on your link to open a new tab or if they have JavaScript disabled.
If that is not possible, then you should at least inject the anchor tag into the document with JavaScript and the appropriate click event handlers.
I realize this isn't always possible, but in my opinion it should be striven for in developing any public website.
Check out Unobtrusive JavaScript and Progressive enhancement (both Wikipedia).
A: I'm basically paraphrasing from this practical article using progressive enhancement. The short answer is that you never use javascript:void(0); or # unless your user interface has already inferred that JavaScript is enabled, in which case you should use javascript:void(0);. Also, do not use span as links, since that is semantically false to begin with.
Using SEO-friendly URL routes in your application, such as /Home/Action/Parameters, is a good practice as well. If you have a link to a page that works without JavaScript first, you can enhance the experience afterward. Use a real link to a working page, then add an onclick event to enhance the presentation.
Here is a sample. Home/ChangePicture is a working link to a form on a page complete with user interface and standard HTML submit buttons, but it looks nicer injected into a modal dialog with jQueryUI buttons. Either way works, depending on the browser, which satisfies mobile first development.
<p><a href="Home/ChangePicture" onclick="return ChangePicture_onClick();" title="Change Picture">Change Picture</a></p>
<script type="text/javascript">
function ChangePicture_onClick() {
$.get('Home/ChangePicture',
function (htmlResult) {
$("#ModalViewDiv").remove(); //Prevent duplicate dialogs
$("#modalContainer").append(htmlResult);
$("#ModalViewDiv").dialog({
width: 400,
modal: true,
buttons: {
"Upload": function () {
if(!ValidateUpload()) return false;
$("#ModalViewDiv").find("form").submit();
},
Cancel: function () { $(this).dialog("close"); }
},
close: function () { }
});
}
);
return false;
}
</script>
A: Neither if you ask me;
If your "link" has the sole purpose of running some JavaScript code it doesn't qualify as a link; rather a piece of text with a JavaScript function coupled to it. I would recommend to use a <span> tag with an onclick handler attached to it and some basic CSS to immitate a link. Links are made for navigation, and if your JavaScript code isn't for navigation it should not be an <a> tag.
Example:
function callFunction() { console.log("function called"); }
.jsAction {
cursor: pointer;
color: #00f;
text-decoration: underline;
}
<p>I want to call a JavaScript function <span class="jsAction" onclick="callFunction();">here</span>.</p>
A: Don't lose sight of the fact that your URL may be necessary -- onclick is fired before the reference is followed, so sometimes you will need to process something clientside before navigating off the page.
A: You can also write a hint in an anchor like this:
<a href="javascript:void('open popup image')" onclick="return f()">...</a>
so the user will know what this link does.
A: Ideally you'd do this:
<a href="javascriptlessDestination.html" onclick="myJSFunc(); return false;">Link text</a>
Or, even better, you'd have the default action link in the HTML, and you'd add the onclick event to the element unobtrusively via JavaScript after the DOM renders, thus ensuring that if JavaScript is not present/utilized you don't have useless event handlers riddling your code and potentially obfuscating (or at least distracting from) your actual content.
A: Just to pick up the point some of the other have mentioned.
It's much better to bind the event on load or in $(document).ready() than to put JavaScript directly into the click attribute.
In the case that JavaScript isn't available, I would use an href to the current URL, and perhaps an anchor to the position of the link. The page is still usable for the people without JavaScript, and those who have it won't notice any difference.
As I have it to hand, here is some jQuery which might help:
var [functionName] = function() {
// do something
};
jQuery("[link id or other selector]").bind("click", [functionName]);
A: There is one more important thing to remember here. Section 508 compliance.
Because of it, I feel it's necessary to point out that you need the anchor tag for screen readers such as JAWS to be able to focus it through tabbing. So the solution "just use JavaScript and forget the anchor to begin with" is not an option for some of us. Firing the JavaScript inside the href is only necessary if you can't afford for the screen to jump back up to the top. You can use a setTimeout of 0 seconds and have JavaScript restore focus to where you need it, but even then the page will jump to the top and then back.
A: I use href="#" for links that I want a dummy behaviour for. Then I use this code:
$(document).ready(function() {
$("a[href='#']").click(function(event) {
event.preventDefault();
});
});
Meaning if the href equals a hash (*="#"), it prevents the default link behaviour, thus still allowing you to write functionality for it, and it doesn't affect anchor clicks.
A: I'd say the best way is to make an href anchor to an ID you'd never use, like #Do1Not2Use3This4Id5 or a similar ID, that you are 100% sure no one will use and won't offend people.
*
*javascript:void(0) is a bad idea and violates Content Security Policy on CSP-enabled HTTPS pages https://developer.mozilla.org/en/docs/Security/CSP (thanks to @jakub.g)
*Using just # will have the user jump back to the top when pressed
*Won't ruin the page if JavaScript isn't enabled (unless you have JavaScript-detecting code)
*If JavaScript is enabled you can disable the default event
*You have to use href unless you know how to prevent your browser from selecting some text (I don't know if using point 4 will remove the thing that stops the browser from selecting text)
Basically, no one here has mentioned point 5, which I think is important, as your site comes off as unprofessional if it suddenly starts selecting things around the link.
A: The simplest option, and the one used by almost everyone, is javascript:void(0). You can use it instead of # to stop the anchor tag from jumping to the top of the page.
<a href="javascript:void(0)" onclick="testFunction();">Click To check Function</a>
function testFunction() {
alert("hello world");
}
A: Answer: Both approaches have no discernible effect on how quickly a website loads.
Explanation:
Well, both are equally effective. The primary difference is related to the click action. The first approach (href="#") will alter the URL in the address bar to end in "#". The second method (href="javascript:void(0)") will leave the URL in the address bar unchanged. Hence, both are good to go.
Tip: Use the second method as a best practice, that is <a href="javascript:void(0)">, as it is more valid HTML and doesn't affect the URL bar.
A: Bootstrap modals from before 4.0 have a basically undocumented behavior: they will load hrefs from <a> elements using AJAX unless the href is exactly #. If you are using Bootstrap 3, javascript:void(0); hrefs will cause JavaScript errors:
AJAX Error: error GET javascript:void(0);
In these cases you would need to upgrade to Bootstrap 4 or change the href.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4379"
}
|
Q: Sounds for build error/success in Visual C++? On long Visual C++ builds, it would be really helpful to hear some sort of (optional) sounds for such build/compile results as:
*
*individual compile error
*file compile success/failure
*build success/failure
*batch build success/failure
Does anyone know how to enable sounds for these kinds of build occurrences in Visual C++ (especially Visual C++ 2008 on Vista)?
A: Go to Start | Settings | Control Panel | Sounds, click the Sounds tab, and customize the entries under Microsoft Developer.
A: CJM is almost right.
In VC++ 9 (Visual Studio 2008) Go to Control Panel's Sounds applet (Control Panel/Hardware and Sounds/Sounds in Vista).
Under the Sounds tab scroll to "Build Succeeded" under "Microsoft Visual Studio" and set a sound for this event.
If you have (or have had) multiple versions of VS on this PC (I have 6.0, 2003, 2005, and 2008), there may be multiple entries with names like "Microsoft Developer" or blanks, which I assume work in the older versions. I often end up setting the wrong ones. It seems you'll have to close VS 2008 and reopen it for this to take effect.
Someone mentioned this was broken/removed in VS 2005 - I noticed this as well.
A: In VS2005, the sound subsystem wasn't working correctly; I'm not sure whether it was fixed in 2008. Using macros, you COULD play sounds, like different ones for builds that succeeded and builds that failed; however, the person I knew who did this was constantly crashing due to the macros failing.
A: Another solution is to install the Visual Studio Power Toys. This includes a feature called 'Toast' that shows a notification in your system tray when a build has finished. You might see if this has options that would be useful for sound notification.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: What's the simplest way to make a HTTP GET request in Perl? I have some code I've written in PHP for consuming our simple webservice, which I'd also like to provide in Perl for users who may prefer that language. What's the simplest method of making a HTTP request to do that? In PHP I can do it in one line with file_get_contents().
Here's the entire code I want to port to Perl:
/**
* Makes a remote call to the our API, and returns the response
* @param cmd {string} - command string ID
* @param argsArray {array} - associative array of argument names and argument values
* @return {array} - array of responses
*/
function callAPI( $cmd, $argsArray=array() )
{
$apikey="MY_API_KEY";
$secret="MY_SECRET";
$apiurl="https://foobar.com/api";
// timestamp this API was submitted (for security reasons)
$epoch_time=time();
//--- assemble argument array into string
$query = "cmd=" .$cmd;
foreach ($argsArray as $argName => $argValue) {
$query .= "&" . $argName . "=" . urlencode($argValue);
}
$query .= "&key=". $apikey . "&time=" . $epoch_time;
//--- make md5 hash of the query + secret string
$md5 = md5($query . $secret);
$url = $apiurl . "?" . $query . "&md5=" . $md5;
//--- make simple HTTP GET request, put the server response into $response
$response = file_get_contents($url);
//--- convert "|" (pipe) delimited string to array
$responseArray = explode("|", $response);
return $responseArray;
}
A: LWP::Simple:
use LWP::Simple;
$contents = get("http://YOUR_URL_HERE");
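To sketch how the callAPI function from the question might be ported on top of that (a rough draft, assuming URI::Escape and Digest::MD5 are available and LWP has SSL support installed; note the arguments are assembled in sorted key order here, so the server must accept that ordering, since the MD5 is computed over the query string):
use LWP::Simple qw(get);
use URI::Escape qw(uri_escape);
use Digest::MD5 qw(md5_hex);
sub call_api {
    my ($cmd, %args) = @_;
    my $apikey = "MY_API_KEY";
    my $secret = "MY_SECRET";
    my $apiurl = "https://foobar.com/api";
    # Assemble the arguments into a query string
    my $query = "cmd=$cmd";
    $query .= "&$_=" . uri_escape($args{$_}) for sort keys %args;
    $query .= "&key=$apikey&time=" . time();
    # MD5 hash of the query plus the secret string
    my $md5 = md5_hex($query . $secret);
    # Simple HTTP GET request; get() returns undef on failure
    my $response = get("$apiurl?$query&md5=$md5");
    die "API request failed" unless defined $response;
    # Convert the pipe-delimited response into a list
    return split /\|/, $response;
}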
A: Take a look at LWP::Simple.
For more involved queries, there's even a book about it.
A: I would use the LWP::Simple module.
A: Try the HTTP::Request module.
Instances of this class are usually passed to the request() method of an LWP::UserAgent object.
A: Mojo::UserAgent is a great option too!
use Mojo::UserAgent;
my $ua = Mojo::UserAgent->new;
# Say hello to the Unicode snowman with "Do Not Track" header
say $ua->get('www.☃.net?hello=there' => {DNT => 1})->res->body;
# Form POST with exception handling
my $tx = $ua->post('https://metacpan.org/search' => form => {q => 'mojo'});
if (my $res = $tx->success) { say $res->body }
else {
my ($err, $code) = $tx->error;
say $code ? "$code response: $err" : "Connection error: $err";
}
# Quick JSON API request with Basic authentication
say $ua->get('https://sri:s3cret@example.com/search.json?q=perl')
->res->json('/results/0/title');
# Extract data from HTML and XML resources
say $ua->get('www.perl.org')->res->dom->html->head->title->text;
Samples direct from the CPAN page. I used this when I couldn't get LWP::Simple to work on my machine.
A: LWP::Simple has the function you're looking for.
use LWP::Simple;
$content = get($url);
die "Can't GET $url" if (! defined $content);
A: If it's in Unix and if LWP::Simple isn't installed, you can try:
my $content = `GET "http://trackMyPhones.com/"`;
A: I think what Srihari might be referencing is Wget, but I would actually recommend (again, on *nix without LWP::Simple) to use cURL:
$ my $content = `curl -s "http://google.com"`;
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
The -s flag tells curl to be silent. Otherwise, you get curl's progress bar output on standard error every time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: WinDbg Dr. Watson minidump - requires pdb/dll originally built for installed version? I have a minidump file from a target's application crash. Is it possible for me to rebuild the dll/pdb files for a version of the software and have WinDbg load symbols correctly?
My problem is that our pdb files are only kept for major releases (unfortunately). This is a daily build, which I can rebuild myself, but I'm getting tripped up on errors.
With !sym noisy on:
"image header does not match memory image header."
DBGENG: C:\...\XXX.dll image header does not match memory image header.
DBGENG: XXX.dll - Partial symbol image load missing image info
DBGHELP: Module is not fully loaded into memory.
DBGHELP: Searching for symbols using debugger-provided data.
DBGHELP: C:\...\XXX.pdb - mismatched pdb
Note I've built the pdb with the dll; they are from the same RELEASE directory (should I be building debug?)
These are release builds (as release builds are installed on the target and crashing); should I be somehow using the debug build dlls to get more symbol information?
A: In my experience probably not.
If you have the exact build directory and build with the exact same compiler settings then this might work. You definitely will not be able to load symbols from a debug build against a release crash dump.
You will need to turn on the 'load anything' options: .symopt+0x40 to get windbg to ignore the timestamp differences.
A: If you still have the exact source code the image was compiled from, then rebuild it, producing a new pdb file, and then instruct WinDbg to forcibly load this pdb when you open the crash dump - it worked once in my practice.
A: The ChkMatch utility is designed for this exact scenario.
As long as you have the original .EXE, you can recompile the sources (with the same compiler and compiler settings) and patch the new .PDB to match the old .EXE.
In this example, OriginalExecutable.exe is the executable that no longer has a .PDB file, and RebuiltPDB.pdb is one that has been produced by rebuilding the original source.
chkmatch -m OriginalExecutable.exe RebuiltPDB.pdb
Now, as long as the two files have their original names, The debugger should accept them as a matching pair.
A: PDB files are tied to their EXE files by a GUID and an "age" (it's a sequence number). These are embedded in the EXE, and into the PDB. The GUID is regenerated on each complete build, and the "age" is changed on each incremental build.
The debugger uses these to ensure that it's looking at the correct PDB for the EXE file.
I didn't know about the "chkmatch" tool mentioned by SteveMan, but I suspect that it works by patching up the GUID/age so that they match.
A: This is too late to help Doug, but for the sake of anyone who comes across this question, another thread (Is it possible to load mismatched symbols in Visual Studio?) pointed out a way to get WinDbg to accept mismatched .PDB files
.symopt+0x40
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's the best way to implement an API in ASP.NET using MVC? I've been a longtime ASP.NET developer in the web forms model, and am using a new project as an opportunity to get my feet wet with ASP.NET MVC.
The application will need an API so that a group of other apps can communicate with it. I've always built API's out just using a standard web service prior to this.
As a sidenote, I'm a little hesitant to plunge headfirst into the REST style of creating API's, for this particular instance at least. This application will likely need a concept of API versioning, and I think that the REST approach, where the API is essentially scattered across all the controllers of the site, is a little cumbersome in that regard. (But I'm not completely opposed to it if there is a good answer to the potential versioning potential requirement.)
So, what say ye, Stack Overflow denizens?
A: I'd agree with Kilhoffer. Try using a "Facade" wrapper class that inherits from an "IFacade". In your Facade class, put your code to consume your web service. In this way your controllers will simply make calls to the Facade. The plus side of this is that you can swap in a "DummyFacade" that implements the same IFacade interface but doesn't actually talk to the web service and just returns static content. That lets you actually do some unit testing without hitting the service. Basically the same idea as the Repository pattern.
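A bare-bones sketch of that idea (all type and member names here are invented for illustration, not taken from the question):
using System.Web.Mvc;
public interface IFacade
{
    string GetStatus(int id);
}
// Real implementation: the only class that talks to the web service
public class ServiceFacade : IFacade
{
    public string GetStatus(int id)
    {
        // ... call the web service and map its response here ...
        return "live status for " + id;
    }
}
// Test double: returns static content so unit tests never hit the service
public class DummyFacade : IFacade
{
    public string GetStatus(int id)
    {
        return "canned status for " + id;
    }
}
// The controller depends only on the interface
// (constructor injection assumes a DI container or a manual factory)
public class StatusController : Controller
{
    private readonly IFacade _facade;
    public StatusController(IFacade facade)
    {
        _facade = facade;
    }
    public ActionResult Index(int id)
    {
        return Content(_facade.GetStatus(id));
    }
}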
A: I would still recommend a service layer that can serve client side consumers or server side consumers. Possibly even returning data in a variety of formats, depending on the consuming caller.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Databinding 2 WPF ComboBoxes to 1 source without being "linked" I have a master-detail scenario where I have 1 ComboBox listing companies from an ObjectDataSourceProvider. Under that I have 2 ComboBoxes binding to the Contacts property from the current Company object. I need to be able to select a different contact in each ComboBox; however, as soon as I change selection in one list the other list updates to the same contact.
I have tried different settings (OneWay vs. TwoWay) but so far nothing seems to work. Here is an excerpt of my XAML.
<Page.Resources>
<!-- This is a custom class inheriting from ObjectDataProvider -->
<wg:CustomersDataProvider x:Key="CompanyDataList" />
</Page.Resources>
<Grid>
<!--- other layout XAML removed -->
<ComboBox x:Name="Customer" Width="150"
ItemsSource="{Binding Source={StaticResource CompanyDataList},Path=Companies,Mode=OneWay}"
DisplayMemberPath="Name"
SelectedValuePath="Id"
IsSynchronizedWithCurrentItem="True"
SelectedValue="{Binding Path=Id, Mode=OneWay}"
VerticalAlignment="Bottom" />
<ComboBox x:Name="PrimaryContact" Width="150"
DataContext="{Binding ElementName=Customer,Path=Items,Mode=OneWay}"
ItemsSource="{Binding Path=Contacts,Mode=OneWay}"
DisplayMemberPath="FullName"
SelectedValuePath="Id"
IsSynchronizedWithCurrentItem="True"
SelectedValue="{Binding Path=Id,Mode=OneWay}" />
<ComboBox x:Name="AdminContact" Width="150"
DataContext="{Binding ElementName=OwnerCustomer,Path=Items,Mode=OneWay}"
ItemsSource="{Binding Path=Contacts,Mode=OneWay}"
DisplayMemberPath="FullName"
SelectedValuePath="Id"
IsSynchronizedWithCurrentItem="True"
SelectedValue="{Binding Path=Id,Mode=OneWay}" />
<!--- other layout XAML removed -->
</Grid>
I thought that creating a CollectionViewSource would be the way to go, but I have not been able to make that work. Is there a simple way to do this so the PrimaryContact and AdminContact aren't linked?
A: Change your "IsSynchronizedWithCurrentItem" attributes to "False".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Undoing a git rebase How do I easily undo a git rebase? A lengthy manual method is:
*
*checkout the commit parent to both of the branches
*create and checkout a temporary branch
*cherry-pick all commits by hand
*reset the faulty rebased branch to point to the temporary branch
In my current situation, this works because I can easily spot commits from both branches (one was my stuff, the other was my colleague's stuff). However, my approach strikes me as suboptimal and error-prone (let's say I had just rebased with two of my own branches).
Clarification: I am talking about a rebase during which multiple commits were replayed, not only one.
A: Another way that doesn't require doing a hard reset is to create a new branch with your desired starting point.
As with the other solutions, you use the reflog to find the correct starting point.
git reflog
(you can also use git log -g here for more detail)
Then you note the reference to the commit SHA (ex: e86a52b851e).
Finally, you use the git branch command.
git branch recover-branch e86a52b851e
Reference: https://git-scm.com/book/en/v2/Git-Internals-Maintenance-and-Data-Recovery#_data_recovery
A: git reset --hard origin/{branchName}
is the correct solution to reset all your local changes done by rebase.
A: I actually put a backup tag on the branch before I do any nontrivial operation (most rebases are trivial, but I'd do that if it looks at all complex).
Then, restoring is as easy as git reset --hard BACKUP.
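In command form, that workflow looks like this (BACKUP is just an arbitrary tag name):
git tag BACKUP            # before starting the risky rebase
git rebase master
git reset --hard BACKUP   # only if the rebase went wrong
git tag -d BACKUP         # clean up once you're happy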
A: The easiest way would be to find the head commit of the branch as it was immediately before the rebase started in the reflog...
git reflog
and to reset the current branch to it (with the usual caveats about being absolutely sure before reseting with the --hard option).
Suppose the old commit was HEAD@{2} in the ref log:
git reset --hard HEAD@{2}
In Windows, you may need to quote the reference:
git reset --hard "HEAD@{2}"
You can check the history of the candidate old head by just doing a git log HEAD@{2} (Windows: git log "HEAD@{2}").
If you've not disabled per-branch reflogs, you should be able to simply do git reflog branchname@{1}, as a rebase detaches the branch head before reattaching it to the final head. I would double-check this, though, as I haven't verified it recently.
Per default, all reflogs are activated for non-bare repositories:
[core]
logAllRefUpdates = true
A: Charles's answer works, but you may want to do this:
git rebase --abort
to clean up after the reset.
Otherwise, you may get the message “Interactive rebase already started”.
A: Let's say I rebase master to my feature branch and I get 30 new commits which break something. I've found that often it's easiest to just remove the bad commits.
git rebase -i HEAD~31
Interactive rebase for the last 31 commits (it doesn't hurt if you pick way too many).
Simply take the commits that you want to get rid of and mark them with "d" instead of "pick". Now the commits are deleted effectively undoing the rebase (if you remove only the commits you just got when rebasing).
A: If you are on a branch you can use:
git reset --hard @{1}
There is not only a reference log for HEAD (obtained by git reflog), there are also reflogs for each branch (obtained by git reflog <branch>). So, if you are on master, then git reflog master will list all changes to that branch. You can refer to those changes by master@{1}, master@{2}, etc.
git rebase will usually change HEAD multiple times but the current branch will be updated only once.
@{1} is simply a shortcut for the current branch, so it's equal to master@{1} if you are on master.
git reset --hard ORIG_HEAD will not work if you used git reset during an interactive rebase.
A: Using reflog didn't work for me.
What worked for me was similar to what is described here. Open the file in .git/logs/refs named after the branch that was rebased and find the line that contains "rebase finished", something like:
5fce6b51 88552c8f Kris Leech <me@example.com> 1329744625 +0000 rebase finished: refs/heads/integrate onto 9e460878
Checkout the second commit listed on the line.
git checkout 88552c8f
Once I confirmed this contained my lost changes, I branched and let out a sigh of relief.
git log
git checkout -b lost_changes
A: What I usually do is
git reset #commit_hash
to the last commit where I think rebase had no effect.
then git pull
Now your branch should match master exactly, and the rebased commits should not be in it.
Now one can just cherry-pick the commits on this branch.
A: It annoys me to no end that none of these answers is fully automatic, despite the fact that it should be automatable (at least mostly). I created a set of aliases to try to remedy this:
# Useful commands
#################
# Undo the last rebase
undo-rebase = "! f() { : git reset ; PREV_COMMIT=`git x-rev-before-rebase` && git reset --merge \"$PREV_COMMIT\" \"$@\";}; f"
# See what changed since the last rebase
rdiff = "!f() { : git diff ; git diff `git x-rev-before-rebase` "$@";}; f"
# Helpers
########
# Get the revision before the last rebase started
x-rev-before-rebase = !git reflog --skip=1 -1 \"`git x-start-of-rebase`\" --format=\"%gD\"
# Get the revision that started the rebase
x-start-of-rebase = reflog --grep-reflog '^rebase (start)' -1 --format="%gD"
You should be able to tweak this to allow going back an arbitrary number of rebases pretty easily (juggling the args is the trickiest part), which can be useful if you do a number of rebases in quick succession and mess something up along the way.
Caveats
It will get confused if any commit messages begin with "rebase (start)" (please don't do this). You could make the regex more resilient to improve the situation by matching something like this:
--grep-reflog "^rebase (start): checkout "
WARNING: not tested (regex may need adjustments)
The reason I haven't done this is because I'm not 100% sure that a rebase always begins with a checkout. Can anyone confirm this?
[If you're curious about the null (:) commands at the beginning of the function, that's a way of setting up bash completions for the aliases]
A: Actually, rebase saves your starting point to ORIG_HEAD so this is usually as simple as:
git reset --hard ORIG_HEAD
However, the reset, rebase and merge all save your original HEAD pointer into ORIG_HEAD so, if you've done any of those commands since the rebase you're trying to undo then you'll have to use the reflog.
A: For multiple commits, remember that any commit references all the history leading up to that commit. So in Charles' answer, read "the old commit" as "the newest of the old commits". If you reset to that commit, then all the history leading up to that commit will reappear. This should do what you want.
A: If you successfully rebased against a remote branch and cannot git rebase --abort, you can still do some tricks to save your work and avoid forced pushes.
Suppose your current branch that was rebased by mistake is called your-branch and is tracking origin/your-branch
*
*git branch -m your-branch-rebased # rename current branch
*git checkout origin/your-branch # checkout to latest state that is known to the origin
*git checkout -b your-branch
*check git log your-branch-rebased, compare it to git log your-branch, and identify the commits that are missing from your-branch
*git cherry-pick COMMIT_HASH for every commit in your-branch-rebased
*push your changes. Please be aware that two local branches are associated with remote/your-branch and you should push only your-branch
A: If you don't want to do a hard reset...
You can checkout the commit from the reflog, and then save it as a new branch:
git reflog
Find the commit just before you started rebasing. You may need to scroll further down to find it (press Enter or PageDown). Take note of the HEAD number and replace 57:
git checkout HEAD@{57}
Review the branch/commits, and if it's correct then create a new branch using this HEAD:
git checkout -b new_branch_name
A: Following the solution of @Allan and @Zearin: I wish I could simply have left a comment, but I don't have enough reputation, so I have used the following command:
Instead of doing git rebase -i --abort (note the -i) I had to simply do git rebase --abort (without the -i).
Using both -i and --abort at the same time causes Git to show me a list of usage/options.
So my previous and current branch status with this solution is:
matbhz@myPc /my/project/environment (branch-123|REBASE-i)
$ git rebase --abort
matbhz@myPc /my/project/environment (branch-123)
$
A: Resetting the branch to the dangling commit object of its old tip is of course the best solution, because it restores the previous state without expending any effort. But if you happen to have lost those commits (f.ex. because you garbage-collected your repository in the meantime, or this is a fresh clone), you can always rebase the branch again. The key to this is the --onto switch.
Let’s say you had a topic branch imaginatively called topic, that you branched off master when the tip of master was the 0deadbeef commit. At some point while on the topic branch, you did git rebase master. Now you want to undo this. Here’s how:
git rebase --onto 0deadbeef master topic
This will take all commits on topic that aren’t on master and replay them on top of 0deadbeef.
With --onto, you can rearrange your history into pretty much any shape whatsoever.
Have fun. :-)
A: In case you had pushed your branch to a remote repository (usually it's origin) and then you've done a successful rebase (without merge) (git rebase --abort gives "No rebase in progress"), you can easily reset the branch using this command:
git reset --hard origin/{branchName}
Example:
$ ~/work/projects/{ProjectName} $ git status
On branch {branchName}
Your branch is ahead of 'origin/{branchName}' by 135 commits.
(use "git push" to publish your local commits)
nothing to commit, working directory clean
$ ~/work/projects/{ProjectName} $ git reset --hard origin/{branchName}
HEAD is now at 6df5719 "Commit message".
$ ~/work/projects/{ProjectName} $ git status
On branch {branchName}
Your branch is up-to-date with 'origin/{branchName}'.
nothing to commit, working directory clean
A: I tried all the suggestions with reset and reflog without any success. Restoring local history in IntelliJ resolved the problem of the lost files.
A: If you mess something up within a git rebase (e.g., you run git rebase --abort while you have uncommitted files), those files will be lost and git reflog will not help. This happened to me, and you will need to think outside the box here. If you are lucky like me and use IntelliJ WebStorm, then you can right-click -> Local History and revert to a previous state of your files/folders no matter what mistakes you have made with the versioning software. It is always good to have another failsafe running.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4076"
}
|
Q: Why isn't Selenium capturing my keystrokes? I'm trying out the recorder of the latest Selenium IDE Firefox extension on win32/ff3.
On one page, currently I have to hit Enter to go to the next page, but it's not on a submit button, it's captured manually. This is not picked up by the recorder. I know I can enter it manually myself after recording, but why isn't this part of it?
A: From the Selenium FAQ:
http://wiki.openqa.org/display/SIDE/FAQ
"Not every event will be recorded by Selenium IDE. Usually the ones that won't be recorded are those that involve complex HTML and/or AJAX. We hope to improve this over time, but there will always be situations where the IDE can't record everything because it has to balance recording too little with too much."
A: Have you tried with a different browser? Does the same happen in FF2 or IE?
Use the latest nightly version. I know that there are some problems with FF3.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: VBScript: Using a variable within a DOM element I'm looking to use a VBScript variable within a reference to a DOM element for a web-app I'm building. Here's a brief excerpt of the affected area of code:
dim num
num = CInt(document.myform.i.value)
dim x
x = 0
dim orders(num)
For x = 0 To num
orders(x) = document.getElementById("order" & x).value
objFile.writeLine(orders(x))
Next
This is my first venture into VBScript, and I've not been able to find any methods of performing this type of action online. As you can see in the above code, I'm trying to create an array (orders). This array can have any number of values, but that number will be specified in document.myform.i.value. So the For loop cycles through all text inputs with an ID of order+x (i.e., order0, order1, order2, order3, order4, etc. up to num).
It seems to be a problem with my orders(x) line, I don't think it recognizes what I mean by getElementById("order" & x), and I'm not sure exactly how to do such a thing. Anyone have any suggestions? It would be much appreciated!
A: I was able to get this working. Thanks to both of you for your time and input. Here is what solved it for me:
Rather than using
document.getElementById("order" & x).value
I set the entire ID as a variable:
temp = "order" & x
document.getElementById(temp).value
It seems to be working as expected. Again, many thanks for the time and effort on this!
A: I can only assume that this is client side VBScript as document.getElementById() isn't accessible from the server.
try objFile.writeLine("order" & x), then check the source to make sure all the elements are in the document.
[As I can't put code in comments...]
That is strange. It looks to me like everything should be working.
Only other thing I can think of is: change
orders(x) = document.getElementById("order" & x).value
objFile.writeLine(orders(x))
to
orders(x) = document.getElementById("order" & x)
objFile.writeLine(orders(x).value)
A: It looks as if you're mixing client vs server-side code.
objFile.writeLine(orders(x))
That is VBScript to write to a file, which you can only do on the server.
document.getElementById
This is client-side code that is usually executed in JavaScript. You can use VBScript on IE on the client, but rarely does anyone do this.
On the server you'd usually refer to form fields that were part of a form tag, not DOM elements (assuming you're using classic ASP), using request("formFieldName").
To make server-side stuff appear on the client (when you build a page) you'd embed it in your HTML like this:
<% = myVariable %>
or like this (as part of a code block):
document.write myVariable
A: Don't you need to change your loop slightly?
For x = 0 To num - 1
E.g., with 4 items you need to iterate from 0 to 3.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: When to use ' (or quote) in Lisp? After making it through the major parts of an introductory Lisp book, I still couldn't understand what the special operator (quote) (or equivalent ') function does, yet this has been all over Lisp code that I've seen.
What does it do?
A: It says "don't evaluate me". For example, if you wanted to use a list as data, and not as code, you'd put a quote in front of it. For example,
(print '(+ 3 4)) prints "(+ 3 4)", whereas
(print (+ 3 4)) prints "7"
A: In Emacs Lisp:
What can be quoted ?
Lists and symbols.
Quoting a number evaluates to the number itself:
'5 is the same as 5.
What happens when you quote lists ?
For example:
'(one two) evaluates to
(list 'one 'two) which evaluates to
(list (intern "one") (intern ("two"))).
(intern "one") creates a symbol named "one" and stores it in a "central" hash-map, so anytime you say 'one then the symbol named "one" will be looked up in that central hash-map.
But what is a symbol ?
For example, in OO languages (Java/JavaScript/Python) a symbol could be represented as an object that has a name field, which is the symbol's name like "one" above, and data and/or code can be associated with this object.
So a symbol in Python could be implemented as:
class Symbol:
def __init__(self,name,code,value):
self.name=name
self.code=code
self.value=value
In Emacs Lisp for example a symbol can have 1) data associated with it AND (at the same time - for the same symbol) 2) code associated with it - depending on the context, either the data or the code gets called.
For example, in Elisp:
(progn
(fset 'add '+ )
(set 'add 2)
(add add add)
)
evaluates to 4.
Because (add add add) evaluates as:
(add add add)
(+ add add)
(+ 2 add)
(+ 2 2)
4
So, for example, using the Symbol class we defined in Python above, this add ELisp-Symbol could be written in Python as Symbol("add",(lambda x,y: x+y),2).
Many thanks for folks on IRC #emacs for explaining symbols and quotes to me.
A: Code is data and data is code. There is no clear distinction between them.
This is a classical statement any lisp programmer knows.
When you quote a code, that code will be data.
1 ]=> '(+ 2 3 4)
;Value: (+ 2 3 4)
1 ]=> (+ 2 3 4)
;Value: 9
When you quote a code, the result will be data that represent that code. So, when you want to work with data that represents a program you quote that program. This is also valid for atomic expressions, not only for lists:
1 ]=> 'code
;Value: code
1 ]=> '10
;Value: 10
1 ]=> '"ok"
;Value: "ok"
1 ]=> code
;Unbound variable: code
Supposing you want to create a programming language embedded in Lisp: you will work with programs that are quoted in Scheme (like '(+ 2 3)) and that are interpreted as code in the language you create, by giving the programs a semantic interpretation. In this case you need to use quote to keep the data, otherwise it will be evaluated in the host language.
A: Other people have answered this question admirably, and Matthias Benkard brings up an excellent warning.
DO NOT USE QUOTE TO CREATE LISTS THAT YOU WILL LATER MODIFY. The spec allows the compiler to treat quoted lists as constants. Often, a compiler will optimize constants by creating a single value for them in memory and then referencing that single value from all locations where the constant appears. In other words, it may treat the constant like an anonymous global variable.
This can cause obvious problems. If you modify a constant, it may very well modify other uses of the same constant in completely unrelated code. For example, you may compare some variable to '(1 1) in some function, and in a completely different function, start a list with '(1 1) and then add more stuff to it. Upon running these functions, you may find that the first function doesn't match things properly anymore, because it's now trying to compare the variable to '(1 1 2 3 5 8 13), which is what the second function returned. These two functions are completely unrelated, but they have an effect on each other because of the use of constants. Even crazier bad effects can happen, like a perfectly normal list iteration suddenly infinite looping.
Use quote when you need a constant list, such as for comparison. Use list when you will be modifying the result.
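A short illustration of that rule (a sketch; what actually happens when you modify a quoted constant is undefined, so results vary by implementation):
(defun quoted-list () '(1 2 3))      ; constant -- treat as read-only
(defun fresh-list () (list 1 2 3))   ; builds a new list on every call
(nconc (fresh-list) (list 4))  ; => (1 2 3 4), always safe
;; (nconc (quoted-list) (list 4)) may destructively grow the constant
;; inside QUOTED-LIST, so later calls could return (1 2 3 4) too.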
A: We use quote when we want to pass an argument itself instead of passing the value of the argument. It mostly comes up when passing lists, pairs, and atoms to procedures,
which are not available in the C programming language (most people start out programming in C, hence the confusion).
This is code in Scheme programming language which is a dialect of lisp and I guess you can understand this code.
(define atom? ; defining a procedure atom?
(lambda (x) ; which as one argument x
(and (not (null? x)) (not(pair? x) )))) ; checks if the argument is atom or not
(atom? '(a b c)) ; since it is a list it is false #f
The last line, (atom? 'abc), passes abc as-is to the procedure to check if abc is an atom or not; but when you pass (atom? abc), it checks for the value of abc and passes that value to it. Since we haven't provided any value to abc, that would be an error.
A: Short answer
Bypass the default evaluation rules and do not evaluate the expression (symbol or s-exp), passing it along to the function exactly as typed.
Long Answer: The Default Evaluation Rule
When a regular (I'll come to that later) function is invoked, all arguments passed to it are evaluated. This means you can write this:
(* (+ a 2)
3)
Which in turn evaluates (+ a 2), by evaluating a and 2. The value of the symbol a is looked up in the current variable binding set, and then replaced. Say a is currently bound to the value 3:
(let ((a 3))
(* (+ a 2)
3))
We'd get (+ 3 2), + is then invoked on 3 and 2 yielding 5. Our original form is now (* 5 3) yielding 15.
Explain quote Already!
Alright. As seen above, all arguments to a function are evaluated, so if you would like to pass the symbol a and not its value, you don't want to evaluate it. Lisp symbols can double both as their values, and markers where you in other languages would have used strings, such as keys to hash tables.
This is where quote comes in. Say you want to plot resource allocations from a Python application, but rather do the plotting in Lisp. Have your Python app do something like this:
print("'(")
while allocating:
if random.random() > 0.5:
print(f"(allocate {random.randint(0, 20)})")
else:
print(f"(free {random.randint(0, 20)})")
...
print(")")
Giving you output looking like this (slightly prettified):
'((allocate 3)
(allocate 7)
(free 14)
(allocate 19)
...)
Remember what I said about quote ("tick") causing the default rule not to apply? Good. What would otherwise happen is that the values of allocate and free are looked up, and we don't want that. In our Lisp, we wish to do:
(dolist (entry allocation-log)
(case (first entry)
(allocate (plot-allocation (second entry)))
(free (plot-free (second entry)))))
For the data given above, the following sequence of function calls would have been made:
(plot-allocation 3)
(plot-allocation 7)
(plot-free 14)
(plot-allocation 19)
But What About list?
Well, sometimes you do want to evaluate the arguments. Say you have a nifty function manipulating a number and a string and returning a list of the resulting ... things. Let's make a false start:
(defun mess-with (number string)
'(value-of-number (1+ number) something-with-string (length string)))
Lisp> (mess-with 20 "foo")
(VALUE-OF-NUMBER (1+ NUMBER) SOMETHING-WITH-STRING (LENGTH STRING))
Hey! That's not what we wanted. We want to selectively evaluate some arguments, and leave the others as symbols. Try #2!
(defun mess-with (number string)
(list 'value-of-number (1+ number) 'something-with-string (length string)))
Lisp> (mess-with 20 "foo")
(VALUE-OF-NUMBER 21 SOMETHING-WITH-STRING 3)
Not Just quote, But backquote
Much better! Incidentally, this pattern is so common in (mostly) macros that there is special syntax for doing just that. The backquote:
(defun mess-with (number string)
`(value-of-number ,(1+ number) something-with-string ,(length string)))
It's like using quote, but with the option to explicitly evaluate some arguments by prefixing them with comma. The result is equivalent to using list, but if you're generating code from a macro you often only want to evaluate small parts of the code returned, so the backquote is more suited. For shorter lists, list can be more readable.
Hey, You Forgot About quote!
So, where does this leave us? Oh right, what does quote actually do? It simply returns its argument(s) unevaluated! Remember what I said in the beginning about regular functions? Turns out that some operators/functions need to not evaluate their arguments. Such as IF -- you wouldn't want the else branch to be evaluated if it wasn't taken, right? So-called special operators, together with macros, work like that. Special operators are also the "axiom" of the language -- minimal set of rules -- upon which you can implement the rest of Lisp by combining them together in different ways.
Back to quote, though:
Lisp> (quote spiffy-symbol)
SPIFFY-SYMBOL
Lisp> 'spiffy-symbol ; ' is just a shorthand ("reader macro"), as shown above
SPIFFY-SYMBOL
Compare to (on Steel-Bank Common Lisp):
Lisp> spiffy-symbol
debugger invoked on a UNBOUND-VARIABLE in thread #<THREAD "initial thread" RUNNING {A69F6A9}>:
The variable SPIFFY-SYMBOL is unbound.
Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT] Exit debugger, returning to top level.
(SB-INT:SIMPLE-EVAL-IN-LEXENV SPIFFY-SYMBOL #<NULL-LEXENV>)
0]
Because there is no spiffy-symbol in the current scope!
Summing Up
quote, backquote (with comma), and list are some of the tools you use to create lists, that are not only lists of values, but as you seen can be used as lightweight (no need to define a struct) data structures!
If you wish to learn more, I recommend Peter Seibel's book Practical Common Lisp for a practical approach to learning Lisp, if you're already into programming at large. Eventually on your Lisp journey, you'll start using packages too. Ron Garret's The Idiot's Guide to Common Lisp Packages will give you a good explanation of those.
Happy hacking!
A: One answer to this question says that QUOTE “creates list data structures”. This isn't quite right. QUOTE is more fundamental than this. In fact, QUOTE is a trivial operator: Its purpose is to prevent anything from happening at all. In particular, it doesn't create anything.
What (QUOTE X) says is basically “don't do anything, just give me X.” X needn't be a list as in (QUOTE (A B C)) or a symbol as in (QUOTE FOO). It can be any object whatever. Indeed, the result of evaluating the list that is produced by (LIST 'QUOTE SOME-OBJECT) will always just return SOME-OBJECT, whatever it is.
Now, the reason that (QUOTE (A B C)) seems as if it created a list whose elements are A, B, and C is that such a list really is what it returns; but at the time the QUOTE form is evaluated, the list has generally already been in existence for a while (as a component of the QUOTE form!), created either by the loader or the reader prior to execution of the code.
One implication of this that tends to trip up newbies fairly often is that it's very unwise to modify a list returned by a QUOTE form. Data returned by QUOTE is, for all intents and purposes, to be considered as part of the code being executed and should therefore be treated as read-only!
A: The quote prevents execution or evaluation of a form, turning it instead into data. In general you can execute the data by then eval'ing it.
quote creates list data structures, for example, the following are equivalent:
(quote a)
'a
It can also be used to create lists (or trees):
(quote (1 2 3))
'(1 2 3)
You're probably best off getting an introductary book on lisp, such as Practical Common Lisp (which is available to read on-line).
A: Quote returns the internal representation of its arguments. After plowing through way too many explanations of what quote doesn't do, that's when the light-bulb went on. If the REPL didn't convert function names to UPPER-CASE when I quoted them, it might not have dawned on me.
So. Ordinary Lisp functions convert their arguments into an internal representation, evaluate the arguments, and apply the function. Quote converts its arguments to an internal representation, and just returns that. Technically it's correct to say that quote says, "don't evaluate", but when I was trying to understand what it did, telling me what it doesn't do was frustrating. My toaster doesn't evaluate Lisp functions either; but that's not how you explain what a toaster does.
A: Another short answer:
quote means "use this without evaluating it", and backquote is quote with back doors left open.
A good reference:
the Emacs Lisp Reference Manual makes it very clear:
9.3 Quoting
The special form quote returns its single argument, as written, without evaluating it. This provides a way to include constant symbols and lists, which are not self-evaluating objects, in a program. (It is not necessary to quote self-evaluating objects such as numbers, strings, and vectors.)
Special Form: quote object
This special form returns object, without evaluating it.
Because quote is used so often in programs, Lisp provides a convenient read syntax for it. An apostrophe character (‘'’) followed by a Lisp object (in read syntax) expands to a list whose first element is quote, and whose second element is the object. Thus, the read syntax 'x is an abbreviation for (quote x).
Here are some examples of expressions that use quote:
(quote (+ 1 2))
⇒ (+ 1 2)
(quote foo)
⇒ foo
'foo
⇒ foo
''foo
⇒ (quote foo)
'(quote foo)
⇒ (quote foo)
9.4 Backquote
Backquote constructs allow you to quote a list, but selectively evaluate elements of that list. In the simplest case, it is identical to the special form quote (described in the previous section; see Quoting). For example, these two forms yield identical results:
`(a list of (+ 2 3) elements)
⇒ (a list of (+ 2 3) elements)
'(a list of (+ 2 3) elements)
⇒ (a list of (+ 2 3) elements)
The special marker ‘,’ inside of the argument to backquote indicates a value that isn’t constant. The Emacs Lisp evaluator evaluates the argument of ‘,’, and puts the value in the list structure:
`(a list of ,(+ 2 3) elements)
⇒ (a list of 5 elements)
Substitution with ‘,’ is allowed at deeper levels of the list structure also. For example:
`(1 2 (3 ,(+ 4 5)))
⇒ (1 2 (3 9))
You can also splice an evaluated value into the resulting list, using the special marker ‘,@’. The elements of the spliced list become elements at the same level as the other elements of the resulting list. The equivalent code without using ‘`’ is often unreadable. Here are some examples:
(setq some-list '(2 3))
⇒ (2 3)
(cons 1 (append some-list '(4) some-list))
⇒ (1 2 3 4 2 3)
`(1 ,@some-list 4 ,@some-list)
⇒ (1 2 3 4 2 3)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "133"
}
|
Q: Return XML from a controller's action as an ActionResult? What is the best way to return XML from a controller's action in ASP.NET MVC? There is a nice way to return JSON, but not for XML. Do I really need to route the XML through a View, or should I do the not-best-practice way of Response.Write-ing it?
A: I've had to do this recently for a Sitecore project which uses a method to create an XmlDocument from a Sitecore Item and its children and returns it from the controller ActionResult as a File. My solution:
public virtual ActionResult ReturnXml()
{
return File(Encoding.UTF8.GetBytes(GenerateXmlFeed().OuterXml), "text/xml");
}
A: If you're building the XML using the excellent Linq-to-XML framework, then this approach will be helpful.
I create an XDocument in the action method.
public ActionResult MyXmlAction()
{
// Create your own XDocument according to your requirements
var xml = new XDocument(
new XElement("root",
new XAttribute("version", "2.0"),
new XElement("child", "Hello World!")));
return new XmlActionResult(xml);
}
This reusable, custom ActionResult serialises the XML for you.
public sealed class XmlActionResult : ActionResult
{
private readonly XDocument _document;
public Formatting Formatting { get; set; }
public string MimeType { get; set; }
public XmlActionResult(XDocument document)
{
if (document == null)
throw new ArgumentNullException("document");
_document = document;
// Default values
MimeType = "text/xml";
Formatting = Formatting.None;
}
public override void ExecuteResult(ControllerContext context)
{
context.HttpContext.Response.Clear();
context.HttpContext.Response.ContentType = MimeType;
using (var writer = new XmlTextWriter(context.HttpContext.Response.OutputStream, Encoding.UTF8) { Formatting = Formatting })
_document.WriteTo(writer);
}
}
You can specify a MIME type (such as application/rss+xml) and whether the output should be indented if you need to. Both properties have sensible defaults.
If you need an encoding other than UTF8, then it's simple to add a property for that too.
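For example, a hypothetical action returning an indented RSS feed could configure the result via the properties defined above (BuildFeedDocument is an assumed helper, not part of the class):
public ActionResult Feed()
{
    XDocument xml = BuildFeedDocument(); // assumed helper that returns an XDocument
    return new XmlActionResult(xml)
    {
        MimeType = "application/rss+xml",   // overrides the text/xml default
        Formatting = Formatting.Indented    // overrides the Formatting.None default
    };
}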
A: Use one of these methods:
public ContentResult GetXml()
{
string xmlString = "your xml data";
return Content(xmlString, "text/xml");
}
or
public string GetXml()
{
string xmlString = "your xml data";
Response.ContentType = "text/xml";
return xmlString;
}
A: If you are only interested in returning XML for a request, and you already have your XML "chunk", you can just do (as an action in your controller):
public string Xml()
{
Response.ContentType = "text/xml";
return yourXmlChunk;
}
A: Finally managed to get this working, and thought I would document how here in the hope of saving others the pain.
Environment
*
*VS2012
*SQL Server 2008R2
*.NET 4.5
*ASP.NET MVC4 (Razor)
*Windows 7
Supported Web Browsers
*
*FireFox 23
*IE 10
*Chrome 29
*Opera 16
*Safari 5.1.7 (last one for Windows?)
My task was, on a UI button click, to call a method on my controller (with some params) and then have it return MS-Excel XML via an XSLT transform. The returned MS-Excel XML would then cause the browser to pop up the Open/Save dialog. This had to work in all the browsers (listed above).
At first I tried Ajax, creating a dynamic anchor with the "download" attribute for the filename,
but that only worked in about 3 of the 5 browsers (FF, Chrome, Opera) and not in IE or Safari.
And there were issues with trying to programmatically fire the anchor's Click event to cause the actual "download".
What I ended up doing was using an "invisible" IFRAME and it worked for all 5 browsers!
So here is what I came up with:
[please note that I am by no means an html/javascript guru and have only included the relevant code]
HTML (snippet of relevant bits)
<div id="docxOutput">
<iframe id="ifOffice" name="ifOffice" width="0" height="0"
hidden="hidden" seamless='seamless' frameBorder="0" scrolling="no"></iframe></div>
JAVASCRIPT
//url to call in the controller to get MS-Excel xml
var _lnkToControllerExcel = '@Url.Action("ExportToExcel", "Home")';
$("#btExportToExcel").on("click", function (event) {
event.preventDefault();
$("#ProgressDialog").show();//like an ajax loader gif
//grab the basket as xml
var keys = GetMyKeys();//returns delimited list of keys (for selected items from UI)
//potential problem - the querystring might be too long??
//2K in IE8
//4096 characters in ASP.Net
//parameter key names must match signature of Controller method
var qsParams = [
'keys=' + keys,
'locale=' + '@locale'
].join('&');
//The element with id="ifOffice"
var officeFrame = $("#ifOffice")[0];
//construct the url for the iframe
var srcUrl = _lnkToControllerExcel + '?' + qsParams;
try {
if (officeFrame != null) {
//Controller method can take up to 4 seconds to return
officeFrame.setAttribute("src", srcUrl);
}
else {
alert('ExportToExcel - failed to get reference to the office iframe!');
}
} catch (ex) {
var errMsg = "ExportToExcel Button Click Handler Error: ";
HandleException(ex, errMsg);
}
finally {
//Need a small 3 second delay (for the generated MS-Excel XML to come down from the server)
setTimeout(function () {
//after the timeout then hide the loader graphic
$("#ProgressDialog").hide();
}, 3000);
//clean up
officeFrame = null;
srcUrl = null;
qsParams = null;
keys = null;
}
});
C# SERVER-SIDE (code snippet)
@Drew created a custom ActionResult called XmlActionResult which I modified for my purpose.
Return XML from a controller's action as an ActionResult?
My Controller method (returns ActionResult)
*
*passes the keys parameter to a SQL Server stored proc that generates an XML
*that XML is then transformed via xslt into an MS-Excel xml (XmlDocument)
*creates instance of the modified XmlActionResult and returns it
XmlActionResult result = new XmlActionResult(excelXML, "application/vnd.ms-excel");
string version = DateTime.Now.ToString("dd_MMM_yyyy_hhmmsstt");
string fileMask = "LabelExport_{0}.xml";
result.DownloadFilename = string.Format(fileMask, version);
return result;
Here is the main modification to the XmlActionResult class that @Drew created:
public override void ExecuteResult(ControllerContext context)
{
string lastModDate = DateTime.Now.ToString("R");
//Content-Disposition: attachment; filename="<file name.xml>"
// must set the Content-Disposition so that the web browser will pop the open/save dialog
string disposition = "attachment; " +
"filename=\"" + this.DownloadFilename + "\"; ";
context.HttpContext.Response.Clear();
context.HttpContext.Response.ClearContent();
context.HttpContext.Response.ClearHeaders();
context.HttpContext.Response.Cookies.Clear();
context.HttpContext.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);// Stop Caching in IE
context.HttpContext.Response.Cache.SetNoStore();// Stop Caching in Firefox
context.HttpContext.Response.Cache.SetMaxAge(TimeSpan.Zero);
context.HttpContext.Response.CacheControl = "private";
context.HttpContext.Response.Cache.SetLastModified(DateTime.Now.ToUniversalTime());
context.HttpContext.Response.ContentType = this.MimeType;
context.HttpContext.Response.Charset = System.Text.UTF8Encoding.UTF8.WebName;
//context.HttpContext.Response.Headers.Add("name", "value");
context.HttpContext.Response.Headers.Add("Last-Modified", lastModDate);
context.HttpContext.Response.Headers.Add("Pragma", "no-cache"); // HTTP 1.0.
context.HttpContext.Response.Headers.Add("Expires", "0"); // Proxies.
context.HttpContext.Response.AppendHeader("Content-Disposition", disposition);
using (var writer = new XmlTextWriter(context.HttpContext.Response.OutputStream, this.Encoding)
{ Formatting = this.Formatting })
this.Document.WriteTo(writer);
}
That was basically it.
Hope it helps others.
A: There is an XmlResult (and much more) in MVC Contrib. Take a look at http://www.codeplex.com/MVCContrib
A: return this.Content(xmlString, "text/xml");
A: Use MVCContrib's XmlResult Action.
For reference here is their code:
public class XmlResult : ActionResult
{
private object objectToSerialize;
/// <summary>
/// Initializes a new instance of the <see cref="XmlResult"/> class.
/// </summary>
/// <param name="objectToSerialize">The object to serialize to XML.</param>
public XmlResult(object objectToSerialize)
{
this.objectToSerialize = objectToSerialize;
}
/// <summary>
/// Gets the object to be serialized to XML.
/// </summary>
public object ObjectToSerialize
{
get { return this.objectToSerialize; }
}
/// <summary>
/// Serialises the object that was passed into the constructor to XML and writes the corresponding XML to the result stream.
/// </summary>
/// <param name="context">The controller context for the current request.</param>
public override void ExecuteResult(ControllerContext context)
{
if (this.objectToSerialize != null)
{
context.HttpContext.Response.Clear();
var xs = new System.Xml.Serialization.XmlSerializer(this.objectToSerialize.GetType());
context.HttpContext.Response.ContentType = "text/xml";
xs.Serialize(context.HttpContext.Response.Output, this.objectToSerialize);
}
}
}
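A minimal usage sketch (Person is a hypothetical public class that XmlSerializer can handle):
public ActionResult Details(int id)
{
    // Hypothetical serializable model; XmlSerializer needs a public type
    // with a parameterless constructor and public read/write properties.
    var person = new Person { Id = id, Name = "Alice" };
    return new XmlResult(person); // written out as text/xml by ExecuteResult above
}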
A: A simple option that lets you use streams and the like is return File(stream, "text/xml");.
A: Here is a simple way of doing it:
var xml = new XDocument(
new XElement("root",
new XAttribute("version", "2.0"),
new XElement("child", "Hello World!")));
MemoryStream ms = new MemoryStream();
xml.Save(ms);
return File(new MemoryStream(ms.ToArray()), "text/xml", "HelloWorld.xml");
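A slightly leaner variant of the same idea rewinds the stream instead of copying the buffer (the FileStreamResult returned by File() reads from the stream's current position):
var ms = new MemoryStream();
xml.Save(ms);
ms.Position = 0; // rewind so File() streams from the start
return File(ms, "text/xml", "HelloWorld.xml");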
A: A small variation of the answer from Drew Noakes that uses XDocument's Save() method.
public sealed class XmlActionResult : ActionResult
{
private readonly XDocument _document;
public string MimeType { get; set; }
public XmlActionResult(XDocument document)
{
if (document == null)
throw new ArgumentNullException("document");
_document = document;
// Default values
MimeType = "text/xml";
}
public override void ExecuteResult(ControllerContext context)
{
context.HttpContext.Response.Clear();
context.HttpContext.Response.ContentType = MimeType;
_document.Save(context.HttpContext.Response.OutputStream);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "145"
}
|
Q: How do I list all cron jobs for all users? Is there a command or an existing script that will let me view all of a *NIX system's scheduled cron jobs at once? I'd like it to include all of the user crontabs, as well as /etc/crontab, and whatever's in /etc/cron.d. It would also be nice to see the specific commands run by run-parts in /etc/crontab.
Ideally, I'd like the output in a nice column form and ordered in some meaningful way.
I could then merge these listings from multiple servers to view the overall "schedule of events."
I was about to write such a script myself, but if someone's already gone to the trouble...
A: for user in $(cut -f1 -d: /etc/passwd);
do
echo $user; crontab -u $user -l;
done
A: The following strips away comments, empty lines, and errors from users with no crontab. All you're left with is a clear list of users and their jobs.
Note the use of sudo in the 2nd line. If you're already root, remove that.
for USER in $(cut -f1 -d: /etc/passwd); do \
USERTAB="$(sudo crontab -u "$USER" -l 2>&1)"; \
FILTERED="$(echo "$USERTAB"| grep -vE '^#|^$|no crontab for|cannot use this program')"; \
if ! test -z "$FILTERED"; then \
echo "# ------ $(tput bold)$USER$(tput sgr0) ------"; \
echo "$FILTERED"; \
echo ""; \
fi; \
done
Example output:
# ------ root ------
0 */6 * * * /usr/local/bin/disk-space-notify.sh
45 3 * * * /opt/mysql-backups/mysql-backups.sh
5 7 * * * /usr/local/bin/certbot-auto renew --quiet --no-self-upgrade
# ------ sammy ------
55 * * * * wget -O - -q -t 1 https://www.example.com/cron.php > /dev/null
I use this on Ubuntu (12 thru 16) and Red Hat (5 thru 7).
A: On Solaris, for a particular known user name:
crontab -l username
To get all user's jobs at once on Solaris, much like other posts above:
for user in $(cut -f1 -d: /etc/passwd); do crontab -l $user 2>/dev/null; done
Update:
Please stop suggesting edits that are wrong on Solaris.
A: Depends on your version of cron. Using Vixie cron on FreeBSD, I can do something like this:
(cd /var/cron/tabs && grep -vH ^# *)
if I want it more tab-delimited, I might do something like this:
(cd /var/cron/tabs && grep -vH ^# * | sed "s/:/ /")
Where that's a literal tab in the sed replacement portion.
It may be more system independent to loop through the users in /etc/passwd and do crontab -l -u $user for each of them.
A: To list the crontab for a particular user:
sudo crontab -u userName -l
You can also look in the system cron directories:
cd /etc/cron.daily/
ls -l
cat filename
Each file there lists its schedule; the same goes for:
cd /etc/cron.d/
ls -l
cat filename
A: I made the one-liner below, and it worked for me to list all cron jobs for all users.
cat /etc/passwd |awk -F ':' '{print $1}'|while read a;do crontab -l -u ${a} ; done
A: Thanks for this very useful script. I had some tiny problems running it on old systems (Red Hat Enterprise 3, which handles egrep and tabs in strings differently), and on other systems with nothing in /etc/cron.d/ (the script then ended with an error). So here is a patch to make it work in such cases:
2a3,4
> #See: http://stackoverflow.com/questions/134906/how-do-i-list-all-cron-jobs-for-all-users
>
27c29,30
< match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
---
> #match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
> match=$(echo "${line}" | egrep -o 'run-parts.*')
51c54,57
< cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
---
> sys_cron_num=$(ls /etc/cron.d | wc -l | awk '{print $1}')
> if [ "$sys_cron_num" != 0 ]; then
> cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
> fi
67c73
< sed "1i\mi\th\td\tm\tw\tuser\tcommand" |
---
> sed "1i\mi${tab}h${tab}d${tab}m${tab}w${tab}user${tab}command" |
I'm not really sure the changes in the first egrep are a good idea, but well, this script has been tested on RHEL3,4,5 and Debian5 without any problem. Hope this helps!
A: This will show all crontab entries from all users.
sed 's/^\([^:]*\):.*$/crontab -u \1 -l 2>\&1/' /etc/passwd | sh | grep -v "no crontab for"
A: Building on top of @Kyle's answer:
for user in $(tail -n +11 /etc/passwd | cut -f1 -d:); do echo $user; crontab -u $user -l; done
to avoid the comments usually at the top of /etc/passwd.
And on macOS:
for user in $(dscl . -list /users | cut -f1 -d:); do echo $user; crontab -u $user -l; done
A: I think a better one-liner is below. For example, if you have users in NIS or LDAP, they wouldn't be in /etc/passwd. This will give you the crontabs of every user that has logged in.
for I in `lastlog | grep -v Never | cut -f1 -d' '`; do echo $I ; crontab -l -u $I ; done
A: With apologies and thanks to yukondude.
I've tried to summarise the timing settings for easy reading, though it's not a perfect job, and I don't touch 'every Friday' or 'only on Mondays' stuff.
This is version 10 - it now:
*
*runs much much faster
*has optional progress characters so you could improve the speed further.
*uses a divider line to separate header and output.
*outputs in a compact format when all the timing intervals encountered can be summarised.
*Accepts Jan...Dec descriptors for months-of-the-year
*Accepts Mon...Sun descriptors for days-of-the-week
*tries to handle debian-style dummying-up of anacron when it is missing
*tries to deal with crontab lines which run a file after pre-testing executability using "[ -x ... ]"
*tries to deal with crontab lines which run a file after pre-testing executability using "command -v"
*allows the use of interval spans and lists.
*supports run-parts usage in user-specific /var/spool crontab files.
I am now publishing the script in full here.
https://gist.github.com/myshkin-uk/d667116d3e2d689f23f18f6cd3c71107
A: Depends on your Linux version, but I use:
tail -n 1000 /var/spool/cron/*
as root. Very simple and very short.
Gives me output like:
==> /var/spool/cron/root <==
15 2 * * * /bla
==> /var/spool/cron/my_user <==
*/10 1 * * * /path/to/script
A: I ended up writing a script (I'm trying to teach myself the finer points of bash scripting, so that's why you don't see something like Perl here). It's not exactly a simple affair, but it does most of what I need. It uses Kyle's suggestion for looking up individual users' crontabs, but also deals with /etc/crontab (including the scripts launched by run-parts in /etc/cron.hourly, /etc/cron.daily, etc.) and the jobs in the /etc/cron.d directory. It takes all of those and merges them into a display something like the following:
mi h d m w user command
09,39 * * * * root [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -r -0 rm
47 */8 * * * root rsync -axE --delete --ignore-errors / /mirror/ >/dev/null
17 1 * * * root /etc/cron.daily/apt
17 1 * * * root /etc/cron.daily/aptitude
17 1 * * * root /etc/cron.daily/find
17 1 * * * root /etc/cron.daily/logrotate
17 1 * * * root /etc/cron.daily/man-db
17 1 * * * root /etc/cron.daily/ntp
17 1 * * * root /etc/cron.daily/standard
17 1 * * * root /etc/cron.daily/sysklogd
27 2 * * 7 root /etc/cron.weekly/man-db
27 2 * * 7 root /etc/cron.weekly/sysklogd
13 3 * * * archiver /usr/local/bin/offsite-backup 2>&1
32 3 1 * * root /etc/cron.monthly/standard
36 4 * * * yukon /home/yukon/bin/do-daily-stuff
5 5 * * * archiver /usr/local/bin/update-logs >/dev/null
Note that it shows the user, and more-or-less sorts by hour and minute so that I can see the daily schedule.
So far, I've tested it on Ubuntu, Debian, and Red Hat AS.
#!/bin/bash
# System-wide crontab file and cron job directory. Change these for your system.
CRONTAB='/etc/crontab'
CRONDIR='/etc/cron.d'
# Single tab character. Annoyingly necessary.
tab=$(echo -en "\t")
# Given a stream of crontab lines, exclude non-cron job lines, replace
# whitespace characters with a single space, and remove any spaces from the
# beginning of each line.
function clean_cron_lines() {
while read line ; do
echo "${line}" |
egrep --invert-match '^($|\s*#|\s*[[:alnum:]_]+=)' |
sed --regexp-extended "s/\s+/ /g" |
sed --regexp-extended "s/^ //"
done;
}
# Given a stream of cleaned crontab lines, echo any that don't include the
# run-parts command, and for those that do, show each job file in the run-parts
# directory as if it were scheduled explicitly.
function lookup_run_parts() {
while read line ; do
match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
if [[ -z "${match}" ]] ; then
echo "${line}"
else
cron_fields=$(echo "${line}" | cut -f1-6 -d' ')
cron_job_dir=$(echo "${match}" | awk '{print $NF}')
if [[ -d "${cron_job_dir}" ]] ; then
for cron_job_file in "${cron_job_dir}"/* ; do # */ <not a comment>
[[ -f "${cron_job_file}" ]] && echo "${cron_fields} ${cron_job_file}"
done
fi
fi
done;
}
# Temporary file for crontab lines.
temp=$(mktemp) || exit 1
# Add all of the jobs from the system-wide crontab file.
cat "${CRONTAB}" | clean_cron_lines | lookup_run_parts >"${temp}"
# Add all of the jobs from the system-wide cron directory.
cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
# Add each user's crontab (if it exists). Insert the user's name between the
# five time fields and the command.
while read user ; do
crontab -l -u "${user}" 2>/dev/null |
clean_cron_lines |
sed --regexp-extended "s/^((\S+ +){5})(.+)$/\1${user} \3/" >>"${temp}"
done < <(cut --fields=1 --delimiter=: /etc/passwd)
# Output the collected crontab lines. Replace the single spaces between the
# fields with tab characters, sort the lines by hour and minute, insert the
# header line, and format the results as a table.
cat "${temp}" |
sed --regexp-extended "s/^(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(.*)$/\1\t\2\t\3\t\4\t\5\t\6\t\7/" |
sort --numeric-sort --field-separator="${tab}" --key=2,1 |
sed "1i\mi\th\td\tm\tw\tuser\tcommand" |
column -s"${tab}" -t
rm --force "${temp}"
A: Since it is a matter of looping through a file (/etc/passwd) and performing an action, the proper approach from How can I read a file (data stream, variable) line-by-line (and/or field-by-field)? is missing from the answers here:
while IFS=":" read -r user _
do
echo "crontab for user ${user}:"
crontab -u "$user" -l
done < /etc/passwd
This reads /etc/passwd line by line using : as field delimiter. By saying read -r user _, we make $user hold the first field and _ the rest (it is just a junk variable to ignore fields).
This way, we can then call crontab -u using the variable $user, which we quote for safety (what if it contains spaces? It is unlikely in such a file, but you can never know).
A: Under Ubuntu or Debian, you can view crontabs in /var/spool/cron/crontabs/; there is a file for each user in there. That's only for user-specific crontabs, of course.
For Red Hat 6/7 and CentOS, the crontabs are under /var/spool/cron/.
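For example, to inspect them directly (requires root; 'alice' is a placeholder username):
# Debian/Ubuntu
sudo ls /var/spool/cron/crontabs/
sudo cat /var/spool/cron/crontabs/alice
# Red Hat/CentOS
sudo ls /var/spool/cron/
sudo cat /var/spool/cron/alice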
A: I tend to use the following small commands to list all jobs for a single user and for all users on Unix-based operating systems with a modern bash console:
1. Single user
echo "Jobs owned by $USER" && crontab -l -u $USER
2. All users
for wellknownUser in $(cut -f1 -d: /etc/passwd);
do
echo "Jobs owned by $wellknownUser";
crontab -l -u $wellknownUser;
echo -e "\n";
sleep 2; # (optional sleep 2 seconds) while drinking a coffee
done
A: A small refinement of Kyle Burton's answer with improved output formatting:
#!/bin/bash
for user in $(cut -f1 -d: /etc/passwd)
do echo $user && crontab -u $user -l
echo " "
done
A: getent passwd | cut -d: -f1 | perl -e'while(<>){chomp;$l = `crontab -u $_ -l 2>/dev/null`;print "$_\n$l\n" if $l}'
This avoids messing with passwd directly, skips users that have no cron entries, and for those who have them, prints the username as well as their crontab.
Mostly dropping this here, though, so I can find it later in case I ever need to search for it again.
A: To get the list as the root user:
for user in $(cut -f1 -d: /etc/passwd); do echo $user; sudo crontab -u $user -l; done
A: You would have to run this as root, but:
for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done
will loop over each user name listing out their crontab. The crontabs are owned by the respective users so you won't be able to see another user's crontab w/o being them or root.
Edit
if you want to know which user a crontab belongs to, use echo $user
for user in $(cut -f1 -d: /etc/passwd); do echo $user; crontab -u $user -l; done
A: If you check a cluster using NIS, the only way to see if a user has a crontab entry is, according to Matt's answer, /var/spool/cron/tabs:
grep -v "#" -R /var/spool/cron/tabs
A: I like the simple one-liner answer above:
for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done
But Solaris does not have the -u flag and does not print the user it's checking, so you can modify it like so:
for user in $(cut -f1 -d: /etc/passwd); do echo User:$user; crontab -l $user 2>&1 | grep -v crontab; done
You will get a list of users without the errors thrown by crontab when an account is not allowed to use cron etc. Be aware that in Solaris, roles can be in /etc/passwd too (see /etc/user_attr).
A: This script worked for me in CentOS to list all crons in the environment:
sudo cat /etc/passwd | sed 's/^\([^:]*\):.*$/sudo crontab -u \1 -l 2>\&1/' | grep -v "no crontab for" | sh
A: While many of the answers produce useful results, I think the hassle of maintaining a complex script for this task is not worth it. This is mainly because most distros use different cron daemons.
Watch and learn, kids & elders.
$ \cat ~jaroslav/bin/ls-crons
#!/bin/bash
getent passwd | awk -F: '{ print $1 }' | xargs -I% sh -c 'crontab -l -u % | sed "/^$/d; /^#/d; s/^/% /"' 2>/dev/null
echo
cat /etc/crontab /etc/anacrontab 2>/dev/null | sed '/^$/d; /^#/d;'
echo
run-parts --list /etc/cron.hourly;
run-parts --list /etc/cron.daily;
run-parts --list /etc/cron.weekly;
run-parts --list /etc/cron.monthly;
Run it like this:
$ sudo ls-crons
Sample output (Gentoo)
$ sudo ~jaroslav/bin/ls-crons
jaroslav */5 * * * * mv ~/java_error_in_PHPSTORM* ~/tmp 2>/dev/null
jaroslav 5 */24 * * * ~/bin/Find-home-files
jaroslav * 7 * * * cp /T/fortrabbit/ssh-config/fapps.tsv /home/jaroslav/reference/fortrabbit/fapps
jaroslav */8 1 * * * make -C /T/fortrabbit/ssh-config discover-apps # >/dev/null
jaroslav */7 * * * * getmail -r jazzoslav -r fortrabbit 2>/dev/null
jaroslav */1 * * * * /home/jaroslav/bin/checkmail
jaroslav * 9-18 * * * getmail -r fortrabbit 2>/dev/null
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22
1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
@monthly 45 cron.monthly nice run-parts /etc/cron.monthly
/etc/cron.hourly/0anacron
/etc/cron.daily/logrotate
/etc/cron.daily/man-db
/etc/cron.daily/mlocate
/etc/cron.weekly/mdadm
/etc/cron.weekly/pfl
Sample output (Ubuntu)
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
/etc/cron.hourly/btrfs-quota-cleanup
/etc/cron.hourly/ntpdate-debian
/etc/cron.daily/apport
/etc/cron.daily/apt-compat
/etc/cron.daily/apt-show-versions
/etc/cron.daily/aptitude
/etc/cron.daily/bsdmainutils
/etc/cron.daily/dpkg
/etc/cron.daily/logrotate
/etc/cron.daily/man-db
/etc/cron.daily/mlocate
/etc/cron.daily/passwd
/etc/cron.daily/popularity-contest
/etc/cron.daily/ubuntu-advantage-tools
/etc/cron.daily/update-notifier-common
/etc/cron.daily/upstart
/etc/cron.weekly/apt-xapian-index
/etc/cron.weekly/man-db
/etc/cron.weekly/update-notifier-common
A: This script outputs the crontabs to a file and also lists all users, confirming those which have no crontab entry:
for user in $(cut -f1 -d: /etc/passwd); do
echo $user >> crontab.bak
echo "" >> crontab.bak
crontab -u $user -l >> crontab.bak 2>&1
done
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "967"
}
|
Q: In Source Safe is there a way to protect crucial files from being modified by some developers? I'd sooner not just permanently have a bunch of files checked out to me, but I'd like a way to prevent some really critical files being changed without my being aware of it.
PS Yes I know SourceSafe is terrible.
A: You can only set access rights on a project-by-project basis.
This is done by running the SourceSafe Administration application and then going to Tools > Rights by Project. Select the required project and then give users the required privileges.
In order to protect a subset of files, place them in a separate project, and hence protect the subset.
When you go into SourceSafe, if you set the working folder of the subfolder to be the same as the parent, then when you do a Get Latest etc. all the files will be in the same folder. If you want the protected files to be in a separate folder, then set the working folder accordingly.
A: It's been a while since I've had to use SourceSafe, but I don't think it has this kind of functionality built in.
Can you set up a separate repository/instance that excludes the users who shouldn't be allowed to modify them?
Or failing that, just keep the files always checked out on your machine :P
A: Check them out with an exclusive lock.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/134914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|