Q: How do I select the Count(*) of an nHibernate Subquery's results I need to do the following for the purposes of paging a query in nHibernate:
Select count(*) from
(Select e.ID,e.Name from Object as e where...)
I have tried the following,
select count(*) from Object e where e = (Select distinct e.ID,e.Name from ...)
and I get an nHibernate Exception saying I cannot convert Object to int32.
Any ideas on the required syntax?
EDIT
The Subquery uses a distinct clause, I cannot replace the e.ID,e.Name with Count(*) because Count(*) distinct is not a valid syntax, and distinct count(*) is meaningless.
A: Solved my own question by modifying Geir-Tore's answer:
IList results = session.CreateMultiQuery()
.Add(session.CreateQuery("from Orders o").SetFirstResult(pageindex).SetMaxResults(pagesize))
.Add(session.CreateQuery("select count(distinct e.Id) from Orders o where..."))
.List();
return results;
A: NHibernate 3.0 adds LINQ and QueryOver support.
Try this
int count = session.QueryOver<Orders>().RowCount();
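If you prefer the LINQ provider over QueryOver, a roughly equivalent count looks like this (a sketch; it assumes the Orders entity is mapped and NHibernate.Linq is referenced):
using System.Linq;
using NHibernate.Linq; // provides the session.Query<T>() extension

int count = session.Query<Orders>().Count(); // translates to select count(*)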
A: var session = GetSession();
var criteria = session.CreateCriteria(typeof(Order))
.Add(Restrictions.Eq("Product", product))
.SetProjection(Projections.CountDistinct("Price"));
return (int) criteria.UniqueResult();
A: Here is a draft of how I do it:
Query:
public IList GetOrders(int pageindex, int pagesize)
{
IList results = session.CreateMultiQuery()
.Add(session.CreateQuery("from Orders o").SetFirstResult(pageindex).SetMaxResults(pagesize))
.Add(session.CreateQuery("select count(*) from Orders o"))
.List();
return results;
}
ObjectDataSource:
[DataObjectMethod(DataObjectMethodType.Select)]
public DataTable GetOrders(int startRowIndex, int maximumRows)
{
IList result = dao.GetOrders(startRowIndex, maximumRows);
_count = Convert.ToInt32(((IList)result[1])[0]);
return DataTableFromIList((IList)result[0]); //Basically creates a DataTable from the IList of Orders
}
A: If you just need e.Id,e.Name:
select count(*) from Object where.....
A: I prefer,
public IList GetOrders(int pageindex, int pagesize, out int total)
{
var criteria = session.CreateCriteria(typeof(Order)); // base criteria; add your filters here
var wCriteriaCount = (ICriteria) criteria.Clone(); // clone it before paging is applied
wCriteriaCount.SetProjection(Projections.RowCount());
total = Convert.ToInt32(wCriteriaCount.UniqueResult());
return criteria.SetFirstResult(pageindex).SetMaxResults(pagesize).List();
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Where can I find a good primer on network administration? Things like DHCP, IP addresses, configurations...that kind of thing. I have a Google search open, and it seems like there's just so much out there, I'm not sure where to begin. Perhaps there are some especially good websites/books/blogs out there that might help. Thanks
A: Network administration is a very broad field, and just about every organization will have its own ideas about the skills that are required. A good understanding of fundamentals never hurts, though, and one of the best books I've ever encountered for that purpose is Howard C. Berkowitz's Designing Addressing Architectures for Routing and Switching.
A: You might like to look at the book The Practice of System and Network Administration (Amazon link).
The first edition of this was an excellent book and this new edition has also received glowing reviews.
A: If you're looking for a good primer on the mechanisms that allow the internet to work, I learned the most about it from Security Now! Unfortunately, around episode 75, it starts getting really bad and turns into an hour long infomercial for the Sony E-reader, but up until then is really good.
Having a fundamental understanding of what makes all of this work makes finding problems so much easier.
A: For the basics, what you need to start with is a TCP/IP introduction. The actual act of administration varies from one OS to another, so you need to mention which OS you are using.
You might want to ask this question on the administrators' version of Stack Overflow: https://serverfault.com/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Before XML became a standard and given all its shortcomings, what made XML so popular? Yes, XML is human readable, but so are comma-delimited text and properties files.
XML is bloated, hard to parse, hard to modify in code, plus a ton of other problems that I can think of.
My question is: what are XML's most attractive qualities that have made it so popular?
A: *
*You can be given an xml file and have a chance at understanding what the data means by reading it without needing a separate specification of your pre-xml data format.
*Tools can be used to work with xml generically. Before, when everybody used different file formats (comma separated, binary, etc.), you'd need to write a custom tool for each.
*You can extend it by adding a new tag into the schema with a default value. Done correctly, that doesn't break the old code that parses the xml but doesn't know about the new tag. That usually isn't true with proprietary formats.
*Probably the main thing that makes it popular is it looks a bit like HTML, which lots of people understood previously. So it became popular, and then because it was popular it became more popular, because it's nice to work with one standard that everybody knows.
*A bad thing is that xml is usually a lot bigger than the formats that used to be used, because of all the tags and because it's text based. But, as computers are bigger now, we can often handle that, and it's worth trading size for better self-describing data.
*You can get off the shelf code/libraries that will parse/write xml.
A: It has many advantages, and few shortcomings. The main problem is the increased file size and slower processing. However, there are advantages:
*
*it is structured, so you write a parser only once
*it supports data with nested structure (hierarchies, trees, etc.)
*you can embed multiple types of data structure in a single XML
*you can describe the schema (data types, etc.) with a standard language (XSD, ...)
A: How about the fact that it supports a standardized query language, XPath? That's pretty useful for me.
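For example, against a small document like the one below (made up for illustration), XPath expressions select nodes declaratively:
<orders>
  <order id="1"><customer>Alice</customer><total>9.99</total></order>
  <order id="2"><customer>Bob</customer><total>24.50</total></order>
</orders>
Here /orders/order[@id='2']/customer selects Bob's customer element, and /orders/order[total > 10] selects every order with a total over 10.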
A: Do you remember the days before XML became popular? Data just wasn't easily interchangeable -- one program would take .csv files, the next .xls, the next EBCDIC-formatted files. XML has its weaknesses, but it's structured, which makes it parsable and transformable.
As you point out, CSV files are pretty portable. However, there's no meaning to them. What does column(14) mean to me? As opposed to <customer id="14"/>?
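To make that concrete, compare the same made-up record in both formats:
42,Smith,2008-09-22,14
<order id="42">
  <customer id="14">Smith</customer>
  <date>2008-09-22</date>
</order>
The CSV line needs an external spec to interpret; the XML carries its field names with it.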
A: Some inherent qualities of XML that make it so popular and useful:
*
*XML represents a tree, and tree-like structures are a very common pattern in programming. This is an evolutionary leap from record-based representations like CSV, made possible by today's cheap computing power and bandwidth.
*XML strikes a good balance between human factors (it is plain text, and fairly legible) and computing practicalities (terseness, ease in parsing, expressiveness, extensibility, etc).
A: XML provides a very straightforward way to represent data. Parsing is fairly easy - it's a very regular grammar and lends itself to straight forward recursive descent parsing. This makes it easy for data consumers and producers to exchange information without really having to know too much about their respective applications and internals.
It is, however, an extremely inefficient way to represent data and lends itself to being abused horribly. An example of this is an object interface I worked with that, instead of exporting constructors and properties for particular objects, required me to author XML programmatically and pass in the resulting XML to the single constructor. Similarly, XML does not lend itself well to large data sets that may require random access without creating an added cataloging system (ie, if I have a thousand page document in XML, I will need to parse nearly the entire file to get to page 999, assuming the page data is ordered), whereas I'd be better off putting the actual page data in a separate file or files and use the XML to point to the correct file or position within a file.
A: Something I haven't seen mentioned yet is that not only is XML structured, but the way that attributes and elements interact creates a somewhat unusual structure that is still easily understandable by humans.
If you compare an XML tree with its nearest structural neighbor, the directed acyclic graph, you might note that the typical DAG carries only an ID and a value at each node. XML carries this as well (gi/tag corresponding with ID, and the text of the node corresponding with the value), but each node then can also carry an arbitrary amount of additional metadata: the attributes. This is very much like having an extra dimension — if you consider the DAG as spreading out flat in two dimensions with each branch, the XML document spreads out in three dimensions, flat, and then downwards to a subtree containing just the attributes.
This is an optional bend to the structure. Walk a list of attributes like any list of child elements, and you're back to a two-dimensional tree. Ignore them completely, and you have a simplified node/value tree which may more purely represent the overall "shape" of contained data. But the extra dimension is there if you need the metadata.
With decent indentation, this is something that a human being can pick up just by eyeballing the raw data, making XML a miniature visualization tool for a potentially complex structure — and having a visualization tool built into the data exchange of your application means that the programmers involved are more likely to build a structure that represents the way the data is likely to be used.
A: One of the major advantages it has over things like CSV files is that it can represent hierarchical data easily. To do this you either need a self-describing tree structure like XML, or a pre-defined format such as SWIFT or EDI (and if you've ever dealt with either of those, then you'll realise that XML is trivial to parse in comparison).
One of the reasons it's actually quite easy to parse is because it's 'bloated'. Those end tags mean that you can accurately match the end of elements to the start and work out when the tree has become unbalanced. You can't do that in the 'lightweight' alternatives such as JSON.
Another reason it's easy to parse is because it has had full support for Unicode encodings from the start, so you don't have to worry about what the default code page is on the target system, or how to encode multi-byte characters, because that information is all contained within the document.
And let's not forget about the other artefacts that came with it like the defined description and validation mechanism (XSD) and the powerful and declarative transformation mechanism (XSLT).
A: It was the late 90s and the internet was hot hot hot, but companies had systems that couldn't get anywhere near the internet. They had spent countless hours dealing with CORBA and were plotting using Enterprise JavaBeans to get these older systems communicating with their newer systems.
Along comes SGML, which is the precursor to almost all markup languages (I'm skipping GML). SGML was already used to define HTML, but HTML had particular tags that HAD to be used in order for Netscape to properly display a given webpage.
But what if we had other data that needed to be explained? Ah ha!
So given that XML is structured, and you are free to define that structure, it naturally allows you to build interfaces (from a non-OO point of view). It doesn't really do anything that other interface languages didn't already do, but it gave people the ability to design their own definitions.
Interface languages like X12 and HL7 existed for sure, but with XML people could tailor it to their individual AIX or AS/400 systems.
And with the predominance of tag languages because of HTML, it was only natural that XML would get pushed to the forefront because of its ease of use.
A: *
*Schema definition languages - you can describe the expected format of the XML
*It's a standard:) - it's definitely better than everybody using their own custom formats
CSV is human readable but that's really the only good thing about it - it's so inflexible, and there are no meanings assigned to the values. If I started designing a system now I would definitely use YAML instead - it's less bloated and it's definitely gaining momentum.
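As a rough, made-up illustration of the difference in verbosity, the same record in XML and in YAML:
<customer id="14">
  <name>Bob</name>
  <plan>gold</plan>
</customer>
customer:
  id: 14
  name: Bob
  plan: gold
Both say the same thing; YAML drops the closing tags and angle brackets.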
A: Another benefit of XML vs. binary data is error resiliency.
With binary data, if a single bit goes wrong, the data is most likely unusable;
with xml, as a last resort, you can still open it up and make corrections...
A: Straight from the horse's mouth, the design goals of XML were:
*
*XML shall be straightforwardly usable over the Internet.
*XML shall support a wide variety of applications.
*XML shall be compatible with SGML.
*It shall be easy to write programs which process XML documents.
*The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
*XML documents should be human-legible and reasonably clear.
*The XML design should be prepared quickly.
*The design of XML shall be formal and concise.
*XML documents shall be easy to create.
*Terseness in XML markup is of minimal importance.
The reason why it became popular was because people needed a standard for a cross-platform data exchange format. XML may be a bit bloated, but it is a very simple way to delimit text data and it was backwards compatible with the large body of existing SGML systems.
You really can't compare XML to CSV because CSV is an extremely limited way of representing data. CSV cannot handle anything outside of a basic row-column table and has no notion of hierarchy.
XML is not that hard to parse and once you write or find a decent XML utility it's not difficult to deal with in code either.
A: XML is not hard to parse, in fact it's quite simple, given the volume of excellent APIs available for every language under the sun.
XML itself is not bloated, it can be as concise as necessary, but it's up to your schema to keep it that way.
XML handles hierarchical datasets in a way that comma-delimited text never could or should.
XML is self-documenting/describing, and human readable. Why is it a standard? Well, first and foremost, because it can be standardized. CSV isn't (and can't be) a standard because there's an infinite amount of variation.
A: It's structured.
A: XML's popularity derives from other markup languages. HTML is the one people are most familiar with, but increasingly now we see "markdown" languages like that used by wikis and even the stackoverflow post form.
HTML did an interesting job of formatting text, but it was insufficient. It grew. Folks wanted to add tags for everything. <BLINK> anyone? Layouts, styles, and even data.
XML is the extensible markup language (duh, right?), designed so that anyone could create their own tags, and so that your RECORD tag doesn't interfere with my RECORD tag, in case they have different meanings, and with sensitivity to the issues of encoding and tag-matching and escaping that HTML has.
At the start, it was popular with people who already knew HTML, and liked the familiar concept of using markup to organize their data.
A: It's cross platform. We use it to encode robot control programs and data running in C under VxWorks for execution, but our offline programming is done under dot net. XML is easily parsed by both.
A: It's compatible with many languages.
A: The primary advantage it bestows is a system independent representation of hierarchical data. Comma delimited text and properties files are more appropriate in many places where XML was used, but the ability to represent complex data structures and data types, character set awareness, and standards document allowed it to be used as a good inter application exchange format.
My minor improvement suggestion for the language is to change the way end tags work. Just imagine how much bandwidth and disk space would be saved if you could end a tag with </>, like <my_tag>blah</> instead of <my_tag>blah</my_tag>. You aren't allowed to have overlapping tags, so I don't know why the standard insists on even more text than is needed. In fact, why use angle brackets at all?
The ugliness of the angle brackets is a good illustration of what it could have been: JSON. JavaScript Object Notation achieves most of the goals of XML with a lot less typing. Another alternate syntax that makes XML bearable is the Builder syntax, as used by Groovy and Ruby. It's much more natural and readable.
A: I'd guess that its popularity originally stemmed from the fact that it solved the right problems in a way that wasn't exceedingly bad for enough big players to gain their support, and thus gain widespread industry adoption. At this point, it's rather strongly embedded into the landscape since there's so much component development invested around XML. The HIPAA and other EDI XML schemas and adapters that ship with MS BizTalk Server (and BizTalk itself) are a great example of the mountain that's been gradually built on top of XML.
A: Compared to some of the previous standards it's a dream.
Try writing HDF (Hierarchical Data Format) files or FITS. FITS was standardised before the invention of the disc drive - you have to worry about padding the file into block sizes!
Even CSV isn't as simple. Quick question: what's the separator in a German CSV file? (Hint: often a semicolon, since the comma is the decimal separator.)
A lot of the complaints about XML are from people who use it to transfer data directly between machines where the data only exists for milliseconds.
In a lot of areas the data will have to last for 50-100 years and be far more valuable than the machine it ran on. It's worth paying a closing tag tax sometimes.
A: The two main things that made XML widely adopted are "human readability" and "Sun Microsystems". There were (and still are) other cross-language, cross-platform data exchange formats that are more flexible, easier to parse, and less verbose than XML, such as ASN.1.
A: It is a text format, and that is one of its major advantages. Binary formats are usually much smaller, but you always need tools to "read" them. You can simply open an editor and modify XML files to your liking. However, I'd argue it's still a bloated format, though you can compress it quite well... if one looks at the specs for the Windows Office XML formats, one can just imagine how wonderful it is to be seemingly open...
Regards
Friedrich
A: It's easier to write a parser for an XML dialect than for an arbitrary format because of the tools that are available.
Using a DOM parser, for example, is much simpler than lex and yacc, especially in Java, where it was popularized.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do you handle browser specific .js and .css This is not a new topic, but I am curious how everyone is handling either .js or .css that is browser specific.
Do you have .js functions that have if/else conditions in them or do you have separate files for each browser?
Is this really an issue these days with the current versions of each of the popular browsers?
A: Don't write them?
Honestly, browser specific CSS is not really necessary for most layouts - if it is, consider changing the layout. What happens when the next browser comes out that needs another variation? Yuck. If you have something that you really need to include, and it doesn't seem to be working in one browser, ask a question here! There are lots of great minds.
For JS, there are several frameworks that take care of implementing cross-browser behaviour, such as jQuery (used on this site).
A: The IE conditional comments have the downside of an extra file download. I prefer to use a couple of well-known CSS filters:
.myClass {
color: red; /* Non-IE browsers will use this one */
*color: blue; /* IE7 will see this one */
_color: green; /* IE6 and below will see this one */
}
(Yeah, it won't validate, but last I checked, our money comes from users and advertisers, not from the W3C.)
A: CSS not rendering consistently across browsers is still an issue these days (mostly with IE6/7).
I've never needed a separate JS file for anything I've worked on. If you are using a JS library (jQuery, YUI, Prototype, etc), 99% of your browser incompatibilities will be taken care of.
As for CSS, I prefer to stick my browser-specific fixes in the same CSS file. It makes it a lot easier to debug when you only have to look in 1 place for your styling. You could spend hours debugging something only to find out the bug is caused by your 10 line browser-specific stylesheet.
It's also better from a performance perspective to only have 1 CSS and 1 JS file.
A: Use what is known as "feature detection".
For example, if you want to use document.getElementsByClassName, do the following:
if(document.getElementsByClassName) {
// do something with document.getElementsByClassName
} else {
// find an alternative
}
As for CSS, you should largely be good if you use standards, except in IE. In IE, use conditional comments - it's the method recommended by the guys at Microsoft.
A: Personally I've mostly used conditional comments as noted above.
In the Stackoverflow podcast, though, Jeff indicated that Stackoverflow is using Yahoo's CSS Reset, which I'd never heard of. If it does what it's advertised to do it seems that would resolve many (most? all?) browser-based CSS differences; I don't see anything indicating conditional commenting in the Stackoverflow html source, at least. I'm definitely going to play with it on my next web project and would love to hear anyone's experiences with it.
As far as JavaScript goes: as has already been said, use your favorite JS framework...
A: It's a very real issue. Mostly just because of IE6. You can handle IE6-specific CSS by using conditional comments.
For JavaScript, I'd recommend using a library that has already done most of the work of abstracting away browser differences. jQuery is really good in this regard.
A: I use a framework to handle 99% of the xbrowser stuff.
For everything not covered in the framework, I'd use a if/else or a try/catch.
A: Both if/else and separate files, it depends on the complexity of the site.
There are definitely still incompatibilities between browsers, so consider letting a JavaScript Framework do the dirty work for you...
jQuery
http://jquery.com/
Dojo
http://www.dojotoolkit.org/
Script.aculo.us
http://script.aculo.us/
Prototype
http://prototypejs.org/
A: I use the built in functions of jQuery for the actual detection:
jQuery.each(jQuery.browser, function(i, val) {
// i is the property name ("msie", "version", ...) and val is its value
});
As for organization, that would depend on your application. I think putting this in an initialization code and then using the detection where you need it would be best. I still have issues where I have to detect Explorer on occasion. For example, when using jQuery, I have found that the parent() and next() functions will sometimes have different meanings in Explorer compared to Firefox.
A: Internet Explorer has conditional constructs like
<!--[if IE]>
<link rel="stylesheet" type="text/css" href="ie.css" />
<![endif]-->
that will let you bring in specific style sheets and JavaScript just for IE. I consider this a solution of last resort if there are no standards-compliant ways to deal with a problem.
A: If you are doing ASP.Net development, you can also use Device Filtering (which I just learned about here on Stack Overflow today).
You can use it in your Page directives to link in different skins, master pages, or CSS files, but you can also use it on ASP.Net server control attributes, like this:
<asp:Label runat="server" ID="labelText"
ie:CssClass="IeLabelClass"
mozilla:CssClass="FirefoxLabelClass"
CssClass="GenericLabelClass" />
That example is, of course, a bit extreme, but it does let you work around some nasty browser inconsistencies while using only a single CSS file.
I also second the idea of using a CSS reset file and definitely use a library like jQuery instead of reinventing the wheel on JavaScript event and DOM differences.
A: I think conditional comments are great for style sheets but they can also be used for javascript/jquery.
Since $.browser has been deprecated and removed as of jQuery 1.9, using conditional comments is a pretty good way to do browser-specific JavaScript.
here is my quick example:
1 - Place a conditional comment somewhere on the page. I personally put it right after the body tag. I've seen people use it on the actual body or html but that brings back the IE8/9/10 compatibility view button, and you want to avoid this if possible.
<!--[if lt IE 9]><div class="ie8-and-below"></div><![endif]-->
2 - then use jquery to check if our IE specific container is there.
if ($("div").hasClass("ie8-and-below")) {
//do your JS for IE 8 and below only
}
3 - (optional) set your target compatibility and put something like:
<meta http-equiv="X-UA-Compatible" content="IE=10" />
right after the <head> tag. (it has to be the very 1st thing after the opening <head>).
This will turn off the compatibility button in IE 10/9/8 if the rest of your page is properly coded. It's a good fail-safe if you have sections that require compatibility mode; otherwise you may end up triggering your IE-specific JS when a newer browser runs in compatibility view.
Note: As of the date of this post the http-equiv does not validate against W3C standards, but it's a very useful tag which has been adopted by the home pages of Google and Microsoft among others. I think it's only because W3C is a bit behind on adopting it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Replication with lots of temporary table writes I've got a database which I intend to replicate for backup reasons (performance is not a problem at the moment).
We've set up the replication correctly and tested it and all was fine.
Then we realized that it replicates all the writes to the temporary tables, which in effect meant that replication of one day's worth of data took almost two hours for the idle slave.
The reason for that is that we recompute some of the data in our db via cronjob every 15 mins to ensure it's in sync (it takes ~3 minutes in total, so it is unacceptable to do those operations during a web request; instead we just store the modifications without attempting to recompute anything while in the web request, and then do all of the work in bulk). In order to process that data efficiently, we use temporary tables (as there's lots of interdependencies).
Now, the first problem is that temporary tables do not persist if we restart the slave while it's in the middle of processing transactions that use that temp table. That can be avoided by not using temporary tables, although this has its own issues.
The more serious problem is that the slave could easily catch up in less than half an hour if it wasn't for all that recomputation (which it does one batch after the other, so there's no benefit to rebuilding the data every 15 mins... you can literally see it stuck at, say, 11:15, only to quickly catch up and get stuck again at 11:30, etc.).
One solution we came up with is to move all that recomputation out of the replicated db, so that the slave doesn't replicate it. But it has disadvantages in that we'd have to prune the tables it eventually updates, making our slave in effect "castrated", ie. we'd have to recompute everything on it before we could actually use it.
Did anyone have a similar problem and/or how would you solve it? Am I missing something obvious?
A: I've come up with the solution. It makes use of replicate-do-db mentioned by Nick. Writing it down here in case somebody had a similar problem.
The problem with just using replicate-(wild-)do* options in this case (like I said, we use temp tables to repopulate a central table) is that either you ignore temp tables and repopulate the central one with no data (which causes further problems as all the queries relying on the central table being up-to-date will produce different results) or you ignore the central table, which has a similar problem. Not to mention, you have to restart mysql after adding any of those options to my.cnf. We wanted something that would cover all those cases (and future ones) without the need for any further restart.
So, what we decided to do is to split the database into the "real" and a "workarea" databases. Only the "real" database is replicated (I guess you could decide on a convention of table names to be used for replicate-wild-do-table syntax).
All the temporary table work is happening in "workarea" db, and to avoid the dependency problem mentioned above, we won't populate the central table (which sits in "real" db) by INSERT ... SELECT or RENAME TABLE, but rather query the tmp tables to generate a sort of a diff on the live table (ie. generate INSERT statements for new rows, DELETE for the old ones and update where necessary).
This way the only queries that are replicated are exactly the updates that are required, nothing else; ie. some (most?) of the recomputation queries happening every fifteen minutes might not even make their way to the slave, and the ones that do will be minimal and not computationally expensive at all, just simple INSERTs and DELETEs.
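To make that concrete, the cron job ends up emitting plain statements like these against the replicated "real" db (table and column names are made up for illustration):
INSERT INTO real.central (id, value) VALUES (42, 'new row');
UPDATE real.central SET value = 'changed' WHERE id = 17;
DELETE FROM real.central WHERE id = 99;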
A: In MySQL, as of 5.0 I believe, you can do table wildcards to replicate specific tables. There are a number of command-line options that can be set but you can also do this via your MySQL config file.
[mysqld]
replicate-do-db = db1
replicate-do-table = db2.mytbl2
replicate-wild-do-table= database_name.%
replicate-wild-do-table= another_db.%
The idea being that you tell it to not replicate any tables other than the ones you specify.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: PowerBuilder database connection pool? How-to How do you make a database connection pool in PowerBuilder (v9+) with...
*
*...ODBC?
*...SQL Server?
*...Oracle?
A: At the risk of self-promotion, these may get you started for Oracle:
*
*PB9/Oracle 9i
*PB11.5/Oracle 11g
If you go to the Sybase manuals (intuitive, eh?) and open the Connecting to Your Database manual for the version you're looking at, a search for "pool" may be productive. Looking at my local copy for 11.5, I can see references to SNC (MS) and ODBC.
As far as "non-native" approaches, I'm guessing Jason might have been referring to connection pooling with an application server, then getting your data through that.
Good luck.
A: Unfortunately, at least with PB 9, you can't natively. PB has always been a two-tier dev tool. However, if you are using the WebServices support that started in PB 9 you can get around this limitation by invoking WebServices on a connection pooled appServer. I haven't played with PB 11.5 yet BTW. Could be different there.
Jason
A: With PowerBuilder version 9 and up, using the Oracle native driver and connecting to Oracle 9i and above databases, you can tell Oracle to maintain connections in a pool using the CnnPool='Yes' database parameter.
Additional info from the PB 11.1 docs:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc33820_1110/html/dbparm/BJEBJADI.htm
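For illustration, a minimal PowerScript sketch of connecting with pooling enabled (the driver name and credentials are illustrative; check the Connecting to Your Database manual for your PB/Oracle versions):
// enable Oracle connection pooling via the CnnPool DBParm
SQLCA.DBMS = "O90" // native Oracle9i interface in PB 9
SQLCA.ServerName = "ORCL"
SQLCA.LogId = "scott"
SQLCA.LogPass = "tiger"
SQLCA.DBParm = "CnnPool='Yes'"
CONNECT USING SQLCA;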
A: I don't believe that
CnnPool='Yes'
was supported officially in PB 9.
I'm not sure that most PB developers are all that familiar with how to deal with pools.
ASP.Net's approach is simple and straight forward at least compared to my experience with some Java app servers. (Please don't start a flame war on that last sentence, I said my experience).
I have written a "server" application that received PB datastores that were executed for ds.retrieve() and ds.update() and passed the data back to the client PB app. It was a way to pool. The server application would open multiple connections... I did this in PB 8 (there's a book out there somewhere). I wouldn't recomment this approach... lot's of code.
In PB 11.x, there are some cool new approaches that you should consider.
A: @Jason Vogel...
You said I can't do it natively... so is there an alternative way to do it?
A: /* Declare as an instance variable*/
n_to_server i_to_server //Transaction Object alternative to SQLCA, i_to_server is a custom name as is n_to_server
/* Instatiate connection object*/
i_to_server = CREATE transaction
//Was declared in the instance variables from n_to_server
i_to_server.DBMS = "ODBC"
i_to_server.AutoCommit = TRUE
i_to_server.DBParm = "ConnectString='DSN=SourceServer;UID=username;PWD=password'"
CONNECT USING i_to_server ;
SELECT @@trancount INTO :li_TranCount
FROM sysobjects
WHERE name = 'sysobjects'
USING i_to_server ; //Must have USING in transactions that are not using SQLCA (the native transaction)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to add a Numeric Up / Down control to a .Net StatusStrip container I'm looking to add a numeric up / down control to a .Net StatusStrip. Unfortunately, it isn't listed as an option in the UI designer. Is there a way to do this?
A: You can use the ToolStripControlHost class to host a custom control (a NumericUpDown in this case).
You can also derive a "NumericUpDownToolStripItem" class from it that initializes the "Control" property with the hosted control and re-exposes the properties you need from it (Minimum, Maximum, Value, for example), as sketched below.
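A minimal sketch of that approach (the class and member names are illustrative, not a library API):
using System;
using System.Windows.Forms;

// Hosts a NumericUpDown so it can sit inside a StatusStrip/ToolStrip.
public class NumericUpDownToolStripItem : ToolStripControlHost
{
    public NumericUpDownToolStripItem() : base(new NumericUpDown()) { }

    private NumericUpDown UpDown
    {
        get { return (NumericUpDown)Control; }
    }

    // Re-expose the hosted control's members that callers need.
    public decimal Value
    {
        get { return UpDown.Value; }
        set { UpDown.Value = value; }
    }

    public decimal Minimum
    {
        get { return UpDown.Minimum; }
        set { UpDown.Minimum = value; }
    }

    public decimal Maximum
    {
        get { return UpDown.Maximum; }
        set { UpDown.Maximum = value; }
    }
}
Since the designer won't list it, add it in code: statusStrip1.Items.Add(new NumericUpDownToolStripItem());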
A: You could try adding 3 ToolStripStatusLabels, set the Image of the first to an "Up Arrow" and the Image of the third to a "Down Arrow". Blank their Text properties.
The Middle one can hold the Number, which you increase/decrease in the Click Events.
Edit: Left/Right Arrows would probably look better. Left for Down, Right for Up.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Software for managing medium sized projects So, at my current job we're usually 1-3 developers, 1-2 art directors and 1 project manager on each project, with the smallest ones just being one of each and the larger ones being three developers and two art directors.
I'm looking for software, a combination of tools, or some type of service that will allow us to manage our projects individually. It's important that we're able to manage several projects at once within one system/piece of software (without going through too complicated a setup process for each project), since we usually have 2-3 ongoing projects in parallel.
We need to be able to integrate with SVN, track bugs/features/requests, and put up milestones; some type of agile management à la SCRUM would be nice.
Preferably it should run on Windows without too much hassle (ever tried to set up Apache+Python+SVN+Trac on the same Windows 2003 server and get them all to run together? Not fun), since we mostly do .NET development and most of our servers run Windows 2003.
A: Since you seem to have a maximum of six people working in a single room - I'd give serious consideration to not using software at all.
A whiteboard & cork board for each project, plus a whole lot of index cards / stickies can go a long, long way towards meeting the project management needs of one or two small projects.
(Failing that - I've found Basecamp a fairly lightweight tool for small projects, although it doesn't do any sort of source control integration. I've also heard good things about the latest FogBugz - but I've had such bad personal experiences with earlier versions that I've not tried it yet myself.)
A: http://www.project-open.org/ covers your requirements and is available for Windows. However, it is targeted at larger organizations (>20 employees), so you might find it overkill for a group of 6.
A: I personally use BaseCamp for my company and have had great luck with it!
Edit: oops, I didn't notice the SVN requirement; BaseCamp can help with the other stuff.
A: You might want to try out Mantis (www.mantisbt.org). It is a little cumbersome at first, but with a little bit of customization, it will work for you. It has SVN integration, and a bunch of other stuff which I haven't used yet... :|... such as Mobile support, Wiki support, etc.
And it's OSS (Open Source Software). Written in PHP, works with MySQL, or PostgreSQL. Just check it out, it's good.
http://www.mantisbt.org/
A: Atlassian's Jira Studio sounds like exactly what you need. It's hosted, so there's nothing to install.
A: If you want something that is quick and easy to work with that integrates well with Windows I would suggest Microsoft Office Groove. I have been using it on my current project and it also easily allows you to start new projects and add members.
It is not the best solution in the world, but it is included with Office '07 and it has tools to help with project management, bug reporting, calendar, meeting summaries, etc.
The one major problem I have found with it is that version control is not included by default. From what I understand you have to set up a SharePoint server to have version control in Groove, but I have not done this yet and have been hoping that my backups will work fine.
A: +1 for starting out with a whiteboard, stickies and whatever other office supplies you can think of. Being able to visualize the state of your project on a big visible wall can be really useful, more so than software-based tracking, IMHO.
You need to make sure the team is committed to keeping it up-to-date, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: ArrayList in Java and inputting I'm used to Python, so this is a bit confusing to me. I'm trying to take in input, line by line, until a user inputs a certain number. The numbers will be stored in an array to apply some statistical maths to them. Currently, I have a main class, the stats classes, and a "reading" class.
Two Questions:
*
*I can't seem to get the input loop to work out, what's the best practice for doing so.
*What is the object-type going to be for the reading method? A double[], or an ArrayList?
*
*How do I declare method-type to be an arraylist?
*How do I prevent the array from having more than 1000 values stored within it?
Let me show what I have so far:
public static java.util.ArrayList readRange(double end_signal){
//read in the range and stop at end_signal
ArrayList input = new ArrayList();
Scanner kbd = new Scanner( System.in );
int count = 0;
do{
input.add(kbd.nextDouble());
System.out.println(input); //debugging
++count;
} while(input(--count) != end_signal);
return input;
}
Any help would be appreciated, pardon my newbieness...
A: What you need in your loop condition is:
while ( input.get( input.size()-1 ) != end_signal );
What you're doing is decrementing the counter variable.
Also you should declare the ArrayList like so:
ArrayList<Double> list = new ArrayList<Double>();
This makes the list type-specific and allows the condition as given. Otherwise there's extra casting.
A: Answers:
>1. I can't seem to get the input loop to work out, what's the best practice for doing so.
I would rather have a simple while loop instead of a do{}while, and place the condition in the while. In my example it reads:
while the read number is not end signal and count is lower than limit: do.
>2. What is the object-type going to be for the reading method? A double[], or an ArrayList?
An ArrayList; however, I would strongly recommend using the List (java.util.List) interface instead. It is good OO practice to program to the interface rather than the implementation.
>2.1How do I declare method-type to be an arraylist?
See code below.
>2.2. How do I prevent the array from having more than 1000 values stored within it?
By adding this restriction in the while condition.
import java.util.Scanner;
import java.util.List;
import java.util.ArrayList;
public class InputTest{
private int INPUT_LIMIT = 10000;
public static void main( String [] args ) {
InputTest test = new InputTest();
System.out.println("Start typing numbers...");
List<Double> list = test.readRange( 2.0 );
System.out.println("The input was " + list );
}
/**
* Read from the standard input until the endSignal number is typed.
* Also limits the amount of entered numbers to 10000;
* @return a list with the numbers.
*/
public List<Double> readRange( double endSignal ) {
List<Double> input = new ArrayList<Double>();
Scanner kdb = new Scanner( System.in );
int count = 0;
double number = 0;
while( ( number = kdb.nextDouble() ) != endSignal && count < INPUT_LIMIT ){
System.out.println( number );
input.add( number );
count++; // count the entry so INPUT_LIMIT actually takes effect
}
return input;
}
}
Final remarks:
It is preferred to have "instance methods" rather than class methods. This way, if needed, "readRange" could be overridden by a subclass without having to change the signature; thus in the sample I've removed the "static" keyword and created an instance of the "InputTest" class.
In Java code style, variable names should be in camel case, like "endSignal" rather than "end_signal".
A: public static java.util.ArrayList readRange(double end_signal) {
//read in the range and stop at end_signal
ArrayList input = new ArrayList();
Scanner kbd = new Scanner(System.in);
do {
input.add(Double.valueOf(kbd.next()));
System.out.println(input); //debugging
} while ((Double) input.get(input.size() - 1) != end_signal);
return input;
}
A: I think you started out not bad, but here is my suggestion. I'll highlight the important differences and points below the code:
package console;
import java.util.*;
import java.util.regex.*;
public class ArrayListInput {
public ArrayListInput() {
// as list
List<Double> readRange = readRange(1.5);
System.out.println(readRange);
// converted to an array
Double[] asArray = readRange.toArray(new Double[] {});
System.out.println(Arrays.toString(asArray));
}
public static List<Double> readRange(double endWith) {
String endSignal = String.valueOf(endWith);
List<Double> result = new ArrayList<Double>();
Scanner input = new Scanner(System.in);
String next;
while (!(next = input.next().trim()).equals(endSignal)) {
if (isDouble(next)) {
Double doubleValue = Double.valueOf(next);
result.add(doubleValue);
System.out.println("> Input valid: " + doubleValue);
} else {
System.err.println("> Input invalid! Try again");
}
}
// result.add(endWith); // uncomment, if last input should be in the result
return result;
}
public static boolean isDouble(String in) {
return Pattern.matches(fpRegex, in);
}
public static void main(String[] args) {
new ArrayListInput();
}
private static final String Digits = "(\\p{Digit}+)";
private static final String HexDigits = "(\\p{XDigit}+)";
// an exponent is 'e' or 'E' followed by an optionally
// signed decimal integer.
private static final String Exp = "[eE][+-]?" + Digits;
private static final String fpRegex = ("[\\x00-\\x20]*" + // Optional leading "whitespace"
"[+-]?(" + // Optional sign character
"NaN|" + // "NaN" string
"Infinity|" + // "Infinity" string
// A decimal floating-point string representing a finite positive
// number without a leading sign has at most five basic pieces:
// Digits . Digits ExponentPart FloatTypeSuffix
//
// Since this method allows integer-only strings as input
// in addition to strings of floating-point literals, the
// two sub-patterns below are simplifications of the grammar
// productions from the Java Language Specification, 2nd
// edition, section 3.10.2.
// Digits ._opt Digits_opt ExponentPart_opt FloatTypeSuffix_opt
"(((" + Digits + "(\\.)?(" + Digits + "?)(" + Exp + ")?)|" +
// . Digits ExponentPart_opt FloatTypeSuffix_opt
"(\\.(" + Digits + ")(" + Exp + ")?)|" +
// Hexadecimal strings
"((" +
// 0[xX] HexDigits ._opt BinaryExponent FloatTypeSuffix_opt
"(0[xX]" + HexDigits + "(\\.)?)|" +
// 0[xX] HexDigits_opt . HexDigits BinaryExponent
// FloatTypeSuffix_opt
"(0[xX]" + HexDigits + "?(\\.)" + HexDigits + ")" +
")[pP][+-]?" + Digits + "))" + "[fFdD]?))" + "[\\x00-\\x20]*");// Optional
// trailing
// "whitespace"
}
*
*In Java it's a good thing to use generics. This way you give the compiler and virtual machine a hint about the types you are about to use. In this case it's Double, and by declaring the resulting List to contain Double values,
you are able to use the values without casting/type conversion:
if (!readRange.isEmpty()) {
double last = readRange.get(readRange.size() - 1);
}
*It's better to return interfaces when working with Java collections, as there are many implementations of specific lists (LinkedList, synchronized lists, ...). So if you need another type of List later on, you can easily change the concrete implementation inside the method and you don't need to change any further code.
*You may wonder why the while control statement works; as you see, there are parentheses around next = input.next().trim(). This way the variable assignment takes place right before the conditional test. Also, a trim takes place to avoid whitespace issues.
*I'm not using nextDouble() here because whenever a user would input something that's not a double, well, you will get an exception. By using String I'm able to parse whatever input a user gives but also to test against the end signal.
*To be sure a user really entered a double, I used a regular expression from the JavaDoc of the Double.valueOf() method. If this expression matches, the value is converted; if not, an error message is printed.
*You used a counter for reasons I don't see in your code. If you want to know how many values have been entered successfully, just call readRange.size().
*If you want to work with an array, the second part of the constructor shows how to convert it.
*I hope you're not confused by me mixing up double and Double, but thanks to the Java 1.5 auto-boxing feature this is no problem. And as Scanner.next() will never return null (afaik), this shouldn't be a problem at all.
*If you want to limit the size of the list, use result.size() as the indicator and the keyword break to leave the while control statement.
Okay, I hope you're finding my solution and explanations useful.
Greetz, GHad
A: public static ArrayList<Double> readRange(double end_signal) {
ArrayList<Double> input = new ArrayList<Double>();
Scanner kbd = new Scanner( System.in );
do{
input.add(kbd.nextDouble());
} while(input.get(input.size() - 1) != end_signal);
return input;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: what is the better way to handle errors in VB6 I have a VB6 application and I want to put some good error handling functions in it which can tell me what the error was and the exact place where it happened. Can anyone suggest a good way to do this?
A: I use a home-grown Error.bas module to make reporting and re-raising less cumbersome.
Here's its contents (edited for length):
Option Explicit
Public Sub ReportFrom(Source As Variant, Optional Procedure As String)
If Err.Number Then
'Backup Error Contents'
Dim ErrNumber As Long: ErrNumber = Err.Number
Dim ErrSource As String: ErrSource = Err.Source
Dim ErrDescription As String: ErrDescription = Err.Description
Dim ErrHelpFile As String: ErrHelpFile = Err.HelpFile
Dim ErrHelpContext As Long: ErrHelpContext = Err.HelpContext
Dim ErrLastDllError As Long: ErrLastDllError = Err.LastDllError
On Error Resume Next
'Retrieve Source Name'
Dim SourceName As String
If VarType(Source) = vbObject Then
SourceName = TypeName(Source)
Else
SourceName = CStr(Source)
End If
If LenB(Procedure) Then
SourceName = SourceName & "." & Procedure
End If
Err.Clear
'Do your normal error reporting including logging, etc'
MsgBox "Error " & CStr(ErrNumber) & vbLf & "Source: " & ErrSource & vbCrLf & "Procedure: " & SourceName & vbLf & "Description: " & ErrDescription & vbLf & "Last DLL Error: " & Hex$(ErrLastDllError)
'Report failure in logging'
If Err.Number Then
MsgBox "Additionally, the error failed to be logged properly"
Err.Clear
End If
End If
End Sub
Public Sub Reraise(Optional ByVal NewSource As String)
If LenB(NewSource) Then
NewSource = NewSource & " -> " & Err.Source
Else
NewSource = Err.Source
End If
Err.Raise Err.Number, NewSource, Err.Description, Err.HelpFile, Err.HelpContext
End Sub
Reporting an error is as simple as:
Public Sub Form_Load()
On Error Goto HError
MsgBox 1/0
Exit Sub
HError:
Error.ReportFrom Me, "Form_Load"
End Sub
Reraising an error is as simple as calling Error.Reraise with the new source.
Although it is possible to retrieve the Source and Procedure parameters from the call stack if you compile with symbolic debug info, it's not reliable enough to use in production applications
A: First of all, go get MZTools for Visual Basic 6; it's free and invaluable. Second, add a custom error handler to every function (yes, every function). The error handler we use looks something like this:
On Error GoTo {PROCEDURE_NAME}_Error
{PROCEDURE_BODY}
On Error GoTo 0
Exit {PROCEDURE_TYPE}
{PROCEDURE_NAME}_Error:
LogError "Error " & Err.Number & " (" & Err.Description & ") in line " & Erl & _
", in procedure {PROCEDURE_NAME} of {MODULE_TYPE} {MODULE_NAME}"
Then create a LogError function that logs the error to disk. Next, before you release code, add line numbers to every function (this is also built into MZTools). From then on you will know from the error logs everything that happens. If possible, also upload the error logs and actually examine them live from the field.
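A minimal sketch of such a LogError routine (the log file path and format are illustrative):
Public Sub LogError(ByVal Message As String)
    Dim FileNum As Integer
    FileNum = FreeFile
    Open App.Path & "\error.log" For Append As #FileNum
    Print #FileNum, Format$(Now, "yyyy-mm-dd hh:nn:ss") & " - " & Message
    Close #FileNum
End Sub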
This is about the best you can do for unexpected global error handling in VB6 (one of its many defects), and really it should only be used to find unexpected errors. If you know there is the possibility of an error occurring in a certain situation, you should catch that particular error and handle it. If you know that an error occurring in a certain section is going to cause instability (file IO, memory issues, etc.), warn the user and know that you are in an "unknown state" and that "bad things" are probably going to happen. Obviously use friendly terms to keep the user informed, but not frightened.
A: ON ERROR GOTO and the Err object. There is a tutorial here.
A: a simple way without additional modules, useful for class modules:
pre-empt each function/subs:
On Error Goto Handler
handler/bubbleup:
Handler:
Err.Raise Err.Number, "(function_name)->" & Err.source, Err.Description
voila, ghetto stack trace.
A: Yes, take Kris's advice and get MZTools.
You can add line numbers to section off areas of complex procedures, which ERL will report in the error handler, to track down which area is causing the error.
10
...group of statements
20
...group of statements
30
...and so on
A: Use On Error with a label:
On Error GoTo ErrHndl
'...your code here...
Exit Sub
ErrHndl:
MsgBox "Error " & Err.Number & ": " & Err.Description
A: Use the On Error statement and the Err object.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: What is the best IDE for PHP? I'm a PHP developer and now I use Notepad++ for code editing, but lately I've been searching for an IDE to ease my work.
I've looked into Eclipse, Aptana Studio and several others, but I'm not really decided; they all look nice enough, but a bit complicated. I'm sure it'll all get easy once I get used to it, but I don't want to waste my time.
This is what I'm looking for:
*
*FTP support
*Code highlight
*SVN support would be great
*Ruby and JavaScript would be great
A: For PHP I would recommend PhpStorm.
It supports FTP/SFTP synchronization, integrates well with Subversion, CVS, Mercurial and even with Git. Also, it supports HTML, CSS, JavaScript and handles language-mixing well like SQL or HTML blocks inside PHP code, JSON, etc.
But if you need Ruby you can try another IDE - RubyMine with same capabilities but for Ruby.
A: To get you started, here is a list of PHP Editors (Wikipedia).
A: There's no "best" IDE, only better and worse ones.
Right now I'm trying to settle in with Aptana. It has a lot of cruft that I don't want, like "Jaxer" doodads all over the place. It's reasonably fast, but chokes on large files when syntax highlighting is on. I have not been able to figure out how to set up PHP debugging. Three good things about Aptana: easy plugin installation, very fast and intuitive Subversion plugins, and lightning fast file search.
I tried Eclipse PDT and Zend for Eclipse, but they have nightmare levels of interface cruft. Installing plugins is a living horror of version mismatches and cryptic error messages.
I also use Komodo (they bought us licenses at work). Komodo has a very intuitive interface, but is ridiculously slow and chokes on medium-sized files with syntax highlighting. File search is intuitive, but rather slow. Subversion integration is not that great - slow and buggy. If not for the slowness, I would probably have stuck with Komodo, especially for the debugger.
A: NetBeans. Check out 7.0.1.
It supports FTP/SFTP synchronization, integrates well with Subversion, CVS, Mercurial and even with Git (with plugin). Also, it supports HTML, CSS, JavaScript, popular frameworks and more.
And it's free.
A: For PHP in particular, PHPEdit is the best. I have tried and worked in quite a few of them, including Dreamweaver, Eclipse, Emacs, Notepad++, NetBeans, UltraEdit...
A: Geany is a great lightweight editor -- like Notepad++ for Linux, only better. I find that this, combined with a few shell scripts and symlinks for linking modules into a web source tree, makes developing on Linux easy and fun.
A: I love JetBrains IDEs. For PHP it is JetBrains PHPStorm.
A: Too bad no one mentioned phpDesigner. It's really the best IDE I've come across (and I believe I've tried them all).
The main pro of this one is that it's NOT Java based. This keeps the whole thing quick.
Features:
*
*Intelligent Syntax Highlighter - automatic switch between PHP, HTML, CSS, and JavaScript depending on your position!
*PHP (both version 4 and 5 are supported)
*SQL (MySQL, MSSQL 2000, MSSQL 7, Ingres, Interbase 6, Oracle, Sybase)
*HTML/XHTML
*CSS (both version 1 and 2.1 are supported)
*JavaScript
*VBScript
*Java
*C#
*Perl
*Python
*Ruby
*Smarty
PHP:
*
*Support for both PHP 4 and PHP 5
*Code Explorer for PHP (includes, classes, extended classes, interfaces, properties, functions, constants and variables)
*Code Completion (IntelliSense) for PHP - code assist as you type
*Code Tip (code hint) for PHP - code assist as you type
*Work with any PHP frameworks (access classes, functions, variables, etc. on the fly)
*PHP object oriented programming (OOP) including nested objects
*Support for PHP heredoc
*Enclose strings with single- or double quotes, linefeed, carriage return or tabs
*PHP server variables
*PHP statement templates (if, else, then, while…)
*Powerful PHP Code Beautifier with many configurations and profile support
*phpDocumentor wizard
*Add phpDocumentor documentation to functions and classes with one click!
*phpDocumentor tags
*Comment or uncomment with one click!
*Jump to any declaration with filtering by classes, interfaces, functions, variables or constants
Debug (PHP):
*
*Debug with Xdebug
*Breakpoints
*Step by step debugging
*Step into
*Step over
*Run to cursor
*Run until return
*Call stack
*Watches
*Context variables
*Evaluate
*Profiling
*Multiple sessions
*Evaluation tip
*Catch errors
A: http://www.ibm.com/developerworks/opensource/library/os-php-ide/index.html
Personally, I love Notepad++... :D . The above link compares some of the better IDEs and the best ones aren't free.
I'd recommend Komodo 4.4 though (I used the trial version) since it was awesome. Better than Notepad++, but not free... :(
A: I would recommend Zend IDE for the integrated debugger.
A: I'm using Zend Studio. It has decent syntax highlighting, code completion and such. But the best part is that you can debug PHP code, either with a standalone PHP interpreter, or even on a live web server as you "browse" along your pages. You get the usual Visual Studio keys, breakpoints, watches and call stack, which is almost indispensable for bug hunting. No more "alert()"-cluttered debugged source code :)
A: Have you looked at Delphi for PHP (http://www.codegear.com/products/delphi/php)?
Joe Stagner of Microsoft really likes Delphi for PHP.
He says it here: "[Delphi for PHP] 2.0 is the REAL DEAL and I LOVE IT !"
A: Are you sure you're looking for an IDE? The features you're describing, along with the impression of being too complicated that you got from e.g. Aptana, suggest that perhaps all you really want is a good editor with syntax highlighting and integration with some common workflow tools. For this, there are tons of options.
I've used jEdit on several platforms successfully, and that alone puts it above most of the rest (many of the IDEs are cross-platform too, but Aptana and anything Eclipse-based is going to be pretty heavy-weight, if full-featured). jEdit has ready-made plugins for everything on your list, and syntax highlighting for a wide range of languages. You can also bring up a shell in the bottom of your window, invoke scripts from within the editor, and so forth. It's not perfect (the UI is better than most Java UIs, but not perfect yet I don't think), but I've had good luck with it, and it'll be a hell of a lot simpler than Aptana/Eclipse.
That said, I do like Aptana quite a bit for web development, it does a lot of the grunt work for you once you're over the learning curve.
A: What features of an IDE do you want? Integrated build engine? Debugger? Code highlighting? IntelliSense? Project management? Configuration management? Testing tools? Except for code highlighting, none of these are in your requirements.
So my suggestion is to use an editor that supports plugins, like Notepad++ (which you are already used to). If there's not already a plugin that does what you want, then write one.
I use Coda on Mac OS X.
A: Eclipse with PDT.
A: I use and like Rapid PHP.
A: There is a new guy in town, PhpStorm from JetBrains. Use it and I bet you will forget all the other editors. It's a bit pricey though, unfortunately.
A: RadPHP (previously known as Delphi for PHP) is the best.
A: All are good, but only Delphi for PHP (RadPHP 3.0) has a designer, drag-and-drop controls, a GUI editor, and a huge set of components, including Zend Framework, Facebook, and database components. It is the best in town.
RadPHP is the best of all; It has all the features the others have. Its designer is the best of all. You can design your page just like Dreamweaver (more than Dreamweaver).
If you use RadPHP you will feel like using ASP.NET with Visual Studio (but the language is PHP).
It's too bad only a few know about this.
A: Eclipse PDT is very nice.
A: I'm always amazed that more people don't use ActiveState Komodo.
It has the best debugging facilities of any PHP IDE I have tried, is a very mature product and has more useful features than you can shake a stick at. Of note, it has a fantastic HTTP inspector, Javascript debugger and Regular Expression Toolkit. You can get it so that it steps through your PHP, then you see your Javascript running, and then see your HTTP traffic going out over the wire!
It also comes in free (Komodo Edit) and open (OpenKomodo versions).
Oh, and if you don't always hack just on PHP, it's designed as a multi-language editor and rocks for Ruby and Python too.
I've been a happy customer for around 5 years.
A: Aptana supports this and I use it for all of my web development now.
A: My personal preference is Eclipse (with various plug-ins) as I am developing in several languages (PHP, Java, and Ruby) and this way I am always used to interface and keyboard shortcuts. This is not a minor thing as you become very productive this way.
I haven't used Aptana, but will (hopefully) soon - it does look interesting, though.
For others IDEs I have used: jEdit (for little Java), Notepad++ (still for some scripting and short test code runs).
And for the features You asked: Eclipse support many source code version servers (Subclipse); your project can be on a Samba share; ZendDebugger/xdebug for debugging.
A: Hands down the best IDE for PHP is NuSphere PHPEd. It's no contest. It is so good that I use WINE to run it on my Mac. PHPEd has an awesome debugger built into it that can be used with their local webserver (totally automatic) or you can just install the dbg module for XAMPP or any other Apache you want to run.
A: The best IDE for PHP in my opinion is Zend Studio (which itself is based on Eclipse PDT). Note that in this case "best" does not necessarily mean "good." It is slow and a bit buggy, but even so, it's still the best option for PHP programmers. I've tried a ton of PHP editors over the years and I haven't yet found one that works great.
Komodo IDE would be my second choice. My only problem with Komodo is that the autocomplete is not as good. With properly structured apps where you use phpDoc to document return types etc., it should be alright. But I work on a project that doesn't really do that and Komodo can't read across files to know that $user is a User object for example.
A: Personally, everything that is based upon Eclipse or NetBeans is overkill; the GUI is clunky and the performance is painfully slow compared to other alternatives.
If you're willing to pay, I would suggest Zend IDE (version 5.5, not 6, because 6 is based on Eclipse) and EditPlus for a more lightweight yet powerful code editor.
If you're looking for free alternatives, or if you code in languages other than PHP, OpenKomodo is a really nice IDE with almost all the features you require (though neither SVN nor CVS support). The only con I see about OpenKomodo is that it sometimes messes up my code indentation, but then again I don't use it on a very regular basis.
As for a free lightweight alternative: Notepad++. =)
A: Have you tried NetBeans 6? Zend Studio and NetBeans 6 are the best IDEs with PHP support you'll come across and NetBeans is free.
A: I've tried Eclipse PDT, with some success. Aptana is also pretty good, or if you are doing a lot of AJAX stuff, it's great. Your mileage may vary, however, depending on what additional plugins you want to use with them.
A: I believe that PHP, being what it is, doesn't really require an IDE. I use vi: it is fast, doesn't crash, and with grep -r and Ctags it can multiply productivity many times over.
Subversion is available right from the console, so you won't run into problems with source control.
Finally, I use springloops.com for the repositories, so I don't have to manually FTP files to any server. It has an FTP deployment option which also makes sure that only the altered files move to the staging server.
The best part is that you can go to a friend's house, find a Linux machine, and just start developing, because almost everything you need is available on most machines.
A: There are a few IDEs out there you can use. I personally like UltraEdit. It does syntax highlighting, FTP/SFTP support, macros, etc., is super fast - and only $30.
If you're doing anything heavy and would like some enterprise-level IDE features (local/remote debugging, framework support, IntelliSense), try Zend IDE. I believe it's a few hundred dollars but may be worth it.
There's also a plugin for Eclipse you can try (PHPEclipse, I think). I hope this helps.
A: Dreamweaver
A: Just last night I finally bought the latest version of Zend Studio. I used previous versions and I was always very happy with it. I don't think you can undervalue the integration between their debugger and their Firefox and Internet Explorer toolbars. I use them constantly and they give me a great sense of how the application will run live.
The latest version is built on Eclipse, so you get many of its features as a base which lets Zend focus on providing more advanced functionality. I like the way they have made Studio very PHP aware in the sense that once you start it up everything is geared toward developing PHP applications. It's knowledgeable about Zend Framework, PHPDoc, and PHP's newer OOP features. (It has grown up along with PHP.) You can get most of the same functionality from Eclipse or Eclipse PDT, but I always felt they provided me with so many options I couldn't actually do anything. Studio let me start building applications pretty quickly since that's about all it does.
I think it meets most of your requests except for the Ruby part. I'm sure you can add Ruby extensison to it since it is Eclipse, but I haven't tried that yet. Also, I think they recently improved the JavaScript coding as well, but I haven't tested it much so far.
A: PHPEclipse is as close to Eclipse java power as it could get. Eclipse PDT is much weaker (last time I checked).
A: Why Dreamweaver at -2? For current work I prefer Dreamweaver over other editors. I have tried a lot of editors, but in the end I stick with Dreamweaver.
A: I'm using PHPDesigner but I will go for Eclipse PDT. I was always against Eclipse until a few months ago, when I had a Java project to finish... Great IDE.
Now I can't imagine one day without Eclipse. :)
A: Adobe Dreamweaver CS5 is very easy to use. It has code highlighting, shows the files included in the parent file in separate tabs, and bundles the php.net documentation offline. That means if you want to know about built-in functions, just press Ctrl + Space and it will show a drop-down with the syntax, along with an offline preview of the documentation from php.net.
A: My opinion is that the best for PHP is RadPHP.
A: I have a friend who swears by Aptana Studio.
A: *
*Best of all: Notepad++ (free and helpful with colors and links)
*Average: NetBeans (Normal IDE)
*Not good: Eclipse (It crashes when you don't wait for it)
*Oh, and I forgot: don't ever use JDeveloper :D
A: NetBeans is pretty nice because it has syntax highlighting, tabs, auto-formatting and live syntax verification. Sadly, you cannot save in UTF-8 without having to set up "projects".
How annoying; I wonder if there is another editor with the same features that would also allow me to use UTF-8 without having to set up "projects".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
}
|
Q: How do you handle code promotion in a Sharepoint environment? In a typical enterprise scenario with in-house development, you might have dev, staging, and production environments. You might use SVN to contain ongoing development work in a trunk, with patches being stored in branches, and your released code going into appropriately named tags. Migrating binaries from one environment to the next may be as simple as copying them to middle-ware servers, GAC'ing things that need to be GAC'ed, etc. In coordination with new revisions of binaries, databases are updated, usually by adding stored procedures, views, and adding/adjusting table schema.
In a Sharepoint environment, you might use a similar version control scheme. Custom code (assemblies) ends up in features that get installed either manually or via various setup programs. However, some of what needs to be promoted from dev to staging, and then onto production might be database content that supports the custom code bits.
If you've managed an enterprise Sharepoint environment, please share thoughts on how you manage promotion of code and content changes between environments, while protecting your work and your users, and keeping your sanity.
A: I assume when you talk about database content you are referring to the actual contents contained in a site or a list.
Probably the best way to do this is to use the stsadm import and export commands to export and import content from one environment to another. (Don't use backup/restore when going from one environment to another.)
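For reference, a round trip with stsadm looks something like this (the server names, site URLs and filenames here are placeholders):

stsadm -o export -url http://devserver/sites/mysite -filename mysite.cmp -includeusersecurity
stsadm -o import -url http://stagingserver/sites/mysite -filename mysite.cmp -includeusersecurity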
A: For any file changes (assemblies, aspx) you can use Features and then keep track of the installers. You would install the feature and do an upgrade to push changes.
There's no easy way to sync the data... you can use the stsadm import/export commands as John pointed out. But this may not be straightforward, especially if the servers are configured differently.
There's also Data Sync Studio product (http://www.simego.net/DataSync_Studio.aspx) you can try.
A: Depending on what form the database content takes, I would keep the creation of it in code so it's all in one place (your Visual Studio project) and can also be managed via source control. Deployment of the content could either be via a console application or even better feature receiver.
You might also like to read this blog post and look at the tool mentioned there for another approach.
A: The best resource I can point you to is Eric's paper:
http://msdn.microsoft.com/en-us/library/bb428899.aspx
I was part of a team working to better the story around development of WSS and MOSS solutions with TFS, but I don't know where that stands.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: SVN Client integrated with OS X's Finder Is there a TortoiseSVN equivalent for OS X? I'm looking for an SVN client that will integrate with OS X's Finder and be added as Context menu items.
Update: Just found SmartSVN. Anyone with feedback on it?
A: There is SCPlugin which is the closest match to TortoiseSVN on OS X. It adds overlay icons as well as context menu entries to the Finder.
A: Note that neither SCPlugin nor the SmartSVN Finder integration will work on OS X 10.6 (Snow Leopard), because Apple has dropped support for Finder plugins.
A: I don't believe so, but I've recently started using Cornerstone as a SVN client on the Mac and I'm super-happy with it.
It's about $60 and has a 30-day trial. Also try "Versions". I trialed it for a few weeks and it was the best of the other SVN clients, but not as good as Cornerstone (IMO).
A: Google found this:
http://scplugin.tigris.org/
doesn't seem to be as slick as Tortoise, but at least it's a start.
I tried Versions for a little while, but it often got confused and irritated me quite a lot.
A: You may be interested in Versions. Not exactly what you're looking for but close
A: Agreed. SCPlugin is the best option out there. But it's a bit buggy in the latest release of OS X. Has been for a little while. Another alternative is PathFinder which is a very slick Finder replacement, that has SCPlugin integrated, as well as a console, and various other SVN integrations. It's a commercial product, but well worth the money.
A: In my opinion the svn-finder scripts are the best solution for Leopard or Snow Leopard, because SCPlugin will not work with the Finder on those systems. You can download the scripts from: http://svn-finder.sourceforge.net/ I also found easySVN http://sourceforge.net/projects/easy-svn/ which is not so bad and free.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: open solaris code vs solaris code How compatible is code written under Solaris with OpenSolaris? I would be interested specifically in some kernel modules.
A: I think it is hard to quantify software compatibility, but I'd say code written for Solaris is quite forward compatible with OpenSolaris kernel. OpenSolaris source code evolves into what will be Solaris 11, and Sun's commitment to backwards compatibility is quite a fact.
A: Kernel modules written for Solaris should function in OpenSolaris following a simple recompile providing you are using the exposed kernel APIs that are compatible between the releases that you are using in Solaris and OpenSolaris.
There is a huge amount of work in Sun to ensure that programs written using publicly exposed interfaces are compatible. There is a listed 'Exposure/Stability' entry at the bottom of manual pages for most APIs that state in defined terms how someone can use it.
A: Kernel modules in particular will be very compatible between Solaris and OpenSolaris. OpenSolaris (via Project Indiana) is evolving the user-space components more heavily, including the installer and packages.
A: This is with regard to core OS daemons only and not kernel modules, but I've had success compiling OpenSolaris components from source and using the resulting binaries on commercial Solaris just fine. It's obviously easier with a Makefile but I did one manually.
I tried this with a small handful of binaries that I needed to add debugging output to and compiled them directly on the commercial Solaris system using gcc without issue. As mentioned earlier YMMV based on what app/module it is.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Crystal Reports .Net Guidance We have been using .Net and Visual Studio for the last six years, and early on developed a number of web based reporting applications using the .Net version of Crystal Reports that was bundled with Visual Studio. I'm unimpressed with that product: It seems incredibly difficult and convoluted to use. We had to make security changes, install various extra software, and so on.
Now, we are moving to VS2008 and version 3.5 of the .Net framework, and the time has come to redevelop old applications. Our Crystal .Net developers are long gone and I face a decision: Do we stick with Crystal or move to something else? We have the "full" version of Crystal Reports XI at our disposal.
We produce PDF versions of data extracted from various databases. Some apps use the built-in Report Viewer but this seems redundant with the flexibility of grid views. We still need to produce printable PDFs in the grid or in a downloadable Excel format.
*
*Is Crystal Reports .Net worth persisting with, or should we work out how to use version XI?
*Alternatively is there a simple and low cost way to generate PDF reports without using Crystal?
A: I have experience with reporting in CrystalReports (trying lite version bundled with Visual Studio), ActiveReports from DataDynamics (4 years, full version), Reporting from Telerik (trying trial version) and XtraReports from DevExpress (last one year).
I think (and not only me :) ) that CrystalReports is the most inefficient tool of these in terms of developer productivity. ActiveReports from DataDynamics is much, much better, but it is a little bit buggy :(. Last year we decided to change reporting suites; we chose XtraReports (with source code), and I'm totally happy. The price is low, no bugs (so far :) ), wonderful support, and (most importantly) productivity has grown a lot.
I recomend you DevExpress's or Telerik's reporting tools.
A: I would recommended i-net Clear Reports (used to be i-net Crystal-Clear). It can read your existing *.rpt files. Has a better and easier-to-use API (which I admit is not saying much...).
A: Like you, I've had poor experiences with Crystal Reports, and my gut instinct is to post "avoid it at all costs" in all caps with lots of exclamation points. However, I've had my afternoon nap today, so I'll post like a grownup.
If all you're looking to do is pdf-ize (yes, it's a real word, damnit!) then you might look into one of the PDF components like ABCPDF. It's relatively easy to pop a well-formatted web page into a PDF document and be done with it.
However, if you need tight report formatting, consider sticking with crystal reports -- you have a big investment and knowledge base in the technology. Or, alternately, you could switch to ActiveReports or SQL Server reporting services.
I guess the cost/benefit analysis is the cost of retraining your dev team, and investing in the new technologies.
A: Move away from CR: just get a good PDF generator and Excel engine for .NET, and feed those using your own database code. You can use all the powerful .NET features, including LINQ, without having to wrestle with the Crystal Reports runtime and its woefully inadequate documentation and support.
A: I can suggest that the built in Microsoft reporting framework works adequately. You can do local reports or MS SQL server based reports. There is a client control that displays reports and can export to formats such as pdf and Excel. Visual Studio can handle report design for the stack.
As far as if it is better than Crystal Reports, I'd say check it out and see if you like it any better or worse. I've worked with the Microsoft Report Viewer more than Crystal Reports but both seem to be fairly similar. Offhand, Crystal Reports seems to be a more advanced reporting tool but more complicated.
I'm not sure about how to utilize the Microsoft Report Viewer infrastructure outside of Visual Studio. If you are using Visual Studio it should all be available in there and you can follow the online help instructions for deploying the pieces for your servers to your servers.
A: I have used ActiveReports from DataDynamics and Crystal Reports. Of these two, I would recommend ActiveReports above Crystal based on ease of use and, more importantly, future maintenance.
A: We use Crystal in our shop too. We are currently on 8.5, which is way old and is no longer supported by SAP. We tried to upgrade to CRXI recently, which involved an entirely new API. We had to shelve the effort due to other priorities. While working on the upgrade I found support for CRXI on a number of forums. Google it.
I believe you can find a cheap way to generate PDFs without using Crystal. I believe Adobe gives the creation part away for free. I would visit their site and look into it.
I would recommend staying with Crystal only if you had a lot of reports that were already using that technology.
A: Get out of Crystal Reports. They are poor.
Check out SQL Reporting Services. It works very well with .NET. Try it out. There is a learning curve, but when is there not?
A: IMO, you should consider other criteria as well such as:
*
*Cost of the software
*Integration with your .NET applications
*API and programmatic flexibility (all said and done, there are always "customizations" and tailoring; for such scenarios, developers eventually fall back on programmatic solutions vs. out-of-the-box)
Now, in my experience (having used both Crystal Reports and SSRS 2005/2008), though Crystal Reports does come with a friendly API, it fails on many basic criteria and developers end up fighting the software. I say this based on my experience with SSRS, which developers are far more comfortable with. For starters, it uses XML extensively, and the provision to use custom code assemblies does not hurt either.
I think you see where I am getting at:
Consider and evaluate SSRS. If you are hesitant at first, then do a proof of concept and test your requirements. I have a feeling you will be pleased with what you see:
*
*especially considering your requirement of using PDF format.
*Developers, especially , MSFT specialists will thank you
*Leverage the programmatic rendering of the reports (though it sounds fancy, trust me, it's no more than handling an API call).
For example:
public byte[] Render(
    string Report,
    string Format,
    string HistoryID,
    string DeviceInfo,
    [Namespace].ParameterValue[] Parameters,
    [Namespace].DataSourceCredentials[] Credentials,
    string ShowHideToggle,
    out string Encoding,
    out string MimeType,
    out [Namespace].ParameterValue[] ParametersUsed,
    out [Namespace].Warning[] Warnings,
    out string[] StreamIds)

Member of [Namespace].ReportingService

--- where the Format parameter will be "PDF"
Hope you find this relevant
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Dragging HTML cells over the table using Javascript Folks,
I need a solution that allows dragging and dropping a cell over the table.
The cells can be of different colspans, so when the cell is dropped into the middle of another cell, which is bigger, the following steps should be performed:
*
*Another td element is created, which is equal in width to the draggable element.
*Cells on the left and right of the new td element automatically adjust their width.
Currently I use the jQuery drag-and-drop plug-in. It allows dragging elements, but becomes a bit awkward when it comes to manipulating the DOM elements on the droppable side.
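For reference, the DOM manipulation I need on drop looks roughly like the sketch below. It assumes jQuery UI's draggable/droppable, the selectors are made up, and it only shrinks the target cell rather than rebalancing both neighbours:

$("td.draggable").draggable({ helper: "clone" });
$("td.droptarget").droppable({
    drop: function (event, ui) {
        var target = $(this);
        var dragged = ui.draggable;
        var total = parseInt(target.attr("colspan") || "1", 10);
        var used = parseInt(dragged.attr("colspan") || "1", 10);
        // Create the new cell with the width of the dragged element...
        var newCell = $("<td/>").attr("colspan", used).append(dragged.contents());
        // ...and let the target give up the corresponding columns.
        target.attr("colspan", Math.max(1, total - used));
        newCell.insertAfter(target);
    }
});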
Could anybody propose some library where this kind of behaviour is implemented?
A: DragTable might be a good starting point.
A: Not sure if it supports tables, but I have used link text before and it worked fairly well for me
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is the difference between vmalloc and kmalloc? I've googled around and found most people advocating the use of kmalloc, as you're guaranteed to get contiguous physical blocks of memory. However, it also seems as though kmalloc can fail if a contiguous physical block that you want can't be found.
What are the advantages of having a contiguous block of memory? Specifically, why would I need to have a contiguous physical block of memory in a system call? Is there any reason I couldn't just use vmalloc?
Finally, if I were to allocate memory during the handling of a system call, should I specify GFP_ATOMIC? Is a system call executed in an atomic context?
GFP_ATOMIC
The allocation is high-priority and
does not sleep. This is the flag to
use in interrupt handlers, bottom
halves and other situations where you
cannot sleep.
GFP_KERNEL
This is a normal allocation and might block. This is the flag to use
in process context code when it is safe to sleep.
A: What are the advantages of having a contiguous block of memory? Specifically, why would I need to have a contiguous physical block of memory in a system call? Is there any reason I couldn't just use vmalloc?
From Google's "I'm Feeling Lucky" on vmalloc:
kmalloc is the preferred way, as long as you don't need very big areas. The trouble is, if you want to do DMA from/to some hardware device, you'll need to use kmalloc, and you'll probably need a bigger chunk. The solution is to allocate memory as soon as possible, before memory gets fragmented.
A: On a 32-bit system, kmalloc() returns the kernel logical address (it's a virtual address, though) which has a direct mapping (actually with a constant offset) to the physical address.
This direct mapping ensures that we get a contiguous physical chunk of RAM. Suited for DMA where we give only the initial pointer and expect a contiguous physical mapping thereafter for our operation.
vmalloc() returns the kernel virtual address which in turn might not be having a contiguous mapping on physical RAM.
Useful for large memory allocations and in cases where we don't care whether the memory allocated to our process is also contiguous in physical RAM.
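A minimal sketch of both calls as they might appear in a driver function (an illustration only; error handling trimmed to the essentials):

#include <linux/slab.h>     /* kmalloc, kfree */
#include <linux/vmalloc.h>  /* vmalloc, vfree */

static int alloc_example(void)
{
        char *dma_buf, *big_buf;

        /* Small, physically contiguous buffer - what you'd hand to DMA. */
        dma_buf = kmalloc(4096, GFP_KERNEL);
        if (!dma_buf)
                return -ENOMEM;

        /* Large buffer that only needs to be virtually contiguous. */
        big_buf = vmalloc(4 * 1024 * 1024);
        if (!big_buf) {
                kfree(dma_buf);
                return -ENOMEM;
        }

        /* ... use the buffers ... */

        vfree(big_buf);
        kfree(dma_buf);
        return 0;
}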
A: Short answer: download Linux Device Drivers and read the chapter on memory management.
Seriously, there are a lot of subtle issues related to kernel memory management that you need to understand - I spend a lot of my time debugging problems with it.
vmalloc() is very rarely used, because the kernel rarely uses virtual memory. kmalloc() is what is typically used, but you have to know what the consequences of the different flags are and you need a strategy for dealing with what happens when it fails - particularly if you're in an interrupt handler, like you suggested.
A: Linux Kernel Development by Robert Love (Chapter 12, page 244 in 3rd edition) answers this very clearly.
Yes, physically contiguous memory is not required in many of the cases. The main reason for kmalloc being used more than vmalloc in the kernel is performance. The book explains that when big memory chunks are allocated using vmalloc, the kernel has to map the physically non-contiguous chunks (pages) into a single contiguous virtual memory region. Since the memory is virtually contiguous and physically non-contiguous, several virtual-to-physical address mappings will have to be added to the page table. And in the worst case, there will be (size of buffer/page size) number of mappings added to the page table.
This also adds pressure on TLB (the cache entries storing recent virtual to physical address mappings) when accessing this buffer. This can lead to thrashing.
A: The kmalloc() & vmalloc() functions are a simple interface for obtaining kernel memory in byte-sized chunks.
*
*The kmalloc() function guarantees that the pages are physically contiguous (and virtually contiguous).
*The vmalloc() function works in a similar fashion to kmalloc(), except it allocates memory that is only virtually contiguous and not necessarily physically contiguous.
A: You only need to worry about using physically contiguous memory if the buffer will be accessed by a DMA device on a physically addressed bus (like PCI). The trouble is that many system calls have no way to know whether their buffer will eventually be passed to a DMA device: once you pass the buffer to another kernel subsystem, you really cannot know where it is going to go. Even if the kernel does not use the buffer for DMA today, a future development might do so.
vmalloc is often slower than kmalloc, because it may have to remap the buffer space into a virtually contiguous range. kmalloc never remaps, though if not called with GFP_ATOMIC kmalloc can block.
kmalloc is limited in the size of buffer it can provide: 128 KBytes*). If you need a really big buffer, you have to use vmalloc or some other mechanism like reserving high memory at boot.
*) This was true of earlier kernels. On recent kernels (I tested this on 2.6.33.2), max size of a single kmalloc is up to 4 MB! (I wrote a fairly detailed post on this.) — kaiwan
For a system call you don't need to pass GFP_ATOMIC to kmalloc(), you can use GFP_KERNEL. You're not an interrupt handler: the application code enters the kernel context by means of a trap, it is not an interrupt.
A: One other difference is that kmalloc will return a logical address (unless you specify GFP_HIGHMEM). Logical addresses are placed in "low memory" (in the first gigabyte of physical memory) and are mapped directly to physical addresses (use the __pa macro to convert). This property implies kmalloced memory is contiguous memory.
On the other hand, vmalloc is able to return virtual addresses from "high memory". These addresses cannot be converted into physical addresses in a direct fashion (you have to use the virt_to_page function).
A: In short, vmalloc and kmalloc both could fix fragmentation. vmalloc use memory mappings to fix external fragmentation; kmalloc use slab to fix internal frgamentation. Fot what it's worth, kmalloc also has many other advantages.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129"
}
|
Q: Uploading files in Ruby on Rails I have a web application that needs to take a file upload from the user and upload it to a remote server. I can take input from user to server fine via file_field, but can't seem to work out the next step of uploading from server to remote. Net::HTTP doesn't do multipart forms out of the box, and I haven't been able to find another good solution. I need something that will allow me to go from user -> server -> remote instead of going user -> remote. Anyone succeeded in doing this before?
A: I believe the attachment_fu plugin would allow for this:
http://svn.techno-weenie.net/projects/plugins/attachment_fu/
A: Surprisingly, multipart form posts really aren't in Net::HTTP. A thread from comp.lang.ruby seems to have a snippet of code you might find useful to perform the encoding necessary:
BOUNDARY = "AaB03x"
def encode_multipartformdata(parameters = {})
ret = String.new
parameters.each do |key, value|
unless value.empty?
ret << "\r\n--" << BOUNDARY << "\r\n"
ret << "Content-Disposition: form-data; name=\"#{key}\"\r\n\r\n"
ret << value
end
end
ret << "\r\n--" << BOUNDARY << "--\r\n"
end
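To actually push the encoded body from your server to the remote host, something along these lines should work (the host, path and field names here are placeholders). Note that a real file part would also need filename and Content-Type headers inside the part itself:

require 'net/http'

body = encode_multipartformdata('description' => 'my upload')
http = Net::HTTP.new('remote.example.com', 80)
response = http.post('/upload', body,
  'Content-Type' => "multipart/form-data; boundary=#{BOUNDARY}")
puts response.code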
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: C compiler for Windows? I'm fine working on Linux using gcc as my C compiler but would like a Windows solution. Any ideas? I've looked at Dev-C++ from Bloodshed but looking for more options.
A: GCC is not technically a linux specific compiler. Its a standards compliant c/c++ compiler, and I use it for windows programs on a daily basis. Its probably best that you use it until you become more comfortable with something else.
I recommend that you use the MinGW distribution of GCC. That will compile your programs natively for windows, using a standard library, etc.
If you're looking for an IDE, I have two recommendations. Visual Studio is the Microsoft version, and although it has its issues, it is an excellent IDE for working with the code. However, if you're looking for something a bit more lightweight, CodeBlocks is also rather good, and has the added benefit of being able to use basically any compiler you have installed (including several forms of GCC and the Microsoft compiler that comes with Visual Studio) and being able to open project files from other IDEs. Plus, it runs on Linux too, so you could make that transition even easier on yourself.
I personally prefer GCC, but that's just me. If you really want the Microsoft Solution, VS is the way to go.
A: You can use GCC on Windows by downloading MingW (discontinued) or its successor Mingw-w64.
A: You may try Code::Blocks, which is a better IDE and comes with MinGW GCC! I have used it and it's just too good a freeware IDE for C/C++.
A: MinGW would be a direct translation off gcc for windows, or you might want to check out LCC, vanilla c (more or less) with an IDE. Pelles C seems to be based off lcc and has a somewhat nicer IDE, though I haven't used it personally. Of course there is always the Express Edition of MSVC which is free, but that's your call.
A: Most universities give you access to Microsoft Dreamspark.
If you're using GCC/Linux in class, just install Ubuntu. Windows is a terrible platform for C development.
A: Be careful to use a C compiler, not C++ if you're actually doing C. While most programs in C will work using a C++ compiler there are enough differences that there can be problems. I would agree with the people who suggest using gcc via cygwin.
EDIT:
http://en.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%2B shows some of the major differences
A: You can get Visual C++ Express Edition straight from Microsoft, if you want something targeting Win32. Otherwise MinGW or lcc, as suggested elsewhere.
A: http://www.mingw.org/wiki/HOWTO_Install_the_MinGW_GCC_Compiler_Suite
A: GCC works fine. Note that MSVC is not necessarily a valid solution because it does not support C99.
A: I'm late to this party, but for any future C folks on Windows, Visual Studio targets C90 instead of C99, which is what you'd get on *nix. I am currently targeting C99 on Windows by using Sublime Text 2 in tandem with Cygwin.
A: GCC is ubiquitous. It is trusted and well understood by thousands of folks across dozens of communities.
Visual Studio is perhaps the best IDE ever developed. It has a great compiler underneath it. But it is strictly Windows-only.
If you're just playing, get GCC -- it's free. If you're concerned about multiple platforms, it's GCC. If you're talking serious Windows development, get Visual Studio.
A: There is another free C compiler for Windows: Pelles C.
Pelles C is a complete development kit for Windows and Windows Mobile. It contains among other things an optimizing C compiler, a macro assembler, a linker, a resource compiler, a message compiler, a make utility and install builders for both Windows and Windows Mobile.
It also contains an integrated development environment (IDE) with project management, debugger, source code editor and resource editors for dialogs, menus, string tables, accelerator tables, bitmaps, icons, cursors, animated cursors, animation videos (AVI's without sound), versions and XP manifests.
URL: http://www.smorgasbordet.com/pellesc/
A: You could always just use gcc via cygwin.
A: I personally have been looking into using MinGW (what Bloodshed uses) with the Code Blocks IDE.
I am also considering using the Digital Mars C/C++ compiler.
Both seem to be well regarded.
A: Cygwin offers full GCC support on Windows; also, the free Microsoft Visual C++ Express Edition supports 'legacy' C projects just fine.
A: Visual C++ Express is a fine and free IDE for Windows which comes with a compiler.
If you are more comfortable with command-line solutions in general and gcc in particular, MinGW or Cygwin might be more up your alley. They are also both free.
A: There have been a few comments pointing out that C is not C++. While that's true, it's also true that any C++ compiler will also compile C - usually the compiler mode will be automatically selected based on the filename extension, but every compiler also has an option to force C or C++ mode regardless of the filename.
So choose the free C++ compiler that you're most comfortable with: gcc, VC++ Express, Digital Mars, whatever. Use the IDE you like best: emacs, vim, VC++ Express, Code::Blocks, Bloodshed - again, whatever.
Any of these tools will be more than adequate for learning. Personally, since you're asking about Windows, I'd choose VC++ Express - it's a great IDE, it's free, and it'll compile C programs just fine.
A: It comes down to what you're using in class.
If the labs and the assignments are in linux, then you probably want a MinGW solution. If they're in windows, get Visual Studio Express.
A: Can't you get a free version of Visual Studio Student Addition from your school? Most Universities have programs to give free software to students.
A: You mean Bloodshed's Dev-C++? It's a nice visual IDE for C++ which uses MinGW's gcc for Windows as the behind-the-scenes compiler. The project's been abandoned for a while (in my opinion, using Delphi to develop a C++ IDE is a very stupid thing to do to draw developers' attention), however there's nothing that stops you from using it and updating the version of MinGW's gcc it uses to the latest one - besides, it's GPL-licensed.
A: I use either Bloodshed's DEV C++, Cygwin, or Visual C++ Express, all of which are free and work well. I have found that for me, DEV C++ worked the best and was the least quirky. Each compiler has its own quirks and differences; you need to try out a few and find the one with which you are most comfortable. I also liked the fact that DEV C++ allowed me to change the fonts that are used in the editor. I like the Proggy programming fonts!
A: Must Windows C++ compilers will work.
Also, check out MinGW.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "132"
}
|
Q: Java ME UI libraries I'm developing a Java ME app & need pointers to some really good UI libraries. I did see a few such as Java ME Polish. Are there any more out there? For e.g. ebuddy's java ME app has an amazing UI and so is gmail's java ME app. What libraries would they have been using or would have they have developed it on their own?
A: Sun recently released and open-sourced their solution to the crappy-looking LCDUI. It is called the Lightweight UI Toolkit (LWUIT) and can be found at lwuit.dev.java.net
A: eSWT would be available for MIDlets on the latest J9 VM, as used by the Series 60 3rd Edition Feature Pack 2 handsets (Nokia N78, 6210, N96...), but we're mainly talking about nicer-looking UI controls. Basically, a MIDlet can look much more like a native application now.
Sun has recently open-sourced LWUIT. That could also be worth a look.
Nothing beats drawing your own images on a Canvas, though. Generic layout managers in any kind of library will only get you so far. You should only look at the available technologies once you have a good idea of how many different kinds of screens your application should have and what they look like.
A: We have been trying lately on kuix.. So far so good and more light weight than LWUIT
code.http://code.google.com/p/kuix
A: Most of the apps with amazing UIs (Opera Mini, Gmail, any game from an AAA developer) use custom UIs developed in-house. These developers treat developing a UI as one more task in their projects and give it personality, involving professional graphic designers. Going with a packaged library would quickly accomplish the task but it would make the application look generic and bland (less bland than with the default UI, but still bland and limited).
In short, go with a packaged UI for quick development, but don't expect the level of quality to be near the apps you mentioned.
A: I'm facing a similar dilemma right now. We're currently using the default, high-level LCDUI framework for the speed of development, but it's severely limiting what we can do. I had thought our best option would be to use a third-party UI framework, but I'm now convinced that if we're serious about the application we should write our own. It's like anything in software: if it's mission-critical to your application you should write it yourself, even if that means re-inventing the wheel.
A: Digitalapes has developed a framework for J2ME application development that includes a high level UI library.
The library is lightweight and well documented; you can have a look at the Gear framework page for more information, or you can directly download the JAR and javadoc from Gear's SourceForge page.
The Digitalapes blog also includes a series of tutorials about how to use the framework.
A: Polish has a really nice set of UI components, which are skinnable with CSS-style comments.
It also features a device database for compatibility purposes.
Some tips if you decide to go with Polish (as I did):
*
*Use Eclipse and the Mepose plugin. The NetBeans integration is really nerve-wracking.
*The bulletin board is dead, so by all means use the mailing list!
*If you are not familiar with the Ant build system, you'd better start learning now.
*The J2ME Polish book is not worth its money (my opinion). The documentation on their website and the sample code are enough to give you a solid start.
A: I've used SWT when deploying to a full profile J2ME (IBM J9 on PocketPC), I don't know if it is usable by MIDlets however. It's quite a nice GUI library in its own way, and far better than AWT.
A: Unfortunately companies usually end up designing their own GUI's when it comes to mobile development. It's the only way to have full control over your interfaces, but you should consider if it's worth the additional development time, and you're also on your own when it comes to device compatibility issues / handset bugs - of which there are plenty.
If you are happy with a less flexible interface, you can go with one of the existing libraries. I currently use J2ME GUI from http://www.garcer.com/. We get the same flexibility as with desktop development and it also features custom styling, so with a little extra effort you can make it look the way you want it to.
A: You can use LWUIT for the UI development in J2ME framework.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: SQL Server 2000 Function for record created datetime I was given a task to display when a record in the database was added, however the previous developers never made a field for this, and I can't go back and make up dates for all the existing records. Is there an easy way to extract a record creation date from a SQL Server 2000 query?
SELECT RECORD_CREATED_DATE FROM tblSomething WHERE idField = 1
The RECORD_CREATED_DATE isn't a field in the existing table. Is there some sort of SQL function to get this information?
A: No, unfortunately date of insert or last update are not automatically stored with each record.
To do that, you need to create two datetime columns in your table (e.g. CreatedOn, UpdatedOn) and then in an INSERT trigger set the CreatedOn = getdate() and in the UPDATE trigger set the UpdatedOn = getdate().
CREATE TRIGGER tgr_tblMain_Insert
ON dbo.tblMain
AFTER INSERT
AS
BEGIN
set nocount on
update dbo.tblMain
set CreatedOn = getdate(),
CreatedBy = session_user
where tblMain.ID in (select ID from INSERTED)
END
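A matching update trigger follows the same pattern (a sketch):

CREATE TRIGGER tgr_tblMain_Update
ON dbo.tblMain
AFTER UPDATE
AS
BEGIN
set nocount on
update dbo.tblMain
set UpdatedOn = getdate(),
UpdatedBy = session_user
where tblMain.ID in (select ID from INSERTED)
END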
I also like to create CreatedBy and UpdatedBy varchar(20) columns which I set to session_user or update through other methods.
A: If it's not stored as a field, the info is lost after the transaction log recycles (typically daily), and maybe even sooner.
A: I'm not aware of a way you can get this information for existing records. However, going forward you can create an audit table that stores the TableName and RecordCreateDate (or something similar).
Then, for the relevant tables you can make an insert trigger:
CREATE TRIGGER trigger_RecordInsertDate
ON YourTableName
AFTER INSERT
AS
BEGIN
-- Assumes an audit table with TableName and RecordCreateDate columns.
INSERT INTO AuditTable (TableName, RecordCreateDate)
SELECT 'YourTableName', getdate()
FROM INSERTED
END
A: I would start with putting this information in from now on. Create two columns, InsertedDate, LastUpdatedDate. Use a default value of getdate() on the first and an update trigger to populate the second (might want to consider UpdatedBy as well). Then I would write a query to display the information using the CASE Statement to display the date if there is one and to display "Unknown" is the field is null. This gets more complicated if you need to store a record of all the updates. Then you need to use audit tables.
A: create another column and give it a default of getdate() that will take care of inserted date, for updated date you will need to write an update trigger
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: A regular expression to remove a given (x)HTML tag from a string Let's say I have a string holding a mess of text and (x)HTML tags. I want to remove all instances of a given tag (and any attributes of that tag), leaving all other tags and text along. What's the best Regex to get this done?
Edited to add: Oh, I appreciate that using a Regex for this particular issue is not the best solution. However, for the sake of discussion can we assume that that particular technical decision was made a few levels over my pay grade? ;)
A: Attempting to parse HTML with regular expressions is generally an extremely bad idea. Use a parser instead, there should be one available for your chosen language.
You might be able to get away with something like this:
</?tag[^>]*?>
But it depends on exactly what you're doing. For example, that won't remove the tag's content, and it may leave your HTML in an invalid state, depending on which tag you're trying to remove. It also copes badly with invalid HTML (and there's a lot of that about).
Use a parser instead :)
A: I think there is some serious anti-regex bigotry happening here. There are lots of times when you may want to strip a particular tag out of some markup when it doesn't make sense to use a full blown parser.
Of course there are times when a parser might be the best option, but if you are looking for a regex then:
<script[^>]*?>[\s\S]*?<\/script>
That would remove script tags and their contents. Make sure that you use case-insensitive matching.
If you don't want to remove the contents of the tag then you can use:
<\/?script[^>]*?>
An example of usage in javascript would be:
function stripScripts(markup) {
return markup.replace(/<script[^>]*?>[\s\S]*?<\/script>/gi, '');
}
var safeText = stripScripts(textarea.value);
A: I think it might be Raymond Chen (blogs.msdn.com/oldnewthing) that I'm paraphrasing (badly!) here... But, you want a Regular Expression? "Now you have two problems" ... :=)
If the string is well-formed (X)HTML, could you load it up into a parser (HTML/XML) and use this to remove any nodes of the offending variety? If it's not well-formed, then it becomes a bit more tricky, but, I suspect that a RegEx isn't the best way to go about this...
A: There are just TOO many ways a single tag can appear, not to mention encodings, variants, etc.
I strongly suggest you rethink this approach.... you really shouldnt have to be handling HTML directly, anyway.
A: Off the top of my head, I'd say this will get you started in the right direction.
s/<TAG[^>]*>([^<]*)<\/TAG[^>]*>/\1/
Basically find the starting tag, any text in between the tags, and then the ending tag. Replace the whole thing with whatever was in between the tags.
A: Corrected answer:
</?TAG\b[^>]*?>
Because Dans answer would remove <br />, but you want only <b>
A: Here's a regex I wrote for this purpose, it works in a few more situations:
</?(?(?=b|img|a|script)notag|[a-zA-Z0-9]+)(?:\s[a-zA-Z0-9\-]+=?(?:(["']?).*?\1?)?)*\s*/?>
A: While using regexes for parsing HTML is generally frowned upon or looked down on, you almost certainly don't want to write your own parser.
You could, however, use some built-in or library functions to achieve what you need (see the sketch after this list).
*
*JavaScript has getElementsByTagName and getElementById, not to mention jQuery.
*PHP has the DOM extension.
*Python has the awesome Beautiful Soup
*...and many more.
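For example, a rough sketch in browser JavaScript that unwraps every instance of a given tag while keeping its contents:

function removeTag(root, tagName) {
    var tags = root.getElementsByTagName(tagName);
    // Iterate backwards: the NodeList is live and shrinks as nodes are removed.
    for (var i = tags.length - 1; i >= 0; i--) {
        var el = tags[i];
        // Move the children out so only the tag itself disappears.
        while (el.firstChild) {
            el.parentNode.insertBefore(el.firstChild, el);
        }
        el.parentNode.removeChild(el);
    }
}
removeTag(document.body, "font");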
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Find potential SQL inject problems in Java/JSP code I'm working for a customer with a huge legacy codebase consisting of various Java and JSP based applications.
Most querying is done using a home-built 'orm' system. Some applications use plain old JDBC. Some applications are based on Hibernate (yes, HQL built with plus signs is a potential problem as well). Some of the older applications are entirely written in JSP.
I've found a couple of SQL inject bugs manually. But I could really use some sort of tool to search for potential weak spots.
Any ideas?
A: I'd write some searches, or load up an IDE, to look for uses of java.sql.Statement as opposed to PreparedStatement.
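In a nutshell, the difference such a search is hunting for looks like this (a sketch assuming an open Connection and user-supplied input; the table and column names are made up):

import java.sql.*;

ResultSet findUser(Connection connection, String userName) throws SQLException {
    // Vulnerable: user input is concatenated straight into the SQL string.
    Statement stmt = connection.createStatement();
    ResultSet bad = stmt.executeQuery(
        "SELECT * FROM users WHERE name = '" + userName + "'");

    // Safe: the driver binds the parameter, so input cannot alter the query.
    PreparedStatement ps = connection.prepareStatement(
        "SELECT * FROM users WHERE name = ?");
    ps.setString(1, userName);
    return ps.executeQuery();
}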
A: I would recommend FindBugs (there is also an eclipse plugin) which can track down these issues and many more.
We are using it at work, it's fast and it's worth the money (as in free). We've solved some common problems with its help.
A: How large is your URL space? If possible, it's best to attempt SQL injection via HTTP GET and POST requests. There are some issues that can be found by source/byte code examination, but the only way to know for certain what kinds of potentially malicious input your application will accept is to use HTTP requests.
CAL9000 is a good SQL Injection / Cross-site Scripting testing tool if your set of URLs is small.
Companies that are serious about detecting mishandled malicious input will hire a 3rd party to do penetration testing. White Hat Security is a vendor I have worked with in the past and can recommend. We used them for a $100MM+ e-commerce web site. (I have no affiliation with White Hat and do not benefit in any way if you become their customer.)
All testing/hardening of your code aside, it is a very good idea to have an HTTP firewall in place like mod_security.
A: When I was working on localization of a "this-will-never-need-localization" application, we used a home-made tool for analyzing compiled code (IL in .NET, which is the same idea as bytecode in Java).
You can find calls to the specified methods which work with the DB (typically CRUD operations) and have a string parameter with the SQL command, then track the string instance back and check for concatenation.
We used .NET Reflector for decompiling and tracking strings. But I don't know if a similar tool is available for Java :(.
A: You can go for prevention rather than cure: add a sanitization layer just below your UI, so you won't end up with SQL/scripts in user inputs. There must be examples in Java; I have seen such an approach in CakePHP.
A: Find any place that doesn't use a PreparedStatement.
A: I recommend CAL9000. You can get details from the following link:
CAL9000
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can my application benefit from temporary tables? I've been reading a little about temporary tables in MySQL but I'm an admitted newbie when it comes to databases in general and MySQL in particular. I've looked at some examples and the MySQL documentation on how to create a temporary table, but I'm trying to determine just how temporary tables might benefit my applications and I guess secondly what sorts of issues I can run into. Granted, each situation is different, but I guess what I'm looking for is some general advice on the topic.
I did a little googling but didn't find exactly what I was looking for on the topic. If you have any experience with this, I'd love to hear about it.
Thanks,
Matt
A: The best place to use temporary tables is when you need to pull a bunch of data from multiple tables, do some work on that data, and then combine everything to one result set.
In MS SQL, temporary tables should also be used in place of cursors whenever possible because of the speed and resource impact associated with cursors.
A: If you are new to databases, there are some good books by Joe Celko that review best practices for ANSI SQL. SQL For Smarties describes in great detail the use of temp tables, the impact of indexes, where clauses, etc. It's a great reference book with in-depth detail.
A: Temporary tables are often valuable when you have a fairly complicated SELECT you want to perform and then perform a bunch of queries on that...
You can do something like:
CREATE TEMPORARY TABLE myTopCustomers
SELECT customers.*,count(*) num from customers join purchases using(customerID)
join items using(itemID) GROUP BY customers.ID HAVING num > 10;
And then do a bunch of queries against myTopCustomers without having to do the joins to purchases and items on each query. Then when your application no longer needs the database handle, no cleanup needs to be done.
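For instance (a trivial sketch; the threshold is arbitrary):

SELECT * FROM myTopCustomers WHERE num > 50 ORDER BY num DESC;
-- Optional: temporary tables are dropped automatically when the connection closes.
DROP TEMPORARY TABLE IF EXISTS myTopCustomers;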
Almost always you'll see temporary tables used for derived tables that were expensive to create.
A: First a disclaimer - my job is reporting so I wind up with far more complex queries than any normal developer would. If you're writing a simple CRUD (Create Read Update Delete) application (this would be most web applications) then you really don't want to write complex queries, and you are probably doing something wrong if you need to create temporary tables.
That said, I use temporary tables in Postgres for a number of purposes, and most will translate to MySQL. I use them to break up complex queries into a series of individually understandable pieces. I use them for consistency - by generating a complex report through a series of queries, and I can then offload some of those queries into modules I use in multiple places, I can make sure that different reports are consistent with each other. (And make sure that if I need to fix something, I only need to fix it once.) And, rarely, I deliberately use them to force a specific query plan. (Don't try this unless you really understand what you are doing!)
So I think temp tables are great. But that said, it is very important for you to understand that databases generally come in two flavors. The first is optimized for pumping out lots of small transactions, and the other is optimized for pumping out a smaller number of complex reports. The two types need to be tuned differently, and a complex report run on a transactional database runs the risk of blocking transactions (and therefore making web pages not return quickly). Therefore you generally don't want to avoid using one database for both purposes.
My guess is that you're writing a web application that needs a transactional database. In that case, you shouldn't use temp tables. And if you do need complex reports generated from your transactional data, a recommended best practice is to take regular (eg daily) backups, restore them on another machine, then run reports against that machine.
A: I've used them in the past when I needed to create evaluated data. That was before the time of views and sub selects in MySQL though and I generally use those now where I would have needed a temporary table. The only time I might use them is if the evaluated data took a long time to create.
A: I haven't done them in MySQL, but I've done them on other databases (Oracle, SQL Server, etc).
Among other tasks, temporary tables provide a way for you to create a queryable (and returnable, say from a sproc) dataset that's purpose-built. Let's say you have several tables of figures -- you can use a temporary table to roll those figures up to nice, clean totals (or other math), then join that temp table to others in your schema for final output. (An example of this, in one of my projects, is calculating how many scheduled calls a given sales-related employee must make per week, bi-weekly, monthly, etc.)
I also often use them as a means of "tilting" the data -- turning columns to rows, etc. They're good for advanced data processing -- but only use them when you need to. (My golden rule, as always, applies: If you don't know why you're using x, and you don't know how x works, then you probably shouldn't use it.)
Generally, I wind up using them most in sprocs, where complex data processing is needed. I'd love to give a concrete example, but mine would be in T-SQL (as opposed to MySQL's more standard SQL), and also they're all client/production code which I can't share. I'm sure someone else here on SO will pick up and provide some genuine sample code; this was just to help you get the gist of what problem domain temp tables address.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Can you use key/keyref instead of restriction/enumeration in XML schema? Suppose we have a stylesheet which pulls in metadata using the key() function. In other words we have instance documents like this:
<items>
<item type="some_type"/>
<item type="another_type"/>
</items>
and a table of additional data we would like to associate with items during processing:
<item-meta>
<item type="some_type" meta="foo"/>
<item type="another_type" meta="bar"/>
<item type="yet_another_type" meta="baz"/>
</item-meta>
Finally, suppose we want to do schema validation on the instance document, restricting the type attributes to the set of types which occur in item-meta. So in the schema we want to use key/keyref instead of restriction/enumeration. This is because using restriction/enumeration will require making a separate list of valid type attributes.
However, it doesn't look like key/keyref will actually work. Having tried it (with MSXML 6.0) it appears the selector of a schema key won't accept the document() function in its xpath argument, so we can't examine the item-meta data, whether it appears in an external file or in the schema file itself. It looks like the only place we can look for keys is the instance document.
So if we really don't want to have a separate list of valid types, we have to do a pre-validation transform, pulling in the item-meta stuff, then do the validation, then do our original transform. That seems overcomplicated for what ought to be a relatively straightforward use of XML schema and stylesheets.
Is there a better way?
A: Selectors in key/keyref allow only a very restricted xpath syntax. Short, but not completely accurate: The selector must point to a subnode of the element declared.
The full definition of the restricted syntax is -> here.
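To illustrate, here is roughly what a legal key looks like, using the item-meta structure from the question (a sketch only; it does not solve the cross-document problem). The selector must be a relative path beneath the element on which the key is declared, so there is no way to reach into another document:
<xs:element name="item-meta">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="item" maxOccurs="unbounded">
        <xs:complexType>
          <xs:attribute name="type" type="xs:string"/>
          <xs:attribute name="meta" type="xs:string"/>
        </xs:complexType>
      </xs:element>
    </xs:sequence>
  </xs:complexType>
  <xs:key name="itemTypeKey">
    <xs:selector xpath="item"/>  <!-- relative path only; document() is rejected here -->
    <xs:field xpath="@type"/>
  </xs:key>
</xs:element>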
So, no I don't see a better way, sorry.
BTW: The W3C states that this restriction was made to make life easier on implementers of XML Schema processors. Keep in mind that one of the design goals of XML Schema was to make it possible to process a document in streaming mode. That explains really a lot of the sometimes seemingly random restrictions of XML Schema.
A: Having thought about it a little more, I came up with the idea of having the stylesheet do that part of the validation. The schema would define the item type as a plain string, and the stylesheet would emit a message and stop processing if it couldn't look up the item type in the item-meta table.
This solution fixes the original problem of having to write down the list of valid types more than once, but it introduces the problem that validation logic is now mixed in with the stylesheet logic. I don't have enough experience with XSD+XSLT to tell whether this new problem is less serious than the old one, but it seems to be more elegant than what I wrote earlier about pulling the item-meta table into each instance document in a pre-validation transform.
A: You wouldn't need to stop the XSLT with some error. Just let it produce something that the schema won't validate and that points to the original problem like
<error txt="Invalid-Item-Type 'invalid_type'"/>
Apart from that please keep in mind that there are no discussion threads here. The posts may go up and down, so it's better to edit your question accordingly.
Remember, the philosophy here is "One question, and the best answer wins".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to change the location of the netbeans settings directory (~/.netbeans) By default netbeans stores it's settings in a directory called .netbeans under the user's home directory. Is it possible to change the location of this directory (especially under Windows)?
Thanks to James Schek I now know the answer (change the path in netbeans.conf) but that leads me to another question:
Is there a way to include the current username in the path to the netbeans setting directory?
I want to do something like this:
netbeans_default_userdir="D:\etc\${USERNAME}\.netbeans\6.5beta"
but I can't figure out the name of the variable to use (if there's any).
Of course I can achieve the same thing with the --userdir option, I'm just curious.
A: "HOME" is the only variable supported by the IDE. When deploying a custom application using the Netbeans Platform, "APPNAME" is also supported out of the box.
A: For someone who lands up here hunting for an answer:
If you are trying to set up a portable version on Windows, NetBeans 7.2 and up won't start if the userdir is at the same level or lower than the NetBeans root.
So if you have:
c:\Portable\Netbeans you can NOT do netbeans_default_userdir="c:\Portable\Netbeans\userdir\8.0"
Use a folder OUTSIDE netbeans installation e.g.
netbeans_default_userdir="c:\Portable\NetbeansUserDir\8.0"
For the cache directory it does not matter.
Tested in Windows 8.1 and 7.
A: Yes, edit the netbeans.conf file under %NETBEANS_HOME%\etc.
Edit the line with:
netbeans_default_userdir="${HOME}/.netbeans/6.0"
If you need different "profiles"--i.e. want to run different copies of Netbeans with different home directories, you can pass a new home directory to the launcher. Run "netbeans.exe --userdir /path/to/dir" or "nb.exe --userdir /path/to/dir"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Winforms for Mono on Mac, Linux and PC (Redux) (I asked this question in another way, and got some interesting responses but I'm not too convinced.)
Is Mono's GtkSharp truly cross-platform? It seems to be Gnome based... how can that work with PC and Mac?
Can someone give me examples of a working Mac/PC/Linux app that is written with a single codebase in Microsoft .Net?
A: Realize this is now an old question, but Banshee fits the bill for being a cross-platform application that uses GTK#. It runs on Mac, Linux and Windows.
http://banshee.fm/download/
A: The best example of a Gtk# app that runs on both Windows and Linux may be Medsphere's OpenVista. Granted, its not an app that many people need to run, but it is a very professional, polished, open-source Gtk# application. It shows how a professional Gtk# app can be written.
http://medsphere.org/community/project/openvista-cis
A: Plastic SCM is supported on Windows, Linux, Solaris, and Mac OS X. The link includes screenshots on Windows and Linux.
A: Gtk# is cross platform. However the only platform where it looks nice is Linux/BSD running GNOME. If possible somehow, separate frontend and backend and develop separate user interfaces for Linux, Windows and OS X. Even wx, which does a really good job in looking okay on all three platforms, has its limits.
Working Mac/PC/Linux app in Gtk#? Tomboy runs on all three I think.
A: It would be more correct to say that GNOME is GTK-based than it is to say that GTK is GNOME based. GTK is a toolkit that GNOME sits on top of, and you can get GTK for several platforms, including Windows. That's how GIMP works on Windows: you install GTK first.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Cleaning a string of punctuation in C++ Ok so before I even ask my question I want to make one thing clear. I am currently a student at NIU for Computer Science and this does relate to one of my assignments for a class there. So if anyone has a problem read no further and just go on about your business.
Now for anyone who is willing to help, here's the situation. For my current assignment we have to read a file that is just a block of text. For each word in the file we are to clear any punctuation in the word (ex: "can't" would end up as "can" and "that--to" would end up as "that", obviously without the quotes; quotes were used just to specify what the example was).
The problem I've run into is that I can clean the string fine and then insert it into the map that we are using but for some reason with the code I have written it is allowing an empty string to be inserted into the map. Now I've tried everything that I can come up with to stop this from happening and the only thing I've come up with is to use the erase method within the map structure itself.
So what I am looking for is two things: a) any suggestions about how I could fix this without simply erasing it, and b) any improvements that I could make on the code I have already written.
Here are the functions I have written to read in from the file and then the one that cleans it.
Note: the function that reads in from the file calls the clean_entry function to get rid of punctuation before anything is inserted into the map.
Edit: Thank you Chris. Numbers are allowed :). If anyone has any improvements to the code I've written or any criticisms of something I did, I'll listen. At school we really don't get feedback on the correct, proper, or most efficient way to do things.
int get_words(map<string, int>& mapz)
{
int cnt = 0; //set out counter to zero
map<string, int>::const_iterator mapzIter;
ifstream input; //declare instream
input.open( "prog2.d" ); //open instream
assert( input ); //assure it is open
string s; //temp strings to read into
string not_s;
input >> s;
while(!input.eof()) //read in until EOF
{
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() == 0)
{
input >> s;
clean_entry(s, not_s);
}
mapz[not_s]++; //increment occurence
input >>s;
}
input.close(); //close instream
for(mapzIter = mapz.begin(); mapzIter != mapz.end(); mapzIter++)
cnt = cnt + mapzIter->second;
return cnt; //return number of words in instream
}
void clean_entry(const string& non_clean, string& clean)
{
int i, j, begin, end;
for(i = 0; isalnum(non_clean[i]) == 0 && non_clean[i] != '\0'; i++);
begin = i;
if(begin ==(int)non_clean.length())
return;
for(j = begin; isalnum(non_clean[j]) != 0 && non_clean[j] != '\0'; j++);
end = j;
clean = non_clean.substr(begin, (end-begin));
for(i = 0; i < (int)clean.size(); i++)
clean[i] = tolower(clean[i]);
}
A: The problem with empty entries is in your while loop. If you get an empty string, you clean the next one, and add it without checking. Try changing:
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() == 0)
{
input >> s;
clean_entry(s, not_s);
}
mapz[not_s]++; //increment occurence
input >>s;
to
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() > 0)
{
mapz[not_s]++; //increment occurence
}
input >>s;
EDIT: I notice you are checking if the characters are alphanumeric. If numbers are not allowed, you may need to revisit that area as well.
A: Further improvements would be to
*
*declare variables only when you use them, and in the innermost scope
*use c++-style casts instead of the c-style (int) casts
*use empty() instead of length() == 0 comparisons
*use the prefix increment operator for the iterators (i.e. ++mapzIter)
A: A blank string is a valid instance of the string class, so there's nothing special about adding it into the map. What you could do is first check if it's empty, and only increment in that case:
if (!not_s.empty())
mapz[not_s]++;
Style-wise, there's a few things I'd change, one would be to return clean from clean_entry instead of modifying it:
string not_s = clean_entry(s);
...
string clean_entry(const string &non_clean)
{
string clean;
... // as before
if(begin ==(int)non_clean.length())
return clean;
... // as before
return clean;
}
This makes it clearer what the function is doing (taking a string, and returning something based on that string).
A: The function 'getWords' is doing a lot of distinct actions that could be split out into other functions. There's a good chance that by splitting it up into it's individual parts, you would have found the bug yourself.
From the basic structure, I think you could split the code into (at least):
*
*getNextWord: Return the next (non blank) word from the stream (returns false if none left)
*clean_entry: What you have now
*getNextCleanWord: Calls getNextWord, and if 'true' calls clean_entry. Returns 'false' if no words left.
The signatures of 'getNextWord' and 'getNextCleanWord' might look something like:
bool getNextWord (std::ifstream & input, std::string & str);
bool getNextCleanWord (std::ifstream & input, std::string & str);
The idea is that each function does a smaller more distinct part of the problem. For example, 'getNextWord' does nothing but get the next non blank word (if there is one). This smaller piece therefore becomes an easier part of the problem to solve and debug if necessary.
The main component of 'getWords' then can be simplified down to:
std::string nextCleanWord;
while (getNextCleanWord (input, nextCleanWord))
{
++map[nextCleanWord];
}
An important aspect to development, IMHO, is to try to Divide and Conquer the problem. Split it up into the individual tasks that need to take place. These sub-tasks will be easier to complete and should also be easier to maintain.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Should I return a strongly typed dataset from a webservice? Should I expose a strongly typed dataset from a webservice and bind it directly in a client? or are there more sensible ways for asmx web services? I am doing CRUD operations (Create, Read, Update, Delete).
I find working with datasets frustrating and difficult, for example when inserting into a table within one. It doesn't seem logical to ship a whole dataset back and forth when only inserting one record or when only getting one record from a particular table within the dataset.
Is there a better way?
Should I perhaps be converting to objects and use objects over the webservice? Doing conversions all over the place to get objects passed around is perhaps just as tedious?
A: It depends on your interoperability requirements. Although it's entirely possible to process the DataSet XMLs from practically any environment, it can get unwieldy. If you're not interoperating I'd definitely recommend the typed dataset route because it's insanely simple to use from C# and "just works".
A: I would say opt for objects, DataSet's can get kinda messy. Objects can be a lot cleaner to look at, and of course debug.
Be careful when working with abstract types though as they can be a bit of a pain to serialize if you have collections based on an abstract class/interface. I had problems with this in the past, however, I found a solution.
A: Note that the DataSet is specific to .NET. If you want to make your API interoperable, you should stick to elementary datatypes and constructs (otherwise, the situation is likely to be cumbersome for the non-.NET developers).
Then, web services aren't designed to pass large objects around in a single trip. If your dataset contains more than a few hundred KB, you are likely to end up with client-side or server-side HTTP timeouts (considering default settings).
For CRUD operations, I would simply suggest to expose each operation directly through the WS.
A: I have had great success with DataSets (the server uses and returns a strongly-typed dataset, while the client consumes it as a standard dataset). Like Tomer warns, I have the benefit of no interoperability concerns.
With regards to updating, sending the entire dataset is a bad idea. There is a method on both the DataSet and DataTable objects called GetChanges() that will return all the edits since AcceptChanges() was called. This should help you keep your network traffic down.
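A hedged sketch of that pattern (the service method name is hypothetical):
// Send only the modified rows, not the whole dataset.
DataSet changes = myDataSet.GetChanges();
if (changes != null)  // GetChanges() returns null when nothing has changed
{
    myService.UpdateOrders(changes);  // hypothetical web service call
    myDataSet.AcceptChanges();        // mark the local copy as clean
}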
A: some links related to this topic
scott hanselman - Returning DataSets from WebServices is the Spawn of Satan and Represents All That Is Truly Evil in the World
rockford lhotka - Thoughts on passing DataSet objects via web services
4guysfromrolla - More On Why I Don't Use DataSets in My ASP.NET Applications
A: I agree with Joannes... stick with objects and specific methods for the types of operations you want to expose.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Fastest way to see how many bytes are equal between fixed length arrays I have 2 arrays of 16 elements (chars) that I need to "compare" and see how many elements are equal between the two.
This routine is going to be used millions of times (a usual run is about 60 or 70 million times), so I need it to be as fast as possible. I'm working on C++ (C++Builder 2007, for the record)
Right now, I have a simple:
matches += array1[0] == array2[0];
repeated 16 times (profiling shows it to be 30% faster than doing it with a for loop)
Is there any other way that could work faster?
Some data about the environment and the data itself:
*
*I'm using C++Builder, which doesn't have any speed optimizations to take into account. I will try eventually with another compiler, but right now I'm stuck with this one.
*The data will be different most of the times. 100% equal data is usually very very rare (maybe less than 1%)
A: The key is to do the comparisons using the largest register your CPU supports, then fallback to bytes if necessary.
The below code demonstrates with using 4-byte integers, but if you are running on a SIMD architecture (any modern Intel or AMD chip) you could compare both arrays in one instruction before falling back to an integer-based loop. Most compilers these days have intrinsic support for 128-bit types so will NOT require ASM.
(Note that for the SIMD comparisons your arrays would have to be 16-byte aligned, and some processors (e.g. MIPS) would require the arrays to be 4-byte aligned for the int-based comparisons.)
E.g.
int* array1 = (int*)byteArray[0];
int* array2 = (int*)byteArray[1];
int same = 0;
for (int i = 0; i < 4; i++)
{
// test as an int
if (array1[i] == array2[i])
{
same += 4;
}
else
{
// test individual bytes
char* bytes1 = (char*)(array1+i);
char* bytes2 = (char*)(array2+i);
for (int j = 0; j < 4; j++)
{
same += (bytes1[j] == bytes2[j]);
}
}
}
I can't remember what exactly the MSVC compiler supports for SIMD, but you could do something like;
// depending on compiler you may have to insert the words via an intrinsic
__m128 qw1 = *(__m128*)byteArray[0];
__m128 qw2 = *(__m128*)byteArray[1];
// again, depending on the compiler the comparison may have to be done via an intrinsic
if (qw1 == qw2)
{
same = 16;
}
else
{
// do int/byte testing
}
A: If you have the ability to control the location of the arrays, putting one right after the other in memory for instance, it might cause them to be loaded to the CPU's cache on the first access.
It depends on the CPU and its cache structure and will vary from one machine to another.
You can read about memory hierarchy and cache in Hennessy & Patterson's Computer Architecture: A Quantitative Approach
A: If you need absolute lowest footprint, I'd go with assembly code. I haven't done this in a while but I'll bet MMX (or more likely SSE2/3) have instructions that can enable you to do exactly that in very few instructions.
A: If matches are the common case then try loading the values as 32 bit ints instead of 16 so you can compare 2 in one go (and count it as 2 matches).
If the two 32 bit values are not the same then you will have to test them separately (AND out the top and bottom 16 bit values).
The code will be more complex, but should be faster.
If you are targeting a 64-bit system you could do the same trick with 64 bit ints, and if you really want to push the limit then look at dropping into assembler and using the various vector based instructions which would let you work with 128 bits at once.
A: UPDATE: This answer has been modified to make my comments match the source code provided below.
There is an optimization available if you have the capability to use SSE2 and popcnt instructions.
16 bytes happens to fit nicely in an SSE register. Using c++ and assembly/intrinsics, load the two 16 byte arrays into xmm registers, and cmp them. This generates a bitmask representing the true/false condition of the compare. You then use a movmsk instruction to load a bit representation of the bitmask into an x86 register; this then becomes a bit field where you can count all the 1's to determine how many true values you had. A hardware popcnt instruction can be a fast way to count all the 1's in a register.
This requires knowledge of assembly/intrinsics and SSE in particular. You should be able to find web resources for both.
If you run this code on a machine that does not support either SSE2 or popcnt, you must then iterate through the arrays and count the differences with your unrolled loop approach.
Good luck
Edit:
Since you indicated you did not know assembly, here's some sample code to illustrate my answer:
#include "stdafx.h"
#include <iostream>
#include "intrin.h"
inline unsigned cmpArray16( char (&arr1)[16], char (&arr2)[16] )
{
__m128i first = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr1 ) );
__m128i second = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr2 ) );
return _mm_movemask_epi8( _mm_cmpeq_epi8( first, second ) );
}
int _tmain( int argc, _TCHAR* argv[] )
{
unsigned count = 0;
char arr1[16] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0 };
char arr2[16] = { 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0 };
count = __popcnt( cmpArray16( arr1, arr2 ) );
std::cout << "The number of equivalent bytes = " << count << std::endl;
return 0;
}
Some notes: This function uses SSE2 instructions and a popcnt instruction introduced in the Phenom processor (that's the machine that I use). I believe the most recent Intel processors with SSE4 also have popcnt. This function does not check for instruction support with CPUID; the function is undefined if used on a processor that does not have SSE2 or popcnt (you will probably get an invalid opcode instruction). That detection code is a topic for a separate thread.
I have not timed this code; the reason I think it's faster is because it compares 16 bytes at a time, branchless. You should modify this to fit your environment, and time it yourself to see if it works for you. I wrote and tested this on VS2008 SP1.
SSE prefers data that is aligned on a natural 16-byte boundary; if you can guarantee that then you should get additional speed improvements, and you can change the _mm_loadu_si128 instructions to _mm_load_si128, which requires alignment.
A: Magical compiler options will vary the time greatly. In particular making it generate SSE vectorization will likely get you a huge speedup.
A: Does this have to be platform independent, or will this code always run on the same type of CPU? If you restrict yourself to modern x86 CPUs, you may be able to use MMX instructions, which should allow you to operate on an array of 8 bytes in one clock tick. AFAIK, gcc allows you to embed assembly in your C code, and Intel's compiler (icc) supports intrinsics, which are wrappers that allow you to call specific assembly instructions directly. Other SIMD instruction sets, such as SSE, may also be useful for this.
A: Is there any connection between the values in the arrays? Are some bytes more likely to be the same then others? Might there be some intrinsic order in the values? Then you could optimize for the most probable case.
A: If you explain what the data actually represents then there might be a totally different way to represent the data in memory that would make this type of brute force compare unnecessary. Care to elaborate on what the data actually represents??
A: Is it faster as one statement?
matches += (array1[0] == array2[0]) + (array1[1] == array2[1]) + ...;
A: If writing that 16 times is faster than a simple loop, then your compiler either sucks or you don't have optimization turned on.
Short answer: there's no faster way, unless you do vector operations on parallel hardware.
A: Try using pointers instead of arrays:
p1 = &array1[0];
p2 = &array2[0];
match += (*p1++ == *p2++);
// copy 15 times.
Of course you must measure this against other approaches to see which is fastest.
And are you sure that this routine is a bottleneck in your processing? Do you actually speed up the performance of your application as a whole by optimizing this? Again, only measurement will tell.
A: Is there any way you can modify the way the arrays are stored? Comparing 1 byte at a time is extremely slow considering you are probably using a 32-bit compiler. Instead if you stored your 16 bytes in 4 integers (32-bit) or 2 longs (64-bit), you would only need to perform 4 or 2 comparisons respectively.
The question to ask yourself is how much is the cost of storing the data as 4-integer or 2-long arrays. How often do you need to access the data, etc.
A: There's always the good old x86 REPNE CMPS instruction.
A: One extra possible optimization: if you are expecting that most of the time the arrays are identical then it might be slightly faster to do a memcmp() as the first step, setting '16' as the answer if the test returns true. Of course if you are not expecting the arrays to be identical very often, that would only slow things down.
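A small sketch of combining that memcmp() fast path with the per-byte fallback:
#include <cstring>

int count_matches(const char* array1, const char* array2)
{
    if (std::memcmp(array1, array2, 16) == 0)
        return 16;  // identical arrays: skip the per-byte work
    int matches = 0;
    for (int i = 0; i < 16; ++i)
        matches += (array1[i] == array2[i]);  // branchless per-byte count
    return matches;
}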
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Mapping to Dictionary with iBATIS Given a simple statement, such as:
<statement id="SelectProducts" resultMap="???">
SELECT * FROM Products
</statement>
Is it possible to get a list of dictionary objects where the keys are the column names?
ie.
var list = Mapper.QueryForList<IDictionary<string,string>>("SelectProducts", null);
IDictionary<string, string> dict = list[0];
// dict["id"] == "1"
// dict["name"] == "Some Product Name"
// dict["price"] == "$9.99"
// etc.
I'd like to generalize the result of a query to handle any number of columns/column names without mapping to specific properties on some class.
I realize the example here would fail since a result set may have duplicate (or null) column names. I've thought about a result class that holds an indexed list of key-value pairs. The key thing here is retaining the column information somewhere.
A: You can do this by setting the class attribute to HashTable in the resultMap configuration. More details available here.
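As a rough sketch of the idea (treat the exact alias accepted for the result class as an assumption to verify against your iBATIS version's documentation):
<statement id="SelectProducts" resultClass="Hashtable">
  SELECT * FROM Products
</statement>

// Each row then comes back keyed by column name:
IList list = Mapper.QueryForList("SelectProducts", null);
System.Collections.Hashtable row = (System.Collections.Hashtable)list[0];
// row["name"], row["price"], etc.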
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Python regular expression to split paragraphs How would one write a regular expression to use in Python to split paragraphs?
A paragraph is defined by two line breaks (\n). But one can have any amount of spaces/tabs together with the line breaks, and it still should be considered as a paragraph.
I am using Python, so the solution can use Python's regular expression syntax which is extended. (can make use of (?P...) stuff)
Examples:
the_str = 'paragraph1\n\nparagraph2'
# Splitting should yield ['paragraph1', 'paragraph2']
the_str = 'p1\n\t\np2\t\n\tstill p2\t \n \n\tp3'
# Should yield ['p1', 'p2\t\n\tstill p2', 'p3']
the_str = 'p1\n\n\n\tp2'
# Should yield ['p1', '\n\tp2']
The best I could come up with is: r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', i.e.
import re
paragraphs = re.split(r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', the_str)
But that is ugly. Is there anything better?
Suggestions rejected:
r'\s*?\n\s*?\n\s*?' -> That would make example 2 and 3 fail, since \s includes \n, so it would allow paragraph breaks with more than 2 \ns.
A: Unfortunately there's no nice way to write "space but not a newline".
I think the best you can do is add some space with the x modifier and try to factor out the ugliness a bit, but that's questionable: (?x) (?: [ \t\r\f\v]*? \n ){2} [ \t\r\f\v]*?
You could also try creating a subrule just for the character class and interpolating it three times.
A: It is not a regexp, but it is really elegant:
from itertools import groupby
def paragraph(lines):
for group_separator, line_iteration in groupby(lines.splitlines(True), key = str.isspace):
if not group_separator:
yield ''.join(line_iteration)
for p in paragraph('p1\n\t\np2\t\n\tstill p2\t \n \n\tp3'):
print repr(p)
'p1\n'
'p2\t\n\tstill p2\t \n'
'\tp3'
It's up to you to strip the output as you need it of course.
It was inspired by the famous "Python Cookbook" ;-)
A: You may be trying to deduce the structure of a document in plain text, and doing what docutils does.
You might be able to simply use the Docutils parser rather than roll your own.
A: Almost the same, but using non-greedy quantifiers and taking advantage of the whitespace sequence.
\s*?\n\s*?\n\s*?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to get a quick status from the Emacs compilation buffer? By default, emacs 22.1.1 only shows the top of the compilation buffer when you first issue the compile command. I would like it to scroll to the bottom automatically when I use the compile command in order to save keystrokes. This way I can easily get a status of the current compilation by just looking at the compile buffer and seeing which files are currently being compiled instead of having to switch windows and scroll to the bottom of the buffer. Any ideas?
A:
(setq compilation-scroll-output t)
or
M-x set-variable compilation-scroll-output t RET
Also, if you get used to using next-error and previous-error before your compilation finishes, you will start to see why the default behavior is desirable.
A: I think the best option is to stop on the first error
(setq compilation-scroll-output 'first-error)
With this configuration, Emacs scrolls compilation mode until the first error happens. This allows you to use next-error and previous-error before compilation finishes.
If there aren't any errors, it scrolls until the end and you can thus easily see that compilation was successful.
A: From Info > emacs > Compilation:
If you set the variable compilation-scroll-output to a non-nil
value, then the compilation buffer always scrolls to follow output as
it comes in.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: What Oracle privileges do I need to use DBMS_METADATA.GET_DDL? (Excuse any ignorance of mine here - I'm not a seasoned Oracle user.)
I'm attempting to use the DBMS_METADATA.GET_DDL function (in conjunction with ALL_OBJECTS or some such) to get the DDL for all of the tables in a particular schema. When I do this (either for all objects or for a single specific object) I get an ORA-31603 error ("object "FOO" of type TABLE not found in schema "SCHEMA").
I assume this means that the user I'm logged in with doesn't have whatever privilege is necessary to read the metadata needed for GET_DDL. What privilege is this that's needed? Is there a way when logged in to confirm that the current user does/does not have this privilege?
thanks!
Lee
A: Read this document, but basically, you need SELECT_CATALOG_ROLE
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_metada.htm#i1016867
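For example (the username is hypothetical, and the grant must be run by a suitably privileged user such as a DBA):
-- As a DBA:
GRANT SELECT_CATALOG_ROLE TO scott;

-- Logged in as that user, confirm the role is active:
SELECT * FROM session_roles;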
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: String vs string In C# there are String objects and string objects.
What is the difference between the two?
What are the best practices regarding which to use?
A: One is System.String the .Net Type and one is specific to C# which turns out to be an alias back to System.String.
http://msdn.microsoft.com/en-us/library/362314fe(VS.71).aspx
A: There is no difference. string (lower case) is just an alias for System.String.
A: There is not a difference. string is an alias that the compiler converts to System.String.
In fact, it's even aliased in MSIL:
.method private hidebysig static void Main(string[] args) cil managed
A: There is no difference between them. string is just an alias for System.String. When compiled they both are compiled to System.String object.
A: The lower case version is just an alias to the actual class String. There is no real difference as far as IL generated.
A: There is no difference. string is a C# language keyword which refers to the class System.String, just like int is a keyword which refers to System.Int32.
A: In the future, try compiling an app that uses both and then use Reflector (change the language to IL) to view the compiled output. You'll see there's no difference.
A: No difference. System.String is strictly identical to string. Common C# coding guidelines indicates that you should use the keyword string.
A: They are aliases and are interchangeable. However, stylistically, for declarations, I use the lowercased string, and for the static methods, I use String.
string foo = "bar";
if( foo != String.Empty )
{
Console.WriteLine(String.Format("foo.Length = {0}", foo.Length));
}
A: There is no difference because string is converted to System.String by the compiler. Same with all of the common types (int goes to System.Int32, etc). We use the simple name so they stand out.
A: Considering that an "int" is a different size in some languages depending on whether the system is 16-bit or 32-bit, a "string" could in the future evolve to not be the same as System.String.
But for now it is.
A: Just a side note:
string/String is not the only such pair of aliases:
e.g. int is an alias for System.Int32.
@mliesen: it doesn't happen in C#; it's not like C. This is because C# doesn't compile straight to a native executable but to pre-compiled intermediate code, as Java does.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: How to restrict NULL input parameters into oracle stored procedure I have written some Oracle stored procedures that take more than 20 input parameters, of which more than 10 are required. I want all of them to have a value and do not want to accept null values for them. Is there anything that I can declare in the procedure definition itself to restrict null input parameters, or will I have to check each value and raise an exception if a required value is null?
A: In PL/SQL I don't know of a way around checking each one.
If you are calling the stored procedure from an external library, that library might have that functionality. This is probably not likely because frequently NULL input parameters are required.
You could make a helper PL/SQL procedure that, given a value, will raise an exception if it is null to save on redundant code. You could then write a chunk of perl/python/groovy that would slurp up your procedure declaration and crank out these calls to your null check procedure.
A: I know this is an old question, but there is another option (described here):
SUBTYPE varchar2_not_null IS VARCHAR2 NOT NULL;
You can define this type (and number_not_null, etc) either in the same package as your stored procedures, or in their own package if you want to use them in lots of places. You can then declare parameters of being these types.
If NULL gets passed as an argument, you'll get a very useful error message:
cannot pass NULL to a NOT NULL constrained formal parameter
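A minimal sketch of putting that to work (all names here are hypothetical):
CREATE OR REPLACE PACKAGE my_types AS
  SUBTYPE varchar2_not_null IS VARCHAR2 NOT NULL;
  SUBTYPE number_not_null   IS NUMBER NOT NULL;
END my_types;
/

CREATE OR REPLACE PROCEDURE add_customer(
  p_id   IN my_types.number_not_null,
  p_name IN my_types.varchar2_not_null
) AS
BEGIN
  INSERT INTO customers (id, name) VALUES (p_id, p_name);
END add_customer;
/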
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is there a way to dump the objects in memory from a running ruby process? Killing the processs while obtaining this information would be fine.
A: A quick-and-dirty way would be ObjectSpace.each_object{|e| p e}. You could do some tests to determine what you wanted to keep, or Marshal the objects.
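For instance, a slightly more digestible variant that tallies live objects by class (a sketch; the output will vary by process):
counts = Hash.new(0)
ObjectSpace.each_object { |o| counts[o.class] += 1 }
counts.sort_by { |_, n| -n }.first(10).each do |klass, n|
  puts "#{klass}: #{n}"
end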
A: For 1.9.2/1.9.3 there's the heap_dump gem; it can be injected into a running process using gdb (but a more stable way is to include it in the process itself; there is no performance overhead)
It dumps references to objects, not objects themselves, but this is usable if you're into fighting leaks
A: For the more hardcore there is also BleakHouse, which gives you a special custom-compiled copy of Ruby with better memory leak tracking powers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What browser is best for testing web standards? When I build a site, I'd like to have at least one browser I can show it off in without any hacks or workarounds, and yet still retain maximum functionality. Knowing that none of the browsers have perfect standards adherence, which one comes closest?
Also, are there any standards areas in which a single browser seems to specialize? Opera, for instance, seems to be bent on adhering to all voice-related CSS standards.
A: Safari using the latest WebKit nightly build.
Not that any browser in the world uses this yet (not even Chrome) but if all you're worried about is standards then that's your best bet - it passes Acid3, something no browser on the market can do yet.
A: This is an excellent question, but I find it hard to give a single answer. Traditionally, Opera has been the most standards compliant. For a long time, it was the ONLY browser to pass the ACID2 test in fact. FireFox and IE haven't been able to claim that (although supposedly IE8 is supposed to fix that, and FF is working on it all the time).
That having been said however, bear mind that IE has the largest "market share" of all the browsers right now (businesses have ties to MS, and Windows always comes with IE out of the box) followed closely by FireFox. So if your goal is to show off your app in a browser that most people will be using, it'll have to be one of those.
Purists will tell you that FF is more standards compliant than IE7 (and they are right), so that you should design for that and not IE. I can tell from many years as a designer/developer that pages taking that approach may not be a great idea. Bear in mind again - IE has the market share, and usually where it counts. So if it looks great in FF but breaks in IE, most users will be very upset, and the same vice-versa.
Best compromise - concentrate on those two. Tweak it to look right in at least FF AND IE, and now you've covered 90%+ of the people that will be using your website.
Don't get me wrong here - I'm not trying to dismiss the users of Opera, Safari, or any other browser. But if you want the most results for the least amount of work, then there ya go.
Best answer - take your time, do it right, test ALL the major browsers. The time spent working through these browser headaches ahead of time (when you can do it at your own pace) will be well rewarded. Compare that to the screaming client who wants to know why your page breaks in his favorite browser, and wants it fixed today. :)
A: "When I build a site, I'd like to have at least one browser I can show it off in without any hacks or workarounds, and yet still retain maximum functionality."
If you are testing your site, you would be better served to choose target browsers based on your users' needs.
Unless you are in a position where you can force your users to change to a particular browser, you need to test your site in whatever browser(s) they use.
A: Opera comes closest to standards compliance.
A: I use Firefox with IE tab and chrome. Firefox with IE tab because those are the two browsers with the most market share and chrome because it is one of the few windows browsers that use webkit, meaning it should display similarly to safari.
A: The way most people I know work is to run Firefox (with Firebug) and develop in that. Firebug is an invaluable tool for debugging. They will usually take what they get there and try to squeak it into IE and other browsers. Not exactly the answer to your question (it's not 100% compliant but it's close), but hopefully it helps.
A:
The way most people I know work is to
run Firefox(with Firebug) and develop
in that. Firebug is an invaluable tool
for debugging. They will usually take
what they get there and try to squeak
it into IE and other browsers. Not
exactly the answer to your question
(it's not 100% compliant but it's
close), but hopefully it helps.
+1 - I prefer firefox for most browsing as I find it most stable and easiest to use, but when it comes to web development it takes the cake when combined with firebug! The others are good for testing, but I find that firefox+firebug makes everything so much easier to use
A: Try Google Chrome it uses the WebKit which is feature complete for the latest standards.
A: I used to use Opera and Firefox as my "Standards Compliant" browsers, but I've recently become much enamored with Safari. The WebKit under the hood is really solid, the javascript is really fast (or fast enough, anyway), and and the CSS support is pretty excellent. It's not perfect, but does a really solid job of doing what it's supposed to do.
Also, it has the greatest print-screen feature I've ever seen in a browser. Seriously. Whereas every other browser seems to really munge up any screen I try to print, Safari always seems to get it right. A minor thing, but worth major points in my book.
Edited to add: Chrome is coming close to taking Safari's place, but that print screen thing is still the deciding factor. Being able to just hit "print" during a meeting with a client and not have to fiddle with any settings to get it working right is a major relief.
A: Don't just use a standards compliant browser, use other tools such as the w3c validator.
A: I find that if you stick to standards compliant markup, and don't use anything too unusual, it will render fine in Firefox 2/3 and Safari/Chrome (and usually fine in IE7/8). If not, you're probably better off trying to simplify your markup, rather than introducing hacks and workarounds.
A: Using one browser for all your testing and then hacking it for others is how most people do it but that often leads to problems. Even though Safari may be a more standards compliant browser it is not a highly popular browser and I'd lean more toward using the browser your customer's are using most while writing 100% standard code and then adjusting for idiosyncrasies of other browsers. It's not ideal but it is more customer centered thinking.
I've seen web development companies with multiple full time employees that had bugs on their homepage because they did exactly what you seem to be thinking is your answer. They developed the webpage on a Mac and tested exclusively with Safari. The result was that over 60% of potential customers may have seen layout bugs if they were browsing on Windows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Include only certain libraries on an operating system When writing an app that one wants to have compile on mac, linux and windows, what is the best way of managing the different libraries that will need to be included on the various operating systems. For example, using the glut opengl toolkit requires different includes on each operating system.
A: Your question is actually two questions in one:
1) How do I write my C++ code to include the right include files on the right platform?
2) How do I write my Makefile to work on different platforms?
The C++ code question is already answered - find the platform-specific defines and use them to figure out what platform you're on.
Automake or scons are quite complex, and are worth your time only if you intend to release your code to a wide audience. In the case of in-house code, a "generic" makefile with per-platform includes is usually sufficient. For Windows, you can get GNU Make for Windows (available from here), or use nmake and limit yourself to the subset of syntax common between all platforms.
A: If you just need to worry about header files, then the preprocessor will do everything you need. If you want to handle differing source files, and possibly different libraries you'll need a tool to handle it.
Some options include:
*
*The Autotools
*Scons
*CMake
My personal favorite is CMake. The Autotools uses a multi-stage process that's relatively easy to break, and scons just feels weird to me. Cmake will also generate project files for a variety of IDEs, in addition to makefiles.
A: There is a good article on Macros. One of the answers how to use conditional compilation based on OS/COmpiler (its near the top).
The use of the Autoconfiguration tools is a nice addition on top of this but is not needed for small projects where it may be easier to detect the OS explicitly, though for larger projects that may need to run on many different types of OS you should also explore the Available autoconfiguration tools mentioned by Branan
A: Several projects I've worked on use an autoconf-based configure script which builds a Makefile, hence the reason you can build all of them from source with a simple:
./configure
make
make install
A: Scons has a configuring mechanism that will do a lot of what autotools do without as much complexity, and is pretty darn portable (although not as portable as autotools).
A: The compiler should have a set of preprocessor symbols it will provide that you can use. For example linux for gcc on a Linux system, _WIN32 for VC++. If you need something more complex then look at autoconf, but that works best for Unix based code.
I'd recommend checking out how some of the larger OpenSource projects handle this. See AutoSense.hpp from (an old release of) Apache Xerces.
A: If the libraries offer the same API on the different platforms, I would create a "proxy" include file containing all the necessary #ifdefs. That 'platform-independent' include file is then included in your client code instead of cluttering it with numerous and ugly-reading preprocessor commands. These will be contained in the ugly and cluttered platform-independent include.
If the API differs across platforms, you will need to create your own abstraction.
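Using the GLUT example from the question, such a proxy header might look like this sketch (my_glut.h is a hypothetical name; the Apple framework path is the usual GLUT location on OS X):
/* my_glut.h - platform-independent proxy include */
#ifndef MY_GLUT_H
#define MY_GLUT_H

#if defined(_WIN32)
#  include <GL/glut.h>    /* Windows */
#elif defined(__APPLE__)
#  include <GLUT/glut.h>  /* Mac OS X puts GLUT in a framework */
#else
#  include <GL/glut.h>    /* Linux and most other Unixes */
#endif

#endif /* MY_GLUT_H */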
A: Perhaps this is a cop-out answer, but have you looked at how boost handles this? They build on quite a few platforms without autoconf, although they do have their own build system - bjam - that probably handles some of the same situations. They also do a nice auto-linking trick on windows that automatically selects the right version of libraries for linking depending on the version of the MSVC compiler. Based on your initial description, it sounds like just macro defs checking for various platforms/compilers might do the trick, but perhaps there is more to your problem that would prevent this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Best way to handle error messages I'm wondering what is the best way to handle error messages in my application.
Currently I'm working on a web application where all error messages are stored in the database and we get them by ID; negative IDs are errors, and positive IDs are success messages.
The problem is that this is hard to maintain, and keeping track of the text messages is difficult.
What are your approaches ?
A: Generally speaking, I keep the text of the error messages in resource files. If you're using .NET 2.0 or higher (Visual Studio 2005 or higher), resource files are automatically compiled into strongly-typed classes, making the code which accesses said messages much more clear and readable.
(EDIT: Having seen another comment which mentions localization, I feel honor-bound to mention it here as well: .NET resource files do an excellent job of localization. You can get the localized text of the resource via the exact same code, with just a few minor tweaks to provide CultureInfo.)
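For instance, given a resource file named ErrorMessages.resx with an entry InvalidOrder (both names hypothetical), access looks roughly like:
// Strongly-typed accessor generated by Visual Studio:
string msg = ErrorMessages.InvalidOrder;

// Or an explicit lookup in a specific culture:
string localized = ErrorMessages.ResourceManager.GetString(
    "InvalidOrder", new System.Globalization.CultureInfo("fr-FR"));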
That said, it sounds like a part of this problem domain is a message number. Are these errors being thrown from the database (say, as part of stored procs or triggers)? In that case, then database storage most likely is the right place for them, if only because that documents them most closely to where the "magic numbers" are being used.
A: If you're going to localize them, I would use the English text as the key, and then perform a lookup into your storage of choice (an SQL-based database, file store, or what have you), and then return the properly localized string.
Should the requested key not exist in the store, you could yield a plain-text error, perhaps e-mailing a developer?
For more inspiration, see GNU gettext.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Open an Emacs buffer when a command tries to open an editor in shell-mode I like to use Emacs' shell mode, but it has a few deficiencies. One of those is that it's not smart enough to open a new buffer when a shell command tries to invoke an editor. For example with the environment variable VISUAL set to vim I get the following from svn propedit:
$ svn propedit svn:externals .
"svn-prop.tmp" 2L, 149C[1;1H
~ [4;1H~ [5;1H~ [6;1H~ [7;1H~
...
(It may be hard to tell from the representation, but it's a horrible, ugly mess.)
With VISUAL set to "emacs -nw", I get
$ svn propedit svn:externals .
emacs: Terminal type "dumb" is not powerful enough to run Emacs.
It lacks the ability to position the cursor.
If that is not the actual type of terminal you have,
use the Bourne shell command `TERM=... export TERM' (C-shell:
`setenv TERM ...') to specify the correct type. It may be necessary
to do `unset TERMINFO' (C-shell: `unsetenv TERMINFO') as well.svn: system('emacs -nw svn-prop.tmp') returned 256
(It works with VISUAL set to just emacs, but only from inside an Emacs X window, not inside a terminal session.)
Is there a way to get shell mode to do the right thing here and open up a new buffer on behalf of the command line process?
A: There's emacsclient, gnuserv, and in Emacs 23, multi-tty that are all useful for this. Actually I think in Emacs 23, emacsclient has all of the interesting functionality of gnuserv.
A: You can attach to an Emacs session through emacsclient. First, start the emacs server with
M-x server-start
or add (server-start) to your .emacs. Then,
export VISUAL=emacsclient
Edit away.
Note:
*
*The versions of emacs and emacsclient must agree. If you have multiple versions of Emacs installed, make sure you invoke the version of emacsclient corresponding to the version of Emacs running the server.
*If you start the server in multiple Emacs processes/frames (e.g., because (server-start) is in your .emacs), the buffer will be created in the last frame to start the server.
A: Not entirely true. ansi-term can run an emacs fine (although I usually run mg for commit logs, in the rare event I don't commit from emacs directly). eshell can also run an emacs if you start a screen first and run it from within there.
A: Along with using emacs client/server, I am using this script to invoke emacs.
This will start emacs if it is not running yet, or just open a new emacs buffer in the running emacs (using gnuclient). It runs in the background by default, but can be run in the foreground for processes that expect some input. For example, I am using this as my source control editor, when entering a change list description. I have "SVN_EDITOR=emacs sync", so I can do "svn commit" in an emacs shell, and it will open the svn editor in a new emacs buffer in the same emacs. When I close the buffer, "svn commit" continues. Pretty useful.
#!/bin/sh
if [ -z $EMACS_CMD ]; then
EMACS_CMD="/usr/bin/emacs"
fi
if [ -z $GNUCLIENT_CMD ]; then
GNUCLIENT_CMD="/usr/bin/gnuclient"
fi
if [ "$1" = "sync" ]; then
shift 1
sync=true
else
sync=false
fi
cmd="${EMACS_CMD} $*"
lsof $EMACS_CMD | grep $USER >/dev/null 2>&1
if [ "$?" -ne "1" ]; then
cmd="${GNUCLIENT_CMD} $*"
fi
if [ $sync = "true" ]; then
$cmd
else
$cmd &
fi
A: I wanted to do something similar for merging in an emacs shell via mercurial. Thanks to the posters here, I found the way. Two steps:
*
*add (server-start) to your .emacs file (remember to load-file after your change)
*in your hgrc:
[merge-tools]
emacs.executable = emacsclient
emacs.premerge = False
emacs.args = --eval "(ediff-merge-with-ancestor \"$local\" \"$other\" \"$base\" nil \"$output\")"
A: When I have (start-server) in my .emacs I get this error....
Debugger entered--Lisp error: (void-function start-server)
(start-server)
eval-buffer(#<buffer *load*> nil "/Users/jarrold/.emacs" nil t) ; Reading at buffer position 22768
load-with-code-conversion("/Users/jarrold/.emacs" "/Users/jarrold/.emacs" t t)
load("~/.emacs" t t)
#[nil "^H\205\276^@ \306=\203^Q^@\307^H\310Q\202A^@ \311=\2033^@\312\307\313\314#\203#^@\315\202A^@\312\307\$
command-line()
normal-top-level()
....I am using GNU Emacs 22.1.1
And this is the version of Mac OS-X I am using:
shandemo 511 $ uname -a Darwin Facilitys-MacBook-Pro.local 10.8.0
Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011;
root:xnu-1504.15.3~1/RELEASE_I386 i386
Note that m-x ansi-term appears to allow me to successfully hg commit inside of its shell. However, that shell does not let me scroll through the buffer with e.g. c-p or c-n, so I would prefer to use m-x shell.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: How do you reset ASP.Net AJAX cascading dropdown control (client side) The cascading dropdown control works great except that I am unable to figure out a way to reset the dropdown client-side (in Javascript)
My set up is something like this
DD1
DD2
DD3
DD4
each DD is dependent on the previous DD and uses webservice to load them.
On change of DD3 I need to reset DD4 but the previous selection stays.
Can this be done? I tried clearing the value in the supporting hidden input control (cddTest_ClientState) in vain
TIA
A: Here is the solution
<asp:DropDownList ID="dd1" runat="server" onChange="ondd1ChangeHandler(this)">
</asp:DropDownList>
<asp:DropDownList ID="dd2" runat="server">
</asp:DropDownList>
<cc1:CascadingDropDown ID="cdd2" runat="server" Category="Cat1"
ParentControlID="dd1" PromptText="(Select Option)" ServiceMethod="GetOptions"
ServicePath="Services/GetOptions.asmx" TargetControlID="dd2">
</cc1:CascadingDropDown>
<script type='text/javascript'>
function ondd1ChangeHandler(dd){
var dd2=$get('dd2');
dd2.selectedIndex=0;
var cdd=$find('cdd2');
if(cdd!=null){
cdd.set_SelectedValue('','');
cdd._onParentChange(null,false);
}
}
</script>
Hope this helps
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Denormalize an XML schema programmatically I need to take any given valid XML schema (XSD) and denormalize it to a simple form containing no refs, no includes, etc. All simple type definitions should be inline, such that when looking at any given element, all declarations are visible without performing another lookup.
I've found some tools that have this built-in, but I need to do it "on the fly." Platform of choice is Java, but I'd be willing to port the code from another language if necessary. I just really don't want to reinvent the wheel here. Searching for OSS libraries from Apache/etc have yielded nothing. The closest I've found is XSOM which supports traversing a schema as an object model, but you still have to handle every possible form that a schema could take to represent a given structure.
The output doesn't have to be actual XML, as it will actually be used in a object model in its final form.
A: You might find XSD4J helpful:
http://dynvocation.selfip.net/xsd4j/
A: The EMF XSD model may be helpful:
http://www.eclipse.org/modeling/mdt/?project=xsd
A: Another useful API for XML Schema is XSOM.
XSOM is used under the hood by XJC, the JAXB schema compiler, so it is probably guaranteed to be kept alive.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: java get file size efficiently While googling, I see that using java.io.File#length() can be slow.
FileChannel has a size() method that is available as well.
Is there an efficient way in java to get the file size?
A: When I modify your code to use a file accessed by an absolute path instead of a resource, I get a different result (for 1 run, 1 iteration, and a 100,000 byte file -- times for a 10 byte file are identical to 100,000 bytes)
LENGTH sum: 33, per Iteration: 33.0
CHANNEL sum: 3626, per Iteration: 3626.0
URL sum: 294, per Iteration: 294.0
A: In response to rgrig's benchmark, the time taken to open/close the FileChannel & RandomAccessFile instances also needs to be taken into account, as these classes will open a stream for reading the file.
After modifying the benchmark, I got these results for 1 iteration on an 85MB file:
file totalTime: 48000 (48 us)
raf totalTime: 261000 (261 us)
channel totalTime: 7020000 (7 ms)
For 10000 iterations on same file:
file totalTime: 80074000 (80 ms)
raf totalTime: 295417000 (295 ms)
channel totalTime: 368239000 (368 ms)
If all you need is the file size, file.length() is the fastest way to do it. If you plan to use the file for other purposes like reading/writing, then RAF seems to be a better bet. Just don't forget to close the file connection :-)
import java.io.File;
import java.io.FileInputStream;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.util.HashMap;
import java.util.Map;
public class FileSizeBench
{
public static void main(String[] args) throws Exception
{
int iterations = 1;
String fileEntry = args[0];
Map<String, Long> times = new HashMap<String, Long>();
times.put("file", 0L);
times.put("channel", 0L);
times.put("raf", 0L);
long fileSize;
long start;
long end;
File f1;
FileChannel channel;
RandomAccessFile raf;
for (int i = 0; i < iterations; i++)
{
// file.length()
start = System.nanoTime();
f1 = new File(fileEntry);
fileSize = f1.length();
end = System.nanoTime();
times.put("file", times.get("file") + end - start);
// channel.size()
start = System.nanoTime();
channel = new FileInputStream(fileEntry).getChannel();
fileSize = channel.size();
channel.close();
end = System.nanoTime();
times.put("channel", times.get("channel") + end - start);
// raf.length()
start = System.nanoTime();
raf = new RandomAccessFile(fileEntry, "r");
fileSize = raf.length();
raf.close();
end = System.nanoTime();
times.put("raf", times.get("raf") + end - start);
}
for (Map.Entry<String, Long> entry : times.entrySet()) {
System.out.println(entry.getKey() + " totalTime: " + entry.getValue() + " (" + getTime(entry.getValue()) + ")");
}
}
public static String getTime(Long timeTaken)
{
if (timeTaken < 1000) {
return timeTaken + " ns";
} else if (timeTaken < (1000*1000)) {
return timeTaken/1000 + " us";
} else {
return timeTaken/(1000*1000) + " ms";
}
}
}
A: I ran into this same issue. I needed to get the file size and modified date of 90,000 files on a network share. Using Java, and being as minimalistic as possible, it would take a very long time. (I needed to get the URL from the file, and the path of the object as well. So it varied somewhat, but it took more than an hour.) I then used a native Win32 executable, and did the same task, just dumping the file path, modified date, and size to the console, and executed that from Java. The speed was amazing. The native process, and my string handling to read the data, could process over 1000 items a second.
So even though people down-voted the above comment, this is a valid solution, and it did solve my issue. In my case I knew the folders I needed the sizes of ahead of time, and I could pass them on the command line to my Win32 app. I went from hours to process a directory to minutes.
The issue did also seem to be Windows specific. OS X did not have the same issue and could access network file info as fast as the OS could do so.
Java File handling on Windows is terrible. Local disk access for files is fine though. It was just network shares that caused the terrible performance. Windows could get info on the network share and calculate the total size in under a minute too.
--Ben
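A rough sketch of that shell-out approach (the helper executable name and its pipe-delimited output format are assumptions for illustration, not a real tool):
import java.io.BufferedReader;
import java.io.InputStreamReader;
// spawn the native lister and parse one "path|modified|size" record per line
Process p = new ProcessBuilder("listfiles.exe", "\\\\server\\share").start();
BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line;
while ((line = r.readLine()) != null) {
String[] parts = line.split("\\|");
String path = parts[0];
long modified = Long.parseLong(parts[1]);
long size = Long.parseLong(parts[2]);
// ... store or compare the metadata here ...
}
r.close();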
A: The benchmark given by GHad measures lots of other stuff (such as reflection, instantiating objects, etc.) besides getting the length. If we try to get rid of these things then for one call I get the following times in microseconds:
file sum___19.0, per Iteration___19.0
raf sum___16.0, per Iteration___16.0
channel sum__273.0, per Iteration__273.0
For 100 runs and 10000 iterations I get:
file sum__1767629.0, per Iteration__1.7676290000000001
raf sum___881284.0, per Iteration__0.8812840000000001
channel sum___414286.0, per Iteration__0.414286
I ran the following modified code, giving as an argument the name of a 100MB file.
import java.io.*;
import java.nio.channels.*;
import java.net.*;
import java.util.*;
public class FileSizeBench {
private static File file;
private static FileChannel channel;
private static RandomAccessFile raf;
public static void main(String[] args) throws Exception {
int runs = 1;
int iterations = 1;
file = new File(args[0]);
channel = new FileInputStream(args[0]).getChannel();
raf = new RandomAccessFile(args[0], "r");
HashMap<String, Double> times = new HashMap<String, Double>();
times.put("file", 0.0);
times.put("channel", 0.0);
times.put("raf", 0.0);
long start;
for (int i = 0; i < runs; ++i) {
long l = file.length();
start = System.nanoTime();
for (int j = 0; j < iterations; ++j)
if (l != file.length()) throw new Exception();
times.put("file", times.get("file") + System.nanoTime() - start);
start = System.nanoTime();
for (int j = 0; j < iterations; ++j)
if (l != channel.size()) throw new Exception();
times.put("channel", times.get("channel") + System.nanoTime() - start);
start = System.nanoTime();
for (int j = 0; j < iterations; ++j)
if (l != raf.length()) throw new Exception();
times.put("raf", times.get("raf") + System.nanoTime() - start);
}
for (Map.Entry<String, Double> entry : times.entrySet()) {
System.out.println(
entry.getKey() + " sum: " + 1e-3 * entry.getValue() +
", per Iteration: " + (1e-3 * entry.getValue() / runs / iterations));
}
}
}
A: If you want the file size of multiple files in a directory, use Files.walkFileTree. You can obtain the size from the BasicFileAttributes that you'll receive.
This is much faster than calling .length() on the result of File.listFiles() or using Files.size() on the result of Files.newDirectoryStream(). In my test cases it was about 100 times faster.
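A minimal sketch of that approach (Java 7+; the starting directory is taken from the command line):
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.atomic.AtomicLong;
public class DirSize {
public static void main(String[] args) throws IOException {
final AtomicLong total = new AtomicLong();
Files.walkFileTree(Paths.get(args[0]), new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
total.addAndGet(attrs.size()); // the size arrives with the visit, no extra stat call
return FileVisitResult.CONTINUE;
}
});
System.out.println("total bytes: " + total.get());
}
}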
A: All the test cases in this post are flawed as they access the same file for each method tested. So disk caching kicks in, which tests 2 and 3 benefit from. To prove my point I took the test case provided by GHad and changed the order of enumeration; the results are below.
Looking at the results, I think File.length() is really the winner.
Order of test is the order of output. You can even see that the time taken on my machine varied between executions, but File.length(), whenever it was not first and so did not incur the first disk access, won.
---
LENGTH sum: 1163351, per Iteration: 4653.404
CHANNEL sum: 1094598, per Iteration: 4378.392
URL sum: 739691, per Iteration: 2958.764
---
CHANNEL sum: 845804, per Iteration: 3383.216
URL sum: 531334, per Iteration: 2125.336
LENGTH sum: 318413, per Iteration: 1273.652
---
URL sum: 137368, per Iteration: 549.472
LENGTH sum: 18677, per Iteration: 74.708
CHANNEL sum: 142125, per Iteration: 568.5
A: Actually, I think the "ls" may be faster. There are definitely some issues in Java dealing with getting File info. Unfortunately there is no equivalent safe method of recursive ls for Windows. (cmd.exe's DIR /S can get confused and generate errors in infinite loops)
On XP, accessing a server on the LAN, it takes me 5 seconds in Windows to get the count of the files in a folder (33,000), and the total size.
When I iterate recursively through this in Java, it takes me over 5 minutes. I started measuring the time it takes to do file.length(), file.lastModified(), and file.toURI() and what I found is that 99% of my time is taken by those 3 calls. The 3 calls I actually need to do...
The difference for 1000 files is 15ms local versus 1800ms on server. The server path scanning in Java is ridiculously slow. If the native OS can be fast at scanning that same folder, why can't Java?
As a more complete test, I used WinMerge on XP to compare the modified date and size of the files on the server versus the files locally. This was iterating over the entire directory tree of 33,000 files in each folder. Total time: 7 seconds. Java: over 5 minutes.
So the original statement and question from the OP is true and valid. It's less noticeable when dealing with a local file system. Doing a local compare of the folder with 33,000 items takes 3 seconds in WinMerge, and takes 32 seconds locally in Java. So again, Java versus native is a 10x slowdown in these rudimentary tests.
Java 1.6.0_22 (latest), Gigabit LAN, and network connections, ping is less than 1ms (both in the same switch)
Java is slow.
A: From GHad's benchmark, there are a few issues people have mentioned:
1> Like BalusC mentioned: stream.available() is flawed in this case,
because available() returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream.
So the first step is to remove the URL approach.
2> As StuartH mentioned, the order in which the tests run also makes a caching difference, so take that out by running each test separately.
Now start the tests:
When CHANNEL one run alone:
CHANNEL sum: 59691, per Iteration: 238.764
When LENGTH one run alone:
LENGTH sum: 48268, per Iteration: 193.072
So it looks like the LENGTH one is the winner here:
@Override
public long getResult() throws Exception {
File me = new File(FileSizeBench.class.getResource(
"FileSizeBench.class").getFile());
return me.length();
}
A: Well, I tried to measure it with the code below:
For runs = 1 and iterations = 1 the URL method is fastest most times, followed by channel. I ran this fresh, with some pauses, about 10 times. So for one-time access, using the URL is the fastest way I can think of:
LENGTH sum: 10626, per Iteration: 10626.0
CHANNEL sum: 5535, per Iteration: 5535.0
URL sum: 660, per Iteration: 660.0
For runs = 5 and iterations = 50 the picture looks different.
LENGTH sum: 39496, per Iteration: 157.984
CHANNEL sum: 74261, per Iteration: 297.044
URL sum: 95534, per Iteration: 382.136
File must be caching the calls to the filesystem, while channels and URL have some overhead.
Code:
import java.io.*;
import java.net.*;
import java.util.*;
public enum FileSizeBench {
LENGTH {
@Override
public long getResult() throws Exception {
File me = new File(FileSizeBench.class.getResource(
"FileSizeBench.class").getFile());
return me.length();
}
},
CHANNEL {
@Override
public long getResult() throws Exception {
FileInputStream fis = null;
try {
File me = new File(FileSizeBench.class.getResource(
"FileSizeBench.class").getFile());
fis = new FileInputStream(me);
return fis.getChannel().size();
} finally {
fis.close();
}
}
},
URL {
@Override
public long getResult() throws Exception {
InputStream stream = null;
try {
URL url = FileSizeBench.class
.getResource("FileSizeBench.class");
stream = url.openStream();
return stream.available();
} finally {
stream.close();
}
}
};
public abstract long getResult() throws Exception;
public static void main(String[] args) throws Exception {
int runs = 5;
int iterations = 50;
EnumMap<FileSizeBench, Long> durations = new EnumMap<FileSizeBench, Long>(FileSizeBench.class);
for (int i = 0; i < runs; i++) {
for (FileSizeBench test : values()) {
if (!durations.containsKey(test)) {
durations.put(test, 0l);
}
long duration = testNow(test, iterations);
durations.put(test, durations.get(test) + duration);
// System.out.println(test + " took: " + duration + ", per iteration: " + ((double)duration / (double)iterations));
}
}
for (Map.Entry<FileSizeBench, Long> entry : durations.entrySet()) {
System.out.println();
System.out.println(entry.getKey() + " sum: " + entry.getValue() + ", per Iteration: " + ((double)entry.getValue() / (double)(runs * iterations)));
}
}
private static long testNow(FileSizeBench test, int iterations)
throws Exception {
long result = -1;
long before = System.nanoTime();
for (int i = 0; i < iterations; i++) {
if (result == -1) {
result = test.getResult();
//System.out.println(result);
} else if (test.getResult() != result) {
throw new Exception("variance detected!");
}
}
return (System.nanoTime() - before) / 1000;
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "170"
}
|
Q: Debugging Websites in Internet Explorer I have a Website that is really slow and "feels" really bad when using it. The server is fine, it's a clientside issue, I assume because too much JavaScript or Image Requests, but since it's not my own Website, I wonder if there is a way to show and profile the Page from within IE.
In Firefox, I would use Firebug, Y!Slow and the Web Developer extension to see all JavaScript, CSS, Images and other Requests, AJAX Requests etc., but on IE I did not see any problem. I know I could use Firefox, but the page works better in FF than in IE, so I wonder if there is some development add-on specifically for IE.
Edit: Thanks for the many suggestions! Too many good answers to pick one as "accepted", but I'll have a look at the various tools suggested.
A: Fiddler will help you see the network activity. It shows a log of all request/response messages through the network stack.
A: There is a lite version of Firebug that will work with IE and other browsers, have you tried that?
A: Try Fiddler! It is a free, HTTP debugging proxy, that among other things provides insight on what's loading in your site, what may slow it down, etc. It has advanced features like decoding compressed resources, providing pre-canned responses for certain URL's, etc. Learning Fiddler is a must for any web developer.
A: I would also suggest two tools for discovering JavaScript memory leaks:
*
*sIEve
*Microsoft JavaScript Memory Leak Detector
A: I've been using Web Development Helper lately. It does HTTP logging better than Firebug. Lets you run arbitrary Javascript as well.
A: There is the Internet Explorer Web Developer Toolbar. It isn't as good as Firebug IMHO, but it works.
IE8 will ship with one built-in, too.
A: There's a JS library called Firebug Lite; you need to include it in your site. What it does for you is it enables you to pop up a div in which you can output text, like in Firebug, with the same statements you use in Firebug. MochiKit has something like this too.
A: This isn't a profiler or plugin, but you may find that Quirksmode may help you weed through some of the IE-centric problems once you find them.
A: Have you run performance monitors on the client side to see what is going on, e.g. is there a bunch of memory swapping that is slowing things down or is it all network traffic that is the issue?
Another thought is whether or not there are server logs that may be of some help in seeing the time of requests if there are a bunch of files to load as well as Javascript to initialize things.
A: By using a network sniffer like Wireshark or a proxy you can monitor the traffic and see if it's the loading of images and/or scripts that is slowing down your site. If you're unsure, turn off or comment out your JavaScript to rule out that it's the processing of the scripts that is slowing things down.
If you can't see any indications of a slowdown in the network traffic, then you will have to do a deeper analysis of the JavaScript code itself - perhaps by inserting timers or other measurements to see which parts could be optimized.
A: I use HTTPWatch. It provides all of the information that Firefox LiveHeaders does, but in a much more useful manner. It is also a great tool to determine if you have any content that is blocking the download of further content for a page.
A: You can try DebugBar and CompanionJS from the same company (http://www.debugbar.com/). They are free and pretty similar to Firebug in concept, but not as developed.
A: HttpWatch is also pretty amazing as IE plugins go.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: File System Management Tools Looking for suggestions on file system management tools. We have several terabytes of images, pdfs, excel sheets, etc.
We're looking at some sort of software that will help us to manage, archive, etc the images.
We don't store all the files information in a database but some are and we were hoping to maybe find an app that could help us integrate the archive process into the database.
Thank you!
A: I have always seen these guys in the trade magazines. http://www.dtsearch.com/ I believe they've been around long before even Google. Not sure if this is what you're looking for.
A: If some of the data is saved on disk, perhaps a search application is more appropriate. You can use Google, Microsoft Search or a similar program.
A: Some database products (e.g., Oracle) offer file system-like storage that you can put files into. Since it's an Oracle-managed file system, you have all the Oracle backup and management facilities. Since it's a file system, you just use ordinary OS tools like cp to move files in and out of it.
The best practice is to avoid wasting RDBMS on large BLOBS of data that the RDMBS can't use. use the database for names, dates and stuff it handles well. The actual image file or spreadsheet file can be left in ordinary filesystem world. If you do this, you don't have a lot of effort or complexity -- you're just collecting essential information on your files.
You don't duplicate storage (the spreadsheet is only an ordinary file). You don't put large objects in the database that can't be processed by the database.
The file system is faster, simpler and more reliable than the database. Feel free to use it for bulk storage. The database has cool search capabilities. Use the database for just that.
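A minimal JDBC sketch of that metadata-in-the-DB, bytes-on-disk split (the table and column names are made up for illustration):
import java.io.File;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
public class FileIndexer {
// record only the essentials in the RDBMS; the bytes stay on the file system
public static void index(Connection conn, File f) throws SQLException {
PreparedStatement ps = conn.prepareStatement(
"INSERT INTO file_index (name, path, size_bytes, modified) VALUES (?, ?, ?, ?)");
try {
ps.setString(1, f.getName());
ps.setString(2, f.getAbsolutePath());
ps.setLong(3, f.length());
ps.setTimestamp(4, new Timestamp(f.lastModified()));
ps.executeUpdate();
} finally {
ps.close();
}
}
}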
A: To clarify, I guess I should say that all the files are on file servers, but there are references to some of them in the DB (upload logs, etc.), so we were just hoping there were some tools that would let us set things up so that if a file in a certain directory is archived, some SQL command runs and the database is updated to record that the file was archived.
But thanks for the info. I think we're just going to have to roll our own in this case.
A: You could run a job periodically to list the files that have been added to the file system since the last time the job was run. On Windows, this batch file would list all files and folders in archivedirectory so that you can compare the list to the last time it was run.
cd archivedirectory
del oldlist.txt
rename newlist.txt oldlist.txt
dir /s /b > newlist.txt
If you install diffutils on Windows, you can use the standard diff tool to list the new files.
To isolate the new files:
diff oldlist.txt newlist.txt > newfiles.txt
Any lines in newfiles.txt starting with > should now give you the new files.
(You could use grep and sed to trim it down even more. Windows versions available from gnuwin32)
You should now be able to run further operations on this file, perhaps in some language such as Python, C# or Java, to add information to the database.
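For example, a minimal Java sketch that picks the added files out of newfiles.txt (assuming the diff output format above):
import java.io.BufferedReader;
import java.io.FileReader;
// lines starting with "> " are entries present only in newlist.txt
BufferedReader r = new BufferedReader(new FileReader("newfiles.txt"));
String line;
while ((line = r.readLine()) != null) {
if (line.startsWith("> ")) {
String newFile = line.substring(2);
// update the database record for newFile here
}
}
r.close();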
A: I would have to point you towards Total Commander. This is a two pane file manager that makes almost ALL file tasks easy and fast. The more you use it, the faster you get at it.
These kind of programs have been around for a LONG time. From the days of Norton Commander, to Midnight Commander on Unix/Linux systems. They are extremely efficient and make most operations done in windows explorer look clumsy and slow by comparison.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Opening more than one of the same Mac Application at once I'm developing a Mac App in Java that logs into any one of our clients' databases. My users want to have several copies of this program running so they can log into a couple of clients at the same time, rather than logging out and logging back in.
How can I allow a user to open several copies of my App at once?
I'm using Eclipse to develop, and Jarbundler to make the app.
Edit: More importantly, is there a way to do this in the code base, rather than have my users do something funky on their system? I'd rather just give them an 'Open New Window' menu item than have them typing things into the Terminal.
A: You've probably already written enough code that you don't want to hear this, but you should really not be starting up two instances of the same application. There's a reason that you're finding it so difficult, and that's because Apple doesn't want you to do it.
The OSX way of doing this is to use the Cocoa Document-based Application template in XCode. Apple Documentation: choosing a project.
This is something users are very accustomed to, and it works just fine. FTP programs, IRC clients, and many other types already use different "document" windows to point to different servers or channels. There's nothing inherently different about pointing to different databases.
Depending on how much code you've written, and how your application is designed, this may be pretty much impossible to implement without starting over. Developers who are encountering this problem during design phase, however, should definitely take Apple's advice.
A: From the Terminal (or in a script wrapper):
/Applications/TextEdit.app/Contents/MacOS/TextEdit &
Something like that should work for you.
To do this in Java:
String[] cmd = { "/bin/sh", "-c", "[shell commmand goes here]" };
Process p = Runtime.getRuntime().exec (cmd);
A: From the Terminal, I can run
open -n -a appName.app
Then from Applescript, I can run
tell application "Terminal"
activate
do script "open -n -a appName.app"
end tell
Then from Java, I can execute that script. Then, I can stuff that Java code into an Action. Then, stuff that action into a menu item that says "Open New Window".
That's what I'm going with for the moment. Now I just need to get the appName.
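Incidentally, you can skip the Terminal/AppleScript indirection and launch open directly from Java (appName here is a placeholder for whatever you resolve it to):
// -n forces a fresh instance of the bundle
Runtime.getRuntime().exec(new String[] { "open", "-n", "-a", appName });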
A: If you are developing it in Swing, you should just be able to instantiate the top-level Frame again to create a new window.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Determining the extended interfaces of a Class I need to determine if a Class object representing an interface extends another interface, ie:
package a.b.c.d;
public interface IMyInterface extends a.b.d.c.ISomeOtherInterface {
}
according to the spec Class.getSuperClass() will return null for an Interface.
If this Class represents either the
Object class, an interface, a
primitive type, or void, then null is
returned.
Therefore the following won't work.
Class iface = Class.forName("a.b.c.d.IMyInterface");
Class extendedInterface = iface.getSuperclass();
if(extendedInterface.getName().equals("a.b.d.c.ISomeOtherInterface")){
//do whatever here
}
any ideas?
A: Does Class.isAssignableFrom() do what you need?
Class baseInterface = Class.forName("a.b.c.d.IMyInterface");
Class extendedInterface = Class.forName("a.b.d.c.ISomeOtherInterface");
if ( baseInterface.isAssignableFrom(extendedInterface) )
{
// do stuff
}
A: Use Class.getInterfaces such as:
Class<?> c; // Your class
for(Class<?> i : c.getInterfaces()) {
// test if i is your interface
}
Also the following code might be of help, it will give you a set with all super-classes and interfaces of a certain class:
public static Set<Class<?>> getInheritance(Class<?> in)
{
LinkedHashSet<Class<?>> result = new LinkedHashSet<Class<?>>();
result.add(in);
getInheritance(in, result);
return result;
}
/**
* Get inheritance of type.
*
* @param in
* @param result
*/
private static void getInheritance(Class<?> in, Set<Class<?>> result)
{
Class<?> superclass = getSuperclass(in);
if(superclass != null)
{
result.add(superclass);
getInheritance(superclass, result);
}
getInterfaceInheritance(in, result);
}
/**
* Get interfaces that the type inherits from.
*
* @param in
* @param result
*/
private static void getInterfaceInheritance(Class<?> in, Set<Class<?>> result)
{
for(Class<?> c : in.getInterfaces())
{
result.add(c);
getInterfaceInheritance(c, result);
}
}
/**
* Get superclass of class.
*
* @param in
* @return
*/
private static Class<?> getSuperclass(Class<?> in)
{
if(in == null)
{
return null;
}
if(in.isArray() && in != Object[].class)
{
Class<?> type = in.getComponentType();
while(type.isArray())
{
type = type.getComponentType();
}
return type;
}
return in.getSuperclass();
}
Edit: Added some code to get all super-classes and interfaces of a certain class.
A: if (iface.isAssignableFrom(extendedInterface))
is what you want.
I always get the ordering backwards at first, but recently realized that it's the exact opposite of using instanceof:
if (extendedInterfaceA instanceof interfaceB)
is the same thing but you have to have instances of the classes rather than the classes themselves
A: Take a look at Class.getInterfaces();
List<Object> list = new ArrayList<Object>();
for (Class c : list.getClass().getInterfaces()) {
System.out.println(c.getName());
}
A: List<Class<?>> getAllInterfaces(Class<?> clazz) {
List<Class<?>> interfaces = new ArrayList<>();
Collections.addAll(interfaces, clazz.getInterfaces());
Class<?> superclass = clazz.getSuperclass();
// interfaces and Object itself have no superclass, so guard against null
if (superclass != null && !superclass.equals(Object.class)) {
interfaces.addAll(getAllInterfaces(superclass));
}
return interfaces;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How Do You Clear The IRB Console? How do you clear the IRB console screen?
A: On Linux Mint 17, you can also use Ctrl + Shift + L
or
Ctrl + L
to clear the IRB screen.
A: Throw this inside %userprofile%\.irbrc and you're good
def cls
system('cls')
end
From IRB clear screen on windows.
A: puts `clear`
Clears the screen and then returns => nil. Tested on Mac OS X 10.6 Terminal and iTerm2.
A: Method:
def clear_screen
if RUBY_PLATFORM =~ /win32|win64|\.NET|windows|cygwin|mingw32/i
system('cls')
else
system('clear')
end
end
Or in IRB you can use system('clear')
A: On Windows, using Rails 4,
system('cls')
worked for me
A: On *nix boxes
`clear`
on Windows
system 'cls' # works
`cls` # does not work
on OSX
system 'clear' # works
`clear` # does not work
A: Tons of good answers here, but I often remote into a Linux box with Mintty from Windows. Kudos to the above about using .irbrc, but I came up with this:
def cls
puts "\ec\e[3J"
end
def clear
puts "\e[H\e[2Js"
end
This gives you the options for both the *nix 'clear' behavior and the Windows 'cls' behavior, which I often find more useful if I really want to nuke the buffer rather than just scrolling it out of view.
P.S. a similar variant also works in .bashrc:
alias cls='echo -e "\ec\e[3J"'
If anyone could find a way to actually map that to a keystroke, I'd love to hear it. I would really like to have something akin to cmd-k on osx that would work in Mintty.
A: Add the following method to ~/.irbrc:
def clear
conf.return_format = ""
system('clear')
end
Ctrl-L or Ctrl-K work in a regular console, but I'm using tmux and those mess the screen up inside the tmux window.
The conf.return_format = "" takes the nil off the return value.
A: Windows users simply try,
system 'cls'
OR
system('cls')
Looks like this in the IRB window,
irb(main):333:0> system 'cls'
irb(main):007:0> system('cls')
Did the trick for me in ruby 1.9.3. However the following commands did not work and returned => nil,
system('clear')
system 'clear'
system `cls` # using the backquote key (below the Esc key on Windows keyboards)
A: I've used this for executable files:
def clear
system("cls") || system("clear") || puts("\e[H\e[2J")
end
clear
A: system 'cls'
Works for me on Windows, with Ruby 2.2.0 and Rails 4.0
A: On Mac OS X or Linux you can use Ctrl + L to clear the IRB screen.
A: Command + K in macOS works great.
A: On Ubuntu 11.10, system 'clear' will mostly clear the irb window. You get a return value of => true printed.
A big mess of ugly text
ruby-1.9.2-p290 :007 > system 'clear'
what ya get:
=> true
ruby-1.9.2-p290 :007 >
A: In order to clear the screen just do:
puts "\e[H\e[2J"
P.S. This was tested on Linux.
A: Just discovered this today: in Pry (an IRB alternative), a line of input that begins with a . will be forwarded to the command shell. Which means on Mac & Linux, we can use:
. clear
And, on Windows (Command Prompt and Windows Terminal), we can use:
. cls
Source: pry.github.io
A: system 'clear'
Should work for Rails 4.0 as well
A: I came here looking for a way to reset the tty with irb, since it wasn't printing newlines or showing what I typed somehow, only some output.
1.9.3-p125 :151 > system 'reset'
finally did the trick for me!
A: For Windows users:
If you create a bat file named c.bat whose contents are:
@echo off
cls
Then, in IRB, you can say:
system('c')
to clear the console. I just thought I would share because I thought that was pretty cool. Essentially anything in the path is accessible.
A: ->(a,b,c){x=a.method(b);a.send(c,b){send c,b,&x;false};print"\e[2J\e[H \e[D"}[irb_context,:echo?,:define_singleton_method]
This will fully clear your IRB screen, with no extra empty lines and “=> nil” stuff. Tested on Linux/Windows.
This one-liner could be expanded as:
lambda {
original_echo = irb_context.method(:echo?)
irb_context.send(:define_singleton_method, :echo?) {
send :define_singleton_method, :echo?, &original_echo
false
}
print "\e[2J\e[H \e[D"
}.call
This uses lots of tricks.
Firstly, irb will call echo? to check if the result should be printed. I saved the method, then redefined it with a method which restores the definition but returns false, so irb will not echo the result.
Secondly, I printed some ANSI control chars. \e[2J cleans the screen and \e[H moves the cursor to the upper-left position of the screen. Then a space is printed and \e[D moves the cursor back; this is a workaround for something strange on Windows.
Finally this is kind of not practical at all. Just smile ;)
A: The backtick operator captures the output of the command and returns it
s = `cls`
puts s
would work better, I guess.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "202"
}
|
Q: What's the best setup for working with WSO2 WSAS What version of java, eclipse and WPT should I be using?
A: Java 2 SDK 5.0 or higher
The release notes tell you everything, at this site:
http://wso2.org/project/wsas/java/2.3/docs/tools/ide.html
A: WSO2 WSAS is now called WSO2 Application Server and is now in version 4.0.0
If you are to run the latest WSO2 Application Server, it is recommended that you run it with JDK 1.6
If you want to develop services to be hosted, you can use WSO2 Carbon Studio for this purpose. Note that Carbon Studio is Eclipse based and provides the same experience as WTP.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: MySpace DOM? I've been given the task of doing some work customizing an artist's space in MySpace. It seems that you sort of hack the HTML you want into your edit profile page (which has several empty boxes). The MySpace page, it seems, is already HTML so you can only hack into that. Suggested "tweaks" include incomplete HTML code (e.g., a <DIV> tag without a </DIV> tag to suppress certain sections) and stylesheet pieces that you can "place anywhere" (meaning somewhere on your edit profile page). And the best one is that sites that offer layouts say, "Layout Code - Copy and Paste the code at the bottom of your 'I'd Like to Meet' Section!"
This cannot possibly be this lame, can it?
Is there any coherent guide to customizing MySpace pages for programmers/HTML designers? Is there a coherent DOM (including things like .contactTable etc.)? Could it be that all the tweaks are just hacks people have figured out from looking at the generated HTML?
Thanks!
A: You hit the nail on the head with your final question. The MySpace DOM is a disgusting set of nearly-infinitely nested tables. Normally, people edit the page by finding those sites that let you "cut and paste" and use their generated CSS since they've already done the hard work for isolating the proper elements.
Good luck... unfortunately, you are really going to need it. =/
A: This shouldn't be too hard if you whip out Firebug and do a bunch of "Inspect > click on page > edit CSS in Firebug's editor" work to see what you can learn about the structure of the page. Then mock it up to roughly how you want it and note down which elements and which styles need work and figure out how to get that set up in the profile editor.
Try approaching this from the point of view of a challenge. On the upside, MySpace allows you access to the DOM so you can screw with all sorts of things. On the downside, their choice of HTML composition is somewhat arguable.
A: Your fears are correct. MySpace "customization" is a bunch of hacks. Good luck.
A: You can find a lot of information at this link: http://spiff-myspace.blogspot.com/
I think the same as the other answers: customizing a MySpace page is a difficult and complex task.
Regards,
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why am I getting a "wrong number of arguments (0 for 2)" exception in my Ruby Code? I'm trying to polish up my Ruby by rewriting Kent Beck's xUnit Python example from "Test Driven Development: By Example". I've got quite far, but now I get the following error when I run, which I don't grok.
C:\Documents and Settings\aharmel\My Documents\My Workspace\TDD_Book\TDDBook_xUnit_RubyVersion\lib\main.rb:21:in `test_running': wrong number of arguments (0 for 2) (ArgumentError)
from C:\Documents and Settings\aharmel\My Documents\My Workspace\TDD_Book\TDDBook_xUnit_RubyVersion\lib\main.rb:21:in `run'
from C:\Documents and Settings\aharmel\My Documents\My Workspace\TDD_Book\TDDBook_xUnit_RubyVersion\lib\main.rb:85
My code looks like this:
class TestCase
def initialize(name)
puts "1. inside TestCase.initialise: @name: #{name}"
@name = name
end
def set_up
# No implementation (but present to be overridden in WasRun)
end
def run
self.set_up
self.send @name # <<<<<<<<<<<<<<<<<<<<<<<<<= ERROR HERE!!!!!!
end
end
class WasRun < TestCase
attr_accessor :wasRun
attr_accessor :wasSetUp
def initialize(name)
super(name)
end
def set_up
@wasRun = false
@wasSetUp = true
end
def test_method
@wasRun = true
end
end
class TestCaseTest < TestCase
def set_up
@test = WasRun.new("test_method")
end
def test_running
@test.run
puts "test was run? (true expected): #{test.wasRun}"
end
def test_set_up
@test.run
puts "test was set up? (true expected): #{test.wasSetUp}"
end
end
TestCaseTest.new("test_running").run
Can anyone point out my obvious mistake?
A: It's your print statement:
puts "test was run? (true expected): #{test.wasRun}"
should be
puts "test was run? (true expected): #{@test.wasRun}"
without the '@' you are calling Kernel#test, which expects 2 arguments.
A: One thing that leaps out is that the send method expects a symbol identifying the method name, but you're trying to use an instance variable.
Object.send documentation
Also, shouldn't lines like this:
puts "test was run? (true expected): #{test.wasRun}"
be:
puts "test was run? (true expected): #{@test.wasRun}"
?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Drools project idea needed I was asked to do some Drools training for my teammates, just to show them how it can be used and how useful it may be. To do this training I first have to learn Drools, and I figured that the best way to learn it would be a small project centered around Drools.
Any interesting ideas?
My idea for this project was to do some webMethods flow files validation (just some example validations). I'd do validation only - we have in-house parser of that file format.
But something that actually does some work would be nice.
A: Actually, we have a drools based project, you could try to mimic that. :-)
Suppose you have incoming SMS messages arriving on an HTTP based protocol. An HTTP request contains the Anumber (telephone number of the sender) the Bnumber (telephone number of the recipient) and the text of the message.
Your goal is to use drools to route the messages, based on their content, to the appropriate services. You should have a set of rules, each rule stating something like: if the Bnumber is 1792 and the message text contains the keyword "VIDEO" then the message should be directed to the video providing service.
Actually, we use this exact setup, a drools based router which picks up messages from HTTP servlet threads and puts them to JMS queues based on their contents.
Would it be interesting for you to work on this program? :-)
A: I'm gonna give you two real examples that my company is using right now. The company is one of the biggest e-commerce companies in Brazil.
*
*Drools is used to apply price promotions and discounts to products while users navigate the product catalog.
So, before rendering the response for the user's browser, we have to apply promotions related to price, installments and freight.
*And while checking out the products, there are many promotions that can be applied based on the customer's address region, state, age, sex, product amount, product amount per category, combined promotions, holidays, and so on. The application of each promotion affects the entire list of products, which requires applying the rules again until the checkout reaches a stable state.
It was really challenging, but it works very well. Drools is used in other contexts inside this company too.
A: One example from a previous project:
You are trying to deliver a package and the way you want to deliver it is to use a number of transport companies. Each company will pick the package up at a depot and deliver it to another depot until it finally reaches its destination. Each company has a schedule that can be a weird combination of days and times. For example every Tuesday and Thursday except the 5th Tuesday and first Thursday of a month except on public holidays. Each trip between depots will take a given amount of time. Given a fixed route between depots how long will it take me to deliver this package given a starting time?
A: If you are trying to learn Drools there is also a pretty good book that has been published recently. It can be found at http://www.packtpub.com/drools-jboss-rules-5-0-developers-guide/book. I had already been using Drools for a while when it came out, but skimmed through it to learn some new concepts. Some of my teammates have also read the book and agreed it helped their understanding of the tool/application.
There are some shortfalls. The information isn't organized really well. You must read it from front to back or you're sure to miss some foundational concepts that will inhibit your learning at later points. Also the example code can be a bit hard to work through. Overall, though, I would say it will help flatten your learning curve.
A: The simplest thing would be to play a game, say cards. Poker might be a bit complex, but spades, old maid etc might be easier.
A: Why are you training them on a tool you don't even use? How are you planning on applying it? A contrived example is just that -- contrived. If you have a real need for the technology, then apply it to that domain. At a minimum this can act as a very rough proof of concept to see if the tech is even applicable to your system.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Hiding Subreports in SQL Report (RDL) I have a bunch of reports that are printed out and mailed to clients. At the top of the report is the return address, left aligned. I was asked to add an optional logo to the report. This logo should be left of the return address. (The logo and all other info is stored in the database). So if the logo exists, you SHOULD see:
<someimage> <Return Address>
And if no logo exists, you SHOULD see:
<Return Address>
There are many different logos possible placed in many different reports, so to make life easier, the logo was implemented as a subreport. The subreport just grabs the correct logo image, and then it automatically displays in the report.
The problem I'm having is this. If the logo DOES NOT exist, then we want the return address left aligned, as shown above. But what is happening is that while the subreport shows nothing, it still takes up space where the logo would be, and the return address is floating a few inches to the right of the left side of the page.
<Return Address>
SO... is there a setting I can use/set to get the subreport to either not show, or not take up any space, if there is no logo to be displayed?
Sorry, hope I made this clear enough. I'm totally new to RDL's.
A: You should be able to set an expression on the subreport's visibility so that it does not show if there isn't a logo.
Here is the XML from an RDL I had handy:
<Subreport Name="SubReport">
<ReportName>SubReport</ReportName>
<Visibility>
<Hidden>=Not Parameters!ShowLogo.Value</Hidden>
</Visibility>
</Subreport>
This tests against a boolean parameter called ShowLogo, but you could just as easily test the value of another parameter (perhaps the length of a URL?).
To be clear, when specifying the expression for the "Hidden" property, you want it to evaluate to False when you want the element to display. If your expression evaluates to True, that means that the element will be hidden.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: "SocketException: Unconnected sockets not implemented" with self-signed SSL certificate (I've asked the same question of the jmeter-user mailing list, but I wanted to try here as well - so at the least I can update this with the answer once I find it).
I'm having trouble using JMeter to test a Tomcat webapp using a self-signed SSL cert. JMeter throws a SocketException with message Unconnected sockets not implemented. According to JMeter's docs, the application is designed and written to accept any certificate, self-signed or CA signed or whatever.
Has anyone run into this specific exception before?
I've attempted to export this certificate from the server and import it into my local keystore (with keytool -import -alias tomcat -file ), but the result is the same.
I've also tried setting javax.net.debug=all as a JVM arg (the JSSE reference guide lists this as a debugging step); however, I don't see any debugging output anywhere - should I expect this somewhere other than standard out/error?
A: I had the same problem in my code (not related to JMeter).
My code uses a self-defined SocketFactory. What I found out is that the class com.sun.jndi.ldap.Connection calls some methods of the SocketFactory using Method.invoke(). One of the methods it tries to call is createSocket() - without parameters.
When I added such method to my factory all worked fine.
A: This is a hint rather than a proper answer: A cursory glance at Google results seems to suggest that the exception is usually caused by the code forcing the use of a default SSL socket factory that on purpose throws the exception when createSocket() is invoked on it. What some of the results seem to suggest is that the issue sometimes happens due to a bug in certain version in Java 6, or when the incorrect path or password is provided to the keystore.
So, I'd say, try using Java 5. Also, try pointing JMeter to a well-known site that uses proper SSL certificates. This way you could test the self-signed certificate hypothesis.
A: The abstract javax.net.SocketFactory declares all createSocket() methods that have parameters as abstract, but it has a createSocket() method without parameters that looks like this:
public Socket createSocket() throws IOException {
throw new SocketException("Unconnected sockets not implemented");
}
So if you subclass the abstract javax.net.SocketFactory you will be forced to override the methods with parameters, but it's easy to miss overriding the createSocket() method without parameters. Thus the exception is thrown if your code calls createSocket(). Simply override the method and be done. :)
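A minimal sketch of such a subclass (the overloads just delegate to plain sockets here for brevity; a real factory would create whatever socket type it is meant to produce):
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.SocketFactory;
public class MySocketFactory extends SocketFactory {
@Override
public Socket createSocket() throws IOException {
return new Socket(); // return an unconnected socket instead of throwing
}
@Override
public Socket createSocket(String host, int port) throws IOException {
return new Socket(host, port);
}
@Override
public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
return new Socket(host, port, localHost, localPort);
}
@Override
public Socket createSocket(InetAddress host, int port) throws IOException {
return new Socket(host, port);
}
@Override
public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
return new Socket(address, port, localAddress, localPort);
}
}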
A: I had Web Service problems similar to this with jdk 1.6.0_10.
I upgraded to 1.6.0_16 and everything worked.
A: Try searching your classpath (in Eclipse, I do ctrl-shift-t to do this) for SSLSocketFactory*. If you find one, set it as a property on the Security class:
In my environment, I've found the following two:
Security.setProperty("ssl.SocketFactory.provider", "com.ibm.jsse2.SSLSocketFactoryImpl");
or
Security.setProperty("ssl.SocketFactory.provider", "com.ibm.websphere.ssl.protocol.SSLSocketFactory");
(or for whatever other class you find)
Similarly for ServerSocketFactory, if you need server sockets.
A: Here is my solution (java 1.6) removing TLS_DHE_ ciphers completely, force to override the methods createSocket()
(https://gist.github.com/hoangthienan/735afb17ffd6955de95a49aa0138dbaa)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Speed of online backups with BLOBs In Oracle 8, doing an online backup with BLOBs in the database is extremely slow. By slow, I mean over an hour to back up a database with 100MB of BLOB data. Oracle acknowledged it was slow, but wouldn't fix the problem (so much for paying for support.) Does anyone know if Oracle has fixed this problem in subsequent releases? Also, how fast do online backups with BLOBs work in SQL Server and MySQL?
A: I've had this issue in the past, and the only decent workarounds we found were to make sure that the LOBs were in their own tablespace, and use a different backup strategy with them, or to switch to using the BFILE type. Whether or not you can get by with BFILE will depend on how you're using the LOBs.
Some usage info on BFILE:
http://download-uk.oracle.com/docs/cd/B10501_01/java.920/a96654/oralob.htm#1059942
Note that BFILEs live on the filesystem outside of Oracle, so you'd need to back them up in a process outside of your normal Oracle backup. On one project we just had a scheduled rsync to offsite backup. Also important to note is that you cannot create/update BFILEs via JDBC, but you can read them.
A: To answer your question about the speed of online backups of BLOBs in SQL Server, it's the same speed as backing up regular data for SQL 2000/2005/2008 - it's typically limited by the speed of your storage. I usually get over 100MB/sec on my database backups with BLOBs.
Be wary of using backup compression tools with those, though - if the BLOB is binary-style data that's heavily random, then you'll waste CPU cycles trying to compress the data, and compression can make the backup slower instead of faster.
A: I use SQL Backup from Redgate for SQL Server -- it is ridiculously fast, even with my BLOB data.
I keep a copy of every file that I do EDI with, so while they aren't huge, they are numerous, and they are BLOBs. I have well over 100MB of just these text files.
It's important to note that Redgate's SQL Backup is just a front-end to the standard SQL Backup...it gives you additional management features, basically, but still utilizes the SQL Server backup engine.
A: Depending on the size of the BLOBs, make sure you're storing them in-line / out of line appropriately.
See http://www.dba-oracle.com/t_table_blob_lob_storage.htm
A: Can you put the export file you're creating and the Oracle tablespaces on different disks? Your I/O throughput may be the constraining factor...?
A: exp on 8i was slow, but not as slow as you describe. I have backed up gigabytes of BLOBs in minutes on 10g (to disk, using expdp).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Low Java single process thread limit in Red Hat Linux I'm experiencing an issue on a test machine running Red Hat Linux (kernel version is 2.4.21-37.ELsmp) using Java 1.6 (1.6.0_02 or 1.6.0_04). The problem is, once a certain number of threads are created in a single thread group, the operating system is unwilling or unable to create any more.
This seems to be specific to Java creating threads, as the C thread-limit program was able to create about 1.5k threads. Additionally, this doesn't happen with a Java 1.4 JVM... it can create over 1.4k threads, though they are obviously being handled differently with respect to the OS.
In this case, the number of threads it's cutting off at is a mere 29 threads. This is testable with a simple Java program that just creates threads until it gets an error and then prints the number of threads it created. The error is a java.lang.OutOfMemoryError: unable to create new native thread
This seems to be unaffected by things such as the number of threads in use by other processes or users or the total amount of memory the system is using at the time. JVM settings like Xms, Xmx, and Xss don't seem to change anything either (which is expected, considering the issue seems to be with native OS thread creation).
The output of "ulimit -a" is as follows:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 4
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 7168
virtual memory (kbytes, -v) unlimited
The user process limit does not seem to be the issue. Searching for information on what could be wrong has not turned up much, but this post seems to indicate that at least some Red Hat kernels limit a process to 300 MB of memory allocated for stack, and at 10 MB per thread for stack, it seems like the issue could be there (though it seems strange and unlikely as well).
I've tried changing the stack size with "ulimit -s" to test this, but with any value other than 10240 the JVM does not start, giving an error of:
Error occurred during initialization of VM
Cannot create VM thread. Out of system resources.
I can generally get around Linux, but I really don't know much about system configuration, and I haven't been able to find anything specifically addressing this kind of situation. Any ideas on what system or JVM settings could be causing this would be appreciated.
Edits: Running the thread-limit program mentioned by plinth, there was no failure until it tried to create the 1529th thread.
The issue also did not occur using a 1.4 JVM (does occur with 1.6.0_02 and 1.6.0_04 JVMs, can't test with a 1.5 JVM at the moment).
The code for the thread test I'm using is as follows:
public class ThreadTest {
public static void main(String[] pArgs) throws Exception {
try {
// keep spawning new threads forever
while (true) {
new TestThread().start();
}
}
// when out of memory error is reached, print out the number of
// successful threads spawned and exit
catch ( OutOfMemoryError e ) {
System.out.println(TestThread.CREATE_COUNT);
System.exit(-1);
}
}
static class TestThread extends Thread {
private static int CREATE_COUNT = 0;
public TestThread() {
CREATE_COUNT++;
}
// make the thread wait for eternity after being spawned
public void run() {
try {
sleep(Integer.MAX_VALUE);
}
// even if there is an interruption, don't do anything
catch (InterruptedException e) {
}
}
}
}
If you run this with a 1.4 JVM it will hang when it can't create any more threads and require a kill -9 (at least it did for me).
More Edit:
It turns out that the system that is having the problem is using the LinuxThreads threading model while another system that works fine is using the NPTL model.
A: Have you looked at this resource?
It states that you should be able to run thread-limit to find the maximum number of threads, and can tweak it by compiling glibc.
A: Updating the kernel to a newer version (2.6.something) with NPTL threading fixed this.
A: This is with Ubuntu Linux (1GB RAM)
dsm@localhost:~$ javac ThreadTest.java
dsm@localhost:~$ java ThreadTest
8113
dsm@localhost:~$ java -version
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) Client VM (build 10.0-b23, mixed mode, sharing)
dsm@localhost:~$
A: Can you try it with the JRockit JVM? IIRC, it had a different threading model than the stock Sun JVM.
A: The settings in /etc/security/limits.d/90-nproc.conf may be overriding your /etc/security/limits.conf settings. That may cause the system to act in a way different from what ulimit -u shows.
https://bugzilla.redhat.com/show_bug.cgi?id=823030
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Project Management software I'm interested in project management software similar to MS Project. The project is a serious rebuild of my software. We have a bug management system already, but I need to track project progress.
I don't like web-based apps so much because the data is either hosted off-site or I need to set up my own server.
I like MS Project, but it doesn't seem to compare estimated time with actual time and use it to pro-rate the future work estimates.
I'd like a Gantt chart & calendar view.
Thank you for your help!
A: Please give Open Workbench a try.
A: Here's the rundown from Wikipedia.
A: From the list that Ben Hoffstein provided, two caught my eye - eGroupWare and Project Pier. Although those two might not meet your needs, they appear to be worth looking at for my needs.
A: I used Primavera some time back, and I think it can do the said comparison.
A: I remember when Joel posted about the new Fogbugz Evidence Based Scheduling. Seems like it might be what you're looking for.
A: It's web-based, but Jira tracks time, and can be combined with the GreenHopper plugin to give an Agile-style Planning Board and Burn-down chart. It doesn't, however, do velocity for you.
A: I also used the open source tool OpenProj; it worked perfectly for me.
A: Since this is a one-man project, it became clear that the best solution for me was an Excel workbook that tracks time management & pro-rates remaining tasks. The Recalculate macro updates the statistics.
A: Try Crew! http://www.devmynd.com/crew
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: data access object pattern implementation I would like to implement a data access object pattern in C++, but preferably without using multiple inheritance and/or boost (which my client does not like).
Do you have any suggestions?
A: OTL (otl.sourceforge.net) is an excellent C++ database library. It's a single include file, so it doesn't have all the complexity associated (rightly or wrongly!) with Boost.
In terms of the DAO itself, you have many options. The simplest that hides the database implementation is just to use C++ style interfaces and implement the data access layer in a particular implementation.
class MyDAO {
public:
virtual ~MyDAO() {}
// Pure virtual functions to access the data itself
};
class MyDAOImpl : public MyDAO {
// Implementations to get the data from the database
};
A: A quick Google search on data access object design patterns will return at least 10 results on the first page that will be useful. The most common of these is the abstract interface design as already shown by Jeff Foster. The only thing you may wish to add to this is a data access object factory to create your objects.
Most of the examples I could find with decent code are in Java; it's a common design pattern in Java, but they're still very relevant to C++ and you could use them quite easily.
This is a good link, it describes the abstract factory very well.
A: My preferred data access abstraction is the Repository Pattern.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: cURL equivalent in JAVA I am tasked with writing an authentication component for an open source JAVA app. We have an in-house authentication widget that uses https. I have some example php code that accesses the widget which uses cURL to handle the transfer.
My question is whether or not there is a port of cURL to JAVA, or better yet, what base package will get me close enough to handle the task?
Update:
This is in a nutshell, the code I would like to replicate in JAVA:
$cp = curl_init();
$my_url = "https://" . AUTH_SERVER . "/auth/authenticate.asp?pt1=$uname&pt2=$pass&pt4=full";
curl_setopt($cp, CURLOPT_URL, $my_url);
curl_setopt($cp, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($cp);
curl_close($cp);
Heath, I think you're on the right track, I think I'm going to end up using HttpsURLConnection and then picking out what I need from the response.
A: Exception handling omitted:
HttpURLConnection con = (HttpURLConnection) new URL("https://www.example.com").openConnection();
con.setRequestMethod("POST");
con.setDoOutput(true); // required before writing a request body
con.getOutputStream().write("LOGIN".getBytes("UTF-8"));
con.getInputStream(); // triggers the request; read the response from this stream
A: jsoup
The jsoup library fetches a URL as the first step in its HTML scraping and parsing duties.
Document doc = Jsoup.connect("http://en.wikipedia.org/").get();
A: I'd use the Commons Http Client. There is a contrib class in the project that allows you to use ssl.
We're using it and it's working well.
Edit: Here's the SSL Guide
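For reference, the equivalent of the PHP snippet above might look roughly like this with the old Commons HttpClient 3.x API (the URL shape is copied from the question and the host is a placeholder):
import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class AuthClient {
    public static String authenticate(String uname, String pass) throws IOException {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("https://auth.example.com/auth/authenticate.asp?pt1="
                + uname + "&pt2=" + pass + "&pt4=full");
        try {
            client.executeMethod(get);             // performs the HTTPS GET
            return get.getResponseBodyAsString();  // like CURLOPT_RETURNTRANSFER
        } finally {
            get.releaseConnection();               // return the connection to the pool
        }
    }
}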
A: Try Apache Commons Net for network protocols. Free!
A: You could also try HTTP Components (http://hc.apache.org/) from the Apache Project if you need more features than the ones provided through Commons Net.
A: You can try the curl-to-java lib to convert cURL PHP code to Java code (https://github.com/jeffreyning/curl-to-java). A demo looks like this:
public Object curl(String url, Object postData, String method) {
    CurlLib curl = CurlFactory.getInstance("default");
    var ch = curl.curl_init();  // the handle type comes from the curl-to-java library
    curl.curl_setopt(ch, CurlOption.CURLOPT_CONNECTTIMEOUT, 1000);
    curl.curl_setopt(ch, CurlOption.CURLOPT_TIMEOUT, 5000);
    curl.curl_setopt(ch, CurlOption.CURLOPT_SSL_VERIFYPEER, false);
    curl.curl_setopt(ch, CurlOption.CURLOPT_SSL_VERIFYHOST, false);
    String postDataStr = "key1=v1";  // demo payload; the method parameters are unused here
    curl.curl_setopt(ch, CurlOption.CURLOPT_CUSTOMREQUEST, "POST");
    curl.curl_setopt(ch, CurlOption.CURLOPT_POSTFIELDS, postDataStr);
    curl.curl_setopt(ch, CurlOption.CURLOPT_URL, "https://xxxx.com/yyy");
    Object html = curl.curl_exec(ch);
    Object httpCode = curl.curl_getinfo(ch, CurlInfo.CURLINFO_HTTP_CODE);
    if (httpCode != null && 200 == Integer.valueOf(httpCode.toString())) {
        return html;  // success: return the response body
    }
    return null;      // anything else: signal failure
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: Non-C++ languages for generative programming? C++ is probably the most popular language for static metaprogramming and Java doesn't support it.
Are there any other languages besides C++ that support generative programming (programs that create programs)?
A: The "D" programming language is C++-like but has much better metaprogramming support. Here's an example of a ray-tracer written using only compile-time metaprogramming:
Ctrace
Additionally, there is a gcc branch called "Concept GCC" that supports metaprogramming contructs that C++ doesn't (at least not yet).
Concept GCC
A: Common Lisp supports programs that write programs in several different ways.
1) Program data and program "abstract syntax tree" are uniform (S-expressions!)
2) defmacro
3) Reader macros.
4) MOP
Of these, the real mind-blower is MOP. Read "The Art of the Metaobject Protocol." It will change things for you, I promise!
A: I recommend Haskell. Here is a paper describing its compile-time metaprogramming capabilities.
A: The alternative to template style meta-programming is Macro-style that you see in various Lisp implementations. I would suggest downloading Paul Graham's On Lisp and also taking a look at Clojure if you're interested in a Lisp with macros that runs on the JVM.
Macros in Lisp are much more powerful than C/C++ style and constitute a language in their own right -- they are meant for meta-programming.
A: Lots of work in Haskell: Domain Specific Languages (DSL's), Executable Specifications, Program Transformation, Partial Application, Staged Computation. Few links to get you started:
*
*http://haskell.readscheme.org/appl.html
*http://www.cse.unsw.edu.au/~dons/papers/SCKCB07.html
*http://www.haskell.org/haskellwiki/Research_papers/Domain_specific_languages
A: let me list a few important details about how metaprogramming works in lisp (or scheme, or slate, or pick your favorite "dynamic" language):
*
*when doing metaprogramming in lisp you don't have to deal with two languages. the meta level code is written in the same language as the object level code it generates. metaprogramming is not limited to two levels, and it's easier on the brain, too.
*in lisp you have the compiler available at runtime. in fact the compile-time/run-time distinction feels very artificial there and is very much subject to where you place your point of view. in lisp with a mere function call you can compile functions to machine instructions that you can use from then on as first class objects; i.e. they can be unnamed functions that you can keep in a local variable, or a global hashtable, etc...
*macros in lisp are very simple: a bunch of functions stuffed in a hashtable and given to the compiler. for each form the compiler is about to compile, it consults that hashtable. if it finds a function there, it calls it at compile-time with the original form, and in place of the original form it compiles the form this function returns. (modulo some non-important details) so lisp macros are basically plugins for the compiler (see the small example after the list below).
*writing a lisp function in lisp that evaluates lisp code is about two pages of code (this is usually called eval). in such a function you have all the power to introduce whatever new rules you want on the meta level. (making it run fast is going to take some effort though... about the same as bootstrapping a new language... :)
random examples of what you can implement as a user library using lisp metaprogramming (these are actual examples of common lisp libraries):
*
*extend the language with delimited continuations (hu.dwim.delico)
*implement a js-to-lisp-rpc macro that you can use in javascript (which is generated from lisp). it expands into a mixture of js/lisp code that automatically posts (in the http request) all the referenced local variables, decodes them on the server side, runs the lisp code body on the server, and returns back the return value to the javascript side.
*add prolog like backtracking to the language that very seamlessly integrates with "normal" lisp code (see screamer)
*an XML templating extension to common lisp (includes an example of reader macros that are plugins for the lisp parser)
*a ton of small DSL's, like loop or iterate for easy looping
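As a tiny illustration of the "plugins for the compiler" point above, here is about the smallest useful macro — a sketch only, not from any particular library:
;; a macro is just a function from source forms to source forms:
;; MY-UNLESS rewrites into an IF before compilation continues
(defmacro my-unless (test &body body)
  `(if (not ,test)
       (progn ,@body)))

;; (my-unless (zerop n) (/ 1 n))
;; expands at compile time into:
;; (IF (NOT (ZEROP N)) (PROGN (/ 1 N)))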
A: 'metaprogramming' is really a bad name for this specific feature, at least when you're discussing more than one language, since this feature is only needed for a narrow slice of languages that are:
*
*static
*compiled to machine language
*heavily optimised for performance at compile time
*extensible with user-defined data types (OOP in C++'s case)
*hugely popular
take out any one of these, and 'static metaprogramming', just doesn't make sense. therefore, i would be surprised if any remotely mainstream language had something like that, as understood on C++.
of course, dynamic languages, and several functional languages support totally different concepts that could also be called metaprogramming.
A: The ML family of languages were designed specifically for this purpose. One of OCaml's most famous success stories is the FFTW library for high-performance FFTs that is C code generated almost entirely by an OCaml program.
Cheers,
Jon Harrop.
A: Most people try to find a language that has "ultimate reflection" for self-inspection and something like "eval" for reifying new code. Such languages are hard to find (LISP being a prime counterexample) and they certainly aren't mainstream.
But another approach is to use a set of tools that can inspect, generate, and manipulate program code. Jackpot is such a tool focused on Java. http://jackpot.netbeans.org/
Our DMS software reengineering toolkit is such a tool, which works on C, C++, C#, Java, COBOL, PHP, Javascript, Ada, Verilog, VHDL and a variety of other languages. (It uses production-quality front ends to enable it to read all these languages.)
Better, it can do this with multiple languages at the same instant.
See http://www.semdesigns.com/Products/DMS/DMSToolkit.html
DMS succeeds because it provides a regular method and support infrastructure for complete access to the program structure as ASTs, and in most cases additional data such a symbol tables, type information, control and data flow analysis, all necessary to do sophisticated program manipulation.
A: Template metaprogramming is essentially abuse of the template mechanism. What I mean is that you get basically what you'd expect from a feature that was an unplanned side-effect --- it's a mess, and (although tools are getting better) a real pain in the ass because the language doesn't support you in doing it (I should note that my experience with state-of-the-art on this is out of date, since I essentially gave up on the approach. I've not heard of any great strides made, though)
Messing around with this in about '98 was what drove me to look for better solutions. I could write useful systems that relied on it, but they were hellish. Poking around eventually led me to Common Lisp. Sure, the template mechanism is Turing complete, but then again so is intercal.
Common Lisp does metaprogramming `right'. You have the full power of the language available while you do it, no special syntax, and because the language is very dynamic you can do more with it.
There are other options of course. No other language I've used does metaprogramming better than Lisp does, which is why I use it for research code. There are lots of reasons you might want to try something else though, but it's all going to be tradeoffs. You can look at Haskell/ML/OCaml etc. Lots of functional languages have something approaching the power of Lisp macros. You can find some .NET targeted stuff, but they're all pretty marginal (in terms of user base etc.). None of the big players in industrially used languages have anything like this, really.
A: Nemerle and Boo are my personal favorites for such things. Nemerle has a very elegant macro syntax, despite its poor documentation. Boo's documentation is excellent but its macros are a little less elegant. Both work incredibly well, however.
Both target .NET, so they can easily interoperate with C# and other .NET languages -- even Java binaries, if you use IKVM.
Edit: To clarify, I mean macros in the Lisp sense of the word, not C's preprocessor macros. These allow definition of new syntax and heavy metaprogramming at compiletime. For instance, Nemerle ships with macros that will validate your SQL queries against your SQL server at compiletime.
A: Nim is a relatively new programming language that has extensive support for static meta-programming and produces efficient (C++ like) compiled code.
http://nim-lang.org/
It supports compile-time function evaluation, lisp-like AST code transformations through macros, compile-time reflection, generic types that can be parametrized with arbitrary values, and term rewriting that can be used to create user-defined high-level type-aware peephole optimizations. It's even possible to execute external programs during the compilation process that can influence the code generation. As an example, consider talking to a locally running database server in order to verify that the ORM definition in your code (supplied through some DSL) matches the schema of the database.
A: Lisp supports a form of "metaprogramming", although not in the same sense as C++ template metaprogramming. Also, your term "static" could mean different things in this context, but Lisp also supports static typing, if that's what you mean.
A: The Meta-Language (ML), of course: http://cs.anu.edu.au/student/comp8033/ml.html
A: It does not matter what language you are using -- any of them is able to do Heterogeneous Generative Metaprogramming. Take any dynamic language such as Python or Clojure, or Haskell if you are a type-fan, and write models in this host language that are able to compile themselves into some mainstream language you want, or are forced to use by your team/employer.
I found object graphs to be a good model for internal model representation. Such a graph can mix attributes and ordered subgraphs in a single node, which is natural for attribute grammars and ASTs. So object graph interpretation can be an effective layer between your host and target languages, and can itself act as a sort of syntax-free language defined over data structures.
The closest model is an AST: the sketch below describes AST trees in Python (host language) that target C++ syntax (target language):
# from metaL import *

class Object:
    def __init__(self, V):
        self.val = V
        self.slot = {}
        self.nest = []
    # the two helpers below are minimal stand-ins: metaL's originals are not
    # shown in this post, so they are reconstructed from the printed output
    def head(self, test=False):
        return '<%s:%s> #%x' % (type(self).__name__.lower(), self.val, id(self))
    def __floordiv__(self, that):  # `//` appends to the ordered subgraph
        self.nest.append(that)
        return self

class Module(Object):
    def cc(self):
        c = '// \\ %s\n' % self.head(test=True)
        for i in self.nest:
            c += i.cc()
        c += '// / %s\n' % self.head(test=True)
        return c

hello = Module('hello')
# <module:hello> #a04475a2

class Include(Object):
    def cc(self):
        return '#include <%s.h>\n' % self.val

stdlib = Include('stdlib')
hello // stdlib
# <module:hello> #b6efb657
#   0: <include:stdlib> #f1af3e21

class Fn(Object):
    def cc(self):
        return '\nvoid %s() {\n}\n\n' % self.val

main = Fn('main')
hello // main

print(hello.cc())
// \ <module:hello>
#include <stdlib.h>
void main() {
}
// / <module:hello>
But you are not limited with the level of abstraction of constructed object graph: you not only can freely add your own types but object graph can interpret itself, can thus can modify itself the same way as lists in a Lisp can do.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: How do you create an osx application/dmg from a python package? I want to create a mac osx application from python package and then put it in a disk image.
Because I load some resources out of the package, the package should not reside in a zip file.
The resulting disk image should display the background picture to "drag here -> applications" for installation.
A: I don't know the correct way to do it, but this manual method is the approach I've used for simple scripts which seems to have preformed suitably.
I'll assume that whatever directory I'm in, the Python files for my program are in the relative src/ directory, and that the file I want to execute (which has the proper shebang and execute permissions) is named main.py.
$ mkdir -p MyApplication.app/Contents/MacOS
$ mv src/* MyApplication.app/Contents/MacOS
$ cd MyApplication.app/Contents/MacOS
$ mv main.py MyApplication
At this point we have an application bundle which, as far as I know, should work on any Mac OS system with Python installed (which I think it has by default). It doesn't have an icon or anything, that requires adding some more metadata to the package which is unnecessary for my purposes and I'm not familiar with.
To create the drag-and-drop installer is quite simple. Use Disk Utility to create a New Disk Image of approximately the size you require to store your application. Open it up, copy your application and an alias of /Applications to the drive, then use View Options to position them as you want.
The drag-and-drop message is just a background of the disk image, which you can also specify in View Options. I haven't done it before, but I'd assume that after you whip up an image in your editor of choice you could copy it over, set it as the background and then use chflags hidden to prevent it from cluttering up your nice window.
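If you'd rather script the disk image step than click through Disk Utility, hdiutil can do roughly the same thing from the shell (paths and names here are placeholders):
$ mkdir staging
$ cp -R MyApplication.app staging/
$ ln -s /Applications staging/Applications
$ hdiutil create -volname "MyApplication" -srcfolder staging -ov -format UDZO MyApplication.dmg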
I know these aren't the clearest, simplest or most detailed instructions out there, but I hope somebody may find them useful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
}
|
Q: Maximum Number of characters with FCKeditor Have you determined a maximum number of characters allowed in FCKEditor?
I seem to have hit a wall. It will display the text to be edited, but changes don't get posted back to the server - only the original text gets posted back to the server. I am assuming that the changed text is not copied back to the hidden input field.
This problem only occurs with large amounts of text. Smaller lengths of text work just fine.
This may be a limitation of the editor, of javascript or the browser itself.
I realize this may be more suited for a bug report to the FCKEditor project, but the stack overflow community seems really responsive and willing to help.
Edit: I should clarify what I mean by large. A text field with 60,000 characters is giving us problems.
A: We haven't - we use it for a web content management system, and have some large pages that it quite happily handles. There may be a limit on your response size buffer, or something like that, on your web server....
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Firebug 1.2 document.cookie inconsistency with Web Developer I have a URI here in which a simple document.cookie query through the console is resulting in three cookies being displayed. I verified this with trivial code such as the following as well:
var cookies = document.cookie.split(';');
console.log(cookies.length);
The variable cookies does indeed come out to the number 3. Web Developer on the other hand is indicating that a grand total of 8 cookies are in use.
I'm not sure which one to believe. The best solution might be to just rerun the code above without the influence of Firebug, but I was wondering if someone might suggest a more clever way to work out which tool is giving me the inaccurate information.
A: One reason might be that the other 5 cookies are HTTPONLY:
http://msdn.microsoft.com/en-us/library/ms533046.aspx
If the HttpOnly attribute is included in the response header, the cookie is still sent when the user browses to a Web site in the valid domain. The cookie cannot be accessed through script in Internet Explorer 6 SP1, even by the Web site that set the cookie in the first place. This means that even if a cross-site scripting bug exists, and the user is tricked into clicking a link that exploits this bug, Windows Internet Explorer does not send the cookie to a third party. The information is safe.
Firefox also respects this flag (as of v2.0.0.5).
A: I'm pretty sure the web developer toolbar shows cookies for domain and sub-domains.
So it will show cookies for
abc.xyz.com
xyz.com
whether you are on a page of either domain
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What algorithm should I use to hash passwords into my database? Is there anything available that isn't trivially breakable?
A: The aforementioned algorithms are cryptographically secure hashing algorithms (but MD5 isn't considered to be secure today).
However there are algorithms, that specifically created to derive keys from passwords. These are the key derivation functions. They are designed for use with symmetric ciphers, but they are good for storing password too. PBKDF2 for example uses salt, large number of iterations, and a good hash function. If you have a library, what implements it (e.g. .NET), I think you should consider it.
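As an illustration of the same idea in Java, a PBKDF2 hash can be derived through the standard crypto API. This is only a sketch: the iteration count and key length below are illustrative values, not recommendations, and the salt must be stored alongside the hash for later verification:
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    public static byte[] hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);  // unique per-user salt; store it with the hash

        // 10000 iterations / 256-bit key are illustrative; tune for your hardware
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return f.generateSecret(spec).getEncoded();
    }
}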
A: Use a strong cryptographic hash function like MD5 or SHA1, but make sure you use a good salt, otherwise you'll be susceptible to rainbow table attacks.
A: Add a unique salt to the hashed password value (store the salt value in the db). When a unique salt is used the benefit of using a more secure algorithm than SHA1 or MD5 is not really necessary (at that point it's an incremental improvement, whereas using a salt is a monumental improvement).
A: Update Jan 2013
The original answer is from 2008, and things have moved a bit in the last 5 years. The ready availability of cloud computing and powerful parallel-processor graphics cards means that passwords up to 8 or 9 characters hashed as MD5 or SHA1 are now trivially breakable.
Now a long salt is a must, as is something tougher like SHA512.
However all SHA variant hashes are designed for communication encryption - messages back and forth where every message is encrypted, and for this reason they are designed to be fast.
In the password hashing world this design is a big disadvantage, as the quicker the hash is to generate, the less time it takes to produce large numbers of hashes.
A fast hash like SHA512 can be generated millions, even billions of times a second. Throw in cheap parallel processing and every possible permutation of a password becomes an absolute must.
Key-stretching is one way to combat this. A key-stretching algorithm (like PBKDF2) applies a quicker hash (like SHA512) thousands of times, typically causing the hash generation to take 1/5 of a second or so. Someone logging in won't notice, but if you can only generate 5 hashes per second brute force attacks are much tougher.
Secondly there should always be a per-user random salt. This can be randomly generated as the first n bytes of the hash (which are then stripped off and added to the password text to be checked before building the hashes to compare) or as an extra DB column.
So:
What algorithm should I use to hash passwords into my database?
*
*Key-stretching to slow down hash generation. I'd probably go with PBKDF2.
*Per-user salt means a new attack per user, and some work figuring out how to get the salt.
Computing power and availability are going up exponentially - chances are these rules will change again in another 4 years. If you need future-proof security I'd investigate bcrypt/scrypt style hashes - these take the slower key-stretching algorithms and add a step that uses a lot of RAM to generate the hash. Using so much RAM reduces the effectiveness of cheap parallel processors.
Original Sept 2008 (left in so comments make sense)
MD5+salt or SHA1+salt is not 'trivially breakable' - most hacks depend on huge rainbow tables and these become less useful with a salt [update, now they are].
MD5+salt is a relatively weak option, but it isn't going to be easily broken [update, now it is very easy to break].
SHA2 goes all the way up to 512 - that's going to be pretty impossible to crack with readily available kit [update, pretty easy up to 9 char passwords now] - though I'm sure there's a Cray in some military bunker somewhere that can do it [You can now rent this 'Cray' from Amazon]
A:
This 2008 answer is now dangerously out of date. SHA (all variants) is now trivially breakable, and best practice is now (as of Jan 2013) to use a key-stretching hash (like PBKDF2) or ideally a RAM intensive one (like Bcrypt) and to add a per-user salt too.
Points 2, 3 and 4 are still worth paying attention to.
See the IT Security SE site for more.
Original 2008 answer:
*
*Use a proven algorithm. SHA-256 uses 64 characters in the database, but with an index on the column that isn't a problem, and it is a proven hash and more reliable than MD5 and SHA-1. It's also implemented in most languages as part of the standard security suite. However don't feel bad if you use SHA-1.
*Don't just hash the password, but put other information in it as well. You often use the hash of "username:password:salt" or similar, rather than just the password, but if you play with this then you make it even harder to run a dictionary attack.
*Security is a tough field, do not think you can invent your own algorithms and protocols.
*Don't write logs like "[AddUser] Hash of GeorgeBush:Rep4Lyfe:ASOIJNTY is xyz"
A: First rule of cryptography and password storage is "don't invent it yourself," but if you must here is the absolute minimum you must do to have any semblance of security:
Cardinal rules:
*
*Never store a plain text password (which means you can never display or transmit it either.)
*Never transmit the stored representation of a password over an unsecured line (either plain text, encoded or hashed).
*Speed is your enemy.
*Regularly reanalyze and improve your process as hardware and cryptanalysis improves.
*Cryptography and process is a very small part of the solution.
*Points of failure include: storage, client, transmission, processing, user, legal warrants, intrusion, and administrators.
Steps:
*
*Enforce some reasonable minimum password requirements.
*Change passwords frequently.
*Use the strongest hash you can get - SHA-256 was suggested here.
*Combine the password with a fixed salt (same for your whole database).
*Combine the result of previous step with a unique salt (maybe the username, record id, a guid, a long random number, etc.) that is stored and attached to this record.
*Run the hash algorithm multiple times - like 1000+ times. Ideally include a different salt each time with the previous hash. Speed is your enemy and multiple iterations reduces the speed. Every so often double the iterations (this requires capturing a new hash - do it next time they change their password.)
Oh, and unless you are running SSL or some other line security then don't allow your password to be transmitted in plain text. And if you are only comparing the final hash from the client to your stored hash then don't allow that to be transmitted in plain text either. You need to send a nonce (number used once) to the client, have them hash it together with their generated hash (using the steps above), and then send you the result. On the server side you run the same process and see if the two one-time hashes match. Then dispose of them. There is a better way, but that is the simplest one.
A: MD5 or SHA in combination with a randomly generated salt value for every entry
A: CodingHorror had a great article on this last year. The recommendation at the end of the article is bcrypt.
Also see: https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage/6415#6415
A: As mentioned earlier, simple hashing algorithms should not be used. Here is the reason why:
http://arstechnica.com/security/2012/08/passwords-under-assault/
Instead, use something else, such as http://msdn.microsoft.com/en-us/library/system.security.cryptography.rfc2898derivebytes.aspx
A: All hashing algorithms are vulnerable to a "dictionary attack". This is simply where the attacker has a very large dictionary of possible passwords, and they hash all of them. They then see if any of those hashes match the hash of the password they want to decrypt. This technique can easily test millions of passwords. This is why you need to avoid any password that might be remotely predictable.
But, if you are willing to accept the threat of a dictionary attack, MD5 and SHA1 would each be more than adequate. SHA1 is more secure, but for most applications this really isn't a significant improvement.
A: MD5 / SHA1 hashes are both good choices. MD5 is slightly weaker than SHA1.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Problem Linking "static" Methods in C++ I want to call a few "static" methods of a CPP class defined in a different file but I'm having linking problems. I created a test-case that recreates my problem and the code for it is below.
(I'm completely new to C++, I come from a Java background and I'm a little familiar with C.)
// CppClass.cpp
#include <iostream>
#include <pthread.h>
static pthread_t thread;
static pthread_mutex_t mutex;
static pthread_cond_t cond;
static int shutdown;
using namespace std;
class CppClass
{
public:
static void Start()
{
cout << "Testing start function." << endl;
shutdown = 0;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
pthread_mutex_init(&mutex, NULL);
pthread_cond_init(&cond, NULL);
pthread_create(&thread, &attr, run_thread, NULL);
}
static void Stop()
{
pthread_mutex_lock(&mutex);
shutdown = 1;
pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&mutex);
}
static void Join()
{
pthread_join(thread, NULL);
}
private:
static void *run_thread(void *pthread_args)
{
CppClass *obj = new CppClass();
pthread_mutex_lock(&mutex);
while (shutdown == 0)
{
struct timespec ts;
ts.tv_sec = time(NULL) + 3;
pthread_cond_timedwait(&cond, &mutex, &ts);
if (shutdown)
{
break;
}
obj->display();
}
pthread_mutex_unlock(&mutex);
pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&cond);
pthread_exit(NULL);
return NULL;
}
void display()
{
cout << " Inside display() " << endl;
}
};
// main.cpp
#include <iostream>
/*
* If I remove the comment below and delete the
* the class declaration part, it works.
*/
// #include "CppClass.cpp"
using namespace std;
class CppClass
{
public:
static void Start();
static void Stop();
static void Join();
};
int main()
{
CppClass::Start();
while (1)
{
int quit;
cout << "Do you want to end?: (0 = stay, 1 = quit) ";
cin >> quit;
cout << "Input: " << quit << endl;
if (quit)
{
CppClass::Stop();
cout << "Joining CppClass..." << endl;
CppClass::Join();
break;
}
}
}
When I tried to compile, I get the following error:
$ g++ -o go main.cpp CppClass.cpp -l pthread
/tmp/cclhBttM.o(.text+0x119): In function `main':
: undefined reference to `CppClass::Start()'
/tmp/cclhBttM.o(.text+0x182): In function `main':
: undefined reference to `CppClass::Stop()'
/tmp/cclhBttM.o(.text+0x1ad): In function `main':
: undefined reference to `CppClass::Join()'
collect2: ld returned 1 exit status
But if I remove the class declaration in main.cpp and replace it with #include "CppClass.cpp", it works fine. Basically, I want to put these declarations in a separate .h file and use it. Am I missing something?
Thanks for the help.
A: It's obvious you come from a Java background because you haven't yet grasped the concept of header files. In Java the process of defining something is usually in one piece. You declare and define at the same time. In C/C++ it's a two-step process. Declaring something tells the compiler "something exists with this type, but I'll tell you later how it is actually implemented". Defining something is giving the compiler the actual implementation part. Header files are used mostly for declarations, .cpp files for definitions.
Header files are there to describe the "API" of classes, but not their actual code. It is possible to include code in the header; that's called header-inlining. You have inlined everything in CppClass.cpp (not good, header-inlining should be the exception), and then you declare your class in main.cpp AGAIN, which is a double declaration in C++. The inlining in the class body leads to code duplication every time you use a method (this only sounds insane; see the C++ FAQ section on inlining for details).
Including the double declaration in your code gives you a compiler error. Leaving the class code out compiles but gives you a linker error because now you only have the header-like class declaration in main.cpp. The linker sees no code that implements your class methods, that's why the errors appear. Different to Java, the C++ linker will NOT automatically search for object files it wants to use. If you use class XYZ and don't give it object code for XYZ, it will simply fail.
Please have a look at Wikipedia's header file article and Header File Include Patterns (the link is also at the bottom of the Wikipedia article and contains more examples)
In short:
For each class, generate a NewClass.h and NewClass.cpp file.
In the NewClass.h file, write:
class NewClass {
public:
NewClass();
int methodA();
int methodB();
}; <- don't forget the semicolon
In the NewClass.cpp file, write:
#include "NewClass.h"
NewClass::NewClass() {
// constructor goes here
}
int NewClass::methodA() {
// methodA goes here
return 0;
}
int NewClass::methodB() {
// methodB goes here
return 1;
}
In main.cpp, write:
#include "NewClass.h"
int main() {
NewClass nc;
// do something with nc
}
To link it all together, do a
g++ -o NewClassExe NewClass.cpp main.cpp
(just an example with gcc)
A: I think you want to do something like:
g++ -c CppClass.cpp
g++ -c main.cpp
g++ -o go main.o CppClass.o
That should resolve it.
A: You're defining the class twice, which I'm pretty sure doesn't work.
Try something like this:
First a header CppClass.h file:
// CppClass.h
using namespace std;
class CppClass
{
public:
static void Start();
static void Stop();
static void Join();
private:
void *run_thread(void *pthread_args);
void display();
};
Then a CppClass.cpp file implementing it:
// CppClass.cpp
#include <iostream>
#include <pthread.h>
#include "CppClass.h"
using namespace std;
void CppClass::Start()
{
/* method body goes here */
}
void CppClass::Stop()
{
/* method body goes here */
}
void CppClass::Join()
{
/* method body goes here */
}
void *CppClass::run_thread(void *pthread_args)
{
/* method body goes here */
}
void CppClass::display() {
/* method body goes here */
}
Then your main file:
// main.cpp
#include "CppClass.h"
int main()
{
/* main method body here */
}
I believe the g++ call would be the same.
Basically, you can't declare the same class twice. You should declare the class in the header file, then declare the implementation in the cpp file. You could also put all the code inline in a single declaration of the class in a header file. But declaring it twice like you did won't work.
I hope that made sense...
A: make a .h file with the class definition in it, and then #include that file into your 2 files.
A: Sure looks like the linker is not picking up you second source file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Javascript to extract author/date from SVN keyword substitution SVN keyword substition gives is not pretty. E.g.,
Last updated: $Date$ by $Author$
yields
Last updated: $Date: 2008-09-22 14:38:43 -0400 (Mon, 22 Sep 2008) $ by $Author: cconway $
Does anybody have a Javascript snippet that prettifies things and outputs some HTML? The result should be more like:
Last update: 22 Sep 2008 by cconway
P.S. Is there a way to replace "cconway" with a display name?
A: Errr.. This feels a bit like me doing your job for you :), but here goes:
function formatSvnString(string){
var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
var re = /\$Date: (\d{4})-(\d\d)-(\d\d).*?\$Author: (\S+) \$/
return string.replace(re, function(match, year, month, day, author){
var date = new Date([year, month, day].join('/'))
return date.getDate()
+ ' ' + months[date.getMonth()]
+ ' ' + date.getFullYear()
+ ' by ' + author
})
}
Usage:
formatSvnString("$Date: 2008-09-22 14:38:43 -0400 (Mon, 22 Sep 2008) $ by $Author: cconway $")
// returns: 22 Sep 2008 by cconway
I'll leave it up to you to work out how to find these SVN string and apply the above code automatically :)
To do the user's display name, you'll either have to convince SVN to insert it (I don't think it can do that, but I might be wrong), or somehow provide a means for JS to either fetch it, or to have access to a table full of usernames and associated display name (might be a bit too much like a security risk though. Hey kids, see if you can break into my server using one of these usernames!)
A: Some JavaScript libraries provide templating functionality.
Prototype - http://www.prototypejs.org/api/template
Ext JS - http://extjs.com/deploy/dev/docs/?class=Ext.DomHelper
I'm sure you could find a plugin for JQuery.
A: Here's a little jQuery code to do the job:
$(document).ready(function() {
var date = "$Date: 2008-09-23 00:10:56 -0400 (Tue, 23 Sep 2008) $"
.replace(/\$Date:.*\((.*)\)\s*\$/,"$1");
$(".timestamp").text( "Last updated: " + date );
});
where there is a <div class="timestamp" /> in the HTML wherever you want the timestamp text to appear.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: SharePoint Security We have a SharePoint site on its own domain and are debating using Forms Authentication or Active Directory. Really we would like the power of Kerberos with the flexibility and transparency of forms authentication (like storing the users in SQL Server using SqlMembershipProvider in ASP.NET). Is there any way to force Active Directory to authenticate against our user store, or can we set up a Kerberos authentication server that isn't Active Directory?
Thanks!
A: Maybe ADAM might be helpful for your scenario: http://www.microsoft.com/windowsserver2003/adam/default.mspx
The problem with Forms authentication is that it misses some end user GUI controls like: change password, forgot password etc. We implemented it on a project and had to do a lot of coding to achieve good usability for the end users.
A: You might also want to look into using ISA Server to help you out: http://blogs.msdn.com/jannemattila/archive/2007/07/23/isa-moss-makes-life-a-lot-easier-for-fba.aspx
http://www.isaserver.org/tutorials/Configuring-ISA-Firewalls-ISA-2006-RC-Support-User-Certificate-Authentication-using-Constrained-Delegation-Part1.html
A: You might also consider using Forefront User Access Gateway (UAG). I have implemented multiple times and it works much better than ISA and in fact, bits are installed along with SharePoint for the User Profile Service - http://www.microsoft.com/en-us/server-cloud/forefront/unified-access-gateway.aspx.
UAG gives you better security and flexibility and it is 'SharePoint Aware'. Based on the technology developed by Whale Communications (purchased by MS), it provides a common gateway for all of your applications (in addition to SharePoint).
There is one 'gotcha' in the way UAG logs out however but I have the fix for you here: http://davidmsterling.blogspot.com/2011/08/sharepoint-logout-with-uag.html.
To date, 20 clients have moved from ISA to Forefront UAG and all love it.
David Sterling - http://davidmsterling.blogspot.com - http://www.sterling-consulting.com - http://www.sharepoint-blog.com
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can a C/C++ program put itself into background? What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows).
Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle.
One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs.
Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background.
Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at.
Edit #2:
fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()."
I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions.
Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
A: On Linux, daemon() is what you're looking for, if I understand you correctly.
A: The way it's typically done on Unix-like OSes is to fork() at the beginning and exit from the parent. This won't work on Windows, but is much more elegant than launching another process where forking exists.
A: Three things need doing,
fork
setsid
redirect STDIN, STDOUT and STDERR to /dev/null
This applies to POSIX systems (all the ones you mention claim to be POSIX (but Windows stops at the claiming bit))
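A minimal sketch of those three steps in POSIX C (error checking omitted for brevity):
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static void background_self(void)
{
    if (fork() != 0)        /* parent returns control to the shell */
        exit(0);

    setsid();               /* child: new session, no controlling terminal */

    /* point stdin/stdout/stderr at /dev/null */
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
}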
A: On UNIX, you need to fork twice in a row and let the parent die.
A: A process cannot put itself into the background, because it isn't the one in charge of background vs. foreground. That would be the shell, which is waiting for process exit. If you launch a process with an ampersand "&" at the end, then the shell does not wait for process exit.
But the only way the process can escape the shell is to fork off another child and then let its original self exit back to the waiting shell.
From the shell, you can background a process with Control-Z, then type "bg".
A: Backgrounding a process is a shell function, not an OS function.
If you want an app to start in the background, the typical trick is to write a shell script to launch it that launches it in the background.
#! /bin/sh
/path/to/myGuiApplication &
A: To followup on your edited question:
I was wondering if there was a way to put the EXISTING process into the background.
In a Unix-like OS, there really is not a way to do this that I know of. The shell is blocked because it is executing one of the variants of a wait() call, waiting for the child process to exit. There is not a way for the child process to remain running but somehow cause the shell's wait() to return with a "please stop watching me" status. The reason you have the child fork and exit the original is so the shell will return from wait().
A: Here is some pseudocode for Linux/UNIX:
initialization_code()
if(failure) exit(1)
if( fork() > 0 ) exit(0)
setsid()
setup_signal_handlers()
for(fd=0; fd<NOFILE; fd++) close(fd)
open("/dev/null", O_RDONLY)
open("/dev/null", O_WRONLY)
open("/dev/null", o_WRONLY)
chdir("/")
And congratulations, your program continues as an independent "daemonized" process without a controlling TTY and without any standard input or output.
Now, in Windows you simply build your program as a Win32 application with WinMain() instead of main(), and it runs without a console automatically. If you want to run as a service, you'll have to look that up because I've never written one and I don't really know how they work.
A: You edited your question, but you may still be missing the point that your question is a syntax error of sorts -- if the process wasn't put in the background to begin with and you want the PID to stay the same, you can't ignore the fact that the program which started the process is waiting on that PID and that is pretty much the definition of being in the foreground.
I think you need to think about why you want to both put something in the background and keep the PID the same. I suggest you probably don't need both of those constraints.
A: My advice: don't do this, at least not under Linux/UNIX.
GUI programs under Linux/UNIX traditionally do not auto-background themselves. While this may occasionally be annoying to newbies, it has a number of advantages:
*
*Makes it easy to capture standard error in case of core dumps / other problems that need debugging.
*Makes it easy for a shell script to run the program and wait until it's completed.
*Makes it easy for a shell script to run the program in the background and get its process id:
gui-program &
pid=$!
# do something with $pid later, such as check if the program is still running
If your program forks itself, this behavior will break.
"Scriptability" is useful in so many unexpected circumstances, even with GUI programs, that I would hesitate to explicitly break these behaviors.
Windows is another story. AFAIK, Windows programs automatically run in the background--even when invoked from a command shell--unless they explicitly request access to the command window.
A: As others mentioned, fork() is how to do it on *nix. You can get fork() on Windows by using MingW or Cygwin libraries. But those will require you to switch to using GCC as your compiler.
In pure Windows world, you'd use CreateProcess (or one of its derivatives CreateProcessAsUser, CreateProcessWithLogonW).
A: The simplest form of backgrounding is:
if (fork() != 0) exit(0);
In Unix, if you want to background an disassociate from the tty completely, you would do:
*
*Close all descriptors which may access a tty (usually 0, 1, and 2).
*if (fork() != 0) exit(0);
*setpgid(0, getpid()); /* Might be necessary to prevent a SIGHUP on shell exit. */
*signal(SIGHUP,SIG_IGN); /* just in case, same as using nohup to launch program. */
*fd=open("/dev/tty",O_RDWR);
*ioctl(fd,TIOCNOTTY,0); /* Disassociates from the terminal */
*close(fd);
*if (fork() != 0) exit(0); /* just for good measure */
That should fully daemonize your program.
A: The most common way of doing this under Linux is via forking. The same should work on Mac, as for Windows I'm not 100% sure but I believe they have something similar.
Basically what happens is the process splits itself into two processes, and then the original one exits (returning control to the shell or whatever), and the second process continues to run in the background.
A: I'm not sure about Windows, but on UNIX-like systems, you can fork() then setsid() the forked process to move it into a new process group that is not connected to a terminal.
A: If you need a script to have the PID of the program, you can still get it after a fork.
When you fork, save the PID of the child in the parent process. When you exit the parent process, either output the PID to STD{OUT,ERR} or simply have a return pid; statement at the end of main(). A calling script can then get the pid of the program, although it requires a certain knowledge of how the program works.
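A rough sketch of that idea in C — the parent prints the child's PID before exiting, so a wrapper script can capture it:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child > 0) {
        printf("%d\n", child);  /* parent: print the background PID for the script */
        return 0;
    }
    /* child: keeps running in the background; do the real work here */
    return 0;
}
A launching script could then do something like pid=$(./myguiapp) to record the process id.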
A: Under Windows, the closest thing you're going to get to fork() is loading your program as a Windows service, I think.
Here is a link to an intro article on Windows services...
CodeProject: Simple Windows Service Sample
A: So, as you say, just fork()ing will not do the trick. What you must do is fork() and then re-exec(), as this code sample does:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>
int main(int argc, char **argv)
{
int i, j;
for (i=1; i<argc; i++)
if (strcmp(argv[i], "--daemon") == 0)
{
for (j = i+1; j<argc; j++)
argv[j-1] = argv[j];
argv[argc - 1] = NULL;
if (fork()) return 0;
execv(argv[0], argv);
return 0;
}
sleep(1);
CFRunLoopRun();
CFStringRef hello = CFSTR("Hello, world!");
printf("str: %s\n", CFStringGetCStringPtr(hello, CFStringGetFastestEncoding(hello)));
return 0;
}
The loop is to check for a --daemon argument, and if it is present, remove it before re-execing so an infinite loop is avoided.
I don't think this will work if the binary is put into the path because argv[0] is not necessarily a full path, so it will need to be modified.
A: /**Deamonize*/
pid_t pid;
pid = fork(); /**father makes a little deamon(son)*/
if(pid>0)
exit(0); /**father dies*/
while(1){
printf("Hello I'm your little deamon %d\n",pid); /**The child deamon goes on*/
sleep(1)
}
/** try 'nohup' in linux(usage: nohup <command> &) */
A: In Unix, I have learned to do that using fork().
If you want to put a running process into the background, fork it twice.
A: I was trying the solution.
Only one fork is needed from the parent process.
The most important point is that, after fork, the parent process must die by calling _exit(0); and NOT by calling exit(0);
When _exit(0); is used, the command prompt immediately returns on the shell.
This is the trick.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Can ClickOnce deployment be used with windows mobile and compact frameworks? Can I use the ClickOnce deployment method to deploy and auto update applications targeted for the windows mobile platform (eg smartphone or pocket pc)?
A: True Click-Once is not supported. You might look at these articles to give you a better feel for what can be done:
*
*MSDN Article on Deployment Patterns
*Alex Feinman's article on self-updating apps
You can also package the app into a CAB File that you put on the web for OTA deployment. There are also a couple third-party providers like CloudSync and mProdigy (used neither so YMMV) for OTA as well.
A: No. Hopefully in the future.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What Event to Trigger Javascript Form Field Validation and Formatting? Let me first say, we validate every field on the server side, so this is a question about client-side usability.
What is the conventional wisdom on exactly when to validate and format html form input fields using javascript?
As an example, we have a phone number field. We allow numbers, spaces, parentheses, and hyphens. We want the field to have ten digits. Also, we want the field to look like (123) 456-7890, even if the user doesn't type it that way.
It seems like we can
*
*Validate and format it when the user exits the field.
*Validate and format on every character entered.
*Intercept keystrokes and prevent the user from entering characters that are wrong.
*Some combination of the above (e.g. format on entry and validate on exit, prevent on entry and format on exit, etc.)
*[Added] Wait and do all the validation and formatting when the user clicks submit.
I've seen it done all of these ways, but I can't find information about what is best (or even generally accepted) from a usability perspective, and more importantly, why.
[Edit: Some clarification]
We are absolutely not enforcing any format standards. When I say format, I mean we'll use javascript to rewrite things so they look nice. If the user types 1234567890, we'll change it to (123) 456-7890. There are no "formatting rules" that can fail.
I distinguish this from validation because if they don't type enough numbers, we have to make them fix it.
I guess I should rephrase the question as "what is the conventional wisdom on exactly when to validate and exactly when to format...?"
Good info in the answers so far!
EDIT: I'm accepting my own answer below in the hopes that others will find the link as helpful as I did.
A:
Validate and format it when the user exits the field.
Yes. Provide noninvasive feedback to the user if validation or formatting rules fail. By noninasive I mean don't popup an alert or modal dialog box, thereby forcing the user to click something. Rather dynamically display a message adjacent or underneath the field where validation or formatting failed.
Validate and format on every character entered.
No. I think that hinders usability. Instead provide the user with a tooltip or some other hint as to what the formatting rules are or validation rules are. E.g. for a "required" field the practically ubiquitious asterisk, and for fields with formatting tell the user up front what the expected format is.
Intercept keystrokes and prevent the user from entering characters that are wrong.
If you are going to prevent the user from entering invalid characters, tell the user why you just blocked their input, noninvasively. Also,do not steal focus of the field.
So for me the general principles are:
*
*Inform the user up front about your validation and formatting rules.
*Do not assume the user is sighted, so keep web accessiblity and screen readers in mind. (Unless you are developing a web site that has a limited target audience such as an Intranet.)
*Provide the user with noninvasive feedback, meaning do not make the user click on an alert box or modal dialog upon each failure.
*Make it obvious which input box failed validation or formatting rules, and tell the user why their input failed.
*Do not steal the mouse/pointer focus, when providing feedback.
*Keep tab order in mind, so that when keyboard oriented users complete a field, they can hit tab and go to the next logical input/selection field.
A: I was going to describe various options but it may be beneficial just to use an existing js framework to handle input masks. Here is a good run down of various options
A: bmb states that they are accepting any format, and changing it to the desired format (xxx) nnn-xxxx. That is very good. The question is the timing of A) the format change, and B) the validation.
A) The format change should be done as the user exits the field. Sooner is annoying and later defeats the purpose of displaying the change at all.
B) Validation is most appropriately performed either on exiting the field or on submitting the form. Sooner is frustrating and confusing to the user. In a long and complex form, with more than one screen, I would favor doing validation on exiting the control to make corrections easier. On a short form, I would do it upon submit to avoid breaking the flow of filling out the form. It is really a judgment call, so test it with real users if at all possible.
Preferably you are testing your work with real users anyway, but if you do not have budget or access for that, a quick and dirty "user" test can help with decisions like this one. You can grab a handful of people who did not work on the software (as close a match to your real end users as possible) and ask them to fill out the form. Instruct them to enter things a specifically to get an error and then watch them correct it. Ask them to talk aloud through what they are doing so you don't need to guess at their thought process. Look for where they have issues and what seems to confuse/annoy them most.
A: By far the best answer so far was not an answer but a comment (see above.) I'm adding it as an answer in case anyone misses it in the comment.
See the following article on A List Apart.
Inline Validation in Web Forms by Luke Wroblewski
A: I find the first three options to be really annoying. There's nothing worse than being interrupted in the middle of typing something.
Your user's main goal is getting through the form as fast as possible and anything you do that slows them down is just one more reason for them to give up on it entirely.
I also hate being forced to type something like a credit card # or phone # is exactly the right format to satisfy the form. Whenever possible, just give the user a box to type stuff into and don't make them deal with the formatting.
In the case of your phone #, let them enter it however they want, strip out anything you don't like, try to put it back together into the format you want ( (124) 567-8901 ) and throw an error if you can't.
If you absolutely must validate something to a specific format, do it when they submit the form and then highlight the fields that have problems.
A: It depends per field. But for something like a phone number, it's generally pretty nice to prevent someone from even entering non-digits.
That way, when typing you'll notice that your 1-800-HELLOWORLD phone number isn't showing up correctly and realise that the field only accepts numbers (which you can also highlight with some sort of information field alongside the input field).
That seems to me to be a lot more inuitive and friendly than an awkward modal dialogue, pop-down error field or server-generated message displaying after you're done filling it out.
Of course, always balance the ultimate client-side validation with the ultimate technical requirements to build it. If you start down the path of, say, validating image uploads with Ajax before the page submits, that can be a pretty long road.
Edit: also, consider your audience. The more technically minded are going to be more accepting of "dynamic" forms than, say, people who are more accustomed to the non-Ajax approach to the Internet.
A: Personally I think formatting and validating on exit is the least bothersome to the user. Let them enter the number in whatever format they like (and there are a lot of these for a phone number) and then change it to the format you like. Don't force the user to conform to your preferences when you can handle the data in their preferred format.
Also, validation messages when I'm not done typing are annoying, and not being able to put a certain character in a text field is super-annoying.
The only exception I can think of is "is this value available" situations (such as creating a unique username) - in that case immediate feedback is really convenient.
A: The most user friendly way I've seen to do validation is to have an indicator that shows up next to the input field to indicate the value is invalid. This way you don't interrupt the user as they're typing and yet they can continually see whether or not they've typed a valid entry. I hate having to type information into a long form only to have the thing tell me at the end "Oh, you need to go back and fix field 1".
You can have the indicator be show/hidden as the user types. I use a warning icon when the value is invalid and I set a tooltip on the icon that explains why the value is invalid.
If you have the screen real estate, you could just put text such as "Valid" or "Must be in format XXX-YYY-XXXX".
Keep in mind that when you do per-keystroke validation you also need to catch pasted text.
In addition to this, you should also prevent invalid keystrokes from being entered in the first place.
A: For usability best practices, I suggest reading How To Design The Perfect Form (more precisely Context & Assistance) and Beautiful Forms.
For a form validation framework, check the fValidator and iMask combo, they are complementary and thus work perfectly together.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Oracle: Is there a simple way to say "if null keep the current value" in merge/update statements? I have a rather weak understanding of any of oracle's more advanced functionality but this should I think be possible.
Say I have a table with the following schema:
MyTable
Id INTEGER,
Col1 VARCHAR2(100),
Col2 VARCHAR2(100)
I would like to write an sproc with the following
PROCEDURE InsertOrUpdateMyTable(p_id in integer, p_col1 in varchar2, p_col2 in varchar2)
Which, in the case of an update, will not overwrite Col1 or Col2 if the corresponding value in p_col1 or p_col2 is null.
So If I have a record:
id=123, Col1='ABC', Col2='DEF'
exec InsertOrUpdateMyTable(123, 'XYZ', '098'); --results in id=123, Col1='XYZ', Col2='098'
exec InsertOrUpdateMyTable(123, NULL, '098'); --results in id=123, Col1='ABC', Col2='098'
exec InsertOrUpdateMyTable(123, NULL, NULL); --results in id=123, Col1='ABC', Col2='DEF'
Is there any simple way of doing this without having multiple SQL statements?
I am thinking there might be a way to do this with the Merge statement though I am only mildly familiar with it.
EDIT:
Cade Roux below suggests using COALESCE, which works great! Here are some examples of using the coalesce keyword.
And here is the solution for my problem:
MERGE INTO MyTable mt
USING (SELECT 1 FROM DUAL) a
ON (mt.ID = p_id)
WHEN MATCHED THEN
UPDATE
SET mt.Col1 = coalesce(p_col1, mt.Col1), mt.Col2 = coalesce(p_col2, mt.Col2)
WHEN NOT MATCHED THEN
INSERT (ID, Col1, Col2)
VALUES (p_id, p_col1, p_col2);
A: Change the call or the update statement to use
nvl(newValue, oldValue)
for the new field value.
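For example, applied to the table from the question (a sketch; NVL returns its first argument unless that argument is null):
UPDATE MyTable
   SET Col1 = NVL(p_col1, Col1),
       Col2 = NVL(p_col2, Col2)
 WHERE Id = p_id;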
A: Using MERGE and COALESCE? Try this link for an example
with
SET a.Col1 = COALESCE(incoming.Col1, a.Col1)
,a.Col2 = COALESCE(incoming.Col2, a.Col2)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Best Practices: Storing a workflow state of an item in a database? I have a question about best practices regarding how one should approach storing complex workflow states for processing tasks in a database. I've been looking online to no avail, so I figured I'd ask the community what they thought was best.
This question comes out of the same "BoxItem" example I gave in a prior question. This "BoxItem" is being tracked in my system as various tasks are performed on it. The task may take place over several days and with human interaction, so the state of the BoxItem must be persisted. Who did the task (if applicable), and when the task was done must also be tracked.
At first, I approached this by adding three fields to the "BoxItems" table for every human-interactive task that needed to be done:
IsTaskNameComplete
DateTaskNameComplete
UserTaskNameComplete
This worked when the workflow was simple... but now that it has grown to a complex process (> 10 possible human interactions in the flow... about half of which are optional, and may or may not be done for the BoxItem, which resulted in me beginning to add "DoTaskName" fields as well for those optional tasks), I've found that what should've been a simple table now has 40 or so fields devoted entirely to retaining this state information.
I find myself asking if there isn't a better way to do it... but I'm at a loss.
My first thought was to make a generic "BoxItemTasks" table which defined the tasks that may be done on a given box, but I still would need to save the Date and User information individually, so it didn't really help.
My second thought was that perhaps it didn't matter, and I shouldn't worry if this table has 40 or more fields devoted to state retaining... and maybe I'm just being paranoid. But it feels like that's a lot of information to retain.
Anyways, I'm at a loss as far as what a third option might be, or if one of the two options above is actually reasonable. I can see this workflow potentially getting even more complex in the future, and for each new task I'm going to need to add 3-4 fields just to support the tracking of it... it feels like it's spiraling out of control.
What would you do in this situation?
I should note that this is maintenance of an existing system, one that was built without an ORM, so I can't just leave it up to the ORM to take care of it.
EDIT:
Kev, are you talking about doing something like this:
BoxItems
(PK) BoxItemID
(Other irrelevant stuff)
BoxItemActions
(PK) BoxItemID
(PK) BoxItemTaskID
IsCompleted
DateCompleted
UserCompleted
BoxItemTasks
(PK) TaskType
Description (if even necessary)
Hmm... that would work... it would represent a need to change how I currently approach doing SQL Queries to see which items are in what state, but in the long term something like this looks like it would work better (without having to make a fundamental design change like the Serialization idea represents... though if I had the time, I'd like to do it that way I think.).
So is this what you were mentioning, Kev, or am I off on it?
EDIT: Ah, I see your idea as well with the "Last Action" to determine the current state... I like it! I think that might work for me... I might have to change it up a little bit (because at some point tasks happen concurrently), but the idea seems like a good one!
EDIT FINAL: So in summation, if anyone else is looking this up in the future with the same question... it sounds like the serialization approach would be useful if your system has the information pre-loaded into some interface where it's queryable (i.e. not directly calling the database itself, as the ad-hoc system I'm working on does), but if you don't have that, the additional tables idea seems like it should work well! Thank you all for your responses!
A: If I'm understanding correctly, I would add the BoxItemTasks table (just an enumeration table, right?), then a BoxItemActions table with foreign keys to BoxItems and to BoxItemTasks for what type of task it is. If you want to make it so that a particular task can only be performed once on a particular box item, just make the (Items + Tasks) pair of columns be the primary key of BoxItemActions.
(You laid it out much better than I did, and kudos for correctly interpreting what I was saying. What you wrote is exactly what I was picturing.)
As for determining the current state, you could write a trigger on BoxItemActions that updates a single column BoxItems.LastAction. For concurrent actions, your trigger could just have special cases to decide which action counts as the most recent.
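A rough sketch of that trigger, assuming SQL Server syntax, a LastAction column on BoxItems, and the column names from the edit above:
CREATE TRIGGER trg_BoxItemActions_LastAction
ON BoxItemActions
AFTER INSERT
AS
    -- stamp the box with the task id of the most recently inserted action
    UPDATE b
       SET b.LastAction = i.BoxItemTaskID
      FROM BoxItems b
      JOIN inserted i ON i.BoxItemID = b.BoxItemID;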
A: As the previous answer suggested, I would break your table into several.
BoxItemActions, containing a list of actions that the work flow needs to go through, created each time a BoxItem is created. In this table, you can track the detailed dates \ times \ users of when each task was completed.
With this type of application, knowing where the Box is to go next can get quite tricky, so having a 'Map' of the remaining steps for the Box will prove quite helpful. As well, this table can grow like crazy, hundreds of rows per box, and it will still be very easy to query.
It also makes it possible to have 'different paths' that can easily be changed. A master data table of 'paths' through the work flow is one solution, where, as each box is created, the user has to select which 'path' the box will follow. Or you could set it up so that when the user creates the box, they select which tasks are required for this particular box. Depends on your business problem.
A: How about a hybrid of the serialization and the database models. Have an XML document that serves as your master workflow document, containing a node for each step with attributes and elements that detail its name, order in the process, conditions for whether it's optional or not, etc. Most importantly, each step node can have a unique step id.
Then in your database you have a simple two table structure. The BoxItems table stores your basic BoxItem data. Then a BoxItemActions table much like in the solution you marked as the answer.
It's essentially similar to the solution accepted as the answer, but instead of a BoxItemTasks table to store the master list of tasks, you use an XML document that allows for some more flexibility for the actual workflow definition.
A: For what it's worth, in BizTalk they "dehydrate" long-running message patterns (workflows and the like) by binary serializing them to the database.
A: I think I would serialize the Workflow object to XML and store in the database with an ID column. It may be more difficult to report on, but it sounds like it may work in your case.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: What are some good resources for the document/view or composite application architecture? Lately I've been working on applications that are relatively data-oriented. In general, they tend to be editors for data represented by classes that are related in odd ways. I've been handling it by having a UserControl for each type of object and as the selection changes the program displays the appropriate editor for the object.
The "framework" I have made for this feels clunky and messy. In general, I have a two-pane interface with an "item selection" control on the left and the "work area" on the right. The UI has to do most of the work of responding to item selections by determining what UserControl to display, and implementing behaviors like undo and asking if the user wants to save data before changing items can get messy. The inspiration for this post is a nice "make our build process easier" application I'm working on with a colleague but it's really out of hand to the extent that a well-designed rewrite will probably arrive faster than the current code will be completed.
I've got a passing familiarity with the document/view architecture from reading a little bit of some C++ books. I understand the more modern counterpart to it in .NET might be the Composite UI Application block. The problem is I've never seen anything but quick walkthroughs or howtos on these topics. It's never from the viewpoint of "how you should design an application for this" but more from the viewpoint of "paste this into the application and you'll understand!" I've spent an hour or two digging through the CAB documentation but it's somewhat confusing to me. I don't like the CAB mainly because I'm curious how things work under the hood and I think I'd appreciate it more if I were able to implement a simple version of a similar pattern before I dive into using a framework.
What I really think I need is a website or book focused on this issue. The big part I don't seem to get is how to separate concerns into the appropriate places; I'm used to designing my data classes with several methods to work with the data, and it seems like maybe that's the job of a controller object? What sources do you find useful for an introduction to this subject? I've seen lots of articles that draw nice diagrams like this and I get the high-level idea of how these architectures work. I don't think I've ever found the right source to teach me the low-level "how to implement it" part.
A: I find this article by Martin Fowler to be an excellent overview of a variety of UI architectures. Hope it helps :)
A: I accepted @Luke H's answer because it ultimately led me to several resources that are pretty decent.
*
*Martin Fowler's books look top-notch and are in my queue.
*The Build Your Own CAB Series by Jeremy Miller does a really good job.
*It looks like I'm a little too ignorant of design patterns to continue, so I'm reading Head-First Design Patterns followed by the Gang of Four book to help there; after that I plan on digesting Fowler's works.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: What is the best way to sort controls inside of a flowlayout panel? I am adding custom controls to a FlowLayoutPanel. Each control has a date property. I would like to sort the controls in the flowlayoutpanel based on the date property. I can't presort the controls before I add them because it is possible for the user to add more.
My current thought is when the ControlAdded event for the FlowLayoutPanel is triggered I loop through the controls and use the BringToFront function to order the controls based on the date.
What is the best way to do this?
A: I doubt this is the best, but it is what I have so far:
// collect the custom controls keyed by date so they come back sorted
// (note: SortedList will throw if two controls share the same date)
SortedList<DateTime,Control> sl = new SortedList<DateTime,Control>();
foreach (Control i in mainContent.Controls)
{
    if (i.GetType().BaseType == typeof(MyBaseType))
    {
        MyBaseType iTyped = (MyBaseType)i;
        sl.Add(iTyped.Date, iTyped);
    }
}
// SendToBack moves each control to the end of the flow, so calling it
// in ascending date order leaves the controls sorted by date
foreach (MyBaseType j in sl.Values)
{
    j.SendToBack();
}
A: BringToFront affects the z-order not the x/y position, I suspect you want to sort the FlowLayoutPanel.Controls collection when someone adds or deletes controls in the panel. Probably use SuspendLayout and ResumeLayout around the sorting code.
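A sketch of that approach, reusing the mainContent and MyBaseType names from the answer above (SetChildIndex rewrites a control's position in the Controls collection, which is what determines flow order):
mainContent.SuspendLayout();
List<MyBaseType> ordered = new List<MyBaseType>();
foreach (Control c in mainContent.Controls)
{
    MyBaseType item = c as MyBaseType;
    if (item != null)
        ordered.Add(item);
}
// sort by the custom Date property
ordered.Sort(delegate(MyBaseType a, MyBaseType b) { return a.Date.CompareTo(b.Date); });
for (int index = 0; index < ordered.Count; index++)
    mainContent.Controls.SetChildIndex(ordered[index], index);
mainContent.ResumeLayout();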
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What's the best way to set up data access for an ASP.NET MVC project? I am starting a new ASP.NET MVC project to learn with, and am wondering what's the optimal way to set up the project(s) to connect to a SQL server for the data. For example lets pretend we have a Product table and a product object I want to use to populate data in my view.
I know somewhere in here I should have an interface that gets implemented, etc but I can't wrap my mind around it today :-(
EDIT: Right now (i.e. the current, poorly coded version of this app) I am just using plain old SQL Server (2000 even) with only stored procedures for data access, but I would not be averse to adding in an extra layer of flexibility for using LINQ to SQL or something.
EDIT #2: One thing I wanted to add was this: I will be writing this against a V1 of the database, and I will need to be able to let our DBA re-work the database and give me a V2 later, so it would be nice to only really have to change a few small things that are not provided via the database now that will be later. Rather than having to re-write a whole new DAL.
A: It really depends on which data access technology you're using. If you're using Linq To Sql, you might want to abstract away the data access behind some sort of "repository" interface, such as an IProductRepository. The main appeal for this is that you can change out the specific data access implementation at any time (such as when writing unit tests).
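A minimal sketch of what such an interface might look like (the method names are illustrative, not prescriptive):
public interface IProductRepository
{
    IList<Product> GetProducts();
    Product GetProduct(int id);
    void SaveProduct(Product product);
}
A Linq To Sql implementation can then live behind the interface, and unit tests can substitute an in-memory fake.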
I've tried to cover some of this here:
A: I would check out Rob Conery's videos on his creation of an MVC store front. The series can be found here: MVC Store Front Series
This series dives into all sorts of design related subjects as well as coding/testing practices to use with MVC and other projects.
A: In my site's solution, I have the MVC web application project and a "common" project that contains my POCOs (plain ol' C# objects), business managers and data access layers.
The DAL classes are tied to SQL Server (I didn't abstract them out) and return POCOs to the business managers that I call from my controllers in the MVC project.
A: I think that Billy McCafferty's S#arp Architecture is a quite nice example of using ASP.NET MVC with a data access layer (using NHibernate as default), dependency injection (Ninject atm, but there are plans to support the CommonServiceLocator) and test-driven development. The framework is still in development, but I consider it quite good and stable. As of the current release, there should be few breaking changes until there is a final release, so coding against it should be okay.
A: I have done a few MVC applications and I have found a structure that works very nicely for me. It is based upon Rob Conery's MVC Storefront Series that JPrescottSanders mentioned (although the link he posted is wrong).
So here goes - I usually try to restrict my controllers to only contain view logic. This includes retrieving data to pass on to the views and mapping from data passed back from the view to the domain model. The key is to try and keep business logic out of this layer.
To this end I usually end up with 3 layers in my application. The first is the presentation layer - the controllers. The second is the service layer - this layer is responsible for executing complex queries as well as things like validation. The third layer is the repository layer - this layer is responsible for all access to the database.
So in your products example, this would mean that you would have a ProductRepository with methods such as GetProducts() and SaveProduct(Product product). You would also have a ProductService (which depends on the ProductRepository) with methods such as GetProductsForUser(User user), GetProductsWithCategory(Category category) and SaveProduct(Product product). Things like validation would also happen here. Finally your controller would depend on your service layer for retrieving and storing products.
You can get away with skipping the service layer but you will usually find that your controllers get very fat and tend to do too much. I have tried this architecture quite a few times and it tends to work quite nicely, especially since it supports TDD and automated testing very well.
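As a rough sketch of that layering (using the names from the paragraph above; the Price property and constructor injection are illustrative assumptions):
public class ProductService
{
    private readonly ProductRepository repository;

    public ProductService(ProductRepository repository)
    {
        this.repository = repository;
    }

    public void SaveProduct(Product product)
    {
        // validation belongs at the service layer...
        if (product.Price < 0)
            throw new ArgumentException("Price cannot be negative.");

        // ...while persistence stays delegated to the repository
        repository.SaveProduct(product);
    }
}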
A: For our application I plan on using LINQ to Entities, but as it's new to me there is the possibility that I will want to replace it in the future if it doesn't perform as I would like and use something else like LINQ to SQL or NHibernate, so I'll be abstracting the data access objects into an abstract factory so that the implementation is hidden from the application.
How you do it is up to you; as long as you choose a proven and well-known design pattern for the implementation, I think your final product will be well supported and robust.
A: Check out the Code Camp Server for a good reference application that does this very thing and as @haacked stated abstract that goo away to keep them separated.
A: Use LINQ. Create a LINQ to SQL file and drag and drop all the tables and views you need. Then when you call your model all of your CRUD level stuff is created for you automagically.
LINQ is the best thing I have seen in a long long time. Here are some simple samples for grabbing data from Scott Gu's blog.
LINQ Tutorial
A: I just did my first MVC project and I used a Service-Repository design pattern. There is a good bit of information about it on the net right now. It made my transition from Linq->Sql to Entity Framework effortless. If you think you're going to be changing a lot, put in the little extra effort to use Interfaces.
I recommend Entity Framework for your DAL/Repository.
A: I think you need an ORM, for example Entity Framework (code first).
You can create some model classes, use them for your logic and views, and map them to db (v1).
When the DBA gives you the new db (v2), you only need to change the mapping configuration. That works as long as v1 and v2 are both relational databases (SQL Server, MySQL, Oracle...); if db (v1) is relational and db (v2) is a NoSQL store (Mongo, Redis, Couchbase...), that won't work.
You may still need to do some find and replace.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Passing int array as parameter in web user control I have an int array as a property of a Web User Control. I'd like to set that property inline if possible using the following syntax:
<uc1:mycontrol runat="server" myintarray="1,2,3" />
This will fail at runtime because it will be expecting an actual int array, but a string is being passed instead. I can make myintarray a string and parse it in the setter, but I was wondering if there was a more elegant solution.
A: @mathieu, thanks so much for your code. I modified it somewhat in order to compile:
public class IntArrayConverter : System.ComponentModel.TypeConverter
{
public override bool CanConvertFrom(System.ComponentModel.ITypeDescriptorContext context, Type sourceType)
{
return sourceType == typeof(string);
}
public override object ConvertFrom(System.ComponentModel.ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
{
string val = value as string;
string[] vals = val.Split(',');
System.Collections.Generic.List<int> ints = new System.Collections.Generic.List<int>();
foreach (string s in vals)
ints.Add(Convert.ToInt32(s));
return ints.ToArray();
}
}
A: Seems to me that the logical—and more extensible—approach is to take a page from the asp: list controls:
<uc1:mycontrol runat="server">
<uc1:myintparam>1</uc1:myintparam>
<uc1:myintparam>2</uc1:myintparam>
<uc1:myintparam>3</uc1:myintparam>
</uc1:mycontrol>
A: Great snippet @mathieu. I needed to use this for converting longs, but rather than making a LongArrayConverter, I wrote up a version that uses Generics.
public class ArrayConverter<T> : TypeConverter
{
public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
{
return sourceType == typeof(string);
}
public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
{
string val = value as string;
if (string.IsNullOrEmpty(val))
return new T[0];
string[] vals = val.Split(',');
List<T> items = new List<T>();
Type type = typeof(T);
foreach (string s in vals)
{
T item = (T)Convert.ChangeType(s, type);
items.Add(item);
}
return items.ToArray();
}
}
This version should work with any type that is convertible from string.
[TypeConverter(typeof(ArrayConverter<int>))]
public int[] Ints { get; set; }
[TypeConverter(typeof(ArrayConverter<long>))]
public long[] Longs { get; set; }
[TypeConverter(typeof(ArrayConverter<DateTime>))]
public DateTime[] DateTimes { get; set; }
A: Implement a type converter; here is one (warning: quick & dirty, not for production use, etc.):
public class IntArrayConverter : System.ComponentModel.TypeConverter
{
public override bool CanConvertFrom(System.ComponentModel.ITypeDescriptorContext context, Type sourceType)
{
return sourceType == typeof(string);
}
public override object ConvertFrom(System.ComponentModel.ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
{
string val = value as string;
string[] vals = val.Split(',');
System.Collections.Generic.List<int> ints = new System.Collections.Generic.List<int>();
foreach (string s in vals)
ints.Add(Convert.ToInt32(s));
return ints.ToArray();
}
}
and tag the property of your control:
private int[] ints;
[TypeConverter(typeof(IntArrayConverter))]
public int[] Ints
{
get { return this.ints; }
set { this.ints = value; }
}
A: Have you tried looking into Type Converters? This page looks worth a look: http://www.codeguru.com/columns/VB/article.php/c6529/
Also, Spring.Net seems to have a StringArrayConverter (http://www.springframework.net/doc-latest/reference/html/objects-misc.html - section 6.4) which, if you can feed it to ASP.net by decorating the property with a TypeConverter attribute, might work.
A: You could also do something like this:
namespace InternalArray
{
/// <summary>
/// Item for setting value specifically
/// </summary>
public class ArrayItem
{
public int Value { get; set; }
}
public class CustomUserControl : UserControl
{
private List<int> Ints { get { return this.ItemsToList(); } }
/// <summary>
/// set our values explicitly
/// </summary>
[PersistenceMode(PersistenceMode.InnerProperty), TemplateContainer(typeof(List<ArrayItem>))]
public List<ArrayItem> Values { get; set; }
/// <summary>
/// Converts our ArrayItem into a List<int>
/// </summary>
/// <returns></returns>
private List<int> ItemsToList()
{
return (from q in this.Values
select q.Value).ToList<int>();
}
}
}
which will result in:
<xx:CustomUserControl runat="server">
<Values>
<xx:ArrayItem Value="1" />
</Values>
</xx:CustomUserControl>
A: To add child elements that make up your list, you need to have your control set up a certain way:
[ParseChildren(true, "Actions")]
[PersistChildren(false)]
[ToolboxData("<{0}:PageActionManager runat=\"server\" ></PageActionManager>")]
[NonVisualControl]
public class PageActionManager : Control
{
The Actions above is the name of the property the child elements will be in. I use an ArrayList, as I have not tested anything else with it:
private ArrayList _actions = new ArrayList();
public ArrayList Actions
{
get
{
return _actions;
}
}
when your control is initialized it will have the values of the child elements. For those you can make a mini class that just holds ints.
A: To do what Bill was talking about with the list, you just need to create a List property on your user control. Then you can implement it as Bill described.
A: You could add to the page events inside the aspx something like this:
<script runat="server">
protected void Page_Load(object sender, EventArgs e)
{
YourUserControlID.myintarray = new Int32[] { 1, 2, 3 };
}
</script>
A: You can implement a type converter class that converts between int array and string data types.
Then decorate your int array property with the TypeConverterAttribute, specifying the class that you implemented. Visual Studio will then use your type converter for type conversions on your property.
A: If you use DataBinding on one of the parent Controls, you can use a DataBinding Expression:
<uc1:mycontrol runat="server" myintarray="<%# new [] {1, 2, 3} %>" />
With a custom expression builder, you can do something similar. The expression builder:
[ExpressionPrefix("Code")]
public class CodeExpressionBuilder : ExpressionBuilder
{
public override CodeExpression GetCodeExpression(System.Web.UI.BoundPropertyEntry entry, object parsedData, ExpressionBuilderContext context)
{
return new CodeSnippetExpression(entry.Expression.Trim());
}
}
Usage:
<uc1:mycontrol runat="server" myintarray="<%$ Code: new [] {1, 2, 3} %>" />
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Scrape multi-frame website I'm auditing our existing web application, which makes heavy use of HTML frames. I would like to download all of the HTML in each frame. Is there a method of doing this with wget or a little bit of scripting?
A: as an addition to Steve's answer:
Span to any host—‘-H’
The ‘-H’ option turns on host spanning, thus allowing Wget's recursive run to visit any host referenced by a link. Unless sufficient recursion-limiting criteria are applied depth, these foreign hosts will typically link to yet more hosts, and so on until Wget ends up sucking up much more data than you have intended.
Limit spanning to certain domains—‘-D’
The ‘-D’ option allows you to specify the domains that will be followed, thus limiting the recursion only to the hosts that belong to these domains. Obviously, this makes sense only in conjunction with ‘-H’.
A typical example would be downloading the contents of ‘www.server.com’, but allowing downloads from ‘images.server.com’, etc.:
wget -rH -Dserver.com http://www.server.com/
You can specify more than one address by separating them with a comma,
e.g. ‘-Ddomain1.com,domain2.com’.
taken from: wget manual
A: wget --recursive --domains=www.mysite.com http://www.mysite.com
Which indicates a recursive crawl should also traverse into frames and iframes. Be careful to limit the scope of recursion only to your web site since you probably don't want to crawl the whole web.
A: wget has a -r option to make it recursive, try wget -r -l1 (in case the font makes it hard to read: that last part is a lower case L followed by a number one)
The -l1 part tells it to recurse to a maximum depth of 1. Try playing with this number to scrape more.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Regular Expression to exclude set of Keywords I want an expression that will fail when it encounters words such as "boon.ini" and "http". The goal would be to take this expression and be able to construct one for any set of keywords.
A: ^(?:(?!boon\.ini|http).)*$\r?\n?
(taken from RegexBuddy's library) will match any line that does not contain boon.ini and/or http. Is that what you wanted?
A: You might be well served by writing a regex that will succeed when it encounters the words you're looking for, and then inverting the condition.
For instance, in perl you'd use:
if (!/boon\.ini|http/) {
# the string passed!
}
A: ^[^£]*$
The above expression will exclude only the pound symbol from the string; it will allow all characters except that one.
A: An alternative expression that could be used:
^(?!.*IgnoreMe).*$
^ = indicates start of line
$ = indicates the end of the line
(?! Expression) = indicates zero width look ahead negative match on the expression
The ^ at the front is needed, otherwise when evaluated the negative look ahead could start from somewhere within/beyond the 'IgnoreMe' text - and make a match where you don't want it too.
e.g. If you use the regex:
(?!.*IgnoreMe).*$
With the input "Hello IgnoreMe Please", this will result in something like: "gnoreMe Please" as the negative look ahead finds that there is no complete string 'IgnoreMe' after the 'I'.
A: Rather than negating the result within the expression, you should do it in your code. That way, the expression becomes pretty simple.
\b(boon\.ini|http)\b
Would return true if boon.ini or http was anywhere in your string. It won't match words like httpd or httpxyzzy because of the \b, or word boundaries. If you want, you could just remove them and it will match those too. To add more keywords, just add more pipes.
\b(boon\.ini|http|foo|bar)\b
A: Which language/regexp library? I thought your question was about ASP.NET, in which case you can see the "negative lookahead" section of this article:
http://msdn.microsoft.com/en-us/library/ms972966.aspx
Strictly speaking, the negation of a regular expression still defines a regular language, but there are very few libraries/languages/tools that allow you to express it.
Negative lookahead may serve you the same way, but the actual syntax depends on what you are using. Tim's answer is an example with (?...)
A: I used this (based on Tim Pietzcker answer) to exclude non-production subdomain URLs for Google Analytics profile filters:
^\w+-*\w*\.(?!(?:alpha(123)*\.|beta(123)*\.|preprod\.)domain\.com).*$
You can see the context here: Regex to Exclude Multiple Words
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
}
|
Q: What's the best way to get total # of records in a mysql table with php? What's the most efficient way of selecting the total number of records from a large table? Currently, I'm simply doing
$result = mysql_query("SELECT id FROM table");
$total = mysql_num_rows($result);
I was told this was not very efficient or fast if you have a lot of records in the table.
A: You should use SQL's built-in COUNT function:
$result = mysql_query("SELECT COUNT(id) FROM table");
A: MyISAM tables already store the row count
SELECT COUNT(*) FROM table
on a MyISAM table simply reads that value. It doesn't scan the table or the index(es). So, it's just as fast or faster than reading the value from a different table.
A: You were told correctly. mysql can do this count for you which is much more efficient.
$result = mysql_query( "select count(id) as num_rows from table" );
$row = mysql_fetch_object( $result );
$total = $row->num_rows;
A: According to the MySQL documentation this is most efficient if you're using a MyISAM table (which is the most usual type of tables used):
$result = mysql_query("SELECT COUNT(*) FROM table");
Otherwise you should do as Wayne stated and be sure that the counted column is indexed.
A: Can I just add that the most "efficient" way of getting the total number of records, particularly in a large table, is to save the total amount as a number in another table.
That way, you don't have to query the entire table everytime you want to get the total.
You will however, have to set up some code or Triggers in the database to increase or decrease that number when a row is added/deleted.
So it's not the easiest way, but if your website grows, you should definitely consider doing that.
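A sketch of the trigger approach, assuming MySQL 5.0+ (where triggers became available) and a hypothetical row_counts table:
CREATE TABLE row_counts (
    table_name VARCHAR(64) PRIMARY KEY,
    total INT NOT NULL DEFAULT 0
);
CREATE TRIGGER trg_count_insert AFTER INSERT ON my_table
FOR EACH ROW UPDATE row_counts SET total = total + 1 WHERE table_name = 'my_table';
CREATE TRIGGER trg_count_delete AFTER DELETE ON my_table
FOR EACH ROW UPDATE row_counts SET total = total - 1 WHERE table_name = 'my_table';
Reading the total is then a primary-key lookup instead of a table scan.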
A: Even though I agree with using the built-in functions, I don't really see any performance difference between mysql_num_rows and count(id). For 25000 results, the performance was practically identical. Just for the record.
A: What about something like this:
$result = mysql_query("SELECT COUNT(id) AS total_things from table");
$row = mysql_fetch_array($result,MYSQL_ASSOC);
$num_results = $row["total_things"];
A: I had a large table (>50 million rows) and it took a long time to count the primary key, so I use the following:
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.tables
WHERE TABLE_SCHEMA = "database";
Replace database with the name of your schema.
A: Just wanted to note that SHOW TABLE STATUS returns a Rows column, though I can't speak to its efficiency. Some light Googling turns up reports of slowness in MySQL 4 over two years ago. Might make for interesting time trials.
Also note the InnoDB caveat regarding inaccurate counts.
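For example (remember that for InnoDB the Rows value is only an estimate):
SHOW TABLE STATUS LIKE 'my_table';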
A: Use an aggregate function. Try the SQL command below:
$num = mysql_query("SELECT COUNT(id) FROM $table");
A: mysql_query() is deprecated. Better use this:
$result = $dbh->query("SELECT id FROM {table_name}");
$total = $result->num_rows;
Using PDO:
$result = $dbh->query("SELECT id FROM {table_name}");
$total = $result->rowCount();
(where '$dbh' = handle of the db connected to)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Is there an "anonymous" generic tag in C#, like '?' in Java? In Java, one can declare a variable parameterised by an "unknown" generic type, which looks like this:
Foo<?> x;
Is there an equivalent construct to this question-mark, in C#?
A: It isn't (quite) true that there is no equivalent in C#. There is no static equivalent that you can use as a type, or call methods on, true enough. For that, use Jorge's answer.
On the other hand, sometimes you need the equivalent idea for reflection, and there is an equivalent there. If you have:
interface IFoo<T>
{
T Bar(T t, int n);
}
you can get a Type that represents IFoo<int> using typeof(IFoo<int>). Less well known, and a partial answer to your question, is that you can also get a Type that represents IFoo<T> using typeof(IFoo<>).
This is useful when you want to use IFoo<T> for some T through reflection and won't know T until runtime.
Type theInterface = typeof(IFoo<>);
Type theSpecificInterface = theInterface.MakeGenericType(typeof(string));
// theSpecificInterface now holds IFoo<string> even though we may not have known we wanted to use string until runtime
// proceed with reflection as normal, make late bound calls / constructions, emit DynamicMethod code, etc.
A: There isn't an equivalent syntax in C#.
A: The short answer is no. There isn't an equivalent feature in C#.
A workaround, from C# from a Java developer's perspective by Dare Obasanjo:
In certain cases, one may need create a method that can operate on data structures containing any type as opposed to those that contain a specific type (e.g. a method to print all the objects in a data structure) while still taking advantage of the benefits of strong typing in generics. The mechanism for specifying this in C# is via a feature called generic type inferencing while in Java this is done using wildcard types. The following code samples show how both approaches lead to the same result.
C# Code
using System;
using System.Collections;
using System.Collections.Generic;
class Test{
//Prints the contents of any generic Stack by
//using generic type inference
public static void PrintStackContents<T>(Stack<T> s){
while(s.Count != 0){
Console.WriteLine(s.Pop());
}
}
public static void Main(String[] args){
Stack<int> s2 = new Stack<int>();
s2.Push(4);
s2.Push(5);
s2.Push(6);
PrintStackContents(s2);
Stack<string> s1 = new Stack<string>();
s1.Push("One");
s1.Push("Two");
s1.Push("Three");
PrintStackContents(s1);
}
}
Java Code
import java.util.*;
class Test{
//Prints the contents of any generic Stack by
//specifying wildcard type
public static void PrintStackContents(Stack<?> s){
while(!s.empty()){
System.out.println(s.pop());
}
}
public static void main(String[] args){
Stack <Integer> s2 = new Stack <Integer>();
s2.push(4);
s2.push(5);
s2.push(6);
PrintStackContents(s2);
Stack<String> s1 = new Stack<String>();
s1.push("One");
s1.push("Two");
s1.push("Three");
PrintStackContents(s1);
}
}
A: AFAIK you cannot do that in C#. What the BCL does, and there are plenty of examples of this, is create a class that is not generic and then create a generic class that inherits the base behavior from the previous one. See the example below.
class Foo
{
}
class Foo<T> : Foo
{
}
Then you can write something like this:
Foo t = new Foo<int>();
A: No, there isn't really the same concept in C#. You would need to refer to a base class of Foo<T> (maybe a non-generic Foo), or make the method you're working in generic itself (so that you can refer to Foo<T>, and let the caller of your method determine what T is).
Hope that helps.
A: While admittedly not the cleanest approach, using Foo<object> x may also be suitable.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: Is there a way of preventing other ReportViewers on the same webpage from refreshing? Currently I am working on a web page that has six ReportViewer controls that are using remote processing and that allow for drill-down. When a user clicks on one control the entire page refreshes and the other five reports all refresh as well. Users are currently requesting that the refreshing of the other controls be removed in favor of only the one they click on refreshing. Is there a way of doing this? I've noticed that in SharePoint clicking a drill-down report does not require the entire page to be reloaded, and I'm wondering if I can provide the same functionality.
A: I have been doing some more research on this issue and it looks like the AsyncRendering property of the ReportViewer control controls the functionality that I'm looking for. When it is set to "false" it prevents the "Report is Being Generated" message from being displayed which is what the users were commenting on. The downside is that the page can take a bit longer to load than before, but as we are working on a development machine this might not be as noticeable once we move to the actual production box.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Getting started with rails? Must have gems? I'm starting work on a project using Rails, but I'm waiting for the 3rd edition of the pragmatic rails book to come out before I purchase a book.
Anyway, my question is a bit more pointed than how do I get started...
What are some of the must have gems that everyone uses?
I need basic authentication, so I have the restful authentication gem, but beyond that, I don't know what I don't know. Is there a run down of this information somewhere? Some basic setup that 99% of the people start with when starting a new rails application?
Thanks in advance.
A: For pagination, will_paginate.
A: This is very, very subjective because it all depends on what your application does! However, I've just had a look at the Gems I have installed and the one that absolutely does leap out as mandatory is Capistrano.
BTW Restful Authentication is a Rails plugin not a Gem.
A: HAML is a must have. You'll never think of HTML in the same way again -- No more tag soup.
A: The gems and plugins that I tend to use on most of my projects are:
*
*Restful Authentication -- For authentication
*Will Paginate -- For pagination
*Attachment Fu -- For image and file attachments
*RedCloth -- For textile rendering
*Capistrano -- For deployment
A: The only gems you need are:
*
*Rails
*Rake
If you "gem install rails" you'll get everything you need for Rails. You only need gems when you need them, so it's not worth worrying about before then.
EDIT: Actually there are a couple more you'll probably need:
*
*mysql - or whatever Ruby database driver you need
*mongrel - you don't necessarily need this until production, but it's nice to use in dev/test too
*ZenTest - I use this mainly for "autotest" so that my tests run in a console window whenever my source files change
There could be many other gems that help you but we'd need more info from you to know if they're applicable, eg:
*
*Web scraping (hpricot)
*CSV (fastercsv)
*Amazon S3 support (aws-s3)
*Image manipulation (rmagick)
*Graphing (gruff) - I use this as a plugin
*Role-based security (role_requirement) - This one is a plugin too
A: *
*sudo gem install haml
*sudo gem install ZenTest
*rspec on rails
A: How can nobody have mentioned andand yet? It's the best thing since ||=
A: mini_magick instead of rmagick.
A: Might want to keep an eye on: http://rubygems.org/ - you can see some interesting stats there re: most downloaded, most active, etc...
Also interesting and somewhat telling: https://github.com/languages/Ruby
A: This is an old thread, but I thought I'd refine the list with what I believe to be must-have gems at this point in time:
*
*RSpec or Shoulda - tools for BDD/testing
*factory_girl - fixture replacement
*will_paginate - simple pagination
*paperclip - image uploading/attachment
*CanCan - authorization
*Authlogic - authentication
*HAML - templating engine
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: What’s the current state of closures in Java? Does anybody know if closures will be in Java 7?
A: There are currently several competing proposals, BGGA, CICE, among others. Unfortunately, a heated debate remains over the best approach. As a result it is unlikely at this point that closures will make it into Java 7, due to the conservative nature of the acceptance process.
The key problem here is that it can be very difficult to add features to a pre-existing language, without inadvertently introducing significant complexity. This was the experience with Generics in Java 1.5, and many are concerned that it would be compounded with the introduction of closures.
My advice is that if you really want to have access to modern language features like closures but wish to stay within the Java ecosystem, you should take a look at Scala.
A: Groovy is the best Java alternative I've seen that includes features of dynamic languages including closures, run-time class extension, etc. While Ruby has a slight design advantage imho, I'd have to say the fact that Groovy compiles into Java byte-code and interacts with Java without ANY interface code is a huge plus that can't be ignored.
http://groovy.codehaus.org
A: Apparently Closures will not be in Java 7.
See this
and this.
A: It is unknown until the Java SE 7 JSR is created (presumably by Danny Coward) and an expert group is formed and the contents selected.
My Java 7 page is a good collection of links about Java 7 in general and has links to all of the closures proposals and blog entries:
http://tech.puredanger.com/java7#closures
And I maintain a Java 7 link blog where you can find links on closures and other stuff at:
http://java7.tumblr.com
And you might find my Java 7 Predictions blog post to be interesting as well if you want my opinions:
http://tech.puredanger.com/2008/08/02/java7-prediction-update/
UPDATE: Mark Reinhold stated at Devoxx in Dec. 08 that closures will NOT be included in Java 7 due to a lack of consensus on how to implement.
A: At Devoxx 2008, Mark Reinhold made it clear that closures will not be included in Java 7.
Wait! Closures will be included in Java 7. Mark Reinhold announced this reversal at Devoxx 2009.
Belay that! Closures (lambda expressions) have been deferred until Java 8. Follow Project Lambda (JSR 335) for more information.
A: Closures definitely won't be present in Java 7, but if you are looking for a lighter solution to have closures in Java right now, check out how they have been implemented in the lambdaj library:
http://code.google.com/p/lambdaj/wiki/Closures
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Pulling in a dynamic image in a control based on a url using C# and ASP.net I know this is a dumb question. For some reason my mind is blank on this. Any ideas?
Sorry should have been more clear.
Using a HtmlGenericControl to pull in link description as well as image.
private void InternalCreateChildControls()
{
if (this.DataItem != null && this.Relationships.Count > 0)
{
HtmlGenericControl fieldset = new HtmlGenericControl("fieldset");
this.Controls.Add(fieldset);
HtmlGenericControl legend = new HtmlGenericControl("legend");
legend.InnerText = this.Caption;
fieldset.Controls.Add(legend);
HtmlGenericControl listControl = new HtmlGenericControl("ul");
fieldset.Controls.Add(listControl);
for (int i = 0; i < this.Relationships.Count; i++)
{
CatalogRelationshipsDataSet.CatalogRelationship relationship =
this.Relationships[i];
HtmlGenericControl listItem = new HtmlGenericControl("li");
listControl.Controls.Add(listItem);
RelatedItemsContainer container = new RelatedItemsContainer(relationship);
listItem.Controls.Add(container);
Image Image = new Image();
Image.ImageUrl = relationship.DisplayName;
LinkButton link = new LinkButton();
link.Text = relationship.DisplayName;
///ToDO Add Image or Image and description
link.CommandName = "Redirect";
container.Controls.Add(link);
}
}
}
Not asking anyone to do this for me just a reference or an idea.
Thanks -overly frustrated and feeling humbled.
A: I'm assuming you want to generate an image dynamically based upon a URL.
What I typically do is create a very lightweight HTTPHandler to serve the images:
using System;
using System.Web;
namespace Example
{
public class GetImage : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
if (context.Request.QueryString["id"] != null)
{
// Code that uses System.Drawing to construct the image
// ...
context.Response.ContentType = "image/jpeg";
context.Response.BinaryWrite(Image);
context.Response.End();
}
}
public bool IsReusable
{
get
{
return false;
}
}
}
}
You can reference this directly in your img tag:
<img src="GetImage.ashx?id=111"/>
Or, you could even create a server control that does it for you:
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace Example.WebControl
{
[ToolboxData("<{0}:DynamicImageCreator runat=server></{0}:DynamicImageCreator>")]
public class DynamicImageCreator : Control
{
public int Id
{
get
{
if (ViewState["Id" + this.ID] == null)
return 0;
else
return (int)ViewState["Id" + this.ID];
}
set
{
ViewState["Id" + this.ID] = value;
}
}
protected override void RenderContents(HtmlTextWriter output)
{
output.Write("<img src='getImage.ashx?id=" + this.Id + "'/>");
base.RenderContents(output);
}
}
}
This could be used like
<cc:DynamicImageCreator id="db1" Id="123" runat="server" />
A: Check out the new DynamicImage control released in CodePlex by the ASP.NET team.
A: This is kind of a horrible question. I mean, .NET has an image control where you can set the source to anything you want. I'm not sure what you're wanting to be discussed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you set directory permissions in NSIS? I'm trying to build a Windows installer using Nullsoft Install System that requires installation by an Administrator. The installer makes a "logs" directory. Since regular users can run this application, that directory needs to be writable by regular users. How do I specify that all users should have permission to have write access to that directory in the NSIS script language?
I admit that this sounds a like a sort of bad idea, but the application is just an internal app used by only a few people on a private network. I just need the log files saved so that I can see why the app is broken if something bad happens. The users can't be made administrator.
A: It's an old issue now but as suggested by Sören APPDATA directory is a nice way to do what you want, the thing is :
Don't take user's personnal APPDATA but the "All Users" APPDATA dir!
This way anyone will be able to access the log file ;-)
Also, I read somewhere that using (BU) in GrantOnFile does not work well on some systems (Win 7 x64 if I remember well); maybe you should use the SID "(S-1-5-32-545)" instead (it's the built-in Users group's SID, and the value is a constant on every Windows OS).
A: One way: call the shell, and use cacls or xcacls.
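For example, from an NSIS script (a sketch; cacls has since been superseded by icacls, and the "Users" group name is locale-dependent):
ExecWait 'cacls "$INSTDIR\logs" /E /T /G Users:F'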
A: Use the AccessControl plugin and then add this to the script, where the "logs" directory is in the install directory.
AccessControl::GrantOnFile "$INSTDIR\logs" "(BU)" "FullAccess"
That gives full access to the folder for all users.
A: Why not create a log-directory in the user's %APPDATA% directory? Do you really need to put all the logs in the install directory? Why?
A: AccessControl::GrantOnFile "<folder>" "(BU)" "FullAccess" didn't work for me on a Windows Server 2008 machine. Instead I had to use this one:
AccessControl::GrantOnFile "<folder>" "(S-1-5-32-545)" "FullAccess"
S-1-5-32-545 is equivalent to "Users" according to Microsoft Support: Well-known security identifiers in Windows operating systems.
A: Instead of changing the permissions on directories under Program Files, why not put the logs in a location that is writeable by all users.
See the 4.9.7.7 SetShellVarContext section in your NSIS documentation. You can use it with $APPDATA to get the application data folder that is writeable for all users.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: Rails Tests Fail With Sqlite3 I seem to be getting a strange error when I run my tests in rails, they are all failing for the same reason and none of the online documentation seems particularly helpful in regards to this particular error:
SQLite3::SQLException: cannot rollback - no transaction is active
This error is crippling my ability to test my application and seems to have appeared suddenly. I have the latest version of sqlite3 (3.6.2), the latest sqlite3-ruby (1.2.4) gem and the latest rails (2.1.1).
A: Check http://dev.rubyonrails.org/ticket/4403 which shows a workaround. Could that be the problem you are encountering?
A: I had this problem once, but with MySQL. Turned out I hadn't created the test database. Doh! Rails and sqlite create them automatically, I believe (at least it does on Windows).
Are you trying to do in-memory testing? If not, does the test database exist?
A: Thanks for the help. I actually ended up just deleting the rails folder and checking back out the last working copy from version control. I've made the identical changes and this problem hasn't reappeared, so either I messed up or rails had some sort of hiccup. Thankfully I had version control :-)
A: I got this error when running a test with the last statement being a click on a form submit. Once I did an assertion or should test, the test closed properly, and I didn't have to rerun the rake db:test:prepare
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Like in CASE statement not evaluating as expected Given this data:
CREATE TABLE tmpTable(
fldField varchar(10) null);
INSERT INTO tmpTable
SELECT 'XXX'
UNION ALL
SELECT 'XXX'
UNION ALL
SELECT 'ZZZ'
UNION ALL
SELECT 'ZZZ'
UNION ALL
SELECT 'YYY'
SELECT
CASE WHEN fldField like 'YYY' THEN 'OTH' ELSE 'XXX' END AS newField
FROM tmpTable
The expected resultset is:
XXX
XXX
XXX
XXX
OTH
What situation would casue SQL server 2000 to NOT find 'YYY'? And return the following as the resultset:
XXX
XXX
XXX
XXX
XXX
The problem is with the like 'YYY', I have found other ways to write this to get it to work, but I want to know why this exact method doesn't work. Another difficulty is that it works in most of my SQL Server 2000 environments. I need to find out what is different between them to cause this. Thanks for your help.
A: I ran the code on a SQL 2000 box and got identical results. Not only that, but when I ran some additional code to test I got some VERY bizarre results:
CREATE TABLE dbo.TestLike ( my_field varchar(10) null);
GO
CREATE CLUSTERED INDEX IDX_TestLike ON dbo.TestLike (my_field)
GO
INSERT INTO dbo.TestLike (my_field) VALUES ('XXX')
INSERT INTO dbo.TestLike (my_field) VALUES ('XXX')
INSERT INTO dbo.TestLike (my_field) VALUES ('ZZZ')
INSERT INTO dbo.TestLike (my_field) VALUES ('ZZZ')
INSERT INTO dbo.TestLike (my_field) VALUES ('YYY')
GO
SELECT
my_field,
case my_field when 'YYY' THEN 'Y' ELSE 'N' END AS C2,
case when my_field like 'YYY' THEN 'Y' ELSE 'N' END AS C3,
my_field
FROM dbo.TestLike
GO
My results:
my_field C2 C3 my_field
---------- ---- ---- ----------
N XXX N XXX
N XXX N XXX
Y YYY N YYY
N ZZZ N ZZZ
N ZZZ N ZZZ
Notice how my_field has two different values in the same row? I've asked some others at the office here to give it a quick test. Looks like a bug to me.
A: Check your service pack. After upgrading my SQL 2000 box to SP4 I now get the correct values for your situation.
I'm still getting the swapped data that I reported in my earlier post though :(
If you do SELECT @@version you should get 8.00.2039. Any version number less than that and you should install SP4.
A: How about fldField = '%YYY%'?
A: It worked as expected on my SQL 2005 installation. If it works on other machines, it sounds like you've got an environment difference. Try comparing your connection properties in SQL Server Management Studio for a connection that works and one that doesn't to see if you can figure out what the differences are.
A: I am an Oracle person, not a SQL*Server person, but it seems to me you should be either:-
SELECT
CASE WHEN fldField like '%YYY%' THEN
'OTH'
ELSE 'XXX'
END AS newField
FROM
tmpTable
or ...
SELECT
CASE WHEN fldField = 'YYY' THEN
'OTH'
ELSE 'XXX'
END AS newField
FROM
tmpTable
The second is the direction I'd go in, as at least in Oracle equality resolves quicker than like.
A: When you use LIKE without any wildcard characters, it behaves like an = comparison. In your example, I would expect it to work properly. In your real data, you probably have a hidden (non-printable) character in your data (think about Carriage Return, Line Feed, Tab, etc.).
Take a look at this example...
Declare @tmpTable TABLE(
fldField varchar(10) null);
INSERT INTO @tmpTable
SELECT 'XXX'
UNION ALL
SELECT 'XXX'
UNION ALL
SELECT 'ZZZ'
UNION ALL
SELECT 'ZZZ'
UNION ALL
SELECT 'YYY'
UNION ALL
SELECT 'YYY' + Char(10)
SELECT CASE WHEN fldField like 'YYY' THEN 'OTH' ELSE 'XXX' END AS YourOriginalTest,
CASE WHEN fldField like 'YYY%' THEN 'OTH' ELSE 'XXX' END AS newField
FROM @tmpTable
You'll notice that the last piece of data I added is YYY and a Line Feed. If you select this data, you won't notice the line feed in the data, but it's there, so your LIKE condition (which is acting like an equal condition) doesn't match.
The common 'hidden' characters are Tab, Carriage Return, and Line Feed. To determine if this is causing your problem...
Select *
From Table
Where Column Like '%[' + Char(10) + Char(9) + Char(13) + ']%'
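Once you've confirmed they exist, a hedged cleanup sketch (reusing the question's table and column; adjust the names to your real schema) is to strip those characters before comparing:
UPDATE tmpTable
SET fldField = REPLACE(REPLACE(REPLACE(fldField, CHAR(9), ''), CHAR(10), ''), CHAR(13), '')
WHERE fldField LIKE '%[' + CHAR(10) + CHAR(9) + CHAR(13) + ']%'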
A: What a cute bug. I think I know the cause. If I'm right, then you'll get the results you expect from:
SELECT
CASE
WHEN fldField like 'YYY       ' -- 'YYY' padded with 7 spaces to fill varchar(10)
THEN 'OTH'
ELSE 'XXX'
END as newField
from tmpTable
The bug is that varchar(10) is behaving like char(10) is supposed to. As for why it doesn't, you'll need to understand the old trivia question of how two strings with no metacharacters can be = but not LIKE each other.
The issue is that a char(10) is internally supposed to be space padded. The like operator does not ignore those spaces. The = operator is supposed to in the case of chars. Memory tells me that Oracle ignores spaces for strings in general. Postgres does some tricks with casting. I have not used SQL*Server so I can't tell you how it does it.
A: By adding wildcards (%) to the expression, it will work fine:
SELECT
CASE
WHEN fldField LIKE '%YYY%' THEN 'OTH'
ELSE 'XXX'
END AS newField
FROM tmpTable
A: You aren't specifying what you are selecting and checking the CASE against...
SELECT CASE fldField WHEN 'YYY'
THEN 'OTH' ELSE 'XXX' END AS newField FROM tmpTable
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Best practices for manipulating database result sets in Python? I am writing a simple Python web application that consists of several pages of business data formatted for the iPhone. I'm comfortable programming Python, but I'm not very familiar with Python "idiom," especially regarding classes and objects. Python's object oriented design differs somewhat from other languages I've worked with. So, even though my application is working, I'm curious whether there is a better way to accomplish my goals.
Specifics: How does one typically implement the request-transform-render database workflow in Python? Currently, I am using pyodbc to fetch data, copying the results into attributes on an object, performing some calculations and merges using a list of these objects, then rendering the output from the list of objects. (Sample code below, SQL queries redacted.) Is this sane? Is there a better way? Are there any specific "gotchas" I've stumbled into in my relative ignorance of Python? I'm particularly concerned about how I've implemented the list of rows using the empty "Record" class.
class Record(object):
pass
def calculate_pnl(records, node_prices):
for record in records:
try:
# fill RT and DA prices from the hash retrieved above
if hasattr(record, 'sink') and record.sink:
record.da = node_prices[record.sink][0] - node_prices[record.id][0]
record.rt = node_prices[record.sink][1] - node_prices[record.id][1]
else:
record.da = node_prices[record.id][0]
record.rt = node_prices[record.id][1]
# calculate dependent values: RT-DA and PNL
record.rtda = record.rt - record.da
record.pnl = record.rtda * record.mw
except:
print sys.exc_info()
def map_rows(cursor, mappings, callback=None):
records = []
for row in cursor:
record = Record()
for field, attr in mappings.iteritems():
setattr(record, attr, getattr(row, field, None))
if not callback or callback(record):
records.append(record)
return records
def get_positions(cursor):
# get the latest position time
cursor.execute("SELECT latest data time")
time = cursor.fetchone().time
hour = eelib.util.get_hour_ending(time)
# fetch the current positions
cursor.execute("SELECT stuff FROM atable", (hour))
# read the rows
nodes = {}
def record_callback(record):
if abs(record.mw) > 0:
if record.id: nodes[record.id] = None
return True
else:
return False
records = util.map_rows(cursor, {
'id': 'id',
'name': 'name',
'mw': 'mw'
}, record_callback)
# query prices
for node_id in nodes:
# RT price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, time, time)).fetchone()
rt5 = row.lmp if row else None
# DA price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, hour, hour)).fetchone()
da = row.da_lmp if row else None
# update the hash value
nodes[node_id] = (da, rt5)
# calculate the position pricing
calculate_pnl(records, nodes)
# sort
records.sort(key=lambda r: r.name)
# return the records
return records
A: The empty Record class and the free-floating function that (generally) applies to an individual Record are a hint that you haven't designed your class properly.
class Record( object ):
"""Assuming rtda and pnl must exist."""
def __init__( self ):
self.da= 0
self.rt= 0
self.rtda= 0 # or whatever
self.pnl= None #
self.sink = None # Not clear what this is
    def setPnl( self, node_prices ):
        # (body mirrors the question's calculate_pnl)
        # fill RT and DA prices from the hash retrieved above
        if self.sink:
            self.da = node_prices[self.sink][0] - node_prices[self.id][0]
            self.rt = node_prices[self.sink][1] - node_prices[self.id][1]
        else:
            self.da = node_prices[self.id][0]
            self.rt = node_prices[self.id][1]
        # calculate dependent values: RT-DA and PNL
        self.rtda = self.rt - self.da
        self.pnl = self.rtda * self.mw
Now, your calculate_pnl( records, node_prices ) is simpler and uses the object properly.
def calculate_pnl( records, node_prices ):
for record in records:
record.setPnl( node_prices )
The point isn't to trivially refactor the code in small ways.
The point is this: A Class Encapsulates Responsibility.
Yes, an empty-looking class is usually a problem. It means the responsibilities are scattered somewhere else.
A similar analysis holds for the collection of records. This is more than a simple list, since the collection -- as a whole -- has operations it performs.
The "Request-Transform-Render" isn't quite right. You have a Model (the Record class). Instances of the Model get built (possibly because of a Request.) The Model objects are responsible for their own state transformations and updates. Perhaps they get displayed (or rendered) by some object that examines their state.
It's that "Transform" step that often violates good design by scattering responsibility all over the place. "Transform" is a hold-over from non-object design, where responsibility was a nebulous concept.
A: Have you considered using an ORM? SQLAlchemy is pretty good, and Elixir makes it beautiful. It can really reduce the amount of boilerplate code needed to deal with databases. Also, a lot of the gotchas mentioned have already shown up and the SQLAlchemy developers dealt with them.
A: Depending on how much you want to do with the data you may not need to populate an intermediate object. The cursor's header data structure will let you get the column names - a bit of introspection will let you make a dictionary with col-name:value pairs for the row.
You can pass the dictionary to the % operator. The docs for the odbc module will explain how to get at the column metadata.
This snippet of code shows the application of the % operator in this manner.
>>> a={'col1': 'foo', 'col2': 'bar', 'col3': 'wibble'}
>>> 'Col1=%(col1)s, Col2=%(col2)s, Col3=%(col3)s' % a
'Col1=foo, Col2=bar, Col3=wibble'
>>>
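As a minimal sketch of that introspection (assuming a standard DB-API cursor that has already executed a query; rows_as_dicts is a hypothetical helper name):
def rows_as_dicts(cursor):
    # column names are the first element of each cursor.description tuple
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

# usage with the % operator:
# print 'Name=%(name)s, MW=%(mw)s' % rows_as_dicts(cursor)[0]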
A: Using an ORM for an iPhone app might be a bad idea because of performance issues; you want your code to be as fast as possible, so you can't avoid some boilerplate code. If you are considering an ORM, besides SQLAlchemy I'd recommend Storm.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Visual Studio: How to break on handled exceptions? I would like Visual Studio to break when a handled exception happens (i.e. I don't just want to see a "First chance" message, I want to debug the actual exception).
e.g. I want the debugger to break at the exception:
try
{
System.IO.File.Delete(someFilename);
}
catch (Exception)
{
//we really don't care at runtime if the file couldn't be deleted
}
I came across these notes for Visual Studio.NET:
1) In VS.NET go to the Debug Menu >>
"Exceptions..." >> "Common Language
Runtime Exceptions" >> "System" and
select "System.NullReferenceException"
2) In the bottom of that dialog there
is a "When the exception is thrown:"
group box, select "Break into the
debugger"
3) Run your scenario. When the
exception is thrown, the debugger will
stop and notify you with a dialog that
says something like:
"An exception of type "System.NullReferenceException" has
been thrown.
[Break] [Continue]"
Hit [Break]. This will put you on the
line of code that's causing the
problem.
But they do not apply to Visual Studio 2005 (there is no Exceptions option on the Debug menu).
Does anyone know where the find this options dialog in Visual Studio that the "When the exception is thrown" group box, with the option to "Break into the debugger"?
Update: The problem was that my Debug menu didn't have an Exceptions item. I customized the menu to manually add it.
A: There is an 'exceptions' window in VS2005 ... try Ctrl+Alt+E when debugging and click on the 'Thrown' checkbox for the exception you want to stop on.
A: Took me a while to find the new place for exception settings, therefore a new answer.
Since Visual Studio 2015 you control which Exceptions to stop on in the Exception Settings Window (Debug->Windows->Exception Settings). The shortcut is still Ctrl-Alt-E.
The simplest way to handle custom exceptions is selecting "all exceptions not in this list".
Here is a screenshot from the English version:
Here is a screenshot from the German version:
A: From Visual Studio 2015 and onward, you need to go to the "Exception Settings" dialog (Ctrl+Alt+E) and check off the "Common Language Runtime Exceptions" (or a specific one you want i.e. ArgumentNullException) to make it break on handled exceptions.
Step 1
Step 2
A: With a solution open, go to the Debug - Windows - Exception Settings (Ctrl+Alt+E) menu option. From there you can choose to break on Thrown or User-unhandled exceptions.
EDIT: My instance is set up with the C# "profile"; perhaps it isn't there for other profiles?
A: A technique I use is something like the following. Define a global variable that you can use for one or multiple try catch blocks depending on what you're trying to debug and use the following structure:
if(!GlobalTestingBool)
{
try
{
SomeErrorProneMethod();
}
catch (...)
{
// ... Error handling ...
}
}
else
{
SomeErrorProneMethod();
}
I find this gives me a bit more flexibility in terms of testing because there are still some exceptions I don't want the IDE to break on.
A: Check Managing Exceptions with the Debugger page, it explains how to set this up.
Essentially, here are the steps (during debugging):
*
*On the Debug menu, click Exceptions.
*In the Exceptions dialog box, select Thrown for an entire category of exceptions, for example, Common Language Runtime Exceptions.
-or-
Expand the node for a category of exceptions, for example, Common Language Runtime Exceptions, and select Thrown for a specific exception within that category.
A: The online documentation seems a little unclear, so I just performed a little test. Choosing to break on Thrown from the Exceptions dialog box causes the program execution to break on any exception, handled or unhandled. If you want to break on handled exceptions only, it seems your only recourse is to go through your code and put breakpoints on all your handled exceptions. This seems a little excessive, so it might be better to add a debug statement whenever you handle an exception. Then when you see that output, you can set a breakpoint at that line in the code.
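One hedged sketch of that debug-statement idea, reusing the question's example and breaking programmatically only when a debugger is attached (Debugger.Break and Debug.WriteLine are standard System.Diagnostics calls):
try
{
    System.IO.File.Delete(someFilename);
}
catch (Exception ex)
{
    System.Diagnostics.Debug.WriteLine("Handled exception: " + ex);
    if (System.Diagnostics.Debugger.IsAttached)
        System.Diagnostics.Debugger.Break(); // stops the debugger right in the handler
}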
A: There are some other aspects to this that need to be unpacked. Generally, an app should not throw exceptions unless something exceptional happens.
Microsoft's documentation says:
For conditions that are likely to occur but might trigger an exception, consider handling them in a way that will avoid the exception.
and
A class can provide methods or properties that enable you to avoid making a call that would trigger an exception.
Exceptions degrade performance and disrupt the debugging experience because you should be able to break on all exceptions in any running code.
If you find that your debugging experience is poor because the debugger constantly breaks on pointless exceptions, you may need to detect handled exceptions in your tests. This technique allows you to fail tests when code throws unexpected exceptions.
Here are some helper functions for doing that
// Assumed usings for this sketch (FirstChanceExceptionEventArgs lives in
// System.Runtime.ExceptionServices; AssertFailedException comes from MSTest):
using System;
using System.Collections.Generic;
using System.Runtime.ExceptionServices;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class HandledExceptionGuard
{
public static void DoesntThrowException(Action test,
Func<object?, Exception, bool>? ignoreException = null)
{
var errors = new List<ExceptionInformation>();
EventHandler<FirstChanceExceptionEventArgs> handler = (s, e) =>
{
if (e.Exception is AssertFailedException) return;
if (ignoreException?.Invoke(s, e.Exception) ?? false) return;
errors.Add(new ExceptionInformation(s, e.Exception, AppDomain.CurrentDomain.FriendlyName));
};
AppDomain.CurrentDomain.FirstChanceException += handler;
test();
AppDomain.CurrentDomain.FirstChanceException -= handler;
if (errors.Count > 0)
{
throw new ExceptionAssertionException(errors);
}
}
public async static Task DoesntThrowExceptionAsync(Func<Task> test,
Func<object?, Exception, bool>? ignoreException = null)
{
var errors = new List<ExceptionInformation>();
EventHandler<FirstChanceExceptionEventArgs> handler = (s, e) =>
{
if (e.Exception is AssertFailedException) return;
if (ignoreException?.Invoke(s, e.Exception) ?? false) return;
errors.Add(new ExceptionInformation(s, e.Exception, AppDomain.CurrentDomain.FriendlyName));
};
AppDomain.CurrentDomain.FirstChanceException += handler;
await test();
AppDomain.CurrentDomain.FirstChanceException -= handler;
if (errors.Count > 0)
{
throw new ExceptionAssertionException(errors);
}
}
}
If you wrap any code in these methods as below, the test will fail when a handled exception occurs. You can ignore exceptions with the callback. This validates your code against unwanted handled exceptions.
[TestClass]
public class HandledExceptionTests
{
private static void SyncMethod()
{
try
{
throw new Exception();
}
catch (Exception)
{
}
}
private static async Task AsyncMethod()
{
try
{
await Task.Run(() => throw new Exception());
}
catch (Exception)
{
}
}
[TestMethod]
public void SynchronousTest()
{
HandledExceptionGuard.DoesntThrowException(() => SyncMethod());
}
[TestMethod]
public async Task AsyncTest()
{
await HandledExceptionGuard.DoesntThrowExceptionAsync(() => AsyncMethod());
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "212"
}
|
Q: What must I do to ensure that a web server (Apache) running on a machine is not accessible to the outside world? I would like to use my laptop as a web development (PHP, Python, etc.) machine, but I'm hesitant to do this unless I can ensure that it can not be accessed by the outside world.
I'm guessing that something more than a firewall is necessary, such as configurations to the Apache configuration files, but I'm not sure what else I would need to be 100% sure it's locked down tightly.
A: in the configuration file, change the LISTEN directive to only listen on the loop back address:
Listen 127.0.0.1
A: You need to configure the server daemon to only bind to localhost using the Listen directive like this:
Listen 127.0.0.1
An alternative is to configure access control for the main server like this
<Directory "/var/www/localhost/htdocs">
AllowOverride None
Deny from all
Allow from 127.0.0.1/255.0.0.0
</Directory>
Remember to put the root directory of your server in the Directory Directive.
A: Install a firewall and close all external ports but those who you want to use. If you are using Linux, there are nice frontends for iptables such as firestarter, if you use OS X there is an integrated firewall and Windows has one too. :)
But yes, the Firewall is the way to go. (Or you can tell Apache to listen on 127.0.0.1:80 only)
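For example, on Linux a minimal iptables sketch might look like this (assumes Apache is on port 80; rule order matters, and your distribution's firewall frontend may manage this for you):
iptables -A INPUT -i lo -j ACCEPT            # allow loopback (local) traffic first
iptables -A INPUT -p tcp --dport 80 -j DROP  # then drop external connections to port 80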
A: A firewall should be sufficient. Just make sure that you run Apache on a non-standard port (typically 8080) and make sure your firewall blocks outside access to that port.
A: Firewall should be enough. But you can use the Listen directive as well.
A: A firewall will do just fine. But if you won't settle for just a firewall, you can configure Apache to listen only on your loopback device, or tell it to accept connections only from a set of addresses on your LAN. The first method is easier, but that way you can access the web pages only from the machine Apache is running on.
A: Put a router between you and the internet, and don't forward any ports to your laptop. That way anyone trying to access the laptop hits the router and can't get any further.
You can forward ports to your main machine (or just put the main machine in the DMZ) if you need it to be available to incoming connections.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to use Decision Tables to help your application I learned some time ago about Decision Trees and Decision tables. I feel that Decision Tables can help with conditional If-Then-Else statements. In particular, I feel that Decision Tables have no side-effects, for example, if you didn't notice that you need one more "else if" statement.
But I am not sure how I can implement it. Arrays? Database Tables?
Does anyone even use Decision Tables in their code, nowadays?
A: I would highly recommend chapter 18 of Code Complete.
You could also check this post What Are Table Driven Methods
A: Well, I did my own research :S
*
*This is something from IBM about decision tables used to make testing scenarios
*This is from a company that makes decision tables that are then translated to if-then-else statements in vb.net.
*Open source ruby workflow and bpm engine that uses decision tables.
So, I am still looking. If anyone has some good answers, please enter them in.
A: Multi-platform, CCIDE-0.5.0-6 (or later) is available at SourceForge and Github.
See the web page at http://twysf.users.sourceforge.net/
A: A table-driven method uses data structures instead of if-then statements to drive program logic. For example, if you are processing two types of records (tv versus cable) you might do this:
hash[tv] = processTvRecords
hash[cable] = processCableRecords
In some languages, like Ruby or Perl, this technique is straightforward. In Java, you'd need to use Reflection to find method handles.
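As a hedged sketch of the same idea in Python (the record types and handler functions here are hypothetical):
def process_tv_records(record):
    print 'tv:', record

def process_cable_records(record):
    print 'cable:', record

# the decision table: record type -> handler
handlers = {'tv': process_tv_records, 'cable': process_cable_records}

for record_type, record in [('tv', 1), ('cable', 2)]:
    handlers[record_type](record)  # one table lookup replaces an if/elif chain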
If you want to learn about decision tables, investigate the Fitnesse testing framework at http://fitnesse.org/.
A: By far the best implementation I've seen for decision tables is an application called Prologa, which is available for download at http://www.econ.kuleuven.be/prologa. Unfortunately, it's only available in Windows, and there can be a short delay while you wait for the evaluation key.
The software handles conditions that are non-binary, can collapse similar rules, and actually tracks the number of combinations that your table currently covers so it's great for completeness checks for particularly large tables. Also handles nested tables gracefully (where the result of one table can be the condition of another table).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Performance implications of computed columns in SQL Server 2005 database? The situation: we have a large database with a number of denormalized tables. We frequently have to resummarize the data to keep the summary tables in synch. We've talked on and off about using computed columns to keep the data fresh. We've also talked about triggers, but that's a separate discussion.
In our summary tables, we denormalized the table such that the Standard ID as well as the Standard Description is stored in the table. This inherently assumes that the table will be resummarized often enough so that if they change the standard description, it will also change it in the summary table.
A bad assumption.
Question:
What if we made the Standard Description in the summary table a derived/computed column which selects the standard description from the standard table?
Is there a tremendous performance hit by dropping a computed column on a table with 100,000-500,000 rows?
A: Computed columns are fine when they are not calculation intensive and are not executed on a large number of rows. Your question is "will there be a hit by dropping the computed column." Unless this column is an index that is used by the query (a REALLY bad idea to index a computed column - I don't know if you can, depending on your DB), then dropping it can't hurt your performance (less data to query and crunch).
If the standard table has the description, then you should be joining it in from the id and not using any computation.
You alluded to what may be a real problem, and that is the schema of your database. I have had problems like this before, where a system was built to handle one thing, and something like reporting needs to be bolted on/in. Without refactoring your schema to balance all of the needs, Sunny's idea of using views is just about the only easy way.
If you want to post some cleansed DDL and data, and an example of what you are trying to get out of the db, we may be able to give you a less subjective answer.
A: A computed column in a table can only be derived from values on that row. You can't have a lookup in the computed column. For that you would require a view.
On a table that small, denormalising the name into the table will probably have negligible performance impact. You can use DBCC PINTABLE to hint the server to keep the table in the cache.
If you need the updates to be made in realtime then really your only option is triggers. Putting a clustered index on the ID column corresponding to the name you are updating should reduce the amount of I/O overall (the records for a given ID will be in the same block or set of blocks) so try this if the triggers are causing performance issues.
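To illustrate the view approach mentioned above, a hedged sketch with hypothetical table and column names (the join keeps the description current without resummarizing):
CREATE VIEW dbo.SummaryWithDescription
AS
SELECT s.SummaryID,
       s.StandardID,
       st.StandardDescription -- always reflects the standard table
FROM dbo.SummaryTable s
JOIN dbo.StandardTable st ON st.StandardID = s.StandardID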
A: Just to clarify the issue for the sql2005 and up:
This functionality was introduced for
performance in SQL Server version 6.5.
DBCC PINTABLE has highly unwanted
side-effects. These include the
potential to damage the buffer pool.
DBCC PINTABLE is not required and has
been removed to prevent additional
problems. The syntax for this command
still works but does not affect the
server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Trouble running insert queries with varray variables I'm using SQL*Plus 9.2 on Oracle 10g enterprise. I have created some scripts that do basic inserts using parameters that I pass through the command prompt. It seemed logical that I should be able to run a bunch of inserts in a loop. So I tried the following:
--begin
DECLARE
TYPE va_orgs IS TABLE OF nbr.lien_item.lien_item_name%type;
org va_orgs := va_orgs('RTA','RTB','RTE','RTI','RTM','RTT');
BEGIN
FOR i in org.FIRST .. org.LAST
LOOP
INSERT INTO nbr.lien_item (lien_item_sid, excel_row, include_in_calcs, indent, header_level, sort_order, unit, lien_item_status, lien_item_name) VALUES (nbr.lien_item_seq.nextval, 0, 'Y', 1, 0, 1, 'FTE', 'A', 'org(i)');
COMMIT;
END LOOP;
END;
/
--end
When I run the script, I get a message that the PL/SQL completed successfully. I tried debugging and using dbms_output to display the values of org(i). Everything looks fine. But the rows never get entered into the database. As soon as I do a select, the new rows aren't there. Is there some trick about looping and doing inserts?
(I also tried IS VARRAY(6) OF in place of IS TABLE OF. Same non-result)
A: In your insert statement you have org(i) in single quotes. You shouldn't have that; you are probably inserting the literal string 'org(i)' as the value in the table. So your insert statement should be:
INSERT INTO nbr.lien_item (lien_item_sid, excel_row, include_in_calcs, indent, header_level, sort_order, unit, lien_item_status, lien_item_name) VALUES (nbr.lien_item_seq.nextval, 0, 'Y', 1, 0, 1, 'FTE', 'A', org(i));
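Once the quoting is fixed, a hedged alternative to the row-by-row loop is to bulk-bind the whole collection with FORALL (same table, sequence, and collection as in the question):
DECLARE
  TYPE va_orgs IS TABLE OF nbr.lien_item.lien_item_name%TYPE;
  org va_orgs := va_orgs('RTA','RTB','RTE','RTI','RTM','RTT');
BEGIN
  FORALL i IN org.FIRST .. org.LAST
    INSERT INTO nbr.lien_item (lien_item_sid, excel_row, include_in_calcs, indent, header_level, sort_order, unit, lien_item_status, lien_item_name)
    VALUES (nbr.lien_item_seq.nextval, 0, 'Y', 1, 0, 1, 'FTE', 'A', org(i));
  COMMIT;
END;
/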
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What happens to my app when my Mac goes to sleep? When Mac OS X goes to sleep, due to closing a laptop or selecting "Sleep" from the Apple menu, how does it suspend an executing process?
I suppose non-windowed processes are simply suspended at an arbitrary point of execution. Is that also true for Cocoa apps, or does the OS wait until control returns to the run loop dispatcher, and goes to sleep in a "known" location? Does any modern OS do that, or is it usually safe enough to simply suspend an app no matter what it is doing?
I'm curious, because allowing sleep to occur at any moment means, from the app's perspective, the system clock could suddenly leap forward by a significant amount. That's a possibility I don't usually consider while coding.
A: It depends on your app.
If you are interacting with external systems (think networking or doing something over USB/FireWire, etc.) then it might be affected. An application running on OS X gets to run for a limited time (max 10 ms), after which it is interrupted by the kernel, which schedules a new process from the process queue to run on the CPU. This is transparent to the application, which "thinks" that it runs all the time on the CPU. Thus, a transition to sleep is no different - apart from the time jumping ahead.
If you need to be aware that there was a transition to sleep mode, please refer to this tech note, which details how to receive notifications about the state change: Registering and unregistering for sleep and wake notifications
A: Your app is interrupted exactly where it is that moment if the CPU is actually currently executing code of your app. Your app constantly gets execution time by the task scheduler, that decides which app gets CPU time, on which core, and for how long. Once the system really goes to sleep, the scheduler simply gives no time to your app any longer, thus it will stop execution wherever it is at that moment, which can happen pretty much everywhere. However, the kernel must be in a clean state. That means if you just made a call into the kernel (many libC functions do) and this call is not at some safe-point (e.g. sleeping, waiting for a condition to become true, etc.) or possibly holding critical kernel locks (e.g. funnels), the kernel may suspend sleep till this call returns back to user space or execution reaches such a safe-point before it finally cancels your app from the task scheduler.
You can open a kernel port and register for sleep/wake-up events. In that case, your app will receive an event, when the system wants to go to sleep. You have several possibilities. One is to reply to it, that the system may progress. Another one is to suspend sleep; however, Apple says certain events can be suspended at most 30 seconds, after that, the system will just continue, whether your app likes it or not. And finally, you can cancel it; though not all events can be canceled. If the system already decided it will go to sleep, you can only suspend this by at most 30 seconds or allow it at once, you cannot cancel it. However, you can also listen to an event, where the system asks apps, if it is okay to go to sleep now and there you can reply "no", causing a sleep to be canceled.
The difference between "Is it okay to sleep" and "I'm planing on going to sleep" is: The first one is sent if the power saving settings are applied, that is, if the user has not moved the mouse or typed anything for the time configured there. In that case the system will just ask, if sleep is okay. An app like Apple's DVD Player will say "no", because most likely the user watches a DVD and thus doesn't interact with the computer, still no reason to go to sleep. OTOH, if the user closes his Mac Book, apps are not asked, the system will go to sleep for sure and just informs apps, that have now up to 30 seconds to react to it.
Wake-up events can also be quite interesting to catch. E.g. if your system wakes up, open files might be inaccessible (an external drive has been unplugged) or network sockets won't work any longer (network has changed). So you may re-init certain app parts before using them and running into errors that are more or less expected.
Apple's page regarding catching these events.
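For reference, a hedged C sketch of the IOKit registration that tech note describes (error handling omitted; the function and message names come from IOKit's IOPMLib.h/IOMessage.h):
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/pwr_mgt/IOPMLib.h>
#include <IOKit/IOMessage.h>

static io_connect_t rootPort;

static void powerCallback(void *refCon, io_service_t service,
                          natural_t messageType, void *messageArgument)
{
    switch (messageType) {
    case kIOMessageSystemWillSleep:
        /* flush state, then allow the sleep to proceed */
        IOAllowPowerChange(rootPort, (long)messageArgument);
        break;
    case kIOMessageSystemHasPoweredOn:
        /* re-open files, re-establish network connections, etc. */
        break;
    }
}

void registerForSleepNotifications(void)
{
    IONotificationPortRef notifyPort;
    io_object_t notifier;
    rootPort = IORegisterForSystemPower(NULL, &notifyPort, powerCallback, &notifier);
    CFRunLoopAddSource(CFRunLoopGetCurrent(),
                       IONotificationPortGetRunLoopSource(notifyPort),
                       kCFRunLoopCommonModes);
}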
A: I believe it will just suspend all apps wherever they happen to be.
Remember, this happens all the time anyway. Applications are constantly suspended and resumed due to context switching. So, really, the clock could jump between any 2 instructions in your app, though usually not in a noticeable/significant way.
If the OS waited for the app to return to some main loop you could run into situations where applications cause the sleep to hang. If they're doing a lot of work and not returning to the run loop dispatcher they would prevent the machine from going to sleep. That wouldn't be very good. :)
A: And if you set the system clock, time also appears to leap forward to the running programs. Nothing special either.
A: Check out this Wikipedia article. Cavver is correct in stating that things like network connections may time out, and thus those services may be interrupted.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: What is the semantic web, and why would I want to use it? Just like it reads.
A: The simplest and shortest explanation that I have found is: "The Semantic Web is to Machines what the World Wide Web is to Humans".
And as to why you would want that: for the same reasons why you let your Machine compute Pi to the quadrillionth digit instead of doing that yourself. So you can focus on interesting problems and leave the menial work to the Machine.
A: Well, it might not be fitting in with the "official" definition, but I always try and explain it to people as "It's like syndicating knowledge instead of content."
Why would you want to use it? Well... if you are making applications that could benefit from machine parseable and queryable "knowledge," then... you might want to use it :).
IMHO it's rather ill-defined and not implemented in a broadly useful way at present. It's a good idea and I have no doubt things will tend towards this sort of approach in the future, but it's not there yet.
A: From wikipedia:
The Semantic Web is an evolving
extension of the World Wide Web in
which the semantics of information and
services on the web is defined, making
it possible for the web to understand
and satisfy the requests of people and
machines to use the web content. It
derives from World Wide Web Consortium
director Sir Tim Berners-Lee's vision
of the Web as a universal medium for
data, information, and knowledge
exchange.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: using fstream to read every character including spaces and newline I wanted to use fstream to read a txt file.
I am using inFile >> characterToConvert, but the problem is that this omits any spaces and newline.
I am writing an encryption program so I need to include the spaces and newlines.
What would be the proper way to go about accomplishing this?
A: Probably the best way is to read the entire file's contents into a string, which can be done very easily using ifstream's rdbuf() method:
std::ifstream in("myfile");
std::stringstream buffer;
buffer << in.rdbuf();
std::string contents(buffer.str());
You can then use regular string manipulation now that you've got everything from the file.
While Tomek was asking about reading a text file, the same approach will work for reading binary data, though the std::ios::binary flag needs to be provided when creating the input file stream.
A: std::ifstream ifs( "filename.txt" );
std::string str( ( std::istreambuf_iterator<char>( ifs ) ),
std::istreambuf_iterator<char>()
);
A: The following c++ code will read an entire file...
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main ()
{
string line;
ifstream myfile ("foo.txt");
if (myfile.is_open()){
while (getline(myfile,line)){ // test the read itself; looping on eof() misbehaves on the last line
cout << line << endl;
}
myfile.close();
}
return 0;
}
post your code and I can give you more specific help to your problem...
A: A lot of the benefit of the istream layer is providing basic formatting and parsing for simple types to and from a stream. For the purposes that you describe, none of this is really important and you are just interested in the file as a stream of bytes.
For these purposes you may be better off just using the basic_streambuf interface provided by a filebuf. The 'skip whitespace' behaviour is part of the istream interface functionality that you just don't need.
filebuf underlies an ifstream, but it is perfectly valid to use it directly.
std::filebuf myfile;
myfile.open( "myfile.dat", std::ios_base::in | std::ios_base::binary );
// gets next char, then moves 'get' pointer to next char in the file
int ch = myfile.sbumpc();
// get (up to) the next n chars from the stream
std::streamsize getcount = myfile.sgetn( char_array, n );
Also have a look at the functions snextc (moves the 'get' pointer forward and then returns the current char), sgetc (gets the current char but doesn't move the 'get' pointer) and sungetc (backs up the 'get' pointer by one position if possible).
When you don't need any of the insertion and extraction operators provided by an istream class and just need a basic byte interface, often the streambuf interface (filebuf, stringbuf) is more appropriate than an istream interface (ifstream, istringstream).
A: For encryption, you're better off opening your file in binary mode. Use something like this to put the bytes of a file into a vector:
std::ifstream ifs("foobar.txt", std::ios::binary);
ifs.seekg(0, std::ios::end);
std::ifstream::pos_type filesize = ifs.tellg();
ifs.seekg(0, std::ios::beg);
std::vector<char> bytes(filesize);
ifs.read(&bytes[0], filesize);
Edit: fixed a subtle bug as per the comments.
A: You can call int fstream::get(), which will read a single character from the stream. You can also use istream& fstream::read(char*, streamsize), which does the same operation as get(), just over multiple characters. The given links include examples of using each method.
I also recommend reading and writing in binary mode. This allows ASCII control characters to be properly read from and written to files. Otherwise, an encrypt/decrypt operation pair might result in non-identical files. To do this, you open the filestream with the ios::binary flag. With a binary file, you want to use the read() method.
A: Another better way is to use istreambuf_iterator, and the sample code is as below:
ifstream inputFile("test.data");
string fileData((istreambuf_iterator<char>(inputFile)), istreambuf_iterator<char>());
(The extra parentheses around the first argument avoid the "most vexing parse", which would otherwise declare fileData as a function.)
A: I haven't tested this, but I believe you need to clear the "skip whitespace" flag:
inFile.unsetf(ios_base::skipws);
I use the following reference for C++ streams:
IOstream Library
A: For encryption, you should probably use read(). Encryption algorithms usually deal with fixed-size blocks. Oh, and to open in binary mode (no translation frmo \n\r to \n), pass ios_base::binary as the second parameter to constructor or open() call.
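A hedged sketch of that block-wise pattern (the 16-byte block size and encrypt_block call are hypothetical stand-ins for your cipher):
#include <fstream>
#include <vector>

int main()
{
    std::ifstream in("input.bin", std::ios_base::binary);
    std::vector<char> buf(16);
    while (in.read(&buf[0], buf.size()) || in.gcount() > 0)
    {
        std::streamsize n = in.gcount(); // the last block may be short
        // encrypt_block(&buf[0], n);    // hypothetical cipher call
    }
    return 0;
}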
A: Simple
#include <fstream>
#include <iomanip>
ifstream ifs ("file");
ifs >> noskipws;
that's all.
A: ifstream ifile(path);
std::string contents((std::istreambuf_iterator<char>(ifile)), std::istreambuf_iterator<char>());
ifile.close();
A: I also find that the get() method of an ifstream object can read all the characters of the file, without requiring you to unset std::ios_base::skipws. Quote from C++ Primer:
Several of the unformatted operations deal with a stream one byte at a time. These operations, which are described in Table 17.19, read rather than ignore whitespace.
These operations are list as below:
is.get(), os.put(), is.putback(), is.unget() and is.peek().
Below is a minimum working code
#include <iostream>
#include <fstream>
#include <string>
int main(){
std::ifstream in_file("input.txt");
char s;
if (in_file.is_open()){
int count = 0;
while (in_file.get(s)){
std::cout << count << ": "<< (int)s <<'\n';
count++;
}
}
else{
std::cout << "Unable to open input.txt.\n";
}
in_file.close();
return 0;
}
The content of the input file (cat input.txt) is
ab cd
ef gh
The output of the program is:
0: 97
1: 98
2: 32
3: 99
4: 100
5: 10
6: 101
7: 102
8: 32
9: 103
10: 104
11: 32
12: 10
10 and 32 are decimal representation of newline and space character. Obviously, all characters have been read.
A: As Charles Bailey correctly pointed out, you don't need fstream's services just to read bytes. So forget this iostream silliness, use fopen/fread and be done with it. C stdio is part of C++, you know ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: Which Layout Manager do you use? What Java GUI layout manager does everyone use? Lately, I have been using MigLayout, which has some powerful component controls. Just wanted to see what other developers are using other than the standard JDK ones.
A: GridBagLayout is usable. Once you get used to using it, it works great. I think the standard JDK layout managers are pretty powerful on their own. Plus, you get to minimize dependency on 3rd party libraries.
A: MiG and FormLayout (JGoodies) are both excellent for manual layout (And almost all layout eventually becomes manual). My biggest piece of advice is to design your views so that you can completely rip out the layout and re-implement it without impacting your application (good separation of view and controller is key here).
Definitely take a look at JGoodie's PresentationModel approach for implementing 'dumb' views. I use this technique with a GUI builder (I use GroupLayout with the Jigloo GUI builder plugin) for tossing off quick prototypes. After 3 or 4 iterations, that usually goes out the window and we do a re-implement using MiG or FormLayout.
EDIT: Since I wrote this, I have moved to using MiG for all of my layouts, and I no longer use a GUI builder - it's just way too easy to lay things out using MiG.
A: The last Swing application I worked on used JGoodies' FormsLayout.
A: I use the GridBagLayout. It seems to take a lot of code, but it makes very good-looking layouts.
I also like to combine BorderLayout with GridBagLayout panels for great customizability.
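A minimal sketch of that combination (hypothetical components; a GridBagLayout form nested in a BorderLayout frame):
import java.awt.*;
import javax.swing.*;

public class LayoutDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Nested layouts");
                JPanel form = new JPanel(new GridBagLayout());
                GridBagConstraints c = new GridBagConstraints();
                c.insets = new Insets(4, 4, 4, 4);

                c.gridx = 0; c.gridy = 0;
                form.add(new JLabel("Name:"), c);
                c.gridx = 1; c.weightx = 1.0;
                c.fill = GridBagConstraints.HORIZONTAL;
                form.add(new JTextField(20), c);

                frame.add(form, BorderLayout.CENTER);             // GridBagLayout panel nested
                frame.add(new JButton("OK"), BorderLayout.SOUTH); // inside the BorderLayout frame
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}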
A: I'm a big fan of using TableLayout instead of GridBagLayout. Everything just makes sense, whereas every time I try to use GridBagLayout it crushes my soul.
A: I used to go for GridBagLayout for control, but since Java 1.6 I'm going to use GroupLayout. It's awesome.
Here are a screenshot and sample code for using it:
(screenshot: http://img145.imageshack.us/img145/7844/screenshot1dz8.png)
private void layoutComponents(){
JPanel panel = new JPanel();
GroupLayout layout = new GroupLayout(panel);
panel.setLayout(layout);
layout.setAutoCreateGaps(true);
layout.setAutoCreateContainerGaps(true);
SequentialGroup hGroup = layout.createSequentialGroup();
JLabel nameLbl = new JLabel("Name");
JLabel countLbl = new JLabel("Amount");
JLabel dateLbl = new JLabel("Date(dd/MM/yy)");
hGroup.addGroup(layout.createParallelGroup().
addComponent(nameLbl).
addComponent(countLbl).
addComponent(dateLbl).
addComponent(go));
hGroup.addGroup(layout.createParallelGroup().
addComponent(name).
addComponent(count).
addComponent(date));
layout.setHorizontalGroup(hGroup);
SequentialGroup vGroup = layout.createSequentialGroup();
vGroup.addGroup(layout.createParallelGroup(Alignment.BASELINE).
addComponent(nameLbl).addComponent(name));
vGroup.addGroup(layout.createParallelGroup(Alignment.BASELINE).
addComponent(countLbl).addComponent(count));
vGroup.addGroup(layout.createParallelGroup(Alignment.BASELINE).
addComponent(dateLbl).addComponent(date));
vGroup.addGroup(layout.createParallelGroup(Alignment.BASELINE).
addComponent(go));
layout.setVerticalGroup(vGroup);
frame.add( panel , BorderLayout.NORTH );
frame.add( new JScrollPane( textArea ) );
}
A: I use DesignGridLayout for most of my panels.
For the rare panels that DesignGridLayout cannot fully handle, I use a mix of Borderlayout and DesignGridLayout.
With DesigngridLayout you can manually code your layouts with a minimum number of lines of code, that are easy to type and read:
DesignGridLayout layout = new DesignGridLayout(myPanel);
layout.row().grid(lblFirstName).add(txfFirstName).grid(lblSurName).add(txfSurName);
layout.row().grid(lblAddress).add(txfAddress);
layout.row().center().add(btnOK, btnCancel);
Each row of the panel grid is defined by one line of code. As you can see, "drawing" your panel is quite straightforward.
In addition, I find DesignGridLayout has some unique features (such as its "smart vertical resize").
A: GridBagLayout is powerful but quite primitive: the code that wires up the layout is very verbose. This utility library (actually just 1 jar file containing about 10 classes) simplifies a lot of the work: http://code.google.com/p/painless-gridbag/ The following snippet is quoted from the home page of that site:
PainlessGridBag gbl = new PainlessGridBag(getContentPane(), false);
gbl.row().cell(lblFirstName).cell(txtFirstName).fillX()
.cell(lblFamilyName).cell(txtFamilyName).fillX();
gbl.row().cell(lblAddress).cellXRemainder(txtAddress).fillX();
gbl.doneAndPushEverythingToTop();
A: As a general overview, you might find an article I wrote a loooong time ago at sun to be useful. It's not up to date with the latest layout managers, but it concentrates on effective nesting of layout managers, rather than trying to do everything with one layout.
See http://developer.java.sun.com/developer/onlineTraining/GUI/AWTLayoutMgr
A: I've found that for any non-trivial GUI I use multiple layouts with nested sub-panels where the main panel may have a GridBagLayout and each sub-panel (typically without a border or indication that it is a panel) uses a simpler layout where possible. Typically I'll use BorderLayout, FlowLayout, and BoxLayout for smaller, simpler sub-panels. By dividing small sections of the GUI into sub-panels and using the simplest layout possible to control that section of the GUI you can create complex, well arranged displays without too much headache from GridBagLayout's many options. Also, by grouping like display functionality into a panel, it creates more readable code.
A: MiGLayout is the GUI layout manager which is widely used by Java Developers.
A: SpringLayout, which was developed for the Matisse GUI builder that is part of NetBeans.
A: I've used GroupLayout as well. Again, its a standard JDK layout manager as of Java6, but you can find the library separate as well.
A: I've always been a big fan of the GridBagLayout. It resembles HTML tables a lot so it is intuitive to those web programmers.
A: I started off using various nested layouts, then moved over to GridBagLayout (which is pretty frustrating). Since then I tried FormLayout (but found it wasn't suited to anything but forms) and settled firmly on TableLayout, which overall I'm very happy with.
Since then I've discovered MiGLayout and although I haven't done much more than play with it, it seems very capable, quite similar to TableLayout and probably a little cleaner.
The big plus for me is that MiGLayout is set to become part of the JDK, so I intend to use it pretty much exclusively when it does.
The other thing to remember is that no matter which heavy-weight LayoutManager you settle on, there is always a place for some of the simpler layout managers such as GridLayout. I've seen some horrible things done with GridBagLayout that could have been done much more easily with a simpler layout manager.
A: I prefer to minimize dependencies on 3rd party libs, so it's usually BoxLayout for dialogs and GridBagLayout for "complicated" layouts. GridBagLayout is easy enough to understand, but a bit hard to configure. I wrote myself a tool for creating the code from HTML layouts (hope that helps others too):
http://www.onyxbits.de/content/blog/patrick/java-gui-building-gridbaglayout-manager-made-easy
A: The only layout manager that I have found that I actually like is the Relative Layout Manager. It works in a way that is consistent with how dialog boxes are conceptually organized. One drawback is that while this layout manager deals with additive constraints, it does not seem to deal with ratio constraints. Fortunately it is pretty well designed, and I was able to implement this feature.
A: I use BorderLayout 90% of the time while nesting BoxLayout and SpringLayout
A: I'm a bit of a Java newbie.
I tried GridBagLayout, gave up, then tried BoxLayout, then gave up, then made my own custom layout, which worked. With GridBag and Box I put my best guess in and the layout engines decided to do something different, without any apparent way to debug them.
With my custom layout, I could print out coordinates, widths, and heights to find out where it was going wrong. It's a bit mathy placing things, but you've got the full power of Java to use rather than the limited vocabulary of one of the built-in layout managers.
Of course it was just for a home project, you'd never be allowed to do this at work.
A: I use GridBagLayout for form-like layouts, BorderLayout for simple layouts, and FlowLayout for a number of horizontal icons/buttons that need some space in between. NetBeans is also a good GUI builder that can avoid a lot of tedious layout coding and save you time.
A: I have started using Swing recently and I am using GridBagLayout.
A: I was sick of all those layout managers that needed a lot of setup, weren't very readable, or were exhausting to do manually, so I wrote my own very simple layout manager, which uses the abstraction of two photo corners keeping each component in place. You can add your component like this: parent.add(child,"topleft(0, 0.5)bottomright(0.5,1.0)");
Have a look here https://github.com/hageldave/UsefulStuff/blob/master/src/PhotoCornersLayout.java ;)
You're responsible for a correct layout yourself though, because it doesn't check for overlaps or other shortcomings of your layout.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: Are there any online user group meetings? I must admit that I am incredibly jealous of those developers who happen to live near active user groups (e.g. the ALT.NET guys in Austin). I often read blog posts and listen to podcasts that reference these in-person meetings and find myself wishing that I could sit in and participate as well. But it just isn't realistic to fly across the country to meet a few guys for a couple hours in a pub to talk about patterns and practices.
So I was wondering if there was a similar discussion forum for those who don't happen to live near an active user group. After all, blogs and books only go so far, and for the most part are a one-way avenue of communication. True, you can use comments, e-mails, tweets, and IM to get some interaction, but there is something to be said about face-to-face real-time interaction that will get lost in all of these mediums.
I guess what I'm looking for is some sort of video-conferencing deal where people who share an interest in a specific field of software development can get together to talk and interact without having to live right next door to each other. Does anything like this exist?
A: There's a .NET usergroup in Second Life. Of course, this depends on how you feel about Second Life.
A: I haven't had a chance to check it out, but the Virtual ALT.NET group sounds promising.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to show that you understand the requirements of the project How do you show your clients/employers that you understood their requirements?
What do you recommend to use? Use-Case diagrams? Flow-Charts? Data-Flow-Diagrams? Decision Trees?
I'm not really asking for a very detailed answer. Just something simple to help me communicate with the person who wrote the requirements and to see if we are both on the same page.
A: I usually put together a PowerPoint deck fairly early in a project, giving a high-level overview of the project, along with some architectural diagrams (the simpler the better) and screen mockups/wireframes. Then I have a "kick-off" meeting for requirements review, and talk through the business problem and proposed solution.
A: I simply explain the requirements back in my own language, supplying my assumptions and adding in limitations.
The requirement may be "Button turns green when clicked"
I would ask "Ok, so when the user clicks on the button, the background color of the button turns green, but the text stays the same color?"
Basically prompting the person giving the requirements to explain how THEY envision it working.
A: My role has a lot of requirements gathering. The best way I find is a two-pronged approach: talk through a PowerPoint presentation, keeping it all simple and high level, and show a Proof of Concept or a mock-up. Walking and talking the customer through it will see them responding with many "what ifs", such as "Can I change the colour?" This gives everyone a broad idea of what they are getting. If you can get something the users can touch and play with, that works really well at uncovering the hidden what-ifs.
Then, back this high-level view up with really detailed low-level requirements. Dot the "i"s and cross the "t"s. Get the users to read through and sign them before anything more than the POC is done. Generally Word with a lot of screenshots works well.
Unless the users can bring you UMLs and data flow charts, don't use them in anything the customer sees or signs. If it is signed by the customer and you had to re-jig the back end to meet a "what if", you have to get everything re-signed.
The final thing is to ensure that the customers can talk to you in their own words about their requirements and spell out what they are getting. One way to do this is to sit in on any middle management sell to higher management.
Don't try and bamboozle the customer. If they want something changed at the last minute, explain what the cost will be, in time and money, and ask them if this is totally required. Doing this will often stop people making trivial changes, and force them to think about why they want the change.
Requirements are getting what the customer needs from what they say they want.
Edit-
To the point about showing screenshots early- this sometimes requires a good PM to let the customer know the time scales and where everything is at. If the PM helps to set some decent time frames and expectations, the customers won't get excited. The good thing of POC and screenshots is people get an image of what it could be like and can often work that inside their minds.
If you want to avoid screenshots, do a wire-frame look or use a whiteboard and 20 minutes of drawing. Just remember to save the whiteboard as a photo before you wipe it.
Whiteboarding (and the good old OHP) can be a godsend to requirements gathering- developing a good clear style of drawing concepts can save hours in workshops.
A: Flow charts tend to confuse some non-technical people (i.e. clients), as do data flow diagrams. Use cases are good and understandable, as are Business Requirement and Technical Requirement documentation, and possibly some sort of rough wireframe sketches.
A: It really depends wich requirements you're talking about.
*
*Functional requirements? Maybe UML is the right tool for that, but I would prefer tests or test specifications
*GUI requirements? Nothing beats paper and a pencil.
*Security requirements? By describing the limits of your security, you avoid unexpected disappointments.
*Reliability requirements? Both a testing mechanism and a software/hardware backup/recovery plan.
*Other requirements: depends on your client.
But anyway, keep in mind, and explain to the client, that requirements WILL change during the development phase and that it will always be a discussion and a compromise between cost and functionality. Being honest gives your customer more confidence.
A: I think that the best way to show that you really understand the client's idea is to build prototypes.
By the way, I was present at the last edition of the Requirements Engineering conference, and in one of the workshops (MERE), Siemens was showing an interesting piece of software based on composing a video of the client's idea (it was for projects, not limited to software) just to ensure that all the requirements are fully understood.
Anyway, the thing is that sometimes a creative way to show them is better. Don't limit yourself to the standard diagrams.
A: I have had good experiences with creating a simple vocabulary, with terms from the domain and their meanings and relationships, and then go through it and make sure everybody agrees on everything.
Writing and discussing the vocabulary forces you to think, rather than just thinking that "we'll figure that out later".
It's no silver bullet, of course, and should be used along with other means such as a functional requirements specification and possibly a prototype.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|