Q: How do I configure a project to use latest Derby DB version (10.4)? Every time I try to run a small application that uses a Derby DB I get this error message:
Message: Database at /path/to/db/TheDB has an incompatible format with the current version of the software. The database was created by or upgraded by version 10.4.
I've added the library from Netbeans, and still have the same problem.
I'm not sure what to do here.
A: The version included with Netbeans might be old (Derby 10.2 as of NB 6.0). If you added Derby via the project properties and added the "Library", then you probably had the old version.
You can update the library by going to Tools -> Libraries. Select "Java DB Driver". Delete the jar references and update them to point at your 10.4 version.
If you added the JAR file to the project properties AND had the library added, then NB may have grabbed the first/last JAR it found in the list...
A: Hmmm, all I had to do was add the proper derby.jar manually to the project.
A simple copy command operation:
cp /opt/Apache/derbyinstall/lib/derby.jar /path/to/project/dist/lib/
...did the job.
The problem was that I did this operation from Netbeans and, I don't know why, Netbeans didn't update the jar file. Weird, but fixed. :)
A: From where you located Derby's bin directory, import derby.jar and everything will be OK. And don't forget to lower your Derby driver's jar version.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Specifying "all odd values" in crontab? In crontab, I can use an asterisk to mean every value, or "*/2" to mean every even value.
Is there a way to specify every odd value? (Would something like "1+*/2" work?)
A: Every odd minute would be:
1-59/2 * * * *
Every even minute would be:
0-58/2 * * * *
A: I realize this is almost 10 years old, but I was having trouble getting 1-23/2 to work for an every-two-hours, odd-hours job.
For all you users where exact odd-hour precision is not needed, I did the following, which suited my team's needs.
59 */2 * * *
Execute the job every two hours, at the 59th Minute.
A: Try
1-23/2
From your question, I'm assuming Vixie Cron. I doubt this will work with any other cron.
A: As I read the manual "1-23/2" (for hours) would do the trick.
A: Depending on your version of cron, you should be able to do (for hours, say):
1-23/2
Going by the EXTENSIONS section in the crontab(5) manpage:
Ranges can include "steps", so "1-9/2" is the same as "1,3,5,7,9".
For a more portable solution, I suspect you just have to use the simple list:
1,3,5,7,9,11,13,15,17,19,21,23
But it might be easier to wrap your command in a shell script that will immediately exit if it's not called in an odd minute.
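If you go the wrapper route, here is a minimal sketch (the script path and target command are hypothetical):
# crontab entry: run the wrapper every minute
* * * * * /usr/local/bin/run-if-odd-minute.sh
# /usr/local/bin/run-if-odd-minute.sh
#!/bin/sh
m=$(date +%M)
m=${m#0}   # strip a leading zero so 08/09 aren't parsed as octal
[ $(( m % 2 )) -eq 1 ] || exit 0
exec /path/to/actual_command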
A: This works on Cronie, even with a 5-minute interval, e.g.:
3-58/5 * * * * /home/test/bin/do_some_thing_every_five_minute
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "82"
}
|
Q: C# serialization and event for data binding are lost I have already posted something similar here but I would like to ask the question more general over here.
Have you tried serializing an object that implements INotifyPropertyChanged, getting it back from serialization, and binding it to a DataGridView? When I do, the values that change are not refreshed (I need to minimize the window and open it back up).
Do you have any trick?
A: Use the DataContractSerializer and create a method for OnDeserialized
[OnDeserialized]
private void OnDeserialized(StreamingContext c) {}
This will let you raise the PropertyChanged event when deserialization is complete
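A minimal sketch of that pattern (the class and property names here are hypothetical, and the binding layer must re-attach its handlers since the event's delegate list is not serialized):
using System.ComponentModel;
using System.Runtime.Serialization;

[DataContract]
public class Item : INotifyPropertyChanged
{
    [DataMember]
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; RaisePropertyChanged("Name"); }
    }

    // DataContractSerializer only writes [DataMember] fields, so any
    // PropertyChanged subscribers are gone after deserialization and
    // must re-subscribe (re-binding the DataGridView does this).
    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }

    [OnDeserialized]
    private void OnDeserialized(StreamingContext c)
    {
        // null means "all properties changed" to most binding consumers
        RaisePropertyChanged(null);
    }
}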
A: The trick of having its own event and binding it after serialization works, but it is not elegant because it requires another event that I would not like to have...
A: Serializing interfaces gets tricky when you deal with objects that have internal states. Can you post an example of the serialization code you're talking about?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why is a SQL float different from a C# float Howdy, I have a DataRow pulled out of a DataTable from a DataSet. I am accessing a column that is defined in SQL as a float datatype. I am trying to assign that value to a local variable (C# float datatype) but am getting an InvalidCastException.
DataRow exercise = _exerciseDataSet.Exercise.FindByExerciseID(65);
_AccelLimit = (float)exercise["DefaultAccelLimit"];
Now, playing around with this I did make it work but it did not make any sense and it didn't feel right.
_AccelLimit = (float)(double)exercise["DefaultAccelLimit"];
Can anyone explain what I am missing here?
A: A SQL float is a double according to the documentation for SQLDbType.
A: And normally you would never want to use float in SQL Server (or real) if you plan to perform math calculations on the data as it is an inexact datatype and it will introduce calculation errors. Use a decimal datatype instead if you need precision.
A: A float in SQL is a Double in the CLR (C#/VB). There's a table of SQL data types with the CLR equivalents on MSDN.
A: The float in Microsoft SQL Server is equivalent to a Double in C#. The reason for this is that a floating-point number can only approximate a decimal number, the precision of a floating-point number determines how accurately that number approximates a decimal number. The Double type represents a double-precision 64-bit floating-point number with values ranging from negative 1.79769313486232e308 to positive 1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and Not-a-Number (NaN).
A: The reason it "doesn't feel right" is because C# uses the same syntax for unboxing and for casting, which are two very different things. exercise["DefaultAccelLimit"] contains a double value, boxed as an object. (double) is required to unbox the object back into a double. The (float) in front of that then casts the double to a float value. C# does not allow boxing and casting in the same operation, so you must unbox, then cast.
The same is true even if the cast is nondestructive. If a float was boxed as an object that you wanted to cast into a double, you would do it like so: (double)(float)object_var.
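For example, using the question's own column (variable names here are illustrative):
object boxed = exercise["DefaultAccelLimit"];   // a double, boxed as object
// float f = (float)boxed;                      // InvalidCastException: unboxing
//                                              // must target the exact boxed type
double d = (double)boxed;                       // unbox to double
float f = (float)d;                             // then narrow to float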
A: I think the main question has been answered here but I feel compelled to add something for the section of the question that states that this works.
_AccelLimit = (float)(double)exercise["DefaultAccelLimit"];
The reason this "works" and the reason it "doesn't feel right" is that you are downgrading the double to a float by the second cast (the one on the left) so you are losing precision and effectively telling the compiler that it is ok to truncate the value returned.
In words this line states...
Get an object (that happens to hold a double in this case)
Cast the object in to a double (losing all the object wrapping)
Cast the double in to a float (losing all the fine precision of a double)
e.g. If the value is say
0.0124022806089461
and you do the above then the value of AccelLimit will be
0.01240228
as that is the extent of what a float in c# can get from the double value.
It's a dangerous thing to do; for the record, the double-to-float conversion rounds to the nearest representable float value rather than truncating.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: Is there an Eclipse add-on to build a python executable for distribution? I want to build an executable to distribute to people without python installed on their machines.
Is there an add-on to Eclipse that allows this? I couldn't find one.
If not, do you have a builder that you recommend that would make it easy to go to my python project directory created in Eclipse, and bundle it all up?
Thanks,
Mark
A: It's not eclipse, but ActiveState's ActivePython FAQ mentions the freeze utility, which sounds like it might be close to what you're asking for.
A: For Windows, there's the py2exe project.
There's bbfreeze, and PyInstaller, and py2app, also.
A: See these questions
A: There is also PyInstaller:
http://www.pyinstaller.org/
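For what it's worth, a minimal invocation with current PyInstaller releases looks like this (the entry-point name main.py is hypothetical):
pyinstaller --onefile main.py
# the bundled executable lands in dist/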
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I downgrade the format of a Subversion repositiory? Is there any way to down-format a Subversion repository to avoid messages like this:
svn: Expected format '3' of repository; found format '5'
This happens when you access repositories from more than one machine, and you aren't able to use a consistent version of Subversion across all of those machines.
Worse still, there are multiple repositories with various formats on different servers, and I'm not at liberty to upgrade some of these servers.
A: The svnbook says this (about file:/// access vs. setting up a server):
Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called "Supporting Multiple Repository Access Methods"). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs; from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.
Subversion guarantees that any 1.x client can talk to any 1.x server. By using a server, you can upgrade one server at a time, and upgrade clients independently of the server.
A: I suspect you have to export the repository and re-import it into the older version. There might be some format incompatibilities in the export format though - but since it's just a big text file, it would hopefully not be too difficult to strip those out.
A: According to the Subversion book, there's no way to do that. You will have to export the whole repository and then proceed to re-import it when you have the older version up and running.
In any case, I suggest you upgrade your SVN tools accordingly in the client machines, instead of playing dangerous games with your repository.
Decide upon a version and don't touch it until you're ready to upgrade to a newer version.
A: As mentioned in the thread How to downgrade a subversion tree from v1.7 to v1.6?, there is a Python script at http://svn.apache.org/repos/asf/subversion/trunk/tools/client-side/change-svn-wc-format.py which can downgrade the working copy, but it only works for versions up to 1.6.x.
A: If you can't use the same version of Subversion across all machines, then you should set up a server process (either svnserve or Apache) and access the repository only through the server. The server can mediate between different versions of Subversion; it's only when you're using direct repository access that you run into this issue.
If the server will be an older version than the current repository format (which I don't recommend), then you'll need to export the repository using the newer version and import it using the older version.
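A minimal sketch of that dump/load cycle (repository paths are hypothetical):
# on the machine with the newer Subversion
svnadmin dump /srv/svn/new-repo > repo.dump
# on the machine with the older Subversion
svnadmin create /srv/svn/old-repo
svnadmin load /srv/svn/old-repo < repo.dump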
A: If you downgrade to v1.4x, here's another way that may help you:
https://www.admon.org/scripts/downgrade-svn-from-1-6-to-1-4/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can you add additional logic to type resolution at runtime? Is there a generic way, without creating and managing your own CLR host, to take over locating and loading a type if that type is not found?
The following is just an example. In your rush to be the first answer, don't suggest the new add-in framework or the MEF as a solution to my question.
An example would be an app with add-ins. Your app reads in a file that lists the types to use for a particular function. The app attempts to instantiate those types. If they aren't already loaded in the appdomain, the method fails. I'm looking for an event I can handle, or a component I can provide my own implementation for, that will allow me to gracefully handle these situations and provide additional logic for loading these assemblies.
As far as I can tell (unless somebody has an example that works) none of the so-far mentioned AppDomain events fire when a type isn't found.
Wait, apparently this is working! Not sure what I did wrong before, but this event fires good and well.
A: There are events on the AppDomain that you can use.
You would want TypeResolve event, and possibly the AssemblyResolve event.
Also, you can read more about how the .net runtime resolves assemblies, so it's possible you could define this information in the probing section.
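A minimal sketch of wiring up those events (the add-in assembly path and type name are hypothetical):
using System;
using System.Reflection;

static class AddInLoader
{
    static void Main()
    {
        // Raised when the runtime fails to find a type
        AppDomain.CurrentDomain.TypeResolve += Resolve;
        // Raised when an assembly reference fails to load
        AppDomain.CurrentDomain.AssemblyResolve += Resolve;

        Type t = Type.GetType("AddIns.SomeAddIn, AddIns", true);
        Console.WriteLine(t);
    }

    static Assembly Resolve(object sender, ResolveEventArgs args)
    {
        // args.Name holds the type name (TypeResolve) or the assembly
        // name (AssemblyResolve); load from a custom location here
        return Assembly.LoadFrom(@"C:\MyApp\AddIns\AddIns.dll");
    }
}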
A: Isn't that possible just by using AppDomain events?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What to do when IE's moveToElementText spits out an Invalid Argument exception We've written a plugin to the Xinha text editor to handle footnotes. You can take a look at:
http://www.nicholasbs.com/xinha/examples/Newbie.html
In order to handle some problems with the way WebKit and IE handle links at the end of lines (there's no way to use the cursor to get out of the link on the same line), we insert a blank element, move the selection to that, then collapse right. This works fine in WebKit and Gecko, but for some reason moveToElementText throws an Invalid Argument exception in IE. It doesn't matter which element we pass to it; the function seems to be completely broken in this code path. In other code paths, however, it works.
To reproduce the error using the link above, click in the main text input area, type anything, then click on the yellow page icon with the green plus sign, type anything into the lightbox dialog, and click on Insert. An example of the code that causes the problem is below:
if (Xinha.is_ie)
{
var mysel = editor.getSelection();
var myrange = doc.body.createTextRange();
myrange.moveToElementText(newel);
} else
{
editor.selectNodeContents(newel, false);
}
The code in question lives in svn at:
https://svn.openplans.org/svn/xinha_dev/InsertNote
This plugin is built against a branch of Xinha available from svn at:
http://svn.xinha.webfactional.com/branches/new-dialogs
A: It's not visible in the snippet above, but newel has been appended to the dom using another element that was itself appended to the DOM. When inserting a dom element, you have to re-retrieve your handle if you wish to reference its siblings, since the handle is invalid (I'm not sure, but I think it refers to a DOM element inside of a document fragment and not the one inside of the document.) After re-retrieving the handle from the insert operation, moveToElementText stopped throwing an exception.
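A sketch of that re-retrieval fix (the container and id are hypothetical):
var container = doc.body;               // wherever the caret anchor belongs
var newel = doc.createElement("span");
newel.id = "caret-anchor";
container.appendChild(newel);
// Re-retrieve a fresh handle from the rendered document before using
// TextRange methods; the pre-insertion handle may be invalid in IE
newel = doc.getElementById("caret-anchor");
var myrange = doc.body.createTextRange();
myrange.moveToElementText(newel);       // no longer throws Invalid Argument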
A: I've had the unfortunate experience of debugging this IE exception many different times while implementing a WYSIWYG editor, and it always arises from accessing a property on a DOM node (such as .parentNode) or passing a DOM node to a function (such as moveToElementText) while the DOM node is not currently in the document being rendered.
As you said, sometimes the DOM node is a piece of a document fragment that has been removed from the "actual" DOM being rendered by the browser, and sometimes the node has simply not been inserted yet. Either way, there are a number of properties and methods on DOM nodes that cannot be safely accessed in IE6 until the node has been properly inserted and rendered in the "actual" DOM.
The real kicker is that often this manifestation of IE6's "Invalid Argument" exception cannot be protected by try/catch.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How well does Bugzilla work for managing Scrum projects? We have MS Sharepoint -- which isn't all bad for managing a task list. The data's publicly available, people are notified of changes and assignments.
I think that Bugzilla might be a little easier for management and reporting purposes. While there are some nice Open Source Scrum management tools, I've used up a lot of my political capital and can't ask for too much more than what we've got now. Money isn't the object -- obviously -- it's the idea that my team has too many specialized tools.
Will Bugzilla work out as a more general project management tool -- outside the bug fix use cases?
Will I be bitterly disappointed and wish I'd downloaded something else and made my case for a better project management tool?
A: Bugzilla is a great bug tracking system. We have tried to use it for other project management tasks and the results were less than stellar. I would recommend finding something designed with your goals in mind.
A: Try it for yourself.
Get a $15/month account at wush.net and use it yourself for a while (no business relationship besides satisfied customer).
Bugzilla is powerful and has a lot of configuration options, which can be confusing.
I personally used it three years ago on a project I was working on. I had no project manager and I was the developer, so I needed a very-light-overhead system. Bugzilla gave me that. I put my main goal in as an enhancement ("productionalized system") and then I made dependencies to reach that point. I ended up having 160 nodes, all dependent on each other. This essentially was a work breakdown structure. I didn't bother with time estimates, and I didn't bother with creating any other kind of project documentation.
A cool advantage was that as I coded, if I noticed something needed to be done, I would just pop it into bugzilla (20 second process once it's set up), tie it as a dependency, and go back to what I was doing.
Whenever I completed a task, I would look at the dependency diagram and find the outermost leaves (bugs that blocked others but weren't themselves blocked), and work on those.
The advantage of this method for me is that if a task had looked simple and had one node associated with it, but when doing the thing itself I realized it was more complex, I would just split it into different subtasks. This took only a minute and absolutely didn't involve a meeting with a project manager.
Other people on the team could track my progress by looking at open bugs, closed bugs sorted by date, etc. They saw action, so they left me alone. When I had external dependencies, I would make a bug, detail the work, and send that person a link via email. They could then see why this was needed by looking at the dependency diagram.
Note that unless previously agreed upon, I did not assign them the bug.
It worked really well and the system was ready one month early.
How will it work with SCRUM? Having only had a cursory glance at scrum I can't tell you. But that was my experience.
Using a dedicated host will allow you three things:
* support
* easy upgrades (unless you got gurus in-house, bugzilla management ain't easy -- for me at least)
* users across organizational boundaries.
Note that bugzilla has all sorts of security features, so it's easy to lock-down the users to what they need to see.
A: My stand-alone solution is DokuWiki + MantisBT + Subversion + Review Board, which can be integrated with relative ease. A hosted alternative is Bitbucket.org. The rationale is that you write user stories on the wiki and can reference them from specific tasks. Larger bugs can be collaboratively designed, and the "wiki" link is provided on the bug report by Mantis. Review Board lets you do peer code reviews against an svn diff before the change is committed.
A: We've used Trac and Subversion very successfully for several projects.
The main advantage here is being able to tailor reports, some very Scrum specific, to provide information to management.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: When to use Hibernate/JPA/Toplink? Right now I'm making an extremely simple website- about 5 pages. Question is if it's overkill and worth the time to integrate some sort of database mapping solution or if it would be better to just use plain old JNDI. I'll have maybe a dozen things I need to read/write from the database. I guess I have a basic understanding of these technologies but it would still take a lot of referring to the documentation. Anyone else faced with the decision before?
EDIT: Sorry, I should've specified JNDI to lookup the DB connection and JDBC to perform the operations.
A: Short answer: It depends on the complexity you want to support.
Long answer:
First of all, ORM ( object relational mapping - database mapping as you call it - ) and JNDI ( Java Naming and Directory Interfaces ) are two different things.
The first, as you already know, is used to map database tables to classes and objects. The second provides a lookup mechanism for resources; they may be DataSources, EJBs, Queues, or others.
Maybe you mean "JDBC".
Now as for your question: if it is that simple, maybe it wouldn't be necessary to implement an ORM. The number of tables would be around 5-10 at most, and the operations really simple, I guess.
Probably using plain JDBC would be enough.
If you use the DAO pattern you may change it later to support the ORM strategy if needed.
Like this:
Say you have the Employee table
You create the Employee.java with all the fields of the DB by hand (it should not take too long) and an EmployeeDao.java with methods like:
public interface EmployeeDao {
    Employee findById( int id );
    void insert( Employee e );
    void update( Employee e );
    void delete( Employee e );
    List<Employee> findAll();
}
And the implementation is quite straight forward:
select * from employee where id = ?
insert into employee ( bla, bla, bla ) values ( ? , ? , ? )
update etc. etc
When ( and If ) your application becomes too complex you may change the DAO implementation . For instance in the "select" method you change the code to use the ORM object that performs the operation.
public Employee selectById( int id ) {
    // Commenting out the previous JDBC implementation...
    // String query = "select * from employee where id = ?";
    // execute( query );

    // Using the ORM solution
    Session session = getSession();
    Employee e = (Employee) session.get( Employee.class, id );
    return e;
}
This is just an example; in real life you may let the abstract factory create the ORM DAO, but that is off-topic. The point is you may start simple and, by using the design patterns, change the implementation later if needed.
Of course, if you want to learn the technology, you may start right away with even 1 table.
The choice of one or another (ORM solution, that is) depends basically on the technology you're using. For instance, for JBoss or other open-source products, Hibernate is great. It is open source, and there are a lot of resources to learn from. But if you're using something that already has Toplink (like the Oracle application server), or if the base is already built on Toplink, you should stay with that framework.
By the way, since Oracle bought BEA, they said they're replacing Kodo (the WebLogic persistence framework) with Toplink in the now-called "Oracle WebLogic Application Server".
I leave you some resources where you can get more info about this:
In this "Patterns of Enterprise Application Architecture" book, Martin Fowler, explains where to use one or another, here is the catalog. Take a look at Data Source Architectural Patterns vs. Object-Relational Behavioral Patterns:
PEAA Catalog
DAO ( Data Access Object ) is part of the core J2EE patterns catalog:
The DAO pattern
This is a starter tutorial for Hibernate:
Hibernate
The official page of Toplink:
Toplink
Finally, I "think" the good thing about JPA is that you may change providers later.
Start simple and then evolve.
I hope this helps.
A: It does seem like it would be overkill for a very simple application, especially if you don't have plans to expand on it ever. However, it also seems like it could be worthwhile to use those with this simple application so that you have a better understanding of how they work for next time you have something that could use them.
A: Do you mean plain old JDBC?
A small project might be a good opportunity to pick up one of the ORM frameworks, especially if you have the time.
Without more information it's hard to provide a recommendation one way or another however.
A: My rule of thumb is if it's read-only, I'm willing to do it in JDBC, although I prefer to use an empty Hibernate project with SQLQuery to take advantage of Hibernate's type mapping. Once I have to do writes, I go with Hibernate because it's so much easier to set a few attributes and then call save than to set each column individually. And when you have to start optimizing to avoid updates on unchanged objects, you're way better off with an OR/M and its dirty checking. Dealing with foreign key relationships is another sign that you need to map it once and then use the getters. The same logic would apply to Toplink, although unless they've added something like HQL in the 3 years since I used it, Hibernate would be much better for this kind of transition from pure SQL. Keep in mind that you don't have to map every object/table, just the ones where there's a clear advantage. In my experience, most projects that don't use an existing OR/M end up building a new one, which is a bad idea.
A: The best way to learn ORM is on a small project. Start on this project.
Once you get the hang of it, you'll use ORM for everything.
There's nothing too small for ORM. After your first couple of projects, you'll find that you can't work any other way. The ORM mapping generally makes more sense than almost any other way of working.
A: Look at the various toplink guides here, they have intro, examples, scenarios etc
http://docs.oracle.com/cd/E14571_01/web.1111/b32441/toc.htm
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Real (Great Circle) distance in PostGIS with lat/long SRID? I'm using a lat/long SRID in my PostGIS database (-4326). I would like to find the nearest points to a given point in an efficient manner. I tried doing an
ORDER BY ST_Distance(point, ST_GeomFromText(?,-4326))
which gives me ok results in the lower 48 states, but up in Alaska it gives me garbage. Is there a way to do real distance calculations in PostGIS, or am I going to have to give a reasonable sized buffer and then calculate the great circle distances and sort the results in the code afterwards?
A: You are looking for ST_distance_sphere(point,point) or st_distance_spheroid(point,point).
See:
http://postgis.refractions.net/documentation/manual-1.3/ch06.html#distance_sphere
http://postgis.refractions.net/documentation/manual-1.3/ch06.html#distance_spheroid
This is normally referred to as a geodesic or geodetic distance... while the two terms have slightly different meanings, they tend to be used interchangeably.
Alternatively, you can project the data and use the standard st_distance function... this is only practical over short distances (using UTM or state plane) or if all distances are relative to a one or two points (equidistant projections).
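For example, a nearest-points query with the sphere variant might look like this (the places table and name column are hypothetical; the function is spelled ST_Distance_Sphere in older PostGIS releases and ST_DistanceSphere in newer ones):
SELECT name
FROM places
ORDER BY ST_Distance_Sphere(point, ST_GeomFromText('POINT(-147.7 64.8)', 4326))
LIMIT 10;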
A: PostGIS 1.5 handles true globe distances using lat/longs and meters, via its geography type. It is aware that lat/long is angular in nature and wraps around at the 360-degree line.
A: This is from SQL Server, and I use Haversine for a ridiculously fast distance that may suffer from your Alaska issue (can be off by a mile):
ALTER function [dbo].[getCoordinateDistance]
(
@Latitude1 decimal(16,12),
@Longitude1 decimal(16,12),
@Latitude2 decimal(16,12),
@Longitude2 decimal(16,12)
)
returns decimal(16,12)
as
/*
FUNCTION: getCoordinateDistance
Computes the Great Circle distance in kilometers
between two points on the Earth using the
Haversine formula distance calculation.
Input Parameters:
@Longitude1 - Longitude in degrees of point 1
@Latitude1 - Latitude in degrees of point 1
@Longitude2 - Longitude in degrees of point 2
@Latitude2 - Latitude in degrees of point 2
*/
begin
declare @radius decimal(16,12)
declare @lon1 decimal(16,12)
declare @lon2 decimal(16,12)
declare @lat1 decimal(16,12)
declare @lat2 decimal(16,12)
declare @a decimal(16,12)
declare @distance decimal(16,12)
-- Sets average radius of Earth in Kilometers
set @radius = 6366.70701949371
-- Convert degrees to radians
set @lon1 = radians( @Longitude1 )
set @lon2 = radians( @Longitude2 )
set @lat1 = radians( @Latitude1 )
set @lat2 = radians( @Latitude2 )
set @a = sqrt(square(sin((@lat2-@lat1)/2.0E)) +
(cos(@lat1) * cos(@lat2) * square(sin((@lon2-@lon1)/2.0E))) )
set @distance =
@radius * ( 2.0E *asin(case when 1.0E < @a then 1.0E else @a end ) )
return @distance
end
Vincenty is slow, but accurate to within 1 mm (and I only found a JavaScript implementation of it):
/*
* Calculate geodesic distance (in m) between two points specified by latitude/longitude (in numeric degrees)
* using Vincenty inverse formula for ellipsoids
*/
// Note: relies on the Number.prototype.toRad helper, defined here if absent
if (typeof Number.prototype.toRad === "undefined") {
  Number.prototype.toRad = function() { return this * Math.PI / 180; };
}
function distVincenty(lat1, lon1, lat2, lon2) {
var a = 6378137, b = 6356752.3142, f = 1/298.257223563; // WGS-84 ellipsiod
var L = (lon2-lon1).toRad();
var U1 = Math.atan((1-f) * Math.tan(lat1.toRad()));
var U2 = Math.atan((1-f) * Math.tan(lat2.toRad()));
var sinU1 = Math.sin(U1), cosU1 = Math.cos(U1);
var sinU2 = Math.sin(U2), cosU2 = Math.cos(U2);
var lambda = L, lambdaP = 2*Math.PI;
var iterLimit = 20;
while (Math.abs(lambda-lambdaP) > 1e-12 && --iterLimit>0) {
var sinLambda = Math.sin(lambda), cosLambda = Math.cos(lambda);
var sinSigma = Math.sqrt((cosU2*sinLambda) * (cosU2*sinLambda) +
(cosU1*sinU2-sinU1*cosU2*cosLambda) * (cosU1*sinU2-sinU1*cosU2*cosLambda));
if (sinSigma==0) return 0; // co-incident points
var cosSigma = sinU1*sinU2 + cosU1*cosU2*cosLambda;
var sigma = Math.atan2(sinSigma, cosSigma);
var sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma;
var cosSqAlpha = 1 - sinAlpha*sinAlpha;
var cos2SigmaM = cosSigma - 2*sinU1*sinU2/cosSqAlpha;
if (isNaN(cos2SigmaM)) cos2SigmaM = 0; // equatorial line: cosSqAlpha=0 (§6)
var C = f/16*cosSqAlpha*(4+f*(4-3*cosSqAlpha));
lambdaP = lambda;
lambda = L + (1-C) * f * sinAlpha *
(sigma + C*sinSigma*(cos2SigmaM+C*cosSigma*(-1+2*cos2SigmaM*cos2SigmaM)));
}
if (iterLimit==0) return NaN // formula failed to converge
var uSq = cosSqAlpha * (a*a - b*b) / (b*b);
var A = 1 + uSq/16384*(4096+uSq*(-768+uSq*(320-175*uSq)));
var B = uSq/1024 * (256+uSq*(-128+uSq*(74-47*uSq)));
var deltaSigma = B*sinSigma*(cos2SigmaM+B/4*(cosSigma*(-1+2*cos2SigmaM*cos2SigmaM)-
B/6*cos2SigmaM*(-3+4*sinSigma*sinSigma)*(-3+4*cos2SigmaM*cos2SigmaM)));
var s = b*A*(sigma-deltaSigma);
s = s.toFixed(3); // round to 1mm precision
return s;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: 64-bit Alternative for Microsoft Jet Microsoft has chosen to not release a 64-bit version of Jet, their database driver for Access. Does anyone know of a good alternative?
Here are the specific features that Jet supports that I need:
* Multiple users can connect to the database over a network.
* Users can use Windows Explorer to copy the database while it is open without risking corruption. Access currently does this with enough reliability for what my customers need.
* Works well in C++ without requiring .NET.
Alternatives I've considered that I do not think could work (though my understanding could be incorrect):
* SQLite: If multiple users connect to the database over a network, it will become corrupted.
* Firebird: Copying a database that is in use can corrupt the original database.
* SQL Server: Files in use are locked and cannot be copied.
* VistaDB: This appears to be .NET-specific.
* Compile in 32-bit and use WOW64: There is another dependency that requires us to compile in 64-bit, even though we don't use any 64-bit functionality.
A: Regarding "Users can copy the database while it is open without risking corruption": you can't do that with any database file that multiple users and/or processes are modifying.
A: Luckily, things have changed in the past two years:
Since Office 2010 is available in a 64-bit version, Microsoft had to create a 64-bit version of their Jet Engine. According to the Microsoft Customer Service blog, the Microsoft Access Database Engine 2010 Redistributable contains a 64-bit driver, which is able to access recent versions of the Microsoft Access database format.
A: What you're looking for is SQL Server Express with the portable .mdf files. To get around the copying limitation you need to make sure that the software in question doesn't keep connections open (i.e. create a disconnected data access layer).
A: Try to have a look at http://www.vistadb.net/
A: @Orion: Agreed, OP would be advised to go with SQL 2005 Express (if possible). The deal breaker is being able to copy the DB while in use/attached which is out of the question with SQL without using some kind of backup tool that can copy 'in use' files.
Another way would be to automate a backup and restore to roaming machine but this is getting a long way away from being able to just grab a copy of the file.
A: Another alternative you can look at is SQL Server Compact Edition (CE). I believe this has 64bit binaries.
I also agree with Orion and Kev about the copying the database.
A: What I am going to do is to create a separate 32-bit executable that connects to Jet that my 64-bit application can communicate with through COM.
This satisfies my general requirement of "work like Jet", because it is Jet. My customers don't get the benefit of 64-bit, but the other requirements are more important.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: ASP.NET: ICollection Constructor Not Found? I have a ASP.NET application running on a remote web server and I just started getting this error. I can't seem to reproduce it in my development environment:
Method not found: 'Void System.Collections.Generic.ICollection`1..ctor()'.
Could this be due to some misconfiguration of .NET Framework or IIS 6?
Update:
I disassembled the code in the DLL and it seems like the compiler is incorrectly optimizing the code. (Note that Set is a class that implements a set of unique objects. It inherits from IEnumerable.) This line:
Set<int> set = new Set<int>();
Is compiled into this line:
Set<int> set = (Set<int>) new ICollection<CalendarModule>();
The CalendarModule class is a totally unrelated class!! Has anyone ever noticed .NET incorrectly compiling code like this before?
A: Are the .NET versions on both systems the same inc. the same service pack?
A: Is your IIS setup to use .NET 2.0? If not, change it to 2.0. If you can't see 2.0 in the list then you'll need to run aspnet_regiis from the 2.0 framework directory.
A: This was caused by a bug in the aspnet_merge tool, which incorrectly merged optimized assemblies. It can be solved by either not merging the assemblies or not optimizing them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: throw exception from a JSP Is it possible to throw an exception in a JSP without using scriptlet code?
A: You really shouldn't be doing anything at the JSP layer that explicitly throws exceptions. The reason you don't want to use scriptlets in JSPs is because that puts application logic in your view. Throwing an exception is inherently application logic, so it doesn't belong in your JSP, scriptlet or not.
A: You can throw an exception if you do this:
<c:out value="${1/0}" />
or something that is similarly "illegal"
Ideally though, since JSPs are associated with the view...you don't want to throw an exception. You want to catch them with <c:catch>
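For completeness, the catch side looks like this:
<c:catch var="ex">
  <c:out value="${1/0}" />
</c:catch>
<c:if test="${not empty ex}">
  Something went wrong: <c:out value="${ex.message}" />
</c:if>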
A: You could have a bean with a getter method that throws the exception, then have the JSP access the bean property. I'm not sure that'd be an actual improvement over a scriptlet, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: NLP: Qualitatively "positive" vs "negative" sentence I need your help in determining the best approach for analyzing industry-specific sentences (e.g. movie reviews) for "positive" vs "negative". I've seen libraries such as OpenNLP before, but it's too low-level - it just gives me the basic sentence composition; what I need is a higher-level structure:
- hopefully with wordlists
- hopefully trainable on my set of data
Thanks!
A: Some approaches to sentiment analysis use strategies popular on other text classification tasks. The most common being transforming your film review into a word vector, and feeding it into a classifier algorithm as training data. Most popular data mining packages can help you here. You could have a look at this tutorial on sentiment classification illustrating how to do an experiment using the open source RapidMiner toolkit.
Incidentally, there is a good data set made available for research purposes related to detecting opinion in film reviews. It is based on IMDb user reviews, and you can check much related research work in the area and how it uses the data set.
It's worth bearing in mind that the effectiveness of these methods can only be judged from a statistical viewpoint, so you can pretty much assume there will be misclassifications and cases where opinion is hard to detect. As already noted in this thread, detecting things like irony and sarcasm can be very difficult indeed.
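For illustration, a minimal bag-of-words classifier along those lines (using scikit-learn as one of many possible toolkits; the training data here is obviously toy-sized):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

reviews = ["a wonderful, moving film", "dull plot and wooden acting"]
labels = ["positive", "negative"]

vectorizer = CountVectorizer()          # review -> word-count vector
X = vectorizer.fit_transform(reviews)

clf = MultinomialNB().fit(X, labels)    # train the classifier
print(clf.predict(vectorizer.transform(["a wonderful story"])))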
A: What you are looking for is commonly dubbed Sentiment Analysis. Typically, sentiment analysis is not able to handle delicate subtleties, like sarcasm or irony, but it fares pretty well if you throw a large set of data at it.
Sentiment analysis usually needs quite a bit of pre-processing. At least tokenization, sentence boundary detection and part-of-speech tagging. Sometimes, syntactic parsing can be important. Doing it properly is an entire branch of research in computational linguistics, and I wouldn't advise you with coming up with your own solution unless you take your time to study the field first.
OpenNLP has some tools to aid sentiment analysis, but if you want something more serious, you should look into the LingPipe toolkit. It has some built-in SA-functionality and a nice tutorial. And you can train it on your own set of data, but don't think that it is entirely trivial :-).
Googling for the term will probably also give you some resources to work with. If you have any more specific question, just ask, I'm watching the nlp-tag closely ;-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How to consume a web service from VB6? I need to consume an external web service from my VB6 program. I want to be able to deploy my program without the SOAP toolkit, if possible, but that's not a requirement. I do not have the web service source and I didn't create it. It is a vendor-provided service.
So outside of the SOAP toolkit, what is the best way to consume a web service from VB6?
A: .NET has a good support for Web Services since day one, so you can develop your Web Service client logic in .NET as a .dll library/assembly and use it in VB6 app via COM Interop.
A: Assuming that you're running on Windows XP Professional or above, one interesting method is to use the SOAP moniker. Here's an example, lifted from some MSDN page. I don't know if this particular service works, but you get the idea...
set SoapObj = GetObject("soap:wsdl=http://www.xmethods.net/sd/TemperatureService.wsdl")
WScript.Echo "Fairbanks Temperature = " & SoapObj.getTemp("99707")
This mechanism also works from VBScript. Which is nice.
A: Pocketsoap works very well. To generate your objects use the WSDL generator. Using this you don't have to parse anything yourself, plus everything is nice and strongly typed.
A: I use this function to get data from a web service.
Private Function HttpGetRequest(url As String) As DOMDocument
    ' Requires a project reference to Microsoft XML, v6.0 (MSXML6)
    Dim req As XMLHTTP60
    Set req = New XMLHTTP60
    req.Open "GET", url, False
    req.send ""
    Dim resp As DOMDocument
    If req.responseText <> vbNullString Then
        Set resp = New DOMDocument60
        resp.loadXML req.responseText
    Else
        Set resp = req.responseXML
    End If
    Set HttpGetRequest = resp
End Function
A: Check out this article by Scott Swigart on the MSDN VB 6.0 Resource Center.
Calling Web Services from Visual Basic 6, the Easy Way
A: I've had some measure of success so far using PocketSOAP to connect to the Salesforce API. I could not use the WSDL Wizard because it generates wrapper class filenames using the first 23 characters of the call names, and this results in duplicates. Nevertheless, PocketSOAP has been working well enough for me without the wizard, and it's much more straightforward than using XMLHTTP with DOMDocument.
I also looked into making a wrapper in .NET or using one of the "MS Office {MSO version} Web Services Toolkit" libraries, but there were significant deployment hassles with those options. PocketSOAP is a simple COM DLL, not dependent on some particular version of MS Office, and is licensed under MPL.
A: The SOAP toolkit is arguably the best you could get. Trying to do the same thing without it would require considerable extra effort. You need to have quite serious reasons to do that.
The format of the SOAP messages is not really easy to read or write manually and a third-party library is highly advised.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Is there a .NET equivalent of Perl's LWP / WWW::Mechanize? After working with .NET's HttpWebRequest/Response objects, I'd rather shoot myself than use this to crawl through web sites. I'm looking for an existing .NET library that can fetch URLs, and give you the ability to follow links, extract/fill in/submit forms on the page, etc. Perl's LWP and WWW::Mechanize modules do this very well, but I'm working with a .NET project.
I've come across the HTML Agility Pack, which looks awesome, but it stops short of simulating links/forms.
Does such a tool already exist?
A: Somebody built a bit of code to run as an addon to the HTML Agility Pack (which I also love) that allows you to do a bit of form tinkering:
http://apps.ultravioletconsulting.com/projects/uvcwebtransform/docs/class_html_agility_pack_1_1_add_ons_1_1_form_processor_1_1_form_processor.html
I read a review that says it's not WWW::Mechanize, but it's a great start. The code is provided, so you might be able to easily extend it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: ASP.Net Web Site Won't Compile, But Works Anyway? I have an ASP.Net 2.0 web site, using the DotNetNuke framework (4.09), and it will not compile, but when I hit the site in a browser, it works. Even the parts that don't compile will work. How is IIS able to compile and run this site, when Visual Studio can't? Everything is the same in both places... I copied the entire web site from the remote server on to my local machine, then I set it up in IIS the same way. On my local machine, Visual Studio can't compile the site, but it still runs. How can this be possible?
The specific errors are not important, as there are 189 of them, from every possible part of the site. I'm not trying to fix the errors... what I want to know is how it's possible for the web server to run the site, regardless of the errors. Please pay attention to what I have written - everything is exactly the same in both places. There are no missing DLLs, no different configurations, nothing on the machine itself... remember, the site runs fine on my local machine.
A: Is this a web site or a web application? If it's a web application, you're probably still running off the last successfully built bits in the bin.
A: The site is using old dlls, or possibly you have references missing in your local version that the server has just fine.
As Mitchel said, we need to see the error before we can really answer your question.
A: To give you an answer on this we would need to know what the errors are.
A: Your local machine cached the 'working' copy and is using that maybe?
A: The site was compiled successfully at one point, as it works on the remote server. Thus, copying it to your local machine and hitting the local site will also work. However, there can be several reasons why you can't re-compile it on your local machine, including: missing references, web.config entries, third-party control licensing, etc.
I realize you are not trying to correct the 189 errors, but there are clues, if not answers, in the error listing that will get you moving in the right direction.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I trim leading/trailing whitespace in a standard way? Is there a clean, preferably standard method of trimming leading and trailing whitespace from a string in C? I'd roll my own, but I would think this is a common problem with an equally common solution.
A: Late to the trim party
Features:
1. Trim the beginning quickly, as in a number of other answers.
2. After going to the end, trimming the right with only 1 test per loop (like @jfm3's, but this works for an all-whitespace string).
3. To avoid undefined behavior when char is a signed char, cast *s to unsigned char.
Character handling: "In all cases the argument is an int, the value of which shall be representable as an unsigned char or shall equal the value of the macro EOF. If the argument has any other value, the behavior is undefined." (C11 §7.4 1)
#include <ctype.h>
// Return a pointer to the trimmed string
char *string_trim_inplace(char *s) {
while (isspace((unsigned char) *s)) s++;
if (*s) {
char *p = s;
while (*p) p++;
while (isspace((unsigned char) *(--p)));
p[1] = '\0';
}
// If desired, shift the trimmed string
return s;
}
@chqrlie commented the above does not shift the trimmed string. To do so....
// Return a pointer to the (shifted) trimmed string
char *string_trim_inplace(char *s) {
char *original = s;
size_t len = 0;
while (isspace((unsigned char) *s)) {
s++;
}
if (*s) {
char *p = s;
while (*p) p++;
while (isspace((unsigned char) *(--p)));
p[1] = '\0';
// len = (size_t) (p - s); // older errant code
len = (size_t) (p - s + 1); // Thanks to @theriver
}
return (s == original) ? s : memmove(original, s, len + 1);
}
A: Here's one that shifts the string into the first position of your buffer. You might want this behavior so that if you dynamically allocated the string, you can still free it on the same pointer that trim() returns:
char *trim(char *str)
{
size_t len = 0;
char *frontp = str;
char *endp = NULL;
if( str == NULL ) { return NULL; }
if( str[0] == '\0' ) { return str; }
len = strlen(str);
endp = str + len;
/* Move the front and back pointers to address the first non-whitespace
* characters from each end.
*/
while( isspace((unsigned char) *frontp) ) { ++frontp; }
if( endp != frontp )
{
while( isspace((unsigned char) *(--endp)) && endp != frontp ) {}
}
if( frontp != str && endp == frontp )
*str = '\0';
else if( str + len - 1 != endp )
*(endp + 1) = '\0';
/* Shift the string so that it starts at str so that if it's dynamically
* allocated, we can still free it on the returned pointer. Note the reuse
* of endp to mean the front of the string buffer now.
*/
endp = str;
if( frontp != str )
{
while( *frontp ) { *endp++ = *frontp++; }
*endp = '\0';
}
return str;
}
Test for correctness:
#include <stdio.h>
#include <string.h>
#include <ctype.h>
/* Paste function from above here. */
int main()
{
/* The test prints the following:
[nothing to trim] -> [nothing to trim]
[ trim the front] -> [trim the front]
[trim the back ] -> [trim the back]
[ trim front and back ] -> [trim front and back]
[ trim one char front and back ] -> [trim one char front and back]
[ trim one char front] -> [trim one char front]
[trim one char back ] -> [trim one char back]
[ ] -> []
[ ] -> []
[a] -> [a]
[] -> []
*/
char *sample_strings[] =
{
"nothing to trim",
" trim the front",
"trim the back ",
" trim front and back ",
" trim one char front and back ",
" trim one char front",
"trim one char back ",
" ",
" ",
"a",
"",
NULL
};
char test_buffer[64];
char comparison_buffer[64];
size_t index, compare_pos;
for( index = 0; sample_strings[index] != NULL; ++index )
{
// Fill buffer with known value to verify we do not write past the end of the string.
memset( test_buffer, 0xCC, sizeof(test_buffer) );
strcpy( test_buffer, sample_strings[index] );
memcpy( comparison_buffer, test_buffer, sizeof(comparison_buffer));
printf("[%s] -> [%s]\n", sample_strings[index],
trim(test_buffer));
for( compare_pos = strlen(comparison_buffer);
compare_pos < sizeof(comparison_buffer);
++compare_pos )
{
if( test_buffer[compare_pos] != comparison_buffer[compare_pos] )
{
printf("Unexpected change to buffer @ index %u: %02x (expected %02x)\n",
compare_pos, (unsigned char) test_buffer[compare_pos], (unsigned char) comparison_buffer[compare_pos]);
}
}
}
return 0;
}
Source file was trim.c. Compiled with 'cc -Wall trim.c -o trim'.
A: Here's a solution similar to @adam-rosenfield's in-place modification routine, but without needlessly resorting to strlen(). Like @jkramer's, the string is left-adjusted within the buffer, so you can free the same pointer. Not optimal for large strings since it does not use memmove. Includes the ++/-- operators that @jfm3 mentions. FCTX-based unit tests included.
#include <ctype.h>
void trim(char * const a)
{
char *p = a, *q = a;
while (isspace((unsigned char) *q)) ++q;
while (*q) *p++ = *q++;
*p = '\0';
while (p > a && isspace((unsigned char) *--p)) *p = '\0';
}
/* See http://fctx.wildbearsoftware.com/ */
#include "fct.h"
FCT_BGN()
{
FCT_QTEST_BGN(trim)
{
{ char s[] = ""; trim(s); fct_chk_eq_str("", s); } // Trivial
{ char s[] = " "; trim(s); fct_chk_eq_str("", s); } // Trivial
{ char s[] = "\t"; trim(s); fct_chk_eq_str("", s); } // Trivial
{ char s[] = "a"; trim(s); fct_chk_eq_str("a", s); } // NOP
{ char s[] = "abc"; trim(s); fct_chk_eq_str("abc", s); } // NOP
{ char s[] = " a"; trim(s); fct_chk_eq_str("a", s); } // Leading
{ char s[] = " a c"; trim(s); fct_chk_eq_str("a c", s); } // Leading
{ char s[] = "a "; trim(s); fct_chk_eq_str("a", s); } // Trailing
{ char s[] = "a c "; trim(s); fct_chk_eq_str("a c", s); } // Trailing
{ char s[] = " a "; trim(s); fct_chk_eq_str("a", s); } // Both
{ char s[] = " a c "; trim(s); fct_chk_eq_str("a c", s); } // Both
// Villemoes pointed out an edge case that corrupted memory. Thank you.
// http://stackoverflow.com/questions/122616/#comment23332594_4505533
{
char s[] = "a "; // Buffer with whitespace before s + 2
trim(s + 2); // Trim " " containing only whitespace
fct_chk_eq_str("", s + 2); // Ensure correct result from the trim
fct_chk_eq_str("a ", s); // Ensure preceding buffer not mutated
}
// doukremt suggested I investigate this test case but
// did not indicate the specific behavior that was objectionable.
// http://stackoverflow.com/posts/comments/33571430
{
char s[] = " foobar"; // Shifted across whitespace
trim(s); // Trim
fct_chk_eq_str("foobar", s); // Leading string is correct
// Here is what the algorithm produces:
char r[16] = { 'f', 'o', 'o', 'b', 'a', 'r', '\0', ' ',
' ', 'f', 'o', 'o', 'b', 'a', 'r', '\0'};
fct_chk_eq_int(0, memcmp(s, r, sizeof(s)));
}
}
FCT_QTEST_END();
}
FCT_END();
A: I'm not sure what you consider "painless."
C strings are pretty painful. We can find the first non-whitespace character position trivially:
while (isspace(* p)) p++;
We can find the last non-whitespace character position with two similar trivial moves:
while (* q) q++;
do { q--; } while (isspace(* q));
(I have spared you the pain of using the * and ++ operators at the same time.)
The question now is what do you do with this? The datatype at hand isn't really a big robust abstract String that is easy to think about, but instead really barely any more than an array of storage bytes. Lacking a robust data type, it is impossible to write a function that will do the same as Perl's chomp function. What would such a function in C return?
A: Another one, with one line doing the real job:
#include <stdio.h>
int main()
{
const char *target = " haha ";
char buf[256];
sscanf(target, "%255s", buf); // Trimming on both sides occurs here (but %s stops at the first interior space)
printf("<%s>\n", buf);
}
A: I didn't like most of these answers because they did one or more of the following...
* Returned a different pointer inside the original pointer's string (kind of a pain to juggle two different pointers to the same thing).
* Made gratuitous use of things like strlen() that pre-iterate the entire string.
* Used non-portable OS-specific lib functions.
* Backscanned.
* Used comparison to ' ' instead of isspace() so that TAB / CR / LF are preserved.
* Wasted memory with large static buffers.
* Wasted cycles with high-cost functions like sscanf/sprintf.
Here is my version:
void fnStrTrimInPlace(char *szWrite) {
const char *szWriteOrig = szWrite;
char *szLastSpace = szWrite, *szRead = szWrite;
int bNotSpace;
// SHIFT STRING, STARTING AT FIRST NON-SPACE CHAR, LEFTMOST
while( *szRead != '\0' ) {
bNotSpace = !isspace((unsigned char)(*szRead));
if( (szWrite != szWriteOrig) || bNotSpace ) {
*szWrite = *szRead;
szWrite++;
// TRACK POINTER TO LAST NON-SPACE
if( bNotSpace )
szLastSpace = szWrite;
}
szRead++;
}
// TERMINATE AFTER LAST NON-SPACE (OR BEGINNING IF THERE WAS NO NON-SPACE)
*szLastSpace = '\0';
}
A: If you're using glib, then you can use g_strstrip
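For reference, a minimal sketch of its use (it trims both ends of the buffer in place):
#include <glib.h>
#include <stdio.h>
int main(void) {
    char buf[] = "  hi  ";
    g_strstrip(buf);          /* modifies buf in place */
    printf("[%s]\n", buf);    /* prints [hi] */
    return 0;
}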
A: My solution. The string must be changeable. The advantage over some of the other solutions is that it moves the non-space part to the beginning, so you can keep using the old pointer, in case you have to free() it later.
void trim(char * s) {
char * p = s;
int l = strlen(p);
while(l > 0 && isspace((unsigned char) p[l - 1])) p[--l] = 0;
while(* p && isspace((unsigned char) * p)) ++p, --l;
memmove(s, p, l + 1);
}
This version creates a copy of the string with strndup() instead of editing it in place. strndup() requires _GNU_SOURCE, so maybe you need to make your own strndup() with malloc() and strncpy().
char * trim(char * s) {
while(* s && isspace((unsigned char) * s)) ++s;
int l = strlen(s);
while(l > 0 && isspace((unsigned char) s[l - 1])) --l;
return strndup(s, l);
}
A: Use a string library, for instance:
Ustr *s1 = USTR1(\7, " 12345 ");
ustr_sc_trim_cstr(&s1, " ");
assert(ustr_cmp_cstr_eq(s1, "12345"));
...as you say, this is a "common" problem; yes, you need to add a #include or so, and it's not included in libc, but don't go inventing your own hack job storing random pointers and size_t's; that way only leads to buffer overflows.
A: A bit late to the game, but I'll throw my routines into the fray. They're probably not the most absolute efficient, but I believe they're correct and they're simple (with rtrim() pushing the complexity envelope):
#include <ctype.h>
#include <string.h>
/*
Public domain implementations of in-place string trim functions
Michael Burr
michael.burr@nth-element.com
2010
*/
char* ltrim(char* s)
{
char* newstart = s;
while (isspace( (unsigned char) *newstart)) {
++newstart;
}
// newstart points to first non-whitespace char (which might be '\0')
memmove( s, newstart, strlen( newstart) + 1); // don't forget to move the '\0' terminator
return s;
}
char* rtrim( char* s)
{
char* end = s + strlen( s);
// find the last non-whitespace character
while ((end != s) && isspace( (unsigned char) *(end-1))) {
--end;
}
// at this point either (end == s) and s is either empty or all whitespace
// so it needs to be made empty, or
// end points just past the last non-whitespace character (it might point
// at the '\0' terminator, in which case there's no problem writing
// another there).
*end = '\0';
return s;
}
char* trim( char* s)
{
return rtrim( ltrim( s));
}
A: Very late to the party...
Single-pass forward-scanning solution with no backtracking. Every character in the source string is tested exactly twice. (So it should be faster than most of the other solutions here, especially if the source string has a lot of trailing spaces.)
This includes two solutions, one to copy and trim a source string into another destination string, and the other to trim the source string in place. Both functions use the same code.
The (modifiable) string is moved in-place, so the original pointer to it remains unchanged.
#include <stddef.h>
#include <ctype.h>
char * trim2(char *d, const char *s)
{
// Sanity checks
if (s == NULL || d == NULL)
return NULL;
// Skip leading spaces
const unsigned char * p = (const unsigned char *)s;
while (isspace(*p))
p++;
// Copy the string
unsigned char * dst = (unsigned char *)d; // d and s can be the same
unsigned char * end = dst;
while (*p != '\0')
{
if (!isspace(*dst++ = *p++))
end = dst;
}
// Truncate trailing spaces
*end = '\0';
return d;
}
char * trim(char *s)
{
return trim2(s, s);
}
A: If you can modify the string:
// Note: This function returns a pointer to a substring of the original string.
// If the given string was allocated dynamically, the caller must not overwrite
// that pointer with the returned value, since the original pointer must be
// deallocated using the same allocator with which it was allocated. The return
// value must NOT be deallocated using free() etc.
char *trimwhitespace(char *str)
{
char *end;
// Trim leading space
while(isspace((unsigned char)*str)) str++;
if(*str == 0) // All spaces?
return str;
// Trim trailing space
end = str + strlen(str) - 1;
while(end > str && isspace((unsigned char)*end)) end--;
// Write new null terminator character
end[1] = '\0';
return str;
}
If you can't modify the string, then you can use basically the same method:
// Stores the trimmed input string into the given output buffer, which must be
// large enough to store the result. If it is too small, the output is
// truncated.
size_t trimwhitespace(char *out, size_t len, const char *str)
{
if(len == 0)
return 0;
const char *end;
size_t out_size;
// Trim leading space
while(isspace((unsigned char)*str)) str++;
if(*str == 0) // All spaces?
{
*out = 0;
return 1;
}
// Trim trailing space
end = str + strlen(str) - 1;
while(end > str && isspace((unsigned char)*end)) end--;
end++;
// Set output size to minimum of trimmed string length and buffer size minus 1
out_size = (end - str) < len-1 ? (end - str) : len-1;
// Copy trimmed string and add null terminator
memcpy(out, str, out_size);
out[out_size] = 0;
return out_size;
}
A: Here's my C mini library for trimming left, right, both, all, in place and separate, and trimming a set of specified characters (or white space by default).
contents of strlib.h:
#ifndef STRLIB_H_
#define STRLIB_H_ 1
enum strtrim_mode_t {
STRLIB_MODE_ALL = 0,
STRLIB_MODE_RIGHT = 0x01,
STRLIB_MODE_LEFT = 0x02,
STRLIB_MODE_BOTH = 0x03
};
char *strcpytrim(char *d, // destination
char *s, // source
int mode,
char *delim
);
char *strtriml(char *d, char *s);
char *strtrimr(char *d, char *s);
char *strtrim(char *d, char *s);
char *strkill(char *d, char *s);
char *triml(char *s);
char *trimr(char *s);
char *trim(char *s);
char *kill(char *s);
#endif
contents of strlib.c:
#include <strlib.h>
char *strcpytrim(char *d, // destination
char *s, // source
int mode,
char *delim
) {
char *o = d; // save orig
char *e = 0; // end space ptr.
char dtab[256] = {0};
if (!s || !d) return 0;
if (!delim) delim = " \t\n\f";
while (*delim)
dtab[*delim++] = 1;
while ( (*d = *s++) != 0 ) {
if (!dtab[0xFF & (unsigned int)*d]) { // Not a match char
e = 0; // Reset end pointer
} else {
if (!e) e = d; // Found first match.
if ( mode == STRLIB_MODE_ALL || ((mode != STRLIB_MODE_RIGHT) && (d == o)) )
continue;
}
d++;
}
if (mode != STRLIB_MODE_LEFT && e) { // for everything but trim_left, delete trailing matches.
*e = 0;
}
return o;
}
// perhaps these could be inlined in strlib.h
char *strtriml(char *d, char *s) { return strcpytrim(d, s, STRLIB_MODE_LEFT, 0); }
char *strtrimr(char *d, char *s) { return strcpytrim(d, s, STRLIB_MODE_RIGHT, 0); }
char *strtrim(char *d, char *s) { return strcpytrim(d, s, STRLIB_MODE_BOTH, 0); }
char *strkill(char *d, char *s) { return strcpytrim(d, s, STRLIB_MODE_ALL, 0); }
char *triml(char *s) { return strcpytrim(s, s, STRLIB_MODE_LEFT, 0); }
char *trimr(char *s) { return strcpytrim(s, s, STRLIB_MODE_RIGHT, 0); }
char *trim(char *s) { return strcpytrim(s, s, STRLIB_MODE_BOTH, 0); }
char *kill(char *s) { return strcpytrim(s, s, STRLIB_MODE_ALL, 0); }
The one main routine does it all.
It trims in place if src == dst, otherwise,
it works like the strcpy routines.
It trims a set of characters specified in the string delim, or white space if null.
It trims left, right, both, and all (like tr).
There is not much to it, and it iterates over the string only once. Some folks might complain that trim right starts on the left, however, no strlen is needed which starts on the left anyway. (One way or another you have to get to the end of the string for right trims, so you might as well do the work as you go.) There may be arguments to be made about pipelining and cache sizes and such -- who knows. Since the solution works from left to right and iterates only once, it can be expanded to work on streams as well. Limitations: it does not work on unicode strings.
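For illustration, here is a hypothetical usage snippet (not part of the original library) showing the in-place mode and the custom-delimiter mode:
#include <stdio.h>
#include <strlib.h>
int main(void) {
    char buf[] = "  hello \t";
    trim(buf);                                         /* in place: "hello" */
    printf("[%s]\n", buf);
    char out[16];
    strcpytrim(out, "xxhixx", STRLIB_MODE_BOTH, "x");  /* trims 'x' instead of whitespace */
    printf("[%s]\n", out);                             /* prints [hi] */
    return 0;
}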
A: Here is my attempt at a simple, yet correct in-place trim function.
void trim(char *str)
{
int i;
int begin = 0;
int end = strlen(str) - 1;
while (isspace((unsigned char) str[begin]))
begin++;
while ((end >= begin) && isspace((unsigned char) str[end]))
end--;
// Shift all characters back to the start of the string array.
for (i = begin; i <= end; i++)
str[i - begin] = str[i];
str[i - begin] = '\0'; // Null terminate string.
}
A: Just to keep this growing, one more option with a modifiable string:
void trimString(char *string)
{
size_t i = 0, j = strlen(string);
while (j > 0 && isspace((unsigned char)string[j - 1])) string[--j] = '\0';
while (isspace((unsigned char)string[i])) i++;
if (i > 0) memmove(string, string + i, j - i + 1);
}
A: I know there are many answers already, but I post mine here to see if my solution is good enough.
// Trims leading whitespace chars from `str`, then copies at most `n - 1` chars
// into the `out` buffer (copying stops early if a '\0' occurs),
// and finally appends '\0' after the last non-trailing-whitespace char.
// Returns the length of the trimmed string, not counting '\0', like strlen().
size_t trim(char *out, size_t n, const char *str)
{
// do nothing
if(n == 0) return 0;
// ptr stop at the first non-leading space char
    while(isspace((unsigned char)*str)) str++;
if(*str == '\0') {
out[0] = '\0';
return 0;
}
size_t i = 0;
// copy char to out until '\0' or i == n - 1
for(i = 0; i < n - 1 && *str != '\0'; i++){
out[i] = *str++;
}
// deal with the trailing space
    while(isspace((unsigned char)out[--i]));
out[++i] = '\0';
return i;
}
A: The easiest way to skip leading spaces in a string is, imho,
#include <stdio.h>
int main()
{
   char *foo="  teststring  ";
   char bar[64];
   sscanf(foo,"%63s",bar);
   printf("String is >%s<\n",bar);
   return 0;
}
Note that %s also stops at the first embedded space, so this only works for strings without internal whitespace.
A: Ok this is my take on the question. I believe it's the most concise solution that modifies the string in place (free will work) and avoids any UB. For small strings, it's probably faster than a solution involving memmove.
void stripWS_LT(char *str)
{
char *a = str, *b = str;
while (isspace((unsigned char)*a)) a++;
while (*b = *a++) b++;
while (b > str && isspace((unsigned char)*--b)) *b = 0;
}
A: #include <ctype.h>
#include <string.h>
char *trim_space(char *in)
{
char *out = NULL;
int len;
if (in) {
len = strlen(in);
        while(len && isspace((unsigned char)in[len - 1])) --len;
        while(len && *in && isspace((unsigned char)*in)) ++in, --len;
if (len) {
out = strndup(in, len);
}
}
return out;
}
isspace helps to trim all white spaces.
*
*Run a first loop to check from last byte for space character and reduce the length variable
*Run a second loop to check from first byte for space character and reduce the length variable and increment char pointer.
*Finally if length variable is more than 0, then use strndup to create new string buffer by excluding spaces.
A: This one is short and simple, uses for-loops and doesn't overwrite the string boundaries.
You can replace the test with isspace() if needed.
void trim (char *s) // trim leading and trailing spaces+tabs
{
int i,j,k, len;
j=k=0;
len = strlen(s);
// find start of string
for (i=0; i<len; i++) if ((s[i]!=32) && (s[i]!=9)) { j=i; break; }
// find end of string+1
for (i=len-1; i>=j; i--) if ((s[i]!=32) && (s[i]!=9)) { k=i+1; break;}
if (k<=j) {s[0]=0; return;} // all whitespace (j==k==0)
len=k-j;
for (i=0; i<len; i++) s[i] = s[j++]; // shift result to start of string
s[i]=0; // end the string
}//_trim
A: If, and ONLY IF there's only one contiguous block of text between whitespace, you can use a single call to strtok(3), like so:
char *trimmed = strtok(input, "\r\t\n ");
This works for strings like the following:
" +1.123.456.7890 "
" 01-01-2020\n"
"\t2.523"
This will not work for strings that contain whitespace between blocks of non-whitespace, like " hi there ". It's probably better to avoid this approach, but now it's here in your toolbox if you need it.
A: Personally, I'd roll my own. You can use strtok, but you need to take care with doing so (particularly if you're removing leading characters) that you know what memory is what.
Getting rid of trailing spaces is easy, and pretty safe, as you can just put a 0 in over the top of the last space, counting back from the end. Getting rid of leading spaces means moving things around. If you want to do it in place (probably sensible) you can just keep shifting everything back one character until there's no leading space. Or, to be more efficient, you could find the index of the first non-space character, and shift everything back by that number. Or, you could just use a pointer to the first non-space character (but then you need to be careful in the same way as you do with strtok).
A: I'm only including code because the code posted so far seems suboptimal (and I don't have the rep to comment yet.)
void inplace_trim(char* s)
{
int start, end = strlen(s);
for (start = 0; isspace(s[start]); ++start) {}
if (s[start]) {
while (end > 0 && isspace(s[end-1]))
--end;
memmove(s, &s[start], end - start);
}
s[end - start] = '\0';
}
char* copy_trim(const char* s)
{
int start, end;
for (start = 0; isspace(s[start]); ++start) {}
for (end = strlen(s); end > 0 && isspace(s[end-1]); --end) {}
return strndup(s + start, end - start);
}
strndup() is a GNU extension. If you don't have it or something equivalent, roll your own. For example:
r = strdup(s + start);
r[end-start] = '\0';
A: #include "stdafx.h"
#include "malloc.h"
#include "string.h"
int main(int argc, char* argv[])
{
char *ptr = (char*)malloc(sizeof(char)*30);
strcpy(ptr," Hel lo wo rl d G eo rocks!!! by shahil sucks b i g tim e");
int i = 0, j = 0;
while(ptr[j]!='\0')
{
if(ptr[j] == ' ' )
{
j++;
ptr[i] = ptr[j];
}
else
{
i++;
j++;
ptr[i] = ptr[j];
}
}
printf("\noutput-%s\n",ptr);
return 0;
}
A: Most of the answers so far do one of the following:
*
*Backtrack at the end of the string (i.e. find the end of the string and then seek backwards until a non-space character is found,) or
*Call strlen() first, making a second pass through the whole string.
This version makes only one pass and does not backtrack. Hence it may perform better than the others, though only if it is common to have hundreds of trailing spaces (which is not unusual when dealing with the output of a SQL query.)
static char const WHITESPACE[] = " \t\n\r";
static void get_trim_bounds(char const *s,
char const **firstWord,
char const **trailingSpace)
{
char const *lastWord;
*firstWord = lastWord = s + strspn(s, WHITESPACE);
do
{
*trailingSpace = lastWord + strcspn(lastWord, WHITESPACE);
lastWord = *trailingSpace + strspn(*trailingSpace, WHITESPACE);
}
while (*lastWord != '\0');
}
char *copy_trim(char const *s)
{
char const *firstWord, *trailingSpace;
char *result;
size_t newLength;
get_trim_bounds(s, &firstWord, &trailingSpace);
newLength = trailingSpace - firstWord;
result = malloc(newLength + 1);
memcpy(result, firstWord, newLength);
result[newLength] = '\0';
return result;
}
void inplace_trim(char *s)
{
char const *firstWord, *trailingSpace;
size_t newLength;
get_trim_bounds(s, &firstWord, &trailingSpace);
newLength = trailingSpace - firstWord;
memmove(s, firstWord, newLength);
s[newLength] = '\0';
}
A: This is the shortest possible implementation I can think of:
static const char *WhiteSpace=" \n\r\t";
char* trim(char *t)
{
    char *e;
    if (t==NULL) return NULL;
    e=t+strlen(t);                                  // e initially points to the terminator
    while (e>t && strchr(WhiteSpace, *(e-1))) --e;  // back up over trailing whitespace
    *e=0;                                           // Null-terminate
    e=t+strspn (t,WhiteSpace);                      // Find first char that is not whitespace
    return e>t?memmove(t,e,strlen(e)+1):t;          // memmove string contents and terminator
}
A: These functions will modify the original buffer, so if dynamically allocated, the original
pointer can be freed.
#include <ctype.h>
#include <string.h>
void rstrip(char *string)
{
    int l;
    if (!string)
        return;
    l = strlen(string) - 1;
    while (l >= 0 && isspace((unsigned char)string[l]))
        string[l--] = 0;
}
void lstrip(char *string)
{
int i, l;
if (!string)
return;
l = strlen(string);
while (isspace(string[(i = 0)]))
while(i++ < l)
string[i-1] = string[i];
}
void strip(char *string)
{
lstrip(string);
rstrip(string);
}
A: What do you think about using the StrTrim function defined in the header Shlwapi.h? It is straightforward, rather than defining your own.
Details can be found on:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb773454(v=vs.85).aspx
If you have
char ausCaptain[]="GeorgeBailey ";
StrTrim(ausCaptain," ");
This will give ausCaptain as "GeorgeBailey" not "GeorgeBailey ".
A: To trim my strings from both sides I use the oldie but goody ;)
It can trim anything with an ASCII value less than a space, meaning that control chars will be trimmed also!
char *trimAll(char *strData)
{
unsigned int L = strlen(strData);
if(L > 0){ L--; }else{ return strData; }
size_t S = 0, E = L;
while((!(strData[S] > ' ') || !(strData[E] > ' ')) && (S >= 0) && (S <= L) && (E >= 0) && (E <= L))
{
if(strData[S] <= ' '){ S++; }
if(strData[E] <= ' '){ E--; }
}
if(S == 0 && E == L){ return strData; } // Nothing to be done
if((S >= 0) && (S <= L) && (E >= 0) && (E <= L)){
L = E - S + 1;
memmove(strData,&strData[S],L); strData[L] = '\0';
}else{ strData[0] = '\0'; }
return strData;
}
A: Here I use dynamic memory allocation to trim the input string to the function trimStr. First, we find how many non-space characters exist in the input string. Then, we allocate a character array of that size, taking care of the null terminator. When we use this function, we need to free the memory inside the main function. Note that this removes every space, not just leading and trailing ones.
#include<stdio.h>
#include<stdlib.h>
char *trimStr(char *str){
char *tmp = str;
printf("input string %s\n",str);
int nc = 0;
while(*tmp!='\0'){
if (*tmp != ' '){
nc++;
}
tmp++;
}
printf("total nonempty characters are %d\n",nc);
char *trim = NULL;
trim = malloc(sizeof(char)*(nc+1));
if (trim == NULL) return NULL;
tmp = str;
int ne = 0;
while(*tmp!='\0'){
if (*tmp != ' '){
trim[ne] = *tmp;
ne++;
}
tmp++;
}
trim[nc] = '\0';
printf("trimmed string is %s\n",trim);
return trim;
}
int main(void){
char str[] = " s ta ck ove r fl o w ";
char *trim = trimStr(str);
if (trim != NULL )free(trim);
return 0;
}
A: Here is how I do it. It trims the string in place, so no worry about deallocating a returned string or losing the pointer to an allocated string. It may not be the shortest answer possible, but it should be clear to most readers.
#include <ctype.h>
#include <string.h>
void trim_str(char *s)
{
const size_t s_len = strlen(s);
int i;
for (i = 0; i < s_len; i++)
{
if (!isspace( (unsigned char) s[i] )) break;
}
if (i == s_len)
{
// s is an empty string or contains only space characters
s[0] = '\0';
}
else
{
// s contains non-space characters
const char *non_space_beginning = s + i;
char *non_space_ending = s + s_len - 1;
while ( isspace( (unsigned char) *non_space_ending ) ) non_space_ending--;
size_t trimmed_s_len = non_space_ending - non_space_beginning + 1;
if (s != non_space_beginning)
{
// Non-space characters exist in the beginning of s
memmove(s, non_space_beginning, trimmed_s_len);
}
s[trimmed_s_len] = '\0';
}
}
A: #include <cctype>  // std::isspace
#include <cstring>    // std::memmove
char* strtrim(char* const str)
{
if (str != nullptr)
{
char const* begin{ str };
while (std::isspace(*begin))
{
++begin;
}
auto end{ begin };
auto scout{ begin };
while (*scout != '\0')
{
if (!std::isspace(*scout++))
{
end = scout;
}
}
auto /* std::ptrdiff_t */ const length{ end - begin };
if (begin != str)
{
std::memmove(str, begin, length);
}
str[length] = '\0';
}
return str;
}
A: As the other answers don't seem to mutate the string pointer directly, but rather rely on the return value, I thought I would provide this method which additionally does not use any libraries and so is appropriate for operating system style programming:
// only used for printf in main
#include <stdio.h>
// note the char ** means we can modify the address
void trimws(char **strp) {
  char *str;
  // check for NULL or empty string
  if (!*strp || !**strp)
    return;
  // go to the end of the string
  for (str = *strp; *str; str++)
    ;
  // back up one from the null terminator
  str--;
  // set trailing ws to null (without walking off the front)
  for (; str >= *strp && *str == ' '; str--)
    *str = 0;
  // increment past leading ws
  for (str = *strp; *str == ' '; str++)
    ;
  // set new begin address of string
  *strp = str;
}
int main(void) {
  char buf[256] = "   whitespace   ";
  // pointer must be a modifiable lvalue, so we make bufp
  char *bufp = buf;
  // pass in the address of the pointer
  trimws(&bufp);
  // prints : XXXwhitespaceXXX
  printf("XXX%sXXX\n", bufp);
  return 0;
}
A: IMO, it can be done without strlen and isspace.
char *
trim (char * s, char c)
{
unsigned o = 0;
char * sb = s;
for (; *s == c; s++)
o++;
for (; *s != '\0'; s++)
continue;
for (; s - o > sb && *--s == c;)
continue;
if (o > 0)
memmove(sb, sb + o, s + 1 - o - sb);
if (*s != '\0')
*(s + 1 - o) = '\0';
return sb;
}
A: #include<stdio.h>
#include<ctype.h>
main()
{
char sent[10]={' ',' ',' ','s','t','a','r','s',' ',' '};
int i,j=0;
char rec[10];
for(i=0;i<=10;i++)
{
if(!isspace(sent[i]))
{
rec[j]=sent[i];
j++;
}
}
printf("\n%s\n",rec);
}
A: C++ STL style
std::string Trimed(const std::string& s)
{
std::string::const_iterator begin = std::find_if(s.begin(),
s.end(),
[](char ch) { return !std::isspace(ch); });
std::string::const_iterator end = std::find_if(s.rbegin(),
s.rend(),
[](char ch) { return !std::isspace(ch); }).base();
return std::string(begin, end);
}
http://ideone.com/VwJaq9
A: void trim(char* string) {
    int length = strlen(string);
    int i=0;
    while(string[0] ==' ') {
        for(i=0; i<length; i++) {
            string[i] = string[i+1];
        }
        length--;
    }
    for(i=length-1; i>0; i--) {
        if(string[i] == ' ') {
            string[i] = '\0';
        } else {
            break;
        }
    }
}
A: Here is a function to do what you want. It should take care of degenerate cases where the string is all whitespace. You must pass in an output buffer and the length of the buffer, which means that you have to pass in a buffer that you allocate.
void str_trim(char *output, const char *text, int32 max_len)
{
int32 i, j, length;
length = strlen(text);
if (max_len < 0) {
max_len = length + 1;
}
for (i=0; i<length; i++) {
if ( (text[i] != ' ') && (text[i] != '\t') && (text[i] != '\n') && (text[i] != '\r')) {
break;
}
}
if (i == length) {
// handle lines that are all whitespace
output[0] = 0;
return;
}
for (j=length-1; j>=0; j--) {
if ( (text[j] != ' ') && (text[j] != '\t') && (text[j] != '\n') && (text[j] != '\r')) {
break;
}
}
length = j + 1 - i;
strncpy(output, text + i, length);
output[length] = 0;
}
The if statements in the loops can probably be replaced with isspace(text[i]) or isspace(text[j]) to make the lines a little easier to read. I think that I had them set this way because there were some characters that I didn't want to test for, but it looks like I'm covering all whitespace now :-)
A: Here is what I disclosed regarding the question in Linux kernel code:
/**
* skip_spaces - Removes leading whitespace from @s.
* @s: The string to be stripped.
*
* Returns a pointer to the first non-whitespace character in @s.
*/
char *skip_spaces(const char *str)
{
while (isspace(*str))
++str;
return (char *)str;
}
/**
* strim - Removes leading and trailing whitespace from @s.
* @s: The string to be stripped.
*
* Note that the first trailing whitespace is replaced with a %NUL-terminator
* in the given string @s. Returns a pointer to the first non-whitespace
* character in @s.
*/
char *strim(char *s)
{
size_t size;
char *end;
size = strlen(s);
if (!size)
return s;
end = s + size - 1;
while (end >= s && isspace(*end))
end--;
*(end + 1) = '\0';
return skip_spaces(s);
}
It is supposed to be bug free due to the origin ;-)
My own take is closer to the KISS principle, I guess:
/**
* trim spaces
**/
char * trim_inplace(char * s, int len)
{
// trim leading
while (len && isspace(s[0]))
{
s++; len--;
}
// trim trailing
while (len && isspace(s[len - 1]))
{
s[len - 1] = 0; len--;
}
return s;
}
A: void trim(char* const str)
{
char* begin = str;
char* end = str;
while (isspace(*begin))
{
++begin;
}
char* s = begin;
while (*s != '\0')
{
if (!isspace(*s++))
{
end = s;
}
}
*end = '\0';
const int dist = end - begin;
if (begin > str && dist > 0)
{
memmove(str, begin, dist + 1);
}
}
Modifies string in place, so you can still delete it.
Doesn't use fancy pants library functions (unless you consider memmove fancy).
Handles string overlap.
Trims front and back (not middle, sorry).
Fast if string is large (memmove often written in assembly).
Only moves characters if required (I find this true in most use cases because strings rarely have leading spaces and often don't have tailing spaces)
I would like to test this but I'm running late. Enjoy finding bugs... :-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "201"
}
|
Q: Insert into ... Select *, how to ignore identity? I have a temp table with the exact structure of a concrete table T. It was created like this:
select top 0 * into #tmp from T
After processing and filling in content into #tmp, I want to copy the content back to T like this:
insert into T select * from #tmp
This is okay as long as T doesn't have identity column, but in my case it does. Is there any way I can ignore the auto-increment identity column from #tmp when I copy to T? My motivation is to avoid having to spell out every column name in the Insert Into list.
EDIT: toggling identity_insert wouldn't work because the pkeys in #tmp may collide with those in T if rows were inserted into T outside of my script, that's if #tmp has auto-incremented the pkey to sync with T's in the first place.
A: See answers here and here:
select * into without_id from with_id
union all
select * from with_id where 1 = 0
Reason:
When an existing identity column is selected into a new table, the new column inherits the IDENTITY property, unless one of the following conditions is true:
*
*The SELECT statement contains a join, GROUP BY clause, or aggregate function.
*Multiple SELECT statements are joined by using UNION.
*The identity column is listed more than one time in the select list.
*The identity column is part of an expression.
*The identity column is from a remote data source.
If any one of these conditions is true, the column is created NOT NULL instead of inheriting the IDENTITY property. If an identity column is required in the new table but such a column is not available, or you want a seed or increment value that is different than the source identity column, define the column in the select list using the IDENTITY function. See "Creating an identity column using the IDENTITY function" in the Examples section below.
All credit goes to Eric Humphrey and bernd_k
A: SET IDENTITY_INSERT YourTable ON
INSERT command (note that IDENTITY_INSERT requires the table name and an explicit column list)
SET IDENTITY_INSERT YourTable OFF
A: As identity will be generated during insert anyway, could you simply remove this column from #tmp before inserting the data back to T?
alter table #tmp drop column id
UPD: Here's an example I've tested in SQL Server 2008:
create table T(ID int identity(1,1) not null, Value nvarchar(50))
insert into T (Value) values (N'Hello T!')
select top 0 * into #tmp from T
alter table #tmp drop column ID
insert into #tmp (Value) values (N'Hello #tmp')
insert into T select * from #tmp
drop table #tmp
select * from T
drop table T
A: set identity_insert YourTable on
Use this (the statement requires a table name).
A: Not with SELECT * - if you selected every column but the identity, it will be fine. The only way I can see is that you could do this by dynamically building the INSERT statement.
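A hedged sketch of what that dynamic SQL could look like (assuming SQL Server 2005+, the table T and #tmp layout from the question; the column list is read from sys.columns, skipping the identity column):
DECLARE @cols NVARCHAR(MAX);
SELECT @cols = COALESCE(@cols + ', ', '') + QUOTENAME(name)
FROM sys.columns
WHERE object_id = OBJECT_ID('T')
  AND is_identity = 0;
DECLARE @sql NVARCHAR(MAX);
SET @sql = 'INSERT INTO T (' + @cols + ') SELECT ' + @cols + ' FROM #tmp';
EXEC (@sql);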
A: Just list the columns you want to re-insert; you should never use select * anyway. If you don't want to type them, just drag them from the object browser (if you expand the table and drag the word Columns, you will get all of them; just delete the id column).
A: INSERT INTO #Table
SELECT MAX(Id) + ROW_NUMBER() OVER(ORDER BY Id)
A: Might an "update where T.ID = #tmp.ID" work?
A: Quoting the points from the question:
*
*it gives me a chance to preview the data before I do the insert
*I have joins between temp tables as part of my calculation; temp tables allow me to focus on the exact set of data that I am working with. I think that was it. Any suggestions/comments?
For part 1, as mentioned by Kolten in one of the comments, encapsulating your statements in a transaction and adding a parameter to toggle between display and commit will meet your needs. For part 2, I would need to see what "calculations" you are attempting. Limiting your data to a temp table may be overcomplicating the situation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How can I decode HTML characters in C#? I have email addresses encoded with HTML character entities. Is there anything in .NET that can convert them to plain strings?
A: Use Server.HtmlDecode to decode the HTML entities. If you want to escape the HTML, i.e. display the < and > character to the user, use Server.HtmlEncode.
A: You can use HttpUtility.HtmlDecode
If you are using .NET 4.0+ you can also use WebUtility.HtmlDecode which does not require an extra assembly reference as it is available in the System.Net namespace.
A: As @CQ says, you need to use HttpUtility.HtmlDecode, but it's not available in a non-ASP .NET project by default.
For a non-ASP .NET application, you need to add a reference to System.Web.dll. Right-click your project in Solution Explorer, select "Add Reference", then browse the list for System.Web.dll.
Now that the reference is added, you should be able to access the method using the fully-qualified name System.Web.HttpUtility.HtmlDecode or insert a using statement for System.Web to make things easier.
A: On .Net 4.0:
System.Net.WebUtility.HtmlDecode()
No need to include assembly for a C# project
A: If there is no Server context (i.e your running offline), you can use HttpUtility.HtmlDecode.
A: To decode HTML take a look below code
string s = "Svendborg Værft A/S";
string a = HttpUtility.HtmlDecode(s);
Response.Write(a);
Output is like
Svendborg Værft A/S
A: It is also worth mentioning that if you're using HtmlAgilityPack like I was, you should use HtmlAgilityPack.HtmlEntity.DeEntitize(). It takes a string and returns a string.
A: Write a static method in some utility class which accepts a string parameter and returns the decoded HTML string.
Include using System.Web; in your class (that is where HttpUtility lives):
public static string HtmlDecode(string text)
{
    if (string.IsNullOrEmpty(text))
    {
        return text;
    }
    return HttpUtility.HtmlDecode(text);
}
A: For .net 4.0
Add a reference to System.net.dll to the project with using System.Net; then use the following extensions
// Html encode/decode extension methods
public static string HtmlDecode(this string htmlEncodedString)
{
    if(htmlEncodedString.Length > 0)
    {
        return System.Net.WebUtility.HtmlDecode(htmlEncodedString);
    }
    else
    {
        return htmlEncodedString;
    }
}
public static string HtmlEncode(this string htmlDecodedString)
{
    if(htmlDecodedString.Length > 0)
    {
        return System.Net.WebUtility.HtmlEncode(htmlDecodedString);
    }
    else
    {
        return htmlDecodedString;
    }
}
A: For strings containing &amp;nbsp; I've had to double-decode the string. The first decode turns it into &nbsp;; the second pass correctly decodes it to the expected character.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "434"
}
|
Q: WSS 3.0: change parent type for a content type I have created a hierarchy of content types. The root of my hierarchy has the content type "Document" as a parent. There are about 20 other content types derived from my root.
Now, I want to change the parent from "Document" to something else. Is it possible? Either in the web interface or in code? Can the definition of content types be dumped to a text file and then recreated? Or any other trick?
A: If you can create a feature that contains all your custom content types, you will be able to change the XML that defines each content type and its columns.
This will give you the ability to change the content types for your site by removing the feature and installing it again with the changes (using a Solution is best).
Note that any content using the older content types will still use them after updating the feature (content types are stored at the site level, list level and on the actual item).
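For reference, a hypothetical feature element file for such a content type might look like this (the ID prefix encodes the parent; 0x0101 is Document, and the hex suffix here is made up):
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ContentType ID="0x010100A5B6C7D8E9F0112233445566778899AA"
               Name="MyRootType"
               Group="Custom Content Types"
               Description="Root of the custom hierarchy">
    <FieldRefs>
      <!-- column references go here -->
    </FieldRefs>
  </ContentType>
</Elements>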
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What would be the simplest way to alpha sort an array of chars in C? I'm looking for a simple, easy to understand algorithm to alphabetically sort an array of characters in C.
A: characters in C have numeric values that happen to be in order, so you just treat your characters like integers. the C standard library includes a 'qsort' function. Use that (man qsort on a linux-like system). You might have to convert upper-case letters to lowercase to simplify things, but that's trivial. If you want to understand the quicksort algorithm (that's the one you should learn, because you'll actually use it), see Wikipedia.
A: Use the qsort method:
#include <stdlib.h>
int char_compare (const void * a, const void * b)
{
return *(const char *)a - *(const char *)b;
}
int main(){
const char char_array[] = { 'c', 'a', 'b' };
qsort (char_array, 3, sizeof(char), char_compare);
return 0;
}
A: If the result is intended for humans, it is better to use strcoll. It is slower than strcmp or strcasecmp but it accounts for non-English characters. If you are going to use it, don't forget to set your locale for LC_COLLATE, i.e.
setlocale(LC_COLLATE, "");
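For example, a locale-aware qsort comparator could wrap each character in a temporary one-character string (a sketch of mine, not from the original answer):
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static int char_coll(const void *a, const void *b) {
    char sa[2] = { *(const char *)a, '\0' };
    char sb[2] = { *(const char *)b, '\0' };
    return strcoll(sa, sb);
}
int main(void) {
    setlocale(LC_COLLATE, "");
    char letters[] = { 'b', 'A', 'a', 'B' };
    qsort(letters, sizeof letters, 1, char_coll);
    printf("%.4s\n", letters);
    return 0;
}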
A: Just try Bubble Sort that's the easiest sorting algorithm.
A: I wonder if you are really looking for an algorithm or just a way to solve the problem? If the latter, then use C's qsort.
If you want an algorithm, go for Insertion sort or Selection sort, as they're very simple to understand; a minimal sketch follows.
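A small insertion sort over a char array, assuming plain byte ordering is acceptable (my own illustration):
void insertion_sort(char a[], int n) {
    int i, j;
    for (i = 1; i < n; i++) {
        char key = a[i];
        /* shift larger elements right to make room for key */
        for (j = i - 1; j >= 0 && a[j] > key; j--)
            a[j + 1] = a[j];
        a[j + 1] = key;
    }
}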
A: Easy? Do a bubble sort.
This is java and int rather than char, but you can easily adapt it...
int[] bubble(int a[])
{
for (int i = a.length; --i>=0; )
{
for (int j = 0; j<i; j++)
{
if (a[j] > a[j+1])
{
int T = a[j];
a[j] = a[j+1];
a[j+1] = T;
}
}
}
return(a);
}
A: This is pretty simple and asymptotically fastest (N is size of array):
unsigned char in[N];   /* input array, filled elsewhere */
unsigned char out[N], *p=out;
size_t cnt[256]={0}, i, j;
for (i=0; i<N; i++) cnt[in[i]]++;
for (i=0; i<256; i++) for (j=cnt[i]; j; j--) *p++=(unsigned char)i;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the LINQ way to implode/join a string array? I have the following string array:
var sa = new string[] {"yabba","dabba","doo"};
I can convert it to "yabba, dabba, doo" it using string.Join() but what is the super-cool LINQ way of doing it? The Join extension method seems promising but for a novice like me very confusing.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
}
|
Q: Is there anyway to remove Design (and Split) views from Visual Studio 2005/2008? I am trying to find a way to disable Design / Split views from appearing in Visual Studio. I figure there's no way to completely disable it, but I still can't seem to find a way to remove the shortcut (F7), or remove the buttons from the bottom bar.
A: The best I found was:
for HTML/ASPX - Options -> HTML Designer -> start pages in source view
for XAML -> Options -> Text Editor -> XAML -> Misc -> Always open documents in full XAML view
A: The keyboard shortcut is listed under View.ToggleDesigner.
I just want to be able to disable the Design view; the F7 shortcut is rather handy for swapping between source and code behind when doing web development.
A: Short blog post here apparently has an answer for this problem
http://renaissauce.com/thoughts/visual-studio-2008-tip-remove-the-design-split-source-button-bar/
A: You should be able to delete the keyboard assignment for F7 in the Keyboard Customization screen. All hotkey assignments are customizable AFAIK.
I doubt there's any way to get rid of the buttons on the UI.
A: Here is a solution for XAML
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Which Java-library can be used to access data via WebDAV? That's the question: Which library can help me to access data available via WebDAV in my Java-programs? OpenSource is preferred.
A: Libraries which have been around for a while are:
*
*Milton WebDAV
*JackRabbit WebDav
*Apache wink WebDav extension
Milton requires a license when DAV level 2 is required.
On Wikipedia you can find a small summary of available libraries.
A: http://sourceforge.net/projects/webdavclient4j/ is based on the retired Apache Jakarta Slide project's Java webdav client, and includes the VFS WebDAV provider. It is packaged with HttpClient 3.0.1.
A: Here's a better library to use for webdav operations. It's called Sardine hosted in Google Code.
https://github.com/lookfirst/sardine (was previously http://code.google.com/p/sardine)
I found it through here:
Java: How to upload a file to a WebDAV server from a servlet?
A: Never used it, but maybe apache commons vfs?
A: Apache's Jakarta Project has a WebDav Construction Kit, which should fit this need.
A: I created a very easy to use java webdav client: http://sardine.googlecode.com/
This now moved to github : https://github.com/lookfirst/sardine
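A sketch of typical Sardine usage (class and method names recalled from the Google Code era docs, so treat them as approximate; newer GitHub releases renamed getResources() to list()):
import java.util.List;
import com.googlecode.sardine.*;
public class SardineExample {
    public static void main(String[] args) throws Exception {
        Sardine sardine = SardineFactory.begin("user", "password");
        List<DavResource> resources = sardine.getResources("http://server/dav/folder/");
        for (DavResource res : resources) {
            System.out.println(res); // prints each resource found in the folder
        }
    }
}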
A: The now deprecated Apache Jakarta Slide project includes a Java WebDAV client library - but this project is retired due to the lack of a developer community.
Apache Jackrabbit is mentioned as alternative to Slide. You might want to check if its WebDAV library can be used instead.
If you just want to access files from a WebDAV repository, you can simply use a HTTP library as WebDAV builds upon HTTP. You only need a WebDAV client library if you want to use WebDAV features like locking, directory listings or access to properties (meta-data).
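For example, a plain GET of a file over WebDAV with Apache Commons HttpClient 3.x might look like this (server URL and path are placeholders):
import java.io.InputStream;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
public class WebDavGet {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://server/dav/folder/file.txt");
        int status = client.executeMethod(get); // WebDAV files fetch like any HTTP resource
        if (status == 200) {
            InputStream in = get.getResponseBodyAsStream();
            // ... read the stream ...
        }
        get.releaseConnection();
    }
}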
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Working with client certificates for an ASP.NET MVC site on IIS 6 Wanting to implement authentication by client certificates I am experiencing some issues.
First some facts
The whole site is using SSL. I am using IIS 6 (on Windows Server 2003) and have configured the site to accept client certificates, not requiring them. Most browsers are however implemented in a way so that they will only ask the user for a certificate when it is strictly required. Because of this the authentication model isn't really useful.
Suggestions of my own
My first idea was to set the HttpResponse.Status property but it requires the characters before the first space to be an integer. The useful status for getting a browser to send a client certificate is 403.7 Client certificate required so this will not work (unless you can overwrite it).
I also thought that I would just configure IIS to require client certificates for a specific paths, but this - of cource - works only with physical files and not with routing.
A possible solution is to make a specific folder and require client certificates for it which is more of a hack than a solution. So I would like to avoid this if someone has a better proposal.
Clarifications
I have tested the browser response of both Internet Explorer, Firefox and Chrome (I use Chrome as my primary browser and Firefox as secondary). None of the browsers asks for the client certificate unless I - in IIS - configure it as required.
The HTTP status code 403.7 is due to my understanding allowed as the RFC 2616 only defines the status code as the first three digits. As IIS 6 returns the 403.7 when a client certificate is required, I thought sending it would force IIS into a special mode triggering a requirement.
I guess the problem now is how to configure IIS for requiring a certficate given an virtual path and not a physical.
A: There's no difference in the CertificateRequest message sent by a server when the certificate is merely requested, rather than required. The server makes the same request in both cases, and simply terminates the handshake when a client fails to provide a required certificate. Thus, if your browser appears to be ignoring "requests", it should appear to ignore "requirements" too.
Check for the following:
*
*Is your browser configured to ignore all certificate requests, never sending one?
*Is your browser configured to use a given certificate without prompting the user? (In other words, how do you know that the browser isn't sending a certificate?)
*Is your server actually requesting a certificate?
The way I test this last case is with the OpenSSL (also available in Cygwin) tool:
openssl s_client -connect server.y.com:443 -msg
After the server sends its Certificate message, it will insert a CertificateRequest method which is absent if it is not requesting client authentication. The s_client output looks like this:
<<< TLS 1.0 Handshake [length 0008], CertificateRequest
0d 00 00 04 01 01 00 00
I'm not sure how it works if the server uses client authentication only on specific paths, because the initial SSL handshake is complete before the client transmits the HTTP request. It would be reasonable for the server to request a new handshake at this point, but I've never tested to see what servers support this.
You can fake an HTTP request via s_client by hand, entering:
GET /your/path/here HTTP/1.1[Enter]
Host: server.y.com:443[Enter]
[Enter]
If you never see a CertificateRequest message at all, your server isn't set up correctly.
Specifying security constraints based on directory structure is quite common and can actually simplify administration of security nicely. Don't feel bad about it if this offers a solution for you.
403.7 is not an HTTP status code. Is that some Microsoft "embrace, extend, and extinguish" subterfuge? In any case, it doesn't sound like the right direction to pursue, since this is a transport layer problem, not an application layer problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can you load balance an IIS 6 hosted WCF Service? We use BigIP to load balance between our two IIS servers. We recently deployed a WCF service hosted on by IIS 6 onto these two Windows Server 2003R2 servers.
Each server is configured with two host headers: one for the load balancer address, and then a second host header that points only to that server. That way we can reference a specific server in the load balanced group for debugging.
So when we ran the service, we immediately got the error:
This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.
Parameter name: item
I did some research and we can implement a filter to tell it to ignore the one of the hosts, but then we cannot access the server from that address.
<serviceHostingEnvironment>
<baseAddressPrefixFilters>
<add prefix="http://domain.com:80"/>
</baseAddressPrefixFilters>
</serviceHostingEnvironment>
What is the best solution in this scenario which would allow us to hit a WCF service via http://domain.com/service.svc and http://server1.domain.com/service.svc?
If we should create our own ServiceFactory as some sites suggest, does anyone have any sample code on this?
Any help is much appreciated.
EDIT: We will need to be able to access the WCF service from either of the two addresses, if at all possible.
Thank you.
A: On your bigIP Create 2 new virtual servers
http://server1.domain.com/
http://server2.domain.com/
create a pool for each VS with only the specific server in it - so there will be no actual load balancing and access it that way. If you are short on external IP'S you can still use the same IP as your production domain name and just use an irule to direct traffic to the appropriate pool
Hope this helps
A: The URL it uses is based on the bindings in IIS. Does the website have more than one binding? If it does, or is the WCF service used by multiple sites? If it is, then you are SOL AFAIK. We ran into this issue. Basically, there can be only one IIS binding for HTTP, otherwise it bombs.
Also, here's info on implementing a ServiceHostFactory. That WILL work if it's possible that your WCF service only be accessible through 1 address (unfortunately for us, this was not possible).
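For what it's worth, here is a hedged sketch of such a factory (my own illustration of the linked technique, not production code): it filters the base addresses down to a single HTTP one before constructing the host.
using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
public class SingleAddressHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // keep only the first HTTP base address so WCF sees one address per scheme
        foreach (Uri a in baseAddresses)
            if (a.Scheme == Uri.UriSchemeHttp)
                return new ServiceHost(serviceType, a);
        throw new InvalidOperationException("No HTTP base address found.");
    }
}
You would then reference the factory from the .svc file's ServiceHost directive (Factory="SingleAddressHostFactory").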
A: When you need to test a specific machine, you could "bypass" the load balancing and ensure the correct host-header is sent to keeep WCF happy by editing the "hosts" file on the machine you're testing from so, for example:
10.0.0.11 through 10.0.0.16 are the six hosts that are in the cluster "cluster.mycompany.local", with a load balanced IP address of 10.0.0.10. When testing you could add a line to the machines hosts file that says "10.0.0.13 cluster.mycompany.local" to be able to hit the third machine in the cluster directly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What is a simple command line program or script to backup SQL server databases? I've been too lax with performing DB backups on our internal servers.
Is there a simple command line program that I can use to backup certain databases in SQL Server 2005? Or is there a simple VBScript?
A: I use ExpressMaint.
To backup all user databases I do for example:
C:\>ExpressMaint.exe -S (local)\sqlexpress -D ALL_USER -T DB -BU HOURS -BV 1 -B c:\backupdir\ -DS
A: Schedule the following to backup all Databases:
Use Master
Declare @ToExecute VarChar(8000)
Select @ToExecute = Coalesce(@ToExecute,'') + 'Backup Database ' + [Name] + ' To Disk = ''D:\Backups\Databases\' + [Name] + '.bak'' With Format;' + char(13) -- Coalesce wraps only @ToExecute; otherwise NULL concatenation drops the first database
From
Master..Sysdatabases
Where
[Name] Not In ('tempdb')
and databasepropertyex ([Name],'Status') = 'online'
Execute(@ToExecute)
There are also more details on my blog: how to Automate SQL Server Express Backups.
A: I found this on a Microsoft Support page http://support.microsoft.com/kb/2019698.
It works great! And since it came from Microsoft, I feel like it's pretty legit.
Basically there are two steps.
*
*Create a stored procedure in your master db. See msft link or if it's broken try here: http://pastebin.com/svRLkqnq
*Schedule the backup from your task scheduler. You might want to put into a .bat or .cmd file first and then schedule that file.
sqlcmd -S YOUR_SERVER_NAME\SQLEXPRESS -E -Q "EXEC sp_BackupDatabases @backupLocation='C:\SQL_Backup\', @backupType='F'" 1>c:\SQL_Backup\backup.log
Obviously replace YOUR_SERVER_NAME with your computer name or optionally try .\SQLEXPRESS and make sure the backup folder exists. In this case it's trying to put it into c:\SQL_Backup
A: I'm using tsql on a Linux/UNIX infrastructure to access MSSQL databases. Here's a simple shell script to dump a table to a file:
#!/usr/bin/ksh
#
#.....
(
tsql -S {database} -U {user} -P {password} <<EOF
select * from {table}
go
quit
EOF
) >{output_file.dump}
A: You can use the backup application by ApexSQL. Although it's a GUI application, it has all its features supported in the CLI. It is possible to either perform one-time backup operations, or to create a job that would back up specified databases on a regular basis. You can check the switch rules and examples in the articles:
*
*ApexSQL Backup CLI support
*ApexSQL Backup CLI examples
A: If you don't have a trusted connection (which the -E switch declares), use the following command line:
"[program dir]\[sql server version]\Tools\Binn\osql.exe" -Q "BACKUP DATABASE mydatabase TO DISK='C:\tmp\db.bak'" -S [server] -U [login id] -P [password]
Where
[program dir] is the directory where osql.exe exists
On 32bit OS c:\Program Files\Microsoft SQL Server\
On 64bit OS c:\Program Files (x86)\Microsoft SQL Server\
[sql server version] your SQL Server version: 110, 100, 90 or 80 (start with the largest number present)
[server] your server name or server IP
[login id] your MS SQL Server login name
[password] the required login password
A: Microsoft's answer to backing up all user databases on SQL Express is here:
The process is: copy, paste, and execute their code (see below. I've commented some oddly non-commented lines at the top) as a query on your database server. That means you should first install the SQL Server Management Studio (or otherwise connect to your database server with SSMS). This code execution will create a stored procedure on your database server.
Create a batch file to execute the stored procedure, then use Task Scheduler to schedule a periodic (e.g. nightly) run of this batch file. My code (that works) is a slightly modified version of their first example:
sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_BackupDatabases @backupLocation='E:\SQLBackups\', @backupType='F'"
This worked for me, and I like it. Each time you run it, new backup files are created. You'll need to devise a method of deleting old backup files on a routine basis. I already have a routine that does that sort of thing, so I'll keep a couple of days' worth of backups on disk (long enough for them to get backed up by my normal backup routine), then I'll delete them. In other words, I'll always have a few days' worth of backups on hand without having to restore from my backup system.
I'll paste Microsoft's stored procedure creation script below:
--// Copyright © Microsoft Corporation. All Rights Reserved.
--// This code released under the terms of the
--// Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html.)
USE [master]
GO
/****** Object: StoredProcedure [dbo].[sp_BackupDatabases] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Microsoft
-- Create date: 2010-02-06
-- Description: Backup Databases for SQLExpress
-- Parameter1: databaseName
-- Parameter2: backupType F=full, D=differential, L=log
-- Parameter3: backup file location
-- =============================================
CREATE PROCEDURE [dbo].[sp_BackupDatabases]
@databaseName sysname = null,
@backupType CHAR(1),
@backupLocation nvarchar(200)
AS
SET NOCOUNT ON;
DECLARE @DBs TABLE
(
ID int IDENTITY PRIMARY KEY,
DBNAME nvarchar(500)
)
-- Pick out only databases which are online in case ALL databases are chosen to be backed up
-- If specific database is chosen to be backed up only pick that out from @DBs
INSERT INTO @DBs (DBNAME)
SELECT Name FROM master.sys.databases
where state=0
AND name=@DatabaseName
OR @DatabaseName IS NULL
ORDER BY Name
-- Filter out databases which do not need to backed up
IF @backupType='F'
BEGIN
DELETE @DBs where DBNAME IN ('tempdb','Northwind','pubs','AdventureWorks')
END
ELSE IF @backupType='D'
BEGIN
DELETE @DBs where DBNAME IN ('tempdb','Northwind','pubs','master','AdventureWorks')
END
ELSE IF @backupType='L'
BEGIN
DELETE @DBs where DBNAME IN ('tempdb','Northwind','pubs','master','AdventureWorks')
END
ELSE
BEGIN
RETURN
END
-- Declare variables
DECLARE @BackupName varchar(100)
DECLARE @BackupFile varchar(100)
DECLARE @DBNAME varchar(300)
DECLARE @sqlCommand NVARCHAR(1000)
DECLARE @dateTime NVARCHAR(20)
DECLARE @Loop int
-- Loop through the databases one by one
SELECT @Loop = min(ID) FROM @DBs
WHILE @Loop IS NOT NULL
BEGIN
-- Database Names have to be in [dbname] format since some have - or _ in their name
SET @DBNAME = '['+(SELECT DBNAME FROM @DBs WHERE ID = @Loop)+']'
-- Set the current date and time in mmddyyyy_hhmmss format
SET @dateTime = REPLACE(CONVERT(VARCHAR, GETDATE(),101),'/','') + '_' + REPLACE(CONVERT(VARCHAR, GETDATE(),108),':','')
-- Create backup filename in path\filename.extension format for full,diff and log backups
IF @backupType = 'F'
SET @BackupFile = @backupLocation+REPLACE(REPLACE(@DBNAME, '[',''),']','')+ '_FULL_'+ @dateTime+ '.BAK'
ELSE IF @backupType = 'D'
SET @BackupFile = @backupLocation+REPLACE(REPLACE(@DBNAME, '[',''),']','')+ '_DIFF_'+ @dateTime+ '.BAK'
ELSE IF @backupType = 'L'
SET @BackupFile = @backupLocation+REPLACE(REPLACE(@DBNAME, '[',''),']','')+ '_LOG_'+ @dateTime+ '.TRN'
-- Provide the backup a name for storing in the media
IF @backupType = 'F'
SET @BackupName = REPLACE(REPLACE(@DBNAME,'[',''),']','') +' full backup for '+ @dateTime
IF @backupType = 'D'
SET @BackupName = REPLACE(REPLACE(@DBNAME,'[',''),']','') +' differential backup for '+ @dateTime
IF @backupType = 'L'
SET @BackupName = REPLACE(REPLACE(@DBNAME,'[',''),']','') +' log backup for '+ @dateTime
-- Generate the dynamic SQL command to be executed
IF @backupType = 'F'
BEGIN
SET @sqlCommand = 'BACKUP DATABASE ' +@DBNAME+ ' TO DISK = '''+@BackupFile+ ''' WITH INIT, NAME= ''' +@BackupName+''', NOSKIP, NOFORMAT'
END
IF @backupType = 'D'
BEGIN
SET @sqlCommand = 'BACKUP DATABASE ' +@DBNAME+ ' TO DISK = '''+@BackupFile+ ''' WITH DIFFERENTIAL, INIT, NAME= ''' +@BackupName+''', NOSKIP, NOFORMAT'
END
IF @backupType = 'L'
BEGIN
SET @sqlCommand = 'BACKUP LOG ' +@DBNAME+ ' TO DISK = '''+@BackupFile+ ''' WITH INIT, NAME= ''' +@BackupName+''', NOSKIP, NOFORMAT'
END
-- Execute the generated SQL command
EXEC(@sqlCommand)
-- Goto the next database
SELECT @Loop = min(ID) FROM @DBs where ID>@Loop
END
A: Here is an example: it backs up the database, compresses it using 7-Zip, and deletes the uncompressed backup file, so the storage issue is also solved. In this example I use 7-Zip, which is free.
@echo off
CLS
echo Running dump ...
sqlcmd -S SERVER\SQLEXPRESS -U username -P password -Q "BACKUP DATABASE master TO DISK='D:\DailyDBBackup\DB_master_%date:~-10,2%%date:~-7,2%%date:~-4,4%.bak'"
echo Zipping ...
"C:\Program Files\7-Zip\7z.exe" a -tzip "D:\DailyDBBackup\DB_master_%date:~-10,2%%date:~-7,2%%date:~-4,4%_%time:~0,2%%time:~3,2%%time:~6,2%.bak.zip" "D:\DailyDBBackup\DB_master_%date:~-10,2%%date:~-7,2%%date:~-4,4%.bak"
echo Deleting the SQL file ...
del "D:\DailyDBBackup\DB_master_%date:~-10,2%%date:~-7,2%%date:~-4,4%.bak"
echo Done!
Save this as sqlbackup.bat and schedule it to be run everyday.
If you just want to take a backup, you can create the script without the zipping and deleting steps.
A: To backup a single database from the command line, use osql or sqlcmd.
"C:\Program Files\Microsoft SQL Server\90\Tools\Binn\osql.exe"
-E -Q "BACKUP DATABASE mydatabase TO DISK='C:\tmp\db.bak' WITH FORMAT"
You'll also want to read the documentation on BACKUP and RESTORE and general procedures.
A: You could use a VB Script I wrote exactly for this purpose:
https://github.com/ezrarieben/mssql-backup-vbs/
Schedule a task in the "Task Scheduler" to execute the script as you like and it'll backup the entire DB to a BAK file and save it wherever you specify.
A: SET NOCOUNT ON;
declare @PATH VARCHAR(200)='D:\MyBackupFolder\'
-- path where you want to take backups
IF OBJECT_ID('TEMPDB..#back') IS NOT NULL
DROP TABLE #back
CREATE TABLE #back
(
RN INT IDENTITY (1,1),
DatabaseName NVARCHAR(200)
)
INSERT INTO #back
SELECT 'MyDatabase1'
UNION SELECT 'MyDatabase2'
UNION SELECT 'MyDatabase3'
UNION SELECT 'MyDatabase4'
-- your databases List
DECLARE @COUNT INT =0 , @RN INT =1, @SCRIPT NVARCHAR(MAX)='', @DBNAME VARCHAR(200)
PRINT '---------------------FULL BACKUP SCRIPT-------------------------'+CHAR(10)
SET @COUNT = (SELECT COUNT(*) FROM #back)
PRINT 'USE MASTER'+CHAR(10)
WHILE(@COUNT > = @RN)
BEGIN
SET @DBNAME =(SELECT DatabaseName FROM #back WHERE RN=@RN)
SET @SCRIPT ='BACKUP DATABASE ' +'['+@DBNAME+']'+CHAR(10)+'TO DISK =N'''+@PATH+@DBNAME+ N'_Backup_'
+ REPLACE ( REPLACE ( REPLACE ( REPLACE ( CAST ( CAST ( GETDATE () AS DATETIME2 ) AS VARCHAR ( 100 )), '-' , '_' ), ' ' , '_' ), '.' , '_' ), ':' , '' )+'.bak'''+CHAR(10)+'WITH COMPRESSION, STATS = 10'+CHAR(10)+'GO'+CHAR(10)
PRINT @SCRIPT
SET @RN=@RN+1
END
PRINT '---------------------DIFF BACKUP SCRIPT-------------------------'+CHAR(10)
SET @COUNT =0 SET @RN =1 SET @SCRIPT ='' SET @DBNAME =''
SET @COUNT = (SELECT COUNT(*) FROM #back)
PRINT 'USE MASTER'+CHAR(10)
WHILE(@COUNT > = @RN)
BEGIN
SET @DBNAME =(SELECT DatabaseName FROM #back WHERE RN=@RN)
SET @SCRIPT ='BACKUP DATABASE ' +'['+@DBNAME+']'+CHAR(10)+'TO DISK =N'''+@PATH+@DBNAME+ N'_Backup_'
+ REPLACE ( REPLACE ( REPLACE ( REPLACE ( CAST ( CAST ( GETDATE () AS DATETIME2 ) AS VARCHAR ( 100 )), '-' , '_' ), ' ' , '_' ), '.' , '_' ), ':' , '' )+'.diff'''+CHAR(10)+'WITH DIFFERENTIAL, COMPRESSION, STATS = 10'+CHAR(10)+'GO'+CHAR(10)
PRINT @SCRIPT
SET @RN=@RN+1
END
A: This can be helpful when you are dealing with a dockerised MSSQL container in your day-to-day work and want to take a quick dump of the data from a table. I have especially found it useful when you are re-building the db container quite frequently and don't want to lose the test data after the re-build.
Export data using bcp utility
/opt/mssql-tools/bin/bcp <Table_Name> out /tmp/MyData.bcp -d <database_name> -c -U <user_name> -P "<password>" -S <server_name>
Import data using bcp utility
/opt/mssql-tools/bin/bcp <Table_Name> IN /tmp/MyData.bcp -d <database_name> -c -U <user_name> -P "<password>" -S <server_name>
A: If you can find the DB files... "cp DBFiles backup/"
Almost for sure not advisable in most cases, but it's simple as all getup.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "106"
}
|
Q: How do I read an HttpResponse in ASP.NET 2.0? For example, I have an ASP.NET form that is called by another aspx:
string url = "http://somewhere.com?P1=" + Request["param"];
Response.Write(url);
I want to do something like this:
string url = "http://somewhere.com?P1=" + Request["param"];
string str = GetResponse(url);
if (str...) {}
I need to get whatever Response.Write is getting as a result or going to url, manipulate that response, and send something else back.
Any help or a point in the right direction would be greatly appreciated.
A: WebClient client = new WebClient();
string response = client.DownloadString(url);
A: WebClient.DownloadString() is probably what you want.
A: You will need to use the HttpWebRequest and HttpWebResponse objects. You could also use the WebClient object
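A minimal sketch with those classes (assuming the url variable from the question; this needs using System.Net; and using System.IO;):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string str = reader.ReadToEnd(); // the body that Response.Write produced
    // manipulate str and write something else back
}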
A: An HttpResponse is something that is sent back to the client in response to an HttpRequest. If you want process something on the server, then you can probably do it with either a web service call or a page method. However, I'm not totally sure I understand what you're trying to do in the first place.
A: WebClient.DownloadString totally did the trick. I got myself too wrapped up in this one.. I was looking at HttpModule and HttpHandler, when I had used WebClient.DownloadFile in the past.
Thank you very much to all who've replied.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Help configuring JNDI with embedded JBoss in Tomcat 5.5.x When I try the following lookup in my code:
Context initCtx = new InitialContext();
Context envCtx = (Context) initCtx.lookup("java:comp/env");
return (DataSource) envCtx.lookup("jdbc/mydb");
I get the following exception:
java.sql.SQLException: QueryResults: Unable to initialize naming context:
Name java:comp is not bound in this Context at
com.onsitemanager.database.ThreadLocalConnection.getConnection
(ThreadLocalConnection.java:130) at
...
I installed embedded JBoss following the JBoss wiki instructions. And I configured Tomcat using the "Scanning every WAR by default" deployment as specified in the configuration wiki page.
Quoting the config page:
JNDI
Embedded JBoss components like connection pooling, EJB, JPA, and transactions make extensive use of JNDI to publish services. Embedded JBoss overrides Tomcat's JNDI implementation by layering itself on top of Tomcat's JNDI instantiation. There are a few reasons for this:
*
*To avoid having to declare each and every one of these services within server.xml
*To allow seamless integration of the java:comp namespace between web apps and EJBs.
*Tomcat's JNDI implementation has a few critical bugs in it that hamper some JBoss components' ability to work
*We want to provide the option for you of remoting EJBs and other services that can be remotely looked up
Anyone have any thoughts on how I can configure the JBoss naming service (which, according to the above quote, is overriding Tomcat's JNDI implementation) so that I can do a lookup on java:comp/env?
FYI - My environment: Tomcat 5.5.9, Seam 2.0.2sp, Embedded JBoss (Beta 3).
Note: I do have a -ds.xml file for my database connection properly setup and accessible on the class path per the instructions.
Also note: I have posted this question in embedded Jboss forum and seam user forum.
A: Thanks for the response, toolkit... Yes, I can access my datasource by going directly to java:jdbc/mydb, but I'm using an existing code base that connects via the ENC. Here's some interesting info that I've found out...
*
*The above code works with JBoss 4.2.2.GA and here's the JNDI ctx parameters being used:
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces:
org.jboss.naming:org.jnp.interfaces
*The above code works with Tomcat 5.5.x and here's the JNDI ctx parameters being used:
java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory
java.naming.factory.url.pkgs=org.apache.naming
*The above code fails with Embedded JBoss (Beta 3) in Tomcat 5.5.x with the above error message.
java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory
java.naming.factory.url.pkgs=org.apache.naming
Anyone have any thoughts I what I need to do with configuring embedded JBoss JNDI configuration?
A: java:comp/env is known as the Enterprise Naming Context (ENC) and is not globally visible. See here for more information. You will need to locate the global JNDI name at which your datasource is registered.
The easiest way to do this is to navigate to JBoss' web-based JMX console and look for a 'JNDIView' (not exactly sure of the name - currently at home) mbean. This mbean should have a list method which you can invoke, which will display the context path for all of the JNDI-bound objects.
A: I had a similar issue with Embedded JBoss, and I finally fixed it by editing the file:
test-Datasource-ds.xml
adding
<mbean code="org.jboss.naming.NamingAlias" name="jboss.jmx:alias=testDatasource">
<attribute name="FromName">jdbc/Example DataSource</attribute>
<attribute name="ToName">java:/testDatasource</attribute>
</mbean>
The problem was that JBoss adds the java:/ prefix to every data source declared. So I ended up with a datasource named testDatasource, aliased by that directive to jdbc/Example DataSource.
Hope it works
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Is there a Java equivalent for .Net System.Data? Is there a Java package providing functionality like the .Net System.Data namespace?
Specifically the DataSet and Adapter classes?
A: Use java.util for the collections.
java.sql for databases.
A: The equivalent to ADO.NET is JDBC. You can get the flavor of it here:
http://www.heimetli.ch/jdbc/JDBCQuery.html
A: ADO.NET is a framework with multiple uses, and DataSet is one of the main abstractions. Tell us more about what you want to achieve, and I'm sure somebody will find a Java framework for that purpose.
If you want a simple way to map Java objects to a data backend (like XML files), take a look at some of the POJO (Plain Old Java Object) frameworks.
A: Hibernate is a popular framework in the Java world. Pretty simple mapping of objects to tables and lots more.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: pl sql & java - creating dynamic query I have a dilemma: I'm using Java and Oracle and trying to keep queries on the PL/SQL side. Everything is OK, until I have these complex queries which may and may not have conditions.
It's not hard in Java to put together a WHERE clause with conditions, but it's not nice.
And on the PL/SQL side I also found out that the only possibility for dynamic queries is string manipulation, like
IF inputname IS NOT NULL THEN
query := query ||' and NAME=' || inputname;
END IF;
Now I'm thinking of leaving the query in PL/SQL and sending the WHERE clause as a function parameter.
Any good recommendations or examples please?
A: SQLBuilder might be useful to you from the Java side. It allows you to write compile-time checked Java code that dynamically builds sql:
String selectQuery =
(new SelectQuery())
.addColumns(t1Col1, t1Col2, t2Col1)
.addJoin(SelectQuery.JoinType.INNER_JOIN, joinOfT1AndT2)
.addOrderings(t1Col1)
.validate().toString();
A: PL/SQL is not pleasant for creating dynamic SQL, as you have discovered; its string manipulation is painful. You can send the WHERE clause from the client, but you must make sure to check for SQL injection, i.e. make sure the phrase starts with "where", has no semicolon (or only at the end; if one could occur in the middle you need to look for string delimiters and only allow it within them), etc. Another option would be a stored procedure that takes a predefined parameter list of field filters, applying a "like" for each column against the parameter field.
A: In PL/SQL use:
EXECUTE IMMEDIATE lString;
This lets you build the lString (a VARCHAR2) into most bits of SQL that you'll want to use. e.g.
EXECUTE IMMEDIATE 'SELECT value
FROM TABLE
WHERE '||pWhereClause
INTO lValue;
You can also return multiple rows and perform DDL statements in EXECUTE IMMEDIATE.
A: Yeah, EXECUTE IMMEDIATE is my friend too. Thanks for the suggestions. I think this time I'll try to send just the WHERE clause as a parameter.
A: I think it's better to have the whole logic of the query creation in one place, Java or Oracle. I assume that you know how to do it in Java. In Oracle, if the query only retrieves one row you can use the EXECUTE IMMEDIATE ... INTO clause.
If the query returns multiple rows and has scalar parameters (no use of the IN operator), you can use the REF CURSOR strategy to loop over the query results or return the cursor itself to the Java program (you must import Oracle Java classes if you use it). First Ref Cursor answer in Google
If you must use the IN parameter (or in other rare cases) you must parse the query with the DBMS_SQL package, which is TOO verbose and a little tricky to use, but it's VERY flexible. DBMS_SQL doc (watch the flow diagram BEFORE reading the methods)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Explain Plan for Query in a Stored Procedure I have a stored procedure that consists of a single select query used to insert into another table based on some minor math that is done to the arguments in the procedure. Can I generate the plan used for this query by referencing the procedure somehow, or do I have to copy and paste the query and create bind variables for the input parameters?
A: Use SQL Trace and TKPROF. For example, open SQL*Plus, and then issue the following code:-
alter session set tracefile_identifier = 'something-unique';
alter session set sql_trace = true;
alter session set events '10046 trace name context forever, level 8';
select 'right-before-my-sp' from dual;
exec your_stored_procedure
alter session set sql_trace = false;
Once this has been done, go look in your database's UDUMP directory for a TRC file with "something-unique" in the filename. Format this TRC file with TKPROF, and then open the formatted file and search for the string "right-before-my-sp". The SQL command issued by your stored procedure should be shortly after this section, and immediately under that SQL statement will be the plan for the SQL statement.
Edit: For the purposes of full disclosure, I should thank all those who gave me answers on this thread last week that helped me learn how to do this.
A: From what I understand, this was done on purpose. The idea is that individual queries within the procedure are considered separately by the optimizer, so EXPLAIN PLAN doesn't make sense against a stored proc, which could contain multiple queries/statements.
The current answer is NO, you can't run it against a proc, and you must run it against the individual statements themselves. Tricky when you have variables and calculations, but that's the way it is.
A: Many tools, such as Toad or SQL Developer, will prompt you for the bind variable values when you execute an explain plan. You would have to do so manually in SQL*Plus or other tools.
You could also turn on SQL tracing and execute the stored procedure, then retrieve the explain plan from the trace file.
Be careful that you do not just retrieve the explain plan for the SELECT statement. The presence of the INSERT clause can change the optimizer goal from first rows to all rows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Can SQL Try-Catch blocks handle thrown CLR errors? We are using SQL 2005 and the try-catch functionality to handle all of our error handling within the DB. We are currently working on deploying a .NET CLR function to make some WCF calls in the DB. This WCF procedure is written in the CLR and then deployed to SQL. If I put a try-catch block in the CLR code, it catches the error fine. However, I can't seem to throw the error up to the try-catch block in SQL. SQL seems to ignore what I throw it and catches the error it finds. Is there no relation between the two (i.e. I can't throw from one to another?)
If I can throw from within the CLR to the calling procedure in SQL, is any special formatting needed? I tried a specific case of catching the error that was thrown, and then throwing a different error, but SQL ignored my thrown error and caught the original error, as if it ignored the thrown error.
A: Here is a blog post that covers it at a highish level:
Exception handling in SQLCLR
When SQL Server executes a user function/procedure/trigger implemented in the CLR (i.e., managed code), we will install a managed exception handler around the user code. So if the user code leaks an exception, the server will catch it and throw a TSQL exception wrapping the user exception.
This seems to imply that it will just work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Testing for Random Value - Thoughts on this Approach? OK, I have been working on a random image selector and queue system (so you don't see the same images too often).
All was going swimmingly (as far as my crappy code goes) until I got to the random bit. I wanted to test it, but how do you test for it? There is no Debug.Assert(i.IsRandom) (sadly) :D
So, I got my brain on it after watering it with some tea and came up with the following, I was just wondering if I could have your thoughts?
*
*Basically I knew the random bit was the problem, so I ripped that out to a delegate (which would then be passed to the objects constructor).
*I then created a class that pretty much performs the same logic as the live code, but remembers the value selected in a private variable.
*I then threw that delegate to the live class and tested against that:
i.e.
Debug.Assert(myObj.RndVal == RndIntTester.ValuePassed);
But I couldn't help but think, was I wasting my time? I ran that through lots of iterations to see if it fell over at any time etc.
Do you think I was wasting my time with this? Or could I have got away with:
GateKiller's answer reminded me of this:
Update to Clarify
*
*I should add that I basically never want to see the same result more than X number of times from a pool of Y size.
*The addition of the test container basically allowed me to see if any of the previously selected images were "randomly" selected.
*I guess technically the thing being tested here is not the RNG (since I never wrote that code) but the fact that I am expecting random results from a limited pool, and I want to track them.
A: Test from the requirement : "so you don't see the same images too often"
Ask for 100 images. Did you see an image too often?
A: There is a handy list of statistical randomness tests and related research on Wikipedia. Note that you won't know for certain that a source is truly random with most of these, you'll just have ruled out some ways in which it may be easily predictable.
A: If you have a fixed set of items, and you don't want them to repeat too often, shuffle the collection randomly. Then you will be sure that you never see the same image twice in a row, feel like you're listening to Top 20 radio, etc. You'll make a full pass through the collection before repeating.
Item[] foo = …
for (int idx = foo.length; idx > 1; --idx) {
/* Pick random number from half-open interval [0, idx) */
int rnd = random(idx);
Item tmp = foo[idx - 1];
foo[idx - 1] = foo[rnd];
foo[rnd] = tmp;
}
If you have too many items to collect and shuffle all at once (10s of thousands of images in a repository), you can add some divide-and-conquer to the same approach. Shuffle groups of images, then shuffle each group.
A slightly different approach that sounds like it might apply to your revised problem statement is to have your "image selector" implementation keep its recent selection history in a queue of at most Y length. Before returning an image, it tests to see if it's in the queue X times already, and if so, it randomly selects another, until it finds one that passes.
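A rough sketch of that history-queue selector (the class and parameter names here are mine, purely illustrative, and it assumes the pool is large enough relative to the window that a passing candidate always exists):
using System;
using System.Collections.Generic;

public class ImageSelector
{
    private readonly Random rnd = new Random();
    private readonly Queue<int> history = new Queue<int>();
    private readonly int poolSize;    // number of images to pick from
    private readonly int maxRepeats;  // X: allowed appearances in the window
    private readonly int windowSize;  // Y: length of the history window

    public ImageSelector(int poolSize, int maxRepeats, int windowSize)
    {
        this.poolSize = poolSize;
        this.maxRepeats = maxRepeats;
        this.windowSize = windowSize;
    }

    public int NextIndex()
    {
        int candidate;
        do
        {
            candidate = rnd.Next(poolSize);
        } while (CountInHistory(candidate) >= maxRepeats);

        history.Enqueue(candidate);
        if (history.Count > windowSize)
            history.Dequeue(); // forget the oldest selection
        return candidate;
    }

    private int CountInHistory(int value)
    {
        int count = 0;
        foreach (int i in history)
            if (i == value)
                count++;
        return count;
    }
}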
If you are really asking about testing the quality of the random number generator, I'll have to open the statistics book.
A: It's impossible to test if a value is truly random or not. The best you can do is perform the test some large number of times and test that you got an appropriate distribution, but if the results are truly random, even this has a (very small) chance of failing.
If you're doing white box testing, and you know your random seed, then you can actually compute the expected result, but you may need a separate test to test the randomness of your RNG.
A:
The generation of random numbers is
too important to be left to chance. -- Robert R. Coveyou
To solve the psychological problem:
A decent way to prevent apparent repetitions is to select a few items at random from the full set, discarding duplicates. Play those, then select another few. How many is "a few" depends on how fast you're playing them and how big the full set is, but, for example, avoiding a repeat inside the larger of "20 items" and "5 minutes" might be OK. Do user testing - as the programmer you'll be so sick of slideshows that you're not a good test subject.
To test randomising code, I would say:
Step 1: specify how the code MUST map the raw random numbers to choices in your domain, and make sure that your code correctly uses the output of the random number generator. Test this by Mocking the generator (or seeding it with a known test value if it's a PRNG).
Step 2: make sure the generator is sufficiently random for your purposes. If you used a library function, you do this by reading the documentation. If you wrote your own, why?
Step 3 (advanced statisticians only): run some statistical tests for randomness on the output of the generator. Make sure you know what the probability is of a false failure on the test.
A: There are whole books one can write about randomness and evaluating if something appears to be random, but I'll save you the pages of mathematics. In short, you can use a chi-square test as a way of determining how well an apparently "random" distribution fits what you expect.
If you're using Perl, you can use the Statistics::ChiSquare module to do the hard work for you.
However if you want to make sure that your images are evenly distributed, then you probably won't want them to be truly random. Instead, I'd suggest you take your entire list of images, shuffle that list, and then remove an item from it whenever you need a "random" image. When the list is empty, you re-build it, re-shuffle, and repeat.
This technique means that given a set of images, each individual image can't appear more than once every iteration through your list. Your images can't help but be evenly distributed.
All the best,
Paul
A: What the Random and similar functions give you are pseudo-random numbers: a series of numbers produced by a function. Usually, you give that function its first input parameter (a.k.a. the "seed"), which is used to produce the first "random" number. After that, each last value is used as the input parameter for the next iteration of the cycle. You can check the Wikipedia article on "Pseudorandom number generator"; the explanation there is very good.
All of these algorithms have something in common: the series repeats itself after a number of iterations. Remember, these aren't truly random numbers, only series of numbers that seem random. To select one generator over another, you need to ask yourself: What do you want it for?
How do you test randomness? Indeed you can. There are plenty of tests for that. The first and most simple is, of course, run your pseudo-random number generator an enormous number of times, and compile the number of times each result appears. In the end, each result should've appeared a number of times very close to (number of iterations)/(number of possible results). The greater the standard deviation of this, the worse your generator is.
The second is: how many random numbers are you using at a time? 2, 3? Take them in pairs (or triplets) and repeat the previous experiment: after a very large number of iterations, each possible result should have appeared at least once, and again the number of times each result has appeared shouldn't be too far from the expected. There are some generators which work just fine for taking 1 or 2 at a time, but fail spectacularly when you're taking 3 or more (RANDU, anyone?).
There are other, more complex tests: some involve plotting the results on a logarithmic scale, or onto a plane with a circle in the middle and then counting how many of the points fall within it, others... I believe those 2 above should suffice most of the time (unless you're a finicky mathematician).
A: Random is Random. Even if the same picture shows up 4 times in a row, it could still be considered random.
A: My opinion is that anything random cannot be properly tested.
Sure you can attempt to test it, but there are so many combinations to try that you are better off just relying on the RNG and spot checking a large handful of cases.
A: Well, the problem is that random numbers by definition can get repeated (because they are... wait for it: random). Maybe what you want to do is save the latest random number and compare the calculated one to that, and if equal just calculate another... but now your numbers are less random (I know there's no such thing as "more or less" randomness, but let me use the term just this time), because they are guaranteed not to repeat.
Anyway, you should never give random numbers so much thought. :)
A: As others have pointed out, it is impossible to really test for randomness. You can (and should) have the randomness contained to one particular method, and then write unit tests for every other method. That way, you can test all of the other functionality, assuming that you can get a random number out of that one last part.
A: Store the random values, and before you use the next generated random number, check it against the stored values.
A: Any good pseudo-random number generator will let you seed the generator. If you seed the generator with the same number, then the stream of random numbers generated will be the same. So why not seed your random number generator and then create your unit tests based on that particular stream of numbers?
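For illustration, a sketch of the repeatability that a fixed seed buys you (this code is mine, not from the original answer):
using System;
using System.Diagnostics;

// Two generators seeded with the same number produce identical streams,
// so a test built on a fixed seed is repeatable run after run.
Random a = new Random(42);
Random b = new Random(42);
for (int i = 0; i < 1000; i++)
{
    Debug.Assert(a.Next(100) == b.Next(100));
}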
A: To get a series of non-repeating random numbers:
*
*Create a list of random numbers.
*Add a sequence number to each random number
*Sort the sequenced list by the original random number
*Use your sequence number as a new random number.
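In C# with LINQ, that decorate-and-sort technique collapses to roughly the following sketch (note this is the shuffle the list above describes, not a Fisher-Yates shuffle):
using System;
using System.Linq;

Random rnd = new Random();
// Pair each sequence number with a random key, sort by the key,
// then read the sequence numbers back in their new order.
int[] shuffled = Enumerable.Range(0, 10)
                           .OrderBy(n => rnd.Next())
                           .ToArray();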
A: Don't test the randomness; test to see if the results you're getting are desirable (or, rather, try to get undesirable results a few times before accepting that your results are probably going to be desirable).
It will be impossible to ensure that you'll never get an undesirable result if you're testing a random output, but you can at least increase the chances that you'll notice it happening.
I would either take N pools of Y size, checking for any results that appear more than X number of times, or take one pool of N*Y size, checking every group of Y size for any result that appears more than X times (1 to Y, 2 to Y + 1, 3 to Y + 2, etc). What N is would depend on how reliable you want the test to be.
A: Random numbers are generated from a distribution. In this case, every value should have the same probability of appearing. If you calculated an infinite number of randoms, you would get the exact distribution.
In practice, call the function many times and check the results. If you expect to have N images, calculate 100*N randoms, then count how many of each expected number were found. Most should appear 70-130 times. Re-run the test with a different random seed to see if the results are different.
If you find the generator you use now is not good enough, you can easily find something better. Google for "Mersenne Twister" - that is much more random than you will ever need.
To avoid images re-appearing, you need something less random. A simple approach would be to check for the disallowed values: if the result is one of those, re-calculate.
A: Although you cannot test for randomness, you can test the correlation, or distribution, of a sequence of numbers.
Hard to test goal: Each time we need an image, select 1 of 4 images at random.
Easy to test goal: For every 100 images we select, each of the 4 images must appear at least 20 times.
A: I agree with Adam Rosenfield. For the situation you're talking about, the only thing you can usefully test for is distribution across the range.
The situation I usually encounter is that I'm generating pseudorandom numbers with my favourite language's PRNG, and then manipulating them into the desired range. To check whether my manipulations have affected the distribution, I generate a bunch of numbers, manipulate them, and then check the distribution of the results.
To get a good test, you should generate at least a couple orders of magnitude more numbers than your range holds. The more values you use, the better the test. Obviously if you have a really large range, this won't work since you'll have to generate far too many numbers. But in your situation it should work fine.
Here's an example in Perl that illustrates what I mean:
for (my $i=0; $i<=100000; $i++) {
my $r = rand; # Get the random number
$r = int($r * 1000); # Move it into the desired range
$dist{$r} ++; # Count the occurrences of each number
}
print "Min occurrences: ", (sort { $a <=> $b } values %dist)[1], "\n";
print "Max occurrences: ", (sort { $b <=> $a } values %dist)[1], "\n";
If the spread between the min and max occurrences is small, then your distribution is good. If it's wide, then your distribution may be bad. You can also use this approach to check whether your range was covered and whether any values were missed.
Again, the more numbers you generate, the more valid the results. I tend to start small and work up to whatever my machine will handle in a reasonable amount of time, e.g. five minutes.
A: Supposing you are testing a range for randomness within integers, one way to verify this is to create a gajillion (well, maybe 10,000 or so) 'random' numbers and plot their occurrence on a histogram.
****** ****** ****
***********************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
1 2 3 4 5
12345678901234567890123456789012345678901234567890
The above shows a 'relatively' normal distribution.
If it looked more skewed, such as this:
****** ****** ****
************ ************ ************
************ ************ ***************
************ ************ ****************
************ ************ *****************
************ ************ *****************
*************************** ******************
**************************** ******************
******************************* ******************
**************************************************
1 2 3 4 5
12345678901234567890123456789012345678901234567890
Then you can see there is less randomness. As others have mentioned, there is the issue of repetition to contend with as well.
If you were to write a binary file of, say, 10,000 random numbers from your generator, each a random number from 1 to 1024, and try to compress that file using some compression (zip, gzip, etc.), then you could compare the two file sizes. If there is 'lots' of compression, then it's not particularly random. If there isn't much of a change in size, then it's 'pretty random'.
Why this works
The compression algorithms look for patterns (repetition and otherwise) and reduce them in some way. One way to look at these compression algorithms is as a measure of the amount of information in a file. A highly compressed file has little information (i.e. randomness) and a little-compressed file has much information (randomness)
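A sketch of that compression test in C# using GZip (the sample size and 1-1024 range follow the paragraph above; everything else is illustrative):
using System;
using System.IO;
using System.IO.Compression;

Random rnd = new Random();
byte[] data = new byte[20000]; // 10,000 two-byte values from 1 to 1024
for (int i = 0; i < data.Length; i += 2)
{
    int value = rnd.Next(1, 1025);
    data[i] = (byte)(value & 0xFF);
    data[i + 1] = (byte)(value >> 8);
}

byte[] compressed;
using (MemoryStream ms = new MemoryStream())
{
    using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress))
    {
        gz.Write(data, 0, data.Length);
    }
    compressed = ms.ToArray();
}

// Little shrinkage suggests "pretty random"; heavy shrinkage suggests patterns.
Console.WriteLine("original: {0} bytes, compressed: {1} bytes",
                  data.Length, compressed.Length);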
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Most rapid RAD environment for prototyping What do you consider the most rapid RAD environment for a working prototype? Not for debate.
*
*language
*platform
*IDE
*DB
*(personal note)
Thank you.
P.S.1 I was pretty happy with PERL for back-end prototyping... I get stuck when dealing with the UI... it doesn't seem to come as easily...
A: It's all pretty subjective I guess, but as you asked 'what do you consider', so...
*
*Delphi 7 onwards (technically object pascal or Delphi language, I guess)
*Windows 2003/XP
*version 7 is the classic, newer ones don't seem as easy to prototype stuff in (to me)
*SQL Express
*in comparison I've used VB6, MS VC++ (from a long time ago), FoxPro/Windows and Visual FoxPro, and a very small smattering of VS2005 (C#). For me, Delphi is the all-round king every time. :-)
A: For prototypes on Windows, Visual Basic is hard to beat. If you need to support another platform (or multiple platforms), then Tcl/Tk is fairly productive as well.
A: I've always considered Perl to be my prototyping language of choice, for a few reasons:
*
*CPAN - There's a module for just about anything.
*It's easy to create hacks to mimic, fake or do something quick and dirty.
*It works everywhere.
A: I think "most rapid" is heavily subjective. A developer with many years in VB will likely be fastest at prototyping in VB. A Java developer in Java. Ruby in Ruby. The "most rapid", then, is going to be heavily skewed by the assets (code libraries, developer experience and tools) you already have in house.
What you define as a "prototype" also heavily affects things. Is a set of pseudo-working screen shots mocked up in Flash to have some clickability for navigation enough? What is the required feature set and what is the target audience for the prototype?
As you can see "best" is going to vary pretty widely. It's probably close to certain that the language will be high-level and the IDE tools are going to have nice UI designers (assuming the prototype has a UI). If you have a lot of DB work, then database wizards that do the SQL grunt work for you will save time and generate reasonable, if not optimized, objects. The platform would likely be whatever platform the prototype should be for - after all prototyping a Windows app under Linux or a Symbian app under Palm OS probably won't give you too much benefit.
A: VFP is great for prototyping. I've seen posts (sorry, don't have links) from Microsoft teams where they say WPF allows fast prototyping for them.
A: Enthought Python Distribution. You create the model of your problem in python and then you say "create a UI for that" in one line of code. If you don't like some parts of the UI, you override the defaults for those parts (and nothing else).
Doesn't get faster than that if you're doing a Desktop app.
The resulting prototype will work on Windows, Linux and Mac.
If you're looking for a web RAD, I suggest to give Grails or TurboGears a try. TurboGears is easier to use, Grails gives you access to the vast space of Java web frameworks (hard to beat).
A: I'd say Python with wxPython
A: I find that prototyping using the Netbeans GUI builder gives me a great start. I'm a Java programmer mostly though.
A: Try out Axure RP Pro.
We did give it a try and found that it to be really very good. It generates the whole prototype in HTML with a few JavaScripts so it becomes easy to distribute prototypes.
Do check it out.
A: Handcraft
When you prototype any GUI interactively in the browser, you can go from as low or high fidelity as you want. Handcraft is focused exactly on prototyping, so it does a whole lot less than IDE's intentionally.
A: For working prototype:
*
*non-gui: python
*gui: ruby on rails
For mockups don't use IDE but some specialized mockup tool, read this here on SO: Whats the best way to create interactive application prototypes?
For a hybrid approach (mockups then code): QT designer is the only viable option I found, due to its specific architecture
There you go.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to get rid of the "Console Root" node in a MMC 3.0 snapin? I've been creating snapins with the new MMC 3.0 classes and C#. I can't seem to find any examples of how to get rid of the "Console Root" node when creating the *.msc files. I looked through the examples in the SDK, but I can't seem to find anything for this.
I have seen other snapins that do what I want, but I can't tell what version of MMC they are using.
A: If I've understood you correctly, this isn't specific to MMC3, but it did take me a while to realise. Right-click on the node, and click New Window from Here. Then switch back to the Console Root window, and close it (Ctrl+F4).
Inside the .msc, it's //View/BookMark/@NodeID, which needs to be "2" (etc.), instead of "1".
A: I know this is an older post, so maybe a response is not necessary but what you're trying to do requires saving a customized MSC file. As one reply states, add your SnapIn, select Open new window from here, then save the MSC file. This is your console configured to show your SnapIn as the RootNode rather than the Console root. Under the File menu is an Options... dialog. From there you can change settings for that particular console file to provide end users a non-Author mode console, they won't be able to change the layout on you then. Note: this is only a setting for that specific console file (e.g. C:\temp\MyCustomConsole.msc), anyone could open a console and use the add/remove dialog to open the SnapIn in any other console they desire.
A: As far as I know, MMC always shows the Console Root. Even if you open it with no snap-in, you will still see the Console Root. Snap ins are only added under it, and several could be loaded at the same time and they would all be under the console root which is simply the root of the tree.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is the recommended toolchain for formatting XML DocBook? I've seen Best tools for working with DocBook XML documents, but my question is slightly different. Which is the currently recommended formatting toolchain - as opposed to editing tool - for XML DocBook?
In Eric Raymond's 'The Art of Unix Programming' from 2003 (an excellent book!), the suggestion is XML-FO (XML Formatting Objects), but I've since seen suggestions here that indicated that XML-FO is no longer under development (though I can no longer find that question on StackOverflow, so maybe it was erroneous).
Assume I'm primarily interested in Unix/Linux (including MacOS X), but I wouldn't automatically ignore Windows-only solutions.
Is Apache's FOP the best way to go? Are there any alternatives?
A: We use XMLmind XmlEdit for editing and Maven's docbkx plugin to create output during our builds. For a set of good templates take a look at the ones Hibernate or Spring provide.
A: For HTML output, I use the Docbook XSL stylesheets with the XSLT processor xsltproc.
For PDF output, I use dblatex, which translates to LaTeX and then use pdflatex to compile it to PDF. (I used Jade, the DSSSL stylesheets and jadetex before.)
A: We use
*
*Serna XML Editor
*Eclipse (plain xml editing, mostly used by the technical people)
*own specific Eclipse plug-in (just for our release-notes)
*Maven docbkx plug-in
*Maven jar with specific corporate style sheet, based on the standard docbook style-sheets
*Maven plug-in for converting csv to DocBook table
*Maven plug-in for extracting BugZilla data and creating a DocBook section from it
*Hudson (to generate the PDF document(s))
*Nexus to deploy the created PDF documents
Some ideas we have:
Deploy with each product version not only the PDF, but also the original complete DocBook document (as we partly write the documents and partly generate them). Saving the full DocBook document makes us independent of changes in the system setup in the future: if the system from which the content was extracted changes (or is replaced by a different system), we would not be able to regenerate the exact content any more, which could cause an issue if we needed to re-release (with a different style-sheet) the whole product range of manuals. It is the same as with the jars: these compiled Java classes are also placed in Nexus (you do not want to store them in your SCM), and we would do the same with the generated DocBook document.
Update:
I have just created a Maven HTML Cleaner plug-in, which makes it possible to add DocBook content to a Maven Project Site (a beta version is available). Feedback is welcome through the Open Discussion Forum.
A: The DocBook stylesheets, plus FOP, work well, but I finally decided to spring for RenderX, which covers the standard more thoroughly and has some nice extensions that the DocBook stylesheets take advantage of.
Bob Stayton's book, DocBook XSL: The Complete Guide, describes several alternate tool chains, including ones that work on Linux or Windows (almost surely MacOS, too, though I have not personally used a Mac).
A: A popular approach is to use DocBook XSL Stylesheets.
A: Regarding the question about Apache's FOP: when we established our toolchain (similar to what Gustavo has suggested) we had very good results using the RenderX XEP engine. XEP's output looks a little bit more polished, and as far as I recall, FOP had some problems with tables (this was a few years ago though, so this might have changed).
A: With FOP you get the features that someone decided they wanted bad enough to implement. I'd say that no one who's serious about publishing uses it in production. You're far better off with RenderX or Antenna House or Arbortext. (I've used them all over the last decade's worth of implementation projects.) It depends on your business requirements, how much you want to automate, and what your team's skills, time, and resources are like as well. It's not just a technology question.
A: If you're on Red Hat, Ubuntu, or Windows, you could take a look at Publican, which is supposed to be a fairly complete command line toolchain. Red Hat uses it extensively.
*
*Wiki here: https://fedorahosted.org/publican/
*Doc here: http://jfearn.fedorapeople.org/Publican/
*Source tarballs and exes here: https://fedorahosted.org/releases/p/u/publican/
A: The article called The DocBook toolchain might be useful as well. It is a section of a HOWTO on DocBook written by Eric Raymond.
A: I've been using two CLI utils for simplifying my docbook toolchain: xmlto and publican.
Publican looks elegant to me, but it is fitted mostly to Fedora & Red Hat publication needs.
*
*https://fedorahosted.org/xmlto/
*https://fedorahosted.org/publican/
A: I released (and am still working on) an open-source project called bookshop, which is a RubyGem that installs a complete Docbook-XSL pipeline/toolchain. It includes everything needed to create and edit Docbook source files and output different formats (currently pdf and epub, and growing quickly).
My goal is to make it possible to go from Zero-to-Exporting(pdf's or whatever) from your Docbook source in under 10 minutes.
The Summary:
bookShop is an OSS ruby-based framework for docbook toolchain happiness and sustainable productivity. The framework is optimized to help developers quickly ramp-up, allowing them to more rapidly jump in and develop their DocBook-to-Output flows, by favoring convention over configuration, setting them up with best practices, standards and tools from the get-go.
Here's the gem location: https://rubygems.org/gems/bookshop
And the source code: https://github.com/blueheadpublishing/bookshop
A: I've been doing some manual writing with DocBook, under cygwin, to produce One Page HTML, Many Pages HTML, CHM and PDF.
I installed the following:
*
*The docbook stylesheets (xsl) repository.
*xmllint, to test if the xml is correct.
*xsltproc, to process the xml with the stylesheets.
*Apache's fop, to produce PDF's. I make sure to add the installed folder to the PATH.
*Microsoft's HTML Help Workshop, to produce CHM's. I make sure to add the installed folder to the PATH.
Edit: In the below code I'm using more than the 2 files. If someone wants a cleaned up version of the scripts and the folder structure, please contact me: guscarreno (squiggly/at) googlemail (period/dot) com
I then use a configure.in:
AC_INIT(Makefile.in)
FOP=fop.sh
HHC=hhc
XSLTPROC=xsltproc
AC_ARG_WITH(fop, [ --with-fop Where to find Apache FOP],
[
if test "x$withval" != "xno"; then
FOP="$withval"
fi
]
)
AC_PATH_PROG(FOP, $FOP)
AC_ARG_WITH(hhc, [ --with-hhc Where to find Microsoft Help Compiler],
[
if test "x$withval" != "xno"; then
HHC="$withval"
fi
]
)
AC_PATH_PROG(HHC, $HHC)
AC_ARG_WITH(xsltproc, [ --with-xsltproc Where to find xsltproc],
[
if test "x$withval" != "xno"; then
XSLTPROC="$withval"
fi
]
)
AC_PATH_PROG(XSLTPROC, $XSLTPROC)
AC_SUBST(FOP)
AC_SUBST(HHC)
AC_SUBST(XSLTPROC)
HERE=`pwd`
AC_SUBST(HERE)
AC_OUTPUT(Makefile)
cat > config.nice <<EOT
#!/bin/sh
./configure \
--with-fop='$FOP' \
--with-hhc='$HHC' \
--with-xsltproc='$XSLTPROC' \
EOT
chmod +x config.nice
and a Makefile.in:
FOP=@FOP@
HHC=@HHC@
XSLTPROC=@XSLTPROC@
HERE=@HERE@
# Subdirs that contain docs
DOCS=appendixes chapters reference
XML_CATALOG_FILES=./build/docbook-xsl-1.71.0/catalog.xml
export XML_CATALOG_FILES
all: entities.ent manual.xml html
clean:
@echo -e "\n=== Cleaning\n"
@-rm -f html/*.html html/HTML.manifest pdf/* chm/*.html chm/*.hhp chm/*.hhc chm/*.chm entities.ent .ent
@echo -e "Done.\n"
dist-clean:
@echo -e "\n=== Restoring defaults\n"
@-rm -rf .ent autom4te.cache config.* configure Makefile html/*.html html/HTML.manifest pdf/* chm/*.html chm/*.hhp chm/*.hhc chm/*.chm build/docbook-xsl-1.71.0
@echo -e "Done.\n"
entities.ent: ./build/mkentities.sh $(DOCS)
@echo -e "\n=== Creating entities\n"
@./build/mkentities.sh $(DOCS) > .ent
@if [ ! -f entities.ent ] || [ ! cmp entities.ent .ent ]; then mv .ent entities.ent ; fi
@echo -e "Done.\n"
# Build the docs in chm format
chm: chm/htmlhelp.hpp
@echo -e "\n=== Creating CHM\n"
@echo logo.png >> chm/htmlhelp.hhp
@echo arrow.gif >> chm/htmlhelp.hhp
@-cd chm && "$(HHC)" htmlhelp.hhp
@echo -e "Done.\n"
chm/htmlhelp.hpp: entities.ent build/docbook-xsl manual.xml build/chm.xsl
@echo -e "\n=== Creating input for CHM\n"
@"$(XSLTPROC)" --output ./chm/index.html ./build/chm.xsl manual.xml
# Build the docs in HTML format
html: html/index.html
html/index.html: entities.ent build/docbook-xsl manual.xml build/html.xsl
@echo -e "\n=== Creating HTML\n"
@"$(XSLTPROC)" --output ./html/index.html ./build/html.xsl manual.xml
@echo -e "Done.\n"
# Build the docs in PDF format
pdf: pdf/manual.fo
@echo -e "\n=== Creating PDF\n"
@"$(FOP)" ./pdf/manual.fo ./pdf/manual.pdf
@echo -e "Done.\n"
pdf/manual.fo: entities.ent build/docbook-xsl manual.xml build/pdf.xsl
@echo -e "\n=== Creating input for PDF\n"
@"$(XSLTPROC)" --output ./pdf/manual.fo ./build/pdf.xsl manual.xml
check: manual.xml
@echo -e "\n=== Checking correctness of manual\n"
@xmllint --valid --noout --postvalid manual.xml
@echo -e "Done.\n"
# need to touch the dir because the timestamp in the tarball
# is older than that of the tarball :)
build/docbook-xsl: build/docbook-xsl-1.71.0.tar.gz
@echo -e "\n=== Un-taring docbook-xsl\n"
@cd build && tar xzf docbook-xsl-1.71.0.tar.gz && touch docbook-xsl-1.71.0
to automate the production of the above mentioned file outputs.
I prefer to use a *nix approach to the scripting, just because the toolset is easier to find and use, not to mention easier to chain.
A: I prefer using Windows for most of my content creation (Notepad++ editor). Publican in Linux is a good toolchain to create a good documentation structure and process outputs. I use Dropbox (there are other document-sharing services as well, which should work well on both platforms) on my Windows machine as well as on my virtual Linux machine.
With this setup I've been able to achieve a combination that works great for me.
Once edit work is completed in Windows (which immediately syncs to the Linux machine), I switch to Linux to run the publican build and create HTML and PDF outputs, which again are updated in my Windows folder by Dropbox.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Need to create a layered dict from a flat one I have a dict that looks like this:
{
'foo': {
'opt1': 1,
'opt2': 2,
},
'foo/bar': {
'opt3': 3,
'opt4': 4,
},
'foo/bar/baz': {
'opt5': 5,
'opt6': 6,
}
}
And I need to get it to look like:
{
'foo': {
'opt1': 1,
'opt2': 2,
'bar': {
'opt3': 3,
'opt4': 4,
'baz': {
'opt5': 5,
'opt6': 6,
}
}
}
}
I should point out that there can and will be multiple top-level keys ('foo' in this case). I could probably throw something together to get what I need, but I was hoping there is a solution that's more efficient.
A: Like this:
def nest(d):
rv = {}
for key, value in d.iteritems():
node = rv
for part in key.split('/'):
node = node.setdefault(part, {})
node.update(value)
return rv
A: def layer(d):
    # iterate over a snapshot of the items, since we mutate d while looping
    for k, v in list(d.items()):
        if '/' in k:
            del d[k]
            # create (or reuse) the parent dict, then push the rest of the key down
            subdict = d.setdefault(k[:k.find('/')], {})
            subdict[k[k.find('/') + 1:]] = v
            layer(subdict)
A: Use the pprint lib to print your dict in a more readable way: pprint.
https://docs.python.org/3.2/library/pprint.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Mount floppy image in cygwin How can I mount a floppy image file using Cygwin? I would like to mount the image, copy a file to the mounted drive, and then unmount it from the command line.
I know you can use Virtual Floppy Drive in Windows, but is there a way to do this in Cygwin?
A: Can't you just use Virtual Floppy Drive? Cygwin doesn't really do filesystems; it lets Windows take care of that.
A: If you look online (Google), there doesn't seem to be support in Cygwin for that kind of functionality. An alternative, though more effort, would be to use something like VirtualBox, or the free version of VMWare, and run a lightweight Linux distro, where you could use the loopback mounting feature and expose it via Samba as a Windows share.
A: ImDisk Virtual Disk Driver is a disk image emulator created by Olof Lagerkvist. It can emulate devices as hard drives, floppy drives, or CD/DVD drives. It is free software, containing some code licensed under GPL, and some under BSD.
URL download: ImDisk
A: Cygwin is just a standard Win32 DLL; it relies on the Windows kernel for everything related to file systems. This means it cannot mount or unmount anything by itself. However, you can still read and write to floppy images from the command line using mtools.
Note that in order to compile, I had to manually edit the Makefile (after the ./configure step) to modify this line:
LIBS = -liconv
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can I load different endpoints for WCF in SQL CLR? We're deploying some new WCF calls in our SQL 2005 DB using the CLR. In testing, I hardcoded the endpoint to connect to in the code, and deployed it to our test server. When we go to deploy this to production, we will be deploying it to many different SQL DBs, and using different endpoints to connect to (same service running on different servers). How can something like this be done? Is there a config file that can be referenced for the deployment of the DLL into SQL?
A: The solutions above would work, but we found that the best practice approach would be to create a new table storing all of the different endpoints into the DB. Then, we updated the CLR to make a call to this table to get the endpoint(s) that were needed. So each server would have the proper metadata loaded for it, and it would all be retrieved from the DB. No hardcoding this way, and there's no need to worry about external text files on the SQL server. It's all contained in the DB.
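A sketch of how the lookup might work inside the CLR routine (the table and column names are hypothetical; the special "context connection" lets SQL CLR code query its hosting database directly):
// requires: using System.Data.SqlClient;
string endpoint;
using (SqlConnection conn = new SqlConnection("context connection=true"))
using (SqlCommand cmd = new SqlCommand(
    "SELECT EndpointUrl FROM dbo.ServiceEndpoints WHERE ServiceName = @name", conn))
{
    cmd.Parameters.AddWithValue("@name", "MyWcfService"); // hypothetical row key
    conn.Open();
    endpoint = (string)cmd.ExecuteScalar();
}
// use 'endpoint' to build the WCF EndpointAddress before opening the channel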
A: Accessing Application Configuration Settings from SQL CLR
Another technique.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I display the full content of LOB column in Oracle SQL*Plus? When I try to display the contents of a LOB (large object) column in SQL*Plus, it is truncated. How do I display the whole thing?
A: SQL> set long 30000
SQL> show long
long 30000
A: You may also need:
SQL> set longchunksize 30000
Otherwise the LOB/CLOB will wrap.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Repeating Header in a group I am parsing text that has a heading and then data that applies to that heading. I need to break each data field into groups, and have the heading also apply to those groups. Here's an example:
(Update: The text below has been updated to better reflect its current layout, and to indicate an annotation.)
Heading 1
Heading 2 Heading 3
(Group 1)
data1 data2
data3 data4
data5
(Group 2)
data1 data2
data3 data4
data5
(Group 3)
data1 data2
data3 data4
data5
** The headers become different values here *** (this is not part of the data)
NewHeading 1
NewHeading 2 NewHeading 3
(Group 4)
data1 data2
data3 data4
data5
(Group 5)
data1 data2
data3 data4
data5
**etc
The output should be like this:
(Group 1) Heading1 Heading2 Heading3 data1 data2 data3 data4 data5
(Group 2) Heading1 Heading2 Heading3 data1 data2 data3 data4 data5
(Group 3) Heading1 Heading2 Heading3 data1 data2 data3 data4 data5
(Group 4) NewHeading1 NewHeading2 NewHeading3 data1 data2 data3 data4 data5
(Group 5) NewHeading1 NewHeading2 NewHeading3 data1 data2 data3 data4 data5
The fields marked (Group 1-5) are just labels for the line, they are not meant to be part of the returned set. The Headings changing to "NewHeading" is merely to indicate that the header values have changed and should be applied to the data that follows it.
I've done a bit of reading over the past couple of hours looking for what this is called, but I haven't had any luck. Any ideas? This is for the .Net regex engine.
Update: Annotations added, and showed that the headers change over the file.
I have also done some research and believe I can accomplish something almost as easily by having two patterns: one for the headers, and one for the data. Then I can compare the header and data match indexes (i.e. where each is found in the file) to each other and combine them that way.
A: 1) Loop through line by line. This isn't the place for a regex
2) You really need to clarify your question. It isn't clear if (Group 1) and ** The headings become different here ** are part of your input, or whether you're annotating it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can apache-jmeter check the return value of a URL for a correct response? The set up for apache-jmeter allows for a URL to be sent to a web-server on multiple threads. I'm interested in first determining if the response codes are 200-500 and then whether the returned content is the expected content. Is this detailed configuration possible?
A: Yes it does. You simply need to add two Response Assertions to your HTTP Sampler.
One which checks the Response Code, and a second which checks the response message.
Whether these passed or failed will then be visible in the Summary Report.
A: I believe so from what it states here
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Capture console output for debugging in VS? Under VS's external tools settings there is a "Use Output Window" check box that captures the tools command line output and dumps it to a VS tab.
The question is: can I get the same processing for my program when I hit F5?
Edit: FWIW I'm in C# but if that makes a difference to your answer then it's unlikely that your answer is what I'm looking for.
What I want would take the output stream of the program and transfer it to the output tab in VS using the same devices that output redirection ('|' and '>') uses in the cmd prompt.
A: You should be able to capture the output in a text file and use that.
I don't have a VS handy, so this is from memory:
*
*Create a C++ project
*Open the project settings, debugging tab
*Enable managed debugging
*Edit command line to add "> output.txt"
*Run your program under the debugger
If things work the way I remember, this will redirect STDOUT to a file, even though you're not actually running under CMD.EXE.
(The debugger has its own implementation of redirection syntax, which is not 100% the same as cmd, but it's pretty good.)
Now, if you open this file in VS, you can still see the output from within VS, although not in exactly the same window you were hoping for.
A: The console can redirect its output to any TextWriter. If you implement a TextWriter that writes to Diagnostics.Debug, you are all set.
Here's a TextWriter that writes to the debugger.
using System.Diagnostics;
using System.IO;
using System.Text;
namespace TestConsole
{
public class DebugTextWriter : TextWriter
{
public override Encoding Encoding
{
get { return Encoding.UTF8; }
}
//Required
public override void Write(char value)
{
Debug.Write(value);
}
//Added for efficiency
public override void Write(string value)
{
Debug.Write(value);
}
//Added for efficiency
public override void WriteLine(string value)
{
Debug.WriteLine(value);
}
}
}
Since it uses Diagnostics.Debug, it will adhere to your compiler settings as to whether it should write any output or not. This output can also be seen in Sysinternals DebugView.
Here's how you use it:
using System;
namespace TestConsole
{
class Program
{
static void Main(string[] args)
{
Console.SetOut(new DebugTextWriter());
Console.WriteLine("This text goes to the Visual Studio output window.");
}
}
}
If you want to see the output in Sysinternals DebugView when you are compiling in Release mode, you can use a TextWriter that writes to the OutputDebugString API. It could look like this:
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
namespace TestConsole
{
public class OutputDebugStringTextWriter : TextWriter
{
[DllImport("kernel32.dll")]
static extern void OutputDebugString(string lpOutputString);
public override Encoding Encoding
{
get { return Encoding.UTF8; }
}
//Required
public override void Write(char value)
{
OutputDebugString(value.ToString());
}
//Added for efficiency
public override void Write(string value)
{
OutputDebugString(value);
}
//Added for efficiency
public override void WriteLine(string value)
{
OutputDebugString(value);
}
}
}
A: I'm going to make a few assumptions here. First, I presume that you are talking about printf output from an application (whether it be from a console app or from a windows GUI app). My second assumption is the C language.
To my knowledge, you cannot direct printf output to the output window in dev studio, not directly anyway. [emphasis added by OP]
There might be a way but I'm not aware of it. One thing that you could do though would be to direct printf function calls to your own routine which will
*
*call printf and print the string
*call OuputDebugString() to print the string to the output window
You could do several things to accomplish this goal. First would be to write your own printf function and then call printf and the OuputDebugString()
#include <stdio.h>
#include <stdarg.h>
#include <windows.h>

void my_printf(const char *format, ...)
{
    char buf[2048];
    va_list arglist;

    // get the arg list and format it into a string
    va_start(arglist, format);
    vsprintf_s(buf, 2048, format, arglist);
    va_end(arglist);

    fputs(buf, stdout);     // prints to the standard output stream
                            // (don't pass buf back in as a format string)
    OutputDebugString(buf); // prints to the output window
}
The code above is mostly untested, but it should get the concepts across.
If you are not doing this in C/C++, then this method won't work for you. :-)
A: Maybe this will work for you: set a breakpoint on the close } in Main, and then look at the console window before it closes. You can even copy the text out of it if you need to.
On every machine that I use for development, I configure my console window in a certain way, which happens to make this approach work better:
*
*Run cmd.exe
*ALT-SPACE, D
*In Options, enable QuickEdit mode.
*In Layout, set Buffer Height to 9999
*Click OK
*Exit the CMD window.
A: System.Diagnostics.Debug.WriteLine() or Trace.WriteLine()
A: You can use the System.Diagnostics.Trace class to write your output to the Output window instead of (or in addition to) the console. It takes a little configuration, but it works. Is that along the lines of what you want?
You can also add your own tab per this article, but I've never tried it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: 2d graphics effects 101: throwing something I have a page in my desktop app, and I've implemented simple grab-and-pan. It works great.
When you are panning in this way and you are release, the page stops dead where you dropped it.
I'd like it to continue slightly with some momentum, and stop eventually. Rather like the 'throw' in the iPhone UI, I guess.
I'm not really chasing perfection, just a very crude simple sense of being able to 'throw' that page.
A: Just keep track of the current velocity of the page, in addition to its position. When the user releases, set the velocity to the last amount by which you panned. Then, in subsequent frames, continue panning by the current velocity, and decrease the current velocity by some fixed amount (in magnitude) until it reaches zero.
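A minimal sketch of that loop, run from an animation timer (all names and the friction constant here are illustrative, not from the original answer):
// velocityX/velocityY are set to the last pan delta when the user releases.
void OnTimerTick()
{
    panX += velocityX;
    panY += velocityY;

    const double friction = 0.5; // fixed per-tick decay; tune to taste
    velocityX = Shrink(velocityX, friction);
    velocityY = Shrink(velocityY, friction);

    if (velocityX == 0 && velocityY == 0)
        timer.Stop(); // momentum exhausted
}

double Shrink(double v, double amount)
{
    // move v toward zero by 'amount', clamping at zero
    if (v > amount) return v - amount;
    if (v < -amount) return v + amount;
    return 0;
}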
A: You can calculate the velocity of the movement by tracking the position. Due to the lack of precision, and for smoothing reasons, you will want to average the last multiple positions, assuming they were taken at nearly even time-frame apart from one another.
Once you have the average of these, you can adjust your velocity according to how much you want the effect to show. Simply add a constant multiplier to the average once you have calculated it.
From here, you will move the window by this velocity, decreasing the velocity until it hits 0. The rate of decrease also depends on personal preference. If you want the window to move over a longer period, you will be decrease the velocity at a slower rate than if you wanted it to stop faster.
If you want a "bounce" effect, simply check for when the window hits the side of the screen. If it hits the left or right (that is, the WindowX <= 0 or WindowX + WindowWidth >= ScreenWidth), multiply the X velocity by -1 to send it in the other direction. Same goes for the Y axis. If you do not add a "bounce" effect, I would recommend at least doing the same check, but when it hits the side of the screen, you force it back into the screen (that is, WindowX >= 0 and WindowX <= ScreenWidth - WindowWidth) the set the velocity to 0, stopping the animation completely.
I would recommend, too, that you add a cap on the maximum velocity (ie between -x and x units). This will prevent the odd case where "something" happens and the velocity ends up at an insane number, and the screen bounces at a million miles per hour all over.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How can I wrap BOOST in a separate namespace? I'm looking to have two versions of BOOST compiled into a project at the same time. Ideally they should be usable along these lines:
boost_1_36_0::boost::shared_ptr<SomeClass> someClass = new SomeClass();
boost_1_35_0::boost::regex expression("[0-9]", boost_1_35_0::boost::regex_constants::basic);
A: Using bcp you can install the boost library to a specific location and replace all 'namespace boost' occurrences in the code with a custom alias. Assuming our alias is 'boost_1_36_0', all 'namespace boost' code blocks will start with 'boost_1_36_0'. Something like
bcp --namespace=boost_1_36_0 --namespace-alias shared_ptr regex /path/to/install
, but check the documentation in the link yourself because I'm not sure if it is legal syntax.
A: I read (well scanned) through the development list discussion. There's no easy solution. To sum up:
*
*Wrapping header files in a namespace declaration
namespace boost_1_36_0 {
#include <boost_1_36_0/boost/regex.hpp>
}
namespace boost_1_35_0 {
#include <boost_1_35_0/boost/shared_ptr.hpp>
}
*
*Requires modifying source files
*Doesn't allow for both versions to be included in the same translation unit, due to the fact that macros do not respect namespaces.
*Defining boost before including headers
#define boost boost_1_36_0
#include <boost_1_36_0/boost/regex.hpp>
#undef boost
#define boost boost_1_35_0
#include <boost_1_35_0/boost/shared_ptr.hpp>
#undef boost
*
*Source files can simply be compiled with -Dboost=boost_1_36_0
*Still doesn't address macro conflicts in a single translation unit.
*Some internal header file inclusions may be messed up, since this sort of thing does happen.
#if defined(SOME_CONDITION)
# define HEADER <boost/some/header.hpp>
#else
# define HEADER <boost/some/other/header.hpp>
#endif
But it may be easy enough to work around those cases.
*Modifying the entire boost library to replace namespace boost {..} with namespace boost_1_36_0 {...} and then providing a namespace alias. Replace all BOOST_XYZ macros and their uses with BOOST_1_36_0_XYZ macros.
*
*This would likely work if you were willing to put into the effort.
A: @Josh:
While I agree with the shivering, I still believe this is the better course of action. Otherwise, linking troubles are a certainty. I've had the situation before where I had to hack the compiled libraries using objcopy to avoid definition conflicts. It was a nightmare for platform interoperability reasons because the name mangling works very differently even in different versions of the same compilers (in my case, GCC).
A: You'll have a world of trouble linking because the mangled names will be different. And yes, I see you knew that, but it seems like it will be trouble all around.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: What is the time complexity of indexing, inserting and removing from common data structures? There doesn't seem to be a summary available of the big-O complexity of operations on the most common data structures, including arrays, linked lists, hash tables, etc.
A: Information on this topic is now available on Wikipedia at: Search data structure
+----------------------+----------+------------+----------+--------------+
| | Insert | Delete | Search | Space Usage |
+----------------------+----------+------------+----------+--------------+
| Unsorted array | O(1) | O(1) | O(n) | O(n) |
| Value-indexed array | O(1) | O(1) | O(1) | O(n) |
| Sorted array | O(n) | O(n) | O(log n) | O(n) |
| Unsorted linked list | O(1)* | O(1)* | O(n) | O(n) |
| Sorted linked list | O(n)* | O(1)* | O(n) | O(n) |
| Balanced binary tree | O(log n) | O(log n) | O(log n) | O(n) |
| Heap | O(log n) | O(log n)** | O(n) | O(n) |
| Hash table | O(1) | O(1) | O(1) | O(n) |
+----------------------+----------+------------+----------+--------------+
* The cost to add or delete an element into a known location in the list
(i.e. if you have an iterator to the location) is O(1). If you don't
know the location, then you need to traverse the list to the location
of deletion/insertion, which takes O(n) time.
** The deletion cost is O(log n) for the minimum or maximum, O(n) for an
arbitrary element.
A: Nothing as useful as this: Common Data Structure Operations:
A: Red-Black trees:
*
*Insert - O(log n)
*Retrieve - O(log n)
*Delete - O(log n)
A: Keep in mind that unless you're writing your own data structure (e.g. linked list in C), it can depend dramatically on the implementation of data structures in your language/framework of choice. As an example, take a look at the benchmarks of Apple's CFArray over at Ridiculous Fish. In this case, the data type, a CFArray from Apple's CoreFoundation framework, actually changes data structures depending on how many objects are actually in the array - changing from linear time to constant time at around 30,000 objects.
This is actually one of the beautiful things about object-oriented programming - you don't need to know how it works, just that it works, and the 'how it works' can change depending on requirements.
A: I guess I will start you off with the time complexity of a linked list:
Indexing---->O(n)
Inserting / Deleting at end---->O(1) or O(n)
Inserting / Deleting in middle--->O(1) with iterator, O(n) without
The time complexity for inserting at the end depends on whether you have the location of the last node: if you do, it would be O(1); otherwise you will have to search through the linked list, and the time complexity jumps to O(n).
A: Amortized Big-O for hashtables:
*
*Insert - O(1)
*Retrieve - O(1)
*Delete - O(1)
Note that there is a constant factor for the hashing algorithm, and the amortization means that actual measured performance may vary dramatically.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: Has anybody used Boo and can you comment on your experiences? I'm looking for a Groovy equivalent on .NET
http://boo.codehaus.org/
So far Boo looks interesting, but it is statically typed, yet does include some of the metaprogramming features I'd be looking for.
Can anyone comment on the experience of using Boo and is it worth looking into for more than hobby purposes at a 1.0 Version?
Edit: Changed BOO to Boo
A: I've been using Boo quite a lot lately. It's very flexible and very powerful. The metaprogramming works well, but it's not nearly as easy to use as Nemerle's. In addition, the lack of arbitrary expression nesting (e.g. def foo = if(bar) match(baz) { ... } else 0;) makes certain things harder than it has to be, but that's not something you're going to miss unless you're coming from Nemerle, OCaml, Haskell, or something along those lines.
Overall, I'd say give it a shot. I don't think you'll be disappointed.
A: I wrote a paper on it for my programming language class. I was really impressed with it. It's very fun, and they've started working on BooLangStudio. SharpDevelop also has some support for it.
There are a lot of things that I liked about it. When BooLangStudio is released with code completion, and the boo compiler reaches 1.0 (that means the compiler will be written in pure boo :D), I'll definitely be considering it over C#. It's awesome, so you won't regret looking into it!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: ASP.NET - Add Event Handler to LinkButton inside of Repeater in a RenderContent call I've got a Sharepoint WebPart which loads a custom User Control. The user control contains a Repeater which in turn contains several LinkButtons.
In the RenderContent call in the Webpart I've got some code to add event handlers:
ArrayList nextPages = new ArrayList();
//populate nextPages ....
AfterPageRepeater.DataSource = nextPages;
AfterPageRepeater.DataBind();
foreach (Control oRepeaterControl in AfterPageRepeater.Controls)
{
if (oRepeaterControl is RepeaterItem)
{
if (oRepeaterControl.HasControls())
{
foreach (Control oControl in oRepeaterControl.Controls)
{
if (oControl is LinkButton)
{
((LinkButton)oControl).Click += new EventHandler(PageNavigateButton_Click);
}
}
}
}
}
The function PageNavigateButton_Click is never called, however. I can see it being added as an event handler in the debugger.
Any ideas? I'm stumped how to do this.
A: By the time RenderContent() is called, all the registered event handlers have been called by the framework. You need to add the event handlers in an earlier method, like OnLoad():
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    EnsureChildControls();

    var linkButtons = from ri in AfterPageRepeater.Controls.OfType<RepeaterItem>()
                      where ri.HasControls()
                      from lb in ri.Controls.OfType<LinkButton>()
                      select lb;

    foreach (var linkButton in linkButtons)
    {
        linkButton.Click += PageNavigateButton_Click;
    }
}
A: Have you tried assigning the CommandName and CommandArgument properties to each button as you iterate through? The Repeater control supports the ItemCommand event, which is an event that will be raised when a control with the CommandName property is hit.
From there it is easy enough to process because the CommandName and CommandArgument values are passed into the event and are readily accessible.
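A rough sketch of that approach (the command name and the markup shown in the comment are mine -- wire the event in OnInit/CreateChildControls so it survives postback):
// markup: <asp:LinkButton runat="server" CommandName="Navigate" CommandArgument='<%# Eval("Url") %>' ... />
AfterPageRepeater.ItemCommand += AfterPageRepeater_ItemCommand;

void AfterPageRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
{
    if (e.CommandName == "Navigate")
    {
        string target = (string)e.CommandArgument;
        // navigate / do the work for this page here
    }
}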
A: You need to make sure that the link button is re-added to the control tree and/or that the event is rewired up to the control before the event fires.
Article @ 4guysfromrolla
A: I've never done a SharePoint WebPart, so I don't know if this will apply. But if it were a plain-old apsx page, I'd say that by the time it's rendering, it's too late. Try adding the event handlers in the control's Init or PreInit events.
Edit: Wait, I think Dilli-O might be right. See the Adding Button Controls to a Repeater section at the end of http://www.ondotnet.com/pub/a/dotnet/2003/03/03/repeater.html. It's in VB.NET, but you can easily do the same thing in C#.
A: As others have pointed out, you're adding the event handler too late in the page life cycle. For SharePoint WebParts you'd typically want to override the class' OnInit/CreateChildControls methods to handle the activity.
A: YOu need your webpart to implement the INamingContainer marker interface, it is used by the framework to allow postbacks to return to the correct control...
Also the controls in your webpart all need to have an ID.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Reducing memory footprint of large unfamiliar codebase Suppose you have a fairly large (~2.2 MLOC), fairly old (started more than 10 years ago) Windows desktop application in C/C++. About 10% of modules are external and don't have sources, only debug symbols.
How would you go about reducing application's memory footprint in half? At least, what would you do to find out where memory is consumed?
A: Override malloc()/free() and new()/delete() with wrappers that keep track of how big the allocations are and (by recording the callstack and later resolving it against the symbol table) where they are made from. On shutdown, have your wrapper display any memory still allocated.
This should enable you both to work out where the largest allocations are and to catch any leaks.
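A bare-bones sketch of the new/delete half of that idea (it ignores array forms, alignment, and thread-safety, and only tracks a byte total -- a real version would also record call stacks as described above):
#include <cstdio>
#include <cstdlib>
#include <new>

static size_t g_liveBytes = 0; // bytes currently allocated through operator new

void* operator new(size_t size)
{
    // over-allocate so the block size can be stashed in front of the payload
    void* raw = std::malloc(size + sizeof(size_t));
    if (!raw) throw std::bad_alloc();
    *static_cast<size_t*>(raw) = size;
    g_liveBytes += size;
    return static_cast<char*>(raw) + sizeof(size_t);
}

void operator delete(void* ptr)
{
    if (!ptr) return;
    char* raw = static_cast<char*>(ptr) - sizeof(size_t);
    g_liveBytes -= *reinterpret_cast<size_t*>(raw);
    std::free(raw);
}
Dump g_liveBytes (or better, a per-call-site map) at shutdown to see what is still live and where the big allocations come from.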
A: This is a description/skeleton of the memory tracing application I used to reduce the memory consumption of our game by 20%. It helped me track many allocations done by external modules.
A: It's not an easy task. Begin by chasing down any memory leaks you can find (a good tool would be Rational Purify). Skim the source code and try to optimize data structures and/or algorithms.
Sorry if this may sound pessimistic, but cutting down memory usage by 50% doesn't sound realistic.
A: There is a chance you can find some significant inefficiencies very fast. First you should check what the memory is used for. A tool which I have found very handy for this is Memory Validator
Once you have this "memory usage map", you can check for low-hanging fruit. Are there any data structures consuming a lot of memory which could be represented in a more compact form? This is often possible, especially when the data access is well encapsulated and when you have spare CPU power you can dedicate to compressing / decompressing them on each access.
A: I don't think your question is well posed.
The size of the source code is not directly related to the memory footprint. Sure, the compiled code will occupy some memory, but the application will have memory requirements of its own: both static (the variables declared in the code) and dynamic (the objects the application creates).
I would suggest you profile program execution and study the code carefully.
A: The first places I would start:
Does the application preallocate a lot of memory to be used later? Does this memory often sit around unused, never handed out? Consider switching to newing/deleting (or better, a smart pointer) as needed.
Does the code use a static array such as
Object arrayOfObjs[MAX_THAT_WILL_EVER_BE_USED];
and hand out objs in this array? If so, consider manually managing this memory.
A: One of the tools for memory usage analysis is LeakDiag, available for free download from Microsoft. It apparently allows to hook all user-mode allocators down to VirtualAlloc and to dump process allocation snapshots to XML at any time. These snapshots then can be used to determine which call stacks allocate most memory and which call stacks are leaking. It lacks pretty frontend for snapshot analysis (unless you can get LDParser/LDGrapher via Microsoft Premier Support), but all the data is there.
One more thing to note is that you may have false leak positives from BSTR allocator due to caching, see "Hey, why am I leaking all my BSTR's?"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: PHP: How to return information to a waiting script and continue processing Suppose there are two scripts, Requester.php and Provider.php; Requester requires processing from Provider and makes an HTTP request to it (Provider.php?data="data"). In this situation, Provider quickly finds the answer, but to maintain the system it must perform various updates throughout the database. Is there a way to immediately return the value to Requester, and then continue processing in Provider?
Pseudo Code
Provider.php
{
$answer = getAnswer($_GET['data']);
echo $answer;
//SIGNAL TO REQUESTER THAT WE ARE FINISHED
processDBUpdates();
return;
}
A: You can flush the output buffer with the flush() command.
Read the comments in the PHP manual for more info
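One common shape for this in Provider.php (a sketch -- whether the connection actually closes early depends on the web server honoring Content-Length and Connection: close, so verify it in your setup):
ignore_user_abort(true);       // keep running even if the requester disconnects
ob_start();
echo getAnswer($_GET['data']);
header('Content-Length: ' . ob_get_length());
header('Connection: close');
ob_end_flush();
flush();                       // the requester has its answer at this point

processDBUpdates();            // carry on with the slow updates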
A: I use this code for running a process in the background (works on Linux).
The process runs with its output redirected to a file.
That way, if I need to display status on the process, it's just a matter of writing a small amount of code to read and display the contents of the output file.
I like this approach because it means you can completely close the browser and easily come back later to check on the status.
A: I think you'll need the Provider to send the data (be sure to flush), and then on the Requester side use fopen/fread to read an expected amount of data, so you can drop the connection to the Provider and continue. If you don't specify an amount of data to expect, I would think the Requester would sit there waiting for the Provider to close the connection, which probably doesn't happen until the end of its run (i.e. all the secondary work-intensive tasks are complete). You'll need to try out a few POCs.
Good luck.
A: You basically want to signal the end of 1 process (return to the original Requester.php) and spawn a new process (finish Provider.php). There is probably a more elegant way to pull this off, but I've managed this a couple different ways. All of them basically result in exec-ing a command in order to shell off the second process.
adding the following > /dev/null 2>&1 & to the end of your command will allow it to run in the background without inhibiting the actual execution of your current script
Something like the following may work for you:
exec("wget -O - \"$url\" > /dev/null 2>&1 &");
-- though you could do it as a command line PHP process as well.
You could also save the information that needs to be processed and handle the remaining processing on a cron job that re-creates the same sort of functionality without the need to exec.
A: Split the Provider in two: ProviderCore and ProviderInterface. In ProviderInterface just do the "quick and easy" part, also save a flag in database that the recent request hasn't been processed yet. Run ProviderCore as a cron job that searches for that flag and completes processing. If there's nothing to do, ProviderCore will terminate and retry in (say) 2 minutes.
A: I'm going out on a limb here, but perhaps you should try cURL or use a socket to update the requester?
A: You could start another php process in Provider.php using pcntl_fork()
Provider.php
{
// Fork process
$pid = pcntl_fork();
// You are now running both a daemon process and the parent process
// through the rest of the code below
if ($pid > 0) {
// PARENT Process
$answer = getAnswer($_GET['data']);
echo $answer;
//SIGNAL TO REQUESTER THAT WE ARE FINISHED
return;
}
if ($pid == 0) {
// DAEMON Process
processDBUpdates();
return;
}
// If you get here the daemon process failed to start
handleDaemonErrorCondition();
return;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Easy data conversion tool I often have data in Excel or text that I need to get into SqlServer. I can use ODBC to query the Excel file and I can parse the text file. What I want though is some tool that will just grab the data and put it into tables with little / no effort. Does anyone know of such a tool?
A: If you are using Sql Server look at Integration Services (SSIS).
A: You can also take a look at parse-o-matic
A: Have you tried the SQL Server Import/Export Wizard ?
In SQL Server Management Studio, right-click your Database Name, and select Tasks menu, Import Data. For Data Source, select Microsoft Excel, browse to the .XLS...
A: Seems like it'd be pretty easy to write a script that reads the text file and converts it to INSERT INTO table VALUES (...) SQL statements. I suspect this has already been done, but a simple implementation would be less than 100 lines of code in your favorite scripting language.
Hey, Google says SQL Server comes with such a tool: BULK INSERT.
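Something along these lines -- the table name, file path, and delimiters here are placeholders to adjust for your data:
BULK INSERT dbo.MyTable
FROM 'C:\data\import.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');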
A: Use DTS or SSIS depending on which version of SQL Server you have. There is an import wizard which can get you started, but data imports are rarely simple and usually involve some sort of data cleanup so that your incoming data is acceptable to the table where you intend to store it. Excel data, in my experience, is usually particularly bad in this respect because it often isn't stored properly in Excel to begin with.
A: I haven't seen commercial tools that do this. I create this kind of tools at work all the time, and the data validation is not trivial. This just makes sure that you don't have bad data making it into your database.
I found that for simple data conversion needs something like FileHelpers is pretty good. It still needs programming though. This framework is fairly easy to use, and somebody with a little bit of experience could bang something out for you.
On further thought, you can use the SQL Server bcp utility to upload the contents of a text file. This is a command-line utility and has a lot of switches. I would suggest you experiment on a test table before you use this in a production table.
It's been a while since I used it, so I can't remember if you can directly use an Excel spreadsheet. Text files are always the easiest to deal with in any case.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: ClickOnce app not working with Office 2007 I am a developer for a .net application that uses ClickOnce for deployment. I have deployed it over 60 times and computers have not had any issues downloading the latest release. However, when I deployed this morning, the following error occurs when computers with Office 2007 installed tries to start the application:
Unable to install or run the application. The application requires that assembly stdole Version 7.0.3300.0 be installed in the Global Assembly Cache (GAC) first. Please contact your system administrator.
However, computers with Office 2003 can start the application with no problem.
Does anyone have any recommendations on resolving this issue?
A: As far as I know this version of stdole is removed when Office 2007 is installed. You could install it individually via gacutil on all target machines or somehow include it via the ClickOnce package bootstrapper. I'm on a Mac right now, so I can't test.
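For reference, installing the assembly by hand looks roughly like this (the path is a placeholder; gacutil ships with the SDK rather than with end-user machines, so this is mostly practical on dev/test boxes):
gacutil /i "C:\path\to\stdole.dll"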
A: I am not sure about your particular problem, but the Office 12 version of the stdole library is different (and -in my experience- not always backwards compatible) than then one you have when you use Office 2003.
We use a wrapper around the Office DLLs to use the Mailmerge features Office has to offer and, believe it or not, Microsoft changed the interfaces again, therefore we have to compile two different wrappers, linked against two different office versions. In short, a PITA.
I am not coding against MS Office again until they provide a managed library.
Despite the rant, I think that you should install Office 12 in your computer and test from there. You'll notice that all the Office stuff is different. If you can make it work under Office 12 it may be compatible with machines with Office 11 (also known as 2003), but don't count on it and test it before deploying anything.
I don't think that this has anything to do with ClickOnce; it's more a GAC/OfficeVersion issue. Also, check for possible Vista problems, as the UAC and the DEP stuff tend to interfere with the way "old" applications used to work.
A: I had exactly the same problem once our company started rolling out Office 2007. My first quick solution was to just copy stdole to the GAC of the two computers giving the problem.
After investigating I found that our application was not actually using stdole. It might have added the reference when I tested a COM dll which I removed after testing. So my solution was just to exclude it from the Application Files dialog under the Publish tab. So first make sure you need it.
As far as I understand it is required when you reference some COM dlls like Office. If this is the case with you I found a few posts saying the problem was resolved by changing the Publish Status on the Application Files dialog from Prerequisite to Include and the Download Group from None to Required. In my case stdole was added by default as Prerequisite.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What method can I use to call a web service from VBA? What is the easiest way to call a web service from Excel (I am using version 2002)? Please, no third party tools or libraries. This has to be easy with some VBA that I can paste in, use, and never have to touch again.
A: I don't think there is any especially easy way to talk to SOAP directly from VBA, but a web service with a RESTful interface (i.e. the whole thing can be done via a URL) is easier: I was looking at this article just today. Another article I just found is here.
If you're stuck with SOAP, however, you could start by reading this and this. Frankly, it all looks a bit nasty.
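For the RESTful case, a minimal VBA sketch using the late-bound MSXML HTTP object (no references to add; the URL is a placeholder):
Dim http As Object
Set http = CreateObject("MSXML2.XMLHTTP")
http.Open "GET", "http://example.com/service?param=value", False ' synchronous call
http.send
MsgBox http.responseText
From there you'd parse responseText however the service formats it.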
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Static layers in a java web application I am building a small website for fun/learning using a fairly standard Web/Service/Data Access layered design.
To save me from constantly having to create instances of my service layer/data access layer classes, I have made the methods in them all static. I shouldn't get concurrency issues as they use local variables etc and do not share any resources (things are simple enough for this at the moment).
As far as I can see the only trade-off for this is that I am not really following a true OO approach, but then again it keeps the code much cleaner.
Is there any reason this would not be a viable approach? What sort of problems might arise later on? Would it be better to have a "factory" class that can return me instances of the service and data layer classes as needed?
A: You know those rides at the amusement park where they say "please keep your hands and feet inside the ride at all times"? It turns out the ride is a lot more fun if you don't. The only real trade-off is that you're not really following a true keeping-your-hands-and-feet-inside-the-ride-at-all-times approach.
The point is this -- there is a reason you should follow a "true OO approach", just as there's a reason to keep your hands and feet inside the ride -- it's great fun until you start bleeding everywhere.
A: The way you describe it, this isn't the "wrong" approach per se but I don't really see the problem you're trying to avoid. Can't you just create a single instance of these business objects when the server starts up and pass them to your servlets as needed?
If you're ready to throw OO out the window you might want to check out the Singleton pattern as well.
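If you go that route, the eagerly-initialized form is the simplest thread-safe variant (a sketch; the class name is made up):
public final class ProductDao {
    private static final ProductDao INSTANCE = new ProductDao();

    private ProductDao() { } // prevent outside instantiation

    public static ProductDao getInstance() {
        return INSTANCE;
    }
}
Each servlet/thread then shares the one instance via ProductDao.getInstance().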
A: Disadvantages:
*
*You will be unable to write unit tests as you will be unable to write mock data access/business logic objects to test against.
*You will have concurrency problems as different threads try to access the static code at the same time - or if you use synchronized static methods you will end up with threads queuing up to use the static methods.
*You will not be able to use instance variables, which will become a restriction as the code becomes more complex.
*It will be more difficult to replace elements of the business or data access layers if you need to.
*If you intend to write your application in this manner you would be better off using a language designed to work in this way, such as PHP.
You would be better off going for non-static business/data access layer classes by either:
*
*Using the singleton pattern (creating a single instance of each class and sharing them among threads)...
*Or creating instances of the classes in each thread as and when they are needed.
Keep in mind that each user/session connected to your application will be running in it's own thread - so your web application is inherently multi-threaded.
A: I don't really see the advantage to your design, and there are many things that could go wrong. You are saving a line of code, maybe? Here's some disadvantages to your approach:
*
*You cannot easily replace implementations of your business logic
*You cannot defined instance variables to facilitate breaking up logic into multiple methods
*Your assumption that multi-threaded issues will not arise is almost certainly wrong
*You cannot easily mock them for testing
I really don't see that the omission of one line of code is buying you anything.
It's not really an "OO Design" issue, but more of an appropriateness. Why are you even using Java in such a procedural way? Surely PHP would be more appropriate to this kind of design (and actually save you time by not having to compile and deploy).
I would just make your business layer non-static; it will make it so much easier for to maintain, change, and evolve your application.
A: You may have difficulty unit-testing your objects with this type of architecture. For example, if you have a layer of business objects that reference your static data access layer, it could be difficult to test the business layer because you won't be able to easily use mock data access objects. That is, when testing your business layer, you probably won't want to use the "real" methods in the data access layer because they will make unwanted changes to your database. If your data access layer was not static, you could provide mock data access objects to your business layer for testing purposes.
A: I would think that you will have concurrency issues with all static methods with multiple users. The web layer will thread out concurrent users. Can all your static methods handle this? Perhaps, but won't they constantly be locked in queuing the requests in single file? I'm not sure, never tried your idea.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to get the file size from http headers I want to get the size of an http:/.../file before I download it. The file can be a webpage, image, or a media file. Can this be done with HTTP headers? How do I download just the file HTTP header?
A: Note that not every server accepts HTTP HEAD requests. One alternative approach to get the file size is to make an HTTP GET call to the server requesting only a portion of the file to keep the response small and retrieve the file size from the metadata that is returned as part of the response content header.
The standard System.Net.Http.HttpClient can be used to accomplish this. The partial content is requested by setting a byte range on the request message header as:
request.Headers.Range = new RangeHeaderValue(startByte, endByte)
The server responds with a message containing the requested range as well as the entire file size. This information is returned in the response content header (response.Content.Header) with the key "Content-Range".
Here's an example of the content range in the response message content header:
{
"Key": "Content-Range",
"Value": [
"bytes 0-15/2328372"
]
}
In this example the header value implies the response contains bytes 0 to 15 (i.e., 16 bytes total) and the file is 2,328,372 bytes in its entirety.
Here's a sample implementation of this method:
public static class HttpClientExtensions
{
public static async Task<long> GetContentSizeAsync(this System.Net.Http.HttpClient client, string url)
{
using (var request = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Get, url))
{
// In order to keep the response as small as possible, set the requested byte range to [0,0] (i.e., only the first byte)
request.Headers.Range = new System.Net.Http.Headers.RangeHeaderValue(from: 0, to: 0);
using (var response = await client.SendAsync(request))
{
response.EnsureSuccessStatusCode();
if (response.StatusCode != System.Net.HttpStatusCode.PartialContent)
throw new System.Net.WebException($"expected partial content response ({System.Net.HttpStatusCode.PartialContent}), instead received: {response.StatusCode}");
var contentRange = response.Content.Headers.GetValues(@"Content-Range").Single();
var lengthString = System.Text.RegularExpressions.Regex.Match(contentRange, @"(?<=^bytes\s[0-9]+\-[0-9]+/)[0-9]+$").Value;
return long.Parse(lengthString);
}
}
}
}
A:
Can this be done with HTTP headers?
Yes, this is the way to go. If the information is provided, it's in the header as the Content-Length. Note, however, that this is not necessarily the case.
Downloading only the header can be done using a HEAD request instead of GET. Maybe the following code helps:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://example.com/");
req.Method = "HEAD";
long len;
using(HttpWebResponse resp = (HttpWebResponse)(req.GetResponse()))
{
len = resp.ContentLength;
}
Notice the property for the content length on the HttpWebResponse object – no need to parse the Content-Length header manually.
A: Yes, assuming the HTTP server you're talking to supports/allows this:
public long GetFileSize(string url)
{
long result = -1;
System.Net.WebRequest req = System.Net.WebRequest.Create(url);
req.Method = "HEAD";
using (System.Net.WebResponse resp = req.GetResponse())
{
if (long.TryParse(resp.Headers.Get("Content-Length"), out long ContentLength))
{
result = ContentLength;
}
}
return result;
}
If using the HEAD method is not allowed, or the Content-Length header is not present in the server reply, the only way to determine the size of the content on the server is to download it. Since downloading the whole file just to learn its size is rarely practical, it's fortunate that most servers will include this information.
A: WebClient webClient = new WebClient();
webClient.OpenRead("http://stackoverflow.com/robots.txt");
long totalSizeBytes= Convert.ToInt64(webClient.ResponseHeaders["Content-Length"]);
Console.WriteLine((totalSizeBytes));
A: HttpClient client = new HttpClient(
    new HttpClientHandler() {
        Proxy = null, UseProxy = false
    } // avoids the proxy-detection delay when you don't use a proxy
);

public async Task<long?> GetContentSizeAsync(string url) {
    // ResponseHeadersRead returns as soon as the headers arrive,
    // so the body isn't buffered just to read its length
    using (HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
        return response.Content.Headers.ContentLength;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73"
}
|
Q: java sound without hardware device Anyone know if it is possible to write an app that uses the Java Sound API on a system that doesn't actually have a hardware sound device?
I have some code I've written based on the API that manipulates some audio and plays the result but I am now trying to run this in a server environment, where the audio will be recorded to a file instead of played to line out.
The server I'm running on has no sound card, and I seem to be running into roadblocks with Java Sound not being able to allocate any lines if there is not a Mixer that supports it. (And with no hardware devices I'm getting no Mixers.)
Any info would be much appreciated -
thanks.
A: For linux you can use OSS Virtual Mixer, which will give you virtual sound channels.
On windows there are a few sound drivers that do this, one is Virtual Audio Cable, which, while not free, is about the cost of a sound card so it shouldn't be a hardship.
If neither of those work for you, it'll probably be easier to make your own Java sound library and replace the built in functionality than it would be to implement a sound card for your OS.
-Adam
A: All Java needs is the drivers for a sound card. The JVM relies on the OS to handle direct hardware management; all the JVM needs is a way to tell the OS that it wants a sound played (thus the driver).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Parse HTML links using C# Is there a built in dll that will give me a list of links from a string. I want to send in a string with valid html and have it parse all the links. I seem to remember there being something built into either .net or an unmanaged library.
I found a couple open source projects that looked promising but I thought there was a built in module. If not I may have to use one of those. I just didn't want an external dependency at this point if it wasn't necessary.
A: I'm not aware of anything built in and from your question it's a little bit ambiguous what you're looking for exactly. Do you want the entire anchor tag, or just the URL from the href attribute?
If you have well-formed XHtml, you might be able to get away with using an XmlReader and an XPath query to find all the anchor tags (<a>) and then hit the href attribute for the address. Since that's unlikely, you're probably better off using RegEx to pull down what you want.
Using RegEx, you could do something like:
List<Uri> findUris(string message)
{
string anchorPattern = "<a[\\s]+[^>]*?href[\\s]?=[\\s\\\"\']+(?<href>.*?)[\\\"\\']+.*?>(?<fileName>[^<]+|.*?)?<\\/a>";
MatchCollection matches = Regex.Matches(message, anchorPattern, RegexOptions.IgnorePatternWhitespace | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.Compiled);
if (matches.Count > 0)
{
List<Uri> uris = new List<Uri>();
foreach (Match m in matches)
{
string url = m.Groups["href"].Value;
Uri testUri = null;
if (Uri.TryCreate(url, UriKind.RelativeOrAbsolute, out testUri))
{
uris.Add(testUri);
}
}
return uris;
}
return null;
}
Note that I'd want to check the href to make sure that the address actually makes sense as a valid Uri. You can eliminate that if you aren't actually going to be pursuing the link anywhere.
A: I don't think there is a built-in library, but the Html Agility Pack is popular for what you want to do.
The way to do this with the raw .NET framework and no external dependencies would be to use a regular expression to find all the 'a' tags in the string. You would need to take care of a lot of edge cases, e.g. href = "http://url" vs href=http://url, etc.
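If you do pull in the Agility Pack, extracting the hrefs is short (a sketch, assuming html holds the markup; SelectNodes returns null when nothing matches, hence the guard):
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);
var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
if (anchors != null)
{
    foreach (var a in anchors)
        Console.WriteLine(a.GetAttributeValue("href", string.Empty));
}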
A: SubSonic.Sugar.Web.ScrapeLinks seems to do part of what you want, however it grabs the html from a url, rather than from a string. You can check out their implementation here.
A: Google gives me this module: http://www.majestic12.co.uk/projects/html_parser.php
Seems to be an HTML parser for .NET.
A: A simple regular expression -
@"<a.*?>"
passed in to Regex.Matches should do what you need. That regex may need a tiny bit of tweaking, but it's pretty close I think.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: WPF transparent borders causes the UI to stop redrawing As a follow up to my previous question, I am wondering how to use transparent windows correctly. If I have set my window to use transparency, the UI will occasionally appear to stop responding. What is actually happening is that the UI simply is not updating as it should. Animations do not occur, pages do not appear to navigate; however, if you watch the debugger clicking on buttons, links, etc.. do actually work. Minimizing and restoring the window "catches up" the UI again and the user can continue working until the behavior comes back.
If I remove the transparent borders, the behavior does not occur. Am I doing something wrong or is there some other setting, code, etc... that I need to implement to work with transparent borders properly?
Here is my window declaration for the code that fails.
<Window x:Class="MyProject.MainContainer"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="WPF APplication" Height="600" Width="800"
xmlns:egc="ControlLibrary" Background="{x:Null}"
BorderThickness="0"
AllowsTransparency="True"
MinHeight="300" MinWidth="400" WindowStyle="None" >
And the code that does not exhibit the behavior
<Window x:Class="MyProject.MainContainer"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="WPF Application" Height="600" Width="800"
xmlns:egc="ControlLibrary" Background="{x:Null}"
BorderThickness="0"
AllowsTransparency="False"
MinHeight="300" MinWidth="400" WindowStyle="None" >
A: Are you using .NET 3.0, or .NET 3.5 on Windows XP SP2? If so, this is a known problem with the transparent window API that has been fixed in .NET 3.5 and SP3 of XP (and I think SP1 of Vista). Basically when you set the AllowsTransparency to True, the WPF pipeline has to render in software only mode. This will cause a significant degradation in performance on most systems.
Unfortunately, the only thing you can do to fix this is to upgrade to .NET 3.0 SP1 (included in .NET 3.5), and install the appropriate service pack for Windows. Note that the transparent windows are still slower, but not nearly as bad. You can find a more in-depth discussion here.
A: I think that I've finally found a workaround. From everything I've read this problem should not be occurring with XP SP3 & .NET 3.5 SP1, but it is.
The example from this blog post shows how to use the Win32 API functions to create an irregular shaped window, which is what I'm doing. After reworking my main window to use these techniques, things seem to be working as expected and the behavior has not returned.
It is also of note that the reason the author recommends this method is due to performance issues with WPF and transparent windows. While I believe it may be better in .NET 3.5 SP1 than it was, this was not that hard to implement and should perform better.
A: I am running on Windows XP Pro SP3 and using .NET 3.5 SP1. I have also verified that the project is targeting version 3.5 of the framework.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: element's z-index value can not overcome the iframe content's one I have a div and an iframe on the page
the div has
z-index: 0;
the iframe has its content with a popup having a z-index of 1000
z-index: 1000;
However, the div still overshadows the popup in IE (but works fine in Firefox).
Does anyone know what I can do?
A: Explorer Z-index bug
In general, http://www.quirksmode.org/ is an excellent reference for this sort of thing.
A: Which version of IE?
I'm no javascript guru, but I think hiding the div when the popup pops might accomplish what you need.
I've had to work with divs and iframes when creating a javascript menu that should show on top of dropdown boxes and listboxes -- other menu implementations just hide these items, whose default behavior in IE6 is to show on top of any DIV, no matter the z-index.
A: I face the same problem. The problem in my case is that the content in the iframe is not controlled by IE directly, but by Acrobat as it is a pdf file. You can try to show the iframe without the content, in which case the popup displays normally. For some reason IE is not able to control the z-index for external helpers.
It was tested with IE7
A: Without seeing your code, it's difficult to determine the problem. But it's worth noting that z-index only works when the element has been positioned (e.g. position: absolute;), so perhaps that could be an issue?
There's a good article on CSS Z-index from the Mozilla Developer Center.
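In other words, the popup element needs something like (the selector is a placeholder):
.popup {
    position: absolute; /* z-index is ignored on statically positioned elements */
    z-index: 1000;
}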
A: Without seeing a code snippet, it's hard to determine what the issue is. You may want to try appending an iframe under your popup that is the same size as your popup. With IE7 if you render the iframed popup after the other iframe has already loaded you should be able to cover up elements that are beneath. I believe some JS calendars and some lightbox/thickbox code does this if you are looking for examples.
A: Never set your z-index to anything below 1 unless you want to hide it. I'm not sure about 7.0, but older versions of IE have given me issues with that. IE doesn't like z-index that much. Also check your positioning; positioning may be your issue. Sorry, I don't have enough info to help you further.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Why does strcpy trigger a segmentation fault with global variables? So I've got some C code:
#include <stdio.h>
#include <string.h>
/* putting one of the "char*"s here causes a segfault */
void main() {
char* path = "/temp";
char* temp;
strcpy(temp, path);
}
This compiles, runs, and appears to behave correctly. However, if one or both of the character pointers is declared as a global variable, strcpy results in a segmentation fault. Why does this happen? Evidently there's an error in my understanding of scope.
A: The temp variable doesn't point to any storage (memory) and it is uninitialized.
if temp is declared as char temp[32]; then the code would work no matter where it is declared. However, there are other problems with declaring temp with a fixed size like that, but that is a question for another day.
Now, why does it crash when declared globally and not locally? Luck...
When declared locally, the value of temp is coming from what ever value might be on the stack at that time. It is luck that it points to an address that doesn't cause a crash. However, it is trashing memory used by someone else.
When declared globally, on most processors these variables will be stored in data segments that will use demand zero pages. Thus char *temp appears as if it was declared char *temp=0.
A: You forgot to allocate and initialize temp:
temp = (char *)malloc(TEMP_SIZE);
Just make sure TEMP_SIZE is big enough. You can also calculate this at run-time, then make sure the size is enough (it should be at least strlen(path) + 1, to leave room for the terminating '\0')
A: As mentioned above, you forgot to allocate space for temp.
I prefer strdup to malloc+strcpy. It does what you want to do.
A: No - this doesn't work regardless of the variables - it just looks like it did because you got (un)lucky. You need to allocate space to store the contents of the string, rather than leave the variable uninitialised.
Uninitialised variables on the stack are going to be pointing at pretty much random locations of memory. If these addresses happen to be valid, your code will trample all over whatever was there, but you won't get an error (but may get nasty memory corruption related bugs elsewhere in your code).
Globals consistently fail because they usually get set to specific patterns that point to unmapped memory. Attempting to dereference these gives you an segfault immediately (which is better - leaving it to later makes the bug very hard to track down).
A: I'd like to rewrite first Adam's fragment as
// Make temp a static array of 256 chars
char temp[256];
strncpy(temp, sizeof(temp), path);
temp[sizeof(temp)-1] = '\0';
That way you:
1. don't have magic numbers laced through the code, and
2. you guarantee that your string is null terminated.
The second point comes at the cost of losing the tail of your source string if it is >= 256 characters long.
A: As other posters mentioned, the root of the problem is that temp is uninitialized. When declared as an automatic variable on the stack it will contain whatever garbage happens to be in that memory location. Apparently for the compiler+CPU+OS you are running, the garbage at that location is a valid pointer. The strcpy "succeeds" in that it does not segfault, but really it copied a string to some arbitrary location elsewhere in memory. This kind of memory corruption problem strikes fear into the hearts of C programmers everywhere as it is extraordinarily difficult to debug.
When you move the temp variable declaration to global scope, it is placed in the BSS section and automatically zeroed. Attempts to dereference *temp then result in a segfault.
When you move *path to global scope, then *temp moves up one location on the stack. The garbage at that location is apparently not a valid pointer, and so dereferencing *temp results in a segfault.
A: The important part to note:
destination string dest must be large enough to receive the copy.
In your situation temp has no memory allocated to copy into.
Copied from the man page of strcpy:
DESCRIPTION
The strcpy() function copies the string pointed to by src (including
the terminating '\0' character) to the array pointed to by dest. The
strings may not overlap, and the destination string dest must be large
enough to receive the copy.
A: You're invoking undefined behavior, since you're not initializing the temp variable. It points to a random location in memory, so your program may work, but most likely it will segfault. You need to have your destination string be an array, or have it point to dynamic memory:
// Make temp a static array of 256 chars
char temp[256];
strncpy(temp, path, 256);
// Or, use dynamic memory
char *temp = (char *)malloc(256);
strncpy(temp, path, 256);
Also, use strncpy() instead of strcpy() to avoid buffer overruns.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: .NET: How to have background thread signal main thread data is available? What is the proper technique to have ThreadA signal ThreadB of some event, without having ThreadB sit blocked waiting for an event to happen?
i have a background thread that will be filling a shared List<T>. i'm trying to find a way to asynchronously signal the "main" thread that there is data available to be picked up.
i considered setting an event with an EventWaitHandle object, but i can't have my main thread sitting at an Event.WaitOne().
i considered having a delegate callback, but
a) i don't want the main thread doing work in the delegate: the thread needs to get back to work adding more stuff - i don't want it waiting while the delegate executes, and
b) the delegate needs to be marshalled onto the main thread, but i'm not running a UI, i have no Control to .Invoke the delegate against.
i considered have a delegate callback that simply starts a zero interval System.Windows.Forms.Timer (with thread access to the timer synchronized). This way the thread only needs to be stuck as it calls
Timer.Enabled = true;
but that seems like a hack.
In the olden days my object would have created a hidden window and had the thread post messages to that hidden windows' HWND. i considered creating a hidden control, but i gather that you cannot .Invoke on a control with no handle created. Plus, i have no UI: my object could have been created on a web-server, service, or console, i don't want there to be a graphical control appearing - nor do i want to compile a dependency on System.Windows.Forms.
i considered having my object expose an ISynchronizeInvoke interface, but then i would need to implement .Invoke(), and that's my problem.
What is the proper technique to have thread A signal thread B of some event, without having thread B sit blocked waiting for an event to happen?
A: I'm combining a few responses here.
The ideal situation uses a thread-safe flag such as an AutoResetEvent. You don't have to block indefinitely when you call WaitOne(), in fact it has an overload that allows you to specify a timeout. This overload returns false if the flag was not set during the interval.
A Queue is a more ideal structure for a producer/consumer relationship, but you can mimic it if your requirements are forcing you to use a List. The major difference is you're going to have to ensure your consumer locks access to the collection while it's extracting items; the safest thing is to probably use the CopyTo method to copy all elements to an array, then release the lock. Of course, ensure your producer won't try to update the List while the lock is held.
Here's a simple C# console application that demonstrates how this might be implemented. If you play around with the timing intervals you can cause various things to happen; in this particular configuration I was trying to have the producer generate multiple items before the consumer checks for items.
using System;
using System.Collections.Generic;
using System.Threading;
namespace ConsoleApplication1
{
class Program
{
private static object LockObject = new Object();
private static AutoResetEvent _flag;
private static Queue<int> _list;
static void Main(string[] args)
{
_list = new Queue<int>();
_flag = new AutoResetEvent(false);
ThreadPool.QueueUserWorkItem(ProducerThread);
int itemCount = 0;
while (itemCount < 10)
{
if (_flag.WaitOne(0))
{
// there was an item
lock (LockObject)
{
Console.WriteLine("Items in queue:");
while (_list.Count > 0)
{
Console.WriteLine("Found item {0}.", _list.Dequeue());
itemCount++;
}
}
}
else
{
Console.WriteLine("No items in queue.");
Thread.Sleep(125);
}
}
}
private static void ProducerThread(object state)
{
Random rng = new Random();
Thread.Sleep(250);
for (int i = 0; i < 10; i++)
{
lock (LockObject)
{
_list.Enqueue(rng.Next(0, 100));
_flag.Set();
}
// sleep outside the lock so the consumer isn't blocked while the producer waits
Thread.Sleep(rng.Next(0, 250));
}
}
}
}
If you don't want to block the producer at all, it's a little more tricky. In this case, I'd suggest making the producer its own class with both a private and a public buffer and a public AutoResetEvent. The producer will by default store items in the private buffer, then try to write them to the public buffer. When the consumer is working with the public buffer, it resets the flag on the producer object. Before the producer tries to move items from the private buffer to the public buffer, it checks this flag and only copies items when the consumer isn't working on it.
A: Here's a code sample for the System.ComponentModel.BackgroundWorker class.
private static BackgroundWorker worker = new BackgroundWorker();
static void Main(string[] args)
{
worker.DoWork += worker_DoWork;
worker.RunWorkerCompleted += worker_RunWorkerCompleted;
worker.ProgressChanged += worker_ProgressChanged;
worker.WorkerReportsProgress = true;
Console.WriteLine("Starting application.");
worker.RunWorkerAsync();
Console.ReadKey();
}
static void worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
Console.WriteLine("Progress.");
}
static void worker_DoWork(object sender, DoWorkEventArgs e)
{
Console.WriteLine("Starting doing some work now.");
for (int i = 0; i < 5; i++)
{
Thread.Sleep(1000);
worker.ReportProgress(i);
}
}
static void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
Console.WriteLine("Done now.");
}
A: You can use a BackgroundWorker to start the second thread and use the ProgressChanged event to notify the other thread that data is ready. Other events are available as well. This MSDN article should get you started.
A: There are many ways to do this, depending upon exactly what you want to do. A producer/consumer queue is probably what you want. For an excellent in-depth look into threads, see the chapter on Threading (available online) from the excellent book C# 3.0 in a Nutshell.
A: You can use an AutoResetEvent (or ManualResetEvent). If you use AutoResetEvent.WaitOne(0, false), it will not block. For example:
AutoResetEvent ev = new AutoResetEvent(false);
...
if(ev.WaitOne(0, false)) {
// event happened
}
else {
// do other stuff
}
A: The BackgroundWorker class is answer in this case. It is the only threading construct that is able to asynchronously send messages to the thread that created the BackgroundWorker object. Internally BackgroundWorker uses the AsyncOperation class by calling the asyncOperation.Post() method.
this.asyncOperation = AsyncOperationManager.CreateOperation(null);
this.asyncOperation.Post(delegateMethod, arg);
A few other classes in the .NET framework also use AsyncOperation:
*
*BackgroundWorker
*SoundPlayer.LoadAsync()
*SmtpClient.SendAsync()
*Ping.SendAsync()
*WebClient.DownloadDataAsync()
*WebClient.DownloadFile()
*WebClient.DownloadFileAsync()
*WebClient...
*PictureBox.LoadAsync()
A: If your "main" thread is the Windows message pump (GUI) thread, then you can poll using a Forms.Timer - tune the timer interval according to how quickly you need to have your GUI thread 'notice' the data from the worker thread.
Remember to synchronize access to the shared List<> if you are going to use foreach, to avoid CollectionModified exceptions.
I use this technique for all the market-data-driven GUI updates in a real-time trading application, and it works very well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Why can't I seem to grasp interfaces? Could someone please demystify interfaces for me or point me to some good examples? I keep seeing interfaces popup here and there, but I haven't ever really been exposed to good explanations of interfaces or when to use them.
I am talking about interfaces in a context of interfaces vs. abstract classes.
A: Interfaces allow you to program against a "description" instead of a type, which allows you to more-loosely associate elements of your software.
Think of it this way: You want to share data with someone in the cube next to you, so you pull out your flash stick and copy/paste. You walk next door and the guy says "is that USB?" and you say yes - all set. It doesn't matter the size of the flash stick, nor the maker - all that matters is that it's USB.
In the same way, interfaces allow you to generalize your development. Using another analogy - imagine you wanted to create an application that virtually painted cars. You might have a signature like this:
public void Paint(Car car, System.Drawing.Color color)...
This would work until your client said "now I want to paint trucks" so you could do this:
public void Paint (Vehicle vehicle, System.Drawing.Color color)...
this would broaden your app... until your client said "now I want to paint houses!" What you could have done from the very beginning is created an interface:
public interface IPaintable{
void Paint(System.Drawing.Color color);
}
...and passed that to your routine:
public void Paint(IPaintable item, System.Drawing.Color color){
item.Paint(color);
}
Hopefully this makes sense - it's a pretty simplistic explanation but hopefully gets to the heart of it.
A: This is a rather "long" subject, but let me try to put it simple.
An interface is -as "they name it"- a Contract. But forget about that word.
The best way to understand them is through some sort of pseudo-code example. That's how I understood them long time ago.
Suppose you have an app that processes Messages. A Message contains some stuff, like a subject, a text, etc.
So you write your MessageController to read a database, and extract messages. It's very nice until you suddenly hear that Faxes will be also implemented soon. So you will now have to read "Faxes" and process them as messages!
This could easily turn into spaghetti code. So instead of having a MessageController that controls "Messages" only, you make it able to work with an interface called IMessage (the I is just common usage, not required).
Your IMessage interface, contains some basic data you need to make sure that you're able to process the Message as such.
So when you create your EMail, Fax, PhoneCall classes, you make them Implement the Interface called IMessage.
So in your MessageController, you can have a method called like this:
private void ProcessMessage(IMessage oneMessage)
{
DoSomething();
}
If you had not used Interfaces, you'd have to have:
private void ProcessEmail(Email someEmail);
private void ProcessFax(Fax someFax);
etc.
So, by using a common interface, you just made sure that the ProcessMessage method will be able to work with it, no matter if it was a Fax, an Email a PhoneCall, etc.
Why or how?
Because the interface is a contract that specifies some things you must adhere (or implement) in order to be able to use it. Think of it as a badge. If your object "Fax" doesn't have the IMessage interface, then your ProcessMessage method wouldn't be able to work with that, it will give you an invalid type, because you're passing a Fax to a method that expects an IMessage object.
Do you see the point?
Think of the interface as a "subset" of methods and properties that you will have available, despite the real object type. If the original object (Fax, Email, PhoneCall, etc.) implements that interface, you can safely pass it across methods that need that interface.
There's more magic hidden in there, you can CAST the interfaces back to their original objects:
Fax myFax = (Fax)SomeIMessageThatIReceive;
The ArrayList() in .NET 1.1 had a nice interface called IList. If you had an IList (very "generic") you could transform it into an ArrayList:
ArrayList ar = (ArrayList)SomeIList;
And there are thousands of samples out there in the wild.
Interfaces like ISortable, IComparable, etc., define the methods and properties you must implement in your class in order to achieve that functionality.
To expand our sample, you could have a List<> of Emails, Fax, PhoneCall, all in the same List, if the Type is IMessage, but you couldn't have them all together if the objects were simply Email, Fax, etc.
If you wanted to sort (or enumerate for example) your objects, you'd need them to implement the corresponding interface. In the .NET sample, if you have a list of "Fax" objects and want to be able to sort them by using MyList.Sort(), you need to make your fax class like this:
public class Fax : ISortable
{
    //implement the ISortable stuff here.
}
I hope this gives you a hint. Other users will possibly post other good examples. Good luck, and embrace the power of interfaces!
Warning: not everything is good about interfaces; there are some issues with them, and OOP purists will start a war on this. I shall remain aside. One drawback of an interface (in .NET 2.0 at least) is that you cannot have PRIVATE or protected members; everything must be public. This makes some sense, but sometimes you wish you could simply declare stuff as private or protected.
A: In addition to the function interfaces have within programming languages, they also are a powerful semantic tool when expressing design ideas to other people.
A code base with well-designed interfaces is suddenly a lot easier to discuss. "Yes, you need a CredentialsManager to register new remote servers." "Pass a PropertyMap to ThingFactory to get a working instance."
Ability to address a complex thing with a single word is pretty useful.
A: Interfaces establish a contract between a class and the code that calls it. They also allow you to have similar classes that implement the same interface but do different actions or events and not have to know which you are actually working with. This might make more sense as an example so let me try one here.
Say you have a couple classes called Dog, Cat, and Mouse. Each of these classes is a Pet and in theory you could inherit them all from another class called Pet but here's the problem. Pets in and of themselves don't do anything. You can't go to the store and buy a pet. You can go and buy a dog or a cat but a pet is an abstract concept and not concrete.
So You know pets can do certain things. They can sleep, or eat, etc. So you define an interface called IPet and it looks something like this (C# syntax)
public interface IPet
{
void Eat(object food);
void Sleep(int duration);
}
Each of your Dog, Cat, and Mouse classes implement IPet.
public class Dog : IPet
So now each of those classes has to have its own implementation of Eat and Sleep. Yay, you have a contract... Now what's the point?
Next let's say you want to make a new object called PetStore. And this isn't a very good PetStore, so they basically just sell you a random pet (yes, I know this is a contrived example).
public class PetStore
{
public static IPet GetRandomPet()
{
//Code to return a random Dog, Cat, or Mouse
}
}
IPet myNewRandomPet = PetStore.GetRandomPet();
myNewRandomPet.Sleep(10);
The problem is you don't know what type of pet it will be. Thanks to the interface though you know whatever it is it will Eat and Sleep.
So this answer may not have been helpful at all but the general idea is that interfaces let you do neat stuff like Dependency Injection and Inversion of Control where you can get an object, have a well defined list of stuff that object can do without ever REALLY knowing what the concrete type of that object is.
A: Interfaces let you code against objects in a generic way. For instance, say you have a method that sends out reports. Now say you have a new requirement that comes in where you need to write a new report. It would be nice if you could reuse the method you already had written right? Interfaces makes that easy:
interface IReport
{
string RenderReport();
}
class MyNewReport : IReport
{
public string RenderReport()
{
return "Hello World Report!";
}
}
class AnotherReport : IReport
{
public string RenderReport()
{
return "Another Report!";
}
}
//This class can process any report that implements IReport!
class ReportEmailer
{
public void EmailReport(IReport report)
{
Email(report.RenderReport());
}
}
class MyApp
{
void Main()
{
//create specific "MyNewReport" report using interface
IReport newReport = new MyNewReport();
//create specific "AnotherReport" report using interface
IReport anotherReport = new AnotherReport();
ReportEmailer reportEmailer = new ReportEmailer();
//emailer expects interface
reportEmailer.EmailReport(newReport);
reportEmailer.EmailReport(anotherReport);
}
}
A: Interfaces are also key to polymorphism, one of the "THREE PILLARS OF OOD".
Some people touched on it above, polymorphism just means a given class can take on different "forms". Meaning, if we have two classes, "Dog" and "Cat" and both implement the interface "INeedFreshFoodAndWater" (hehe) - your code can do something like this (pseudocode):
INeedFreshFoodAndWater[] array = new INeedFreshFoodAndWater[];
array.Add(new Dog());
array.Add(new Cat());
foreach(INeedFreshFoodAndWater item in array)
{
item.Feed();
item.Water();
}
This is powerful because it allows you to treat different classes of objects abstractly, and allows you to do things like make your objects more loosely coupled, etc.
A: The easiest answer is that interfaces define what your class can do. It's a "contract" that says that your class will be able to do that action.
Public Interface IRollOver
Sub RollOver()
End Interface
Public Class Dog
    Implements IRollOver
Public Sub RollOver() Implements IRollOver.RollOver
Console.WriteLine("Rolling Over!")
End Sub
End Class
Public Sub Main()
Dim d as New Dog()
Dim ro as IRollOver = TryCast(d, IRollOver)
If ro isNot Nothing Then
ro.RollOver()
End If
End Sub
Basically, you are guaranteeing that the Dog class always has the ability to roll over as long as it continues to implement that Interface. Should cats ever gain the ability to RollOver(), they too could implement that interface, and you can treat both Dogs and Cats homogeneously when asking them to RollOver().
A: OK, so it's about abstract classes vs. interfaces...
Conceptually, abstract classes are there to be used as base classes. Quite often they themselves already provide some basic functionality, and the subclasses have to provide their own implementation of the abstract methods (those are the methods which aren't implemented in the abstract base class).
Interfaces are mostly used for decoupling the client code from the details of a particular implementation. Also, sometimes the ability to switch the implementation without changing the client code makes the client code more generic.
On the technical level, it's harder to draw the line between abstract classes and interfaces, because in some languages (e.g., C++), there's no syntactic difference, or because you could also use abstract classes for the purposes of decoupling or generalization. Using an abstract class as an interface is possible because every base class, by definition, defines an interface that all of its subclasses should honor (i.e., it should be possible to use a subclass instead of a base class).
A: Interfaces are a way to enforce that an object implements a certain amount of functionality, without having to use inheritance (which leads to strongly coupled code, instead of loosely coupled which can be achieved through using interfaces).
Interfaces describe the functionality, not the implementation.
A: Most of the interfaces you come across are a collection of method and property signatures. Anyone who implements an interface must provide definitions for whatever is in the interface.
A: When you drive a friend's car, you more or less know how to do that. This is because conventional cars all have a very similar interface: steering wheel, pedals, and so forth. Think of this interface as a contract between car manufacturers and drivers. As a driver (the user/client of the interface in software terms), you don't need to learn the particulars of different cars to be able to drive them: e.g., all you need to know is that turning the steering wheel makes the car turn. As a car manufacturer (the provider of an implementation of the interface in software terms) you have a clear idea what your new car should have and how it should behave so that drivers can use them without much extra training. This contract is what people in software design refer to as decoupling (the user from the provider) -- the client code is in terms of using an interface rather than a particular implementation thereof and hence doesn't need to know the details of the objects implementing the interface.
A: Interfaces are a mechanism to reduce coupling between different, possibly disparate parts of a system.
From a .NET perspective
*
*The interface definition is a list of operations and/or properties.
*Interface methods are always public.
*The interface itself doesn't have to be public.
When you create a class that implements the interface, you must provide either an explicit or implicit implementation of all methods and properties defined by the interface.
Further, .NET has only single inheritance, and interfaces are a necessity for an object to expose methods to other objects that aren't aware of, or lie outside of its class hierarchy. This is also known as exposing behaviors.
An example that's a little more concrete:
Consider: we have many DTOs (data transfer objects) that have properties for who last updated them, and when that was. The problem is that not all the DTOs have this property, because it's not always relevant.
At the same time we desire a generic mechanism to guarantee these properties are set if available when submitted to the workflow, but the workflow object should be loosely coupled from the submitted objects. i.e. the submit workflow method shouldn't really know about all the subtleties of each object, and all objects in the workflow aren't necessarily DTO objects.
// First pass - not maintainable
void SubmitToWorkflow(object o, User u)
{
if (o is StreetMap)
{
var map = (StreetMap)o;
map.LastUpdated = DateTime.UtcNow;
map.UpdatedByUser = u.UserID;
}
else if (o is Person)
{
var person = (Person)o;
person.LastUpdated = DateTime.Now; // Whoops .. should be UtcNow
person.UpdatedByUser = u.UserID;
}
// Whoa - very unmaintainable.
In the code above, SubmitToWorkflow() must know about each and every object. Additionally, the code is a mess with one massive if/else/switch, violates the don't repeat yourself (DRY) principle, and requires developers to remember copy/paste changes every time a new object is added to the system.
// Second pass - brittle
void SubmitToWorkflow(object o, User u)
{
if (o is DTOBase)
{
DTOBase dto = (DTOBase)o;
dto.LastUpdated = DateTime.UtcNow;
dto.UpdatedByUser = u.UserID;
}
It is slightly better, but it is still brittle. If we want to submit other types of objects, we still need more case statements, etc.
// Third pass - also brittle
void SubmitToWorkflow(DTOBase dto, User u)
{
dto.LastUpdated = DateTime.UtcNow;
dto.UpdatedByUser = u.UserID;
It is still brittle, and both methods impose the constraint that all the DTOs have to implement this property which we indicated wasn't universally applicable. Some developers might be tempted to write do-nothing methods, but that smells bad. We don't want classes pretending they support update tracking but don't.
Interfaces, how can they help?
If we define a very simple interface:
public interface IUpdateTracked
{
DateTime LastUpdated { get; set; }
int UpdatedByUser { get; set; }
}
Any class that needs this automatic update tracking can implement the interface.
public class SomeDTO : IUpdateTracked
{
// IUpdateTracked implementation as well as other methods for SomeDTO
}
The workflow method can be made to be a lot more generic, smaller, and more maintainable, and it will continue to work no matter how many classes implement the interface (DTOs or otherwise) because it only deals with the interface.
void SubmitToWorkflow(object o, User u)
{
IUpdateTracked updateTracked = o as IUpdateTracked;
if (updateTracked != null)
{
updateTracked.LastUpdated = DateTime.UtcNow;
updateTracked.UpdatedByUser = u.UserID;
}
// ...
*
*We can note that the variation void SubmitToWorkflow(IUpdateTracked updateTracked, User u) would guarantee type safety; however, it doesn't seem as relevant in these circumstances.
In some production code we use, we have code generation to create these DTO classes from the database definition. The only thing the developer has to do is name the fields correctly and decorate the class with the interface. As long as the properties are called LastUpdated and UpdatedByUser, it just works.
Maybe you're asking What happens if my database is legacy and that's not possible? You just have to do a little more typing; another great feature of interfaces is they can allow you to create a bridge between the classes.
In the code below we have a fictitious LegacyDTO, a pre-existing object having similarly-named fields. It's implementing the IUpdateTracked interface to bridge the existing, but differently named properties.
// Using an interface to bridge properties
public class LegacyDTO : IUpdateTracked
{
public int LegacyUserID { get; set; }
public DateTime LastSaved { get; set; }
public int UpdatedByUser
{
get { return LegacyUserID; }
set { LegacyUserID = value; }
}
public DateTime LastUpdated
{
get { return LastSaved; }
set { LastSaved = value; }
}
}
You might think Cool, but isn't it confusing having multiple properties? or What happens if there are already those properties, but they mean something else? .NET gives you the ability to explicitly implement the interface.
What this means is that the IUpdateTracked properties will only be visible when we're using a reference to IUpdateTracked. Note how there is no public modifier on the declaration, and the declaration includes the interface name.
// Explicit implementation of an interface
public class YetAnotherObject : IUpdateTracked
{
    int IUpdateTracked.UpdatedByUser
    { ... }

    DateTime IUpdateTracked.LastUpdated
    { ... }
}
Having so much flexibility to define how the class implements the interface gives the developer a lot of freedom to decouple the object from methods that consume it. Interfaces are a great way to break coupling.
There is a lot more to interfaces than just this. This is just a simplified real-life example that utilizes one aspect of interface based programming.
As I mentioned earlier, and by other responders, you can create methods that take and/or return interface references rather than a specific class reference. If I needed to find duplicates in a list, I could write a method that takes and returns an IList (an interface defining operations that work on lists) and I'm not constrained to a concrete collection class.
// Decouples the caller and the code as both
// operate only on IList, and are free to swap
// out the concrete collection.
public IList<T> FindDuplicates<T>(IList<T> list)
{
    var duplicates = new List<T>();
    // TODO - write some code to detect duplicate items
    return duplicates;
}
Versioning caveat
If it's a public interface, you're declaring I guarantee interface x looks like this! And once you have shipped code and published the interface, you should never change it. As soon as consumer code starts to rely on that interface, you don't want to break their code in the field.
See this Haacked post for a good discussion.
Interfaces versus abstract (base) classes
Abstract classes can provide implementation whereas Interfaces cannot. Abstract classes are in some ways more flexible in the versioning aspect if you follow some guidelines like the NVPI (Non-Virtual Public Interface) pattern.
It's worth reiterating that in .NET, a class can only inherit from a single class, but a class can implement as many interfaces as it likes.
Dependency Injection
The quick summary of interfaces and dependency injection (DI) is that the use of interfaces enables developers to write code that is programmed against an interface to provide services. In practice you can end up with a lot of small interfaces and small classes, and one idea is that small classes that do one thing and only one thing are much easier to code and maintain.
class AnnualRaiseAdjuster
: ISalaryAdjuster
{
AnnualRaiseAdjuster(IPayGradeDetermination payGradeDetermination) { ... }
void AdjustSalary(Staff s)
{
var payGrade = payGradeDetermination.Determine(s);
s.Salary = s.Salary * 1.01 + payGrade.Bonus;
}
}
In brief, the benefit shown in the above snippet is that the pay grade determination is just injected into the annual raise adjuster. How pay grade is determined doesn't actually matter to this class. When testing, the developer can mock pay grade determination results to ensure the salary adjuster functions as desired. The tests are also fast because the test is only testing the class, and not everything else.
This isn't a DI primer though as there are whole books devoted to the subject; the above example is very simplified.
A: Simply put: an interface is a class with methods defined but no implementation in them. In contrast, an abstract class has some of the methods implemented, but not all.
A: Think of an interface as a contract. When a class implements an interface, it is essentially agreeing to honor the terms of that contract. As a consumer, you only care that the objects you have can perform their contractual duties. Their inner workings and details aren't important.
A: Here is a db-related example I often use. Let us say you have an object and a container object like a list. Let us assume that sometimes you might want to store the objects in a particular sequence. Assume that the sequence is not related to the position in the array, but instead that the objects are a subset of a larger set of objects, and the sequence position is related to the database SQL filtering.
To keep track of your customized sequence positions you could make your object implement a custom interface. The custom interface could mediate the organizational effort required to maintain such sequences.
For example, the sequence you are interested in has nothing to do with primary keys in the records. With the object implementing the interface you could say myObject.next() or myObject.prev().
A: Assuming you're referring to interfaces in statically-typed object-oriented languages, the primary use is in asserting that your class follows a particular contract or protocol.
Say you have:
public interface ICommand
{
void Execute();
}
public class PrintSomething : ICommand
{
    public TextWriter Writer { get; set; }
    public string Content { get; set; }

    public void Execute()
    {
        Writer.Write(Content);
    }
}
Now you have a substitutable command structure. Any instance of a class that implements ICommand can be stored in a list of some sort, say something that implements IEnumerable, and you can loop through that and execute each one, knowing that each object will Just Do The Right Thing. You can create a composite command by implementing CompositeCommand, which will have its own list of commands to run (see the sketch below), or a LoopingCommand to run a set of commands repeatedly; then you'll have most of a simple interpreter.
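For illustration, here is a hedged sketch of what that CompositeCommand might look like, building on the ICommand interface above (the class itself is invented for the example):
using System.Collections.Generic;

// Sketch only: a composite that executes its children in order.
public class CompositeCommand : ICommand
{
    private readonly List<ICommand> commands = new List<ICommand>();

    public void Add(ICommand command)
    {
        commands.Add(command);
    }

    public void Execute()
    {
        // Each child Just Does The Right Thing, whatever its concrete type.
        foreach (ICommand command in commands)
            command.Execute();
    }
}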
When you can reduce a set of objects to a behavior that they all have in common, you might have cause to extract an interface. Also, sometimes you can use interfaces to prevent objects from accidentally intruding on the concerns of that class; for example, you may implement an interface that only allows clients to retrieve, rather than change data in your object, and have most objects receive only a reference to the retrieval interface.
Interfaces work best when your interfaces are relatively simple and make few assumptions.
Look up the Liskov substitution principle to make more sense of this.
Some statically-typed languages like C++ don't support interfaces as a first-class concept, so you create interfaces using pure abstract classes.
Update
Since you seem to be asking about abstract classes vs. interfaces, here's my preferred oversimplification:
*
*Interfaces define capabilities and features.
*Abstract classes define core functionality.
Typically, I do an extract interface refactoring before I build an abstract class. I'm more likely to build an abstract class if I think there should be a creational contract (specifically, that a specific type of constructor should always be supported by subclasses). However, I rarely use "pure" abstract classes in C#/java. I'm far more likely to implement a class with at least one method containing meaningful behavior, and use abstract methods to support template methods called by that method. Then the abstract class is a base implementation of a behavior, which all concrete subclasses can take advantage of without having to reimplement.
A: Simple answer: An interface is a bunch of method signatures (+ return type). When an object says it implements an interface, you know it exposes that set of methods.
A: One good reason for using an interface vs. an abstract class in Java is that a subclass cannot extend multiple base classes, but it CAN implement multiple interfaces.
A: Java does not allow multiple inheritance (for very good reasons - look up the dreaded diamond problem), but what if you want your class to supply several sets of behavior? Say you want anyone who uses it to know it can be serialized, and also that it can paint itself on the screen. The answer is to implement two different interfaces.
Because interfaces contain no implementation of their own and no instance members it is safe to implement several of them in the same class with no ambiguities.
The downside is that you will have to have the implementation in each class separately. So if your hierarchy is simple and there are parts of the implementation that should be the same for all the inheriting classes, use an abstract class.
A: Like others have said here, interfaces define a contract (how the classes who use the interface will "look") and abstract classes define shared functionality.
Let's see if the code helps:
public interface IReport
{
void RenderReport(); // This just defines the method prototype
}
public abstract class Reporter
{
public void DoSomething()
{
// This method is the same for every class that inherits from this class
}
}
public class ReportViolators : Reporter, IReport
{
public void RenderReport()
{
// Some kind of implementation specific to this class
}
}
public class ClientApp
{
    public void Run()
    {
        var violatorsReport = new ReportViolators();

        // The interface method
        violatorsReport.RenderReport();

        // The abstract class method
        violatorsReport.DoSomething();
    }
}
A: Interfaces are a way to implement conventions in a way that is still strongly typed and polymorphic.
A good real world example would be IDisposable in .NET. Implementing the IDisposable interface forces a class to provide a Dispose() method. If the class doesn't implement Dispose() you'll get a compiler error when trying to build. Additionally, this code pattern:
using (DisposableClass myClass = new DisposableClass())
{
// code goes here
}
Will cause myClass.Dispose() to be executed automatically when execution exits the inner block.
However, and this is important, there is no enforcement as to what your Dispose() method should do. You could have your Dispose() method pick random recipes from a file and email them to a distribution list; the compiler doesn't care. The intent of the IDisposable pattern is to make cleaning up resources easier. If instances of a class will hold onto file handles, then IDisposable makes it very easy to centralize the deallocation and cleanup code in one spot and to promote a style of use which ensures that deallocation always occurs.
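As a hedged illustration of that convention (FileLogger and its log file are made-up names, not part of the framework):
using System;
using System.IO;

// Sketch only: the compiler enforces that Dispose() exists; the IDisposable
// convention - not the compiler - says it should release the file handle.
public class FileLogger : IDisposable
{
    private readonly StreamWriter writer = new StreamWriter("log.txt");

    public void Log(string message)
    {
        writer.WriteLine(message);
    }

    public void Dispose()
    {
        writer.Dispose(); // centralized cleanup, triggered by using(...)
    }
}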
And that's the key to interfaces. They are a way to streamline programming conventions and design patterns. Which, when used correctly, promotes simpler, self-documenting code which is easier to use, easier to maintain, and more correct.
A: I have had the same problem as you and I find the "contract" explanation a bit confusing.
If you specify that a method takes an IEnumerable interface as an in-parameter you could say that this is a contract specifying that the parameter must be of a type that inherits from the IEnumerable interface and hence supports all methods specified in the IEnumerable interface. The same would be true though if we used an abstract class or a normal class. Any object that inherits from those classes would be ok to pass in as a parameter. You would in any case be able to say that the inherited object supports all public methods in the base class whether the base class is a normal class, an abstract class or an interface.
An abstract class with all abstract methods is basically the same as an interface so you could say an interface is simply a class without implemented methods. You could actually drop interfaces from the language and just use abstract class with only abstract methods instead. I think the reason we separate them is for semantic reasons but for coding reasons I don't see the reason and find it just confusing.
Another suggestion could be to rename the interface to interface class as the interface is just another variation of a class.
In certain languages there are subtle differences that allow a class to inherit only one class but multiple interfaces, while in others you could have many of both; but that is another issue and not directly related, I think.
A: The simplest way to understand interfaces is to start by considering what class inheritance means. It includes two aspects:
*
*Members of a derived class can use public or protected members of a base class as their own.
*Members of a derived class can be used by code which expects a member of the base class (meaning they are substitutable).
Both of these features are useful, but because it is difficult to allow a class to use members of more than one class as its own, many languages and frameworks only allow classes to inherit from a single base class. On the other hand, there is no particular difficulty with having a class be substitutable for multiple other unrelated things.
Further, because the first benefit of inheritance can be largely achieved via encapsulation, the relative benefit from allowing multiple-inheritance of the first type is somewhat limited. On the other hand, being able to substitute an object for multiple unrelated types of things is a useful ability which cannot be readily achieved without language support.
Interfaces provide a means by which a language/framework can allow programs to benefit from the second aspect of inheritance for multiple base types, without requiring it to also provide the first.
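A small sketch of that second aspect - one class substitutable for two unrelated abstractions (all of the names below are invented for the example):
using System.IO;

public interface ISaveable { void Save(Stream destination); }
public interface IPaintable { void Paint(); }

// Widget can be handed to code expecting either abstraction, without any
// common base class tying those two abstractions together.
public class Widget : ISaveable, IPaintable
{
    public void Save(Stream destination) { /* write state to the stream */ }
    public void Paint() { /* draw the widget on screen */ }
}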
A: An interface is like a fully abstract class - that is, an abstract class with only abstract members. You can also implement multiple interfaces; it's like inheriting from multiple fully abstract classes. Anyway... this explanation only helps if you understand what an abstract class is.
A: Interfaces require any class that implements them to contain the methods defined in the interface.
The purpose is so that, without having to see the code in a class, you can know if it can be used for a certain task. For example, the Integer class in Java implements the Comparable interface, so if you only saw the class header (public class Integer implements Comparable), you would know that it contains a compareTo() method.
A: In your simple case, you could achieve something similar to what you get with interfaces by using a common base class that implements show() (or perhaps defines it as abstract). Let me change your generic names to something more concrete, Eagle and Hawk instead of MyClass1 and MyClass2. In that case you could write code like
Bird bird = GetMeAnInstanceOfABird(someCriteriaForSelectingASpecificKindOfBird);
bird.Fly(Direction.South, Speed.CruisingSpeed);
That lets you write code that can handle anything that is a Bird. You could then write code that causes the Bird to do its thing (fly, eat, lay eggs, and so forth) that acts on an instance it treats as a Bird. That code would work whether Bird is really an Eagle, Hawk, or anything else that derives from Bird.
That paradigm starts to get messy, though, when you don't have a true is a relationship. Say you want to write code that flies things around in the sky. If you write that code to accept a Bird base class, it suddenly becomes hard to evolve that code to work on a JumboJet instance, because while a Bird and a JumboJet can certainly both fly, a JumboJet is most certainly not a Bird.
Enter the interface.
What Bird (and Eagle, and Hawk) do have in common is that they can all fly. If you write the above code instead to act on an interface, IFly, that code can be applied to anything that provides an implementation to that interface.
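A hedged sketch of that refactoring, reusing the illustrative Bird, Direction and Speed types from the discussion above:
public interface IFly
{
    void Fly(Direction direction, Speed speed);
}

public class Hawk : Bird, IFly
{
    public void Fly(Direction direction, Speed speed) { /* flap wings */ }
}

public class JumboJet : IFly // not a Bird, and it doesn't need to be
{
    public void Fly(Direction direction, Speed speed) { /* spool up the engines */ }
}

// Code written against the interface now accepts both:
void SendSouth(IFly flyer)
{
    flyer.Fly(Direction.South, Speed.CruisingSpeed);
}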
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
}
|
Q: Fast plane rotation algorithm? I am working on an application that detects the most prominent rectangle in an image, then seeks to rotate it so that the bottom left of the rectangle rests at the origin, similar to how IUPR's OSCAR system works. However, once the most prominent rectangle is detected, I am unsure how to take into account the depth component or z-axis, as the rectangle won't always be "head-on". Any examples to further my understanding would be greatly appreciated. Seen below is an example from IUPR's OSCAR system.
alt text http://quito.informatik.uni-kl.de/oscar/oscar.php?serverimage=img_0324.jpg&montage=use
A: You don't actually need to deal with the 3D information in this case; it's just a mapping function from one set of coordinates to another.
Look at affine transformations, they're capable of correcting simple skew and perspective effects. You should be able to find code somewhere that will calculate a transform from the 4 points at the corners of your rectangle.
Almost forgot - if "fast" is really important, you could simplify the system to only use simple shear transformations in combination, though that'll have a bad impact on image quality for highly-tilted subjects.
A: Actually, I think you can get away with something much simpler than Mark's approach.
*
*Once you have the 2D coordinates on the skewed image, re-purpose those coordinates as texture coordinates.
*In a renderer, draw a simple rectangle where each corner's vertices are texture mapped to the vertices found on the skewed 2D image (normalized and otherwise transformed to your rendering system's texture coordinate plane).
Now you can rely on hardware (using OpenGL or similar) to do the correction for you, or you can write your own texture mapper:
The aspect ratio will need to be guessed at since we are disposing of the actual 3D info. However, you can get away with just taking the max width and max height of your skewed rectangle.
Perspective Texture Mapping by Chris Hecker
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What are some of the pros and cons of using jQuery? As someone who is only barely proficient in javascript, is jQuery right for me? Is there a better library to use? I've seen lots of posts related to jQuery and it seems to be the most effective way to incorporate javascript into ASP.NET applications.
I've been out to jQuery's site and have found the tutorials and other helpful information. Any other reference material (i.e books, blogs, etc.) would be helpful.
Thanks!
A: I just started using jQuery as well, and have found it to be very helpful. For me, the biggest advantage is having some really nice intellisense in VS for it, and not having to look up every archaic method in the world to accomplish simple tasks. To me, it just seems a lot better organized than plain old javascript, and like someone else said, it has a ton of good built in libraries.
A: While just beginning to learn JavaScript I looked at the various libraries with amazement. Then I looked more closely at jQuery and was hooked. No longer will I work with DOM without loading jQuery. Not just for websites, jQuery brings powerful utilities, reduced code, and simple handling of local administrator JavaScripts.
Local JavaScripts + jQuery + msHta = awesome interface driven scripts!
For more information about using jQuery on local administrator scripts check out my posts about using jQuery and HTA's...
Chris
A: Pros: Write less, do more.
Cons: You have to learn it ( only VS gets the intellisense, not the brain [:)] )
If you are interested in jQuery here is a good Review of jQuery Books by Rick Strahl
A: The great thing about libraries like jQuery and Prototype is that they take care of a lot of the cross-browser quirks that can make JavaScript such a pain to write. Either one of those, or maybe even MooTools, will be good to you, their respective websites being about as good a resource as it's gonna get.
edit: as far as the 'con' of having extra loading size on your page, I suggest using Google to host these for you. Optimistically, some people will have it cached from other websites, plus Google takes care of versions/compression for you.
A: The biggest thing that I've found helpful in learning jQuery is other people's plugins. I'd find some stuff that you like, and read the plugin code. You may find some pretty cool stuff to learn.
A: Pros: jQuery is a great library which lets you get what you want done in much much much less code, with a lot less hassle. The plugin architecture is incredibly simple, and the community producing plugins is very strong and active. If you can think "wouldn't it be nice if I could..." then chances are there's a plugin for it.
Cons: You are being abstracted away from the raw Javascript. Don't underestimate the effect of this "con". Though working in vanilla JS (that is, javascript without libraries), can be a massive pain, it gives you a much better understanding of what you're actually doing. You might find that your jQuery based solutions could actually be done in vanilla JS with a lot less overhead.
A: Pros: you don't have to deal with the tangled mess that is cross-platform Javascript compatibility. You don't have to worry about which browsers support standard event handlers and which have their own event systems. You don't have to write two hundred lines of DOM manipulation withHugeLongFunctionNamesFromHell to get nice dynamic pages.
Cons: It's an extra 15KB of code your users have to download the first time they load your page.
A: jQuery is also the first JavaScript framework I used. I find the syntax rather easy to pick up, and the library of plugins and the support from its users very helpful in learning the language. Although I still continue to borrow books on JavaScript, I felt jQuery is perhaps a good way to show the power of JavaScript.
A: If you are only looking for a JavaScript tool that is browser-compatible and quite small in size to do DOM manipulation and Ajax, then jQuery might be the one you are looking for.
But jQuery is lacking in these two areas:
*
*Full-blown widgets (think of ExtJS). Note: jQuery UI is not as complete and is quite slow.
*Object oriented support (Mootools).
A: Pros: cross-browser support, user-friendly function names.
Many plug-ins.
Only about 15KB on the client side.
The community is pretty huge and will guide you.
Easy to work with services (.svc, .asmx), etc. I believe it is wonderful.
Cons:
You may forget plain JavaScript.
So, for beginners, I suggest learning JavaScript first and pulling your socks up for jQuery.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Using PL/SQL how do you I get a file's contents in to a blob? I have a file. I want to get its contents into a blob column in my oracle database or into a blob variable in my PL/SQL program. What is the best way to do that?
A: To do it entirely in PL/SQL, the file would need to be on the server, located in a directory which you'd need to define in the database. Create the following objects:
CREATE OR REPLACE DIRECTORY
BLOB_DIR
AS
'/oracle/base/lobs'
/
CREATE OR REPLACE PROCEDURE BLOB_LOAD
AS
lBlob BLOB;
lFile BFILE := BFILENAME('BLOB_DIR', 'filename');
BEGIN
INSERT INTO your_table (id, your_blob)
VALUES (xxx, empty_blob())
RETURNING your_blob INTO lBlob;
DBMS_LOB.OPEN(lFile, DBMS_LOB.LOB_READONLY);
DBMS_LOB.OPEN(lBlob, DBMS_LOB.LOB_READWRITE);
DBMS_LOB.LOADFROMFILE(DEST_LOB => lBlob,
SRC_LOB => lFile,
AMOUNT => DBMS_LOB.GETLENGTH(lFile));
DBMS_LOB.CLOSE(lFile);
DBMS_LOB.CLOSE(lBlob);
COMMIT;
END;
/
A: Depends a bit on your environment. In Java you could do it something like this...
// Need as OracleConnection in mConnection
// Set an EMPTY_BLOB()
String update = "UPDATE tablename"+
" SET blob_column = EMPTY_BLOB()"+
" WHERE ID = "+id;
CallableStatement stmt = mConnection.prepareCall(update);
stmt.executeUpdate();
// Lock the row FOR UPDATE
String select = "BEGIN " +
" SELECT " + blob_column
" INTO ? " +
" FROM " + tablename +
" WHERE ID = '" + id + "'" +
" FOR UPDATE; " +
"END;";
stmt = mConnection.prepareCall(select);
stmt.registerOutParameter(1, java.sql.Types.BLOB);
stmt.executeUpdate();
BLOB blob = (BLOB) stmt.getBlob(1);
OutputStream bos = blob.setBinaryStream(0L);
FileInputStream fis = new FileInputStream(file);
// Copy the file contents into the BLOB's output stream
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = fis.read(buffer)) != -1) {
    bos.write(buffer, 0, bytesRead);
}
fis.close();
bos.close();
stmt.close();
mConnection.commit();
But it really depends what environment / tools you're using. More info needed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Generic cache of objects Does anyone know any implementation of a templated cache of objects?
*
*You use a key to find object (the same as in std::map<>)
*You specify a maximum number of objects that can be in the cache at the same time
*There are facilities to create an object not found in the cache
*There are facilities to know when an object is discarded from the cache
For example :
typedef cache<int, MyObj*> MyCache;
MyCache oCache;
oCache.SetSize(1);
oCache.Insert(make_pair(1, new MyObj()));
oCache.Touch(1);
MyObj* oldObj = oCache.Delete(1);
...
It can be as simple as a LRU or MRU cache.
Any suggestions are welcomed!
Nic
A: You can use the Boost.MultiIndex library.
It is easy to implement an MRU cache with it.
A: I've put together a relatively simple LRU cache built from a map and a linked list (the map stores each value together with its position in the usage list):
#include <cstddef>
#include <functional>
#include <list>
#include <unordered_map>
#include <utility>

template<typename K, typename V>
class LRUCache
{
    // Each map entry stores the value plus an iterator into the usage list.
    // (Storing only the iterator would lose the values themselves.)
    typedef std::pair<V, typename std::list<K>::iterator> Entry;

    size_t maxSize;
    std::unordered_map<K, Entry> data;
    std::list<K> usageOrder; // front = most recently used
    std::function<void(std::pair<K, V>)> onEject = [](std::pair<K, V>){};

    void moveToFront(typename std::list<K>::iterator itr)
    {
        if(itr != usageOrder.begin())
            usageOrder.splice(usageOrder.begin(), usageOrder, itr);
    }

    void trimToSize()
    {
        while(data.size() > maxSize)
        {
            auto itr = data.find(usageOrder.back());
            onEject(std::pair<K, V>(itr->first, itr->second.first));
            data.erase(itr);
            usageOrder.pop_back();
        }
    }

public:
    typedef std::pair<const K, V> value_type;
    typedef K key_type;
    typedef V mapped_type;

    LRUCache(size_t maxEntries) : maxSize(maxEntries)
    {
        data.reserve(maxEntries);
    }

    size_t size() const
    {
        return data.size();
    }

    void insert(const value_type& v) // assumes the key is not already present
    {
        usageOrder.push_front(v.first);
        data.insert(std::make_pair(v.first, Entry(v.second, usageOrder.begin())));
        trimToSize();
    }

    bool contains(const K& k) const
    {
        return data.count(k) != 0;
    }

    V& at(const K& k)
    {
        Entry& entry = data.at(k); // throws std::out_of_range if absent
        moveToFront(entry.second);
        return entry.first;
    }

    void setMaxEntries(size_t maxEntries)
    {
        maxSize = maxEntries;
        trimToSize();
    }

    void touch(const K& k)
    {
        at(k);
    }

    template<typename Compute>
    V& getOrCompute(const K& k, Compute compute)
    {
        if(!contains(k)) insert(value_type(k, compute()));
        return at(k);
    }

    void setOnEject(std::function<void(std::pair<K, V>)> f)
    {
        onEject = f;
    }
};
Which I believe meets your criteria. Anything need to be added, or changed?
A: In an application I can hardly imagine it would boost performance to store objects that apparently can be re-created (hint: since they can be automatically discarded when the cache is full).
A software cache would require memory fetching through associative lookup code, surely slower than memory allocation and constructor execution (mostly memory initializations).
With the exception of manual user configuration to avoid the paging mechanism (precisely to boost performance, by the way), most OSes "cache" memory for you on disk... it's "paging", a form of "high-cost caching", because nothing gets thrown away, and it's done by specific hardware, a sub-processing unit called the Memory Management Unit...
Caching code, in the big picture, would slow down processes while being redundant.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: jQuery onClick execution I have this bit of javascript written with jQuery 1.2.5. It's contained inside the main function() of a plugin that I wrote. The plugin is a horizontal gallery scroller very similar to jCarousel. It does a lot of automatic calculation of widths and determines how many to scroll based on that and the size of the images, which is what all the calculations going on here are for.
What my question is, how do I prevent this from firing off before a previous execution is finished. For instance, if I get a little click happy and just frantically mash down on .digi_next. Things don't go so well in the UI when that happens and I'd like to fix it :) I thought the answer might lie in queue, but all my attempts at using it haven't turned out anything worthwhile.
var self = this;
$(".digi_next", this.container).click(function(){
var curLeft = $(".digi_container", self.container).css("left").split("px")[0];
var newLeft = (curLeft*1) - (self.containerPad + self.containerWidth) * self.show_photos;
if (newLeft < ((self.digi_gal_width - (self.containerPad + self.containerWidth) * self.show_photos)) * -1) {
newLeft = ((self.digi_gal_width - (self.containerPad + self.containerWidth) * self.show_photos)) * -1;
}
$(".digi_container", self.container).animate({
left: newLeft + "px"
}, self.rotateSpeed);
});
A: Take a look at jQuery UI. Specifically the effects-part of the plug in. I use the slide-effect on my personal website (click on the arrows at the sides of the boxes).
I prevent users from triggering the effect more than once - before the effect has ended - with the one() event handler and a callback function.
Here's the source-code
A: As an alternative to the afformentioned global flag, you could assign the value to the DOM element, that way enabling multiple elements on the page to have the same behaviour:
$("...").onclick(function(el) {
var self = el;
if (self.busy) return false;
self.busy = true;
$("...").animate(..., ..., ..., function() {
self.busy= false;
});
return false;
});
A: Just use a global busy flag. When you enter your click handler, check it, and only proceed if it's false. Immediately set it to true, and then set it back to false when the animation ends. JavaScript is single-threaded, so there is no race condition to worry about.
var busy = false;
$("...").onclick(function() {
if (busy) return false;
busy = true;
$("...").animate(..., ..., ..., function() {
busy= false;
});
return false;
});
A: Since jQuery effects like animate() run asynchronously, you can pass in a callback function that's called when the previous call ends (same for errors).
You can pass the function you wrote in this post as the callback for the function that fires before it.
Hope this helps.
Regards
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to line up items of varied length in a resizable space in CSS? I'd like to line up items approximately like this:
item1 item2 i3 longitemname
i4 longitemname2 anotheritem i5
Basically items of varying length arranged in a table like structure. The tricky part is the container for these can vary in size and I'd like to fit as many as I can in each row - in other words, I won't know beforehand how many items fit in a line, and if the page is resized the items should re-flow themselves to accommodate. E.g. initially 10 items could fit on each line, but on resize it could be reduced to 5.
I don't think I can use an html table since I don't know the number of columns (since I don't know how many will fit on a line). I can use css to float them, but since they're of varying size they won't line up.
So far the only thing I can think of is to use javascript to get the size of largest item, set the size of all items to that size, and float everything left.
Any better suggestions?
A: This can be done using floated div's, calculating the max width, and setting all widths to the max. Here's jquery code to do it:
html:
<div class="item">something</div>
<div class="item">something else</div>
css:
div.item { float: left; }
jquery:
var max_width = 0;
$('div.item').each(function() {
    if ($(this).width() > max_width) {
        max_width = $(this).width();
    }
}).width(max_width);
Not too ugly but not too pretty either, I'm still open to better suggestions...
A: What happens when you get one item that's ridiculously large and makes the rest look small? I would consider two solutions:
*
*What you've already come up with, involving a float:left; rule and jQuery, but with a cap on max_width as well, or
*Just decide on a preset width for all items before hand, based on what values you expect to be in there
Then add an overflow:hidden; rule so items that are longer don't skew the table look. You could even change the jQuery function to trim items that are longer, adding an ellipsis (...) to the end.
A: You could use block level elements floated left, but you will need some javascript to check the sizes, find the largest one, and set them all to that width.
EDIT: Just read the second half of your post, and saw that you suggested just this fix. Count this post as +1 for your current idea :)
A: You would actually need to calculate the maxWidth and the maxHeight and then go through with a resize function in JavaScript after the page loads. I had to do this for a prior project, and one of the browsers (FF?) will snag/offset the ones underneath if the heights of the divs vary.
A: Well, could you just create a floated div for each 'item', and loop through its properties and use linebreaks? Then when you finish looping over the properties, close the div and start a new one? That way they will be like columns and just float next to each other. If the window is small, they'll drop down to the next 'line' and when resized, will float back up to the right (which is really the left :-)
A: You could float a couple of unordered lists, like this:
<ul style="float: left;">
<li>Short</li>
<li>Loooong</li>
<li>Even longer</li>
</ul>
<ul style="float: left;">
<li>Loooong</li>
<li>Short</li>
<li>...</li>
</ul>
<ul style="float: left;">
<li>Semi long</li>
<li>...</li>
<li>Short</li>
</ul>
It would require you to do some calculations on how many list-items in how many lists should be shoved into the DOM, but this way - when you re-size the container which holds the lists - the lists will float to fit into that container.
A: You could use the following with each column the same width.
You'll have a fixed column width, but the list will reflow itself automatically.
I added a little bit of extra HTML, but now it works in FF and IE.
html:
<ul class="ColumnBasedList">
<li><span>Item1 2</span></li>
<li><span>Item2 3</span></li>
<li><span>Item3 5</span></li>
<li><span>Item4 6</span></li>
<li><span>Item5 7</span></li>
<li><span>Item6 8</span></li>
</ul>
css:
.ColumnBasedList
{
width: 80%;
margin: 0;
padding: 0;
}
.ColumnBasedList li
{
list-style-type: none;
display:inline;
}
.ColumnBasedList li span
{
display: -moz-inline-block;
display: inline-block;
width: 20em;
margin: 0.3em;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Setting up an architecture department Some context upfront:
Imagine a 200+ developers company finally setting up a more or less independent architecture team/department.
The software portfolio consisting of 20+ "projects"/applications of varying sizes in production was taken care of by team-leads/technical-leads, who were responsible for and in charge of the projects "architecture" as well.
Out of the necessity to consolidate and control the architecture and enable certain needed large reworks on the systems as a whole, in addition to the much-needed knowledge exchange, the company decided to set up an architecture department.
*
*What are the DOs and DON'Ts of such an undertaking?
*Who are the people making up such an architecture team?
*What should be their responsibilities?
*What's out of their scope?
*What are the useful transition strategies for the company?
*How to prevent those wry looks every time someone even mentions "the architecture team"?
*Did your company undergo such a change already successfully?Why did it fail?Why was it successful?
This should not be a discussion on "What is architecture?" (which is very closely related ;).
The really interesting points would be acceptable/realistic, maybe even frictionless, ways to install such a team - besides, of course, some warnings regarding battles better left unfought.
A: Here are a few issues that should be thought about:
*
*What is the exact mandate for the architecture team?
*What is the architecture team's deliverable? A framework, guidelines, implementation help... Or are they just Architecture Astronauts?
*Is this only for applications going forward, or will this be a backport?
*Who will be responsible for backporting? (And we mean budget here...)
*Will there be resources allocated to testing the backports?
*Does the Architecture Team have real muscle, or will management's will fold when the first group grouses about the 4 months it will take to implement the changes...
*How will you deal with the friction between the individual project groups and the architecture team (and there will be friction)? Opportunists will take this as a wonderful opportunity to jockey for position...
*Be aware that this will be primarily a political game...
My friend, you have a tough road ahead...
The first step is to be crystal clear on what the architecture team is supposed to achieve.
Why are you putting the team in place?
Are you trying to unify all the applications, develop a common framework, what?
What is the mandate and the vision for this team?
Whoever the lead on this team better have kick a** interpersonal skills.
It should not be the brilliant coder that can whistle the star wars theme song and make light saber noises... but he should probably be on the team in a technical capacity.
You should probably populate the team with people that are familiar with the majority of the projects. I would be wary of selecting all the current leads, as that might take a big chunk of knowledge from the current teams. And let's face it, those teams have to be productive while the architecture team comes up with its own deliverables.
A: Architecture is difficult to get right.
The "Architects" need the power to get things done, but need to be savvy enough to not abuse that power and completely alienate the rest of the company.
I've worked at two places where architecture teams were implemented -- one was a success, the other isn't looking so good. At the successful place, it was a relatively small environment where the head architect was recognized as a technical leader, and the other members of the team had excellent writing and political skills. Everyone acted in the best interest of the organization.
At the place that it didn't work out so well, the architects clearly represented specific factions in the organization, and didn't earn the trust or respect of the entire place. The result was that more time was spent cooking up excuses to bypass the architecture than gleaning any value from it. In this case, frustration turned into passive/aggressive and other anti-social behaviors.
I think the other questions that you ask about scope/responsibilities/transition are all answered by "it depends". It depends on the company, the people, the money and the schedule.
A: Interesting question.
First, you have to have a clear notion of what problem this "architecture" team is solving. If you can not clearly define the "mission" of the team it will fail and do it with great big explosions. :)
That being said, the first step is to define the problem you are solving. Are you trying to keep up with technology? Are you trying to incorporate some code reuse between projects? Are you trying to utilize your development staff to the best possible effect? There are several reasons to implement an architecture team and, given your setup, any one of these might be sufficient. From your question, it looks like your goal is reworking the existing apps, so that is a good first step.
Since you already have a group of leads that have good specific knowledge of the apps it would be a good idea to start with them. Get them together and hash out what the new global architecture should look like. You might also want to get a consultant to help facilitate the conversation at this point. Define the goals of the rework and come out with a "big picture" that everybody can agree to.
After that I would take a handful of the leads and promote them (backfilling the leads from the developer pool) to the architecture team. They will then meet with the leads to ensure things are going according to that "Big Picture".
I would NOT bring in a whole new group from the outside. That would create an unwanted Us vs. Them dynamic that is never good. The outsiders would also have no idea of how things are supposed to work or why things don't work the way logic would imply they should. :)
A: "Architecture" in this context in itself means nothing. It means "experts on transversal topics".
Whenever you have an "Architecture team", you will have a transversal team which will provide services for many projects.
As stated by the previous answers, you need to know what topics such an "Architecture department" will have to address.
Now, here is a example of organization of architecture teams based on several topics:
*
*Business and Functional Architecture team: writes many business-related specifications, and checks the alignment between existing applications and functional workflows, in order to complete a coherent cartography of applications.
*Application Architecture team: provides the cartography, but also decides how the functional specifications produced by the Business and Functional Architecture team will be organized into applications.
For example, you need a functional module for "portfolio process", but the Application Architecture team can decide to split that into a "launcher", a "dispatcher", a GUI, and so on.
*Technical Architecture teams, always composed of:
*
*Execution Architecture team, for all non-business-purely-technical topics (logging, KPI, frameworks, ...)
*Development Architecture team (tool evaluation and support, technological survey, repositories management for version and configuration control)
*OA (Operational Architecture) for making an environment "executable" (that is, knowing the right processes, the right servers and the right networks in order to deploy your system either for homologation or for production.)
You may want to add a Logistic team for all the management of server and networks, with the tasks of Backup and DRP strategies. And a support strategy based on a good case system.
And you are good to go.
Now, do not forget that when you begin some "large rework", your Functional Architecture team will have the mission of enforcing coherence for both:
*
*the reworked projects to be sure they stay within the fixed functional perimeter
*the legacy projects, to be sure their maintenance does not involve choices opposed to the ones applied to the reworked projects.
Any rework in a shop this size indeed means being able to make the necessary evolutions to legacy projects while waiting for the rework to produce its first releases. (The legacy cannot just wait and stay still during 2-3 years of rework.)
A large rework should involve three major milestones:
*
*1/ dialog with the legacy
*2/ complete the legacy
*3/ replace the legacy
Meaning any given component is in effect developed three times! ;)
Good luck and good night.
A: In general, be very careful about the incentives, both political and otherwise, associated with the architecture group. It is far too easy for the "architectural review board" (or whatever you want to call it) to become a barrier to progress. All it takes is zero incentive to improve things and a negative incentive when things change and don't immediately improve.
Realize that mistakes will be made, some "great new technologies" will turn out to be half-baked fads, and encourage change and innovation anyway. This may yield the occasional short-term upheaval and failed transformation, but that is better than stagnation.
And the alternative inevitably yields stagnation. In larger organizations I have seen careers ruined because a manager believed enough in his team to support their recommendation for new technology all the way to the top, provide the case studies to prove it, and back it to the hilt. When the new tech was finally approved (after almost a year of political infighting) the CTO (who opposed it the entire time) claimed credit for the innovation and transferred the manager to a backwoods department. In another incident a new technology was proposed, with numerous examples of success in the same business area, and a committee was formed to study the issue. Five years later they are still studying the issue, and nothing has been done.
A: I think the architecture team needs people senior enough that they know the inner workings of all the development teams and can say no to requests/guidelines. I have been in teams with good developers who didn't have enough authority; they ended up following whatever the higher-ranking development managers from the different teams wanted, and produced inconsistent frameworks.
A: You need to work through business scenarios A) and B).
A) What if you don't set it up, i.e. do nothing:
Estimate the rework and ongoing maintenance costs.
B) You do set it up; then:
Disruption to near-term deliverables, due to resource diversion.
Risk that multiple products may be disadvantaged in the short term.
Costs of perceived extra manpower.
Who will flag it up if the products get weakened by the exercise (performance or perceived inflexibility)?
Next, get the product teams to do the same exercise and compare results.
If you do justify it, here are two routes I have seen:
1. Pick a lead product to drive the architecture and add resources to this project.
Then be prepared to add more resources, and be patient; otherwise the lead product suffers.
You risk division with this route; it worked when the lead product was 40% of revenue.
2. Start up a small team, drawn from the most promising discussions that have been occurring internally, and wrap the new architecture into each product incrementally.
Weave this team's work into the products' work.
Some questions for you to look at:
1. How soon do you have to achieve architecture convergence to get the business benefit?
2. Who are the team members already talking about architecture convergence, and are they asking/suggesting its importance? You need this question to be on the "back burner" for 80% of your team leads.
What not to do
* Hire in experts from outside (unless you are in a real mess now)
* Give up after a few months, this is long term.
* Change all projects at once.
* Start before you have a core of three that can make this happen.
* Let the architecture department become any bigger than it has to be
* Let it be perceived the architecture department will solve product teams problems
* Let any product appear to be "waiting for the new architecture"
* Let the architecture department "define all" or have scope creep
* Force all products into architecture, some may not fit (e.g. not developed in same country)
What to do:
* Armed with good justification from the first question, get senior management to buy in and to ask the product teams to report progress
* Make step-by-step changes in the alignment of each product to the architecture map
* Work on aligning the most promising or low-risk product lines first
* Set up metrics so you can demonstrate the value added (see the justification from the first set of questions)
* Have a road map for all products to get converged, or not
* Think about what the core architecture delivers and who maintains its artefacts
* Allow product teams to contribute to the core in terms of specifications, code and maintenance of the core
* Set up training on how to use the work of the architecture team for new starters and existing teams
A: Architecture alone might turn people into astronauts / zombies. So they should definitely have some coding to do, even if that's basic prototyping. In fact, the success of their prototypes must be a definite review factor.
They should give bi-monthly presentations / frequent blog posts that track their work, so that others in the organization can learn.
There should be academic goals like being familiar with certain platforms / tools / books and design philosophies.
They should be given time to pursue new tools / projects / responsibilities in existing projects if they feel like it.
They would have the responsibility to do at least 3-4 code reviews of critical modules and come up with code style guidelines.
They would have the responsibility to review low-level designs, at least of key modules.
They should be given spare time as individuals or team to build something they feel might be useful.
They should have the option to forego architecture and return to regular work if they feel like it with no penalties involved.
They should have information about what's happening across all projects running in the organization. At least one project should be followed closely so that they can inform their own peers about stuff happening elsewhere. This can possibly be the project in which they perform code reviews and such.
They should have a highly technical person as their manager.
Architects should be switched between projects once they are very familiar with one of them, and allowed to follow up on whatever prototype they were pursuing while working with the original project.
Have at least one real goal (like consolidating all commonalities across projects into a single library) every year.
Invest time and training to ensure that architects do not get ego-bound and conduct themselves professionally. Conflict resolution and other soft-skills training, along with a budget for technical meetings and trainings, would definitely be required too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How to return multiple values in one column (T-SQL)? I have a table UserAliases (UserId, Alias) with multiple aliases per user. I need to query it and return all aliases for a given user, the trick is to return them all in one column.
Example:
UserId/Alias
1/MrX
1/MrY
1/MrA
2/Abc
2/Xyz
I want the query result in the following format:
UserId/Alias
1/ MrX, MrY, MrA
2/ Abc, Xyz
Thank you.
I'm using SQL Server 2005.
p.s. actual T-SQL query would be appreciated :)
A: You can use a function with COALESCE.
CREATE FUNCTION [dbo].[GetAliasesById]
(
@userID int
)
RETURNS varchar(max)
AS
BEGIN
declare @output varchar(max)
select @output = COALESCE(@output + ', ', '') + alias
from UserAliases
where userid = @userID
return @output
END
GO
SELECT UserID, dbo.GetAliasesByID(UserID)
FROM UserAliases
GROUP BY UserID
GO
A: Well... I see that an answer was already accepted... but I think you should see another solutions anyway:
/* EXAMPLE */
DECLARE @UserAliases TABLE(UserId INT , Alias VARCHAR(10))
INSERT INTO @UserAliases (UserId,Alias) SELECT 1,'MrX'
UNION ALL SELECT 1,'MrY' UNION ALL SELECT 1,'MrA'
UNION ALL SELECT 2,'Abc' UNION ALL SELECT 2,'Xyz'
/* QUERY */
;WITH tmp AS ( SELECT DISTINCT UserId FROM @UserAliases )
SELECT
LEFT(tmp.UserId, 10) +
'/ ' +
STUFF(
( SELECT ', '+Alias
FROM @UserAliases
WHERE UserId = tmp.UserId
FOR XML PATH('')
)
, 1, 2, ''
) AS [UserId/Alias]
FROM tmp
/* -- OUTPUT
UserId/Alias
1/ MrX, MrY, MrA
2/ Abc, Xyz
*/
A: DECLARE @Str varchar(500)
SELECT @Str=COALESCE(@Str,'') + CAST(ID as varchar(10)) + ','
FROM dbo.fcUser
SELECT @Str
A: group_concat() sounds like what you're looking for.
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat
since you're on mssql, i just googled "group_concat mssql" and found a bunch of hits to recreate group_concat functionality. here's one of the hits i found:
http://www.stevenmapes.com/index.php?/archives/23-Recreating-MySQL-GROUP_CONCAT-In-MSSQL-Cross-Tab-Query.html
A: You can either loop through the rows with a cursor and append to a field in a temp table, or you could use the COALESCE function to concatenate the fields.
A: Sorry, read the question wrong the first time. You can do something like this:
declare @result varchar(max)
--must "initialize" @result to '' for this to work; otherwise NULL propagates and the result stays NULL
select @result = ''
select @result = @result + alias + ', '
FROM aliases
WHERE username='Bob'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: Invalid group name: Group names must begin with a word character I received the following exception when I was using the Regex class with the regular expression: (?'named a'asdf)
System.ArgumentException: parsing \"(?'named a'asdf)\" - Invalid group name: Group names must begin with a word character.
What is the problem with my regular expression?
A: The problem is the space in the name of the capture. Remove the space and it works fine.
From the MSDN documentation:
"The string used for name must not contain any punctuation and cannot begin with a number. You can use single quotes instead of angle brackets; for example, (?'name')."
It does not matter if you use angle brackets <> or single quotes '' to indicate a group name.
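For illustration, a minimal C# sketch showing that once the space is removed, both delimiter styles parse fine; the group name "NamedA" here is just an example:
using System;
using System.Text.RegularExpressions;

class Demo
{
    static void Main()
    {
        Match m = Regex.Match("asdf", @"(?'NamedA'asdf)"); // single quotes
        Console.WriteLine(m.Groups["NamedA"].Value);       // prints "asdf"
        m = Regex.Match("asdf", "(?<NamedA>asdf)");        // angle brackets
        Console.WriteLine(m.Success);                      // True
    }
}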
A: The reference for the MSDN documentation mentioned by vengafoo is here:
Regular Expression Grouping Constructs
(?<name> subexpression)
Captures the matched subexpression into a group name or number name. The string used
for name must not contain any punctuation and cannot begin with a
number. You can use single quotes instead of angle brackets; for example, (?'name').
A: The problem is your quotes around the name of the named capture group. Try the string: (?<Named>asdf)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Timed events with php/MySQL I need a way to modify a value in a table after a certain amount of time has passed. My current method is as follow:
*
*insert end time for wait period in table
*when a user loads a page requesting the value to be changed, check to see if current >= end time
*if it is, change the value and remove the end time field, if it isn't, do nothing
This is going to be a major feature on the site, and so efficiency is the key; with that in mind, you can probably see the problem with how I'm doing it. That same chunk of code is going to be called every time someone accesses a page that needs the information.
Any suggestions for improvements or better methods would be greatly appreciated, preferably in php or perl.
In response to cron job answers:
Thanks, and I'd like to do something like that if possible, but hosts limits are the problem. Since this is a major part of the app, it can't be limited.
A: why not use a cron to update this information behind the scenes? that way you offload the checks on each page hit, and can actually schedule the timing to meet your app's requirements.
A: Your solution sounds very logical, since you don't have access to cron. Another way could be storing the value in a file, and the next time the page is loaded check when it was last modified (filemtime("checkfile.txt")), and decide if it needs modifying again. You should test performance for both methods.
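As a rough sketch of that idea (the marker path and the 5-minute threshold below are made-up examples):
<?php
$marker = '/tmp/last_update_check';
if (!file_exists($marker) || time() - filemtime($marker) >= 300) {
    // run the query that updates any expired rows here, then refresh the marker
    touch($marker);
}
?>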
A: Can you use a cron job to check each field in the database periodically and update that way?
A big part of this is how frequently the updates are required. A lot of shared hosts limit the frequency of cron checks, for example no more than every 15 minutes, which could affect the application.
A: You could use a trigger of some sort on each page load. I really have no idea how that would affect performance but maybe somebody else can shed some light.
A: If performance really starts to be an issue (which means a lot more than you probably realize), you could use memcached to store the info...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Robust and fast checksum algorithm? Which checksum algorithm can you recommend in the following use case?
I want to generate checksums of small JPEG files (~8 kB each) to check if the content changed. Using the filesystem's date modified is unfortunately not an option.
The checksum need not be cryptographically strong but it should robustly indicate changes of any size.
The second criterion is speed since it should be possible to process at least hundreds of images per second (on a modern CPU).
The calculation will be done on a server with several clients. The clients send the images over Gigabit TCP to the server. So there's no disk I/O as bottleneck.
A: If you are receiving the files over network you can calculate the checksum as you receive the file. This will ensure that you will calculate the checksum while the data is in memory. Hence you won't have to load them into memory from disk.
I believe if you apply this method, you'll see almost-zero overhead on your system.
These are the routines I'm using on an embedded system, which does checksum control on firmware and other things.
static const uint32_t crctab[] = {
0x0,
0x04c11db7, 0x09823b6e, 0x0d4326d9, 0x130476dc, 0x17c56b6b,
0x1a864db2, 0x1e475005, 0x2608edb8, 0x22c9f00f, 0x2f8ad6d6,
0x2b4bcb61, 0x350c9b64, 0x31cd86d3, 0x3c8ea00a, 0x384fbdbd,
0x4c11db70, 0x48d0c6c7, 0x4593e01e, 0x4152fda9, 0x5f15adac,
0x5bd4b01b, 0x569796c2, 0x52568b75, 0x6a1936c8, 0x6ed82b7f,
0x639b0da6, 0x675a1011, 0x791d4014, 0x7ddc5da3, 0x709f7b7a,
0x745e66cd, 0x9823b6e0, 0x9ce2ab57, 0x91a18d8e, 0x95609039,
0x8b27c03c, 0x8fe6dd8b, 0x82a5fb52, 0x8664e6e5, 0xbe2b5b58,
0xbaea46ef, 0xb7a96036, 0xb3687d81, 0xad2f2d84, 0xa9ee3033,
0xa4ad16ea, 0xa06c0b5d, 0xd4326d90, 0xd0f37027, 0xddb056fe,
0xd9714b49, 0xc7361b4c, 0xc3f706fb, 0xceb42022, 0xca753d95,
0xf23a8028, 0xf6fb9d9f, 0xfbb8bb46, 0xff79a6f1, 0xe13ef6f4,
0xe5ffeb43, 0xe8bccd9a, 0xec7dd02d, 0x34867077, 0x30476dc0,
0x3d044b19, 0x39c556ae, 0x278206ab, 0x23431b1c, 0x2e003dc5,
0x2ac12072, 0x128e9dcf, 0x164f8078, 0x1b0ca6a1, 0x1fcdbb16,
0x018aeb13, 0x054bf6a4, 0x0808d07d, 0x0cc9cdca, 0x7897ab07,
0x7c56b6b0, 0x71159069, 0x75d48dde, 0x6b93dddb, 0x6f52c06c,
0x6211e6b5, 0x66d0fb02, 0x5e9f46bf, 0x5a5e5b08, 0x571d7dd1,
0x53dc6066, 0x4d9b3063, 0x495a2dd4, 0x44190b0d, 0x40d816ba,
0xaca5c697, 0xa864db20, 0xa527fdf9, 0xa1e6e04e, 0xbfa1b04b,
0xbb60adfc, 0xb6238b25, 0xb2e29692, 0x8aad2b2f, 0x8e6c3698,
0x832f1041, 0x87ee0df6, 0x99a95df3, 0x9d684044, 0x902b669d,
0x94ea7b2a, 0xe0b41de7, 0xe4750050, 0xe9362689, 0xedf73b3e,
0xf3b06b3b, 0xf771768c, 0xfa325055, 0xfef34de2, 0xc6bcf05f,
0xc27dede8, 0xcf3ecb31, 0xcbffd686, 0xd5b88683, 0xd1799b34,
0xdc3abded, 0xd8fba05a, 0x690ce0ee, 0x6dcdfd59, 0x608edb80,
0x644fc637, 0x7a089632, 0x7ec98b85, 0x738aad5c, 0x774bb0eb,
0x4f040d56, 0x4bc510e1, 0x46863638, 0x42472b8f, 0x5c007b8a,
0x58c1663d, 0x558240e4, 0x51435d53, 0x251d3b9e, 0x21dc2629,
0x2c9f00f0, 0x285e1d47, 0x36194d42, 0x32d850f5, 0x3f9b762c,
0x3b5a6b9b, 0x0315d626, 0x07d4cb91, 0x0a97ed48, 0x0e56f0ff,
0x1011a0fa, 0x14d0bd4d, 0x19939b94, 0x1d528623, 0xf12f560e,
0xf5ee4bb9, 0xf8ad6d60, 0xfc6c70d7, 0xe22b20d2, 0xe6ea3d65,
0xeba91bbc, 0xef68060b, 0xd727bbb6, 0xd3e6a601, 0xdea580d8,
0xda649d6f, 0xc423cd6a, 0xc0e2d0dd, 0xcda1f604, 0xc960ebb3,
0xbd3e8d7e, 0xb9ff90c9, 0xb4bcb610, 0xb07daba7, 0xae3afba2,
0xaafbe615, 0xa7b8c0cc, 0xa379dd7b, 0x9b3660c6, 0x9ff77d71,
0x92b45ba8, 0x9675461f, 0x8832161a, 0x8cf30bad, 0x81b02d74,
0x857130c3, 0x5d8a9099, 0x594b8d2e, 0x5408abf7, 0x50c9b640,
0x4e8ee645, 0x4a4ffbf2, 0x470cdd2b, 0x43cdc09c, 0x7b827d21,
0x7f436096, 0x7200464f, 0x76c15bf8, 0x68860bfd, 0x6c47164a,
0x61043093, 0x65c52d24, 0x119b4be9, 0x155a565e, 0x18197087,
0x1cd86d30, 0x029f3d35, 0x065e2082, 0x0b1d065b, 0x0fdc1bec,
0x3793a651, 0x3352bbe6, 0x3e119d3f, 0x3ad08088, 0x2497d08d,
0x2056cd3a, 0x2d15ebe3, 0x29d4f654, 0xc5a92679, 0xc1683bce,
0xcc2b1d17, 0xc8ea00a0, 0xd6ad50a5, 0xd26c4d12, 0xdf2f6bcb,
0xdbee767c, 0xe3a1cbc1, 0xe760d676, 0xea23f0af, 0xeee2ed18,
0xf0a5bd1d, 0xf464a0aa, 0xf9278673, 0xfde69bc4, 0x89b8fd09,
0x8d79e0be, 0x803ac667, 0x84fbdbd0, 0x9abc8bd5, 0x9e7d9662,
0x933eb0bb, 0x97ffad0c, 0xafb010b1, 0xab710d06, 0xa6322bdf,
0xa2f33668, 0xbcb4666d, 0xb8757bda, 0xb5365d03, 0xb1f740b4
};
typedef struct crc32ctx
{
uint32_t crc;
uint32_t length;
} CRC32Ctx;
#define COMPUTE(var, ch) (var) = (var) << 8 ^ crctab[(var) >> 24 ^ (ch)]
void crc32_stream_init( CRC32Ctx* ctx )
{
ctx->crc = 0;
ctx->length = 0;
}
void crc32_stream_compute_uint32( CRC32Ctx* ctx, uint32_t data )
{
COMPUTE( ctx->crc, data & 0xFF );
COMPUTE( ctx->crc, ( data >> 8 ) & 0xFF );
COMPUTE( ctx->crc, ( data >> 16 ) & 0xFF );
COMPUTE( ctx->crc, ( data >> 24 ) & 0xFF );
ctx->length += 4;
}
void crc32_stream_compute_uint8( CRC32Ctx* ctx, uint8_t data )
{
COMPUTE( ctx->crc, data );
ctx->length++;
}
void crc32_stream_finalize( CRC32Ctx* ctx )
{
uint32_t len = ctx->length;
for( ; len != 0; len >>= 8 )
{
COMPUTE( ctx->crc, len & 0xFF );
}
ctx->crc = ~ctx->crc;
}
/*** pseudo code ***/
CRC32Ctx crc;
crc32_stream_init(&crc);
while((just_received_buffer_len = received_anything()))
{
for(int i = 0; i < just_received_buffer_len; i++)
{
crc32_stream_compute_uint8(&crc, buf[i]); // assuming buf is uint8_t*
}
}
crc32_stream_finalize(&crc);
printf("%x", crc.crc); // ta daaa
A: CRC
A: adler32, available in the zlib headers, is advertised as being significantly faster than crc32, while being only slightly less accurate.
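For reference, a minimal sketch of zlib's incremental adler32 API (compile with -lz; the buffer below is a stand-in for received image bytes):
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    const unsigned char buf[] = "example image bytes";
    uLong a = adler32(0L, Z_NULL, 0);    /* seed value */
    a = adler32(a, buf, sizeof buf - 1); /* feed each received chunk */
    printf("adler32 = %08lx\n", a);
    return 0;
}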
A: CRC32 is probably good enough, although there's a small chance you might get a collision, such that a file that has been modified might look like it hasn't been, because the two versions generate the same checksum. To avoid this possibility I'd therefore suggest using MD5, which will easily be fast enough, and the chance of a collision occurring is reduced to the point where it's almost infinitesimal.
As others have said, with lots of small files your real performance bottleneck is going to be I/O so the issue is dealing with that. If you post up a few more details somebody will probably suggest a way of sorting that out as well.
A: Your most important requirement is "to check if the content changed".
If it is most important that ANY change in the file be detected, MD-5, SHA-1 or even SHA-256 should be your choice.
Given that you indicated that the checksum need NOT be cryptographically good, I would recommend CRC-32 for three reasons. CRC-32 gives good Hamming distances over an 8K file. CRC-32 will be at least an order of magnitude faster than MD-5 to calculate (your second requirement). Sometimes as important, CRC-32 only requires 32 bits to store the value to be compared. MD-5 requires 4 times the storage and SHA-1 requires 5 times the storage.
BTW, any technique will be strengthened by prepending the length of the file when calculating the hash.
A: If you have many small files, your bottleneck is going to be file I/O and probably not a checksum algorithm.
A list of hash functions (which can be thought of as a checksum) can be found here.
Is there any reason you can't use the filesystem's date modified to determine if a file has changed? That would probably be faster.
A: According to the Wiki page pointed to by Luke, MD5 is actually faster than CRC32!
I have tried this myself by using Python 2.6 on Windows Vista, and got the same result.
Here are some results:
crc32: 162.481544276 MBps
md5: 224.489791549 MBps
crc32: 168.332996575 MBps
md5: 226.089336532 MBps
crc32: 155.851515828 MBps
md5: 194.943289532 MBps
I am thinking about the same question as well, and I'm tempted to use the Rsync's variation of Adler-32 for detecting file differences.
A: There are lots of fast CRC algorithms that should do the trick:
http://www.google.com/search?hl=en&q=fast+crc&aq=f&oq=
Edit: Why the hate? CRC is totally appropriate, as evidenced by the other answers. A Google search was also appropriate, since no language was specified. This is an old, old problem which has been solved so many times that there isn't likely to be a definitive answer.
A: *
*CRC-32 comes into mind mainly because it's cheap to calculate
*Any kind of I/O comes into mind mainly because this will be the limiting factor for such an undertaking ;)
*The problem is not calculating the checksums, the problem is to get the images into memory to calculate the checksum.
*I would suggest "staged" monitoring:
*
*stage 1: check for changes of file timestamps and if you detect a change there hand over to...(not needed in your case as described in the edited version)
*stage 2: get the image into memory and calculate the checksum
*For sure important as well: multi-threading: setting up a pipeline which enables processing of several images in parallel if several CPU cores are available.
A: Just a postscript to the above: JPEGs use lossy compression, and the extent of the compression may depend upon the program used to create the JPEG, the colour palette and/or bit depth on the system, display gamma, graphics card and user-set compression levels/colour settings. Therefore, comparing JPEGs built on different computers/platforms or using different software will be very difficult at the byte level.
A: This is 5 times faster than CCITT and does the same job:
Python:
def crc16_fast(data: bytearray, length):
crc = 0xCACA
for i in range(length):
crc ^= data[i]
return crc
C:
uint16_t crc16_fast(const uint16_t* data, size_t length)
{
uint16_t crc = 0xCACA;
for (size_t i = 0; i < length; i++)
crc ^= data[i];
return crc;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: how to determine if a record in every source, represents the same person I have several sources of tables with personal data, like this:
SOURCE 1
ID, FIRST_NAME, LAST_NAME, FIELD1, ...
1, jhon, gates ...
SOURCE 2
ID, FIRST_NAME, LAST_NAME, ANOTHER_FIELD1, ...
1, jon, gate ...
SOURCE 3
ID, FIRST_NAME, LAST_NAME, ANOTHER_FIELD1, ...
2, jhon, ballmer ...
So, assuming that records with ID 1, from sources 1 and 2, are the same person, my problem is how to determine if a record in every source, represents the same person. Additionally, sure not every records exists in all sources. All the names, are written in spanish, mainly.
In this case, the exact matching needs to be relaxed, because we assume the data sources have not been rigorously checked against the country's official bureau of identification. We also need to assume typos are common, because of the nature of the processes used to collect the data. What is more, there are around 2 or 3 million records in every source...
Our team had thought of something like this: first, force exact matching on selected fields like ID NUMBER and NAMES to learn how hard the problem is. Second, relax the matching criteria and count how many more records can be matched, but this is where the problem arises: how do we relax the matching criteria without generating too much noise or restricting too much?
What tool would be most effective for handling this? For example, do you know of a specific extension in some database engine to support this kind of matching?
Do you know about clever algorithms like soundex to handle this approximate matching, but for spanish texts?
Any help would be appreciated!
Thanks.
A: This sounds like a Customer Data Integration problem. Search on that term and you might find some more information. Also, have a poke around inside The Data Warehousing Institude, and you might find some answers there as well.
Edit: In addition, here's an article that might interest you on spanish phonetic matching.
A: I've had to do something similar before and what I did was use a double metaphone phonetic search on the names.
Before I compared the names though, I tried to normalize away any name/nickname differences by looking up the name in a nick name table I created. (I populated the table with census data I found online) So people called Bob became Robert, Alex became Alexander, Bill became William, etc.
Edit: Double Metaphone was specifically designed to be better than Soundex and work in languages other than English.
A: The crux of the problem is to compute one or more measures of distance between each pair of entries and then consider them to be the same when one of the distances is less than a certain acceptable threshold. The key is to setup the analysis and then vary the acceptable distance until you reach what you consider to be the best trade-off between false-positives and false-negatives.
One distance measurement could be phonetic. Another you might consider is the Levenshtein or edit distance between the entries, which would attempt to measure typos.
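As a rough illustration of edit distance (standard dynamic programming in Python, not tied to any particular database engine):
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("jhon", "jon"))  # 1: a single typo away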
If you have a reasonable idea of how many persons you should have, then your goal is to find the sweet spot where you are getting about the right number of persons. Make your matching too fuzzy and you'll have too few. Make it too restrictive and you'll have too many.
If you know roughly how many entries a person should have, then you can use that as the metric to see when you are getting close. Or you can divide the number of records into the average number of records for each person and get a rough number of persons that you're shooting for.
If you don't have any numbers to use, then you're left picking out groups of records from your analysis and checking by hand whether they look like the same person or not. So it's guess and check.
I hope that helps.
A: SSIS , try using the Fuzzy Lookup transformation
A: Just to add some details to solve this issue, I'd found this modules for Postgresql 8.3
*
*Fuzzy String Match
*Trigrams
A: You might try to canonicalise the names by comparing them with a dictionary.
This would allow you to spot some common typos and correct them.
A: Sounds to me you have a record linkage problem. You can use the references in the link.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/122990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: opening links in the same window or in a new (tab) It has always been best practice to open links in the same window. Should that still be the rule now that most common browsers use tabs? Personally I prefer closing a tab/window over hitting the back button. What are your thoughts?
A: If you prefer closing a tab/window over hitting the back button, then by all means, click links with your middle mouse button. But please don't force your surfing preferences on others. Tabs don't change this principle in the slightest.
A: I think consistency is the most important thing to keep in mind. Browsers are beginning to provide ways to open links in multiple tabs regardless of the web site's design decisions, so maintaining similar functionality as other websites is probably the biggest concern.
You really want your site's core features to behave like the other sites your users visit, so they feel comfortable and don't waste time trying to figure out the differences.
That said, there are times when you should open a new window/tab vs. opening a link in the current window/tab. For example, if the two pages (the current one, and the linked page) really need to be viewed simultaneously).
A: Yes, it should be the same. A new tab is more or less a new window, it just happens to be held in the same parent container as the original tab.
A: Are we discussing links that leave your site? Our company benefits provider has a web site for administering our flexible spending account. It opens 3 separate windows from the time I log in until I reach the page to submit a claim. I have never been a fan of opening a new tab/window when navigating within a site.
Thoughts?
A: Some browsers actually open a new window when asked to open a new window (gasp), even if they support tabs. I usually have two or three different browser windows open, each with a number of tabs, grouped by task. If some site unexpectedly forces a new window on me, I now have to re-integrate it as a tab into one of my open windows if I want to keep my task grouping. This is a major annoyance.
If I want to open something in a new tab, I can do so myself, thank you very much. Don't try to think for the user. The only time I can accept/expect a new window is when the original site needs to stay open, for example because I'm entering text into a form but need to open a "more info" window (see StackOverflow Answer Help button).
A: I'm going to wear out my scroll wheel button with the number of tabs I open. There's a great Firefox extension that allows you to close the current tab with a single keypress; I've assigned a key to it, so it's very easy to close tabs quickly.
A: I think it is still in the realm of personal preference, but depending on what I am browsing I find more and more that I open links in new tabs. I do this a lot when I am reading a page with a lot of related links but want to finish reading the main page first. That way, when I am done, I can go look at all those links rather than trying to go back using the back button.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Does the size of the constructor matter if you're using Inversion of Control? So I've got maybe 10 objects each of which has 1-3 dependencies (which I think is ok as far as loose coupling is concerned) but also some settings that can be used to define behavior (timeout, window size, etc).
Now before I started using an Inversion of Control container I would have created a factory and maybe even a simple ObjectSettings object for each of the objects that requires more than 1 setting to keep the size of the constructor to the recommended "less than 4" parameter size. I am now using an inversion of control container and I just don't see all that much of a point to it. Sure I might get a constructor with 7 parameters, but who cares? It's all being filled out by the IoC anyways.
Am I missing something here or is this basically correct?
A: The relationship between class complexity and the size of the IoC constructor had not occurred to me before reading this question, but my analysis below suggests that having many arguments in the IoC constructor is a code smell to be aware of when using IoC. Having a goal to stick to a short constructor argument list will help you keep the classes themselves simple. Following the single responsibility principle will guide you towards this goal.
I work on a system that currently has 122 classes that are instantiated using the Spring.NET framework. All relationships between these classes are set up in their constructors. Admittedly, the system has its fair share of less than perfect code where I have broken a few rules. (But, hey, our failures are opportunities to learn!)
The constructors of those classes have varying numbers of arguments, which I show in the table below.
Number of constructor arguments | Number of classes
------------------------------- | -----------------
                              0 | 57
                              1 | 19
                              2 | 25
                              3 |  9
                              4 |  3
                              5 |  1
                              6 |  3
                              7 |  2
                              8 |  2
The classes with zero arguments are either concrete strategy classes, or classes that respond to events by sending data to external systems.
Those with 5 or 6 arguments are all somewhat inelegant and could use some refactoring to simplify them.
The four classes with 7 or 8 arguments are excellent examples of God objects. They ought to be broken up, and each is already on my list of trouble-spots within the system.
The remaining classes (1 to 4 arguments) are (mostly) simply designed, easy to understand, and conform to the single responsibility principle.
A: The need for many dependencies (maybe over 8) could be indicative of a design flaw but in general I think there is no problem as long as the design is cohesive.
Also, consider using a service locator or static gateway for infrastructure concerns such as logging and authorization rather than cluttering up the constructor arguments.
EDIT: 8 probably is too many but I figured there'd be the odd case for it. After looking at Lee's post I agree, 1-4 is usually good.
A: G'day George,
First off, what are the dependencies between the objects?
Lots of "isa" relationships? Lots of "hasa" relationships?
Lots of fan-in? Or fan-out?
George's response: "has-a mostly, been trying to follow the composition over inheritance advice...why would it matter though?"
As it's mostly "hasa" you should be all right.
Better make sure that your construction (and destruction) of the components is done correctly though to prevent memory leaks.
And, if this is in C++, make sure you use virtual destructors?
A: This is a tough one, and it's why I favor a hybrid approach where appropriate properties are mutable, and only immutable properties and required dependencies without a useful default are part of the constructor. Some classes are constructed with the essentials, then tuned if necessary via setters.
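A minimal C# sketch of that hybrid style (the types and defaults below are hypothetical, not from any particular container):
using System;

public interface IHttpClient { } // stand-in for a real dependency

public class FeedFetcher
{
    private readonly IHttpClient client; // required dependency, no useful default

    public FeedFetcher(IHttpClient client)
    {
        if (client == null) throw new ArgumentNullException("client");
        this.client = client;
        TimeoutSeconds = 30; // sensible defaults, tunable via setters
        WindowSize = 4;
    }

    public int TimeoutSeconds { get; set; }
    public int WindowSize { get; set; }
}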
A: It all depends on what kind of container you have used to do the IoC and what approach the container takes, i.e. whether it uses annotations or a configuration file to populate the object being instantiated. Furthermore, if your constructor parameters are just plain primitive data types then it is not really a big deal; however, if you have non-primitive types then, in my opinion, you can use property-based DI rather than constructor-based DI.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Determine the ID of the JSF container form I need to determine the ID of a form field from within an action handler. The field is a part of a included facelets component and so the form will vary.
included.xhtml
<ui:component>
<h:inputText id="contained_field"/>
<h:commandButton actionListener="#{backingBean.update}" value="Submit"/>
</ui:component>
example_containing.xhtml
<h:form id="containing_form">
<ui:include src="/included.xhtml"/>
</h:form>
How may I determine the ID of the form in the update method at runtime? Or better yet, the ID of the input field directly.
A: Bind the button to your backing bean, then use getParent() until you find the nearest form.
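A minimal sketch of that walk (my own helper, not part of the JSF API itself):
import javax.faces.component.UIComponent;
import javax.faces.component.UIForm;

public final class FormFinder {
    public static UIForm findParentForm(UIComponent component) {
        UIComponent parent = component.getParent();
        while (parent != null && !(parent instanceof UIForm)) {
            parent = parent.getParent();
        }
        return (UIForm) parent; // null if no enclosing form exists
    }
}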
A: Programmatically I would use jsight's method. You can know the id of your elements (unless you let JSF create them; I don't know the scheme it uses for numbering the ids) by looking at the page. h:form is a naming container, so as long as you don't have it wrapped in another naming container the id will be containing_form:contained_field. The ':' is the naming separator by default in JSF, and the ids are created like this, roughly anyway: (parentNamingContainerId:)*componentId
A: Since update method is of type actionListener, you can access your UI component as follows
public void update(javax.faces.event.ActionEvent ac) {
javax.faces.component.UIComponent myCommand = ac.getComponent( );
String id = myCommand.getId(); // get the id of the firing component
..... your code .........
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Microsoft Access - grand total adding multiple fields together I can't quite figure this out. Microsoft Access 2000; on the report total section I have totals for three columns that are just numbers. These are =Sum([ThisColumn1]), 2, 3, etc. and those grand totals all work fine.
I want to have another column that says =Sum([ThisColumn1])+Sum([ThisColumn2])+Sum([ThisColumn3]) but I can't figure that one out. I just get a blank, so I am sure there is an error.
A: Give the 3 Grand Totals meaningful Control Names and then for the Grand Grand Total use:
=[GrandTotal1] + [GrandTotal2] + [GrandTotal3]
Your Grand Total formulas should be something like:
=Sum(Nz([ThisColumn1], 0))
A: NULL values propagate through an expression which means that if any of your three subtotals are blank, the final total will also be blank. For example:
NULL + 10 = NULL
Access has a built in function that you can use to convert NULL values to zero.
NZ( FieldName, ValueIfNull )
You can use NZ in reports, queries, forms and VBA.
So the example above could read like this:
=NZ([GrandTotal1],0) + NZ([GrandTotal2],0) + NZ([GrandTotal3],0)
http://office.microsoft.com/en-us/access/HA012288901033.aspx
A: Create a new query, and the sql should look like this:
SELECT SUM(Column1 + Column2 + Column3),
SUM(Column1),
SUM(Column2),
SUM(Column3)
FROM Your_Table;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Do you use Qt and why do you use it? Pros. and cons? how long do you use it? What about jambi?
A: Here are some of my Pros and Cons with Qt:
Pros:
Cross-platform
I know this one is always used, but after going back and forth between Windows and Linux with Qt, it's amazing how little I have to do to get up and running. I think this is helped by the fact I only use Vim with Qt Designer.
QMake
This is one of my favorite aspects of Qt. After doing work in wxWidgets, FLTK, etc., I get so tired of messing around with different build systems and I don't want to manually create my makefiles. I currently use CMake on anything other than Qt right now, but I think I'm slowly moving even Qt over to CMake. However it's just so easy to get going with QMake.
QTestLib
I looked at a couple other C++ unit testing frameworks and when I created my tests using QTestLib, it felt very similar to NUnit(C#) and within minutes I had several passing tests. I also noticed that it would be very easy to create my own continuous integration environment.
Closest to Java and .Net in productivity
The biggest thing I hear/read people say about C++ is, "I can be more productive with Java or .Net". From personal experience I can get a prototype of an application running in Qt using Vim and Qt Designer, before Eclipse or Visual Studio even load. I also get a very similar set of libraries in Qt that I have in .Net or Java and if it's not there I can leverage the existing C++ code out there.
Cons:
Price
This is the biggest factor I can think of right now. However, the cost is worth every cent; if only I knew how many cents I'd have to save up without making a call to a sales rep. I purchased a license back in the day when they had their small business discount and it was worth it then; I would've paid three times as much, and I think that's the current price.
Develop anywhere with commercial license
I would love to be able to develop on any platform, but build and sell for another platform. For example, develop on Linux, then build and deploy on Windows if you just have the Windows commercial license. From what I know, you can only develop and build a commercial application on the platform you have a license for.
Vendor lock-in
Well sort of, this is more of a personal con. I don't like being tied to a specific vendor because I get side tracked by the company direction and product direction. TrollTech was purchased by Nokia, is this good or bad I don't know, but a company that size can do evil things.
I think I'm done for now :). Oh, I haven't used Jambi but I'm really interested in doing a couple prototype projects to find out how easy it is to use a plugin developed in C++ with Jambi. Especially using Jambi as a web interface with C++ plugins.
To be honest I haven't read much on it, so it may be impossible or very easy.
A: I used Qt in a previous job. I'd only had the absolute briefest of contact with Qt several years prior to that, so I was pretty much a Qt newb.
When I started I was told to choose my language and environment, but cross-platform support was desirable. I tried Qt and Java, and even gave C# a go just for the heck of it. I gave myself two days to evaluate each option.
Maybe I was slightly biased with my history as a C++ developer, but after spending time on each option Qt was the only one that showed any hints at being useful without a long learning curve.
The documentation provided with Qt and the example applications made it very easy for an experienced developer but Qt beginner to get up and running very quickly. I had UI prototype/mockups of the end application done by the end of my trial period. With Java/Eclipse, Java/SunStudio and C#/VS.net I had trouble getting anything nontrivial happening in that time.
Signals/slots took some getting used to, but it wasn't too bad, and I wrote some simple wrappers that assert when connections fail, to stop silly typos from breaking the app.
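Something along these lines (a sketch from memory, not the original wrapper; Qt 4-style string-based connect):
#include <QObject>
#include <cassert>

inline void checkedConnect(const QObject *sender, const char *signal,
                           const QObject *receiver, const char *slot)
{
    const bool ok = QObject::connect(sender, signal, receiver, slot);
    assert(ok && "signal/slot connection failed");
    (void)ok; // avoid an unused-variable warning in release builds
}
// usage: checkedConnect(button, SIGNAL(clicked()), this, SLOT(onClicked()));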
The other thing I liked is that Qt had almost everything I needed. You name it - storage, networking, GUI, threading, containers - Qt has a class to deal with it. Which IMHO is important because mixing libraries can sometimes cause problems.
Having the source code to Qt was a big plus, one for just plain interest's sake, but also it allowed me to compile Qt using the compiler and settings of my choosing, including a debug version for use during development.
I also found Trolltech's support to be fairly good. I raised a couple of bugs on Qt, one of which was fixed and released whilst I was still working on the project (only a 6 month job).
The only negative I can recall was the difficulty in debugging Qt objects (using VS) - there is a Qt plugin for VS that can examine Qt objects but I was using the free version of VS and plugins don't work for it. But that wasn't Qt's fault.
I haven't used jambi so can't comment.
A: I have been using Qt for several years now for commercial development and have been very happy with it.
One of the nice things with Qt is that it provides a large set of libraries as well as the GUI stuff (eg XML parsing, threads, networking), all in a consistent style and all multi-platform. This means we rarely need to use other libraries, though we do use boost for some things.
Another very important factor for us was internationalization. In a previous, MFC based application we had to maintain 2 localized versions, for the two languages we support. In our Qt based app we just have the one version.
*
*The Qt translation system, using linguist is easy to use and makes supporting multiple languages easy (of course you still have to translate the strings which is a lot of work!)
*The GUI layout system where the widgets resize themselves according to a layout makes everything much easier. In different languages the length of the strings are different. With fixed size widgets (like MFC) each dialog needs to be adjusted for each language, otherwise parts of labels get cut off. With Qt they resize themselves. Of course, there are cases when it does not work exactly right but it still makes everything much easier.
*QString does everything in Unicode and handles the conversions from different codecs very easily.
One thing that has been very valuable is the access to the source, although e this is certainly not unique to Qt. On several occasions the ability to check the Qt source has explained some strange behaviour or given a clue how to achieve something.
We have found a few bugs in Qt, some of which have been fixed after reporting to Trolltech. In other cases they have suggested a work around. These have all been fairly obscure and not had a major impact on our development.
One of the main downsides to Qt would be the lack of 3rd party libraries for use in commercial applications. However, Qt is fairly complete so for us it has not been a big problem, though that will depend on which type of application you are developing.
I have not used Jambi either.
A: In C++ your main alternatives to Qt are MFC and wxWidgets.
QT / wxWidgets is largely a personal preference. I do think QT is a clean design with good documentation.
QT costs about one month of developer salary if you aren't using it for GPL.
A: I have been using Qt for over two years now.
Things I like on Qt are:
*
*Easy GUI programming (compared to
MFC), Qt Designer
*Nice container classes
*Nice graphics scene framework
*Excellent documentation with useful examples
*Translation support
*Good technical support
I can highly recommend the Qt Developer Days. If you have a chance to take part, then do it! Lots of nice and very interesting talks there.
A: I've used Qt on a couple of projects I did in c++ on several platforms over a period of seven years. I think it works pretty well and definitely was quicker for me to develop a decent GUI app on the Mac than plodding through a language I didn't know (Objective-C) at the time.
I think the signal/slot mechanism is a bit funky but isn't horrible. Once you're use it for a bit, it's not a show stopper. The connection stuff is easy to bungle up (or at least it was) and it's always good to check the return on those because your app will go merrily on its way and not tell you that it didn't work.
I've never used jambi.
A: Qt is a very nice library, but it has an expensive per-seat developer license, so it's not always useful for all projects.
A: Don't use it, however...
Pro:
QT has an optional 3-phase layout, whereas WX only allows for 2 currently (I believe they plan to do 3-phase, they just have not worked it in yet).
One of the bigger problems with using layouts is static text and wrapping. WX asks how big your min width/height is and portions out the screen; QT can also ask how wide you want to be, and then how high you need to be if you are X wide. This allows you to express the flow of a page much better.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
}
|
Q: Do callbacks stop operations in rails If a callback handler returns false, does it cause only the handlers for that callback to not be called, or does it cause the remaining callbacks in that handler and all subsequent callbacks to not be called as well?
A: If a before_* callback returns false, all the later callbacks and the associated action are cancelled. If an after_* callback returns false, all the later callbacks are cancelled. Callbacks are generally run in the order they are defined, with the exception of callbacks defined as methods on the model, which are called last.
cf http://api.rubyonrails.org/classes/ActiveRecord/Callbacks.html
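For illustration, a minimal sketch (hypothetical model) of the cancelling behaviour in the ActiveRecord versions of this era:
class Article < ActiveRecord::Base
  before_save :require_title

  private

  def require_title
    return false if title.blank? # halts later callbacks and cancels the save
    true
  end
end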
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Oracle Global Temporary Table / PHP interaction question I've never used the Global Temporary Tables however I have some questions how they will work in a php environment.
How is data shared? Assuming persistent connections to Oracle through PHP using oci8: is the data tied to a database id? Is it done per Apache httpd daemon? Or is each individual request unique?
When is the data for the session cleared from the global temporary table? I'm assuming (or rather hoping) that its done when the php script exits. Alternatively if not I'm assuming I'll need to remove it before script exit.
A: The global temporary table is simply the logical definition of a table structure (Name, column names, column data types etc). When a session references it by inserting data, a data segment is created in a temporary tablespace to hold only that session's data. Different sessions can therefore reference the same logical table definition because they each have their own dedicated data segment which can be purged easily on commit or when the session disconnects without affecting other sessions.
The purging of the data in the GTT can either be on commit or when the session ends, depending on the option with which it was created. In either case you do not have to attend to the purging yourself before disconnecting.
A useful alternative to the GTT is the subquery factoring clause ("WITH"), in which you can create multiple relations that can reference those previously declared in that SQL statement. These can be materialised as a data segment in a temporary tablespace either automatically by Oracle when they exceed a certain memory usage, or manually by using the MATERIALIZE optimizer hint.
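For illustration, the two purge options look like this (table and column names below are made up):
CREATE GLOBAL TEMPORARY TABLE work_items (id NUMBER, payload VARCHAR2(100))
ON COMMIT DELETE ROWS;    -- rows vanish at each COMMIT

CREATE GLOBAL TEMPORARY TABLE session_items (id NUMBER, payload VARCHAR2(100))
ON COMMIT PRESERVE ROWS;  -- rows persist until the session ends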
A: If I remember correctly, the data in global temporary tables is available only from one active session and only for this active session (I mean session = connection). So you can see only data that was inserted earlier in the active session. Therefore I believe this data is cleared after the session closes. No matter which language you are using.
At least I think so. :D
As is it written here:
http://www.oracle-base.com/articles/8i/TemporaryTables.php
The data in a global temporary table is private, such that data inserted by a session can only be accessed by that session.
Data in temporary tables is automatically delete at the end of the database session, even if it ends abnormally.
Sorry for my bad English.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I fix a Cross language installation problem in SQL Server 2008? I'm trying to do a SQL Server 2008 setup and I've been given a Cross Language Installation failure. More specifically:
Rule "cross language installation: failed.
the Setup language is different than the language of existing SQL Server features. To continue, use SQL Server Setup installation media of the same language as the installed SQL Server features.
I do not have SQL Server Express installed and I browsed through "Add or Remove Programs" and was unable to find anything that looked like it was a previous version of SQL.
Any tips?
A: Ensure that you have uninstalled all of your old SQL Server versions. Also, you must restart the installer if you have not done so since you began installation.
A: All I had to do was exit the installer and start the process again. For some reason it worked the second time around.
A: I restarted the setup after facing the same problem, and I realized that you should not close the Installation Center until the setup process is complete. If you leave it open, it will work.
A: I had the same problem today when installing SQL Server 2008 Express on a computer that has never had an instance of SQL Server installed.
I found that "Microsoft SQL Server 2005 Backward compatibility" was installed. I removed this via Add/Remove Programs and was able to successfully install SQL Server 2008 Express afterwards.
A: Change the Current Windows Language interface for the needed language you want to install.
That will make it possible for the Installer to launch the Localized version.
A: If you've previously installed SQL on the machine (or apparently some RedGate tools) have you checked for any SQL detritus in the registry?
If not then the MS forums have details of some reg keys to look out for, and some of the links are worth following for advice on what to delete from the registry.
A: On my installation of Sql Server 2008 Express, this was caused by having Sql Server 2005 Express Tools installed while trying to install 2008. Uninstalling 2005 Tools fixed the problem. I was able to keep Sql Server 2005 Express, including Sql Server 2005 Backward compatability; only had to nuke tools.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How do I avoid a memory leak with LINQ-To-SQL? I have been having some issues with LINQ-To-SQL around memory usage. I'm using it in a Windows Service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes - I know I could do this with a stored procedure but there are reasons why that would be a less than ideal solution.
Anyway, what I see basically is memory is not being released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things like only pull back 100 records at time, or create several contexts and have them all do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call SYstem.GC.Collect() - it still doesn't release the memory.
Now I've read some about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point to guarantee the memory is free.
Anyone know what steps guarantee that the memory is released?
A: As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing a bunch of DataContext objects. This is how linq2sql is designed. The DataContext is meant to have short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track, by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
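To make that concrete, here is a minimal sketch of the batch-per-context pattern (MyDataContext, Items and Dept are illustrative names, not from the question):
int skip = 0;
const int batchSize = 100;
while (true)
{
    // a fresh, short-lived context per batch, so its tracking
    // cache becomes collectible as soon as the using block ends
    using (var context = new MyDataContext())
    {
        var batch = context.Items
                           .Where(x => x.Dept == "Dept")
                           .Skip(skip)
                           .Take(batchSize)
                           .ToList();
        if (batch.Count == 0)
            break;
        foreach (var item in batch)
        {
            // make changes to item
        }
        context.SubmitChanges();
        skip += batch.Count;
    }
}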
A: Thanks guys - I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was getting the memory usage was something like this:
using(DataContext context = new DataContext())
{
    int skipAmount = 0;
    while(true)
    {
        var rows = context.tables.Where(x => x.Dept == "Dept").Skip(skipAmount).Take(100).ToList();
        //break out of loop when out of rows
        if(rows.Count == 0)
            break;
        foreach(table t in rows)
        {
            //make changes to t
        }
        context.SubmitChanges();
        skipAmount += rows.Count;
        rows.Clear();
        rows = null;
        //at this point, even though the rows have been cleared and changes have been
        //submitted, the context is still holding onto a reference somewhere to the
        //removed rows. So unless you create a new context, memory usage keeps on growing
    }
}
A: A DataContext tracks all the objects it ever fetched. It won't release this until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using(DataContext myDC = new DataContext())
{
// Do stuff
} //DataContext is disposed
A: If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. And as far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
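For reference, a minimal sketch of that reflection call (this assumes your version of System.Data.Linq still carries the internal ClearCache method; verify before relying on it):
using System.Data.Linq;
using System.Reflection;

public static class DataContextExtensions
{
    public static void ClearCache(this DataContext context)
    {
        // ClearCache is internal, so look it up as a non-public instance method
        MethodInfo clearCache = typeof(DataContext).GetMethod(
            "ClearCache", BindingFlags.Instance | BindingFlags.NonPublic);
        if (clearCache != null)
            clearCache.Invoke(context, null);
    }
}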
A: I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows, as follows:
using (var db = new DataContext())
{
db.ObjectTrackingEnabled = false;
var documents = from d in db.GetTable<T>()
select d;
foreach (var doc in documents)
{
...
}
}
If, for example, you use ToArray() or ToList() on the query, it has no effect.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Paging in Pervasive SQL How to do paging in Pervasive SQL (version 9.1)? I need to do something like:
//MySQL
SELECT foo FROM table LIMIT 10, 10
But I can't find a way to define offset.
A: Tested query in PSQL:
select top n *
from tablename
where id not in(
select top k id
from tablename
)
for all n = the number of records you need to fetch at a time,
and k = multiples of n (e.g. n=5; k=0,5,10,15,...)
A: Our paging required that we be able to pass in the current page number and page size (along with some additional filter parameters) as variables. Since a select top @page_size doesn't work in MS SQL, we came up with creating a temporary or variable table to assign each row's primary key an identity that can later be filtered on for the desired page number and size.
** Note that if you have a GUID primary key or a compound key, you just have to change the object id on the temporary table to a uniqueidentifier or add the additional key columns to the table.
The down side to this is that it still has to insert all of the results into the temporary table, but at least it is only the keys. This works in MS SQL, but should be able to work for any DB with minimal tweaks.
declare @page_number int, @page_size int
-- add any additional search parameters here

--create the temporary table with the identity column and the id
--of the record that you'll be selecting. This is an in-memory
--table, so if the number of rows you'll be inserting is greater
--than 10,000, then you should use a temporary table in tempdb
--instead. To do this, use
--CREATE TABLE #temp_table (row_num int IDENTITY(1,1), objectid int)
--and change all the references to @temp_table to #temp_table
DECLARE @temp_table TABLE (row_num int IDENTITY(1,1), objectid int)

--insert into the temporary table with the ids of the records
--we want to return. It's critical to make sure the order by
--reflects the order of the records to return so that the row_num
--values are set in the correct order and we are selecting the
--correct records based on the page
INSERT INTO @temp_table (objectid)
/* Example: Select that inserts records into the temporary table
SELECT personid
FROM person WITH (NOLOCK)
INNER JOIN degree WITH (NOLOCK) ON degree.personid = person.personid
WHERE person.lastname = @last_name
ORDER BY person.lastname asc, person.firstname asc
*/

--get the total number of rows that we matched
DECLARE @total_rows int
SET @total_rows = @@ROWCOUNT

--calculate the total number of pages based on the number of
--rows that matched and the page size passed in as a parameter
DECLARE @total_pages int
--add @page_size - 1 to the total number of rows to
--calculate the total number of pages. This is because sql
--always rounds down for division of integers
SET @total_pages = (@total_rows + @page_size - 1) / @page_size

--return the result set we are interested in by joining
--back to the @temp_table and filtering by row_num
/* Example: Selecting the data to return. If the insert was done
properly, then you should always be joining the table that contains
the rows to return to the objectid column on the @temp_table
SELECT person.*
FROM person WITH (NOLOCK)
INNER JOIN @temp_table tt ON person.personid = tt.objectid
*/
--return only the rows in the page that we are interested in
--and order by the row_num column of the @temp_table to make sure
--we are selecting the correct records
WHERE tt.row_num < (@page_size * @page_number) + 1
  AND tt.row_num > (@page_size * @page_number) - @page_size
ORDER BY tt.row_num
A: I face this problem in MS Sql too... no Limit or rownumber functions. What I do is insert the keys for my final query result (or sometimes the entire list of fields) into a temp table with an identity column... then I delete from the temp table everything outside the range I want... then use a join against the keys and the original table, to bring back the items I want. This works if you have a nice unique key - if you don't, well... that's a design problem in itself.
Alternative with slightly better performance is to skip the deleting step and just use the row numbers in your final join. Another performance improvement is to use the TOP operator so that at the very least, you don't have to grab the stuff past the end of what you want.
So... in pseudo-code... to grab items 80-89...
create table #keys (rownum int identity(1,1), key varchar(10))
insert #keys (key)
select TOP 89 key from myTable ORDER BY whatever
delete #keys where rownum < 80
select <columns> from #keys join myTable on #keys.key = myTable.key
A: I ended up doing the paging in code. I just skip the first records in a loop.
I thought I had made up an easy way of doing the paging, but it seems that Pervasive SQL doesn't allow order clauses in subqueries. But this should work on other DBs (I tested it on Firebird):
select *
from (select top [rows] * from
(select top [rows * pagenumber] * from mytable order by id)
order by id desc)
order by id
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I 'validate' on destroy in rails On destruction of a restful resource, I want to guarantee a few things before I allow a destroy operation to continue. Basically, I want the ability to stop the destroy operation if I note that doing so would place the database in an invalid state. There are no validation callbacks on a destroy operation, so how does one "validate" whether a destroy operation should be accepted?
A: You can raise an exception which you then catch. Rails wraps deletes in a transaction, which helps matters.
For example:
class Booking < ActiveRecord::Base
has_many :booking_payments
....
def destroy
raise "Cannot delete booking with payments" unless booking_payments.count == 0
# ... ok, go ahead and destroy
super
end
end
Alternatively you can use the before_destroy callback. This callback is normally used to destroy dependent records, but you can throw an exception or add an error instead.
def before_destroy
return true if booking_payments.count == 0
errors.add :base, "Cannot delete booking with payments"
# or errors.add_to_base in Rails 2
false
# Rails 5
throw(:abort)
end
myBooking.destroy will now return false, and myBooking.errors will be populated on return.
A: The ActiveRecord associations has_many and has_one allow for a dependent option that will make sure related table rows are deleted on delete, but this is usually to keep your database clean rather than preventing it from being invalid.
A: You can wrap the destroy action in an "if" statement in the controller:
def destroy # in controller context
if (model.valid_destroy?)
model.destroy # if in model context, use `super`
end
end
Where valid_destroy? is a method on your model class that returns true if the conditions for destroying a record are met.
Having a method like this will also let you prevent the display of the delete option to the user - which will improve the user experience as the user won't be able to perform an illegal operation.
A: just a note:
For rails 3
class Booking < ActiveRecord::Base
  before_destroy :booking_with_payments?

  private

  def booking_with_payments?
    errors.add(:base, "Cannot delete booking with payments") unless booking_payments.count == 0
    errors.blank? # return false to not destroy the element; otherwise, it will delete
  end
end
A: I ended up using code from here to create a can_destroy override on activerecord:
https://gist.github.com/andhapp/1761098
class ActiveRecord::Base
def can_destroy?
self.class.reflect_on_all_associations.all? do |assoc|
assoc.options[:dependent] != :restrict || (assoc.macro == :has_one && self.send(assoc.name).nil?) || (assoc.macro == :has_many && self.send(assoc.name).empty?)
end
end
end
This has the added benefit of making it trivial to hide/show a delete button on the ui
A: It is what I did with Rails 5:
before_destroy do
cannot_delete_with_qrcodes
throw(:abort) if errors.present?
end
def cannot_delete_with_qrcodes
errors.add(:base, 'Cannot delete shop with qrcodes') if qrcodes.any?
end
A: You can also use the before_destroy callback to raise an exception.
A: I have these classes or models
class Enterprise < AR::Base
  has_many :products
  before_destroy :enterprise_with_products?

  private

  def enterprise_with_products?
    self.products.empty?
  end
end

class Product < AR::Base
  belongs_to :enterprise
end
Now when you delete an enterprise, this process validates whether there are products associated with the enterprise.
Note: You have to write this at the top of the class in order to validate it first.
A: State of affairs as of Rails 6:
This works:
before_destroy :ensure_something, prepend: true do
throw(:abort) if errors.present?
end
private
def ensure_something
errors.add(:field, "This isn't a good idea..") if something_bad
end
validate :validate_test, on: :destroy doesn't work: https://github.com/rails/rails/issues/32376
Since Rails 5 throw(:abort) is required to cancel execution: https://makandracards.com/makandra/20301-cancelling-the-activerecord-callback-chain
prepend: true is required so that dependent: :destroy doesn't run before the validations are executed: https://github.com/rails/rails/issues/3458
You can fish this together from other answers and comments, but I found none of them to be complete.
As a sidenote, many used a has_many relation as an example where they want to make sure not to delete any records if it would create orphaned records. This can be solved much more easily:
has_many :entities, dependent: :restrict_with_error
A: Use ActiveRecord context validation in Rails 5.
class ApplicationRecord < ActiveRecord::Base
before_destroy do
throw :abort if invalid?(:destroy)
end
end
class Ticket < ApplicationRecord
validate :validate_expires_on, on: :destroy
def validate_expires_on
errors.add :expires_on if expires_on > Time.now
end
end
A: I was hoping this would be supported so I opened a rails issue to get it added:
https://github.com/rails/rails/issues/32376
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
}
|
Q: Naming a "core" Assembly I know that this is somewhat subjective, but I wonder if there is a generally accepted standard for naming assemblies which contain some "core" functions.
Let's say you've got a larger project, with assemblies like
*
*Company.Product.WebControls.dll
*Company.Product.Net.dll
*Company.Product.UserPages.dll
and you have a bunch of "Core" classes, like the global error handler, the global logging functionality, etc.
How would such an assembly generally named? Here are some things I had in mind:
*
*Company.Product.dll
*Company.Product.Core.dll
*Company.Product.Global.dll
*Company.Product.Administration.dll
Now, while "just pick one and go on" will not cause Armageddon, I'd still like to know if there is an "accepted" way to name those assemblies.
A: I've used .Core, .Framework, and .Common.
A: We use this model:
*
*Company.Core.dll
*Company.WinControls.dll
*Company.WebControls.dll
*Company.Product.Core.dll
*Company.Product.WinControls.dll
*Company.Product.WebControls.dll
etc.
A: All these "root", "core", "common" and so on, are probably not the best naming-conventions.
Common stuff should lie in the root namespace; in .NET, string, int and other things that are "core" or "common" lie in the root System namespace.
Don't use namespaces to more easily collapse your folders in Visual Studio; structure them after what they contain and what they're used for.
System.Security contains common security things that, for example, System.Xml doesn't need to know about, unless you want that functionality explicitly.
System.Security.Cryptography is a sub-namespace. Cryptography is security, but security is not explicitly cryptography.
In this way System.Security.Cryptography has full insight into its parent namespace and can implicitly use all classes inside of its parent.
I would say System.Core.dll was a slip-up on Microsoft's side. They must have run out of ideas or DLL names.
Update: MSDN has a somewhat updated article that tries to explain Microsoft's thinking on the subject.
A: I always do .Core.dll.
A: The one I use most common and seem to love because I don't see other people use it is Root
I'll generally do
CompanyName.Root
or
SomethingMeaningfulToMe.Root
A: With .Net this is relatively easy to change, so I'd go with convenience.
Fewer, larger assemblies compile quicker than many small ones, so I'd start with your 'core' stuff as a namespace inside Company.Product.dll, and split it out later if you need to.
A: I typically like to have names which describe what is inside each assembly.
You see, if you name something as .Core, then on a large team, it can grow very quickly as people would consider putting very common thing in that assembly.
So, I think that there shouldn't really be one core assembly.
A: This is one of those "it depends" questions. If it's your code and you work on a small team, I would use any naming convention that makes sense to you. I have seen, however, in large code bases that namespaces give downstream developers a way to discover functionality without the need for documentation or training. We use the model below; standardizing the namespaces made it easier for developers to move from team to team.
BussinessName.Division.Layer
A: I also used .Core, especially for the .dll.
It's all a matter of taste, IMO; if the root namespace is a company name, I feel .Core/Framework/Common is more descriptive than just the company name.
However, if you're working on something like an open-source project where the name of the dll/namespace is also the name of the project, .Core/... might be a little redundant.
There are many examples in the .NET framework and other Microsoft libraries where both conventions are used. There is a System.dll and a System.Core.dll, for example :)
A: Core is a very meaningful and easy-to-understand naming convention, and it does not conflict with any .NET framework namespace, so it is very good practice.
I strongly recommend encapsulating in each assembly the service it is supposed to provide to the application that uses it, and not mixing different function domains in one assembly.
We are using
CSG.Core
CSG.Data
CSG.Services
...
In the Core we are including classes that are likely to be used in all our products: Logging, Collection Extensions, Generics, Configuration extensions, Security, Validation, etc.
Although compiling many assemblies is slower than compiling fewer, larger ones, it optimizes your deployment because you deploy only the classes your system actually uses, rather than many unused classes included just to save compile time.
When you name your namespaces, I strongly recommend avoiding repeating the same word at different levels of the namespace. For example, avoid the following:
YourCompany.Core
YourCompany.YourProduct.Core
Either put Core under YourCompany or under YourProduct, but not in both. It can be very confusing if, for example, your usings look like:
using YourCompany;
using YourCompany.YourProduct;
When you type Core.SomeClass it will be very confusing where this class came from, and in case you have two classes with the same name it will cause a conflict.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Bypass .Net v3.5 verification when installing SQL Server 2008 Express For some reason, when I try to install SQL Server 2008 Express, I get an error saying that I need to have .Net Framework 3.5 installed first, but the thing is: I already have! So could anybody tell me if I can bypass this verification by updating a registry key or something? I have no problem with Visual Studio or anything else... just when I try to install SQL Express.
A: I ran into this once before. Make sure that it is installed. Reinstall if necessary. I believe what I did was install SP1. SQL Server 2008 has a tendency to try to install the Compact Framework 3.5 and, depending on the build or refresh of SQL, Compact Framework 3.5 SP1. HTH!
A: Installing Windows Installer 4.5 worked for me. Once installed, the message goes away.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Possible pitfalls of using this (extension method based) shorthand C#6 Update
In C#6 ?. is now a language feature:
// C#1-5
propertyValue1 = myObject != null ? myObject.StringProperty : null;
// C#6
propertyValue1 = myObject?.StringProperty;
The question below still applies to older versions, but if developing a new application using the new ?. operator is far better practice.
Original Question:
I regularly want to access properties on possibly null objects:
string propertyValue1 = null;
if( myObject1 != null )
propertyValue1 = myObject1.StringProperty;
int propertyValue2 = 0;
if( myObject2 != null )
propertyValue2 = myObject2.IntProperty;
And so on...
I use this so often that I have a snippet for it.
You can shorten this to some extent with an inline if:
propertyValue1 = myObject != null ? myObject.StringProperty : null;
However this is a little clunky, especially if setting lots of properties or if more than one level can be null, for instance:
propertyValue1 = myObject != null ?
    (myObject.ObjectProp != null ? myObject.ObjectProp.StringProperty : null) : null;
What I really want is ?? style syntax, which works great for directly null types:
int? i = SomeFunctionWhichMightReturnNull();
propertyValue2 = i ?? 0;
So I came up with the following:
public static TResult IfNotNull<T, TResult>( this T input, Func<T, TResult> action, TResult valueIfNull )
where T : class
{
if ( input != null ) return action( input );
else return valueIfNull;
}
//lets us have a null default if the type is nullable
public static TResult IfNotNull<T, TResult>( this T input, Func<T, TResult> action )
where T : class
where TResult : class
{ return input.IfNotNull( action, null ); }
This lets me use this syntax:
propertyValue1 = myObject1.IfNotNull( x => x.StringProperty );
propertyValue2 = myObject2.IfNotNull( x => x.IntProperty, 0);
//or one with multiple levels
propertyValue1 = myObject.IfNotNull(
o => o.ObjectProp.IfNotNull( p => p.StringProperty ) );
This simplifies these calls, but I'm not sure about checking this sort of extension method in - it does make the code a little easier to read, but at the cost of extending object. This would appear on everything, although I could put it in a specifically referenced namespace.
This example is a rather simple one; a slightly more complex one would be comparing two nullable object properties:
if( ( obj1 == null && obj2 == null ) ||
( obj1 != null && obj2 != null && obj1.Property == obj2.Property ) )
...
//becomes
if( obj1.NullCompare( obj2, (x,y) => x.Property == y.Property ) )
...
What are the pitfalls of using extensions in this way? Are other coders likely to be confused? Is this just abuse of extensions?
I guess what I really want here is a compiler/language extension:
propertyValue1 = myObject != null ? myObject.StringProperty : null;
//becomes
propertyValue1 = myObject?StringProperty;
This would make the complex case far easier:
propertyValue1 = myObject != null ?
    (myObject.ObjectProp != null ? myObject.ObjectProp.StringProperty : null) : null;
//becomes
propertyValue1 = myObject?ObjectProp?StringProperty;
This would only work for value types, but you could return nullable equivalents:
int? propertyValue2 = myObject?ObjectProp?IntProperty;
//or
int propertyValue3 = myObject?ObjectProp?IntProperty ?? 0;
A: How is
propertyValue1 = myObject.IfNotNull(o => o.ObjectProp.IfNotNull( p => p.StringProperty ) );
easier to read and write than
if(myObject != null && myObject.ObjectProp != null)
propertyValue1 = myObject.ObjectProp.StringProperty;
Jafar Husain posted a sample of using Expression Trees to check for null in a chain, Runtime macros in C# 3.
This obviously has performance implications though. Now if only we had a way to do this at compile time.
A: I just have to say that I love this hack!
I hadn't realized that extension methods don't imply a null check, but it totally makes sense. As James pointed out, the extension method call itself is not any more expensive than a normal method; however, if you are doing a ton of this, then it does make sense to follow the Null Object Pattern that ljorquera suggested, or to use a null object and ?? together.
class Class1
{
public static readonly Class1 Empty = new Class1();
.
.
x = (obj1 ?? Class1.Empty).X;
A: We independently came up with the exact same extension method name and implementation: Null-propagating extension method. So we don't think it's confusing or an abuse of extension methods.
I would write your "multiple levels" example with chaining as follows:
propertyValue1 = myObject.IfNotNull(o => o.ObjectProp).IfNotNull(p => p.StringProperty);
There's a now-closed bug on Microsoft Connect that proposed "?." as a new C# operator that would perform this null propagation. Mads Torgersen (from the C# language team) briefly explained why they won't implement it.
A: Here's another solution, for chained members, including extension methods:
public static U PropagateNulls<T,U> ( this T obj
,Expression<Func<T,U>> expr)
{ if (obj==null) return default(U);
//uses a stack to reverse Member1(Member2(obj)) to obj.Member1.Member2
var members = new Stack<MemberInfo>();
bool searchingForMembers = true;
Expression currentExpression = expr.Body;
while (searchingForMembers) switch (currentExpression.NodeType)
{ case ExpressionType.Parameter: searchingForMembers = false; break;
case ExpressionType.MemberAccess:
{ var ma= (MemberExpression) currentExpression;
members.Push(ma.Member);
currentExpression = ma.Expression;
} break;
case ExpressionType.Call:
{ var mc = (MethodCallExpression) currentExpression;
members.Push(mc.Method);
//only supports 1-arg static methods and 0-arg instance methods
if ( (mc.Method.IsStatic && mc.Arguments.Count == 1)
|| (mc.Arguments.Count == 0))
{ currentExpression = mc.Method.IsStatic ? mc.Arguments[0]
: mc.Object;
break;
}
throw new NotSupportedException(mc.Method+" is not supported");
}
default: throw new NotSupportedException
(currentExpression.GetType()+" not supported");
}
object currValue = obj;
while(members.Count > 0)
{ var m = members.Pop();
switch(m.MemberType)
{ case MemberTypes.Field:
currValue = ((FieldInfo) m).GetValue(currValue);
break;
case MemberTypes.Method:
var method = (MethodBase) m;
currValue = method.IsStatic
? method.Invoke(null,new[]{currValue})
: method.Invoke(currValue,null);
break;
case MemberTypes.Property:
// named 'getter' so it doesn't collide with the 'method' declared in the Method case
var getter = ((PropertyInfo) m).GetGetMethod(true);
currValue = getter.Invoke(currValue,null);
break;
}
if (currValue==null) return default(U);
}
return (U) currValue;
}
Then you can do this where any can be null, or none:
foo.PropagateNulls(x => x.ExtensionMethod().Property.Field.Method());
A: If you find yourself having to check very often if a reference to an object is null, maybe you should be using the Null Object Pattern. In this pattern, instead of using null to deal with the case where you don't have an object, you implement a new class with the same interface but with methods and properties that return adequate default values.
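A minimal sketch of the pattern (Customer and Name are illustrative names, not from the question):
public class Customer
{
    // a shared, safe stand-in that callers receive instead of null
    public static readonly Customer Empty = new Customer();

    public string Name = string.Empty;
}

// callers then never need a null check:
// Customer c = FindCustomer(id) ?? Customer.Empty;
// string name = c.Name; // never throws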
A:
it does make the code a little easier to read, but at the cost of extending object. This would appear on everything,
Note that you are not actually extending anything (except theoretically).
propertyValue2 = myObject2.IfNotNull( x => x.IntProperty, 0);
will generate IL code exactly as if it were written:
ExtensionClass::IfNotNull(myObject2, x => x.IntProperty, 0);
There is no "overhead" added to the objects to support this.
A: To a reader not in the know, it looks like you're calling a method on a null reference. If you want this, I'd suggest putting it in a utility class rather than using an extension method:
propertyValue1 = Util.IfNotNull(myObject1, x => x.StringProperty );
propertyValue2 = Util.IfNotNull(myObject2, x => x.IntProperty, 0);
The "Util." grates, but is IMO the lesser syntactic evil.
Also, if you're developing this as part of a team, then gently ask what others think and do. Consistency across a codebase for frequently used patterns is important.
A: While extension methods generally cause misunderstandings when called from null instances, I think the intent is pretty straightforward in this case.
string x = null;
int len = x.IfNotNull(y => y.Length, 0);
I would want to be sure this static method works on Value Types that can be null, such as int?
Edit: compiler says that neither of these are valid:
public void Test()
{
int? x = null;
int a = x.IfNotNull(z => z.Value + 1, 3);
int b = x.IfNotNull(z => z.Value + 1);
}
Other than that, go for it.
A: Not an answer to the exact question asked, but there is a null-conditional operator in C# 6.0. I'd argue it would be a poor choice to use the option in the OP since C# 6.0 :)
So your expression is simpler,
string propertyValue = myObject?.StringProperty;
In case myObject is null, it returns null. In case the property is a value type, you have to use the equivalent nullable type, like,
int? propertyValue = myObject?.IntProperty;
Or otherwise you can coalesce with the null coalescing operator to give a default value in case of null. For example,
int propertyValue = myObject?.IntProperty ?? 0;
?. is not the only syntax available. For indexed properties you can use ?[..]. For example,
string propertyValue = myObject?[index]; //returns null in case myObject is null
One surprising behaviour of the ?. operator is that it can intelligently bypass subsequent .Member calls if the object happens to be null. One such example is given in the link:
var result = value?.Substring(0, Math.Min(value.Length, length)).PadRight(length);
In this case result is null if value is null and value.Length expression wouldn't result in NullReferenceException.
A: Personally, even after all your explanation, I can't remember how the heck this works:
if( obj1.NullCompare( obj2, (x,y) => x.Property == y.Property ) )
This could be because I have no C# experience; however, I could read and understand everything else in your code. I prefer to keep code language agnostic (esp. for trivial things) so that tomorrow, another developer could change it to a whole new language without too much information about the existing language.
A: Here is another solution using myObject.NullSafe(x=>x.SomeProperty.NullSafe(x=>x.SomeMethod)), explained at
http://www.epitka.blogspot.com/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: How do you avoid waiting for requirements when using iterative agile development methods like SCRUM? We attempt to do agile development at my current job and we succeed for the most part. The main problem seems to be that the developers on the project are always waiting for requirements at the beginning of the sprint and rushing to get things done by the end. The business analysts who are delivering the requirements are always working non-stop to get the requirements done.
EDIT: Additional Information:
We are customizing a COTS application for our internal use. Our 'user stories' just consist of what part of the application we will be customizing in the specific sprint and also what systems we will integrate with internally. The integration with different systems normally works pretty well because we can start working on that right away. The 'customize x screen' stories are the main problem areas because the developers can't do anything from that. We have to wait until we get the requirements from the BAs before we can really do anything.
EDIT: More insight/confusion perhaps:
I wonder if part of the problem is that the screens being customized are already there, as this is a COTS product that is being heavily customized. People suggest that the user stories should be along the lines of 'make a screen that does X'. That's already done. Maybe there isn't a good way to do user stories for these requirements... maybe this needs to be a whole new question.
A: Don't wait. Build a prototype based on whatever minimal requirements you do have and get feedback ASAP from the product owner. More often than not they don't know what they want anyway - if you can show them something tangible as a starting point you're more likely to get useful feedback. Also, once you have a better idea of the real requirements you will probably have already gained a lot of insight from developing your prototype.
A: If I understand your situation correctly, the BAs are the ones falling behind. There are two things you could try.
*
*Try either small sprints or smaller requirement chunks. Either way the work for the BAs should be more concise and managable.
*Take an iteration to rework or bug squash. That should give the BAs some time to get ahead of the curve.
If, however, the problem is that the BAs need to see the previous requirements in the "wild" before making more requirements you have much bigger issues. :)
A: At a previous position we managed this by asking our business customers to be a week ahead or so. Sure, this breaks from some of the strict interpretations of agile, but it made things so much easier. We would have both testing and the business working a week or two off from development, so while developers were working on iteration 2, testing was working on what came out of IT1 and the business was on IT3. Priority was always given to active development, so sometimes it broke down if a story was particularly flexible (i.e. the business had to spend lots of time revising things mid-iteration), but overall it worked well.
Update to respond to the questioneers Update
It seems to me those don't really stand on their own as stories then, and maybe the BA team needs to reevaluate how they are writing stories. I mean, you can't really "tell a tale" with "customize X screen". In theory a story should be something like "When the user goes to screen X they should be able to modify (and save) the floozit"
A: Sounds like the BAs may not be handing you your user stories for the sprint in a timely manner.
I take it that there are no sprint planning sessions, from what you say.
Given that one of the big tenets of Scrum is that the development team takes responsibility for what they will work on per sprint, it sounds like this ain't too agile to me! (-:
Apart from having short sprints that is.
A: Well, a couple of things might help:
- In the SCRUM process, there is the concept of the Product Owner, which is a Pig role; this represents the customer. So you can invite the PLM or the client's main contact to your SCRUM meetings. This will give your customers some buy-in into your process and will get them to work "with" you on your goals.
- Weekly builds to the client might help. The basic idea of the weekly drops is to show the customer "progress". So if for a few weeks there is no progress, this should raise the question "why?", and then you should be able to explain that it is due to the lack of requirements finalization.
Hope this helps
A: the "user story" is a placeholder for a future conversation, so get in front of the customer and ask them; if that's the BA's job, light a fire ;-)
A: Your user stories are incomplete. 'Customize X screen' is a task, it doesn't describe any requirements or completion criteria. The user story should be something like 'Allow Nancy to see the related purchase orders for an item in inventory'. Then break that down into tasks during your sprint that you can work on.
Once the BAs have developed a workable user story then add it to your product backlog, prioritize it, and plan your sprints for the top backlog items. The BAs should be developing user stories and adding to your backlog independent of your sprints, and thus not blocking you. During a sprint the tasks are completed and the user story does not change. After releasing the customer provides feedback which goes into the product backlog as more user stories.
A: I see a few ways to handle this:
Option 1, Under SCRUM, you should have a Product Owner who is managing your product backlog, which is supposed to contain requests for features of the software. If the feature consists of something vague like 'Customize screen X' and you decide to add that to your sprint, then the sprint tasks should be concrete, decomposed tasks, and I would say one of those tasks has to be 'Define requirements for screen X'.
During the daily SCRUM, when you're asking your three questions of each team member, the developer who has that screen mod task will say "I'm blocked waiting for requirements from the BA.", and your scrum master does what they can to get that moving along.
Option 2, in my opinion, is that items do not go into your product backlog until they're defined well enough to do at least some productive work on. We all know requirements change, but the point is that you're supposed to have enough to start with.
A: Easy.
Allow yourself to think outside of Scrum's strict rules, and get back to your lean roots:
http://availagility.wordpress.com/2008/04/09/a-kanban-system-for-software-development/
http://leansoftwareengineering.com/2007/10/31/spreadsheet-example-for-a-small-kanban-team/
http://www.infoq.com/articles/hiranabe-lean-agile-kanban
Trust me, once you get that flow going, you'll never look back.
A: As it is said above, usually at the beginning of each sprint you should prioritize the existing backlog and pick some stories for the current sprint. If there are not enough user stories for the developers, you should shift developers to another project and give the product owner some time to create a decent backlog (= large enough to feed the team) for the project.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Visual Studio window manager Is there a window manager for Visual Studio 2008 like this one? I really liked it; it's all I used in Visual Studio 2005, and I saw somewhere it is supposed to work in Visual Studio 2008, but it doesn't. I have tried it on many installations of Visual Studio 2008, and it doesn't remember any settings. I really liked being able to easily change window layout quickly. Right now I just manually import and export settings, but it's not an instant process.
What do I have to do to make it work?
A: You can check out my blog post, Save and Change Tool Layout in Visual Studio, which provides the ability to list and switch window layouts.
A: You should contact RW on CodePlex. He claims to have it working in Visual Studio 2008. Check out this item.
A: The following macros may do the trick for you. I tried the WindowManager mentioned above, recompiling it to work for Visual Studio 2008, but I still found it a little flaky. Also, I don't use the "Auto Apply Layouts" functionality in WindowManager, so these macros work great for me for switching from dual-monitor working to laptop-only working.
Sub DualMonitorConfiguration_Save()
SaveWindowConfiguration("Dual Monitor Layout")
End Sub
Sub DualMonitorConfiguration_Load()
LoadWindowConfiguration("Dual Monitor Layout")
End Sub
Sub LaptopOnlyConfiguration_Save()
SaveWindowConfiguration("Laptop Only Layout")
End Sub
Sub LaptopOnlyConfiguration_Load()
LoadWindowConfiguration("Laptop Only Layout")
End Sub
Private Sub SaveWindowConfiguration(ByVal configName As String)
Dim selectedConfig As WindowConfiguration
selectedConfig = FindWindowConfiguration(configName)
If selectedConfig Is Nothing Then
selectedConfig = DTE.WindowConfigurations.Add(configName)
End If
selectedConfig.Update()
DTE.StatusBar.Text = "Window configuration saved: " & configName
End Sub
Sub LoadWindowConfiguration(ByVal configName As String)
Dim selectedConfig As WindowConfiguration
selectedConfig = FindWindowConfiguration(configName)
If selectedConfig Is Nothing Then
MsgBox("Window Configuration """ & configName & """ not found.")
Else
selectedConfig.Apply()
DTE.StatusBar.Text = "Window configuration applied: " & configName
End If
End Sub
Private Function FindWindowConfiguration(ByVal name As String) As WindowConfiguration
Dim selectedLayout As WindowConfiguration
For Each config As WindowConfiguration In DTE.WindowConfigurations
If config.Name = name Then
Return config
End If
Next
Return Nothing
End Function
A: Your question was answered on the very same page where you asked it :-)
Just for the record:
To get this to work for 2008, add a new HostApplication element to the WindowManager2005.AddIn file. The file is typically found in "%APPDATA%\Microsoft\MSEnvShared\Addins". Change the version in the new element to be 9.0 (VS 2008) and it should work in both 2008 and 2005.
<HostApplication>
<Name>Microsoft Visual Studio</Name>
<Version>9.0</Version>
</HostApplication>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How does my shared host's nameserver resolve http://servername.com/~username/ to my top level domain? I recently moved my website to a shared hosting solution at asmallorange.com, but I had to set my domain to use their provided nameservers in order for the site to properly resolve. I was determined to keep control of the domain's DNS, but I could find no way to make my top-level domain resolve to the shared location, which was in the format of
server.asmallorange.com/~username
So I know I'm missing something here, my question is this:
What in their nameservers/DNS entry makes it possible for server.sharedhost.com/~username to serve as a top level domain? (ie. http://topleveldomain.com)
A: Nothing. DNS simply maps topleveldomain.com to server.sharedhost.com. It's the webserver which looks at the Host: topleveldomain.com header and knows that's equivalent to server.sharedhost.com/~username.
A: Nothing. They are having your domain name resolve to the same IP that server.asmallorange.com resolves to, but then they are making their web server aware of the domain name topleveldomain.com, and telling the webserver that it is the same as server.asmallorange.com/~username.
Virtual hosts aren't a DNS trick, they're an HTTP trick - the hostname requested is sent by the browser in a Host: field of every request.
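For illustration, when a browser fetches http://topleveldomain.com/ it connects to the shared server's IP and sends roughly this; the Host line is all the webserver needs to pick the right site:
GET / HTTP/1.1
Host: topleveldomain.com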
A: Apache has "mod_userdir", which you can enable in your Apache conf file. Using this and virtual hosts is how that is accomplished.
A: Virtual Hosts in Apache are how this is done.
However just because you set the DNS up to go "mydomain.com resolves to 1.2.3.4", which is their IP address, doesn't mean that you're giving up control of your domain name.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: how do you organize your namespaces? So I have logical entities (person, country, etc.), GUI elements / controls, data and navigation controllers / managers, then things like quad-trees and timers, and I always struggle with cleanly separating these things into logical namespaces.
I usually have something like this:
*
*Leviathan.GUI.Controls
*Leviathan.GUI.Views
*Leviathan.Entities
*Leviathan.Controllers (data and other stuff)
*Leviathan.Helpers (trees and other stuff)
Are there any good guides on this? I need to stop this mess.
A: For applications
Company.Product.Tier.Sub.Sub
where I like to get Tier from Model, View, Controller or other established names (Data)
But for our controls, we end up with
Company.Product.LogicalFeatureGrouping
or
Company.Product.Addon
sometimes it's
Company.Product.LogicalFeatureGrouping.Addon
A: Try to avoid the "and other stuff" or "misc." categories. If you are putting things in these categories, you are failing to really organize them at all.
A: I usually create a namespace for each tier, like UI, business logic, and database. It forces me to separate the tiers. I create other namespaces inside them according to system components.
A: I follow the Java / Python ideal that namespaces should follow the directory structure.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Ant Junit tests are running much slower via ant than via IDE - what to look at? I am running my junit tests via ant and they are running substantially slower than via the IDE. My ant call is:
<junit fork="yes" forkmode="once" printsummary="off">
<classpath refid="test.classpath"/>
<formatter type="brief" usefile="false"/>
<batchtest todir="${test.results.dir}/xml">
<formatter type="xml"/>
<fileset dir="src" includes="**/*Test.java" />
</batchtest>
</junit>
The same test that runs in near instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter but this doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests?
More info:
I am using the reported time from the IDE vs. the time that the junit task outputs. This is not the sum total time reported at the end of the ant run.
So, bizarrely, this problem has resolved itself. What could have caused this problem? The system runs on a local disk so that is not the problem.
A: Here's a blind guess: try increasing the maximum heap size available to the forked VM by using a nested <jvmarg> tag to set the -Xmx option.
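Something like this, for example (512m is an arbitrary value; pick whatever suits your tests):
<junit fork="yes" forkmode="once" printsummary="off">
    <jvmarg value="-Xmx512m"/>
    <classpath refid="test.classpath"/>
    ...
</junit>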
A: I'm guessing it's because your ant script is outputting results to XML files, whereas the IDE keeps those in memory. It takes longer to write a file than to not write a file.
todir="${test.results.dir}/xml"
That's the part of the <batchtest> call that tells it to stick the results into that directory. It looks like leaving it off just tells it to stick the results in the "current directory", whatever that is. At first glance I didn't see anything to turn it all the way off.
A: Difficult to tell with that information. First thing I would do is look at the test results and determine if all the individual tests are running uniformly slower or if it can be narrowed down to a certain subset of test cases.
(The zero'th thing I would do is make sure that my ant task is using the same JVM as Eclipse and that the classpath dependencies and imported JARs are really and truly identical)
A: Maybe you are seeing that because Eclipse does incremental compiling and Ant doesn't. Can you confirm that this time is wasted only in the test target?
A: For the record, I found my problem. We have been using a code obfuscator for this project, and the string encryption portion of that obfuscator was set to "maximum". This slowed down any operation where strings were present.
Turning down the string encryption to a faster mode fixed the problem.
A: Try setting fork, forkmode and threads to these values:
<junit fork="yes" forkmode="perTest" printsummary="off" threads="4">
<classpath refid="test.classpath"/>
<formatter type="brief" usefile="false"/>
<batchtest todir="${test.results.dir}/xml">
<formatter type="xml"/>
<fileset dir="src" includes="**/*Test.java" />
</batchtest>
</junit>
Also see https://ant.apache.org/manual/Tasks/junit.html
A: For me, adding forkmode="once" for the <junit> element and adding usefile="false" for the <formatter> element makes the tests run much faster. Also remove the formatters that you don't need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: custom server control values lost in callback I have a custom server control that loads data from a web service into a GridView. Works fine on my page. I want to be able to click on a row and pop a popupcontrol with more detail on the clicked row. I am using the client side events of the DevExpress gridview to handle the onclick. And from JavaScript I am calling a callbackpanel to access my custom server control to get properties to use in the popupcontrol. In the callback, the properties on my server control (which were previously set in order to display the data) are not set, yet any of the other standard controls on the page still have their property settings. Am I missing a setting in my customer server control that will persist my property settings into a callback?
A: There are a few methods for persisting values through a postback. The method you pick will depend on your exact situation, which you didn't elaborate on. Personally, I think it sounds like a good place for AJAX...
Here's a great article with some options:
http://msdn.microsoft.com/en-us/magazine/cc300437.aspx
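One of the simplest of those options for a custom server control is to back each property with ViewState instead of a private field, so the value set when the page first rendered is still there when the callback arrives. A minimal sketch (the control and property names are illustrative):
// using System.Web.UI.WebControls;
public class DetailGrid : WebControl
{
    public string DetailUrl
    {
        // falls back to an empty string before the property is first set
        get { return (string)(ViewState["DetailUrl"] ?? string.Empty); }
        set { ViewState["DetailUrl"] = value; }
    }
}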
A: I've had very similar issues. The problem seemed to be resolved by tweaking the timing of when the data is bound.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Adding headers to mail coming via exim4 I've got a Debian Etch system running Exim4-daemon-heavy.
The system is open to the internet, but the intention is that it will only receive legitimate mail coming from a spam-filtering service, which runs as a proxy ahead of it. (I can't just limit access to those IPs though, because I do have some authorized users who relay via my server on port 25. I know I should be using 587 - but currently I'm not.)
The general way this works is:
[Internet] -> [SMTP proxy] -> [My Server]
Unfortunately I've got spammers sending mail directly to the mailserver, and ignoring the MX record(s). So it seems like my obvious solution is to either:
*
*Add a header to each processed message at the SMTP proxy.
*Add a header at my server for each incoming message unless the mail is coming from an authorized relayer. (ie. Somebody who has completed SMTP AUTH.)
That way I could use procmail to just junk messages that came direct, via senders who ignored my MX records.
I'm pretty sure that Exim4 could be coerced into adding a header such as "X-Submitter: $ip" - to record the remote IP which submitted the message, but I'm unsure how that should be done.
A: Be aware that Debian repackages exim in a fairly unique way that makes their packaging and maintenance easier but means that generic rules sometimes don't plug in as smoothly.
The correct way to handle this would be to reject mail that is not authorized and not from the proxy IP. Put something like this in your rcpt ACL:
deny message = quit trying to bypass DNS
!hosts = PROXY_IP_ADDRESS
!authenticated = *
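If you would rather tag than reject, the same conditions can drive a warn verb instead; a sketch for Exim of this era, where the message modifier on a warn verb adds a header ($sender_host_address is Exim's variable for the connecting IP; newer releases spell this add_header):
warn message = X-Submitter: $sender_host_address
     !hosts = PROXY_IP_ADDRESS
     !authenticated = *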
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Sending raw data to FedEx Label printer I'm working on a .NET WinForms app that needs to print a FEDEX shipping label. As part of the FedEx api, I can get raw label data for the printer.
I just don't know how to send that data to the printer through .NET (I'm using C#). To be clear, the data is already preformatted into ZPL (Zebra printer language); I just need to send it to the printer without Windows mucking it up.
A: C# doesn't support raw printing, you'll have to use the win32 spooler, as detailed in this KB article How to send raw data to a printer by using Visual C# .NET.
Hope this helps.
-Adam
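The core of that KB article is a small P/Invoke wrapper around the winspool.drv spooler functions; here is a condensed sketch of it (pass the Windows queue name of your label printer as printerName):
using System;
using System.Runtime.InteropServices;
using System.Text;

public class RawPrinterHelper
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    public class DOCINFOA
    {
        [MarshalAs(UnmanagedType.LPStr)] public string pDocName;
        [MarshalAs(UnmanagedType.LPStr)] public string pOutputFile;
        [MarshalAs(UnmanagedType.LPStr)] public string pDataType;
    }

    [DllImport("winspool.Drv", EntryPoint = "OpenPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern bool OpenPrinter(string szPrinter, out IntPtr hPrinter, IntPtr pd);
    [DllImport("winspool.Drv", EntryPoint = "ClosePrinter", SetLastError = true)]
    static extern bool ClosePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", EntryPoint = "StartDocPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern bool StartDocPrinter(IntPtr hPrinter, int level, [In] DOCINFOA di);
    [DllImport("winspool.Drv", EntryPoint = "EndDocPrinter", SetLastError = true)]
    static extern bool EndDocPrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", EntryPoint = "StartPagePrinter", SetLastError = true)]
    static extern bool StartPagePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", EntryPoint = "EndPagePrinter", SetLastError = true)]
    static extern bool EndPagePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", EntryPoint = "WritePrinter", SetLastError = true)]
    static extern bool WritePrinter(IntPtr hPrinter, byte[] pBytes, int dwCount, out int dwWritten);

    // Sends raw ZPL straight to the named queue; the "RAW" data type
    // tells the spooler not to let the driver touch the bytes.
    public static bool SendStringToPrinter(string printerName, string zpl)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(zpl);
        IntPtr hPrinter;
        if (!OpenPrinter(printerName, out hPrinter, IntPtr.Zero))
            return false;
        try
        {
            DOCINFOA di = new DOCINFOA();
            di.pDocName = "ZPL label";
            di.pDataType = "RAW";
            if (!StartDocPrinter(hPrinter, 1, di))
                return false;
            StartPagePrinter(hPrinter);
            int written;
            bool ok = WritePrinter(hPrinter, bytes, bytes.Length, out written);
            EndPagePrinter(hPrinter);
            EndDocPrinter(hPrinter);
            return ok;
        }
        finally
        {
            ClosePrinter(hPrinter);
        }
    }
}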
A: I think you just want to send the ZPL (job below) directly to your printer.
private void SendPrintJob(string job)
{
    NetworkStream ns = null;
    byte[] bytes;
    int bytesRead;
    IPEndPoint remoteIP;
    Socket sock = null;
    try
    {
        remoteIP = new IPEndPoint(IPAddress.Parse(hostName), portNum);
        sock = new Socket(AddressFamily.InterNetwork,
                          SocketType.Stream,
                          ProtocolType.Tcp);
        sock.Connect(remoteIP);
        ns = new NetworkStream(sock);
        // drain anything the printer has already sent
        if (ns.DataAvailable)
        {
            bytes = new byte[sock.ReceiveBufferSize];
            bytesRead = ns.Read(bytes, 0, bytes.Length);
        }
        // send the raw ZPL job
        byte[] toSend = Encoding.ASCII.GetBytes(job);
        ns.Write(toSend, 0, toSend.Length);
        // drain any response
        if (ns.DataAvailable)
        {
            bytes = new byte[sock.ReceiveBufferSize];
            bytesRead = ns.Read(bytes, 0, bytes.Length);
        }
    }
    finally
    {
        if (ns != null)
            ns.Close();
        if (sock != null && sock.Connected)
            sock.Close();
    }
}
A: I've been working with a printer and ZPL for a while now, but with a Ruby app. Sending the ZPL out to the printer via socket works fine.
To check that it works, I often telnet to the printer and type ^XA^PH^XZ to feed a single label. Hope that helps.
A: A little late, but you can use this CodePlex Project for easy ZPL printing
http://sharpzebra.codeplex.com/
A: Zebra printers don't use a spooler; it isn't raw printing. It's a markup called ZPL. It's text-based, not binary.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you htmlencode using html agility pack? Has anyone done this? Basically, I want to sanitize the HTML by keeping basic tags such as h1, h2, em, etc.; cleaning all non-http addresses in the img and a tags; and HTMLEncoding every other tag.
I'm stuck at the HTML encoding part. I know that to remove a node you do "node.ParentNode.RemoveChild(node);", where node is an object of the class HtmlNode. Instead of removing the node, though, I want to HTMLEncode it.
A: You would need to remove the node representing the element you don't want. The encoded HTML would then need to be re-added as a text node.
If you don't want to process the children of the elements that you want to throw away, you should be able to just use OuterHtml ... something like this might work:
// HtmlTextNode has no public constructor, so create the text node via the owner document
nodeToDelete.ParentNode.ReplaceChild(
    nodeToDelete.OwnerDocument.CreateTextNode(HttpUtility.HtmlEncode(nodeToDelete.OuterHtml)),
    nodeToDelete);
A: The answer above pretty much covers it. There's one thing to add, though.
You don't want to change a particular node, but all of them, so the code above will probably be a method, wrapped in an if statement ( to make sure it's a tag you want to HtmlEncode ). More to the point, since Agility Pack doesn't expose nodes by ordinal, you can't iterate the entire document. Recursion is the easiest way to go about it. You probably already know this...
I tackled a similar problem, and have some shell code (C#) you're more than welcome to use: http://dev.forrestcroce.com/normalizer-of-web-pages-qualifier-of-urls/2008-12-09/
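To flesh that out, a minimal recursive sketch (allowedTags is your whitelist; assumes the stock HtmlAgilityPack API plus System.Linq, System.Collections.Generic and System.Web):
static void EncodeDisallowedTags(HtmlNode node, HashSet<string> allowedTags)
{
    // snapshot the children, since we mutate the collection as we walk it
    foreach (HtmlNode child in node.ChildNodes.ToList())
    {
        if (child.NodeType == HtmlNodeType.Element && !allowedTags.Contains(child.Name))
        {
            // swap the whole element for its HTML-encoded markup
            HtmlNode encoded = node.OwnerDocument.CreateTextNode(
                HttpUtility.HtmlEncode(child.OuterHtml));
            node.ReplaceChild(encoded, child);
        }
        else
        {
            EncodeDisallowedTags(child, allowedTags);
        }
    }
}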
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/123159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|