Q: Is object mocking used extensively? I am curious about how many of you folks incorporate mocking of objects (frameworks like JMock, NMock, RhinoMocks hand in hand with unit testing frameworks) into your daily development process. What are your experiences?
You see, I develop on top of a GIS (geographic information systems) platform, in which most of the work pertains to some way of handling data. As its data object model is quite complex (many, many classes and interfaces, all COM-based), it is also quite difficult and cumbersome to mock. In this case, mocking incurs a great deal of overhead when writing test suites. I wonder if there are people in a similar situation, or just: how does mocking (in whatever situation you are in) work for you?
A: On a recent project that I worked on we used mock objects extensively in our unit testing approach. The project was 100% Java and moderately sized (about 100,000 lines of non-commented code). It was a Swing-based desktop application - and the only effective way that we found to test the user interface logic was through an MVC variant design that allowed us to use mock objects to substitute for the actual Swing user interface classes for the automated testing. We also used mocking extensively in the testing of our data access layer (Hibernate/DAOs).
In the user interface case, the mocks were easy and straightforward to build. And the design of the application (Fowler Passive View) easily incorporated mocks. This was not the case for the mocks used in testing the data access layer. But I can say that it was clearly worth the effort. In fact, most of the 'effort' really focused on coming up with a reusable solution that minimized the work a developer had to do to create each individual mock. I'd recommend taking the time to dig in and discover an approach for your situation that allows you to easily mock up your GIS data layer. That - or just manually mock up each class. Either way, the ability to run the automated unit tests that rely on the mocks is worthwhile...
A: In my situation mocks work really nicely. But I'm using Python, which is so dynamic that it makes many things involving testing much, much easier.
In a situation like yours, where the application is mainly data-driven (as far as I can see), mocks may not be as useful. Just passing data in and watching what comes out should be enough for testing. I would just make sure the application is modularized enough that this approach can be applied to reasonably small components.
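To illustrate (a minimal sketch using Python's unittest.mock; the LayerSource class and count_features function are invented for the example):
from unittest import mock

class LayerSource:
    """Hypothetical collaborator that talks to an external GIS service."""
    def fetch(self, layer_id):
        raise NotImplementedError("would hit a remote service")

def count_features(source, layer_id):
    return len(source.fetch(layer_id))

# The mock stands in for the real class; no hand-written stub is needed.
fake = mock.Mock(spec=LayerSource)
fake.fetch.return_value = ["point_a", "point_b"]
assert count_features(fake, 7) == 2
fake.fetch.assert_called_once_with(7)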
A: Mocking can be useful in some kinds of projects. But sometimes mocking is very time-consuming and its ROI is low.
A: Trying to test SharePoint, it seems that mocking is the only way, and only Typemock will let you mock sealed classes.
A: Mocking is used very extensively in my case. Mocks are usually for classes that have external dependencies, e.g. network, database, filesystem. Any of these can introduce flakiness in the tests if mocks are not used.
If you find the mocks costly to write because there is a lot of fake data to populate, you could define some pre-populated data objects as constants and use them, or slightly modified copies of them, in your tests. If such data objects have external dependencies, then maybe refactor them in a way that separates the two concerns.
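A minimal sketch of that idea in C# (all type and member names here are invented for illustration):
// A hypothetical data class standing in for a complex GIS object.
public class ParcelRecord
{
    public int Id;
    public string Owner;
    public double AreaSqM;

    public ParcelRecord Copy()
    {
        return new ParcelRecord { Id = Id, Owner = Owner, AreaSqM = AreaSqM };
    }
}

public static class TestData
{
    // Pre-populated fixture shared across tests; built once, reused everywhere.
    public static readonly ParcelRecord DefaultParcel =
        new ParcelRecord { Id = 42, Owner = "Jane Doe", AreaSqM = 1250.0 };
}
A test can then copy the constant and tweak only what that case needs, e.g. ParcelRecord oversized = TestData.DefaultParcel.Copy(); oversized.AreaSqM = 99999.0;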
A: There is an initiative started by Dave Bouwman to try and build a community library of mocks for use in ArcObjects-related unit testing. His blog and this SVN repository have great information related to unit testing GIS systems:
http://blog.davebouwman.net/CategoryView,category,Unit%2BTesting.aspx
http://svn2.assembla.com/svn/arcdeveloper/TestingUtilities/trunk/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to create an automatic Revision History table in Word 2007 Is it possible in Word 2007 to create a revision history table automatically using track changes or some other method?
e.g.
Revision History
Date       | Version | Description                | Author
16/09/2008 | 1.0     | Created                    | John Smith
17/09/2008 | 1.1     | Fixed dumb spelling errors | Colin Jones
A: I don't think it's possible to do automatically.
I'd suggest that you keep track manually with a table like you suggested, and then keep all your documents in a version control system under a separate documentation branch in order to have an automatic revision history. If you feel up to it, you could also create a tool that compares said element to the revision history of the document and shouts at you if you haven't updated it :)
A: It is quite a serendipity, as I was grappling with the exact same issue a few days back. Although the approach I used was manual, it was quite intuitive.
There is an option to compare documents in Word 2007 on the Review tab. You can choose to generate a new document with only the changes. Open the change document on a second monitor and manually incorporate the changes.
You can also go to Print and select printing the markup in the new document (if present), but as even the most trivial of changes generates a new entry, it becomes difficult to get to the meat of the changes.
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to create a MaskedEditExtender on the fly? I want to create a number of masked edit extenders from codebehind. Something like:
private MaskedEditExtender m_maskedEditExtender;

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    m_maskedEditExtender = new MaskedEditExtender()
    {
        BehaviorID = "clientName"
    };
    m_maskedEditExtender.Mask = "999999999";
    this.Controls.Add(m_maskedEditExtender);
}

protected override void Render(HtmlTextWriter writer)
{
    m_maskedEditExtender.RenderControl(writer);
}
When I do this, I get a NullReferenceException on OnLoad of MaskedEditExtender. What is the correct way of doing that? Please note that putting the extender into a repeater-like control and using DataBind does not work for me.
Edit: I do not have an update panel. Turns out I also need to specify a target control on the server side.
A: Your example is not providing a TargetControlID.
Do you have an updatePanel on the page? I had problems dynamically creating extenders as they weren't being added to the updatePanel content.
I also think you have to do something with the ScriptManager (registering the extender), but I could be mistaken (I don't have access to the code where I did dynamic extenders at the moment).
A: See ASP.NET Page Life Cycle Overview if this is in a Page subclass. If you scroll down to the event list, that page advises you to use the PreInit event to create any dynamic controls. It's necessary to do that early to ensure that ASP.NET cleanly loads ViewState at the right stage, among other things.
If you are doing this in a web user control or custom control, though, override CreateChildControls and do this in there.
Post a more complete code example if that doesn't help.
A: Provide the proper TargetControlID value to the MaskedEditExtender.
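A sketch of what that might look like (control IDs here are made up, and it creates the extender and its target early in the page life cycle, per the advice above):
private TextBox m_clientNameTextBox;
private MaskedEditExtender m_maskedEditExtender;

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    // The extender needs a real control in the control tree to target.
    m_clientNameTextBox = new TextBox { ID = "clientNameTextBox" };
    this.Controls.Add(m_clientNameTextBox);

    m_maskedEditExtender = new MaskedEditExtender
    {
        ID = "clientNameMaskedEdit",
        BehaviorID = "clientName",
        TargetControlID = m_clientNameTextBox.ID, // the piece missing in the question
        Mask = "999999999"
    };
    this.Controls.Add(m_maskedEditExtender);
}
With both controls added to the control tree like this, the manual Render override from the question should no longer be necessary.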
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: PHP - RSS builder I have an old website that generates its own RSS every time a new post is created. Everything worked when I was on a server with PHP 4, but now that the host changed to PHP 5, I always get a "bad formed XML" error. I was using xml_parser_create() and xml_parse(...) and fwrite(...) to save everything.
Here is the saving code (of course, I read the existing file before saving so the old RSS items can be appended).
function SaveXml()
{
    if (!is_file($this->getFileName()))
    {
        // Create the file
        $file_handler = fopen($this->getFileName(), "w");
        fwrite($file_handler, "");
        fclose($file_handler);
    } // End of if

    // Header: xml version="1.0" encoding="utf-8"
    $strFileData = '<?xml version="1.0" encoding="iso-8859-1" ?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>' . $this->getProjectName() . '</title><link>http://www.mywebsite.com</link><description>My description</description><lastBuildDate>' . date("r") . '</lastBuildDate>';

    // Data
    reset($this->arrData);
    foreach ($this->arrData as $i => $value)
    {
        $strFileData .= '<item>';
        $strFileData .= '<title>' . $this->GetNews($i, 0) . '</title>';
        $strFileData .= '<pubDate>' . $this->GetNews($i, 1) . '</pubDate>';
        $strFileData .= '<dc:creator>' . $this->GetNews($i, 2) . '</dc:creator>';
        $strFileData .= '<description><![CDATA[' . $this->GetNews($i, 3) . ']]> </description>';
        $strFileData .= '<link><![CDATA[' . $this->GetNews($i, 4) . ']]></link>';
        $strFileData .= '<guid>' . $this->GetNews($i, 4) . '</guid>';
        //$strFileData .= '<category>' . $this->GetNews($i, 5) . '</category>';
        $strFileData .= '<category>Mycategory</category>';
        $strFileData .= '</item>';
    } // End of for i

    $strFileData .= '</channel></rss>';

    if (file_exists($this->getFileName())) // Delete the old file
        unlink($this->getFileName());

    $file_handler = fopen($this->getFileName(), "w");
    fwrite($file_handler, $strFileData);
    fclose($file_handler);
} // End of SaveXml
My question is: how do you create and fill up your RSS in PHP?
A: I would use SimpleXML to create the required structure and export the XML. Then I'd cache it to disk with file_put_contents().
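A minimal sketch of that approach (the feed values are placeholders; note that SimpleXML's addChild() does not escape ampersands, hence the htmlspecialchars() calls):
// Build the feed skeleton.
$rss = new SimpleXMLElement('<rss version="2.0"/>');
$channel = $rss->addChild('channel');
$channel->addChild('title', htmlspecialchars('My Project'));
$channel->addChild('link', 'http://www.example.com');
$channel->addChild('description', htmlspecialchars('My description'));
$channel->addChild('lastBuildDate', date('r'));

// One item per post; loop over your data here.
$item = $channel->addChild('item');
$item->addChild('title', htmlspecialchars('First post'));
$item->addChild('link', 'http://www.example.com/posts/1');
$item->addChild('pubDate', date('r'));

// asXML() emits well-formed XML, so there are no hand-built strings to get wrong.
file_put_contents('news.xml', $rss->asXML());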
A: At swcombine.com we use Feedcreator. Use that one and your problem will be gone. :)
Here is the PHP code to use it once installed:
function feed_simnews() {
    $objRSS = new UniversalFeedCreator();
    $objRSS->title = 'My News';
    $objRSS->link = 'http://link.to/news.php';
    $objRSS->description = 'daily news from me';
    $objRSS->xsl = 'http://link.to/feeds/feedxsl.xsl';
    $objRSS->language = 'en';
    $objRSS->copyright = 'Copyright: Mine!';
    $objRSS->webmaster = 'webmaster@somewhere.com';
    $objRSS->syndicationURL = 'http://link.to/news/simnews.php';
    $objRSS->ttl = 180;

    $objImage = new FeedImage();
    $objImage->title = 'my logo';
    $objImage->url = 'http://link.to/feeds/logo.jpg';
    $objImage->link = 'http://link.to';
    $objImage->description = 'Feed provided by link.to. Click to visit.';
    $objImage->width = 120;
    $objImage->height = 60;
    $objRSS->image = $objImage;

    // Function retrieving an array of your news from start date to last week
    $colNews = getYourNews(array('start_date' => 'Last week'));
    foreach ($colNews as $p) {
        $objItem = new FeedItem();
        $objItem->title = $p->title;
        $objItem->description = $p->body;
        $objItem->link = $p->link;
        $objItem->date = $p->date;
        $objItem->author = $p->author;
        $objItem->guid = $p->guid;
        $objRSS->addItem($objItem);
    }

    $objRSS->saveFeed('RSS2.0', 'http://link.to/feeds/news.xml', false);
}
Quite KISS. :)
A: I've used this LGPL-licensed feedcreator class in the past and it worked quite well for the very simple use I had for it.
A: PHP 5 now comes with the SimpleXML extension; it's a pretty quick way to build valid XML if your needs aren't complicated.
However, the problem you're describing doesn't seem to be an issue of implementation so much as a problem of syntax. Perhaps you could update your question with a code example, or a copy of the XML that is produced.
A: Not a full answer, but you don't have to parse your own XML. It will hurt performance and reliability.
But definitely make sure it is well-formed. It shouldn't be very hard if you generate it by hand or with general-purpose tools. Or maybe your included HTML ruins it?
A: There are lots of things that can make XML malformed. It might be a problem with character entities (a '<', '>', or '&' in the data between the XML tags). Try running anything output from a database through htmlentities() when you concatenate the string. Do you have an example of the generated XML for us to look at so we can see where the problem is?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I list the tables in a SQLite database file that was opened with ATTACH? What SQL can be used to list the tables, and the rows within those tables in an SQLite database file - once I have attached it with the ATTACH command on the SQLite 3 command line tool?
A: Since nobody has mentioned the official SQLite reference, I think it may be useful to refer to it:
https://www.sqlite.org/cli.html
You can manipulate your database using the commands described in this link. Besides, if you are using Windows and do not know where the command shell is, it is on SQLite's site:
https://www.sqlite.org/download.html
After downloading it, click the sqlite3.exe file to initialize the SQLite command shell. When it is initialized, by default this SQLite session is using an in-memory database, not a file on disk, and so all changes will be lost when the session exits. To use a persistent disk file as the database, enter the ".open ex1.db" command immediately after the terminal window starts up.
The example above causes the database file named "ex1.db" to be opened and used, and created if it does not previously exist. You might want to use a full pathname to ensure that the file is in the directory that you think it is in. Use forward-slashes as the directory separator character. In other words use "c:/work/ex1.db", not "c:\work\ex1.db".
To see all tables in the database you have previously chosen, type the command .tables as it is said in the above link.
If you work on Windows, I think it might be useful to move this sqlite3.exe file to the same folder as your Python files. That way, the .db files that the Python scripts write to and that the SQLite shell reads from are in the same path.
A: Use .help to check for available commands.
.table
This command would show all tables under your current database.
A: The .tables and .schema "helper" functions don't look into ATTACHed databases: they just query the SQLITE_MASTER table for the "main" database. Consequently, if you used
ATTACH some_file.db AS my_db;
then you need to do
SELECT name FROM my_db.sqlite_master WHERE type='table';
Note that temporary tables don't show up with .tables either: you have to list sqlite_temp_master for that:
SELECT name FROM sqlite_temp_master WHERE type='table';
A: The ".schema" command will list the available tables and their structure, by showing you the statement used to create them:
sqlite> create table table_a (id int, a int, b int);
sqlite> .schema table_a
CREATE TABLE table_a (id int, a int, b int);
A: There is a command available for this on the SQLite command line:
.tables ?PATTERN? List names of tables matching a LIKE pattern
Which converts to the following SQL:
SELECT name FROM sqlite_master
WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'
UNION ALL
SELECT name FROM sqlite_temp_master
WHERE type IN ('table','view')
ORDER BY 1
A: It appears you need to go through the sqlite_master table, like this:
SELECT * FROM dbname.sqlite_master WHERE type='table';
And then manually go through each table with a SELECT or similar to look at the rows.
The .DUMP and .SCHEMA commands don't appear to see the database at all.
A: To list the tables you can also do:
SELECT name FROM sqlite_master
WHERE type='table';
A: I use this query to get it:
SELECT name FROM sqlite_master WHERE type='table'
And to use in iOS:
NSString *aStrQuery=[NSString stringWithFormat:@"SELECT name FROM sqlite_master WHERE type='table'"];
A: Try PRAGMA table_info(table-name);
http://www.sqlite.org/pragma.html#schema
A: According to the documentation, the equivalent of MySQL's SHOW TABLES; is:
The ".tables" command is similar to setting list mode then executing
the following query:
SELECT name FROM sqlite_master
WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'
UNION ALL
SELECT name FROM sqlite_temp_master
WHERE type IN ('table','view')
ORDER BY 1;
However, if you are checking if a single table exists (or to get its details), see LuizGeron's answer.
A: To show all tables, use
SELECT name FROM sqlite_master WHERE type = "table"
To show all rows, I guess you can iterate through all tables and just do a SELECT * on each one. But maybe a DUMP is what you're after?
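If it helps, a rough Python sketch of that iterate-and-SELECT idea (the file name is a placeholder):
import sqlite3

conn = sqlite3.connect("some_file.db")  # placeholder path

# Collect the table names first...
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]

# ...then dump the rows of each table.
for table in tables:
    print("--", table)
    # Interpolating is safe here: the name came from sqlite_master, not user input.
    for row in conn.execute('SELECT * FROM "%s"' % table):
        print(row)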
A: As of the latest versions of SQLite 3 you can issue:
.fullschema
to see all of your create statements.
A: There are a few steps to see the tables in an SQLite database:
1. List the tables in your database:
.tables
2. List how the table looks:
.schema tablename
3. Print the entire table:
SELECT * FROM tablename;
4. List all of the available SQLite prompt commands:
.help
A: The easiest way to do this is to open the database directly and use the .dump command, rather than attaching it after invoking the SQLite 3 shell tool.
So... (assume your OS command line prompt is $) instead of $sqlite3:
sqlite3> ATTACH database.sqlite as "attached"
From your OS command line, open the database directly:
$sqlite3 database.sqlite
sqlite3> .dump
A: Via a union all, combine all tables into one list.
select name
from sqlite_master
where type='table'
union all
select name
from sqlite_temp_master
where type='table'
A: Use:
import sqlite3
TABLE_LIST_QUERY = "SELECT * FROM sqlite_master where type='table'"
A: Use .da to see all databases - one is called 'main'.
Tables of this database can be seen by:
SELECT distinct tbl_name from sqlite_master order by 1;
Attached databases need the prefix you chose with AS in the ATTACH statement, e.g., aa (, bb, cc, ...), so:
SELECT distinct tbl_name from aa.sqlite_master order by 1;
Note that here you get the views as well. To exclude these add:
where type = 'table'
before ' order'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1342"
}
|
Q: Does anyone use Iron Speed Designer for rapid asp.net development? Visual Studio is pretty good but doesn't create stored procedures automatically. Iron Speed Designer supposedly does. But is it any good?
A: I have used it for convenience for a very small project. It did what I wanted and saved me a couple of days work.
The main problem I found was when it came to customising or extending the generated project. You have to spend quite a bit of time trying to understand Ironspeed's way of doing things which, I'll admit, is not my way.
I'd use it again for a small project if I knew in advance I wouldn't have to customise it much after.
A: I have used Ironspeed extensively for the past two years for most of our ASP.NET forms-over-data projects.
It works. Does several things well: stored procs, fast layout of table browse and CRUD screens, fast layout of single record CRUD screens. It manages the round-trip (or half-round trip) process decently, detecting changes in your back end db schema and updating its data access layer, then making the changed columns available for you to alter your UI (in record or table control panels). ISD (as they call it) does an excellent job in making security management for your app pretty painless, even down to the control level (if you use ISD's subclassed versions of asp.net controls). Final plus, not a small one, is the CSS-based theme control (easy to change to a variety of themes, easy to customize a particular theme, and not even too bad to build your own theme variant by forking an existing one you like). Depending upon whether you let ISD create your stored procs in the code base or the database, changing DB's at run time can be a piece of cake.
Fairly active forum with a core group of helpful contributors. You can probably avoid the paid tech support through the forum.
Okay, the down sides. Creates fairly large code conglomerations, being a three tiered architecture. As Galwegian says, like any framework, you've got the velvet handcuffs (get your mind out of the gutter if you are thinking about anything other than code limitations and conventions!). The velvet handcuffs are the page and control model, the data layer, lack of a business object/class capability per se, the postback model, and the temptation to make your user GUI look like THEIR user GUI that comes out of the box because it is so darned easy and convenient.
ISD builds a basic page by combining an HTML template (into which you place ISD-specific code generation tags and any other tags, etc., which you do using the ISD GUI or by hand). The page model relies upon a code-behind page created from a piece of code template. The base classes are almost completely overridable, so that you can override all of the default functions, regenerate the application and not lose your overrides. The database controls live in the page container, but have their own class definitions (i.e., their code-behind) in specific /app_code files. Again, each control type has its own base class with pretty completely overridable methods. A single record control (showing a single db record) is pretty simple. A table, showing several records, has a table class and a table row class. The ISD website (www.ironspeed.com/support) has good documentation of the ISD model as a whole.
So, where are the problems in this model?
1. Easy and tempting to live with their out of the box GUI. Point ISD at your database, pick the tables you want to have it turn in to pages, tell it the kinds of pages, give it a thematic style and five minutes later you're viewing the application. Cool. But, it is very easy to forget that their user GUI is probably not what your user wants to see. So, be prepared to think for yourself and tinker with the GUI thus created. Not hard to do, and you can use VS 2005 to help you.
2. Business objects. You could put together your own business objects, but it would be difficult and you would get no help from ISD. ISD does a LOT of building of simple validation and checking (appropriate look-up values, ranges, lengths, etc.). ISD lets you build custom queries, but these are read-only. It is smart enough (and you can override the write from a page in any case) to let you take a one-to-many view and write it back to the database (you'd probably override the default base method, but it isn't that hard to do). However, when you get into serious dependency checking, ISD is still really about tables and not business objects. So, you're going to write some code.
If you are smart, you'll write it once, store it in app_code somewhere and use it by calling it from an overridden method in your table or record controls. If you are like most of us, you'll first spaghetti it in to one of the code-behind classes above, and then forget you did so, or have a copy in each of the 10 pages that manipulates customer data. In my world, that has usually meant 5 identical functions and 5 that are all different (even though they are all supposed to be the same). ISD makes it tempting to order marinara, because the model lends itself to spaghetti code. Of course, you can completely prevent this, but you gotta learn the ISD model to determine the best way to do it on your project.
3. Page state and postbacks. Although ISD is quite open about this problem and tells users not to just take the defaults of returning the whole asp.net page state in the postback stream (cache on the server instead), the default is to return the whole page. Can make for some BIG pages. Which makes users think S L O W. As I said, you can manipulate this. But, what newbie is going to get this when it is SO tempting to just point, click, and boom - instant application. Your manager is now off your back because her product inventory table is "on the web" with a cool search and edit GUI (of 400kb state pages if you've gone a bit nuts and have just taken the default behaviors of ISD). Great in-house, but the customers in the real world....
Again, knowledge is the key. You can fix this, but you need to know you SHOULD.
4. Database read/write postbacks. No big problem here, but you also need to know that the model is to fetch only the data used at the moment. If your table shows 1000 records in 50 record increments, when you go from records 1 to 50 to 51 through 100, you will postback and hit the database again. This keeps data current, but increases server traffic.
Overall: Try the demo version. Point it at something simple that you really want to turn in to an asp.net application. Build maybe three tables. Then dissect it using the above as a guide. See what YOU think and post back to this question.
A: If stored procedure generation is all you are after, CodeSmith is a decent option at a fraction of the cost of IronSpeed. There are several sproc templates available, and you can create your own or tweak an existing one if that is what you need. You can also gen .Net code to your heart's content with CodeSmith. Tons of business class templates already exist for this.
IronSpeed's value is not in the sproc generation, but in the RAD features. I agree with @Galwegian... IronSpeed is OK for mock ups or very simple apps, not so good at all if you need to do any customization.
A: You may want to check out Evolutility CRUD framework. It provides some of the same features (limited to CRUD) and is open source.
A: IronSpeed has been great (out-of-the-box) at helping me develop data-driven corporate Intranet applications. While the code model takes a little getting used to, it is effective at maintaining a nice three-tier app. While the page templates can appear garish compared to 2010's web-design, it gets the job done, when you need function over form.
A: Iron Speed Designer is great for simple CRUD type web applications. You can find some useful information on our web site http://www.dotnetarchitect.co.uk/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: In C# .NET 2.0, what's an easy way to do a foreach in reverse? Let's say I have a Dictionary object:
Dictionary myDictionary<int, SomeObject> = new Dictionary<string, SomeObject>();
Now I want to iterate through the dictionary in reverse order. I can't use a simple for loop because I don't know the keys of the dictionary. A foreach is easy:
foreach (SomeObject object in myDictionary.Values)
{
// Do stuff to object
}
But how can I perform this in reverse?
A: Actually, in C# 2.0 you can create your own iterator that traverses a container in reverse. Then, you can use that iterator in your foreach statement. But your iterator would have to have a way of navigating the container in the first place. If it's a simple array, it could go backwards like this:
static IEnumerable<T> CreateReverseIterator<T>(IList<T> list)
{
    int count = list.Count;
    for (int i = count - 1; i >= 0; --i)
    {
        yield return list[i];
    }
}
But of course you can't do that with a Dictionary, as it doesn't implement IList or provide an indexer. Saying that a Dictionary has no order is not true: of course it has order. That order can even be useful if you know what it is.
For a solution to your problem: I'd say copy the elements to an array, and use the above method to traverse it in reverse. Like this:
static void Main(string[] args)
{
    Dictionary<int, string> dict = new Dictionary<int, string>();
    dict[1] = "value1";
    dict[2] = "value2";
    dict[3] = "value3";

    foreach (KeyValuePair<int, string> item in dict)
    {
        Console.WriteLine("Key : {0}, Value: {1}", new object[] { item.Key, item.Value });
    }

    string[] values = new string[dict.Values.Count];
    dict.Values.CopyTo(values, 0);

    foreach (string value in CreateReverseIterator(values))
    {
        Console.WriteLine("Value: {0}", value);
    }
}
Copying your values to an array may seem like a bad idea, but depending on the type of value it's not really that bad. You might just be copying references!
A: I agree with @leppie, but think you deserve an answer to the question in general. It could be that you meant for the question to be in general, but accidentally picked a bad data structure. The order of the values in a dictionary should be considered implementation-specific; according to the documentation it is always the same order as the keys, but this order is unspecified as well.
Anyway, there's not a straightforward way to make foreach work in reverse. It's syntactic sugar for using the class's enumerator, and enumerators can only travel in one direction. Technically the answer could be "reverse the collection, then enumerate", but I think this is a case where you'll just have to use a "backwards" for loop:
for (int i = myCollection.Length - 1; i >= 0; i--)
{
    // do something
}
A: If you don't have .NET 3.5 and therefore the Reverse extension method you can implement your own. I'd guess it probably generates an intermediate list (when necessary) and iterates it in reverse, something like the following:
public static IEnumerable<T> Reverse<T>(IEnumerable<T> items)
{
    IList<T> list = items as IList<T>;
    if (list == null) list = new List<T>(items);
    for (int i = list.Count - 1; i >= 0; i--)
    {
        yield return list[i];
    }
}
A: A dictionary or any other form of hashtable has no ordering. So what you are trying to do is pointless :)
A: I'd use a SortedList instead of a dictionary. You can still access it by Key, but you can access it by index as well.
SortedList sCol = new SortedList();
sCol.Add("bee", "Some extended string matching bee");
sCol.Add("ay", "value matching ay");
sCol.Add("cee", "Just a standard cee");

// Go through it backwards.
for (int i = sCol.Count - 1; i >= 0; i--)
    Console.WriteLine("sCol[" + i.ToString() + "] = " + sCol.GetByIndex(i));

// Reference by key
foreach (string i in sCol.Keys)
    Console.WriteLine("sCol[" + i + "] = " + sCol[i]);

// Enumerate all values
foreach (string i in sCol.Values)
    Console.WriteLine(i);
It's worth noting that a sorted list stores key/value pairs sorted by key only.
A: If you have .NET 3.5 you can use the .Reverse() extension method on IEnumerables. For example:
foreach (object o in myDictionary.Values.Reverse())
{
    // Do stuff to object
}
A: That would be a Dictionary<int, SomeObject> myDictionary, and you would do it by:
foreach(SomeObject _object in myDictionary.Values.Reverse())
{
}
A: The only way I can come up with in .NET 2.0 is to first copy all the values to a List, reverse the list and then run the foreach on that list:
Dictionary<int, object> d;
List<object> tmplist = new List<object>();
foreach (object o in d.Values) tmplist.Add(o);
tmplist.Reverse();
foreach (object o in tmplist)
{
    // Do stuff
}
A: Literal answer:
Dictionary<int, SomeObject> myDictionary = new Dictionary<int, SomeObject>();

foreach (var pair in myDictionary.OrderByDescending(i => i.Key))
{
    // Observe pair.Key
    // Do stuff to pair.Value
}
A: If the ordering is most important, you could use a Stack and create a simple struct to store your int/Object pair.
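A rough sketch of that idea (KeyValuePair<int, SomeObject> can serve as the simple struct; this assumes the myDictionary from the question):
Stack<KeyValuePair<int, SomeObject>> stack = new Stack<KeyValuePair<int, SomeObject>>();
foreach (KeyValuePair<int, SomeObject> pair in myDictionary)
{
    stack.Push(pair);
}

// A Stack enumerates from the most recently pushed item outward,
// so this loop visits the pairs in reverse of the order they were pushed.
foreach (KeyValuePair<int, SomeObject> pair in stack)
{
    // Do stuff to pair.Value
}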
A: If you want a dictionary-type collection but you need to maintain the insertion order, you can look into the KeyedCollection
here
It is a merger between a dictionary and a list. That way you can access elements in the collection via the key or the insertion index.
The only gotcha is if the element being stored in the collection has to have an int key. If you can, change that to a string or another type (a Guid maybe), since collection[1] will be searching for the key 1 rather than the index 1.
A: A standard for loop would be best. You don't have to worry about the processing overhead of reversing the collection.
A: You can use the LINQ to Objects Enumerable.Reverse() function in .NET 2.0 using LinqBridge.
A: Instead of foreach (Sample sample in Samples),
try the following:
Int32 nEndingSample = Samples.Count - 1;
for (int i = nEndingSample; i >= 0; i--)
{
    x = Samples[i].x;
    y = Samples[i].y;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Stored procedures or OR mappers? Which is better? Or should you use an OR mapper with SPs? If you have a system with SPs already, is an OR mapper worth it?
A: I like ORM's because you don't have to reinvent the wheel. That being said, it completely depends on your application needs, development style and that of the team.
This question has already been covered: Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?
A: There is nothing good to be said about stored procedures. They were a necessity 10 years ago, but every single benefit of using sprocs is no longer valid. The two most common arguments are regarding security and performance. The "sending stuff over the wire" crap doesn't hold either; I can certainly create a query dynamically to do everything on the server too. One thing the sproc proponents won't tell you is that it makes updates impossible if you are using column conflict resolution on a merge publication. Only DBAs who think they are the database overlord insist on sprocs, because it makes their job look more impressive than it really is.
A: This has been discussed at length on previous questions.
What are the pros and cons to keeping SQL in Stored Procs versus Code
A: At my work, we mostly do line of business apps - contract work.
For this type of business, I'm a huge fan of ORM. About four years ago (when the ORM tools were less mature) we studied up on CSLA and rolled our own simplified ORM tool that we use in most of our applications, including some enterprise-class systems that have 100+ tables.
We estimate that this approach (which of course includes a lot of code generation) creates a time savings of up to 30% in our projects. Seriously, it's ridiculous.
There is a small performance trade-off, but it's insubstantial as long as you have a decent understanding of software development. There are always exceptions that require flexibility.
For instance, extremely data-intensive batch operations should still be handled in specialized sprocs if possible. You probably don't want to send 100,000 huge records over the wire if you could do it in a sproc right on the database.
This is the type of problem that newbie devs run into whether they're using ORM or not. They just have to see the results and if they're competent, they will get it.
What we've seen in our web apps is that usually the most difficult-to-solve performance bottlenecks are no longer database-related, even with ORM. Rather, they're on the front-end (browser) due to bandwidth, AJAX overhead, etc. Even mid-range database servers are incredibly powerful these days.
Of course, other shops who work on much larger high-demand systems may have different experiences there. :)
A: Stored procedures, hands down. OR Mappers are language-specific, and often add graphic slowdowns.
Stored procedures means you're not limited by the language interface, and you can merely tack on new interfaces to the database in forwards compatible ways.
My personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases. Database developers should realize the tasks people are trying to achieve with complicated OR-Mappers and create server-side utilities that assist in performing this task.
OR Mappers are also epic targets of the "leaky abstraction" syndrome (Joel On Software: Leaky Abstractions), where it's quite easy to find things they just can't handle because the abstraction layer isn't psychic.
A: Stored procedures are better, in my view, because they can have an independent security configuration from the underlying tables.
This means you can allow specific operations without allowing reads/writes to specific tables. It also limits the damage that people can do if they discover a SQL injection exploit.
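A small T-SQL illustration of that separation (the object and user names are made up):
-- The application login may execute the procedure...
GRANT EXECUTE ON dbo.usp_ArchiveOrders TO app_user;
-- ...but cannot read or write the underlying table directly.
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO app_user;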
A: Definitely ORMs. More flexible, more portable (generally they tend to have portability built in). In case of slowness you may want to use caching or hand-tuned SQL in hot spots.
Generally stored procedures have several problems with maintainability:
* separate from the application (so many changes now have to be made in two places)
* generally harder to change
* harder to put under version control
* harder to make sure they're updated (deployment issues)
* portability (already mentioned)
A: I personally have found that SPs tend to be faster performance-wise, at least for the large data items that I execute on a regular basis. But I know many people that swear by OR tools and wouldn't do ANYTHING else.
A: I would argue that using an OR mapper will increase the readability and maintainability of your application's source code, while using SPs will increase the performance of the application.
A: They are not actually mutually exclusive, though to your point they usually are so.
The advantage of using object-relational mapping is that you can swap out data sources. Not only database structure, but you could use any data source. With the advent of web services / service-oriented architecture / ESBs, in a larger corporation it would be wise to consider having a higher-level separation of concerns than what you could get in stored procedures. However, in smaller companies, and in applications that will never use a different data source, SPs can fit the bill fine. And one last point: it is not necessary to use an OR mapper to get the abstraction. My former team had great success by simply using an adapter model with Spring.NET to plug in the data source.
A: @Kent Fredrick:
"My personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases"
I think you're talking about the difference between the relational model and the object-oriented model. This is actually why we need ORMs, but the implementations of these models were done on purpose - it is not a design flaw - it is just how things turned out to be historically.
A: Use stored procedures where you have identified a performance bottleneck. If you haven't identified a bottleneck, what are you doing with premature optimisation?
Use stored procedures where you are concerned about security access to a particular table.
Use stored procs when you have a SQL wizard who is prepared to sit and write complex queries that join together loads of tables in a legacy database - to do the things that are hard in an OR mapper.
Use the OR mapper for the other (at least) 80% of your database: where the selects and updates are so routine as to make access through stored procedures alone a pointless exercise in manual coding, and where updates are so infrequent that there is no performance cost. Use an OR mapper to automate the easy stuff.
Most OR mappers can talk to stored procs for the rest.
You should not use stored procs on the assumption that they're faster than a SQL statement in a string; this is not necessarily the case in the last few versions of MS SQL Server.
You do not need to use stored procs to thwart SQL injection attacks; there are other ways to make sure that your query parameters are strongly typed and not just string-concatenated.
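For instance, a parameterized ADO.NET query does the job (table, column and variable names here are invented; assumes an open SqlConnection called connection):
using (SqlCommand cmd = new SqlCommand(
    "SELECT Name FROM Customers WHERE Id = @id", connection))
{
    // The parameter is strongly typed; nothing is string-concatenated.
    cmd.Parameters.Add("@id", SqlDbType.Int).Value = customerId;
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));
        }
    }
}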
You don't need to use an OR mapper to get a POCO domain model, but it does help.
A: If you already have a data API that's exposed as sprocs, you'd need to justify a major architectural overhaul to go to ORM.
For a green-fields build, I'd evaluate several things:
* If there's a dedicated DBA on the team, I'd lean to sprocs
* If there's more than one application touching the same DB, I'd lean to sprocs
* If there's no possibility of database migration ever, I'd lean to sprocs
* If I'm trying to implement MVCC in the DB, I'd lean to sprocs
* If I'm deploying this as a product with potentially multiple backend dbs (MySql, MSSql, Oracle), I'd lean to ORM
* If I'm on a tight deadline, I'd lean to ORM, since it's a faster way to create my domain model and keep it in sync with the data model (with appropriate tooling).
* If I'm exposing the same domain model in multiple ways (web app, web service, RIA client), I'll lean to ORM, as the data model is then hidden behind my ORM facade, and a robust domain model is more valuable to me.
I think performance is a bit of a red herring; Hibernate seems to perform nearly as well as or better than hand-coded SQL (due to its caching tiers), and it's easy to write a bad query in your sproc either way.
The most important criteria are probably the team's skillset and long-term database portability needs.
A: Well, the SPs are already there. It doesn't really make sense to can them. I guess the question is: does it make sense to use a mapper with SPs?
A: "I'm trying to drive in a nail. Should I use the heel of my shoe or a glass bottle?"
Both Stored Procedures and ORMs are difficult and annoying to use for a developer (though not necessarily for a DBA or architect, respectively), because they incur a start-up cost and higher maintenance cost that doesn't guarantee a pay-off.
Both will pay off well if the requirements aren't expected to change much over the lifespan of the system, but they will get in your way if you're building the system to discover the requirements in the first place.
Straight-coded SQL or quasi-ORM like LINQ and ActiveRecord is better for build-to-discover projects (which happen in the enterprise a lot more than the PR wants you to think).
Stored Procedures are better in a language-agnostic environment, or where fine-grained control over permissions is required. They're also better if your DBA has a better grasp of the requirements than your programmers.
Full-blown ORMs are better if you do Big Design Up Front, use lots of UML, want to abstract the database back-end, and your architect has a better grasp of the requirements than either your DBA or programmers.
And then there's option #4: Use all of them. A whole system is not usually just one program, and while many programs may talk to the same database, they could each use whatever method is appropriate both for the program's specific task, and for its level of maturity. That is: you start with straight-coded SQL or LINQ, then mature the program by refactoring in ORM and Stored Procedures where you see they make sense.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: CausesValidation is set to "False" but the client side validation is still firing I have several RequiredFieldValidators in an ASP.NET 1.1 web application that are firing on the client side when I press the Cancel button, which has the CausesValidation attribute set to "False". How can I get this to stop?
I do not believe that Validation Groups are supported in 1.1.
Here's a code sample:
<asp:TextBox id="UsernameTextBox" runat="server"></asp:TextBox>
<br />
<asp:RequiredFieldValidator ID="UsernameTextBoxRequiredfieldvalidator" ControlToValidate="UsernameTextBox"
runat="server" ErrorMessage="This field is required."></asp:RequiredFieldValidator>
<asp:RegularExpressionValidator ID="UsernameTextBoxRegExValidator" runat="server" ControlToValidate="UsernameTextBox"
Display="Dynamic" ErrorMessage="Please specify a valid username (6 to 32 alphanumeric characters)."
ValidationExpression="[0-9,a-z,A-Z, ]{6,32}"></asp:RegularExpressionValidator>
<asp:Button CssClass="btn" id="addUserButton" runat="server" Text="Add User"></asp:Button>
<asp:Button CssClass="btn" id="cancelButton" runat="server" Text="Cancel" CausesValidation="False"></asp:Button>
Update: There was some dynamic page generation going on in the code-behind that must have been messing it up, because when I cleaned that up it started working.
A: Validation Groups were not added to ASP.NET until version 2.0. This is a 1.1 question.
Double check your setting and make sure you are not overwriting it in the code behind.
A: Are they in separate validation groups (the button and validator controls)?
You're not manually calling the JS to do the client validation are you?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I tint a sprite to white in XNA? I don't think this is possible just using the color setting in SpriteBatch, so I'm trying to work out a simple shader that would take every pixel and make it white, while respecting the alpha value of the pixel.
The answer Joel Martinez gave looks right, but how do I incorporate that when I draw the sprite with SpriteBatch?
A: I think this is what you're looking for
sampler2D baseMap;

struct PS_INPUT
{
    float2 Texcoord : TEXCOORD0;
};

float4 ps_main( PS_INPUT Input ) : COLOR0
{
    float4 color = tex2D( baseMap, Input.Texcoord );
    return float4(1.0f, 1.0f, 1.0f, color.w);
}
It's very simple, it just takes the sampled color from the texture, and then returns an all white color using the texture's alpha value.
A: I attach the documentation page from MS, and if you follow all the steps you should get it up and running in no time.
http://msdn.microsoft.com/en-us/library/bb203872(MSDN.9).aspx
To sum it up - you need to create an effect file (containing the code above, which is indeed correct for your purposes), add it to your project, and then in the source file load it and use it during the render as explained in the link.
BTW: I don't quite remember the SpriteBatch (since I chose to write my own, it's too restrictive), but as I recall you might need to set the effect in the material you send to the render.
Anyways - maybe you'll find it here:
http://creators.xna.com/en-us/utilities/spritebatchshader
And an advanced code if you want to get there:
http://creators.xna.com/en-us/sample/particle3d
Have fun
A: If you want to use custom shaders with SpriteBatch, check out this sample:
http://creators.xna.com/en-us/sample/spriteeffects
A: Joel Martinez is indeed right, and you use it like this with a SpriteBatch, having loaded the effect into tintWhiteEffect:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
tintWhiteEffect.Begin();
tintWhiteEffect.CurrentTechnique.Passes[0].Begin();
// DRAW SPRITES HERE USING SPRITEBATCH
tintWhiteEffect.CurrentTechnique.Passes[0].End();
tintWhiteEffect.End();
spriteBatch.End();
SpriteSortMode.Immediate is the trick here, it allows you to swap out SpriteBatch's default shader for your own. Using it will make sprite drawing a bit slower though, since sprites aren't batched up in a single draw call, but I don't think you will notice the difference.
A: I haven't written my own pixel shaders, mostly just modified samples from the net. What you would do is increase the value of the R, G and B components of each pixel, as long as they're under 255; this would gradually shift the color of the sprite towards white. Hey, that rhymes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Sort with one option forced to top of list I have a PHP application that displays a list of options to a user. The list is generated from a simple query against SQL 2000. What I would like to do is have a specific option at the top of the list, and then have the remaining options sorted alphabetically.
For example, here's the options if sorted alphabetically:
Calgary
Edmonton
Halifax
Montreal
Toronto
What I would like the list to be is more like this:
**Montreal**
Calgary
Edmonton
Halifax
Toronto
Is there a way that I can do this using a single query? Or am I stuck running the query twice and appending the results?
A: SELECT name
FROM locations
ORDER BY
CASE
WHEN name = 'Montreal'
THEN 0
ELSE 1
END, name
A: SELECT name FROM options ORDER BY name = "Montreal", name;
Note: This works with MySQL, not SQL 2000 like the OP requested.
A: create table Places (
add Name varchar(30),
add Priority bit
)
select Name
from Places
order by Priority desc,
Name
A: I had a similar problem on a website I built full of case reports. I wanted the case reports where the victim name is known to sort to the top, because they are more compelling. Conversely I wanted all the John Doe cases to be at the bottom. Since this also involved people's names, I had the firstname/lastname sorting problem as well. I didn't want to split it into two name fields because some cases aren't people at all.
My solution:
I have a "Name" field which is what is displayed. I also have a "NameSorted" field that is used in all queries but is never displayed. My input UI takes care of converting "LAST, FIRST" entered into the sorting field into the display version automatically.
Finally, to "rig" the sorting I simply put appropriate characters at the beginning of the sort field. Since I want stuff to come out at the end, I put "zzz" at the beginning. To sort at the top you could put "!" at the beginning. Again your editing UI can take care of this for you.
Yes, I admit it's a bit cheesy, but it works. One advantage for me is that I have to do more complex queries with joins in different places to generate pages versus RSS etc., and I don't have to keep remembering a complex expression to get the sorting right; it's always just sort by the "NameSorted" field.
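To make the trick concrete, a sketch of what the two fields might hold (the table and values are invented):
-- Name (displayed)    NameSorted (used only in ORDER BY)
-- 'Jane Smith'        'smith, jane'
-- 'John Doe'          'zzzdoe, john'      -- rigged to sort last
-- 'Featured case'     '!featured case'    -- rigged to sort first
SELECT Name FROM cases ORDER BY NameSorted;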
Click my profile to see the resulting website.
A: I ended up with this
SELECT name
FROM locations
LEFT JOIN (VALUES ('Toronto', 1), ('Montreal', 2)) city (name, rank)
ON locations.name = city.name
ORDER BY city.rank, locations.name;
Which may be overkill for this example but can be extended for more complex needs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Managing feature creep in GUIs Does anyone have any practical suggestions about how to manage feature creep in GUIs?
I'm getting strong pressure from both internal and external sources to add, modify, tweak, etc. I always cringe when someone approaches me with the words "wouldn't it be nice if...?". I can't just turn around and yell "NO" at them, because often they are my superiors or customers.
Instead, I'm looking for suggestions to help explain why it's a bad idea to be constantly adding new features, and in doing so, manage their expectations of the final product.
A: Ideally requests like this should be handled by the person in charge of the functional design. Whether you like it or not, changes will happen (from the first letter in the functional design to the last byte of code and beyond) and there will always be requests for extra features. So make sure your design is up for such a dynamic process.
This will probably sound like a very lame solution (and I doubt it's good practice), but I have been struggling with the same issues in the past. The fact that it happened in a very small company (lack of 'layers' in management) made it worse, since I was in charge of development, functional design, technical design and managing my own projects.
What worked for me is to deflect the problem back to the person asking (whether it was a superior or the customer). Hand over the functional design, a prototype printout or whatever describes the current situation, and ask them to figure out 'how' and 'where' this mighty new feature should be implemented.
Both the superiors and the customers were then 'forced' to take it back to their own people, discuss it in meetings and whatnot. Usually this meant that you never heard about it again. In the cases where it did come around, it was actually a concept that worked.
A: Your company appears not to be defining requirements clearly before starting a project, and this will only end in tears.
My policy is to get a clear breakdown of all requirements in advance and have all parties know the implications of intruding on these requirements:
1. Progressively delayed release times
2. Increased bugs
3. Incomplete features
4. Staff stress
5. Staff resignations
6. Extra charges for expecting more out of the final product than was agreed upon at the declaration of a price (and this is REALLY BAD)
If they don't want to adhere to a system that is sustainable and productive, one might want to opt for #5, or threaten with #5.
A: For managers:
* the sooner the product is released to the market (assuming it's shrink-wrap), the sooner the company can make money and the better the cashflow.
* don't rule out the new feature outright, but balance it against the value you can derive from doing alternative work; explain the opportunity cost.
For everyone:
* if the new features are in-your-face in the UI, start talking about the effect of visual complexity on the usability - and from that, attractiveness - of the product as a whole. But I'm sure you're already doing that. I'll try digging out some references...
A: The best way IMHO is to clearly outline exactly what the cost of implementing the new features will be. "It would be nice if" really starts to dwindle when the user starts to see the cost of such additions.
Disagreeing with the customer about a feature usually gets you nowhere. If you blatantly say NO to them they will feel alienated and out of touch with you and your team. The feature probably is a good idea overall, granted you have all the time and money in the world and no technical limitations. In their world, being able to see a fiz next to a bar after they click on a snip is a good idea. Of course, in our world it means a full table scan, a potential security vulnerability, and an all-nighter to make sure it's in by the next point release.
If you lay it out for them and explain why it is not a really good idea overall they will usually understand. Don't forget all of the different factors (time/money/cost of adding complexity to the project/risk of slipping deadlines). A reasonable person will understand if you paint the picture clear enough, and you can at least say "I told you so" to an unreasonable person.
A: Have feature requests handled in a formal process, normally through the project manager and whoever analyzed the requirements originally. It's always better to palm those sorts of decisions off to someone that isn't the developer, assuming that whoever is going to do that job is actually capable of it.
If you're freelance then obviously charge for changes to the requirements, and if you're an internal development team, then you could consider inter-department billing to make sure people think about what they want to spend money on.
Finally, expect requirements to change and feature creep to happen. If you code without considering what changes might be requested, or your process and/or deadlines are so inflexible that you can't adjust to this, then you'll find that the project will become a nightmare.
A: What I do is keep feature ideas on index cards and post the cards somewhere visible. When someone asks, "Could it also do XXX?" I write a new card. This is a better relationship building move than screaming "NO!" :-) It also has the advantage of not losing potentially good ideas. OTOH, I'm under no compulsion to implement it right then. The suggester knows they've been listened to, I know I won't forget, I can get back to work, and we can all get together to make priority decisions at a better time than when my brain is in CodeLand.
A: All right then, I'll be the voice of agile here. The problem can't be solved at the end of that process; it has to be avoided by managing the project differently.
Aside from a specific methodology, the trick is to put those decisions into the hands of the customers. You have a list of things to be done. When they want to change that list, you ask them which item from the list won't be getting done to accommodate the new item. Or, how much more money they will be giving you to handle it.
Also, you have to do the work in small iterations (a week to a month) so they have chances to readjust in between.
We use SCRUM and it's been great. After a couple of iterations all the business-level and process-level items get worked out and you're delivering exactly what they want by the end.
A: You cannot handle just feature creep - you need to organise your whole development process in an proper way.
However from your description it seems that you just code what other people ask for and could not re-organise the process. In this scenario your best way to manage the requests effectively by havign a tracking/ticketing system which would allow you to receive requests from other people, prioritise them, estimate them, agree the implementation schedule and track the time you actually spend working on them.
When you will be able to prove with the real-world figures that 'this small button' would take 2-3 days instead of 5 seconds the customer probably believe it should be you will be in much better position to negotiate.
If you will be able to clearly show that the project go-live date will be delayed by two weeks because of the new features you might see those requests simply vanishing.
You have to remember however that 'feature creep' is not always a negative thing. As application matures and grows your customers priorities are changing as well. Failing to acknowledge that could mean that your finished product will not be what they want.
Try checking if they would accept trading a new feature for an old one from the original specification which is not yet implemented.
A: I keep a prioritized list of work tasks and my estimates on what will be in build X and how long (roughly speaking) I expect it to take to write tests, implement code and do whatever else is related. I always take their inputs, discuss what they really want/need and insist that we determine where it fits in the grand scheme of things. We talk about the impact to schedule and other tasks.
It keeps the communication line open and clear - there aren't any surprises and the expectations are managed. In the end, it isn't my program - it is the customer's (whoever the customer is) and I want to build them what they want (and need) built.
A: The key seems to be in the question.
'Managing feature creep'... you do this by implementing a management process that needs to be followed. You can't avoid it (after all, it's frequently the customers requesting it, and shouting no at them all the time tends to drive the poor creatures away)... but that doesn't mean it has to be undisciplined. With a procedure in place that requires the person placing the request to give simple things like a justification and a preliminary investigation/use-case for the change, you start to reduce the number of 'wouldn't it be nice to' requests. Once you have this in place, your feature creep is managed and you can start prioritising and providing more consistent feedback.
A: If there's a lesson we can learn from the Internet and Web 2.0 kinds of things, it's that people love customization. That's what iGoogle and hundreds of other sites are all about. If you can build customization in to your GUI, chances are your customers will love you for it.
Also, take a look at how other projects successfully manage feature creep. For example, Google lets users submit feature requests, but also shows a list of features already requested. Users can then vote to request that feature as well. Not that I'm a suck up, but take a look at stackoverflow.uservoice.com. They have a similar policy.
It's critical to listen to your users and get their feedback. Expect them to come up with new ideas that are better than yours. Expect them to come up with ideas that you think are dumb. If enough people want it, and it seems reasonable, give them what they want.
A: Your users have lots of needs that aren't taken care of. They are suffering. They need attention, and they need you. I think feature creep is something that happens when you don't implement the right features already.
*
*Cultivate a close relationship with your users. Let them know you are always interested in their input. Periodically give them a call and ask how your software is treating them.
*Get to know their work habits, standard practices, how they use your software, and how they use their other software. As that information comes in, collect it.
*When feature requests come in, your users won't really know what they want. You know what they want, though, because you have expertise and you've been listening. So, work with them to clarify the problem they're having, then use your collected knowledge to generalize the problem as best you can. Write a solution that solves that problem.
On the other hand, "feature creep" is often the response of a software product to an evolving business. If your customer's business is growing, you're fortunate, because they will spend more money on your work. So relax, they'll pay you! They just need to understand that, the bigger a system gets, the harder it is to change, and sometimes a new small feature necessitates a big rewrite, or a whole new user interface, in order to keep everything working smoothly.
A: You have to be careful to balance a reluctance to avoid feature creep with a tendency to ignore feature requests and feedback.
Every time a user comes to you with feedback, that's an opportunity to improve your product and what you're working on. It may end up that you're adding something interesting to both the user, and your developers; it might actually be fun to work on. And yes, it may be a stupid idea, as posed to you. But it's your job to accept the feedback, extract anything positive from it, and shape it into something valuable to your users, the product, your company, and your development team.
That being said, feature creep is a very difficult thing to manage. And how well you manage it depends on your position and who the "creep" is. If you're a mid-to-junior level developer, and the CEO is demanding a feature; well, you're going to be adding that feature. You can try to convince the CEO that it's not a valuable feature, or it won't work, or there are more important things to be working on, or it will negatively impact the schedule. But never do any of that at the time the feature is being requested. All you'll end up with is two people defending their position instead of working together towards a common goal.
Instead, accept the feedback and feature request (or feature demand) at face value immediately. Walk away, think about it openly for a while by yourself. "Could this be valuable?" "Am I missing something in the way the CEO asked for this?" "Is it as hard as I'm making it out to be?" Ask yourself these kinds of questions, and come up with some concrete answers. Then always go back to the CEO with follow-up questions. Demonstrate that you've thought about the feature requested, and have actually come up with some ideas, tweaks, enhancements, or objections, etc. This will create an open discussion. One that the CEO hadn't anticipated, but that he most likely would not object to since it was not outright resistance to his idea initially.
A: One of our financial backers requests features all the time. Sometimes he says can we get the software to do 'x'. If it is possible we tell him yes, and then ask him what timescales he had in mind. If he comes back with ASAP - then we tell him that some other feature will have to give, or extend our deadlines. Thankfully he then normally changes his opinion to sometime in the future.
I think the most important thing is to actually record the idea or request, even if the feature doesn't get implemented straight away.
We use Bugzilla to keep track of bugs - but also feature requests. We have a 'features' worklist (or target version)... that way everyone can see what features we would like to develop in the future, and as people have more ideas on a feature they can simply add more to the item in Bugzilla.
Every release, when we sit down and work out the worklists for a version, we dip into the features list to see if there is anything we can pull in. We do try to pull in a feature when we can and give feedback to people - this shows that features and ideas are not falling on deaf ears.
This feedback helps people know that we are acknowledging their feature requests and that we DO get round to implementing them, rather than them just sitting on a list which gets bigger and bigger.
A: 1) Increased time before release.
2) Increased cost.
3) Exponential maintenance cost.
4) Increased potential for bugs.
In order to manage a feature request, ask them to submit a change order. Periodically, review the change orders and send back a statement about each request, "This will take X long to do, implying this Y additional cost. Is this acceptable?" Once the requester has accepted the additional cost, then that's a-ok. Your hands are washed. :)
A: You explain "sure, it's doable, would you like to have an estimate of how much it will push out the project completion date? Also, giving you that estimate will add about a day to the project end, as well."
There's nothing wrong with adding features, so long as the stakeholders understand that there is a cost associated with doing so.
A: Suppose you build a product that has exactly one feature and all 100 of your customers love your product and find it easy to use. Now suppose that you add ten more features to your product that only 10 of your customers will use. Now you will find that 90% of your customers have much more trouble using your product because there are ten times more choices to make, and ten times as many things you could do that won't help you. The good stuff has been lost in the noise.
This is of course a simplification but the reality is that most of your users will only use a small portion of the features of your product.
Read some good books on software design and UI design, and get your manager to read them too. Joel Spolsky's books are a good place to start - www.joelonsoftware.com
A: Create work mandates that define the problem that needs solving. Your work is constrained by only needing to implement that which is necessary to solve the problem.
Any further refinement of the problem then becomes change control.
A: We follow wireframes in my office. Any change after signoff has to go through a Change Control procedure.
A: Lock-in feature set for a short time frame (Scrum/iteration/agile). As the user starts seeing things working, the necessity or lack thereof of features will become more apparent.
Also, it is helpful to have a person through which all changes come (in Scrum, a really good Product Owner).
A: Show them how simple GUIs can be effective. Examples: Google Chrome, Apple's software. You may also want to show examples of bloated software, like Eclipse, Netbeans, Visual Studio... ok, these are actually all software IDEs, but they all have cluttered interface.
A: The trick is to define the project as a sequence of versions. Your initial design is for version 2.0, but, the intended first release is version 1.0. All new ideas (features) are welcomed but since, due to scheduling, version 1.0 is frozen, the new ideas have to go into version 2.0.
Of course as soon as version 1.0 is released you begin bug fixing and coding for a maintenance release of version 1.01 and so on... Perhaps version 2.0 never actually gets released, but is used as an elusive goal and a parking place for features that are good, but not good enough to delay the release of a working version.
A: The right question to ask is 'How can I give the developers a stable environment, while still responding only to high benefit feature requests.' A SCRUM like approach would be:
Stable environment:
Have the developers work on a small fixed set of features during a small fixed iteration interval.
Responding only to high benefit feature requests:
One person maintains a list of prioritized features. New features can always be added (Cuts down a lot of politics). However the features selected for the next iteration only are the high priority items.
A: Communication is key. In a relationship with a client, it must be clear to them that when a plan is created with a set of features, that is the set of features. If creep still happens, it is the fault of those interacting with the client, who are either misleading the client or are somehow intimidated by the client.
As for developers contributing to feature creep, the key is to find a balance between making decisions on implementation and outright adding new features. Again, communicating with the developer on a regular basis will likely curb an issue here.
A: It might not be possible to avoid all feature requests.
But try assigning a cost for each feature request. When the next planning meeting or deciding the features for the next release comes around this will help to weed out the unnecessary ones.
A: IF you're not the manager or owner of the project, I prescribe the following:
If they want it, do it. Make sure they pay you on payday. I've learned that sometimes the battle to get things to conform to what you would like, isn't worth fighting. Enjoy life, after work and plan & code your own personal projects that do things the right way.
A: The answer to your question is broader than just GUIs. Feature/Scope creep will always happen, when someone isn't paying attention to what the contract has stipulated and when there isn't a formal process for handling change requests.
If you lack the ability to implement the formal process or influence its creation, I suggest you get all feature change requests documented in email, and that you notify your management of the possible consequences in email. This isn't to get anyone, but rather to protect yourself from the fallout of the eventual failure.
A: At some point, you have to ship something. Assuming you're going through some sort of a formal test process, as long as the product continues to change, testing is never going to be able to sign off on a working product.
It helps to come up with a timeline describing what features will be released and when. That way the people pushing for the new features have some idea that their requests will be handled. It doesn't mean they're going to be handled right now, but it should provide them some reassurances that the next version will address their concerns.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Initializing an array on arbitrary starting index in c# Is it possible in C# to initialize an array at, for example, subindex 1?
I'm working with Office interop, and every property is an object array that starts at 1 (I assume it was originally programmed in VB.NET), and you cannot modify it; you have to set the entire array for it to accept the changes.
As a workaround I am cloning the original array, modifying that one, and setting it as a whole when I'm done.
But I was wondering if it was possible to create a new non-zero-based array.
A: You can use Array.CreateInstance.
See Array Types in .NET
A: It is possible to do as you request; see the code below.
// Construct an array containing ints that has a length of 10 and a lower bound of 1
Array lowerBoundArray = Array.CreateInstance(typeof(int), new int[1] { 10 }, new int[1] { 1 });
// insert 1 into position 1
lowerBoundArray.SetValue(1, 1);
//insert 2 into position 2
lowerBoundArray.SetValue(2, 2);
// Throws IndexOutOfRangeException: the lower bound of the array
// is 1 and we are attempting to write into 0
lowerBoundArray.SetValue(1, 0);
A: Not simply. But you can certainly write your own class. It would have an array as a private variable, and the user would think his array starts at 1, but really it starts at zero and you're subtracting 1 from all of his array accesses.
A: You can write your own array class
A: I don't think it's possible to modify the starting index of arrays.
I would create my own array class using generics and handle the offset inside.
A: Just keep a const int named 'offset' with a value of one, and always add that to your subscripts in your code.
A: I don't think you can create non-zero-based arrays in C#, but you could easily write a wrapper class of your own around the built-in data structures. This wrapper class would hold a private instance of the array type you require; overloading the [] operator is not allowed, but you can add an indexer to a class to make it behave like an indexable array, see here. The index function you write could then add (or subtract) 1 to all indexes passed in.
You could then use your object as follows, and it would behave correctly:
myArrayObject[1]; //would return the zeroth element.
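For illustration, here is a minimal sketch of such a wrapper; the name OneBasedArray and the use of generics are assumptions, not part of the original answer:
public class OneBasedArray<T>
{
    private readonly T[] items;
    public OneBasedArray(int length)
    {
        // The backing store stays 0-based; only the indexer is 1-based
        items = new T[length];
    }
    // Indexer translates the caller's 1-based index to the 0-based array
    public T this[int index]
    {
        get { return items[index - 1]; }
        set { items[index - 1] = value; }
    }
    public int Length
    {
        get { return items.Length; }
    }
}
With this, myArrayObject[1] maps to items[0], giving exactly the behaviour described above.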
A: In VB6 you could change arrays to start at 0 or 1, so I think VBScript can do the same. For C#, it's not possible, but you can simply put a null value at index [0] and start the real values at [1]. Of course, this is a little dangerous...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Before and After Suite execution hook in jUnit 4.x I'm trying to perform setup and teardown for a set of integration tests, using jUnit 4.4 to execute the tests. The teardown needs to be run reliably. I'm having other problems with TestNG, so I'm looking to port back to jUnit. What hooks are available for execution before any tests are run and after all tests have completed?
Note: we're using maven 2 for our build. I've tried using maven's pre- & post-integration-test phases, but, if a test fails, maven stops and doesn't run post-integration-test, which is no help.
A: Using annotations, you can do something like this:
import org.junit.*;
import static org.junit.Assert.*;
import java.util.*;
class SomethingUnitTest {
@BeforeClass
public static void runBeforeClass()
{
}
@AfterClass
public static void runAfterClass()
{
}
@Before
public void setUp()
{
}
@After
public void tearDown()
{
}
@Test
public void testSomethingOrOther()
{
}
}
A: A colleague of mine suggested the following: you can use a custom RunListener and implement the testRunFinished() method: http://junit.sourceforge.net/javadoc/org/junit/runner/notification/RunListener.html#testRunFinished(org.junit.runner.Result)
To register the RunListener just configure the surefire plugin as follows:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/junit.html section "Using custom listeners and reporters"
This configuration should also be picked by the failsafe plugin.
This solution is great because you don't have to specify suites, look up test classes or any of this stuff - it lets Maven do its magic, waiting for all tests to finish.
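For illustration, a minimal sketch of such a listener (the class name is hypothetical; testRunStarted and testRunFinished are the standard JUnit 4 RunListener callbacks):
import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.RunListener;
public class GlobalSetupTeardownListener extends RunListener {
    @Override
    public void testRunStarted(Description description) throws Exception {
        // Runs once, before any test has started
    }
    @Override
    public void testRunFinished(Result result) throws Exception {
        // Runs once, after all tests have finished
    }
}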
A: Here, we
*
*upgraded to JUnit 4.5,
*wrote annotations to tag each test class or method which needed a working service,
*wrote handlers for each annotation which contained static methods to implement the setup and teardown of the service,
*extended the usual Runner to locate the annotations on tests, adding the static handler methods into the test execution chain at the appropriate points.
A: As for "Note: we're using maven 2 for our build. I've tried using maven's pre- & post-integration-test phases, but, if a test fails, maven stops and doesn't run post-integration-test, which is no help."
you can try the failsafe-plugin instead, I think it has the facility to ensure cleanup occurs regardless of setup or intermediate stage status
A: Provided that all your tests extend a "technical" class and are in the same package, you can do a little trick:
public class AbstractTest {
private static int nbTests = listClassesIn(<package>).size();
private static int curTest = 0;
@BeforeClass
public static void incCurTest() { curTest++; }
@AfterClass
public static void closeTestSuite() {
if (curTest == nbTests) { /*cleaning*/ }
}
}
public class Test1 extends AbstractTest {
@Test
public void check() {}
}
public class Test2 extends AbstractTest {
@Test
public void check() {}
}
Be aware that this solution has a lot of drawbacks:
*
*must execute all tests of the package
*must subclass a "technical" class
*you cannot use @BeforeClass and @AfterClass inside subclasses
*if you execute only one test in the package, cleaning is not done
*...
For information: listClassesIn() => How do you find all subclasses of a given class in Java?
A: You can use the @ClassRule annotation in JUnit 4.9+ as I described in an answer another question.
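For illustration, a minimal sketch of the @ClassRule approach (the suite and rule names are hypothetical); attached to a suite, the rule runs once before and once after all the suite's classes:
import org.junit.ClassRule;
import org.junit.rules.ExternalResource;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;
@RunWith(Suite.class)
@SuiteClasses({Test1.class, Test2.class})
public class SuiteWithClassRule {
    @ClassRule
    public static ExternalResource resource = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            // One-time setup before any suite class runs
        }
        @Override
        protected void after() {
            // One-time teardown after all suite classes have run
        }
    };
}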
A: Yes, it is possible to reliably run set up and tear down methods before and after any tests in a test suite. Let me demonstrate in code:
package com.test;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;
@RunWith(Suite.class)
@SuiteClasses({Test1.class, Test2.class})
public class TestSuite {
@BeforeClass
public static void setUp() {
System.out.println("setting up");
}
@AfterClass
public static void tearDown() {
System.out.println("tearing down");
}
}
So your Test1 class would look something like:
package com.test;
import org.junit.Test;
public class Test1 {
@Test
public void test1() {
System.out.println("test1");
}
}
...and you can imagine that Test2 looks similar. If you ran TestSuite, you would get:
setting up
test1
test2
tearing down
So you can see that the set up/tear down only run before and after all tests, respectively.
The catch: this only works if you're running the test suite, and not running Test1 and Test2 as individual JUnit tests. You mentioned you're using maven, and the maven surefire plugin likes to run tests individually, and not part of a suite. In this case, I would recommend creating a superclass that each test class extends. The superclass then contains the annotated @BeforeClass and @AfterClass methods. Although not quite as clean as the above method, I think it will work for you.
As for the problem with failed tests, you can set maven.test.error.ignore so that the build continues on failed tests. This is not recommended as a continuing practice, but it should get you functioning until all of your tests pass. For more detail, see the maven surefire documentation.
A: As far as I know there is no mechanism for doing this in JUnit, however you could try subclassing Suite and overriding the run() method with a version that does provide hooks.
A: The only way I think then to get the functionality you want would be to do something like
import junit.framework.Test;
import junit.framework.TestResult;
import junit.framework.TestSuite;
public class AllTests {
public static Test suite() {
TestSuite suite = new TestSuite("TestEverything");
//$JUnit-BEGIN$
suite.addTestSuite(TestOne.class);
suite.addTestSuite(TestTwo.class);
suite.addTestSuite(TestThree.class);
//$JUnit-END$
return suite;
}
public static void main(String[] args)
{
AllTests test = new AllTests();
Test testCase = test.suite();
TestResult result = new TestResult();
test.setUp();
testCase.run(result);
test.tearDown();
}
public void setUp() {}
public void tearDown() {}
}
I use something like this in eclipse, so I'm not sure how portable it is outside of that environment
A: Since maven-surefire-plugin does not run the Suite class first but treats suite and test classes the same, we can configure the plugin as below to include only suite classes and exclude all the individual tests. The suite will then run all the tests.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.5</version>
<configuration>
<includes>
<include>**/*Suite.java</include>
</includes>
<excludes>
<exclude>**/*Test.java</exclude>
<exclude>**/*Tests.java</exclude>
</excludes>
</configuration>
</plugin>
A: If you don't want to create a suite and list all your test classes, you can use reflection to find the number of test classes dynamically and count down in a base class @AfterClass to do the tearDown only once:
import java.lang.reflect.Field;
import java.util.Vector;
import org.junit.AfterClass;
import org.junit.runners.BlockJUnit4ClassRunner;
public class BaseTestClass
{
private static int testClassToRun = 0;
// Counting the classes to run so that we can do the tear down only once
static {
try {
Field field = ClassLoader.class.getDeclaredField("classes");
field.setAccessible(true);
@SuppressWarnings({ "unchecked", "rawtypes" })
Vector<Class> classes = (Vector<Class>) field.get(BlockJUnit4ClassRunner.class.getClassLoader());
for (Class<?> clazz : classes) {
if (clazz.getName().endsWith("Test")) {
testClassToRun++;
}
}
} catch (Exception ignore) {
}
}
// Setup that needs to be done only once
static {
// one time set up
}
@AfterClass
public static void baseTearDown() throws Exception
{
if (--testClassToRun == 0) {
// one time clean up
}
}
}
If you prefer to use @BeforeClass instead of the static blocks, you can also use a boolean flag to do the reflection count and test setup only once at the first call. Hope this helps someone, it took me an afternoon to figure out a better way than enumerating all classes in a suite.
Now all you need to do is extend this class for all your test classes. We already had a base class to provide some common stuff for all our tests so this was the best solution for us.
Inspiration comes from this SO answer https://stackoverflow.com/a/37488620/5930242
If you don't want to extend this class everywhere, this last SO answer might do what you want.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85"
}
|
Q: Using Yahoo! Pipes Have you used pipes.yahoo.com to quickly and easily do... anything? I've recently created a quick mashup of StackOverflow tags (via rss) so that I can browse through new questions in fields I like to follow.
This has been around for some time, but I've just recently revisited it and I'm completely impressed with its ease of use. It's almost to the point where I could set up a pipe and then give a client privileges to go in and edit feed sources... and I didn't have to write more than a few lines of code.
So, what other practical uses can you think of for pipes?
A: It's nice for aggregating feeds, yes, but the other handy thing to do is filtering the feeds. A while back, I created a feed for Digg (before Digg fell into the Fark pit of despair). I didn't care about the overwhelming Apple and Ubuntu news, so I filtered those keywords out of Technology, which I then combined with Science and World & Business feeds.
Anyway, you can do a lot more than just combine things. If you wanted to be smart about it, you could set up per-subfeed and whole-feed filters to give granular or over-arching filtering abilities as the news changes and you get bored with one topic or another.
A: The one thing I have really used Y! Pipes for (rather than just playing around with it) is to clean up item titles, merge and finally de-dupe the feeds I got from querying multiple blog search engines with the same search term. This is something I’ve done in several very different contexts, eg. for my own ego surfing, in another case for the planet site set up by some conference’s organisers to keep an eye on their conference’s buzz, etc. Highly recommended.
A: Well, pipes are real fast and useful.
Other effective uses might be:
1) combine many feeds into one, then sort, filter and translate it.
2) geocode your favorite feeds and browse the items on an interactive map.
3) power widgets/badges on your web site.
4) grab the output of any Pipes as RSS, JSON, KML, and other formats.
This is by no means a comprehensive list.
A: You can do tons of things with pipes. For example for sites like digg or reddit, you can make one to bypass the site and go directly to the linked article (rewriting the RSS).
I like also to filter webcomics' feeds to keep just the comics, and then mix them all in only one feed
A: I've taken the liberty of copying your pipe and rearranging it a bit so that it's easier to add and remove tags:
Yahoo Pipe: StackOverflow Merge Tags
Tags are now listed in a string builder, so to add a tag you just have to hit the + button on the string builder and type in the tag preceded by a slash.
A: One of my favorite things to do with Yahoo! Pipes is to aggregate multiple craigslist feeds into a single feed. You can make a feed out of any category or search criteria on craigslist. I live in a university town and am always on the lookout for tickets to sporting events, for example. I have a half-dozen craigslist searches all being combined into a single feed via Yahoo! Pipes. This works a lot better for me than simply monitoring the entire "Tickets" category; filters out most of the tickets I am not interested in. Yes, this is another aggregating feeds example, but the craigslist usage is quite valuable with the ability to aggregate feeds that are themselves based upon searches.
A: I've used Pipes to translate blogs into English. I would have liked to use it to fetch the full text for blogs which only provide a summary of the content in the feed, but unfortunately they don't provide any input which fetches the content from a parameterizable source :-(.
A: Just stumbled on this while looking for ways to connect Excel to Pipes. A bit necromancer-ish, but here goes.
One thing I've done, is take an HTML page (science data) which has links to tons of CSV files for a bunch of Army Corps measurement stations. Each station has a big table of datafiles, all organized individually by month and year. I use YQL to parse out and organize the links to the individual CSV files in a way that Pipes can read them. Then, I use that as input into a Pipe, which has a user input for "Station" and "Date."
Using this, I can go to the Pipes page, type in those values and get the values only for a specific station and date, rather than have to find the station on a website, find the year and month in a big table, click the link, open the CSV file, and find the values for a day within that month's worth of data. I can even change the pipe to specify the hour, and the parameter, and then get a single value returned.
Now, I wish I could figure out how to program Excel so that I can use "=yahoo_function(station, datetime)" to place that value automatically into a cell give the values of other columns!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Subscription Parameters in SQL Server Reporting Services 2005 When I subscribe for a report, I may chose to have a subject like: @ReportName was executed at: @ExecutionTime
I would like a name like this: Your "@ReportName" report covering Sep 10 2008 - Sep 16 2008
Sep 10 2008 - Sep 16 2008 are values of the two report parameters: @DateFrom and @DateTo, respectively.
Can I specify something like @ReportParameters!DateFrom as my subject?
A: Check out this article. The author shows how to execute a data driven subscription from code and provides a stored procedure for doing so. The stored procedure allows you to specify the email body so if you know the report parameters before running the report you could populate them before calling the procedure. I'm not sure if his procedure covers email subjects, but perhaps you could take what he has done and modify it.
Also, I just found this MSDN forum post with a response from MSFT confirming that a data-driven subscription is the way to accomplish this.
A: I don't believe anything other than those 2 parameters are available to report subscription emails (at least in SSRS 2005).
You may be able to do something via a data-driven subscription, but the values you want to use need to be in the data source used for the subscription data - SSRS is still not able to collect data in the report itself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to automatically remove trailing whitespace in Visual Studio 2008? Is it possible to configure Visual Studio 2008 to automatically remove whitespace characters at the end of each line when saving a file? There doesn't seem to be a built-in option, so are there any extensions available to do this?
A: Taking elements from all the answers already given, here's the code I ended up with. (I mainly write C++ code, but it's easy to check for different file extensions, as needed.)
Thanks to everyone who contributed!
Private Sub DocumentEvents_DocumentSaved(ByVal document As EnvDTE.Document) _
Handles DocumentEvents.DocumentSaved
Dim fileName As String
Dim result As vsFindResult
Try
fileName = document.Name.ToLower()
If fileName.EndsWith(".cs") _
Or fileName.EndsWith(".cpp") _
Or fileName.EndsWith(".c") _
Or fileName.EndsWith(".h") Then
' Remove trailing whitespace
result = DTE.Find.FindReplace( _
vsFindAction.vsFindActionReplaceAll, _
"{:b}+$", _
vsFindOptions.vsFindOptionsRegularExpression, _
String.Empty, _
vsFindTarget.vsFindTargetFiles, _
document.FullName, _
"", _
vsFindResultsLocation.vsFindResultsNone)
If result = vsFindResult.vsFindResultReplaced Then
' Triggers DocumentEvents_DocumentSaved event again
document.Save()
End If
End If
Catch ex As Exception
MsgBox(ex.Message, MsgBoxStyle.OkOnly, "Trim White Space exception")
End Try
End Sub
A: Find/Replacing using Regular Expressions
In the Find and Replace dialog, expand Find Options, check Use, choose Regular expressions
Find What: ":Zs#$"
Replace with: ""
click Replace All
In other editors (with a standard regular expression parser), ":Zs#$" would be "\s+$".
A: CodeMaid is a very popular Visual Studio extension and does this automatically along with other useful cleanups.
*
*Download: https://github.com/codecadwallader/codemaid/releases/tag/v0.4.3
*Modern Download: https://marketplace.visualstudio.com/items?itemName=SteveCadwallader.CodeMaid
*Documentation: http://www.codemaid.net/documentation/#cleaning
I set it to clean up a file on save, which I believe is the default.
A: I personally love the Trailing Whitespace Visualizer Visual Studio extension which has support back through Visual Studio 2012.
A: You can use a macro like described in Removing whitespace and rewriting comments, using regex searches
A: You can create a macro that executes after a save to do this for you.
Add the following into the EnvironmentEvents Module for your macros.
Private saved As Boolean = False
Private Sub DocumentEvents_DocumentSaved(ByVal document As EnvDTE.Document) _
Handles DocumentEvents.DocumentSaved
If Not saved Then
Try
DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, _
"\t", _
vsFindOptions.vsFindOptionsRegularExpression, _
" ", _
vsFindTarget.vsFindTargetCurrentDocument, , , _
vsFindResultsLocation.vsFindResultsNone)
' Remove all the trailing whitespaces.
DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, _
":Zs+$", _
vsFindOptions.vsFindOptionsRegularExpression, _
String.Empty, _
vsFindTarget.vsFindTargetCurrentDocument, , , _
vsFindResultsLocation.vsFindResultsNone)
saved = True
document.Save()
Catch ex As Exception
MsgBox(ex.Message, MsgBoxStyle.OkOnly, "Trim White Space exception")
End Try
Else
saved = False
End If
End Sub
I've been using this for some time now without any problems. I didn't create the macro, but modified it from the one in ace_guidelines.vsmacros which can be found with a quick google search.
A: Unless this is a one-person project, don't do it. It's got to be trivial to diff your local files against your source code repository, and clearing whitespace would change lines you don't need to change. I totally understand; I love to get my whitespace all uniform – but this is something you should give up for the sake of cleaner collaboration.
A: I am using VWD 2010 Express where macros are not supported, unfortunately. So I just do copy/paste into Notepad++ top left menu Edit > Blank Operations > Trim Trailing Space there are other related operations available too. Then copy/paste back into Visual Studio.
One can also use NetBeans instead of Notepad++, which has "Remove trailing spaces" under the "Source" menu.
A: Before saving you may be able to use the auto-format shortcut CTRL+K+D.
A: I think that the Jeff Muir version could be a little improved if it only trims source code files (in my case C#, but is easy to add more extensions). Also I added a check to ensure that the document window is visible because some situations without that check show me strange errors (LINQ to SQL files '*.dbml', for example).
Private Sub DocumentEvents_DocumentSaved(ByVal document As EnvDTE.Document) Handles DocumentEvents.DocumentSaved
Dim result As vsFindResult
Try
If (document.ActiveWindow Is Nothing) Then
Return
End If
If (document.Name.ToLower().EndsWith(".cs")) Then
document.Activate()
result = DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, ":Zs+$", vsFindOptions.vsFindOptionsRegularExpression, String.Empty, vsFindTarget.vsFindTargetCurrentDocument, , , vsFindResultsLocation.vsFindResultsNone)
If result = vsFindResult.vsFindResultReplaced Then
document.Save()
End If
End If
Catch ex As Exception
MsgBox(ex.Message & Chr(13) & "Document: " & document.FullName, MsgBoxStyle.OkOnly, "Trim White Space exception")
End Try
End Sub
A: You can do this easily with these three actions:
*
*Ctrl + A (select all text)
*Edit -> Advanced -> Delete Horizontal Whitespace
*Edit -> Advanced -> Format Selection
Wait a few seconds and done.
It's Ctrl + Z'able in case something went wrong.
A: I use ArtisticStyle (C++) to do this and also reformat my code. However, I had to add this as an external tool and you need to trigger it yourself so you might not like it.
However, I find it excellent that I can reformat code in a more customised way (for example, multiline function parameters), so I'm happy to pay the price of running it manually. The tool is free.
A: Building on Dyaus's answer and a regular expression from a connect report, here's a macro that handles save all, doesn't replace tabs with spaces, and doesn't require a static variable. Its possible downside? It seems a little slow, perhaps due to multiple calls to FindReplace.
Private Sub DocumentEvents_DocumentSaved(ByVal document As EnvDTE.Document) _
Handles DocumentEvents.DocumentSaved
Try
' Remove all the trailing whitespaces.
If vsFindResult.vsFindResultReplaced = DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, _
"{:b}+$", _
vsFindOptions.vsFindOptionsRegularExpression, _
String.Empty, _
vsFindTarget.vsFindTargetFiles, _
document.FullName, , _
vsFindResultsLocation.vsFindResultsNone) Then
document.Save()
End If
Catch ex As Exception
MsgBox(ex.Message, MsgBoxStyle.OkOnly, "Trim White Space exception")
End Try
End Sub
For anyone else trying to use this in a Visual Studio 2012 add-in, the regular expression I ended up using is [ \t]+(?=\r?$) (don't forget to escape the backslashes if necessary). I arrived here after several futile attempts to fix the problems with a raw conversion of {:b}+$ failing to match the carriage return.
A: I think I have a version of this macro that won't crash VS2010 on refactor, and also won't hang the IDE when saving non-text files. Try this:
Private Sub DocumentEvents_DocumentSaved( _
ByVal document As EnvDTE.Document) _
Handles DocumentEvents.DocumentSaved
' See if we're saving a text file
Dim textDocument As EnvDTE.TextDocument = _
TryCast(document.Object(), EnvDTE.TextDocument)
If textDocument IsNot Nothing Then
' Perform search/replace on the text document directly
' Convert tabs to spaces
Dim convertedTabs = textDocument.ReplacePattern("\t", " ", _
vsFindOptions.vsFindOptionsRegularExpression)
' Remove trailing whitespace from each line
Dim removedTrailingWS = textDocument.ReplacePattern(":Zs+$", "", _
vsFindOptions.vsFindOptionsRegularExpression)
' Re-save the document if either replace was successful
' (NOTE: Should recurse only once; the searches will fail next time)
If convertedTabs Or removedTrailingWS Then
document.Save()
End If
End If
End Sub
A: This is a really good example of how to remove trailing whitespace. There are a few things that I would change based on what I discovered using this macro. First of all, the macro automatically converts tabs to spaces. This is not always desirable and could lead to making things worse for people that love tabs (typically Linux-based). The tab problem is not really the same as the extra whitespace problem anyways.
Secondly, the macro assumes only one file is being saved at once. If you save multiple files at once, it will not correctly remove the whitespace. The reason is simple. The current document is considered the document you can see.
Third, it does no error checking on the find results. These results can give better intelligence about what to do next. For example, if no whitespace is found and replaced, there is no need to save the file again. In general, I did not like the need for the global flag for being saved or not. It tends to ask for trouble based on unknown states. I suspect the flag had been added solely to prevent an infinite loop.
Private Sub DocumentEvents_DocumentSaved(ByVal document As EnvDTE.Document) _
Handles DocumentEvents.DocumentSaved
Dim result As vsFindResult
'Dim nameresult As String
Try
document.Activate()
' Remove all the trailing whitespaces.
result = DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, _
":Zs+$", _
vsFindOptions.vsFindOptionsRegularExpression, _
String.Empty, _
vsFindTarget.vsFindTargetCurrentDocument, , , _
vsFindResultsLocation.vsFindResultsNone)
'nameresult = document.Name & " " & Str$(result)
'MsgBox(nameresult, , "Filename and result")
If result = vsFindResult.vsFindResultReplaced Then
'MsgBox("Document Saved", MsgBoxStyle.OkOnly, "Saved Macro")
document.Save()
Else
'MsgBox("Document Not Saved", MsgBoxStyle.OkOnly, "Saved Macro")
End If
Catch ex As Exception
MsgBox(ex.Message, MsgBoxStyle.OkOnly, "Trim White Space exception")
End Try
End Sub
I added debug message boxes to help see what was going on. It made it very clear that multiple file save was not working. If you want to play with them, uncomment those lines.
The key difference is using document.Activate() to force the document into the foreground active current document. If the result is 4, that means that the text was replaced. Zero means nothing happened. You will see two saves for every file. The first will replace and the second will do nothing. Potentially there could be trouble if the save cannot write the file but hopefully this event will not get called if that happens.
Before the original script, I was unaware of how the scripting worked in Visual Studio. It is slightly surprising that it uses Visual Basic as the main interface but it works just fine for what it needs to do.
A: A simple addition is to remove carriage returns during the save.
' Remove all the carriage returns.
result = DTE.Find.FindReplace(vsFindAction.vsFindActionReplaceAll, _
"\x000d\x000a", _
vsFindOptions.vsFindOptionsRegularExpression, _
"\x000a", _
vsFindTarget.vsFindTargetCurrentDocument, , , _
vsFindResultsLocation.vsFindResultsNone)
The key to this working is changing \x000d\x000a to \x000a. The \x prefix indicates a Unicode pattern. This will automate the process of getting source files ready for Linux systems.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "144"
}
|
Q: Inline displayed blocks form a single word in IE The problem is that I have several "h2" tags that have a display:inline attribute, and on Microsoft's wonderful browsers the space between them doesn't appear. Is there a workaround?
I know there is a "non-breaking space" in HTML but I was wondering if one can make a space that may be a "breaking space".
--- edit ---
The website is http://newstoday.ro and the behaviour is in the footer. If the site is opened in IE the list is continuous, even though there is a space between the words. Please don't comment on the rest of the code, as I am just the plumber in this situation. Also, the headings are a must, as the client thinks they are better for SEO.
A: I can't think of a rationale for why you're wanting h2's to display inline. In fact, why would you want two headers to read together? Think of the way it should be read. Do you want it to read:
"Header one header two"
or:
"Header One"
"Header Two"
If it's the first way, then it's probably your HTML that's messed up. If it's the second, then you should probably think about its positioning rather than changing its behavior, and utilize other CSS methods like float and position.
A: You can just use a regular space, but add "margin-right:_ px" to the h2 css definition to adjust the spacing between tags. Negative values are allowed too.
A: Have you tried setting the "margin" property? Not sure if that directly applies to your question.
A: Throwing an &nbsp; in there seems to create a space:
<html>
<head><title>Blah</title></head>
<body>
<h2 style="display:inline;">Something</h2>&nbsp;
<h2 style="display:inline;">Something Else</h2>
</body>
</html>
In this example, you actually end up with 2 spaces, so you might want to eliminate the whitespace between the tags and the &nbsp; if you require only one space. Another option would be to add a left/right margin to the header element.
A: Apply the style margin: 0 0.5em to both headers - adjust 0.5 to suit (maybe 0.25 or 0.75 is better; also the first 0 is top/bottom margin, adjust as relevant).
Note: Since you want a character space, you want em not px as suggested earlier.
Complete example code...
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Inline Header Example</title>
<style type="text/css">
h2
{
display: inline;
margin: 0 0.5em;
}
</style>
</head>
<body>
<h2>First Header</h2>
<h2>Second Header</h2>
</body>
</html>
A: Well, the breaking space is just a space, you know, a " " without the quotes...
A: The answer is that it's not possible. You mean you want text that's in a larger block of text to flow just like the rest of it, as if the tag were <strong> instead of <h2>.
Since h2 is a block level element no matter how you style it, some browsers (cough) will choke on your attempt to flow it inline with other text.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Windows CD Burning API We need to programmatically burn files to CD in a C/C++ Windows XP/Vista application we are developing using Borland's Turbo C++.
What is the simplest and best way to do this? We would prefer a native Windows API (that doesn't rely on MFC) so as not to rely on any third-party software/drivers, if one is available.
A: To complement the accepted answer, we added this helper function to programmatically change the burn directory on the fly, as this was a requirement of ours.
typedef HRESULT (WINAPI * SHSETFOLDERPATHA)( int , HANDLE , DWORD , LPCTSTR );
int SetBurnPath( char * cpPath )
{
SHSETFOLDERPATHA pSHSetFolderPath;
HMODULE hShell = LoadLibraryA( "shell32.dll" );
if( hShell == NULL )
return -2;
DWORD dwOrdinal = 0x00000000 + 231;
pSHSetFolderPath = (SHSETFOLDERPATHA)GetProcAddress( hShell, (LPCSTR)dwOrdinal );
if( pSHSetFolderPath == NULL )
return -3;
if( pSHSetFolderPath( CSIDL_CDBURN_AREA, NULL, 0, cpPath ) == S_OK )
return 0;
return -1;
}
A: We used the following:
Store files in the directory returned by GetBurnPath, then write using Burn. GetCDRecordableInfo is used to check when the CD is ready.
#include <stdio.h>
#include <imapi.h>
#include <windows.h>
struct MEDIAINFO {
BYTE nSessions;
BYTE nLastTrack;
ULONG nStartAddress;
ULONG nNextWritable;
ULONG nFreeBlocks;
};
//==============================================================================
// Description: CD burning on Windows XP
//==============================================================================
#define CSIDL_CDBURN_AREA 0x003b
SHSTDAPI_(BOOL) SHGetSpecialFolderPathA(HWND hwnd, LPSTR pszPath, int csidl, BOOL fCreate);
SHSTDAPI_(BOOL) SHGetSpecialFolderPathW(HWND hwnd, LPWSTR pszPath, int csidl, BOOL fCreate);
#ifdef UNICODE
#define SHGetSpecialFolderPath SHGetSpecialFolderPathW
#else
#define SHGetSpecialFolderPath SHGetSpecialFolderPathA
#endif
//==============================================================================
// Interface IDiscMaster
const IID IID_IDiscMaster = {0x520CCA62,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};
const CLSID CLSID_MSDiscMasterObj = {0x520CCA63,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};
typedef interface ICDBurn ICDBurn;
// Interface ICDBurn
const IID IID_ICDBurn = {0x3d73a659,0xe5d0,0x4d42,{0xaf,0xc0,0x51,0x21,0xba,0x42,0x5c,0x8d}};
const CLSID CLSID_CDBurn = {0xfbeb8a05,0xbeee,0x4442,{0x80,0x4e,0x40,0x9d,0x6c,0x45,0x15,0xe9}};
MIDL_INTERFACE("3d73a659-e5d0-4d42-afc0-5121ba425c8d")
ICDBurn : public IUnknown
{
public:
virtual HRESULT STDMETHODCALLTYPE GetRecorderDriveLetter(
/* [size_is][out] */ LPWSTR pszDrive,
/* [in] */ UINT cch) = 0;
virtual HRESULT STDMETHODCALLTYPE Burn(
/* [in] */ HWND hwnd) = 0;
virtual HRESULT STDMETHODCALLTYPE HasRecordableDrive(
/* [out] */ BOOL *pfHasRecorder) = 0;
};
//==============================================================================
// Description: Get burn pathname
// Parameters: pathname - must be at least MAX_PATH in size
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int GetBurnPath(char *path)
{
ICDBurn* pICDBurn;
int ret = 0;
if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
BOOL flag;
if (pICDBurn->HasRecordableDrive(&flag) == S_OK) {
if (SHGetSpecialFolderPath(0, path, CSIDL_CDBURN_AREA, 0)) {
strcat(path, "\\");
}
else {
ret = 1;
}
}
else {
ret = 2;
}
pICDBurn->Release();
}
else {
ret = 3;
}
return ret;
}
//==============================================================================
// Description: Get CD pathname
// Parameters: pathname - must be at least 5 bytes in size
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int GetCDPath(char *path)
{
ICDBurn* pICDBurn;
int ret = 0;
if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
BOOL flag;
WCHAR drive[5];
if (pICDBurn->GetRecorderDriveLetter(drive, 4) == S_OK) {
sprintf(path, "%S", drive);
}
else {
ret = 1;
}
pICDBurn->Release();
}
else {
ret = 3;
}
return ret;
}
//==============================================================================
// Description: Burn CD
// Parameters: None
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int Burn(void)
{
ICDBurn* pICDBurn;
int ret = 0;
if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
if (pICDBurn->Burn(NULL) != S_OK) {
ret = 1;
}
pICDBurn->Release();
}
else {
ret = 2;
}
return ret;
}
//==============================================================================
bool GetCDRecordableInfo(long *FreeSpaceSize)
{
bool Result = false;
IDiscMaster *idm = NULL;
IDiscRecorder *idr = NULL;
IEnumDiscRecorders *pEnumDiscRecorders = NULL;
ULONG cnt;
long type;
long mtype;
long mflags;
MEDIAINFO mi;
try {
CoCreateInstance(CLSID_MSDiscMasterObj, 0, CLSCTX_ALL, IID_IDiscMaster, (void**)&idm);
idm->Open();
idm->EnumDiscRecorders(&pEnumDiscRecorders);
pEnumDiscRecorders->Next(1, &idr, &cnt);
pEnumDiscRecorders->Release();
idr->OpenExclusive();
idr->GetRecorderType(&type);
idr->QueryMediaType(&mtype, &mflags);
idr->QueryMediaInfo(&mi.nSessions, &mi.nLastTrack, &mi.nStartAddress, &mi.nNextWritable, &mi.nFreeBlocks);
idr->Release();
idm->Close();
idm->Release();
Result = true;
}
catch (...) {
Result = false;
}
if (Result == true) {
Result = false;
if (mtype == 0) {
// No Media inserted
Result = false;
}
else {
if ((mflags & 0x04) == 0x04) {
// Writable Media
Result = true;
}
else {
Result = false;
}
if (Result == true) {
*FreeSpaceSize = (mi.nFreeBlocks * 2048);
}
else {
*FreeSpaceSize = 0;
}
}
}
return Result;
}
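For reference, a minimal usage sketch of the helpers above, with error handling elided; the staging-directory copy step is left as a comment since it depends on the application:
int main(void)
{
    CoInitialize(0);
    char stagingDir[MAX_PATH];
    long freeBytes = 0;
    if (GetBurnPath(stagingDir) == 0) {
        // ...copy the files to burn into stagingDir here...
        if (GetCDRecordableInfo(&freeBytes) && freeBytes > 0) {
            Burn(); // launches the shell's CD Writing Wizard
        }
    }
    CoUninitialize();
    return 0;
}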
A: This is the information for IMAPI on the MSDN site: http://msdn.microsoft.com/en-us/library/aa939967.aspx
A: You should be able to use the shell's ICDBurn interface. Back in the XP days, MFC didn't even have any classes for CD burning. I'll see if I can find some examples for you, but it's been a while since I looked at this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/82993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: What's the best/fastest/easiest way to collapse all projects in Visual Studio? I'm currently using DPack as this adds a "Collapse All Projects" option to the Solution node in Solution Explorer. It works pretty well but can take a while to execute and doesn't always collapse everything fully.
Are there any better alternatives? Preferably free and easy to install/setup. There are lots out there but which work best and don't have any bugs or performance issues.
A: I use the following Macro which works in Visual Studio 2005 and Visual Studio 2008:
*
*View > Other Windows > Macro Explorer (Alt+F8)
*Right-click on the MyMacros node in Macro Explorer
*New Module...
*Name it CollapseAll (or whatever you like)
*Replace the default code with the code shown below
*File > Save CollapseAll (Ctrl+S)
*Close the Macro editor
To set up a keyboard shortcut:
*
*Tools > Customize... > Commands
*Keyboard...
*Show commands containing: Macros.MyMacros.CollapseAll.CollapseAll
*Assign a keyboard shortcut (I use Alt+C)
Code
Imports System
Imports EnvDTE
Imports EnvDTE80
Imports System.Diagnostics
Public Module CollapseAll
Sub CollapseAll()
' Get the the Solution Explorer tree
Dim solutionExplorer As UIHierarchy
solutionExplorer = DTE.Windows.Item(Constants.vsext_wk_SProjectWindow).Object()
' Check if there is any open solution
If (solutionExplorer.UIHierarchyItems.Count = 0) Then
Return
End If
' Get the top node (the name of the solution)
Dim rootNode As UIHierarchyItem = solutionExplorer.UIHierarchyItems.Item(1)
rootNode.DTE.SuppressUI = True
' Collapse each project node
Collapse(rootNode, solutionExplorer)
' Select the solution node, or else when you click
' on the solution window
' scrollbar, it will synchronize the open document
' with the tree and pop
' out the corresponding node which is probably not what you want.
rootNode.Select(vsUISelectionType.vsUISelectionTypeSelect)
rootNode.DTE.SuppressUI = False
End Sub
Sub CollapseSelected()
' Get the the Solution Explorer tree
Dim solutionExplorer As UIHierarchy
solutionExplorer = DTE.Windows.Item(Constants.vsext_wk_SProjectWindow).Object()
' Check if there is any open solution
If (solutionExplorer.UIHierarchyItems.Count = 0) Then
Return
End If
' Get the top node (the name of the solution)
Dim selected As Array = solutionExplorer.SelectedItems
If (selected.Length = 0) Then Return
Dim rootNode As UIHierarchyItem = selected(0)
rootNode.DTE.SuppressUI = True
' Collapse each project node
Collapse(rootNode, solutionExplorer)
' Select the solution node, or else when you click
' on the solution window
' scrollbar, it will synchronize the open document
' with the tree and pop
' out the corresponding node which is probably not what you want.
rootNode.Select(vsUISelectionType.vsUISelectionTypeSelect)
rootNode.DTE.SuppressUI = False
End Sub
Private Sub Collapse(ByVal item As UIHierarchyItem, ByRef solutionExplorer As UIHierarchy)
For Each innerItem As UIHierarchyItem In item.UIHierarchyItems
If innerItem.UIHierarchyItems.Count > 0 Then
' Recursive call
Collapse(innerItem, solutionExplorer)
' Collapse
If innerItem.UIHierarchyItems.Expanded Then
innerItem.UIHierarchyItems.Expanded = False
If innerItem.UIHierarchyItems.Expanded = True Then
' Bug in VS 2005
innerItem.Select(vsUISelectionType.vsUISelectionTypeSelect)
solutionExplorer.DoDefaultAction()
End If
End If
End If
Next
End Sub
End Module
I didn't write this code and I'm not sure where this code came from, but there are variations of it online.
A: PowerCommands for Visual Studio will do the trick. I didn't notice any performance/stability issues with them.
A: For VS2005, I've been using CoolCommands 4.0. The feature description is more complete for the older 3.0 version.
Version 3 had an .msi installer. Version 4 is a .zip file (which was easier for my environment anyway).
My favorite features (a subset of the complete list):
*
*From the Solution explorer:
*
*Collapse All Projects
*Open containing folder (Project/file level only)
*From the filename tabs above the editor
*
*Locate in Solution Explorer
*From the context menu in the editor
*
*Demo Font
A: Here is a better list of features for CoolCommands 4.0.
To install it for VS 2005, execute the include setup.bat.
To install it for VS 2008, modify the following line from
regpkg CoolCommands.dll /codebase
to:
regpkg CoolCommands.dll /root:Software\Microsoft\VisualStudio\9.0 /codebase
A: PowerCommands for Visual Studio will work for both VS2008 and VS2010. It is the Microsoft-enabled way to do this quickly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Executing stored procedures with date parameters: Command Object vs Connection Object When supplying dates to a stored procedure via a parameter I'm a little confused over which format to use for the dates. My original VBA syntax used the ADO Connection object to execute the stored procedure:
Set SentDetailRS = Me.ADOConnectionToIntegrity.Execute("dbo.s_SelectAggregatedSentDetailList '" & fCSQLDate(EffectiveDate) & "'", , adCmdText)
This works fine for me using the date syntax yyyy-mm-dd but when another user executes the code they receive the error: 13 'Type Mismatch'.
After some experimentation I found that supplying the date in the format dd/mm/yyyy fixes this error for the user but now gives me the error!
Executing the stored procedure using a command object with parameters works regardless of the format of the date (I assume ADO is taking care of the formatting behind the scenes). I thought that using the format yyyy-mm-dd would work universally with SQL Server?
I'm also perplexed as to why this problem appears to be user specific? I noticed that my default language on SQL Server is 'English' whereas the other user's default language is 'British English', could that cause the problem?
I'm using ADO 2.8 with Access 2003 and SQL Server 2000, SQL Server login is via Windows integrated security.
A: Be careful, and do not believe that ADO is taking care of the problem. The universal SQL date format is 'YYYYMMDD', while both SQL Server and Access are influenced by the regional settings of the machine in the way they display dates and convert them to character strings.
Do not forget that the date delimiter is # in Access, while it is ' in SQL.
My best advice is to systematically convert your Access #MM-DD-YYYY# (or similar) into 'YYYYMMDD' before sending the instruction to your server. You could build a small function such as:
Public Function SQLDateFormat(x_date As Date) As String
    SQLDateFormat = _
        Trim(Str(DatePart("yyyy", x_date))) & _
        Right(Str(DatePart("m", x_date)), 2) & _
        Right(Str(DatePart("d", x_date)), 2)
    ' Be careful: Str() prefixes positive numbers with a space,
    ' so at this point you might have something like '2008 9 3'
    SQLDateFormat = Replace(SQLDateFormat, " ", "0")
    ' Now you have the expected '20080903'
End Function
If you do not programmatically build your INSERT/UPDATE string before sending it to the server, I would then advise you to align the regional settings of all the machines with those of the machine hosting SQL Server. You might also have to check whether there is a specific date format set on your SQL Server (I am not sure). Personally, I solved this kind of localisation problem (it also happens when a comma is used as the decimal separator in French) or SQL special-character problem (when quotes or double quotes are in a string) by pre-treating the SQL instructions before sending them to the server.
A: I would guess that fCSQLDate function is culture-specific - i.e. it will parse the date based on the user's locale settings. That's why you see the problem.
Anyway, using queries with concatenated strings is always a bad idea (injection attacks). You are better off if you use parameters.
A: Access uses # as the date field delimiter. The format should be #mm/dd/yyyy#; #mm-dd-yyyy# will probably also work fine.
A: Sorry, I don't know MySQL, but with Oracle I would always explicitly state the format I was expecting the date to be in, e.g. 'DD-MM-YYYY', to avoid (regional) date format problems.
A: Why not use the format
dd mmm yyyy
There is only one way it can be interpreted.
A: You can use the Date() function to return a universal date based on the machine's date and time settings. The region settings on the machine will determine how it is formatted on the client end. If you leave the field strictly as a DateTime field, then the client region settings can format the date.
Going into the server, using the Date() function should also work (returning a universal date value).
Also, use a command object and parameters in your query when you pass them, to avoid SQL injection attacks on string fields.
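For reference, a minimal sketch of that command-object approach in VBA, using the connection and stored procedure from the question (the parameter name is an assumption on my part):
' ADO converts the VBA Date value itself, so the user's regional
' settings no longer matter. "@EffectiveDate" is illustrative.
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
With cmd
    Set .ActiveConnection = Me.ADOConnectionToIntegrity
    .CommandText = "dbo.s_SelectAggregatedSentDetailList"
    .CommandType = adCmdStoredProc
    .Parameters.Append .CreateParameter("@EffectiveDate", adDBTimeStamp, adParamInput, , EffectiveDate)
End With
Dim SentDetailRS As ADODB.Recordset
Set SentDetailRS = cmd.Execute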
Q: Connecting to Oracle using PHP How do I connect to a remote Oracle database instance from PHP?
I need to query (read only) the remote Oracle database for some information; do I need to have an Oracle Instant Client installed?
Is the OCI extension for PHP enough?
A: From the PHP Manual:
*You will need the Oracle client libraries to use this extension.
*The most convenient way to install all the required files is to use Oracle Instant Client, which is available from Oracle's site.
A: The best manual for using PHP with Oracle is the Underground PHP Oracle Manual, which is periodically updated; for example, the latest update describes new OCI (Oracle Call Interface) features. I found it by accident and have never regretted it since. Start with that manual.
A: There are a couple of steps you need to go through to make this work.
First, you need to install the Oracle driver for whatever OS you have. Then, create a DSN for ODBC to use to connect the PHP function call to the Oracle database. On Windows, you can find this under Control Panel -> ODBC Sources.
Once you have done this, restart the DB and the web server, and then you should be able to test it all with this:
odbc_connect($dsn, $user, $pass);
If you have Linux, the same steps are needed, but I'm not sure how you create a DSN on Unix.
A: I saw this in the "Notes" section of the PHP documentation:
If you're using PHP with Oracle Instant Client, you can use easy connect naming method (...)
So I think it's rather clear that you can connect to an Oracle DB without the Oracle Instant Client, using only the PHP Oracle extension.
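As a rough sketch of what the OCI8 route looks like with the Easy Connect naming method (host, port, service name and credentials below are placeholders):
<?php
// Rough sketch only; replace the placeholders with your own values.
$conn = oci_connect('myuser', 'mypassword', '//dbhost.example.com:1521/MYSERVICE');
if (!$conn) {
    $e = oci_error();
    die('Connect failed: ' . $e['message']);
}
$stid = oci_parse($conn, 'SELECT sysdate FROM dual');
oci_execute($stid);
while ($row = oci_fetch_assoc($stid)) {
    print_r($row);
}
oci_free_statement($stid);
oci_close($conn);
?>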
A: If you're attempting to connect to Oracle on Ubuntu with PHP, the following links have been more than helpful:
A) http://pecl.php.net/bugs/bug.php?id=9253
That's the real workhorse one - it gives you just about all the data you need.
B) http://fabrizioballiano.net/2008/01/26/how-to-install-php-pdo_oci-on-ubuntu-gutsy/
This is also helpful for details of the things that need to be installed for Oracle to work with Ubuntu.
If you're using it with PHP, you'll need to make sure that the TNS_ADMIN and ORACLE_HOME environment variables are available to Apache's user - there's a file named 'envvars' in the apache2 directory where you can set these. (For my own ease of use, I have the two point to the same directory.)
Q: Command switch to toggle Notepad's word wrap I have a customer showing Notepad with a large set of data that looks totally misaligned if word wrap is on, and I want to force it off. Is there a command switch to do this?
A: I don't think there is a command switch to do this at all. If you want to force it off all the time, then you may want to edit the registry:
Hive: HKEY_CURRENT_USER
Key: SOFTWARE\Microsoft\Notepad
Name: fWrap
Type: REG_DWORD
Value: 0
You could even create a .reg file and put it in a batch file to run it and reset it every time notepad runs.
Usually though if you have word wrap turned off, when you open it up again, it will still be turned off.
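A rough sketch of that batch-file idea, using reg.exe (which ships with Windows) instead of a separate .reg file; the %1 file argument is illustrative:
:: Force word wrap off, then open the file passed on the command line.
reg add "HKCU\SOFTWARE\Microsoft\Notepad" /v fWrap /t REG_DWORD /d 0 /f
start "" notepad.exe %1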
A: You could just turn it off by going to Format -> Word Wrap.
A: I do not believe there is any command-line option to do that.
You can however set the default behavior by setting the registry-value HKEY_CURRENT_USER\Software\Microsoft\Notepad\fWrap to 0.
Depending on your exact requirements, you might be able to solve your problem by making a bat-file that modifies the registry before starting Notepad. That would be a rather large hack, though.
A: You could just use Wordpad instead of Notepad, it has word wrap off by default.
Q: Trip time calculation in relational databases? I had this question in mind and since I just discovered this site I decided to post it here.
Let's say I have a table with a timestamp and a state for a given "object" (generic meaning, not OOP object); is there an optimal way to calculate the time between a state and the next occurrence of another (or same) state (what I call a "trip") with a single SQL statement (inner SELECTs and UNIONs aren't counted)?
Ex: For the following, the trip time between Initial and Done would be 6 days, but between Initial and Review it would be 2 days.
2008-08-01 13:30:00 - Initial
2008-08-02 13:30:00 - Work
2008-08-03 13:30:00 - Review
2008-08-04 13:30:00 - Work
2008-08-05 13:30:00 - Review
2008-08-06 13:30:00 - Accepted
2008-08-07 13:30:00 - Done
No need to be generic; just say which DBMS your solution is specific to, if it is not generic.
A: Here's an Oracle methodology using an analytic function.
with data as (
SELECT 1 trip_id, to_date('20080801 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Initial' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080802 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Work' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080803 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Review' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080804 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Work' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080805 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Review' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080806 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Accepted' step from dual UNION ALL
SELECT 1 trip_id, to_date('20080807 13:30:00','YYYYMMDD HH24:mi:ss') dt, 'Done' step from dual )
select trip_id,
step,
dt - lag(dt) over (partition by trip_id order by dt) trip_time
from data
/
1 Initial
1 Work 1
1 Review 1
1 Work 1
1 Review 1
1 Accepted 1
1 Done 1
These are very commonly used in situations where traditionally we might use a self-join.
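For what it's worth, lag() is a standard SQL window function rather than an Oracle-only feature, so the core of this query carries over to other engines with window-function support (PostgreSQL, for example). A sketch, assuming the same "with data as (...)" block as above:
select trip_id,
       step,
       dt - lag(dt) over (partition by trip_id order by dt) as trip_time
from data;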
A: PostgreSQL syntax:
DROP TABLE ObjectState;
CREATE TABLE ObjectState (
object_id integer not null,--foreign key
event_time timestamp NOT NULL,
state varchar(10) NOT NULL,
--Other fields
CONSTRAINT pk_ObjectState PRIMARY KEY (object_id,event_time)
);
For a given state, find the first following state of a given type:
select parent.object_id,parent.event_time,parent.state,min(child.event_time) as ch_event_time,min(child.event_time)-parent.event_time as step_time
from
ObjectState parent
join ObjectState child on (parent.object_id=child.object_id and parent.event_time<child.event_time)
where
--Starting state
parent.object_id=1 and parent.event_time=to_timestamp('01-Aug-2008 13:30:00','dd-Mon-yyyy hh24:mi:ss')
--needed state
and child.state='Review'
group by parent.object_id,parent.event_time,parent.state;
This query is not the shortest possible, but it should be easy to understand and to use as part of other queries:
List events and their duration for given object
select parent.object_id,parent.event_time,parent.state,min(child.event_time) as ch_event_time,
CASE WHEN parent.state<>'Done' and min(child.event_time) is null THEN (select localtimestamp)-parent.event_time ELSE min(child.event_time)-parent.event_time END as step_time
from
ObjectState parent
left outer join ObjectState child on (parent.object_id=child.object_id and parent.event_time<child.event_time)
where parent.object_id=4
group by parent.object_id,parent.event_time,parent.state
order by parent.object_id,parent.event_time,parent.state;
List current states for objects that are not "done"
select states.object_id,states.event_time,states.state,(select localtimestamp)-states.event_time as step_time
from
(select parent.object_id,parent.event_time,parent.state,min(child.event_time) as ch_event_time,min(child.event_time)-parent.event_time as step_time
from
ObjectState parent
left outer join ObjectState child on (parent.object_id=child.object_id and parent.event_time<child.event_time)
group by parent.object_id,parent.event_time,parent.state) states
where
states.object_id not in (select object_id from ObjectState where state='Done')
and ch_event_time is null;
Test data
insert into ObjectState (object_id,event_time,state)
select 1,to_timestamp('01-Aug-2008 13:30:00','dd-Mon-yyyy hh24:mi:ss'),'Initial' union all
select 1,to_timestamp('02-Aug-2008 13:40:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 1,to_timestamp('03-Aug-2008 13:50:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 1,to_timestamp('04-Aug-2008 14:30:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 1,to_timestamp('04-Aug-2008 16:20:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 1,to_timestamp('06-Aug-2008 18:00:00','dd-Mon-yyyy hh24:mi:ss'),'Accepted' union all
select 1,to_timestamp('07-Aug-2008 21:30:00','dd-Mon-yyyy hh24:mi:ss'),'Done';
insert into ObjectState (object_id,event_time,state)
select 2,to_timestamp('01-Aug-2008 13:30:00','dd-Mon-yyyy hh24:mi:ss'),'Initial' union all
select 2,to_timestamp('02-Aug-2008 13:40:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 2,to_timestamp('07-Aug-2008 13:50:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 2,to_timestamp('14-Aug-2008 14:30:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 2,to_timestamp('15-Aug-2008 16:20:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 2,to_timestamp('16-Aug-2008 18:02:00','dd-Mon-yyyy hh24:mi:ss'),'Accepted' union all
select 2,to_timestamp('17-Aug-2008 22:10:00','dd-Mon-yyyy hh24:mi:ss'),'Done';
insert into ObjectState (object_id,event_time,state)
select 3,to_timestamp('12-Sep-2008 13:30:00','dd-Mon-yyyy hh24:mi:ss'),'Initial' union all
select 3,to_timestamp('13-Sep-2008 13:40:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 3,to_timestamp('14-Sep-2008 13:50:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 3,to_timestamp('15-Sep-2008 14:30:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 3,to_timestamp('16-Sep-2008 16:20:00','dd-Mon-yyyy hh24:mi:ss'),'Review';
insert into ObjectState (object_id,event_time,state)
select 4,to_timestamp('21-Aug-2008 03:10:00','dd-Mon-yyyy hh24:mi:ss'),'Initial' union all
select 4,to_timestamp('22-Aug-2008 03:40:00','dd-Mon-yyyy hh24:mi:ss'),'Work' union all
select 4,to_timestamp('23-Aug-2008 03:20:00','dd-Mon-yyyy hh24:mi:ss'),'Review' union all
select 4,to_timestamp('24-Aug-2008 04:30:00','dd-Mon-yyyy hh24:mi:ss'),'Work';
A: I don't think you can get that answer with one SQL statement, as you are trying to obtain one result from many records. The only way to achieve that in SQL is to get the timestamp field from two different records and calculate the difference (DATEDIFF). Therefore, UNIONs or inner joins are needed.
A: I'm not sure I understand the question exactly, but you can do something like the following which reads the table in one pass then uses a derived table to calculate it. SQL Server code:
CREATE TABLE #testing
(
eventdatetime datetime NOT NULL,
state varchar(10) NOT NULL
)
INSERT INTO #testing (
eventdatetime,
state
)
SELECT '20080801 13:30:00', 'Initial' UNION ALL
SELECT '20080802 13:30:00', 'Work' UNION ALL
SELECT '20080803 13:30:00', 'Review' UNION ALL
SELECT '20080804 13:30:00', 'Work' UNION ALL
SELECT '20080805 13:30:00', 'Review' UNION ALL
SELECT '20080806 13:30:00', 'Accepted' UNION ALL
SELECT '20080807 13:30:00', 'Done'
SELECT DATEDIFF(dd, Initial, Review)
FROM (
SELECT MIN(CASE WHEN state='Initial' THEN eventdatetime END) AS Initial,
MIN(CASE WHEN state='Review' THEN eventdatetime END) AS Review
FROM #testing
) AS A
DROP TABLE #testing
A: create table A (
At datetime not null,
State varchar(20) not null
)
go
insert into A(At,State)
select '2008-08-01T13:30:00','Initial' union all
select '2008-08-02T13:30:00','Work' union all
select '2008-08-03T13:30:00','Review' union all
select '2008-08-04T13:30:00','Work' union all
select '2008-08-05T13:30:00','Review' union all
select '2008-08-06T13:30:00','Accepted' union all
select '2008-08-07T13:30:00','Done'
go
--Find trip time from Initial to Review
select DATEDIFF(day,t1.At,t2.At)
from
A t1
inner join
A t2
on
t1.State = 'Initial' and
t2.State = 'Review' and
t1.At < t2.At
left join
A t3
on
t3.State = 'Initial' and
t3.At > t1.At and
t3.At < t2.At
left join
A t4
on
t4.State = 'Review' and
t4.At < t2.At and
t4.At > t1.At
where
t3.At is null and
t4.At is null
Didn't say whether joins were allowed or not. Joins to t3 and t4 (and their comparisons) let you say whether you want the earliest or latest occurrence of the start and end states (in this case, I'm asking for latest "Initial" and earliest "Review")
In real code, my start and end states would be parameters
Edit: Oops, need to include "t3.At < t2.At" and "t4.At > t1.At", to fix some odd sequences of States (e.g. If we removed the second "Review" and then queried from "Work" to "Review", the original query will fail)
A: It is probably easier if you have a sequence number as well as the time-stamp: in most RDBMSs you can create an auto-increment column and not change any of the INSERT statements. Then you join the table with a copy of itself to get the deltas
select after.moment - before.moment, before.state, after.state
from object_states before, object_states after
where after.sequence = before.sequence + 1
(where the details of SQL syntax will vary according to which database system).
A: -- Oracle SQL
CREATE TABLE ObjectState
(
startdate date NOT NULL,
state varchar2(10) NOT NULL
);
insert into ObjectState
select to_date('01-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Initial' from dual union all
select to_date('02-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Work' from dual union all
select to_date('03-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Review' from dual union all
select to_date('04-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Work' from dual union all
select to_date('05-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Review' from dual union all
select to_date('06-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Accepted' from dual union all
select to_date('07-Aug-2008 13:30:00','dd-Mon-rrrr hh24:mi:ss'),'Done' from dual;
-- Days in between two states
select o2.startdate - o1.startdate as days
from ObjectState o1, ObjectState o2
where o1.state = 'Initial'
and o2.state = 'Review';
A: I think that your steps (each record of your trip can be seen as a step) can be grouped together as part of the same activity. It is then possible to group your data on that activity, for example:
SELECT Min(Tbl_Step.dateTimeStep) as tripBegin,
       Max(Tbl_Step.dateTimeStep) as tripEnd
FROM
Tbl_Step
WHERE
id_Activity = 'AAAAAAA'
Using this principle, you can then calculate other aggregates like the number of steps in the activity, and so on. But you will not find an SQL way to calculate values like the gap between two steps, as such data does not belong to either the first or the second step. Some reporting tools use what they call "running sums" to calculate such intermediate data. Depending on your objectives, this might be a solution for you.
A: I tried to do this in MySQL. You would need to use a variable since there is no rank function in MySQL, so it would go like this:
set @trip1 = 0; set @trip2 = 0;
SELECT trip1.`date` as startdate, datediff(trip2.`date`, trip1.`date`) length_of_trip
FROM
(SELECT @trip1 := @trip1 + 1 as rank1, `date` from trip where state='Initial') as trip1
INNER JOIN
(SELECT @trip2 := @trip2 + 1 as rank2, `date` from trip where state='Done') as trip2
ON rank1 = rank2;
I am assuming that you want to calculate the time between 'Initial' and 'Done' states.
+---------------------+----------------+
| startdate | length_of_trip |
+---------------------+----------------+
| 2008-08-01 13:30:00 | 6 |
+---------------------+----------------+
A: Ok, this is a bit beyond geeky, but I built a web application to track my wife's contractions just before we had a baby so that I could see from work when it was getting close to time to go to the hospital. Anyway, I built this basic thing fairly easily as two views.
create table contractions (time_date timestamp primary key);
create view contraction_time as
SELECT a.time_date, max(b.prev_time) AS prev_time
FROM contractions a, ( SELECT contractions.time_date AS prev_time
FROM contractions) b
WHERE b.prev_time < a.time_date
GROUP BY a.time_date;
create view time_between as
SELECT contraction_time.time_date, contraction_time.prev_time,
       contraction_time.time_date - contraction_time.prev_time as time_between
FROM contraction_time;
This could obviously be done as a subselect too, but I used the intermediate views for other things as well, so this worked out well.
Q: Removing static file cachebusting in Rails I have a Rails application which is still showing the cachebusting numeric string at the end of URLs for static assets, even though I have put it into the production environment. Can someone tell me what config option I need to set to prevent this behaviour...
A: That number isn't there to break the cache during day-to-day operations. At least in theory, proxy servers are allowed to cache HTTP GET requests (as long as the parameters remain the same).
Instead, that number is there to allow you to smoothly upgrade your CSS and JavaScript files from one version to the next. As I understand it, it's supposed to remain on in production mode. The numbers should only change when the timestamps on your files change.
Are you seeing common proxy servers that completely fail to cache any HTTP GET request with a single parameter?
A: To disable the ?timestamp cache busting in production add this to your config/environments/production.rb
ENV['RAILS_ASSET_ID'] = ''
If you want to dig deeper into what this does, check out asset_tag_helper.rb in the ActionPack gem, line 527 (ish)
Q: Parse tree and grammar information Does anyone know where to find good online resources with examples of how to make grammars and parse trees? Preferably introductory materials.
Info that is n00b-friendly; I haven't found anything good with Google myself.
Edit: I'm thinking about theory, not a specific parser software.
A: Not online, but maybe you should take a look at Compilers: Principles, Techniques, and Tools (2nd Edition) by Aho et al. This is a standard text that has been evolving for 30 years (if you count the 1st Dragon Book, published in 1977).
A: Well, here's where I learned it...
http://www.cs.uiuc.edu/class/sp08/cs273/
Click on the lectures tag, scroll through till you find the lectures on the material you are talking about.
Love my alma mater. God bless them, they never take down their lectures in any class and you can go and read any of them anytime you want.
edit: Looks like you want lecture11
A: Antlr?
http://www.antlr.org/
It has quite a good IDE for designing a grammar, and a lot of generators for different languages.
A: www.goldparser.com
The tools are free and good to work with. It has technical and theoretical tutorials, lots of info, tools and code generators for many languages.
A: In C/C++ use lex and bison.
In Java use ANTLR.
This is a beautiful ANTLR video tutorial.
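To make the theory side concrete, here is the classic arithmetic-expression grammar in BNF (purely illustrative); the layering of the rules is what places * deeper than + in the resulting parse tree:
<expr>   ::= <expr> "+" <term> | <expr> "-" <term> | <term>
<term>   ::= <term> "*" <factor> | <term> "/" <factor> | <factor>
<factor> ::= "(" <expr> ")" | <number>
Parsing 1 + 2 * 3 with this grammar produces a tree rooted at the +, with 2 * 3 as a subtree beneath it - exactly the operator precedence you want.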
Q: ORA-03113: end-of-file on communication channel after long inactivity in ASP.Net app I've got a load-balanced (not using Session state) ASP.Net 2.0 app on IIS5 running back to a single Oracle 10g server, using version 10.1.0.301 of the ODAC/ODP.Net drivers. After a long period of inactivity (a few hours), the application, seemingly randomly, will throw an Oracle exception:
Exception: ORA-03113: end-of-file on communication channel at
Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32
errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx*
pOpoSqlValCtx, Object src, String procedure) at
Oracle.DataAccess.Client.OracleCommand.ExecuteReader(Boolean requery,
Boolean fillRequest, CommandBehavior behavior) at
Oracle.DataAccess.Client.OracleCommand.System.Data.IDbCommand.ExecuteReader()
...Oracle portion of the stack ends here...
We are creating new connections on every request, have the open & close wrapped in a try/catch/finally to ensure proper connection closure, and the whole thing is wrapped in a using (OracleConnection yadayada) {...} block. This problem does not appear linked to the restart of the ASP.Net application after being spun down for inactivity.
We have yet to reproduce the problem ourselves. Thoughts, prayers, help?
More: Checked with IT, the firewall isn't set to kill connections between those servers.
A: Check that there isn't a firewall that is ending the connection after certain period of time (this was the cause of a similar problem we had)
A:
end-of-file on communication channel:
One cause of this error is the database failing to write its log while it is in the process of opening.
Solution: check whether the database is running in ARCHIVELOG or NOARCHIVELOG mode. To check, use:
select log_mode from v$database;
If it is in ARCHIVELOG mode, try changing it to NOARCHIVELOG using sqlplus:
*startup mount
*alter database noarchivelog;
*alter database open;
If that works, your flash recovery area is probably full; after confirming that the flash recovery area has free space, you can switch the database back to ARCHIVELOG mode.
A: This error message can be thrown in the application logs when the actual issue is that the Oracle database server ran out of space.
After correcting the space issue, this particular error message disappeared.
A: You could try this registry hack:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DeadGWDetectDefault"=dword:00000001
"KeepAliveTime"=dword:00120000
If it works, just keep increasing the KeepAliveTime. It is currently set for 2 minutes.
A:
ORA-03113: end-of-file on communication channel
is the database letting you know that the network connection is gone. This could be because:
*A network issue - a faulty connection, or a firewall issue
*The server process on the database that was servicing you died unexpectedly.
For 1) (firewall): search tahiti.oracle.com for SQLNET.EXPIRE_TIME. This is a sqlnet.ora parameter that regularly sends a network packet at a configurable interval; setting it will make the firewall believe that the connection is live.
For 1) (network) speak to your network admin (connection could be unreliable)
For 2) Check the alert.log for errors. If the server process failed there will be an error message. Also a trace file will have been written to enable support to identify the issue. The error message will reference the trace file.
Support issues can be raised at metalink.oracle.com with a suitable Customer Service Identifier (CSI)
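For reference, the SQLNET.EXPIRE_TIME setting mentioned for case 1) lives in sqlnet.ora on the database server; the value is in minutes (10 here is just an example):
SQLNET.EXPIRE_TIME = 10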
A: Add Validate Connection=true to your connection string.
Look at this blog to find more about.
DETAILS:
After OracleConnection.Close(), the real database connection does not terminate. The connection object is put back into the connection pool. The use of the pool is implicit in ODP.NET: if you create a new connection, you get one from the pool. If that connection is still open, the OracleConnection.Open() method does not really create a new connection. If the real connection is broken (for whatever reason), you get a failure on the first select, update, insert or delete.
With Validate Connection, the real connection is validated in the Open() method.
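A minimal sketch of what this looks like from C# (the connection-string values are illustrative; "Validate Connection=true" is the relevant part):
using Oracle.DataAccess.Client;

// "Validate Connection=true" makes Open() check that the pooled
// connection is still alive instead of failing on first use.
string connStr = "Data Source=MYDB;User Id=appuser;Password=secret;Validate Connection=true";
using (OracleConnection conn = new OracleConnection(connStr))
using (OracleCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "select 1 from dual";
    cmd.ExecuteScalar();
}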
A: The article previously mentioned is good. http://forums.oracle.com/forums/thread.jspa?threadID=191750 (as far as it goes)
If this is not something that runs frequently (don't do it on your home page), you can turn off connection pooling.
There is one other "gotcha" that is not mentioned in the article. If the first thing you try to do with the connection is call a stored procedure, ODP will HANG!!!! You will not get back an error condition to manage, just a full bore HANG! The only way to fix it is to turn OFF connection pooling. Once we did that, all issues went away.
Pooling is good in some situations, but at the cost of increased complexity around the first statement of every connection.
If the error handling approach is so good, why don't they make it an option for ODP to handle it for us????
A: -- First start the database in mount mode
startup mount
-- Disable archivelog
alter database noarchivelog;
-- Then open the database
alter database open;
Q: Why not use tables for layout in HTML? It seems to be the general opinion that tables should not be used for layout in HTML.
Why?
I have never (or rarely to be honest) seen good arguments for this. The usual answers are:
*It's good to separate content from layout. But this is a fallacious argument; Cliché Thinking. I guess it's true that using the table element for layout has little to do with tabular data. So what? Does my boss care? Do my users care? Perhaps me or my fellow developers who have to maintain a web page care... Is a table less maintainable? I think using a table is easier than using divs and CSS. By the way... why is using a div or a span good separation of content from layout, and a table not? Getting a good layout with only divs often requires a lot of nested divs.
*Readability of the code. I think it's the other way around. Most people understand HTML, few understand CSS.
*It's better for SEO not to use tables. Why? Can anybody show some evidence that it is? Or a statement from Google that tables are discouraged from an SEO perspective?
*Tables are slower. An extra tbody element has to be inserted. This is peanuts for modern web browsers. Show me some benchmarks where the use of a table significantly slows down a page.
*A layout overhaul is easier without tables; see CSS Zen Garden. Most web sites that need an upgrade need new content (HTML) as well. Scenarios where a new version of a web site only needs a new CSS file are not very likely. Zen Garden is a nice web site, but a bit theoretical. Not to mention its misuse of CSS.
I am really interested in good arguments to use divs + CSS instead of tables.
A: See this duplicate question.
One item you're forgetting there is accessibility. Table-based layouts don't translate as well if you need to use a screen reader, for example. And if you do work for the government, supporting accessible browsers like screen readers may be required.
I also think you underestimate the impact of some of the things you mentioned in the question. For example, if you are both the designer and the programmer, you may not have a full appreciation of how well it separates presentation from content. But once you get into a shop where they are two distinct roles the advantages start to become clearer.
If you know what you're doing and have good tools, CSS really does have significant advantages over tables for layout. And while each item by itself may not justify abandoning tables, taken together it's generally worth it.
A: Having had to work with a website that involved six layers of nested tables generated by some application, and that generated invalid HTML, it was in fact a three-hour job to rectify a breakage caused by a minor change.
This is of course the edge case, but table-based design is unmaintainable. If you use CSS, you separate the style out, so when fixing the HTML you have less to worry about breaking.
Also, try this with JavaScript: move a single table cell from one place to another place in another table. Rather complicated to perform, where div/span would just work, copy-paste-wise.
"Does my boss care"
If I were your boss. You would care. ;) If you value your life.
A: Layout flexibility
Imagine you're making a page with a large number of thumbnails.
DIVs:
If you put each thumbnail in a DIV, floated left, maybe 10 of them fit on a row. Make the window narrower, and BAM - it's 6 on a row, or 2, or however many fit.
TABLE:
You have to explicitly say how many cells are in a row. If the window is too narrow, the user has to scroll horizontally.
Maintainability
Same situation as above. Now you want to add three thumbnails to the third row.
DIVs:
Add them in. The layout will automatically adjust.
TABLE:
Paste the new cells into the third row. Oops! Now there are too many items there. Cut some from that row and put them on the fourth row. Now there are too many items there. Cut some from that row... (etc)
(Of course, if you're generating the rows and cells with server-side scripting, this probably won't be an issue.)
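For concreteness, a minimal sketch of the floated-thumbnail markup described above (class names are made up):
<style>
  /* Each thumbnail floats left; the number per row adapts to the window. */
  .thumb { float: left; width: 120px; margin: 4px; }
</style>
<div class="gallery">
  <div class="thumb"><img src="pic1.jpg" alt="thumbnail"></div>
  <div class="thumb"><img src="pic2.jpg" alt="thumbnail"></div>
  <div class="thumb"><img src="pic3.jpg" alt="thumbnail"></div>
  <!-- add or remove thumbnails freely; rows reflow automatically -->
</div>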
A: I think that boat has sailed. If you look at the direction the industry has taken, you will notice that CSS and open standards are the winners of that discussion. Which in turn means that for most HTML work, with the exception of forms, designers will use divs instead of tables. I have a hard time with that because I am not a CSS guru, but that's the way it is.
A: Unfortunately, CSS Zen Garden can no longer be used as an example of good HTML/CSS design. Virtually all of their recent designs use graphics for section heading. These graphic files are specified in the CSS.
Hence, a website whose purpose is to show the advantage of keeping design out of content, now regularly commits the UNSPEAKABLE SIN of putting content into design. (If the section heading in the HTML file were to change, the section heading displayed would not).
Which only goes to show that even those advocate the strict DIV & CSS religion, can't follow their own rules. You may use that as a guideline in how closely you follow them.
A: Also, don't forget, tables don't quite render well on mobile browsers. Sure, the iPhone has a kick-ass browser but everyone doesn't have an iPhone. Table rendering can be peanuts for modern browsers, but it's a bunch of watermelons for mobile browsers.
I have personally found that many people use too many <div> tags, but in moderation, it can be extremely clean and easy to read. You mention that folks have a harder time reading CSS than tables; in terms of 'code' that may be true; but in terms of reading content (view > source) it is a heck of a lot easier to understand the structure with stylesheets than with tables.
A: Looks like you are just used to tables and that's it.
Putting layout in a table limits you for just that layout. With CSS you can move bits around, take a look at http://csszengarden.com/
And no, layout does not usually require a lot of nested divs.
With no tables for layout and proper semantics HTML is much cleaner, hence easier to read.
Why should someone who cannot understand CSS try to read it? And if someone considers himself to be a web developer, then a good grasp of CSS is a must.
SEO benefits come from the ability to have the most important content higher up the page, and from a better content-to-markup ratio.
http://www.hotdesign.com/seybold/
A:
*508 compliance - the ability for a screen reader to make sense of your markup.
*Waiting for render - tables don't render in the browser until the closing </table> tag is reached.
A: The whole idea around semantic markup is the separation of markup and presentation, which includes layout.
Divs aren't replacing tables; they have their own use in separating content into blocks of related content. When you don't have the skills and are relying on tables, you'll often have to separate your content into cells in order to get the desired layout, but you won't need to touch the markup to achieve presentation when using semantic markup. This is really important when the markup is being generated rather than static pages.
Developers need to stop providing markup that implies layout so that those of us who do have the skills to present content can get on with our jobs, and developers don't have to come back to their code to make changes when presentation needs change.
A: I'm going to go through your arguments one after another and try to show the errors in them.
It's good to separate content from layout
But this is a fallacious argument; Cliché Thinking.
It's not fallacious at all because HTML was designed intentionally. Misuse of an element might not be completely out of the question (after all, new idioms have developed in other languages, as well) but possible negative implications have to be counterbalanced. Additionally, even if there were no arguments against misusing the <table> element today, there might be tomorrow because of the way browser vendors apply special treatment to the element. After all, they know that "<table> elements are for tabular data only" and might use this fact to improve the rendering engine, in the process subtly changing how <table>s behave, and thus breaking cases where it was previously misused.
So what? Does my boss care? Do my users care?
Depends. Is your boss pointy-haired? Then he might not care. If she's competent, then she will care, because the users will.
Perhaps me or my fellow developers who have to maintain a web page care... Is a table less maintainable? I think using a table is easier than using divs and css.
The majority of professional web developers seem to oppose you[citation needed]. That tables are in fact less maintainable should be obvious. Using tables for layout means that changing the corporate layout will in fact mean changing every single page. This can be very expensive. On the other hand, judicious use of semantically meaningful HTML combined with CSS might confine such changes to the CSS and the pictures used.
By the way... why is using a div or a span good separation of content from layout and a table not? Getting a good layout with only divs often requires a lot of nested divs.
Deeply nested <div>s are an anti-pattern just as table layouts. Good web designers don't need many of them. On the other hand, even such deep-nested divs don't have many of the problems of table layouts. In fact, they can even contribute to a semantic structure by logically dividing the content in parts.
Readability of the code
I think it's the other way around. Most people understand HTML, few understand CSS. It's simpler.
“Most people” don't matter. Professionals matter. For professionals, table layouts create many more problems than HTML + CSS. This is like saying I shouldn't use GVim or Emacs because Notepad is simpler for most people. Or that I shouldn't use LaTeX because MS Word is simpler for most people.
It's better for SEO not to use tables
I don't know if this is true and wouldn't use this as an argument but it would be logical. Search engines search for relevant data. While tabular data could of course be relevant, it's rarely what users search for. Users search for terms used in the page title or similarly prominent positions. It would therefore be logical to exclude tabular content from filtering and thus cutting the processing time (and costs!) by a large factor.
Tables are slower.
An extra tbody element has to be inserted. This is peanuts for modern web browsers.
The extra element has got nothing to do with tables being slower. On the other hand, the layout algorithm for tables is much harder, the browser often has to wait for the whole table to load before it can begin to layout the content. Additionally, caching of the layout won't work (CSS can easily be cached). All this has been mentioned before.
Show me some benchmarks where the use of a table significantly slows down a page.
Unfortunately, I don't have any benchmark data. I would be interested in it myself because it's right that this argument lacks a certain scientific rigour.
Most web sites that need an upgrade need new content (html) as well. Scenarios where a new version of a web site only needs a new css file are not very likely.
Not at all. I've worked on several cases where changing the design was simplified by a separation of content and design. It's often still necessary to change some HTML code but the changes will always be much more confined. Additionally, design changes must on occasion be made dynamically. Consider template engines such as the one used by the WordPress blogging system. Table layouts would literally kill this system. I've worked on a similar case for a commercial software. Being able to change the design without changing the HTML code was one of the business requirements.
Another thing. Table layout makes automated parsing of websites (screen scraping) much harder. This might sound trivial because, after all, who does it? I was surprised myself. Screen scraping can help a lot if the service in question doesn't offer a WebService alternative to access its data. I'm working in bioinformatics where this is a sad reality. Modern web techniques and WebServices have not reached most developers and often, screen scraping is the only way to automate the process of getting data. No wonder that many biologists still perform such tasks manually. For thousands of data sets.
A: This isn't the definitive argument, by any means, but with CSS you can take the same markup and change the layout depending on medium, which is a nice advantage. For a print page you can quietly suppress navigation without having to create a printer-friendly page, for example.
A: One table for layout wouldn't be that bad. But you can't get the layout you need with just one table most of the time. Pretty soon you have two or three nested tables. This becomes very cumbersome.
*It IS a LOT harder to read. That's not a matter of opinion. There are just more nested tags with no identifying marks on them.
*Separating content from presentation is a good thing because it allows you to focus on what you're doing. Mixing the two leads to bloated pages that are hard to read.
*CSS for styles allows your browser to cache the files and subsequent requests are much faster. This is HUGE.
*Tables lock you into a design. Sure, not everyone needs the flexibility of CSS Zen Garden, but I've never worked on a site where I didn't need to change the design a little bit here and there. It's much easier with CSS.
*Tables are hard to style. You don't have very much flexibility with them (i.e. you still need to add HTML attributes to fully control a table's styles)
I haven't used tables for non-tabular data in probably 4 years. I haven't looked back.
I'd really like to suggest reading CSS Mastery by Andy Budd. It's fantastic.
A: This isn't really about whether 'divs are better than tables for layout'. Someone who understands CSS can duplicate any design using 'layout tables' pretty straightforwardly. The real win is using HTML elements for what they are there for. The reason you would not use tables for non-tabular data is the same reason you don't store integers as character strings - technology works much more easily when you use it for the purpose for which it is designed. If it was ever necessary to use tables for layout (because of browser shortcomings in the early 1990s) it certainly isn't now.
A:
It's good to separate content from layout
But this is a fallacious argument; Cliche Thinking
It's a fallacious argument because HTML tables are layout! The content is the data in the table, the presentation is the table itself. This is why separating CSS from HTML can be very difficult at times. You're not separating content from presentation, you're separating presentation from presentation! A pile of nested divs is no different than a table - it's just a different set of tags.
The other problem with separating the HTML from the CSS is that they need intimate knowledge of one another - you really can't separate them fully. The tag layout in the HTML is tightly coupled with the CSS file no matter what you do.
I think tables vs divs comes down to the needs of your application.
In the application we develop at work, we needed a page layout where the pieces would dynamically size themselves to their content. I spent days trying to get this to work cross-browser with CSS and DIVs and it was a complete nightmare. We switched to tables and it all just worked.
However, we have a very closed audience for our product (we sell a piece of hardware with a web interface) and accessibility issues are not a concern for us. I don't know why screen readers can't deal with tables well, but I guess if that's the way it is then developers have to handle it.
A: Tools that use table layouts can become extraordinarily heavy due to the amount of code required to create the layout. SAP's NetWeaver Portal by default uses TABLE to lay out its pages.
The production SAP portal at my current gig has a home page whose HTML weighs over 60K and goes seven tables deep, three times within the page. Add in the Javascript, the misuse of 16 iframes with similar table issues inside of them, overly heavy CSS etc, and the page weighs over 5MB.
Taking the time to lower the page weight so you can use your bandwidth to do engaging activities with users is worth the effort.
A: It's worth figuring out CSS and divs so the central content column loads and renders before the sidebar in a page layout. But if you are struggling to use floating divs to vertically align a logo with some sponsorship text, just use the table and move on with life. The Zen garden religion just doesn't give much bang for the buck.
The idea of separating content from presentation is to partition the application so different kinds of work affect different blocks of code. This is actually about change management. But coding standards can only examine the present state of code in a superficial manner.
The change log for an application that depends on coding standards to "separate content from presentation" will show a pattern of parallel changes across vertical silos. If a change to "content" is always accompanied by a change to "presentation", how successful is the partitioning?
If you really want to partition your code productively, use Subversion and review your change logs. Then use the simplest coding techniques -- divs, tables, JavaScript, includes, functions, objects, continuations, whatever -- to structure the application so that the changes fit in a simple and comfortable manner.
A: Tables are not in general easier or more maintainable than CSS. However, there are a few specific layout-problems where tables are indeed the simplest and most flexible solution.
CSS is clearly preferable in cases where presentational markup and CSS support the same kind of design, no one in their right mind would argue that font-tags are better than specifying typography in CSS, since CSS gives you the same power than font-tags, but in a much cleaner way.
The issue with tables, however, is basically that the table-layout model in CSS is not supported in Microsoft Internet Explorer. Tables and CSS are therefore not equivalent in power. The missing part is the grid-like behavior of tables, where the edges of cells align both vertically and horizontally, while cells still expand to contain their content. This behavior is not easy to achieve in pure CSS without hardcoding some dimensions, which makes the design rigid and brittle (as long as we have to support Internet Explorer - in other browsers this is easily achieved by using display:table-cell).
So it's not really a question of whether tables or CSS is preferable, but it is a question of recognizing the specific cases where use of tables may make the layout more flexible.
The most important reason for not using tables is accessibility. The Web Content Accessibility Guidelines http://www.w3.org/TR/WCAG10/ advise against using tables for layout. If you are concerned about accessibility (and in some cases you may be legally obliged to be), you should use CSS even if tables are simpler. Note that you can always create the same layout with CSS as with tables, it might just require more work.
A: Because it's HELL to maintain a site that uses tables, and takes a LOT longer to code. If you're scared of floating divs, go take a course in them. They're not difficult to understand and they're approximately 100 times more efficient and a million times less a pain in the ass (unless you don't understand them -- but hey, welcome to the world of computers).
Anyone considering doing their layout with a table better not expect me to maintain it. It's the most ass-backwards way to render a website. Thank god we have a much better alternative now. I would NEVER go back.
It's scary that some folks might not be aware of the time and energy benefits from creating a site using modern tools.
A: DOM Manipulation is difficult in a table-based layout.
With semantic divs:
$('#myawesomediv').click(function(){
// Do awesome stuff
});
With tables:
$('table tr td table tr td table tr td.......').click(function(){
// Cry self to sleep at night
});
Now, granted, the second example is kind of stupid, and you can always apply IDs or classes to a table or td element, but that would be adding semantic value, which is what table proponents so vehemently oppose.
A: I was surprised to see some issues were not already covered, so here are my 2 cents, in addition to all the very valid points made earlier:
1. CSS & SEO:
a) CSS used to have a very significant impact on SEO by allowing you to position the content in the page wherever you want. A few years ago, search engines gave significant emphasis to "on-page" factors. Something at the top of the page was deemed more relevant to the page than something located at the bottom. "Top of the page" for a spider meant "at the beginning of the code". Using CSS, you could organize your keyword-rich content at the beginning of the code and still position it wherever you liked in the page. This is still somewhat relevant, but on-page factors are less and less important for page ranking.
b) When the layout is moved over to CSS, the HTML page is lighter and therefore loads faster for a search engine spider (spiders don't bother downloading external CSS files). Fast-loading pages are an important ranking consideration for several search engines, including Google.
c) SEO work often requires testing and changing things, which is much more convenient with a CSS-based layout.
2. Generated content:
A table is considerably easier to generate programmatically than the equivalent CSS layout.
foreach ($comment as $key=>$value)
{
echo "<tr><td>$key</td><td>$value</td></tr>";
}
Generating a table is simple and safe. It is self-contained and integrates well within any template. To do the same with CSS is considerably harder and may be of no benefit at all: it is hard to edit the CSS stylesheet on the fly, and adding the style inline is no different from using a table (content is not separated from layout).
Further, when a table is generated, the content (in variables) is already separated from the layout (in code), making it as easy to modify.
This is one reason why some very well designed websites (SO for instance) still use table layouts.
Of course, if the results need to be acted upon through JavaScript, divs are worth the trouble.
3. Quick conversion testing
When figuring out what works for a specific audience, it is useful to be able to change the layout in various ways to figure out what gets the best results. A CSS based layout makes things considerably easier
4. Different solutions for different problems
Layout tables are usually dissed because "everybody knows divs & CSS" are the way to go.
However the fact remains that tables are faster to create, easier to understand and are more robust than most CSS layouts. (Yes, CSS can be as robust, but a quick look through the net on different browsers and screen resolutions shows it's not often the case)
There are a lot of downsides to tables, including maintenance and lack of flexibility... but let's not throw the baby out with the bathwater. There are plenty of professional uses for a solution which is both quick and reliable.
Some time ago, I had to rewrite a clean and simple CSS layout using tables because a significant portion of the users would be using an older version of IE with really bad support for CSS
I, for one, am sick and tired of the knee-jerk reaction "Oh noes! Tables for layout!"
As for the "it wasn't intended for that purpose and therefore you shouldn't use it this way" crowd, isn't that hypocrisy? What do you think of all the CSS tricks you have to use to get the darn thing working in most browsers? Were they meant for that purpose?
A: Here's my programmer's answer from a similar thread
Semantics 101
First take a look at this code and think about what's wrong here...
class car {
    public int wheels = 4;
    public string engine;
}
car mybike = new car();
mybike.wheels = 2;
mybike.engine = null;
The problem, of course, is that a bike is not a car. The car class is an inappropriate class for the bike instance. The code is error-free, but is semantically incorrect. It reflects poorly on the programmer.
Semantics 102
Now apply this to document markup. If your document needs to present tabular data, then the appropriate tag would be <table>. If you place navigation into a table however, then you're misusing the intended purpose of the <table> element. In the second case, you're not presenting tabular data -- you're (mis)using the <table> element to achieve a presentational goal.
Conclusion
Will visitors notice? No. Does your boss care? Maybe. Do we sometimes cut corners as programmers? Sure. But should we? No. Who benefits if you use semantic markup? You -- and your professional reputation. Now go and do the right thing.
A: CSS/DIV - it's just jobs for the design boys, isn't it? The hundreds of hours I've spent debugging DIV/CSS issues, searching the Internet to get some part of markup working with an obscure browser - it drives me mad. You make one little change and the whole layout goes horrendously wrong - where on earth is the logic in that? Spending hours moving something 3 pixels this way then something else 2 pixels the other to get them all to line up. This just seems plain wrong to me somehow. Just because you're a purist and something is "not the right thing to do" doesn't mean you should make use of it to the nth degree and under all circumstances, especially if it makes your life 1000 times easier.
So I've finally decided, purely on commercial grounds, although I keep use to minimum, if I anticipate 20 hours work to get a DIV placed correctly, I'll stick in a table. It's wrong, it upsets the purists, but in most cases it costs less time and is cheaper to manage. I can then concentrate on getting the application working as the customer wants, rather than pleasing the purists. They do pay the bills after all and my argument to a manager enforcing the use of CSS/DIV - I would merely point out the customers pay his salary as well!
The only reason all these CSS/DIV arguments occur is because of the shortcoming of CSS in the first place and because the browsers aren't compatible with each other and if they were, half the web designers in the world would be out of a job.
When you design a Windows form you don't try moving controls around after you have laid them out, so I find it strange why you would want to do this with a web form. I simply can't understand this logic. Get the layout right to start with and what's the problem? I think it's because designers like to flirt with creativity, whilst application developers are more concerned with actually getting the application working, creating business objects, implementing business rules, working out how bits of customer data relate to each other, ensuring the thing meets the customer's requirements - you know - like the real world stuff.
Don't get me wrong, both arguments are valid, but please don't critise developers for choosing an easier, more logical approach to designing forms. We often have more important things to worry about than the correct semantics of using a table over a div.
Case in point - based on this discussion I converted a few existing tds and trs to divs. 45 minutes messing about with it trying to get everything to line up next to each other and I gave up. TDs back in 10 seconds later - works - straight away - on all browsers, nothing more to do. Please try to make me understand - what possible justification do you have for wanting me to do it any other way!
A: Layout should be easy. The fact that there are articles written on how to achieve a dynamic three column layout with header and footer in CSS shows that it is a poor layout system. Of course you can get it to work, but there are literally hundreds of articles online about how to do it. There are pretty much no such articles for a similar layout with tables because it's patently obvious. No matter what you say against tables and in favor of CSS, this one fact undoes it all: a basic three column layout in CSS is often called "The Holy Grail".
If that doesn't make you say "WTF" then you really need to put down the kool-aid now.
I love CSS. It offers amazing styling options and some cool positioning tools, but as a layout engine it is deficient. There needs to be some type of dynamic grid positioning system. A straightforward way to align boxes on multiple axis without knowing their sizes first. I don't give a damn if you call it <table> or <gridlayout> or whatever, but this is a basic layout feature that is missing from CSS.
The larger problem is that by not admitting there are missing features, the CSS zealots have been holding CSS back from all it could be. I'd be perfectly happy to stop using tables if CSS provided decent multi-axis grid positioning like basically every other layout engine in the world. (You do realize this problem has already been solved many times in many languages by everyone except the W3C, right? And nobody else denied that such a feature was useful.)
Sigh. Enough venting. Go ahead and stick your head back in the sand.
A: The separation between content and layout also makes it easier to generate printer-friendly layouts or just different skins (styles) for your site, without having to create different HTML files. Some browsers (like Firefox) even support selecting a stylesheet from the View menu.
And I do think it's easier to maintain a tableless layout. You don't have to worry about rowspans, colspans, etc. You just create some container divs and place the content where you need to. And, that said, I think it's also more readable (<div id="sidebar"> vs <tr><td>...</td><td>...<td>sidebar</td></tr>).
It's just a little 'trick' you have to learn (and once you mastered the trick, I think it's easier and makes more sense).
A: A huge issue for me is that tables, especially nested tables, take much longer to render than a properly laid-out CSS implementation. (You can make CSS just as slow.)
All browsers render the CSS faster because each div is a separate element, so the screen can be loading as the user is reading (for huge data sets, etc.). I've used CSS instead of tables in that situation without even dealing with layout.
A nested table (tables inside of cells, etc.) will not render to the browser window until the last "/table" is found. Even worse - a poorly defined table will sometimes not even render! Or when it does, things misbehave (not colspanning properly with TDs, etc.).
I use tables for most things, but when it comes to large data and the desire to have a screen render quickly for an end-user - I try my best to utilize what CSS has to offer.
A:
One example: you want to center the main content area of a page, but in order to contain the floats inside it, it needs to be floated. There is no "float: center" in CSS.
That is not the only way to "contain the floats" inside a centred element. So, not a good argument at all!
In a way, it's a false premise, the "divs vs tables" thing.
Quick-and-dirty division of a page into three columns? Tables are easier, to be honest. But no professional uses them any more for layout, because they lock the positioning of page elements into the page.
The real argument is "positioning done by CSS (hopefully in a remote file)" as opposed to "positioning done by HTML in the page". Surely everyone can see the benefits of the former as opposed to the latter?
*
*Size -- if your page layout is in the HTML, in the pages, it can't be cached, and it has to be repeated on every page. You will save enormous amounts of bandwidth if your layout is in a cached CSS file, not in the page.
*Multiple developers can work on the same page at the same time -- I work on the HTML, other guy works on the CSS. No repository needed, no problems with over-writing, file locking etc.
*Making changes is easier -- there will be problems with layout in different browsers, but you only have to fix one file, the CSS file, to sort them out.
*Accessibility, as mentioned a lot previously. Tables assume a two-dimensional layout works for everyone. That's not how some users view your content and it's not how Google views your content.
Consider this:
[ picture ] [ picture ] [ picture ]
[ caption ] [ caption ] [ caption ]
which represents two rows of a table with 6 cells. Someone who can see the 2-D table layout will see a caption under each picture. But using speech synthesis, or a PDA, and for a search engine spider, that's
picture picture picture caption caption caption
and the relationship, which is obvious with the table in place, disappears.
Are DIVs and CSS better for the task of simply laying out rectangles on an HTML page to achieve a given design in the shortest possible time? No, they're probably not. But I'm not in the business of quickly laying out rectangles to achieve a given design. I'm thinking of a much bigger picture.
A: I've had to do sites in both of those ways, plus a third, the dreaded "hybrid" layout with tables, divs and styles: Divs/CSS wins, handily.
You'd have to nest divs three deep to match the code weight of just one table cell, right off the bat. That effect scales with nested tables.
I'd also prefer to make one layout change, vs one change for every page in my site.
I have full control over every aspect of presentation with divs/css. Tables mess that up in hideous ways, especially in IE, a browser which I have never yet had the option not to support.
My time for maintenance or redesign of a divs/css website is a fraction of what it would be in tables.
Finally, I can innovate multiple, switchable layouts with CSS and virtually any scripting language. That would be impossible for me with tables.
Good luck with your ROI as you make that decision.
A: *
*Try to merge/split a colspan/rowspan that is 10 or 20 levels deep. More than once I had to suppress my instinct to start a fight with someone. [?!]
*Try to change source code order without changing visible order. [SEO, usability, ...]
*The very (really simple) page we're looking at is ~150K. I bet it can nearly be halved using proper CSS. [SEO (yes, SEO, read the latest Google specs), performance, ...]
*Try to make an iterator template that can work in any width.
*The discussion of the matter in this table-based medium of SO can cause a singularity and destroy us all
A: The issue of strictly separating presentation and content strikes me as roughly analogous to separating header files from implementation files in C++. It makes sense, but it can also be a pain. Witness Java and C# where classes are defined in a single source file. The authors of the newer languages noticed something that was causing programmers headaches and they got rid of it. That seems to be the gist of this discussion. One side is saying CSS is too difficult, the other side is saying one must become a CSS master.
For simple layout issues why not bend the rule that says presentation must be completely separate? What about a new tag (or some extension to the div tag) that allows us to control presentation directly in HTML? After all, aren't we already leaking presentation into HTML? Look at h1, h2...h6. We all know these control presentation.
The ability to read code (and HTML is code) is very important. Gurus tend to overlook how important it is to make a programming environment as accessible to the masses as possible. It is very shortsighted to think that only professional programmers matter.
A: According to 508 compliance (for screen readers for the visually impaired), tables should only be used to hold data and not for layout, as it causes the screen readers to freak out. Or so I've been told.
If you assign names to each of the divs, you can skin them all together using CSS as well. They're just a bit more of a pain to get to sit the way you need them to.
A: Here's a section of html from a recent project:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>{DYNAMIC(TITLE)}</title>
  <meta http-equiv="content-type" content="text/html;charset=utf-8" />
  <meta http-equiv="Content-Style-Type" content="text/css" />
  <link rel="stylesheet" type="text/css" href="./styles/base.css" />
</head>
<body>
  <div id="header">
    <h1><!-- Page title --></h1>
    <ol id="navigation">
      <!-- Navigation items -->
    </ol>
    <div class="clearfix"></div>
  </div>
  <div id="sidebar">
    <!-- Sidebar content -->
  </div>
  <!-- Page content -->
  <p id="footer"><!-- Footer content --></p>
</body>
</html>
And here's that same code as a table based layout.
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>{DYNAMIC(TITLE)}</title>
  <meta http-equiv="content-type" content="text/html;charset=utf-8" />
  <meta http-equiv="Content-Style-Type" content="text/css" />
  <link rel="stylesheet" type="text/css" href="./styles/base.css" />
</head>
<body>
  <table cellspacing="0">
    <tr>
      <td><!-- Page Title --></td>
      <td>
        <table>
          <tr>
            <td>Navitem</td>
            <td>Navitem</td>
          </tr>
        </table>
      </td>
    </tr>
  </table>
  <table>
    <tr>
      <td><!-- Page content --></td>
      <td><!-- Sidebar content --></td>
    </tr>
    <tr>
      <td colspan="2">Footer</td>
    </tr>
  </table>
</body>
</html>
The only cleanliness I see in that table-based layout is the fact that I'm overzealous with my indentation. I'm sure the content section would need a further two embedded tables.
Another thing to think about: file sizes. I've found that table-based layouts are usually twice the size of their CSS counterparts. On our high-speed broadband that isn't a huge issue, but it is for those on dial-up modems.
A: I'd like to add that div-based layouts are easier to maintain, evolve, and refactor. Just a few changes in the CSS to reorder elements and it is done. From my experience, redesigning a layout that uses tables is a nightmare (even more so if there are nested tables).
Your code also has meaning from a semantic point of view.
A: CSS layouts are generally much better for accessibility, provided the content comes in a natural order and makes sense without a stylesheet. And it's not just screen readers that struggle with table-based layouts: they also make it much harder for mobile browsers to render a page properly.
Also, with a div-based layout you can very easily do cool things with a print stylesheet such as excluding headers, footers and navigation from printed pages - I think it would be impossible, or at least much more difficult, to do that with a table-based layout.
If you're doubting that separation of content from layout is easier with divs than with tables, take a look at the div-based HTML at CSS Zen Garden, see how changing the stylesheets can drastically change the layout, and think about whether you could achieve the same variety of layouts if the HTML were table-based. If you're doing a table-based layout, you're unlikely to be using CSS to control all the spacing and padding in the cells (if you were, you'd almost certainly find it easier to use floating divs etc. in the first place). Without CSS controlling all that, and because tables specify the left-to-right and top-to-bottom order of things in the HTML, tables tend to mean that your layout becomes very much fixed in the HTML.
Realistically I think it's very hard to completely change the layout of a div-and-CSS-based design without changing the divs a bit. However, with a div-and-CSS-based layout it's much easier to tweak things like the spacing between various blocks, and their relative sizes.
A: No arguments in DIVs favour from me.
I'd say : If the shoe fits, wear it.
It's worth noting that it's difficult, if not impossible, to find a good DIV+CSS method of rendering content in two or three columns that is consistent across all browsers and still looks just the way I intended.
This tips the balance a bit towards tables in most of my layouts, and although I feel guilty about using them (dunno why, people just say it's bad so I try to listen to them), in the end the pragmatic view is that it's just easier and faster for me to use TABLEs. I'm not being paid by the hour, so tables are cheaper for me.
A: The fact that this is a hotly debated question is a testament to the failure of the W3C to anticipate the diversity of layout designs which would be attempted. Using divs+css for semantically-friendly layout is a great concept, but the details of implementation are so flawed that they actually limit creative freedom.
I attempted to switch one of our company's sites from tables to divs, and it was such a headache that I totally scrapped the hours of work I had poured into it and went back to tables. Trying to wrestle with my divs in order to gain control of vertical alignment has cursed me with major psychological issues that I will never shake as long as this debate rages on.
The fact that people must frequently come up with complex and ugly workarounds to accomplish simple design goals (such as vertical alignment) strongly suggests that the rules are not nearly flexible enough. If the specs ARE sufficient, then why do high-profile sites (like SO) find it necessary to bend the rules using tables and other workarounds?
A:
I guess it's true that using the table element for layout has little to do with tabular data. So what? Does my boss care? Do my users care?
Google and other automated systems do care, and they're just as important in many situations. Semantic code is easier for a non-intelligent system to parse and process.
A: Obvious answer: See CSS Zen Garden. If you tell me that you can easily do the same with a table-based layout (remember - the HTML isn't changing) then by all means use tables for layout.
Two other important things are accessibility and SEO.
Both care about in what order information is presented. You cannot easily present your navigation at the top of the page if your table-based layout puts it in the 3rd cell of the 2nd row of the 2nd nested table on the page.
So your answers are maintainability, accessibility and SEO.
Don't be lazy. Do things the right and proper way even if they are a bit harder to learn.
A: Tables are good for HTML that you're throwing together for something simple or temporary. If you're building a large-scale website, you should go with divs and CSS, since it will be easier to maintain over time as your website changes.
A: To respond to the "tables are slower" argument - you're thinking rendering time, which is the wrong metric. Very often, developers will write out a huge table to do the entire layout for a page - which adds significantly to the size of the page to be downloaded. Like it or not, there's still a ton of dialup users out there.
See also: overusing ViewState
A: div's and CSS positioning allow a more flexible design, leading to easier modification and templating of your web pages.
That said, if you aren't interested in the flexibility then using a table rather than some divs that are morphed into a table by CSS is definitely a lot easier and quicker to knock up. I tend to use tables when knocking up a design just to get it looking right that bit quicker.
A: I once learned that a table loads all at once: when a connection is slow, the space where the table goes remains blank until the entire table has loaded. A div, on the other hand, renders top to bottom as fast as the data comes in, regardless of whether it is already complete or not.
A: If you're supporting the table angle on this, find a site built with tables and then get yourself a screen reader - set the screen reader going and turn off your monitor.
Then try it with a nice semantically correct div layout site.
You'll see the difference.
Tables aren't evil if the data in them is tabular, not used to lay out the page.
A: I do believe this is an issue connected to a general problem. When HTML was born no one could foresee its widespread use. Another technology which almost collapsed under the weight of its own success. When HTML pages were written in vi on a green text terminal a TABLE was all that was needed to present data to the visitors of the page, and it mostly was data that made sense in a tabular form.
We all know how things evolved. TABLEs went out of fashion comparatively recently, but there are lots of reasons to prefer DIVs and CSS-based layouts (accessibility not the least of them). Of course, I can't write CSS to save my life :-) and I think a graphical design expert should always be at hand.
That said... there are lots of data that should be presented in a table even in a modern web site.
A: Use tables when you need to ensure that elements remain in a specific physical relationship in the layout. For data, the table is generally the best layout element to use, because you do not want your columns to wrap in unexpected ways and confuse the associations.
One could also argue that non-data elements that must remain in a specific relationship should also be rendered in a table.
Flexible css layouts are great for content that is suitable for mobile devices and large screens and printing and other display types, but sometimes, the content just has to be displayed in a very specific way and if that requires that screen readers cannot easily access it, it could very well be justified.
A: I try to avoid TABLEs as much as possible, but when we are designing complex forms that mix multiple control types and different caption positions with pretty strict controls on grouping, using DIVs is unreliable or often near impossible.
Now, I will not argue that these forms could not be redesigned to better accommodate a DIV based layout, but for some of them our customer is adamant about not changing the existing layouts from the previous version (written in classic ASP) because it parallels a paper form that their users are familiar with.
Because the presentation of the forms is dynamic (the display of some sections is based on the state of the case or the permissions of the user), we use sets of stacked DIVs, each containing a TABLE of logically grouped form elements. Each column of the TABLE is classed so that it can be controlled by CSS. That way we can turn off different sections of the form without the problem of not being able to wrap rows in DIVs.
A: From past experience, I'd have to go for DIV's. Even in OOP, the main aim is to reduce the coupling between objects, so this concept can be applied to DIVS and tables. Tables are used for holding data, not for arranging it around a page. A DIV is specifically designed to arrange items around a page, so the design should use DIV's, tables should be used to store data.
Also, editing websites made with tables is just plain hard (in my opinion).
A: I think nobody cares how a website was designed/implemented when it behaves great and works fast.
I use both "table" and "div"/"span" tags in HTML markup.
Let me give you few arguments why I am choosing divs:
*
*for a table you have to write at least three tags (table, tr, td - often thead and tbody too); for a nice design you sometimes end up with a lot of nested tables
*I like to have components on the page. I don't know how to explain it exactly, but I'll try. Suppose you need a logo, and a small piece of it has to overlap the page content below it. Using tables you have to cut 2 images and put them into 2 different TDs. Using DIVs you can have a simple CSS rule to arrange it as you want. Which solution do you like best?
*when more than 3 nested tables are needed to do something, I start thinking about redesigning it using DIVs
BUT I am still using tables for:
*
*tabular data
*content that expands by itself
*fast solutions (prototypes), because the DIV box model is different in each browser, because many generators use tables, etc.
A: When I design my layout using CSS, I generally give every major section its own root (body level) div, and use relative/absolute positioning to get it into its proper place. This is a bit more flexible than tables, as I'm not limited to an arrangement that I can represent using rows and columns.
Furthermore, if I decide that I want to rearrange the layout (say I want the navigation bar to be on the right now) I can simply go and alter the position for the elements in one place (the CSS file) and the HTML doesn't have to change. If I were doing that with tables, I would have to go in and find the information and do a lot of attribute modding and copying and pasting to get the same effect.
In fact, using CSS, I can even have my users select how they want their layout to work. So long as the general size of the content areas doesn't change, I'm perfectly OK with using a bit of PHP scripting to output my CSS based on user preferences, and allowing them to rearrange the site to their own liking. Once again, possible with tables, but much much harder to maintain.
Finally, CSS allows one MAJOR benefit that tables will never provide: the ability to reformat content based on the display device. CSS allows me to use a completely different style set (including position, formatting, etc) for a printer than the one I use for the monitor. This can be extended to other media as well, an excellent example is Opera Show, which allows a cleverly designed (and very standard) CSS enhanced page to be viewed as a slide show.
So in the end, flexibility and management are the real winners. Generally, CSS allows you to do more with the layout. There's nothing technically nonstandard about a table based layout, but why would you want to limit yourself?
A: In the past, screen readers and other accessibility software had a difficult time handling tables in an efficient fashion. To some extent this was handled by the screen reader switching between a "table" mode and a "layout" mode based on what it saw inside the table. This was often wrong, and so users had to manually switch the mode when navigating through tables. In any case, large, often highly nested tables were, and to a large extent still are, very difficult to navigate using a screen reader.
The same is true when divs or other block-level elements are used to recreate tables and are highly nested. The purpose of divs is to act as formatting and layout elements, and as such they are intended to hold similar information and lay it out on the screen for visual users. When a screen reader encounters a page, it often ignores any layout information, both CSS-based and HTML-attribute-based (this isn't true for all screen readers, but it is for the most popular ones, like JAWS, Window-Eyes, and Orca for Linux).
To this end, tabular data - that is to say, data that makes logical sense ordered in two or more dimensions, with some sort of headers - is best placed in tables; use divs to manage the layout of content on the page. (Another way to think of what "tabular data" is: try to draw it in graph form... if you can't, it is likely not best represented in a table.)
Finally, with a table-based layout, in order to achieve a fine-grained control of the position of elements on the page, highly nested tables are often used. This has two effects: 1.) Increased code size for each page - Since navigation and common structure is often done with the tables, the same code is sent over the network for each request, whereas a div/css based layout pulls the css file over once, and then uses less wordy divs. 2.) Highly nested tables take much longer for the client's browser to render, leading to slightly slower load times.
In both cases, the increase in "last mile" bandwidth, as well as much faster personal computers, mitigates these factors; nonetheless, they are still real issues for many sites.
With all of this in mind, as others have said, tables are easier because they are more grid-oriented, allowing for less thought. If the site in question is not expected to be around long, or will not be maintained, it might make sense to do what is easiest, because it might be the most cost effective. However, if the anticipated userbase might include a substantial portion of handicapped individuals, or if the site will be maintained by others for a long time, spending the time up front to do things in a concise, accessible way may pay off more in the end.
A: 1: Yes, your users do care. If they use a screen reader, it will be lost. If I use any other tool which tries to extract information from the page, encountering tables that aren't used to represent tabular data is misleading.
A div or span is acceptable for separating content because that is precisely the meaning of those elements. When I, a search engine, a screen reader or anything else encounter a table element, we expect it to mean "the following is tabular data, represented in a table". When we encounter a div, we expect "this is an element used to divide my content into separate parts or areas".
2: Readability: Wrong. If all the presentation code is in css, I can read the html and I'll understand the content of the page. Or I can read the css and understand the presentation. If everything is jumbled together in the html, I have to mentally strike out all the presentation-related bits before I can even see what is content and what isn't.
Moreover, I'd be scared to meet a web developer who didn't understand css, so I really don't think that is an issue.
3: Tables are slower: Yes, they are. The reason is simple: Tables have to be parsed completely, including their contents before they can be rendered. A div can be rendered when it is encountered, even before its contents have been parsed. That means divs will show up before the page has finished loading.
And then there's the bonus, that tables are much more fragile, and won't always be rendered the same in different browsers, with different fonts and font sizes and all the other factors that can cause layout to vary. Tables are an excellent way to ensure that your site will be off by a pixel or two in certain browsers, won't scale well when the user changes his font size, or changes his settings in any other way.
Of course #1 is the big one. A lot of tools and applications depend on the semantic meaning of a webpage. The usual example is screen-readers for visually impaired users. If you're a web developer, you'll find that many large companies who may otherwise hire you to work on a site, require that the site is accessible even in this case. Which means you have to think about the semantic meaning of your html. With the semantic web, or more relevantly, microformats, rss readers and other tools, your page content is no longer viewed exclusively through a browser.
A: I'm sorry for my English, but here's another reason:
I worked in a governmental organization, and the number one reason not to use TABLEs there was people with disabilities. They use machines to "translate" web pages.
The problem is that these translation machines can't read the website properly if it's built with TABLEs. Why? Because TABLEs are for data.
In fact, if you use TABLEs, for each CELL you have to specify extra information to let disabled people know where they are in the TABLE. Imagine you have a big table and have to zoom in so that only one cell fits on the screen: you have to know which row/column you are in.
So DIVs are used, and disabled users can simply read the text, without getting weird information about rows/columns where it doesn't belong.
I also prefer TABLEs for making quick and easy templates, but I'm now used to CSS... it's powerful, but you really have to know what you are doing... :)
A: I researched the issue of screen readers and tables a few years ago and came up with information that contradicts what most developers believe:
http://www.webaim.org/techniques/tables/
"You will probably hear some accessibility advocates say that layout tables are a bad idea, and that CSS layout techniques ought to be used instead. There is truth in what they say, but, to be honest, using tables for layout is not the worst thing that you could do in terms of accessibility. People with all kinds of disabilities can easily access tables, as long as the tables are designed with accessibility in mind. "
A: Google gives very low priority to text content contained inside a table. I was giving some SEO advice to a local charity. In examining their website it was using tables to layout the site. Looking at each page, no matter what words - or combination of words - from their pages I used in the Google search box the pages would not come up in any of the top search pages. (However, by specifying the site in the search the page was returned.)
One page was well copywritten by normal standards and should have produced a good search result, but still it didn't appear on any of the first pages of search results returned. (Note this text was within a table.)
I then spotted a section of text on the pages which was in a div rather than a table. We put a few of the words from that div in the search engine.
Result? It came in at No.2 in the search result.
A: I still don't quite understand how divs / CSS make it easier to change a page design when you consider the amount of testing needed to ensure the changes work on all browsers, especially with all the hacks and so on. It's a hugely frustrating and tedious process which wastes large amounts of time and money.
Thankfully the 508 legislation only applies to the USA (land of the free - yeah right) and so, being based in the UK, I can develop web sites in whatever style I choose. Contrary to popular (US) belief, legislation made in Washington doesn't apply to the rest of the world - thank goodness for that. It must have been a good day in the world of web design the day the legislation came into force.
I think I'm becoming increasingly cynical as I get older, with 25 years in the IT industry, but I feel sure this kind of legislation is just there to protect jobs. In reality anyone can knock together a reasonable web page with a couple of tables. It takes a lot more effort and knowledge to do this with DIVs / CSS. In my experience it can take hours and hours of Googling to find solutions to quite simple problems, and reading incomprehensible articles in forums full of idealistic zealots all arguing about the 'right' way to do things. You can't just dip your toe in the water and get things to work properly in every case. It also seems to me that the lack of a definitive guide to using DIVs / CSS "out of the box" - one that applies to all situations, works in all browsers, and is written in 'normal' language with no geek speak - smells a bit of protectionism.
I'm an application developer, and I would say it takes almost twice as long to figure out layout problems and test against all browsers as it does to create the basic application, design and implement the business objects, and create the database back end. My time = money, both for me and my customers alike, so I'm sorry, but I reject the pro DIV / CSS arguments in favour of cutting costs and providing value for money for my customers. Maybe it's just the way developers' minds work, but it seems to me far easier to change a complex table structure than it is to modify DIVs / CSS.
Thankfully it now appears that a solution to these issues is available - it's called WPF.
A: Flex has a tag for laying things out in vertical columns. I don't think they got the whole layout/content thing right either to be honest, but at least they've resolved that issue.
Like many of the people frustrated with CSS I've also looked far and wide for an easy answer, was duped into feeling elated when I thought I had found it, and then had my hopes dashed to pieces when I opened the page in Chrome. I'm definitely not skilled enough to say it's not possible, but I haven't seen anyone offer up sample code for peer review proving unequivocally that it can be done reliably.
So can someone from the CSS side of this island recommend a mindset/methodology for laying out vertical columns? I've tried absolute positioning in second and third rows, but I end up with stuff overlapping everywhere, and float has similar issues when the page is shrunk down.
If there were an answer to this I'd be ecstatic to -do the right thing-. Just tell me something like "Hey, have you tried flow: vertical|horizontal" and I'm totally out of your hair.
A: As far as I know about tables, if too many tables are nested there is a great rendering overhead for the browser.
1 - The browser has to wait to render the final view until the entire table has loaded.
2 - The algorithm to render a table is expensive and doesn't run in a single pass. As the browser receives content, it tries to render it, calculating content width and height. So, if you have nested tables and, say, the browser has received the first row, whose first cell holds a large amount of content with no width and height defined, it will calculate the width and render the first row.
Meanwhile it receives the second row, with cell #2 holding loads of content! It will now calculate the widths for the second row's cells... and what about the first row? It recalculates widths recursively. That's bad on the client side.
(To cite an example) As a programmer, you optimize things such as data-fetch time and data structures. You optimize the server side to complete in, say, 2 seconds, but the end user gets the final view in 8 seconds. What is wrong here? Maybe the network is slow. But what if the network is fine and delivers the contents within the next second? Where are the extra 5 seconds being consumed? Something to worry about: the browser might be taking a lot of time estimating and rendering the tables!
How do you optimize tables?
If you're using tables, I would suggest always defining widths for the cells. This does not guarantee that the browser will blindly use those widths, but it is a great help to the browser in deciding the initial widths.
But, in the end, divs are the better way, as the CSS can be cached by the browser, while layout embedded in table markup can't be!
A: By still using tables for layout, we are missing out on the innovation happening on the div side.
Many people have come up with solutions that make creating layouts with divs easier, the most popular being the grid architecture. There are dynamic layout generators based on this architecture. Check out:
1) 960.gs and (http://grids.heroku.com/)
2) blueprint
and so many of late.
I have not seen much innovation in terms of architecture and tools for table layouts.
I would say that, all theory aside, layout with CSS and divs is practically faster; innovation in this direction has made it easier than anything else.
A: Tables used purely for layout do pose some problems for accessibility (or so I've heard). But I understand what is being asked here: that somehow you're crippling the web by using a table to ensure correct alignment of something on your page.
I've heard people argue before that FORM labels and inputs are, in fact, data and that they should be allowed into tables.
The argument that using a table to make sure a few elements line up correctly causes a massive increase in code tends to come with examples of how a single DIV can meet all their needs. They don't always include the 10 lines of CSS and the special browser hacks they had to write for IE5, IE5.5, IE6, IE7...
I think it remains about using balance in your design. Tables are in the toolbox, just remember what they are for...
A: Surely the OP was a bit of a wind up, the arguments seem so week and easily countered.
Web pages are the domain of web developers, and if they say div & CSS is better than tables that's good enough for me.
If a layout is achieved by tables generated by a server app, then a new layout means changes to the app and a rebuild and redeploy of the application, as opposed to just changes to a CSS file.
Plus, accessibility. Tables used for layout make a website inaccessible, so don't use them. It's a no-brainer, not to mention illegal.
A: Using DIVs, you can easily switch things around. For example, you could make this:
Menu | Content
Content | Menu
Menu
----
Content
It's easy to change it in CSS, not in HTML. You can also provide several styles (right-handed, left-handed, a special one for small screens).
In CSS, you can also hide the menu in a special stylesheet used for printing.
Another good thing is that your content is always in the same order in the code (the menu first, the content after), even if visually it's presented otherwise.
A: Super short answer: designing maintainable websites is difficult with tables, and simple with the standard way to do it.
A website isn't a table, it's a collection of components that interact with each other. Describing it as a table doesn't make sense.
A: In terms of site maintenance and design overhauls while maintaining content (which happen all the time, especially in eCommerce):
Content and design mashed up together via tables = updating both content and design.
Content separate from design = updating design and maybe a little content.
If I had it my way, I'd keep my content in PHP generating XML, converted to markup in XSLT and designed with CSS and Javascript for the interaction. For the Java side of things, JSP to JSTL to generate the markup.
A: I have found that even with the best planning, divs come up short in several respects. For instance, there is no way with divs to have a bottom bar that always sits at the bottom of the browser window, even when the rest of the content does not reach the bottom. Also, you cannot elegantly do anything beyond three columns, and you cannot have columns that grow and shrink according to the width of their content. In the end, we try to use divs first. However, we will not limit our HTML designs based on some religious content-vs-layout ideal.
A: It doesn't have to be a war. Harmony is possible.
Use one table for the overall layout and divs inside it.
<table>
  <tr><td colspan="3"><div>Top content</div></td></tr>
  <tr>
    <td><div>Left navigation</div></td>
    <td><div>Main content</div></td>
    <td><div>Right navigation</div></td>
  </tr>
  <tr><td colspan="3"><div>Bottom content</div></td></tr>
</table>
Look - no nested tables.
I have read so many articles on how to achieve this with divs but have never found anything that works every time with no issues.
Divs are great once you have the overall structure but quite frankly, fluid header/footer and three fluid columns is a total pain in divs. Divs weren't designed for fluidity so why use them?
Note that this approach will still give you 100% CSS compliance.
A: WYSIWYG!!! I can't for the life of me get our designers to stop using nested DIVS and styled by elementID css in templates that are supposed to be used by clients in CMS projects. That's the whole point of a WYSIWYG online editor. You are controlling both the content and the layout at the same time! There is no separation at all in the first place in this scenario. Positioned and styled Divs in some external stylesheet are anathema to the whole idea of WYSIWYG editing. Tables can be seen, rows inserted, cells combined and so on. Good luck trying this with divs in a way that doesn't frustrate users.
A: Data: use tables. Layout: use styles. Tables can render fastest with a very lean browser, that is, Links 2 or Lynx with no styling, just plain markup.
Q: VS 2008 vs VS 2008 Express I'm using Visual Studio Team System 2008 at work to do web development. I've gotten quite used to it but can't really afford to purchase even VS 2008 Standard at this time.
I have never used any of the Express editions before but I was thinking about downloading VS C# Express and VS Web Developer Express.
Am I wasting my time or can I do some serious development with these tools?
A: You can indeed do serious development using the Visual Studio 2008 Express editions, and this includes commercial products; see question number 7 in the FAQ, which says:
7) Can I use Express Editions for commercial use?
Yes, there are no licensing restrictions for applications built using the Visual Studio Express Editions.
The feature matrix shows that you do lose some functionality between the Pro and Express editions. The single biggest issue is that there is no add-in support (and adding it is forbidden by the EULA), which rules out many nice additions to the environment such as ReSharper, Visual Assist, etc.
You also don't get a "Studio" but four individual editions: Web Developer, VB, VC++ and C#. If you wish to mix and match languages/projects in the way that the Standard/Professional editions support, then you are out of luck. Under the surface, however, MSBuild is available and can provide you with multi-language solutions.
A: The Express editions work fine if you do not need different project types/languages in one solution and have no need for built-in source control.
Otherwise, it's pretty much the same.
A: You can do serious development with the Express editions. They have taken out a few things, most notably the plug-in system. If you are used to using a bunch of plug-ins, you may find that not being able to use them is a deterrent.
Here is a link to a comparison of the express edition and the other editions.
http://msdn.microsoft.com/en-us/library/zcbsd3cz(VS.80).aspx
A: Here is a detailed list of available features on different editions of Visual Studio : Product Comparison
A: You can find a comparison of the features in the various editions of Visual Studio 2008 here. The things that I find most annoying about the express edition are that you can't have multiple projects in a solution file, and you can't use add-ons like Resharper.
A: It depends how you define "serious development". One big thing missing from the Express (and even Standard) editions is support for mobile development. You also miss the convenience of grouping different project types in one solution.
I think you also miss some of the project types (Windows services and SQL Server/CLR projects come to mind) in the Express edition.
A: I haven't been able to afford the full version of VS2008 at home yet, so I have Express and use it for some intermediate application development (no web stuff). I find it quite good enough; it's got most of the stuff I use. I tried SharpDevelop, but it wouldn't allow more than one startup project, so I ditched it for Express.
Most plugins don't seem to work in the Express versions if that's an issue for you.
A: You actually CAN do commercial work with the VS 2008 Express editions.
See the answer to question #7 of the FAQ at this link:
http://www.microsoft.com/express/support/faq/
A: You can download the Professional version of VS2008 for free (if you have a .edu address) via Microsoft Dreamspark.
For that matter, the (fully-functional) 90-day trial of both VS2005 and VS2008 Pro... can be "extended"... indefinitely... by setting your system clock back, but no real reason to do that.
Express is fine for being a "lite" version but is hobbled in all sorts of ways. For anything serious, get the real thing.
A: I do serious work using the Express editions. I'm not a professional programmer since I moved into management, but I still keep my hand in writing the occasional utility or web page. The only thing I've missed from the professional versions is remote web debugging.
Q: Is there another way to integrate PDF viewing in a Flex application? I'm looking at ways to embed PDF viewing in a Flex application.
Currently the only option I've seen is by using the flash.html.HTMLLoader class, which only works if you're using AIR. This isn't a big deal -- I'm willing to use AIR if I have to -- but based on my experimentation with viewing a PDF this way it appears that AIR simply integrates the embedded Adobe PDF browser Plug-in for viewing, which not only shows the PDF page(s), but provides all of the manipulation controls as well (zooming, printing, etc.) which I don't want to see.
I'm looking for something that works somewhat along the lines of the JPedal library for Java -- an embedded component that simply renders the PDF alone.
Has anyone found a way to do this with either AIR's built-in component or via some other method?
A: There are a couple of ways, but neither actually has the PDF inside the Flex app:
*
*Convert the PDF to SWF. Use this tool or one like it to convert the file over.
*Use HTMLComponent, a method that uses an iframe over your flash/flex to make it appear like an external page is in your app. There are a few downsides to this method however - most of them described in detail at Deitte.com.
A: What you want is possible with AIR and described in this Adobe article:
http://www.adobe.com/devnet/air/flex/quickstart/scripting_pdf.html
A: Take a look at http://www.adobe.com/devnet/flex/quickstart/embedding_assets/ and see if it helps.
I don't think you can embed PDF files directly (but I'm not really sure); if you absolutely need to do it and you don't want to open a new window, you could convert the PDF to another format that can be inserted into your app.
A: If your goal is to simply display the PDF in the Flex environment then you could use the IFrame approach. You can find an example here http://www.deitte.com/archives/2006/08/finally_updated.htm
By using this approach you can load any HTML content which includes PDF's.
Take a look.
A: Okay guys, here is exactly the one we're looking for:
http://subinsugunan.blogspot.com/2009/06/embed-pdf-in-flex-application.html
Q: PHP/mySQL - regular recalculation of benchmark values as new users submit their data I am confronted with a new kind of problem which I haven't encountered yet in my very young programming "career" and would like to know your opinion about how to tackle it best.
The situation
A research application (PHP/MySQL) gathers stress-related health data from users. The user gets an analysis after filling in the questionnaire. The value for each parameter is transformed into a percentile using a benchmark (mean and standard deviation of the existing data set).
The task
Since more and more people are filling in the questionnaire, there is the potential to make the benchmark values (mean/SD) more accurate by recalculating them using the new user data. I would like the database to regularly run a script that updates the benchmark values.
The question
I've never used stored procedures so far and I only have a slight notion of what they are, but somehow I have a feeling they could maybe help me with this? Or should I write the script in PHP and then set up a cron job?
[edit]After the first couple of answers it looks like cron is clearly the way to go.[/edit]
A: PHP set up as a cron job lets you keep it in your source code management system, and if you're using a database abstraction layer it'll be portable to other databases if you ever decide to switch. For those reasons, I tend to go with scripts over stored procedures.
A: The easiest way to make this work is probably to write a script in the same language your website is using (sounds like PHP) and call it from cron.
No need to make it more complicated than it needs to be by putting the logic in two places (your existing calculations and a stored procedure).
A: What you're considering could be done in a number of ways.
*
*You could setup a trigger in your DB to recalculate the values whenever a new record is updated. You could store the code needed to update the values in a sproc if necessary.
*You could write a PHP script and run it regularly via cron.
#1 will slow down inserts to your database but will make sure your data is always up to date. #2 may lock the tables while it updates the new values, and your data will only be accurate until the next update. #2 is much easier to back up, as the script can easily be stored in your versioning system, whereas you'd need to store the trigger and sproc creation scripts in whatever backup you'd make.
Obviously you'll have to weigh up your requirements before you pick a method.
A: If the volume of data is big enough that calculating it on the fly is too much, then either:
*
*Cron job with php script to denormalise the totals
*Trigger on inserts that increments totals
A: Go with the cron job way. Simple, solid, works. In the PHP/MySQL world I would say stored procedures are no-go.
Q: hibernate insert batch with partitioned postgresql Is there a solution for doing batch inserts via Hibernate into a partitioned PostgreSQL table? Currently I'm getting an error like this...
ERROR org.hibernate.jdbc.AbstractBatcher - Exception executing batch:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:61)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:46)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:68)....
I have found this link http://lists.jboss.org/pipermail/hibernate-dev/2007-October/002771.html but I can't find anywhere on the web whether this problem has been solved or how it can be worked around.
A: You might want to try using a custom Batcher by setting the hibernate.jdbc.factory_class property. Making sure Hibernate won't check the update count of batch operations might fix your problem; you can achieve that by making your custom Batcher extend the class BatchingBatcher and then overriding the method doExecuteBatch(...) to look like:
@Override
protected void doExecuteBatch(PreparedStatement ps) throws SQLException, HibernateException {
    if ( batchSize == 0 ) {
        log.debug( "no batched statements to execute" );
    }
    else {
        if ( log.isDebugEnabled() ) {
            log.debug( "Executing batch size: " + batchSize );
        }
        try {
            // checkRowCounts( ps.executeBatch(), ps );
            ps.executeBatch();
        }
        catch (RuntimeException re) {
            log.error( "Exception executing batch: ", re );
            throw re;
        }
        finally {
            batchSize = 0;
        }
    }
}
Note that the new method doesn't check the results of executing the prepared statements. Keep in mind that making this change might affect hibernate in some unexpected way (or maybe not).
A: Thanks! It did the trick; no problems have popped up so far :) ... one thing, though:
I had to implement a BatcherFactory class and point to it in the persistence.xml file, like this:
<property name="hibernate.jdbc.factory_class" value="path.to.my.batcher.factory.implementation"/>
From that factory I've called my batcher implementation with the code above.
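For anyone else wiring this up, here is a sketch of what that factory might look like against the Hibernate Core 3.2.x API (the class names are illustrative; NonCheckingBatcher stands for the BatchingBatcher subclass shown above):
import org.hibernate.Interceptor;
import org.hibernate.jdbc.Batcher;
import org.hibernate.jdbc.BatcherFactory;
import org.hibernate.jdbc.ConnectionManager;

// Factory referenced by the hibernate.jdbc.factory_class property;
// Hibernate instantiates it and asks it for a Batcher.
public class NonCheckingBatcherFactory implements BatcherFactory {
    public Batcher createBatcher(ConnectionManager connectionManager, Interceptor interceptor) {
        return new NonCheckingBatcher(connectionManager, interceptor);
    }
}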
PS: Hibernate Core 3.2.6 GA.
Thanks once again.
A: They say to use two triggers in a partitioned table, or the @SQLInsert annotation, here: http://www.redhat.com/f/pdf/jbw/jmlodgenski_940_scaling_hibernate.pdf pages 21-26 (it also mentions an @SQLInsert specifying a String method).
Here is an example with an after trigger to delete the extra row in the master: https://gist.github.com/copiousfreetime/59067
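For reference, a minimal sketch of the @SQLInsert route those slides mention (the entity and column names are made up, and this assumes Hibernate Annotations is on the classpath; note that the parameter order in the custom SQL must match the order in which Hibernate binds the properties):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.ResultCheckStyle;
import org.hibernate.annotations.SQLInsert;

// check = ResultCheckStyle.NONE stops Hibernate from verifying the JDBC
// row count, so the 0 rows reported after the partition trigger redirects
// the insert no longer raise a StaleStateException.
@Entity
@Table(name = "tablename")
@SQLInsert(sql = "insert into tablename (partition_key, payload, id) values (?, ?, ?)",
           check = ResultCheckStyle.NONE)
public class PartitionedRow {
    @Id
    private Long id;
    private Long partitionKey;
    private String payload;
}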
A: It appears that if you can use RULES instead of triggers for the insert, it can return the right number, but only with a single RULE without a WHERE statement.
ref1
ref2
ref3
Another option may be to create a view that 'wraps' the partitioned table; you then return the NEW row to indicate a successful row update, without accidentally adding an extra unwanted row to the master table.
create view tablename_view as select * from tablename; -- create trivial wrapping view
CREATE OR REPLACE FUNCTION partitioned_insert_trigger() -- partitioned insert trigger
RETURNS TRIGGER AS $$
BEGIN
IF (NEW.partition_key>= 5500000000 AND
NEW.partition_key < 6000000000) THEN
INSERT INTO tablename_55_59 VALUES (NEW.*);
ELSIF (NEW.partition_key >= 5000000000 AND
NEW.partition_key < 5500000000) THEN
INSERT INTO tablename_50_54 VALUES (NEW.*);
ELSIF (NEW.partition_key >= 500000000 AND
NEW.partition_key < 1000000000) THEN
INSERT INTO tablename_5_9 VALUES (NEW.*);
ELSIF (NEW.partition_key >= 0 AND
NEW.partition_key < 500000000) THEN
INSERT INTO tablename_0_4 VALUES (NEW.*);
ELSE
RAISE EXCEPTION 'partition key is out of range. Fix the trigger function';
END IF;
RETURN NEW; -- RETURN NEW in this case, typically you'd return NULL from this trigger, but for views we return NEW
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER insert_view_trigger
INSTEAD OF INSERT ON tablename_view
FOR EACH ROW EXECUTE PROCEDURE partitioned_insert_trigger(); -- create "INSTEAD OF" trigger
ref: http://www.postgresql.org/docs/9.2/static/trigger-definition.html
If you go the view-wrapper route, one option is to also define trivial "instead of" triggers for delete and update; then you can just use the name of the view in place of your normal table in all transactions.
Another option that uses the view is to create an insert rule so that any inserts on the main table go to the view [which uses its trigger], e.g. (assuming you already have partitioned_insert_trigger, tablename_view and insert_view_trigger created as listed above):
create RULE use_right_inserter_tablename AS
ON INSERT TO tablename
DO INSTEAD insert into tablename_view VALUES (NEW.*);
Then it will use your new working view wrapper insert.
A: I faced the same problem while inserting documents through Hibernate. After a lot of searching I found that Hibernate expects the updated rows to be returned, so changing RETURN NULL to RETURN NEW in the trigger procedure resolves the problem, as shown below:
RETURN NEW
A: I found another solution for the same problem on this webpage:
It suggests the same solution that @rogerdpack described, changing RETURN NULL to RETURN NEW, and adding a new trigger that deletes the duplicated tuple in the master with the query:
DELETE FROM ONLY master_table;
Q: ChatFx Lite LicenseException on build server I downloaded ChartFx Lite and am using it successfully in my windows forms application on my development machine. I have added the ChartFX.Lite.dll assembly to my source repository and am trying to build the project on my build server that does not have ChartFx Lite installed. I get the error:
Exception occurred creating type 'SoftwareFX.ChartFX.Lite.Chart, ChartFX.Lite, Version=6.0.839.0, Culture=neutral, PublicKeyToken=a1878e2052c08dce' System.ComponentModel.LicenseException: Couldn't get Design Time license for 'SoftwareFX.ChartFX.Lite.Chart'
What do I need to do to get this working without installing ChartFx Lite on my build server?
A: If you just want to test the build, you can suppress the lines concerning ChartFX in the .licx file created by Visual Studio. It should build this way, but it will probably not execute correctly, as the license will not be included.
The .licx file contains instructions to include binary license resource during build. I'm afraid that if you want a real build you have to install ChartFx on the build server.
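For reference, the entry to suppress in licenses.licx should look something like the line below (reconstructed from the assembly information in the error message above):
SoftwareFX.ChartFX.Lite.Chart, ChartFX.Lite, Version=6.0.839.0, Culture=neutral, PublicKeyToken=a1878e2052c08dce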
Q: Widget notifying other widget(s) How should widgets in GWT inform other widgets to refresh themselves or perform some other action?
Should I use sinkEvent / onBrowserEvent?
And if so, is there a way to create custom events?
A: It's a very open-ended question - for example, you could create your own static event handler class which widgets subscribe themselves to, e.g.:
class NewMessageHandler {
    void update(Widget caller, Widget subscriber) {
        ...
    }
}
customEventHandler.addEventType("New Message", new NewMessageHandler());
Widget w;
customEventHandler.subscribe(w, "New Message");
...
Widget caller;
// Fire "New Message" event for all widgets which have
// subscribed
customEventHandler.fireEvent(caller, "New Message");
Where customEventHandler keeps track of all widgets subscribing to each named event, and calls the update method on the named class, which could then call any additional methods you want. You might want to call unsubscribe in the destructor - but you could make it as fancy as you want.
A: I have solved this problem using the Observer Pattern and a central controller. The central controller is the only class that has knowledge of all widgets in the application and determines the way they fit together. If someone changes something on widget A, widget A fires an event. In the event handler you call the central controller through the 'notifyObservers()' call, which informs the central controller (and optionally others, but for simplicity I'm not going into that) that a certain action (passing a 'MyEvent' enum instance) has occurred.
This way, application flow logic is contained in a single central class and widgets don't need a spaghetti of references to eachother.
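A minimal sketch of that idea (the event names and wiring here are illustrative, not the actual application code):
import java.util.ArrayList;
import java.util.List;

public class CentralController {
    public enum MyEvent { DATA_CHANGED, SELECTION_CHANGED }

    public interface Observer {
        void onEvent(MyEvent event);
    }

    private final List<Observer> observers = new ArrayList<Observer>();

    public void register(Observer observer) {
        observers.add(observer);
    }

    // A widget's own event handler calls this; the controller decides which
    // widgets react, so the widgets never hold references to each other.
    public void notifyObservers(MyEvent event) {
        for (Observer observer : observers) {
            observer.onEvent(event);
        }
    }
}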
A: So here is my (sample) implementation,
first let's create a new event:
import java.util.EventObject;
import com.google.gwt.user.client.ui.Widget;
public class NotificationEvent extends EventObject {
public NotificationEvent(String data) {
super(data);
}
}
Then we create an event handler interface:
import com.google.gwt.user.client.EventListener;
public interface NotificationHandler extends EventListener {
void onNotification(NotificationEvent event);
}
If we now have a widget implementing NotificationHandler, we can trigger the event by calling:
((NotificationHandler)widget).onNotification(event);
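A hypothetical subscriber might then look like this - a label that refreshes itself whenever another component fires a NotificationEvent at it:
import com.google.gwt.user.client.ui.Label;

public class StatusLabel extends Label implements NotificationHandler {
    // getSource() returns the data passed to the NotificationEvent constructor.
    public void onNotification(NotificationEvent event) {
        setText("Notified: " + event.getSource());
    }
}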
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Logging activities in multithreaded applications I have a layered application in Java which has a multi-threaded data access layer that is invoked from different points. A single call to this layer is likely to spawn several threads to parallelize requests to the DB.
What I'm looking for is a logging tool that would allow me to define "activities" that are composed by various threads. Therefore, the same method in the data access layer should log different outputs depending on its caller. The ability to group different outputs to summarize the total cost of an operation is also important.
Although the application is in Java, language is not a restriction; what I need are the design guidelines so I can eventually implement it. We are currently using log4j, but can't get this behaviour from it.
A: You should also have a look at the nested diagnostic context feature of log4j. Pushing different contexts to the logger for different callers might do the trick for you.
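For illustration, a rough sketch of the NDC approach (the activity id is a hypothetical name); with %x in the PatternLayout, every message logged between push() and pop() carries the caller's context:
import org.apache.log4j.Logger;
import org.apache.log4j.NDC;

public class DataAccessTask implements Runnable {
    private static final Logger log = Logger.getLogger(DataAccessTask.class);
    private final String activityId; // hypothetical id of the calling activity

    public DataAccessTask(String activityId) {
        this.activityId = activityId;
    }

    public void run() {
        NDC.push(activityId); // %x in the layout now prints this context
        try {
            log.debug("querying the database");
        } finally {
            NDC.pop();
            NDC.remove(); // avoid leaking context on pooled threads
        }
    }
}
Since the data access layer spawns its own threads, the activity id has to be handed to each worker, which pushes it onto its own NDC stack.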
A: You should be able to pass a logger around, so you create a logger based on some data common to the task - e.g. the username. Then, pass this logger as a parameter to all the methods you need. That way, you'll be able to set different filters and/or rules in your log4j config file, or scrape the output file based on the logger name.
EDIT: Also check the MDC and NDC classes in log4j. You can add context data there.
A: In log4j you can log the thread name with the "%t" pattern. See log4j Pattern Layout.
A: In one of my (web) applications, I use a ThreadLocal logger that captures logging information into a StringBuilder. The logger object is initialized in the HttpServlet#service method if a trace parameter is set (if it is not set, there is a very fast null-logger). The resulting output is either dumped as an HTML comment into the requesting page, or written to a log file in one segment.
A: In Java5 (and later) you can call
StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
Inspect the stack trace to whatever depth you want and log accordingly.
In Java 1.4 you can get the same info with
StackTraceElement[] stackTrace = new Exception().getStackTrace();
A: You want to associate logger objects with threads I think. A ThreadLocal variable holding a log4j logger instance for each thread might help:
http://java.sun.com/javase/6/docs/api/java/lang/ThreadLocal.html
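A minimal sketch of that idea (the logger naming scheme here is just an assumption; any per-task key would do):
import org.apache.log4j.Logger;

public class PerThreadLogger {
    // Each thread lazily gets its own logger instance on first use.
    private static final ThreadLocal<Logger> LOG = new ThreadLocal<Logger>() {
        @Override
        protected Logger initialValue() {
            return Logger.getLogger("task." + Thread.currentThread().getName());
        }
    };

    public static Logger get() {
        return LOG.get();
    }
}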
A: You will need to pass some structure to the data access layer that identifies the current "activity". You might already have an "Activity"-class that makes sense, you might use a Logger-instance as Sunny suggested or you might use a third structure to keep track of the activity-context.
In any case, since your "activity" is processed across multiple threads you cannot use thread-local-storage for keeping track of the current "activity", like most of the other current answers suggest. You will need to pass it around explicitly.
I would suggest making a small facade on top of log4j that expands the interface with methods like
void debug(Activity activity, String message);
and passing the activity-context into this from the data access layer.
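A minimal sketch of such a facade (the Activity class and its getId() method are assumptions about your domain model):
import org.apache.log4j.Logger;

public class ActivityLog {
    private static final Logger log = Logger.getLogger(ActivityLog.class);

    // The activity context is passed in explicitly, so no thread-local
    // state is involved and the call works across thread boundaries.
    public static void debug(Activity activity, String message) {
        if (log.isDebugEnabled()) {
            log.debug("[" + activity.getId() + "] " + message);
        }
    }
}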
You will need to make some modification to the data access layer to allow you to pass the current activity to it, but how best to do that depends strongly on the current interface.
If you use the Workspace-pattern, you might just need to add a setActivity() method on the Workspace-class, but other interface-pattern might require you to add an Activity parameter to all methods.
If you for some reason are unable or unwilling to change the data access layer, you might of course store the activity-context in thread-local storage before invoking the data access layer and retrieve it just before spawning the sub-threads or enqueuing the jobs in the data access layer. That is a workable solution, but it is a bit dangerous to pass information around in that way.
A: You can use MDC or NDC for your scenario. NDC works on the principle of a stack while MDC works on a map; here is the official documentation for both:
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html
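As a quick sketch of the MDC variant (the key name "activity" is arbitrary), the value is printed wherever the pattern layout uses %X{activity}:
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class DbWorker implements Runnable {
    private static final Logger log = Logger.getLogger(DbWorker.class);
    private final String activityId; // passed in explicitly; the MDC itself is per-thread

    public DbWorker(String activityId) {
        this.activityId = activityId;
    }

    public void run() {
        MDC.put("activity", activityId);
        try {
            log.info("processing database request");
        } finally {
            MDC.remove("activity");
        }
    }
}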
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: WCF Oracle adaptor and UDT Is there any way to work with Oracle UDTs with the current WCF adaptor?
A: If I understand your question correctly, you want to express Oracle user-defined types in WCF services? This will really depend on the protocol to be used. For example, if you are using one of the SOAP protocols like the WS* protocols, then you are stuck with those data types that are defined in SOAP. Going from any data type, whether it be a built-in type in your database, a custom type in C#, or a user-defined type in SQL Server, Oracle, whatever, you will have this limitation. Your simple types will probably map to something less complex like a numeric or a string. If you have a complex type you may opt to write your own serialization for the type.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Consequences of changing USERPostMessageLimit One of our legacy applications relies heavily on PostThreadMessage() for inter-thread communication, so we increased USERPostMessageLimit in the registry (way) beyond the normal 10,000.
However, documentation on MSDN states that "This limit should be sufficiently large. If your application exceeds the limit, it should be redesigned to avoid consuming so many system resources." [1]
Can anyone enlighten me as to how exactly consuming too many system resources manifests itself? What exactly are system resources? Can I somehow monitor an application's usage of system resources? Any information would be very helpful in deciding whether it is worth the time and effort to redesign this application.
A: The resources it is referring to are those used by the threads for receiving/handling the messages. You can monitor the thread pool size and other resources using Task Manager (look at View->Select Columns). It may help you identify the specific resource if the consumer is resource-locked; look for a resource count that tops out even while your thread count is increasing.
However, if you need to increase USERPostMessageLimit, then the message producer is simply overloading the message consumer; by increasing this limit you are compounding your problem, not fixing it. Reduce USERPostMessageLimit back to the default, and if your message producer cannot post the message, have it sleep before retrying, allowing the consuming thread to clear some messages.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: ORA-00161: transaction branch length 103 is illegal (maximum allowed 64) Error:
ORA-00161: transaction branch length 103 is illegal (maximum allowed 64…
I'm using the DAC from Oracle, any idea if there is a patch for this?
A: This looks to be a similar issue for .NET 2.0, Vista and Oracle: http://forums.oracle.com/forums/thread.jspa?threadID=516250
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What causes the error "Can't execute code from a freed script" I thought I'd found the solution a while ago (see my blog):
If you ever get the JavaScript (or should that be JScript) error "Can't execute code from a freed script" - try moving any meta tags in the head so that they're before your script tags.
...but based on one of the most recent blog comments, the fix I suggested may not work for everyone. I thought this would be a good one to open up to the StackOverflow community....
What causes the error "Can't execute code from a freed script" and what are the solutions/workarounds?
A: Here's a very specific case in which I've seen this behavior. It is reproducible for me in IE6 and IE7.
From within an iframe:
window.parent.mySpecialHandler = function() { ...work... }
Then, after reloading the iframe with new content, in the window containing the iframe:
window.mySpecialHandler();
This call fails with "Can't execute code from a freed script" because mySpecialHandler was defined in a context (the iframe's original DOM) that no longer exists. (Reloading the iframe destroyed this context.)
You can however safely set "serializable" values (primitives, object graphs that don't reference functions directly) in the parent window. If you really need a separate window (in my case, an iframe) to specify some work to a remote window, you can pass the work as a String and "eval" it in the receiver. Be careful with this; it generally doesn't make for a clean or secure implementation.
A: If you are trying to access the JS object, the easiest way is to create a copy:
var objectCopy = JSON.parse(JSON.stringify(object));
Hope it'll help.
A: This error can occur in MSIE when a child window tries to communicate with a parent window which is no longer open.
(Not exactly the most helpful error message text in the world.)
A: Beginning in IE9 we began receiving this error when calling .getTime() on a Date object stored in an Array within another Object. The solution was to make sure it was a Date before calling Date methods:
Fail: rowTime = wl.rowData[a][12].getTime()
Pass: rowTime = new Date(wl.rowData[a][12]).getTime()
A: You get this error when you call a function that was created in a window or frame that no longer exists.
If you don't know in advance if the window still exists, you can do a try/catch to detect it:
try
{
f();
}
catch(e)
{
if (e.number == -2146823277)
// f is no longer available
...
}
A: The error is caused when the 'parent' window of the script is disposed (i.e. closed) but a reference to the script is still held (such as in another window) and then invoked. Even though the 'object' is still alive, the context in which it wants to execute is not.
It's somewhat dirty, but it works for my Windows Sidebar Gadget:
Here is the general idea:
The 'main' window sets up a function which will eval'uate some code, yup, it's that ugly.
Then a 'child' can call this "builder function" (which is /bound to the scope of the main window/) and get back a function which is also bound to the 'main' window. An obvious disadvantage is, of course, that the function being 'rebound' can't closure over the scope it is seemingly defined in... anyway, enough of the gibbering:
This is partially pseudo-code, but I use a variant of it on a Windows Sidebar Gadget (I keep saying this because Sidebar Gadgets run in "unrestricted zone 0", which may -- or may not -- change the scenario greatly.)
// This has to be setup from the main window, not a child/etc!
mainWindow.functionBuilder = function (func, args) {
// trim the name, if any
var funcStr = ("" + func).replace(/^function\s+[^\s(]+\s*\(/, "function (")
try {
var rebuilt
eval("rebuilt = (" + funcStr + ")")
return rebuilt(args)
} catch (e) {
alert("oops! " + e.message)
}
}
// then in the child, as an example
// as stated above, even though function (args) looks like it's
// a closure in the child scope, IT IS NOT. There you go :)
var x = {blerg: 2}
functionInMainWindowContext = mainWindow.functionBuilder(function (args) {
// in here args is in the bound scope -- have at the child objects! :-/
function fn (blah) {
return blah * args.blerg
}
return fn
}, x)
x.blerg = 7
functionInMainWindowContext(6) // -> 42 if I did my math right
As a variant, the main window should be able to pass the functionBuilder function to the child window -- as long as the functionBuilder function is defined in the main window context!
I feel like I used too many words. YMMV.
A: I ran into this problem when, inside of a child frame, I added a reference type to the top-level window and attempted to access it after the child window reloaded.
i.e.
// set the value on first load
window.top.timestamp = new Date();
// after frame reloads, try to access the value
if(window.top.timestamp) // <--- Raises exception
...
I was able to resolve the issue by using only primitive types
// set the value on first load
window.top.timestamp = Number(new Date());
A: This isn't really an answer, but more an example of where this precisely happens.
We have frame A and frame B (this wasn't my idea, but I have to live with it). Frame A never changes, Frame B changes constantly. We cannot apply code changes directly into frame A, so (per the vendor's instructions) we can only run JavaScript in frame B - the exact frame that keeps changing.
We have a piece of JavaScript that needs to run every 5 seconds, so the JavaScript in frame B creates a new script tag and inserts it into the head section of frame B. The setInterval exists in this new script (the one injected), as well as the function to invoke. Even though the injected JavaScript is technically loaded by frame A (since it now contains the script tag), once frame B changes, the function is no longer accessible by the setInterval.
A: I got this error in IE9 within a page that eventually opens an iFrame. As long as the iFrame wasn't open, I could use localStorage. Once the iFrame was opened and closed, I wasn't able to use localStorage anymore because of this error. To fix it, I had to add this code to the JavaScript that was inside the iFrame and also using localStorage.
if (window.parent) {
localStorage = window.parent.localStorage;
}
A: Got this error in DHTMLX while opening a dialog when the parent id or current window id was not found:
$(document).ready(function () {
if (parent.dxWindowMngr == undefined) return;
DhtmlxJS.GetCurrentWindow('wnManageConDlg').show();
});
Just make sure you are sending the correct current/parent window id while opening a dialog.
A: On update of the iframe's src I was getting that error.
I got the error by accessing an event (click in my case) of an element in the main window like this (calling the main/outermost window directly):
top.$("#settings").on("click",function(){
$("#settings_modal").modal("show");
});
I just changed it like this and it works fine (calling the parent of the parent of the iframe window):
$('#settings', window.parent.parent.document).on("click",function(){
$("#settings_modal").modal("show");
});
My iframe containing the modal is also inside another iframe.
A: The explanations in the previous answers are very relevant. I'm just trying to provide my scenario; hope this can help others.
We were using:
<script> window.document.writeln(table) </script>
and calling other functions in the script on onchange events, but writeln completely overwrites the HTML in IE, whereas it behaves differently in Chrome.
We changed it to:
<script> window.document.body.innerHTML = table;</script>
This retained the script, which fixed the issue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
}
|
Q: What's wrong with foreign keys? I remember hearing Joel Spolsky mention in podcast 014 that he'd barely ever used a foreign key (if I remember correctly). However, to me they seem pretty vital to avoid duplication and subsequent data integrity problems throughout your database.
Do people have some solid reasons as to why (to avoid a discussion in lines with Stack Overflow principles)?
Edit: "I've yet to have a reason to create a foreign key, so this might be my first reason to actually set up one."
A: "Before adding a record, check that a corresponding record exists in another table" is business logic.
Here are some reasons you don't want this in the database:
*
*If the business rules change, you have to change the database. The database will need to recreate the index in a lot of cases and this is slow on large tables. (Changing rules include: allow guests to post messages or allow users to delete their account despite having posted comments, etc).
*Changing the database is not as easy as deploying a software fix by pushing the changes to the production repository. We want to avoid changing the database structure as much as possible. The more business logic there is in the database, the more you increase the chances of needing to change the database (and triggering re-indexing).
*TDD. In unit tests you can substitute the database with mocks and test the functionality. If you have any business logic in your database, you are not doing complete tests and would need to either test with the database or replicate the business logic in code for testing purposes, duplicating the logic and increasing the likelihood of the logic not working in the same way.
*Reusing your logic with different data sources. If there is no logic in the database, my application can create objects from records from the database, create them from a web service, a json file or any other source. I just need to swap out the data mapper implementation and can use all my business logic with any source. If there is logic in the database, this isn't possible and you have to implement the logic at the data mapper layer or in the business logic. Either way, you need those checks in your code. If there's no logic in the database I can deploy the application in different locations using different database or flat-file implementations.
A: This is an issue of upbringing. If somewhere in your educational or professional career you spent time feeding and caring for databases (or worked closely with talented folks who did), then the fundamental tenets of entities and relationships are well-ingrained in your thought process. Among those rudiments is how/when/why to specify keys in your database (primary, foreign and perhaps alternate). It's second nature.
If, however, you've not had such a thorough or positive experience in your past with RDBMS-related endeavors, then you've likely not been exposed to such information. Or perhaps your past includes immersion in an environment that was vociferously anti-database (e.g., "those DBAs are idiots - we few, we chosen few java/c# code slingers will save the day"), in which case you might be vehemently opposed to the arcane babblings of some dweeb telling you that FKs (and the constraints they can imply) really are important if you'd just listen.
Most everyone was taught when they were kids that brushing your teeth was important. Can you get by without it? Sure, but somewhere down the line you'll have less teeth available than you could have if you had brushed after every meal. If moms and dads were responsible enough to cover database design as well as oral hygiene, we wouldn't be having this conversation. :-)
A: From my experience it's always better to avoid using FKs in database-critical applications. I would not disagree with the folks here who say FKs are good practice, but it's not practical where the database is huge and handles huge numbers of CRUD operations/sec. I can share without naming... one of the biggest investment banks doesn't have a single FK in its databases. These constraints are handled by the programmers while creating applications involving the DB. The basic reason is that whenever a new CRUD operation is done, it has to affect multiple tables and verify each insert/update; though this won't be a big issue for queries affecting single rows, it does create huge latency when you deal with batch processing, which any big bank has to do as a daily task.
It's better to avoid FKs, but the risk has to be handled by the programmers.
A: I'm sure there are plenty of applications where you can get away with it, but it's not the best idea. You can't always count on your application to properly manage your database, and frankly managing the database should not be of very much concern to your application.
If you are using a relational database then it seems you ought to have some relationships defined in it. Unfortunately this attitude (you don't need foreign keys) seems to be embraced by a lot of application developers who would rather not be bothered with silly things like data integrity (but need to because their companies don't have dedicated database developers). Usually in databases put together by these types you are lucky just to have primary keys ;)
A: Foreign keys are essential to any relational database model.
A: The bigger question is: would you drive with a blindfold on? That's how it is if you develop a system without referential constraints. Keep in mind that business requirements change, application design changes, the respective logical assumptions in the code change, logic itself can be refactored, and so on. In general, constraints in databases are put in place under contemporary logical assumptions, seemingly correct for a particular set of logical assertions and assumptions.
Through the lifecycle of an application, referential and data checks constraints police data collection via the application, especially when new requirements drive logical application changes.
To the subject of this listing - a foreign key does not by itself "improve performance", nor does it significantly "degrade performance" from the standpoint of a real-time transaction processing system. However, there is an aggregate cost for constraint checking in a HIGH volume "batch" system. That is the difference between real-time and batch transaction processing: in batch processing, the aggregate cost of constraint checks, incurred across a sequentially processed batch, poses a performance hit.
In a well designed system, data consistency checks would be done "before" processing a batch through (nevertheless, there is a cost associated here also); therefore, foreign key constraint checks are not required during load time. In fact all constraints, including foreign key, should be temporarily disabled till the batch is processed.
QUERY PERFORMANCE - if tables are joined on foreign keys, be cognizant of the fact that foreign key columns are NOT INDEXED (though the respective primary key is indexed by definition). Indexing a foreign key - for that matter, indexing any key - and joining tables on indexed columns helps performance; joining on a non-indexed key that merely has a foreign key constraint on it does not.
Changing subjects: if a database just supports website display/rendering of content and records clicks, then a database with full constraints on all tables is overkill for such purposes. Think about it - most websites don't even use a database for that. For similar requirements, where data is just being recorded and not referenced per se, use an in-memory database, which does not have constraints. This doesn't mean that there is no data model - there is a logical model, but no physical data model.
A: Reasons to use Foreign Keys:
*
*you won't get Orphaned Rows
*you can get nice "on delete cascade" behavior, automatically cleaning up tables
*knowing about the relationships between tables in the database helps the Optimizer plan your queries for most efficient execution, since it is able to get better estimates on join cardinality.
*FKs give a pretty big hint on what statistics are most important to collect on the database, which in turn leads to better performance
*they enable all kinds of auto-generated support -- ORMs can generate themselves, visualization tools will be able to create nice schema layouts for you, etc.
*someone new to the project will get into the flow of things faster since otherwise implicit relationships are explicitly documented
Reasons not to use Foreign Keys:
*
*you are making the DB work extra on every CRUD operation because it has to check FK consistency. This can be a big cost if you have a lot of churn
*by enforcing relationships, FKs specify an order in which you have to add/delete things, which can lead to refusal by the DB to do what you want. (Granted, in such cases, what you are trying to do is create an Orphaned Row, and that's not usually a good thing). This is especially painful when you are doing large batch updates, and you load up one table before another, with the second table creating consistent state (but should you be doing that sort of thing if there is a possibility that the second load fails and your database is now inconsistent?).
*sometimes you know beforehand your data is going to be dirty, you accept that, and you want the DB to accept it
*you are just being lazy :-)
I think (I am not certain!) that most established databases provide a way to specify a foreign key that is not enforced, and is simply a bit of metadata. Since non-enforcement wipes out every reason not to use FKs, you should probably go that route if any of the reasons in the second section apply.
A: I always use them, but then I make databases for financial systems. The database is the critical part of the application. If the data in a financial database isn't totally accurate then it really doesn't matter how much effort you put into your code/front-end design. You're just wasting your time.
There's also the fact that multiple systems generally need to interface directly with the database - from other systems that just read data out (Crystal Reports) to systems that insert data (not necessarily using an API I've designed; it may be written by a dull-witted manager who has just discovered VBScript and has the SA password for the SQL box). If the database isn't as idiot-proof as it can possibly be, well - bye bye database.
If your data is important, then yes, use foreign keys, create a suite of stored procedures to interact with the data, and make the toughest DB you can. If your data isn't important, why are you making a database to begin with?
A: I agree with the previous answers in that they are useful to maintain data consistency. However, there was an interesting post by Jeff Atwood some weeks ago that discussed the pros and cons of normalized and consistent data.
In a few words, a denormalized database can be faster when handling huge amounts of data; and you may not care about precise consistency depending on the application, but it forces you to be much more careful when dealing with data, as the DB won't be.
A: The Clarify database is an example of a commercial database that has no primary or foreign keys.
http://www.geekinterview.com/question_details/18869
The funny thing is, the technical documentation goes to great lengths to explain how tables are related, what columns to use to join them etc.
In other words, they could have joined the tables with explicit declarations (DRI) but they chose not to.
Consequently, the Clarify database is full of inconsistencies and it underperforms.
But I suppose it made the developers job easier, not having to write code to deal with referential integrity such as checking for related rows before deleting, adding.
And that, I think, is the main benefit of not having foreign key constraints in a relational database. It makes it easier to develop, at least that is from a devil-may-care point of view.
A: If you are absolutely sure that the one underlying database system will not change in the future, I would use foreign keys to ensure data integrity.
But here is another very good real-life reason not to use foreign keys at all:
You are developing a product, which should support different database systems.
If you are working with the Entity Framework, which is able to connect to many different database systems, you may also want to support "open-source-free-of-charge" serverless databases. Not all of these databases may support your foreign key rules (updating, deleting rows...).
This can lead to different problems:
1.) You may run into errors when the database structure is created or updated. Maybe there will only be silent errors, because your foreign keys are just ignored by the database system.
2.) If you rely on foreign keys, you will probably make fewer or even no data integrity checks in your business logic. Now, if the new database system does not support these foreign key rules or just behaves in a different way, you have to rewrite your business logic.
You may ask: who needs different database systems? Well, not everybody can afford or wants a full-blown SQL Server on their machine. This is software which needs to be maintained. Others have already invested time and money in some other DB system. Serverless databases are great for small customers with only one machine.
Nobody knows how all of these DB systems behave, but your business logic, with its integrity checks, always stays the same.
A: Update: I always use foreign keys now. My answer to the objection "they complicated testing" is "write your unit tests so they don't need the database at all. Any tests that use the database should use it properly, and that includes foreign keys. If the setup is painful, find a less painful way to do the setup."
Foreign keys complicate automated testing
Suppose you're using foreign keys. You're writing an automated test that says "when I update a financial account, it should save a record of the transaction." In this test, you're only concerned with two tables: accounts and transactions.
However, accounts has a foreign key to contracts, and contracts has a fk to clients, and clients has a fk to cities, and cities has a fk to states.
Now the database will not allow you to run your test without setting up data in four tables that aren't related to your test.
There are at least two possible perspectives on this:
*
*"That's a good thing: your test should be realistic, and those data constraints will exist in production."
*"That's a bad thing: you should be able to unit test pieces of the system without involving other pieces. You can add integration tests for the system as a whole."
It may also be possible to temporarily turn off foreign key checks while running tests. MySQL, at least, supports this.
A: They can make deleting records more cumbersome - you can't delete the "master" record where there are records in other tables where foreign keys would violate that constraint. You can use triggers to have cascading deletes.
If you chose your primary key unwisely, then changing that value becomes even more complex. For example, if I have the PK of my "customers" table as the person's name, and make that key a FK in the "orders" table", if the customer wants to change his name, then it is a royal pain... but that is just shoddy database design.
I believe the advantages of using foreign keys outweigh any supposed disadvantages.
A: Verifying foreign key constraints takes some CPU time, so some folks omit foreign keys to get some extra performance.
A: Additional Reason to use Foreign Keys:
- Allows greater reuse of a database
Additional Reason to NOT use Foreign Keys:
- You are trying to lock-in a customer into your tool by reducing reuse.
A: I know only Oracle databases, no other ones, and I can tell that foreign keys are essential for maintaining data integrity. Prior to inserting data, a data structure needs to be made, and made correctly. When that is done - and thus all primary AND foreign keys are created - the work is done!
Meaning: orphaned rows? No. Never seen that in my life. Unless a bad programmer forgot the foreign key, or he implemented that on another level. Both are - in the context of Oracle - huge mistakes, which will lead to data duplication, orphaned data, and thus: data corruption. I can't imagine a database without FKs enforced. It looks like chaos to me. It's a bit like the Unix permission system: imagine that everybody is root. Think of the chaos.
Foreign keys are essential, just like primary keys. It's like saying: what if we removed primary keys? Well, total chaos is going to happen. That's what. You may not move the primary or foreign key responsibility to the programming level; it must be at the data level.
Drawbacks? Yes, absolutely! Because on insert, a lot more checks are going to happen. But, if data integrity is more important than performance, it's a no-brainer. The problem with performance on Oracle is more related to indexes, which come with PKs and FKs.
A: "They can make deleting records more cumbersome - you can't delete the "master" record where there are records in other tables where foreign keys would violate that constraint."
It's important to remember that the SQL standard defines actions that are taken when a foreign key is deleted or updated.
The ones I know of are:
*
*ON DELETE RESTRICT - Prevents any rows in the other table that have keys in this column from being deleted. This is what Ken Ray described above.
*ON DELETE CASCADE - If a row in the other table is deleted, delete any rows in this table that reference it.
*ON DELETE SET DEFAULT - If a row in the other table is deleted, set any foreign keys referencing it to the column's default.
*ON DELETE SET NULL - If a row in the other table is deleted, set any foreign keys referencing it in this table to null.
*ON DELETE NO ACTION - This foreign key only marks that it is a foreign key; namely for use in OR mappers.
These same actions also apply to ON UPDATE.
The default seems to depend on which sql server you're using.
A: @imphasing - this is exactly the kind of mindset that causes maintenance nightmares.
Why oh why would you ignore declarative referential integrity, where the data can be guaranteed to be at least consistent, in favour of so called "software enforcement" which is a weak preventative measure at best.
A: There's one good reason not to use them: If you don't understand their role or how to use them.
In the wrong situations, foreign key constraints can lead to waterfall replication of accidents. If somebody removes the wrong record, undoing it can become a mammoth task.
Also, conversely, when you need to remove something, if poorly designed, constraints can cause all sorts of locks that prevent you.
A: There are no good reasons not to use them... unless orphaned rows aren't a big deal to you I guess.
A: The argument I have heard is that the front-end should have these business rules. Foreign keys "add unnecessary overhead" when you shouldn't be allowing any insertions that break your constraints in the first place. Do I agree with this? No, but that is what I have always heard.
EDIT: My guess is he was referring to foreign key constraints, not foreign keys as a concept.
A: To me, if you want to go by the ACID standards, it is critical to have foreign keys to ensure referential integrity.
A: I have to second most of the comments here, Foreign Keys are necessary items to ensure that you have data with integrity. The different options for ON DELETE and ON UPDATE will allow you to get around some of the "down falls" that people mention here regarding their use.
I find that in 99% of all my projects I will have FK's to enforce the integrity of the data, however, there are those rare occasions where I have clients that MUST keep their old data, regardless of how bad it is....but then I spend a lot of time writing code that goes in to only get the valid data anyway, so it becomes pointless.
A: How about maintainability and constancy across application life cycles? Most data has a longer lifespan than the applications that make use of it. Relationships and data integrity are much too important to leave to the hope that the next dev team gets it right in the app code. If you haven't worked on a db with dirty data that doesn't respect the natural relationships, you will. The importance of data integrity will then become very clear.
A: I also think that foreign keys are a necessity in most databases. The only drawback (besides the performance hit that comes with having enforced consistence) is that having a foreign key allows people to write code that assumes there is a functional foreign key. That should never be allowed.
For example, I've seen people write code that inserts into the referenced table and then attempts inserts into the referencing table without verifying the first insert was successful. If the foreign key is removed at a later time, that results in an inconsistent database.
You also don't have the option of assuming a specific behavior on update or delete. You still need to write your code to do what you want regardless of whether there is a foreign key present. If you assume deletes are cascaded when they are not, your deletes will fail. If you assume updates to the referenced columns are propagated to the referencing rows when they are not, your updates will fail. For the purposes of writing code, you might as well not have those features.
If those features are turned on, then your code will emulate them anyway and you'll lose a little performance.
So, the summary.... Foreign keys are essential if you need a consistent database. Foreign keys should never be assumed to be present or functional in code that you write.
A: I echo the answer by Dmitriy - very well put.
For those who are worried about the performance overhead FK's often bring, there's a way (in Oracle) you can get the query optimiser advantage of the FK constraint without the cost overhead of constraint validation during insert, delete or update. That is to create the FK constraint with the attributes RELY DISABLE NOVALIDATE. This means the query optimiser ASSUMES that the constraint has been enforced when building queries, without the database actually enforcing the constraint. You have to be very careful here to take the responsibility when you populate a table with an FK constraint like this to make absolutely sure you don't have data in your FK column(s) that violate the constraint, as if you do so you could get unreliable results from queries that involve the table this FK constraint is on.
I usually use this strategy on some tables in my data mart schema, but not in my integrated staging schema. I make sure the tables I am copying data from already have the same constraint enforced, or the ETL routine enforces the constraint.
A: Many of the people answering here get too hung up on the importance of referential integrity implemented via referential constraints. Working on large databases with referential integrity just does not perform well. Oracle seems particularly bad at cascading deletes. My rule of thumb is that applications should never update the database directly; updates should go via stored procedures. This keeps the code base inside the database, and means that the database maintains its integrity.
Where many applications may be accessing the database, problems do arise because of referential integrity constraints, but this comes down to control.
There is a wider issue too, in that application developers may have very different requirements that database developers may not necessarily be familiar with.
A: I have heard this argument too - from people who forgot to put an index on their foreign keys and then complained that certain operations were slow (because constraint checking could not take advantage of any index). So to sum up: there is no good reason not to use foreign keys. All modern databases support cascaded deletes, so...
A: One time when an FK might cause you a problem is when you have historical data that references the key (in a lookup table) even though you no longer want the key available.
Obviously the solution is to design things better up front, but I am thinking of real world situations here where you don't always have control of the full solution.
For example: perhaps you have a lookup table customer_type that lists different types of customers - let's say you need to remove a certain customer type, but (due to business restraints) aren't able to update the client software, and nobody envisaged this situation when developing the software. The fact that it is a foreign key in some other table may prevent you from removing the row even though you know the historical data that references it is irrelevant.
After being burnt with this a few times you probably lean away from db enforcement of relationships.
(I'm not saying this is good - just giving a reason why you may decide to avoid FKs and db contraints in general)
A: I'll echo what Dmitriy said, but adding on a point.
I worked on a batch billing system that needed to insert large sets of rows on 30+ tables. We weren't allowed to do a data pump (Oracle) so we had to do bulk inserts. Those tables had foreign keys on them, but we had already ensured that they were not breaking any relationships.
Before insert, we disable the foreign key constraints so that Oracle doesn't take forever doing the inserts. After the insert is successful, we re-enable the constraints.
PS: In a large database with many foreign keys and child row data for a single record, sometimes foreign keys can be bad, and you may want to disallow cascading deletes. For us in the billing system, it would take too long and be too taxing on the database if we did cascading deletes, so we just mark the record as bad with a field on the main driver (parent) table.
A: Like many things, it's a tradeoff. It's a question of where you want to do the work to verify the data integrity:
(1) use a foreign key (a single point to configure for a table, feature is already implemented, tested, proven to work)
(2) leave it to the users of the database (possibly multiple users/apps updating the same table(s), meaning more potential points of failure and increased complexity in testing).
Option (2) may be more efficient for the database, but (1) is easier to maintain and less risky.
A: One good principle of data structure design is to ensure that every attribute of a table or object be subject to a well-understood constraint. This is important because if you or your program can count on valid data in the database, you are less likely to have program defects caused by bad data. You also spend less time writing code to handle error conditions, and you are more likely to write error-handling code up front.
In many cases these constraints can be defined at compile-time, in which case you can write a filter to ensure that the attribute always falls within range, or the attempt to save the attribute fails.
However, in many cases these constraints can change at run-time. For example, you may have a "cars" table that has "colour" as an attribute which initially takes on the values, say, of "red", "green" and "blue". It is possible during the execution of the program to add valid colours to that initial list, and new "cars" added may take on any colour in the up-to-date list of colours. Furthermore, you usually want this updated list of colours to survive a program restart.
To answer your question, it turns out that if you have a requirement for data constraint that can change at run-time, and those changes must survive a program restart, foreign keys are the simplest and most concise solution to the problem. The development cost is the addition of one table (e.g. "colours", a foreign key constraint to the "cars" table, and an index), and the run-time cost is the extra table lookup for the up-to-date colours to validate the data, and this run-time cost is usually mitigated by indexing and caching.
If you don't use foreign keys for these requirements, you must write software to manage the list, look up valid entries, save it to disk, structure the data efficiently if the list is large, ensure that any updates to the list don't corrupt the list file, provide serial access to the list in case there are multiple readers and/or writers, and so on. i.e. You need to implement a lot of RDBMS functionality.
A: In a project I worked on there was often implicit rather than explicit relationships so that numerous tables could be joined on the same column.
Take the following table
Address
*
*AddressId (PK)
*EntityId
*EntityType
*City
*State
*Country
*Etc..
Possible values of EntityType may be Employee, Company, Customer, and the EntityId refers to the primary key of whichever table you were interested in.
I don't really think this is the best way to do things, but it worked for this project.
A: In DB2, if MQTs (Materialized Query Tables) are used, foreign key constraints are required for the optimizer to choose the right plan for any given query. Since they contain the cardinality information, the optimizer uses the metadata heavily to use a MQT or not.
A: Quite often we receive errors with FK constraints:
Cannot add or update a child row: a foreign key constraint fails
Suppose there are two tables, inventory_source and contract_lines, where contract_lines references inventory_source_id from inventory_source. If we want to delete a record from inventory_source while it is already referenced in contract_lines, or we want to drop the PK column from the base table, we get errors for the FK constraints. We can avoid them using the steps jotted down below.
CREATE TABLE inventory_source (
inventory_source_id int(11) NOT NULL AUTO_INCREMENT,
display_name varchar(40) NOT NULL,
state_id int(11) NOT NULL,
PRIMARY KEY (inventory_source_id),
KEY state_id (state_id),
CONSTRAINT ba_inventory_source_state_fk FOREIGN KEY (state_id) REFERENCES ba_state (state_id)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
CREATE TABLE contract_lines(
contract_line_id int(11) NOT NULL AUTO_INCREMENT,
inventory_source_id int(11) NULL ,
PRIMARY KEY (contract_line_id),
UNIQUE KEY contract_line_id (contract_line_id),
KEY AI_contract_line_id (contract_line_id),
KEY contract_lines_inventory_source_fk (inventory_source_id),
CONSTRAINT contract_lines_inventory_source_fk FOREIGN KEY (inventory_source_id) REFERENCES inventory_source (inventory_source_id)
) ENGINE=InnoDB AUTO_INCREMENT=135 DEFAULT CHARSET=utf8 ;
We can overcome it in the following ways:
*
*Deleting or updating a row in inventory_source can automatically delete or update the matching rows in the contract_lines table; this is known as a cascading delete or update.
*Another way is to set the referencing column, i.e. inventory_source_id in the contract_lines table, to NULL when the corresponding record is deleted in the inventory_source table.
*We can restrict the parent table for delete or update; in other words, reject the delete or update operation for the inventory_source table.
*An attempt to delete or update a primary key value will not be permitted to proceed if there is a related foreign key value in the referencing table.
A: I can see a few reasons to use foreign keys (Orphaned rows, as someone mentioned, are annoying) but I never use them either. With a relatively sane DB schema, I don't think they are 100% needed. Constraints are good, but enforcing them via software is a better method, I think.
Alex
A: I always thought it was lazy not to use them. I was taught it should always be done. But then, I didn't listen to Joel's discussion. He may have had a good reason, I don't know.
A: Wowowo...
Answers everywhere. Actually this is the most complicated topic I have ever encountered.
I use FKs when they are needed, but in a production environment I rarely use them.
Here are the reasons I rarely use FKs:
1. Most of the time I am dealing with huge data on a small server; to improve performance I need to remove the FKs, because when you have FKs and you do a create, update or delete, the RDBMS first checks that there is no constraint violation, and on a huge DB that can be fatal.
2. Sometimes I need to import data from other places, and because I am not too sure how well structured it is, I simply drop the FKs.
3. When you are dealing with multiple DBs, having a reference key in another DB will not work (as of now) until you remove the FKs (cross-database relations).
4. There is also the case where you write an application that will sit on whatever RDBMS, or you want your DB to be exportable to and importable from any RDBMS; each specific RDBMS has its own way of dealing with FKs, and you will probably be obliged to drop their use.
5. If you use ORM platforms, you know that some of them offer their own mapping depending on the solution and techniques they offer, and you don't care about creating the tables and their FKs.
6. Next to last is the knowledge needed to deal with a DB that has FKs, versus the knowledge to write an application that does the whole job without needing FKs.
7. Lastly, as I started by saying, it all depends on your scenario, provided knowledge is not a barrier. You will always want to run the best of the best you can get!
Thank you everybody!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "277"
}
|
Q: Reading PDF documents in .Net Is there an open source library that will help me with reading/parsing PDF documents in .NET/C#?
A: PDFClown might help, but I would not recommend it for a big or heavy use application.
A: iTextSharp is the best bet. Used it to make a spider for lucene.Net so that it could crawl PDF.
using System;
using System.IO;
using iTextSharp.text.pdf;
using System.Text.RegularExpressions;
namespace Spider.Utils
{
/// <summary>
/// Parses a PDF file and extracts the text from it.
/// </summary>
public class PDFParser
{
/// BT = Beginning of a text object operator
/// ET = End of a text object operator
/// Td move to the start of next line
/// 5 Ts = superscript
/// -5 Ts = subscript
#region Fields
#region _numberOfCharsToKeep
/// <summary>
/// The number of characters to keep, when extracting text.
/// </summary>
private static int _numberOfCharsToKeep = 15;
#endregion
#endregion
#region ExtractText
/// <summary>
/// Extracts a text from a PDF file.
/// </summary>
/// <param name="inFileName">the full path to the pdf file.</param>
/// <param name="outFileName">the output file name.</param>
/// <returns>the extracted text</returns>
public bool ExtractText(string inFileName, string outFileName)
{
StreamWriter outFile = null;
try
{
// Create a reader for the given PDF file
PdfReader reader = new PdfReader(inFileName);
//outFile = File.CreateText(outFileName);
outFile = new StreamWriter(outFileName, false, System.Text.Encoding.UTF8);
Console.Write("Processing: ");
int totalLen = 68;
float charUnit = ((float)totalLen) / (float)reader.NumberOfPages;
int totalWritten = 0;
float curUnit = 0;
for (int page = 1; page <= reader.NumberOfPages; page++)
{
outFile.Write(ExtractTextFromPDFBytes(reader.GetPageContent(page)) + " ");
// Write the progress.
if (charUnit >= 1.0f)
{
for (int i = 0; i < (int)charUnit; i++)
{
Console.Write("#");
totalWritten++;
}
}
else
{
curUnit += charUnit;
if (curUnit >= 1.0f)
{
for (int i = 0; i < (int)curUnit; i++)
{
Console.Write("#");
totalWritten++;
}
curUnit = 0;
}
}
}
if (totalWritten < totalLen)
{
for (int i = 0; i < (totalLen - totalWritten); i++)
{
Console.Write("#");
}
}
return true;
}
catch
{
return false;
}
finally
{
if (outFile != null) outFile.Close();
}
}
#endregion
#region ExtractTextFromPDFBytes
/// <summary>
/// This method processes an uncompressed Adobe (text) object
/// and extracts text.
/// </summary>
/// <param name="input">uncompressed</param>
/// <returns></returns>
public string ExtractTextFromPDFBytes(byte[] input)
{
if (input == null || input.Length == 0) return "";
try
{
string resultString = "";
// Flag showing if we are we currently inside a text object
bool inTextObject = false;
// Flag showing if the next character is literal
// e.g. '\\' to get a '\' character or '\(' to get '('
bool nextLiteral = false;
// () Bracket nesting level. Text appears inside ()
int bracketDepth = 0;
// Keep previous chars to get extract numbers etc.:
char[] previousCharacters = new char[_numberOfCharsToKeep];
for (int j = 0; j < _numberOfCharsToKeep; j++) previousCharacters[j] = ' ';
for (int i = 0; i < input.Length; i++)
{
char c = (char)input[i];
if (input[i] == 213)
c = "'".ToCharArray()[0];
if (inTextObject)
{
// Position the text
if (bracketDepth == 0)
{
if (CheckToken(new string[] { "TD", "Td" }, previousCharacters))
{
resultString += "\n\r";
}
else
{
if (CheckToken(new string[] { "'", "T*", "\"" }, previousCharacters))
{
resultString += "\n";
}
else
{
if (CheckToken(new string[] { "Tj" }, previousCharacters))
{
resultString += " ";
}
}
}
}
// End of a text object, also go to a new line.
if (bracketDepth == 0 &&
CheckToken(new string[] { "ET" }, previousCharacters))
{
inTextObject = false;
resultString += " ";
}
else
{
// Start outputting text
if ((c == '(') && (bracketDepth == 0) && (!nextLiteral))
{
bracketDepth = 1;
}
else
{
// Stop outputting text
if ((c == ')') && (bracketDepth == 1) && (!nextLiteral))
{
bracketDepth = 0;
}
else
{
// Just a normal text character:
if (bracketDepth == 1)
{
// Only print out next character no matter what.
// Do not interpret.
if (c == '\\' && !nextLiteral)
{
resultString += c.ToString();
nextLiteral = true;
}
else
{
if (((c >= ' ') && (c <= '~')) ||
((c >= 128) && (c < 255)))
{
resultString += c.ToString();
}
nextLiteral = false;
}
}
}
}
}
}
// Store the recent characters for
// when we have to go back for a checking
for (int j = 0; j < _numberOfCharsToKeep - 1; j++)
{
previousCharacters[j] = previousCharacters[j + 1];
}
previousCharacters[_numberOfCharsToKeep - 1] = c;
// Start of a text object
if (!inTextObject && CheckToken(new string[] { "BT" }, previousCharacters))
{
inTextObject = true;
}
}
return CleanupContent(resultString);
}
catch
{
return "";
}
}
private string CleanupContent(string text)
{
string[] patterns = { @"\\\(", @"\\\)", @"\\226", @"\\222", @"\\223", @"\\224", @"\\340", @"\\342", @"\\344", @"\\300", @"\\302", @"\\304", @"\\351", @"\\350", @"\\352", @"\\353", @"\\311", @"\\310", @"\\312", @"\\313", @"\\362", @"\\364", @"\\366", @"\\322", @"\\324", @"\\326", @"\\354", @"\\356", @"\\357", @"\\314", @"\\316", @"\\317", @"\\347", @"\\307", @"\\371", @"\\373", @"\\374", @"\\331", @"\\333", @"\\334", @"\\256", @"\\231", @"\\253", @"\\273", @"\\251", @"\\221"};
string[] replace = { "(", ")", "-", "'", "\"", "\"", "à", "â", "ä", "À", "Â", "Ä", "é", "è", "ê", "ë", "É", "È", "Ê", "Ë", "ò", "ô", "ö", "Ò", "Ô", "Ö", "ì", "î", "ï", "Ì", "Î", "Ï", "ç", "Ç", "ù", "û", "ü", "Ù", "Û", "Ü", "®", "™", "«", "»", "©", "'" };
for (int i = 0; i < patterns.Length; i++)
{
string regExPattern = patterns[i];
Regex regex = new Regex(regExPattern, RegexOptions.IgnoreCase);
text = regex.Replace(text, replace[i]);
}
return text;
}
#endregion
#region CheckToken
/// <summary>
/// Check if a certain 2 character token just came along (e.g. BT)
/// </summary>
/// <param name="tokens">the searched token</param>
/// <param name="recent">the recent character array</param>
/// <returns></returns>
private bool CheckToken(string[] tokens, char[] recent)
{
foreach (string token in tokens)
{
if ((recent[_numberOfCharsToKeep - 3] == token[0]) &&
(recent[_numberOfCharsToKeep - 2] == token[1]) &&
((recent[_numberOfCharsToKeep - 1] == ' ') ||
(recent[_numberOfCharsToKeep - 1] == 0x0d) ||
(recent[_numberOfCharsToKeep - 1] == 0x0a)) &&
((recent[_numberOfCharsToKeep - 4] == ' ') ||
(recent[_numberOfCharsToKeep - 4] == 0x0d) ||
(recent[_numberOfCharsToKeep - 4] == 0x0a))
)
{
return true;
}
}
return false;
}
#endregion
}
}
A: public string ReadPdfFile(object Filename, DataTable ReadLibray)
{
    // A single reader is enough for the whole document.
    PdfReader reader = new PdfReader((string)Filename);
    string strText = string.Empty;
    for (int page = 1; page <= reader.NumberOfPages; page++)
    {
        ITextExtractionStrategy its = new iTextSharp.text.pdf.parser.SimpleTextExtractionStrategy();
        string s = PdfTextExtractor.GetTextFromPage(reader, page, its);
        s = Encoding.UTF8.GetString(ASCIIEncoding.Convert(Encoding.Default, Encoding.UTF8, Encoding.Default.GetBytes(s)));
        strText = strText + s;
    }
    reader.Close();
    return strText;
}
A: iText is the best library I know. Originally written in Java, there is a .NET port as well.
See http://www.ujihara.jp/iTextdotNET/en/
A: itext?
http://www.itextpdf.com/terms-of-use/index.php
Guide
http://www.vogella.com/articles/JavaPDF/article.html
A: Since this question was last answered in 2008, iTextSharp has improved their api dramatically. If you download the latest version of their api from http://sourceforge.net/projects/itextsharp/, you can use the following snippet of code to extract all text from a pdf into a string.
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;
namespace PdfParser
{
    // Named differently from iTextSharp's PdfTextExtractor so the static call
    // below resolves to the iTextSharp class rather than to this wrapper.
    public static class PdfTextReader
    {
        public static string pdfText(string path)
        {
            PdfReader reader = new PdfReader(path);
            string text = string.Empty;
            for (int page = 1; page <= reader.NumberOfPages; page++)
            {
                text += PdfTextExtractor.GetTextFromPage(reader, page);
            }
            reader.Close();
            return text;
        }
    }
}
A: You could look into this:
http://www.codeproject.com/KB/showcase/pdfrasterizer.aspx
It's not completely free, but it looks very nice.
Alex
A: http://www.c-sharpcorner.com/UploadFile/psingh/PDFFileGenerator12062005235236PM/PDFFileGenerator.aspx is open source and may be a good starting point for you.
A: Aspose.PDF works pretty well. Then again, you have to pay for it.
A: Have a look at Docotic.Pdf library. It does not require you to make source code of your application open (like iTextSharp with viral AGPL 3 license, for example).
Docotic.Pdf can be used to read PDF files and extract text with or without formatting. Please have a look at the article that shows how to extract text from PDFs.
Disclaimer: I work for Bit Miracle, vendor of the library.
A: There is also LibHaru
http://libharu.org/wiki/Main_Page
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101"
}
|
Q: Entity Framework and Application Architecture (loose coupling, etc) I am considering to apply Entity Framework in a new project because I liked its OR/M-API as well as the storage/conceptual model mapping-capabilities (plus Linq of course and Entity SQL).
But how can loose coupling be achieved between the UI layer and the business layer if EF entities are used as data holders in both? If I leave the entities attached to their ObjectContext while they reside in the UI, the UI might bypass the business layer and connect straight to the database. If I detach the entities from their ObjectContext before passing them to the UI, there will be no change tracking, so I have to "replay" all changes in the business layer for them to be persisted to the database (difficult to achieve, esp. with parent-child relations). While I don't want the business layer to degrade to an "object-tree-persistence-engine", there are scenarios where having this ability would be helpful.
This certainly applies to other OR-mappers as well, but several alternative products seem to have somewhat better detaching/attaching mechanisms.
A: "Replaying" the changes is easier than you might think. Here's the general outline of what you need to do:
1. Store the "original" version of the entity instance before you detach it and hand it to the UI.
2. Let the UI do its thing.
3. When you want to persist changes made by the UI to the database, take the original version that you stored, and attach it to the EntityContext. Apply the changes from the modified version returned by the UI to this instance. Now SaveChanges. The Entity Framework will handle the three-way merge; a sketch follows below.
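In code, that outline might look roughly like this (a minimal sketch, assuming the EF 3.5 SP1 ObjectContext API; the "MyEntities" context, "Orders" entity set and variable names are made up for illustration):
using (var context = new MyEntities())
{
// Attach the untouched original that was stored before detaching
context.Attach(originalOrder);
// Overlay the scalar property values coming back from the UI
context.ApplyPropertyChanges("Orders", modifiedOrder);
// EF now computes the delta and persists it
context.SaveChanges();
}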
A: I'm not aware of any ORM that deals gracefully with n-tier solutions where you want to have platform independence. The EF works well when everything is happening within an ObjectContext; when you have an n-tier solution (physical separation, WCF/XML Web Service calls) you have to do some plumbing to get the objects behaving correctly.
You can achieve loose coupling by using a Repository pattern to separate out the API dependencies on the EF (http://blog.keithpatton.com/2008/05/29/Polymorphic+Repository+For+ADONet+Entity+Framework.aspx). However, if you are using your EF classes within the UI layer directly, you will have a dependency on certain types like EntityReference, EntityKey and EntityObject, unless you decide to delve into the world of getting EF to behave with pure C# classes (POCO), which seems more trouble than it's worth to me.
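For illustration, a minimal repository interface along those lines might look like this (a sketch only; the member names are assumptions, not the API from the linked post):
public interface IRepository<T> where T : class
{
T GetById(int id);
IList<T> GetAll();
void Add(T entity);
void Remove(T entity);
void SaveChanges();
}
The UI then depends only on IRepository<T>, and the EF-specific types stay behind the implementation.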
A: Daniel Simmons, of the ADO.NET team, provided an extension method "AttachAsModified" to attach an object that has been modified.
That's not as smart as replaying changes, but it does the job: I'm using it in a sample project.
A: Google "Entity framework" and "vote of no confidence" and see what you get.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: File and directory with same name in same parent directory - Solaris 8, ufs Ok, I have been working with Solaris for a 10+ years, and have never seen this...
I have a directory listing which includes both a file and subdirectory with the same name:
-rw-r--r-- 1 root other 15922214 Nov 29 2006 msheehan
drwxrwxrwx 12 msheehan sysadmin 2048 Mar 25 15:39 msheehan
I use file to discover contents of the file, and I get:
bash-2.03# file msheehan
msheehan: directory
bash-2.03# file msh*
msheehan: ascii text
msheehan: directory
I am not worried about the file, but I want to keep the directory, so I try rm:
bash-2.03# rm msheehan
rm: msheehan is a directory
So here is my two part question:
1. What's up with this?
2. How do I carefully delete the file?
Jonathan
Edit:
Thanks for the answers guys, both (so far) were helpful, but piping the listing to an editor did the trick, ala:
bash-2.03# ls -l > jb.txt
bash-2.03# vi jb.txt
Which contained:
-rw-r--r-- 1 root other 15922214 Nov 29 2006 msheehab^?n
drwxrwxrwx 12 msheehan sysadmin 2048 Mar 25 15:39 msheehan
Always be careful with the backspace key!
A: I would guess that these are in fact two different filenames that "look" the same, as the command file was able to distinguish them when the shell passed the expanded versions of the name in. Try piping ls into od or another hex/octal dump utility to see if they really have the same name, or if there are non-printing characters involved.
A: I'm wondering what could cause this. Aside from filesystem bugs, it could be caused by a non-ASCII character that got through somehow. In that case, use another language with easier string semantics to do the operation.
It would be interesting to see what would be the output of this ruby snippet:
ruby -e 'puts Dir["msheehan*"].inspect'
A: You can delete the file using its inode.
If you use the "-i" option of "ls" you can see each file's inode number:
$ ls -li
total 1
20801 -rw-r--r-- 1 root root 0 2010-11-08 01:55 a?
20802 -rw-r--r-- 1 root root 0 2010-11-08 01:55 a\?
$ find . -inum 20802 -exec rm {} \;
$ ls -li
total 1
20801 -rw-r--r-- 1 root root 0 2010-11-08 01:55 a?
I have an example (in Spanish) of how you can delete a file using its inode on Solaris:
http://sparcki.blogspot.com/2010/03/como-eliminar-archivos-utilizando-su.html
Urko,
A: And a quick answer to part 2 of my own question...
I would imagine I could rename the directory, delete the file, and rename the directory back to its original name again.
... I would still be interested to see what other people come up with.
JB
A: I suspect that one of them has a strange character in the name. You could try using the shell wildcard expansion to see that: type
cat msh*
and press the wildcard expansion key (in my shell it's Ctrl-X *). You should get two names listed, perhaps one of which has an escape character in it.
A: To see if there are special characters in your file, try the -b or -q options of ls,
assuming Solaris 8 has those options.
As another solution to deleting the file you can bring up the graphical file browser
(gasp!) and drag and drop the unwanted file to the trash.
Another solution might be to move the one file to a different name (the one without the unknown special character), then delete the special character directory name with wildcards.
mv msheehan temp
rm mshee*
mv temp msheehan
Of course, you want to be sure that only the file you want to delete matches the wildcard.
And, for your particular case, since one was a directory and the other a file, this command might have solved it all:
rmdir msheeha*
A: One quick-and-easy way to see non-printing characters and whitespace is to pipe the output through cat -vet, e.g.:
# ls -l | cat -vet
Nice and easy to remember!
A: For part 2, since one name contains two extra characters, you can use:
mv msheehan abc
mv msheeha??n xyz
Once you've done that, you've got sane file names again, that you can fix up as you need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Which heap size do you prefer? I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
A: You have to try your application and see how it performs. For example, I used to always run IDEA out of the box, until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 gig. This got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that pointed me at garbage collection. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smoothly, doesn't freeze anymore and never uses more than 600m of heap.
For your application you have to use a similar approach. try to determine the typical memory usage and tune your heap for the application to run well in those conditions. But also let advanced users tune the setting, to address out of the ordinary data loads.
A: It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but its also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
A: Actually I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects, unless there is a necessity to do so. That means if you choose the heap too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app, I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1-2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this is referring to normal PCs. If you create Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap size to the limitations of that device.
A: Typically I try not to use heaps larger than 1 GB.
It will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than using large heap sizes.
A major collection with a large heap size can take >10 minutes (on applications with an unoptimized GC).
A: This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
A: You need to spend quite some time in JConsole or visualvm to get a clear picture on what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be your 70-80% heap, depending on what garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage. This percentage is from 60% to 80% of max heap, depending on what strategy is involved.
A: 1.3Gb for a heavy GUI application.
Unfortunately on Linux the JVM seems to pre-request 1.3G of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users)
A: On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to insert XmlCursor content into a DOM Document An API returns me an XmlCursor pointing at the root of an XML document. I need to insert all of this into another document represented as an org.w3c.dom Document.
At start:
XmlCursor pointing at
<a>
<b>
some text
</b>
</a>
DOM Document:
<foo>
</foo>
At the end I want to have original DOM document changed like this:
<foo>
<someOtherInsertedElement>
<a>
<b>
some text
</b>
</a>
</someOtherInsertedElement>
</foo>
NOTE: document.importNode(cursor.getDomNode()) doesn't work - Exception is thrown: NOT_SUPPORTED_ERR: The implementation does not support the requested type of object or operation.
A: Try something like this:
Node originalNode = cursor.getDomNode();
Node importNode = document.importNode(originalNode.getFirstChild(), true); // deep import
Node otherNode = document.createElement("someOtherInsertedElement");
otherNode.appendChild(importNode);
document.getDocumentElement().appendChild(otherNode); // append under the root (<foo>), not the Document node itself
So in other words:
1. Get the DOM Node from the cursor. In this case, it's a DOMDocument, so do getFirstChild() to get the root node.
2. Import it into the DOMDocument.
3. Do other stuff with the DOMDocument.
4. Append the imported node to the right Node.
The reason to import is that a node always "belongs" to a given DOMDocument. Just adding the original node would cause exceptions.
A: I was having the same issue.
This was failing:
Node importNode = document.importNode(originalNode, true);
This fixed the problem:
Node importNode = document.importNode(originalNode.getFirstChild(), true);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Embedding Live Video from an IP WebCam We are using a Sony SNC-RZ30N IP-based webcam to monitor osprey nests and would like to stream the video feed via our own webserver.
Rather than use the built-in webserver of the camera (which requires either ActiveX or Java on the client side) to display the live feed, I would like to weed out just the live feed and display it on our campus webserver (Win2k8/IIS7). Perhaps in an iFrame or the like.
Unfortunately, documentation for anything other than FTP'ing a static image snapshot from this camera seems to be pretty much non-existent.
There are other "video surveillance" packages (ie: ProSight SMB) that will feed up a web page with the live feed on their own built-in webservers (along with controls to position the camera, which we don't want displayed) - but that is undesireable.
I simply want to capture the live stream from the camera and embed it in a page on our website so that we can control how the page looks as well as other relevant hyperlinks.
Thx.
A: I don't have enough reputation to comment, so a new answer...
In that case I think there are only two options: run a service that converts the existing video feed to a more usable feed (for example to WMV, which is accepted by most clients), or create some kind of 'applet' (like in Flash) that updates the image every second or so?
WebcamXP seems to support your camera (http://www.webcamxp.com/ipcams.aspx), so maybe that's an option?
A: Some IP-based webcams (I know the Axis cameras do) also offer a static JPG or a motion JPEG over HTTP,
like http://<ip>/img/static.jpg.
Maybe that helps you?
A: You could use the built-in FTP features of the cam to upload the picture to your webserver,
and write some simple PHP code to read the uploaded image, display it on an HTML page, and update it at a rate equal to the output frame rate of the camera. Most cameras output to picture formats rather than video.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to set up the browser scrollbar to scroll part of a page? I've seen this done on a few sites; an example is artofadambetts.com. The scroll bar on the page scrolls only an element of the page, not the entire page. I looked at the source and haven't been able to figure it out yet. How is this done?
A: In fact it is not the scrolling part that is "doing the job", it is the fixed part of the page.
In order to do this, you should use CSS and add the position: fixed; property (use it with the top, bottom, left and/or right properties) to the elements that you wish not to scroll.
And you should not forget to give them a greater z-index; if you don't, parts of the scrolling element may go over your fixed elements as you scroll (and you certainly don't want that).
A: That's pretty nifty. He uses "position:fixed" on most of the divs, and the one that scrolls is the one that doesn't have it.
A: To find out how people do these kinds of things in CSS and/or Javascript the tool Firebug is just outstanding:
Firebug addon for Firefox
A: It should be noted that without further hacks position fixed does not work for IE6, which is still managing to hold on to 15-30% of the market, depending on your site.
A: You can use fixed positioning or absolute positioning to tie various elements to fixed positions on the page. Alternatively you can specify a fixed size element (such as a DIV) and use overflow: scroll to force the scrollbars on that.
As already mentioned, getting everything to work in Internet Explorer AND Firefox/Opera/Safari requires judicious use of hacks.
A: For a div, you can add in the CSS:
overflow: auto
For example,
<div style="overflow:auto; height: 500px">Some really long text</div>
Edit: After looking at the site you posted, you probably don't want this. What he does on his website is make the layout fixed (position: fixed) and assign it a higher z-index than the text, which has a lower z-index.
For example:
<div class="highz"> //Put random stuff here. it'll be fixed </div>
<div class="lowz"> Put stuff here you want to scroll and position it.</div>
with css file
div.highz {position: fixed; z-index: 2;}
div.lowz {position: fixed; z-index: 1;}
A: This can be done in CSS using the "position:absolute;" clause
Here is an example template:
http://www.demusdesign.com/bipolar/index.html
From http://www.demusdesign.com/
A: The browser is scrolling the page, its just that part of it is fixed in position.
This is done by using the "position: fixed" CSS property on the part that you wish not to scroll.
A: They've set the side and top elements to have fixed positions via CSS (see line 94 of their style.css file). This holds them in the viewport while the rest scrolls.
A: Try this for scrolling a particular part of a web page:
<html>
<head>
<title>Separately Scrolled Area Demo</title>
</head>
<body>
<div style="width: 100px; border-style: solid">
<div style="overflow: auto; width: 100px; height: 100px">
sumit..................
amit...................
mrinal.................
nitesh................
maneesh................
raghav...................
hitesh...................
deshpande................
sidarth....................
mayank.....................
santanu....................
sahil......................
malhan.....................
rajib.....................
</div>
</div>
</body>
</html>
A: To put scroll bars on an element such as a div:
<div style="overflow-x: auto; overflow-y: auto;">the content</div>
If you only want a horizontal or vertical scroll bar, only use whichever of overflow-x and overflow-y you need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Is there a serializable generic Key/Value pair class in .NET? I'm looking for a key/value pair object that I can include in a web service.
I tried using .NET's System.Collections.Generic.KeyValuePair<> class, but it does not properly serialize in a web service. In a web service, the Key and Value properties are not serialized, making this class useless, unless someone knows a way to fix this.
Is there any other generic class that can be used for this situation?
I'd use .NET's System.Web.UI.Pair class, but it uses Object for its types. It would be nice to use a Generic class, if only for type safety.
A: Just define a struct/class.
[Serializable]
public struct KeyValuePair<K,V>
{
public K Key {get;set;}
public V Value {get;set;}
}
A: [Serializable]
public class SerializableKeyValuePair<TKey, TValue>
{
public SerializableKeyValuePair()
{
}
public SerializableKeyValuePair(TKey key, TValue value)
{
Key = key;
Value = value;
}
public TKey Key { get; set; }
public TValue Value { get; set; }
}
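A hedged usage sketch for the class above (requires System.IO and System.Xml.Serialization; the pair values are arbitrary):
var pair = new SerializableKeyValuePair<string, int>("answer", 42);
var serializer = new XmlSerializer(typeof(SerializableKeyValuePair<string, int>));
using (var writer = new StringWriter())
{
serializer.Serialize(writer, pair);
Console.WriteLine(writer.ToString()); // the Key and Value elements are now emitted
}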
A: I don't think there is, as Dictionary<> itself isn't XML serializable. When I needed to send a dictionary object via a web service, I ended up wrapping the Dictionary<> object myself and adding support for IXmlSerializable.
/// <summary>
/// Represents an XML serializable collection of keys and values.
/// </summary>
/// <typeparam name="TKey">The type of the keys in the dictionary.</typeparam>
/// <typeparam name="TValue">The type of the values in the dictionary.</typeparam>
[XmlRoot("dictionary")]
public class SerializableDictionary<TKey, TValue> : Dictionary<TKey, TValue>, IXmlSerializable
{
#region Constants
/// <summary>
/// The default XML tag name for an item.
/// </summary>
private const string DEFAULT_ITEM_TAG = "Item";
/// <summary>
/// The default XML tag name for a key.
/// </summary>
private const string DEFAULT_KEY_TAG = "Key";
/// <summary>
/// The default XML tag name for a value.
/// </summary>
private const string DEFAULT_VALUE_TAG = "Value";
#endregion
#region Protected Properties
/// <summary>
/// Gets the XML tag name for an item.
/// </summary>
protected virtual string ItemTagName
{
get
{
return DEFAULT_ITEM_TAG;
}
}
/// <summary>
/// Gets the XML tag name for a key.
/// </summary>
protected virtual string KeyTagName
{
get
{
return DEFAULT_KEY_TAG;
}
}
/// <summary>
/// Gets the XML tag name for a value.
/// </summary>
protected virtual string ValueTagName
{
get
{
return DEFAULT_VALUE_TAG;
}
}
#endregion
#region Public Methods
/// <summary>
/// Gets the XML schema for the XML serialization.
/// </summary>
/// <returns>An XML schema for the serialized object.</returns>
public XmlSchema GetSchema()
{
return null;
}
/// <summary>
/// Deserializes the object from XML.
/// </summary>
/// <param name="reader">The XML representation of the object.</param>
public void ReadXml(XmlReader reader)
{
XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));
bool wasEmpty = reader.IsEmptyElement;
reader.Read();
if (wasEmpty)
{
return;
}
while (reader.NodeType != XmlNodeType.EndElement)
{
reader.ReadStartElement(ItemTagName);
reader.ReadStartElement(KeyTagName);
TKey key = (TKey)keySerializer.Deserialize(reader);
reader.ReadEndElement();
reader.ReadStartElement(ValueTagName);
TValue value = (TValue)valueSerializer.Deserialize(reader);
reader.ReadEndElement();
this.Add(key, value);
reader.ReadEndElement();
reader.MoveToContent();
}
reader.ReadEndElement();
}
/// <summary>
/// Serializes this instance to XML.
/// </summary>
/// <param name="writer">The writer to serialize to.</param>
public void WriteXml(XmlWriter writer)
{
XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));
foreach (TKey key in this.Keys)
{
writer.WriteStartElement(ItemTagName);
writer.WriteStartElement(KeyTagName);
keySerializer.Serialize(writer, key);
writer.WriteEndElement();
writer.WriteStartElement(ValueTagName);
TValue value = this[key];
valueSerializer.Serialize(writer, value);
writer.WriteEndElement();
writer.WriteEndElement();
}
}
#endregion
}
A: In the 4.0 Framework, there is also the addition of the Tuple family of classes that are serializable and equatable. You can use Tuple.Create(a, b) or new Tuple<T1, T2>(a, b).
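A quick usage example of the Tuple API mentioned above:
var pair = Tuple.Create("aaa", 42); // inferred as Tuple<string, int>
Console.WriteLine(pair.Item1 + " = " + pair.Item2);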
A: You will find the reason why KeyValuePairs cannot be serialised in this MSDN blog post.
The struct answer is the simplest solution, however not the only solution. A "better" solution is to write a custom KeyValuePair class which is Serializable.
A: A KeyedCollection is a type of dictionary that can be directly serialized to XML without any nonsense. The only issue is that you have to access values by: coll["key"].Value;
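A minimal sketch of that approach, reusing the SerializableKeyValuePair class from the earlier answer (the collection name and key choice are illustrative assumptions):
using System.Collections.ObjectModel;
[Serializable]
public class PairCollection : KeyedCollection<string, SerializableKeyValuePair<string, string>>
{
// Tell the collection which part of each item acts as the key
protected override string GetKeyForItem(SerializableKeyValuePair<string, string> item)
{
return item.Key;
}
}
Usage: var pairs = new PairCollection(); pairs.Add(new SerializableKeyValuePair<string, string>("a", "b")); string v = pairs["a"].Value;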
A: XmlSerializer doesn't work with Dictionaries. Oh, and it has problems with KeyValuePairs too
http://www.codeproject.com/Tips/314447/XmlSerializer-doesnt-work-with-Dictionaries-Oh-and
A: Use the DataContractSerializer since it can handle the Key Value Pair.
public static string GetXMLStringFromDataContract(object contractEntity)
{
using (System.IO.MemoryStream writer = new System.IO.MemoryStream())
{
var dataContractSerializer = new DataContractSerializer(contractEntity.GetType());
dataContractSerializer.WriteObject(writer, contractEntity);
writer.Position = 0;
var streamReader = new System.IO.StreamReader(writer);
return streamReader.ReadToEnd();
}
}
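A quick usage example for the helper above (a sketch; DataContractSerializer handles KeyValuePair natively):
string xml = GetXMLStringFromDataContract(new KeyValuePair<string, int>("answer", 42));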
A: DataTable is my favorite collection for (solely) wrapping data to be serialized to JSON, since it's easy to expand without the need for an extra struct and acts like a serializable replacement for Tuple<>[].
Maybe not the cleanest way, but I prefer to include and use it directly in the classes (which shall be serialized), instead of declaring a new struct:
class AnyClassToBeSerialized
{
public DataTable KeyValuePairs { get; }
public AnyClassToBeSerialized()
{
KeyValuePairs = new DataTable();
KeyValuePairs.Columns.Add("Key", typeof(string));
KeyValuePairs.Columns.Add("Value", typeof(string));
}
public void AddEntry(string key, string value)
{
DataRow row = KeyValuePairs.NewRow();
row["Key"] = key; // "Key" & "Value" used only for example
row["Value"] = value;
KeyValuePairs.Rows.Add(row);
}
}
A: You can use Tuple<string,object>
see this for more details on Tuple usage : Working with Tuple in C# 4.0
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
}
|
Q: mysqldump | mysql yields 'too many open files' error. Why? I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (mysqldump into a file is a no-go. We're talking 500G of database).
This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, and dies and respawns).
I tried upping the limits via sysctl and ulimit, but the problem persists. What do I do about it?
A: mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:
[3 Feb 2007 22:00] Sergei Golubchik
This is not really a bug.
mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to
be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for really big
number of tables will inevitably exhaust all available file descriptors, as LOCK needs all
tables to be opened.
Workarounds: --skip-lock-tables will disable such a locking completely. Alternatively,
--lock-all-tables will make mysqldump to use FLUSH TABLES WITH READ LOCK which locks all
tables in all databases (without opening them). In this case mysqldump will automatically
disable --lock-tables because it makes no sense when --lock-all-tables is used.
Edit: Please check Dave's workaround for InnoDB in the comment below.
A: mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysqld process.
Try --skip-lock-tables, or, if locking is imperative, --lock-all-tables.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
--lock-all-tables, -x
Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
A: If your database is that large you've got a few issues.
1. You have to lock the tables to dump the data.
2. mysqldump will take a very, very long time, and your tables will need to be locked during this time.
3. Importing the data on the new server will also take a long time.
Since your database is going to be essentially unusable while #1 and #2 are happening I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing because you don't have the added IO and CPU of generating indexes.
In production environments on Linux many people put Mysql data on an LVM partition. Then they stop the database, do an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
A: I just restarted the MySQL server and then I could use the mysqldump command flawlessly.
Thought this might be a helpful tip here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How can I get the unique values of an array in .net? Say I've got this array:
MyArray(0)="aaa"
MyArray(1)="bbb"
MyArray(2)="aaa"
Is there a .net function which can give me the unique values? I would like something like this as an output of the function:
OutputArray(0)="aaa"
OutputArray(1)="bbb"
A: Assuming you have .Net 3.5/LINQ:
string[] OutputArray = MyArray.Distinct().ToArray();
A: A solution could be to use LINQ as in the following example:
int[] test = { 1, 2, 1, 3, 3, 4, 5 };
var res = (from t in test select t).Distinct<int>();
foreach (var i in res)
{
Console.WriteLine(i);
}
That would print the expected:
1
2
3
4
5
A: You could use a dictionary to add them with a key, and when you add them check if the key already exists.
string[] myarray = new string[] { "aaa", "bbb", "aaa" };
Dictionary<string, string> mydict = new Dictionary<string, string>();
foreach (string s in myarray) {
if (!mydict.ContainsKey(s)) mydict.Add(s, s);
}
A: Use the HashSet class included in .NET 3.5.
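For example (a quick sketch; ToArray needs a using System.Linq directive):
string[] myArray = { "aaa", "bbb", "aaa" };
string[] outputArray = new HashSet<string>(myArray).ToArray(); // { "aaa", "bbb" }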
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Codeplex/Sourceforge for internal use I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace that would act similar to Codeplex or Sourceforge. Does anyone know of something like this, and if so do you have experience with it.
Requirements:
* Open Source or Free
* Locally Deployable
* Has the same types of features found in SourceForge/CodePlex:
  * Issue/Feature Tracking
  * Community Interaction (i.e. Voting, Roles, etc.)
  * SCM Integration (Optional)
  * .NET/Windows Friendly (Optional)
Every business ends up having internal utilities, and domain specific apps that developers create to make life easier. Given the input of the internal developer community they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place.
UPDATE:
So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are a MS shop from top to bottom. How practical do you think it is going to be to try and use these in a MS .NET environment? Would that be like trying to shove a square peg through a round hole?
A: I like Redmine for this: http://www.redmine.org. The only thing it's missing from your criteria is voting, but there might even be a plugin for this.
Trac is also popular (http://trac.edgewall.org) but it lacks suport for aggregation of data across projects.
A: Try GForge, it's a SourceForge fork and has most of its features.
A: I agree, Trac should work. IMHO setting up Subversion should be relatively easy on Windows too; there are great Windows clients for it (TortoiseSVN), and Trac runs on Python, so it will work on Windows too.
A: Other advantages of Sourceforge Enterprise are these plugins. There are extra plugins for Visual Studio which can be found here and here.
A: SourceForge Enterprise Edition 4.4 is available for free for up to 15 users. We use it for our development team and another development team where I work.
It's been working great for us. It has subversion and cvs built in (whichever you wish to use). If you plan on accessing it over the internet you might want to enable HTTPS. I had to do a little finagling to get HTTPS to work correctly (finding the right CentOS packages to install). If you wanted to use this solution with HTTPS I wouldn't mind if you sent me a message asking for help.
It comes with a VM for VMWare Player:
http://www.collab.net/downloads/sfee/index4.4.html
A: Launchpad has support for Code Hosting and version control, Bug tracking, Blueprints, Answers, Polls, Translations, etc.
Launchpad is used by the Ubuntu Project.
A few weeks ago, Launchpad was released as open source.
A: I was just wondering the same thing - something like Trac but in .NET. After a quick Google search (I have never tried these tools) I found:
SharpForge (this no longer looks free!)
I like how the site .netTiers looks.
They use ScrewTurn Wiki.
It is totally free if you comply with all the GPLv2 terms.
A: Assembla and BeanStalk are nice; both have things like: wiki, discussion, alerts, chat, ticketing, Trac, Git and Subversion.
A: What about Trac? It's pretty simple, but does its job for a lot of open source projects.
A: I would concur on the Trac suggestion. I use it both for an open source project and for an internal project. It has decent issue tracking and integration with Subversion which allows links between tickets and subversion checkins. It also has an integrated wiki, which can be of some use for documentation. Although we do not use it for voting / community type features, I know there's a number of addons to it that might serve this purpose.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Best way to implement a 3-column website using tags? I'm developing a 3-column website using a layout like this:
<div id='left' style='left: 0; width: 150px; '> ... </div>
<div id='middle' style='left: 150px; right: 200px' > ... </div>
<div id='right' style='right: 0; width: 200px; '> ... </div>
But, considering the default CSS 'position' property of <DIV>'s is 'static', my <DIV>'s were shown one below the other, as expected.
So I set the CSS property 'position' to 'relative', and changed the 'top' property of the 'middle' and 'right' <DIV>'s to -(minus) the height of the preceding <DIV>. It worked fine, but this approach brought me two problems:
1) Even though Internet Explorer 7 shows three columns properly, it still keeps the vertical scrollbar as if the <DIV>'s were positioned one below the other, and there is a lot of white space after the content is over. I wouldn't like to have that.
2) The height of these elements is variable, so I don't really know which value to set for each <DIV>'s 'top' property; and I wouldn't like to hardcode it.
So my question is, what would be the best (simple + elegant) way to implement this layout? I would like to avoid absolute positioning , and I also to keep my design tableless.
A: If you haven't already checked out A List Apart you should, as it contains some excellent tutorials and guidelines for website design.
This article in particular should help you out.
A: Give BluePrint CSS a try. It is really simple to get started with, yet powerful enough for most applications.
Easy to understand tutorials and examples. Has a typography library that produces decent results straight out of the box.
A: By far the easiest way that I have found to do 3 columns (or any other number of columns split over the available space in weird ways) is YUI Grids. There is a YUI Grids Builder to give you the basic layout. The following will give you a 750px wide basic 3 column layout (split 1/3 1/3 1/3) with a 160px left sidebar. Changing it to to other widths, sidebar configs and column splits is really easy.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>YUI Base Page</title>
<link rel="stylesheet" href="http://yui.yahooapis.com/2.5.1/build/reset-fonts-grids/reset-fonts-grids.css" type="text/css">
</head>
<body>
<div id="doc" class="yui-t1">
<div id="hd"><h1>YUI: CSS Grid Builder</h1></div>
<div id="bd">
<div id="yui-main">
<div class="yui-b"> <div class="yui-gb">
<div class="yui-u first">
<!-- YOUR DATA GOES HERE -->
</div>
<div class="yui-u">
<!-- YOUR DATA GOES HERE -->
</div>
<div class="yui-u">
<!-- YOUR DATA GOES HERE -->
</div>
</div>
</div>
</div>
<div class="yui-b"><!-- YOUR NAVIGATION GOES HERE --></div>
</div>
<div id="ft">Footer is here.</div>
</div>
</body>
</html>
A: There are a number of examples and libraries out there you can search on - a couple already listed (A List Apart is a must read).
I've used the Yahoo User Interface Library (YUI) on my last couple of sites and really like it. Here is their CSS for Grids, which allows you to format your page into as many columns and sections as you want.
YUI is nice because you don't have to reinvent the wheel for the foundation of your site, and they do all the work of making sure their foundations work across all browsers. And best of all, it's free.
A: I like the 960 Grid System. It's a lightweight, easy-to-use CSS framework which divides the screen into 12 (or 16) columns. You can use it for a 3-column design and align the rest of your content accordingly.
A: Try floating the div's to the left, that will keep them all on the same line - assuming there is enough spacing.
A: For fixed columns, just setting height: xxxpx will make them equal.
Use this 3-column layout generator to try it.
A: This code works on my computer with IE 8, Chrome and Firefox.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
<title> Test </title>
</head>
<body>
<div id="grad2" style="width:15%; height:100%; position:fixed; top:0px; left:0px; background-color:rgb(147,81,73);">
<a href="http://abv.bg"> Column1 </a> </div>
<div id="grad4" style="width:70%; height:100%; position:fixed; top:0px; left:15%; background-color:rgb(0,0,0);">
<font color="#FFFFFF">Column 2 </font> </div>
<div id="grad3" style="width:100%; height:100%; position:fixed; top:0px; left:85%; background-color:rgb(60,255,4);">
<a href="http://abv.bg"> Column 3 </a> </div>
</body>
</html>
A: Firstly, relative positioning does what you've described: it reserves space in the original location but displays the DIV offset by some amount.
If you float the DIVs then they will stack left-to-right, but this can cause problems.
A three-column layout using CSS is quite hard. Have a look at http://www.glish.com/css/7.asp
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: In what situations would you get different users seeing different rows in a table on SQL Server? SQL Server Version 2000.
We've a bunch of desktops talking to MSSQL Server. When looking for a specific record, some desktops return the correct data, but some do not.
The SQL Command is "SELECT * FROM PODORDH WHERE ([NO]=6141)"
On one or two desktops, this returns a record. On the server and on all other desktops, no record is returned.
What areas do I need to look at? What would cause this to happen?
A: This error probably comes from a user who deleted/inserted that record within a transaction but did not yet commit said transaction.
A: Check which database and server you are connecting to on each machine - the query is simple enough that you must get the same answer everywhere UNLESS you are connecting to different databases or servers.
A: If it is just ONE workstation returning the row then it sounds like that workstation has an open transaction that has not been committed.
Otherwise, is it possible that the isolation levels are different for different workstations, i.e. some will see uncommitted data and others won't?
A: You may want to look at the permissions for the table you are selecting from, if you are connecting to the server as a different user from each machine.
If some users but not others have access to read that table, you may get the result you describe.
A: After you exhaust all the options mentioned above, I would look into row and table locks. If this is the case it should return an error saying it encountered a lock. Are you running an application that could be swallowing errors?
A: Perhaps one or two users who find records are using a different schema name and thus a different table. I.e., most users are using dbo.PODORDH, but one or two users are using otheruser.PODORDH.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SVN revision in Microsoft Office I have some code documentation in MS PowerPoint 2003 that I'm revision-controlling in an SVN repository. I'd like to auto-insert the latest revision number into this document whenever I open it. I am using TortoiseSVN. I've been able to Google up a macro or two that might work, but wanted advice from experts. :) Thanks!
A: If you'd like to use "keyword expansion" with binary files (e.g. .doc), you have to use the following format:
$keyword::______________________$ (underscore = space)
The :: ensures that the number of characters is always the same; otherwise you'll corrupt the binary Word file.
But this only works up to Word 2003.
This will not work for Word 2007, because a .docx file is a ZIP file containing Word data (e.g. XML), and it does not make sense to embed something directly into the ZIP content.
Maybe in the future there will be an SVN contribution which does the trick for .docx files as well ;-)
A: I think it should be possible to use the $Rev$ keyword inside it with the Office 2007 XML formats, but I am not too sure what will happen with older formats that might contain binary data. You might need to tweak svn settings a bit so it sees .ppt files as text and not binary for this to work; I am not sure what the default behavior is. See svn:mime-type for this: http://svnbook.red-bean.com/en/1.2/svn.advanced.props.html
Read this for detailed infos on $Rev$ replacement: http://svnbook.red-bean.com/en/1.4/svn.advanced.props.special.keywords.html
A: I believe SVN won't touch binary files, and chances are that if you embedded a $REV$ string in one, something would break. I know nothing about Office macros either, but it would probably be preferable to:
1. Have a text file with that revision string.
2. Have an Office macro copy the previous revision string (sans $REV$ for safety's sake) into the Office file prior to saving.
3. Have the same Office macro inject random garbage into text file #1 every time the file is saved.
It's better IMO to write the rev string into the document on save, because that will eliminate the need for extra commits just for the sake of putting a revision string in the file, and it will also reduce the dependency on that revision file in the event you share this PPT around without the aforementioned text file present.
A: A little late, but I consider the original question still relevant today.
Did you ever try SvnProperties4MSOffice? The second version (V2) seemed to be designed exactly as an alternative to keyword replacement for MS Office 2007 and above. It uses TortoiseSVN in the background: the SubWCRev tool (packaged with TortoiseSVN) is accessed through its COM interface by means of a VB macro.
I know that the word "macro" can be frightening by itself, but wait: this one is open source and quite simple. This makes it easily editable and thus maintainable.
If you're interested in going further with the topic "MS Office & SVN", I invite you to read one of my blog posts. You would have a chance to:
* Install an SVN menu right inside MS Word, MS Excel and MS PowerPoint
* Diff two MS Office documents
Disclaimer: I'm the owner of the blog.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What's the easiest way to use C source code in a Java application? I found this open-source library that I want to use in my Java application. The library is written in C and was developed under Unix/Linux, and my application will run on Windows. It's a library of mostly mathematical functions, so as far as I can tell it doesn't use anything that's platform-dependent, it's just very basic C code. Also, it's not that big, less than 5,000 lines.
What's the easiest way to use the library in my application? I know there's JNI, but that involves finding a compiler to compile the library under Windows, getting up-to-date with the JNI framework, writing the code, etc. Doable, but not that easy. Is there an easier way? Considering the small size of the library, I'm tempted to just translate it to Java. Are there any tools that can help with that?
EDIT
I ended up translating the part of the library that I needed to Java. It's about 10% of the library so far, though it'll probably increase with time. C and Java are pretty similar, so it only took a few hours. The main difficulty is fixing the bugs that get introduced by mistakes in the translation.
Thank you everyone for your help. The proposed solutions all seemed interesting and I'll look into them when I need to link to larger libraries. For a small piece of C code, manual translation was the simplest solution.
A: Your best bet is probably to grab a good C book (K&R: The C Programming Language), a cup of tea, and start translating! I would be skeptical about trusting a translation program; more often than not the best translator is yourself! If you do this once, then it's done and you don't need to keep re-doing it. There might be some complications if the library is open source: you'll need to check the licence carefully about this. Another point to consider is that there is always going to be some element of risk and potential error in the translation, therefore it might be necessary to consider writing some tests to ensure that the translation is correct.
Are there no Java equivalent math functions?
As you yourself comment, the JNI way is possible; as for a C compiler you could probably use Bloodshed Dev-C++, but it is a lot of effort for ~5,000 lines.
A: I'd compile it and use JNA.
JNA (Java Native Access) basically does at runtime what JNI does at compile time, and doesn't need any non-Java code (not much Java either).
I don't know about its performance or usability in your case, but I'd give it a try.
A: On the Java GNU Scientific Library project I used Swig to generate the JNI wrapper classes around the C libraries. Great tool, and can also generate wrapper code in several languages including Python. Highly recommended.
A: Are you sure you want to use the C library, even if it is that small?
Once 64 bit gets a little more common, you'll need to start building/deploying both 32 bit and 64 bit versions of the library as well. And depending on what the C code is like, you may or may not need to update the code to make it build as 64 bit.
If the C library is simple, it may be easier to just port the C library to pure java and not have to deal with building/deploying a JNI library, the C library and the java code.
A: Indeed, JNA looks impressive, it requires less effort than directly using JNI. But in any case you'd lose the platform independence, and since you're probably only using a small part of it, you might consider translating what you actually need.
A: Well, there is AMPC. It is a C compiler for Windows, MacOS X and Linux, that can compile C code into Java Byte Code (the kind of code, that runs on a Java virtual machine).
AMPC
However, it is commercial and costs $199 per license. I doubt that pays off for you ;) I don't know of any free compiler like that.
OTOH, Java and C are pretty similar. You could probably refactor the C Code to Java (structs can be replaced with objects with public instance variables) and pointer operations can usually be translated to something else (array operations for example). Though I guess you don't want to go through 5,000 lines of code, do you?
Using JNI makes the code platform dependent, however if you say it is platform independent C, there is no reason why your Java code should be platform dependent. OTOH, depending on how costly these calculations are, using JNI might actually buy you a performance gain, as when it comes to raw number crunching throughput, C can still beat Java in speed. However JNI calls are very costly, so if the calculation is just a very simple, quick calculation, the JNI call itself might take equally long (or even longer) than the calculation performed, in which case using JNI will buy you nothing, but slowing down your app and causing memory overhead.
A: Have you tried using:
System.loadLibrary("mylibrary"); // note: no ".dll" extension - loadLibrary maps the library name per platform
Not sure if this will work with a pure C library but it's probably worth a shot. :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How much time do you spend in Reflector? (.NET) As a consultant I get to toy around with many different products and APIs as the customer demands we use X and Y. I think it is great fun and I learn a lot from it.
What will make a great developer over time is, in my opinion, the will to understand and learn new things. Therefore, I will always try to understand what happens "behind the scenes" when I am using 3rd party products.
I spend around 10-15% of my time in Reflector to learn what the heck I'm really doing when I call method X.
How much time do you spend on average? This may also apply to reading (open) source code, documentation etc.
A: For me it depends. When I'm learning a new technology stack or API I'll typically break out Reflector, and my usage of it goes up.
For instance, I recently started working with the Commerce Server 2007 API. I found much of the documentation around the Profile System incomplete or lacking in sufficient detail for my curiosity. So I broke out Reflector, and used it to inspect the Commerce Membership Provider implementation (not to mention the implementation of the native ASP.NET SQL membership provider).
Inspecting the code helped me much better understand how and why the membership providers work the way they do, versus just relying on what the documentation said.
I was then able to go on and implement a custom membership provider for commerce server that I believe made up for some of the limitations of the stock commerce server membership provider. Granted my implementation was not looking to be as generic and feature rich, as my goal was to establish "standard" setup and configuration of the commerce user profile for my company.
So when I was initially working with commerce server, I spent probably 20% of my time in reflector. Now that I have better understanding I rarely use it to inspect commerce server at least.
Secondly, there was when I first started working with ASP.NET AJAX and the AJAX Control Toolkit. The toolkit is open source. I spent a fair amount of time in the control toolkit code initially; the documentation was OK, but the samples were very weak. Diving into the source code helped me better understand how to use its various web controls to their full capacity. It also helped me learn more deeply how to work with the ASP.NET AJAX JavaScript libraries. Initially I spent probably about 10% of my time in the source code of the toolkit.
Day to day how much time do I spend using reflector? Not that much, depends on the project and if the technology involved is familiar or new.
A: I used to use it at times, but now it's paid software, so the amount of time spent will definitely go down for me, mainly because I feel Reflector should have remained free.
http://reflector.red-gate.com/download.aspx?TreatAsUpdate=1
A: Since I develop for both .NET and the Compact .NET Framework, I sometimes decompile the full .NET assemblies to "copy" existing functionality to the Compact Framework.
Other than that I don't spend that much time decompiling libs. Mostly only when something doesn't work, the problem clearly points to an assembly, and I don't want to bug someone else before I'm really sure.
If you want me to stick a number to it, I would say 5% of my time.
A: I'd say less than 1% of my time is spent in Reflector. I can see why it might be a good learning tool but I don't often need to know what goes on under the covers, as long as it works as I'd expect then I'm happy. It's an interesting idea though.
A: I used Reflector for fun a bit, but right now I'm not using it at all.
Since we have all the source for our C# programs, there is no real need to decompile anything.
A: I think I may have spent 5%-10% of my time in Reflector at some point -- when I was first learning .NET. These days it's probably less than half an hour a month. But then I don't use many 3rd party libraries.
Source code / documentation, it's harder to say. (yeah, like open source code HAS documentation....) One would have to attribute some portion of my general blog reading to that, but what percentage is very hard to say.
A: maybe 2-3 %? Mostly using this DSM plugin: http://www.tom-carter.net
A: P/Invoking becomes so much easier when you use Reflector on WindowsBase.dll. Check out the MS.Internal.Interop namespace for COM interop, and MS.Win32.NativeMethods and MS.Win32.UnsafeNativeMethods for Win32 interop.
A: If I'm writing against libraries I have the source to I rarely if ever use it because I have the source.
If I'm writing against .Net libraries I probably spend 5% of the time, only when using something I'm unfamiliar with and Google/MSDN/StackOverflow lets me down.
When working with 3rd party libraries I probably spend 20% of the time using Reflector, because it is usually far better than the anemic documentation provided. On one project I spent probably 50% of my time using Reflector, because the documentation/forums/knowledge base for this particular product was missing what I needed or was completely wrong.
I find that I'm happy when I don't need to use Reflector -- it's a great tool, but using it usually means I have a gnarly problem that isn't fun.
A: I probably spend about 1% of my time in Reflector. It's a really nifty tool, but because we write so much of our code in house, there hasn't been a great need to decompile any DLLs.
A: I use it all the time. Why look at potentially flawed documentation when you can see the actual source?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: WPF 3.5 WebBrowser control and ZIndex I'm trying to figure out why the control does not honor ZIndex.
Example 1 - which works fine
<Canvas>
<Rectangle Canvas.ZIndex="1" Height="400" Width="600" Fill="Yellow"/>
<Rectangle Canvas.ZIndex="2" Height="100" Width="100" Fill="Red"/>
</Canvas>
Example 2 - which does not work
<Canvas>
<WebBrowser Canvas.ZIndex="1" Height="400" Width="600" Source="http://www.stackoverflow.com"/>
<Rectangle Canvas.ZIndex="2" Height="100" Width="100" Fill="Red"/>
</Canvas>
Thanks,
-- Ed
A: You are running into a common WPF pitfall, most commonly called "The Airspace Problem". A possible solution is to NOT use the WebBrowser control, and instead go for something a little crazier - namely an embedded WebKit browser rendering directly to WPF. There are two packages that do this: Awesomium (commercial) and Berkelium (open-source). There's a .NET wrapper for both of these.
A: You could use SetWindowRgn to fake the overlapping area by hiding it, as shown here:
* flounder.com
* msdn
A: Unfortunately this is because the WebBrowser control is a wrapper around the Internet Explorer COM control. This means that it gets its own HWND and does not allow WPF to draw anything over it. It has the same restrictions as hosting any other Win32 or WinForms control in WPF.
MSDN has more information about WPF/Win32 interop.
A: I solved a similar issue where I was hosting a 3rd party WinForms control in my WPF application. I created a WPF control that renders the WinForms control in memory and then paints it to a bitmap. Then I use DrawImage in the OnRender method to draw the rendered content. Finally I routed mouse events from my control to the hosted control. In the case of a web browser you would also have to route keyboard events.
My case was fairly easy - a chart with some simple mouse interaction. A web browser control may have other issues that I didn't take into consideration. Anyway I hope that helps.
A: I hit this issue as well. In my case I was dragging images from one panel into the WebBrowser, but of course as soon as my image moved into the browser it was hidden.
Currently working on the following solution:
*
*When the Image drag starts, create a Bitmap of the WebBrowser using "RenderTargetBitmap"
*Add your Bitmap to the canvas, using the same width/location as the webbrowser
*webControl.Visibility = Visibility.Hidden.
*When the drag is released, remove your bitmap and set webControl.Visibility = Visible.
This solution is very specific to my situation, but maybe it will give you some ideas.
A: I managed to solve this by using this structure, check out the properties configuration in each element:
<Canvas ClipToBounds="False">
<Popup AllowsTransparency="True" ClipToBounds="False" IsOpen="True">
<Expander>
<Grid x:Name="YourContent"/>
</Expander>
</Popup>
</Canvas>
You just have to manage the Expander to show or hide your content, I'm using it for a menu bar, I think that the expander is optional depending on the case.
Check out this picture with the result, you can even show your controls on top of the WebBrowser and even outside the main window:
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: What is the difference between TrueType fonts and Type-1 fonts? What is the difference between TrueType fonts and Type-1 fonts?
A: The PostScript Type-1 specification was created by Adobe back in 1985 or so. Type-1 fonts are vector based. You can find the specification in "Adobe Type 1 Font Format".
TrueType fonts were defined by Apple a few years later, so TrueType and PostScript were competitors in the 1990s. Microsoft picked up TrueType as the native Windows font format in the early 1990s (for using PostScript fonts, additional tools like Adobe Type Manager were necessary).
Today, Microsoft is fading out support for PostScript fonts. Try using one as a UI font in Vista. Good luck ;-)
As a successor to TrueType, Microsoft (together with Adobe) created the OpenType format in the late 1990s, and Adobe converted their whole font library into the new format (you can still get the fonts as Type-1 as well).
A: A very key difference is that PostScript (and PostScript-flavoured OpenType) supports cubic Bézier curves, where each arc of each glyph is described by four control points. TrueType (and TrueType-flavoured OpenType) uses quadratic curves, with each arc having only three control points. This offers less control over the shape of the curve.
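For reference, the standard parametric forms make the difference concrete. A cubic arc with control points P0..P3 is
B(t) = (1-t)^3*P0 + 3*(1-t)^2*t*P1 + 3*(1-t)*t^2*P2 + t^3*P3, for t in [0,1]
while a quadratic arc with control points P0..P2 is
B(t) = (1-t)^2*P0 + 2*(1-t)*t*P1 + t^2*P2, for t in [0,1]
The extra control point is what gives the cubic form its additional flexibility.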
Another key difference is the way they perform hinting. Since TrueType was originally targeted to low resolution screen rendering, its hinting system works by adjusting the curves to fit nicely on pixel lattice points, using a fairly elaborate bytecode mechanism. PostScript fonts were intended for higher resolution paper prints, and used guidelines to snap curves to right angles at appropriate places.
A: Type-1 is the older format, and dates back to the days when Adobe were pioneering DTP with PostScript and vector fonts. At the time, Type 1 and Type 3 were the only formats understood by PostScript printers, and only Type 1 could include the hints needed to make fonts look good; the format was also a trade secret. This way Adobe relegated other font foundries to non-hinted fonts using the Type 3 format.
TrueType was developed by Apple and licensed by Microsoft as a way (a) to break Adobe's monopoly on hinted font formats, and (b) to avoid paying Adobe royalties for using PostScript on Windows. Internally TrueType used quadratic curves rather than cubic Béziers, making them faster to render on the screen and on the cheaper non-PostScript-capable printers used with Windows systems. TrueType also has better support for Unicode and other things invented since the creation of Type 1. Modern Macs support TrueType as well.
The new OpenType format combines TrueType and Type 1 (the vector data is permitted to be in quadratic or cubic form, so you can directly convert either of the old formats to OpenType). OpenType also has support for fancy automatic ligatures and glyph substitution, which is nice for English text and vital for text using Arabic or Indic scripts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: How can I extract a predetermined range of lines from a text file on Unix? I have a ~23000 line SQL dump containing several databases worth of data. I need to extract a certain section of this file (i.e. the data for a single database) and place it in a new file. I know both the start and end line numbers of the data that I want.
Does anyone know a Unix command (or series of commands) to extract all lines from a file between say line 16224 and 16482 and then redirect them into a new file?
A: sed -n '16224,16482p;16483q' filename > newfile
From the sed manual:
p -
Print out the pattern space (to the standard output). This command is usually only used in conjunction with the -n command-line option.
n -
If auto-print is not disabled, print the pattern space, then, regardless, replace the pattern space with the next line of input. If
there is no more input then sed exits without processing any more
commands.
q -
Exit sed without processing any more commands or input.
Note that the current pattern space is printed if auto-print is not disabled with the -n option.
and
Addresses in a sed script can be in any of the following forms:
number
Specifying a line number will match only that line in the input.
An address range can be specified by specifying two addresses
separated by a comma (,). An address range matches lines starting from
where the first address matches, and continues until the second
address matches (inclusively).
A: cat dump.txt | head -16482 | tail -259
should do the trick. The downside of this approach is that you need to do the arithmetic to determine the argument for tail and to account for whether you want the 'between' to include the ending line or not.
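A small sketch of that arithmetic, using shell variables so the numbers are only written once (same 16224-16482 range as the question, inclusive of both ends):
start=16224; end=16482
head -n "$end" dump.txt | tail -n "$((end - start + 1))" > newfile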
A: sed -n '16224,16482p' < dump.sql
A: You could use 'vi' and then the following command:
:16224,16482w!/tmp/some-file
Alternatively:
cat file | head -n 16482 | tail -n 259
EDIT:- Just to add an explanation: you use head -n 16482 to display the first 16482 lines, then use tail -n 259 to get the last 259 lines (16482 - 16224 + 1) out of that output.
A: There is another approach with awk:
awk 'NR==16224, NR==16482' file
If the file is huge, it can be good to exit after reading the last desired line. This way, it won't read the following lines unnecessarily:
awk 'NR==16224, NR==16482-1; NR==16482 {print; exit}' file
awk 'NR==16224, NR==16482; NR==16482 {exit}' file
A: Quick and dirty:
head -16482 < file.in | tail -259 > file.out
Probably not the best way to do it but it should work.
BTW: 259 = 16482-16224+1.
A: I wrote a Haskell program called splitter that does exactly this: have a read through my release blog post.
You can use the program as follows:
$ cat somefile | splitter 16224-16482
And that is all that there is to it. You will need Haskell to install it. Just:
$ cabal install splitter
And you are done. I hope that you find this program useful.
A: We can also do this at the command line:
cat filename|sed 'n1,n2!d' > abc.txt
For Example:
cat foo.pl|sed '100,200!d' > abc.txt
A: Using ruby:
ruby -ne 'puts "#{$.}: #{$_}" if $. >= 32613500 && $. <= 32614500' < GND.rdf > GND.extract.rdf
A: I wanted to do the same thing from a script using a variable and achieved it by putting quotes around the $variable to separate the variable name from the p:
sed -n "$first","$count"p imagelist.txt >"$imageblock"
I wanted to split a list into separate folders and found the initial question and answer a useful step. (The split command is not an option on the old OS I have to port code to.)
A: Just benchmarking the 3 solutions given above that work for me:
*
*awk
*sed
*"head+tail"
Credits on the 3 solutions goes to:
*
*@boxxar
*@avandeursen
*@wds
*@manveru
*@sibaz
*@SOFe
*@fedorqui 'SO stop harming'
*@Robin A. Meade
I'm using a huge file I find in my server:
# wc fo2debug.1.log
10421186 19448208 38795491134 fo2debug.1.log
38 Gb in 10.4 million lines.
And yes, I have a logrotate problem. : ))
Make your bets!
Getting 256 lines from the beginning of the file.
# time sed -n '1001,1256p;1256q' fo2debug.1.log | wc -l
256
real 0m0,003s
user 0m0,000s
sys 0m0,004s
# time head -1256 fo2debug.1.log | tail -n +1001 | wc -l
256
real 0m0,003s
user 0m0,006s
sys 0m0,000s
# time awk 'NR==1001, NR==1256; NR==1256 {exit}' fo2debug.1.log | wc -l
256
real 0m0,002s
user 0m0,004s
sys 0m0,000s
Awk won. Technical tie in second place between sed and "head+tail".
Getting 256 lines at the end of the first third of the file.
# time sed -n '3473001,3473256p;3473256q' fo2debug.1.log | wc -l
256
real 0m0,265s
user 0m0,242s
sys 0m0,024s
# time head -3473256 fo2debug.1.log | tail -n +3473001 | wc -l
256
real 0m0,308s
user 0m0,313s
sys 0m0,145s
# time awk 'NR==3473001, NR==3473256; NR==3473256 {exit}' fo2debug.1.log | wc -l
256
real 0m0,393s
user 0m0,326s
sys 0m0,068s
Sed won. Followed by "head+tail" and, finally, awk.
Getting 256 lines at the end of the second third of the file.
# time sed -n '6947001,6947256p;6947256q' fo2debug.1.log | wc -l
256
real 0m0,525s
user 0m0,462s
sys 0m0,064s
# time head -6947256 fo2debug.1.log | tail -n +6947001 | wc -l
256
real 0m0,615s
user 0m0,488s
sys 0m0,423s
# time awk 'NR==6947001, NR==6947256; NR==6947256 {exit}' fo2debug.1.log | wc -l
256
real 0m0,779s
user 0m0,650s
sys 0m0,130s
Same results.
Sed won. Followed by "head+tail" and, finally, awk.
Getting 256 lines near the end of the file.
# time sed -n '10420001,10420256p;10420256q' fo2debug.1.log | wc -l
256
real 1m50,017s
user 0m12,735s
sys 0m22,926s
# time head -10420256 fo2debug.1.log | tail -n +10420001 | wc -l
256
real 1m48,269s
user 0m42,404s
sys 0m51,015s
# time awk 'NR==10420001, NR==10420256; NR==10420256 {exit}' fo2debug.1.log | wc -l
256
real 1m49,106s
user 0m12,322s
sys 0m18,576s
And suddenly, a twist!
"Head+tail" won. Followed by awk and, finally, sed.
(some hours later...)
Sorry guys!
My analysis above ends up being an example of a basic flaw in doing an analysis.
The flaw is not knowing in depth the resources used for the analysis.
In this case, I used a log file to analyze the performance of a search for a certain number of lines within it.
Using 3 different techniques, searches were made at different points in the file, comparing the performance of the techniques at each point and checking whether the results varied depending on the point in the file where the search was made.
My mistake was to assume that there was a certain homogeneity of content in the log file.
The reality is that long lines appear more frequently at the end of the file.
Thus, the apparent conclusion that searches closer to the end of the file are better with a given technique may be biased. In fact, that technique may simply be better when dealing with longer lines. This remains to be confirmed.
A: sed -n '16224,16482 p' orig-data-file > new-file
Where 16224,16482 are the start line number and end line number, inclusive. This is 1-indexed. -n suppresses echoing the input as output, which you clearly don't want; the numbers indicate the range of lines to make the following command operate on; the command p prints out the relevant lines.
A: I was about to post the head/tail trick, but actually I'd probably just fire up emacs. ;-)
*
*esc-x goto-line ret 16224
*mark (ctrl-space)
*esc-x goto-line ret 16482
*esc-w
open the new output file, ctl-y
save
It lets me see what's happening.
A: I would use:
awk 'FNR >= 16224 && FNR <= 16482' my_file > extracted.txt
FNR contains the record (line) number of the line being read from the file.
A: Using ed:
ed -s infile <<<'16224,16482p'
-s suppresses diagnostic output; the actual commands are in a here-string. Specifically, 16224,16482p runs the p (print) command on the desired line address range.
A: perl -ne 'print if 16224..16482' file.txt > new_file.txt
A: People trying to wrap their heads around computing an interval for the head | tail combo are overthinking it.
Here's how you get the "16224 -- 16482" range without computing anything:
cat file | head -n +16482 | tail -n +16224
Explanation:
*
*For tail, the + instructs the command to start from the specified line number as counted from the beginning of the file (head simply accepts a leading + as part of the number, so head -n +16482 just prints the first 16482 lines).
*Similarly, a - instructs head to go up to the specified line number as counted from the end of the file (a plain number already makes tail count from the end)
*The solution shown above simply uses head first, to 'keep everything up to the top number', and then tail second, to 'keep everything from the bottom number upwards', thus defining our range of interest (with no need to compute an interval).
A: Standing on the shoulders of boxxar, I like this:
sed -n '<first line>,$p;<last line>q' input
e.g.
sed -n '16224,$p;16482q' input
The $ means "last line", so the first command makes sed print all lines starting with line 16224 and the second command makes sed quit after printing line 16482. (Adding 1 for the q-range in boxxar's solution does not seem to be necessary.)
I like this variant because I don't need to specify the ending line number twice. And I measured that using $ does not have detrimental effects on performance.
A: Quite simple using head/tail:
head -16482 in.sql | tail -259 > out.sql
using sed:
sed -n '16224,16482p' in.sql > out.sql
using awk:
awk 'NR>=16224&&NR<=16482' in.sql > out.sql
A: # print section of file based on line numbers
sed -n '16224 ,16482p' # method 1
sed '16224,16482!d' # method 2
A: I wrote a small bash script that you can run from your command line, so long as you update your PATH to include its directory (or you can place it in a directory that is already contained in the PATH).
Usage: $ pinch filename start-line end-line
#!/bin/bash
# Display line number ranges of a file to the terminal.
# Usage: $ pinch filename start-line end-line
# By Evan J. Coon
FILENAME=$1
START=$2
END=$3
ERROR="[PINCH ERROR]"
# Check that the number of arguments is 3
if [ $# -lt 3 ]; then
echo "$ERROR Need three arguments: Filename Start-line End-line"
exit 1
fi
# Check that the file exists.
if [ ! -f "$FILENAME" ]; then
echo -e "$ERROR File does not exist. \n\t$FILENAME"
exit 1
fi
# Check that start-line is not greater than end-line
if [ "$START" -gt "$END" ]; then
echo -e "$ERROR Start line is greater than End line."
exit 1
fi
# Check that start-line is positive.
if [ "$START" -lt 0 ]; then
echo -e "$ERROR Start line is less than 0."
exit 1
fi
# Check that end-line is positive.
if [ "$END" -lt 0 ]; then
echo -e "$ERROR End line is less than 0."
exit 1
fi
NUMOFLINES=$(wc -l < "$FILENAME")
# Check that end-line is not greater than the number of lines in the file.
if [ "$END" -gt "$NUMOFLINES" ]; then
echo -e "$ERROR End line is greater than number of lines in file."
exit 1
fi
# The distance from the end of the file to end-line
ENDDIFF=$(( NUMOFLINES - END ))
# For larger files, this will run more quickly. If the distance from the
# end of the file to the end-line is less than the distance from the
# start of the file to the start-line, then start pinching from the
# bottom as opposed to the top.
if [ "$START" -lt "$ENDDIFF" ]; then
< "$FILENAME" head -n $END | tail -n +$START
else
< "$FILENAME" tail -n +$START | head -n $(( END-START+1 ))
fi
# Success
exit 0
A: This might work for you (GNU sed):
sed -ne '16224,16482w newfile' -e '16482q' file
or taking advantage of bash:
sed -n $'16224,16482w newfile\n16482q' file
A: Since we are talking about extracting lines of text from a text file, I will give a special case where you want to extract all lines that match a certain pattern.
myfile content:
=====================
line1 not needed
line2 also discarded
[Data]
first data line
second data line
=====================
sed -n '/Data/,$p' myfile
Will print the [Data] line and everything after it. If you want the text from line 1 up to the pattern, you type: sed -n '1,/Data/p' myfile. Furthermore, if you know two patterns (better if they are unique in your text), both the beginning and end lines of the range can be specified with matches.
sed -n '/BEGIN_MARK/,/END_MARK/p' myfile
A: I've compiled some of the highest rated solutions for sed, perl and head+tail, plus my own code for awk, focusing on performance via the pipe, using LC_ALL=C to ensure all candidates run at their fastest, and allocating a 2-second sleep gap in between.
The gaps are somewhat noticeable :
abs time awk/app speed ratio
----------------------------------
0.0672 sec : 1.00x mawk-2
0.0839 sec : 1.25x gnu-sed
0.1289 sec : 1.92x perl
0.2151 sec : 3.20x gnu-head+tail
Haven't had a chance to test python or the BSD variants of those utilities.
(fg && fg && fg && fg) 2>/dev/null;
echo;
( time ( pvE0 < "${m3t}"
| LC_ALL=C mawk2 '
BEGIN {
_=10420001-(\
__=10420256)^(FS="^$")
} _<NR {
print
if(__==NR) { exit }
}' ) | pvE9) | tee >(xxh128sum >&2) | LC_ALL=C gwc -lcm | lgp3 ;
sleep 2;
(fg && fg && fg && fg) 2>/dev/null
echo;
( time ( pvE0 < "${m3t}"
| LC_ALL=C gsed -n '10420001,10420256p;10420256q'
) | pvE9 ) | tee >(xxh128sum >&2) | LC_ALL=C gwc -lcm | lgp3 ;
sleep 2; (fg && fg && fg && fg) 2>/dev/null
echo
( time ( pvE0 < "${m3t}"
| LC_ALL=C perl -ne 'print if 10420001..10420256'
) | pvE9 ) | tee >(xxh128sum >&2) | LC_ALL=C gwc -lcm | lgp3 ;
sleep 2; (fg && fg && fg && fg) 2>/dev/null
echo
( time ( pvE0 < "${m3t}"
| LC_ALL=C ghead -n +10420256
| LC_ALL=C gtail -n +10420001
) | pvE9 ) | tee >(xxh128sum >&2) | LC_ALL=C gwc -lcm | lgp3 ;
in0: 1.51GiB 0:00:00 [2.31GiB/s] [2.31GiB/s] [============> ] 81%
out9: 42.5KiB 0:00:00 [64.9KiB/s] [64.9KiB/s] [ <=> ]
( pvE 0.1 in0 < "${m3t}" | LC_ALL=C mawk2 ; )
0.43s user 0.36s system 117% cpu 0.672 total
256 43487 43487
54313365c2e66a48dc1dc33595716cc8 stdin
out9: 42.5KiB 0:00:00 [51.7KiB/s] [51.7KiB/s] [ <=> ]
in0: 1.51GiB 0:00:00 [1.84GiB/s] [1.84GiB/s] [==========> ] 81%
( pvE 0.1 in0 < "${m3t}" |LC_ALL=C gsed -n '10420001,10420256p;10420256q'; )
0.68s user 0.34s system 121% cpu 0.839 total
256 43487 43487
54313365c2e66a48dc1dc33595716cc8 stdin
in0: 1.85GiB 0:00:01 [1.46GiB/s] [1.46GiB/s] [=============>] 100%
out9: 42.5KiB 0:00:01 [33.5KiB/s] [33.5KiB/s] [ <=> ]
( pvE 0.1 in0 < "${m3t}" | LC_ALL=C perl -ne 'print if 10420001..10420256'; )
1.10s user 0.44s system 119% cpu 1.289 total
256 43487 43487
54313365c2e66a48dc1dc33595716cc8 stdin
in0: 1.51GiB 0:00:02 [ 728MiB/s] [ 728MiB/s] [=============> ] 81%
out9: 42.5KiB 0:00:02 [19.9KiB/s] [19.9KiB/s] [ <=> ]
( pvE 0.1 in0 < "${m3t}"
| LC_ALL=C ghead -n +10420256
| LC_ALL=C gtail -n +10420001 ; )
1.98s user 1.40s system 157% cpu 2.151 total
256 43487 43487
54313365c2e66a48dc1dc33595716cc8 stdin
A: The -n in the accepted answers works. Here's another way in case you're inclined.
cat $filename | sed "${linenum}p;d";
This does the following:
*
*pipe in the contents of a file (or feed in the text however you want).
*sed selects the given line, prints it
*d is required to delete lines; otherwise sed will assume all lines should eventually be printed. That is, without the d you would get every line printed, with the selected line printed twice, because the ${linenum}p part asks for it to be printed. I'm pretty sure the -n option is basically doing the same job as the d here.
A: I was looking for an answer to this but ended up writing my own code, which worked. None of the answers above were satisfactory.
Consider that you have a very large file and certain line numbers that you want to print out, but the numbers are not in order. You can do the following:
My relatively large file
for letter in {a..k} ; do echo $letter; done | cat -n > myfile.txt
1 a
2 b
3 c
4 d
5 e
6 f
7 g
8 h
9 i
10 j
11 k
Specific line numbers I want:
shuf -i 1-11 -n 4 > line_numbers_I_want.txt
10
11
4
9
To print these line numbers, do the following.
awk '{system("head myfile.txt -n " $0 " | tail -n 1")}' line_numbers_I_want.txt
What the above does is head the file to the nth line and then take the last line of that output using tail.
If you want your line numbers in order, sort first (-n is numeric sort), then get the lines.
cat line_numbers_I_want.txt | sort -n | awk '{system("head myfile.txt -n " $0 " | tail -n 1")}'
4 d
9 i
10 j
11 k
A: Maybe you would be so kind as to give this humble script a chance ;-)
#!/usr/bin/bash
# Usage:
# body n m|-m
from=$1
to=$2
if [ $to -gt 0 ]; then
# count $from the begin of the file $to selected line
awk "NR >= $from && NR <= $to {print}"
else
# count $from the begin of the file skipping tailing $to lines
awk '
BEGIN {lines=0; from='$from'; to='$to'}
{++lines}
NR >= from {line[lines]=$0}
END {for (i = from; i < lines + to + 1; i++) {
print line[i]
}
}'
fi
Outputs:
$ seq 20 | ./body.sh 5 15
5
6
7
8
9
10
11
12
13
14
15
$ seq 20 | ./body.sh 5 -5
5
6
7
8
9
10
11
12
13
14
15
A: You could use the sed command in your case and it is pretty fast.
As mentioned, let's assume the range is between lines 16224 and 16482:
#get the lines from 16224 to 16482 and prints the values into filename.txt file
sed -n '16224 ,16482p' file.txt > filename.txt
#Additional Info to showcase other possible scenarios:
#get the 16224 th line and writes the value to filename.txt
sed -n '16224p' file.txt > filename.txt
#get the 16224 and 16300 line values only and write to filename.txt.
sed -n '16224p;16300p;' file.txt > filename.txt
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "639"
}
|
Q: strpos function issue in PHP not finding the needle In PHP I have opened a .php file and want to evaluate certain lines, specifically where the $table_id and $screen variables are assigned a value.
Within the text file I have:
...
$table_id = 'crs_class'; // table name
$screen = 'crs_class.detail.screen.inc'; // file identifying screen structure
...
amongst other lines. The if statement below never detects the occurrence of $table_id or $screen (even without the $ prepended). I can't understand why it won't work, as the strpos statement below looking for 'require' works fine.
So, why isn't this if statement getting a hit?
while ($line=fgets($fh)) {
//echo "Evaluating... $line <br>";
if ((($pos = stripos($line, '$table_id')) === true) || (($pos = stripos($line, '$screen'))===true))
{
// TODO: Not evaluating tableid and screen lines correctly fix.
// Set $table_id and $screen variables from task scripts
eval($line);
}
if (($pos=stripos($line, 'require')) === true) {
$controller = $line;
}
}
A: use !==false instead of ===true
stripos returns the position as an integer if the needle is found, and an integer is never === a boolean.
You might also be interested in PHP's tokenizer module or the lexer package in the pear repository.
A: I think VolkerK already has the answer - stripos() does not return a boolean, it returns the position within the string, or false if it's not found - so you want to be checking that the return is not false using !== (not != as you want to check the type as well).
Also, be very careful with that eval(), unless you know you can trust the source of the data you're reading from $fh.
Otherwise, there could be anything else on that line that you unwittingly eval() - the line could be something like:
$table_id = 'foo'; exec('/bin/rm -rf /');
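If you want to avoid eval() entirely, here is a hedged sketch that extracts the assigned value with a regular expression instead (the $vars array is just an illustrative name):
if (preg_match('/^\s*\$(table_id|screen)\s*=\s*\'([^\']*)\'/', $line, $matches)) {
    $vars[$matches[1]] = $matches[2]; // e.g. $vars['table_id'] = 'crs_class'
}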
A: According to the PHP docs, strpos() and stripos() will return an integer for the position, OR a boolean FALSE.
Since 0 (zero) is a valid and perfectly common index, this function should be used with extreme caution.
Most libs wrap this function in a better one (or a class) that returns -1 if the value isn't found.
e.g. like Javascript's
String.indexOf(str)
A: Variable interpolation is only performed on "strings", not 'strings' (note the quotes). i.e.
<?php
$foo = "bar";
print '$foo';
print "$foo";
?>
prints $foobar. Change your quotes, and all should be well.
A: Why are you using the === operator?
If the needle is anywhere in the line, the return value will be an integer. You're also comparing the type by using ===.
From my understanding you're asking "is the position equal to, and of the same type as, true", which will never work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Interacting with Outlook appointments using rails I have a rails application running on a Linux server. I would like to interact with Outlook/Exchange 2003 appointments from the rails application. For example, certain actions should trigger sending an appointment, and then preferably accepting/canceling the appointment in Outlook should trigger events in the application.
Failing this, is it possible to publish calendars that Outlook 2003 can read without requiring Outlook plugins? I note that Outlook 2003 does not support ical without plugins for example. Similarly, if this is not easily doable in Ruby, but is in another language (such as Perl for example) running on Linux then those suggestions would be welcome.
Any advice on how to achieve this, or where to start looking for answers would be gratefully received.
A: Thanks for everyone's help. I found something that showed me how to do this with Perl, and ported it over to Ruby. I've blogged about it for those looking for a solution.
A: Outlook appointments are just e-mails with special header information. There's some information in this tutorial on the required parts. I sent a few meeting invites from my Outlook to my Gmail account and took a look at the raw headers there - you can figure most of the protocol out from that.
The iCalendar specs may help you, as well.
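To give an idea of what such a mail carries, here is a minimal sketch of an iCalendar meeting request as it might appear in the message body (all addresses, UIDs and dates are made-up placeholders):
Content-Type: text/calendar; method=REQUEST; charset=UTF-8

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example//Rails App//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:hypothetical-uid-123@example.com
DTSTAMP:20080917T120000Z
DTSTART:20080918T090000Z
DTEND:20080918T100000Z
ORGANIZER:mailto:organizer@example.com
ATTENDEE;RSVP=TRUE:mailto:attendee@example.com
SUMMARY:Project review
END:VEVENT
END:VCALENDAR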
A: Take a look at the project RExchange on github.
A: If you can upgrade to Exchange 2007, you can use Exchange Web Services, which is more powerful and convenient to use than WebDAV.
At work, I inherited a Rails app that allows users to create single appointments. I was asked to write code to link the appointments in the app to users' Outlook calendars, so that they are always in sync. Sounds to me very similar to what you want to do.
I don't think I'm allowed to publish the exact code I wrote though. Anyway, I'll give you a bit of an idea of how I addressed it.
Exchange Web Services only provides an API in C# (no surprise, it's Microsoft; technically, you can use other languages since it's actually SOAP). I wrote a middleware in C# that does the sync between the Exchange server and the Rails app. When users do scheduling in the app, changes are sent to the middleware so they can be reflected in their Outlook calendars. Meanwhile, the middleware registers Push Notification subscriptions for all users -- every time changes are made in Outlook, the middleware is immediately notified and in turn faithfully reflects those changes in the app as well. Of course, recurring appointments are also supported.
Hope that helps you.
A: For accessing appointments, you can just access the Calendar folder on Exchange using WebDAV. For creating appointments, please refer to RFC 2445 for details.
A: Further to ceejayoz's comment, you can also use ActionMailer to catch the replies that are sent back, and act on them - you'll need some form of unique id in a place that will be included in the reply though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: XML parser for JavaScript I am looking for a good JavaScript library for parsing XML data. It should be much easier to use than the built-in XML DOM parsers bundled with the browsers.
I got spoiled a bit working with JSON and am looking forward to something on similar lines for XML.
A: I use jQuery for this. Here is a good example:
(EDIT: Note - the following blog seems to have gone away.)
http://blog.reindel.com/2007/09/24/jquery-and-xml-revisited/
There are also lots and lots of good examples in the jQuery documentation:
http://www.webmonkey.com/tutorial/Easy_XML_Consumption_using_jQuery?oldid=20032
EDIT: Due to the blog for my primary example going away, I wanted to add another example that shows the basics and helps with namespace issues:
http://www.zachleat.com/web/selecting-xml-with-javascript/
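Since some of those links have rotted, here is a minimal sketch of the idea (this assumes a jQuery version with $.parseXML, i.e. 1.5 or later; older versions could pass the XML string straight to $()):
var xml = $.parseXML("<root><item>hello</item><item>world</item></root>");
$(xml).find("item").each(function () {
    console.log($(this).text()); // "hello", then "world"
});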
A: Disclaimer: I am the author of the open-source Jsonix library, which may be suitable for the task.
A couple of years ago I was also looking for a good XML<->JSON parsing/serialization library for JavaScript. I needed to process XML documents conforming to rather complex XML Schemas. In Java, I routinely use JAXB for the task so I was looking for something similar:
Is there a JavaScript API for XML binding - analog to JAXB for Java?
I failed to find such a tool back then.
So I wrote Jsonix which I consider to be a JAXB analog for JavaScript.
You may find Jsonix suitable, if you're interested in the following features:
*
*XML<->JSON conversion is based on a declarative mapping between XML and JSON structures
*This mapping can be generated from an XML Schema or written manually
*Bidirectional - supports parsing as well as serialization (or unmarshalling/marshalling in other terms).
*Supports elements and attributes, and also considers namespaces defined in the XML document.
*Strictly typed.
*Strictly structured.
*Supports almost all of the XML Schema built-in types (including special types like QName).
*Works in browsers as well as Node.js, and is also compatible with RequireJS/AMD (and with amdefine in Node.js)
*Has extensive documentation.
However, Jsonix may be overkill if your XML is rather simple, does not have an XML Schema, or if you're not interested in strict typing or structures. Check your requirements.
Example
Try it in JSFiddle.
You can take a purchase order schema and generate a mapping for it using the following command:
java -jar node_modules/jsonix/lib/jsonix-schema-compiler-full.jar
-d mappings -p PO purchaseorder.xsd
You'll get a PO.js file which describes mappings between XML and JavaScript structures. Here's a snippet from this mapping file to give you an impression:
var PO = {
name: 'PO',
typeInfos: [{
localName: 'PurchaseOrderType',
propertyInfos: [{
name: 'shipTo',
typeInfo: 'PO.USAddress'
}, {
name: 'billTo',
typeInfo: 'PO.USAddress'
}, {
name: 'comment'
}, {
name: 'orderDate',
typeInfo: 'Calendar',
type: 'attribute'
}, ...]
}, {
localName: 'USAddress',
propertyInfos: [ ... ]
}, ...],
elementInfos: [{
elementName: 'purchaseOrder',
typeInfo: 'PO.PurchaseOrderType'
}, ... ]
};
Having this mapping file you can parse the XML:
// First we construct a Jsonix context - a factory for unmarshaller (parser)
// and marshaller (serializer)
var context = new Jsonix.Context([PO]);
// Then we create a unmarshaller
var unmarshaller = context.createUnmarshaller();
// Unmarshal an object from the XML retrieved from the URL
unmarshaller.unmarshalURL('po.xml',
// This callback function will be provided
// with the result of the unmarshalling
function (unmarshalled) {
// Alice Smith
console.log(unmarshalled.value.shipTo.name);
// Baby Monitor
console.log(unmarshalled.value.items.item[1].productName);
});
Or serialize your JavaScript object as XML:
// Create a marshaller
var marshaller = context.createMarshaller();
// Marshal a JavaScript Object as XML (DOM Document)
var doc = marshaller.marshalDocument({
name: {
localPart: "purchaseOrder"
},
value: {
orderDate: { year: 1999, month: 10, day: 20 },
shipTo: {
country: "US",
name: "Alice Smith",
street: "123 Maple Street",
city: "Mill Valley",
state: "CA",
zip: 90952
},
billTo: { /* ... */ },
comment: 'Hurry, my lawn is going wild!',
items: { /* ... */ }
}
});
You can try it in JSFiddle to see how it works in practice.
Additional disclaimer: this answer is high-voted because of the following discussion on meta. So please be aware of the "meta-effect". High votes here do not necessarily mean that Jsonix is good, applicable or recommended by the community. Do not be misled by the high votes.
A: If your XML is in a simple format you may look at jQuery and the XML to JSON plugin or the xmlObjectifier.
For a straight parser you may want to look at XML for <SCRIPT>.
A: Have you tried XML for <SCRIPT>? I have to admit that I have never used it personally, but I have heard/read a few good things about it.
Give it a try and maybe share your experience here?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: How do I call a SQL Server stored procedure from PowerShell? I have a large CSV file and I want to execute a stored procedure for each line.
What is the best way to execute a stored procedure from PowerShell?
A: Here is a function I use to execute SQL commands. You just have to change $sqlCommand.CommandText to the name of your sproc and $sqlCommand.CommandType to CommandType.StoredProcedure, as sketched after the function.
function execute-Sql{
param($server, $db, $sql )
$sqlConnection = new-object System.Data.SqlClient.SqlConnection
$sqlConnection.ConnectionString = 'server=' + $server + ';integrated security=TRUE;database=' + $db
$sqlConnection.Open()
$sqlCommand = new-object System.Data.SqlClient.SqlCommand
$sqlCommand.CommandTimeout = 120
$sqlCommand.Connection = $sqlConnection
$sqlCommand.CommandText= $sql
$text = $sql.Substring(0, [Math]::Min(50, $sql.Length))
Write-Progress -Activity "Executing SQL" -Status "Executing SQL => $text..."
Write-Host "Executing SQL => $text..."
$result = $sqlCommand.ExecuteNonQuery()
$sqlConnection.Close()
}
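For example, the two changed lines for a stored procedure call might look like this (the procedure name is a made-up placeholder):
$sqlCommand.CommandType = [System.Data.CommandType]::StoredProcedure
$sqlCommand.CommandText = 'dbo.MyStoredProc' # hypothetical procedure name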
A: This answer was pulled from http://www.databasejournal.com/features/mssql/article.php/3683181
This same example can be used for any adhoc queries. Let us execute the stored procedure “sp_helpdb” as shown below.
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection
$SqlConnection.ConnectionString = "Server=HOME\SQLEXPRESS;Database=master;Integrated Security=True"
$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.CommandText = "sp_helpdb"
$SqlCmd.Connection = $SqlConnection
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SqlAdapter.SelectCommand = $SqlCmd
$DataSet = New-Object System.Data.DataSet
$SqlAdapter.Fill($DataSet)
$SqlConnection.Close()
$DataSet.Tables[0]
A: Use sqlcmd instead of osql if it's a 2005 database
A: Consider calling osql.exe (the command line tool for SQL Server), passing as a parameter a text file containing, on each line, a call to the stored procedure.
SQL Server provides some assemblies, known as SMO, that could be of use and that integrate seamlessly with PowerShell. Here is an article on that.
http://www.databasejournal.com/features/mssql/article.php/3696731
There are API methods to execute stored procedures that I think are worth investigating. Here is a starter example:
http://www.eggheadcafe.com/software/aspnet/29974894/smo-running-a-stored-pro.aspx
A: Here is a function that I use (slightly redacted). It allows input and output parameters. I only have uniqueidentifier and varchar types implemented, but any other types are easy to add. If you use parameterized stored procedures (or just parameterized sql...this code is easily adapted to that), this will make your life a lot easier.
To call the function, you need a connection to the SQL server (say $conn),
$res=exec-storedprocedure -storedProcName 'stp_myProc' -parameters @{Param1="Hello";Param2=50} -outparams @{ID="uniqueidentifier"} $conn
retrieve proc output from returned object
$res.data #dataset containing the datatables returned by selects
$res.outputparams.ID #output parameter ID (uniqueidentifier)
The function:
function exec-storedprocedure($storedProcName,
[hashtable] $parameters=@{},
[hashtable] $outparams=@{},
$conn,[switch]$help){
function put-outputparameters($cmd, $outparams){
foreach($outp in $outparams.Keys){
$cmd.Parameters.Add("@$outp", (get-paramtype $outparams[$outp])).Direction=[System.Data.ParameterDirection]::Output
}
}
function get-outputparameters($cmd,$outparams){
foreach($p in $cmd.Parameters){
if ($p.Direction -eq [System.Data.ParameterDirection]::Output){
$outparams[$p.ParameterName.Replace("@","")]=$p.Value
}
}
}
function get-paramtype($typename,[switch]$help){
switch ($typename){
'uniqueidentifier' {[System.Data.SqlDbType]::UniqueIdentifier}
'int' {[System.Data.SqlDbType]::Int}
'xml' {[System.Data.SqlDbType]::Xml}
'nvarchar' {[System.Data.SqlDbType]::NVarchar}
default {[System.Data.SqlDbType]::Varchar}
}
}
if ($help){
$msg = @"
Execute a sql statement. Parameters are allowed.
Input parameters should be a dictionary of parameter names and values.
Output parameters should be a dictionary of parameter names and types.
Return value will usually be a list of datarows.
Usage: exec-query sql [inputparameters] [outputparameters] [conn] [-help]
"@
Write-Host $msg
return
}
$close=($conn.State -eq [System.Data.ConnectionState]'Closed')
if ($close) {
$conn.Open()
}
$cmd=new-object system.Data.SqlClient.SqlCommand($storedProcName,$conn)
$cmd.CommandType=[System.Data.CommandType]'StoredProcedure'
$cmd.CommandText=$storedProcName
foreach($p in $parameters.Keys){
$cmd.Parameters.AddWithValue("@$p",[string]$parameters[$p]).Direction=
[System.Data.ParameterDirection]::Input
}
put-outputparameters $cmd $outparams
$ds=New-Object system.Data.DataSet
$da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
[Void]$da.fill($ds)
if ($close) {
$conn.Close()
}
get-outputparameters $cmd $outparams
return @{data=$ds;outputparams=$outparams}
}
A: I include invoke-sqlcmd2.ps1 and write-datatable.ps1 from http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx. Calls to run SQL commands take the form:
Invoke-sqlcmd2 -ServerInstance "<sql-server>" -Database <DB> -Query "truncate table <table>"
An example of writing the contents of DataTable variables to a SQL table looks like:
$logs = (get-item SQLSERVER:\sql\<server_path>).ReadErrorLog()
Write-DataTable -ServerInstance "<sql-server>" -Database "<DB>" -TableName "<table>" -Data $logs
I find these useful when doing SQL Server database-related PowerShell scripts, as the resulting scripts are clean and readable.
A: Adds CommandType and Parameters to @Santiago Cepas' answer:
function Execute-Stored-Procedure
{
param($server, $db, $spname)
$sqlConnection = new-object System.Data.SqlClient.SqlConnection
$sqlConnection.ConnectionString = 'server=' + $server + ';integrated security=TRUE;database=' + $db
$sqlConnection.Open()
$sqlCommand = new-object System.Data.SqlClient.SqlCommand
$sqlCommand.CommandTimeout = 120
$sqlCommand.Connection = $sqlConnection
$sqlCommand.CommandType= [System.Data.CommandType]::StoredProcedure
# If you have paramters, add them like this:
# $sqlCommand.Parameters.AddWithValue("@paramName", "$param") | Out-Null
$sqlCommand.CommandText= $spname
$text = $spname.Substring(0, [Math]::Min(50, $spname.Length))
Write-Progress -Activity "Executing Stored Procedure" -Status "Executing SQL => $text..."
Write-Host "Executing Stored Procedure => $text..."
$result = $sqlCommand.ExecuteNonQuery()
$sqlConnection.Close()
}
# Call like this:
Execute-Stored-Procedure -server "enter-server-name-here" -db "enter-db-name-here" -spname "enter-sp-name-here"
A: I added a timeout and show how to read a scalar or get results using a reader:
function exec-query( $storedProcName,$parameters=@{},$conn,$timeout=60){
$cmd=new-object system.Data.SqlClient.SqlCommand
$cmd.CommandType=[System.Data.CommandType]'StoredProcedure'
$cmd.Connection=$conn
$cmd.CommandText=$storedProcName
$cmd.CommandTimeout=$timeout
foreach($p in $parameters.Keys){
[Void] $cmd.Parameters.AddWithValue("@$p",$parameters[$p])
}
#$id=$cmd.ExecuteScalar()
$adapter=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
$dataset=New-Object system.Data.DataSet
$adapter.fill($dataset) | Out-Null
#$reader = $cmd.ExecuteReader()
#$results = @()
#while ($reader.Read())
#{
# write-host "reached" -ForegroundColor Green
#}
return $dataSet.Tables[0]
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
}
|
Q: Stored procedures a no-go in the php/mysql world? I'm quoting part of an answer which I received for another question of mine:
In the PHP/MySQL world I would say
stored procedures are no-go
I would like to know: Is that so? Why? Why not?
[edit]I mean this as a general question without a specific need in mind[/edit]
A: Do you have a specific need in mind which makes you consider them? Stored procedures are much less portable than "plain" SQL, that's usually why people don't want to use them. Also, having written a fair share of PL/SQL, I must say that the procedural way of writing code adds complexity and it's just not very modern or testable. They might be handy in some special cases where you need to optimize, but I'd certainly think twice. Jeff has similar opinions.
A: I generally stay away from stored procedures because they add load to the database, which is, 99% of the time, your biggest bottleneck. Adding a new PHP server is nothing compared to making your MySQL db replicate.
A: I develop and maintain a large PHP/MySQL application. Here is my experience with stored procedures.
Over time our application has grown very complex. And with all the logic on the php side, some operations would query the database with over 100 short queries.
MySQL is so quick that the performance was still acceptable, but not great.
We made the decision in our latest version of the software to move some of the logic to stored procedures for complex operations.
We did achieve a significant performance gain due to the fact that we did not have to send data back and forth between PHP and MySQL.
I do agree with the other posters here that PL/SQL is not a modern language and is difficult to debug.
Bottom Line: Stored Procedures are a great tool for certain situations. But I would not recommend using them unless you have a good reason. For simple applications, stored procedures are not worth the hassle.
A: This is a subjective question.
I would personally include all calculations within PHP and only really use MySQL as a table.
But, If you feel that it is easier to use stored procedures then by all means, go ahead and do it.
A: When using stored procedures with MySQL, you will often need to use the mysqli interface in PHP and not the regular mysql interface.
The reason for this is due to the fact that the stored procedures often will return more than 1 result set. If it does, the mysql API can not handle it and will you get errors.
The mysqli interface has functions to handling these multiple result sets, functions such as mysqli_more_results and mysqli_next_result.
Keep in mind that if you return any result set at all from the stored procedure, then you need to use these APIs, as the stored procedure generates 1 result set for the actual execution, and then 1 additional one for each result set intentionally returned from the stored procedure.
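A minimal sketch of consuming those multiple result sets with mysqli (the connection details and procedure name are placeholders):
$mysqli = new mysqli('localhost', 'user', 'password', 'db');
if ($mysqli->multi_query("CALL my_procedure()")) {
    do {
        // each SELECT in the procedure yields one result set,
        // followed by a final status result for the CALL itself
        if ($result = $mysqli->store_result()) {
            while ($row = $result->fetch_assoc()) {
                print_r($row);
            }
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}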
A: There's possibly a phobia of stored procedures with MySQL, partly due to them not being overwhelmingly powerful (compared to PostgreSQL and even MSSQL, MySQL's stored procedures are greatly lacking).
On the plus side: they make interfacing with the database from more than one language easier.
If somebody states that "using stored procedures is bad because it's not portable to different databases", then this of course means they think you're likely to switch databases, which in turn means they think you shouldn't be using MySQL.
It is popular to use ORMs these days, but I personally think ORM is a BadThing (Question: 82882).
A: I would not say "stored procedures are a no-go", I would say "Don't use them without a good reason".
MySQL stored procedures have a particularly horrible syntax (Oracle and MSSQL are pretty awful too), maintaining them just complicates your application.
Do use a stored procedure if you have a real (measurable) reason to do so, otherwise don't. That's my opinion anyway.
A: I think that using stored procedures can offer some abstraction in certain applications - anywhere you would use the same SQL code chunk to update or add the same data, you could create one sproc save_user($attr.....) rather than repeating yourself all over the place.
Agreed, the syntax is hairy, and if you're used to MSSQL and Oracle sprocs there are differences that can frustrate.
A: You should also be aware that stored procedures were not supported in MySQL before version 5.0. http://dev.mysql.com/doc/refman/5.0/en/stored-routines.html Also, stored procedures tended to be a bit weird in that implementation. Now that MySQL 5.1 is starting to crop up in the wild, I see more use of stored procedures with MySQL.
A: I make limited use of stored procedures, and it works well. I am the lead dev for one of my companies clients, working on their e-comm website. The client has a stock system, we implemented a set of stored procedures on their system and built an API to communicate with it. This allowed us to abstract their database and they could implement logic in the stored procedures. Simple but met the business requirement very well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Remove spaces from std::string in C++ What is the preferred way to remove spaces from a string in C++? I could loop through all the characters and build a new string, but is there a better way?
A: If you want to do this with an easy macro, here's one:
#define REMOVE_SPACES(x) x.erase(std::remove(x.begin(), x.end(), ' '), x.end())
This assumes you have done #include <string> and #include <algorithm> (for std::remove), of course.
Call it like so:
std::string sName = " Example Name ";
REMOVE_SPACES(sName);
printf("%s",sName.c_str()); // requires #include <stdio.h>
A: string replaceinString(std::string str, std::string tofind, std::string toreplace)
{
size_t position = 0;
while ((position = str.find(tofind, position)) != std::string::npos)
{
str.replace(position, tofind.length(), toreplace); // replace the whole needle, not just one character
position += toreplace.length(); // skip past the replacement to avoid re-matching it
}
return(str);
}
use it:
string replace = replaceinString(thisstring, " ", "%20");
string replace2 = replaceinString(thisstring, " ", "-");
string replace3 = replaceinString(thisstring, " ", "+");
A: In C++20 you can use free function std::erase
std::string str = " Hello World !";
std::erase(str, ' ');
Full example:
#include<string>
#include<iostream>
int main() {
std::string str = " Hello World !";
std::erase(str, ' ');
std::cout << "|" << str <<"|";
}
I print | so that it is obvious that space at the begining is also removed.
note: this removes only the space, not every other possible character that may be considered whitespace, see https://en.cppreference.com/w/cpp/string/byte/isspace
A: From gamedev
str.erase(std::remove_if(str.begin(), str.end(), ::isspace), str.end());
(Note: ::isspace from <cctype> is used here because std::isspace is overloaded, which can make it ambiguous as an algorithm predicate.)
A:
#include <algorithm>
using namespace std;
int main() {
.
.
s.erase( remove( s.begin(), s.end(), ' ' ), s.end() );
.
.
}
Source:
Reference taken from this forum.
A: Can you use Boost String Algo? http://www.boost.org/doc/libs/1_35_0/doc/html/string_algo/usage.html#id1290573
erase_all(str, " ");
A: The best thing to do is to use the algorithm remove_if and isspace:
remove_if(str.begin(), str.end(), isspace);
Now the algorithm itself can't change the container(only modify the values), so it actually shuffles the values around and returns a pointer to where the end now should be. So we have to call string::erase to actually modify the length of the container:
str.erase(remove_if(str.begin(), str.end(), isspace), str.end());
We should also note that remove_if will make at most one copy of the data. Here is a sample implementation:
template<typename T, typename P>
T remove_if(T beg, T end, P pred)
{
T dest = beg;
for (T itr = beg;itr != end; ++itr)
if (!pred(*itr))
*(dest++) = *itr;
return dest;
}
A: Removes all whitespace characters such as tabs and line breaks (C++11):
string str = " \n AB cd \t efg\v\n";
str = regex_replace(str,regex("\\s"),"");
A: You can use this solution for removing a char:
#include <algorithm>
#include <string>
using namespace std;
str.erase(remove(str.begin(), str.end(), char_to_remove), str.end());
A: I used the below work around for long - not sure about its complexity.
s.erase(std::unique(s.begin(),s.end(),[](char s,char f){return (f==' '||s==' ');}),s.end());
when you want to remove the ' ' character and some others (for example '-'), use
s.erase(std::unique(s.begin(),s.end(),[](char s,char f){return ((f==' '||s==' ')||(f=='-'||s=='-'));}),s.end());
likewise, just add more || clauses if the number of characters you want to remove is more than 1
but as mentioned by others, the erase-remove idiom also seems fine.
A: string removeSpaces(string word) {
string newWord;
for (int i = 0; i < word.length(); i++) {
if (word[i] != ' ') {
newWord += word[i];
}
}
return newWord;
}
This code basically takes a string and iterates through every character in it. It then checks whether that character is a space; if it isn't, the character is appended to a new string.
A: For trimming, use boost string algorithms:
#include <boost/algorithm/string.hpp>
using namespace std;
using namespace boost;
// ...
string str1(" hello world! ");
trim(str1); // str1 == "hello world!"
A: std::string::iterator end_pos = std::remove(str.begin(), str.end(), ' ');
str.erase(end_pos, str.end());
A: Hi, you can do something like this. This function deletes all spaces.
string delSpaces(string &str)
{
str.erase(std::remove(str.begin(), str.end(), ' '), str.end());
return str;
}
I made another function, that deletes all unnecessary spaces.
string delUnnecessary(string &str)
{
int size = str.length();
for(int j = 0; j<=size; j++)
{
for(int i = 0; i <=j; i++)
{
if(str[i] == ' ' && str[i+1] == ' ')
{
str.erase(str.begin() + i);
}
else if(str[0]== ' ')
{
str.erase(str.begin());
}
else if(str[i] == '\0' && str[i-1]== ' ')
{
str.erase(str.end() - 1);
}
}
}
return str;
}
A: Just for fun, as other answers are much better than this.
#include <boost/hana/functional/partial.hpp>
#include <iostream>
#include <range/v3/range/conversion.hpp>
#include <range/v3/view/filter.hpp>
int main() {
using ranges::to;
using ranges::views::filter;
using boost::hana::partial;
auto const& not_space = partial(std::not_equal_to<>{}, ' ');
auto const& to_string = to<std::string>;
std::string input = "2C F4 32 3C B9 DE";
std::string output = input | filter(not_space) | to_string;
assert(output == "2CF4323CB9DE");
}
A: I created a function that removes the whitespace from either end of a string. For example, " Hello World " will be converted into "Hello World".
This works similarly to the strip, lstrip and rstrip functions frequently used in Python.
string strip(string str) {
while (str[str.length() - 1] == ' ') {
str = str.substr(0, str.length() - 1);
}
while (str[0] == ' ') {
str = str.substr(1, str.length() - 1);
}
return str;
}
string lstrip(string str) {
while (str[0] == ' ') {
str = str.substr(1, str.length() - 1);
}
return str;
}
string rstrip(string str) {
while (str[str.length() - 1] == ' ') {
str = str.substr(0, str.length() - 1);
}
return str;
}
A: I'm afraid it's the best solution that I can think of. But you can use reserve() to pre-allocate the minimum required memory in advance to speed things up a bit. You'll end up with a new string that will probably be shorter but takes up the same amount of memory, and you'll avoid reallocations.
EDIT: Depending on your situation, this may incur less overhead than jumbling characters around.
You should try different approaches and see what is best for you: you might not have any performance issues at all.
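A minimal sketch of that approach (the function name is just for illustration):
#include <algorithm>
#include <iterator>
#include <string>

std::string withoutSpaces(const std::string& str)
{
    std::string result;
    result.reserve(str.size()); // pre-allocate so no reallocation happens while copying
    std::copy_if(str.begin(), str.end(), std::back_inserter(result),
                 [](char c) { return c != ' '; });
    return result;
}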
A: string removespace(string str)
{
size_t i = 0;
while (i < str.length())
{
if (str[i] == ' ')
str.erase(i, 1); // stay at the same index; the next character has shifted into it
else
i++;
}
return str;
}
A: string str = "2C F4 32 3C B9 DE";
str.erase(remove(str.begin(),str.end(),' '),str.end());
cout << str << endl;
output: 2CF4323CB9DE
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "261"
}
|
Q: How to Update to Revision using Subclipse SVN plugin? In subclipse, the Team > Update menu option performs an "svn update -r HEAD".
I want to run "svn update -r [revision number]" but can't find a menu option which will let me update to anything besides the HEAD revision.
A: Subclipse used to prompt but users complained. We did not want to add two update options. The easiest way to do it is just Team > Switch and do not change the URL. Switch and update are the same code paths within Subversion. If you do not change the URL it is just behaving like update and the Switch dialog exposes all the options available.
A: It is the "Replace With" menu option. It is not under "Team", but on the same level.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: String list in SqlCommand through Parameters in C# Working with a SqlCommand in C# I've created a query that contains an IN (list...) part in the where clause. Instead of looping through my string list to generate the list I need for the query (dangerous if you think about SQL injection), I thought that I could create a parameter like:
SELECT blahblahblah WHERE blahblahblah IN @LISTOFWORDS
Then in the code I try to add a parameter like this:
DataTable dt = new DataTable();
dt.Columns.Add("word", typeof(string));
foreach (String word in listOfWords)
{
dt.Rows.Add(word);
}
comm.Parameters.Add("LISTOFWORDS", System.Data.SqlDbType.Structured).Value = dt;
But this doesn't work.
Questions:
*
*Am I trying something impossible?
*Did I took the wrong approach?
*Do I have mistakes in this approach?
Thanks for your time :)
A: What you are trying to do is possible but not with your current approach. This is a very common problem, and all possible solutions prior to SQL Server 2008 have trade-offs related to performance, security and memory usage.
This link shows some approaches for SQL Server 2000/2005
SQL Server 2008 supports passing a table value parameter.
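For the 2008 route, a sketch of the missing pieces (the type name dbo.WordList is made up; the DataTable from the question stays as-is). First declare a table type once on the server:
CREATE TYPE dbo.WordList AS TABLE (word nvarchar(100));
Then, on the client, tell ADO.NET which type the structured parameter maps to and reference it in a subquery:
SqlParameter p = comm.Parameters.Add("@LISTOFWORDS", SqlDbType.Structured);
p.TypeName = "dbo.WordList";
p.Value = dt;
comm.CommandText = "SELECT blahblahblah WHERE blahblahblah IN (SELECT word FROM @LISTOFWORDS)";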
I hope this helps.
A: You want to think about where that list comes from. Generally that information is in the database somewhere. For example, instead of this:
SELECT * FROM [Table] WHERE ID IN (1,2,3)
You could use a subquery like this:
SELECT * FROM [Table] WHERE ID IN ( SELECT TableID FROM [OtherTable] WHERE OtherTableID= @OtherTableID )
A: If I understand right, you're trying to pass a list as a SQL parameter.
Some folks have attempted this before with limited success:
Passing Arrays to Stored Procedures
Arrays and Lists in SQL 2005
Passing Array of Values to SQL Server without String Manipulation
Using MS SQL 2005's XML capabilities to pass a list of values to a command
A: *
*Am I trying something impossible?
No, it isn't impossible.
*
*Did I took the wrong approach?
Your approach is not working (at least in .net 2)
*
*Do I have mistakes in this approach?
I would try Joel Coehoorn's solution (the 2nd answer) if it is possible.
Otherwise, another option is to send a "string" parameter with all values delimited by a separator, write a dynamic query (built from the values in the string) and execute it using "exec".
Another solution would be to build the query directly from code, something like this:
StringBuilder sb = new StringBuilder();
for (int i=0; i< listOfWords.Count; i++)
{
sb.AppendFormat("@p{0},",i);
comm.Parameters.AddWithValue("p"+i.ToString(), listOfWords[i]);
}
comm.CommandText = string.Format("SELECT blahblahblah WHERE blahblahblah IN ({0})",
sb.ToString().TrimEnd(','));
The command should look like:
SELECT blah WHERE blah IN (@p0,@p1,@p2,@p3...) with @p0='aaa', @p1='bbb', ...
In MSSQL 2005, "IN" works with only 256 values.
A: I would recommend setting the parameter as a comma-delimited string of values and using a Split function in SQL to turn that into a single-column table of values; then you can use the IN feature.
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=50648 - Split Functions
A: If you want to pass the list as a string in a parameter, you could just build the query dynamically.
DECLARE @query varchar(500)
SET @query = 'SELECT blah blah WHERE blahblah in (' + @list + ')'
EXECUTE(@query)
A: I used to have the same problem; I think there is no way to do this directly through the ADO.NET API.
You might consider inserting the words into a temp table (plus a query id or something) and then referring to that temp table from the query. Or dynamically create the query string and avoid SQL injection by other measures (e.g. regex checks).
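A minimal sketch of that temp-table variant (all names below are made up for illustration):
-- Fill a temp table once per query, then reference it from the main query.
CREATE TABLE #Words (QueryId int, Word nvarchar(100));
INSERT INTO #Words (QueryId, Word) VALUES (1, N'aaa');
INSERT INTO #Words (QueryId, Word) VALUES (1, N'bbb');

SELECT t.*
FROM SomeTable t
WHERE t.SomeColumn IN (SELECT Word FROM #Words WHERE QueryId = 1);

DROP TABLE #Words;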
A: This is an old question but I've come up with an elegant solution for this that I love to reuse and I think everyone else will find it useful.
First of all you need to create a FUNCTION in SqlServer that takes a delimited input and returns a table with the items split into records.
Here is the following code for this:
ALTER FUNCTION [dbo].[Split]
(
@RowData nvarchar(max),
@SplitOn nvarchar(5) = ','
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Data nvarchar(100)
)
AS
BEGIN
Declare @Cnt int
Set @Cnt = 1
While (Charindex(@SplitOn,@RowData)>0)
Begin
Insert Into @RtnValue (data)
Select
Data = ltrim(rtrim(Substring(@RowData,1,Charindex(@SplitOn,@RowData)-1)))
Set @RowData = Substring(@RowData,Charindex(@SplitOn,@RowData)+1,len(@RowData))
Set @Cnt = @Cnt + 1
End
Insert Into @RtnValue (data)
Select Data = ltrim(rtrim(@RowData))
Return
END
You can now do something like this:
Select Id, Data from dbo.Split('123,234,345,456',',')
And fear not: since the list travels as a single parameter, this isn't susceptible to SQL injection attacks.
Next write a stored procedure that takes your comma delimited data and then you can write a sql statement that uses this Split function:
CREATE PROCEDURE [dbo].[findDuplicates]
@ids nvarchar(max)
as
begin
select ID
from SomeTable with (nolock)
where ID in (select Data from dbo.Split(@ids,','))
end
Now you can write a C# wrapper around it:
public void SomeFunction(List<int> ids)
{
// string.Join over ids.Select requires a using directive for System.Linq
var idsAsDelimitedString = string.Join(",", ids.Select(id => id.ToString()).ToArray());
// ... or however you make your connection
var con = GetConnection();
try
{
con.Open();
var cmd = new SqlCommand("findDuplicates", con);
// without this, the command text is treated as an ad-hoc query rather than a proc name
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new SqlParameter("@ids", idsAsDelimitedString));
var reader = cmd.ExecuteReader();
// .... do something here.
}
catch (Exception)
{
// handle (or at least log) the exception rather than silently swallowing it
}
finally
{
con.Close();
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Deleting Rows from a SQL Table marked for Replication I erroneously deleted all the rows from a MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients but the DTS operation bypasses the replication triggers so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought "no worries" I will just delete the rows again and then add them correctly via an insert statement and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
A: Have you tried truncating the table?
A: You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.
A: You also could look into temporarily dropping the unique index and adding it back when you're done.
A: Look into sp_mergedummyupdate
A: Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys...and it should all consist of SQL statements that are allowed to trickle down the replication. It just isn't probably the best on performance...and definitely would impose some risk.
I haven't tried this first hand in a replicated environment...but it may be at least worth trying out.
A: Thanks for the tips...I eventually found a solution:
I deleted the merge delete trigger from the table
Deleted the DTSed rows
Recreated the merge delete trigger
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
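For reference, the same pattern sketched in T-SQL using DISABLE/ENABLE TRIGGER instead of dropping and recreating (all object names below are hypothetical; actual merge trigger names vary per publication):
ALTER TABLE MyTable DISABLE TRIGGER MSmerge_del_MyTable;  -- hypothetical trigger name
DELETE FROM MyTable;  -- remove the DTSed rows; the delete is not marked for replication
ALTER TABLE MyTable ENABLE TRIGGER MSmerge_del_MyTable;
INSERT INTO MyTable (Col1, Col2)  -- fires the insert trigger, marking rows for the subscribers
SELECT Col1, Col2 FROM BackupDB.dbo.MyTable;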
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Google Authentication API: How to get the user's gmail address I've been studying the Google authentication API (AuthSub)... My question is, how do I get the user's account information (at least their Gmail address) after the authentication has passed?
Because currently, all I get back from the authentication process is a token granting me access to which ever Google service I have specified in the scope, but there's no easy way to even get the user's login id (Gmail address) as far as I can tell...
If so, what Google service allows me to access the user's information?
A: The Google Authentication API is a token-based system for authenticating a valid user. It does not expose any other interface that returns account-holder information to the authorizing application.
A: Using the Google AppEngine GData services, you can request the user to give you access to their Google Mail, Calendar, Picasa, etc. Check it out here.
A: You can get some of the data through the OpenID API, with the ax extension. If you are authenticating with other methods, best I found is calling https://www-opensocial.googleusercontent.com/api/people/@me/@self and it will get you name, email and picture. Be sure to have http://www-opensocial.googleusercontent.com/api in scopes when authenticating.
A: [ValidateInput(false)]
public ActionResult Authenticate(string returnUrl)
{
try
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method start ");
var response = openid.GetResponse();
if (response == null)
{
try
{
string discoveryuri = "https://www.google.com/accounts/o8/id";
//OpenIdRelyingParty openid = new OpenIdRelyingParty();
var fetch = new FetchRequest();// new
var b = new UriBuilder(Request.Url) { Query = "" };
var req = openid.CreateRequest(discoveryuri, b.Uri, b.Uri);
fetch.Attributes.AddRequired(WellKnownAttributes.Contact.Email);
fetch.Attributes.AddRequired(WellKnownAttributes.Name.FullName);
req.AddExtension(fetch);
return req.RedirectingResponse.AsActionResult();
}
catch (ProtocolException ex)
{
logger.ErrorFormat(" LoginController : Authenticate method has error, Exception:" + ex.ToString());
ViewData["Message"] = ex.Message;
return View("Login");
}
}
else
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method :when responce not null ");
switch (response.Status)
{
case AuthenticationStatus.Authenticated:
logger.Info("" + response.Status + "] LoginController : Authenticate method : responce status ");
var fetchResponse = response.GetExtension<FetchResponse>();
string email = fetchResponse.GetAttributeValue(WellKnownAttributes.Contact.Email);
string userIPAddress = HttpContext.Request.UserHostAddress;
SecurityManager manager = new SecurityManager();
int userID = manager.IsValidUser(email);
if (userID != 0)
{
ViewBag.IsFailed = "False";
logger.Info("" + userID + "] LoginController : Authenticate method : user id id not null ");
Session["FriendlyIdentifier"] = response.FriendlyIdentifierForDisplay;
Session["UserEmail"] = email;
FormsAuthentication.SetAuthCookie(email, false);
WebSession.UserEmail = email;
WebSession.UserID = userID;
UserManager userManager = new UserManager();
WebSession.AssignedSites = userManager.GetAssignedSites(userID);
if (!string.IsNullOrEmpty(returnUrl))
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method : retutn url not null then return Redirect ");
return Redirect(returnUrl);
}
else
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method : retutn url null then return RedirectToAction ");
//
return Redirect("/Home");
}
}
else
{
ViewBag.IsFailed = "True";
logger.Info("" + returnUrl + "] LoginController : Authenticate method :user id null ");
if (!string.IsNullOrEmpty(returnUrl))
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method :and return Redirect ");
return Redirect(returnUrl);
}
else
{
logger.Info("" + returnUrl + "] LoginController : Authenticate method :and return RedirectToAction ");
return View("Index");
}
}
case AuthenticationStatus.Canceled:
logger.Info("" + response.Status + "] LoginController : Authenticate method : AuthenticationStatus.Canceled and return view ");
ViewData["Message"] = "Canceled at provider";
return View("Login");
case AuthenticationStatus.Failed:
logger.Info("" + response.Status + "] LoginController : Authenticate method : AuthenticationStatus.Failed and return view ");
logger.Error(response.Exception.Message);
ViewData["Message"] = response.Exception.Message;
return View("Login");
}
}
logger.Info("" + returnUrl + "] LoginController : Authenticate method end and return EmptyResult");
return new EmptyResult();
}
catch (Exception ex)
{
logger.Error(" LoginController : Authenticate method ", ex);
throw;
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: C++ class design from database schema I am writing a Perl script to parse a MySQL database schema and create C++ classes when necessary. My question is a pretty easy one, but is something I haven't really done before and I don't know the common practice. Any object of any of the classes created will need "get" methods to populate this information. So my questions are twofold:
*
*Does it make sense to call all of the get methods in the constructor so that the object has data right away? Some classes will have a lot of them, so calling them as needed might make sense too. I have two constructors now: one that populates the data and one that does not.
*Should I also have another "get" method that retrieves the object's copy of the data rather than the db copy?
I could go both ways on #1 and am leaning towards yes on #2. Any advice, pointers would be much appreciated.
A: Usually, the most costly part of an application is round trips to the database, so it would be much more efficient to populate all your data members from a single query than to do them one at a time, either on an as-needed basis or from your constructor. Once you've paid for the round trip, you may as well get your money's worth.
Also, in general, your get* methods should be declared as const, meaning they don't change the underlying object, so having them go out to the database to populate the object would break that (which you could allow by making the member variables mutable, but that would basically defeat the purpose of const).
To break things down into concrete steps, I would recommend:
*
*Have your constructor call a separate init() method that queries the database and populates your object's data members.
*Declare your get* methods as const, and just have them return the data members.
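A minimal C++ sketch of that shape (the entity and its members are hypothetical):
#include <string>

class Employee {
public:
    explicit Employee(int id) { init(id); }

    // const getters: they only return cached members and never touch the database
    const std::string& name() const { return name_; }
    int age() const { return age_; }

private:
    void init(int id);  // runs a single SELECT and populates all data members

    std::string name_;
    int age_ = 0;
};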
A: First realize that you're re-inventing the wheel here. There are a number of decent object-relational mapping libraries for database access in just about every language. For C/C++ you might look at:
http://trac.butterfat.net/public/StactiveRecord
http://debea.net/trac
Ok, with that out of the way, you probably want to create a static method in your class called find or search which is a factory for constructing objects and selecting them from the database:
Artist MJ = Artist::Find("Michael Jackson");
MJ->set("relevant", "no");
MJ->save();
Note the save method which then takes the modified object and stores it back into the database. If you actually want to create a new record, then you'd use the new method which would instantiate an empty object:
Artist StackOverflow = Artist->new();
StackOverflow->set("relevant", "yes");
StackOverflow->save();
Note the set and get methods here just set and get the values from the object, not the database. To actually store elements in the database you'd need to use the static Find method or the object's save method.
A: There are existing tools that reverse-engineer databases into Java classes (and probably other languages). Consider using one of them and converting the output to C++.
A: I would not recommend having your get methods go to the database at all, unless absolutely necessary for your particular problem. It makes for a lot more places something could go wrong, and probably a lot of unnecessary reads on your DB, and could inadvertently tie your objects to db-specific features, losing a lot of the benefits of a tiered architecture. As far as your domain model is concerned, the database does not exist.
edit - this is for #2 (obviously). For #1 I would say no, for many of the same reasons.
A: Another alternative would be to not automate creating the classes, and instead create separate classes that only contain the data members that individual executables are interested in, so that those classes only pull the necessary data.
Don't know how many tables we're talking about, though, so that may explode the scope of your project.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: SQL Select Upcoming Birthdays I'm trying to write a stored procedure to select employees who have birthdays that are upcoming.
SELECT * FROM Employees WHERE Birthday > @Today AND Birthday < @Today + @NumDays
This will not work because the birth year is part of Birthday, so if my birthday was '09-18-1983' that will not fall between '09-18-2008' and '09-25-2008'.
Is there a way to ignore the year portion of date fields and just compare month/days?
This will be run every monday morning to alert managers of birthdays upcoming, so it possibly will span new years.
Here is the working solution that I ended up creating, thanks Kogus.
SELECT * FROM Employees
WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int)
- Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int)
<> 0
A: I liked the approach of @strelc, but his SQL was a bit off. Here's an updated version that works well and is simple to use:
SELECT * FROM User
WHERE (DATEDIFF(dd, getdate(), DATEADD(yyyy,
DATEDIFF(yyyy, birthdate, getdate()) + 1, birthdate)) + 1) % 366 <= <number of days>
edit 10/2017: add single day to end
A: Note: I've edited this to fix what I believe was a significant bug. The currently posted version works for me.
This should work after you modify the field and table names to correspond to your database.
SELECT
BRTHDATE AS BIRTHDAY
,FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()) / 365.25) AS AGE_NOW
,FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()+7) / 365.25) AS AGE_ONE_WEEK_FROM_NOW
FROM
"Database name".dbo.EMPLOYEES EMP
WHERE 1 = (FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()+7) / 365.25))
-
(FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()) / 365.25))
Basically, it gets the # of days from their birthday to now, and divides that by 365.25 (to avoid rounding issues that come up when you convert directly to years).
Then it gets the # of days from their birthday to a week from now, and divides that by 365.25 to get their age a week from now.
If their birthday is within a week, then the difference between those two values will be 1. So it returns all of those records.
A: You could use the DAYOFYEAR function but be careful when you want to look for January birthdays in December. I think you'll be fine as long as the date range you're looking for doesn't span the New Year.
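For illustration, a sketch of that idea using T-SQL's DATEPART(dayofyear, ...) equivalent (column and table names assumed); you can see how it breaks once the window crosses January 1:
SELECT * FROM Employees
WHERE DATEPART(dayofyear, Birthday)
      BETWEEN DATEPART(dayofyear, GETDATE())
          AND DATEPART(dayofyear, GETDATE()) + 7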
A: Sorry didn't see the requirement to neutralize the year.
select * from Employees
where DATEADD (year, DatePart(year, getdate()) - DatePart(year, Birthday), Birthday)
between convert(datetime, getdate(), 101)
and convert(datetime, DateAdd(day, 5, getdate()), 101)
This should work.
A: My guess is that using "365.25" will sooner or later fail.
So I tested the working solution that uses "365.25", and it does not return the same number of rows in every case.
Here is an example:
http://sqlfiddle.com/#!3/94c3ce/7
Test with the years 2016 and 2116 and you will see the difference. I can only post one link, but change the /7 to /8 to see both queries (/10 and /11 for the first answer).
So I suggest this other query, where the point is to determine the next birthday from a starting date and then check whether it falls in the range of interest.
SELECT * FROM Employees
WHERE
CASE WHEN (DATEADD(yyyy,DATEDIFF(yyyy, birthdt, @fromDate),birthdt) < @fromDate )
THEN DATEADD(yyyy,DATEDIFF(yyyy, birthdt, @fromDate)+1,birthdt)
ELSE DATEADD(yyyy,DATEDIFF(yyyy, birthdt, @fromDate),birthdt) END
BETWEEN @fromDate AND @toDate
A: In case someone is still looking for a solution in MySQL (slightly different commands), here's the query:
SELECT
name,birthday,
FLOOR(DATEDIFF(DATE(NOW()),birthday) / 365.25) AS age_now,
FLOOR(DATEDIFF(DATE_ADD(DATE(NOW()),INTERVAL 30 DAY),birthday) / 365.25) AS age_future
FROM user
WHERE 1 = (FLOOR(DATEDIFF(DATE_ADD(DATE(NOW()),INTERVAL 30 DAY),birthday) / 365.25)) - (FLOOR(DATEDIFF(DATE(NOW()),birthday) / 365.25))
ORDER BY MONTH(birthday),DAY(birthday)
A: Best use of DATEDIFF and DATEADD: no rounding, no approximations, no February 29th bug, nothing but date functions (the three expressions are composed into a single query in the sketch after this list).
*
*ageOfThePerson = DATEDIFF(yyyy,dateOfBirth, GETDATE())
*dateOfNextBirthday = DATEADD(yyyy,ageOfThePerson + 1, dateOfBirth)
*daysBeforeBirthday = DATEDIFF(d,GETDATE(), dateofNextBirthday)
Thanks to @Gustavo Cardoso for a new definition for the age of the person:
*
*ageOfThePerson = FLOOR(DATEDIFF(d,dateOfBirth, GETDATE())/365.25)
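Composed into a single query, the sketch looks like this (@NumDays is assumed to be the size of the window):
SELECT *
FROM Employees
WHERE DATEDIFF(d, GETDATE(),
        DATEADD(yyyy, DATEDIFF(yyyy, Birthday, GETDATE()) + 1, Birthday)
      ) <= @NumDays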
A: This is a solution for MS SQL Server:
It returns employees with birthdays in the next 30 days.
SELECT * FROM rojstni_dnevi
WHERE (DATEDIFF (dd,
getdate(),
DATEADD ( yyyy,
DATEDIFF(yyyy, rDan, getdate()),
rDan))
+365) % 365 < 30
A: I found the solution for this. This may save someone's precious time.
select EmployeeID,DOB,dates.date from emp_tb_eob_employeepersonal
cross join dbo.GetDays(Getdate(),Getdate()+7) as dates where weekofmonthnumber>0
and month(dates.date)=month(DOB) and day(dates.date)=day(DOB)
GO
/****** Object: UserDefinedFunction [dbo].[GetDays] Script Date: 11/30/2011 13:19:17 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
--SELECT [dbo].[GetDays] ('02/01/2011','02/28/2011')
ALTER FUNCTION [dbo].[GetDays](@startDate datetime, @endDate datetime)
RETURNS @retValue TABLE
(Days int ,Date datetime, WeekOfMonthNumber int, WeekOfMonthDescription varchar(10), DayName varchar(10))
AS
BEGIN
DECLARE @nextDay int
DECLARE @nextDate datetime
DECLARE @WeekOfMonthNum int
DECLARE @WeekOfMonthDes varchar(10)
DECLARE @DayName varchar(10)
SELECT @nextDate = @startDate, @WeekOfMonthNum = DATEDIFF(week, DATEADD(MONTH, DATEDIFF(MONTH,0,@startDate),0),@startDate) + 1,
@WeekOfMonthDes = CASE @WeekOfMonthNum
WHEN '1' THEN 'First'
WHEN '2' THEN 'Second'
WHEN '3' THEN 'Third'
WHEN '4' THEN 'Fourth'
WHEN '5' THEN 'Fifth'
WHEN '6' THEN 'Sixth'
END,
@DayName
= DATENAME(weekday, @startDate)
SET @nextDay=1
WHILE @nextDate <= @endDate
BEGIN
INSERT INTO @retValue values (@nextDay,@nextDate, @WeekOfMonthNum, @WeekOfMonthDes, @DayName)
SELECT @nextDay=@nextDay + 1
SELECT @nextDate = DATEADD(day,1,@nextDate),
@WeekOfMonthNum
= DATEDIFF(week, DATEADD(MONTH, DATEDIFF(MONTH,0, @nextDate),0), @nextDate) + 1,
@WeekOfMonthDes
= CASE @WeekOfMonthNum
WHEN '1' THEN 'First'
WHEN '2' THEN 'Second'
WHEN '3' THEN 'Third'
WHEN '4' THEN 'Fourth'
WHEN '5' THEN 'Fifth'
WHEN '6' THEN 'Sixth'
END,
@DayName
= DATENAME(weekday, @nextDate)
CONTINUE
END
WHILE(@nextDay <=31)
BEGIN
INSERT INTO @retValue values (@nextDay,@nextDate, 0, '', '')
SELECT @nextDay=@nextDay + 1
END
RETURN
END
Make a cross join with the dates and check for the comparison of month and dates.
A: In less than a month:
SELECT * FROM people WHERE MOD( DATEDIFF( CURDATE( ) , `date_birth`) /30, 12 ) <1 and (((month(`date_birth`)) = (month(curdate())) and (day(`date_birth`)) > (day (curdate() ))) or ((month(`date_birth`)) > (month(curdate())) and (day(`date_birth`)) < (day (curdate() ))))
A: You could use DATE_FORMAT to extract the day and month parts of the birthday dates.
EDIT: sorry, I didn't see that he wasn't using MySQL.
A: Assuming this is T-SQL, use DATEPART to compare the month and date separately.
http://msdn.microsoft.com/en-us/library/ms174420.aspx
Alternatively, subtract January 1st of the current year from everyone's birthday, and then compare using the year 1900 (or whatever your epoch year is).
A: Most of these solutions are close, but you have to remember a few extra scenarios. When working with birthdays and a sliding scale, you must be able to handle the transition into the next month.
For example, Stephen's example works great for birthdays up until the last 4 days of the month. Then you have a logic fault: if today were the 29th, the valid dates would be the 29th and 30th, AND then the 1st, 2nd and 3rd of the NEXT month, so you have to handle that condition as well.
An alternative would be to parse the date from the birthday field, and sub in the current year, then do a standard range comparison.
A: Another thought: add their age in whole years to their birthday (plus one more year if their birthday hasn't happened yet this year), and then compare as you do above. Use DATEPART and DATEADD to do this.
http://msdn.microsoft.com/en-us/library/ms186819.aspx
The edge case of a range spanning the year would have to have special code.
Bonus tip: consider using BETWEEN...AND instead of repeating the Birthday operand.
A: This should work...
DECLARE @endDate DATETIME
DECLARE @today DATETIME
SELECT @endDate = getDate()+6, @today = getDate()
SELECT * FROM Employees
WHERE
(DATEPART (month, birthday) >= DATEPART (month, @today)
AND DATEPART (day, birthday) >= DATEPART (day, @today))
AND
(DATEPART (month, birthday) < DATEPART (month, @endDate)
AND DATEPART (day, birthday) < DATEPART (day, @endDate))
A: I faced the same problem with my college project a few years ago. I responded (in a rather weasel way) by splitting the year and the date (MM:DD) into two separate columns. And before that, my project mate was simply getting all the dates and programmatically going through them. We changed that because it was too inefficient - not that my solution was any more elegant. Also, it's probably not possible to do in a database that has been in use for a while by multiple apps.
A: Give this a try:
SELECT * FROM Employees
WHERE DATEADD(yyyy, DATEPART(yyyy, @Today)-DATEPART(yyyy, Birthday), Birthday) > @Today
AND DATEADD(yyyy, DATEPART(yyyy, @Today)-DATEPART(yyyy, Birthday), Birthday) < DATEADD(dd, @NumDays, @Today)
A: Nuts! A good solution got posted between when I started thinking about this and when I came back to answer. :)
I came up with:
select (365 + datediff(d,getdate(),cast(cast(datepart(yy,getdate()) as varchar(4)) + '-' + cast(datepart(m,birthdt) as varchar(2)) + '-' + cast(datepart(d,birthdt) as varchar(2)) as datetime))) % 365
from employees
where (365 + datediff(d,getdate(),cast(cast(datepart(yy,getdate()) as varchar(4)) + '-' + cast(datepart(m,birthdt) as varchar(2)) + '-' + cast(datepart(d,birthdt) as varchar(2)) as datetime))) % 365 < @NumDays
You don't need to cast getdate() as a datetime, right?
A: Upcoming Birthday for the Employee - Sqlserver
DECLARE @sam TABLE
(
EmployeeIDs int,
dob datetime
)
INSERT INTO @sam (dob, EmployeeIDs)
SELECT DOBirth, EmployeeID FROM Employee
SELECT *
FROM
(
SELECT *, bd_this_year = DATEADD(YEAR, DATEPART(YEAR, GETDATE()) - DATEPART(YEAR, dob), dob)
FROM @sam s
) d
WHERE d.bd_this_year > DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)
AND d.bd_this_year <= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 3)
A: I hope this helps you in some way...
select Employeename,DOB
from Employeemaster
where day(Dob)>day(getdate()) and month(DOB)>=month(getDate())
A: This is a combination of a couple of the answers, and it has been tested. It will find the next birthday after a certain date and the age the person will be. @NumDays limits the range you are looking at (7 days = a week, etc.).
SELECT DISTINCT FLOOR(DATEDIFF(dd,Birthday, @BeginDate) / 365.25) + 1 age,
DATEADD(yyyy, FLOOR(DATEDIFF(dd,Birthday, @BeginDate) / 365.25) + 1, Birthday) nextbirthday, birthday
FROM table
WHERE DATEADD(yyyy, FLOOR(DATEDIFF(dd,Birthday, @BeginDate) / 365.25) + 1, Birthday) > @BeginDate
AND DATEADD(yyyy, FLOOR(DATEDIFF(dd,Birthday, @BeginDate) / 365.25) + 1, Birthday) < DATEADD(dd, @NumDays, @BeginDate)
order by nextbirthday
A: The best way to achieve the same result is:
DECLARE @StartDate DATETIME
DECLARE @EndDate DATETIME
SELECT Member.* from vwMember AS Member
WHERE (DATEADD(YEAR, (DATEPART(YEAR, @StartDate) -
DATEPART(YEAR, Member.dBirthDay)), Member.dBirthDay)
BETWEEN @StartDate AND @EndDate)
A: I used this for MySQL, probably not the most efficient way to query but simple enough to implement.
select * from `schema`.`table` where date_format(birthday,'%m%d') >= date_format(now(),'%m%d') and date_format(birthday,'%m%d') < date_format(DATE_ADD(NOW(), INTERVAL 5 DAY),'%m%d');
A: I believe this ticket was closed ages ago, but for the benefit of getting a correct SQL query, please have a look.
SELECT Employee_Name, DATE_OF_BIRTH
FROM Hr_table
WHERE
/**
fetch the original birth_date and replace the birth year with the current one; we deduct 7 days so that Jan 1-7 birthdates are handled.
**/
datediff(d,getdate(),DATEADD(year,datediff(year,DATEADD(d,-7,hr.DATE_OF_BIRTH),getdate()),hr.date_of_birth)) between 0 and 7
-- current date looks ahead to 7 days for upcoming modified year birth date.
order by
-- sort by no of days before the birthday
datediff(d,getdate(),DATEADD(year,datediff(year,DATEADD(d,-7,hr.DATE_OF_BIRTH),getdate()),hr.date_of_birth))
A: A better and easier solution:
select * from users with(nolock)
where date_of_birth is not null
and
(
DATEDIFF(dd,
DATEADD(yy, -(YEAR(GETDATE())-1900),GETDATE()), --Today
DATEADD(yy, -(YEAR(date_of_birth)-1901),date_of_birth)
) % 365
) = 30
A: This solution also takes care for birthdays in the next year and the ordering:
(dob = date of birth; bty = birthday this year; nbd = next birthday)
with rs (bty) as (
SELECT DATEADD(Year, DATEPART(Year, GETDATE()) - DATEPART(Year, dob), dob) as bty FROM Employees
),
rs2 (nbd) as (
select case when bty < getdate() then DATEADD(yyyy, 1, bty) else bty end as nbd from rs
)
select nbd, DATEDIFF(d, getdate(), nbd) as diff from rs2 where DATEDIFF(d, getdate(), nbd) < 14 order by diff
This version, which avoids comparison of the dates, could be faster:
with rs (dob, bty) as (
SELECT dob, DATEADD(Year, DATEPART(Year, GETDATE()) - DATEPART(Year, DOB), DOB) as bty FROM employee
),
rs2 (dob, nbd) as (
select dob, DATEADD(yyyy, FLOOR(ABS((-1*(SIGN(DATEDIFF(d, getdate(), bty))))+0.1)), bty) as nbd from rs
),
rs3 (dob, diff) as (
select dob, datediff(d, getdate(), nbd) as diff from rs2
)
select dob, diff from rs3 where diff < 14 order by diff
If the range covers the 29 of February in the next year, then use:
with rs (dob, ydiff) as (
select dob, DATEPART(Year, GETDATE()) - DATEPART(Year, DOB) as ydiff from Employee
),
rs2 (dob, bty, ydiff) as (
select dob, DATEADD(Year, ydiff, dob) as bty, ydiff from rs
),
rs3 (dob, nbd) as (
select dob, DATEADD(yyyy, FLOOR(ABS((-1*(SIGN(DATEDIFF(d, getdate(), bty))))+0.1)) + ydiff, dob) as nbd from rs2
),
rs4 (dob, ddiff, nbd) as (
select dob, datediff(d, getdate(), nbd) as diff, nbd from rs3
)
select dob, nbd, ddiff from rs4 where ddiff < 68 order by ddiff
A: You can also use DATEPART:
-- To find out Today's Birthday
DECLARE @today DATETIME
SELECT @today = getdate()
SELECT *
FROM SMIS_Registration
WHERE (DATEPART (month, DOB) >= DATEPART (month, @today)
AND DATEPART (day, DOB) = DATEPART (day, @today))
A: select BirthDate,Name from Employees
order by Case
WHEN convert(nvarchar(5),BirthDate,101) > convert(nvarchar(5),GETDATE(),101) then 2
WHEN convert(nvarchar(5),BirthDate,101) < convert(nvarchar(5),GETDATE(),101) then 3
WHEN convert(nvarchar(5),BirthDate,101) = convert(nvarchar(5),GETDATE(),101) then 1 else 4 end ,convert(nvarchar(2),BirthDate,101),convert(nvarchar(2),BirthDate,105)
A: The query below returns every employee's next birthday; it is the shortest query.
SELECT
Employee.DOB,
DATEADD(
mm,
(
(
(
(
DATEPART(yyyy, getdate())-DATEPART(yyyy, Employee.DOB )
)
+
(
1-
(
((DATEPART(mm, Employee.DOB)*100)+DATEPART(dd, Employee.DOB))
/
((DATEPART(mm, getdate())*100) + DATEPART(dd, getdate()))
)
)
)
*12
)
),
Employee.DOB
) NextDOB
FROM
Employee
ORDER BY
NextDOB ;
The query above covers all upcoming birthdays in the coming months, excluding the current date.
A: Solution for SQLite3:
SELECT
*,
strftime('%j', birthday) - strftime('%j', 'now') AS days_remaining
FROM
person
WHERE :n_days >= CASE
WHEN days_remaining >= 0 THEN days_remaining
ELSE days_remaining + strftime('%j', strftime('%Y-12-31', 'now'))
END
;
The solutions dividing by 365.25 to get the age, or bringing the birthdate to the current year, etc., didn't work for me.
What this does is compute the delta of the two days-of-the-year (1-366). If the birthday hasn't happened yet this year, you automatically get the correct number of remaining days, which you can compare against.
If the birthday already happened, days_remaining will be negative, and you get the correct number of remaining days by adding the total number of days in the current year. This also correctly handles leap years, since in that case the extra day is added as well (by using dayOfYear(Dec 31)).
A: You can use this query for today's birthdays:
select *
from tableName
where DAY(convert(date,GETDATE(),105))=DAY(convert(date,DOB,105))
and month(convert(date,GETDATE(),105))=month(convert(date,DOB,105))
A: Here we compute both this year's birthday and next year's, and compare to see whether this year's has already passed:
CREATE FUNCTION dbo.FN_NEXT_BIRTHDAY(@BIRTHDAY DATE)
RETURNS DATE
AS
BEGIN
DECLARE @ACTUAL DATE = DATEADD(YEAR, DATEDIFF(YEAR, @BIRTHDAY , GETDATE()), @BIRTHDAY);
DECLARE @NEXT DATE = DATEADD(YEAR, DATEDIFF(YEAR, @BIRTHDAY , GETDATE())+1, @BIRTHDAY);
RETURN CASE WHEN @ACTUAL > GETDATE() THEN @ACTUAL
ELSE @NEXT
END;
END;
A: SELECT * FROM PERSON WHERE DATE_ADD(BIRTH_DATE,
INTERVAL YEAR(CURDATE())-YEAR(BIRTH_DATE) + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(BIRTH_DATE),1,0) YEAR)
BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 30 DAY)
A: Better: add the difference in years to the BIRTHDAY date to bring everything into this year, and then do your comparisons.
SELECT * FROM Employees WHERE
DATEADD ( year, YEAR(@Today) - YEAR(@Birthday), birthday) BETWEEN @Today AND @EndDate
A: Try my solution... I have an Informix database...
SELECT person, year(today)-year(birthdate) as years, birthdate,
CASE
WHEN MOD(year(birthdate)+((year(today)-year(birthdate))+1),4)<>0 AND MONTH(birthdate)=2 AND DAY(birthdate)=29 THEN
CASE
WHEN mdy(month(birthdate), 28, year(birthdate)+((year(today)-year(birthdate))+1))-today >= 365 THEN (mdy(month(birthdate), 28, year(birthdate)+((year(today)-year(birthdate))+1))-today)-365
WHEN mdy(month(birthdate), 28, year(birthdate)+((year(today)-year(birthdate))+1))-today < 365 THEN mdy(month(birthdate), 28, year(birthdate)+((year(today)-year(birthdate))+1))-today
END
ELSE
CASE
WHEN mdy(month(birthdate), day(birthdate), year(birthdate)+((year(today)-year(birthdate))+1))-today >= 365 THEN (mdy(month(birthdate), day(birthdate), year(birthdate)+((year(today)-year(birthdate))+1))-today)-365
WHEN mdy(month(birthdate), day(birthdate), year(birthdate)+((year(today)-year(birthdate))+1))-today < 365 THEN mdy(month(birthdate), day(birthdate), year(birthdate)+((year(today)-year(birthdate))+1))-today
END
END until
FROM table_name
WHERE mdy(month(birthdate), day(birthdate), 2000) >= mdy(month(today), day(today), 2000)
AND mdy(month(birthdate), day(birthdate), 2000) <= mdy(month(today), day(today), 2000)+30
OR
mdy(month(birthdate), day(birthdate), 2000) <= mdy(month(today), day(today), 2000)-(365-30)
ORDER BY 4, YEAR(birthdate)
A: CREATE PROCEDURE [dbo].[P_EmployeesGetBirths]
@Date Date,
@Days int
as
Begin
SET NOCOUNT ON;
Declare
@From int = Month(@Date) * 100 + Day(@Date),
@To int = Month(DateAdd(DD, @Days, @Date)) * 100 + Day(DateAdd(DD, @Days, @Date)),
@NeutralDate Date = Cast('1900-'+cast(Month(@Date) as nvarchar(2))+'-' + cast(Day(@Date) as nvarchar(2)) as Date)
Select
DOB,
DATEADD(DD, DateDiff(DD, @NeutralDate, DateAdd(YY, 1900-Year(DOB), DOB)), @Date) OnDate
From
Employees(nolock)
Where
DOB is not null and
Month(DOB) * 100 + Day(DOB) between @From and @To
order by
Month(DOB) * 100 + Day(DOB)
End
Go
A: Current month's birthdays:
SELECT * FROM tblMember m
WHERE m.GDExpireDate != ''
AND CONVERT(CHAR(2),CONVERT(datetime, m.dob, 103), 101) = CONVERT(CHAR(2), GETDATE(), 101)
AND CONVERT(CHAR(2),CONVERT(datetime, m.dob, 103), 103) >= CONVERT(CHAR(2), GETDATE(), 103)
A: Get the count of employees having a birthday this month:
select COUNT(employeeid) from Employee where month(DateOfBirth) = MONTH(GETDATE())
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Algorithm to find which numbers from a list of size n sum to another number I have a decimal number (let's call it goal) and an array of other decimal numbers (let's call the array elements) and I need to find all the combinations of numbers from elements which sum to goal.
I have a preference for a solution in C# (.Net 2.0) but may the best algorithm win irrespective.
Your method signature might look something like:
public decimal[][] Solve(decimal goal, decimal[] elements)
A: I think you've got a bin packing problem on your hands (which is NP-hard), so I think the only solution is going to be to try every possible combination until you find one that works.
Edit: As pointed out in a comment, you won't always have to try every combination for every set of numbers you come across. However, any method you come up with has worst-case-scenario sets of numbers where you will have to try every combination -- or at least a subset of combinations that grows exponentially with the size of the set.
Otherwise, it wouldn't be NP-hard.
A: The subset-sum problem, and the slightly more general knapsack problem, are solved with dynamic programming: brute-force enumeration of all combinations is not required. Consult Wikipedia or your favourite algorithms reference.
Although the problems are NP-complete, they are very "easy" NP-complete. The algorithmic complexity in the number of elements is low.
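For the reachability half of the problem, a minimal sketch of the classic subset-sum dynamic program (this assumes non-negative integer values; decimals would first be scaled to integer cents, and reconstructing the actual subsets takes extra bookkeeping):
static bool SubsetSumReachable(int[] elements, int goal)
{
    bool[] reachable = new bool[goal + 1];
    reachable[0] = true;                 // the empty subset sums to 0
    foreach (int e in elements)
    {
        for (int s = goal; s >= e; s--)  // iterate downward so each element is used at most once
        {
            reachable[s] |= reachable[s - e];
        }
    }
    return reachable[goal];
}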
A: You have described a knapsack problem, the only true solution is brute force. There are some approximation solutions which are faster, but they might not fit your needs.
A: While this doesn't avoid the brute force (as others already mentioned), you might want to sort your numbers first and then prune the candidates that remain (once you have a partial sum, you can't add any number larger than Goal - Sum).
This will change the way you implement your algorithm (in order to sort only once and then skip marked elements), but on average it would improve performance.
A: Interesting answers. Thank you for the pointers to Wikipedia - whilst interesting - they don't actually solve the problem as stated as I was looking for exact matches - more of an accounting/book balancing problem than a traditional bin-packing / knapsack problem.
I have been following the development of stack overflow with interest and wondered how useful it would be. This problem came up at work and I wondered whether stack overflow could provide a ready-made answer (or a better answer) quicker than I could write it myself. Thanks also for the comments suggesting this be tagged homework - I guess that is reasonably accurate in light of the above.
For those who are interested, here is my solution, which uses recursion (naturally). I also changed my mind about the method signature and went for List<List<decimal>> rather than decimal[][] as the return type:
public class Solver {
private List<List<decimal>> mResults;
public List<List<decimal>> Solve(decimal goal, decimal[] elements) {
mResults = new List<List<decimal>>();
RecursiveSolve(goal, 0.0m,
new List<decimal>(), new List<decimal>(elements), 0);
return mResults;
}
private void RecursiveSolve(decimal goal, decimal currentSum,
List<decimal> included, List<decimal> notIncluded, int startIndex) {
for (int index = startIndex; index < notIncluded.Count; index++) {
decimal nextValue = notIncluded[index];
if (currentSum + nextValue == goal) {
List<decimal> newResult = new List<decimal>(included);
newResult.Add(nextValue);
mResults.Add(newResult);
}
else if (currentSum + nextValue < goal) {
List<decimal> nextIncluded = new List<decimal>(included);
nextIncluded.Add(nextValue);
List<decimal> nextNotIncluded = new List<decimal>(notIncluded);
nextNotIncluded.Remove(nextValue);
RecursiveSolve(goal, currentSum + nextValue,
nextIncluded, nextNotIncluded, startIndex++);
}
}
}
}
If you want an app to test this works, try this console app code:
class Program {
static void Main(string[] args) {
string input;
decimal goal;
decimal element;
do {
Console.WriteLine("Please enter the goal:");
input = Console.ReadLine();
}
while (!decimal.TryParse(input, out goal));
Console.WriteLine("Please enter the elements (separated by spaces)");
input = Console.ReadLine();
string[] elementsText = input.Split(' ');
List<decimal> elementsList = new List<decimal>();
foreach (string elementText in elementsText) {
if (decimal.TryParse(elementText, out element)) {
elementsList.Add(element);
}
}
Solver solver = new Solver();
List<List<decimal>> results = solver.Solve(goal, elementsList.ToArray());
foreach(List<decimal> result in results) {
foreach (decimal value in result) {
Console.Write("{0}\t", value);
}
Console.WriteLine();
}
Console.ReadLine();
}
}
I hope this helps someone else get their answer more quickly (whether for homework or otherwise).
Cheers...
A: public class Logic1 {
static int val = 121;
public static void main(String[] args)
{
f(new int[] {1,4,5,17,16,100,100}, 0, 0, "{");
}
// Brute force: at each index, either include or exclude numbers[index].
// Each complete subset is printed, and the search exits once one sums to val.
static void f(int[] numbers, int index, int sum, String output)
{
if (index == numbers.length)
{
System.out.println(output + " } = " + sum);
check(sum);
return;
}
// include numbers[index]
f(numbers, index + 1, sum + numbers[index], output + " " + numbers[index]);
// exclude numbers[index]
f(numbers, index + 1, sum, output);
}
static void check (int sum1)
{
if (sum1 == val)
System.exit(0);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How to tie into a domain server's login for program access rights I need to write a program used internally where different users will have different abilities within the program.
Rather than making users have a new username and password, how do I tie into an existing domain server's login system?
Assume .NET (C#, VB, ASP, etc)
-Adam
A: For WinForms, use System.Threading.Thread.CurrentPrincipal with the IsInRole() method to check which groups they are a member of. You do need to set the principal policy of the AppDomain to WindowsPrincipal first.
Use this to get the current user name:
private string getWindowsUsername()
{
AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
return Thread.CurrentPrincipal.Identity.Name;
}
And then something like this to check a role:
if (Thread.CurrentPrincipal.IsInRole("Domain Users") == true)
{}
In ASP.NET, the thread will belong to IIS, so instead you should
*
*Set the virtual folder or website to require authentication
*Get the user name supplied by the browser with Request.ServerVariables("LOGON_USER")
*Use the DirectorySearcher class to find the users groups
A: I would use LDAP
and the DirectorySearcher Class:
http://msdn.microsoft.com/en-us/library/system.directoryservices.directorysearcher.aspx
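A minimal sketch of looking up a user's groups with DirectorySearcher (the LDAP path and account name below are placeholders):
using System;
using System.DirectoryServices;

using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
using (var searcher = new DirectorySearcher(root))
{
    searcher.Filter = "(&(objectClass=user)(sAMAccountName=jdoe))";
    searcher.PropertiesToLoad.Add("memberOf");
    SearchResult result = searcher.FindOne();
    if (result != null)
    {
        foreach (object group in result.Properties["memberOf"])
        {
            Console.WriteLine(group);  // each entry is a group's distinguished name
        }
    }
}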
A: Assuming this is served through IIS, I would tell IIS to authenticate via the domain, but I would keep authorization (what roles a user is associated with, accessible functionality, etc) within the application itself.
You can retrieve the username used to authenticate via
Trim(Request.ServerVariables("LOGON_USER")).Replace("/", "\").Replace("'", "''")
OR
CStr(Session("User")).Substring(CStr(Session("User")).LastIndexOf("\") + 1)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to know whether a given client-side startup script is already registered in an asp.net page? I have a asp.net page, and would like to know whether "script1" is already registered as a startup script or not?
A: Me.ClientScript.IsStartupScriptRegistered("script1")
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there an efficient algorithm to generate a 2D concave hull? Having a set of (2D) points from a GIS file (a city map), I need to generate the polygon that defines the 'contour' for that map (its boundary). Its input parameters would be the points set and a 'maximum edge length'. It would then output the corresponding (probably non-convex) polygon.
The best solution I found so far was to generate the Delaunay triangles and then remove the external edges that are longer than the maximum edge length. After all the external edges are shorter than that, I simply remove the internal edges and get the polygon I want. The problem is, this is very time-consuming and I'm wondering if there's a better way.
A: This paper discusses the Efficient generation of simple polygons for characterizing the shape of a set of points in the plane and provides the algorithm. There's also a Java applet utilizing the same algorithm here.
A: The guys here claim to have developed a k nearest neighbors approach to determining the concave hull of a set of points which behaves "almost linearly on the number of points". Sadly their paper seems to be very well guarded and you'll have to ask them for it.
Here's a good set of references that includes the above and might lead you to find a better approach.
A: The answer may still be interesting for somebody else: one may apply a variation of the marching squares algorithm, applied (1) within the concave hull, and (2) then on (e.g. 3) different scales that may depend on the average density of points. The scales need to be integer multiples of each other, so that you build a grid you can use for efficient sampling. This allows you to quickly find empty samples (squares), samples that are completely within a "cluster/cloud" of points, and those that are in between. The latter category can then easily be used to determine the poly-line that represents a part of the concave hull.
Everything is linear in this approach, no triangulation is needed, it does not use alpha shapes, and it is different from the commercial/patented offering described here ( http://www.concavehull.com/ )
A: One of the former students in our lab used some applicable techniques for his PhD thesis. I believe one of them is called "alpha shapes" and is referenced in the following paper:
http://www.cis.rit.edu/people/faculty/kerekes/pdfs/AIPR_2007_Gurram.pdf
That paper gives some further references you can follow.
A: A quick approximate solution (also useful for convex hulls) is to find the north and south bounds for each small element east-west.
Based on how much detail you want, create a fixed sized array of upper/lower bounds.
For each point calculate which E-W column it is in and then update the upper/lower bounds for that column. After you have processed all the points, you can interpolate the upper/lower points for the columns that were missed.
It's also worth doing a quick check beforehand for very long, thin shapes and deciding whether to bin N-S or E-W.
A: Good question! I haven't tried this out at all, but my first shot would be this iterative method:
*
*Create a set N ("not contained"), and add all points in your set to N.
*Pick 3 points from N at random to form an initial polygon P. Remove them from N.
*Use some point-in-polygon algorithm and look at points in N. For each point in N, if it is now contained by P, remove it from N. As soon as you find a point in N that is still not contained in P, continue to step 4. If N becomes empty, you're done.
*Call the point you found A. Find the line in P closest to A, and add A in the middle of it.
*Go back to step 3
I think it would work as long as it performs well enough — a good heuristic for your initial 3 points might help.
Good luck!
A: A simple solution is to walk around the edge of the polygon. Given a current edge on the boundary connecting points P0 and P1, the next point on the boundary P2 will be the point with the smallest possible A, where
H01 = bearing from P0 to P1
H12 = bearing from P1 to P2
A = fmod( H12-H01+360, 360 )
|P2-P1| <= MaxEdgeLength
Then you set
P0 <- P1
P1 <- P2
and repeat until you get back where you started.
This is still O(N^2) so you'll want to sort your pointlist a little. You can limit the set of points you need to consider at each iteration if you sort points on, say, their bearing from the city's centroid.
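A sketch of the selection step in C# (it assumes distinct points and ignores degenerate/collinear cases; the names here are illustrative, not the poster's code):
using System;
using System.Collections.Generic;

static (double X, double Y)? NextBoundaryPoint(
    (double X, double Y) p0,
    (double X, double Y) p1,
    IEnumerable<(double X, double Y)> points,
    double maxEdgeLength)
{
    double h01 = Math.Atan2(p1.Y - p0.Y, p1.X - p0.X);
    (double X, double Y)? best = null;
    double bestA = double.MaxValue;
    foreach (var p2 in points)
    {
        if (p2.Equals(p0) || p2.Equals(p1)) continue;
        double dx = p2.X - p1.X, dy = p2.Y - p1.Y;
        if (Math.Sqrt(dx * dx + dy * dy) > maxEdgeLength) continue;  // |P2-P1| <= MaxEdgeLength
        double h12 = Math.Atan2(dy, dx);
        double a = (((h12 - h01) * 180.0 / Math.PI) % 360.0 + 360.0) % 360.0;  // fmod(H12-H01+360, 360)
        if (a < bestA) { bestA = a; best = p2; }
    }
    return best;  // null means no admissible next point within MaxEdgeLength
}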
A: You can do it in QGIS with this plug in;
https://github.com/detlevn/QGIS-ConcaveHull-Plugin
Depending on how you need it to interact with your data, probably worth checking out how it was done here.
A: As a widely adopted reference, PostGIS starts with a convex hull and then caves it in; you can see it here.
https://github.com/postgis/postgis/blob/380583da73227ca1a52da0e0b3413b92ae69af9d/postgis/postgis.sql.in#L5819
A: The Bing Maps V8 interactive SDK has a concave hull option within the advanced shape operations.
https://www.bing.com/mapspreview/sdkrelease/mapcontrol/isdk/advancedshapeoperations?toWww=1&redig=D53FACBB1A00423195C53D841EA0D14E#JS
Within ArcGIS 10.5.1, the 3D Analyst extension has a Minimum Bounding Volume tool with the geometry types of concave hull, sphere, envelope, or convex hull. It can be used at any license level.
There is a concave hull algorithm here: https://github.com/mapbox/concaveman
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
}
|
Q: WCF DataContracts and underlying data structures I am wondering what makes sense in relation to what objects to expose through a WCF service - should I add WCF Serialization specifications to my business entities or should I implement a converter that maps my business entities to the DataContracts that I want to expose through my WCF service?
Right now I have entities on different levels: DataAccess, Business and Contract. I have converters in place that can map entities from DataAccess to Business and from Business to Contract and vice versa. Implementing and Maintaining those is time consuming and pretty tedious. What are best practices in relation to this?
If I were using an OR/M such as NHibernate or Entity Framework should I be exposing the entities from the ORM directly or should I abstract them the same way I am doing now?
A: I typically don't expose my business/data entities across the wire since I like to adhere to the single responsibility principle (srp). To explain, the data entities were created to map to the underlying relational (db) model. So the only reason they should "change", is because of a change to the relational model, that's it.
The moment you expose such entities so they can cross the wire, they're serving two purposes. It may seem like overkill, but it keeps things cleaner and more transparent... which yields a simpler design.
A: In general, I think from a best practices standpoint, you should not expose the structure of your business objects as data contracts, but rather define "data contract-specific" classes and convert Business to Contract. It may require extra work, but from a separation of concerns and protection from change standpoint the extra work is probably worth it.
The Microsoft patterns & practices "Service Factory Modeling Edition" implements this, and also provides tooling to auto-generate Business <=> Contract converter classes -- it's an excellent VS add-in, and also represents Microsoft's best practices for WCF.
A: Just to add to the above answers:
The object that the webservice exposes is called the Data Transfer Object (DTO). Having a DTO to map your Business Entity object (BEO) is good because of the separation it provides between your webservice and the actual implementation/logic that lies behind the web-service.
Finally, here is how you can decorate the DTO so that when it is exposed by the WSDL, the names reflect the actual objects it represents (instead of objectNameDTO or something ugly like that).
//Business Entity
class Person
{
public string Name{ get; set; }
public int Age{ get; set; }
}
//Data transfer object
[DataContract(Name="Person")] //<-- this is what makes the WSDL look nice and clean
class PersonDTO
{
[DataMember(Name = "Name", Order = 0)]
public string Name{ get; set; }
[DataMember(Name = "Age", Order = 1)]
public int Age{ get; set; }
}
A: Something to also consider: I agree with the separation, but it usually winds up leading to "Translators" or some such code to copy the data from the DTO to the Business Entity. This is where a library like AutoMapper (http://automapper.org/) comes in really handy and does away with the need to write the translation layer by hand.
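A rough sketch of the usage, based on the Person/PersonDTO example above (this uses AutoMapper's classic static API; newer versions configure a MapperConfiguration instead):
// Map properties with matching names automatically; configure once at startup.
Mapper.CreateMap<Person, PersonDTO>();
// Then wherever a DTO is needed:
PersonDTO dto = Mapper.Map<Person, PersonDTO>(businessPerson);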
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: C++ does begin/end/rbegin/rend execute in constant time for std::set, std::map, etc? For data types such as std::set and std::map where lookup occurs in logarithmic time, is the implementation required to maintain the begin and end iterators? Does accessing begin and end imply a lookup that could occur in logarithmic time?
I have always assumed that begin and end always occur in constant time, however I can't find any confirmation of this in Josuttis. Now that I'm working on something where I need to be anal about performance, I want to make sure to cover my bases.
Thanks
A: They happen in constant time. I'm looking at page 466 of the ISO/IEC 14882:2003 standard:
Table 65 - Container Requirements
a.begin(); (constant complexity)
a.end(); (constant complexity)
Table 66 - Reversible Container Requirements
a.rbegin(); (constant complexity)
a.rend(); (constant complexity)
A: Yes, according to http://www.cplusplus.com/reference/stl/, begin(), end() etc are all O(1).
A: In the C++ standard, Table 65 in 23.1 (Container Requirements) lists begin() and end() as requiring constant time. If your implementation violates this, it isn't conforming.
A: Just look at the code, here you can see the iterators in the std::map in the GNU libstdc++
std::map
you'll see that begin, end, rbegin, rend, etc. are all implemented in constant time.
A: Be careful with hash_map though. begin() is not constant.
A: For std::set
begin: constant, end: constant,
rbegin: constant,
rend: constant,
For std::map
they are also constant (all of them)
if you have any doubt, just check www.cplusplus.com
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Shorthand if + nullable types (C#) The following returns
Type of conditional expression cannot be determined because there is no implicit conversion between 'double' and '<null>'
aNullableDouble = (double.TryParse(aString, out aDouble) ? aDouble : null)
The reason why I can't just use aNullableDouble directly instead of the round trip with aDouble is because aNullableDouble is a property of a generated Entity Framework class, which cannot be used as an out parameter.
A: aNullableDouble = double.TryParse(aString, out aDouble) ? (double?)aDouble : null;
A: Just blow the syntax out into the full syntax instead of the shorthand ... it'll be easier to read:
aNullableDouble = null;
if (double.TryParse(aString, out aDouble))
{
aNullableDouble = aDouble;
}
A: The interesting side-effect of using nullable types is that you can't really use the shorthand IF directly. The conditional operator has to return the same type from both branches, and a bare null has no type. So, cast or write it out :)
A: aNullableDouble = (double.TryParse(aString, out aDouble)?new Nullable<double>(aDouble):null)
A: .NET supports nullable types, but by declaring them as such you have to treat them a bit differently (as, understandably, something which is normally a value type now is sort of reference-ish).
This also might not help much if you end up having to do too much converting between nullable doubles and regular doubles... as might easily be the case with an auto-generated set of classes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Volumetric particles I'm toying with the idea of volumetric particles. By 'volumetric' I don't mean actually 3D model per particle - usually it's more expensive and harder to blend with other particles. What I mean is 2D particles that will look as close as possible to be volumetric.
Right now what I/we have tried is particles with additional local Z texture (spherical for example), and we conduct the alpha transparency according to the combination of the alpha value and the closeness by Z which is improved by the fact that particle does not have a single planar Z.
I think a cool addition would be interaction with lighting (and shadows as well), but here the question is what the lighting formula should look like (taking transparency into account; let's assume that we are talking about smoke and dust/clouds and not additive blending) - any suggestions would be welcomed.
I also though about adding normal so I can actually squeeze all in two textures:
*
*Diffuse & Alpha texture.
*Normal & 256 level precision Z channel texture.
I ask this question to see what other directions can be thought of and to get your ideas regarding the proper lighting equation that might be used.
A: It sounds like you are asking for information on techniques for the simulation of participating media: "Participating media may absorb, emit and/or scatter light. The simplest participating medium only absorbs light. That means that light passing through the medium is attenuated depending on the density of the medium."
Here are some links to some example images and to Frisvad, Christensen, Jensen's the SIGGRAPH 2007 paper (including the PDF).
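For the absorb-only case quoted above, the attenuation follows the Beer-Lambert law; writing \sigma_t for the extinction coefficient and d for the distance the ray travels through the medium:
T(d) = e^{-\sigma_t d}, \qquad L_{\text{out}} = T(d)\, L_{\text{in}}
Scattering and emission add further terms to this transfer equation, which is what the referenced papers model.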
A: A nice paper on using spherical billboards to represent volumetric effects:
http://www.iit.bme.hu/~szirmay/firesmoke_link.htm
Doesn't handle participating media, though.
A: See Volume Rendering and Voxel.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Mixing ActiveRecord find Conditions I want to find records on a combination of created_on >= some date AND name IN some list of names.
For ">=" I'd have to use sql condition. For "IN" I'd have to use a hash of conditions where the key is :name and the value is the array of names.
Is there a way to combine the two?
A: If you're using an older version Rails, Honza's query is close, but you need to add parentheses for the strings that get placed in the IN condition:
Person.find(:all, :conditions => ["created_at > ? AND name IN (?)", date, names])
Using IN can be a mixed bag: it's fastest for integers and slowest for a list of strings. If you find yourself using just one name, definitely use an equals operator:
Person.find(:all, :conditions => ["created_at > ? AND name = ?", date, name])
A: The cool thing about named_scopes is that they work on collections too:
class Post < ActiveRecord::Base
named_scope :published, :conditions => {:status => 'published'}
end
@post = Post.published
@posts = current_user.posts.published
A: You can use named scopes in rails 2.1 and above
Class Test < ActiveRecord::Base
named_scope :created_after_2005, :conditions => "created_on > 2005-01-01"
named_scope :named_fred, :conditions => { :name => "fred"}
end
then you can do
Test.created_after_2005.named_fred
Or you can give named_scope a lambda allowing you to pass in arguments
Class Test < ActiveRecord::Base
named_scope :created_after, lambda { |date| {:conditions => ["created_on > ?", date]} }
named_scope :named, lambda { |name| {:conditions => {:name => name}} }
end
then you can do
Test.created_after(Time.now-1.year).named("fred")
A: For more on named_scopes see Ryan's announcement and the Railscast on named_scopes
class Person < ActiveRecord::Base
named_scope :registered, lambda { |time_ago| { :conditions => ['created_at > ?', time_ago] } }
named_scope :with_names, lambda { |names| { :conditions => { :names => names } } }
end
If you are going to pass in variables to your scopes you have to use a lambda.
A: You can chain the where clause:
Person.where(name: ['foo', 'bar', 'baz']).where('id >= ?', 42).first
A: The named scopes already proposed are pretty fine. The classic way to do it would be:
names = ["dave", "jerry", "mike"]
date = DateTime.now
Person.find(:all, :conditions => ["created_at > ? AND name IN (?)", date, names])
A: I think I'm either going to use simple AR finders or Searchgasm.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How do I display a substitute password character in a UILabel? I have a need to display a UITableView containing a user's account credentials. For this, I'm using UILabels in UITableViewCell. When I display their password, I'd obviously like to just display a placeholder password character instead of their actual password, similar to a UITextField when it's set to secure text entry mode. In fact, I'd like to use the same character as UITextField uses, instead of '*'.
My question is, what is the character code for the password character UITextField uses when it's in secure mode?
A: Why not just use a UITextField, make the field non-editable and change the border style to make it look like a UILabel?
A: The password character is probably a bullet. On a Mac, option-8 will insert one wherever you are typing. The Character Palette says it is Unicode 2022 and UTF8 E2 80 A2.
A: In iOS 10, the BLACK CIRCLE Unicode character is not consistent with the secure text field anymore. The character to use is ⦁ "Z NOTATION SPOT" (U+2981).
A: Although a very old question, I came across the same problem.
benzado has the right idea, although I think the Unicode character should be U+25CF. To me it looks like that's exactly the dot Apple uses in a secured UITextField.
A: In Swift 3 you can use:
passwordLabel.text = String(password.characters.map { _ in return "•" })
A: Here's a way to do this, e.g., to display the password "dotted out" in a Prototype Cell's detailTextLabel:
// self.password is your password string
NSMutableString *dottedPassword = [NSMutableString new];
for (int i = 0; i < [self.password length]; i++)
{
[dottedPassword appendString:@"●"]; // BLACK CIRCLE Unicode: U+25CF, UTF-8: E2 97 8F
}
cell.detailTextLabel.text = dottedPassword;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Strategy for Fixing Layout Bugs in IE6? Generally, what's the best way to tackle a layout bug in IE6? What are the most common bugs or problems that one should look for when trying to figure out why your page suddenly looks like a monkey coded it?
A: First Things First
Get yourself the Internet Explorer Developer Toolbar. It's a life saver and works great with IE6 and/or IE7. It's no replacement for Web Developer Toolbar or Firebug for Firefox, but it's better than nothing.
Know Thy Enemy
Read up on the quirks of IE — particularly hasLayout and overflow and the like. There are also many CSS niceties that you'll have to either do without or find alternatives. Look into how many of the popular JavaScript toolkits/frameworks/libraries get around different issues.
Rome Wasn't Built in a Day
The more you have to work with it, the more you'll remember off hand and won't have to lookup as often. There's just no replacement for experience in this. As several have pointed out, though, there are great resources out there on the net. Position Is Everything is certainly up there.
A: http://www.positioniseverything.net/ will certainly address your problem.
It provides comprehensive and in-depth descriptions of browser bugs along with options to work around them. A must read, in my opinion.
A: One good way to start learning about how IE happens to be mangling the page is to turn on red borders on different elements with CSS (border: 1px solid red;). This will immediately tell you whether it's a margin problem or a padding problem, how wide the element really is, etc.
A: The box model is usually the culprit. Basically, any div you are trying to position with CSS that IE6 doesn't support will trigger this problem.
You may find it happens if you are using min-{width,height} or max-{width,height}.
This provides a great reference for checking compatibility with different versions:
http://www.aptana.com/reference/html/api/CSS.index.html
A: Noticed that Marc's post is at a -2 =D. He's only saying "resort to tables" even though they blow, because in sucky browsers like IE6, some of the broken CSS commands work in tables only (who knows why... damn you Bill Gates!!!). Here's a good reference to see what works and doesn't work as far as CSS goes: http://www.quirksmode.org/css/contents.html . It's a great reference to check on what cool effects work/don't work with various, widely used browsers. Also, always have a go-to plan for users who browse with IE6 (even though it's just about as old as mechanical dirt), as many businesses still use older browsers (including non-profits/3rd-world countries etc.). So by all means, create the bugged-out drop-down menu that looks WAY better than a standard horizontal menu, but create a secondary one specifically for IE6 that becomes the default when the page receives a request from an IE6 browser.
A: How do you define a layout bug? The most frustrating layout behavior (I don't know if it should be called a bug) in IE is that we always need to specify style="display:inline" in the HTML <form> tag so that a blank line won't appear and disturb the form layout.
A: This question I believe has far too much scope.
Validate your code, and if pain persists, well, good luck.
The only real solutions, as with any other ballpark bug type are to google for a solution, or ask somebody who knows, ( ie: give the exact problem to us here at stackoverflow ).
You can use the IE Dev toolbar to glean an idea, but many of the bugs are random, inexplicable, and esoteric, e.g. the guillotine bug, the random item duplication bug, etc. The list goes on, and you can spend hours literally goofing with stupid variables everywhere and achieve nothing.
A: I have a simple strategy that works every time.
First, I develop the site using commonly accepted CSS to look good in Safari and Firefox 3. See w3schools.com for details on browser support.
Then, I go into IE6 and IE7 and alter the CSS using conditional includes.
This is hack free and lets you handle different browsers (IE6 and IE7 have separate issues).
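For example, conditional comments in the page head let each IE version pull in its own override stylesheet (the file names here are hypothetical):
<!-- base stylesheet for standards browsers -->
<link rel="stylesheet" type="text/css" href="main.css" />
<!-- overrides that only the matching IE version will load -->
<!--[if IE 6]><link rel="stylesheet" type="text/css" href="ie6.css" /><![endif]-->
<!--[if IE 7]><link rel="stylesheet" type="text/css" href="ie7.css" /><![endif]-->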
Most of the issues you'll find come from unsupported features in IE (like min-width), errors in the box model (IE adds unseen extra padding (3px) to some boxes), or positioning issues. Go for those first as they are often the issue.
A: A common problem is padding not getting added to the width of a block element. So for layout div's, avoid using padding and instead use elements within them to define the padding.
A: I use Rafael Lima's Browser Selector when I need to tweak differences between IE and standards browsers. It greatly reduces using "hacks" in your HTML to solve common problems.
You can target CSS statements for different browsers, or even different versions of browsers (Hello IE 6). It's very simple to implement, but requires the user has JavaScript turned on (most do).
.thing { ....}
.ie .thing { ....}
.ie6 .thing { ....}
A: In theory, use CSS compatible with IE6 layout bugs, utilise only well-known workarounds (CSS and HTML filters), code for them in a way that won't break forward compatibility, and test for quirks/strict mode.
In reality, resort to tables.
A: We had a floating div issue that was only evident in a particular version of IE6. It was fixed by downloading the latest service pack.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Any good resources for creating Visual Slick macros? Does anyone know any good resources for creating Visual Slick macros?
A: I'd start with the official community forum for SlickEdit macros.
A: After the community forum, I would find the macros directory and go to town looking at all the .e files. Depending on what you want to do, there are a lot of samples in the actual implementation (just the same way I would do it for emacs).
For example, many years ago I wanted something like 'project-load' (a dialog that lets you open any file in the project), but for the currently checked-out files instead of project files (and not just open them, but also diff etc.). I went and found the project-load source and used it as a sample for implementing a similar dialog.
A: This is a great resource to start with. I wrote my first one here
A: There is a book by John Hurst, 'Professional SlickEdit' (ISBN-13: 978-0-470-12215-0), that is useful to get you started.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Whose responsibility is it, anyway? In the application I am writing I have a Policy class. There are 4 different types of Policy. Each Policy is weighted against the other Policies such that PolicyA > PolicyB > PolicyC > PolicyD.
Whose responsibility is it to implement the logic to determine whether one Policy is greater than another? My initial thought is to overload the > and < operators and implement the logic in the Policy type itself.
Does that violate the SRP?
A: I would think that a PolicyComparer class should do the evaluation.
A: I think you are on the right track with the overload; however, the extension of this is obviously a lot longer:
if (A > B || B > C || C > D)
...
A: You could also store a PolicyWeight attribute in your class, that being a simple built-in type ( int, unsigned int, ... ) which can then be easily compared.
A: Certainly a dedicated comparer class. If you ever need to provide additional logic (e.g. have two or three different ways of comparing policies), this approach allows you more flexibility (not achievable through operator overloads).
A: You want a PolicyComparator class. If you want to override < and >, that's fine, but do that overriding in the Policy base class, and have those implementations utilize PolicyComparator to do it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Allow user@example or user@localhost in email validation? I'm working on an email validation check and we need to decide whether to allow user@localhost and user@example (notice no .anything) to be validated as a valid email address. This is for an open source project that has a number of use cases on both the web at large and intranets.
RFC 2822 (Internet Message Format Standard) allows it but RFC 2821 (SMTP Standard) says it should fail.
Thoughts?
A: It depends on your application. If you think that several of your users will have an email @localhost, and you don't mind. Then go for it.
A: Make it a configurable option, so people can decide for themselves. I'd default it to failure, personally, as I've yet to run into a case - intranet or public internet - where I've had someone use a valid user@localhost type address.
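If you do make it configurable, a minimal sketch of the domain part of such a check (the function name and patterns are illustrative only, not from any standard library):
<?php
function valid_email_domain($email, $allowTldLess = false) {
    list(, $domain) = explode("@", $email, 2);
    if ($allowTldLess) {
        // accept any dot-separated labels, including a single bare label
        // such as "localhost" or "example"
        return (bool) preg_match('/^[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*$/', $domain);
    }
    // otherwise require at least one dot, i.e. a TLD must be present
    return (bool) preg_match('/^[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$/', $domain);
}
?>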
A: I would disable it. Very few organizations use internal domains, and those that do generally use "acme.localhost" or "intranet.com" or something else of the like. There is some sort of configuration going on in the DNS that they use to make it work.
Regardless, internal email is nearly dead anyway: with the advent of instant messaging, Twitter, and SMS, along with the increasing availability of external email for every member of a company, it is quite likely that you will never see a TLD-less domain in an email address.
For the folks that do require it, they can always tweak the regex themselves, as they were savvy enough to set up a custom hostname to handle internal email.
A: Well, if you have DNS working for internally you could always just do a DNS lookup.
But if this is going to fail with SMTP, then I would suggest making sure you don't include it.
A: I have seen email addresses of the form user@localhost, typically when looking at archives of a mailing list where the administrator hosted and posted from the same machine. So it can definitely occur - and I admit it broke my parsing routine! I am now a little more flexible about the email addresses I accept.
A: Looking at this, it looks like we need two quick checks, as detailed:
<?php
function valid_email($email) {
// First, we check that there's one @ symbol, and that the lengths are right
if (!preg_match('/^[^@]{1,64}@[^@]{1,255}$/', $email)) { // ereg() was removed in PHP 7
// Email invalid because wrong number of characters in one section, or wrong number of @ symbols.
return false;
}
// take a given email address and split it into the username and domain.
list($userName, $mailDomain) = explode("@", $email); // split() was removed in PHP 7
if (checkdnsrr($mailDomain, "MX")) {
// this is a valid email domain!
return true;
}
else {
// this email domain doesn't exist!
return false;
}
}
?>
(source 1, source 2)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What interrupt would you hook from DOS to get the real-time clock What interrupt would you hook from DOS to get the real-time clock?
A: The real-time clock is not normally used as an interrupt source under DOS. It is coupled with the CMOS RAM because both are buffered by the battery, and it is accessed via ports 0x70 and 0x71.
You can however hook the interrupt of the PIT (programmable interval timer). That's interrupt 0x08 (i.e. hardware IRQ0). As far as I remember, that interrupt is configured by DOS to be called about 18.2 times per second. You can program it to other frequencies as well, but that will mess up the DOS clock a bit (ports 0x40 and 0x43).
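If you only need to read the clock rather than hook a timer, here is a minimal real-mode sketch of the port access just described, assuming a Borland-style DOS compiler with inportb/outportb in <dos.h>:
#include <dos.h>   /* Borland-style port I/O: outportb/inportb */
#include <stdio.h>
/* Read one RTC register: write its index to port 0x70, read it from 0x71. */
static unsigned char read_cmos(unsigned char reg)
{
    outportb(0x70, reg);
    return inportb(0x71);
}
/* The RTC stores values in BCD by default (status register B controls this). */
static int from_bcd(unsigned char v)
{
    return (v >> 4) * 10 + (v & 0x0F);
}
int main(void)
{
    /* Registers 0x04, 0x02 and 0x00 hold hours, minutes and seconds;
       this assumes the common 24-hour BCD mode. */
    printf("%02d:%02d:%02d\n",
           from_bcd(read_cmos(0x04)),
           from_bcd(read_cmos(0x02)),
           from_bcd(read_cmos(0x00)));
    return 0;
}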
A: http://www.control.com/thread/1026238869 has some info on this. Hook int 08h (don't forget to redispatch it); that is called every 55 milliseconds.
A: Read up on the Intel 8259 family of Programmable Interrupt Controllers. According to this, it's interrupts 8 (master) and 112 (slave). Here's a very technical document on the 8259A: http://pdos.csail.mit.edu/6.828/2008/readings/hardware/8259A.pdf
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Best way to aggregate multiple log files from several servers I need a simple way to monitor multiple text log files distributed over a number of HP-UX servers. They are a mix of text and XML log files from several distributed legacy systems. Currently we just ssh to the servers and use tail -f and grep, but that doesn't scale when you have many logs to keep track of.
Since the logs are in different formats and just files in folders (automatically rotated when they reach a certain size) I need to both collect them remotely and parse each one differently.
My initial thought was to make a simple daemon process that I can run on each server using a custom file reader for each file type to parse it into a common format that can be exported over the network via a socket. Another viewer program running locally will connect to these sockets and show the parsed logs in some simple tabbed GUI or aggregated to a console.
What log format should I try to convert to if I am to implement it this way?
Is there some other easier way? Should I attempt to translate the log files to the log4j format to use with Chainsaw, or are there better log viewers that can connect to remote sockets? Could I use BareTail as suggested in another log question? This is not a massively distributed system, and changing the current logging implementations for all applications to use UDP broadcast or put messages on a JMS queue is not an option.
A: Logscape - like splunk without the price tag
A: Probably the lightest-weight solution for real-time log watching is to use Dancer's shell in concurrent mode with tail -f:
dsh -Mac -- tail -f /var/log/apache/*.log
*
*The -a is for all machine names that you've defined in ~/.dsh/machines.list
*The -c is for concurrent running of tail
*The -M prepends the hostname to every line of output.
A: Options:
*
*Use a SocketAppender to send all logs to 1 server directly. (This could serverly hamper performance and add a single point of failure.)
*Use scripts to aggregate the data. I use scp, ssh, and authentication keys to allow my scripts to get data from all servers without any login prompts.
A: multitail
or
"chip is a local and remote log parsing and monitoring tool for system admins and developers.
It wraps the features of swatch, tee, tail, grep, ccze, and mail into one, with some extras"
Eg.
chip -f -m0='RUN ' -s0='red' -m1='.*' -s1 user1@remote_ip1:'/var/log/log1 /var/log/log2 /var/log/log3' \
     user2@remote_ip2:'/var/log/log1 /var/log/log2 /var/log/log3' | egrep "RUN |==> /"
This will highlight the occurrences of the -m0 pattern in red, pre-filtering the 'RUN |==> /' pattern from all the log files.
A: I wrote vsConsole for exactly this purpose - easy access to log files - and then added app monitoring and version tracking. Would like to know what you think of it. http://vs-console.appspot.com/
A: We use a simple shell script like the one below. You'd obviously have to tweak it somewhat to tell it about the different file names and decide which box to look for which file on, but you get the basic idea. In our case we are tailing a file at the same location on multiple boxes. This requires ssh authentication via stored keys instead of typing in passwords.
#!/bin/bash
FILE=$1
for box in box1.foo.com box2.foo.com box3.foo.com box4.foo.com; do
ssh $box tail -f $FILE &
done
Regarding Mike Funk's comment about not being able to kill the tailing with ^C: I store the above in a file called multitails.sh and appended the following to the end of it. This creates a kill_multitails.sh file which you run when you're done tailing, and it then deletes itself.
# create a bash script to kill off
# all the tails when you're done
# run kill_multitails.sh when you're finished
echo '#!/bin/sh' > kill_multitails.sh
chmod 755 kill_multitails.sh
echo "$(ps -awx | grep $FILE)" > kill_multitails_ids
perl -pi -e 's/^(\d+).*/kill -9 $1/g' kill_multitails_ids
cat kill_multitails_ids >> kill_multitails.sh
echo "echo 'running ps for it'" >> kill_multitails.sh
echo "ps -awx | grep $FILE" >> kill_multitails.sh
echo "rm kill_multitails.sh" >> kill_multitails.sh
rm kill_multitails_ids
wait
A: Awstats provides a perl script that can merge several apache log files together. This script scales well since the memory footprint is very low; log files are never loaded into memory.
I know that is not exactly what you need, but perhaps you can start from this script and adapt it for your needs.
A: You can use the various receivers available with Chainsaw (VFSLogFilePatternReceiver to tail files over ssh, SocketReceiver, UDPReceiver, CustomSQLDBReceiver, etc) and then aggregate the logs into a single tab by changing the default tab identifier or creating a 'custom expression logpanel' by providing an expression which matches the events in the various source tabs.
A: gltail - real-time visualization of server traffic, events and statistics with Ruby, SSH and OpenGL from multiple servers
A: XpoLog for Java
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: How to access custom fields from the global class in a webhandler? I added some custom fields (public booleans) to the global class in global.asax.cs which are initialized during the Application_Start event. How do I access them in a webhandler (ashx)? Or is it better to save them in the Application state object?
A: You would probably need to access the class as the type your Global.asax.cs actually is, rather than the type it inherits from.
I believe it is more common to just use the Application state object for application-wide variables.
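For example, a minimal sketch of the Application-state approach (the "FeatureEnabled" key and the handler name are hypothetical):
// Global.asax.cs - set the value once at startup:
protected void Application_Start(object sender, EventArgs e)
{
    Application["FeatureEnabled"] = true;
}
// MyHandler.ashx.cs - read it back inside the handler:
public class MyHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        bool featureEnabled = (bool)context.Application["FeatureEnabled"];
        context.Response.Write(featureEnabled);
    }
    public bool IsReusable
    {
        get { return false; }
    }
}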
A: have you tried ((global)Application).PublicBooleanField ?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Semantic Web Framework What semantic web frameworks are there, and what are the advantages / disadvantages of each? I've made extensive use of Jena, and I have looked at Sesame briefly. Are there others I should consider as well?
A: Redland is a good RDF framework (just like Andreas said). I am mainly using its Python bindings and am installing it on Mac OS X via MacPorts (e.g., port install redland-bindings +python).
You could use it with other languages too (see its bindings for Perl, Ruby, ...).
For pointers to some larger lists of RDF frameworks see Semantic Web FAQ: Tools.
A: A more low-level approach is Redland, which provides bindings to a lot of languages like Perl, PHP, Python and Ruby. Redland itself is written in C. I have scripted with it in Ruby to provide a simple webservice with an RDF backend instead of a classic database.
A: http://www.cubicweb.org is a semantic web framework written in Python. It can be used to develop applications that serve content both to humans and computers, providing each with the format it asks for.
A: This question may be related to what-are-some-good-java-rdf-libraries
A: I would definitely take a look at Intellidimension's offerings if you are working on the Microsoft stack of technologies.
They have a mature SQL Server based framework for storing and processing (with rules) semantic web data. They also have a great .NET SDK that I have used extensively.
A: If you are using Java and are interested in OWL inferencing, you should look at Pellet. It has bindings to Jena and the OWL-API, which is itself a useful semweb framework.
A: The most web-centric I've seen so far is RAP (RDF API for PHP).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How to programmatically enable/disable network interfaces? (Windows XP) I need to enable/disable completely network interfaces from a script in Windows XP. I'm looking for a python solution, but any general way (eg WMI, some command-line à la netsh, some windows call) is welcome and will be adjusted. Thanks.
A: So far I've found the following Python solution:
>>> import wmi; c=wmi.WMI()
>>> o=c.query("select * from Win32_NetworkAdapter where NetConnectionID='wifi'")[0]
>>> o.EnableDevice(1)
(-2147217407,)
which is translated, AFAIU, to the generic WMI error 0x80041001. Could be permissions.
A: I found this .VBS script on the internet. It has the cool advantage of actually working on machines where I cannot get NETSH to work for this purpose.
Const ssfCONTROLS = 3
sConnectionName = "Local Area Connection"
sEnableVerb = "En&able"
sDisableVerb = "Disa&ble"
set shellApp = createobject("shell.application")
set oControlPanel = shellApp.Namespace(ssfCONTROLS)
set oNetConnections = nothing
for each folderitem in oControlPanel.items
if folderitem.name = "Network Connections" then
set oNetConnections = folderitem.getfolder: exit for
end if
next
if oNetConnections is nothing then
msgbox "Couldn't find 'Network Connections' folder"
wscript.quit
end if
set oLanConnection = nothing
for each folderitem in oNetConnections.items
if lcase(folderitem.name) = lcase(sConnectionName) then
set oLanConnection = folderitem: exit for
end if
next
if oLanConnection is nothing then
msgbox "Couldn't find '" & sConnectionName & "' item"
wscript.quit
end if
bEnabled = true
set oEnableVerb = nothing
set oDisableVerb = nothing
s = "Verbs: " & vbcrlf
for each verb in oLanConnection.verbs
s = s & vbcrlf & verb.name
if verb.name = sEnableVerb then
set oEnableVerb = verb
bEnabled = false
end if
if verb.name = sDisableVerb then
set oDisableVerb = verb
end if
next
'debugging displays left just in case...
'
'msgbox s ': wscript.quit
'msgbox "Enabled: " & bEnabled ': wscript.quit
'not sure why, but invokeverb always seemed to work
'for enable but not disable.
'
'saving a reference to the appropriate verb object
'and calling the DoIt method always seems to work.
'
if bEnabled then
' oLanConnection.invokeverb sDisableVerb
oDisableVerb.DoIt
else
' oLanConnection.invokeverb sEnableVerb
oEnableVerb.DoIt
end if
'adjust the sleep duration below as needed...
'
'if you let the oLanConnection go out of scope
'and be destroyed too soon, the action of the verb
'may not take...
'
wscript.sleep 1000
A: Using the netsh interface
Usage set interface [name = ] IfName
[ [admin = ] ENABLED|DISABLED
[connect = ] CONNECTED|DISCONNECTED
[newname = ] NewName ]
Try including everything inside the outer brackets:
netsh interface set interface name="thename" admin=disabled connect=DISCONNECTED newname="thename"
See also this MS KB page: http://support.microsoft.com/kb/262265/
You could follow either of their suggestions.
For disabling the adapter, you will need to determine a way to reference the hardware device. If there will not be multiple adapters with the same name on the computer, you could possibly go off of the Description for the interface (or PCI ID works well). After that, using devcon (disable|enable). Devcon is an add-on console interface for the Device Manager.
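From Python, one simple way to drive the netsh approach above is to shell out; a minimal sketch (the connection name is just an example, and the call needs administrative rights):
import subprocess
def set_interface(name, enabled):
    """Enable or disable a network connection by name via netsh."""
    state = "ENABLED" if enabled else "DISABLED"
    # Quote the name, since connection names usually contain spaces.
    cmd = 'netsh interface set interface name="%s" admin=%s' % (name, state)
    return subprocess.call(cmd)
set_interface("Local Area Connection", False)  # disable it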
A: I can't seem to find any basic API for controlling interfaces on MSDN, apart from the RAS API's, but I don't think they apply to non-dialup connections. As you suggest yourself, netsh might be an option, supposedly it also has a programmatic interface: http://msdn.microsoft.com/en-us/library/ms708353(VS.85).aspx
If you want to be pure Python, you can perhaps open a set of pipes to communicate with a netsh process.
A: The devcon tool can control the NIC, but not the interface directly. It's a command-line version of the Device Manager applet.
devcon disable (id or portion of name)
devcon enable (id or portion of name)
A: this is VB.Net
Dim objectQuery As New ObjectQuery("SELECT * FROM Win32_NetworkAdapter WHERE NetConnectionId IS NOT NULL")
Dim searcher As New ManagementObjectSearcher(scope, objectQuery)
Dim os As ManagementObject
Dim moColl As ManagementObjectCollection = searcher.Get()
Dim _list As String = ""
For Each os In moColl
Console.WriteLine(os("NetConnectionId"))
Next os
That will get all the interfaces on your computer. Then you can use netsh to disable one:
netsh interface set interface name="<interface name>" admin=DISABLED
A: You may need to use WMI. This may serve as a good starting point:
http://msdn.microsoft.com/en-us/library/aa394595.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Dynamically add CalendarExtender to Textbox subclass server control? I'm trying to create a server control, which inherits from TextBox, that will automatically have a CalendarExtender attached to it. Is it possible to do this, or does my new control need to inherit from CompositeControl instead? I've tried the former, but I'm not clear during which part of the control lifecycle I should create the new instance of the CalendarExtender, and what controls collection I should add it to. I don't seem to be able to add it to the Page or Form's controls collection, and if I add it to the (TextBox) control's collection, I get none of the pop-up calendar functionality.
A: I accomplished this in a project a while back. To do it I created a CompositeControl that contains both the TextBox and the CalendarExtender.
In the CreateChildControls method of the CompositeControl I use code similar to this:
TextBox textbox = new TextBox();
textbox.ID = this.ID + "Textbox";
textbox.Text = this.EditableField.TextValue;
textbox.TextChanged += new EventHandler(HandleTextboxTextChanged);
textbox.Width = new Unit(100, UnitType.Pixel);
CalendarExtender calExender = new CalendarExtender();
calExender.PopupButtonID = "Image1";
calExender.TargetControlID = textbox.ID;
this.Controls.Add(textbox);
this.Controls.Add(calExender);
Of course make sure that the form containing this CompositeControl has a toolkit script manager.
A: I know this is an old thread, but I came across it when I had a similar question. This is what I ended up implementing, and it works great. If you want the control to BE a TextBox, then simply pump out the extender during the call to Render.
Imports System.Web.UI.WebControls
Imports AjaxControlToolkit
Public Class DateTextBox
Inherits TextBox
Private _dateValidator As CompareValidator
Private _calendarExtender As CalendarExtender
Protected Overrides Sub OnInit(ByVal e As System.EventArgs)
MyBase.OnInit(e)
_dateValidator = New CompareValidator
With _dateValidator
.ControlToValidate = ID
Rem set your other properties
End With
Controls.Add(_dateValidator)
_calendarExtender = New CalendarExtender
With _calendarExtender
.TargetControlID = ID
End With
Controls.Add(_calendarExtender)
End Sub
Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
MyBase.Render(writer)
_dateValidator.RenderControl(writer)
_calendarExtender.RenderControl(writer)
End Sub
End Class
A: You can easily add an AJAX calendar in custom server controls. You need to add two references to your application:
1. AjaxControlToolkit.dll
2. System.Web.Extensions
With the help of the second reference you get all the properties of "CalendarExtender" in your custom server controls.
A: When you are trying to prevent users from typing anything in the textbox, so that it is only filled by the calendar extender, and you then try to get the selected date from the textbox control, it may be an empty string if you have set the textbox property to ReadOnly="True".
That's because read-only controls are NOT posted back to the server. A workaround for this is the following:
protected void Page_Load(object sender, EventArgs e)
{
TextBox1.Attributes.Add("readonly", "readonly");
}
Hope it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Are there any Fuzzy Search or String Similarity Functions libraries written for C#? There are similar questions, but not regarding C# libraries I can use in my source code.
Thank you all for your help.
I've already seen Lucene, but I need something simpler to search for similar strings, without the overhead of the indexing part.
The answer I marked has got two very easy algorithms, and one uses LINQ too, so it's perfect.
A: Levenshtein distance implementation:
*
*Using LINQ (not really, see comments)
*Not using LINQ
I have a .NET 1.1 project in which I use the latter. It's simplistic, but works perfectly for what I need. From what I remember it needed a bit of tweaking, but nothing that wasn't obvious.
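For reference, a minimal sketch of the classic dynamic-programming algorithm behind those links (my own paraphrase, not the linked code):
using System;
public static class FuzzyMatch
{
    // Levenshtein distance: the number of single-character insertions,
    // deletions and substitutions needed to turn s into t.
    public static int Levenshtein(string s, string t)
    {
        int[,] d = new int[s.Length + 1, t.Length + 1];
        for (int i = 0; i <= s.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= t.Length; j++) d[0, j] = j;
        for (int i = 1; i <= s.Length; i++)
            for (int j = 1; j <= t.Length; j++)
            {
                int cost = s[i - 1] == t[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(
                    d[i - 1, j] + 1,          // deletion
                    d[i, j - 1] + 1),         // insertion
                    d[i - 1, j - 1] + cost);  // substitution
            }
        return d[s.Length, t.Length];
    }
}
For example, Levenshtein("kitten", "sitting") returns 3.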
A: you can also look at the very impressive library titled Sam's String Metrics https://github.com/StefH/SimMetrics.Net . this includes a host of algorithms.
*
*Hamming distance
*Levenshtein distance
*Needleman-Wunch distance or Sellers Algorithm
*Smith-Waterman distance
*Gotoh Distance or Smith-Waterman-Gotoh distance
*Block distance or L1 distance or City block distance
*Monge Elkan distance
*Jaro distance metric
*Jaro Winkler
*SoundEx distance metric
*Matching Coefficient
*Dice’s Coefficient
*Jaccard Similarity or Jaccard Coefficient or Tanimoto coefficient
*Overlap Coefficient
*Euclidean distance or L2 distance
*Cosine similarity
*Variational distance
*Hellinger distance or Bhattacharyya distance
*Information Radius (Jensen-Shannon divergence)
*Harmonic Mean
*Skew divergence
*Confusion Probability
*Tau
*Fellegi and Sunters (SFS) metric
*TFIDF or TF/IDF
*FastA
*BlastP
*Maximal matches
*q-gram
*Ukkonen Algorithms
A: Have you taken a look at Lucene.net? It is a port of the Java Lucene search engine API to the .Net platform. That library offers a lot of search functionality. I played around with it a year or so ago, so don't take my suggestion as based on tons of experience. I saw it in the book Windows Developer Power Tools and took it for a test drive. You might look through their API documentation to see if it offers something like the Fuzzy Search for which you are looking.
A: They are not my own invention, but they are my favorites and I've just blogged about them and published my own tweaked versions of Dice Coefficient, Levenshtein Distance, Longest Common Subsequence and Double Metaphone in a blog post called Four Functions for Finding Fuzzy String Matches in C# Extensions.
A: This Code Project article has a string similarity function using the Levenshtein distance.
A: There is the following Levenshtein Distance Algorithm which assigns a value to the similarity of two strings (well, the difference actually), that could be used to build upon: http://www.merriampark.com/ldcsharp.htm
A: The Beagle Project for Linux is written in C# (Mono) and is a Google Desktop-like search tool. It may have some code in there for this kind of string matching.
If I recall correctly, it uses the Lucene library for searching and retrieving data. Maybe that can be useful for your project too.
A: I have used "Ternary Search Tree Dictionary in C#" (http://www.codeproject.com/KB/recipes/tst.aspx) to search for similar strings.
Regards, Patricio
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
}
|
Q: How do I programmatically build ad-hoc queries quickly? I've used Excel PivotTable to analyze data from my database because it allows me to "slice and dice" very quickly. As we know what is in our database tables, we all can write SQL queries that do what PivotTable does.
But I am wondering why PivotTable can construct the queries so fast while it knows nothing about the data and the meanings/relationships between the data fields we give it?
Put the question in another way, how can we build ad-hoc SQL queries in such a fast and efficient way? ("Use PivotTable, of course!", yep, but what I want is a programmatic way).
A: Just manipulate your order and group clauses as necessary.
Excel is fast because all the data is in memory, and it can be sorted fast and efficiently.
A: @Mark Ransom is definitely onto something with the notion of Excel keeping the data in memory, making it faster computationally. It's also possible that Excel pre-indexes datasets in such a way that makes it more responsive than your database.
There's one significant, non-algorithmic possibility for why it's faster: Excel, in Pivot Table usage, has no concept of a join. When you're fetching the data ad hoc from your database, any joins or correlations between tables will result in further lookups, scans, index loads, etc. Since Excel has all the data in a single location (RAM or no), it can perform lookups without having to pre-form datasets. If you were to load your database data into a temp table, it would be interesting to see how ad hoc queries against that table stacked up, performance-wise, against Excel.
One thing's certain, though: although databases are excellent tools for producing accurate reports, a traditionally-normalized database will be far less than optimal for ad hoc queries. Because normalized data structures focus on integrity above all else (if I may take that liberty), they sacrifice ad hoc optimization in favor of keeping all the data sensible. Although this is a poor example, consider this normalized schema:
+--------+     +---------+
|tblUsers|     |luGenders|
+--------+     +---------+
|userID  |     |genderID |
|genderID|     |gender   |
+--------+     +---------+
SELECT * FROM luGenders;
> 1 Female
> 2 Male
If, in this example, we wished to know the number of female/male users in our system, the database would need to process the join and behave accordingly (again, this is a bad example due to the low number of joins and low number of possible values, which generally should bring about some database-engine optimisation). However, if you were to dump this data to Excel, you'd still incur some database penalty to pull the data, but actually pivoting the data in Excel would be fairly speedy. It could be that this notion of up-front, fixed-cost penalty is being missed by your idea of Excel being quicker than straight ad hoc queries, but I don't have the data to comment.
The most tangential point, though, is that while general databases are good for accuracy, they often suck at ad hoc reports. To produce ad hoc reports, it's often necessary to de-normalize ("warehouse") the data in a more queryable structure. Looking up info on data warehousing will provide a lot of good results on the subject.
Moral of the story: having a fully algorithmic, fast ad hoc query system is an awesome ideal, but is less than practical given space and time constraints (memory and people-hours). To effectively generate an ad hoc system, you really need to understand the use cases of your data, and then denormalize it effectively.
I'd highly recommend The Data Warehouse Toolkit. For the record, I'm no DBA, I'm just a lowly analyst who spends 80 hours per week munging Excel and Oracle. I know your pain.
A: My intuitive feeling tells me that the answer would have something to do with a Pivot Table outline, which has a fixed number of zones, namely:
- the Page Fields zone
- the Column Fields zone
- the Row Fields zone and
- the Data zone
In my wild guess:
- The Page zone builds the WHERE part of the ad-hoc query.
- The Column zone will put whichever fields drag-dropped to it in the GROUP BY clause.
- The Row zone will build a SELECT DISTINCT <field names>
- The Data zone will apply an AGGREGATE function to the field drag-dropped to it.
What do you think would happen "behind the scenes" when we drag fields to those zones?
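In SQL terms, dropping fields onto those zones might generate something like the following (table and field names are hypothetical):
-- Page zone -> WHERE, Row/Column zones -> GROUP BY, Data zone -> aggregate
SELECT region,            -- Row Fields zone
       product,           -- Column Fields zone (the UI pivots this out into columns)
       SUM(sales_amount)  -- Data zone with its aggregate function
FROM   sales_fact
WHERE  sales_year = 2008  -- Page Fields zone filter
GROUP BY region, product;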
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to get date picture created in java I would like to extract the date a jpg file was created. Java has the lastModified method for the File object, but appears to provide no support for extracting the created date from the file. I believe the information is stored within the file as the date I see when I hover the mouse pointer over the file in Win XP is different than what I can get by using JNI with "dir /TC" on the file in DOS.
A: The date is stored in the EXIF data in the jpeg. There's a java library and a viewer in java that might be helpful.
A: I use this metadata library: http://www.drewnoakes.com/code/exif/
Seems to work pretty well, although bear in mind that not all JPEG images have this information, so it can't be 100% fool-proof.
If the EXIF metadata doesn't contain the created date, then you'll probably have to make do with Java's lastModified - unless you want to resort to Runtime.exec(...) and using system functions to find out (I wouldn't recommend this, though!)
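For what it's worth, here is a sketch of reading the tag with a recent (2.x) version of that library; the class and tag names are from the current API and differed in older releases, so treat this as an approximation:
import java.io.File;
import java.util.Date;
import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Metadata;
import com.drew.metadata.exif.ExifSubIFDDirectory;
public class ExifDate {
    public static void main(String[] args) throws Exception {
        Metadata metadata = ImageMetadataReader.readMetadata(new File("photo.jpg"));
        ExifSubIFDDirectory dir =
                metadata.getFirstDirectoryOfType(ExifSubIFDDirectory.class);
        if (dir != null) {
            // "Date/Time Original" is the capture time written by the camera.
            Date taken = dir.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL);
            System.out.println(taken); // null if the tag is absent
        }
    }
}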
A: The information is stored within the image in a format called EXIF. There are several libraries out there capable of reading this format, like this one.
A: You probably need something to access the exif data. Google suggests this library.
A: The code example below asks the user for a file path and then outputs the creation date and time:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Main {
    public static void main(final String[] args) {
        try {
            // get runtime environment and execute child process
            Runtime systemShell = Runtime.getRuntime();
            BufferedReader br1 = new BufferedReader(new InputStreamReader(System.in));
            System.out.println("Enter filename: ");
            String fname = br1.readLine();
            // /tc makes dir list creation times instead of last-write times
            Process output = systemShell.exec("cmd /c dir /a /tc " + fname);
            // open reader to get output from process
            BufferedReader br = new BufferedReader(new InputStreamReader(output.getInputStream()));
            String out = "";
            String line = null;
            int step = 1;
            while ((line = br.readLine()) != null) {
                // the sixth line of dir's output is the one describing the file
                if (step == 6) {
                    out = line;
                }
                step++;
            }
            // display process output
            try {
                out = out.replaceAll(" ", "");
                System.out.println("CreationDate: " + out.substring(0, 10));
                System.out.println("CreationTime: " + out.substring(10, 15));
            } catch (StringIndexOutOfBoundsException se) {
                System.out.println("File not found");
            }
        } catch (IOException ioe) {
            System.err.println(ioe);
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How do I find the definition of a named constraint in Oracle? All I know about the constraint is its name (SYS_C003415), but I want to see its definition.
A: Another option would be to reverse engineer the DDL...
DBMS_METADATA.GET_DDL('CONSTRAINT', 'SYS_C003415')
Some examples here....
http://www.psoug.org/reference/dbms_metadata.html
A: Use the following query to get the definition of a constraint in Oracle:
Select DBMS_METADATA.GET_DDL('CONSTRAINT', 'CONSTRAINT_NAME') from dual
A: Looks like I should be querying ALL_CONSTRAINTS.
select OWNER, CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, SEARCH_CONDITION from ALL_CONSTRAINTS where CONSTRAINT_NAME = 'SYS_C003415';
A: Or to see all constraints, use SYS.DBA_CONSTRAINTS (if you have the privileges)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Home key go to start of line in Visual Studio? Where is the option in Visual Studio to make the Home key go to the start of the line?
Right now you have to do
Home,Home
or
Home, Ctrl+Left Arrow
I'd prefer that Home go to the start of the line. I saw this option before, but now I cannot find it.
A: From asking the same question on MSDN forums:
TaylorMichaelL said:
The command you are interested in is Edit.LineFirstColumn. You'll want to change the scope to be the Text Editor. You should remove any existing shortcut key associated with the command first. If you don't change the scope then the Home key won't work. Then try using the Home key. It should work.
Michael Taylor - 9/18/08
http://p3net.mvps.org
Changing the Scope to Text Editor was the missing piece in the puzzle.
*
*Go to Tools/Customize/Keyboard
*Change Scope to "Text Editor".
*Reassign the "Home" key from Edit.LineStart to Edit.LineFirstColumn
A: In Tools/Customize/Keyboard, Reassign the "Home" key from Edit.LineStart" to "Edit.LineFirstColumn"
Edit by OP: You must change Scope to Text Editor before this will work.
Visual Studio 2010
Visual Studio 2010 removed the "scope" option. Instead you want the "Use new shortcut in" option.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Is there a Functional Programming library for .NET? For example, in Java there is Functional Java and Higher-Order Java. Both essentially give a small API for manipulating higher-order, curried functions, and perhaps a few new data types (tuples, immutable lists).
A: Have you looked into F#?
Also, a neat blog post would be here that talks about how to use the new generics / lambda expressions built into C# 3.0.
If you just add using System.Linq to the top of your source file, there are a LOT of nice new functions added for working with collections, such as folding / filtering / etc.
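For instance, a trivial sketch of a filter, a map and a fold using those LINQ extension methods:
using System;
using System.Linq;
class FunctionalDemo
{
    static void Main()
    {
        int[] xs = { 1, 2, 3, 4, 5 };
        // Where is a filter, Select is a map, Aggregate is a fold.
        var squaresOfEvens = xs.Where(x => x % 2 == 0).Select(x => x * x);
        int sum = xs.Aggregate(0, (acc, x) => acc + x);
        foreach (int n in squaresOfEvens)
            Console.WriteLine(n); // 4, then 16
        Console.WriteLine(sum);   // 15
    }
}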
A: Assuming you can't use F# for whatever reason, and just want to use functional paradigms and idioms in your C# code to improve quality & reliability:
Functional style pattern matching for C#
Monad library for C#/.Net
There is also 'elevate' which has some functional things like option types (maybes) etc.
A: I think you want F#
Also, the more recent versions of C# have a lot of functional concepts included in the base langauge.
A: There may be such a library for C#, but you should probably consider just using F# http://research.microsoft.com/fsharp/fsharp.aspx and http://msdn.microsoft.com/en-us/fsharp/default.aspx.
Microsoft plans to make F# a first-class language in Visual Studio so there should be little risk in using one of the CTPs to build your initial stuff.
A: If you're looking for something that extends C# then no, but there is F# which is a .NET based functional language. From the "About F#" page:
F# is a typed functional programming language for the .NET Framework. It combines the succinctness, expressivity, and compositionality of typed functional programming with the runtime support, libraries, interoperability, tools and object model of .NET. F# stems from the ML family of languages and has a core language compatible with that of OCaml, though also draws from C# and Haskell. F# was designed from the ground up to be a first-class citizen on .NET, giving smooth interoperability with other .NET languages. For example, C# and F# can call each other directly. This means that F# has immediate access to all the .NET Framework APIs, including, for example, Windows Presentation Foundation and DirectX. Similarly, libraries developed in F# may be used from other .NET languages.
Since F# and OCaml share a similar core language, some OCaml libraries and applications can cross-compile either directly or with minor conditionally-compiled changes. This provides a path to cross-compile and/or port existing OCaml code to .NET, and also allows programmers to transfer skills between these languages. A major focus of the project has been to extend the reach of OCaml-like languages into arenas where they have not traditionally been used. Throughout the project the designers of F# are grateful for the support and encouragement of Xavier Leroy and others in the OCaml community.
A: Not a shrink-wrapped library per se, but Luca Bolognese of Microsoft has a series of blog posts where he builds a C# library for functional programming with types like tuples, records, type unions and so on.
Also Linq is basically a library for functional programming with syntactial support in C#.
A: Check out http://code.msdn.microsoft.com/FunctionalCSharp for some samples.
A: LanguageExt looks very promising for making functional style programming in C# easier.
https://github.com/louthy/language-ext
A: F#, there's a CTP release available from microsoft.
A: One more option to consider is FuncSharp. It's not as heavy as LanguageExt and it does cover the most important patterns/aspects.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: how to replace multiple strings together in Oracle I have a string coming from a table like "can no pay{1},as your payment{2}due on {3}". I want to replace {1} with some value, {2} with some value and {3} with some value.
Is it possible to replace all 3 in one replace function, or is there a way I can directly write a query and get the replaced value? I want to do this replacement in an Oracle stored procedure. The original string comes from one of my tables; I am just doing a select on that table,
and then I want to replace {1},{2},{3} in that string with values that I have from another table.
A: If there are many variables to replace, you have them in another table, and the number of variables can vary, you can use a recursive CTE to replace them.
An example below. In table fg_rulez you put the strings with their replacement. In table fg_data you have your input strings.
set define off;
drop table fg_rulez;
create table fg_rulez as
select 1 id,'<' symbol, 'less than' text from dual
union all select 2, '>', 'great than' from dual
union all select 3, '$', 'dollars' from dual
union all select 4, '&', 'and' from dual;
drop table fg_data;
create table fg_Data AS(
SELECT 'amount $ must be < 1 & > 2' str FROM dual
union all
SELECT 'John is > Peter & has many $' str FROM dual
union all
SELECT 'Eliana is < mary & do not has many $' str FROM dual
);
WITH q(str, id) as (
SELECT str, 0 id
FROM fg_Data
UNION ALL
SELECT replace(q.str,symbol,text), fg_rulez.id
FROM q
JOIN fg_rulez
ON q.id = fg_rulez.id - 1
)
SELECT str from q where id = (select max(id) from fg_rulez);
So, a single replace.
Result:
amount dollars must be less than 1 and great than 2
John is great than Peter and has many dollars
Eliana is less than mary and do not has many dollars
The terminology symbol instead of variable comes from this duplicated question.
Oracle 11gR2
A: If the number of values to replace is too big or you need to be able to easily maintain it, you could also split the string, use a dictionary table and finally aggregate the results
In the example below I'm assuming that the words in your string are separated with blankspaces and the wordcount in the string will not be bigger than 100 (pivot table cardinality)
with Dict as
(select '{1}' String, 'myfirstval' Repl from dual
union all
select '{2}' String, 'mysecondval' Repl from dual
union all
select '{3}' String, 'mythirdval' Repl from dual
union all
select '{Nth}' String, 'myNthval' Repl from dual
)
,MyStrings as
(select 'This is the first example {1} ' Str, 1 strnum from dual
union all
select 'In the Second example all values are shown {1} {2} {3} {Nth} ', 2 from dual
union all
select '{3} Is the value for the third', 3 from dual
union all
select '{Nth} Is the value for the Nth', 4 from dual
)
-- pivot is used to split the stings from MyStrings. We use a cartesian join for this
,pivot as (
Select Rownum Pnum
From dual
Connect By Rownum <= 100
)
-- StrtoRow is basically a cartesian join between MyStings and Pivot.
-- There as many rows as individual string elements in the Mystring Table
-- (Max = Numnber of rows Mystring table * 100).
,StrtoRow as
(
SELECT rownum rn
,ms.strnum
,REGEXP_SUBSTR (Str,'[^ ]+',1,pv.pnum) TXT
FROM MyStrings ms
,pivot pv
where REGEXP_SUBSTR (Str,'[^ ]+',1,pv.pnum) is not null
)
-- This is the main Select.
-- With the listagg function we group the string together in lines using the key strnum (group by)
-- The NVL gets the translations:
-- if there is a Repl (Replacement from the dict table) then provide it,
-- Otherwise TXT (string without translation)
Select Listagg(NVL(Repl,TXT),' ') within group (order by rn)
from
(
-- outher join between strings and the translations (not all strings have translations)
Select sr.TXT, d.Repl, sr.strnum, sr.rn
from StrtoRow sr
,dict d
where sr.TXT = d.String(+)
order by strnum, rn
) group by strnum
A: Let's write the same sample as a CTE only:
with fg_rulez as (
select 1 id,'<' symbol, 'less than' text from dual
union all select 2, '>', 'greater than' from dual
union all select 3, '$', 'dollars' from dual
union all select 4, '+', 'and' from dual
), fg_Data AS (
SELECT 'amount $ must be < 1 + > 2' str FROM dual
union all
SELECT 'John is > Peter + has many $' str FROM dual
union all
SELECT 'Eliana is < mary + do not has many $' str FROM dual
), q(str, id) as (
SELECT str, 0 id
FROM fg_Data
UNION ALL
SELECT replace(q.str,symbol,text), fg_rulez.id
FROM q
JOIN fg_rulez
ON q.id = fg_rulez.id - 1
)
SELECT str from q where id = (select max(id) from fg_rulez);
A: Although it is not one call, you can nest the replace() calls:
SET mycol = replace( replace(mycol, '{1}', 'myoneval'), '{2}', mytwoval)
A: If you are doing this inside of a select, you can just piece it together, if your replacement values are columns, using string concatenation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How can I create and develop new database projects in Visual Studio? I want to find a way to develop database projects quickly in Visual Studio. Any ideas?
A: I have a method of creating and updating database projects in Visual Studio 2005 that I thought was common knowledge. After asking a few coworkers if they knew how to update their database projects with this method and receiving no's, I thought I would blog about it and pass along some helpful hints and best practices.
I work a lot with databases and especially stored procedures that are built to be used with a business logic/data access .NET framework. I enjoy working with databases and always create database projects to live with my .NET projects. I am psychotic about keeping database projects up to date. I have been burned too many times in my younger years when I needed a stored procedure that had been deleted or was out of sync with the application using the database.
After creating your database project in Visual Studio 2005 as shown:
(Screenshot: http://www.cloudsocket.com/images/image-thumb16.png)
Create 3 new directories in the project: Tables, Stored Procedures and Functions. I usually only store these for my projects.
(Screenshot: http://www.cloudsocket.com/images/image-thumb17.png)
I now open the Server Explorer in Visual Studio and create a new connection to my desired database. I am using Northwind as my example. I am not going to walk through the creation of the connection for this example.
(Screenshot: http://www.cloudsocket.com/images/image-thumb18.png)
I will use a stored procedure as my example on how to update the database project. First I expand the "Stored Procedures" directory in the Server Explorer for the Northwind database. I select a stored procedure.
(Screenshot: http://www.cloudsocket.com/images/image-thumb19.png)
I drag the stored procedure to the "Stored Procedures" directory in the Solution Explorer and drop it.
(Screenshots: http://www.cloudsocket.com/images/image-thumb20.png, http://www.cloudsocket.com/images/image-thumb21.png)
If you open the file for the dragged stored procedure, you will find that the IDE created the script as follows:
/****** Object: StoredProcedure [dbo].[CustOrdersOrders] Script Date: 08/25/2007 15:22:59 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[CustOrdersOrders]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[CustOrdersOrders]
GO
/****** Object: StoredProcedure [dbo].[CustOrdersOrders] Script Date: 08/25/2007 15:22:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[CustOrdersOrders]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE CustOrdersOrders @CustomerID nchar(5)
AS
SELECT OrderID,
OrderDate,
RequiredDate,
ShippedDate
FROM Orders
WHERE CustomerID = @CustomerID
ORDER BY OrderID
'
END
GO
You can now drag over all the tables, functions and remaining stored procedures from your database. You can also right click on each script in the Solution Explorer and run the scripts on your database project's referenced database.
A: DataDude? http://msdn.microsoft.com/en-us/vsts2008/db/default.aspx
A: Hey Chris, I also use the same method for keeping a database project; the only problem is that you often make changes to stored procedures, and sometimes you forget which ones you changed, so you might drag one over and forget the other.
Do you know of a way to synchronize the database project with the database, or a way to import the latest script for stored procs into your project after they have been added by dragging the first time?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there any way to detect the target class in static methods? Below is an example class hierarchy and code. What I'm looking for is a way to determine if 'ChildClass1' or 'ChildClass2' had the static method whoAmI() called on it without re-implementing it in each child class.
<?php
abstract class ParentClass {
public static function whoAmI () {
// NOT correct, always gives 'ParentClass'
$class = __CLASS__;
// NOT correct, always gives 'ParentClass'.
// Also very round-about and likely slow.
$trace = debug_backtrace();
$class = $trace[0]['class'];
return $class;
}
}
class ChildClass1 extends ParentClass {
}
class ChildClass2 extends ParentClass {
}
// Shows 'ParentClass'
// Want to show 'ChildClass1'
print ChildClass1::whoAmI();
print "\n";
// Shows 'ParentClass'
// Want to show 'ChildClass2'
print ChildClass2::whoAmI();
print "\n";
A: I believe what you're referring to is a known PHP bug. PHP 5.3 is aiming to address this issue with its new Late Static Bindings feature.
http://www.colder.ch/news/08-24-2007/28/late-static-bindings-expl.html
A: Now that PHP 5.3 is widely available in the wild, I wanted to put together a summary answer to this question to reflect newly available techniques.
As mentioned in the other answers, PHP 5.3 has introduced Late Static Binding via a new static keyword. As well, a new get_called_class() function is also available that can only be used within a class method (instance or static).
For the purpose of determining the class as was asked in this question, the get_called_class() function is appropriate:
<?php
abstract class ParentClass {
public static function whoAmI () {
return get_called_class();
}
}
class ChildClass1 extends ParentClass {
}
class ChildClass2 extends ParentClass {
}
// Shows 'ChildClass1'
print ChildClass1::whoAmI();
print "\n";
// Shows 'ChildClass2'
print ChildClass2::whoAmI();
print "\n";
The user contributed notes for get_called_class() include a few sample implementations that should work in PHP 5.2 as well by making use of debug_backtrace().
A: Class identification is often a symptom of not well understood Polymorphism.
The clients of ChildClass1 and ChildClass2 shouldn't need to distinguish between them.
There's no place where any class should ask about someObject.whoAmI().
Whenever you have the urge to write if someObject.whoAmI() == 'ChildClass1' { do X(someObject) } you should really add an X() method to the ParentClass with various implementations in the various ChildClasses.
This kind of "run-time type identification" can almost always be replaced with properly polymorphic class designs.
A: As of PHP 5.3 it'll be possible with the use of the static keyword, but for now it isn't possible.
A: No. Wait for PHP 5.3.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|