Q: What is the recommended way to skin an entire application in WPF?

I want my WPF application to be skinnable, by applying a certain XAML template, and the changes to be application wide, even for dynamic controls or controls that aren't even in the visual/logical tree. What can I use to accomplish this type of functionality? Are there any good resources or tutorials that show how this specific task can be done?

A: The replacing of resources will work, but I found "structural skinning" to be more powerful! Read more about it on CodeProject: http://www.codeproject.com/KB/WPF/podder1.aspx

A: I have found a way to apply generic templates to all controls without using template keys. The solution is to use the type of the control as the Style key. Example:

    <Application.Resources>
        <Style x:Key="{x:Type Button}" TargetType="{x:Type Button}">
            <Setter Property="Button.Background" Value="CornflowerBlue"/>
            <Setter Property="Button.Template">
                <Setter.Value>
                    <ControlTemplate x:Name="MyTemplate">
                        ...
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>
    </Application.Resources>

Here the Style key is x:Key="{x:Type Button}", so the style will be applied to all controls of type Button without the control declaring the Style property as a static or dynamic resource.

A: The basic approach to take is using resources all through your application and dynamically replacing the resources at runtime. See http://www.nablasoft.com/alkampfer/index.php/2008/05/22/simple-skinnable-and-theme-management-in-wpf-user-interface/ for the basic approach.
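To make the resource-swapping approach above concrete, here is a minimal C# sketch (not from the original answers; the SkinManager class name and skin file URI are illustrative). It assumes each skin is a ResourceDictionary in a XAML file compiled into the application, and that controls reference brushes and styles via DynamicResource so they re-resolve when the dictionaries change:

    using System;
    using System.Windows;

    public static class SkinManager
    {
        public static void ApplySkin(Uri skinDictionaryUri)
        {
            // Load the skin's ResourceDictionary from a XAML resource.
            var skin = (ResourceDictionary)Application.LoadComponent(skinDictionaryUri);

            // Replace the application-level merged dictionaries so the
            // new skin wins all {DynamicResource ...} lookups.
            var merged = Application.Current.Resources.MergedDictionaries;
            merged.Clear();
            merged.Add(skin);
        }
    }

    // Usage (hypothetical relative URI to a skin shipped with the app):
    // SkinManager.ApplySkin(new Uri("Skins/BlueSkin.xaml", UriKind.Relative));

Anything looked up via StaticResource is resolved once at load time and will not pick up the swap, which is why the answers above stress dynamic resource references.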
{ "language": "en", "url": "https://stackoverflow.com/questions/120914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Create a database using T-SQL on a specified location

How do I create a database using a T-SQL script on a specified location? Let's say I want to create a SQL Server database in D:\temp\dbFolder. How do I do this?

A: When you create the new database you specify the location. For example:

    USE [master]
    GO
    CREATE DATABASE [AdventureWorks] ON PRIMARY
    ( NAME = N'AdventureWorks_Data',
      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf',
      SIZE = 167872KB, MAXSIZE = UNLIMITED, FILEGROWTH = 16384KB )
    LOG ON
    ( NAME = N'AdventureWorks_Log',
      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Log.ldf',
      SIZE = 2048KB, MAXSIZE = 2048GB, FILEGROWTH = 16384KB )
    GO

A:
* Create the folder on your file system: D:\temp\dbFolder\
* Run the script:

    USE master;
    GO
    CREATE DATABASE TestDB1
    ON ( NAME = Sales_dat, FILENAME = 'D:\temp\dbFolder\TestDB1.mdf')
    LOG ON ( NAME = Sales_log, FILENAME = 'D:\temp\dbFolder\TestDB1.ldf');
    GO

A: Using variables in Management Studio, expanding on the previous examples.

Create the folders and subfolders. Example: root folder E:\MSSQL\DATA, subfolders E:\MSSQL\DATA\DB and E:\MSSQL\DATA\Logs:

    MKDIR "E:\MSSQL\DATA\DB"
    MKDIR "E:\MSSQL\DATA\Logs"

Change the @DBNAME variable from 'Test_DB' to your 'DesiredName_DB'. Change the root folder path @DataPath from 'E:\MSSQL\DATA' to the folders created above. Run the below in Management Studio:

    DECLARE @DBNAME VARCHAR(MAX)
    DECLARE @DataPath AS NVARCHAR(MAX)
    DECLARE @sql VARCHAR(MAX)

    SET @DBNAME = N'Test_DB'
    SET @DataPath = N'E:\MSSQL\DATA'

    SELECT @sql = 'USE MASTER'
    EXEC (@sql)

    SELECT @sql = 'CREATE DATABASE '+ quotename(@DBNAME) + ' ON PRIMARY
    ( NAME = ''' + @DBNAME + '_DB'', FILENAME = ''' + @DataPath + '\DB\' + @DBNAME + '.mdf'',
      SIZE = 3136KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
    LOG ON
    ( NAME = '''+ @DBNAME + '_Log'', FILENAME = '''+ @DataPath + '\Logs\' + @DBNAME + '_log.ldf'',
      SIZE = 832KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )'
    EXEC (@sql)

Or another variation on the same theme:

    DECLARE @DBNAME VARCHAR(MAX)
    DECLARE @DataFilePath AS NVARCHAR(MAX)
    DECLARE @LogFilePath AS NVARCHAR(MAX)
    DECLARE @sql VARCHAR(MAX)

    SET @DBNAME = N'Test_DB'
    SET @DataFilePath = N'E:\MSSQL\DATA\DB\'
    SET @LogFilePath = N'E:\MSSQL\DATA\Logs\'

    SELECT @sql = 'USE MASTER'
    EXEC (@sql)

    SELECT @sql = 'CREATE DATABASE '+ quotename(@DBNAME) + ' ON PRIMARY
    ( NAME = ''' + @DBNAME + '_DB'', FILENAME = ''' + @DataFilePath + @DBNAME + '.mdf'',
      SIZE = 3136KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
    LOG ON
    ( NAME = '''+ @DBNAME + '_Log'', FILENAME = '''+ @LogFilePath + @DBNAME + '_log.ldf'',
      SIZE = 832KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )'
    EXEC (@sql)

A: From SQL Server Books Online, an example where the database filenames are explicitly defined:

    USE master
    GO
    CREATE DATABASE Sales
    ON
    ( NAME = Sales_dat,
      FILENAME = 'c:\program files\microsoft sql server\mssql\data\saledat.mdf',
      SIZE = 10, MAXSIZE = 50, FILEGROWTH = 5 )
    LOG ON
    ( NAME = 'Sales_log',
      FILENAME = 'c:\program files\microsoft sql server\mssql\data\salelog.ldf',
      SIZE = 5MB, MAXSIZE = 25MB, FILEGROWTH = 5MB )
    GO

A: See this link: CREATE DATABASE (Transact-SQL)

    CREATE DATABASE [ADestinyDb]
    CONTAINMENT = NONE
    ON PRIMARY
    ( NAME = N'ADestinyDb', FILENAME = N'D:\temp\dbFolder\ADestinyDb.mdf',
      SIZE = 3136KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
    LOG ON
    ( NAME = N'ADestinyDb_log', FILENAME = N'D:\temp\dbFolder\_log.ldf',
      SIZE = 832KB, MAXSIZE = 2048GB, FILEGROWTH = 10%)

A: Create the folder on your file system (D:\temp\dbFolder\) and run the below script (try the 'sa' login):

    USE master
    CREATE DATABASE [faltu] ON PRIMARY
    ( NAME = N'faltu', FILENAME = N'D:\temp\dbFolder\faltu.mdf',
      SIZE = 2048KB, FILEGROWTH = 1024KB )
    LOG ON
    ( NAME = N'faltu_log', FILENAME = N'D:\temp\dbFolder\faltu_log.ldf',
      SIZE = 1024KB, FILEGROWTH = 10%)
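As a hedged aside building on the answers above: the target folder must exist before CREATE DATABASE runs, and SQL Server 2005+ ships the undocumented-but-widely-used xp_create_subdir procedure (it is what maintenance plans use), so the folder can be created from T-SQL itself. A minimal sketch, assuming the SQL Server service account has rights under D:\temp:

    -- Create the target directory from within SQL Server, then the database.
    EXEC master.dbo.xp_create_subdir 'D:\temp\dbFolder';

    CREATE DATABASE TestDB2
    ON ( NAME = TestDB2_dat, FILENAME = 'D:\temp\dbFolder\TestDB2.mdf')
    LOG ON ( NAME = TestDB2_log, FILENAME = 'D:\temp\dbFolder\TestDB2.ldf');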
{ "language": "en", "url": "https://stackoverflow.com/questions/120917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Why does Python PEP 8 strongly recommend spaces over tabs for indentation?

I see on Stack Overflow and PEP 8 that the recommendation is to use spaces only for indentation in Python programs. I can understand the need for consistent indentation and I have felt that pain. Is there an underlying reason for spaces to be preferred? I would have thought that tabs were far easier to work with.

A: The answer to the question is: PEP 8 wants to make a recommendation and has decided that since spaces are more popular it will strongly recommend spaces over tabs.

Notes on PEP 8

PEP 8 says 'Use 4 spaces per indentation level.' It's clear that this is the standard recommendation.

'For really old code that you don't want to mess up, you can continue to use 8-space tabs.' It's clear that there are SOME circumstances when tabs can be used.

'Never mix tabs and spaces.' This is a clear prohibition of mixing - I think we all agree on this. Python can detect this and often chokes. Using the -tt argument makes this an explicit error.

'The most popular way of indenting Python is with spaces only. The second-most popular way is with tabs only.' This clearly states that both are used. Just to be ultra-clear: you should still never mix spaces and tabs in the same file.

'For new projects, spaces-only are strongly recommended over tabs.' This is a clear recommendation, and a strong one, but not a prohibition of tabs.

I can't find a good answer to my own question in PEP 8. I use tabs, which I have used historically in other languages. Python accepts source with exclusive use of tabs. That's good enough for me.

I thought I would have a go at working with spaces. In my editor, I configured a file type to use spaces exclusively, so it inserts 4 spaces if I press tab. If I press tab too many times, I have to delete the spaces! Arrgh! Four times as many deletes as tabs! My editor can't tell that I'm using 4 spaces for indents (although AN editor might be able to do this) and obviously insists on deleting the spaces one at a time.

Couldn't Python be told to consider tabs to be n spaces when it's reading indentation? If we could agree on 4 spaces per indentation and 4 spaces per tab and allow Python to accept this, then there would be no problems. We should find win-win solutions to problems.

A: I personally don't agree with spaces over tabs. To me, tabs are a document layout character/mechanism while spaces are for content or delineation between commands in the case of code. I have to agree with Jim's comments that tabs aren't really the issue; it is people and how they want to mix tabs and spaces.

That said, I've forced myself to use spaces for the sake of convention. I value consistency over personal preference.

Edit 2022: Use spaces. Follow the language conventions and those set in the particular project you're working on. Use a linter to ensure those conventions are maintained. Format on save. Lint on commit. This will reduce "bikeshedding" on a team. As I mentioned so many years ago, consistency over personal preference!

Bikeshedding, also known as Parkinson's law of triviality, describes our tendency to devote a disproportionate amount of our time to menial and trivial matters while leaving important matters unattended.

A: I've always used tabs in my code. That said, I've recently found a reason to use spaces: when developing on my Nokia N900 internet tablet, I had a keyboard without a tab key. This forced me to either copy and paste tabs or re-write my code with spaces. I've run into the same problem with other phones. Granted, this is not a standard use of Python, but it's something to keep in mind.

A: The reason for spaces is that tabs are optional. Spaces are the actual lowest-common denominator in punctuation.

Every decent text editor has a "replace tabs with spaces" option and many people use it. But not always. While some text editors might replace a run of spaces with a tab, this is really rare.

Bottom line: you can't go wrong with spaces. You might go wrong with tabs. So don't use tabs and reduce the risk of mistakes.

A: The main problems with indentation occur when you mix tabs and spaces. Obviously this doesn't tell you which you should choose, but it is a good reason to recommend one, even if you pick it by flipping a coin.

However, IMHO there are a few minor reasons to favour spaces over tabs:

* Different tools. Sometimes code gets displayed outside of a programmer's editor, e.g. posted to a newsgroup or forum. Spaces generally do better than tabs here - everywhere spaces would get mangled, tabs do as well, but not vice-versa.

* Programmers see the source differently. This is deeply subjective - it's either the main benefit of tabs, or a reason to avoid them, depending on which side you're on. On the plus side, developers can view the source with their preferred indentation, so a developer preferring 2-space indent can work with an 8-space developer on the same source and still see it as they like. The downside is that there are repercussions to this - some people like 8-space because it gives very visible feedback that they're too deeply nested; they may see code checked in by the 2-indenter constantly wrapping in their editor. Having every developer see the code the same way leads to more consistency with respect to line lengths, and other matters too.

* Continued line indentation. Sometimes you want to indent a line to indicate it is carried from the previous one, e.g.

    def foo():
        x = some_function_with_lots_of_args(foo, bar, baz,
                                            xyzzy, blah)

If using tabs, there's no way to align this for people using different tabstops in their editor without mixing spaces and tabs. This effectively kills the above benefit.

Obviously though, this is a deeply religious issue, which programming is plagued with. The most important issue is that we should choose one - even if that's not the one you favour. Sometimes I think that the biggest advantage of significant indentation is that at least we're spared brace-placement flamewars.

Also worth reading is this article by Jamie Zawinski on the issue.

A: The problem with tabs is that they are invisible, and people can never agree on the width of tabs. When you mix tabs and spaces, and you set tabstops at something other than Python's default (Python uses tabstops every 8 spaces), you will be seeing the code in a different layout than Python sees it. And because the layout determines blocks, you will be seeing different logic. It leads to subtle bugs.

If you insist on defying PEP 8 and using tabs - or worse, mixing tabs and spaces - at least always run python with the '-tt' argument, which makes inconsistent indentation (sometimes a tab, sometimes a space for the same indentation level) an error. Also, if possible, set your editor to display tabs differently. But really, the best approach is not to use tabs, period.

A: Since Python relies on indentation to recognize program structure, a clear way to identify indentation is required. This is the reason to pick either spaces or tabs. However, Python also has a strong philosophy of having only one way to do things, therefore there should be an official recommendation for one way to do indentation.

Both spaces and tabs pose unique challenges for an editor to handle as indentation. The handling of tabs themselves is not uniform across editors or even user settings. Since spaces are not configurable, they pose the more logical choice, as they guarantee that the outcome will look the same everywhere.

A: JWZ says it best:

When [people are] reading code, and when they're done writing new code, they care about how many screen columns by which the code tends to indent when a new scope (or sexpr, or whatever) opens...

...My opinion is that the best way to solve the technical issues is to mandate that the ASCII #9 TAB character never appear in disk files: program your editor to expand TABs to an appropriate number of spaces before writing the lines to disk...

...This assumes that you never use tabs in places where they are actually significant, like in string or character constants, but I never do that: when it matters that it is a tab, I always use '\t' instead.

A: Well well, seems like everybody is strongly biased towards spaces. I use tabs exclusively. I know very well why.

Tabs are actually a cool invention that came after spaces. They allow you to indent without pushing space millions of times or using a fake tab (that produces spaces).

I really don't get why everybody discriminates against the use of tabs. It is very much like old people discriminating against younger people for choosing a newer, more efficient technology and complaining that pulse dialing works on every phone, not just on these fancy new ones. "Tone dialing doesn't work on every phone, that's why it is wrong."

Your editor cannot handle tabs properly? Well, get a modern editor. It might be darn time; we are now in the 21st century, and the time when an editor was a high-tech, complicated piece of software is long past. We now have tons and tons of editors to choose from, all of which support tabs just fine. Also, you can define how wide a tab should be, a thing that you cannot do with spaces.

Cannot see tabs? What kind of argument is that? Well, you cannot see spaces either! May I be so bold as to suggest getting a better editor? One of those high-tech ones, released some 10 years ago already, that display invisible characters? (sarcasm off)

Using spaces causes a lot more deleting and formatting work. That is why I (and all other people that know this and agree with me) use tabs for Python.

Mixing tabs and spaces is a no-no, and there is no argument about that. That is a mess and can never work.

A: Note that the use of tabs confuses another aspect of PEP 8: "Limit all lines to a maximum of 79 characters."

Let's say, hypothetically, that you use a tab width of 2 and I use a tab width of 8. You write all your code so your longest lines reach 79 characters, then I start to work on your file. Now I've got hard-to-read code because (as the PEP states) "the default wrapping in most tools disrupts the visual structure of the code".

If we all use 4 spaces, it's ALWAYS the same. Anyone whose editor can support an 80-character width can comfortably read the code. Note: the 80-character limit is a holy war in and of itself, so let's not start that here.

Any non-sucky editor should have an option to use spaces as if they were tabs (both inserting and deleting), so that really shouldn't be a valid argument.

A: The answer was given right there in the PEP [ed: this passage has been edited out in 2013]. I quote: "The most popular way of indenting Python is with spaces only." What other underlying reason do you need?

To put it less bluntly: consider also the scope of the PEP as stated in the very first paragraph: "This document gives coding conventions for the Python code comprising the standard library in the main Python distribution."

The intention is to make all code that goes in the official Python distribution consistently formatted (I hope we can agree that this is universally a Good Thing™). Since the decision between spaces and tabs for an individual programmer is a) really a matter of taste and b) easily dealt with by technical means (editors, conversion scripts, etc.), there is a clear way to end all discussion: choose one.

Guido was the one to choose. He didn't even have to give a reason, but he still did by referring to empirical data.

For all other purposes you can either take this PEP as a recommendation, or you can ignore it - your choice, or your team's, or your team leader's. But if I may give you one piece of advice: don't mix 'em ;-) [ed: Mixing tabs and spaces is no longer an option.]

A: The most significant advantage I can tell of spaces over tabs is that a lot of programmers and projects use a set number of columns for the source code, and if someone commits a change with their tabstop set to 2 spaces and the project uses 4 spaces as the tabstop, the long lines are going to be too long for other people's editor windows. I agree that tabs are easier to work with, but I think spaces are easier for collaboration, which is important on a large open source project like Python.

A: You can have your cake and eat it too. Set your editor to expand tabs into spaces automatically. (That would be :set expandtab in Vim.)

A: Besides all the other reasons already named (consistency, never mixing spaces and tabs, etc.) I believe there are a few more reasons for the 4-space convention worth noting. These only apply to Python (and maybe other languages where indentation has meaning). Tabs may be nicer in other languages, depending on individual preferences.

* If an editor doesn't show tabs (which happens, depending on the configuration, in quite a few), another author might assume that your code uses 4 spaces, because almost all of the publicly available Python code does; if that same editor happens to have a tab width of 4, nasty things may happen - at least, that poor person will lose time over an indentation issue that would have been very easy to avoid by sticking to the convention. So for me, the number one reason is to avoid bugs through consistency.

* Reframing the question of which is better, tabs or spaces, one should ask what the advantages of tabs are; I've seen plenty of posts praising tabs, but few compelling arguments for them. Good editors like emacs, vi(m), kate, ... do proper indentation depending on the semantics of your code - even without tabs; the same editors can easily be configured to unindent on backspace, etc.

* Some people have very strong preferences when it comes to their freedom in deciding the look/layout of code; others value consistency over this freedom. Python drastically reduces this freedom by dictating that indentation is used for blocks etc. This may be seen as a bug or a feature, but it sort of comes with choosing Python. Personally, I like this consistency - when starting to code on a new project, at least the layout is close to what I'm used to, so it's fairly easy to read. Almost always.

* Using spaces for indentation allows "layout tricks" that may make code easier to comprehend; some examples of these are listed in PEP 8, e.g.

    foo = long_function_name(var_one, var_two,
                             var_three, var_four)

    # the same for lists
    a_long_list = [1,
                   2,
                   # ...
                   79]

    # or dictionaries
    a_dict = {"a_key": "a_value",
              "another_key": "another_value"}

Of course, the above can also be written nicely as

    foo = long_function_name(
        var_one, var_two, var_three, var_four)

    # the same for lists
    a_long_list = [
        1,
        2,
        # ...
        79]

    # or dictionaries
    a_dict = {
        "a_key": "a_value",
        "another_key": "another_value"}

However, the latter takes more lines of code, and fewer lines are sometimes argued to be better (because you get more on a single screen). But if you like alignment, spaces (preferably assisted by a good editor) give you, in a sense, more freedom in Python than tabs. [Well, I guess some editors allow you to do the same with tabs ;) - but with spaces, all of them do...]

* Coming back to the same argument that everybody else makes - PEP 8 dictates (ok, strongly recommends) spaces. If you come to a project that uses tabs only, of course, you have little choice. But because of the establishment of the PEP 8 conventions, almost all Python programmers are used to this style. This makes it sooooo much easier to find a consensus on a style that is accepted by most programmers... and getting individuals to agree on style might be very hard otherwise.

* Tools that help enforce style are usually aware of PEP 8 without extra effort. That's not a great reason, but it's just nice to have things work ~out of the box.

A: My guess is that most Linux text editors make tabs look ridiculously large by default. I can't think of any other good reason to use spaces over tabs.

A: The universal problem with tabs is that they can be represented differently in different environments. In a given editor, a tab might be 8 spaces or it might be 2. In some editors, you can control this, while in others you can't. Another issue with tabs is how they are represented in printed output. I believe most printers interpret a tab as 8 spaces.

With spaces, there is no doubt. Everything will line up as the author intended.

A: On the discussion between Jim and Thomas Wouters in the comments: the issue was... since the width of tabs and spaces both can vary - and since programmers can't agree on either width - why is it that tabs bear the blame?

I agree with Jim on that - tabs are NOT evil in and of themselves. But there is a problem... With spaces I can control how "MY OWN CODE" looks in EVERY editor in the world. If I use 4 spaces, then no matter what editor you open my code in, it will have the same distance from the left margin. With tabs I am at the mercy of the tab-width setting of the editor - even for MY OWN CODE. And I don't like that.

So while it is true that even spaces can't guarantee consistency, they at least afford you more control over the look of your OWN code everywhere - something that tabs can't. I think it's NOT the consistency in the programmers writing the code, but the consistency in editors showing that code, that spaces make easier to achieve (and impose).

A: Well, I would say that there is no such "recommendation" in PEP 8. It is stated as a recommendation since they won't prohibit you from writing tabs, but since code must be written in the most standardized way, use spaces we must.

That said, if I were the one writing the standard guide, I would recommend tabs, since they are a modern and more practical way to indent code.

Finally, I'll stress: I am not encouraging anybody to use tabs; I am saying that all of us should use spaces, as stated in the style guide.
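A brief tooling note on the '-tt' check mentioned above: the standard library also ships the tabnanny module, which reports ambiguous tab/space indentation without running the code. A minimal sketch (the file name whitespace_mess.py is hypothetical):

    # From the command line:
    #   python -m tabnanny whitespace_mess.py
    # Or programmatically:
    import tabnanny

    # Prints a diagnostic for each file whose indentation is ambiguous;
    # verbose mode also reports files that pass.
    tabnanny.verbose = 1
    tabnanny.check("whitespace_mess.py")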
{ "language": "en", "url": "https://stackoverflow.com/questions/120926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "194" }
Q: SharePoint error: "Cannot import Web Part"

I have a web part that I've developed, and if I manually install the web part it is fine. However, when I have packaged the web part following the instructions on this web site as a guide: http://www.theartofsharepoint.com/2007/05/how-to-build-solution-pack-wsp.html I get this error in the log files:

    09/23/2008 14:13:03.67 w3wp.exe (0x1B5C) 0x1534 Windows SharePoint Services Web Parts 8l4d Monitorable Error importing WebPart. Cannot import Project Filter.
    09/23/2008 14:13:03.67 w3wp.exe (0x1B5C) 0x1534 Windows SharePoint Services Web Parts 89ku High Failed to add webpart http%253A%252F%252Fuk64p12%252FPWA%252F%255Fcatalogs%252Fwp%252FProjectFilter%252Ewebpart;Project%2520Filter.
    Exception Microsoft.SharePoint.WebPartPages.WebPartPageUserException: Cannot import Project Filter.
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.CreateWebPart(Boolean clearConnections)
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.Import(SPWebPartManager manager, XmlReader reader, Boolean clearConnections, Uri webPartPageUri, SPWeb spWeb)
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.Import(SPWebPartManager manager, XmlReader reader, Boolean clearConnections, SPWeb spWeb)
      at Microsoft.SharePoint.WebPartPages.WebPartQuickAdd.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)

The pertinent bit is the WebPartPageUserException ("Cannot import Project Filter") thrown from WebPartImporter.CreateWebPart, accompanied by a rather terse error message: "Cannot import web part".

I have checked, and my .dll is registered as safe, it is in the GAC, the feature is activated, and the web parts appear in the web part library with all of the correct properties, showing that the .webpart files were read successfully. Everything appears to be in place, yet I get that error and little explanation from SharePoint of how to resolve it. Any help finding a solution is appreciated.

A: Figured it out. The error message is the one from the .webpart file:

    <?xml version="1.0" encoding="utf-8"?>
    <webParts>
      <webPart xmlns="http://schemas.microsoft.com/WebPart/v3">
        <metaData>
          <!--
          The following Guid is used as a reference to the web part class,
          and it will be automatically replaced with actual type name at deployment time.
          -->
          <type name="7F8C4D34-6311-4f22-87B4-A221FA8735BA" />
          <importErrorMessage>Cannot import Project Filter.</importErrorMessage>
        </metaData>
        <data>
          <properties>
            <property name="Title" type="string">Project Filter</property>
            <property name="Description" type="string">Provides a list of Projects that can be used to Filter other Web Parts.</property>
          </properties>
        </data>
      </webPart>
    </webParts>

The problem is that the original .webpart file was created on a 32-bit system with Visual Studio Extensions for WSS installed. However, as I'm now on a 64-bit machine, VSEWSS is unavailable, and I believe that results in the above GUID not being substituted, as I am not using those deployment tools. Replacing the GUID with the full type name works. So if you encounter the error message from your importErrorMessage node, then check that the type node in your .webpart file looks more like this (unrelated example):

    <type name="TitleWP.TitleWP, TitleWP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5" />

This is the assembly-qualified type name, in the format: Namespace.Class, Assembly, Version, Culture, PublicKeyToken. You can grab it easily from the web.config file associated with your SharePoint instance, as it will be in the safe controls list.

A: We had this same problem and found that the constructor of our web part was being called by the WebPartImporter, and within the constructor we were doing SPSecurity.RunWithElevatedPrivileges. For some reason the WebPartImporter cannot handle this. So we simply moved our code out of the constructor to OnInit (where it really belonged) and all is well.

A: All great suggestions. My problem was unique and silly: I had deployed the solution to the first Web Application but not to the second. SharePoint however still allowed me to activate the feature on the second Web App's Site Collection (not sure why). This meant the second Web App didn't have a safe control entry in its Web.config file (and I was stupidly checking the first Web.config). So, double-check you're looking at the correct web application/web.config.

A: Now I have an answer for a similar problem, as below.

When I tried to add a new web part to the page, SharePoint showed me an error message telling me it could not import my web part (this error message is defined in the .webpart file). So I tried to add some other web parts to the page, and a strange thing happened: some of them could be added, some of them could not.

After I traced the code of my web part and analysed it, I found the reason. The old code for my web part ProjectInfo was:

    namespace ProjectInfo
    {
        ....
        public class ProjectInfo : System.Web.UI.WebControls.WebParts.WebPart
        {
            .....
            private SPWeb _spWeb;
            private SPList _spList;
            private string _listName = "ProjectDocs";
            ......

            public ProjectInfo()
            {
                .....
                _spWeb = SPContext.Current.Web;   // It showed me an error here when I traced the code
                _spList = _spWeb.Lists[_listName];
                .....
            }
        }
    }

Stop now. I thought that it might be a web-page init-order problem. As the web page loads the web part control, the constructor ProjectInfo() runs first; actually, the web page hasn't finished its init by that time. So I did a test. First, I put a known-good web part on the page - it's OK. Then I tried to put the web part on the page which could not be added just now... OK!! It's working... because the page had already finished its init.

OK! I corrected my code:

    namespace ProjectInfo
    {
        ....
        public class ProjectInfo : System.Web.UI.WebControls.WebParts.WebPart
        {
            .....
            private SPWeb _spWeb;
            private SPList _spList;
            private string _listName = "ProjectDocs";
            ......

            public ProjectInfo()
            {
                .....
                // Removed the code from the constructor:
                //_spWeb = SPContext.Current.Web;
                //_spList = _spWeb.Lists[_listName];
                .....
            }

            protected override void CreateChildControls()
            {
                ....
                base.CreateChildControls();
                _spWeb = SPContext.Current.Web;
                _spList = _spWeb.Lists[_listName];
                ....
            }
        }
    }

After my test, the error message didn't happen again. LoL ~~ Hope this explanation will help you.

A: I have seen this anomaly several times without a good resolution.

* Check that your Assembly, Namespace, and Class name are correct EVERYWHERE. This has hung me up more than once.
* Make sure you have a valid SafeControls entry.
* Make sure your .webpart file is valid (let SharePoint create it for you if you can).

If you are absolutely positive that everything is correct, well then you are stuck in a place that I have been several times. The only thing that I can come up with is that VS is compiling the assembly wrong. The ONLY fix that I have found is to create a new VS project and set it up how you need, then copy THE TEXT of your old CS files into your new CS files... do not copy the files themselves... the goal is to have everything fresh. This has worked for me. Good luck.

A: Have you recycled your worker process or reset IIS?

A: Solved mine. I was getting this error:

    Error importing WebPart. Cannot import ........ Web Part.
    Failed to add webpart. Exception Microsoft.SharePoint.WebPartPages.WebPartPageUserException: Cannot import ... Web Part.
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.CreateWebPart(Boolean clearConnections)
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.Import(SPWebPartManager manager, XmlReader reader, Boolean clearConnections, Uri webPartPageUri, SPWeb spWeb)
      at Microsoft.SharePoint.WebPartPages.WebPartImporter.Import(SPWebPartManager manager, XmlReader reader, Boolean clearConnections, SPWeb spWeb)
      at Microsoft.SharePoint.WebPartPages.WebPartQuickAdd.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)

"Cannot import webpart..." The problem was non-matching GUIDs. Check that the GUID on the web part class, in the .xml file, and in the .webpart file are the same. I was copy-pasting code from other web parts' sources (the Multiple Document Upload Web Part on CodePlex) and forgot to fix the GUIDs.

A: I have also experienced this error when the assemblies in the GAC that my web part referenced were signed by a different strong name key file than what I was expecting. I found this out when deciding to update these DLLs. When inserting them into the GAC I noticed that there were 2 entries for the same DLL, but they had different Public Key Tokens.

A: I got this error when I created a base class web part and then inherited a derived class from it. The base class was fine, but the derived class failed when I tried to add it to a web page, with the same error as in the original post. In my case I had to add a public modifier to the class:

    public class DerivedWebPart : BaseWebPart

Also I added a constructor in the derived class to call the base class one - although I think you shouldn't really need this:

    public DerivedWebPart() : base()
    {
    }

A: I found that mine did not import the first time, but if I clicked 'New' and added it, it would work. From there I grabbed a copy of the XML and saved it to my project. The web part worked great after that. It only took me DAYS to get to this point. A lot of wasted time.

A: I had a problem very similar to this, but GUIDs weren't the problem: my web part didn't have the CLSCompliant attribute set to false. Like so:

    namespace MyNamespace
    {
        [CLSCompliant(false)]
        [Guid("...")]
        public class MyWidget : MyWebPartBaseClass
        {
        }
    }
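As a hedged aside on the accepted fix: a quick way to produce the exact assembly-qualified name the .webpart <type> element expects is to ask .NET for it via reflection. A minimal sketch (the assembly file name ProjectFilter.dll is illustrative; standard reflection APIs only):

    using System;
    using System.Reflection;

    class PrintTypeNames
    {
        static void Main(string[] args)
        {
            // Path to the web part assembly (hypothetical; pass yours on the command line).
            Assembly asm = Assembly.LoadFrom(args.Length > 0 ? args[0] : "ProjectFilter.dll");

            foreach (Type t in asm.GetTypes())
            {
                // Each line is the full string the .webpart <type name="..."> attribute expects,
                // e.g. "MyNamespace.ProjectFilter, ProjectFilter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...".
                Console.WriteLine(t.AssemblyQualifiedName);
            }
        }
    }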
{ "language": "en", "url": "https://stackoverflow.com/questions/120928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Can I add custom version strings to a .NET DLL?

I can add custom version strings to a C++ DLL in Visual Studio by editing the .rc file by hand. For example, if I add to the VersionInfo section of the .rc file

    VALUE "BuildDate", "2008/09/19 15:42:52"

then that date is visible in the file explorer, in the DLL's properties, under the Version tab. Can I do the same for a C# DLL? Not just for build date, but for other version information (such as source control information).

UPDATE: I think there may be a way to do this by embedding a windows resource, so I've asked how to do that.

A: Expanding on Khoth's answer, in AssemblyInfo.cs you can do:

    [assembly: CustomResource("Build Date", "12/12/2012")]

where CustomResource is defined as:

    [AttributeUsage(AttributeTargets.Assembly)]
    public class CustomResourceAttribute : Attribute
    {
        private string the_variable;
        public string Variable { get { return the_variable; } }

        private string the_value;
        public string Value { get { return the_value; } }

        public CustomResourceAttribute(string variable, string value)
        {
            this.the_variable = variable;
            this.the_value = value;
        }
    }

This solution is nice because it gives you the flexibility you need and it does not cause any compiler warnings. Unfortunately it is not possible to use a DateTime, because the values passed to attributes must be constants, and a DateTime is not a constant.

A: In AssemblyInfo.cs, you can put:

    [assembly: System.Reflection.AssemblyInformationalVersion("whatever you want")]

It's a compiler warning if it's not a number like 1.2.3.4, but I'm fairly sure everything will work.
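A hedged usage sketch for the CustomResourceAttribute defined above: reading the values back at runtime via standard reflection. It assumes the attribute class is compiled into the same assembly and that one or more [assembly: CustomResource(...)] attributes have been applied:

    using System;
    using System.Reflection;

    class ReadCustomResource
    {
        static void Main()
        {
            Assembly asm = Assembly.GetExecutingAssembly();

            // Find every [assembly: CustomResource(...)] applied to this assembly.
            object[] attrs = asm.GetCustomAttributes(typeof(CustomResourceAttribute), false);
            foreach (CustomResourceAttribute attr in attrs)
            {
                Console.WriteLine("{0} = {1}", attr.Variable, attr.Value);
            }
        }
    }

Note that values stored this way are visible to reflection but not in Explorer's Version tab, which is why the question's UPDATE points to embedding a native Windows version resource instead.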
{ "language": "en", "url": "https://stackoverflow.com/questions/120936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is Test-and-Set used for?

After reading the Test-and-Set Wikipedia entry, I am still left with the question "What would a Test-and-Set be used for?" I realize that you can use it to implement a mutex (as described in Wikipedia), but what other uses does it have?

A: Imagine you were writing a banking application, and your application had a request to withdraw ten pounds (yes, I'm English ;) ) from the account. So you need to read the current account balance into a local variable, subtract the withdrawal and then write the balance back to memory.

However, what if another, concurrent request happens between you reading the value and you writing it out? There's the possibility that the result of that request will get completely overwritten by the first, and the account balance will be incorrect.

Test-and-set helps us fix that problem by checking that the value you're overwriting is what you think it should be. In this case, you can check that the balance was the original value that you read. Since it's atomic, it's non-interruptible, so no-one can pull the rug out from under you between the read and the write.

Another way to fix the same problem is to take out a lock on the memory location. Unfortunately, locks are tremendously difficult to get right, hard to reason about, have scalability issues and behave badly in the face of failures, so they're not an ideal (but definitely practical) solution. Test-and-set approaches form the basis of some Software Transactional Memories, which optimistically allow every transaction to execute concurrently, at the cost of rolling them all back if they conflict.

A: A good example is "increment."

Say two threads execute a = a + 1. Say a starts with the value 100. If both threads are running at the same time (multi-core), both would load a as 100, increment it to 101, and store that back in a. Wrong!

With test-and-set, you are saying "Set a to 101, but only if it currently has the value 100." In this case, one thread will pass that test but the other will fail. In the failure case, the thread can retry the entire statement, this time loading a as 101. Success.

This is generally faster than using a mutex because:

* Most of the time there isn't a race condition, so the update happens without having to acquire some sort of mutex.
* Even during a collision, one thread isn't blocked at all, and it's faster for the other thread to just spin and retry than it would be to suspend itself in line for some mutex.

A: You use it any time you want to write data to memory after doing some work and make sure another thread hasn't overwritten the destination since you started. A lot of lock/mutex-free algorithms take this form.

A: Basically, its use is exactly for mutexes, given the tremendous importance of atomicity. That's it. Test-and-set is an operation that could otherwise be performed with two separate instructions, non-atomic and faster (atomicity bears a hardware overhead on multiprocessor systems), so typically you wouldn't use it for other reasons.

A: It's used when you need to get a shared value, do something with it, and change the value, assuming another thread hasn't already changed it. As for practical uses, the last time I saw it was in implementations of concurrent queues (queues that may be pushed/popped by multiple threads without needing semaphores or mutexes).

Why would you use TestAndSet rather than a mutex? Because it generally requires less overhead than a mutex. Where a mutex requires OS intervention, a TestAndSet can be implemented as a single atomic instruction on the CPU. When running in parallel environments with hundreds of threads, a single mutex in a critical section of code can cause serious bottlenecks.

A: It can also be used to implement a spinlock:

    void spin_lock(struct spinlock *lock)
    {
        while (test_and_set(&lock->locked))
            ;
    }

A: Test-and-Set Lock (TSL) is used to implement entry into a critical section.

    TSL <destination> <origin>

In general terms, TSL is an atomic operation consisting of two steps:

* Copy the value from origin to destination
* Set the value at origin to 1

Let's see how we can implement a simple mutex for entering a critical section with a TSL CPU instruction. We need a memory cell that will be used as a shared resource; let's call it lock. It's important that we can set either the value 0 or 1 in this cell of memory. Then entering the critical section will look like:

    enter_critical_section:
        TSL <tmp>, <lock>          ; copy value from <lock> to <tmp> and set <lock> to 1
        CMP <tmp>, #0              ; check if the previous <lock> value was 0
        JNE enter_critical_section ; if the previous <lock> value was 1, we didn't enter the critical section and must try again
        RET                        ; if the previous <lock> value was 0, we entered the critical section and can return to the caller

To leave the critical section, just set the value of lock back to 0:

    leave_critical_section:
        MOV <lock>, #0
        RET

P.S. For example, in x86 there is an XCHG instruction, which allows exchanging the value of a register/memory location with another register:

    XCHG <destination> <origin>

Implementation of entering a critical section with the XCHG instruction:

    enter_critical_section:
        MOV <tmp>, #1
        XCHG <tmp>, <lock>
        CMP <tmp>, #0
        JNE enter_critical_section
        RET

A: Test-and-set (TAS) objects can be used to implement other concurrent objects, like fetch-and-increment (FAI). In [1] (subsection 3.4.1), FAI is implemented using TAS objects.

[1] Rachid Guerraoui, Petr Kuznetsov; Algorithms for Concurrent Systems
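A compilable version of the spinlock sketch above, offered as a hedged example: it uses the GCC/Clang __sync_lock_test_and_set and __sync_lock_release builtins, which are one of several ways to get an atomic test-and-set in C (the function and variable names here are illustrative, not from the original answers):

    #include <pthread.h>
    #include <stdio.h>

    /* __sync_lock_test_and_set atomically writes 1 and returns the
     * previous value - exactly the test-and-set primitive above. */
    static volatile int lock = 0;
    static long counter = 0;

    static void spin_lock(volatile int *l)
    {
        while (__sync_lock_test_and_set(l, 1))
            ;  /* previous value was 1: someone else holds the lock, so spin */
    }

    static void spin_unlock(volatile int *l)
    {
        __sync_lock_release(l);  /* atomically writes 0 */
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            spin_lock(&lock);
            counter++;           /* protected increment, as in the a = a + 1 example */
            spin_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);  /* always 200000 with the lock in place */
        return 0;
    }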
{ "language": "en", "url": "https://stackoverflow.com/questions/120937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: JDBC Database Connections in a Web App DAL

I am building a small website for fun/learning using a fairly standard Web/Service/Data Access layered design. For the Data Access Layer, what is the best way to handle creating Connection objects to call my SQL stored procedures, and why? Bearing in mind I am writing a lot of the code by hand (I know I could be using Hibernate etc. to do a lot of this for me)...

1) Should I create one static instance of the Connection and run all my queries through it, or will this cause concurrency problems?

2) Should I create a Connection instance per database call and accept the performance overhead? (I will look into connection pooling at a later date if this is the case.)

A: You should use one Connection per thread. Don't share connections across threads. Consider using Apache DBCP. This is a free and standard way of configuring database connections and drawing them from a pool. It's the method used by high-performance web servers like Tomcat. Furthermore, if you're using DBCP, since it's a pool (read: cached), there's little penalty to creating/closing connections frequently.

A: The standard way is to set up a DataSource. All application servers are able to do so via their admin console. The pool is then accessible by its JNDI name (e.g. "jdbc/MyDB"). The data source should, in fact, be a connection pool (and usually is). It caches connections, tests them before passing them to the application, and does a lot of other important functions.

In your code you (see the sketch after this answer):

* resolve the JNDI name and cast it into a DataSource
* get a connection from the data source
* do your work
* close the connection (it goes back to the pool here)

You can set up the pool yourself (using any freely available pool implementation), but it really doesn't make any sense if you're using an application server.

P.S. Since it's a web application, a good way to make sure you have closed your connection after the request is to use an HttpFilter. You can set one up in web.xml. When the request comes in, acquire the connection and put it into a ThreadLocal. During the request, get the connection from the ThreadLocal, but never close it. After the request, in the filter, close the connection.
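A minimal Java sketch of those four steps, as a hedged example: it assumes a container-managed pool registered under the JNDI name "jdbc/MyDB" used above (looked up through the standard "java:comp/env" naming context), and a stored procedure usp_CountSearches that is purely hypothetical:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class SearchDao {
        public int countSearches(String sessionId) throws NamingException, SQLException {
            // 1. Resolve the JNDI name and cast it to DataSource.
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDB");

            // 2. Get a connection from the pool.
            Connection con = ds.getConnection();
            try {
                // 3. Do your work (here: call the hypothetical stored procedure).
                CallableStatement stmt = con.prepareCall("{call usp_CountSearches(?)}");
                stmt.setString(1, sessionId);
                ResultSet rs = stmt.executeQuery();
                return rs.next() ? rs.getInt(1) : 0;
            } finally {
                // 4. Close the connection; a pooled connection is returned to the pool.
                con.close();
            }
        }
    }

The try/finally around the work is what guarantees the connection goes back to the pool even when the call throws, which is the same concern the HttpFilter suggestion addresses at the request level.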
{ "language": "en", "url": "https://stackoverflow.com/questions/120941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Running away from SharePoint

Have any of you ever tried to run away from SharePoint? I've worked with SharePoint enough to know that it is not something that interests me. My interests are more along the lines of APIs / backend / distributed development. Have any of you found ways, as consultants, to move away from SharePoint and keep learning other things of interest? I'm currently in a position where SharePoint is in huge demand and I can't quite find a way to simply step aside from it. Any suggestions?

A: If I infer correctly that you work for a consulting firm, then find out what other kinds of things your firm works on. Learn those technologies better than the people who currently work on them for your firm, involve yourself in those projects, even if just in a hallway-conversation manner, and come up with better (faster, cheaper) solutions for the problems your firm is solving. Your options really seem to be 3-fold:

* convince your boss your talents would be better used elsewhere
* convince your co-workers they want you on those other teams
* convince your company's clients that they want you, specifically

A: Learn Java, or Ruby. The Microsoft sales model of "attach" - whereby they sell a solution comprised of multiple technologies and then sell the next solution on the basis of "well, you have already invested in SharePoint, so you already have the skills in place and the infrastructure for this new bit of technology we have" - is here to stay... it's very successful. SharePoint is cloud computing for businesses that have MS shops... you avoid it by not doing C#. If you're doing C#, then given enough time your apps will need to run in the corporate cloud, and you should be looking after your career by embracing it. Just my 2p. Sorry if it's not quite the answer you wanted.

A: I know exactly what you mean. I think you don't mind the idea behind a product like SharePoint, but really hate the way it's been implemented and how problematic it is. I know it's a nightmare to work with. As a C# developer, I cringe when I hear the SharePoint word; SharePoint is Lord Voldemort. But unfortunately it comes with the job of being a senior C# / Microsoft developer. I say unfortunately because it's likely that, if you're working in a corporate structure, sooner or later you will end up having SharePoint in your solution. Not because it's good, but because, as others have said, MS use SharePoint as a Trojan horse to get and keep business.

There might be some hope with the new version of SharePoint coming out (2010). Maybe this will finally include a better programming / implementation model. Otherwise, either work for smaller companies (usually less pay, but not always), or try to play down your skills as a MOSS developer if possible. Never actively market them unless your salary depends on it. Remove the skill from your skill matrix, and turn down jobs that completely focus on MOSS. Some MOSS integration here and there you can live with; an entire solution focused on MOSS will drive you insane.

If all else fails, learn other non-Microsoft languages, and within a year or 2, SharePoint will be but a faded memory. I know lots of developers who are thinking about quitting IT because of SharePoint. I would say don't let it be the end of your career. And finally, bitch and moan, and inform managers on a weekly / daily basis as to why you are battling SharePoint. Let them know, and constantly remind them, how bad a technology it is.

A: When life deals you lemons, make lemonade.

Seriously, if you are seeing SharePoint in such high demand, maybe working with the beast is the best idea. SharePoint is really just middleware. SharePoint can simply be a distribution point for your solutions (i.e., a user interface such as a web application can be hosted on SharePoint through a Web Content part). If you look at it that way, SharePoint may even prove useful as a document repository or small-scale data store, in the form of lists.

A: Maybe you should turn down SharePoint contracts and accept contracts that interest you.

A: Depending on the market you are in, you can simply tell your boss at the consulting company you work for that you're not interested in doing SharePoint projects anymore and that you'll be forced to look elsewhere if they continue putting you on SharePoint projects. That would work around West Michigan, where developer demand is high and the supply is sub-par.

A: I'm, on the other hand, just starting to use SharePoint to enrich my currently boring C#-only projects. I'm starting to use it as a front-end to distributed and complicated systems: simple configuration and customization, reporting, management, system control - it looks like all this is available in this package, and it's easy to make it usable by non-techies and by beginners.

A: I personally don't want to work with SharePoint anymore. I've worked on developing a solution for it and even went full charge with a web integration of it. I hated it. First you have to master the awful programming model, then handle all the deployments, and that's not even the beginning. If you are developing a product for SharePoint, you have to debug the software itself, which is a feat of its own.

My solution to this is to be very upfront about it. I don't mind doing knowledge transfer and helping people out, but I don't want to be developing/deploying SharePoint applications. My boss gets it, my friends get it. Our latest joke came from someone who said a few months ago that it was "easy and fast to deploy applications with SharePoint". The joke? "Did he just put easy/fast in the same sentence as SharePoint?"

So unless your salary would be lower because of it... downplay your skills on it and be upfront with your boss. :)

A: Have you ever looked at Alfresco (http://alfresco.com)? It serves many of the same purposes as SharePoint, but does so from an open source J2EE application. It will leverage your existing collaboration / content management experience and expose you to a whole bunch of open source technologies. Full disclosure: I work for Alfresco.

A: I've already given this suggestion to another guy... Running from SharePoint won't be difficult, because technologies are structurally similar to each other. SharePoint is not the worst technology to be using, although it is limited in some ways... Fortunately, the software sphere is too wide to be afraid of not finding anything you can be interested in.
{ "language": "en", "url": "https://stackoverflow.com/questions/120949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I normalize a URL in python I'd like to know do I normalize a URL in python. For example, If I have a url string like : "http://www.example.com/foo goo/bar.html" I need a library in python that will transform the extra space (or any other non normalized character) to a proper URL. A: Have a look at this module: werkzeug.utils. (now in werkzeug.urls) The function you are looking for is called "url_fix" and works like this: >>> from werkzeug.urls import url_fix >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)') 'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29' It's implemented in Werkzeug as follows: import urllib import urlparse def url_fix(s, charset='utf-8'): """Sometimes you get an URL by a user that just isn't a real URL because it contains unsafe characters like ' ' and so on. This function can fix some of the problems in a similar way browsers handle data entered by the user: >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)') 'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29' :param charset: The target charset for the URL if the url was given as unicode string. """ if isinstance(s, unicode): s = s.encode(charset, 'ignore') scheme, netloc, path, qs, anchor = urlparse.urlsplit(s) path = urllib.quote(path, '/%') qs = urllib.quote_plus(qs, ':&=') return urlparse.urlunsplit((scheme, netloc, path, qs, anchor)) A: Real fix in Python 2.7 for that problem Right solution was: # percent encode url, fixing lame server errors for e.g, like space # within url paths. fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'@()*[]") For more information see Issue918368: "urllib doesn't correct server returned urls" A: Just FYI, urlnorm has moved to github: http://gist.github.com/246089 A: use urllib.quote or urllib.quote_plus From the urllib documentation: quote(string[, safe]) Replace special characters in string using the "%xx" escape. Letters, digits, and the characters "_.-" are never quoted. The optional safe parameter specifies additional characters that should not be quoted -- its default value is '/'. Example: quote('/~connolly/') yields '/%7econnolly/'. quote_plus(string[, safe]) Like quote(), but also replaces spaces by plus signs, as required for quoting HTML form values. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'. EDIT: Using urllib.quote or urllib.quote_plus on the whole URL will mangle it, as @ΤΖΩΤΖΙΟΥ points out: >>> quoted_url = urllib.quote('http://www.example.com/foo goo/bar.html') >>> quoted_url 'http%3A//www.example.com/foo%20goo/bar.html' >>> urllib2.urlopen(quoted_url) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\python25\lib\urllib2.py", line 124, in urlopen return _opener.open(url, data) File "c:\python25\lib\urllib2.py", line 373, in open protocol = req.get_type() File "c:\python25\lib\urllib2.py", line 244, in get_type raise ValueError, "unknown url type: %s" % self.__original ValueError: unknown url type: http%3A//www.example.com/foo%20goo/bar.html @ΤΖΩΤΖΙΟΥ provides a function that uses urlparse.urlparse and urlparse.urlunparse to parse the url and only encode the path. This may be more useful for you, although if you're building the URL from a known protocol and host but with a suspect path, you could probably do just as well to avoid urlparse and just quote the suspect part of the URL, concatenating with known safe parts. 
A: Valid for Python 3.5: import urllib.parse urllib.parse.quote([your_url], "\./_-:") example: import urllib.parse print(urllib.parse.quote("http://www.example.com/foo goo/bar.html", "\./_-:")) the output will be http://www.example.com/foo%20goo/bar.html Font: https://docs.python.org/3.5/library/urllib.parse.html?highlight=quote#urllib.parse.quote A: Because this page is a top result for Google searches on the topic, I think it's worth mentioning some work that has been done on URL normalization with Python that goes beyond urlencoding space characters. For example, dealing with default ports, character case, lack of trailing slashes, etc. When the Atom syndication format was being developed, there was some discussion on how to normalize URLs into canonical format; this is documented in the article PaceCanonicalIds on the Atom/Pie wiki. That article provides some good test cases. I believe that one result of this discussion was Mark Nottingham's urlnorm.py library, which I've used with good results on a couple projects. That script doesn't work with the URL given in this question, however. So a better choice might be Sam Ruby's version of urlnorm.py, which handles that URL, and all of the aforementioned test cases from the Atom wiki. A: Py3 from urllib.parse import urlparse, urlunparse, quote def myquote(url): parts = urlparse(url) return urlunparse(parts._replace(path=quote(parts.path))) >>> myquote('https://www.example.com/~user/with space/index.html?a=1&b=2') 'https://www.example.com/~user/with%20space/index.html?a=1&b=2' Py2 import urlparse, urllib def myquote(url): parts = urlparse.urlparse(url) return urlparse.urlunparse(parts[:2] + (urllib.quote(parts[2]),) + parts[3:]) >>> myquote('https://www.example.com/~user/with space/index.html?a=1&b=2') 'https://www.example.com/%7Euser/with%20space/index.html?a=1&b=2' This quotes only the path component. A: I encounter such an problem: need to quote the space only. fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'@()*[]") do help, but it's too complicated. So I used a simple way: url = url.replace(' ', '%20'), it's not perfect, but it's the simplest way and it works for this situation. A: A lot of answers here talk about quoting URLs, not about normalizing them. The best tool to normalize urls (for deduplication etc.) in Python IMO is w3lib's w3lib.url.canonicalize_url util. Taken from the official docs: Canonicalize the given url by applying the following procedures: - sort query arguments, first by key, then by value percent encode paths ; non-ASCII characters are percent-encoded using UTF-8 (RFC-3986) - percent encode query arguments ; non-ASCII characters are percent-encoded using passed encoding (UTF-8 by default) - normalize all spaces (in query arguments) ‘+’ (plus symbol) - normalize percent encodings case (%2f -> %2F) - remove query arguments with blank values (unless keep_blank_values is True) - remove fragments (unless keep_fragments is True) - List item The url passed can be bytes or unicode, while the url returned is always a native str (bytes in Python 2, unicode in Python 3). 
>>> import w3lib.url >>> >>> # sorting query arguments >>> w3lib.url.canonicalize_url('http://www.example.com/do?c=3&b=5&b=2&a=50') 'http://www.example.com/do?a=50&b=2&b=5&c=3' >>> >>> # UTF-8 conversion + percent-encoding of non-ASCII characters >>> w3lib.url.canonicalize_url('http://www.example.com/r\u00e9sum\u00e9') 'http://www.example.com/r%C3%A9sum%C3%A9' I've used this util with great success when broad crawling the web to avoid duplicate requests because of minor url differences (different parameter order, anchors etc)
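If you only need a couple of the normalization steps discussed above (lowercase scheme/host, dropping default ports) and don't want another dependency, a minimal hand-rolled sketch looks like this — purely illustrative, not a replacement for w3lib or urlnorm:

from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {'http': 80, 'https': 443}

def normalize(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or '').lower()   # ignores userinfo for brevity
    # keep the port only if it isn't the scheme's default
    if parts.port and parts.port != DEFAULT_PORTS.get(scheme):
        host = '%s:%d' % (host, parts.port)
    return urlunsplit((scheme, host, parts.path or '/', parts.query, parts.fragment))

>>> normalize('HTTP://Example.COM:80/some/path')
'http://example.com/some/path'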
{ "language": "en", "url": "https://stackoverflow.com/questions/120951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Stored Procedure; Insert Slowness I have an SP that takes 10 seconds to run about 10 times (about a second every time it is run). The platform is asp .net, and the server is SQL Server 2005. I have indexed the table (not only on the PK), and that is not the issue. Some caveats:

* usp_SaveKeyword is not the issue. I commented out that entire SP and it made no difference.
* I set @SearchID to 1 and the time was significantly reduced, only taking about 15ms on average for the transaction.
* I commented out the entire stored procedure except the insert into tblSearches and, strangely, it took more time to execute.

Any ideas of what could be going on?

set ANSI_NULLS ON
go

ALTER PROCEDURE [dbo].[usp_NewSearch]
    @Keyword VARCHAR(50),
    @SessionID UNIQUEIDENTIFIER,
    @time SMALLDATETIME = NULL,
    @CityID INT = NULL
AS
BEGIN
    SET NOCOUNT ON;
    IF @time IS NULL SET @time = GETDATE();

    DECLARE @KeywordID INT;
    EXEC @KeywordID = usp_SaveKeyword @Keyword;
    PRINT 'KeywordID : '
    PRINT @KeywordID

    DECLARE @SearchID BIGINT;
    SELECT TOP 1 @SearchID = SearchID
    FROM tblSearches
    WHERE SessionID = @SessionID
        AND KeywordID = @KeywordID;

    IF @SearchID IS NULL
    BEGIN
        INSERT INTO tblSearches (KeywordID, [time], SessionID, CityID)
        VALUES (@KeywordID, @time, @SessionID, @CityID)
        SELECT Scope_Identity();
    END
    ELSE
    BEGIN
        SELECT @SearchID
    END
END

A: Enable "Display Estimated Execution Plan" in SQL Management Studio - where does the execution plan show you spending the time? It'll guide you on the heuristics being used to optimize the query (or not in this case). Generally the "fatter" lines are the ones to focus on - they're the ones generating large amounts of I/O. Unfortunately, even if you tell us the table schema, only you will be able to see how SQL actually chose to optimize the query. One last thing - have you got a clustered index on tblSearches?

A: Why are you using top 1 @SearchID instead of max (SearchID) or where exists in this query? top requires you to run the query and retrieve the first row from the result set. If the result set is large this could consume quite a lot of resources before you get out the final result set.

SELECT TOP 1 @SearchID = SearchID
FROM tblSearches
WHERE SessionID = @SessionID
    AND KeywordID = @KeywordID;

I don't see any obvious reason for this - either of the aforementioned constructs should get you something semantically equivalent to this with a very cheap index lookup. Unless I'm missing something, you should be able to do something like

select @SearchID = isnull (max (SearchID), -1)
  from tblSearches
 where SessionID = @SessionID
   and KeywordID = @KeywordID

This ought to be fairly efficient and (unless I'm missing something) semantically equivalent.

A: Triggers! They are insidious indeed.

A: * What is the clustered index on tblSearches? If the clustered index is not on the primary key, the database may be spending a lot of time reordering.
* How many other indexes do you have?
* Do you have any triggers?
* Where does the execution plan indicate the time is being spent?
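Regarding those trigger and clustered-index questions, these catalog queries (SQL Server 2005 syntax, table name taken from the question) show what is actually attached to the table:

-- any triggers defined on tblSearches?
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.tblSearches');

-- which indexes exist, and which one is clustered?
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.tblSearches');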
{ "language": "en", "url": "https://stackoverflow.com/questions/120952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C++ usage in embedded systems What features of C++ should be avoided in embedded systems? Please classify the answer by reason such as:

* memory usage
* code size
* speed
* portability

EDIT: Let's use an ARM7TDMI with 64k RAM as a target to control the scope of the answers.

A: The Rationale for the early Embedded C++ standard is an interesting read. See this article on EC++ as well. The Embedded C++ std was a proper subset of C++, i.e. it had no additions. The following language features were removed:

* Multiple inheritance
* Virtual base classes
* Run-time type information (typeid)
* New style casts (static_cast, dynamic_cast, reinterpret_cast and const_cast)
* The mutable type qualifier
* Namespaces
* Exceptions
* Templates

It's noted on the wiki page that Bjarne Stroustrup says (of the EC++ std), "To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be." Stroustrup goes on to recommend the document referenced by Prakash's answer.

A: Using an ARM7 and assuming you don't have an external MMU, dynamic memory allocation problems can be harder to debug. I'd add "judicious use of new / delete / free / malloc" to the list of guidelines.

A: If you're using an ARM7TDMI, avoid unaligned memory accesses at all costs. The basic ARM7TDMI core does not have alignment checking, and will return rotated data when you do an unaligned read. Some implementations have additional circuitry for raising an ABORT exception, but if you don't have one of those implementations, finding bugs due to unaligned accesses is very painful. Example:

const char x[] = "ARM7TDMI";
unsigned int y = *reinterpret_cast<const unsigned int*>(&x[3]);
printf("%c%c%c%c\n", y, y>>8, y>>16, y>>24);

* On an x86/x64 CPU, this prints "7TDM".
* On a SPARC CPU, this dumps core with a bus error.
* On an ARM7TDMI CPU, this might print something like "7ARM" or "ITDM", assuming that the variable "x" is aligned on a 32-bit boundary (which depends on where "x" is located and what compiler options are in use, etc.) and you are using little-endian mode. It's undefined behavior, but it's pretty much guaranteed not to work the way you want.

A: RTTI and Exception Handling:

* Increases code size
* Decreases performance
* Can often be replaced by cheaper mechanisms or a better software design.

Templates:

* Be careful with them if code size is an issue. If your target CPU has no instruction cache, or only a very tiny one, they may reduce performance as well (templates tend to bloat code if used without care). On the other hand, clever meta-programming can decrease the code size as well. There is no clear-cut answer on this.

Virtual functions and inheritance:

* These are fine for me. I write almost all of my embedded code in C. That does not stop me from using function-pointer tables to mimic virtual functions. They never became a performance problem.

A: In most systems you do not want to use new / delete unless you have overridden them with your own implementation that pulls from your own managed heap. Yes, it'll be work, but you are dealing with a memory-constrained system.

A: Choosing to avoid certain features should always be driven by quantitative analysis of the behavior of your software, on your hardware, with your chosen toolchain, under the constraints your domain entails. There are a lot of conventional wisdom "don'ts" in C++ development which are based on superstition and ancient history rather than hard data.
Unfortunately, this often results in a lot of extra workaround code being written to avoid using features that someone, somewhere, had a problem with once upon a time.

A: Exceptions are likely going to be the most common answer of what to avoid. Most implementations have a fairly large static memory cost, or a runtime memory cost. They also tend to make realtime guarantees harder. Look here for a pretty good example of a coding standard written for embedded C++.

A: I wouldn't have said there's a hard and fast rule to this; it depends a lot on your application. Embedded systems are typically:

* More constrained in the amount of memory they have available
* Often run on slower hardware
* Tend to be closer to hardware, i.e. driving it in some way, like fiddling with register settings.

Just like any other development though, you should balance all of the points you've mentioned against the requirements you were given / derived.

A: Regarding code bloat, I think the culprit is much more likely to be inline than templates. For example:

// foo.h
template <typename T>
void foo () { /* some relatively large definition */ }

// b1.cc
#include "foo.h"
void b1 () { foo<int> (); }

// b2.cc
#include "foo.h"
void b2 () { foo<int> (); }

// b3.cc
#include "foo.h"
void b3 () { foo<int> (); }

The linker most likely will merge all the definitions of 'foo' into a single translation unit. Therefore the size of 'foo' is no different to that of any other namespace function. If your linker doesn't do this, then you can use an explicit instantiation to do that for you:

// foo.h
template <typename T>
void foo ();

// foo.cc
#include "foo.h"
template <typename T>
void foo () { /* some relatively large definition */ }
template void foo<int> ();  // Definition of 'foo<int>' only in this TU

// b1.cc
#include "foo.h"
void b1 () { foo<int> (); }

// b2.cc
#include "foo.h"
void b2 () { foo<int> (); }

// b3.cc
#include "foo.h"
void b3 () { foo<int> (); }

Now consider the following:

// foo.h
inline void foo () { /* some relatively large definition */ }

// b1.cc
#include "foo.h"
void b1 () { foo (); }

// b2.cc
#include "foo.h"
void b2 () { foo (); }

// b3.cc
#include "foo.h"
void b3 () { foo (); }

If the compiler decides to inline 'foo' for you then you will end up with 3 different copies of 'foo'. No templates in sight!

EDIT: From a comment above from InSciTek Jeff
Using explicit instantiations for only the functions that you know will be used, you can also ensure that all unused functions are removed (which may actually reduce the code size compared with the non-template case):

// a.h
template <typename T>
class A
{
public:
  void f1();  // will be called
  void f2();  // will be called
  void f3();  // is never called
};

// a.cc
#include "a.h"
template <typename T> void A<T>::f1 () { /* ... */ }
template <typename T> void A<T>::f2 () { /* ... */ }
template <typename T> void A<T>::f3 () { /* ... */ }

template void A<int>::f1 ();
template void A<int>::f2 ();

Unless your tool chain is completely broken, the above will generate code only for 'f1' and 'f2'.

A: Time functions are usually OS dependent (unless you rewrite them). Use your own functions (especially if you have an RTC).

Templates are OK to use as long as you have enough space for code - otherwise don't use them.

Exceptions are not very portable. Also, printf functions that don't write to a buffer are not portable (you need to be somehow connected to the filesystem to write to a FILE* with printf).
Use only sprintf, snprintf and str* functions (strcat, strlen), and of course their wide-char counterparts (wcslen...).

If speed is the problem, maybe you should use your own containers rather than the STL. For example, to make sure a key is equal, the std::map container does 2 (yes, 2) comparisons with the 'less' operator (a < b == false && b < a == false means a == b). 'less' is the only comparison parameter received by the std::map class (and not only by it). This can lead to some performance loss in critical routines.

Templates and exceptions increase the code size (you can be sure of this). Sometimes even performance is affected by having larger code.

Memory allocation functions probably need to be rewritten as well, because they are OS dependent in many ways (especially when dealing with thread-safe memory allocation). malloc uses the _end variable (usually declared in the linker script) to allocate memory, but this is not thread safe in "unknown" environments.

Sometimes you should use Thumb rather than ARM mode. It can improve performance.

So for 64k memory I would say that C++ with some of its nice features (STL, exceptions etc.) can be overkill. I would definitely choose C.

A: Having used both the GCC ARM compiler and ARM's own SDT, I'd have the following comments:

* The ARM SDT produces tighter, faster code but is very expensive (>Eur5k per seat!). At my previous job we used this compiler and it was ok.
* The GCC ARM tools work very well though, and they're what I use on my own projects (GBA/DS).
* Use 'thumb' mode as this reduces code size significantly. On 16-bit bus variants of the ARM (such as the GBA) there is also a speed advantage.
* 64k is seriously small for C++ development. I'd use C & assembler in that environment.

On such a small platform you'll have to be careful of stack usage. Avoid recursion, large automatic (local) data structures etc. Heap usage will also be an issue (new, malloc etc). C will give you more control of these issues.

A: If you are using a development environment targeted toward embedded development or a particular embedded system, it should have limited some of the options for you already. Depending on the resource capabilities of your target, it will turn off some of the aforementioned items (RTTI, exceptions, etc.). This is the easier route to go, rather than keeping in mind what will increase size or memory requirements (although you should get to know that mentally anyway).

A: For embedded systems, you'll predominantly want to avoid things that have a definite abnormal runtime cost. Some examples: exceptions, and RTTI (to include dynamic_cast and typeid).

A: Make sure you know what features are supported by the compiler for your embedded platform, and also make sure you know the peculiarities of your platform. For example, TI's CodeComposer compiler does not do automatic template instantiations. As a result, if you want to use STL's sort, you need to instantiate five different things manually. It also does not support streams. Another example is that you may be using a DSP chip which does not have hardware support for floating point operations. That means every time you use a float or a double you pay the cost of a function call. To summarize, know everything there is to know about your embedded platform and your compiler, and then you will know which features to avoid.
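On that floating-point point: the usual workaround on such chips is fixed-point arithmetic. A minimal Q16.16 sketch, purely illustrative - real DSP code would normally use the vendor's intrinsics or an IQmath-style library:

typedef long fixed;  // assume 32-bit long on this target

inline fixed to_fixed(int n)  { return (fixed)n << 16; }
inline int   to_int(fixed f)  { return (int)(f >> 16); }

inline fixed fmul(fixed a, fixed b)
{
    // widen for the intermediate product; 'long long' was a common
    // compiler extension in toolchains of this era
    return (fixed)(((long long)a * b) >> 16);
}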
A: One particular problem that surprised me with ATMega GCC 3.something: when I added a virtual member function to one of my classes, I had to add a virtual destructor. At that point, the linker asked for operator delete(void *). I have no idea why that happens, and adding an empty definition for that operator solved the problem.

A: Note that the cost of exceptions depends on your code. In one application I profiled (a relatively small one on ARM968), exception support added 2% to execution time, and code size was increased by 9.5 KB. In this application, exceptions were thrown only in case something seriously bad happened -- i.e. never in practice -- which kept the execution time overhead very low.
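Regarding the ATMega operator delete issue above: once a class has a virtual destructor, the compiler emits a "deleting destructor" variant that references operator delete through the vtable, even if you never call new or delete yourself. The empty stub mentioned there is typically just:

// Only safe if delete is genuinely never called at runtime.
void operator delete(void *) { }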
{ "language": "en", "url": "https://stackoverflow.com/questions/120957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Which search technology to use with ASP.NET? What's your preferred method of providing a search facility on a website? Currently I prefer to use Lucene.net over Indexing Service / SQL Server full-text search (as there's nothing to set up server-side), but what other ways are being used out there?

A: We used Lucene.net, Indexing Service and SQL Server full-text. For a project with large and heavy DB search functionality, SQL search has the upper hand in terms of performance/resource hit. Otherwise Lucene is much better in all aspects.

A: Take a look at Solr. It uses Lucene for text indexing, but it is a full-blown HTTP server, so you can post documents over HTTP and do searches using URLs. The best part is that it gives you faceted searching out of the box, which will require a lot of work if you do it yourself.

A: You could use Google; it's not going to be the fastest indexer, but it does provide great results when you have no budget.

A: dtSearch is one we've often used, but I'm not really that big a fan of it.

A: A lot of people are using Google's custom search these days; even a couple of banks that I know of use it for their intranet.

A: If you need to index all the pages of your site (not just the ones Google indexes), or if you want to create a search for your intranet web sites, the Google Mini is pretty sweet. It will cost you some money, but it is really easy to have it up and running within just a couple of hours. Depending on how many pages you need to index it can be expensive though.

A: I'm using dtSearch and I (kind of) like it. The API isn't the greatest in the world for .NET, but it can get the job done and it's pretty fast. And it's cheap, so your boss will like it (~$1,000 US). The results leave something to be desired as it doesn't do any kind of semantic relevance ranking or anything fancy. It does a better job than anything you can get out of MS SQL Server though. It has a web spider that makes it easy to do quick search apps on a website. If you need to, you can use the API to create hooks into your database and to provide item-level security - but you have to do the work yourself. Their forum leaves something to be desired as well, but maybe people will start posting dtSearch stuff here. :)

A: Has anyone tried Microsoft Search Server Express? http://www.microsoft.com/enterprisesearch/serverproducts/searchserverexpress/default.aspx I haven't tried it yet, but it could potentially be powerful. From the site it looks primarily geared towards SharePoint users, but given its SDK I don't see why you couldn't use it for a regular old site search.

A: I also recommend SOLR. It's easy to set up, maintain, and configure. I've found it to be stable and easy to scale. There's a C# package for interfacing with Solr.
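To make the Solr suggestion concrete, the HTTP interface looks roughly like this (assuming a default local install on port 8983; the URLs and field names here are illustrative):

# index a document
POST http://localhost:8983/solr/update
     <add><doc><field name="id">42</field><field name="title">Hello</field></doc></add>

# query with facets
GET http://localhost:8983/solr/select?q=title:hello&facet=true&facet.field=category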
{ "language": "en", "url": "https://stackoverflow.com/questions/120965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why does a T-SQL block give an error even if it shouldn't even be executed? I was writing a (seemingly) straight-forward SQL snippet that drops a column after it makes sure the column exists. The problem: if the column does NOT exist, the code inside the IF clause complains that it can't find the column! Well, doh, that's why it's inside the IF clause! So my question is, why does a piece of code that shouldn't be executed give errors? Here's the snippet:

IF exists (select * from syscolumns
           WHERE id=object_id('Table_MD') and name='timeout')
BEGIN
    ALTER TABLE [dbo].[Table_MD]
    DROP COLUMN timeout
END
GO

...and here's the error:

Error executing SQL script [...]. Invalid column name 'timeout'

I'm using Microsoft SQL Server 2005 Express Edition.

A:

IF exists (select * from syscolumns
           WHERE id=object_id('Table_MD') and name='timeout')
BEGIN
    DECLARE @SQL nvarchar(1000)
    SET @SQL = N'ALTER TABLE [dbo].[Table_MD] DROP COLUMN timeout'
    EXEC sp_executesql @SQL
END
GO

Reason: when SQL Server compiles the code, it checks it for used objects (whether they exist). This check ignores any "IF", "WHILE", etc. constructs and simply checks all objects used in the code.

A: It may never be executed, but it's parsed for validity by SQL Server. The only way to "get around" this is to construct a block of dynamic SQL and then selectively execute it.

A: Here's how I got it to work: inside the IF clause, I changed the ALTER ... DROP ... command into exec ('ALTER ... DROP ...'). It seems SQL Server does a validity check on the code when parsing it, and sees that a non-existing column gets referenced somewhere (even if that piece of code will never be executed). Using the exec(ute) command wraps the problematic code in a string, the parser doesn't complain, and the code only gets executed when necessary. Here's the modified snippet:

IF exists (select * from syscolumns
           WHERE id=object_id('Table_MD') and name='timeout')
BEGIN
    exec ('ALTER TABLE [dbo].[Table_MD] DROP COLUMN timeout')
END
GO

A: By the way, there is a similar issue in Oracle, and a similar workaround using the "execute immediate" clause.
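For reference, the Oracle analogue mentioned in that last answer would look something like this sketch (same table/column names as the question; note user_tab_columns stores names in upper case by default):

DECLARE
  v_count INTEGER;
BEGIN
  SELECT COUNT(*) INTO v_count
    FROM user_tab_columns
   WHERE table_name = 'TABLE_MD'
     AND column_name = 'TIMEOUT';

  IF v_count > 0 THEN
    EXECUTE IMMEDIATE 'ALTER TABLE Table_MD DROP COLUMN timeout';
  END IF;
END;
/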
{ "language": "en", "url": "https://stackoverflow.com/questions/120966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a PHP security framework that protects phone numbers as well as passwords? I understand the mantra of "don't roll your own" when it comes to site security frameworks. For most cases anyway. I'm going to be collaborating on a site that integrates text-messaging into the system. I'd like to use an existing, well-tested security framework to protect the users' data, but I need it to also protect a user's phone number as well. I wouldn't want to be the one responsible for a list of users' cell phone numbers getting jacked and spammed. What suggestions can the community offer?

A: Note that techniques applied to passwords aren't applicable here. You can store a password salted and hashed (although the value of doing so can be disputed), but that doesn't work for phone numbers. If someone jacks your server, they can do anything the server can. This must include recovering the phone number, but doesn't include recovering the password if it's hashed well.

So the phone number is just a particular case of protecting confidential data.

If phone nos truly are the only sensitive data in the app, then you could look at walling off the part of the app that sends the texts, and asymmetrically encrypting the phone nos. In a different process (or on a different machine) run an app that has the key to decrypt phone nos. This app's interface would have maybe one function taking an encrypted no and the message to send. Keep this app simple, and test and audit the snot out of it. Either hide it from the outside world, or use authentication to prove the request really came from your main app, or both.

Neither the db nor the main part of the app is capable of decrypting phone nos (so for example you can't search on them), but they can encrypt them for addition to the db.

The general technique is called "privilege separation"; the above is just one example.

Note that phone nos would generally need to be padded with random data before encryption (like salting a hashed password). Otherwise it's possible to answer the question "is the encrypted phone number X?" without knowing the private key. That may not be a problem from the POV of spammers stealing your distribution list, but it is a problem from the POV of claiming that your phone numbers are securely stored, since it means a brute force attack becomes feasible: there are only a few billion phone nos, and it may be possible to narrow that down massively for a given user.

Sorry this doesn't directly answer your question: I don't know whether there's a PHP framework which will help implement privilege separation.

[Edit to add: in fact, it occurs to me that under the heading of 'keep the privileged app simple', you might not want to use a framework at all. It sort of depends on whether you think you're more or less likely to leave bugs in the small amount of code you really need, than the framework authors are to have left bugs in the much larger (but more widely used) amount of code they've written. But that's a huge over-simplification.]

A: Since you need to be able to retrieve the phone numbers, the only thing you can really do to protect them (beyond the normal things you would do to protect your db) is encrypt them. This means that you need to:

* Make sure the key doesn't leak when you inadvertently leak a database dump.
* Make sure your system doesn't helpfully decrypt the phone numbers when someone manages to SQL inject your system.
Of course the recommendation of not rolling your own still applies: use AES or some other well-respected cipher with a reasonable key length.

A: I'm pleased to announce the release of the hole-security system for PHP. This project aims to bring to PHP the kind of security that is provided in Java by Spring Security, formerly the Acegi Security System for Spring. It's designed to be attractive to Spring Security users because the philosophy is the same. It's an unobtrusive way to add security to a PHP site. The configuration is made using substrate IoC/DI, just as Spring Security uses Spring IoC/DI. An example configuration ships with the framework and can be used like this:

$context = new substrate_Context('./path/to/hole-security/hole-security-config.php');
$context->execute();
$hole_Security = $context->get('hole_FilterChainProxy');
$hole_Security->doFilter();

Just be sure that the bootstrap code of the framework is executed before the bootstrap of the MVC of your choice.

WebSite: http://code.google.com/p/hole-security/

Documentation: For the moment you can use the reference documentation of Spring Security where it applies. You can get a general idea from the Acegi Security reference documentation, because hole-security uses the same style of configuration, but keep in mind that it's based on Spring Security.

License: It's released under Apache License Version 2.0.

Features: hole-security brings a pluggable security system that you can adapt to the security requirements of your environment. Currently it is a very simple security system, because this is the first release, but with the base foundation it provides you can suggest or request new features to be added to the project.

Current features:

* In-memory DAO authentication as a proof of concept; you can switch to your preferred DAO or an implementation that gets user data from a database or wherever you store it. In future releases a PDO-based implementation will be created.
* Configured filters can be applied to URL patterns. The URL path matcher is pluggable too; currently it ships with an Ant-style path matcher.
* An Authorization Manager can be used in your application to decide whether or not to do something, always obtaining the reference from the substrate context.
* A shared Security Context accessible from any code in your application if hole_HttpSessionContextIntegrationFilter is applied. You can use this context to save information related to the session without using the session object directly.
* You can use a custom login page and customize it according to the hole_AuthenticationProcessingFilter configuration, or customize hole_AuthenticationProcessingFilter according to your custom login page.
* The default password encoder is plain text, without encoding. Future releases will have implementations for MD5, SHA-based, Base64 and other related encodings. You can create your own password encoder and get it configured.
* All the objects are loaded as required; if something like a filter is not used for a request, it will not be loaded. This increases the performance of the application.

hole-security has other related features as well.
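Going back to the encryption advice above, a minimal PHP sketch of the "web tier encrypts, only the walled-off service decrypts" idea (assumes the OpenSSL extension; the key path and phone number are made up):

<?php
// Web tier: public key only -- this box can encrypt but never decrypt.
$pub = openssl_pkey_get_public(file_get_contents('/etc/app/phone_pub.pem'));

// Random padding, per the earlier answer, so equal numbers can't be
// confirmed by encrypting a guess and comparing ciphertexts.
$plain = '5551234567' . '|' . bin2hex(openssl_random_pseudo_bytes(8));

if (!openssl_public_encrypt($plain, $cipher, $pub)) {
    die('encryption failed');
}
// Store base64_encode($cipher); only the separate sending service
// holds the private key and calls openssl_private_decrypt().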
{ "language": "en", "url": "https://stackoverflow.com/questions/120977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP Nested Tags in a Custom User Control I'm just getting started with Custom User Controls in C# and I'm wondering if there are any examples out there of how to write one which accepts nested tags? For example, when you create an asp:repeater you can add a nested tag for itemtemplate.

A: I followed Rob's blog post, and made a slightly different control. The control is a conditional one, really just like an if-clause:

<wc:PriceInfo runat="server" ID="PriceInfo">
    <IfDiscount>
        Lucky you, <b>you have a discount!</b>
    </IfDiscount>
    <IfNotDiscount>
        You don't have a discount.
    </IfNotDiscount>
</wc:PriceInfo>

In the code I then set the HasDiscount property of the control to a boolean, which decides which clause is rendered. The big difference from Rob's solution is that the clauses within the control really can hold arbitrary HTML/ASPX code. And here is the code for the control:

using System.ComponentModel;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace WebUtilities
{
    [ToolboxData("<{0}:PriceInfo runat=server></{0}:PriceInfo>")]
    public class PriceInfo : WebControl, INamingContainer
    {
        private readonly Control ifDiscountControl = new Control();
        private readonly Control ifNotDiscountControl = new Control();

        public bool HasDiscount { get; set; }

        [PersistenceMode(PersistenceMode.InnerProperty)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
        public Control IfDiscount
        {
            get { return ifDiscountControl; }
        }

        [PersistenceMode(PersistenceMode.InnerProperty)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
        public Control IfNotDiscount
        {
            get { return ifNotDiscountControl; }
        }

        public override void RenderControl(HtmlTextWriter writer)
        {
            if (HasDiscount)
                ifDiscountControl.RenderControl(writer);
            else
                ifNotDiscountControl.RenderControl(writer);
        }
    }
}

A: I ended up with something very similar to the answer by Rob (in wayback archive) @gudmundur-h, but I used ITemplate to get rid of that annoying "You can't place content between X tags" in the usage. I'm not entirely sure what is actually required or not, so it's all here just in case.

The partial/user control markup: mycontrol.ascx
Note the important bits: plcChild1 and plcChild2.

<!-- markup, controls, etc -->
<div class="shell">
    <!-- etc -->
    <!-- optional content with default, will map to `ChildContentOne` -->
    <asp:PlaceHolder ID="plcChild1" runat="server">
        Some default content in the first child. Will show this unless overwritten. Include HTML, controls, whatever.
    </asp:PlaceHolder>
    <!-- etc -->
    <!-- optional content, no default, will map to `ChildContentTwo` -->
    <asp:PlaceHolder ID="plcChild2" runat="server"></asp:PlaceHolder>
</div>

The partial/user control codebehind: mycontrol.ascx.cs

[ParseChildren(true), PersistChildren(true)]
[ToolboxItem(false /* don't care about drag-n-drop */)]
public partial class MyControlWithNestedContent : System.Web.UI.UserControl, INamingContainer
{
    // expose properties as attributes, etc

    /// <summary>
    /// "attach" template to child controls
    /// </summary>
    /// <param name="template">the exposed markup "property"</param>
    /// <param name="control">the actual rendered control</param>
    protected virtual void attachContent(ITemplate template, Control control)
    {
        if (null != template) template.InstantiateIn(control);
    }

    [PersistenceMode(PersistenceMode.InnerProperty), DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public virtual ITemplate ChildContentOne { get; set; }

    [PersistenceMode(PersistenceMode.InnerProperty), DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public virtual ITemplate ChildContentTwo { get; set; }

    protected override void CreateChildControls()
    {
        // clear stuff, other setup, etc

        // needed?
        base.CreateChildControls();
        this.EnsureChildControls(); // cuz...we want them?

        // using the templates, set up the appropriate child controls
        attachContent(this.ChildContentOne, this.plcChild1);
        attachContent(this.ChildContentTwo, this.plcChild2);
    }
}

Important bits (?):

* ParseChildren -- so stuff shows up?
* PersistChildren -- so dynamically created stuff doesn't get reset?
* PersistenceMode(PersistenceMode.InnerProperty) -- so controls are parsed correctly
* DesignerSerializationVisibility(DesignerSerializationVisibility.Content) -- ditto?

The control usage

<%@ Register Src="~/App_Controls/MyStuff/mycontrol.ascx" TagPrefix="me" TagName="MyNestedControl" %>

<me:MyNestedControl SomeProperty="foo" SomethingElse="bar" runat="server" ID="meWhatever">
    <%-- omit `ChildContentOne` to use default --%>
    <ChildContentTwo>Stuff at the bottom! (not empty anymore)</ChildContentTwo>
</me:MyNestedControl>

A: I wrote a blog post about this some time ago. In brief, if you had a control with the following markup:

<Abc:CustomControlUno runat="server" ID="Control1">
    <Children>
        <Abc:Control1Child IntegerProperty="1" />
    </Children>
</Abc:CustomControlUno>

You'd need the code in the control to be along the lines of:

[ParseChildren(true)]
[PersistChildren(true)]
[ToolboxData("<{0}:CustomControlUno runat=server></{0}:CustomControlUno>")]
public class CustomControlUno : WebControl, INamingContainer
{
    private Control1ChildrenCollection _children;

    [PersistenceMode(PersistenceMode.InnerProperty)]
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public Control1ChildrenCollection Children
    {
        get
        {
            if (_children == null)
            {
                _children = new Control1ChildrenCollection();
            }
            return _children;
        }
    }
}

public class Control1ChildrenCollection : List<Control1Child>
{
}

public class Control1Child
{
    public int IntegerProperty { get; set; }
}

A: My guess is you're looking for something like this? http://msdn.microsoft.com/en-us/library/aa478964.aspx Your tags were removed or are invisible, so can't really help you there.
{ "language": "en", "url": "https://stackoverflow.com/questions/120997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Stuck creating a "security trimmed" html.ActionLink extension method I'm trying to create an Extension Method for MVC's htmlHelper. The purpose is to enable or disable an ActionLink based on the AuthorizeAttribute set on the controller/action. Borrowing from the MVCSitemap code that Maarten Balliauw created, I wanted to validate the user's permissions against the controller/action before deciding how to render the actionlink. When I try to get the MvcHandler, I get a null value. Is there a better way to the the attributes for the controller/action? Here is the code for the extension method: public static class HtmlHelperExtensions { public static string SecurityTrimmedActionLink(this HtmlHelper htmlHelper, string linkText, string action, string controller) { //simplified for brevity if (IsAccessibleToUser(action, controller)) { return htmlHelper.ActionLink(linkText, action,controller); } else { return String.Format("<span>{0}</span>",linkText); } } public static bool IsAccessibleToUser(string action, string controller) { HttpContext context = HttpContext.Current; MvcHandler handler = context.Handler as MvcHandler; IController verifyController = ControllerBuilder .Current .GetControllerFactory() .CreateController(handler.RequestContext, controller); object[] controllerAttributes = verifyController.GetType().GetCustomAttributes(typeof(AuthorizeAttribute), true); object[] actionAttributes = verifyController.GetType().GetMethod(action).GetCustomAttributes(typeof(AuthorizeAttribute), true); if (controllerAttributes.Length == 0 && actionAttributes.Length == 0) return true; IPrincipal principal = handler.RequestContext.HttpContext.User; string roles = ""; string users = ""; if (controllerAttributes.Length > 0) { AuthorizeAttribute attribute = controllerAttributes[0] as AuthorizeAttribute; roles += attribute.Roles; users += attribute.Users; } if (actionAttributes.Length > 0) { AuthorizeAttribute attribute = actionAttributes[0] as AuthorizeAttribute; roles += attribute.Roles; users += attribute.Users; } if (string.IsNullOrEmpty(roles) && string.IsNullOrEmpty(users) && principal.Identity.IsAuthenticated) return true; string[] roleArray = roles.Split(','); string[] usersArray = users.Split(','); foreach (string role in roleArray) { if (role != "*" && !principal.IsInRole(role)) return false; } foreach (string user in usersArray) { if (user != "*" && (principal.Identity.Name == "" || principal.Identity.Name != user)) return false; } return true; } } A: Here is the working code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Principal; using System.Web.Routing; using System.Web.Mvc; using System.Collections; using System.Reflection; namespace System.Web.Mvc.Html { public static class HtmlHelperExtensions { public static string SecurityTrimmedActionLink( this HtmlHelper htmlHelper, string linkText, string action, string controller) { return SecurityTrimmedActionLink(htmlHelper, linkText, action, controller, false); } public static string SecurityTrimmedActionLink(this HtmlHelper htmlHelper, string linkText, string action, string controller, bool showDisabled) { if (IsAccessibleToUser(action, controller)) { return htmlHelper.ActionLink(linkText, action, controller); } else { return showDisabled ? 
String.Format("<span>{0}</span>", linkText) : ""; } } public static bool IsAccessibleToUser(string actionAuthorize, string controllerAuthorize) { Assembly assembly = Assembly.GetExecutingAssembly(); GetControllerType(controllerAuthorize); Type controllerType = GetControllerType(controllerAuthorize); var controller = (IController)Activator.CreateInstance(controllerType); ArrayList controllerAttributes = new ArrayList(controller.GetType().GetCustomAttributes(typeof(AuthorizeAttribute), true)); ArrayList actionAttributes = new ArrayList(); MethodInfo[] methods = controller.GetType().GetMethods(); foreach (MethodInfo method in methods) { object[] attributes = method.GetCustomAttributes(typeof(ActionNameAttribute), true); if ((attributes.Length == 0 && method.Name == actionAuthorize) || (attributes.Length > 0 && ((ActionNameAttribute)attributes[0]).Name == actionAuthorize)) { actionAttributes.AddRange(method.GetCustomAttributes(typeof(AuthorizeAttribute), true)); } } if (controllerAttributes.Count == 0 && actionAttributes.Count == 0) return true; IPrincipal principal = HttpContext.Current.User; string roles = ""; string users = ""; if (controllerAttributes.Count > 0) { AuthorizeAttribute attribute = controllerAttributes[0] as AuthorizeAttribute; roles += attribute.Roles; users += attribute.Users; } if (actionAttributes.Count > 0) { AuthorizeAttribute attribute = actionAttributes[0] as AuthorizeAttribute; roles += attribute.Roles; users += attribute.Users; } if (string.IsNullOrEmpty(roles) && string.IsNullOrEmpty(users) && principal.Identity.IsAuthenticated) return true; string[] roleArray = roles.Split(','); string[] usersArray = users.Split(','); foreach (string role in roleArray) { if (role == "*" || principal.IsInRole(role)) return true; } foreach (string user in usersArray) { if (user == "*" && (principal.Identity.Name == user)) return true; } return false; } public static Type GetControllerType(string controllerName) { Assembly assembly = Assembly.GetExecutingAssembly(); foreach (Type type in assembly.GetTypes()) { if (type.BaseType.Name == "Controller" && (type.Name.ToUpper() == (controllerName.ToUpper() + "Controller".ToUpper()))) { return type; } } return null; } } } I don't like using reflection, but I can't get to the ControllerTypeCache. A: Your ViewPage has a reference to the view context, so you could make it an extension method on that instead. Then you can just say if Request.IsAuthenticated or Request.User.IsInRole(...) usage would be like <%= this.SecurityLink(text, demandRole, controller, action, values) %> A: I really liked the code from @Robert's post, but there were a few bugs and I wanted to cache the gathering of the roles and users because reflection can be a little time costly. Bugs fixed: if there is both a Controller attribute and an Action attribute, then when the roles get concatenated, an extra comma doesn't get inserted between the controller's roles and the action's roles which will not get analyzed correctly. [Authorize(Roles = "SuperAdmin,Executives")] public class SomeController() { [Authorize(Roles = "Accounting")] public ActionResult Stuff() { } } then the roles string ends up being SuperAdmin,ExecutivesAccounting, my version ensures that Executives and Accounting is separate. My new code also ignores Auth on HttpPost actions because that could throw things off, albeit unlikely. 
Lastly, it returns MvcHtmlString instead of string for newer versions of MVC.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Reflection;
using System.Collections;
using System.Web.Mvc;
using System.Web.Mvc.Html;
using System.Security.Principal;

public static class HtmlHelperExtensions
{
    /// <summary>
    /// only show links the user has access to
    /// </summary>
    /// <returns></returns>
    public static MvcHtmlString SecurityLink(this HtmlHelper htmlHelper, string linkText, string action, string controller, bool showDisabled = false)
    {
        if (IsAccessibleToUser(action, controller))
        {
            return htmlHelper.ActionLink(linkText, action, controller);
        }
        else
        {
            return new MvcHtmlString(showDisabled ? String.Format("<span>{0}</span>", linkText) : "");
        }
    }

    /// <summary>
    /// reflection can be kinda slow, lets cache auth info
    /// </summary>
    private static Dictionary<string, Tuple<string[], string[]>> _controllerAndActionToRolesAndUsers = new Dictionary<string, Tuple<string[], string[]>>();

    private static Tuple<string[], string[]> GetAuthRolesAndUsers(string actionName, string controllerName)
    {
        var controllerAndAction = controllerName + "~~" + actionName;
        if (_controllerAndActionToRolesAndUsers.ContainsKey(controllerAndAction))
            return _controllerAndActionToRolesAndUsers[controllerAndAction];

        Type controllerType = GetControllerType(controllerName);
        MethodInfo matchingMethodInfo = null;
        foreach (MethodInfo method in controllerType.GetMethods())
        {
            if (method.GetCustomAttributes(typeof(HttpPostAttribute), true).Any()) continue;
            if (method.GetCustomAttributes(typeof(HttpPutAttribute), true).Any()) continue;
            if (method.GetCustomAttributes(typeof(HttpDeleteAttribute), true).Any()) continue;

            var actionNameAttr = method.GetCustomAttributes(typeof(ActionNameAttribute), true).Cast<ActionNameAttribute>().FirstOrDefault();
            if ((actionNameAttr == null && method.Name == actionName)
                || (actionNameAttr != null && actionNameAttr.Name == actionName))
            {
                matchingMethodInfo = method;
            }
        }

        if (matchingMethodInfo == null)
            return new Tuple<string[], string[]>(new string[0], new string[0]);

        var authAttrs = new List<AuthorizeAttribute>();
        authAttrs.AddRange(controllerType.GetCustomAttributes(typeof(AuthorizeAttribute), true).Cast<AuthorizeAttribute>());
        // don't forget the action-level attributes
        authAttrs.AddRange(matchingMethodInfo.GetCustomAttributes(typeof(AuthorizeAttribute), true).Cast<AuthorizeAttribute>());

        var roles = new List<string>();
        var users = new List<string>();
        foreach (var authAttr in authAttrs)
        {
            roles.AddRange(authAttr.Roles.Split(','));
            users.AddRange(authAttr.Users.Split(','));
        }
        var rolesAndUsers = new Tuple<string[], string[]>(roles.ToArray(), users.ToArray());

        try
        {
            _controllerAndActionToRolesAndUsers.Add(controllerAndAction, rolesAndUsers);
        }
        catch (System.ArgumentException)
        {
            //possible but unlikely that two threads hit this code at the exact same time and enter a race condition
            //instead of using a mutex, we'll just swallow the exception when the method gets added to the dictionary
            //for the second time. a mutex would only allow a single worker regardless of which action method they're
            //getting auth for. doing it this way eliminates a permanent bottleneck in favor of a once-in-a-blue-moon time hit
        }

        return rolesAndUsers;
    }

    public static bool IsAccessibleToUser(string actionName, string controllerName)
    {
        var rolesAndUsers = GetAuthRolesAndUsers(actionName, controllerName);
        var roles = rolesAndUsers.Item1;
        var users = rolesAndUsers.Item2;

        IPrincipal principal = HttpContext.Current.User;

        if (!roles.Any() && !users.Any() && principal.Identity.IsAuthenticated)
            return true;

        foreach (string role in roles)
        {
            if (string.IsNullOrEmpty(role)) continue;
            if (role == "*" || principal.IsInRole(role))
                return true;
        }
        foreach (string user in users)
        {
            if (string.IsNullOrEmpty(user)) continue;
            if (user == "*" || principal.Identity.Name == user)
                return true;
        }

        return false;
    }

    public static Type GetControllerType(string controllerName)
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        foreach (Type type in assembly.GetTypes())
        {
            if (type.BaseType.Name == "Controller" && (type.Name.ToUpper() == (controllerName.ToUpper() + "Controller".ToUpper())))
            {
                return type;
            }
        }
        return null;
    }
}
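For completeness, usage in a view would look something like this (helper names as defined above; the controller/action names are made up):

<%= Html.SecurityTrimmedActionLink("Edit users", "Index", "Admin", true) %>
<%= Html.SecurityLink("Reports", "Index", "Reports") %>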
{ "language": "en", "url": "https://stackoverflow.com/questions/121000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is it possible to develop for the iPhone without an iPhone? I know there are emulators, but is this good enough? If someone is serious about iPhone development, do they absolutely need an iPhone?

A: Necessary: How the app handles in your hands is critical to something like the iPhone. You cannot tell how it will feel to use when plastered straight in front of you in the emulator on a big screen. If you cannot hold it you won't be getting the true user experience.

A: If you need to learn Obj-C, go with the emulator for a while until you learn the ropes and save the expense for later. But yes, eventually you will need an iPhone for final testing. How long you can wait will depend on the features that your app uses. If all you are doing is button presses, you can wait a long time. If you are dragging, using location services, etc., you'll need a device earlier in the development cycle.

A: Are you trying to convince yourself or your boss? ;-) I'd say you need one. Emulation of such a new device can only go wrong. Plus don't forget the tactile aspects.

A: The iPod touch is a reasonable substitute provided you are not using: GPS, Bluetooth or Camera - the iPod touch doesn't have these. Cellular network - although the iPod touch has WiFi, the latency of a cellular network is way, way higher than that of a WiFi network. If you are doing anything like designing a custom protocol for your application, you will want to check real-world performance - and if you do this too late in the development cycle, you will be in for an unpleasant surprise. Whether you develop on the iPod touch or on the iPhone, you absolutely must have a device. This is not optional! The simulator is good, but it is not perfect, and there is no substitute for having a device which correctly indicates performance, screen resolution, brightness, form factor and all the other factors that you will need to consider in your application. If you buy an iPod touch, you will probably end up getting an iPhone too. I'd just go straight for the iPhone. That way you can use it as your main phone, and get a real feel for how the platform behaves and what an application needs to do to make it great.

A: Just my personal opinion: if you're serious it means that you're committed to the quality of your product. If you're committed to quality there is no way to deliver a product without actually launching it on the target platform :)

A: Kind-of "yes". Just download the iPhone SDK (it's easy and free) and check out the emulator that is in there. You'll see whether that suits your needs or not. The emulator is not indicative of real hardware performance, there's no touch input, some quirks might be different, some things may not work, etc.

A: The iPhone Simulator makes it easy to test your applications using the power and convenience of your desktop or laptop computer. Although your development computer may not simulate complicated touch events, such as multifinger touches, the Simulator lets you perform pinches. To perform a pinch, hold Option while tapping on the Simulator screen.

A: I'd say it depends on the kind of application you are developing. For a successful iPhone app, one which is properly integrated with the system, you are going to need to be able to test your tactile interface. That's hardly accomplished with the Emulator. So, my answer is yes, you do need an iPhone to develop iPhone apps.
Fortunately, if you cannot afford one, an iPod Touch (200 bucks) is a very competent replacement. The underlying hardware is pretty much the same.

A: Necessary. If you plan to develop a successful product it needs to be one the end users (not just the developers) find easy to use. The best way to do that would be to load your app on an iPhone then take it to various people and ask them to use it while you watch them to see if they experience any issues. Users can get mighty creative in trying to do things a developer never intended - just ask any support tech. Unless your app is going to sell for less than $500 total, it's a relatively small investment to build a quality app.

A: Don't forget that most types of iPhone apps also work on the iPod Touch, which is a one-time cost and no monthly fees. Even network apps work if the iPod Touch is connected to WiFi.

A: If you are serious about development, an iPhone (or iPod touch) is a must. However, the official SDK comes with a very complete "iPhone simulator". This will allow you to get a feel for Objective-C and the entire development workflow. The SDK requires Leopard. You don't need a Mac for this. You can use OSX86 on your PC, either installed on and booted from disk or through VMware. It works. In fact, you can even synch the iPhone through Leopard running in VMware. Now, testing on a real iPhone is a necessity because of performance, memory usage etc. Also, you need it for the entire authentication procedure, getting the keys etc. (if you want to sell your stuff on the App Store); testing this really requires an iPhone.

A: If you buy an iPod touch, you will probably end up getting an iPhone too. I'd just go straight for the iPhone. That way you can use it as your main phone, and get a real feel for how the platform behaves and what an application needs to do to make it great.

I absolutely agree with this. If you are seriously developing an iPhone application - for fun or for profit - you will have to run it on a real iPhone to test out compatibility and usability at some point. Since you're going to have to get one at some point, you may as well get one now. Don't go for half measures. An iPod Touch may be [significantly] cheaper to start with, but will be money wasted when you go and get your iPhone. (Of course, if you are planning an app that runs on the iPhone as well as the iPod Touch, then you MUST test it on both. You cannot assume that if it is good on one it must be good on the other). Also, by having an iPhone from day one, you can familiarize yourself with its user interface, its norms and the common metaphors the apps use. That will heavily feed into your own application design process, and make sure that your app looks, feels, and works like a first-class iPhone citizen.

A: During development of my first iPhone app, I wrote code that worked fine on the iPhone Simulator, but which did not work on the device. So I would say "Yes, you definitely need to test on an actual device." The simulator is not an emulator. It is not running the actual iPhone OS; it is running a set of Mac OS X libraries that are very similar, but not identical, to iPhone OS. The simulator is great for debugging and saving time during the code-and-test cycle, so you will use it a lot more than the device, but a device is indispensable. You really do need to touch-and-feel your app on a real device. A UI that works great while pointing and clicking with a mouse might be terrible when used with thumbs and fingers.
If there is any text entry, you need to feel how painful it is to type using the onscreen keyboard, to determine whether it makes sense to provide alternative data-entry methods. There are also significant performance differences between the simulator and actual devices. You need to test with the oldest (slowest) device you want to support to verify it is not too slow, doesn't run out of memory, etc. As others have suggested, an iPod Touch is also sufficient, so the cost of a device isn't huge. Also, try to find beta testers with a variety of different models.

A: From experience developing on other mobile platforms, once you get to a certain point, it really is best to have a physical device to test on. If this is something that you would also be using yourself, it is much easier to get some real-world testing done by using the application out and about. I also think it helps one to understand the platform better by having the device or devices you are targeting with your app.

A: If you are going to develop native apps for the iPhone, I would say get an iPhone or iPod touch to target. Emulators are good, but eventually you will need to target the real thing. If you are developing web-specific content, there are lots of things you can do without it (there are some great dev videos free from Apple's dev site which will only cost you a sign-up), but eventually I would think you would still want to test with the real deal.

A: Get a cheap used iPod touch, develop, get money, buy an iPhone 5. I'm a Nokia dev now and I'm thinking of moving to the iPhone. Actually, I already have the Mac for the work; I just need the device itself ;)

A: I've tried iPhoney and, compared to my iPhone (Mark 1), it's not the same. It's close - but not close enough to rely on if the interface is of importance to you.

A: You absolutely need the real device. The performance difference between the simulator and the actual iPhone/iPod Touch hardware is huge. Code that will run nice and fast in the simulator can easily turn out to be too slow to be usable on the real thing. Also, the API provided by the simulator is not 100% identical to the real thing, so code that works fine in the sim may not work on the device. The only way to know for sure is to test often on the actual device. As others have mentioned, the iPod touch works well as a development device. So if you don't need any of the features of the iPhone, it's a good, cheaper alternative.
{ "language": "en", "url": "https://stackoverflow.com/questions/121018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How do I get the modified date/time of a file in Python? How do I get the modified date/time of a file in Python?

A: os.path.getmtime(filepath) or os.stat(filepath).st_mtime

A: Formatted:

import os, time
print time.strftime("%m/%d/%Y %I:%M:%S %p", time.localtime(os.path.getmtime(fname)))
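If you'd rather have a datetime object than a formatted string or raw epoch seconds, this also works:

import os, datetime
mtime = datetime.datetime.fromtimestamp(os.path.getmtime(filepath))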
{ "language": "en", "url": "https://stackoverflow.com/questions/121025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: iPhone Full Screen Image How do I go about making an image or section of the page full screen on the iPhone? I have an image that is 480 x 320 and I want to pull that up full screen on the iPhone, but it has to be within a webpage so that I can make the image a link back to the previous page. Currently, if I drop the image on a blank page and open it up on the iPhone, it just shows up in the top left corner.

A: I'd say set the viewport meta tag in your blank page so Safari knows to render the page at the right size. For more information, see this link: Apple iPhone Safari Documentation

A: Nice links which may help you further. How to optimize your website for mobile devices: http://solutions.treypiepmeier.com/2008/12/01/optimizing-a-website-for-iphone/

A: Hopefully I'm not in breach of the NDA here, but here goes. Mobile Safari, by default, renders a page as if that page had been viewed by a desktop browser, with a default width of 980 pixels. To change this behavior you need to explicitly declare the viewport, which you do via meta tags. If you declare the width as the constant device-width, it'll default to 320 instead of 980, and everything looks great.

<head>
    <meta name="viewport" content="width=device-width,user-scalable=no" />
</head>
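Putting the pieces together, a minimal page for the scenario in the question might look like this (file names are placeholders):

<html>
<head>
    <meta name="viewport" content="width=device-width,user-scalable=no" />
    <style> body { margin: 0; } img { width: 100%; } </style>
</head>
<body>
    <a href="previous.html"><img src="photo.png" alt="full screen image" /></a>
</body>
</html>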
{ "language": "en", "url": "https://stackoverflow.com/questions/121051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Conversion from 32 bit integer to 4 chars What is the best way to divide a 32 bit integer into four (unsigned) chars in C#?

A: Quick'n'dirty:

int value = 0x48454C4F;
Console.WriteLine(Encoding.ASCII.GetString(
  BitConverter.GetBytes(value).Reverse().ToArray()
));

Converting the int to bytes, reversing the byte array for the correct order and then getting the ASCII character representation from it.

EDIT: The Reverse method is an extension method from .NET 3.5, just for info. Reversing the byte order may also not be needed in your scenario.

A: Char? Maybe you are looking for this handy little helper function?

Byte[] b = BitConverter.GetBytes(i);
Char c = (Char)b[0];
[...]

A: It's not clear if this is really what you want, but:

int x = yourNumber();
char a = (char)(x & 0xff);
char b = (char)((x >> 8) & 0xff);
char c = (char)((x >> 16) & 0xff);
char d = (char)((x >> 24) & 0xff);

This assumes you want the bytes interpreted as the lowest range of Unicode characters.

A: I have tried it a few ways and clocked the time taken to convert 1000000 ints.

Built-in convert method, 325000 ticks:

Encoding.ASCII.GetChars(BitConverter.GetBytes(x));

Pointer conversion, 100000 ticks:

static unsafe char[] ToChars(int x)
{
    byte* p = (byte*)&x;
    char[] chars = new char[4];
    chars[0] = (char)*p++;
    chars[1] = (char)*p++;
    chars[2] = (char)*p++;
    chars[3] = (char)*p;
    return chars;
}

Bitshifting, 77000 ticks:

public static char[] ToCharsBitShift(int x)
{
    char[] chars = new char[4];
    chars[0] = (char)(x & 0xFF);
    chars[1] = (char)(x >> 8 & 0xFF);
    chars[2] = (char)(x >> 16 & 0xFF);
    chars[3] = (char)(x >> 24 & 0xFF);
    return chars;
}

A: To get the 8-bit blocks:

int a = i & 255;    // bin 11111111
int b = i & 65280;  // bin 1111111100000000

To break the upper blocks down into single bytes, just divide them by the proper power of two and perform another logical AND to get your final byte.
Edit: Jason's solution with the bitshifts is much nicer of course.

A: .NET uses Unicode; a char is 2 bytes, not 1. To convert binary data containing non-Unicode text, use the System.Text.Encoding class. If you do want 4 bytes and not chars, then replace the char with byte in Jason's answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/121059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Why can't VS2008 place an exception helper popup onto a secondary monitor? I've recently acquired a second monitor and now run VS2008 SP1 maximized on my secondary (and bigger) monitor. This theoretically has the benefit of opening the application under development on the primary monitor, where -- as it seems to me -- all newly started applications go. So far, so good. The problem now, though, is that the exception helper popup is not opened on the secondary monitor. Even worse, it is only shown when the Studio window is far enough on the primary monitor! If I drag the studio with an opened exception helper from the primary to the secondary monitor, the helper is dragged with the window until it hits the border between the two monitors, where it suddenly disappears. Has somebody experienced this too? Is there any workaround? Anything else I should try? A: VS isn't multi monitor aware. I believe they're looking at improving the multimonitor experience in the next version. A: I run dual 22" widescreens and have the same issue. If you have one monitor that is larger or more commonly used, the ONLY thing that I know that will truly work is to make the larger desired monitor the primary. It isn't elegant, and might not even be appropriate for you, but that is all I have been able to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/121063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Event handling jQuery unclick() and unbind() events? I want to attach a click event to a button element and then later remove it, but I can't get unclick() or unbind() event(s) to work as expected. In the code below, the button is tan colour and the click event works. window.onload = init; function init() { $("#startButton").css('background-color', 'beige').click(process_click); $("#startButton").css('background-color', 'tan').unclick(); } How can I remove events from my elements? A: Or you could have a situation where you want to unbind the click function just after you use it, like I had to: $('#selector').click(function(event){ alert(1); $(this).unbind(event); }); A: unbind is your friend. $("#startButton").unbind('click') A: There's no such thing as unclick(). Where did you get that from? You can remove individual event handlers from an element by calling unbind: $("#startButton").unbind("click", process_click); If you want to remove all handlers, or you used an anonymous function as a handler, you can omit the second argument to unbind(): $("#startButton").unbind("click"); A: Are you sure you want to unbind it? What if later on you want to bind it again, and again, and again? I don't like dynamic event-handling bind/unbind, since they tend to get out of hand, when called from different points of your code. You may want to consider alternate options: * *change the button "disabled" property *implement your logic inside "process_click" function Just my 2 cents, not an universal solution.
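A minimal sketch tying the answers above together, using a named function so that the very same handler can be passed to both bind() and unbind():

function processClick() {
    alert("clicked");
}

// attach the handler
$("#startButton").css("background-color", "beige").bind("click", processClick);

// later: remove only that handler, leaving any others intact
$("#startButton").css("background-color", "tan").unbind("click", processClick);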
{ "language": "en", "url": "https://stackoverflow.com/questions/121066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Multi-select in the prompt is not working properly in Cognos 8.1 Changing a value prompt to a multi-select value prompt in Report Studio still provides single-select functionality. How can I get multi-select functionality? A: In addition to the above answer: if the prompt is embedded in the QuerySubject in your Framework, you may need to check the Query Subject and see if it uses the #promptmany()# macro instead of #prompt#. A: Look at the parameter associated with the prompt. Now go look and see how you use that parameter to filter the queries in your report. If you have the filter set as:- [namespace].[table].[column] = ?MyParameter? ... then it doesn't matter that your prompt is a multi-select prompt, it will still run as a single selection prompt. Modify your filters so they are of the form:- [namespace].[table].[column] in ?MyParameter? This tells Cognos that your parameter can contain multiple values, and it will display the prompt accordingly.
{ "language": "en", "url": "https://stackoverflow.com/questions/121105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Good Resource Loading System I'm currently looking for a resource management system for ActionScript 3. More specifically in this case, I'm looking for opinions on off-the-shelf resource loaders/managers from people who have used them: * *are they clean/simple to use? *are they designed to be extended? *I plan on using this with an MVC-like system, Mate is next on the list, has anyone else used a resource loader in this manner? *masapi *queueloader *bulk-loader A: I've been very happy with bulk-loader. We've integrated it with Parsley's MVC and IoC frameworks - http://spicefactory.org/parsley/ - with great success.
{ "language": "en", "url": "https://stackoverflow.com/questions/121110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I keep Visual Studio from autoraising when I activate "Focus Follows Mouse"? I'm running VS2008 and have used SystemParametersInfo to activate "Focus Follows Mouse" and "Do not Raise On Focus." Sadly though, VS2008 (with and without SP1) doesn't honour the "Do not Raise" part and eagerly pushes into the foreground every time the pointer touches its window. A while ago I complained about that on my blog and posted an example app to set the parameters. Two others also reported having that problem, but they too didn't know how to proceed. How could I fix or work around this problem? Anything else I should try? A: I know this problem is very old, but it still occurs with VS2019 and this thread is one of the first hits when someone searches for 'auto-raise'. In my case, I enabled X-mouse via a regedit, and had to live with this behaviour for quite a while. A couple of days ago I found a solution for Visual Studio autoraising itself when you hover the mouse over a window: Options -> Environment -> Tabs and Windows -> uncheck both entries under 'Floating Windows'. ('floating tab wells stay on top' and 'floating tool windows stay on top') A: Try true X-mouse. At least Visual Studio won't steal the focus anymore. You might not like the copy-and-paste behaviour it introduces, though, and also pop-up windows usually don't appear in Visual Studio (use alt-tab to find them). A: I've noticed that Visual Studio only seems to auto-raise when a "document" has focus in VS. If you select a Find Results window or the Solution Explorer in VS, then the auto-raise doesn't occur.
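For reference, the two settings in question can be toggled with a P/Invoke sketch along these lines; the SPI constants are from winuser.h, and this only reproduces what the questioner's sample app already does:

using System;
using System.Runtime.InteropServices;

class XMouseSettings
{
    const uint SPI_SETACTIVEWINDOWTRACKING = 0x1001; // focus follows mouse
    const uint SPI_SETACTIVEWNDTRKZORDER = 0x100D;   // raise window when tracked
    const uint SPIF_SENDCHANGE = 0x0002;

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SystemParametersInfo(uint uiAction, uint uiParam, IntPtr pvParam, uint fWinIni);

    static void Main()
    {
        // enable focus-follows-mouse, but leave auto-raise off
        SystemParametersInfo(SPI_SETACTIVEWINDOWTRACKING, 0, new IntPtr(1), SPIF_SENDCHANGE);
        SystemParametersInfo(SPI_SETACTIVEWNDTRKZORDER, 0, IntPtr.Zero, SPIF_SENDCHANGE);
    }
}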
{ "language": "en", "url": "https://stackoverflow.com/questions/121111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Slow treeview in C# I have a legacy application that is written in C# and it displays a very complex treeview with 10 to 20 thousand elements. In the past I encountered a similar problem (but in C++) that I solved with the OWNERDATA capability offered by the Win32 API. Is there a similar mechanism in C#? EDIT: The plan is to optimize the creation time as well as browsing time. The method available through Win32 API is excellent in both of these cases as it reduces initialization time to nothing and the number of requests for elements is limited to only the ones visible at any one time. Joshl: We are actually doing exactly what you suggest already, but we still need more efficiency. A: I don't believe the .NET TreeView supports what you want, although this type of model is supported by .NET's DataGridView (see DataGridView's VirtualMode property). The TreeView will let you draw your own nodes but it won't let you populate them from some virtual store. If possible, you might want to consider the use of a DataGridView for your application. If not, managing the nodes manually (like joshl mentions above) might work if you can get around some issues with refreshing the screen properly when nodes are expanded. Outside of that, you might want to check out some of the third party vendors, like this one (Divelements SandGrid), that might (emphasis on might) support your desired mode of operation. NOTE: The SandGrid is not supported by Divelements as of the end of July 2013. A: NOTE: This answer is invalidated by an edit by the questioner saying he already does this kind of thing, but I decided to still post it for future reference by others searching on this topic. When I've done similar things in the past, I've tended to opt for the naive lazy-loading style. * *Use the TreeNode.Tag property to hold a reference that you can use to look-up the children *Use the TreeView.BeforeExpand event to populate the child nodes *Optionally use the TreeView.AfterCollapse event to remove them. *To get the [+]/[-] boxes to appear, the best way I've found is to create a singleton dummy TreeNode that gets added as a child to all unpopulated Nodes, and you check for its existence before populating with BeforeExpand. A: There is one way to make the TreeView perform much better, if it's the graphical performance we are talking about: create all sub-nodes, hook them together, and then add the nodes to the TreeView. TreeView tree = new TreeView(); TreeNode root = new TreeNode("Root"); PopulateRootNode(root); // Get all your data tree.Nodes.Add(root); Otherwise, load them node by node using OnTreeNodeExpanded. A: One technique for improving performance is to load TreeNodes as the user expands the treeview. Normally a user will not require 20,000 nodes to be open on their screen at once. Only load the level that the user needs to see, along with whatever child information you need to properly display affordances to the user (expand icon if children exist, counts, icons, etc). As the user expands nodes, load children just in time. Helpful hint from Keith: With the winforms TreeView you need to have at least one child node or it won't show the expand [+], but then you handle the TreeNodeExpanded event to remove that dummy node and populate the children. A: In our main WinForm app, we have a treeview loaded all in one shot: * *BeginUpdate() *Load 20,000 nodes *EndUpdate() and so far the performance is still nice. It is actually one of the few components we are not replacing with third party ones. 
The TreeView performance, in my experience, gets slow when you load nodes (in one shot, or on demand) without calling Begin/EndUpdate(), especially if your nodes are sorted, but if you call Begin/EndUpdate() correctly, you shouldn't really get performance issues related to the component itself. A: For large data in Windows C# programming, whether it be in WPF or WinForms, I have traditionally added nodes dynamically. I load the initial tree root + children + grandchildren deep. When any node is expanded, I load the tree nodes that would represent the grandchildren of the expanding node, if any. This pattern also works well with data retrieval. If you are truly loading data from a source of thousands or millions of records you probably don't want to load those all up front. No user wants to wait for that to be loaded, and there is no reason to load data that may never be viewed. I have typically loaded the grandchildren or the great-grandchildren node data as needed on a background thread, then marshaled the data back to the UI thread to create and add the nodes. This leaves the UI responsive. You can visually decorate tree nodes to indicate they are still loading for the case where a user gets ahead of your IO to the data store. A: This works for me (CSharp): Visible = false; ... Visible = true; In my case (2000 nodes), it takes only 1~2 seconds to load the tree, which is much quicker than any other way. It might work well in C++.
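To make the dummy-node pattern from the answers above concrete, here is a minimal WinForms sketch; LoadChildren is a hypothetical placeholder for whatever data access you use:

// inside your Form class; wire treeView1.BeforeExpand to the handler below
private const string DummyKey = "dummy";

private void AddLazyNode(TreeNodeCollection parent, string text, object tag)
{
    TreeNode node = new TreeNode(text);
    node.Tag = tag;
    node.Nodes.Add(DummyKey, ""); // placeholder child so the [+] is shown
    parent.Add(node);
}

private void treeView1_BeforeExpand(object sender, TreeViewCancelEventArgs e)
{
    if (!e.Node.Nodes.ContainsKey(DummyKey))
        return; // already populated

    e.Node.Nodes.Clear();
    treeView1.BeginUpdate();
    foreach (object child in LoadChildren(e.Node.Tag))
        AddLazyNode(e.Node.Nodes, child.ToString(), child);
    treeView1.EndUpdate();
}

private System.Collections.IEnumerable LoadChildren(object parentTag)
{
    // hypothetical placeholder: replace with your real data access
    return new object[0];
}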
{ "language": "en", "url": "https://stackoverflow.com/questions/121112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to get the executable path from a Managed DLL I have a managed DLL (written in C++/CLI) that contains a class used by a C# executable. In the constructor of the class, I need to get access to the full path of the executable referencing the DLL. In the actual app I know I can use the Application object to do this, but how can I do it from a managed DLL? A: @leppie: Thanks - that was the pointer I needed. For future reference, in C++/CLI this is the actual syntax that works: String^ appPathString = Assembly::GetEntryAssembly()->Location; GetExecutingAssembly() provided the name of the DLL GetCallingAssembly() returned something like System.Windows.Forms GetEntryAssembly returned the full path, similar to GetModulePath() under Win32. A: Assembly.GetCallingAssembly() or Assembly.GetExecutingAssembly() or Assembly.GetEntryAssembly() Depending on your need. Then use Location or CodeBase property (I never remember which one).
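For comparison, the same lookup from the C# side of the application is just:

string exePath = System.Reflection.Assembly.GetEntryAssembly().Location;
string exeDir = System.IO.Path.GetDirectoryName(exePath);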
{ "language": "en", "url": "https://stackoverflow.com/questions/121116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Country, State, Province WebService? Are there any good webservices out there that provide good lookup information for Countries and States/Provinces? If so what ones do you use? A: A services that works well with .Net (because it leverages WSDL) is http://www.webservicex.net. They have a service for US ZIP codes available at http://www.webservicex.net/uszip.asmx. You can just add it as a service and Visual Studio will take care of the rest. The response comes as an XML response, so you'll have to parse it, but you can use something simple like USZIP.GetInfoByZIP(ZIP).SelectSingleNode("//STATE").InnerText. For my application I then built an in-memory cache of the data using XML following these directions: http://www.15seconds.com/issue/010410.htm. I used XML instead of a HashTable or Dictionary(TKey, TValue) because I wanted to be able to serialize it to a string so I could save the 'database' as a user setting. A: http://www.geonames.org/ That's the best one I've found. They let you download and host the web service yourself, which is also nice. A: If you only need US information, the US Postal Service provides a set of web services it calls WebTools for this exact thing. https://www.usps.com/business/web-tools-apis/welcome.htm. You will need to register to be able to use them but once you're registered they are really simple to use. You just send an XML request over HTTP and the server sends an XML response back and you just have to unpack it. Sample request: http://SERVERNAME/ShippingAPITest.dll?API=Verify&XML=<AddressValidateRequest%20USERID="xxxxxxx"><Address ID="0"><Address1></Address1><Address2>6406 Ivy Lane</Address2><City>Greenbelt</City><State>MD</State><Zip5></Zip5><Zip4></Zip4></Address></AddressValidateRequest> Sample response: <?xml version="1.0"?> <AddressValidateResponse> <Address ID="0"> <Address2>6406 IVY LN</Address2> <City>GREENBELT</City> <State>MD</State> <Zip5>20770</Zip5> <Zip4>1441</Zip4> </Address> </AddressValidateResponse> Here's a link to the technical documentation: https://www.usps.com/business/web-tools-apis/documentation-updates.htm A: A good source of geographic data, including lookups and mapping data for the USA is the US Census Bureau's TIGER Data set. They no longer actively track Zip code data, but they do have a 1999 vintage file still available. For countries, the ISO country code list is publicly available. I'm not aware of resources for information outside the US.
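A rough sketch of the webservicex call described above; the proxy class and namespace names depend entirely on how you add the web reference in Visual Studio, and the XmlNode return type is inferred from the usage shown, so treat all of these as assumptions:

// "net.webservicex.www" is whatever namespace you chose when adding the reference
net.webservicex.www.USZIP service = new net.webservicex.www.USZIP();
System.Xml.XmlNode result = service.GetInfoByZIP("20770");
string state = result.SelectSingleNode("//STATE").InnerText;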
{ "language": "en", "url": "https://stackoverflow.com/questions/121117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Keyboards with the number pad in the middle? I'm suffering from early RSI symptoms and am looking for a way to avoid injury. My physiotherapist has determined that the worst thing I seem to be doing is using my mouse at such a weird angle. The problem for me is, I keep my keyboard positioned such that my left and right forearms are angled in the same amount, i.e., my body is centred roughly with the B key. On my current keyboard, which is not a split, this means the wide Enter key, the arrow keys, and the number pad all jut out to the right before I have space for my mouse. I have a medium-width frame, but even still, this leaves my wrist at a really awkward angle when using the mouse. I'd prefer not to have to push my keyboard out of the way every time I switch between the two, but I do use the num pad occasionally, so I wouldn't want a keyboard without that. I think it'd be ideal to have about a 30-50 cm space between the left and right halves of the keyboard, so my arms are more perpendicular to my collarbone, and the arrow keys and number pad in the middle, maybe even with the numbers on a 45 degree angle, so I could configure them for use with either hand. (In case you were wondering: then a touch-screen with a stylus that has a right-click modifier button for mousing, because otherwise the mouse pad would be right where I'd put the right half of the home row, in the most natural position for my right arm while sitting.) With that much space, you could fit so many custom keys for things that you normally use two-key combos for...or you could detach them completely (save for a wire) and just have, you know, actual desktop showing through. What's the closest keyboard you've seen to this? A: I use a keyboard with the numeric keypad on the left hand side. This allows me to bring my mouse in closer on the right hand side, allowing for a more natural position. I am right handed. You can see the keyboard I use here. (source: keyboardco.com) A: http://www.thehumansolution.com/keyboards.html I've been looking here at some keyboards (I've got severe Carpal Tunnel). The Kinesis keyboards are nice, but there were a few there with the number pad in the middle. A: In line with Joel Coehoorn's suggestion, I find that a trackpoint (aka "nipple") is even more ergonomic than a touchpad. Lenovo's USB Keyboard with UltraNav has both. My mouse usage is minimal, though, because I focus on using the keyboard for everything. This might not be easy in your particular development environment. (source: ibm.com) A: Just some advice. I was also suffering from RSI symptoms, up to the shoulders. I've tried MS and Logitech ergonomic keyboards without success. It was even worse than before because the mouse was more distant than before. Then I found the TypeMatrix keyboard and it reduced my RSI to nearly nothing. Unfortunately, they are hard to find for non-US and non-Canada citizens. (source: typematrix.com) A: I did try a PCD Maltron keyboard for a while when I was having some problems with referred pain, but I never really got used to it. I'm left handed, so my mouse is quite close to the keyboard. If you're having trouble with your mouse being too far out to the right you could try a small footprint keyboard such as a Happy Hacking Keyboard and a Separate Numeric Pad on the left or somewhere else to get a more clement key layout. 
I also learned to use a mouse with both hands - you might also try learning to use a mouse left-handed. (source: geekstuff4u.com) A: A few years ago I had wrist problems, too. What worked for me was changing my posture, how I hold myself, my arms. I move my keyboard and mouse around, I change my position and the position of those two. Additionally I use keyboard shortcuts a lot to not overuse the mouse. This way I got rid of my wrist problems without replacing keyboard or mouse. A: As a fellow RSI sufferer, I hope one of these helps out. The ErgoMagic and ErgoFlex keyboards are split into 3 sections, and you can position the number pad in the middle. The ErgoFlex is a flat design: http://www.safecomputingtips.com/blog/ergonomic-keyboard/ergoflex-keyboard/ http://www.comfortkeyboard.com/keyboards_ergoflex.html (source: safecomputingtips.com) The ErgoMagic has adjustable tilts for each section: http://www.comfortkeyboard.com/keyboards_ergomagic.html (source: comfortkeyboard.com) A: I don't know of any keyboards like that, but what might help is a keyboard with a built-in touchpad (like for a laptop) that you can use instead of your other mouse for some of your mousing: small adjustments, quick taps to click, etc, to avoid having to go out to the mouse.
{ "language": "en", "url": "https://stackoverflow.com/questions/121119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Securing a Web Service I have a Web Service (ASMX) with a few Web methods on our production Web server. On a separate internal Web server (which isn't publicly exposed) I have another Web site that will use the ASMX's public web methods. What are some of the best ways to secure the Web service such that only the internal Web server can access the Web Services running on the publicly exposed Web server? A: One of the easiest ways is to pass credentials in the soap header of the message. So each call passes along the info needed to determine if the user is authorized. WSE makes some of that easier but one of the most succinct descriptions of this process can be found in Rocky Lhotka's book on Business Objects. I get a lot of books to review by publishers and this one had the best explanation. A: Assuming you don't have the option of using WCF, I'd advocate using WSE 3 (Web Service Enhancements). You can get the toolkit / SDK thingummy at MS's site. To limit the access to only internal machines (as I think your question asked), I'd set up a separate web site in IIS and set it to only respond to the internal IP address of your server. A: I would set a firewall rule to restrict access to a whitelist of IP addresses. A: Use IIS's directory security IP address restrictions, and limit access to just that internal web server IP address. If you can't do that, and you can't set up a username/password on the directory, then use WSE and add a username/password into the service, or look at certificates if you want some fun <grin> A: Maybe I did not understand correctly, but why expose the web methods publicly at all if they're only going to be consumed by the internal server? A: A simple HTTP module will work. Just hardcode (or from config) the allowed IP/host and reject all others. A: If it is only the internal server that will be accessing the asmx files, you could set them up in IIS under a separate web site or virtual directory, then place some IP restrictions on the site. In properties, go under Directory Security, then "IP Address and Domain Name Restrictions." Also, for passwords, WSE 3 is the new go-to, but I did find a simple method in a book from Apress called "Pro ASP.NET 2.0 in C# 2005" Chapter 34. (Note, the newer version of this book omits this chapter.) The section is custom Ticket-based authentication. A: What comes to mind here is IP filtering on IIS. Fast to apply, should work in your scenario. A: TLS with client certs. See Wikipedia entry to get started. A: Be aware that there are ways around whitelisting IPs. Don't get me wrong, it's a great idea, and you should definitely do it, but if your budget/resources allow it, you can expand your threat model.
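A minimal sketch of that HTTP-module idea; the address below is a hard-coded placeholder, and in practice you would read the whitelist from config:

using System;
using System.Web;

public class IpWhitelistModule : IHttpModule
{
    static readonly string[] Allowed = { "10.0.0.5" }; // internal web server only

    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        if (Array.IndexOf(Allowed, app.Request.UserHostAddress) < 0)
        {
            app.Response.StatusCode = 403; // reject everyone else
            app.CompleteRequest();
        }
    }

    public void Dispose() { }
}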
{ "language": "en", "url": "https://stackoverflow.com/questions/121123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Determine if a directory is a bundle or package in the Mac OS X terminal? I'd like to be able to determine if a directory such as a '.app' is considered to be a package or bundle from Finder's point of view on the command line. I don't think this would be difficult to do with a small shell program, but I'd rather not re-invent the wheel if I don't have to. A: While you can identify some bundles based on the existence of './contents/Info.plist", it isn't required for all bundle types (e.g. documents and legacy bundles). Finder also identifies a directory as a bundle based on file extension (.app, .bundle, etc) or if the bundle bit is set. To check the bundle bit from the command line use: getFileInfo -aB directory_name In order to catch all cases I would check: * *Is the bundle bit set? *If not, does it have a file extension that identifies it as a bundle? (see Mecki's answer) *If not, it probably isn't a bundle. A: Update: On all systems with Spotlight, using mdls you can detect bundles looking at the kMDItemContentTypeTree property. E.g.: mdls -name kMDItemContentTypeTree "/Applications/Safari.app" produces the following output for me kMDItemContentTypeTree = ( "com.apple.application-bundle", "com.apple.application", "public.executable", "com.apple.localizable-name-bundle", "com.apple.bundle", "public.directory", "public.item", "com.apple.package" ) Whenever you see com.apple.package there, it is supposed to be displayed as a package by Finder. Of course, everything with "bundle" in the name implies that already but not all packages are bundles (bundles are a specific subset of packages that have a well defined directory structure). Old Answer: You can get a list of all registered file type extensions, using this command (OS X prior to Leopard): /System/Library/Frameworks/ApplicationServices.framework/Frameworks\ /LaunchServices.framework/Support/lsregister -dump or for Leopard and later: /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks\ /LaunchServices.framework/Versions/A/Support/lsregister -dump Every file extension there has flags. If the package flag is set, this is a package. E.g. claim id: 806354944 name: Bundle role: none flags: apple-internal relative-icon-path package icon: Contents/Resources/KEXT.icns bindings: .bundle -------------------------------------------------------- claim id: 1276116992 name: Plug-in role: none flags: apple-internal relative-icon-path package icon: Contents/Resources/KEXT.icns bindings: .plugin Compare this to a file that is no bundle claim id: 2484731904 name: TEXT role: viewer flags: apple-internal icon: bindings: .txt, .text, 'TEXT' The only way to really get all bundles is by looking up in the LaunchService database (the one we dumped above). If you just go by whether it has a plist or not or whether the bundle bit is set or not, you might catch some or even many bundles, but you can't catch all of them. This is the database Finder uses to determine * *Is this directory a bundle or not? *Is this a known file extension or not? *Which applications should be listed under "Open With" for this file type? *Which icon should I use for displaying this file type? and some more stuff. [EDIT: Added path for Leopard, thanks to Hagelin for the update] A: This is a bit late, but: it seems you can detect bundles using the mdls command. 
Specifically, the (multi-line) output of: mdls -name kMDItemContentTypeTree /Path/To/Directory will contain the string "com.apple.package" (including the quotation marks, at least as of Lion) somewhere if the directory is a package. If the package is also a bundle, the output will also contain "com.apple.bundle" and, last but not least, if it is specifically an application bundle, the output will also contain "com.apple.application-bundle" (That's according to some very limited testing, but from Apple's documentation on Uniform Type Identifiers, and the man page for mdls, this should hold true. And for the items I tested, this was true for non-Apple-provided bundles as well, which is what you would expect given the purpose of UTIs.) A: <plug> My launch tool has a feature for this. For example: % launch -f Guards.oo3 Guards.oo3: non-application package type: '' creator: '' kind: OmniOutliner 3 content type ID: com.omnigroup.omnioutliner.oo3-package contents: 1 item created: 3/6/09 3:36:50 PM modified: 3/6/09 4:06:13 PM accessed: 4/12/09 1:10:36 PM [only updated by Mac OS X] backed up: 12/31/03 6:00:00 PM % launch -f /Applications/Safari.app /Applications/Safari.app: scriptable Mac OS X application package type: 'APPL' creator: 'sfri' architecture: PowerPC 7400, Intel 80x86 bundle ID: com.apple.Safari version: 4 Public Beta kind: Application content type ID: com.apple.application-bundle contents: 1 item created: 8/21/07 5:11:33 PM modified: 2/24/09 7:29:51 PM accessed: 4/12/09 1:10:51 PM [only updated by Mac OS X] backed up: 12/31/03 6:00:00 PM You should be able to get what you want by checking to see if the first line of output ends in 'package'. launch is in Fink and MacPorts too. </plug> A: There ought to be a way to do it easily from the command line, because as an AppleScript user, I can do it using System Events. So if all else fails, you can execute the necessary AppleScript from the command line as follows: $ FILE=/Users/myuser/Desktop/foo.rtfd $ osascript -e "tell application \"System Events\" to get package folder of alias POSIX file \"${FILE}\"" result is true A: A bundle should always have a file "./Contents/Info.plist". You can check for the existence of this in a directory; if so, then it's a package/bundle.
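Wrapping the mdls approach into a small shell sketch (the exact quoting of the UTI in mdls output may vary by OS version, as noted above, so the grep deliberately ignores the quotes):

#!/bin/sh
# usage: ispackage.sh /path/to/directory
if mdls -name kMDItemContentTypeTree "$1" | grep -q 'com\.apple\.package'; then
    echo "$1: package"
else
    echo "$1: not a package"
fi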
{ "language": "en", "url": "https://stackoverflow.com/questions/121147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What does the explicit keyword mean? What does the explicit keyword mean in C++? A: A converting constructor defines an implicit conversion. To suppress this implicit conversion, declare the constructor with the keyword explicit. In C++11 you can also specify an "operator type()" with this keyword http://en.cppreference.com/w/cpp/language/explicit With such a specification you can use the operator only for explicit conversions and direct initialization of an object. P.S. When using transformations defined BY USER (via constructors and type conversion operators), only one level of implicit user-defined conversion is allowed. But you can combine this conversion with other language conversions * *up integral ranks (char to int, float to double); *standard conversions (int to double); *converting object pointers to a base class pointer or to void*; A: The keyword explicit accompanies either * *a constructor of class X that cannot be used to implicitly convert the first (and only) parameter to type X C++ [class.conv.ctor] 1) A constructor declared without the function-specifier explicit specifies a conversion from the types of its parameters to the type of its class. Such a constructor is called a converting constructor. 2) An explicit constructor constructs objects just like non-explicit constructors, but does so only where the direct-initialization syntax (8.5) or where casts (5.2.9, 5.4) are explicitly used. A default constructor may be an explicit constructor; such a constructor will be used to perform default-initialization or value-initialization (8.5). * *or a conversion function that is only considered for direct initialization and explicit conversion. C++ [class.conv.fct] 2) A conversion function may be explicit (7.1.2), in which case it is only considered as a user-defined conversion for direct-initialization (8.5). Otherwise, user-defined conversions are not restricted to use in assignments and initializations. Overview Explicit conversion functions and constructors can only be used for explicit conversions (direct initialization or explicit cast operation) while non-explicit constructors and conversion functions can be used for implicit as well as explicit conversions. /* explicit conversion implicit conversion explicit constructor yes no constructor yes yes explicit conversion function yes no conversion function yes yes */ Example using structures X, Y, Z and functions foo, bar, baz: Let's look at a small setup of structures and functions to see the difference between explicit and non-explicit conversions. 
struct Z { }; struct X { explicit X(int a); // X can be constructed from int explicitly explicit operator Z (); // X can be converted to Z explicitly }; struct Y{ Y(int a); // int can be implicitly converted to Y operator Z (); // Y can be implicitly converted to Z }; void foo(X x) { } void bar(Y y) { } void baz(Z z) { } Examples regarding constructor: Conversion of a function argument: foo(2); // error: no implicit conversion int to X possible foo(X(2)); // OK: direct initialization: explicit conversion foo(static_cast<X>(2)); // OK: explicit conversion bar(2); // OK: implicit conversion via Y(int) bar(Y(2)); // OK: direct initialization bar(static_cast<Y>(2)); // OK: explicit conversion Object initialization: X x2 = 2; // error: no implicit conversion int to X possible X x3(2); // OK: direct initialization X x4 = X(2); // OK: direct initialization X x5 = static_cast<X>(2); // OK: explicit conversion Y y2 = 2; // OK: implicit conversion via Y(int) Y y3(2); // OK: direct initialization Y y4 = Y(2); // OK: direct initialization Y y5 = static_cast<Y>(2); // OK: explicit conversion Examples regarding conversion functions: X x1{ 0 }; Y y1{ 0 }; Conversion of a function argument: baz(x1); // error: X not implicitly convertible to Z baz(Z(x1)); // OK: explicit initialization baz(static_cast<Z>(x1)); // OK: explicit conversion baz(y1); // OK: implicit conversion via Y::operator Z() baz(Z(y1)); // OK: direct initialization baz(static_cast<Z>(y1)); // OK: explicit conversion Object initialization: Z z1 = x1; // error: X not implicitly convertible to Z Z z2(x1); // OK: explicit initialization Z z3 = Z(x1); // OK: explicit initialization Z z4 = static_cast<Z>(x1); // OK: explicit conversion Z z1 = y1; // OK: implicit conversion via Y::operator Z() Z z2(y1); // OK: direct initialization Z z3 = Z(y1); // OK: direct initialization Z z4 = static_cast<Z>(y1); // OK: explicit conversion Why use explicit conversion functions or constructors? Conversion constructors and non-explicit conversion functions may introduce ambiguity. Consider a structure V, convertible to bool, a structure U implicitly constructible from V and a function f overloaded for U and bool respectively. struct V { operator bool() const { return true; } }; struct U { U(V) { } }; void f(U) { } void f(bool) { } A call to f is ambiguous when passing an object of type V. V x; f(x); // error: call of overloaded 'f(V&)' is ambiguous The compiler does not know whether to use the constructor of U or the conversion function to convert the V object into a type for passing to f. If either the constructor of U or the conversion function of V were explicit, there would be no ambiguity since only the non-explicit conversion would be considered. If both are explicit, the call to f using an object of type V would have to be done using an explicit conversion or cast operation. Conversion constructors and non-explicit conversion functions may lead to unexpected behaviour. Consider a function printing some vector: void print_intvector(std::vector<int> const &v) { for (int x : v) std::cout << x << '\n'; } If the size-constructor of the vector were not explicit, it would be possible to call the function like this: print_intvector(3); What would one expect from such a call? One line containing 3 or three lines containing 0? (Where the second one is what happens.) Using the explicit keyword in a class interface forces the user of the interface to be explicit about a desired conversion. 
As Bjarne Stroustrup puts it (in "The C++ Programming Language", 4th Ed., 35.2.1, pp. 1011) on the question why std::duration cannot be implicitly constructed from a plain number: If you know what you mean, be explicit about it. A: This answer is about object creation with/without an explicit constructor since it is not covered in the other answers. Consider the following class without an explicit constructor: class Foo { public: Foo(int x) : m_x(x) { } private: int m_x; }; Objects of class Foo can be created in 2 ways: Foo bar1(10); Foo bar2 = 20; Depending upon the implementation, the second manner of instantiating class Foo may be confusing, or not what the programmer intended. Prefixing the explicit keyword to the constructor would generate a compiler error at Foo bar2 = 20;. It is usually good practice to declare single-argument constructors as explicit, unless your implementation specifically prohibits it. Note also that constructors with * *default arguments for all parameters, or *default arguments for the second parameter onwards can both be used as single-argument constructors. So you may want to make these also explicit. An example of when you would deliberately not want to make your single-argument constructor explicit is if you're creating a functor (look at the 'add_x' struct declared in this answer). In such a case, creating an object as add_x add30 = 30; would probably make sense. Here is a good write-up on explicit constructors. A: The explicit keyword turns a converting constructor into a non-converting constructor. As a result, the code is less error prone. A: The compiler is allowed to make one implicit conversion to resolve the parameters to a function. What this means is that the compiler can use constructors callable with a single parameter to convert from one type to another in order to get the right type for a parameter. Here's an example class with a constructor that can be used for implicit conversions: class Foo { private: int m_foo; public: // single parameter constructor, can be used as an implicit conversion Foo (int foo) : m_foo (foo) {} int GetFoo () { return m_foo; } }; Here's a simple function that takes a Foo object: void DoBar (Foo foo) { int i = foo.GetFoo (); } and here's where the DoBar function is called: int main () { DoBar (42); } The argument is not a Foo object, but an int. However, there exists a constructor for Foo that takes an int so this constructor can be used to convert the parameter to the correct type. The compiler is allowed to do this once for each parameter. Prefixing the explicit keyword to the constructor prevents the compiler from using that constructor for implicit conversions. Adding it to the above class will create a compiler error at the function call DoBar (42). It is now necessary to call for conversion explicitly with DoBar (Foo (42)). The reason you might want to do this is to avoid accidental construction that can hide bugs. Contrived example: * *You have a MyString class with a constructor that constructs a string of the given size. You have a function print(const MyString&) (as well as an overload print (char *string)), and you call print(3) (when you actually intended to call print("3")). You expect it to print "3", but it prints an empty string of length 3 instead. A: The explicit keyword can be used to require that a constructor is called explicitly. 
class C { public: explicit C() =default; }; int main() { C c; return 0; } The explicit keyword in front of the constructor C() tells the compiler that only explicit calls to this constructor are allowed. The explicit keyword can also be used in user-defined type cast operators: class C{ public: explicit inline operator bool() const { return true; } }; int main() { C c; bool b = static_cast<bool>(c); return 0; } Here, the explicit keyword ensures that only explicit casts are valid, so bool b = c; would be an invalid cast in this case. In situations like these, the explicit keyword can help the programmer avoid implicit, unintended casts. This usage has been standardized in C++11. A: Cpp Reference is always helpful!!! Details about explicit specifier can be found here. You may need to look at implicit conversions and copy-initialization too. Quick look The explicit specifier specifies that a constructor or conversion function (since C++11) doesn't allow implicit conversions or copy-initialization. Example as follows: struct A { A(int) { } // converting constructor A(int, int) { } // converting constructor (C++11) operator bool() const { return true; } }; struct B { explicit B(int) { } explicit B(int, int) { } explicit operator bool() const { return true; } }; int main() { A a1 = 1; // OK: copy-initialization selects A::A(int) A a2(2); // OK: direct-initialization selects A::A(int) A a3 {4, 5}; // OK: direct-list-initialization selects A::A(int, int) A a4 = {4, 5}; // OK: copy-list-initialization selects A::A(int, int) A a5 = (A)1; // OK: explicit cast performs static_cast if (a1) cout << "true" << endl; // OK: A::operator bool() bool na1 = a1; // OK: copy-initialization selects A::operator bool() bool na2 = static_cast<bool>(a1); // OK: static_cast performs direct-initialization // B b1 = 1; // error: copy-initialization does not consider B::B(int) B b2(2); // OK: direct-initialization selects B::B(int) B b3 {4, 5}; // OK: direct-list-initialization selects B::B(int, int) // B b4 = {4, 5}; // error: copy-list-initialization does not consider B::B(int,int) B b5 = (B)1; // OK: explicit cast performs static_cast if (b5) cout << "true" << endl; // OK: B::operator bool() // bool nb1 = b2; // error: copy-initialization does not consider B::operator bool() bool nb2 = static_cast<bool>(b2); // OK: static_cast performs direct-initialization } A: It is always good coding practice to make your one-argument constructors explicit (including those with default values for arg2, arg3, ...), as already stated. Like always with C++: if you don't - you'll wish you did... Another good practice for classes is to make copy construction and assignment private (a.k.a. disable it) unless you really need to implement it. This avoids having eventual copies of pointers when using the methods that C++ will create for you by default. Another way to do this is to derive from boost::noncopyable. A: In C++, a constructor with only one required parameter is considered an implicit conversion function. It converts the parameter type to the class type. Whether this is a good thing or not depends on the semantics of the constructor. For example, if you have a string class with constructor String(const char* s), that's probably exactly what you want. You can pass a const char* to a function expecting a String, and the compiler will automatically construct a temporary String object for you. 
On the other hand, if you have a buffer class whose constructor Buffer(int size) takes the size of the buffer in bytes, you probably don't want the compiler to quietly turn ints into Buffers. To prevent that, you declare the constructor with the explicit keyword: class Buffer { explicit Buffer(int size); ... } That way, void useBuffer(Buffer& buf); useBuffer(4); becomes a compile-time error. If you want to pass a temporary Buffer object, you have to do so explicitly: useBuffer(Buffer(4)); In summary, if your single-parameter constructor converts the parameter into an object of your class, you probably don't want to use the explicit keyword. But if you have a constructor that simply happens to take a single parameter, you should declare it as explicit to prevent the compiler from surprising you with unexpected conversions. A: Suppose you have a class String: class String { public: String(int n); // allocate n bytes to the String object String(const char *p); // initializes object with char *p }; Now, if you try: String mystring = 'x'; The character 'x' will be implicitly converted to int and then the String(int) constructor will be called. But this is not what the user might have intended. So, to prevent such conditions, we shall define the constructor as explicit: class String { public: explicit String (int n); //allocate n bytes String(const char *p); // initializes object with string p }; A: Other answers are missing one important factor which I am going to mention here. Along with the "delete" keyword, "explicit" allows you to control the way the compiler generates special member functions - default constructor, copy constructor, copy-assignment operator, destructor, move constructor and move-assignment. Refer https://learn.microsoft.com/en-us/cpp/cpp/explicitly-defaulted-and-deleted-functions
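As a small illustration of that last point, explicit can be combined with a defaulted special member, for example an explicit copy constructor, which turns silent copies into compile errors:

struct W {
    W() = default;
    explicit W(const W&) = default; // copying must be requested explicitly
};

int main() {
    W a;
    // W b = a; // error: copy-initialization cannot use the explicit copy constructor
    W c(a);     // OK: direct-initialization
    (void)c;
}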
{ "language": "en", "url": "https://stackoverflow.com/questions/121162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3552" }
Q: Hidden features of Greasemonkey What are some of the lesser-known but useful features and techniques that people are using in their Greasemonkey scripts? (Please, just one feature per answer.) Similar threads: * *Hidden Features of JavaScript *Hidden Features of Java *Hidden Features of C++ *Hidden Features of C# A: Data can be persisted across page loads by storing it as a mozilla preference value via GM_setValue(keyname, value). Here is a simple example that tallies the number of times your script has been executed - by a given browser: var od = GM_getValue("odometer", 0); od++; GM_setValue("odometer", od); GM_log("odometer=" + od); GM values are analogous to cookies in that cookie values can only be accessed by the originating domain; GM values can only be accessed by the script that created them. A: GM_setValue normally only stores 32-bit integers, strings, and booleans, but you can take advantage of the uneval() method (and a later eval() on retrieval) to store any object. If you're dealing with pure JSON values (rather than JavaScript objects), use JSON.stringify to store and JSON.parse to retrieve; this will be both faster and safer. var foo={people:['Bob','George','Smith','Grognak the Destroyer'],pie:true}; GM_setValue('myVeryOwnFoo',uneval(foo)); var fooReborn=eval(GM_getValue('myVeryOwnFoo','new Object()')); GM_log('People: '+fooReborn.people+' Pie:'+fooReborn.pie); I tend to use "new Object()" as my default in this case, but you could also use "({})". Just remember that "{}" evaluates as a string, not an object. As usual, eval() with care. A: Anonymous statistics Assuming you have a basic hosting service that provides access logging, you can easily track basic usage statistics for your script. * *Place a gif file (eg, a logo image) on your own website. *In your script, attach an img element to the page that references the gif: var img = document.createElement("img"); img.src = "http://mysite.com/logo.gif"; document.body.appendChild(img); Now, each time a user executes your script, your hosting service will register a hit on that gif file. To track more than one script, use a different gif file for each. Or add some kind of differentiating parameter to the URL (eg: http://mysite.com/logo.gif?zippyver=1.0). A: A useful XPath technique is to specify your match relative to a node that you have already found. As a contrived example for stackoverflow: // first we got the username link at the top of the page var hdrdiv = document.evaluate( "//div[@id='headerlinks']/a[1]", document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; // now we can retrieve text that follows it, (user's reputation score) // (note that hdrdiv is now the contextNode argument, rather than document) var reptext = document.evaluate( "following-sibling::span", hdrdiv, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; alert("Reputation Score: " + reptext.textContent); You can match in any direction relative to the contextNode, ancestors, descendants, previous, following. Here is a helpful XPath reference. A: GreaseMonkey scripts run when the DOM is ready, so you don't need to add onload events; you just start manipulating the DOM straight away in your GreaseMonkey script. A: Greasemonkey scripts often need to search for content on a page. Instead of digging through the DOM, try using XPath to locate nodes of interest. The document.evaluate() method lets you provide an XPath expression and will return a collection of matching nodes. Here's a nice tutorial to get you started. 
As an example, here's a script I wrote that causes links in phpBB3 posts to open in a new tab (in the default skin): // ==UserScript== // @name New Tab in phpBB3 // @namespace http://robert.walkertribe.com/ // @description Makes links in posts in phpBB3 boards open new tabs. // ==/UserScript== var newWin = function(ev) { var win = window.open(ev.target.href); if (win) ev.preventDefault(); }; var links = document.evaluate( "//div[@class='content']//a[not(@onclick) and not(@href='#')]", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); for (var i = 0; i < links.snapshotLength; i++) { var link = links.snapshotItem(i); link.addEventListener("click", newWin, true); } The XPath expression used in the code identifies all a elements that 1) do not have an onclick attribute, 2) whose href attribute is not set to "#", and 3) are found inside divs whose class attribute is set to "content". A: ==UserScript== ... @require http://ajax.googleapis.com/ajax/framework-of-your/choice.js ==/UserScript== A: Your script can add graphics into a page, even if you don't have any place to host files, via data URIs. For example, here is a little button graphic: var button = document.createElement("img"); button.src = "data:image/gif;base64," + "R0lGODlhEAAQAKEDAAAA/wAAAMzMzP///yH5BAEAAAMALAAAAAAQABAAAAIhnI+pywOtwINHTmpvy3rx" + "nnABlAUCKZkYoGItJZzUTCMFACH+H09wdGltaXplZCBieSBVbGVhZCBTbWFydFNhdmVyIQAAOw==" somenode.appendChild(button); Here is an online image encoder. And a wikipedia article about the Data URI standard. A: Script header values, (@name, @description, @version, etc), can be made retrievable. This is preferable to maintaining the same constant values in multiple places in your script. See Accessing Greasemonkey metadata from within your script? A: Obsolete: Firefox dropped support for E4X, in Greasemonkey scripts, with FF version 17. Use GM_info to get metadata. You can use e4x to access your ==UserScript== information as a variable: var metadata=<> // ==UserScript== // @name search greasemonkey // @namespace foo // @include http://*.google.com/* // @include http://*.google.ca/* // @include http://search.*.com/* // @include http://*.yahoo.com/* // ==/UserScript== </>.toString();
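One more feature in the same spirit: the GM_xmlhttpRequest API lets a script make requests across domains, which ordinary page JavaScript cannot (the URL below is a placeholder):

GM_xmlhttpRequest({
    method: "GET",
    url: "http://example.com/data.txt",
    onload: function(response) {
        GM_log(response.responseText); // the raw response body
    }
});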
{ "language": "en", "url": "https://stackoverflow.com/questions/121167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Configure trac for anonymous ticket submissions How would one configure trac to allow anonymous submission of tickets? A: In your trac config you need to give the anonymous user the TICKET_CREATE permission. A: Go to Admin > Permissions, then give "anonymous" the TICKET_CREATE and TICKET_MODIFY privileges (actions). See: http://trac.edgewall.org/wiki/TracPermissions A: Set up a trac site to allow anonymous (no actual login) login. Grant the anonymous user permission to only create tickets, and maybe view existing tickets if you wish. But deny all other permissions. The trac admin plugin makes this pretty easy.
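If you prefer the command line over the web admin, the same grants can be made with trac-admin (replace /path/to/projenv with your environment path):

trac-admin /path/to/projenv permission add anonymous TICKET_CREATE
trac-admin /path/to/projenv permission add anonymous TICKET_VIEW
trac-admin /path/to/projenv permission list anonymous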
{ "language": "en", "url": "https://stackoverflow.com/questions/121169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is There Still A Case For MFC What are the compelling features of MFC? Why would you select it for a new project? A: The advantage of MFC is that it's still nicer than coding to bare win32 and you can distribute a native .exe that doesn't require a 23-50Mb runtime like .Net. Now, if you're concerned about those things there are better alternatives out there: C++ Builder, WxWidgets, etc. But some places won't consider non-Microsoft tools. A: You could sort of reword the question: why would you select C++ over C# for a desktop app? C++ still offers speed advantages which matter for some applications (I work for a company that creates software for electronic trading. Speed matters a lot). If you are going to develop a desktop app aimed for Windows only in C++, then MFC is the most mature choice, with lots of free code based on MFC on the internet, lots of knowledge. A: Apart from win32 api itself, MFC is the only mainstream Windows programming technology that is still alive and well in 2011 after 15+ years. Back in 2001 everybody said 'MFC is dead, it's all Winforms now'; in 2005 everybody said 'MFC is dead, it's all XAML now'; now it's 2011 and Winforms and XAML are dead (OK XAML maybe not really dead, but way past its prime) and MFC is still being updated with the latest developments (ribbon, Aero extensions, Win7 API's etc). Of course, that doesn't guarantee anything for the future, but over those 15+ years, a lot of MFC code has been written, and it will remain in use for the next decade or decades. It may not be the prettiest tech but it's well-understood (its good and bad points) and is not a moving target like other hype technologies are, meaning that people who actually want to get stuff done can sort of rely on it (well more than on the alternatives, anyway). (same goes for C++, btw) A: Here's a possibility - imagine an application that would require a large amount of memory, say a graphics program, a game or maybe some high performance business application. It's no secret that .NET applications hog memory - in such a case, you may want a lean MFC app for the core of your application. You can always load up and use .NET components, controls, etc through either COM callable wrappers or directly through C++/CLI. That all being said - MFC is a pain. Consider WTL instead - you can still call into .NET if you need to, the same way as I mentioned above for MFC. WTL is a lot nicer than MFC :-) A: Apparently it is still a good choice for applications for windows-based hand-held devices, such as point-of-sale devices. In these, resources are limited so things like memory management become more significant. A: Quick Tour Of New MFC Functionality I hear they have a new ribbon control, if you're into this sort of complexity. Here's a screenshot of a newly generated app: (source: msdn.com) Really, it's just a widget update. So do we need more widgets? A: The existing windows API is entirely C based. If you want to use C++ (and you probably should) then MFC is the logical choice if you wish to stay native (i.e. not using .NET). MFC is just a set of object-orientated classes over the top of the C API. Plus quite a few additional "helper" classes that make it easier to do everyday tasks. A: I think not.. MFC would lose out in * *Level of abstraction *Development Time *Troubleshooting time *Learning curve for new developers *Future proofing (although now that's questionable.. 
with something new coming up every 3-4 years) *Finding good people who know their MFC *Easy to use controls The only place where MFC would probably sneak past is if you have some very performance-intensive applications, where you have things on screen that need to be redrawn every 10 msec or 1 sec. "Managed" apps still haven't managed to jump past that hurdle. MFC was an important step in the evolution, but now better options are available. A: On design & technical merits alone? Sorry to be categorical, but none. It's a poor design, a hugely leaky abstraction where you have to fall back to Win32 API programming, misuses C++ egregiously, and is firmly targeted on yesterday's technology: you won't get a modern (or even an attractive!) user experience out of an MFC app. If you can get C# developers and you don't have serious hardware limitations, go with WinForms. External factors such as the availability of competence for hire, training programmes and third party components, on the other hand, can still extend its lifespan, at least for some kinds of applications: small & simple, targeted for special applications with reasonably few users, preferably in-house. A: If you are developing Windows CE and mobile apps in C++, as Einar has already mentioned, MFC is a good choice. If you make this choice, MFC then also becomes a reasonable choice for the desktop as you can use the same code across desktop and hand-held devices. MFC remains a good performance / easy-to-implement combination in this scenario. Personally, I use MFC in conjunction with Stingray libraries in these environments, which gives a very good interface, good performance and is quick and easy to implement. A: I would say that speed and footprint are good reasons over .NET. It's probably true that you'll find it difficult to locate good MFC programmers, but that's just as much because the modern languages promote lazy programming techniques and most programming courses gravitate towards them as they are easier to teach. A: MFC was a good option 10 years ago. It is still a good wrapper over Win32 API but unfortunately obsolete. Qt is a better option with one big advantage - it is platform-independent. With MFC you're doomed to Windows. A: I still use MFC for all kinds of applications. MFC got a bad rap from its early implementations, but it is excellent now. I find it quite a bit more convenient than WTL as well. Plus the GUI tools in Visual Studio are already set up to make it easy to rapidly develop GUIs with MFC, mapping controls to variables, DDX, etc. For desktop applications that I intend for wide distribution I still go with native Windows applications, usually in MFC, because we're still not at a point where you can depend on your customers to have the version of .NET that you'll be using installed, and asking them to install it will cause you to lose sales, not to mention the customer service headache when they run into problems installing .NET as a result of trying to get your app to run. A: I've written cross-platform code for years, so when I need to write something I always have a very thin abstraction layer between it and the system calls for almost everything except posix calls. That way you can code it to MFC but quite easily convert it to a different API later if needed. My base set of C++ libraries that I use for everything does this with a small System class. I currently have it using MFC for Windows and I also have it using XWindows for Linux and a native Mac version as well. 
And later on when I port it for a handheld it should be quite painless. If you want to take a peek, it's LGPL'ed and is at: http://code.google.com/p/kgui/
{ "language": "en", "url": "https://stackoverflow.com/questions/121184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Surround with quotation marks How is it possible in Eclipse JDT to convert a multiline selection to a String, like the following. From: xxxx yyyy zzz To: "xxxx " + "yyyy " + "zzz" I tried the following template "${line_selection}${cursor}"+ but that way I only get the whole block surrounded, not each line separately. How can I achieve multiline processing like commenting the selected block? A: Maybe this is not what you mean but... If I'm on a line in Eclipse and I enter double quotation marks, then inside that paste a multiline selection (like your xyz example) it will paste out like this: "xxxx\n" + "yyyy\n" + "zzz" Then you could just find/replace in a selection for "\n" to "", if you didn't intend the newlines. I think the option to enable this is in Window/Preferences, under Java/Editor/Typing/, check the box next to "Escape text when pasting into a string literal". (Eclipse 3.4 Ganymede) A: Find/Replace with the regex option turned on. Find: ^(.*)$ Replace with: "$1" + Well, the last line will have a surplus +, you have to delete it manually. A: I would go with an Eclipse Find/Replace in regexp mode: * *Find: ^(\s*)(.*?)(\s*)$ *Replace with: $1"$2"$3 + Will preserve exactly whatever space or tabs you have before and after each string, and will surround the content with the needed double-quotes. (The last '+' needs to be removed.) A: This may not be exactly the answer you're looking for. You can easily achieve what you're asking by using the sed stream editor. This is available on all flavors of Unix, and also on Windows, by downloading a toolkit like cygwin. On the Unix shell command line run the command sed 's/^/"/;s/$/"+/' and paste the text you want to convert. On its output you'll obtain the converted text. The argument passed to sed says substitute (s) the beginning of a line (^) with a quote, and substitute (s) the end of each line ($) with a quote and a plus. If the text you want to convert is large you may want to redirect sed's input and output through files. In such a case run something like sed 's/^/"/;s/$/"+/' <inputfile >outputfile On Windows you can also use the winclip command of the Outwit tool suite to directly change what's in the clipboard. Simply run winclip -p | sed 's/^/"/;s/$/"+/' | winclip -c The above command will paste the clipboard's contents into sed and the result back into the clipboard. Finally, if you're often using this command, it makes sense placing it into a shell script file, so that you can easily run it. You can then even assign an Eclipse keyboard shortcut to it.
{ "language": "en", "url": "https://stackoverflow.com/questions/121199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Automatically folding #defines in vim I work with quite a bit of multi-platform C/C++ code, separated by common #defines (#if WIN, #if UNIX, etc). It would be nice if I could have vim automatically fold the sections I'm currently not interested in when I open a file. I've searched through the vim script archives, but I haven't found anything useful. Any suggestions? Places to start? A: Just add a folding region to your syntax http://vim.wikia.com/wiki/Syntax_folding_of_Vim_scripts#Syntax_definitions :syn region myFold start="\#if" end="\#endif" transparent fold :syn sync fromstart :set foldmethod=syntax A: To add to @hometoast's answer, you can add that command as a comment in the first ten or last ten lines of the file and vim will automatically use it for that file. /* vim: syn region regionName start="regex" end="regex": */ A: A quick addition to Denton's addition: to use the new syntax rule with any C or C++ code, add it to a file at $VIMRUNTIME/syntax/c.vim and cpp.vim. ($VIMRUNTIME is where your local Vim code lives: ~/.vim on Unix.) Also, the values for start and end in the syntax definition are regular expressions, so you can use ^#if and ^#endif to ensure they only match those strings at the start of a line. A: I've always used foldmethod=marker and defined my own fold tags placed within comments. This is for defining the characters that define the open and close folds; in this case open is "<(" and close is ")>" - replace these with whatever you'd like. set foldmethod=marker set foldmarker=<(,)> This is my custom function to decide what to display of the folded text: set foldtext=GetCustomFoldText() function GetCustomFoldText() let preline = substitute(getline(v:foldstart),'<(','<(+)','') let line = substitute(preline,"\t",' ','g') let nextLnNum = v:foldstart + 1 let nextline = getline(nextLnNum) let foldTtl = v:foldend - v:foldstart return line . ' | ' . nextline . ' (' . foldTtl . ' lines)>' endfunction Hope that helps. A: I have a huge code base and so a large number of #defines. Each file has numerous #ifdef's and most of the time they are nested. I tried many of the vim scripts but they always used to run into some error with the code I have. So in the end I put all my defines in a header file, included it in the file that I wanted to work with, and ran gcc on it like this: gcc -E -C -P source.cpp > output.cpp The -E option gets gcc to run only the preprocessor on the file, so all the unwanted code within the undefined #ifdef's is removed. The -C option retains the comments in the file. The -P option inhibits generation of linemarkers in the output from the preprocessor.
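If you want something to try the fold rules above on, here is a minimal C file in the spirit of the question (the WIN/UNIX macros are the illustrative ones from the question; compile with -DWIN=1 or -DUNIX=1):

/* fold_test.c - open in vim to watch the #if blocks fold */
#include <stdio.h>

#if WIN
static const char *platform(void) { return "windows"; }
#endif

#if UNIX
static const char *platform(void) { return "unix"; }
#endif

int main(void)
{
    printf("platform: %s\n", platform());
    return 0;
}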
{ "language": "en", "url": "https://stackoverflow.com/questions/121202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to detect if JavaScript is disabled? There was a post this morning asking about how many people disable JavaScript. Then I began to wonder what techniques might be used to determine if the user has it disabled. Does anyone know of some short/simple ways to detect if JavaScript is disabled? My intention is to give a warning that the site is not able to function properly without the browser having JS enabled. Eventually I would want to redirect them to content that is able to work in the absence of JS, but I need this detection as a placeholder to start. A: If javascript is disabled your client-side code won't run anyway, so I assume you mean you want that info available server-side. In that case, noscript is less helpful. Instead, I'd have a hidden input and use javascript to fill in a value. After your next request or postback, if the value is there you know javascript is turned on. Be careful of things like noscript, where the first request may show javascript disabled, but future requests turn it on. A: You might, for instance, use something like document.location = 'java_page.html' to redirect the browser to a new, script-laden page. Failure to redirect implies that JavaScript is unavailable, in which case you can either resort to CGI routines or insert appropriate code between the tags. (NOTE: NOSCRIPT is only available in Netscape Navigator 3.0 and up.) credit http://www.intranetjournal.com/faqs/jsfaq/how12.html A: This is what worked for me: it redirects a visitor if javascript is disabled <noscript><meta http-equiv="refresh" content="0; url=whatyouwant.html" /></noscript> A: I'd suggest you go the other way around by writing unobtrusive JavaScript. Make the features of your project work for users with JavaScript disabled, and when you're done, implement your JavaScript UI-enhancements. https://en.wikipedia.org/wiki/Unobtrusive_JavaScript A: If your use case is that you have a form (e.g., a login form) and your server-side script needs to know if the user has JavaScript enabled, you can do something like this: <form onsubmit="this.js_enabled.value=1;return true;"> <input type="hidden" name="js_enabled" value="0"> <input type="submit" value="go"> </form> This will change the value of js_enabled to 1 before submitting the form. If your server-side script gets a 0, no JS. If it gets a 1, JS! A: <noscript> isn't even necessary, not to mention not supported in XHTML. Working Example: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd"> <html> <head> <title>My website</title> <style> #site { display: none; } </style> <script src="http://code.jquery.com/jquery-latest.min.js"></script> <script> $(document).ready(function() { $("#noJS").hide(); $("#site").show(); }); </script> </head> <body> <div id="noJS">Please enable JavaScript...</div> <div id="site">JavaScript dependent content here...</div> </body> </html> In this example, if JavaScript is enabled, then you see the site. If not, then you see the "Please enable JavaScript" message. The best way to test if JavaScript is enabled is to simply try and use JavaScript! If it works, it's enabled; if not, then it's not... A: A technique I've used in the past is to use JavaScript to write a session cookie that simply acts as a flag to say that JavaScript is enabled. Then the server-side code looks for this cookie and if it's not found takes action as appropriate. Of course this technique does rely on cookies being enabled!
A: I think you could insert an image tag into a noscript tag and look at the stats for how many times your site was loaded versus how often this image was loaded. A: People have already posted examples that are good options for detection, but based on your requirement to "give a warning that the site is not able to function properly without the browser having JS enabled": you basically add an element that appears somehow on the page, for example the 'pop-ups' on Stack Overflow when you earn a badge, with an appropriate message, then remove this with some Javascript that runs as soon as the page is loaded (and I mean the DOM, not the whole page). A: Code inside <noscript> tags will be executed when there is no JS enabled in the browser. We can use noscript tags to display a message telling the user to turn on JS, as below. <noscript> <h1 style="text-align: center;"> To view this page properly, please enable JavaScript and reload the page </h1> </noscript> ...while keeping our website content inside the body hidden, as below: <body> <div id="main_body" style="display: none;"> website content. </div> </body> Now, if JS is turned on, you can just make the content inside your main_body visible as below: <script type="text/javascript"> document.getElementById("main_body").style.display="block"; </script> A: I'd like to add my .02 here. It's not 100% bulletproof, but I think it's good enough. The problem, for me, with the preferred example of putting up some sort of "this site doesn't work so well without Javascript" message is that you then need to make sure that your site works okay without Javascript. And once you've started down that road, then you start realizing that the site should be bulletproof with JS turned off, and that's a whole big chunk of additional work. So, what you really want is a "redirection" to a page that says "turn on JS, silly". But, of course, you can't reliably do meta redirections. So, here's the suggestion: <noscript> <style type="text/css"> .pagecontainer {display:none;} </style> <div class="noscriptmsg"> You don't have javascript enabled. Good luck with that. </div> </noscript> ...where all of the content in your site is wrapped with a div of class "pagecontainer". The CSS inside the noscript tag will then hide all of your page content, and instead display whatever "no JS" message you want to show. This is actually what Gmail appears to do...and if it's good enough for Google, it's good enough for my little site. A: Use a .no-js class on the body and create non-JavaScript styles based on the .no-js parent class. If JavaScript is disabled you will get all the non-JavaScript styles; if there is JS support, the .no-js class will be replaced, giving you all the styles as usual. document.body.className = document.body.className.replace("no-js","js"); This trick is used in HTML5 Boilerplate http://html5boilerplate.com/ through Modernizr, but you can use one line of JavaScript to replace the classes. Noscript tags are okay, but why have extra stuff in your HTML when it can be done with CSS? A: I assume you're trying to decide whether or not to deliver JavaScript-enhanced content. The best implementations degrade cleanly, so that the site will still operate without JavaScript. I also assume that you mean server-side detection, rather than using the <noscript> element, for an unexplained reason. There is no good way to perform server-side JavaScript detection. As an alternative, it is possible to set a cookie using JavaScript, and then test for that cookie using server-side scripting upon subsequent page views.
However this would be unsuitable for deciding what content to deliver, as it would not distinguish visitors without the cookie from new visitors or from visitors who did not accept the JavaScript-set cookie. A: Why don't you just put a hijacked onClick() event handler that will fire only when JS is enabled, and use this to append a parameter (js=true) to the clicked/selected URL (you could also detect a drop-down list and change the value, or add a hidden form field). So now when the server sees this parameter (js=true) it knows that JS is enabled and can then do your fancy logic server-side. The downside to this is that the first time a user comes to your site - via bookmark, typed URL, or search-engine-generated URL - you will need to detect that this is a new user and not look for the NVP appended to the URL, and the server would have to wait for the next click to determine whether the user has JS enabled or disabled. Also, another downside is that the parameter will end up in the browser's URL, and if this user then bookmarks this URL it will have the js=true NVP even if the user does not have JS enabled, though on the next click the server would get wise to whether the user still had JS enabled or not. Sigh.. this is fun... A: To force users to enable JavaScript, I set the 'href' attribute of each link to the same document, which notifies the user to enable JavaScript or download Firefox (if they don't know how to enable JavaScript). I store the actual link URL in the 'name' attribute of the links and define a global onclick event that reads the 'name' attribute and redirects the page there. This works well for my user-base, though a bit fascist ;). A: You don't detect whether the user has javascript disabled (server side or client). Instead, you assume that javascript is disabled and build your webpage with javascript disabled. This obviates the need for noscript, which you should avoid using anyway because it doesn't work quite right and is unnecessary. For example, just build your site to say <div id="nojs">This website doesn't work without JS</div> Then, your script will simply do document.getElementById('nojs').style.display = 'none'; and go about its normal JS business. A: Check for cookies using a pure server-side solution (I have introduced one here), then check for JavaScript by dropping a cookie using jQuery.Cookie and checking for that cookie. This way you check for both cookies and JavaScript. A: In some cases, doing it backwards could be sufficient. Add a class using javascript: // Jquery $('body').addClass('js-enabled'); /* CSS */ .menu-mobile {display:none;} body.js-enabled .menu-mobile {display:block;} This could create maintenance issues on anything complex, but it's a simple fix for some things. Rather than trying to detect when it's not loaded, just style according to when it is loaded. A: I would like to add my solution to get reliable statistics on how many real users visit my site with JavaScript disabled, relative to the total users. The check is done one time only per session, with these benefits: * *Users visiting 100 pages or just 1 are counted 1 each. This allows focusing on single users, not pages. *Does not break page flow, structure or semantics in any way *Can log the user agent. This allows excluding bots from the statistics, such as the Google bot and Bing bot, which usually have JS disabled! Can also log IP, time, etc...
*Just one check per session (minimal overhead) My code uses PHP, MySQL and jQuery with ajax, but could be adapted to other languages: Create a table in your DB like this one: CREATE TABLE IF NOT EXISTS `log_JS` ( `logJS_id` int(11) NOT NULL AUTO_INCREMENT, `data_ins` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `session_id` varchar(50) NOT NULL, `JS_ON` tinyint(1) NOT NULL DEFAULT '0', `agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`logJS_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; Add this to every page after using session_start() or equivalent (jQuery required): <? if (!isset($_SESSION["JSTest"])) { mysql_query("INSERT INTO log_JS (session_id, agent) VALUES ('" . mysql_real_escape_string(session_id()) . "', '" . mysql_real_escape_string($_SERVER['HTTP_USER_AGENT']). "')"); $_SESSION["JSTest"] = 1; // One time per session ?> <script type="text/javascript"> $(document).ready(function() { $.get('JSOK.php'); }); </script> <? } ?> Create the page JSOK.php like this: <? include_once("[DB connection file].php"); mysql_query("UPDATE log_JS SET JS_ON = 1 WHERE session_id = '" . mysql_real_escape_string(session_id()) . "'"); A: I've figured out another approach using CSS and JavaScript itself. This is just to start tinkering with classes and IDs. The CSS snippet: 1. Create a CSS ID rule, and name it #jsDis. 2. Use the "content" property to generate a text after the BODY element. (You can style this as you wish.) 3. Create a 2nd CSS ID rule and name it #jsEn, and stylize it. (For the sake of simplicity, I gave my #jsEn rule a different background color.) <style> #jsDis:after { content:"Javascript is Disabled. Please turn it ON!"; font:bold 11px Verdana; color:#FF0000; } #jsEn { background-color:#dedede; } #jsEn:after { content:"Javascript is Enabled. Well Done!"; font:bold 11px Verdana; color:#333333; } </style> The JavaScript snippet: 1. Create a function. 2. Grab the BODY ID with getElementById and assign it to a variable. 3. Using the JS function 'setAttribute', change the value of the ID attribute of the BODY element. <script> function jsOn() { var chgID = document.getElementById('jsDis'); chgID.setAttribute('id', 'jsEn'); } </script> The HTML part: 1. Name the BODY element attribute with the ID of #jsDis. 2. Add the onLoad event with the function name (jsOn()). <body id="jsDis" onLoad="jsOn()"> Because the BODY tag has been given the ID of #jsDis: - If Javascript is enabled, it will change the ID attribute of the BODY tag by itself. - If Javascript is disabled, it will show the CSS 'content:' rule text. You can play around with a #wrapper container, or with any DIV that uses JS. Hope this helps to get the idea. A: noscript blocks are executed when JavaScript is disabled, and are typically used to display alternative content to what you've generated in JavaScript, e.g. <script type="text/javascript"> ... construction of ajaxy-link, setting of "js-enabled" cookie flag, etc. </script> <noscript> <a href="next_page.php?nojs=1">Next Page</a> </noscript> Users without JS will get the next_page link - you can add parameters here so that you know on the next page whether they've come via a JS/non-JS link, or attempt to set a cookie via JS, the absence of which implies JS is disabled. Both of these examples are fairly trivial and open to manipulation, but you get the idea.
If you want a purely statistical idea of how many of your users have javascript disabled, you could do something like: <noscript> <img src="no_js.gif" alt="Javascript not enabled" /> </noscript> then check your access logs to see how many times this image has been hit. A slightly crude solution, but it'll give you a good idea percentage-wise for your user base. The above approach (image tracking) won't work well for text-only browsers or those that don't support JS at all, so if your userbase swings primarily towards that area, this mightn't be the best approach. A: Detect it in what? JavaScript? That would be impossible. If you just want it for logging purposes, you could use some sort of tracking scheme, where each page has JavaScript that will make a request for a special resource (probably a very small gif or similar). That way you can just take the difference between unique page requests and requests for your tracking file. A: Adding a refresh in meta inside noscript is not a good idea. * *Because the noscript tag is not XHTML compliant *The attribute value "Refresh" is nonstandard, and should not be used. "Refresh" takes the control of a page away from the user. Using "Refresh" will cause a failure in W3C's Web Content Accessibility Guidelines --- Reference http://www.w3schools.com/TAGS/att_meta_http_equiv.asp. A: For those who just want to track if JS was enabled, how about using an ajax routine to store the state? For example, I log all visitors/visits in a set of tables. The JSenabled field can be set to a default of FALSE, and the ajax routine would set it to TRUE if JS is enabled. A: Just a bit rough, but (hairbo gave me the idea) CSS: .pagecontainer { display: none; } JS: function load() { document.getElementById('noscriptmsg').style.display = "none"; document.getElementById('load').style.display = "block"; /* rest of js */ } HTML: <body onload="load();"> <div class="pagecontainer" id="load"> Page loading.... </div> <div id="noscriptmsg"> You don't have javascript enabled. Good luck with that. </div> </body> Would work in any case, right? Even if the noscript tag is unsupported (only some CSS required). Anyone know a non-CSS solution? A: You can use a simple JS snippet to set the value of a hidden field. When posted back you know if JS was enabled or not. Or you can try to open a popup window that you close rapidly (but that might be visible). Also you have the NOSCRIPT tag that you can use to show text for browsers with JS disabled. A: You'll want to take a look at the noscript tag. <script type="text/javascript"> ...some javascript script to insert data... </script> <noscript> <p>Access the <a href="http://someplace.com/data">data.</a></p> </noscript> A: Because I always want to give the browser something worthwhile to look at, I often use this trick: First, any portion of a page that needs JavaScript to run properly (including passive HTML elements that get modified through getElementById calls, etc.) is designed to be usable as-is with the assumption that there ISN'T JavaScript available (designed as if it wasn't there). Any elements that would require JavaScript, I place inside a tag something like: <span name="jsOnly" style="display: none;"></span> Then at the beginning of my document, I use .onload or document.ready within a loop of getElementsByName('jsOnly') to set the .style.display = "", turning the JS-dependent elements back on.
That way, non-JS browsers don't ever have to see the JS-dependent portions of the site, and if they have it, it appears immediately when it's ready. Once you are used to this method, it's fairly easy to hybridize your code to handle both situations, although I am only now experimenting with the noscript tag and expect it will have some additional advantages. A: The noscript tag works well, but will require each additional page request to continue serving useless JS files, since essentially noscript is a client-side check. You could set a cookie with JS, but as someone else pointed out, this could fail. Ideally, you'd like to be able to detect JS client side and, without using cookies, set a session server-side for that user that indicates JS is enabled. A possibility is to dynamically add a 1x1 image using JavaScript where the src attribute is actually a server-side script. All this script does is save to the current user session that JS is enabled ($_SESSION['js_enabled']). You can then output a 1x1 blank image back to the browser. The script won't run for users who have JS disabled, and hence $_SESSION['js_enabled'] won't be set. Then for further pages served to this user, you can decide whether to include all of your external JS files, but you'll always want to include the check, since some of your users might be using the NoScript Firefox add-on or have JS disabled temporarily for some other reason. You'll probably want to include this check somewhere close to the end of your page so that the additional HTTP request doesn't slow down the rendering of your page. A: Add this to the HEAD tag of each page. <noscript> <meta http-equiv="refresh" runat="server" id="mtaJSCheck" content="0;logon.aspx" /> </noscript> So you have: <head> <noscript> <meta http-equiv="refresh" runat="server" id="mtaJSCheck" content="0;logon.aspx" /> </noscript> </head> With thanks to Jay. A: A common solution is to use the meta tag in conjunction with noscript to refresh the page and notify the server when JavaScript is disabled, like this: <!DOCTYPE html> <html lang="en"> <head> <noscript> <meta http-equiv="refresh" content="0; /?javascript=false"> </noscript> <meta charset="UTF-8"/> <title></title> </head> </html> In the above example, when JavaScript is disabled the browser will redirect to the home page of the web site in 0 seconds. In addition, it will also send the parameter javascript=false to the server. A server-side script, such as Node.js or PHP, can then parse the parameter and come to know that JavaScript is disabled. It can then send a special non-JavaScript version of the web site to the client. A: This is the "cleanest" solution I'd use: <noscript> <style> body *{ /* hides all elements inside the body */ display: none; } h1{ /* even if this h1 is inside head tags it will be first hidden, so we have to display it again after all body elements are hidden */ display: block; } </style> <h1>JavaScript is not enabled, please check your browser settings.</h1> </noscript> A: Here is a PHP script which can be included once before any output is generated. It is not perfect, but it works well enough in most cases to avoid delivering content or code that will not be used by the client. The header comments explain how it works.
<?php /***************************************************************************** * JAVASCRIPT DETECTION * *****************************************************************************/ // Progressive enhancement and graceful degradation are not sufficient if we // want to avoid sending HTML or JavaScript code that won't be useful on the // client side. A normal HTTP request will not include any explicit indicator // that JavaScript is enabled in the client. So a "preflight response" is // needed to prompt the client to provide an indicator in a follow-up request. // Once the state of JavaScript availability has been received the state of // data received in the original request must be restored before proceding. // To the user, this handshake should be as invisible as possible. // // The most convenient place to store the original data is in a PHP session. // The PHP session extension will try to use a cookie to pass the session ID // but if cookies are not enabled it will insert it into the query string. // This violates our preference for invisibility. When Javascript is not // enabled the only way to effect a client side redirect is with a "meta" // element with its "http-equiv" attribute set to "refresh". In this case // modifying the URL is the only way to pass the session ID back. // // But when cookies are disabled and JavaScript is enabled then a client side // redirect can be effected by setting the "window.onload" method to a function // which submits a form. The form has a "method" attribute of "post" and an // "action" attribute set to the original URL. The form contains two hidden // input elements, one in which the session ID is stored and one in which the // state of JavaScript availability is stored. Both values are thereby passed // back to the server in a POST request while the URL remains unchanged. The // follow-up request will be a POST even if the original request was a GET, but // since the original request data is restored, the containing script ought to // process the request as though it were a GET. // In order to ensure that the constant SID is defined as the caller of this // script would expect, call session_start if it hasn't already been called. $session = isset($_SESSION); if (!$session) session_start(); // Use a separate session for Javascript detection. Save the caller's session // name and ID. If this is the followup request then close the caller's // session and reopen the Javascript detection session. Otherwise, generate a // new session ID, close the caller's session and create a new session for // Javascript detection. $session_name = session_name(); $session_id = session_id(); session_write_close(); session_name('JS_DETECT'); if (isset($_COOKIE['JS_DETECT'])) { session_id($_COOKIE['JS_DETECT']); } elseif (isset($_REQUEST['JS_DETECT'])) { session_id($_REQUEST['JS_DETECT']); } else { session_id(sha1(mt_rand())); } session_start(); if (isset($_SESSION['_SERVER'])) { // Preflight response already sent. // Store the JavaScript availability status in a constant. define('JS_ENABLED', 0+$_REQUEST['JS_ENABLED']); // Store the cookie availability status in a constant. define('COOKIES_ENABLED', isset($_COOKIE['JS_DETECT'])); // Expire the cookies if they exist. setcookie('JS_DETECT', 0, time()-3600); setcookie('JS_ENABLED', 0, time()-3600); // Restore the original request data. 
$_GET = $_SESSION['_GET']; $_POST = $_SESSION['_POST']; $_FILES = $_SESSION['_FILES']; $_COOKIE = $_SESSION['_COOKIE']; $_SERVER = $_SESSION['_SERVER']; $_REQUEST = $_SESSION['_REQUEST']; // Ensure that uploaded files will be deleted if they are not moved or renamed. function unlink_uploaded_files () { foreach (array_keys($_FILES) as $k) if (file_exists($_FILES[$k]['tmp_name'])) unlink($_FILES[$k]['tmp_name']); } register_shutdown_function('unlink_uploaded_files'); // Reinitialize the superglobal. $_SESSION = array(); // Destroy the Javascript detection session. session_destroy(); // Reopen the caller's session. session_name($session_name); session_id($session_id); if ($session) session_start(); unset($session, $session_name, $session_id, $tmp_name); // Complete the request. } else { // Preflight response not sent so send it. // To cover the case where cookies are enabled but JavaScript is disabled, // initialize the cookie to indicate that JavaScript is disabled. setcookie('JS_ENABLED', 0); // Prepare the client side redirect used when JavaScript is disabled. $content = '0; url='.$_SERVER['REQUEST_URI']; if (!$_GET['JS_DETECT']) { $content .= empty($_SERVER['QUERY_STRING']) ? '?' : '&'; $content .= 'JS_DETECT='.session_id(); } // Remove request data which should only be used here. unset($_GET['JS_DETECT'],$_GET['JS_ENABLED'], $_POST['JS_DETECT'],$_POST['JS_ENABLED'], $_COOKIE['JS_DETECT'],$_COOKIE['JS_ENABLED'], $_REQUEST['JS_DETECT'],$_REQUEST['JS_ENABLED']); // Save all remaining request data in session data. $_SESSION['_GET'] = $_GET; $_SESSION['_POST'] = $_POST; $_SESSION['_FILES'] = $_FILES; $_SESSION['_COOKIE'] = $_COOKIE; $_SESSION['_SERVER'] = $_SERVER; $_SESSION['_REQUEST'] = $_REQUEST; // Rename any uploaded files so they won't be deleted by PHP. When using // a clustered web server, upload_tmp_dir must point to shared storage. foreach (array_keys($_FILES) as $k) { $tmp_name = $_FILES[$k]['tmp_name'].'x'; if (move_uploaded_file($_FILES[$k]['tmp_name'], $tmp_name)) $_SESSION['_FILES'][$k]['tmp_name'] = $tmp_name; } // Have the client inform the server as to the status of Javascript. ?> <!DOCTYPE html> <html> <head> <script> document.cookie = 'JS_ENABLED=1'; // location.reload causes a confirm box in FireFox // if (document.cookie) { location.reload(true); } if (document.cookie) { location.href = location; } </script> <meta http-equiv="refresh" content="<?=$content?>" /> </head> <body> <form id="formid" method="post" action="" > <input type="hidden" name="<?=$session_name?>" value="<?=$session_id?>" /> <input type="hidden" name="JS_DETECT" value="<?=session_id()?>" /> <input type="hidden" name="JS_ENABLED" value="1" /> </form> <script> document.getElementById('formid').submit(); </script> </body> </html> <?php exit; } ?> A: Here is the twist! There might be clients with JS-compatible browsers and JavaScript enabled, where for whatever reason JavaScript still does not work in the browser (e.g. firewall settings). According to statistics this happens in about 1 out of 93 cases. So the server detects that the client is capable of executing JavaScript, but actually it doesn't! As a solution I suggest we set a cookie on the client side and then read it from the server. If the cookie is set, then JS works fine. Any thoughts? A: Of course, cookies and HTTP headers are great solutions, but both would require explicit server-side involvement. For simple sites, or where I don't have backend access, I prefer client-side solutions.
-- I use the following to set a class attribute on the HTML element itself, so my CSS can handle pretty much all other display-type logic. METHODS: 1) Place <script>document.getElementsByTagName('html')[0].classList.add('js-enabled');</script> above the <html> element. WARNING!!!! This method will override all <html> class attributes, not to mention it may not be "valid" HTML, but it works in all browsers I've tested it in. *NOTES: Due to the timing of when the script is run, before the <html> tag is processed, it ends up getting an empty classList collection with no nodes, so by the time the script completes, the <html> element will be given only the classes you added. 2) To preserve all other <html> class attributes, simply place the script <script>document.getElementsByTagName('html')[0].classList.add('js-enabled');</script> right after the opening <html> tag. In both cases, if JS was disabled, then no changes to the <html> class attributes will be made. ALTERNATIVES Over the years I've used a few other methods: <script type="text/javascript"> <!-- (function(d, a, b){ let x = function(){ // Select and swap let hits = d.getElementsByClassName(a); for( let i = hits.length - 1; i >= 0; i-- ){ hits[i].classList.add(b); hits[i].classList.remove(a); } }; // Initialize Second Pass... setTimeout(function(){ x(); },0); x(); })(document, 'no-js', 'js-enabled' ); --> </script> // Minified as: <script type="text/javascript"> <!-- (function(d, a, b, x, hits, i){x=function(){hits=d.getElementsByClassName(a);for(i=hits.length-1;i>=0;i--){hits[i].classList.add(b);hits[i].classList.remove(a);}};setTimeout(function(){ x(); },0);x();})(document, 'no-js', 'js-enabled' ); --> </script> * *This will cycle through the page twice, once at the point where it is in the page, generally right after the <html>, and once again after page load. Two passes were required, as I injected into a header.tpl file of a CMS which I did not have backend access to, but wanted to present styling options for no-js snippets. The first pass would set the .js-enabled class, permitting any global styles to kick in, and prevented most further reflows. The second pass was a catch-all for any later-included content. REASONINGS: The main reason I've cared whether JS was enabled or not was for "styling" purposes: hide/show a form, enable/disable buttons, or restyle the presentation and layouts of sliders, tables, and other presentations which required JS to function "correctly" and would be useless or unusable without JS to animate or handle the interactions. Also, you can't directly detect with JavaScript if JavaScript is "disabled"... only if it is "enabled", by executing some JavaScript. So you either rely on <meta http-equiv="refresh" content="2;url=/url/to/no-js/content.html" /> or you rely on CSS to switch styles and, if JavaScript executes, switch to a "js-enabled" mode. A: Why do you need to know server-side if JavaScript is enabled? Does it matter which variant the browser supports? Does it e.g. need to understand the keyword let, or is just var okay? I'd recommend sending readable content that doesn't require any JavaScript to be accessible, and then just trying to load a JS file to add all the JS behaviors you want in addition.
For example, the UI might end up missing a Login or Modify button if JS is not enabled, and it might include a small text at the bottom (using <noscript> or some element with a CSS animation that shows the text after a small delay if JS code doesn't remove the whole element soon enough) saying "To login/modify this content, you must enable JavaScript support in your browser." If you do this well, the reader may not even notice that anything is missing unless he or she is trying to login or modify the content. As an optimization you could then set a cookie with JavaScript, and the server could avoid sending the non-JavaScript readable content if you wish to acquire it asynchronously for some reason. Just make sure to only set this cookie after the JS code has run to completion at least once, instead of setting it immediately when the JS code starts to run, to make sure that the end user doesn't end up with a blank screen when (not if!) the JS code fails for any reason. (Note that loading the initial page state asynchronously will not get that content to the end user any faster. However, you could send only part of the total content without JavaScript. That could allow rendering "above the fold" faster and then asynchronously loading the rest of the page using JS code.) As an added bonus, search engines can still index your site without any JavaScript enabled. A: Update 2022 This question already has over 30 answers, but none of them seem to be clear or precise about what has to be done. * *Server-side framework used - ASP.NET Core *Client side - vanilla JS In the BaseController, where the first entry point of the app is OnActionExecutionAsync, I have this piece of logic. * *Basically, by default I assume the client does not have JavaScript enabled and set this flag. Response.Cookies.Append("jsEnabled", "false"); * *Now, after the initial load on the client, I have a JavaScript function that updates this flag to true. *This function will only run when the environment has JavaScript. So the full solution is here. Client On initial load, add this function: function detectIfJavascriptIsEnabled() { // if this function runs, it means JS is enabled var jsEnabled = getCookie('jsEnabled'); if (jsEnabled === 'false') { setCookie('jsEnabled', 'true'); location.reload(); } } Server private bool ValidateIfEnvironmentHasJavascript() { if (HttpContext.Request.Cookies != null && HttpContext.Request.Cookies.Count > 0) { Boolean.TryParse(HttpContext.Request.Cookies["jsEnabled"], out var hasJavascriptEnabled); return hasJavascriptEnabled; } else { Response.Cookies.Append("jsEnabled", "false", new CookieOptions() { IsEssential = true, Expires = DateTime.UtcNow.AddHours(24) }); } return false; } This is how you get the result: var environmentHasJavascript = ValidateIfEnvironmentHasJavascript(); A: I had to solve the same problem yesterday, so I'm just adding my .001 here. The solution works OK for me, at least on the home page (index.php). I like to have only one file at the root folder: index.php. Then I use folders to structure the whole project (code, css, js, etc). So the code for index.php is as follows: <head> <title>Please Activate Javascript</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <script type="text/javascript" src="js/jquery-1.3.2.min.js"></script> </head> <body> <script language="JavaScript"> $(document).ready(function() { location.href = "code/home.php"; }); </script> <noscript> <h2>This web site needs javascript activated to work properly. Please activate it.
Thanks!</h2> </noscript> </body> </html> Hope this helps anyone. Best Regards. A: Might sound like a strange solution, but you can give it a try: <?php $jsEnabledVar = 0; ?> <script type="text/javascript"> var jsenabled = 1; if(jsenabled == 1) { <?php $jsEnabledVar = 1; ?> } </script> <noscript> var jsenabled = 0; if(jsenabled == 0) { <?php $jsEnabledVar = 0; ?> } </noscript> Now use the value of '$jsEnabledVar' throughout the page. You may also use it to display a block indicating to the user that JS is turned off. Hope this will help.
{ "language": "en", "url": "https://stackoverflow.com/questions/121203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "700" }
Q: JMXQuery connection - authentication failed Hey all. Newbie question time. I'm trying to setup JMXQuery to connect to my MBean, so far this is what I got. java -classpath jmxquery org.nagios.JMXQuery -U service:jmx:rmi:///jndi/rmi://localhost:8004/jmxrmi -O java.lang:type=Memory -A "NonHeapMemoryUsage" Here's what I get. JMX CRITICAL Authentication failed! Credentials required I got the credentials, but how do I pass them to JMXQuery? /Ace A: According to the source, you should be able to use -username and -password arguments. http://code.google.com/p/jmxquery/source/browse/trunk/src/main/java/jmxquery/JMXQuery.java?r=3 A: It seems that this is an addon to the original JMX-query, look at the comment field. /** * * JMXQuery is used for local or remote request of JMX attributes * It requires JRE 1.5 to be used for compilation and execution. * Look method main for description how it can be invoked. * * This plugin was found on nagiosexchange. It lacked a username/password/role system. * * @author unknown * @author Ryan Gravener (ryangravener@gmail.com) * */ Does that mean that there's no way to remotely access JMX with original JMXQuery? If so, what can you do with it? A: java -classpath jmxquery org.nagios.JMXQuery -U service:jmx:rmi:///jndi/rmi://localhost:8004/jmxrmi -O java.lang:type=Memory -A NonHeapMemoryUsage -K used -I NonHeapMemoryUsage -J used -vvvv -w 82208358 -c 105696461 -username monitorRole -password changeme A: You can download a version of check_jmx that works with --username and --password from http://snippets.syabru.ch/nagios-jmx-plugin/download.html
{ "language": "en", "url": "https://stackoverflow.com/questions/121205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it legal to reverse engineer binary file formats Is it legal to add support for a 3rd party file format in my application by reverse engineering an unencrypted binary file format from another application, and reading the contents? A: Depends on the format in question, your location, etc. And this is not "cracking", more like "reverse engineering". Consult a lawyer, with the licenses of the applications that produce the formats! A: ArsTechnica recently published a nice article about the legality of reverse engineering in the context of Google Chrome using undocumented behavior in Windows for some of its security protections (to achieve DEP on some versions of Windows XP, for example). A: Depends on your location. In the EU it is specifically permitted (article 6 EU software convention) to "reverse engineer file formats for the purpose of interoperability". In general it's hard to prevent someone reading a file format; there have been examples where you are prohibited from writing a file because the format contains patented technology (if software patents are allowed in your country). This was the case with GIF for a number of years. A: IANAL, but yes, I'm pretty sure it is legal, as long as you don't circumvent some copy-protection system or something. See as an example OpenOffice being able to read Word documents. A: It all depends on the laws in your country and the intent with which you are reverse engineering the file format. You need to check with an attorney/lawyer/barrister to be certain. Have you checked with the company that owns the file format to see if there is a published spec or other information that would allow you to interoperate without the need to reverse engineer? A: If you have to violate the terms of an EULA to do so, it is illegal. Although most states have laws expressly permitting reverse engineering, it is my understanding that that is not an 'inalienable right', but one you can waive contractually. (In other words, permitting reverse engineering does not prevent you from agreeing not to anyway.) Some jurisdictions go further, however, and explicitly state that reverse engineering is always permitted, even if a contract states otherwise. If you live in the US, the DMCA generally prohibits reverse engineering if you have to circumvent a "copy protection technology" to do so, but it makes a specific exception for circumventing it for the purpose of making something compatible with the format. Otherwise, it is probably legal. Regardless of the answer here, though, consult a lawyer. A: I wonder if this is covered by the DMCA in the US? As always, your best bet is to ask a lawyer.
{ "language": "en", "url": "https://stackoverflow.com/questions/121208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Client-side detection of HTTP request method Is it possible to detect the HTTP request method (e.g. GET or POST) of a page from JavaScript? If so, how? A: If you need this functionality, have the server detect what method was used and then modify something in the DOM that you can then read out later. A: In a word - no. A: I don't believe so. If you need this information, I suggest including a <meta> element generated on the server that you can check with JavaScript. For example, with PHP: <meta id="request-method" name="request-method" content="<?php echo htmlentities($_SERVER['REQUEST_METHOD']); ?>"> <script type="text/javascript"> alert(document.getElementById("request-method").content); </script> A: You can check the page's referrer: document.referrer == document.URL If it's the same page it's quite likely that the user submitted the form. Of course this requires * *that you don't link from a page to itself (which is required for accessibility anyway) *that the form is submitted to the very same page it's on *that the user did not disable the referrer A: You can't do this for a normal POST/GET; however, you can get at this info if you use an XMLHttp call and use getResponseHeader. A: No. JS is a client-side programming language, meaning everything is client side. You can, however, use PHP or any other server-side language to do this. Here is an example if you were to use PHP: <?php $foo = $_POST["fooRequest"]; # The actual response. # do something with the foo variable like: # echo "Response got: " . $foo; ?> Then add some HTML: <form action="test.php" method="post"> <input type="text" class="foo" name="fooRequest" placeholder="Testing requests" /> <button type="submit" name="submitButton">Send</button> </form> The above code sends a POST request, and in the PHP we get the 'fooRequest' value. However, if you were to use JS, that's not possible - like I said, it's a client-side programming language. I know there is already an answer, but I just wanted to explain a little more. A: Try this function getURIQueryString(){ var params = {}; var qstring = window.location.toString().substring(window.location.toString().indexOf("?") + 1); var regex = /([^&=]+)=([^&=]+)/g; var m; while (m = regex.exec(qstring)){ params[decodeURIComponent(m[1])] = decodeURIComponent(m[2]); } return params; } It usually works. For example, to read a GET parameter named test, use getURIQueryString().test. But it is impossible to get at POST data this way.
{ "language": "en", "url": "https://stackoverflow.com/questions/121218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Visual Studio ClickOnce deployment - certificate expiration I have a problem with a ClickOnce deployment of a Windows Forms application. When I built the new setup, and tried to export it overwriting as usual the previous setup, Visual Studio came up stating that my certificate is expired. This behaviour is described in You receive an error message when you try to update a Visual Studio 2005 ClickOnce application after the certificate that was used to sign the installation expires and there is a workaround in RenewCert - Working Version. But these solutions are not applicable in my situation. Another workaround involves taking back the system date of the deployment server to a date before the certificate expiry date (during the deployment operations) - but I see this as a very "last chance". How can I fix this problem? Is there another workaround I can try? A: I found a blog entry, ClickOnce and Expiring Code Signing Certificates by James Harte, that describes a method to have your application remove itself and launch the new ClickOnce install. It worked for me. A: I ran into this problem almost two years ago. There is really no good workaround if RenewCert won't work for you. I even emailed the ClickOnce authority, Brian Noyes, and got confirmation that there were no good workarounds. We ended up buying a 3 year cert and telling our users to uninstall. However, if I remember correctly, the users only got error messages when launching the app from the start menu. If they went to the web page, it installed the app and ran fine. Of course the client then had 2 versions of the app on their machines :). I can't remember what happened to the start menu shortcuts in that scenario.
{ "language": "en", "url": "https://stackoverflow.com/questions/121223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you convert a string to a node in XQuery? I would like to convert a string into a node. I have a method that is defined to take a node, but the value I have is a string (it is hard coded). How do I turn that string into a node? So, given an XQuery method: define function foo($bar as node()*) as node() { (: unimportant details :) } I have a string that I want to pass to the foo method. How do I convert the string to a node so that the method will accept it? A: If you are talking about strings that contain XML markup, there are standardized solutions (from XPath/XQuery Functions 3.0) as well: * *string to node: fn:parse-xml() *node to string: fn:serialize() A: The answer to this question depends on what engine is being used. For instance, users of Saxon use the saxon:parse method. The fact is the XQuery spec doesn't have a built-in for this. Generally speaking you would only really need to use this if you needed to pull some embedded XML from a CDATA section. Otherwise you can read files in from the filesystem, or declare XML directly inline. For the most part you would use the declarative form instead of a hardcoded string, e.g. (using Stylus Studio) declare namespace my = "http://tempuri.org"; declare function my:foo($bar as node()*) as node() { <unimportant></unimportant> } ; let $bar := <node><child></child></node> return my:foo($bar) A: MarkLogic solutions: The best way to convert a string into a node is to use: xdmp:unquote($string). Conversely, if you want to convert a node into a string you would use: xdmp:quote($node). Language-agnostic solutions: Node to string is: fn:string($node) A: If you want to create a text node out of the string, just use a text node constructor: text { "your string goes here" } or if you prefer to create an element with the string content, you can construct an element something like this: element some-element { "your string goes here" } A: You can also use fn:parse-xml(xs:string) to convert your current valid XML string into a document.
{ "language": "en", "url": "https://stackoverflow.com/questions/121237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to do unsigned saturating addition in C? What is the best (cleanest, most efficient) way to write saturating addition in C? The function or macro should add two unsigned inputs (need both 16- and 32-bit versions) and return all-bits-one (0xFFFF or 0xFFFFFFFF) if the sum overflows. Target is x86 and ARM using gcc (4.1.2) and Visual Studio (for simulation only, so a fallback implementation is OK there). A: uint32_t saturate_add32(uint32_t a, uint32_t b) { uint32_t sum = a + b; if ((sum < a) || (sum < b)) return ~((uint32_t)0); else return sum; } /* saturate_add32 */ uint16_t saturate_add16(uint16_t a, uint16_t b) { uint16_t sum = a + b; if ((sum < a) || (sum < b)) return ~((uint16_t)0); else return sum; } /* saturate_add16 */ Edit: Now that you've posted your version, I'm not sure mine is any cleaner/better/more efficient/more studly. A: You probably want portable C code here, which your compiler will turn into proper ARM assembly. ARM has conditional moves, and these can be conditional on overflow. The algorithm then becomes: add and conditionally set the destination to unsigned(-1), if overflow was detected. uint16_t add16(uint16_t a, uint16_t b) { uint16_t c = a + b; if (c < a) /* Can only happen due to overflow */ c = -1; return c; } Note that this differs from the other algorithms in that it corrects overflow, instead of relying on another calculation to detect overflow. x86-64 clang 3.7 -O3 output for adds32: significantly better than any other answer: add edi, esi mov eax, -1 cmovae eax, edi ret ARMv7: gcc 4.8 -O3 -mcpu=cortex-a15 -fverbose-asm output for adds32: adds r0, r0, r1 @ c, a, b it cs movcs r0, #-1 @ conditional-move bx lr 16bit: still doesn't use ARM's unsigned-saturating add instruction (UQADD16) add r1, r1, r0 @ tmp114, a movw r3, #65535 @ tmp116, uxth r1, r1 @ c, tmp114 cmp r0, r1 @ a, c ite ls @ movls r0, r1 @,, c movhi r0, r3 @,, tmp116 bx lr @ A: The current implementation we are using is: #define sadd16(a, b) (uint16_t)( ((uint32_t)(a)+(uint32_t)(b)) > 0xffff ? 0xffff : ((a)+(b))) #define sadd32(a, b) (uint32_t)( ((uint64_t)(a)+(uint64_t)(b)) > 0xffffffff ? 0xffffffff : ((a)+(b))) A: I'm not sure if this is faster than Skizz's solution (always profile), but here's an alternative no-branch assembly solution. Note that this requires the conditional move (CMOV) instruction, which I'm not sure is available on your target. uint32_t sadd32(uint32_t a, uint32_t b) { __asm { mov eax, a add eax, b mov edx, 0xffffffff cmovc eax, edx } } A: In plain C: uint16_t sadd16(uint16_t a, uint16_t b) { return (a > 0xFFFF - b) ? 0xFFFF : a + b; } uint32_t sadd32(uint32_t a, uint32_t b) { return (a > 0xFFFFFFFF - b) ? 0xFFFFFFFF : a + b; } which is almost macro-ized and directly conveys the meaning. A: I suppose the best way for x86 is to use inline assembler to check the carry flag after the addition. Something like: add eax, ebx jnc @@1 or eax, 0FFFFFFFFh @@1: ....... It's not very portable, but IMHO the most efficient way. A: The best performance will usually involve inline assembly (as some have already stated). But for portable C, these functions only involve one comparison and no type-casting (and thus I believe optimal): unsigned saturate_add_uint(unsigned x, unsigned y) { if (y > UINT_MAX - x) return UINT_MAX; return x + y; } unsigned short saturate_add_ushort(unsigned short x, unsigned short y) { if (y > USHRT_MAX - x) return USHRT_MAX; return x + y; } As macros, they become: #define SATURATE_ADD_UINT(x, y) (((y)>UINT_MAX-(x)) ?
UINT_MAX : ((x)+(y))) #define SATURATE_ADD_USHORT(x, y) (((y)>USHRT_MAX-(x)) ? USHRT_MAX : ((x)+(y))) I leave versions for 'unsigned long' and 'unsigned long long' as an exercise to the reader. ;-) A: Just in case someone wants to know an implementation without branching using 2's complement 32bit integers. Warning! This code uses the undefined operation: "shift right by -1" and therefore exploits the property of the Intel Pentium SAL instruction to mask the count operand to 5 bits. int32_t sadd(int32_t a, int32_t b){ int32_t sum = a+b; int32_t overflow = ((a^sum)&(b^sum))>>31; return (overflow<<31)^(sum>>overflow); } It's the best implementation known to me. A: In IA32 without conditional jumps: uint32_t sadd32(uint32_t a, uint32_t b) { #if defined IA32 __asm { mov eax,a xor edx,edx add eax,b setnc dl dec edx or eax,edx } #elif defined ARM // ARM code #else // non-IA32/ARM way, copy from above #endif } A: In ARM you may already have saturated arithmetic built-in. The ARMv5 DSP-extensions can saturate registers to any bit-length. Also on ARM saturation is usually cheap because you can execute most instructions conditionally. ARMv6 even has saturated addition, subtraction and all the other stuff for 32 bits and packed numbers. On the x86 you get saturated arithmetic either via MMX or SSE. All this needs assembler, so it's not what you've asked for. There are C-tricks to do saturated arithmetic as well. This little code does saturated addition on four bytes of a dword. It's based on the idea of calculating 32 half-adders in parallel, i.e. adding numbers without carry overflow. This is done first. Then the carries are calculated, added and replaced with a mask if the addition would overflow. uint32_t SatAddUnsigned8(uint32_t x, uint32_t y) { uint32_t signmask = 0x80808080; uint32_t t0 = (y ^ x) & signmask; uint32_t t1 = (y & x) & signmask; x &= ~signmask; y &= ~signmask; x += y; t1 |= t0 & x; t1 = (t1 << 1) - (t1 >> 7); return (x ^ t0) | t1; } You can get the same for 16 bits (or any kind of bit-field) by changing the signmask constant and the shifts at the bottom like this: uint32_t SatAddUnsigned16(uint32_t x, uint32_t y) { uint32_t signmask = 0x80008000; uint32_t t0 = (y ^ x) & signmask; uint32_t t1 = (y & x) & signmask; x &= ~signmask; y &= ~signmask; x += y; t1 |= t0 & x; t1 = (t1 << 1) - (t1 >> 15); return (x ^ t0) | t1; } uint32_t SatAddUnsigned32 (uint32_t x, uint32_t y) { uint32_t signmask = 0x80000000; uint32_t t0 = (y ^ x) & signmask; uint32_t t1 = (y & x) & signmask; x &= ~signmask; y &= ~signmask; x += y; t1 |= t0 & x; t1 = (t1 << 1) - (t1 >> 31); return (x ^ t0) | t1; } The above code does the same for 16 and 32 bit values. If you don't need the feature that the functions add and saturate multiple values in parallel, just mask out the bits you need. On ARM you also want to change the signmask constant because ARM can't load all possible 32 bit constants in a single cycle. Edit: The parallel versions are most likely slower than the straightforward methods, but they are faster if you have to saturate more than one value at a time. A: If you care about performance, you really want to do this sort of stuff in SIMD, where x86 has native saturating arithmetic. Because of this lack of saturating arithmetic in scalar math, one can get cases in which operations done on 4-variable-wide SIMD are more than 4 times faster than the equivalent C (and correspondingly true with 8-variable-wide SIMD): sub8x8_dct8_c: 1332 clocks sub8x8_dct8_mmx: 182 clocks sub8x8_dct8_sse2: 127 clocks
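To make the SIMD point concrete, here is a minimal sketch of the kind of intrinsic those timings rely on (assuming SSE2 is available; the wrapper name sadd16x8 is just illustrative):

#include <emmintrin.h> /* SSE2 */

/* PADDUSW: adds eight uint16_t lanes at once, each lane saturating
   independently to 0xFFFF on overflow. */
__m128i sadd16x8(__m128i a, __m128i b)
{
    return _mm_adds_epu16(a, b);
}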
Because of this lack of saturating arithmetic in scalar math, one can get cases in which operations done on 4-variable-wide SIMD are more than 4 times faster than the equivalent C (and correspondingly true with 8-variable-wide SIMD):
sub8x8_dct8_c:    1332 clocks
sub8x8_dct8_mmx:   182 clocks
sub8x8_dct8_sse2:  127 clocks
A: Zero branch solution:
uint32_t sadd32(uint32_t a, uint32_t b)
{
    uint64_t s = (uint64_t)a + b;
    return -(s >> 32) | (uint32_t)s;
}
A good compiler will optimize this to avoid doing any actual 64-bit arithmetic (s>>32 will merely be the carry flag, and -(s>>32) is the result of sbb %eax,%eax). In x86 asm (AT&T syntax, a and b in eax and ebx, result in eax):
    add %eax,%ebx
    sbb %eax,%eax
    or %ebx,%eax
8- and 16-bit versions should be obvious. Signed version might require a bit more work.
A: An alternative to the branch free x86 asm solution is (AT&T syntax, a and b in eax and ebx, result in eax):
    add %eax,%ebx
    sbb $0,%ebx
(Be careful with this one: on overflow it yields the wrapped sum minus one rather than all-ones - it only returns 0xFFFFFFFF when the sum wraps to exactly zero - so the three-instruction add/sbb/or version above is the correct general form.)
A:
int saturating_add(int x, int y)
{
    int w = sizeof(int) << 3;
    int msb = 1 << (w-1);

    int s = x + y;
    int sign_x = msb & x;
    int sign_y = msb & y;
    int sign_s = msb & s;

    int nflow = sign_x && sign_y && !sign_s;
    int pflow = !sign_x && !sign_y && sign_s;

    int nmask = (~!nflow + 1);
    int pmask = (~!pflow + 1);

    return (nmask & ((pmask & s) | (~pmask & ~msb))) | (~nmask & msb);
}
This implementation doesn't use control flow, compare operators (==, !=) or the ?: operator. It just uses bitwise operators and logical operators.
A: Using C++ you could write a more flexible variant of Remo.D's solution:
template<typename T>
T sadd(T first, T second)
{
    static_assert(std::is_integral<T>::value, "sadd is not defined for non-integral types");
    return first > std::numeric_limits<T>::max() - second ? std::numeric_limits<T>::max() : first + second;
}
This can be easily translated to C - using the limits defined in limits.h. Please also note that the fixed width integer types might not be available on your system.
A:
//function-like macro to add signed vals,
//then test for overflow and clamp to max if required
#define SATURATE_ADD(a,b,val) ( {\
if( (a>=0) && (b>=0) )\
{\
    val = a + b;\
    if (val < 0) {val=0x7fffffff;}\
}\
else if( (a<=0) && (b<=0) )\
{\
    val = a + b;\
    if (val > 0) {val=-1*0x7fffffff;}\
}\
else\
{\
    val = a + b;\
}\
})
I did a quick test and it seems to work, but I haven't bashed on it extensively yet! This works with SIGNED 32 bit. P.S.: the editor used on the web page does not let me post the macro as-is, i.e. it doesn't understand non-indented syntax etc!
A: Saturation arithmetic is not standard for C, but it's often implemented via compiler intrinsics, so the most efficient way will not be the cleanest. You must add #ifdef blocks to select the proper way. MSalters's answer is the fastest for the x86 architecture. For ARM you need to use the __qadd16 function (ARM compiler) or _arm_qadd16 (Microsoft Visual Studio) for the 16-bit version, and __qadd for the 32-bit version. They'll be automatically translated to one ARM instruction. Links:
* __qadd16
* _arm_qadd16
* __qadd
A: I'll add solutions that were not yet mentioned above. There is an ADC instruction in Intel x86. It is exposed as the _addcarry_u32() intrinsic function. For ARM there should be a similar intrinsic. This allows us to implement a very fast uint32_t saturating addition for Intel x86: Try it online!
#include <stdint.h>
#include <immintrin.h>

uint32_t add_sat_u32(uint32_t a, uint32_t b) {
    uint32_t r, carry = _addcarry_u32(0, a, b, &r);
    return r | (-carry);
}
Intel x86 MMX saturated addition instructions can be used to implement the uint16_t variant: Try it online!
#include <stdint.h>
#include <immintrin.h>

uint16_t add_sat_u16(uint16_t a, uint16_t b) {
    return _mm_cvtsi64_si32(_mm_adds_pu16(
        _mm_cvtsi32_si64(a), _mm_cvtsi32_si64(b)
    ));
}
I don't give an ARM solution, as it is covered by the generic solutions in other answers.
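As a postscript for readers on newer toolchains: GCC 5+ and Clang expose checked-arithmetic builtins, which give a clean near-portable version. This is an editorial sketch, not from the original answers, and it does not apply to the GCC 4.1.2 mentioned in the question; compilers typically lower it to the same add/cmov or adds/movcs sequences shown above.
#include <stdint.h>

/* Saturating adds via __builtin_add_overflow (GCC 5+/Clang).
   The builtin computes the infinite-precision sum and reports
   whether it fits in the result type. */
uint32_t sat_add_u32(uint32_t a, uint32_t b)
{
    uint32_t sum;
    if (__builtin_add_overflow(a, b, &sum))
        return UINT32_MAX;  /* clamp on overflow */
    return sum;
}

uint16_t sat_add_u16(uint16_t a, uint16_t b)
{
    uint16_t sum;
    if (__builtin_add_overflow(a, b, &sum))
        return UINT16_MAX;
    return sum;
}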
{ "language": "en", "url": "https://stackoverflow.com/questions/121240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Hidden Features of SQL Server What are some hidden features of SQL Server? For example, undocumented system stored procedures, tricks to do things which are very useful but not documented enough?
Answers
Thanks to everybody for all the great answers!
Stored Procedures
*sp_msforeachtable: Runs a command with '?' replaced with each table name (v6.5 and up)
*sp_msforeachdb: Runs a command with '?' replaced with each database name (v7 and up)
*sp_who2: just like sp_who, but with a lot more info for troubleshooting blocks (v7 and up)
*sp_helptext: If you want the code of a stored procedure, view & UDF
*sp_tables: return a list of all tables and views of database in scope.
*sp_stored_procedures: return a list of all stored procedures
*xp_sscanf: Reads data from the string into the argument locations specified by each format argument.
*xp_fixeddrives: Find the fixed drive with largest free space
*sp_help: If you want to know the table structure, indexes and constraints of a table. Also views and UDFs. Shortcut is Alt+F1
Snippets
*Returning rows in random order
*All database User Objects by Last Modified Date
*Return Date Only
*Find records which date falls somewhere inside the current week.
*Find records which date occurred last week.
*Returns the date for the beginning of the current week.
*Returns the date for the beginning of last week.
*See the text of a procedure that has been deployed to a server
*Drop all connections to the database
*Table Checksum
*Row Checksum
*Drop all the procedures in a database
*Re-map the login Ids correctly after restore
*Call Stored Procedures from an INSERT statement
*Find Procedures By Keyword
*Query the transaction log for a database programmatically.
Functions
*HashBytes()
*EncryptByKey
*PIVOT command
Misc
*Connection String extras
*TableDiff.exe
*Triggers for Logon Events (New in Service Pack 2)
*Boosting performance with persisted-computed-columns (pcc).
*DEFAULT_SCHEMA setting in sys.database_principals
*Forced Parameterization
*Vardecimal Storage Format
*Figuring out the most popular queries in seconds
*Scalable Shared Databases
*Table/Stored Procedure Filter feature in SQL Management Studio
*Trace flags
*Number after a GO repeats the batch
*Security using schemas
*Encryption using built in encryption functions, views and base tables with triggers
A: In Management Studio, you can put a number after a GO end-of-batch marker to cause the batch to be repeated that number of times:
PRINT 'X'
GO 10
Will print 'X' 10 times. This can save you from tedious copy/pasting when doing repetitive stuff.
A: Return Date Only
Select Cast(Floor(Cast(Getdate() As Float)) As Datetime)
or
Select DateAdd(Day, 0, DateDiff(Day, 0, Getdate()))
A: dm_db_index_usage_stats
This allows you to know if data in a table has been updated recently even if you don't have a DateUpdated column on the table.
SELECT OBJECT_NAME(OBJECT_ID) AS TableName, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('MyDatabase')
AND OBJECT_ID = OBJECT_ID('MyTable')
Code from: http://blog.sqlauthority.com/2009/05/09/sql-server-find-last-date-time-updated-for-any-table/
Information referenced from: SQL Server - What is the date/time of the last inserted row of a table?
Available in SQL 2005 and later
A: A lot of SQL Server developers still don't seem to know about the OUTPUT clause (SQL Server 2005 and newer) on the DELETE, INSERT and UPDATE statement.
It can be extremely useful to know which rows have been INSERTed, UPDATEd, or DELETEd, and the OUTPUT clause allows you to do this very easily - it allows access to the "virtual" tables called inserted and deleted (like in triggers):
DELETE FROM (table)
OUTPUT deleted.ID, deleted.Description
WHERE (condition)
If you're inserting values into a table which has an INT IDENTITY primary key field, with the OUTPUT clause, you can get the inserted new ID right away:
INSERT INTO MyTable(Field1, Field2)
OUTPUT inserted.ID
VALUES (Value1, Value2)
And if you're updating, it can be extremely useful to know what changed - in this case, inserted represents the new values (after the UPDATE), while deleted refers to the old values before the UPDATE:
UPDATE (table)
SET field1 = value1, field2 = value2
OUTPUT inserted.ID, deleted.field1, inserted.field1
WHERE (condition)
If a lot of info will be returned, the output of OUTPUT can also be redirected to a temporary table or a table variable (OUTPUT INTO @myInfoTable). Extremely useful - and very little known!
Marc
A: Here are some features I find useful but a lot of people don't seem to know about:
sp_tables
Returns a list of objects that can be queried in the current environment. This means any object that can appear in a FROM clause, except synonym objects.
Link
sp_stored_procedures
Returns a list of stored procedures in the current environment.
Link
A: Find records which date falls somewhere inside the current week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) = dateadd( week, datediff( week, 0, getdate() ), 0 )
Find records which date occurred last week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) = dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Returns the date for the beginning of the current week.
select dateadd( week, datediff( week, 0, getdate() ), 0 )
Returns the date for the beginning of last week.
select dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
A: Not so much a hidden feature, but setting up key mappings in Management Studio under Tools\Options\Keyboard: Alt+F1 is defaulted to sp_help "selected text", but I cannot live without adding Ctrl+F1 for sp_helptext "selected text".
A: Persisted-computed-columns
*Computed columns can help you shift the runtime computation cost to the data modification phase. The computed column is stored with the rest of the row and is transparently utilized when the expression on the computed columns and the query matches. You can also build indexes on the PCCs to speed up filtrations and range scans on the expression.
Link
A: There are times when there's no suitable column to sort by, or you just want the default sort order on a table and you want to enumerate each row. In order to do that you can put "(select 1)" in the "order by" clause and you'd get what you want. Neat, eh?
select row_number() over (order by (select 1)), * from dbo.Table as t
A: Simple encryption with EncryptByKey
A:
/* Find the fixed drive with largest free space, you can also copy files to estimate which disk is quickest */
EXEC master..xp_fixeddrives
/* Checking assumptions about a file before use or reference */
EXEC master..xp_fileexist 'C:\file_you_want_to_check'
More details here
A: The most surprising thing I learned this week involved using a CASE statement in the ORDER BY clause.
For example: declare @orderby varchar(10) set @orderby = 'NAME' select * from Users ORDER BY CASE @orderby WHEN 'NAME' THEN LastName WHEN 'EMAIL' THEN EmailAddress END A: SQLCMD If you've got scripts that you run over and over, but have to change slight details, running ssms in sqlcmd mode is awesome. The sqlcmd command line is pretty spiffy too. My favourite features are: * *You get to set variables. Proper variables that don't require jumping through sp_exec hoops *You can run multiple scripts one after the other *Those scripts can reference the variables in the "outer" script Rather than gushing any more, Simpletalk by Red Gate did an awesome wrap up of sqlcmd - The SQLCMD Workbench. Donabel Santos has some great SQLCMD examples too. A: Here's a simple but useful one: When you're editing table contents manually, you can insert NULL in a column by typing Control-0. A: sp_msforeachtable: Runs a command with '?' replaced with each table name. e.g. exec sp_msforeachtable "dbcc dbreindex('?')" You can issue up to 3 commands for each table exec sp_msforeachtable @Command1 = 'print ''reindexing table ?''', @Command2 = 'dbcc dbreindex(''?'')', @Command3 = 'select count (*) [?] from ?' Also, sp_MSforeachdb A: Connection String extras: MultipleActiveResultSets=true; This makes ADO.Net 2.0 and above read multiple, forward-only, read-only results sets on a single database connection, which can improve performance if you're doing a lot of reading. You can turn it on even if you're doing a mix of query types. Application Name=MyProgramName Now when you want to see a list of active connections by querying the sysprocesses table, your program's name will appear in the program_name column instead of ".Net SqlClient Data Provider" A: Here is a query I wrote to list All DB User Objects by Last Modified Date: select name, modify_date, case when type_desc = 'USER_TABLE' then 'Table' when type_desc = 'SQL_STORED_PROCEDURE' then 'Stored Procedure' when type_desc in ('SQL_INLINE_TABLE_VALUED_FUNCTION', 'SQL_SCALAR_FUNCTION', 'SQL_TABLE_VALUED_FUNCTION') then 'Function' end as type_desc from sys.objects where type in ('U', 'P', 'FN', 'IF', 'TF') and is_ms_shipped = 0 order by 2 desc A: sp_who2, just like sp_who, but with a lot more info for troubleshooting blocks A: I find this small script very handy to see the text of a procedure that has been deployed to a server: DECLARE @procedureName NVARCHAR( MAX ), @procedureText NVARCHAR( MAX ) SET @procedureName = 'myproc_Proc1' SET @procedureText = ( SELECT OBJECT_DEFINITION( object_id ) FROM sys.procedures WHERE Name = @procedureName ) PRINT @procedureText A: Trace Flags! "1204" was invaluable in deadlock debugging on SQL Server 2000 (2005 has better tools for this). A: Find Procedures By Keyword What procedures contain a certain piece of text (Table name, column name, variable name, TODO, etc)? SELECT OBJECT_NAME(ID) FROM SysComments WHERE Text LIKE '%SearchString%' AND OBJECTPROPERTY(id, 'IsProcedure') = 1 A: sp_executesql For executing a statement in a string. As good as Execute but can return parameters out A: Ok here's the few I've got left, shame I missed the start, but keep it up there's some top stuff here! Query Analyzer * *Alt+F1 executes sp_help on the selected text *Alt-D - focus to the database dropdown so you can use select db with cursor keys of letter. 
T-SQL
*if (object_id('nameofobject') IS NOT NULL) begin <do something> end - easiest existence check
*sp_lock - more in-depth locking information than sp_who2 (which is the first port of call)
*dbcc inputbuffer(spid) - list of top line of executing process (kinda useful but v. brief)
*dbcc outputbuffer(spid) - list of top line of output of executing process
General T-SQL tip
*With large volumes use sub queries liberally to process data in sets, e.g. to obtain a list of married people over fifty you could select a set of people who are married in a subquery and join with a set of the same people over 50 and output the joined results - please excuse the contrived example
A: Batch Separator
Most people don't know it, but "GO" is not a SQL command. It is the default batch separator used by the client tools. You can find more info about it in Books Online. You can change the batch separator by selecting Tools -> Options in Management Studio, and changing the Batch separator option in the Query Execution section. I'm not sure why you would want to do this other than as a prank, but it is a somewhat interesting piece of trivia.
A: Use GETDATE() with + or - to calculate a nearby date:
SELECT GETDATE() - 1 -- yesterday, 1 day ago, 24 hours ago
SELECT GETDATE() - .5 -- 12 hours ago
SELECT GETDATE() - .25 -- 6 hours ago
SELECT GETDATE() - (1 / 24.0) -- 1 hour ago (implicit decimal result after division)
A: TableDiff.exe
*The Table Difference tool allows you to discover and reconcile differences between a source and destination table or a view. The tablediff utility can report differences on schema and data. The most popular feature of tablediff is the fact that it can generate a script that you can run on the destination that will reconcile differences between the tables.
Link
A: A less known TSQL technique for returning rows in random order:
-- Return rows in a random order
SELECT SomeColumn
FROM SomeTable
ORDER BY CHECKSUM(NEWID())
A: In Management Studio, you can quickly get a comma-delimited list of columns for a table by:
*In the Object Explorer, expand the nodes under a given table (so you will see folders for Columns, Keys, Constraints, Triggers etc.)
*Point to the Columns folder and drag into a query.
This is handy when you don't want to use the heinous format returned by right-clicking on the table and choosing Script Table As..., then Insert To... This trick does work with the other folders in that it will give you a comma-delimited list of names contained within the folder.
A: In SQL Server Management Studio (SSMS) you can highlight an object name in the Object Explorer and press Ctrl-C to copy the name to the clipboard. There is no need to press F2 or right-click and rename the object to copy the name. You can also drag and drop an object from the Object Explorer into your query window.
A: Row Constructors
You can insert multiple rows of data with a single insert statement.
INSERT INTO Colors (id, Color)
VALUES (1, 'Red'),
       (2, 'Blue'),
       (3, 'Green'),
       (4, 'Yellow')
A: HashBytes() to return the MD2, MD4, MD5, SHA, or SHA1 hash of its input.
A: If you want to know the table structure, indexes and constraints:
sp_help 'TableName'
A: Figuring out the most popular queries
*With sys.dm_exec_query_stats, you can figure out many combinations of query analyses by a single query.
Link, with the command
select * from sys.dm_exec_query_stats order by execution_count desc
A: My favorite is master..xp_cmdshell. It allows you to run commands from a command prompt on the server and see the output.
It's extremely useful if you can't login to the server, but you need to get information or control it somehow. For example, to list the folders on the C: drive of the server where SQL Server is running:
*master..xp_cmdshell 'dir c:\'
You can start and stop services, too.
*master..xp_cmdshell 'sc query "My Service"'
*master..xp_cmdshell 'sc stop "My Service"'
*master..xp_cmdshell 'sc start "My Service"'
It's very powerful, but a security risk, also. Many people disable it because it could easily be used to do bad things on the server. But, if you have access to it, it can be extremely useful.
A: Triggers for Logon Events
*Logon triggers can help complement auditing and compliance. For example, logon events can be used for enforcing rules on connections (for example limiting connection through a specific username or limiting connections through a username to specific time periods) or simply for tracking and recording general connection activity. Just like in any trigger, ROLLBACK cancels the operation that is in execution. In the case of a logon event that means canceling the connection establishment. Logon events do not fire when the server is started in the minimal configuration mode or when a connection is established through a dedicated admin connection (DAC).
Link
A: Here is one I learned today because I needed to search for a transaction.
::fn_dblog
This allows you to query the transaction log for a database.
USE mydatabase;
SELECT * FROM ::fn_dblog(NULL, NULL)
http://killspid.blogspot.com/2006/07/using-fndblog.html
A: Since I'm a programmer, not a DBA, my favorite hidden feature is the SMO library. You can automate pretty much anything in SQL Server, from database/table/column creation and deletion to scripting to backup and restore. If you can do it in SQL Server Management Studio, you can automate it in SMO.
A: Based on what appears to be a vehement reaction to it by hardened database developers, the CLR integration would rank right up there. =)
A: Sql 2000+
DBCC DROPCLEANBUFFERS: Clears the buffers. Useful for testing the speed of queries when the buffer is clean.
A: Stored proc sp_MSdependencies tells you about object dependencies in a more useful fashion than sp_depends. For some production releases it's convenient to temporarily disable child table constraints, apply changes, then reenable the child table constraints. This is a great way of finding objects that depend on your parent table. This code disables child table constraints:
create table #deps (
    oType int,
    oObjName sysname,
    oOwner nvarchar(200),
    oSequence int
)

insert into #deps
exec sp_MSdependencies @tableName, null, 1315327

exec sp_MSforeachtable
    @command1 = 'ALTER TABLE ? NOCHECK CONSTRAINT ALL',
    @whereand = ' and o.name in (select oObjName from #deps where oType = 8)'
After the change is applied one can run this code to reenable the constraints:
exec sp_MSforeachtable
    @command1 = 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL',
    @whereand = ' and o.name in (select oObjName from #deps where oType = 8)'
The third parameter is called @flags and it controls what sort of dependencies will be listed. Go read the proc contents to see how you can change @flags for your purposes. The proc uses bit masks to decipher what you want returned.
A: The spatial results tab can be used to create art.
http://michaeljswart.com/wp-content/uploads/2010/02/venus.png
A: Useful when restoring a database for testing purposes or whatever.
Re-maps the login IDs correctly:
EXEC sp_change_users_login 'Auto_Fix', 'Mary', NULL, 'B3r12-36'
A: I know it's not exactly hidden, but not too many people know about the PIVOT command. I was able to change a stored procedure that used cursors and took 2 minutes to run into a speedy 6 second piece of code that was one tenth the number of lines!
A: EXCEPT and INTERSECT
Instead of writing elaborate joins and subqueries, these two keywords are a much more elegant shorthand and readable way of expressing your query's intent when comparing two query results. New as of SQL Server 2005, they strongly complement UNION, which has already existed in the TSQL language for years. The concepts of EXCEPT, INTERSECT, and UNION are fundamental in set theory, which serves as the basis and foundation of the relational modeling used by all modern RDBMS. Now, Venn diagram type results can be more intuitively and quite easily generated using TSQL.
A: Drop all connections to the database:
Use Master
Go

Declare @dbname sysname

Set @dbname = 'name of database you want to drop connections from'

Declare @spid int
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname)
While @spid Is Not Null
Begin
        Execute ('Kill ' + @spid)
        Select @spid = min(spid) from master.dbo.sysprocesses
        where dbid = db_id(@dbname) and spid > @spid
End
A: Table Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK)
Row Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK) Where Column = Value
A: I'm not sure if this is a hidden feature or not, but I stumbled upon this, and have found it to be useful on many occasions. You can concatenate the values of a field in a single select statement, rather than using a cursor and looping through the select statement. Example:
DECLARE @nvcConcatenated nvarchar(max)
SET @nvcConcatenated = ''

SELECT @nvcConcatenated = @nvcConcatenated + C.CompanyName + ', '
FROM tblCompany C
WHERE C.CompanyID IN (1,2,3)

SELECT @nvcConcatenated
Results:
Acme, Microsoft, Apple,
A: If you want the code of a stored procedure you can:
sp_helptext 'ProcedureName'
(not sure if it is a hidden feature, but I use it all the time)
A: A stored procedure trick is that you can call one from an INSERT statement. I found this very useful when I was working on an SQL Server database.
CREATE TABLE #toto (v1 int, v2 int, v3 char(4), status char(6))

INSERT #toto (v1, v2, v3, status)
EXEC dbo.sp_fulubulu(sp_param1)

SELECT * FROM #toto
DROP TABLE #toto
A: In SQL Server 2005/2008, to show row numbers in a SELECT query result:
SELECT ( ROW_NUMBER() OVER (ORDER BY OrderId) ) AS RowNumber,
    GrandTotal, CustomerId, PurchaseDate
FROM Orders
ORDER BY is a compulsory clause. The OVER() clause tells the SQL Engine to sort data on the specified column (in this case OrderId) and assign numbers as per the sort results.
A: Useful for parsing stored procedure arguments: xp_sscanf
Reads data from the string into the argument locations specified by each format argument. The following example uses xp_sscanf to extract two values from a source string based on their positions in the format of the source string.
DECLARE @filename varchar (20), @message varchar (20)
EXEC xp_sscanf 'sync -b -fproducts10.tmp -rrandom', 'sync -b -f%s -r%s',
    @filename OUTPUT, @message OUTPUT
SELECT @filename, @message
Here is the result set.
-------------------- --------------------
products10.tmp       random
A: A semi-hidden feature, the Table/Stored Procedure Filter feature can be really useful...
In the SQL Server Management Studio Object Explorer, right-click the Tables or Stored Procedures folder, select the Filter menu, then Filter Settings, and enter a partial name in the Name Contains row. Likewise, use Remove Filter to see all Tables/Stored Procedures again.
A: If you want to drop all the procedures in a DB -
SELECT IDENTITY ( int, 1, 1 ) id, [name] INTO #tmp
FROM sys.procedures
WHERE [type] = 'P' AND is_ms_shipped = 0

DECLARE @i INT
SELECT @i = COUNT( id ) FROM #tmp

WHILE @i > 0
BEGIN
    DECLARE @name VARCHAR( 100 )
    SELECT @name = name FROM #tmp WHERE id = @i
    EXEC ( 'DROP PROCEDURE ' + @name )
    SET @i = @i - 1
END

DROP TABLE #tmp
A: DEFAULT_SCHEMA setting in sys.database_principals
*SQL Server provides great flexibility with name resolution. However, name resolution comes at a cost and can get noticeably expensive in ad-hoc workloads that do not fully qualify object references. SQL Server 2005 allows a new setting of DEFAULT_SCHEMA for each database principal (also known as "user") which can eliminate this overhead without changing your TSQL code.
Link
A: Vardecimal Storage Format
*SQL Server 2005 adds a new storage format for numeric and decimal datatypes called vardecimal. Vardecimal is a variable-length representation for decimal types that can save unused bytes in every instance of the row. The biggest amount of savings come from cases where the decimal definition is large (like decimal(38,6)) but the values stored are small (like a value of 0.0), or there is a large number of repeated values, or data is sparsely populated.
Link
A: Scalable Shared Databases
*Through Scalable Shared Databases one can mount the same physical drives on commodity machines and allow multiple instances of SQL Server 2005 to work off of the same set of data files. The setup does not require duplicate storage for every instance of SQL Server and allows additional processing power through multiple SQL Server instances that have their own local resources like cpu, memory, tempdb and potentially other local databases.
Link
A: Get a list of column headers in vertical format:
Copy column names in grid results:
Tools - Options - Query Results - SQL Server - Results to Grid
tick "Include column headers when copying or saving the results"
(you will need to make a new connection at this point, then run your query)
Now when you copy the results from the grid, you get the column headers.
Also, if you then copy the results to Excel:
Copy col headers only
Paste Special (must not overlap copy area)
tick "Transpose"
OK
[you may wish to add a "," and autofill down at this point]
You have an instant list of columns in vertical format.
A: Execute a stored proc and capture the results in a (temp) table for further processing, e.g.:
INSERT INTO someTable
EXEC sp_someproc
Example: shows sp_helpdb output, but ordered by database size:
CREATE TABLE #dbs (
    name nvarchar(50),
    db_size nvarchar(50),
    owner nvarchar(50),
    dbid int,
    created datetime,
    status nvarchar(255),
    compatibility_level int
)

INSERT INTO #dbs EXEC sp_helpdb

SELECT * FROM #dbs
ORDER BY CONVERT(decimal, LTRIM(LEFT(db_size, LEN(db_size)-3))) DESC

DROP TABLE #dbs
A: Using the osql utility to run command line queries/scripts/batches
A: These are some SQL Management Studio hidden features I like. Something I love is that if you hold down the ALT key while highlighting information you can select columnar information and not just whole rows.
In SQL Management Studio you have predefined keyboard shortcuts:
Ctrl+1 runs sp_who
Ctrl+2 runs sp_lock
Alt+F1 runs sp_help
Ctrl+F1 runs sp_helptext
So if you highlight a table name in the editor and press Alt+F1 it will show you the structure of the table.
A: Did you ever accidentally click on the Execute button when you actually wanted to click on Debug / Parse / Use Database / Switch between query tabs / etc.? Here is a way to move that button someplace safe: Tools -> Customize, and drag the button where you want. You can also:
- add/remove other buttons which are commonly used/unused (applies even to commands within the MenuBar like File/Edit)
- change the icon image of a button (see the tiny pig under Change Button Image.. lol)
A: I would like to recommend a free add-in, SSMS Tools Pack, which has got a bunch of features, such as:
Code Snippets
You don't need to type SELECT * FROM on your own anymore. Just type SSF and hit enter (which can be customized to any other key. I prefer Tab). A few other useful snippets are:
SSC + tab - SELECT COUNT(*) FROM
SST + tab - SELECT TOP 10 * FROM
S + tab - SELECT
I + tab - INSERT
U + tab - UPDATE
W + tab - WHERE
OB + tab - ORDER BY
and the list goes on. You can check and customize the entire list using the SSMS Tools Pack menu.
Execution Log History
Have you ever realized that you could have saved an ad hoc query which you wrote a few days back, so that you don't need to reinvent the wheel again? SSMS Tools Pack saves all your execution history and you can search based on date or any text in the query.
Search Database Data
This feature helps you to search for the occurrence of a string in the entire database and displays the table name and column name with the total number of occurrences. You can use this feature by right-clicking the database in Object Explorer and selecting Search Database Data.
Format SQL
Sets all keywords to uppercase or lowercase letters. Right click on the query window and select Format Text. You can set the shortcut key in the SSMS Tools menu. But it lacks an alignment feature.
CRUD SP Generation
Right click a table, SSMS Tools -> Create CRUD to generate Insert, Update, Delete and Select SPs. The content of the SPs can be customized using the SSMS Tools menu.
Misc
A few other features are:
*Search results in the Grid mode.
*Generate Insert script from resultset, tables & database.
*Execution Plan Analyzer.
*Run one script in multiple databases.
For more information, you can visit their Features page.
A: For SQL Server 2005:
select * from sys.dm_os_performance_counters
select * from sys.dm_exec_requests
A: @Gatekiller - An easier way to get just the Date is surely:
CAST(CONVERT(varchar,getdate(),103) as datetime)
If you don't use DD/MM/YYYY in your locale, you'd need to use a different value from 103. Look up the CONVERT function in SQL Books Online for the locale codes.
A: Forced Parameterization
*Parameterization allows SQL Server to take advantage of query plan reuse and avoid compilation and optimization overheads on subsequent executions of similar queries. However, there are many applications out there that, for one reason or another, still suffer from ad-hoc query compilation overhead. For those cases with a high number of query compilations, and where lowering CPU utilization and response time is critical for your workload, forced parameterization can help.
Link
A: A few of my favorite things:
Added in SP2 - scripting options under Tools/Options/Scripting
New security using schemas - create two schemas: user_access, admin_access.
Put your user procs in one and your admin procs in the other, like this: user_access.showList, admin_access.deleteUser. Grant EXECUTE on the schema to your app user/role. No more GRANTing EXECUTE all the time.
Encryption using built-in encryption functions, views (to decrypt for presentation), and base tables with triggers (to encrypt on insert/update).
A: OK, here's my 2 cents: http://dbalink.wordpress.com/2008/10/24/querying-the-object-catalog-and-information-schema-views/
I am too lazy to re-write the whole thing here, so please check my post. That may be trivial to many, but there will be some who will find it a "hidden gem".
EDIT: After a while, I decided to add the code here so you don't have to jump to my blog to see the code.
SELECT T.NAME AS [TABLE NAME],
       C.NAME AS [COLUMN NAME],
       P.NAME AS [DATA TYPE],
       P.MAX_LENGTH AS [SIZE],
       CAST(P.PRECISION AS VARCHAR) + '/' + CAST(P.SCALE AS VARCHAR) AS [PRECISION/SCALE]
FROM ADVENTUREWORKS.SYS.OBJECTS AS T
JOIN ADVENTUREWORKS.SYS.COLUMNS AS C ON T.OBJECT_ID = C.OBJECT_ID
JOIN ADVENTUREWORKS.SYS.TYPES AS P ON C.SYSTEM_TYPE_ID = P.SYSTEM_TYPE_ID
WHERE T.TYPE_DESC = 'USER_TABLE';
Or, if you want to pull all the User Tables altogether, use a CURSOR like this:
DECLARE @tablename VARCHAR(60)

DECLARE cursor_tablenames CURSOR FOR
    SELECT name FROM AdventureWorks.sys.tables

OPEN cursor_tablenames

FETCH NEXT FROM cursor_tablenames INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT t.name AS [TABLE Name],
           c.name AS [COLUMN Name],
           p.name AS [DATA Type],
           p.max_length AS [SIZE],
           CAST(p.PRECISION AS VARCHAR) + '/' + CAST(p.scale AS VARCHAR) AS [PRECISION/Scale]
    FROM AdventureWorks.sys.objects AS t
    JOIN AdventureWorks.sys.columns AS c ON t.OBJECT_ID = c.OBJECT_ID
    JOIN AdventureWorks.sys.types AS p ON c.system_type_id = p.system_type_id
    WHERE t.name = @tablename
      AND t.type_desc = 'USER_TABLE'
    ORDER BY t.name ASC

    FETCH NEXT FROM cursor_tablenames INTO @tablename
END

CLOSE cursor_tablenames
DEALLOCATE cursor_tablenames
ADDITIONAL REFERENCE (my blog): http://dbalink.wordpress.com/2009/01/21/how-to-create-cursor-in-tsql/
A: Not undocumented: RowNumber, courtesy of Itzik Ben-Gan
http://www.sqlmag.com/article/articleid/97675/sql_server_blog_97675.html
SET XACT_ABORT ON - roll back everything on error for transactions
All the sp_'s are helpful; just browse Books Online.
Keyboard shortcuts I use all the time in Management Studio:
F6 - switch between results and query
Alt+X or F5 - run selected text in query; if nothing is selected, runs the entire window
Alt+T and Alt+D - results in text or grid respectively
A: I find sp_depends useful. It displays the objects which depend on a given object, e.g.
exec sp_depends 'fn_myFunction'
returns objects which depend on this function (note: if the objects have not originally been run into the database in the correct order, this will give incorrect results).
A: In SQL Server 2005 you no longer need to run the sp_blocker_pss80 stored procedure. Instead, you can do:
exec sp_configure 'show advanced options', 1;
reconfigure;
go
exec sp_configure 'blocked process threshold', 30;
reconfigure;
You can then start a SQL Trace and select the Blocked process report event class in the Errors and Warnings group. Details of that event here.
A: Some undocumented ones are here: Undocumented but handy SQL server Procs and DBCC commands A: use db go DECLARE @procName varchar(100) DECLARE @cursorProcNames CURSOR SET @cursorProcNames = CURSOR FOR select name from sys.procedures where modify_date > '2009-02-05 13:12:15.273' order by modify_date desc OPEN @cursorProcNames FETCH NEXT FROM @cursorProcNames INTO @procName WHILE @@FETCH_STATUS = 0 BEGIN -- see the text of the last stored procedures modified on -- the db , hint Ctrl + T would give you the procedures test set nocount off; exec sp_HelpText @procName --- or print them -- print @procName FETCH NEXT FROM @cursorProcNames INTO @procName END CLOSE @cursorProcNames select @@error A: use db go select o.name , (SELECT [definition] AS [text()] FROM sys.all_sql_modules WHERE sys.all_sql_modules.object_id=a.object_id FOR XML PATH(''), TYPE ) AS Statement_Text , a.object_id , o.modify_date FROM sys.all_sql_modules a LEFT JOIN sys.objects o ON a.object_id=o.object_id ORDER BY 4 desc --select * from sys.objects A: Returing results based on a pipe delimited string of IDs in a single statmeent (alternative to passing xml or first turning the delimited string to a table) Example: DECLARE @nvcIDs nvarchar(max) SET @nvcIDs = '|1|2|3|' SELECT C.* FROM tblCompany C WHERE @nvcIDs LIKE '%|' + CAST(C.CompanyID as nvarchar) + '|%' A: I use to add this stored procedure to the master db, Improvements: * *Trim on Host name, so the copy-paste works on VNC. *Added a LOCK option, for just watching what are the current locked processes. Usage: * *EXEC sp_who3 'ACTIVE' *EXEC sp_who3 'LOCK' *EXEC sp_who3 spid_No That's it. CREATE procedure sp_who3 @loginame sysname = NULL --or 'active' or 'lock' as declare @spidlow int, @spidhigh int, @spid int, @sid varbinary(85) select @spidlow = 0 ,@spidhigh = 32767 if @loginame is not NULL begin if upper(@loginame) = 'ACTIVE' begin select spid, ecid, status , loginame=rtrim(loginame) , hostname=rtrim(hostname) , blk=convert(char(5),blocked) , dbname = case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end ,cmd from master.dbo.sysprocesses where spid >= @spidlow and spid <= @spidhigh AND upper(cmd) <> 'AWAITING COMMAND' return (0) end if upper(@loginame) = 'LOCK' begin select spid , ecid, status , loginame=rtrim(loginame) , hostname=rtrim(hostname) , blk=convert(char(5),blocked) , dbname = case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end ,cmd from master.dbo.sysprocesses where spid >= 0 and spid <= 32767 AND upper(cmd) <> 'AWAITING COMMAND' AND convert(char(5),blocked) > 0 return (0) end end if (@loginame is not NULL AND upper(@loginame) <> 'ACTIVE' ) begin if (@loginame like '[0-9]%') -- is a spid. 
begin select @spid = convert(int, @loginame) select spid, ecid, status , loginame=rtrim(loginame) , hostname=rtrim(hostname) , blk=convert(char(5),blocked) , dbname = case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end ,cmd from master.dbo.sysprocesses where spid = @spid end else begin select @sid = suser_sid(@loginame) if (@sid is null) begin raiserror(15007,-1,-1,@loginame) return (1) end select spid, ecid, status , loginame=rtrim(loginame) , hostname=rtrim(hostname) , blk=convert(char(5),blocked) , dbname = case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end ,cmd from master.dbo.sysprocesses where sid = @sid end return (0) end /* loginame arg is null */ select spid, ecid, status , loginame=rtrim(loginame) , hostname=rtrim(hostname) , blk=convert(char(5),blocked) , dbname = case when dbid = 0 then null when dbid <> 0 then db_name(dbid) end ,cmd from master.dbo.sysprocesses where spid >= @spidlow and spid <= @spidhigh return (0) -- sp_who A: CTRL-E executes the currently selected text in Query Analyzer. A: Use select * from information_schema to list out all the databases,base tables,sps,views etc in sql server. A: Alternative to Kolten's sp_change_users_login: ALTER USER wacom_app WITH LOGIN = wacom_app A: BCP_IN and BCP_OUT perfect for BULK data import and export A: SQL Server Management Studio keyboard shortcuts... that will enable quicker and faster results in day-to-day works. http://sqlserver-qa.net/blogs/tools/archive/2007/04/25/management-studio-shortcut-keys.aspx A: master..spt_values (and specifically type='p') has been really useful for string splitting and doing 'binning' and time interpolation manipulation. A: You can create a comma separated list with a subquery and not have the last trailing comma. This has been said to be more efficient than the functions that were used before this became available. I think 2005 and later. SELECT Project.ProjectName, (SELECT SUBSTRING( (SELECT ', ' + Site.SiteName FROM Site WHERE Site.ProjectKey = Project.ProjectKey ORDER BY Project.ProjectName FOR XML PATH('')),2,200000)) AS CSV FROM Project You can also use FOR XML PATH with nested queries to select to XML which I have found useful. A: sp_lock: displays all the current locks. The returned data can be further queried as: spid - use it with sp_who to see who owns the lock. objid - use it with select object_name(objid) to see which database object is locked. A: I use SSMS to find text in files on the OS harddrive. It makes it super easy to write regex and sift through any directory to replace or find text. I always found this easier then using windows.
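One combination worth spelling out from the answers above: the OUTPUT clause can write into a table variable, which makes a handy ad-hoc audit trail. A minimal T-SQL sketch with hypothetical table and column names (Products, Price, CategoryID):
-- Capture old and new values of an UPDATE, without a trigger
DECLARE @audit TABLE (ID int, OldPrice money, NewPrice money);

UPDATE dbo.Products
SET Price = Price * 1.10
OUTPUT inserted.ID, deleted.Price, inserted.Price
INTO @audit (ID, OldPrice, NewPrice)
WHERE CategoryID = 42;

-- What changed, old vs. new
SELECT * FROM @audit;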
{ "language": "en", "url": "https://stackoverflow.com/questions/121243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "215" }
Q: Using customBuildCallbacks.xml in an Eclipse RCP headless build I am trying to add some custom build steps to my headless build process for an Eclipse RCP application. I understand that the recommended way is to provide a customBuildCallbacks.xml file in the plug-in directory, and to add a link to it in the build.properties file.
# This property sets the location of the custom build callbacks
customBuildCallbacks = customBuildCallbacks.xml
However, during the build process, this step is ignored. Is there some set-up step I might be missing?
A: Actually, I found out that this is the only thing required... if you are using Eclipse 3.3. This will not work using Eclipse 3.1.
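For reference, a minimal sketch of what such a callbacks file can look like. The target names here follow my understanding of the customBuildCallbacks.xml template that PDE ships (pre./post. hooks around steps like build.jars and gather.bin.parts); verify them against the template in your Eclipse version before relying on this:
<project name="Build specific targets and properties" default="noDefault">

    <target name="noDefault">
        <fail message="This file must be called with explicit targets"/>
    </target>

    <!-- Runs after the plug-in classes have been compiled and jarred -->
    <target name="post.build.jars">
        <echo message="custom step after build.jars"/>
    </target>

    <!-- Runs before the binary contents are gathered for packaging -->
    <target name="pre.gather.bin.parts">
        <echo message="custom step before gather.bin.parts"/>
    </target>
</project>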
{ "language": "en", "url": "https://stackoverflow.com/questions/121244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best StAX Implementation My quick search reveals the reference implementation (http://stax.codehaus.org), the Woodstox implementation (http://woodstox.codehaus.org), and Sun's SJSXP implementation (https://sjsxp.dev.java.net/). Please comment on the relative merits of these, and fill me in on any other implementations I should consider.
A: Interesting to note that: SJSXP performance is consistently faster than BEA, Oracle and the RI for all of the documents described here in this study. However, it lags behind Woodstox and XPP3 in some document sizes and, in the best cases, exhibits similar performance compared to these two parsers.
Article from Sun: Streaming APIs for XML parsers
A: Woodstox wins every time for me. It's not just performance, either - sjsxp is twitchy and overly pedantic, woodstox just gets on with it.
A: http://javolution.org/ has a good StAX implementation
A: Comment on Javolution: No, it's not a StAX implementation. It does implement an API similar to StAX, but because of Javolution's avoidance of Strings etc., it cannot be source-compatible. Either way, their implementation is not particularly good -- it's not faster, it is less fully-featured, doesn't detect XML problems (like duplicate attributes), and won't process entities or such. So I don't see much reason to use it, unless you use Javolution classes for everything.
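Whichever implementation you pick, it is worth knowing how it gets selected: XMLInputFactory.newInstance() uses the standard javax.xml.stream.XMLInputFactory lookup (system property first, then service discovery on the classpath). A small Java sketch - the Woodstox factory class name in the comment is my assumption for typical Woodstox versions, so verify it against the jar you actually ship:
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxWhoAmI {
    public static void main(String[] args) throws Exception {
        // Optionally pin the implementation instead of relying on
        // classpath discovery, e.g. for Woodstox:
        // System.setProperty("javax.xml.stream.XMLInputFactory",
        //         "com.ctc.wstx.stax.WstxInputFactory");

        XMLInputFactory factory = XMLInputFactory.newInstance();
        System.out.println("StAX implementation: " + factory.getClass().getName());

        // Pull-parse a trivial document to show the basic streaming loop
        XMLStreamReader reader = factory.createXMLStreamReader(
                new StringReader("<greeting><text>hello</text></greeting>"));
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.CHARACTERS) {
                System.out.println("text: " + reader.getText());
            }
        }
        reader.close();
    }
}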
{ "language": "en", "url": "https://stackoverflow.com/questions/121251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: "Pending checkins" window stuck; never finishes updating I'm having a strange problem in Visual Studio 2008 where my "Pending Checkins" window never updates. I open it up, and it says "Updating..." like usual, but I never see the "X remaining" message, and nothing happens. It just sits there doing nothing. Checked-out stuff still shows as checked out in Solution Explorer. SourceSafe 2005 still works like normal. Any ideas? A: Hooray! I found a solution. For anyone else that stumbles across this, here's the deal. I discovered today that the Pending Checkins window wasn't broken for all solutions, but only for a particular one. Also, though I didn't realize it was related, every time I opened the solution, I was getting: "Some of the properties associated with the solution could not be read." The solution I found was here. It turns out that I had two GlobalSection(SourceCodeControl) = preSolution sections in the solution (.sln) file. I deleted the second one (which had a long list of projects, but also some gibberish in it), and the message went away, and my Pending Checkins window now works perfectly. A: Have you tried the Visual SourceSafe 2005 Update patch?
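To make the accepted fix concrete, here is an illustrative sketch (not copied from a real solution file) of the duplication described above; the cure is to delete the second GlobalSection(SourceCodeControl) block in its entirety, from its header line through its EndGlobalSection:
Global
    GlobalSection(SourceCodeControl) = preSolution
        ... (first, correct section - keep this one)
    EndGlobalSection
    GlobalSection(SourceCodeControl) = preSolution
        ... (duplicate section with the long project list and
             gibberish - delete this whole block)
    EndGlobalSection
EndGlobal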
{ "language": "en", "url": "https://stackoverflow.com/questions/121253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Testing a JAX-RS Web Service? I'm currently looking for ways to create automated tests for a JAX-RS (Java API for RESTful Web Services) based web service. I basically need a way to send it certain inputs and verify that I get the expected responses. I'd prefer to do this via JUnit, but I'm not sure how that can be achieved. What approach do you use to test your web-services? Update: As entzik pointed out, decoupling the web service from the business logic allows me to unit test the business logic. However, I also want to test for the correct HTTP status codes etc.
A: You probably wrote some java code that implements your business logic and then you have generated the web services end point for it. An important thing to do is to independently test your business logic. Since it's pure java code, you can do that with regular JUnit tests. Now, since the web services part is just an end point, what you want to make sure is that the generated plumbing (stubs, etc.) is in sync with your java code. You can do that by writing JUnit tests that invoke the generated web service java clients. This will let you know when you change your java signatures without updating the web services stuff. If your web services plumbing is automatically generated by your build system at every build, then it may not be necessary to test the end points (assuming it's all properly generated). Depends on your level of paranoia.
A: Though this comes long after the question was posted, I thought it might be useful for others who have a similar question. Jersey comes with a test framework called the Jersey Test Framework which allows you to test your RESTful web service, including the response status codes. You can use it to run your tests on lightweight containers like Grizzly, HTTPServer and/or EmbeddedGlassFish. Also, the framework could be used to run your tests on a regular web container like GlassFish or Tomcat.
A: Jersey comes with a great RESTful client API that makes writing unit tests really easy. See the unit tests in the examples that ship with Jersey. We use this approach to test the REST support in Apache Camel; if you are interested, the test cases are here.
A: I use Apache's HttpClient (http://hc.apache.org/) to call RESTful services. The HttpClient library allows you to easily perform get, post or whatever other operation you need. If your service uses JAXB for XML binding, you can create a JAXBContext to serialize and deserialize inputs and outputs from the HTTP request.
A: Take a look at the Alchemy rest client generator. This can generate a proxy implementation for your JAX-RS webservice class using the Jersey client behind the scenes. Effectively you will call your webservice methods as simple java methods from your unit tests. It handles HTTP authentication as well. There is no code generation involved if you need to simply run tests, so it is convenient.
Disclaimer: I am the author of this library.
A: You can try out REST Assured, which makes it very simple to test REST services and validate the response in Java (using JUnit or TestNG).
A: Keep it simple. Have a look at https://github.com/valid4j/http-matchers which can be imported from Maven Central.
<dependency>
    <groupId>org.valid4j</groupId>
    <artifactId>http-matchers</artifactId>
    <version>1.0</version>
</dependency>
Usage example:
// Statically import the library entry point:
import static org.valid4j.matchers.http.HttpResponseMatchers.*;
// Invoke your web service using plain JAX-RS.
E.g.:
Client client = ClientBuilder.newClient();
Response response = client.target("http://example.org/hello").request("text/plain").get();

// Verify the response
assertThat(response, hasStatus(Status.OK));
assertThat(response, hasHeader("Content-Encoding", equalTo("gzip")));
assertThat(response, hasEntity(equalTo("content")));
// etc...
A: As James said, there is a built-in test framework for Jersey. A simple hello world example can be like this:
pom.xml for maven integration. When you run mvn test, the framework starts a Grizzly container. You can use Jetty or Tomcat by changing dependencies.
...
<dependencies>
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-servlet</artifactId>
        <version>2.16</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.test-framework</groupId>
        <artifactId>jersey-test-framework-core</artifactId>
        <version>2.16</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.test-framework.providers</groupId>
        <artifactId>jersey-test-framework-provider-grizzly2</artifactId>
        <version>2.16</version>
        <scope>test</scope>
    </dependency>
</dependencies>
...
ExampleApp.java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/")
public class ExampleApp extends Application {
}
HelloWorld.java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public final class HelloWorld {

    @GET
    @Path("/hello")
    @Produces(MediaType.TEXT_PLAIN)
    public String sayHelloWorld() {
        return "Hello World!";
    }
}
HelloWorldTest.java
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.test.JerseyTest;
import org.junit.Test;
import javax.ws.rs.core.Application;
import static org.junit.Assert.assertEquals;

public class HelloWorldTest extends JerseyTest {

    @Test
    public void testSayHello() {
        final String hello = target("hello").request().get(String.class);
        assertEquals("Hello World!", hello);
    }

    @Override
    protected Application configure() {
        return new ResourceConfig(HelloWorld.class);
    }
}
You can check this sample application.
A: "An important thing to do is to independently test your business logic" - I certainly would not assume that the person who wrote the JAX-RS code and is looking to unit test the interface is somehow, for some bizarre, inexplicable reason, oblivious to the notion that he or she can unit test other parts of the program, including business logic classes. It's hardly helpful to state the obvious, and the point was repeatedly made that the responses need to be tested, too. Both Jersey and RESTEasy have client applications, and in the case of RESTEasy you can use the same annotations (even factor out an annotated interface and use it on the client and server sides of your tests). REST not what this service can do for you; REST what you can do for this service.
A: As I understand it, the main purpose of the author of this question is to decouple the JAX-RS layer from the business one, and unit test only the first. There are two basic problems to resolve here:
*Run some web/application server in the test and put the JAX-RS components in it - and only them.
*Mock the business services inside the JAX-RS components/REST layer.
The first one is solved with Arquillian. The second one is perfectly described in Arquillian and Mock. Here is an example of the code; it may differ if you use another application server, but I hope you'll get the basic idea and advantages.
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import com.brandmaker.skinning.service.SomeBean;

/**
 * Created by alexandr on 31.07.15.
 */
@Path("/entities")
public class RestBean {

    @Inject
    SomeBean bean;

    @GET
    public String getEntiry() {
        return bean.methodToBeMoked();
    }
}

import java.util.Set;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
import com.google.common.collect.Sets;

/** */
@ApplicationPath("res")
public class JAXRSConfiguration extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        return Sets.newHashSet(RestBean.class);
    }
}

public class SomeBean {

    public String methodToBeMoked() {
        return "Original";
    }
}

import javax.enterprise.inject.Specializes;
import com.brandmaker.skinning.service.SomeBean;

/** */
@Specializes
public class SomeBeanMock extends SomeBean {

    @Override
    public String methodToBeMoked() {
        return "Mocked";
    }
}

@RunWith(Arquillian.class)
public class RestBeanTest {

    @Deployment
    public static WebArchive createDeployment() {
        WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war")
                .addClasses(JAXRSConfiguration.class, RestBean.class, SomeBean.class, SomeBeanMock.class)
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        System.out.println(war.toString(true));
        return war;
    }

    @Test
    public void should_create_greeting() {
        Client client = ClientBuilder.newClient();
        WebTarget target = client.target("http://127.0.0.1:8181/test/res/entities");
        // Build the request, i.e. a GET request to the RESTful webservice
        // defined by the URI in the WebTarget instance.
        Invocation invocation = target.request().buildGet();
        // Invoke the request against the RESTful API and capture the response.
        Response response = invocation.invoke();
        // The webservice returns a plain string entity, which we read and
        // compare against the value supplied by the mocked bean.
        Assert.assertEquals("Mocked", response.readEntity(String.class));
    }
}
A couple of notes:
*JAX-RS configuration without web.xml is used here.
*The JAX-RS client is used here (no RESTEasy/Jersey client; they expose a more convenient API).
*When the test starts, Arquillian's runner starts working. Here you can find how to configure tests for Arquillian with the needed application server.
*Depending on the chosen application server, the URL in the test will differ a little bit. Another port may be used. 8181 is used by embedded GlassFish in my example.
Hope it'll help.
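To address the asker's update about verifying HTTP status codes directly: a minimal sketch using the standard JAX-RS 2.x client API from JUnit. The base URL, paths and expected codes are hypothetical placeholders for whatever service and container you deploy to:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StatusCodeTest {

    // Hypothetical deployment URL of the service under test
    private static final String BASE = "http://localhost:8080/myapp/res";

    @Test
    public void getExistingEntityReturns200() {
        Client client = ClientBuilder.newClient();
        Response response = client.target(BASE).path("entities/1")
                .request(MediaType.APPLICATION_JSON).get();
        assertEquals(200, response.getStatus());
    }

    @Test
    public void getMissingEntityReturns404() {
        Client client = ClientBuilder.newClient();
        Response response = client.target(BASE).path("entities/does-not-exist")
                .request(MediaType.APPLICATION_JSON).get();
        assertEquals(404, response.getStatus());
    }

    @Test
    public void postNewEntityReturns201() {
        Client client = ClientBuilder.newClient();
        Response response = client.target(BASE).path("entities")
                .request()
                .post(Entity.entity("{\"name\":\"x\"}", MediaType.APPLICATION_JSON));
        assertEquals(201, response.getStatus());
    }
}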
{ "language": "en", "url": "https://stackoverflow.com/questions/121266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Is it possible to bind complex type properties to a datagrid? How would I go about binding the following object, Car, to a gridview?

public class Car
{
    long Id { get; set; }
    Manufacturer Maker { get; set; }
}

public class Manufacturer
{
    long Id { get; set; }
    String Name { get; set; }
}

The primitive types get bound easily, but I have found no way of displaying anything for Maker. I would like it to display Manufacturer.Name. Is it even possible? What would be a way to do it? Would I have to store ManufacturerId in Car as well and then set up a lookupEditRepository with a list of Manufacturers?

A: The way that I approached this in a recent application was to create my own DataGridViewColumn and DataGridViewCell classes inheriting off of one of the existing ones such as DataGridViewTextBoxColumn and DataGridViewTextBoxCell. Depending on the type of cell you want, you could use others such as Button, Checkbox, ComboBox, etc. Just take a look at the types available in System.Windows.Forms.

The cells deal with their values as objects, so you will be able to pass your Car class into the cell's value. Overriding SetValue and GetValue will allow you to have any additional logic you need to handle the value. For example:

public class CarCell : System.Windows.Forms.DataGridViewTextBoxCell
{
    protected override object GetValue(int rowIndex)
    {
        Car car = base.GetValue(rowIndex) as Car;
        if (car != null)
        {
            return car.Maker.Name;
        }
        else
        {
            return "";
        }
    }
}

On the column class, the main thing you need to do is set the CellTemplate to your custom cell class.

public class CarColumn : System.Windows.Forms.DataGridViewTextBoxColumn
{
    public CarColumn() : base()
    {
        CarCell c = new CarCell();
        base.CellTemplate = c;
    }
}

By using these custom Column/Cells on the DataGridView you can add a lot of extra functionality to your DataGridView. I used them to alter the displayed formatting by overriding GetFormattedValue to apply custom formatting to the string values. I also did an override on Paint so that I could do custom cell highlighting depending on value conditions, altering the cell's Style.BackColor to what I wanted based on the value.

A: Override the ToString method:

public class Manufacturer
{
    long Id { get; set; }
    String Name { get; set; }

    public override string ToString()
    {
        return Name;
    }
}

A: Alright guys... This question was posted waaay back, but I just found a fairly nice and simple way to do this by using reflection in the CellFormatting event to go retrieve the nested properties. Goes like this:

private void Grid_CellFormatting(object sender, DataGridViewCellFormattingEventArgs e)
{
    DataGridView grid = (DataGridView)sender;
    DataGridViewRow row = grid.Rows[e.RowIndex];
    DataGridViewColumn col = grid.Columns[e.ColumnIndex];
    if (row.DataBoundItem != null && col.DataPropertyName.Contains("."))
    {
        string[] props = col.DataPropertyName.Split('.');
        PropertyInfo propInfo = row.DataBoundItem.GetType().GetProperty(props[0]);
        object val = propInfo.GetValue(row.DataBoundItem, null);
        for (int i = 1; i < props.Length; i++)
        {
            propInfo = val.GetType().GetProperty(props[i]);
            val = propInfo.GetValue(val, null);
        }
        e.Value = val;
    }
}

And that's it! You can now use the familiar syntax "ParentProp.ChildProp.GrandChildProp" in the DataPropertyName for your column.

A: Just use a List and set the DataMember to the string "Maker.Name", and if you want the DataKeyField to use the car's ID, just set that to "ID".

dataGrid.DataSource = carList;
dataGrid.DataMember = "Maker.Name";
dataGrid.DataKeyField = "ID";
dataGrid.DataBind();

I know that works in the repeater control, at least...

A: If you want to expose specific, nested properties as binding targets, then Ben Hoffstein's answer (http://blogs.msdn.com/msdnts/archive/2007/01/19/how-to-bind-a-datagridview-column-to-a-second-level-property-of-a-data-source.aspx) is pretty good. The referenced article is a bit obtuse, but it works.

If you just want to bind a column to a complex property (e.g. Manufacturer) and override the rendering logic, then either do what ManiacXZ recommended, or just subclass BoundField and provide a custom implementation of FormatDataValue(). This is similar to overriding ToString(); you get an object reference, and you return the string you want displayed in your grid. Something like this:

public class ManufacturerField : BoundField
{
    protected override string FormatDataValue(object dataValue, bool encode)
    {
        var mfr = dataValue as Manufacturer;
        if (mfr != null)
        {
            return mfr.Name + " (ID " + mfr.Id + ")";
        }
        else
        {
            return base.FormatDataValue(dataValue, encode);
        }
    }
}

Just add a ManufacturerField to your grid, specifying "Manufacturer" as the data field, and you're good to go.

A: Here's another option I got working:

<asp:TemplateColumn HeaderText="Maker">
    <ItemTemplate>
        <%#Eval("Maker.Name")%>
    </ItemTemplate>
</asp:TemplateColumn>

Might be ASP.NET 4.0 specific, but it works like a charm!

A: Yes, you can create a TypeDescriptionProvider to accomplish nested binding. Here is a detailed example from an MSDN blog: http://blogs.msdn.com/msdnts/archive/2007/01/19/how-to-bind-a-datagridview-column-to-a-second-level-property-of-a-data-source.aspx

A: I would assume you could do the following:

public class Car
{
    public long Id { get; set; }
    public Manufacturer Maker { private get; set; }

    public string ManufacturerName
    {
        get { return Maker != null ? Maker.Name : ""; }
    }
}

public class Manufacturer
{
    long Id { get; set; }
    String Name { get; set; }
}
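To tie the reflection-based answer above together, here is a minimal, hedged wiring sketch. The "Maker.Name" path, the Car list, and the Grid_CellFormatting handler come from this thread; the helper method itself is illustrative, not any poster's exact code:

using System.Collections.Generic;
using System.Windows.Forms;

// Inside the same Form class that defines Grid_CellFormatting:
// adds a column bound to the nested "Maker.Name" path and attaches
// the reflection-based handler shown earlier.
void BindCars(DataGridView grid, List<Car> carList)
{
    grid.AutoGenerateColumns = false;
    grid.Columns.Add(new DataGridViewTextBoxColumn
    {
        HeaderText = "Maker",
        DataPropertyName = "Maker.Name", // resolved by Grid_CellFormatting
        ReadOnly = true
    });
    grid.CellFormatting += Grid_CellFormatting; // handler from the answer above
    grid.DataSource = carList;
}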
{ "language": "en", "url": "https://stackoverflow.com/questions/121274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I detect what browser is used to access my site? How do I detect what browser (IE, Firefox, Opera) the user is accessing my site with? Examples in Javascript, PHP, ASP, Python, JSP, and any others you can think of would be helpful. Is there a language-agnostic way to get this information?

A: Comprehensive list of User Agent Strings from various Browsers. The list is really large :)

A: You would take a look at the User-Agent that they are sending. Note that you can send whatever agent you want, so that's not 100% foolproof, but most people don't change it unless there's a specific reason to.

A: A quick and dirty Java servlet example:

private String getBrowserName(HttpServletRequest request)
{
    // get the user agent from the request header
    String userAgent = request.getHeader(Constants.BROWSER_USER_AGENT);
    String browserName = "";
    // check for Internet Explorer
    if (userAgent.indexOf("MSIE") > -1) {
        browserName = Constants.BROWSER_NAME_IE;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_FIREFOX) > -1) {
        browserName = Constants.BROWSER_NAME_MOZILLA_FIREFOX;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_OPERA) > -1) {
        browserName = Constants.BROWSER_NAME_OPERA;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_SAFARI) > -1) {
        browserName = Constants.BROWSER_NAME_SAFARI;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_NETSCAPE) > -1) {
        browserName = Constants.BROWSER_NAME_NETSCAPE;
    } else {
        browserName = "Undefined Browser";
    }
    // return the browser name
    return browserName;
}

A: If it's for handling the request, look at the User-Agent header on the incoming request.

UPDATE: If it's for reporting, configure your web server to log the User-Agent in the access logs, then run a log analysis tool, e.g., AWStats.

UPDATE 2: FYI, it's usually (not always, usually) a bad idea to change the way you're handling a request based on the User-Agent.

A: You can use the HttpBrowserCapabilities class in ASP.NET. Here is a sample from this link:

private void Button1_Click(object sender, System.EventArgs e)
{
    HttpBrowserCapabilities bc;
    string s;
    bc = Request.Browser;
    s = "Browser Capabilities" + "\n";
    s += "Type = " + bc.Type + "\n";
    s += "Name = " + bc.Browser + "\n";
    s += "Version = " + bc.Version + "\n";
    s += "Major Version = " + bc.MajorVersion + "\n";
    s += "Minor Version = " + bc.MinorVersion + "\n";
    s += "Platform = " + bc.Platform + "\n";
    s += "Is Beta = " + bc.Beta + "\n";
    s += "Is Crawler = " + bc.Crawler + "\n";
    s += "Is AOL = " + bc.AOL + "\n";
    s += "Is Win16 = " + bc.Win16 + "\n";
    s += "Is Win32 = " + bc.Win32 + "\n";
    s += "Supports Frames = " + bc.Frames + "\n";
    s += "Supports Tables = " + bc.Tables + "\n";
    s += "Supports Cookies = " + bc.Cookies + "\n";
    s += "Supports VB Script = " + bc.VBScript + "\n";
    s += "Supports JavaScript = " + bc.JavaScript + "\n";
    s += "Supports Java Applets = " + bc.JavaApplets + "\n";
    s += "Supports ActiveX Controls = " + bc.ActiveXControls + "\n";
    TextBox1.Text = s;
}

A: PHP's predefined superglobal array $_SERVER contains a key "HTTP_USER_AGENT", which contains the value of the User-Agent header as sent in the HTTP request. Remember that this is user-provided data and is not to be trusted. Few users alter their user-agent string, but it does happen from time to time.

A: On the client side, you can do this in Javascript using the navigator.userAgent object. Here's a crude example:

if (navigator.userAgent.indexOf("MSIE") > -1) {
    alert("Internet Explorer!");
} else if (navigator.userAgent.indexOf("Firefox") > -1) {
    alert("Firefox!");
}

A more detailed and comprehensive example can be found here: http://www.quirksmode.org/js/detect.html

Note that if you're doing the browser detection for the sake of Javascript compatibility, it's usually better to simply use object detection or a try/catch block, lest some version you didn't think of slip through the cracks of your script. For example, instead of doing this...

if (navigator.userAgent.indexOf("MSIE 6") > -1) {
    objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
} else {
    objXMLHttp = new XMLHttpRequest();
}

...this is better:

if (window.XMLHttpRequest) // Works in Firefox, Opera, and Safari, maybe latest IE?
{
    objXMLHttp = new XMLHttpRequest();
}
else if (window.ActiveXObject) // If the above fails, try the MSIE 6 method
{
    objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
}

A: It may be dependent on your setup. With Apache on Linux, it's written in the access log: /var/log/apache2/access_log

A: You can do this by:
- looking at the web server log, OR
- looking at the User-Agent field in the HTTP request (which is a plain text stream) before processing it.

A: First of all, I'd like to note that it is best to avoid patching against specific web browsers unless as a last resort - try to achieve cross-browser compatibility instead by using standards-compliant HTML/CSS/JS (yes, javascript does have a common denominator subset which works across all major browsers). With that said, the User-Agent tag from the HTTP request header contains the client's (claimed) browser. This has become a real mess due to people working against specific browsers rather than the specification, so determining the real browser can be a little tricky. Match against this:

contains    browser
Firefox     Firefox
MSIE        Internet Explorer
Opera       Opera (one of the few browsers which don't pretend to be Mozilla :) )

Most of the agents containing the words "bot" or "crawler" are usually bots (so you can omit them from logs / etc).

A: Check out browscap.ini. The linked site has files for multiple scripting languages. browscap not only identifies the user agent but also has info about the browser's CSS support, JS support, OS, whether it's a mobile browser, etc. Cruise over to this page to see an example of what info browscap.ini can tell you about your current browser.
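The question asks for a Python example and none appears above, so here is a minimal, hedged sketch using plain WSGI (no framework assumed). The substring checks mirror the other answers and, like them, trust a client-supplied header that can be spoofed:

from wsgiref.simple_server import make_server

def detect_browser(user_agent):
    # Crude substring matching, same approach as the other answers.
    # Opera is checked first because older Opera versions also
    # included "MSIE" in their user-agent string.
    if "Opera" in user_agent:
        return "Opera"
    if "Firefox" in user_agent:
        return "Firefox"
    if "MSIE" in user_agent:
        return "Internet Explorer"
    return "Unknown"

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")  # the User-Agent header
    body = "You appear to be using: %s" % detect_browser(ua)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()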
{ "language": "en", "url": "https://stackoverflow.com/questions/121280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do my keystrokes turn into crazy characters after I dump a bunch of binary data into my terminal? If I do something like:

$ cat /bin/ls

into my terminal, I understand why I see a bunch of binary data, representing the ls executable. But afterwards, when I get my prompt back, my own keystrokes look crazy. I type "a" and I get a weird diagonal line. I type "b" and I get a degree symbol. Why does this happen?

A: Because somewhere in your binary data were some control sequences that your terminal interpreted as requests to, for example, change the character set used to draw. You can restore everything to normal like so:

reset

A: The terminal will try to interpret the binary data thrown at it as control codes, and garble itself up in the process, so you need to sanitize your tty. Run:

stty sane

And things should be back to normal. Even if the command looks garbled as you type it, the actual characters are being stored correctly, and when you press return the command will be invoked. You can find more information about the stty command here.

A: You're getting some control characters piped into the shell that are telling the shell to alter its behavior and print things differently.

A: VT100 is pretty much the standard command set used for terminal windows, but there are a lot of extensions. Some control the character set used, keyboard mapping, etc. When you send a lot of binary characters to such a terminal, a lot of settings change. Some terminals have options to 'clear' the settings back to default, but in general they simply weren't made for binary data. VT100 and its successors are what allow Linux to print in color text (such as colored ls listings) in a simple terminal program. -Adam

A: Just copy-paste:

echo -e '\017'

to your bash and characters will return to normal. If you don't run bash, try the following keystrokes:

<Ctrl-V><Ctrl-O><Enter>

and hopefully your terminal's status will return to normal when it complains that it can't find either a <Ctrl-V><Ctrl-O> or a <Ctrl-O> command to run. <Ctrl-N>, or character 14, when sent to your terminal, tells it to switch to a special graphics mode where letters and numbers are replaced with symbols. <Ctrl-O>, or character 15, restores things back to normal.

A: If you really must dump binary data to your terminal, you'd have much better luck if you pipe it to a pager like less, which will display it in a slightly more readable format. (You may also be interested in strings and od; both can be useful if you're fiddling around with binary files.)
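A small, hedged demonstration of the safer inspection commands mentioned above; /bin/ls is just an example target, and the recovery commands assume a reasonably standard Unix terminal:

# Inspect a binary without spraying raw control bytes at the terminal:
od -c /bin/ls | head       # byte-by-byte dump with printable escapes
strings /bin/ls | head     # printable text embedded in the binary
less /bin/ls               # a pager renders control bytes safely

# If the terminal is already garbled:
reset                      # reinitialize the terminal
stty sane                  # restore sane line settings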
{ "language": "en", "url": "https://stackoverflow.com/questions/121282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Separating Web Applications into multiple projects I have a web application that is becoming rather large. I want to separate it into smaller, more logical projects, but the smaller projects are still going to need to access some of the classes in the app_code of the main project. What are some good methods to accomplish this?

A: Add a class library project with the common classes and add a reference to this project in each of the new projects (see the sketch after this thread). So you'll have the following solution layout:

/webapp1
    /default.aspx
    /....
/webapp2
    /default.aspx
    /....
/lib
    /Utils.cs

A: Extract your common code from app_code into a class library which is referenced by each of your other projects.

A: If you are only looking for a way to organize your files, then you can create a folder for each sub-project. This way you'll be able to get to the content of app_code and maintain a level of separation with very little rework.

If you are looking for the best way to do this, then refactoring your code to have a common class library, based on what is reusable in the app_code folder, plus multiple separate projects that reference that library is the way to go. You may run into problems refactoring the code this way, including not being able to reference profile or user information directly. You are now going from the Web Site to the Web Application paradigm. http://www.codersbarn.com/post/2008/06/ASPNET-Web-Site-versus-Web-Application-Project.aspx

A: I like the 3-tier approach of creating a data access project, a separate business project, and then using your existing site code as the presentation layer, all within the same solution file. You do this, like posters before me said, by creating class library projects within your existing solution and moving your App_Code classes to the appropriate layer, then referencing the data access project in the business project, and the business project in the web project. It will take a bit of time to move it all around and get the bits and pieces reconnected once you move, so make sure you set aside plenty of time for testing and refactoring.

A: In CVS & Subversion, you can set up what I think are referred to as "aliases" (or maybe it's "modules"). Anyway, you can use them to check out part(s) of your source control tree. For example, you could create an alias called "views" that checks out all your HTML, javascript, and css, but none of your php/java/.NET.

A: Here's an example of what I'm doing within my projects. The basic idea is to keep all common files separate from htdocs so they are not accessible by the client directly, and sharable. Directory structure:

public_html
    The only htdocs dir for all projects. Stores only files which should be directly accessible by the client, i.e. js, css, images, index script
core
    Core classes/functions required by the application and other scripts. Framework, in other words.
application
    Stores files used to generate the separate pages requested by the public_html/index script + classes common to all projects
config
    Configuration for all projects, separated by project
templates
    Template files separated from all other files

The public_html/index script is then used for all projects on all domains/subdomains and, based on the requested URL, loads the proper pages...

A: A somewhat simple approach is to group the code in your app_code folder into its own assembly. The only issue you could possibly run into is if the code in your app_code folder is not decoupled from the elements on your pages (this is always a bad idea, since it indicates poor cohesion in your classes). Once you have your code in a separate assembly, you can deploy it to any number of servers when you are upgrading your apps.
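As a hedged illustration of the class-library extraction the answers above describe - the namespace, class, and assembly names here are invented for the example, not taken from any real project:

// In the shared class library project (builds to e.g. Common.dll),
// holding a class that previously lived under App_Code:
namespace Common
{
    public static class Utils
    {
        public static string FormatCustomerId(long id)
        {
            return "CUST-" + id.ToString("D8");
        }
    }
}

// Any web project that adds a reference to the Common project
// can then call it directly:
//     string label = Common.Utils.FormatCustomerId(42);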
{ "language": "en", "url": "https://stackoverflow.com/questions/121283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Test accounts and products in a production system Is it worth designing a system to expect test accounts and products to be present and active in production, or should there be no contamination of production databases with test entities, even if your shipping crew knows not to ship any box addressed to "Test Customer"?

I've implemented messaging protocols that have a test="True" attribute in the spec, and wondered if a modern schema should include metadata for tagging orders, accounts, transactions, etc. as test entities that get processed just like any other entity - but just short of the point where money gets spent. I.e., it fakes charging an imaginary credit card and fakes the shipment of a package.

This isn't expected to be a substitute for fully separated testing, development, and QA databases, but even with those, we've always had the well-known Test SKU and Test Customer in the production system. Harmless?

A: Having testing accounts in production is something I usually frown upon because it opens up a potential security hole. One should strive to duplicate as much of the production environment in testing as possible, but there are obviously cases where that isn't possible. Expensive production-only hardware is a prime example. I would say as a general practice it should be discouraged, but as with all things, if you can provide a reason which makes sense to you then you might overlook a hard and fast rule.

A: I imagine the Best Practice Police would state the mantra "never ever test in prod" and maybe even throw in "developers should not have access to prod". However, I work on a mainframe-based system where there are huge differences between production and test/qa/qc; the larger the system, the more likely such a situation is. Additionally, the more groups that have a stake in the application, the more likely this is. I need more than two hands to count how many times we could only duplicate a problem in the production environment. The option then becomes creating test tables/users/data or using live customer data. At times we do also create test records in production tables, as some users/clients like having something they can search/retrieve that is always there. So my advice is that it is OK to put test accounts/products into production if it will help to troubleshoot after go-live.

A: If your database is created from scripts in an automated fashion, then this becomes a non-question. In my environment we use CruiseControl for continuous builds. The SQL scripts for generating the database are checked into CVS with everything else, and the database is rebuilt from those scripts on a daily basis. Our test data is a second set of SQL scripts, which are run for the test database and are not run for the production database. Given our environment, test data never touches the production database. This solution really works great for us.

A: I wouldn't put any test data in a production system, nor would I want to have access to this system as a developer. I'm working in an industry with very sensitive medical and financial information, and having such data in production would make it impossible to distinguish production data from data out of the testing system. IMHO the best practice is to completely separate these two worlds and invest in setting up a procedure to prepare a comprehensive testing environment.

A: In our ERP systems (internally accessible only) we have test data so that when we move changes from test to production environments we can test the whole process. I view that data as a necessary evil, since subtle configuration differences between systems can cause catastrophic results, so once a change is in production we test it fully before "releasing" it to the users. As I said though, these are internal apps only, so the security risks are lessened somewhat - that's a very valid concern.

A: Never ever test in prod, even though that is where all the revenue is generated/stats are collected/magic happens...? Always have a production test plan. There are going to be problems that happen on prod or, if you are unlucky, only happen on prod. If you don't have anything in place, the first time you need to test on prod (which is usually a high-stress situation) you'll be up the creek without a paddle. It's not harmless to have test data on prod; you do need to be careful.
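A hedged sketch of the "test entity" guard the question describes - processing test orders like any other entity but stopping just short of the point where money gets spent. The IsTest flag, class names, and stubbed calls are all invented for illustration:

using System;

public class Order
{
    public long Id { get; set; }
    public bool IsTest { get; set; } // the hypothetical "test entity" tag
}

public class OrderProcessor
{
    public void Process(Order order)
    {
        // Validation, inventory, and workflow logic would run here
        // for test orders too, exercising the real production path.

        if (order.IsTest)
        {
            // Stop just short of real-world side effects.
            Console.WriteLine(
                string.Format("Test order {0}: skipping charge and shipment", order.Id));
            return;
        }

        ChargeCreditCard(order); // real money from here on
        ShipPackage(order);
    }

    private void ChargeCreditCard(Order order) { /* payment gateway call */ }
    private void ShipPackage(Order order) { /* shipping system call */ }
}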
{ "language": "en", "url": "https://stackoverflow.com/questions/121306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Read in an XML String with Namespaces for Use in an XSL Transformation In an ASP.NET 2.0 website, I have a string representing some well-formed XML. I am currently creating an XmlDocument object with it and running an XSL transformation for display in a Web form. Everything was operating fine until the XML input started to contain namespaces. How can I read in this string and allow namespaces? I've included the current code below. The string source comes from an HTML-encoded node in a WordPress RSS feed.

XPathNavigator myNav = myPost.CreateNavigator();
XmlNamespaceManager myManager = new XmlNamespaceManager(myNav.NameTable);
myManager.AddNamespace("content", "http://purl.org/rss/1.0/modules/content/");
string myPost = HttpUtility.HtmlDecode("<post>" + myNav.SelectSingleNode("//item[1]/content:encoded", myManager).InnerXml + "</post>");
XmlDocument myDocument = new XmlDocument();
myDocument.LoadXml(myPost.ToString());

The error is on the last line: "System.Xml.XmlException: 'w' is an undeclared namespace. Line 12, position 201. at System.Xml.XmlTextReaderImpl.Throw(Exception e) ..."

A: Your code looks right. The problem is probably in the XML document you're trying to load. It must have elements with a "w" prefix without that prefix being declared in the document. For example, you should have:

<test xmlns:w="http://...">
    <w:elementInWNamespace />
</test>

(Your document is probably missing the xmlns:w="http://...")

A: Gut feel - one of the namespaces declared on //content:encoded is being dropped (probably because you're using the literal .InnerXml property). What does the 'w' namespace evaluate to in the myNav DOM? You'll want to add xmlns:w= to your post node. There will probably be others too.
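A hedged sketch of the wrapper-element fix both answers point at: redeclare, on the synthetic <post> root, every prefix the fragment uses. The 'w' URI below is a placeholder - substitute whatever URI the feed actually declared - and decodedFragment stands for the HtmlDecode(...) result from the question:

// Assumption: the fragment uses prefixes (like w:) that were declared
// higher up in the original feed document. Redeclare them on the wrapper.
string wrapped =
    "<post xmlns:w=\"http://example.com/placeholder-w-namespace\"" +
    " xmlns:content=\"http://purl.org/rss/1.0/modules/content/\">" +
    decodedFragment +
    "</post>";

XmlDocument doc = new XmlDocument();
doc.LoadXml(wrapped); // no longer an undeclared-namespace error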
{ "language": "en", "url": "https://stackoverflow.com/questions/121309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unix Script reading a specific line and position I need to have a script read the files coming in and check information for verification. On the first line of the files to be read is a date, but in numeric form, e.g.: 20080923. But before the date is other information; I need to read it from position 27. Meaning line 1, position 27. I need to get that number and see if it's greater than another number. I use the grep command to check other information, but there I use special characters to search; in this case the information before the date is always different, so I can't use a character to search on. It has to be done by line 1, position 27.

A: sed 1q $file | cut -c27-34

The sed command reads the first line of the file and the cut command chops out characters 27-34 of that one line, which is where you said the date is.

Added later: For the more general case - where you need to read line 24, for example, instead of the first line - you need a slightly more complex sed command:

sed -n -e 24p -e 24q $file | cut -c27-34
sed -n '24p;24q' $file | cut -c27-34

The -n option means 'do not print lines by default'; the 24p means print line 24; the 24q means quit after processing line 24. You could leave that out, in which case sed would continue processing the input, effectively ignoring it. Finally, especially if you are going to validate the date, you might want to use Perl for the whole job (or Python, or Ruby, or Tcl, or any scripting language of your choice).

A: You can extract the characters starting at position 27 of line 1 like so:

datestring=`head -1 $file | cut -c27-`

You'd perform your next processing step on $datestring.
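Putting the pieces together, a hedged sketch of the comparison step the question asks for; the file name and threshold are placeholders, and the 8-digit YYYYMMDD form means the dates compare correctly as plain integers:

#!/bin/sh
file="incoming.dat"       # placeholder file name
threshold=20080901        # placeholder cutoff date

datestring=$(head -1 "$file" | cut -c27-34)

if [ "$datestring" -gt "$threshold" ]; then
    echo "date $datestring is after the cutoff"
else
    echo "date $datestring is on or before the cutoff"
fi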
{ "language": "en", "url": "https://stackoverflow.com/questions/121318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: A Java API to generate Java source files I'm looking for a framework to generate Java source files. Something like the following API:

X clazz = Something.createClass("package name", "class name");
clazz.addSuperInterface("interface name");
clazz.addMethod("method name", returnType, argumentTypes, ...);
File targetDir = ...;
clazz.generate(targetDir);

Then, a Java source file should be found in a sub-directory of the target directory. Does anyone know such a framework?

EDIT:
* I really need the source files.
* I also would like to fill out the code of the methods.
* I'm looking for a high-level abstraction, not direct bytecode manipulation/generation.
* I also need the "structure of the class" in a tree of objects.
* The problem domain is general: to generate a large amount of very different classes, without a "common structure".

SOLUTIONS: I have posted 2 answers based on your answers... with CodeModel and with Eclipse JDT. I have used CodeModel in my solution, :-)

A: Another alternative is Eclipse JDT's AST, which is good if you need to rewrite arbitrary Java source code rather than just generate source code. (And I believe it can be used independently from Eclipse.)

A: Sun provides an API called CodeModel for generating Java source files using an API. It's not the easiest thing to get information on, but it's there and it works extremely well. The easiest way to get hold of it is as part of the JAXB 2 RI - the XJC schema-to-java generator uses CodeModel to generate its Java source, and it's part of the XJC jars. You can use it just for the CodeModel. Grab it from http://codemodel.java.net/

A: Solution found with CodeModel. Thanks, skaffman. For example, with this code:

JCodeModel cm = new JCodeModel();
JDefinedClass dc = cm._class("foo.Bar");
JMethod m = dc.method(0, int.class, "foo");
m.body()._return(JExpr.lit(5));
File file = new File("./target/classes");
file.mkdirs();
cm.build(file);

I can get this output:

package foo;

public class Bar {
    int foo() {
        return 5;
    }
}

A: The Eclipse JET project can be used to do source generation. I don't think its API is exactly like the one you described, but every time I've heard of a project doing Java source generation they've used JET or a homegrown tool.

A: Don't know of a library, but a generic template engine might be all you need. There are a bunch of them; I personally have had good experience with FreeMarker.

A: Solution found with Eclipse JDT's AST. Thanks, Giles.
For example, with this code:

AST ast = AST.newAST(AST.JLS3);
CompilationUnit cu = ast.newCompilationUnit();
PackageDeclaration p1 = ast.newPackageDeclaration();
p1.setName(ast.newSimpleName("foo"));
cu.setPackage(p1);
ImportDeclaration id = ast.newImportDeclaration();
id.setName(ast.newName(new String[] { "java", "util", "Set" }));
cu.imports().add(id);
TypeDeclaration td = ast.newTypeDeclaration();
td.setName(ast.newSimpleName("Foo"));
TypeParameter tp = ast.newTypeParameter();
tp.setName(ast.newSimpleName("X"));
td.typeParameters().add(tp);
cu.types().add(td);
MethodDeclaration md = ast.newMethodDeclaration();
td.bodyDeclarations().add(md);
Block block = ast.newBlock();
md.setBody(block);
MethodInvocation mi = ast.newMethodInvocation();
mi.setName(ast.newSimpleName("x"));
ExpressionStatement e = ast.newExpressionStatement(mi);
block.statements().add(e);
System.out.println(cu);

I can get this output:

package foo;
import java.util.Set;
class Foo<X> {
    void MISSING(){
        x();
    }
}

A: I built something that looks very much like your theoretical DSL, called "sourcegen" - technically it's a util project for an ORM I wrote. The DSL looks like:

@Test
public void testTwoMethods() {
    GClass gc = new GClass("foo.bar.Foo");

    GMethod hello = gc.getMethod("hello");
    hello.arguments("String foo");
    hello.setBody("return 'Hi' + foo;");

    GMethod goodbye = gc.getMethod("goodbye");
    goodbye.arguments("String foo");
    goodbye.setBody("return 'Bye' + foo;");

    Assert.assertEquals(
        Join.lines(new Object[] {
            "package foo.bar;",
            "",
            "public class Foo {",
            "",
            " public void hello(String foo) {",
            " return \"Hi\" + foo;",
            " }",
            "",
            " public void goodbye(String foo) {",
            " return \"Bye\" + foo;",
            " }",
            "",
            "}",
            "" }),
        gc.toCode());
}

https://github.com/stephenh/joist/blob/master/util/src/test/java/joist/sourcegen/GClassTest.java

It also does some neat things like auto-organizing imports for any FQCNs in parameters/return types, auto-pruning any old files that were not touched in this codegen run, correctly indenting inner classes, etc. The idea is that generated code should be pretty to look at, with no warnings (unused imports, etc.), just like the rest of your code. So much generated code is ugly to read... it's horrible. Anyway, there is not a lot of docs, but I think the API is pretty simple/intuitive. The Maven repo is here if anyone is interested.

A: You can use Roaster (https://github.com/forge/roaster) to do code generation. Here is an example:

JavaClassSource source = Roaster.create(JavaClassSource.class);
source.setName("MyClass").setPublic();
source.addMethod().setName("testMethod").setPrivate().setBody("return null;")
      .setReturnType(String.class).addAnnotation(MyAnnotation.class);
System.out.println(source);

will display the following output:

public class MyClass {
    private String testMethod() {
        return null;
    }
}

A: If you REALLY need the source, I don't know of anything that generates source. You can however use ASM or CGLIB to directly create the .class files. You might be able to generate source from these, but I've only used them to generate bytecode.

A: I was doing it myself for a mock generator tool. It's a very simple task, even if you need to follow Sun formatting guidelines. I bet you'd finish the code that does it faster than you'd find something that fits your goal on the Internet. You've basically outlined the API yourself. Just fill it with the actual code now!

A: There is also StringTemplate. It is by the author of ANTLR and is quite powerful.

A: There is a new project, write-it-once.
A template-based code generator: you write a custom template using Groovy and generate files based on Java reflection. It's the simplest way to generate any file. You can make getters/setters/toString by generating AspectJ files, SQL based on JPA annotations, inserts/updates based on enums, and so on. Template example:

package ${cls.package.name};

public class ${cls.shortName}Builder {

    public static ${cls.name}Builder builder() {
        return new ${cls.name}Builder();
    }

    <% for(field in cls.fields) {%>
    private ${field.type.name} ${field.name};
    <% } %>

    <% for(field in cls.fields) {%>
    public ${cls.name}Builder ${field.name}(${field.type.name} ${field.name}) {
        this.${field.name} = ${field.name};
        return this;
    }
    <% } %>

    public ${cls.name} build() {
        final ${cls.name} data = new ${cls.name}();
        <% for(field in cls.fields) {%>
        data.${field.setter.name}(this.${field.name});
        <% } %>
        return data;
    }
}

A: It really depends on what you are trying to do. Code generation is a topic within itself. Without a specific use-case, I suggest looking at the Velocity code generation/template library. Also, if you are doing the code generation offline, I would suggest using something like ArgoUML to go from UML diagram/object model to Java code.

A: Example:

1/ A field to hold the generated attribute:

private JFieldVar generatedField;

2/ The generation code:

JCodeModel jCodeModel = new JCodeModel();

String className = "class name";
/* package name */
JPackage jp = jCodeModel._package("package name");
/* class name */
JDefinedClass jclass = jp._class(className);
/* add comment */
JDocComment jDocComment = jclass.javadoc();
jDocComment.add("By AUTOMAT D.I.T tools : " + new Date() + " => " + className);

// generate the getter & setter & attribute

// create the attribute
this.generatedField = jclass.field(JMod.PRIVATE, Integer.class, "attributeName");

// getter
JMethod getter = jclass.method(JMod.PUBLIC, Integer.class, "getAttribute");
getter.body()._return(this.generatedField);

// setter
JMethod setter = jclass.method(JMod.PUBLIC, void.class, "setAttribute");
// create the setter parameter
JVar setParam = setter.param(Integer.class, "paramName");
// assignment ( this.attributeName = paramName )
setter.body().assign(JExpr._this().ref(this.generatedField), setParam);

jCodeModel.build(new File("path c://javaSrc//"));

A: Here is a JSON-to-POJO project that looks interesting: http://www.jsonschema2pojo.org/
{ "language": "en", "url": "https://stackoverflow.com/questions/121324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "130" }
Q: What does the GDB backtrace message "0x0000000000000000 in ?? ()" mean? What does it mean when it gives a backtrace with the following output?

#0  0x00000008009c991c in pthread_testcancel () from /lib/libpthread.so.2
#1  0x00000008009b8120 in sigaction () from /lib/libpthread.so.2
#2  0x00000008009c211a in pthread_mutexattr_init () from /lib/libpthread.so.2
#3  0x0000000000000000 in ?? ()

The program has crashed with a standard signal 11, segmentation fault. My application is a multi-threaded FastCGI C++ program running on FreeBSD 6.3, using pthread as the threading library. It has been compiled with -g and all the symbol tables for my source are loaded, according to info sources. As is clear, none of my actual code appears in the trace; instead the error seems to originate from the standard pthread libraries. In particular, what is "?? ()"?

EDIT: eventually tracked the crash down to a standard invalid memory access in my main code. Doesn't explain why the stack trace was corrupted, but that's a question for another day :)

A: Something you did caused the threading library to crash. Since the threading library itself is not compiled with debugging symbols (-g), it cannot display the source code file or line number the crash happened on. In addition, since it's threads, the call stack does not point back to your file. Unfortunately this will be a tough bug to track down; you're gonna need to step through your code and try to narrow down when exactly the crash happens.

A: Make sure you compile with debug symbols. (For gcc I think that is the -g option.) Then you should be able to get more interesting information out of GDB. Don't forget to turn it off when you compile the production version.

A: I could be missing something, but isn't this indicative of someone using NULL as a function pointer?

#include <stdio.h>

typedef int (*funcptr)(void);

int func_caller(funcptr f)
{
    return (*f)();
}

int main()
{
    return func_caller(NULL);
}

This produces the same style of backtrace if you run it in gdb:

rivendell$ gcc -g -O0 foo.c -o foo
rivendell$ gdb --quiet foo
Reading symbols for shared libraries .. done
(gdb) r
Starting program: ...
Reading symbols for shared libraries . done

Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0x00000000
0x00000000 in ?? ()
(gdb) bt
#0  0x00000000 in ?? ()
#1  0x00001f9d in func_caller (f=0) at foo.c:8
#2  0x00001fb1 in main () at foo.c:14

This is a pretty strange crash though... pthread_mutexattr_init rarely does anything more than allocate a data structure and memset it. I'd look for something else going on. Is there a possibility of mismatched threading libraries or something? My BSD knowledge is a little dated, but there used to be issues around this.

A: gdb wasn't able to extract the proper return address from pthread_mutexattr_init; it got an address of 0. The "??" is the result of looking up address 0 in the symbol table. It cannot find a symbolic name, so it prints a default "??".

Unfortunately, right offhand I don't know why it could not extract the correct return address.

A: Maybe the bug that caused the crash has broken the stack (overwritten parts of the stack)? In that case, the backtrace might be useless; no idea what to do in that case...
{ "language": "en", "url": "https://stackoverflow.com/questions/121326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Point ADO.Net DataSet to different databases at runtime? I have a large ADO.Net dataset and two database schemas (Oracle) with different constraints. The dataset will work with either schema, but I want to be able to tell the dataset which schema to use (via connection string) at runtime. Is that even possible?

A: In the .NET 2.0 world, you can change your connection string on your table adapters at run-time. You just have to be sure the Connection property is public, which can be set from the dataset designer.

A: Datasets don't know what database they're pointing to -- they're just containers for data. If the dataset is filled with a data adapter, then as @Austin Salonen pointed out, you change that on the adapter side.

A: This is a code snippet showing how you could supply the connection string at runtime. It does not matter what generated the dataset.

DataSet ds = new DataSet();

// Do some updating here

// Build the connection dynamically at runtime
System.Data.OleDb.OleDbConnection connection =
    new System.Data.OleDb.OleDbConnection("Your Runtime Connection String");

// Create the data adapter with a select command and that connection
System.Data.OleDb.OleDbDataAdapter dataAdapter =
    new System.Data.OleDb.OleDbDataAdapter("SELECT ...", connection);

// Let a command builder generate the insert/update/delete commands
System.Data.OleDb.OleDbCommandBuilder builder =
    new System.Data.OleDb.OleDbCommandBuilder(dataAdapter);

// Push the dataset's changes back to the database
dataAdapter.Update(ds);
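A hedged sketch of the first answer's approach - a typed TableAdapter whose Connection property has been exposed as public in the dataset designer. The adapter, dataset, and connection-string names here are illustrative, not from any real project:

// CustomersTableAdapter is a designer-generated typed adapter (name illustrative).
CustomersTableAdapter adapter = new CustomersTableAdapter();

// Point it at whichever Oracle schema was chosen at runtime.
adapter.Connection.ConnectionString =
    useSchemaA ? connectionStringA : connectionStringB;

// Same dataset shape, different schema behind it.
MyDataSet.CustomersDataTable table = new MyDataSet.CustomersDataTable();
adapter.Fill(table);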
{ "language": "en", "url": "https://stackoverflow.com/questions/121350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: WCF Routing Message Security I'm building some routing functionality between services. The original service and the service that does the routing have an identical configuration; both are using netTcpBinding with the following binding configuration:

<netTcpBinding>
    <binding>
        <security mode="Message">
            <message clientCredentialType="UserName" />
        </security>
    </binding>
</netTcpBinding>

The service behavior uses an ASP.NET membership provider and a client certificate we've installed on the machine. When I switch off the message security it relays just fine, but when it's switched on I get the following exception:

"The message could not be processed. This is most likely because the action 'http://foo/Whatever' is incorrect or because the message contains an invalid or expired security context token or because *there is a mismatch between bindings*. The security context token would be invalid if the service aborted the channel due to inactivity. To prevent the service from aborting idle sessions prematurely increase the Receive timeout on the service endpoint's binding."

(Emphasis mine)

My thinking is that the certificate is operating on the message twice (once on the original call and then on the relay) and this is what corrupts the message's security token. Questions:

* Is my thinking on target?
* Is there a way to continue to use message security for routing without having the complexity of a token service?

A: You mentioned switching between no security and message security. Are you making sure to change both the WCF service endpoints as well as the endpoint on the receiving end? If not, and the two do not match up, you will receive an error. That's what that error seems to be saying to me.

For question 2, what type of environment are you running in? A closed system where you could use encrypt-and-sign, or a public environment, where you might need to be using a special key?
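For reference, a hedged sketch of what "matching bindings on both hops" might look like in config. The binding name, service name, address, and contract are invented, and receiveTimeout is shown only because the exception text suggests raising it for idle sessions:

<!-- Illustrative only: the same binding configuration applied on the
     original service and on the routing service. -->
<bindings>
  <netTcpBinding>
    <binding name="secureTcp" receiveTimeout="00:30:00">
      <security mode="Message">
        <message clientCredentialType="UserName" />
      </security>
    </binding>
  </netTcpBinding>
</bindings>
<services>
  <service name="MyRoutingService">
    <endpoint address="net.tcp://router/foo"
              binding="netTcpBinding"
              bindingConfiguration="secureTcp"
              contract="IFooContract" />
  </service>
</services>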
{ "language": "en", "url": "https://stackoverflow.com/questions/121354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Built-in localization tools in VS2008 I am working on a WinForms application programmed in C# .NET 2.0 and VS2008. I am just about to start translating the app into several languages. Before I start, is it a good idea to use VS2008 itself for all the localization? Or is it better to use some external tool right away? This is my first .NET app, so I'd rather ask before I start. What are others using? All strings used in my app are in resources, so I think the app is ready to be translated. Thank you, Petr

A: Who will be localizing it? Most external localization companies have utilities for this. If it's yourself or your team, the simplest thing is probably to use Visual Studio or something like what's mentioned here to convert it to and from a Word doc: http://blog.vermorel.com/?p=73

A: I finished the work on a site (REFULOG) and I generated the .resx files for every page (Tools/Generate Local Resource; make sure you are in design or split mode, otherwise the menu item does not appear). After this I tested an app called Resx Crunch (it is about to come out). I loaded all the generated .resx files, added the desired languages, and made the application copy the values from the default .resx files, so at the end it looked like this:

Default value | DE    | ES
--------------+-------+------
apple         | apple | apple
...

I saved the info as a CSV file and sent it to the translator. When it came back from the translator:

Default value | DE    | ES
--------------+-------+--------
apple         | Apfel | Manzana
...

I loaded it, did a Save As into the application folder, and that was it. I tried to use other localization tools, but they wanted to do too much and could not do enough. So to answer your question: generate the meta tags & .resx files using Visual Studio and do the translation using some localization tool.

A: For the benefit of others who may come across this (2+ years after the last post), I'm the author of a professional localization product that makes the entire translation process extremely easy. It's a Visual Studio add-in that will extract all ".resx" strings from any arbitrary solution and load them into a single file that can be translated using a free standalone application (translators can download this from my site). The same add-in will then import the translated strings back into your solution. Extremely easy to use with many built-in safeguards, lots of bells and whistles, and online help (you won't need it much). See http://www.hexadigm.com
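To round out the .resx workflow above, a hedged sketch of consuming the translated values at runtime from WinForms code; the resource base name, key, and culture are illustrative:

using System.Globalization;
using System.Resources;
using System.Threading;

// Switch the UI culture before any forms are created,
// e.g. at the top of Main(). "de" is illustrative.
Thread.CurrentThread.CurrentUICulture = new CultureInfo("de");

// Strings.resx / Strings.de.resx / Strings.es.resx supply the values.
ResourceManager rm = new ResourceManager(
    "MyApp.Strings", typeof(Program).Assembly);

string label = rm.GetString("apple"); // "Apfel" under the German culture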
{ "language": "en", "url": "https://stackoverflow.com/questions/121373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to comment out markup in an .ASPX page? Is there a way to comment out markup in an .ASPX page so that it isn't delivered to the client? I have tried the standard comments <!-- --> but this just gets delivered as a comment and doesn't prevent the control from rendering.

A: While this works:

<%--
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ht_tv1.Default" %>
--%>
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="Blank._Default" %>

This won't:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" <%--Inherits="ht_tv1.Default"--%> Inherits="Blank._Default" %>

So you can't comment out part of something, which is what I want to do 99.9995% of the time.

A: Bonus answer: the keyboard shortcut in Visual Studio for commenting out anything is Ctrl-K, C. This works in a number of places, including C#, VB, Javascript, and aspx pages; it also works for SQL in SQL Management Studio. You can either select the text to be commented out, or you can position your cursor inside a chunk to be commented out; for example, put your cursor inside the opening tag of a GridView, press Ctrl-K, C, and the whole thing is commented out.

A: Another way, assuming it's not server-side code you want to comment out, is...

<asp:panel runat="server" visible="false">
    html here
</asp:panel>

A:

<%--
Commented out HTML/CODE/Markup. Anything within this block will not be parsed/handled by ASP.NET.

<asp:Calendar runat="server"></asp:Calendar>

<%# Eval("SomeProperty") %>
--%>

Source

A: FYI | Ctrl + K, C is the comment shortcut in Visual Studio. Ctrl + K, U uncomments.

A: <%-- not rendered to browser --%>

A: I believe you're looking for:

<%-- your markup here --%>

That is a server-side comment and will not be delivered to the client... but it's not optional. If you need this to be programmable, then you'll want this answer :-)

A: Yes, there are special server-side comments:

<%-- Text not sent to client --%>
{ "language": "en", "url": "https://stackoverflow.com/questions/121382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "223" }
Q: What are the key aspects of a strongly typed language? What makes a language strongly typed? I'm looking for the most important aspects of a strongly typed language. Yesterday I asked if PowerShell was strongly typed, but no one could agree on the definition of "strongly-typed", so I'm looking to clarify the definition. Feel free to link to wikipedia or other sources, but don't just cut and paste for your answer.

A: The key is to remember that there is a distinction between statically typed and strongly typed. A strongly typed language simply means that once assigned, a given variable will always behave as a certain type until it is reassigned. By definition statically typed languages like Java and C# are strongly typed, but so are many popular dynamic languages like Ruby and Python. So in a strongly typed language:

x = "5"

x will always be a string and will never be an integer. In certain weakly typed languages you could do something like:

x = "5"
y = x + 3  // y is now 8

A: People are confusing statically typed with strongly typed. Statically typed means "a string is a string is a string". Strongly typed means "once you make this a string it will be treated as a string until it is reassigned as something different."

edit: I see someone else did point this out after all :)

A: I heard someone say in an interview (I think it was Anders Hejlsberg of C# and Turbo Pascal fame) that strong typing is not something that's on or off; some languages have a stronger type system than others. There's also a lot of confusion between strong, weak, static and dynamic typing, where statically typed languages assign types to variables and dynamic languages give types to the objects stored in variables. Try wikipedia for more info, but don't expect a conclusive answer: http://en.wikipedia.org/wiki/Strongly_typed_language

A: The term "strongly typed" has no agreed-upon definition. It makes a "great" argument in a flamewar, because whenever someone is proven wrong, they can just redefine it to mean whatever they want it to mean. Other than that, the term serves no real purpose. It is best to just not use the term, or, if you use it, rigorously define it first. If you see someone else use it, ask him to define the term. Everybody has their own definition. Some that I have seen are:

* strongly typed = statically typed
* strongly typed = explicitly typed
* strongly typed = nominally typed
* strongly typed = typed
* strongly typed = has no implicit typecasts, only explicit
* strongly typed = has no typecasts at all
* strongly typed = what I understand / weakly typed = what I don't understand
* strongly typed = C++ / weakly typed = everything else
* strongly typed = Java / weakly typed = everything else
* strongly typed = .NET / weakly typed = everything else
* strongly typed = my programming language / weakly typed = your programming language

In Type Theory, there exists the notion of one type system being stronger than another. In particular, if there exists an expression e1 such that it is accepted by a type system T1, but rejected by a type system T2, then T2 is said to be stronger than T1. There are two important things to note here:

* this is a comparative, not an absolute: there is no strong or weak, only stronger and weaker
* there is no value implied by the term; stronger does not mean better

A: According to B.C. Pierce, the guy who wrote "Types and Programming Languages" and "Advanced Topics in Types and Programming Languages":

I spent a few weeks trying to sort out the terminology of "strongly typed," "statically typed," "safe," etc., and found it amazingly difficult... The usage of these terms is so various as to render them almost useless.

So no wonder your colleagues disagree. I'd go with the simplest answer: if you can concatenate a string and an int without casting, then it's not strongly typed.

EDIT: as stated in comments, Java just does that :-(

A: Strongly typed means you declare your variables of a certain type, and your compiler will throw a hissy fit if you try to convert that variable to another type without casting it. Example (in Java, mind you):

int i = 4;
char s = i; // Type mismatch: cannot convert from int to char

A: The term 'strongly typed' is completely and utterly nonsensical. It has no meaning, and never did. Even if some of the claimed definitions were accurate, I see no purpose for the distinction; why is it important to know, discuss or debate whether a language is strongly typed (whatever that may mean) or not? This is very similar to the terms 'Web 2.0' or 'OEM', which also have no real meaning. What is interesting to consider is how these phrases begin and take root in everyday communication.

A: A statically typed language is one where variables need to be declared before they can be used, while a dynamically typed language is one where variables can be used anytime, even if they are not declared; the only condition is that they must be initialized before they are used.

Now, let us come to strongly typed languages. In such a language a variable has a type, and it will always behave as that type; it cannot be assigned a value of another type. A weakly typed language is one where variables don't have a fixed type; one can assign a value of any type to them.

Example: Java is a statically typed as well as strongly typed language. It is statically typed because one has to declare variables before they can be used. It is strongly typed because a variable of a particular type, say int, will always hold integer values; you can't assign a boolean to it.

PowerShell is a dynamically typed as well as weakly typed language. It is dynamically typed because variables need not be declared before using them. It is weakly typed because a variable may hold a value of one type at one point and a value of another type at another.
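Since the question that prompted this thread was about PowerShell, a hedged illustration of where it lands (behavior as I recall it - worth verifying in your own shell): conversion follows the left-hand operand, and a type constraint makes a variable behave strongly:

$x = "5"
$y = $x + 3      # "53" - left operand is a string, so 3 is converted to "3"
$z = 5 + "3"     # 8    - left operand is an int, so "3" is converted to 3

[int]$i = 5      # constrain the variable to int
$i = "42"        # OK - "42" converts to 42
$i = "abc"       # error - "abc" cannot be converted to int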
{ "language": "en", "url": "https://stackoverflow.com/questions/121385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Fetch the rows which have the Max value for a column for each distinct value of another column Table: UserId, Value, Date. I want to get the UserId, Value for the max(Date) for each UserId. That is, the Value for each UserId that has the latest date. Is there a way to do this simply in SQL? (Preferably Oracle)

Update: Apologies for any ambiguity: I need to get ALL the UserIds. But for each UserId, only that row where that user has the latest date.

A: I don't have Oracle to test it, but the most efficient solution is to use analytic queries. It should look something like this:

SELECT DISTINCT
    UserId,
    MaxValue
FROM (
    SELECT UserId,
           FIRST_VALUE(Value) OVER (
               PARTITION BY UserId ORDER BY Date DESC
           ) MaxValue
    FROM SomeTable
)

I suspect that you can get rid of the outer query and put distinct on the inner, but I'm not sure. In the meantime I know this one works.

If you want to learn about analytic queries, I'd suggest reading http://www.orafaq.com/node/55 and http://www.akadia.com/services/ora_analytic_functions.html. Here is the short summary.

Under the hood, analytic queries sort the whole dataset, then process it sequentially. As you process it you partition the dataset according to certain criteria, and then for each row look at some window (defaulting to the first value in the partition up to the current row - that default is also the most efficient) and can compute values using a number of analytic functions (the list of which is very similar to the aggregate functions).

In this case, here is what the inner query does. The whole dataset is sorted by UserId then Date DESC. Then it processes it in one pass. For each row you return the UserId and the first Date seen for that UserId (since dates are sorted DESC, that's the max date). This gives you your answer with duplicated rows. Then the outer DISTINCT squashes duplicates.

This is not a particularly spectacular example of analytic queries. For a much bigger win consider taking a table of financial receipts and calculating for each user and receipt a running total of what they paid. Analytic queries solve that efficiently. Other solutions are less efficient. Which is why they are part of the 2003 SQL standard. (Unfortunately Postgres doesn't have them yet. Grrr...)

A: Wouldn't a QUALIFY clause be both simplest and best?

select userid, my_date, ...
from users
qualify rank() over (partition by userid order by my_date desc) = 1

For context: on Teradata, a decent-size test of this runs in 17s with this QUALIFY version and in 23s with the 'inline view'/Aldridge solution #1.

A: With PostgreSQL 8.4 or later, you can use this:

select user_id, user_value_1, user_value_2
from (select user_id, user_value_1, user_value_2,
             row_number() over (partition by user_id order by user_date desc)
      from users) as r
where r.row_number = 1

A: In Oracle 12c+, you can use Top-N queries along with the analytic function rank to achieve this very concisely without subqueries:

select *
from your_table
order by rank() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;

The above returns all the rows with max my_date per user.
If you want only one row with the max date, then replace the rank with row_number:

select *
from your_table
order by row_number() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;

A: I don't know your exact column names, but it would be something like this:

SELECT userid, value
FROM users u1
WHERE date = (
    SELECT MAX(date)
    FROM users u2
    WHERE u1.userid = u2.userid
)

A: I see many people use subqueries or else window functions to do this, but I often do this kind of query without subqueries in the following way. It uses plain, standard SQL so it should work in any brand of RDBMS.

SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
    ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date")
WHERE t2.UserId IS NULL;

In other words: fetch the row from t1 where no other row exists with the same UserId and a greater Date. (I put the identifier "Date" in delimiters because it's an SQL reserved word.)

If t1."Date" = t2."Date", duplicates appear. Usually tables have an auto-increment (sequence) key, e.g. id. To avoid the duplicates, this can be used:

SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
    ON t1.UserId = t2.UserId
    AND ((t1."Date" < t2."Date")
         OR (t1."Date" = t2."Date" AND t1.id < t2.id))
WHERE t2.UserId IS NULL;

Re comment from @Farhan: Here's a more detailed explanation:

An outer join attempts to join t1 with t2. By default, all results of t1 are returned, and if there is a match in t2, it is also returned. If there is no match in t2 for a given row of t1, then the query still returns the row of t1, and uses NULL as a placeholder for all of t2's columns. That's just how outer joins work in general.

The trick in this query is to design the join's matching condition such that t2 must match the same userid and a greater date. The idea is that if a row exists in t2 with a greater date, then the row in t1 it's compared against can't have the greatest date for that userid. But if there is no match - i.e. if no row exists in t2 with a greater date than the row in t1 - we know that the row in t1 was the row with the greatest date for the given userid.

In those cases (when there's no match), the columns of t2 will be NULL - even the columns specified in the join condition. So that's why we use WHERE t2.UserId IS NULL, because we're searching for the cases where no row was found with a greater date for the given userid.

A: Not being at work, I don't have Oracle to hand, but I seem to recall that Oracle allows multiple columns to be matched in an IN clause, which should at least avoid the options that use a correlated subquery, which is seldom a good idea. Something like this, perhaps (can't remember if the column list should be parenthesised or not):

SELECT *
FROM MyTable
WHERE (User, Date) IN
    ( SELECT User, MAX(Date) FROM MyTable GROUP BY User)

EDIT: Just tried it for real:

SQL> create table MyTable (usr char(1), dt date);
SQL> insert into mytable values ('A','01-JAN-2009');
SQL> insert into mytable values ('B','01-JAN-2009');
SQL> insert into mytable values ('A', '31-DEC-2008');
SQL> insert into mytable values ('B', '31-DEC-2008');
SQL> select usr, dt from mytable
  2  where (usr, dt) in
  3  ( select usr, max(dt) from mytable group by usr)
  4  /

U DT
- ---------
A 01-JAN-09
B 01-JAN-09

So it works, although some of the new-fangly stuff mentioned elsewhere may be more performant.

A: This will retrieve all rows for which the my_date column value is equal to the maximum value of my_date for that userid.
This may retrieve multiple rows for the userid where the maximum date is on multiple rows.

select userid, my_date, ...
from
(
    select userid, my_date, ...,
           max(my_date) over (partition by userid) max_my_date
    from users
)
where my_date = max_my_date

"Analytic functions rock"

Edit: With regard to the first comment...

"using analytic queries and a self-join defeats the purpose of analytic queries"

There is no self-join in this code. There is instead a predicate placed on the result of the inline view that contains the analytic function - a very different matter, and completely standard practice.

"The default window in Oracle is from the first row in the partition to the current one"

The windowing clause is only applicable in the presence of the order by clause. With no order by clause, no windowing clause is applied by default and none can be explicitly specified.

The code works.

A: Just had to write a "live" example at work :) This one supports multiple values for UserId on the same date.

Columns: UserId, Value, Date

SELECT DISTINCT
    UserId,
    MAX(Date) OVER (PARTITION BY UserId ORDER BY Date DESC),
    MAX(Values) OVER (PARTITION BY UserId ORDER BY Date DESC)
FROM
(
    SELECT UserId, Date, SUM(Value) As Values
    FROM <<table_name>>
    GROUP BY UserId, Date
)

You can use FIRST_VALUE instead of MAX and look it up in the explain plan. I didn't have the time to play with it. Of course, if searching through huge tables, it's probably better if you use FULL hints in your query.

A: I'm quite late to the party, but the following hack will outperform both correlated subqueries and any analytic function, with one restriction: values must convert to strings. So it works for dates, numbers and other strings. The code does not look good, but the execution profile is great.

select userid,
       to_number(substr(max(to_char(date,'yyyymmdd') || to_char(value)), 9)) as value,
       max(date) as date
from users
group by userid

The reason why this code works so well is that it only needs to scan the table once. It does not require any indexes and, most importantly, it does not need to sort the table, which most analytic functions do. Indexes will help though if you need to filter the result for a single userid.

A: Use ROW_NUMBER() to assign a unique ranking on descending Date for each UserId, then filter to the first row for each UserId (i.e., ROW_NUMBER = 1).

SELECT UserId, Value, Date
FROM (SELECT UserId, Value, Date,
             ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date DESC) rn
      FROM users) u
WHERE rn = 1;

A: If you're using Postgres, you can use array_agg like:

SELECT userid, MAX(adate),
       (array_agg(value ORDER BY adate DESC))[1] as value
FROM YOURTABLE
GROUP BY userid

I'm not familiar with Oracle. This is what I came up with:

SELECT userid,
       MAX(adate),
       SUBSTR(
           (LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)),
           0,
           INSTR((LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)), ',') - 1
       ) as value
FROM YOURTABLE
GROUP BY userid

Both queries return the same results as the accepted answer. See SQLFiddles:

* Accepted answer
* My solution with Postgres
* My solution with Oracle

A: I think something like this. (Forgive me for any syntax mistakes; I'm used to using HQL at this point!)

EDIT: Also misread the question! Corrected the query...
SELECT UserId, Value
FROM Users AS user
WHERE Date = (
SELECT MAX(Date)
FROM Users AS maxtest
WHERE maxtest.UserId = user.UserId
)

A: I think you should make this variant of the previous query:
SELECT UserId, Value FROM Users U1 WHERE
Date = ( SELECT MAX(Date) FROM Users where UserId = U1.UserId)

A: Select
UserID,
Value,
Date
From
Table,
(
Select
UserID,
Max(Date) as MDate
From
Table
Group by
UserID
) as subQuery
Where
Table.UserID = subQuery.UserID and
Table.Date = subQuery.mDate

A: select VALUE from TABLE1 where TIME =
(select max(TIME) from TABLE1 where DATE=
(select max(DATE) from TABLE1 where CRITERIA=CRITERIA))

A: SELECT userid, MAX(value) KEEP (DENSE_RANK FIRST ORDER BY date DESC)
FROM table
GROUP BY userid

A: I know you asked for Oracle, but in SQL 2005 we now use this:
-- Single Value
;WITH ByDate
AS (
SELECT UserId, Value,
ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date DESC) RowNum
FROM UserDates
)
SELECT UserId, Value
FROM ByDate
WHERE RowNum = 1

-- Multiple values where dates match
;WITH ByDate
AS (
SELECT UserId, Value,
RANK() OVER (PARTITION BY UserId ORDER BY Date DESC) Rnk
FROM UserDates
)
SELECT UserId, Value
FROM ByDate
WHERE Rnk = 1

A: (T-SQL) First get all the users and their maxdate. Join with the table to find the corresponding values for the users on the maxdates.
create table users (userid int , value int , date datetime)
insert into users values (1, 1, '20010101')
insert into users values (1, 2, '20020101')
insert into users values (2, 1, '20010101')
insert into users values (2, 3, '20030101')

select T1.userid, T1.value, T1.date
from users T1,
(select max(date) as maxdate, userid from users group by userid) T2
where T1.userid= T2.userid and T1.date = T2.maxdate

results:
userid value date
----------- ----------- --------------------------
2 3 2003-01-01 00:00:00.000
1 2 2002-01-01 00:00:00.000

A: Assuming Date is unique for a given UserID, here's some TSQL:
SELECT UserTest.UserID, UserTest.Value
FROM UserTest
INNER JOIN
(
SELECT UserID, MAX(Date) MaxDate
FROM UserTest
GROUP BY UserID
) Dates
ON UserTest.UserID = Dates.UserID
AND UserTest.Date = Dates.MaxDate

A: The answer here is Oracle only. Here's a bit more sophisticated answer in all SQL:
Who has the best overall homework result (maximum sum of homework points)?
SELECT FIRST, LAST, SUM(POINTS) AS TOTAL
FROM STUDENTS S, RESULTS R
WHERE S.SID = R.SID AND R.CAT = 'H'
GROUP BY S.SID, FIRST, LAST
HAVING SUM(POINTS) >= ALL (SELECT SUM (POINTS)
FROM RESULTS
WHERE CAT = 'H'
GROUP BY SID)

And a more difficult example, which needs some explanation, for which I don't have time atm:
Give the book (ISBN and title) that is most popular in 2008, i.e., which is borrowed most often in 2008.
SELECT X.ISBN, X.title, X.loans
FROM (SELECT Book.ISBN, Book.title, count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title) X
HAVING loans >= ALL (SELECT count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title);

Hope this helps (anyone).. :)
Regards,
Guus

A: Solution for MySQL, which doesn't have the KEEP or DENSE_RANK concepts:
select userid, my_date, ...
from
(
select @sno:= case when @pid<>userid then 0
else @sno+1
end as serialnumber,
@pid:=userid,
my_Date,
...
from users
order by userid, my_date
) a
where a.serialnumber=0

Reference: http://benincampus.blogspot.com/2013/08/select-rows-which-have-maxmin-value-in.html

A: select userid, value, date
from thetable t1 ,
( select t2.userid, max(t2.date) date2
from thetable t2
group by t2.userid ) t3
where t3.userid = t1.userid and
t3.date2 = t1.date

IMHO this works. HTH

A: I think this should work?
Select T1.UserId, (Select Top 1 T2.Value From Table T2 Where T2.UserId = T1.UserId Order By Date Desc) As 'Value'
From Table T1
Group By T1.UserId
Order By T1.UserId

A: This should be as simple as:
SELECT UserId, Value
FROM Users u
WHERE Date = (SELECT MAX(Date) FROM Users WHERE UserID = u.UserID)

A: First try I misread the question, following the top answer, here is a complete example with correct results:
CREATE TABLE table_name (id int, the_value varchar(2), the_date datetime);
INSERT INTO table_name (id,the_value,the_date) VALUES(1 ,'a','1/1/2000');
INSERT INTO table_name (id,the_value,the_date) VALUES(1 ,'b','2/2/2002');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'c','1/1/2000');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'d','3/3/2003');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'e','3/3/2003');
--
select id, the_value
from table_name u1
where the_date = (select max(the_date)
from table_name u2
where u1.id = u2.id)
--
id the_value
----------- ---------
2 d
2 e
1 b
(3 row(s) affected)

A: This will also take care of duplicates (return one row for each user_id):
SELECT *
FROM (
SELECT u.*, FIRST_VALUE(u.rowid) OVER(PARTITION BY u.user_id ORDER BY u.date DESC) AS last_rowid
FROM users u
) u2
WHERE u2.rowid = u2.last_rowid

A: Just tested this and it seems to work on a logging table
select ColumnNames, max(DateColumn) from log group by ColumnNames order by 1 desc

A: SELECT a.*
FROM user a
INNER JOIN (SELECT userid,Max(date) AS date12 FROM user1 GROUP BY userid) b
ON a.date=b.date12 AND a.userid=b.userid
ORDER BY a.userid;

A: If (UserID, Date) is unique, i.e. no date appears twice for the same user then:
select TheTable.UserID, TheTable.Value
from TheTable
inner join (select UserID, max([Date]) MaxDate
from TheTable
group by UserID) UserMaxDate
on TheTable.UserID = UserMaxDate.UserID
and TheTable.[Date] = UserMaxDate.MaxDate;

A: select UserId,max(Date) over (partition by UserId) value from users;

A: Check this link. If your question is similar to that page, I would suggest the following query, which will give the solution for that link:
select distinct sno,item_name,max(start_date) over(partition by sno),max(end_date) over(partition by sno),max(creation_date) over(partition by sno),
max(last_modified_date) over(partition by sno) from uniq_select_records order by sno,item_name asc;
It will give accurate results related to that link.

A: Use the code:
select T.UserId,T.dt from (select UserId,max(dt) over (partition by UserId) as dt from t_users)T where T.dt=dt;

This will retrieve the results, irrespective of duplicate values for UserId. If your UserId is unique, it becomes simpler:
select UserId,max(dt) from t_users group by UserId;

A: SELECT a.userid,a.values1,b.mm FROM table_name a,(SELECT userid,Max(date1)AS mm FROM table_name GROUP BY userid) b WHERE a.userid=b.userid AND a.DATE1=b.mm;

A: The query below can work once the row_number is computed in a subquery (a window function's alias can't be referenced directly in the WHERE clause of the same query):
SELECT user_id, value, date
FROM (
SELECT user_id, value, date,
row_number() OVER (PARTITION BY user_id ORDER BY date desc) AS rn
FROM table_name
) t
WHERE rn = 1
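For anyone unsure how RANK and ROW_NUMBER behave when two rows tie on the max date, here is a minimal, self-contained sketch (table and data invented for illustration; standard SQL):

create table t (userid int, value int, my_date date);
insert into t values (1, 10, date '2009-01-01');
insert into t values (1, 20, date '2009-01-01'); -- tie on the max date
insert into t values (1, 30, date '2008-12-31');

-- RANK keeps both tied rows (values 10 and 20):
select userid, value, my_date
from (select t.*, rank() over (partition by userid order by my_date desc) rnk from t) x
where rnk = 1;

-- ROW_NUMBER keeps exactly one of them, chosen arbitrarily unless you add a tie-breaker:
select userid, value, my_date
from (select t.*, row_number() over (partition by userid order by my_date desc) rn from t) x
where rn = 1;

So pick RANK (or DENSE_RANK) when ties should all survive, and ROW_NUMBER when you want exactly one row per user.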
{ "language": "en", "url": "https://stackoverflow.com/questions/121387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "654" }
Q: How to determine the line ending of a file I have a bunch (hundreds) of files that are supposed to have Unix line endings. I strongly suspect that some of them have Windows line endings, and I want to programmatically figure out which ones do. I know I can just run
flip -u
or something similar in a script to convert everything, but I want to be able to identify those files that need changing first.

A: You can use the file tool, which will tell you the type of line ending. Or, you could just use
dos2unix -U
which will convert everything to Unix line endings, regardless of what it started with.

A: Here's the most failsafe answer. Stimms' answer doesn't account for subdirectories and binary files
find . -type f -exec file {} \; | grep "CRLF" | awk -F ':' '{ print $1 }'

* Use file to find the file type. Those with CRLF have Windows return characters. The output of file is delimited by a :, and the first field is the path of the file.

A: You could use grep
egrep -l $'\r'\$ *

A: Unix uses one byte, 0x0A (LineFeed), while Windows uses two bytes, 0x0D 0x0A (Carriage Return, Line Feed). If you never see a 0x0D, then it's very likely Unix. If you see 0x0D 0x0A pairs then it's very likely MSDOS.

A: Something along the lines of:
perl -p -e 's[\r\n][WIN\n]; s[(?<!WIN)\n][UNIX\n]; s[\r][MAC\n];' FILENAME
though some of that regexp may need refining and tidying up.
That'll output your file with WIN, MAC, or UNIX at the end of each line. Good if your file is somehow a dreadful mess (or a diff) and has mixed endings.

A: Windows uses chars 13 & 10 for line endings; Unix uses only one of them (char 10, the line feed). So you can replace the char 13 & 10 pair with char 10, the one Unix uses.

A: When you know which files have Windows line endings (0x0D 0x0A or \r \n), what will you do with those files? I suppose you will convert them into Unix line endings (0x0A or \n). You can convert a file with Windows line endings into Unix line endings with the sed utility; just use the command:
$> sed -i 's/\r//' my_file_with_win_line_endings.txt

You can put it into a script like this:
#!/bin/bash

function travers()
{
for file in $(ls); do
if [ -f "${file}" ]; then
sed -i 's/\r//' "${file}"
elif [ -d "${file}" ]; then
cd "${file}"
travers
cd ..
fi
done
}

travers

If you run it from your root dir with the files, at the end you can be sure all files have Unix line endings.
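If you only want to identify the offenders first (which is what the question asks), a bash sketch that lists and optionally converts only the files that actually contain carriage returns (GNU grep and sed assumed; $'\r' is bash syntax for a literal carriage return):

# list files under the current dir that contain a CR byte:
grep -rl $'\r' .

# convert just those files, leaving the clean ones untouched:
grep -rl $'\r' . | while read -r f; do
    sed -i 's/\r$//' "$f"
done

Unlike the ls-based loop above, this only rewrites files that need it, so you can review the list before touching anything.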
{ "language": "en", "url": "https://stackoverflow.com/questions/121392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: grails plugin for eclipse and netbeans 6.1 Has anyone been successful in getting the grails plugin for eclipse to work? How about grails plugin with netbeans?

A: If you use the NetBeans 6.5 Beta you'll see the Grails functionality is promising, but still buggy (in minor ways). The good thing is in 6.5 Groovy and Grails support is standard, you don't have to install the plugins.

A: Unfortunately, there hasn't been much progress on the Eclipse plugin for Grails, so we have started using IntelliJ IDEA for Grails development; the JetGroovy plugin is excellent and keeps getting better!

A: Netbeans 6.5 is pretty good for Grails and allows for debugging, though the code completion is just barely there.

A: Well, here's a quick update. The Eclipse plugin works, and has refactoring support. But, for some reason I can't get it to recognize the Grails plugins in the Eclipse project. It's starting to come along though.

A: I haven't had any problems getting the Eclipse grails plugin "to work" insofar as it's installed and providing all the features advertised. The problem is that this set of features is minimal, and light years behind IntelliJ. I understand that switching from a free IDE to a commercial IDE isn't always possible, but if it is, do it! Although Netbeans is better than Eclipse, it's still quite a distance behind IntelliJ.

A: Just for future documentation: Netbeans 6.8 is available with a very nice Grails/Groovy plugin that works like a charm. Additionally you can use a new Code Coverage Plugin. Really nice build.
Link: Netbeans Homepage
But you have to keep in mind that Grails now belongs to Spring Source. Spring Source is known for developing their own Tool Suite based on Eclipse. Maybe we will see a better grails plugin implementation for Eclipse.
{ "language": "en", "url": "https://stackoverflow.com/questions/121393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Accessing Object Memory Address When you call the object.__repr__() method in Python you get something like this back:
<__main__.Test object at 0x2aba1c0cf890>

Is there any way to get a hold of the memory address if you overload __repr__(), other than calling super(Class, obj).__repr__() and regexing it out?

A: You could reimplement the default repr this way:
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)

A: Just use
id(object)

A: You can get something suitable for that purpose with:
id(self)

A: With ctypes, you can achieve the same thing with
>>> import ctypes
>>> a = (1,2,3)
>>> ctypes.addressof(a)
3077760748L

Documentation:
addressof(C instance) -> integer
Return the address of the C instance internal buffer

Note that in CPython, currently id(a) == ctypes.addressof(a), but ctypes.addressof should return the real address for each Python implementation, if
* ctypes is supported
* memory pointers are a valid notion.
Edit: added information about interpreter-independence of ctypes

A: There are a few issues here that aren't covered by any of the other answers.
First, id only returns:
the “identity” of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.
In CPython, this happens to be the pointer to the PyObject that represents the object in the interpreter, which is the same thing that object.__repr__ displays. But this is just an implementation detail of CPython, not something that's true of Python in general. Jython doesn't deal in pointers, it deals in Java references (which the JVM of course probably represents as pointers, but you can't see those—and wouldn't want to, because the GC is allowed to move them around). PyPy lets different types have different kinds of id, but the most general is just an index into a table of objects you've called id on, which is obviously not going to be a pointer. I'm not sure about IronPython, but I'd suspect it's more like Jython than like CPython in this regard. So, in most Python implementations, there's no way to get whatever showed up in that repr, and no use if you did.
But what if you only care about CPython? That's a pretty common case, after all.
Well, first, you may notice that id is an integer;* if you want that 0x2aba1c0cf890 string instead of the number 46978822895760, you're going to have to format it yourself. Under the covers, I believe object.__repr__ is ultimately using printf's %p format, which you don't have from Python… but you can always do this:
format(id(spam), '#010x' if sys.maxsize.bit_length() <= 32 else '#18x')

* In 3.x, it's an int. In 2.x, it's an int if that's big enough to hold a pointer—which it may not be because of signed number issues on some platforms—and a long otherwise.
Is there anything you can do with these pointers besides print them out? Sure (again, assuming you only care about CPython).
All of the C API functions take a pointer to a PyObject or a related type. For those related types, you can just call PyFoo_Check to make sure it really is a Foo object, then cast with (PyFoo *)p. So, if you're writing a C extension, the id is exactly what you need.
What if you're writing pure Python code? You can call the exact same functions with pythonapi from ctypes.
Finally, a few of the other answers have brought up ctypes.addressof. That isn't relevant here.
This only works for ctypes objects like c_int32 (and maybe a few memory-buffer-like objects, like those provided by numpy). And, even there, it isn't giving you the address of the c_int32 value, it's giving you the address of the C-level int32 that the c_int32 wraps up.
That being said, more often than not, if you really think you need the address of something, you didn't want a native Python object in the first place, you wanted a ctypes object.

A: The Python manual has this to say about id():
Return the "identity'' of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. (Implementation note: this is the address of the object.)
So in CPython, this will be the address of the object. No such guarantee for any other Python interpreter, though.
Note that if you're writing a C extension, you have full access to the internals of the Python interpreter, including access to the addresses of objects directly.

A: I know this is an old question but if you're still programming, in python 3 these days... I have actually found that if it is a string, then there is a really easy way to do this:
>>> spam.upper
<built-in method upper of str object at 0x1042e4830>
>>> spam.upper()
'YO I NEED HELP!'
>>> id(spam)
4365109296

string conversion does not affect location in memory either:
>>> spam = {437 : 'passphrase'}
>>> object.__repr__(spam)
'<dict object at 0x1043313f0>'
>>> str(spam)
"{437: 'passphrase'}"
>>> object.__repr__(spam)
'<dict object at 0x1043313f0>'

A: Just in response to Torsten, I wasn't able to call addressof() on a regular python object. Furthermore, id(a) != addressof(a). This is in CPython, don't know about anything else.
>>> from ctypes import c_int, addressof
>>> a = 69
>>> addressof(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: invalid type
>>> b = c_int(69)
>>> addressof(b)
4300673472
>>> id(b)
4300673392

A: You can get the memory address/location of any object by using the 'partition' method of the built-in 'str' type.
Here is an example of using it to get the memory address of an object:
Python 3.8.3 (default, May 27 2020, 02:08:17)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> object.__repr__(1)
'<int object at 0x7ca70923f0>'
>>> hex(int(object.__repr__(1).partition('object at ')[2].strip('>'), 16))
0x7ca70923f0
>>>

Here, I call the built-in 'object' class's '__repr__' method with an object such as 1 as the argument, which returns a string. I then partition that string on 'object at', which returns a tuple of (everything before the separator, the separator itself, everything after it). Since the memory address sits right after 'object at', it ends up in the third item of the tuple, which I access with index 2. That piece still carries a right angle bracket as a suffix, so I use 'strip' to remove it. Finally, I convert the resulting string into an integer with base 16 and then back into a hex number.

A: While it's true that id(object) gets the object's address in the default CPython implementation, this is generally useless...
you can't do anything with the address from pure Python code. The only time you would actually be able to use the address is from a C extension library... in which case it is trivial to get the object's address since Python objects are always passed around as C pointers.

A: If the __repr__ is overloaded, you may consider __str__ to see the memory address of the variable. Here are the details of __repr__ versus __str__ by Moshe Zadka in StackOverflow.

A: There is a way to recover the value from 'id'; here is the TL;DR:
ctypes.cast(memory_address,ctypes.py_object).value
source
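To make that TL;DR concrete, a quick sketch (CPython only, since it relies on id() being the object's address, which other interpreters don't guarantee):

import ctypes

a = [1, 2, 3]
addr = id(a)                                   # in CPython, the address of the list object
b = ctypes.cast(addr, ctypes.py_object).value  # cast the address back to an object
print(b is a)                                  # True: the very same object; only valid while 'a' is alive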
{ "language": "en", "url": "https://stackoverflow.com/questions/121396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218" }
Q: Storing a calendar day in sqlite database I need to store items with a calendar date (just the day, no time) in a sqlite database. What's the best way to represent the date in the column? Julian days and unix seconds come to mind as reasonable alternatives. If I go with a unit other than days, at what clock time should it be?
Update: I am aware of ISO8601 and actually used it to store the date as a string in YYYY-MM-DD format for the prototype. But for various arithmetic I have to convert it to some number internally, so I'd prefer to store a number and just convert to string for display. What units should this number be in, with what origin, and if the unit is more precise than days, what time of day should be used?

A: If you expect to be handing the date to an external tool or library, you should use whatever format it expects. Unix time seems to be the lingua franca of digital horography, so it's a good default if external circumstances don't make the choice for you.
ISO 8601 may be of interest.
Edit:
But for various arithmetic I have to convert it to some number internally, so I'd prefer to store a number and just convert to string for display.
Ah, that makes sense. If your environment (.NET? Python? C++?) has time-handling tools, it would be best to use their native unit and epoch. There's no reason to re-write all those date-manipulation functions; they're trickier than they look.
Otherwise, I'd use days in the local (Gregorian?) calendar since a reasonable epoch for your application. Be generous, you don't want a reverse Y2K bug when you suddenly need to handle a date earlier than you ever expected.
The time of day is entirely a matter of taste. Midnight seems like the cleanest choice (all zeros in the hour, minute, second fields), but since it never reaches the user there are no practical implications. It would be a good spot for an easter egg, in fact.

A: If you want to cover your bases make sure you store the date in UTC and pick an ISO standard. This approach requires minimal effort and will protect your code from future interop headaches. I agree with @skymt: ISO 8601 is a good choice.

A: If you are using C++, then boost::date_time is well worth a look.

A: "Best" almost entirely depends on where the dates come from (NSDate objects, some XML feed) and how they are manipulated (simply stored as-is vs. doing some arithmetic like "days until").
I'm not necessarily recommending this, but I wrote an app which also needed to store date-but-not-time in an SQLite DB. At first I used two columns, Month and Day, where Month was defined as the number of months since January 2000. I needed to enforce one-row-per-date, so I defined a UNIQUE index across those two columns, and it turned out to make updates horribly slow.
For my second attempt, I used a similar scheme but encoded the date into one number, by using the bottom 5 bits for the Day and the remaining upper bits for the Month (again, since Jan 2000). The conversion functions were:
Date = (Month << 5) | Day
Month = Date >> 5
Day = 0x1F & Date

This scheme preserves numerical order on dates and makes it easy to split them into components. It made sense for my app, because I was storing data in monthly chunks. If, given a Date, you want to find the next Date, this scheme might not be the best, since the Date + 1 may not be a valid date.
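If you do go with a number, note that SQLite's built-in date functions can do the conversions for you, so you don't have to hand-roll the arithmetic. A small sketch using Julian day numbers (whole days land on .5 because Julian days start at noon UTC):

-- Store the day as a Julian day number:
SELECT julianday('2008-09-23');                           -- 2454732.5

-- Day arithmetic is plain subtraction:
SELECT julianday('2008-09-23') - julianday('2008-09-01'); -- 22.0

-- Convert back to a string for display:
SELECT date(2454732.5);                                   -- '2008-09-23'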
{ "language": "en", "url": "https://stackoverflow.com/questions/121408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I include a stacktrace in my Django 500.html page? I'm running Django 1.0 and I'm close to deploying my app. As such, I'll be changing the DEBUG setting to False.
With that being said, I'd still like to include the stacktrace on my 500.html page when errors occur. By doing so, users can copy-and-paste the errors and easily email them to the developers.
Any thoughts on how best to approach this issue?

A: If you want to show exceptions that are generated, on your template (500.html), you can write your own 500 view, grabbing the exception and passing it to your 500 template.
Steps:
# In views.py:
import sys, traceback
from django.http import HttpResponseServerError
from django.template import loader, RequestContext

def custom_500(request):
    t = loader.get_template('500.html')
    print sys.exc_info()
    type, value, tb = sys.exc_info()
    return HttpResponseServerError(t.render(RequestContext(request, {
        'exception_value': value,
        'value': type,
        'tb': traceback.format_exception(type, value, tb)
    })))

# In Main urls.py:
from django.conf.urls.defaults import *
handler500 = 'project.web.services.views.custom_500'

# In Template(500.html):
{{ exception_value }}{{value}}{{tb}}

more about it here: https://docs.djangoproject.com/en/dev/topics/http/views/#the-500-server-error-view

A: Automatically log your 500s, that way:
* You know when they occur.
* You don't need to rely on users sending you stacktraces.
Joel recommends even going so far as automatically creating tickets in your bug tracker when your application experiences a failure. Personally, I create a (private) RSS feed with the stacktraces, urls, etc. that the developers can subscribe to.
Showing stack traces to your users on the other hand could possibly leak information that malicious users could use to attack your site. Overly detailed error messages are one of the classic stepping stones to SQL injection attacks.
Edit (added code sample to capture traceback):
You can get the exception information from the sys.exc_info call, while formatting the traceback for display comes from the traceback module:
import traceback
import sys

try:
    raise Exception("Message")
except:
    type, value, tb = sys.exc_info()
    print >> sys.stderr, type.__name__, ":", value
    print >> sys.stderr, '\n'.join(traceback.format_tb(tb))

Prints:
Exception : Message
File "exception.py", line 5, in <module>
raise Exception("Message")

A: As @zacherates says, you really don't want to display a stacktrace to your users. The easiest approach to this problem is what Django does by default if you have yourself and your developers listed in the ADMINS setting with email addresses; it sends an email to everyone in that list with the full stack trace (and more) every time there is a 500 error with DEBUG = False.

A: You could call sys.exc_info() in a custom exception handler. But I don't recommend that. Django can send you emails for exceptions.

A: I know this is an old question, but these days I would recommend using a service such as Sentry to capture your errors.
On Django, the steps to set this up are incredibly simple. From the docs:
* Install Raven using pip install raven
* Add 'raven.contrib.django.raven_compat' to your settings.INSTALLED_APPS.
* Add RAVEN_CONFIG = {"dsn": YOUR_SENTRY_DSN} to your settings.
Then, on your 500 page (defined in handler500), pass the request.sentry.id to the template and your users can reference the specific error without any of your internals being exposed.
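For reference, the ADMINS-based approach mentioned above needs nothing beyond a settings entry plus working outgoing email; the names and addresses here are placeholders:

# settings.py
DEBUG = False
ADMINS = (
    ('Dev One', 'dev1@example.com'),
    ('Dev Two', 'dev2@example.com'),
)
# With DEBUG = False, every unhandled exception (HTTP 500) is emailed,
# with full traceback, to everyone listed in ADMINS.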
{ "language": "en", "url": "https://stackoverflow.com/questions/121439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How to know Flash Player Version from Action Script 3.0 Is there a way to know the Flash Player version installed on the computer that runs our SWF file, using ActionScript 3.0?

A: If you are programming from within the IDE the following will get you the version
trace(Capabilities.version);

If you are building a custom class the following should help. Make sure that the following code goes into a file named VersionCheck.as
package
{
import flash.system.Capabilities;
public class VersionCheck
{
public function VersionCheck():void
{
trace(Capabilities.version);
}
}
}

Hope this helps; remember that the whole AS3 language reference can be studied online here: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/.

A: It's in flash.system.Capabilities.version

A: This example might help figuring out the details you receive so that you can act on specifics within the somewhat awkward data you get.
import flash.system.Capabilities;
var versionNumber:String = Capabilities.version;
trace("versionNumber: "+versionNumber);
trace("-----");
// The version number is a list of items divided by ","
var versionArray:Array = versionNumber.split(",");
var length:Number = versionArray.length;
for(var i:Number = 0; i < length; i++)
trace("versionArray["+i+"]: "+versionArray[i]);
trace("-----");
// The main version contains the OS type too so we split it in two
// and we'll have the OS type and the major version number separately.
var platformAndVersion:Array = versionArray[0].split(" ");
for(var j:Number = 0; j < 2; j++)
trace("platformAndVersion["+j+"]: "+platformAndVersion[j]);
trace("-----");
var majorVersion:Number = parseInt(platformAndVersion[1]);
var minorVersion:Number = parseInt(versionArray[1]);
var buildNumber:Number = parseInt(versionArray[2]);
trace("Platform: "+platformAndVersion[0]);
trace("Major version: "+majorVersion);
trace("Minor version: "+minorVersion);
trace("Build number: "+buildNumber);
trace("-----");
if (majorVersion < 9)
trace("Your Flash Player version is older than the current version 9, please update.");
else
trace("You are using Flash Player 9 or later.");
{ "language": "en", "url": "https://stackoverflow.com/questions/121453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Access running mono application via command line What is the best way to access a running mono application via the command line (Linux/Unix)? Example: a mono server application is running and I want to send commands to it using the command line in the lightest/fastest way possible, causing the server to send back a response (e.g. to stdout).

A: I would say make a small, simple controller program that takes in your required command line arguments and uses remoting to send the messages to the running daemon. This would be similar to the tray icon controller program talking to the background service that is prevalent in most Windows service patterns.

A: Mono's gsharp tool is a graphical REPL that lets you Attach to Process.

A: @Rich B: This is definitely a suitable solution, which I already had implemented - however on the server I have to use, the remoting approach takes around 350ms for a single request. I've measured the time on the server side of the request handling - the request is handled in less than 10ms, so it has to be the starting of the client program and the TCP connection that take up the time. Hence the hope that I can find another way to post the requests to the server application.

A: You can use the system.net.sockets abstractions to create a service on a TCP port, and then telnet to that port. Check the library status page; Mono's coverage here is a bit patchy.
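A minimal sketch of that TCP idea in C# (the port is arbitrary and HandleCommand is a hypothetical dispatch method of your own; real code would want error handling and some authentication):

using System.Net;
using System.Net.Sockets;
using System.IO;

static void RunCommandLoop()
{
    TcpListener listener = new TcpListener(IPAddress.Loopback, 9000);
    listener.Start();
    while (true)
    {
        using (TcpClient client = listener.AcceptTcpClient())
        using (StreamReader reader = new StreamReader(client.GetStream()))
        using (StreamWriter writer = new StreamWriter(client.GetStream()))
        {
            string command = reader.ReadLine();       // one command per connection
            writer.WriteLine(HandleCommand(command)); // HandleCommand is your own dispatcher
            writer.Flush();
        }
    }
}

Then "telnet localhost 9000" (or a tiny client program) becomes your command line into the running process.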
{ "language": "en", "url": "https://stackoverflow.com/questions/121476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a good way of securing an ASP.Net web service call made via Javascript on the click event handler of an HTML button? The purpose of using a Javascript proxy for the Web Service using a service reference with Script Manager is to avoid a page load. If the information being retrieved is potentially sensitive, is there a way to secure this web service call other than using SSL?

A: I would use SSL. It would also depend, I suppose, on how sensitive your information is.

A: If you're worried about other people accessing your web service directly, you could check the calling IP address and host header and make sure they match expected IP addresses.
If you're worried about people stealing information during its journey from the server to the client, SSL is the only way to go.

A: I would:
* Use SSL for the connection
* Put a time and session based token in the request
* Validate the inputs against expected ranges on the server
SSL prevents man-in-the-middle
Tokenized requests verify that the request is coming from an active and authenticated session, within a reasonable amount of time from the last activity within the session. This prevents stale requests being re-submitted and verifies that it came from the source of the session (store the IP address, user-agent, etc on the server for session management).
Validating that the inputs are within expected ranges verifies that the request has not been doctored by the party that you are talking to.

A: Though SSL would be best, there are a number of client-side cryptography libraries that could alleviate some of the security concerns - see https://github.com/jbt/js-crypto for a nice collection
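To make the earlier token-plus-timestamp idea concrete, a rough server-side sketch; SessionStore, SessionInfo and LoadData are hypothetical stand-ins for your own session and data layers:

[WebMethod]
public string GetSensitiveData(string sessionToken)
{
    SessionInfo session = SessionStore.Lookup(sessionToken);           // your own session store
    if (session == null || session.LastActivity < DateTime.UtcNow.AddMinutes(-15))
        throw new SecurityException("Stale or unknown session token"); // rejects stale/replayed requests
    if (session.ClientAddress != HttpContext.Current.Request.UserHostAddress)
        throw new SecurityException("Token presented from a different source");
    return LoadData(session.UserId);
}

None of this replaces SSL; it only verifies that a request belongs to a live, authenticated session.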
{ "language": "en", "url": "https://stackoverflow.com/questions/121490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: 3DES with .cer public key? I'm no crypto expert, but as I understand it, 3DES is a symmetric encryption algorithm, which means it doesn't use public/private keys.
Nevertheless, I have been tasked with encrypting data using a public key (specifically, a .CER file). If you ignore the whole symmetric/asymmetric thang, I should just be able to use the key data from the public key as the TripleDES key. However, I'm having difficulty extracting the key bytes from the .CER file. This is the code as it stands..
TripleDESCryptoServiceProvider cryptoProvider = new TripleDESCryptoServiceProvider();
X509Certificate2 cert = new X509Certificate2(@"c:\temp\whatever.cer");
cryptoProvider.Key = cert.PublicKey.Key.

The simplest method I can find to extract the raw key bytes from the certificate is ToXmlString(bool), and then doing some hacky substringing upon the returned string. However, this seems so hackish I feel I must be missing a simpler, more obvious way to do it.
Am I missing a simpler way to use a .cer file to provide the key data to the C# 3DES crypto class, or is hacking it out of the certificate xml string really the best way to go about this?

A: It's not a good idea to use keys generated for asymmetric cryptography for symmetric cryptography. There's nothing preventing you from coming up with a way of using a public key as an encryption key for 3DES, but the end result will be that anyone having access to the public key (and this means everyone!) will be able to decrypt your ciphertext.

A: cryptoProvider.Key = cert.GetPublicKey()?

A: Encrypting large amounts of data with asymmetric cryptography is not the way to go. Instead, encrypt the data with a symmetric algorithm and encrypt the symmetric key (and IV) with your public key. This page from MSDN really helped me get going with .Net symmetric cryptography.

A: The real problem here is that the public key is, well, public. Meaning freely available, meaning it's providing zero security of encryption. Heck, anyone on this thread has all the information they need to decrypt everything. So do googlers.
Please try to encourage your users not to use public key data like that. At the very least, get them to give a password or some other slightly-more-secure chunk you can use to generate a consistent key.
One more thing. Certificate keys vary in size. It can probably handle throwing away extra bytes in the key, but you'll probably get an Array Index / Out Of Bounds exception if the key happens to be shorter than the 3DES key needs. I doubt that'll happen: 3DES needs only 192 bits (24 bytes, parity included), and cert keys are almost always 1024 bits or larger.

A: I think what you are missing is converting the bytes from the string containing the key-bytes. Hope the method FromBase64String will help you:
byte[] keyBytes = Convert.FromBase64String(sourceString);
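Putting the hybrid-encryption advice above into code for the asker's scenario, a sketch (untested; assumes the .cer holds an RSA public key, as is typical, and that plainBytes is the data to protect):

X509Certificate2 cert = new X509Certificate2(@"c:\temp\whatever.cer");
RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;

TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider(); // random Key + IV

// Encrypt the bulk data with 3DES...
byte[] cipherText;
using (MemoryStream ms = new MemoryStream())
using (CryptoStream cs = new CryptoStream(ms, tdes.CreateEncryptor(), CryptoStreamMode.Write))
{
    cs.Write(plainBytes, 0, plainBytes.Length);
    cs.FlushFinalBlock();
    cipherText = ms.ToArray();
}

// ...then protect the 3DES key and IV with the certificate's public key.
byte[] encryptedKey = rsa.Encrypt(tdes.Key, false); // false = PKCS#1 v1.5 padding
byte[] encryptedIV = rsa.Encrypt(tdes.IV, false);

// Ship cipherText + encryptedKey + encryptedIV; only the private-key holder can recover the data.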
{ "language": "en", "url": "https://stackoverflow.com/questions/121493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When a 'blur' event occurs, how can I find out which element focus went *to*? Suppose I attach an blur function to an HTML input box like this:
<input id="myInput" onblur="function() { ... }"></input>

Is there a way to get the ID of the element which caused the blur event to fire (the element which was clicked) inside the function? How?
For example, suppose I have a span like this:
<span id="mySpan">Hello World</span>

If I click the span right after the input element has focus, the input element will lose its focus. How does the function know that it was mySpan that was clicked?
PS: If the onclick event of the span would occur before the onblur event of the input element my problem would be solved, because I could set some status value indicating a specific element had been clicked.
PPS: The background of this problem is that I want to trigger an AJAX autocompleter control externally (from a clickable element) to show its suggestions, without the suggestions disappearing immediately because of the blur event on the input element. So I want to check in the blur function if one specific element has been clicked, and if so, ignore the blur event.

A: Hmm... In Firefox, you can use explicitOriginalTarget to pull the element that was clicked on. I expected toElement to do the same for IE, but it does not appear to work... However, you can pull the newly-focused element from the document:
function showBlur(ev)
{
var target = ev.explicitOriginalTarget||document.activeElement;
document.getElementById("focused").value =
target ? target.id||target.tagName||target : '';
}

...

<button id="btn1" onblur="showBlur(event)">Button 1</button>
<button id="btn2" onblur="showBlur(event)">Button 2</button>
<button id="btn3" onblur="showBlur(event)">Button 3</button>
<input id="focused" type="text" disabled="disabled" />

Caveat: This technique does not work for focus changes caused by tabbing through fields with the keyboard, and does not work at all in Chrome or Safari. The big problem with using activeElement (except in IE) is that it is not consistently updated until after the blur event has been processed, and may have no valid value at all during processing! This can be mitigated with a variation on the technique Michiel ended up using:
function showBlur(ev)
{
// Use timeout to delay examination of activeElement until after blur/focus
// events have been processed.
setTimeout(function()
{
var target = document.activeElement;
document.getElementById("focused").value =
target ? target.id||target.tagName||target : '';
}, 1);
}

This should work in most modern browsers (tested in Chrome, IE, and Firefox), with the caveat that Chrome does not set focus on buttons that are clicked (vs. tabbed to).

A: Works in Google Chrome v66.x, Mozilla v59.x and Microsoft Edge... Solution with jQuery.
I tested in Internet Explorer 9 and it is not supported.
$("#YourElement").blur(function(e){
var InputTarget = $(e.relatedTarget).attr("id"); // Get the ID of the element receiving focus
console.log(InputTarget);
if(InputTarget == "YourId") { // If you want to validate or take an action for a specific element
... // your code
}
});

Comment with your test results in other Internet Explorer versions.

A: I solved it eventually with a timeout on the onblur event (thanks to the advice of a friend who is not on StackOverflow):
<input id="myInput" onblur="setTimeout(function() {alert(clickSrc);},200);"></input>
<span onclick="clickSrc='mySpan';" id="mySpan">Hello World</span>

Works both in FF and IE.

A: Can you reverse what you're checking and when?
That is, if you remember what was blurred last:
<input id="myInput" onblur="lastBlurred=this;"></input>

and then in the onClick for your span, call function() with both objects:
<span id="mySpan" onClick="function(lastBlurred, this);">Hello World</span>

Your function could then decide whether or not to trigger the Ajax.AutoCompleter control. The function has the clicked object and the blurred object. The onBlur has already happened so it won't make the suggestions disappear.

A: I am also trying to make Autocompleter ignore blurring if a specific element is clicked, and I have a working solution, but only for Firefox due to explicitOriginalTarget
Autocompleter.Base.prototype.onBlur = Autocompleter.Base.prototype.onBlur.wrap(
function(origfunc, ev)
{
if ($(this.options.ignoreBlurEventElement))
{
var newTargetElement = (ev.explicitOriginalTarget.nodeType == 3 ? ev.explicitOriginalTarget.parentNode : ev.explicitOriginalTarget);
if (!newTargetElement.descendantOf($(this.options.ignoreBlurEventElement)))
{
return origfunc(ev);
}
}
}
);

This code wraps the default onBlur method of Autocompleter and checks if the ignoreBlurEventElement parameter is set. If it is set, it checks every time to see whether the clicked element is ignoreBlurEventElement or not. If it is, Autocompleter does not call onBlur; otherwise it calls onBlur. The only problem with this is that it only works in Firefox, because the explicitOriginalTarget property is Mozilla-specific.
Now I am trying to find a different way than using explicitOriginalTarget. The solution you have mentioned requires you to add onclick behaviour manually to the element. If I can't manage to solve the explicitOriginalTarget issue, I guess I will follow your solution.

A: It's possible to use the mousedown event of document instead of blur:
$(document).mousedown(function(event){
if ($(event.target).attr("id") == "mySpan") {
// some process
}
});

A: 2015 answer: according to UI Events, you can use the relatedTarget property of the event:
Used to identify a secondary EventTarget related to a Focus event, depending on the type of event.
For blur events, relatedTarget: event target receiving focus.
Example:
function blurListener(event) {
event.target.className = 'blurred';
if(event.relatedTarget)
event.relatedTarget.className = 'focused';
}
[].forEach.call(document.querySelectorAll('input'), function(el) {
el.addEventListener('blur', blurListener, false);
});
.blurred { background: orange }
.focused { background: lime }
<p>Blurred elements will become orange.</p>
<p>Focused elements should become lime.</p>
<input /><input /><input />

Note Firefox won't support relatedTarget until version 48 (bug 962251, MDN).

A: Instances of type FocusEvent have the relatedTarget property; however, up to version 47 of Firefox this attribute returns null, and from 48 onward it works. You can see more here.

A: I think it's not possible. With IE you can try to use window.event.toElement, but it doesn't work with Firefox!
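If you need to cover several engines at once, the properties from the answers above can be rolled into one helper. This is a sketch only, since behavior varies by browser and version:

function getBlurTarget(ev) {
    ev = ev || window.event;
    // standards: relatedTarget; old Firefox: explicitOriginalTarget;
    // IE: toElement; last resort: document.activeElement
    return ev.relatedTarget || ev.explicitOriginalTarget || ev.toElement || document.activeElement;
}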
A: Use something like this:
var myVar = null;

And then inside your function:
myVar = fldID; // fldID being the id of the field you want to focus later

And then:
setTimeout(setFocus,1000)

And then:
function setFocus(){ document.getElementById(myVar).focus(); }

Final code:
<html>
<head>
<script type="text/javascript">
function somefunction(){
var myVar = null;
myVar = document.getElementById('myInput');
if(myVar.value=='')
setTimeout(setFocusOnJobTitle,1000);
else
myVar.value='Success';
}
function setFocusOnJobTitle(){
document.getElementById('myInput').focus();
}
</script>
</head>
<body>
<label id="jobTitleId" for="myInput">Job Title</label>
<input id="myInput" onblur="somefunction();"></input>
</body>
</html>

A: You can fix IE with:
event.currentTarget.firstChild.ownerDocument.activeElement

It looks like "explicitOriginalTarget" for FF.
Antoine And J

A: As noted in this answer, you can check the value of document.activeElement. document is a global variable, so you don't have to do any magic to use it in your onBlur handler:
function myOnBlur(e) {
if(document.activeElement ===
document.getElementById('elementToCheckForFocus')) {
// Focus went where we expected!
// ...
}
}

A: * document.activeElement could be a parent node (for example the body node, because the focus is in a temporary phase while switching from one target to another), so it is not usable for your scope
* ev.explicitOriginalTarget is not always set
So the best way is to use the onclick event on body to work out indirectly which node (event.target) caused the blur

A: Edit: A hacky way to do it would be to create a variable that keeps track of focus for every element you care about. So, if you care that 'myInput' lost focus, set a variable to it on focus.
<script type="text/javascript">
var lastFocusedElement;
</script>
<input id="myInput" onFocus="lastFocusedElement=this;" />

Original Answer: You can pass 'this' to a function from the inline handler:
<input id="myInput" onblur="handleBlur(this);" />
function handleBlur(el){
var theId = el.id; // will be 'myInput'
}

A: I suggest using global variables blurfrom and blurto. Then, configure all elements you care about to assign their position in the DOM to the variable blurfrom when they lose focus. Additionally, configure them so that gaining focus sets the variable blurto to their position in the DOM. Then, you could use another function altogether to analyze the blurfrom and blurto data.

A: keep in mind, that the solution with explicitOriginalTarget does not work for text-input-to-text-input jumps.
try to replace buttons with the following text-inputs and you will see the difference:
<input id="btn1" onblur="showBlur(event)" value="text1">
<input id="btn2" onblur="showBlur(event)" value="text2">
<input id="btn3" onblur="showBlur(event)" value="text3">

A: I've been playing with this same feature and found out that FF, IE, Chrome and Opera have the ability to provide the source element of an event. I haven't tested Safari but my guess is it might have something similar.
$('#Form').keyup(function (e) {
var ctrl = null;
if (e.originalEvent.explicitOriginalTarget) { // FF
ctrl = e.originalEvent.explicitOriginalTarget;
}
else if (e.originalEvent.srcElement) { // IE, Chrome and Opera
ctrl = e.originalEvent.srcElement;
}
//...
});

A: I do not like using timeout when coding JavaScript, so I would do it the opposite way to Michiel Borkent. (I did not try the code below, but you should get the idea.)
<input id="myInput" onblur="blured = this.id;"></input>
<span onfocus = "sortOfCallback(this.id)" id="mySpan">Hello World</span>

In the head, something like this:
<head>
<script type="text/javascript">
function sortOfCallback(id){
bluredElement = document.getElementById(blured);
// Do whatever you want on the blured element with the id of the focus element
}
</script>
</head>

A: I wrote an alternative solution for how to make any element focusable and "blurable".
It's based on making the element contentEditable, visually hiding it, and disabling edit mode itself:
el.addEventListener("keydown", function(e) {
e.preventDefault();
e.stopPropagation();
});
el.addEventListener("blur", cbBlur);
el.contentEditable = true;

DEMO
Note: Tested in Chrome, Firefox, and Safari (OS X). Not sure about IE.
Related: I was searching for a solution for VueJs, so for those who are interested/curious how to implement such functionality using a Vue Focusable directive, please take a look.

A: This way:
<script type="text/javascript">
function yourFunction(element)
{
alert(element);
}
</script>
<input id="myinput" onblur="yourFunction(this)">

Or if you attach the listener via JavaScript (jQuery in this example):
var input = $('#myinput').blur(function() {
alert(this);
});

Edit: sorry. I misread the question.

A: I think it's easily possible via jQuery by passing the reference of the field causing the onblur event in "this". For example:
<input type="text" id="text1" onblur="showMessageOnOnblur(this)">
function showMessageOnOnblur(field){
alert($(field).attr("id"));
}

Thanks
Monika

A: You could make it like this:
<script type="text/javascript">
function myFunction(thisElement)
{
document.getElementsByName(thisElement)[0];
}
</script>
<input type="text" name="txtInput1" onBlur="myFunction(this.name)"/>

A: I see only hacks in the answers, but there's actually a builtin solution very easy to use.
Basically you can capture the focused element like this:
const focusedElement = document.activeElement

https://developer.mozilla.org/en-US/docs/Web/API/DocumentOrShadowRoot/activeElement
{ "language": "en", "url": "https://stackoverflow.com/questions/121499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "227" }
Q: Does SPSecurity.RunWithElevatedPrivileges do anything in a console app? From what I have gleaned from reflector, RunWithElevatedPrivileges simply reverts the current thread identity to the base (non-impersonated) identity. This makes perfect sense in the case of code running inside the WSS application pool, since the base service account is a super-user. Does it have any effect when running in an external (console or service) application, when no impersonation exists? I'm guessing not, but I'd like to know for sure. I've seen varying opinions on this from googling.

A: I think it would if you ran the executable under one account and then changed its credentials with code (like SP does). Otherwise, it cannot elevate to permissions it didn't have at start without some way of generating a Credentials object.

A: Normally RunWithElevatedPrivileges will get your code running as the app pool identity, but it doesn't work from the console. What we've done in this case is either use runas or set the identity of a scheduled task to the same account as the app pool identity.

A: RunWithElevatedPrivileges doesn't work when HttpContext is null. In a console app, HttpContext is null.
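For context, the usual in-process usage looks like this (a sketch; the URL is a placeholder). Hosted inside the WSS application pool, the delegate runs as the app pool account; in a console app the thread identity is already the process account, so the call changes nothing:

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite("http://server/sites/somesite"))
    {
        // Inside WSS, code here runs under the application pool identity.
    }
});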
{ "language": "en", "url": "https://stackoverflow.com/questions/121504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reading XML with an "&" into C# XMLDocument Object I have inherited a poorly written web application that seems to have errors when it tries to read in an xml document stored in the database that has an "&" in it. For example there will be a tag with the contents: "Prepaid & Charge". Is there some secret simple thing to do to have it not get an error parsing that character, or am I missing something obvious?
EDIT: Are there any other characters that will cause this same type of parser error for not being well formed?

A: The problem is the xml is not well-formed. Properly generated xml would list the data like this:
Prepaid &amp; Charge

I've fixed the same problem before, and I did it with this regex:
Regex badAmpersand = new Regex("&(?![a-zA-Z]{2,6};|#[0-9]{2,4};|#x[0-9a-fA-F]{2,4};)");

Combine that with a string constant defined like this:
const string goodAmpersand = "&amp;";

Now you can say badAmpersand.Replace(<your input>, goodAmpersand);
Note a simple String.Replace("&", "&amp;") isn't good enough, since you can't know in advance for a given document whether any & characters will be coded correctly, incorrectly, or even both in the same document.
The catches here are: you have to do this to your xml document before loading it into your parser, which likely means an extra pass through the document. Also, it does not account for ampersands inside of a CDATA section. Finally, it only catches ampersands, not other illegal characters like <.
Update: based on the comment, the expression also needed to skip hex-coded (&#x...;) entities; the third alternative above handles those.
Regarding which characters can cause problems, the actual rules are a little complex. For example, certain characters are allowed in data, but not as the first letter of an element name. And there's no simple list of illegal characters. Instead, large (non-contiguous) swaths of Unicode are defined as legal, and anything outside that is illegal.
When it comes down to it, you have to trust your document source to have at least a certain amount of compliance and consistency. For example, I've found people are often smart enough to make sure the tags work properly and escape <, even if they don't know that & isn't allowed, hence your problem today. However, the best thing would be to get this fixed at the source.
Oh, and a note about the CDATA suggestion: I use that to make sure xml I'm creating is well-formed, but when dealing with existing xml from outside, I find the regex method easier.

A: The web application isn't at fault, the XML document is. Ampersands in XML should be encoded as &amp;. Failure to do so is a syntax error.
Edit: in answer to the followup question, yes there are all kinds of similar errors. For example, unbalanced tags, unencoded less-than signs, unquoted attribute values, octets outside of the character encoding and various Unicode oddities, unrecognised entity references, and so on. In order to get any decent XML parser to consume a document, that document must be well-formed. The XML specification requires that a parser encountering a malformed document throw a fatal error.

A: The other answers are all correct, and I concur with their advice, but let me just add one thing: PLEASE do not make applications that work with non well-formed XML, it just makes the rest of our lives more difficult :). Granted, there are times when you really just don't have a choice if you have no control over the other end, but you should really have it throwing a fatal error and complaining very loudly and explicitly about what is broken when such an event occurs.
You could probably take it one step further and say "Ack! This XML is broken in these places and for these reasons, here's how I tried to fix it to make it well-formed: ...". I'm not overly familiar with the MSXML APIs, but most good XML parsers will allow you to install error handlers so that you can trap the exact line/column number where errors are appearing along with getting the error code and message.

A: Your database doesn't contain XML documents. It contains some well-formed XML documents and some strings that look like XML to a human. If it's at all possible, you should fix this - in particular, you should fix whatever process is generating the malformed XML documents. Fixing the program that reads data out of this database is just putting wallpaper over a crack in the wall.

A: You can replace & with &amp;. Or you might also be able to use CDATA sections.

A: There are several characters which will cause XML data to be reported as badly-formed.
From w3schools:
Characters like "<" and "&" are illegal in XML elements.
The best solution for input you can't trust to be XML-compliant is to wrap it in CDATA tags, e.g.
<![CDATA[This is my wonderful & great user text]]>

Everything within the <![CDATA[ and ]]> tags is ignored by the parser.
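For completeness, here's how the regex approach from the first answer plugs into XmlDocument. This is a sketch, where LoadXmlStringFromDatabase stands in for however you actually fetch the string:

Regex badAmpersand = new Regex("&(?![a-zA-Z]{2,6};|#[0-9]{2,4};|#x[0-9a-fA-F]{2,4};)");
string rawXml = LoadXmlStringFromDatabase(); // hypothetical data-access call
string fixedXml = badAmpersand.Replace(rawXml, "&amp;");

XmlDocument doc = new XmlDocument();
doc.LoadXml(fixedXml); // now parses despite stray & characters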
{ "language": "en", "url": "https://stackoverflow.com/questions/121511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How can I integrate a bitbucket repository with the hosted on-demand version of FogBugz? I use the on-demand (hosted) version of FogBugz. I would like to start using Mercurial for source control. I would like to integrate FogBugz and a BitBucket repository. I gave it a bit of a try but things weren't going very well. FogBugz requires that you hook up your Mercurial client to a fogbugz.py python script. TortoiseHg doesn't seem to have the hgext directory that they refer to in instructions. So has anyone successfully done something similar?

A: From the sounds of it you are wanting to run the hook on your local machine. The hook and directions are intended for use on the central server. If you are the only one working in your repository or don't mind commits not showing up in FB until after you do a pull, then you can add the hook locally to your primary clone. If you are using your primary clone, then you need to do something slightly different from what they say here: http://bugs.movabletype.org/help/topics/sourcecontrol/setup/Mercurial.html
You can put your fogbugz.py anywhere you want; just add a path line to the [fogbugz] section of that repository's hgrc file:
[fogbugz]
path=C:\Program Files\TortoiseHg\scripts\fogbugz.py

Just make sure you have Python installed. You may also wish to add a commit hook so that local commits to the repository also get into FB.
[hooks]
commit=python:hgext.fogbugz.hook
incoming=python:hgext.fogbugz.hook

On the FogBugz install you will want to put the following in for your log URL:
^REPO/log/^R2/^FILE
and the following for your diff URL:
^REPO/diff/^R2/^FILE
When the hook script runs it connects to your FB install and sends it a few parameters. These parameters are stored in the DB and used to generate URLs for diffs and log information. The script sends the URL of the repo; this is in your baseurl setting in the [web] section. You want this URL to be the URL of your Bitbucket repository. This will be used to replace ^REPO from the URL templates above.
The hook script also passes the revision id and the file name to FB. These will replace ^R2 and ^FILE.
So in summary this is the stuff you want to add to the hgrc file in your .hg directory:
[extensions]
hgext.fogbugz=
[fogbugz]
path=C:\Program Files\TortoiseHg\scripts\fogbugz.py
host=https://<YOURACCOUNT>.fogbugz.com/
script=cvsSubmit.asp
[hooks]
commit=python:hgext.fogbugz.hook
incoming=python:hgext.fogbugz.hook
[web]
baseurl=http://www.bitbucket.org/<YOURBITBUCKETACCOUNT>/<YOURPROJECT>/

One thing to remember is that FB may get notified of a checkin before you actually push those changes to Bitbucket. If this is the case, do a push and things will work.
EDIT: added section about the FB server and the summary.

A: Post-mortem: Bitbucket now has native fogbugz support, as well as other post-back services.
http://www.bitbucket.org/help/service-integration/

A: Just a heads-up: Fog Creek has released Kiln which provides Mercurial hosting that's tightly integrated with FogBugz and doesn't require any configuration.
I normally wouldn't "advertise" on Stack Overflow (disclaimer: I'm one of the Kiln devs), but I feel that this directly answers the original question.

A: It is possible to integrate your Git BitBucket repository with the FogBugz issue tracker, but unfortunately it is not properly documented.
You have to follow the steps described at https://confluence.atlassian.com/display/BITBUCKET/FogBugz+Service+Management, but beware that
* In the CVSSubmit URL you need to put the URL WITHOUT the "?ixBug=bugID&sFile=file&sPrev=x&sNew=y&ixRepository=" parameters. It should just be "https://your_repo.fogbugz.com/cvsSubmit.asp"
* You will need to mention your FogBugz case ID in the git commit message by putting the "BugzID: ID" string in it (this is not documented anywhere :-( ) similar to this:
git commit -m "This is a superb commit which solves case BugzID: 42"
Of course, commit info will be sent to FogBugz after you push your commit to the BitBucket server, not after you do a local commit.
{ "language": "en", "url": "https://stackoverflow.com/questions/121521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: PHP: Use Pecl/Pear, or build my own systems? When building some of my PHP apps, a lot of the functionality could be coded using PEAR/PECL modules. However, some people using the apps may not have access to install things, which poses a puzzler for me. Should I use PEAR/PECL for functionality, which would let me get a system coded up quicker than writing my own, but means that certain people will be excluded from using it?

A: Save on development time by developing with the pear libraries, and provide the libraries bundled in what you distribute (though you'll have to make sure it obeys licensing requirements). I would not depend on certain PECL extensions being installed unless you're doing something particularly related to one (say an XDebug web-frontend or something); the majority of installs will be carrying a fairly vanilla set of extensions.

A: It partly depends on how much time you have, and the purpose of the project. If you're just trying to make something that works, go with PEAR/PECL. If you're trying to learn to be a better programmer, and you have the time, then I'd recommend taking the effort to write your own versions. Once you understand the innards of whatever you're trying to replace, you may want to switch to the PEAR/PECL version so that you're not wasting time reimplementing what has already been implemented... ...but on the other hand, preexisting tools don't always do exactly what you need, and sometimes have overhead that doesn't do you any good. This is why Unix command-line tools are so small and narrow of purpose; nobody really needs a version of 'ls' that can do anything besides what 'ls' can currently do. Your version of whatever PEAR library will, by virtue of being written by you, do exactly what you need doing. It requires some careful thought... ...but on the gripping hand, don't spend too much time thinking about it. Spend five minutes, make a decision, and start coding. Even if you make the wrong decision, you'll at least have gotten more practice coding. :-)

A: My suggestion is to start with assuming PEAR/PECL modules, and get the rest of the code done. Then, once you've got most of your code working the way you want, you can evaluate going back and piece by piece replacing the outside code with your own. Plus, by then you'll have a better idea of the impact using those has on your userbase.

A: Code it initially using PEAR/PECL, and if you get people asking for a non-PEAR/PECL version, start coding your own alternatives then for such a version. The initial development will go much faster with this, and you may find that no one cares about requiring 3rd-party libraries once you have started releasing apps.

A: Use PEAR but allow for including the PEAR packages inside your project. All PEAR packages can be separately downloaded from http://pear.php.net/ and can be put anywhere. Depending on convenience and licensing issues you could then package all the required PEAR files with your project or tell users how to download and "install" them.

A: What I do most times is I'll never use PEAR installed globally on a server. Versions can change and affect your application. Instead I have a config file (in my case XML) that lists all the packages required and their versions. The installer connects to my personal FTP repository and downloads and installs all the PEAR packages locally in $PROJECTBASE/lib/pear/, and PEAR is run locally instead of globally. Something you may want to consider.
A: Using PEAR is no problem. If users do not have root access to their webserver, they can simply download the PHP files from pear.php.net and add them to their include path. PECL is a little more tricky to work around, since there's often no way to install new modules without root access.

A: You need to watch out because a lot of modules in pear are really of pretty low quality. Some are great, don't get me wrong, but don't assume that anything in pear, by virtue of being in pear, is of any given quality. Which means you need to at least skim the source of a pear module before deciding to use it, which for simple enough tasks may take more time than going without pear. pecl is different, however. Extensions tend to be better vetted and tested, else they'd crash php.

A: Reiterating much of what's already been said: http://www.codinghorror.com/blog/archives/001145.html
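To make the bundling approach above concrete, here is a minimal sketch, assuming you copy the needed PEAR packages into a lib/pear directory shipped alongside your app (the directory layout and the Mail package are just examples):

<?php
// Prefer the bundled PEAR copy over anything installed server-wide,
// so users without install rights can still run the app.
set_include_path(dirname(__FILE__) . '/lib/pear' . PATH_SEPARATOR . get_include_path());

// PEAR-style includes now resolve against the bundled copy first,
// e.g. if you shipped the PEAR Mail package under lib/pear:
require_once 'Mail.php';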
{ "language": "en", "url": "https://stackoverflow.com/questions/121525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails Caching Log Level With the new caching options in Rails 2.1 I get nice entries in my log along the lines of

Cached fragment hit: views/homepage (0.16549)

However they are logged at the :debug level, which is the same level as the SQL output. I want to be able to disable the SQL output, and still see the cache info. How can I do this?

A: Well, you could instantiate a specific logger for ActiveRecord and set its log level to :info while leaving the default logger at debug ...

ActiveRecord::Base.logger = Logger.new("#{RAILS_ROOT}/log/#{RAILS_ENV}_database.log")
ActiveRecord::Base.logger.level = Logger::INFO # should set the log_level to info for you

from http://wiki.rubyonrails.org/rails/pages/HowtoConfigureLogging

or you could reopen AbstractAdapter and override the log(sql,name) method so it does nothing http://api.rubyonrails.com/classes/ActiveRecord/ConnectionAdapters/AbstractAdapter.html#M001242
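A rough sketch of that second suggestion: an initializer that no-ops the adapter's SQL logging while still running the query block. This pokes at Rails 2.x internals, so treat it as illustrative rather than drop-in:

# config/initializers/silence_sql_logging.rb (hypothetical file name)
module ActiveRecord
  module ConnectionAdapters
    class AbstractAdapter
      # Override the logging wrapper: execute the query, skip the SQL output.
      def log(sql, name)
        yield if block_given?
      end
    end
  end
end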
{ "language": "en", "url": "https://stackoverflow.com/questions/121546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Strange unhandled exception from asp.net application - Validation of viewstate MAC failed I don't know if anyone has seen this issue before but I'm just stumped. Here's the unhandled exception message that my error page is capturing. Error Message: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. Stack Trace: at System.Web.UI.ViewStateException.ThrowError(Exception inner, String persistedState, String errorPageMessage, Boolean macValidationError) at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString) at System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Deserialize(String serializedState) at System.Web.UI.Util.DeserializeWithAssert(IStateFormatter formatter, String serializedState) at System.Web.UI.HiddenFieldPageStatePersister.Load() at System.Web.UI.Page.LoadPageStateFromPersistenceMedium() at System.Web.UI.Page.LoadAllState() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context) at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.generic_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Source: System.Web Anybody have any ideas on how I could resolve this? Thanks. A: do you have multiple servers running this application and/or have a web garden? If yes, you are going to have to set the machine key in the web.config A: I seem to recall that this error can occur if you click a button/link etc before the page has fully loaded. If this is the case, the error is caused by an ASP.net 2.0 feature called Event Validation. This is a security feature that ensures that postback actions only come from events allowed and created by the server to help prevent spoofed postbacks. This feature is implemented by having controls register valid events when they render (as in, during their actual Render() methods). The end result is that at the bottom of your rendered form tag, you'll see something like this: <input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION" value="AEBnx7v.........tS" /> When a postback occurs, ASP.net uses the values stored in this hidden field to ensure that the button you clicked invokes a valid event. If it's not valid, you get the exception that you've been seeing. The problem you're seeing happens specifically when you postback before the EventValidation field has been rendered. If EventValidation is enabled (which it is, by default), but ASP.net doesn't see the hidden field when you postback, you also get the exception. If you submit a form before it has been entirely rendered, then chances are the EventValidation field has not yet been rendered, and thus ASP.net cannot validate your click. One work around is of course to just disable event validation, but you have to be aware of the security implications. Alternatively, just never post back before the form has finished rendering. 
Of course, that's hard to tell your users, but perhaps you could disable the UI until the form has rendered? from http://forums.asp.net/p/955145/1173230.aspx

A: @Chris if the problem is clicking an item before the page has completely rendered, asp.net 3.5 SP1 added a web.config entry on the page element called renderAllHiddenFieldsAtTopOfForm.

A: By default, ASP.NET includes a digital signature of the ViewState value in the page. It does so with an automatically-generated key that is held in memory. This is done to prevent a malicious user from altering the ViewState from the browser and, for example, granting him/herself access to stuff they wouldn't normally have access to. ASP.NET can also, optionally, encrypt the ViewState, but it's turned off by default for performance reasons. In many web sites, it is a lot more important to make sure that the content of the ViewState is not 'mucked with' than it is to keep it confidential. The error message says that the signature verification failed. The page was posted with a ViewState, but the ViewState signature didn't match the signature calculated with the keys held by the server. The most common reason for this error is that you are using two or more web servers in a farm-like environment: one server sends the original page, signed with the key in memory on that server, but the page is posted back to the second (or third...) server. Because the two or more servers don't share the signature key, the signatures don't match. ...If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. What the error message is telling you is to use the validationKey attribute (see details in MSDN) in your web.config to hardcode the signature key to a value shared by all your servers, instead of using a dynamically-generated one. That way, the signature validation can succeed independently of which server receives the postback. You could turn off the verification, but it's very dangerous to do so. It means any hacker with a bit of free time can fake values in your application. For example, if you keep the price of an item in a ViewState value, the hacker could change the value from the browser to $0.01 right before placing the order.

A: For anyone else ending up struggling with this issue, here is a helpful link to some workarounds: http://blogs.msdn.com/tom/archive/2008/03/14/validation-of-viewstate-mac-failed-error.aspx

A: I know you can disable the Validation of viewstate MAC, but I think if the page is not loaded you can get into more trouble. When I ran into this problem I had to disable all buttons until the page was fully loaded.
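For reference, hardcoding the shared key that the farm explanation above describes looks roughly like this in web.config (the key values are placeholders; generate your own and use the same ones on every server in the farm):

<system.web>
  <!-- Placeholder values: substitute keys generated with a proper tool. -->
  <machineKey
      validationKey="[128-hex-char-key-shared-by-all-servers]"
      decryptionKey="[48-hex-char-key-shared-by-all-servers]"
      validation="SHA1" />
</system.web>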
{ "language": "en", "url": "https://stackoverflow.com/questions/121579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Simplest way to create a date that is the first day of the month, given another date In SQL Server what is the simplest/cleanest way to make a datetime representing the first of the month based on another datetime? eg I have a variable or column with 3-Mar-2005 14:23 and I want to get 1-Mar-2005 00:00 (as a datetime, not as varchar) A: SELECT DATEADD(mm, DATEDIFF(mm, 0, @date), 0) A: Select DateAdd(Month, DateDiff(Month, 0, GetDate()), 0) To run this on a column, replace GetDate() with your column name. The trick to this code is with DateDiff. DateDiff returns an integer. The second parameter (the 0) represents the 0 date in SQL Server, which is Jan 1, 1900. So, the datediff calculates the integer number of months since Jan 1, 1900, then adds that number of months to Jan 1, 1900. The net effect is removing the day (and time) portion of a datetime value. A: Something like this would work.... UPDATE YOUR_TABLE SET NewColumn = DATEADD(day, (DATEPART(day, OldColumn) -1)*-1, OldColumn) A: Just use DATEADD(DAY, 1-DAY(@date), @date)
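A quick check with the question's sample value; note that the DAY-based one-liner keeps the time-of-day, so only the DATEADD/DATEDIFF form lands exactly on midnight:

DECLARE @date datetime
SET @date = '20050303 14:23'

SELECT DATEADD(mm, DATEDIFF(mm, 0, @date), 0) -- 2005-03-01 00:00:00.000
SELECT DATEADD(DAY, 1 - DAY(@date), @date)    -- 2005-03-01 14:23:00.000 (time preserved)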
{ "language": "en", "url": "https://stackoverflow.com/questions/121581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Best way of detecting if Windows has Windows Updates ready to download/install? I'm specifically interested in Windows 2000/XP, but Vista/7 would be interesting too (if different). I was thinking along the lines of task scheduling a batch file or equivalent on a daily basis. EDIT: Sorry, I should have provided more info. The question pertains to 10 machines to which I manually apply updates. I don't want to install the updates programmatically, but just find out if there are updates ready to download/install (i.e. the update shield icon in the system tray is indicating this) using a batch or script. Thanks.

A: You could use WUApiLib:

UpdateSessionClass session = new UpdateSessionClass();
IUpdateSearcher search = session.CreateUpdateSearcher();
ISearchResult result = search.Search("IsInstalled=0 and IsPresent=0 and Type='Software'");
// Note: the original snippet used result.Updates.Count - 1 here, which
// under-reports by one and makes the loops below skip the last update.
int numberOfUpdates = result.Updates.Count;
// Log and the "updated" variable below come from the surrounding (omitted) class.
Log.Debug("Found " + numberOfUpdates.ToString() + " updates");
UpdateCollection updateCollection = new UpdateCollection();
for (int i = 0; i < numberOfUpdates; i++)
{
    IUpdate update = result.Updates[i];
    if (update.EulaAccepted == false)
    {
        update.AcceptEula();
    }
    updateCollection.Add(update);
}
if (numberOfUpdates > 0)
{
    UpdateCollection downloadCollection = new UpdateCollection();
    for (int i = 0; i < updateCollection.Count; i++)
    {
        downloadCollection.Add(updateCollection[i]);
    }
    UpdateDownloader downloader = session.CreateUpdateDownloader();
    downloader.Updates = downloadCollection;
    IDownloadResult dlResult = downloader.Download();
    if (dlResult.ResultCode == OperationResultCode.orcSucceeded)
    {
        for (int i = 0; i < downloadCollection.Count; i++)
        {
            Log.Debug(string.Format("Downloaded {0} with a result of {1}", downloadCollection[i].Title, dlResult.GetUpdateResult(i).ResultCode));
        }
        UpdateCollection installCollection = new UpdateCollection();
        for (int i = 0; i < updateCollection.Count; i++)
        {
            if (downloadCollection[i].IsDownloaded)
            {
                installCollection.Add(downloadCollection[i]);
            }
        }
        UpdateInstaller installer = session.CreateUpdateInstaller() as UpdateInstaller;
        installer.Updates = installCollection;
        IInstallationResult iresult = installer.Install();
        if (iresult.ResultCode == OperationResultCode.orcSucceeded)
        {
            updated = installCollection.Count.ToString() + " updates installed";
            for (int i = 0; i < installCollection.Count; i++)
            {
                Log.Debug(string.Format("Installed {0} with a result of {1}", installCollection[i].Title, iresult.GetUpdateResult(i).ResultCode));
            }
            if (iresult.RebootRequired == true)
            {
                ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem");
                foreach (ManagementObject shutdown in mcWin32.GetInstances())
                {
                    shutdown.Scope.Options.EnablePrivileges = true;
                    shutdown.InvokeMethod("Reboot", null);
                }
            }
        }
    } // these two closing braces were missing; the original snippet was cut off here
}

A: Windows SUS works very well for several machines on a network.

A: The "easiest" way to tell is to setup Windows Updates to occur nightly and download the updates if available, which then puts the update shield icon in the system tray. Just glance at the tray to see if the icon is present. You could also setup Windows to check nightly for the updates, then download and install them at a specified time.

A: In regards to what mdsindzeleta said - going about this programmatically probably isn't the best solution. I would use the features built into Windows XP to download and install updates. I'm assuming that Vista has similar features.

A: I believe Windows updates are downloaded using the BITS service. You could use Bitsadmin.exe found in the Windows Support Tools.
From the command line you can run bitsadmin.exe /list and you can see the status of BITS jobs (i.e. download progress, job name, job status).

A: In the end, Windows SUS wasn't an option, so I'm using the following in a batch file in conjunction with ActiveState ActivePerl (recommended):

perl -nle "print $_ if m/updates detected/i" < c:\Windows\WindowsUpdate.log

This might be crude or dirty, and might break in future, but it currently does what I need. Thanks for all the ideas.
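If you want that check to run daily, a small wrapper along these lines (file locations are examples) can be scheduled with Task Scheduler:

@echo off
rem Scan the update log for pending updates and keep any hits in a report file.
rem Paths are illustrative; the log location can vary per machine.
perl -nle "print $_ if m/updates detected/i" < C:\Windows\WindowsUpdate.log > C:\Temp\pending-updates.txt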
{ "language": "en", "url": "https://stackoverflow.com/questions/121585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: CouchDB backups and cloning the database We're looking at CouchDB for a CMS-ish application. What are some common patterns, best practices and workflow advice surrounding backing up our production database? I'm particularly interested in the process of cloning the database for use in development and testing. Is it sufficient to just copy the files on disk out from under a live running instance? Can you clone database data between two live running instances? Advice and description of the techniques you use will be greatly appreciated.

A: I'd like to second Paul's suggestion: Just cp your database files from under the live server if you can take the I/O-load hit. If you run a replicated copy anyway, you can safely copy from that too, without impacting your master's performance.

A: CouchDB also works very nicely with filesystem snapshots offered by modern filesystems like ZFS. Since the database file is always in a consistent state you can take a snapshot of the file at any time without weakening the integrity guarantees provided by CouchDB. This results in nearly no I/O overhead. In case you have e.g. accidentally deleted a document from the database, you can move the snapshot to another machine and extract the missing data there. You might even be able to replicate back to the production database, but I have never tried that. But always make sure you use exactly the same couchdb revisions when moving around database files. The on-disk format is still evolving in incompatible ways.

A: Another thing to be aware of is that you can copy files out from under a live database. Given that you may have a possibly large database, you could just copy it OOB from your test/production machine to another machine. Depending on the write load of the machines it may be advisable to trigger a replication after the copy to gather any writes that were in progress when the file was copied. But replication of a few records would still be quicker than replicating the entire database. For reference see: http://wiki.apache.org/couchdb/FilesystemBackups

A: CouchDB supports replication, so just replicate to another instance of CouchDB and backup from there, avoiding disturbing where you write changes to. https://docs.couchdb.org/en/latest/maintenance/backups.html You literally send a POST request to your CouchDB instance telling it where to replicate to, and it Works(tm). EDIT: You can just cp out the .couch files in the data directory from under the running database as long as you can accept the I/O hit.

A: CouchDB replication is horrible. I generally do tar, which is much better.

* *Stop the CouchDB service on the source host.
*tar.gz the data files.
*On my Ubuntu servers this is typically in /var/lib/couchdb (sometimes in a subdirectory based on the Couch version). If you aren't sure where these files are, you can find the path in your CouchDB config files, or often by doing a ps -A w to see the full command that started CouchDB. Make sure you get the subdirectories that start with . when you archive the files.
*Restart the couchdb service on the source host.
*scp the tar.gz file to the destination host and unpack it in a temporary location there.
*chown the files to the user and group that owns the files already in the database directory on the destination. This is likely couchdb:couchdb. This is important, as messing up the file permissions is the only way I've managed to mess up this process so far.
*Stop CouchDB on the destination host.
*cp the files into the destination directory.
Again on my hosts this has been /var/lib/couchdb.
*Double check the file permissions in their new home.
*Restart CouchDB on the destination host.

A: I do it via powershell and the PSCouchDB module with the command Export-CouchDBDatabase. This exports an entire database to a json file, which you can re-import via the import command (see the link). Example:

Export-CouchDBDatabase -Database test -Authorization "admin:password"

This exports a json file into the current directory: test_05-28-2021_17_01_00.json
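As a concrete sketch of the replication-style backup from the earlier answer (host and database names are placeholders):

# Ask the local CouchDB to push "mydb" to a backup host, then back up there.
curl -X POST http://localhost:5984/_replicate \
     -H "Content-Type: application/json" \
     -d '{"source": "mydb", "target": "http://backup-host:5984/mydb"}'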
{ "language": "en", "url": "https://stackoverflow.com/questions/121599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: How to reduce javax.faces.ViewState in JSF What is the best way to reduce the size of the viewstate hidden field in JSF? I have noticed that my view state is approximately 40k. This goes down to the client and back to the server on every request and response; especially on the way back to the server, this is a significant slowdown for the user. My environment: JSF 1.2, MyFaces, Tomcat, Tomahawk, RichFaces

A: Have you tried setting the state saving to server? This should only send an id to the client, and keep the full state on the server. Simply add the following to the file web.xml:

<context-param>
  <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
  <param-value>server</param-value>
</context-param>

A: If you are using MyFaces you can try this setting to compress the viewstate before sending it to the client.

<context-param>
  <param-name>org.apache.myfaces.COMPRESS_STATE_IN_CLIENT</param-name>
  <param-value>true</param-value>
</context-param>

A: One option is to completely save the view state on the client side, but you may face some problems such as not being able to serialize the object. You may want to try using a different compression algorithm/utility based on your requirements, but since the browser will already use gzip by default I am not sure how much you can gain.
{ "language": "en", "url": "https://stackoverflow.com/questions/121605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Databinding with Silverlight If I want to bind a collection to some form of listing control in Silverlight, is the only way to do it to make the underlying objects in the collection implement INotifyPropertyChanged and for the collection to be an ObservableCollection? If I was using some sort of third party object, for example one returned by a web service, would I have to wrap it or map it to something that implements INotifyPropertyChanged?

A: No, once you add a service reference to your silverlight project in Visual Studio, you can right click it and configure it such that it uses an ObservableCollection (which is the default setting anyway). Also, the Service Reference will by default ensure that the service's returned types already implement INotifyPropertyChanged.

A: You can bind a list to any IEnumerable collection or a simple control to any object property. The downside is that you get no change notification if items are added to the list or properties are changed. So it depends on your application whether this is a problem or not.

A: As Maurice has said, you can bind to any collection (even IEnumerable) and the binding will work, but you won't get change notifications. Note, however, that you don't need to use ObservableCollection, anything that implements INotifyCollectionChanged will work (although ObservableCollection is the simplest one). It is not necessary for objects within the collection to implement INotifyPropertyChanged, but if they do, then you'll get notifications on every individual change.

A: Just to be clear, you can OneTime bind to any object. If you want to OneWay or TwoWay bind, you will need an object that supports one of those interfaces. As mentioned, creating the Service Reference does this for you for objects delivered via webservice. However, if for some reason you still need to produce a Bindable object from a legacy class, you could implement a Converter that implements IValueConverter and then use it to "wrap" your legacy object in a Bindable one like this:

<UserControl>
  <UserControl.Resources>
    <local:FooToBindableFooConverter x:Key="BindableFooConverter"/>
  </UserControl.Resources>
  <TextBlock Text="{Binding FooInstance, Converter={StaticResource BindableFooConverter}}"/>
</UserControl>

Converters are very powerful and can solve lots of "I need X but I have Y" problems.
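For completeness, a minimal sketch of the pattern the answers describe: an item type that raises PropertyChanged, held in an ObservableCollection (the type and property names here are illustrative):

using System.Collections.ObjectModel;
using System.ComponentModel;

// Hypothetical item type: property-level notification for OneWay/TwoWay bindings.
public class Person : INotifyPropertyChanged
{
    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
    public event PropertyChangedEventHandler PropertyChanged;
}

// In the page code-behind: adds/removes on the collection update the bound list.
// peopleList.ItemsSource = new ObservableCollection<Person>();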
{ "language": "en", "url": "https://stackoverflow.com/questions/121606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to tint a UIButton? I'm in the process of adding custom buttons to my iPhone UI and want to make them have the glassy look from Apple's apps. I have a good default glass image, but I'd hate to have to have a separate image for each tint I want (red, green, blue, etc.). Is there a way to load a grayscale PNG and adjust it to the color I want? Better yet, is there a way to get Apple's glassy look without having to load custom images at all?

A: A little late, but in case somebody like me searches for this kind of information: use a UISegmentedControl with a single button, set it as momentary, and set its tintColor. This way, no need to prepare as many PNGs as you have colors, and the framework takes care of all the rendering for you. Example:

UISegmentedControl *cancelButton = [[UISegmentedControl alloc] initWithItems:[NSArray arrayWithObject:@"Cancel"]];
[cancelButton setSegmentedControlStyle:UISegmentedControlStyleBar];
[cancelButton setTintColor:[UIColor colorWithRed:0.8 green:0.3 blue:0.3 alpha:1.0]];
[cancelButton setMomentary:YES];
[cancelButton addTarget:self action:@selector(didTapCancel:) forControlEvents:UIControlEventValueChanged];
[self addSubview:cancelButton];
[cancelButton release];

A: There's a sample at the always excellent "Cocoa With Love" site: Drawing gloss gradients in CoreGraphics

A: There isn't a one-liner way that I know of, but you might be able to get the desired effect by taking the default grayscale image and compositing it with (i.e. drawing it on top of) a solid color. If you look through the Core Graphics documentation, you can see there are over a dozen different compositing methods (e.g., Color Burn), and some combination of them may produce the effect you want.

A: I don't know about iPhone UI, but you can use a single PNG to have colour variant graphics, i.e. both the logos (top left) on these pages: Products, Shops. Use this PNG: http://47degrees.com/images/logos/47_degrees.png I presume you might need the background colour set as a tint to be glassy.

A: I ended up creating my own custom button that closely matches the rendering used in the "Delete Contact" button in the contacts edit view. For those interested, it can be found on github at PFTintedButton

A: I have done this and it works pretty nicely: iOS: Applying a RGB filter to a greyscale PNG However, it is cheating: it is adding, so for (e.g.) the red component of each pixel, R_dest = R_src + R_tint. If you want fine-grained control, for example you might want to extract the greyscale value for each pixel by averaging the RGB components (with appropriate weighting), and then multiply this value by your tint, so the resulting colour would be greyscale(Sr,Sg,Sb) * {Tr,Tg,Tb}. This would require creating a chunk of memory, creating a bitmap context which draws into this chunk, drawing your image into this context, copying the whole memory chunk for safekeeping, processing -- taking data from the copy and modifying the original backing store -- then extracting an image from the original backing store and freeing up everything you allocated. Problem reconstituting UIImage from RGBA pixel byte data This library has done everything for you; it isn't the most efficient route, but it gets the job done. iirc he uses malloc when he should be using calloc.
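A rough Objective-C sketch of the compositing idea mentioned above, multiply-blending a tint over the grayscale glass image (illustrative only; it assumes baseImage and tintColor are already in scope, and it ignores alpha, so transparent PNGs would also need their mask re-applied):

UIGraphicsBeginImageContext(baseImage.size);
CGRect rect = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
[baseImage drawInRect:rect];                          // the grayscale glass art
[tintColor setFill];
UIRectFillUsingBlendMode(rect, kCGBlendModeMultiply); // tint darkens toward the fill color
UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();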
{ "language": "en", "url": "https://stackoverflow.com/questions/121615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Inner join vs Where Is there a difference in performance (in oracle) between Select * from Table1 T1 Inner Join Table2 T2 On T1.ID = T2.ID And Select * from Table1 T1, Table2 T2 Where T1.ID = T2.ID ? A: If the query optimizer is doing its job right, there should be no difference between those queries. They are just two ways to specify the same desired result. A: They should be exactly the same. However, as a coding practice, I would rather see the Join. It clearly articulates your intent, A: The performance should be identical, but I would suggest using the join-version due to improved clarity when it comes to outer joins. Also unintentional cartesian products can be avoided using the join-version. A third effect is an easier to read SQL with a simpler WHERE-condition. A: Don’t forget that in Oracle, provided the join key attributes are named the same in both tables, you can also write this as: select * from Table1 inner join Table2 using (ID); This also has the same query plan, of course. A: In a scenario where tables are in 3rd normal form, joins between tables shouldn't change. I.e. join CUSTOMERS and PAYMENTS should always remain the same. However, we should distinguish joins from filters. Joins are about relationships and filters are about partitioning a whole. Some authors, referring to the standard (i.e. Jim Melton; Alan R. Simon (1993). Understanding The New SQL: A Complete Guide. Morgan Kaufmann. pp. 11–12. ISBN 978-1-55860-245-8.), wrote about benefits to adopt JOIN syntax over comma-separated tables in FROM clause. I totally agree with this point of view. There are several ways to write SQL and achieve the same results but for many of those who do teamwork, source code legibility is an important aspect, and certainly separate how tables relate to each other from specific filters was a big leap in sense of clarifying source code. A: In PostgreSQL, there's definitely no difference - they both equate to the same query plan. I'm 99% sure that's also the case for Oracle. A: Using JOIN makes the code easier to read, since it's self-explanatory. There's no difference in speed(I have just tested it) and the execution plan is the same. A: No! The same execution plan, look at these two tables: CREATE TABLE table1 ( id INT, name VARCHAR(20) ); CREATE TABLE table2 ( id INT, name VARCHAR(20) ); The execution plan for the query using the inner join: -- with inner join EXPLAIN PLAN FOR SELECT * FROM table1 t1 INNER JOIN table2 t2 ON t1.id = t2.id; SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY); -- 0 select statement -- 1 hash join (access("T1"."ID"="T2"."ID")) -- 2 table access full table1 -- 3 table access full table2 And the execution plan for the query using a WHERE clause. -- with where clause EXPLAIN PLAN FOR SELECT * FROM table1 t1, table2 t2 WHERE t1.id = t2.id; SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY); -- 0 select statement -- 1 hash join (access("T1"."ID"="T2"."ID")) -- 2 table access full table1 -- 3 table access full table2 A: They're both inner joins that do the same thing, one simply uses the newer ANSI syntax. A: Functionally they are the same as has been said. I agree though that doing the join is better for describing exactly what you want to do. Plenty of times I've thought I knew how I wanted to query something until I started doing the joins and realized I wanted to do a different query than the original one in my head. A: [For a bonus point...] Using the JOIN syntax allows you to more easily comment out the join as its all included on one line. 
This can be useful if you are debugging a complex query. As everyone else says, they are functionally the same; however the JOIN is a clearer statement of intent. It therefore may help the query optimiser in current Oracle versions in certain cases (I have no idea if it does), it may help the query optimiser in future versions of Oracle (no one has any idea), or it may help if you change database supplier.

A: I don't know about Oracle, but I know that the old syntax is being deprecated in SQL Server and will disappear eventually. Before I used that old syntax in a new query I would check what Oracle plans to do with it. I prefer the newer syntax rather than the mixing of the join criteria with other needed where conditions. In the newer syntax it is much clearer what creates the join and what other conditions are being applied. Not really a big problem in a short query like this, but it gets much more confusing when you have a more complex query. Since people learn on the basic queries, I would tend to prefer people learn to use the join syntax before they need it in a complex query. And again I don't know Oracle specifically, but I know the SQL Server version of the old style left join is flawed even in SQL Server 2000 and gives inconsistent results (sometimes a left join, sometimes a cross join), so it should never be used. Hopefully Oracle doesn't suffer the same issue, but certainly left and right joins can be much harder to properly express in the old syntax. Plus it has been my experience (and of course this is strictly a personal opinion, you may have different experience) that developers who use the ANSI standard joins tend to have a better understanding of what a join is and what it means in terms of getting data out of the database. I believe that is because most of the people with good database understanding tend to write more complex queries, and those seem to me to be far easier to maintain using the ANSI standard than the old style.

A: They're logically identical, but in the earlier versions of Oracle that adopted ANSI syntax there were often bugs with it in more complex cases, so you'll sometimes encounter resistance from Oracle developers when using it.

A: It is true that, functionally, both queries should be processed the same way. However, experience has shown that if you are selecting from views that use the new join syntax, it is important to structure your queries using it as well. Oracle's optimizer can get confused if a view uses a "join" statement, but a query accessing the view uses the traditional method of joining in the "where" clause.

A: Although the identity of the two queries seems obvious, sometimes strange things happen. I have come across a query which has different execution plans when moving the join predicate from JOIN to WHERE in Oracle 10g (for WHERE the plan is better), but I can't reproduce this issue in simplified tables and data. I think it depends on my data and statistics. The optimizer is quite a complex module and sometimes it behaves magically. That's why we can't answer this question in general, because it depends on DB internals. But we should know that the answer has to be 'no differences'.

A: I had this conundrum today when inspecting one of our sp's timing out in production. I changed an inner join on a table built from an xml feed to a 'where' clause instead... average exec time is now 80ms over 1000 executions, whereas before the average exec was 2.2 seconds... the major difference in the execution plan is the disappearance of a key lookup...
The message being: you won't know until you've tested using both methods. Cheers.

A: They're both joins and WHERE clauses that do the same thing. Have a look at "In MySQL queries, why use join instead of where?"

A: As kiewik said, the execution plan is the same. The JOIN statement is just easier to read, making it easier not to forget the ON condition and end up with a cartesian product. These errors can be quite hard to detect in long queries using multiple joins of the type SELECT * FROM t1, t2 WHERE t1.id=t2.some_field. If you forget only one join condition, you get a very slow query returning too many records... really too many. Some people use a DISTINCT to patch the query, but it's still very slow to execute. That's exactly why using the JOIN statement is surely the best practice: better maintainability and better readability. Furthermore, if I remember correctly, JOIN is optimized with regard to memory usage.

A: I have an addition to that good answer: these are what is defined as SQL92 and SQL89 respectively; there is no performance difference between them, although you can omit the word INNER (using just JOIN is clear enough, and in the simplest query you save 5 keystrokes; now imagine how many strokes there are in big ones).
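To make the outer-join legibility point concrete, here is the same left join written both ways (Oracle's old-style (+) marker versus ANSI):

-- Old style: the join lives in WHERE, flagged with Oracle's (+) notation
SELECT *
FROM Table1 T1, Table2 T2
WHERE T1.ID = T2.ID (+);

-- ANSI style: the relationship is stated where the tables are introduced
SELECT *
FROM Table1 T1
LEFT JOIN Table2 T2 ON T1.ID = T2.ID;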
{ "language": "en", "url": "https://stackoverflow.com/questions/121631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "293" }
Q: MVC frameworks on hosted servers This may seem like a daft question, but I was wondering how to use MVC frameworks on hosted servers. I am playing about (albeit slowly) at home with RoR on Ubuntu and understand that if I want to do a live site then I need hosting with Rails and Ruby. However, what happens with the PHP frameworks? For example, I have seen in the past all about CakePHP, but lately I have just come across the Symfony project, and was thinking that if I had a server stack set up I could develop at home; how would I go about deploying anything live? How do I use the php command line on live servers, and how would I go about installing the framework on another server? This is all hypothetical at the moment as I am just thinking about it, but it is a question that I have thought of in the past. Regards

A: Most MVC frameworks* (Kohana, Zend Framework, for example) can be installed anywhere on the server. The front controller in the document root then specifies where it is, and loads it from there. So basically, you put the framework directory somewhere, and then have your front controller load it. ZF and Kohana in particular both have quickstart tutorials for getting this set up. As for using the PHP command line... it's just the "php" command. Run "php -v" to see what version you have.

(*MVC frameworks using the front controller pattern mean that your web server's document tree really only contains one thing: the front controller file. For example, if you installed your framework at /home/username/frameworks/Kohana_2.2, and your Apache docroot is /home/username/document_root, then you'd have index.php in there, and all it would contain is something like this:

<?php
require_once("/home/username/frameworks/Kohana_2.2/system/core/Bootstrap.php");

(There'd be other configuration stuff, but there you go.))

A: Not every framework needs things installed or configured via the command line or even the php.ini file. I believe CodeIgniter is like this - you can just put the path to the base CI directory in the path and off you go. I'm sure there's documentation about how to use symfony on a hosting solution. In fact, this document explains how to use symfony on shared hosting.

A: Yes, I have used CakePHP without using the cake command line stuff. As long as the hosting supports php4 or 5 you should be good to go.

A: I don't know much about RoR deployment other than I've heard it can be a pain to deploy, but I believe there has been some good work in this area. Obviously yes, you would need a host with RoR (personally I use Dreamhost). The PHP frameworks generally don't require actual installation; they are just a collection of PHP files that you dump in your website folder on the server. You usually have to configure an .htaccess file to route everything through a single index.php file, but you'll have to do that locally anyway. Again I have to admit I've never tried to use CakePHP on a server, but I have set up CodeIgniter before and it really is that simple. If you're looking at the MonoRail .NET MVC framework in a shared hosting environment, good luck! I've done it before and it took me ages, not to mention my own custom build of MonoRail. If you are interested I can try and dig up my notes, but you seem to be more keen on RoR/PHP.

A: MVC frameworks should be hosting independent. But of course they are coded in a programming language, so the hosting is important. Apart from that it is a matter of permissions. Do you need to modify config files?
Do you need access to certain directories? To connect to a database? Sure, like any program. But the framework does not need to be preinstalled. I did play with CakePHP a few months ago and it didn't seem to need any kind of special powers. MVC frameworks are not an extension of the programming language, so that seemed quite logical. I would like to give you more details but I would need to know the specific framework, as it depends on a case-by-case basis. Anyway, if you need to read or modify any file you can always try asking your hosting provider.

A: Some hosts include frameworks installed on their servers so you only have to upload your project. You can look in Google with something like "symfony hosting". Pablo
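The .htaccess routing mentioned in the answers typically looks something like this generic sketch (each framework documents its exact rules):

RewriteEngine On
# Send every request that is not a real file or directory through index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [QSA,L]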
{ "language": "en", "url": "https://stackoverflow.com/questions/121638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Regular expression to remove XML tags and their content I have the following string and I would like to remove <bpt *>*</bpt> and <ept *>*</ept> (notice the additional tag content inside them that also needs to be removed) without using an XML parser (overhead too large for tiny strings). The big <bpt i="1" x="1" type="bold"><b></bpt>black<ept i="1"></b></ept> <bpt i="2" x="2" type="ulined"><u></bpt>cat<ept i="2"></u></ept> sleeps. Any regex in VB.NET or C# will do.

A: If you just want to remove all the tags from the string, use this (C#):

try
{
    yourstring = Regex.Replace(yourstring, "(<[be]pt[^>]+>.+?</[be]pt>)", "");
}
catch (ArgumentException ex)
{
    // Syntax error in the regular expression
}

EDIT: I decided to add on to my solution with a better option. The previous option would not work if there were embedded tags. This new solution should strip all <**pt*> tags, embedded or not. In addition, this solution uses a back reference to the original [be] match so that the exact matching end tag is found. This solution also creates a reusable Regex object for improved performance so that each iteration does not have to recompile the Regex:

bool FoundMatch = false;
try
{
    Regex regex = new Regex(@"<([be])pt[^>]+>.+?</\1pt>");
    while (regex.IsMatch(yourstring))
    {
        yourstring = regex.Replace(yourstring, "");
    }
}
catch (ArgumentException ex)
{
    // Syntax error in the regular expression
}

ADDITIONAL NOTES: In the comments a user expressed worry that the '.' pattern matcher would be cpu intensive. While this is true in the case of a standalone greedy '.', the use of the non-greedy character '?' causes the regex engine to only look ahead until it finds the first match of the next character in the pattern, versus a greedy '.' which requires the engine to look ahead all the way to the end of the string. I use RegexBuddy as a regex development tool, and it includes a debugger which lets you see the relative performance of different regex patterns. It also auto-comments your regexes if desired, so I decided to include those comments here to explain the regex used above:

// <([be])pt[^>]+>.+?</\1pt>
//
// Match the character "<" literally «<»
// Match the regular expression below and capture its match into backreference number 1 «([be])»
//    Match a single character present in the list "be" «[be]»
// Match the characters "pt" literally «pt»
// Match any character that is not a ">" «[^>]+»
//    Between one and unlimited times, as many times as possible, giving back as needed (greedy) «+»
// Match the character ">" literally «>»
// Match any single character that is not a line break character «.+?»
//    Between one and unlimited times, as few times as possible, expanding as needed (lazy) «+?»
// Match the characters "</" literally «</»
// Match the same text as most recently matched by backreference number 1 «\1»
// Match the characters "pt>" literally «pt>»

A: I presume you want to drop the tag entirely?

(<bpt .*?>.*?</bpt>)|(<ept .*?>.*?</ept>)

The ? after the * makes it non-greedy, so it will try to match as few characters as possible. One problem you'll have is nested tags: in something like <bpt ...>stuff<bpt ...>more</bpt>stuff</bpt>, the match would not see the second closing tag because the first one already matched.

A: Why do you say the overhead is too large? Did you measure it? Or are you guessing? Using a regex instead of a proper parser is a shortcut that you may run afoul of when someone comes along with something like <bpt foo="bar>">

A: Does the .NET regex engine support negative lookaheads?
If yes, then you can use

(<([eb])pt[^>]+>((?!</\2pt>).)+</\2pt>)

which turns the string above into "The big black cat sleeps." if you remove all matches. However, keep in mind that it will not work if you have nested bpt/ept elements. You might also want to add \s in some places to allow for extra whitespace in closing elements etc.

A: If you're going to use a regex to remove XML elements, you'd better be sure that your input XML doesn't use elements from different namespaces, or contain CDATA sections whose content you don't want to modify. The proper (i.e. both performant and correct) way to do this is with XSLT. An XSLT transform that copies everything except a specific element to the output is a trivial extension of the identity transform. Once the transform is compiled it will execute extremely quickly. And it won't contain any hidden defects.

A: Is there any possible way to get a global solution of the regex pattern for XML-type text? That way I'll get rid of the replace function and shall use the regex. The trouble is to analyze whether the < and > come in order or not. Also replacing reserved chars such as ' and & and so on. Here is the code:

'handling special chars functions
Friend Function ReplaceSpecChars(ByVal str As String) As String
    Dim arrLessThan As New Collection
    Dim arrGreaterThan As New Collection
    If Not IsDBNull(str) Then
        str = CStr(str)
        If Len(str) > 0 Then
            str = Replace(str, "&", "&amp;")
            str = Replace(str, "'", "&apos;")
            str = Replace(str, """", "&quot;")
            arrLessThan = FindLocationOfChar("<", str)
            arrGreaterThan = FindLocationOfChar(">", str)
            str = ChangeGreaterLess(arrLessThan, arrGreaterThan, str)
            str = Replace(str, Chr(13), "chr(13)")
            str = Replace(str, Chr(10), "chr(10)")
        End If
        Return str
    Else
        Return ""
    End If
End Function

Friend Function ChangeGreaterLess(ByVal lh As Collection, ByVal gr As Collection, ByVal str As String) As String
    For i As Integer = 0 To lh.Count
        If CInt(lh.Item(i)) > CInt(gr.Item(i)) Then
            str = Replace(str, "<", "&lt;") ' ///////// problems ////
        End If
    Next
    str = Replace(str, ">", "&gt;")
End Function

Friend Function FindLocationOfChar(ByVal chr As Char, ByVal str As String) As Collection
    Dim arr As New Collection
    For i As Integer = 1 To str.Length() - 1
        If str.ToCharArray(i, 1) = chr Then
            arr.Add(i)
        End If
    Next
    Return arr
End Function

I get trouble at the problem mark. That's a standard XML with different tags I want to analyse.

A: Have you measured this? I have run into performance issues using .NET's regex engine, but by contrast have parsed XML files of around 40GB without issue using the XML parser (you will need to use XmlReader for larger strings, however). Please post an actual code sample and mention your performance requirements: I doubt the Regex class is the best solution here if performance matters.
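To make the XSLT suggestion concrete, here is a minimal identity transform that copies everything but drops bpt/ept elements along with their content (a sketch; adjust for namespaces if your input uses them):

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Copy everything by default -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- Drop bpt/ept elements and everything inside them -->
  <xsl:template match="bpt|ept"/>
</xsl:stylesheet>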
{ "language": "en", "url": "https://stackoverflow.com/questions/121656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Implementing User-Controlled Style Changes in ASP.NET Ok, so we have clients and those clients get to customize their web facing page. One option we are giving them is to be able to change the color of a graphic (it's like a framish-looking bar) using one of those hex wheels or whatever. So, I've thought about it, and I don't know where to start. I am sending comps out this week to my xhtml guy and I want to have the implementation done at least in my mind before I send things out. Something about System.Drawing sounds about right, but I've never worked with that before and it sounds hella complicated. Does anyone have an idea? UPDATE: The color of an image will be changing. So if I want image 1 to be green, and image 2 to be blue, I go into my admin screen and enter those hex values (probably will give them an interface for it) and then when someone else looks at their page they will see the changes they made. Kind of like customizing a facebook or myspace page (OMFGz soooo Werb 2.0)

A: What exactly will be changing? Depending on what's changing you may be able to overlay a transparent png on top of an html background color. Just change the background color and the logo color will change. Of course this limits what you can change, but you'd be surprised how much you can get away with. And yes, the alternative is to paint the image on the web server. Here's a post on it from hanselman.

A: EDIT (since you changed the title): If you have a small number of colours on the hex-wheel thing then you could simply use JavaScript to change the image source from some pre-made graphics. If you have a large or changeable set of colours for the user to choose from then I'd use an AJAX call to generate the graphic using the relevant ASP functions you'll find online or in a book. We'd need to see the frame or graphic that you're talking about. Might be doable client-side with CSS and JavaScript, or might need to be server-side graphic generation using PHP or ASP etc.

A: You could maybe search for this example. But I'm not sure.

A: The standard way to obtain something like this is linking to different CSS files (or classes) depending on the user choice (you probably want to store the user choice and retrieve it whenever the same user logs in, but that's out of scope here). If you're using ASP.NET you could use Themes as an optimized and centralized way to control presentation for your web application. You can have stylesheets in your themes and easily programmatically switch between themes, automatically applying associated stylesheets. To see how to define ASP.NET page themes have a look at this link: http://msdn.microsoft.com/en-us/library/ms247256.aspx To see how to programmatically switch between themes follow this other link: http://msdn.microsoft.com/en-us/library/0yy5hxdk(VS.80).aspx

A: I'm sort of intuiting that you'll have a black on white bitmap that you use as the base image. The client can then select any other color combination. This may not be exactly your situation, but it should get us started. (The code below is VB -- it's what I know, but converting to C# should be trivial for you.)

Imports System.Drawing

Private Function createImage(ByVal srcPath As String, ByVal fg As Color, ByVal bg As Color) As Bitmap
    Dim img As New Bitmap(srcPath)
    ' Note: the loop bounds need the -1 (the original omitted it, which makes
    ' GetPixel go out of range), and ARGB values are compared because
    ' Color's = operator also considers the named-color state.
    For x As Int16 = 0 To img.Width - 1
        For y As Int16 = 0 To img.Height - 1
            If img.GetPixel(x, y).ToArgb() = Color.Black.ToArgb() Then
                img.SetPixel(x, y, fg)
            Else
                img.SetPixel(x, y, bg)
            End If
        Next
    Next
    Return img
End Function

And then you can do whatever with the image...
A: I've done stuff like this in PHP before, and I used the ImageMagick and GD libraries. I'm not sure if ASP and C# can plug into those using the .NET framework, but it's a start.

A: System.Drawing is GDI+ based. Only useful if you're drawing bitmaps in the world of the web.

A: Your solution will depend on how complex the graphics are. If you have simple graphics (ones you can make with MS Paint), then you can use the System.Drawing namespace to re-create the image fairly reliably. If you have complex graphics, like ones made in photoshop or Paint.NET with multiple layers, you may be better off allowing the client a choice of only a few colors (4-8-16) and pre-making the graphics to match the selections.
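For the transparent-PNG overlay from the first answer, the markup can stay this simple; the hex value is whatever the client saved in your admin screen (the file name and color below are examples):

<!-- The bar graphic is mostly transparent; the chosen color shows through. -->
<div style="background-color: #3b9c46;">
    <img src="bar-overlay.png" alt="" />
</div>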
{ "language": "en", "url": "https://stackoverflow.com/questions/121662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to pipe stdout from a groovy method into a string How does one invoke a groovy method that prints to stdout, appending the output to a string? A: This demonstrates how you can do this. Paste this into a Groovy script file and run it. You will see the first call functions as normal. The second call produces no results. Finally, the last step in the main prints the results of the second call that were redirected to a ByteArrayOutputStream. Have fun! void doSomething() { println "i did something" } println "normal call\n---------------" doSomething() println "" def buf = new ByteArrayOutputStream() def newOut = new PrintStream(buf) def saveOut = System.out println "redirected call\n---------------" System.out = newOut doSomething() System.out = saveOut println "" println "results of call\n---------------" println buf.toString() A: I'm not sure what you mean by "appending the output to a string", but you can print to standard out using "print" or "println".
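Building on that demonstration, here is a sketch of a reusable helper that wraps the redirect-and-restore dance, so any closure's printed output comes back as a String (and System.out is restored even if the closure throws):

String captureStdout(Closure body) {
    def buf = new ByteArrayOutputStream()
    def saveOut = System.out
    System.out = new PrintStream(buf)
    try {
        body()
    } finally {
        System.out = saveOut // always restore the real stdout
    }
    return buf.toString()
}

def output = captureStdout { println "hello from the closure" }
assert output.trim() == "hello from the closure"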
{ "language": "en", "url": "https://stackoverflow.com/questions/121665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Multi-Core and Concurrency - Languages, Libraries and Development Techniques The CPU architecture landscape has changed, multiple cores is a trend that will change how we have to develop software. I've done multi-threaded development in C, C++ and Java, I've done multi-process development using various IPC mechanisms. Traditional approaches of using threads doesn't seem to make it easy, for the developer, to utilize hardware that supports a high degree of concurrency. What languages, libraries and development techniques are you aware of that help alleviate the traditional challenges of creating concurrent applications? I'm obviously thinking of issues like deadlocks and race conditions. Design techniques, libraries, tools, etc. are also interesting that help actually take advantage of and ensure that the available resources are being utilized - just writing a safe, robust threaded application doesn't ensure that it's using all the available cores. What I've seen so far is: * *Erlang: process based, message passing IPC, the 'actor's model of concurrency *Dramatis: actors model library for Ruby and Python *Scala: functional programming language for the JVM with some added concurrency support *Clojure: functional programming language for the JVM with an actors library *Termite: a port of Erlang's process approach and message passing to Scheme What else do you know about, what has worked for you and what do you think is interesting to watch? A: You mentioned Java, but you only mention threads. Have you looked at Java's concurrent library? It comes bundled with Java 5 and above. It's a very nice library containing ThreadPools, CopyOnWriteCollections to name a very few. Check out the documentation at the Java Tutorial. Or if you prefer, the Java docs. A: Some Scala based stuff: * *PiLib: A Hosted Language for Pi-Calculus Style Concurrency *Event-Based Programming without Inversion of Control *Actors that Unify Threads and Events *Scala Multicast Actors: Architecture and Implementation *Implementing Joins using Extensible Pattern Matching *Communicating Scala Objects (Revised) A: I've used processing for Python. It mimicks the API of the threading module and is thus quite easy to use. If you happen to use map/imap or a generator/list comprehension, converting your code to use processing is straightforward: def do_something(x): return x**(x*x) results = [do_something(n) for n in range(10000)] can be parallelized with import processing pool = processing.Pool(processing.cpuCount()) results = pool.map(do_something, range(10000)) which will use however many processors you have to calculate the results. There are also lazy (Pool.imap) and asynchronous variants (Pool.map_async). There is a queue class which implements Queue.Queue, and workers that are similar to threads. Gotchas processing is based on fork(), which has to be emulated on Windows. Objects are transferred via pickle/unpickle, so you have to make sure that this works. Forking a process that has acquired resources already might not be what you want (think database connections), but in general it works. It works so well that it has been added to Python 2.6 on the fast track (cf. PEP-317). A: Intel's Threading Building Blocks for C++ looks very interesting to me. It offers a much higher level of abstraction than raw threads. O'Reilly has a very nice book if you like dead tree documentation. See, also, Any experiences with Intel’s Threading Building Blocks?. 
A: I would say: Models: threads + shared state, actors + message passing, transactional memory, map/reduce? Languages: Erlang, Io, Scala, Clojure, Reia Libraries: Retlang, Jetlang, Kilim, Cilk++, fork/join, MPI, Kamaelia, Terracotta I maintain a concurrency link blog about stuff like this (Erlang, Scala, Java threading, actor model, etc) and put up a couple links a day: http://concurrency.tumblr.com A: The question What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow? has already been asked. I gave the following answer there too. Kamaelia is a python framework for building applications with lots of communicating processes. Kamaelia - Concurrency made useful, fun In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :) What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :) Here's a video from Pycon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands on demonstration of Kamaelia. Easy Concurrency with Kamaelia - Part 1 (59:08) Easy Concurrency with Kamaelia - Part 2 (18:15) A: I've been doing concurrent programming in Ada for nearly 20 years now. The language itself (not some tacked on library) supports threading ("tasks"), multiple scheduling models, and multiple synchronization paradigms. You can even build your own synchronization schemes using the built in primitives. You can think of Ada's rendezvous as sort of a procedural-oriented synchronization facility, while protected objects are more object-oriented. Rendezvous are similar to the old CS-concept of monitors, but much more powerful. Protected objects are special types with synchronization primitives that allow you to build things exactly like OS locks, semaphores, events, etc. However, it is powerful enough that you can also invent and create your own kinds of sync objects, depending on your exact needs. A: I am keeping a close eye on Parallel Extensions for .NET and Parallel LINQ. A: I know of Reia - a language that is based on Erlang but looks more like Python/Ruby. A: This question is closely related to, if not a duplicate of, What parallel programming model do you recommend today to take advantage of the manycore processors of tomorrow? A: Java has an actors library too you know. And did you know that Java is a functional language? ;) A: OpenMP. It handles threads for you so you only worry about which parts of your C++ application you want to run in parallel. eg. #pragma omp parallel for for (int i=0; i < SIZE; i++) { // do something with an element } the above code will run the for loop on as many threads as you've told the openmp runtime to use, so if SIZE is 100, and you have a quad-core box, that for loop will run 25 items on each core. There are a few other parallel extensions for various languages, but the ones I'm most interested in are the ones that run on your graphics card. That's real parallel processing :) (examples: GPU++ and libSh) A: C++0x will provide std::lock functions for locking more than one mutex together. This will help alleviate deadlock due to out-of-order locking. 
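As a taste of what that looks like, a minimal sketch using the names that made it into the standard (two mutexes guarding two pieces of shared state):

#include <mutex>

std::mutex m1, m2;

void update_both()
{
    // std::lock acquires both mutexes without deadlocking, no matter
    // what order other threads try to lock them in
    std::lock(m1, m2);
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
    // ... touch the state guarded by both mutexes ...
}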
Also, the C++0x thread library will have promises, futures and packaged tasks, which allow a thread to wait for the result of an operation performed on another thread without any user-level locks.
A: I'd suggest two paradigm shifts: Software Transactional Memory You may want to take a look at the concept of Software Transactional Memory (STM). The idea is to use optimistic concurrency: any operation that runs in parallel to others tries to complete its job in an isolated transaction; if at some point another transaction has been committed that invalidates data on which this transaction is working, the transaction's work is thrown away and the transaction runs again. I think the first widely known implementation of the idea (if not the proof-of-concept and first one) is the one in Haskell: Papers and presentations about transactional memory in Haskell. Many other implementations are listed on Wikipedia's STM article. Event loops and promises Another very different way of dealing with concurrency is implemented in the E programming language (http://en.wikipedia.org/wiki/E_(programming_language)). Note that its way of dealing with concurrency, as well as other parts of the language design, is heavily based on the Actor model.
A: multiprocessing is a python library that simplifies multi-core programming, as mentioned in another answer. Programs written with python's multiprocessing can easily be modified to ship work to the cloud instead of to local cores. piCloud takes advantage of that to provide large, on-demand processing power on the cloud: you just need to modify 2 lines of your code. So, here is the take-away: when selecting a library for multi-core, one may want to ask whether a cloud approach would also make sense.
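For reference, the earlier processing example looks almost identical with the standard-library multiprocessing module -- a sketch (the __main__ guard matters on Windows, where fork() is emulated):

import multiprocessing

def do_something(x):
    return x * x   # stand-in for real work

if __name__ == '__main__':
    # a worker pool sized to the machine's core count
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    results = pool.map(do_something, range(10000))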
{ "language": "en", "url": "https://stackoverflow.com/questions/121674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Starting a process in C# with username & password throws "Access is Denied" exception Inside a .NET 3.5 web app running impersonation I am trying to execute a process via:
var process = new Process
{
    StartInfo =
    {
        CreateNoWindow = true,
        FileName = "someFileName",
        Domain = "someDomain",
        UserName = "someUserName",
        Password = securePassword,
        UseShellExecute = false
    }
};
process.Start();
-Changing the trust mode to full in web.config did not fix it.
-Note the var securePassword is a SecureString set up earlier in the code.
This throws an exception with 'Access is Denied' as its message. If I remove the username and password information, the exception goes away, but the process starts as aspnet_wp instead of the user I need it to. I've seen this issue in multiple forums and never seen a solution provided. Any ideas?
A: You can use ProcessStartInfo which allows you to specify credentials. The trick is that the password is a SecureString, so you have to build it up one character at a time. The code might look something like:
Dim startInfo As New ProcessStartInfo(programName)
With startInfo
    .Domain = "test.local"
    .WorkingDirectory = My.Application.Info.DirectoryPath
    .UserName = "testuser"
    Dim pwd As New Security.SecureString
    For Each c As Char In "password"
        pwd.AppendChar(c)
    Next
    .Password = pwd
    'If you provide a value for the Password property, the UseShellExecute property must be false, or an InvalidOperationException will be thrown when the Process.Start(ProcessStartInfo) method is called.
    .UseShellExecute = False
    .WindowStyle = ProcessWindowStyle.Hidden
End With
A: Not sure if this is it, but I had a related problem and the answer was that the account didn't have permission to impersonate on the machine. This can be changed by adding the account to the policy "Impersonate a client after authentication" using the local policy manager on the machine.
A: I went a different way and put the whole application in its own app pool running as the user we were originally impersonating. Now, when asp.net spawns a new process, it spawns under the context of the user instead of aspnet_wp. Not the exact solution to the problem I posted, but it worked for our situation.
A: I ran into the same problem that you did on a project. There should be a way to spawn a process out of your web app with given credentials, but in practice it's a kludge at best. What I wound up finally doing was just having the app push information to an MSMQ and having a Windows service that popped items off the queue and serviced the requests. Even when your application is impersonating, it still wants to run under the aspnet user account.
A: Check the Code Access Security level, as Process requires Full Trust. Your web application may be running in a partial trust setting. From the Process MSDN page: Permissions
 * LinkDemand for full trust for the immediate caller. This class cannot be used by partially trusted code.
 * InheritanceDemand for full trust for inheritors. This class cannot be inherited by partially trusted code.
A: I wanted to mention that I have tried the code at this site including the updated code mentioned in the comments. This code runs the process as the impersonated identity (which is really all I need), but the redirecting of the standard error fails -- so this link could be useful to those not concerned with dealing with the stderr.
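For anyone who wants the ProcessStartInfo approach from the first answer in C# rather than VB, a minimal sketch (the file name, domain and user name are placeholders, and never hard-code a real password like this):

using System.Diagnostics;
using System.Security;

var startInfo = new ProcessStartInfo("someFileName")
{
    Domain = "someDomain",
    UserName = "someUserName",
    UseShellExecute = false,   // required whenever credentials are supplied
    CreateNoWindow = true
};
var pwd = new SecureString();
foreach (char c in "password")   // build the SecureString character by character
    pwd.AppendChar(c);
startInfo.Password = pwd;
Process.Start(startInfo);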
{ "language": "en", "url": "https://stackoverflow.com/questions/121676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the difference between 'int?' and 'int' in C#? I am 90% sure I saw this answer on stackoverflow before, in fact I had never seen the "int?" syntax before seeing it here, but no matter how I search I can't find the previous post, and it's driving me crazy. It's possible that I've been eating the funny mushrooms by accident, but if I'm not, can someone please point out the previous post if they can find it or re-explain it? My stackoverflow search-fu is apparently too low....
A: int belongs to System.ValueType and cannot have null as a value. When dealing with databases or other types where the elements can have a null value, it might be useful to check if the element is null. That is when int? comes into play. int? is a nullable type which can have values ranging from -2147483648 to 2147483647, plus null. Reference: https://msdn.microsoft.com/en-us/library/1t3y8s4s.aspx
A: int? is Nullable<int>. MSDN: Using Nullable Types (C# Programming Guide)
A: int? is the same thing as Nullable<int>. It allows you to have "null" values in your int.
A: int? is shorthand for Nullable<int>. This may be the post you were looking for.
A: The symbol ? after the int means that it can be nullable. The ? symbol is usually used in situations whereby the variable can accept a null or an integer, or alternatively, return an integer or null. Hope the context of usage helps. In this way you are not restricted to solely dealing with integers.
A: int cannot accept null, but with int? you can store null in an int:
int i = null;  // not accepted - won't compile
int? i = null; // works
This is often used for things like paging in MVC (e.g., with PagedList).
A: You can use it when you expect a null value in your integer, especially when casting. For example, with x = (int)y; if y is null then you will get an error; you have to use x = (int?)y; instead.
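To round this out, a small sketch of what the Nullable<int> wrapper gives you (standard members; the variable names are arbitrary):

int? count = null;

if (count.HasValue)
    Console.WriteLine(count.Value);    // only safe when HasValue is true

int total = count ?? 0;                // null-coalescing: fall back to 0
int other = count.GetValueOrDefault(); // same idea, returns 0 here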
{ "language": "en", "url": "https://stackoverflow.com/questions/121680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: How can I ensure fopen() won't block when not connected to the network? My win32 C++ application frequently checks a file that normally resides on a network share [in our corporate network]. But if the computer isn't connected to a network, the application freezes for several minutes and the user usually has to end the process. How can I check if the file is accessible before I open it? Checking if any network connection exists may not be good enough. The reason the users disconnect is to use a test network where the file does not exist.
A: Put the file access into a separate thread?
A: I think a thread is your best option. There does not appear to be a way to invoke the CreateFile API asynchronously. Once a file handle is open you can do asynchronous I/O on it, but the act of opening it is still synchronous. You will have to manage the synchronization with the thread yourself, since only your app knows when it needs the result of the fopen, and it may still end up blocking at that point.
A: It seems to me this method in a separate thread would be your best bet: File.Exists
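To make the threading suggestion concrete, a rough sketch in modern C++ (the question predates std::thread, where CreateThread or _beginthreadex would play the same role; the path is a placeholder):

#include <cstdio>
#include <chrono>
#include <future>
#include <thread>

std::FILE* open_with_timeout(const char* path, std::chrono::seconds timeout)
{
    // run the potentially blocking fopen on a detached worker thread
    std::promise<std::FILE*> p;
    std::future<std::FILE*> result = p.get_future();
    std::thread([path](std::promise<std::FILE*> pr) {
        pr.set_value(std::fopen(path, "rb"));
    }, std::move(p)).detach();

    if (result.wait_for(timeout) == std::future_status::ready)
        return result.get();
    // timed out: treat the file as unreachable; note the worker may
    // still open the handle later, which this sketch simply leaks
    return nullptr;
}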
{ "language": "en", "url": "https://stackoverflow.com/questions/121685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What Exception should be thrown when an ADO.NET query cannot retrieve the requested data? In an attempt to add some parameter validation and correct usage semantics to our application, we are trying to add correct exception handling to our .NET applications. My question is this: When throwing exceptions in ADO.NET, if a particular query returns no data or the data could not be found, what type of exception should I use? Pseudocode: (read, don't scrutinize the semantics of the code, I know it won't compile)
public DataSet GetData(int identifier)
{
    dataAdapter.Command.Text = "Select * from table1 Where ident = " + identifier.ToString();
    DataSet ds = dataAdapter.Fill(ds);
    if (ds.table1.Rows.Count == 0)
        throw new Exception("Data not found");
    return ds;
}
A: The MSDN guidelines state:
 * 
*Consider throwing existing exceptions residing in the System namespaces instead of creating custom exception types.
*Do create and throw custom exceptions if you have an error condition that can be programmatically handled in a different way than any other existing exceptions. Otherwise, throw one of the existing exceptions.
*Do not create and throw new exceptions just to have your team's exception.
There is no hard and fast rule: but if you have a scenario for treating this exception differently, consider creating a custom exception type, such as DataNotFoundException as suggested by Johan Buret. Otherwise you might consider throwing one of the existing exception types, such as System.Data.DataException or possibly even System.Collections.Generic.KeyNotFoundException.
A: You really should define your own exception: DataNotFoundException. You should not use the basic class Exception, since when you catch it in calling code, you will write something like
try
{
    int i = 0;
    GetData(i);
}
catch(Exception e) //will catch many many exceptions
{
    //Handle gracefully the "Data not Found" case;
    //Whatever else happens will get caught and ignored
}
whereas catching only your DataNotFoundException will get only the case you really want to handle:
try
{
    int i = 0;
    GetData(i);
}
catch(DataNotFoundException e)
{
    //Handle gracefully the "Data not Found" case;
}
//Any other exception will bubble up
There is a class aptly named SqlException for when there is trouble with the SQL engine, but it's better not to overload it with your business logic.
A: As far as ADO.NET is concerned, a query that returns zero rows is not an error. If your application wishes to treat such a query as an error, you should create your own exception class by inheriting from Exception.
public class myException : Exception
{
    public myException(string s) : base(s)
    {
    }
}

public void GetData(int identifier)
{
    dataAdapter.Command.Text = "Select * from table1 Where ident = " + identifier.ToString();
    DataSet ds = dataAdapter.Fill(ds);
    if (ds.table1.Rows.Count == 0)
        throw new myException("Data not found");
}
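Building on that, a fuller DataNotFoundException sketch with the conventional constructor trio (serialization support omitted for brevity; the class name and message are just illustrative):

using System;

public class DataNotFoundException : Exception
{
    public DataNotFoundException() { }
    public DataNotFoundException(string message) : base(message) { }
    public DataNotFoundException(string message, Exception inner)
        : base(message, inner) { }
}

// usage at the call site:
if (ds.Tables["table1"].Rows.Count == 0)
    throw new DataNotFoundException("No rows found for identifier " + identifier);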
{ "language": "en", "url": "https://stackoverflow.com/questions/121700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to use javascript with an asp.net dropdownlist control? I don't currently use ajax.net, though I would be open to it if it is the only solution. I have an auto-complete control on screen that I am using to populate an asp.net dropdownlist with values through javascript (jQuery). I have had to use EnableEventValidation="false" to allow this. After I add my options to the select and the form is posted back, I would like to be able to get all the values for the option elements I have added to the asp.net dropdownlist through javascript. Is there a good way to accomplish this?
A: If a DropDownList leaves the server with no options, it's recreated server-side with no options (from the viewstate). You could add the options to a hidden html control as a delimited string in your javascript, as well as to the select list. Then iterate that into the control server-side on post-back. Otherwise you could ajax them to the server and re-render the DropDownList for each addition.
A: You can get the selected value directly from the form like so:
string fooBar = Request.Form[SomeDropDown.UniqueID];
This will return the correct value no matter what you do to the drop down options. I use javascript to change the quantity dropdown for a product based on size selection, to reflect product availability.
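To make the hidden-field suggestion above concrete, a sketch (the IDs and the '|' delimiter are made up; jQuery on the client, plain Request.Form on the server):

<select id="ddlItems" name="ddlItems"></select>
<input type="hidden" id="hidItems" name="hidItems" />

// client side: add the option, then mirror all option values into the hidden field
$('#ddlItems').append($('<option/>').val('42').text('New item'));
var vals = $('#ddlItems option').map(function () { return $(this).val(); }).get();
$('#hidItems').val(vals.join('|'));

Then, server side on post-back:
string[] optionValues = Request.Form["hidItems"].Split('|');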
{ "language": "en", "url": "https://stackoverflow.com/questions/121707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Preserve file creation date in Subversion My boss asked me to set up a Subversion server for him to use so that he can share all his documents across different machines in sync and still be able to access them when there is no Internet connection. I have set this up for him, but now he's requesting that the 'create date' file attribute be preserved. I explained that since he downloaded all the files, that is their create date, but he insists I find a manner to preserve this, as it is affecting the desktop search agent he uses. Is there any way to set this attribute to be preserved via Subversion, or do I have to write a script to get the date of each file and have him run 'touch' after each initial check out? Note that the set of documents that were added to the SVN repository span back several years, and he wants these dates preserved across all checkouts. So the date of the last change that Subversion has could potentially be off by years from what he wants.
A: Using TortoiseSVN
 * 
*Right-click and select TortoiseSVN -> Settings
*Select Set file dates to the "last commit time"
I think that will work for you.
A: Sorry - misunderstood the question first time. One option might be to use the svnadmin dump and load commands.
 * 
*Dump the repository using svnadmin dump.
*Write a script to trawl through the output, updating the SVN create date to the file create date on your source folder.
*Load the updated dump file back in using svnadmin load.
*Use the Set file dates to the "last commit time" setting I suggested earlier.
I'm not certain this is possible, but off the top of my head I think it would be. I seem to remember a script which did something like this when I transferred from SourceSafe to Subversion - I'll see if I can find it.
EDIT: Yeah, you might be able to crib some code from here (http://www.pumacode.org/projects/vss2svn)
A: Check out the property "svn:date" (note that it expects a full ISO 8601 timestamp):
svn propset svn:date --revprop -r HEAD "2007-04-22T12:00:00.000000Z"
This requires the pre-revprop-change hook to be set up in advance.
A: The last time I checked there was no way in Subversion to preserve the original file's timestamp.
A: You could use a Subversion file property to store the modification date, but you'd have to write your own script to update the property when checking in and to set the modification date when checking out.
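A rough sketch of the check-out half of such a script in Python (this assumes a custom file property -- here called user:created, a made-up name -- was stored at check-in; note os.utime sets access/modification times, while true NTFS creation time would need win32 APIs such as win32file.SetFileTime):

import os
import subprocess
import time

def restore_date(path):
    # read the stored creation date back from the custom property
    out = subprocess.check_output(
        ['svn', 'propget', 'user:created', path]).decode().strip()
    stamp = time.mktime(time.strptime(out, '%Y-%m-%d'))
    os.utime(path, (stamp, stamp))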
{ "language": "en", "url": "https://stackoverflow.com/questions/121713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Dynamically added JTable not displaying Java newbie here. I have a JFrame that I added to my netbeans project, and I've added the following method to it, which creates a JTable. Problem is, for some reason when I call this method, the JTable isn't displayed. Any suggestions?
public void showFromVectors(Vector colNames, Vector data)
{
    jt = new javax.swing.JTable(data, colNames);
    sp = new javax.swing.JScrollPane(jt);
    //NB: "this" refers to my class DBGridForm, which extends JFrame
    this.add(sp, java.awt.BorderLayout.CENTER);
    this.setSize(640, 480);
}
The method is called in the following context:
DBGridForm gf = new DBGridForm(); //DBGridForm extends JFrame
DBReader.outMatchesTable(gf);
gf.setVisible(true);
... where DBReader.outMatchesTable() is defined as
static public void outMatchesTable(DBGridForm gf)
{
    DBReader ddb = new DBReader();
    ddb.readMatchesTable(null);
    gf.showFromVectors(ddb.lastRsltColNames, ddb.lastRsltData);
}
My guess is I'm overlooking something, either about the swing classes I'm using, or about Java. Any ideas?
A: "this" in your context is unclear. Is it inside an applet? A JFrame? You may be having a layout issue; make sure you've called setLayout on your class with a new BorderLayout. In a swing application, you'd want to use getContentPane().add() instead of a raw add(), depending on the version. Java tutorial on adding top-level content: http://java.sun.com/docs/books/tutorial/uiswing/components/toplevel.html
A: If you are not running on the event thread, it could be a problem--I've seen that cause stuff not to display. If this code is called in response to an AWT event (mouse click, button press, ...) then that's not the problem, but if it's still the same thread that started your app, or this code is running off a timer, it could very well be.
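One more thing worth checking: if showFromVectors runs after the frame is already visible, Swing won't lay the new component out until asked. A sketch of the same method with that handled (using the content pane explicitly, which older JDKs require):

public void showFromVectors(Vector colNames, Vector data)
{
    jt = new javax.swing.JTable(data, colNames);
    sp = new javax.swing.JScrollPane(jt);
    getContentPane().add(sp, java.awt.BorderLayout.CENTER);
    setSize(640, 480);
    validate(); // force a fresh layout pass so the new scroll pane appears
    repaint();
}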
{ "language": "en", "url": "https://stackoverflow.com/questions/121715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting value from a cell from a gridview on RowDataBound event
string percentage = e.Row.Cells[7].Text;
I am trying to do some dynamic stuff with my GridView, so I have wired up some code to the RowDataBound event. I am trying to get the value from a particular cell, which is a TemplateField. But the code above always seems to be returning an empty string. Any ideas? To clarify, here is a bit of the offending cell:
<asp:TemplateField HeaderText="# Percentage click throughs">
<ItemTemplate>
<%# AddPercentClickThroughs((int)Eval("EmailSummary.pLinksClicked"), (int)Eval("NumberOfSends")) %>
</ItemTemplate>
</asp:TemplateField>
On a related note, does anyone know if there is a better way of selecting the cell in the row? It sucks putting in cell[1]. Couldn't I do cell["mycellname"], so if I decide to change the order of my cells, bugs won't appear?
A: Just use a loop to check cells in your GridView, for example:
for (int i = 0; i < GridView2.Rows.Count; i++)
{
    string vr;
    vr = GridView2.Rows[i].Cells[4].Text; // here you go: vr = the value of the cell
    if (vr == "0") // you can check for anything
    {
        GridView2.Rows[i].Cells[4].Text = "Done"; // you can format this cell
    }
}
A: Why not pull the data directly out of the data source?
DataBinder.Eval(e.Row.DataItem, "ColumnName")
A: When you use a TemplateField and bind literal text to it like you are doing, asp.net will actually insert a control FOR YOU! It gets put into a DataBoundLiteralControl. You can see this if you look in the debugger near your line of code that is getting the empty text. So, to access the information without changing your template to use a control, you would cast like this:
string percentage = ((DataBoundLiteralControl)e.Row.Cells[7].Controls[0]).Text;
That will get you your text!
A: Use the RowDataBound event to bind data to a particular cell, and to get a control (an ASP.NET control such as DropDownList) use GridView.FindControl("Name of Control")
A: The above are good suggestions, but you can get at the text value of a cell in a grid view without wrapping it in a literal or label control. You just have to know what event to wire up. In this case, use the DataBound event instead, like so:
protected void GridView1_DataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        if (e.Row.Cells[0].Text.Contains("sometext"))
        {
            e.Row.Cells[0].Font.Bold = true;
        }
    }
}
When running a debugger, you will see the text appear in this method.
A: First you need to wrap your code in a Label or Literal control so that you can reference it properly. What's happening is that there's no way for the system to keep track of it, because there's no control associated with the text. It's the control's responsibility to add its contents to viewstate. You need to use gridView.FindControl("controlName"); to get the control in the row. From there you can get at its properties, including Text. You can also get at the DataItem property of the Row in question and cast it to the appropriate type and extract the information directly.
A: I had a similar question, but found the solution through a slightly different approach. Instead of looking up the control as Chris suggested, I first changed the way the field was specified in the .aspx page. Instead of using a <asp:TemplateField ...> tag, I changed the field in question to use <asp:BoundField ...>. Then, when I got to the RowDataBound event, the data could be accessed in the cell directly.
The relevant fragments: First, the aspx page:
<asp:GridView ID="gvVarianceReport" runat="server" ... >
...Other fields...
<asp:BoundField DataField="TotalExpected" HeaderText="Total Expected <br />Filtration Events" HtmlEncode="False" ItemStyle-HorizontalAlign="Left" SortExpression="TotalExpected" />
...
</asp:GridView>
Then in the RowDataBound event I can access the values directly:
protected void gvVarianceReport_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.Cells[2].Text == "0")
    {
        e.Row.Cells[2].Text = "N/A";
        e.Row.Cells[3].Text = "N/A";
        e.Row.Cells[4].Text = "N/A";
    }
}
If someone could comment on why this works, I'd appreciate it. I don't fully understand why without the BoundField the value is not in the cell after the bind, but you have to look it up via the control.
A: Label lblSecret = ((Label)e.Row.FindControl("lblSecret"));
A: <asp:TemplateField HeaderText="# Percentage click throughs">
<ItemTemplate>
<%# AddPercentClickThroughs(Convert.ToDecimal(DataBinder.Eval(Container.DataItem, "EmailSummary.pLinksClicked")), Convert.ToDecimal(DataBinder.Eval(Container.DataItem, "NumberOfSends")))%>
</ItemTemplate>
</asp:TemplateField>

public string AddPercentClickThroughs(decimal linksClicked, decimal numberOfSends)
{
    decimal percentage = 0;
    if (numberOfSends != 0)
    {
        percentage = linksClicked / numberOfSends * 100;
    }
    return percentage.ToString("F");
}
A: If you set the Visible attribute on the asp:BoundField to False, like this:
<asp:BoundField DataField="F1" HeaderText="F1" Visible="False"/>
you will not get any Text in the Cells[i].Text property when you loop the rows. So
foreach (GridViewRow row in myGrid.Rows)
{
    userList.Add(row.Cells[0].Text); // this will be empty ""
}
But you can hide a column by wiring the grid to the OnRowDataBound event and, from there, doing this:
e.Row.Cells[0].Visible = false; // now the cell has Text but it's hidden
A: Regarding the index selection style of the columns, I suggest you do the following. I ran into this problem, but I was going to set values dynamically using an API, so what I did was this (keep in mind .CastTo<T> is simply ((T)e.Row.DataItem), and you have to call DataBind() in order to see the changes in the grid; this way you won't run into issues if you decide to add a column to the grid):
protected void gvdata_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if(e.Row.RowType == DataControlRowType.DataRow)
    {
        var number = e.Row.DataItem.CastTo<DataRowView>().Row["number"];
        e.Row.DataItem.CastTo<DataRowView>().Row["ActivationDate"] = DateTime.Parse(userData.basic_info.creation_date).ToShortDateString();
        e.Row.DataItem.CastTo<DataRowView>().Row["ExpirationDate"] = DateTime.Parse(userData.basic_info.nearest_exp_date).ToShortDateString();
        e.Row.DataItem.CastTo<DataRowView>().Row["Remainder"] = Convert.ToDecimal(userData.basic_info.credit).ToStringWithSeparator();
        e.Row.DataBind();
    }
}
A: protected void gvbind_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        e.Row.Attributes["onmouseover"] = "this.style.cursor='hand';";
        e.Row.Attributes["onmouseout"] = "this.style.textDecoration='none';";
        e.Row.Attributes["onclick"] = ClientScript.GetPostBackClientHyperlink(this.gvbind, "Select$" + e.Row.RowIndex);
    }
}
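On the asker's side note about cell["mycellname"]: GridView cells are only addressable by index, but a small helper can map header text to the index so reordering columns doesn't break anything. A sketch (assumes HeaderText values are unique; the helper name is made up):

private int GetCellIndex(GridView grid, string headerText)
{
    for (int i = 0; i < grid.Columns.Count; i++)
        if (grid.Columns[i].HeaderText == headerText)
            return i; // found the column with that header
    return -1; // header not found
}

// usage inside RowDataBound:
int idx = GetCellIndex(myGrid, "# Percentage click throughs");
string percentage = e.Row.Cells[idx].Text;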
{ "language": "en", "url": "https://stackoverflow.com/questions/121722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }