Q: Simple database application for Windows I need to build a simple, single-user database application for Windows. The main requirements are independence from the Windows version and from installed software. What technologies (language/framework) would you recommend? My preference for language is Visual Basic.
EDIT: What about VB.Net and SQL Server Compact Edition?
A: Since your requirement is a Windows-based application, I would suggest that you go with SQL Server 2005 Express Edition, which is a free tool, but with certain small limitations. You can upgrade to a bigger edition when you go with a paid version.
There are other DB engines like SQLite or Firebird; choose them if the support and growth options they provide are good enough for you.
Additionally, Visual Basic is end-of-lifed. VB.NET might be a better Windows-based platform currently. It would give you a better platform and feature set to start with, and when you want to expand the talent you have working on the project, I assume .NET talent will be more available than programmers who want to work with a dead language.
A: duplicate of What options are there for a quick embedded DB in .NET?
I'll repeat my answer from there:
"Or there's Esent, the built-in database that exists in every copy of Windows. Read about it here: http://ayende.com/Blog/archive/2008/12/23/hidden-windows-gems-extensible-storage-engine.aspx" and http://www.codeplex.com/ManagedEsent
A: SQLite will work for a local desktop application. If you want several users, a few gigabytes of data, and multiple connections, I would use MySQL or Firebird.
http://www.mysql.com/
http://www.firebirdsql.org/
A: Firebird SQL Server would be the thing of choice. It can be used in both embedded and multi-user mode like traditional databases. It implements many of the SQL standards and has a strong community base. It is available for Windows, Linux, Solaris, OS X, and HP-UX.
A: I would recommend Sqlite. It's completely self-contained, and public domain so there are no license issues at all.
A: Single user or multi user?
For single user, the answer would be SQLite
For multi-user (or multithreaded), try MySQL or PostgreSQL.
A: As mentioned, SQLite is a great single-user database. This page has VB/SQLite examples. One concern is that SQLite parses foreign key constraints, but does not enforce them. You can use this code to generate "foreign key triggers" for SQLite, thus gaining an easy-to-use database with FK constraints.
Depending on how demanding your database needs are, though, you might want to consider MS Access.
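For reference, the foreign-key caveat above applied to older SQLite builds; newer versions can enforce foreign keys natively once the pragma is enabled per connection. A minimal sketch using Python's built-in sqlite3 module (table names are made up for illustration):

```python
import sqlite3

# In-memory database for illustration; use a file path for a real app
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default per connection

conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE book (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    author_id INTEGER NOT NULL REFERENCES author(id)
)""")

conn.execute("INSERT INTO author (name) VALUES ('Ada')")
conn.execute("INSERT INTO book (title, author_id) VALUES ('Notes', 1)")

# Violating the constraint now raises an IntegrityError
try:
    conn.execute("INSERT INTO book (title, author_id) VALUES ('Orphan', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Note that the pragma must be issued on every new connection; it is not a property of the database file.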
A: I used SQL Server Compact Edition. It's like SQLite: a single SDF file accessed using ADO.NET.
You can develop your application using Visual Basic .NET and manage your database (add tables, columns, constraints, etc...) using Visual Studio.
A: SQLite may be what you are looking for. http://www.sqlite.org/
A: Depending on your needs for the application.
You could use SQLite, which is a very nice database with no installation required.
You could also use Microsoft SQL Server: SQL Server Compact 3.5.
Both are free!
A: Well, assuming you don't have any prior experience...
You need some kind of persistence storage (for example a database) and a client.
For the storage you could use almost anything. For example you could create your DB in MS Access and just ship it as a file, using ADO to access it.
Other options are MS SQL Express edition (comes pre-installed on some machines or could be installed for free) and plenty of open source databases like SQLite
For the client side you can't go wrong with VBScript and ADO (using OLE DB drivers). They have come with every Windows installation since the Dark Ages, and you will find plenty of references/tutorials/answers online.
A drawback: no UI to speak of, so you'll have to build a command-line interface (which may be acceptable for a 'simple' application).
If you want to build a UI I would suggest using .NET WinForms. The overhead will be substantially bigger, but .NET is now installed on all XP/Vista machines, and even if it is not, you could always install the framework with your application.
A: It's not quite clear from your post whether you want a web application or not.
For a web application, MySQL works effectively on the Windows platform. You also have nearly limitless options for development environment including, PHP, Ruby on Rails, Django, and .Net.
If you are looking at a desktop application, MS Access might be suitable ... incredible easy for simple applications.
A: If you want to build an application that can move to another PC easily, I prefer Microsoft Access: it is a small database, easy to use, and needs no installation. It suits applications like an address book or a mini CRUD system.
But if you want to develop an enterprise database system, you should use MySQL instead.
A: I do not understand what you mean by "independence from [...] installed software". You always need at least the DBMS installed, as well as one client or user interface.
I recommend using MS Access. It is easy and cheap for simple, single-user tasks and rapid-prototyping development. Only the development version ("normal" Access) has to be bought to create databases; the runtime version of Access 2007 can be downloaded free of charge from the Microsoft homepage, for using only the databases you created.
It also combines the DBMS and GUI front end in the same tool.
A: Dare I mention MS Access...?
A: If you are looking for small footprint (up to a few MB) and easy deployment (end-user should only install your application to get it working), then your options are SQLite and Firebird embedded.
Of those two, I'd pick Firebird any time, because of its full support for SQL (you can't, for example, drop a column in SQLite), ACID compliance, and the ability to go client/server without any changes to the code (just change the connection string from embedded to server) if you ever decide to let multiple users work on the same database.
Not to mention that you can use full server to develop (which means your application and database administration tool can be connected to database at the same time).
A: I'm successfully using Turbo Delphi (free for commercial and non-commercial use) + ZeosLib (zeos.firmos.at).
The only things you need to distribute with your .exe are the database client dlls (no need to install the client, just put the dlls in the same directory).
A: Would Kexi work?
A: I can recommend from personal experience "My Visual database"
free, no code, no sql, just drag and drop.
http://myvisualdatabase.com/
A: The best option would be to create a Win32 native application using Delphi and use SQLite as the database.
The reason: Delphi can produce native Win32 applications without any other product being installed on the machine.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: PHP webpage doesn't launch unix command even after updated sudoers Basically I am trying to restart a service from a php web page.
Here is the code:
<?php
exec ('/usr/bin/sudo /etc/init.d/portmap restart');
?>
But, in /var/log/httpd/error_log, I get
unable to change to sudoers gid: Operation not permitted
and in /var/log/messages, I get
Sep 22 15:01:56 ri kernel: audit(1222063316.536:777): avc: denied { getattr } for pid=4851 comm="sh" name="var" dev=dm-0 ino=114241 scontext=root:system_r:httpd_sys_script_t tcontext=system_u:object_r:var_t tclass=dir
Sep 22 15:01:56 ri kernel: audit(1222063316.549:778): avc: denied { setrlimit } for pid=4851 comm="sudo" scontext=root:system_r:httpd_sys_script_t tcontext=root:system_r:httpd_sys_script_t tclass=process
Sep 22 15:01:56 ri kernel: audit(1222063316.565:779): avc: denied { read } for pid=4851 comm="sudo" name="shadow" dev=dm-0 ino=379669 scontext=root:system_r:httpd_sys_script_t tcontext=system_u:object_r:shadow_t tclass=file
Sep 22 15:01:56 ri kernel: audit(1222063316.568:780): avc: denied { read } for pid=4851 comm="sudo" name="shadow" dev=dm-0 ino=379669 scontext=root:system_r:httpd_sys_script_t tcontext=system_u:object_r:shadow_t tclass=file
Sep 22 15:01:56 ri kernel: audit(1222063316.571:781): avc: denied { setgid } for pid=4851 comm="sudo" capability=6 scontext=root:system_r:httpd_sys_script_t tcontext=root:system_r:httpd_sys_script_t tclass=capability
Sep 22 15:01:56 ri kernel: audit(1222063316.574:782): avc: denied { setuid } for pid=4851 comm="sudo" capability=7 scontext=root:system_r:httpd_sys_script_t tcontext=root:system_r:httpd_sys_script_t tclass=capability
Sep 22 15:01:56 ri kernel: audit(1222063316.577:783): avc: denied { setgid } for pid=4851 comm="sudo" capability=6 scontext=root:system_r:httpd_sys_script_t tcontext=root:system_r:httpd_sys_script_t tclass=capability
In my visudo, I added those lines
User_Alias WWW=apache
WWW ALL=(ALL) NOPASSWD:ALL
Can you please help me ? Am I doing something wrong ?
Thanks for your help,
tiBoun
A: The error you are getting seems to be related to your SELinux configuration. You might try temporarily disabling that.
As an aside, I would strongly suggest that you adjust your sudo configuration to be more restrictive.
User_Alias WWW=apache
Cmnd_Alias WEBCMDS=/etc/init.d/portmap
WWW ALL=NOPASSWD: WEBCMDS
A: The problem is not with sudo at the moment, but with SELinux, which is (reasonably) set to deny the HTTPD from gaining root privileges.
You will need to either explicitly allow this (you can use audit2allow for this), or set SELinux to be permissive instead. I'd suggest the former.
A: I encountered this problem recently and the accepted answer above helped. However, I would like to post this answer to elaborate on it, so that the next person does not need to spend as much time as I did!
Follow section 7 of the following link: https://wiki.centos.org/HowTos/SELinux.
Then grep for httpd_sys_script_t.
Basically the steps are:
# grep httpd_sys_script_t /var/log/audit/audit.log | audit2allow -M httpdallowsudo
# semodule -i httpdallowsudo.pp
A: This is probably down to something like trying to execute sudo in a non-interactive shell.
If you do a grep for 'sudo' in your apache user's mail log, you might find things like this:
sudo: sorry, you must have a tty to run sudo
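If that message appears, the likely cause is sudo's requiretty default, which blocks sudo from non-interactive contexts such as a web server process. A sketch of the usual sudoers workaround (edit with visudo; applies only if your distribution enables requiretty by default):

```
# /etc/sudoers -- exempt the apache user from the tty requirement
Defaults:apache !requiretty
```

Keep in mind this weakens a deliberate safeguard, so combine it with a tight Cmnd_Alias as suggested in the first answer.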
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I run a script when ip-address changes (most likely using a dhclient hook) on a (Ubuntu) Linux machine? I have a script which contacts a few sources and tell them "the IP-address XXX.XXX.XXX.XXX is my current one". My test web server has a dynamic IP-address through DHCP and amongst other things it needs to update a DDNS entry when its IP-address changes. However it's not the only thing it does, so I will need to run my own custom script.
I suspect that this is possible by attaching the script to a given dhclient hook. However, I still need to know which hook I should use, and how.
A: I would recommend putting the script into dhclient-exit-hooks.d, because you should only change the DDNS entry once the address change has finished. However, I am not sure whether the dhclient exit hooks are called if assigning an address fails.
Edit: The man page (man dhclient-script) says that the exit-hooks script gets the exit code in a shell variable (exit_status), so you can check it.
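A sketch of what such an exit hook could look like (the filename and the echo placeholder are hypothetical; the variables reason, exit_status, new_ip_address, and old_ip_address are set by dhclient-script before sourcing the hook):

```shell
#!/bin/sh
# e.g. /etc/dhcp/dhclient-exit-hooks.d/update-ddns  (path varies by release)
update_ddns() {
    # Only act on successful (re)binds where the address actually changed
    case "$reason" in
        BOUND|RENEW|REBIND|REBOOT) ;;
        *) return 0 ;;
    esac
    [ "$exit_status" = "0" ] || return 0
    [ "$new_ip_address" = "$old_ip_address" ] && return 0
    echo "address changed to $new_ip_address"   # replace with your DDNS update script
}
update_ddns
```

The hook is sourced, not executed, so it should not call exit; returning from a function keeps dhclient-script running.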
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: CruiseControl.NET post-build actions We have CC.NET setup on our ASP.NET app. When we build the project, the ASP.NET app is pre-compiled and copied to a network share, from which a server runs the application.
The server is a bit different from the development boxes, and the next server in our staging environment differs even more. The differences are specific config files and so on, so I want to exclude some files, or delete them before the pre-compiled app is copied to the network share.
My config file looks like this:
<project name="Assembly.Web.project">
<triggers>
<intervalTrigger seconds="3600" />
</triggers>
<sourcecontrol type="svn">
<trunkUrl>svn://svn-server/MyApp/Web/Trunk</trunkUrl>
<workingDirectory>C:\build-server\Assembly\Web\TEST-HL</workingDirectory>
<executable>C:\Program Files (x86)\SVN 1.5 bin\svn.exe</executable>
<username>uid</username>
<password>pwd</password>
</sourcecontrol>
<tasks>
<msbuild>
<executable>C:\Windows\Microsoft.NET\Framework64\v3.5\MSBuild.exe</executable>
<workingDirectory>C:\build-server\Assembly\Web\TEST-HL</workingDirectory>
<projectFile>C:\build-server\Assembly\Web\TEST-HL\Web\Web.sln</projectFile>
<buildArgs>/noconsolelogger /p:Configuration=Debug /v:diag</buildArgs>
<targets>Build</targets>
<timeout>900</timeout>
<logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
</msbuild>
</tasks>
<publishers>
<buildpublisher>
<sourceDir>C:\build-server\Assembly\Web\PrecompiledWeb</sourceDir>
<publishDir>\\test-web01\Web</publishDir>
<useLabelSubDirectory>false</useLabelSubDirectory>
<alwaysPublish>false</alwaysPublish>
</buildpublisher>
</publishers>
</project>
As you can see, I use a buildPublisher to copy the pre-compiled files to the network share. What I want to do here, is either 1) delete certain files before they are copied or 2) replace those files after they have been copied.
I DO NOT want to have some app running watching specific files for change, and then after that replace the files with other ones. I want something to be either done by CC.NET, or triggered by CC.NET.
Can you launch a .bat file with CC.NET?
A: I use a NAnt task for all publishing, deploying, cleaning and so on.
A: Take a look at MSDEPLOY or Web Deployment Projects. There is a question that will provide more detail here
A: You could use NAnt for that kind of task.
Here is the Task Reference of NAnt.
A: Of course CruiseControl.NET can run a batch file; simply use the exec task. However, an easier answer might just be to have MSBuild do the task for you. It should be simple to add a few steps in a post-compile target.
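A sketch of such an exec task (the batch file path is hypothetical); placed after the msbuild task inside the <tasks> element, CC.NET runs it before the publishers fire, which is exactly the window needed to delete the environment-specific files:

```xml
<!-- inside <tasks>, after the <msbuild> task -->
<exec>
  <executable>C:\build-server\scripts\strip-configs.bat</executable>
  <baseDirectory>C:\build-server\Assembly\Web\PrecompiledWeb</baseDirectory>
  <buildTimeoutSeconds>60</buildTimeoutSeconds>
</exec>
```

If the exec task fails (non-zero exit code), the build is marked failed and the buildpublisher will not copy anything, which is usually the desired behavior here.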
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I use PowerShell to stop and start a clustered "Generic Service"? How do I use PowerShell to stop and start a "Generic Service" as seen in the Microsoft "Cluster Administrator" software?
A: You can also use WMI. You can get all the Generic Services with:
$services = Get-WmiObject -Computer "Computer" -namespace 'root\mscluster' `
MSCluster_Resource | Where {$_.Type -eq "Generic Service"}
To stop and start a service:
$timeout = 15
$services[0].TakeOffline($timeout)
$services[0].BringOnline($timeout)
A: It turns out the answer is to simply use the command line tool CLUSTER.EXE to do this:
cluster RES MyGenericServiceName /OFF
cluster RES MyGenericServiceName /ON
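On newer Windows Server releases, cluster.exe is deprecated in favor of the FailoverClusters PowerShell module, which ships with the Failover Clustering feature. A hedged sketch using that module (the resource name is hypothetical):

```powershell
Import-Module FailoverClusters

# List clustered resources of type "Generic Service"
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Generic Service" }

# Take the resource offline and bring it back online
Stop-ClusterResource -Name "MyGenericServiceName"
Start-ClusterResource -Name "MyGenericServiceName"
```

These cmdlets must be run on a cluster node (or with -Cluster pointing at one) by an account with cluster administration rights.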
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I cancel a text selection after the initial mousedown event? I'm trying to implement a pop-up menu based on a click-and-hold, positioned so that a (really) slow click will still trigger the default action, and with the delay set so that a text-selection gesture won't usually trigger the menu.
What I can't seem to do is cancel the text-selection in a way that doesn't prevent text-selection in the first place: returning false from the event handler (or calling $(this).preventDefault()) prevents the user from selecting at all, and the obvious $().trigger('mouseup') doesn't do anything with the selection at all.
* This is in the general context of a page, not particular to a textarea or other text field.
* e.stopPropagation() doesn't cancel text-selection.
* I'm not looking to prevent text selections, but rather to veto them after some short period of time, if certain conditions are met.
A: Try this:
var input = document.getElementById('myInputField');
if (input) {
input.onmousedown = function(e) {
if (!e) e = window.event;
e.cancelBubble = true;
if (e.stopPropagation) e.stopPropagation();
}
}
And if not, have a read of:
http://www.quirksmode.org/js/introevents.html
A: In addition to the top thread, there is an official way to implement what I think you want in DOM. What you can use in lieu of events is something called a range object.
Consider, (which works definitively on FF3)
window.onclick = function(evt)
{
// retrieves the selection and displays its content
var selectObj = window.getSelection();
alert(selectObj);
// now collapse the selection to the initial point of the selection
var rangeObj = selectObj.getRangeAt(0);
rangeObj.collapse(true);
}
Unfortunately, this doesn't quite fly with IE, Opera, Chrome, or Safari; not sure why, because in Opera, Chrome, or Safari, there is something associated with the collapse and getRangeAt methods. If I know more, I'll let you know.
An update on my previous answer: one that works more universally is the selection object and its collapse, collapseToStart, and collapseToEnd methods.
Now consider the 2.0 of the above:
window.onmouseup = function(evt)
{
var selectObj = window.getSelection();
alert(selectObj); // to get a flavor of what you selected
// works in FF3, Safari, Opera, and Chrome
selectObj.collapseToStart();
// works in FF3, Safari, Chrome (but not opera)
/* selectObj.collapse(document.body, 0); */
// and as the code is native, I have no idea why...
// ...and it makes me sad
}
A: I'm not sure if this will help, exactly, but here is some code to de-select text:
// onselectstart is IE-only
if ('undefined' !== typeof this.onselectstart) {
this.onselectstart = function () { return false; };
} else {
this.onmousedown = function () { return false; };
this.onclick = function () { return true; };
}
"this" in this context would be the element for which you want to prevent text selections.
A: $(this).focus() (or anything along the lines of document.body.focus()) seems to do the trick, although I haven't tested it much beyond ff3.
A: An answer to this question works for me:
How to disable text selection using jquery?
(It not only disables, but also cancels, any text selection.
At least on my computer in FF and Chrome.)
Here is what the answer does:
.attr('unselectable', 'on')
'-ms-user-select': 'none',
'-moz-user-select': 'none',
'-webkit-user-select': 'none',
'user-select': 'none'
.each(function() { // for IE
this.onselectstart = function() { return false; };
});
A: Thanks to knight for the beginnings of a universal solution. This slight modification also includes support for IE 7-10 (and probably 6).
I had a similar problem to cwillu-gmail's: I needed to attach to the shift-click event (click on a label while holding the shift key) to perform some alternate functionality when in a specific "design" mode. (Yeah, sounds weird, but it's what the client wanted.) That part was easy, but it had the annoying side effect of selecting text. This worked for me (I used onclick, but you could use onmouseup, depending on what you are trying to do and when):
var element = document.getElementById("myElementId");
element.onclick = function (event)
{
// if (event.shiftKey) // uncomment this line to only deselect text when clicking while holding shift key
{
if (document.selection)
{
document.selection.empty(); // works in IE (7/8/9/10)
}
else if (window.getSelection)
{
window.getSelection().collapseToStart(); // works in chrome/safari/opera/FF
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Programmatically add an application to Windows Firewall I have an application that is installed and updated via ClickOnce. The application downloads files via FTP, and therefore needs to be added as an exception to the windows firewall. Because of the way that ClickOnce works, the path to the EXE changes with every update, so the exception needs to change also. What would be the best way to have the changes made to the firewall so that it's invisible to the end user?
(The application is written in C#)
A: Assuming we're using a Visual Studio Installer->Setup Project - You need an installer class like this inside an assembly that's being installed, and then make sure you add a custom action for the "Primary output" in the install phase.
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.IO;
using System.Diagnostics;
namespace YourNamespace
{
[RunInstaller(true)]
public class AddFirewallExceptionInstaller : Installer
{
protected override void OnAfterInstall(IDictionary savedState)
{
base.OnAfterInstall(savedState);
var path = Path.GetDirectoryName(Context.Parameters["assemblypath"]);
OpenFirewallForProgram(Path.Combine(path, "YourExe.exe"),
"Your program name for display");
}
private static void OpenFirewallForProgram(string exeFileName, string displayName)
{
var proc = Process.Start(
new ProcessStartInfo
{
FileName = "netsh",
Arguments =
string.Format(
"firewall add allowedprogram program=\"{0}\" name=\"{1}\" profile=\"ALL\"",
exeFileName, displayName),
WindowStyle = ProcessWindowStyle.Hidden
});
proc.WaitForExit();
}
}
}
A: The easiest way I know would be to use netsh, you can simply delete the rule and re-create it, or set up a port rule, if yours is fixed.
Here is a page describing the options for its firewall context.
A: The dead link to "Adding an Application to the Exception list on the Windows Firewall" can be found on The Wayback Machine:
http://web.archive.org/web/20070707110141/http://www.dot.net.nz/Default.aspx?tabid=42&mid=404&ctl=Details&ItemID=8
A: This answer might be too late. This is what I ended up using:
https://support.microsoft.com/en-gb/help/947709/how-to-use-the-netsh-advfirewall-firewall-context-instead-of-the-netsh
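For reference, the linked article covers the netsh advfirewall context (Windows Vista and later), which replaces the older netsh firewall context shown in other answers. A sketch with a hypothetical rule name and program path, run from an elevated prompt:

```bat
rem Add an inbound allow rule for the program
netsh advfirewall firewall add rule name="MyApp" dir=in action=allow program="C:\MyApp\MyApp.exe" enable=yes

rem Remove it again, e.g. before re-adding with the new ClickOnce path
netsh advfirewall firewall delete rule name="MyApp"
```

Because ClickOnce changes the executable path on every update, deleting the rule by name and re-adding it with the current path on startup is a workable pattern.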
A: Not sure if this is the best way, but running netsh should work:
netsh firewall add allowedprogram C:\MyApp\MyApp.exe MyApp ENABLE
I think this requires Administrator permissions though, for obvious reasons :)
Edit: I just don't know enough about ClickOnce to know whether or not you can run external programs through it.
A: I found this article, which has a complete wrapper class included for manipulating the windows firewall. Adding an Application to the Exception list on the Windows Firewall
///
/// Allows basic access to the windows firewall API.
/// This can be used to add an exception to the windows firewall
/// exceptions list, so that our programs can continue to run merrily
/// even when nasty windows firewall is running.
///
/// Please note: It is not enforced here, but it might be a good idea
/// to actually prompt the user before messing with their firewall settings,
/// just as a matter of politeness.
///
///
/// To allow the installers to authorize idiom products to work through
/// the Windows Firewall.
///
public class FirewallHelper
{
#region Variables
///
/// Hooray! Singleton access.
///
private static FirewallHelper instance = null;
///
/// Interface to the firewall manager COM object
///
private INetFwMgr fwMgr = null;
#endregion
#region Properties
///
/// Singleton access to the firewallhelper object.
/// Threadsafe.
///
public static FirewallHelper Instance
{
get
{
lock (typeof(FirewallHelper))
{
if (instance == null)
instance = new FirewallHelper();
return instance;
}
}
}
#endregion
#region Constructivat0r
///
/// Private Constructor. If this fails, HasFirewall will return
/// false;
///
private FirewallHelper()
{
// Get the type of HNetCfg.FwMgr, or null if an error occurred
Type fwMgrType = Type.GetTypeFromProgID("HNetCfg.FwMgr", false);
// Assume failed.
fwMgr = null;
if (fwMgrType != null)
{
try
{
fwMgr = (INetFwMgr)Activator.CreateInstance(fwMgrType);
}
// In all other circumstances, fwMgr is null.
catch (ArgumentException) { }
catch (NotSupportedException) { }
catch (System.Reflection.TargetInvocationException) { }
catch (MissingMethodException) { }
catch (MethodAccessException) { }
catch (MemberAccessException) { }
catch (InvalidComObjectException) { }
catch (COMException) { }
catch (TypeLoadException) { }
}
}
#endregion
#region Helper Methods
///
/// Gets whether or not the firewall is installed on this computer.
///
///
public bool IsFirewallInstalled
{
get
{
if (fwMgr != null &&
fwMgr.LocalPolicy != null &&
fwMgr.LocalPolicy.CurrentProfile != null)
return true;
else
return false;
}
}
///
/// Returns whether or not the firewall is enabled.
/// If the firewall is not installed, this returns false.
///
public bool IsFirewallEnabled
{
get
{
if (IsFirewallInstalled && fwMgr.LocalPolicy.CurrentProfile.FirewallEnabled)
return true;
else
return false;
}
}
///
/// Returns whether or not the firewall allows Application "Exceptions".
/// If the firewall is not installed, this returns false.
///
///
/// Added to allow access to this method
///
public bool AppAuthorizationsAllowed
{
get
{
if (IsFirewallInstalled && !fwMgr.LocalPolicy.CurrentProfile.ExceptionsNotAllowed)
return true;
else
return false;
}
}
///
/// Adds an application to the list of authorized applications.
/// If the application is already authorized, does nothing.
///
///
/// The full path to the application executable. This cannot
/// be blank, and cannot be a relative path.
///
///
/// This is the name of the application, purely for display
/// purposes in the Microsoft Security Center.
///
///
/// When applicationFullPath is null OR
/// When appName is null.
///
///
/// When applicationFullPath is blank OR
/// When appName is blank OR
/// applicationFullPath contains invalid path characters OR
/// applicationFullPath is not an absolute path
///
///
/// If the firewall is not installed OR
/// If the firewall does not allow specific application 'exceptions' OR
/// Due to an exception in COM this method could not create the
/// necessary COM types
///
///
/// If no file exists at the given applicationFullPath
///
public void GrantAuthorization(string applicationFullPath, string appName)
{
#region Parameter checking
if (applicationFullPath == null)
throw new ArgumentNullException("applicationFullPath");
if (appName == null)
throw new ArgumentNullException("appName");
if (applicationFullPath.Trim().Length == 0)
throw new ArgumentException("applicationFullPath must not be blank");
if (appName.Trim().Length == 0)
throw new ArgumentException("appName must not be blank");
if (applicationFullPath.IndexOfAny(Path.InvalidPathChars) >= 0)
throw new ArgumentException("applicationFullPath must not contain invalid path characters");
if (!Path.IsPathRooted(applicationFullPath))
throw new ArgumentException("applicationFullPath is not an absolute path");
if (!File.Exists(applicationFullPath))
throw new FileNotFoundException("File does not exist", applicationFullPath);
// State checking
if (!IsFirewallInstalled)
throw new FirewallHelperException("Cannot grant authorization: Firewall is not installed.");
if (!AppAuthorizationsAllowed)
throw new FirewallHelperException("Application exemptions are not allowed.");
#endregion
if (!HasAuthorization(applicationFullPath))
{
// Get the type of HNetCfg.FwMgr, or null if an error occurred
Type authAppType = Type.GetTypeFromProgID("HNetCfg.FwAuthorizedApplication", false);
// Assume failed.
INetFwAuthorizedApplication appInfo = null;
if (authAppType != null)
{
try
{
appInfo = (INetFwAuthorizedApplication)Activator.CreateInstance(authAppType);
}
// In all other circumstances, appInfo is null.
catch (ArgumentException) { }
catch (NotSupportedException) { }
catch (System.Reflection.TargetInvocationException) { }
catch (MissingMethodException) { }
catch (MethodAccessException) { }
catch (MemberAccessException) { }
catch (InvalidComObjectException) { }
catch (COMException) { }
catch (TypeLoadException) { }
}
if (appInfo == null)
throw new FirewallHelperException("Could not grant authorization: can't create INetFwAuthorizedApplication instance.");
appInfo.Name = appName;
appInfo.ProcessImageFileName = applicationFullPath;
// ...
// Use defaults for other properties of the AuthorizedApplication COM object
// Authorize this application
fwMgr.LocalPolicy.CurrentProfile.AuthorizedApplications.Add(appInfo);
}
// otherwise it already has authorization so do nothing
}
///
/// Removes an application to the list of authorized applications.
/// Note that the specified application must exist or a FileNotFound
/// exception will be thrown.
/// If the specified application exists but does not current have
/// authorization, this method will do nothing.
///
///
/// The full path to the application executable. This cannot
/// be blank, and cannot be a relative path.
///
///
/// When applicationFullPath is null
///
///
/// When applicationFullPath is blank OR
/// applicationFullPath contains invalid path characters OR
/// applicationFullPath is not an absolute path
///
///
/// If the firewall is not installed.
///
///
/// If the specified application does not exist.
///
public void RemoveAuthorization(string applicationFullPath)
{
#region Parameter checking
if (applicationFullPath == null)
throw new ArgumentNullException("applicationFullPath");
if (applicationFullPath.Trim().Length == 0)
throw new ArgumentException("applicationFullPath must not be blank");
if (applicationFullPath.IndexOfAny(Path.InvalidPathChars) >= 0)
throw new ArgumentException("applicationFullPath must not contain invalid path characters");
if (!Path.IsPathRooted(applicationFullPath))
throw new ArgumentException("applicationFullPath is not an absolute path");
if (!File.Exists(applicationFullPath))
throw new FileNotFoundException("File does not exist", applicationFullPath);
// State checking
if (!IsFirewallInstalled)
throw new FirewallHelperException("Cannot remove authorization: Firewall is not installed.");
#endregion
if (HasAuthorization(applicationFullPath))
{
// Remove Authorization for this application
fwMgr.LocalPolicy.CurrentProfile.AuthorizedApplications.Remove(applicationFullPath);
}
// otherwise it does not have authorization so do nothing
}
///
/// Returns whether an application is in the list of authorized applications.
/// Note if the file does not exist, this throws a FileNotFound exception.
///
///
/// The full path to the application executable. This cannot
/// be blank, and cannot be a relative path.
///
///
/// The full path to the application executable. This cannot
/// be blank, and cannot be a relative path.
///
///
/// When applicationFullPath is null
///
///
/// When applicationFullPath is blank OR
/// applicationFullPath contains invalid path characters OR
/// applicationFullPath is not an absolute path
///
///
/// If the firewall is not installed.
///
///
/// If the specified application does not exist.
///
public bool HasAuthorization(string applicationFullPath)
{
#region Parameter checking
if (applicationFullPath == null)
throw new ArgumentNullException("applicationFullPath");
if (applicationFullPath.Trim().Length == 0)
throw new ArgumentException("applicationFullPath must not be blank");
if (applicationFullPath.IndexOfAny(Path.InvalidPathChars) >= 0)
throw new ArgumentException("applicationFullPath must not contain invalid path characters");
if (!Path.IsPathRooted(applicationFullPath))
throw new ArgumentException("applicationFullPath is not an absolute path");
if (!File.Exists(applicationFullPath))
throw new FileNotFoundException("File does not exist.", applicationFullPath);
// State checking
if (!IsFirewallInstalled)
throw new FirewallHelperException("Cannot remove authorization: Firewall is not installed.");
#endregion
// Locate Authorization for this application
foreach (string appName in GetAuthorizedAppPaths())
{
// Paths on windows file systems are not case sensitive.
if (appName.ToLower() == applicationFullPath.ToLower())
return true;
}
// Failed to locate the given app.
return false;
}
/// <summary>
/// Retrieves a collection of paths to applications that are authorized.
/// </summary>
/// <returns>A collection of authorized application paths.</returns>
/// <exception cref="FirewallHelperException">
/// If the Firewall is not installed.
/// </exception>
public ICollection GetAuthorizedAppPaths()
{
// State checking
if (!IsFirewallInstalled)
throw new FirewallHelperException("Cannot get authorized applications: Firewall is not installed.");
ArrayList list = new ArrayList();
// Collect the paths of all authorized applications
foreach (INetFwAuthorizedApplication app in fwMgr.LocalPolicy.CurrentProfile.AuthorizedApplications)
list.Add(app.ProcessImageFileName);
return list;
}
#endregion
}
/// <summary>
/// Describes a FirewallHelperException.
/// </summary>
public class FirewallHelperException : System.Exception
{
/// <summary>
/// Construct a new FirewallHelperException
/// </summary>
/// <param name="message">The exception message.</param>
public FirewallHelperException(string message)
: base(message)
{ }
}
The ClickOnce sandbox did not present any problems.
A: It's possible to access the data from the firewall, look at the following articles.
*
*Windows XP SP2 Firewall Controller
*Controlling Windows Firewall using C# via COM Interop
The real question is: does the ClickOnce sandbox allow this kind of access? My guess would be that it doesn't. Maybe you could use a webservice? (For more information about the data access methods in ClickOnce see Accessing Local and Remote Data in ClickOnce Applications)
A: The answer is you only allow trusted software to run with Admin privileges. From time to time SOME software has to have admin privileges and make sensitive changes to your system. You might as well have a read only hard disk otherwise...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Can you render a Direct3D window for one process into another process' HWND? What I want to do is have a Windows application launch another Direct3D application and have the Direct3D application render into a control provided by the parent process.
Is this even possible? If it is, how would it be done?
A: It can be done easily: all you need to do is pass the HWND of the target window as a parameter when creating the D3D device.
It even works during interop - you can pass the hWnd from .NET to C++ and render a C# window using native DirectX.
A: Here's one example, hosting the content of a WPF window in an existing Win32 window. I did this when writing a WPF-based screen saver, having to render into the screen saver's little preview window.
http://stuff.seans.com/2008/09/01/writing-a-screen-saver-in-wpf/
A: This is possible. I remember doing it in VB6 when writing screen savers. The screen saver control panel sends a commandline to the screen saver with the HWND of the preview window. Using that HWND, you can then get the HDC and from there everything else you need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Is there any way to inherit the existing TLD definition for a particular JSP tag? I am planning to extend the basic HTML input tag provided by Spring to incorporate more features into it. I did a similar kind of exercise in the past for Struts too. At that point I had to create a TLD file with the attributes that I introduced plus all the attributes available in the parent tag.
This is a bit tiresome and repetitive, so the question is: is there any way to inherit the existing TLD definition for a particular JSP tag?
Or any shortcut?
A: I don't think there is an option to inherit a TLD definition.
The shortest solution, I think, will be to inherit the tag class and change the TLD to point to your new (derived) class.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: how to workaround a java image scaling bug I've got a java servlet which is hitting this bug when down-scaling images...
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5101502
I'm trying to work out the best way to work around it and would appreciate any ideas from the community.
Thanks, Steve
A: Image scaling with Java can be surprisingly complicated - I ran into lots of reproducible JVM crashes when doing batch processing, which are not related to your bug though. In the end I ended up using the external command line tool 'convert' from ImageMagick, available for the relevant platforms. Simply call it with Runtime.exec(...) and the appropriate parameters. Might be even faster than a real Java solution, but this is certainly not a good solution for all apps.
A: You might want to use the Java Advanced Imaging (JAI) API. An example of how to scale an image using JAI can be found here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: JavaScript curry: what are the practical applications? I don’t think I’ve grokked currying yet. I understand what it does, and how to do it. I just can’t think of a situation I would use it.
Where are you using currying in JavaScript (or where are the main libraries using it)? DOM manipulation or general application development examples welcome.
One of the answers mentions animation. Functions like slideUp and fadeIn take an element as an argument and are normally curried functions returning the higher-order function with the default “animation function” built in. Why is that better than just applying the higher-up function with some defaults?
Are there any drawbacks to using it?
As requested here are some good resources on JavaScript currying:
*
*http://www.dustindiaz.com/javascript-curry/
*Crockford, Douglas (2008) JavaScript: The Good Parts
*http://www.svendtofte.com/code/curried_javascript/
(Takes a detour into ML so skip the whole section from “A crash course in ML” and start again at “How to write curried JavaScript”)
*http://web.archive.org/web/20111217011630/http://blog.morrisjohns.com:80/javascript_closures_for_dummies
*How do JavaScript closures work?
*http://ejohn.org/blog/partial-functions-in-javascript (Mr. Resig on the money as per usual)
*http://benalman.com/news/2010/09/partial-application-in-javascript/
I’ll add more as they crop up in the comments.
So, according to the answers, currying and partial application in general are convenience techniques.
If you are frequently “refining” a high-level function by calling it with same configuration, you can curry (or use Resig’s partial) the higher-level function to create simple, concise helper methods.
A: Agreeing with Hank Gay - It's extremely useful in certain true functional programming languages - because it's a necessary part. For example, in Haskell you simply cannot take multiple parameters to a function - you cannot do that in pure functional programming. You take one param at a time and build up your function. In JavaScript it's simply unnecessary, despite contrived examples like "converter". Here's that same converter code, without the need for currying:
var converter = function(ratio, symbol, input) {
return (input*ratio).toFixed(2) + " " + symbol;
}
var kilosToPoundsRatio = 2.2;
var litersToUKPintsRatio = 1.75;
var litersToUSPintsRatio = 1.98;
var milesToKilometersRatio = 1.62;
converter(kilosToPoundsRatio, "lbs", 4); //8.80 lbs
converter(litersToUKPintsRatio, "imperial pints", 2.4); //4.20 imperial pints
converter(litersToUSPintsRatio, "US pints", 2.4); //4.75 US pints
converter(milesToKilometersRatio, "km", 34); //55.08 km
I badly wish Douglas Crockford, in "JavaScript: The Good Parts", had given some mention of the history and actual use of currying rather than his offhanded remarks. For the longest time after reading that, I was boggled, until I was studying Functional programming and realized that's where it came from.
After some more thinking, I posit there is one valid use case for currying in JavaScript: if you are trying to write using pure functional programming techniques using JavaScript. Seems like a rare use case though.
A: I found functions that resemble python's functools.partial more useful in JavaScript:
function partial(fn) {
return partialWithScope.apply(this,
Array.prototype.concat.apply([fn, this],
Array.prototype.slice.call(arguments, 1)));
}
function partialWithScope(fn, scope) {
var args = Array.prototype.slice.call(arguments, 2);
return function() {
return fn.apply(scope, Array.prototype.concat.apply(args, arguments));
};
}
Why would you want to use it? A common situation where you want to use this is when you want to bind this in a function to a value:
var callback = partialWithScope(Object.function, obj);
Now when callback is called, this points to obj. This is useful in event situations or to save some space because it usually makes code shorter.
Currying is similar to partial with the difference that the function the currying returns just accepts one argument (as far as I understand that).
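That difference can be sketched in a few lines (curry3, partial, and add are illustrative helpers, not from any particular library):

```javascript
// curry: each call takes exactly one argument until all are supplied.
function curry3(fn) {
  return function (a) {
    return function (b) {
      return function (c) {
        return fn(a, b, c);
      };
    };
  };
}

// partial: fix some arguments now, take all the rest in one later call.
function partial(fn) {
  var fixed = Array.prototype.slice.call(arguments, 1);
  return function () {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(null, fixed.concat(rest));
  };
}

function add(a, b, c) { return a + b + c; }

curry3(add)(1)(2)(3);   // 6 - one argument per call
partial(add, 1, 2)(3);  // 6 - two fixed up front, one supplied later
```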
A: Consider the filter function, and say you want to write a callback for it.
let x = [1,2,3,4,5,6,7,11,12,14,15];
let results = x.filter(callback);
Assume we want to output only even numbers, so:
let callback = x => x % 2 === 0;
Now imagine we want to implement our callback such that,
depending on the scenario, it outputs even numbers which are above some threshold
(such a threshold should be configurable).
We can't easily make that threshold a parameter of the callback function, because filter invokes the callback and by default passes it the array element and index.
How would you implement this?
This is a good use case for currying:
let x = [1,2,3,4,5,6,7,11,12,14,15];
let callback = (threshold) => (x) => (x % 2==0 && x > threshold);
let results1 = x.filter(callback(5)); // Even numbers higher than 5
let results2 = x.filter(callback(10)); // Even numbers higher than 10
console.log(results1,results2);
A: @Hank Gay
In response to EmbiggensTheMind's comment:
I can't think of an instance where currying—by itself—is useful in JavaScript; it is a technique for converting function calls with multiple arguments into chains of function calls with a single argument for each call, but JavaScript supports multiple arguments in a single function call.
In JavaScript—and I assume most other actual languages (not lambda calculus)—it is commonly associated with partial application, though. John Resig explains it better, but the gist is that you have some logic that will be applied to two or more arguments, and you only know the value(s) for some of those arguments.
You can use partial application/currying to fix those known values and return a function that only accepts the unknowns, to be invoked later when you actually have the values you wish to pass. This provides a nifty way to avoid repeating yourself when you would have been calling the same JavaScript built-ins over and over with all the same values but one. To steal John's example:
String.prototype.csv = String.prototype.split.partial(/,\s*/);
var results = "John, Resig, Boston".csv();
alert( (results[1] == "Resig") + " The text values were split properly" );
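Note that partial is not a JavaScript built-in. A minimal prepend-style sketch that makes the example above runnable (an illustration only - Resig's actual version is slightly different and also supports undefined placeholders):

```javascript
// Fix the leading arguments now; forward `this` and the remaining
// arguments when the returned function is eventually called.
Function.prototype.partial = function () {
  var fn = this;
  var fixed = Array.prototype.slice.call(arguments);
  return function () {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(this, fixed.concat(rest));
  };
};

// The split separator is pre-filled; `this` is still the string.
String.prototype.csv = String.prototype.split.partial(/,\s*/);
var results = "John, Resig, Boston".csv();
results[1]; // "Resig"
```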
A: It's no magic or anything... just a pleasant shorthand for anonymous functions.
partial(alert, "FOO!") is equivalent to function(){alert("FOO!");}
partial(Math.max, 0) corresponds to function(x){return Math.max(0, x);}
The calls to partial (MochiKit terminology. I think some other libraries give functions a .curry method which does the same thing) look slightly nicer and less noisy than the anonymous functions.
A: I know it's an old thread but I will have to show how this is being used in JavaScript libraries:
I will use the lodash.js library to describe these concepts concretely.
Example:
var fn = function(a,b,c){
return a + b + c + (this.greet || '');
}
Partial Application:
var partialFnA = _.partial(fn, 1,3);
Currying:
var curriedFn = _.curry(fn);
Binding:
var boundFn = _.bind(fn, object, 1, 3); // object = {greet: '!'}
usage:
curriedFn(1)(3)(5); // gives 9
or
curriedFn(1,3)(5); // gives 9
or
curriedFn(1)(_, 3)(5); // gives 9
partialFnA(5); //gives 9
boundFn(5); //gives 9!
difference:
after currying we get a new function with no parameters pre-bound.
after partial application we get a function which has some parameters pre-bound.
in binding we can bind a context which will be used to replace 'this'; if nothing is bound, the default for any function is the window scope.
Advice: There is no need to reinvent the wheel. Partial application/binding/currying are very much related. You can see the difference above. Use this terminology anywhere and people will recognise what you are doing without issues in understanding, plus you will have to use less code.
A: Here's an interesting AND practical use of currying in JavaScript that uses closures:
function converter(toUnit, factor, offset, input) {
offset = offset || 0;
return [((offset + input) * factor).toFixed(2), toUnit].join(" ");
}
var milesToKm = converter.curry('km', 1.60936, undefined);
var poundsToKg = converter.curry('kg', 0.45460, undefined);
var farenheitToCelsius = converter.curry('degrees C', 0.5556, -32);
milesToKm(10); // returns "16.09 km"
poundsToKg(2.5); // returns "1.14 kg"
farenheitToCelsius(98); // returns "36.67 degrees C"
This relies on a curry extension of Function, although as you can see it only uses apply (nothing too fancy):
Function.prototype.curry = function() {
if (arguments.length < 1) {
return this; // nothing to curry with - return function
}
var __method = this;
var args = Array.prototype.slice.call(arguments);
return function() {
return __method.apply(this, args.concat(Array.prototype.slice.call(arguments)));
}
}
A: As for libraries using it, there's always Functional.
When is it useful in JS? Probably the same times it is useful in other modern languages, but the only time I can see myself using it is in conjunction with partial application.
A: I would say that, most probably, all the animation libraries in JS use currying. Rather than having to pass, for each call, a set of affected elements and a function describing how the elements should behave to a higher-order function that handles all the timing stuff, it's generally easier for the customer to release, as the public API, functions like "slideUp" and "fadeIn" that take only elements as arguments and that are just curried functions returning the higher-order function with the default "animation function" built in.
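That shape could be sketched like this (all names here are hypothetical, and the animator is synchronous for brevity; a real library would drive each step from a timer):

```javascript
// A generic animator: runs stepFn over the element for a number of steps,
// with progress going from 0 up to 1.
function animate(stepFn, totalSteps, element) {
  for (var i = 1; i <= totalSteps; i++) {
    stepFn(element, i / totalSteps);
  }
  return element;
}

// Curry in the "animation function" and duration so the public API
// takes only elements:
function makeEffect(stepFn, totalSteps) {
  return function (element) {
    return animate(stepFn, totalSteps, element);
  };
}

var slideUp = makeEffect(function (el, p) { el.height = (1 - p) * 100; }, 10);
var fadeIn  = makeEffect(function (el, p) { el.opacity = p; }, 10);

slideUp({ height: 100 }); // -> { height: 0 }
fadeIn({ opacity: 0 });   // -> { opacity: 1 }
```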
A: Here's an example.
I'm instrumenting a bunch of fields with JQuery so I can see what users are up to. The code looks like this:
$('#foo').focus(trackActivity);
$('#foo').blur(trackActivity);
$('#bar').focus(trackActivity);
$('#bar').blur(trackActivity);
(For non-JQuery users, I'm saying that any time a couple of fields get or lose focus, I want the trackActivity() function to be called. I could also use an anonymous function, but I'd have to duplicate it 4 times, so I pulled it out and named it.)
Now it turns out that one of those fields needs to be handled differently. I'd like to be able to pass a parameter in on one of those calls to be passed along to our tracking infrastructure. With currying, I can.
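A sketch of what that might look like (trackActivity's signature and the extra source parameter are assumptions for illustration; the jQuery wiring is omitted):

```javascript
var log = [];

// Hypothetical: trackActivity is curried so one field can pass extra
// context to the tracking infrastructure while the others use a default.
function trackActivity(source) {
  return function (event) {
    // In the real code this would report to the tracking infrastructure.
    log.push(event.type + ":" + (source || "default"));
  };
}

var plain = trackActivity();           // handler for the ordinary fields
var special = trackActivity("field3"); // the field that needs extra context

plain({ type: "focus" });
special({ type: "blur" });
// log is now ["focus:default", "blur:field3"]
```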
A: JavaScript functions are what other functional languages call lambdas. They can be used to compose new APIs (more powerful or complex functions) based on another developer's simple input. Currying is just one of the techniques. You can use it to create a simplified API that calls a complex API. If you are the developer who uses the simplified API (for example, you use jQuery to do simple manipulation), you don't need to use curry. But if you want to create the simplified API, curry is your friend. You have to write a JavaScript framework (like jQuery or MooTools) or library; then you can appreciate its power. I wrote an enhanced curry function at http://blog.semanticsworks.com/2011/03/enhanced-curry-method.html . You don't need the curry method to do currying; it just helps, but you can always do it manually by writing a function A(){} that returns another function B(){}. To make it more interesting, use function B() to return another function C().
A: I agree that at times you would like to get the ball rolling by creating a pseudo-function that will always have the value of the first argument filled in. Fortunately, I came across a brand new JavaScript library called jPaq (http://jpaq.org/) which provides this functionality. The best thing about the library is the fact that you can download your own build which contains only the code that you will need.
A: Just wanted to add some resources for Functional.js:
Lecture/conference explaining some applications
http://www.youtube.com/watch?v=HAcN3JyQoyY
Updated Functional.js library:
https://github.com/loop-recur/FunctionalJS
Some nice helpers (sorry new here, no reputation :p):
/loop-recur/PreludeJS
I've been using this library a lot recently to reduce the repetition in a JS IRC client's helper library. It's great stuff - really helps clean up and simplify code.
In addition, if performance becomes an issue (but this lib is pretty light), it's easy to just rewrite using a native function.
A: You can use native bind for a quick, one-line solution
function clampAngle(min, max, angle) {
var result, delta;
delta = max - min;
result = (angle - min) % delta;
if (result < 0) {
result += delta;
}
return min + result;
};
var clamp0To360 = clampAngle.bind(null, 0, 360);
console.log(clamp0To360(405)) // 45
A: Another stab at it, from working with promises.
(Disclaimer: JS noob, coming from the Python world. Even there, currying is not used all that much, but it can come in handy on occasion. So I cribbed the currying function - see links)
First, I am starting with an ajax call. I have some specific processing to do on success, but on failure, I just want to give the user the feedback that calling something resulted in some error. In my actual code, I display the error feedback in a bootstrap panel, but am just using logging here.
I've modified my live url to make this fail.
function ajax_batch(e){
var url = $(e.target).data("url");
//induce error
url = "x" + url;
var promise_details = $.ajax(
url,
{
headers: { Accept : "application/json" },
// accepts : "application/json",
beforeSend: function (request) {
if (!this.crossDomain) {
request.setRequestHeader("X-CSRFToken", csrf_token);
}
},
dataType : "json",
type : "POST"}
);
promise_details.then(notify_batch_success, fail_status_specific_to_batch);
}
Now, here in order to tell the user that a batch failed, I need to write that info in the error handler, because all it is getting is a response from the server.
I still only have the info available at coding time - in my case I have a number of possible batches, but I don't know which one has failed without parsing the server response about the failed url.
function fail_status_specific_to_batch(d){
console.log("bad batch run, dude");
console.log("response.status:" + d.status);
}
Let's do it. Console output is:
console:
bad batch run, dude
utility.js (line 109)
response.status:404
Now, let's change things a bit and use a reusable generic failure handler, but also one that is curried at runtime with both the known-at-code-time calling context and the run-time info available from event.
... rest is as before...
var target = $(e.target).text();
var context = {"user_msg": "bad batch run, dude. you were calling :" + target};
var contexted_fail_notification = curry(generic_fail, context);
promise_details.then(notify_batch_success, contexted_fail_notification);
}
function generic_fail(context, d){
console.log(context);
console.log("response.status:" + d.status);
}
function curry(fn) {
var slice = Array.prototype.slice,
stored_args = slice.call(arguments, 1);
return function () {
var new_args = slice.call(arguments),
args = stored_args.concat(new_args);
return fn.apply(null, args);
};
}
console:
Object { user_msg="bad batch run, dude. you were calling :Run ACL now"}
utility.js (line 117)
response.status:404
utility.js (line 118)
More generally, given how widespread callback usage is in JS, currying seems like quite a useful tool to have.
https://javascriptweblog.wordpress.com/2010/04/05/curry-cooking-up-tastier-functions/
http://www.drdobbs.com/open-source/currying-and-partial-functions-in-javasc/231001821?pgno=2
A: I asked a similar question at https://softwareengineering.stackexchange.com/questions/384529/a-real-life-example-of-using-curry-function
But only after using Ramda did I finally appreciate the usefulness of curry. So I will argue that if we need to chain functions together to process some input data one step at a time, e.g. the promise chain example in the article Favoring Curry, then using curry in the "function first, data last" style really does make the code look clean!
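A hand-rolled sketch of that "function first, data last" style (curry2 here is illustrative; Ramda's R.curry is more general):

```javascript
// Minimal two-argument curry: call with both args, or one at a time.
function curry2(fn) {
  return function (a, b) {
    return arguments.length >= 2
      ? fn(a, b)
      : function (b2) { return fn(a, b2); };
  };
}

// "Function first, data last": the operation comes first, the data
// flows through the preloaded steps afterwards.
var map = curry2(function (f, xs) { return xs.map(f); });
var filter = curry2(function (p, xs) { return xs.filter(p); });

var keepEven = filter(function (x) { return x % 2 === 0; });
var double = map(function (x) { return x * 2; });

double(keepEven([1, 2, 3, 4])); // [4, 8]
```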
A: Here you have a practical example of where currying is being used at the moment.
https://www.joshwcomeau.com/react/demystifying-styled-components/
Basically he is creating a poor man's styled-components and uses currying to "preload" the name of the tag when creating a new style for it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "186"
}
|
Q: using dotnetnuke (dnn) with subversion Currently I am developing sites using the DNN framework. My development and staging environment is the same: the client is viewing the same site which I am using for development.
I have started using TortoiseSVN (Subversion) for maintaining versions and backups. I am using a file-based SVN repository for it.
The issue is that SVN creates a hidden .svn folder in every folder. This folder and the files inside it show up in the portal system during file selection and at many different locations, like the FCKeditor file browser, icon selection for a module/page, and skin selection.
I would like to hide this folder for the entire application; it should not show up anywhere.
A: Do you have to have the same environment for development and staging? I would really recommend against it. Even if you have them on the same server, I think you should have them at least in separate virtual directories.
Assuming you have then done that, it is simple to keep the '.svn' directories hidden: you simply export your SVN repository from dev to staging. Staging will no longer be a working copy, so the '.svn' directories will not be present. This also allows you to test potentially breaking changes without affecting the client, and it keeps the staging environment more stable.
A: You can hide the .svn folders in DNN but you'll have to modify the core.
Probably an easier solution is to exclude the folder Portals/[PortalID] from your repository, but that depends on what you're developing. Do you need the Portal's files in your repository?
A: Personally if you are NOT modifying the core of DNN, I wouldn't check in the core system, and only your custom modules, skins etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: MySQL foreign keys - how to enforce one-to-one across tables? If I have a table in MySQL which represents a base class, and I have a bunch of tables which represent the fields in the derived classes, each of which refers back to the base table with a foreign key, is there any way to get MySQL to enforce the one-to-one relationship between the derived table and the base table, or does this have to be done in code?
Using the following quick 'n' dirty schema as an example, is there any way to get MySQL to ensure that rows in both product_cd and product_dvd cannot share the same product_id? Is there a better way to design the schema to allow the database to enforce this relationship, or is it simply not possible?
CREATE TABLE IF NOT EXISTS `product` (
`product_id` int(10) unsigned NOT NULL auto_increment,
`product_name` varchar(50) NOT NULL,
`description` text NOT NULL,
PRIMARY KEY (`product_id`)
) ENGINE = InnoDB;
CREATE TABLE `product_cd` (
`product_cd_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
`product_id` INT UNSIGNED NOT NULL ,
`artist_name` VARCHAR( 50 ) NOT NULL ,
PRIMARY KEY ( `product_cd_id` ) ,
INDEX ( `product_id` )
) ENGINE = InnoDB;
ALTER TABLE `product_cd` ADD FOREIGN KEY ( `product_id` )
REFERENCES `product` (`product_id`)
ON DELETE RESTRICT ON UPDATE RESTRICT ;
CREATE TABLE `product_dvd` (
`product_dvd_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
`product_id` INT UNSIGNED NOT NULL ,
`director` VARCHAR( 50 ) NOT NULL ,
PRIMARY KEY ( `product_dvd_id` ) ,
INDEX ( `product_id` )
) ENGINE = InnoDB;
ALTER TABLE `product_dvd` ADD FOREIGN KEY ( `product_id` )
REFERENCES `product` (`product_id`)
ON DELETE RESTRICT ON UPDATE RESTRICT ;
@Skliwz, can you please provide more detail about how triggers can be used to enforce this constraint with the schema provided?
@boes, that sounds great. How does it work in situations where you have a child of a child? For example, if we added product_movie and made product_dvd a child of product_movie? Would it be a maintainability nightmare to make the check constraint for product_dvd have to factor in all child types as well?
A: Enforcing a 1:0-1 or 1:1 relationship can be achieved by defining a unique constraint on the foreign key's columns, so only one combination can exist. Normally this would be the primary key of the child table.
If the FK is on a primary or unique key of the referenced tables it will constrain them to values present in the parent and the unique constraint on the column or columns restricts them to uniqueness. This means that the child table can only have values corresponding to the parent in the constrained columns and each row must have a unique value. Doing this enforces that the child table will have at most one row corresponding to the parent record.
A: If you got rid of product-dvd-id and product-cd-id, and used the product-id as the primary key for all three tables, you could at least make sure that no two DVD or no two CD use the same product-id. Plus there would be less ids to keep track of.
And you may need some kind of type column in the products table.
A: You could just add a foreign key from one primary key to the other primary key. Because PK's have to be unique, you automaticly get a one-to-one relation.
A: To make sure that a product is either a cd or a dvd I would add a type column and make it part of the primary key. In the derived table you add a check constraint for the type. In the example I set cd to 1 and you could make dvd = 2 and so on for each derived table.
CREATE TABLE IF NOT EXISTS `product` (
`product_id` int(10) unsigned NOT NULL auto_increment,
`product_type` int NOT NULL,
`product_name` varchar(50) NOT NULL,
`description` text NOT NULL,
PRIMARY KEY (`product_id`, `product_type`)
) ENGINE = InnoDB;
CREATE TABLE `product_cd` (
`product_id` INT UNSIGNED NOT NULL ,
`product_type` int NOT NULL DEFAULT 1 CHECK (`product_type` = 1),
`artist_name` VARCHAR( 50 ) NOT NULL ,
PRIMARY KEY ( `product_id`, `product_type` )
) ENGINE = InnoDB;
ALTER TABLE `product_cd` ADD FOREIGN KEY ( `product_id`, `product_type` )
REFERENCES `product` (`product_id`, `product_type`)
ON DELETE RESTRICT ON UPDATE RESTRICT ;
A: The only effective way to maintain a one-to-one relationship is to set the foreign-key column to UNIQUE; this way we're sure only one record will be created for it. And if you try to violate this constraint you get the following error (MySQL 5.1):
`1 error(s) saving changes to table testdb`.table4: INSERT INTO testdb.table4 (idtable4, label, table3_idtable3) VALUES (0, 'soso', 0) 1062: Duplicate entry '0' for key 'table3_idtable3_UNIQUE' Rollback complete
A: If you use MySQL 5.x you can use triggers for these kinds of constraints.
Another (suboptimal) option would be to use a "type" column in the parent table to silently ignore duplication, and to be able to choose the correct "extension table".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Class library with support for several persistence strategies I am developing a C++ class library containing domain model classes, and I would like to add support for instantiating these classes from various persistence mechanisms, i.e. databases and file. The user of the class library should be given an interface(?) against which to program a class that can transfer the data from/to the persistence mechanism.
I know of the Data Access Object pattern which seems to work for Java, but I'm not exactly sure how to apply it to C++. Are there other solutions?
A: Boost Serialization provides some pretty useful stuff for working with serializing C++ types, but how well it will match the interface you desire I don't know. It supports both intrusive and non-intrusive designs, so is pretty flexible.
A: C++ supports multiple inheritance so you can have a generic persistence API and inherit a persistence mechanism. This would still have to use introspection to get out the class metadata, but you would still have this issue with any persistence layer.
Alternatively you could do something similar but use the metadata to drive a code generator that fills in the 'Getters' and 'Setters' for the persistence layer.
Any persistence layer will typically use one or the other approach, so your problem is hooking the loading mechanism into the persistence layer. I think this makes your problem little different from a single persistence layer but tackling it from the other direction. Rather than building domain classes onto a persistence framework you are providing a set of domain classes with the hooks for a persistence framework that third parties can plug their data access mechanism into.
I think that once you provide access to class metadata and callbacks the perisistence mechanism is relatively straightforward. Look at the metadata components of any convenient C++ O/R mapping framework and understand how they work. Encapsulate this with an API in one of the base classes of your domain classes and provide a generic getter/setter API for instantiation or persisting. The rest is up to the person implementing the persistence layer.
Edit: I can't think of a C++ library with the type of pluggable persistence mechanism you're describing, but I did something in Python that could have had this type of facility added. The particular implementation used facilities in Python with no direct C++ equivalent, although the basic principle could probably be adapted to work with C++.
In Python, you can intercept accesses to instance variables by overriding __getattr__() and __setattr__(). The persistence mechanism actually maintained its own data cache behind the scenes. When the functionality was mixed into the class (done through multiple inheritance), it overrode the default behaviour for member access and checked whether the attribute being queried matched anything in its dictionary. Where this happened, the call was redirected to get or set an item in the data cache.
The cache had metadata of its own. It was aware of relationships between entities within its data model, and knew which attribute names to intercept to access data. The way this worked separated it from the database access layer and could (at least in theory) have allowed the persistence mechanism to be used with different drivers. There is no inherent reason that you couldn't have (for example) built a driver that serialised it out to an XML file.
Making something like this work in C++ would be a bit more fiddly, and it may not be possible to make the object cache access as transparent as it was with this system. You would probably be best with an explicit protocol that loads and flushes the object's state to the cache. The code to this would be quite amenable to generation from the cache metadata, but this would have to be done at compile time. You may be able to do something with templates or by overriding the -> operator to make the access protocol more transparent, but this is probably more trouble than it's worth.
A: I would avoid serialization. We implemented this for one of our applications in MFC back in 1995; we were smart enough to use independent object versioning and file versioning, but you still end up with a lot of old, messy code hanging around over time.
Imagine certain scenarios: deprecating classes, deprecating members, etc. Each presents a new problem. Now we use compressed "XML-type" streams; we can add new data and maintain backward compatibility.
Reading and writing the file is abstracted from mapping the data to the objects, we can now switch file formats, add importers/exporters without modification to our core business objects.
That being said, some developers love serialization. My own experience is that switching code bases, platforms, languages, or toolkits all brings along a lot of problems; reading and writing your data should not be one of them.
Additionally using a standard data format, with some proprietary key, means its a lot easier to work with 3rd parties.
A: You might like to look at boost serialization. Not having used it I can't say whether to recommend it or not. Boost libraries are typically high quality.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Mount TrueCrypt volume on login in Ubuntu Is it possible to automount a TrueCrypt volume when logging in to Ubuntu 8.04? It's already storing the wireless network keys using the Seahorse password manager. Could TrueCrypt be made to fetch its volume password from the same keyring? Currently this would seem like the most convenient way to store my source code on the USB stick I carry around to boot from.
A: Although I'm currently not a Gentoo user (on Ubuntu now), I used to be one, for years, and had learned, that it's a good thing to search for linux answers on forums.gentoo.org
and the Gentoo wiki.
I had found these, HTH:
*
*http://forums.gentoo.org/viewtopic-t-691788-highlight-truecrypt.html
*http://forums.gentoo.org/viewtopic-t-657404-highlight-truecrypt.html
A: I can't really remember where I found this solution, but it was working for me on Ubuntu Karmic with gdm.
You have to edit the /etc/gdm/Init file and add the following:
if !(echo `mount` | grep -q "/home/your_username type")
then
truecrypt /dev/sdaxxx /home/your_username
fi
Unfortunately it doesn't work in the newer Precise Pangolin release (Ubuntu 12.04), since it doesn't come with the gdm package. Does anybody know how to init TrueCrypt for this Ubuntu release?
A: Apparently one solution will be to update to Ubuntu 8.10 which by default supports an encrypted directory for each user, mounted at login. It's not the same as TrueCrypt but has other strengths and weaknesses.
There's also a way to get TrueCrypt working with the login password.
A: I don't know much about Truecrypt, but if it can be mounted with a script, you could write such a script and place it in your session startup (System -> Preferences -> Session, I think). If it needs a password on the command line you could have the script launch gnome-terminal for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to convert date to timestamp in PHP? How do I get timestamp from e.g. 22-09-2008?
A: Use the PHP function strtotime():
echo strtotime('2019/06/06');
See also date() - Format a local time/date.
A:
This method works on both Windows and Unix and is time-zone aware, which is probably what you want if you work with dates.
If you don't care about timezone, or want to use the time zone your server uses:
$d = DateTime::createFromFormat('d-m-Y H:i:s', '22-09-2008 00:00:00');
if ($d === false) {
die("Incorrect date string");
} else {
echo $d->getTimestamp();
}
1222093324 (This will differ depending on your server time zone...)
If you want to specify in which time zone, here EST. (Same as New York.)
$d = DateTime::createFromFormat(
'd-m-Y H:i:s',
'22-09-2008 00:00:00',
new DateTimeZone('EST')
);
if ($d === false) {
die("Incorrect date string");
} else {
echo $d->getTimestamp();
}
1222093305
Or if you want to use UTC. (Same as "GMT".)
$d = DateTime::createFromFormat(
'd-m-Y H:i:s',
'22-09-2008 00:00:00',
new DateTimeZone('UTC')
);
if ($d === false) {
die("Incorrect date string");
} else {
echo $d->getTimestamp();
}
1222093289
Regardless, it's always a good starting point to be strict when parsing strings into structured data. It can save awkward debugging in the future. Therefore I recommend always specifying the date format.
A: If you know the format, use strptime, because strtotime does a guess for the format, which might not always be correct. Since strptime is not implemented on Windows, there is a custom function:
*
*http://nl3.php.net/manual/en/function.strptime.php#86572
Remember that the return value tm_year counts from 1900 and tm_mon runs from 0 to 11.
Example:
$a = strptime('22-09-2008', '%d-%m-%Y');
$timestamp = mktime(0, 0, 0, $a['tm_mon']+1, $a['tm_mday'], $a['tm_year']+1900)
A: Given that the function strptime() does not work for Windows and strtotime() can return unexpected results, I recommend using date_parse_from_format():
$date = date_parse_from_format('d-m-Y', '22-09-2008');
$timestamp = mktime(0, 0, 0, $date['month'], $date['day'], $date['year']);
A: If you want to know for sure whether a date gets parsed into something you expect, you can use DateTime::createFromFormat():
$d = DateTime::createFromFormat('d-m-Y', '22-09-2008');
if ($d === false) {
die("Woah, that date doesn't look right!");
}
echo $d->format('Y-m-d'), PHP_EOL;
// prints 2008-09-22
It's obvious in this case, but e.g. 03-04-2008 could be 3rd of April or 4th of March depending on where you come from :)
A: Using mktime:
list($day, $month, $year) = explode('-', '22-09-2008');
echo mktime(0, 0, 0, $month, $day, $year);
A: <?php echo date('M j Y g:i A', strtotime('2013-11-15 13:01:02')); ?>
http://php.net/manual/en/function.date.php
A: $time = '22-09-2008';
echo strtotime($time);
A: Here is how I'd do it:
function dateToTimestamp($date, $format, $timezone='Europe/Belgrade')
{
//returns an array containing day start and day end timestamps
$old_timezone=date_timezone_get();
date_default_timezone_set($timezone);
$date=strptime($date,$format);
$day_start=mktime(0,0,0,++$date['tm_mon'],++$date['tm_mday'],($date['tm_year']+1900));
$day_end=$day_start+(60*60*24);
date_default_timezone_set($old_timezone);
return array('day_start'=>$day_start, 'day_end'=>$day_end);
}
$timestamps=dateToTimestamp('15.02.1991.', '%d.%m.%Y.', 'Europe/London');
$day_start=$timestamps['day_start'];
This way, you let the function know what date format you are using and even specify the timezone.
A: function date_to_stamp( $date, $slash_time = true, $timezone = 'Europe/London', $expression = "#^\d{2}([^\d]*)\d{2}([^\d]*)\d{4}$#is" ) {
$return = false;
$_timezone = date_default_timezone_get();
date_default_timezone_set( $timezone );
if( preg_match( $expression, $date, $matches ) )
$return = date( "Y-m-d " . ( $slash_time ? '00:00:00' : "h:i:s" ), strtotime( str_replace( array($matches[1], $matches[2]), '-', $date ) . ' ' . date("h:i:s") ) );
date_default_timezone_set( $_timezone );
return $return;
}
// expression may need changing in relation to timezone
echo date_to_stamp('19/03/1986', false) . '<br />';
echo date_to_stamp('19**03**1986', false) . '<br />';
echo date_to_stamp('19.03.1986') . '<br />';
echo date_to_stamp('19.03.1986', false, 'Asia/Aden') . '<br />';
echo date('Y-m-d h:i:s') . '<br />';
//1986-03-19 02:37:30
//1986-03-19 02:37:30
//1986-03-19 00:00:00
//1986-03-19 05:37:30
//2012-02-12 02:37:30
A: <?php echo date('U') ?>
If you want to store it in a MySQL column of type timestamp, the following works very well (only in PHP 5 or later):
<?php $timestamp_for_mysql = date('c') ?>
A: Using strtotime() function you can easily convert date to timestamp
<?php
// set default timezone
date_default_timezone_set('America/Los_Angeles');
//define date and time
$date = date("d M Y H:i:s");
// output
echo strtotime($date);
?>
More info: http://php.net/manual/en/function.strtotime.php
Online conversion tool: http://freeonlinetools24.com/
A: There is also strptime() which expects exactly one format:
$a = strptime('22-09-2008', '%d-%m-%Y');
$timestamp = mktime(0, 0, 0, $a['tm_mon']+1, $a['tm_mday'], $a['tm_year']+1900);
Warnings:
*
*This function is not implemented on Windows
*This function has been DEPRECATED as of PHP 8.1.0. Relying on this function is highly discouraged.
A: Here is a very simple and effective solution using the preg_split and mktime functions (note that the older split() function was removed in PHP 7):
$date = "30/07/2010 13:24"; //Date example
list($day, $month, $year, $hour, $minute) = preg_split('#[/ :]#', $date);
//Arrange the variables according to your date format and its separators
$timestamp = mktime($hour, $minute, 0, $month, $day, $year);
echo date("r", $timestamp);
It worked like a charm for me.
A: With DateTime API:
$dateTime = new DateTime('2008-09-22');
echo $dateTime->format('U');
// or
$date = new DateTime('2008-09-22');
echo $date->getTimestamp();
The same with the procedural API:
$date = date_create('2008-09-22');
echo date_format($date, 'U');
// or
$date = date_create('2008-09-22');
echo date_timestamp_get($date);
If the above fails because you are using an unsupported format, you can use
$date = DateTime::createFromFormat('!d-m-Y', '22-09-2008');
echo $date->format('U');
// or
$date = date_create_from_format('!d-m-Y', '22-09-2008');
echo date_format($date, 'U');
Note that if you do not set the !, the time portion will be set to current time, which is different from the first four which will use midnight when you omit the time.
Yet another alternative is to use the IntlDateFormatter API:
$formatter = new IntlDateFormatter(
'en_US',
IntlDateFormatter::FULL,
IntlDateFormatter::FULL,
'GMT',
IntlDateFormatter::GREGORIAN,
'dd-MM-yyyy'
);
echo $formatter->parse('22-09-2008');
Unless you are working with localized date strings, the easier choice is likely DateTime.
A: Be careful with functions like strtotime() that try to "guess" what you mean (it doesn't really guess, of course; the parsing rules are documented in the PHP manual).
Indeed 22-09-2008 will be parsed as 22 September 2008, as it is the only reasonable thing.
How will 08-09-2008 be parsed? Probably 08 September 2008, since with dashes strtotime() assumes the European d-m-Y order; with slashes (08/09/2008) it assumes the American m/d/Y order and would give 09 August 2008.
What about 2008-09-50? Some versions of PHP parse this as 20 October 2008.
So, if you are sure your input is in DD-MM-YYYY format, it's better to use the solution offered by @Armin Ronacher.
A: For PHP >= 5.3, 7 and 8 this may work:
$date = date_parse_from_format('Y-m-d', "2022-11-15"); //here you can give your desired date in your desired format;
//just keep in mind that the date and format must match, and that
//date_parse_from_format() uses DateTime-style format characters, not %-codes.
$timestamp = mktime(0, 0, 0, $date['month'], $date['day'], $date['year']); //this will return the timestamp
$finalDate= date('Y-m-d H:i:s', $timestamp); //now you can convert your timestamp to desired dateTime format.
Docs:
*
*date_parse_from_format()
*mktime()
*date()
A: If you already have the date in that format, you only need to call the "strtotime" function in PHP.
$date = '22-09-2008';
$timestamp = strtotime($date);
echo $timestamp; // 1222041600
Or in a single line:
echo strtotime('22-09-2008');
Short and simple.
A: Please be careful with time zones if you save dates in a database, as I hit an issue when comparing dates from MySQL that were converted to timestamps using strtotime(). You must use exactly the same time zone before converting a date to a timestamp; otherwise strtotime() will use the server's default time zone.
Please see this example: https://3v4l.org/BRlmV
function getthistime($type, $modify = null) {
$now = new DateTime('now', new DateTimeZone('Asia/Baghdad'));
if($modify) {
$now->modify($modify);
}
if(!isset($type) || $type == 'datetime') {
return $now->format('Y-m-d H:i:s');
}
if($type == 'time') {
return $now->format('H:i:s');
}
if($type == 'timestamp') {
return $now->getTimestamp();
}
}
function timestampfromdate($date) {
return DateTime::createFromFormat('Y-m-d H:i:s', $date, new DateTimeZone('Asia/Baghdad'))->getTimestamp();
}
echo getthistime('timestamp')."--".
timestampfromdate(getthistime('datetime'))."--".
strtotime(getthistime('datetime'));
//getthistime('timestamp') == timestampfromdate(getthistime('datetime')) (true)
//getthistime('timestamp') == strtotime(getthistime('datetime')) (false)
A: If you're looking to convert a UTC datetime (2016-02-14T12:24:48.321Z) to timestamp, here's how you'd do it:
function UTCToTimestamp($utc_datetime_str)
{
preg_match_all('/(.+?)T(.+?)\.(.*?)Z/i', $utc_datetime_str, $matches_arr);
$datetime_str = $matches_arr[1][0]." ".$matches_arr[2][0];
return strtotime($datetime_str . ' UTC'); // interpret the string as UTC, not the server time zone
}
$my_utc_datetime_str = '2016-02-14T12:24:48.321Z';
$my_timestamp_str = UTCToTimestamp($my_utc_datetime_str);
A: I have used this format:
$presentDateTime = strtotime(date('Y-m-d H:i:s'));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "379"
}
|
Q: Performance penalty for working with interfaces in C++? Is there a runtime performance penalty when using interfaces (abstract base classes) in C++?
A: Short Answer: No.
Long Answer:
It is not the base class or the number of ancestors a class has in its hierarchy that affects its speed. The only thing that matters is the cost of a method call.
A non virtual method call has a cost (but can be inlined)
A virtual method call has a slightly higher cost as you need to look up the method to call before you call it (but this is a simple table look up not a search). Since all methods on an interface are virtual by definition there is this cost.
Unless you are writing some hyper speed sensitive application this should not be a problem. The extra clarity that you will receive from using an interface usually makes up for any perceived speed decrease.
A: When you call a virtual function (say through an interface) the program has to do a look up of the function in a table to see which function to call for that object. This gives a small penalty compared to a direct call to the function.
Also, when you use a virtual function the compiler cannot inline the function call. Therefore there could be a penalty to using a virtual function for some small functions. This is generally the biggest performance "hit" you are likely to see. It is really only an issue if the function is small and called many times, say from within a loop.
A: Another alternative that is applicable in some cases is compile-time polymorphism with templates. It is useful, for example, when you want to make an implementation choice at the beginning of the program, and then use it for the duration of the execution. An example with runtime polymorphism:
class AbstractAlgo
{
public:
virtual ~AbstractAlgo() {}
virtual int func() = 0;
};
class Algo1 : public AbstractAlgo
{
public:
virtual int func();
};
class Algo2 : public AbstractAlgo
{
public:
virtual int func();
};
void compute(AbstractAlgo* algo)
{
// Use algo many times, paying virtual function cost each time
}
int main()
{
int which;
AbstractAlgo* algo;
// read which from config file
if (which == 1)
algo = new Algo1();
else
algo = new Algo2();
compute(algo);
}
The same using compile time polymorphism
class Algo1
{
public:
int func();
};
class Algo2
{
public:
int func();
};
template<class ALGO> void compute()
{
ALGO algo;
// Use algo many times. No virtual function cost, and func() may be inlined.
}
int main()
{
int which;
// read which from config file
if (which == 1)
compute<Algo1>();
else
compute<Algo2>();
}
A: I don't think that the cost comparison is between virtual function call and a straight function call. If you are thinking about using a abstract base class (interface), then you have a situation where you want to perform one of several actions based of the dynamic type of an object. You have to make that choice somehow. One option is to use virtual functions. Another is a switch on the type of the object, either through RTTI (potentially expensive), or adding a type() method to the base class (potentially increasing memory use of each object). So the cost of the virtual function call should be compared to the cost of the alternative, not to the cost of doing nothing.
A: Most people note the runtime penalty, and rightly so.
However, in my experience working on large projects, the benefits from clear interfaces and proper encapsulation quickly offset the gain in speed. Modular code can be swapped for an improved implementation, so the net result is a large gain.
Your mileage may vary, and it clearly depend on the application you're developing.
A: Note that multiple inheritance bloats the object instance with multiple vtable pointers. With G++ on x86, if your class has a virtual method and no base class, you have one pointer to vtable. If you have one base class with virtual methods, you still have one pointer to vtable. If you have two base classes with virtual methods, you have two vtable pointers on each instance.
Thus, with multiple inheritance (which is what implementing interfaces in C++ amounts to), each object instance grows by roughly one pointer per polymorphic base class. The increase in memory footprint may have indirect performance implications.
A: Functions called using virtual dispatch are not inlined
There is one kind of penalty for virtual functions which is easy to forget about: virtual calls are not inlined in the (common) situation where the type of the object is not known at compile time. If your function is small and suitable for inlining, this penalty may be very significant, as you are not only adding a call overhead, but the compiler is also limited in how it can optimize the calling function (it has to assume the virtual function may have changed some registers or memory locations; it cannot propagate constant values between the caller and the callee).
Virtual call cost depends on platform
As for the call overhead penalty compared to a normal function call, the answer depends on your target platform. If you are targeting a PC with an x86/x64 CPU, the penalty for calling a virtual function is very small, as modern x86/x64 CPUs can perform branch prediction on indirect calls. However, if you are targeting a PowerPC or some other RISC platform, the virtual call penalty may be quite significant, because indirect calls are never predicted on some platforms (Cf. PC/Xbox 360 Cross Platform Development Best Practices).
A: One thing that should be noted is that virtual function call cost can vary from one platform to another. On consoles they may be more noticeable, as usually vtable call means a cache miss and can screw branch prediction.
A: There is a small penalty per virtual function call compared to a regular call. You are unlikely to observe a difference unless you are doing hundreds of thousands of calls per second, and the price is often worth paying for added code clarity anyway.
A: Using abstract base classes in C++ generally mandates the use of a virtual function table, all your interface calls are going to be looked up through that table. The cost is tiny compared to a raw function call, so be sure that you need to be going faster than that before worrying about it.
A: The only main difference I know of is that, since you're not using a concrete class, inlining is (much?) harder to do.
A: The only thing I can think of is that virtual methods are a little bit slower to call than non-virtual methods, because the call has to go through the virtual method table.
However, this is a bad reason to screw up your design. If you need more performance, use a faster server.
A: As for any class that contains a virtual function, a vtable is used. Obviously, invoking a method through a dispatching mechanism like a vtable is slower than a direct call, but in most cases you can live with that.
A: Yes, but nothing noteworthy to my knowledge. The performance hit is because of the 'indirection' you have in each method call.
However, it really depends on the compiler you're using since some compilers are not able to inline the method calls within the classes inheriting from the abstract base class.
If you want to be sure you should run your own tests.
A: Yes, there is a penalty. Something which may improve performance on your platform is to use a non-abstract class with no virtual functions. Then use a member function pointer to your non-virtual function.
A: I know it's an uncommon viewpoint, but even mentioning this issue makes me suspect you're putting way too much thought into the class structure. I've seen many systems that had way too many "levels of abstraction", and that alone made them prone to severe performance problems - not due to the cost of method calls, but due to the tendency to make unnecessary calls. If this happens over multiple levels, it's a killer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
}
|
Q: Insert data from database into Word according to the table format in the document I need to insert some data from SQL Server into a Word document, following the table format already in the document. I know how to do that with bookmarks and the Office interop API, but calling the Word process is slow, and it couples the bookmark definitions to the code. Is it possible to do this without starting the Word process? If not, is there a template engine that can do this?
A: You may want to look at a custom document writer, rather than using the COM Wrapped API from Microsoft. I have heard good things about OfficeWriter. It's not free, but speed never is.
It doesn't require Word on the server.
http://officewriter.softartisans.com/officewriter-59.aspx
A: I don't have an exact answer for what you wish to do. However you may wish to think about building the whole document on your server.
MS Excel 97 onwards supports creating a simple XML or HTML (with tables) file and just calling the file something-uniqueid.xls
It's possible that MS Word also does something similar. Take any basic HTML file (use <h1> <h2> <u> tags for a start) and change the name to something.doc See if Word will open it by double clicking it.
If this works you can serve up the whole document as a html file but tell the client that it is called something-unique-id#.doc
For this to work from a web server you will need to set the HTTP headers Content-type: application/msword and Content-disposition: Attachment; filename=something-unique-id.doc
Please check the MIME type for msword; I'm not sure if that is correct.
Last but not least to be 100% sure try using URLs with the very last GET variable set to .doc this means your URL should look like /listing.asp?var1=abc&var2=def&output=.doc
This was necessary nine years ago to give 100% coverage of the browsers. You'd have to test whether it was still required.
A: If you need this for Word 2003, why not just use the WordML for that? Developing with XML Documents in Word
A: Not sure if this will help any, but if it is tabular data from SQL Server you need it might be possible to pull it into Excel first (through an embedded query) then embed the Excel table in the Word doc (OLE).
Sounds pretty clugy, but I've done worse. :-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: SVN ignore versioned on update I have this setup where in my development copy I can commit changes on a certain file to the repository. Then in my production copy, which does checkouts only, I would normally edit this file because this contains references which are environment independent. Is there any way I can ignore this file on the subsequent checkouts/updates without using svn:ignore? More like a --ignore-files /projectroot/config.php or --ignore-files -F ignoredfiles.txt flag.
A: I would suggest renaming the file in the repository from config.php to config.php.sample. This is the file that you would edit to change the default options. For deployment, either to your development environment or to the production server, you would copy config.php.sample to config.php and edit it without worrying about future conflicts.
A: Another solution is maintaining all the configuration files for every environment in SVN. Depending on the value of an environment variable, only some of the files are used.
Advantages:
*
*All files are in SVN.
*No copying of files on deploy/checkout/update.
Disadvantages:
*
*All files are in SVN.
*There is an extra environment variable to manage.
A: An alternative workaround is to use the tsvn:ignore-on-commit property. It will make files always start off unchecked in TortoiseSVN's commit dialog. If you'd want to commit it, you need to manually check it.
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-commit.html
(bottom of the page)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to check if an OLEDB driver is installed on the system? How can I make sure that a certain OLEDB driver is installed when I start my application? I use ADO from Delphi and would like to display a descriptive error message if the driver is missing. The error that's returned from ADO isn't always that user-friendly.
There is probably a nice little function that returns all installed providers, but I haven't found it.
A: Each provider has a GUID associated with its class. To find the guid, open regedit and search the registry for the provider name. For example, search for "Microsoft Jet 4.0 OLE DB Provider". When you find it, copy the key (the GUID value) and use that in a registry search in your application.
function OleDBExists : boolean;
var
reg : TRegistry;
begin
Result := false;
// See if Advantage OLE DB Provider is on this PC
reg := TRegistry.Create;
try
reg.RootKey := HKEY_LOCAL_MACHINE;
Result := reg.OpenKeyReadOnly( '\SOFTWARE\Classes\CLSID\{C1637B2F-CA37-11D2-AE5C-00609791DC73}' );
finally
reg.Free;
end;
end;
A: You can get an ADO provider name and check it in the registry at the path HKEY_CLASSES_ROOT\[Provider_Name].
A: This is an old question but I had the same problem now and maybe this can help others.
In Delphi 7 there is a procedure in ADODB, GetProviderNames, that fills a string list with the provider names.
Usage example:
names := TStringList.Create;
ADODB.GetProviderNames(names);
if names.IndexOf('SQLNCLI10')<>-1 then
st := 'Provider=SQLNCLI10;'
else if names.IndexOf('SQLNCLI')<>-1 then
st := 'Provider=SQLNCLI;'
else if names.IndexOf('SQLOLEDB')<>-1 then
st := 'Provider=SQLOLEDB;';
A: Wouldn't the easiest way just be trying to make a connection at start-up and catching the error?
I mean you might get a few different errors back depending on, for example, whether the user is online, but those are cases that you should be able to test for.
A: I believe the OLEDB objects in question are buried someplace in the registry, since OLEDB / ADO is a COM solution. My guess would be to see if you can find the GUID that your driver is installed as in the registry.
A: namespace Common {
public class CLSIDHelper {
[DllImport("ole32.dll")]
static extern int CLSIDFromProgID([MarshalAs(UnmanagedType.LPWStr)] string lpszProgID, out Guid pclsid);
public static Guid RetrieveGUID(string Provider) {
Guid CLSID = Guid.Empty;
int Ok = CLSIDFromProgID(Provider, out CLSID);
if (Ok == 0)
return CLSID;
return null;
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: What does "Optimize Code" option really do in Visual Studio? The name of the option tells you something, but what do Visual Studio and the compiler really do, and what are the real consequences?
Edit: If you search Google you can find this address, but that is not really what I am looking for. I wonder what really happens. For example, why do loops take less time?
A: Without optimizations the compiler produces very dumb code - each command is compiled in a very straightforward manner, so that it does the intended thing. The Debug builds have optimizations disabled by default, because without the optimizations the produced executable matches the source code in a straightforward manner.
Variables kept in registers
Once you turn on the optimizations, the compiler applies many different techniques to make the code run faster while still doing the same thing. The most obvious difference between optimized and unoptimized builds in Visual C++ is the fact the variable values are kept in registers as long as possible in optimized builds, while without optimizations they are always stored into the memory. This affects not only the code speed, but it also affects debugging. As a result of this optimization the debugger cannot reliably obtain a variable value as you are stepping through the code.
Other optimizations
There are multiple other optimizations applied by the compiler, as described in /O Options (Optimize Code) MSDN docs. For a general description of various optimizations techniques see Wikipedia Compiler Optimization article.
A: The short answer is: use /Ox and let the compiler do its job.
The long answer: the effect of different kind of optimizations is impossible to predict accurately. Sometimes optimizing for fast code will actually yield smaller code than when optimizing for size. If you really want to get the last 0.01% of performance (speedwise or sizewise), you have to benchmark different combination of options.
Also, recent versions of Visual Studio have options for more advanced optimizations such as link-time optimization and profile-guided optimization.
A: From Paul Vick's blog:
*
*It removes any NOP instructions that we would otherwise emit to assist in debugging. When optimizations are off (and debugging information is turned on), the compiler will emit NOP instructions for lines that don't have any actual IL associated with them but which you might want to put a breakpoint on. The most common example of something like this would be the "End If" of an "If" statement - there's no actual IL emitted for an End If, so if we didn't emit a NOP, the debugger wouldn't let you set a breakpoint on it. Turning on optimizations forces the compiler not to emit the NOPs.
*We do a simple basic block analysis of the generated IL to remove any dead code blocks. That is, we break apart each method into blocks of IL separated by branch instructions. By doing a quick analysis of how the blocks interrelate, we can identify any blocks that have no branches into them. Thus, we can figure out code blocks that will never be executed and can be omitted, making the assembly slightly smaller. We also do some minor branch optimizations at this point as well - for example, if you GoTo another GoTo statement, we just optimize the first GoTo to jump to the second GoTo's target.
*We emit a DebuggableAttribute with IsJITOptimizerDisabled set to False. Basically, this allows the run-time JIT to optimize the code how it sees fit, including reordering and inlining code. This will produce more efficient and smaller code, but it means that trying to debug the code can be very challenging (as anyone who's tried it will tell you). The actual list of what the JIT optimizations are is something that I don't know - maybe someone like Chris Brumme will chime in at some point on this.
The long and the short of it is that the optimization switch enables optimizations that might make setting breakpoints and stepping through your code harder.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
}
|
Q: Platform Builder command line building I'm building a project from the command line using WinCE Platform Builder, and I need RTTI to be enabled so that my project works correctly. I tried setting the option RTTI="YES" in the sources file and in the makefile of each directory, and I also tried adding it at the end of CDEFINES, but when I try to build the project I get the D9025 warning, which says that my "/GR was overridden by /GR-" (enable RTTI was overridden by don't enable RTTI).
My question is, how can I find out where that option is set, so that I can modify it?
The thing is that we only copy the sources to some dirs, we specify them in a file named "sources", and then we proceed with the build by issuing the following command: build
What I would like to know is: where does build take its options from? What is the name of the file?
A: Command-Line Warning D9025
If two options specify contradictory or incompatible directives, the directive specified or implied in the option farthest to the right on the command line is used.
If you get this warning when compiling from the development environment, and are not sure where the conflicting options are coming from, consider the following:
An option can be specified either in code or in the project's project settings. If you look at the compiler's Command Line Property Pages and if you see the conflicting options in the All Options field then the options are set in the project's property pages, otherwise, the options are set in source code.
If the options are set in project's property pages, look on the compiler's Preprocessor property page (with the project node selected in the Solution Explorer). If you do not see the option set there, check the Preprocessor property page settings for each source code file (in Solution Explorer) to make sure it's not added there.
If the options are set in code, they could be set either in your code or in the Windows headers. You might try creating a preprocessed file (/P) and searching it for the symbol.
reference: http://msdn.microsoft.com/en-us/library/8k3f51f1(vs.80).aspx
A: Start by looking in sources.cmn, which is a common sources file for the entire build system. If it's not there, keep in mind that every SOURCES file can alter compiler variables for its folder, and the makefiles can also alter things. To add to the fun, the system and platform build batch files can also play with environment variables, which in turn may adjust settings.
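To illustrate, here is a minimal sketch of a per-directory sources file that forces RTTI on. The target name and source list are placeholder assumptions; exactly which setting wins depends on what sources.cmn, the makefiles, and the build batch files do afterwards:

```makefile
# Hypothetical per-directory "sources" file in a Platform Builder tree.
TARGETNAME=mycomponent
TARGETTYPE=LIBRARY
SOURCES=main.cpp helper.cpp

# Ask for RTTI here; note that a later makefile or batch file can still
# append /GR- to the compiler flags and override this, producing D9025.
RTTI=YES
```

If the D9025 warning persists, grepping the build tree for /GR- (starting with sources.cmn and the makefiles) is usually the fastest way to find the overriding setting.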
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Extending ControlCollection in VB.NET I want to extend the basic ControlCollection in VB.NET so I can just add images and text to a self-made control, and then automatically convert them to PictureBoxes and Labels.
So I made a class that inherits from ControlCollection, overrode the Add method, and added the functionality.
But when I run the example, it gives a NullReferenceException.
Here is the code:
Shadows Sub add(ByVal text As String)
Dim LB As New Label
LB.AutoSize = True
LB.Text = text
MyBase.Add(LB) 'Here it gives the exception.
End Sub
I searched on Google, and someone said that the CreateControlsInstance method needs to be overridden. So I did that, but then it gives an InvalidOperationException with an InnerException of NullReferenceException.
How do I implement this?
A: Why not inherit from UserControl to define a custom control that has properties like Text and Image?
A: You are probably better off using just a generic collection anyway. Being a ControlCollection doesn't really do anything special for it.
Public Class MyCollection
    Inherits Collection(Of Control)
End Class
A: If you're inheriting from Control.ControlCollection then you need to provide a New method in your class. Your New method must call ControlCollection's constructor (MyBase.New) and pass it a valid parent control.
If you haven't done this correctly, the NullReferenceException will be thrown in the Add method.
This could also be causing the InvalidOperationException in your CreateControlsInstance method
The following code calls the constructor incorrectly causing the Add method to throw a NullReferenceException...
Public Class MyControlCollection
Inherits Control.ControlCollection
Sub New()
'Bad - you need to pass a valid control instance
'to the constructor
MyBase.New(Nothing)
End Sub
Public Shadows Sub Add(ByVal text As String)
Dim LB As New Label()
LB.AutoSize = True
LB.Text = text
'The next line will throw a NullReferenceException
MyBase.Add(LB)
End Sub
End Class
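For completeness, a sketch of the same class with a constructor that does pass a valid owner; the owner parameter name is an illustrative assumption about how you would wire it up:

```vb
Public Class MyControlCollection
    Inherits Control.ControlCollection

    ' Good - forward a real parent control to the base constructor,
    ' so the collection has a valid owner and Add no longer throws.
    Sub New(ByVal owner As Control)
        MyBase.New(owner)
    End Sub

    Public Shadows Sub Add(ByVal text As String)
        Dim LB As New Label()
        LB.AutoSize = True
        LB.Text = text
        MyBase.Add(LB)
    End Sub
End Class
```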
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I determine the collation of a database in SQL 2005? How do you determine the collation of a database in SQL 2005, for instance if you need to perform a case-insensitive search/replace?
A: The following SQL determines the collation of a database:
SELECT DATABASEPROPERTYEX('{database name}', 'Collation') SQLCollation;
A: Remember that individual columns can override the database collation:
SELECT TABLE_NAME, COLUMN_NAME, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
A: If you want to do a case-insensitive search and can't rely on the database's collation, you could always specifically request it for the query you're interested in. For instance:
SELECT TOP 1 FName, *
FROM People
WHERE FName LIKE '%mich%' COLLATE Latin1_General_CI_AI
I usually have the opposite problem, where I want the case sensitivity but don't have it in the database's collation, so I find myself using the Latin1_General_BIN collation quite a bit in my queries. If you don't already know, you can do:
SELECT *
FROM ::fn_helpcollations()
for a list of the available collations and descriptions of what they're for.
A: Select the Database and run the following command.
sp_helpsort
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to recursively download a folder via FTP on Linux I'm trying to ftp a folder using the command line ftp client, but so far I've only been able to use 'get' to get individual files.
A: There is ncftp, which is available for installation on Linux. It works over the FTP protocol and can be used to download files and folders recursively. It has been used for recursive folder/file transfers and works fine.
Check this link... http://www.ncftp.com/
A: You could rely on wget which usually handles ftp get properly (at least in my own experience). For example:
wget -r ftp://user:pass@server.com/
You can also use -m which is suitable for mirroring. It is currently equivalent to -r -N -l inf.
If you've some special characters in the credential details, you can specify the --user and --password arguments to get it to work. Example with custom login with specific characters:
wget -r --user="user@login" --password="Pa$$wo|^D" ftp://server.com/
As pointed out by @asmaier, watch out that even if -r is for recursion, it has a default max level of 5:
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
If you don't want to miss any subdirectories, better to use the mirroring option, -m:
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite
recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf
--no-remove-listing.
A: If you can, I strongly suggest you tar and bzip (or gzip, whatever floats your boat) the directory on the remote machine—for a directory of any significant size, the bandwidth savings will probably be worth the time to zip/unzip.
A: If you want to stick to command line FTP, you should try NcFTP. Then you can use get -R to recursively get a folder. You will also get completion.
A: wget -r ftp://url
Works perfectly on Red Hat and Ubuntu.
A: ncftp -u <user> -p <pass> <server>
ncftp> mget directory
A: If lftp is installed on your machine, use mirror dir to recursively download a directory, and you are done.
A: You should not use ftp. Like telnet, it does not use a secure protocol, and passwords are transmitted in clear text. This makes it very easy for third parties to capture your username and password.
To copy remote directories remotely, these options are better:
*
*rsync is the best-suited tool if you can log in via ssh, because it copies only the differences and can easily restart in the middle if the connection breaks.
*scp -r is the second-best option to recursively copy directory structures.
To fetch files recursively, you can use a script like this:
https://gist.github.com/flibbertigibbet/8165881
See:
*
*rsync man page
*ssh man page
A: Just to complement the answer given by Thibaut Barrère.
I used
wget -r -nH --cut-dirs=5 -nc ftp://user:pass@server//absolute/path/to/directory
Note the double slash after the server name. If you don't put an extra slash the path is relative to the home directory of user.
*
*-nH avoids the creation of a directory named after the server name
*-nc avoids creating a new file if it already exists on the destination (it is just skipped)
*--cut-dirs=5 lets you take the content of /absolute/path/to/directory and put it in the directory where you launch wget. The number 5 filters out the 5 components of the path; the double slash counts as an extra component.
A: If you can use scp instead of ftp, the -r option will do this for you. I would check to see whether you can use a more modern file transfer mechanism than FTP.
A: Use WGet instead. It supports HTTP and FTP protocols.
wget -r ftp://mydomain.com/mystuff
Good Luck!
reference: http://linux.about.com/od/commands/l/blcmdl1_wget.htm
A: Toggle the interactive prompt off with the prompt command.
Usage:
ftp>cd /to/directory
ftp>prompt
ftp>mget *
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "347"
}
|
Q: How do I get the image paint/paintComponent generates? I have a quick question. How do I get the image generated by a JComponent.paint or paintComponent?
I have a JComponent which I use as a 'workspace' and where I have overridden the paintComponent method. The thing is that my workspace JComponent also has children which have their own paintComponent methods.
So when Swing renders my workspace component, it renders the workspace graphics and then its childrens'.
However, I want to get the image my workspace component generates (which includes the workspace graphics and the children's graphics).
How do I do that?
I tried to call the paintComponent/paint method myself using my own Graphics, but it just returned a black image. Here is what I tried:
public void paintComponent(Graphics g) {
if (bufferedImage != null) {
g.drawImage(bufferedImage, 0, 0, this);
}
else {
g.setColor(Color.WHITE);
g.fillRect(0, 0, bufferedImage.getWidth(), bufferedImage.getHeight());
}
}
public BufferedImage getImage() {
BufferedImage hello = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics g = hello.getGraphics();
paintComponent( g );
return hello;
}
Any thoughts or comments are welcome! :)
A: If you call getImage too early, your component will not have been displayed yet and will still have a 0 width and height. Have you made sure you're calling it at a sufficiently late time? Try printing the component's size to stdout and see what its dimensions are.
A: Seems like the problem was due to how the BufferedImage was created. When using:
BufferedImage hello = bufferedImage.getSubimage(0, 0, getWidth(), getHeight());
instead, it worked. I also had to switch from paintComponent to paint to render the children.
Problem solved, but if anyone knows. Why did my:
BufferedImage hello = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_ARGB);
Always render a black image? Off-topic, but it could be interesting to know :)
A: Do not call paintComponent() or paint() from the outside. Instead, let your image be created within those (overwritten) methods. Then you can be sure that your image will actually contain the painted content of the component.
Here is an example application. The ImagePanel will actually grab the graphics contents of the component every time it is painted, which might be a bit wasteful, so you may want to adjust the frequency with which this is done.
public class SomeApp extends JFrame {
private static class ImagePanel extends JPanel {
private BufferedImage currentImage;
public BufferedImage getCurrentImage() {
return currentImage;
}
@Override
public void paint(Graphics g) {
Rectangle tempBounds = g.getClipBounds();
currentImage = new BufferedImage(tempBounds.width, tempBounds.height, BufferedImage.TYPE_INT_ARGB);
super.paint(g);
super.paint(currentImage.getGraphics());
}
}
public SomeApp() {
setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
setSize(800,600);
int matrixSize = 4;
setLayout(new BorderLayout());
add(new JLabel("Wonderful Application"), BorderLayout.NORTH);
final ImagePanel imgPanel = new ImagePanel();
imgPanel.setLayout(new GridLayout(matrixSize,matrixSize));
for(int i=1; i<=matrixSize*matrixSize; i++) {
imgPanel.add(new JButton("A Button" + i));
}
add(imgPanel, BorderLayout.CENTER);
final JPanel buttonPanel = new JPanel();
buttonPanel.add(new JButton(new AbstractAction("get image") {
@Override
public void actionPerformed(ActionEvent e) {
JOptionPane.showMessageDialog(SomeApp.this, new ImageIcon(imgPanel.getCurrentImage()));
}
}));
add(buttonPanel, BorderLayout.SOUTH);
}
public static void main(String[] args) {
System.setProperty("swing.defaultlaf", UIManager.getSystemLookAndFeelClassName());
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new SomeApp().setVisible(true);
}
});
}
}
Like Martijn said, your image was probably black because the component wasn't even painted/displayed yet. You could add a ComponentListener to be notified when it is displayed. Your "solution" with getSubimage has nothing to do with the actual problem. I recommend you remove it.
A: Changing the new BufferedImage to TYPE_INT_RGB solves this for me. I can't give an explanation though - just a Swing mystery.
public BufferedImage getImage() {
BufferedImage hello = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics g = hello.getGraphics();
paintComponent( g );
return hello;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Transparent form on the desktop I want to create a c# application with multiple windows that are all transparent with some text on.
The tricky part is making these forms sit on top of the desktop but under the desktop icons. Is this possible?
A: Just making the window transparent is very straight forward:
this.BackColor = Color.Fuchsia;
this.TransparencyKey = Color.Fuchsia;
You can do something like this to make it so you can still interact with the desktop or anything else under your window:
public const int WM_NCHITTEST = 0x84;
public const int HTTRANSPARENT = -1;
protected override void WndProc(ref Message message)
{
if ( message.Msg == (int)WM_NCHITTEST )
{
message.Result = (IntPtr)HTTRANSPARENT;
}
else
{
base.WndProc( ref message );
}
}
A: Thanks for the tips, Jeff. It's still not quite what I'm after. I would effectively like the window to appear as if it were part of the desktop, so icons could sit on top of my form.
Maybe there is a different way to do it. Can I actually draw text and graphics directly on to the desktop?
A: The method described above by Jeff Hillman is effective in making the window transparent, which should give you the ability to have it appear as though it's part of the desktop (which you mentioned is your goal).
One issue you may run into, which I have just recently run into as well, is drawing to the window with any anti-aliasing flags set. Specifically, using DrawText, any text that's rendered with anti-aliasing flags set is rendered as though the background were NOT transparent. The end result is that you get text with a slight off-color border around it. I'm sure this would hold true for anything else as well, though I haven't tried.
Are there any thoughts on how to resolve that?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I perform a case-sensitive search and replace in SQL 2000/2005? In order to perform a case-sensitive search/replace on a table in a SQL Server 2000/2005 database, you must use the correct collation.
How do you determine whether the default collation for a database is case-sensitive, and if it isn't, how to perform a case-sensitive search/replace?
A: If you have different cases of the same word in the same field, and only want to replace specific cases, then you can use collation in your REPLACE function:
UPDATE tableName
SET fieldName =
REPLACE(
REPLACE(
fieldName COLLATE Latin1_General_CS_AS,
'camelCase' COLLATE Latin1_General_CS_AS,
'changedWord'
),
'CamelCase' COLLATE Latin1_General_CS_AS,
'ChangedWord'
)
This will result in:
This is camelCase 1 and this is CamelCase 2
becoming:
This is changedWord 1 and this is ChangedWord 2
A: SELECT testColumn FROM testTable
WHERE testColumn COLLATE Latin1_General_CS_AS = 'example'
SELECT testColumn FROM testTable
WHERE testColumn COLLATE Latin1_General_CS_AS = 'EXAMPLE'
SELECT testColumn FROM testTable
WHERE testColumn COLLATE Latin1_General_CS_AS = 'eXaMpLe'
Don't assume the default collation will be case sensitive, just specify a case sensitive one every time (using the correct one for your language of course)
A: Determine whether the default collation is case-sensitive like this:
select charindex('RESULT', 'If the result is 0 you are in a case-sensitive collation mode')
A result of 0 indicates you are in a case-sensitive collation mode, 8 indicates it is case-insensitive.
If the collation is case-insensitive, you need to explicitly declare the collation mode you want to use when performing a search/replace.
Here's how to construct an UPDATE statement to perform a case-sensitive search/replace by specifying the collation mode to use:
update ContentTable
set ContentValue = replace(ContentValue COLLATE Latin1_General_BIN, 'THECONTENT', 'TheContent')
from StringResource
where charindex('THECONTENT', ContentValue COLLATE Latin1_General_BIN) > 0
This will match and replace 'THECONTENT', but not 'TheContent' or 'thecontent'.
A: Can be done in multiple statements.
This will not work if you have long strings that contain both capitalized and lowercase words you intend to replace.
You might also need to use a different collation; this one is accent- and case-sensitive.
UPDATE T SET [String] = ReplacedString
FROM [dbo].[TranslationText] T,
(SELECT [LanguageCode]
,[StringNo]
,REPLACE([String], 'Favourite','Favorite') ReplacedString
FROM [dbo].[TranslationText]
WHERE
[String] COLLATE Latin1_General_CS_AS like '%Favourite%'
AND [LanguageCode] = 'en-us') US_STRINGS
WHERE
T.[LanguageCode] = US_STRINGS.[LanguageCode]
AND T.[StringNo] = US_STRINGS.[StringNo]
UPDATE T SET [String] = ReplacedString
FROM [dbo].[TranslationText] T,
(SELECT [LanguageCode]
,[StringNo]
, REPLACE([String], 'favourite','favorite') ReplacedString
FROM [dbo].[TranslationText]
WHERE
[String] COLLATE Latin1_General_CS_AS like '%favourite%'
AND [LanguageCode] = 'en-us') US_STRINGS
WHERE
T.[LanguageCode] = US_STRINGS.[LanguageCode]
AND T.[StringNo] = US_STRINGS.[StringNo]
A: First of all check this:
http://technet.microsoft.com/en-us/library/ms180175(SQL.90).aspx
You will see that CI specifies case-insensitive and CS specifies case-sensitive.
A: Also, this might be useful:
select * from fn_helpcollations() - this gets all the collations your server supports.
select * from sys.databases - this has a column that specifies which collation each database on your server uses.
A: You can either specify the collation every time you query the table or you can apply the collation to the column(s) permanently by altering the table.
If you do choose the query method, it's beneficial to include the case-insensitive search arguments as well. You will see that SQL Server chooses a more efficient execution plan if you include them. For example:
SELECT testColumn FROM testTable
WHERE testColumn COLLATE Latin1_General_CS_AS = 'eXaMpLe'
and testColumn = 'eXaMpLe'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Which GUI toolkit would you use for a touchscreen interface? The only experience I have so far with a touchscreen interface was one where everything was custom drawn, and I get the feeling that's not the most efficient way of doing it (even the most basic layout change is hell to make). I know plenty of GUI toolkits intended for keyboard & mouse interfaces, but can you advise something suited to touchscreens? Target platform is Windows, but cross-platform would be nice.
A: Check out the Windows Presentation Foundation (WPF). It uses XML (XAML) to define the interface and it is therefore quite easy to create an interface which would be easy to use with the touchscreen.
.NET 3.0 required.
A: I work at Little Caesars, and all orders are handled through the computer system. They use a touch screen interface which resembles a blown-up version of normal forms. Buttons are about 20x larger than what you would normally get, and text is enlarged quite a bit as well. It looked rather simple and I doubt they used a special framework for it. I recommend you give that a shot, using a standard toolkit.
A: I’d like to add Qt as a UI toolkit with (nowadays) solid touch support: QTouchEvent.
It is also cross-platform for every major platform, which includes embedded devices.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Getting java.lang.ClassCastException: javax.swing.KeyStroke when creating a JSplitPane I'm getting a random, unreproducible error when initializing a JSplitPane with JDK 1.5.0_08. Note that this does not occur every time, but about 80% of the time:
Exception in thread "AWT-EventQueue-0" java.lang.ClassCastException: javax.swing.KeyStroke
at java.util.TreeMap.compare(TreeMap.java:1093)
at java.util.TreeMap.put(TreeMap.java:465)
at java.util.TreeSet.add(TreeSet.java:210)
at javax.swing.plaf.basic.BasicSplitPaneUI.installDefaults(BasicSplitPaneUI.java:364)
at javax.swing.plaf.basic.BasicSplitPaneUI.installUI(BasicSplitPaneUI.java:300)
at javax.swing.JComponent.setUI(JComponent.java:652)
at javax.swing.JSplitPane.setUI(JSplitPane.java:350)
at javax.swing.JSplitPane.updateUI(JSplitPane.java:378)
at javax.swing.JSplitPane.<init>(JSplitPane.java:332)
at javax.swing.JSplitPane.<init>(JSplitPane.java:287)
...
Thoughts? I've tried cleaning and rebuilding my project so as to minimize the probability of corrupted class files.
Edit #1 See http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6434148 - seems to be a JDK bug. Any known workarounds? None are listed on the bug entry page.
A: After doing some Googling on bugs.sun.com, it looks like this might be a JDK bug that was only fixed in JDK 6.
See http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6434148
A: The same exception was thrown when I upgraded my Java version and DbVisualizer didn't support JRE 7. Note that:
Support for Java 7 was introduced in DbVisualizer 8.0 for Windows and Linux/UNIX.
Support for Java 7 on Mac OS X was introduced in DbVisualizer 9.1.
So the solution that worked for me:
Windows/Unix/Linux:
In the DbVisualizer installation directory there is an .install4j directory,
In this directory create a file named pref_jre.cfg if it doesn't already exist,
Open the file in a text editor,
Add the complete path to the root directory for the Java installation you want to use.
Example: C:\Program Files\Java\jre7
A: java.lang.ClassCastException: javax.swing.KeyStroke cannot be cast to java.lang.Comparable...
If you are getting the above error after installing Java 7 in DbVisualizer,
then add an environment variable:
'DBVIS_JAVA_HOME' as the 'Variable Name', with the Java path as the value,
for example "C:\SWDTOOLS\IBM\RAD85\runtimes\base_v7\java"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I find out what collations are available in SQL 2000/2005 If I need to choose a collation mode to work with, how do I know what collations are available?
A: Use this query to list the available collation modes:
SELECT *
FROM fn_helpcollations()
A: select distinct COLLATION_NAME from INFORMATION_SCHEMA.COLUMNS order by 1
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is there a DRM scheme that works? We help our clients to manage and publish their media online - images, video, audio, whatever. They always ask my boss whether they can stop users from copying their media, and he asks me, and I always tell him the same thing: no. If the users can view the media, then a sufficiently determined user will always be able to make a copy. But am I right?
I've been asked again today, and I promised my boss I'd ask about it online. So - is there a DRM scheme that will work? One that will stop users making copies without stopping legitimate viewing of the media?
And if there isn't, how do I convince my boss?
A: Nothing is perfect, but you can make copying a little more difficult = less worthwhile.
*
*You can watermark preview copies of media.
*You can serial number all copies so they can be tracked.
*You could encrypt the media, only allowing it to be decrypted by software that you control. (e.g. Adobe Acrobat documents can be set read-only. Audible audiobooks can only be played in the Audible player).
*You could supply media that is of little value to anyone but a legitimate user. (e.g. pictures of my friends at a party are of little value to anyone but the people that know them).
IMHO, any attempt at DRM is annoying to legitimate end users, so I wouldn't recommend it.
Perhaps you can convince your boss by asking her to come up with an effective method of DRM, and then demonstrating how to overcome it?
A: Simply put, no, there isn't. Any content that can be viewed can be copied. There's no exception to this at all, unless you can bend the laws of physics in your favor :)
A: Long answer: Only let users browse your site by visiting your office and using a machine located there - under strict supervision, of course.
Short answer: No.
A: The reason you can't stop DRM no matter what is as follows: imagine a bank vault. There has to be a way in to get the money out. If there is a way in, that means someone could get in that way, therefore it is not impenetrable. If the vault is impenetrable, that means no one can get in -- meaning no one can get the money in or out, not even the people who legally have the right to access the money.
A: No. If you let them view it, they can always make a copy of what they saw. You can make it harder for this to happen, but in the end, you can't stop a suitably determined attacker.
A: At some point you will have to abandon whatever encoding/encryption you are using to prevent illegal copies and show the content to the user in plain sight. At that point, at the latest, the user can simply capture the content and make copies. This means that if you cannot control who your users are (or how they are using your technology), you cannot stop them making copies.
Now, granted, making copies of the unencrypted content might not be the most efficient way of copying. For one, depending on where it was captured, it might not be compressed (e.g. if the capturing took place between the video card and the monitor), and therefore might take up a lot of space.
Based on the above, the technical answer -- unless you have enough control -- is that no, you cannot stop users from making illegal copies.
However, you can make it much harder for them to make those copies in the desired format by using encryption or other DRM-related techniques. Depending on your users and the popularity of your content, there might be a point where the effort required to subvert the DRM technologies is higher than what your users are willing to pay/invest. Whether there is such a point solely depends on the nature of your business and your audience.
A: Anything that can be viewed and understood by a human can be viewed and stored by a computer.
The best you can do is obfuscate and attempt to confuse, but any suitably determined user will succeed. You could deliver text as an image, an image with a watermark, or an encrypted file with public/private keys, but the best you can hope for is to track who 'leaked' something rather than stopping it from getting leaked.
A: Right now I can see 12 answers all agreeing that the answer is "No".
If your business relies on your clients' media being published with protection, then your business may already be in trouble. You need to start a conversation with your clients about the content they're generating, why they're generating it and what they hope to get from it. It rather looks like they may have an out-of-date business model, in which case they may be in danger as well.
What the clients are saying they want may be their best attempt to stipulate the solution to a problem that they're not telling you about. Try digging a little deeper into what their actual problem is. Maybe look at the Five Whys for inspiration.
I definitely don't think I'd want to be planning a long-term career on DRM right now...
A: As far as convincing your boss goes, boil things down to essential DRM. You sell something valuable. To prevent your customers from copying it, you lock it in a box. To allow your customers to use it, you give them the key to the box.
Hopefully, the light is beginning to dawn on your boss by this point.
Technology is not a solution. We have a legal system to deal with unlicensed replication of intellectual property. Theft is prevalent in a small segment of the population. My advice would be don't try to sell digital media that appeals to a demographic likely to steal.
A: The main thing you need to identify is the level of user you wish to prevent from copying the content. You will never stop a 1337 h4xx0r from copying your things and passing any knowledge of hacks to more competent techies.
As you wander down the line of the less technically able, there is more you can do (such as the usual DRM) to dissuade them from attempting to copy your content. As you get to the idiot user, there are probably a variety of tricks you can perform that are effective enough to fool them into thinking that they cannot copy the content; however, all it takes is for them to meet a competent techie, and one link, to step up to the next level.
It's a case of very much diminishing returns but there are still users out there who think that just because a website disables the right click that they cannot download the images.
If your clients want to target those users (and offer substantial monies) then it might be worth pursuing a bit of obfuscation but it is a case of diminishing returns and your customers need to appreciate that all they are purchasing is a thin disguise.
A: Short of supplying specifically tailored hardware (which is what Microsoft is pushing with its Trusted Computing 'Palladium' initiative), the answer is no, you can't stop 'em from getting to the bits.
Even in the case of specifically tailored hardware an attacker with enough skills and resources can still get to your content, you just reduce the attack surface enormously.
Of course a video camera will work just as well in many cases, you'd then have to counter that with a specific set of television/monitors. It shortly stops being economically viable.
To convince the boss, just tell him what's easier to understand: you cannot stop someone from placing a camera in front of the television.
A: the answer is simple : no
A: No, there is no way to prevent users from using a camera to take a picture of the screen, or a recorder to record a movie, a song or anything else.
And if you're talking about preventing making "exact" copy of a digitalized content, the answer is still the same: NO.
A: You can't stop the viewing, but by putting a serial number in each viewer's video you can track copies. E.g. in the top right of the video put a small number that is unique to that user. If they copy the video and upload it, you will know who did it. You could also move it around during long videos, or make it appear randomly, to make it harder to remove.
Just an idea. I'm actually anti-DRM.
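A toy sketch of that per-viewer serial number idea (Python for brevity; purely illustrative - the function names and the least-significant-bit scheme here are my own invention, and a real forensic watermark would be spread redundantly through the signal and made robust to re-encoding):

```python
def embed_user_id(samples, user_id, n_bits=32):
    """Hide an n_bits-wide user ID in the least-significant bits of
    the first n_bits samples. A toy forensic watermark: trivially
    removable, but enough to identify careless uploaders."""
    out = list(samples)
    for bit in range(n_bits):
        id_bit = (user_id >> bit) & 1
        out[bit] = (out[bit] & ~1) | id_bit  # overwrite the LSB only
    return out

def extract_user_id(samples, n_bits=32):
    """Read the ID back out of the LSBs of the first n_bits samples."""
    user_id = 0
    for bit in range(n_bits):
        user_id |= (samples[bit] & 1) << bit
    return user_id
```

The point is not that this survives an attack (a single re-encode destroys it), but that each legitimate copy can carry an identifier without visibly or audibly changing the content.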
A: You can have extremely complex DRM (custom player, activation each time something is played/loaded), but it still won't be 100% hacker proof. And honestly, it's just not worth the trouble.
Try to just keep the honest people honest: either have no DRM at all, or just something simple that's easy to implement and will deter 80% of the general public. Leave the other 20% alone; they are probably techie enough that they won't be stopped no matter what.
A: Allow me to argue that the answer is actually "Yes, with qualifications". It is possible to create a DRM system which is sufficiently difficult to crack, such that non-technical users will not be able to copy and redistribute the content, and highly technical users will only be able to do so with great difficulty.
So the original answer is correct: a "suitably motivated" hacker will always be able to get what he wants. But it's possible to set the bar high enough so that the number of suitably motivated hackers is approximately equal to zero.
A: With the hardware in use today, you cannot stop users from copying your media. And current (major) DRM technologies are not even about that.
DRM is about annoying users who want to copy - hopefully so much that most of them won't make copies.
The problem is that by annoying the users, you annoy all users.
That is why I almost never buy anything DRM-protected. And when I do, it's ONLY after I've got a DRM-free copy, so I'm sure that I'm actually able to hear/see a copy of the product.
A: I believe you have misinterpreted your boss's question. Perhaps he doesn't even know the right question to ask, so I'll give you the Q&A that should have occurred.
Boss: Can we stop every determined user from copying our clients' media?
You: No, this is impossible.
Boss: Can we make it difficult, such that the vast majority of determined attackers will be unable to break our content protections?
You: Yes, this is possible.
Boss: Can it be done in a way that does not impact performance of media playback such that it becomes inconvenient to our legitimate users?
You: Yes, this is challenging, but tractable.
Boss: Is it economically feasible to implement such a protection system?
You: That depends on the details of our contracts with media providers. If some providers are unwilling to license desirable content to us because of our unwillingness to protect it for them, it could be an economic imperative. We should hire one or more experts in digital rights management to implement the system if that is the route you decide to take.
A: As for convincing your boss, you might try the arguments from Cory Doctorow's essays.
He has some very good points.
I think the best argument is that you will be spending much programmer resource on writing features that your users will dislike. No one wants their player to say 'you can't listen to this song because it is already on your PC', and implementing this feature will be a pain.
A: If it can be viewed, it can be copied.
And if one person can copy it, he can send it to a million other people.
So it's meaningless to make it hard to copy, because there will always be people able to copy it, who will then proceed to send it to everyone who can't.
The only thing DRM does is make it harder for consumers to legitimately use content. But this is intentional--media providers don't want you to backup your DVDs and convert them to play on an iPod: they want you to buy the same movie again from them in iPod format.
That is the real reason for DRM. They know it won't work to stop pirates; they do know it will work to stop legitimate fair use.
A: Yes, there is a largely uncracked DRM in place. It's called Super Audio Compact Disc (SACD), a fantastic 5.1 surround sound format that was made to supersede the original CD. It never really caught on as much as Sony, its creator, had hoped, yet there is still a large following amongst audiophiles.
The main reason it's largely unbreakable is that you need a special player to read the discs; they cannot be played on a computer or CD player unless they're dual layer, meaning SACD and CD data on one disc, and even then only the CD data can be ripped, not the SACD.
So if you've got music you want to share and your clients are into high-end audio, then SACD is probably the way to go if you want unbreakable/unshareable audio.
A: I'm inclined to agree that in a practical sense, there may be no foolproof way to prevent copying, but can I prove it? No, and I haven't heard any airtight proof yet.
Copying is inherent in normal computation, and it is irreversible. For example
X = A; // statement 1
X = B; // statement 2
When statement 2 is executed, there is no way to reverse it because X has no memory of its prior value. That is the essence of copying - forgetting that a copy was made.
From what little I know of quantum computing and cryptography, in that realm all processes are reversible, so it is possible to guarantee that copies can always be detected.
Back in the world of normal computation, if one can control the viewers of information, one can try to ensure that any copy is degraded and not as good as the original. For example, there is the watermark idea, which can be made practically invisible. Or additional information can be added that is not displayed, but which is required to show the image.
I'm not saying strong DRM is possible in normal computing. I'm just saying if it isn't, that's a strong claim, and I'd like to see an airtight proof of it. This field has a number of things once considered impossible, such as public-key cryptography and Dijkstra's mutex algorithm.
A: In the words of a Microsoft engineer, "If your solution lasts for 6 months, that's eternity" - then it is time to move to a newer implementation of the DRM solution. Hence we cannot guarantee a foolproof DRM solution. However, we can make it very hard to crack/hack by keeping less decrypted data in the clear.
A: Although "what can be viewed, can be copied" is clearly true in theory, this does not necessarily sound the death knell for DRM.
If the DRM can control the hardware well enough, through to the display medium, so that there is no leak of data before the display, then any "copy" which is made of the display will likely be imperfect.
For example, a camcorder in a cinema multiplex - certainly, the viewed data has been copied, but highly sub-optimally.
For this "viewed; copied" data to be optimal, it requires a recording device which is able to record every 'bit' of data perfectly (and perhaps work in real-time).
Looking forward, you might get a DRM technology which prevents a product from being viewed when a device capable of copying it is present.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Can I return the 'id' field after a LINQ insert? When I enter an object into the DB with Linq-to-SQL can I get the id that I just inserted without making another db call? I am assuming this is pretty easy, I just don't know how.
A: Try this:
MyContext Context = new MyContext();
Context.YourEntity.Add(obj);
Context.SaveChanges();
int ID = obj._ID;
A: After you commit your object into the db the object receives a value in its ID field.
So:
myObject.Field1 = "value";
// Db is the datacontext
db.MyObjects.InsertOnSubmit(myObject);
db.SubmitChanges();
// You can retrieve the id from the object
int id = myObject.ID;
A: When inserting the generated ID is saved into the instance of the object being saved (see below):
protected void btnInsertProductCategory_Click(object sender, EventArgs e)
{
ProductCategory productCategory = new ProductCategory();
productCategory.Name = "Sample Category";
productCategory.ModifiedDate = DateTime.Now;
productCategory.rowguid = Guid.NewGuid();
int id = InsertProductCategory(productCategory);
lblResult.Text = id.ToString();
}
//Insert a new product category and return the generated ID (identity value)
private int InsertProductCategory(ProductCategory productCategory)
{
ctx.ProductCategories.InsertOnSubmit(productCategory);
ctx.SubmitChanges();
return productCategory.ProductCategoryID;
}
reference: http://blog.jemm.net/articles/databases/how-to-common-data-patterns-with-linq-to-sql/#4
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "186"
}
|
Q: Get Performance Counter Instance Name (w3wp#XX) from ASP.NET worker process ID I would like to display some memory statistics (working set, GCs etc.) on a web page using the .NET/Process performance counters. Unfortunately, if there are multiple application pools on that server, they are differentiated using an index (#1, #2 etc.) but I don't know how to match a process ID (which I have) to that #xx index. Is there a programmatic way (from an ASP.NET web page)?
A: Even though changing registry settings looks quite easy, unfortunately most of us don't have the rights to do it on the server (or we don't want to touch it!). In that case, there is a small workaround. I have blogged about this here.
A: private static string GetProcessInstanceName(int pid)
{
PerformanceCounterCategory cat = new PerformanceCounterCategory("Process");
string[] instances = cat.GetInstanceNames();
foreach (string instance in instances)
{
using (PerformanceCounter cnt = new PerformanceCounter("Process",
"ID Process", instance, true))
{
int val = (int) cnt.RawValue;
if (val == pid)
{
return instance;
}
}
}
throw new Exception("Could not find performance counter " +
"instance name for current process. This is truly strange ...");
}
A: The first hit on Google:
Multiple CLR performance counters appear that have names that resemble "W3wp#1"
When multiple ASP.NET worker processes are running, Common Language Runtime (CLR) performance counters will have names that resemble "W3wp#1" or "W3wp#2" and so on. This was remedied in .NET Framework 2.0 to include a counter named Process ID in the .NET CLR Memory performance object. This counter displays the process ID for an instance. You can use this counter to determine the CLR performance counter that is associated with a process.
Also KB 281884:
By default, Performance Monitor (Perfmon.msc) displays multiple processes that have the same name by enumerating the processes in the following way:
Process#1 Process#2 Process#3
Performance Monitor can also display these processes by appending the process ID (PID) to the name in the following way:
Process_PID
A: The example by chiru does not work in one specific case: when you have two versions of the same program, with the same name, one of which is not .NET, and you start the .NET version after the non-.NET version. The .NET version will be named application#1, but when you access the CLR perf counters using this name, the instance names on those counters are the name without the #1, so you get failures.
Nick.
A: I know it has been answered before, but just for the sake of complete working code I'm posting this solution. Please note this code based on the method submitted by M4N in this chain:
public static long GetProcessPrivateWorkingSet64Size(int process_id)
{
long process_size = 0;
Process process = Process.GetProcessById(process_id);
if (process == null) return process_size;
string instanceName = GetProcessInstanceName(process.Id);
var counter = new PerformanceCounter("Process", "Working Set - Private", instanceName, true);
process_size = Convert.ToInt64(counter.NextValue()) / 1024; // counter reports bytes; convert to KB
return process_size;
}
public static string GetProcessInstanceName(int process_id)
{
PerformanceCounterCategory cat = new PerformanceCounterCategory("Process");
string[] instances = cat.GetInstanceNames();
foreach (string instance in instances)
{
using (PerformanceCounter cnt = new PerformanceCounter("Process", "ID Process", instance, true))
{
int val = (int)cnt.RawValue;
if (val == process_id)
return instance;
}
}
throw new Exception("Could not find performance counter ");
}
Also, if you want to get the total memory of multiple instances of the same process use the above methods with the following one:
public static long GetPrivateWorkingSetForAllProcesses(string ProcessName)
{
long totalMem = 0;
Process[] process = Process.GetProcessesByName(ProcessName);
foreach (Process proc in process)
{
long memsize = GetProcessPrivateWorkingSet64Size(proc.Id);
totalMem += memsize;
}
return totalMem;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: IPC on Vista (service and application) I am creating an application on Vista which includes a service and a console application. Both run in the same user account.
In the service I create an event and wait on it. In the console application I open the same event (the problem starts here) and call the SetEvent function. I cannot open the event in the console application (I get error 5, Access is denied). I searched the net and saw something about integrity levels (I am not sure the problem is related to integrity levels); it said that services and applications get different integrity levels.
Here is the part of the code where the IPC occurs:
service
DWORD WINAPI IpcThread(LPVOID lpParam)
{
HANDLE ghRequestEvent = NULL ;
ghRequestEvent = CreateEvent(NULL, FALSE,
FALSE, "Global\\Event1") ; //creating the event
if(NULL == ghRequestEvent)
{
//error
}
while(1)
{
WaitForSingleObject(ghRequestEvent, INFINITE); //waiting for the event
//here some action related to event
}
}
Console Application
Here, in the application, I open the event and set it:
unsigned int
event_notification()
{
HANDLE ghRequestEvent = NULL ;
ghRequestEvent = OpenEvent(SYNCHRONIZE|EVENT_MODIFY_STATE, FALSE, "Global\\Event1") ;
if(NULL == ghRequestEvent)
{
//error
}
SetEvent(ghRequestEvent) ;
}
I am running both applications (the service and the console application) with administrative privileges (I logged in as Administrator and run the console application by right-clicking it and using the option "run as administrator").
The error I am getting in the console application (where I am opening the event) is error 5 (Access is denied).
So it would be very helpful if you could tell me how to do IPC between a service and an application on Vista.
Thanks in advance
Navaneeth
A: Are the service and the application running as the same user with different integrity levels, or are they running as different users?
If it is the former, then this article from MSDN which talks about integrity levels might help. They have some sample code for lowering the integrity level of a file. I'm not sure that this could be relevant for an event though.
#include <sddl.h>
#include <AccCtrl.h>
#include <Aclapi.h>
void SetLowLabelToFile()
{
// The LABEL_SECURITY_INFORMATION SDDL SACL to be set for low integrity
#define LOW_INTEGRITY_SDDL_SACL_W L"S:(ML;;NW;;;LW)"
DWORD dwErr = ERROR_SUCCESS;
PSECURITY_DESCRIPTOR pSD = NULL;
PACL pSacl = NULL; // not allocated
BOOL fSaclPresent = FALSE;
BOOL fSaclDefaulted = FALSE;
LPCWSTR pwszFileName = L"Sample.txt";
if (ConvertStringSecurityDescriptorToSecurityDescriptorW(
LOW_INTEGRITY_SDDL_SACL_W, SDDL_REVISION_1, &pSD, NULL))
{
if (GetSecurityDescriptorSacl(pSD, &fSaclPresent, &pSacl,
&fSaclDefaulted))
{
// Note that psidOwner, psidGroup, and pDacl are
// all NULL and set the new LABEL_SECURITY_INFORMATION
dwErr = SetNamedSecurityInfoW((LPWSTR) pwszFileName,
SE_FILE_OBJECT, LABEL_SECURITY_INFORMATION,
NULL, NULL, NULL, pSacl);
}
LocalFree(pSD);
}
}
If it is the latter you might look at this link which suggests creating a NULL ACL and associating it with the object (in the example it is a named pipe, but the approach is similar for an event I'm sure:
BYTE sd[SECURITY_DESCRIPTOR_MIN_LENGTH];
SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(sa);
sa.bInheritHandle = TRUE;
sa.lpSecurityDescriptor = &sd;
InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
SetSecurityDescriptorDacl(&sd, TRUE, (PACL) 0, FALSE);
CreateNamedPipe(..., &sa);
A: I notice that you are creating the object in the "Global" namespace but are trying to open it in a local namespace. Does adding "Global\" to the name in the open call help?
Also, in the //error area, is there anything there to let you know it wasn't created?
A: First, it is important to conceptually understand what is required. Once that is understood we can take it from there.
On the server, it should look something similar to:
{
HANDLE hEvent;
hEvent = CreateEvent(NULL, TRUE, FALSE, TEXT("MyEvent")); // manual-reset event
while (1)
{
WaitForSingleObject(hEvent, INFINITE);
ResetEvent (hEvent);
/* Do something -- start */
/* Processing 1 */
/* Processing 2 */
/* Do something -- end */
}
}
On the client:
{
HANDLE hEvent;
hEvent = OpenEvent(EVENT_MODIFY_STATE, FALSE, TEXT("MyEvent")); // EVENT_MODIFY_STATE is needed for SetEvent
SetEvent (hEvent);
}
Several points to note:
*
*ResetEvent should be as early as possible, right after WaitForSingleObject or WaitForMultipleObjects. If multiple clients are using the server and first client's processing takes time, second client might set the event and it might not be caught while server processes the first request.
*You should implement some mechanism that would notify the client that server finished processing.
*Before doing any Win32 service mumbo-jumbo, have the server running as a simple application. This will eliminate any security-related problems.
A: @Navaneeth:
Excellent feedback. Since your error is Access Denied, I would change the desired access from EVENT_ALL_ACCESS, which you really don't need, to
(SYNCHRONIZE | EVENT_MODIFY_STATE)
SYNCHRONIZE lets you wait on the event and EVENT_MODIFY_STATE lets you call SetEvent, ResetEvent and PulseEvent.
It is possible that you might need more access, but that is highly unusual.
A: "1800 INFORMATION" is right - this is a UIPI issue; don't use Events in new code anyways, the event signal can be lost if the target blocking on the event happens to be in user-mode APC code when it is fired. The canonical way in Win32 to write a service/application is to use RPC calls to cross the UIPI boundary.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Is there a OpenID provider in Europe? Does anybody know a Europe-based provider for OpenID?
I am not using one of the well-known bigger services (Yahoo,..).
A: I have developed one myself, LoginBuzz. It is primarily Danish, and I have only translated many parts of it within the last week, but it should be fully functional.
It supports simple registration, attribute exchange and PAPE. I have also implemented CardSpace support and text messaging (only available in Denmark at the moment) for a more secure authentication process. In the near future it should get support for client certificates as well.
My main goal is keeping it simple and fast. What you really want is to go back to the site that requested the authentication, not to spend time on the provider's site.
I am very interested in feedback for making it a better service.
A: And of course, there is the Clavid OpenID provider supporting many secure authentication mechanisms.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Trouble Ticket in Microsoft CRM I'm working on customizing a Microsoft Dynamics CRM (4.0) system for my university as a thesis.
My teacher would like to know if it is possible to implement a ticketing system in the CRM so that the users (not the clients) could generate a trouble ticket. For example if their computer doesn't work properly.
I had a look around the internet and found some software that would handle the ticketing, but I couldn't tell whether it can be integrated into the CRM.
Can anyone help me?
Thanks
A: CRM contains all the functionality you would need to build a ticket business object and have the user create a new ticket, assign the ticket to a developer or support tech for work, and resolve the ticket (with notes) when the work has been completed. External software would not be required.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I immediately play a sound when another sound ends using XNA/XACT? This question borders between the world of the audio designer and the programmer. While this question might have to be partially answered by that domain of an audio designer, it is sure a problem for the programmer. In our project, we want to loop a sound (background music) while the game timer is greater than one minute left. When this time is hit, we wish to stop the music as authored, and then immediately continue with ending segment. I have been looking into XACT, and it seems to have support for different events. Unfortunately, the documentation is lacking, and the application is somewhat alien to me as a programmer.
What I am looking to do is something along these lines (different approaches):
*
*When the music stops, I want to tie
an event to play another sound
immediately
*When a marker is
triggered in the music, I want to
play another sound immediately
*I would also like to know in my application when some of these events happen
The problem is that I haven't been able to find any mechanism to auto-play sound when another sound begins and that I can't find a way to hook up with the events made in the XACT project to C#.
If this can't be done (i.e. XACT/XNA lacks support for these operations), please gather your ideas on how to solve this problem with minimal cross-sound time errors. Preferably I would be able to control this as much as possible in C# with calls to XNA.
A: I think I've solved it now.
Here's how I did it.
*
*Select the Cue which you want to change sound after it has been stopped in XACT.
*Set Playlist Type to Interactive.
*Open Cue Transitions.
*Select View by Destination.
*Select the Cue in the (stop) node visible in the tree view.
*In Transition Properties at the right side of the tree view, for Source and Destination:
*
*Set Source to End of Loop.
*Set Destination to Beginning.
*In Transition Properties at the right side of the tree view, for Transitions:
*
*Set Transition Type to Direct Concurrent Transition.
*Set Transitional Sound to the sound you wish to play after the looping sound has completed its loop.
Close the window and test it by playing the Cue. Also test stopping it As Authored to see if the behaviour fulfils your expectations.
Implementation
In code, stop the Cue As Authored to get this behaviour; stopping it Immediate stops the cue without playing the sound outro. I hope this helps other people who might run into this problem in the future. The question and answer weren't as code-oriented as I thought initially. Also, this applies to XNA 2.0; I don't know if there will be other options for controlling this kind of behaviour in XNA 3.0+.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Finding Cell Range With Excel Macros I have many embedded objects (shapes) in a worksheet, and their icons are displayed inside cells. How do I find the cell range in which a shape object is displayed?
Example: When I select B2 and then select the object (shape) in B17, querying Cell.Address still shows B2 - how do I get the cell address as B17?
thanks
A: You can use the Shape properties .TopLeftCell and .BottomRightCell to return the extents of the rectangular range that the shape overlaps.
In your example, YourShape.TopLeftCell.Address should return $B$17
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Test if a Font is installed Is there an easy way (in .Net) to test if a Font is installed on the current machine?
A: Other answers that propose using Font creation only work if FontStyle.Regular is available. Some fonts, for example Verlag Bold, do not have a regular style; creation would fail with the exception Font 'Verlag Bold' does not support style 'Regular'. You'll need to check for the styles that your application will require. A solution follows:
public static bool IsFontInstalled(string fontName)
{
bool installed = IsFontInstalled(fontName, FontStyle.Regular);
if (!installed) { installed = IsFontInstalled(fontName, FontStyle.Bold); }
if (!installed) { installed = IsFontInstalled(fontName, FontStyle.Italic); }
return installed;
}
public static bool IsFontInstalled(string fontName, FontStyle style)
{
bool installed = false;
const float emSize = 8.0f;
try
{
using (var testFont = new Font(fontName, emSize, style))
{
installed = (0 == string.Compare(fontName, testFont.Name, StringComparison.InvariantCultureIgnoreCase));
}
}
catch
{
}
return installed;
}
A: string fontName = "Consolas";
float fontSize = 12;
using (Font fontTester = new Font(
fontName,
fontSize,
FontStyle.Regular,
GraphicsUnit.Pixel))
{
if (fontTester.Name == fontName)
{
// Font exists
}
else
{
// Font doesn't exist
}
}
A: Here's how I would do it:
private static bool IsFontInstalled(string name)
{
using (InstalledFontCollection fontsCollection = new InstalledFontCollection())
{
return fontsCollection.Families
.Any(x => x.Name.Equals(name, StringComparison.CurrentCultureIgnoreCase));
}
}
One thing to note with this is that the Name property is not always what you would expect from looking in C:\WINDOWS\Fonts. For example, I have a font installed called "Arabic Typesetting Regular". IsFontInstalled("Arabic Typesetting Regular") will return false, but IsFontInstalled("Arabic Typesetting") will return true. ("Arabic Typesetting" is the name of the font in Windows' font preview tool.)
As far as resources go, I ran a test where I called this method several times, and the test finished in only a few milliseconds every time. My machine's a bit overpowered, but unless you'd need to run this query very frequently it seems the performance is very good (and even if you did, that's what caching is for).
A: How do you get a list of all the installed fonts?
var fontsCollection = new InstalledFontCollection();
foreach (var fontFamily in fontsCollection.Families)
{
if (fontFamily.Name == fontName) {...} // check if font is installed
}
See InstalledFontCollection class for details.
MSDN:
Enumerating Installed Fonts
A: Going off of GvS' answer:
private static bool IsFontInstalled(string fontName)
{
using (var testFont = new Font(fontName, 8))
return fontName.Equals(testFont.Name, StringComparison.InvariantCultureIgnoreCase);
}
A: Thanks to Jeff, I have read the documentation of the Font class more carefully:
If the familyName parameter
specifies a font that is not installed
on the machine running the application
or is not supported, Microsoft Sans
Serif will be substituted.
The result of this knowledge:
private bool IsFontInstalled(string fontName) {
using (var testFont = new Font(fontName, 8)) {
return 0 == string.Compare(
fontName,
testFont.Name,
StringComparison.InvariantCultureIgnoreCase);
}
}
A: In my case I needed to check the font filename with its extension:
ex: verdana.ttf = Verdana Regular, verdanai.ttf = Verdana Italic
using System.IO;
IsFontInstalled("verdana.ttf")
public bool IsFontInstalled(string ContentFontName)
{
return File.Exists(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Fonts), ContentFontName.ToUpper()));
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: What is the fastest way (in theory at least) to sort a heap? A heap is a list where the following applies:
l[i] <= l[2*i] && l[i] <= l[2*i+1]
for 0 <= i < len(list)
I'm looking for in-place sorting.
A: Just use heap-sort. It is in-place. That would be the most natural choice.
You can also just use your heap as is and sort it with some other algorithm, then re-build your heap from the sorted list. Quicksort is a good candidate because you can be sure it won't run in its worst-case O(n²) order, simply because your heap is already pre-sorted.
That may be faster if your compare function is expensive; heap-sort tends to evaluate the compare function quite often.
A: Well you are half way through a Heap Sort already, by having your data in a heap. You just need to implement the second part of the heap sort algorithm. This should be faster than using quicksort on the heap array.
If you are feeling brave you could have a go at implementing smoothsort, which is faster than heapsort for nearly-sorted data.
A: Sorting a heap in-place kind of sounds like a job for Heap Sort.
I assume memory is constrained, an embedded app, perhaps?
A: Since you already have a heap, couldn't you just use the second phase of the heap sort? It works in place and should be nice and efficient.
A: For in-place sorting, the fastest way follows. Beware of off-by-one errors in my code. Note that this method gives a reversed sorted list which needs to be unreversed in the final step. If you use a max-heap, this problem goes away.
The general idea is a neat one: swap the smallest element (at index 0) with the last element in the heap, bubble that element down until the heap property is restored, shrink the size of the heap by one and repeat.
This isn't the absolute fastest way for non-in-place sorting as David Mackay demonstrates here - you can do better by putting an element more likely to be the smallest at the top of the heap instead of one from the bottom row.
Time complexity is O(n log n) worst case - n iterations, each with up to log n (the height of the heap) passes through the while loop.
int i;
for (int k = len(l) - 1; k > 0; k--) {
    // Move the current minimum to the end of the shrinking heap.
    swap( l, 0, k );
    // Bubble the swapped-in element down until the heap property is
    // restored (0-based children of i are 2*i+1 and 2*i+2).
    i = 0;
    while (i*2 + 1 < k)
    {
        int left = i*2 + 1;
        int right = i*2 + 2;
        // Pick the smaller child; ignore the right child if it lies
        // outside the shrunken heap.
        int swapidx = (right < k && l[right] < l[left]) ? right : left;
        if (l[i] <= l[swapidx])
        {
            // Found right place in the heap, break.
            break;
        }
        swap( l, i, swapidx );
        i = swapidx;
    }
}
// Now reverse the descending list in linear time:
int s = 0;
int e = len(l) - 1;
while (e > s)
{
    swap( l, s, e );
    s++; e--;
}
A: Read the items off the top of the heap one by one. Basically what you have then is heap sort.
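A quick sketch of that "read items off the top" idea in Python (not in-place like the answer above, but it shows the second phase of heap sort directly, using the standard library's binary min-heap):

```python
import heapq

def heap_to_sorted(heap):
    """Repeatedly pop the minimum from a list that satisfies the 0-based
    min-heap invariant (l[i] <= l[2*i+1] and l[i] <= l[2*i+2]).
    Each pop is O(log n), so the whole pass is O(n log n)."""
    out = []
    while heap:
        out.append(heapq.heappop(heap))
    return out

data = [9, 4, 7, 1, 0, 8, 5, 2, 6, 3]
heapq.heapify(data)          # establish the heap property first
print(heap_to_sorted(data))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Since the question's list is already a heap, the `heapify` step would be unnecessary; only the popping phase is the actual sort.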
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: C++ Binary operators order of precedence In what order are the following parameters tested (in C++)?
if (a || b && c)
{
}
I've just seen this code in our application and I hate it, I want to add some brackets to just clarify the ordering. But I don't want to add the brackets until I know I'm adding them in the right place.
Edit: Accepted Answer & Follow Up
This link has more information, but it's not totally clear what it means. It seems || and && are the same precedence, and in that case, they are evaluated left-to-right.
http://msdn.microsoft.com/en-us/library/126fe14k.aspx
A: [http://www.cppreference.com/wiki/operator_precedence] (Found by googling "C++ operator precedence")
That page tells us that &&, in group 13, has higher precedence than || in group 14, so the expression is equivalent to a || (b && c).
Unfortunately, the wikipedia article [http://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Operator_precedence] disagrees with this, but since I have the C89 standard on my desk and it agrees with the first site, I'm going to revise the wikipedia article.
A: From here:
a || (b && c)
This is the default precedence.
A: && (boolean AND) has higher precedence than || (boolean OR). Therefore the following are identical:
a || b && c
a || (b && c)
A good mnemonic rule is to remember that AND is like multiplication and OR is like addition. If we replace AND with * and OR with +, we get a more familiar equivalent:
a + b * c
a + (b * c)
Actually, in Boolean logic, AND and OR act similar to these arithmetic operators:
a b a AND b a * b a OR b a + b
---------------------------------------
0 0 0 0 0 0
0 1 0 0 1 1
1 0 0 0 1 1
1 1 1 1 1 1 (2 really, but we pretend it's 1)
A: To answer the follow-up: obviously the table at MSDN is botched, perhaps by somebody unable to do a decent HTML table (or using a Microsoft tool to generate it!).
I suppose it should look more like the Wikipedia table referenced by Rodrigo, where we have clear sub-sections.
But clearly the accepted answer is right: && has higher priority than ||, just as * has higher priority than +, for example.
The snippet you gave is clear and unambiguous for me, but I suppose adding parentheses wouldn't hurt either.
A: I'm not sure but it should be easy for you to find out.
Just create a small program with a statement that prints out the truth value of:
(true || false && true)
If the result is true, then the || has higher precedence than &&; if it is false, it's the other way around.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/113992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Variable height items in Win32 TreeView using NM_CUSTOMDRAW Is it possible for items in a WIn32 TreeView control to have variable heights when using NM_CUSTOMDRAW?
Right now, I can successfully select variable sized fonts in the dc in NM_CUSTOMDRAW, but the item texts get clipped.
A: You need to set the height of each item using the iIntegral member of the TVITEMEX structure that you specify when you insert the item.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Create anonymous object by Reflection in C# Is there any way to create C# 3.0 anonymous object via Reflection at runtime in .NET 3.5? I'd like to support them in my serialization scheme, so I need a way to manipulate them programmatically.
edited later to clarify the use case
An extra constraint is that I will be running all of it inside a Silverlight app, so extra runtimes are not an option, and not sure how generating code on the fly will work.
A: Here is another way, seems more direct.
object anon = Activator.CreateInstance(existingObject.GetType());
A: Yes, there is.
From memory:
public static T create<T>(T t)
{
    return Activator.CreateInstance<T>();
}

object anon = create(existingAnonymousType);
A: Use reflection to get the Type, use GetConstructor on the type, use Invoke on the constructor.
Edit: Thanks to Sklivvz for pointing out that I answered a question that wasn't asked ;)
The answer to the actual question: I've found that generating C# code and then using CodeDomProvider (but not CodeDOM itself -- terrible) and then compiling that down and reflecting types out of that is the easiest way of doing 'anonymous' objects at runtime.
A: You might want to look into the DLR. I haven't done so myself (yet), but the use case for the DLR (dynamic languages) sounds a lot like what you're trying to do.
Depending on what you want to do the Castle-framework's dynamic proxy object might be a good fit too.
A: You can use Reflection.Emit to generate the required classes dynamically, although it's pretty nasty to code up.
If you decide upon this route, I would suggest downloading the Reflection Emit Language Addin for .NET Reflector, as this allows you to see how existing classes would be built using Reflection.Emit, hence a good method for learning this corner of the framework.
A: You might also want to have a look into the FormatterServices class: MSDN entry on FormatterServices
It contains GetSafeUninitializedObject that will create an empty instance of the class, and several other handy methods when doing serialization.
In reply to comment from Michael:
If you don't have the Type instance for type T, you can always get it from typeof(T). If you have an object of an unknown type, you can invoke GetType() on it in order to get the Type instance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How do I get the current size of a matrix stack in OpenGL? How do I get the current size of a matrix stack (GL_MODELVIEW, GL_PROJECTION, GL_TEXTURE) in OpenGL?
I want this so that I can do some error checking to ensure that in certain parts of the code I can check that the matrix stacks have been left in the original condition.
A: Try:
GLint depth;
glGetIntegerv (GL_MODELVIEW_STACK_DEPTH, &depth);
The enums for the other stacks are:
GL_MODELVIEW_STACK_DEPTH
GL_PROJECTION_STACK_DEPTH
GL_TEXTURE_STACK_DEPTH
If you use multi-texturing, you have more than one texture matrix stack to query. To do so, set the current texture-unit via glActiveTexture();.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Modal popups - usability What are the cases where you'd use a modal popup ?
Does it interrupt the user's flow, if it all of a sudden opens up in his face ?
Would you avoid modal popups in general ? or when should one be careful of using them ?
Edit:
To be a bit more specific, the situation here is this :
I have a menu on the right, (VisualStudio style) when the user wants to add an element, should I expand the menu down and let them select something from it there, and then have to press the OK button, or display a Modal popup forcing them to select.
(the selection step is mandatory.)
A: From Wikipedia:
Frequent uses of modal windows include:
*
*drawing attention to vital pieces of information. This use has been criticised as ineffective.
*blocking the application flow until information required to continue is entered, as for example a password in a login process.
*collecting application configuration options in a centralized dialog. In such cases, typically the changes are applied upon closing the dialog, and access to the application is disabled while the edits are being made.
*warning that the effects of the current action are not reversible. This is a frequent interaction pattern for modal dialogs, but it is also criticised by usability experts as being ineffective for its intended use (protection against errors in destructive actions) and for which better alternatives exist.
A: Personally, I think that modal pop-ups can always be avoided. The most common use of a modal pop-up is to indicate errors, or to seek user input before proceeding. Both of these actions can be accomplished "inline", i.e., by creating suitable actions on the same page itself without a modal pop-up.
E.g., errors in a text field can be indicated by making the background red, or by placing a small error icon next to the field with the error text below it.
Pop-ups are always an irritation to the user and, in my opinion, can be replaced cleverly without losing any functionality at all.
EDIT:
In your situation, a simple solution would be to disable the commit button till the user has made a selection. This will ensure the user hits OK only after a selection is made
A: If you do go the modal popup route, please please add a delay before input is accepted. There are few things as annoying as typing in some application and seeing the tell-tale flash of a dialogue box that implies something popped up, accepted whatever random key you happened to be pressing at the time as its input, and went off to take some random action.
A: IMO, avoid them for anything but stuff that you're absolutely sure requires immediate user attention. Otherwise, they just interrupt the flow for no good reason.
A: I don't think avoiding modal popups is useful. Think about confirmation on closing unsaved work, file-open dialogs, and other such things.
I think you should not show them all of a sudden, when the user is busy with something else.
A: Minimize. Use the status bar or some non-in-your-face mechanism of notifying the user.
You should be careful when you want to have automated tests. Modal dialogs love playing "show stopper".
A: To be a bit more specific, the situation here is this :
I have a menu on the right, (VisualStudio style) when the user wants to add an element, should I expand the menu down and let them select something from it there, and then have to press the OK button, or display a Modal popup forcing them to select.
(the selection step is mandatory.)
A: Modal dialogs have been condemned by usability experts for a long time because of their disruptive nature regarding user workflow. See, for instance, Jef Raskin's "Humane Interface" book for discussion of modeless interfaces.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamic LINQ and Dynamic Lambda expressions? What is the best way of dynamically writing LINQ queries and Lambda expressions?
I am thinking of applications where the end user can design business logic rules, which then must be executed.
I am sorry if this is a newbie question, but it would be great to get best practices out of experience.
A: Another possibility is to integrate a scripting runtime into your program, so that your users can write the business logic in a DSL. IronPython would be a candidate.
A: I cannot recommend highly enough reading through the postings of Bart De Smet (http://community.bartdesmet.net/blogs/bart/); he is really brilliant when it comes to lambdas.
His recent series covered dynamic Lambda, starting with http://community.bartdesmet.net/blogs/bart/archive/2008/08/26/to-bind-or-not-to-bind-dynamic-expression-trees-part-0.aspx
Absolutely beautiful code.
A: I can see two ways you can dynamically generate lambdas. You could try Reflection.Emit to generate IL (the .NET bytecode) directly and invoke the result as a lambda, or you can use System.CodeDom and Microsoft.CSharp.CSharpCodeProvider to generate the code from higher-level constructs. What you want to do depends on how you want the user to input this stuff. If you want the user to write C#, then you can just use the built-in compiler.
Generating Linq dynamically should be easier. You should be able to generate LINQ queries as expression trees in runtime and then pass them into an IQueryable to execute. I'd suggest you look into the documentation on IQueryable to learn more about this. Another way would be to pre-define a couple of linq queries and then allow the user to chain them together. This should be workable because any Linq query returns an IEnumerable that can be consumed by the next Linq query.
A: Lambda expressions can be easily created via the System.Linq.Expressions namespace.
A: System.Linq.Expressions is what you need. I've written a nice UI that lets users define and build queries dynamically in the form of an expression tree. You can then hand this off to Linq2SQL or client of your choice.
A: I'm not sure what you mean by "best way". It would be better to provide a simple example of what you want to achieve. Composing dynamic LINQ expressions is not hard, but it is tricky.
Here is an example of dynamic linq expression creation:
How do I compose existing Linq Expressions
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Error 1150 genasm.exe(1) : error There was an error finalizing type . Method 'DoParse' oOo a highly exciting build issue. Compact framework occasionally goes funny when building. Usually when messing with xmta files but this is the first time i've seen it go awry in this scenario.
I created an abstract base class with an abstract method and placed it in one of my core dlls. This was fine. I then inherited from it in a "later" .dll. Now I get this error:
Error 1150 genasm.exe(1) : error There
was an error finalizing type . Method
'DoParse' in type
'MyComanyName.PlatformName.ProductName.Configuration.ConfigurationParser'
from assembly
'UICore.WindowsCE.asmmeta,
Version=1.0.3187.17348,
Culture=neutral, PublicKeyToken=null'
does not have an
implementation. UICore
And yes, I have implemented this method in this class. I have also tried a full clean and rebuild and a close and restart VS.
Out of interest, I also have a warning that is "Object not set to instance of object" which is slightly unusual.
Update: If I make the method virtual as opposed to abstract the problem disappears.
Update:
*
*CF 2.0 SP1
*Visual Studio 2005 SP1
*The method is not generic
*However I do give an object with a generic method to the constructor of this object.
A: It's an issue with genasm in Visual Studio 2005/2008, that it must instantiate types to generate the asmmeta files, so you can't have public abstract types, unfortunately.
Check this MSDN thread describing a similar issue (with generics). There are also some workarounds discussed.
A: Not sure if this will be related but if you include the DesignTimeAttributes.xmta file you get a similar issue.
You'd be getting that DesignTimeAttributes file if you were using a base form and inheriting from it. There's a bug in the designer that means you won't see the inherited form at all, so this is generated as part of the fix.
You can solve this one by excluding the file from the project.
A: If I make the method virtual as opposed to abstract the problem disappears.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What are the possible causes of a CGI::Session::CookieStore::TamperedWithCookie exception in rails I am receiving the exception CGI::Session::CookieStore::TamperedWithCookie after changing the config.action_controller.session.secret setting on an app (as part of preparation for full deployment).
Am I right in assuming that changing the secret while testers have cookies set is the cause of this? What other causes could there be (both security attacks and coding issues)?
A: The cause of your exception is most certainly changing the secret while testers have cookies set. The cookie is cryptographically signed using the secret to protect against users tampering with their cookie. For example, they might try to change their stored user id in order to elevate their privileges.
You could ask the testers to clear their cookies. Or, you could catch the exception and remove the cookie for your application. Some sites prefer to use the ActiveRecord session store for more control over their sessions, so they can drop all sessions when required, but at a cost in performance.
A: Yes, testers should clear their cookies. Any time the cookie cannot be decrypted with the specified secret you'll get that error.
A: I found a plugin on Github that will trap the error and write it to the log without exposing the error to the user. I was plagued by this problem on a Rails 2.1 instance and it did the trick.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Exception handling Can you please clarify the folowing query? I am not sure if the way I am trying to code is correct. Kindly advise me if I am moving in the right/wrong direction.
I am trying to develop an automation framework using QuickTest Professional, a testing tool.
*
*There is an Excel sheet from which the data is being taken for execution based on the ID's stored in an array from another Excel sheet (The same ID is available in both Excel sheets).
*I'm trying to handle the exeptional cases through a function call. This function will capture the screenshot of the page error occured and then exit the entire loop.
*I need a scenario where execution continues for the next ID stored in the array, and this needs to be handled from the function call.
A: Well, it sounds like you already have the answer. You just need to handle the exception that occurs when reading in the data within the main loop and make it stop there.
Now, I have not done VBScript for a LONG time so, to pseudo it:
While Not EndOfExcelSheet
    ReadDataFromExcel()
    If errOccurred Then TakeScreenPrint()
    'NOTE: We have caught the error and requested the screen print
    'is taken, but we have NOT bubbled the exception up!
End While
A: It's hard to answer your question based on what you wrote, but the first thing that comes to my mind is to add a boolean parameter to your exception-handling function (let's call it ExceptionHandler). Say, if the parameter (let's call it ExitLoop) is true, you will exit from the "entire loop"; otherwise, continue. Now, it might be too tedious to change that for old calls to the function (calls without the new parameter) -- I'm not sure if VB supports function overloading. If this is the case, you can rename your ExceptionHandler to ExceptionHandler2, add the new parameter (ExitLoop) and logic to it, and create a (now new) function ExceptionHandler that calls ExceptionHandler2 with its parameters plus true for ExitLoop.
Hope it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Common Causes of Operating System Crashes I am interested to learn: what are the most common technical causes (from the perspective of operating system programming) of an operating system crash (not limited to Windows crashes)? I'm looking for an answer not like "too many apps open", but what specifically happens when too many apps are open that causes the crash.
A: In my opinion
*
*Bad drivers
*Kernel bugs
*Hardware failure
*End of resources
A modern operating system will not let a mere application crash it.
A: It's buggy drivers that cause OS crashes. Only the OS itself and drivers are able to harm the system.
To your suggestions:
*
*No OS has problems if an application accesses the same memory as the OS. Either the memory is accessible or it is not. If an application tries to access memory that it should not, the CPU generates a segmentation fault. The OS hands this over to the application and the problem is solved (in 99% of cases the app will crash afterwards, but that's not the fault of the OS).
*You're suggesting that slower programs are safer. That's not true. The OS does not need to know what exactly your program is doing.
A: In a modern OS, application code and OS code run in separate address spaces. Application code cannot crash the operating system. See here:
http://en.wikipedia.org/wiki/Ring_(computer_security)
The most common reason for a crash is code that is acting as part of the OS interfering with other code that is acting as part of the OS. A common cause is poorly written device drivers that live in the OS's address space.
Less often, crashes are caused by hardware failures.
A: Any OS crash can occur due to either of the two main reasons:
*
*Hardware Problem.
*Software Problem.
HARDWARE PROBLEMS:
*
*Power Related problems:
Improper functioning of the System Power Supply can lead to immediate shutting down of the System.
*Overheating of RAM: Overheating RAM can corrupt the data stored in it. This can lead to a definite crash where a reset is a must.
*Improper Overclocking: Causes Overheating. Certain Hardware Components are sensitive to heat. When Overheating occurs automatically the system shuts down.
*Bad Sectors in Hard Drive:
The Hard disk is divided into sectors where data is stored. Some sectors become Bad sectors.
Reasons:
a. Prolonged usage - many writes and reads.
b. Manufacturing defect.
If sectors in the hard disk, where important system information is stored, becomes a bad sector then it is difficult to load those files, thus leading to a crash.
*RAM Issues: Cause: Data retrieval not possible. This is very important as this leads to Fatal Exception Error
Major Misconception: An application crash in your system does not always lead to a system crash. Generally "Nothing" happens to the OS. It just sends you a report saying so and so application has crashed.
SOFTWARE PROBLEMS:
*
*Corrupt Registry: Before starting any application, the OS looks into its registry. Registry is a small Database where all the information about kernel, drivers and information about applications are stored. Registry can get corrupted due to improper uninstallation of applications, careless editing of registry, too many installed applications etc.
More causes of Corrupt Registry. This leads to routine applications refusing to start thus causing the Blue Screen of Death to be displayed.
*Improper Drivers : In order to use additional hardware, we need drivers, generally downloaded from the internet. These drivers might contain bugs. These bugs cause the OS to crash. Modern operating systems are released with the option of "Safe Mode Boot". Safe Mode Boot loads only important drivers (minimum) and not all. Safe Mode Boot is used for diagnostic purposes to find the driver with bugs.
*Virus and Trojan: Common reasons for OS crash. Viruses and Trojans corrupt the system files, "eat up" the memory not allowing OS to retrieve it when a programs stops, changes administrative settings, frequent rebooting without any sign etc
*Thrashing: Deadlock occurs when two programs running require control over a particular resource. Sometimes during a deadlock, the OS tries to switch back and forth between the two programs. This eventually leads to Thrashing where the hard drive is being overworked by moving information between the system memory and virtual memory excessively causing a system crash.
A: No you are way off. Typically there is nothing an application can do that can cause the OS to crash. OS crashes are generally caused by buggy device drivers and hardware failures.
A: Two different sources trying to access a locked area is one problem. Getting all mutexes, monitors and locks to work 100% is not trivial.
A: If we use Vista as an example - drivers ... NVidia in particular http://arstechnica.com/news.ars/post/20080325-vista-capable-lawsuit-paints-picture-of-buggy-nvidia-drivers.html. It's the OS that dictates memory allocation, not applications. Well, that's the theory.
A: An OS crash may also be caused by bad sectors on the hard disk, because on most PCs the OS resides on the primary drive of the hard disk.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Fast String Hashing Algorithm with low collision rates with 32 bit integer I have lots of unrelated named things that I'd like to do quick searches against. An "aardvark" is always an "aardvark" everywhere, so hashing the string and reusing the integer would work well to speed up comparisons. The entire set of names is unknown (and changes over time). What is a fast string hashing algorithm that will generate small (32 or 16) bit values and have a low collision rate?
I'd like to see an optimized implementation specific to C/C++.
A: Another solution that could be even better depending on your use-case is interned strings. This is how symbols work e.g. in Lisp.
An interned string is a string object whose value is the address of the actual string bytes. So you create an interned string object by checking in a global table: if the string is in there, you initialize the interned string to the address of that string. If not, you insert it, and then initialize your interned string.
This means that two interned strings built from the same string will have the same value, which is an address. So if N is the number of interned strings in your system, the characteristics are:
*
*Slow construction (needs lookup and possibly memory allocation)
*Requires global data and synchronization in the case of concurrent threads
*Compare is O(1), because you're comparing addresses, not actual string bytes (this means sorting works well, but it won't be an alphabetic sort).
Cheers,
Carl
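The interning mechanism described above can be sketched in C++ like so. The `intern` helper is a hypothetical name for this example, and a real implementation would need synchronization for concurrent threads:

```cpp
#include <cassert>
#include <string>
#include <unordered_set>

// Hypothetical sketch: the pool owns one copy of each distinct string, and
// the interned "value" is simply that copy's address. Not thread-safe.
const std::string* intern(const std::string& s) {
    static std::unordered_set<std::string> pool;   // global table
    // insert() returns the existing element if s is already pooled; element
    // addresses in an unordered_set stay stable across rehashing.
    return &*pool.insert(s).first;
}
```

Two interned strings built from equal text then compare equal with a single pointer comparison, which is the O(1) property described above.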
A: It is never too late for a good subject, and I am sure people will be interested in my findings.
I needed a hash function, and after reading this post and doing a bit of research on the links given here, I came up with this variation of Daniel J. Bernstein's algorithm, which I used to run an interesting test:
#include <ctype.h>   /* for toupper() */

unsigned long djb_hashl(const char *clave)
{
    unsigned long c, i, h;
    for (i = h = 0; clave[i]; i++)
    {
        c = toupper(clave[i]);
        h = ((h << 5) + h) ^ c;
    }
    return h;
}
This variation hashes strings ignoring case, which suits my need of hashing users' login credentials. 'clave' is 'key' in Spanish. I am sorry for the Spanish, but it's my mother tongue and the program is written in it.
Well, I wrote a program that will generate usernames from 'test_aaaa' to 'test_zzzz', and -to make the strings longer- I added to them a random domain in this list: 'cloud-nueve.com', 'yahoo.com', 'gmail.com' and 'hotmail.com'. Therefore each of them would look like:
test_aaaa@cloud-nueve.com, test_aaab@yahoo.com,
test_aaac@gmail.com, test_aaad@hotmail.com and so on.
Here is the output of the test -'Colision entre XXX y XXX' means 'Collision of XXX and XXX'. 'palabras' means 'words' and 'Total' is the same in both languages-.
Buscando Colisiones...
Colision entre 'test_phiz@hotmail.com' y 'test_juxg@cloud-nueve.com' (1DB903B7)
Colision entre 'test_rfhh@hotmail.com' y 'test_fpgo@yahoo.com' (2F5BC088)
Colision entre 'test_wxuj@hotmail.com' y 'test_pugy@cloud-nueve.com' (51FD09CC)
Colision entre 'test_sctb@gmail.com' y 'test_iohw@cloud-nueve.com' (52F5480E)
Colision entre 'test_wpgu@cloud-nueve.com' y 'test_seik@yahoo.com' (74FF72E2)
Colision entre 'test_rfll@hotmail.com' y 'test_btgo@yahoo.com' (7FD70008)
Colision entre 'test_wcho@cloud-nueve.com' y 'test_scfz@gmail.com' (9BD351C4)
Colision entre 'test_swky@cloud-nueve.com' y 'test_fqpn@gmail.com' (A86953E1)
Colision entre 'test_rftd@hotmail.com' y 'test_jlgo@yahoo.com' (BA6B0718)
Colision entre 'test_rfpp@hotmail.com' y 'test_nxgo@yahoo.com' (D0523F88)
Colision entre 'test_zlgo@yahoo.com' y 'test_rfdd@hotmail.com' (DEE08108)
Total de Colisiones: 11
Total de Palabras : 456976
That is not bad: 11 collisions out of 456,976 (of course, using the full 32 bits as table length).
Running the program using 5 chars, that is from 'test_aaaaa' to 'test_zzzzz', actually runs out of memory building the table. Below is the output. 'No hay memoria para insertar XXXX (insertadas XXX)' means 'There is no memory left to insert XXX (XXX inserted)'. Basically, malloc() failed at that point.
No hay memoria para insertar 'test_epjcv' (insertadas 2097701).
Buscando Colisiones...
...451 'colision' strings...
Total de Colisiones: 451
Total de Palabras : 2097701
Which means just 451 collisions on 2,097,701 strings. Note that on no occasion were there more than 2 collisions per code, which confirms it is a great hash for me, as what I need is to convert the login ID to a 40-bit unique id for indexing. So I use this to convert the login credentials to a 32-bit hash and use the extra 8 bits to handle up to 255 collisions per code, which, looking at the test results, would be almost impossible to generate.
Hope this is useful to someone.
EDIT:
Since the test box is AIX, I ran it using LDR_CNTRL=MAXDATA=0x20000000 to give it more memory. It ran longer, and the results are here:
Buscando Colisiones...
Total de Colisiones: 2908
Total de Palabras : 5366384
That is 2908 after 5,366,384 tries!!
VERY IMPORTANT: Compiling the program with -maix64 (so unsigned long is 64 bits), the number of collisions is 0 for all cases!!!
A: Murmur Hash is pretty nice.
A: One of the FNV variants should meet your requirements. They're fast, and produce fairly evenly distributed outputs.
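For illustration, here is a sketch of the 32-bit FNV-1a variant; the offset basis and prime are the published FNV constants:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// 32-bit FNV-1a: xor in each byte, then multiply by the FNV prime.
uint32_t fnv1a_32(const std::string& s) {
    uint32_t h = 2166136261u;      // FNV 32-bit offset basis
    for (unsigned char c : s) {
        h ^= c;
        h *= 16777619u;            // FNV 32-bit prime
    }
    return h;
}
```

Hashing an empty string yields the offset basis by construction, and because the final multiply is by an odd constant (invertible mod 2^32), strings differing only in their last byte always hash differently.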
A: The Hsieh hash function is pretty good, and has some benchmarks/comparisons, as a general hash function in C. Depending on what you want (it's not completely obvious) you might want to consider something like cdb instead.
A: Why don't you just use Boost libraries? Their hashing function is simple to use and most of the stuff in Boost will soon be part of the C++ standard. Some of it already is.
Boost hash is as easy as
#include <boost/functional/hash.hpp>
#include <string>

int main()
{
    boost::hash<std::string> string_hash;
    std::size_t h = string_hash("Hash me");
}
You can find boost at boost.org
A: Bob Jenkins has many hash functions available, all of which are fast and have low collision rates.
A: Have a look at GNU gperf.
A: There is some good discussion in this previous question
And a nice overview of how to pick hash functions, as well as statistics about the distribution of several common ones here
A: You can see what .NET uses on the String.GetHashCode() method using Reflector.
I would hazard a guess that Microsoft spent considerable time optimising this. They have printed in all the MSDN documentation too that it is subject to change all the time. So clearly it is on their "performance tweaking radar" ;-)
Would be pretty trivial to port to C++ too I would have thought.
A: There's also a nice article at eternallyconfuzzled.com.
Jenkins' One-at-a-Time hash for strings should look something like this:
#include <stdint.h>

uint32_t hash_string(const char * s)
{
    uint32_t hash = 0;
    for (; *s; ++s)
    {
        hash += *s;
        hash += (hash << 10);
        hash ^= (hash >> 6);
    }
    hash += (hash << 3);
    hash ^= (hash >> 11);
    hash += (hash << 15);
    return hash;
}
A: For a fixed string-set use gperf.
If your string-set changes you have to pick one hash function. That topic has been discussed before:
What's the best hashing algorithm to use on a stl string when using hash_map?
A: CRC-32. There are about a trillion links on Google for it.
A: Described here is a simple way of implementing it yourself: http://www.devcodenote.com/2015/04/collision-free-string-hashing.html
A snippet from the post:
If, say, we have a character set of capital English letters, then the length of the character set is 26, where A could be represented by the number 0, B by the number 1, C by the number 2, and so on till Z by the number 25. Now, whenever we want to map a string of this character set to a unique number, we perform the same conversion as we did in the case of the binary format
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
}
|
Q: Generate docs in automated build Is there any way to generate project docs during automated builds?
I'd like to have a single set of source files (HTML?) with the user manual, and from them generate:
*
*PDF document
*CHM help
*HTML version of the help
The content would be basically the same in all three formats.
Currently I'm using msbuild and CCNET, but I could change that if needed.
A: Yes!
*
*Use SandCastle to build CHM/HTM documentation of the APIs.
*Use DocBook + FOP and other tools to produce other kinds of documentation in PDF, RTF, HTML etc...
They can be easily integrated with CruiseControl.NET through NAnt.
A: Did you try doxygen? It's available for Windows too and it should be easy to integrate it in any build script/process.
A: The Apache Forrest project might go some way to giving you what you want.
You'll be generally better off writing your documentation in XML. From that you ought to be able to generate just about anything you need.
A: Help and Manual can generate good quality PDF, HTML and CHM (and other formats) from a single source. It also has a command line interface. I have version 4 and I like it a lot. I use conditionals (like #ifdefs) to generate Windows and Mac versions of my documentation in various formats as part of a build .bat/.csh file. Version 5 is now available.
http://www.ec-software.com/
A: If you want your api's documented too, and you are using msbuild, then consider using DocProject to control the SandCastle build. (These tools are not for end-user documentation...)
A: I've had experience with Doxygen. It is nice and easy, but it makes you want to overcomment the code to ease later documentation work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: ASP.NET MVC Preview 5 - Html.Image helper has moved namespace We've just updated ASP.NET from Preview 3 to Preview 5 and we've run into a problem with the Html.Image HtmlHelper in our aspx pages.
It seems that Html.Image has moved from System.Web.Mvc into Microsoft.Web.Mvc, and the only way we've found to access the helper now is to add an import statement to every .aspx page that uses it. All the other helpers can be accessed with using System.Web.Mvc; in the C# codebehind of a view master page, but this one seems to need an <%@ Import Namespace="Microsoft.Web.Mvc" %> directive in every .aspx page.
Does anyone know of a way around this?
A: You can add the namespace to the <pages> section under <system.web> in your Web.config, so it's imported for all pages:
<pages validateRequest="false">
  <namespaces>
    <add namespace="Microsoft.Web.Mvc"/>
  </namespaces>
</pages>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Deploying Django: How do you do it? I have tried following guides like this one, but it just didn't work for me.
So my question is this: what is a good guide for deploying Django, and how do you deploy your Django?
I keep hearing that Capistrano is pretty nifty to use, but I have no idea how to work it or what it does (apart from automating code deployment), or even whether I want/need to use it or not.
A: mod_wsgi in combination with a virtualenv for all the dependencies, a mercurial checkout into the virtualenv and a fabric recipe to check out the changes on the server.
I wrote an article about my usual workflow: Deploying Python Web Applications. Hope that helps.
A: I have had success with mod_wsgi
A: At my previous job we had a real genius on deployment duties; he deployed applications (Python, SQL, Perl and Java code) as sets of .deb files built for Ubuntu. Unfortunately I now have no such support. We are deploying apps manually to virtualenv-ed environments with separate nginx configs for FastCGI. We use paver to deploy to remote servers. It's painful, but it works.
A: This looks like a good place to start: http://www.unessa.net/en/hoyci/2007/06/using-capistrano-deploy-django-apps/
A: I use mod_python, and have every site in a git repository with the following subdirs:
*
*mysite
*template
*media
I have mysite/settings.py in .gitignore, and work like this:
*
*do development on my local machine
*create remote repository on webserver
*push my changes to webserver repo
*set up apache vhost config file, tweak live server settings.py
*run git checkout && git reset --hard && sudo /etc/init.d/apache2 restart on webserver repo to get up-to-date version to its working copy and restart apache
*repeat steps 1, 3 and 5 whenever change request comes
A: The easiest way would be to use one of the sites on http://djangofriendly.com/hosts/ that will provide the hosting and set up for you, but even if you're wanting to roll your own it will allow you to see what set up other sites are using.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Encoding user input for emails On a website I have a form where the user can input some text, and a page which displays what the user has entered. I know to HTML-encode the values the user has entered to prevent scripting attacks. If the form was sending emails, I presume I would do the same, but are there any special cases for emails, and will email clients run any script injected into the email?
A: While it would still be a good idea to strip <script> tags from your document before sending it, I think that the threat is low. I believe that you would be hard pressed to find an email client (still receiving support) that does not strip scripts before rendering an email.
A: I believe that marking the email body as text/plain would avoid JavaScript and/or HTML attacks (but I wouldn't trust Outlook to follow what the headers suggest).
A: You should use an SMTP library that takes on that burden for you (and the potential bugs caused by duplicated or missing escaping). Then, use plain-text mails only (text/plain).
To avoid security problems with buggy mail clients, you could also send a nearly empty mail, and the text as attachment (file extension ".txt", content-type "text/plain").
A: You should definitely HTML encode before assigning posted content to the HTML body of an email. Your code should already be rejecting content such as '<script>' as invalid, not just in the case of an email but in all cases.
There are no other considerations you need to worry about.
A: I would highly suggest using an existing, tested solution for sending mails. If you're passing user input to, say, the PHP mail() function--even with HTML encoding--it's possible for an attacker to craft a "body" that actually contains the headers to create a multi-part message.
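To make the header-injection risk concrete: the usual defense is to strip CR and LF from any user-supplied value before it is placed in a mail header, so an attacker can't smuggle in extra header lines. A minimal sketch (the helper name is made up for illustration; in practice, rely on a vetted mail library):

```cpp
#include <string>

// Hypothetical helper: remove CR/LF from a user-supplied header value,
// defeating inputs like "hi\r\nBcc: attacker@example.com" that would
// otherwise become an extra header line in the outgoing message.
std::string sanitize_header_value(const std::string& input) {
    std::string out;
    out.reserve(input.size());
    for (char c : input) {
        if (c != '\r' && c != '\n')
            out += c;  // keep everything except line breaks
    }
    return out;
}
```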
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: "const correctness" in C# The point of const-correctness is to be able to provide a view of an instance that can't be altered or deleted by the user. The compiler supports this by pointing out when you break constness from within a const function, or try to use a non-const function of a const object. So without copying the const approach, is there a methodology I can use in C# that has the same ends?
I'm aware of immutability, but that doesn't really carry over to container objects to name but one example.
A: I've come across this issue a lot of times too and ended up using interfaces.
I think it's important to drop the idea that C# is any form, or even an evolution of C++. They're two different languages that share almost the same syntax.
I usually express 'const correctness' in C# by defining a read-only view of a class:
public interface IReadOnlyCustomer
{
    String Name { get; }
    int Age { get; }
}

public class Customer : IReadOnlyCustomer
{
    private string m_name;
    private int m_age;

    public string Name
    {
        get { return m_name; }
        set { m_name = value; }
    }

    public int Age
    {
        get { return m_age; }
        set { m_age = value; }
    }
}
A: C# doesn't have such a feature. You can pass arguments by value or by reference. The reference itself is immutable unless you specify the ref modifier, but the referenced data isn't immutable. So you need to be careful if you want to avoid side effects.
MSDN:
Passing Parameters
A: To get the benefit of const-craziness (or pureness in functional programming terms), you will need to design your classes so that they are immutable, just like the String class in C#.
This approach is way better than just marking an object as readonly, since with immutable classes you can pass data around easily in multi-tasking environments.
A: I just wanted to note for you that many of the System.Collections.Generic containers have an AsReadOnly method which will give you back a read-only wrapper around the collection. (Note that the wrapper is read-only rather than truly immutable: it reflects any changes made to the underlying collection.)
A: Interfaces are the answer, and are actually more powerful than "const" in C++. const is a one-size-fits-all solution to the problem where "const" is defined as "doesn't set members or call something that sets members". That's a good shorthand for const-ness in many scenarios, but not all of them. For example, consider a function that calculates a value based on some members but also caches the results. In C++, that's considered non-const, although from the user's perspective it is essentially const.
Interfaces give you more flexibility in defining the specific subset of capabilities you want to provide from your class. Want const-ness? Just provide an interface with no mutating methods. Want to allow setting some things but not others? Provide an interface with just those methods.
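As a footnote to the caching example above: C++ does have an escape hatch for that exact case, the `mutable` specifier, which lets a logically-const method update a cache field. A minimal sketch with made-up names:

```cpp
#include <string>

class Distance {
public:
    explicit Distance(double meters) : meters_(meters) {}

    // Logically const: callers observe no change, but the first call
    // fills a cache. 'mutable' permits this inside a const method.
    const std::string& formatted() const {
        if (cached_.empty())
            cached_ = std::to_string(meters_) + " m";
        return cached_;
    }

private:
    double meters_;
    mutable std::string cached_;  // may be written from const methods
};
```

This mirrors the interface-based approach in C#: the read-only view hides the mutators, while the implementation stays free to memoize internally.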
A: Agree with some of the others look at using readonly fields that you initialize in the constructor, to create immutable objects.
public class Customer
{
    private readonly string m_name;
    private readonly int m_age;

    public Customer(string name, int age)
    {
        m_name = name;
        m_age = age;
    }

    public string Name
    {
        get { return m_name; }
    }

    public int Age
    {
        get { return m_age; }
    }
}
Alternatively you could also add access scope on the properties, i.e. public get and protected set?
public class Customer
{
    private string m_name;
    private int m_age;

    protected Customer()
    {}

    public Customer(string name, int age)
    {
        m_name = name;
        m_age = age;
    }

    public string Name
    {
        get { return m_name; }
        protected set { m_name = value; }
    }

    public int Age
    {
        get { return m_age; }
        protected set { m_age = value; }
    }
}
A: *
*The const keyword can be used for compile time constants such as primitive types and strings
*The readonly keyword can be used for run-time constants such as reference types
The problem with readonly is that it only allows the reference (pointer) to be constant. The thing referenced (pointed to) can still be modified. This is the tricky part but there is no way around it. To implement constant objects means making them not expose any mutable methods or properties but this is awkward.
See also Effective C#: 50 Specific Ways to Improve Your C# (Item 2 - Prefer readonly to const.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
}
|
Q: How does a XAML definition get turned into an object instance? XAML allows you to specify an attribute value using a string that contains curly braces. Here is an example that creates a Binding instance and assigns it to the Text property of the TextBox element.
<TextBox Text="{Binding ElementName=Foo, Path=Bar}"/>
I want to extend XAML so that the developer could enter this as valid...
<TextBox Text="{MyCustomObject Field1=Foo, Field2=Bar}"/>
This would create an instance of my class and set the Field1/Field2 properties as appropriate. Is this possible? If so how do you do it?
If this is possible I have a followup question. Can I take a string "{Binding ElementName=Foo, Path=Bar}" and ask the framework to process it and return the Binding instance it specified? This must be done somewhere already to make the above XAML work and so there must be a way to ask for the same thing to be processed.
A: The Binding class is a Markup Extension. You can write your own by deriving from System.Windows.Markup.MarkupExtension.
ElementName and Path are simply properties on the Binding object.
As for the followup: you can create a new Binding in code by instantiating the Binding object. I do not know of a way to run such a string through the framework, though.
A: Take a look at markup extensions:
http://blogs.msdn.com/wpfsdk/archive/2007/03/22/blogpost-text-creatingasimplecustommarkupextension.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Does CASCADE Delete execute as transaction? I want to perform cascade delete for some tables in my database, but I'm interested in what happens in case there's a failure when deleting something. Will everything rollback?
A: Cascade deletes are indeed atomic, they would be of little use without that property. It is in the documentation.
A: In general¹, yes, cascade deletes are done in the same transaction (or subtransaction) as your original delete. You should read the documentation of your SQL server, though.
¹ The exception is if you're using a database that doesn't support transactions, like MySQL with MyISAM tables.
A: It's worth pointing out that any cascading event should be atomic (i.e. with in a transaction). But, as Joel Coehoorn points out, check the documentation for your database.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How to implement WiX installer upgrade? At work we use WiX for building installation packages. We want that installation of product X would result in uninstall of the previous version of that product on that machine.
I've read in several places on the Internet about a major upgrade, but couldn't get it to work.
Can anyone please specify the exact steps that I need to take to add uninstall previous version feature to WiX?
A: The following is the sort of syntax I use for major upgrades:
<Product Id="*" UpgradeCode="PUT-GUID-HERE" Version="$(var.ProductVersion)">
  <Upgrade Id="PUT-GUID-HERE">
    <UpgradeVersion OnlyDetect="yes" Minimum="$(var.ProductVersion)" Property="NEWERVERSIONDETECTED" IncludeMinimum="no" />
    <UpgradeVersion OnlyDetect="no" Maximum="$(var.ProductVersion)" Property="OLDERVERSIONBEINGUPGRADED" IncludeMaximum="no" />
  </Upgrade>

  <InstallExecuteSequence>
    <RemoveExistingProducts After="InstallInitialize" />
  </InstallExecuteSequence>
As @Brian Gillespie noted there are other places to schedule the RemoveExistingProducts depending on desired optimizations. Note the PUT-GUID-HERE must be identical.
A: I would suggest having a look at Alex Shevchuk's tutorial. He explains "major upgrade" through WiX with a good hands-on example at From MSI to WiX, Part 8 - Major Upgrade.
A: One important thing I missed from the tutorials for a while (stolen from http://www.tramontana.co.hu/wix/lesson4.php) which resulted in the "Another version of this product is already installed" errors:
*
*Small updates mean small changes to one or a few files where the change doesn't warrant changing the product version (major.minor.build). You don't have to change the Product GUID, either. Note that you always have to change the Package GUID when you create a new .msi file that is different from the previous ones in any respect. The Installer keeps track of your installed programs and finds them when the user wants to change or remove the installation using these GUIDs. Using the same GUID for different packages will confuse the Installer.
*Minor upgrades denote changes where the product version will already change. Modify the Version attribute of the Product tag. The product will remain the same, so you don't need to change the Product GUID but, of course, get a new Package GUID.
*Major upgrades denote significant changes like going from one full version to another. Change everything: Version attribute, Product and Package GUIDs.
A: I'm using the latest version of WiX (3.0) and couldn't get the above working. But this did work:
<Product Id="*" UpgradeCode="PUT-GUID-HERE" ... >
  <Upgrade Id="PUT-GUID-HERE">
    <UpgradeVersion OnlyDetect="no" Property="PREVIOUSFOUND"
                    Minimum="1.0.0.0" IncludeMinimum="yes"
                    Maximum="99.0.0.0" IncludeMaximum="no" />
  </Upgrade>
Note that PUT-GUID-HERE should be the same as the GUID that you have defined in the UpgradeCode property of the Product.
A: The Upgrade element inside the Product element, combined with proper scheduling of the action will perform the uninstall you're after. Be sure to list the upgrade codes of all the products you want to remove.
<Property Id="PREVIOUSVERSIONSINSTALLED" Secure="yes" />

<Upgrade Id="00000000-0000-0000-0000-000000000000">
  <UpgradeVersion Minimum="1.0.0.0" Maximum="1.0.5.0" Property="PREVIOUSVERSIONSINSTALLED" IncludeMinimum="yes" IncludeMaximum="no" />
</Upgrade>
Note that, if you're careful with your builds, you can prevent people from accidentally installing an older version of your product over a newer one. That's what the Maximum field is for. When we build installers, we set UpgradeVersion Maximum to the version being built, but IncludeMaximum="no" to prevent this scenario.
You have choices regarding the scheduling of RemoveExistingProducts. I prefer scheduling it after InstallFinalize (rather than after InstallInitialize as others have recommended):
<InstallExecuteSequence>
  <RemoveExistingProducts After="InstallFinalize"></RemoveExistingProducts>
</InstallExecuteSequence>
This leaves the previous version of the product installed until after the new files and registry keys are copied. This lets me migrate data from the old version to the new (for example, you've switched storage of user preferences from the registry to an XML file, but you want to be polite and migrate their settings). This migration is done in a deferred custom action just before InstallFinalize.
Another benefit is efficiency: if there are unchanged files, Windows Installer doesn't bother copying them again when you schedule after InstallFinalize. If you schedule after InstallInitialize, the previous version is completely removed first, and then the new version is installed. This results in unnecessary deletion and recopying of files.
For other scheduling options, see the RemoveExistingProducts help topic in MSDN. This week, the link is: http://msdn.microsoft.com/en-us/library/aa371197.aspx
A: Below worked for me.
<Product Id="*" Name="XXXInstaller" Language="1033" Version="1.0.0.0"
         Manufacturer="XXXX" UpgradeCode="YOUR_GUID_HERE">
  <Package InstallerVersion="xxx" Compressed="yes"/>

  <Upgrade Id="YOUR_GUID_HERE">
    <UpgradeVersion Property="REMOVINGTHEOLDVERSION" Minimum="1.0.0.0"
                    RemoveFeatures="ALL" />
  </Upgrade>

  <InstallExecuteSequence>
    <RemoveExistingProducts After="InstallInitialize" />
  </InstallExecuteSequence>
Please make sure that the UpgradeCode in Product matches the Id in Upgrade.
A: Finally I found a solution - I'm posting it here for other people who might have the same problem (all 5 of you):
*
*Change the Product Id to "*" so a new GUID is auto-generated on each build
*Under product add The following:
<Property Id="PREVIOUSVERSIONSINSTALLED" Secure="yes" />
<Upgrade Id="YOUR_GUID">
  <UpgradeVersion
      Minimum="1.0.0.0" Maximum="99.0.0.0"
      Property="PREVIOUSVERSIONSINSTALLED"
      IncludeMinimum="yes" IncludeMaximum="no" />
</Upgrade>
*Under InstallExecuteSequence add:
<RemoveExistingProducts Before="InstallInitialize" />
From now on, whenever I install the product it removes previously installed versions.
Note: replace upgrade Id with your own GUID
A: In the newest versions (from the 3.5.1315.0 beta), you can use the MajorUpgrade element instead of using your own.
For example, we use this code to do automatic upgrades. It prevents downgrades, giving a localised error message, and also prevents upgrading an already existing identical version (i.e. only lower versions are upgraded):
<MajorUpgrade
AllowDowngrades="no" DowngradeErrorMessage="!(loc.NewerVersionInstalled)"
AllowSameVersionUpgrades="no"
/>
A: You might be better asking this on the WiX-users mailing list.
WiX is best used with a firm understanding of what Windows Installer is doing. You might consider getting "The Definitive Guide to Windows Installer".
The action that removes an existing product is the RemoveExistingProducts action. Because the consequences of what it does depends on where it's scheduled - namely, whether a failure causes the old product to be reinstalled, and whether unchanged files are copied again - you have to schedule it yourself.
RemoveExistingProducts processes <Upgrade> elements in the current installation, matching the @Id attribute to the UpgradeCode (specified in the <Product> element) of all the installed products on the system. The UpgradeCode defines a family of related products. Any products which have this UpgradeCode, whose versions fall into the range specified, and where the UpgradeVersion/@OnlyDetect attribute is no (or is omitted), will be removed.
The documentation for RemoveExistingProducts mentions setting the UPGRADINGPRODUCTCODE property. It means that the uninstall process for the product being removed receives that property, whose value is the Product/@Id for the product being installed.
If your original installation did not include an UpgradeCode, you will not be able to use this feature.
A: I used this site to help me understand the basics about WiX Upgrade:
http://wix.tramontana.co.hu/tutorial/upgrades-and-modularization
Afterwards I created a sample Installer, (installed a test file), then created the Upgrade installer (installed 2 sample test files). This will give you a basic understanding of how the mechanism works.
And as Mike said, the book from Apress, "The Definitive Guide to Windows Installer", will help you understand Windows Installer, but it is not written using WiX.
Another site that was pretty helpful was this one:
http://www.wixwiki.com/index.php?title=Main_Page
A: I read the WiX documentation and downloaded examples, but I still had plenty of problems with upgrades. Minor upgrades don't execute an uninstall of the previous products, despite the possibility of specifying that uninstall. I spent more than a day on investigation and found that WiX 3.5 introduced a new tag for upgrades. Here is the usage:
<MajorUpgrade Schedule="afterInstallInitialize"
              DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit."
              AllowDowngrades="no" />
But the main cause of my problems was that the documentation says to use the "REINSTALL=ALL REINSTALLMODE=vomus" parameters for minor and small upgrades, but it doesn't say that those parameters are FORBIDDEN for major upgrades - they simply stop working. So you shouldn't use them with major upgrades.
A: This is what worked for me, even with major DOWN grade:
<Wix ...>
  <Product ...>
    <Property Id="REINSTALLMODE" Value="amus" />
    <MajorUpgrade AllowDowngrades="yes" />
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "247"
}
|
Q: Wrap an executable to diagnose its invocations I have a Windows executable (whoami) which is crashing every so often. It's called from another process to get details about the current user and domain. I'd like to know what parameters are passed when it fails.
Does anyone know of an appropriate way to wrap the process and write its command-line arguments to a log while still calling the process?
Say the command is used like this:
'whoami.exe /all'
I'd like a script to exist instead of the whoami.exe (with the same filename) which will write this invocation to log and then pass on the call to the actual process.
A: You didn't note which programming language. It is not doable from a .bat file if that's what you wanted, but you can do it in any programming language. Example in C:
#include <stdio.h>

int main(int argc, char **argv)
{
    // dump contents of argv to some log file
    int i;
    for (i = 0; i < argc; i++)
        printf("Argument #%d: %s\n", i, argv[i]);

    // run the 'real' program, giving it the rest of the argv vector (1+)
    // for example the spawn, exec or system() functions can do it
    return 0; // or you can do a blocking call, and pick the return value from the program
}
A: I don't think using a "script" will work, since the intermediate should have a .exe extension for your ploy to work.
I would write a very small command line program to do this; something like the following (written in Delphi/Virtual Pascal so it will result in a Win32 executable, but any compiled language should do):
program PassThrough;

uses
  Dos; // Imports the Exec routine

const
  PassTo = 'Original.exe'; // The program you really want to call

var
  CommandLine: String;
  i: Integer;
  f: Text;

begin
  CommandLine := '';
  for i := 1 to ParamCount do
    CommandLine := CommandLine + ParamStr(i) + ' ';

  Assign(f, 'Passthrough.log');
  Append(f);
  Writeln(f, CommandLine); // Write a line in the log
  Close(f);

  Exec(PassTo, CommandLine); // Run the intended program
end.
A: Can't you just change the calling program to log the parameters it used to call the process, and the exit code?
This would be way easier than trying to dig into whoami.exe
A: From a batch file:
echo Parameters: %* >> logfile.txt
whoami.exe %*
With the caveat that you can have problems if the parameters contain spaces (and were passed in escaped with "), because the command-line parser basically de-escapes them, and they would need to be re-escaped before being passed to another executable.
A: Look for whoami.exe, BACK IT UP, replace it with your own executable and do whatever you like with its parameters (maybe save them in a text file).
A: If you can reproduce the crash, use Process Explorer before crashed process is terminated to see its command line.
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Updating reference to a member variable in use I got this singleton cache object and it exposes an IEnumerable property which just returns a private IEnumerable variable.
I have a static method on my singleton object that updates this member variable (that exists on the single 'Instance' instance of this cache object).
Let's say some thread is currently iterating over this IEnumerable variable/property while my cache is updating. I made it so the cache builds the update in a new local variable and finally sets the exposed private variable to point to this new object.
I know I'm just updating a reference, leaving the other (old) object in memory waiting to be picked up by the GC, but my problem is: I'm not 100% sure what happens once I set the new reference. Would the other thread suddenly be iterating over the new object, or over the old one it obtained through the IEnumerable interface? If it had been a normal reference I'd say 'no': the calling thread would be operating on the old object. But I'm not sure if this is the case for IEnumerable as well.
Here is the class stripped down:
internal sealed class SektionCache : CacheBase
{
    public static readonly SektionCache Instance = new SektionCache();
    private static readonly object lockObject = new object();
    private static bool isUpdating;
    private IEnumerable<Sektion> sektioner;

    static SektionCache()
    {
        UpdateCache();
    }

    public IEnumerable<Sektion> Sektioner
    {
        get { return sektioner; }
    }

    public static void UpdateCache()
    {
        // SNIP - getting data, locking etc.
        Instance.sektioner = newSektioner;
        // SNIP
    }
}
A: Since the getter { return sektioner; } is called before the new value is put in the field, the old value is returned. Then, the loop foreach (Sektion s in cache.Sektioner) uses the value that was received when the getter was called, i.e. the old value. That value will be used throughout the foreach loop.
A: The thread which is currently enumerating sektioner will continue to enumerate it even when you update the reference within the singleton. There is nothing special about objects which implement IEnumerable.
You should perhaps add the volatile keyword to the sektioner field as you are not providing read-locking and multiple threads are reading/writing it.
A: First of all, I can't see any object locking; the unused lockObject variable makes me sad. IEnumerable is not special. Each thread will have its own copy of the reference to some instance of the sektioner object. You can't affect other threads that way. What happens to the old version of the data pointed to by the sektioner field largely depends on the calling party.
A: I think, if you want thread safety, you should do it this way:
internal sealed class SektionCache : CacheBase
{
    //public static readonly SektionCache Instance = new SektionCache();
    // this template is better ( safer ) than the previous one for a thread-safe singleton pattern >>>
    private static SektionCache defaultInstance;
    private static readonly object lockObject = new object();

    public static SektionCache Default {
        get {
            SektionCache result = defaultInstance;
            if ( null == result ) {
                lock ( lockObject ) {
                    result = defaultInstance; // re-read inside the lock
                    if ( null == result ) {
                        defaultInstance = result = new SektionCache();
                    }
                }
            }
            return result;
        }
    }
    // <<< this template is better ( safer ) than the previous one

    //private static readonly object lockObject = new object();
    //private static bool isUpdating;
    //private IEnumerable<Sektion> sektioner;
    // this declaration is enough
    private volatile IEnumerable<Sektion> sektioner;

    // no static constructor is required >>>
    //static SektionCache()
    //{
    //    UpdateCache();
    //}
    // <<< no static constructor is required

    // I think you can use a getter and setter for reading & changing the collection
    public IEnumerable<Sektion> Sektioner {
        get {
            IEnumerable<Sektion> result = this.sektioner;
            // i don't know if you need this functionality >>>
            // if ( null == result ) { result = new Sektion[0]; }
            // <<< i don't know if you need this functionality
            return result;
        }
        set { this.sektioner = value; }
    }

    //public static void UpdateCache()
    //{
    //    // SNIP - getting data, locking etc.
    //    Instance.sektioner = newSektioner;
    //    // SNIP
    //}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Pointer vs. Reference What would be better practice when giving a function the original variable to work with:
unsigned long x = 4;

void func1(unsigned long& val) {
    val = 5;
}

func1(x);
or:
void func2(unsigned long* val) {
    *val = 5;
}

func2(&x);
IOW: Is there any reason to pick one over another?
A: I really think you will benefit from establishing the following function calling coding guidelines:
*
*As in all other places, always be const-correct.
*
*Note: This means, among other things, that only out-values (see item 3) and values passed by value (see item 4) can lack the const specifier.
*Only pass a value by pointer if the value 0/NULL is a valid input in the current context.
*
*Rationale 1: As a caller, you see that whatever you pass in must be in a usable state.
*Rationale 2: As called, you know that whatever comes in is in a usable state. Hence, no NULL-check or error handling needs to be done for that value.
*Rationale 3: Rationales 1 and 2 will be compiler enforced. Always catch errors at compile time if you can.
*If a function argument is an out-value, then pass it by reference.
*
*Rationale: We don't want to break item 2...
*Choose "pass by value" over "pass by const reference" only if the value is a POD (Plain old Datastructure) or small enough (memory-wise) or in other ways cheap enough (time-wise) to copy.
*
*Rationale: Avoid unnecessary copies.
*Note: small enough and cheap enough are not absolute measurables.
A: You should pass a pointer if you are going to modify the value of the variable.
Even though technically passing a reference or a pointer are the same, passing a pointer in your use case is more readable as it "advertises" the fact that the value will be changed by the function.
A: If you have a parameter where you may need to indicate the absence of a value, it's common practice to make the parameter a pointer value and pass in NULL.
A better solution in most cases (from a safety perspective) is to use boost::optional. This allows you to pass in optional values by reference and also as a return value.
// Sample method using optional as input parameter
void PrintOptional(const boost::optional<std::string>& optional_str)
{
    if (optional_str)
    {
        cout << *optional_str << std::endl;
    }
    else
    {
        cout << "(no string)" << std::endl;
    }
}

// Sample method using optional as return value
boost::optional<int> ReturnOptional(bool return_nothing)
{
    if (return_nothing)
    {
        return boost::optional<int>();
    }
    return boost::optional<int>(42);
}
A: Use a reference when you can, use a pointer when you have to.
From C++ FAQ: "When should I use references, and when should I use pointers?"
A: My rule of thumb is:
Use pointers if you want to do pointer arithmetic with them (e.g. incrementing the pointer address to step through an array) or if you ever have to pass a NULL-pointer.
Use references otherwise.
A: A reference is an implicit pointer. Basically you can change the value the reference points to but you can't change the reference to point to something else. So my 2 cents is that if you only want to change the value of a parameter pass it as a reference but if you need to change the parameter to point to a different object pass it using a pointer.
A: Consider C#'s out keyword. The compiler requires the caller of a method to apply the out keyword to any out args, even though it already knows whether they are. This is intended to enhance readability, although with modern IDEs I'm inclined to think that this is a job for syntax (or semantic) highlighting.
A: This ultimately ends up being subjective. The discussion thus far is useful, but I don't think there is a correct or decisive answer to this. A lot will depend on style guidelines and your needs at the time.
While there are some different capabilities (whether or not something can be NULL) with a pointer, the largest practical difference for an output parameter is purely syntax. Google's C++ Style Guide (https://google.github.io/styleguide/cppguide.html#Reference_Arguments), for example, mandates only pointers for output parameters, and allows only references that are const. The reasoning is one of readability: something with value syntax should not have pointer semantic meaning. I'm not suggesting that this is necessarily right or wrong, but I think the point here is that it's a matter of style, not of correctness.
A: Pass by const reference unless there is a reason you wish to change/keep the contents you are passing in.
This will be the most efficient method in most cases.
Make sure you use const on each parameter you do not wish to change, as this not only protects you from doing something stupid in the function, it gives a good indication to other users of what the function does to the passed-in values. This includes making a pointer const when you only want to change what's pointed to...
A: Pointers:
*
*Can be assigned nullptr (or NULL).
*At the call site, you must use & if your type is not a pointer itself,
making it explicit that you are modifying your object.
*Pointers can be rebound.
References:
*
*Cannot be null.
*Once bound, cannot change.
*Callers don't need to explicitly use &. This is sometimes considered
bad because you must go to the implementation of the function to see if
your parameter is modified.
A: Pointers
*
*A pointer is a variable that holds a memory address.
*A pointer declaration consists of a base type, an *, and the variable name.
*A pointer can point to any number of different variables over its lifetime
*A pointer that does not currently point to a valid memory location is given the value null (which is zero)
BaseType* ptrBaseType;
BaseType objBaseType;
ptrBaseType = &objBaseType;
*The & is a unary operator that returns the memory address of its operand.
*Dereferencing operator (*) is used to access the value stored in the variable which pointer points to.
int nVar = 7;
int* ptrVar = &nVar;
int nVar2 = *ptrVar;
Reference
*
*A reference (&) is like an alias to an existing variable.
*A reference (&) is like a constant pointer that is automatically dereferenced.
*It is usually used for function argument lists and function return values.
*A reference must be initialized when it is created.
*Once a reference is initialized to an object, it cannot be changed to refer to another object.
*You cannot have NULL references.
*A const reference can refer to a const int. This is done via a temporary variable holding the value of the const
int i = 3; //integer declaration
int * pi = &i; //pi points to the integer i
int& ri = i; //ri refers to integer i – creation and initialization of a reference
A: A reference is similar to a pointer, except that you don't need to use a prefix * to access the value referred to by the reference. Also, a reference cannot be made to refer to a different object after its initialization.
References are particularly useful for specifying function arguments.
for more information see "A Tour of C++" by "Bjarne Stroustrup" (2014) Pages 11-12
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "273"
}
|
Q: Loading DLL from a /bin directory I have created a web project which references a class library project. When deployed to the local machine the web/classes all work fine but deployed on a ‘shared’ IIS site, the class DLLs exist in the /bin directory, but the web page generates the following error:
can’t find file “Documents and settings/….”
when trying to access the class DLL.
Is there a special setup to make the web pages look in its /bin directory?
Update: I am using .NET 1.1 and IIS settings are configured for .NET 1.1
A: Sometimes the existence (or absence) of a web.config file in a virtual directory can cause the weird behavior you've described. Also, I've seen cases where a virtual directory will not look under virtual-directory/bin but rather under base/bin for the corresponding DLL.
A: One reason could be, that the bin directory is not a direct subfolder of the directory which is marked as application. Check the IIS web settings to see, if your directory is an "application" directory and not a simple virtual directory of another application.
Another reason could be that the version of the assembly deployed doesn't correspond to the assembly used as reference in the other assembly.
Check also the security settings on that file on your server.
A: No by defauls IIS looks for DLL's in the Project/bin folder (not the /bin/Debug).
Check if you have the correct dotNET version set in IIS and all referenced libraries in the same folder.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I handle data which must be persisted in a database, but isn't a proper model, in Ruby on Rails? Imagine a web application written in Ruby on Rails. Part of the state of that application is represented in a piece of data which doesn't fit the description of a model. This state descriptor needs to be persisted in the same database as the models.
Where it differs from a model is that there needs to be only one instance of its class and it doesn't have relationships with other classes.
Has anyone come across anything like this?
A: If it's data, and it's in the database, it's part of the model.
A: From your description I think the rails-settings plugin should do what you need.
From the Readme:
"Settings is a plugin that makes managing a table of global key, value pairs easy. Think of it like a global Hash stored in you database, that uses simple ActiveRecord like methods for manipulation. Keep track of any global setting that you dont want to hard code into your rails app. You can store any kind of object. Strings, numbers, arrays, or any object."
http://github.com/Squeegy/rails-settings/tree/master
A: This isn't really a RoR problem; it's a general OO design problem.
If it were me, I'd probably find a way to conceptualize the data as a model and then just make it a singleton with a factory method and a private constructor.
Alternatively, you could think of this as a form of logging. In that case, you'd just have a Logger class (also a singleton) that reads/writes the database directly and is invoked at the beginning and end of each request.
A: In Rails, if data is in the database it's in a model. In this case the model may be called "Configuration", but it is still mapped to an ActiveRecord class in your Rails system.
If this data is truly static, you may not need the database at all.
You could use (as an example) a variable in your application controller:
class ApplicationController < ActionController::Base
  helper :all
  @data = "YOUR DATA HERE"
end
There are a number of approaches that can be used to instantiate data for use in a Rails application.
A: I'm not sure I understand why you say it can't fit in a Rails model.
If it's just a complex data structure, just save a bunch of Ruby code in a text field in the database :-)
If for example you have a complex nested hash you want to save, assign the following to your 'data' text field:
ComplexThing.data = complex_hash.inspect
When you want to read it back, simply
complex_hash = eval ComplexThing.data
Let me point out 2 more things about this solution:
*
*If your data structure is not standard Ruby classes, a simple inspect may not do it. If you see #<MyClass:0x4066e3c> anywhere, something's not being serialized properly.
*This is a naive implementation. You may want to check out real marshalling solutions if you risk having unicode data or if you really are saving a lot of custom-made classes.
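As a safer alternative to the inspect/eval approach, Ruby's built-in Marshal round-trips nested structures without evaluating arbitrary code; Base64-encoding the dump keeps it safe for a text column. A minimal sketch (the hash contents here are just example data):

```ruby
require 'base64'

# A nested structure you want to persist in a text field.
complex_hash = { user: 27, images: ['27_1.jpg', '27_2.jpg'], meta: { tags: %w[a b] } }

# Dump to a binary string, then Base64-encode so it fits in a text column.
encoded = Base64.strict_encode64(Marshal.dump(complex_hash))

# Read it back without eval.
restored = Marshal.load(Base64.strict_decode64(encoded))

raise 'round-trip failed' unless restored == complex_hash
puts restored[:images].length  # => 2
```

Marshal handles custom classes too (as long as they are loadable when you deserialize), which sidesteps the `#<MyClass:0x...>` problem inspect has.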
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Where to Store writable data to be shared by all users in a vista installer? My app is installed via NSIS.
I want the installer to install the program for all users.
I can do this, by installing to the 'program files' directory.
There is a database file (firebird), that all user accounts on the system should share.
If I store this database file in the 'program files' directory it will be read only.
If I store it in the users APPDATA directory they will each have a different copy, when one user adds data the others wont see it.
Option 1 - In my app directory under 'program files' create a 'Data' directory, in my installer make this dir read-writeable by all, that way the user 'program files' virtualisation won't kick in and all users can update the file and see each others changes.
Any other options ?
A: Data for all users should be stored in %ALLUSERSPROFILE%, or call SHGetFolderPath() with the parameter CSIDL_COMMON_APPDATA to get the all users storage area.
See http://www.deez.info/sengelha/2006/02/28/windows-vista-changes/ for more details.
A: Somewhere under the All Users profile would be the obvious location. I think there are some rules about who gets read/write by default, but the MS documentation recommends if you need something different to create a subdirectory and set the ACLs right in the installer.
A: This is a security hole, see: http://blogs.msdn.com/oldnewthing/archive/2004/11/22/267890.aspx
A: Specifically I would use:
SetShellVarContext all
SetOutPath $APPDATA
File "MyInsecurelySharedFile.txt"
See the NSIS Scripting Reference for more info.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you URL encode parameters in Erlang? I'm using httpc:request to post some data to a remote service. I have the post working but the data in the body() of the post comes through as is, without any URL-encoding which causes the post to fail when parsed by the remote service.
Is there a function in Erlang that is similar to CGI.escape in Ruby for this purpose?
A: You can find here the YAWS url_encode and url_decode routines
They are fairly straightforward, although comments indicate the encode is not 100% complete for all punctuation characters.
A: Here's a simple function that does the job. It's designed to work directly with inets httpc.
%% @doc A function to URL encode form data.
%% @spec url_encode(formdata()).
-spec(url_encode(formdata()) -> string()).
url_encode(Data) ->
    url_encode(Data, "").

url_encode([], Acc) ->
    Acc;
url_encode([{Key, Value} | R], "") ->
    url_encode(R, edoc_lib:escape_uri(Key) ++ "=" ++ edoc_lib:escape_uri(Value));
url_encode([{Key, Value} | R], Acc) ->
    url_encode(R, Acc ++ "&" ++ edoc_lib:escape_uri(Key) ++ "=" ++ edoc_lib:escape_uri(Value)).
Example usage:
httpc:request(post, {"http://localhost:3000/foo", [],
"application/x-www-form-urlencoded",
url_encode([{"username", "bob"}, {"password", "123456"}])}
,[],[]).
A: If someone needs to encode a URI that works with UTF-8 in Erlang:
https://gist.github.com/3796470
Ex.
Eshell V5.9.1 (abort with ^G)
1> c(encode_uri_rfc3986).
{ok,encode_uri_rfc3986}
2> encode_uri_rfc3986:encode("テスト").
"%e3%83%86%e3%82%b9%e3%83%88"
3> edoc_lib:escape_uri("テスト").
"%c3%86%c2%b9%c3%88" # output wrong: ƹÈ
A: To answer my own question...I found this lib in ibrowse!
http://www.erlware.org/lib/5.6.3/ibrowse-1.4/ibrowse_lib.html#url_encode-1
url_encode/1
url_encode(Str) -> UrlEncodedStr
Str = string()
UrlEncodedStr = string()
URL-encodes a string based on RFC 1738. Returns a flat list.
I guess I can use this to do the encoding and still use http:
A: I encountered the lack of this feature in the HTTP modules as well.
It turns out that this functionality is actually available in the erlang distribution, you just gotta look hard enough.
> edoc_lib:escape_uri("luca+more@here.com").
"luca%2bmore%40here.com"
This behaves like CGI.escape in Ruby, there is also URI.escape which behaves slightly differently:
> CGI.escape("luca+more@here.com")
=> "luca%2Bmore%40here.com"
> URI.escape("luca+more@here.com")
=> "luca+more@here.com"
edoc_lib
A: At least in R15 there is http_uri:encode/1, which does the job. I would also not recommend using edoc_lib:escape_uri, as it translates '=' to %3d instead of %3D, which caused me some trouble.
A: Here's a "fork" of the edoc_lib:escape_uri function that improves on the UTF-8 support and also supports binaries.
escape_uri(S) when is_list(S) ->
    escape_uri(unicode:characters_to_binary(S));
escape_uri(<<C:8, Cs/binary>>) when C >= $a, C =< $z ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) when C >= $A, C =< $Z ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) when C >= $0, C =< $9 ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) when C == $. ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) when C == $- ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) when C == $_ ->
    [C] ++ escape_uri(Cs);
escape_uri(<<C:8, Cs/binary>>) ->
    escape_byte(C) ++ escape_uri(Cs);
escape_uri(<<>>) ->
    "".

escape_byte(C) ->
    "%" ++ hex_octet(C).

hex_octet(N) when N =< 9 ->
    [$0 + N];
hex_octet(N) when N > 15 ->
    hex_octet(N bsr 4) ++ hex_octet(N band 15);
hex_octet(N) ->
    [N - 10 + $a].
Note that, because of the use of unicode:characters_to_binary it'll only work in R13 or newer.
Example usage is:
9> httpc:request("http://httpbin.org/get?q=" ++ mylib_app:escape_uri("☺")).
{ok,{{"HTTP/1.1",200,"OK"},
[{"connection","keep-alive"},
{"date","Sat, 09 Nov 2019 21:51:54 GMT"},
{"server","nginx"},
{"content-length","178"},
{"content-type","application/json"},
{"access-control-allow-credentials","true"},
{"access-control-allow-origin","*"},
{"referrer-policy","no-referrer-when-downgrade"},
{"x-content-type-options","nosniff"},
{"x-frame-options","DENY"},
{"x-xss-protection","1; mode=block"}],
"{\n \"args\": {\n \"q\": \"\\u263a\"\n }, \n \"headers\": {\n \"Host\": \"httpbin.org\"\n }, \n \"origin\": \"11.111.111.111, 11.111.111.111\", \n \"url\": \"https://httpbin.org/get?q=\\u263a\"\n}\n"}}
We send out a request with escaped query parameter and see that we get back the correct Unicode codepoint.
A: AFAIK there's no URL encoder in the standard libraries. Think I 'borrowed' the following code from YAWS or maybe one of the other Erlang web servers:
%% Utility function to convert a 'form' of name-value pairs into a URL encoded
%% content string. (The ?PERCENT and ?QS_SAFE macros are not part of the snippet
%% as posted; definitions along these lines are assumed:)
-define(PERCENT, 37).  %% $\%
-define(QS_SAFE(C), ((C >= $a andalso C =< $z) orelse
                     (C >= $A andalso C =< $Z) orelse
                     (C >= $0 andalso C =< $9) orelse
                     (C =:= $. orelse C =:= $- orelse
                      C =:= $~ orelse C =:= $_))).

urlencode(Form) ->
    RevPairs = lists:foldl(fun({K, V}, Acc) ->
                               [[quote_plus(K), $=, quote_plus(V)] | Acc]
                           end, [], Form),
    lists:flatten(revjoin(RevPairs, $&, [])).

quote_plus(Atom) when is_atom(Atom) ->
    quote_plus(atom_to_list(Atom));
quote_plus(Int) when is_integer(Int) ->
    quote_plus(integer_to_list(Int));
quote_plus(String) ->
    quote_plus(String, []).

quote_plus([], Acc) ->
    lists:reverse(Acc);
quote_plus([C | Rest], Acc) when ?QS_SAFE(C) ->
    quote_plus(Rest, [C | Acc]);
quote_plus([$\s | Rest], Acc) ->
    quote_plus(Rest, [$+ | Acc]);
quote_plus([C | Rest], Acc) ->
    <<Hi:4, Lo:4>> = <<C>>,
    quote_plus(Rest, [hexdigit(Lo), hexdigit(Hi), ?PERCENT | Acc]).

revjoin([], _Separator, Acc) ->
    Acc;
revjoin([S | Rest], Separator, []) ->
    revjoin(Rest, Separator, [S]);
revjoin([S | Rest], Separator, Acc) ->
    revjoin(Rest, Separator, [S, Separator | Acc]).

hexdigit(C) when C < 10 -> $0 + C;
hexdigit(C) when C < 16 -> $A + (C - 10).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: How do I read/write as the authenticated user with Apache/WebDAV? I've set up DAV in apache2, which works great. The thing is, all read/write operations are done with the apache user's credentials. Instead I want to use the HTTP authenticated user's credentials. If I authenticate as "john", all read and write operations should use the system user john's credentials (from /etc/passwd). suEXEC seems like overkill since I am not executing anything, but I might be wrong...
Here's the current configuration:
<VirtualHost *:80>
    DocumentRoot /var/www/webdav
    ServerName webdav.mydomain.com
    ServerAdmin webmaster@mydomain.com

    <Location "/">
        DAV On
        AuthType Basic
        AuthName "WebDAV Restricted"
        AuthUserFile /etc/apache2/extra/webdav-passwords
        require valid-user
        Options +Indexes
    </Location>

    DAVLockDB /var/lib/dav/lockdb
    ErrorLog /var/log/apache2/webdav-error.log
    TransferLog /var/log/apache2/webdav-access.log
</VirtualHost>
A: Short answer, and as far as I know: you don't.
Long answer: it is possible to implement such a feature with an appropriate mpm, and there were various attempts to do so, but they don't seem to be very actively supported, and are at least not in the mainline Apache codebase.
peruser:
Q. Is peruser ready for production use?
A. In general, no.
perchild:
This module is not functional. Development of this module is not complete and is not currently active. Do not use perchild unless you are a programmer willing to help fix it.
That's too bad, really; most uses of WebDav I've seen store ownership information at the application layer, in the database, anyway. The consensus for doing file sharing is to use Samba instead; and that's not really a solution, I admit.
A: We have been using davenport (http://davenport.sourceforge.net/) for years to provide access to Windows/samba shares over webdav. Samba/Windows gives a lot of control over this sort of thing, and the Davenport just makes it usable over the web over SSL without a VPN
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Reverse Engineering C# code using StarUML I am trying to convert my C# code to a design (reverse engineering) using StarUML. I got the following error while performing the reverse engineering:
"Error occurred in the process of reverse engineering. message : Catastrophic failure".
After the error, the application crashed.
Could anyone suggest a solution for this please?
A: One of the problems with staruml is that apparently it does not support generics and when a file has the "<", a parser error occurs
A: I'm not familiar with StarUML in particular, although there are a couple of ways that it could be going about the process of documenting your assemblies. The most likely method is .Net reflection.
Lots of applications struggle with the more recent C# optimisations.
The best application for reflecting code back out is Reflector, and there are plug-ins that will generate UML for you.
A: It seems like a parser error of StarUML. You may try other UML software such as ModelMaker for C#.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: reinitialize system wide environment variable in linux I just want my Apache to register some of my predefined environment variables so that I can retrieve them using the getenv function in PHP. How can I do this? I tried adding /etc/profile.d/foo.sh with export FOO=/bar/baz as root, then restarted Apache.
A: Environment variables are inherited by processes in Unix. The files in /etc/profile.d are only executed (in the current shell, not in a subshell) when you log in. Just changing the value there and then restarting a process will not update the environment.
Possible Fixes:
*
*log out/log in, then start apache
*source the file: # . /etc/profile.d/foo.sh, then restart apache
*source the file in the apache init script
You also need to make sure that /etc/profile.d/ is sourced when Apache is started by init rather than yourself.
The best fix might also depend on the distribution you are using, because they use different schemes for configuration.
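The inheritance rule above is easy to demonstrate in a shell: a child process sees only what was exported before it was started (FOO is just a placeholder variable here):

```shell
#!/bin/sh
# A variable set without export is NOT visible to child processes.
FOO=bar
sh -c 'echo "child sees: [$FOO]"'   # child sees: []

# After export, children started from now on inherit it...
export FOO
sh -c 'echo "child sees: [$FOO]"'   # child sees: [bar]

# ...but changing /etc/profile.d/foo.sh later does not affect an
# already-running apache, for exactly the same reason.
```

This is why restarting Apache alone is not enough: the shell (or init process) that starts it must itself have sourced the updated profile first.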
A: You can use SetEnv in your config files (/etc/httpd/conf.d/*.conf, .htaccess ...). Additionally you should be able to define them in /etc/sysconfig/httpd (on RPM-based distribs) and export them (note: not tested).
Note: it wouldn't surprise me if some distributions tried quite hard to hide as much as possible, as far as system config is concerned, from a publically accessible service such as Apache. And if they don't, they might start doing this in a future version. Hence I advise you to do this explicitly. If you need to share such a setting between Apache and your shells, you could try sourcing /etc/profile.d/yourprofile.sh from /etc/sysconfig/httpd
A: Apache config files allow you to set environment variables on a per site basis.
So if your web server is serving pages from two logical sites you can have the same environment variable set differently for each site and thus get your PHP to react differently.
See the Apache mod_env for details:
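A sketch of what that per-site configuration might look like (hostnames, the APP_MODE variable, and values are placeholders; SetEnv is provided by mod_env):

```apacheconf
<VirtualHost *:80>
    ServerName site-one.example.com
    SetEnv APP_MODE production
</VirtualHost>

<VirtualHost *:80>
    ServerName site-two.example.com
    SetEnv APP_MODE staging
</VirtualHost>
```

In PHP, getenv('APP_MODE') then returns the value for whichever site served the request.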
A: If you need env vars for Apache only, what worked for me was editing the /etc/apache2/envvars and restart of Apache. I added these settings:
export LANG='en_US.UTF-8'
export LC_ALL='en_US.UTF-8'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to list some specific images in some folder on web server? Let me explain:
this is path to this folder: > www.my_site.com/images
And images are created by user_id, and for example, images of user_id = 27 are,
27_1.jpg, 27_2.jpg, 27_3.jpg!
How to list and print images which start with 27_%.jpg?
I hope You have understood me!
PS. I am a total beginner in ASP.NET (VB), so please give me detailed information
Here starts my loop
while dbread.Read()
'and then id user_id
dbread('user_id')
NEXT???
I need to create XML; so far I have created it like this:
act.WriteLine("")
act.WriteLine("http://www.my_site.com/images/"&dbread("user_id")&"_1.jpg")
act.WriteLine("")
But this is not the answer, because I need to create as many of these nodes as there are images for this user.
The list of these images doesn't exist in the database, which is why I must count them in the folder. (This is not exactly my site, but I need to create XML on this site.)
Do you understand me?
A: The best way is to just loop through all the files in the directory.
While dbRead.Read
    Dim sUserId As String = dbRead("user_id")
    For Each sFile As String In IO.Directory.GetFiles("C:\")
        'Compare against the file name only - GetFiles returns full paths.
        If IO.Path.GetFileName(sFile).StartsWith(sUserId) Then
            'Do something.
        End If
    Next
End While
However, to actually show the images, you're best bet could be to create a datatable of these images, and then use a datalist or repeater control to display them.
Dim dtImages As New DataTable
dtImages.Columns.Add("Filename")
If dbRead.Read Then
    Dim sUserId As String = dbRead("user_id")
    For Each sFile As String In IO.Directory.GetFiles("C:\")
        If IO.Path.GetFileName(sFile).StartsWith(sUserId) Then
            Dim drImage As DataRow = dtImages.NewRow
            drImage("Filename") = sFile
            dtImages.Rows.Add(drImage)
        End If
    Next
End If
dlImages.DataSource = dtImages
dlImages.DataBind()
Then, on your ASPX page, you would have a datalist control called dlImages defined like:
<asp:datalist id="dlImages" RepeatDirection="Horizontal" runat="server" RepeatLayout="Flow" Height="100%">
<ItemTemplate>
<asp:Image ID="Image1" Runat=server ImageUrl='<%# Server.MapPath("photos") & Container.DataItem("FileName") %>'>
</asp:Image>
</ItemTemplate>
</asp:datalist>
A: The appropriate method would be to do the following
*
*Get the listing of files using System.IO.Directory.GetFiles("YourPath", UserId + "_*.jpg")
*Loop through this listing and build your XML or then render it out to the user.
Basically the GetFiles method accepts a path, and a "filter" parameter which allows you to do a wildcard search!
EDIT:
The GetFiles operation returns a listing of strings that represent the full file name, you can then manipulate those values using the System.IO.Path.GetFileName() method to get the actual file name.
You can use the XmlDocument class if you want to actually build the document, or you could do it with a simple loop and a string builder. Something like the following.
StringBuilder oBuilder = new StringBuilder();
oBuilder.Append("<root>");
string[] oFiles = Directory.GetFiles("YourPath", "yourMask");
foreach (string currentString in oFiles)
{
    oBuilder.AppendLine("<file>http://yourpath/" + Path.GetFileName(currentString) + "</file>");
}
oBuilder.Append("</root>");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Boost shared_ptr container question Let's say I have a container (std::vector) of pointers used by a multi-threaded application. When adding new pointers to the container, the code is protected using a critical section (boost::mutex). All well and good. The code should be able to return one of these pointers to a thread for processing, but another separate thread could choose to delete one of these pointers, which might still be in use. e.g.:
thread1()
{
    foo* p = get_pointer();
    ...
    p->do_something();
}

thread2()
{
    foo* p = get_pointer();
    ...
    delete p;
}
So thread2 could delete the pointer whilst thread1 is using it. Nasty.
So instead I want to use a container of Boost shared ptrs. IIRC these pointers will be reference counted, so as long as I return shared ptrs instead of raw pointers, removing one from the container WON'T actually free it until the last use of it goes out of scope. i.e.
std::vector<boost::shared_ptr<foo> > my_vec;

thread1()
{
    boost::shared_ptr<foo> sp = get_ptr(0);
    ...
    sp->do_something();
}

thread2()
{
    boost::shared_ptr<foo> sp = get_ptr(0);
    ...
    my_vec.erase(my_vec.begin());
}

boost::shared_ptr<foo> get_ptr(int index)
{
    lock_my_vec();
    return my_vec[index];
}
In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes? Note that access to the global vector will be via a critical section.
I think this is how shared_ptrs work but I need to be sure.
A: For the thread safety of boost::shared_ptr you should check this link. It's not guaranteed to be safe, but on many platforms it works. Modifying the std::vector is not safe, AFAIK.
A:
In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes?
In your example, if thread1 gets the pointer before thread2, then thread2 will have to wait at the beginning of the function (because of the lock). So, yes, the object pointed to will still be valid. However, you might want to make sure that my_vec is not empty before accessing its first element.
A: If in addition, you synchronize the accesses to the vector (as in your original raw pointer proposal), your usage is safe. Otherwise, you may fall foul of example 4 in the link provided by the other respondent.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Thoughts on Design - Core Control Logic and Rendering Layers I just wanted to see if I could have your thoughts on the design of some work I am currently doing.
Here's the current situation - Basically:
*
*I am developing a series of controls for our applications.
*Some of these may be used in both WinForms and ASP.NET Web applications.
*I am on a constant endeavor to improve my testing and testability of my code.
So, here is what I have done:
*
*Created the core control logic in a class that has no concept of a UI. It simply raises events when things about it change. All data as stored as custom typed objects where it needs to be distinguished from others (e.g. I have a PagingControl where it has SelectedPage and PageNumber items).
*I then created an abstract class to act as the interface for a rendering "engine". This ensures that any custom types used (and possibly added) to the core logic are handled by the engine. Following the above example, it contains an abstract method RenderSelectedPage.
*I then created concrete implementations of the abstract rendering engine (e.g. ConsoleRenderingEngine, HtmlRenderingEngine etc.). This then handled the methods and rendered them to their respective UI's/Outputs as appropriate.
I found the following pro's and con's to this approach:
Pro's
*
*It works. Quite well, its easy to implement a new rendering mechanism, all you do is subclass the abstract engine and render the output (which passes required references to you).
*It really separates the UI from the core code, making it much easier to test.
*Obviously due to the encapsulation of core/rendering logic, it's quite obvious where problems lie when they appear.
Con's
*It can look confusing/bloated. Even though there is not a massive amount of code in each class, there are 3x classes to get it to render to one output (1x core, 1x interface, 1x renderer). However, when creating the WinForms/WebForms controls it also means another class (since one needs to subclass Control as well as the AbstractRenderingEngine).
... OK so that's the only "con" I can really think of, and the main reason for this question ^_^
So,
What are your thoughts on this "pattern"? How would you change/improve it?
This question may get updated as more thoughts come to me, or clarity may be requested (I know it's a heavy read!).
Update
Thanks for the answers guys, funny you said MVP, I thought I had seen something like this somewhere but couldn't remember for the life of me what it was! As soon as I saw "MVP" I thought "dammit". :D
Thanks for the responses guys. I will study MVP more and see if I can improve what I have further.
A: From your description it's a bit like how I do MVP but with the events going the other way.
I usually have a very thin view that hides behind an interface and that knows nothing about the presenter. The view is the one who throws events on user actions. Usually all the view does is translate UI specific to primitives or sometimes value objects from the model (value object in a ddd sense, not .net structs) Sometimes I nest views for more complex situations and for reuse. UserControls sometimes have their own view and presenter structure. When you start doing nesting views and presenters instantiation of objects starts getting a lot of work so this is usually when I start looking for an IoC container.
The presenter knows about the view through its interface and talks directly to it. It reacts to view events and does most of the logic. The view and model are DI'd into the presenter, so all the logic in it is testable.
Another approach I saw was where the view knew about the presenter and the presenter only knew about the view through the interface. This gets around having to raise events for view actions because the view can talk directly to the presenter. (I think this is what used to be called MVC in the Smalltalk world.) The presenter is still testable, and this enables you to do databinding from the view to the presenter. I usually don't use databinding, so for me this is not a big advantage; I prefer to decouple stuff a bit more, like in the first example.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Class method differences in Python: bound, unbound and static What is the difference between the following class methods?
Is it that one is static and the other is not?
class Test(object):
    def method_one(self):
        print "Called method_one"

    def method_two():
        print "Called method_two"
a_test = Test()
a_test.method_one()
a_test.method_two()
A: In Python, there is a distinction between bound and unbound methods.
Basically, a bound call to a member function (like method_one)
a_test.method_one()
is translated to
Test.method_one(a_test)
i.e. a call to an unbound method. Because of that, a call to your version of method_two will fail with a TypeError
>>> a_test = Test()
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
You can change the behavior of a method using a decorator
class Test(object):
    def method_one(self):
        print "Called method_one"

    @staticmethod
    def method_two():
        print "Called method_two"
The decorator tells the built-in default metaclass type (the class of a class, cf. this question) to not create bound methods for method_two.
Now, you can invoke the static method either on an instance or on the class directly:
>>> a_test = Test()
>>> a_test.method_one()
Called method_one
>>> a_test.method_two()
Called method_two
>>> Test.method_two()
Called method_two
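For reference, the translation described above can be checked directly. This is just an illustrative sketch, written in Python 3 syntax (where print is a function) so it runs as-is:

```python
class Test:
    def method_one(self):
        return "Called method_one"

a_test = Test()

# The bound call and the explicit "unbound" form are the same thing:
print(a_test.method_one())        # Called method_one
print(Test.method_one(a_test))    # Called method_one
```

Both calls end up executing the same function with the same first argument.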
A: method_two won't work because you're defining a member function but not telling it what the function is a member of. If you execute the last line you'll get:
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
If you're defining member functions for a class the first argument must always be 'self'.
A: Accurate explanation from Armin Ronacher above; expanding on his answer so that beginners like me understand it well:
The difference between the methods defined in a class, whether static or instance methods (there is yet another type, the class method, not discussed here, so I'm skipping it), lies in whether they are somehow bound to the class instance or not; that is, whether the method receives a reference to the class instance at runtime:
class C:
    a = []
    def foo(self):
        pass
C # this is the class object
C.a # is a list object (class property object)
C.foo # is a function object (class property object)
c = C()
c # this is the class instance
The __dict__ dictionary property of the class object holds the reference to all the properties and methods of a class object and thus
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
the method foo is accessible as above. An important point to note here is that everything in Python is an object, and so the references in the dictionary above themselves point to other objects. Let me call them Class Property Objects, or CPOs within the scope of my answer, for brevity.
If a CPO is a descriptor, then the Python interpreter calls the __get__() method of the CPO to access the value it contains.
In order to determine if a CPO is a descriptor, the Python interpreter checks if it implements the descriptor protocol. To implement the descriptor protocol is to implement these 3 methods:
def __get__(self, instance, owner)
def __set__(self, instance, value)
def __delete__(self, instance)
for e.g.
>>> C.__dict__['foo'].__get__(c, C)
where
*
*self is the CPO (it could be an instance of list, str, function etc) and is supplied by the runtime
*instance is the instance of the class where this CPO is defined (the object 'c' above) and needs to be explicitly supplied by us
*owner is the class where this CPO is defined (the class object 'C' above) and needs to be supplied by us. However, this is only because we are calling it on the CPO directly; when we call it on the instance, we don't need to supply this, since the runtime can supply the instance or its class (polymorphism)
*value is the intended value for the CPO and needs to be supplied by us
Not all CPO are descriptors. For example
>>> C.__dict__['foo'].__get__(None, C)
<function C.foo at 0x10a72f510>
>>> C.__dict__['a'].__get__(None, C)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute '__get__'
This is because the list class doesn't implement the descriptor protocol.
Thus the argument self in c.foo(self) is required because the method call is actually C.__dict__['foo'].__get__(c, C) (as explained above, C is not needed, as it can be found out or polymorphed).
And this is also why you get a TypeError if you don't pass that required instance argument.
If you notice, the method is still referenced via the class object C, and the binding with the class instance is achieved via passing a context, in the form of the instance object, into this function.
This is pretty awesome, since if you choose to keep no context or no binding to the instance, all that is needed is to write a class to wrap the descriptor CPO and override its __get__() method to require no context.
This new class is what we call a decorator and is applied via the keyword @staticmethod
class C(object):
    @staticmethod
    def foo():
        pass
The absence of context in the new wrapped CPO foo doesn't throw an error and can be verified as follows:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
The use case of a static method is more one of namespacing and code maintainability (taking it out of a class and making it available throughout the module, etc.).
It may be better to write static methods rather than instance methods whenever possible, unless of course you need to contextualise the methods (to access instance variables, class variables, etc.). One reason is to ease garbage collection by not keeping unwanted references to objects.
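The wrapping described above can be sketched as a minimal descriptor. This is an illustrative Python 3 sketch, not the real staticmethod implementation; the class name my_staticmethod is made up for the example:

```python
class my_staticmethod:
    """A minimal stand-in for staticmethod: a non-data descriptor
    whose __get__ ignores the instance and returns the raw function."""
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        # No binding: hand back the plain function, context-free.
        return self.func

class C:
    @my_staticmethod
    def foo():
        return "no self needed"

print(C.foo())    # no self needed: works on the class
print(C().foo())  # no self needed: and on an instance, with no TypeError
```

Because __get__ never binds the instance, the function can be called the same way from the class or from an instance.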
A: Methods in Python are a very, very simple thing once you understood the basics of the descriptor system. Imagine the following class:
class C(object):
    def foo(self):
        pass
Now let's have a look at that class in the shell:
>>> C.foo
<unbound method C.foo>
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
As you can see if you access the foo attribute on the class you get back an unbound method, however inside the class storage (the dict) there is a function. Why's that? The reason for this is that the class of your class implements a __getattribute__ that resolves descriptors. Sounds complex, but is not. C.foo is roughly equivalent to this code in that special case:
>>> C.__dict__['foo'].__get__(None, C)
<unbound method C.foo>
That's because functions have a __get__ method which makes them descriptors. If you have an instance of a class it's nearly the same, just that None is the class instance:
>>> c = C()
>>> C.__dict__['foo'].__get__(c, C)
<bound method C.foo of <__main__.C object at 0x17bd4d0>>
Now why does Python do that? Because the method object binds the first parameter of a function to the instance of the class. That's where self comes from. Now sometimes you don't want your class to make a function a method, that's where staticmethod comes into play:
class C(object):
    @staticmethod
    def foo():
        pass
The staticmethod decorator wraps your class and implements a dummy __get__ that returns the wrapped function as function and not as a method:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
Hope that explains it.
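The binding step above can also be reproduced by hand. Note this check is written for Python 3, where the "unbound" case simply returns the plain function (Python 3 dropped unbound method objects):

```python
class C:
    def foo(self):
        return id(self)

c = C()

# Manually invoke the descriptor protocol, as attribute lookup does:
bound = C.__dict__['foo'].__get__(c, C)
print(bound() == c.foo())  # True: both are the same bound call

# With no instance, Python 3 just gives back the raw function:
print(C.__dict__['foo'].__get__(None, C) is C.__dict__['foo'])  # True
```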
A: When you call a class member, Python automatically uses a reference to the object as the first parameter. The variable self actually means nothing, it's just a coding convention. You could call it gargaloo if you wanted. That said, the call to method_two would raise a TypeError, because Python is automatically trying to pass a parameter (the reference to its parent object) to a method that was defined as having no parameters.
To actually make it work, you could append this to your class definition:
method_two = staticmethod(method_two)
or you could use the @staticmethod function decorator.
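The pre-decorator form is equivalent to the decorator syntax; a small sketch (Python 3 syntax, with return instead of print so it runs as-is):

```python
class Test(object):
    def method_two():
        return "Called method_two"
    # Rebinding the name wraps the function, which is exactly
    # what the @staticmethod decorator syntax does for you:
    method_two = staticmethod(method_two)

print(Test.method_two())    # Called method_two
print(Test().method_two())  # Called method_two: no TypeError now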
A: >>> class Class(object):
... def __init__(self):
... self.i = 0
... def instance_method(self):
... self.i += 1
... print self.i
... c = 0
... @classmethod
... def class_method(cls):
... cls.c += 1
... print cls.c
... @staticmethod
... def static_method(s):
... s += 1
... print s
...
>>> a = Class()
>>> a.class_method()
1
>>> Class.class_method() # The class shares this value across instances
2
>>> a.instance_method()
1
>>> Class.instance_method() # The class cannot use an instance method
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method instance_method() must be called with Class instance as first argument (got nothing instead)
>>> Class.instance_method(a)
2
>>> b = 0
>>> a.static_method(b)
1
>>> a.static_method(a.c) # Static method does not have direct access to
>>> # class or instance properties.
3
>>> Class.c # a.c above was passed by value and not by reference.
2
>>> a.c
2
>>> a.c = 5 # The connection between the instance
>>> Class.c # and its class is weak as seen here.
2
>>> Class.class_method()
3
>>> a.c
5
A: The call to method_two will throw an exception for not accepting the self parameter that the Python runtime automatically passes to it.
If you want to create a static method in a Python class, decorate it with the staticmethod decorator.
class Test(object):
    @staticmethod
    def method_two():
        print "Called method_two"
Test.method_two()
A: That is an error.
First of all, the first line should be like this (be careful of capitals):
class Test(object):
Whenever you call a method of a class, it gets the instance itself as the first argument (hence the name self), and method_two gives this error:
>>> a.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
A: The second one won't work because when you call it like that, Python internally tries to call it with the a_test instance as the first argument, but your method_two doesn't accept any arguments, so you'll get a runtime error.
If you want the equivalent of a static method you can use a class method.
There's much less need for class methods in Python than for static methods in languages like Java or C#. Most often the best solution is to use a function in the module, outside a class definition; those work more efficiently than class methods.
A: Please read these docs from Guido, First-Class Everything, which clearly explain how unbound and bound methods are born.
A: Bound method = instance method
Unbound method = static method.
A: The definition of method_two is invalid. When you call method_two, you'll get TypeError: method_two() takes 0 positional arguments but 1 was given from the interpreter.
An instance method is a bound function when you call it like a_test.method_two(). It automatically accepts self, which points to an instance of Test, as its first parameter. Through the self parameter, an instance method can freely access attributes and modify them on the same object.
A: Unbound Methods
Unbound methods are methods that are not bound to any particular class instance yet.
Bound Methods
Bound methods are the ones which are bound to a specific instance of a class.
As documented here, self can refer to different things depending on whether the function is bound, unbound or static.
Take a look at the following example:
class MyClass:
    def some_method(self):
        return self # For the sake of the example
>>> MyClass().some_method()
<__main__.MyClass object at 0x10e8e43a0>

# This can also be written as:
>>> obj = MyClass()
>>> obj.some_method()
<__main__.MyClass object at 0x10ea12bb0>
# Bound method call:
>>> obj.some_method(10)
TypeError: some_method() takes 1 positional argument but 2 were given
# WHY IT DIDN'T WORK?
# obj.some_method(10) bound call translated as
# MyClass.some_method(obj, 10) unbound method and it takes 2
# arguments now instead of 1
# ----- USING THE UNBOUND METHOD ------
>>> MyClass.some_method(10)
10
Since we did not use the class instance — obj — on the last call, we can kinda say it looks like a static method.
If so, what is the difference between MyClass.some_method(10) call and a call to a static function decorated with a @staticmethod decorator?
By using the decorator, we explicitly make it clear that the method will be used without creating an instance for it first. Normally one would not expect class member methods to be used without the instance, and accessing them that way can cause possible errors depending on the structure of the method.
Also, by adding the @staticmethod decorator, we are making it possible to be reached through an object as well.
class MyClass:
    def some_method(self):
        return self

    @staticmethod
    def some_static_method(number):
        return number
>>> MyClass.some_static_method(10) # without an instance
10
>>> MyClass().some_static_method(10) # Calling through an instance
10
You can’t do the above example with the instance methods. You may survive the first one (as we did before) but the second one will be translated into an unbound call MyClass.some_method(obj, 10) which will raise a TypeError since the instance method takes one argument and you unintentionally tried to pass two.
Then, you might say, “if I can call static methods through both an instance and a class, MyClass.some_static_method and MyClass().some_static_method should be the same methods.” Yes!
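That identity can be checked directly; a small Python 3 sketch:

```python
class MyClass:
    @staticmethod
    def some_static_method(number):
        return number

# staticmethod's descriptor hands back the same plain function
# whether it is reached through the class or through an instance:
print(MyClass.some_static_method is MyClass().some_static_method)  # True
print(MyClass.some_static_method(10))    # 10
print(MyClass().some_static_method(10))  # 10
```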
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "252"
}
|
Q: Design Patterns using IQueryable With the introduction of .NET 3.5 and the IQueryable<T> interface, new patterns will emerge. While I have seen a number of implementations of the Specification pattern, I have not seen many other patterns using this technology. Rob Conery's Storefront application is another concrete example using IQueryable<T> which may lead to some new patterns.
What patterns have emerged from the useful IQueryable<T> interface?
A: It has certainly made the repository pattern much simpler to implement as well. You can essentially create a generic repository:
public class LinqToSqlRepository : IRepository
{
private readonly DataContext _context;
public LinqToSqlRepository(DataContext context)
{
_context = context;
}
public IQueryable<T> Find<T>() where T : class
{
return _context.GetTable<T>(); // linq 2 sql
}
/** snip: Insert, Update etc.. **/
}
and then use it with linq:
var query = from customers in _repository.Find<Customer>()
select customers;
A: I like the repository-filter pattern. It allows you to separate concerns between the middle and data tiers without sacrificing performance.
Your data layer can concentrate on simple list-get-save style operations, while your middle tier can utilize extensions to IQueryable to provide more robust functionality:
Repository (Data layer):
public class ThingRepository : IThingRepository
{
public IQueryable<Thing> GetThings()
{
return from m in context.Things
select m; // Really simple!
}
}
Filter (Service layer):
public static class ServiceExtensions
{
public static IQueryable<Thing> ForUserID(this IQueryable<Thing> qry, int userID)
{
return from a in qry
where a.UserID == userID
select a;
}
}
Service:
public GetThingsForUserID(int userID)
{
return repository.GetThings().ForUserID(userID);
}
This is a simple example, but filters can be safely combined to build more complicated queries. The performance is saved because the list isn't materialized until all the filters have been built into the query.
I love it because I dislike application-specific repositories!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: PHP: array_map on object? I'm trying to write a function that formats every (string) member/variable in an object, for example with a callback function. The variable names are unknown to me, so it must work with objects of all classes.
How can I achieve something similar to array_map or array_walk with objects?
A: use get_object_vars() to get an associative array of the members, and use the functions you mentioned.
btw, you can also do a foreach on an object like you would on an array, which is sometimes useful as well.
A: You can use get_object_vars(), but if you need more control, try using reflection. It's slower than get_object_vars() (or get_class_methods() for that matter), but it's much more powerful.
A: You are looking for get_object_vars / get_class_methods (the first gets the variables, the second the method names).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Solaris Core dump analysis I use pstack to analyze core dump files in Solaris
How else can I analyze the core dump from solaris?
What commands can be used to do this?
What other information will be available from the dump?
A: If the core dump is from a program you wrote or built, then use whichever debugger you would normally use to debug the running application. They should all be able to load core files. If you're not picky about debuggers, and you're using Solaris, I would recommend dbx. It will help to get the latest FCS version of Sun Studio with patches, or else the latest Express version of Sun Studio. It's also very helpful if you can load the core file into the debugger on the same system where the core file was created. If the code in the libraries is different from when the core file was created, the stack trace won't be useful where it goes through libraries. Debuggers also use OS helper libraries for understanding the libthread and runtime linker data structures, so if you need to load the core file on a different machine, you'll want to make sure the helper libraries installed on that OS match the system data structures of the OS where the core was created. You can find out everything you never wanted to know about these system libraries in a white paper that was written a few years ago.
http://developers.sun.com/solaris/articles/DebugLibraries/DebugLibraries_content.html
A: I guess any answer to this question should start with a simple recipe:
For dbx, the recipe is:
% dbx a.out core
(dbx) where
(dbx) threads
(dbx) thread t@3
(dbx) where
A: The pflags command is also useful for determining the state each thread was in when it core dumped. In this way you can often pinpoint the problem.
A: I would suggest trying gdb first as it's easier to learn basic tasks than the native Solaris debuggers in my opinion.
A: You can use the Solaris modular debugger, mdb, or dbx. mdb comes with the SUNWmdb package (or SUNWmdbx for the 64-bit version).
A core file is the image of your running process at the time it crashed.
Depending on whether your application was compiled with debug flags or not, you will be able to view an image of the stack, and hence know which function caused the core, get the values of the parameters that were passed to that function, the values of the variables, the allocated memory zones, ...
On recent Solaris versions, you can configure what the core file will contain with the coreadm command; for instance, you can have the mapped memory segments the process was attached to.
Refer to MDB documentation and dbx documentation. The GDB quick reference card is also helpful once you know the basics of GDB.
A: GDB can be used.
It can give the call that was attempted prior to the dump.
http://sourceware.org/gdb/
http://en.wikipedia.org/wiki/GDB
Having the source is great and if you can reproduce the errors even better as you can use this to debug it.
Worked great for me in the past.
A: Attach to the process image using the dbx debugger:
dbx [executable_file_name] [coredump_file_name]
It is important that there were no changes to the executable since the core was dumped (i.e. it wasn't rebuilt).
You can see the stack trace to see where the program crashed with dbx command "where".
You can move up and down the stack with command "up" and "down", or jump to the exact stack frame with "frame [number]", with the numbers seen in the output of "where".
You can print the value of variables or expressions with "print [expr]" command.
Have fun.
A: I found dbx on my solaris x86 box at
/opt/SUNWspro/bin/dbx
Cheers!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Considering object encapsulation, should getters return an immutable property? When a getter returns a property, such as a List of other related objects, should that list and its objects be immutable, to prevent code outside of the class from changing the state of those objects without the main parent object knowing?
For example if a Contact object, has a getDetails getter, which returns a List of ContactDetails objects, then any code calling that getter:
*
*can remove ContactDetail objects from that list without the Contact object knowing of it.
*can change each ContactDetail object without the Contact object knowing of it.
So what should we do here? Should we just trust the calling code and return easily mutable objects, or go the hard way and make a immutable class for each mutable class?
A: It's a matter of whether you should be "defensive" in your code. If you're the (sole) user of your class and you trust yourself then by all means no need for immutability. However, if this code needs to work no matter what, or you don't trust your user, then make everything that is externalized immutable.
That said, most properties I create are mutable. An occasional user botches this up, but then again it's his/her fault, since it is clearly documented that mutation should not occur via mutable objects received via getters.
A: It depends on the context. If the list is intended to be mutable, there is no point in cluttering up the API of the main class with methods to mutate it when List has a perfectly good API of its own.
However, if the main class can't cope with mutations, then you'll need to return an immutable list - and the entries in the list may also need to be immutable themselves.
Don't forget, though, that you can return a custom List implementation that knows how to respond safely to mutation requests, whether by firing events or by performing any required actions directly. In fact, this is a classic example of a good time to use an inner class.
A: If you have control of the calling code then what matters most is that the choice you make is documented well in all the right places.
A: Joshua Bloch in his excellent "Effective Java" book says that you should ALWAYS make defensive copies when returning something like this. That may be a little extreme, especially if the ContactDetails objects are not Cloneable, but it's always the safe way. If in doubt always favour code safety over performance - unless profiling has shown that the cloning is a real performance bottleneck.
There are actually several levels of protection you can add. You can simply return the member, which is essentially giving any other class access to the internals of your class. Very unsafe, but in fairness widely done. It will also cause you trouble later if you want to change the internals so that the ContactDetails are stored in a Set. You can return a newly-created list with references to the same objects in the internal list. This is safer - another class can't remove or add to the list, but it can modify the existing objects. Thirdly return a newly created list with copies of the ContactDetails objects. That's the safe way, but can be expensive.
I would do this a better way. Don't return a list at all - instead return an iterator over a list. That way you don't have to create a new list (List has a method to get an iterator) but the external class can't modify the list. It can still modify the items, unless you write your own iterator that clones the elements as needed. If you later switch to using another collection internally it can still return an iterator, so no external changes are needed.
A: In the particular case of a Collection, List, Set, or Map in Java, it is easy to return an immutable view to the class using return Collections.unmodifiableList(list);
Of course, if it is possible that the backing-data will still be modified then you need to make a full copy of the list.
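As a quick language-neutral sketch of the defensive-copy idea (written in Python here for brevity; the Contact name mirrors the question, and the detail strings are made up):

```python
class Contact:
    def __init__(self, details):
        self._details = list(details)  # private, mutable storage

    def get_details(self):
        # Hand out an immutable snapshot, not the live list.
        return tuple(self._details)

c = Contact(["home: 555-1234"])
snapshot = c.get_details()
print(snapshot)                     # ('home: 555-1234',)
print(hasattr(snapshot, "append"))  # False: callers cannot mutate it
```

The caller gets the data but has no handle on the parent object's internal list, which is the same guarantee Collections.unmodifiableList gives in Java (minus the shared-element caveat).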
A: Depends on the context, really. But generally, yes, one should write as defensive code as possible (returning array copies, returning readonly wrappers around collections etc.). In any case, it should be clearly documented.
A: I used to return a read-only version of the list, or at least, a copy. But each object contained in the list must be editable, unless they are immutable by design.
A: I think you'll find that it's very rare for every gettable to be immutable.
What you could do is to fire events when a property is changed within such objects. Not a perfect solution either.
Documentation is probably the most pragmatic solution ;)
A: Your first imperative should be to follow the Law of Demeter or ‘Tell don't ask’; tell the object instance what to do e.g.
contact.print( printer ) ; // or
contact.show( new Dialog() ) ; // or
contactList.findByName( searchName ).print( printer ) ;
Object-oriented code tells objects to do things. Procedural code gets information then acts on that information. Asking an object to reveal the details of its internals breaks encapsulation, it is procedural code, not sound OO programming and as Will has already said it is a flawed design.
If you follow the Law of Demeter approach any change in the state of an object occurs through its defined interface, therefore side-effects are known and controlled. Your problem goes away.
A: When I was starting out I was still heavily under the influence of HIDE YOUR DATA OO PRINCIPLES LOL. I would sit and ponder what would happen if somebody changed the state of one of the objects exposed by a property. Should I make them read-only for external callers? Should I not expose them at all?
Collections brought out these anxieties to the extreme. I mean, somebody could remove all the objects in the collection while I'm not looking!
I eventually realized that if your objects hold such tight dependencies on their externally visible properties and their types that you go boom if somebody touches them in a bad place, your architecture is flawed.
There are valid reasons to make your external properties readonly and their types immutable. But that is the corner case, not the typical one, imho.
A: First of all, setters and getters are an indication of bad OO. Generally the idea of OO is you ask the object to do something for you. Setting and getting is the opposite. Sun should have figured out some other way to implement Java beans so that people wouldn't pick up this pattern and think it's "Correct".
Secondly, each object you have should be a world in itself--generally, if you are going to use setters and getters they should return fairly safe independent objects. Those objects may or may not be immutable because they are just first-class objects. The other possibility is that they return native types which are always immutable. So saying "Should setters and getters return something immutable" doesn't make too much sense.
As for making immutable objects themselves, you should virtually always make the members inside your object final unless you have a strong reason not to (Final should have been the default, "mutable" should be a keyword that overrides that default). This implies that wherever possible, objects will be immutable.
As for predefined quasi-object things you might pass around, I recommend you wrap stuff like collections and groups of values that go together into their own classes with their own methods. I virtually never pass around an unprotected collection simply because you aren't giving any guidance/help on how it's used where the use of a well-designed object should be obvious. Safety is also a factor since allowing someone access to a collection inside your class makes it virtually impossible to ensure that the class will always be valid.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Difference between managed C++ and C++ The second question is: when do I use which of these two?
A: When not specified, C++ is unmanaged C++, compiled to machine code. In unmanaged C++ you must manage memory allocation manually.
Managed C++ is a language invented by Microsoft, that compiles to bytecode run by the .NET Framework. It uses mostly the same syntax as C++ (hence the name) but is compiled in the same way as C# or VB.NET; basically only the syntax changes, e.g. using '->' to point to a member of an object (instead of '.' in C#), using '::' for namespaces, etc.
Managed C++ was made to ease transition from classic C++ to the .NET Framework. It is not intended to be used to start new projects (C# is preferred).
A: You can code native C++ two different ways. The first is compiling directly to machine code with just the operating system between you and the platform (hardware). The second native coding is done with MFC (Microsoft Foundation Classes). This is the same as the first example except for the use of MFC.
Managed C++ uses the CLR (Common Language Runtime). The CLR, along with the .NET Framework class libraries, makes up the .NET Framework. This managed C++/CLI standard uses the .NET Framework along with CIL (Common Intermediate Language). This standard works by mapping to machine code only when the program is executing, by the use of a just-in-time compiler. If your code will be running on different hardware platforms, the use of managed code will be much easier. As with all things, there is a slight price to pay for convenience, as native code will run faster.
A: "Managed C++" refers to a language that was included in Visual Studio.NET/Visual Studio.NET 2003. It has since been deprecated, with the latest .NET C++ being C++/CLI.
A: You'll be using managed C++ when want to use a native C++ class library from managed code. In this case you wrap unmanaged classes in managed C++ ones, then you use them in any CLR language.
A: Managed C++ means that memory allocation, management, and garbage collection are handled by the runtime. Whereas in "regular" C++ you would have to allocate and deallocate memory yourself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
}
|
Q: SQL: inner join on alias column Previously I have asked to strip text from a field and convert it to an int, this works successfully. But now, I would like to do an INNER JOIN on this new value.
So I have this:
SELECT CONVERT(int, SUBSTRING(accountingTab.id, PATINDEX('%[0-9]%', accountingTab.id), 999))
AS 'memId', userDetails.title, userDetails.lname
FROM accountingTab INNER JOIN
(SELECT id, title, first, last FROM memDetTab) AS userDetails ON memID = userDetails.id
And then I get the Invalid Column Name memID error.
How can I fix this?
A: You can either repeat the whole expression or reverse your join:
SELECT *
FROM memDetTab
JOIN (SELECT CONVERT(int, SUBSTRING(accountingTab.id, PATINDEX('%[0-9]%', accountingTab.id), 999)) AS memId
FROM accountingTab) subquery
ON subquery.memID = memDetTab.ID
A: Instead of memId, repeat the whole expression.
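Spelled out, that would look something like the following (a sketch using the question's table and column names; the alias column can't be referenced in the ON clause, but the full expression can):

```sql
SELECT CONVERT(int, SUBSTRING(a.id, PATINDEX('%[0-9]%', a.id), 999)) AS memId,
       b.title, b.lname
FROM accountingTab a
INNER JOIN memDetTab b
    ON b.id = CONVERT(int, SUBSTRING(a.id, PATINDEX('%[0-9]%', a.id), 999))
```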
A: If you have to do this, you have design problems. If you're able, I would suggest you need to refactor your table or relationships.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can I share a variable value between classic asp, .NET and javascript? I've created an IHttpHandler in .NET C# which returns pieces of html to a classic asp page.
The classic asp page communicates with the IHttpHandler through basic http requests using ServerXMLHTTP in vbscript or Ajax Calls in JavaScript.
Now, I need a way to share a variable which I have in vbscript but not in javascript with the .NET application.
The first bit, sharing a variable between classic asp and .net, is not a problem, as I can just add it onto the http request. However, because the variable is not available in javascript, I can't do this in that case.
I had the idea that I could maybe cache the variable in the .NET application and use the cached version for javascript calls. But for this to work I would need a way to uniquely identify the "client" in .NET...
I tried to add System.Web.SessionState.IRequiresSessionState to my HttpHandler and use the SessionId but this didn't work because every single call to the HttpHandler seems to get a new ID.
Am I thinking the right way? What are my options here?
A: You could put your value in a cookie, then you could read with javascript and do anything with it, including sending it in a request to the .net app...
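A minimal sketch of the cookie route (the cookie name "sharedVar" and the header name below are made up for illustration): the classic ASP page writes the value to a cookie, and the JavaScript reads it back before calling the .NET handler:

```javascript
// Helper to pull one value out of a cookie string. document.cookie is
// passed in as an argument so the function is testable outside a browser.
function getCookie(cookieString, name) {
  for (const part of cookieString.split("; ")) {
    const eq = part.indexOf("=");
    if (part.slice(0, eq) === name) {
      return decodeURIComponent(part.slice(eq + 1));
    }
  }
  return null;
}

// In the page, before the Ajax call to the .NET handler:
// const value = getCookie(document.cookie, "sharedVar");
// xhr.setRequestHeader("X-Shared-Var", value); // header name is illustrative
```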
A: How about having a hidden variable on the page in which you can store the value of the variable from your server side vb script of your asp pages.
Then you can use Javascript to query this variable to send across to the Asp.Net handler through your ajax calls.
A: In the end my solution to the problem was the following:
*
*I send the variables to my webservice along with the normal ws request
*I store the variable values in a hash table on the webservice and return the hashkey to the client in a hidden html object (I used a head <meta> element for that)
*In the javascript I read that hash value and attach it to the http-request as a header value.
*When the webservice sees that hashkey, it looks up the respective variable values in the persistent hash table.
Might not be the very best solution but seems to work well in my case...
A: I'd recommend a cookie as well. You could also load the value from the database, but that'd be overkill.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Http Exception generated while validating viewstate I am getting the following error whenever I click on a postbacking control
HttpException (0x80004005): Validation
of viewstate MAC failed. If this
application is hosted by a Web Farm
or cluster, ensure that configuration
specifies the same validationKey and
validation algorithm. AutoGenerate
cannot be used in a cluster.
I am not using a Web Farm or cluster server. I have even tried setting the page property EnableViewStateMac to false but it changes the error message stating
The state information is invalid for
this page and might be corrupted.
What could possibly be wrong?
A: There is an article about this here: http://blogs.msdn.com/tom/archive/2008/03/14/validation-of-viewstate-mac-failed-error.aspx .
The basic problem is that your page hasn't completed loading before you perform the postback.
A few different solutions are in the article listed above:
1. Set enableEventValidation to false and viewStateEncryptionMode to Never
2. Mark the form as disabled and then enable it in script once the load is complete.
3. override the Render Event of the page to place the hidden fields for Encrypted Viewstate and Event validation on the top of the form.
But the main problem is that the page loads slowly, which should be fixed (if possible ASAP). It can also be good to apply solution 2 above as well, as there will always be trigger-happy users who click faster than the page loads, no matter how fast it loads :-).
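For completeness, the error message's own suggestion (pinning the validation key instead of using AutoGenerate) is a web.config entry along these lines; the key values below are placeholders you must generate yourself:

```xml
<!-- Sketch only: replace the placeholders with your own random keys. -->
<system.web>
  <machineKey validationKey="[your 128-hex-char key]"
              decryptionKey="[your 48-hex-char key]"
              validation="SHA1" />
</system.web>
```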
/Andreas
A: I have encountered the same problem with a custom-built ASP.NET control which was dynamically reloaded and rebuilt on every POST / GET request. Thus the page sending the POST request was not the same as the one receiving the response.
If you use any custom or databound controls, look closely at how they behave on a postback.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Are there any utilites that will help me refactor CSS I am working with some CSS that is poorly written to say the least. I am not a design/CSS expert, but I at least understand the C in CSS. While the builtin CSS support inside of VS-2008 is far improved over previous versions, it still doesn't quite do what I am looking for.
I was wondering if anyone know of a good program or utility that will help me to refactor and clean up my CSS like what ReSharper allows to do with C#.
Some features that would be nice to have:
*
*Examine CSS files and determine ways to extract common styles like font-style, color, etc...
*Plugin to VS-2008 would be awesome!
*Examine markup files and make some suggestions on improving the current use of classes and styles.
A: TopStyle is popular and always the one I hear recommended. It has recommendations on styles etc.
I use Aptana, but this doesn't do any refactoring; it just flags up errors and allows you to target certain browsers. Using this and a decent CSS book may help.
A: Firebug is a very good Firefox extension that allows you to examine which CSS declarations are active for which DOM element in your document tree.
Although it does not make any suggestions for improvements, it's a great help when debugging/simplifying CSS code by hand.
The Web Developer extension is also a great help.
A: If you're using ASP.NET 2.0, there's ReFactor! for ASP.NET
A: The Dust-Me Selectors Firefox extension can scan a website and tell you what CSS is used and what is not. Removing unused CSS is one good first step in refactoring.
I have often found that when some section is removed from a website, the HTML is removed but the CSS is not.
A: I've had good luck using Stylizer in the past. It's nicer and only costs 1/6 of TopStyle.
A: There's a Ruby gem called HAML that ships with an executable called css2sass. That executable translates CSS into SASS, which is a metalanguage on top of CSS that makes it much easier to refactor (by better illustrating the relationships among your selectors). Might be worth taking a look.
A: I used to use WestCiv's StyleMaster, which is a pretty good CSS editor / inspector / debugger app. Combine that with the aforementioned Firebug, and you can't help but stay on top of your CSS.
A: My attempt at playing around with Less for .NET.
A: I might be a little late but the ReSharper 6 early access preview (EAP) does this for you!
In a CSS file, entering "#" will auto-complete every ID from your project. Same with a period "." to list all your classes.
Best part: when you rename the selector it will rename it project-wide. It makes refactoring CSS much faster, if not pleasurable.
A: I like Expression Web's CSS facilities. But it doesn't do much for minimizing or unifying your CSS. You have to understand how CSS works to use it properly.
A: EditCSS for firefox is amazing.
A: This site at least helps to sort and minimize your rules: http://www.cleancss.com/
It doesn't get you to where you want to be, but it's a good first step.
A: Maybe CssTidy or CssOptimiser can help to clean up your CSS and make it smaller.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Where can a save confirmation page be hooked into the Django admin? (similar to delete confirmation) I want to emulate the delete confirmation page behavior before saving
certain models in the admin. In my case if I change one object,
certain others should be deleted as they depend upon the object's now
out-of-date state.
I understand where to implement the actual cascaded updates (inside
the parent model's save method), but I don't see a quick way to ask
the user for confirmation (and then rollback if they decide not to
save). I suppose I could implement some weird confirmation logic
directly inside the save method (sort of a two phase save) but that
seems...ugly.
Any thoughts, even general pointers into the django codebase?
Thanks!
A: You could overload the get_form method of your model admin and add an extra checkbox to the generated form that has to be ticked. Alternatively you can override change_view and intercept the request.
A: I'm by no means a Django expert, so this answer might misguide you.
Start looking somewhere around django.contrib.admin.options.ModelAdmin, especially render_change_form and response_change. I guess you would need to subclass ModelAdmin for your model and provide required behavior around those methods.
A: Have you considered overriding the administrative templates for the models in question? This link provides an excellent overview of the process. In this particular situation, having a finer-grained level of control may be the best way to achieve the desired result.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Optimize SQL query on large-ish table First of all, this question regards MySQL 3.23.58, so be advised.
I have 2 tables with the following definition:
Table A: id INT (primary), customer_id INT, offlineid INT
Table B: id INT (primary), name VARCHAR(255)
Now, table A contains in the range of 65k+ records, while table B contains ~40 records. In addition to the 2 primary key indexes, there is also an index on the offlineid field in table A. There are more fields in each table, but they are not relevant (as I see it, ask if necessary) for this query.
I was first presented with the following query (query time: ~22 seconds):
SELECT b.name, COUNT(*) AS orders, COUNT(DISTINCT(a.kundeid)) AS leads
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
Now, each id in medie is associated with a different name, meaning you could group by id as well as name. A bit of testing back and forth settled me on this (query time: ~6 seconds):
SELECT a.name, COUNT(*) AS orders, COUNT(DISTINCT(b.kundeid)) AS leads
FROM medie a
INNER JOIN katalogbestilling_katalog b ON a.id = b.offline
GROUP BY b.offline;
Is there any way to crank it down to "instant" time (max 1 second at worst)? I added the index on offlineid, but besides that and the re-arrangement of the query, I am at a loss for what to do. EXPLAIN shows me the query is using filesort (the original query also used temp tables). All suggestions are welcome!
A: I'm going to guess that your main problem is that you are using such an old version of MySQL. Maybe MySQL 3 doesn't like the COUNT(DISTINCT()).
Alternately, it might just be system performance. How much memory do you have?
Still, MySQL 3 is really old. I would at least put together a test system to see if a newer version ran that query faster.
A: Unfortunately, mysql 3 doesn't support sub-queries. I suspect that the old version in general is what's causing the slow performance.
A: How is kundeid defined? It would be helpful to see the full schema for both tables (as generated by MySQL, ie. with indexes) as well as the output of EXPLAIN with the queries above.
The easiest way to debug this and find out what is your bottleneck would be to start removing fields, one by one, from the query and measure how long does it take to run (remember to run RESET QUERY CACHE before running each query). At some point you'll see a significant drop in the execution time and then you've identified your bottleneck. For example:
SELECT b.name, COUNT(*) AS orders, COUNT(DISTINCT(a.kundeid)) AS leads
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
may become
SELECT b.name, COUNT(DISTINCT(a.kundeid)) AS leads
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
to eliminate the possibility of "orders" being the bottleneck, or
SELECT b.name, COUNT(*) AS orders
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
to eliminate "leads" from the equation. This will lead you in the right direction.
update: I'm not suggesting removing any of the data from the final query. Just remove them to reduce the number of variables while looking for the bottleneck. Given your comment, I understand
SELECT b.name
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
is still performing badly? This clearly means it's either the join that is not optimized or the group by (which you can test by removing the group by - either the JOIN will be still slow, in which case that's the problem you need to fix, or it won't - in which case it's obviously the GROUP BY). Can you post the output of
EXPLAIN SELECT b.name
FROM katalogbestilling_katalog a, medie b
WHERE a.offlineid = b.id
GROUP BY b.name
as well as the table schemas (to make it easier to debug)?
update #2
there's also a possibility that all of your indexes are created correctly but your MySQL installation is misconfigured when it comes to max memory usage or something along those lines, which forces it to sort on disk.
A: You may get a small increase in performance if you remove the inner join and replace it with a nested select statement; also remove the COUNT(*) and replace it with a count of the PK.
SELECT a.name, COUNT(*) AS orders, COUNT(DISTINCT(b.kundeid)) AS leads
FROM medie a
INNER JOIN katalogbestilling_katalog b ON a.id = b.offline
GROUP BY b.offline;
would be
SELECT a.name,
COUNT(a.id) AS orders,
(SELECT COUNT(kundeid) FROM katalogbestilling_katalog b WHERE b.offline = a.id) AS Leads
FROM medie a;
A: Well, if the query is run often enough to warrant the overhead, create an index on table A containing the fields used in the query. Then all the results can be read from the index and it won't have to scan the table.
That said, all my experience is based on MSSQL, so might not work.
A: Your second query is fine and 65k+40k rows is not very large :)
Put a new index on the katalogbestilling_katalog.offline column and it will run faster for you.
A: You could try making sure there are covering indexes defined on each table. A covering index is just an index where each column requested in the select or used in a join is included in the index. This way, the engine only has to read the index entry and doesn't have to also do the corresponding row lookup to get any requested columns not included in the index. I've used this technique with great success in Oracle and MS SqlServer.
Looking at your query, you could try:
one index for medie.id, medie.name
one index for katalogbestilling_katalog.offlineid, katalogbestilling_katalog.kundeid
The columns should be defined in these orders for the index. That makes a difference whether the index can be used or not.
More info here:
Covering Index Info
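As a sketch, creating those two covering indexes in MySQL would look like this (the index names are made up):

```sql
ALTER TABLE medie
    ADD INDEX ix_medie_id_name (id, name);
ALTER TABLE katalogbestilling_katalog
    ADD INDEX ix_katalog_offline_kunde (offlineid, kundeid);
```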
A: Try adding an index to (offlineid, kundeid)
I added 180,000 BS rows to katalog and 30,000 BS rows to medie (with katalog offlineid's corresponding to medie id's and with a few overlapping kundeid's to make sure the distinct counts work). Mind you, this is on mysql 5, so if you don't have similar results, mysql 3 may be your culprit, but from what I recall mysql 3 should be able to handle this just fine.
My tables:
CREATE TABLE `katalogbestilling_katalog` (
`id` int(11) NOT NULL auto_increment,
`offlineid` int(11) NOT NULL,
`kundeid` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `offline_id` (`offlineid`,`kundeid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=60001 ;
CREATE TABLE `medie` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=30001 ;
My query:
SELECT b.name, COUNT(*) AS orders, COUNT(DISTINCT(a.kundeid)) AS leads
FROM medie b
INNER JOIN katalogbestilling_katalog a ON b.id = a.offlineid
GROUP BY a.offlineid
LIMIT 0 , 30
"Showing rows 0 - 29 (30,000 total, Query took 0.0018 sec)"
And the explain:
id: 1
select_type: SIMPLE
table: a
type: index
possible_keys: NULL
key: offline_id
key_len: 8
ref: NULL
rows: 180000
Extra: Using index
id: 1
select_type: SIMPLE
table: b
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: test.a.offlineid
rows: 1
Extra:
A: Try optimizing the server itself. See this post by Peter Zaitsev for the most important variables. Some are InnoDB specific, while others are for MyISAM. You didn't mention which engine you are using, which might be relevant in this case (COUNT(*) is much faster in MyISAM than in InnoDB, for example).
Here is another post from same blog, and an article from MySQL Forge
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What unit-test frameworks would you recommend for J2ME? I'm relatively new to J2ME and about to begin my first serious project. My experience in testing isn't too deep either. I'm looking for a unit test framework for J2ME.
So far I've seen J2MEUnit, but I don't know how well supported it is. I've seen JavaTest Harness but I don't know if it's overkill.
Please tell me what framework you suggest with respect to:
* Simplicity of implementing tests
* Supporting community and tools
* Compatibility with application certification processes
* Integration with IDEs (Eclipse, NetBeans)
* Other aspects you find important...
Thanks,
Asaf.
A: This is a blog entry from a Spanish company that makes mobile games. It compares many frameworks, and the conclusion is (translated):
*
*MoMEUnit offers very useful information about the tests. It is easily ported and Ant-compatible. A disadvantage (or maybe not) is that it requires every test class to have a unique test method, using a lot of inheritance.
*JMEUnit (the future merge of J2MEUnit and JMUnit). JMUnit doesn't support Ant, but its interface is similar to MoMEUnit's. J2MEUnit doesn't provide very useful information with the tests. Test creation in both frameworks is somewhat complex. J2MEUnit does support Ant; that's why the merge of both frameworks will be very interesting (they have been working on it for a year, more or less).
My experience: I've used J2MEUnit, and setting up test fixtures is a pain due to the lack of reflection in J2ME, but they are all built the same way, so a template saves a lot of time.
I was planning to try out MoMEUnit this week, just to check its simpler model.
Some unit test frameworks for J2ME:
*
*JMUnit
*MoME Unit
*J2ME Unit
*Sony-Ericsson Movil Java Unit
A: Take a glance at MockME as well.
www.mockme.org
From their site:
"MockME is Java ME mock objects for Java SE. MockME lets you write real unit tests without having to run them on the phone. You can even use dynamic mock object frameworks such as EasyMock that enables you to mock any object in Java ME! MockME integrates best-of-breed tools for unit testing including JUnit, EasyMock and DDSteps. By making Java ME API's mockable you can write unit tests for your Java ME application the way you really want to."
A: MicroEmulator + JUnit on J2SE
I started out with tools like JMUnit, but I recently switched over to standard JUnit + MicroEmulator on J2SE. This is similar to using MockME, but with MicroEmulator instead. I prefer MicroEmulator, because it has actual implementations of the components, and you can run an entire MIDlet on it. I've never used MockME myself though.
All my non-GUI unit tests are run by simply using MicroEmulator as a library. This has the advantage that all the JUnit tools work seamlessly, specifically Ant, Maven, most IDE's and Continuous Integration tools. As it runs on J2SE, you can also use features such as generics and JUnit annotations, which makes writing unit tests a little nicer.
Some components like the RecordStore require some setup before working. This is done with MIDletBridge.setMicroEmulator().
Using MicroEmulator also has the advantage that the implementation of some components can be customized, for example the RecordStore. I use an in-memory RecordStore, which is re-created before each test, so that I'm sure the tests run independently.
Real Devices
The approach described above won't run on any real devices. But, in my opinion, only GUI and acceptance tests need to be run on real devices. For this, tools like mVNC together with T-Plan Robot can be used on Symbian devices (thanks to this blog post). However, I could only get mVNC to work over Bluetooth, and it was very slow.
An alternative might be to use a service like The Forum Nokia Remote Device Access (RDA). I still need to investigate whether platforms like this are suitable for automated testing.
A: Hmm... I myself have not developed a mobile application, but I think J2MEUnit is the better choice as it's based on the original JUnit, which has a big community and is supported by most IDEs, so it should be quite easy to run at least those tests which do not depend on the mobile hardware directly from your IDE.
More important might be that J2MEUnit integrates with ANT so you can run your test with every build.
A: A related document I found (after posting the question) is Testing Wireless Java Applications. It describes J2MEUnit near the document's end.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can I use my WatiN tests to stress test? In my current project we are testing our ASP.NET GUI using WatiN and MbUnit.
When I was writing the tests I realized that it would be great if we could also use all of these for stress testing. Currently we are using Grinder to stress test, but then we have to script our cases all over again, which for many reasons isn't that good.
I have been trying to find a tool that can use my existing tests to create load on the site and record stats, but so far I have found nothing. Is there such a tool, or is there an easy way to create one?
A: We have issues on our build server when running WatiN tests as it often throws timeouts trying to access the Internet Explorer COM component. It seems to hang randomly while waiting for the total page to load.
Given this, I would not recommend it for stress testing as the results will be inaccurate and the tests are likely to be slow.
I would recommend JMeter for making threaded calls to the HTTP requests that your GUI is making
A: For load testing there is a tool which looks promising - LoadStorm. Free for 25 users. It has zero deployment needs as this is a cloud based service.
A: You could build a load controller for your stress testing. It could take your watin tests and run them in a multithreaded/multiprocessed way.
A: If you are comfortable using Selenium instead of WatiN, check out BrowserMob for browser-based load testing. I'm one of the Selenium RC authors and started BrowserMob to provide a new way to load test. By using real browsers, rather than simulated traffic, tests end up being much easier to script and maintain.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is the Win32 API function definition available in a "database"/XML format? I'm looking for a list of win32 API in some "database"/XML format.
I'd need it to easily create a "conversion layer" between win32 API and the higher level language I'm using (harbour/xharbour). Since this runs Pcode, it is necessary to transform parameters to C standard...
Instead of doing manual code write, I'd like to automate the process...
just for example, the windows API definition (taken from MSDN)
DWORD WINAPI GetSysColor(
__in int nIndex
);
should be transformed in
HB_FUNC( GETSYSCOLOR )
{
hb_retnl( (LONG) GetSysColor( hb_parni( 1 ) ) );
}
A: AFAIK, pinvoke.net only stores text data with the PInvoke definition for the call. Not very useful if what you want is something to use as a pre-parsed database of APIs.
Probably you could create a small parser that will take the include file and translate it to what you need. In that case, I'd recommend using lcc-win32's include files, as they are a pretty much fat-free/no-BS version of the SDK headers (they don't come with a bunch of special reserved words you'd have to ignore, etc.).
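As a sketch of that parser idea: the fragment below (Python, with a made-up type-mapping table covering only a couple of types) turns a simplified prototype into the HB_FUNC wrapper shown in the question. Real SDK headers would need a proper C parser, and the mapping tables would have to cover the full Win32 type zoo.

```python
import re

# Hypothetical mapping tables: C parameter types -> Harbour hb_par* accessors,
# and C return types -> hb_ret* setters. Extend as needed.
PARAM_GET = {
    "int": "hb_parni( {i} )",
    "DWORD": "(DWORD) hb_parnl( {i} )",
    "LPCSTR": "hb_parc( {i} )",
}
RET_SET = {
    "DWORD": "hb_retnl( (LONG) {call} )",
    "int": "hb_retni( {call} )",
    "BOOL": "hb_retl( {call} )",
}

def wrap(prototype):
    """Turn 'DWORD WINAPI GetSysColor(__in int nIndex);' into an HB_FUNC."""
    m = re.match(r"\s*(\w+)\s+WINAPI\s+(\w+)\s*\(([^)]*)\)\s*;?", prototype)
    ret, name, params = m.group(1), m.group(2), m.group(3).strip()
    args = []
    if params and params != "void":
        for i, p in enumerate(params.split(","), start=1):
            # Drop SAL annotations such as __in / __out, keep "type name".
            words = [w for w in p.split() if not w.startswith("__")]
            ptype = " ".join(words[:-1])  # e.g. "int"
            args.append(PARAM_GET[ptype].format(i=i))
    call = "%s( %s )" % (name, ", ".join(args)) if args else "%s()" % name
    return "HB_FUNC( %s )\n{\n   %s;\n}" % (name.upper(),
                                            RET_SET[ret].format(call=call))

print(wrap("DWORD WINAPI GetSysColor(\n    __in int nIndex\n);"))
```

Running it on the GetSysColor prototype prints the wrapper from the question.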
A: Of course, you have the Microsoft Platform SDK, but it is raw .h C code, so it's hard to parse!
Similar work has been done by VB users (and Delphi users, and probably for some other languages); for example, ApiViewer has such a database, but in some proprietary binary format (.apv extension), so you might have to reverse-engineer it.
Similarly, there is API-Guide, which was hosted at Allapi.net, but the latter seems to be a parking site now. It used .api files (again binary and proprietary).
A: About the closest thing I know of would be: http://pinvoke.net/
Maybe they would share their data with you?
They have a VS tool that accesses this data, so it may be a webservice. You might even be able to sniff that out.
A: There seems to be some database (and an app for using it, named "PInvoke Interop Assistant") at:
https://github.com/jaredpar/pinvoke/tree/master/StorageGenerator/Data
although I'm not sure what's the license for now — thus I've asked the authors.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Monitor running .net apps I have some .net apps running that I need to monitor. For example, when MethodA is called in App1, my monitor app should detect this. I have a lot of running apps, and the solution proposed here is to recompile all those apps and include a new line in the desired methods that we want to monitor. I want to do this only if there is absolutely no other way. So, has anybody ever done something like this? Basically, I need to create a new app that, when I click a button on it, will tell me: MethodA was called in App1, in real time...
thanks!
A: There are several ways you could do this. One is to use log4Net, 'sprinkle' your methods with calls to log4Net's write methods. You can choose a variety of logging appenders (destinations) such as email or a database, but a less known tip is to download the standalone program, DebugView (SysInternals -> now Microsoft) which listens for the default messages.
A: PostSharp delivers a way to edit compiled .NET code. The edits are written in C# code which is compiled (attributes) or in configuration code. It has a mechanism which can log (or populate, or anything else) a method/event call, and much more.
I think this is the tool you need.
A: System.Diagnostics.PerformanceCounter is a good place to start. You can create new counters that can be viewed in the Performance control panel applet. They're a little confusing at the start, but when you realize average counters need two components to calculate a percentage it gets a lot easier.
A: I don't know if .NET has a matching mechanism, but Java allows you to specify an agent JAR file, namely a class that is notified/invoked when each class is loaded. Then, via instrumentation/bytecode manipulation, you could intercept such method calls. Perhaps you can replace the class loader in some way in .NET. not sure.
A: Reflection is what you are looking for in .NET, but I am not sure of the implementation details behind what you want to do.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to quickly add tickets in Trac? It's very painful to add multiple tickets to Trac or to have it as your own todo list. That causes people to use their own task management tools so tasks are then spread all around.
Is there any plugin or macro that would quicken the process of adding a ticket?
A: If you're using Eclipse: Mylyn is perfect.
Otherwise you could always get the XML RPC plugin. http://trac-hacks.org/wiki/XmlRpcPlugin and roll your own little tool.
For quickly creating similar tickets, you could use the Clone plugin: http://trac-hacks.org/wiki/CloneTicketPlugin
Edit And I second Espen's idea with the SVN checkin hook, it works great for us, as well.
A: You could try using EmailtoTrack, so you can create tickets just by sending emails.
(Another neat trac tip, if not directly related to your question, is to use a commit hook with your version control system so you can close tickets by doing commits. I've only tried this with SVN, but it shouldn't be hard to port.)
A: There is also a command-line trac ticket creator on trac-hacks; you have to run it on the same machine the trac repo resides on. I find the command-line addition to be much faster than the web-based one.
http://trac-hacks.org/wiki/TicketToTracScript
A: The following allows you to type a quick note. The note becomes a Trac ticket, assigned to yourself. I use this for very quick bugs and/or features I don't want to forget. Or, if I make up a feature I open then close a ticket for it, so I get full credit :)
- j
#!/usr/bin/env python
'''
trac-bug: add bug/feature to current Trac project, from the command line.
Specify Trac project directory in TRAC_ENV environment variable.
'''
import os, sys
TRAC_ENV = os.environ.get('TRAC_ENV') or os.path.expanduser('~/trac/projectenv')
if not os.path.isdir(TRAC_ENV):
print >>sys.stderr, "Set TRAC_ENV to the Trac project directory."
sys.exit(2)
from trac.env import open_environment
from trac.ticket import Ticket
t = Ticket(open_environment(TRAC_ENV))
desc = ' '.join(sys.argv[1:])
info = dict(
status='open',
owner=os.environ['USER'], reporter=os.environ['USER'],
description = desc, summary=desc
)
t.populate(info)
num = t.insert()
if not num:
print >>sys.stderr, "Ticket not created"
print >>sys.stderr, info
sys.exit(1)
print "Ticket #%d: %s" % (num,desc)
sys.exit(0) # all is well
Usage is brief:
$ trac-bug out of beer
Ticket #9: out of beer
A: Meanwhile, someone has written the TicketImportPlugin, which creates or updates multiple tickets in one user interaction from an Excel table.
A: If Mylyn is working for you, consider checking out http://tasktop.com too. Tasktop extends Mylyn with powerful productivity features such as automatic time tracking, web browsing support, email and calendar integration, and more.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Creating Custom GnuCash Reports with Scheme Are there any resources which show how to create custom GnuCash reports? I don't know the intricacies of Scheme but I do know the basics of Lisp, based on tinkering with Emacs. Is there a site which lays out the API for GnuCash reports, ideally with a little explanation of Scheme as well?
A: It appears that their wiki has some information here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Best way to deal with session timeout in web apps? I am currently building an internal web application used in a factory/warehouse type location. The users will be sharing a single PC between several people, so we need to have a fairly short session timeout to stop people wandering off and leaving the application logged in where someone else can come to the PC and do something under the previous user's username.
The problem with this is a session can timeout while a user is currently entering information into a form, especially if they take a long time.
How would you deal with this in a user friendly manner?
A: Use AJAX to regularly stash the contents of the partially filled-out form so they have not lost their work if they get booted by the system. Heck, once you're doing that, use AJAX to keep their session from timing out if they spend the time typing.
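A minimal sketch of the stashing idea. The `/stash` endpoint, the form id, and the 30-second interval are all assumptions for illustration, not from the answer; the serialization is pulled out into a pure helper so it can be tested outside the browser:

```javascript
// Serialize a list of {name, value} field pairs into a JSON payload.
function collectFields(fields) {
  const data = {};
  for (const f of fields) data[f.name] = f.value;
  return JSON.stringify(data);
}

// Browser wiring (assumed markup: a form with id="entry"):
// setInterval(() => {
//   const inputs = Array.from(
//     document.querySelectorAll('#entry input, #entry textarea'));
//   const payload = collectFields(
//     inputs.map(el => ({ name: el.name, value: el.value })));
//   // Stashing also doubles as the keep-alive request the answer mentions.
//   fetch('/stash', { method: 'POST', body: payload });
// }, 30000);
```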
A: Keep the server informed about the fact that the user is actively entering information.
For instance, send a message to the server when the user presses the TAB key or clicks on a field with the mouse.
The final solution is up to you.
A: The best advice would probably be to ask the users to close the browser window once they're done. With the use of session-cookies, the session will automatically end when the browser is closed or otherwise on a 30 minute timeout (can be changed afaik).
Since there by default is no interaction between the browser and the server once a page is loaded, you would have to have a javascript contact the server in the background on forms-pages to refresh the session, but it seems a bit too much trouble for such a minor problem.
A: If the session timeout is so short that the user doesn't have the time to fill in a form, I would put an AJAX script that makes a http request to the server, every few minutes, to keep the session alive. I would do that only on pages that the user has to fill in something or has already started filling something.
Another solution would be to use a session timeout reminder script that pops up a dialog to remind the user that the session is about to time out. The popup should display a "Logout" button and a "Continue using application" button that makes an ajax request to update the session timeout.
A: Maybe a keep-alive JavaScript process could be helpful in this case. If the script captures some key events, it can send an "I'm still typing" message to the server to keep the session alive.
A: have you considered breaking the form into smaller chunks?
A: Monitor the timeout and post a pop-up to notify the user that their current session will expire and present "OK" or "Cancel" buttons. OK to keep the session going (i.e. reset the counter to another 5 minutes or 10 minutes - whatever you need) -or- Cancel to allow the session to continue to countdown to zero and thus, ending.
That's one of lots of ways to handle it.
A: Using a JavaScript "thread" to keep the session open is, to me, a bad idea.
It's against the idea of session timeout which exists to free some resources if there's no user in front of the application.
I think you should instead adjust the session timeout to a more accurate value, so that the form can be filled in during "typical normal use".
You may also be proactive by :
*
*having a JavaScript alert display a non-intrusive warning (not a popup) to the user before the timeout expires, saying that the session will expire soon (and giving a link that sends an ajax request to reset the timeout and remove the warning; that avoids the user losing the form he is currently typing),
*and also have a second JavaScript "thread" which, if the session has expired, redirects to the login page with a message saying that the session has now expired.
I think that's best because it avoids having the user fill in a complicated form for nothing, and it handles the case where the user has gone away.
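The warn-then-redirect behaviour described above can be sketched as a pure decision function; the action names and threshold values below are assumptions for illustration:

```javascript
// Decide what the page should do given how long the user has been idle.
// timeoutMs: the server's session timeout; warnBeforeMs: how early to warn.
function sessionAction(idleMs, timeoutMs, warnBeforeMs) {
  if (idleMs >= timeoutMs) return 'redirect-to-login';
  if (idleMs >= timeoutMs - warnBeforeMs) return 'show-warning';
  return 'none';
}

// Browser wiring (assumed): poll every few seconds and act on the result.
// setInterval(() => {
//   switch (sessionAction(Date.now() - lastActivity, 5 * 60000, 60000)) {
//     case 'show-warning':      showExpiryWarning(); break; // with a keep-alive link
//     case 'redirect-to-login': location.href = '/login?expired=1'; break;
//   }
// }, 5000);
```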
A: As an alternative to the technical solutions, you could build your application in such a way that every time a particular job is done, for example filling in a form, you ask the user whether he wants to continue with another job or is done. You could have a start screen with menu options, and when the user chooses an option he first has to enter his credentials.
Or put a password field on the form. It depends on how many forms they have to fill in per session.
A: When the user posts the form and their session has timed out, you should make sure you save the form values somewhere and then ask the user to login again. Once they have re-authenticated you they can then re-submit the form (as none of their data will have been lost).
A: I once developed something requiring very long sessions. The user logged in on a page when he sat down at the machine and, after doing his work, logged out. He might use the system for a few minutes or for hours. To keep the session alive until he logged out, I used a timer with JavaScript; it went to the server and updated an Anthem label with the current server time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Converting MS Access "OLE Objects" back to plain JPEGs - best way? Background: We have an old (but business-critical) SQL Server database with an MS Access ADP front-end; this was originally upsized to SQL Server from a series of Access databases.
This database tracks hazardous materials for our customers, and stores a large number of images. These images are inserted from MS Access, and get put into the database as OLE Objects.
The problems are:
*
*These are a pain to read out in anything except Access/Office
*There's a MASSIVE storage overhead - ~10GB of images takes up 600+ GB of storage space(!)
My question is this: what way would you recommend to convert these bloated objects back into simple JPEGs? Once we do this we can finally migrate our front-end from Access and onto a simple web-based system, and our backup times will become manageable again!
A: Take the *.bas file from http://www.access-im-unternehmen.de/index1.php?BeitragID=337&id=300 (unfortunately it is in German).
It uses the GDI+ lib from MS (included in Win standard installation) to import/export pics to/from Access OLE.
Rough translation of interface:
*
*IsGDIPInstalled: Checks for installation of GDI+
*InitGDIP: Init of GDI+.
*ShutDownGDIP: Deinit of GDI+ (important to use!)
*LoadPictureGDIP: Loads pic in StdPicture object (bmp, gif, jp(e)g, tif, png, wmf, emf and ico).
*ResampleGDIP: Scales pic to new dimensions and sharpens if needed.
*MakeThumbGDIP: Makes thumbnail and fills border with color.
*GetDimensionsGDIP: Gets dimensions in a TSize structure, in pixels.
*SavePicGDIPlus: Saves a Picture object to file as BMP, GIF, PNG or JPG (JPG with given quality)
*ArrayFromPicture: Returns a byte array of a picture, to put the pic into an OLE field of a table
*ArrayToPicture: Creates a picture from the byte array of an OLE field of a table containing a picture
A: Here is the link again: http://www.access-im-unternehmen.de/index1.php?BeitragID=337&id=300
A: Use Access MVP Stephen Lebans ExtractInventoryOLE tool to extract the OLE objects from a table to separate files.
http://www.lebans.com/oletodisk.htm
According to Lebans: "Does NOT require the original application that served as the OLE server to insert the object. Supports all MS Office documents, PDF, All images inserted by MS Photo Editor, MS Paint, and Paint Shop Pro. Also supports extraction of PACKAGE class including original Filename."
Also, Access 2007 stores OLE objects much more efficiently than the historical BMP formats of previous versions, so you would have a smaller storage space and be able to keep your Access app if you converted it from the 600+GB storage of SQL Server to Access 2007 accdb format. Your backup times would be manageable and you wouldn't need to spend time converting an Access front end to a web front end.
A: I think the reason your database is so bloated is that the JPGs are also stored as bitmaps inside the "OLE object" structure, or so I've seen, depending on the method by which the JPEG was inserted.
This is not optimal, but: for every image in the database, I would programmatically create a dummy .doc containing just the image, then pass it through OpenOffice conversion, and extract the JPEG from the images subfolder of the produced OpenOffice document (which is a ZIP file).
I would then replace the OLE documents in the database with the raw JPEG data, but then I have no way for you to plainly display them in a custom application (unless it's a web app).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Abusing XmlReader ReadSubtree() I need to parse a xml file which is practically an image of a really big tree structure, so I'm using the XmlReader class to populate the tree 'on the fly'. Each node is passed just the xml chunk it expects from its parent via the ReadSubtree() function. This has the advantage of not having to worry about when a node has consumed all its children. But now I'm wondering if this is actually a good idea, since there could be thousands of nodes and while reading the .NET source files I've found that a couple (and probably more) new objects are created with every ReadSubtree call, and no caching for reusable objects is made (that I'd seen).
Maybe ReadSubtree() was not meant to be used massively, or maybe I'm just worrying over nothing and I just need to call GC.Collect() after parsing the file...
Hope someone can shed some light on this.
Thanks in advance.
Update:
Thanks for the nice and insightful answers.
I had a deeper look at the .NET source code and I found it to be more complex than I first imagined. I've finally abandoned the idea of calling this function in this very scenario. As Stefan pointed out, the xml reader is never passed to outsiders and I can trust the code that parses the xml stream, (which is written by myself), so I'd rather force each node to be responsible for the amount of data they steal from the stream than using the not-so-thin-in-the-end ReadSubtree() function to just save a few lines of code.
A: Making the assumption that all objects are created on the normal managed heap, and not the large object heap (ie less than 85k), there really should be no problem here, this is just what the GC was designed to deal with.
I would suggest that there is also no need to call GC.Collect at the end of the process, as in almost all cases allowing the GC to schedule collections itself allows it to work in the optimal manner (see this blog post for a very detailed explanation of GC which explains this much better than I can).
A: ReadSubTree() gives you an XmlReader that wraps the original XmlReader. This new reader appears to consumers as a complete document. This might be important if the code you pass the subtree to thinks it is getting a standalone xml document. For example the Depth property of the new Reader starts out at 0. It is a pretty thin wrapper, so you won't be using any more resources than you would if you used the original XmlReader directly, In the example you gave, it is rather likely that you aren't really getting much out of the subtree reader.
The big advantage in your case would be that the subtree reader can't accidentally read past the subtree. Since the subtree reader isn't very expensive, that safety might be enough, though it is generally more helpful when you need the subtree to look like a document or you don't trust the code to only read its own subtree.
As Will noted, you never want to call GC.Collect(). It will never improve performance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Visual Studio setup problem - 'A problem has been encountered while loading the setup components. Canceling setup.' I've had a serious issue with my Visual Studio 2008 setup. I receive the ever-so-useful error 'A problem has been encountered while loading the setup components. Canceling setup.' whenever I try to uninstall, reinstall or repair Visual Studio 2008 (team system version). If I can't resolve this issue I have no choice but to completely wipe my computer and start again which will take all day long! I've recently received very strange errors when trying to build projects regarding components running out of memory (despite having ~2gb physical memory free at the time) which has rendered my current VS install useless.
Note I installed VS2005 shell version using the vs_setup.msi file in the SQL Server folder after I had installed VS2008, in order to gain access to the SQL Server 2005 Reporting Services designer in Business Intelligence Development Studio (this is inexplicably unavailable in VS2008).
Does anyone have any solutions to this problem?
P.S.: I know this isn't directly related to programming, however I feel this is appropriate to SO as it is directly related to my ability to program at all!
Note: A colleague found a solution to this problem, hopefully this should help others with this problem.
A: I had the same error message. For me it was happening because I was trying to run the installer from the DVD rather than running the installer from Add/Remove programs.
A: Sure enough, for me, it was the hotfixes. In Add/Remove Programs, check the "Show Updates" box, and remove ALL of the Hotfixes associated with your version of VS2008. Then try the "Change/Remove" button - it should now proceed without a hitch.
Well, it did for me, anyway... ;-)
A: I have Visual Studio Team System 2008 Development Edition, and had to remove all updates and Hotfixes:
*
*Update KB972221
*Hotfix KB973674
*Hotfix KB971091
Reboot, then the following Hotfix appeared, which I then removed as per @riaraos' answer:
*
*Hotfix KB952241
Before the Change/Remove would work!
Hope that helps someone else.
A: Uninstall the hotfixes related to VS2008 and then try again.
It worked for me and hopefully it will for you as well.
Thanks,
Zelalem
A: Remove the following hot fixes and updates
*
*Update KB972221
*Hotfix KB973674
*Hotfix KB971091
Restart the PC and try to uninstall now. This worked for me without problems.
A: Microsoft itself posted a KB article about this, and that article has a service pack that they claim fixes the problem. See below.
http://support.microsoft.com/kb/959417/
It took a while for the associated update to install itself, but once it did, I was able to run the Visual Studio setup successfully from the Add/Remove Programs control panel.
A: In my case, uninstalling from Add/Remove Programs didn't work. Instead, the problem was due to a hotfix recently installed through automatic updates. The hotfix to VS 2008 (in my case) has the number KB952241, so I uninstalled it using Add/Remove Programs with the "Show updates" option checked. After it was uninstalled, the problem was gone.
A: A colleague found this MS auto-uninstall tool which has successfully uninstalled VS2008 for me and saved me hours of work!!
Hopefully this might be useful to others. Doesn't speak highly of MS's faith in their usual VS maintenance tools that they have to provide this as well!
A: I encountered the same problem and found a very easy solution. Go to the following link:
http://msdn.microsoft.com/en-us/vs2008/bb968856.aspx
and run the VS AutoUninstall tool. This will automatically remove all the components of VS 2008.
Cheers
A: You should look for the MSI setup logs in the temp directory of your system. They will contain detailed information about why the setup failed.
I had a similar installation problem with Visual Studio 2008 which I was able to resolve by studying the logs.
A: I think this sort of question is entirely appropriate to the forum, especially if an easy solution can be found, as would save others hours of pain.
Unfortunately I don't have the solution, but would suggest (if you haven't already):
*
*Run FileMon to see if the installer is looking for specific files which are no longer there - this may give some clues.
*Painful, but try uninstalling other apps based upon the VS shell (eg 2005) first.
A: Thanks, riaraos, uninstalling KB952241 was the solution for me, too. Before doing that I tried to run the installer from "Programs and Features" and from the installation DVD without success. I did not want to completely remove the VS 2008 installation but only add a few components.
Notes on my system:
Windows 7 Beta 1
Visual Studio 2008 SP1
A: Okay, I had the same issues. First my VS2008 was acting up, so I tried to uninstall it and it didn't work... I read online that an AutoUninstall tool by MS would do the trick; it did just that, but left a lot of nasty files behind.
So I used "Windows Installer Clean Up" and cleaned up more stuff that had to do with VS,
then went back into Add and Remove in Control Panel and removed KB952241,
then opened up CCleaner and scanned the registry, found a lot of leftover junk from VS2008, and removed all of that.
Once that was done, I went ahead and launched the installer again from the CD and BAM, it's working.
I did all this without having to restart my PC.
Hope this helps people who are stuck like I was.
A: In my case, installing visual studio SP1 unbroke the uninstall/repair functionality.
A: Windows 7 suggested to "Uninstall using recommended settings" after hitting OK in the error message. It solved the problem.
A: Solution to this is
http://www.dotnetzone.gr/cs/forums/48758/ShowThread.aspx#48758
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "152"
}
|
Q: HTML Tag ClientID in a .NET Project If I want to manipulate an HTML tag's properties on the server within an aspx page based on a master page i.e.
<a href="#" runat="server" ID="myLink">My Link</a>
For example to give the link a different class depending on the current page i.e.
if (Path.GetFileName(Request.PhysicalPath) == "MyPage")
{
myLink.Attributes.Add("class","active");
}
.NET changes the ID property of the link to something like
<a href="#" ID="ctl00_foo_myLink">My Link</a>
Is there any way of stopping this from happening and keeping the original ID?
Thanks in advance
A: It's not possible directly. You could make a new control that inherits the link and override its ClientID to return directly the ID. But this seems to be overkill. You can simply use HTML markup and use <%# GetClass() %> to add the class when you need it.
Regarding the usage of ClientID for Javascript:
<a ID="myLink" runat="server">....
var ctrl = document.getElementById('<%# myLink.ClientID %>');
Of course you need a DataBind somewhere in the code.
A: AFAIK there is no way.
It shows the actual control tree, which is in this case masterpage-content-control.
However if you add an ID to the masterpage (this.ID = "whatever") then you will see "whatever" instead of ctl00 (which means control index 0).
A: I just discovered an easy way to do it if you are using jQuery.
var mangled_name = $("[id$=original_name]").attr("id");
So if you have a server control like this:
<asp:TextBox ID="txtStartDate" runat="server" />
You can get the mangled name in your jQuery/Javascript with:
var start_date_id = $("[id$=txtStartDate]").attr("id");
Since it is a string, you can also use it "inline" instead of assigning it, e.g. concatenating it with other elements for selectors.
A: I don't think so.
However, you probably want to keep the original name so that you can manipulate the control with javascript.... if that's the case, you should look into ClientID; it will return the assigned ID at runtime.
EDIT:
It looks like there is a way to do it, but with some work... take a look at ASP.NET Controls - Improving automatic ID generation : Architectural Changes ( Part 3)
I didn't read the full post, but it looks like he created his own container and own naming provider. I think that if you wanted to leave your control's name untouched, you would just return the name back in
public abstract string SetControlID(string name, System.Web.UI.Control control);
A: I never figured out how to prevent .NET from doing this, but what I did start doing was placing asp:Literals and using c# to add a WebControl to them. If you write a WebControl you can set various properties without it getting .NET's unique ID.
For instance:
<asp:Literal ID="myLink" />
...
WebControl a = new WebControl(HtmlTextWriterTag.A);
a.CssClass = "active";
a.Attributes["href"] = "#";
Literal text = new Literal();
text.Text = "click here";
a.Controls.Add(text);
myLink.Controls.Add(a);
Which will result in the following html:
<a href="#" class="active">click here</a>
Overall, a pretty dirty solution, but I didn't know what else to do at the time and I had a deadline to meet. ;)
A: Yep, exactly. We need to preserve the original ID for JavaScript and CSS reasons. I noticed the ClientID property, but it's read-only and can't be assigned, so I'm not sure how I would use it.
A: This blog post could be helpful:
Remove Name Mangling from ASP.Net Master Pages (get your ID back!)
The solution in the post overwrites the .getElementById method, allowing you to search for the element just using the normal ID. Maybe you can adapt it to your needings.
Disclaimer: That ASP.NET behaviour is by design and it is good that way, even for HTML Server Controls like the control in your example. Server-ID and Client-ID are two very different things, which should not be mixed. Using Master Pages or User Controls you could end with identical Client-IDs if there wasn't that mechanism.
A: Why not just use a standard a tag and use render blocks to modify it? Then you have its behavior locked in on the client.
A: ASP.NET 4 has property ClientIDMode that controls ClientID rendering.
Also, there is a trick how to disable ClientID rendering for any non-postback control (ASP.NET 2.0+).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: IIS Error: Cannot connect to the configuration database After rebooting the Windows 2003 Server with the IIS I receive the follow error message:
Cannot connect to the configuration database
in the browser. The error occurs on different pages and with different clients, so I think it is a server problem.
What can I do to solve this error?
A: A quick web search suggests that this error message is probably coming from SharePoint Services, and indicates that SharePoint cannot connect to its database.
There seem to be several reasons suggested:
*
*The SQL database is not running, has been removed, or you otherwise can't connect to it (firewall, login credentials, network failure)
*IIS is running in the wrong mode
The latter could be IIS 6.0 configured for IIS 5.0 compatibility mode, or the application pool configured for 32-bit worker processes on a 64-bit system.
A: I had the same problem, and it happened just after a Windows update, hmm...
First of all, someone (Windows Update) had changed the user account on the service "Windows Internal Database (MICROSOFT##SSEE)". I changed it back to the right account and WSS started to work, but with an error (Application error or something).
This new problem was something I got for free after I had run the Exchange Analyzer tool and made some modifications to my system that were recommended by the tool.
When I changed my web.config to look like this (c:/inetpub/wss-dir/web.config):
<!-- Web.Config Configuration File -->
<configuration>
<system.web>
<customErrors mode="RemoteOnly"/>
</system.web>
</configuration>
I discovered what the problem was: it was an access/security issue. The error message told me:
Access to the path "C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\root\8c91a6b5\649b28ba" is denied.
But this was not the whole truth... the access denied was not for the ..\Temporary ASP.NET Files\root\8c91a6b5\649b28ba folder, it was for the %TEMP% folder, which I had just moved due to a suggestion from the Exchange Analyzer Tool.
Have a nice day!
A: I had that problem with SharePoint; I used this to fix the SQL database:
http://heresjaken.com/sharepoint-cannot-connect-to-configuration-database-error-after-installing-kb2687442-update-on-sbs-server-2008/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Using JET with EMF I need to run JET templates on EMF model metadata - i.e. the model itself (not the data) is input to my JET template.
More practically, I want to generate non-Java code based on EMF models.
How I do it?
Thank you
A: I'm not sure I get you right, but you can pass your model just like any other object into the JET template (as described in the JET tutorial). Also, it makes no difference if you generate Java or any other text with JET. As an additional pointer, you might want to consider using Xpand (part of openArchitectureWare) for very comfortable model to text generation (including things like content assist for your model in the template editor).
A: For code generation, you could use Acceleo. Like Xpand, it offers very comfortable model-to-text generation (the Acceleo language is very intuitive for model browsing) and it is also less painful than JET.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Executing JavaScript on page load selectively Mending a bug in our SAP BW web application, I need to call two javascript functions from the web framework library upon page load. The problem is that each of these functions reloads the page as a side-effect. In addition, I don't have access to modify these functions.
Any great ideas on how to execute a piece of code on "real" page load, then another piece of code on the subsequent load caused by this function, and then execute no code the third reload?
My best idea so far it to set a cookie on each go to determine what to run. I don't greatly love this solution. Anything better would be very welcome. And by the way, I do realize loading a page three times is absolutely ridiculous, but that's how we roll with SAP.
A: A cookie would work just fine. Or you could modify the query string each time with a "mode=x" or "load=x" parameter.
This would present a problem if the user tries to bookmark the final page, though. If that's an option, the cookie solution is fine. I would guess they need cookies enabled to get that far in the app anyway?
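The query-string variant can be sketched with the standard URL API. The parameter name `mode` comes from the answer above; the staging logic and function names are assumptions:

```javascript
// Build the URL for the next load stage by setting a mode=x parameter.
function nextModeUrl(currentUrl, nextMode) {
  const u = new URL(currentUrl);
  u.searchParams.set('mode', String(nextMode));
  return u.toString();
}

// On load (browser wiring, assumed):
// const mode = Number(new URL(location.href).searchParams.get('mode') || 0);
// if (mode === 0) runFirstFunction();       // real page load
// else if (mode === 1) runSecondFunction(); // reload caused by the first call
// // before each framework call, point the reload target at
// // nextModeUrl(location.href, mode + 1) so the next load knows its stage.
```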
A: A cookie, or pass a query string parameter indicating which javascript function has been run. We had to do something along these lines to trip out a piece of our software. That's really the best I got.
A: Use a cookie or set a hidden field value. My vote would be for the field value.
A: This might be a cute case for using window.name, 'the property that survives page reloads'.
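Since window.name survives page reloads within the same tab, it can carry a small load counter across the two forced reloads. A sketch; the `stage:` encoding and the framework function names are assumptions:

```javascript
// Parse the previous stage out of window.name and compute the next one.
// An unrecognized or empty name means this is the real (first) page load.
function nextStage(windowName) {
  const m = /^stage:(\d+)$/.exec(windowName || '');
  const stage = m ? Number(m[1]) + 1 : 0;
  return { stage, name: 'stage:' + stage };
}

// Browser wiring (assumed):
// const s = nextStage(window.name);
// window.name = s.name;
// if (s.stage === 0) callFirstFrameworkFunction();       // real page load
// else if (s.stage === 1) callSecondFrameworkFunction(); // first forced reload
// // s.stage >= 2: second forced reload, run nothing
```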
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: When does ++ not produce the same results as +1? The following two C# code snippets produce different results (assuming the variable level is used both before and after the recursive call). Why?
public DoStuff(int level)
{
// ...
DoStuff(level++);
// ...
}
,
public DoStuff(int level)
{
// ...
DoStuff(level+1);
// ...
}
After reading some of the responses below I thought it would be worthwhile posting the stack traces for level++, ++level and level+1 to highlight how deceiving this problem is.
I've simplified them for this post. The recursive call sequence starts with DoStuff(1).
// level++
DoStuff(int level = 1)
DoStuff(int level = 2)
DoStuff(int level = 2)
DoStuff(int level = 2)
// ++level
DoStuff(int level = 4)
DoStuff(int level = 4)
DoStuff(int level = 3)
DoStuff(int level = 2)
// level+1
DoStuff(int level = 4)
DoStuff(int level = 3)
DoStuff(int level = 2)
DoStuff(int level = 1)
A: To clarify all the other responses:
+++++++++++++++++++++
DoStuff(a++);
Is equivalent to:
DoStuff(a);
a = a + 1;
+++++++++++++++++++++
DoStuff(++a);
Is equivalent to:
a = a + 1;
DoStuff(a);
+++++++++++++++++++++
DoStuff(a + 1);
Is equivalent to:
b = a + 1;
DoStuff(b);
+++++++++++++++++++++
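The three equivalences above can be run directly. The question is C#, but JavaScript's ++ operators share the same pre/post semantics, so this sketch traces identically; the helper names are illustrative only:

```javascript
// Show what value gets passed to the call vs. what the variable holds after.
function trace(kind) {
  let level = 1;
  let passed;
  const doStuff = v => { passed = v; };
  if (kind === 'post') doStuff(level++);     // passes 1, then level becomes 2
  else if (kind === 'pre') doStuff(++level); // level becomes 2, passes 2
  else doStuff(level + 1);                   // passes 2, level stays 1
  return { passed, after: level };
}
```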
A: Because the first example is really equivalent to:
public DoStuff(int level)
{
// ...
int temp = level;
level = level + 1;
DoStuff(temp);
// ...
}
Note that you can also write ++level; that would be equivalent to:
public DoStuff(int level)
{
// ...
level = level + 1;
DoStuff(level);
// ...
}
It's best not to overuse the ++ and -- operators in my opinion; it quickly gets confusing and/or undefined what's really happening, and modern C++ compilers don't generate more efficient code with these operators anyway.
A: level++ will pass level into DoStuff and then increment level for use in the rest of the function. This could be a fairly nasty bug as the recursion will never end (from what is shown DoStuff is always being passed the same value). Perhaps ++level is meant instead, as this is the opposite of level++ (increments level and passes the incremented value into DoStuff)?
level+1 will pass level+1 into DoStuff and leave level unchanged for the rest of the function.
A: The first is using the value in level and THEN incrementing it.
The latter is using level+1 as a passed variable.
A: The return value of level++ will be level, and it will therefore pass level into DoStuff. This could be a fairly nasty bug, as the recursion will never end (from what is shown, DoStuff is always being passed the same value). Perhaps ++level or level + 1 is meant instead?
level + 1 will pass level + 1 into DoStuff and leave level unchanged for the rest of the function.
The post-increment operator (variable++) is precisely equivalent to the function
int post_increment(ref int value)
{
int temp = value;
value = value + 1;
return temp;
}
while the pre-increment operator (++variable) is precisely equivalent to the function
int pre_increment(ref int value)
{
value = value + 1;
return value;
}
Therefore, if you expand the operator inline into the code, the operators are equivalent to:
DoStuff(a + 1)
int temp = a + 1;
DoStuff(temp);
DoStuff(++a)
a = a + 1;
DoStuff(a);
DoStuff(a++);
int temp = a;
a = a + 1;
DoStuff(temp);
It is important to note that post-increment is not equivalent to:
DoStuff(a);
a = a + 1;
Additionally, as a point of style, one shouldn't increment a value unless the intention is to use the incremented value (a specific version of the rule, "don't assign a value to a variable unless you plan on using that value"). If the value i + 1 is never used again, then the preferred usage should be DoStuff(i + 1) and not DoStuff(++i).
A: level++ returns the current value of level, then increments level.
level+1 doesn't change level at all, but DoStuff is called with the value of (level + 1).
A: public DoStuff(int level)
{
// DoStuff(level);
DoStuff(level++);
// level = level + 1;
// here, level's value is 1 greater than when it came in
}
It actually increments the value of level.
public DoStuff(int level)
{
// int iTmp = level + 1;
// DoStuff(iTmp);
DoStuff(level+1);
// here, level's value hasn't changed
}
doesn't actually increment the value of level.
Not a huge problem before the function call, but after the function call, the values will be different.
A: In level++ you are using the postfix operator. This operator works after the variable is used; that is, after the value is put on the stack for the called function, the variable is incremented. On the other hand, level + 1 is a simple mathematical expression: it is evaluated and the result is passed to the called function.
If you want to increment the variable first and then pass it to called function, you can use prefix operator: ++level
A: The first code snippet uses the post-operation increment operator, so the call is made as DoStuff(level);. If you want to use an increment operator here, use DoStuff(++level);.
A: level+1 sends whatever level+1 is to the function.
level++ sends level to the function and then increments it.
You could do ++level and that would likely give you the results you want.
A: The first example uses the value of 'level', increments the value, and updates 'level'.
The second example uses the value of 'level' plus 1 but does not change the content of 'level'.
So, depending on what you are wanting to do here, there could be some surprises in store!
A: Whilst it is tempting to rewrite as:
DoStuff(++level);
I personally think this is less readable than incrementing the variable prior to the method call. As noted by a couple of the answers above, the following would be clearer:
level++;
DoStuff(level);
A: When you use a language that allows operator overloading, and '+ <integer>' has been defined to do something other than post- and prefix '++'.
Then again, I have only seen such abominations in school projects*, if you encounter that in the wild you probably have a really good, well-documented, reason.
[* a stack of integers, if I'm not mistaken. '++' and '--' pushed and popped, while '+' and '-' performed normal arithmetics]
A: To put it in the simplest way, ++var is a prefix operator and will increment the variable before the rest of the expression is evaluated. var++, a postfix operator, increments the variable after the rest of the expression is evaluated. And as others have mentioned, of course, var+1 just creates a temporary variable (separate in memory) which is initialized with var and incremented by the constant 1.
A: As far as my experience goes, the parameter expression is evaluated first and yields the value of level.
The variable itself is incremented before the function is called, because the compiler doesn't care whether you are using the expression as a parameter or otherwise... All it knows is that it should increment the value and that the old value is the result of the expression.
However, in my opinion, code like this is really sloppy; by trying to be clever, it makes you have to think twice about what is really happening.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|